Article from: https://www.cnblogs.com/lay2017/p/9973298.html

Hive is a data warehouse tool built on top of Hadoop; it exists to make querying and analyzing large data sets more convenient. Hive provides a simple SQL-like query language and ultimately converts queries into MapReduce tasks for execution.

I. Environment

  • JDK 1.8+: a recent JDK is officially recommended, otherwise there may be incompatibilities between versions.
  • Hadoop 2.0+: Hive 2.0+ no longer supports Hadoop 1.x.
  • Linux or Windows can both serve as the production environment; macOS is usually used only as a development environment.

This article uses the CentOS 7 system, JDK 1.8, and Hadoop 2.9.
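If you want to confirm that your environment matches, a quick sanity check might look like this (the exact version strings will vary):

java -version            # expect a 1.8.x JDK
hadoop version           # expect Hadoop 2.9.x
cat /etc/redhat-release  # expect CentOS Linux release 7.x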

JDK installation reference: https://www.cnblogs.com/lay2017/p/7442217.html

Hadoop single-node installation reference: https://www.cnblogs.com/lay2017/p/9912381.html

HDFS single-node installation and configuration: https://www.cnblogs.com/lay2017/p/9919905.html

In the articles above, we installed the JDK and Hadoop and configured HDFS in Hadoop. Hive depends on the JDK and Hadoop environment and will store its data in HDFS.

At the same time, for simplicity, we use a single-node installation and configure only HDFS, without configuring MapReduce or YARN.
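One small note: Hive keeps its table data under a warehouse directory in HDFS. The Apache Hive getting-started guide suggests creating the default scratch and warehouse directories roughly as follows (these paths are Hive's defaults; adjust them if your configuration differs):

hdfs dfs -mkdir -p /tmp
hdfs dfs -mkdir -p /user/hive/warehouse
hdfs dfs -chmod g+w /tmp
hdfs dfs -chmod g+w /user/hive/warehouse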

Also, we do not use a standalone MySQL or Derby instance for metadata storage; we simply use the default built-in Derby database. The built-in database allows only one connection at a time, so a production environment usually uses a standalone database such as MySQL.
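For reference only (this tutorial does not do it), switching the metastore to MySQL typically means setting the JDO connection properties in conf/hive-site.xml; the host, database name, driver, and credentials below are placeholders:

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hive</value>
</property>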

With that, the basic environment for Hive is in place; next we install and configure Hive itself.

 

II. Hive Installation and Configuration

Let’s first create a Hive directory and then enter it.

mkdir -p /usr/local/hadoop/hive
cd /usr/local/hadoop/hive

This article uses Hive version 1.2.2. Download the tar package; this may take a while.

wget http://mirrors.hust.edu.cn/apache/hive/hive-1.2.2/apache-hive-1.2.2-bin.tar.gz

Then extract the archive:

tar -zxvf apache-hive-1.2.2-bin.tar.gz

You can now see the extracted directory, apache-hive-1.2.2-bin.

Next, we need to configure environment variables for Hive.

Of course, also make sure that the Hadoop and JDK configurations you set up earlier are still in place.
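As a rough sketch, the entries added to /etc/profile might look like this (the HIVE_HOME path matches the directory layout used in this article; your JAVA_HOME and HADOOP_HOME values will differ depending on where you installed them):

export HIVE_HOME=/usr/local/hadoop/hive/apache-hive-1.2.2-bin
export PATH=$PATH:$HIVE_HOME/bin
# JAVA_HOME and HADOOP_HOME should already be exported from the earlier setup, e.g.:
# export JAVA_HOME=/usr/local/java/jdk1.8.0_xxx
# export HADOOP_HOME=/usr/local/hadoop/hadoop-2.9.x
# export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin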

Use the following command to make the configuration take effect:

source /etc/profile

Now enter the Hive directory:

cd /usr/local/hadoop/hive/apache-hive-1.2.2-bin

Start the Hive shell and take a look (please remember to start HDFS first with start-dfs.sh).
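A minimal session might look like this (assuming HDFS is running and the environment variables above are in effect):

start-dfs.sh             # make sure HDFS is running first
bin/hive                 # launch the Hive shell from the apache-hive-1.2.2-bin directory
hive> show databases;    # inside the shell; should list at least "default"
hive> quit;              # leave the shell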

If the hive> prompt appears, we have configured Hive successfully; exit the shell when done.

Back in the shell, we can see that a metastore_db directory has been created under the current directory.
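A quick listing (the exact contents may vary slightly by version) shows the newly created files alongside the distribution:

ls /usr/local/hadoop/hive/apache-hive-1.2.2-bin
# bin  conf  lib  ...  derby.log  metastore_db  ...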

This means that your metadata is stored in the apache-hive-1.2.2-bin directory, and next time you have to start Hive from this same directory. If you start Hive elsewhere, you will be surprised to find that your tables are gone, because a new, empty metastore is created in whatever directory Hive is launched from.
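To illustrate the pitfall (purely as a hypothetical demonstration), launching Hive from another directory creates a fresh, empty metastore there:

cd /tmp
/usr/local/hadoop/hive/apache-hive-1.2.2-bin/bin/hive   # tables created earlier will not be visible
# a brand-new metastore_db and derby.log now appear under /tmp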

To summarize, we installed and configured Hive in its simplest form, successfully started the Hive shell, and the metastore_db was created automatically.

 
