1. Drag the downloaded Hive installation package into /opt/software/
Extract Hive into the /opt/module directory:
cd /opt/software/
tar -zxvf apache-hive-3.1.2-bin.tar.gz -C /opt/module/
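The `-C` flag tells `tar` to extract into the given directory instead of the current one. A throwaway demo of the flag (temporary paths only, not the Hive tarball):

```shell
# Build a tiny archive, then extract it elsewhere with -C (demo paths only)
mkdir -p /tmp/tar-demo/src/pkg /tmp/tar-demo/dst
echo hello > /tmp/tar-demo/src/pkg/file.txt
tar -czf /tmp/tar-demo/pkg.tar.gz -C /tmp/tar-demo/src pkg
tar -zxf /tmp/tar-demo/pkg.tar.gz -C /tmp/tar-demo/dst
cat /tmp/tar-demo/dst/pkg/file.txt
```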
2. Edit the system environment variables
vi /etc/profile
Add the following lines in the editor:
export HIVE_HOME=/opt/module/apache-hive-3.1.2-bin
export PATH=$PATH:$HADOOP_HOME/sbin:$HIVE_HOME/bin
Reload the environment configuration:
source /etc/profile
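After sourcing, the exports can be verified in the current shell. A self-contained check (it repeats the two exports from /etc/profile so it runs standalone):

```shell
# Re-apply the two exports and confirm they took effect
export HIVE_HOME=/opt/module/apache-hive-3.1.2-bin
export PATH=$PATH:$HADOOP_HOME/sbin:$HIVE_HOME/bin
echo "$HIVE_HOME"
case ":$PATH:" in
  *":$HIVE_HOME/bin:"*) echo "PATH contains hive bin" ;;
  *) echo "PATH is missing hive bin" ;;
esac
```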
3. Edit Hive's own environment settings
cd /opt/module/apache-hive-3.1.2-bin/bin/
Edit the hive-config.sh file:
vi hive-config.sh
Add the following lines:
export JAVA_HOME=/opt/module/jdk1.8.0_212
export HIVE_HOME=/opt/module/apache-hive-3.1.2-bin
export HADOOP_HOME=/opt/module/hadoop-3.2.0
export HIVE_CONF_DIR=/opt/module/apache-hive-3.1.2-bin/conf
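A quick way to spot a typo in those paths before starting Hive is to check that each configured directory actually exists (same paths as above; adjust to your layout):

```shell
# Collect any configured path that does not exist on this machine
MISSING=""
for d in /opt/module/jdk1.8.0_212 \
         /opt/module/apache-hive-3.1.2-bin \
         /opt/module/hadoop-3.2.0; do
  [ -d "$d" ] || MISSING="$MISSING $d"
done
if [ -z "$MISSING" ]; then echo "all paths exist"; else echo "missing:$MISSING"; fi
```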
4. Copy the Hive configuration template
cd /opt/module/apache-hive-3.1.2-bin/conf/
cp hive-default.xml.template hive-site.xml
5. Edit the Hive configuration file (hive-site.xml), locating each of the properties below and changing its value accordingly
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.cj.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>root</value>
  <description>Username to use against metastore database</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>123456</value>
  <description>password to use against metastore database</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://192.168.1.100:3306/hive?useUnicode=true&amp;characterEncoding=utf8&amp;useSSL=false&amp;serverTimezone=GMT</value>
  <description>JDBC connect string for a JDBC metastore. To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL. For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.</description>
</property>
<property>
  <name>datanucleus.schema.autoCreateAll</name>
  <value>true</value>
  <description>Auto creates necessary schema on a startup if one doesn't exist. Set this to false, after creating it once. To enable auto create also set hive.metastore.schema.verification=false. Auto creation is not recommended for production use cases, run schematool command instead.</description>
</property>
<property>
  <name>hive.metastore.schema.verification</name>
  <value>false</value>
  <description>Enforce metastore schema version consistency. True: Verify that version information stored in the metastore is compatible with one from Hive jars. Also disable automatic schema migration attempt. Users are required to manually migrate schema after Hive upgrade which ensures proper metastore schema migration. (Default) False: Warn if the version information stored in metastore doesn't match with one from in Hive jars.</description>
</property>
<property>
  <name>hive.exec.local.scratchdir</name>
  <value>/opt/module/apache-hive-3.1.2-bin/tmp/${user.name}</value>
  <description>Local scratch space for Hive jobs</description>
</property>
<property>
  <name>system:java.io.tmpdir</name>
  <value>/opt/module/apache-hive-3.1.2-bin/iotmp</value>
</property>
<property>
  <name>hive.downloaded.resources.dir</name>
  <value>/opt/module/apache-hive-3.1.2-bin/tmp/${hive.session.id}_resources</value>
  <description>Temporary local directory for added resources in the remote file system.</description>
</property>
<property>
  <name>hive.querylog.location</name>
  <value>/opt/module/apache-hive-3.1.2-bin/tmp/${system:user.name}</value>
  <description>Location of Hive run time structured log file</description>
</property>
<property>
  <name>hive.server2.logging.operation.log.location</name>
  <value>/opt/module/apache-hive-3.1.2-bin/tmp/${system:user.name}/operation_logs</value>
  <description>Top level directory where operation logs are stored if logging functionality is enabled</description>
</property>
<property>
  <name>hive.metastore.db.type</name>
  <value>mysql</value>
  <description>Expects one of [derby, oracle, mysql, mssql, postgres]. Type of database used by the metastore. Information schema &amp; JDBCStorageHandler depend on it.</description>
</property>
<property>
  <name>hive.cli.print.current.db</name>
  <value>true</value>
  <description>Whether to include the current database in the Hive prompt.</description>
</property>
<property>
  <name>hive.cli.print.header</name>
  <value>true</value>
  <description>Whether to print the names of the columns in query output.</description>
</property>
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/opt/hive/warehouse</value>
  <description>location of default database for the warehouse</description>
</property>
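Several of these scratch and log properties point under the install tree, and those directories are not always created automatically, so it can help to create them up front (same paths as in the config; requires write access to /opt):

```shell
# Pre-create the scratch/log directories referenced in hive-site.xml
HIVE_HOME=/opt/module/apache-hive-3.1.2-bin
mkdir -p "$HIVE_HOME/tmp" "$HIVE_HOME/iotmp"
ls -d "$HIVE_HOME/tmp" "$HIVE_HOME/iotmp"
```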
6. Upload the MySQL driver jar into the /opt/module/apache-hive-3.1.2-bin/lib/ directory
The driver is distributed as mysql-connector-java-8.0.15.zip; unzip it and copy the jar found inside into lib/.
7. Make sure the MySQL server contains a database named hive
mysql> create database hive;   # don't forget the trailing ; after hive
8. Initialize the metastore database
schematool -dbType mysql -initSchema
9. Start the cluster services
start-all.sh    # on Hadoop100
start-yarn.sh   # on Hadoop101
10. Start Hive
hive
11. Check that it started successfully
show databases;



