
Hive first-startup problems



Key point

Key point: before starting Hive, start ZooKeeper first, then HDFS, then initialize the Hive metastore; and make sure the configuration files are set up correctly.
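The order above can be written as a short runbook. This is only a sketch: `zkStart-all.sh` is this cluster's own wrapper script (it appears again in Problem 4 below), `start-dfs.sh` and `schematool` are the standard Hadoop/Hive commands, so adjust names and paths to your environment:

```shell
zkStart-all.sh start                   # 1. start ZooKeeper on all nodes first
start-dfs.sh                           # 2. then start HDFS (NameNodes, DataNodes, ZKFC)
schematool -dbType mysql -initSchema   # 3. initialize the Hive metastore (first startup only)
hive                                   # 4. only now start the Hive CLI
```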

Open questions

Why does every setup require `cp /opt/dtc/software//flume/conf/flume-env.sh.template /opt/dtc/software/flume/conf/flume-env.sh`?
What is the difference between `hadoop-daemon.sh start zkfc` and `zkServer.sh`?
Problem 1:

Mon Mar 07 23:28:09 PST 2022 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
Mon Mar 07 23:28:10 PST 2022 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.

Cause: per MySQL's requirements, the server tries to verify the SSL connection at login. This is only a warning; to get rid of it you need to add an option.
Solution:
In Hive's configuration file /hive/conf/hive-site.xml, modify javax.jdo.option.ConnectionURL by
appending useSSL=false to the end of the connection URL:


<property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://a:3306/metastore?characterEncoding=UTF-8&createDatabaseIfNotExist=true&useSSL=false</value>
</property>

Problem 2:

[Fatal Error] hive-site.xml:505:90: The reference to entity "createDatabaseIfNotExist" must end with the ';' delimiter.
Exception in thread "main" java.lang.RuntimeException: org.xml.sax.SAXParseException; systemId: file:/opt/dtc/software/hive/conf/hive-site.xml; lineNumber: 505; columnNumber: 90; The reference to entity "createDatabaseIfNotExist" must end with the ';' delimiter.
        at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2860)
        at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2706)
        at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2579)
        at org.apache.hadoop.conf.Configuration.get(Configuration.java:1350)
        at org.apache.hadoop.hive.conf.HiveConf.getVar(HiveConf.java:3558)
        at org.apache.hadoop.hive.conf.HiveConf.getVar(HiveConf.java:3622)
        at org.apache.hadoop.hive.conf.HiveConf.initialize(HiveConf.java:3709)
        at org.apache.hadoop.hive.conf.HiveConf.<init>(HiveConf.java:3652)
        at org.apache.hadoop.hive.common.LogUtils.initHiveLog4jCommon(LogUtils.java:82)
        at org.apache.hadoop.hive.common.LogUtils.initHiveLog4j(LogUtils.java:66)
        at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:657)
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:641)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:234)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
Caused by: org.xml.sax.SAXParseException; systemId: file:/opt/dtc/software/hive/conf/hive-site.xml; lineNumber: 505; columnNumber: 90; The reference to entity "createDatabaseIfNotExist" must end with the ';' delimiter.
        at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
        at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
        at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:150)
        at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2684)
        at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2672)
        at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2743)
        ... 17 more

Cause: this comes from XML's encoding rules: & is a reserved character in XML and must be written as the entity &amp;.

Solution:
In Hive's configuration file /hive/conf/hive-site.xml, change every bare & in the connection URL to &amp;:


<property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://a:3306/metastore?characterEncoding=UTF-8&amp;createDatabaseIfNotExist=true&amp;useSSL=false</value>
</property>
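A quick way to locate the offending line (the parse error reports line 505) is to list the lines that contain an ampersand and then drop the ones that already use the &amp; entity. A minimal sketch on a throwaway sample file:

```shell
# Build a two-line sample: line 1 is properly escaped, line 2 has a bare '&'.
printf '%s\n' '<value>a=1&amp;b=2</value>' '<value>a=1&b=2</value>' > /tmp/demo.xml
# List lines containing '&', then filter out the ones using the '&amp;' entity.
grep -n '&' /tmp/demo.xml | grep -v 'amp;'
```

On the real file this would be `grep -n '&' /opt/dtc/software/hive/conf/hive-site.xml | grep -v 'amp;'`; note the filter is approximate and misses lines that mix escaped and unescaped ampersands.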

Problem 3:

Exception in thread "main" java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: ${system:java.io.tmpdir%7D/$%7Bsystem:user.name%7D
        at org.apache.hadoop.fs.Path.initialize(Path.java:254)
        at org.apache.hadoop.fs.Path.<init>(Path.java:212)
        at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:644)
        at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:563)
        at org.apache.hadoop.hive.ql.session.SessionState.beginStart(SessionState.java:531)
        at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:705)
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:641)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:234)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
Caused by: java.net.URISyntaxException: Relative path in absolute URI: ${system:java.io.tmpdir%7D/$%7Bsystem:user.name%7D
        at java.net.URI.checkPath(URI.java:1823)
        at java.net.URI.<init>(URI.java:745)
        at org.apache.hadoop.fs.Path.initialize(Path.java:251)
        ... 12 more

Cause: placeholders of this form are meant to be read by the Java program through the System class, for example:
System.getProperty("java.io.tmpdir");
Solution: in the configuration file, simply remove the system: prefix from values like ${system:java.io.tmpdir}, changing them to ${java.io.tmpdir}, so the Java program can read the property directly.
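The replacement can be done in bulk with sed. A hypothetical sketch on a sample snippet (on the real cluster, point it at hive-site.xml after making a backup):

```shell
# Sample line using the problematic ${system:...} placeholders.
printf '%s\n' '<value>${system:java.io.tmpdir}/${system:user.name}</value>' > /tmp/snippet.xml
# Drop the 'system:' prefix so the placeholders become ${java.io.tmpdir} etc.
sed -i 's/\${system:/\${/g' /tmp/snippet.xml
cat /tmp/snippet.xml
```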
Problem 4:

[root@a bin]# hdfs haadmin -transitionToActive nn1
Automatic failover is enabled for NameNode at b/192.168.80.129:8020
Refusing to manually manage HA state, since it may cause
a split-brain scenario or other incorrect state.
If you are very sure you know what you are doing, please
specify the --forcemanual flag.

Cause: ZKFC automatic failover is enabled, so the active NameNode can no longer be switched manually; ZKFC automatically elects which NameNode becomes active.
Solution:
First stop ZooKeeper and HDFS, then:
zkStart-all.sh start
hadoop-daemon.sh start zkfc
start-all.sh
# check the NameNode state
hdfs haadmin -getServiceState nn1

Reprinted from www.mshxw.com; original article: https://www.mshxw.com/it/762105.html