1. Copy the server's hive conf/hive-site.xml into the project's resources directory.
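With hive-site.xml on the classpath, a Spark application reaches the cluster's Hive metastore by enabling Hive support on the SparkSession. A minimal sketch (the app name and the test query are placeholders, not from the original notes):

```scala
import org.apache.spark.sql.SparkSession

// hive-site.xml placed in src/main/resources is picked up from the classpath,
// so this session connects to the Hive metastore configured there.
val spark = SparkSession.builder()
  .appName("hive-demo")   // placeholder app name
  .master("local[*]")     // run locally from IDEA
  .enableHiveSupport()    // required to read/write Hive tables
  .getOrCreate()

// quick smoke test against the metastore
spark.sql("show databases").show()
```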
2. Add the pom dependency:

   <dependency>
       <groupId>org.apache.spark</groupId>
       <artifactId>spark-hive_2.11</artifactId>
       <version>${spark.version}</version>
   </dependency>
3. Possible problem:
   org/apache/tez/dag/api/SessionNotRunning when creating Hive client using cla
   Fix: in hive-site.xml, change hive.execution.engine from tez to mr:

   <property>
       <name>hive.execution.engine</name>
       <value>mr</value>
   </property>
4. Possible problem:
   native snappy library not available: this version of libhadoop was built wit
   Fix:
   Download the snappy-enabled Hadoop native dependencies and extract them as administrator.
   Then make hadoop2.7/bin available to the local environment that IDEA runs in.
   Link: https://pan.baidu.com/s/1BAL6h34CSHagvEnhmqT7Qw
   Extraction code: ezyt
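Making hadoop2.7/bin visible usually means pointing the standard Hadoop environment variables at the extracted directory. A sketch of the Windows settings, assuming the archive was unpacked to D:\hadoop2.7 (adjust the path to your own location):

```
HADOOP_HOME = D:\hadoop2.7
PATH        = %PATH%;%HADOOP_HOME%\bin
```

Restart IDEA after changing the variables so the new values are picked up.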
5. Possible problem:
   Caused by: java.lang.RuntimeException: native snappy library not available: SnappyCompressor has not been loaded.
   Fix:
   Open cmd.exe and run:
   hadoop                   // check that the hadoop command works
   hadoop checknative -a    // check that snappy is true and the library path is correct;
                            // if not, fix the environment variables and restart IDEA



