When connecting Spark to Hive from IDEA, the following error is reported: Unrecognized Hadoop major version number: 3.1.1
The pom dependencies are as follows:
<project xmlns="http://maven.apache.org/POM/4.0.0">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.yuwang</groupId>
    <artifactId>spark</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <spark.version>2.3.2</spark.version>
        <mysql.java.connector.version>8.0.22</mysql.java.connector.version>
        <hadoop.version>3.1.1</hadoop.version>
        <gson.version>2.8.6</gson.version>
        <!-- two further properties with values 4.0 and 4.16; their names were lost -->
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-hive_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>${mysql.java.connector.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>${hadoop.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
            <version>${hadoop.version}</version>
            <exclusions>
                <exclusion>
                    <groupId>com.google.protobuf</groupId>
                    <artifactId>protobuf-java</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>com.google.code.gson</groupId>
            <artifactId>gson</artifactId>
            <version>${gson.version}</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.8.1</version>
                <configuration>
                    <source>${java.version}</source>
                    <target>${java.version}</target>
                    <fork>true</fork>
                    <compilerVersion>${java.version}</compilerVersion>
                </configuration>
            </plugin>
            <plugin>
                <groupId>net.alchim31.maven</groupId>
                <artifactId>scala-maven-plugin</artifactId>
                <version>4.3.1</version>
                <configuration>
                    <scalaVersion>2.12.11</scalaVersion>
                    <scalaCompatVersion>2.12.11</scalaCompatVersion>
                </configuration>
                <executions>
                    <execution>
                        <id>scala-compile-first</id>
                        <phase>process-resources</phase>
                        <goals>
                            <goal>add-source</goal>
                            <goal>compile</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-resources-plugin</artifactId>
                <version>3.1.0</version>
            </plugin>
            <plugin>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.6.1</version>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                </configuration>
            </plugin>
            <plugin>
                <artifactId>maven-assembly-plugin</artifactId>
                <configuration>
                    <descriptorRefs>
                        <descriptorRef>jar-with-dependencies</descriptorRef>
                    </descriptorRefs>
                </configuration>
                <executions>
                    <execution>
                        <id>make-assembly</id>
                        <phase>package</phase>
                        <goals>
                            <goal>single</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>
Cause: HDP 3.1.5 ships Hadoop 3.1 together with Spark 2.3. When I run Spark 2.3 from IDEA, it does not recognize Hadoop 3.1. Looking into the source code shows:
The Spark 2.3 source has no branch for Hadoop 3.x, and digging further reveals that the version information it checks is read from the common-version-info.properties file.
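The error itself is thrown by the Hive 1.2 shim layer bundled with Spark 2.3: it asks Hadoop's VersionInfo for the version string and only accepts the major versions it knows about. Below is a simplified Scala sketch of that check (the object name HadoopVersionCheck is made up here; the real code is Java inside Hive's ShimLoader and is not reproduced verbatim):

import org.apache.hadoop.util.VersionInfo

// Simplified sketch of the check in Hive 1.2's ShimLoader
// (the Hive build bundled with Spark 2.3); not the verbatim source.
object HadoopVersionCheck {
  def hadoopMajorVersion(): String = {
    // VersionInfo reads the version string from common-version-info.properties
    val vers = VersionInfo.getVersion
    vers.split("\\.")(0) match {
      case "2" => "2"   // Hadoop 2.x is the newest major line the shim recognizes
      case _   =>       // a 3.x version string ends up here
        throw new IllegalArgumentException(
          "Unrecognized Hadoop major version number: " + vers)
    }
  }
}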
Solution:
Add a common-version-info.properties file to the classpath and specify the version in it (the fact that the Hadoop dependency in my pom is 3.x does not interfere with this). The content is as follows:
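A minimal illustrative version of the file (the 2.7.3 value is an assumption; any Hadoop 2.x version string that the bundled Hive shim accepts should work), placed under src/main/resources so that it ends up on the classpath:

# src/main/resources/common-version-info.properties
# Illustrative content only: report a Hadoop 2.x version so the Hive shim check passes.
version=2.7.3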
Finally, the Spark code is as follows:
package org.yw

import org.apache.spark.sql.SparkSession

object SparkTest {

  def main(args: Array[String]): Unit = {
    val master = "local"
    val sql = "select * from person"

    val spark = SparkSession
      .builder()
      .appName("test hive")
      .master(master)
      // The Hive metastore URI; the default port is 9083 (see hive-site.xml)
      .config("hive.metastore.uris", "thrift://bigdata03:9083")
      // The Hive warehouse directory
      .config("spark.sql.warehouse.dir", "hdfs://ns//warehouse/tablespace/external/hive")
      .enableHiveSupport()
      .getOrCreate()

    spark.sql(sql).show()
  }
}
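Before running the query, a quick sanity check can confirm which Hadoop version string is actually reported on the classpath, for example:

import org.apache.hadoop.util.VersionInfo

object VersionCheck {
  // If the properties file above is picked up, this prints that version
  // (for example 2.7.3) instead of 3.1.1.
  def main(args: Array[String]): Unit =
    println(VersionInfo.getVersion)
}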
With this in place the job runs successfully, and spark.sql(sql).show() prints the contents of the person table.