
Reading Hive 1.1.0 locally with Spark 3: solving the version compatibility problem

The problem Spark 3 hits when reading Hive 1.1.0


Exception in thread "main" org.apache.spark.sql.AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to fetch table test1. Invalid method name: 'get_table_req';
	at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:113)
	at org.apache.spark.sql.hive.HiveExternalCatalog.tableExists(HiveExternalCatalog.scala:855)
	at org.apache.spark.sql.catalyst.catalog.ExternalCatalogWithListener.tableExists(ExternalCatalogWithListener.scala:146)
	at org.apache.spark.sql.catalyst.catalog.SessionCatalog.tableExists(SessionCatalog.scala:432)
	at org.apache.spark.sql.catalyst.catalog.SessionCatalog.requireTableExists(SessionCatalog.scala:185)
	at org.apache.spark.sql.catalyst.catalog.SessionCatalog.getTableMetadata(SessionCatalog.scala:445)
	at org.apache.spark.sql.execution.datasources.v2.V2SessionCatalog.loadTable(V2SessionCatalog.scala:66)
	at org.apache.spark.sql.connector.catalog.CatalogV2Util$.loadTable(CatalogV2Util.scala:283)
	at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.loaded$lzycompute$1(Analyzer.scala:1010)
	at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.loaded$1(Analyzer.scala:1010)
	at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.$anonfun$lookupRelation$3(Analyzer.scala:1022)
	at scala.Option.orElse(Option.scala:447)
	at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.org$apache$spark$sql$catalyst$analysis$Analyzer$ResolveRelations$$lookupRelation(Analyzer.scala:1021)
	at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$9.applyOrElse(Analyzer.scala:977)

Cause: a version mismatch. Spark 3.x ships with a Hive 2.x metastore client by default, and that client issues the Thrift call get_table_req, which the Hive 1.1.0 metastore does not implement — hence the "Invalid method name: 'get_table_req'" error. This article explains the background well:
https://blog.csdn.net/OldDirverHelpMe/article/details/105325439

Solution 1: configure spark.sql.hive.metastore.version

pom.xml:

    <properties>
        <maven.compiler.source>8</maven.compiler.source>
        <maven.compiler.target>8</maven.compiler.target>
        <spark.version>3.0.1</spark.version>
        <scala.version>2.12</scala.version>
    </properties>

    <dependencies>

        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_${scala.version}</artifactId>
            <version>${spark.version}</version>
        </dependency>

        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming_${scala.version}</artifactId>
            <version>${spark.version}</version>
            <scope>provided</scope>
        </dependency>

        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_${scala.version}</artifactId>
            <version>${spark.version}</version>
            <scope>provided</scope>
        </dependency>

        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-hive_2.12</artifactId>
            <version>3.0.1</version>
            <scope>provided</scope>
        </dependency>

        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-hive-thriftserver_2.12</artifactId>
            <version>3.0.1</version>
            <scope>provided</scope>
        </dependency>

    </dependencies>
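One practical note: the Spark artifacts above are declared with provided scope, so they are not packaged into the application jar. When running locally from an IDE rather than via spark-submit, make sure the run configuration puts provided-scope dependencies on the classpath (recent IntelliJ versions expose this as an option in the run configuration).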
Resources — copy these cluster config files into the project's resources directory:
	core-site.xml
	hdfs-site.xml
	hive-site.xml
	yarn-site.xml
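All four files come from the cluster itself. The one doing the most work here is hive-site.xml, which tells Spark where the metastore lives. A minimal sketch of that entry, with a placeholder thrift address to adjust for your cluster:

    <configuration>
        <property>
            <name>hive.metastore.uris</name>
            <value>thrift://your-metastore-host:9083</value>
        </property>
    </configuration>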
Spark code for reading Hive:
package com.persist
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.hadoop.security.UserGroupInformation
import org.apache.spark.sql.SparkSession


object Spark3ReadHiveTest {
  def main(args: Array[String]): Unit = {
  
    // Kerberos login so a local process can read the secured cluster; drop this block if your cluster is not kerberized
    val conf = new Configuration
    System.setProperty("java.security.krb5.conf", "/xxx/etc/hive/krb5.conf")
    conf.addResource(new Path("/xxx/etc/hive/conf/hdfs-site.xml"))
    conf.set("hadoop.security.authentication", "Kerberos")
    UserGroupInformation.setConfiguration(conf)
    UserGroupInformation.loginUserFromKeytab("xxx", "/xxx/etc/xxx.keytab")
    println("login user: " + UserGroupInformation.getLoginUser())

	// build the SparkSession, pinning the Hive metastore client version to match the cluster
    val spark = SparkSession
      .builder()
      .appName("zjj-spark")
      .master("local[*]")
      .config("spark.sql.hive.metastore.version", "1.2.1") 
      .config("spark.sql.hive.metastore.jars", "maven") // 生产环境不建议配置maven
      //.config("spark.sql.hive.metastore.jars", "/Users/xxx/etc/hive/hive1_2_1jars
      .enableHiveSupport()
      .getOrCreate()

    spark.sql("show tables").show()
    spark.sql("select * from public.test1").show()

    spark.stop()

  }
}
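Equivalently, the two metastore settings can be supplied outside the code, e.g. spark-submit --conf spark.sql.hive.metastore.version=1.2.1 --conf spark.sql.hive.metastore.jars=maven, which keeps the session builder generic. And although 1.2.1 is pinned here against a 1.1.0 cluster, Spark 3.0 accepts metastore versions from 0.12.0 through 2.3.7, so setting 1.1.0 directly should also be valid if the 1.2.1 client misbehaves.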

Solution 3: recompile Spark 3 against the specified Hive version (not attempted)

~~I don't know how to do that~~

Other problems hit along the way

Problem 1: Exception in thread "main" org.apache.hadoop.security.AccessControlException: SIMPLE authentication is not enabled. Available:[TOKEN, KERBEROS]

Fix: copy the cluster's core-site.xml into the project's resources directory (a sketch of the relevant entry follows the stack trace below).

Exception in thread "main" org.apache.hadoop.security.AccessControlException: SIMPLE authentication is not enabled.  Available:[TOKEN, KERBEROS]
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
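What core-site.xml contributes here is the client-side authentication mode — the same setting the code above also applies programmatically. A minimal sketch of that entry:

    <configuration>
        <property>
            <name>hadoop.security.authentication</name>
            <value>kerberos</value>
        </property>
    </configuration>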

Problem 2: Can't get Master Kerberos principal for use as renewer
Fix: copy the cluster's yarn-site.xml into the project's resources directory; a sketch of the likely relevant entry is below.
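This error usually means the Hadoop client cannot resolve the ResourceManager's Kerberos principal, which yarn-site.xml provides. A minimal sketch of that entry, with a placeholder principal for your realm:

    <configuration>
        <property>
            <name>yarn.resourcemanager.principal</name>
            <value>yarn/_HOST@YOUR-REALM.COM</value>
        </property>
    </configuration>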
