Symptom:

```
Job failed with java.lang.ClassNotFoundException: org.apache.spark.AccumulatorParam
FAILED: Execution Error, return code 3 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Spark job failed during runtime. Please check stacktrace for the root cause.
```
Solution: reinstall Hive.

Reference: https://blog.csdn.net/qq_44226094/article/details/123218860

The key step is re-initializing the Hive metastore database: empty the original metastore database, then initialize it again:

```
schematool -initSchema -dbType mysql -verbose
```
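The wipe-and-reinitialize step can be scripted roughly as below. This is a sketch under assumptions: the metastore database is named `metastore` on MySQL host `cpu102` with the `root` credentials from hive-site.xml; adjust everything to your environment, and note that dropping the database destroys all existing table metadata.

```shell
# Assumption: the metastore DB is named "metastore" on MySQL host cpu102,
# matching javax.jdo.option.ConnectionURL in hive-site.xml.
# WARNING: this destroys all existing Hive metadata.
mysql -h cpu102 -u root -p -e "DROP DATABASE IF EXISTS metastore; CREATE DATABASE metastore;"

# Re-create the metastore schema from scratch:
schematool -initSchema -dbType mysql -verbose

# Verify the schema version that was written:
schematool -info -dbType mysql
```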
Below are my configuration files.

Create a hive-site.xml file under the conf directory. Note that the `&` characters in the JDBC URL must be escaped as `&amp;` inside XML:

```xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://cpu102:3306/metastore?useSSL=false&amp;useUnicode=true&amp;characterEncoding=UTF-8</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>root</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>xxxxxx</value>
    </property>
    <property>
        <name>hive.metastore.warehouse.dir</name>
        <value>/user/hive/warehouse</value>
    </property>
    <property>
        <name>hive.metastore.schema.verification</name>
        <value>false</value>
    </property>
    <property>
        <name>hive.server2.thrift.port</name>
        <value>10000</value>
    </property>
    <property>
        <name>hive.server2.thrift.bind.host</name>
        <value>cpu101</value>
    </property>
    <property>
        <name>hive.metastore.event.db.notification.api.auth</name>
        <value>false</value>
    </property>
    <property>
        <name>hive.cli.print.header</name>
        <value>true</value>
    </property>
    <property>
        <name>hive.cli.print.current.db</name>
        <value>true</value>
    </property>
    <property>
        <name>spark.yarn.jars</name>
        <value>hdfs://mycluster/spark-jars/*</value>
    </property>
    <property>
        <name>hive.execution.engine</name>
        <value>spark</value>
    </property>
</configuration>
```
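An easy mistake in hive-site.xml is leaving the `&` characters in the JDBC URL unescaped, which makes the file unparseable and Hive fail at startup. A minimal self-contained check of the escaping, using a hypothetical scratch copy at /tmp/hive-site.xml:

```shell
# Write a minimal hive-site.xml fragment; inside XML the JDBC URL's "&"
# must appear as "&amp;" or the file will not parse.
cat > /tmp/hive-site.xml <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://cpu102:3306/metastore?useSSL=false&amp;useUnicode=true&amp;characterEncoding=UTF-8</value>
  </property>
</configuration>
EOF

# Count lines containing the escaped entity "&amp;":
grep -c '&amp;' /tmp/hive-site.xml
# prints 1 (only the <value> line carries the escaped ampersands)
```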
If HA is not configured, point spark.yarn.jars at the single NameNode directly:

```
spark.yarn.jars hdfs://cpu101:8020/spark-jars/*
```
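spark.yarn.jars only helps if the Spark jars have actually been uploaded to that HDFS path; a missing or version-mismatched jar set is a common trigger for the `ClassNotFoundException` above. A sketch of the upload, where the local Spark path `/opt/module/spark` is an assumption about your install:

```shell
# Assumption: a Spark distribution is unpacked at /opt/module/spark;
# adjust the local path and the HDFS URI to your cluster.
hdfs dfs -mkdir -p /spark-jars
hdfs dfs -put /opt/module/spark/jars/* /spark-jars

# Spot-check that the jars landed where spark.yarn.jars points:
hdfs dfs -ls /spark-jars | head
```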
spark-defaults.conf:

```
spark.master             yarn
spark.eventLog.enabled   true
spark.eventLog.dir       hdfs://cpu101:8020/spark-history
spark.executor.memory    1g
spark.driver.memory      1g
```
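spark-defaults.conf is a plain text file of whitespace-separated `key value` pairs. A quick way to list the keys Spark will pick up, using a hypothetical scratch copy at /tmp/spark-defaults.conf:

```shell
# Write the settings above to a scratch copy (paths are illustrative).
cat > /tmp/spark-defaults.conf <<'EOF'
spark.master             yarn
spark.eventLog.enabled   true
spark.eventLog.dir       hdfs://cpu101:8020/spark-history
spark.executor.memory    1g
spark.driver.memory      1g
EOF

# Print the configuration keys, skipping comments and blank lines:
awk 'NF >= 2 && $1 !~ /^#/ {print $1}' /tmp/spark-defaults.conf
```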



