
Hive on Spark configuration error: Job failed with java.lang.ClassNotFoundException: org.apache.spark.AccumulatorParam

1. Running the SQL statement produces the following error:

hive> insert into table student values(1,'abc');
Query ID = atguigu_20200814150018_318272cf-ede4-420c-9f86-c5357b57aa11
Total jobs = 1
Launching Job 1 out of 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Job failed with java.lang.ClassNotFoundException: org.apache.spark.AccumulatorParam
FAILED: Execution Error, return code 3 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Spark job failed during runtime. Please check stacktrace for the root cause.

Cause: the current Hive version is 3.1.2 and the Spark version is 3.0.0. This pairing is not supported by the official releases, so to use it you would have to compile Hive against Spark 3.0.0 yourself.

It is recommended to use a Hive + Spark version pairing from the official releases.

Install a Hive that was compiled together with the matching Spark version. The version pairings currently recommended on the official site are:

Hive Version    Spark Version
1.1.x           1.2.0
1.2.x           1.3.1
2.0.x           1.5.0
2.1.x           1.6.0
2.2.x           1.6.0
2.3.x           2.0.0
3.0.x           2.3.0
master          2.3.0
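The table above can be encoded as a small lookup so a script can warn about unsupported pairings. This is only a sketch; the function name `recommended_spark` is our own, and the version pairs are taken from the table:

```shell
# Map a Hive release line to the Spark version the Hive project
# recommends for it (pairs taken from the compatibility table above).
# Unlisted versions, such as 3.1.x, return "unknown".
recommended_spark() {
  case "$1" in
    1.1.*) echo "1.2.0" ;;
    1.2.*) echo "1.3.1" ;;
    2.0.*) echo "1.5.0" ;;
    2.1.*|2.2.*) echo "1.6.0" ;;
    2.3.*) echo "2.0.0" ;;
    3.0.*) echo "2.3.0" ;;
    *)     echo "unknown" ;;
  esac
}

recommended_spark 3.0.1   # prints 2.3.0
recommended_spark 3.1.2   # prints unknown (not in the official table)
```

Note that Hive 3.1.2, the version in this report, falls through to "unknown", which is exactly why a self-compiled build is needed for Spark 3.0.0.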

If the versions already match and the error still occurs:

If HA (NameNode high availability) is configured, then spark.yarn.jars in hive-site.xml must use the HA nameservice, like this:

<property>
    <name>spark.yarn.jars</name>
    <value>hdfs://mycluster/spark-jars/*</value>
</property>
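For that property to take effect, the Spark jars must actually exist at the HDFS path it points to. A minimal sketch of populating it, assuming a local Spark install at the example path /opt/module/spark:

```shell
# Example only: upload the Spark jars to the directory that
# spark.yarn.jars references. Paths are assumptions; adjust
# /opt/module/spark and the nameservice for your cluster.
hadoop fs -mkdir -p hdfs://mycluster/spark-jars
hadoop fs -put /opt/module/spark/jars/* hdfs://mycluster/spark-jars/
```

If the directory is empty or was uploaded from a Spark build that does not match the compatibility table, the same ClassNotFoundException can appear.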
If none of the above is the cause:

Delete Hive and reinstall it. It may be that you ran mv on the extracted directory before the archive had finished unpacking, leaving an incomplete set of jars.
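An incomplete extraction can be detected without reinstalling by comparing the number of jars listed inside the release tarball with the number actually on disk. A sketch, with function names of our own and example paths:

```shell
# Count .jar entries listed inside a .tar.gz archive.
count_jars_in_tar()  { tar tzf "$1" | grep -c '\.jar$'; }

# Count .jar files actually present under a directory tree.
count_jars_on_disk() { find "$1" -name '*.jar' | wc -l; }

# Usage (paths are examples):
#   count_jars_in_tar  apache-hive-3.1.2-bin.tar.gz
#   count_jars_on_disk /opt/module/hive
# If the two numbers differ, re-extract the archive fully
# before moving the directory.
```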

When reposting, please note: article reposted from www.mshxw.com
Original article: https://www.mshxw.com/it/681916.html