
A detailed guide to reading Hive data with SparkSQL running locally in IDEA



Environment:

hadoop version: 2.6.5
spark version: 2.3.0
hive version: 1.2.2
master host: 192.168.100.201
slave1 host: 192.168.100.201

The pom.xml dependencies are as follows:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
 <modelVersion>4.0.0</modelVersion>

 <groupId>com.spark</groupId>
 <artifactId>spark_practice</artifactId>
 <version>1.0-SNAPSHOT</version>

 <properties>
  <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  <maven.compiler.source>1.8</maven.compiler.source>
  <maven.compiler.target>1.8</maven.compiler.target>
  <spark.core.version>2.3.0</spark.core.version>
 </properties>

 <dependencies>
  <dependency>
   <groupId>junit</groupId>
   <artifactId>junit</artifactId>
   <version>4.11</version>
   <scope>test</scope>
  </dependency>
  <dependency>
   <groupId>org.apache.spark</groupId>
   <artifactId>spark-core_2.11</artifactId>
   <version>${spark.core.version}</version>
  </dependency>
  <dependency>
   <groupId>org.apache.spark</groupId>
   <artifactId>spark-sql_2.11</artifactId>
   <version>${spark.core.version}</version>
  </dependency>
  <dependency>
   <groupId>mysql</groupId>
   <artifactId>mysql-connector-java</artifactId>
   <version>5.1.38</version>
  </dependency>
  <dependency>
   <groupId>org.apache.spark</groupId>
   <artifactId>spark-hive_2.11</artifactId>
   <version>2.3.0</version>
  </dependency>
 </dependencies>
</project>

Note: be sure to place the hive-site.xml configuration file in the project's resources directory, so that it ends up on the classpath.

hive-site.xml is configured as follows:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
 <property>
  <name>hive.metastore.uris</name>
  <value>thrift://192.168.100.201:9083</value>
 </property>
 <property>
  <name>hive.server2.thrift.port</name>
  <value>10000</value>
 </property>
 <property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://node01:3306/hive?createDatabaseIfNotExist=true</value>
 </property>
 <property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
 </property>
 <property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>root</value>
 </property>
 <property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>123456</value>
 </property>
 <property>
  <name>hive.zookeeper.quorum</name>
  <value>node01,node02,node03</value>
 </property>
 <property>
  <name>hbase.zookeeper.quorum</name>
  <value>node01,node02,node03</value>
 </property>
 <property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/user/hive/warehouse</value>
 </property>
 <property>
  <name>fs.defaultFS</name>
  <value>hdfs://192.168.100.201:9000</value>
 </property>
 <property>
  <name>hive.metastore.schema.verification</name>
  <value>false</value>
 </property>
 <property>
  <name>datanucleus.autoCreateSchema</name>
  <value>true</value>
 </property>
 <property>
  <name>datanucleus.autoStartMechanism</name>
  <value>checked</value>
 </property>
</configuration>
Main class code:

import org.apache.spark.sql.SparkSession

object SparksqlTest2 {
  def main(args: Array[String]): Unit = {
    // Run Spark locally with Hive support enabled; the hive-site.xml on the
    // classpath tells Spark where the remote metastore lives
    val spark: SparkSession = SparkSession
      .builder()
      .master("local[*]")
      .appName("Java Spark Hive Example")
      .enableHiveSupport()
      .getOrCreate()

    spark.sql("show databases").show()
    spark.sql("show tables").show()
    spark.sql("select * from person").show()
    spark.stop()
  }
}

Precondition: the queries run against the default database, and the person table contains three rows.
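The article does not show how the person table was created. For a reproducible test, a minimal table with three rows could be set up like this; the column names and sample values are assumptions, not from the original:

```shell
# Hypothetical DDL/DML for the person table (run on the Hive node;
# assumes HIVE_HOME points at the Hive 1.2.2 install)
$HIVE_HOME/bin/hive -e "
CREATE TABLE IF NOT EXISTS default.person (id INT, name STRING, age INT);
INSERT INTO TABLE default.person VALUES (1, 'zhangsan', 20), (2, 'lisi', 21), (3, 'wangwu', 22);
"
```

INSERT INTO ... VALUES is available from Hive 0.14 onward, so it works on the 1.2.2 cluster described above.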

Before testing, make sure the Hadoop cluster is running normally, then start Hive's metastore service:

./bin/hive --service metastore 
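To keep the metastore running after you log out, and to confirm that it is listening on the thrift port configured in hive-site.xml, something like the following can be used (assuming HIVE_HOME points at the Hive install on the master):

```shell
# Run the metastore in the background and capture its log output
nohup $HIVE_HOME/bin/hive --service metastore > metastore.log 2>&1 &
# Give it a moment to start, then verify that port 9083
# (the port from hive.metastore.uris) is open
sleep 5
ss -tln | grep 9083
```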

Run the program; the three show() calls print the query results to the console.

If the following error is reported:

Exception in thread "main" org.apache.spark.sql.AnalysisException: java.lang.RuntimeException: java.io.IOException: (null) entry in command string: null chmod 0700 C:\Users\dell\AppData\Local\Temp\c530fb25-b267-4dd2-b24d-741727a6fbf3_resources;
 at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:106)
 at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:194)
 at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:114)
 at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:102)
 at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
 at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
 at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
 at org.apache.spark.sql.hive.HiveSessionStateBuilder$$anon$1.<init>(HiveSessionStateBuilder.scala:69)
 at org.apache.spark.sql.hive.HiveSessionStateBuilder.analyzer(HiveSessionStateBuilder.scala:69)
 at org.apache.spark.sql.internal.BaseSessionStateBuilder$$anonfun$build$2.apply(BaseSessionStateBuilder.scala:293)
 at org.apache.spark.sql.internal.BaseSessionStateBuilder$$anonfun$build$2.apply(BaseSessionStateBuilder.scala:293)
 at org.apache.spark.sql.internal.SessionState.analyzer$lzycompute(SessionState.scala:79)
 at org.apache.spark.sql.internal.SessionState.analyzer(SessionState.scala:79)
 at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:57)
 at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:55)
 at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:47)
 at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:74)
 at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:638)
 at com.tongfang.learn.spark.hive.HiveTest.main(HiveTest.java:15)

Solution:

1. Download the Hadoop Windows binaries from: https://github.com/steveloughran/winutils

2. In the run configuration of the main class, set the environment variable HADOOP_HOME=D:\winutils\hadoop-2.6.4, where the value is the directory of the Hadoop Windows binaries.
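As an alternative to editing the run configuration, the same location can be supplied from code before the SparkSession is built. This is a common workaround rather than something from the original article, and the path below is the same assumed winutils directory:

```scala
object HadoopHomeFix {
  def main(args: Array[String]): Unit = {
    // Must run before SparkSession.builder() is called; the directory
    // is hypothetical and must contain bin\winutils.exe
    System.setProperty("hadoop.home.dir", "D:\\winutils\\hadoop-2.6.4")
    // ...then create the SparkSession exactly as in SparksqlTest2 above
  }
}
```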

This concludes the guide to reading Hive data with SparkSQL running locally in IDEA.
