Spark WordCount

1. Create a Maven project and add the following dependency and build plugins to pom.xml:

```xml
<dependencies>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.12</artifactId>
        <version>3.0.0</version>
    </dependency>
</dependencies>

<build>
    <plugins>
        <!-- Compiles the Scala sources -->
        <plugin>
            <groupId>net.alchim31.maven</groupId>
            <artifactId>scala-maven-plugin</artifactId>
            <version>3.2.2</version>
            <executions>
                <execution>
                    <goals>
                        <goal>compile</goal>
                        <goal>testCompile</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
        <!-- Builds a jar that bundles all dependencies -->
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-assembly-plugin</artifactId>
            <version>3.1.0</version>
            <configuration>
                <descriptorRefs>
                    <descriptorRef>jar-with-dependencies</descriptorRef>
                </descriptorRefs>
            </configuration>
            <executions>
                <execution>
                    <id>make-assembly</id>
                    <phase>package</phase>
                    <goals>
                        <goal>single</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
```

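With the assembly plugin bound to the `package` phase as above, running `mvn package` should produce both the plain jar and a `...-jar-with-dependencies.jar`; the latter bundles the full classpath and is the one to hand to `spark-submit` when running outside the IDE.
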
2. Add log4j.properties (conventionally placed under src/main/resources so it ends up on the classpath):

```properties
log4j.rootCategory=ERROR, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n

# Set the default spark-shell log level to ERROR. When running the spark-shell, the
# log level for this class is used to overwrite the root logger's log level, so that
# the user can have different defaults for the shell and regular Spark apps.
log4j.logger.org.apache.spark.repl.Main=ERROR

# Settings to quiet third party logs that are too verbose
log4j.logger.org.spark_project.jetty=ERROR
log4j.logger.org.spark_project.jetty.util.component.AbstractLifeCycle=ERROR
log4j.logger.org.apache.spark.repl.SparkIMain$exprTyper=ERROR
log4j.logger.org.apache.spark.repl.SparkILoop$SparkILoopInterpreter=ERROR
log4j.logger.org.apache.parquet=ERROR
log4j.logger.parquet=ERROR

# SPARK-9183: Settings to avoid annoying messages when looking up nonexistent
# UDFs in SparkSQL with Hive support
log4j.logger.org.apache.hadoop.hive.metastore.RetryingHMSHandler=FATAL
log4j.logger.org.apache.hadoop.hive.ql.exec.FunctionRegistry=ERROR
```
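Setting `log4j.rootCategory` to ERROR suppresses Spark's verbose INFO logging, so the console shows only the job's own output (the `println` results below) and genuine errors; the remaining entries quiet specific third-party loggers even further.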

3. Write the main class:

```scala
import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}

object sparkWC {
  def main(args: Array[String]): Unit = {
    // Create the Spark configuration object
    val sparkConf = new SparkConf()
    sparkConf.setAppName("wordCount01")
    sparkConf.setMaster("local")
    // Create the context (the connection to the Spark runtime)
    val sc = new SparkContext(sparkConf)
    // Read the input file (backslashes must be escaped in Scala string literals)
    val fileRDD = sc.textFile("D:\\BigData\\spark\\wordCount\\src\\main\\word\\hello")
    // Split each line into words
    val wordRDD: RDD[String] = fileRDD.flatMap(_.split(" "))
    // Pair each word with an initial count of 1
    val pairRDD: RDD[(String, Int)] = wordRDD.map((_, 1))
    // Sum the counts per word
    val countRDD: RDD[(String, Int)] = pairRDD.reduceByKey((v1: Int, v2: Int) => v1 + v2)
    countRDD.foreach(println)
    sc.stop()
  }
}
```
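
As a quick sanity check, the same pipeline can be run on an in-memory collection instead of a file. This is a minimal sketch; the object name, sample lines, and expected output are made up for illustration:

```scala
import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical verification object, not part of the article's project
object sparkWCCheck {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("wordCountCheck").setMaster("local"))
    // Made-up sample lines standing in for the input file
    val lines: RDD[String] = sc.makeRDD(Seq("hello spark", "hello scala"))
    val counts: RDD[(String, Int)] = lines
      .flatMap(_.split(" "))
      .map((_, 1))
      .reduceByKey(_ + _)
    // collect() brings the results to the driver; fine for tiny data
    counts.collect().foreach(println)
    // Expected output, in some order: (hello,2), (spark,1), (scala,1)
    sc.stop()
  }
}
```

Running either version with the `local` master prints each `(word, count)` pair to the console; the order of the pairs is not deterministic.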