
Spark Streaming's stateful operator updateStateByKey

updateStateByKey
    • updateStateByKey vs. reduceByKey
    • Implementing updateStateByKey in code
updateStateByKey vs. reduceByKey

reduceByKey aggregates only the records of the current batch, so every batch's output starts from zero. updateStateByKey is a stateful operator: for each key it folds the current batch's values into state carried over from earlier batches, which is why it requires a checkpoint directory to persist that state.

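The contrast is visible in the update function's contract alone. A minimal sketch of the per-key update semantics, with no Spark dependency (the batches and counts are illustrative):

```scala
// Per-key update contract of updateStateByKey, outside Spark:
// seq holds the new values for one key in the current batch,
// state holds the running total carried over from earlier batches.
def update(seq: Seq[Int], state: Option[Int]): Option[Int] =
  Option(seq.sum + state.getOrElse(0))

// Batch 1: a word arrives twice, with no previous state
val afterBatch1 = update(Seq(1, 1), None)     // Some(2)
// Batch 2: the same word arrives once more
val afterBatch2 = update(Seq(1), afterBatch1) // Some(3)
```

With reduceByKey, batch 2 would report a count of 1; with updateStateByKey it reports the accumulated 3.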
Implementing updateStateByKey in code

On the virtual machine, run nc -lk 8888 to provide test input.
The code runs in IDEA and receives the lines typed into that nc -lk 8888 session.

package sparkstreaming

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.streaming.{Durations, StreamingContext}
import org.apache.spark.streaming.dstream.{DStream, ReceiverInputDStream}

object Demo2UpdateStateByKey {
  def main(args: Array[String]): Unit = {

    val conf: SparkConf = new SparkConf()
    conf.setMaster("local[2]")
    conf.setAppName("Demo1")

    val sc: SparkContext = new SparkContext(conf)

    // One batch every 5 seconds
    val ssc: StreamingContext = new StreamingContext(sc, Durations.seconds(5))

    // updateStateByKey requires a checkpoint directory to persist state
    ssc.checkpoint("SparkLearning/src/main/data/checkpoint")

    // Receive lines from the nc server on host "master", port 8888
    val linesDS: ReceiverInputDStream[String] = ssc.socketTextStream("master", 8888)

    val words: DStream[String] = linesDS.flatMap(_.split(","))

    val kvDS: DStream[(String, Int)] = words.map((_, 1))

    // seq: values for this key in the current batch
    // opt: running count carried over from previous batches
    val updateFun = (seq: Seq[Int], opt: Option[Int]) => {

      // Count for the current batch
      val currCount: Int = seq.sum

      // Previously accumulated count
      val befCount: Int = opt.getOrElse(0)

      // New running count for this word
      val newCount = currCount + befCount

      // Return the new state wrapped in an Option
      Option(newCount)
    }

    // Unlike reduceByKey, which only aggregates the current batch and
    // cannot accumulate, updateStateByKey keeps a running count across batches
    val countDS: DStream[(String, Int)] = kvDS.updateStateByKey(updateFun)

    countDS.print()

    ssc.start()
    ssc.awaitTermination()
    ssc.stop()
  }
}
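To see what the job maintains end to end, here is a minimal simulation of the same stateful word count across batches, using a plain Map in place of Spark's state; processBatch is a hypothetical helper that mirrors the flatMap / map / updateStateByKey chain above, not part of the Spark API:

```scala
// Simulates the job's state across two batches without Spark.
// Each line is comma-separated, as in the nc input described above.
def processBatch(state: Map[String, Int], lines: Seq[String]): Map[String, Int] = {
  // Same shape as flatMap(_.split(",")) + map((_, 1)) + per-key summing
  val batchCounts = lines.flatMap(_.split(",")).groupBy(identity).map {
    case (word, occurrences) => word -> occurrences.size
  }
  // Same shape as updateFun: current batch count + previous state
  batchCounts.foldLeft(state) { case (acc, (word, count)) =>
    acc.updated(word, acc.getOrElse(word, 0) + count)
  }
}

val afterBatch1 = processBatch(Map.empty, Seq("spark,flink", "spark"))
val afterBatch2 = processBatch(afterBatch1, Seq("spark,hive"))
// afterBatch2: Map("spark" -> 3, "flink" -> 1, "hive" -> 1)
```

Each 5-second batch of the real job behaves like one processBatch call, with the checkpoint directory playing the role of the state argument.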
Reprinted from www.mshxw.com
Original article: https://www.mshxw.com/it/698555.html