
flink-sql: consume Kafka and join HBase in real time to fetch dimension info

        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
        // env.setParallelism(1);

        // 1. Create the table environment
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

        // Kafka source table; proc_time is the processing-time attribute needed
        // on the probe side of the lookup join below
        tableEnv.executeSql("CREATE TABLE MyUserTable (\n" +
                "     trans_date TIMESTAMP(3)\n" +
                "    ,cuacno     STRING\n" +
                "    ,guiyls     STRING\n" +
                "    ,trans_amt  DECIMAL(20,6)\n" +
                "    ,proc_time  AS PROCTIME()\n" +
                "    ,WATERMARK FOR trans_date AS trans_date - INTERVAL '0' SECOND\n" +
                ") WITH (\n" +
                "  'connector' = 'kafka',\n" +
                "  'topic' = 'flink_topc',\n" +
                "  'properties.bootstrap.servers' = 'cdh001:9092,cdh002:9092,cdh003:9092',\n" +
                "  'properties.group.id' = 'kafkaTask',\n" +
                "  'scan.startup.mode' = 'latest-offset',\n" +
                "  'format' = 'csv'\n" +
                ")");

        // HBase dimension table; the column family must be declared as a typed ROW
        tableEnv.executeSql("CREATE TABLE hTable (\n" +
                "  rowkey STRING,\n" +
                "  cf ROW<actnm STRING>,\n" +
                "  PRIMARY KEY (rowkey) NOT ENFORCED\n" +
                ") WITH (\n" +
                "  'connector' = 'hbase-2.2',\n" +
                "  'table-name' = 'flink:acctinfo',\n" +
                "  'zookeeper.quorum' = 'cdh001:2181,cdh002:2181,cdh003:2181'\n" +
                ")");

        // 5-second tumbling event-time window, lookup join against HBase.
        // FOR SYSTEM_TIME AS OF must reference the processing-time attribute
        // of the probe (left) side, i.e. MyUserTable.proc_time.
        TableResult result = tableEnv.sqlQuery(
                "SELECT\n" +
                "  TUMBLE_END(t.trans_date, INTERVAL '5' SECOND) AS window_end\n" +
                "  ,t.cuacno\n" +
                "  ,SUM(t.trans_amt) AS trans_amt\n" +
                "  ,h.cf.actnm\n" +
                "FROM MyUserTable AS t\n" +
                "LEFT JOIN hTable FOR SYSTEM_TIME AS OF t.proc_time AS h\n" +
                "  ON t.cuacno = h.rowkey\n" +
                "GROUP BY TUMBLE(t.trans_date, INTERVAL '5' SECOND), t.cuacno, h.cf.actnm"
        ).execute();
        result.print();

        // env.execute() is not needed: executeSql()/TableResult.execute()
        // already submit the job, and calling env.execute() with no
        // DataStream operators defined would throw an exception.

The code is straightforward:
1. Flink SQL creates the Kafka source table directly. `WATERMARK FOR trans_date AS trans_date - INTERVAL '0' SECOND` makes trans_date the event-time attribute, so all subsequent window operations are driven by trans_date, i.e. the transaction time. I set the watermark delay to 0s, yet the output still lags by about two seconds; I don't know why and would appreciate an explanation.
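A likely cause of that lag (an assumption on my part, not confirmed in the article): a tumbling window only fires once the watermark passes the window end, and the watermark only advances when a record with a later timestamp arrives and is emitted periodically (every 200 ms by default), so output always trails the window end even with a declared delay of zero. If out-of-order data is expected anyway, declaring a small bounded delay makes the behavior explicit; a sketch reusing the article's column names:

```sql
-- Sketch: same watermark declaration with an explicit 2-second
-- bounded-out-of-orderness delay instead of INTERVAL '0' SECOND.
-- Late records within 2s of the max seen timestamp are still windowed.
WATERMARK FOR trans_date AS trans_date - INTERVAL '2' SECOND
```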

2. Create the HBase dimension table. The official docs cover this connector: https://ci.apache.org/projects/flink/flink-docs-release-1.12/zh/dev/table/connectors/hbase.html
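In the HBase connector, each column family maps to a `ROW` type whose fields are the column qualifiers, so `cf ROW` without field types is incomplete. A minimal DDL sketch using the table and column names from this article (assuming `actnm` is the only qualifier needed):

```sql
CREATE TABLE hTable (
  rowkey STRING,
  cf ROW<actnm STRING>,          -- family 'cf', qualifier 'actnm'
  PRIMARY KEY (rowkey) NOT ENFORCED
) WITH (
  'connector' = 'hbase-2.2',
  'table-name' = 'flink:acctinfo',
  'zookeeper.quorum' = 'cdh001:2181,cdh002:2181,cdh003:2181'
)
```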
3. Left join the transaction stream against the dimension table. The left join itself needs no explanation; the key part is the `FOR SYSTEM_TIME AS OF ... proc_time` clause. It must be present, otherwise the stream keeps joining against a stale snapshot and misses updates to the dimension table.
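Note that in Flink's lookup-join syntax the `FOR SYSTEM_TIME AS OF` clause references the processing-time attribute of the probe (left) side, so `proc_time AS PROCTIME()` belongs on the Kafka table, not on the HBase table. The join shape, sketched with this article's tables:

```sql
SELECT t.cuacno, t.trans_amt, h.cf.actnm
FROM MyUserTable AS t
LEFT JOIN hTable FOR SYSTEM_TIME AS OF t.proc_time AS h
  ON t.cuacno = h.rowkey
```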
4. `TUMBLE(MyUserTable.trans_date, INTERVAL '5' SECOND)` opens a window on event time; TUMBLE is Flink streaming's tumbling window.
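TUMBLE assigns each row to exactly one fixed, non-overlapping 5-second bucket keyed by trans_date, and the grouped aggregate for a bucket is emitted once the watermark passes the bucket's end. A standalone sketch of the windowed aggregation:

```sql
SELECT
  TUMBLE_START(trans_date, INTERVAL '5' SECOND) AS win_start,
  TUMBLE_END(trans_date, INTERVAL '5' SECOND)   AS win_end,
  cuacno,
  SUM(trans_amt) AS trans_amt
FROM MyUserTable
GROUP BY TUMBLE(trans_date, INTERVAL '5' SECOND), cuacno
```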

Also, my Flink version is 1.12; earlier versions seem to have weaker HBase support.

This write-up is for learning and exchange; corrections are welcome.

Reprinted from www.mshxw.com; original article: https://www.mshxw.com/it/457256.html