
The Kafka API

First, add the dependency to your pom.xml:


<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.3.1</version>
</dependency>
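If you build with Gradle instead of Maven, the equivalent coordinate should be (assuming the same version):

implementation 'org.apache.kafka:kafka-clients:2.3.1'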
Producer

Code:

package com.test;

import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

import java.util.Properties;

public class MyProducer {

    public static void main(String[] args) {

        // Producer configuration
        Properties properties = new Properties();

        // Kafka broker(s) to connect to
        properties.put("bootstrap.servers", "spark-local:9092");

        // ack level: wait until all in-sync replicas have acknowledged
        properties.put("acks", "all");

        // number of retries on transient send failures
        properties.put("retries", 3);

        // batch size in bytes
        properties.put("batch.size", 16384);

        // how long to wait for more records before sending a batch
        properties.put("linger.ms", 10);

        // size of the RecordAccumulator buffer, in bytes
        properties.put("buffer.memory", 33554432);

        properties.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        properties.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<>(properties);

        for (int i = 0; i < 10; i++) {
            producer.send(new ProducerRecord<>("word", "up" + i));
        }
        producer.close();
    }
}

(All of these settings have defaults and can be omitted, except bootstrap.servers and the key/value serializers, which are required.)
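As a side note, the same configuration can be written with the constants defined in ProducerConfig, which avoids typos in the raw property keys. A minimal sketch of the same settings:

import org.apache.kafka.clients.producer.ProducerConfig;

Properties properties = new Properties();
properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "spark-local:9092"); // "bootstrap.servers"
properties.put(ProducerConfig.ACKS_CONFIG, "all");                           // "acks"
properties.put(ProducerConfig.RETRIES_CONFIG, 3);                            // "retries"
properties.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);                     // "batch.size"
properties.put(ProducerConfig.LINGER_MS_CONFIG, 10);                         // "linger.ms"
properties.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);               // "buffer.memory"
properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");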

Before running the code, start ZooKeeper and Kafka:

zkServer.sh start
bin/kafka-server-start.sh $KAFKA_HOME/config/server.properties &

Then start a console consumer (with Kafka 2.x the console consumer connects to the broker directly; the old --zookeeper flag was removed):

bin/kafka-console-consumer.sh --bootstrap-server spark-local:9092 --topic word
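If the broker does not auto-create topics (auto.create.topics.enable=false), the topic has to be created first. A sketch for a single-node setup (the partition and replication counts here are placeholders):

bin/kafka-topics.sh --zookeeper spark-local:2181 --create --topic word --partitions 1 --replication-factor 1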

Run the code, and the messages appear in the console consumer.

We can also pass a callback to send() to print each record's partition, offset, topic, and other metadata to the console.

for (int i = 0; i < 10; i++) {
    producer.send(new ProducerRecord<>("word", "up" + i), new Callback() {
        public void onCompletion(RecordMetadata recordMetadata, Exception e) {
            if (e == null) {
                System.out.println(recordMetadata.offset() + "--" + recordMetadata.partition());
            } else {
                e.printStackTrace();
            }
        }
    });
}

Replace the original for loop with this version and run again; each record's offset and partition are printed.
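Because Callback is a functional interface, the same callback can also be written as a lambda; this is purely a stylistic variant of the loop above:

for (int i = 0; i < 10; i++) {
    producer.send(new ProducerRecord<>("word", "up" + i), (metadata, exception) -> {
        if (exception == null) {
            System.out.println(metadata.offset() + "--" + metadata.partition());
        } else {
            exception.printStackTrace();
        }
    });
}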

Consumer

Code:

package com.test.consumer;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;

public class MyConsumer {
    public static void main(String[] args) {

        Properties properties = new Properties();

        // cluster address to connect to
        properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "spark-local:9092");

        // enable automatic offset commits
        properties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);

        // interval between automatic commits
        properties.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "1000");

        // key/value deserializers
        properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");

        // consumer group
        properties.put(ConsumerConfig.GROUP_ID_CONFIG, "bigdata");

        // create the consumer
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(properties);

        // subscribe to topics
        consumer.subscribe(Arrays.asList("word", "MyTopic"));

        while (true) {
            // fetch data
            ConsumerRecords<String, String> consumerRecords = consumer.poll(Duration.ofMillis(100));

            // parse and print consumerRecords
            for (ConsumerRecord<String, String> consumerRecord : consumerRecords) {

                System.out.println(consumerRecord.key() + "--" + consumerRecord.value());

            }
        }

//        consumer.close();
    }
}

Run the producer at the same time, and the consumer prints the records.
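If the auto-commit used above (enable.auto.commit=true) is too coarse, offsets can also be committed manually. A minimal sketch assuming the same consumer setup, with auto-commit turned off:

// set this instead of the auto-commit settings above
properties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);

// ... create the consumer and subscribe as before ...

while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
    for (ConsumerRecord<String, String> record : records) {
        System.out.println(record.key() + "--" + record.value());
    }
    // commit only after the whole batch has been processed
    consumer.commitSync();
}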

 

