
Kafka: consuming from a specified timestamp


To consume from a given point in time, map the timestamp to an offset on every partition of the topic with `offsetsForTimes`, then `seek` each partition to that offset before polling. The code is as follows (example):

import com.alibaba.fastjson.JSON;
import com.google.common.collect.Maps;
import lombok.Data;
import lombok.extern.slf4j.Slf4j;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.TopicPartition;

import java.time.Duration;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

@Slf4j
@Data
public class KafkaTimeStampConsumer {

    private KafkaConsumer<String, String> kafkaConsumer;

    public KafkaTimeStampConsumer(String topic, Long timestamp, String cluster) {
        Map<String, Object> props = Maps.newHashMap();

        // Broker cluster address
        props.put("bootstrap.servers", cluster);

        // Consumer group id
        props.put("group.id", "mygroup");
        // Disable auto-commit; offsets are committed manually
        props.put("enable.auto.commit", "false");
        // Session timeout; this can be set fairly large
        props.put("session.timeout.ms", "30000");

        //props.put("auto.offset.reset", "earliest");

        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        // Max records per poll (drop this to 1 when debugging a single bad record)
        props.put("max.poll.records", 1500);
        kafkaConsumer = new KafkaConsumer<>(props);

        // Build a timestamp query for every partition of the topic
        Map<TopicPartition, Long> timestampsToSearch = new HashMap<>();
        final List<PartitionInfo> partitionInfos = kafkaConsumer.partitionsFor(topic);
        for (PartitionInfo partitionInfo : partitionInfos) {
            timestampsToSearch.put(new TopicPartition(topic, partitionInfo.partition()), timestamp);
        }
        // Resolve, per partition, the earliest offset whose timestamp is >= the given one
        final Map<TopicPartition, OffsetAndTimestamp> timestampMap = kafkaConsumer.offsetsForTimes(timestampsToSearch);

        List<TopicPartition> topicPartitionList = new ArrayList<>(timestampMap.keySet());
        kafkaConsumer.assign(topicPartitionList);
        timestampMap.forEach((topicPartition, offsetAndTimestamp) -> {
            // offsetsForTimes returns null for a partition with no message at or after the timestamp
            if (offsetAndTimestamp != null) {
                kafkaConsumer.seek(topicPartition, offsetAndTimestamp.offset());
                int partition = topicPartition.partition();
                long timestamps = offsetAndTimestamp.timestamp();
                long offset = offsetAndTimestamp.offset();
                log.info("[KafkaTimeStampConsumer] partition = {} timestamp = {} offset = {}", partition, timestamps, offset);
            }
        });
    }

    public ConsumerRecords<String, String> consume() {
        return kafkaConsumer.poll(Duration.ofMillis(36000));
    }

    public void close() {
        log.info("[KafkaTimeStampConsumer] close topic = {}", JSON.toJSONString(kafkaConsumer.listTopics()));
        kafkaConsumer.close();
    }

    public static void main(String[] args) {
        KafkaTimeStampConsumer kafkaTimeStampConsumer = new KafkaTimeStampConsumer("aaa", 1632705050000L, "test-kafka.aaa.com:9092");
        while (true) {
            final ConsumerRecords<String, String> records = kafkaTimeStampConsumer.consume();
            for (ConsumerRecord<String, String> record : records) {
                System.out.println("partition = " + record.partition() + " record = " + record + " offset = " + record.offset());
            }
            // Commit once per poll rather than once per record
            kafkaTimeStampConsumer.kafkaConsumer.commitAsync();
        }
    }
}
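The timestamp handed to the constructor (and to `offsetsForTimes`) is epoch milliseconds, e.g. `1632705050000L` above. As a side note, a minimal helper using only `java.time` to produce such a value from a readable datetime; the class name `TimestampHelper` and the choice of UTC are assumptions for illustration, not part of the original code:

```java
import java.time.LocalDateTime;
import java.time.ZoneId;

public class TimestampHelper {

    // Convert a readable datetime in the given zone to epoch milliseconds,
    // suitable as the timestamp argument for KafkaConsumer.offsetsForTimes().
    static long toEpochMillis(LocalDateTime dt, ZoneId zone) {
        return dt.atZone(zone).toInstant().toEpochMilli();
    }

    public static void main(String[] args) {
        // 2021-09-27 01:10:50 UTC corresponds to the 1632705050000L used above
        long ts = toEpochMillis(LocalDateTime.of(2021, 9, 27, 1, 10, 50), ZoneId.of("UTC"));
        System.out.println(ts); // → 1632705050000
    }
}
```

Pick the zone deliberately: broker record timestamps are UTC-based epoch millis, so converting a local wall-clock time with the wrong zone will seek to the wrong offset.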
Source: https://www.mshxw.com/it/273745.html