Note 1: consumer kicked out of the group (poll timeout).
[| test_kafka_group] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-2, groupId=test_kafka_group] This member will leave the group because consumer poll timeout has expired. This means the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time processing messages. You can address this either by increasing max.poll.interval.ms or by reducing the maximum size of batches returned in poll() with max.poll.records.
Cause: max.poll.records was set too high, so the consumer could not finish processing a batch within the poll interval.
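The two remedies from the warning above can be sketched as plain consumer properties. This is a hedged sketch: the bootstrap address and the concrete values are illustrative, not taken from the original notes.

```java
import java.util.Properties;

public class ConsumerTuning {
    // Build consumer properties that trade throughput for poll-loop safety:
    // fewer records per poll() means each batch finishes well inside
    // max.poll.interval.ms, so the coordinator does not evict the member.
    static Properties tunedProps() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // illustrative address
        props.setProperty("group.id", "test_kafka_group");
        // Default is 500; lower it when per-record processing is slow.
        props.setProperty("max.poll.records", "100");
        // Default is 300000 (5 min); raise it if batches legitimately take longer.
        props.setProperty("max.poll.interval.ms", "600000");
        return props;
    }

    public static void main(String[] args) {
        Properties p = tunedProps();
        System.out.println("max.poll.records=" + p.getProperty("max.poll.records"));
        System.out.println("max.poll.interval.ms=" + p.getProperty("max.poll.interval.ms"));
    }
}
```

Either knob alone can be enough; lowering max.poll.records is usually the safer first step because it does not delay rebalance detection.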
Note 2: Kafka write error `magic v1 does not support record headers`.
Cause: a Kafka 0.10 broker does not support record headers, so a 0.11 client that sends headers fails against it; against a 0.11+ broker the same client works without error. Since SkyWalking relies on record headers, Kafka versions below 0.11 cannot be used and the broker must be upgraded.
Note 3: topic partition count. Kafka's partition assignment strategy distributes a topic's partitions across the brokers in the cluster, normally spreading them over different brokers; when there is only one broker, all partitions end up on that single broker.
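The spreading behavior described above can be sketched with a small model. This is an illustration only: the `partitionsPerBroker` round-robin spread and the hash-mod `partitionFor` are simplified stand-ins, not Kafka's actual assignment or murmur2 partitioner.

```java
import java.util.Arrays;

public class PartitionSketch {
    // Simplified stand-in for Kafka's default partitioner: hash the key and
    // take it modulo the partition count. (Kafka actually applies murmur2 to
    // the serialized key bytes; String.hashCode here is only for illustration.)
    static int partitionFor(String key, int numPartitions) {
        return Math.floorMod(key.hashCode(), numPartitions);
    }

    // Count how many of a topic's partitions land on each broker under a
    // simple round-robin spread; with one broker, everything lands on it.
    static int[] partitionsPerBroker(int numPartitions, int numBrokers) {
        int[] counts = new int[numBrokers];
        for (int p = 0; p < numPartitions; p++) {
            counts[p % numBrokers]++; // spread partitions across brokers in turn
        }
        return counts;
    }

    public static void main(String[] args) {
        // 6 partitions over 3 brokers: evenly spread, 2 each.
        System.out.println(Arrays.toString(partitionsPerBroker(6, 3)));
        // 6 partitions over 1 broker: all 6 on that broker.
        System.out.println(Arrays.toString(partitionsPerBroker(6, 1)));
    }
}
```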
min-partition-count: 3  # minimum number of partitions for the topic
replication-factor: 3   # number of replicas for the topic
spring.cloud.stream.bindings.testa.consumer.concurrency=2   # number of consumer threads
spring.cloud.stream.bindings.testa.consumer.partitioned=true # enable partitioned consumption
# Consumer-side property, available since Kafka 0.10; the maximum number of records returned by one poll() call, default 500
spring.kafka.consumer.max-poll-records: 500
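The max-poll-records setting above ties back to the poll-timeout warning at the top: a batch of max.poll.records records must be processed within max.poll.interval.ms, or the consumer is evicted from the group. A rough back-of-the-envelope check (the per-record timings are hypothetical, for illustration only):

```java
public class PollBudget {
    // Returns true when processing a full poll() batch fits inside the
    // poll interval, i.e. the consumer will not be kicked from the group.
    static boolean fitsInterval(int maxPollRecords, long msPerRecord, long maxPollIntervalMs) {
        return maxPollRecords * msPerRecord < maxPollIntervalMs;
    }

    public static void main(String[] args) {
        // 500 records x 1000 ms each = 500 s, over the 300 s default: unsafe.
        System.out.println(fitsInterval(500, 1000, 300_000)); // false
        // Dropping to 200 records keeps the batch at 200 s: safe.
        System.out.println(fitsInterval(200, 1000, 300_000)); // true
    }
}
```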