kafka1: 192.168.10.11
kafka2: 192.168.10.12
kafka3: 192.168.10.13
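Since the ZooKeeper config below refers to the nodes by hostname, each machine needs to resolve those names. Assuming no DNS is available for them, entries like these in `/etc/hosts` on every node would work:

```
# /etc/hosts on every node (assumption: no DNS for these hostnames)
192.168.10.11 kafka1
192.168.10.12 kafka2
192.168.10.13 kafka3
```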
Note: the 3.0 release introduces changes to this setup, so version 2.7 or earlier is recommended.
Deployment: 1. Download
mkdir /opt/kafka && cd /opt/kafka
wget https://archive.apache.org/dist/kafka/2.7.0/kafka_2.12-2.7.0.tgz
2. Extract
tar xf kafka_2.12-2.7.0.tgz
3. Edit the configuration files
cd /opt/kafka/kafka_2.12-2.7.0
vim config/zookeeper.properties
dataDir=/opt/kafka/zookeeper
clientPort=2181
maxClientCnxns=0
initLimit=5
syncLimit=2
server.1=kafka1:2888:3888
server.2=kafka2:2888:3888
server.3=kafka3:2888:3888
mkdir -p /opt/kafka/zookeeper && echo 1 > /opt/kafka/zookeeper/myid  # change the number on each node; the directory must match dataDir above
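The myid step above can be scripted so each node computes its own value. A minimal sketch, assuming the .11/.12/.13 addressing above (so the last octet minus 10 gives the id); `data_dir` and `node_ip` are illustrative placeholders:

```shell
# Sketch: derive this node's myid from the last octet of its IP,
# so .11 -> 1, .12 -> 2, .13 -> 3 (assumption based on the host list above).
data_dir=zookeeper             # /opt/kafka/zookeeper on the real nodes
node_ip=192.168.10.11          # set to this node's address
myid=$(( ${node_ip##*.} - 10 ))
mkdir -p "$data_dir" && echo "$myid" > "$data_dir/myid"
cat "$data_dir/myid"
```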
vim config/server.properties
broker.id=1                       # note: must be different on each node
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://192.168.10.11:9092   # this node's IP
log.dirs=/opt/kafka/kafka_logs
num.partitions=3
zookeeper.connect=192.168.10.11:2181,192.168.10.12:2181,192.168.10.13:2181
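Only `broker.id` and `advertised.listeners` differ between the three nodes, so they can be stamped in with `sed` instead of hand-editing. A hypothetical sketch; the file name `server.node.properties` and the sample values are illustrative (on a real node you would edit `config/server.properties` in place):

```shell
# Sketch: rewrite the two per-node lines with sed.
# The printf just fabricates a minimal example file for illustration.
printf 'broker.id=1\nadvertised.listeners=PLAINTEXT://192.168.10.11:9092\n' > server.node.properties
node_ip=192.168.10.12                 # this node's address (example)
broker_id=2                           # this node's id (example)
sed -i "s/^broker\.id=.*/broker.id=${broker_id}/" server.node.properties
sed -i "s|^advertised\.listeners=.*|advertised.listeners=PLAINTEXT://${node_ip}:9092|" server.node.properties
cat server.node.properties
```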
4. Start (ZooKeeper must be running before the Kafka broker)
./bin/zookeeper-server-start.sh -daemon config/zookeeper.properties
./bin/kafka-server-start.sh -daemon config/server.properties
Repeat the steps above on nodes 2 and 3.
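Since `-daemon` returns immediately, it can help to wait for each listener before moving on. A small bash-only sketch (it uses bash's `/dev/tcp`, no `nc` needed); ports and retry counts are assumptions matching the defaults above:

```shell
# Sketch: poll a TCP port until it accepts connections, once per second.
wait_port() {
  local host=$1 port=$2 tries=${3:-30} i
  for i in $(seq "$tries"); do
    # Opening fd 3 on /dev/tcp succeeds only if something is listening.
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
      return 0
    fi
    sleep 1
  done
  return 1
}

wait_port 127.0.0.1 2181 3 || echo "zookeeper not reachable yet"
```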
Test Kafka:
# Create a topic
kafka-topics.sh --create --zookeeper 192.168.10.11:2181,192.168.10.12:2181,192.168.10.13:2181 --replication-factor 2 --partitions 3 --topic test
# List topics
kafka-topics.sh --list --zookeeper 192.168.10.11:2181,192.168.10.12:2181,192.168.10.13:2181
# Produce (connect to the brokers on 9092, not the ZooKeeper port)
kafka-console-producer.sh --broker-list 192.168.10.11:9092,192.168.10.12:9092,192.168.10.13:9092 --topic test
# Consume (also on the broker port 9092)
kafka-console-consumer.sh --bootstrap-server 192.168.10.11:9092,192.168.10.12:9092,192.168.10.13:9092 --topic test --from-beginning
For more detailed operations, see https://blog.csdn.net/ispringmw/article/details/108834144



