Set up a ZooKeeper cluster on a single cloud server. Here we build only the minimal cluster of three instances.
The steps below reflect the author's personal preferences and are for reference only.
Installation steps

Download ZooKeeper and extract it.
ZooKeeper's sample configuration file lives at [ZooKeeper install path]/conf/zoo_sample.cfg. The defaults are:

```
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/tmp/zookeeper
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

## Metrics Providers
#
# https://prometheus.io Metrics Exporter
#metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
#metricsProvider.httpPort=7000
#metricsProvider.exportJvmInfo=true
```
The settings to change are dataDir and clientPort, plus the cluster-related entries added below.

- dataDir: where ZooKeeper stores its snapshot data; the official docs advise against keeping it under /tmp
- clientPort: the port over which ZooKeeper clients talk to the server; this port must be exposed externally
Copy zoo_sample.cfg within the same directory and rename the copy to zoo.cfg. Modify the dataDir and clientPort properties.

Then add the following entries to zoo.cfg:
```
server.1=127.0.0.1:3001:4001
server.2=127.0.0.1:3002:4002
server.3=127.0.0.1:3003:4003
```
The format is server.<serverId>=<ip>:<port1>:<port2>. The serverId is a number we assign ourselves.

Since all three instances run on the same machine here, the ip is 127.0.0.1 in every entry; in a real multi-machine cluster it would be each machine's externally reachable IP.
Ports 3001, 3002, and 3003 are the ports the cluster members use to communicate with each other.

Ports 4001, 4002, and 4003 are the ports the members use to communicate during leader election.
Example zoo.cfg:

```
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/opt/application/zookeeper/zookeeper1/data
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

## Metrics Providers
#
# https://prometheus.io Metrics Exporter
#metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
#metricsProvider.httpPort=7000
#metricsProvider.exportJvmInfo=true

server.1=127.0.0.1:3001:4001
server.2=127.0.0.1:3002:4002
server.3=127.0.0.1:3003:4003
```
In the configured dataDir directory, create a file named myid whose content is the number after server. in zoo.cfg. Each serverId must be unique across the cluster.
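As a sketch, the three myid files could be written like this. The base path is hypothetical (it mirrors the /opt/application/zookeeper layout from the zoo.cfg example); substitute your actual install location.

```shell
# Hypothetical base directory; replace with your real install path,
# e.g. /opt/application/zookeeper from the zoo.cfg example above.
BASE="${ZK_BASE:-$(mktemp -d)}"
for i in 1 2 3; do
  mkdir -p "$BASE/zookeeper$i/data"          # each instance's dataDir
  echo "$i" > "$BASE/zookeeper$i/data/myid"  # serverId must match server.N in zoo.cfg
done
cat "$BASE"/zookeeper*/data/myid             # prints 1, 2, 3, one per line
```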
Make two more copies of the ZooKeeper directory
- in each copy's zoo.cfg, change the clientPort and dataDir properties
- in each copy's dataDir, change the serverId inside the myid file
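A minimal sketch of this copy-and-patch step, assuming the hypothetical /opt/application/zookeeper/zookeeperN layout used above. To stay self-contained, the sketch fabricates a tiny stand-in zoo.cfg for instance 1; on a real install you would copy the extracted ZooKeeper directory instead.

```shell
# Hypothetical base path; replace with your real one.
BASE="${ZK_BASE:-$(mktemp -d)}"

# Stand-in for the first, already-configured instance.
mkdir -p "$BASE/zookeeper1/conf" "$BASE/zookeeper1/data"
printf 'clientPort=2181\ndataDir=%s/zookeeper1/data\n' "$BASE" > "$BASE/zookeeper1/conf/zoo.cfg"
echo 1 > "$BASE/zookeeper1/data/myid"

for i in 2 3; do
  cp -r "$BASE/zookeeper1" "$BASE/zookeeper$i"
  # each copy gets its own client port (2182, 2183) and dataDir...
  sed -i "s|clientPort=2181|clientPort=218$i|" "$BASE/zookeeper$i/conf/zoo.cfg"
  sed -i "s|zookeeper1/data|zookeeper$i/data|" "$BASE/zookeeper$i/conf/zoo.cfg"
  # ...and its own serverId
  echo "$i" > "$BASE/zookeeper$i/data/myid"
done
```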
Start the three ZooKeeper instances one by one: ./zkServer.sh start

Or write a simple script that starts them in turn:
```
/opt/application/zookeeper/zookeeper1/bin/zkServer.sh start
/opt/application/zookeeper/zookeeper2/bin/zkServer.sh start
/opt/application/zookeeper/zookeeper3/bin/zkServer.sh start
```
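One way to turn those three lines into a start-all.sh script (the zookeeperN paths follow the hypothetical layout above; adjust them to your install):

```shell
# Write the startup script; a quoted heredoc keeps the paths verbatim.
cat > start-all.sh <<'EOF'
#!/bin/bash
/opt/application/zookeeper/zookeeper1/bin/zkServer.sh start
/opt/application/zookeeper/zookeeper2/bin/zkServer.sh start
/opt/application/zookeeper/zookeeper3/bin/zkServer.sh start
EOF
```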
Add execute permission: sudo chmod +x start-all.sh

Run the script: ./start-all.sh
Check the status of each of the three instances; if they all report correctly, the cluster is up.
- Try stopping the leader and watch the remaining servers elect a new one
```
# Start/stop the server
./zkServer.sh start              # start the ZooKeeper service
./zkServer.sh stop               # stop the ZooKeeper service
./zkServer.sh status             # show ZooKeeper status information

# Connect a client to a server
./zkCli.sh -server <ip>:<port>   # connect to a specific ZooKeeper server

# Client commands
ls <path>                # list the children of a node
ls -w <path>             # watch the node's child list for changes (fires once)
create <path> [data]     # create a persistent node
create -s <path> [data]  # create a sequential persistent node
create -e <path> [data]  # create an ephemeral node
get <path>               # get the data of a node
get -w <path>            # watch the node's data for changes (fires once)
delete <path>            # delete a node that has no children
deleteall <path>         # delete a node and all of its children
```



