Previously we walked through using Docker to stand up a three-master, three-replica Redis cluster. But as traffic grows we eventually need to scale the cluster out; how should that be done? This post again uses Docker for the demo. If you built Redis from source, see my earlier article:
Redis6.x节点高可用之Cluster集群搭建
## 1. Start the two new node containers

```shell
docker run -d --name redis-node7 --net host --privileged=true \
    -v /data/redis/share/redis-node7:/data redis:6.2.5 \
    --cluster-enabled yes --appendonly yes --port 6387

docker run -d --name redis-node8 --net host --privileged=true \
    -v /data/redis/share/redis-node8:/data redis:6.2.5 \
    --cluster-enabled yes --appendonly yes --port 6388
```
In this walkthrough, node7 will become the new master and node8 its replica.
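The two `docker run` commands differ only in the node number and port, so they can be generated with a small loop. This is just a sketch assuming the same image and host paths as above; with `DRY_RUN=1` it only prints the commands instead of executing them:

```shell
# Generate the docker run commands for the new nodes 7 and 8
# (ports 6387/6388). Set DRY_RUN=0 to actually execute them.
DRY_RUN=1
for n in 7 8; do
  cmd="docker run -d --name redis-node$n --net host --privileged=true \
    -v /data/redis/share/redis-node$n:/data redis:6.2.5 \
    --cluster-enabled yes --appendonly yes --port 638$n"
  if [ "$DRY_RUN" = "1" ]; then
    echo "$cmd"
  else
    eval "$cmd"
  fi
done
```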
## 2. Add node7 to the cluster and reassign slots

In `add-node`, the first address is the new node and the second is any node already in the cluster:

```shell
[root@lhj ~]# docker exec -it redis-node7 /bin/bash
root@lhj:/data# redis-cli --cluster add-node 192.168.186.10:6387 192.168.186.10:6381
>>> Adding node 192.168.186.10:6387 to cluster 192.168.186.10:6381
>>> Performing Cluster Check (using node 192.168.186.10:6381)
M: 153cc92c3b2007dcabf6a5df7136df8697c990be 192.168.186.10:6381
   slots:[0-6826],[10923-12287] (8192 slots) master
   1 additional replica(s)
S: 3516db70dd4dafcb3fe1ba43f3d2fc383cf364b9 192.168.186.10:6384
   slots: (0 slots) slave
   replicates 153cc92c3b2007dcabf6a5df7136df8697c990be
S: 4925283241288ed83d7d2b90ee85c886b1fd6220 192.168.186.10:6386
   slots: (0 slots) slave
   replicates e1932ffb388350211e7c981f66e976e5905b172d
M: e1932ffb388350211e7c981f66e976e5905b172d 192.168.186.10:6383
   slots:[12288-16383] (4096 slots) master
   1 additional replica(s)
M: e5b5e6c823d59c601b95f5f7a017653c09a2c922 192.168.186.10:6382
   slots:[6827-10922] (4096 slots) master
   1 additional replica(s)
S: d52121486e6ee09b72ec26ce30a2e9eddf10ec82 192.168.186.10:6385
   slots: (0 slots) slave
   replicates e5b5e6c823d59c601b95f5f7a017653c09a2c922
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.186.10:6387 to make it join the cluster.
[OK] New node added correctly.
```
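The most important line in that transcript is `[OK] All 16384 slots covered.` As a quick offline sanity check, the per-master slot counts in any `--cluster check` style output must add up to exactly 16384; a small awk sketch (sample lines embedded, with abbreviated node ids):

```shell
# Sum the "(N slots)" figures on master lines of cluster-check output.
total=$(awk -F'[()]' '/slots:.*master/ { split($2, a, " "); sum += a[1] }
                      END { print sum }' <<'EOF'
M: 153cc92c 192.168.186.10:6381
   slots:[0-6826],[10923-12287] (8192 slots) master
S: 3516db70 192.168.186.10:6384
   slots: (0 slots) slave
M: e1932ffb 192.168.186.10:6383
   slots:[12288-16383] (4096 slots) master
M: e5b5e6c8 192.168.186.10:6382
   slots:[6827-10922] (4096 slots) master
EOF
)
echo "$total"   # 16384 -> every hash slot is owned by some master
```

Slave lines are skipped because only master lines contain both `slots:` and `master`.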
### Reshard the slots

```shell
root@lhj:/data# redis-cli --cluster reshard 192.168.186.10:6381
>>> Performing Cluster Check (using node 192.168.186.10:6381)
M: 153cc92c3b2007dcabf6a5df7136df8697c990be 192.168.186.10:6381
   slots:[151-6826],[10923-12287] (8041 slots) master
   1 additional replica(s)
S: 3516db70dd4dafcb3fe1ba43f3d2fc383cf364b9 192.168.186.10:6384
   slots: (0 slots) slave
   replicates 153cc92c3b2007dcabf6a5df7136df8697c990be
S: 4925283241288ed83d7d2b90ee85c886b1fd6220 192.168.186.10:6386
   slots: (0 slots) slave
   replicates e1932ffb388350211e7c981f66e976e5905b172d
M: e1932ffb388350211e7c981f66e976e5905b172d 192.168.186.10:6383
   slots:[12288-16383] (4096 slots) master
   1 additional replica(s)
M: e5b5e6c823d59c601b95f5f7a017653c09a2c922 192.168.186.10:6382
   slots:[6827-10922] (4096 slots) master
   1 additional replica(s)
S: d52121486e6ee09b72ec26ce30a2e9eddf10ec82 192.168.186.10:6385
   slots: (0 slots) slave
   replicates e5b5e6c823d59c601b95f5f7a017653c09a2c922
M: 517cd884bf05a60257e637430799692c9343303c 192.168.186.10:6387
   slots:[0-150] (151 slots) master
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 4096
What is the receiving node ID? 517cd884bf05a60257e637430799692c9343303c
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1: all
```
The operation above takes a share of slots from each of the other three masters and assigns them to the new node.
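Why 4096? With a fourth master in place, an even share of Redis Cluster's 16384 hash slots per master is:

```shell
TOTAL_SLOTS=16384
MASTERS=4
echo $((TOTAL_SLOTS / MASTERS))   # prints 4096, the even per-master share
```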
I interrupted the resharding midway, which left the slot migration incomplete. If that happens, run `redis-cli --cluster fix 192.168.186.10:6381` to repair the cluster. Moving the slots takes a few minutes, so be patient.
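If you would rather script the repair than watch it, one approach is to re-run `fix` until the check reports full coverage. In this sketch `cluster_check` is a stub standing in for the real `redis-cli --cluster check 192.168.186.10:6381`, and the real `fix` call is commented out so the sketch runs standalone:

```shell
# Stub; replace with: redis-cli --cluster check 192.168.186.10:6381
cluster_check() { echo "[OK] All 16384 slots covered."; }

# Keep repairing until every slot is covered again.
until cluster_check | grep -q 'All 16384 slots covered'; do
  echo "coverage incomplete, running fix..."
  # redis-cli --cluster fix 192.168.186.10:6381
  sleep 5
done
echo "cluster healthy"
```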
Finally, add node8 to the cluster as node7's replica:
```shell
root@lhj:/data# redis-cli --cluster add-node 192.168.186.10:6388 192.168.186.10:6387 --cluster-slave --cluster-master-id 517cd884bf05a60257e637430799692c9343303c
```

## 3. Scale down
As traffic falls, we may want to remove the nodes we added earlier, which means taking node8 and node7 back out of the cluster.
### 3.1 Remove the replica node first

Since a replica holds no slots, node8 can be removed straight away:

```shell
>>> Performing Cluster Check (using node 192.168.186.10:6381)
M: 153cc92c3b2007dcabf6a5df7136df8697c990be 192.168.186.10:6381
   slots:[2180-6826],[10923-12287] (6012 slots) master
   1 additional replica(s)
S: 3516db70dd4dafcb3fe1ba43f3d2fc383cf364b9 192.168.186.10:6384
   slots: (0 slots) slave
   replicates 153cc92c3b2007dcabf6a5df7136df8697c990be
S: 4925283241288ed83d7d2b90ee85c886b1fd6220 192.168.186.10:6386
   slots: (0 slots) slave
   replicates e1932ffb388350211e7c981f66e976e5905b172d
M: e1932ffb388350211e7c981f66e976e5905b172d 192.168.186.10:6383
   slots:[13321-16383] (3063 slots) master
   1 additional replica(s)
M: e5b5e6c823d59c601b95f5f7a017653c09a2c922 192.168.186.10:6382
   slots:[7860-10922] (3063 slots) master
   1 additional replica(s)
S: d52121486e6ee09b72ec26ce30a2e9eddf10ec82 192.168.186.10:6385
   slots: (0 slots) slave
   replicates e5b5e6c823d59c601b95f5f7a017653c09a2c922
S: 2178fdd18b79f047f5c148823cae46c2cab5f634 192.168.186.10:6388
   slots: (0 slots) slave
   replicates 517cd884bf05a60257e637430799692c9343303c
M: 517cd884bf05a60257e637430799692c9343303c 192.168.186.10:6387
   slots:[0-2179],[6827-7859],[12288-13320] (4246 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
root@lhj:/data# redis-cli --cluster del-node 192.168.186.10:6388 2178fdd18b79f047f5c148823cae46c2cab5f634
>>> Removing node 2178fdd18b79f047f5c148823cae46c2cab5f634 from cluster 192.168.186.10:6388
>>> Sending CLUSTER FORGET messages to the cluster...
>>> Sending CLUSTER RESET SOFT to the deleted node.
```

### 3.2 Empty node7's slots (reshard them away)
This time the reshard runs in the opposite direction: node7 is the source and one of the remaining masters receives its slots.

```shell
root@lhj:/data# redis-cli --cluster reshard 192.168.186.10:6381
>>> Performing Cluster Check (using node 192.168.186.10:6381)
...(cluster check output omitted)
How many slots do you want to move (from 1 to 16384)? 4246
What is the receiving node ID? e5b5e6c823d59c601b95f5f7a017653c09a2c922 (the id of 6382, a remaining master)
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1: 517cd884bf05a60257e637430799692c9343303c (the id of 6387)
Source node #2: done
```
A note on this step: during scale-out the new node pulled a share of slots from all three existing masters, but when emptying a node you choose the direction yourself. You can hand all of node7's slots to a single remaining master in one pass, or split the work across several reshard runs; what matters is that node7 ends up with 0 slots. (Running through it yourself makes this clearer.)
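As a check on the arithmetic: in the 3.1 output node7 held the ranges [0-2179],[6827-7859],[12288-13320], reported as 4246 slots. Each range [lo-hi] contains hi - lo + 1 slots, and the three ranges do add up:

```shell
total=0
for range in 0-2179 6827-7859 12288-13320; do
  lo=${range%-*}   # left end of the range
  hi=${range#*-}   # right end of the range
  total=$((total + hi - lo + 1))
done
echo "$total"   # 2180 + 1033 + 1033 = 4246
```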
### 3.3 Confirm node7 holds no slots, then remove it

```shell
root@lhj:/data# redis-cli --cluster check 192.168.186.10:6381
192.168.186.10:6381 (153cc92c...) -> 0 keys | 6012 slots | 1 slaves.
192.168.186.10:6383 (e1932ffb...) -> 0 keys | 1015 slots | 1 slaves.
192.168.186.10:6382 (e5b5e6c8...) -> 0 keys | 9357 slots | 1 slaves.
192.168.186.10:6387 (517cd884...) -> 0 keys | 0 slots | 0 slaves.
root@lhj:/data# redis-cli --cluster del-node 192.168.186.10:6387 517cd884bf05a60257e637430799692c9343303c
>>> Removing node 517cd884bf05a60257e637430799692c9343303c from cluster 192.168.186.10:6387
>>> Sending CLUSTER FORGET messages to the cluster...
>>> Sending CLUSTER RESET SOFT to the deleted node.
```
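Before running `del-node` on a master it is safer to assert, rather than eyeball, that it owns 0 slots. A sketch that parses the `--cluster check` summary lines (sample output embedded; the `:6387` match is the node we intend to drop):

```shell
# Pull the slot count for the node listening on :6387.
slots=$(awk -F'|' '/:6387 / { split($2, a, " "); print a[1] }' <<'EOF'
192.168.186.10:6381 (153cc92c...) -> 0 keys | 6012 slots | 1 slaves.
192.168.186.10:6387 (517cd884...) -> 0 keys | 0 slots | 0 slaves.
EOF
)
if [ "$slots" = "0" ]; then
  echo "node :6387 is empty, safe to del-node"
else
  echo "node :6387 still owns $slots slots, reshard first"
fi
```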



