
Deploying a Redis cluster with Docker and scaling it dynamically


Contents

Questions

Distributed storage solutions

Hash modulo partitioning

Consistent hashing partitioning

Hash slot partitioning

Deploying a 3-master / 3-replica cluster with hash slot partitioning (Docker)

Preparation

Create the 3-master / 3-replica Redis instances

Enter a container and build the master-replica relationships

Failover and master-replica switchover

Scaling out (adding a master and replica)

Scaling in (removing a master and replica)

Questions

Suppose 100 to 200 million records need to be cached. How would you design the storage architecture?

If distributed storage is required, how would you implement it with Redis?

Distributed storage solutions

1. Hash modulo partitioning

2. Consistent hashing partitioning

3. Hash slot partitioning

Hash modulo partitioning

200 million records means 200 million key-value pairs, which must be distributed across multiple machines.

1. Principle

Assume a cluster of 3 machines. Every read and write computes hash(key) % N (N = number of machines) and uses the resulting hash value to map the key to a node.

2. Advantages

Simple and effective. You only need to estimate the data volume in advance and plan the number of nodes accordingly. The hash pins each key's requests to one fixed server, so that server handles and maintains that portion of the data, giving you load balancing and divide-and-conquer.

3. Disadvantages

Scaling the originally planned set of nodes up or down is painful. Whenever the number of nodes changes, the mapping must be recomputed. This is fine while the server count stays fixed, but with elastic scaling or a machine failure the modulo formula changes: hash(key) % 3 becomes hash(key) % ?.

At that point the modulo results change drastically and the server a key maps to becomes unpredictable.
If one Redis machine goes down, the change in node count forces a reshuffle of virtually all data, as the sketch below illustrates.
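A minimal Python sketch of this effect (illustrative only; CRC32 is assumed as the hash function, and any stable hash behaves the same way):

# Route keys with hash(key) % N and count how many move when N changes.
from zlib import crc32

def node_for(key: str, n_nodes: int) -> int:
    # hash modulo partitioning: the remainder picks the node
    return crc32(key.encode()) % n_nodes

keys = [f"user:{i}" for i in range(10_000)]

# Shrink the cluster from 3 nodes to 2 and count the keys that change node.
moved = sum(node_for(k, 3) != node_for(k, 2) for k in keys)
print(f"{moved / len(keys):.0%} of keys now map to a different node")  # roughly two thirds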

Consistent hashing partitioning

1. Principle

Consistent hashing also relies on a hash function. The set of all possible hash values forms a hash space [0, 2^32-1]. This is a linear space, but the algorithm logically joins its two ends (0 = 2^32) so that it forms a ring.
It still works by taking a modulus, but whereas the node-modulo scheme above takes the modulus by the number of servers, consistent hashing takes the modulus by 2^32. In short, the entire hash value space is organized as a virtual circle. Suppose the hash function H has the value space 0 to 2^32-1 (i.e. hash values are 32-bit unsigned integers). The ring is laid out clockwise: the point at the very top represents 0, the first point to its right represents 1, and so on (2, 3, 4, ...) up to 2^32-1, so the first point to the left of 0 represents 2^32-1. 0 and 2^32-1 coincide at the zero point. This circle of 2^32 points is called the hash ring.

2. Advantages

   2.1 Node mapping
         Map each node in the cluster to a position on the ring. Each server is hashed once, typically using its IP address or hostname as the key, so every machine gets a fixed position on the hash ring. For example, with four nodes NodeA, B, C and D, hashing their IP addresses (hash(ip)) places each of them at a point on the ring.

   2.2 Key placement rule
         To store a key-value pair, first compute hash(key) with the same hash function to determine the key's position on the ring, then walk clockwise from that position; the first server encountered is the one the key belongs to, and the pair is stored on that node.
        For example, with four data objects Object A, Object B, Object C and Object D, consistent hashing places A on Node A, B on Node B, C on Node C and D on Node D.

  2.3 Fault tolerance
       Suppose Node C goes down: objects A, B and D are unaffected, and only object C is relocated to Node D. In general, when a server becomes unavailable, only the data lying between it and the previous server on the ring (the first server reached walking counter-clockwise) is affected. In short, if C fails, only the data between B and C is affected, and it is moved to D.

  2.4 Scalability
      If the data volume grows and a new node NodeX is added between A and B, only the data between A and X is affected; re-import that range onto X and you are done. There is no full reshuffle as with hash modulo. A minimal ring sketch follows below.
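A small Python sketch of such a ring (illustrative only; the node and key names are hypothetical and no virtual nodes are used):

import bisect
from zlib import crc32  # any stable hash works; CRC32 is just an assumption for this sketch

class HashRing:
    def __init__(self, nodes):
        # each node sits on the ring at the position given by its hash
        self.ring = sorted((crc32(n.encode()), n) for n in nodes)

    def add(self, node):
        bisect.insort(self.ring, (crc32(node.encode()), node))

    def remove(self, node):
        self.ring = [(p, n) for p, n in self.ring if n != node]

    def node_for(self, key):
        # walk clockwise: first node at or after the key's position, wrapping past the end
        points = [p for p, _ in self.ring]
        i = bisect.bisect_left(points, crc32(key.encode())) % len(self.ring)
        return self.ring[i][1]

ring = HashRing(["NodeA", "NodeB", "NodeC", "NodeD"])
print(ring.node_for("Object A"))  # stored on the next node clockwise from the key
ring.remove("NodeC")              # only keys between B and C are re-homed to the next node
ring.add("NodeX")                 # only keys between X's predecessor and X move onto X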

3. Disadvantages

Data skew on the hash ring:
        When there are only a few service nodes, uneven spacing of the nodes on the ring easily skews the data, with most cached objects piling up on a single server.

4. Summary
To migrate as little data as possible when the number of nodes changes, all storage nodes are arranged on a hash ring joined end to end, and each key, after hashing, is stored on the nearest node found clockwise. When a node joins or leaves, only its clockwise neighbour on the ring is affected.
Advantage: adding or removing a node only affects the adjacent node clockwise on the ring; other nodes are untouched.
Disadvantage: data placement depends on where the nodes sit, and because the nodes are not spread evenly around the ring, the data does not end up evenly distributed.

Hash slot partitioning

To fix the uneven distribution, another layer is inserted between the data and the nodes: the hash slot. Slots manage the relationship between data and nodes; a node now holds slots, and the slots hold the data.

Slots solve the granularity problem: the unit of data movement becomes coarser, which makes migration easier. Hashing solves the mapping problem: the key's hash value determines its slot, which makes data placement easy.

A cluster has exactly 16384 slots, numbered 0-16383 (0 to 2^14-1). These slots are divided among all of the cluster's master nodes; there is no mandated allocation strategy, and you may assign specific slot ranges to specific masters. The cluster records the node-to-slot mapping. Once that mapping is settled, each key is hashed and the result is taken modulo 16384; the remainder is the slot the key falls into: slot = CRC16(key) % 16384. Data is moved in units of slots, and because the number of slots is fixed, migration is easy to manage.

Hash slot computation
Redis Cluster has 16384 built-in hash slots, and Redis maps them roughly evenly onto the nodes according to the node count. When a key-value pair is placed into the cluster, Redis first runs CRC16 on the key and takes the result modulo 16384, so every key maps to a slot numbered 0-16383 and therefore to one node. In the author's example, keys A and B land on Node2 and key C lands on Node3; the sketch below shows the calculation.
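A small Python sketch of the slot calculation (the CRC16 variant below is the XMODEM/CCITT-style one commonly attributed to Redis Cluster; treat the exact parameters as an assumption for illustration):

def crc16(data: bytes) -> int:
    # CRC16-XMODEM: polynomial 0x1021, initial value 0
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def slot_for(key: str) -> int:
    # slot = CRC16(key) % 16384
    return crc16(key.encode()) % 16384

for k in ("A", "B", "C"):
    print(k, "-> slot", slot_for(k))
# With the default 3-master split (0-5460, 5461-10922, 10923-16383),
# the slot number tells you which master owns the key.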

Deploying a 3-master / 3-replica cluster with hash slot partitioning (Docker)

Preparation

Set up the Docker environment, disable the firewall, and so on.

For a Docker environment deployment script, see the following link:

docker-compose+gitlab部署CICD_兰兰IT的博客-CSDN博客_docker部署cicd

Create the 3-master / 3-replica Redis instances
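## Flags: --cluster-enabled yes turns on Redis cluster mode, --appendonly yes enables AOF persistence,
## --net host puts every container on the host network so each node simply listens on its own port,
## and -v maps each node's /data directory onto the host for persistence.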
[root@redis ~]# docker run -d --name redis-node1 --net  host --privileged=true -v /data/redis/share/redis-node1:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6381
[root@redis ~]# docker run -d --name redis-node2 --net  host --privileged=true -v /data/redis/share/redis-node2:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6382
[root@redis ~]# docker run -d --name redis-node3 --net  host --privileged=true -v /data/redis/share/redis-node3:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6383
[root@redis ~]# docker run -d --name redis-node4 --net  host --privileged=true -v /data/redis/share/redis-node4:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6384
[root@redis ~]# docker run -d --name redis-node5 --net  host --privileged=true -v /data/redis/share/redis-node5:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6385
[root@redis ~]# docker run -d --name redis-node6 --net  host --privileged=true -v /data/redis/share/redis-node6:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6386

Enter a container and build the master-replica relationships
## Enter any one of the containers
[root@localhost ~]# docker exec -it  redis-node1 /bin/bash
root@localhost:/data# redis-cli --cluster create 192.168.102.129:6381 192.168.102.129:6382 192.168.102.129:6383 192.168.102.129:6384 192.168.102.129:6385 192.168.102.129:6386 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 192.168.102.129:6385 to 192.168.102.129:6381
Adding replica 192.168.102.129:6386 to 192.168.102.129:6382
Adding replica 192.168.102.129:6384 to 192.168.102.129:6383
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: 40a6e217e2d22abc3700853cc2de467809710735 192.168.102.129:6381
   slots:[0-5460] (5461 slots) master
M: b45605a7c132a753ff531196e9a4a24db61d081a 192.168.102.129:6382
   slots:[5461-10922] (5462 slots) master
M: 099bbd4c9e873369d0449ac7f1321c4e640f75a0 192.168.102.129:6383
   slots:[10923-16383] (5461 slots) master
S: d9b436771ac2715f95b98a706b529313f56053e4 192.168.102.129:6384
   replicates 40a6e217e2d22abc3700853cc2de467809710735
S: 4e2951e704fedb761a93a16040f75a86dcb1e1e9 192.168.102.129:6385
   replicates b45605a7c132a753ff531196e9a4a24db61d081a
S: b38266249f924467bc7e91cf42b6b5d5d2e2474c 192.168.102.129:6386
   replicates 099bbd4c9e873369d0449ac7f1321c4e640f75a0
Can I set the above configuration? (type 'yes' to accept): yes  # type yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.
>>> Performing Cluster Check (using node 192.168.102.129:6381)
M: 40a6e217e2d22abc3700853cc2de467809710735 192.168.102.129:6381
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 4e2951e704fedb761a93a16040f75a86dcb1e1e9 192.168.102.129:6385
   slots: (0 slots) slave
   replicates b45605a7c132a753ff531196e9a4a24db61d081a
S: b38266249f924467bc7e91cf42b6b5d5d2e2474c 192.168.102.129:6386
   slots: (0 slots) slave
   replicates 099bbd4c9e873369d0449ac7f1321c4e640f75a0
M: b45605a7c132a753ff531196e9a4a24db61d081a 192.168.102.129:6382
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
M: 099bbd4c9e873369d0449ac7f1321c4e640f75a0 192.168.102.129:6383
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: d9b436771ac2715f95b98a706b529313f56053e4 192.168.102.129:6384
   slots: (0 slots) slave
   replicates 40a6e217e2d22abc3700853cc2de467809710735
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.     # seeing '[OK]' means the cluster was built successfully

## Connect to node1 and inspect the cluster
root@localhost:/data# redis-cli -p 6381
127.0.0.1:6381> CLUSTER info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:20
cluster_stats_messages_pong_sent:23
cluster_stats_messages_sent:43
cluster_stats_messages_ping_received:18
cluster_stats_messages_pong_received:20
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:43
127.0.0.1:6381> CLUSTER NODES
4e2951e704fedb761a93a16040f75a86dcb1e1e9 192.168.102.129:6385@16385 slave b45605a7c132a753ff531196e9a4a24db61d081a 0 1651819560674 2 connected
b38266249f924467bc7e91cf42b6b5d5d2e2474c 192.168.102.129:6386@16386 slave 099bbd4c9e873369d0449ac7f1321c4e640f75a0 0 1651819559000 3 connected
40a6e217e2d22abc3700853cc2de467809710735 192.168.102.129:6381@16381 myself,master - 0 1651819560000 1 connected 0-5460
b45605a7c132a753ff531196e9a4a24db61d081a 192.168.102.129:6382@16382 master - 0 1651819558660 2 connected 5461-10922
099bbd4c9e873369d0449ac7f1321c4e640f75a0 192.168.102.129:6383@16383 master - 0 1651819559668 3 connected 10923-16383
d9b436771ac2715f95b98a706b529313f56053e4 192.168.102.129:6384@16384 slave 40a6e217e2d22abc3700853cc2de467809710735 0 1651819561683 1 connected

Failover and master-replica switchover
---Stop node1 case
[root@localhost ~]# docker exec -it  redis-node1 /bin/bash
root@localhost:/data# redis-cli  -p 6381 -c      # -c enables cluster mode so the client follows redirects and routing doesn't break
127.0.0.1:6381> flushall
127.0.0.1:6381> cluster nodes
127.0.0.1:6381> exit
[root@localhost ~]# docker ps 
[root@localhost ~]# docker stop redis-node1
[root@localhost ~]# docker ps
[root@localhost ~]# docker exec -it redis-node2 bash
root@localhost:/data# redis-cli -p 6382 -c
127.0.0.1:6382> cluster nodes
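# In the output above, 6381's replica (6384) should now appear as a master: the cluster promoted it when 6381 went down.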

---Restart node1 case
[root@localhost ~]# docker restart redis-node1 
[root@localhost ~]# docker ps 
[root@localhost ~]# docker exec -it redis-node1 bash 
root@localhost:/data# redis-cli -p 6382 -c
127.0.0.1:6382> cluster nodes                    # check the cluster's actual master/replica layout
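# The restarted 6381 does not automatically take its master role back: it rejoins as a replica of 6384, the newly promoted master.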

---Stop node4 case
[root@localhost ~]# docker stop redis-node4
[root@localhost ~]# docker ps 
[root@localhost ~]# docker exec -it redis-node1 bash 
root@localhost:/data# redis-cli -p 6382 -c
127.0.0.1:6382> cluster nodes

---Restart node4 case
[root@localhost ~]# docker restart redis-node4
[root@localhost ~]# docker ps 
[root@localhost ~]# docker exec -it redis-node1 bash 
root@localhost:/data# redis-cli -p 6382 -c
127.0.0.1:6382> cluster nodes

Scaling out (adding a master and replica)

When heavy traffic or high concurrency exceeds what the existing cluster can handle, add one more master and one more replica.

[root@localhost ~]# docker run -d --name redis-node7 --net  host --privileged=true -v /data/redis/share/redis-node7:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6387
[root@localhost ~]# docker run -d --name redis-node8 --net  host --privileged=true -v /data/redis/share/redis-node8:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6388
[root@localhost ~]# docker ps 
CONTAINER ID   IMAGE         COMMAND                  CREATED         STATUS         PORTS     NAMES
b1a96e99db57   redis:6.0.8   "docker-entrypoint.s…"   9 minutes ago   Up 9 minutes             redis-node8
2d10f6370b70   redis:6.0.8   "docker-entrypoint.s…"   9 minutes ago   Up 9 minutes             redis-node7
770ced44e7d4   redis:6.0.8   "docker-entrypoint.s…"   7 days ago      Up 5 hours               redis-node6
5f86e8ea1b42   redis:6.0.8   "docker-entrypoint.s…"   7 days ago      Up 5 hours               redis-node5
6d6937b9efd5   redis:6.0.8   "docker-entrypoint.s…"   7 days ago      Up 5 hours               redis-node4
2d7ff5b08b30   redis:6.0.8   "docker-entrypoint.s…"   7 days ago      Up 5 hours               redis-node3
6019febedd93   redis:6.0.8   "docker-entrypoint.s…"   7 days ago      Up 5 hours               redis-node2
41c4483e66da   redis:6.0.8   "docker-entrypoint.s…"   7 days ago      Up 5 hours               redis-node1

# Add redis-node7 to the cluster (it joins as a master with 0 slots)
[root@localhost ~]# docker exec -it redis-node7 bash
root@localhost:/data# redis-cli --cluster add-node 192.168.102.129:6387 192.168.102.129:6381
root@localhost:/data# redis-cli --cluster check  192.168.102.129:6381
192.168.102.129:6381 (40a6e217...) -> 0 keys | 10922 slots | 1 slaves.
192.168.102.129:6387 (176f51a8...) -> 0 keys | 0 slots | 0 slaves.
192.168.102.129:6382 (b45605a7...) -> 1 keys | 2731 slots | 1 slaves.
192.168.102.129:6383 (099bbd4c...) -> 1 keys | 2731 slots | 1 slaves.

root@localhost:/data# redis-cli -p 6382 -c 
127.0.0.1:6382> cluster nodes
099bbd4c9e873369d0449ac7f1321c4e640f75a0 192.168.102.129:6383@16383 master - 0 1652423786000 3 connected 13653-16383
4e2951e704fedb761a93a16040f75a86dcb1e1e9 192.168.102.129:6385@16385 slave b45605a7c132a753ff531196e9a4a24db61d081a 0 1652423789103 2 connected
d9b436771ac2715f95b98a706b529313f56053e4 192.168.102.129:6384@16384 slave 40a6e217e2d22abc3700853cc2de467809710735 0 1652423787086 10 connected
40a6e217e2d22abc3700853cc2de467809710735 192.168.102.129:6381@16381 master - 0 1652423788000 10 connected 0-8191 10923-13652
b45605a7c132a753ff531196e9a4a24db61d081a 192.168.102.129:6382@16382 myself,master - 0 1652423787000 2 connected 8192-10922
b38266249f924467bc7e91cf42b6b5d5d2e2474c 192.168.102.129:6386@16386 slave 099bbd4c9e873369d0449ac7f1321c4e640f75a0 0 1652423787000 3 connected
176f51a882978f64579de86c2eabfbb46423cbb3 192.168.102.129:6387@16387 master - 0 1652423788094 9 connected


# Reshard: reassign slot numbers
root@localhost:/data# redis-cli --cluster reshard 192.168.102.129:6381
How many slots do you want to move (from 1 to 16384)? 4096     # 16384 / 4 (4 masters)
What is the receiving node ID? 176f51a882978f64579de86c2eabfbb46423cbb3  # enter redis-node7's cluster node ID (not its Docker container ID)
Source node #1:all
Do you want to proceed with the proposed reshard plan (yes/no)? yes

# Each of the three original masters hands part of its range to node7, roughly 4096 slots in total
root@localhost:/data# redis-cli --cluster check 192.168.102.129:6381   
192.168.102.129:6381 (40a6e217...) -> 0 keys | 8191 slots | 1 slaves.
192.168.102.129:6387 (176f51a8...) -> 0 keys | 4095 slots | 0 slaves.
192.168.102.129:6382 (b45605a7...) -> 1 keys | 2049 slots | 1 slaves.
192.168.102.129:6383 (099bbd4c...) -> 1 keys | 2049 slots | 1 slaves.
[OK] 2 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.102.129:6381)
M: 40a6e217e2d22abc3700853cc2de467809710735 192.168.102.129:6381
   slots:[2731-8191],[10923-13652] (8191 slots) master
   1 additional replica(s)
S: b38266249f924467bc7e91cf42b6b5d5d2e2474c 192.168.102.129:6386
   slots: (0 slots) slave
   replicates 099bbd4c9e873369d0449ac7f1321c4e640f75a0
M: 176f51a882978f64579de86c2eabfbb46423cbb3 192.168.102.129:6387
   slots:[0-2730],[8192-8873],[13653-14334] (4095 slots) master
S: d9b436771ac2715f95b98a706b529313f56053e4 192.168.102.129:6384
   slots: (0 slots) slave
   replicates 40a6e217e2d22abc3700853cc2de467809710735
M: b45605a7c132a753ff531196e9a4a24db61d081a 192.168.102.129:6382
   slots:[8874-10922] (2049 slots) master
   1 additional replica(s)
M: 099bbd4c9e873369d0449ac7f1321c4e640f75a0 192.168.102.129:6383
   slots:[14335-16383] (2049 slots) master
   1 additional replica(s)
S: 4e2951e704fedb761a93a16040f75a86dcb1e1e9 192.168.102.129:6385
   slots: (0 slots) slave
   replicates b45605a7c132a753ff531196e9a4a24db61d081a

# Attach a replica (node8) to node7
root@localhost:/data# redis-cli --cluster add-node 192.168.102.129:6388 192.168.102.129:6387 --cluster-slave --cluster-master-id 176f51a882978f64579de86c2eabfbb46423cbb3

# Check the cluster status
root@localhost:/data# redis-cli --cluster check 192.168.102.129:6381
192.168.102.129:6381 (40a6e217...) -> 0 keys | 8191 slots | 1 slaves.
192.168.102.129:6387 (176f51a8...) -> 0 keys | 4095 slots | 1 slaves.
192.168.102.129:6382 (b45605a7...) -> 1 keys | 2049 slots | 1 slaves.
192.168.102.129:6383 (099bbd4c...) -> 1 keys | 2049 slots | 1 slaves.
[OK] 2 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.102.129:6381)
M: 40a6e217e2d22abc3700853cc2de467809710735 192.168.102.129:6381
   slots:[2731-8191],[10923-13652] (8191 slots) master
   1 additional replica(s)
S: a7685cc7a6cae3e525a2c3e1009c7a5c26988a26 192.168.102.129:6388
   slots: (0 slots) slave
   replicates 176f51a882978f64579de86c2eabfbb46423cbb3
S: b38266249f924467bc7e91cf42b6b5d5d2e2474c 192.168.102.129:6386
   slots: (0 slots) slave
   replicates 099bbd4c9e873369d0449ac7f1321c4e640f75a0
M: 176f51a882978f64579de86c2eabfbb46423cbb3 192.168.102.129:6387
   slots:[0-2730],[8192-8873],[13653-14334] (4095 slots) master
   1 additional replica(s)
S: d9b436771ac2715f95b98a706b529313f56053e4 192.168.102.129:6384
   slots: (0 slots) slave
   replicates 40a6e217e2d22abc3700853cc2de467809710735
M: b45605a7c132a753ff531196e9a4a24db61d081a 192.168.102.129:6382
   slots:[8874-10922] (2049 slots) master
   1 additional replica(s)
M: 099bbd4c9e873369d0449ac7f1321c4e640f75a0 192.168.102.129:6383
   slots:[14335-16383] (2049 slots) master
   1 additional replica(s)
S: 4e2951e704fedb761a93a16040f75a86dcb1e1e9 192.168.102.129:6385
   slots: (0 slots) slave
   replicates b45605a7c132a753ff531196e9a4a24db61d081a
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

Scaling in (removing a master and replica)
# Get 6388's cluster node ID (from the check output) and remove it from the cluster
root@localhost:/data# redis-cli --cluster check 192.168.102.129:6381
root@localhost:/data# redis-cli --cluster del-node 192.168.102.129:6388 a7685cc7a6cae3e525a2c3e1009c7a5c26988a26
>>> Removing node a7685cc7a6cae3e525a2c3e1009c7a5c26988a26 from cluster 192.168.102.129:6388
>>> Sending CLUSTER FORGET messages to the cluster...
>>> Sending CLUSTER RESET SOFT to the deleted node.

# Reshard again, handing the freed slots back to one of the remaining nodes.
# (Note: with 'all' as the source, slots are pulled from every master, not only node7, which is why the layout below
# ends up lopsided; to drain only node7, enter its ID as the sole source node and then type 'done'.)
root@localhost:/data# redis-cli --cluster reshard 192.168.102.129:6381
How many slots do you want to move (from 1 to 16384)? 4096
What is the receiving node ID? 40a6e217e2d22abc3700853cc2de467809710735
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:all

Do you want to proceed with the proposed reshard plan (yes/no)? yes

root@localhost:/data# redis-cli --cluster check 192.168.102.129:6381
[OK] 2 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.102.129:6381)
M: 40a6e217e2d22abc3700853cc2de467809710735 192.168.102.129:6381
   slots:[0-10921],[10923-16382] (16382 slots) master
   1 additional replica(s)
S: b38266249f924467bc7e91cf42b6b5d5d2e2474c 192.168.102.129:6386
   slots: (0 slots) slave
   replicates 099bbd4c9e873369d0449ac7f1321c4e640f75a0
M: 176f51a882978f64579de86c2eabfbb46423cbb3 192.168.102.129:6387
   slots: (0 slots) master
S: d9b436771ac2715f95b98a706b529313f56053e4 192.168.102.129:6384
   slots: (0 slots) slave
   replicates 40a6e217e2d22abc3700853cc2de467809710735
M: b45605a7c132a753ff531196e9a4a24db61d081a 192.168.102.129:6382
   slots:[10922] (1 slots) master
   1 additional replica(s)
M: 099bbd4c9e873369d0449ac7f1321c4e640f75a0 192.168.102.129:6383
   slots:[16383] (1 slots) master
   1 additional replica(s)
S: 4e2951e704fedb761a93a16040f75a86dcb1e1e9 192.168.102.129:6385
   slots: (0 slots) slave
   replicates b45605a7c132a753ff531196e9a4a24db61d081a
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.


# Remove node7 (by its cluster node ID)
root@localhost:/data# redis-cli --cluster del-node 192.168.102.129:6387 176f51a882978f64579de86c2eabfbb46423cbb3

# Check the cluster status again
[root@localhost ~]# docker exec -it redis-node1 bash
root@localhost:/data# redis-cli --cluster check 192.168.102.129:6381
Could not connect to Redis at 192.168.102.129:6387: Connection refused
192.168.102.129:6381 (40a6e217...) -> 2 keys | 16382 slots | 1 slaves.
192.168.102.129:6382 (b45605a7...) -> 0 keys | 1 slots | 1 slaves.
192.168.102.129:6383 (099bbd4c...) -> 0 keys | 1 slots | 1 slaves.
[OK] 2 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.102.129:6381)
M: 40a6e217e2d22abc3700853cc2de467809710735 192.168.102.129:6381
   slots:[0-10921],[10923-16382] (16382 slots) master
   1 additional replica(s)
S: b38266249f924467bc7e91cf42b6b5d5d2e2474c 192.168.102.129:6386
   slots: (0 slots) slave
   replicates 099bbd4c9e873369d0449ac7f1321c4e640f75a0
S: 4e2951e704fedb761a93a16040f75a86dcb1e1e9 192.168.102.129:6385
   slots: (0 slots) slave
   replicates b45605a7c132a753ff531196e9a4a24db61d081a
M: b45605a7c132a753ff531196e9a4a24db61d081a 192.168.102.129:6382
   slots:[10922] (1 slots) master
   1 additional replica(s)
M: 099bbd4c9e873369d0449ac7f1321c4e640f75a0 192.168.102.129:6383
   slots:[16383] (1 slots) master
   1 additional replica(s)
S: d9b436771ac2715f95b98a706b529313f56053e4 192.168.102.129:6384
   slots: (0 slots) slave
   replicates 40a6e217e2d22abc3700853cc2de467809710735
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

Reference video:

【尚硅谷】全新2022版Docker与微服务实战教程(从入门到进阶)_哔哩哔哩_bilibili: https://www.bilibili.com/video/BV1gr4y1U7CY?p=42
