- ENV
- 1 Standalone
- 1.1 Create the container
- 1.2 Enter the container
- 1.2.1 Create users (not recommended for production)
- 2 Create a cluster
- 2.1 Create config files and set permissions for the volume mappings
- 2.1.1 Directory structure
- 2.2 Create containers as cluster nodes
- 2.2.1 Enter one node from the host
- 2.3 Create the cluster
- 2.4 Connect with Jedis, adding any one node
- 2.5 Shut down master nodes
Docker version 19.03.5, build 633a0ea838
Redis 6.2
Create a container and set up a volume mapping:
docker run --publish 6379:6379 --publish 16379:16379 --name redis --volume $(pwd)/data:/data redis:6.2
--publish 6379:6379 --publish 16379:16379: map the Redis port and the cluster bus port
--name redis: container name
--volume $(pwd)/data:/data: volume mapping
redis:6.2: the image to use
Enter the container and create files in the mapped volume:
[hostuser@host-machine]$ docker exec -it redis bash
root@eddf2c42677b:/data# ls -l
total 0
root@eddf2c42677b:/data# echo "root in container" > r.txt
root@eddf2c42677b:/data# su redis
$ whoami
redis
$ id redis
uid=999(redis) gid=999(redis) groups=999(redis)
$ id root
uid=0(root) gid=0(root) groups=0(root)
$ echo "user in container" > u.txt
$ ls -l
total 8
-rw-r--r--. 1 root  root  18 May  4 04:51 r.txt
-rw-r--r--. 1 redis redis 18 May  4 04:51 u.txt
$ pwd
/data
The redis:6.2 image ships with a user named redis. As shown above, root and redis have UIDs 0 and 999 inside the container.
Viewed from the mapped volume on the host:
drwxr-xr-x. 2 100999 100000 4096 May  4 12:51 data

ls -l data
total 8
-rw-r--r--. 1 100000 100000 18 May  4 12:51 r.txt
-rw-r--r--. 1 100999 100999 18 May  4 12:51 u.txt
The mapped directory data was also created from inside the container; its owner 100999:100000 corresponds to 999:0 in the container, i.e. redis:root. So host UID 100000 maps to the container's root user, and 100999 maps to the container's redis user.
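This offset pattern is what Docker's user-namespace remapping produces: the host UID is simply the start of the subordinate UID range plus the container UID. A minimal sketch of that arithmetic (the range start 100000 is inferred from the listings above, not read from any particular /etc/subuid):

```python
def host_uid(subuid_start: int, container_uid: int) -> int:
    """Map a container UID to the corresponding host UID under userns-remap."""
    return subuid_start + container_uid

# Container root (0) and redis (999) with a subordinate range starting at
# 100000, matching the owners seen on the host volume above.
print(host_uid(100000, 0))    # -> 100000, the container's root
print(host_uid(100000, 999))  # -> 100999, the container's redis user
```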
1.2.1 Create users (not recommended for production)

Create users on the host with the IDs found above:
[hostuser@host-machine]$ sudo groupadd -g 100000 redis-docker-root
[hostuser@host-machine]$ sudo groupadd -g 100999 redis-docker-user
[hostuser@host-machine]$ sudo useradd -u 100000 -g redis-docker-root redis-docker-root
[hostuser@host-machine]$ sudo useradd -u 100999 -g redis-docker-user redis-docker-user
With these users in place, the mapped volume now shows from the host:
drwxrwxr-x. 2 redis-docker-user redis-docker-root 4096 May  4 14:39 data

ls -l data
total 8
-rw-r--r--. 1 redis-docker-root redis-docker-root 18 May  4 14:33 r.txt
-rw-r--r--. 1 redis-docker-user redis-docker-user 27 May  4 14:35 u.txt

2 Create a cluster

2.1 Create config files and set permissions for the volume mappings
basepath="/home/data/cluster"
mkdir -p ${basepath}/redis
cd ${basepath}/redis
docker network create redis-net --subnet 172.28.0.0/16
for port in $(seq 1 6);
do
mkdir -p ./node-${port}/conf
mkdir -p ./node-${port}/data
touch ./node-${port}/conf/redis.conf
cat << EOF > ./node-${port}/conf/redis.conf
# port
port 6379
# disable protected mode
protected-mode no
# enable cluster mode
cluster-enabled yes
cluster-config-file nodes.conf
# node timeout (ms)
cluster-node-timeout 15000
# announce IP of this node; change to your own address
cluster-announce-ip 172.28.0.1${port}
# announced client port
cluster-announce-port 6379
# announced cluster bus port
cluster-announce-bus-port 16379
# enable AOF persistence
appendonly yes
# run in background (keep disabled under Docker)
#daemonize yes
# pid file
pidfile /var/run/redis_6379.pid
# listen on all interfaces for external access
bind 0.0.0.0
# cluster auth (optional)
#masterauth itheima
#requirepass itheima
EOF
  sudo chown -R redis-docker-root:redis-docker-root ${basepath}/redis/node-${port}
  sudo chown -R redis-docker-user:redis-docker-root ${basepath}/redis/node-${port}/data
done
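The only per-node difference in the generated files is the announce IP (and the host-side paths). The substitution the loop performs can be mirrored outside the shell as a quick sanity check; the template below is a trimmed illustration of the heredoc above, not the full redis.conf:

```python
# Trimmed per-node fragment of the redis.conf heredoc from the loop above.
TEMPLATE = """\
port 6379
cluster-enabled yes
cluster-announce-ip 172.28.0.1{port}
cluster-announce-port 6379
cluster-announce-bus-port 16379
"""

def render_conf(port: int) -> str:
    """Render the per-node fragment of redis.conf for node-<port>."""
    return TEMPLATE.format(port=port)

# node-1 announces 172.28.0.11, node-2 announces 172.28.0.12, ...
for port in range(1, 7):
    assert f"cluster-announce-ip 172.28.0.1{port}" in render_conf(port)
print(render_conf(1))
```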
2.1.1 Directory structure
[hostuser@host-machine]$ ls -l .
drwxrwxr-x. 4 redis-docker-root redis-docker-root 4096 May  8 00:43 node-1
drwxrwxr-x. 4 redis-docker-root redis-docker-root 4096 May  8 00:43 node-2
drwxrwxr-x. 4 redis-docker-root redis-docker-root 4096 May  8 00:43 node-3
drwxrwxr-x. 4 redis-docker-root redis-docker-root 4096 May  8 00:43 node-4
drwxrwxr-x. 4 redis-docker-root redis-docker-root 4096 May  8 00:43 node-5
drwxrwxr-x. 4 redis-docker-root redis-docker-root 4096 May  8 00:43 node-6
[hostuser@host-machine]$ ls -l node-1/
total 8
drwxrwxr-x. 2 redis-docker-root redis-docker-root 4096 May  8 00:43 conf
drwxrwxr-x. 2 redis-docker-user redis-docker-root 4096 May  8 00:54 data
[hostuser@host-machine]$ ls -l node-1/conf
total 4
-rw-rw-r--. 1 redis-docker-root redis-docker-root 568 May  8 00:43 redis.conf
.
├── node-1
│ ├── conf
│ │ └── redis.conf
│ └── data
├── node-2
│ ├── conf
│ │ └── redis.conf
│ └── data
├── node-3
│ ├── conf
│ │ └── redis.conf
│ └── data
├── node-4
│ ├── conf
│ │ └── redis.conf
│ └── data
├── node-5
│ ├── conf
│ │ └── redis.conf
│ └── data
└── node-6
├── conf
│ └── redis.conf
└── data
2.2 Create containers as cluster nodes
for port in $(seq 1 6);
do
  docker run --publish 637${port}:6379 --publish 1637${port}:16379 --name redis-node-${port} \
    --volume ${basepath}/redis/node-${port}/data:/data \
    --volume ${basepath}/redis/node-${port}/conf/redis.conf:/etc/redis/redis.conf \
    --detach --net redis-net --ip 172.28.0.1${port} redis:6.2 redis-server /etc/redis/redis.conf
done
docker ps -a | grep redis-node
# find each container's IP under redis-net in NetworkSettings.Networks
for port in $(seq 1 6);
do
  ip=$(docker inspect --format '{{ (index .NetworkSettings.Networks "redis-net").IPAddress }}' "redis-node-${port}")
  echo -n "${ip}:637${port} "
done
You can also skip creating the data directories and let the container user create them automatically. If you do create them by hand, you must also change their owner as above, otherwise the container user has no permission to write to them.
2.2.1 Enter one node from the host

[hostuser@host-machine]$ docker exec --interactive --tty redis-node-1 bash
root@63513cddea37:/data# ls
appendonly.aof  nodes.conf
root@63513cddea37:/data#
The host directory now looks like:
[hostuser@host-machine]$ ls -l node-1/data/
total 8
-rw-r--r--. 1 redis-docker-user redis-docker-user   0 May  8 00:44 appendonly.aof
-rw-r--r--. 1 redis-docker-user redis-docker-user 175 May  8 00:54 dump.rdb
-rw-r--r--. 1 redis-docker-user redis-docker-user 793 May  8 00:54 nodes.conf
[hostuser@host-machine]$ ls -l node-1/conf/
total 4
-rw-rw-r--. 1 redis-docker-root redis-docker-root 568 May  8 00:43 redis.conf
.
├── node-1
│ ├── conf
│ │ └── redis.conf
│ └── data
│ ├── appendonly.aof
│ ├── dump.rdb
│ └── nodes.conf
├── node-2
│ ├── conf
│ │ └── redis.conf
│ └── data
│ ├── appendonly.aof
│ ├── dump.rdb
│ └── nodes.conf
├── node-3
│ ├── conf
│ │ └── redis.conf
│ └── data
│ ├── appendonly.aof
│ ├── dump.rdb
│ └── nodes.conf
├── node-4
│ ├── conf
│ │ └── redis.conf
│ └── data
│ ├── appendonly.aof
│ ├── dump.rdb
│ └── nodes.conf
├── node-5
│ ├── conf
│ │ └── redis.conf
│ └── data
│ ├── appendonly.aof
│ ├── dump.rdb
│ └── nodes.conf
└── node-6
├── conf
│ └── redis.conf
└── data
├── appendonly.aof
├── dump.rdb
└── nodes.conf
2.3 Create the cluster
On that node, use redis-cli to create the cluster, letting it assign the six nodes as three masters and three replicas:
root@63513cddea37:/data# redis-cli --cluster create 172.28.0.11:6379 172.28.0.12:6379 172.28.0.13:6379 172.28.0.14:6379 172.28.0.15:6379 172.28.0.16:6379 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 172.28.0.15:6379 to 172.28.0.11:6379
Adding replica 172.28.0.16:6379 to 172.28.0.12:6379
Adding replica 172.28.0.14:6379 to 172.28.0.13:6379
M: ce99907711e032bb13c9c26b9c31369808a80ad3 172.28.0.11:6379
   slots:[0-5460] (5461 slots) master
M: 1d8911fb5b06e4888fac75c6ab7a5af917cea172 172.28.0.12:6379
   slots:[5461-10922] (5462 slots) master
M: b2e81ba1c006f31751b34ecda4f0942fe3e7a679 172.28.0.13:6379
   slots:[10923-16383] (5461 slots) master
S: 731b8b2792d81519f8cb7b8cdb4fb746404ecf75 172.28.0.14:6379
   replicates b2e81ba1c006f31751b34ecda4f0942fe3e7a679
S: a85a113440e81aa47690a95f79f2f27317571ff5 172.28.0.15:6379
   replicates ce99907711e032bb13c9c26b9c31369808a80ad3
S: bd98d6d6684fed834d6cf9a57d50a1a39ad74493 172.28.0.16:6379
   replicates 1d8911fb5b06e4888fac75c6ab7a5af917cea172
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.
>>> Performing Cluster Check (using node 172.28.0.11:6379)
M: ce99907711e032bb13c9c26b9c31369808a80ad3 172.28.0.11:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 731b8b2792d81519f8cb7b8cdb4fb746404ecf75 172.28.0.14:6379
   slots: (0 slots) slave
   replicates b2e81ba1c006f31751b34ecda4f0942fe3e7a679
M: 1d8911fb5b06e4888fac75c6ab7a5af917cea172 172.28.0.12:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
M: b2e81ba1c006f31751b34ecda4f0942fe3e7a679 172.28.0.13:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: a85a113440e81aa47690a95f79f2f27317571ff5 172.28.0.15:6379
   slots: (0 slots) slave
   replicates ce99907711e032bb13c9c26b9c31369808a80ad3
S: bd98d6d6684fed834d6cf9a57d50a1a39ad74493 172.28.0.16:6379
   slots: (0 slots) slave
   replicates 1d8911fb5b06e4888fac75c6ab7a5af917cea172
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
root@63513cddea37:/data#
The output shows each node's role: M marks a master node, S a slave (replica).
2.4 Connect with Jedis, adding any one node

import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;
import java.util.HashSet;
import java.util.Set;
public class Main {
    public static void main(String[] args) {
        Set<HostAndPort> jedisClusterNodes = new HashSet<>();
        // Any single node is enough; the client discovers the rest of the cluster.
        jedisClusterNodes.add(new HostAndPort("localhost", 6373));
        JedisCluster jedis = new JedisCluster(jedisClusterNodes);
        System.out.println(jedis.getClusterNodes().keySet());
        jedis.set("planets", "Mars");
        System.out.println(jedis.get("planets"));
    }
}
To read the value written by Jedis from the command line, run redis-cli -c inside any cluster-node container to join the cluster:
[hostuser@host-machine]$ docker exec --interactive --tty redis-node-6 bash
root@e147a1e822a1:/data# redis-cli -c
127.0.0.1:6379> get planets
-> Redirected to slot [8813] located at 172.28.0.12:6379
"Mars"
172.28.0.12:6379>
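The slot number in the redirection is not random: Redis Cluster hashes the key with CRC16 (the XMODEM variant, per the cluster specification) and takes the result modulo 16384. A stand-alone sketch that reproduces the slot 8813 seen above for the key planets:

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XMODEM): poly 0x1021, init 0, no reflection, no xor-out."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def key_slot(key: bytes) -> int:
    """Redis Cluster slot for a key (hash tags in {...} ignored here for brevity)."""
    return crc16(key) % 16384

print(hex(crc16(b"123456789")))  # -> 0x31c3, the standard XMODEM check value
print(key_slot(b"planets"))      # -> 8813, matching the redirection above
```

Any node can compute this, which is why a replica that does not own slot 8813 can immediately redirect the client to 172.28.0.12.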
[hostuser@host-machine]$ docker exec --interactive --tty redis-node-6 bash
root@e147a1e822a1:/data# redis-cli -h 172.28.0.14 -c
172.28.0.14:6379> get planets
-> Redirected to slot [8813] located at 172.28.0.12:6379
"Mars"
172.28.0.12:6379>

2.5 Shut down master nodes
Replication pairs before the failover: 11 (master) - 15 (replica), 12 (master) - 16 (replica), 13 (master) - 14 (replica).
[codc@192 tmp4]$ docker exec -it redis-node-6 bash
root@e147a1e822a1:/data# redis-cli -c
127.0.0.1:6379> cluster nodes
a85a113440e81aa47690a95f79f2f27317571ff5 172.28.0.15:6379@16379 slave ce99907711e032bb13c9c26b9c31369808a80ad3 0 1651999725081 1 connected
b2e81ba1c006f31751b34ecda4f0942fe3e7a679 172.28.0.13:6379@16379 master - 0 1651999723072 3 connected 10923-16383
bd98d6d6684fed834d6cf9a57d50a1a39ad74493 172.28.0.16:6379@16379 myself,slave 1d8911fb5b06e4888fac75c6ab7a5af917cea172 0 1651999720000 2 connected
ce99907711e032bb13c9c26b9c31369808a80ad3 172.28.0.11:6379@16379 master - 0 1651999723000 1 connected 0-5460
1d8911fb5b06e4888fac75c6ab7a5af917cea172 172.28.0.12:6379@16379 master - 0 1651999721063 2 connected 5461-10922
731b8b2792d81519f8cb7b8cdb4fb746404ecf75 172.28.0.14:6379@16379 slave b2e81ba1c006f31751b34ecda4f0942fe3e7a679 0 1651999724076 3 connected
127.0.0.1:6379> GET planets
-> Redirected to slot [8813] located at 172.28.0.12:6379
"Mars"
127.0.0.1:6379> quit
root@e147a1e822a1:/data# redis-cli -c
127.0.0.1:6379> cluster nodes
a85a113440e81aa47690a95f79f2f27317571ff5 172.28.0.15:6379@16379 slave ce99907711e032bb13c9c26b9c31369808a80ad3 0 1651999942053 1 connected
b2e81ba1c006f31751b34ecda4f0942fe3e7a679 172.28.0.13:6379@16379 master,fail - 1651999773287 1651999770000 3 connected
bd98d6d6684fed834d6cf9a57d50a1a39ad74493 172.28.0.16:6379@16379 myself,master - 0 1651999941000 8 connected 5461-10922
ce99907711e032bb13c9c26b9c31369808a80ad3 172.28.0.11:6379@16379 master - 0 1651999940045 1 connected 0-5460
1d8911fb5b06e4888fac75c6ab7a5af917cea172 172.28.0.12:6379@16379 master,fail - 1651999875800 1651999872000 2 connected
731b8b2792d81519f8cb7b8cdb4fb746404ecf75 172.28.0.14:6379@16379 master - 0 1651999940000 7 connected 10923-16383
127.0.0.1:6379> quit
root@e147a1e822a1:/data# exit
exit
[codc@192 tmp4]$ docker exec -it redis-node-6 bash
root@e147a1e822a1:/data# redis-cli -h 172.28.0.15 -c
172.28.0.15:6379> GET planets
-> Redirected to slot [8813] located at 172.28.0.16:6379
"Mars"
172.28.0.16:6379>
After shutting down nodes 13 and 12, node 14 took over from 13 as master, and node 16 took over from 12.
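The failover can be read mechanically from the CLUSTER NODES output: the third field holds the role flags (master, slave, fail, myself, ...) and the fourth field holds a replica's master ID, or - for a master. A small parsing sketch, fed two lines from the post-failover output above:

```python
def parse_cluster_nodes(text: str) -> dict:
    """Map each node address to its role flags and its master's ID (or None)."""
    nodes = {}
    for line in text.strip().splitlines():
        fields = line.split()
        # CLUSTER NODES format: id, ip:port@busport, flags, master-id, ...
        node_id, addr, flags, master_id = fields[0], fields[1], fields[2], fields[3]
        nodes[addr.split("@")[0]] = {
            "id": node_id,
            "flags": flags.split(","),
            "master": None if master_id == "-" else master_id,
        }
    return nodes

# Two lines from the post-failover CLUSTER NODES output above.
sample = """\
b2e81ba1c006f31751b34ecda4f0942fe3e7a679 172.28.0.13:6379@16379 master,fail - 1651999773287 1651999770000 3 connected
731b8b2792d81519f8cb7b8cdb4fb746404ecf75 172.28.0.14:6379@16379 master - 0 1651999940000 7 connected 10923-16383
"""
nodes = parse_cluster_nodes(sample)
print(nodes["172.28.0.13:6379"]["flags"])  # -> ['master', 'fail']: old master, down
print(nodes["172.28.0.14:6379"]["flags"])  # -> ['master']: promoted replica
```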
- "Formatted Docker output with docker inspect --format", study notes (CSDN)
- Jedis (GitHub)
- Docker container usage (runoob)
- docker-and-userns-remap-how-to-manage-volume-permissions-to-share-data-betwee (Stack Overflow)
- docker-user-namespacing-map-user-in-container-to-host (Stack Overflow)
- "Unable to deploy a docker container in 'privileged' mode when user namespace is enabled for docker daemon" #1904 (GitHub issue)
- Official docs: adding-a-new-node
- Deploying a three-master three-replica Redis cluster on Docker (CSDN)
- Building a Redis Cluster with Docker (Tencent Cloud)
- Building a RedisCluster with Redis 6.x.x (Tencent Cloud)
- Building a Redis cluster with Docker (three masters, three replicas), illustrated (CSDN)
- Installing Redis with Docker and building a cluster (CSDN)



