- KIND introduction and environment
- Installing and using the CNI bridge plugin
- Verification
KIND (Kubernetes in Docker) is a tool for running a Kubernetes environment locally inside Docker containers; it can bring up a complete Kubernetes cluster on a single host.
By default, KIND uses kindnetd as its network plugin:
KIND ships with a simple networking implementation (“kindnetd”) based around standard CNI plugins (ptp, host-local, …) and simple netlink routes.
You can choose not to use this default network plugin. Below is the configuration file used to start kind; disableDefaultCNI: true disables the default kindnetd plugin.
```yaml
# kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  # the default CNI will not be installed
  disableDefaultCNI: true
# patch the generated kubeadm config with some extra settings
kubeadmConfigPatches:
- |
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  evictionHard:
    nodefs.available: "0%"
# patch it further using a JSON 6902 patch
kubeadmConfigPatchesJSON6902:
- group: kubeadm.k8s.io
  version: v1beta2
  kind: ClusterConfiguration
  patch: |
    - op: add
      path: /apiServer/certSANs/-
      value: my-hostname
# 1 control plane node and 1 worker
nodes:
# the control plane node config
- role: control-plane
# the worker
- role: worker
```
```bash
# Start kind with a 2-node cluster: 1 control-plane (master) node and 1 worker
sudo ./kind-linux-arm64 create cluster --name=cluster1 --config kind-config.yaml
```
Under the hood, kind starts two kindest/node containers on the host, which act as the two Kubernetes nodes:
```
CONTAINER ID   IMAGE                  COMMAND                  CREATED        STATUS        PORTS                       NAMES
6a8dea42d50d   kindest/node:v1.21.1   "/usr/local/bin/entr…"   20 hours ago   Up 20 hours   127.0.0.1:45369->6443/tcp   cluster1-control-plane
63bbfef54d1a   kindest/node:v1.21.1   "/usr/local/bin/entr…"   20 hours ago   Up 20 hours                               cluster1-worker
```
Because the network plugin is disabled, the nodes show a NotReady status after the cluster starts, and since no pod network addresses can be allocated, coredns does not come up either.
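As a quick check (not part of the original walkthrough), this state can be inspected from the host with standard kubectl commands:

```bash
# Check node and system pod status; with no CNI installed the nodes stay
# NotReady and the coredns pods remain Pending
sudo kubectl get nodes
sudo kubectl get pods -n kube-system -o wide
```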
The CNI project's official description of the bridge plugin: official documentation.
The bridge plugin is a typical basic CNI plugin. It works much like a physical switch: it creates a virtual bridge and attaches all containers to it, forming a single layer-2 network over which the containers communicate.
Download the plugins from containernetworking/plugins.
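The download command itself is not shown; assuming the v1.0.1 linux-arm64 release from the containernetworking/plugins GitHub releases page, it would be something like:

```bash
# Download the CNI plugins release used below (URL assumed from the
# containernetworking/plugins GitHub releases page)
wget https://github.com/containernetworking/plugins/releases/download/v1.0.1/cni-plugins-linux-arm64-v1.0.1.tgz
```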
```bash
# Copy the tarball into the master node container
sudo docker cp ./cni-plugins-linux-arm64-v1.0.1.tgz 6a8dea42d50d:/opt/cni/bin
# Enter the master container
sudo docker exec -ti 6a8dea42d50d /bin/bash
cd /opt/cni/bin
# Remove the plugins shipped with the node image, keeping the tarball we just copied in
find /opt/cni/bin -maxdepth 1 -type f ! -name '*.tgz' -delete
# Extract
tar -xvzf cni-plugins-linux-arm64-v1.0.1.tgz
```
Create the CNI network configuration files under /etc/cni/net.d on the master node: 10-mynet.conf with "type": "bridge" for the pod network, and 99-loopback.conf for the loopback interface.
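The exact contents of these two files are not shown here. A minimal sketch, assuming the standard bridge + host-local configuration (cniVersion, bridge name, and the isGateway/ipMasq flags are assumptions) and the 10.22.0.0/16 subnet inferred from the master-node pod addresses later in the post, might look like this:

```bash
# Pod network: bridge plugin with host-local IPAM (illustrative values)
cat >/etc/cni/net.d/10-mynet.conf <<EOF
{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16",
    "routes": [
      { "dst": "0.0.0.0/0" }
    ]
  }
}
EOF

# Loopback network required for every pod sandbox
cat >/etc/cni/net.d/99-loopback.conf <<EOF
{
  "cniVersion": "0.3.1",
  "name": "lo",
  "type": "loopback"
}
EOF
```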
What the bridge plugin does when a container is created:
1) Check, by name, whether the bridge exists; if not, create the virtual bridge.
2) Create a veth pair and attach the host-side veth interface to the bridge.
3) Have the IPAM plugin allocate an IP for the container from the address pool and compute the corresponding gateway, which is configured on the bridge.
4) Enter the container's network namespace, set the IP on the container-side interface, and configure its routes.
5) Use iptables to add a masquerade rule for traffic going from the container subnet to external networks.
6) Gather the current bridge information and return it to the caller.
A rough command-level equivalent of these steps is sketched below. Once the master node is configured, its status automatically changes to Ready.
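As an illustration only (this is not how the plugin is actually invoked, and the interface names, addresses and netns name are hypothetical), steps 1) to 5) correspond to commands along these lines:

```bash
ip netns add ctr1                                         # stand-in for the container's netns
ip link add name cni0 type bridge                         # 1) create the bridge if it does not exist
ip link set cni0 up
ip link add veth_host type veth peer name veth_ctr        # 2) create the veth pair
ip link set veth_host up
ip link set veth_host master cni0                         #    attach the host end to the bridge
ip link set veth_ctr netns ctr1                           #    move the container end into the netns
ip addr add 10.22.0.1/16 dev cni0                         # 3) gateway address on the bridge
ip netns exec ctr1 ip link set veth_ctr up
ip netns exec ctr1 ip addr add 10.22.0.5/16 dev veth_ctr  # 4) container IP
ip netns exec ctr1 ip route add default via 10.22.0.1     #    default route inside the container
iptables -t nat -A POSTROUTING -s 10.22.0.0/16 \
  ! -d 10.22.0.0/16 -j MASQUERADE                         # 5) masquerade traffic leaving the pod subnet
```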
Verification
Set up the worker node in the same way, but note that "subnet" in /etc/cni/net.d/10-mynet.conf must be different from the one used on the master node. Once both nodes are configured, the cluster nodes all report Ready and the pods reach the Running state.
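The worker's configuration file is likewise not shown; under the same assumptions as the sketch above, with the 10.23.0.0/16 subnet inferred from the pod IP below, it might look like this:

```bash
# Worker node pod network: same bridge config, different subnet (illustrative)
cat >/etc/cni/net.d/10-mynet.conf <<EOF
{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.23.0.0/16",
    "routes": [
      { "dst": "0.0.0.0/0" }
    ]
  }
}
EOF
```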
Deploy a test Deployment:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    test: app
  name: busybox
spec:
  replicas: 1
  selector:
    matchLabels:
      test: app
  template:
    metadata:
      labels:
        test: app
    spec:
      containers:
      - name: app
        image: busybox
        args:
        - /bin/sh
        - -c
        - sleep 10; touch /tmp/healthy; sleep 30000
        readinessProbe:
          exec:
            command:
            - cat
            - /tmp/healthy
```
The pod busybox-858dd8c7c-6nrqb is scheduled onto the worker node and assigned the address 10.23.0.2.
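The apply step is implied; it would be roughly as follows (busybox-deploy.yaml is a hypothetical file name for the manifest above):

```bash
sudo kubectl apply -f busybox-deploy.yaml
sudo kubectl get pods -A -o wide
```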
```
NAMESPACE   NAME                      READY   STATUS    RESTARTS   AGE   IP          NODE
default     busybox-858dd8c7c-6nrqb   1/1     Running   0          43s   10.23.0.2   cluster1-worker
```
Exec into busybox-858dd8c7c-6nrqb (10.23.0.2) and ping coredns-558bd4d5db-hrt6b (10.22.0.4) on the master node; the ping succeeds:
```bash
sudo kubectl exec -it busybox-858dd8c7c-6nrqb /bin/sh
/ # ping 10.22.0.4
PING 10.22.0.4 (10.22.0.4): 56 data bytes
64 bytes from 10.22.0.4: seq=0 ttl=56 time=6.476 ms
64 bytes from 10.22.0.4: seq=1 ttl=56 time=6.746 ms
64 bytes from 10.22.0.4: seq=2 ttl=56 time=19.118 ms
```
Finally, the state of all pods in the cluster.
References:
CNI插件之bridge plugin



