
How to fix: failed to set bridge addr: "cni0" already has an IP address different from 10.244.2.1/24

  Warning  FailedCreatePodSandBox  3m18s                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "1506a90c486e2c187e21e8fb4b6888e5d331235f48eebb5cf44121cc587a6f05" network for pod "ds-d58vg": networkPlugin cni failed to set up pod "ds-d58vg_kube-system" network: failed to set bridge addr: "cni0" already has an IP address different from 10.244.2.1/24
  Normal   SandboxChanged          3m1s (x12 over 4m13s)  kubelet            Pod sandbox changed, it will be killed and re-created.
  Warning  FailedCreatePodSandBox  2m59s (x4 over 3m14s)  kubelet            (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "a8dc84257ca6f4543c223735dd44e79c1d001724a54cd20ab33e3a7596fba5c9" network for pod "ds-d58vg": networkPlugin cni failed to set up pod "ds-d58vg_kube-system" network: failed to set bridge addr: "cni0" already has an IP address different from 10.244.2.1/24

When the pod starts, describing it keeps showing the error above. Checking the network interfaces on the node:

# ifconfig
cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.0.1  netmask 255.255.255.0  broadcast 10.244.0.255
        inet6 fe80::80bc:10ff:feb0:9d1b  prefixlen 64  scopeid 0x20<link>
        ether 82:bc:10:b0:9d:1b  txqueuelen 1000  (Ethernet)
        RX packets 1478990  bytes 119510314 (113.9 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1486862  bytes 136242849 (129.9 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

...

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.0.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::605e:12ff:feb8:7ce3  prefixlen 64  scopeid 0x20<link>
        ether 62:5e:12:b8:7c:e3  txqueuelen 0  (Ethernet)
        RX packets 55074  bytes 9896264 (9.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 57738  bytes 5642813 (5.3 MiB)
        TX errors 0  dropped 10 overruns 0  carrier 0  collisions 0

 

# cat /run/flannel/subnet.env 
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
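The error means that, on the affected node, the cni0 bridge kept an address from an earlier flannel allocation while flannel now expects that node's current subnet (here node2 expects 10.244.2.1/24). A minimal sketch of the comparison, written as a function that takes its inputs as arguments so the logic is testable off-cluster; on a live node you would feed it `FLANNEL_SUBNET` and the cni0 address as shown in the trailing comment:

```shell
# check_cni_subnet: report whether the cni0 address matches the subnet
# flannel allocated to this node. Pure string comparison, no cluster needed.
check_cni_subnet() {
  expected="$1"   # what flannel expects, e.g. FLANNEL_SUBNET from /run/flannel/subnet.env
  actual="$2"     # what the bridge actually has, e.g. the inet address on cni0
  if [ "$expected" = "$actual" ]; then
    echo "ok"
  else
    echo "mismatch: cni0 has $actual but flannel expects $expected"
  fi
}

# On a live node (assumption: flannel writes /run/flannel/subnet.env):
# check_cni_subnet "$(. /run/flannel/subnet.env; echo "$FLANNEL_SUBNET")" \
#                  "$(ip -4 addr show cni0 | awk '/inet /{print $2}')"
```

When the two values differ, the CNI plugin refuses to reuse the stale bridge, which is exactly the FailedCreatePodSandBox event above.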

If you simply delete the cni0 bridge:

# ifconfig cni0 down
# ip link delete cni0

After doing this, the error goes away and the pod runs normally, but it knocks out the DNS pods:

# kubectl  get po -o wide -n kube-system 
NAME                                        READY   STATUS             RESTARTS         AGE   IP             NODE     NOMINATED NODE   READINESS GATES
coredns-6d8c4cb4d-7lswb                     0/1     CrashLoopBackOff   9 (116s ago)     22h   10.244.0.3     master   <none>           <none>
coredns-6d8c4cb4d-84z48                     0/1     CrashLoopBackOff   9 (2m6s ago)     22h   10.244.0.2     master   <none>           <none>
ds-4cqxm                                    1/1     Running            0                33m   10.244.0.4     master   <none>           <none>
ds-d58vg                                    1/1     Running            0                33m   10.244.2.185   node2    <none>           <none>
ds-sjxwn                                    1/1     Running            0                33m   10.244.1.48    node1    <none>           <none>

Now inspect the coredns pod:

# kubectl  describe po coredns-6d8c4cb4d-84z48 -n kube-system 
Name:                 coredns-6d8c4cb4d-84z48
Namespace:            kube-system
Priority:             2000000000
......

Events:
  Type     Reason     Age                    From     Message
  ----     ------     ----                   ----     -------
  Warning  Unhealthy  28m (x5 over 29m)      kubelet  Liveness probe failed: Get "http://10.244.0.2:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
  Normal   Killing    28m                    kubelet  Container coredns failed liveness probe, will be restarted
  Normal   Pulled     28m (x2 over 22h)      kubelet  Container image "registry.aliyuncs.com/google_containers/coredns:v1.8.6" already present on machine
  Normal   Created    28m (x2 over 22h)      kubelet  Created container coredns
  Normal   Started    28m (x2 over 22h)      kubelet  Started container coredns
  Warning  BackOff    9m29s (x27 over 16m)   kubelet  Back-off restarting failed container
  Warning  Unhealthy  4m32s (x141 over 29m)  kubelet  Readiness probe failed: Get "http://10.244.0.2:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)

A different fix was needed: deleting the previously created pods did not help, and the DNS pods stayed unhealthy.

In the end, deleting the DNS pods so they were recreated automatically is what resolved the issue:

# kubectl  delete pod coredns-6d8c4cb4d-7lswb  -n kube-system
pod "coredns-6d8c4cb4d-7lswb" deleted
# kubectl  delete pod coredns-6d8c4cb4d-84z48   -n kube-system                       
pod "coredns-6d8c4cb4d-84z48" deleted
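Rather than deleting each coredns pod by name, the same thing can be done in one command via CoreDNS's label selector (assumption: the default `k8s-app=kube-dns` label used by kubeadm deployments; verify with `kubectl get pods -n kube-system --show-labels` if yours differs):

```shell
# Delete all CoreDNS pods at once; the Deployment recreates them immediately,
# and the new pods pick up addresses from the regenerated cni0 bridge.
# Assumption: CoreDNS pods carry the default label k8s-app=kube-dns.
kubectl -n kube-system delete pod -l k8s-app=kube-dns
```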

# kubectl get pod -n kube-system -o wide
NAME                                        READY   STATUS    RESTARTS         AGE     IP             NODE    NOMINATED NODE   READINESS GATES
coredns-6d8c4cb4d-8xghq                     1/1     Running   0                3m48s   10.244.2.186   node2   <none>           <none>
coredns-6d8c4cb4d-q65vq                     1/1     Running   0                3m48s   10.244.1.49    node1   <none>           <none>

For comparison: master, node1 and node2 all ran

# ifconfig cni0 down
# ip link delete cni0

but only on master was cni0 not regenerated; on the other two nodes it was recreated automatically. That is why coredns now runs on the worker nodes.

Why cni0 failed to regenerate on master has not yet been investigated.
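One plausible way to force cni0 to be recreated on master, rather than waiting: restart the flannel pod on that node, so the CNI plugin reprograms the bridge from /run/flannel/subnet.env when the next sandbox is created. This is only a sketch under the assumption that flannel runs as a DaemonSet labelled `app=flannel` in kube-system (newer manifests use a separate kube-flannel namespace); adjust the namespace and label to your deployment:

```shell
# Assumption: flannel DaemonSet pods carry the label app=flannel in kube-system.
kubectl -n kube-system delete pod -l app=flannel \
  --field-selector spec.nodeName=master

# Then verify the bridge came back with the address flannel expects:
ip -4 addr show cni0
```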

Source: https://www.mshxw.com/it/757826.html