The official documentation (https://kubernetes.io/zh/docs/concepts/services-networking/network-policies/) explains:
If you want to control network traffic at the IP address or port level (OSI layer 3 or 4), consider using a Kubernetes NetworkPolicy for particular applications in your cluster. NetworkPolicy is an application-centric construct that lets you specify how a Pod is allowed to communicate with various network "entities".
The entities that a Pod can communicate with are identified through a combination of the following three identifiers:
- Other Pods that are allowed (exception: a Pod cannot block access to itself)
- Namespaces that are allowed
- IP blocks (exception: traffic to and from the node where a Pod is running is always allowed, regardless of the IP address of the Pod or the node)
When defining a Pod- or namespace-based NetworkPolicy, you use a selector to specify what traffic is allowed to and from the Pod(s) matching that selector.
Meanwhile, when IP-based NetworkPolicies are created, the policies are defined in terms of IP blocks (CIDR ranges).
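As a minimal sketch of all three identifier types in a single ingress rule (the policy name and the team label are hypothetical; the role labels match the lab below):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-examples   # hypothetical name
spec:
  podSelector:
    matchLabels:
      role: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:            # other allowed Pods in this namespace
        matchLabels:
          role: frontend
    - namespaceSelector:      # allowed namespaces
        matchLabels:
          team: ops           # hypothetical label
    - ipBlock:                # allowed IP block (CIDR range)
        cidr: 192.168.21.0/24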
Prerequisites: network policies are implemented by the network plugin. To use network policies, you must be running a networking solution that supports NetworkPolicy. Creating a NetworkPolicy resource without a controller that implements it has no effect at all.
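Because an unimplemented NetworkPolicy fails silently, it is worth confirming the plugin is running first. A quick check for the Calico setup used in this lab (the k8s-app=calico-node label is the upstream default; adjust it if your install differs):
kubectl get pods -n kube-system -l k8s-app=calico-node -owide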
By default, Pods are non-isolated; they accept traffic from any source.
A Pod becomes isolated when a NetworkPolicy selects it. Once any NetworkPolicy in a namespace selects a particular Pod, that Pod rejects any connections that are not allowed by some NetworkPolicy. (Other Pods in the namespace that are not selected by any NetworkPolicy continue to accept all traffic.)
Network policies do not conflict; they are additive. If one or more policies select a Pod, the Pod is restricted to the union of those policies' ingress/egress rules. The order of evaluation therefore does not affect the policy result.
For traffic to flow between two Pods, the egress rule on the source Pod and the ingress rule on the destination Pod must both allow the traffic. If either the source's egress rule or the destination's ingress rule denies the traffic, the traffic is denied.
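A sketch of that pairing, using the role labels from the lab below (policy names are hypothetical): for frontend to reach backend on TCP 80 when both Pods are isolated, an egress policy on the source and an ingress policy on the destination are both required.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend-egress      # hypothetical name
spec:
  podSelector:
    matchLabels:
      role: frontend
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          role: backend
    ports:
    - protocol: TCP
      port: 80
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-ingress      # hypothetical name
spec:
  podSelector:
    matchLabels:
      role: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 80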
Lab requirement: deny all traffic within a specific namespace.
Lab steps
- Lab environment
root@master-01:~/tmp# kubectl get nodes -owide
NAME        STATUS   ROLES                  AGE    VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
master-01   Ready    control-plane,master   16h    v1.21.3   192.168.21.110   <none>        Ubuntu 16.04.6 LTS   4.4.0-142-generic   docker://20.10.7
worker-01   Ready    <none>                 158m   v1.21.3   192.168.21.111   <none>        Ubuntu 16.04.6 LTS   4.4.0-142-generic   docker://20.10.7
worker-02   Ready    <none>                 157m   v1.21.3   192.168.21.112   <none>        Ubuntu 16.04.6 LTS   4.4.0-142-generic   docker://20.10.7
- Pod and Service status
The following YAML creates the Pods and the Service in a namespace named testing:
apiVersion: v1
kind: Pod
metadata:
  labels:
    role: frontend
  name: frontend
spec:
  containers:
  - image: registry.cn-hangzhou.aliyuncs.com/tanzu/network-multitool:1.1
    name: frontend
  nodeName: worker-01
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    role: backend
  name: backend1
spec:
  containers:
  - image: registry.cn-hangzhou.aliyuncs.com/tanzu/network-multitool:1.1
    name: backend1
  nodeName: worker-01
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    role: backend
  name: backend2
spec:
  containers:
  - image: registry.cn-hangzhou.aliyuncs.com/tanzu/network-multitool:1.1
    name: backend2
  nodeName: worker-02
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: backendsvc
  name: backendsvc
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    role: backend
  type: ClusterIP
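How the manifests were applied is not shown above; a plausible sequence (the file name 1-pods-svc.yaml is an assumption, chosen to match the 3-policy.yaml naming used later):
kubectl create namespace testing
kubectl apply -f 1-pods-svc.yaml -n testing   # file name is an assumption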
root@master-01:~/tmp# kubectl get pod -n testing -owide
NAME       READY   STATUS    RESTARTS   AGE     IP            NODE        NOMINATED NODE   READINESS GATES
backend1   1/1     Running   0          5m34s   10.0.171.3    worker-01   <none>           <none>
backend2   1/1     Running   0          5m34s   10.0.37.197   worker-02   <none>           <none>
frontend   1/1     Running   0          5m34s   10.0.171.4    worker-01   <none>           <none>
The cluster uses the Calico CNI plugin (its topology diagram is omitted here). Calico defaults to IPIP mode; take node worker-02 as an example:
root@worker-02:~# ifconfig
calib6ff442cef6 link encap:Ethernet  HWaddr ee:ee:ee:ee:ee:ee
          inet6 addr: fe80::ecee:eeff:feee:eeee/64 Scope:link
          UP BROADCAST RUNNING MULTICAST  MTU:1480  Metric:1
          RX packets:15 errors:0 dropped:0 overruns:0 frame:0
          TX packets:24 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1596 (1.5 KB)  TX bytes:1848 (1.8 KB)

docker0   link encap:Ethernet  HWaddr 02:42:8f:fa:10:90
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

ens33     link encap:Ethernet  HWaddr 00:0c:29:04:1c:d2
          inet addr:192.168.21.112  Bcast:192.168.21.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe04:1cd2/64 Scope:link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:182399 errors:0 dropped:0 overruns:0 frame:0
          TX packets:33439 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:249508992 (249.5 MB)  TX bytes:3521200 (3.5 MB)

lo        link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:22069 errors:0 dropped:0 overruns:0 frame:0
          TX packets:22069 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:1555534 (1.5 MB)  TX bytes:1555534 (1.5 MB)

tunl0     link encap:IPIP Tunnel  HWaddr
          inet addr:10.0.37.192  Mask:255.255.255.255
          UP RUNNING NOARP  MTU:1480  Metric:1
          RX packets:37 errors:0 dropped:0 overruns:0 frame:0
          TX packets:27 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:2398 (2.3 KB)  TX bytes:3003 (3.0 KB)
root@worker-02:~# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.21.2 0.0.0.0 UG 0 0 0 ens33
10.0.37.192 0.0.0.0 255.255.255.192 U 0 0 0 *
10.0.37.197 0.0.0.0 255.255.255.255 UH 0 0 0 calib6ff442cef6
10.0.171.0 192.168.21.111 255.255.255.192 UG 0 0 0 tunl0
10.0.184.64 192.168.21.110 255.255.255.192 UG 0 0 0 tunl0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.21.0 0.0.0.0 255.255.255.0 U 0 0 0 ens33
root@worker-02:~# ip route
default via 192.168.21.2 dev ens33 onlink
blackhole 10.0.37.192/26 proto bird
10.0.37.197 dev calib6ff442cef6 scope link
10.0.171.0/26 via 192.168.21.111 dev tunl0 proto bird onlink
10.0.184.64/26 via 192.168.21.110 dev tunl0 proto bird onlink
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
192.168.21.0/24 dev ens33 proto kernel scope link src 192.168.21.112
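The routes confirm IPIP mode: the local Pod (10.0.37.197) is reached via its cali* veth, while the /26 Pod CIDRs of the other two nodes are routed over tunl0, and the blackhole entry covers this node's own pool block. To inspect the pool's encapsulation setting directly (assuming calicoctl is installed and the pool uses the default name default-ipv4-ippool; both are assumptions about this install):
calicoctl get ippool default-ipv4-ippool -o yaml
# the returned spec should include an ipipMode field, e.g. "ipipMode: Always"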
- Default Pod and Service connectivity
root@master-01:~/tmp# kubectl exec -it -n testing frontend -- sh
/ # curl backendsvc
Praqma Network MultiTool (with NGINX) - backend1 - 10.0.171.3
/ # curl backendsvc
Praqma Network MultiTool (with NGINX) - backend1 - 10.0.171.3
/ # curl backendsvc
Praqma Network MultiTool (with NGINX) - backend2 - 10.0.37.197
/ # curl backendsvc
Praqma Network MultiTool (with NGINX) - backend2 - 10.0.37.197
/ # curl backendsvc
Praqma Network MultiTool (with NGINX) - backend1 - 10.0.171.3
/ # ping 10.0.171.3
PING 10.0.171.3 (10.0.171.3) 56(84) bytes of data.
64 bytes from 10.0.171.3: icmp_seq=1 ttl=63 time=0.065 ms
64 bytes from 10.0.171.3: icmp_seq=2 ttl=63 time=0.094 ms
^C
--- 10.0.171.3 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.065/0.079/0.094/0.014 ms
/ # ping 10.0.37.197
PING 10.0.37.197 (10.0.37.197) 56(84) bytes of data.
64 bytes from 10.0.37.197: icmp_seq=1 ttl=62 time=1.01 ms
64 bytes from 10.0.37.197: icmp_seq=2 ttl=62 time=0.719 ms
^C
--- 10.0.37.197 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 0.719/0.865/1.011/0.146 ms
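The curl output alternates between backend1 and backend2 because backendsvc load-balances across both backend Pods. This can be confirmed by listing the Service's endpoints:
kubectl get endpoints backendsvc -n testing
# should list 10.0.171.3:80 and 10.0.37.197:80, the backend Pod IPs above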
- Applying a NetworkPolicy
The empty podSelector below selects every Pod in the namespace, and listing both Ingress and Egress under policyTypes with no allow rules denies all inbound and outbound traffic:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: denypolicy
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
[root@master-01 tmp]# kubectl apply -f 3-policy.yaml -n testing
networkpolicy.networking.k8s.io/denypolicy created
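Optionally, confirm how the policy was interpreted before testing (the empty podSelector should be reported as matching all Pods in the namespace):
kubectl get networkpolicy -n testing
kubectl describe networkpolicy denypolicy -n testing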
- Verify the result
root@master-01:~/tmp# kubectl exec -it -n testing frontend -- sh
/ # ping 10.0.171.3
PING 10.0.171.3 (10.0.171.3) 56(84) bytes of data.
^C
--- 10.0.171.3 ping statistics ---
22 packets transmitted, 0 received, 100% packet loss, time 21142ms
/ # ping 10.0.37.197
PING 10.0.37.197 (10.0.37.197) 56(84) bytes of data.
^C
--- 10.0.37.197 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2015ms
/ # curl backendsvc
curl: (6) Could not resolve host: backendsvc
The NetworkPolicy is in effect: ping to both backend Pods now loses 100% of packets, and curl fails at the name-resolution step because the deny-all egress rule also blocks DNS queries to the cluster DNS in kube-system.
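Because policies are additive, selective traffic can be restored by adding allow policies alongside denypolicy. A sketch that re-allows DNS egress only (the allow-dns name is hypothetical; the kubernetes.io/metadata.name namespace label is set automatically from Kubernetes v1.21 on, matching this cluster's version):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns            # hypothetical name
spec:
  podSelector: {}            # still applies to every Pod in the namespace
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
With this applied in testing, the name-resolution failure above would go away while all other traffic stays blocked.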



