Reference: 27. Kubernetes (k8s) Notes: Ingress (II) Envoy - SegmentFault
1. Environment
K8s nodes:
master-node(192.168.2.51)
slave-node(192.168.2.52)
2. Deployment
2.1 Deploy Contour and Envoy
kubectl apply -f https://projectcontour.io/quickstart/contour.yaml
Two envoy pods are created; describe one of them:
kubectl describe pod -n projectcontour envoy-g4gk9
Each envoy pod runs two containers, shutdown-manager and envoy:
2.2 Deploy two versions of the demo application
To exercise Envoy's routing, deploy two business pods running different versions; traffic is then steered to one of them according to the routing rules.
2.2.1 Create the Deployments
kubectl create deployment demoappv11 --image='ikubernetes/demoapp:v1.1' -n dev
kubectl create deployment demoappv12 --image='ikubernetes/demoapp:v1.2' -n dev
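The same Deployments can also be written declaratively. A sketch for v1.1 follows (the v1.2 manifest differs only in the name and image tag); note that `kubectl create deployment` labels the pods `app: <name>`, which the selector below mirrors:

```yaml
# Declarative equivalent (sketch) of the imperative command above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demoappv11
  namespace: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demoappv11          # matches the label kubectl create deployment sets
  template:
    metadata:
      labels:
        app: demoappv11
    spec:
      containers:
      - name: demoapp
        image: ikubernetes/demoapp:v1.1
```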
2.2.2 Create the corresponding Services
kubectl create service clusterip demoappv11 --tcp=80 -n dev
kubectl create service clusterip demoappv12 --tcp=80 -n dev
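Declaratively, each of these is a plain ClusterIP Service whose selector matches the `app: <name>` label set by `kubectl create deployment`. A sketch for demoappv11 (demoappv12 is analogous):

```yaml
# Declarative equivalent (sketch) of the imperative command above.
apiVersion: v1
kind: Service
metadata:
  name: demoappv11
  namespace: dev
spec:
  type: ClusterIP
  selector:
    app: demoappv11            # selects the demoappv11 pods
  ports:
  - port: 80
    targetPort: 80
```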
kubectl get svc -n dev
Check each Service's ClusterIP and the IPs of the corresponding pods:
2.2.3 Pod placement:
master-node and worker-node each run one envoy pod;
both demoapp pods run on worker-node.
2.2.4 Access test:
From the master, curl each demoapp Service IP:
2.3 Deploy an HTTPProxy in the dev namespace
sudo vim httpproxy-headers-routing.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: httpproxy-headers-routing
  namespace: dev
spec:
  virtualhost:
    fqdn: www.ilinux.io
  routes:                      # routing rules
  - conditions:
    - header:
        name: X-Canary         # the X-Canary request header is present
        present: true
    - header:
        name: User-Agent       # User-Agent contains "curl"
        contains: curl
    services:                  # both conditions match -> route to demoappv11
    - name: demoappv11
      port: 80
  - services:                  # default route for all other traffic -> demoappv12
    - name: demoappv12
      port: 80
kubectl apply -f httpproxy-headers-routing.yaml
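The routing rules above can be sketched as a small simulation of HTTPProxy's condition-matching semantics: all header conditions in a route must match (logical AND), and a route without conditions acts as the default. This is an illustration only, not Contour's or Envoy's actual implementation:

```python
# Sketch of the header-routing decision in the HTTPProxy above.
# Every condition in a route must match; a route with no conditions
# is the catch-all default.

def matches(conditions, headers):
    """Return True if every header condition matches the request headers."""
    for cond in conditions:
        h = cond["header"]
        value = headers.get(h["name"])
        if h.get("present") and value is None:
            return False
        if "contains" in h and (value is None or h["contains"] not in value):
            return False
    return True

def route(headers):
    routes = [
        {"conditions": [
            {"header": {"name": "X-Canary", "present": True}},
            {"header": {"name": "User-Agent", "contains": "curl"}},
        ], "service": "demoappv11"},
        {"conditions": [], "service": "demoappv12"},  # default route
    ]
    for r in routes:
        if matches(r["conditions"], headers):
            return r["service"]

# A plain curl request falls through to the default:
print(route({"User-Agent": "curl/7.68.0"}))                      # demoappv12
# With X-Canary set, both conditions match:
print(route({"User-Agent": "curl/7.68.0", "X-Canary": "true"}))  # demoappv11
```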
3. Data flow and traffic forwarding
See the companion note: Cloud Native: contour + envoy traffic forwarding.
4. Testing Envoy
4.1 Basic test
4.1.1 Configuration
On another host, add an /etc/hosts entry so that www.ilinux.io resolves to the worker node 192.168.2.52:
sudo vim /etc/hosts
192.168.2.52 www.ilinux.io   # point the name at the worker-node
4.1.2 Start log monitoring
Envoy logs:
Master:
kubectl logs envoy-rlrng envoy -n projectcontour -f
Worker:
kubectl logs envoy-g4gk9 envoy -n projectcontour -f
Business pod logs:
Logs of demoappv11 and demoappv12 on the worker:
kubectl logs demoappv11-c94b75cdb-2l4js -n dev -f
kubectl logs demoappv12-5565b67f86-8fcwf -n dev -f
4.2 curl from the host
4.2.1 By default, traffic goes to v1.2
curl http://www.ilinux.io
iKubernetes demoapp v1.2 !! ClientIP: 10.244.0.6, ServerName: demoappv12-5565b67f86-8fcwf, ServerIP: 10.244.1.4!
First, the demoappv12 pod on the worker-node logs:
10.244.0.6 - - [17/Feb/2022 03:16:41] "GET / HTTP/1.1" 200 -
Then the envoy pod on the worker-node logs:
[2022-02-17T03:16:41.160Z] "GET / HTTP/1.1" 200 - 0 113 4 3 "192.168.2.52" "curl/7.68.0" "1741571e-087e-46cd-994d-98527496fc88" "www.ilinux.io" "10.244.1.4:80"
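The envoy line above follows Envoy's default access-log format. As a sketch (assuming the stock format; custom access_log configurations differ), it can be split into named fields like this:

```python
import re

# Regex for Envoy's default access log format:
# [START_TIME] "METHOD PATH PROTOCOL" STATUS FLAGS BYTES_RECV BYTES_SENT
# DURATION UPSTREAM_TIME "X-FORWARDED-FOR" "USER-AGENT" "REQUEST-ID"
# "AUTHORITY" "UPSTREAM_HOST"
LOG_RE = re.compile(
    r'\[(?P<start_time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) (?P<protocol>[^"]+)" '
    r'(?P<status>\d+) (?P<flags>\S+) '
    r'(?P<bytes_received>\d+) (?P<bytes_sent>\d+) '
    r'(?P<duration_ms>\d+) (?P<upstream_time_ms>\S+) '
    r'"(?P<x_forwarded_for>[^"]*)" "(?P<user_agent>[^"]*)" '
    r'"(?P<request_id>[^"]*)" "(?P<authority>[^"]*)" "(?P<upstream_host>[^"]*)"'
)

line = ('[2022-02-17T03:16:41.160Z] "GET / HTTP/1.1" 200 - 0 113 4 3 '
        '"192.168.2.52" "curl/7.68.0" "1741571e-087e-46cd-994d-98527496fc88" '
        '"www.ilinux.io" "10.244.1.4:80"')

fields = LOG_RE.match(line).groupdict()
print(fields["authority"], fields["upstream_host"], fields["status"])
# -> www.ilinux.io 10.244.1.4:80 200
```

The last quoted field shows which upstream pod (here demoappv12 at 10.244.1.4:80) actually served the request, which is how the routing decision can be verified from the envoy side.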
4.2.2 With X-Canary: true, traffic goes to v1.1:
curl -H "X-Canary:true" http://www.ilinux.io
iKubernetes demoapp v1.1 !! ClientIP: 10.244.0.6, ServerName: demoappv11-c94b75cdb-2l4js, ServerIP: 10.244.1.3!
First, the demoappv11 pod on the worker-node logs:
10.244.0.6 - - [17/Feb/2022 03:15:42] "GET / HTTP/1.1" 200 -
Then the envoy pod on the worker-node logs:
[2022-02-17T03:15:42.897Z] "GET / HTTP/1.1" 200 - 0 112 4 3 "192.168.2.52" "curl/7.68.0" "e80b8dfe-8a51-4a80-9eb0-cdbab4a9cff3" "www.ilinux.io" "10.244.1.3:80"