1. Image scanning (ImagePolicyWebhook)
2. Detecting a Pod with sysdig
3. ClusterRole
4. AppArmor
5. PodSecurityPolicy
6. Network policies
7. Dockerfile and YAML manifest issues
8. Pod security
9. Creating a ServiceAccount
10. Scanning images with Trivy
11. Creating a Secret
12. kube-bench
13. gVisor
14. Auditing
15. Default network policy
Exam information
2 hours
15-20 questions
Booking works the same as for the CKA; results are released within 32 hours.
The maximum score is under 100 points (87 or 93), but 67 points is a pass.
Practice environment
4 environments, 1 console
NAT subnet 192.168.26.0
Practice questions
1. Image scanning (ImagePolicyWebhook)
Switch cluster: kubectl config use-context k8s
context
A container image scanner is set up on the cluster, but it's not yet fully
integrated into the cluster's configuration. When complete, the container image
scanner shall scan for and reject the use of vulnerable images.
task
You have to complete the entire task on the cluster's master node, where all
services and files have been prepared and placed.
Given an incomplete configuration in directory /etc/kubernetes/aa and a
functional container image scanner with HTTPS endpoint
http://192.168.26.60:1323/image_policy
1. enable the necessary plugins to create an image policy
2. validate the control configuration and change it to an implicit deny
3. edit the configuration to point to the provided HTTPS endpoint correctly
Finally, test if the configuration is working by trying to deploy the vulnerable
resource /csk/1/web1.yaml
Solution approach
ImagePolicyWebhook
Keywords: image_policy, deny
1. Switch cluster, find the master node, and ssh to it.
2. ls /etc/kubernetes/xxx
3. vi /etc/kubernetes/xxx/xxx.yaml: change true to false; vi /etc/kubernetes/xxx/xxx.yaml: set the https address; the directory must be mounted into the apiserver Pod as a volume.
4. Enable ImagePolicyWebhook and - --admission-control-config-file=
5. systemctl restart kubelet
6. kubectl run pod1 --image=nginx
Example:
Create /etc/kubernetes/admission, copy the certificates and config files into /etc/kubernetes/admission, and configure /etc/kubernetes/manifests/kube-apiserver.yaml:
add the ImagePolicyWebhook settings, restart the apiserver (systemctl restart kubelet), verify that creating a pod fails, change the policy in /etc/kubernetes/admission/admission_config.yaml to defaultAllow: true, then verify that creating a pod succeeds.
root@master:~# ls cks/cks-course-environment-master/course-content/supply-chain-security/secure-the-supply-chain/whitelist-registries/ImagePolicyWebhook/
admission_config.yaml  apiserver-client-cert.pem  apiserver-client-key.pem  external-cert.pem  external-key.pem  kubeconf
root@master:~# mkdir /etc/kubernetes/admission
root@master:~# cp /root/cks/cks-course-environment-master/course-content/supply-chain-security/secure-the-supply-chain/whitelist-registries/ImagePolicyWebhook/* /etc/kubernetes/admission/
root@master:~# cd /etc/kubernetes/admission
root@master:~# cat kubeconf
apiVersion: v1
kind: Config
# clusters refers to the remote service.
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/admission/external-cert.pem  # CA for verifying the remote service.
    server: https://external-service:1234/check-image  # URL of remote service to query. Must use 'https'.
  name: image-checker
contexts:
- context:
    cluster: image-checker
    user: api-server
  name: image-checker
current-context: image-checker
preferences: {}
# users refers to the API server's webhook configuration.
users:
- name: api-server
  user:
    client-certificate: /etc/kubernetes/admission/apiserver-client-cert.pem  # cert for the webhook admission controller to use
    client-key: /etc/kubernetes/admission/apiserver-client-key.pem  # key matching the cert
root@master:~# cat admission_config.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: ImagePolicyWebhook
  configuration:
    imagePolicy:
      kubeConfigFile: /etc/kubernetes/admission/kubeconf
      allowTTL: 50
      denyTTL: 50
      retryBackoff: 500
      defaultAllow: false
root@master:~# cat /etc/kubernetes/manifests/kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.211.40:6443
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --admission-control-config-file=/etc/kubernetes/admission/admission_config.yaml  # add this line
    - --advertise-address=192.168.211.40
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction,ImagePolicyWebhook  # change this line
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
    - --insecure-port=0
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-issuer=https://kubernetes.default.svc.cluster.local
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
    - --service-cluster-ip-range=10.96.0.0/12
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    image: k8s.gcr.io/kube-apiserver:v1.20.7
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 192.168.211.40
        path: /livez
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: kube-apiserver
    readinessProbe:
      failureThreshold: 3
      httpGet:
        host: 192.168.211.40
        path: /readyz
        port: 6443
        scheme: HTTPS
      periodSeconds: 1
      timeoutSeconds: 15
    resources:
      requests:
        cpu: 250m
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: 192.168.211.40
        path: /livez
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/ca-certificates
      name: etc-ca-certificates
      readOnly: true
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /etc/kubernetes/admission  # add this line
      name: k8s-admission  # add this line
      readOnly: true  # add this line
    - mountPath: /usr/local/share/ca-certificates
      name: usr-local-share-ca-certificates
      readOnly: true
    - mountPath: /usr/share/ca-certificates
      name: usr-share-ca-certificates
      readOnly: true
  hostNetwork: true
  priorityClassName: system-node-critical
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/ca-certificates
      type: DirectoryOrCreate
    name: etc-ca-certificates
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:  # add this line
      path: /etc/kubernetes/admission  # add this line
      type: DirectoryOrCreate  # add this line
    name: k8s-admission  # add this line
  - hostPath:
      path: /usr/local/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-local-share-ca-certificates
  - hostPath:
      path: /usr/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-share-ca-certificates
status: {}
root@master:~# k get nodes
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   9d    v1.20.1
node1    Ready    <none>                 9d    v1.20.1
node2    Ready    <none>                 9d    v1.20.1
# Creating a pod fails
root@master:~# k run test --image=nginx
Error from server (Forbidden): pods "test" is forbidden: Post "https://external-service:1234/check-image?timeout=30s": dial tcp: lookup external-service on 8.8.8.8:53: no such host
# Edit the admission_config.yaml configuration
root@master:~# vim /etc/kubernetes/admission/admission_config.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: ImagePolicyWebhook
  configuration:
    imagePolicy:
      kubeConfigFile: /etc/kubernetes/admission/kubeconf
      allowTTL: 50
      denyTTL: 50
      retryBackoff: 500
      defaultAllow: true  # change this line to true
# Restart the apiserver
root@master:/etc/kubernetes/manifests# mv kube-apiserver.yaml ../
root@master:/etc/kubernetes/manifests# ps -ef |grep api
root 78871 39023 0 20:17 pts/3 00:00:00 grep --color=auto api
root@master:/etc/kubernetes/manifests# ps -ef |grep api^C
root@master:/etc/kubernetes/manifests# mv ../kube-apiserver.yaml .
root@master:~/imagev1.20.7# k -n kube-system get pod
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-57fc9c76cc-t45js 1/1 Running 0 9d
calico-node-jkfm9 1/1 Running 0 9d
calico-node-lrkbp 1/1 Running 0 9d
calico-node-pdwqm 1/1 Running 0 9d
coredns-74ff55c5b-gmfsj 1/1 Running 0 9d
coredns-74ff55c5b-s6lt8 1/1 Running 0 9d
etcd-master 1/1 Running 0 9d
kube-apiserver-master 1/1 Running 0 39s
kube-controller-manager-master 1/1 Running 2 9d
kube-proxy-6m8q6 1/1 Running 0 9d
kube-proxy-8vprs 1/1 Running 0 9d
kube-proxy-b6r2s 1/1 Running 0 9d
kube-scheduler-master 1/1 Running 2 9d
# Creating a pod succeeds
root@master:~/imagev1.20.7# k run test --image=nginx
pod/test created
2. Detecting a Pod with sysdig
Switch cluster: kubectl config use-context k8s
You may use your browser to open one additional tab to access sysdig's
documentation or Falco's documentation.
Task:
Use runtime detection tools to detect anomalous processes spawning and executing
frequently in the single container belonging to Pod redis.
Two tools are available to use:
sysdig
falco
The tools are pre-installed on the cluster's worker node only; they are not
available on the base system or the master node.
Using the tool of your choice (including any non pre-installed tool), analyse the
container's behaviour for at least 30 seconds, using filters that detect newly
spawning and executing processes.
Store an incident file at /opt/2/report, containing the detected incidents, one per
line, in the following format:
[timestamp],[uid],[processName]
Solution approach
Keywords: sysdig
0. Remember: use sysdig -l | grep to search for the relevant field names.
1. Switch cluster, find the pod, and ssh to the node it runs on.
2. Run sysdig; mind the required format and duration, and redirect the result to the target file.
3. sysdig -M 30 -p "*%evt.time,%user.uid,%proc.name" container.id=<container id> > /opt/2/report
Example
3. ClusterRole
Switch cluster: kubectl config use-context k8s
context
A Role bound to a Pod's ServiceAccount grants overly permissive permissions.
Complete the following tasks to reduce the set of permissions.
task
Given an existing Pod named web-pod running in the namespace monitoring, edit the
Role bound to the Pod's ServiceAccount sa-dev-1 to only allow performing list
operations, only on resources of type Endpoints.
Create a new Role named role-2 in the namespace monitoring, which only allows
performing update operations, only on resources of type persistentvolumeclaims.
Create a new RoleBinding named role-2-binding, binding the newly created Role to
the Pod's ServiceAccount.
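For reference, the Role and RoleBinding this task asks for can also be written declaratively. This is a sketch that uses only the names given in the task; the kubectl commands in the solution approach generate equivalent objects:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: role-2
  namespace: monitoring
rules:
- apiGroups: [""]                          # persistentvolumeclaims are in the core API group
  resources: ["persistentvolumeclaims"]
  verbs: ["update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: role-2-binding
  namespace: monitoring
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: role-2
subjects:
- kind: ServiceAccount
  name: sa-dev-1
  namespace: monitoring
```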
Solution approach
RBAC
Keywords: role, rolebinding
1. Find the Role behind the RoleBinding and change its permissions to list on endpoints:
kubectl edit role role-1 -n monitoring
2. Remember: --verb is the permission, --resource is the object:
kubectl create role role-2 --verb=update --resource=persistentvolumeclaims -n monitoring
3. Create the binding to the corresponding ServiceAccount:
kubectl create rolebinding role-2-binding --role=role-2 --serviceaccount=monitoring:sa-dev-1 -n monitoring

4. AppArmor
Switch cluster: kubectl config use-context k8s
Context
AppArmor is enabled on the cluster's worker node. An AppArmor profile is
prepared, but not enforced yet. You may use your browser to open one additional
tab to access the AppArmor documentation.
Task
On the cluster's worker node, enforce the prepared AppArmor profile located at
/etc/apparmor.d/nginx_apparmor. Edit the prepared manifest file located at
/cks/4/pod1.yaml to apply the AppArmor profile. Finally, apply the manifest file
and create the pod specified in it.
Solution approach
AppArmor
Keywords: apparmor
1. Switch cluster; remember to check the nodes and ssh to the worker node.
2. Check the profile file and its name:
cd /etc/apparmor.d
vi nginx_apparmor
apparmor_status | grep nginx-profile-3  # if nothing matches, the profile is not loaded
apparmor_parser -q nginx_apparmor  # load and enable the profile
3. Edit the yaml to apply this rule; open the official docs, copy the example, and change the container name and the local profile name:
vi /cks/4/pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-apparmor
  annotations:
    container.apparmor.security.beta.kubernetes.io/hello: localhost/nginx-profile-3
spec:
  containers:
  - name: hello
    image: busybox
    command: [ "sh", "-c", "echo 'Hello AppArmor!' && sleep 1h" ]
4. After editing, create it:
kubectl apply -f /cks/4/pod1.yaml
5. PodSecurityPolicy
Switch cluster: kubectl config use-context k8s
context
A PodSecurityPolicy shall prevent the creation of privileged Pods in a specific namespace.
Task
Create a new PodSecurityPolicy named prevent-psp-policy, which prevents the creation of privileged Pods.
Create a new ClusterRole named restrict-access-role, which uses the newly created PodSecurityPolicy prevent-psp-policy.
Create a new ServiceAccount named psp-denial-sa in the existing namespace development.
Finally, create a new ClusterRoleBinding named dany-access-bind, which binds the newly created ClusterRole restrict-access-role to the newly created ServiceAccount.
Solution approach
PodSecurityPolicy
Keywords: psp, policy, privileged
0. Switch cluster; check whether the plugin is enabled:
# vi /etc/kubernetes/manifests/kube-apiserver.yaml
- --enable-admission-plugins=NodeRestriction,PodSecurityPolicy
# systemctl restart kubelet
1. Copy a PSP from the official docs and change it to reject privileged Pods:
# cat psp.yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: prevent-psp-policy
spec:
  privileged: false
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'
# kubectl create -f psp.yaml
2. Create the matching ClusterRole:
kubectl create clusterrole restrict-access-role --verb=use --resource=podsecuritypolicy --resource-name=prevent-psp-policy
3. Create the ServiceAccount in the right namespace:
kubectl create sa psp-denial-sa -n development
4. Create the binding:
kubectl create clusterrolebinding dany-access-bind --clusterrole=restrict-access-role --serviceaccount=development:psp-denial-sa

6. Network policies
Switch cluster: kubectl config use-context k8s
Create a NetworkPolicy named pod-access to restrict access to Pod products-service running in namespace development. Only allow the following Pods to connect to Pod products-service:
Pods in the namespace testing
Pods with label environment: staging, in any namespace
Make sure to apply the NetworkPolicy. You can find a skeleton manifest file at /cks/6/p1.yaml
Solution approach
Keywords: NetworkPolicy
1. First check the pod's labels (the podSelector must match the labels of the products-service Pod):
kubectl get pod -n development --show-labels
2. Check the namespace's labels; if it has none, set one:
kubectl label ns testing name=testing
3. cat networkpolicy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: "pod-access"
  namespace: "development"
spec:
  podSelector:
    matchLabels:
      environment: staging
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: testing
  - from:
    - namespaceSelector:
        matchLabels: {}
      podSelector:
        matchLabels:
          environment: staging
kubectl create -f networkpolicy.yaml
https://kubernetes.io/zh/docs/concepts/services-networking/networkpolicies/#networkpolicy-resource

7. Dockerfile and YAML manifest issues
Switch cluster: kubectl config use-context k8s
Task
Analyze and edit the given Dockerfile (based on the ubuntu:16.04 image)
/cks/7/Dockerfile, fixing two instructions present in the file that are prominent
security/best-practice issues. Analyze and edit the given manifest file
/cks/7/deployment.yaml, fixing two fields present in the file that are prominent
security/best-practice issues.
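The exact contents of /cks/7/Dockerfile vary by environment, but the two flagged instructions are typically a root user and an unpinned or wrong base image. A hypothetical before/after sketch (the instruction lines here are assumptions, not taken from the exam file):

```dockerfile
# Before (typical issues):
#   FROM ubuntu:latest   <- unpinned tag; the task says the image is based on ubuntu:16.04
#   USER root            <- container would run as root
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y curl
USER nobody
CMD ["sh", "-c", "sleep 1d"]
```

In the deployment manifest, the two fields are commonly a wrong apiVersion and a privileged securityContext; fix only as many issues as the question states.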
Solution approach
Keywords: Dockerfile, issues
1. Note how many errors the question says the Dockerfile has; comment out USER root.
2. In the manifest, watch for API version problems and privileged settings; again, check how many errors the question mentions.

8. Pod security
Switch cluster: kubectl config use-context k8s
context
It is best practice to design containers to be stateless and immutable.
Task
Inspect Pods running in namespace testing and delete any Pod that is either not
stateless or not immutable. Use the following strict interpretation of stateless
and immutable:
Pods being able to store data inside containers must be treated as not stateless.
You don't have to worry whether data is actually stored inside containers or not
already. Pods being configured to be privileged in any way must be treated as
potentially not stateless and not immutable.
Solution approach
Keywords: stateless, immutable
1. Get all the pods.
2. Check for privileged settings (privi*).
3. Check for volumes.
4. Delete every pod that is privileged or has a volume.
kubectl get pod pod1 -n testing -o jsonpath={.spec.volumes} | jq
kubectl get pod sso -n testing -o yaml | grep "privi.*: true"
kubectl delete pod xxxxx -n testing
9. Creating a ServiceAccount
Switch cluster: kubectl config use-context k8s
context
A Pod fails to run because of an incorrectly specified ServiceAccount.
Task
Create a new ServiceAccount named frontend-sa in the existing namespace qa, which
must not have access to any secrets.
Inspect the Pod named frontend running in the namespace qa. Edit the Pod to use
the newly created ServiceAccount.
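The ServiceAccount this task describes can be sketched as follows; automountServiceAccountToken: false is the field that stops the token (and thus secret access) from being mounted into Pods automatically:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: frontend-sa
  namespace: qa
# Prevents the ServiceAccount token from being auto-mounted into Pods:
automountServiceAccountToken: false
```

Then set spec.serviceAccountName: frontend-sa in the frontend Pod and recreate it.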
Solution approach
Keywords: ServiceAccount, "must not have access to any secrets"
1. Generate an SA template: kubectl create serviceaccount frontend-sa -n qa --dry-run -o yaml
2. Find the auto-mount setting in the official docs: automountServiceAccountToken: false
3. Change serviceAccountName in the pod spec.
4. Recreate the pod; delete the other sa.

10. Scanning images with Trivy
Switch cluster: kubectl config use-context k8s
Task
Use the Trivy open-source container scanner to detect images with severe
vulnerabilities used by Pods in the namespace yavin.
Look for images with High or Critical severity vulnerabilities, and delete the
Pods that use those images.
Trivy is pre-installed on the cluster's master node only; it is not available on
the base system or the worker nodes. You'll have to connect to the cluster's
master node to use Trivy.
Solution approach
Keywords: Trivy, scanner, High or Critical
1. Switch cluster and ssh to the master.
2. Get the pods and scan every image they use; none may have High or Critical vulnerabilities.
3. Delete the pods whose images have findings.

11. Creating a Secret
Switch cluster: kubectl config use-context k8s
Task
Retrieve the content of the existing secret named db1-test in the istio-system
namespace. Store the username field in a file named /cks/11/old-username.txt,
and the password field in a file named /cks/11/old-pass.txt. You must create both
files; they don't exist yet.
Do not use/modify the created files in the following steps; create new temporary
files if needed. Create a new secret named test-workflow in the istio-system
namespace, with the following content:
username: thanos
password: hahahaha
Finally, create a new Pod that has access to the secret test-workflow via a
volume:
pod name: dev-pod
namespace: istio-system
container name: dev-container
image: nginx:1.9
volume name: dev-volume
mount path: /etc/test-secret
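Secret data is stored base64-encoded, so the retrieved fields must be decoded before writing them to the files. A quick demo of the round trip; the encoded strings here are illustrative, in the exam take them from kubectl get secret db1-test -n istio-system -o yaml:

```shell
# Decode values read from an existing Secret (example strings, not the live ones):
echo -n 'dG9t' | base64 -d          # prints: tom
echo -n 'aGFoYTAwMQ==' | base64 -d  # prints: haha001
# Encode a new value the same way:
echo -n 'thanos' | base64           # prints: dGhhbm9z
```

Note the -n on echo: without it, a trailing newline ends up inside the encoded value.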
Solution approach
Keywords: secret
kubectl get secrets db1-test -n istio-system -o yaml
echo -n "aGFoYTAwMQ==" | base64 -d > /cks/11/old-pass.txt
echo -n "dG9t" | base64 -d > /cks/11/old-username.txt
kubectl create secret generic test-workflow --from-literal=username=thanos --from-literal=password=hahahaha -n istio-system
Then create the Pod that mounts the secret, per the requirements.

12. kube-bench
Switch cluster: kubectl config use-context k8s
context
A CIS Benchmark tool was run against the kubeadm-created cluster and found
multiple issues that must be addressed immediately.
Task
Fix all issues via configuration and restart the affected components to ensure
the new settings take effect.
Fix all of the following violations that were found against the API server:
1.2.7 (FAIL) Ensure that the --authorization-mode argument is not set to AlwaysAllow
1.2.8 (FAIL) Ensure that the --authorization-mode argument includes Node
1.2.9 (FAIL) Ensure that the --authorization-mode argument includes RBAC
1.2.18 (FAIL) Ensure that the --insecure-bind-address argument is not set
1.2.19 (FAIL) Ensure that the --insecure-port argument is set to 0
Fix all of the following violations that were found against the kubelet:
4.2.1 (FAIL) Ensure that the anonymous-auth argument is set to false
4.2.2 (FAIL) Ensure that the --authorization-mode argument is not set to AlwaysAllow
Use webhook authn/authz
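For the kubelet findings (4.2.1, 4.2.2) the fix usually goes into the kubelet config file, commonly /var/lib/kubelet/config.yaml on kubeadm clusters, followed by systemctl restart kubelet. A sketch of the relevant fields only (surrounding fields omitted; the file path is an assumption about the exam environment):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false      # 4.2.1: anonymous-auth set to false
  webhook:
    enabled: true       # use webhook authn
authorization:
  mode: Webhook         # 4.2.2: not AlwaysAllow; use webhook authz
```

The API server findings are fixed by editing the flags in /etc/kubernetes/manifests/kube-apiserver.yaml.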
Solution approach
Keywords: the listed items indicate a kube-bench scan
1. Switch cluster, then ssh to the master node.
2. kube-bench run, find the corresponding items, and fix them. The real exam also includes an ETCD item.

13. gVisor
Switch cluster: kubectl config use-context k8s
context
This cluster uses containerd as CRI runtime. Containerd's default runtime handler
is runc. Containerd has been prepared to support an additional runtime handler,
runsc (gVisor).
Task
Create a RuntimeClass named untrusted using the prepared runtime handler named
runsc. Update all Pods in the namespace client to run on gVisor, unless they are
already running on a non-default runtime handler. You can find a skeleton
manifest file at /cks/13/rc.yaml
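The RuntimeClass for this task can be sketched as below (names from the task; the handler must match the one containerd was prepared with):

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: untrusted
handler: runsc   # the prepared gVisor runtime handler
```

Each Pod in the client namespace then needs spec.runtimeClassName: untrusted, unless it already sets a non-default handler.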
Solution approach
Keywords: gVisor
1. Switch cluster; create a RuntimeClass using the official docs.
2. Then, per the question, make the pods use this runtime.
https://kubernetes.io/zh/docs/concepts/containers/runtime-class/#2-%E5%88%9B%E5%BB%BA%E7%9B%B8%E5%BA%94%E7%9A%84-runtimeclass-%E8%B5%84%E6%BA%90

14. Auditing
Switch cluster: kubectl config use-context k8s
Task
Enable audit logs in the cluster. To do so, enable the log backend, and ensure
that:
logs are stored at /var/log/kubernetes/audit-logs.txt
log files are retained for 5 days
at maximum, a number of 10 audit log files are retained
A basic policy is provided at /etc/kubernetes/logpolicy/sample-policy.yaml. It
only specifies what not to log. The base policy is located on the cluster's
master node. Edit and extend the basic policy to log:
namespaces changes at RequestResponse level
the request body of pods changes in the namespace front-apps
configMap and secret changes in all namespaces at the metadata level
Also, add a catch-all rule to log all other requests at the metadata level.
Don't forget to apply the modified policy.
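Extending the provided sample policy per the task could look like this sketch (rule order matters: the specific rules come first, the catch-all last; the existing "don't log" rules stay at the top):

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# (keep the provided "don't log" rules from sample-policy.yaml here)
- level: RequestResponse
  resources:
  - group: ""
    resources: ["namespaces"]
- level: Request                 # Request level logs the request body
  resources:
  - group: ""
    resources: ["pods"]
  namespaces: ["front-apps"]
- level: Metadata
  resources:
  - group: ""
    resources: ["configmaps", "secrets"]
- level: Metadata                # catch-all rule
```

The matching kube-apiserver flags would be --audit-policy-file=/etc/kubernetes/logpolicy/sample-policy.yaml, --audit-log-path=/var/log/kubernetes/audit-logs.txt, --audit-log-maxage=5, and --audit-log-maxbackup=10, plus a hostPath mount so the apiserver Pod can reach the policy and log directories.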
Solution approach
Keywords: policy
1. Switch cluster and log in to the master; create the log directory, edit the apiserver yaml, and enable auditing.
2. Modify the policy according to the official docs.
3. Restart kubelet.
https://kubernetes.io/zh/docs/tasks/debug-application-cluster/audit/#log-%E5%90%8E%E7%AB%AF

15. Default network policy
Switch cluster: kubectl config use-context k8s
context
A default-deny NetworkPolicy avoids accidentally exposing a Pod in a namespace
that doesn't have any other NetworkPolicy defined.
Task
Create a new default-deny NetworkPolicy named denynetwork in the namespace
development for all traffic of type Ingress.
The new NetworkPolicy must deny all Ingress traffic in the namespace development.
Apply the newly created default-deny NetworkPolicy to all Pods running in
namespace development. You can find a skeleton manifest file.
Solution approach
Keywords: NetworkPolicy defined
1. Read carefully whether it is a default deny-all or something more specific, then write the yaml per the question using the official docs.
https://kubernetes.io/zh/docs/concepts/services-networking/networkpolicies/#%E9%BB%98%E8%AE%A4%E6%8B%92%E7%BB%9D%E6%89%80%E6%9C%89%E5%85%A5%E7%AB%99%E6%B5%81%E9%87%8F
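The default-deny policy for this task can be sketched as below (name and namespace from the task; the empty podSelector selects every Pod in the namespace):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: denynetwork
  namespace: development
spec:
  podSelector: {}   # all Pods in the development namespace
  policyTypes:
  - Ingress         # no ingress rules defined, so all ingress is denied
```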



