KTHW | Testing the cluster

August 29, 2020 - Reading time: 5 minutes

Now that all the services are up and running on the worker and controller nodes, we'll ensure that all the basic components are working.

Testing encryption

We'll use Kubernetes secrets to test encryption at rest: https://kubernetes.io/docs/concepts/configuration/secret/
Back when we set up the services on the controller, we created the encryption-config.yaml file, with an AES-CBC symmetric key:

cloud_user@ctl01:~$ cat /var/lib/kubernetes/encryption-config.yaml
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: dj2W+t0wxcF+LdACvX/qw0i6Gq8WSEM2fnH4W/Xpt/A=
      - identity: {}
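
If you ever need to generate such a key yourself, a 32-byte random value encoded in base64 is all it takes; a minimal sketch, assuming /dev/urandom is available:

# generate a fresh 32-byte AES key, base64-encoded
cloud_user@ctl01:~$ head -c 32 /dev/urandom | base64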

Pods can then reference the secret in three ways (a sketch of the first two follows the list):

  • As a file in a volume mounted in a container
  • As an env var in a container
  • Read by the kubelet when pulling images for the pod
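
As an illustration of the first two options, here is a minimal, hypothetical pod manifest (the name secret-test and the busybox image are assumptions, not part of the original setup) that consumes the kubernetes-the-hard-way secret we create just below, both as a mounted file and as an environment variable:

cloud_user@ctl01:~$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-test
spec:
  containers:
    - name: test
      image: busybox
      # print the env var, then the mounted file, then idle
      command: ["sh", "-c", "echo $MYKEY; cat /etc/secret/mykey; sleep 3600"]
      env:
        - name: MYKEY
          valueFrom:
            secretKeyRef:
              name: kubernetes-the-hard-way
              key: mykey
      volumeMounts:
        - name: secret-vol
          mountPath: /etc/secret
          readOnly: true
  volumes:
    - name: secret-vol
      secret:
        secretName: kubernetes-the-hard-way
EOF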

Kubernetes also creates secrets automatically, to store ServiceAccount tokens (like the default-token visible below). Let's create a secret of our own:

cloud_user@ctl01:~$ kubectl create secret generic kubernetes-the-hard-way --from-literal="mykey=mydata"
secret/kubernetes-the-hard-way created
cloud_user@ctl01:~$ kubectl get secrets
NAME                      TYPE                                  DATA   AGE
default-token-xdb6v       kubernetes.io/service-account-token   3      41d
kubernetes-the-hard-way   Opaque                                1      31s
#
# Read the secret 
#
cloud_user@ctl01:~$ kubectl get secret kubernetes-the-hard-way -o yaml | head -n4
apiVersion: v1
data:
  mykey: bXlkYXRh
kind: Secret
cloud_user@ctl01:~$ echo "bXlkYXRh" | base64 -d
mydata
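
The same decoding can be done in one step with jsonpath; a small convenience, assuming the key is named mykey as above:

cloud_user@ctl01:~$ kubectl get secret kubernetes-the-hard-way -o jsonpath='{.data.mykey}' | base64 -d
mydata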

We can also confirm that the secret is encrypted at rest by reading its raw value directly from etcd:

cloud_user@ctl01:~$ sudo ETCDCTL_API=3 etcdctl get   --endpoints=https://127.0.0.1:2379   --cacert=/etc/etcd/ca.pem   --cert=/etc/etcd/kubernetes.pem   --key=/etc/etcd/kubernetes-key.pem  /registry/secrets/default/kubernetes-the-hard-way | xxd -c 32
00000000: 2f72 6567 6973 7472 792f 7365 6372 6574 732f 6465 6661 756c 742f 6b75 6265 726e  /registry/secrets/default/kubern
00000020: 6574 6573 2d74 6865 2d68 6172 642d 7761 790a 6b38 733a 656e 633a 6165 7363 6263  etes-the-hard-way.k8s:enc:aescbc
00000040: 3a76 313a 6b65 7931 3a54 7fb0 b327 4932 1e75 0eb9 2f99 67d0 987a c03b 76e1 e055  :v1:key1:T...'I2.u../.g..z.;v..U
00000060: 3922 8584 b639 13a5 5820 1e5e 9012 7aab eac0 47d4 ae1c 0432 241a d8c8 e2c1 eeb7  9"...9..X .^..z...G....2$.......
00000080: efbb ade7 2895 121c 4ca6 87ea 7fc2 1168 7195 1c34 109d 84c3 4c8d b396 24ec a7c0  ....(...L......hq..4....L...$...
000000a0: 1879 ba54 ae6f a081 d6af 303f 7564 5b81 30d9 0a2d 1910 1568 840b db96 d62e f5e5  .y.T.o....0?ud[.0..-...h........
000000c0: 1549 5ef9 de90 d894 7527 7278 6370 8c2a 70c2 558b 9b52 cfa8 e169 9698 cd42 272b  .I^.....u'rxcp.*p.U..R...i...B'+
000000e0: 40d7 3ea6 6b61 50f5 27e1 956e aca0 8eae 7e9f b116 bddc 86b7 4d8a 8078 6c9c 9b8d  @.>.kaP.'..n....~.......M..xl...
00000100: 97aa 5070 f455 9430 3a9e d589 2094 fbf6 02ea 8233 c320 8a17 40a5 cf61 dcf2 de55  ..Pp.U.0:... ......3. ..@..a...U
00000120: 4423 cfcc 7f2f e1cf 2e2a 86f6 1388 a388 18b5 70c5 562f ad17 166b 0da0 babd 61d5  D#.../...*........p.V/...k....a.
00000140: 8760 4968 7893 74ab 530a                                                         .`Ihx.t.S.
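
The k8s:enc:aescbc:v1:key1 prefix confirms the aescbc provider encrypted the value with key1. Note that only secrets written after encryption was enabled are stored encrypted; the upstream docs suggest rewriting existing secrets so they all pass through the current encryption config, along these lines:

cloud_user@ctl01:~$ kubectl get secrets --all-namespaces -o json | kubectl replace -f -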

Testing deployments

Let's simply use the run command to create and run a particular image in a pod:

cloud_user@ctl01:~$ kubectl run nginx --image=nginx
pod/nginx created
cloud_user@ctl01:~$ kubectl get pods -l run=nginx -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP             NODE             NOMINATED NODE   READINESS GATES
nginx   1/1     Running   0          14s   10.200.192.2   wrk01.kube.com   <none>           <none>
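
kubectl run creates a bare pod here; to exercise an actual Deployment with several replicas, something like the following would do (nginx-deploy is a hypothetical name):

cloud_user@ctl01:~$ kubectl create deployment nginx-deploy --image=nginx
cloud_user@ctl01:~$ kubectl scale deployment nginx-deploy --replicas=3
cloud_user@ctl01:~$ kubectl get pods -l app=nginx-deploy -o wide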

Testing port-forwarding

kubectl port-forward lets us use a resource name, such as a pod name, to select a matching pod and forward local ports to it:

cloud_user@ctl01:~$ kubectl port-forward  nginx 8081:80
Forwarding from 127.0.0.1:8081 -> 80
Forwarding from [::1]:8081 -> 80

# from a different shell
cloud_user@ctl01:~$ netstat -tupan | grep 8081
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
tcp        0      0 127.0.0.1:8081          0.0.0.0:*               LISTEN      2584/kubectl
tcp6       0      0 ::1:8081                :::*                    LISTEN      2584/kubectl

cloud_user@ctl01:~$ curl localhost:8081
<!DOCTYPE html>
<html>
[...]
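
Logs and exec round out the smoke test: the port-forwarded request should appear in the nginx access log, and exec confirms the API server can run commands inside containers. A quick check, with output elided:

cloud_user@ctl01:~$ kubectl logs nginx
127.0.0.1 - - [...] "GET / HTTP/1.1" 200 [...]
cloud_user@ctl01:~$ kubectl exec -it nginx -- nginx -v
nginx version: nginx/[...]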

A pcap on the worker shows that the controller sends the request to the kubelet on the worker (listening on port 10250):

root@wrk01:/home/cloud_user# netstat -tupan | grep 10250
tcp6       0      0 :::10250                :::*                    LISTEN      607/kubelet
tcp6       0      0 172.31.29.196:10250     172.31.19.77:51844      ESTABLISHED 607/kubelet
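
For reference, the capture read below could have been produced on the worker with something along these lines (interface and filter are assumptions):

root@wrk01:/home/cloud_user# tcpdump -i any -s0 -w /var/tmp/test host 172.31.19.77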

root@wrk01:/home/cloud_user# tcpdump -nnr /var/tmp/test -s0 -A port 48418 or host 172.31.19.77 
...
10:40:59.139112 IP 172.31.19.77.51844 > 172.31.29.196.10250: Flags [P.], seq 244:352, ack 159, win 267, options [nop,nop,TS val 1714706 ecr 1714323], length 108
E....p@.@......M......(
.......{...........
..*...(.....g.6G.(..0G.qE.1.h(J.]Y..OJ.`.yT.z$xJ..|^.p....M.P...@..V...<;...    .wc...w.........$......K.#.....2......&
10:40:59.139801 IP 127.0.0.1.48418 > 127.0.0.1.41343: Flags [P.], seq 112:198, ack 49, win 350, options [nop,nop,TS val 1714323 ecr 1714323], length 86
E.....@.@............".......&3....^.~.....
..(...(........NGET / HTTP/1.1
...