KTHW - The kubernetes control plane

July 13, 2020 - Reading time: 8 minutes

Our controller nodes make global decisions about the cluster and react to cluster events. The main components of the control plane are:

  • kube-apiserver: The primary interface to the control plane.
  • etcd: The cluster datastore.
  • kube-scheduler: Finds a node where we can place a pod (pods contain one or more containers).
  • kube-controller-manager: Runs and manages the rest of the controllers.
  • cloud-controller-manager: Handles interaction with underlying cloud providers.

Installing the binaries

cloud_user@pzolo2c:~$ for bin in kube-apiserver kube-controller-manager kubectl kube-scheduler ; do curl -LO "https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/$bin" ; done
cloud_user@pzolo2c:~$ chmod 770 kub*
cloud_user@pzolo2c:~$ sudo cp kube-apiserver kube-controller-manager kubectl kube-scheduler  /usr/local/bin/

The kube-apiserver

Provides the primary interface for the Kubernetes control plane and the cluster as a whole.

cloud_user@pzolo2c:~$ sudo /usr/local/bin/kube-apiserver --version
Kubernetes v1.18.5
cloud_user@pzolo2c:~$ sudo mkdir -p /var/lib/kubernetes/
cloud_user@pzolo2c:~$ sudo cp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem service-account-key.pem service-account.pem encryption-config.yaml /var/lib/kubernetes/
cloud_user@pzolo2c:~$ INTERNAL_IP=172.31.29.101 ; CONTROLLER1_IP=$INTERNAL_IP ; CONTROLLER0_IP=172.31.22.121
cloud_user@pzolo2c:~$ cat << EOF | sudo tee /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
  --advertise-address=${INTERNAL_IP} \\
  --allow-privileged=true \\
  --apiserver-count=3 \\
  --audit-log-maxage=30 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-path=/var/log/audit.log \\
  --authorization-mode=Node,RBAC \\
  --bind-address=0.0.0.0 \\
  --client-ca-file=/var/lib/kubernetes/ca.pem \\
  --enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
  --etcd-cafile=/var/lib/kubernetes/ca.pem \\
  --etcd-certfile=/var/lib/kubernetes/kubernetes.pem \\
  --etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \\
  --etcd-servers=https://$CONTROLLER0_IP:2379,https://$CONTROLLER1_IP:2379 \\
  --event-ttl=1h \\
  --encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
  --kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \\
  --kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \\
  --kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \\
  --kubelet-https=true \\
  --runtime-config=api/all=true \\
  --service-account-key-file=/var/lib/kubernetes/service-account.pem \\
  --service-cluster-ip-range=10.32.0.0/24 \\
  --service-node-port-range=30000-32767 \\
  --tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\
  --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \\
  --v=2 \\
  --kubelet-preferred-address-types=InternalIP,InternalDNS,Hostname,ExternalIP,ExternalDNS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
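
Optionally, before enabling anything, systemd can sanity-check the unit file we just wrote. This is not part of the original lab, just a quick guard against typos:

# Prints warnings/errors if the unit file has syntax problems, nothing if it's clean
sudo systemd-analyze verify /etc/systemd/system/kube-apiserver.service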

Set up controller manager

We need to place the kubeconfig file for the controller manager in the Kubernetes folder, and then create a systemd unit file with the instructions to start the service.

cloud_user@ctl01:~$ kube-controller-manager --version
Kubernetes v1.18.6
sudo cp kube-controller-manager.kubeconfig /var/lib/kubernetes/
cat << EOF | sudo tee /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
  --address=0.0.0.0 \\
  --cluster-cidr=10.200.0.0/16 \\
  --cluster-name=kubernetes \\
  --cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \\
  --cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \\
  --kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\
  --leader-elect=true \\
  --root-ca-file=/var/lib/kubernetes/ca.pem \\
  --service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem \\
  --service-cluster-ip-range=10.32.0.0/24 \\
  --use-service-account-credentials=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

Set up the kubernetes scheduler

The scheduler requires a YAML config file that points to its kubeconfig file.

cloud_user@ctl01:~$ kube-scheduler --version
I0719 00:37:31.273277    8689 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0719 00:37:31.273343    8689 registry.go:150] Registering EvenPodsSpread predicate and priority function
Kubernetes v1.18.6
cloud_user@pzolo2c:~$ sudo mkdir -p /etc/kubernetes/config/
cloud_user@pzolo2c:~$ sudo cp kube-scheduler.kubeconfig /var/lib/kubernetes/
cloud_user@pzolo2c:~$ cat << EOF | sudo tee /etc/kubernetes/config/kube-scheduler.yaml
apiVersion: kubescheduler.config.k8s.io/v1alpha2
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
leaderElection:
  leaderElect: true
EOF
cat << EOF | sudo tee /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
  --config=/etc/kubernetes/config/kube-scheduler.yaml \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

Now that all the required control plane services have been created, we can enable and start them:

sudo systemctl daemon-reload
sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler
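
Once the services are up, a couple of quick checks confirm the control plane is healthy. This assumes the admin.kubeconfig generated earlier is still in the home directory:

# The API server cert includes 127.0.0.1, so we can hit the local healthz endpoint directly
curl --cacert /var/lib/kubernetes/ca.pem https://127.0.0.1:6443/healthz ; echo ""
# Ask the API server how it sees the scheduler, controller-manager and etcd
kubectl get componentstatuses --kubeconfig admin.kubeconfig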

Setting up the healthz proxy

This step is required ONLY if the load balancer can't monitor services running over HTTPS. You can set up an nginx proxy that forwards requests on port 80 to the local instance of the kube-apiserver running on port 6443.

cloud_user@ctl02:~$ sudo apt-get install -y nginx
cloud_user@ctl02:~$ cat > kubernetes.default.svc.cluster.local <<EOF
server {
  listen      80;
  server_name kubernetes.default.svc.cluster.local;

  location /healthz {
     proxy_pass                    https://127.0.0.1:6443/healthz;
     proxy_ssl_trusted_certificate /var/lib/kubernetes/ca.pem;
  }
}
EOF
cloud_user@ctl02:~$ sudo mv kubernetes.default.svc.cluster.local /etc/nginx/sites-available/kubernetes.default.svc.cluster.local
cloud_user@ctl02:~$ sudo ln -s /etc/nginx/sites-available/kubernetes.default.svc.cluster.local /etc/nginx/sites-enabled/
cloud_user@ctl01:~$ sudo systemctl restart nginx
cloud_user@ctl01:~$ sudo systemctl enable nginx

Then confirm that the service is up and running with:

cloud_user@linuxacademy:~$ curl http://ctl01/healthz -H "Host: kubernetes.default.svc.cluster.local" ; echo ""
ok

Allow the kube-apiserver to access the kubelets on the workers

The kube-apiserver running on the controller nodes needs to be able to make changes on the kubelets in the worker nodes. The type of authorization used when accessing a service is defined by the --authorization-mode flag; our kube-apiserver allows Node and RBAC authorization. Node (https://kubernetes.io/docs/reference/access-authn-authz/node/) authorizes API requests made by worker-node kubelets, while RBAC (https://kubernetes.io/docs/reference/access-authn-authz/rbac/) grants permissions based on roles and bindings.

We'll need to create a ClusterRole and assign it to the kubernetes user with a ClusterRoleBinding.

To interact with the controller API, we use kubectl and specify the admin kubeconfig.

cloud_user@ctl01:~$ cat newRole.yml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
cloud_user@ctl01:~$ kubectl apply --kubeconfig admin.kubeconfig -f newRole.yml
clusterrole.rbac.authorization.k8s.io/system:kube-apiserver-to-kubelet created
# verify it was created 
cloud_user@ctl01:~$ kubectl --kubeconfig admin.kubeconfig get clusterroles
NAME                                                                   CREATED AT
admin                                                                  2020-07-19T00:20:20Z
cluster-admin                                                          2020-07-19T00:20:20Z
edit                                                                   2020-07-19T00:20:20Z
system:aggregate-to-admin                                              2020-07-19T00:20:20Z
system:aggregate-to-edit                                               2020-07-19T00:20:20Z
system:aggregate-to-view                                               2020-07-19T00:20:20Z
[...]
system:kube-apiserver-to-kubelet                                       2020-07-23T21:59:55Z
[...]

Now we want to create a binding

cloud_user@ctl01:~$ cat binding.yml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
cloud_user@ctl01:~$ kubectl apply --kubeconfig admin.kubeconfig -f binding.yml
clusterrolebinding.rbac.authorization.k8s.io/system:kube-apiserver created
# verify it was created 
cloud_user@ctl01:~$ kubectl --kubeconfig admin.kubeconfig get clusterrolebinding
NAME                                                   ROLE                                                               AGE
cluster-admin                                          ClusterRole/cluster-admin                                          4d21h
system:basic-user                                      ClusterRole/system:basic-user                                      4d21h
system:controller:attachdetach-controller              ClusterRole/system:controller:attachdetach-controller              4d21h
[...]
system:kube-apiserver                                  ClusterRole/system:kube-apiserver-to-kubelet                       44s
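
Once the worker nodes have joined the cluster (they are set up later in this series), commands like kubectl logs and kubectl exec are an easy way to exercise the apiserver-to-kubelet path that this ClusterRole enables; the pod name below is only an example:

# Both logs and exec are proxied by the kube-apiserver to the kubelet on the worker
kubectl --kubeconfig admin.kubeconfig run nginx-test --image=nginx
kubectl --kubeconfig admin.kubeconfig logs nginx-test
kubectl --kubeconfig admin.kubeconfig exec nginx-test -- nginx -v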

KTHW - Encrypting the configuration

July 13, 2020 - Reading time: 3 minutes

Kubernetes lets us store sensitive data in etcd in an encrypted format. We'll need to create an encryption-config.yaml file that will be used by the kube-apiserver (the etcd client) when storing secrets. The file contains a randomly generated 32-byte secret key used by AES-CBC.

cloud_user@linuxacademy:~/kthw$ ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
cloud_user@linuxacademy:~/kthw$ cat encryption-config.yaml
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: hjwmr9dCeI1/S1yqBn8arDCyXkoC6r2qxES2AAy8CfE=
      - identity: {}
# Place the file on the controller nodes
cloud_user@linuxacademy:~/kthw$ scp encryption-config.yaml ctl01:~/
cloud_user@linuxacademy:~/kthw$ scp encryption-config.yaml ctl02:~/
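
For reference, the file can also be generated straight from the $ENCRYPTION_KEY variable with a heredoc instead of pasting the secret by hand; this is just a sketch of the same file shown above:

cat > encryption-config.yaml << EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF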

etcd is a distributed key/value store that provides a reliable way to store data across a cluster. It only runs on the controller nodes, and it needs to be clustered. It uses the Raft consensus algorithm (https://raft.github.io/).

Set-up etcd

cloud_user@pzolo1c:~$ curl -LO https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz
cloud_user@pzolo1c:~$ tar -xzf etcd-v3.4.9-linux-amd64.tar.gz
cloud_user@pzolo1c:~$ sudo cp etcd-v3.4.9-linux-amd64/etcd* /usr/local/bin/
cloud_user@pzolo1c:~$ sudo mkdir -p /etc/etcd /var/lib/etcd
# Place the CA file and the controller key/cert in the /etc/etcd folder 
cloud_user@pzolo1c:~$ sudo cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/
# Create a new service for systemd 
cloud_user@pzolo1c:~$ ETCD_NAME=$(hostname) ; INTERNAL_IP=172.31.22.121 ; INITIAL_CLUSTER=$ETCD_NAME=https://$INTERNAL_IP:2380,pzolo2c.mylabserver.com=https://172.31.29.101:2380
cloud_user@pzolo1c:~$ cat << EOF > etcd.service
> [Unit]
> Description=etcd
> Documentation=https://github.com/coreos
>
> [Service]
> ExecStart=/usr/local/bin/etcd \\
>   --name ${ETCD_NAME} \\
>   --cert-file=/etc/etcd/kubernetes.pem \\
>   --key-file=/etc/etcd/kubernetes-key.pem \\
>   --peer-cert-file=/etc/etcd/kubernetes.pem \\
>   --peer-key-file=/etc/etcd/kubernetes-key.pem \\
>   --trusted-ca-file=/etc/etcd/ca.pem \\
>   --peer-trusted-ca-file=/etc/etcd/ca.pem \\
>   --peer-client-cert-auth \\
>   --client-cert-auth \\
>   --initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
>   --listen-peer-urls https://${INTERNAL_IP}:2380 \\
>   --listen-client-urls https://${INTERNAL_IP}:2379,https://127.0.0.1:2379 \\
>   --advertise-client-urls https://${INTERNAL_IP}:2379 \\
>   --initial-cluster-token etcd-cluster-0 \\
>   --initial-cluster ${INITIAL_CLUSTER} \\
>   --initial-cluster-state new \\
>   --data-dir=/var/lib/etcd
> Restart=on-failure
> RestartSec=5
>
> [Install]
> WantedBy=multi-user.target
> EOF
cloud_user@pzolo1c:~$ sudo  cp etcd.service /etc/systemd/system/
# Enable / start the service 
cloud_user@pzolo1c:~$ sudo systemctl daemon-reload
cloud_user@pzolo1c:~$ sudo systemctl enable etcd
Created symlink /etc/systemd/system/multi-user.target.wants/etcd.service → /etc/systemd/system/etcd.service.
cloud_user@pzolo1c:~$ sudo systemctl start etcd

After starting the service on both controllers, we can verify that the cluster is active with:

cloud_user@pzolo2c:~$ sudo ETCDCTL_API=3 etcdctl member list   --endpoints=https://127.0.0.1:2379   --cacert=/etc/etcd/ca.pem   --cert=/etc/etcd/kubernetes.pem   --key=/etc/etcd/kubernetes-key.pem
c1b4898f05dfeb2, started, pzolo2c.mylabserver.com, https://172.31.29.101:2380, https://172.31.29.101:2379, false
f80fdba247d2636b, started, pzolo1c.mylabserver.com, https://172.31.22.121:2380, https://172.31.22.121:2379, false
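
Besides listing the members, a quick write/read round-trip (with the same TLS flags) is a simple way to confirm the cluster actually serves requests; the test key name is arbitrary:

sudo ETCDCTL_API=3 etcdctl endpoint health --endpoints=https://127.0.0.1:2379 --cacert=/etc/etcd/ca.pem --cert=/etc/etcd/kubernetes.pem --key=/etc/etcd/kubernetes-key.pem
sudo ETCDCTL_API=3 etcdctl put test-key test-value --endpoints=https://127.0.0.1:2379 --cacert=/etc/etcd/ca.pem --cert=/etc/etcd/kubernetes.pem --key=/etc/etcd/kubernetes-key.pem
sudo ETCDCTL_API=3 etcdctl get test-key --endpoints=https://127.0.0.1:2379 --cacert=/etc/etcd/ca.pem --cert=/etc/etcd/kubernetes.pem --key=/etc/etcd/kubernetes-key.pem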

KTHW - Kubeconfigs

July 12, 2020 - Reading time: 3 minutes

A kubeconfig is a file that stores information about clusters, users, namespaces and auth mechanisms: all the data required to connect to and interact with a Kubernetes cluster.

# Admin kubeconfig # 
# Connects to the controller on localhost 
# The embed certs option allows us to move the config file to other machines 
# First step, define the cluster settings 
cloud_user@pzolo6c:~/kthw$ kubectl config set-cluster kubernetes-the-hard-way --certificate-authority=ca.pem --embed-certs=true --server=https://127.0.0.1:6443 --kubeconfig=admin.kubeconfig
cloud_user@pzolo6c:~/kthw$ cat admin.kubeconfig | cut -b -50
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRV
    server: https://127.0.0.1:6443
  name: kubernetes-the-hard-way
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
# Second step, set credentials for the admin user 
cloud_user@pzolo6c:~/kthw$ kubectl config set-credentials admin --client-certificate=admin.pem --client-key=admin-key.pem --embed-certs=true --kubeconfig=admin.kubeconfig
cloud_user@pzolo6c:~/kthw$ cat admin.kubeconfig | cut -b -50
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRV
    server: https://127.0.0.1:6443
  name: kubernetes-the-hard-way
contexts: null
current-context: ""
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUS
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFUR
# Third step create the default context 
cloud_user@pzolo6c:~/kthw$ kubectl config set-context default --cluster=kubernetes-the-hard-way --user=admin --kubeconfig=admin.kubeconfig
cloud_user@pzolo6c:~/kthw$ cat admin.kubeconfig | cut -b -50
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRV
    server: https://127.0.0.1:6443
  name: kubernetes-the-hard-way
contexts:
- context:
    cluster: kubernetes-the-hard-way
    user: admin
  name: default
current-context: ""
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUS
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFUR
# Final step, specify that we want to use the default context 
cloud_user@pzolo6c:~/kthw$ kubectl config use-context default --kubeconfig=admin.kubeconfig
cloud_user@pzolo6c:~/kthw$ cat admin.kubeconfig | cut -b -50
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRV
    server: https://127.0.0.1:6443
  name: kubernetes-the-hard-way
contexts:
- context:
    cluster: kubernetes-the-hard-way
    user: admin
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUS
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFUR
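
As an alternative to cutting the raw file with cat, kubectl itself can pretty-print any kubeconfig; embedded certificates show up as redacted placeholders:

kubectl config view --kubeconfig=admin.kubeconfig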

We'll need to repeat the same steps for kube-scheduler, kube-controller-manager, kube-proxy and for each worker node (kubelet). The worker nodes and kube-proxy won't connect to localhost:6443, but to the private address of the API load balancer.
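
As a rough sketch of that repetition, the two kubelet kubeconfigs could be generated in a loop like the one below. KUBERNETES_ADDRESS is an assumption for the API load balancer's private IP (172.31.29.90 in this lab); adjust it to your own LB:

KUBERNETES_ADDRESS=172.31.29.90   # assumption: private IP of the API LB
for instance in pzolo4c.mylabserver.com pzolo5c.mylabserver.com; do
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem --embed-certs=true \
    --server=https://${KUBERNETES_ADDRESS}:6443 \
    --kubeconfig=${instance}.kubeconfig
  kubectl config set-credentials system:node:${instance} \
    --client-certificate=${instance}.pem --client-key=${instance}-key.pem \
    --embed-certs=true --kubeconfig=${instance}.kubeconfig
  kubectl config set-context default --cluster=kubernetes-the-hard-way \
    --user=system:node:${instance} --kubeconfig=${instance}.kubeconfig
  kubectl config use-context default --kubeconfig=${instance}.kubeconfig
done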

Then send the file over to the worker and controller nodes:

cloud_user@pzolo6c:~/kthw$ scp pzolo4c.mylabserver.com.kubeconfig kube-proxy.kubeconfig   wrk01:~/
cloud_user@pzolo6c:~/kthw$ scp pzolo5c.mylabserver.com.kubeconfig kube-proxy.kubeconfig   wrk02:~/
cloud_user@pzolo6c:~/kthw$ scp admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig ctl01:~/
cloud_user@pzolo6c:~/kthw$ scp admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig ctl02:~/

KTHW - Set-up the CA and certificates

July 12, 2020 - Reading time: 5 minutes

  • Client certificates: Used for authentication from any kube-* service to the Kube API LB.

  • Kube API Server Certificate: Signed certificate for the API LB.

  • Service Account Key Pair: Certificate used to sign service account tokens.

Generate the CA

## Details of CA that I want to create 
cloud_user@pzolo6c:~/kthw$ cat ca-csr.json | jq .
{
  "CN": "Kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "Kubernetes",
      "OU": "CA",
      "ST": "Oregon"
    }
  ]
}
## Certificate details 
cloud_user@pzolo6c:~/kthw$ cat ca-config.json | jq .
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "8760h"
      }
    }
  }
}
## Generate a JSON with CSR, cert and key and then write to a file 
cloud_user@pzolo6c:~/kthw$ cfssl gencert -initca ca-csr.json | cfssljson -bare ca
2020/07/12 20:30:32 [INFO] generating a new CA key and certificate from CSR
2020/07/12 20:30:32 [INFO] generate received request
2020/07/12 20:30:32 [INFO] received CSR
2020/07/12 20:30:32 [INFO] generating key: rsa-2048
2020/07/12 20:30:32 [INFO] encoded CSR
2020/07/12 20:30:32 [INFO] signed certificate with serial number 113170212320509007336156775422336010737695630373
cloud_user@pzolo6c:~/kthw$ openssl x509 -in ca.pem -text -noout | grep -e Issuer -e Subject -e Before -e After -e Sign -e TRUE
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: C = US, ST = Oregon, L = Portland, O = Kubernetes, OU = CA, CN = Kubernetes
            Not Before: Jul 12 20:26:00 2020 GMT
            Not After : Jul 11 20:26:00 2025 GMT
        Subject: C = US, ST = Oregon, L = Portland, O = Kubernetes, OU = CA, CN = Kubernetes
        Subject Public Key Info:
                Certificate Sign, CRL Sign
                CA:TRUE
            X509v3 Subject Key Identifier:
    Signature Algorithm: sha256WithRSAEncryption

Generate Client certs

The certificates will be created using a csr.json file for each client, specifying the newly created ca.pem and ca-key.pem.

# Admin client 
cloud_user@pzolo6c:~/kthw$ jq .CN admin-csr.json
"admin"
cloud_user@pzolo6c:~/kthw$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
# Workers client 
cloud_user@pzolo6c:~/kthw$ WORKER0_IP=172.31.22.212 ; WORKER0_HOST=pzolo4c.mylabserver.com
cloud_user@pzolo6c:~/kthw$ WORKER1_IP=172.31.27.176 ; WORKER1_HOST=pzolo5c.mylabserver.com
cloud_user@pzolo6c:~/kthw$ jq .CN pzolo*.mylabserver.com-csr.json
"system:node:pzolo4c.mylabserver.com"
"system:node:pzolo5c.mylabserver.com"
cloud_user@pzolo6c:~/kthw$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -hostname=${WORKER0_IP},${WORKER0_HOST} -profile=kubernetes ${WORKER0_HOST}-csr.json | cfssljson -bare ${WORKER0_HOST}
cloud_user@pzolo6c:~/kthw$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -hostname=${WORKER1_IP},${WORKER1_HOST} -profile=kubernetes ${WORKER1_HOST}-csr.json | cfssljson -bare ${WORKER1_HOST}
# Controller Manager Client Certificate
cloud_user@pzolo6c:~/kthw$ jq .CN kube-controller-manager-csr.json
"system:kube-controller-manager"
cloud_user@pzolo6c:~/kthw$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
# Kube Proxy Client Certificate 
cloud_user@pzolo6c:~/kthw$ jq .CN kube-proxy-csr.json
"system:kube-proxy"
cloud_user@pzolo6c:~/kthw$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
# Kube Scheduler Client Certificate 
cloud_user@pzolo6c:~/kthw$ jq .CN kube-scheduler-csr.json
"system:kube-scheduler"
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
# Service account certificates 
cloud_user@pzolo6c:~/kthw$ jq .CN service-account-csr.json
"service-accounts"
cloud_user@pzolo6c:~/kthw$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes service-account-csr.json | cfssljson -bare service-account
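
Before shipping the certs around, it doesn't hurt to check that each one verifies against the new CA; openssl verify accepts several certificates at once:

# Each generated certificate should report OK against the CA
openssl verify -CAfile ca.pem admin.pem kube-controller-manager.pem kube-proxy.pem kube-scheduler.pem service-account.pem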

Generate the API LB server cert

This certificate has a long list of hostnames, because it's the entry point for all clients.

# 10.32.0.1 is the cluster IP of the kubernetes service that some clients may use. The rest are IPs and hostnames for the controllers and the API LB 
cloud_user@pzolo6c:~/kthw$ CERT_HOSTNAME=10.32.0.1,127.0.0.1,localhost,kubernetes.default,172.31.22.121,172.31.29.101,172.31.29.90,pzolo1c.mylabserver.com,pzolo2c.mylabserver.com,pzolo3c.mylabserver.com
# Kubernetes API / LB server certificate 
cloud_user@pzolo6c:~/kthw$ jq .CN kubernetes-csr.json
"kubernetes"
cloud_user@pzolo6c:~/kthw$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -hostname=${CERT_HOSTNAME} -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
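
It's worth confirming that the SAN list actually made it into the certificate, since a missing IP here leads to confusing TLS errors later:

openssl x509 -in kubernetes.pem -text -noout | grep -A 1 "Subject Alternative Name"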

Distribute the certs


cloud_user@pzolo6c:~/kthw$ scp ca.pem pzolo4c.mylabserver.com-key.pem pzolo4c.mylabserver.com.pem wrk01:~/
cloud_user@pzolo6c:~/kthw$ scp ca.pem pzolo5c.mylabserver.com-key.pem pzolo5c.mylabserver.com.pem wrk02:~/
cloud_user@pzolo6c:~/kthw$ scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem service-account-key.pem service-account.pem ctl01:~/
cloud_user@pzolo6c:~/kthw$ scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem service-account-key.pem service-account.pem ctl02:~/

KTHW - Set-up the client

July 12, 2020 - Reading time: ~1 minute

CFSSL - Cloudflare's open source toolkit for everything TLS/SSL.

cloud_user@pzolo6c:~/cfssl$ curl https://github.com/cloudflare/cfssl/releases/download/v1.4.1/cfssl_1.4.1_linux_amd64 -LO
cloud_user@pzolo6c:~/cfssl$ curl https://github.com/cloudflare/cfssl/releases/download/v1.4.1/cfssljson_1.4.1_linux_amd64 -LO
cloud_user@pzolo6c:~/cfssl$ chmod 777 cfssl*
cloud_user@pzolo6c:~/cfssl$ sudo mv cfssl_1.4.1_linux_amd64 /usr/local/bin/cfssl
cloud_user@pzolo6c:~/cfssl$ sudo mv cfssljson_1.4.1_linux_amd64 /usr/local/bin/cfssljson
cloud_user@pzolo6c:~/cfssl$ cfssl version
Version: 1.4.1
Runtime: go1.12.12

Kubectl - The Kubernetes command-line tool, kubectl, allows you to run commands against Kubernetes clusters.

curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
cloud_user@pzolo6c:~$ chmod 770 kubectl
cloud_user@pzolo6c:~$ sudo mv kubectl /usr/local/bin/
cloud_user@pzolo6c:~$ kubectl version --client| egrep -o "\{.*"
{Major:"1", Minor:"18", GitVersion:"v1.18.5", GitCommit:"e6503f8d8f769ace2f338794c914a96fc335df0f", GitTreeState:"clean", BuildDate:"2020-06-26T03:47:41Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}