The load balancer will be used to access both controllers from a single point. In this example we'll use nginx with a stream load balancer for ports 443 and 6443.
sudo apt-get install -y nginx
sudo systemctl enable nginx
sudo mkdir -p /etc/nginx/tcpconf.d
sudo vi /etc/nginx/nginx.conf
## Add the line:
## include /etc/nginx/tcpconf.d/*;
## create the kubernetes config:
cloud_user@kubelb:~$ cat /etc/nginx/tcpconf.d/kubernetes.conf
stream {
    upstream kubernetes {
        server 172.31.19.77:6443;
        server 172.31.24.213:6443;
    }

    server {
        listen 6443;
        listen 443;
        proxy_pass kubernetes;
    }
}
sudo nginx -s reload
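To confirm that the stream configuration was picked up and that the load balancer forwards the API port, a quick check (kubelb is the load balancer host used in this example; the curl will only succeed once the kube-apiservers are up):
# syntax-check the configuration, including the files pulled in from tcpconf.d
sudo nginx -t
# the API version endpoint should answer through the load balancer
curl -k https://kubelb:6443/version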
The worker nodes are responsible for the actual work of running container applications managed by Kubernetes. Components:
Socat: Multipurpose relay (SOcket CAT). Socat is a command-line utility that establishes two bidirectional byte streams and transfers data between them. The kubelet relies on it to support the kubectl port-forward command.
Conntrack: Command-line interface for netfilter connection tracking. Using conntrack, you can dump a list of all (or a filtered selection of) currently tracked connections, delete connections from the state table, and even add new ones.
Ipset: Administration tool for IP sets, a netfilter project. Some of its uses are: store multiple IP addresses or port numbers and match against the collection with iptables in one swoop; dynamically update iptables rules against IP addresses or ports without a performance penalty; express complex IP address and port based rulesets with one single iptables rule and benefit from the speed of IP sets. (Short usage examples follow this component list.)
cri-tools Introduced in Kubernetes 1.5, the Container Runtime Interface (CRI) is a plugin interface which enables kubelet to use a wide variety of container runtimes, without the need to recompile. https://github.com/kubernetes-sigs/cri-tools
runc runc is a CLI tool for spawning and running containers according to the OCI specification. Open Container Initiative is an open governance structure for the express purpose of creating open industry standards around container formats and runtimes. Currently contains two specifications: the Runtime Specification (runtime-spec) and the Image Specification (image-spec). The Runtime Specification outlines how to run a “filesystem bundle” that is unpacked on disk. https://github.com/opencontainers/runc
cni The Container Network Interface project consists of a specification and libraries for writing plugins to configure network interfaces in Linux containers, along with a number of supported plugins. We'll use the cni-plugins project. This is a Cloud Native Computing Foundation (CNCF) project currently in the incubation phase. (Known incubation projects: etcd, cni. Known graduated CNCF projects: kubernetes, prometheus, coreDNS, containerd, fluentd.) https://github.com/containernetworking
containerd An industry-standard container runtime with an emphasis on simplicity, robustness and portability. It graduated from the Cloud Native Computing Foundation in 2019.
kubectl
kube-proxy
kubelet
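For reference, a few illustrative one-liners for socat, conntrack and ipset (the addresses and ports below are hypothetical, shown only to make their roles concrete):
# socat: relay connections from local port 8080 to a remote web server (two bidirectional byte streams)
socat TCP-LISTEN:8080,fork TCP:10.0.0.5:80
# conntrack: dump the list of currently tracked connections
sudo conntrack -L
# ipset: build a set of addresses and match the whole set with a single iptables rule
sudo ipset create blocklist hash:ip
sudo ipset add blocklist 203.0.113.7
sudo iptables -A INPUT -m set --match-set blocklist src -j DROP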
sudo apt-get install -y socat conntrack ipset
wget -q --show-progress --https-only --timestamping \
https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.18.0/crictl-v1.18.0-linux-amd64.tar.gz \
https://github.com/opencontainers/runc/releases/download/v1.0.0-rc91/runc.amd64 \
https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz \
https://github.com/containerd/containerd/releases/download/v1.3.6/containerd-1.3.6-linux-amd64.tar.gz \
https://storage.googleapis.com/kubernetes-release/release/v1.18.6/bin/linux/amd64/kubectl \
https://storage.googleapis.com/kubernetes-release/release/v1.18.6/bin/linux/amd64/kube-proxy \
https://storage.googleapis.com/kubernetes-release/release/v1.18.6/bin/linux/amd64/kubelet
sudo mkdir -p \
/etc/cni/net.d \
/opt/cni/bin \
/var/lib/kubelet \
/var/lib/kube-proxy \
/var/lib/kubernetes \
/var/run/kubernetes
mkdir containerd
tar -xvf crictl-v1.18.0-linux-amd64.tar.gz
tar -xvf containerd-1.3.6-linux-amd64.tar.gz -C containerd
sudo tar -xvf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin/
sudo mv runc.amd64 runc
chmod +x crictl kubectl kube-proxy kubelet runc
sudo mv crictl kubectl kube-proxy kubelet runc /usr/local/bin/
sudo mv containerd/bin/* /bin/
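Optionally, confirm the binaries are on the PATH and executable; these are the standard version flags for each tool:
runc --version
crictl --version
containerd --version
kubectl version --client
kube-proxy --version
kubelet --version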
sudo mkdir -p /etc/containerd/
cat << EOF | sudo tee /etc/containerd/config.toml
[plugins]
  [plugins.cri.containerd]
    snapshotter = "overlayfs"
    [plugins.cri.containerd.default_runtime]
      runtime_type = "io.containerd.runtime.v1.linux"
      runtime_engine = "/usr/local/bin/runc"
      runtime_root = ""
EOF
cat <<EOF | sudo tee /etc/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target
[Service]
ExecStartPre=/sbin/modprobe overlay
ExecStart=/bin/containerd
Restart=always
RestartSec=5
Delegate=yes
KillMode=process
OOMScoreAdjust=-999
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity
[Install]
WantedBy=multi-user.target
EOF
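The unit file is only written at this point; a minimal sketch of actually starting containerd and checking that crictl can reach it over the socket configured above:
sudo systemctl daemon-reload
sudo systemctl enable --now containerd
# crictl talks to containerd through the CRI socket
sudo crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock info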
Starting with K8s v1.8, the kubelet won't work on a machine with swap enabled. To permanently disable swap, we need to comment out the line in fstab that mounts the swap partition:
cloud_user@wrk02:~$ grep swap /etc/fstab
#/swapfile swap swap defaults 0 0
To verify if swap is currently turned on:
cloud_user@wrk01:~$ sudo swapon --show
NAME TYPE SIZE USED PRIO
/swapfile file 1000M 0B -1
#
# Turn off with swapoff:
#
cloud_user@wrk01:~$ sudo swapoff -a
cloud_user@wrk01:~$
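After swapoff, re-running the check should produce no output, and free should report 0B of swap:
sudo swapon --show
free -h | grep -i swap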
HOSTNAME=$(hostname)
sudo mv ${HOSTNAME}-key.pem ${HOSTNAME}.pem /var/lib/kubelet/
sudo mv ${HOSTNAME}.kubeconfig /var/lib/kubelet/kubeconfig
sudo mv ca.pem /var/lib/kubernetes/
cat << EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/var/lib/kubernetes/ca.pem"
authorization:
  mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
  - "10.32.0.10"
runtimeRequestTimeout: "15m"
tlsCertFile: "/var/lib/kubelet/${HOSTNAME}.pem"
tlsPrivateKeyFile: "/var/lib/kubelet/${HOSTNAME}-key.pem"
EOF
cat << EOF | sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service
[Service]
ExecStart=/usr/local/bin/kubelet \\
--config=/var/lib/kubelet/kubelet-config.yaml \\
--container-runtime=remote \\
--container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \\
--image-pull-progress-deadline=2m \\
--kubeconfig=/var/lib/kubelet/kubeconfig \\
--network-plugin=cni \\
--register-node=true \\
--v=2 \\
--hostname-override=${HOSTNAME}
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
cat << EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  kubeconfig: "/var/lib/kube-proxy/kubeconfig"
mode: "iptables"
clusterCIDR: "10.200.0.0/16"
EOF
cat << EOF | sudo tee /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-proxy \\
--config=/var/lib/kube-proxy/kube-proxy-config.yaml
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
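With the containerd, kubelet and kube-proxy units in place, reload systemd and start the worker services (containerd was started earlier; including it here is harmless if it is already running):
sudo systemctl daemon-reload
sudo systemctl enable containerd kubelet kube-proxy
sudo systemctl start containerd kubelet kube-proxy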
We can check this from one of the controllers:
cloud_user@ctl01:~$ kubectl get nodes --kubeconfig admin.kubeconfig
NAME STATUS ROLES AGE VERSION
wrk01.kube.com NotReady <none> 10m v1.18.6
wrk02.kube.com NotReady <none> 3m38s v1.18.6
Our controller nodes make global decisions about the cluster and react to cluster events. The main components are the kube-apiserver, the kube-controller-manager and the kube-scheduler.
Installing the binaries
cloud_user@pzolo2c:~$ for bin in kube-apiserver kube-controller-manager kubectl kube-scheduler ; do curl -LO "https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/$bin" ; done
cloud_user@pzolo2c:~$ chmod 770 kub*
cloud_user@pzolo2c:~$ sudo cp kube-apiserver kube-controller-manager kubectl kube-scheduler /usr/local/bin/
The kube-apiserver provides the primary interface for the Kubernetes control plane and the cluster as a whole.
cloud_user@pzolo2c:~$ sudo /usr/local/bin/kube-apiserver --version
Kubernetes v1.18.5
cloud_user@pzolo2c:~$ sudo mkdir -p /var/lib/kubernetes/
cloud_user@pzolo2c:~$ sudo cp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem service-account-key.pem service-account.pem encryption-config.yaml /var/lib/kubernetes/
cloud_user@pzolo2c:~$ INTERNAL_IP=172.31.29.101 ; CONTROLLER1_IP=$INTERNAL_IP ; CONTROLLER0_IP=172.31.22.121
cloud_user@pzolo2c:~$ cat << EOF | sudo tee /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
--advertise-address=${INTERNAL_IP} \\
--allow-privileged=true \\
--apiserver-count=3 \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/var/log/audit.log \\
--authorization-mode=Node,RBAC \\
--bind-address=0.0.0.0 \\
--client-ca-file=/var/lib/kubernetes/ca.pem \\
--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
--etcd-cafile=/var/lib/kubernetes/ca.pem \\
--etcd-certfile=/var/lib/kubernetes/kubernetes.pem \\
--etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \\
--etcd-servers=https://$CONTROLLER0_IP:2379,https://$CONTROLLER1_IP:2379 \\
--event-ttl=1h \\
--encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
--kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \\
--kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \\
--kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \\
--kubelet-https=true \\
--runtime-config=api/all=true \\
--service-account-key-file=/var/lib/kubernetes/service-account.pem \\
--service-cluster-ip-range=10.32.0.0/24 \\
--service-node-port-range=30000-32767 \\
--tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\
--tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \\
--v=2 \\
--kubelet-preferred-address-types=InternalIP,InternalDNS,Hostname,ExternalIP,ExternalDNS
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
We need to place the kubeconfig file for the controller manager in the kubernetes folder, and then create a systemd unit file with the instructions to start the service.
cloud_user@ctl01:~$ kube-controller-manager --version
Kubernetes v1.18.6
sudo cp kube-controller-manager.kubeconfig /var/lib/kubernetes/
cat << EOF | sudo tee /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
--address=0.0.0.0 \\
--cluster-cidr=10.200.0.0/16 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \\
--cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \\
--kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\
--leader-elect=true \\
--root-ca-file=/var/lib/kubernetes/ca.pem \\
--service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem \\
--service-cluster-ip-range=10.32.0.0/24 \\
--use-service-account-credentials=true \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
The scheduler requires a YAML config file that points to its kubeconfig file.
cloud_user@ctl01:~$ kube-scheduler --version
I0719 00:37:31.273277 8689 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0719 00:37:31.273343 8689 registry.go:150] Registering EvenPodsSpread predicate and priority function
Kubernetes v1.18.6
cloud_user@pzolo2c:~$ sudo mkdir -p /etc/kubernetes/config/
cloud_user@pzolo2c:~$ sudo cp kube-scheduler.kubeconfig /var/lib/kubernetes/
cloud_user@pzolo2c:~$ cat << EOF | sudo tee /etc/kubernetes/config/kube-scheduler.yaml
apiVersion: kubescheduler.config.k8s.io/v1alpha2
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
leaderElection:
  leaderElect: true
EOF
cat << EOF | sudo tee /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
--config=/etc/kubernetes/config/kube-scheduler.yaml \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
Now that all the required control plane services have been created, we can enable and start them:
sudo systemctl daemon-reload
sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler
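A quick sanity check that the control plane came up (output formatting varies slightly across versions, but every component should report Healthy):
kubectl get componentstatuses --kubeconfig admin.kubeconfig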
This step is required ONLY if the load balancer can't monitor services running over https. You can set up a proxy with nginx that sends requests on port 80 to the local instance of the kube-apiserver running on port 6443.
cloud_user@ctl02:~$ sudo apt-get install -y nginx
cloud_user@ctl02:~$ cat > kubernetes.default.svc.cluster.local <<EOF
server {
  listen 80;
  server_name kubernetes.default.svc.cluster.local;

  location /healthz {
    proxy_pass https://127.0.0.1:6443/healthz;
    proxy_ssl_trusted_certificate /var/lib/kubernetes/ca.pem;
  }
}
EOF
cloud_user@ctl02:~$ sudo mv kubernetes.default.svc.cluster.local /etc/nginx/sites-available/kubernetes.default.svc.cluster.local
cloud_user@ctl02:~$ sudo ln -s /etc/nginx/sites-available/kubernetes.default.svc.cluster.local /etc/nginx/sites-enabled/
cloud_user@ctl01:~$ sudo systemctl restart nginx
cloud_user@ctl01:~$ sudo systemctl enable nginx
Then confirm that the service is up and running with
cloud_user@linuxacademy:~$ curl http://ctl01/healthz -H "Host: kubernetes.default.svc.cluster.local" ; echo ""
ok
The kube-apiserver running on the controller nodes needs to be able to make changes on the kubelets in the worker nodes. The type of authorization used when accessing a service is defined by the --authorization-mode flag. For example, the kube-apiserver allows Node and RBAC authorization. Node: https://kubernetes.io/docs/reference/access-authn-authz/node/ - allows worker nodes to contact the api. RBAC: https://kubernetes.io/docs/reference/access-authn-authz/rbac/
We'll need to create a ClusterRole and assign this role to the Kubernetes user with a ClusterRoleBinding
To interact with the controller api we use kubectl and specify the kubeconfig for admin.
cloud_user@ctl01:~$ cat newRole.yml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
cloud_user@ctl01:~$ kubectl apply --kubeconfig admin.kubeconfig -f newRole.yml
clusterrole.rbac.authorization.k8s.io/system:kube-apiserver-to-kubelet created
# verify it was created
cloud_user@ctl01:~$ kubectl --kubeconfig admin.kubeconfig get clusterroles
NAME CREATED AT
admin 2020-07-19T00:20:20Z
cluster-admin 2020-07-19T00:20:20Z
edit 2020-07-19T00:20:20Z
system:aggregate-to-admin 2020-07-19T00:20:20Z
system:aggregate-to-edit 2020-07-19T00:20:20Z
system:aggregate-to-view 2020-07-19T00:20:20Z
[...]
system:kube-apiserver-to-kubelet 2020-07-23T21:59:55Z
[...]
Now we want to create a binding
cloud_user@ctl01:~$ cat binding.yml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
cloud_user@ctl01:~$ kubectl apply --kubeconfig admin.kubeconfig -f binding.yml
clusterrolebinding.rbac.authorization.k8s.io/system:kube-apiserver created
# verify it was created
cloud_user@ctl01:~$ kubectl --kubeconfig admin.kubeconfig get clusterrolebinding
NAME ROLE AGE
cluster-admin ClusterRole/cluster-admin 4d21h
system:basic-user ClusterRole/system:basic-user 4d21h
system:controller:attachdetach-controller ClusterRole/system:controller:attachdetach-controller 4d21h
[...]
system:kube-apiserver ClusterRole/system:kube-apiserver-to-kubelet 44s
Kubernetes can store sensitive data in etcd in an encrypted format. We'll need to create an encryption-config.yaml file that will be used by the kube-apiserver (the etcd client) when storing secrets. The file contains a randomly generated 32-byte secret key used by the AES-CBC provider.
cloud_user@linuxacademy:~/kthw$ ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
cloud_user@linuxacademy:~/kthw$ cat encryption-config.yaml
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: hjwmr9dCeI1/S1yqBn8arDCyXkoC6r2qxES2AAy8CfE=
      - identity: {}
# Place the file on the controller nodes
cloud_user@linuxacademy:~/kthw$ scp encryption-config.yaml ctl01:~/
cloud_user@linuxacademy:~/kthw$ scp encryption-config.yaml ctl02:~/
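Once the control plane is up, you can verify that secrets really are encrypted at rest. A hedged sketch, assuming the usual /registry key layout and the etcd certificate paths used on the controllers:
# create a test secret through the API
kubectl create secret generic kubernetes-the-hard-way --from-literal="mykey=mydata" --kubeconfig admin.kubeconfig
# read the raw value straight from etcd; the dump should start with k8s:enc:aescbc:v1:key1 instead of plaintext
sudo ETCDCTL_API=3 etcdctl get /registry/secrets/default/kubernetes-the-hard-way \
  --endpoints=https://127.0.0.1:2379 --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem --key=/etc/etcd/kubernetes-key.pem | hexdump -C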
etcd is a distributed key/value store that provides a reliable way to store data across a cluster. It only runs on the controller nodes, and it needs to be clustered. It uses the Raft consensus algorithm (https://raft.github.io/).
cloud_user@pzolo1c:~$ curl -LO https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz
cloud_user@pzolo1c:~$ tar -xzf etcd-v3.4.9-linux-amd64.tar.gz
cloud_user@pzolo1c:~$ sudo cp etcd-v3.4.9-linux-amd64/etcd* /usr/local/bin/
cloud_user@pzolo1c:~$ sudo mkdir -p /etc/etcd /var/lib/etcd
# Place the CA file and key/cert for controller on the etc folder
cloud_user@pzolo1c:~$ sudo cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/
# Create a new service for systemd
cloud_user@pzolo1c:~$ ETCD_NAME=$(hostname) ; INTERNAL_IP=172.31.22.121 ; INITIAL_CLUSTER=$ETCD_NAME=https://$INTERNAL_IP:2380,pzolo2c.mylabserver.com=https://172.31.29.101:2380
cloud_user@pzolo1c:~$ cat << EOF > etcd.service
> [Unit]
> Description=etcd
> Documentation=https://github.com/coreos
>
> [Service]
> ExecStart=/usr/local/bin/etcd \\
> --name ${ETCD_NAME} \\
> --cert-file=/etc/etcd/kubernetes.pem \\
> --key-file=/etc/etcd/kubernetes-key.pem \\
> --peer-cert-file=/etc/etcd/kubernetes.pem \\
> --peer-key-file=/etc/etcd/kubernetes-key.pem \\
> --trusted-ca-file=/etc/etcd/ca.pem \\
> --peer-trusted-ca-file=/etc/etcd/ca.pem \\
> --peer-client-cert-auth \\
> --client-cert-auth \\
> --initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
> --listen-peer-urls https://${INTERNAL_IP}:2380 \\
> --listen-client-urls https://${INTERNAL_IP}:2379,https://127.0.0.1:2379 \\
> --advertise-client-urls https://${INTERNAL_IP}:2379 \\
> --initial-cluster-token etcd-cluster-0 \\
> --initial-cluster ${INITIAL_CLUSTER} \\
> --initial-cluster-state new \\
> --data-dir=/var/lib/etcd
> Restart=on-failure
> RestartSec=5
>
> [Install]
> WantedBy=multi-user.target
> EOF
cloud_user@pzolo1c:~$ sudo cp etcd.service /etc/systemd/system/
# Enable / start the service
cloud_user@pzolo1c:~$ sudo systemctl daemon-reload
cloud_user@pzolo1c:~$ sudo systemctl enable etcd
Created symlink /etc/systemd/system/multi-user.target.wants/etcd.service → /etc/systemd/system/etcd.service.
cloud_user@pzolo1c:~$ sudo systemctl start etcd
After starting the service on both controllers, we can verify that the cluster is active with:
cloud_user@pzolo2c:~$ sudo ETCDCTL_API=3 etcdctl member list --endpoints=https://127.0.0.1:2379 --cacert=/etc/etcd/ca.pem --cert=/etc/etcd/kubernetes.pem --key=/etc/etcd/kubernetes-key.pem
c1b4898f05dfeb2, started, pzolo2c.mylabserver.com, https://172.31.29.101:2380, https://172.31.29.101:2379, false
f80fdba247d2636b, started, pzolo1c.mylabserver.com, https://172.31.22.121:2380, https://172.31.22.121:2379, false
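etcdctl can also report per-endpoint health, which is a handy check after both members have started (same TLS flags as above):
sudo ETCDCTL_API=3 etcdctl endpoint health --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem --cert=/etc/etcd/kubernetes.pem --key=/etc/etcd/kubernetes-key.pem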
A kubeconfig is a file that stores information about clusters, users, namespaces and auth mechanisms: all the data required to connect to and interact with a kubernetes cluster.
# Admin kubeconfig #
# Connects to the controller on localhost
# The embed certs option allows us to move the config file to other machines
# First step, define the cluster settings
cloud_user@pzolo6c:~/kthw$ kubectl config set-cluster kubernetes-the-hard-way --certificate-authority=ca.pem --embed-certs=true --server=https://127.0.0.1:6443 --kubeconfig=admin.kubeconfig
cloud_user@pzolo6c:~/kthw$ cat admin.kubeconfig | cut -b -50
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRV
server: https://127.0.0.1:6443
name: kubernetes-the-hard-way
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
# Second step, set credentials for the admin user
cloud_user@pzolo6c:~/kthw$ kubectl config set-credentials admin --client-certificate=admin.pem --client-key=admin-key.pem --embed-certs=true --kubeconfig=admin.kubeconfig
cloud_user@pzolo6c:~/kthw$ cat admin.kubeconfig | cut -b -50
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRV
server: https://127.0.0.1:6443
name: kubernetes-the-hard-way
contexts: null
current-context: ""
kind: Config
preferences: {}
users:
- name: admin
user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUS
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFUR
# Third step create the default context
cloud_user@pzolo6c:~/kthw$ kubectl config set-context default --cluster=kubernetes-the-hard-way --user=admin --kubeconfig=admin.kubeconfig
cloud_user@pzolo6c:~/kthw$ cat admin.kubeconfig | cut -b -50
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRV
server: https://127.0.0.1:6443
name: kubernetes-the-hard-way
contexts:
- context:
cluster: kubernetes-the-hard-way
user: admin
name: default
current-context: ""
kind: Config
preferences: {}
users:
- name: admin
user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUS
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFUR
# Final step, specify that we want to use the default context
cloud_user@pzolo6c:~/kthw$ kubectl config use-context default --kubeconfig=admin.kubeconfig
cloud_user@pzolo6c:~/kthw$ cat admin.kubeconfig | cut -b -50
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRV
server: https://127.0.0.1:6443
name: kubernetes-the-hard-way
contexts:
- context:
cluster: kubernetes-the-hard-way
user: admin
name: default
current-context: default
kind: Config
preferences: {}
users:
- name: admin
user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUS
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFUR
We'll need to repeat the same steps for kube-scheduler, kube-controller-manager, kube-proxy and for each worker node (kubelet). The worker nodes and kube-proxy won't connect to localhost:6443, but to the private address of the load balancer (a sketch for kube-proxy follows below).
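As an illustration, the kube-proxy kubeconfig follows the same four steps, but points at the load balancer; KUBERNETES_PUBLIC_ADDRESS is a placeholder for the load balancer address, and the certificate file names assume the kube-proxy cert generated earlier:
kubectl config set-cluster kubernetes-the-hard-way --certificate-authority=ca.pem --embed-certs=true --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials system:kube-proxy --client-certificate=kube-proxy.pem --client-key=kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default --cluster=kubernetes-the-hard-way --user=system:kube-proxy --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig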
Then send the file over to the worker and controller nodes:
cloud_user@pzolo6c:~/kthw$ scp pzolo4c.mylabserver.com.kubeconfig kube-proxy.kubeconfig wrk01:~/
cloud_user@pzolo6c:~/kthw$ scp pzolo5c.mylabserver.com.kubeconfig kube-proxy.kubeconfig wrk02:~/
cloud_user@pzolo6c:~/kthw$ scp admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig ctl01:~/
cloud_user@pzolo6c:~/kthw$ scp admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig ctl02:~/