The kubectl command-line tool uses kubeconfig files to find the information it needs to choose a cluster and communicate with the API server.
💡 Commands for this lab must be executed in the same directory where the certificate files reside.
Each kubeconfig requires a Kubernetes API server to connect to. To support high availability, we'll use the NodeBalancer public address we generated before. You can copy it from the Cloud Manager or get it using the CLI:
KUBERNETES_PUBLIC_ADDRESS=$(linode nodebalancers ls --format label,ipv4 --text --delimiter ' ' | grep khw | cut -f 2 -d ' ')
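If the variable ends up empty, the grep pattern didn't match your NodeBalancer's label; a quick echo confirms it before proceeding:
echo ${KUBERNETES_PUBLIC_ADDRESS}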
When generating kubeconfig files for Kubelets, the client certificate matching each Kubelet's node name must be used. This ensures the Kubelets are properly authorized by the Kubernetes Node Authorizer.
For each worker node we set:
- The cluster public address, pointing to the API server
- The credentials that will identify the node
- The default context, pointing to the data above
💡 A context element in a kubeconfig file is used to group access parameters under a convenient name. Each context has three parameters: cluster, namespace, and user.
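Before running the loop below, you can optionally confirm that a worker certificate carries the identity the Node Authorizer expects, for example (assuming the certificates from the previous lab sit in the current directory):
openssl x509 -in khw-wn0.pem -noout -subject
The subject should include CN = system:node:khw-wn0 and O = system:nodes.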
for i in 0 1 2; do
INSTANCE="khw-wn${i}"
echo "Set a cluster entry in kubeconfig for ${INSTANCE}"
kubectl config set-cluster khw \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
--kubeconfig=${INSTANCE}.kubeconfig
echo "Set a user entry in kubeconfig for ${INSTANCE}"
kubectl config set-credentials system:node:${INSTANCE} \
--client-certificate=${INSTANCE}.pem \
--client-key=${INSTANCE}-key.pem \
--embed-certs=true \
--kubeconfig=${INSTANCE}.kubeconfig
echo "Set a context entry in kubeconfig for ${INSTANCE}"
kubectl config set-context default \
--cluster=khw \
--user=system:node:${INSTANCE} \
--kubeconfig=${INSTANCE}.kubeconfig
echo "Set the current-context in the kubeconfig file"
kubectl config use-context default --kubeconfig=${INSTANCE}.kubeconfig
done
Result:
khw-wn0.kubeconfig
khw-wn1.kubeconfig
khw-wn2.kubeconfig
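To sanity-check one of the generated files, list its context entry and confirm the cluster/user pairing described above:
kubectl config get-contexts --kubeconfig=khw-wn0.kubeconfig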
We now generate the config file that will be used by the kube-proxy service:
kubectl config set-cluster khw \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials system:kube-proxy \
--client-certificate=kube-proxy.pem \
--client-key=kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=khw \
--user=system:kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Result:
kube-proxy.kubeconfig
As we've done before, we copy the files to the worker nodes:
for i in 0 1 2; do
INSTANCE="khw-wn${i}"
EXTERNAL_IP=$(linode linodes ls --format label,ipv4 --text --delimiter ' ' | grep ${INSTANCE} | cut -f 2 -d ' ')
echo ${INSTANCE},$EXTERNAL_IP
scp -i ~/.ssh/[key] ${INSTANCE}.kubeconfig kube-proxy.kubeconfig root@${EXTERNAL_IP}:~
done
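As a quick spot check that the copies landed, you can list them remotely; note this reuses the last ${EXTERNAL_IP} set by the loop, so it inspects khw-wn2:
ssh -i ~/.ssh/[key] root@${EXTERNAL_IP} 'ls -l ~/*.kubeconfig'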
The following components run on the control plane instances themselves, so their kubeconfigs point at the API server via localhost. First, the kube-controller-manager:
kubectl config set-cluster khw \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-credentials system:kube-controller-manager \
--client-certificate=kube-controller-manager.pem \
--client-key=kube-controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-context default \
--cluster=khw \
--user=system:kube-controller-manager \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
Result:
kube-controller-manager.kubeconfig
Next, the same steps produce the kubeconfig for the kube-scheduler:
kubectl config set-cluster khw \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler \
--client-certificate=kube-scheduler.pem \
--client-key=kube-scheduler-key.pem \
--embed-certs=true \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config set-context default \
--cluster=khw \
--user=system:kube-scheduler \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
Result:
kube-scheduler.kubeconfig
Last in this group is the config for the admin user, also to be used within the control plane instances:
kubectl config set-cluster khw \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=admin.kubeconfig
kubectl config set-credentials admin \
--client-certificate=admin.pem \
--client-key=admin-key.pem \
--embed-certs=true \
--kubeconfig=admin.kubeconfig
kubectl config set-context default \
--cluster=khw \
--user=admin \
--kubeconfig=admin.kubeconfig
kubectl config use-context default --kubeconfig=admin.kubeconfig
Result:
admin.kubeconfig
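Before moving on you can inspect the admin file; --minify limits the output to the current context, and kubectl elides the embedded certificate material by default:
kubectl config view --kubeconfig=admin.kubeconfig --minify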
We now generate a config file with the key for encrypting cluster data at rest:
ENCRYPTION_KEY=$(openssl rand -base64 32)
cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF
Result:
encryption-config.yaml
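Since aescbc requires a 32-byte key, it's worth checking that the generated secret decodes to the expected length:
echo "${ENCRYPTION_KEY}" | openssl base64 -d | wc -c
This should print 32.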
Finally, we distribute the files to the control plane instances:
for i in 0 1 2; do
INSTANCE="khw-cp${i}"
EXTERNAL_IP=$(linode linodes ls --format label,ipv4 --text --delimiter ' ' | grep ${INSTANCE} | cut -f 2 -d ' ')
echo ${INSTANCE},$EXTERNAL_IP
scp -i ~/.ssh/[key] encryption-config.yaml kube-controller-manager.kubeconfig \
kube-scheduler.kubeconfig admin.kubeconfig root@${EXTERNAL_IP}:~
done
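The same spot check as for the workers applies here (again using the loop's last ${EXTERNAL_IP}, i.e. khw-cp2):
ssh -i ~/.ssh/[key] root@${EXTERNAL_IP} 'ls -l ~/encryption-config.yaml ~/*.kubeconfig'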