We now start bootstrapping our control plane. This lab covers setting up an etcd cluster and configuring it for high availability and secure remote access.
SSH into the controller instances one at a time, or into all of them simultaneously using tmux:
ssh -i ~/.ssh/[key] root@[node]
💡 The commands need to be executed on all three control plane instances
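If you go the tmux route, a minimal sketch like the following mirrors keystrokes to all panes (the session name and layout are just examples; run the ssh command above in each pane):
# One pane per controller, with input mirrored to all of them
tmux new-session -d -s controllers
tmux split-window -t controllers
tmux split-window -t controllers
tmux select-layout -t controllers even-vertical
tmux set-window-option -t controllers synchronize-panes on
tmux attach -t controllers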
Download the etcd binaries
wget -q --show-progress --https-only --timestamping \
"https://github.com/etcd-io/etcd/releases/download/v3.5.11/etcd-v3.5.11-linux-amd64.tar.gz"
Then extract and install them
tar -xvf etcd-v3.5.11-linux-amd64.tar.gz
sudo mv etcd-v3.5.11-linux-amd64/etcd* /usr/local/bin/
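As a quick sanity check, confirm the binaries are on the PATH and report the expected version:
etcd --version
etcdctl version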
Create the required directories
sudo mkdir -pv /etc/etcd /var/lib/etcd
sudo chmod -v 700 /var/lib/etcd
Copy the certificates
sudo cp -v ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/
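A quick listing confirms all three files landed where the unit file below expects them:
ls -l /etc/etcd/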
The instance's internal IP address will be used to serve client requests and to communicate with etcd cluster peers. We retrieve it using the metadata API.
export TOKEN=$(curl -X PUT -H "Metadata-Token-Expiry-Seconds: 3600" http://169.254.169.254/v1/token)
IPAM=$(curl -H "Metadata-Token: $TOKEN" http://169.254.169.254/v1/network | grep interfaces.ipam_address | cut -d ' ' -f 2)
INTERNAL_IP=${IPAM::-3}
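Note that ${IPAM::-3} simply drops the last three characters, i.e. a /24-style suffix; if your network uses a different prefix length, adjust accordingly. A loose sanity check before continuing:
echo "INTERNAL_IP=${INTERNAL_IP}"
# Rough IPv4 shape check; the warning should not print on success
[[ ${INTERNAL_IP} =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]] || echo "unexpected value: ${INTERNAL_IP}" >&2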
Each etcd member must have a unique name within an etcd cluster. Set the etcd name to match the hostname of the current compute instance:
ETCD_NAME=$(hostname -s)
Create the service unit file
cat <<EOF | sudo tee /etc/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/etcd-io/etcd
[Service]
Type=notify
ExecStart=/usr/local/bin/etcd \\
--name ${ETCD_NAME} \\
--cert-file=/etc/etcd/kubernetes.pem \\
--key-file=/etc/etcd/kubernetes-key.pem \\
--peer-cert-file=/etc/etcd/kubernetes.pem \\
--peer-key-file=/etc/etcd/kubernetes-key.pem \\
--trusted-ca-file=/etc/etcd/ca.pem \\
--peer-trusted-ca-file=/etc/etcd/ca.pem \\
--peer-client-cert-auth \\
--client-cert-auth \\
--initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
--listen-peer-urls https://${INTERNAL_IP}:2380 \\
--listen-client-urls https://${INTERNAL_IP}:2379,https://127.0.0.1:2379 \\
--advertise-client-urls https://${INTERNAL_IP}:2379 \\
--initial-cluster-token etcd-cluster-0 \\
--initial-cluster khw-cp0=https://10.240.0.10:2380,khw-cp1=https://10.240.0.11:2380,khw-cp2=https://10.240.0.12:2380 \\
--initial-cluster-state new \\
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
💡 The --initial-cluster names and IPs must match the values we defined for our control plane instances
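Before loading the unit, you can optionally lint it; systemd-analyze flags typos in directive names (it may also print unrelated warnings about other units):
sudo systemd-analyze verify /etc/systemd/system/etcd.service
Then reload systemd, and enable and start the service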
sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd
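Because the unit uses Type=notify, the start command on the first node will block until etcd reaches quorum, so don't be alarmed if it appears to hang until the other members are up. You can follow the logs from a second shell in the meantime:
sudo journalctl -u etcd -f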
After executing the previous commands on all the control plane nodes, we need to verify that everything is working as expected. List the etcd cluster members:
sudo etcdctl member list \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/etcd/ca.pem \
--cert=/etc/etcd/kubernetes.pem \
--key=/etc/etcd/kubernetes-key.pem
Result
3a51233972cb5131, started, khw-cp2, https://10.240.0.12:2380, https://10.240.0.12:2379, false
f98dc20bc13675a0, started, khw-cp0, https://10.240.0.10:2380, https://10.240.0.10:2379, false
ffed16797ff684b5, started, khw-cp1, https://10.240.0.11:2380, https://10.240.0.11:2379, false
If we get a list with the three instances marked as started, we're ready to move on to the next step.
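As a final check, the same TLS flags work with etcdctl's endpoint health subcommand, confirming each member answers client traffic (endpoints taken from the cluster list above):
sudo etcdctl endpoint health \
  --endpoints=https://10.240.0.10:2379,https://10.240.0.11:2379,https://10.240.0.12:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem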