Deployment of Portals

This document describes how to create a Kubernetes cluster and how to deploy Portals on it.

Note

This guide only covers the cluster deployment the Fachschaftsrat Elektro- und Informationstechnik uses on Hetzner Cloud with Cloudflare DNS. It is not a general guide on how to deploy the application. Please see the README for more information.

This guide will:

  1. Install prerequisites
  2. Set up the Hetzner Cloud project
  3. Create a Management Cluster for Cluster API
  4. Install Cluster API on the Management Cluster
  5. Create a Workload Cluster with Cluster API
  6. Deploy Cluster Addons on the Workload Cluster
  7. Deploy the Portals Application on the Workload Cluster

Important

The files used in this guide contain placeholders. You need to copy the files and replace the placeholders with your own values/secrets.

Warning

You need good knowledge of Kubernetes to follow this guide. Kubernetes basics will not be explained.

Prerequisites

To follow this guide you will need:

  • A Linux client to work from (I would suggest WSL2 on Windows)
  • A Hetzner Cloud account with an empty project where the cluster should be deployed
  • A Cloudflare account with a domain and a zone for the domain
  • A personal SSH key pair (or more keys if you want to use different keys or grant access to more people)
  • An S3-compatible object storage (e.g. AWS S3 or MinIO)

Step 0: Install prerequisites

You will need some tools installed on your client to follow this guide. You can install them any way you want or use the following commands.

You will need: hcloud-cli, kubectl, helm and clusterctl.

Optional: bash-completion, Homebrew (used below to install hcloud and clusterctl), kubectx/kubens and fzf.

# updates
sudo apt update
sudo apt upgrade -y
sudo apt install bash-completion -y

# homebrew
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
(echo; echo 'eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)"') >> /home/$USER/.profile
eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)"
cat <<EOF >> ~/.profile
if type brew &>/dev/null
then
  HOMEBREW_PREFIX="$(brew --prefix)"
  if [[ -r "${HOMEBREW_PREFIX}/etc/profile.d/bash_completion.sh" ]]
  then
    source "${HOMEBREW_PREFIX}/etc/profile.d/bash_completion.sh"
  else
    for COMPLETION in "${HOMEBREW_PREFIX}/etc/bash_completion.d/"*
    do
      [[ -r "${COMPLETION}" ]] && source "${COMPLETION}"
    done
  fi
fi
EOF

# hcloud-cli
brew install hcloud

# kubectl
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
sudo mkdir -p -m 755 /etc/apt/keyrings # the keyrings directory may not exist on older releases
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubectl
kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl > /dev/null
echo "alias k=kubectl" >> ~/.bashrc
echo "complete -o default -F __start_kubectl k" >> ~/.bashrc
echo "export KUBE_EDITOR=\"nano\"" >> ~/.bashrc

# helm
curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
sudo apt-get install apt-transport-https --yes
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
echo "source <(helm completion bash)" >> ~/.bashrc

# kubectx and kubens
sudo apt install kubectx
brew install fzf

# clusterctl
brew install clusterctl
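
To make sure everything is installed correctly, you can verify the tool versions (the exact output will vary depending on when you install):

# verify the installed tools
hcloud version
kubectl version --client
helm version
clusterctl version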

Step 1.1: Set up Hetzner Cloud

Create API Tokens

You need to create some API tokens inside the cloud project. You can do this in the Hetzner Cloud Console under Access > API Tokens. You will need the following tokens:

  • cli@<YOUR_NAME>@<YOUR_CLIENT_NAME> (used by hcloud-cli on your linux client)
  • capi@<CLUSTER_NAME> (used by the hcloud capi controller inside the management cluster)
  • ccm@<CLUSTER_NAME> (used by the hcloud controller manager inside the cluster)
  • csi@<CLUSTER_NAME> (used by the hcloud csi driver inside the cluster)

You can change the token names to fit your needs, but remember to adjust them in the following commands.

Important

Please save the tokens in a safe place. You will need the values later in this guide and you will not be able to see them again.

Set up hcloud-cli

You need to set up the hcloud-cli on your Linux client. You can do this with the following command. Replace the placeholder with your value.

hcloud context create <CONTEXT_NAME> # replace context name with a name of your choice (e.g. the hcloud project name)

The command will ask for the token you have created in the previous step.
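
To check that the context works, you can list your contexts and the (still empty) project resources:

# verify hcloud-cli access
hcloud context list
hcloud server list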

Upload SSH Keys

You need to upload your public SSH key to the cloud project. You can do this in the Hetzner Cloud Console under Access > SSH Keys or by using the following command. You can upload multiple keys and reference them later to grant access to more people.

hcloud ssh-key create --public-key-from-file ~/.ssh/<YOUR_KEY_FILE>.pub --name <YOUR_NAME>@<YOUR_CLIENT_NAME>

Step 1.2: Set up Cloudflare

Create API Token

You need to create two API tokens for Cloudflare. You can do this in the Cloudflare Console under My Profile > API Tokens. You will need the following tokens:

  • Zone Edit for all needed DNS Zones (for your client)
  • Zone Edit for all needed DNS Zones (for cert-manager)

You can change the token names to fit your needs, but remember to adjust them in the following commands.
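
If you want to check a token before using it, Cloudflare offers a verify endpoint. A quick sanity check could look like this (it verifies the token sent in the Authorization header):

curl --request GET --url https://api.cloudflare.com/client/v4/user/tokens/verify --header 'Authorization: Bearer <YOUR_CLOUDFLARE_API_TOKEN>'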

Step 1.3: Set up S3

In this guide we will use AWS S3 because it is cheap and easy to use. Feel free to use any other S3-compatible object storage such as MinIO.

Create Bucket

If you use AWS, go to the AWS console and create a bucket. You don't need any special bucket settings.

If you are working from your root AWS account, create a new IAM user with S3 permissions and generate an access key and secret key for that user. Do not create or publish access keys for your root account.

Remember to create a user, a role with an attached policy for S3 access, and a bucket policy that grants the user access to the bucket.

Please note your bucket name, bucket region, access key and secret key.
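
If you write the access policy yourself, a minimal sketch could look like the following (the bucket name is a placeholder; adjust the actions to what you actually need):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::<YOUR_BUCKET_NAME>"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::<YOUR_BUCKET_NAME>/*"
    }
  ]
}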

Step 2: Create a Management Cluster

To create a workload Kubernetes cluster with Cluster API you first need a management cluster. This cluster runs the Cluster API components and is used to create the workload cluster.

In this example we will use kind to create the management cluster.

Warning

Exposing kind clusters to the internet is not recommended and poses a security risk.

Create VM

To create a VM to run kind on, use the following command:

hcloud server create --location nbg1 --image debian-12 --name initial-mgmt-cluster --ssh-key <YOUR_NAME>@<YOUR_CLIENT_NAME> --type cx21

Wait for the server to be created and then login to the server with ssh root@<IP_ADDRESS>.

Set up VM and create cluster

Run the following commands on the server to create a kind kubernetes cluster:

# updates
apt update
apt upgrade -y

# install docker
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh

# install kind
[ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind

# create cluster config. remember to replace the placeholder
cat <<EOF > initial-mgmt-cluster.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: mgmt
networking:
  apiServerAddress: "<SERVER_IP_INITIAL_MGMT_CLUSTER_VM>"
  apiServerPort: 6443
nodes:
  - role: control-plane
  - role: worker
EOF

# create cluster
kind create cluster --config initial-mgmt-cluster.yaml
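
You can verify on the server that the cluster was created, for example with:

# list kind clusters and their node containers
kind get clusters
docker ps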

Gain cluster access

Run the following commands on your local machine to copy the kubeconfig file from the server to your local machine:

# copy kubeconfig from server to local machine
scp root@<IP_ADDRESS>:/root/.kube/config ~/.kube/initial-mgmt-cluster.kubeconfig
chmod 600 ~/.kube/initial-mgmt-cluster.kubeconfig

# set currently used kubeconfig
export KUBECONFIG=~/.kube/initial-mgmt-cluster.kubeconfig

# test connection
kubectl get nodes

You now have an exposed kind cluster running on the server.

Step 3: Install Cluster API

Prepare Management Cluster

Before installing Cluster API you need a workaround for the container images: pre-load the controller images into the kind cluster.

Log in to the management cluster server again with ssh root@<IP_ADDRESS> and run the following commands:

docker pull registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:<YOUR_CAPI_VERSION>
kind load docker-image -n mgmt registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:<YOUR_CAPI_VERSION>
docker pull registry.k8s.io/cluster-api/kubeadm-control-plane-controller:<YOUR_CAPI_VERSION>
kind load docker-image -n mgmt registry.k8s.io/cluster-api/kubeadm-control-plane-controller:<YOUR_CAPI_VERSION>
docker pull registry.k8s.io/cluster-api/cluster-api-controller:<YOUR_CAPI_VERSION>
kind load docker-image -n mgmt registry.k8s.io/cluster-api/cluster-api-controller:<YOUR_CAPI_VERSION>
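
To confirm the images were loaded, you can list the images inside a kind node. Assuming the default naming for the mgmt cluster, the control-plane node container is called mgmt-control-plane:

docker exec mgmt-control-plane crictl images | grep cluster-api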

Install Cluster API

With the following command you will install Cluster API on the management cluster.

Important

Make sure that you have selected the right Kubernetes cluster (you can check with kubectx).

clusterctl init --core cluster-api --bootstrap kubeadm --control-plane kubeadm --infrastructure hetzner

You can check if the installation was successful by running kubectl get pods -A. You should see pods in the caph-system, capi-system, capi-kubeadm-bootstrap-system and capi-kubeadm-control-plane-system namespaces.

Step 4: Create a Workload Cluster

Create Cluster

Run the following commands to create a workload cluster:

Note

Some values need to be inserted base64 encoded. You can use echo -n "<VALUE>" | base64 -w 0 to encode them.

# replace placeholders before applying
kubectl apply -f cluster/
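
Provisioning can take a few minutes. Assuming the manifests in cluster/ place the objects in a namespace named after the cluster (as the kubeconfig secret in the next steps suggests), you can watch the progress like this:

# watch the cluster provisioning
kubectl get cluster -n <CLUSTER_NAME>
clusterctl describe cluster <CLUSTER_NAME> -n <CLUSTER_NAME>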

Create Infrastructure beneath Cluster

After the HetznerCluster object is ready (you can verify this with k get hetznercluster <CLUSTER_NAME>), you have to run the following commands:

# create nat gateway
hcloud network add-route <CLUSTER_NAME> --destination 0.0.0.0/0 --gateway 10.0.255.254
hcloud server create --location nbg1 --image debian-11 --name <CLUSTER_NAME>-nat-gateway --placement-group <CLUSTER_NAME>-gw --ssh-key <YOUR_NAME>@<YOUR_CLIENT_NAME> --type cx11 --user-data-from-file ./nat-gateway/cloud-config.yaml
hcloud server attach-to-network -n <CLUSTER_NAME> --ip 10.0.255.254 <CLUSTER_NAME>-nat-gateway

# create dns records
curl --request POST --url https://api.cloudflare.com/client/v4/zones/<CLOUDFLARE_ZONE_ID>/dns_records --header 'Content-Type: application/json' --header 'Authorization: Bearer <YOUR_CLOUDFLARE_API_TOKEN>' --data '{"content": "<IP_OF_API_LOADBALANCER>", "name": "<CLUSTER_API_URL>", "proxied": false, "type": "A", "comment": "Kubernetes API", "tags": [], "ttl": 1}'
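
The server create command above reads ./nat-gateway/cloud-config.yaml. If you still need to write that file, its job is to turn the VM into a NAT gateway for the private network. A minimal sketch, assuming the 10.0.0.0/16 network from this guide and eth0 as the public interface (your actual file may differ, e.g. to persist the settings across reboots):

#cloud-config
# enable IP forwarding and masquerade traffic from the private network
runcmd:
  - sysctl -w net.ipv4.ip_forward=1
  - iptables -t nat -A POSTROUTING -s 10.0.0.0/16 -o eth0 -j MASQUERADE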

Get cluster access

To get cluster access you can run the following commands:

# get kubeconfig
kubectl get secret -n <CLUSTER_NAME> <CLUSTER_NAME>-kubeconfig -o jsonpath='{.data.value}' | base64 -d > ~/.kube/<CLUSTER_NAME>.kubeconfig
chmod 600 ~/.kube/<CLUSTER_NAME>.kubeconfig

# set currently used kubeconfig
export KUBECONFIG=~/.kube/<CLUSTER_NAME>.kubeconfig
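
You can test the connection right away. Note that the nodes will report NotReady until the CNI is deployed in the next step:

# test connection
kubectl get nodes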

Deploy CNI and CCM

To finish the cluster setup you need to deploy the CNI (container network interface) and the CCM (cloud controller manager). You can do this by running the following commands:

# cilium (cni)
helm repo add cilium https://helm.cilium.io/
helm upgrade --install cilium cilium/cilium --namespace cilium-system --create-namespace -f deployments/addons/cilium-values.yaml # remember to replace the placeholders

# ccm
kubectl create ns hcloud-system
kubectl apply -f deployments/addons/ccm-secret.yaml # remember to replace the placeholders
helm repo add hcloud https://charts.hetzner.cloud
helm upgrade --install ccm hcloud/hcloud-cloud-controller-manager -n hcloud-system -f deployments/addons/ccm-values.yaml

Wait for Cluster to be ready

After deploying the CNI and CCM you have to wait for all nodes to come up. You can watch the process with watch kubectl get nodes,pods -A.

Step 5: Deploy Cluster Addons

In this step you will deploy the addons to the cluster. You can do this by running the following commands.

This will install:

  • hcloud csi (container storage interface)
  • metrics server
  • nginx ingress
  • cert-manager
  • postgresql cluster
  • redis cluster
  • monitoring
  • logging

# csi (container storage interface)
kubectl apply -f deployments/addons/csi-secret.yaml
kubectl apply -f deployments/addons/csi-2.7.0.yaml

# metrics server
helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/
helm upgrade --install metrics-server metrics-server/metrics-server --namespace kube-system

# ingress
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace -f deployments/addons/ingress-nginx-values.yaml

# create dns records for ingress
curl --request POST --url https://api.cloudflare.com/client/v4/zones/<CLOUDFLARE_ZONE_ID>/dns_records --header 'Content-Type: application/json' --header 'Authorization: Bearer <YOUR_CLOUDFLARE_API_TOKEN>' --data '{"content": "<IP_OF_INGRESS_LOADBALANCER>", "name": "<CLUSTER_INGRESS_URL>", "proxied": false, "type": "A", "comment": "Kubernetes Cluster <CLUSTER_NAME> Ingress", "tags": [], "ttl": 1}'
curl --request POST --url https://api.cloudflare.com/client/v4/zones/<CLOUDFLARE_ZONE_ID>/dns_records --header 'Content-Type: application/json' --header 'Authorization: Bearer <YOUR_CLOUDFLARE_API_TOKEN>' --data '{"content": "<CLUSTER_INGRESS_URL>", "name": "*.<CLUSTER_INGRESS_URL>.<BASE_DOMAIN>", "proxied": false, "type": "CNAME", "comment": "Kubernetes Cluster <CLUSTER_NAME> Ingress", "tags": [], "ttl": 1}'

# cert-manager
helm repo add jetstack https://charts.jetstack.io
helm upgrade --install cert-manager jetstack/cert-manager --namespace cert-manager-system --create-namespace -f deployments/addons/cert-manager-values.yaml
kubectl apply -f deployments/addons/cert-manager-issuer.yaml

# postgresql operator
helm repo add cnpg https://cloudnative-pg.github.io/charts
helm upgrade --install cnpg cnpg/cloudnative-pg --namespace postgresql-system --create-namespace

# redis operator
helm repo add ot-helm https://ot-container-kit.github.io/helm-charts/
helm upgrade --install redis-operator ot-helm/redis-operator --namespace redis-system --create-namespace

# monitoring
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm upgrade --install prometheus prometheus-community/kube-prometheus-stack --namespace monitoring-system --create-namespace -f deployments/addons/prometheus-values.yaml
kubectl apply -f deployments/addons/cilium-pod-monitor.yaml
kubectl apply -f deployments/addons/pgsql-operator-pod-monitor.yaml
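
The cert-manager-issuer.yaml applied above configures the ACME issuer. If you need to write your own, a ClusterIssuer with a Cloudflare DNS01 solver typically looks like the following sketch (all names are illustrative; the referenced token secret must exist in the cert-manager namespace):

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    email: <YOUR_EMAIL>
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
      - dns01:
          cloudflare:
            apiTokenSecretRef:
              name: cloudflare-api-token
              key: api-token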

Step 6: Deploy Portals

Deploy Portals

In this step you will deploy the Portals application, a PostgreSQL database and a Redis cluster. You can do this by running the following commands.

Remember to replace the placeholders in the values files with your values.

# create namespace
kubectl create namespace portals

# postgresql cluster
kubectl apply -f deployments/portals/pgsql.yaml

# get the db password
kubectl get secret -n portals portals-db-app -o jsonpath='{.data.password}' | base64 -d

# redis cluster
kubectl apply -f deployments/portals/redis-pw-secret.yaml
helm repo add ot-helm https://ot-container-kit.github.io/helm-charts/
helm upgrade --install portals-redis ot-helm/redis-cluster --namespace portals -f deployments/portals/redis-values.yaml

# portals
kubectl create configmap -n portals portals-tutors-csv --from-file=tutors.csv=../database/seeders/tutors.csv
kubectl create configmap -n portals portals-students-csv --from-file=students.csv=../database/seeders/students.csv
helm repo add fsr5-fhaachen https://fsr5-fhaachen.github.io/charts/
helm upgrade --install portals fsr5-fhaachen/portals --namespace portals -f deployments/portals/portals-values.yaml
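
Before setting up DNS, you can check that everything came up, for example with:

# verify the portals deployment
kubectl get pods,svc,ingress -n portals
kubectl get clusters.postgresql.cnpg.io -n portals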

Set up DNS

To set up the DNS records for Portals you need to create DNS records for the ingress. You can do this by running the following commands:

# wildcard record for ingress
curl --request POST --url https://api.cloudflare.com/client/v4/zones/<CLOUDFLARE_ZONE_ID>/dns_records --header 'Content-Type: application/json' --header 'Authorization: Bearer <YOUR_CLOUDFLARE_API_TOKEN>' --data '{"content": "<IP_OF_INGRESS_LOADBALANCER>", "name": "*.<YOUR_INGRESS_DOMAIN>", "proxied": false, "type": "A", "comment": "Kubernetes Ingress", "tags": [], "ttl": 1}'

# record for portals (only if not inside ingress wildcard)
curl --request POST --url https://api.cloudflare.com/client/v4/zones/<CLOUDFLARE_ZONE_ID>/dns_records --header 'Content-Type: application/json' --header 'Authorization: Bearer <YOUR_CLOUDFLARE_API_TOKEN>' --data '{"content": "<ONE_OF_THE_WILDCARD_DOMAINS_BEFORE>", "name": "<YOUR_PORTALS_DOMAIN>", "proxied": false, "type": "CNAME", "comment": "Kubernetes Ingress Portals", "tags": [], "ttl": 1}'

Done! You can now connect to Portals on your configured URL.

Load Test

If you want to load test the application, you can use wrk:

docker run -it --name load-test --rm alpine:latest /bin/sh -c "apk update && apk add wrk && apk add curl && ulimit -n 65535 && wrk -t12 -c400 -d120s https://<YOUR_PORTALS_DOMAIN>/login"