
[bug] ensure non empty $HOME in deployments #1337


Status: Open

Pull Request

None

Issues

https://github.com/minio/operator/issues/1337

Steps: Reproduce

a. Install multipass

b. Launch an Ubuntu control plane and worker instances, purging any outstanding instances first

brew install --cask multipass

multipass version

multipass delete --purge control-plane-k3s
multipass delete --purge kind-worker
multipass delete --purge kind-worker2
multipass delete --purge kind-worker3
multipass delete --purge kind-worker4

multipass find
multipass launch --name control-plane-k3s --cpus 2 --mem 2048M --disk 5G focal
multipass launch --name kind-worker --cpus 2 --mem 2048M --disk 5G focal
multipass launch --name kind-worker2 --cpus 2 --mem 2048M --disk 5G focal
multipass launch --name kind-worker3 --cpus 2 --mem 2048M --disk 5G focal
multipass launch --name kind-worker4 --cpus 2 --mem 2048M --disk 5G focal
multipass list

c. In multiple terminals, open shells

multipass shell control-plane-k3s
multipass shell kind-worker
multipass shell kind-worker2
multipass shell kind-worker3
multipass shell kind-worker4

d. On each node, install a container runtime. Follow "Install and configure prerequisites" in https://kubernetes.io/docs/setup/production-environment/container-runtimes/

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

sudo sysctl --system

lsmod | grep br_netfilter
lsmod | grep overlay

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward

e. On each node, install containerd (managed as a systemd service), following https://github.com/containerd/containerd/blob/main/docs/getting-started.md

wget https://github.com/containerd/containerd/releases/download/v1.6.14/containerd-1.6.14-linux-arm64.tar.gz
sudo tar Cxzvf /usr/local containerd-1.6.14-linux-arm64.tar.gz
wget https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
sudo mkdir -p /usr/local/lib/systemd/system
sudo mv containerd.service /usr/local/lib/systemd/system/containerd.service
sudo systemctl daemon-reload
sudo systemctl enable --now containerd

f. Install runc

sudo wget https://github.com/opencontainers/runc/releases/download/v1.1.4/runc.arm64
sudo install -m 755 runc.arm64 /usr/local/sbin/runc

g. Install CNI plugins

sudo wget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz
sudo mkdir -p /opt/cni/bin
sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-arm64-v1.1.1.tgz 

h. Generate config file and check logs

sudo mkdir -p /etc/containerd
sudo touch /etc/containerd/config.toml
sudo chmod 666 /etc/containerd/config.toml
sudo containerd config default > /etc/containerd/config.toml
sudo systemctl restart containerd
journalctl -u containerd
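Note: kubeadm defaults kubelet to the systemd cgroup driver (since v1.22), so containerd should match. A minimal sketch of the change in the generated /etc/containerd/config.toml (the default config ships with this set to false):

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true

Restart containerd after editing: sudo systemctl restart containerd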

i. Finish installing kubelet, kubeadm and kubectl

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
sudo apt-add-repository 'deb http://packages.cloud.google.com/apt/ kubernetes-xenial main' 
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

j. Create the cluster (https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/). On the control plane:

sudo kubeadm init --pod-network-cidr=10.244.0.0/16

k. Set up kubeconfig for the current user (as printed at the end of kubeadm init), then install Flannel as the CNI

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/v0.20.2/Documentation/kube-flannel.yml

NB: https://stackoverflow.com/questions/40534837/kubernetes-installation-and-kube-dns-open-run-flannel-subnet-env-no-such-file
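Optionally verify Flannel is running and the control plane reports Ready (plain kubectl checks):

kubectl get pods -A | grep flannel
kubectl get nodes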

l. On each worker node, join the cluster (assuming the API server is at 192.168.64.13)

sudo kubeadm join 192.168.64.13:6443 --token 3uu7ox.ajvchyrf1e2ebjov --discovery-token-ca-cert-hash sha256:5bffe34b9e4598d62867a41f0ce92fb6259f7f1f55fd31ac7fbdb06fee0e3c44 
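The token and hash above are specific to this environment; if needed, a fresh join command can be printed on the control plane with:

kubeadm token create --print-join-command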

Wait for the nodes to join, then check their status on the control plane

kubectl get nodes -o wide

m. Install Helm

curl https://baltocdn.com/helm/signing.asc | sudo apt-key add -
sudo apt-get install apt-transport-https --yes
echo "deb https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm

Check version

helm version

Check available helm repos

helm repo list

n. Install with helm

export SCRIPT_DIR=$HOME/operator/testing
mkdir -p /home/ubuntu/operator
cd /home/ubuntu/
git clone https://github.com/minio/operator.git

Change the operator image to quay.io/minio/operator:v4.5.5, either in testing/common.sh or via the helm values:

    yq -i '.operator.image.tag = "v4.5.5"' "${SCRIPT_DIR}/../helm/operator/values.yaml"

Or modify the deployment after installation:

kubectl -n minio-operator edit deployment/minio-operator
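Alternatively, a minimal sketch using kubectl set image (the container name minio-operator is an assumption about the deployment):

kubectl -n minio-operator set image deployment/minio-operator minio-operator=quay.io/minio/operator:v4.5.5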

Install a local path provisioner

kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml

Comment out kind functions (since these steps don't use kind)

vi /home/ubuntu/operator/testing/check-helm.sh
sudo apt install make -y
cd operator
vi testing/helm/values.yaml

Run

$SCRIPT_DIR/check-helm.sh
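A quick sanity check that the script deployed the operator and a tenant (the tenant lands in the default namespace, per the port-forward below):

kubectl -n minio-operator get pods
kubectl -n default get pods,svc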

o. Access the console from outside the cluster, e.g. https://192.168.64.13:9443/login (192.168.64.13 is the control plane node; buckets are at https://192.168.64.13:9443/buckets)

kubectl port-forward svc/minio1-console -n default 9443 --address 0.0.0.0

Use minio/minio123

p. Validate, then switch the container runtime to crun. On all nodes, install crun:

sudo apt -y install crun

Then on all nodes stop kubelet

sudo systemctl stop kubelet

On control plane

kubectl drain kind-worker --ignore-daemonsets
kubectl drain kind-worker2 --ignore-daemonsets
kubectl drain kind-worker3 --ignore-daemonsets
kubectl drain kind-worker4 --ignore-daemonsets
kubectl drain control-plane-k3s --ignore-daemonsets

Modify /etc/containerd/config.toml to add a crun runtime (see https://github.com/containerd/containerd/discussions/6162 or https://raw.githubusercontent.com/containerd/containerd/main/docs/cri/config.md):

vi /etc/containerd/config.toml

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.crun]
  runtime_type = "io.containerd.runc.v2"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.crun.options]
  BinaryName = "/usr/bin/crun"

Also set the default runtime to crun (placement sketched below)

default_runtime_name = "crun"
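A minimal sketch of where default_runtime_name sits in /etc/containerd/config.toml, alongside the runtimes defined above:

[plugins."io.containerd.grpc.v1.cri".containerd]
  default_runtime_name = "crun"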

Create new runtime class

cat <<EOF | kubectl apply -f -
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: crun
handler: crun
EOF
cat <<EOF | kubectl apply -f -
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: runc
handler: runc
EOF
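Verify both runtime classes exist:

kubectl get runtimeclass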

Apply a storage class

cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: rancher.io/local-path
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
EOF
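Verify the storage class is present and marked as default:

kubectl get storageclass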

On the control plane, uncordon the nodes

kubectl uncordon kind-worker
kubectl uncordon kind-worker2
kubectl uncordon kind-worker3
kubectl uncordon kind-worker4
kubectl uncordon control-plane-k3s 

On all nodes run

sudo systemctl restart containerd
sudo systemctl start kubelet
journalctl -u kubelet
journalctl -u containerd

q. Add the following to the tenant and validate that its pods are working properly

runtimeClassName: runc

r. Change the tenant to runtimeClassName: crun and validate that its pods are not working properly, failing with

minio: <ERROR> Unable to get mcConfigDir. exit status 2.

NOTE: This is fixed if the following is added to the tenant spec (keeping runtimeClassName: crun):

spec:
  env:
  - name: HOME
    value: /
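To confirm the workaround, HOME can be inspected inside a tenant pod (the pod name is a placeholder to fill in from kubectl get pods):

kubectl -n default exec <tenant-pod> -- env | grep '^HOME='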

Steps: Add helm for console and operator

a. Add the following under the pod spec in

helm/operator/templates/operator-deployment.yaml
helm/operator/templates/console-deployment.yaml

(for the console template, use .Values.console.runtimeClassName)

{{- with .Values.operator.runtimeClassName }}
runtimeClassName: 
{{- toYaml . | nindent 8 }}
{{- end }}
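To check the rendered output, the chart can be templated locally (a sketch, run from the repo root with an arbitrary release name):

helm template minio-operator helm/operator --set operator.runtimeClassName=runc --set console.runtimeClassName=runc | grep -B 1 -A 1 runtimeClassName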

b. In helm/operator/values.yaml, add the following as a child of both operator and console:

runtimeClassName: runc

e.g.

# Default values for minio-operator.
operator:
...
  ## Configure Runtime Class
  runtimeClassName: runc
console:
...
  ## Configure Runtime Class
  runtimeClassName: runc
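The same values can also be set at install time instead of editing values.yaml (a sketch; the release name and namespace are assumptions):

helm install minio-operator helm/operator -n minio-operator --create-namespace \
  --set operator.runtimeClassName=runc \
  --set console.runtimeClassName=runc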

c. Create a runtime class as soon as the cluster comes online, before the console and operator are deployed

cat <<EOF | kubectl apply -f -
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: runc
handler: runc
EOF

d. Observe console and operator as Running

kubectl -n minio-operator get pods

e. Observe console and operator with runtimeClass=runc

kubectl -n minio-operator describe pod/console-588bb55c87-v9pd7 | grep -i runtime
kubectl -n minio-operator describe pod/minio-operator-576dd8f499-m65sg | grep -i runtime
kubectl -n minio-operator describe pod/minio-operator-576dd8f499-pxxsn | grep -i runtime

Steps: Add kustomize for console and operator

a. Skip

b. In resources/base/console-ui.yaml and resources/base/deployment.yaml, add the following as a child of the pod template spec:

runtimeClassName: runc

e.g.

# resources/base/console-ui.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: console
  labels:
    app.kubernetes.io/instance: minio-operator
    app.kubernetes.io/name: operator
spec:
  template:
    spec:
      runtimeClassName: runc

# resources/base/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: minio-operator
  namespace: minio-operator
  labels:
    app.kubernetes.io/instance: minio-operator
    app.kubernetes.io/name: operator
spec:
  template:
    spec:
      runtimeClassName: runc
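Instead of editing the base manifests directly, the same change can be applied as a kustomize overlay (a sketch; the overlay directory and base path are assumptions):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base
patches:
- target:
    kind: Deployment
    name: minio-operator
  patch: |-
    - op: add
      path: /spec/template/spec/runtimeClassName
      value: runc
- target:
    kind: Deployment
    name: console
  patch: |-
    - op: add
      path: /spec/template/spec/runtimeClassName
      value: runc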

c. Create a runtime class as soon as the cluster comes online, before the console and operator are deployed

cat <<EOF | kubectl apply -f -
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: runc
handler: runc
EOF

d. Observe console and operator as Running

kubectl -n minio-operator get pods

e. Observe console and operator with runtimeClass=runc

kubectl -n minio-operator describe pod/console-645b9f6678-m6v68 | grep -i runtime
kubectl -n minio-operator describe pod/minio-operator-5559795c86-lcmxj | grep -i runtime
kubectl -n minio-operator describe pod/minio-operator-5559795c86-tjmjv | grep -i runtime