https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/
https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
- Easily scaling the number of replicas up / down
- Rolling updates to deploy a new software version
Here's an example of a deployment for an NGINX app:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-deployment
  template:
    metadata:
      labels:
        app: my-deployment
    spec:
      containers:
      - name: nginx
        image: nginx:1.19.1
        ports:
        - containerPort: 80
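To create it and verify the replicas come up (assuming the manifest is saved as deployment.yaml):
kubectl apply -f deployment.yaml
kubectl get deployment my-deployment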
https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment
Three options for rolling updates:
1- Using the imperative command kubectl set image (use --record to easily roll back):
kubectl set image deployment/my-deployment nginx=nginx:1.19.2 --record
2- Using the imperative command kubectl edit deploy:
kubectl edit deployment my-deployment
Note: changes are applied as soon as the editor is saved and closed, no need for kubectl apply
3- Editing specs in the deployment YAML manifest:
vi deployment.yaml
kubectl apply -f deployment.yaml
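For option 3, the field to bump is the container image inside the Pod template, e.g. (version number is just an example):
spec:
  template:
    spec:
      containers:
      - name: nginx
        image: nginx:1.19.2 ## <- new version here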
Check deployment status (the space and slash forms are equivalent):
kubectl rollout status deployment my-deployment
kubectl rollout status deployment/my-deployment
kubectl rollout history deployment my-deployment
https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-back-a-deployment
Three options for rolling back:
1- Using the imperative command kubectl rollout undo:
kubectl rollout history deployment/my-deployment
kubectl rollout undo deployment.v1.apps/my-deployment
kubectl rollout undo deployment.v1.apps/my-deployment --to-revision=1
Note: use --to-revision when --record was used
2- Using the imperative command kubectl edit deploy:
kubectl edit deployment my-deployment
kubectl rollout status deployment my-deployment
Note: changes are applied as soon as the editor is saved and closed, no need for kubectl apply
3- Editing specs in the deployment YAML manifest:
vi deployment.yaml
kubectl apply -f deployment.yaml
Check rollout status:
kubectl rollout status deployment my-deployment
kubectl rollout history deployment my-deployment
https://kubernetes.io/docs/concepts/configuration/configmap/
ConfigMap is an API object used to store non-confidential data in key-value pairs.
apiVersion: v1
kind: ConfigMap
metadata:
  name: game-demo
data:
  # property-like keys; each key maps to a simple value
  player_initial_lives: "3"
  ui_properties_file_name: "user-interface.properties"
  # file-like keys
  game.properties: |
    enemy.types=aliens,monsters
    player.maximum-lives=5
  user-interface.properties: |
    color.good=purple
    color.bad=yellow
    allow.textmode=true
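The same data can also be created imperatively: --from-literal takes an inline key/value, --from-file takes a file whose content becomes the value (the file path here is hypothetical):
kubectl create configmap game-demo --from-literal=player_initial_lives=3 --from-file=path/to/game.properties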
https://kubernetes.io/docs/concepts/configuration/secret/
Secrets are API objects used to store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys.
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
type: Opaque # Opaque = arbitrary user-defined data
data:
  secretkey1: <base64-string1>
Or using the imperative command:
kubectl create secret generic my-secret --from-file=path/to/bar
How to get a base64 key from a string:
echo -n 'secret' | base64
Note: the -n flag for echo means "do not output the trailing newline".
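To read a value back out of the cluster (object and key names from the Secret above):
kubectl get secret my-secret -o jsonpath='{.data.secretkey1}' | base64 --decode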
https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/
apiVersion: v1
kind: Pod
...
spec:
  containers:
  - name: busybox
    image: busybox
    command: ['sh', '-c', 'echo "configmap: $CONFIGMAPVAR secret: $SECRETVAR"']
    env:
    - name: CONFIGMAPVAR
      valueFrom:
        configMapKeyRef:
          name: my-configmap
          key: key1
    - name: SECRETVAR
      valueFrom:
        secretKeyRef:
          name: my-secret
          key: secretkey1
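Alternatively, envFrom loads every key of a ConfigMap or Secret as environment variables at once; a sketch reusing the same object names (each key becomes a variable name):
    envFrom:
    - configMapRef:
        name: my-configmap
    - secretRef:
        name: my-secret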
https://kubernetes.io/docs/concepts/storage/volumes/
apiVersion: v1
kind: Pod
metadata:
  name: webserver
spec:
  volumes:
  - name: config-volume
    configMap:
      name: nginx-config
  - name: htpasswd-volume
    secret:
      secretName: nginx-htpasswd
  containers:
  - name: webserver
    image: nginx:1.19.1
    ports:
    - containerPort: 80
    volumeMounts:
    - name: config-volume
      mountPath: /etc/nginx
    - name: htpasswd-volume
      mountPath: /etc/nginx/conf
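To check that the ConfigMap and Secret data landed in the container (pod name from the manifest above):
kubectl exec webserver -- ls /etc/nginx /etc/nginx/conf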
https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#scaling-a-deployment
Three options for scaling an app:
1- Using the imperative command kubectl scale:
kubectl scale deployment.v1.apps/my-deployment --replicas=5
2- Using the imperative command kubectl edit deploy:
kubectl edit deployment my-deployment
Note: changes are applied as soon as the editor is saved and closed, no need for kubectl apply
3- Editing the replicas number in the deployment YAML manifest:
vi deployment.yaml
kubectl apply -f deployment.yaml
https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/ https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
Testing whether the container is running.
apiVersion: v1
kind: Pod
metadata:
  name: liveness-pod
spec:
  containers:
  - name: busybox
    image: busybox
    command: ['sh', '-c', 'while true; do sleep 3600; done']
    livenessProbe:
      exec:
        command: ["echo", "Hello, world!"]
      initialDelaySeconds: 5 ## Delay before kubelet triggers 1st probe
      periodSeconds: 5 ## How often kubelet performs a liveness probe
Testing whether the application within the container has started.
- Useful for Pods whose containers take a long time to come into service.
- Allows a startup window longer than the liveness interval would permit.
apiVersion: v1
kind: Pod
metadata:
  name: liveness-pod-http
spec:
  containers:
  - name: nginx
    image: nginx:1.19.1
    startupProbe:
      httpGet:
        path: /
        port: 80
      failureThreshold: 30 ## Number of failed probes before restarting the container
      periodSeconds: 10 ## How often kubelet performs a startup probe
Testing whether a container is ready to start accepting traffic.
apiVersion: v1
kind: Pod
metadata:
  name: readiness-pod
spec:
  containers:
  - name: nginx
    image: nginx:1.19.1
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5 ## Delay before kubelet triggers 1st probe
      periodSeconds: 5 ## How often kubelet performs probe
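Probes also accept tcpSocket (and exec) handlers in addition to httpGet; e.g. a TCP-based readiness check on the same port:
    readinessProbe:
      tcpSocket:
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5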
https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy
For containers that should always run
apiVersion: v1
kind: Pod
metadata:
  name: always-pod
spec:
  restartPolicy: Always ## Default (so optional)
  containers:
  - name: busybox
    image: busybox
    command: ['sh', '-c', 'while true; do sleep 10; done']
Restart ONLY IF the container exits with an error code or is determined unhealthy by a liveness probe.
apiVersion: v1
kind: Pod
metadata:
  name: onfailure-pod
spec:
  restartPolicy: OnFailure
  containers:
  - name: busybox
    image: busybox
    ## A clean loop like 'while true; do sleep 10; done' would never be
    ## restarted; a failing command triggers a restart:
    command: ['sh', '-c', 'bad command that should fail my container']
For containers that should only be run once and never restarted.
apiVersion: v1
kind: Pod
metadata:
  name: never-pod
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    ## Whether the command loops forever or fails, the container
    ## is never restarted:
    command: ['sh', '-c', 'bad command that should fail my container']
https://kubernetes.io/docs/concepts/workloads/pods/#using-pods https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/ https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns/
“Cross-container” interaction >> network & storage
“Sidecar” = a secondary container added to the Pod
apiVersion: v1
kind: Pod
metadata:
  name: sidecar-pod
spec:
  containers:
  - name: busybox1
    image: busybox
    command: ['sh', '-c', 'while true; do echo logs data > /output/output.log; sleep 5; done']
    volumeMounts:
    - name: sharedvol
      mountPath: /output
  - name: sidecar
    image: busybox
    command: ['sh', '-c', 'tail -f /input/output.log']
    volumeMounts:
    - name: sharedvol
      mountPath: /input
  volumes:
  - name: sharedvol
    emptyDir: {}
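To follow what the sidecar reads from the shared volume, use -c to pick the container (names from the manifest above):
kubectl logs -f sidecar-pod -c sidecar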
https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
Specialized containers that run before app containers in a Pod. Each init container must complete successfully before the next one starts. If a Pod's init container fails, the kubelet repeatedly restarts that init container until it succeeds.
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox:1.28
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
  initContainers:
  - name: init1
    image: busybox:1.28
    command: ['sleep', '10']
  - name: init2
    image: busybox:1.28
    command: ['sh', '-c', 'until nslookup shipping-svc; do echo waiting for shipping-svc; sleep 2; done']
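While init containers run, the Pod's STATUS column reports progress as Init:<completed>/<total>:
kubectl get pod myapp-pod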
Resource requests & limits:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: busybox
    image: busybox
    resources:
      requests: ## Requests = estimate used for scheduling
        memory: "64Mi" ## 64Mi = 64 mebibytes
        cpu: "250m" ## CPU in millicores (1/1000 of a CPU), 250m = 1/4 CPU
      limits: ## Limits = enforced cap: CPU is throttled, exceeding memory kills the container
        memory: "128Mi"
        cpu: "500m"
https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler/
https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/
Using nodeName: only run the Pod on this specific node:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  nodeName: node1
Using nodeSelector: run the Pod on any node with matching label(s):
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    myLabel1: mustMatchString
    myLabel2: "true"
    disk: fast
The label must, of course, be present on the node:
apiVersion: v1
kind: Node
metadata:
  labels:
    disk: fast
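A label can also be added to an existing node imperatively (the node name here is just an example):
kubectl label nodes node1 disk=fast
kubectl get nodes --show-labels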
https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/
A DaemonSet ensures that all (or some) nodes run a copy of a Pod.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
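Verify that one Pod is running per (eligible) node:
kubectl get daemonset nginx
kubectl get pods -o wide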
https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/
Pod managed directly by the kubelet, not by the K8s API server.
A mirror Pod is created to represent a static Pod in the Kubernetes API, allowing you to easily view the static Pod's status, but no changes can be made to it via the API.
Manifest path for static Pods on each node:
vi /etc/kubernetes/manifests/<podname>
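The manifest itself is a plain Pod spec dropped into that directory; a minimal sketch (name and image are examples):
apiVersion: v1
kind: Pod
metadata:
  name: static-web
spec:
  containers:
  - name: web
    image: nginx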
Restart the kubelet on that node:
systemctl restart kubelet
Check result on the Control Plane:
kubectl get pods
https://kubernetes.io/docs/concepts/cluster-administration/manage-deployment/
- helm
  - Templating (charts) & package management
- kompose
  - From Docker Compose to K8s objects
- kustomize
  - Configuration management tool (similar to helm)
  - https://kubernetes.io/blog/2018/05/29/introducing-kustomize-template-free-configuration-customization-for-kubernetes/
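A minimal kustomize sketch (file names assumed): a kustomization.yaml referencing an existing manifest, applied with kubectl's built-in -k flag:
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
namePrefix: dev-
Then, from the directory containing both files:
kubectl apply -k .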