There are multiple ways of validating that our cluster is working. In this lab we'll run some checks from the original tutorial plus others described in the Kubernetes debug services guide.
kubectl create deployment hostnames --image=registry.k8s.io/serve_hostname --replicas=3
This should create one pod on each worker node. Validate that it succeeds by running
kubectl get pods
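To confirm where each replica landed, the wide output adds a NODE column (pod names and node assignments will differ in your cluster):
kubectl get pods -l app=hostnames -o wide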
Get each pod's IP by running
kubectl get pods -l app=hostnames \
-o go-template='{{range .items}}{{.status.podIP}}{{"\n"}}{{end}}'
Result:
10.200.2.59
10.200.0.58
10.200.1.60
Note that the IPs are within the POD CIDR we defined earlier
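If you prefer jsonpath over go-templates, an equivalent query is:
kubectl get pods -l app=hostnames \
-o jsonpath='{range .items[*]}{.status.podIP}{"\n"}{end}'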
We can test pod-to-pod networking by accessing each pod by IP using curl. Create a new pod
kubectl run alpine --image=curlimages/curl:latest --command -- sleep 3600
Describe the pod we just created, and notice where it's been deployed
kubectl describe pod alpine
Result:
Name: alpine
Namespace: default
Priority: 0
Service Account: default
Node: khw-wn1/[...]
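If you only need the node name, a jsonpath query gives it directly:
kubectl get pod alpine -o jsonpath='{.spec.nodeName}'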
Run the sh command inside the pod to test access from within:
kubectl exec -ti alpine -- sh
Now execute curl against each pod:
curl 10.200.2.59:9376
curl 10.200.0.58:9376
curl 10.200.1.60:9376
If all three respond correctly, it means we can reach pods both on the same node as our alpine pod and on other nodes.
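The three requests can also be issued as a single loop from the pod's shell; substitute the pod IPs reported by your own cluster:
for ip in 10.200.2.59 10.200.0.58 10.200.1.60; do curl -s $ip:9376; done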
Type exit to finish the interactive session.
Once outside the pod, describe the kubernetes service using kubectl
kubectl describe svc kubernetes
Result:
Name: kubernetes
Namespace: default
Labels: component=apiserver
provider=kubernetes
Annotations: <none>
Selector: <none>
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.32.0.1
IPs: 10.32.0.1
Port: https 443/TCP
TargetPort: 6443/TCP
Endpoints: 10.240.0.10:6443,10.240.0.11:6443,10.240.0.12:6443
Session Affinity: None
Events: <none>
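To extract just the cluster IP, for example for use in scripts, query the field directly:
kubectl get svc kubernetes -o jsonpath='{.spec.clusterIP}'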
Note the IP address for the service. Let's create another pod, now using the busybox image
kubectl run -it --rm --restart=Never busybox --image=gcr.io/google-containers/busybox sh
Execute nslookup to test that our DNS add-on is working
nslookup kubernetes
Result:
Server: 10.32.0.10
Address 1: 10.32.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes
Address 1: 10.32.0.1 kubernetes.default.svc.cluster.local
Note that Address 1 for kubernetes matches the IP on the Service description.
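While still inside the busybox pod, you can also confirm that the pod's resolver points at the cluster DNS service; the nameserver entry should match the 10.32.0.10 address shown above:
cat /etc/resolv.conf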
Now create a service from the hostnames deployment we defined earlier. We'll access it on port 80, forwarding to port 9376 in the pod.
kubectl expose deployment hostnames --port=80 --target-port=9376
Get the service information using
kubectl describe svc hostnames
Result:
Name: hostnames
Namespace: default
Labels: app=hostnames
Annotations: <none>
Selector: app=hostnames
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.32.0.215
IPs: 10.32.0.215
Port: <unset> 80/TCP
TargetPort: 9376/TCP
Endpoints: 10.200.0.58:9376,10.200.1.2:9376,10.200.2.59:9376
Session Affinity: None
Events: <none>
Notice the endpoints data lists the IPs of the pods we created earlier.
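The same list can be retrieved directly, which is a quick way to check that the service selector is matching pods:
kubectl get endpoints hostnames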
Using our alpine pod again
kubectl exec -ti alpine -- sh
we call the service IP using curl
curl 10.32.0.215
The result shows the hostname of the pod that responded.
hostnames-689b9dd5dd-v8z2k
If we repeat the process we'll get responses from all the pods, validating that the network configuration is working as expected.
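Since the DNS add-on is working, the service name also resolves from inside the pod, so instead of hard-coding the cluster IP (which will differ in your cluster) you can call:
curl hostnames
or, fully qualified,
curl hostnames.default.svc.cluster.local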
In this section you will verify the ability to encrypt secret data at rest.
Create a generic secret:
kubectl create secret generic kubernetes-the-hard-way \
--from-literal="mykey=mydata"
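You can confirm the secret was stored by the API server before inspecting how it is persisted:
kubectl get secret kubernetes-the-hard-way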
SSH into one of the control plane nodes, and print a hexdump of the kubernetes-the-hard-way secret stored in etcd:
sudo etcdctl get \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/etcd/ca.pem \
--cert=/etc/etcd/kubernetes.pem \
--key=/etc/etcd/kubernetes-key.pem \
/registry/secrets/default/kubernetes-the-hard-way | hexdump -C
Output:
00000000 2f 72 65 67 69 73 74 72 79 2f 73 65 63 72 65 74 |/registry/secret|
00000010 73 2f 64 65 66 61 75 6c 74 2f 6b 75 62 65 72 6e |s/default/kubern|
00000020 65 74 65 73 2d 74 68 65 2d 68 61 72 64 2d 77 61 |etes-the-hard-wa|
00000030 79 0a 6b 38 73 3a 65 6e 63 3a 61 65 73 63 62 63 |y.k8s:enc:aescbc|
00000040 3a 76 31 3a 6b 65 79 31 3a 97 d1 2c cd 89 0d 08 |:v1:key1:..,....|
00000050 29 3c 7d 19 41 cb ea d7 3d 50 45 88 82 a3 1f 11 |)<}.A...=PE.....|
00000060 26 cb 43 2e c8 cf 73 7d 34 7e b1 7f 9f 71 d2 51 |&.C...s}4~...q.Q|
00000070 45 05 16 e9 07 d4 62 af f8 2e 6d 4a cf c8 e8 75 |E.....b...mJ...u|
00000080 6b 75 1e b7 64 db 7d 7f fd f3 96 62 e2 a7 ce 22 |ku..d.}....b..."|
00000090 2b 2a 82 01 c3 f5 83 ae 12 8b d5 1d 2e e6 a9 90 |+*..............|
000000a0 bd f0 23 6c 0c 55 e2 52 18 78 fe bf 6d 76 ea 98 |..#l.U.R.x..mv..|
000000b0 fc 2c 17 36 e3 40 87 15 25 13 be d6 04 88 68 5b |.,.6.@..%.....h[|
000000c0 a4 16 81 f6 8e 3b 10 46 cb 2c ba 21 35 0c 5b 49 |.....;.F.,.!5.[I|
000000d0 e5 27 20 4c b3 8e 6b d0 91 c2 28 f1 cc fa 6a 1b |.' L..k...(...j.|
000000e0 31 19 74 e7 a5 66 6a 99 1c 84 c7 e0 b0 fc 32 86 |1.t..fj.......2.|
000000f0 f3 29 5a a4 1c d5 a4 e3 63 26 90 95 1e 27 d0 14 |.)Z.....c&...'..|
00000100 94 f0 ac 1a cd 0d b9 4b ae 32 02 a0 f8 b7 3f 0b |.......K.2....?.|
00000110 6f ad 1f 4d 15 8a d6 68 95 63 cf 7d 04 9a 52 71 |o..M...h.c.}..Rq|
00000120 75 ff 87 6b c5 42 e1 72 27 b5 e9 1a fe e8 c0 3f |u..k.B.r'......?|
00000130 d9 04 5e eb 5d 43 0d 90 ce fa 04 a8 4a b0 aa 01 |..^.]C......J...|
00000140 cf 6d 5b 80 70 5b 99 3c d6 5c c0 dc d1 f5 52 4a |.m[.p[.<.\....RJ|
00000150 2c 2d 28 5a 63 57 8e 4f df 0a |,-(ZcW.O..|
0000015a
The etcd key should be prefixed with k8s:enc:aescbc:v1:key1, which indicates the aescbc provider was used to encrypt the data with the key1 encryption key.
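For a quick pass/fail check without reading the hexdump, you can grep the raw value for the prefix (assuming GNU grep; --text treats the binary output as text):
sudo etcdctl get \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/etcd/ca.pem \
--cert=/etc/etcd/kubernetes.pem \
--key=/etc/etcd/kubernetes-key.pem \
/registry/secrets/default/kubernetes-the-hard-way | grep --text --only-matching 'k8s:enc:aescbc:v1:key1'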
Next > Cleaning up