This pattern covers the essential aspects of using SPIFFE/SPIRE and Istio to bridge trust between multi-cluster Kubernetes service meshes, allowing microservices in different clusters to trust each other and communicate securely. You can also watch the Istio Day talk on this topic.
Modern architectures are a "web of complexity": many interconnected microservices, legacy monolithic applications, cloud functions, and more, spanning environments such as public cloud, private cloud, and on-premises. This distributed, heterogeneous architecture often stretches across different teams, business units, or even organizations that need to integrate their components. With services heavily dependent on network communication, there are serious security implications in ensuring trusted interactions between the right components. Simply reusing the existing, disparate security identity models of each independent application or workload is not straightforward. Establishing trust across disparate components is challenging: platform, security, and application teams must constantly align on security and threat models, developers must continuously modify microservices to accommodate different identity models, and misconfigurations become hard to manage when juggling multiple security models across an intricate architecture. The crux of the challenge is how to secure this "web of complexity" and establish trust across widely distributed, heterogeneous environments as they evolve.
This pattern addresses the challenge by achieving unified, secure workload identity and cross-cluster trust using Istio mesh federation combined with the SPIFFE/SPIRE identity attestation system. The pattern shows two independent Kubernetes clusters (foo-eks-cluster and bar-eks-cluster) with different root CAs generated by cert-manager. In each cluster, SPIFFE/SPIRE is installed and acts as an intermediate CA for the workloads. Federation is enabled between the two clusters by exchanging their trust bundles during the SPIRE installation, which allows workloads from different clusters with different root CAs to communicate.
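Under the hood, each SPIRE server declares the federation relationship in its configuration. The fragment below is a minimal sketch of what this looks like on foo-eks-cluster; the bar.com trust domain matches the SPIFFE IDs shown later in this pattern, while foo.com, the bundle endpoint URL, and the port are assumed values that the install script fills in:

# Sketch: federation fragment of the SPIRE server config on foo-eks-cluster.
server {
  trust_domain = "foo.com"

  federation {
    # Expose this server's trust bundle so federated partners can fetch it.
    bundle_endpoint {
      address = "0.0.0.0"
      port    = 8443
    }
    # Fetch and trust the bundle published by the bar.com SPIRE server.
    federates_with "bar.com" {
      bundle_endpoint_url = "https://spire-server.bar.example.com:8443"
      bundle_endpoint_profile "https_spiffe" {
        endpoint_spiffe_id = "spiffe://bar.com/spire/server"
      }
    }
  }
}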
- Increased security: with SPIFFE/SPIRE and Istio, you can ensure that only authorized workloads are able to communicate with each other, protecting your microservices from unauthorized access and malicious attacks.
- Simplified management: SPIFFE/SPIRE and Istio provide a centralized way to manage trust policies for your microservices, making it easier to enforce security policies across multiple clusters.
- Improved scalability: SPIFFE/SPIRE and Istio are designed to scale to large deployments, making them a good choice for organizations that need to secure a large number of microservices.
Ensure that you have installed the following tools locally:

- git
- terraform
- kubectl
- istioctl
- openssl and jq (used in the verification steps)

## Clone this repo and go to this folder
cd istio-on-eks/patterns/eks-istio-mesh-spire-federation
cert-manager is also installed as part of the EKS add-ons, because it acts as the root CA for each cluster. After installing cert-manager, we also configure a self-signed CA as the root CA, since we want every cluster to be an independent trust domain with its own self-signed root CA from cert-manager; a sketch of this setup follows.
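A minimal sketch of how such a self-signed root CA can be set up with cert-manager (the certmanager-ca common name matches the issuer we will see later in the certificate chain; the other resource names are assumptions, and the actual manifests ship with this pattern):

cat <<'EOF' | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned
spec:
  selfSigned: {}
---
# Self-signed root CA certificate for this cluster's trust domain.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: certmanager-ca
  namespace: cert-manager
spec:
  isCA: true
  commonName: certmanager-ca
  subject:
    organizations:
      - cert-manager
  secretName: certmanager-ca
  issuerRef:
    name: selfsigned
    kind: ClusterIssuer
---
# Issuer backed by the root CA; SPIRE uses it as its upstream authority.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: certmanager-ca-issuer
spec:
  ca:
    secretName: certmanager-ca
EOF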
cd terraform/0.vpc
terraform init
terraform apply --auto-approve
cd ../1.foo-eks
terraform init
terraform apply --auto-approve
cd ../2.bar-eks
terraform init
terraform apply --auto-approve
Go back to the pattern's home folder (cd ../.. from terraform/2.bar-eks).
This script automates the exchange of CA bundles between the clusters for federation. The script does the following:
- Granting the cert-manager deployment the RBAC permissions it needs to function as the cluster-wide root certificate authority, by binding it to a ClusterRole
- Creating the necessary namespace for the SPIRE components
- Applying the SPIRE server and agent manifests/YAML files
- Configuring the trust domain for the SPIRE server
- Enabling federation mode for the SPIRE server. For federation, each SPIRE server needs to share its bundle with the other trust domain, so defining this endpoint allows the bundle to be securely exposed and retrieved by the federated partner.
- Populating the trust bundle from the other cluster to enable cross-cluster communication. The bundle endpoint is the URL the SPIRE server uses to distribute the bundle (a collection of root CA certificates) to downstream SPIRE components such as agents and workload attestors; a sketch of the exchange follows this list.
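Conceptually, the bundle exchange the script performs boils down to exporting each server's bundle and importing it into the other server. A sketch for one direction (bar.com into foo-eks-cluster), assuming CTX_CLUSTER1 and CTX_CLUSTER2 point at the two clusters:

# Export the bar.com trust bundle from bar-eks-cluster...
kubectl --context="${CTX_CLUSTER2}" exec -n spire -c spire-server spire-server-0 -- \
  ./bin/spire-server bundle show -format spiffe \
  -socketPath /run/spire/sockets/server.sock > bundle-bar.json
# ...and import it into foo-eks-cluster's SPIRE server.
kubectl --context="${CTX_CLUSTER1}" exec -i -n spire -c spire-server spire-server-0 -- \
  ./bin/spire-server bundle set -format spiffe -id spiffe://bar.com \
  -socketPath /run/spire/sockets/server.sock < bundle-bar.json

The script performs the same exchange in the opposite direction as well.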
./spire/install-spire.sh
For the Istio installation, we are using the Istio Operator to deploy Istio on the clusters. The installation also configures the sidecar injection webhook for SPIRE, so that the injected Istio proxies are automatically wired up to SPIRE and obtain their workload identities from the SPIRE agent; see the sketch below.
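The upstream Istio–SPIRE integration does this with a custom injection template that mounts the SPIRE agent's Workload API socket (exposed by the SPIFFE CSI driver) into the proxy. A sketch of such an IstioOperator overlay, following the Istio documentation (the exact overlay used by this pattern's install script may differ):

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    sidecarInjectorWebhook:
      templates:
        spire: |
          spec:
            containers:
            - name: istio-proxy
              volumeMounts:
              - name: workload-socket
                mountPath: /run/secrets/workload-spiffe-uds
                readOnly: true
            volumes:
            - name: workload-socket
              csi:
                driver: "csi.spiffe.io"
                readOnly: true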
Note: You will need the EKS API endpoints from the Terraform outputs, and istioctl needs to be on your PATH.
./istio/install-istio.sh <cluster_endpoint_foo> <cluster_endpoint_bar>
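If you are fetching the endpoints from the Terraform stacks, something like the following works, assuming each stack exposes an output named cluster_endpoint (the output name here is an assumption; check terraform output for the actual names):

# Read the EKS API endpoints from the Terraform outputs (assumed output name).
FOO_ENDPOINT=$(terraform -chdir=terraform/1.foo-eks output -raw cluster_endpoint)
BAR_ENDPOINT=$(terraform -chdir=terraform/2.bar-eks output -raw cluster_endpoint)
./istio/install-istio.sh "$FOO_ENDPOINT" "$BAR_ENDPOINT"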
Two separate deployments named helloworld-v1 and helloworld-v2 are created, one in each cluster (foo-eks-cluster and bar-eks-cluster, respectively). Both deployments are configured to use the same Kubernetes service name, helloworld. Additionally, a separate sleep deployment (serving simply as a test client) is created in foo-eks-cluster.
./examples/deploy-helloworld.sh
Set CTX_CLUSTER1 and CTX_CLUSTER2 to the kubectl contexts of foo-eks-cluster and bar-eks-cluster, respectively, then list the pods in each cluster:
kubectl get po -A --context="${CTX_CLUSTER1}"
kubectl get po -A --context="${CTX_CLUSTER2}"
You should see output similar to the following:
kubectl get po -A --context $CTX_CLUSTER1
NAMESPACE NAME READY STATUS RESTARTS AGE
amazon-guardduty aws-guardduty-agent-dd78x 1/1 Running 6 (3h29m ago) 3h32m
amazon-guardduty aws-guardduty-agent-v945w 1/1 Running 0 3h32m
amazon-guardduty aws-guardduty-agent-zg9mr 1/1 Running 0 3h32m
cert-manager cert-manager-55657857dd-zd8jl 1/1 Running 0 3h28m
cert-manager cert-manager-cainjector-7b5b5d4786-gcnmd 1/1 Running 0 3h28m
cert-manager cert-manager-webhook-55fb5c9c88-wc9f9 1/1 Running 0 3h28m
helloworld helloworld-v1-6bb5b589d6-54fbk 2/2 Running 0 44s
istio-system istio-eastwestgateway-6c585467fd-sxwpd 1/1 Running 0 42m
istio-system istio-ingressgateway-66595464dd-wplqc 1/1 Running 0 42m
istio-system istiod-fc9564898-mvnzx 1/1 Running 0 42m
kube-system aws-load-balancer-controller-57765c4b45-mqgj7 1/1 Running 0 3h33m
kube-system aws-load-balancer-controller-57765c4b45-qr7r2 1/1 Running 0 3h33m
kube-system aws-node-4948g 2/2 Running 0 3h27m
kube-system aws-node-mfn2g 2/2 Running 0 3h27m
kube-system aws-node-xzgpb 2/2 Running 0 3h27m
kube-system coredns-86bbb5f9b5-bdqwj 1/1 Running 0 3h27m
kube-system coredns-86bbb5f9b5-m7b2g 1/1 Running 0 3h27m
kube-system ebs-csi-controller-54457b68b-7fkxj 6/6 Running 0 3h22m
kube-system ebs-csi-controller-54457b68b-rglqg 6/6 Running 0 3h27m
kube-system ebs-csi-node-4ccgg 3/3 Running 0 3h27m
kube-system ebs-csi-node-g9bnb 3/3 Running 0 3h27m
kube-system ebs-csi-node-pdxmz 3/3 Running 0 3h27m
kube-system kube-proxy-7nkj2 1/1 Running 0 3h27m
kube-system kube-proxy-gwxtb 1/1 Running 0 3h27m
kube-system kube-proxy-j5zp8 1/1 Running 0 3h27m
sleep sleep-86bfc4d596-pl72r 2/2 Running 0 20s
spire spire-agent-bkk8q 3/3 Running 0 52m
spire spire-agent-jb57z 3/3 Running 0 52m
spire spire-agent-r9j86 3/3 Running 0 52m
spire spire-server-0 2/2 Running 0 52m
kubectl get po -A --context $CTX_CLUSTER2
NAMESPACE NAME READY STATUS RESTARTS AGE
amazon-guardduty aws-guardduty-agent-42twx 1/1 Running 0 3h10m
amazon-guardduty aws-guardduty-agent-fxngx 1/1 Running 0 3h10m
amazon-guardduty aws-guardduty-agent-tsn55 1/1 Running 0 3h10m
cert-manager cert-manager-55657857dd-wcwjh 1/1 Running 0 65m
cert-manager cert-manager-cainjector-7b5b5d4786-wmq2g 1/1 Running 0 65m
cert-manager cert-manager-webhook-55fb5c9c88-kn9kw 1/1 Running 0 65m
helloworld helloworld-v2-7fd66fcfdc-w7l7l 2/2 Running 0 38s
istio-system istio-eastwestgateway-57d65dfc66-5qgwt 1/1 Running 0 41m
istio-system istio-ingressgateway-85b7dbbfd8-92pzg 1/1 Running 0 41m
istio-system istiod-64d84b9dff-pxg67 1/1 Running 0 42m
kube-system aws-load-balancer-controller-7466ccb95b-gbksz 1/1 Running 0 3h11m
kube-system aws-load-balancer-controller-7466ccb95b-hb7l4 1/1 Running 0 3h11m
kube-system aws-node-gdbtw 2/2 Running 0 64m
kube-system aws-node-ndt5l 2/2 Running 0 64m
kube-system aws-node-vd4x2 2/2 Running 0 64m
kube-system coredns-86bbb5f9b5-tw8pq 1/1 Running 0 64m
kube-system coredns-86bbb5f9b5-vm8zd 1/1 Running 0 64m
kube-system ebs-csi-controller-c88bff885-6kt58 6/6 Running 0 64m
kube-system ebs-csi-controller-c88bff885-nhqqm 6/6 Running 0 59m
kube-system ebs-csi-node-7lc7n 3/3 Running 0 64m
kube-system ebs-csi-node-jqw5n 3/3 Running 0 64m
kube-system ebs-csi-node-jsdf6 3/3 Running 0 64m
kube-system kube-proxy-57w59 1/1 Running 0 64m
kube-system kube-proxy-pvzx2 1/1 Running 0 64m
kube-system kube-proxy-wzblc 1/1 Running 0 64m
sleep sleep-64cbcc4cd9-xrqvv 2/2 Running 0 26s
spire spire-agent-bh4zq 3/3 Running 0 51m
spire spire-agent-hvpt8 3/3 Running 0 51m
spire spire-agent-tcv27 3/3 Running 0 51m
spire spire-server-0 2/2 Running 0 51m
From a sleep pod in foo-eks-cluster, curling the helloworld service returns responses from both the v1 and v2 deployments, proving east-west gateway communication across clusters. The curl command is actually reaching the helloworld service in foo-eks-cluster; however, because federation is enabled between the two clusters, SPIRE has issued identities to the workloads in both clusters in a way that allows them to communicate seamlessly. So when the curl command is executed from the sleep pod, it receives responses from both the helloworld-v1 deployment in foo-eks-cluster and the helloworld-v2 deployment in bar-eks-cluster. This demonstrates that traffic flows freely across the east-west gateway between the two federated clusters, even though the workloads originate from clusters with different root CAs, and it proves that the federated SPIRE setup with exchanged trust bundles has enabled secure mTLS communication for workloads deployed across clusters.
kubectl exec --context="${CTX_CLUSTER1}" -n sleep -c sleep \
"$(kubectl get pod --context="${CTX_CLUSTER1}" -n sleep -l \
app=sleep -o jsonpath='{.items[0].metadata.name}')" \
-- sh -c "while :; do curl -sS helloworld.helloworld:5000/hello; sleep 1; done"
You should see this:
Hello version: v1, instance: helloworld-v1-6bb5b589d6-54fbk
Hello version: v2, instance: helloworld-v2-7fd66fcfdc-w7l7l
Hello version: v1, instance: helloworld-v1-6bb5b589d6-54fbk
Hello version: v1, instance: helloworld-v1-6bb5b589d6-54fbk
Hello version: v2, instance: helloworld-v2-7fd66fcfdc-w7l7l
Hello version: v2, instance: helloworld-v2-7fd66fcfdc-w7l7l
Hello version: v1, instance: helloworld-v1-6bb5b589d6-54fbk
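To double-check that the sidecars are presenting SPIRE-issued certificates, you can dump the active secrets of the sleep pod's proxy (with istioctl pointed at foo-eks-cluster):

# Show the proxy's active certificates and root CA as served over SDS.
istioctl --context="${CTX_CLUSTER1}" proxy-config secret deployment/sleep -n sleep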
Initially, both the helloworld-v1 deployment in foo-eks-cluster and the helloworld-v2 deployment in bar-eks-cluster were running. Now let's scale the helloworld-v1 deployment in foo-eks-cluster down to 0 replicas, so that no pods for this deployment are running in foo-eks-cluster.

Let's also create a Gateway and a VirtualService for helloworld on bar-eks-cluster, in front of the helloworld-v2 deployment. An Istio Gateway acts as a load balancer that handles incoming traffic from outside the mesh, and the VirtualService configures the routing rules for this ingress traffic. Now, instead of curling the helloworld service directly, let's curl the Gateway URL associated with the VirtualService in bar-eks-cluster. This Gateway URL is essentially an entry point from outside the mesh (in this case, from foo-eks-cluster) to reach the helloworld-v2 service in bar-eks-cluster. When this curl command is run from the sleep pod in foo-eks-cluster, it receives responses only from the helloworld-v2 deployment behind the Gateway/VirtualService in bar-eks-cluster; a sketch of these resources follows.
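For reference, the Gateway and VirtualService applied below (examples/helloworld-gateway.yaml) look roughly like the standard Istio helloworld sample; treat this as a sketch, the file in the repo is authoritative:

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: helloworld-gateway
spec:
  selector:
    istio: ingressgateway   # route through the default ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: helloworld
spec:
  hosts:
  - "*"
  gateways:
  - helloworld-gateway
  http:
  - match:
    - uri:
        exact: /hello
    route:
    - destination:
        host: helloworld
        port:
          number: 5000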
kubectl -n helloworld scale deploy helloworld-v1 --context="${CTX_CLUSTER1}" --replicas 0
sleep 2
kubectl apply --context="${CTX_CLUSTER2}" \
-f ./examples/helloworld-gateway.yaml -n helloworld
export INGRESS_NAME=istio-ingressgateway
export INGRESS_NS=istio-system
GATEWAY_URL=$(kubectl -n "$INGRESS_NS" --context="${CTX_CLUSTER2}" get service "$INGRESS_NAME" -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
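On AWS, the load balancer address is published as a hostname; on platforms that publish an IP address instead, query the ip field:

# Fallback for load balancers that expose an IP rather than a hostname.
GATEWAY_URL=$(kubectl -n "$INGRESS_NS" --context="${CTX_CLUSTER2}" get service "$INGRESS_NAME" -o jsonpath='{.status.loadBalancer.ingress[0].ip}')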
Check the service by calling the VirtualService from foo-eks-cluster:
kubectl exec --context="${CTX_CLUSTER1}" -n sleep -c sleep \
"$(kubectl get pod --context="${CTX_CLUSTER1}" -n sleep -l \
app=sleep -o jsonpath='{.items[0].metadata.name}')" \
-- sh -c "while :; do curl -s http://$GATEWAY_URL/hello; sleep 1; done"
You should see this:
Hello version: v2, instance: helloworld-v2-7fd66fcfdc-w7l7l
Hello version: v2, instance: helloworld-v2-7fd66fcfdc-w7l7l
Hello version: v2, instance: helloworld-v2-7fd66fcfdc-w7l7l
Hello version: v2, instance: helloworld-v2-7fd66fcfdc-w7l7l
Hello version: v2, instance: helloworld-v2-7fd66fcfdc-w7l7l
Hello version: v2, instance: helloworld-v2-7fd66fcfdc-w7l7l
Hello version: v2, instance: helloworld-v2-7fd66fcfdc-w7l7l
We will deploy the bookinfo application to illustrate that the root CA is cert-manager and that SPIRE is the intermediate CA. With this we will show:
- that the root CA in use is cert-manager, as set up initially; and
- that SPIRE is acting as the intermediate CA, issuing the workload identities/certificates (SVIDs).
./examples/deploy-bookinfo.sh
The script checks whether SPIRE has issued identities to the workloads; the entries below (from bar-eks-cluster) show SPIFFE IDs in the bar.com trust domain:
kubectl exec -i -t -n spire -c spire-server \
"$(kubectl get pod -n spire -l app=spire-server -o jsonpath='{.items[0].metadata.name}')" \
-- ./bin/spire-server entry show -socketPath /run/spire/sockets/server.sock
...
Entry ID : 539a2cc6-69b5-44fd-b89f-853a4c3585e9
SPIFFE ID : spiffe://bar.com/ns/default/sa/bookinfo-details
Parent ID : spiffe://bar.com/k8s-workload-registrar/bar-eks-cluster/node/ip-10-3-1-116.eu-west-2.compute.internal
Revision : 0
X509-SVID TTL : default
JWT-SVID TTL : default
Selector : k8s:node-name:ip-10-3-1-116.eu-west-2.compute.internal
Selector : k8s:ns:default
Selector : k8s:pod-uid:13f60817-53a4-4c0e-a443-68236771f6e8
DNS name : details-v1-b95b447b-c2hp9
Entry ID : 0ee6bd73-b552-4b3c-a93c-81d5fe054990
SPIFFE ID : spiffe://bar.com/ns/default/sa/bookinfo-productpage
Parent ID : spiffe://bar.com/k8s-workload-registrar/bar-eks-cluster/node/ip-10-3-35-66.eu-west-2.compute.internal
Revision : 0
X509-SVID TTL : default
JWT-SVID TTL : default
Selector : k8s:node-name:ip-10-3-35-66.eu-west-2.compute.internal
Selector : k8s:ns:default
Selector : k8s:pod-uid:2cc90986-5a36-4e48-ae65-f874a2504d9d
DNS name : productpage-v1-6d84b4786f-4gnmq
...
istioctl proxy-config secret deployment/productpage-v1 -o json | jq -r '.dynamicActiveSecrets[0].secret.tlsCertificate.certificateChain.inlineBytes' | base64 --decode > chain.pem
Split chain.pem and save the two certificates in separate files. Use the openssl command openssl x509 -noout -text -in $FILE to parse the certificate contents.
split -p "-----BEGIN CERTIFICATE-----" chain.pem cert-
ls cert-a*
cert-aa cert-ab
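Note that the -p flag belongs to the BSD/macOS version of split; on GNU/Linux, csplit gives the same result:

# Produces cert-00 (anything before the first certificate, usually empty),
# plus cert-01 and cert-02, which correspond to cert-aa and cert-ab below.
csplit -f cert- chain.pem '/-----BEGIN CERTIFICATE-----/' '{*}'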
Open the cert-ab certificate and notice that the root CA issuer is cert-manager:
openssl x509 -noout -text -in cert-ab
Certificate:
Data:
Version: 3 (0x2)
Serial Number:
39:c5:d3:74:ba:59:fa:3c:78:3a:e6:00:33:f9:29:8f
Signature Algorithm: sha256WithRSAEncryption
Issuer: O=cert-manager, CN=certmanager-ca
Validity
Not Before: Jun 16 17:59:00 2024 GMT
Not After : Jun 17 17:59:00 2024 GMT
Subject: C=US, O=SPIFFE
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
00:9b:0c:37:38:23:a7:27:f2:ef:a0:f8:23:0a:a5:
5b:6a:f7:f0:23:a4:82:6c:3e:6c:22:8d:98:f0:b4:
f4:0b:15:a0:31:de:23:cd:44:52:5f:77:3f:f7:89:
d4:86:4f:e0:67:24:23:d1:ca:c1:96:97:d6:96:a3:
af:c0:4b:0e:e6:05:83:79:cb:2c:da:7c:aa:bc:70:
09:1c:28:de:be:87:cb:d5:94:d6:95:cc:a0:b7:be:
42:22:4d:b2:c5:98:9e:18:54:8d:e4:cc:ec:ed:d2:
8c:ff:e0:18:a6:45:17:3f:b8:c5:37:a0:f8:17:63:
58:e5:25:99:98:23:df:ad:80:93:6f:ec:2c:f9:8d:
b0:be:49:bb:da:d3:29:f2:1c:9f:27:43:a2:e5:a2:
c9:d0:73:98:e8:ee:c5:cf:20:15:c5:3f:57:14:84:
b8:35:0a:72:db:68:e5:24:37:ca:ba:d0:41:48:5f:
b7:89:b0:4e:22:2d:eb:e9:7d:45:7e:17:7b:b8:3a:
f9:37:23:e2:3a:09:c5:6b:3f:62:e5:60:8d:96:42:
8c:d6:ba:9f:69:85:d5:f7:0c:91:be:37:b0:ff:a5:
9a:1d:22:e2:8e:99:2b:15:f4:80:01:4c:2e:be:91:
f1:30:0a:d7:3d:f9:b0:96:ec:7e:e5:ce:81:18:d3:
e4:f3
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Key Usage: critical
Certificate Sign, CRL Sign
X509v3 Basic Constraints: critical
CA:TRUE
X509v3 Subject Key Identifier:
A9:83:FC:71:38:8C:C4:F7:22:26:87:19:6D:90:4E:4B:61:02:4F:09
X509v3 Authority Key Identifier:
6C:B2:E3:3F:F1:77:B6:BA:75:74:A6:4A:86:36:02:93:33:6E:A7:79
X509v3 Subject Alternative Name:
URI:spiffe://bar.com
Signature Algorithm: sha256WithRSAEncryption
Signature Value:
4c:32:e1:8f:bb:9d:f8:c6:fd:4a:4b:2b:60:cc:03:8c:48:b6:
39:8c:59:d1:3d:c8:54:35:0b:42:3c:8f:69:e3:ee:70:d5:71:
10:65:34:9c:91:d4:63:45:ca:ec:3e:6f:4c:bb:5d:8d:d7:44:
a0:39:47:67:2c:4a:d1:c8:c6:9d:52:0b:6c:fa:56:39:79:82:
88:74:dd:e7:85:6d:30:cc:a9:c0:fb:7a:f7:5d:9f:6f:60:82:
df:7b:08:77:9d:1d:71:06:aa:39:c1:0d:67:3c:e6:a7:1c:ee:
a6:e5:82:da:f1:93:1e:2d:8d:1c:01:ff:14:ab:df:0b:39:3e:
9e:6d:f9:7c:df:42:00:4d:88:19:b0:2f:dd:74:4f:5c:ce:1c:
dc:c7:ef:9c:28:3a:46:e7:1d:f3:61:1a:a4:88:8a:0f:dd:07:
98:d7:16:21:f4:76:8d:72:b5:24:92:23:61:2e:b2:21:8a:72:
ee:eb:77:88:57:31:19:0c:cb:08:e5:84:f0:cf:27:45:4c:39:
8d:72:97:d1:a5:8f:c0:36:95:e2:a8:f5:43:e5:7a:4d:c1:4b:
fa:c4:c9:d8:13:29:33:b2:2e:f0:b6:dc:3c:b4:a8:85:d3:12:
07:7b:ed:9f:e7:4d:af:98:a7:9a:ba:f6:05:55:84:06:84:cd:
53:9e:f4:6e
Open the cert-aa certificate and you will notice that the SPIFFE intermediate CA (SPIRE) issued the certificate to the productpage workload:
openssl x509 -noout -text -in cert-aa
Certificate:
Data:
Version: 3 (0x2)
Serial Number:
b4:b9:67:89:57:16:b3:17:73:47:fa:85:fb:52:ea:e5
Signature Algorithm: sha256WithRSAEncryption
Issuer: C=US, O=SPIFFE
Validity
Not Before: Jun 17 10:34:58 2024 GMT
Not After : Jun 17 11:35:08 2024 GMT
Subject: C=US, O=SPIRE, CN=productpage-v1-6d84b4786f-4gnmq, x500UniqueIdentifier=b0b469180e5196f3d78eea3b6b04b0b2
Subject Public Key Info:
Public Key Algorithm: id-ecPublicKey
Public-Key: (256 bit)
pub:
04:7d:02:14:e4:cb:59:db:77:c9:5c:84:ce:a9:72:
db:51:02:7a:2d:e8:61:b8:5c:7d:a7:e2:98:29:a7:
37:5f:0b:0f:b8:3e:ce:9e:26:58:1d:f8:0f:03:5a:
d1:b1:39:77:17:10:9b:a3:75:61:1e:aa:75:cb:d6:
5f:ee:26:91:62
ASN1 OID: prime256v1
NIST CURVE: P-256
X509v3 extensions:
X509v3 Key Usage: critical
Digital Signature, Key Encipherment, Key Agreement
X509v3 Extended Key Usage:
TLS Web Server Authentication, TLS Web Client Authentication
X509v3 Basic Constraints: critical
CA:FALSE
X509v3 Subject Key Identifier:
35:8C:3B:15:12:5F:8A:EE:8C:47:6E:AC:8A:75:23:4D:B2:40:44:16
X509v3 Authority Key Identifier:
A9:83:FC:71:38:8C:C4:F7:22:26:87:19:6D:90:4E:4B:61:02:4F:09
X509v3 Subject Alternative Name:
DNS:productpage-v1-6d84b4786f-4gnmq, DNS:productpage.default.svc, URI:spiffe://bar.com/ns/default/sa/bookinfo-productpage
Signature Algorithm: sha256WithRSAEncryption
Signature Value:
31:49:bd:a8:5b:b3:4e:ae:f8:27:28:e7:39:21:33:5f:d6:34:
03:4b:dc:65:49:11:c7:cd:53:a7:0e:6a:5b:0c:6b:27:b4:11:
32:a1:e2:d9:57:2f:02:f4:ae:85:68:eb:bb:c4:74:29:9a:4d:
64:80:93:fa:91:c6:a6:53:fa:3d:25:4f:73:b2:d8:8a:b2:9a:
c9:85:34:02:42:22:13:42:eb:e7:c2:87:60:cf:52:7a:24:01:
51:45:90:97:0f:02:9d:58:05:36:15:4d:24:95:ee:d6:aa:04:
95:e0:6e:c9:25:db:2c:8b:47:f2:ba:5b:ad:f6:0e:22:91:ec:
5c:e5:7d:5e:a9:d8:c4:74:46:d4:03:14:13:69:d9:d7:00:a0:
46:bc:a5:49:2b:ca:37:23:9d:e2:91:41:27:79:2e:b6:dd:ce:
60:d7:0d:cd:8d:04:28:bc:ce:80:04:96:54:b6:b7:69:80:31:
21:3d:5f:1b:50:9a:e1:e2:a4:36:9c:f0:94:19:d6:d1:c1:5b:
c3:86:67:1f:66:ab:59:18:b9:fb:43:18:34:13:5d:65:9a:6b:
66:ac:c3:f8:9d:cf:c6:d4:6e:55:6f:74:df:10:d0:e8:96:83:
a9:96:13:20:25:83:da:ae:84:09:c8:47:c4:95:c7:5f:5e:38:
a8:42:3d:9a
To modify the rotation period for the istiod certificates from 60 days (1440 hours) to 30 days (720 hours), run the following command:
kubectl apply -f ./cert-manager/cert-rotation.yaml --context=$CTX_CLUSTER1
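A sketch of the kind of resource cert-rotation.yaml applies, assuming it shortens the duration of the cert-manager-issued CA certificate; the names mirror the earlier sketch and are assumptions, the file in the repo is authoritative:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: certmanager-ca
  namespace: cert-manager
spec:
  isCA: true
  commonName: certmanager-ca
  secretName: certmanager-ca
  duration: 720h      # 30 days, down from 1440h (60 days)
  renewBefore: 360h   # renew halfway through the validity period
  issuerRef:
    name: selfsigned
    kind: ClusterIssuer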
Check the istiod logs:
kubectl logs -l app=istiod -n istio-system -f --context=$CTX_CLUSTER1
Uninstall Istio on both clusters
./istio/cleanup-istio.sh
Uninstall Spire on both clusters
./spire/cleanup-spire.sh
Uninstall EKS clusters
cd terraform/1.foo-eks
terraform destroy --auto-approve
cd ../2.bar-eks
terraform destroy --auto-approve
cd ../0.vpc
terraform destroy --auto-approve