This is a demo of how to deploy ECK and its dependencies in an air-gapped environment.
This repo is for demo purposes only and should not be used in production.
SELinux is not enabled on the RKE2 cluster, and the cluster does not enforce the CIS 1.23 Kubernetes Benchmark.
⚠️ Deploying this repo takes almost two hours due to the RKE2 playbook and the large container images that must be downloaded, tagged, and pushed.
During development, the second RKE2 server repeatedly failed on its initial boot; restarting the rke2-server
service fixed it. The RKE2 playbooks in this repo therefore include an additional task that differs from the upstream playbook, in case you wish to implement a similar architecture:
```yaml
# ansible/roles/rke2_server/other-servers.yml
- name: Start rke2-server
  ansible.builtin.systemd:
    name: rke2-server
    state: started
    enabled: true
  timeout: 120
  ignore_errors: true

- name: Restart rke2-server
  ansible.builtin.systemd:
    name: rke2-server
    state: restarted
```
- Ansible 2.15.2+
- Terraform v1.5.7+
- kubectl v1.25.4+
- MaxMind license key
Create an SSH key pair with a password. This is required because a STIG-hardened RHEL 8 host does not allow passwordless login.

```shell
ssh-keygen
```
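If you prefer a non-interactive invocation, something like the following works; the key path and passphrase below are placeholders, not values this repo expects:

```shell
# Generate a 4096-bit RSA key with a passphrase, non-interactively.
# The file name and passphrase are illustrative - substitute your own.
rm -f /tmp/airgap_demo_key /tmp/airgap_demo_key.pub
ssh-keygen -t rsa -b 4096 -f /tmp/airgap_demo_key -N 'demo-passphrase' -q
ls -l /tmp/airgap_demo_key /tmp/airgap_demo_key.pub
```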
Update `auto.tfvars` with your variables:

```hcl
gcp_region           = "us-east1"
gcp_zone             = "us-east1-c"
gce_ssh_pub_key_file = "ssh-rsa AAAAB...= "
project              = "air-gap-demo"
workers              = 5
servers              = 3
```
You must have your Google credentials set as an environment variable; follow this guide for instructions:

```shell
export GOOGLE_CREDENTIALS=<path_to_downloaded_json>
```
Deploy the infrastructure:

```shell
terraform -chdir=terraform/ apply -var-file=auto.tfvars -auto-approve
```
Configure RKE2:

```shell
eval "$(ssh-agent -s)"
ssh-add /path/to/your/ssh_key
cd ansible
ansible-galaxy install -r requirements.yaml
ansible-playbook -i hosts.ini rke2.yml --private-key=/path/to/your/ssh_key --extra-vars "LICENSE=<your_maxmind_license>"
```
Export the downloaded config and verify your cluster is ready:

```shell
export KUBECONFIG=<absolute_path_to>/ansible/config
cd ..
kubectl get nodes
```
Deploy cert-manager:

```shell
kubectl apply -f k8s/cert-manager/cert-manager.yaml
# Wait for cert-manager to be ready before creating the issuer, e.g.:
kubectl -n cert-manager wait --for=condition=Available deployment --all --timeout=300s
kubectl apply -f k8s/cert-manager/ca-issuer.yaml
```
Deploy Longhorn (ensure you update the images to point to your registry as explained here; find and replace `<your_registry_here>`):

```shell
kubectl apply -f k8s/longhorn/longhorn.yaml
kubectl apply -f k8s/longhorn/ingress.yaml
```
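The find-and-replace step can be scripted with GNU sed before applying the manifests. The sketch below demonstrates the substitution on a stand-in file; the registry name and image tag are illustrative, and in the real repo you would run the `sed` line against `k8s/longhorn/longhorn.yaml` instead:

```shell
# Create a stand-in manifest containing the placeholder, then rewrite
# the registry in place (GNU sed syntax).
printf 'image: <your_registry_here>/longhornio/longhorn-manager:v1.5.1\n' > /tmp/longhorn-demo.yaml
sed -i 's|<your_registry_here>|registry.air-gap.demo:5000|g' /tmp/longhorn-demo.yaml
cat /tmp/longhorn-demo.yaml
```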
Deploy the ECK CRDs:

```shell
kubectl apply -f k8s/eck/00_crds.yaml
```
Deploy the ECK operator (update the image to point to your registry and add your registry to the ConfigMap):

```yaml
data:
  eck.yaml: |-
    log-verbosity: 0
    metrics-port: 0
    container-registry: "<your_registry_here>:5000"
---
spec:
  terminationGracePeriodSeconds: 10
  serviceAccountName: elastic-operator
  securityContext:
    runAsNonRoot: true
  containers:
    - image: "<your_registry_here>:5000/elastic/eck-operator:2.9.0"
```

```shell
kubectl apply -f k8s/eck/01_operator.yaml
```
Deploy the Elastic license:

```shell
kubectl apply -f k8s/eck/02_license.yaml
```
Deploy the namespaces:

```shell
kubectl apply -f k8s/eck/03_namespaces.yaml
```
Deploy the Elastic Package Registry (update the image to point to your registry; find and replace `<your_registry_here>`):

```shell
kubectl apply -f k8s/eck/epr/epr.yaml
```
Deploy the Elastic Artifact Registry (update the image to point to your registry; find and replace `<your_registry_here>`):

```shell
kubectl apply -f k8s/eck/ear/ear.yaml
```
Deploy the Elastic Endpoint Artifact Repository (update the image to point to your registry; find and replace `<your_registry_here>`):

```shell
kubectl apply -f k8s/eck/eer/eer.yaml
```
Deploy the Elastic Learned Sparse EncodeR (update the image to point to your registry; find and replace `<your_registry_here>`):

```shell
kubectl apply -f k8s/eck/elser/elser.yaml
```
Deploy the GeoIP database (update the image to point to your registry; find and replace `<your_registry_here>`):

```shell
kubectl apply -f k8s/eck/geoip/geoip.yaml
```
Deploy the monitoring cluster:

```shell
kubectl apply -f k8s/eck/04_monitor.yaml
```
Deploy the production cluster:

```shell
kubectl apply -f k8s/eck/05_prod.yaml
```
Deploy the Fleet Server (wait for the production cluster to be ready):

```shell
kubectl apply -f k8s/eck/06_fleet.yaml
```
Deploy the Elastic Maps Service:

```shell
kubectl apply -f k8s/eck/07_maps.yaml
```
Update `/etc/hosts` with your load balancer's external IP (found in hosts.ini or in the GCP console) and the FQDNs of your services:

```shell
echo "<your_lb_external_ip> longhorn.air-gap.demo monitor.air-gap.demo prod.air-gap.demo maps.air-gap.demo" >> /etc/hosts
```
Get credentials:

```shell
kubectl get secret monitor-es-elastic-user -o go-template='{{.data.elastic | base64decode}}' -n monitor
kubectl get secret elasticsearch-es-elastic-user -o go-template='{{.data.elastic | base64decode}}' -n prod
```
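For context, the `base64decode` in those go-templates does the same thing as piping the stored value through `base64 -d`: Kubernetes Secrets store their data base64-encoded. A quick local illustration (the encoded value here is just a sample, not a real password):

```shell
# Secret values are base64-encoded at rest; decode a sample value locally.
echo 'Y2hhbmdlbWU=' | base64 -d   # prints: changeme
```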
Log in to your clusters! Ensure you update the default Elastic Artifact Registry and the path to the Elastic Endpoint Artifact Repository in Kibana.
You may need to add an ingress for each of these services if Agents outside the Kubernetes cluster need access.
This demo does not implement all security best practices. Here are a few things to keep in mind if you want to deploy in production:
- Ensure SELinux is enabled for RKE2 and the host
- Add the CIS 1.23 profile
- Update the container security contexts (some containers in this demo were updated)
- Enforce network policies to segment the cluster
- Use hardened images from Iron Bank