This module provisions Kubernetes clusters on AWS EC2 instances using kubeadm with a stacked etcd topology.
- Each node is created with a user-data script that runs at first boot.
- The script prepares each node with prerequisite configurations and packages.
- `kubeadm init` is run on the first control-plane node to initialize the Kubernetes cluster.
- The resulting `kubeadm join` commands are copied to an S3 bucket.
- All other nodes download the `kubeadm join` command from the S3 bucket and join the cluster.
- The cluster admin can then manage the cluster using `kubectl`, etc. via the first control-plane node.
> [!IMPORTANT]
> No CNI plugin is installed by this module. You will need to install one yourself before running workloads.
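One way to install a CNI plugin is to run `kubectl` on the first control-plane node over SSH once the module has finished. The sketch below is illustrative only: the `controlplane_public_ips` output name, the `ubuntu` login user, and the key path are assumptions rather than documented module interfaces, and Flannel is just one of several CNI options.

```hcl
# Hypothetical sketch: apply the Flannel manifest on the first
# control-plane node over SSH after cluster creation.
resource "null_resource" "install_cni" {
  depends_on = [module.mycluster]

  connection {
    type        = "ssh"
    user        = "ubuntu"                                    # assumed login user
    host        = module.mycluster.controlplane_public_ips[0] # hypothetical output name
    private_key = file("~/.ssh/mykeypair.pem")                # assumed key path
  }

  provisioner "remote-exec" {
    inline = [
      # Use the admin kubeconfig written by kubeadm init.
      "sudo kubectl --kubeconfig /etc/kubernetes/admin.conf apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml"
    ]
  }
}
```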
```hcl
module "mycluster" {
  source = "bcbrookman/kubeadm-k8s/aws"

  cluster_name            = "mycluster"
  controlplane_node_count = 1
  worker_node_count       = 3
  subnet_id               = aws_subnet.mysubnet.id
  ssh_key_name            = aws_key_pair.mykeypair.key_name
}
```
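Because the control plane uses a stacked etcd topology, a highly available control plane is a matter of raising `controlplane_node_count`; an odd count keeps etcd quorum intact. A sketch reusing the same inputs:

```hcl
module "mycluster_ha" {
  source = "bcbrookman/kubeadm-k8s/aws"

  cluster_name            = "mycluster-ha"
  controlplane_node_count = 3 # odd number preserves etcd quorum
  worker_node_count       = 3
  subnet_id               = aws_subnet.mysubnet.id
  ssh_key_name            = aws_key_pair.mykeypair.key_name
}
```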
The DNS name of the API server load balancer is exported as a module output so it can be used in a CNAME record to give the API server a custom DNS name.
For example:
```hcl
module "mycluster" {
  source = "bcbrookman/kubeadm-k8s/aws"
  # ...

  apiserver_dns = "mycluster.mydomain.example"
}

resource "aws_route53_record" "mycluster" {
  zone_id = aws_route53_zone.example.zone_id
  name    = "mycluster.mydomain.example"
  type    = "CNAME"
  ttl     = 300
  records = [module.mycluster.apiserver_lb_dns_name]
}
```