A Terraform resource definition and provisioner that lets you install Kubernetes on a cluster.

The underlying resources where the provisioner runs could be things like AWS instances, libvirt machines, LXD containers or any other resource that supports SSH-like connections. The `kubeadm` provisioner will run over this SSH connection all the commands necessary for installing Kubernetes on those resources, according to the configuration specified in the `resource "kubeadm"` block.

Here is an example that will set up Kubernetes in a cluster created with the Terraform libvirt provider:
resource "kubeadm" "main" {
api {
external = "loadbalancer.external.com" # external address for accessing the API server
}
cni {
plugin = "flannel" # could be 'weave' as well...
}
network {
dns_domain = "my_cluster.local"
services = "10.25.0.0/16"
}
# install some extras: helm, the dashboard...
helm { install = "true" }
dashboard { install = "true" }
}
```hcl
# from the libvirt provider
resource "libvirt_domain" "master" {
  name   = "master"
  memory = 1024

  # this provisioner will start a Kubernetes master in this machine,
  # with the help of "kubeadm"
  provisioner "kubeadm" {
    # there is no "join", so this will be the first node in the cluster: the seeder
    config = "${kubeadm.main.config}"

    # when creating multiple masters, the first one (the _seeder_) must have join = "",
    # and the rest will join it afterwards...
    join = "${count.index == 0 ? "" : libvirt_domain.master.network_interface.0.addresses.0}"
    role = "master"

    install {
      # this will try to install "kubeadm" automatically in this machine
      auto = true
    }
  }

  # provisioner for removing the node from the cluster
  provisioner "kubeadm" {
    when   = "destroy"
    config = "${kubeadm.main.config}"
    drain  = true
  }
}
```
```hcl
# from the libvirt provider
resource "libvirt_domain" "minion" {
  count = 3
  name  = "minion${count.index}"

  # this provisioner will start a Kubernetes worker in this machine,
  # with the help of "kubeadm"
  provisioner "kubeadm" {
    config = "${kubeadm.main.config}"

    # this will make this minion "join" the cluster started by the "master"
    join = "${libvirt_domain.master.network_interface.0.addresses.0}"

    install {
      # this will try to install "kubeadm" automatically in this machine
      auto = true
    }
  }

  # provisioner for removing the node from the cluster
  provisioner "kubeadm" {
    when   = "destroy"
    config = "${kubeadm.main.config}"
    drain  = true
  }
}
```
Note well that:

- all the provisioners must specify the `config = ${kubeadm.XXX.config}` attribute.
- any other nodes that join the seeder must specify the `join` attribute pointing to the `<IP/name>` they must join. You can use the optional `role` parameter for specifying whether it is joining as a `master` or as a `worker`, as in the sketch below.
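For instance, a node that explicitly joins the seeder as a worker could look like this. This is only a minimal sketch reusing the resources from the example above; the `extra_worker` resource name is just for illustration:

```hcl
resource "libvirt_domain" "extra_worker" {
  name = "extra-worker"

  provisioner "kubeadm" {
    config = "${kubeadm.main.config}"

    # join the API server of the seeder (the first master)...
    join = "${libvirt_domain.master.network_interface.0.addresses.0}"
    # ...explicitly as a worker node
    role = "worker"

    install {
      # try to install "kubeadm" automatically in this machine
      auto = true
    }
  }
}
```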
Now you can see the plan, apply it, and then destroy the infrastructure:
```console
$ terraform plan
$ terraform apply
$ terraform destroy
```
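Scaling the cluster up or down is then just a matter of changing the `count` of your workers and applying again. A minimal sketch, based on the `minion` resource from the example above:

```hcl
resource "libvirt_domain" "minion" {
  count = 5   # bumped from 3: "terraform apply" will add two more workers to the cluster
  name  = "minion${count.index}"

  # ... same "kubeadm" provisioners as in the example above ...
}
```

When scaling down, the `when = "destroy"` provisioner shown above takes care of draining the nodes that are removed.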
You can find examples of the provider/provisioner in other environments like OpenStack, LXD, etc. in the examples directory.
Some of the features:

- Easy deployment of Kubernetes clusters in any platform supported by Terraform, just by adding our `provisioner "kubeadm"` to the machines you want to be part of the cluster.
- All operations are performed through the SSH connection created by Terraform, so you can create a k8s cluster in completely isolated machines.
- Multi-master deployments. Just add a Load Balancer that points to your masters and you will have a HA cluster!
- Easy scale-up/scale-down of the cluster by just changing the `count` of your masters or workers.
- Use the `kubeadm` attributes in other parts of your Terraform script (see the sketch after this list). This makes it easy to do things like:
  - enabling SSL termination by using the certificates generated for `kubeadm` in the code you have for creating your Load Balancer.
  - creating machine templates (for example, `cloud-init` code) that can be used for creating machines dynamically, without Terraform being involved (like autoscaling groups in AWS).
- Automatic rolling upgrade of the cluster by just changing the base image of your machines. Terraform will take care of replacing old nodes with upgraded ones, and this provider will take care of draining the nodes.
- Automatic deployment of some addons, like CNI drivers, the k8s Dashboard, Helm, etc.

(check the TODO for an updated list of features).
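As an illustration of re-using those attributes, the outputs of the `resource "kubeadm"` block can be interpolated anywhere else in your Terraform code. The following is only a hedged sketch: the `ca_crt` attribute name is an assumption made for illustration (check the `resource "kubeadm"` documentation for the attributes it actually exports):

```hcl
# Hypothetical example: write out a CA certificate exported by the "kubeadm"
# resource so it can be reused elsewhere (e.g. for SSL termination in the
# Load Balancer in front of the masters).
# NOTE: "ca_crt" is an assumed attribute name, used only for illustration.
resource "local_file" "cluster_ca" {
  content  = "${kubeadm.main.ca_crt}"
  filename = "${path.module}/ca.crt"
}
```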
This provider/provisioner is being actively developed, but I would still consider it ALPHA, so there can be many rough edges and some things can change without prior notice. To see what is left or planned, see the issues list and the roadmap.
Requirements:

- Terraform
- Go >= 1.12 (for compiling)
Build and install both binaries into your local Terraform plugins directory with:

```console
$ mkdir -p $HOME/.terraform.d/plugins
$ # with go >= 1.12
$ go build -v -o $HOME/.terraform.d/plugins/terraform-provider-kubeadm \
    github.com/inercia/terraform-provider-kubeadm/cmd/terraform-provider-kubeadm
$ go build -v -o $HOME/.terraform.d/plugins/terraform-provisioner-kubeadm \
    github.com/inercia/terraform-provider-kubeadm/cmd/terraform-provisioner-kubeadm
```
- More details on the installation instructions.
- Using `kubeadm` in your Terraform scripts:
  - The `resource "kubeadm"` configuration block.
  - The `provisioner "kubeadm"` block.
  - Additional stuff necessary for having a fully functional Kubernetes cluster, like installing CNI, the dashboard, etc...
- Deployment examples for several environments (OpenStack, LXD, etc.).
- Roadmap, TODO and vision.
- FAQ.
You can run the unit tests with:

```console
$ make test
```

There are end-to-end tests as well, which can be launched with:

```console
$ make tests-e2e
```
- Author: Alvaro Saurin <alvaro.saurin@gmail.com>
- License: Apache 2.0. See the LICENSE file.