
Deploy the Base Infrastructure Stack

You need:

  • the private teztnets-infra repository on GitHub
  • a Pulumi organization, so that several people can collaborate on the same stack

Set up project, repo, stack

Create a new gcloud project and switch to it in the CLI:

gcloud config set project <my-teztnets-project>
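
If the project does not exist yet, create it first (same placeholder project ID as above):

gcloud projects create <my-teztnets-project>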

In order to deploy the infra, you need to be logged in as a user with full access to said project.

Next, you need a pulumi stack. There are two ways:

  1. create a new project with pulumi new and follow the wizard. Pick your organization, select "typescript", then populate index.ts with the desired infrastructure. Or:
  2. use the pre-existing project in this private repo

We are going with method 2.

In index.ts, pick a region and set the project and region variables accordingly:

const project = "tf-teztnets";
const region = "us-central1-a";

The index.ts file deploys:

  • a k8s cluster
  • a 2-VM node pool
  • 4 infrastructure charts: cert-manager, external-dns, nginx and the prometheus operator
  • a service account for CI automated deployments.

See the comments in index.ts for more details.
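
As a rough sketch, the cluster, node pool and service account portion of index.ts looks something like this (resource names and the machine type are illustrative, not the exact values in the repo):

import * as gcp from "@pulumi/gcp";

const project = "tf-teztnets";
const region = "us-central1-a";

// GKE cluster, created through the GCP provider
const cluster = new gcp.container.Cluster("teztnets", {
  project,
  location: region,
  initialNodeCount: 1,
  removeDefaultNodePool: true,
});

// 2-VM node pool attached to the cluster
const nodePool = new gcp.container.NodePool("teztnets-pool", {
  project,
  location: region,
  cluster: cluster.name,
  nodeCount: 2,
  nodeConfig: { machineType: "n2-standard-4" },
});

// Service account used for CI automated deployments
const ciAccount = new gcp.serviceaccount.Account("ci-deployer", {
  project,
  accountId: "ci-deployer",
  displayName: "CI automated deployments",
});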

Deploy with pulumi up

From the teztnets-infra repo root, issue the following:

npm install
pulumi up

Pulumi will ask if you want to create a stack. Create an organization stack such as tacoinfra/teztnets-infra/prod. This requires a team plan and allows several team members to work on the same infrastructure.
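
You can also create the stack explicitly before running pulumi up; only the organization and stack name are needed, since the project name comes from Pulumi.yaml:

pulumi stack init tacoinfra/prod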

Pulumi then displays a preview of the infrastructure it's planning to deploy. Review and select "Yes".

There is a challenge with deploying a cluster with IaC and then resources inside that cluster: two providers are involved, the GCP provider and the k8s provider. Pulumi first deploys the cluster through the GCP API, then retrieves the k8s control plane credentials, and finally deploys the charts with the newly configured k8s provider. This does not always converge on the first pass: you may need to run pulumi up twice.
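
Continuing from the cluster sketch above, the provider chaining looks roughly like this (the kubeconfig template and chart values are illustrative, assuming the standard Pulumi GKE pattern):

import * as k8s from "@pulumi/kubernetes";
import * as pulumi from "@pulumi/pulumi";

// Build a kubeconfig from the cluster outputs, so the k8s provider is only
// configured once the control plane exists.
const kubeconfig = pulumi
  .all([cluster.name, cluster.endpoint, cluster.masterAuth])
  .apply(([name, endpoint, auth]) => `apiVersion: v1
kind: Config
clusters:
- name: ${name}
  cluster:
    certificate-authority-data: ${auth.clusterCaCertificate}
    server: https://${endpoint}
contexts:
- name: ${name}
  context: {cluster: ${name}, user: ${name}}
current-context: ${name}
users:
- name: ${name}
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: gke-gcloud-auth-plugin
`);

const k8sProvider = new k8s.Provider("gke", { kubeconfig });

// Charts such as cert-manager are then deployed through that provider
const certManager = new k8s.helm.v3.Chart("cert-manager", {
  chart: "cert-manager",
  fetchOpts: { repo: "https://charts.jetstack.io" },
  values: { installCRDs: true },
}, { provider: k8sProvider });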

The infrastructure repo is meant to be deployed by humans and has no CI attached to it. This is for safety reasons.

Get Local Cluster Credentials

To interact with kubernetes locally, once the cluster is created, do the following:

gcloud container clusters list
gcloud container clusters get-credentials <cluster name>

Then you can run k9s and see the pods listed. There should be:

  • cert-manager pods and external-dns pod in the default namespace
  • nginx pod in the nginx namespace
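
If you prefer plain kubectl to k9s, the same check is:

kubectl get pods --all-namespaces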