Chainsail is designed to run on a Kubernetes cluster, either locally or in the cloud. We use the following set of tools for deploying Chainsail:
- Terraform - Used for provisioning cloud resources and base k8s cluster resources
- Docker - For building Chainsail images
- Helm - For installing Chainsail itself
This guide describes how to set up a Chainsail environment from scratch, either locally or in the cloud.
## Deploying Chainsail
Make sure that you correctly edit the Terraform and Helm files. Specifically, the container registry has to match in the following files / environment variables:

- `/terraform/cluster/local/main.tf` (`container_registry` in the `locals` block),
- `/helm/values.yaml` (`imageHubNamespace`),
- `/helm/values-local.yaml` (`imageHubNamespace`),
- `/helm/values-dev.yaml` (`imageHubNamespace`), if you're considering a Google Cloud deployment,
- and the `HUB_NAMESPACE` environment variable later on.
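For illustration, assuming a hypothetical Docker Hub namespace `my-registry` (a placeholder, not a project default), the Helm values entry would look like:

```yaml
# helm/values-local.yaml (illustrative snippet)
# The trailing slash matters: it matches the HUB_NAMESPACE="<container registry>/"
# convention used with `make images` below.
imageHubNamespace: "my-registry/"
```

The `container_registry` value in the `locals` block of `/terraform/cluster/local/main.tf` would then carry the same `my-registry` namespace.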
To deploy locally, you first need to start a local cluster using Minikube:
```shell
minikube start
```
Then you can provision cluster resources with:
```shell
cd ./terraform/cluster/local
# The first time you run Terraform, you need to run an init command:
terraform init
terraform apply
```
The local cluster uses MinIO for local object storage.
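If you want to inspect the stored objects directly, you can forward MinIO's API port to your host machine. The service name `minio` is an assumption based on the `minio-0` pod used elsewhere in this guide; check `kubectl get svc` if your manifests differ:

```shell
# Forward the MinIO API (default port 9000) to localhost;
# the service name is an assumption, verify with `kubectl get svc`.
kubectl port-forward svc/minio 9000:9000
```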
Note: Minikube has its own Docker registry, so if you want to deploy local versions of Chainsail you will need to build the latest version of its Docker images and add them to Minikube's Docker registry. One way to do this is:
```shell
# This makes docker commands use Minikube's Docker daemon
eval $(minikube docker-env)
HUB_NAMESPACE="<container registry>/" make images
```
The `HUB_NAMESPACE` environment variable has to match the value of the `imageHubNamespace` property in `helm/values.yaml` and the `container_registry` value in the `locals` block of `terraform/cluster/local/main.tf`.
Then, you can install Chainsail with Helm:
```shell
helm install -f helm/values-local.yaml chainsail ./helm
```
For development purposes, you can run the frontend web app locally:

```shell
cd ./app/client
./run_dev_client.sh
```
For more information, see the frontend README.
**Warning:** The link provided by the client to download samples won't work when Chainsail is deployed via Minikube, because the host machine does not see the Minikube-internal DNS server by default. To download sampling results, use the following command instead:
```shell
kubectl exec minio-0 -- curl --output - '<URL from download button>' > results.zip
```
Each time you make local changes to the Chainsail back-end, re-build the Docker image(s) for the services you have modified and run a Helm upgrade to deploy them locally:
```shell
eval $(minikube docker-env)
make images
helm upgrade -f helm/values-local.yaml chainsail ./helm
```
In addition to the general Prerequisites:

- Make sure that your local Google Cloud credentials are set correctly. To that end, run:

  ```shell
  gcloud auth application-default login --project <project name>
  ```

- Fill in your Google Cloud project name and region in `terraform/base/dev/main.tf` and `terraform/cluster/dev/main.tf`.
- Manually provision a Google Cloud Storage bucket `chainsail-dev-terraform-state` that holds the Terraform state.
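Creating that state bucket is a one-time manual step; a minimal sketch using `gsutil` (the region below is illustrative, pick the one you configured in `main.tf`):

```shell
# One-time creation of the Terraform state bucket; the region is illustrative.
gsutil mb -p <project name> -l europe-west1 gs://chainsail-dev-terraform-state
```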
The first step in preparing a new Chainsail environment is ensuring that (1) a Google Cloud project already exists, (2) you have adequate access rights in the project to deploy infrastructure, and (3) the GCS bucket for storing the Terraform state has already been created. If you run the commands in this guide without these prerequisites, you will likely run into errors.
```shell
cd ./terraform/base/dev
# The first time you run Terraform, you need to run an init command:
terraform init
terraform apply
```
With the base Google Cloud environment created, we can now provision the Kubernetes cluster. This step creates resources such as k8s service accounts and the k8s secrets required to run Chainsail.
```shell
cd ./terraform/cluster/dev
# The first time you run Terraform, you need to run an init command:
terraform init
terraform apply
```
If Docker images have not already been built and pushed to the Google Cloud Container Registry for your desired version of Chainsail, you should build them now. In order to push the images to the container registry, the Google Cloud credentials you use need to have access to the container registry bucket created by Terraform. That bucket is called something like `<eu, ...>.artifacts.<project name>`; its name might vary depending on the `zone` and `node_location` entries in the `chainsail_gcp` Terraform module in `terraform/base/dev/main.tf`.
To build and push the images, run:

```shell
HUB_NAMESPACE="<container registry>/" make push-images
```
The `HUB_NAMESPACE` environment variable has to match the value of the `imageHubNamespace` property in `helm/values.yaml`.
The first time you deploy Chainsail, you will need to fetch the cluster's Kubernetes access credentials using `gcloud`:

```shell
gcloud container clusters get-credentials --region $GCP_REGION chainsail
```
The GCP region can be found in `terraform/base/dev/main.tf` in the `node_location` entry of the `chainsail_gcp` module.
Once all of the desired images are published, you can install Chainsail with:
```shell
helm install -f helm/values-dev.yaml chainsail ./helm
```
The Chainsail front-end is currently deployed separately using App Engine:
**Note:** The App Engine `app.yaml` is generated by Terraform. Run the `terraform/base/<env-name>` module to recreate it.
```shell
cd app/client
npm run deploy
```
There are a couple of additional steps which need to be performed manually:

- A Firebase project must exist which corresponds to your Google Cloud project.
- The App Engine domain created above must be manually set as an authorized domain in Firebase at https://console.firebase.google.com.
- A VPC Connector must be created for the region in which you have deployed Chainsail (see https://cloud.google.com/vpc/docs/configure-serverless-vpc-access).
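As a sketch, the VPC connector from the last step can be created with `gcloud`; the connector name, network, and IP range below are illustrative (see the linked docs for the full set of options):

```shell
# Create a Serverless VPC Access connector; name, network, and range are illustrative.
gcloud compute networks vpc-access connectors create chainsail-connector \
  --region <GCP region> \
  --network default \
  --range 10.8.0.0/28
```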
To upgrade an already running Chainsail cluster to a newer version of the chart, use:

```shell
helm upgrade -f helm/values-dev.yaml chainsail ./helm
```
If using the `latest` tag for images in the Helm chart, you will also need to restart the services so that the latest image is pulled:

```shell
kubectl rollout restart deployment scheduler-worker
kubectl rollout restart deployment scheduler
kubectl rollout restart deployment mcmc-stats-server
```
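To wait until a restarted deployment has fully rolled out before moving on, you can use `kubectl rollout status`, e.g.:

```shell
# Blocks until the new pods are up (or the rollout fails)
kubectl rollout status deployment scheduler
```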
The scheduler pod supports adding and removing user email addresses from the allowed-users whitelist.
To add a user:

```shell
export SCHEDULER_POD=$(kubectl get pods -l chainsail.io.service=scheduler -o jsonpath='{.items[0].metadata.name}')
kubectl exec $SCHEDULER_POD -- scheduler-add-user --email someone@provider.com
```
To remove a user:

```shell
kubectl exec $SCHEDULER_POD -- scheduler-remove-user --email someone@provider.com
```
See the instructions in [/app/controller/README.md](app/controller/README.md).