This is an end-to-end deployment of a single infrastructure component: in this case, a ConcourseCI instance. We combine several tools to make this work:
- Rake - for task management
- Terraform - for infrastructure provisioning, heavily leaning on InfraBlocks
- Confidante - for configuration management
InfraBlocks modules, and our configuration, contain some terms which need explanation:
- Component: a collection of infrastructure which together provides value. For example, a typical micro-service to serve customer information, with a database and an ECS Service, would all come under the component `customer-service`. For this codebase, we've just used `concourse-example`, but this could easily be `ci-server`.
- Role: the individual bits that make up a component. Examples of roles include the `database`, `log-group`, and the `service`. You can see how we layer together roles in the `config/infra/roles` directory (see the sketch after this list).
- Deployment identifier: a label so you can differentiate between multiple deployments of the same component. This could be tied to an environment, e.g. `development` or `production`, or something more clever. We've used a mixture of environment and build flavours, so we could run A/B tests of services, e.g. `production-blue` and `production-green`.
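To give a feel for how roles are laid out, listing the roles directory might look something like the following. This listing is hypothetical; the real set of roles is whatever this repository defines, and the provisioning tasks further down are a good hint:

$ ls config/infra/roles
bootstrap  certificate  cluster  database  domain  network  services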
This code requires Terraform 0.12 or greater.
We use the `go` script to automate pre-install steps like installing Gems. To get `go` onto the PATH, we use direnv. If you want to skip this step, use `rake` instead of `go` in all the commands below.
$ brew install direnv
$ direnv allow
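Note that direnv only takes effect once it's hooked into your shell; for bash, that looks something like this (adjust for your shell of choice):

$ echo 'eval "$(direnv hook bash)"' >> ~/.bashrc

And if you'd rather skip direnv entirely, every task below runs just the same via rake, e.g.:

$ rake "bootstrap:provision[$DEPLOYMENT_IDENTIFIER]"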
It's not recommended, but for this example we keep secrets in the repository.
We keep them locked up using `git-crypt`, but we've provided the key for you to unlock them.
If you want to deploy this for real, roll these secrets!
$ brew install git-crypt
$ git-crypt unlock ./git-crypt-key
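To confirm the unlock worked, `git-crypt status` reports the encryption state of each file; the secret files should now read as decrypted in your working copy:

$ git-crypt status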
Because S3 bucket names are global, if you deployed this all as-is you'd likely collide with others (including us!) on things like the state bucket. So you need to change the deployment identifier.
It can be anything you want. :-)
$ export DEPLOYMENT_IDENTIFIER=example
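To be reasonably sure of avoiding collisions, pick something unique to you; as a sketch:

$ export DEPLOYMENT_IDENTIFIER="example-$(whoami)"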
We need to store remote Terraform state, so the first thing we do is build an S3 bucket to keep it all in.
$ go "bootstrap:provision[$DEPLOYMENT_IDENTIFIER]"
The state for this bucket is stored in the `state` folder in this repository.
If you want to use this repository as part of a team environment, you need to go into the `.gitignore` file and delete the following:
# State bucket state - remove this
state/*
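With that removed, commit the state so the rest of the team shares it; something like:

$ git add .gitignore state
$ git commit -m "Share bootstrap state"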
In this example, we stand up a public and private zone so we can refer to our CI by name rather than by IP address.
$ go "domain:provision[$DEPLOYMENT_IDENTIFIER,example.com]"
We need a valid certificate so we can access the Concourse dashboard over HTTPS.
$ go "certificate:provision[$DEPLOYMENT_IDENTIFIER]"
We need to build a network to put our services into. At the moment it just takes up `10.0.0.0/16`.
$ go "network:provision[$DEPLOYMENT_IDENTIFIER]"
We want to deploy Concourse on ECS, so we need somewhere to put our Concourse Docker images, and then we need to deploy them.
$ go "web_image_repository:provision[$DEPLOYMENT_IDENTIFIER]"
$ go "web_image:publish[$DEPLOYMENT_IDENTIFIER]"
$ go "worker_image_repository:provision[$DEPLOYMENT_IDENTIFIER]"
$ go "worker_image:publish[$DEPLOYMENT_IDENTIFIER]"
We need to provision some machines to run our ECS cluster on. In this example we spin up a single `t2.medium` box per availability zone; in this case, that's three.
$ go "cluster:provision[$DEPLOYMENT_IDENTIFIER]"
Concourse needs some kind of SQL database to store build information in, so we provision a Postgres instance using RDS.
$ go "database:provision[$DEPLOYMENT_IDENTIFIER]"
Now that we have everything we need, we just have to tell ECS to deploy the services. This will give us some ECS services, as well as a load balancer.
Note: in this example, we've opened up the CI to `0.0.0.0/0`.
$ go "services:provision[$DEPLOYMENT_IDENTIFIER]"