All commands are expected to be run in the `ci` folder.
Let's first set the target - what do we want the CI/CD pipeline to do for us?
CI (pull requests):

| Files changed in... | then... |
|---|---|
| `packages/backend` (or `/package.json`) | test `packages/backend` |
| `packages/app` (or `/package.json`) | test and dummy build `packages/app` |
CD (merges):

| Files changed in... | then... |
|---|---|
| `packages/backend` (or `/package.json`) | deploy `packages/backend` |
| `packages/app` (or `/package.json`) | build and deploy `packages/app` |
For passed tests, the test-level CI jobs write a token to the build project's Cloud Storage. The CD jobs can then check this token, to know whether a certain code base has (at any earlier time) passed the tests. Deployment therefore doesn't need to re-run tests, yet has confidence that the code can be deployed.
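As an illustration of how such a pass token could work (the bucket name and object layout below are the author's assumptions, not taken from this repo's scripts):

```shell
#!/bin/sh
# Hypothetical pass-token layout; PASS_BUCKET is a placeholder name.
PASS_BUCKET="gs://my-ci-builder-passes"

# Object path for a given commit SHA
token_path() {
  echo "${PASS_BUCKET}/passed/$1"
}

# CI job, after tests pass (uncomment to actually write the token):
#   gsutil cp /dev/null "$(token_path "$COMMIT_SHA")"

# CD job, before deploying (uncomment to check the token exists):
#   gsutil -q stat "$(token_path "$COMMIT_SHA")" || exit 1

token_path "abcd1234"   # prints gs://my-ci-builder-passes/passed/abcd1234
```

The token object can be empty; its mere existence under the commit's SHA is the signal.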
Note: The `/dc/` folder is not directly used in CI. The Firebase emulator within it is prebuilt and pushed into the `ci-builder` project, which saves time.
The model recommended by the author is as follows:
```
                  PR changed
 ┌────────────────────────┐ ◄──────────── ┌──────────────┐
 │ (1) CI-builder project │               │  GitHub repo │
 │  - builder images      │   pass/fail   │              │
 │  - PR CI tasks         │ ────────────► │              │
 │    "does it pass?"     │               └──────┬───────┘
 │  ┌──────────────────┐  │                      │
 │  │ Cloud Storage    │  │                      │ merge to
 │  │  - pass token    │  │                      │ master
 │  └──────────────────┘  │                      │
 └───────────┬────────────┘                      │
             │ provide image,                    │
             │ check pass token                  ▼
             │               ┌───────────────────────┐
             └─────────────► │ (2) Staging project   │
                             │  - deploy CI task     │
                             └───────────────────────┘
                             ┌───────────────────────┐
                             │ (2b) N other projects │
                             │  (same pattern)       │
                             └───────────────────────┘
```
This is a separately created GCP project (it has no counterpart in Firebase) that:

- carries the builder Docker image(s)
- runs the "does this pass the tests?" CI jobs (these don't involve access to a Firebase cloud project)
- for passed tests, writes the git SHA to Cloud Storage
These GCP projects are created automatically when a Firebase project is created. Firebase uses them under the hood, and we piggyback on them to also help in CI/CD. They run the deployments when a certain branch changes.
You can have as many deployment targets as you want, from the same code base. Each maps to a separate Firebase project, with its own data stores, users, deployed versions and CI/CD setup.
You might e.g. have `staging` and `production` environments[1], or multiple production environments, say one per customer.

[1]: Even whether you call them "stages", "deployments" or "environments" is up to you.
- **Keeps production keys safe**

  The "production keys" don't need to be shared - at all. The GCP projects deploy onto themselves, and someone in the organization already has admin access to them.

- **Deployment access via GitHub**

  Deployments are now guarded by the version control access controls, since anyone who can merge to `master` can also deploy (they become the same thing).

- **Independent environments**

  There is no central list of deployment environments. You can remove one simply by deleting its GCP project. This removes the Firebase resources, but also the associated CD triggers, without affecting other environments.
- **The layout doesn't provide means to handle inter-dependencies between the front-end and the backend.**

  Say you are deploying a feature where changes have been made to both the backend and the frontend, and the frontend needs the latest backend to be deployed in order to work.

  Merging these changes (even as a single commit!) gives no guarantee of the order of deployment. Both CD jobs run individually, and finish unaware of each other.

  Cloud Build does not provide a mechanism for such synchronization. We'd need to build that ourselves, complicating the setup.
The solution is to handle this manually. You can either use two separate merges, or (maybe preferred) deploy the backend manually, using the `first` folder's tools.
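For illustration only, a manual backend-first deployment could look roughly like the sketch below, using the stock Firebase CLI (this repo's own tooling may differ; the `staging` alias is a placeholder):

```shell
#!/bin/sh
# Hypothetical sketch: enforce backend-before-frontend ordering in one
# manual script. The firebase commands are commented out.
set -e  # abort the whole script if any step fails

deploy_in_order() {
  # firebase use staging
  # firebase deploy --only functions   # backend first
  # firebase deploy --only hosting     # front-end only after backend succeeds
  echo "backend,app"                   # the order this sketch enforces
}

deploy_in_order
```

With `set -e`, a failing backend deploy stops the script, so the front-end never ships against a stale backend.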
If you are aware of other cons, please leave a mention.
This project can be used not only across the target environments, but also across all web app projects that a single team, or company, is responsible for.
- It's good to find all (non-deploying) CI tasks in one place
- Access to it can be provided to all developers (to set up new test runs, or modify existing ones)
This layout seems light enough, yet flexible enough, to recommend. The following text expects you to have it in place.
- `gcloud` CLI

Installing `gcloud` on macOS
- Download the package from the official installation page
- Extract it in the downloads folder, but then...
- Move `google-cloud-sdk` to a location where you'd like it to remain (e.g. `~/bin`).

  When you run the install script, the software is installed in place. You cannot move it around any more.

- From here, you can follow the official instructions:

  ```
  ./google-cloud-sdk/install.sh
  ./google-cloud-sdk/bin/gcloud init
  ```

To update:

```
gcloud components update
```
Installing `gcloud` on Windows 10 + WSL2

```
$ sudo apt-get install google-cloud-sdk
```

Note: This version may lag a bit behind, and doesn't have support for `gcloud components`, but should be enough.

To update:

```
$ sudo apt-get upgrade google-cloud-sdk
```
- Create a GCP project for the CI builder role, and make it the active project for `gcloud`.

```
$ gcloud auth login
$ gcloud projects list
```

Pick the right one, then:

```
$ gcloud config set project <project-id>
```

Hint: To see the current active project:

```
$ gcloud config get-value project
```

Note: If you are familiar with Firebase CLI projects (you won't touch them directly, with this repo), it's good to know one difference. Whereas `firebase use` projects are tied to a folder, the `gcloud` project setting is system-wide. You can change it in any terminal or folder; the effect is global.
These are already created by Firebase.

The CI scripts require your builder project to have the `firebase-emulators` and `cypress-custom` Docker images prepared in the Artifact Registry. The building (and pushing) itself is done in the Google cloud. Let's get to it!
- Log into your "CI builder" GCloud project (see steps above).

- Build and push the images to Artifact Registry:

  ```
  $ make pre
  ...
  042aa377-[...]  2022-11-04T08:07:31+00:00  1M10S  gs://ci-builder_cloudbuild/source/1667549250.010852-205fcd55a4ee413888e23ff17b8f8edf.tgz  -  SUCCESS
  ...
  485b8a09-[...]  2022-11-04T08:08:46+00:00  2M37S  gs://ci-builder_cloudbuild/source/1667549325.462326-8997b1bcf2eb404baef28445d1dbeda0.tgz  -  SUCCESS
  ```

  This can take 4..5 mins.
- Check the images:

  ```
  $ gcloud artifacts docker images list --include-tags us-central1-docker.pkg.dev/ci-builder/builders
  Listing items under project ci-builder, location us-central1, repository builders.

  IMAGE                                                              DIGEST  TAGS     CREATE_TIME          UPDATE_TIME
  us-central1-docker.pkg.dev/ci-builder/builders/cypress-custom      [...]   10.10.0  2022-11-04T10:11:23  2022-11-04T10:11:23
  us-central1-docker.pkg.dev/ci-builder/builders/cypress-custom      [...]   10.11.0  2022-11-04T10:20:38  2022-11-04T10:20:38
  us-central1-docker.pkg.dev/ci-builder/builders/firebase-emulators  [...]   11.16.0  2022-11-04T10:17:54  2022-11-04T10:17:54
  us-central1-docker.pkg.dev/ci-builder/builders/firebase-emulators  [...]   11.14.4  2022-11-04T10:08:41  2022-11-04T10:08:41
  ```
Note: `ci-builder` is the GCP builder project and `builders` the repository within it. You'll use your own names.

There are two versions of both images. You'll only need one.
You can remove the unneeded images in the GCP Console (online), or with:

```
$ gcloud artifacts docker images delete us-central1-docker.pkg.dev/{project}/{repository}/{image}:{tag}
```
Note: Untagged images are ones you've built earlier, whose tag has since moved to another image. You can safely remove them.
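A sketch of finding the untagged digests from the command line (the repository path is a placeholder; the `--filter` expression is the author's assumption of the matching syntax, so verify before bulk-deleting):

```shell
#!/bin/sh
# Placeholder repository path; use your own project/region/repo/image.
REPO="us-central1-docker.pkg.dev/ci-builder/builders/cypress-custom"

# List only digests that carry no tag (uncomment to run against GCP):
#   gcloud artifacts docker images list "$REPO" \
#     --include-tags --filter='-tags:*' --format='value(DIGEST)'
#
# ...then delete one of them:
#   gcloud artifacts docker images delete "${REPO}@sha256:<digest>" --quiet

echo "$REPO"
```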
Why `us-central1`?
It's good to have the image in the same region where your Cloud Build (CI) runs.
Note that this has no connection to where you deploy your application backend, nor implications for GDPR and other privacy aspects. The CI runs simply compile and test the sources from your GitHub repo. The CI jobs never deal with your users, or their data.
Storing Docker images in Artifact Registry has a cost. The free tier provides 0.5 GB of free storage (Oct 2022; source: Artifact Registry pricing). The Firebase Emulators image (186 MB "virtual size") fits this budget, but the Cypress image (550 MB) breaks the free tier. You'll be charged $0.10 per month. Hope this is acceptable...

Note: What you get for this money is a 1..2 min reduction for each front-end CI test run.
The `cloudbuild.*.deploy.yaml` scripts are run under your deployment GCP project, not the builder. They reference the builder image as such:

```
substitutions:
  _1: us-central1-docker.pkg.dev/ci-builder/builders/firebase-emulators:11.13.0
```
Replace `ci-builder/builders` with the name of the builder project you created and the repository you use.
The `ci-builder` project belongs to the author and doesn't provide public pull access. We need to eventually do something about this (the intention is not that you should need to edit anything in the repo to use it on your projects).
Next, let's introduce GitHub and Cloud Build to each other.
You need to enable quite a few things within GCP to get things rolling.
Note: These changes can also be done from the command line (using `gcloud`), if you need to repeat them.
Some steps are needed for the build project ("CI builder", above), some for the deployment projects ("staging project", above), some for both.
Note to future: It would be fair to have a script / `gcloud` job that does these things.
- GCP Console > `≡` > `Artifact Registry`
- Press `Enable`
- Enable billing
- `+ CREATE REPOSITORY`
Screenshot

![](.images/ar-create.png)

| | |
|---|---|
| Name | `builders` [1] |
| Format | Docker |
| Location type | Region |
| Region | `us-central1` |
| Description | free text |
- Push `CREATE`. Now, Docker images used by the CI/CD can be stored in this central location.
Note [1]: You can name the `builders` repository differently, but then you need to change the name where it's referenced.
- GCP Console > `≡` > `APIs & Services`
- `+ Enable APIs and Services`
- `Identity and Access Management (IAM) API` > `Enable`
While here, also check that the following are enabled:
- Firebase Management API
- Firebase Hosting API
- Cloud Resource Manager API
Hint: Pick up the `Service account email`. You'll need it shortly.

Add "API Keys Admin" role to the Cloud Build service account

Note: Deploying to Firebase mentions this, but the community Firebase builder `README` doesn't. Things might work without it, too?

- Google Cloud console > `IAM & Admin`
- Spot the `@cloudbuild.gserviceaccount.com` account on the list > `✎` (edit)
- Add the `API Keys Admin` role:
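The same grant could likely be done from the command line; a sketch (the project id and service-account number are placeholders, and `roles/serviceusage.apiKeysAdmin` is the author's best guess at the role id behind the "API Keys Admin" console label - verify in your IAM console):

```shell
#!/bin/sh
# Placeholders; use your own project id and Cloud Build SA number.
PROJECT_ID="my-deploy-project"
SA="123456789@cloudbuild.gserviceaccount.com"

# Uncomment to grant the role from the command line:
#   gcloud projects add-iam-policy-binding "$PROJECT_ID" \
#     --member="serviceAccount:${SA}" \
#     --role="roles/serviceusage.apiKeysAdmin"

echo "${PROJECT_ID}"
```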
Enable `secretmanager.versions.get` for the Cloud Build service account

This is needed if your Cloud Functions use `secrets:`. Do as above, enabling `Secret Manager Viewer`.

Enable `cloudscheduler.jobs.update` for the Cloud Build service account

Do as above, enabling `Cloud Scheduler Admin`.

Note: Technical permissions (e.g. `cloudscheduler.jobs.update`) don't match 1-to-1 with Console UI roles. The latter are umbrellas that may cover multiple technical permissions.

Enable access to CI Builder's Artifact Registry
Each deployment project needs to be able to read the builder image. This means granting it the `roles/artifactregistry.reader` IAM role.

- For the deployment project, pick up its "service account email":
  - Google Cloud Console > (deployment project) > Cloud Build > `Settings`
  - Pick up the Service account email, like `123...987@cloudbuild.gserviceaccount.com`
- GCP Console > (builder project) > `IAM & Admin` > `+👤 GRANT ACCESS`
- Push `SAVE`.

Your deployment project's Cloud Build runs should now be able to pull the builder images.
Enable access to CI Builder's Cloud Storage

We also grant the `storage.objects.list` role so that the deployment project can see whether tests have successfully passed for a given commit.

- GCP Console > (builder project) > `IAM & Admin`
- Pick the principal created for the deployment service account (it has `Artifact Registry Reader` access)
- Edit > `+ ADD ANOTHER ROLE` > `Storage Object Viewer`
- Push `SAVE`.

Without this role, your deployment scripts will not be able to see whether tests have passed for a given git SHA.
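For reference, both grants on the builder project (Artifact Registry reader and Storage object viewer) could presumably be scripted like this (project id and service-account email are placeholders):

```shell
#!/bin/sh
# Placeholders; use your builder project id and the deployment project's
# Cloud Build service-account email.
BUILDER_PROJECT="my-ci-builder"
DEPLOY_SA="123456789@cloudbuild.gserviceaccount.com"

# Uncomment to run the two grants from the command line:
#   gcloud projects add-iam-policy-binding "$BUILDER_PROJECT" \
#     --member="serviceAccount:${DEPLOY_SA}" \
#     --role="roles/artifactregistry.reader"
#
#   gcloud projects add-iam-policy-binding "$BUILDER_PROJECT" \
#     --member="serviceAccount:${DEPLOY_SA}" \
#     --role="roles/storage.objectViewer"

echo "${BUILDER_PROJECT}"
```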
To bridge GitHub with Cloud Build, let's enable the "Cloud Build GitHub App". This is an integration Google has prepared that lets Cloud Build get triggered when something (a push or merge) happens in the GitHub repo.
- GitHub Marketplace > Apps > Google Cloud Build > `Enable`
- Add your GitHub repo to the Cloud Build app (this covers all GCP projects where Cloud Build is enabled)
Note: The UI uses the term "purchase", but installing the application is completely free (Jun 2021). The costs - if any - are based on your agreements with GitHub and Cloud Build.
Finally, we can create the triggers we want to run in CI.
GCP Console > (project) > `Cloud Build` > `Triggers` > `CREATE TRIGGER`
Note: These settings are not in version control. The workflow relies on you having set them up appropriately. The suggested initial settings below will get you started.
For the GCP project responsible for running the tests.
`master-pr-backend`

| | |
|---|---|
| Description | PR that affects `packages/backend` |
| Event | (●) Pull Request (GitHub App only) |
| Source Repository | Pick one. On the first visit for this GCP project, you will need to connect to the GitHub App. Note for Safari users: see below how to enable popups. |
| Base branch | `^master$` |
| Comment control | (●) Required except for owners and collaborators |
| Included files filter (glob) | `packages/backend/**`, `package.json` |
| Ignored files filter (glob) | `*.md`, `.images/*` |
| Configuration Type | (●) Cloud Build configuration file (yaml or json) |
| Location | (●) Repository: `ci/cloudbuild.backend.yaml` |
It makes sense to keep the name of the CI entry and the respective `yaml` file the same (but the name cannot contain a `.`).

Enabling popups for Safari browser

- `Preferences` > `Websites` > `Pop-up Windows` (lowest in the left pane)
- `console.cloud.google.com`: `Allow`
Screenshot of the actual form:
`master-pr-app`

| | |
|---|---|
| Description | PR that affects `packages/app` |
| Event | (●) Pull Request |
| Source Repository | pick |
| Base branch | `^master$` |
| Comment control | (●) Required except for owners and collaborators |
| Included files filter (glob) | `packages/app/**`, `package.json` |
| Ignored files filter (glob) | `*.md`, `.images/*` |
| Configuration Type | (●) Cloud Build configuration file (yaml or json) |
| Location | (●) Repository: `ci/cloudbuild.app.yaml` |
Hint: The easiest way to create the secondary triggers is `⋮` > `Duplicate`.

These two CI steps now allow seeing the 🟢🟠🔴 status of pull requests that target `master`.

Test it! Make a Pull Request in GitHub. You should see these (under `Checks`):
):Create these triggers in the deployment project.
`backend-deploy`

| | |
|---|---|
| Description | Merge to `master` (affects backend) |
| Event | (●) Push to a branch |
| Source Repository | pick |
| Base branch | `^master$` |
| Included files filter (glob) | `packages/backend/**`, `package.json` |
| Ignored files filter (glob) | `*.md`, `.images/*` |
| Configuration Type | (●) Cloud Build configuration file (yaml or json) |
| Location | (●) Repository: `ci/cloudbuild.backend.deploy.yaml` |

This takes care of deploying the backend.
For the front-end, create a similar trigger:

`app-deploy`

| | |
|---|---|
| Description | Merge to `master` (affects app) |
| Event | (●) Push to a branch |
| Source Repository | pick |
| Base branch | `^master$` |
| Included files filter (glob) | `packages/app/**`, `package.json` |
| Ignored files filter (glob) | `*.md`, `.images/*` |
| Configuration Type | (●) Cloud Build configuration file (yaml or json) |
| Location | (●) Repository: `ci/cloudbuild.app.deploy.yaml` |

With these two jobs in place, your deployments will track the contents of the `master` branch. To make multiple deployments, just dedicate a certain branch to each, create a Firebase project for it, and add these steps.
The commands below pack your sources, send them to Cloud Build and let you see the build logs, in real time.

```
$ gcloud builds submit --config=cloudbuild.{app|backend}.yaml ..
$ gcloud builds submit --config=cloudbuild.{app|backend}.deploy.yaml ..
```

When using these, make sure you are logged into the correct GCP project.
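One way to avoid hitting the wrong project is to pin it explicitly with the `--project` flag, instead of relying on the active `gcloud` configuration (project id and config name below are placeholders):

```shell
#!/bin/sh
# Placeholders; use your own project id and config file.
PROJECT="my-ci-builder"
CONFIG="cloudbuild.backend.yaml"

# Uncomment to submit against a pinned project, regardless of the
# currently active gcloud project:
#   gcloud builds submit --project="$PROJECT" --config="$CONFIG" ..

echo "${PROJECT}:${CONFIG}"
```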
The author finds the `gcloud builds` workflow great for developing one's CI scripts, since you don't need to commit the changes to version control! 🙂

It makes sense to optimize the "tarball" going out. Not shipping unnecessary files speeds up your debug cycles, and also saves storage space (Cloud Build keeps these around).
Unfortunately, Cloud Build is not quite capable of using `.gitignore` files in various subdirectories. This is why we've prepared a `../.gcloudignore` that tries to duplicate the logic in those files. #hack
```
$ gcloud meta list-files-for-upload ..
```
This set of files is controlled by the `.gcloudignore` in the project root.
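For reference, a root `.gcloudignore` often looks something like the sketch below. This is not the repo's actual file; note that `#!include:` is a documented gcloudignore directive that pulls in the root `.gitignore` (it does not cover subdirectory `.gitignore` files, which is the limitation discussed above):

```
# Sketch only - not this repo's actual .gcloudignore
.git
.gcloudignore
#!include:.gitignore

# plus entries duplicating the subdirectory .gitignore files, e.g.:
# packages/app/node_modules/
```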
- Cloud Build (GCP)
- Creating GitHub App triggers (Cloud Build docs)
- Deploying to Firebase (Cloud Build docs)
- Building and debugging locally (Cloud Build docs)
- Configuring access control (Artifact Registry docs)
- `gcloud builds submit --help`