A template for maintaining a multi-environment infrastructure with Terraform. The template includes a CI/CD process that applies the infrastructure to an AWS account.
| environment | drone.io | GitHub Actions | CircleCI | Travis CI |
| ----------- | -------- | -------------- | -------- | --------- |
| dev         |          |                |          |           |
| stg         |          |                |          |           |
| prd         |          |                |          |           |
**Requirements**

- Branch names are aligned with environment names, for example `dev`, `stg` and `prd`
- The CI/CD tool supports the variable `${BRANCH_NAME}`, for example `${DRONE_BRANCH}` (see the sketch below)
- The directory `./live` contains the infrastructure-as-code files - `*.tf`, `*.tpl`, `*.json`
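A minimal sketch of how `${BRANCH_NAME}` can be derived in a CI step; `DRONE_BRANCH` and `GITHUB_REF` are built-in variables of drone.io and GitHub Actions, and the fallback chain below is an illustration rather than this template's exact script:

```bash
# Derive BRANCH_NAME from whichever CI variable is available:
#   DRONE_BRANCH - set by drone.io
#   GITHUB_REF   - set by GitHub Actions, e.g. refs/heads/dev
BRANCH_NAME="${DRONE_BRANCH:-${GITHUB_REF##*/}}"
echo "Deploying environment: ${BRANCH_NAME}"
```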
**Multiple Environments**

- All environments are maintained in the same git repository
- Hosting environments in different AWS accounts is supported (and recommended)
**Variables**

- `${app_name}` = `tfmultienv`
- `${environment}` = `dev`, `stg` or `prd`
We're going to create the following resources per environment:

- AWS VPC, Subnets, Routes and Routing Tables, Internet Gateway
- S3 bucket (website) and an S3 object (index.html)
- Terraform remote backend - S3 bucket and DynamoDB table
- Create a new GitHub repository by clicking **Use this template**, and don't tick **Include all branches** (or use the `gh` sketch below)
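  If you prefer the GitHub CLI over the web UI, a rough equivalent is sketched below; the repository name and the `<owner>/<template-repo>` path are placeholders:

  ```bash
  # Create a private repository from the template (placeholder names);
  # by default only the default branch is copied, which matches
  # leaving "Include all branches" unticked
  gh repo create my-terraform-multienv \
    --template <owner>/<template-repo> \
    --private
  ```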
- AWS Console > Create IAM Users for the CI/CD service, one per environment (an AWS CLI sketch follows this step)
  - Name: `${app_name}-${environment}-cicd`
  - Permissions: allow **Programmatic Access** and attach the IAM policy `AdministratorAccess` (see Recommendations)
  - Create AWS Access Keys and save them in a safe place; we'll use them in the next step
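  The same step with the AWS CLI, sketched for the `dev` environment and the example app name `tfmultienv` (repeat per environment):

  ```bash
  # Create the CI/CD user for dev and attach AdministratorAccess (see Recommendations)
  aws iam create-user --user-name tfmultienv-dev-cicd
  aws iam attach-user-policy \
    --user-name tfmultienv-dev-cicd \
    --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
  # Prints AccessKeyId and SecretAccessKey - save them in a safe place
  aws iam create-access-key --user-name tfmultienv-dev-cicd
  ```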
- GitHub > Create the following repository secrets for basic application details
  - `APP_NAME` - the application name, such as `tfmultienv`
  - `AWS_REGION` - the region to deploy the application in, such as `eu-west-1` (Ireland)
- GitHub > Create the following repository secrets for authenticating with AWS, according to the access keys that were created in the previous steps (see the sketch below)
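  A sketch of setting the secrets with the GitHub CLI; the AWS secret names below are illustrative assumptions - use the exact names referenced by this template's workflow files:

  ```bash
  # Basic application details
  gh secret set APP_NAME   --body "tfmultienv"
  gh secret set AWS_REGION --body "eu-west-1"
  # AWS credentials from the previous step (illustrative names - check the workflows)
  gh secret set AWS_ACCESS_KEY_ID     --body "AKIA..."
  gh secret set AWS_SECRET_ACCESS_KEY --body "..."
  ```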
- Deploying the infrastructure - commit and push changes to your repository

  ```bash
  git checkout dev
  git add .
  git commit -m "deploy dev"
  git push --set-upstream origin dev
  ```
- Results
  - Newly created resources in the AWS Console - VPC, S3 and DynamoDB table
  - CI/CD logs in the Actions tab (this repo's logs)
  - The URL of the deployed static S3 website is available in the `terraform-apply` logs, for example: `s3_bucket_url = terraform-20200912173059419600000001.s3-website-***.amazonaws.com`
  - Replace `***` with the `AWS_REGION`, for example `eu-west-1`: `terraform-20200912175003424600000001.s3-website-eu-west-1.amazonaws.com`
- Create the `stg` branch

  ```bash
  git checkout dev
  git checkout -b stg
  git push --set-upstream origin stg
  ```
- GitHub > Promote the `dev` environment to `stg`
  - Create a PR from `dev` to `stg` (a `gh` sketch follows below)
  - The plan for `stg` is added as a comment by the terraform-plan pipeline
  - Merge the changes to `stg`, and check the terraform-apply pipeline in the Actions tab
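  The promotion PR can also be opened from the CLI; a sketch, assuming the GitHub CLI is installed and authenticated:

  ```bash
  # Open a PR that promotes dev to stg
  gh pr create --base stg --head dev \
    --title "Promote dev to stg" \
    --body "Promote the dev environment to stg"
  ```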
That's it, you've just deployed two identical environments! Go ahead and do the same with `prd`
**How to proceed from here**

- Make changes in `dev` - commit and push
- Promote `dev` to `stg` - create a PR
- Promote `stg` to `prd` - create a PR
- Revert changes in `dev` - reverting a commit (see the sketch below)
- Revert changes in `stg` and `prd` - reverting a PR
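  A minimal sketch of the revert flow; the SHAs are placeholders taken from `git log`, and reverting a merged PR needs `-m 1` to keep the target branch's parent:

  ```bash
  # Revert a regular commit in dev (creates a new revert commit for CI/CD to apply)
  git checkout dev
  git revert <commit-sha>
  git push

  # Revert a merged PR in stg or prd (merge commits need -m 1)
  git checkout stg
  git revert -m 1 <merge-commit-sha>
  git push
  ```

  GitHub's **Revert** button on a merged PR achieves the same by opening a revert PR.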
**Recommendations**

- **Naming Convention** should be consistent across your application and infrastructure. Avoid using `master` for `production`. A recommended set of names: `dev`, `tst` (qa), `stg` and `prd`. Shorter names are preferred, since some AWS resources' names have a character limit
- **Resource Names** should contain the environment name, for example `tfmultienv-natgateway-prd`
- **Terraform Remote Backend** costs are negligible (less than $5 per month)
- **Multiple AWS Accounts** for hosting different environments are recommended. The way I implement it - `dev` and `stg` in the same account, and `prd` in a different account
- Create a **Test Environment** to test new resources or breaking changes, such as migrating from MySQL to Postgres. The main goal is to avoid breaking the `dev` environment, which would block the development team
- `backend.tf.tpl` - Terraform Remote Backend settings per environment. The script `prepare-files-folders.sh` replaces `APP_NAME` with `TF_VARS_app_name` and `ENVIRONMENT` with `BRANCH_NAME` (see the sketch below)
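  A rough sketch of what that replacement can look like; the actual `prepare-files-folders.sh` may differ, and the `live/` paths below are assumptions:

  ```bash
  # Render backend.tf from its template by replacing the placeholders
  sed -e "s/APP_NAME/${TF_VARS_app_name}/g" \
      -e "s/ENVIRONMENT/${BRANCH_NAME}/g" \
      live/backend.tf.tpl > live/backend.tf
  ```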
- **Remote Backend** is deployed with a CloudFormation template to avoid the chicken-and-egg situation
- **Locked Terraform tfstate** occurs when a CI/CD process is already running for an environment. Stopping and restarting a deployment, or running multiple deployments to the same environment, will result in an error. This is the expected behavior - we don't want multiple entities (CI/CD or users) deploying to the same environment at the same time
- **Unlock Terraform tfstate** by deleting the items from the state-lock DynamoDB table (see the sketch below), for example
  - Table Name: `${app_name}-state-lock-${environment}`
  - Item Name: `${app_name}-state-${environment}/terraform.tfstate*`
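  A sketch of unlocking from the CLI, assuming the `dev` environment and the `tfmultienv` app name; the `LockID` key is how Terraform's S3 backend stores its lock items:

  ```bash
  # List the lock items for the dev environment
  aws dynamodb scan --table-name tfmultienv-state-lock-dev

  # Delete a lock item by its LockID
  aws dynamodb delete-item \
    --table-name tfmultienv-state-lock-dev \
    --key '{"LockID": {"S": "tfmultienv-state-dev/terraform.tfstate"}}'

  # Alternatively, run this from ./live with the lock ID from the error message
  terraform force-unlock <LOCK_ID>
  ```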
- **AdministratorAccess Permission** for CI/CD should be used only in the early dev stages. After running a few successful deployments, make sure you restrict the permissions per environment and follow the least-privilege best practice. Use CloudTrail to figure out which IAM policies the CI/CD user needs - trailscraper is a great tool for that
- **IAM Roles** for self-hosted CI/CD runners (nodes) are preferred over AWS access key/secret pairs
- **Default Branch** is `dev`, since this is the branch that is used the most
- **Branch Names** per environment make the whole CI/CD process simpler
- **Feature Branch** per environment complicates the whole process, since creating an environment per feature branch means creating a Terraform backend per feature branch
- **Modules** should be stored in a different repository
- **Infrastructure Repository** should be separated from the Frontend and Backend repositories. There's no need to re-deploy the infrastructure each time the application changes (loosely coupled)
**References**

- To get started with Terraform, watch this webinar - Getting started with Terraform in AWS
- Terraform Dynamic Subnets
- Terraform Best Practices - ozbillwang/terraform-best-practices
Created and maintained by Meir Gabay
This project is licensed under the MIT License - see the LICENSE file for details