Building a CI/CD Pipeline with Azure DevOps
- Introduction
- Prerequisites
- Project Dependencies
- Getting Started
- Installation & Configuration
- Automated Testing Output
This project uses Azure DevOps to build a CI/CD pipeline that creates disposable test environments and runs a variety of automated tests to ensure quality releases. It uses Terraform to deploy the infrastructure, Azure App Services to host the web application, and Azure Pipelines to provision, build, deploy, and test the project. The automated tests run on a self-hosted Linux virtual machine and consist of UI tests with Selenium, integration tests with Postman, and stress and endurance tests with JMeter. Additionally, an Azure Log Analytics workspace monitors the application and provides insight into its behavior.
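The provision-build-deploy-test flow described above could be outlined in `azure-pipelines.yaml` roughly as follows. This is a structural sketch only, with illustrative stage names, not the project's actual file; each stage would contain its own jobs and steps:

```yaml
trigger:
  - main

stages:
  - stage: Provision   # Terraform deploys the disposable test infrastructure
  - stage: Build       # Package the web application
  - stage: Deploy      # Deploy the package to Azure App Services
  - stage: Test        # Selenium, Postman, and JMeter suites on the self-hosted VM
```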
- Fork and clone this repository in your local environment
- Open the project on your favorite text editor or IDE
- Log into the Azure Portal
- Log into Azure DevOps
Log into your Azure account
az login
az account set --subscription="SUBSCRIPTION_ID"
Create a Service Principal
az ad sp create-for-rbac --name ensuring-quality-releases-sp --role="Contributor" --scopes="/subscriptions/SUBSCRIPTION_ID"
This command will output 5 values:
{
"appId": "00000000-0000-0000-0000-000000000000",
"displayName": "azure-cli-2017-06-05-10-41-15",
"name": "http://azure-cli-2017-06-05-10-41-15",
"password": "0000-0000-0000-0000-000000000000",
"tenant": "00000000-0000-0000-0000-000000000000"
}
Create a `.azure_envs.sh` file inside the project directory and copy the content of `.azure_envs.sh.template` into the newly created file. Change the parameters based on the output of the previous command. These values map to the `.azure_envs.sh` variables like so:
- `appId` is the `ARM_CLIENT_ID`
- `password` is the `ARM_CLIENT_SECRET`
- `tenant` is the `ARM_TENANT_ID`
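Based on the mapping above, the resulting `.azure_envs.sh` might look like the following sketch (placeholder values shown; the exact variable set comes from `.azure_envs.sh.template` and may differ):

```shell
#!/bin/bash
# Values taken from the `az ad sp create-for-rbac` output
export ARM_CLIENT_ID="00000000-0000-0000-0000-000000000000"       # appId
export ARM_CLIENT_SECRET="0000-0000-0000-0000-000000000000"       # password
export ARM_TENANT_ID="00000000-0000-0000-0000-000000000000"       # tenant
export ARM_SUBSCRIPTION_ID="SUBSCRIPTION_ID"
# Filled in later, after configuring the storage account backend
export ARM_ACCESS_KEY="access_key"
```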
To configure the storage account and state backend, run the bash script `config_storage_account.sh`, providing a resource group name and a desired location.
./terraform/config_storage_account.sh -g "RESOURCE_GROUP_NAME" -l "LOCATION"
This script will output 3 values:
storage_account_name: tstate$RANDOM
container_name: tstate
access_key: 0000-0000-0000-0000-000000000000
Replace the `RESOURCE_GROUP_NAME` and `storage_account_name` in the `terraform/environments/test/main.tf` file, and the `access_key` in the `.azure_envs.sh` script.
terraform {
backend "azurerm" {
resource_group_name = "RESOURCE_GROUP_NAME"
storage_account_name = "tstate$RANDOM"
container_name = "tstate"
key = "terraform.tfstate"
}
}
export ARM_ACCESS_KEY="access_key"
You will also need to replace these values in the `azure-pipelines.yaml` file.
backendAzureRmResourceGroupName: "RESOURCE_GROUP_NAME"
backendAzureRmStorageAccountName: 'tstate$RANDOM'
backendAzureRmContainerName: 'tstate'
backendAzureRmKey: 'terraform.tfstate'
To source these values in your local environment, run the following command:
source .azure_envs.sh
NOTE: The values set in `.azure_envs.sh` are required to run Terraform commands from your local environment. There is no need to source this file if Terraform runs in Azure Pipelines.
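As a convenience, a small pre-flight check can confirm that the required variables are actually exported before running Terraform locally. This helper is not part of the repo, just a hypothetical sketch; the variable names match those used above:

```shell
#!/usr/bin/env bash
# Hypothetical helper: verify the service principal variables from
# .azure_envs.sh are exported before running terraform locally.
check_required_vars() {
  local missing=0
  for var in ARM_CLIENT_ID ARM_CLIENT_SECRET ARM_TENANT_ID ARM_ACCESS_KEY; do
    if [ -z "${!var}" ]; then
      echo "Missing required variable: $var" >&2
      missing=1
    fi
  done
  return "$missing"
}
```

Running `check_required_vars` after `source .azure_envs.sh` returns non-zero and names any variable that is still unset.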
To generate a public/private key pair, run the following command (no need to provide a passphrase):
cd ~/.ssh/
ssh-keygen -t rsa -b 4096 -f az_eqr_id_rsa
Ensure that the keys were created:
ls -l | grep az_eqr_id_rsa
For additional information on how to create and use SSH keys, click on the links below:
Create a `terraform.tfvars` file inside the test directory and copy the content of `terraform.tfvars.template` into the newly created file. Change the values based on the outputs of the previous steps.
- The `subscription_id`, `client_id`, `client_secret`, and `tenant_id` can be found in the `.azure_envs.sh` file.
- Set your desired `location` and `resource_group` for the infrastructure.
- Ensure that the public key name `vm_public_key` is the same as the one created in step 2.1 of this guide.
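Putting the points above together, the `terraform.tfvars` might look like this sketch. The variable names follow the list above, and the location, resource group, and key file name are illustrative; check `terraform.tfvars.template` for the authoritative set:

```hcl
# Service principal credentials (same values as in .azure_envs.sh)
subscription_id = "00000000-0000-0000-0000-000000000000"
client_id       = "00000000-0000-0000-0000-000000000000"
client_secret   = "0000-0000-0000-0000-000000000000"
tenant_id       = "00000000-0000-0000-0000-000000000000"

# Infrastructure settings (example values)
location       = "eastus"
resource_group = "ensuring-quality-releases-rg"

# Must match the key pair created in step 2.1
vm_public_key = "az_eqr_id_rsa.pub"
```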
Run Terraform plan
cd terraform/environments/test
terraform init
terraform plan -out solution.plan
After running the plan you should be able to see all the resources that will be created.
Run Terraform apply to deploy the infrastructure.
terraform apply "solution.plan"
If everything runs correctly, you should be able to see the resources being created. You can also check the creation of the resources in the Azure Portal under:
Home > Resource groups > "RESOURCE_GROUP_NAME"
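Alternatively, the deployed resources can be verified from the CLI, assuming you are still logged in via `az login`:

```shell
# List everything Terraform created in the resource group
az resource list --resource-group "RESOURCE_GROUP_NAME" --output table
```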
- a) Go to Azure DevOps
- b) Click on `New Project`
- c) Give the project a name and a description, then click `Create`
- d) Create a new service connection: `Project settings > Service connections > New service connection`
- e) Inside the new service connection, select `Azure Resource Manager > Service principal (automatic)`
- f) Select a resource group, give the connection a name and a description, and click `Save`.
IMPORTANT: You will need to create two service connections:
- `service-connection-terraform` is created using the same resource group that you provided in step 1.2 of this guide.
- `service-connection-webapp` is created using the resource group that you provided in the `terraform.tfvars` file.
A detailed explanation on how to create a new Azure DevOps project and service connection can be found here.
- g) Make sure that the names of the service connections match the names provided in the azure-pipelines.yaml file.
serviceConnectionTerraform: 'service-connection-terraform'
serviceConnectionWebApp: 'service-connection-webapp'
- h) Make sure that the `webAppName` matches the name provided in the `terraform.tfvars` file.
- a) Create a new environment in Azure Pipelines. From inside your project in Azure DevOps, go to: `Pipelines > Environments > New environment`
- b) Give the environment a name, e.g. `test`, then select `Virtual machines > Next`.
- c) From the dropdown, select `Linux` and copy the `Registration script`.
- d) From a local terminal, connect to the virtual machine. Use the SSH key created in step 2.1 of this guide. The public IP can be found in the Azure Portal under `Home > Resource groups > "RESOURCE_GROUP_NAME" > "Virtual machine"`:
ssh -o "IdentitiesOnly=yes" -i ~/.ssh/az_eqr_id_rsa marco@PublicIP
- e) Once you are logged into the VM, paste the `Registration script` and run it.
- f) (optional) Add a tag when prompted.
- a) Deploy a new Log Analytics workspace. Run the `deploy_log_analytics_workspace.sh` script. Make sure to set a resource group and provide a workspace name when prompted, e.g. `ensuring-quality-releases-log`.
cd analytics
./deploy_log_analytics_workspace.sh
- b) From a local terminal connect to the Virtual Machine as described above.
- c) Once you are logged into the VM run the following commands:
wget https://raw.githubusercontent.com/Microsoft/OMS-Agent-for-Linux/master/installer/scripts/onboard_agent.sh
sh onboard_agent.sh -w ${AZURE_LOG_ANALYTICS_ID} -s ${AZURE_LOG_ANALYTICS_PRIMARY_KEY}
IMPORTANT: The `AZURE_LOG_ANALYTICS_ID` and `AZURE_LOG_ANALYTICS_PRIMARY_KEY` can be found in the Azure Portal under:
Home > Resource groups > "RESOURCE_GROUP_NAME" > "Log Analytics workspace" > Agents management
There you will also find the command to `Download and onboard agent for Linux`.
For more information on how to create and install Log Analytics agents, click the links below:
- a) Add a secure file to Azure Pipelines. From inside your project in Azure DevOps go to:
Pipelines > Library > Secure files > + Secure file
- b) Add the public SSH key and the `terraform.tfvars` files to the secure files library.
- c) Give the pipeline permissions to use the file:
"SECURE_FILE_NAME" > Authorize for use in all pipelines
- a) From inside your project in Azure DevOps, go to: `Pipelines > Pipelines > Create new pipeline`
- b) Select your project from GitHub
- c) Select `Existing Azure Pipelines YAML file`
- d) Select the `main` branch and select the path to the azure-pipelines.yaml file.
- e) Select `Continue` and then `Run pipeline`
If everything goes well, you should be able to see the pipeline running through the different stages. See images below.
- a) From the Azure Portal, go to: `Home > Resource groups > "RESOURCE_GROUP_NAME" > "App Service Name" > Monitoring > Alerts`
- b) Click on `New alert rule`
- c) Double-check that you have the correct resource to create the alert for.
- d) Under `Condition`, click `Add condition`
- e) Choose a condition, e.g. `Http 404`
- f) Set the `Threshold value` to e.g. `1`. (You will get alerted after two consecutive HTTP 404 errors.)
- g) Click `Done`
- a) On the same page, go to the `Actions` section, click `Add action groups`, and then `Create action group`
- b) Give the action group a name, e.g. `http404`
- c) Add an action name, e.g. `HTTP 404`, and choose `Email` as the action type.
- d) Provide your email and then click `OK`
- a) On the same page, go to the `Alert rule details` section and add an `Alert rule name`, e.g. `HTTP 404 greater than 1`
- b) Provide a description and select a `Severity`.
- c) Click `Create alert rule`
These are all excellent official documentation examples from Microsoft that explain key components of CI/CD on Azure: