Set of projects/Terraform scripts to provision the environment for a Node.js app. Uses Terraform Cloud to store state and to obtain temporary credentials based on the project's "license plate".
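As a rough sketch, the Terraform Cloud state configuration looks something like the block below; the organization and workspace names are placeholders, not this project's actual values.

```hcl
# Hypothetical remote-state configuration -- the organization and
# workspace names below are placeholders, not this project's values.
terraform {
  backend "remote" {
    organization = "example-org"

    workspaces {
      name = "fapi7b-dev"
    }
  }
}
```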
- Have access granted to the fapi7b project set on BCGOV AWS.
- Log in to https://login.nimbus.cloud.gov.bc.ca/
- Expand the project you will use (tools, dev, test, or prod), click "Copy credentials", and paste them into your terminal
- The above sets the temporary credentials needed to successfully run the terraform plan, apply, and destroy commands (the AWS provider picks them up from the environment; see the sketch after this list)
- download the project locally
- install terraform locally
- run 'terraform init'
- run 'terraform plan'
- run 'terraform apply -auto-approve'
- confirm that all AWS services are created as expected
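The pasted credentials are plain environment variables, so nothing needs to be hardcoded in the configuration. A minimal sketch of the provider setup (the region is an assumption, not taken from this repository):

```hcl
# The AWS provider reads AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY,
# and AWS_SESSION_TOKEN from the environment, so no static keys are
# needed here. The region below is an assumption.
provider "aws" {
  region = "ca-central-1"
}
```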
If you need to roll back, please note that Terraform will not destroy an S3 bucket if it contains any objects. If you need to destroy it anyway, empty the S3 bucket via the AWS console first, then run 'terraform destroy -auto-approve'.
Note: if you need to destroy S3 buckets, also remove lines 18-20 from modules/s3_bucket/main.tf.
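Those lines are presumably a deletion guard; assuming the standard Terraform pattern, they would look roughly like this inside the bucket resource:

```hcl
# Hypothetical guard assumed to be at lines 18-20 of
# modules/s3_bucket/main.tf; prevent_destroy makes Terraform abort
# any plan that would destroy the bucket.
lifecycle {
  prevent_destroy = true
}
```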
To add a new S3 bucket:
- !DO NOT MODIFY THE EXISTING VARIABLE `bucket_name`
- create a new variable in the `variable.tf` file, e.g.:

```hcl
variable "foobar_bucket_name" {
  default = "fapi7b-dev-ccbc-foobar"
}
```
- in `main.tf`, create a new module referencing the new variable:

```hcl
module "foobar_s3" {
  source      = "./modules/s3_bucket"
  vpc_id      = data.aws_vpc.selected.id
  bucket_name = var.foobar_bucket_name
}
```
- run 'terraform init'
- run 'terraform plan'
- confirm that the plan does not contain any destroy actions, only 5 resources to be added
- run 'terraform apply -auto-approve'
- confirm that the new S3 bucket is created as expected
Please note that the variables use `dev` as the target environment. If the same bucket needs to be created in another environment, please follow the next steps (see the example after this list):
- find `dev` and replace it with `test` in the `variable.tf` file
- find `dev` and replace it with `test` in the `main.tf` file (line 7 only)
- run 'terraform init -reconfigure'
- run 'terraform plan'
- confirm that the plan does not contain any destroy actions, only 5 resources to be added
- run 'terraform apply -auto-approve'
- confirm that the new S3 bucket is created as expected
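For instance, continuing with the hypothetical `foobar` variable from above, the retargeted default would become:

```hcl
# Same hypothetical variable as above, retargeted from dev to test.
variable "foobar_bucket_name" {
  default = "fapi7b-test-ccbc-foobar"
}
```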
The provided Terraform modules also provision the ClamAV virus scanner for files in the S3 bucket. The infrastructure includes a lambda, a lambda layer, a CloudWatch event to periodically update the virus definition database, and all necessary roles/permissions. The lambda layer archive is generated by the https://github.com/bcgov/CONN-ClamAV-scan project.
Uploading any file to the `fapi7b-XXX-ccbc-data` bucket triggers a virus scan that results in the tag `av-status` being set to `clean` or `dirty`. A notification about an infected file is posted to the `clamav-notification` SNS topic, so it is easy to subscribe to the topic and receive notifications (see the sketch below). The application that reads the file from S3 can then decide what to do with an infected file.
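As a rough sketch (the data-source lookup and the email endpoint below are assumptions, not taken from this repository), a subscription could look like:

```hcl
# Hypothetical subscription to the clamav-notification topic; the
# lookup and the email endpoint are assumptions -- adjust them to
# however the topic is referenced in scanner.tf.
data "aws_sns_topic" "clamav" {
  name = "clamav-notification"
}

resource "aws_sns_topic_subscription" "infected_file_alerts" {
  topic_arn = data.aws_sns_topic.clamav.arn
  protocol  = "email"
  endpoint  = "team-inbox@example.com"
}
```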
It is possible to set up additional permissions on the S3 bucket to prevent use of an infected file - just add the following lines to `scanner.tf`:
data "aws_caller_identity" "current" {}
// Add a policy to the bucket that prevents download of infected files
resource "aws_s3_bucket_policy" "buckets-to-scan" {
count = "${length(var.buckets-to-scan)}"
bucket = "${element(var.buckets-to-scan, count.index)}"
policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Deny",
"NotPrincipal": {
"AWS": [
"arn:aws:iam::${data.aws_caller_identity.current.account_id}:root",
"arn:aws:sts::${data.aws_caller_identity.current.account_id}:assumed-role/${aws_iam_role.bucket-antivirus-scan.name}/${aws_lambda_function.scan-file.function_name}",
"arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/${aws_iam_role.bucket-antivirus-scan.name}"
]
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::${element(var.buckets-to-scan, count.index)}/*",
"Condition": {
"StringNotEquals": {
"s3:ExistingObjectTag/av-status": "CLEAN"
}
}
}
]
}
POLICY
}
Virus definition files reside on S3 in the `fapi7b-XXX-ccbc-clamav` bucket and are updated once a day by a CloudWatch job that triggers the `update-clamav-definitions` lambda (see the sketch below).
The `scan-file` lambda uses the virus definition files from the `fapi7b-XXX-ccbc-clamav` S3 bucket.
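A minimal sketch of that daily trigger, assuming the lambda is defined elsewhere as `aws_lambda_function.update-clamav-definitions` (the resource names here are assumptions):

```hcl
# Hypothetical daily CloudWatch Events trigger for the
# definitions-update lambda; resource names are assumptions.
resource "aws_cloudwatch_event_rule" "update-definitions-daily" {
  name                = "update-clamav-definitions-daily"
  schedule_expression = "rate(1 day)"
}

resource "aws_cloudwatch_event_target" "update-definitions" {
  rule = aws_cloudwatch_event_rule.update-definitions-daily.name
  arn  = aws_lambda_function.update-clamav-definitions.arn
}

# CloudWatch Events must be allowed to invoke the lambda.
resource "aws_lambda_permission" "allow-cloudwatch" {
  statement_id  = "AllowExecutionFromCloudWatch"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.update-clamav-definitions.function_name
  principal     = "events.amazonaws.com"
  source_arn    = aws_cloudwatch_event_rule.update-definitions-daily.arn
}
```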
When an intake is closed, two cron jobs are executed:
- all applications submitted before the intake closes are marked as 'Received' (by the job in the database);
- all attachments submitted by the customer are archived into a single zip file and uploaded to S3 (by the lambda).
When attachments are added to the archive, an additional file, `errors.txt`, is generated.
If no errors happen, the file contains only the phrase `Download successful`.
If any error is detected during archiving, it is recorded in the file along with the error code and details about the uploaded file.
The error codes are:
- 400 Bad data, indicating that the file size mismatches between the database and S3, possibly due to data corruption;
- 409 Infected file, indicating that the virus scanner marked the file as infected (see https://learn.microsoft.com/en-us/openspecs/sharepoint_protocols/ms-wsshp/1c302d04-b76f-44e9-800d-c974250de84d);
- 500 Unexpected error, for any other errors received from the AWS SDK.