This repository builds a simple, static resume site that fetches the total count of site visitors from a database. It is hosted entirely on AWS and features:
- Resume template from Universal Resume
- Infrastructure as Code with Terraform
- DNS and SSL management with Cloudflare
- CI/CD with GitHub Actions
This project is inspired entirely by cloudresumechallenge and serves as a simple, hands-on learning exercise for working with cloud services and DevOps tools.
- Static site hosted on S3 bucket
- Request visitor count from API Gateway
- Lambda function that calls DynamoDB
- Infrastructure as Code
- Good Git hygiene and pull request discipline
- Full CI/CD deployment for any code changes
- SSL/TLS
- Unit tests for Lambda function
- Remote state in separate S3 bucket
- API Gateway CloudWatch Logging
The static site is hosted on an Amazon S3 bucket and served with Cloudflare DNS. It retrieves the visitor count from DynamoDB through an API served by API Gateway, which invokes a Lambda function that reads the database table. The count is returned in the API response and displayed on the site.
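As a rough sketch of what the Lambda side of this flow might look like (the table name, key schema, and attribute names below are illustrative assumptions, not necessarily what this repository uses):

```python
import json
import os

import boto3

# Assumed table/key names -- the real resources in this repo may differ.
TABLE_NAME = os.environ.get("TABLE_NAME", "visitor_count")

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(TABLE_NAME)


def lambda_handler(event, context):
    # Atomically increment the counter item and return the new value.
    response = table.update_item(
        Key={"id": "visitors"},
        UpdateExpression="ADD visit_count :inc",
        ExpressionAttributeValues={":inc": 1},
        ReturnValues="UPDATED_NEW",
    )
    count = int(response["Attributes"]["visit_count"])
    return {
        "statusCode": 200,
        "body": json.dumps({"count": count}),
    }
```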
All infrastructure is managed with Terraform, including the Cloudflare DNS records and the Python code in the Lambda function.
The CI/CD pipelines run conditionally based on the modified files, using `paths-filter`:
- Changes to `site/` synchronize the new static files to the S3 bucket.
- Changes to `api/` run the unit tests and update the Lambda function with Terraform (see the test sketch after this list). Ideally, this should only change one resource, `aws_lambda_function`.
- Changes to `*.tf` files update the Terraform resource states. These changes must be committed with a pull request; a plan output is produced and vetted before the PR can be merged to master.
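As an illustration, a unit test for such a handler could stub out the DynamoDB table entirely. This sketch assumes the handler module is named `handler` and exposes the module-level `table` shown above, which may not match the actual code in `api/`:

```python
import os

# Set a region so boto3 can build the DynamoDB resource at import time
# without real AWS credentials.
os.environ.setdefault("AWS_DEFAULT_REGION", "ap-southeast-1")

from unittest.mock import MagicMock, patch

import handler  # hypothetical module name for the Lambda code in api/


def test_handler_returns_incremented_count():
    # Stub the DynamoDB table so no real AWS call is made.
    fake_table = MagicMock()
    fake_table.update_item.return_value = {"Attributes": {"visit_count": 42}}

    with patch.object(handler, "table", fake_table):
        response = handler.lambda_handler({}, None)

    assert response["statusCode"] == 200
    assert '"count": 42' in response["body"]
```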
Prerequisites:
- AWS account (with free tier if possible)
- Terraform v1.2.0
- Python 3.9
- Docker (for the npm Docker image) or a local npm installation
To build the site locally:
$ make install
$ make build
$ make serve
To provision infrastructure, populate `auto.tfvars` with the relevant variables and run:
$ terraform init
$ make tplan
$ make tapply
This provisions:
- S3 bucket
- Cloudflare CNAME DNS record
- Lambda function
- DynamoDB table and items
- API Gateway routes
Static files must be added to the S3 bucket manually, or via the CI job on push to `master`:
$ aws s3 sync ./site/docs/ s3://[bucket_name]
To destroy the infrastructure at the end of the day:
$ make tdestroy
The following secrets must be added for the CI/CD workflow to run successfully:
- `AWS_ACCESS_KEY_ID`
- `AWS_SECRET_ACCESS_KEY`
- `CLOUDFLARE_TOKEN`
- `AWS_S3_BUCKET`

`AWS_REGION` is hard-coded to `ap-southeast-1`. Please change this if you are in a different region.
To enable CORS support for Lambda proxy integrations in API Gateway, the documentation (and other sources) state that the appropriate CORS headers must be included manually in the Lambda function's response, rather than being configured directly in API Gateway.
However, I was not able to get this to work: the browser still throws a "No 'Access-Control-Allow-Origin' header is present" error. As such, the CORS headers are configured directly in API Gateway in the `cors_configuration` block. I hope to understand and fix this in the future.
The documentation also mentions that API Gateway ignores CORS headers returned from the backend integration when CORS is configured directly on the API.
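For reference, the header-in-response approach described in the documentation would look roughly like this (a sketch only; the allowed origin and headers here are placeholders, and this is not what the repository currently does):

```python
import json


def lambda_handler(event, context):
    # With a Lambda proxy integration, CORS headers are expected to be
    # returned by the function itself rather than added by API Gateway.
    return {
        "statusCode": 200,
        "headers": {
            "Access-Control-Allow-Origin": "*",  # or the site's own domain
            "Access-Control-Allow-Methods": "GET,OPTIONS",
            "Access-Control-Allow-Headers": "Content-Type",
        },
        "body": json.dumps({"count": 0}),
    }
```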
Currently, the API endpoint URL is passed to the front-end by uploading a JSON file, `api_url.json`, to the S3 bucket. This file is then referenced in `index.js` to update the visitor counter. There is probably a better way to pass the endpoint URL to the site's front-end.
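For illustration, the upload step could be scripted along these lines (a sketch that assumes the endpoint URL is exposed as a Terraform output named `api_url`; neither the output name nor such a script is confirmed to exist in this repository):

```python
import json
import subprocess

import boto3


def upload_api_url(bucket_name: str) -> None:
    # Read the (assumed) Terraform output and publish it as api_url.json
    # next to the static files so index.js can fetch it.
    api_url = subprocess.run(
        ["terraform", "output", "-raw", "api_url"],
        capture_output=True,
        text=True,
        check=True,
    ).stdout.strip()

    boto3.client("s3").put_object(
        Bucket=bucket_name,
        Key="api_url.json",
        Body=json.dumps({"api_url": api_url}),
        ContentType="application/json",
    )
```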