- Heroku and Netlify are Platform-as-a-Service abstractions over AWS. AWS itself is very low level, and the dev experience suffers for it.
- But they don’t scale well because you give up control. Instacart started on Heroku and then had to move off. (See the Syntax podcast’s Flight Control episode.)
- There are also security concerns with shared infrastructure like Heroku’s.
- And it costs more than raw AWS: running on AWS directly can be ~80% cheaper than Heroku at scale.
- Install Next
  npx create-next-app@latest
- Set up linting:
  npx lintier
  - choose the non-Airbnb option
- Add fix-on-save to VS Code’s settings.json:
  "editor.codeActionsOnSave": { "source.fixAll": true },
- verify autoformat on save
- Commit
- fix linting errors
npm run lint:fix
- Commit
- Create a route with a second page at
  app/foobar/page.tsx
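A minimal page component for the new route might look like this (the markup is just placeholder content):

```tsx
// app/foobar/page.tsx
export default function FoobarPage() {
  return <h1>hello from the foobar page</h1>;
}
```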
- Add link to this page on home page:
<Link href="/foobar">go to foobar page!</Link>
- Next’s Link gives you client-side, SPA-style routing
- Commit
- Create github repo
git remote add origin https://github.com/josh-stillman/code-along.git
- Get url from HTTPS dropdown
- push to repo
- Build for production
npm run build
- Build output shows
○ (Static)
- Verify locally
- Inspect and show pre-rendered html
- Setup Static builds
- https://nextjs.org/docs/app/building-your-application/deploying/static-exports
```js
const nextConfig = {
  output: 'export',
  trailingSlash: true, // needed for s3 hosting
};

module.exports = nextConfig;
```
- Run
  npm run build
  and inspect the out/ folder
- Serve the static files with a package.json script:
  "serve-static": "npx serve ./out",
- Generous free tier for the 1st year
- Create account
- https://aws.amazon.com/
- click sign up button in nav bar
- Enter personal email and account name
- Verification code in email
- might be in spam folder
- Finish account sign up
- enter root password
- enter personal info
- enter credit card
- verify with SMS
- try the voice captcha option if chars are hard to read
- Choose basic plan
- Setup Billing Alerts
- Type “billing” in the search box
- Go to Billing Preferences
- Alert preferences
- AWS Free Tier alerts
- PDF invoices
- Budgets
- create budget
- zero spend budget
- create budget
- Setup Users
- Search for IAM in text bar
- Setup 2FA for Root
- Google authenticator
- Create account alias
- Go to dashboard
- create account alias
- The root user is only for billing and account-level tasks. After this setup it’s never used again.
- Create admin user
- Users
- Add User
- Username: Admin
- Provide access to AWS MGMT Console
- Create IAM user (for programmatic access)
- Custom Password
- Don’t use a temporary password
- Attach policies directly
- Choose “AdministratorAccess”
- Check out the JSON - can do everything.
- No tags (they’re used for tracking resources; not needed here)
- Log out of root, log in as Admin
- Add MFA again for the Admin user
  - use a different device name this time (use the IAM user name)
  - scan the QR code again
- S3 (Simple Storage Service)
  - Stores files
  - S3 accounts are made up of buckets; buckets contain “objects,” a.k.a. files
  - Can host web pages from buckets
  - Scalable, highly available, and durable
  - Key/value store with a flat hierarchy, though the UI can show you “directories” if you want
  - Free tier: 5 GB of storage
- Route 53
- Register Domain
- Do this ahead of time to let the registration propagate
- verify the email when it comes
- s3
- create bucket with same name as your site
- acls disabled
- turn on public access
- dangerous, but we’ll turn public access back off eventually
- we want people to access the website
- keep versioning off
- default encryption
- bucket policies
- read but not write
- use the policy generator:
  - Type of Policy: S3 Bucket Policy
  - Effect: Allow
  - Principal: *
  - Actions: GetObject
  - ARN: copy the s3 ARN from the prior page and append /*
  - click Add Statement
  - click Generate Policy and copy it:
{ "Version": "2012-10-17", "Id": "Policy1692219819119", "Statement": [ { "Sid": "Stmt1692219813668", "Effect": "Allow", "Principal": "*", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::jss.computer/*" } ] }
- upload the /out directory and all subdirectories
- Click index.html
- we’re on the internet!
https://s3.amazonaws.com/jss.computer/index.html
- go to Properties → static website hosting
- enable
- index.html as root
- 404.html as error
- Go to bucket website endpoint listed
- http://jss.computer.s3-website-us-east-1.amazonaws.com
- Root works
- Linking works
- 404 works
- Refreshing at any route works thanks to trailingSlash, which creates a directory with its own index.html for each route
- Things still to do:
  - CDN
  - better URL
  - no manual uploads (CI/CD)
- Install CLI
- Generate access token
- TODO: SSO?
- IAM → User → Security Credentials → Create Access Key
- Copy both the access key ID and the secret key. They can only be viewed once.
aws configure
- enter both keys
- us-east-1 for region
- json for output
aws s3 ls s3://jss.computer
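With the CLI configured, you can also upload a build from the terminal instead of clicking through the console (bucket name from earlier):

```sh
# Sync the static build to the bucket, deleting remote files
# that no longer exist locally
aws s3 sync ./out s3://jss.computer --delete
```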
- Setup main record
- Go to route 53 → Registered Domains
- hosted zone → click your zone → create record
- hit alias toggle
- alias to s3 website endpoint
- us-east-1
- auto fill endpoint
- create record
- go to your url.
- may need to wait a little bit
- Go to AWS Certificate Manager
- MAKE SURE you’re in us-east-1. Only certs issued there can be used with CloudFront.
- Request cert → request public cert
- Add both the main name and the wildcard under Fully Qualified Domain Name:
jss.computer
*.jss.computer
- Click request
- Click the blue banner saying further action is needed.
- Under domains, click “create record in route 53” to prove you own it.
- Need cloudfront to make it actually work.
Redirect www. to main site
- create another bucket
- don’t need public access for this one.
- go to bucket properties
  - static website hosting → enable
  - choose the “Redirect requests for an object” hosting type and point it at your main domain
- Go to route 53
- add record to your domain
- for www
- alias
- to www bucket
- Test out www.jss.computer
- it redirects!
- Go to CloudFront
- Click create distribution
- choose your main s3 bucket as the origin, then click the prompt to use the website endpoint
- Turn off WAF
- us and europe
- scroll down to the cache policies
- keep compression on
- redirect http to https
- Alternate domain names: enter both
  - jss.computer
  - www.jss.computer
- default root object
- index.html without a slash
- Select your cert. It MUST be in us-east-1.
- Takes time to deploy
- Go back to Route 53 and point the domain and www subdomain to the CloudFront distribution
- autofill may not work; copy the distribution domain name from the CloudFront UI
- At this point we have:
  - https and www working, along with routing
  - redirects from http to https are working too!
  - we can now delete the second (www redirect) bucket
- Seal off the bucket
  - edit the CloudFront origin to point at the s3 bucket itself (the REST endpoint) rather than the website endpoint
- turn on OAC
- Create control setting
- copy policy and go to bucket
- replace bucket policy and save
- turn off s3 public access
- save cloudfront policy
- double-check the website link from s3 and verify you get a 403
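A quick terminal check of the sealed-off bucket (using the website endpoint from earlier):

```sh
# Direct bucket access should now be rejected
curl -I http://jss.computer.s3-website-us-east-1.amazonaws.com
# expect: HTTP/1.1 403 Forbidden
```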
- To make 404s work now:
  - edit the policy to include the bucket name without the trailing /* as well as the ListBucket action:
{ "Version": "2008-10-17", "Id": "PolicyForCloudFrontPrivateContent", "Statement": [ { "Sid": "AllowCloudFrontServicePrincipal", "Effect": "Allow", "Principal": { "Service": "cloudfront.amazonaws.com" }, "Action": [ "s3:GetObject", "s3:ListBucket" ], "Resource": [ "arn:aws:s3:::jss.computer/*", "arn:aws:s3:::jss.computer" ], "Condition": { "StringEquals": { "AWS:SourceArn": "arn:aws:cloudfront::225934246878:distribution/E2NYXH5S9T80Y5" } } } ] }
- Finalizing with CloudFront
- turn off public access
- use policy above
- configure a custom error page in cloudfront to be /404.html. For this one you need the slash!
- Remove the trailingSlash config in Next, rebuild, and re-upload
- Everything now works except reloads from nested routes.
- Need a lambda edge function for these rewrites.
- Go to Lambda
- Create function
  - name it html-redirect
  - use the defaults and click create
- Click add trigger and select CloudFront
- Deploy to Lambda@Edge
  - choose origin request along with your distribution
- Write a function to append .html to routes without a file extension:
```js
'use strict';

export const handler = (event, context, callback) => {
  // Extract the request from the CloudFront event that is sent to Lambda@Edge
  var request = event.Records[0].cf.request;

  // Extract the URI from the request
  var olduri = request.uri;

  // Match any route after a slash without a file extension, and append .html
  if (olduri.match(/\/[^/.]+$/)) {
    const newUri = olduri + '.html';
    request.uri = newUri;
    console.log('Old URI: ' + olduri);
    console.log('New URI: ' + newUri);
  }

  // Return to CloudFront
  return callback(null, request);
};
```
- Need to add a role before adding the trigger
  - IAM → Roles → find the lambda role → Trust relationships
  - content:
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": [ "lambda.amazonaws.com", "edgelambda.amazonaws.com" ] }, "Action": "sts:AssumeRole" } ] }
- OMFG it works!
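To see the rewrite in action (using the demo route from earlier):

```sh
# Without the trailing-slash directories, /foobar only resolves because
# the edge function rewrites the origin request to /foobar.html
curl -I https://jss.computer/foobar
# expect a 200
```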
- Create an IAM user role for CI/CD
  - IAM → Create Policy
  - Choose CloudFront → CreateInvalidation → copy the distribution ARN
  - Choose S3 → ListBucket, PutObject, DeleteObject → copy the bucket ARN
  - Create the policy and name it:
{ "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": [ "s3:PutObject", "s3:ListBucket", "s3:DeleteObject", "cloudfront:CreateInvalidation" ], "Resource": [ "arn:aws:s3:::jss.computer", "arn:aws:cloudfront::225934246878:distribution/E2NYXH5S9T80Y5" ] } ] }
- Create user
- No aws console access
- attach policy you created
- Create access key
  - bypass the recommendation
  - keys are only shown once; keep them secret, keep them safe
- Annoyingly, we need to update the bucket policy as well to let the user in. This seems to be the only option at the moment if we want to block public access to the bucket but still let CloudFront in.
{ "Version": "2012-10-17", "Id": "Policy1692219819119", "Statement": [ { "Sid": "Stmt1692219813668", "Effect": "Allow", "Principal": { "Service": "cloudfront.amazonaws.com" }, "Action": [ "s3:GetObject", "s3:ListBucket" ], "Resource": [ "arn:aws:s3:::jss.computer/*", "arn:aws:s3:::jss.computer" ], "Condition": { "StringEquals": { "AWS:SourceArn": "arn:aws:cloudfront::225934246878:distribution/E2NYXH5S9T80Y5" } } }, { "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::225934246878:user/FE-CI-CD-Pipeline" }, "Action": [ "s3:PutObject", "s3:GetObject", "s3:ListBucket", "s3:DeleteObject" ], "Resource": [ "arn:aws:s3:::jss.computer/*", "arn:aws:s3:::jss.computer" ] } ] }
- Setup GitHub Action
  - Set up the workflow file (a full sketch follows this list)
- Add Secrets to Github
- Settings → Security → Secrets and variables → Actions
- AWS_ACCESS_KEY_ID
- AWS_SECRET_ACCESS_KEY
- BUCKET_NAME
aws s3 ls
(or console)
- DISTRIBUTION_ID
aws cloudfront list-distributions
(or console)
- push up a change and voila!
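These notes don’t include the frontend workflow file itself, so here’s a minimal sketch of what .github/workflows/main.yml could look like, assuming the four secrets above and the out/ static export (action versions and region are assumptions):

```yaml
name: Deploy frontend to S3 + CloudFront

on:
  push:
    branches:
      - main

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3

      - name: Build the static export
        run: |
          npm ci
          npm run build

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1

      - name: Sync build to the bucket
        run: aws s3 sync ./out s3://${{ secrets.BUCKET_NAME }} --delete

      - name: Invalidate the CloudFront cache
        run: >
          aws cloudfront create-invalidation
          --distribution-id ${{ secrets.DISTRIBUTION_ID }}
          --paths "/*"
```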
- Create a new Strapi project
  npx create-strapi-app@latest code-along-api --quickstart --typescript
- This creates a Strapi project with a SQLite db
- See the docs: Quick Start Guide - Strapi Developer Docs | Strapi Documentation
- Create a content type
  - NewsItem (set the singular and plural names)
  - Add Title (short text), Body (long text), and PublicationDate (date)
- Create a first NewsItem
- Change permissions
  - Settings → Roles → Public → News-item → click find and findOne → save
- curl http://localhost:1337/api/news-items
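The response follows Strapi v4’s standard REST envelope, something like this (values illustrative):

```json
{
  "data": [
    {
      "id": 1,
      "attributes": {
        "Title": "Hello world",
        "Body": "Our first news item",
        "PublicationDate": "2023-08-15",
        "createdAt": "2023-08-15T12:00:00.000Z",
        "updatedAt": "2023-08-15T12:00:00.000Z",
        "publishedAt": "2023-08-15T12:00:00.000Z"
      }
    }
  ],
  "meta": {
    "pagination": { "page": 1, "pageSize": 25, "pageCount": 1, "total": 1 }
  }
}
```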
- Dockerize for production
- Download Docker
- Copy this file to a Dockerfile in the root dir (no extension); a sketch follows the call-outs below
- Call-outs: package.json is copied and npm install runs on separate lines from copying the main app files.
  - This is for caching: if app files change but dependencies don’t, Docker reuses the cached dependency layers.
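The referenced file isn’t included in these notes; here’s a sketch of a Strapi Dockerfile following that caching pattern (base image and scripts are assumptions, not the exact file):

```dockerfile
FROM node:18-alpine

WORKDIR /opt/app

# Copy only the manifests first, so the npm install layer is cached
# when app code changes but dependencies don't
COPY package.json package-lock.json ./
RUN npm ci

# Now copy the rest of the app (filtered by .dockerignore)
COPY . .

ENV NODE_ENV=production

# Build the Strapi admin panel
RUN npm run build

EXPOSE 1337
CMD ["npm", "run", "start"]
```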
- add .dockerignore. Note that we’ll be copying over our local db
```
# Keeping our local DB in place for now
# .tmp/
.cache/
.git/
build/
node_modules/
.env
data/
```
- Test it locally
  docker build -t strapi-test .
  docker image ls
  - verify you see strapi-test, then run it:
  docker run --rm -p 1337:1337 --env-file .env strapi-test
- If you’re on an M1 Mac, you must build for the target architecture (e.g. docker build --platform linux/amd64), since we’ll run on x86 in Fargate
- Ensure you’re on a working version of Strapi: 4.12.6 doesn’t work in production, so use 4.12.1 instead. Test by logging into the admin dashboard at /login
- Search for ECR → Switch to US-East-1
- Click Create Repository
- Add a name, leave it private, leave all others disabled, click create.
- Push the image
- select your repository check box, then click View Push Commands
- Follow each step:
  - Login:
    aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 225934246878.dkr.ecr.us-east-1.amazonaws.com
  - Build (we already did this):
    docker build -t code-along .
  - Tag with the repository URI:
    docker tag code-along:latest 225934246878.dkr.ecr.us-east-1.amazonaws.com/code-along:latest
  - Push:
    docker push 225934246878.dkr.ecr.us-east-1.amazonaws.com/code-along:latest
- Click your repo and verify that the image is there
- Go to Lifecycle Policy and add a rule with imageCountMoreThan = 1, which caps storage at the latest image
- Create Cluster
- Keep Fargate selected (AWS manages the underlying compute for you)
- Default VPC
- Choose two subnets
- Don’t turn on monitoring for now.
- click create
- Go to Task Definitions on the side nav
- Create new Task Definition
- Name:
code-along-api-task-def
- Keep fargate and linux x86
- no task role for now
- slim down to smallest resources: 1 vCPU and 2GB
- 1 vCPU = 1 thread, needed for Node.
- Hard limit must be 2GB as well
- Copy image URI from ECR
- 225934246878.dkr.ecr.us-east-1.amazonaws.com/code-along:latest
- Using latest allows us to update.
- Add port 1337
- Don’t make read only.
- Add the env vars from the .env file
- Keep the logs on for now
- Create a Service (this defines how to launch tasks, i.e. containers, from the task definition you just created)
- select existing cluster
- keep default compute options
- Add a service name
  code-along-api-service
  and keep the defaults, including 1 task (1 container)
- Keep the default VPC and subnets for now
- Create a new security group
- Open up ports 1337 (strapi), 80 (http) and 443 (https)
- Choose Custom TCP and Source of Anywhere
- Keep the public IP on (we want this accessible from the internet for now)
- Click create
- Go to IP at 1337. Try curling, and it should be up.
- Let’s commit our DB for now.
- Update the .gitignore under Logs/DB:

```
############################
# Logs and databases
############################

# For present purposes we'll commit our DB
# .tmp
```
- Add an IAM user for backend CI/CD
- IAM → Users → Create user
- Attach Policies
- Search for AmazonEC2ContainerRegistryPowerUser
- Copy to JSON
- Create Policy
- Copy JSON
- Scope to Resource
- Get ARN from ECR → Repositories → Summary
- Copy in ARN
- The user must also have GetAuthorizationToken on *
- It needs PassRole and AssumeRole on the ECS task def role. Copy the ARN from the Roles tab in IAM.
- Final policy should look like this:
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ecr:GetAuthorizationToken", "ecr:BatchCheckLayerAvailability", "ecr:GetDownloadUrlForLayer", "ecr:GetRepositoryPolicy", "ecr:DescribeRepositories", "ecr:ListImages", "ecr:DescribeImages", "ecr:BatchGetImage", "ecr:GetLifecyclePolicy", "ecr:GetLifecyclePolicyPreview", "ecr:ListTagsForResource", "ecr:DescribeImageScanFindings", "ecr:InitiateLayerUpload", "ecr:UploadLayerPart", "ecr:CompleteLayerUpload", "ecr:PutImage" ], "Resource": [ "arn:aws:ecr:us-east-1:225934246878:repository/code-along" ] }, { "Effect": "Allow", "Action": [ "ecr:GetAuthorizationToken", "ecs:RegisterTaskDefinition", "ecs:ListTaskDefinitions", "ecs:DescribeTaskDefinition" ], "Resource": "*" }, { "Effect": "Allow", "Action": "iam:PassRole", "Resource": "arn:aws:iam::225934246878:role/ecsTaskExecutionRole" }, { "Effect": "Allow", "Action": "sts:AssumeRole", "Resource": "arn:aws:iam::225934246878:role/ecsTaskExecutionRolee" } ] }
- Create and name policy
- In create user tab, refresh policies and filter by customer managed
- attach BE-CI-CD policy
- Create User
- Click User and go to Security Credentials
- Create Token
- Skip warning
- Copy/Paste credentials somewhere safe (again, you won’t be able to view these again).
- Add the GitHub Action file
  - Starter here: https://docs.github.com/en/actions/deployment/deploying-to-your-cloud-provider/deploying-to-amazon-elastic-container-service
  - Create .github/workflows/main.yml and copy in the starter. The file must be named main.yml.
We’re going to use the latest tag, so we can just keep the region and repository envs
env: AWS_REGION: us-east-1 ECR_REPOSITORY: code-along
- Change the image tag env var to latest:

```yaml
- name: Build, tag, and push image to Amazon ECR
  id: build-image
  env:
    ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
    IMAGE_TAG: latest
```
- Delete the last two steps about the task definition. The last step should be
  - name: Build, tag, and push image to Amazon ECR
- Add secrets to the backend GitHub repo
  - Settings → Security → Secrets and variables → Actions → new
- AWS_ACCESS_KEY_ID
- AWS_SECRET_ACCESS_KEY
- Now push up and verify!
- Github action should succeed
- You should see the new image in ECR
- But it doesn’t redeploy to ECS!
- Secrets Manager → Create secrets
- Paste in everything from the strapi .env file except port and host
- Add name, select all default options
- Need to give your task def permission to access the secrets
  - https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data-tutorial.html
- Go to IAM → Roles → ecsTaskExecutionRole → Add Permissions → Inline policy
  - Secrets Manager → GetSecretValue
- Scope to the Arn of your secret, copied from secrets manager
- Should look like this:
{ "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": "secretsmanager:GetSecretValue", "Resource": "arn:aws:secretsmanager:us-east-1:225934246878:secret:prod/code-along-api-W3kDvu" } ] }
- Edit the task definition to use “valueFrom” and paste in the secrets ARN for each env var. You need to reference each key at the end of the ARN, plus two additional colons to reference the default version stage and version id. See https://docs.aws.amazon.com/AmazonECS/latest/developerguide/secrets-envvar-secrets-manager.html
{ "taskDefinitionArn": "arn:aws:ecs:us-east-1:225934246878:task-definition/code-along-api-task-def:3", "containerDefinitions": [ { "name": "code-along-api", "image": "225934246878.dkr.ecr.us-east-1.amazonaws.com/code-along:latest", "cpu": 0, "memory": 2048, "memoryReservation": 2048, "portMappings": [ { "name": "code-along-api-80-tcp", "containerPort": 80, "hostPort": 80, "protocol": "tcp", "appProtocol": "http" }, { "name": "code-along-api-1337-tcp", "containerPort": 1337, "hostPort": 1337, "protocol": "tcp", "appProtocol": "http" } ], "essential": true, "environment": [], "environmentFiles": [], "mountPoints": [], "volumesFrom": [], "secrets": [ { "name": "ADMIN_JWT_SECRET", "valueFrom": "arn:aws:secretsmanager:us-east-1:225934246878:secret:prod/code-along-api-W3kDvu:ADMIN_JWT_SECRET" }, { "name": "API_TOKEN_SALT", "valueFrom": "arn:aws:secretsmanager:us-east-1:225934246878:secret:prod/code-along-api-W3kDvu:API_TOKEN_SALT" }, { "name": "APP_KEYS", "valueFrom": "arn:aws:secretsmanager:us-east-1:225934246878:secret:prod/code-along-api-W3kDvu:APP_KEYS" }, { "name": "DATABASE_CLIENT", "valueFrom": "arn:aws:secretsmanager:us-east-1:225934246878:secret:prod/code-along-api-W3kDvu:DATABASE_CLIENT" }, { "name": "DATABASE_FILENAME", "valueFrom": "arn:aws:secretsmanager:us-east-1:225934246878:secret:prod/code-along-api-W3kDvu:DATABASE_FILENAME" }, { "name": "JWT_SECRET", "valueFrom": "arn:aws:secretsmanager:us-east-1:225934246878:secret:prod/code-along-api-W3kDvu:JWT_SECRET" }, { "name": "TRANSFER_TOKEN_SALT", "valueFrom": "arn:aws:secretsmanager:us-east-1:225934246878:secret:prod/code-along-api-W3kDvu:TRANSFER_TOKEN_SALT" } ], "ulimits": [], "logConfiguration": { "logDriver": "awslogs", "options": { "awslogs-create-group": "true", "awslogs-group": "/ecs/code-along-api-task-def", "awslogs-region": "us-east-1", "awslogs-stream-prefix": "ecs" }, "secretOptions": [] } } ], "family": "code-along-api-task-def", "executionRoleArn": "arn:aws:iam::225934246878:role/ecsTaskExecutionRole", "networkMode": "awsvpc", "revision": 3, "volumes": [], "status": "ACTIVE", "requiresAttributes": [ { "name": "com.amazonaws.ecs.capability.logging-driver.awslogs" }, { "name": "ecs.capability.execution-role-awslogs" }, { "name": "com.amazonaws.ecs.capability.ecr-auth" }, { "name": "com.amazonaws.ecs.capability.docker-remote-api.1.19" }, { "name": "ecs.capability.secrets.asm.environment-variables" }, { "name": "com.amazonaws.ecs.capability.docker-remote-api.1.21" }, { "name": "ecs.capability.execution-role-ecr-pull" }, { "name": "com.amazonaws.ecs.capability.docker-remote-api.1.18" }, { "name": "ecs.capability.task-eni" }, { "name": "com.amazonaws.ecs.capability.docker-remote-api.1.29" } ], "placementConstraints": [], "compatibilities": [ "EC2", "FARGATE" ], "requiresCompatibilities": [ "FARGATE" ], "cpu": "1024", "memory": "2048", "runtimePlatform": { "cpuArchitecture": "X86_64", "operatingSystemFamily": "LINUX" }, "registeredAt": "2023-08-31T20:14:04.114Z", "registeredBy": "arn:aws:iam::225934246878:user/Admin", "tags": [] }
- Go to your service, edit it, point it to the new task def revision, and force a redeploy.
- Wait for the task to redeploy and verify that it’s working.
- Services → Tasks → Network Bindings → 18.212.103.210:1337
- Copy the JSON of your task definition and save it to
.aws/task-definition.json
- Make sure there are no secret values in it!! This is why Secrets Manager was necessary.
- Update the GitHub action to render the new task def, tag the image with the commit sha, and push it to the service:
```yaml
# This workflow uses actions that are not certified by GitHub.
# They are provided by a third-party and are governed by
# separate terms of service, privacy policy, and support
# documentation.

# GitHub recommends pinning actions to a commit SHA.
# To get a newer version, you will need to update the SHA.
# You can also reference a tag or branch, but the action may change without warning.

name: Deploy to Amazon ECS

on:
  push:
    branches:
      - main
  workflow_dispatch:

# Allow only one concurrent deployment, skipping runs queued between the run in-progress and latest queued.
# However, do NOT cancel in-progress runs as we want to allow these production deployments to complete.
concurrency:
  group: "frontend"
  cancel-in-progress: false

env:
  AWS_REGION: us-east-1            # set this to your preferred AWS region, e.g. us-west-1
  ECR_REPOSITORY: code-along       # set this to your Amazon ECR repository name
  ECS_SERVICE: code-along-api-service-2   # set this to your Amazon ECS service name
  ECS_CLUSTER: code-along-api      # set this to your Amazon ECS cluster name
  ECS_TASK_DEFINITION: .aws/task-definition.json   # set this to the path to your Amazon ECS task definition file
  CONTAINER_NAME: code-along-api   # set this to the name of the container in the
                                   # containerDefinitions section of your task definition

jobs:
  deploy:
    name: Deploy
    runs-on: ubuntu-latest
    environment: production

    steps:
      - name: Checkout
        uses: actions/checkout@v3

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@0e613a0980cbf65ed5b322eb7a1e075d28913a83
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}

      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@62f4f872db3836360b72999f4b87f1ff13310f3a

      - name: Build, tag, and push image to Amazon ECR
        id: build-image
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          IMAGE_TAG: ${{ github.sha }}
        run: |
          # Build a docker container and
          # push it to ECR so that it can
          # be deployed to ECS.
          docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
          echo "image=$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG" >> $GITHUB_OUTPUT

      - name: Fill in the new image ID in the Amazon ECS task definition
        id: task-def
        uses: aws-actions/amazon-ecs-render-task-definition@c804dfbdd57f713b6c079302a4c01db7017a36fc
        with:
          task-definition: ${{ env.ECS_TASK_DEFINITION }}
          container-name: ${{ env.CONTAINER_NAME }}
          image: ${{ steps.build-image.outputs.image }}

      - name: Deploy Amazon ECS task definition
        uses: aws-actions/amazon-ecs-deploy-task-definition@df9643053eda01f169e64a0e60233aacca83799a
        with:
          task-definition: ${{ steps.task-def.outputs.task-definition }}
          service: ${{ env.ECS_SERVICE }}
          cluster: ${{ env.ECS_CLUSTER }}
          wait-for-service-stability: true
```
- et voila!
- Right now we’re on ephemeral IPs, with no DNS and no HTTPS. Let’s fix that.
- We need to create a load balancer in front of our ECS service and route traffic through that for DNS and SSL.
- Create new service
- select defaults for Environment
- defaults for deployment, but select the latest task def revision
- networking: default vpc and same subnets as you selected in your cluster (us-east-1 1a and 1b subnets for me)
- Select the existing security group that allows 1337 and 80.
- DO assign a public IP this time. Otherwise networking becomes very complex. See this for more: https://stackoverflow.com/questions/61265108/aws-ecs-fargate-resourceinitializationerror-unable-to-pull-secrets-or-registry
- You can still restrict public access to this IP, which we’ll do later.
- Add a load balancer in the LB section
- name is code-along-api-lb
- Port to balance is 1337
- Listener is HTTPS on 443
- Choose your ACM certificate
- Create new target group on HTTP (verify). We’ll use HTTP internally so we don’t need more certs, and use HTTPS for public traffic.
- name is code-along-api-tg
- healthcheck endpoint for strapi is
/_health
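You can sanity-check the endpoint against local Strapi first:

```sh
# Strapi's built-in health check responds with 204 No Content
curl -I http://localhost:1337/_health
```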
- Setup Route 53
  - Add a record
  - Add a subdomain of api.
  - Check alias
  - Route to Application and Classic LB
  - In us-east-1
  - Select the only one available. It starts with dualstack and includes the name you gave your LB.
  - Simple routing.
- Summing up what worked here:
- create a service with a load balancer. public IP must be on.
- service must have a SG allowing for traffic on 1337 and 80
- 1337 is the app; 80 is http (I think it’s necessary for the health check?)
- must create a separate security group for the load balancer, allowing public traffic on 443 for https.
- In the security group for the service, open those ports only to the SG for the load balancer.
- In route 53, route api.jss.computer to the load balancer.
- The key here is two separate security groups (and public IP for the service even though you won’t allow any public access to it).
- But we’re not done yet! We must configure the health check success codes.
  - EC2 → Target Groups → the TG from your service → health checks → success codes → 200-204
- Strapi will return a 204… and that doesn’t work by default.
- Verify that the public IP isn’t reachable directly. SUCCESS FINALLY!
- Update our github action to point to new service
- update our IAM user for BE pipeline to use name of new service.
- Now we need a client component that will pull live data from Strapi.
- We’ll use SWR, which is a convention in Next.
  npm i -E swr
- Create a components dir at src/components
- Create a NewsFeed.tsx component
  - include the 'use client'; directive at the top, telling Next how to render it
- Create a .env file at root with this line (pointing at local Strapi; the variable name is taken from the component below):
  NEXT_PUBLIC_API_URL=http://localhost:1337
  This is the default for all envs and will be committed into the repo. These are client-side variables and not secrets, but we’ll still use GH secrets to configure the deployed envs for greater flexibility and colocation.
- Add the component as seen here:
```tsx
'use client';

import useSWR from 'swr';
import { NewsItemsResponse } from '../types/api';
import styles from './NewsFeed.module.css';

const fetcher = (url: string) => fetch(url).then(res => res.json());

const params = new URLSearchParams();
params.set('sort', 'publishedAt:desc');

export function NewsFeed() {
  const { data, error, isLoading } = useSWR<NewsItemsResponse>(
    `${process.env.NEXT_PUBLIC_API_URL}/api/news-items?${params.toString()}`,
    fetcher
  );

  if (isLoading) return <div>loading...</div>;

  if (error || !data?.data) return <div>failed to load</div>;

  return (
    <div className={styles.newsItemList}>
      <h1>NewsFeed! 🗞️</h1>
      {data.data.map(({ attributes, id }) => (
        <div key={id} className={styles.newsItem}>
          <h2>{attributes.Title}</h2>
          <h3>
            <i>{attributes.Body}</i>
          </h3>
          <span>{new Date(attributes.publishedAt).toLocaleDateString()}</span>
        </div>
      ))}
    </div>
  );
}
```
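The NewsItemsResponse type imported above isn’t shown in these notes; a hand-rolled src/types/api.ts matching Strapi’s envelope might look like:

```ts
// src/types/api.ts: hypothetical types for Strapi v4's REST response shape

export interface NewsItemAttributes {
  Title: string;
  Body: string;
  PublicationDate: string;
  createdAt: string;
  updatedAt: string;
  publishedAt: string;
}

export interface NewsItem {
  id: number;
  attributes: NewsItemAttributes;
}

export interface NewsItemsResponse {
  data: NewsItem[];
  meta?: {
    pagination?: {
      page: number;
      pageSize: number;
      pageCount: number;
      total: number;
    };
  };
}
```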
- SWR is like React Query. It will make the calls to the API on the client.
- In your FE GitHub repo, add a secret for the API URL. It’s not a “secret” per se, but we’ll use it to configure multiple environments. Save it as API_URL and have it point to your deployed API: https://api.jss.computer
- Inject the API_URL into your build step by adding this to your GH action in main.yml:

```yaml
build-and-deploy:
  runs-on: ubuntu-latest
  env:
    NEXT_PUBLIC_API_URL: ${{ secrets.API_URL }}
  steps:
    - name: Checkout
      uses: actions/checkout@v3
```
- Make sure there are no build errors locally:
  npm run build
  Fix or suppress them before pushing.