Hey @NTSER, this is a great idea! I work on AWS cloud infrastructure and have deployed FastAPI apps in serverless setups. Here's a practical architecture using Terraform on AWS that maps well to this template's structure.

### Architecture overview

Here's how each component of this template maps:

### Key changes needed in the FastAPI app

The only code change required is adding Mangum as the Lambda handler:

```python
# backend/app/main.py (add at the bottom)
from mangum import Mangum

handler = Mangum(app)
```

Mangum translates API Gateway events into ASGI requests that FastAPI understands. The rest of the app code stays exactly the same. You'd also add `"mangum>=0.19.0,<1.0.0"` to the backend's dependencies.

### Terraform module structure

### Key Terraform resources (simplified)

#### Lambda (backend)

```hcl
resource "aws_lambda_function" "backend" {
  function_name = "${var.project_name}-api"
  package_type  = "Image"
  image_uri     = "${aws_ecr_repository.backend.repository_url}:latest"
  timeout       = 30
  memory_size   = 512

  environment {
    variables = {
      SQLALCHEMY_DATABASE_URI = "postgresql+psycopg://${var.db_user}:${var.db_password}@${aws_rds_cluster.main.endpoint}/${var.db_name}"
      SECRET_KEY              = var.secret_key
      ENVIRONMENT             = "production"
      FRONTEND_HOST           = "https://${var.domain_name}"
    }
  }

  vpc_config {
    subnet_ids         = var.private_subnet_ids
    security_group_ids = [aws_security_group.lambda.id]
  }
}
```

#### Using Aurora Serverless v2 (database)

```hcl
resource "aws_rds_cluster" "main" {
  cluster_identifier = "${var.project_name}-db"
  engine             = "aurora-postgresql"
  engine_mode        = "provisioned"
  engine_version     = "16.4"
  database_name      = var.db_name
  master_username    = var.db_user
  master_password    = var.db_password

  serverlessv2_scaling_configuration {
    min_capacity = 0.5
    max_capacity = 4
  }

  vpc_security_group_ids = [aws_security_group.db.id]
  db_subnet_group_name   = aws_db_subnet_group.main.name
}
```
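The security groups referenced above (`aws_security_group.lambda` and `aws_security_group.db`) aren't shown in the snippets. Here's a minimal sketch of how they could look; the resource names match the references above, but everything else (including `var.vpc_id`) is my assumption:

```hcl
resource "aws_security_group" "lambda" {
  name   = "${var.project_name}-lambda"
  vpc_id = var.vpc_id

  # Lambda needs outbound access to reach the database (and any external APIs)
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_security_group" "db" {
  name   = "${var.project_name}-db"
  vpc_id = var.vpc_id

  # Allow Postgres traffic only from the Lambda's security group
  ingress {
    from_port       = 5432
    to_port         = 5432
    protocol        = "tcp"
    security_groups = [aws_security_group.lambda.id]
  }
}
```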
```hcl
resource "aws_rds_cluster_instance" "main" {
  cluster_identifier = aws_rds_cluster.main.id
  instance_class     = "db.serverless"
  engine             = aws_rds_cluster.main.engine
}
```

Aurora Serverless v2 scales down to 0.5 ACU when idle, which keeps costs very low for dev/staging while still handling production traffic.

#### API Gateway

```hcl
resource "aws_apigatewayv2_api" "main" {
  name          = "${var.project_name}-api"
  protocol_type = "HTTP"
}
```
```hcl
resource "aws_apigatewayv2_integration" "lambda" {
  api_id                 = aws_apigatewayv2_api.main.id
  integration_type       = "AWS_PROXY"
  integration_uri        = aws_lambda_function.backend.invoke_arn
  payload_format_version = "2.0"
}
```
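Two resources the snippets here don't show, but that the HTTP API needs in order to actually serve traffic: a `$default` stage with auto-deploy, and permission for API Gateway to invoke the Lambda. A sketch (resource names are my own):

```hcl
# Without a stage, the HTTP API accepts no requests
resource "aws_apigatewayv2_stage" "default" {
  api_id      = aws_apigatewayv2_api.main.id
  name        = "$default"
  auto_deploy = true
}

# Allow API Gateway to invoke the backend Lambda
resource "aws_lambda_permission" "apigw" {
  statement_id  = "AllowAPIGatewayInvoke"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.backend.function_name
  principal     = "apigateway.amazonaws.com"
  source_arn    = "${aws_apigatewayv2_api.main.execution_arn}/*/*"
}
```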
```hcl
resource "aws_apigatewayv2_route" "default" {
  api_id    = aws_apigatewayv2_api.main.id
  route_key = "$default"
  target    = "integrations/${aws_apigatewayv2_integration.lambda.id}"
}
```

#### S3 + CloudFront (frontend)

```hcl
resource "aws_s3_bucket" "frontend" {
  bucket = "${var.project_name}-frontend"
}
```
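The `aws_cloudfront_origin_access_control.main` referenced in the distribution below isn't defined in the snippets. Here's a sketch of it plus the bucket policy that lets CloudFront read from the private bucket, following AWS's standard OAC pattern (resource names are my own):

```hcl
resource "aws_cloudfront_origin_access_control" "main" {
  name                              = "${var.project_name}-oac"
  origin_access_control_origin_type = "s3"
  signing_behavior                  = "always"
  signing_protocol                  = "sigv4"
}

# Only this CloudFront distribution may read objects from the bucket
resource "aws_s3_bucket_policy" "frontend" {
  bucket = aws_s3_bucket.frontend.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "cloudfront.amazonaws.com" }
      Action    = "s3:GetObject"
      Resource  = "${aws_s3_bucket.frontend.arn}/*"
      Condition = {
        StringEquals = { "AWS:SourceArn" = aws_cloudfront_distribution.main.arn }
      }
    }]
  })
}
```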
```hcl
resource "aws_cloudfront_distribution" "main" {
  enabled             = true
  default_root_object = "index.html"
  # (viewer_certificate and restrictions blocks omitted for brevity; both are required)

  # Frontend (React SPA)
  origin {
    domain_name              = aws_s3_bucket.frontend.bucket_regional_domain_name
    origin_id                = "s3-frontend"
    origin_access_control_id = aws_cloudfront_origin_access_control.main.id
  }

  # API backend
  origin {
    domain_name = replace(aws_apigatewayv2_api.main.api_endpoint, "https://", "")
    origin_id   = "api-backend"

    custom_origin_config {
      http_port              = 80
      https_port             = 443
      origin_protocol_policy = "https-only"
      origin_ssl_protocols   = ["TLSv1.2"]
    }
  }

  # Route /api/* to the Lambda backend
  ordered_cache_behavior {
    path_pattern     = "/api/*"
    target_origin_id = "api-backend"
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]

    forwarded_values {
      query_string = true
      headers      = ["Authorization", "Origin"]

      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "redirect-to-https"
  }

  # Everything else serves the React SPA
  default_cache_behavior {
    target_origin_id       = "s3-frontend"
    viewer_protocol_policy = "redirect-to-https"
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }
  }

  # Handle React client-side routing
  custom_error_response {
    error_code         = 403
    response_code      = 200
    response_page_path = "/index.html"
  }
}
```

### Handling Alembic migrations

Since Lambda is stateless, you can run migrations as a separate step in CI/CD:

```hcl
# Option 1: Dedicated migration Lambda
resource "aws_lambda_function" "migrate" {
  function_name = "${var.project_name}-migrate"
  package_type  = "Image"
  image_uri     = "${aws_ecr_repository.backend.repository_url}:latest"
  timeout       = 300
  # Same VPC config as the backend Lambda
}
```

Or run it via GitHub Actions after each deploy:

```sh
aws lambda invoke --function-name myapp-migrate --payload '{}' /dev/null
```

### Cost estimate

For a low-traffic app (dev/staging):
Total: ~$45-50/month (Aurora is the main cost; you could use RDS free tier instead).

### Caveats to keep in mind
I've built similar setups in production and would be happy to help further, or even put together a PR with the Terraform configs if there's interest from the maintainers. You can also check out my Blue-Green AWS Terraform project for reference on structuring Terraform modules with AWS. Hope this helps! Let me know if you have questions about any specific part.
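P.S. If you wrap these resources in a Terraform module, a few outputs make the CI/CD wiring (S3 sync, CloudFront invalidation, the migration invoke) straightforward. A sketch; the output names are suggestions:

```hcl
# Used by CI to upload the frontend build
output "frontend_bucket" {
  value = aws_s3_bucket.frontend.bucket
}

# Used by CI to invalidate the CDN cache after a deploy
output "cloudfront_distribution_id" {
  value = aws_cloudfront_distribution.main.id
}

# Raw API Gateway endpoint, useful for smoke tests
output "api_endpoint" {
  value = aws_apigatewayv2_api.main.api_endpoint
}

# Target for the `aws lambda invoke` migration step
output "migrate_function_name" {
  value = aws_lambda_function.migrate.function_name
}
```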
Description
The current deployment setup works well, but I think it would be very useful for this project to include Terraform configuration files that enable serverless deployment, for example on Google Cloud. I need this for my own project, and since I don't have much experience with infrastructure and Terraform, setting it up has been challenging.

Just throwing the idea out there...