This guide outlines the steps required to set up the Supabase backend, GCP infrastructure, Vercel frontend deployment, and necessary configurations for the Concept Visualizer project.
Ensure you have the following command-line tools installed and configured:
- Git: For version control.
- Node.js & npm: (v18 or later recommended) For frontend development. (https://nodejs.org/)
- Python & uv: (Python 3.11 recommended) For backend development and dependency management. (https://www.python.org/, https://github.com/astral-sh/uv)
- Google Cloud SDK (`gcloud`): For interacting with GCP. (https://cloud.google.com/sdk/docs/install)
  - Authenticate after installation: `gcloud auth login` and `gcloud auth application-default login`.
- Terraform CLI: (v1.3 or later recommended) For managing GCP infrastructure. (https://developer.hashicorp.com/terraform/downloads)
- Supabase CLI: For managing Supabase functions and secrets. (https://supabase.com/docs/guides/cli)
  - Log in after installation: `supabase login`.
- GitHub CLI (`gh`): (Optional, but required for GitHub secret automation via `gcp_apply.sh`) For interacting with GitHub. (https://cli.github.com/)
  - Authenticate after installation: `gh auth login`.
- Obtain an API key from JigsawStack.
- Add this key to your `backend/.env.develop` and `backend/.env.main` files as `CONCEPT_JIGSAWSTACK_API_KEY`.
- Ensure it's populated in GCP Secret Manager via `gcp_populate_secrets.sh`.
- This setup assumes you are using Upstash or a similar Redis provider.
- Get your Redis endpoint URL (e.g., `relevant-stud-56361.upstash.io`) and password.
- Add these to your `backend/.env.develop` and `backend/.env.main` files as `CONCEPT_UPSTASH_REDIS_ENDPOINT` and `CONCEPT_UPSTASH_REDIS_PASSWORD`.
- Ensure they are populated in GCP Secret Manager via `gcp_populate_secrets.sh`.
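Taken together, the corresponding entries in `backend/.env.develop` (and analogously `backend/.env.main`) might look like the following sketch; all values are placeholders:

```bash
# backend/.env.develop — placeholder values, replace with your own
CONCEPT_JIGSAWSTACK_API_KEY=sk_xxxxxxxxxxxx
CONCEPT_UPSTASH_REDIS_ENDPOINT=relevant-stud-56361.upstash.io
CONCEPT_UPSTASH_REDIS_PASSWORD=xxxxxxxx
```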
You need two Supabase projects: one for development (dev) and one for production (prod).
- Create Supabase Project:
  - Go to supabase.com and sign in.
  - Click "New Project".
  - Choose an organization.
  - Name: e.g., `concept-visualizer-dev` or `concept-visualizer-prod`.
  - Database Password: Generate and securely save this password.
  - Region: Choose a region (ideally matching your GCP region).
  - Pricing Plan: Select the "Free" tier for development.
  - Click "Create new project".
- Get API Keys & URLs:
  - Navigate to "Project Settings" > "API".
  - Note down the following for both your `dev` and `prod` projects:
    - Project URL (e.g., `https://[project-id].supabase.co`)
    - `anon` key (public)
    - `service_role` key (keep this secure; never expose it in frontend code)
    - JWT Secret (keep this secure!)
- Configure Authentication:
  - Go to "Authentication" in the sidebar.
  - Under "Configuration" > "Auth Providers", ensure Email is enabled.
  - Under "Configuration" > "Settings", enable "Allow anonymous sign-ins".
  - Under "Configuration" > "URL Configuration":
    - Site URL: Set to your frontend's deployed URL (e.g., `http://localhost:5173` for dev, your Vercel URL for prod).
    - Additional Redirect URLs: Add any other URLs your app might redirect to after auth (e.g., `http://localhost:5173/**`).
- Create Storage Buckets:
  - Go to "Storage" in the sidebar.
  - Click "Create a new bucket".
  - Create two private buckets (important: they must start as private):
    - `concept-images` (or `concept-images-dev` / `concept-images-prod` if defined differently in your `.tfvars` and backend config)
    - `palette-images` (or `palette-images-dev` / `palette-images-prod`)
  - Note: Access policies (RLS) will be set up later by the SQL script.
- Set Up Database Schema & RLS:
  - Go to "SQL Editor" in the sidebar.
  - Click "New query".
  - Execute the SQL script: Copy the content from the appropriate SQL file (`backend/scripts/dev/pg-bucket-schema-policies.sql` for dev, `backend/scripts/prod/pg-bucket-schema-policies.sql` for prod) and run it. This creates the tables (`concepts`, `color_variations`, `tasks`) and sets up Row Level Security policies for both database tables and storage buckets.
- Enable Realtime:
  - Go to "Database" > "Replication".
  - Under "Source", click on "0 tables".
  - Use the toggle switch to enable replication for the `tasks` table (or `tasks_dev` / `tasks_prod`).
- Deploy Edge Function (Data Cleanup):
  - Link Project: Link your local directory to the correct Supabase project:

    ```bash
    # Run from project root
    cd backend
    supabase link --project-ref <your-dev-project-id>
    # or
    supabase link --project-ref <your-prod-project-id>
    ```

  - Set Secrets: Set the required secrets for the function (replace placeholders, adjusting values for dev/prod):

    ```bash
    # Run from project root
    cd backend
    supabase secrets set SUPABASE_URL=<your-supabase-url>
    supabase secrets set SERVICE_ROLE_KEY=<your-service-role-key>
    supabase secrets set STORAGE_BUCKET_CONCEPT=<your-concept-bucket>
    supabase secrets set STORAGE_BUCKET_PALETTE=<your-palette-bucket>
    supabase secrets set APP_ENVIRONMENT=<your-environment>
    ```

  - Deploy: Deploy the function:

    ```bash
    # Run from project root
    supabase functions deploy cleanup-old-data --project-ref <proj_id> --no-verify-jwt
    ```

  - Note: The scheduling of this function is handled via a GitHub Actions workflow (`.github/workflows/schedule-cleanup.yml`).
- Authenticate `gcloud`:
  - Swap accounts (if needed): `gcloud config set account ACCOUNT_NAME`
  - Run `gcloud auth login` to authenticate your primary user account.
  - Run `gcloud auth application-default login`.
- Create GCP Projects: Manually create three separate GCP projects via the Cloud Console: one for `dev` (e.g., `yourproject-dev`), one for `prod` (e.g., `yourproject-prod`), and one for managed Terraform state (e.g., `yourproject-managed`). Note down their unique Project IDs.
- Enable APIs: For both the `dev` and `prod` projects, run the following `gcloud` command (replace `YOUR_PROJECT_ID_HERE` accordingly for each project):

  ```bash
  gcloud services enable \
    compute.googleapis.com \
    run.googleapis.com \
    secretmanager.googleapis.com \
    artifactregistry.googleapis.com \
    pubsub.googleapis.com \
    cloudresourcemanager.googleapis.com \
    iam.googleapis.com \
    logging.googleapis.com \
    monitoring.googleapis.com \
    cloudbuild.googleapis.com \
    eventarc.googleapis.com \
    cloudfunctions.googleapis.com \
    --project=YOUR_PROJECT_ID_HERE
  ```
- Create GCS Bucket for Terraform State (Manual/Bootstrap):
  - Action: Choose one GCP project to host the state bucket (e.g., the `managed` project). Manually create a GCS bucket in that project using the Cloud Console or `gcloud`.
  - Crucially: Enable Object Versioning on this bucket.
  - Example Name: `yourproject-tfstate` (replace with your own globally unique name).
  - Purpose: Securely store Terraform state files for both `dev` and `prod` workspaces.
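Using the example names above, bucket creation and versioning can be sketched with `gcloud` (the bucket name, project ID, and region are placeholders; pick your own):

```bash
# Create the state bucket in the project chosen to host it
gcloud storage buckets create gs://yourproject-tfstate \
  --project=yourproject-managed \
  --location=us-central1 \
  --uniform-bucket-level-access

# Enable Object Versioning so prior state files are recoverable
gcloud storage buckets update gs://yourproject-tfstate --versioning
```

These commands require an authenticated `gcloud` session with permission to create buckets in the target project.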
- Grant Initial IAM Setter Permission: Manually grant the identity running the first `terraform apply` (likely your user account) the permission to set IAM policies on the state bucket. The `roles/storage.admin` role on the bucket is sufficient initially.

  ```bash
  gcloud storage buckets add-iam-policy-binding gs://yourproject-tfstate \
    --member="user:your-gcp-email@example.com" \
    --role="roles/storage.admin" \
    --project="yourproject-managed"  # Project where bucket lives
  ```
- Populate environment variable files:
  - `backend/.env.main`
  - `backend/.env.develop`
  - `terraform/environments/dev.tfvars`
  - `terraform/environments/prod.tfvars`
- Initialize Terraform:

  ```bash
  cd terraform
  terraform init -backend-config="bucket=BUCKET_NAME"
  terraform workspace new dev
  terraform workspace new prod
  ```
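After initialization, a typical way to target one environment is to select its workspace and point Terraform at the matching variables file (a sketch using the `environments/*.tfvars` paths above; `scripts/gcp_apply.sh` presumably wraps steps like these):

```bash
cd terraform
terraform workspace select dev            # or: terraform workspace select prod
terraform plan -var-file=environments/dev.tfvars -out=tfplan
terraform apply tfplan
```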
- Remove Terraform Config (if needed):

  ```bash
  cd terraform
  rm -f tfplan tfplan.*
  rm -rf .terraform
  ```

- Update your `.tfvars` files as needed.
Deploy the frontend application using Vercel.
- Connect GitHub Repo: Import your GitHub repository into Vercel.
- Configure Project:
  - Framework Preset: Select "Vite".
  - Root Directory: Set to `frontend/my-app`.
  - Build Command: Should default correctly (`npm run build`).
  - Output Directory: Should default correctly (`dist`).
  - Install Command: Should default correctly (`npm install`).
- Configure Environment Variables: In Vercel project settings > "Environment Variables":
  - Production (for the `main` branch):
    - `VITE_API_BASE_URL`: `/api` (a Vercel rewrite handles proxying)
    - `VITE_SUPABASE_URL`: Value from the `PROD_SUPABASE_URL` GitHub Secret.
    - `VITE_SUPABASE_ANON_KEY`: Value from the `PROD_SUPABASE_ANON_KEY` GitHub Secret.
    - `VITE_ENVIRONMENT`: `production`
  - Development/Preview (for the `develop` branch and PRs):
    - `VITE_API_BASE_URL`: `/api`
    - `VITE_SUPABASE_URL`: Value from the `DEV_SUPABASE_URL` GitHub Secret.
    - `VITE_SUPABASE_ANON_KEY`: Value from the `DEV_SUPABASE_ANON_KEY` GitHub Secret.
    - `VITE_ENVIRONMENT`: `development`
- Configure Git: In Vercel project settings > "Git", ensure "Production Branch" is set to `main`.
- Update `vercel.json`:
  - Ensure your `frontend/my-app/vercel.json` contains the correct `rewrites` section, including the SPA fallback rule at the end.
  - Crucially, update the `destination` IP address in the API rewrite rules to match the External IP provided by the Terraform output for the respective environment (`dev` or `prod`). Replace `<YOUR_BACKEND_VM_IP>` below, and keep the SPA fallback rule last (JSON does not allow comments, so the notes live here instead):

    ```json
    {
      "rewrites": [
        { "source": "/api/healthz", "destination": "http://<YOUR_BACKEND_VM_IP>/api/health/ping" },
        { "source": "/api/:path*", "destination": "http://<YOUR_BACKEND_VM_IP>/api/:path*" },
        { "source": "/((?!_next/static|static|favicon.ico|vite.svg|assets/).*)", "destination": "/index.html" }
      ]
    }
    ```
- Deploy: Trigger a deployment on Vercel (e.g., by pushing to `main` or `develop`).
- Get the Vercel secrets for GitHub:
  - `VERCEL_ORG_ID`
  - `VERCEL_TOKEN`
  - `PROD_VERCEL_PROJECT_ID`
  - `DEV_VERCEL_PROJECT_ID`
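One way to find the org and project IDs (assuming you are willing to use the Vercel CLI, which this guide does not otherwise require) is to link the project locally and read the generated `.vercel/project.json`; the token itself is created manually in the Vercel dashboard under account settings:

```bash
cd frontend/my-app
npx vercel link            # follow the prompts to select the Vercel project
cat .vercel/project.json   # contains "orgId" and "projectId"
```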
Configure secrets in your GitHub repository settings ("Settings" > "Secrets and variables" > "Actions") to allow CI/CD workflows to authenticate with GCP and other services.
Automation Note: The `scripts/gcp_apply.sh` script, upon successful completion, will automatically attempt to populate many of the necessary GitHub Actions secrets using the `scripts/gh_populate_secrets.sh` helper script. This automation is branch-aware (setting `DEV_*` or `PROD_*` secrets based on the current branch).
Prerequisite for Automation: You must have the GitHub CLI (`gh`) installed and authenticated (`gh auth login`) for the automation to work.
The following secrets will be set by the automation script. Values are sourced from Terraform outputs or active .tfvars files.
- Global:
  - `GCP_REGION`
  - `TF_STATE_BUCKET_NAME` (from `terraform_state_bucket_name` in tfvars; used by the deploy_backend workflow)
- Environment-Specific (prefixed with `DEV_` or `PROD_`):
  - `GCP_PROJECT_ID`
  - `GCP_ZONE`
  - `NAMING_PREFIX`
  - `API_SERVICE_ACCOUNT_EMAIL`
  - `WORKER_SERVICE_ACCOUNT_EMAIL`
  - `CICD_SERVICE_ACCOUNT_EMAIL`
  - `WORKLOAD_IDENTITY_PROVIDER`
  - `ARTIFACT_REGISTRY_REPO_NAME`
  - `FRONTEND_UPTIME_CHECK_CONFIG_ID`
  - `FRONTEND_ALERT_POLICY_ID`
  - `ALERT_NOTIFICATION_CHANNEL_FULL_ID`
  - `FRONTEND_STARTUP_ALERT_DELAY`
  - `ALERT_ALIGNMENT_PERIOD`
  - `WORKER_MIN_INSTANCES`
  - `WORKER_MAX_INSTANCES`
These secrets still need to be added manually to your GitHub repository settings. They are typically sensitive, obtained from third-party services, or not directly output by the Terraform configuration in a way suitable for this automation.
- Global:
  - `VERCEL_ORG_ID`
  - `VERCEL_TOKEN`
  - `TF_STATE_BUCKET_NAME`: now set automatically by `gcp_apply.sh` (Step 7: `gh_populate_secrets.sh`) from `terraform_state_bucket_name` in your tfvars, so no manual entry is needed.
- Production (prefixed with `PROD_`):
  - `PROD_JIGSAWSTACK_API_KEY`
  - `PROD_SUPABASE_ANON_KEY`
  - `PROD_SUPABASE_JWT_SECRET`
  - `PROD_SUPABASE_SERVICE_ROLE`
  - `PROD_SUPABASE_URL`
  - `PROD_VERCEL_PROJECT_ID`
  - `PROD_UPSTASH_REDIS_ENDPOINT`
  - `PROD_UPSTASH_REDIS_PASSWORD`
  - `PROD_UPSTASH_REDIS_PORT` (optional, defaults to `6379`)
- Development (prefixed with `DEV_`):
  - `DEV_JIGSAWSTACK_API_KEY`
  - `DEV_SUPABASE_ANON_KEY`
  - `DEV_SUPABASE_JWT_SECRET`
  - `DEV_SUPABASE_SERVICE_ROLE`
  - `DEV_SUPABASE_URL`
  - `DEV_VERCEL_PROJECT_ID`
  - `DEV_UPSTASH_REDIS_ENDPOINT`
  - `DEV_UPSTASH_REDIS_PASSWORD`
  - `DEV_UPSTASH_REDIS_PORT` (optional)
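The manual secrets can also be added from the command line with the GitHub CLI listed in the prerequisites (secret names are from the lists above; the values shown are placeholders):

```bash
# Run from the repository root; requires a prior `gh auth login`
gh secret set PROD_SUPABASE_URL --body "https://<project-id>.supabase.co"
gh secret set PROD_SUPABASE_ANON_KEY --body "<anon-key>"
gh secret set VERCEL_TOKEN --body "<vercel-token>"
# Repeat for the remaining PROD_* and DEV_* secrets
```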
(The comprehensive list of all secrets and their descriptions can be found in `.github/SECRETS.md`, which should align with the above distinction of automated vs. manual.)
- Backend:

  ```bash
  cd backend
  # Ensure .env is linked to .env.develop via the post-checkout hook
  # (or manually copy: cp .env.develop .env)
  uvicorn app.main:app --reload --port 8000
  ```

- Frontend:

  ```bash
  cd frontend/my-app
  # Ensure .env is linked to .env.develop via the post-checkout hook
  # (or manually copy: cp .env.develop .env)
  npm run dev
  ```

  Access the app at `http://localhost:5173` (or the port Vite uses).
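As a quick local smoke test (assuming the backend exposes the same `/api/health/ping` route that the `vercel.json` rewrites point at, and the ports used above):

```bash
# Backend health check (port from the uvicorn command above)
curl -i http://localhost:8000/api/health/ping

# Frontend dev server responds with the SPA shell
curl -I http://localhost:5173/
```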
Once the backend and frontend are deployed on GCP and Vercel respectively, open the project URL provided by Vercel.
- If you run `scripts/gcp_destroy.sh` and then `scripts/gcp_apply.sh`, you are fully recreating the GCP infrastructure.
- After a full recreation:
  - The External IP Address of your backend VM will likely change (unless you explicitly reserved it outside Terraform's management, which isn't standard here). You MUST update the IP in `frontend/my-app/vercel.json` and redeploy the frontend.
  - Terraform outputs (Service Account emails, WIF provider) might change. You MUST update the corresponding GitHub Secrets.
  - You likely need to re-run `scripts/gcp_populate_secrets.sh`, as the Secret Manager resources were recreated.
Regular deployments via CI/CD (pushing code changes) will typically not require these full update steps, only updating the application code or container image.