✨ https://settlemint.com ✨
Standardized, auditable Terraform to provision platform dependencies and deploy SettleMint BTP across clouds.
Works with AWS, Azure, GCP, and any Kubernetes cluster. Mix managed, Kubernetes (Helm), or bring-your-own backends per dependency.
The SettleMint BTP (Blockchain Technology Platform) Universal Terraform repository provides a standardized, production-ready deployment solution for the SettleMint platform across multiple cloud providers. This repository automates the provisioning of all required infrastructure dependencies and deploys the SettleMint BTP platform using Helm charts.
What is SettleMint BTP?
SettleMint BTP is a comprehensive blockchain development platform that provides tools and services for building, deploying, and managing blockchain applications. It includes features like smart contract development, API management, monitoring, and integration capabilities.
Key Benefits:
- One-click deployment across AWS, Azure, GCP, or any Kubernetes cluster
- Flexible dependency management - choose between managed cloud services, Kubernetes-native deployments, or bring-your-own solutions
- Built-in observability with Prometheus, Grafana, and Loki
- Enterprise security with OAuth integration, secrets management, and TLS encryption
- Scalable architecture designed for production workloads
This repository provides a consistent Terraform flow to provision BTP platform dependencies and install the BTP Helm chart. Use the same module to deploy to AWS, Azure, and GCP or any existing Kubernetes cluster. Each dependency can be provided via a managed cloud service, installed inside Kubernetes (Helm), or wired to your own (BYO) endpoints.
For deeper guidance, dive into the in-repo docs starting at docs/README.md.
- Unified module layout for dependencies with three modes: k8s (Helm) | managed (cloud) | byo (external)
- Consistent `-var-file`-based configuration across environments
- Secrets flow through `TF_VAR_*` inputs, and Terraform marks sensitive outputs automatically
- Observability stack via kube-prometheus-stack and Loki
- Maintained docs under `docs/` covering configuration, operations, and troubleshooting
Before starting the deployment, ensure you have the following:
- Terraform (v1.0+) - Download here
- kubectl - Installation guide
- Helm (v3.0+) - Installation guide
- Cloud CLI:
- AWS CLI (v2.0+) - Installation guide
- gcloud CLI (for GCP) - Installation guide
- Azure CLI (for Azure) - Installation guide
- SettleMint License - Contact SettleMint for platform licensing
- Cloud Account - Choose one or more:
- AWS Account - With appropriate permissions for EKS, RDS, ElastiCache, S3, Route53, and Cognito
- GCP Account - With appropriate permissions for GKE, Cloud SQL, Memorystore, Cloud Storage, and Cloud DNS
- Azure Account - With appropriate permissions for AKS, PostgreSQL, Redis, Storage, and DNS
- Domain Name - For SSL certificates and platform access (e.g., `yourcompany.com`)
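A quick way to confirm the tooling is in place before you start (a minimal sketch; exact version output differs per platform):

```bash
# Verify required CLIs are on the PATH and meet the minimum versions
terraform version          # expect v1.0 or newer
kubectl version --client   # client only; no cluster access needed yet
helm version               # expect v3.x
aws --version              # only if deploying to AWS
gcloud --version           # only if deploying to GCP
az version                 # only if deploying to Azure
```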
Your AWS credentials need permissions for:
- EKS: Create/manage clusters, node groups, and IAM roles
- RDS: Create/manage PostgreSQL databases
- ElastiCache: Create/manage Redis clusters
- S3: Create/manage buckets and objects
- Route53: Create/manage hosted zones and DNS records
- Cognito: Create/manage user pools and clients
- IAM: Create/manage roles and policies
- VPC: Create/manage VPCs, subnets, and security groups
Your GCP account needs permissions for:
- GKE (Kubernetes Engine): Create/manage clusters and node pools
- Cloud SQL: Create/manage PostgreSQL instances
- Memorystore: Create/manage Redis instances
- Cloud Storage: Create/manage buckets and objects
- Cloud DNS: Create/manage DNS zones and records (optional)
- Compute Engine: Create/manage VPCs, subnets, firewall rules, and Cloud NAT
- IAM & Service Accounts: Create/manage service accounts and IAM bindings
- Service Networking: Configure private service connections for Cloud SQL
Required GCP APIs (enable these in your project):
gcloud services enable container.googleapis.com # GKE
gcloud services enable compute.googleapis.com # Compute/VPC
gcloud services enable sqladmin.googleapis.com # Cloud SQL
gcloud services enable redis.googleapis.com # Memorystore Redis
gcloud services enable storage.googleapis.com # Cloud Storage
gcloud services enable dns.googleapis.com # Cloud DNS
gcloud services enable servicenetworking.googleapis.com # Private networking

Required GCP IAM Roles for Terraform Service Account:
The service account used by Terraform needs the following roles. You can create a service account and grant these permissions:
# Create service account
gcloud iam service-accounts create btp-universal-terraform \
--display-name="BTP Universal Terraform Deployer" \
--project=YOUR_PROJECT_ID
# Grant required roles
PROJECT_ID="YOUR_PROJECT_ID"
SA_EMAIL="btp-universal-terraform@${PROJECT_ID}.iam.gserviceaccount.com"
# Core infrastructure roles
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member="serviceAccount:${SA_EMAIL}" \
--role="roles/compute.admin" # VPC, networks, firewalls, NAT
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member="serviceAccount:${SA_EMAIL}" \
--role="roles/container.admin" # GKE clusters and node pools
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member="serviceAccount:${SA_EMAIL}" \
--role="roles/cloudsql.admin" # Cloud SQL PostgreSQL instances
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member="serviceAccount:${SA_EMAIL}" \
--role="roles/redis.admin" # Memorystore Redis instances
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member="serviceAccount:${SA_EMAIL}" \
--role="roles/storage.admin" # Cloud Storage buckets
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member="serviceAccount:${SA_EMAIL}" \
--role="roles/dns.admin" # Cloud DNS zones and records
# IAM and security roles
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member="serviceAccount:${SA_EMAIL}" \
--role="roles/iam.serviceAccountAdmin" # Create/manage service accounts
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member="serviceAccount:${SA_EMAIL}" \
--role="roles/iam.serviceAccountTokenCreator" # Create HMAC keys for storage
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member="serviceAccount:${SA_EMAIL}" \
--role="roles/resourcemanager.projectIamAdmin" # Manage IAM bindings
# Grant service account user role on compute default service account
gcloud iam service-accounts add-iam-policy-binding \
$(gcloud projects describe $PROJECT_ID --format="value(projectNumber)")-compute@developer.gserviceaccount.com \
--member="serviceAccount:${SA_EMAIL}" \
--role="roles/iam.serviceAccountUser" \
--project=$PROJECT_ID
# Create and download JSON key
gcloud iam service-accounts keys create ~/btp-terraform-gcp-key.json \
--iam-account="${SA_EMAIL}"
echo "✓ Service account created with all required permissions"
echo "✓ Key saved to ~/btp-terraform-gcp-key.json"
echo ""
echo "Store this key securely (e.g., in 1Password) and set GOOGLE_CLOUD_KEYFILE_JSON in your .env file"Running with User Credentials (Alternative to Service Account):
If you're running Terraform with your own user credentials (via gcloud auth application-default login) instead of a service account, you need iam.serviceAccountUser permission on the GKE node service account that Terraform creates. After the first terraform apply fails with a permission error, run:
gcloud iam service-accounts add-iam-policy-binding \
btp-gke-nodes@YOUR_PROJECT_ID.iam.gserviceaccount.com \
--member="user:YOUR_EMAIL@example.com" \
--role="roles/iam.serviceAccountUser"Then re-run terraform apply.
Role Purpose Breakdown:
| Role | Purpose | Required For |
|---|---|---|
| `roles/compute.admin` | Manage VPC networks, subnets, firewalls, Cloud NAT | GKE networking, custom VPCs |
| `roles/container.admin` | Create and manage GKE clusters, node pools, and configurations | GKE cluster deployment |
| `roles/cloudsql.admin` | Create and manage Cloud SQL instances, databases, and users | PostgreSQL database |
| `roles/redis.admin` | Create and manage Memorystore Redis instances | Redis cache |
| `roles/storage.admin` | Create buckets, manage objects, create HMAC keys | Object storage for artifacts |
| `roles/dns.admin` | Create DNS zones and manage DNS records | Domain management (optional) |
| `roles/iam.serviceAccountAdmin` | Create service accounts for GKE nodes and storage access | GKE node service accounts |
| `roles/iam.serviceAccountTokenCreator` | Generate HMAC keys for Cloud Storage S3-compatible access | Storage credentials |
| `roles/resourcemanager.projectIamAdmin` | Grant IAM roles to service accounts | GKE node permissions |
| `roles/iam.serviceAccountUser` (on compute SA) | Allow using the default compute service account | GKE cluster creation |
This repository supports deployment to AWS, GCP, and Azure. Choose your target platform:
Follow these steps to deploy SettleMint BTP on AWS:
Contact SettleMint to obtain your platform license. You'll receive the following parameters:
- License Username (`TF_VAR_license_username`) - Your license username
- License Password (`TF_VAR_license_password`) - Your license password
- License Signature (`TF_VAR_license_signature`) - Cryptographic signature for license validation
- License Email (`TF_VAR_license_email`) - Email associated with the license
- License Expiration Date (`TF_VAR_license_expiration_date`) - License validity period (format: YYYY-MM-DD)
Note: These parameters will be used in Step 4 when configuring your environment variables.
# Configure AWS CLI with your credentials
aws configure
# Enter your Access Key ID, Secret Access Key, and preferred region

- Log into AWS Console → Go to IAM service
- Create IAM User:
- Click "Users" → "Create user"
- Username: `btp-terraform-user` (or your preferred name)
- Select "Programmatic access"
- Attach Policies:
- AmazonEKSClusterPolicy
- AmazonEKSWorkerNodePolicy
- AmazonEKS_CNI_Policy
- AmazonRDSFullAccess
- AmazonElastiCacheFullAccess
- AmazonS3FullAccess
- AmazonRoute53FullAccess
- AmazonCognitoPowerUser
- IAMFullAccess
- AmazonVPCFullAccess
- Create Access Keys:
- Go to "Security credentials" tab
- Click "Create access key"
- Choose "Application running outside AWS"
- Save the Access Key ID and Secret Access Key - you'll need these in Step 4
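With the access keys configured locally (via aws configure or environment variables), it is worth confirming the identity Terraform will run as (a small sketch):

```bash
# The ARN should end in :user/btp-terraform-user (or the username you chose)
aws sts get-caller-identity
```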
- Log into AWS Console → Go to Route53 service
- Create Hosted Zone:
- Click "Hosted zones" → "Create hosted zone"
- Domain name: `yourdomain.com` (replace with your actual domain)
- Type: "Public hosted zone"
- Click "Create hosted zone"
- Copy Nameservers from the created hosted zone (4 NS records)
- Update Domain Registrar:
- Log into your domain registrar (GoDaddy, Namecheap, etc.)
- Go to DNS management for your domain
- Replace existing nameservers with the Route53 nameservers
- Wait 24-48 hours for DNS propagation
# Check if nameservers are updated
dig NS yourdomain.com
# Should show Route53 nameservers

# Clone the repository
git clone https://github.com/settlemint/btp-universal-terraform.git
cd btp-universal-terraform
# Copy the AWS example configuration
cp examples/aws-config.tfvars aws-config.tfvars
# Copy the environment template
cp .env.example .env

Update the following parameters in your configuration file:
Required Changes:
# Update domain (replace with your actual domain)
base_domain = "yourdomain.com"
# Update VPC and cluster names (replace 'yourname' with your identifier)
vpc = {
aws = {
vpc_name = "btp-vpc-yourname"
region = "eu-central-1" # Change to your preferred AWS region
}
}
k8s_cluster = {
aws = {
cluster_name = "btp-eks-yourname"
region = "us-east-1" # Must match VPC region
}
}
# Update DNS configuration
dns = {
domain = "yourdomain.com" # Must match your actual domain
aws = {
zone_name = "yourdomain.com" # Must match your Route53 hosted zone
}
}
# Update OAuth callback URL
oauth = {
aws = {
domain_prefix = "btp-yourname-platform" # Must be globally unique
callback_urls = ["https://yourdomain.com/api/auth/callback/cognito"]
}
}

Optional Changes:
- Region: Change `eu-central-1` to your preferred AWS region
- Instance Types: Modify `t3.medium` to `t3.large` or `t3.xlarge` for higher performance
- Node Count: Adjust `desired_size`, `min_size`, `max_size` based on your needs
- Database Size: Change `db.t3.small` to a larger instance class for production
Fill in all the required environment variables:
# AWS Credentials (from Step 2)
AWS_ACCESS_KEY_ID=AKIA...
AWS_SECRET_ACCESS_KEY=...
AWS_REGION=eu-central-1   # must match the region in your tfvars
# SettleMint License (from Step 1)
TF_VAR_license_username=your-license-username
TF_VAR_license_password=your-license-password
TF_VAR_license_signature=your-license-signature
TF_VAR_license_email=your-email@example.com
TF_VAR_license_expiration_date=2025-12-31
# Database Passwords (generate strong passwords)
TF_VAR_postgres_password=your-strong-postgres-password
TF_VAR_redis_password=your-strong-redis-password
# Object Storage Credentials (generate unique keys)
TF_VAR_object_storage_access_key=your-access-key
TF_VAR_object_storage_secret_key=your-secret-key
# Platform Secrets (generate strong, unique values)
TF_VAR_grafana_admin_password=your-grafana-password
TF_VAR_oauth_admin_password=your-oauth-password
TF_VAR_jwt_signing_key=your-jwt-signing-key
TF_VAR_ipfs_cluster_secret=your-64-char-hex-string
TF_VAR_state_encryption_key=your-state-encryption-key
# AWS Credentials for deployment engine
TF_VAR_aws_access_key_id=AKIA... # Same as AWS_ACCESS_KEY_ID
TF_VAR_aws_secret_access_key=... # Same as AWS_SECRET_ACCESS_KEY
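Any cryptographically random values work for the generated secrets above; one way to produce them is with openssl, which ships with most systems (a sketch):

```bash
# Strong random values (adjust lengths to taste)
openssl rand -base64 32   # e.g. postgres/redis/grafana/oauth passwords
openssl rand -base64 48   # e.g. JWT signing key, state encryption key
openssl rand -hex 32      # 64-character hex string for TF_VAR_ipfs_cluster_secret
```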
# Load environment variables
set -a && source .env && set +a
# Initialize Terraform
terraform init
# Review the deployment plan
terraform plan -var-file aws-config.tfvars
# Deploy the platform (takes 15-20 minutes)
terraform apply -var-file aws-config.tfvars

The deployment will create:
- VPC with public/private subnets
- EKS cluster with worker nodes
- RDS PostgreSQL database
- ElastiCache Redis cluster
- S3 bucket for object storage
- Route53 DNS records
- Cognito user pool
- SettleMint BTP platform
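Once the apply finishes, point kubectl at the new cluster and watch the platform come up (a sketch; the cluster name and region must match your tfvars):

```bash
# Fetch a kubeconfig entry for the new EKS cluster
aws eks update-kubeconfig --name btp-eks-yourname --region eu-central-1

# Watch the dependencies and the platform start
kubectl get pods -n btp-deps
kubectl get pods -n settlemint -w
```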
After successful deployment, create a user in AWS Cognito:
- Log into AWS Console → Go to Cognito service
- Find Your User Pool:
- Look for pool named `btp-users` (or as configured)
- Click on the pool name
- Create User:
- Click "Users" tab → "Create user"
- Username: `admin` (or your preferred username)
- Email: `admin@yourdomain.com`
- Password: Create a strong password
- Uncheck "Mark email as verified" (you'll verify manually)
- Verify Email:
- Click on the created user
- Click "Actions" → "Confirm user"
- Confirm the email verification
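If you prefer the CLI over the console, the same user can be created and confirmed with the AWS CLI (a sketch; the user pool ID comes from the Cognito console or the Terraform output):

```bash
POOL_ID="<your-user-pool-id>"   # hypothetical placeholder

# Create the user with a verified email
aws cognito-idp admin-create-user \
  --user-pool-id "$POOL_ID" \
  --username admin \
  --user-attributes Name=email,Value=admin@yourdomain.com Name=email_verified,Value=true

# Set a permanent password so the user is immediately confirmed
aws cognito-idp admin-set-user-password \
  --user-pool-id "$POOL_ID" \
  --username admin \
  --password 'YourStrongPassword123!' \
  --permanent
```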
After deployment completes, you'll see output similar to:
post_deploy_urls = {
platform_url = "https://yourdomain.com"
grafana_url = "http://kps-grafana.btp-deps.svc.cluster.local"
# ... other endpoints
}
Access Points:
- SettleMint Platform: `https://yourdomain.com`
- Grafana Monitoring: Use kubectl port-forward (see the sketch below) or ingress
- Database: Connection details in Terraform output
- Object Storage: S3 bucket details in Terraform output
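To reach Grafana without exposing an ingress, you can port-forward the service shown in the output above (a sketch; the release name kps-grafana and service port may differ in your deployment):

```bash
# Grafana available at http://localhost:3000 while this runs
kubectl -n btp-deps port-forward svc/kps-grafana 3000:80
```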
Login Credentials:
- Use the Cognito user created in Step 6
- Platform URL: `https://yourdomain.com`
Follow these steps to deploy SettleMint BTP on Google Cloud Platform:
Same as AWS - Contact SettleMint to obtain your platform license parameters.
Create or Select a GCP Project:
# List existing projects
gcloud projects list
# Create a new project (optional)
gcloud projects create YOUR_PROJECT_ID --name="BTP Platform"
# Set the active project
gcloud config set project YOUR_PROJECT_ID
# Enable billing for the project (required)
# Go to: https://console.cloud.google.com/billing

Install gcloud CLI:
- Download from: https://cloud.google.com/sdk/docs/install
- Or use package manager:
# macOS
brew install google-cloud-sdk

# Linux (Ubuntu/Debian)
sudo apt-get install google-cloud-sdk
# Authenticate your user account
gcloud auth login
# Set up application default credentials for Terraform
gcloud auth application-default login
# Verify authentication
gcloud auth list

# Enable all required APIs
gcloud services enable container.googleapis.com # GKE
gcloud services enable compute.googleapis.com # Compute/VPC
gcloud services enable sqladmin.googleapis.com # Cloud SQL
gcloud services enable redis.googleapis.com # Memorystore Redis
gcloud services enable storage.googleapis.com # Cloud Storage
gcloud services enable dns.googleapis.com # Cloud DNS (optional)
gcloud services enable servicenetworking.googleapis.com # Private networking
# Verify APIs are enabled
gcloud services list --enabled

Create a DNS Managed Zone:
# Create managed zone
gcloud dns managed-zones create btp-zone \
--dns-name="yourdomain.com." \
--description="BTP Platform DNS Zone"
# Get nameservers
gcloud dns managed-zones describe btp-zone --format="value(nameServers)"

Update Domain Registrar:
- Copy the Cloud DNS nameservers from the output above
- Log into your domain registrar (GoDaddy, Namecheap, etc.)
- Update your domain's nameservers to use the Cloud DNS nameservers
- Wait 24-48 hours for DNS propagation
Verify DNS Setup:
# Check if nameservers are updated
dig NS yourdomain.com

Clone the repository and create configuration:
# Copy the example configuration
cp examples/gcp-config.tfvars my-gcp-deployment.tfvars
# Edit the configuration file
# Update the following in my-gcp-deployment.tfvars:
# - All instances of "my-gcp-project" with YOUR_PROJECT_ID
# - base_domain with your actual domain
# - region with your preferred GCP region (e.g., us-central1, europe-west1)

Key GCP-specific settings to configure:
platform = "gcp"
base_domain = "yourdomain.com"
k8s_cluster = {
mode = "gcp"
gcp = {
project_id = "YOUR_PROJECT_ID"
cluster_name = "btp-cluster"
region = "us-central1"
kubernetes_version = "1.31"
node_pools = {
default = {
machine_type = "e2-standard-4" # 4 vCPU, 16GB RAM
min_node_count = 1
max_node_count = 10
auto_scaling = true
}
}
}
}
postgres = {
mode = "gcp"
gcp = {
project_id = "YOUR_PROJECT_ID"
instance_name = "btp-postgres"
tier = "db-custom-2-7680" # 2 vCPU, 7.5GB RAM
availability_type = "REGIONAL" # High availability
}
}
redis = {
mode = "gcp"
gcp = {
project_id = "YOUR_PROJECT_ID"
instance_name = "btp-redis"
tier = "STANDARD_HA" # High availability
memory_size_gb = 5
}
}
object_storage = {
mode = "gcp"
gcp = {
project_id = "YOUR_PROJECT_ID"
location = "US" # Multi-region
}
}

Create .env file:
# Copy the example
cp .env.example .env
# Edit .env and set these required variables:
TF_VAR_postgres_password="your-secure-password-min-8-chars"
TF_VAR_redis_password="your-secure-password-min-16-chars"
TF_VAR_grafana_admin_password="your-secure-password-min-12-chars"
TF_VAR_oauth_admin_password="your-secure-password-min-16-chars"
# Platform secrets (generate random strings)
TF_VAR_jwt_signing_key="random-32-character-string-here"
TF_VAR_state_encryption_key="random-32-character-string-here"
TF_VAR_ipfs_cluster_secret="64-character-hexadecimal-string"
# License credentials from Step 1
TF_VAR_license_username="your-license-username"
TF_VAR_license_password="your-license-password"
TF_VAR_license_signature="your-license-signature"
TF_VAR_license_email="your-email@example.com"
TF_VAR_license_expiration_date="2025-12-31"

Load environment variables:
set -a
source .env
set +a

# Initialize Terraform
terraform init
# Validate configuration
terraform validate
# Review the deployment plan
terraform plan -var-file=my-gcp-deployment.tfvars

# Apply the configuration
terraform apply -var-file=my-gcp-deployment.tfvars
# Type 'yes' when prompted to confirm

Expected deployment time: ~20-25 minutes
- GKE cluster creation: ~10 minutes
- Cloud SQL instance: ~8 minutes
- Memorystore Redis: ~5 minutes
- Kubernetes workloads: ~5 minutes
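While the apply runs, you can follow progress from a second terminal (a sketch; watch may need to be installed separately on macOS):

```bash
# Poll the main GCP resources as Terraform creates them
watch -n 60 '
  gcloud container clusters list --project=YOUR_PROJECT_ID
  gcloud sql instances list --project=YOUR_PROJECT_ID
  gcloud redis instances list --region=us-central1 --project=YOUR_PROJECT_ID
'
```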
# Get GKE credentials
gcloud container clusters get-credentials btp-cluster \
--region=us-central1 \
--project=YOUR_PROJECT_ID
# Verify connection
kubectl get nodes
kubectl get namespaces

Check infrastructure status:
# View Terraform outputs
terraform output
# Check GKE cluster
gcloud container clusters describe btp-cluster --region=us-central1
# Check Cloud SQL
gcloud sql instances describe btp-postgres
# Check Memorystore Redis
gcloud redis instances describe btp-redis --region=us-central1
# Check all pods are running
kubectl get pods -A

Get access URLs:
# Get Grafana URL (monitoring)
kubectl get ingress -n btp-deps grafana-ingress
# Get platform URL
kubectl get ingress -n settlemint

Once DNS has propagated (if configured), access your platform:
- Platform URL: `https://yourdomain.com`
- Grafana: `https://grafana.yourdomain.com`
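A quick external check once DNS has propagated (a sketch):

```bash
# The domain should resolve to your ingress IP and answer over HTTPS
dig +short yourdomain.com
curl -skI https://yourdomain.com | head -n 1
```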
Initial Setup:
- Access the platform URL in your browser
- Complete the Google OAuth setup (see GCP Console > APIs & Services > Credentials)
- Create your first administrator user
- Start deploying blockchain networks!
For a minimal test deployment (without production workloads), see the GCP Testing Guide which includes:
- Automated testing script (`./test-gcp.sh`)
- Minimal configuration example (`test-gcp.tfvars`)
- Cost estimates for testing (~$120-200/month)
- Step-by-step troubleshooting
Problem: Platform not accessible via domain name
Solutions:
# Check DNS propagation
dig yourdomain.com
nslookup yourdomain.com
# Verify Route53 nameservers
dig NS yourdomain.com
# Check if nameservers are correctly set at domain registrar

Problem: AccessDenied errors during deployment
Solutions:
- Verify IAM user has all required policies attached
- Check if AWS credentials are correctly configured: `aws sts get-caller-identity`
- Ensure region matches between credentials and configuration
Problem: State file conflicts or corruption
Solutions:
# Refresh state
terraform refresh -var-file aws-config.tfvars
# Import existing resources if needed
terraform import aws_instance.example i-1234567890abcdef0
# Backup state before major changes
cp terraform.tfstate terraform.tfstate.backup

Problem: Cluster not accessible or nodes not joining
Solutions:
# Check cluster status
aws eks describe-cluster --name btp-eks-yourname --region eu-central-1
# Verify kubectl context
kubectl config current-context
# Check node status
kubectl get nodes

Problem: SSL certificates not issued or invalid
Solutions:
# Check cert-manager logs
kubectl logs -n btp-deps -l app=cert-manager
# Verify ClusterIssuer
kubectl get clusterissuer
# Check certificate status
kubectl get certificate -n btp-deps

Problem: SettleMint platform pods not running
Solutions:
# Check pod status
kubectl get pods -n settlemint
# Check logs
kubectl logs -n settlemint -l app=settlemint-platform
# Verify all dependencies are running
kubectl get pods -n btp-deps

- Check Logs: Use `kubectl logs` to examine pod logs
- Verify Resources: Use `kubectl get all` to check resource status
- AWS Console: Check AWS services directly in the console
- Documentation: Refer to the `docs/` directory for detailed guides
- Issues: Report bugs at GitHub Issues
To remove all resources:
# Destroy all infrastructure
terraform destroy -var-file aws-config.tfvars
# Clean up local files
rm -f .env aws-config.tfvars

Choose the configuration that matches your deployment target (copy and edit as needed):
- `examples/k8s-config.tfvars` – Kubernetes-native (Helm charts for all dependencies)
- `examples/aws-config.tfvars` – AWS managed services plus ingress DNS automation
- `examples/azure-config.tfvars` – Azure bring-your-own endpoints (managed modules landing soon)
- `examples/gcp-config.tfvars` – GCP bring-your-own endpoints (managed modules landing soon)
- `examples/mixed-config.tfvars` – Sample blend of managed + k8s + byo modes
- `examples/byo-config.tfvars` – Fully external dependencies
See docs/configuration.md for the inputs you typically override and how to supply secrets.
# Initialize Terraform
terraform init
# Review plan and apply using your config
terraform plan -var-file examples/aws-config.tfvars
terraform apply -var-file examples/aws-config.tfvars
# Tear down when finished
terraform destroy -var-file examples/aws-config.tfvars

Need more guidance? Follow docs/getting-started.md for prerequisites and verification steps.
To deploy the SettleMint platform itself, enable the `./btp` module in your tfvars (see the `btp` block in variables.tf) and follow the notes in docs/configuration.md.
Terraform requires sensitive credentials (passwords, API keys, license details) to provision dependencies. Supply these via environment variables—never commit them to version control.
Quick start:
# Copy the example and fill in your values
cp .env.example .env
# Load variables and apply
set -a && source .env && set +a
terraform apply -var-file examples/aws-config.tfvars

The .env.example file lists all required variables with the `TF_VAR_` prefix that Terraform reads automatically.
Using a password manager:
Integrate with 1Password, AWS Secrets Manager, HashiCorp Vault, or other tools to inject secrets at runtime. See docs/configuration.md for detailed examples of each method.
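For example, with the 1Password CLI you can keep only op:// secret references in .env and resolve them at runtime (a sketch, assuming the op CLI v2 is installed and signed in):

```bash
# Resolve op:// references from .env and run Terraform with the real values
op run --env-file=".env" -- terraform apply -var-file examples/aws-config.tfvars
```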
For a complete guide on environment variable handling, credential requirements, and password manager integration, refer to the "Secrets and credentials" section in docs/configuration.md.
- Edit module code under `./deps/*` or root variables/outputs.
- Format and validate:
terraform fmt -recursive
terraform validate
terraform plan -var-file examples/aws-config.tfvars
terraform apply -var-file examples/aws-config.tfvars

- Destroy when finished:

terraform destroy -var-file examples/aws-config.tfvars

- Ingress controller ready; cert-manager `ClusterIssuer` exists
- Postgres/Redis services resolvable in-cluster; MinIO UI/API reachable
- Grafana accessible; Prometheus up; Loki receiving logs
- Keycloak admin reachable; Vault server responding (dev mode)
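A few commands cover most of this checklist (a sketch; release and service names depend on your configuration):

```bash
# Dependency workloads, issuers, and certificates
kubectl get pods -n btp-deps
kubectl get clusterissuer
kubectl get certificate -n btp-deps

# In-cluster services (Postgres, Redis, MinIO, Grafana, Loki, Keycloak, Vault)
kubectl get svc -n btp-deps
```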
See docs/operations.md for additional day-2 tasks and verification tips.
- Dependencies deploy to `btp-deps` by default (override per dependency via `var.<dep>.k8s.namespace` or `var.namespaces`).
- The BTP chart deploys to `btp` by default (configurable in the `btp` module).
See docs/architecture.md for an overview diagram showing how modules connect.
- Root module wires dependency modules and normalizes outputs.
- Modules: `./deps/{postgres,redis,object_storage,oauth,secrets,ingress_tls,metrics_logs}` implement managed/k8s/byo modes.
- The `./btp` module maps normalized outputs to BTP chart values.
- Examples live in `./examples/*.tfvars`.
terraform fmt -recursive # formatting
terraform validate # static validation
tflint --init && tflint # lint (if TFLint is installed)
checkov -d . # optional security scan (if installed)

Before PRs: include plan output for the relevant tfvars and note any input/output changes. See AGENTS.md for conventions.
For local development, the default local state is fine. For shared environments, configure a remote backend (e.g., S3, GCS, AzureRM). Example (commented):
# terraform {
# backend "s3" {
# bucket = "my-tf-state"
# key = "btp-universal-terraform/terraform.tfstate"
# region = "us-east-1"
# }
# }
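If you switch from local to remote state after resources already exist, Terraform can carry the existing state across during re-initialization (a sketch):

```bash
# Re-initialize and migrate local state into the newly configured backend
terraform init -migrate-state
```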