Conversation

Contributor

@galt-tr commented Jul 25, 2025

This pull request introduces significant updates to improve deployment workflows and configuration management for the UHRP project. Key changes include the addition of environment-specific configurations, new deployment workflows for AWS and EKS, and updates to Docker and Kubernetes configurations. Below is a summary of the most important changes grouped by theme:

Deployment Workflows

  • AWS Deployment Workflow: Added a new GitHub Actions workflow (.github/workflows/deploy-aws.yaml) to automate deployment to AWS ECS, including building Docker images, pushing to ECR, and updating ECS task definitions and Lambda functions. This workflow supports both staging and production environments.
  • EKS Deployment Workflow: Added a new GitHub Actions workflow (.github/workflows/deploy-eks.yaml) for deploying to AWS EKS. It includes building Docker images, updating Kubernetes manifests, and managing ConfigMaps and secrets.
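The review comments quoted below start partway into each workflow file, so the headers are not shown. As a point of reference, a header for a branch-triggered workflow like deploy-aws.yaml could look like the following sketch; the trigger branches and every env value here are illustrative assumptions, not taken from the PR:

```yaml
# Hypothetical header for .github/workflows/deploy-aws.yaml.
# All values below are placeholders; the snippet quoted in review starts at line 19.
name: Deploy to AWS

on:
  push:
    branches: [staging, production]   # assumed trigger branches

env:
  AWS_REGION: us-east-1               # assumed region
  ECR_REPOSITORY: uhrp-storage        # assumed repository name
  ECS_CLUSTER: uhrp-cluster           # assumed cluster name
  ECS_SERVICE: uhrp-service           # assumed service name
  TASK_DEFINITION_FAMILY: uhrp-storage
  LAMBDA_FUNCTION: uhrp-notifier      # assumed function name
```

The `environment: ${{ github.ref_name }}` line in the quoted job ties each branch to a same-named GitHub environment, which is what scopes the staging/production secrets referenced later.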

Configuration Management

  • Environment Configuration Example: Added a new .env.aws.example file with placeholders for AWS, server, and pricing configurations, providing a template for environment-specific settings.
  • S3 CORS Configuration: Added an aws/s3-cors.json file to define CORS rules for the S3 bucket, allowing all origins and common HTTP methods.
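Based on the description above (all origins, common HTTP methods), a typical S3 CORS document in the `put-bucket-cors` JSON shape would look something like this sketch; the actual rules in aws/s3-cors.json may differ:

```json
{
  "CORSRules": [
    {
      "AllowedOrigins": ["*"],
      "AllowedMethods": ["GET", "PUT", "POST", "HEAD"],
      "AllowedHeaders": ["*"],
      "MaxAgeSeconds": 3000
    }
  ]
}
```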

Docker and Kubernetes Updates

  • Docker Build Workflow: Updated the GitHub Actions workflow (.github/workflows/build.yaml) to build and push Docker images to GitHub Container Registry (GHCR) instead of Docker Hub.
  • Kubernetes Deployment: Added a Kubernetes deployment manifest (deploy/uhrp-server-deployment.yaml) for the UHRP server, specifying environment variables, container ports, and restart policies.
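To illustrate the shape of a manifest covering the three items named above (environment variables, container ports, restart policy), here is a minimal sketch; names, image, and port are assumptions, not the PR's actual values:

```yaml
# Illustrative shape of deploy/uhrp-server-deployment.yaml; values are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: uhrp-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: uhrp-server
  template:
    metadata:
      labels:
        app: uhrp-server
    spec:
      restartPolicy: Always            # default for Deployments
      containers:
        - name: uhrp-server
          image: ghcr.io/example/uhrp-server:latest   # hypothetical image
          ports:
            - containerPort: 8080      # assumed port
          env:
            - name: NODE_ENV
              value: production
```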

AWS Task Definition

  • ECS Task Definition: Added an aws/task-definition.json file defining the ECS task configuration, including container image, environment variables, secrets, and health checks.
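A minimal sketch of the four pieces named above (image, environment, secrets, health check) in ECS task-definition JSON; only the container name `uhrp-storage` is confirmed by the workflow below, everything else here is a placeholder:

```json
{
  "family": "uhrp-storage",
  "containerDefinitions": [
    {
      "name": "uhrp-storage",
      "image": "<account>.dkr.ecr.<region>.amazonaws.com/uhrp-storage:latest",
      "environment": [
        { "name": "NODE_ENV", "value": "production" }
      ],
      "secrets": [
        { "name": "SERVER_PRIVATE_KEY", "valueFrom": "arn:aws:ssm:<region>:<account>:parameter/uhrp/server-private-key" }
      ],
      "healthCheck": {
        "command": ["CMD-SHELL", "curl -f http://localhost:8080/ || exit 1"],
        "interval": 30,
        "timeout": 5,
        "retries": 3
      }
    }
  ]
}
```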

Comment on lines +19 to +139
name: Deploy to AWS
runs-on: ubuntu-latest
environment: ${{ github.ref_name }}

steps:
  - name: Checkout code
    uses: actions/checkout@v3

  - name: Configure AWS credentials
    uses: aws-actions/configure-aws-credentials@v2
    with:
      aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
      aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      aws-region: ${{ env.AWS_REGION }}

  - name: Login to Amazon ECR
    id: login-ecr
    uses: aws-actions/amazon-ecr-login@v1

  - name: Build, tag, and push image to Amazon ECR
    id: build-image
    env:
      ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
      IMAGE_TAG: ${{ github.ref_name }}-${{ github.sha }}
    run: |
      # Build the Docker image
      docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
      docker tag $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG $ECR_REGISTRY/$ECR_REPOSITORY:latest-${{ github.ref_name }}

      # Push both tags to ECR
      docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
      docker push $ECR_REGISTRY/$ECR_REPOSITORY:latest-${{ github.ref_name }}

      # Output the image URI
      echo "image=$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG" >> $GITHUB_OUTPUT

  - name: Download current task definition
    run: |
      aws ecs describe-task-definition \
        --task-definition ${{ env.TASK_DEFINITION_FAMILY }} \
        --query taskDefinition > task-definition.json

      # Remove fields that shouldn't be in the new definition
      jq 'del(.taskDefinitionArn, .revision, .status, .requiresAttributes, .compatibilities, .registeredAt, .registeredBy)' task-definition.json > task-definition-clean.json
      mv task-definition-clean.json task-definition.json

  - name: Update task definition with new image
    id: task-def
    uses: aws-actions/amazon-ecs-render-task-definition@v1
    with:
      task-definition: task-definition.json
      container-name: uhrp-storage
      image: ${{ steps.build-image.outputs.image }}
      environment-variables: |
        NODE_ENV=${{ github.ref_name == 'production' && 'production' || 'staging' }}
        AWS_BUCKET_NAME=${{ github.ref_name == 'production' && secrets.PROD_AWS_BUCKET_NAME || secrets.STAGING_AWS_BUCKET_NAME }}
        SERVER_URL=${{ github.ref_name == 'production' && secrets.PROD_SERVER_URL || secrets.STAGING_SERVER_URL }}
        CORS_ORIGIN=${{ github.ref_name == 'production' && secrets.PROD_CORS_ORIGIN || secrets.STAGING_CORS_ORIGIN }}
        PER_BYTE_PRICE=${{ github.ref_name == 'production' && secrets.PROD_PER_BYTE_PRICE || secrets.STAGING_PER_BYTE_PRICE }}
        BASE_PRICE=${{ github.ref_name == 'production' && secrets.PROD_BASE_PRICE || secrets.STAGING_BASE_PRICE }}
        BSV_NETWORK=${{ github.ref_name == 'production' && 'mainnet' || 'testnet' }}
        MIN_HOSTING_MINUTES=${{ github.ref_name == 'production' && secrets.PROD_MIN_HOSTING_MINUTES || secrets.STAGING_MIN_HOSTING_MINUTES }}
        WALLET_STORAGE_URL=${{ github.ref_name == 'production' && secrets.PROD_WALLET_STORAGE_URL || secrets.STAGING_WALLET_STORAGE_URL }}

  - name: Deploy Amazon ECS task definition
    uses: aws-actions/amazon-ecs-deploy-task-definition@v1
    with:
      task-definition: ${{ steps.task-def.outputs.task-definition }}
      service: ${{ env.ECS_SERVICE }}
      cluster: ${{ env.ECS_CLUSTER }}
      wait-for-service-stability: true

  - name: Package and deploy Lambda function
    run: |
      # Package the notifier
      cd notifier
      npm ci --production
      zip -r ../notifier.zip .
      cd ..

      # Update Lambda function code
      aws lambda update-function-code \
        --function-name ${{ env.LAMBDA_FUNCTION }} \
        --zip-file fileb://notifier.zip

      # Update Lambda environment variables
      aws lambda update-function-configuration \
        --function-name ${{ env.LAMBDA_FUNCTION }} \
        --environment Variables="{
          NODE_ENV=${{ github.ref_name == 'production' && 'production' || 'staging' }},
          SERVER_PRIVATE_KEY=${{ github.ref_name == 'production' && secrets.PROD_SERVER_PRIVATE_KEY || secrets.STAGING_SERVER_PRIVATE_KEY }},
          BSV_NETWORK=${{ github.ref_name == 'production' && 'mainnet' || 'testnet' }},
          AWS_BUCKET_NAME=${{ github.ref_name == 'production' && secrets.PROD_AWS_BUCKET_NAME || secrets.STAGING_AWS_BUCKET_NAME }}
        }"

      # Wait for configuration update to complete
      aws lambda wait function-updated \
        --function-name ${{ env.LAMBDA_FUNCTION }}

  - name: Verify deployment
    run: |
      echo "🚀 Deployment completed!"
      echo "ECS Service: ${{ env.ECS_SERVICE }}"
      echo "Lambda Function: ${{ env.LAMBDA_FUNCTION }}"
      echo "Image: ${{ steps.build-image.outputs.image }}"

      # Get service info
      aws ecs describe-services \
        --cluster ${{ env.ECS_CLUSTER }} \
        --services ${{ env.ECS_SERVICE }} \
        --query 'services[0].{desiredCount:desiredCount,runningCount:runningCount,pendingCount:pendingCount}' \
        --output table

  - name: Send deployment notification
    if: always()
    run: |
      if [ "${{ job.status }}" == "success" ]; then
        echo "✅ Deployment to ${{ github.ref_name }} succeeded"
      else
        echo "❌ Deployment to ${{ github.ref_name }} failed"
      fi
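The repeated `${{ github.ref_name == 'production' && A || B }}` expressions in this workflow act as a branch-based ternary: production gets mainnet values and its own secrets, anything else gets staging/testnet values. The same mapping in plain shell, purely for illustration (the helper function name is ours, not part of the PR):

```shell
#!/usr/bin/env bash
# Sketch of the branch-to-value mapping expressed by the Actions ternaries.
# "$1" stands in for github.ref_name.
select_network() {
  if [ "$1" = "production" ]; then
    echo "mainnet"
  else
    echo "testnet"
  fi
}

select_network production   # prints "mainnet"
select_network staging      # prints "testnet"
```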

Check warning

Code scanning / CodeQL — Workflow does not contain permissions (Medium)

Actions job or workflow does not limit the permissions of the GITHUB_TOKEN. Consider setting an explicit permissions block, using the following as a minimal starting point: {contents: read}

Copilot Autofix (AI, 4 months ago)

To fix the issue, we will add a permissions block at the workflow level to explicitly define the minimal permissions required for the workflow. Based on the actions used in the workflow, the following permissions are necessary:

  • contents: read for accessing the repository's contents.
  • secrets: read for accessing secrets used in the workflow.

This change will ensure that the workflow adheres to the principle of least privilege.


Suggested changeset 1: .github/workflows/deploy-aws.yaml

Autofix patch — run the following command in your local git repository to apply this patch:
cat << 'EOF' | git apply
diff --git a/.github/workflows/deploy-aws.yaml b/.github/workflows/deploy-aws.yaml
--- a/.github/workflows/deploy-aws.yaml
+++ b/.github/workflows/deploy-aws.yaml
@@ -2,2 +2,6 @@
 
+permissions:
+  contents: read
+  secrets: read
+
 on:
EOF
Copilot is powered by AI and may make mistakes. Always verify output.
Comment on lines +17 to +181
name: Deploy to EKS
runs-on: ubuntu-latest
environment: ${{ github.ref_name }}

steps:
  - name: Checkout code
    uses: actions/checkout@v3

  - name: Configure AWS credentials
    uses: aws-actions/configure-aws-credentials@v2
    with:
      aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
      aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      aws-region: ${{ env.AWS_REGION }}

  - name: Login to Amazon ECR
    id: login-ecr
    uses: aws-actions/amazon-ecr-login@v1

  - name: Build and push storage server image
    id: build-storage-server
    env:
      ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
      IMAGE_TAG: ${{ github.ref_name }}-${{ github.sha }}
    run: |
      # Build and push storage server
      docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
      docker tag $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG $ECR_REGISTRY/$ECR_REPOSITORY:latest-${{ github.ref_name }}
      docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
      docker push $ECR_REGISTRY/$ECR_REPOSITORY:latest-${{ github.ref_name }}
      echo "storage_image=$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG" >> $GITHUB_OUTPUT

  - name: Build and push event handler image
    id: build-event-handler
    env:
      ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
      IMAGE_TAG: ${{ github.ref_name }}-${{ github.sha }}
    run: |
      # Build and push event handler
      cd k8s/s3-event-handler
      docker build -t $ECR_REGISTRY/$ECR_REPOSITORY-event-handler:$IMAGE_TAG .
      docker tag $ECR_REGISTRY/$ECR_REPOSITORY-event-handler:$IMAGE_TAG $ECR_REGISTRY/$ECR_REPOSITORY-event-handler:latest-${{ github.ref_name }}
      docker push $ECR_REGISTRY/$ECR_REPOSITORY-event-handler:$IMAGE_TAG
      docker push $ECR_REGISTRY/$ECR_REPOSITORY-event-handler:latest-${{ github.ref_name }}
      echo "event_handler_image=$ECR_REGISTRY/$ECR_REPOSITORY-event-handler:$IMAGE_TAG" >> $GITHUB_OUTPUT

  - name: Setup kubectl
    uses: aws-actions/setup-kubectl@v3
    with:
      version: 'v1.28.0'

  - name: Update kubeconfig
    run: |
      aws eks update-kubeconfig --region ${{ env.AWS_REGION }} --name ${{ env.EKS_CLUSTER_NAME }}

  - name: Create namespace if not exists
    run: |
      kubectl create namespace ${{ env.NAMESPACE }} --dry-run=client -o yaml | kubectl apply -f -

  - name: Update ConfigMap
    run: |
      # Update configmap with environment-specific values
      kubectl apply -f k8s/configmap.yaml

      # Patch configmap with dynamic values
      kubectl patch configmap storage-server-config -n ${{ env.NAMESPACE }} --type merge -p '
      {
        "data": {
          "NODE_ENV": "${{ github.ref_name == 'production' && 'production' || 'staging' }}",
          "AWS_BUCKET_NAME": "${{ github.ref_name == 'production' && secrets.PROD_AWS_BUCKET_NAME || secrets.STAGING_AWS_BUCKET_NAME }}",
          "SERVER_URL": "${{ github.ref_name == 'production' && secrets.PROD_SERVER_URL || secrets.STAGING_SERVER_URL }}",
          "CORS_ORIGIN": "${{ github.ref_name == 'production' && secrets.PROD_CORS_ORIGIN || secrets.STAGING_CORS_ORIGIN }}",
          "BSV_NETWORK": "${{ github.ref_name == 'production' && 'mainnet' || 'testnet' }}",
          "SQS_QUEUE_URL": "${{ github.ref_name == 'production' && secrets.PROD_SQS_QUEUE_URL || secrets.STAGING_SQS_QUEUE_URL }}"
        }
      }'

  - name: Update Secrets
    run: |
      # Check if secret exists, update or create
      if kubectl get secret uhrp-secrets -n ${{ env.NAMESPACE }} >/dev/null 2>&1; then
        echo "Updating existing secret"
      else
        echo "Creating new secret"
        kubectl create secret generic uhrp-secrets \
          --namespace=${{ env.NAMESPACE }} \
          --from-literal=server-private-key="${{ github.ref_name == 'production' && secrets.PROD_SERVER_PRIVATE_KEY || secrets.STAGING_SERVER_PRIVATE_KEY }}" \
          --from-literal=admin-token="${{ github.ref_name == 'production' && secrets.PROD_ADMIN_TOKEN || secrets.STAGING_ADMIN_TOKEN }}" \
          --from-literal=bugsnag-api-key="${{ github.ref_name == 'production' && secrets.PROD_BUGSNAG_API_KEY || secrets.STAGING_BUGSNAG_API_KEY }}"
      fi

  - name: Deploy storage server
    run: |
      # Update deployment with new image
      kubectl set image deployment/storage-server \
        storage-server=${{ steps.build-storage-server.outputs.storage_image }} \
        -n ${{ env.NAMESPACE }}

      # Apply other manifests
      kubectl apply -f k8s/service.yaml
      kubectl apply -f k8s/ingress.yaml
      kubectl apply -f k8s/hpa.yaml

  - name: Deploy event handler
    run: |
      # Update deployment with new image
      if kubectl get deployment event-handler -n ${{ env.NAMESPACE }} >/dev/null 2>&1; then
        kubectl set image deployment/event-handler \
          event-handler=${{ steps.build-event-handler.outputs.event_handler_image }} \
          -n ${{ env.NAMESPACE }}
      else
        # First deployment - apply the manifest then update image
        kubectl apply -f k8s/s3-event-handler/deployment.yaml
        kubectl set image deployment/event-handler \
          event-handler=${{ steps.build-event-handler.outputs.event_handler_image }} \
          -n ${{ env.NAMESPACE }}
      fi

  - name: Wait for rollout
    run: |
      kubectl rollout status deployment/storage-server -n ${{ env.NAMESPACE }} --timeout=300s
      kubectl rollout status deployment/event-handler -n ${{ env.NAMESPACE }} --timeout=300s

  - name: Verify deployment
    run: |
      echo "🚀 Deployment completed!"
      echo "Storage Server Image: ${{ steps.build-storage-server.outputs.storage_image }}"
      echo "Event Handler Image: ${{ steps.build-event-handler.outputs.event_handler_image }}"

      # Get deployment status
      kubectl get deployments -n ${{ env.NAMESPACE }}
      kubectl get pods -n ${{ env.NAMESPACE }}
      kubectl get ingress -n ${{ env.NAMESPACE }}

      # Get ingress URL
      INGRESS_URL=$(kubectl get ingress storage-server-ingress -n ${{ env.NAMESPACE }} -o jsonpath='{.status.loadBalancer.ingress[0].hostname}' || echo "Pending...")
      echo "Ingress URL: $INGRESS_URL"

  - name: Run smoke tests
    run: |
      # Wait for ingress to be ready
      echo "Waiting for ingress to be ready..."
      for i in {1..30}; do
        INGRESS_URL=$(kubectl get ingress storage-server-ingress -n ${{ env.NAMESPACE }} -o jsonpath='{.status.loadBalancer.ingress[0].hostname}' 2>/dev/null || echo "")
        if [ -n "$INGRESS_URL" ] && [ "$INGRESS_URL" != "Pending..." ]; then
          echo "Ingress ready at: $INGRESS_URL"
          break
        fi
        echo "Waiting for ingress... ($i/30)"
        sleep 10
      done

      # Basic health check
      if [ -n "$INGRESS_URL" ]; then
        curl -f -s -o /dev/null -w "%{http_code}" "http://$INGRESS_URL/" || echo "Health check pending..."
      fi

  - name: Notify deployment status
    if: always()
    run: |
      if [ "${{ job.status }}" == "success" ]; then
        echo "✅ Successfully deployed to ${{ github.ref_name }} environment"
      else
        echo "❌ Deployment to ${{ github.ref_name }} failed"
      fi

Check warning

Code scanning / CodeQL — Workflow does not contain permissions (Medium)

Actions job or workflow does not limit the permissions of the GITHUB_TOKEN. Consider setting an explicit permissions block, using the following as a minimal starting point: {contents: read}

Copilot Autofix (AI, 4 months ago)

To fix the issue, we will add a permissions block at the root level of the workflow. This block will define the minimal permissions required for the workflow to function correctly. Based on the actions used in the workflow, the following permissions are necessary:

  • contents: read for accessing the repository contents.
  • packages: write for pushing Docker images to Amazon ECR.
  • secrets: read for accessing GitHub secrets.

The permissions block will be added after the name field at the top of the workflow file.


Suggested changeset 1: .github/workflows/deploy-eks.yaml

Autofix patch — run the following command in your local git repository to apply this patch:
cat << 'EOF' | git apply
diff --git a/.github/workflows/deploy-eks.yaml b/.github/workflows/deploy-eks.yaml
--- a/.github/workflows/deploy-eks.yaml
+++ b/.github/workflows/deploy-eks.yaml
@@ -1,2 +1,6 @@
 name: Deploy to EKS
+permissions:
+  contents: read
+  packages: write
+  secrets: read
 
EOF
@galt-tr galt-tr changed the title AWS + EKS Migration Support [WIP] AWS + EKS Migration Support Jul 25, 2025