Scaling up and down lambdas (#103)
* Scaling up and down lambdas done, but still need to test on infrastructure

* Moved lambda to common folder as it will be used across instances. Fixed formatting and a few functional issues

* Updated to use a single lambda with rules to alternate functionality. Added an environment parameter for staging to allow force-redeploys that ignore the cache

* Updated to combine lambdas into one and use CloudWatch Events to handle the difference.

* Updated handler and timeout values

* Update infrastructure/environments/cloudformation/full/common/lambda/ecs_scale/README.md

Co-authored-by: Christina Moore <30839329+christina-moore@users.noreply.github.com>

* Updated testing setup, added actions to auto test lambdas

---------

Co-authored-by: Christina Moore <30839329+christina-moore@users.noreply.github.com>
cyramic and christina-moore authored Oct 11, 2024
1 parent c9bd8b3 commit 547d018
Showing 16 changed files with 640 additions and 5 deletions.
4 changes: 2 additions & 2 deletions .github/workflows/build_dagster.yml
@@ -14,8 +14,8 @@ on:
     - main

 jobs:
-  push_to_registry:
-    name: Push Docker image to Docker Hub
+  push_to_staging:
+    name: Push Latest Docker image to Docker Hub
     runs-on: ubuntu-latest
     steps:
       - name: Check out the repo
4 changes: 2 additions & 2 deletions .github/workflows/build_release.yml
@@ -14,8 +14,8 @@ on:
     - '*'

 jobs:
-  push_to_registry:
-    name: Push Docker image to Docker Hub
+  push_to_prod:
+    name: Push Tagged Docker image to Docker Hub
     runs-on: ubuntu-latest
     steps:
       - name: Check out the repo
47 changes: 47 additions & 0 deletions .github/workflows/test_lambdas.yml
@@ -0,0 +1,47 @@
# This workflow uses actions that are not certified by GitHub.
# They are provided by a third-party and are governed by
# separate terms of service, privacy policy, and support
# documentation.

# GitHub recommends pinning actions to a commit SHA.
# To get a newer version, you will need to update the SHA.
# You can also reference a tag or branch, but the action may change without warning.

name: Publish Docker image
on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

jobs:
  test_lambda_functions:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: [ "3.11" ]
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install poetry
        uses: abatilo/actions-poetry@v2
      - name: Setup a local virtual environment (if no poetry.toml file)
        run: |
          poetry config virtualenvs.create true --local
          poetry config virtualenvs.in-project true --local
      - uses: actions/cache@v3
        name: Define a cache for the virtual environment based on the dependencies lock file
        with:
          path: ./.venv
          key: venv-${{ hashFiles('poetry.lock') }}
      - name: Install the project dependencies
        working-directory: ./infrastructure/environments/cloudformation/full/common/lambda/ecs_scale
        run: poetry install
      - name: Run the automated tests (for example)
        working-directory: ./infrastructure/environments/cloudformation/full/common/lambda/ecs_scale
        run: poetry run pytest -v
Empty file.
@@ -0,0 +1 @@
3.11.9
@@ -0,0 +1,28 @@
# Scaling Lambda

## Manual Process
To package this, do the following:

1. If it doesn't exist, make the package directory
```commandline
mkdir package
```
2. Next, we want to ensure all the libraries are present in the package:
```commandline
poetry run pip install --target ./package boto3
```
3. Zip Everything Up
```commandline
cd package
zip -r ../ecs-scaling-lambda.zip .
```
4. Add the lambda function handler to the zip
```commandline
cd ../ecs_scale
zip ../ecs-scaling-lambda.zip ./handler.py
```
5. The zip should now have a flat directory structure, ready to be uploaded to S3
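
The manual steps above can be sketched with only the standard library; this is an illustrative helper (the function name and paths are assumptions, not part of the commit), with the dependencies assumed to have already been installed into `./package` as in step 2:
```python
import os
import zipfile


def build_flat_zip(package_dir: str, handler_path: str, zip_path: str) -> list[str]:
    """Zip the contents of package_dir at the archive root (steps 3),
    then add handler.py alongside them (step 4). Returns the entry names."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(package_dir):
            for name in files:
                full = os.path.join(root, name)
                # arcname relative to package_dir keeps the layout flat,
                # i.e. no leading "package/" prefix inside the zip
                zf.write(full, os.path.relpath(full, package_dir))
        zf.write(handler_path, os.path.basename(handler_path))
    with zipfile.ZipFile(zip_path) as zf:
        return zf.namelist()
```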

## Automated Process
To do: the above process could run in a GitHub Actions workflow, with an upload step pushing the zip to S3 and redeploying it to the Lambda.
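
A minimal sketch of such a workflow, under stated assumptions (the trigger path comes from this repo; the bucket, function name, secret names, and region are placeholders, not from the commit):
```yaml
name: Package ECS scaling lambda
on:
  push:
    paths:
      - 'infrastructure/environments/cloudformation/full/common/lambda/ecs_scale/**'

jobs:
  package_and_upload:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build the flat zip (mirrors the manual steps)
        working-directory: ./infrastructure/environments/cloudformation/full/common/lambda/ecs_scale
        run: |
          mkdir -p package
          pip install --target ./package boto3
          (cd package && zip -r ../ecs-scaling-lambda.zip .)
          zip ecs-scaling-lambda.zip handler.py
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1   # placeholder region
      - name: Upload to S3 and redeploy the Lambda
        working-directory: ./infrastructure/environments/cloudformation/full/common/lambda/ecs_scale
        run: |
          aws s3 cp ecs-scaling-lambda.zip "s3://$LAMBDA_BUCKET/ecs-scaling-lambda.zip"
          aws lambda update-function-code --function-name "$LAMBDA_NAME" \
            --s3-bucket "$LAMBDA_BUCKET" --s3-key ecs-scaling-lambda.zip
        env:
          LAMBDA_BUCKET: my-lambda-artifacts   # placeholder bucket
          LAMBDA_NAME: ecs-scaler              # placeholder function name
```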
Empty file.
@@ -0,0 +1,49 @@
import boto3
import os
import logging


def lambda_handler(event, context):
    # Will be fed by the event rule: tells us whether the pipeline is to be
    # turned on or off
    if event["pipelineStatus"] == "true":
        pipeline_count = 1
    elif event["pipelineStatus"] == "false":
        pipeline_count = 0
    else:
        raise ValueError(f"Unexpected pipelineStatus: {event['pipelineStatus']!r}")

    # On staging systems, we want to make sure to reload the pipeline every time.
    # On production systems, we should use the cached version unless there's an
    # infrastructure update.
    force_new_deployment = os.environ.get("ENVIRONMENT") != "prod"

    if logging.getLogger().hasHandlers():
        # The Lambda environment pre-configures a handler logging to stderr.
        # If a handler is already configured, `.basicConfig` does not execute.
        # Thus we set the level directly.
        logging.getLogger().setLevel(logging.INFO)
    else:
        logging.basicConfig(level=logging.INFO)

    logger = logging.getLogger("ecs_scaler")

    ecs_client = boto3.client("ecs")
    cluster = os.environ.get("ECS_CLUSTER_NAME")
    dagit_service = os.environ.get("ECS_DAGIT_SERVICE_NAMES")
    daemon_service = os.environ.get("ECS_DAEMON_SERVICE_NAMES")
    code_service = os.environ.get("ECS_CODE_SERVER_SERVICE_NAMES")

    for service in (dagit_service, daemon_service, code_service):
        try:
            response = ecs_client.update_service(
                cluster=cluster,
                service=service,
                desiredCount=pipeline_count,
                forceNewDeployment=force_new_deployment,
            )
            logger.info(
                f"Successfully scaled Service {service} to {pipeline_count}: {response}"
            )
        except Exception as e:
            logger.error(f"Could not Scale Service {service} to {pipeline_count}: {e}")
            continue


if __name__ == "__main__":
    # lambda_handler takes (event, context); pass a sample event for local runs
    print(lambda_handler({"pipelineStatus": "false"}, None))
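
The event contract above (a `pipelineStatus` string supplied by each CloudWatch Events rule) can be exercised locally without AWS access by standing in for the ECS client with a mock; this sketch is an illustration of that contract, not part of the commit, and the cluster and service names are assumptions:
```python
from unittest import mock


def desired_count(event):
    # Mirrors the pipelineStatus branch at the top of lambda_handler
    if event["pipelineStatus"] == "true":
        return 1
    if event["pipelineStatus"] == "false":
        return 0
    raise ValueError(event["pipelineStatus"])


# MagicMock stands in for boto3.client("ecs"); it records the call so we
# can inspect the exact arguments the handler would send.
ecs_client = mock.MagicMock()
ecs_client.update_service(
    cluster="dagster-cluster",  # assumed cluster name
    service="dagit",            # assumed service name
    desiredCount=desired_count({"pipelineStatus": "true"}),
    forceNewDeployment=True,
)
sent = ecs_client.update_service.call_args.kwargs
```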