Dockerfile (2 changes: 1 addition & 1 deletion)
@@ -1,4 +1,4 @@
-FROM python:3.6
+FROM python:3.13

RUN apt-get update
RUN apt-get install -y jq zip
README.md (4 changes: 4 additions & 0 deletions)
@@ -29,6 +29,10 @@ Stored as secrets or env vars, doesn't matter. But also please don't put your AW
- Partial ARN - `123456789012:function:my-function`
- `requirements_txt`
The name/path for the `requirements.txt` file. Defaults to `requirements.txt`.
+- `use_s3`
+Whether to upload the dependency layer zip to S3 (required if the zip exceeds 50MB) instead of uploading it directly to Lambda. Defaults to `false`.
+- `s3_bucket_name`
+The S3 bucket name used if you are uploading the dependency layer to S3.


### Example workflow
action.yml (8 changes: 8 additions & 0 deletions)
@@ -12,6 +12,14 @@ inputs:
lambda_function_name:
description: The Lambda function name. Check the AWS docs/readme for examples.
required: true
+use_s3:
+description: Whether to use S3 (true) or not (false) for the dependencies layer upload.
+required: false
+default: 'false'
+s3_bucket_name:
+description: The S3 bucket name for the dependencies layer upload.
+required: false
+default: no-bucket-name-here
Owner commented:
s3_bucket_name is then conditionally required, right? If use_s3 is true, s3_bucket_name must be set. If the user does not set it, the action will try to upload to no-bucket-name-here. I don't think this should be supported; the error messages might not make sense to the user, and there is a chance, however tiny, that someone owns this bucket.

Possible solutions:

- Remove use_s3 and keep only s3_bucket_name, which would have no default; if it is set, we upload to S3
- Add a conditional in the bash script that skips the S3 upload when s3_bucket_name is not explicitly set (see the sketch after this comment)

Either way I don't think we should have a default value for this. What do you think?
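A minimal sketch of that idea, reusing the commands already in entrypoint.sh and keying the decision off whether the bucket name was provided at all (no use_s3 flag, no default bucket); illustrative only, not the final implementation:

```bash
# Sketch: upload the layer zip to S3 only when a bucket name was explicitly set.
# INPUT_S3_BUCKET_NAME and INPUT_LAMBDA_LAYER_ARN come from the action inputs.
publish_dependencies_as_layer(){
  if [ -n "${INPUT_S3_BUCKET_NAME}" ]
  then
    echo "Uploading dependencies to S3..."
    aws s3 cp dependencies.zip "s3://${INPUT_S3_BUCKET_NAME}/dependencies.zip"
    echo "Publishing dependencies from S3 as a layer..."
    local result=$(aws lambda publish-layer-version --layer-name "${INPUT_LAMBDA_LAYER_ARN}" --content S3Bucket="${INPUT_S3_BUCKET_NAME}",S3Key=dependencies.zip)
  else
    echo "Publishing dependencies as a layer..."
    local result=$(aws lambda publish-layer-version --layer-name "${INPUT_LAMBDA_LAYER_ARN}" --zip-file fileb://dependencies.zip)
  fi
  LAYER_VERSION=$(jq '.Version' <<< "$result")
  rm -rf python
  rm dependencies.zip
}
```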

runs:
using: 'docker'
image: 'Dockerfile'
entrypoint.sh (29 changes: 26 additions & 3 deletions)
@@ -1,6 +1,20 @@
#!/bin/bash
set -e

+poll_command="aws lambda get-function --function-name ${INPUT_LAMBDA_FUNCTION_NAME} --query Configuration.[State,LastUpdateStatus]"
+
+wait_state(){
+echo "Waiting on function state update..."
+until ${poll_command} | grep "Active"
+do
+sleep 1
+done
+until ${poll_command} | grep "Successful"
+do
+sleep 1
+done
+}
+
install_zip_dependencies(){
echo "Installing and zipping dependencies..."
mkdir python
@@ -9,16 +23,24 @@ install_zip_dependencies(){
}

publish_dependencies_as_layer(){
echo "Publishing dependencies as a layer..."
local result=$(aws lambda publish-layer-version --layer-name "${INPUT_LAMBDA_LAYER_ARN}" --zip-file fileb://dependencies.zip)
if [ "$INPUT_USE_S3" = true ]
then
echo "Uploading dependencies to S3..."
aws s3 cp dependencies.zip s3://"${INPUT_S3_BUCKET_NAME}"/dependencies.zip
echo "Publishing dependencies from S3 as a layer..."
local result=$(aws lambda publish-layer-version --layer-name "${INPUT_LAMBDA_LAYER_ARN}" --content S3Bucket="${INPUT_S3_BUCKET_NAME}",S3Key=dependencies.zip)
Owner commented:
I think the file key on S3 should be either configurable or derived from the lambda layer ARN; as it stands, if the user has more than one lambda layer they would need a separate bucket for each, which is not something we should force imo.
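For illustration only, a sketch of the derived-key idea for the S3 branch of publish_dependencies_as_layer, assuming a hypothetical optional INPUT_S3_OBJECT_KEY input as the configurable override:

```bash
# Hypothetical: prefer an explicit key from an (assumed) INPUT_S3_OBJECT_KEY input,
# otherwise derive one from the layer name (the text after the last colon in the ARN),
# so several layers can share a single bucket.
s3_key="${INPUT_S3_OBJECT_KEY:-${INPUT_LAMBDA_LAYER_ARN##*:}-dependencies.zip}"
aws s3 cp dependencies.zip "s3://${INPUT_S3_BUCKET_NAME}/${s3_key}"
result=$(aws lambda publish-layer-version --layer-name "${INPUT_LAMBDA_LAYER_ARN}" --content S3Bucket="${INPUT_S3_BUCKET_NAME}",S3Key="${s3_key}")
```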

+else
+echo "Publishing dependencies as a layer..."
+local result=$(aws lambda publish-layer-version --layer-name "${INPUT_LAMBDA_LAYER_ARN}" --zip-file fileb://dependencies.zip)
+fi
LAYER_VERSION=$(jq '.Version' <<< "$result")
rm -rf python
rm dependencies.zip
}

publish_function_code(){
echo "Deploying the code itself..."
-zip -r code.zip . -x \*.git\*
+zip -r code.zip *.py -x \*.git\*
aws lambda update-function-code --function-name "${INPUT_LAMBDA_FUNCTION_NAME}" --zip-file fileb://code.zip
}

@@ -31,6 +53,7 @@ deploy_lambda_function(){
install_zip_dependencies
publish_dependencies_as_layer
publish_function_code
+wait_state
update_function_layers
}
