
Conversation

@rosenbergj

No description provided.

echo "Uploading dependencies to S3..."
aws s3 cp dependencies.zip s3://"${INPUT_S3_BUCKET_NAME}"/dependencies.zip
echo "Publishing dependencies from S3 as a layer..."
local result=$(aws lambda publish-layer-version --layer-name "${INPUT_LAMBDA_LAYER_ARN}" --content S3Bucket="${INPUT_S3_BUCKET_NAME}",S3Key=dependencies.zip)
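For context (not part of the PR diff): publish-layer-version returns a JSON payload that includes the new LayerVersionArn. A minimal sketch of extracting it from result, assuming a sample payload and python3 on PATH (the ARN and account number here are made up):

```shell
# Sketch: the publish-layer-version response is JSON; extract
# LayerVersionArn without a jq dependency by using python3.
# The payload below is a hand-written sample, not real output.
result='{"LayerVersionArn":"arn:aws:lambda:us-east-1:123456789012:layer:my-layer:4","Version":4}'

layer_version_arn=$(printf '%s' "$result" \
  | python3 -c 'import json,sys; print(json.load(sys.stdin)["LayerVersionArn"])')

echo "$layer_version_arn"
```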
Owner

I think the file key on S3 should be either configurable or derived from the lambda layer ARN; if the user has more than one lambda layer, they would have to use a different bucket for each, which is not something we should force, imo.
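The "derived from the ARN" option could be sketched like this, assuming the input is either a bare layer name or a full ARN (the `${var##*:}` expansion keeps only the text after the last colon, which is the layer name in an ARN and the whole string otherwise):

```shell
# Hypothetical sketch: derive a per-layer S3 key from the layer
# name/ARN so several layers can share one bucket.
INPUT_LAMBDA_LAYER_ARN="arn:aws:lambda:us-east-1:123456789012:layer:my-layer"

# Strip everything up to and including the last ':' to get the layer name.
layer_name="${INPUT_LAMBDA_LAYER_ARN##*:}"
s3_key="${layer_name}/dependencies.zip"

echo "$s3_key"   # my-layer/dependencies.zip
```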

s3_bucket_name:
description: The S3 bucket name for the dependencies layer upload.
required: false
default: no-bucket-name-here
Owner

s3_bucket_name is then conditionally required, right? If use_s3 is true, s3_bucket_name must be set. If the user does not set it, the action will try to upload to no-bucket-name-here. I don't think this should be supported; the error messages might not make sense to the user, and there is a chance, however tiny, that someone actually owns this bucket.

Possible solutions:

  • Remove use_s3 and keep only s3_bucket_name, which would have no default; if it is set, we upload to S3
  • Add a conditional in the bash script that skips the S3 upload when s3_bucket_name is not explicitly set

Either way, I don't think we should have a default value for this. What do you think?
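The second option above (a conditional guard in the bash script) could look roughly like this; a sketch only, with the aws call commented out and INPUT_S3_BUCKET_NAME standing in for the action input:

```shell
# Hypothetical sketch of the second option: skip the S3 upload entirely
# when s3_bucket_name is not explicitly set (i.e. the input is empty).
upload_dependencies() {
  if [ -z "${INPUT_S3_BUCKET_NAME}" ]; then
    echo "No S3 bucket configured; skipping S3 upload."
    return 0
  fi
  echo "Uploading dependencies to S3..."
  # aws s3 cp dependencies.zip s3://"${INPUT_S3_BUCKET_NAME}"/dependencies.zip
}

INPUT_S3_BUCKET_NAME=""   # input not set by the user
upload_dependencies
```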
