How to update AWS Lambda using a file stored in S3


Borja López Altarriba

Feb 14, 2017, 8:03:43 AM
to Terraform
I'm using Terraform to create an AWS Lambda function from a file (a jar, always uploaded under the same name) stored in S3. I also publish a version (publish = true) and create an alias that points to the newly created version. My code is:


resource "aws_lambda_function" "lambda-function" {
    function_name = "${var.function_name}"
    description = "${var.description}"
    runtime = "${var.runtime}"
    s3_bucket = "${var.s3_bucket}"
    s3_key = "${var.s3_key}"
    role = "${vare.role_arn}"
    handler = "${var.handler}"
    memory_size = "${var.memory_size}"
    timeout = "${var.timeout}"
    publish = true
}

resource "aws_lambda_alias" "lambda-alias" {
    depends_on = ["aws_lambda_function.lambda-function"]
    name = "${var.environment}"
    description = "${var.environment} alias"
    function_name = "${aws_lambda_function.lambda-function.arn}"
    function_version = "${aws_lambda_function.lambda-function.version}"
}


The problem is that if the file stored in S3 changes (it is updated under the same name) and I run Terraform again, nothing changes. Terraform is not able to see that the stored file has changed.

I tried with a versioned S3 bucket, but that doesn't work either.
I see that another option is to use source_code_hash, but it is not available when the 's3_*' fields are used.

Could you please explain how you are doing AWS Lambda versioning? I need help.

Thank you.


Franck Ratier

Feb 15, 2017, 3:42:40 AM
to Terraform
You can create another Lambda function (triggered by changes in your S3 bucket) to reload your main function.
There's an example in the 'Bonus' section here: https://aws.amazon.com/blogs/compute/new-deployment-options-for-aws-lambda/
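
A minimal sketch of what the handler of that redeploy function could look like (the function and alias names here are placeholders, not taken from the blog post):

# Sketch of the redeploy Lambda described above: triggered by an S3
# ObjectCreated event, it republishes the main function from the updated
# object and moves the alias to the new version. Names are placeholders.
from urllib.parse import unquote_plus

import boto3

lambda_client = boto3.client("lambda")

FUNCTION_NAME = "my-main-function"  # placeholder
ALIAS_NAME = "PROD"                 # placeholder

def handler(event, context):
    s3_record = event["Records"][0]["s3"]
    bucket = s3_record["bucket"]["name"]
    key = unquote_plus(s3_record["object"]["key"])  # event keys are URL-encoded

    # Reload the code from S3 and publish a new version
    response = lambda_client.update_function_code(
        FunctionName=FUNCTION_NAME,
        S3Bucket=bucket,
        S3Key=key,
        Publish=True,
    )

    # Point the alias at the freshly published version
    lambda_client.update_alias(
        FunctionName=FUNCTION_NAME,
        Name=ALIAS_NAME,
        FunctionVersion=response["Version"],
    )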

Borja López Altarriba

Feb 15, 2017, 10:36:35 AM
to Terraform
Hi Franck!

Thank you for your reply; your suggestion is a good option.

In the end, the only way to do this via Terraform is to use a file in S3 with a different name. That is not an option for me, so I finally decided to update the AWS Lambda function code via the AWS CLI. The commands (bash) I'm going to use are:
 
# Update the lambda function code and capture the new version number
# (--output text strips the JSON quotes around the version string)
LAMBDA_VERSION=$(aws lambda update-function-code --function-name ${FUNCTION_NAME} --s3-bucket ${S3_BUCKET} --s3-key ${S3_KEY} --publish --profile ${PROFILE} --query Version --output text)

# Update the alias to point to the new version
aws lambda update-alias --function-name ${FUNCTION_NAME} --name ${ALIAS_NAME} --function-version ${LAMBDA_VERSION} --profile ${PROFILE}


Melroy Dmello

Jun 5, 2018, 5:32:55 PM
to Terraform
Might be a little late, but hopefully this helps others who visit this thread.

source_code_hash does work with deployment packages stored in S3; however, the file you hash for source_code_hash must be the local copy of the same zip that gets uploaded to S3.
So the following configuration works fine:

s3_bucket = "example_bucket"
s3_key    = "example_folder/example_deployment_package.zip"
source_code_hash = "${base64sha256(file("${path.module}/../lambdas/xyz/example_deployment_package.zip"))}"  # local path from where zip is uploaded to s3

sam...@whistle.com

Sep 20, 2018, 8:38:13 PM
to Terraform
I'm running into a similar issue, and because the build runs on a continuous integration server, calculating the source hash from a local file is not possible. Were you able to solve this problem using just s3_bucket, s3_key and source_code_hash?
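
For reference, Terraform's base64sha256(file(...)) is just the base64-encoded SHA-256 of the file bytes, so in principle a CI step could compute the same value straight from the object in S3 and feed it to source_code_hash through a variable. A rough sketch (the bucket, key and script name are placeholders):

# Compute the same value as Terraform's base64sha256() for an object in S3.
# CI could pass it in with something like:
#   terraform apply -var="code_hash=$(python3 s3_hash.py)"   # hypothetical wiring
import base64
import hashlib

import boto3

def s3_object_base64sha256(bucket, key):
    body = boto3.client("s3").get_object(Bucket=bucket, Key=key)["Body"].read()
    return base64.b64encode(hashlib.sha256(body).digest()).decode()

if __name__ == "__main__":
    print(s3_object_base64sha256("example_bucket", "example_folder/example_deployment_package.zip"))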

Thanks!