I ran into an issue today with a template that looks like this:
resource "aws_ecs_service" "service" {
name = "${var.service_name}"
cluster = "${var.cluster_name}"
task_definition = "arn:aws:ecs:${var.region}:<acct>:task-definition/${var.task_name}"
desired_count = 1
iam_role = "arn:aws:iam::<acct>:role/ecsServiceRole"
load_balancer {
target_group_arn = "${aws_alb_target_group.service.arn}"
container_name = "${var.container_name}"
container_port = 8080
}
}
This works fine for a basic apply/destroy scenario. The problem I noticed is that once the plan is applied, any subsequent plan insists on modifying the service, like so:
~ aws_ecs_service.service
task_definition: "arn:aws:ecs:us-east-1:<acct>:task-definition/test-service-dev:5" => "arn:aws:ecs:us-east-1:<acct>:task-definition/test-service-dev"
Basically, you can create the service using the default "latest" revision of a task, but once a revision is recorded on the service, every plan sees it as needing an update, even if the latest revision is still 5, as in the example.
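For what it's worth, one workaround I can imagine (a sketch only, I haven't tested it) is to resolve the current revision explicitly with the aws_ecs_task_definition data source, assuming your provider version has it, so that the ARN Terraform records always carries an explicit revision and subsequent plans stay clean:

data "aws_ecs_task_definition" "service" {
  # Given only a family name, this looks up the latest ACTIVE revision.
  task_definition = "${var.task_name}"
}

resource "aws_ecs_service" "service" {
  # (other arguments as in the template above)

  # Pin the revision explicitly so the stored ARN matches what plans compute.
  task_definition = "arn:aws:ecs:${var.region}:<acct>:task-definition/${var.task_name}:${data.aws_ecs_task_definition.service.revision}"
}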
The way this *should* act, I would think, is that if the service is created with latest, it should assume latest on each subsequent run. That behavior is almost a necessity if you're updating your task/container through another means and doing dynamic deployments and service updates. I haven't tried managing task updates in Terraform, since we already have Jenkins processes doing those rolling updates. This raises the question, though: how does Terraform apply ecs_service changes? Does it delete and re-create the service, or use the API to just update it?
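Since Jenkins owns the rolling updates in our case, another option might be to tell Terraform to stop diffing the field entirely with an ignore_changes lifecycle block. Again, just a sketch; I haven't verified it against this particular resource:

resource "aws_ecs_service" "service" {
  # (other arguments as in the template above)

  lifecycle {
    # Don't diff task_definition, since deploys outside Terraform change it.
    ignore_changes = ["task_definition"]
  }
}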
I wasn't sure if this is a bug or intended. I do notice the documentation says you should provide both a task family and a revision, so maybe it's a bug the *other* way around, in that it shouldn't even create the resource with "latest".
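If the docs are right that both a family and a revision are expected, I'd guess the intended pattern is to manage the task definition in Terraform too and interpolate its revision, something like this (hypothetical names, and the container-definitions file path is made up):

resource "aws_ecs_task_definition" "service" {
  family                = "${var.task_name}"
  container_definitions = "${file("task-definitions/service.json")}"
}

resource "aws_ecs_service" "service" {
  # (other arguments as in the template above)

  # family:revision form, so the revision is always explicit in state.
  task_definition = "${aws_ecs_task_definition.service.family}:${aws_ecs_task_definition.service.revision}"
}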
-= Jay =-