Hey guys,
I have a very simple config:
resource "aws_instance" "ecs_scraper_worker" {
  count                = 2
  key_name             = "scraper"
  ami                  = "ami-d0b9acb8"
  instance_type        = "c4.large"
  iam_instance_profile = "ecsInstanceRole"
}

resource "aws_ecs_task_definition" "scraper" {
  family                = "scraper"
  container_definitions = "${file("etc/scraper-ecs-task-definition.json")}"
}

resource "aws_ecs_service" "scraper" {
  depends_on      = ["aws_instance.ecs_scraper_worker"]
  name            = "scraper"
  cluster         = "arn:aws:ecs:us-east-1:027466452922:cluster/default"
  task_definition = "${aws_ecs_task_definition.scraper.arn}"
  desired_count   = 2
}
Some things are hardcoded since they are already configured and I don't need Terraform to manage them.
All I need is to create some instances, run a task on them, and update that task from time to time.
But I have 2 questions:
1. After the instances are created it takes some time for them to join the cluster, so when the service is created and looks for suitable instances in the specified cluster, there are no instances in it yet. I tried adding depends_on to the service config, but that didn't help, because the service still needs to wait until the instances have actually registered with the cluster. How can I deal with this? (See the first sketch after this list for the kind of wait I have in mind.)
2. From time to time I need to update the Docker image that runs on my instances. Without Terraform I would stop the running task, create a new task definition revision, and run that new revision on my instances. But I couldn't find a way to create a new revision with Terraform; the only approach I found is to destroy everything and re-create the whole environment, which does result in a new task revision. Is it possible to achieve the same result without destroying the whole environment? (See the second sketch below for roughly what I mean.)
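
To make question 1 concrete, here is a rough sketch of the kind of wait I imagine, not a working setup: it assumes the null provider and the AWS CLI on the machine running Terraform, and the resource name, region flag, and 10-second poll interval are all made up for illustration.

resource "null_resource" "wait_for_cluster_instances" {
  depends_on = ["aws_instance.ecs_scraper_worker"]

  # Poll the cluster until both container instances have registered.
  provisioner "local-exec" {
    command = <<EOF
until [ "$(aws ecs describe-clusters --clusters default --region us-east-1 \
  --query 'clusters[0].registeredContainerInstancesCount' --output text)" -ge 2 ]; do
  echo "waiting for container instances to register..."
  sleep 10
done
EOF
  }
}

The service would then use depends_on = ["null_resource.wait_for_cluster_instances"] instead of depending on the instances directly, so it isn't created until the cluster actually reports two registered instances. But maybe there is a cleaner, more Terraform-native way?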
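
And to make question 2 concrete, this is roughly the change I was hoping for, assuming the template provider (on older Terraform versions this may need the resource "template_file" form instead of the data source); the variable name, image name, and .tpl file are made up for illustration. The idea is that editing only the image and running terraform apply would replace just the task definition (which should show up in ECS as a new revision of the scraper family) and repoint the service at it, leaving the instances alone.

# Made-up variable holding the image to deploy.
variable "scraper_image" {
  default = "example/scraper:latest"
}

# Render the task definition JSON with the image filled in.
data "template_file" "scraper_task" {
  template = "${file("etc/scraper-ecs-task-definition.json.tpl")}"

  vars {
    image = "${var.scraper_image}"
  }
}

resource "aws_ecs_task_definition" "scraper" {
  family                = "scraper"
  container_definitions = "${data.template_file.scraper_task.rendered}"
}

Is that how it is supposed to work, or does it really require destroying and re-creating everything?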
Thanks!