How to detach an EBS volume from an EC2 instance created under an Auto Scaling group?


egul...@gmail.com

Mar 21, 2017, 10:52:47 AM
to Terraform
Hello everyone,

I've been looking and it seems like there isn't really a good answer to this question.

The problem I'm faced with is that I have an EC2 instance created by an ASG, and I'm attaching an EBS volume to it per application requirements. I have `create_before_destroy = true` in the lifecycle block of the autoscaling group resource, so when we update the AMI a new EC2 instance is created automatically.

That part is working fine; however, the EBS volume must be detached and attached to the new instance, and that is not happening. (Again, the application requires that volume to be attached at boot.)



I was testing the piece of code below:


resource "aws_volume_attachment" "force_ebs_detachment_on_lifecycle" {
  device_name  = "/dev/sdf"
  volume_id    = "${var.ebs_id}"
  #instance_id = "${aws_instance.web.id}"
  force_detach = true
}

but I can't figure out how to get the instance_id, since instances are provisioned through the ASG and instance_id is NOT exported by the aws_autoscaling_group resource.



Has anyone had a similar issue and solved it somehow?




Any advice greatly appreciated,
Thank you.

Derek Helmick

Mar 23, 2017, 2:01:21 PM
to Terraform
If I recall correctly, "aws_volume_attachment" is used to describe the relationship between a volume and an instance that Terraform has an understanding of -- and more often than not that Terraform itself provisioned. 

In this case, Terraform is provisioning the ASG, and it is the ASG that is provisioning the instance, the volume, and doing the attachment. In that situation, Terraform doesn't have a clear understanding of what the instances or volumes are without imports; nor is it the one responsible for attaching the volume as you're trying to do with the vol-attachment block. 

I think what you want instead is for Terraform to provision your volumes, and then supply those volume-ids to a template that gets rendered into your ASG's user-data. Then your user-data needs to do two things:

1) on boot, take the volume id, attach the volume to itself, and mount it; and

2) set up a shutdown script to detach the volume from itself.
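A minimal sketch of those two steps, assuming the instance profile allows ec2:AttachVolume/ec2:DetachVolume and that the AWS CLI and region are configured on the AMI (resource names, the AZ variable, the mount point, and the device path here are illustrative, not from the thread):

```hcl
# Terraform owns the data volume...
resource "aws_ebs_volume" "data" {
  availability_zone = "${var.az}" # must match the AZ the ASG launches into
  size              = 100
}

# ...and its id is interpolated into the launch configuration's
# user-data, so the instance can attach the volume to itself at boot.
resource "aws_launch_configuration" "lc" {
  name_prefix   = "app-"
  image_id      = "${var.lc_ami_id}"
  instance_type = "${var.lc_instance_type}"

  user_data = <<EOF
#!/bin/bash
# 1) on boot: attach and mount the data volume
SELF=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
aws ec2 attach-volume --volume-id ${aws_ebs_volume.data.id} \
  --instance-id "$SELF" --device /dev/sdf
aws ec2 wait volume-in-use --volume-ids ${aws_ebs_volume.data.id}
mount /dev/sdf /data
# 2) a shutdown hook would run the reverse:
#    umount /data && aws ec2 detach-volume --volume-id ${aws_ebs_volume.data.id}
EOF

  lifecycle {
    create_before_destroy = true
  }
}
```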

Alternatively, as it sounds like you're baking AMIs, you can build the above scripts into your bake. Thereafter, have Terraform supply the ASG with tags of the vol-ids you provisioned, and build into the scripts a function to either grab the volumes, build new vols from snapshots of those ids, etc.

There are a bunch of ways to attack this, but volume-attachment with an ASG is going to be a difficult - though not impossible - mechanism. Unless you very specifically want Terraform to control the relationship, you're probably best served by having the instance manage the relationship itself. 

egul...@gmail.com

Mar 24, 2017, 10:58:28 AM
to Terraform
Hi Derek,

I'm sorry, I should have been more descriptive, but the EBS 'data' volume is actually provisioned outside of the ASG, and it is attached by an init script that takes an 'EBS tag' & 'volume-id'. This mechanism works great and that's not the problem.



The problem I'm facing is when I need to replace that instance (for example, when replacing the AMI). The lifecycle creates the new EC2 instance before destroying the first one, but because the EBS volume is still attached to the instance that is still alive, the boot process on the new instance can't attach the volume, as it doesn't see it as 'available'.


I started testing aws_volume_attachment with this code:



resource "aws_volume_attachment" "force_ebs_detachment_on_lifecycle" {
  device_name  = "/dev/sdf"
  volume_id    = "${module.DataVolume.id}"
  instance_id  = "${module.AutoScalingServerGroup2.local_exec_output}"
  force_detach = true
}



but I have a problem getting the instance_id. My code for setting the output in the AutoScaling module is this:


resource "null_resource" "ec2_under_asg_id" {
  provisioner "local-exec" {
    # Note: the CLI flag is --auto-scaling-group-names (plural). Also,
    # local-exec output only goes to the Terraform log; it is not
    # captured as an attribute that can be referenced elsewhere.
    command = "aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names ${aws_autoscaling_group.main_asg.name} --region ${var.region} --output text --query 'AutoScalingGroups[*].Instances[*].[InstanceId]'"
  }
}


output "local_exec_output" {
  value = "${null_resource.ec2_under_asg_id}"
}


but Terraform fails with the error below:


Error reading config for output local_exec_output: null_resource.ec2_under_asg_id: resource variables must be three parts: TYPE.NAME.ATTR in:

${null_resource.ec2_under_asg_id}



I'd appreciate any advice on that part. Also if you think there's a different/better way of handling that problem then I'm open to any suggestions.


Thank you,
Ernest

Derek Helmick

Mar 24, 2017, 2:12:54 PM
to Terraform
I'm not sure that the null_resource has outputs that can be queried. I think what you may be looking for is https://www.terraform.io/docs/providers/external/data_source.html but even that is going to be pretty fragile. And aside from that, though there is a force_detach option on aws_volume_attachment, I think that mechanism is only going to work when either Terraform performed the attach, or the attach occurred in the order Terraform would have done it, which is going to be somewhat difficult to do consistently.
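For reference, the external data source pattern could look roughly like this; the helper script name is hypothetical, and this inherits the fragility noted above (it assumes the ASG already has a running instance):

```hcl
# asg_instance.sh receives the ASG name as an argument and must print
# a JSON object on stdout, e.g. {"instance_id": "i-0abc123"}
data "external" "asg_instance" {
  program = ["bash", "${path.module}/asg_instance.sh",
             "${aws_autoscaling_group.main_asg.name}"]
}

resource "aws_volume_attachment" "force_ebs_detachment_on_lifecycle" {
  device_name  = "/dev/sdf"
  volume_id    = "${module.DataVolume.id}"
  instance_id  = "${data.external.asg_instance.result["instance_id"]}"
  force_detach = true
}
```

where the helper script might be:

```shell
#!/bin/bash
# Print the first instance of the named ASG as JSON for the external
# data source; note the flag is --auto-scaling-group-names (plural).
id=$(aws autoscaling describe-auto-scaling-groups \
  --auto-scaling-group-names "$1" \
  --query 'AutoScalingGroups[0].Instances[0].InstanceId' --output text)
echo "{\"instance_id\": \"$id\"}"
```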

I think the more reliable mechanism might be to add to your boot script a portion that looks for the volumes attached to the old ASG's instances, detaches the volume from the old instances, and reattaches the volume to itself. You might also need some middle step in there to have the new instance either signal to the old one to unmount the volume (before the detach), or maybe just have the new instance SSH into its predecessor to do the unmount.
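A rough boot-script sketch of that detach/reattach dance; VOLUME_ID is assumed to arrive via user-data, and the device name, mount point, and the unmount signaling are illustrative placeholders:

```shell
#!/bin/bash
SELF=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)

# If the volume is still attached to the predecessor, detach it first.
STATE=$(aws ec2 describe-volumes --volume-ids "$VOLUME_ID" \
  --query 'Volumes[0].Attachments[0].State' --output text)
if [ "$STATE" = "attached" ]; then
  # middle step goes here: signal (or SSH into) the old instance so it
  # can unmount cleanly before we pull the volume out from under it
  aws ec2 detach-volume --volume-id "$VOLUME_ID"
  aws ec2 wait volume-available --volume-ids "$VOLUME_ID"
fi

# Reattach the volume to ourselves and mount it.
aws ec2 attach-volume --volume-id "$VOLUME_ID" \
  --instance-id "$SELF" --device /dev/sdf
aws ec2 wait volume-in-use --volume-ids "$VOLUME_ID"
mount /dev/sdf /data
```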

Paul Treszczotko

May 3, 2017, 10:37:05 PM
to Terraform
Hi Ernest, 
Have you figured out a reliable way to attach/mount EBS to ASG provisioned instances? I'm stuck with the same dilemma and it seems like there's no 'proper' way of doing that.
I started looking at Lambda functions to hook into the ASG lifecycle to mount and provision the newly created instance. There are a few pieces to it, but it looks promising; here's a little write-up on it: https://github.com/awslabs/aws-lambda-lifecycle-hooks-function

pt



egul...@gmail.com

May 4, 2017, 11:22:35 AM
to Terraform
Hi Paul,
Yes, I did. I wrote a separate module for the EBS volume, provisioning it there, and then I pass its volume-id output into the ASG's user_data. The application process takes the volume-id and attaches the volume to the instance without any problems.


However, I also needed to provision an EBS volume to the instance at boot for application requirements, and I solved that by updating my ASG module code to cover different scenarios:

 - I want to attach ebs
 - I don't want to attach ebs

I'm using the count parameter to accomplish that, and in my project directory the ASG block in main.tf looks something like this:


module "ASG" {
  source = ....

  attach_ebs       = "0" # 0 = false & 1 = true
  donot_attach_ebs = "1"

  ebs_type                  = ""
  ebs_size                  = ""
  ebs_delete_on_termination = ""
  encrypted                 = ""
  ebs_snapshot_id           = ""
  ebs_letter                = ""
  # ... and other parameters
}



Here's the code I have within my ASG module for provisioning EBS at boot:



resource "aws_launch_configuration" "launch_config_with_ebs_no_template" { # add EBS & no template
  count                       = "${var.attach_ebs * var.donot_use_template_file}"
  name_prefix                 = "${var.lc_name}"
  image_id                    = "${var.lc_ami_id}"
  instance_type               = "${var.lc_instance_type}"
  security_groups             = ["${var.lc_security_groups}"]
  associate_public_ip_address = "${var.associate_public_ip_address}"
  key_name                    = "${var.key_name}"
  iam_instance_profile        = "${var.instance_iam_role}"
  user_data                   = "CLUSTER_NAME=${var.cluster_name}\nEBS_VOLUME=${var.ebs_id}\nVolumeSize=${var.volume_size}\nVolumeType=${var.volume_type}\n"

  # EBS configuration
  ebs_block_device {
    device_name           = "${var.ebs_letter}"
    snapshot_id           = "${var.ebs_snapshot_id}"
    volume_type           = "${var.ebs_type}"
    volume_size           = "${var.ebs_size}"
    delete_on_termination = "${var.ebs_delete_on_termination}"
    encrypted             = "${var.encrypted}"
  }
}


I have multiple blocks like above to cover different case scenarios and it works flawlessly. 

And yes, there's an additional parameter, "donot_use_template_file", as I'm covering additional scenarios like:
 - I want to use a user_data template file
 - I don't want to use a user_data template file
in combination with attaching/not attaching the EBS volume.
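As a sketch, one of the complementary blocks selected by those count flags might look like this; the resource name is a guess, not from the thread (the flags are assumed to be mutually exclusive so exactly one launch configuration gets a count of 1):

```hcl
resource "aws_launch_configuration" "launch_config_no_ebs_no_template" { # no EBS & no template
  # selected only when donot_attach_ebs = 1 and no template file is used
  count         = "${var.donot_attach_ebs * var.donot_use_template_file}"
  name_prefix   = "${var.lc_name}"
  image_id      = "${var.lc_ami_id}"
  instance_type = "${var.lc_instance_type}"
  user_data     = "CLUSTER_NAME=${var.cluster_name}\n"
  # no ebs_block_device block in this variant
}
```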

I hope that helps.