--
You received this message because you are subscribed to the Google Groups "Ansible Project" group.
To unsubscribe from this group and stop receiving emails from it, send an email to ansible-proje...@googlegroups.com.
To post to this group, send email to ansible...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/ansible-project/42240c16-c2a5-4dac-b6f9-a30fc6e5b8d2%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
## Option 2

This does everything that Option 1 does, but is contained inside the module. It's more opaque, but the playbooks end up being much clearer.

```yaml
- ec2_asg:
    name: myasg
    health_check_period: 60
    health_check_type: ELB
    replace_all_instances: yes
    min_size: 5
    max_size: 5
    desired_capacity: 5
    region: us-east-1
```
Michael,
The reason for having both was to spur this very discussion. :) Option 1 is a bit more complicated but more transparent; option 2 is much easier but less transparent. I'm more fond of option 2, and happy to make it the only one. BTW, are we talking about the docs or the actual feature?
As far as what the instances are being replaced with -- the ASG is going to spin up new instances with the current launch configuration. With option 2, the module starts by building a list of which instances should be replaced. This list is made up of all instances that have not been launched with the current launch configuration. The module then bumps the size of the ASG by replace_batch_size. It then terminates replace_batch_size instances at a time, waits for the ASG to spin up new instances in their place and become healthy, then continues on down the list until there are no more left to replace. Then it sets the ASG size back to its original value.
James
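The batch-replace loop described above can be sketched in plain Python against a toy in-memory group. The `Asg`, `Instance`, and `roll_instances` names are illustrative stand-ins, not the ec2_asg module's actual internals:

```python
# Toy model of the rolling-replace loop: scale out by the batch size,
# terminate stale instances a batch at a time, then restore the size.
from dataclasses import dataclass, field
from itertools import count

_ids = count(1)

@dataclass
class Instance:
    launch_config: str
    instance_id: int = field(default_factory=lambda: next(_ids))

@dataclass
class Asg:
    launch_config: str   # current launch config; new instances get this
    desired_capacity: int
    instances: list

    def scale_to(self, n):
        """Launch or terminate until the group holds exactly n instances."""
        self.desired_capacity = n
        while len(self.instances) < n:
            self.instances.append(Instance(self.launch_config))
        while len(self.instances) > n:
            self.instances.pop()

def roll_instances(asg, batch_size):
    """Replace every instance not launched with the current launch config."""
    stale = [i for i in asg.instances
             if i.launch_config != asg.launch_config]
    if not stale:
        return            # nothing to do: the operation is idempotent
    original = asg.desired_capacity
    asg.scale_to(original + batch_size)       # add headroom first
    while stale:
        batch, stale = stale[:batch_size], stale[batch_size:]
        for inst in batch:
            asg.instances.remove(inst)        # "terminate" the batch
        asg.scale_to(original + batch_size)   # group backfills with new LC
    asg.scale_to(original)                    # restore the original size
```

Because the stale list is computed from the launch config, re-running the function (or resuming after a failure) skips instances that were already replaced.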
Scott,
Neat to see someone else's approach. The "fast method" you have there probably could be worked into what's been merged. Another approach (maybe simpler) would be to just stand up a parallel ASG with the new AMI.
I like making the AutoScale Group do the instance provisioning, versus your approach of provisioning the instance and then moving it to an ASG. From what I can tell, your module doesn't seem to be idempotent -- so if it's run, it's always going to act. The feature I added only updates instances if they have a launch config that is different from what's currently assigned to the ASG. So it's safe to run again (or continue a run that failed for some reason), without having to cycle through all the instances again.
We will be publishing an article on some different approaches that we've worked through for doing this "immutablish" deploy stuff sometime next week.
The general problem with this approach is that it doesn’t work well for blue-green deployments, nor if the new code can’t coexist with the currently running code.
I think we’re probably going to move to a system that uses a tier of proxies and two ELBs. That way we can update the idle ELB, change out the AMIs, and bring the updated ELB up behind an alternate domain for the blue-green testing. Then when everything checks out, switch the proxies to the updated ELB and take down the remaining, now idle ELB.
Amazon would suggest using Route53 to point to the new ELB, but there’s too great a chance of faulty DNS caching breaking a switch to a new ELB. Plus there’s a 60s TTL to start with regardless, even in the absence of caching.
You may have missed the “cycle_all” parameter. If False, only instances that don’t match the new AMI are cycled.
Using the ASG to do the provisioning might be preferable if it’s reliable. At first I went that route, but I was having problems with the ASG’s provisioning being non-deterministic. Manually creating the instances seems to ensure that things happen in a particular order and with predictable speed. As mentioned, the manual method definitely works every time, although I need to add some more timeout and error checking (like what happens if I ask for 3 new instances and only get 2).
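The "ask for 3 new instances and only get 2" failure mode can be handled with a simple poll-with-deadline helper. This is a hedged sketch, not the module's code; `count_running` is a caller-supplied stand-in for a real "describe running instances" query:

```python
# Poll until the requested number of instances is running, and fail
# loudly with a TimeoutError if the count stalls past the deadline.
import time

def wait_for_instances(count_running, wanted, timeout=300, poll=10):
    """Return the running count once it reaches `wanted`, else raise."""
    deadline = time.monotonic() + timeout
    while True:
        running = count_running()
        if running >= wanted:
            return running
        if time.monotonic() >= deadline:
            raise TimeoutError("asked for %d instances, only %d came up "
                               "within %ss" % (wanted, running, timeout))
        time.sleep(poll)
```

The helper always polls at least once, so a zero timeout still reports the actual shortfall rather than timing out blind.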
I have a separate task that cleans up the old AMIs and LCs, incidentally. I keep the most recent around as a backup for quick rollbacks.
> I think we’re probably going to move to a system that uses a tier of proxies and two ELBs. That way we can update the idle ELB, change out the AMIs, and bring the updated ELB up behind an alternate domain for the blue-green testing. Then when everything checks out, switch the proxies to the updated ELB and take down the remaining, now idle ELB.

Not following this exactly -- what's your tier of proxies? You have a group of proxies (haproxy, nginx) behind a load balancer that point to your application?
> Amazon would suggest using Route53 to point to the new ELB, but there’s too great a chance of faulty DNS caching breaking a switch to a new ELB. Plus there’s a 60s TTL to start with regardless, even in the absence of caching.

Quite right. There are some interesting things you can do with tools you could run on the hosts that would redirect traffic from blue hosts to the green LB, socat being one. After you notice no more traffic coming to blue, you can terminate it.
You're right, I did miss that. By checking the AMI, you're only updating the instance if the AMI changes. If you are checking the launch config, you are updating the instances if any component of the launch config has changed -- AMI, instance type, address type, etc.
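The difference is easy to see with a toy example (the class and field names here are illustrative, not any real API):

```python
# Why comparing launch configs catches more drift than comparing AMIs:
# a new LC can change the instance type (or anything else) while
# reusing the exact same AMI.
from dataclasses import dataclass

@dataclass(frozen=True)
class LaunchConfig:
    name: str
    ami: str
    instance_type: str

def stale_by_ami(instances, current_lc):
    """Instances whose AMI differs from the current launch config's AMI."""
    return [i for i in instances if i["lc"].ami != current_lc.ami]

def stale_by_lc(instances, current_lc):
    """Instances launched with any launch config other than the current one."""
    return [i for i in instances if i["lc"].name != current_lc.name]

old_lc = LaunchConfig("lc-v1", "ami-123456", "m3.medium")
new_lc = LaunchConfig("lc-v2", "ami-123456", "m3.large")  # same AMI, new type
fleet = [{"id": n, "lc": old_lc} for n in range(3)]
```

Here the AMI comparison sees nothing to do, while the launch-config comparison flags the whole fleet for replacement.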
> Using the ASG to do the provisioning might be preferable if it’s reliable. At first I went that route, but I was having problems with the ASG’s provisioning being non-deterministic. Manually creating the instances seems to ensure that things happen in a particular order and with predictable speed. As mentioned, the manual method definitely works every time, although I need to add some more timeout and error checking (like what happens if I ask for 3 new instances and only get 2).

I didn't have any issues with the ASG doing the provisioning, but I would say nothing is predictable with AWS :).
> I have a separate task that cleans up the old AMIs and LCs, incidentally. I keep the most recent around as a backup for quick rollbacks.

That's cool, care to share?