1. Use metadata & cloud-init in CloudFormation to run bash scripts directly
2. Install packages and Ansible, copy playbooks from a private repo, and run the playbooks locally
3. Install packages and Ansible, and use 'ansible-pull'
I have the launch configuration user data script install Ansible, pull the playbooks from an S3 bucket, and then run Ansible locally (not ansible-pull). It runs a bootstrap playbook which gets the instance's tags via the instance metadata. In the ASG configuration I set a tag, ansible_host_group, which lists the host groups this instance will belong to in the Ansible (dynamic EC2) inventory. The bootstrap playbook uses add_host to add localhost (i.e. the instance) to the ansible_host_group group, then includes site.yml. site.yml includes the other application-tier playbooks (webserver.yml, database.yml etc.), and the hosts: value in each restricts which instances run the plays/roles.
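Roughly, the bootstrap playbook looks like this. This is only a sketch: the amazon.aws module names (ec2_metadata_facts, ec2_tag_info) are one way to do the tag lookup, and bootstrap.yml/site.yml are placeholder file names.

```yaml
# bootstrap.yml -- sketch: add this instance to the groups named in its
# ansible_host_group tag, then hand off to site.yml
---
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Gather instance metadata (instance id, region, ...)
      amazon.aws.ec2_metadata_facts:

    - name: Look up this instance's tags
      amazon.aws.ec2_tag_info:
        region: "{{ ansible_ec2_placement_region }}"
        resource: "{{ ansible_ec2_instance_id }}"
      register: instance_tags

    - name: Add localhost to the group(s) from the ansible_host_group tag
      ansible.builtin.add_host:
        name: localhost
        groups: "{{ instance_tags.tags.ansible_host_group }}"

- import_playbook: site.yml
```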
# webserver.yml
---
- hosts: webserver:tag_ansible_host_group_webserver
  roles:
    - common
    - webserver
The tag_* host pattern is only there in case I need to run the plays against already-running instances, which shouldn't really ever be the case.
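site.yml itself is then just a thin wrapper that includes the tier playbooks, something like (a sketch, file names assumed):

```yaml
# site.yml -- include each application-tier playbook; the hosts: value
# inside each one decides whether its plays run on this instance
---
- import_playbook: webserver.yml
- import_playbook: database.yml
```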
This approach requires installing Ansible and the aws-cli tools on the host, and giving the instance access to an S3 bucket and its own metadata. A lot of people are fine with this, but I feel less is more. Auto Scaling lifecycle hooks seem like a good idea too, and can now call out to Lambda (Python, i.e. Ansible) functions. Their wait and success/failure features seem like a smart option as well.
I was also thinking of triggering an SNS topic to execute a Lambda function (i.e. Ansible). It's more or less the same thing; lifecycle hooks seem like the better choice, though I've found they aren't as popular as SNS.
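In CloudFormation, a launch lifecycle hook pointing at an SNS topic (which could in turn trigger the Lambda) would look roughly like this. Resource names here (WebServerASG, BootstrapTopic, HookRole) are placeholders, not from my stack:

```yaml
# Sketch of an ASG lifecycle hook that notifies an SNS topic on launch
LaunchHook:
  Type: AWS::AutoScaling::LifecycleHook
  Properties:
    AutoScalingGroupName: !Ref WebServerASG
    LifecycleTransition: "autoscaling:EC2_INSTANCE_LAUNCHING"
    HeartbeatTimeout: 900            # seconds to wait for a completion signal
    DefaultResult: ABANDON           # terminate the instance if bootstrap never succeeds
    NotificationTargetARN: !Ref BootstrapTopic
    RoleARN: !GetAtt HookRole.Arn
```

The instance stays in the Pending:Wait state until something (e.g. the Lambda) calls complete-lifecycle-action, which is what gives you the wait and success/failure behaviour mentioned above.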
I'd like to find time to implement a Lambda-based Ansible bootstrap solution.