Iterating through a list of names to create EC2 instances


Renaud Guerin

Nov 21, 2013, 12:42:31 PM
to ansible...@googlegroups.com
Hello,

The EC2 module's docs show how to create several instances at once using the "count" variable, and register the results into another variable ("ec2").

You can then iterate on the created instances and do stuff (like add them to an inventory group) using "with_items: ec2.instances"
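For reference, the pattern looks roughly like this (the AMI ID, key pair and region are placeholders):

- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: launch several identical instances
      ec2:
        image: ami-xxxxxxxx        # placeholder AMI
        instance_type: t1.micro
        key_name: mykey            # placeholder key pair
        region: us-east-1
        count: 3
        wait: yes
      register: ec2

    - name: add each created instance to an in-memory inventory group
      add_host: hostname={{ item.public_dns_name }} groupname=launched
      with_items: ec2.instances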

However, I can't figure out how to do the following:

1) Have the user provide a "names" list (e.g. ["web1", "web2", "web3"]). Or better yet, a naming prefix ("web"), a count (3) and a start index (1).
2) Create them using the ec2 module, register the results into "ec2" to keep precious info like ec2.instances.public_dns_name.
3) Here's the tricky bit: add a route53 DNS entry that CNAMEs each entry in the "names" list to the corresponding public_dns_name value in "ec2.instances".

I don't believe with_nested will help with 3), and you can't walk through the two lists side by side by stacking several with_items: statements either, correct?

Short of adding that feature to the ec2 module, how would you do it?

Can you do something like:
- include: add_instance.yml
  with_items: names

and then, inside add_instance.yml, use the ec2 module with count=1 and create the DNS entries from there?
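Something along these lines, for concreteness (the AMI and DNS zone are placeholders, add_instance.yml is hypothetical, and this assumes with_items works on a task include):

# site.yml (sketch)
- hosts: localhost
  connection: local
  gather_facts: no
  vars:
    names: [web1, web2, web3]
  tasks:
    - include: add_instance.yml
      with_items: names

# add_instance.yml (hypothetical; "item" is the current name)
- name: launch one instance for {{ item }}
  ec2:
    image: ami-xxxxxxxx            # placeholder AMI
    instance_type: t1.micro
    region: us-east-1
    count: 1
    wait: yes
  register: single

- name: CNAME {{ item }} to the instance's public DNS name
  route53:
    command: create
    zone: example.com              # placeholder zone
    record: "{{ item }}.example.com"
    type: CNAME
    ttl: 300
    value: "{{ single.instances[0].public_dns_name }}"
    overwrite: yes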

Thanks!

James Tanner

Nov 21, 2013, 1:47:40 PM
to ansible...@googlegroups.com
On 11/21/2013 12:42 PM, Renaud Guerin wrote:
> However, I can't figure out how to do the following:
>
> 1) Have the user provide a "names" list (e.g. ["web1", "web2", "web3"]). Or better yet, a naming prefix ("web"), a count (3) and a start index (1).
> 2) Create them using the ec2 module, register the results into "ec2" to keep precious info like ec2.instances.public_dns_name.
> 3) Here's the tricky bit: add a route53 DNS entry that CNAMEs each entry in the "names" list to the corresponding public_dns_name value in "ec2.instances".

Can you give an example for what you are trying to convey here? I'm a
bit confused.


Renaud Guérin

Nov 21, 2013, 2:14:49 PM
to ansible...@googlegroups.com, James Tanner
On 21 November 2013 at 18:47:46, James Tanner (tann...@gmail.com) wrote:

> Can you give an example for what you are trying to convey here? I'm a bit confused.

Thanks for replying, James,

I'm trying to create a fleet of (for instance) web servers on EC2, and I would like to give them meaningful Ansible hostnames (web1, web2, etc.) instead of the AWS-generated public DNS names (ec2-xxxxx.compute-1.amazonaws.com).

I'd also like to create a CNAME DNS record (using the route53 module) pointing their chosen name (web1, web2, etc.) to the EC2 DNS record (ec2-xxxxx.compute-1.amazonaws.com). The latter is accessible as "instances.public_dns_name" in the registered result.

Ideally, I would just need to provide the ec2 module with a list of hostnames for the instances it's going to create (internally it would just need to set the "Name" EC2 tag), and I'd get back "item.hostname" (web1, web2) alongside each "item.public_dns_name" (ec2-xxxxx.compute-1.amazonaws.com) when using "with_items: ec2.instances".
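Purely to illustrate the idea (none of this is supported by the ec2 module today; "instances_names" and "item.hostname" are made up):

- name: launch named instances (hypothetical syntax)
  ec2:
    image: ami-xxxxxxxx                      # placeholder AMI
    instance_type: t1.micro
    region: us-east-1
    instances_names: [web1, web2, web3]      # hypothetical parameter
    wait: yes
  register: ec2

- name: CNAME each chosen name to its public DNS name (item.hostname is hypothetical)
  route53:
    command: create
    zone: example.com                        # placeholder zone
    record: "{{ item.hostname }}.example.com"
    type: CNAME
    ttl: 300
    value: "{{ item.public_dns_name }}"
    overwrite: yes
  with_items: ec2.instances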

The more I think about it, the more it sounds like only a few changes are needed in the ec2 module. I can have a go at it, but I'd like to know if something similar exists already.

Michael DeHaan

Nov 23, 2013, 11:38:22 AM
to ansible...@googlegroups.com, James Tanner





--
Michael DeHaan <mic...@ansibleworks.com>
CTO, AnsibleWorks, Inc.
http://www.ansibleworks.com/

Renaud Guérin

Nov 23, 2013, 11:55:34 AM
to Michael DeHaan, ansible...@googlegroups.com, James Tanner
Thanks Michael,

I know (and mentioned) that I'll need to use the route53 module; that's not the difficulty here.
I did some more research, and it looks like with_together is what I was looking for.

If anyone is looking to do the same (it's quite a common provisioning pattern):

- Provide an "instances_names" list.
- Create the required number of instances using the ec2 module and register the result.
- Use ec2_tag, add_host and route53 for, respectively: setting the AWS Name tag, adding the host to the inventory with its desired hostname (not the default EC2 name), and adding a CNAME DNS entry for hostname -> EC2 public_dns_name.
- For each of the modules above, use with_together to iterate through the created-instances list (returned by the ec2 module) and the hostnames list you provided, side by side (see the sketch below).
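A minimal sketch of that pattern (the AMI, region and DNS zone are placeholders, and the list name mirrors the steps above):

- hosts: localhost
  connection: local
  gather_facts: no
  vars:
    instances_names: [web1, web2, web3]
  tasks:
    - name: launch as many instances as there are names
      ec2:
        image: ami-xxxxxxxx              # placeholder AMI
        instance_type: t1.micro
        region: us-east-1
        count: "{{ instances_names | length }}"
        wait: yes
      register: created

    - name: set the AWS Name tag from the desired hostname
      ec2_tag:
        resource: "{{ item.0.id }}"
        region: us-east-1
        tags:
          Name: "{{ item.1 }}"
      with_together:
        - created.instances
        - instances_names

    - name: add each instance to the in-memory inventory under its chosen name
      add_host: hostname={{ item.1 }} ansible_ssh_host={{ item.0.public_dns_name }} groupname=launched
      with_together:
        - created.instances
        - instances_names

    - name: CNAME each chosen name to the EC2 public DNS name
      route53:
        command: create
        zone: example.com                # placeholder zone
        record: "{{ item.1 }}.example.com"
        type: CNAME
        ttl: 300
        value: "{{ item.0.public_dns_name }}"
        overwrite: yes
      with_together:
        - created.instances
        - instances_names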

This would be slightly easier if the ec2 module took an optional "instances_names" list as a parameter and used it to set a different EC2 Name tag for each created instance.
I can look at adding this functionality if it's likely to be accepted and merged. What do you think?

Peter Sankauskas

Nov 24, 2013, 1:34:39 PM
to ansible...@googlegroups.com, Michael DeHaan, James Tanner
Hi Renaud,

People who have been using AWS for a while don't really use it this way, or at least they shouldn't. Treat servers like cattle, not pets.

If you haven't yet, take a look at AutoScaling Groups: 
... and think about how to architect your application to run on dynamic infrastructure.

--
Kind regards,
Peter Sankauskas

Peter Sankauskas

Nov 25, 2013, 4:24:51 PM
to ansible...@googlegroups.com, Michael DeHaan, James Tanner

Renaud Guérin

Nov 27, 2013, 9:48:35 AM
to ansible...@googlegroups.com, Michael DeHaan, James Tanner
Hi Peter,

Very interesting talk, and this is probably the better approach. Thanks.

You do need to address individual servers from time to time, though: SSHing into them to debug something, for instance.

In this case, finding and copying the public_dns_name for a box sounds like a pain, and a big usability regression from a human-readable naming convention and DNS CNAMEs.

Am I missing a clever way to use Ansible to provide "vagrant ssh"-style functionality, basically?
Or do you know of any wrapper tools that do something similar?

Peter Sankauskas

Nov 27, 2013, 11:38:21 AM
to ansible...@googlegroups.com, Michael DeHaan, James Tanner
Actually, yes. You can tag AWS resources and find them using those tags. For example, an Amazon Linux instance with the tag Name=foo could be SSHed into using:

ansible-ec2 ssh --name foo -u ec2-user

You can find the code here:


It uses Ansible's EC2 dynamic inventory script to look up the public DNS name (or whatever you configure it to use).
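In the same spirit, if you point Ansible at the stock ec2.py dynamic inventory, it builds groups from tags (e.g. tag_Name_foo with default settings), so a play can target tagged instances directly; the group name and user below are illustrative:

# run with: ansible-playbook -i ec2.py ping_foo.yml
- name: reach every instance tagged Name=foo
  hosts: tag_Name_foo
  user: ec2-user
  gather_facts: no
  tasks:
    - name: confirm SSH connectivity
      ping: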

One further tip: in a distributed environment, debugging can be tough because you don't know which server is causing issues. Get all of the logging out of the instances and into a central, searchable location. There are plenty of SaaS (Loggly, Splunk, Sumo Logic, Papertrail) and open-source (Logstash, etc.) options available.

--
Kind regards,
Peter Sankauskas

Renaud Guérin

Nov 27, 2013, 12:36:12 PM
to ansible...@googlegroups.com, Peter Sankauskas, Michael DeHaan, James Tanner
Nice tool!

But your SSH example shows how a unique name is still unavoidable sometimes, be it tag- or DNS-based.

If I left instances with their EC2 birth names, I couldn't just type "ansible-ec2 ssh --name web1" when I need to log into a web server; I'd have to run "ansible-ec2 list" first with a raw group name to find the web servers, then pick one and SSH into it.

Also, some tools just rely on a nice hostname being set (the New Relic server dashboard would be unreadable with ec2-xxxx hostnames all mixed up). RabbitMQ, for instance, names its task queues after the current hostname and warns against changing it. I know it's bad practice and not autoscaling-friendly, but that's just how it is on some occasions.

So, I'll forget about CNAMEs in Route53, but I think my original feature request is still relevant: being able to pass a list of names to the ec2 module to be set as the Name tag of each created instance.
If I submitted such a patch, would you merge it?

Peter Sankauskas

Nov 28, 2013, 7:08:52 PM
to ansible...@googlegroups.com, Peter Sankauskas, Michael DeHaan, James Tanner
One more example to clear up the uniqueness point:

Given 3 instances tagged Name=foo, to SSH into the 2nd one:

ansible-ec2 ssh --name foo -u ec2-user -n 2

(sorted by public DNS name, I think)

Dan Vaida

Jun 23, 2014, 7:05:21 AM
to ansible...@googlegroups.com
Hello,

What about adding an EIP to the instance(s)?

That changes the game completely, making the registered ec2 instance info outdated (at least the IP-related parts).

How about cycling through Route53 with those EIPs? Has anyone done that successfully? I don't want to hijack this thread, but I find this rather relevant.

I know some would point to the ec2_facts module, registering the output of ec2_eip, or using the add_host module, but I couldn't get it working in any way, so even pseudo-code would be appreciated.
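Roughly what I have in mind (untested, and assuming the registered ec2_eip result exposes the allocated address as public_ip; the AMI, zone and record names are placeholders):

- name: launch instances
  ec2:
    image: ami-xxxxxxxx            # placeholder AMI
    instance_type: t1.micro
    region: us-east-1
    count: 3
    wait: yes
  register: created

- name: allocate and associate an EIP for each instance
  ec2_eip:
    instance_id: "{{ item.id }}"
    region: us-east-1
  with_items: created.instances
  register: eips

- name: point an A record at each EIP (the earlier ec2 result is stale, so use the EIP result instead)
  route53:
    command: create
    zone: example.com              # placeholder zone
    record: "web{{ item.0 + 1 }}.example.com"
    type: A
    ttl: 300
    value: "{{ item.1.public_ip }}"
    overwrite: yes
  with_indexed_items: eips.results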

Thanks!

Sankalp Khare

Oct 29, 2014, 5:02:19 AM
to ansible...@googlegroups.com, pas...@gmail.com, mic...@ansibleworks.com, tann...@gmail.com
Hi Renaud,

I totally get what you are looking to achieve; perhaps you've achieved it already in the past year. Assuming you are happy with specifying a start index x and a count N to produce machines with names containing x, x+1, x+2, ..., x+N-1, I think the following playbook example will be instructive. I had the same requirement, and this is what I was able to produce:

# ansible-playbook create-web.yml --extra-vars "count=n startindex=x env=production"
# provisions n web servers in the prod env with indices x, x+1, x+2, ..., x+n-1

---
  - name: "create and provision web servers in {{ env }} environment"
    hosts: localhost
    gather_facts: False
    tasks: 
      - name: launch instances
        local_action:
          module: ec2
          key_name: "{{ launch_key.ec2_classic }}"
          instance_type: "{{ instance.web[env] }}"
          volumes:
          - device_name: /dev/sda1
            volume_size: 512
            delete_on_termination: true
          - device_name: "{{ ephemeral[0] }}"
            ephemeral: ephemeral0 
          - device_name: "{{ ephemeral[1] }}"
            ephemeral: ephemeral1
            # ephemerals are deleted by default on termination
          region: "{{ region }}"
          image: "{{ os.amazon.ami_id }}"
          wait: yes
          group: "web-{{ env }}"
          count: "{{ count }}"
          wait_timeout: 1000
        register: created
        tags:
          - create

      - name: write instance ids and public dns of the instances to local hosts file
        local_action:
          module: lineinfile
          dest: ./hosts
          line: "{{ item.id }} {{ item.public_dns_name }}"
          create: yes
        with_items: created.instances
        tags:
          - create

      - name: create identifier sequence for tagging
        debug: msg="{{ item }}"
        with_sequence: start="{{ startindex }}" count="{{ count }}" format=%02d
        no_log: true # mute output
        register: sequence
        tags:
          - tag

      - name: tag instances
        no_log: true
        local_action: >-
          ec2_tag
          resource={{ item.0.id }}
          region={{ region }}
        args:
          tags:        
            Name: "Web {{ env|title }} {{ item.1.msg }}"
            Env: "{{ env }}"
            Type: server
            Function: web
            OS: "{{ os.amazon.name }}"
            Region: "{{ region }}"
            ID: "{{ item.1.msg }}"
        with_together:
          - created.instances
          - sequence.results
        tags:
          - tag
  
      - name: update dns records
        route53: >-
          command=create
          zone=yoursite.com
          record=web.{{ item.1.msg }}.{{ env }}.server.yoursite.com
          type=CNAME
          ttl=300
          value={{ item.0.public_dns_name }}
          overwrite=true
        with_together:
          - created.instances
          - sequence.results
        tags:
          - deploy

      - name: register instances with load balancers
        local_action: ec2_elb
        args:
          instance_id: "{{ item.id }}"
          ec2_elbs: "{{ elb_names.web[env] }}"
          region: "{{ region }}"
          state: present
          wait: no
        with_items: created.instances
        tags:
          - deploy

      - name: add instances to an in-memory group
        no_log: true
        local_action: add_host hostname="{{ item.public_dns_name }}" groupname=fresh
        with_items: created.instances
        tags:
          - create
      
      - name: wait for ssh to come up
        local_action: wait_for host="{{ item.public_dns_name }}" port=22 delay=60 timeout=320 state=started
        with_items: created.instances
        tags:
          - create
  
  - name: provision the instances
    hosts: fresh
    user: "{{ os.amazon.user }}"
    vars:
      user: "{{ os.amazon.user }}"
    roles:
      - common
      - swap
      - python
    tags:
      - configure

  - name: summary of created instances
    hosts: fresh
    gather_facts: false
    sudo: no
    tasks:
      - name: Get instance ec2 facts
        action: ec2_facts
        no_log: true # mute output
        register: ec2_facts        
      - name: Get resource tags from ec2 facts
        sudo: false
        no_log: true # mute output
        local_action: >-
          ec2_tag
          resource={{ ec2_facts.ansible_facts.ansible_ec2_instance_id }}
          region={{ region }}
          state=list
        register: ec2_tags
      - debug: msg="{{ ec2_facts.ansible_facts.ansible_ec2_instance_id }} | {{ ec2_facts.ansible_facts.ansible_ec2_instance_type }} | {{ ec2_tags.tags.Name }} | {{ ec2_facts.ansible_facts.ansible_ec2_public_hostname }}"
    tags:
      - create


Yes, parts of it are very contrived, but it gets the job done the way I want it to.

@Others: Yes, the cattle model is the way to go, but while we're getting there, Ansible must still do what we want. And thus continue our Sisyphean labours ;)

Sankalp Khare

Oct 29, 2014, 5:06:11 AM
to ansible...@googlegroups.com, pas...@gmail.com, mic...@ansibleworks.com, tann...@gmail.com
I must also add that I've got a central group_vars/all file from which I pull all the variables like region, instance types, environment-specific load balancer names, etc.
...
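For completeness, the shape of that file, inferred from the variable references in the playbook above, looks roughly like this (all concrete values are placeholders):

# group_vars/all (illustrative values only)
region: us-east-1

launch_key:
  ec2_classic: my-keypair              # placeholder key pair name

instance:
  web:
    production: m3.large
    staging: t1.micro

elb_names:
  web:
    production: [web-production-elb]
    staging: [web-staging-elb]

os:
  amazon:
    name: Amazon Linux
    ami_id: ami-xxxxxxxx               # placeholder AMI
    user: ec2-user

ephemeral:
  - /dev/sdb
  - /dev/sdc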

gabrie...@minted.com

Aug 31, 2016, 8:09:54 PM
to Ansible Project, pas...@gmail.com, mic...@ansibleworks.com, tann...@gmail.com
Thank you very much for answering the question.

Gabriel