add_host module not adding multiple hosts


Eric S

May 23, 2016, 1:15:28 PM
to Ansible Project
Ansible version: 2.0.2

When I try to add more than one host to a group, it's only adding the first one in the list.  Here is my task to add hosts:
- name: Add hosts group temporary inventory group
  add_host: name={{ item.private_ip }} groups=dynamic
  with_items: "{{ ec2.instances }}"

This task uses ec2.instances properly and runs for all the hosts that I have created:
- name: Wait for SSH
  wait_for:
      host: "{{ item.private_ip }}"
      port: 22
      delay: 10
      timeout: 320
      state: started
  with_items: "{{ ec2.instances }}"

So I'm creating 3 hosts, and the add_host task only adds the first one, but the Wait for SSH task waits for all three and returns ok for all three.

Eric S

May 24, 2016, 1:41:44 PM
to Ansible Project
Anyone have an idea why it's working like this?  I've tried every version of Ansible (1.9/2.0.0/2.0.1/2.0.2/2.1), all with the same result.

Matt Martz

May 24, 2016, 1:45:07 PM
to ansible...@googlegroups.com
It might help if you provided the output of your playbook run with `-v` so we can see what you are seeing.

On Tue, May 24, 2016 at 12:41 PM, Eric S <erics...@gmail.com> wrote:
anyone have an idea of why this is working like this?  I've tried all version of ansible 1.9/2.0.0/2.0.1/2.0.2/2.1 and all the same result.

--
You received this message because you are subscribed to the Google Groups "Ansible Project" group.
To unsubscribe from this group and stop receiving emails from it, send an email to ansible-proje...@googlegroups.com.
To post to this group, send email to ansible...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/ansible-project/ffde348d-eebd-4109-b179-31653d5e81cb%40googlegroups.com.

For more options, visit https://groups.google.com/d/optout.



--
Matt Martz
@sivel
sivel.net

Eric S

May 24, 2016, 2:29:12 PM
to Ansible Project
Sorry for the long output:
PLAY [ec2_instances] ***********************************************************

TASK [setup] *******************************************************************
Tuesday 24 May 2016  13:24:42 -0500 (0:00:00.019)       0:00:00.019 ***********
ok: [aaa-frs-ans-test2]
ok: [aaa-frs-ans-test3]
ok: [aaa-frs-ans-test1]

TASK [Launch Instance] *********************************************************
Tuesday 24 May 2016  13:24:46 -0500 (0:00:04.019)       0:00:04.038 ***********
changed: [aaa-frs-ans-test3] => {"changed": true, "instance_ids": ["i-683925ef"], "instances": [{"ami_launch_index": "0", "architecture": "x86_64", "block_device_mapping": {"/dev/sda1": {"delete_on_termination": true, "status": "attached", "volume_id": "vol-712c60da"}}, "dns_name": "", "ebs_optimized": false, "groups": {"sg-7c538d04": "fsr-allow-all"}, "hypervisor": "xen", "id": "i-683925ef", "image_id": "ami-3683605b", "instance_type": "t2.micro", "kernel": null, "key_name": "FSR_DEV_QA_KEYPAIR", "launch_time": "2016-05-24T18:24:48.000Z", "placement": "us-east-1a", "private_dns_name": "ip-10-196-142-189.ec2.internal", "private_ip": "10.196.142.189", "public_dns_name": "", "public_ip": null, "ramdisk": null, "region": "us-east-1", "root_device_name": "/dev/sda1", "root_device_type": "ebs", "state": "running", "state_code": 16, "tags": {"Env": "Dev", "Name": "aaa-fsr-ans-test2"}, "tenancy": "default", "virtualization_type": "hvm"}], "tagged_instances": []}
changed: [aaa-frs-ans-test2] => {"changed": true, "instance_ids": ["i-b53a2632"], "instances": [{"ami_launch_index": "0", "architecture": "x86_64", "block_device_mapping": {"/dev/sda1": {"delete_on_termination": true, "status": "attached", "volume_id": "vol-1a2c60b1"}}, "dns_name": "", "ebs_optimized": false, "groups": {"sg-7c538d04": "fsr-allow-all"}, "hypervisor": "xen", "id": "i-b53a2632", "image_id": "ami-3683605b", "instance_type": "t2.micro", "kernel": null, "key_name": "FSR_DEV_QA_KEYPAIR", "launch_time": "2016-05-24T18:24:48.000Z", "placement": "us-east-1a", "private_dns_name": "ip-10-196-141-57.ec2.internal", "private_ip": "10.196.141.57", "public_dns_name": "", "public_ip": null, "ramdisk": null, "region": "us-east-1", "root_device_name": "/dev/sda1", "root_device_type": "ebs", "state": "running", "state_code": 16, "tags": {"Env": "Dev", "Name": "aaa-fsr-ans-test2"}, "tenancy": "default", "virtualization_type": "hvm"}], "tagged_instances": []}
changed: [aaa-frs-ans-test1] => {"changed": true, "instance_ids": ["i-4f3a26c8"], "instances": [{"ami_launch_index": "0", "architecture": "x86_64", "block_device_mapping": {"/dev/sda1": {"delete_on_termination": true, "status": "attached", "volume_id": "vol-722c60d9"}}, "dns_name": "", "ebs_optimized": false, "groups": {"sg-7c538d04": "fsr-allow-all"}, "hypervisor": "xen", "id": "i-4f3a26c8", "image_id": "ami-3683605b", "instance_type": "t2.micro", "kernel": null, "key_name": "FSR_DEV_QA_KEYPAIR", "launch_time": "2016-05-24T18:24:48.000Z", "placement": "us-east-1a", "private_dns_name": "ip-10-196-145-104.ec2.internal", "private_ip": "10.196.145.104", "public_dns_name": "", "public_ip": null, "ramdisk": null, "region": "us-east-1", "root_device_name": "/dev/sda1", "root_device_type": "ebs", "state": "running", "state_code": 16, "tags": {"Env": "Dev", "Name": "aaa-fsr-ans-test1"}, "tenancy": "default", "virtualization_type": "hvm"}], "tagged_instances": []}

TASK [Add hosts group temporary inventory group] *******************************
Tuesday 24 May 2016  13:25:29 -0500 (0:00:43.674)       0:00:47.713 ***********
changed: [aaa-frs-ans-test1] => (item={u'kernel': None, u'root_device_type': u'ebs', u'private_dns_name': u'ip-10-196-145-104.ec2.internal', u'public_ip': None, u'private_ip': u'10.196.145.104', u'id': u'i-4f3a26c8', u'ebs_optimized': False, u'state': u'running', u'virtualization_type': u'hvm', u'architecture': u'x86_64', u'ramdisk': None, u'block_device_mapping': {u'/dev/sda1': {u'status': u'attached', u'delete_on_termination': True, u'volume_id': u'vol-722c60d9'}}, u'key_name': u'FSR_DEV_QA_KEYPAIR', u'image_id': u'ami-3683605b', u'tenancy': u'default', u'groups': {u'sg-7c538d04': u'fsr-allow-all'}, u'public_dns_name': u'', u'state_code': 16, u'tags': {u'Name': u'aaa-fsr-ans-test1', u'Env': u'Dev'}, u'placement': u'us-east-1a', u'ami_launch_index': u'0', u'dns_name': u'', u'region': u'us-east-1', u'launch_time': u'2016-05-24T18:24:48.000Z', u'instance_type': u't2.micro', u'root_device_name': u'/dev/sda1', u'hypervisor': u'xen'}) => {"add_host": {"groups": ["dynamic"], "host_name": "10.196.145.104", "host_vars": {}}, "changed": true, "item": {"ami_launch_index": "0", "architecture": "x86_64", "block_device_mapping": {"/dev/sda1": {"delete_on_termination": true, "status": "attached", "volume_id": "vol-722c60d9"}}, "dns_name": "", "ebs_optimized": false, "groups": {"sg-7c538d04": "fsr-allow-all"}, "hypervisor": "xen", "id": "i-4f3a26c8", "image_id": "ami-3683605b", "instance_type": "t2.micro", "kernel": null, "key_name": "FSR_DEV_QA_KEYPAIR", "launch_time": "2016-05-24T18:24:48.000Z", "placement": "us-east-1a", "private_dns_name": "ip-10-196-145-104.ec2.internal", "private_ip": "10.196.145.104", "public_dns_name": "", "public_ip": null, "ramdisk": null, "region": "us-east-1", "root_device_name": "/dev/sda1", "root_device_type": "ebs", "state": "running", "state_code": 16, "tags": {"Env": "Dev", "Name": "aaa-fsr-ans-test1"}, "tenancy": "default", "virtualization_type": "hvm"}}

TASK [Wait for SSH] ************************************************************
Tuesday 24 May 2016  13:25:29 -0500 (0:00:00.046)       0:00:47.759 ***********
ok: [aaa-frs-ans-test1] => (item={u'kernel': None, u'root_device_type': u'ebs', u'private_dns_name': u'ip-10-196-145-104.ec2.internal', u'public_ip': None, u'private_ip': u'10.196.145.104', u'id': u'i-4f3a26c8', u'ebs_optimized': False, u'state': u'running', u'virtualization_type': u'hvm', u'architecture': u'x86_64', u'ramdisk': None, u'block_device_mapping': {u'/dev/sda1': {u'status': u'attached', u'delete_on_termination': True, u'volume_id': u'vol-722c60d9'}}, u'key_name': u'FSR_DEV_QA_KEYPAIR', u'image_id': u'ami-3683605b', u'tenancy': u'default', u'groups': {u'sg-7c538d04': u'fsr-allow-all'}, u'public_dns_name': u'', u'state_code': 16, u'tags': {u'Name': u'aaa-fsr-ans-test1', u'Env': u'Dev'}, u'placement': u'us-east-1a', u'ami_launch_index': u'0', u'dns_name': u'', u'region': u'us-east-1', u'launch_time': u'2016-05-24T18:24:48.000Z', u'instance_type': u't2.micro', u'root_device_name': u'/dev/sda1', u'hypervisor': u'xen'}) => {"changed": false, "elapsed": 72, "item": {"ami_launch_index": "0", "architecture": "x86_64", "block_device_mapping": {"/dev/sda1": {"delete_on_termination": true, "status": "attached", "volume_id": "vol-722c60d9"}}, "dns_name": "", "ebs_optimized": false, "groups": {"sg-7c538d04": "fsr-allow-all"}, "hypervisor": "xen", "id": "i-4f3a26c8", "image_id": "ami-3683605b", "instance_type": "t2.micro", "kernel": null, "key_name": "FSR_DEV_QA_KEYPAIR", "launch_time": "2016-05-24T18:24:48.000Z", "placement": "us-east-1a", "private_dns_name": "ip-10-196-145-104.ec2.internal", "private_ip": "10.196.145.104", "public_dns_name": "", "public_ip": null, "ramdisk": null, "region": "us-east-1", "root_device_name": "/dev/sda1", "root_device_type": "ebs", "state": "running", "state_code": 16, "tags": {"Env": "Dev", "Name": "aaa-fsr-ans-test1"}, "tenancy": "default", "virtualization_type": "hvm"}, "path": null, "port": 22, "search_regex": null, "state": "started"}
ok: [aaa-frs-ans-test2] => (item={u'kernel': None, u'root_device_type': u'ebs', u'private_dns_name': u'ip-10-196-141-57.ec2.internal', u'public_ip': None, u'private_ip': u'10.196.141.57', u'id': u'i-b53a2632', u'ebs_optimized': False, u'state': u'running', u'virtualization_type': u'hvm', u'architecture': u'x86_64', u'ramdisk': None, u'block_device_mapping': {u'/dev/sda1': {u'status': u'attached', u'delete_on_termination': True, u'volume_id': u'vol-1a2c60b1'}}, u'key_name': u'FSR_DEV_QA_KEYPAIR', u'image_id': u'ami-3683605b', u'tenancy': u'default', u'groups': {u'sg-7c538d04': u'fsr-allow-all'}, u'public_dns_name': u'', u'state_code': 16, u'tags': {u'Name': u'aaa-fsr-ans-test2', u'Env': u'Dev'}, u'placement': u'us-east-1a', u'ami_launch_index': u'0', u'dns_name': u'', u'region': u'us-east-1', u'launch_time': u'2016-05-24T18:24:48.000Z', u'instance_type': u't2.micro', u'root_device_name': u'/dev/sda1', u'hypervisor': u'xen'}) => {"changed": false, "elapsed": 72, "item": {"ami_launch_index": "0", "architecture": "x86_64", "block_device_mapping": {"/dev/sda1": {"delete_on_termination": true, "status": "attached", "volume_id": "vol-1a2c60b1"}}, "dns_name": "", "ebs_optimized": false, "groups": {"sg-7c538d04": "fsr-allow-all"}, "hypervisor": "xen", "id": "i-b53a2632", "image_id": "ami-3683605b", "instance_type": "t2.micro", "kernel": null, "key_name": "FSR_DEV_QA_KEYPAIR", "launch_time": "2016-05-24T18:24:48.000Z", "placement": "us-east-1a", "private_dns_name": "ip-10-196-141-57.ec2.internal", "private_ip": "10.196.141.57", "public_dns_name": "", "public_ip": null, "ramdisk": null, "region": "us-east-1", "root_device_name": "/dev/sda1", "root_device_type": "ebs", "state": "running", "state_code": 16, "tags": {"Env": "Dev", "Name": "aaa-fsr-ans-test2"}, "tenancy": "default", "virtualization_type": "hvm"}, "path": null, "port": 22, "search_regex": null, "state": "started"}
ok: [aaa-frs-ans-test3] => (item={u'kernel': None, u'root_device_type': u'ebs', u'private_dns_name': u'ip-10-196-142-189.ec2.internal', u'public_ip': None, u'private_ip': u'10.196.142.189', u'id': u'i-683925ef', u'ebs_optimized': False, u'state': u'running', u'virtualization_type': u'hvm', u'architecture': u'x86_64', u'ramdisk': None, u'block_device_mapping': {u'/dev/sda1': {u'status': u'attached', u'delete_on_termination': True, u'volume_id': u'vol-712c60da'}}, u'key_name': u'FSR_DEV_QA_KEYPAIR', u'image_id': u'ami-3683605b', u'tenancy': u'default', u'groups': {u'sg-7c538d04': u'fsr-allow-all'}, u'public_dns_name': u'', u'state_code': 16, u'tags': {u'Name': u'aaa-fsr-ans-test2', u'Env': u'Dev'}, u'placement': u'us-east-1a', u'ami_launch_index': u'0', u'dns_name': u'', u'region': u'us-east-1', u'launch_time': u'2016-05-24T18:24:48.000Z', u'instance_type': u't2.micro', u'root_device_name': u'/dev/sda1', u'hypervisor': u'xen'}) => {"changed": false, "elapsed": 72, "item": {"ami_launch_index": "0", "architecture": "x86_64", "block_device_mapping": {"/dev/sda1": {"delete_on_termination": true, "status": "attached", "volume_id": "vol-712c60da"}}, "dns_name": "", "ebs_optimized": false, "groups": {"sg-7c538d04": "fsr-allow-all"}, "hypervisor": "xen", "id": "i-683925ef", "image_id": "ami-3683605b", "instance_type": "t2.micro", "kernel": null, "key_name": "FSR_DEV_QA_KEYPAIR", "launch_time": "2016-05-24T18:24:48.000Z", "placement": "us-east-1a", "private_dns_name": "ip-10-196-142-189.ec2.internal", "private_ip": "10.196.142.189", "public_dns_name": "", "public_ip": null, "ramdisk": null, "region": "us-east-1", "root_device_name": "/dev/sda1", "root_device_type": "ebs", "state": "running", "state_code": 16, "tags": {"Env": "Dev", "Name": "aaa-fsr-ans-test2"}, "tenancy": "default", "virtualization_type": "hvm"}, "path": null, "port": 22, "search_regex": null, "state": "started"}

TASK [Wait a little longer for centos] *****************************************
Tuesday 24 May 2016  13:26:42 -0500 (0:01:12.287)       0:02:00.047 ***********
Pausing for 20 seconds
(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
ok: [aaa-frs-ans-test1] => {"changed": false, "delta": 19, "rc": 0, "start": "2016-05-24 13:26:42.068693", "stderr": "", "stdout": "Paused for 20.0 seconds", "stop": "2016-05-24 13:27:02.068025", "user_input": ""}

PLAY [dynamic] *****************************************************************

TASK [setup] *******************************************************************
Tuesday 24 May 2016  13:27:02 -0500 (0:00:20.040)       0:02:20.087 ***********
ok: [10.196.145.104]

TASK [ansible-hostname-master : gather facts] **********************************
Tuesday 24 May 2016  13:27:06 -0500 (0:00:04.062)       0:02:24.150 ***********
ok: [10.196.145.104] => {"ansible_facts": {"ansible_ec2_ami_id": "ami-3683605b", "ansible_ec2_ami_launch_index": "0", "ansible_ec2_ami_manifest_path": "(unknown)", "ansible_ec2_block_device_mapping_ami": "/dev/sda1", "ansible_ec2_block_device_mapping_root": "/dev/sda1", "ansible_ec2_hostname": "ip-10-196-145-104.aws.foreseeresults.com", "ansible_ec2_instance_action": "none", "ansible_ec2_instance_id": "i-4f3a26c8", "ansible_ec2_instance_type": "t2.micro", "ansible_ec2_local_hostname": "ip-10-196-145-104.aws.foreseeresults.com", "ansible_ec2_local_ipv4": "10.196.145.104", "ansible_ec2_mac": "0a:16:b5:46:d4:a1", "ansible_ec2_metrics_vhostmd": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>", "ansible_ec2_network_interfaces_macs_0a_16_b5_46_d4_a1_device_number": "0", "ansible_ec2_network_interfaces_macs_0a_16_b5_46_d4_a1_interface_id": "eni-5d46c71e", "ansible_ec2_network_interfaces_macs_0a_16_b5_46_d4_a1_local_hostname": "ip-10-196-145-104.aws.foreseeresults.com", "ansible_ec2_network_interfaces_macs_0a_16_b5_46_d4_a1_local_ipv4s": "10.196.145.104", "ansible_ec2_network_interfaces_macs_0a_16_b5_46_d4_a1_mac": "0a:16:b5:46:d4:a1", "ansible_ec2_network_interfaces_macs_0a_16_b5_46_d4_a1_owner_id": "325603188418", "ansible_ec2_network_interfaces_macs_0a_16_b5_46_d4_a1_security_group_ids": "sg-7c538d04", "ansible_ec2_network_interfaces_macs_0a_16_b5_46_d4_a1_security_groups": "fsr-allow-all", "ansible_ec2_network_interfaces_macs_0a_16_b5_46_d4_a1_subnet_id": "subnet-4aa02e3c", "ansible_ec2_network_interfaces_macs_0a_16_b5_46_d4_a1_subnet_ipv4_cidr_block": "10.196.128.0/19", "ansible_ec2_network_interfaces_macs_0a_16_b5_46_d4_a1_vpc_id": "vpc-debf9bba", "ansible_ec2_network_interfaces_macs_0a_16_b5_46_d4_a1_vpc_ipv4_cidr_block": "10.196.0.0/16", "ansible_ec2_placement_availability_zone": "us-east-1a", "ansible_ec2_placement_region": "us-east-1", "ansible_ec2_product_codes": "aw0evgkw8e5c1q413zgy5pjce", "ansible_ec2_profile": "default-hvm", "ansible_ec2_public_key": "ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABAQDGUJO+rS5fIh3bGcKrQzt66hywpxcVQMhz7rb4WyeMtii2gZ3PHb2mnxrTaLE5Q7g6tL8aBfluzASeP0dijWtSDM7YGD1TR6sYEHysxcHRd7dJLhoEsJzwwa8HgRnus66FKnCNtV9XahG/BdAqqDpcWtkmJfkegmzgJ4rjP46EKGW4CZ3JF1/sjVgHAxsU7tv6WWoaCv9+zqKTwWGlBbDHRQxrTmHzWsQlmrpBdr2OnUPSsGMmOcdFX89T4+0T8BQt8nJusFeVV2Nj0jGRFs6fy/a9+wOKLYYLer80Z5RP3NhKVxIEZWfZDN93McBaKRKaQJoj1VnZi0O8yW6P+0rR FSR_DEV_QA_KEYPAIR\n", "ansible_ec2_reservation_id": "r-f0f66425", "ansible_ec2_security_groups": "fsr-allow-all", "ansible_ec2_services_domain": "amazonaws.com", "ansible_ec2_services_partition": "aws", "ansible_ec2_user_data": null}, "changed": false}

TASK [ansible-hostname-master : get resource tags] *****************************
Tuesday 24 May 2016  13:27:07 -0500 (0:00:01.044)       0:02:25.194 ***********
ok: [10.196.145.104 -> localhost] => {"changed": false, "tags": {"Env": "Dev", "Name": "aaa-fsr-ans-test1"}}

TASK [ansible-hostname-master : Keep temporary old hostname in /etc/hosts] *****
Tuesday 24 May 2016  13:27:07 -0500 (0:00:00.423)       0:02:25.617 ***********
ok: [10.196.145.104] => {"backup": "", "changed": false, "msg": ""}

TASK [ansible-hostname-master : Set hostname] **********************************
Tuesday 24 May 2016  13:27:08 -0500 (0:00:00.826)       0:02:26.444 ***********
changed: [10.196.145.104] => {"ansible_facts": {"ansible_domain": "", "ansible_fqdn": "aaa-fsr-ans-test1", "ansible_hostname": "aaa-fsr-ans-test1", "ansible_nodename": "aaa-fsr-ans-test1"}, "changed": true, "name": "aaa-fsr-ans-test1"}

TASK [ansible-hostname-master : Update /etc/hosts] *****************************
Tuesday 24 May 2016  13:27:10 -0500 (0:00:02.190)       0:02:28.634 ***********
changed: [10.196.145.104] => {"backup": "", "changed": true, "msg": "line added"}

TASK [ansible-hostname-master : add the preserve_hostname true to /etc/cloud/cloud.cfg] ***
Tuesday 24 May 2016  13:27:11 -0500 (0:00:00.837)       0:02:29.471 ***********
changed: [10.196.145.104] => {"backup": "", "changed": true, "msg": "line added"}

PLAY RECAP *********************************************************************
10.196.145.104             : ok=7    changed=3    unreachable=0    failed=0
aaa-frs-ans-test1          : ok=5    changed=2    unreachable=0    failed=0
aaa-frs-ans-test2          : ok=3    changed=1    unreachable=0    failed=0
aaa-frs-ans-test3          : ok=3    changed=1    unreachable=0    failed=0

Tuesday 24 May 2016  13:27:12 -0500 (0:00:00.774)       0:02:30.246 ***********
===============================================================================
TASK: Wait for SSH ----------------------------------------------------- 72.29s
TASK: Launch Instance -------------------------------------------------- 43.67s
TASK: Wait a little longer for centos ---------------------------------- 20.04s
TASK: setup ------------------------------------------------------------- 4.06s
TASK: setup ------------------------------------------------------------- 4.02s
TASK: ansible-hostname-master : Set hostname ---------------------------- 2.19s
TASK: ansible-hostname-master : gather facts ---------------------------- 1.04s
TASK: ansible-hostname-master : Update /etc/hosts ----------------------- 0.84s
TASK: ansible-hostname-master : Keep temporary old hostname in /etc/hosts --- 0.83s
TASK: ansible-hostname-master : add the preserve_hostname true to /etc/cloud/cloud.cfg --- 0.77s
TASK: ansible-hostname-master : get resource tags ----------------------- 0.42s
TASK: Add hosts group temporary inventory group ------------------------- 0.05s

Matt Martz

May 24, 2016, 2:51:23 PM
to ansible...@googlegroups.com
I'm not quite sure what you are doing.  You omitted the task that creates the instances.  However, it looks like you are targeting 3 hosts:

aaa-frs-ans-test1
aaa-frs-ans-test2
aaa-frs-ans-test3

And you are creating 1 instance for each host.

add_host only runs on the first host in the group, so you see it run only for aaa-frs-ans-test1.  It is called a "bypass host loop" plugin, meaning that it doesn't execute for each host you target in your play.

You are trying to loop over ec2.instances, but there is only 1 instance in that result, since you created one instance per host, and registered vars are scoped to hosts.

Instead of targeting those 3 hosts, maybe you should instead target just localhost, and build your ec2 instances using a with_items to create 3 hosts.
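Roughly something like this (untested sketch; the hostnames and vars here are made up for illustration, not taken from your playbook):

```yaml
# Sketch only: hostnames, subnet, and keypair vars are illustrative.
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Launch one instance per desired hostname
      ec2:
        instance_type: 't2.micro'
        image: "{{ image }}"
        wait: true
        region: 'us-east-1'
        keypair: "{{ key_pair }}"
        vpc_subnet_id: "{{ subnet }}"
        instance_tags:
          Name: "{{ item }}"
      with_items:
        - aaa-frs-ans-test1
        - aaa-frs-ans-test2
        - aaa-frs-ans-test3
      register: ec2

    # Results for all 3 launches live in ec2.results on this single
    # inventory host, so one add_host loop sees every new instance.
    - name: Add hosts to temporary inventory group
      add_host: name={{ item.instances.0.private_ip }} groups=dynamic
      with_items: "{{ ec2.results }}"
```

Since the whole play targets only localhost, the registered variable is no longer split across 3 inventory hosts.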

But again, still not really sure what you are doing, and if you have reasons for what you have done so far.


Eric S

May 24, 2016, 3:07:52 PM
to Ansible Project
Here's the create task:
- hosts: ec2_instances
  connection: local
  gather_facts: true
  tasks:
    - name: Launch Instance
      ec2:
        group_id: "{{ hostvars[inventory_hostname].group_id }}"
        instance_type: 't2.micro'
        image: "{{ hostvars[inventory_hostname].image }}"
        wait: true
        region: 'us-east-1'
        keypair: "{{ key_pair }}"
        vpc_subnet_id: "{{ subnet }}"
        instance_tags: "{{ hostvars[inventory_hostname].tags }}"
      register: ec2_info

But why does that module not loop through ec2.instances like the others do?  Also, there are examples online that do it that way, using the add_host module with {var}.instances.

Johannes Kastl

May 24, 2016, 4:14:31 PM
to ansible...@googlegroups.com
On 24.05.16 21:07 Eric S wrote:
> Here's the create task:
> - hosts: ec2_instances
> connection: local

Those two seem to contradict each other. Or is the local connection something special used for EC2 stuff?

Johannes


Eric S

May 24, 2016, 4:35:50 PM
to Ansible Project
Yes, when launching EC2 instances you use the local connection.  They all launch fine and get picked up by the Wait for SSH task:

- name: Wait for SSH
  wait_for:
    host: "{{ item.private_ip }}"
    port: 22
    delay: 10
    timeout: 320
    state: started
  with_items: "{{ ec2_info.instances }}"

just not the add_host task.

Eric S

May 27, 2016, 12:42:14 PM
to Ansible Project
Is it an issue with the add_host module?  I even tried upgrading to 2.1 and am still experiencing the same thing.  Can the add_host module only add one host at a time?

Matt Martz

May 27, 2016, 12:56:48 PM
to ansible...@googlegroups.com
No, I tried to explain this in a previous response.

add_host is what is called a "bypass host loop" plugin.  This means that add_host will only execute for the first host targeted by your `hosts` specification.  Your `hosts` targets a group containing the following hosts:

aaa-frs-ans-test1
aaa-frs-ans-test2
aaa-frs-ans-test3

As such, add_host only runs for aaa-frs-ans-test1.

The problem is with how you have structured your tasks.  You have instructed Ansible to build a new server for each one of these servers.  The `ec2` variable that you register gets stored per host (aaa-frs-ans-test1, aaa-frs-ans-test2, aaa-frs-ans-test3).  This means that `ec2.instances` only holds the results of a single host build, per host.

Most people only target localhost, instead of a group that contains multiple hosts.

They then build servers with the ec2 module using exact_count or with_items, so all of the results for all new instances are stored with 1 inventory host.
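The exact_count variant might look roughly like this (untested sketch; the tag values here are made up for illustration):

```yaml
# Sketch only: tag values and vars are illustrative.
- hosts: localhost
  connection: local
  tasks:
    - name: Ensure exactly 3 instances with this tag exist
      ec2:
        instance_type: 't2.micro'
        image: "{{ image }}"
        wait: true
        region: 'us-east-1'
        keypair: "{{ key_pair }}"
        vpc_subnet_id: "{{ subnet }}"
        exact_count: 3
        count_tag:
          Env: Dev
        instance_tags:
          Env: Dev
      register: ec2

    # Newly created instances land in ec2.instances; instances that
    # already matched count_tag show up in ec2.tagged_instances.
    - name: Add hosts to temporary inventory group
      add_host: name={{ item.private_ip }} groups=dynamic
      with_items: "{{ ec2.instances }}"
```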

So again, you are building 3 new instances, using 3 old instances, so effectively you end up with an `ec2` variable stored for each host (aaa-frs-ans-test1, aaa-frs-ans-test2, aaa-frs-ans-test3).

However add_host only runs on the first host (bypass host loop plugin), and thus only has access to the `ec2` var stored for aaa-frs-ans-test1.

Another solution would be to do something like:


- name: Add hosts group temporary inventory group
  add_host: name={{ hostvars[item]['ec2'].instances.0.private_ip }} groups=dynamic
  with_items: "{{ play_hosts }}"


However, that assumes you have always created exactly 1 instance via the ec2 task per host in the play.

Another option, and this is completely untested, is to use the `extract` filter, something like:


- name: Add hosts group temporary inventory group
  add_host: name={{ item.private_ip }} groups=dynamic
  with_items: "{{ play_hosts|map('extract', hostvars, ['ec2', 'instances'])|list }}"




On Fri, May 27, 2016 at 11:42 AM, Eric S <erics...@gmail.com> wrote:
is it an issue with the add_host module?  I even tried upgrading to 2.1 and still am experiencing the same thing.  Can the add_host module only add one host at a time?


Eric S

May 27, 2016, 1:49:09 PM
to Ansible Project
Thanks for the deeper explanation, I get it now.

When you say "Most people only target localhost, instead of a group that contains multiple hosts," do you mean when I'm applying a role?  Because in the same playbook as the EC2 instance creation, I'm applying roles to the newly created instances.  So the whole playbook is:
- hosts: ec2_test
  connection: local
  gather_facts: true
  tasks:
    - name: Launch Instance
      ec2:
        group_id: "{{ groupID }}"
        instance_type: 't2.micro'
        image: "{{ image }}"
        wait: true
        region: "{{ region }}"
        keypair: "{{ keypair }}"
        vpc_subnet_id: "{{ subnet }}"
        instance_tags:
          Name: "{{ inventory_hostname }}"
          ENV: "{{ tagEnv }}"
      register: ec2

    - name: Add hosts to group
      add_host: name="{{ hostvars[item]['ec2'].instances.0.private_ip }}" groups=dynamic
      with_items: "{{ play_hosts }}"

    - name: Wait for SSH
      wait_for:
        host: "{{ item.private_ip }}"
        port: 22
        delay: 10
        timeout: 320
        state: started
      with_items: "{{ ec2.instances }}"

    - name: Wait a little longer for centos
      pause: seconds=20


- hosts: dynamic
  gather_facts: yes
  sudo: yes
  roles:
    - hostname