I'm trying to create two different AWS EC2 instances within the same playbook, each with different tags. This previously worked reliably, but after a few runs only the first instance gets created, and Ansible silently skips the creation of the second one (without even reporting "skipping"), even though that instance doesn't exist yet.
(playbook)

    ---
    - name: Spin up postgresql instance(s)
      hosts: localhost
      vars:
        app_type: postgresql
        quantity: 1
      roles:
        - ec2

    - name: Spin up geoserver instance(s)
      hosts: localhost
      vars:
        app_type: geoserver
        quantity: 1
      roles:
        - ec2
(ec2 role)

    ---
    - name: Create ec2 instance
      local_action:
        module: ec2
        image: "{{ image }}"
        instance_type: "{{ instance_type }}"
        #aws_access_key: "{{ ec2_access_key }}"
        #aws_secret_key: "{{ ec2_secret_key }}"
        keypair: "{{ keypair }}"
        count_tag:
          App: "{{ app_type }}"
        instance_tags:
          App: "{{ app_type }}"
        exact_count: 1  # "{{ quantity }}" # weird behaviour when the variable is used
        region: "{{ region }}"
        #group: "{{ group }}"
        #vpc_subnet_id: "{{ vpc_subnet }}"
        wait: true
      register: ec2_info

    - add_host: hostname={{ item.public_ip }} groupname=tag_{{ app_type }}
      with_items: "{{ ec2_info.instances }}"
      when: ec2_info|changed

    - name: wait for instances to listen on port 22
      wait_for: host={{ item.public_dns_name }} port=22 state=started search_regex=OpenSSH delay=10 timeout=320
      with_items: "{{ ec2_info.instances }}"

    - name: Add access key
      shell: ssh-add "{{ path_to_key }}"

    - meta: refresh_inventory  # changed cache_max_age = 0 in ec2.ini
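As an aside on the `exact_count` oddity: what I'd actually like to use instead of the hard-coded `1` is the `quantity` play variable, shown below. I've considered forcing an integer with the `int` filter, but I haven't confirmed whether that avoids the weird behaviour:

    # what I want instead of the hard-coded 1; `quantity` is the
    # play-level var, cast to an integer since Jinja output is a string
    exact_count: "{{ quantity | int }}"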

Above is how a run progresses: the first instance is created, the second is not, and the playbook instead goes straight on to configuring the instances (causing obvious errors).
The issue seems to be that `meta: refresh_inventory` removes localhost from the inventory (the ec2.py dynamic inventory doesn't define it), so the second play has no hosts to run on. I'm not sure whether that's expected behaviour or how to fix it.
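One workaround I'm considering (untested, and the file layout here is just my guess) is pointing `ansible-playbook -i` at an inventory directory containing both ec2.py and a static file that pins localhost, so a refresh can't drop it:

    ; inventory/static.ini -- kept next to inventory/ec2.py so that
    ; localhost survives `meta: refresh_inventory`
    [local]
    localhost ansible_connection=local

Would that be the right approach, or is there a cleaner fix?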
Ansible version: 2.0.2.0