This is an interesting side effect of using local tasks versus a separate play targeting hosts: localhost (127.0.0.1).
In my case I was running a playbook and limiting the hosts on the command line with the -l option:
ansible-playbook -l host1 site.yml
In the playbook I had the following play:

- hosts: localhost
  user: root
  tasks:
    - file: path="tmp/file1.txt" state=absent
    - file: path="tmp/file2.txt" state=absent
All of the plays in the playbook finished successfully, yet at the end I got the error listed in the subject of this post: FATAL: all hosts have already failed -- aborting.
Looking at the source (lib/ansible/callbacks.py and lib/ansible/playbook/__init__.py), this error seems to be emitted when (though not only when) the difference between the expected host count and the actual host count for a play exceeds the maximum failure percentage (in my case 0).
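To make the failure condition concrete, here is a minimal sketch of how such a check behaves. This is a paraphrase of the logic as I understand it, not the actual Ansible source; the function name and signature are my own invention:

```python
def exceeds_max_fail_pct(expected_hosts, remaining_hosts, max_fail_pct):
    """Hypothetical paraphrase of the play-level failure check:
    abort when more hosts dropped out of the run than the allowed
    percentage of the expected host count permits."""
    failed = expected_hosts - remaining_hosts
    return failed > (max_fail_pct / 100.0) * expected_hosts

# With a max failure percentage of 0, even one "missing" host
# (e.g. localhost excluded by the -l limit) trips the check:
print(exceeds_max_fail_pct(1, 0, 0))   # True
print(exceeds_max_fail_pct(2, 2, 0))   # False
```

Under this reading, a play whose only host is filtered out by the limit looks indistinguishable from a play where every host failed, which would explain the FATAL message at the end of an otherwise successful run.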
I figured that having an explicit play against localhost might be confusing the counts, so I changed my code to:

- hosts: dhcp_servers
  user: root
  tasks:
    - local_action: file path="tmp/file1.txt" state=absent
    - local_action: file path="tmp/file2.txt" state=absent
The FATAL error at the end of the run disappeared. Not sure if it matters, but my inventory file does contain this line:
localhost ansible_ssh_host=127.0.0.1 ansible_connection=local
Something doesn't seem right here: an explicit localhost play that runs fine shouldn't trip the failure accounting just because of a -l limit.