I am creating a playbook that is used for multiple hosts. Now, this might not be the best way, but I still want to figure it out. I have two hosts: one running CentOS 7 and one an Amazon EC2 instance (Amazon Linux). Obviously one uses systemd and the other does not. All the tasks apply to both hosts except for stopping firewalld.
---
- hosts: main
  user: root
  tasks:
    - name: Ensure firewalld is stopped
      systemd:
        name: firewalld
        state: stopped
        masked: yes

    - name: Disable SELinux
      selinux:
        state: disabled

    - name: Ensure we have latest updates
      yum:
        name: "*"
        state: latest
Once it gets to this task, one host is "ok" and the other host (the Amazon EC2 instance) fails, for obvious reasons. There are about 10 other tasks after this that then only run on the local CentOS 7 server, and of course the Amazon EC2 instance does not get included.
TASK [Ensure firewalld is stopped] *********************************************
ok: [ip of centos 7]
fatal: [ip of amazon ec2]: FAILED! => {"changed": false, "cmd": "None show firewalld", "failed": true, "msg": "[Errno 2] No such file or directory", "rc": 2}
Right after that you see this:
TASK [Disable Selinux] *********************************************************
ok: [ip of centos 7]
and of course nothing for the Amazon EC2 instance. How can I keep the remaining tasks running on the failed host?
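For reference, here is a minimal sketch of the two approaches I have been considering, assuming the gathered `ansible_service_mgr` fact correctly distinguishes the two hosts (I have not verified what it reports on the EC2 instance):

```yaml
# Option 1: only run the task on hosts whose service manager is
# systemd. Relies on fact gathering (on by default), and assumes
# ansible_service_mgr is populated correctly on both hosts.
- name: Ensure firewalld is stopped
  systemd:
    name: firewalld
    state: stopped
    masked: yes
  when: ansible_service_mgr == "systemd"

# Option 2: let the task fail on hosts without systemd, but keep the
# host in the play so the remaining tasks still run on it.
- name: Ensure firewalld is stopped
  systemd:
    name: firewalld
    state: stopped
    masked: yes
  ignore_errors: yes
```

Is one of these the idiomatic way to do it, or is there a better pattern for mixed-init-system inventories?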