Problem with include statements and the host queue


stillf...@gmail.com

Jul 14, 2016, 4:45:13 AM
to Ansible Project
I use include in my playbook, but it affects the host queue. Here is the problem:

I have a playbook like this:

- name: Init base system env
  hosts: all
  gather_facts: yes
  sudo: yes
  vars_files:
    - secrets.yml
  roles:
    - system_env
    - set_timezone


In roles/system_env/tasks/main.yml:
---
- include: init_env.yml
- include: set_iptables.yml
- include: set_ulimit.yml


At first, the host queue has 3 hosts, so init_env.yml is included for all 3 hosts:
2016-07-14 15:50:18,392 p=34561 u=polar |  TASK [system_env : include] ****************************************************
2016-07-14 15:50:18,443 p=34561 u=polar |  included: /data/ansible/roles/system_env/tasks/init_env.yml for host_10.23.3.71, host_10.23.3.74, host_10.23.3.75

Then an error occurred:
2016-07-14 15:50:19,757 p=34561 u=polar |  TASK [system_env : Install MySQL-shared-compat] ********************************
2016-07-14 15:50:19,991 p=34561 u=polar |  fatal: [host_10.23.3.71]: FAILED! => {"changed": true, "cmd": ["rpm", "-ivh", "MySQL-shared-compat-5.5.32-2.el6.x86_64.rpm"], "delta": "0:00:00.017759", "end": "2016-07-14 15:50:19.992610", "failed": true, "rc": 1, "start": "2016-07-14 15:50:19.974851", "stderr": "\tpackage MySQL-shared-compat-5.5.32-2.el6.x86_64 is already installed", "stdout": "Preparing...                ##################################################", "stdout_lines": ["Preparing...                ##################################################"], "warnings": ["Consider using yum module rather than running rpm"]}
2016-07-14 15:50:20,572 p=34561 u=polar |  changed: [host_10.23.3.74]
2016-07-14 15:50:20,572 p=34561 u=polar | [WARNING]: Consider using yum module rather than running rpm
2016-07-14 15:50:20,626 p=34561 u=polar |  changed: [host_10.23.3.75]

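(As an aside, the "already installed" failure itself could be avoided. The warning in the output suggests the yum module; assuming the failing task currently runs rpm -ivh through the command module, a sketch like this would be idempotent, though it is separate from the queue problem. The task name and the relative package path are taken from the log; the exact original task is not shown here.)

```yaml
# Hypothetical rewrite of the failing task using the yum module.
# yum accepts a path to a local .rpm file as the package name.
- name: Install MySQL-shared-compat
  yum:
    name: MySQL-shared-compat-5.5.32-2.el6.x86_64.rpm
    state: present
```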
host_10.23.3.71 got a fatal error, so it should be out of the host queue. But on the next include, Ansible included it again:
2016-07-14 15:50:22,520 p=34561 u=polar | TASK [system_env : include] ****************************************************
2016-07-14 15:50:22,549 p=34561 u=polar | included: /data/ansible/roles/system_env/tasks/set_iptables.yml for host_10.23.3.74, host_10.23.3.75
****some task in set_iptables.yml****
2016-07-14 15:50:23,350 p=34561 u=polar | TASK [system_env : include] ****************************************************
2016-07-14 15:50:23,382 p=34561 u=polar | included: /data/ansible/roles/system_env/tasks/set_iptables.yml for host_10.23.3.71
****some task in set_iptables.yml****
2016-07-14 15:50:23,350 p=34561 u=polar | TASK [system_env : include] ****************************************************
2016-07-14 15:50:23,409 p=34561 u=polar | included: /data/ansible/roles/system_env/tasks/set_ulimit.yml for host_10.23.3.74, host_10.23.3.75
****some task in set_ulimit.yml****
2016-07-14 15:50:25,232 p=34561 u=polar | included: /data/ansible/roles/system_env/tasks/set_ulimit.yml for host_10.23.3.71
****some task in set_ulimit.yml****

It looks like Ansible applies a new queue to run host_10.23.3.71.
In Ansible's philosophy, once a host gets an error, it should never execute the remaining tasks. This confuses me. My Ansible version is 2.0.0.0.
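For what it's worth, if the goal is to stop the whole play as soon as any host fails, a possible workaround is any_errors_fatal at the play level. This is a sketch only; I have not verified how it interacts with this include behaviour in 2.0.0.0:

```yaml
- name: Init base system env
  hosts: all
  gather_facts: yes
  sudo: yes
  any_errors_fatal: yes   # abort the play for all hosts on the first failure
  vars_files:
    - secrets.yml
  roles:
    - system_env
    - set_timezone
```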
 
