retry file not generated if failed hosts are associated with succeeded hosts


Nicolas Heinen

Apr 19, 2016, 11:12:35 AM4/19/16
to Ansible Project
Hello,

I don't know if it's a bug or not, so I prefer to ask first:

ansible --version
ansible 2.1.0 (devel 0e2f1b423d) last updated 2016/04/19 09:21:18 (GMT +200)
  lib/ansible/modules/core: (detached HEAD 5409ed1b28) last updated 2016/04/19 09:22:20 (GMT +200)
  lib/ansible/modules/extras: (detached HEAD 3afe117730) last updated 2016/04/19 09:22:20 (GMT +200)
  config file = /etc/ansible/ansible.cfg
  configured module search path = Default w/o overrides

When I run a playbook and a failed host is in the same run as a host that succeeds, no retry file is generated, whereas if the playbook is run only against failed hosts, the retry file is generated as expected:
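
The playbook itself is just a single shell task. A minimal sketch of command_check.yml (only the task name and the command are certain from the output below; the rest of the layout is assumed):

---
# sketch of command_check.yml - task name and command taken from the output below
- hosts: all
  tasks:
    - name: Check command
      shell: crontab -l | grep ansible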

# ansible-playbook command_check.yml -l 'db1.avau,db1.avip'

PLAY [all] *********************************************************************

TASK [Check command] ***********************************************************
changed: [db1.avau]
cmd: crontab -l | grep ansible
start: 2016-04-19 09:32:30.193211
end: 2016-04-19 09:32:30.205075
delta: 0:00:00.011864
stdout: # WARNING: Ansible managed: /etc/ansible/templates/croot.j2 modified on 2016-03-24 10:18:01 by manjaro on manjaro
fatal: [db1.avip]: FAILED! => {"changed": true, "cmd": "crontab -l | grep ansible", "delta": "0:00:00.012649", "end": "2016-04-19 09:32:30.586609", "failed": true, "rc": 1, "start": "2016-04-19 09:32:30.573960", "stderr": "", "stdout": "", "stdout_lines": [], "warnings": []}
cmd: crontab -l | grep ansible
start: 2016-04-19 09:32:30.573960
end: 2016-04-19 09:32:30.586609
delta: 0:00:00.012649

NO MORE HOSTS LEFT *************************************************************

PLAY RECAP *********************************************************************
db1.avau                   : ok=1    changed=1    unreachable=0    failed=0
db1.avip                   : ok=0    changed=0    unreachable=0    failed=1

and when running only against failed hosts:

# ansible-playbook command_check.yml -l 'db1.avip,db1.autovise'

PLAY [all] *********************************************************************

TASK [Check command] ***********************************************************
fatal: [db1.autovise]: FAILED! => {"changed": true, "cmd": "crontab -l | grep ansible", "delta": "0:00:00.012493", "end": "2016-04-19 09:46:20.225544", "failed": true, "rc": 1, "start": "2016-04-19 09:46:20.213051", "stderr": "", "stdout": "", "stdout_lines": [], "warnings": []}
cmd: crontab -l | grep ansible
start: 2016-04-19 09:46:20.213051
end: 2016-04-19 09:46:20.225544
delta: 0:00:00.012493
fatal: [db1.avip]: FAILED! => {"changed": true, "cmd": "crontab -l | grep ansible", "delta": "0:00:00.013013", "end": "2016-04-19 09:46:20.571096", "failed": true, "rc": 1, "start": "2016-04-19 09:46:20.558083", "stderr": "", "stdout": "", "stdout_lines": [], "warnings": []}
cmd: crontab -l | grep ansible
start: 2016-04-19 09:46:20.558083
end: 2016-04-19 09:46:20.571096
delta: 0:00:00.013013

NO MORE HOSTS LEFT *************************************************************
        to retry, use: --limit @/etc/ansible/.ansible-retry/command_check.retry

PLAY RECAP *********************************************************************
db1.autovise               : ok=0    changed=0    unreachable=0    failed=1
db1.avip                   : ok=0    changed=0    unreachable=0    failed=1

Regards,


Nicolas.