

Hi Iohannes,
In that case, I think the best approach is to “delegate” authentication to the Ansible side. In my case, I did the following:
Ansible Inventory (you can use a `vars` section to define specific users and other settings for groups or nodes; there is an excellent explanation here):
```ini
[the_farm]
192.168.56.20
192.168.56.21
192.168.56.22

[the_farm:vars]
ansible_user=vagrant
```

Note: these remote nodes are already configured to accept access from the Rundeck host via private key.
Ansible Config (the ansible.cfg file):
```ini
[defaults]
inventory = /home/user/ansible/config/hosts
nocows = 1
```

Node Executor Config: first part and the SSH config (also, the privilege escalation section).
Model Source Config: first part and the SSH config.
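For reference, the Node Executor and Resource Model Source settings above can also be set directly in the project configuration. This is only a sketch of what the `project.properties` entries for the Ansible plugin typically look like; the provider class names and property keys come from the plugin and should be checked against your installed version:

```properties
# Use Ansible as the default node executor for this project
# (class names may vary by plugin version)
service.NodeExecutor.default.provider=com.batix.rundeck.plugins.AnsibleNodeExecutor

# Discover nodes from the Ansible inventory (resource model source)
resources.source.1.type=com.batix.rundeck.plugins.AnsibleResourceModelSourceFactory
resources.source.1.config.ansible-inventory=/home/user/ansible/config/hosts
```

The inventory path here matches the one set in `ansible.cfg` above; adjust it to your environment.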
Nodes (you will see something like this in your Rundeck service.log file).
Also, consider this if you have localhost (the Rundeck instance node) included in the inventory file. In certain environments, this solves node discovery issues.
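If the Rundeck host itself appears in the inventory, a common approach is to mark it with a local connection so Ansible does not try to SSH to it. A minimal sketch (the group name is illustrative):

```ini
# Run tasks on the Rundeck host itself without going over SSH
[local]
localhost ansible_connection=local
```

The `ansible_connection=local` setting is a standard Ansible inventory variable; with it in place, ad-hoc commands and playbooks targeting localhost run directly on the Rundeck instance.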
Hope this helps!
Hi Iohannes,
Have you tried using Ansible directly first? Something like:

```
ansible all -i your_inventory_file -m ping
```

You can also run it from Rundeck’s Commands page, or as the `rundeck` user, to check that the `rundeck` user can reach the Ansible config/inventory and the remote nodes.
Greetings.