Sorry, I'm not really following you. The pseudocode you posted is somewhat similar to what I'm doing, but the problems are:
- My current delegate host (your "build_systems") cannot resolve my target machine's FQDN (your "publication_system") to an IP address
- I don't really have the ability/permissions/desire to create arbitrary users with arbitrary permissions on my delegate hosts. They are simply file repos that can change at any time, so I have variables that can be overridden, like artifacts_host, artifacts_user, artifacts_passwd, artifacts_path, etc. (sketched below)
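Roughly, those variables live in a vars file along these lines (every value here is just a placeholder):

# group_vars/artifacts.yml (placeholder values only)
artifacts_host: artifacts01.example.com
artifacts_user: deploy
artifacts_passwd: not-the-real-password
artifacts_path: /srv/artifacts/releases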
The play I'm trying to write should ultimately look something like this:
- name: artifacts_setup | rsync WAR artifact from artifacts host
  synchronize: >
    src={{ artifacts_path }}/{{ artifact_filename }}.war
    dest={{ artifact_installation_dir }}
  delegate_to: "{{ artifacts_host }}"
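For what it's worth, the target is currently listed in inventory by its FQDN. I've been wondering whether pointing ansible_ssh_host at the raw IP would give the delegated rsync an address the artifacts_host can actually reach, something like this (hostname and IP are made up), but I haven't confirmed that synchronize picks that up:

# inventory (made-up hostname/IP)
[app_servers]
app01.internal.example.com ansible_ssh_host=10.0.3.17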
I came very close to getting this to work by using ansible-vault to encrypt a "secrets.yml" variable file containing the artifacts_host's public key, and then installing that key into the target machine's authorized_keys file like so:
- name: install artifacts_host's public key to auth file
  authorized_key: >
    user={{ ansible_ssh_user }}
    key='{{ artifacts_host_public_key }}'
  sudo: yes
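(For reference, the decrypted secrets.yml is nothing fancier than this, with the key material shortened/faked:)

# secrets.yml, encrypted via: ansible-vault encrypt secrets.yml
artifacts_host_public_key: "ssh-rsa AAAAB3Nza...FAKE... deploy@artifacts01"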
The problem remains, though, that my artifacts_host cannot resolve the FQDN that Ansible passes to it into an IP address. If I were able to "inform" the artifacts_host of the IP to use (what the FQDN _should_ resolve to), then I would be fine. I would also be fine having the task fire on the target machine and pull from the artifacts_host, but I can't find an idempotent way of accomplishing that, nor can I figure out how to feed the target machine a login/password OR an ssh key to use.
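The closest I've come to an idempotent "pull" variant is abusing the command module with creates=, along these lines (untested, and it still assumes the target already holds an ssh key that the artifacts_host will accept, which is exactly the part I can't solve):

- name: artifacts_setup | pull WAR artifact from artifacts host
  command: >
    rsync -az
    {{ artifacts_user }}@{{ artifacts_host }}:{{ artifacts_path }}/{{ artifact_filename }}.war
    {{ artifact_installation_dir }}/
    creates={{ artifact_installation_dir }}/{{ artifact_filename }}.war

That at least keeps re-runs from copying the file again, but it doesn't get me credentials, and it's a poor substitute for synchronize.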
Am I just gonna have to template out a script to push to my targets???
Thanks again,
- Dan