Different initial remote_user per host - how best to arrange inventory?


Adrian Simmons

Mar 20, 2014, 1:24:06 PM
to ansible...@googlegroups.com
Hi,

My current 'hosts' inventory file has sections for a dev/stage/production workflow, and I don't imagine it will ever have more than one or two servers in each section, e.g.:

```
[devel]
192.0.2.10 hostname=devel fqdn=devel.example.com

[stage]
192.0.2.10 hostname=stage fqdn=stage.example.com

[production]
203.0.113.10 hostname=production fqdn=production.example.com

[testing]
192.0.2.24 hostname=testing fqdn=testing.example.com
203.0.113.11 hostname=remotest fqdn=remotest.example.com

```

I have a playbook at the moment that does most of the work - main.yml - and it uses an ansible specific user I called 'ansa' that's set in group_vars/all:
```
# A user just for ansible
ansible_ssh_user: ansa
ansible_ssh_private_key_file: /path/to/the/ssh/id_rsa
```

To set up this 'ansa' user I have a separate setup.yml.

All of this is working fine. What I want to do now is run the setup.yml play on different hosts, with different initial login requirements. Namely, some servers will be created using Vagrant, so I want to use 'vagrant' as the remote_user:
```
- hosts: all
  accelerate: false
  remote_user: vagrant
  sudo: yes

  vars:
    ansible_ssh_user: vagrant
    ansible_ssh_private_key_file: "~/.vagrant.d/insecure_private_key"

  tasks: # etc...
```

and others without vagrant using a standard root login:
```
- hosts: all
  accelerate: false
  remote_user: root

  vars:
    ansible_ssh_user: root
    ansible_ssh_private_key_file: "~/.ssh/id_rsa"


  tasks: # etc...
```

So, two interrelated problems:
One, I'm duplicating code with two different versions of this setup.yml. I'd like to have a single setup.yml with logic to decide which remote_user and ansible_ssh_user to use, but I'm not sure how to achieve that.

Two, if I have an inventory section as shown above, with a [testing] section containing two hosts, one of which is vagrant-based and one not, how can I specify the right version of setup to use? (assuming problem One remains unanswered)

'ansible-playbook --limit testing vagrant-setup_do.yml' will throw an error for the other host.

So far I've not managed to find a better way to address this than simply commenting out hosts in the inventory file before running the setup play. Once the setup play has run, the main.yml play works fine with this inventory layout; it's just the initial setup run that is clunky.

Suggestions for making the initial setup less clunky are welcome. Right now the only thing I can think of is to work outside of ansible, setting up the root account with vagrant to match that on the non-vagrant hosts, so I can have a single setup play.

Or, given that setup only has to be run once per host, maybe I'm trying too hard and just need to accept a little manual work getting things set up...


 


Adrian Simmons

Mar 24, 2014, 6:25:50 AM
to ansible...@googlegroups.com

> Right now the only thing I can think of is to work outside of ansible, setting up the root account with vagrant to match that on the non-vagrant hosts, so I can have a single setup play.

Looks like that question was too long and rambling to get answers :P
At least writing it all down sometimes makes you think of the best answer.

I took my own advice and did the work in Vagrant instead, setting up the root account with SSH key access so my setup play can work just as it does on non-vagrant nodes. I'll have my main ansible play remove that key as part of securing SSH access.
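For reference, that workaround can be sketched as a shell provisioner in a Vagrantfile. The box name and key handling below are illustrative assumptions, not the exact file from this thread:

```ruby
# Vagrantfile (sketch): give root the same authorized key as the vagrant
# user, so the setup play can log in as root on vagrant boxes too.
Vagrant.configure("2") do |config|
  config.vm.box = "hashicorp/precise64"  # illustrative box name
  config.vm.provision "shell", inline: <<-SHELL
    mkdir -p /root/.ssh && chmod 700 /root/.ssh
    cp /home/vagrant/.ssh/authorized_keys /root/.ssh/authorized_keys
    chmod 600 /root/.ssh/authorized_keys
  SHELL
end
```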
 

Michael DeHaan

Mar 24, 2014, 8:32:10 AM
to ansible...@googlegroups.com
I highly recommend arranging variables in group_vars/ and host_vars/ directories versus trying to fit them into the INI file -- IMHO, it's a lot easier to read.

In this case, group_vars/production:

ansible_ssh_user: production_username

Don't define ansible_ssh_user in "vars", as this will override that setting, which does not seem to be what you want.




Adrian Simmons

Mar 25, 2014, 7:39:12 AM
to ansible...@googlegroups.com
Thanks Michael,


On Monday, 24 March 2014 12:32:10 UTC, Michael DeHaan wrote:
> I highly recommend arranging variables in group_vars/ and host_vars/ directories versus trying to fit them into the INI file -- IMHO, it's a lot easier to read.
I do have the ansa user in group_vars/all – this is a user set up just for Ansible to use, and I do want *all* hosts to use it.
I need to run a setup play once to get the ansa user into place on each host, and that setup play has ansible_ssh_user in "vars"...
 
> Don't define ansible_ssh_user in "vars" as this will override that setting, which seems to be not what you want to do.

...because overriding that setting just for the setup play is *exactly* what I want to do.

My question was regarding two different types of base boxes, vagrant and non-vagrant – for the vagrant boxes the setup play needs to set ansible_ssh_user to 'vagrant', and for the non-vagrant boxes it needs to be set to 'root'. There's no neat and tidy separation between groups; [production], [testing], etc. may each contain vagrant or non-vagrant hosts, so I can't just use group_vars.

What I've done as a workaround is to have my Vagrantfile set up the root user for SSH access – so that the vagrant-based boxes and the non-vagrant ones behave exactly the same as far as my setup play is concerned.

Reading the documentation again today, I'm thinking I should probably have my setup play look for an Ansible fact specific to a vagrant box, and then use a conditional based on that fact, so that one setup play can deal with both types of box.
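A sketch of that idea, in the syntax of the time. Which fact reliably marks a vagrant box is an assumption here (ansible_virtualization_type, or the presence of the vagrant user, are common candidates), and the insecure_key variable is hypothetical:

```yaml
# setup.yml (sketch): one play whose vagrant-specific steps are guarded
# by a fact-based condition.
- hosts: all
  sudo: yes
  tasks:
    - name: create the ansa user everywhere
      user: name=ansa state=present

    - name: vagrant-only step, e.g. revoke the insecure vagrant key
      authorized_key: user=vagrant key="{{ insecure_key }}" state=absent
      when: ansible_virtualization_type == "virtualbox"
```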

Callum Macdonald

May 11, 2015, 11:58:53 AM
to ansible...@googlegroups.com
@Adrian: Did you ever find a better solution?

I tried adding a custom value per host, like `bootstrap_user=root`, so I could manually configure that this user should be used for the bootstrap playbook. It seems like `remote_user: "{{ bootstrap_user }}"` is not interpreted, but passed directly to the ssh command as the literal username. So this approach doesn't really work.

In the end, I did like you, and wrote the user into the playbook. I'll manually edit it when bootstrapping new hosts. :-(

Cheers - Callum.

Callum Macdonald

May 11, 2015, 12:04:03 PM
to ansible...@googlegroups.com
I did find another workaround, but it's not very elegant either.

In my bootstrap.yml file I created two identical blocks, one with `remote_user: root` and one without. Then I can conditionally run the playbook against one host like so:

```
ansible-playbook bootstrap.yml -l host -t root
# or
ansible-playbook bootstrap.yml -l host -t current
```

It's also ugly, but at least it doesn't require code changes when I want to init a new host.
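The two-block bootstrap.yml described above might look like this. The task bodies are illustrative; in the real file both blocks carry the same, longer task list:

```yaml
# bootstrap.yml (sketch): identical plays, differing only in remote_user
# and the tag used to select one of them from the command line.
- hosts: all
  remote_user: root
  tags: root
  tasks:
    - name: create the ansible user
      user: name=ansa state=present

- hosts: all
  tags: current   # connects as whatever user is already configured
  sudo: yes
  tasks:
    - name: create the ansible user
      user: name=ansa state=present
```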

Cheers - Callum.

Brian Coca

May 11, 2015, 12:18:45 PM
to ansible...@googlegroups.com
Disable fact gathering, have the first task be `setup:` run as root (with ignore_errors), and condition the rest of the tasks on its success or failure.
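One way to read that suggestion as a play; the per-task remote_user and the exact when: expressions are my interpretation, not code from this thread:

```yaml
# bootstrap.yml (sketch): probe for root SSH access first, then branch.
- hosts: all
  gather_facts: no
  tasks:
    - name: try to gather facts as root
      setup:
      remote_user: root
      register: root_probe
      ignore_errors: yes

    - name: bootstrap as root when the probe succeeded
      user: name=ansa state=present
      remote_user: root
      when: not root_probe|failed

    - name: otherwise fall back to the vagrant user
      user: name=ansa state=present
      remote_user: vagrant
      sudo: yes
      when: root_probe|failed
```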



--
Brian Coca

Shawn Ferry

May 11, 2015, 9:15:59 PM
to ansible...@googlegroups.com
If the list of hosts is fixed, or follows a recognizable pattern by IP or by name, you could define the User parameter in ~/.ssh/config.
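Sketched as an ~/.ssh/config fragment; the Host patterns reuse the example addresses from the inventory at the top of the thread and are only illustrative:

```
# ~/.ssh/config: pick the login user (and key) by host pattern
Host 192.0.2.*
    User vagrant
    IdentityFile ~/.vagrant.d/insecure_private_key

Host 203.0.113.*
    User root
```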

Shawn