Any way to re-read ssh_config on-the-fly from Ansible playbook?


Даниил Ярославцев

Jan 23, 2015, 4:10:32 AM1/23/15
to ansible...@googlegroups.com
Hello, Ansible admins and users!

I am using Ansible with Amazon EC2.
I've configured provisioning of EC2 private hosts over public SSH bastions (exactly as specified here: http://alexbilbie.com/2014/07/using-ansible-with-a-bastion-host).
So I have an ssh_config like the one below, containing settings for forwarding SSH requests to private hosts over the public ones:

# DEV bastion
Host ###.###.###.###
    User                   ubuntu
    HostName               ###.###.###.###
    ProxyCommand           none
    BatchMode              yes
    PasswordAuthentication no
    ForwardAgent           yes

Host *
    User                   ubuntu
    ServerAliveInterval    60
    TCPKeepAlive           yes
    ProxyCommand           ssh -q -A ubuntu@bastion.dev.xxx.com nc %h %p
    ControlMaster          auto
    ControlPath            ~/.ssh/mux-%r@%h:%p
    ControlPersist         8h
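
For reference, the linked post wires such a file into Ansible via ssh_args in ansible.cfg; a minimal sketch along those lines (the ./ssh.config path is an assumption):

# ansible.cfg -- minimal sketch; the ./ssh.config path is an assumption
[ssh_connection]
ssh_args = -F ./ssh.config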


But now I want to generate this ssh_config on-the-fly from the playbook. That is, what I need to implement:

1. A single playbook spins up public and private EC2 hosts, attaches EIPs to the public hosts (SSH bastions, etc.) and adds them to public hosted zones so the SSH bastions get public DNS names
2. Using info about the created topology, the playbook generates a new ssh_config file and starts using it instead of the default one, without a relaunch or retries (see the sketch after this list)
3. The playbook continues provisioning nodes, but the private EC2 hosts are now provisioned over the public SSH bastions, as configured in the generated ssh_config
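
A minimal sketch of step 2, with the assumptions labelled: ssh_config.j2 and bastion_public_dns are both hypothetical names, not from this thread, and the idea relies on ssh re-reading the -F file for every new connection, so ansible.cfg must already point at the fixed output path when the run starts:

- name: Generate ssh_config from the topology created in step 1
  hosts: localhost
  tasks:
    # ssh_config.j2 and bastion_public_dns are hypothetical names, not from this thread
    - template: src=ssh_config.j2 dest=./ssh.config

where ssh_config.j2 (hypothetical) could be as small as:

Host *
    User         ubuntu
    ProxyCommand ssh -q -A ubuntu@{{ bastion_public_dns }} nc %h %p

Note that connections already multiplexed via ControlMaster/ControlPersist will not pick the new file up; only connections opened after the render will.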

I am done with step 1, but now there is a problem: there is no way to tell Ansible to reload its SSH config on-the-fly (without restarting the playbook), so there is no way to continue provisioning private hosts over public ones within the same playbook.

Could you please suggest an option to overcome this? Or maybe point me to the code I would need to modify in Ansible to make this possible. Any help will be very appreciated ;)

Trevor Baker

May 6, 2015, 10:08:04 AM5/6/15
to ansible...@googlegroups.com
If you already have a bastion host, security groups, and the ProxyCommand configured, you can do the following for new private-subnet instances:

- name: Launch new instance
  hosts: localhost
  gather_facts: true
  vars:
    some vars: ...

  tasks:

    - name: Launch instance
      ec2:
        key_name: "{{ key_name }}"
        group_id: "{{ security_group }}"
        instance_type: "{{ instance_type }}"
        image: "{{ image }}"
        wait: true
        region: "{{ region }}"
        etc.....
      register: ec2

    - name: Add new instance to host group
      add_host: hostname="{{ item.private_ip }}" groupname=launched
      with_items: ec2.instances  # note, this only works if you've launched a single instance!

    - name: Pause for EC2 instance to start SSH
      pause:
        prompt: "Pausing to allow new instance to start accepting ssh connections"
        minutes: 2
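
    # Alternative sketch (an assumption, not from this thread): poll for SSH with
    # wait_for instead of a fixed two-minute pause; the numbers are illustrative.
    - name: Wait for SSH on each new instance
      wait_for: host="{{ item.private_ip }}" port=22 delay=10 timeout=320
      with_items: ec2.instances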

- name: Provision Jenkins in EC2
  hosts: launched
  gather_facts: True

  tasks:
    whatever...

The issue you're encountering is vexing me too.

Problem summary: you can't use dynamic inventories with a dynamic ProxyCommand.

You can't add multiple machines via add_host (https://github.com/ansible/ansible/issues/3848) and you can't patch ssh's config at runtime.

I have multiple VPCs and multiple bastions, and I am using cut in the ProxyCommand to pick the right bastion (see the sketch below). A dynamic ProxyCommand with a separator only works in my example code above when there is a single EC2 host passed to add_host. I am stuck wanting to use a tag_some_tag list with multiple hosts returned from the ec2 inventory plugin with this workflow.
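
For context, the cut trick encodes the bastion into the ssh host alias and splits it apart inside the ProxyCommand. A sketch, assuming a '+' separator and aliases of the form target+bastion (both the separator and the alias scheme are assumptions):

# ~/.ssh/config sketch; the '+' separator and alias scheme are assumptions
Host *+*
    # For an alias like 10.0.1.5+bastion.vpc1.example.com, cut splits the
    # target (field 1) from the bastion (field 2)
    ProxyCommand ssh -q -A ubuntu@$(echo %h | cut -d+ -f2) nc $(echo %h | cut -d+ -f1) %p

With that, ssh 10.0.1.5+bastion.vpc1.example.com proxies through the right bastion without a per-VPC config file.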

Any help is appreciated!

Trevor Baker

May 6, 2015, 11:00:31 AM5/6/15
to ansible...@googlegroups.com
Another hack would be to have an ssh config and an Ansible config per VPC. The Ansible config points to the appropriate ssh config for that VPC, which contains the correct bastion IP; otherwise the files are the same (see the sketch below).

e.g. ansible-vpc1.cfg / ssh-vpc1.cfg
     ansible-vpc2.cfg / ssh-vpc2.cfg
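
A sketch of the vpc1 pair (the [ssh_connection] section and ssh_args key are standard Ansible settings; the bastion hostname is an assumption):

# ansible-vpc1.cfg
[ssh_connection]
ssh_args = -F ./ssh-vpc1.cfg

# ssh-vpc1.cfg -- identical across VPCs except for the bastion address
Host *
    User         ubuntu
    ProxyCommand ssh -q -A ubuntu@bastion.vpc1.example.com nc %h %p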

Then start the playbook as:
$ export ANSIBLE_CONFIG=$(pwd)/ansible-vpc1.cfg && ansible-playbook playbook.yml

