Hi
I can see the benefits of using the ec2.py dynamic inventory script,
but I've run into some issues that I can't figure out how to fix.
1. The ec2.py script requires the credentials to be available as
environment variables. But my deployment only has them available
inside a vaulted vars file (in host_vars/localhost, so they can be
used by an earlier Ansible role that creates the infrastructure).
How do people handle storing credentials?
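One way this could be handled is a small wrapper that decrypts the vaulted file on the fly and exports the credentials before invoking the inventory script. A minimal sketch, assuming the vaulted file is host_vars/localhost/vault.yml with plain aws_access_key_id / aws_secret_access_key keys (the file path, password file, and variable names are illustrative guesses, not taken from your setup):

```shell
#!/bin/sh
# Decrypt the vaulted vars file and export the AWS credentials as the
# environment variables ec2.py expects. Assumes simple unquoted
# "key: value" YAML lines in the decrypted output.
eval "$(ansible-vault view host_vars/localhost/vault.yml \
          --vault-password-file ~/.vault_pass \
        | awk -F': *' '
            /^aws_access_key_id:/     { print "export AWS_ACCESS_KEY_ID=" $2 }
            /^aws_secret_access_key:/ { print "export AWS_SECRET_ACCESS_KEY=" $2 }')"

# Now the dynamic inventory can authenticate:
ansible -i ec2.py 'tag_Name_proxy*' -m ping
```

The credentials never touch disk unencrypted this way; they only live in the environment of the shell that runs the wrapper.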
2. Up to now my inventory has been pretty simple:
[proxy]
proxy1 ansible_host=10.20.1.16
proxy2 ansible_host=10.20.1.17
[web]
web1 ansible_host=10.20.1.24
web2 ansible_host=10.20.1.25
[all:vars]
ansible_user=admin ansible_ssh_common_args='-o ProxyJump="ad...@3.112.14.198"'
I've set things up so that the AWS instance names are the same as my
old inventory (i.e. proxy1, proxy2, web1, web2).
So, I can successfully ping instances by their name, e.g.:
(ansible-2.7.12) dick.visser@nuc8 scripts$ ansible -i ec2.py tag_Name_proxy* -m ping
10.20.1.16 | SUCCESS => {
"changed": false,
"ping": "pong"
}
10.20.1.17 | SUCCESS => {
"changed": false,
"ping": "pong"
}
But how do I set up the groups now?
Do I have to assign a "group" tag to each instance in AWS first, with
values like 'web', 'proxy', etc.?
Ideally I'd like to keep the 'simple' group names like web, proxy, etc.
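For what it's worth, ec2.py builds groups from tags as tag_<key>_<value> (with non-alphanumeric characters replaced by underscores), so tagging an instance with group=web should yield a tag_group_web group. One common way to keep the simple names is a static inventory file that maps those tag groups onto them via children; a sketch, assuming a "group" tag and an inventory/ directory containing both this file and ec2.py (the file layout is illustrative):

```ini
; e.g. inventory/static — point Ansible at the whole directory so both
; this file and ec2.py are loaded:
;   ansible -i inventory/ web -m ping
[web:children]
tag_group_web

[proxy:children]
tag_group_proxy
```

With that in place, playbooks keep targeting plain web/proxy group names regardless of how the dynamic inventory spells its groups.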
thx!!
--
Dick Visser
Trust & Identity Service Operations Manager
GÉANT