All that should be good, I think. But when I run this playbook from the openshift-ansible project, I get a credentials error:
$ ansible-playbook -i /var/www/html/provision-openshift/inventory/provisioning-inventory.ini /var/www/html/openshift-ansible/playbooks/aws/openshift-cluster/prerequisites.yml -e @/var/www/html/provision-openshift/inventory/provisioning_vars.yml -vvv
ansible-playbook 2.6.2
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/usr/local/lib/python2.7/dist-packages/ara/plugins/modules']
ansible python module location = /usr/local/lib/python2.7/dist-packages/ansible
executable location = /usr/local/bin/ansible-playbook
python version = 2.7.12 (default, Dec 4 2017, 14:50:18) [GCC 5.4.0 20160609]
Using /etc/ansible/ansible.cfg as config file
Parsed /var/www/html/provision-openshift/inventory/provisioning-inventory.ini inventory source with ini plugin
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAYBOOK: prerequisites.yml ***************************************************************************************************************
3 plays in /var/www/html/openshift-ansible/playbooks/aws/openshift-cluster/prerequisites.yml
PLAY [localhost] **************************************************************************************************************************
META: ran handlers
TASK [openshift_aws : Create AWS VPC] *****************************************************************************************************
task path: /var/www/html/openshift-ansible/roles/openshift_aws/tasks/vpc.yml:2
Monday 06 August 2018 13:38:42 -0400 (0:00:00.082) 0:00:00.082 *********
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: developer
<127.0.0.1> EXEC /bin/sh -c 'echo ~developer && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/developer/.ansible/tmp/ansible-tmp-1533577122.72-96497498776150 `" && echo ansible-tmp-1533577122.72-96497498776150="` echo /home/developer/.ansible/tmp/ansible-tmp-1533577122.72-96497498776150 `" ) && sleep 0'
Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/cloud/amazon/ec2_vpc_net.py
<127.0.0.1> PUT /home/developer/.ansible/tmp/ansible-local-8154HxVYj9/tmp9sLZEU TO /home/developer/.ansible/tmp/ansible-tmp-1533577122.72-96497498776150/ec2_vpc_net.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/developer/.ansible/tmp/ansible-tmp-1533577122.72-96497498776150/ /home/developer/.ansible/tmp/ansible-tmp-1533577122.72-96497498776150/ec2_vpc_net.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-bjpqfqmloapttckvdgwmfalyyeckoclc; /usr/bin/python /home/developer/.ansible/tmp/ansible-tmp-1533577122.72-96497498776150/ec2_vpc_net.py'"'"' && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/developer/.ansible/tmp/ansible-tmp-1533577122.72-96497498776150/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_iBOj3w/ansible_module_ec2_vpc_net.py", line 182, in vpc_exists
matching_vpcs = vpc.describe_vpcs(Filters=[{'Name': 'tag:Name', 'Values': [name]}, {'Name': 'cidr-block', 'Values': cidr_block}])['Vpcs']
File "/tmp/ansible_iBOj3w/ansible_modlib.zip/ansible/module_utils/aws/core.py", line 224, in deciding_wrapper
return unwrapped(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/botocore/client.py", line 314, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/usr/local/lib/python2.7/dist-packages/botocore/client.py", line 599, in _make_api_call
operation_model, request_dict)
File "/usr/local/lib/python2.7/dist-packages/botocore/endpoint.py", line 148, in make_request
return self._send_request(request_dict, operation_model)
File "/usr/local/lib/python2.7/dist-packages/botocore/endpoint.py", line 173, in _send_request
request = self.create_request(request_dict, operation_model)
File "/usr/local/lib/python2.7/dist-packages/botocore/endpoint.py", line 157, in create_request
File "/usr/local/lib/python2.7/dist-packages/botocore/hooks.py", line 227, in emit
return self._emit(event_name, kwargs)
File "/usr/local/lib/python2.7/dist-packages/botocore/hooks.py", line 210, in _emit
response = handler(**kwargs)
File "/usr/local/lib/python2.7/dist-packages/botocore/signers.py", line 90, in handler
return self.sign(operation_name, request)
File "/usr/local/lib/python2.7/dist-packages/botocore/signers.py", line 156, in sign
auth.add_auth(request)
File "/usr/local/lib/python2.7/dist-packages/botocore/auth.py", line 352, in add_auth
raise NoCredentialsError
NoCredentialsError: Unable to locate credentials
fatal: [localhost]: FAILED! => {
"boto3_version": "1.7.50",
"botocore_version": "1.10.50",
"changed": false,
"invocation": {
"module_args": {
"aws_access_key": null,
"aws_secret_key": null,
"cidr_block": [
],
"dhcp_opts_id": null,
"dns_hostnames": true,
"dns_support": true,
"ec2_url": null,
"multi_ok": false,
"name": "vpctest",
"profile": null,
"purge_cidrs": false,
"region": "us-east-1",
"security_token": null,
"state": "present",
"tags": {
"Name": "vpctest"
},
"tenancy": "default",
"validate_certs": true
}
},
"msg": "Failed to describe VPCs: Unable to locate credentials"
}
PLAY RECAP ********************************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1
Monday 06 August 2018 13:38:44 -0400 (0:00:01.726) 0:00:01.809 *********
===============================================================================
openshift_aws : Create AWS VPC ----------------------------------------------------------------------------------------------------- 1.73s
/var/www/html/openshift-ansible/roles/openshift_aws/tasks/vpc.yml:2 ----------------------------------------------------------------------
A co-worker can run Ansible against AWS; he can run this very playbook. I've tried:
* swapping out my config and credentials files with his. Same error.
* 'chmod 777' on those files. Didn't help.
* uninstalling boto, boto3, botocore, and ansible from pip globally, from pip as user, and from apt, and then reinstalling them via pip globally only. Didn't help.
* uninstalling and reinstalling with pip as a user install. After that I couldn't run ansible at all.
* rolling back boto, boto3, and botocore to the versions my co-worker is running. Still the same error.
* creating a fresh user (adduser), setting only the AWS environment variables, and running 'aws configure'. Same error.
One possible clue (though I don't know what it means) is what I see when I run this:
$ aws configure list
Name Value Type Location
---- ----- ---- --------
profile TestAdmin manual --profile
access_key ****************I5PE assume-role
secret_key ****************ifrs assume-role
region us-east-1 config-file ~/.aws/config
The last four characters of the access_key and secret_key values don't match my actual access_key and secret_key. Are they supposed to? Maybe there's some sort of cache somewhere? But then why wasn't it cleared by all the uninstalls and reinstalls?
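The "assume-role" entries in that Type column suggest the CLI resolved temporary STS credentials for the TestAdmin profile rather than my static keys, which would explain the mismatched suffixes. If I understand the AWS CLI correctly, it caches assumed-role credentials as JSON files under ~/.aws/cli/cache, a directory that pip uninstalls and reinstalls never touch. A quick way to check (a sketch based on the CLI's documented cache location; adjust if yours differs):

```shell
# The AWS CLI caches temporary assume-role credentials as JSON files
# under ~/.aws/cli/cache; reinstalling Python packages never clears it.
CACHE_DIR="$HOME/.aws/cli/cache"
ls "$CACHE_DIR" 2>/dev/null || echo "(no cached assume-role credentials)"
# To force the CLI to assume the role afresh, the cache can be removed:
# rm -f "$CACHE_DIR"/*.json
```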
But then I tried simplifying the problem case, as one is supposed to do when communicating problems to other people. I tried just running a straightforward ad-hoc command:
$ ansible localhost -c local -m ec2_vpc_net -a "cidr_block=10.103.0.0/16 name=vpctest"
127.0.0.1 | SUCCESS => {
"changed": true,
"vpc": {
"cidr_block_association_set": [
{
"association_id": "vpc-cidr-assoc-d18d2cbd",
"cidr_block_state": {
"state": "associated"
}
}
],
"classic_link_enabled": false,
"dhcp_options_id": "dopt-8d3787f4",
"id": "vpc-1551856f",
"instance_tenancy": "default",
"is_default": false,
"state": "available",
"tags": {
"Name": "vpctest"
}
}
}
Success? Huh? Well, if that succeeded, then Ansible/Python/boto is reading the credentials file correctly. So it must be 'unable to locate' credentials because it becomes another user when running the playbook. And, indeed, I now see in the verbose output above that it's using sudo to run the play. The plot thins, I suppose. I tried running the very same playbook with the very same command line, but with sudo in front of it, and it runs successfully. I suppose that's because when *I* run it under sudo it inherits my environment, and thus my credentials file; but when Ansible invokes sudo itself, that environment doesn't come along.
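That would be consistent with how botocore locates the shared credentials file: it expands ~/.aws/credentials relative to $HOME, and the 'sudo -H -u root' in the verbose output above resets HOME to the become user's home directory. A minimal sketch of the mechanism (illustrative only, no AWS calls made):

```shell
# botocore looks for ~/.aws/credentials, expanded via $HOME. Under
# 'sudo -H -u root', HOME becomes /root, so a different (and likely
# absent) credentials file is searched. Simulate both cases by
# swapping HOME for a subshell:
for h in /home/developer /root; do
    HOME="$h" sh -c 'echo "HOME=$HOME -> looks in $HOME/.aws/credentials"'
done
```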
But anyway, why is it sudo-ing in the first place? I'm not telling it to "become", not on the command line, and I don't see it in any of the playbooks I'm running:
$ cat /var/www/html/openshift-ansible/playbooks/aws/openshift-cluster/prerequisites.yml
---
- import_playbook: provision_vpc.yml
- import_playbook: provision_ssh_keypair.yml
- import_playbook: provision_sec_group.yml
$ cat /var/www/html/openshift-ansible/playbooks/aws/openshift-cluster/provision_vpc.yml
---
- hosts: localhost
connection: local
gather_facts: no
tasks:
- name: create a vpc
import_role:
name: openshift_aws
tasks_from: vpc.yml
when: openshift_aws_create_vpc | default(True) | bool
$ cat /var/www/html/openshift-ansible/playbooks/aws/openshift-cluster/roles/openshift_aws/tasks/vpc.yml
---
- name: Create AWS VPC
ec2_vpc_net:
state: present
cidr_block: "{{ openshift_aws_vpc.cidr }}"
dns_support: True
dns_hostnames: True
region: "{{ openshift_aws_region }}"
name: "{{ openshift_aws_clusterid }}"
tags: "{{ openshift_aws_vpc_tags }}"
register: vpc
[...]
The last playbook has more in it, but it fails at that very first task. So why is it sudo-ing? Then I checked the provisioning_vars file I was reading in for variables. There it is: "ansible_become: true", set as a connection variable for running the playbooks.
Fine. So why does it run when *I* sudo, but not when Ansible sudos? I've read through various Ansible documentation and I don't see why. I tried setting "-c local" on the command line, thinking that forcing the connection type to local would preclude the become. It doesn't; connection type and privilege escalation are separate things, and I should have known that. Then I tried setting an extra variable on the command line: -e "ansible_become=false". That works! It created the VPC. It failed at a later step, but I think that's something else. Setting that extra variable for this stage, where everything is run locally against AWS, looks like the answer to my problem.
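For the record, here's how I understand the pieces fitting together. Since provisioning_vars.yml is loaded with -e, its contents carry extra-vars precedence, so the override also has to arrive as an extra var; when the same extra var is given twice, the later one wins. Sketched as a fragment (reconstructed from the behavior above, not copied from the real file):

```yaml
# provisioning_vars.yml (relevant fragment, reconstructed):
ansible_become: true   # forces sudo for every play, even the local AWS ones

# Loaded via '-e @provisioning_vars.yml', this has extra-vars precedence,
# so the cleanest override is another extra var later on the command line:
#   ansible-playbook ... -e @provisioning_vars.yml -e "ansible_become=false"
```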
--
Todd