Hello!
First off, Packer is great - I'm finding it's a dream to work with.
I'm posting here because I think I'm doing something silly that's slowing me down.
Versions
packer 1.5.4
ansible 2.8.6
High level
I'm trying to get Packer to execute something akin to the command below, using a combination of arguments inside an ansible provisioner.
`ansible-galaxy install -r provisioners/test/ansible/requirements.yaml --roles-path ./roles --force`
Expected behaviour
Given this block in packer.json:
```json
{
  "type": "ansible",
  "galaxy_file": "provisioners/{{ user `product` }}/ansible/requirements.yaml",
  "galaxy_force_install": true,
  "roles_path": "roles",
  "playbook_file": "provisioners/{{ user `product` }}/ansible/playbook.yaml",
  "user": "ec2-user",
  "ansible_env_vars": [
    "INVALID_TASK_ATTRIBUTE_FAILED=False"
  ],
  "extra_arguments": [
    "-v",
    "--extra-vars", "env={{ user `env` }} product_version={{ user `product_version` }}"
  ]
},
```
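My (unverified) reading of those galaxy-related options is that Packer should assemble roughly the command below on every run, with the `product` user variable resolved to `test` as in the command further up. The flag mapping is my assumption rather than something I've confirmed in the provisioner source:

```sh
# Assumed mapping (not confirmed against the provisioner source):
#   galaxy_file          -> -r <file>
#   roles_path           -> --roles-path <dir>
#   galaxy_force_install -> --force
ansible-galaxy install \
  -r provisioners/test/ansible/requirements.yaml \
  --roles-path ./roles \
  --force
```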
where `galaxy_file` points at a requirements file that looks like this:
```yaml
---
- name: 'my-test-role'
  scm: git
  src: 'g...@bitbucket.myaccount/my-test-role.git'
  version: 'master'
```
I expect Packer to install all requirements at `roles_path` (in this case just `roles/` in the current working directory) and to force-overwrite any role that already exists there, because `galaxy_force_install` is set to `true`.
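In other words, after any run (first or subsequent) I'd expect the working directory to contain the role from the requirements file above:

```
roles/
└── my-test-role/
```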
Actual behaviour
The role is installed the first time `packer build` is invoked, but on subsequent runs, where the role is already present at `roles_path`, Packer prints the galaxy error `amazon-ebs: [WARNING]: - ansible-role-testing was NOT installed successfully` and the build fails.
Workaround
Deleting the locally installed role and re-running `packer build` allows the role to be reinstalled correctly, so I have been doing this every time I need to re-run a build.
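For concreteness, this is the sequence I run each time (the role directory name is my assumption, taken from `roles_path` and the requirements file above):

```sh
# Remove the previously installed role so ansible-galaxy can reinstall it,
# then kick off the build again (paths assumed from the config above).
rm -rf roles/my-test-role
packer build packer.json
```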
Question
Am I interpreting `galaxy_force_install` correctly, or is this a bug in ansible-galaxy?
Thanks and hope to hear from you soon!