Hi,
I'm trying to integrate Ansible into kube-deploy for a faster and smoother deployment. I've run into a strange issue: when the deploy script is run through the Ansible playbook, I can't mount Gluster volumes in pods. But if I run worker.sh/master.sh locally as root (or with sudo), everything works fine.
Here are the Ansible tasks for a master node. In theory, everything should run as the root user with the correct environment variables.
- name: Download the kube-deploy files
  git:
    repo: https://github.com/kubernetes/kube-deploy.git
    dest: /opt/kube-deploy
    version: master

- name: Run the master deploy script
  shell: echo Y | ./master.sh
  args:
    chdir: /opt/kube-deploy/docker-multinode/
  environment:
    USE_CNI: true
    USE_CONTAINERIZED: true
    K8S_VERSION: v1.4.0-alpha.2
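One possible source of divergence between the Ansible run and the local sudo run (a guess, not verified against this repo): Ansible renders unquoted YAML booleans in `environment:` as Python-style `"True"`, while shell scripts typically compare against the lowercase string `"true"`. A minimal sketch of that mismatch, with the variable name taken from the task above:

```shell
# Assumed behavior: `USE_CNI: true` in the task's environment may arrive
# in the script as the string "True", not "true".
USE_CNI="True"
if [ "${USE_CNI}" = "true" ]; then
  result="cni-enabled"
else
  # The lowercase comparison fails, so CNI-dependent setup is skipped.
  result="cni-skipped: got ${USE_CNI}"
fi
echo "${result}"
```

If that is the cause, quoting the values in the task (`USE_CNI: "true"`) would make them reach the script verbatim.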
And here is the actual playbook that applies the role:
---
- hosts: k8-master
  become: yes
  become_method: sudo
  gather_facts: yes
  roles:
    #- common
    - master
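To compare the Ansible run against the working local run, it may help to dump the environment the script actually sees under `become` before chasing the Gluster symptom. A hypothetical diagnostic task (the env values mirror the master task above; adjust as needed):

```
- name: Show the environment master.sh will see (diagnostic only)
  shell: env | sort
  args:
    chdir: /opt/kube-deploy/docker-multinode/
  environment:
    USE_CNI: true
    USE_CONTAINERIZED: true
    K8S_VERSION: v1.4.0-alpha.2
  register: script_env

- name: Print it
  debug:
    var: script_env.stdout_lines
```

Diffing that output against `sudo env` on the host should show whether the two code paths really run with the same variables.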