Thank you for your comments; I am bound to make some mistakes since I do not have that much experience. Nevertheless, the error occurs when using an NFS volume or a host path volume (approximate name, I cannot check it right now). With no volume or an emptyDir volume it works fine: the plugin does start up a pod in OpenShift. I am using a Vagrant all-in-one version of OpenShift Origin as the Kubernetes host, and therefore I just use a system:admin login for all purposes, so I would not expect permission issues.

I am replicating the setup described here: https://blog.openshift.com/jenkins-slaves-in-openshift-using-an-external-jenkins-environment/ where two ways are used to have Jenkins slaves: one is the Swarm plugin (preallocated slave pods self-discover the master), the other is the kubernetes-plugin (the master actively launches slaves on Kubernetes, which in this case means within OpenShift). For the Swarm plugin I found that v2.2 has a regression in honoring the NO_PROXY environment variable, which matters in my setup, so it works fine if I stay on v2.0. For the kubernetes-plugin I am hitting the problem already stated: it launches slaves OK only if they have no volumes or emptyDir volumes. My wild guess is that it might have to do with some bug in the kubernetes-client library used by the plugin. Since this is a Jenkins instance for experimenting, I am using an 'admin' account in Jenkins too.

The preinstantiated slave pods (the ones that connect to the master Jenkins via Swarm) also have the same NFS volume defined (via an OpenShift deployment config in this case) and they work just fine. The NFS directory has nfsnobody as owner and group; that volume is accessed fine by the preallocated pods launched via OpenShift, and in any case the other option via a host path volume does not work either (same error). The NFS directories, OpenShift Origin, and the master Jenkins are all located on the same machine.
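To make the failing cases concrete, here is a minimal sketch of the kind of Kubernetes volume definitions I mean; the server address, export path, and volume name below are placeholders, not my actual values:

```yaml
# Equivalent volume definitions on the slave pod
# (server/path values are placeholders):

# 1) Works: no volume at all, or an emptyDir volume
volumes:
  - name: maven-repo
    emptyDir: {}

# 2) Fails with the error described: NFS volume
# volumes:
#   - name: maven-repo
#     nfs:
#       server: 192.168.1.10       # NFS export is on the same machine
#       path: /exports/maven-repo

# 3) Also fails (same error): host path volume
# volumes:
#   - name: maven-repo
#     hostPath:
#       path: /exports/maven-repo
```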
The master Jenkins cohabits with the OpenShift Origin installation (so the master Jenkins is not a pod within OpenShift; the slaves are meant to be deployed in OpenShift). The whole point of the setup is to have all slaves share a persistent volume holding Maven's local repository, so that when Maven 'downloads half of the internet' for a build, all those jars are kept there and do not have to be downloaded again for the next build. Slaves preallocated with Swarm do fine on this; the ones launched with the kubernetes-plugin seemingly cannot have a persistent volume for now... Sorry for the lengthy comment! Your suggestions are very welcome. Should this be a separate issue?
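For reference, this is roughly what the working Swarm slaves use in their OpenShift deployment config: the NFS share mounted at Maven's local repository path. The container name, image, server, and paths here are placeholders standing in for my real ones:

```yaml
# Sketch of the volume/mount pair from the (working) deployment config
# for the Swarm slaves; names and paths are placeholders:
spec:
  containers:
    - name: jenkins-slave
      image: my-jenkins-slave:latest              # placeholder image
      volumeMounts:
        - name: maven-repo
          mountPath: /home/jenkins/.m2/repository # Maven local repo
  volumes:
    - name: maven-repo
      nfs:
        server: 192.168.1.10                      # placeholder NFS server
        path: /exports/maven-repo
```

This is the mount I would like the kubernetes-plugin slaves to get as well.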