absolute path customWorkspace with docker-plugin and declarative pipeline


Alex Suttmiller

Mar 19, 2018, 12:33:44 AM
to Jenkins Users
Is it possible to use absolute paths for the workspace at the time the docker container is being created/run with declarative pipeline? I have done this before with mesos docker cloud, but I am hoping to replicate that ability locally, without having a mesos cluster on my localhost.

An example pipeline:
pipeline {
    agent none
    stages {
        stage('Foo') {
            agent {
                docker {
                    label 'dind'
                    image 'alpine'
                    customWorkspace '/foo'
                }
            }
        }
    }
}

But when I try this, I end up with the results:
sh: can't create /foo@tmp/durable-5a40a590/jenkins-log.txt: nonexistent directory
sh: can't create /foo@tmp/durable-5a40a590/jenkins-result.txt.tmp: nonexistent directory
mv: can't rename '/foo@tmp/durable-5a40a590/jenkins-result.txt.tmp': No such file or directory

This post leads me to believe it may not be possible: https://github.com/jenkinsci/docker-plugin/issues/540
This post leads me to believe it may be possible: https://groups.google.com/d/msg/jenkinsci-users/_YNnkdYRXoE/4LifSIOoAAAJ, but I haven't figured out the right combination of volumes and containers.

I am using docker-compose to create my environment. Can recreate locally from this branch: https://github.com/NeverOddOrEven/dind-jenkins/commits/custom-ws-dind-agent-not-working.

Any help would be truly appreciated! Thanks.

Björn Pedersen

Mar 19, 2018, 2:05:28 AM
to Jenkins Users
Hi,

So what is happening:

  1. Jenkins starts up a docker container.
  2. It connects as the jenkins user.
  3. In your case, you try to create a workspace plus auxiliary dirs at / (root level) as this user.
     While the real workspace is bind-mounted, Jenkins attempts to create the auxiliary dir locally,
     but only root has write access there.
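The error messages above show the pattern: the durable-task step writes its control files into a "<workspace>@tmp" sibling directory, so it is the workspace's *parent* that must be writable, not just the workspace itself. A quick sketch of that convention (paths purely illustrative; using /tmp instead of / so the demo succeeds):

```shell
#!/bin/sh
# Illustrative only: Jenkins derives the control dir by appending "@tmp"
# to the workspace path, creating it NEXT TO the workspace. With WS=/foo
# and a non-root user, the mkdir below is what fails.
WS=/tmp/ws-demo/foo
mkdir -p "$WS"
CONTROL="${WS}@tmp/durable-xxxx"      # name pattern from the error output above
mkdir -p "$CONTROL"                   # needs write access to the parent of $WS
echo ok > "$CONTROL/jenkins-log.txt"
cat "$CONTROL/jenkins-log.txt"        # → ok
```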

Is it really essential to use a non-standard directory here? It is non-standard both in the Jenkins sense, where the workspace is typically a sub-directory of the jenkins user's home, and according to the Filesystem Hierarchy Standard, which makes things much harder...



Björn

Alex Suttmiller

Mar 20, 2018, 12:08:55 AM
to Jenkins Users
Thanks for the explanation. It helps me understand why that path does not exist. I added 'jenkins' to the root group and then chmod'd 775 on /. Unfortunately, this doesn't work either. When I do this, I suspect the workspace is being mounted at /, which breaks all the things (specifically, the SSH injection path is broken, causing the provisioning to fail).


> Is it really essential to use a non-standard directory here?

No, not necessarily. However, solving this would enable support for any legal path in that field. And it does seem to be a legal value according to declarative pipeline, as the following code works:

pipeline {
    agent none
    stages {
        stage('Foo') {
            agent {
                node {
                    label 'dind'
                    customWorkspace '/test'
                }
            }
            steps {
                sh 'ls -l /test'
            }
        }
    }
}

Things only break when I expose the host to my ephemeral agents. If I rely on the docker daemon within the ephemeral agent to pull and run the pipeline's containers, everything works. Unfortunately, this means pulling every image in every stage, every time.

The former is a peer relationship between the build agent and the dind host; the latter is a parent-child relationship. When "docker run -w /foo/bar -v /foo/bar:/foo/bar" is called, if /foo/bar does not exist on the host or in the container, it is created. It makes sense why this works in the parent-child relationship. I expected that when I ran the same command as the jenkins user against the dind docker host, emulating the "expose DOCKER_HOST" functionality, the -v /foo/bar:/foo/bar mapping would fail. But it didn't: it created that path both in the container and on dind-docker.

So I am perplexed now. I haven't been able to recreate it outside of Jenkins, but I think it must have something to do with the "docker run" command and file permissions.

Any advice on any of the following? 1. Some other caching mechanism besides "expose DOCKER_HOST"; 2. a pointer to the relevant section of code for the docker-plugin; or 3. telling me I'm way off base and need more help :). Similar to mounting a volume to /home/jenkins, could the plugin mount a volume to back absolute paths?
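To make that last idea concrete, here is a sketch of the direction I'm imagining: backing the custom path with an explicitly mounted named volume via the docker agent's args option. This is untested; the volume name jenkins-foo is made up, and I don't know whether the plugin's own mounts would conflict with it:

```groovy
pipeline {
    agent none
    stages {
        stage('Foo') {
            agent {
                docker {
                    label 'dind'
                    image 'alpine'
                    // hypothetical: pre-mount a named volume at the custom path so
                    // /foo (and its @tmp sibling's parent) exists and is writable
                    args '-v jenkins-foo:/foo'
                    customWorkspace '/foo'
                }
            }
            steps {
                sh 'ls -ld /foo'
            }
        }
    }
}
```

If the daemon creates the named volume on the dind host, the directory would already exist before the durable-task step tries to write its control files, which is exactly the failure mode above.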

Any thoughts?



