Hi mailing list,
I'm building a pipeline for neural net training that needs to operate on around 650 GB of data, using the Kubernetes plugin to run agents in pods. Between builds I'd like that data to stick around in a persistent volume used for the workspace. It seems like persistentVolumeClaimWorkspaceVolume is perfect for this, but the way I have it configured is not working.
Jenkinsfile:
pipeline {
    agent {
        kubernetes {
            yamlFile 'jenkins/pv-pod.yaml'
            defaultContainer 'tree'
        }
    }
    options {
        podTemplate(workspaceVolume: persistentVolumeClaimWorkspaceVolume(claimName: 'workspace', readOnly: false))
    }
    stages {
        stage('read workspace') {
            steps {
                echo 'current env'
                sh 'env'
                sh '/usr/bin/tree'
                echo 'previous env'
                sh 'cat old-env.txt || true'
                sh 'env > old-env.txt'
            }
        }
    }
}
jenkins/pv-pod.yaml:
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: tree
    image: iankoulski/tree
    resources: {}
    command:
    - /bin/cat
    tty: true
I have already defined the PersistentVolumeClaim and applied it in the same namespace the Jenkins agent pods run in (default).
pv-claim.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: workspace
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
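For reference, this is how I applied the claim and checked that it is bound in the default namespace (claim name 'workspace' as above):

```shell
# Create the claim in the namespace the agent pods run in
kubectl apply -f pv-claim.yaml -n default

# Confirm the claim exists and its STATUS is Bound
kubectl get pvc workspace -n default
```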
Is there a mistake in this setup or have I misunderstood how this is supposed to work?
Thanks