We have isolated the cause of the slowdown to the 1-second wait in `java.io.PipedOutputStream`'s `write()` method. The symptom is that whenever the process writes to the buffer (namely, all of the `export` environment-variable statements), a number of the `write()` calls block for 1+ seconds because the buffer is full. Our solution is to have the main thread delegate the `write()` calls to asynchronous writer threads, each responsible for writing one `export` statement to the buffer, and then to ensure all the writer threads have finished at the end. This dramatically reduced the overhead of `sh` calls from 3-4 seconds down to less than 1 second. We are currently refining the change and will then submit a formal PR, but if there are any comments or suggestions, please let us know.

One additional note: we observed the slow `sh` behavior only when the calls were made inside a `container` block, not when they simply ran in the default container. However, even targeting the same container as the default container produced slow `sh` calls. Example:

```groovy
pipeline {
  agent {
    kubernetes {
      label "pod-name"
      defaultContainer "jnlp"
      yaml """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: jnlp
    ...
"""
    }
  }
  stages {
    stage("Loop in Default") {
      steps {
        script {
          for (int i = 0; i < 10; i++) {
            sh "which jq"
          }
        }
      }
    }
    stage("Loop in JNLP") {
      steps {
        container("jnlp") {
          script {
            for (int i = 0; i < 10; i++) {
              sh "which jq"
            }
          }
        }
      }
    }
  }
}
```
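For reference, the delegation approach described above can be sketched roughly as follows. This is a minimal standalone illustration, not the actual plugin code: the class name, the hardcoded `export` statements, and the fixed thread pool are all placeholders, and a plain reader thread stands in for whatever drains the pipe in the real setup. The idea it demonstrates is that each statement is handed to a worker, so a full pipe buffer blocks only that worker rather than the main thread, and the main thread then waits for every writer to finish before closing the stream.

```java
import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicLong;

public class AsyncPipeWriters {
    public static void main(String[] args) throws Exception {
        PipedOutputStream out = new PipedOutputStream();
        // Deliberately small pipe buffer to mimic the constrained buffer
        // that caused write() to block in the original scenario.
        PipedInputStream in = new PipedInputStream(out, 64);

        // Stand-in consumer draining the pipe, counting bytes received.
        AtomicLong received = new AtomicLong();
        Thread reader = new Thread(() -> {
            byte[] buf = new byte[16];
            try {
                int n;
                while ((n = in.read(buf)) != -1) {
                    received.addAndGet(n);
                }
            } catch (IOException ignored) {
            }
        });
        reader.start();

        // Hypothetical export statements; in the real change these would be
        // the environment-variable exports mentioned above.
        List<String> exports = Arrays.asList(
                "export FOO=1\n", "export BAR=2\n", "export BAZ=3\n");

        // Delegate each write to a worker thread so the main thread never
        // blocks on a full pipe buffer. Ordering across statements is not
        // preserved, which is acceptable for independent export lines.
        ExecutorService pool = Executors.newFixedThreadPool(exports.size());
        List<Future<?>> pending = new ArrayList<>();
        for (String stmt : exports) {
            pending.add(pool.submit(() -> {
                // A single write() call delivers its bytes atomically to the
                // connected PipedInputStream, so lines do not interleave.
                out.write(stmt.getBytes(StandardCharsets.UTF_8));
                return null;
            }));
        }

        // Ensure all writer threads have finished before closing the stream.
        for (Future<?> f : pending) {
            f.get();
        }
        out.close();
        reader.join();
        pool.shutdown();

        System.out.println("bytes piped: " + received.get());
    }
}
```

One caveat worth noting with this pattern: `PipedInputStream` assumes a live write side, so the stream should be closed (as above) only after every writer future has completed, otherwise the reader can fail with a "write end dead" error.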