Block Build when downstream project is *queued*


Dirk Kuypers

Feb 9, 2012, 9:42:53 AM
to jenkins...@googlegroups.com
Hi,

Is it possible to block a build as long as downstream projects are
queued, but not yet running?

Background: We have a continuous compile job triggered by SCM changes
which starts about 30 unit test jobs after a successful compile. Most
test jobs copy their workspace via the copy-workspace-scm plugin from this
compile job. The compile takes about 5 minutes, some test projects 10
minutes and longer. The round-trip build cycle is around 18 minutes at
the moment. When I disable "Block build when downstream jobs are
running" on the compile job, it can happen that some tests queued from
the previous build get started only after the second compile has
succeeded. Those tests then get their workspace from the second compile
and not from the first one. If the second compile job had to wait until
all remaining tests left the queue and entered the running state, I
think it would be safe to start the next compile (and that could save
us about 5 minutes in our build cycle).

Another possibility would be to tie each unit-test job to the compile
job that started it (with that build's workspace) instead of to "Most
recent completed build".

Any thoughts on this? Any other way to achieve my goal?

Dirk

--
Never trust a short-haired guru

Didier Durand

Feb 10, 2012, 12:38:29 AM
to Jenkins Users
Hi,

The Locks and Latches plugin should help you achieve what you need:
https://wiki.jenkins-ci.org/display/JENKINS/Locks+and+Latches+plugin

regards

didier

Dirk Kuypers

Feb 10, 2012, 4:25:54 AM
to jenkins...@googlegroups.com
Hi Didier,

Are you sure? I am already using it for some jobs which I do not want
to run concurrently. But how could I achieve that my compile job waits
in the queue until the last downstream job has entered the running
state? Once all downstream jobs are running, it is safe to start the
next upstream compile job. But maybe I am overlooking something. Also,
the plugin is proposed for deprecation:

https://wiki.jenkins-ci.org/display/JENKINS/Proposed+Plugin+Deprecation

The "Throttle Concurrent Builds" plugin also only allows controlling
the executors on the nodes. I would need control over the waiting
queue...

I have played with the Priority Sorter plugin but it did not work reliably for me.

Any other idea?

Dirk


Didier Durand

Feb 10, 2012, 5:34:20 AM
to Jenkins Users
What about the Exclusion plugin: https://wiki.jenkins-ci.org/display/JENKINS/Exclusion-Plugin


Dirk Kuypers

Feb 10, 2012, 7:19:00 AM
to jenkins...@googlegroups.com
Hi,

2012/2/10 Didier Durand <durand...@gmail.com>:

Hm, that one is also restricted to the run time of the job. I do not
see how I could sort the waiting queue with it.

Sami Tikka

Feb 11, 2012, 1:00:36 AM
to jenkins...@googlegroups.com
In Jenkins it is usually a bad idea to use another job's workspace because you cannot precisely control what gets executed and when.

You have two possibilities:

1) The best practice in Jenkins is usually for the compile job to archive its build artifacts. This means that a configured set of files is copied out of the job workspace into a build-specific archive area where they are safe. The unit test jobs can then be triggered and use the Copy Artifact plugin to get a copy of the artifacts. If you want, the compile job can even pass its build number to the unit test jobs.

2) If your unit tests need to use the compile job workspace directly because the compile artifacts are so huge they cannot reasonably be copied around, you should consider executing the unit tests in the same compile job.

Another idea: I recently learned there is a plugin that creates a new build step where you can trigger another job and wait for its completion. This might be useful to you but I cannot remember what the plugin was called.

-- Sami

Dirk Kuypers

Feb 15, 2012, 4:16:09 AM
to jenkins...@googlegroups.com
Hi,

Sorry for the late answer, I was busy with other things. Copy
Artifact seems to be the thing that would fit perfectly, thanks for
the suggestion! Running everything in one job is no alternative
because it would take more than 2 hours until all the tests are
finished. I have 12 executors running in parallel at the moment, so
our build cycle is only 15 minutes now. I had a shared workspace in
the beginning, but this caused problems too: some test projects use
the same deployment files, which sometimes led to tests failing
because the resource was blocked by another test. And sometimes the
xunit plugin got confused when parsing test results when two jobs
finished at the same time, so you could find the results of both test
projects at each project.

But I just implemented a pilot job with Copy Artifact. Unfortunately I
have two new problems with that:
1. Why does copying the files take so much time (about 16 minutes)?

09:05:06  Started by upstream project "ContestContinuous" build number 3456
09:05:06  Building remotely on 1SP1-SLAVE1 in workspace
C:\Jenkins-Slave\workspace\ClimateChamberTest
09:05:06  No emails were triggered.
09:21:43  Copied 813 artifacts from "ContestContinuous" build number 3456
10:02:58  [ClimateChamberTest] $ cmd /c call
C:\Users\KUYPER~1.1SP\AppData\Local\Temp\hudson662483330654974304.bat

Here is an old build with the copy-workspace-scm plugin (same file set, I just
copied and pasted the pattern for Copy Artifact):

15:25:03  Started by upstream project "ContestContinuous" build number 3447
15:25:03  Building remotely on 1SP1-SLAVE1 in workspace
C:\Jenkins-Slave\workspace\ClimateChamberTest
15:26:54  No emails were triggered.
15:26:54  [ClimateChamberTest] $ cmd /c call
C:\Users\KUYPER~1.1SP\AppData\Local\Temp\hudson356075031990640251.bat

2. I lose the changes from the compile job in the test job. With the
old setup I fingerprinted one particular file in both the compile and
the test job, and due to some magic the unit test job then knew about
the changes of the upstream job. Now this relation is somehow gone,
although the compile job automatically fingerprints the same file (and
many others) and I didn't change the fingerprinting in the test job.

Thanks for further hints on that
Dirk


Sami Tikka

Feb 22, 2012, 5:40:16 PM
to jenkins...@googlegroups.com
2012/2/15 Dirk Kuypers <kuyper...@googlemail.com>:

> Running everything in one job is no alternative
> because it would take more than 2 hours until all the tests are ready.

OK. But you do know it is possible to run several builds of one job
concurrently? Check "Execute concurrent builds if necessary" on the
job configuration page.

> But I just implemented a pilot job with copy artifact. Unfortunately I
> have two new problems with that:
> 1. Why is copying the files taking that much time (about 16 minutes)?
>
> 09:05:06  Started by upstream project "ContestContinuous" build number 3456
> 09:05:06  Building remotely on 1SP1-SLAVE1 in workspace
> C:\Jenkins-Slave\workspace\ClimateChamberTest
> 09:05:06  No emails were triggered.
> 09:21:43  Copied 813 artifacts from "ContestContinuous" build number 3456
> 10:02:58  [ClimateChamberTest] $ cmd /c call
> C:\Users\KUYPER~1.1SP\AppData\Local\Temp\hudson662483330654974304.bat

That's quite a lot of artifacts. How much data is that? I wonder if
the author of the Copy Artifacts plugin is reading this?

Another possibility is to skip the Copy Artifacts plugin and do things
the old-fashioned way: use wget or curl to download the artifacts.

In your case, I probably would not download 813 files individually. I
would download the link that provides all artifacts as one
"archive.zip" package.
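In case the single-zip route helps, here is a minimal sketch of how a test job's build step could compose that download. The Jenkins base URL and the `UPSTREAM_BUILD` variable are assumptions for illustration, not taken from this thread; Jenkins does expose an "all files in zip" link at `.../artifact/*zip*/archive.zip`.

```shell
# Sketch: build the URL of the "all files in zip" artifact link.
# JENKINS_URL is a hypothetical placeholder; the job name is from the thread's logs.
JENKINS_URL="http://jenkins.example.com"
JOB="ContestContinuous"
# Use an upstream build number if the trigger passed one, else the last success.
BUILD="${UPSTREAM_BUILD:-lastSuccessfulBuild}"
ARCHIVE_URL="$JENKINS_URL/job/$JOB/$BUILD/artifact/*zip*/archive.zip"
echo "$ARCHIVE_URL"
# The actual fetch-and-unpack step would then be something like:
#   curl -fsSL -o archive.zip "$ARCHIVE_URL" && unzip -q archive.zip
```

Pinning `BUILD` to the triggering build's number (rather than `lastSuccessfulBuild`) would also avoid the original race where queued tests pick up a newer compile.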

>
> Here is an old build with copy-workspace-scm plugin (same file set, I did just
> copy and paste for copy artifacts):
>
> 15:25:03  Started by upstream project "ContestContinuous" build number 3447
> 15:25:03  Building remotely on 1SP1-SLAVE1 in workspace
> C:\Jenkins-Slave\workspace\ClimateChamberTest
> 15:26:54  No emails were triggered.
> 15:26:54  [ClimateChamberTest] $ cmd /c call
> C:\Users\KUYPER~1.1SP\AppData\Local\Temp\hudson356075031990640251.bat

Strange. I would have thought it takes much longer to copy a full
workspace than some artifacts. I admit I haven't used the Copy
Workspace SCM plugin a lot. I did some testing with it but found it
slow and did not use it any more.

> 2. I lose the changes from the compile job in the test job. With the
> old setup I fingerprinted one particular file in both the compile and
> the test job, and due to some magic the unit test job then knew about
> the changes of the upstream job. Now this relation is somehow gone,
> although the compile job automatically fingerprints the same file (and
> many others) and I didn't change the fingerprinting in the test job.

You need to fingerprint and archive one unique build artifact in the
upstream job, then copy that artifact into the downstream job and
fingerprint it there too. When Jenkins sees the same fingerprint in
two builds, it knows they are connected.
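For what it's worth, a Jenkins fingerprint is simply the MD5 checksum of the file, so two builds get linked exactly when they both fingerprint byte-identical copies. A tiny self-contained illustration (the file names are made up):

```shell
# Simulate the same artifact being seen by an upstream and a downstream job.
workdir=$(mktemp -d)
echo "compiled output" > "$workdir/upstream.dll"       # archived + fingerprinted upstream
cp "$workdir/upstream.dll" "$workdir/downstream.dll"   # copied + fingerprinted downstream
up=$(md5sum "$workdir/upstream.dll" | cut -d' ' -f1)
down=$(md5sum "$workdir/downstream.dll" | cut -d' ' -f1)
if [ "$up" = "$down" ]; then
  echo "same fingerprint: Jenkins would link the two builds"
fi
```

This is also why the link silently breaks if the downstream copy differs even by one byte from what the upstream job archived.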

-- Sami

Dirk Kuypers

Feb 27, 2012, 4:24:14 AM
to jenkins...@googlegroups.com
Hi Sami,

Sorry for the late answer, I was off for two days...

2012/2/22 Sami Tikka <sjt...@gmail.com>:


> 2012/2/15 Dirk Kuypers <kuyper...@googlemail.com>:
>> Running everything in one job is no alternative
>> because it would take more than 2 hours until all the tests are ready.
>
> OK. But you do know it is possible to run several builds of one job
> concurrently? Check the "Execute concurrent builds if necessary" on
> job configuration page.

That's exactly what I want to achieve. The problem with Copy
Artifacts here is that the tests run about 1-10 minutes but copying
the artifacts takes about 15 minutes. With 35 test jobs this adds 35
times 15 minutes for nothing but slow I/O...
Maybe it is related to https://issues.jenkins-ci.org/browse/JENKINS-12007
At least I voted for this issue and I am currently watching it. ;-)

>> But I just implemented a pilot job with copy artifact. Unfortunately I
>> have two new problems with that:
>> 1. Why is copying the files taking that much time (about 16 minutes)?
>>
>> 09:05:06  Started by upstream project "ContestContinuous" build number 3456
>> 09:05:06  Building remotely on 1SP1-SLAVE1 in workspace
>> C:\Jenkins-Slave\workspace\ClimateChamberTest
>> 09:05:06  No emails were triggered.
>> 09:21:43  Copied 813 artifacts from "ContestContinuous" build number 3456
>> 10:02:58  [ClimateChamberTest] $ cmd /c call
>> C:\Users\KUYPER~1.1SP\AppData\Local\Temp\hudson662483330654974304.bat
>
> That's quite a lot of artifacts. How much data is that? I wonder if
> the author of the Copy Artifacts plugin is reading this?

It is a zip of about 60 MB, so nothing that should take 15 minutes to
transfer and unzip... The original workspace where everything is
compiled has more than 30,000 files and several GB.

>> Here is an old build with copy-workspace-scm plugin (same file set, I did just
>> copy and paste for copy artifacts):
>>
>> 15:25:03  Started by upstream project "ContestContinuous" build number 3447
>> 15:25:03  Building remotely on 1SP1-SLAVE1 in workspace
>> C:\Jenkins-Slave\workspace\ClimateChamberTest
>> 15:26:54  No emails were triggered.
>> 15:26:54  [ClimateChamberTest] $ cmd /c call
>> C:\Users\KUYPER~1.1SP\AppData\Local\Temp\hudson356075031990640251.bat
>
> Strange. I would have thought it takes much longer to copy a full
> workspace than some artifacts. I admit I haven't used the Copy
> Workspace SCM plugin a lot. I did some testing with it but found it
> slow and did not use it any more.

Funny, because for me it is fast, at least compared to Copy Artifacts.

>> 2. I lose the changes from the compile job in the test job. With the
>> old setup I fingerprinted one particular file in both the compile and
>> the test job, and due to some magic the unit test job then knew about
>> the changes of the upstream job. Now this relation is somehow gone,
>> although the compile job automatically fingerprints the same file (and
>> many others) and I didn't change the fingerprinting in the test job.
>
> You need to fingerprint and archive one unique build artifact in the
> upstream job, then copy that artifact into the downstream job and
> fingerprint it in there too. When Jenkins sees the same fingerprint in
> two builds, it knows they are connected.

That's exactly what I am doing. I fingerprint one specific DLL in the
upstream job where it is compiled and fingerprint it again in the test
job where it is just used. With Copy Artifact all artifacts
(including "my" specific DLL) are fingerprinted automatically; maybe
that's the difference. The test job just fingerprints one of the 813
artifacts. Should it fingerprint all of them? That would mean I had to
copy the rule for which artifacts to extract from the workspace into
all my 35 test jobs. Not that nice: if I had to add yet another file,
I would end up editing all 35 test jobs again to fingerprint all files.

BR
Dirk
