Okay, thinking laterally here.
Let’s assume that the Git plugin timeout is hardcoded to 10 minutes. Now, all you have to do is bring the checkout under 10 minutes. This may well be solvable.
First, if you haven’t done it already, use shallow clones. The Git plugin has supported them since 1.1.23 (September 2012). If you’re building, you don’t need the history, just the current revision. If that doesn’t help…
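For reference, a shallow clone from the command line looks like the snippet below (the repository URL is a placeholder); in a Jenkins job it’s roughly the shallow-clone option under the job’s advanced clone behaviours:

    # Fetch only the newest revision, none of the history:
    git clone --depth 1 https://git.example.com/project.git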
Get a sysadmin and profile the pulls. Is your Git server maxing out on CPU or (more likely) disk I/O? Is your client?
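If you don’t know where to look first, the stock Linux tools are enough to find the bottleneck; a minimal sketch, assuming shell access to both boxes:

    top             # CPU: look for git/ssh processes pegging cores
    iostat -x 5     # disk: %util near 100 means the drives are saturated
    sar -n DEV 5    # network: throughput per interface (sysstat package)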
If your server is maxing out, you need to either beef up your server or reduce the load. Increasing disk speed is between you and your sysadmins, assuming that you own the server. Reducing load? Try one or more of these:
· Stop polling. If you can use GitLab, there’s a plugin to have GitLab push to Jenkins. If you have a dozen polling projects, this will reduce load big-time. There may be other Git push solutions for other Git servers; I don’t know.
· The last time I had checkouts take over 10 minutes (on a proprietary system, not Git), the problem was that nightly builds kicked off all at once and tried to pull 60 branches of the code simultaneously. Solution? Use the Throttle Concurrent Builds plugin (https://wiki.jenkins-ci.org/display/JENKINS/Throttle+Concurrent+Builds+Plugin), make pulling from source control its own step, and only allow 3-5 simultaneous pulls.
· If you have to poll, set up your polling schedule with ‘H’ notation (see the help for the polling schedule on your job) to spread the polling across the hour; there’s an example after this list.
· Compress the binaries you have in Git. That can’t all be source, can it?
· Better yet, put the binaries into something like Artifactory and have the build job pull them down after getting the actual source.
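To make the ‘H’ notation above concrete: H tells Jenkins to hash each job onto its own minute instead of having every job fire at once.

    H/15 * * * *     # polls every 15 minutes, at a per-job offset Jenkins chooses
    */15 * * * *     # the classic form: every job polls at :00, :15, :30, :45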
If your network is maxing out, try one or more of these (some assume that you own your Git server hardware and network; if you’re running off of GitHub, some of these won’t work).
· Put your build machines (and thus your Jenkins slaves) on the same subnet as the Git server, whether or not the Jenkins server is there as well. If that’s impossible, at least get it to the same site (so it’s all LAN, no WAN).
· Replicate the Git server on the subnet your build hosts are on; Git is built to be distributed. See the sketch after this list.
· If you can’t put your build farm near your source farm, at least get a Jenkins slave onto the same network as the Git server. Give it a job that polls Git. Rather than actually performing the build, have it compress the sources into a giant Zip file, archive that, then kick off a downstream job (running on your local build farm) that unzips the artifact and does the build and test run. You may need plugins to do this right. The upstream job will still be able to tell you the changes made to the source, and point you to the downstream job with the actual results. The sketch after this list shows the hand-off.
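Rough sketches of those last two ideas; every host name, path, and script here is a placeholder:

    # Replicated Git server on the build subnet: mirror once, refresh from cron.
    git clone --mirror ssh://git@git.example.com/project.git /srv/git/project.git
    # cron entry on the mirror box: */5 * * * * cd /srv/git/project.git && git fetch --prune

    # Upstream job (slave next to the Git server): package the sources instead of building.
    zip -rq sources.zip . -x ".git/*"
    # ...archive sources.zip as a build artifact, then trigger the downstream job.

    # Downstream job (local build farm): unpack the archived sources and build.
    unzip -q sources.zip -d src
    cd src && ./build.sh    # build.sh stands in for your real build step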
If the server and network are fine, but your build box is maxed out on I/O writes, you’re going to have to beef up your hardware (or run fewer builds at once, if you run multiple builds on one host). Get faster drives and/or get a RAID controller for your builds and put it into some sort of striping mode for faster writes. If you keep only your sources and builds on the RAID (with more permanent things like the OS and your compilers on another drive/RAID), you probably don’t need that RAID to actually be redundant. If a drive blows, you lose your current build, swap in another drive, and try again.
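For the striping idea, a minimal sketch with Linux software RAID (device names and mount point are placeholders; a hardware controller works just as well):

    # Stripe two dedicated drives (RAID 0) and mount them as the build workspace:
    mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
    mkfs.ext4 /dev/md0
    mount /dev/md0 /var/lib/jenkins/workspace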
--Rob