jeremy mordkoff:
> You need 99 jobs for the 99 programs, but what about the
> libraries?
Yes, one job per project, and let MSBuild take care of
building the libraries. Thankfully, I do not have to worry
about it in Jenkins. The libraries will be available at
their correct relative paths in the repository and its local
mirror.
> I assume a program will depend on one or more libraries.
> Are there dependencies between libraries? (this is common
> and not a problem if it is understood.)
Yes, and MSBuild takes care of them too, because all the
dependencies are specified as project references in our
project files. Did you want to suggest a manual way to
specify those dependencies (e.g. as in simple Makefiles) or
did you have some automated method in mind? I should be
interested to learn about it in case I need it.
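Concretely, each job's build step amounts to little more
than a single MSBuild invocation; a minimal scripted sketch
(the solution name and node label are placeholders, not our
real ones):

```groovy
node('windows') {
    stage('Build') {
        // The <ProjectReference> entries in the project files let
        // MSBuild compute the dependency order itself, so one call
        // builds the program and any libraries it needs.
        bat 'msbuild MyProgram.sln /m /p:Configuration=Release'
    }
}
```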
> Are there dependencies between programs? (this would be
> bad.)
No, there are not.
> How fast is your job if there are no changes?
What changes do you mean -- source modifications? For the
time being, my primary concern is not build speed but ease
of configuration and maintenance of Jenkins jobs with
minimal duplication of code between similar jobs (which
deserves a separate thread).
> Are the artifacts available so that the build system
> detects that nothing needs to be done quickly?
Yes, the binaries are co-located with the SVN mirror, but
they are local files, not under version control; so if
several programs depend on the same library, the build
system (MSBuild) will find and reuse the same binary of the
library.
> If so, then you could consider a single jenkins job that
> iterated over the libraries in the correct order and then
> over the programs. This would by default use a single
> workspace. It has the advantage of only scanning each
> library once.
Not very easy, because some of the programs and libraries
have specific settings and build parameters, which I should
like to specify in their individual JenkinsFiles. Also, as I
wrote in the previous post, I (for now) want Jenkins to host
an individual web-page for each program.
> otherwise....there is actually nothing that says the build
> has to use the workspace checked out by jenkins.
OK, but I have three questions:
1. Who updates the local copy from the version-control
   repository, and when, if no `checkout' step is present
   in the Jenkinsfile?
2. Does the `checkout' step imply an update (e.g.
   svn update) after the initial run? I hope it will not
   rewrite an existing mirror every time...
3. If I set the same `Local module directory' in all
   jobs, will they properly reuse the same SVN mirror or
   not?
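If I read the Subversion plugin's options correctly, point 2
is controlled by the `workspaceUpdater' setting of an
explicit `checkout' step; a sketch (the URL and directory
are placeholders):

```groovy
// UpdateUpdater runs `svn update' on subsequent builds instead of a
// fresh checkout; CheckoutUpdater would wipe and re-check-out each time.
checkout([$class: 'SubversionSCM',
          locations: [[remote: 'https://svn.example.org/repo/trunk',
                       local: 'mirror']],   // the `Local module directory'
          workspaceUpdater: [$class: 'UpdateUpdater']])
```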
> I suspect your predecessor had extra workspaces that he
> reused for every build.
I doubt it. He used `ExecutorRepo_${EXECUTOR_NUMBER}'
properly expanded and located in the Jenkins directory on
the node machine. It has all the needed files. Why would he
need extra workspaces?
Anyway, only one executor is currently set up for each
node, and if I stick to that, I could use a hard-coded path
instead.
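With a single executor per node, the scripted form could be
as simple as this (the path is a placeholder for whatever my
predecessor used):

```groovy
node('windows') {
    // `ws' allocates (and locks) the given directory as the
    // workspace for the enclosed steps, instead of the default
    // per-job workspace checked out by Jenkins.
    ws('C:\\Jenkins\\ExecutorRepo_0') {
        bat 'msbuild MyProgram.sln /p:Configuration=Release'
    }
}
```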
> Have you found these so you know they exist?
No.
> Are there containers involved? It's possible the
> workspaces are not getting mounted inside the container.
No containers involved. In this regard everything is
transparent: executors work directly on our manually created
virtual machines.
> If you are forced to have many jenkins jobs, ->
Not that I am forced, no, but this approach is outwardly
compatible with what I have inherited, and I myself think
it offers better flexibility regarding the configuration of
individual programs. I fear that managing them all in a
single pipeline would require more work and knowledge than
I currently have. Furthermore, it would require special
provision not to rebuild everything after every commit but
only the projects affected by the commit.
> -> then I would put a JenkinsFile in every program dir.
That's what I am going to do. I already have two projects
set up that way, but only one of them works so far. It is a
simple `pandoc' build for a document with no external
dependencies. That is how far I have gotten in the way of
pipelines :-)
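That working Jenkinsfile is essentially the following sketch
(file names are placeholders):

```groovy
node {
    stage('Checkout') {
        checkout scm           // the SCM configured for this job
    }
    stage('Build') {
        bat 'pandoc -o manual.pdf manual.md'
    }
    stage('Archive') {
        archiveArtifacts artifacts: 'manual.pdf'
    }
}
```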
> whether they are scripted or declarative is up to you. I
> like scripted so much I rewrote everything my predecessor
> did because he used all declarative (and had terrible
> program structure).
It is too early for me to make a justified choice. Is the
scripted syntax better documented or more flexible, and
does it allow better code reuse within and between
pipelines? Does the
declarative syntax become too convoluted for non-trivial
build processes? What were your reasons for the change?
> As far as ${EXECUTOR_NUMBER} not getting expanded, this
> could be an issue of timing. It's possible that whatever
> process is doing that expansion runs before an executor
> is chosen. Put the expansion in a script.
That must be it, but I can't figure out who or what is
responsible for expanding it *after* an executor is chosen
in my current setup. In fact, I have not been able to
find a single reference to that variable, yet it works...
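If that timing explanation is right, then inside a `node'
block an executor has already been allocated and the
variable should be defined; a sketch of my understanding:

```groovy
node('windows') {
    // Here an executor is allocated, so the variable exists and
    // ordinary Groovy interpolation expands it:
    echo "running on executor ${env.EXECUTOR_NUMBER}"
    def repo = "ExecutorRepo_${env.EXECUTOR_NUMBER}"
    // ... use `repo' as the mirror directory ...
}
// Outside any node block (or in job-configuration fields that are
// evaluated before scheduling), env.EXECUTOR_NUMBER is not yet set,
// so nothing expands it.
```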