On Mon, Sep 20, 2021 at 12:57 PM Jesse Glick <
jgl...@cloudbees.com> wrote:
>
> Any notion yet of why that would be?

Why do you ask? The maximum heap size seems to have been 1516 MiB in,
e.g.,
https://ci.jenkins.io/job/Infra/job/pipeline-steps-doc-generator/job/master/299/consoleFull
but had dropped to 954 MiB by, e.g.,
https://ci.jenkins.io/job/Infra/job/pipeline-steps-doc-generator/job/master/322/consoleFull
so the problem with pipeline-steps-doc-generator seems clear to me:
the operators mistakenly reduced the memory available to the test
system, and the job happened to keep working for a while until organic
growth in heap usage exposed the underlying operational issue. With
the operational issue resolved, PRs like
jenkins-infra/pipeline-steps-doc-generator#92 are now passing against
recent core releases. As far as I can tell, this was a false alarm,
and I should not have been pinged about it.

I do not think it is appropriate to imply that a developer caused a
regression (for example, by describing jenkinsci/jenkins#5687 as "the
culprit") simply because an operational failure occurred. The cause of
the operational failure should be understood first, and only if it
points to a developer-introduced regression (such as a memory leak)
should the developer be notified.

Anyway, one theory is that the organic increase in heap usage comes
from the lock objects handed out by
ClassLoader#getClassLoadingLock(String). If the ClassLoader object is
registered as parallel-capable, this method returns a dedicated lock
object associated with the specified class name; otherwise, it returns
the ClassLoader object itself. Perhaps enough of these dedicated
objects accumulate to cause a modest increase in heap usage on some
installations (~300 MiB in the case of pipeline-steps-doc-generator).
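
To illustrate the mechanism, here is a minimal, self-contained sketch
(DemoClassLoader and the class names are hypothetical, not anything in
Jenkins or in this job):

    // Sketch of how a parallel-capable ClassLoader hands out (and
    // retains) one dedicated lock object per class name.
    public class DemoClassLoader extends ClassLoader {
        static {
            // Opt in to parallel class loading. Without this call,
            // getClassLoadingLock(String) would just return `this`,
            // and no per-class lock objects would be allocated.
            registerAsParallelCapable();
        }

        public static void main(String[] args) {
            DemoClassLoader loader = new DemoClassLoader();
            Object fooLock = loader.getClassLoadingLock("com.example.Foo");
            Object barLock = loader.getClassLoadingLock("com.example.Bar");
            // Distinct lock objects per class name, each retained by
            // the class loader for its lifetime:
            System.out.println(fooLock != loader);  // prints true
            System.out.println(fooLock != barLock); // prints true
        }
    }

If that is what is happening here, heap usage would grow with the
number of distinct class names loaded, since each lock object lives as
long as the class loader that created it.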