Tim Drury <tdrury <at> gmail.com> writes:
>
>
> I'm doing a heap-dump analysis now and I think I might know what the issue
was. The start of this whole problem was the disk-usage plugin hanging our
attempts to view a job in Jenkins
(see
https://issues.jenkins-ci.org/browse/JENKINS-20876), so we disabled that
plugin. After disabling it, Jenkins complained about data in an
older/unreadable format:
>
> You have data stored in an older format and/or unreadable data.
>
> If I click the "Manage" button to delete it, it takes a _long_ time to
display all the disk-usage plugin data - there must be thousands of rows -
but it does display it all eventually. The error shown in each row is:
>
>
> CannotResolveClassException: hudson.plugins.disk_usage.BuildDiskUsageAction
>
>
> If I click "Discard Unreadable Data" at the bottom of the page, I quickly
get a stack trace:
>
>
> javax.servlet.ServletException: java.util.ConcurrentModificationException
>     at org.kohsuke.stapler.Stapler.tryInvoke(Stapler.java:735)
>     at org.kohsuke.stapler.Stapler.invoke(Stapler.java:799)
>     at org.kohsuke.stapler.MetaClass$6.doDispatch(MetaClass.java:239)
>     at org.kohsuke.stapler.NameBasedDispatcher.dispatch(NameBasedDispatcher.java:53)
>     at org.kohsuke.stapler.Stapler.tryInvoke(Stapler.java:685)
>     at org.kohsuke.stapler.Stapler.invoke(Stapler.java:799)
>     at org.kohsuke.stapler.Stapler.invoke(Stapler.java:587)
>     at org.kohsuke.stapler.Stapler.service(Stapler.java:218)
>     at javax.servlet.http.HttpServlet.service(HttpServlet.java:45)
>     at winstone.ServletConfiguration.execute(ServletConfiguration.java:
>     at winstone.RequestDispatcher.forward(RequestDispatcher.java:333)
>     at winstone.RequestDispatcher.doFilter(RequestDispatcher.java:376)
>     at hudson.util.PluginServletFilter$1.doFilter(PluginServletFilter.
>     at net.bull.javamelody.MonitoringFilter.doFilter(MonitoringFilter.
>     at net.bull.javamelody.MonitoringFilter.doFilter(MonitoringFilter.
>     at net.bull.javamelody.PluginMonitoringFilter.doFilter(PluginMonitoringFilter.
>
> and it fails to discard the data. Old-format data isn't usually a problem,
so I brushed off this error. However, here is the dominator_tree of the heap
dump:
>
>
> Class Name                                                                    | Shallow Heap | Retained Heap | Percentage
> ------------------------------------------------------------------------------------------------------------------------
> hudson.diagnosis.OldDataMonitor @ 0x6f9f2c4a0                                 |           24 | 3,278,466,984 |     88.69%
> com.thoughtworks.xstream.converters.SingleValueConverterWrapper @ 0x6f9da8780 |           16 |    13,825,616 |      0.37%
> hudson.model.Hudson @ 0x6f9b8b8e8                                             |          272 |     3,572,400 |      0.10%
> org.eclipse.jetty.webapp.WebAppClassLoader @ 0x6f9a73598                      |           88 |     2,308,760 |      0.06%
> org.apache.commons.jexl.util.introspection.Introspector @ 0x6fbb74710         |           32 |     1,842,392 |      0.05%
> org.kohsuke.stapler.WebApp @ 0x6f9c0ff10                                      |           64 |     1,127,480 |      0.03%
> java.lang.Thread @ 0x7d5c2d138 Handling GET /view/Alle/job/common-translation-main/ : RequestHandlerThread[#105] Thread | 112 | 971,336 | 0.03%
> ------------------------------------------------------------------------------------------------------------------------
>
>
> What is hudson.diagnosis.OldDataMonitor? Could the disk-usage plugin data
be the cause of all my recent OOM errors? If so, how do I get rid of it?
>
> -tim
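[If you do end up having to strip the orphaned disk-usage entries outside the UI, one option is to edit the persisted build records directly and restart Jenkins. A rough Python sketch follows - note the assumptions: XStream escapes "_" as "__" in class names on disk, so the element should be named hudson.plugins.disk__usage.BuildDiskUsageAction, and jobs/*/builds/*/build.xml is the default layout. Back up JENKINS_HOME and verify on one build.xml before running for real; the function names here are made up for illustration.]

```python
import re
from pathlib import Path

# XStream escapes "_" as "__" when persisting class names, so on disk the
# element for hudson.plugins.disk_usage.BuildDiskUsageAction is (assumed):
TAG = "hudson.plugins.disk__usage.BuildDiskUsageAction"

def strip_element(xml_text: str, tag: str) -> str:
    """Remove every <tag ...>...</tag> element (and self-closing <tag/>).

    Non-greedy match across lines; adequate here because these plugin
    action elements do not nest inside one another.
    """
    pattern = re.compile(
        r"<{0}(\s[^>]*)?>.*?</{0}>|<{0}(\s[^>]*)?/>".format(re.escape(tag)),
        re.DOTALL,
    )
    return pattern.sub("", xml_text)

def clean_builds(jenkins_home: str, dry_run: bool = True) -> int:
    """Strip TAG from every build.xml under jobs/*/builds/*; return count changed."""
    touched = 0
    for build_xml in Path(jenkins_home).glob("jobs/*/builds/*/build.xml"):
        text = build_xml.read_text(encoding="utf-8")
        cleaned = strip_element(text, TAG)
        if cleaned != text:
            touched += 1
            if not dry_run:  # dry-run by default: count only, write nothing
                build_xml.write_text(cleaned, encoding="utf-8")
    return touched
```

Run once with dry_run=True to see how many build records would be touched, then again with dry_run=False after taking a backup.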
>
> On Monday, December 9, 2013 9:41:25 AM UTC-5, Tim Drury wrote:
>
> I intended to install 1.532 on Friday, but mistakenly installed 1.539. It
gave us the same OOM exceptions. I'm installing 1.532 now and will -
hopefully - know tomorrow whether it's stable or not. I'm not exactly sure
what's going to happen with our plugins though. Hopefully Jenkins will tell
me if they must be downgraded too.
>
> -tim
>
> On Monday, December 9, 2013 7:45:28 AM UTC-5, Stephen Connolly wrote:
>
> How does the current LTS (1.532.1) hold up?
>
> hudson.model.AbstractProject.getBuildByNumber(AbstractProject.java:1077)
> hudson.maven.MavenBuild.getParentBuild(MavenBuild.java:165)
> hudson.maven.MavenBuild.getWhyKeepLog(MavenBuild.java:273)
> hudson.model.Run.isKeepLog(Run.java:572)
> ...
>
>
> It seems something in "core" Jenkins has changed and not for the better.
Anyone seeing these issues?
>
>
> -tim
>
Hello,
I don't know if this helps, but I had an issue with both the disk-usage
plugin and the Jenkins Job Configuration History Plugin.
The Job Configuration History Plugin was badly configured and kept history of
many things - for example, every time a configuration page was visited.
It seems that a recent update of the disk-usage plugin amplified that effect.
I didn't fully understand why (perhaps the usage for each build and each
configuration was being logged).
I have multi-configuration projects with thousands of configurations.
So after some time Jenkins got slower, and eventually we ran out of inodes (I
am working on Linux).
To fix it, I had to clean the config-history directory and change the Job
Config History settings in the Configure System tab to disable some of the
logging.
Regards,
Robert
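
Cleaning config-history by hand can be scripted. A minimal sketch, assuming the plugin's default layout of one timestamped directory per saved change under $JENKINS_HOME/config-history - the path pattern and the timestamp format used below are assumptions, so verify them against your installation (and take a backup) before deleting anything:

```python
import shutil
from datetime import datetime, timedelta
from pathlib import Path

# Assumed on-disk layout of Job Config History entries:
#   $JENKINS_HOME/config-history/<scope>/<name>/<YYYY-MM-DD_HH-mm-ss>/
STAMP_FMT = "%Y-%m-%d_%H-%M-%S"

def is_older_than(dirname: str, cutoff: datetime) -> bool:
    """True if a timestamped history directory name predates the cutoff."""
    try:
        return datetime.strptime(dirname, STAMP_FMT) < cutoff
    except ValueError:
        return False  # not a timestamp directory; leave it alone

def prune_history(root: str, keep_days: int = 30, dry_run: bool = True) -> int:
    """Delete history directories older than keep_days; return count removed."""
    cutoff = datetime.now() - timedelta(days=keep_days)
    removed = 0
    for entry in Path(root).glob("*/*/*"):
        if entry.is_dir() and is_older_than(entry.name, cutoff):
            removed += 1
            if not dry_run:  # dry-run by default: count only, delete nothing
                shutil.rmtree(entry)
    return removed
```

The plugin's own "Max number of history entries to keep" and "exclude pattern" settings in Configure System are the safer first resort; a script like this is only for clearing a backlog that has already accumulated.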