I've encountered a problem on my Jenkins test instances that are running the 2.204.3 release candidate inside Docker. Some of the folders fail to open when I click them, and I receive the following stack trace on an "Oops" page on one instance:
org.apache.commons.jelly.JellyTagException: jar:file:/var/jenkins_home/war/WEB-INF/lib/jenkins-core-2.204.3-SNAPSHOT.jar!/hudson/model/View/index.jelly:42:43: <st:include> org.apache.commons.jelly.JellyTagException: jar:file:/var/jenkins_home/war/WEB-INF/lib/jenkins-core-2.204.3-SNAPSHOT.jar!/lib/hudson/projectView.jelly:84:48: <j:forEach> java.nio.CharBuffer.rewind()Ljava/nio/CharBuffer;
at org.apache.commons.jelly.impl.TagScript.handleException(TagScript.java:726)
at org.apache.commons.jelly.impl.TagScript.run(TagScript.java:281)
On the other test instance, the message appears only in the console log, not on a Jenkins Oops page. The console log reports:
2020-02-15 16:37:04.540+0000 [id=164374] INFO j.b.MultiBranchProject$BranchIndexing#run: Bugs-Pipeline-Checks/jenkins-bugs-multibranch-pipeline-bitbucket #20200215.093700 branch indexing action completed: SUCCESS in 3.6 sec
2020-02-15 16:39:26.329+0000 [id=39] SEVERE hudson.triggers.SafeTimerTask#run: Timer task com.cloudbees.jenkins.Cleaner@7ed56677 failed
java.lang.NoSuchMethodError: java.nio.CharBuffer.rewind()Ljava/nio/CharBuffer;
at hudson.Util.rawEncode(Util.java:886)
at hudson.model.AbstractItem.getShortUrl(AbstractItem.java:576)
at hudson.model.AbstractItem.getUrl(AbstractItem.java:537)
I don't know why there is a difference in behavior. I don't know if the issue is related to something in my local environment, the Docker image definition that I'm using, or something completely different.
The folders which fail to open are different in the two instances, but the stack traces seem to consistently be associated with CharBuffer.rewind().
Later in the stack trace, it reports:
Caused by: java.lang.NoSuchMethodError: java.nio.CharBuffer.rewind()Ljava/nio/CharBuffer;
at hudson.Util.rawEncode(Util.java:886)
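For context on that missing descriptor: java.nio.CharBuffer.rewind()Ljava/nio/CharBuffer; only exists on Java 9 and later, where CharBuffer overrides Buffer.rewind() with a covariant return type. On Java 8 the method is inherited from Buffer and returns Buffer, so bytecode compiled on JDK 9+ (without --release 8) that references the covariant descriptor fails at runtime on Java 8. A minimal sketch of the pattern (the class name and the Buffer cast are my illustration, not code from Jenkins core):

```java
import java.nio.Buffer;
import java.nio.CharBuffer;

public class RewindDescriptorDemo {
    public static void main(String[] args) {
        CharBuffer buf = CharBuffer.wrap("jenkins");
        buf.get(); // advance the position past the first character

        // When this file is compiled on JDK 9+ (without --release 8),
        // javac emits a call to CharBuffer.rewind()Ljava/nio/CharBuffer;
        // because JDK 9 added a covariant override. Java 8 only has
        // Buffer.rewind()Ljava/nio/Buffer;, so that call fails on a
        // Java 8 runtime with exactly the NoSuchMethodError shown above.
        buf.rewind();

        // Casting to Buffer first pins the Java-8-compatible descriptor:
        buf.get();
        ((Buffer) buf).rewind();

        System.out.println(buf.position()); // prints 0
    }
}
```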
Duplicating the problem:
If others would like to duplicate the problem, they can try the following steps:
- Install git large file support on your Linux computer (or download it from git-lfs.github.com)
- Initialize git lfs with

  $ git lfs install
  Git LFS initialized.

- Clone my docker-lfs repository

  $ git clone https://github.com/MarkEWaite/docker-lfs
  Cloning into 'docker-lfs'...
  Resolving deltas: 100% (12717/12717), done.

- Change to the docker-lfs directory

  $ cd docker-lfs

- Checkout the lts-with-plugins-rc branch

  $ git checkout -b lts-with-plugins-rc -t origin/lts-with-plugins-rc
  Filtering content: 100% (190/190), 244.87 MiB | 5.36 MiB/s, done.
  Branch 'lts-with-plugins-rc' set up to track remote branch 'lts-with-plugins-rc' from 'origin'.
  Switched to a new branch 'lts-with-plugins-rc'

- Build the docker image

  $ docker build -f Dockerfile -t markewaite/lts-rc:2.204.3 .

- Run the docker image

  $ docker run --rm -i -e JENKINS_ADVERTISED_HOSTNAME=`hostname` -e START_QUIET=True -p 8080:8080 -t markewaite/lts-rc:2.204.3

- Connect to the running image with a web browser

  $ python -m webbrowser http://$(hostname):8080/

- Open each of the folders at the root of that Jenkins server. One of them will fail with an Oops screen (it does on all 3 machines where I've tested).
End of symptoms, switching to wild, unjustified speculation:
There is mention on the Jetty project and on Stack Overflow that code compiled with JDK 9 or later may fail in this way when running on Java 8. I'm running Java 8 in both failure cases. References:
Was the Jenkins 2.204.3 release candidate compiled with Java 11?
Thanks!
Mark Waite