Nexus OSS 3.8.0 ERROR: error pulling image configuration: unexpected EOF


Adrian Cisło

Mar 12, 2018, 9:32:12 AM
to Nexus Users
Hello community

We recently upgraded Nexus Repository OSS from 3.0.0 to 3.8.0 and ran into a problem: sometimes an image pull fails with the following error on the client side:

3.36.0.0-SNAPSHOT: Pulling from report-service-rest
9af7279b9dbd: Already exists
31816c948f2f: Already exists
 ......
a382baafda1a: Already exists
5fe038b05711: Pulling fs layer
a0a5c937cc07: Pulling fs layer
ERROR: error pulling image configuration: unexpected EOF

... and on the server side the logs show:

2018-03-12 12:43:03,087+0100 WARN  [qtp743785540-187]  docker-test org.sonatype.nexus.repository.httpbridge.internal.ViewServlet - Failure servicing: GET /repository/docker-hosted/v2/report-service-rest/blobs/sha256:5fe038b057115fd29831bbd64dcb5cbd1e0f3a898e32e98f52df162cc6d82c47
java.nio.channels.ClosedChannelException: null
at org.eclipse.jetty.util.IteratingCallback.close(IteratingCallback.java:427)
at org.eclipse.jetty.server.HttpConnection.onClose(HttpConnection.java:497)
at org.eclipse.jetty.io.ssl.SslConnection.onClose(SslConnection.java:255)
at org.eclipse.jetty.io.SelectorManager.connectionClosed(SelectorManager.java:343)
at org.eclipse.jetty.io.ManagedSelector$DestroyEndPoint.run(ManagedSelector.java:841)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:247)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:140)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131)
at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:382)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
at java.lang.Thread.run(Unknown Source)

It happens intermittently; roughly 1 in 5 attempts fails. When pulling an image manually we can simply retry, but automated builds fail because of this error.
The error occurs on old images as well as on newly created ones, with no obvious pattern. It does not seem to depend on image size or on the day of the week ;) It just happens randomly.
We would rather avoid downgrading back to 3.0.0. We also noticed that the Jetty version changed between 3.0.0 and 3.8.0.

I also ran Nexus OSS 3.8.0 as a fresh installation on a local VM, and there I do not see this problem.
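
To put a number on how often it fails, I am thinking of a quick retry loop along the lines of the sketch below; the registry host and attempt count are placeholders for our environment, not the real values:

#!/usr/bin/env python3
"""Repeatedly pull one image and count failures, to quantify the intermittent error.
IMAGE and ATTEMPTS are placeholders -- adjust them for your registry."""
import subprocess

IMAGE = "nexus.example.com/report-service-rest:3.36.0.0-SNAPSHOT"  # host is a placeholder
ATTEMPTS = 20

failures = 0
for i in range(1, ATTEMPTS + 1):
    # Drop the local copy first so every attempt really goes to the registry.
    subprocess.run(["docker", "rmi", "-f", IMAGE],
                   stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    result = subprocess.run(["docker", "pull", IMAGE],
                            capture_output=True, text=True)
    if result.returncode != 0:
        failures += 1
        last = (result.stderr.strip().splitlines() or ["<no stderr>"])[-1]
        print(f"attempt {i}: FAILED -> {last}")
    else:
        print(f"attempt {i}: ok")

print(f"{failures}/{ATTEMPTS} pulls failed")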

Adrian Cisło

Mar 12, 2018, 10:11:27 AM
to Nexus Users
Also, sometimes the stack trace is longer:
2018-03-12 15:08:04,285+0100 WARN  [qtp743785540-455]  docker-user org.sonatype.nexus.repository.httpbridge.internal.ViewServlet - Failure servicing: GET /repository/docker-hosted/v2/sales-service-demo-webapp/blobs/sha256:a7cc55d3f3abec7780307b8bedafa3a172a024ec67abf79dec3442dcbc7cd6cf
org.eclipse.jetty.io.EofException: null
at org.eclipse.jetty.io.ChannelEndPoint.flush(ChannelEndPoint.java:292)
at org.eclipse.jetty.io.WriteFlusher.flush(WriteFlusher.java:429)
at org.eclipse.jetty.io.WriteFlusher.completeWrite(WriteFlusher.java:384)
at org.eclipse.jetty.io.ChannelEndPoint$3.run(ChannelEndPoint.java:139)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:247)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:140)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131)
at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:382)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
at java.lang.Thread.run(Unknown Source)
Caused by: java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(Unknown Source)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(Unknown Source)
at sun.nio.ch.IOUtil.write(Unknown Source)
at sun.nio.ch.SocketChannelImpl.write(Unknown Source)
at org.eclipse.jetty.io.ChannelEndPoint.flush(ChannelEndPoint.java:270)
... 10 common frames omitted




msu...@sonatype.com

Mar 12, 2018, 10:47:43 AM
to Nexus Users
Hi Adrian,

The errors below indicate that your client side is closing the connection before Nexus has completed the request. This could be due to the client timing out and/or a reverse proxy closing the connection. You could try to narrow this down and determine whether it happens only with certain images, or with images of a certain size (see the sketch below the quoted errors). If you are using a reverse proxy, try checking the reverse proxy logs.

java.nio.channels.ClosedChannelException: null


Caused by: java.io.IOException: Connection reset by peer
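
For example, one rough way to compare sizes is to read the image manifests through the registry's v2 API. A minimal sketch, assuming a repository URL, credentials, and image list that are all placeholders for your environment:

#!/usr/bin/env python3
"""Compare config/layer blob sizes per image via the Docker registry v2 API,
to check whether the failing pulls correlate with image size.
REGISTRY, AUTH and IMAGES are placeholders for your environment."""
import requests

REGISTRY = "https://nexus.example.com/repository/docker-hosted"  # placeholder URL
AUTH = ("user", "password")                                      # placeholder credentials
IMAGES = [("report-service-rest", "3.36.0.0-SNAPSHOT")]          # (name, tag) pairs

MANIFEST_V2 = "application/vnd.docker.distribution.manifest.v2+json"

for name, tag in IMAGES:
    resp = requests.get(f"{REGISTRY}/v2/{name}/manifests/{tag}",
                        headers={"Accept": MANIFEST_V2}, auth=AUTH)
    resp.raise_for_status()
    manifest = resp.json()
    config_size = manifest["config"]["size"]
    layers_total = sum(layer["size"] for layer in manifest["layers"])
    print(f"{name}:{tag}  config={config_size} B  layers={layers_total / 1e6:.1f} MB")

The sizes reported there can then be lined up against which pulls actually fail.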



Regards,
Mahendra

Adrian Cisło

Mar 13, 2018, 4:35:59 AM
to Nexus Users
Hi Mahendra, 
thanks for the answer.

There is no reverse proxy set up. I am not sure what kind of timeout you mean. The Docker client timeout?

About the images: they are around 296 MB, 319 MB, 354 MB, 153 MB. I do not see any correlation...
I also noticed that the failing images come only from the 'hosted' repository, and it does not matter whether the image is old (built some time ago) or was built a few seconds before pulling.

Is it possible to downgrade to 3.7.0? In 3.8.0 Jetty was upgraded from 9.3.x to 9.4.x, and maybe that is the problem.
Or maybe we made some mistake during the upgrade from 3.0.0 to 3.8.0.
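
One more thing I can try, to tell a corrupted blob on the Nexus side apart from a dropped connection: fetch the failing blob directly and re-check its digest. A rough sketch, with placeholder host and credentials; the digest is the one from the log above:

#!/usr/bin/env python3
"""Fetch one blob straight from the registry and re-verify its sha256 digest,
to distinguish a truncated/corrupted blob from a dropped connection.
BASE and AUTH are placeholders; DIGEST is taken from the server log."""
import hashlib
import requests

BASE = "https://nexus.example.com/repository/docker-hosted"  # placeholder URL
AUTH = ("user", "password")                                  # placeholder credentials
NAME = "report-service-rest"
DIGEST = "sha256:5fe038b057115fd29831bbd64dcb5cbd1e0f3a898e32e98f52df162cc6d82c47"

with requests.get(f"{BASE}/v2/{NAME}/blobs/{DIGEST}", auth=AUTH, stream=True) as resp:
    resp.raise_for_status()
    digest = hashlib.sha256()
    received = 0
    for chunk in resp.iter_content(chunk_size=1 << 20):
        digest.update(chunk)
        received += len(chunk)

print(f"received {received} bytes, sha256:{digest.hexdigest()}")
print("digest matches" if f"sha256:{digest.hexdigest()}" == DIGEST else "DIGEST MISMATCH")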

Rich Seddon

Mar 13, 2018, 9:43:13 AM
to Nexus Users

It is not possible to downgrade, don't try to do that.

The upgrade of Jetty to 9.4 is not causing this by itself; we have a very large installed base, and I can assure you that if Docker were broken by Jetty 9.4 we would be well aware of it. That said, it is of course possible that some change in Nexus 3.8, in combination with something specific to your environment, could be causing this. Or it could be something entirely within your environment and not due to Nexus at all. At this point we don't have enough information to know.

I'd suggest raising an issue in the "Dev - Nexus" project at https://issues.sonatype.org so we can get more diagnostic information.

Adrian Cisło

Mar 21, 2018, 6:17:17 AM
to Nexus Users
Hello,

Thanks for input.

I am a bit busy now with some production issues in a different project, and the priority of the Nexus problem has been lowered; nobody in the company is complaining right now.
I will try to upgrade Nexus to the most recent version next week and track the situation over a longer period.

Adrian Cisło

Mar 27, 2018, 10:43:11 AM
to Nexus Users
Hello,

The last time we saw the problem was 2018-03-12; since that date it has not appeared again. It seems to have disappeared on its own... I am a bit confused. As we have no problems with it right now, we will upgrade Nexus to 3.9.0 as soon as operations find some spare time, and we will track the situation after the upgrade.

I think we can close the topic for now; if the situation appears again, I will investigate it further. I am closing the internal ticket as 'no issue'.

Thanks for your help guys!
Cheers!