"Response has already been committed" error followed by out-of-memory problem


Mullai at UM

Jan 11, 2018, 2:28:58 PM
to Dataverse Users Community
Hi,
We are using Dataverse version 4.8.4 with PostgreSQL 9.6.5 and Glassfish 4.1. We keep getting the error "Response has already been committed", and the application runs out of Java memory (3 GB).
Any help is appreciated. Thanks in advance.

Error log below:

[2018-01-11T12:49:34.207-0600] [glassfish 4.1] [WARNING] [] [] [tid: _ThreadID=35 _ThreadName=http-listener-1(10)] [timeMillis: 1515696574207] [levelValue: 900] [[
  Response has already been committed, and further write operations are not permitted. This may result in an IllegalStateException being triggered by the underlying application. To avoid this situation, consider adding a Rule `.when(Direction.isInbound().and(Response.isCommitted())).perform(Lifecycle.abort())`, or figure out where the response is being incorrectly committed and correct the bug in the offending code.]]

[2018-01-11T12:49:34.594-0600] [glassfish 4.1] [WARNING] [] [javax.enterprise.web] [tid: _ThreadID=35 _ThreadName=http-listener-1(10)] [timeMillis: 1515696574594] [levelValue: 900] [[
  org.apache.catalina.core.StandardHostValve@3acd62e9: Exception processing ErrorPage[errorCode=404, location=/404.xhtml]
java.io.IOException: Connection is closed
        at org.glassfish.grizzly.nio.NIOConnection.assertOpen(NIOConnection.java:432)
        at org.glassfish.grizzly.http.io.OutputBuffer.flush(OutputBuffer.java:735)
        at org.apache.catalina.connector.OutputBuffer.doFlush(OutputBuffer.java:291)
        at org.apache.catalina.connector.OutputBuffer.flush(OutputBuffer.java:275)
        at org.apache.catalina.connector.Response.flushBuffer(Response.java:689)
        at org.apache.catalina.connector.ResponseFacade.flushBuffer(ResponseFacade.java:336)
        at org.apache.catalina.core.StandardHostValve.dispatchToErrorPage(StandardHostValve.java:701)
        at org.apache.catalina.core.StandardHostValve.status(StandardHostValve.java:380)
        at org.apache.catalina.core.StandardHostValve.postInvoke(StandardHostValve.java:234)
        at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:418)
        at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:283)
        at com.sun.enterprise.v3.services.impl.ContainerMapper$HttpHandlerCallable.call(ContainerMapper.java:459)
        at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:167)
        at org.glassfish.grizzly.http.server.HttpHandler.runService(HttpHandler.java:206)
        at org.glassfish.grizzly.http.server.HttpHandler.doHandle(HttpHandler.java:180)
        at org.glassfish.grizzly.http.server.HttpServerFilter.handleRead(HttpServerFilter.java:235)
        at org.glassfish.grizzly.filterchain.ExecutorResolver$9.execute(ExecutorResolver.java:119)
        at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:283)
        at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:200)
        at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:132)
        at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:111)
        at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:77)
        at org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(TCPNIOTransport.java:536)
        at org.glassfish.grizzly.strategies.AbstractIOStrategy.fireIOEvent(AbstractIOStrategy.java:112)
        at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.run0(WorkerThreadIOStrategy.java:117)
        at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.access$100(WorkerThreadIOStrategy.java:56)
        at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy$WorkerThreadRunnable.run(WorkerThreadIOStrategy.java:137)
        at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:591)
        at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:571)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Broken pipe
        at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
        at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
        at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
        at sun.nio.ch.IOUtil.write(IOUtil.java:51)
        at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471)
        at org.glassfish.grizzly.nio.transport.TCPNIOUtils.flushByteBuffer(TCPNIOUtils.java:149)
        at org.glassfish.grizzly.nio.transport.TCPNIOUtils.writeCompositeBuffer(TCPNIOUtils.java:87)
        at org.glassfish.grizzly.nio.transport.TCPNIOAsyncQueueWriter.write0(TCPNIOAsyncQueueWriter.java:129)
        at org.glassfish.grizzly.nio.transport.TCPNIOAsyncQueueWriter.write0(TCPNIOAsyncQueueWriter.java:106)
        at org.glassfish.grizzly.nio.AbstractNIOAsyncQueueWriter.write(AbstractNIOAsyncQueueWriter.java:260)
        at org.glassfish.grizzly.nio.AbstractNIOAsyncQueueWriter.write(AbstractNIOAsyncQueueWriter.java:169)
        at org.glassfish.grizzly.nio.AbstractNIOAsyncQueueWriter.write(AbstractNIOAsyncQueueWriter.java:71)
        at org.glassfish.grizzly.nio.transport.TCPNIOTransportFilter.handleWrite(TCPNIOTransportFilter.java:126)
        at org.glassfish.grizzly.filterchain.TransportFilter.handleWrite(TransportFilter.java:191)
        at org.glassfish.grizzly.filterchain.ExecutorResolver$8.execute(ExecutorResolver.java:111)
        at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:283)
        at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:200)
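
The warning in the first log entry suggests its own workaround: an OCPsoft Rewrite rule that aborts the rewrite lifecycle once the response has been committed, instead of attempting further writes. A minimal sketch of how that rule could be registered, assuming the Rewrite servlet module is on the classpath; the class name here is illustrative, not something from the Dataverse codebase:

    import javax.servlet.ServletContext;

    import org.ocpsoft.rewrite.config.Configuration;
    import org.ocpsoft.rewrite.config.ConfigurationBuilder;
    import org.ocpsoft.rewrite.config.Direction;
    import org.ocpsoft.rewrite.servlet.config.HttpConfigurationProvider;
    import org.ocpsoft.rewrite.servlet.config.Lifecycle;
    import org.ocpsoft.rewrite.servlet.config.Response;

    // Hypothetical provider wiring up the exact Rule the warning suggests.
    // Register it by listing this class name in
    // META-INF/services/org.ocpsoft.rewrite.config.ConfigurationProvider.
    public class CommittedResponseGuardProvider extends HttpConfigurationProvider {

        @Override
        public Configuration getConfiguration(ServletContext context) {
            return ConfigurationBuilder.begin()
                .addRule()
                // For inbound requests whose response is already committed...
                .when(Direction.isInbound().and(Response.isCommitted()))
                // ...stop the rewrite lifecycle instead of writing again.
                .perform(Lifecycle.abort());
        }

        @Override
        public int priority() {
            return 0; // run early, before other rewrite rules
        }
    }

Note this only silences the symptom; the warning's second suggestion, finding where the response is committed prematurely, is the real fix.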



Cheers,
-Mullai.

Philip Durbin

Jan 11, 2018, 2:36:59 PM
to dataverse...@googlegroups.com
Interesting. Do you have any advice on how developers could reproduce this error on their laptops? You are welcome to create an issue about this at https://github.com/IQSS/dataverse/issues

Thanks,

Phil

Mullai

Jan 11, 2018, 3:15:39 PM
to dataverse...@googlegroups.com
Hi Philip,
This happens constantly while the server is up: it takes about 24 hours to run out of the 2 GB of memory, which I have now increased to 3 GB, and it still runs out. It happens even when no users are browsing, so I think it is triggered by Solr-related activity, though I could be wrong.
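
One way to test that theory instead of guessing is to have the JVM write a heap dump the next time it runs out of memory, then open the dump in a tool such as Eclipse Memory Analyzer. A minimal sketch using asadmin against the running domain; the /dumps path is an illustrative assumption, and the directory must already exist:

    # Write a .hprof heap dump on OutOfMemoryError.
    # Colons in JVM options must be backslash-escaped for asadmin.
    asadmin create-jvm-options '-XX\:+HeapDumpOnOutOfMemoryError'
    asadmin create-jvm-options '-XX\:HeapDumpPath=/dumps'
    # Restart so the new options take effect (assuming the default domain name):
    asadmin restart-domain domain1

The dominator tree in the dump should show whether the retained memory really belongs to Solr-related objects or to something else.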

As you suggested I will post an issue.

Cheers,
-Mullai.

Don Sizemore

Jan 11, 2018, 3:27:48 PM
to dataverse...@googlegroups.com
Hello,

I wouldn't give any Dataverse VM (running Glassfish, Postgres and Solr on the same host) less than 16GB of RAM. Now that I think about it, I tried to save money on a test Dataverse in AWS by choosing the 16GB-sized VM, and the Linux out-of-memory zapper routinely kills Solr. Bump it up to 24GB, if you're able?
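
For the Glassfish heap specifically, the ceiling is a JVM option on the domain and can be raised with asadmin. A sketch with illustrative values; check what list-jvm-options actually reports before deleting anything:

    # Show the current JVM options, including the -Xmx heap ceiling:
    asadmin list-jvm-options
    # Swap the old ceiling for a larger one (both values illustrative):
    asadmin delete-jvm-options '-Xmx3g'
    asadmin create-jvm-options '-Xmx6g'
    asadmin restart-domain domain1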

Donald

Mullai

Jan 12, 2018, 11:19:08 AM
to dataverse...@googlegroups.com
Hi Donald,
We have Postgres running in a different environment; just Glassfish and Solr are on the same VM.
24 GB seems massive, but I will certainly increase it from the current 4 GB VM (3 GB Java max heap).
Thanks for the reply.

Cheers,
-Mullai.