Per-Client response sequencing for WebSocket using ORM


James Olsen

Jun 7, 2020, 8:29:56 PM
to Quarkus Development mailing list
I have a WebSocket ServerEndpoint that needs to access the database inside a message handler.  When trying to do so it receives the following error:

[vert.x-eventloop-thread-0] ... java.lang.IllegalStateException: You have attempted to perform a blocking operation on a IO thread. This is not allowed, as blocking the IO thread will cause major performance issues with your application. If you want to perform blocking EntityManager operations make sure you are doing it from a worker thread.

The issue with switching to a worker thread is that we need to maintain per-client response sequencing, i.e. for a given client we should respond to requests in the order that we received them.

There are two simple options:
  • We can trick Quarkus via Executor.submit().get(), i.e. block the thread, but in a way the ORM cannot detect.  This is cheating.
  • We can serialise all responses through a single worker.  This does not scale.
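The first option can be sketched in plain JDK code (illustrative only; the class and method names here are invented, and this shows the anti-pattern being described, not a recommendation):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of the "cheat": run the blocking JPA work on a worker pool and
// block the calling (IO) thread until it finishes. The ORM sees a worker
// thread, so its check passes, but the IO thread is still blocked, which
// is exactly why this is cheating.
public class BlockingTrick {
    private static final ExecutorService WORKERS = Executors.newFixedThreadPool(8);

    public static <T> T runBlocking(Callable<T> blockingDbCall) throws Exception {
        Future<T> f = WORKERS.submit(blockingDbCall); // ORM runs on a worker
        return f.get(); // the caller (an IO thread) blocks anyway
    }
}
```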
Shouldn't Quarkus be handling this for me like it does for REST calls? i.e. using a vert.x worker thread and ensuring messages are delivered in order for a given client (as I presume the WebSocket spec must require).

Stuart Douglas

Jun 8, 2020, 8:01:30 PM
to James Olsen, Quarkus Development mailing list
Looks like this option was not wired up: https://github.com/quarkusio/quarkus/pull/9873

As a workaround you could use io.undertow.websockets.jsr.OrderedExecutor directly; it is an executor that applies ordering semantics over an underlying executor. If you just create one per endpoint instance it should give you the semantics you are after.
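For reference, the semantics being described (tasks submitted through the same wrapper run one at a time, in submission order, while the underlying pool stays shared) can be sketched in plain JDK code. This is an illustrative stand-in, not the actual io.undertow.websockets.jsr.OrderedExecutor source:

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.Executor;

// Sketch of an ordering executor: tasks submitted to one instance run
// sequentially, in submission order, on a shared delegate pool. Creating
// one instance per endpoint serialises that client's messages without
// funnelling the whole application through a single worker thread.
public class OrderedExecutorSketch implements Executor {
    private final Executor delegate;
    private final Queue<Runnable> queue = new ArrayDeque<>();
    private boolean running; // guarded by 'queue'

    public OrderedExecutorSketch(Executor delegate) {
        this.delegate = delegate;
    }

    @Override
    public void execute(Runnable task) {
        synchronized (queue) {
            queue.add(task);
            if (running) {
                return; // the active drain loop will pick it up
            }
            running = true;
        }
        delegate.execute(this::drain);
    }

    private void drain() {
        for (;;) {
            Runnable next;
            synchronized (queue) {
                next = queue.poll();
                if (next == null) {
                    running = false;
                    return;
                }
            }
            try {
                next.run(); // tasks for this instance never overlap
            } catch (RuntimeException e) {
                // swallow so one failing task does not stall the queue
            }
        }
    }
}
```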

Stuart
 

--
You received this message because you are subscribed to the Google Groups "Quarkus Development mailing list" group.
To unsubscribe from this group and stop receiving emails from it, send an email to quarkus-dev...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/quarkus-dev/0a76bd05-5907-46de-81ab-39729ac8d1a4o%40googlegroups.com.

Željko Trogrlić

Jun 9, 2020, 3:43:45 AM
to Quarkus Development mailing list
You can also use Mutiny's .emitOn(Infrastructure.getDefaultWorkerPool())

James Olsen

Jun 11, 2020, 12:18:07 AM
to Quarkus Development mailing list
Stuart, Thanks.  We have your workaround in place.  Will stick with that until 1.5.1 is ready.

James Olsen

Jun 14, 2020, 9:18:06 PM
to Quarkus Development mailing list
Stuart,  I've upgraded to 1.5.1.Final, added quarkus.websocket.dispatch-to-worker=true and removed the io.undertow.websockets.jsr.OrderedExecutor work-around, however things still don't work as expected.  I'm seeing the following error:

2020-06-15 13:12:34,834 WARN  [com.redacted.streaming.StreamingApiEndPoint] 'executor-thread-2' #onError io.undertow.websockets.jsr.UndertowSession@5d3def5e: javax.enterprise.context.ContextNotActiveException: interface javax.enterprise.context.RequestScoped
at io.quarkus.hibernate.orm.runtime.RequestScopedEntityManagerHolder_ClientProxy.arc$delegate(RequestScopedEntityManagerHolder_ClientProxy.zig:68)
at io.quarkus.hibernate.orm.runtime.RequestScopedEntityManagerHolder_ClientProxy.getOrCreateEntityManager(RequestScopedEntityManagerHolder_ClientProxy.zig:220)
at io.quarkus.hibernate.orm.runtime.entitymanager.TransactionScopedEntityManager.getEntityManager(TransactionScopedEntityManager.java:77)
at io.quarkus.hibernate.orm.runtime.entitymanager.TransactionScopedEntityManager.getCriteriaBuilder(TransactionScopedEntityManager.java:479)
at io.quarkus.hibernate.orm.runtime.entitymanager.ForwardingEntityManager.getCriteriaBuilder(ForwardingEntityManager.java:252)
at com.redacted.streaming.query.ApiKeyQuery.findByKey(ApiKeyQuery.java:25)
at com.redacted.streaming.query.ApiKeyQuery_Subclass.findByKey$$superaccessor1(ApiKeyQuery_Subclass.zig:211)
at com.redacted.streaming.query.ApiKeyQuery_Subclass$$function$$1.apply(ApiKeyQuery_Subclass$$function$$1.zig:33)
at io.quarkus.arc.impl.AroundInvokeInvocationContext.proceed(AroundInvokeInvocationContext.java:54)
at io.quarkus.narayana.jta.runtime.interceptor.TransactionalInterceptorBase.invokeInNoTx(TransactionalInterceptorBase.java:253)
at io.quarkus.narayana.jta.runtime.interceptor.TransactionalInterceptorSupports.doIntercept(TransactionalInterceptorSupports.java:32)
at io.quarkus.narayana.jta.runtime.interceptor.TransactionalInterceptorBase.intercept(TransactionalInterceptorBase.java:53)
at io.quarkus.narayana.jta.runtime.interceptor.TransactionalInterceptorSupports.intercept(TransactionalInterceptorSupports.java:26)
at io.quarkus.narayana.jta.runtime.interceptor.TransactionalInterceptorSupports_Bean.intercept(TransactionalInterceptorSupports_Bean.zig:339)
at io.quarkus.arc.impl.InterceptorInvocation.invoke(InterceptorInvocation.java:41)
at io.quarkus.arc.impl.AroundInvokeInvocationContext.perform(AroundInvokeInvocationContext.java:41)
at io.quarkus.arc.impl.InvocationContexts.performAroundInvoke(InvocationContexts.java:32)
at com.redacted.streaming.query.ApiKeyQuery_Subclass.findByKey(ApiKeyQuery_Subclass.zig:168)
at com.redacted.streaming.query.ApiKeyQuery_ClientProxy.findByKey(ApiKeyQuery_ClientProxy.zig:185)
at com.redacted.streaming.StreamingApiEndPoint.onMessage(StreamingApiEndPoint.java:85)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at io.undertow.websockets.jsr.annotated.BoundMethod.invoke(BoundMethod.java:87)
at io.undertow.websockets.jsr.annotated.AnnotatedEndpoint$2.onMessage(AnnotatedEndpoint.java:142)
at io.undertow.websockets.jsr.FrameHandler$5.run(FrameHandler.java:328)
at io.undertow.websockets.jsr.ServerWebSocketContainer$1.call(ServerWebSocketContainer.java:166)
at io.undertow.websockets.jsr.ServerWebSocketContainer$1.call(ServerWebSocketContainer.java:163)
at io.undertow.servlet.core.ContextClassLoaderSetupAction$1.call(ContextClassLoaderSetupAction.java:43)
at io.undertow.websockets.jsr.ServerWebSocketContainer.invokeEndpointMethod(ServerWebSocketContainer.java:645)
at io.undertow.websockets.jsr.ServerWebSocketContainer$8.run(ServerWebSocketContainer.java:627)
at io.undertow.websockets.jsr.OrderedExecutor$ExecutorTask.run(OrderedExecutor.java:67)
at io.quarkus.runtime.CleanableExecutor$CleaningRunnable.run(CleanableExecutor.java:231)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at org.jboss.threads.ContextClassLoaderSavingRunnable.run(ContextClassLoaderSavingRunnable.java:35)
at org.jboss.threads.EnhancedQueueExecutor.safeRun(EnhancedQueueExecutor.java:2046)
at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.doRunTask(EnhancedQueueExecutor.java:1578)
at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1452)
at org.jboss.threads.DelegatingRunnable.run(DelegatingRunnable.java:29)
at org.jboss.threads.ThreadLocalResettingRunnable.run(ThreadLocalResettingRunnable.java:29)
at java.base/java.lang.Thread.run(Thread.java:834)
at org.jboss.threads.JBossThread.run(JBossThread.java:479)

com.redacted.streaming.StreamingApiEndPoint is a @ServerEndpoint and com.redacted.streaming.query.ApiKeyQuery is an @ApplicationScoped bean.

If I change com.redacted.streaming.query.ApiKeyQuery to be @RequestScoped, the stack simply changes to:

2020-06-15 13:01:43,840 WARN  [com.redacted.streaming.StreamingApiEndPoint] 'executor-thread-3' #onError io.undertow.websockets.jsr.UndertowSession@7cd2b48: javax.enterprise.context.ContextNotActiveException: interface javax.enterprise.context.RequestScoped
at com.redacted.streaming.query.ApiKeyQuery_ClientProxy.arc$delegate(ApiKeyQuery_ClientProxy.zig:68)
at com.redacted.streaming.query.ApiKeyQuery_ClientProxy.findByKey(ApiKeyQuery_ClientProxy.zig:189)
at com.redacted.streaming.StreamingApiEndPoint.onMessage(StreamingApiEndPoint.java:85)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at io.undertow.websockets.jsr.annotated.BoundMethod.invoke(BoundMethod.java:87)
at io.undertow.websockets.jsr.annotated.AnnotatedEndpoint$2.onMessage(AnnotatedEndpoint.java:142)
at io.undertow.websockets.jsr.FrameHandler$5.run(FrameHandler.java:328)
at io.undertow.websockets.jsr.ServerWebSocketContainer$1.call(ServerWebSocketContainer.java:166)
at io.undertow.websockets.jsr.ServerWebSocketContainer$1.call(ServerWebSocketContainer.java:163)
at io.undertow.servlet.core.ContextClassLoaderSetupAction$1.call(ContextClassLoaderSetupAction.java:43)
at io.undertow.websockets.jsr.ServerWebSocketContainer.invokeEndpointMethod(ServerWebSocketContainer.java:645)
at io.undertow.websockets.jsr.ServerWebSocketContainer$8.run(ServerWebSocketContainer.java:627)
at io.undertow.websockets.jsr.OrderedExecutor$ExecutorTask.run(OrderedExecutor.java:67)
at io.quarkus.runtime.CleanableExecutor$CleaningRunnable.run(CleanableExecutor.java:231)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at org.jboss.threads.ContextClassLoaderSavingRunnable.run(ContextClassLoaderSavingRunnable.java:35)
at org.jboss.threads.EnhancedQueueExecutor.safeRun(EnhancedQueueExecutor.java:2046)
at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.doRunTask(EnhancedQueueExecutor.java:1578)
at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1452)
at org.jboss.threads.DelegatingRunnable.run(DelegatingRunnable.java:29)
at org.jboss.threads.ThreadLocalResettingRunnable.run(ThreadLocalResettingRunnable.java:29)
at java.base/java.lang.Thread.run(Thread.java:834)
at org.jboss.threads.JBossThread.run(JBossThread.java:479)

Am I missing something?

Stuart Douglas

Jun 14, 2020, 9:28:47 PM
to James Olsen, Quarkus Development mailing list
Can you add @ActivateRequestContext to the endpoint? And also open an issue.

This is definitely a bug, but this should work around it.

Stuart


James Olsen

Jun 14, 2020, 11:39:45 PM
to Quarkus Development mailing list
Adding @ActivateRequestContext is an effective work-around.  Raised https://github.com/quarkusio/quarkus/issues/9994 for the underlying issue.

James Olsen

Jun 22, 2020, 8:47:28 PM
to Quarkus Development mailing list
Stuart,

We have been running with 1.5.1.Final, quarkus.websocket.dispatch-to-worker=true and @ActivateRequestContext for several days now and spotted a new issue.  The connection count is creeping up.  There is an imbalance between the number of OnOpen and OnClose events.  We still have ping/pong in place to disconnect slow/zombie consumers, although this only checks Sessions that are open.  So I suspect it's a similar issue to one we had before, where the Session is marked as closed but there is no OnClose event.

I would speculate that some error conditions are not being propagated from the worker thread?

We do see an OnClose for these:

2020-06-22 23:09:35,475 WARN  [io.netty.channel.DefaultChannelPipeline] 'vert.x-eventloop-thread-4' An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.: java.io.IOException: Connection reset by peer
        at java.base/sun.nio.ch.FileDispatcherImpl.read0(Native Method)
        at java.base/sun.nio.ch.SocketDispatcher.read(Unknown Source)
        at java.base/sun.nio.ch.IOUtil.readIntoNativeBuffer(Unknown Source)
        at java.base/sun.nio.ch.IOUtil.read(Unknown Source)
        at java.base/sun.nio.ch.IOUtil.read(Unknown Source)
        at java.base/sun.nio.ch.SocketChannelImpl.read(Unknown Source)
        at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253)
        at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133)
        at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350)
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
        at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
        at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
        at java.base/java.lang.Thread.run(Unknown Source)
2020-06-22 23:09:35,475 INFO  [com.redacted.StreamingApiEndPoint] 'executor-thread-38' #onClose io.undertow.websockets.jsr.UndertowSession@29eaee00 CloseReason[1001]

But we don't see an OnClose for these (Pong Timeout calls Session.close on a Session that is open.  Session.close does not throw an IOException.  Sometimes Pong Timeout handling works fine.):

2020-06-22 14:21:34,009 INFO  [com.redacted.StreamingMetrics] 'pool-4-thread-1' Pong Timeout, closing io.undertow.websockets.jsr.UndertowSession@ae0e4c
2020-06-22 14:21:34,010 ERROR [org.jboss.threads.errors] 'executor-thread-4921' Thread Thread[executor-thread-4921,5,executor] threw an uncaught exception: java.lang.RuntimeException: java.lang.IllegalStateException: Instance already destroyed
        at io.undertow.websockets.jsr.ServerWebSocketContainer.invokeEndpointMethod(ServerWebSocketContainer.java:647)
        at io.undertow.websockets.jsr.ServerWebSocketContainer$8.run(ServerWebSocketContainer.java:627)
        at org.jboss.threads.ContextClassLoaderSavingRunnable.run(ContextClassLoaderSavingRunnable.java:35)
        at org.jboss.threads.EnhancedQueueExecutor.safeRun(EnhancedQueueExecutor.java:2046)
        at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.doRunTask(EnhancedQueueExecutor.java:1578)
        at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1452)
        at org.jboss.threads.DelegatingRunnable.run(DelegatingRunnable.java:29)
        at org.jboss.threads.ThreadLocalResettingRunnable.run(ThreadLocalResettingRunnable.java:29)
        at java.base/java.lang.Thread.run(Unknown Source)
        at org.jboss.threads.JBossThread.run(JBossThread.java:479)
Caused by: java.lang.IllegalStateException: Instance already destroyed
        at io.quarkus.arc.impl.InstanceHandleImpl.get(InstanceHandleImpl.java:55)
        at io.quarkus.arc.runtime.ArcRecorder$2$1$1.get(ArcRecorder.java:88)
        at io.quarkus.undertow.runtime.UndertowDeploymentRecorder$7$1$1.getInstance(UndertowDeploymentRecorder.java:449)
        at io.undertow.websockets.jsr.annotated.AnnotatedEndpoint$5.run(AnnotatedEndpoint.java:229)
        at io.undertow.websockets.jsr.ServerWebSocketContainer$1.call(ServerWebSocketContainer.java:166)
        at io.undertow.websockets.jsr.ServerWebSocketContainer$1.call(ServerWebSocketContainer.java:163)
        at io.undertow.servlet.core.ContextClassLoaderSetupAction$1.call(ContextClassLoaderSetupAction.java:43)
        at io.undertow.websockets.jsr.ServerWebSocketContainer.invokeEndpointMethod(ServerWebSocketContainer.java:645)
        ... 9 more


We will probably have to revert to the DIY io.undertow.websockets.jsr.OrderedExecutor work-around.

We do have some additional errors that are of our own making.  These are due to our ErrorLoggingSendHandler still trying to decode Thorntail exceptions rather than Quarkus exceptions so that it can log just the interesting parts.  But these always occur just after an OnClose, so they are probably just queued async writes being discarded.  Is it possible that the failure of the SendHandler causes an issue upstream?  We'll get this fixed anyway.

2020-06-23 00:11:17,214 WARN  [io.netty.util.concurrent.DefaultPromise] 'vert.x-eventloop-thread-14' An exception was thrown by io.undertow.websockets.jsr.SendHandlerAdapter.operationComplete(): java.lang.NullPointerException
        at com.redacted.ErrorLoggingSendHandler.onResult(ErrorLoggingSendHandler.java:28)
        at io.undertow.websockets.jsr.SendHandlerAdapter.operationComplete(SendHandlerAdapter.java:45)
        at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577)
        at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:551)
        at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490)
        at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615)
        at io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:608)
        at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117)
        at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:993)
        at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:865)
        at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1367)
        at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717)
        at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:709)
        at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:792)
        at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:702)
        at datadog.trace.instrumentation.netty41.server.HttpServerResponseTracingHandler.write(HttpServerResponseTracingHandler.java:21)
        at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717)
        at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:709)
        at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:792)
        at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:702)
        at io.netty.handler.timeout.IdleStateHandler.write(IdleStateHandler.java:302)
        at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717)
        at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:709)
        at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:792)
        at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:702)
        at io.netty.handler.codec.MessageToMessageEncoder.write(MessageToMessageEncoder.java:110)
        at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717)
        at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:764)
        at io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1071)
        at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
        at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:497)
        at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
        at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
        at java.base/java.lang.Thread.run(Unknown Source)


Unfortunately the total count of all these interesting events is still less than the OnOpen/OnClose discrepancy, so we may still be missing something else.

Stuart Douglas

Jun 23, 2020, 12:58:49 AM
to James Olsen, Quarkus Development mailing list

My theory is that the attempt to write the close frame may be failing, which stops the method from executing further.

Stuart
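The failure mode this theory describes (cleanup silently skipped when writing the close frame to a dead peer throws) can be sketched with a try/finally. This is a hedged illustration with invented names (SafeCloser is not an Undertow or Quarkus API), not the actual framework code:

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: if session.close() throws once the peer is gone, any bookkeeping
// placed after the call never runs and the open-connection count leaks.
// Putting the bookkeeping in a finally block makes it unconditional.
public class SafeCloser {
    private final AtomicInteger openCount = new AtomicInteger();

    public void onOpen() {
        openCount.incrementAndGet();
    }

    public void closeSession(Closeable session) {
        try {
            session.close(); // may fail while writing the close frame
        } catch (IOException | RuntimeException e) {
            // peer already gone; nothing more we can send
        } finally {
            openCount.decrementAndGet(); // must not be skipped on failure
        }
    }

    public int openConnections() {
        return openCount.get();
    }
}
```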


James Olsen

Jun 23, 2020, 5:26:48 PM
to Quarkus Development mailing list
That looks like a good change but I don't see how it explains the difference in behaviour between 1.5.0 with DIY OrderedExecutor and 1.5.1 with quarkus.websocket.dispatch-to-worker=true.  We observed this problem immediately after making that migration but had run with the earlier config for 5 days with no issues.

James Olsen

Jun 23, 2020, 5:48:59 PM
to Quarkus Development mailing list
I'll trial 1.5.2 with DIY OrderedExecutor and quarkus.websocket.dispatch-to-worker=false and publish results.

Stuart Douglas

Jun 23, 2020, 8:33:27 PM
to James Olsen, Quarkus Development mailing list
It might be a timing issue of some kind; dispatch-to-worker also means that the close0 method is called on the worker thread, whereas with a manual dispatch this would not be the case.

Stuart


James Olsen

Jun 25, 2020, 12:10:12 AM
to Quarkus Development mailing list
Have been running 1.5.1 (not 1.5.2 as suggested earlier) with the DIY OrderedExecutor and quarkus.websocket.dispatch-to-worker=false for several hours now with no issues.  So that config performs fine with both 1.5.0 and 1.5.1.
This build also confirms that our self-inflicted SendHandler errors are not part of the problem as I left them in to check.

With 1.5.1 and quarkus.websocket.dispatch-to-worker=true, we had around a 20% leakage rate.  So that could well be down to a race condition with the close0.
