WildFly 26.1.3 increasing number of IO Threads stuck


Sascha Janz

Apr 30, 2026, 7:51:38 AM
to WildFly
After starting WildFly, the number of threads stuck in a stack trace like this

"default task-12" #443 [7720] prio=5 os_prio=0 cpu=1671.88ms elapsed=1089.51s tid=0x000001f4de215c10 nid=7720 runnable  [0x0000007ea0afd000]
   java.lang.Thread.State: RUNNABLE
at sun.nio.ch.WEPoll.wait(java.base@21.0.6/Native Method)
at sun.nio.ch.WEPollSelectorImpl.doSelect(java.base@21.0.6/WEPollSelectorImpl.java:114)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@21.0.6/SelectorImpl.java:130)
- locked <0x000001e9809499e0> (a sun.nio.ch.Util$2)
- locked <0x000001e980949728> (a sun.nio.ch.WEPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@21.0.6/SelectorImpl.java:147)
at org.xnio.nio.SelectorUtils.await(SelectorUtils.java:51)
at org.xnio.nio.NioSocketConduit.awaitWritable(NioSocketConduit.java:233)
at io.undertow.protocols.ssl.SslConduit.awaitWritable(SslConduit.java:482)
at org.xnio.conduits.AbstractSinkConduit.awaitWritable(AbstractSinkConduit.java:66)
at io.undertow.conduits.ChunkedStreamSinkConduit.awaitWritable(ChunkedStreamSinkConduit.java:398)
at org.xnio.conduits.ConduitStreamSinkChannel.awaitWritable(ConduitStreamSinkChannel.java:134)
at io.undertow.channels.DetachableStreamSinkChannel.awaitWritable(DetachableStreamSinkChannel.java:87)
at io.undertow.server.HttpServerExchange$WriteDispatchChannel.awaitWritable(HttpServerExchange.java:2133)
at io.undertow.servlet.spec.ServletOutputStreamImpl.writeBufferBlocking(ServletOutputStreamImpl.java:614)
at io.undertow.servlet.spec.ServletOutputStreamImpl.write(ServletOutputStreamImpl.java:153)
at org.apache.cxf.io.AbstractWrappedOutputStream.write(AbstractWrappedOutputStream.java:51)
at javax.activation.DataHandler.writeTo(DataHandler.java:292)
at org.apache.cxf.attachment.AttachmentSerializer.writeAttachments(AttachmentSerializer.java:323)
at org.apache.cxf.interceptor.AttachmentOutInterceptor$AttachmentOutEndingInterceptor.handleMessage(AttachmentOutInterceptor.java:126)
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:308)
- locked <0x000001ebc1002328> (a org.apache.cxf.phase.PhaseInterceptorChain)
at org.apache.cxf.interceptor.OutgoingChainInterceptor.handleMessage(OutgoingChainInterceptor.java:90)
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:308)
- locked <0x000001ebc10024b0> (a org.apache.cxf.phase.PhaseInterceptorChain)
at org.apache.cxf.transport.ChainInitiationObserver.onMessage(ChainInitiationObserver.java:121)
at org.apache.cxf.transport.http.AbstractHTTPDestination.invoke(AbstractHTTPDestination.java:265)
at org.jboss.wsf.stack.cxf.RequestHandlerImpl.handleHttpRequest(RequestHandlerImpl.java:110)
at org.jboss.wsf.stack.cxf.transport.ServletHelper.callRequestHandler(ServletHelper.java:134)
at org.jboss.wsf.stack.cxf.CXFServletExt.invoke(CXFServletExt.java:88)
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.handleRequest(AbstractHTTPServlet.java:304)
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.doPost(AbstractHTTPServlet.java:217)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:523)
at org.jboss.wsf.stack.cxf.CXFServletExt.service(CXFServletExt.java:136)
at org.jboss.wsf.spi.deployment.WSFServlet.service(WSFServlet.java:140)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:590)
at io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:74)
at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:129)
at io.opentracing.contrib.jaxrs2.server.SpanFinishingFilter.doFilter(SpanFinishingFilter.java:52)
at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:67)
at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)

increases over time. After 15 minutes, all of our IO threads are stuck like this, so our application becomes unavailable.

I think slow or aborted requests are the reason.

Any suggestions on what to do?
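
One possible mitigation (untested here, and an assumption on my part) would be to set a write timeout on the Undertow listener, so blocked writes eventually fail instead of hanging forever. A minimal jboss-cli sketch, assuming the default server and listener names from a stock standalone.xml (the value is in milliseconds; whether this covers the SSL awaitWritable path above would need testing):

/subsystem=undertow/server=default-server/https-listener=https:write-attribute(name=write-timeout, value=60000)
reload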

Regards 
Sascha

Sascha Janz

Apr 30, 2026, 10:26:32 AM
to WildFly
With the following simple client I could reproduce this behaviour.


The client opens the URL and then either ignores the response or does not read it completely. The unread response data eventually fills the socket buffers, so the server-side write blocks in awaitWritable.


import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SlowClient {

    public static void main(String[] args) throws Exception {

        for (int i = 0; i < 200; i++) {

            // a fresh client per iteration, so every request gets its own connection
            HttpClient client = HttpClient.newBuilder()
                                          .version(HttpClient.Version.HTTP_1_1)
                                          .build();

            HttpRequest request = HttpRequest.newBuilder()
                                             .uri(new URI("https://test-211/IO/state.jsp"))
                                             .POST(HttpRequest.BodyPublishers.ofString("<xml>...</xml>"))
                                             .build();

            // receive the status line and headers, but deliberately never read the body stream
            HttpResponse<InputStream> response =
                    client.send(request, HttpResponse.BodyHandlers.ofInputStream());

            System.out.println("Response started; NOT reading any further.");
            /* uncomment to drain the response normally:
            System.out.println("Headers: " + response.headers());
            try (InputStream in = response.body();
                 ByteArrayOutputStream out = new ByteArrayOutputStream()) {

                in.transferTo(out);

                String body = out.toString("UTF-8");

                System.out.println("Status: " + response.statusCode());
                System.out.println("Body:\n" + body);
            }
            */
        }

        // keep the JVM alive so the unread connections stay open
        Thread.sleep(1_000_000);
    }
}
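
While the client runs, the stuck threads can be counted from a thread dump; a minimal sketch using the standard JDK jcmd tool (replace <pid> with the WildFly process id):

jcmd <pid> Thread.print | grep -c awaitWritable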

Sascha Janz

5:18 AM
to WildFly
I think we found the problem.

We had the following IO subsystem configuration:

<subsystem xmlns="urn:jboss:domain:io:3.0">
    <worker name="default" io-threads="100" task-core-threads="25"/>
    <buffer-pool name="default"/>
</subsystem>

This seems wrong: io-threads (100) is higher than the number of worker task threads (25).

After changing it to

<subsystem xmlns="urn:jboss:domain:io:3.0">
    <worker name="default" io-threads="20" task-core-threads="75" task-max-threads="500"/>
    <buffer-pool name="default"/>
</subsystem>

the test client no longer reproduces the problem. The IO threads delegate the work to the task threads, so no blocking IO situation occurs.
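
For reference, the same change can be applied to a running server with jboss-cli (attribute names match the XML above; a reload is required):

/subsystem=io/worker=default:write-attribute(name=io-threads, value=20)
/subsystem=io/worker=default:write-attribute(name=task-core-threads, value=75)
/subsystem=io/worker=default:write-attribute(name=task-max-threads, value=500)
reload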