[Wildfly 27] Performance problem with >5k threads


Michael

Aug 28, 2023, 12:10:59 PM
to WildFly
We're running WildFly 27 with a couple of EAR and WAR modules deployed. Every couple of weeks or so we see the WildFly JVM hit 100% CPU for no apparent reason. It won't recover until WildFly is restarted.

Since we have JFR creating a continuous recording, I managed to create a JFR dump the last time the JVM was at 100% load. I can see that there are 5824 threads, of which 4714 are named 'blocking-thread--p8-t1' (with varying numbers) and 1610 are named 'non-blocking-thread--p76-t1' (also with varying numbers). That doesn't seem right.

The stack traces of all those threads seem to be more or less the same:

"non-blocking-thread--p76-t1" #766 daemon prio=5 os_prio=0 cpu=6410.39ms elapsed=177251.53s tid=0x000056067d1503b0 nid=0x19d0be waiting on condition  [0x00007f6c47f7e000]
   java.lang.Thread.State: WAITING (parking)
    at jdk.internal.misc.Unsafe.park(java.base@17.0.8/Native Method)
    - parking to wait for  <0x00000004ba320d30> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
    at java.util.concurrent.locks.LockSupport.park(java.base@17.0.8/LockSupport.java:341)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionNode.block(java.base@17.0.8/AbstractQueuedSynchronizer.java:506)
    at java.util.concurrent.ForkJoinPool.unmanagedBlock(java.base@17.0.8/ForkJoinPool.java:3465)
    at java.util.concurrent.ForkJoinPool.managedBlock(java.base@17.0.8/ForkJoinPool.java:3436)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(java.base@17.0.8/AbstractQueuedSynchronizer.java:1623)
    at java.util.concurrent.LinkedBlockingQueue.take(java.base@17.0.8/LinkedBlockingQueue.java:435)
    at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@17.0.8/ThreadPoolExecutor.java:1062)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@17.0.8/ThreadPoolExecutor.java:1122)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@17.0.8/ThreadPoolExecutor.java:635)
    at java.lang.Thread.run(java.base@17.0.8/Thread.java:833)
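For what it's worth, the stack itself says the worker is idle: it is parked inside ThreadPoolExecutor.getTask() waiting for new work on the queue. In a plain JDK ThreadPoolExecutor, core threads park like that indefinitely unless allowCoreThreadTimeOut(true) is set (or the pool has grown past its core size), which would fit threads that are neither reused nor destroyed. A minimal standalone sketch of that JDK behavior (plain JDK classes only, not Infinispan internals):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class IdleWorkerDemo {
    public static void main(String[] args) throws Exception {
        // 8 core threads, 100 ms keepalive. Without allowCoreThreadTimeOut(true),
        // idle core workers park forever in getTask() -> LinkedBlockingQueue.take(),
        // exactly the frames seen in the dump above.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                8, 8, 100, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());
        pool.allowCoreThreadTimeOut(true); // lets idle core threads die after keepalive

        for (int i = 0; i < 8; i++) {
            pool.submit(() -> { }); // trivial tasks, finish immediately
        }
        Thread.sleep(1000); // well past the keepalive; idle workers should be gone

        System.out.println("live workers: " + pool.getPoolSize());
        pool.shutdown();
    }
}
```

With the allowCoreThreadTimeOut line removed, all 8 workers stay parked indefinitely.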

From the names 'blocking-thread' and 'non-blocking-thread' I assume that they're related to Infinispan, which our applications make heavy use of (https://docs.wildfly.org/25/wildscribe/subsystem/infinispan/cache-container/thread-pool/non-blocking/index.html).

If these threads are indeed related to Infinispan, I assume the cause is that our application creates its cache containers from an external XML file instead of injecting managed cache containers configured in standalone.xml. That way, parameters like keepalive-time, max-threads, etc. aren't applied, so some thread pools inside Infinispan just keep growing.
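If that theory holds, one thing I could try is bounding the executors directly in the external file. This is only an untested sketch; the element and attribute names (thread-factory, blocking-bound-queue-thread-pool, blocking-executor, etc.) are from my reading of the Infinispan embedded configuration schema and should be checked against the Infinispan version bundled with WildFly 27, and the pool sizes are made-up placeholders:

```xml
<infinispan>
    <threads>
        <thread-factory name="app-factory" group-name="infinispan" thread-name-pattern="app-pool-%t"/>
        <!-- Bounded pools; idle threads released after keepalive-time (ms). -->
        <blocking-bound-queue-thread-pool name="app-blocking" thread-factory="app-factory"
                core-threads="4" max-threads="32" queue-length="1000" keepalive-time="60000"/>
        <non-blocking-bound-queue-thread-pool name="app-non-blocking" thread-factory="app-factory"
                core-threads="4" max-threads="8" queue-length="1000" keepalive-time="60000"/>
    </threads>
    <cache-container name="app" blocking-executor="app-blocking" non-blocking-executor="app-non-blocking">
        <local-cache name="example"/>
    </cache-container>
</infinispan>
```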

I'd like to know if I'm right, and if so, whether there's another way to configure those thread pool parameters.

Thank you very much in advance for any advice!
Michael

Eduardo Martins

Aug 29, 2023, 5:31:57 AM
to Michael, WildFly
Hi Michael, what’s your server configuration for the Infinispan subsystem? I don’t know that internal code, but it seems your threads may just be idle and not being reused.

—E


Michael

Aug 29, 2023, 5:42:03 AM
to WildFly
The only change I made to the Infinispan subsystem is adding a cache-container for hibernate:

<cache-container name="hibernate" statistics-enabled="true" marshaller="JBOSS" modules="org.infinispan.hibernate-cache">
    <non-blocking-thread-pool keepalive-time="10000"/>
    <local-cache name="entity">
        <heap-memory size="500000"/>
        <expiration max-idle="100000"/>
    </local-cache>
    <local-cache name="local-query">
        <heap-memory size="500000"/>
        <expiration max-idle="100000"/>
    </local-cache>
    <local-cache name="timestamps">
        <expiration interval="0"/>
    </local-cache>
    <local-cache name="pending-puts">
        <expiration max-idle="60000"/>
    </local-cache>
</cache-container>

I agree. These threads seem to be parked and, for whatever reason, are neither reused nor destroyed. I'm trying to refactor my code to use container-managed caches instead of manually handling caches.

Michael

Eduardo Martins

Aug 29, 2023, 6:10:18 AM
to Michael, WildFly
That configuration doesn’t seem to align with a thousands-of-threads scenario; maybe the thread parking is not due to idleness but to some Infinispan data-locking mechanism… Is that the only stack info you got, with no reference to any of your code?

—E
