Slow MessageListener startup and/or teardown using WildFly and ActiveMQ

Christoffer A. Nilsen

Aug 28, 2025, 11:11:40 AM
to WildFly
We are testing how many concurrent MessageListener sessions can be working at the same time, and the startup or teardown of the MessageListener seems to be very slow, meaning we are not able to get more than around 5 concurrent MDBs working at once. However, by using an ExecutorService we can greatly improve performance, by splitting the startup and teardown off from the workload of our own code.

I'm wondering if anyone has ideas on why this happens, how we could investigate it further, and/or whether it is recommended/common to use an ExecutorService in an MDB?

Our test setup
  • WildFly 31.0.0.Final
  • Remote ActiveMQ Artemis 2.30.0
  • MDB maxSessions: 40
  • I disable the MDB while filling the ActiveMQ queue with 1000 messages.
  • Keep a count of currently working threads (see the sketch after this list):
    • At the start of the onMessage() method we add 1 to a global counter
    • At the end of the onMessage() method we subtract 1 from the global counter
    • Log the counter in onMessage()
  • The onMessage() method does work that takes approx. 500 ms.
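
For reference, here is a simplified sketch of the test MDB. The class, queue, and doWork() names are illustrative rather than our real code, and it assumes annotation-based activation config (maxSession being the Artemis RA property behind the maxSessions setting above):

import java.util.concurrent.atomic.AtomicInteger;

import org.jboss.logging.Logger;

import jakarta.ejb.ActivationConfigProperty;
import jakarta.ejb.MessageDriven;
import jakarta.jms.Message;
import jakarta.jms.MessageListener;

@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationLookup", propertyValue = "java:/jms/queue/DeliveryQueue"),
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "jakarta.jms.Queue"),
    @ActivationConfigProperty(propertyName = "maxSession", propertyValue = "40")
})
public class DeliveryMDB implements MessageListener {

    private static final Logger LOGGER = Logger.getLogger(DeliveryMDB.class);

    // Global counter of messages currently being processed.
    private static final AtomicInteger RUNNING = new AtomicInteger();

    @Override
    public void onMessage(Message message) {
        LOGGER.infof("Running %d/40", RUNNING.incrementAndGet());
        try {
            doWork(message); // placeholder for our real logic, approx. 500 ms
        } finally {
            RUNNING.decrementAndGet();
        }
    }

    private void doWork(Message message) {
        // real business logic here
    }
}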

Expected behavior
The majority of the logged messages should be "Running 40/40", except at startup and when we are close to finishing all 1000 messages.

Results without ExecutorService
When I start the MDB again, with 1000 messages queued, we see the counter jump to 40/40 as expected. But after the first 40 are finished processing, it can take around 2000 ms before we see 1/40 working threads, and the count then increases and stabilizes around 5/40 until all 1000 messages are done processing.

Results with ExecutorService
I add a global ExecutorService with a fixed thread pool of 40 threads, and have all my onMessage() logic handled by the ExecutorService.

When I start the MDB again, with 1000 messages queued, we see the expected behavior of 40/40 working threads until all 1000 messages are done processing. This is also a lot quicker than without the ExecutorService.

My thought is that by using an ExecutorService, we dedicate the 40 MDB threads to startup and teardown and another 40 threads to the onMessage() processing. That would mean the startup, or more likely the teardown, of the MDB is slower than my onMessage() processing, creating a huge bottleneck.
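
For illustration, the ExecutorService variant looks roughly like this (again simplified; doWork() stands in for our real ~500 ms logic):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import jakarta.ejb.ActivationConfigProperty;
import jakarta.ejb.MessageDriven;
import jakarta.jms.Message;
import jakarta.jms.MessageListener;

@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationLookup", propertyValue = "java:/jms/queue/DeliveryQueue"),
    @ActivationConfigProperty(propertyName = "maxSession", propertyValue = "40")
})
public class DeliveryMDB implements MessageListener {

    // Shared fixed pool of 40 threads doing the actual work.
    private static final ExecutorService WORKERS = Executors.newFixedThreadPool(40);

    @Override
    public void onMessage(Message message) {
        // Hand the work off so the MDB instance and its JMS session are released immediately.
        WORKERS.submit(() -> doWork(message));
    }

    private void doWork(Message message) {
        // real business logic here
    }
}

We are aware that with this hand-off the message is effectively acknowledged as soon as onMessage() returns, so a failure inside doWork() will not cause a redelivery; for this throughput test that trade-off is acceptable.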

Any thoughts?

Emmanuel Hugonnet

Aug 29, 2025, 9:58:03 AM
to Christoffer A. Nilsen, WildFly
What is the size of the pool assigned to the MDBs?
Emmanuel

Christoffer A. Nilsen

Aug 29, 2025, 10:19:04 AM
to WildFly
The MDB is in a pool with max-pool-size="40".
The slsb-strict-max-pool has max-pool-size="100".
We have tried different ejb3 subsystem max-threads values (50, 100 and 200) and see no difference when changing that value.
We have also tried different pooled-connection-factory max-pool-size values (100 and 200), and see no difference there either.

When we simplified onMessage() to run no code except the counter, we never passed 1/40.
When we added a Thread.sleep(500) we never passed 5/40.
When we added a Thread.sleep(5000) we never passed 5/40.
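
The stripped-down probe was essentially just this (illustrative, reusing the counter and logger from the sketch in my first message):

@Override
public void onMessage(Message message) {
    LOGGER.infof("Running %d/40", RUNNING.incrementAndGet());
    try {
        Thread.sleep(500); // also tested with no sleep at all, and with Thread.sleep(5000)
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    } finally {
        RUNNING.decrementAndGet();
    }
}
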
We also see that the EJB stats say (after running): 
  • Pool Available Count=35
  • Pool Max Size=40
  • Pool Create Count=5
The only thing that seems to increase performance is setting consumerWindowSize=0; with that, our test MDB setup with Thread.sleep(500) actually shows 40/40 and is extremely quick.
However, when we tried the same setting in a real-life scenario (real MDB onMessage() logic), it was slow again, stabilizing around 5/40, with the same roughly 2-second pause after the first batch had run.

We have also tried consumerMaxRate=-1, which seems to make no difference. (We did not previously set this value, and believe -1 is the default, so that makes sense.)

Is there any logging we can enable to see how WildFly is handling this?

Emmanuel Hugonnet

Aug 29, 2025, 10:20:02 AM
to Christoffer A. Nilsen, WildFly
What is the size of the pool assigned to the MDBs?
Emmanuel

Christoffer A. Nilsen

Aug 29, 2025, 10:31:20 AM
to WildFly
Note: you wrote the same message twice; I answered with additional information in a previous message.

All onMessage() logic is timed and takes on average 20ms.
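
The timing is just a stopwatch around the business logic, roughly like this (illustrative):

long start = System.nanoTime();
doWork(message); // our real onMessage() logic
long elapsedMs = (System.nanoTime() - start) / 1_000_000;
LOGGER.infof("onMessage logic took %d ms", elapsedMs); // averages around 20 ms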

Here is an example of the log when I start the queue (note the almost 10-second gap after the first 40 queued messages finished, at 16:02:51):
2025-08-29T16:02:42,717 INFO [DeliveryMDB][]: Running 2/40 total=2
2025-08-29T16:02:42,718 INFO [DeliveryMDB][]: Running 3/40 total=3
2025-08-29T16:02:42,718 INFO [DeliveryMDB][]: Running 4/40 total=4
2025-08-29T16:02:42,718 INFO [DeliveryMDB][]: Running 5/40 total=5
2025-08-29T16:02:42,718 INFO [DeliveryMDB][]: Running 6/40 total=6
2025-08-29T16:02:42,719 INFO [DeliveryMDB][]: Running 7/40 total=7
2025-08-29T16:02:42,719 INFO [DeliveryMDB][]: Running 8/40 total=8
2025-08-29T16:02:42,720 INFO [DeliveryMDB][]: Running 9/40 total=9
2025-08-29T16:02:42,720 INFO [DeliveryMDB][]: Running 10/40 total=10
2025-08-29T16:02:42,721 INFO [DeliveryMDB][]: Running 10/40 total=10
2025-08-29T16:02:42,721 INFO [DeliveryMDB][]: Running 11/40 total=11
2025-08-29T16:02:42,722 INFO [DeliveryMDB][]: Running 12/40 total=12
2025-08-29T16:02:42,723 INFO [DeliveryMDB][]: Running 13/40 total=13
2025-08-29T16:02:42,725 INFO [DeliveryMDB][]: Running 14/40 total=14
2025-08-29T16:02:42,726 INFO [DeliveryMDB][]: Running 15/40 total=15
2025-08-29T16:02:42,731 INFO [DeliveryMDB][]: Running 16/40 total=16
2025-08-29T16:02:42,731 INFO [DeliveryMDB][]: Running 17/40 total=17
2025-08-29T16:02:42,732 INFO [DeliveryMDB][]: Running 18/40 total=18
2025-08-29T16:02:42,734 INFO [DeliveryMDB][]: Running 19/40 total=19
2025-08-29T16:02:42,736 INFO [DeliveryMDB][]: Running 20/40 total=20
2025-08-29T16:02:42,737 INFO [DeliveryMDB][]: Running 21/40 total=21
2025-08-29T16:02:42,746 INFO [DeliveryMDB][]: Running 22/40 total=22
2025-08-29T16:02:42,747 INFO [DeliveryMDB][]: Running 23/40 total=23
2025-08-29T16:02:42,747 INFO [DeliveryMDB][]: Running 24/40 total=24
2025-08-29T16:02:42,755 INFO [DeliveryMDB][]: Running 25/40 total=25
2025-08-29T16:02:42,755 INFO [DeliveryMDB][]: Running 26/40 total=26
2025-08-29T16:02:42,757 INFO [DeliveryMDB][]: Running 27/40 total=27
2025-08-29T16:02:42,757 INFO [DeliveryMDB][]: Running 28/40 total=28
2025-08-29T16:02:42,758 INFO [DeliveryMDB][]: Running 29/40 total=29
2025-08-29T16:02:42,760 INFO [DeliveryMDB][]: Running 30/40 total=30
2025-08-29T16:02:42,766 INFO [DeliveryMDB][]: Running 31/40 total=31
2025-08-29T16:02:42,768 INFO [DeliveryMDB][]: Running 32/40 total=32
2025-08-29T16:02:42,776 INFO [DeliveryMDB][]: Running 33/40 total=33
2025-08-29T16:02:42,777 INFO [DeliveryMDB][]: Running 34/40 total=34
2025-08-29T16:02:42,778 INFO [DeliveryMDB][]: Running 35/40 total=35
2025-08-29T16:02:42,779 INFO [DeliveryMDB][]: Running 36/40 total=36
2025-08-29T16:02:42,780 INFO [DeliveryMDB][]: Running 37/40 total=37
2025-08-29T16:02:42,781 INFO [DeliveryMDB][]: Running 38/40 total=38
2025-08-29T16:02:42,783 INFO [DeliveryMDB][]: Running 39/40 total=39
2025-08-29T16:02:42,835 INFO [DeliveryMDB][]: Running 40/40 total=40
2025-08-29T16:02:51,587 INFO [DeliveryMDB][]: Running 1/40 total=1
2025-08-29T16:03:04,223 INFO [DeliveryMDB][]: Running 2/40 total=2
2025-08-29T16:03:07,750 INFO [DeliveryMDB][]: Running 1/40 total=1
2025-08-29T16:03:07,776 INFO [DeliveryMDB][]: Running 2/40 total=2
2025-08-29T16:03:07,802 INFO [DeliveryMDB][]: Running 3/40 total=3
2025-08-29T16:03:07,828 INFO [DeliveryMDB][]: Running 3/40 total=3
2025-08-29T16:03:07,853 INFO [DeliveryMDB][]: Running 3/40 total=3
2025-08-29T16:03:07,879 INFO [DeliveryMDB][]: Running 2/40 total=2
... it will continue and fluctuate between 1-5/40 until all 1000 messages are completed.
... // Last message below
2025-08-29T16:03:22,141 INFO [DeliveryMDB][]: Running 5/40 total=5

