Support Request - Load Balancing using Wildfly - Even distribution of load across worker nodes


Harshal Patil

Sep 3, 2024, 1:48:19 AM
to WildFly
Hi,

We are a small organization using WildFly 32. We are running a proof of concept (POC) for load balancing, and we want the load to be distributed evenly across worker nodes.

Currently we have a load balancer node and a worker1 node on server1, and a worker2 node on server2. Each request from our webapp consists of two messages: a lightweight ping message that checks whether everything is up, immediately followed by a heavier execution message where the actual processing takes place. Most of the time, all execution messages are diverted to worker1 (on server1, which also hosts the load balancer node), while all ping messages are distributed to worker2. Occasionally the ping and execution messages are distributed evenly across worker1 and worker2, which is what we expect, but this happens rarely and with no consistency.

This is how we are setting up mod_cluster in both the worker nodes:

<subsystem xmlns="urn:jboss:domain:modcluster:6.0">
    <proxy name="default" advertise-socket="modcluster" proxies="proxy1" listener="ajp" sticky-session="false">
        <dynamic-load-provider>
            <load-metric type="sessions" weight="2" capacity="2"/>
            <load-metric type="busyness" weight="1"/>
        </dynamic-load-provider>
    </proxy>
</subsystem>
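For reference, the same worker-side setup can also be applied with jboss-cli (a sketch, assuming the default proxy name "default"; resource and attribute names may differ slightly between WildFly versions):

/subsystem=modcluster/proxy=default:write-attribute(name=sticky-session, value=false)
/subsystem=modcluster/proxy=default/load-provider=dynamic/load-metric=sessions:add(type=sessions, weight=2, capacity=2)
/subsystem=modcluster/proxy=default/load-provider=dynamic/load-metric=busyness:add(type=busyness, weight=1)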

We have not set an affinity in the Undertow mod_cluster filter on the load balancer node:
<filters>
    <mod-cluster name="load-balancer" management-socket-binding="mcmp-management" advertise-socket-binding="modcluster" enable-http2="true" max-retries="3">
        <no-affinity/>
    </mod-cluster>
</filters>
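For completeness, the equivalent balancer-side change can be made via jboss-cli (a sketch, assuming the filter name "load-balancer" used above):

/subsystem=undertow/configuration=filter/mod-cluster=load-balancer/affinity=none:add()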

The idea is that if, say, 4 requests are initiated, the ping and execution messages should not be segregated onto different worker nodes; they should be distributed evenly across the workers (2 ping + 2 execution each). This should happen consistently regardless of whether the requests are initiated by webapps on worker node 1 or worker node 2. Any idea what configuration we may be missing? Any help would be appreciated. Thanks in advance.

Best Regards,
Harshal Patil

Bartosz Baranowski

Sep 9, 2024, 3:26:42 AM
to WildFly
AFAIR that's due to the small sample. Spam this setup with way more requests and it should even out. Also, check https://docs.modcluster.io/#lbstatusrecaltime - with a small load, this value might need a tweak before the distribution spreads.
This is old, but hints are there: https://developer.jboss.org/thread/213163
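On the WildFly worker side, the knob corresponding to that recalculation window is the proxy's status-interval attribute (how often a worker reports its load to the balancer, 10 seconds by default). Lowering it for a POC can make the distribution react faster under light load - a jboss-cli sketch, assuming the default proxy name:

/subsystem=modcluster/proxy=default:write-attribute(name=status-interval, value=2)
reload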

Paul Ferraro

Sep 19, 2024, 6:07:42 AM
to WildFly
To reiterate Bartosz's point, a good load balancer only needs to distribute _concurrent_ requests evenly.  Distributing sequential load evenly does nothing for scalability.