Support Request - Using Wildfly as load balancer


Harshal Patil

Aug 22, 2024, 6:18:09 AM
to WildFly
Hi,

We are a small organization using WildFly. My team is trying to set up WildFly 32.0.1 as a load balancer for other WildFly 32 worker nodes. We have the following configuration for this load balancing:

Load Balancer Wildfly(placed on server 1):

In Undertow:
<filter-ref name="load-balancer"/>

<filters>
    <mod-cluster name="load-balancer" advertise-frequency="0" management-socket-binding="mcmp-management" enable-http2="true" max-retries="3">
        <single-affinity/>
    </mod-cluster>
</filters>

Sockets:
<socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:500}">
    <socket-binding name="mcmp-management" interface="public" port="${jboss.mcmp.port:8090}"/>
</socket-binding-group>


In worker node WildFly (placed on server 2):

<subsystem xmlns="urn:jboss:domain:modcluster:6.0">
    <proxy name="default" proxies="proxy1" listener="https" ssl-context="applicationSSC">
        <dynamic-load-provider>
            <load-metric type="busyness"/>
        </dynamic-load-provider>
    </proxy>
</subsystem>

<socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:500}">
    <outbound-socket-binding name="proxy1">
        <remote-destination host="<load balancer server1 ip address>" port="8590"/>
    </outbound-socket-binding>
</socket-binding-group>


We are getting the following error in the server.log of the worker node WildFly: ERROR [org.jboss.modcluster] (UndertowEventHandlerAdapterService - 1) MODCLUSTER000043: Failed to send INFO command to /< load balancer server1 ip address >:8590: Read timed out

If we try with listener="ajp" for mod_cluster, we get another error:
Failed to send INFO command to /< load balancer server1 ip address>:8590: connect timed out

There is no proper load balancing documentation available, but we referred to these:
Using Wildfly as a Load Balancer - Latest WildFly Documentation (jboss.org)
High Availability Guide (wildfly.org)

Is the load balancer - worker node configuration correct? Any idea about this error?

Kindly help in this regard.

Thanks,
Harshal

John Saccoccio

Aug 22, 2024, 2:17:39 PM
to WildFly
I typically don't trust port offsets 100%, especially in the context above. Have you tried hardcoding it: <socket-binding name="mcmp-management" interface="public" port="8590"/>

Harshal Patil

Aug 23, 2024, 1:48:22 AM
to WildFly
Hello John,

We tried removing the port offsets on both the load balancer server and the worker node server, and also tried the http and https listeners for mod_cluster on the worker node; we get the same error:


 ERROR [org.jboss.modcluster] (UndertowEventHandlerAdapterService - 1) MODCLUSTER000043: Failed to send INFO command to /< load balancer server1 ip address >:8590: Read timed out

We have configured the firewall on load balancer server 1 and opened port 8590 for access from the other server.

The public interface has jboss.bind.address:0.0.0.0

Staffan Horke

Aug 23, 2024, 8:10:04 AM
to WildFly
Hi,

This looks like a network or firewall issue. Have you opened traffic from load balancer -> worker? If not, try this.

The standard port for AJP protocol is 8009. If you are using AJP and a port offset of 500 you would need to allow traffic on port 8509 from load balancer -> worker.
If you are using HTTP or HTTPS protocol, change port accordingly.

From worker -> load balancer port 8590 should be open given a port offset of 500.
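As a concrete sketch of the rules above, assuming a RHEL-style host using firewalld (an assumption; adapt the commands to whatever firewall you actually run), opening the two ports might look like:

```shell
# On the worker: allow AJP (standard port 8009 + offset 500)
# so the load balancer can reach the backend
firewall-cmd --permanent --add-port=8509/tcp

# On the load balancer: allow MCMP (8090 + offset 500)
# so workers can register and send status updates
firewall-cmd --permanent --add-port=8590/tcp

# Apply the persistent rules to the running firewall
firewall-cmd --reload
```

If you use the http or https listener instead of ajp, substitute 8080 or 8443 (plus the offset) for 8009 on the worker side.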

Best Regards,
Staffan Hörke


Staffan Horke

Aug 24, 2024, 6:56:05 AM
to WildFly
Hi Harshal,

As I understand it, you want to run a load balancer and a worker on the same server. This is possible using the server profiles provided with the WildFly distribution: instead of trying to run a single instance that acts as both a load balancer and a worker, you can run two different WildFly instances on the same server.

Start one instance with the load-balancer profile and one instance with the ha profile. Assuming that you have only one network interface you will have to change the port config on one of the instances to avoid port conflict. The easiest way to do this is to use port offset. You can run the load balancer on standard ports and the worker instance with a port offset of 100.
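The two-instance setup described above might be started like this (a sketch; `$WILDFLY_HOME` and the `base.dir` paths are assumptions to be adapted to your installation):

```shell
# Instance 1: the load balancer, using the shipped profile on standard ports
$WILDFLY_HOME/bin/standalone.sh -c standalone-load-balancer.xml \
    -Djboss.server.base.dir=$WILDFLY_HOME/lb

# Instance 2: the worker, using the ha profile with a port offset of 100
# so the two instances do not conflict on the same network interface
$WILDFLY_HOME/bin/standalone.sh -c standalone-ha.xml \
    -Djboss.server.base.dir=$WILDFLY_HOME/worker1 \
    -Djboss.socket.binding.port-offset=100
```

Giving each instance its own `jboss.server.base.dir` keeps the configuration, deployments, and logs of the two instances separate.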

Additional worker nodes can be added on the same server or on different servers. If you are running only one instance on a server there is no need to use port offset. If your network supports multicast you are now good to go. Otherwise specify the proxies as you have done before.

Best Regards,
Staffan Hörke

On Sat, Aug 24, 2024 at 08:41 Harshal Patil <harshal.d...@gmail.com> wrote:
Hi Staffan,

Thank you for your response. The firewall is configured properly for the ports on both the worker and the load balancer. After further investigation, I think the issue is that the load balancer WildFly does not have only the load balancer configuration. It's a full-ha profile with custom configuration and a web app deployed, which we are trying to configure as a load balancer as well. If we run the bare-bones standalone-load-balancer profile, it connects properly to the worker node.

Is it possible to configure WildFly 32 to run a web app and a load balancer (i.e. act as a load balancer and as a worker node for itself) in the same node or profile? If not, what alternative can be implemented? We don't want one server dedicated solely to running a load balancer WildFly; we would prefer it to also host our web app.

Thanks,
Harshal

Harshal Patil

Aug 24, 2024, 9:53:43 AM
to WildFly
Hi Staffan,

Thank you for your response. As per your suggestion I created two nodes on server 1: one load balancer node and one worker node, plus another worker node on server 2. Both worker nodes connect to the load balancer node; further detailed testing is in progress. We were able to achieve this using the ajp listener in mod_cluster. But if we set it up with the https listener, we get this error:

ERROR [io.undertow] (default task-2) UT005043: Error in processing MCMP commands: Type:MEM, Mess: MEM: Can't read node

Any idea what the issue could be for the https listener in mod_cluster? We have set up an ssl-context for the https listener in Undertow on both the load balancer node and the worker nodes, so we are not sure what exactly we are missing here.

Staffan Horke

Aug 25, 2024, 12:50:36 PM
to WildFly
Hi Harshal,

If it works with AJP and HTTP, then most probably there is an error in the configuration or with the certificate trust.

On the WildFly load balancer instance you will have to enable HTTPS by setting the ssl-context attribute on the mod-cluster filter in the undertow subsystem [1]. That context [2] should then include a trust-manager which can validate the certificates sent by your backend server. The certificates that your backend server provides are controlled by the ssl-context configured on the https-listener in the undertow subsystem [3].

If all this is done, check again your certificate and keystore/truststore.
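The load-balancer side of this could be wired up with jboss-cli roughly as follows (a sketch: the store name, path, and password are placeholders, and `load-balancer` is the filter name used earlier in this thread):

```shell
# Trust store containing the CA/certificate that signed the workers' certs
# (path and credential are placeholders)
/subsystem=elytron/key-store=lbTrustStore:add(path=truststore.p12, relative-to=jboss.server.config.dir, credential-reference={clear-text=changeit}, type=PKCS12)

# Trust manager that validates worker certificates against that store
/subsystem=elytron/trust-manager=lbTrustManager:add(key-store=lbTrustStore)

# Client SSL context the balancer uses when connecting to workers
/subsystem=elytron/client-ssl-context=lbClientSSC:add(trust-manager=lbTrustManager)

# Attach the client SSL context to the mod-cluster filter
/subsystem=undertow/configuration=filter/mod-cluster=load-balancer:write-attribute(name=ssl-context, value=lbClientSSC)
```

After a reload, the balancer should present/validate certificates when proxying to the workers' https listeners.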

Best Regards,
Staffan Hörke



Harshal Patil

Aug 30, 2024, 4:26:08 AM
to WildFly
Hello Staffan,

Thank you for these inputs. Our team will check this out and implement it.

I have one query regarding the load balancing. How do we ensure load is distributed evenly across worker nodes?

Currently we have a load balancer node and a worker1 node on server1, and a worker2 node on server2. Most of the time all heavy-load requests from our webapp are diverted to worker1, and some minor low-load requests are distributed to worker2. Sometimes it distributes load evenly across worker1 and worker2. There is no consistency.

This is how we are setting up mod_cluster in both the worker nodes:

<subsystem xmlns="urn:jboss:domain:modcluster:6.0">
    <proxy name="default" advertise-socket="modcluster" proxies="proxy1" listener="ajp">
        <!-- <dynamic-load-provider initial-load="0"> -->
        <dynamic-load-provider>
            <load-metric type="sessions" weight="2" capacity="2"/>
            <load-metric type="busyness" weight="1"/>
        </dynamic-load-provider>
    </proxy>
</subsystem>

We have not set affinity in the Undertow mod-cluster filter of the load balancer node.

The idea is that if, say, 4 requests are initiated, 2 heavy-load and 2 low-load, they should be evenly distributed across the worker nodes (1 heavy + 1 low each). Any idea what configuration we may be missing? Thanks in advance.

Best Regards,
Harshal Patil

Staffan Horke

Sep 3, 2024, 10:15:48 AM
to WildFly
Hi,

If you don't want session affinity, make sure you have configured it to none.
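For example, building on the filter configuration posted earlier in this thread, disabling affinity on the balancer would look like this (the surrounding attributes are taken from that earlier snippet):

```xml
<mod-cluster name="load-balancer" advertise-frequency="0" management-socket-binding="mcmp-management" enable-http2="true" max-retries="3">
    <!-- no-affinity: every request is balanced, no session stickiness -->
    <no-affinity/>
</mod-cluster>
```

With `<single-affinity/>` (the earlier setting), all requests for a session stick to one node, which by itself can produce the uneven distribution described.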

If you have a very light load with infrequent requests, an uneven distribution is expected.

Your configuration specifies that the load should be calculated based on the number of active sessions and busy threads. That is what you will get.

To see if your load metrics work as you think, you can check the management console or CLI on the load balancer instance:
/subsystem=undertow/configuration=filter/mod-cluster=load-balancer/balancer=mycluster/node=nodename:query

You will see the load reported by a node as a number between 1 and 100. Note that a higher number means less load.


Best Regards,
Staffan Hörke

