2-node WildFly cluster


Davide Rossi

Apr 1, 2021, 1:25:19 PM
to WildFly
Hello,
first of all I would like to thank in advance everyone who is providing direct support in this group; you guys do an amazing job, and your feedback is invaluable.

I am really not an experienced WildFly user, but I am in relatively urgent need of help.
I have to deploy a highly available, 2-node jBPM cluster, and in order to achieve Business Central HA I need to connect both instances to a common AMQ server and a common Infinispan server.
If I am not mistaken, WildFly with the standalone-full-ha configuration provisions both of these services, but obviously I need both of them to be highly available too, I think in a replicated, master/slave configuration with automatic failover.

So far I have read a lot of material in a lot of different places (the WildFly docs, Red Hat guides, this group), but I still have a really confused view of how to achieve this, and I am not even sure it can be done the way I just described.

TL;DR:
Is it possible to start 2 different WildFly instances, on 2 different nodes, with the standalone-full-ha configuration, but configure the Infinispan and AMQ servers that the full-ha config provisions in a replicated, master/slave configuration?

Paul Ferraro

Apr 2, 2021, 10:27:38 AM
to WildFly
Some answers below.

On Thursday, April 1, 2021 at 1:25:19 PM UTC-4 Davide Rossi wrote:
Hello,
first of all I would like to thank in advance everyone who is providing direct support in this group; you guys do an amazing job, and your feedback is invaluable.

In a venue often fraught with grumbling and frustration, your sentiment is very much appreciated. :)
 
I am really not an experienced WildFly user, but I am in relatively urgent need of help.
I have to deploy a highly available, 2-node jBPM cluster, and in order to achieve Business Central HA I need to connect both instances to a common AMQ server and a common Infinispan server.

By "common Infinispan server", do you mean a remote Infinispan cluster, or [what Infinispan calls] an embedded Infinispan cache?

If I am not mistaken, WildFly with the standalone-full-ha configuration provisions both of these services, but obviously I need both of them to be highly available too, I think in a replicated, master/slave configuration with automatic failover.

It's unclear to me what it would mean for an Infinispan server to be in a primary/backup configuration. An Infinispan Java client does not need any specific "failover" logic to interact with a remote Infinispan cluster; that logic is baked into the HotRod client itself. If we are instead talking about an embedded Infinispan cache, the primary/backup concept is even less applicable, since, by their nature, embedded Infinispan caches rely on distributing or externalizing state, thus all members are peers, with no inherent hierarchy. While you can configure the partitioning of data to vary between cluster members, without a better understanding of your use case I do not see how that would be beneficial.
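As an aside on that built-in failover: a remote HotRod client is typically just handed the full list of cluster members, for example in a hotrod-client.properties file. The host names below are placeholders, not anything from this thread:

```properties
# hotrod-client.properties -- hypothetical host names
# The client fetches the cluster topology from any reachable member and
# transparently retries against the remaining members if one fails.
infinispan.client.hotrod.server_list = node1.example.com:11222;node2.example.com:11222
```

No extra failover configuration is needed on the client side; topology updates are pushed to the client by the cluster itself.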

Can you elaborate on your requirements a little more?

So far I have read a lot of material in a lot of different places (the WildFly docs, Red Hat guides, this group), but I still have a really confused view of how to achieve this, and I am not even sure it can be done the way I just described.

TL;DR:
Is it possible to start 2 different WildFly instances, on 2 different nodes, with the standalone-full-ha configuration, but configure the Infinispan and AMQ servers that the full-ha config provisions in a replicated, master/slave configuration?

Configuring AMQ in this manner should involve setting the appropriate ha-policy.  See: https://docs.wildfly.org/23/wildscribe/subsystem/messaging-activemq/server/index.html
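For reference, a replication-based live/backup pair might look roughly like the following sketch under each node's messaging-activemq server in standalone-full-ha.xml. The group name is a placeholder, and note that newer WildFly versions rename these resources to replication-primary/replication-secondary:

```xml
<!-- Node 1 (live server): ha-policy element of the messaging-activemq server -->
<replication-master group-name="bc-ha-group" check-for-live-server="true"/>

<!-- Node 2 (backup server) -->
<replication-slave group-name="bc-ha-group" allow-failback="true"/>
```

Replication requires the two servers to form a cluster (i.e. a working cluster-connection and broadcast/discovery setup), so the journal can be mirrored from live to backup over the network.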

Davide Rossi

Apr 4, 2021, 4:35:50 PM
to WildFly
Hi Paul, thank you for taking the time to read and reply to my post. I hope you enjoyed a peaceful and happy Easter.

Regarding your doubts about my exact requirements for the Infinispan service, I'll declare myself guilty straight away and admit I didn't really dive deep into why Infinispan is needed, nor into how it works.
If it helps, and it's not against this group's policies, I'll provide a link to the guide I was following, where all the requirements of an HA Business Central cluster are described.

From my limited understanding, I think what I need is indeed a remote Infinispan server, to which all my Business Central instances will connect.
My initial understanding was that, along with other optional extensions, WildFly could provide me with an Infinispan server, and that by tweaking the WildFly configuration I could set up the 2 different Infinispan deployments in a live/backup, replicated configuration, similarly to what I planned to do with AMQ. But from your replies, I fear that my plan was wrong for the simple reason that WildFly doesn't actually include a whole Infinispan server, but just the HotRod client.

Regarding the AMQ cluster, I think I made some progress towards the configuration I planned, but I don't understand what the common interface the clients should connect to is. Let me explain.
I have 2 different nodes where AMQ (the server) and Business Central (our AMQ client) will be deployed. AMQ listens on the socket injected in the http-acceptor field, and Business Central will connect using the socket injected in the http-connector field, right?
But since I have to configure AMQ in an HA manner, I have to create the http-connector using a virtual IP that gets resolved to the node where the currently live instance of AMQ is, right?
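[If that virtual-IP approach is taken, one possible wiring is an outbound socket binding that the http-connector references. This is only an illustration, not a tested configuration, and the host name stands in for the virtual IP:]

```xml
<!-- socket-binding-group: outbound binding that resolves to whichever
     node currently holds the virtual IP (i.e. the live AMQ instance) -->
<outbound-socket-binding name="messaging-vip">
    <remote-destination host="amq-vip.example.com" port="8080"/>
</outbound-socket-binding>

<!-- messaging-activemq subsystem: connector handed out to clients -->
<http-connector name="http-connector" socket-binding="messaging-vip" endpoint="http-acceptor"/>
```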
I am also wondering whether it would be possible to configure AMQ symmetrically, so that the load gets distributed across the 2 nodes while remaining fully functional if one of the instances fails. I guess the issue I just described regarding a common interface would still be relevant in such a configuration?

Thank you again for your time, Paul; I really appreciate your work!