ActiveMQ Primary, Secondary, Tertiary servers


Legolash2o

Jan 25, 2023, 1:15:53 PM
to A gathering place for the Open Rail Data community
Hi all,

It seems that ActiveMQ has a nice ability to specify multiple servers in the connection URL. Has anybody had any experience using this feature, and how did it work for you?

The idea is that the client connects to the primary feed (approved), automatically fails over to publicdatafeeds if the primary goes down, and then connects to the standard datafeeds if both the primary and secondary fail.

From what I've read, it should then reconnect once the servers are receiving messages again.

public const string FALLOVER_CONNECTION_BETA = "failover:(tcp://approveddatafeeds.networkrail.co.uk:61619,tcp://publicdatafeeds.networkrail.co.uk:61619,tcp://datafeeds.networkrail.co.uk:61619)?randomize=false&backup=true&useExponentialBackOff=true&initialReconnectDelay=5000&maxReconnectDelay=60000&reconnectDelayExponent=2.0&consumerExpiryCheckEnabled=false";
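
Option strings like this are easy to get subtly wrong (stray separators, a key given twice with different values), so a quick pre-deployment sanity check can help. The sketch below is illustrative only - a standalone Python helper, not part of any ActiveMQ client library - that splits a failover: URI into its broker list and options and reports any duplicated keys:

```python
from urllib.parse import parse_qs

def parse_failover_uri(uri):
    """Split an ActiveMQ failover: URI into (brokers, options, duplicates).
    Illustrative sketch, not a full parser of the transport's syntax."""
    prefix = "failover:("
    if not uri.startswith(prefix) or ")" not in uri:
        raise ValueError("not a failover: URI")
    inner, _, query = uri[len(prefix):].partition(")")
    brokers = inner.split(",")
    # keep_blank_values surfaces stray separators such as "&?"
    raw = parse_qs(query.lstrip("?"), keep_blank_values=True)
    duplicates = sorted(k for k, v in raw.items() if len(v) > 1)
    options = {k: v[-1] for k, v in raw.items()}  # last value wins
    return brokers, options, duplicates
```

Running it over an option string with `maxReconnectDelay` given twice, for example, would flag the duplicate before the broker's parsing behaviour has to decide which value wins.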

Peter Hicks

Jan 27, 2023, 5:06:45 AM
to openrail...@googlegroups.com
Hi Liam

It is a bad idea to use the failover transport across multiple, unclustered servers.  As the documentation (https://activemq.apache.org/failover-transport-reference.html) says, it will connect to one URI and if that connection does not succeed or subsequently fails, it will use another, so:

1. Failover will only happen if the TCP connection fails, not if the quality of the incoming data degrades, or if something fails on Network Rail's side
2. You will lose messages as the servers you're connecting to are not part of a common cluster and your durable subscriptions or queues are not replicated
3. Failover will not fail back to a higher priority server

Why are you trying to do this?  Both of the new platforms should be stable - if they are not (as is the case right now), the best way forward is to be one of the people complaining about the instability to get it fixed, rather than spending time coming up with some sort of workaround, which will probably cause you more issues.


Peter



Legolash2o

Jan 27, 2023, 12:29:24 PM
to A gathering place for the Open Rail Data community
Hi Peter, thanks for the reply.

That's good to know - I didn't know about item 3, which is a shame. Hopefully '&priorityBackup=true' (which I need to add) will help with that. From the failover transport documentation:

"Given this URL a client will try to connect and stay connected to the local broker. If local broker fails, it will of course fail over to remote. However, as priorityBackup parameter is used, the client will constantly try to reconnect to local. Once the client can do so, the client will re-connect to it without any need for manual intervention.

By default, only the first URI in the list is considered prioritized (local). In most cases this will suffice."

The hope is that if the approved feeds go down (they have been rock-solid so far), I can still receive some messages - even if the queues aren't replicated. Using the slightly more complex server string avoids having to develop a workaround, relying on ActiveMQ/OpenWire's built-in ability should the feeds hiccup.

I currently have two clients running, with the 2nd one using the more complex server string. It'll be interesting to see how it works, whether it switches, and whether it switches back if the feeds go down.

I hope that makes sense after that brain dump.

Thanks again, appreciated!

Nigel Mundy

Jan 27, 2023, 12:53:54 PM
to openraildata-talk
" Once the client can do so, the client will re-connect to it without any need for manual intervention."

OK, it may seamlessly reconnect, but will it also resubscribe to everywhere it was connected previously? This sounds like something that needs careful and controlled testing (if using Linux, maybe by using local iptables rules to "fail" the link to the primary, for varying durations).

Just my apprehensive thoughts.

regards 

Nigel.

Peter Hicks

Jan 27, 2023, 1:16:11 PM
to openrail...@googlegroups.com
Hi Liam

I really think you're overcomplicating this and introducing a load of difficult-to-troubleshoot failure scenarios.  But if you want to go along with it, test one thing - that the transport isn't too aggressive when trying to contact the primary server if it disconnects.

Also consider whether you're looking at the wrong design pattern here.


Peter


Legolash2o

Jan 27, 2023, 1:31:54 PM
to A gathering place for the Open Rail Data community
Hi both,

I'm definitely going to make note of your comments and I will run it on my 2nd VM for a few months. 

I will keep an eye out for over-aggressive connections to the primary (I do enforce the backoff) and check whether topic re-subscriptions happen. I'm not sure about other design patterns, to be honest; my current setup has two applications. One retrieves the feed data, stores it in local and Azure log files, and then adds it onto an Azure queue. The 2nd app then takes it off the queue to save into a database, with some automation-type code.

I just want to avoid having to code in detection of feed degradation and manually switch to another feed. That would require a whole bunch of testing when hopefully it can be done with the built-in behaviour.
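
(If manual degradation detection were ever needed, the core of it could be as small as a watchdog on the last-message timestamp. Hypothetical sketch - `FeedWatchdog` is not part of any ActiveMQ or NMS client API:)

```python
import time

class FeedWatchdog:
    """Flag the feed as degraded when no message has arrived within
    `timeout` seconds. Hypothetical helper, not part of any client
    library; `clock` is injectable to make it testable."""
    def __init__(self, timeout=60.0, clock=time.monotonic):
        self.timeout = timeout
        self.clock = clock
        self.last_seen = clock()

    def on_message(self):
        """Call from the message listener on every received message."""
        self.last_seen = self.clock()

    def is_degraded(self):
        return self.clock() - self.last_seen > self.timeout
```

The missing (and genuinely hard-to-test) part is everything around it: deciding when to switch feeds, resubscribing, and de-duplicating messages - which is the workaround the failover string is meant to avoid.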

I can keep an eye on the feeds using the link below, and I may add some more details if possible, such as when it switches and how aggressive it is (if applicable). Everything gets logged anyway, with errors/exceptions emailed. It'll be an interesting experiment either way!
