Consuming Real Time Information data twice


Jon Lewis

May 5, 2026, 3:22:53 AM
to A gathering place for the Open Rail Data community
I'm trying to set up both a development and a production database, populated by the Darwin Real Time Information feed.  The credentials in Rail Data Marketplace only give a single Consumer Group Id.  Is it possible to register a second one?  Has anyone else run into this issue before?  How have you addressed it?  By writing the same data to two databases off a single feed, perhaps?

Thanks in advance for any pointers

Peter Hicks

May 5, 2026, 3:27:43 AM
to openrail...@googlegroups.com
Hi Jon

Your architectural pattern is very common - almost every system I've set up has separate development, test/staging and production environments.  I'm quite surprised RDM doesn't appear to support multiple consumers.

One way to get around it, and to separate out production and non-production, is to set up a second RDM account.  I'm not sure if the consumer groups are linked to a particular organisation or a particular account within an organisation - but I'd try it and report back.

Another way is to use something like ActiveMQ - which ships with Apache Camel - consume from RDM once, re-queue the data onto topics or virtual topics, and hang your environments off those.
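To illustrate the re-queueing idea, here's a toy sketch only - `queue.Queue` stands in for ActiveMQ topics, and the names (`darwin`, `dev`, `prod`) are mine, not anything in RDM:

```python
import queue

class Topic:
    """Toy in-process stand-in for an ActiveMQ (virtual) topic:
    every subscriber gets its own copy of each published message."""
    def __init__(self):
        self._subscribers = []

    def subscribe(self):
        q = queue.Queue()
        self._subscribers.append(q)
        return q

    def publish(self, message):
        for q in self._subscribers:
            q.put(message)

darwin = Topic()            # hypothetical name for the re-queued Darwin feed
dev = darwin.subscribe()    # development environment's queue
prod = darwin.subscribe()   # production environment's queue

# A single RDM consumer publishes each message once...
darwin.publish("train-update-1")
# ...and dev and prod each drain their own copy independently.
```

In a real deployment the `Topic` would be an ActiveMQ topic (or virtual topic, if each environment needs queue semantics such as persistence and redelivery), and `subscribe`/`publish` would be broker clients.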


Peter

Seb Dazeley

May 5, 2026, 3:36:05 AM
to openrail...@googlegroups.com
Hi Jon

I have a RabbitMQ broker with two queues in it. RabbitMQ has options for persistent storage on disk, so you can restart it without worrying about missing messages downstream in the database, as long as ACKs are set up properly.

I use Python to connect to Kafka, push each message to both RabbitMQ queues, and only then send an ACK to Kafka. I can then consume from the RabbitMQ queues in prod and development without any issues.
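A minimal sketch of that ACK ordering - the important detail being that the Kafka offset is committed only after every RabbitMQ publish succeeds. The broker connections are elided and the function names are my own, not Seb's actual code:

```python
def bridge(message, publishers, commit):
    """Fan one Kafka message out to several durable queues, then ACK.

    publishers: callables that push the message to a queue (in practice,
                e.g. a wrapped RabbitMQ publish call).
    commit:     callable that commits the Kafka offset (the ACK).

    If any publish raises, commit never runs, so the message is
    redelivered after a restart: at-least-once delivery.
    """
    for publish in publishers:
        publish(message)   # may raise; offset stays uncommitted
    commit()               # only once every queue has the message

# In-memory demo: two "queues" and a commit log.
prod_q, dev_q, committed = [], [], []
bridge(b"darwin-msg", [prod_q.append, dev_q.append],
       lambda: committed.append(True))
```

The trade-off is that a crash between the publishes and the commit duplicates the message into the queues on redelivery, so downstream consumers should be idempotent (e.g. upsert by message ID).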

For the TD feed from Network Rail, I actually have two prod feeds, one for my database and one picked up by my WebSocket server. 

There might be more elegant ways without Python in the middle, but this is better than the jank I first made with just Python queues and RPyC to consume messages! 

Kind regards, 
Seb 


Jon Lewis

May 5, 2026, 3:40:19 AM
to A gathering place for the Open Rail Data community

Thanks guys - sounds like I'll need to set up some middleware to address this, then. Appreciate the quick responses and advice.

Ben Woodward

May 5, 2026, 3:53:37 AM
to openraildata-talk
Sounds like my setup, but I'm using Redpanda, which is Kafka-compatible.
