Debezium Server Crashed from Timeout Exception with rabbitmq sink


Bibek Dahal

Oct 15, 2025, 4:24:47 AM (7 days ago) Oct 15
to debezium
Hello Debezium team,
We are using Debezium Server with the Oracle connector and RabbitMQ as the sink. As a stability test, we updated 48k rows in a single commit in the DB, and it looks like Debezium crashes in this scenario. I have narrowed it down to the RabbitMQ ackTimeout being exceeded, which causes the engine to die. Is this expected behavior, or is there a setting to prevent it?

ERROR [io.deb.emb.asy.AsyncEmbeddedEngine] (pool-7-thread-1) Engine has failed with : java.util.concurrent.ExecutionException: io.debezium.DebeziumException: java.util.concurrent.TimeoutException
        at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
        at io.debezium.embedded.async.AsyncEmbeddedEngine.runTasksPolling(AsyncEmbeddedEngine.java:511)
        at io.debezium.embedded.async.AsyncEmbeddedEngine.run(AsyncEmbeddedEngine.java:221)
        at io.debezium.server.DebeziumServer.lambda$start$1(DebeziumServer.java:182)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
        at java.base/java.lang.Thread.run(Thread.java:1583)
Caused by: io.debezium.DebeziumException: java.util.concurrent.TimeoutException
        at io.debezium.server.rabbitmq.RabbitMqStreamChangeConsumer.handleBatch(RabbitMqStreamChangeConsumer.java:195)
        at io.debezium.embedded.async.ParallelSmtAndConvertBatchProcessor.processRecords(ParallelSmtAndConvertBatchProcessor.java:56)
        at io.debezium.embedded.async.AsyncEmbeddedEngine$PollRecords.doCall(AsyncEmbeddedEngine.java:1222)
        at io.debezium.embedded.async.AsyncEmbeddedEngine$PollRecords.doCall(AsyncEmbeddedEngine.java:1202)
        at io.debezium.embedded.async.RetryingCallable.call(RetryingCallable.java:47)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
        at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:572)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
        ... 3 more
Caused by: java.util.concurrent.TimeoutException
        at com.rabbitmq.client.impl.ChannelN.waitForConfirms(ChannelN.java:224)
        at com.rabbitmq.client.impl.ChannelN.waitForConfirmsOrDie(ChannelN.java:247)
        at com.rabbitmq.client.impl.recovery.AutorecoveringChannel.waitForConfirmsOrDie(AutorecoveringChannel.java:707)
        at io.debezium.server.rabbitmq.RabbitMqStreamChangeConsumer.handleBatch(RabbitMqStreamChangeConsumer.java:192)
        ... 10 more

This is the initial error; it causes a chain of errors and the engine ultimately dies. The ackTimeout when this happened was set to 3000 ms, which we are going to increase, but our main concern is whether change events are lost during this failure. We discovered this when we changed 48k rows in one commit but only 2k events reached the final destination. We do have some app components in between that consume from RabbitMQ and send the final events to the destination, but it should be a one-to-one match, and our best guess is that Debezium emitted only those 2k events before crashing.
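For reference, a possible mitigation sketch, assuming the property names used by your Debezium Server release (please verify both against the docs for your version): raise the sink's publisher-confirm timeout and/or shrink the batch the engine hands to the sink, so each `waitForConfirmsOrDie` call covers fewer publishes.

```
# RabbitMQ sink: how long to wait for publisher confirms per batch (ms)
debezium.sink.rabbitmq.ackTimeout=30000
# Engine: fewer records per batch means fewer publishes per confirm wait
debezium.source.max.batch.size=1024
```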

We also have a self-heal mechanism that detects when Debezium is down and auto-restarts it, so it should have emitted the whole 48k after it was back up again, but we did not see that happen. We tried to reproduce this, and during later tests we noticed Debezium started again from the same commit and sent the full set of events plus some duplicates, which we are not worried about.

Our concern is: when this engine shutdown happens, is it possible that Debezium, once back up, does not resume from where it left off because it was a single commit, as we observed in the initial case? We looked at the logs of the app components and did not see any indication of lost messages. That situation happened only once, and we are worried that if something like this happens in production, we have no way of knowing whether the full set of change events was emitted.

jiri.p...@gmail.com

Oct 15, 2025, 4:31:41 AM (7 days ago) Oct 15
to debezium
Hi,


WRT behaviour upon crash: Debezium updates transaction log coordinates based on the messages that were successfully stored in the sink, so if only 2,000 messages were pushed, then upon restart the connector will either resume from the first undelivered message or will resend even those two thousand; at-least-once delivery is guaranteed. If this is not the case, then there is a bug that has to be fixed, like one in the past: https://issues.redhat.com/browse/DBZ-8307
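Since at-least-once delivery means downstream consumers may see duplicates after a crash and restart, one option is to deduplicate by a stable event identifier before forwarding. A minimal sketch, not the Debezium API: the `eventId` here is hypothetical, and in practice you might derive it from the source SCN/LSN plus the table key in the Debezium envelope.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch of at-least-once handling downstream: drop change events whose
// identifier has already been seen, so redelivered events after a
// connector restart are forwarded only once.
public class Deduper {
    private final Set<String> seen = new HashSet<>();

    /** Returns true if the event is new and should be forwarded. */
    public boolean accept(String eventId) {
        return seen.add(eventId);
    }

    public static void main(String[] args) {
        Deduper d = new Deduper();
        // Simulate a redelivery after a crash: "scn-2" arrives twice.
        List<String> delivered = List.of("scn-1", "scn-2", "scn-2", "scn-3");
        long forwarded = delivered.stream().filter(d::accept).count();
        System.out.println(forwarded); // prints 3
    }
}
```

An in-memory set is only illustrative; a production consumer would need a bounded or persistent store for the seen identifiers.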

Jiri

Girish Tharwani

Oct 16, 2025, 3:01:16 AM (6 days ago) Oct 16
to debezium

Hi,

I am trying to set up CDC from MySQL to ADLS using Debezium Server; however, the MySQL instance does not have a binary log enabled. Could you confirm if Debezium can be configured to connect directly to ADLS without using Kafka, Docker, or Kubernetes?

Thanks,
Girish

Amol saini

Oct 16, 2025, 3:12:45 AM (6 days ago) Oct 16
to debe...@googlegroups.com
Hi

Do you have a master-slave setup for your MySQL instance?

Thanks
Amol Saini

Girish Tharwani

Oct 16, 2025, 3:20:07 AM (6 days ago) Oct 16
to debe...@googlegroups.com
Hi,

Thanks for the quick response.
Currently we don't have a master-slave setup; however, binlog is enabled for the database.

Thanks,
Girish



--
Thanks and regards
Girish Tharwani
Cell: - 8793850178