Missing transactions with Oracle connector after the restarts


Shahul Nagoorkani

Jan 30, 2026, 12:12:22 AM
to debezium
Hello Debezium Experts,

We are noticing a peculiar problem where our LogMiner-based Debezium Oracle connector is losing transactions whenever the connector is restarted. We picked a few heavy-traffic tables and compared hourly record counts from the Snowflake consumer, and we noticed that transactions are being dropped. Are there any known issues with LogMiner missing transactions in a busy OLTP environment?

These sections are from the Debezium logs during Oracle connector restarts at various times. From the restart snippets below, with their SCNs, are there any clues as to why we are losing data on restart?

Restart 1:
Offsets as shutdown: OracleOffsetContext [scn=9430661332325, txId=35001a00420d4700, txSeq=14, commit_scn=["9430661332325:1:35001a00420d4700"], lcr_position=null]
Found previous offset OracleOffsetContext [scn=9430661330770, txId=35001a00420d4700, txSeq=14, commit_scn=["9430661332322:1:1a000100cb1f3a01"], lcr_position=null]
Snapshot ended with SnapshotResult [status=COMPLETED, offset=OracleOffsetContext [scn=9430662135664, commit_scn=[], lcr_position=null]]

Restart 2:
Offsets as shutdown: OracleOffsetContext [scn=9430663516607, txId=14001a00b26a3701, txSeq=10, commit_scn=["9430663516612:1:14001a00b26a3701"], lcr_position=null]
Found previous offset OracleOffsetContext [scn=9430663514922, txId=14001a00b26a3701, txSeq=10, commit_scn=["9430663516609:1:2800120026e8d000"], lcr_position=null]
Snapshot ended with SnapshotResult [status=COMPLETED, offset=OracleOffsetContext [scn=9430665098253, commit_scn=[], lcr_position=null]]

Restart 3:
Offsets as shutdown: OracleOffsetContext [scn=9430666741794, txId=1800060015013501, txSeq=1, commit_scn=["9430666741794:1:01001500c5d5c200"], lcr_position=null]
Found previous offset OracleOffsetContext [scn=9430666740584, txId=1800060015013501, txSeq=1, commit_scn=["9430666741788:1:3500060058fd4600"], lcr_position=null]
Snapshot ended with SnapshotResult [status=COMPLETED, offset=OracleOffsetContext [scn=9430668015294, commit_scn=[], lcr_position=null]]

Restart 4:
Offsets as shutdown: OracleOffsetContext [scn=9430689002349, txId=0b0010005678ee00, txSeq=1, commit_scn=["9430689002349:1:18001000afd73401"], lcr_position=null]
Found previous offset OracleOffsetContext [scn=9430689001605, txId=0b0010005678ee00, txSeq=1, commit_scn=["9430689002324:1:0d00140023fffa00"], lcr_position=null]
Snapshot ended with SnapshotResult [status=COMPLETED, offset=OracleOffsetContext [scn=9430690709803, commit_scn=[], lcr_position=null]]

Regards,
Shahul Nagoorkani

Chris Cranford

Jan 30, 2026, 1:19:10 AM
to debe...@googlegroups.com
Hi -

The log entries appear to show that the same offsets that were flushed at shutdown are used on start-up, and so far no one has reported this issue. So I'm afraid there isn't much I can provide right now.

Could you please raise a GitHub Issue [1] with the following:

    - Connector Configuration
    - Connector Version
    - Oracle database version (including whether it's standalone or RAC)
    - TRACE logs

For the TRACE logs, please use this logging configuration:

    io.debezium.connector.oracle=TRACE
    io.debezium.connector.oracle.OracleValueConverters=INFO
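In a stock Kafka Connect deployment these logger levels would typically go into `config/connect-log4j.properties` (a sketch assuming the default Log4j 1.x setup; adjust the file and syntax for your logging backend):

```properties
# Assumed location: config/connect-log4j.properties (Log4j 1.x syntax)
log4j.logger.io.debezium.connector.oracle=TRACE
log4j.logger.io.debezium.connector.oracle.OracleValueConverters=INFO
```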

With the logging configuration in place, run your test: capture changes, stop and restart the connector at the point where you observe the issue, and share the full, complete TRACE logs, including the list of transactions that you believe were missed and not replicated.

In addition, if you could clone and build our Oracle query tool [2], then run the `list-changes` command. In this case, you'd want to record your offset SCN before you perform the above test, and then record the database's current SCN after the test finishes. Then use the offset SCN as the start SCN and the current SCN as the end SCN in the `list-changes` command. This will export all the changes from LogMiner to a file, which would help us compare what Debezium saw against what is in the transaction logs themselves.
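For recording the SCNs around the test, a query like this against the source database should work (assuming the connecting user can read V$DATABASE):

```sql
-- Run once before the test (use as the start SCN)
-- and once after it finishes (use as the end SCN)
SELECT CURRENT_SCN FROM V$DATABASE;
```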

If you have any questions, don't hesitate to ask.

Thanks,
Chris

[1]: https://github.com/debezium/dbz
[2]: https://github.com/Naros/debezium-oracle-query-tool

Shahul Nagoorkani

Jan 30, 2026, 5:18:26 PM
to debezium
Thanks for your response, Chris.

Sure, we will raise an issue. We are also looking at trying out the latest connector version to see if it helps.

Meanwhile, enabling TRACE was spilling sensitive data into the logs, so we certainly don't want to enable TRACE logging in production. Is there a way to restrict the data that spills over into the logs?

Regards,
Shahul Nagoorkani

Chris Cranford

Jan 31, 2026, 8:49:19 AM
to debe...@googlegroups.com
Hi -

You could try

    io.debezium.connector.oracle=TRACE
    io.debezium.connector.oracle.OracleValueConverters=INFO
    io.debezium.util.Loggings=DEBUG

But be aware that if that data turns out to be necessary for the diagnosis, we'll have to go through this process again.

Thanks,
-cc

Shahul Nagoorkani

Feb 2, 2026, 8:07:01 PM
to debezium
Hello Chris,

Since we noticed the data loss, we created a table "APPS.APPS_HEARTBEAT_CUSTOM" and have been updating it every 2 seconds through a cron job with the following statement:

UPDATE APPS.APPS_HEARTBEAT_CUSTOM SET last_update=SYSTIMESTAMP,counter=counter+1 WHERE id=1;
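For reference, a minimal sketch of the heartbeat table this assumes (column names and types inferred from the UPDATE statement; the actual DDL may differ):

```sql
-- Hypothetical DDL inferred from the UPDATE above
CREATE TABLE APPS.APPS_HEARTBEAT_CUSTOM (
  id          NUMBER PRIMARY KEY,
  counter     NUMBER NOT NULL,
  last_update TIMESTAMP NOT NULL
);

-- Seed the single row that the cron job keeps updating
INSERT INTO APPS.APPS_HEARTBEAT_CUSTOM (id, counter, last_update)
VALUES (1, 0, SYSTIMESTAMP);
```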

Stopped the connector around: 2/2/2026 14:43 MST (21:43 UTC)

The connector took about 15 minutes to come back up, as the schema history refresh took a really long time because it captures the entire database schema.

From the data ingested into Snowflake, we found that nearly 13 minutes of data is missing, based on the gaps in the counter from APPS.APPS_HEARTBEAT_CUSTOM, which is around the same window the connector was down.
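A gap in the counter like the one described above can be located on the Snowflake side with a window-function query, something like this (a sketch assuming the replicated rows land in a table named HEARTBEAT_SINK with COUNTER and LAST_UPDATE columns; the real sink table name will differ):

```sql
-- Each returned row marks a jump in the heartbeat counter,
-- i.e. a window of missed updates between prev_update and last_update
SELECT prev_counter, counter, prev_update, last_update
FROM (
  SELECT counter,
         last_update,
         LAG(counter)     OVER (ORDER BY counter) AS prev_counter,
         LAG(last_update) OVER (ORDER BY counter) AS prev_update
  FROM HEARTBEAT_SINK
)
WHERE counter - prev_counter > 1;
```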

Logs are attached for the entire time duration.

Meanwhile, we created an XStream connector for the same set of tables and observed similar data loss. I will share the logs from the XStream connector in the next message.
So irrespective of the connector mode, LogMiner or XStream, we are noticing the data loss.

Please review the logs and let us know if we are missing anything from a configuration standpoint.

Regards,
Shahul Nagoorkani
logminer_debezium_logs_part1.csv.zip

Shahul Nagoorkani

Feb 2, 2026, 8:08:15 PM
to debezium
Part 2 of the log attached.

Regards,
Shahul Nagoorkani
logminer_debezium_logs_part2.csv.zip

Shahul Nagoorkani

Feb 2, 2026, 8:12:47 PM
to debezium
Hi Chris,

I am attaching similar logs (two parts) for the XStream connector, which shows the same data-loss problem.

Regards,
Shahul Nagoorkani
xstreams_debezium_logs_part1.csv.zip

Shahul Nagoorkani

Feb 2, 2026, 8:13:39 PM
to debezium
Part 2 of the XStream connector logs.

Regards,
Shahul Nagoorkani

xstreams_debezium_logs_part2.csv.zip