Debezium Oracle connector failing after archive/redo log cleanup ORA-00308

채현진

Nov 4, 2025, 3:11:45 AM (10 days ago)
to debezium
Hello,

We are running a Debezium Oracle source connector on a Kafka cluster for a customer. In both production and development Oracle environments, when archive/redo logs exceed a threshold, all logs in the log directory are backed up and then deleted.

After an incident, we asked the customer to exclude the most recent logs that are actively being read from deletion. However, our immediate problem is recovering the connector, which keeps failing with ORA-00308 and repeatedly retries its task.

What we tried

Restored the backed-up archive/redo log files to the same path with the same permissions.

Restarted the connector multiple times.

The error persists and the task remains in a retry loop.

Environment notes

Debezium Oracle connector 3.1.2

Oracle database is configured with high availability (primary/standby).

Our connector configuration does not set log.mining.archive.destination.name (a.k.a. archive.destination.name in Debezium docs).

[Attachment: KakaoTalk_20251104_093320135.jpg]
Could the missing log.mining.archive.destination.name be the root cause?

With limited collaboration from the customer’s DBA team, what minimum Oracle checks should we request (e.g., V$ARCHIVE_DEST_STATUS, V$ARCHIVED_LOG for required sequence/SCN availability, file permissions, ASM vs. filesystem paths)?
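
For example, the availability check we have in mind would look something like this (a rough sketch; :restart_scn is a placeholder for the SCN recorded in our connector offsets):

    -- Is the archive log covering the connector's restart SCN still
    -- registered and present on disk?
    SELECT NAME, SEQUENCE#, FIRST_CHANGE#, NEXT_CHANGE#, STATUS, DELETED
      FROM V$ARCHIVED_LOG
     WHERE :restart_scn BETWEEN FIRST_CHANGE# AND NEXT_CHANGE# - 1;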

Any guidance or a checklist to bring the connector back to a healthy state would be greatly appreciated.

Thank you in advance, and apologies if the wording is a bit rough—I used a translator.

Chris Cranford

Nov 4, 2025, 3:45:49 AM (10 days ago)
to debe...@googlegroups.com
Hi -

First and foremost, the removal of "all logs" is a common mistake. My recommendation is to have a script that runs periodically, perhaps every 6 or 12 hours, and deletes only those logs whose creation time is older than your retention window. The goal is to always retain every log created by the ARC process in the last N hours (where N is your retention policy). A sketch of such a job follows.
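
A minimal RMAN sketch, assuming a 24-hour retention window (adjust the BEFORE clause to your N) and that logs are backed up before deletion:

    # back up any archive log not yet backed up, then delete logs
    # whose archiving completed more than 24 hours ago
    RMAN> BACKUP ARCHIVELOG ALL NOT BACKED UP 1 TIMES;
    RMAN> DELETE NOPROMPT ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE - 1';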

When an online redo log is archived, there is a high probability that the last few seconds to a minute of redo entries that were flushed have not yet been read by Debezium. In this situation, the next mining step will read not only the next online redo log but also the newly created archive log, to make sure there is no data gap. If your cleanup script runs just before that mining step and removes that archive log, you will experience ORA-00308.

When the logs were restored by the DBA team, did they add them back to the Oracle log catalog using RMAN? If they were not re-registered, that would explain why you continued to see the issue after the restore. The logs must be registered in Oracle's log catalog for Debezium and LogMiner to see them, because Oracle does not allow direct disk access to the files.
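
If the files are back on disk but not cataloged, re-registering them looks roughly like this (the path and file name here are examples, not your actual locations):

    # register every restored file under a directory
    RMAN> CATALOG START WITH '/u01/app/oracle/restored_arch/' NOPROMPT;
    # or register a single restored file
    RMAN> CATALOG ARCHIVELOG '/u01/app/oracle/restored_arch/1_1234_987654321.arc';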

In addition, older versions of Debezium had a bug where certain transactions could be left in an active state in the buffer, causing the read position to be older than it should be. This could also lead to ORA-00308 or "SCN is not available" errors, particularly if lob.enabled was set to true or if the connector was restarted.

As to your question about archive destinations, this configuration is important if your DBA team has configured multiple destinations that are both VALID and LOCAL. In such cases, typically one destination has a longer retention period than the other, and it's important to tell Debezium to use the one with the longer retention, since retention is driven by the DBA team outside of Oracle. If this isn't done, Debezium picks a destination arbitrarily, and ORA errors similar to ORA-00308 can be raised.
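
To see which destinations qualify, the DBA team can run something like the following; the name of the longer-retention destination (for example LOG_ARCHIVE_DEST_2, if that is yours) then goes into log.mining.archive.destination.name:

    -- list local, valid archive destinations
    SELECT DEST_NAME, STATUS, TYPE, DESTINATION
      FROM V$ARCHIVE_DEST_STATUS
     WHERE STATUS = 'VALID' AND TYPE = 'LOCAL';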

Finally, if the archive logs are still available, my suggestion is to make sure the DBA team adds them back to the Oracle log catalog. But if I recall correctly, RMAN only adds them back to LOG_ARCHIVE_DEST_1, and this cannot be changed. So if you are not using that destination, or if it's not the one Debezium should be reading from, restoring the logs and having Debezium start where it left off can be problematic. In that case, you will need to clear your offsets and history topic and restart the connector, letting it re-take a historical snapshot, or set snapshot.mode to `no_data` if you are willing to accept the data loss.

Let us know if you have any other questions.
-cc

Chris Cranford

Nov 4, 2025, 3:48:31 AM (10 days ago)
to debe...@googlegroups.com
Hi -

One point I failed to mention regarding the bug (DBZ-8747): upgrading to Debezium 3.3.1.Final or later avoids it going forward.

Thanks,
-cc