Hello,
We are running a Debezium Oracle source connector on a Kafka cluster for a customer. In both the production and development Oracle environments, once the archive/redo logs exceed a threshold, all logs in the log directory are backed up and then deleted.
After an incident, we asked the customer to exclude from deletion the most recent logs that are still being actively read. For now, though, our immediate problem is recovering a connector that keeps failing and whose task is stuck in a retry loop.
What we tried
Restored the backed-up archive/redo log files to the same path with the same permissions (we are unsure whether restored files must also be re-cataloged, e.g., with RMAN's CATALOG ARCHIVELOG command, before LogMiner can see them).
Restarted the connector multiple times.
The error persists and the task remains in a retry loop (see the status-check sketch after this list).
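For context, this is roughly how we inspect the failed task and retry it through the Kafka Connect REST API. It is a minimal sketch: the Connect URL and the connector name "oracle-source" are placeholders for our actual deployment.

    import requests

    # Assumptions: the Kafka Connect REST API listens on this URL and the
    # connector is registered as "oracle-source"; both are placeholders.
    CONNECT_URL = "http://localhost:8083"
    CONNECTOR = "oracle-source"

    # GET /connectors/{name}/status returns the connector state plus one
    # entry per task, including the stack trace of a failed task.
    status = requests.get(f"{CONNECT_URL}/connectors/{CONNECTOR}/status").json()
    print("connector:", status["connector"]["state"])

    for task in status["tasks"]:
        print(f"task {task['id']}: {task['state']}")
        if task["state"] == "FAILED":
            # Inspect the stored stack trace before restarting again.
            print(task.get("trace", "")[:2000])
            requests.post(
                f"{CONNECT_URL}/connectors/{CONNECTOR}/tasks/{task['id']}/restart"
            ).raise_for_status()

Each restart so far has led straight back to the same failure, which is why we suspect the missing logs rather than the connector itself.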
Environment notes
Debezium Oracle connector 3.1.2.
The Oracle database is configured for high availability (primary/standby).
Our connector configuration does not set log.mining.archive.destination.name (a.k.a. archive.destination.name in the Debezium docs).

Questions
Could the missing log.mining.archive.destination.name be the root cause? With a primary/standby setup there is more than one archive destination, and we are unsure which one the connector picks by default.
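If this property does turn out to matter, here is a minimal sketch of how we would set it on the running connector via the Connect REST API. The Connect URL, connector name, and the destination LOG_ARCHIVE_DEST_1 are all placeholders; the DBA would have to confirm which local, VALID destination the connector should mine from.

    import requests

    CONNECT_URL = "http://localhost:8083"  # placeholder
    CONNECTOR = "oracle-source"            # placeholder connector name

    # GET /connectors/{name}/config returns the current config map;
    # PUT with the amended map updates the running connector.
    config = requests.get(f"{CONNECT_URL}/connectors/{CONNECTOR}/config").json()

    # LOG_ARCHIVE_DEST_1 is an assumption: in a primary/standby setup the
    # DBA must confirm which local, VALID archive destination to use
    # (see the V$ARCHIVE_DEST_STATUS check below).
    config["log.mining.archive.destination.name"] = "LOG_ARCHIVE_DEST_1"

    requests.put(
        f"{CONNECT_URL}/connectors/{CONNECTOR}/config", json=config
    ).raise_for_status()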
With limited collaboration from the customer's DBA team, what are the minimum Oracle checks we should request? We are thinking of V$ARCHIVE_DEST_STATUS (destination health), V$ARCHIVED_LOG (availability of the required sequences/SCN range), file permissions on the restored logs, and whether the paths are ASM or plain filesystem; a sketch of the queries we have in mind follows.
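For concreteness, these are the checks we would ask the DBA to run, sketched here with python-oracledb. The credentials, DSN, and the example SCN are placeholders, and the embedded SQL could just as well be run directly in SQL*Plus.

    import oracledb

    # Placeholder credentials/DSN; the DBA can also run the SQL directly.
    conn = oracledb.connect(
        user="c##dbzuser", password="***", dsn="dbhost:1521/ORCLCDB"
    )
    cur = conn.cursor()

    # 1. Archive destination health: which destinations exist, which are
    #    VALID, and which are local (relevant for the destination.name
    #    property above).
    cur.execute("""
        SELECT dest_id, dest_name, status, type, error
          FROM v$archive_dest_status
         WHERE status <> 'INACTIVE'
    """)
    for row in cur:
        print(row)

    # 2. Is the SCN the connector needs still covered by cataloged archive
    #    logs? 123456789 stands in for the SCN from the connector offsets.
    #    Our (unconfirmed) understanding is that files restored to disk but
    #    still marked DELETED = 'YES' here would need re-cataloging.
    cur.execute("""
        SELECT sequence#, name, first_change#, next_change#, deleted, status
          FROM v$archived_log
         WHERE :scn BETWEEN first_change# AND next_change#
    """, scn=123456789)
    for row in cur:
        print(row)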
Any guidance or a checklist to bring the connector back to a healthy state would be greatly appreciated.
Thank you in advance, and apologies if the wording is a bit rough; I used a translator.