ORA-00308 – Debezium Oracle Connector Crash (Missing Archived Log 1618)


Ramesh K

Sep 24, 2025, 7:29:27 AM
to debezium

Hi Team,

We have two Debezium connectors mining CDC from the same Oracle database (each handling different tables). Connector 1 was deployed much earlier and has been running without issue. Connector 2, deployed later, immediately failed with:

ORA-00308: cannot open archived log '/oracle/ISMFTEST/oraarch/1_1618_860427197.dbf'

It looks like Connector 1 consumed sequence 1618 while the log was still available, but by the time Connector 2 started, that archived log had already been purged.

Can you please advise?

Attached error logs below.

Regards,
Ramesh.

error_log.txt

Chris Cranford

Sep 24, 2025, 7:56:41 AM
to debe...@googlegroups.com
Hi Ramesh -

Is it possible that connector 2 was down for a period of time and then you redeployed it? It seems unusual that connector 2 would immediately fail with such an error unless it had been down longer than your archive log retention period.

-cc
--
You received this message because you are subscribed to the Google Groups "debezium" group.
To unsubscribe from this group and stop receiving emails from it, send an email to debezium+u...@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/debezium/379783e6-30bb-45b1-8807-b98aee8bb37bn%40googlegroups.com.

Ramesh K

Sep 24, 2025, 8:06:55 AM
to debe...@googlegroups.com

Hi Chris,

Just to clarify: it wasn't a downtime issue.

I initially deployed the connector with one table using an initial snapshot. After that, I added more tables following the approach described here:

 https://debezium.io/documentation/reference/stable/connectors/oracle.html#oracle-capturing-data-from-tables-not-captured-by-the-initial-snapshot-no-schema-change

then triggered the signal via the signal table, and records started populating correctly in the new topics for each added table. However, shortly after, the connector crashed with the error I shared earlier.
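For reference, triggering an ad hoc incremental snapshot through the signal table is an insert along these lines (the signal table and captured table names here are hypothetical examples, sketched from the Debezium signaling documentation, not the actual config from this thread):

```sql
-- Hypothetical signal-table insert to request an ad hoc incremental
-- snapshot; DEBEZIUM.DBZ_SIGNAL and ISMFTEST.NEW_TABLE are placeholders.
INSERT INTO DEBEZIUM.DBZ_SIGNAL (id, type, data)
VALUES (
  'adhoc-snapshot-1',
  'execute-snapshot',
  '{"data-collections": ["ISMFTEST.NEW_TABLE"], "type": "incremental"}'
);
COMMIT;
```

The connector picks up the committed signal row and begins chunked snapshotting of the listed collections while streaming continues.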

Let me know what you think could be the cause or how we should proceed.

Regards,
Ramesh


Chris Cranford

Sep 24, 2025, 8:26:15 AM
to debe...@googlegroups.com
Hi Ramesh -

I would check with the DBA to determine when that log was deleted. Also, did you trigger an adhoc incremental snapshot or blocking snapshot?

-cc

Ramesh K

Sep 24, 2025, 8:48:29 AM
to debe...@googlegroups.com
Hi Chris,

I triggered an incremental snapshot.

Also, can we use two different connectors pointing to the same database, each with different tables included in its connector configuration?

Regards,
Ramesh

Chris Cranford

Sep 24, 2025, 11:59:46 PM
to debe...@googlegroups.com
Hi Ramesh,

You can use two different connectors against the same Oracle database, but only if you have set `log.mining.strategy` to `hybrid` or `online_catalog`.
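A minimal sketch of the relevant portion of a second connector's configuration (connector name, table list, and topic prefix are hypothetical; each connector needs its own distinct topic prefix):

```json
{
  "name": "oracle-connector-2",
  "config": {
    "connector.class": "io.debezium.connector.oracle.OracleConnector",
    "log.mining.strategy": "online_catalog",
    "table.include.list": "ISMFTEST.ORDERS,ISMFTEST.SHIPMENTS",
    "topic.prefix": "server2"
  }
}
```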

But to your issue: if you triggered an incremental snapshot and the connector failed immediately afterward, then either the volume where the file is stored failed, the log was corrupted, or a DBA or script deleted the file prematurely. ORA-00308 errors typically happen only when the file is removed from disk without using RMAN to update the log catalog. So I would start by determining when the log was deleted and whether it was deleted too early.

Thanks,
-cc

Ramesh K

Sep 25, 2025, 12:40:42 AM
to debe...@googlegroups.com
Hi Chris,

Yes, the file was deleted. The DBA mentioned that an archive backup runs every 6 hours, and once the backup completes, the archived logs are deleted.

And yes, they are using RMAN to update the log catalog.

What else can be done in these scenarios? Does something else need to be configured on the database side?

Regards,
Ramesh



Chris Cranford

Sep 25, 2025, 12:46:14 AM
to debe...@googlegroups.com
Can you clarify, when the archive backup runs every 6 hours, does it delete all archive logs or does it only delete archive logs with an age greater than 6 hours?

-cc

Ramesh K

Sep 25, 2025, 1:04:57 AM
to debe...@googlegroups.com

Thanks Chris for the quick response.

It will delete all the archives.

Also, if we use multiple connectors for the same database covering different tables, how do the signaling table and ad hoc/blocking snapshots work?


Regards,
Ramesh.


Chris Cranford

Sep 26, 2025, 12:23:15 AM
to debe...@googlegroups.com
Hi Ramesh

If the script runs every 6 hours and removes all the archive log files, that's the issue. The script needs to only delete archive logs that are older than 6 hours, otherwise you risk deleting an archive log file that may have just been created but not yet consumed by the connector. That is most likely what happened in your case.
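One way to express that retention in the backup script is an age-based RMAN delete instead of deleting everything after backup (a sketch assuming a 6-hour retention window; verify the window against your actual recovery and disk-space policy):

```sql
-- RMAN: back up all archived logs, then delete only those completed
-- more than 6 hours ago (SYSDATE - 6/24), rather than deleting all.
BACKUP ARCHIVELOG ALL;
DELETE NOPROMPT ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE - 6/24';
```

This leaves the most recent logs on disk long enough for the connector to mine them before they disappear.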

-cc

Ramesh K

Oct 1, 2025, 4:49:31 AM
to debe...@googlegroups.com

Hi Chris,

Thanks for the update.

I have one more question. We're currently using Debezium 2.5 with Oracle 11g, and we're planning to upgrade to version 3.2.

  • Is Oracle 11g compatible with Debezium 3.2?
  • What steps should we take to ensure the connector continues to run smoothly after the upgrade?
  • What improvements or new features can we expect by moving to 3.2?
  • Do we have to change any configs?

Below is the attached config.  

Appreciate your guidance on this


Regards,
Ramesh 
config.json

Chris Cranford

Oct 1, 2025, 8:17:01 AM
to debe...@googlegroups.com
Hi Ramesh -

We officially do not test against Oracle 11g or 12c, so we don't claim support, but we do strive to keep Debezium compatible with those versions. Should you find an issue where a newer version does not work with older, EoL versions of Oracle, you're always welcome to reach out and we will do our best to address it.

As for the upgrade, my recommendation would be to move to 3.3.0.Final, released today, and set the new `legacy.decimal.handling.strategy` option to `true`. This will make your upgrade path significantly smoother (just ask anyone who did this upgrade before 3.3). It lets you retain the same `decimal.handling.mode` behavior used by 2.6.x and earlier, as this changed in Debezium 2.7. That gives you a direct 1-to-1 update without having to worry about the event schema differences for decimal values, which would otherwise create pain points since you use a schema registry. Once you have a working upgrade, you can remove the legacy option in a separate step, when you have the cycles to address the event schema variances.
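Under that recommendation, the upgrade-time change is a single added property in the connector config, something like this fragment (all other properties unchanged; a sketch based on the note above, not a full config):

```json
{
  "legacy.decimal.handling.strategy": "true"
}
```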

Aside from that, I would suggest you read the release notes and blog posts. There have been a few configuration changes around the archive destination name and transaction retention properties; otherwise I don't recall any other specific changes, but the release notes and blog posts are the source of truth.

Thanks,
-cc