[ORACLE] Archive Logs Issues


Vitor Pereira

Mar 6, 2026, 9:37:58 AM
to debezium
Hi,

Hope you're doing well.

Recently, I've been having the same issue:

ORA-00308: cannot open archived log 'H:\DBS\DEV\ARCHIVE\DEV_T1R904240267_S456820'
ORA-27041: unable to open file

The error occurs in a database that is not a PDB. I spoke to my DBA, and both databases have the same settings, as do both connectors.

Do you have any idea what is happening here?

Best regards,
Vitor Pereira

Chris Cranford

Mar 6, 2026, 10:39:28 AM
to debe...@googlegroups.com
Hi, this is the classic situation where Debezium needs to read a log file: the V$ARCHIVED_LOG view says the log exists, so we hand it to LogMiner, and when LogMiner tries to read the file off disk it discovers the file is no longer on the file system. That can happen because the file was deleted, because of permissions, or because the mount volume isn't available.
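One quick check (a sketch; column names are from Oracle's V$ARCHIVED_LOG reference, and the LIKE pattern is taken from the error above) is to ask the database whether it still lists that archive log and whether it already considers the file deleted:

```sql
-- Does Oracle's metadata still list the archived log from the ORA-00308
-- error, and does it already know the file was deleted?
-- Adjust the LIKE pattern to match the filename in your error message.
SELECT NAME, THREAD#, SEQUENCE#, STATUS, DELETED
  FROM V$ARCHIVED_LOG
 WHERE NAME LIKE '%DEV_T1R904240267_S456820%';
```

If DELETED is YES (or the row is gone entirely) while LogMiner was still being asked to read the file, the metadata and the mining session disagree about the file's availability.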

-cc
--
You received this message because you are subscribed to the Google Groups "debezium" group.
To unsubscribe from this group and stop receiving emails from it, send an email to debezium+u...@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/debezium/c078d9d6-2428-44ab-9275-1ad486a9f156n%40googlegroups.com.

Vitor Pereira

Mar 6, 2026, 10:42:53 AM
to debezium
Hi,

So, that means it is not possible to change the settings in the source connector to resolve this, correct? It must be changed on the database side?

Best regards,
Vitor Pereira

Chris Cranford

Mar 7, 2026, 12:23:06 PM
to debe...@googlegroups.com
Hi,

Well, this depends. If V$ARCHIVE_DEST_STATUS has multiple rows that are LOCAL and VALID, and you haven't set archive.destination.name, then it's a configuration issue. The documentation covers the side effects you can face when multiple log destinations are configured and you leave it to Debezium to pick one arbitrarily.
SELECT DEST_ID, DEST_NAME
  FROM V$ARCHIVE_DEST_STATUS
 WHERE TYPE = 'LOCAL'
   AND STATUS = 'VALID'
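If that query returns more than one row, you can pin the connector to a single destination instead of letting it choose. A minimal sketch of the connector-config fragment (the property name is the one mentioned above; 'LOG_ARCHIVE_DEST_1' is a hypothetical DEST_NAME — substitute the value returned by your query):

```json
"archive.destination.name": "LOG_ARCHIVE_DEST_1"
```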

But if you have only one LOCAL and VALID destination, then the issue is on the Oracle side, and is most likely because the SCN we needed to read was in a log file that has since been deleted from disk while the Oracle performance views were still out of sync with the filesystem. Keeping those views in sync with the filesystem is something the DBA should be ensuring routinely.
SELECT THREAD#, SEQUENCE#, FIRST_CHANGE#, NEXT_CHANGE#, BLOCKS,
       BLOCK_SIZE, CREATOR, REGISTRAR, ARCHIVED, COMPLETION_TIME,
       STATUS, ARCHIVAL_THREAD#
  FROM V$ARCHIVED_LOG
Have the DBA determine the redo thread and log sequence number from the filename in the error message, and check the values of the above query's columns for that redo thread and log sequence number. If the STATUS is `A` but the file does not exist at the path shown in the error, then it's because the log metadata is out of sync. But given we're looking at this some days later, it's also possible the DBA has fixed this since then.
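If the DBA does find STATUS `A` entries whose files are gone from disk, the usual way to bring the metadata back in sync is an RMAN crosscheck (a generic sketch, not tailored to your retention setup):

```
# Mark catalog entries whose files are missing on disk as EXPIRED
CROSSCHECK ARCHIVELOG ALL;
# Remove the stale records so the views match the filesystem again
DELETE EXPIRED ARCHIVELOG ALL;
```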

-cc



Vitor Pereira

Mar 9, 2026, 1:05:47 PM
to debezium
Hi, Chris

I've talked with my DBA and we have only one LOCAL and VALID destination. We have a backup process that deletes archive logs older than 2 days. Is this a problem? Does the source connector need to have all the archive logs on disk?
The view returned by the query you provided shows that the log metadata is in sync.

Besides that, I have two database instances, both with the same configuration, and this only happens in one of them. This is the source connector configuration for both databases:

        "database.dbname": "DEV",
        "database.server.name": "dev",
        "database.history.kafka.topic": "xxxx",
        "database.history.kafka.bootstrap.servers": "xxxx",
        "schema.history.internal.kafka.topic": "xxx",
        "schema.history.internal.kafka.bootstrap.servers": "xxxx",
        "schema.include.list": "SCHEMA1, DBZUSER",
        "table.include.list": "TABLES...",
        "column.exclude.list": "COLUMNS_TO_EXCLUDE",
        "datatype.propagate.source.type": "DATATYPES",
        "signal.data.collection": "xxxxxx",
        "schema.history.internal.store.only.captured.tables.ddl": "true",
        "schema.history.internal.store.only.captured.databases.ddl": "true",
        "heartbeat.interval.ms": "200000",
        "heartbeat.action.query": "INSERT INTO DBZUSER.HEARTBEAT_TABLE_ADM VALUES ('heartbeat')",
        "decimal.handling.mode": "precise",
        "snapshot.mode": "initial",
        "topic.prefix": "ADM_SBS_SCHEMAS",
        "lob.enabled": "false",
        "log.mining.strategy": "hybrid",
        "log.mining.transaction.retention.ms": "172800000",
        "log.mining.flush.table.name": "LOG_MINING_TABLE",
        "key.converter.schemas.enable": "true",
        "value.converter.schemas.enable": "true",
        "key.converter": "io.confluent.connect.avro.AvroConverter",
        "key.converter.schema.registry.url": "http://schema-registry:8081",
        "value.converter": "io.confluent.connect.avro.AvroConverter",
        "value.converter.schema.registry.url": "http://schema-registry:8081",
        "database.name": "DEV"
 

Best regards,
Vitor Pereira

Chris Cranford

Mar 10, 2026, 5:57:51 PM
to debezium
Hi Vitor,

Having a 48-hour retention on archive logs would only be an issue if there's a reason why the connector's offsets are not advancing, or why the connector is reading changes too slowly to keep up.

Now, there was a recent bug fix in 3.4.2 and 3.5 that addresses a corner case with LogMiner and rolled-back transactions. The issue was that LogMiner would provide transaction IDs without the transaction sequence/slot portions of the ID, which meant the ROLLBACK event would carry a different XID than its corresponding START and DML events. Such transactions then appeared to be active, in-flight transactions that never ended, which could keep offsets from progressing until the transaction retention time was met.

Have you updated to 3.4.2 yet?