[ORACLE] Archive Logs Issues


Vitor Pereira

Mar 6, 2026, 9:37:58 AM
to debezium
Hi,

Hope you're doing well.

Recently, I've been having the same issue:

ORA-00308: cannot open archived log 'H:\DBS\DEV\ARCHIVE\DEV_T1R904240267_S456820'
ORA-27041: unable to open file

The error occurs in a database that is not a PDB. I spoke to my DBA and both databases have the same settings, as do both connectors.

Do you have any idea what is happening here?

Best regards,
Vitor Pereira

Chris Cranford

Mar 6, 2026, 10:39:28 AM
to debe...@googlegroups.com
Hi, this is the classic situation where Debezium needs to read a log file: the V$ARCHIVED_LOG view indicates the log exists, we hand it to LogMiner, and when LogMiner tries to read the file off disk it discovers that it is no longer on the file system. This could be because it was deleted, a permissions problem, or the mount volume isn't available.

-cc
--
You received this message because you are subscribed to the Google Groups "debezium" group.
To unsubscribe from this group and stop receiving emails from it, send an email to debezium+u...@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/debezium/c078d9d6-2428-44ab-9275-1ad486a9f156n%40googlegroups.com.

Vitor Pereira

Mar 6, 2026, 10:42:53 AM
to debezium
Hi,

So, that means it is not possible to change the settings in the source connector to resolve this, correct? It must be changed on the database side?

Best regards,
Vitor Pereira

Chris Cranford

Mar 7, 2026, 12:23:06 PM
to debe...@googlegroups.com
Hi,

Well, this depends. If V$ARCHIVE_DEST_STATUS has multiple rows that are LOCAL and VALID and you haven't set archive.destination.name, then it's a configuration issue. The documentation covers the side effects you can face when multiple log destinations are configured and you leave it to Debezium to pick one arbitrarily.
SELECT DEST_ID, DEST_NAME
  FROM V$ARCHIVE_DEST_STATUS
 WHERE TYPE = 'LOCAL'
   AND STATUS = 'VALID'
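If you do need to pin the connector to a single destination, it's one property in the connector configuration. The destination name below is illustrative; use the DEST_NAME value returned by the query above:

```
"archive.destination.name": "LOG_ARCHIVE_DEST_1"
```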

But if you have only one LOCAL and VALID destination, then the issue is on the Oracle side, and most likely the SCN we needed to read was in a log file that has since been deleted from disk while the Oracle performance views were still out of sync with the filesystem. Keeping those views in sync with the filesystem is something the DBA should make sure happens.
SELECT THREAD#, SEQUENCE#, FIRST_CHANGE#, NEXT_CHANGE#, BLOCKS,
       BLOCK_SIZE, CREATOR, REGISTRAR, ARCHIVED, COMPLETION_TIME,
       STATUS, ARCHIVAL_THREAD#
  FROM V$ARCHIVED_LOG
Have the DBA determine the log sequence number from the filename in the reported error, and check the values of the above query's columns for that redo thread and log sequence number. If the STATUS is `A` but the file does not exist at the path shown in the error, then it's because the log metadata is out of sync. But given we're looking at this some days later, it's also possible the DBA has since fixed this.
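For this thread, the file name itself encodes what to look for; assuming the T and S tokens in DEV_T1R904240267_S456820 are the redo thread and log sequence, the check narrows to:

```sql
-- Thread 1, sequence 456820 taken from the file name in the
-- ORA-00308 error (T1 = thread, S456820 = sequence).
SELECT NAME, STATUS, DELETED, ARCHIVED, COMPLETION_TIME
  FROM V$ARCHIVED_LOG
 WHERE THREAD# = 1
   AND SEQUENCE# = 456820;
```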

-cc


Vitor Pereira

Mar 9, 2026, 1:05:47 PM
to debezium
Hi, Chris

I've talked with my DBA and we have only one valid destination. We have a backup process that deletes archives older than 2 days. Is this a problem? Does the source connector need to have all the archives on disk?
The view returned by the query you provided makes it clear that the log metadata is in sync.

Besides that, I have two database instances, both with same configs and this only happens in one of them. This is my source connector configs for both databases:

        "database.dbname": "DEV",
        "database.server.name": "dev",
        "database.history.kafka.topic": "xxxx",
        "database.history.kafka.bootstrap.servers": "xxxx",
        "schema.history.internal.kafka.topic": "xxx",
        "schema.history.internal.kafka.bootstrap.servers": "xxxx",
        "schema.include.list": "SCHEMA1, DBZUSER",
        "table.include.list": "TABLES...",
        "column.exclude.list": "COLUMNS_TO_EXCLUDE",
        "datatype.propagate.source.type": "DATATYPES",
        "signal.data.collection": "xxxxxx",
        "schema.history.internal.store.only.captured.tables.ddl": "true",
        "schema.history.internal.store.only.captured.databases.ddl": "true",
        "heartbeat.interval.ms": "200000",
        "heartbeat.action.query": "INSERT INTO DBZUSER.HEARTBEAT_TABLE_ADM VALUES ('heartbeat')",
        "decimal.handling.mode": "precise",
        "snapshot.mode": "initial",
        "topic.prefix": "ADM_SBS_SCHEMAS",
        "lob.enabled": "false",
        "log.mining.strategy": "hybrid",
        "log.mining.transaction.retention.ms": "172800000",
        "log.mining.flush.table.name": "LOG_MINING_TABLE",
        "key.converter.schemas.enable": "true",
        "value.converter.schemas.enable": "true",
        "key.converter": "io.confluent.connect.avro.AvroConverter",
        "key.converter.schema.registry.url": "http://schema-registry:8081",
        "value.converter": "io.confluent.connect.avro.AvroConverter",
        "value.converter.schema.registry.url": "http://schema-registry:8081",
        "database.name": "DEV"
 

Best regards,
Vitor Pereira

Chris Cranford

Mar 10, 2026, 5:57:51 PM
to debezium
Hi Vitor 

Having a 48-hour retention on archive logs would only be an issue if the connector's offsets are not advancing for some reason, or if the connector is reading changes too slowly.
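One way to sanity-check that, as a sketch: compare the SCN stored in the connector's offsets (e.g. from the Kafka Connect offsets topic) against the oldest SCN still covered by archived logs on disk.

```sql
-- Oldest SCN still recoverable from non-deleted archived logs;
-- if the connector's offset SCN is older than this, the 2-day
-- purge has removed logs the connector still needs.
SELECT DEST_ID, MIN(FIRST_CHANGE#) AS OLDEST_SCN
  FROM V$ARCHIVED_LOG
 WHERE STATUS = 'A'
   AND DELETED = 'NO'
 GROUP BY DEST_ID;
```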

Now, there was a recent bug fix in 3.4.2 and 3.5 that addresses a corner case with LogMiner and rolled-back transactions. The issue was that LogMiner would provide transaction IDs without the transaction sequence/slot portions of the ID, which meant the ROLLBACK event would have a different XID value than its corresponding START and DML events. Such transactions would appear to be active, in-flight transactions that never ended, which could leave offsets stalled until the transaction retention time was reached.
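If you want to check whether offsets are being held back by a transaction that never appears to end, a query along these lines (standard V$TRANSACTION / V$SESSION columns; shown only as a sketch) lists open transactions with the full three-part XID:

```sql
-- Open transactions with the undo segment / slot / sequence
-- parts that together form the XID; a very old START_TIME here
-- can explain offsets that refuse to advance.
SELECT t.XIDUSN, t.XIDSLOT, t.XIDSQN, t.START_TIME, t.STATUS,
       s.SID, s.USERNAME
  FROM V$TRANSACTION t
  JOIN V$SESSION s ON s.TADDR = t.ADDR
 ORDER BY t.START_TIME;
```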

Have you updated to 3.4.2 yet?

Vitor Pereira

Mar 24, 2026, 6:23:31 AM
to debezium
Hi, Chris.

Hope you're doing well.

I've updated to 3.4.2 to test this issue and another issue has arisen. 

Redo Thread 1 is inconsistent; an archive log with sequence 82268 is not available

However, I've verified in the database that the archive log is there: the status is A and the sequence appears to be complete.

Could this be a bug introduced in this new version? If you need further details, please don't hesitate to ask. I will ask my DBA to look into this.

Best regards,
Vitor Pereira

Chris Cranford

Mar 24, 2026, 11:07:56 AM
to debe...@googlegroups.com
Hi,

A few things to also check:

    1. Does it have the same RESETLOGS_TIME and RESETLOGS_CHANGE# as in V$DATABASE?
    2. Is it in the right DEST_ID, the right THREAD#, and has a value in NAME?
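Both checks can be run in one query; the thread and sequence below are the ones from the reported error, and the column names follow the standard V$ARCHIVED_LOG / V$DATABASE views:

```sql
-- Compare the log's RESETLOGS values against V$DATABASE and
-- confirm DEST_ID, THREAD#, and a populated NAME in one pass.
SELECT l.DEST_ID, l.THREAD#, l.NAME,
       l.RESETLOGS_TIME, l.RESETLOGS_CHANGE#,
       d.RESETLOGS_TIME    AS DB_RESETLOGS_TIME,
       d.RESETLOGS_CHANGE# AS DB_RESETLOGS_CHANGE#
  FROM V$ARCHIVED_LOG l
 CROSS JOIN V$DATABASE d
 WHERE l.THREAD# = 1
   AND l.SEQUENCE# = 82268;
```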

Thanks,
-cc

Vitor Pereira

Mar 24, 2026, 12:10:37 PM
to debezium
Hi,

I validated the archive log entry and everything looks consistent:

- THREAD# = 1
- DEST_ID = 1 (LOG_ARCHIVE_DEST_1)
- STATUS = A
- NAME is populated and correct
- RESETLOGS_CHANGE# and RESETLOGS_TIME match V$DATABASE

So from the Oracle metadata perspective, the log is valid and consistent.

Best regards,
Vitor Pereira

Chris Cranford

Mar 24, 2026, 10:26:53 PM
to debe...@googlegroups.com
Hi, so in this case, this is not a bug but rather timing. 

There will be moments when Debezium fetches the log metadata and, simply because of when the query executes, Oracle does not yet know about the archive log: it hasn't been registered with the database control file yet, even though it is actively being archived. Once ARCn (the archiver process responsible for archiving redo logs) adds the row to the control file and it shows up in V$ARCHIVED_LOG, the inconsistency for that sequence clears and mining can proceed.
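A quick way to see this timing window for yourself, as a sketch using the standard V$LOG view: if the "missing" sequence still shows up as an online redo log, it simply hasn't been archived and registered yet.

```sql
-- A sequence that is CURRENT/ACTIVE here, or has ARCHIVED = 'NO',
-- has not yet been registered in V$ARCHIVED_LOG by the archiver.
SELECT THREAD#, SEQUENCE#, STATUS, ARCHIVED
  FROM V$LOG
 ORDER BY THREAD#, SEQUENCE#;
```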

This is why these are logged as INFO messages rather than ERROR messages: they just indicate that Debezium's mining will stall briefly while we wait for Oracle.

Thanks
-cc