Slow data reading. Error after logs are deleted.

Vladislav P

Nov 14, 2025, 6:17:08 AM
to debezium
Hi,

We're migrating data from Oracle to PostgreSQL.
I'd appreciate some optimization advice.  
I'm using the quay.io/debezium/connect:3.3.1.Final image.  
I'm running the connector in snapshot.mode: no_data. The migration involves 28 tables, with real-time data flowing for about 10 tables. Over 10 hours, 4 of them accumulated over 1 million records each.  
After that, I start incremental snapshots for tables that don't change frequently.  
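
For context, I trigger each incremental snapshot by inserting a row into the signal table, roughly like this (a minimal sketch: the id value and DBNAME.SCHEMA.MY_TABLE are placeholders, and the data-collections entries should use the same fully-qualified format as table.include.list):

-- Sketch only: 'adhoc-snapshot-1' and DBNAME.SCHEMA.MY_TABLE are placeholders.
-- The signal table matches signal.data.collection in my connector config below.
INSERT INTO SCHEMA.DEBEZIUM_SIGNAL (id, type, data)
VALUES ('adhoc-snapshot-1',
        'execute-snapshot',
        '{"data-collections": ["DBNAME.SCHEMA.MY_TABLE"], "type": "incremental"}');
COMMIT;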

1) According to the logs, a signal inserted into the signal table is only picked up after about 30 minutes.  
2) In 10 hours, the incremental snapshots only managed to migrate 2 tables: the first had 169 records, the second 91. It's almost laughable.  
3) The Oracle database is configured to delete archived logs older than 6 hours. The connector couldn't read all the required logs before they were deleted, and it crashed. The lagFromSourceDuration was 7+ hours.  

How can this issue be resolved?
If the connector has such a large "lagFromSourceDuration", does it also deliver data with the same huge delay? We just need to receive data quickly!

LOG:
2025-11-14T09:51:22,861 ERROR  Oracle||streaming  LogMiner session stopped due to an error.   [io.debezium.connector.oracle.logminer.AbstractLogMinerStreamingChangeEventSource]
java.sql.SQLException: ORA-00308: cannot open archived log '+FRA/RCDB/ARCHIVELOG/2025_11_14/thread_1_seq_517133.864.1217137965'
ORA-17503: ksfdopn:2 Failed to open file +FRA/RCDB/ARCHIVELOG/2025_11_14/thread_1_seq_517133.864.1217137965
ORA-15012: ASM file '+FRA/RCDB/ARCHIVELOG/2025_11_14/thread_1_seq_517133.864.1217137965' does not exist

        at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:630) ~[ojdbc11-21.15.0.0.jar:21.15.0.0.0]
        at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:564) ~[ojdbc11-21.15.0.0.jar:21.15.0.0.0]
        at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:1231) ~[ojdbc11-21.15.0.0.jar:21.15.0.0.0]
        at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:772) ~[ojdbc11-21.15.0.0.jar:21.15.0.0.0]
        at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:299) ~[ojdbc11-21.15.0.0.jar:21.15.0.0.0]
        at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:512) ~[ojdbc11-21.15.0.0.jar:21.15.0.0.0]
        at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:163) ~[ojdbc11-21.15.0.0.jar:21.15.0.0.0]
        at oracle.jdbc.driver.T4CPreparedStatement.executeForDescribe(T4CPreparedStatement.java:1010) ~[ojdbc11-21.15.0.0.jar:21.15.0.0.0]
        at oracle.jdbc.driver.OracleStatement.prepareDefineBufferAndExecute(OracleStatement.java:1271) ~[ojdbc11-21.15.0.0.jar:21.15.0.0.0]
        at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:1149) ~[ojdbc11-21.15.0.0.jar:21.15.0.0.0]
        at oracle.jdbc.driver.OracleStatement.executeSQLSelect(OracleStatement.java:1661) ~[ojdbc11-21.15.0.0.jar:21.15.0.0.0]
        at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1470) ~[ojdbc11-21.15.0.0.jar:21.15.0.0.0]
        at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3761) ~[ojdbc11-21.15.0.0.jar:21.15.0.0.0]
        at oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:3936) ~[ojdbc11-21.15.0.0.jar:21.15.0.0.0]
        at oracle.jdbc.driver.OraclePreparedStatementWrapper.executeQuery(OraclePreparedStatementWrapper.java:1102) ~[ojdbc11-21.15.0.0.jar:21.15.0.0.0]
        at io.debezium.connector.oracle.logminer.AbstractLogMinerStreamingChangeEventSource.executeAndProcessQuery(AbstractLogMinerStreamingChangeEventSource.java:394) ~[debezium-connector-oracle-3.3.1.Final.jar:3.3.1.Final]
        at io.debezium.connector.oracle.logminer.buffered.BufferedLogMinerStreamingChangeEventSource.process(BufferedLogMinerStreamingChangeEventSource.java:243) ~[debezium-connector-oracle-3.3.1.Final.jar:3.3.1.Final]
        at io.debezium.connector.oracle.logminer.buffered.BufferedLogMinerStreamingChangeEventSource.executeLogMiningStreaming(BufferedLogMinerStreamingChangeEventSource.java:156) ~[debezium-connector-oracle-3.3.1.Final.jar:3.3.1.Final]
        at io.debezium.connector.oracle.logminer.AbstractLogMinerStreamingChangeEventSource.execute(AbstractLogMinerStreamingChangeEventSource.java:212) ~[debezium-connector-oracle-3.3.1.Final.jar:3.3.1.Final]
        at io.debezium.connector.oracle.logminer.AbstractLogMinerStreamingChangeEventSource.execute(AbstractLogMinerStreamingChangeEventSource.java:88) ~[debezium-connector-oracle-3.3.1.Final.jar:3.3.1.Final]
        at io.debezium.pipeline.ChangeEventSourceCoordinator.streamEvents(ChangeEventSourceCoordinator.java:329) ~[debezium-core-3.3.1.Final.jar:3.3.1.Final]
        at io.debezium.pipeline.ChangeEventSourceCoordinator.executeChangeEventSources(ChangeEventSourceCoordinator.java:207) ~[debezium-core-3.3.1.Final.jar:3.3.1.Final]
        at io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:147) ~[debezium-core-3.3.1.Final.jar:3.3.1.Final]
        at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:572) ~[?:?]
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317) ~[?:?]
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) ~[?:?]
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ~[?:?]
        at java.base/java.lang.Thread.run(Thread.java:1583) [?:?]
Caused by: oracle.jdbc.OracleDatabaseException: ORA-00308: cannot open archived log '+FRA/RCDB/ARCHIVELOG/2025_11_14/thread_1_seq_517133.864.1217137965'
ORA-17503: ksfdopn:2 Failed to open file +FRA/RCDB/ARCHIVELOG/2025_11_14/thread_1_seq_517133.864.1217137965
ORA-15012: ASM file '+FRA/RCDB/ARCHIVELOG/2025_11_14/thread_1_seq_517133.864.1217137965' does not exist

        at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:637) ~[ojdbc11-21.15.0.0.jar:21.15.0.0.0]
        ... 27 more

2025-11-14T09:51:22,864 INFO   Oracle||streaming  Streaming metrics at shutdown: LogMinerStreamingChangeEventSourceMetrics{connectorConfig=io.debezium.connector.oracle.OracleConnectorConfig@442f894e, startTime=2025-11-13T22:27:40.114458957Z, clock=SystemClock[Z], currentScn=6340736960421, offsetScn=6340196619870, commitScn=6340199647966, oldestScn=6340196619871, oldestScnTime=2025-11-14T02:38:55.371Z, currentLogFileNames=[+FRA/RAC_RCDB/redo_2_15.log, +FRA/RAC_RCDB/redo_1_05.log], redoLogStatuses=[+FRA/RAC_RCDB/redo_2_14.log | ACTIVE, +FRA/RAC_RCDB/redo_1_04.log | ACTIVE, +FRA/RAC_RCDB/redo_1_03.log | ACTIVE, +FRA/RAC_RCDB/redo_2_15.log | CURRENT, +FRA/RAC_RCDB/redo_1_05.log | CURRENT, +FRA/RAC_RCDB/redo_2_13.log | INACTIVE, +FRA/RAC_RCDB/redo_1_02.log | INACTIVE, +FRA/RAC_RCDB/redo_2_11.log | INACTIVE, +FRA/RAC_RCDB/redo_2_08.log | INACTIVE, +FRA/RAC_RCDB/redo_1_01.log | INACTIVE, +FRA/RAC_RCDB/redo_2_12.log | INACTIVE], databaseZoneOffset=+03:00, batchSize=100000, logSwitchCount=233, logMinerQueryCount=3478, sleepTime=1000, minimumLogsMined=2, maximumLogsMined=144, maxBatchProcessingThroughput=24127, timeDifference=-70630, processedRowsCount=614037971, activeTransactionCount=7, rolledBackTransactionCount=1118861, oversizedTransactionCount=0, changesCount=51466436, scnFreezeCount=0, batchProcessingDuration={min=PT1.580171714S,max=PT49.077103069S,total=PT10H25M6.311171188S}, fetchQueryDuration={min=PT0.511166137S,max=PT4.015707933S,total=PT1H57M1.930050933S}, commitDuration={min=PT0.000000502S,max=PT0.377413689S,total=PT2M14.507328114S}, lagFromSourceDuration={min=PT4.985065378S,max=PT7H12M30.801846661S,total=PT254148427H41M13.987350505S}, miningSessionStartupDuration={min=PT0.002450002S,max=PT0.011668767S,total=PT13.482654438S}, parseTimeDuration={min=PT0.000018657S,max=PT0.016765153S,total=PT15M42.93421279S}, resultSetNextDuration={min=PT0.000000222S,max=PT4.33277465S,total=PT7H12M51.079128216S}, userGlobalAreaMemory={value=71538072,max=93822968}, processGlobalAreaMemory={value=78832640,max=143975424}, abandonedTransactionIds=[], rolledBackTransactionIds=[20010800ed762201, 23011500adea1a01, 3d001900f1fc7f01, b8001300dc53f000, 2f0017005f2d9f01, 25000b00c7d6d001, 350020004c5a8e01, c6001f007d31f800, 42001200d09f0401, 510003007fee2101]}    [io.debezium.connector.oracle.logminer.AbstractLogMinerStreamingChangeEventSource]
2025-11-14T09:51:22,900 INFO   Oracle||streaming  Offsets as shutdown: OracleOffsetContext [scn=6340196619870, txId=3e000900bc440601, txSeq=2, commit_scn=["6340199647966:1:48001c00d689c600","6340199647963:2:650012007aaad800"], lcr_position=null]   [io.debezium.connector.oracle.logminer.AbstractLogMinerStreamingChangeEventSource]
2025-11-14T09:51:22,902 INFO   Oracle||streaming  Finished streaming   [io.debezium.pipeline.ChangeEventSourceCoordinator]
2025-11-14T09:51:22,902 INFO   Oracle||streaming  Connected metrics set to 'false'   [io.debezium.pipeline.ChangeEventSourceCoordinator]

Vladislav P

Nov 14, 2025, 6:26:47 AM
to debezium
source-connector:

{
  "name": "source-connector-avro",
  "config": {
    "connector.class": "io.debezium.connector.oracle.OracleConnector",
    "tasks.max": "1",
    "database.hostname": "{{sourceDatabaseHost}}",
    "database.port": "{{sourceDatabasePort}}",
    "database.user": "{{sourceDatabaseUser}}",
    "database.password": "{{sourceDatabasePassword}}",
    "database.dbname": "{{sourceDatabaseName}}",
    "table.include.list": "...",
    "column.include.list": "...",
    "topic.prefix": "{{topicPrefix}}",
    "database.server.name": "{{topicPrefix}}",
    "schema.history.internal.kafka.topic": "dbz_oracle_wpms_history",
    "schema.history.internal.kafka.bootstrap.servers": "{{kafkaBootstrapServers}}",
    "log.mining.strategy": "hybrid",
    "log.mining.query.filter.mode": "in",

    "message.key.columns": "...",

    "signal.enable.channels": "source",
    "signal.data.collection": "{{sourceDatabaseName}}.SCHEMA.DEBEZIUM_SIGNAL",
"incremental.snapshot.chunk.size": 50000,
    "incremental.snapshot.allow.schema.changes": "true",
    "topic.creation.enable": "true",
    "topic.creation.default.replication.factor": 1,
    "topic.creation.default.partitions": 1,
    "topic.creation.default.cleanup.policy": "delete",

    "snapshot.mode": "no_data",
    "log.mining.transaction.retention.ms": "10800000",
    "schema.history.internal.store.only.captured.tables.ddl": "true",
    "snapshot.database.errors.max.retries": 2,
    "internal.log.mining.log.query.max.retries": 15,

    "notification.enabled.channels": "sink,jmx,log",
    "notification.sink.topic.name": "debezium_notifications",

    "key.converter": "io.apicurio.registry.utils.converter.AvroConverter",
    "key.converter.apicurio.registry.url": "{{apicurioRegistryUrl}}",
    "key.converter.apicurio.registry.auto-register": "true",
    "key.converter.apicurio.registry.find-latest": "true",
    "key.converter.schemas.enable": "false",
    "key.converter.apicurio.registry.headers.enabled": "false",
    "key.converter.apicurio.registry.as-confluent": "true",
    "key.converter.apicurio.use-id": "contentId",

    "value.converter": "io.apicurio.registry.utils.converter.AvroConverter",
    "value.converter.apicurio.registry.url": "{{apicurioRegistryUrl}}",
    "value.converter.apicurio.registry.auto-register": "true",
    "value.converter.apicurio.registry.find-latest": "true",
    "value.converter.schemas.enable": "false",
    "value.converter.apicurio.registry.headers.enabled": "false",
    "value.converter.apicurio.registry.as-confluent": "true",
    "value.converter.apicurio.use-id": "contentId",
    "schema.name.adjustment.mode": "avro",

    "header.converter": "org.apache.kafka.connect.json.JsonConverter",
    "header.converter.schemas.enable": "true",

    "heartbeat.interval.ms": "10000",
    "heartbeat.action.query": "MERGE INTO SCHEMA.DEBEZIUM_HEARTBEAT t USING (SELECT 1 id, CURRENT_TIMESTAMP ts FROM dual) s ON (t.id = s.id) WHEN MATCHED THEN UPDATE SET t.ts = s.ts WHEN NOT MATCHED THEN INSERT (id, ts) VALUES (s.id, s.ts)"
  }
}

Vladislav P

Nov 14, 2025, 8:09:41 AM
to debezium
I would like to clarify.
If some tables don't change frequently, can I run a connector with "snapshot.mode": "initial_only" for them, and after that run "snapshot.mode": "no_data" for all the tables I need?
Is there a guarantee that no data will be lost with such an approach (for the tables that don't change frequently)?

Chris Cranford

Nov 15, 2025, 2:06:37 AM
to debe...@googlegroups.com
Hi Vlad,

You can do this, but it requires a very specific setup.
1. Deploy your original source connector with "no_data" and your desired topic.prefix.
2. Once data begins to stream, undeploy the connector but do not delete the offsets/history topic.
3. Deploy a new connector with the following differences (sketched below):
    - Uses a new name
    - Snapshot mode is set to initial_only
    - You can safely use the same topic.prefix
    - Uses a completely separate schema history topic (this will be thrown away)
    - Sets include.schema.changes to false, so this temporary connector, which uses the same prefix, doesn't write to the public-facing schema changes topic
4. When the snapshot finishes, undeploy this temporary connector and remove its schema history topic
5. Redeploy the original connector that had no_data
Just be sure that archive logs are retained during this process. If the initial snapshot takes longer than your log retention, then your original connector will not be able to resume after-the-fact.
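
As a rough sketch, the temporary connector's config would differ from yours only in these fields (the name and history topic values here are illustrative; everything else stays exactly as in your original config):

{
  "name": "source-connector-avro-snapshot-temp",
  "config": {
    "snapshot.mode": "initial_only",
    "include.schema.changes": "false",
    "schema.history.internal.kafka.topic": "dbz_oracle_wpms_history_temp",
    "topic.prefix": "{{topicPrefix}}",
    ... all other settings identical to the original connector ...
  }
}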

In terms of the performance issue, this is related to DBZ-8747. I am actively thinking about what we can change to address this, because going back introduces other inconsistencies and potential data loss, and the changes added in DBZ-8747 are designed specifically to force LogMiner to avoid this issue. We did introduce DBZ-9664, which may help some, but it does not solve the entire problem, particularly if you have users who are creating long-running transactions. Avoiding long-running transactions solves the issue, and then the fix in DBZ-8747 won't be as severe.

You may also want to consider setting "log.mining.transaction.retention.ms" to discard transactions that run longer than a specific time period, or skipping transactions created by specific Oracle users by setting "log.mining.username.exclude.list" with those database usernames in uppercase.
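
For example (a sketch; the usernames are placeholders, and since your config already sets the retention to 10800000, i.e. 3 hours, you would lower it to whatever window fits your workload):

    "log.mining.transaction.retention.ms": "3600000",
    "log.mining.username.exclude.list": "BATCH_USER,REPORTING_USER"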

Thanks,
-cc

Vladislav P

Nov 16, 2025, 3:15:11 PM
to debezium
Hi Chris, thanks for your reply.
I have a few more questions to understand how Debezium works:
1) If we run the connector (LogMiner) in "no_data" mode, it means we are reading all the information from the redo and archive logs. And if we run the connector in "initial_only" mode, does it read the data directly through SQL? Is that why the historical data is still readable even when the redo and archive logs are deleted, because it is taken directly from the tables?
2) If I see the metric lagFromSourceDuration with a max of 12 hours, does it mean that the changes currently happening in Oracle will arrive in PostgreSQL only after 12 hours?

Chris Cranford

Nov 20, 2025, 10:25:51 AM
to debe...@googlegroups.com
Hi -

For (1), that's mostly correct, but there is a small caveat. Yes, you still have the historical data, but only the current state of each row, not the granular history of how a row changed N times in the window where you deleted the redo/archive logs. But can retaking a snapshot get you back to a "consistent" state? Absolutely.

For (2), yes, it means the connector is reading the changes from Oracle that happened 12 hours ago.

-cc