debezium oracle logminer error


Arabay

May 3, 2023, 5:06:31 AM
to debezium
Hi,
After upgrading to version 2.2.0, my Oracle connector started raising these errors:

[2023-05-03 13:24:31,085] WARN [debezium-oracle-GOLDDB|task-0] Failed to start Oracle LogMiner session, retrying... (io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource:589)
[2023-05-03 13:24:31,122] ERROR [debezium-oracle-GOLDDB|task-0] Failed to start Oracle LogMiner after '5' attempts. (io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource:592)
java.sql.SQLException: ORA-01291: missing log file
ORA-06512: at "SYS.DBMS_LOGMNR", line 72
ORA-06512: at line 1
 
        at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:629)
        at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:563)
        at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:1150)
        at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:770)
        at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:298)
        at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:497)
        at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:111)
        at oracle.jdbc.driver.T4CStatement.executeForRows(T4CStatement.java:1010)
        at oracle.jdbc.driver.OracleStatement.executeSQLStatement(OracleStatement.java:1530)
        at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1310)
        at oracle.jdbc.driver.OracleStatement.executeInternal(OracleStatement.java:2162)
        at oracle.jdbc.driver.OracleStatement.execute(OracleStatement.java:2117)
        at oracle.jdbc.driver.OracleStatementWrapper.execute(OracleStatementWrapper.java:327)
        at io.debezium.jdbc.JdbcConnection.executeWithoutCommitting(JdbcConnection.java:1448)
        at io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource.startMiningSession(LogMinerStreamingChangeEventSource.java:582)
        at io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource.execute(LogMinerStreamingChangeEventSource.java:208)
        at io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource.execute(LogMinerStreamingChangeEventSource.java:60)
        at io.debezium.pipeline.ChangeEventSourceCoordinator.streamEvents(ChangeEventSourceCoordinator.java:174)
        at io.debezium.pipeline.ChangeEventSourceCoordinator.executeChangeEventSources(ChangeEventSourceCoordinator.java:141)
        at io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:109)
        at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
        at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: Error : 1291, Position : 0, Sql = BEGIN sys.dbms_logmnr.start_logmnr(startScn => '7440507009176', endScn => '7440507579175', OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG  + DBMS_LOGMNR.NO_ROWID_IN_
ORA-06512: at "SYS.DBMS_LOGMNR", line 72
ORA-06512: at line 1

        at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:636)
        ... 24 more
[2023-05-03 13:24:31,123] ERROR [debezium-oracle-GOLDDB|task-0] Got exception when starting mining session. (io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource:594)
java.sql.SQLException: ORA-01291: missing log file
ORA-06512: at "SYS.DBMS_LOGMNR", line 72
ORA-06512: at line 1

My connector is configured like this:
"connector.class" : "io.debezium.connector.oracle.OracleConnector",
"database.user" : "${file:/home/debezium/kafka-connect/credentials.ini:GOLD_DBZUSER}",
"database.password" : "${file:/home/debezium/kafka-connect/credentials.ini:GOLD_DBZUSER_PASS}",
"database.dbname" : "GOLDDB",
"database.server.name" : "GOLDDB",
"database.url" : "${file:/home/debezium/kafka-connect/credentials.ini:GOLD_JDBC}",
"skipped.operations" : "t",
"signal.data.collection" : "GOLDDB.DBZ_ORACLE.DEBEZIUM_SIGNAL",
"schema.name.adjustment.mode" : "avro",

"schema.history.internal.kafka.bootstrap.servers" : "kafka1:9092,kafka2:9092,kafka3:9092",
"schema.history.internal.kafka.topic": "schema-changes.GOLDDB",
"schema.history.internal.skip.unparseable.ddl" : "true",
"schema.history.internal.store.only.captured.tables.ddl" : "true",
"snapshot.mode" : "schema_only",
"schema.include.list" : [schema],
"table.include.list" : [11 tables here],
"query.fetch.size" : 25000,
"scan.startup.mode" : "latest-offset",
"log.mining.strategy" : "online_catalog",
"log.mining.batch.size.min" : 10000,
"log.mining.batch.size.max" : 1000000,
"log.mining.batch.size.default" : 150000,

"log.mining.sleep.time.default": 200,
"log.mining.sleep.time.min": 0,
"log.mining.sleep.time.max": 1000,
"max.batch.size" : 30000,
"max.queue.size" : 50000,

"key.converter": "io.confluent.connect.avro.AvroConverter",
"key.converter.schema.registry.url": "${file:/home/debezium/kafka-connect/credentials.ini:SCHEMA_REGISTRY_URL}",
"value.converter": "io.confluent.connect.avro.AvroConverter",
"value.converter.schema.registry.url": "${file:/home/debezium/kafka-connect/credentials.ini:SCHEMA_REGISTRY_URL}",
"tasks.max": "1"

Chris Cranford

May 5, 2023, 8:34:13 AM
to debe...@googlegroups.com
Hi Arabay -

Is it possible that the archive log the connector attempted to read was removed from the file system? You can enable TRACE logging; we report which logs we attempt to mine, and you can cross-check that against the file system. Oftentimes we see situations where DBAs use scripts to remove the files from the filesystem but do not use RMAN to clean up the V$ARCHIVED_LOG metadata. This can lead to situations where we believe a file is available, but when LogMiner starts, it detects the file is missing and raises this error.
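As a rough sketch of that cross-check (assuming a user with access to the V$ views), the following query lists the archive logs Oracle's metadata still considers present, whose NAME paths can then be compared against the actual files on disk:

```sql
-- Archive logs that Oracle metadata still believes are on disk.
-- Rows whose files are gone from the filesystem but still show
-- DELETED = 'NO' are the stale metadata described above; resyncing
-- is done in RMAN with: CROSSCHECK ARCHIVELOG ALL;
SELECT name, thread#, sequence#, first_change#, next_change#, status, deleted
FROM   v$archived_log
WHERE  deleted = 'NO'
ORDER  BY sequence#;
```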

Unfortunately, to avoid data loss, you will need to clear the offsets & schema history followed by taking a new snapshot.

Hope that helps.
Chris
--
You received this message because you are subscribed to the Google Groups "debezium" group.
To unsubscribe from this group and stop receiving emails from it, send an email to debezium+u...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/debezium/c3d90a0c-c7d3-47e7-8f72-73a3b119d768n%40googlegroups.com.

Chris Cranford

May 5, 2023, 8:46:48 AM
to debe...@googlegroups.com
To follow-up on this, I've logged https://issues.redhat.com/browse/DBZ-6436.

I believe it would be helpful for you and others moving forward if, when ORA-01291 occurs, we provided as much information as possible about the log that caused the failure. This will allow you to quickly reach out to your DBA with the information they need to help you understand the root cause of the problem.

Thanks,
Chris

WG He

Nov 3, 2023, 3:41:08 AM
to debezium
I also encountered this problem:

  2023-11-03 13:37:44.675 WARN [debezium-oracleconnector-oracle_logminer-change-event-source-coordinator] io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource : Failed to start Oracle LogMiner session, retrying...
2023-11-03 13:37:45.286 ERROR[debezium-oracleconnector-oracle_logminer-change-event-source-coordinator] io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource : Failed to start Oracle LogMiner after '5' attempts.

java.sql.SQLException: ORA-01291: missing log file
ORA-06512: at "SYS.DBMS_LOGMNR", line 72
ORA-06512: at line 1

at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:509) ~[com.oracle.database.jdbc-ojdbc8-19.3.0.0.jar!/:19.3.0.0.0]
at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:461) ~[com.oracle.database.jdbc-ojdbc8-19.3.0.0.jar!/:19.3.0.0.0]  



In addition, I clean up the archive logs by executing these commands from the RMAN tool:
------------------------------------------------------------
Crosscheck archivelog all;
Delete noprompt archivelog until time 'sysdate -1';
------------------------------------------------------------

With the commands above, archived logs are retained for one day. Even so, while inserting, deleting, and updating the tables within that one-day window, I still ran into the missing-log issue described above.
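One way to check whether the one-day retention window actually covers the connector's restart position is to compare the SCN the connector resumes from (stored in its offsets) against the oldest SCN still covered by a retained archive log. A hedged sketch:

```sql
-- Oldest SCN still covered by a retained, available archive log.
-- If the connector's resume SCN is older than this value, LogMiner
-- will request a log that has already been deleted and fail with
-- ORA-01291.
SELECT MIN(first_change#) AS oldest_available_scn
FROM   v$archived_log
WHERE  deleted = 'NO'
  AND  status = 'A';
```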

May I ask how to solve this problem?

Chris Cranford

Nov 3, 2023, 9:31:21 AM
to debe...@googlegroups.com
Hi -

Is your installation standalone or RAC? I would also recommend upgrading to 2.4, as there have been some advancements in how we manage logs, particularly if you're on Oracle Real Application Clusters (RAC).

Thanks,
Chris

WG He

Nov 3, 2023, 9:41:43 PM
to debezium
Standalone, using flink-connector-oracle-cdc:2.4.1.

After starting the Flink CDC Oracle capture service, I found that CPU usage stays consistently high, reaching 100% load. In addition, archive logs retain only one day of data, about 70 GB, and capture lags the source by roughly 30 minutes. How can I reduce the capture latency and the high CPU load?
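The log.mining.* knobs shown in the connector config earlier in this thread are the usual levers here; with the Flink CDC connector, Debezium options can be passed through with a "debezium." prefix in the table's WITH clause. The values below are illustrative starting points only, not tuned recommendations:

```sql
-- Hedged sketch: larger mining windows let a lagging connector catch
-- up in fewer sessions; longer idle sleeps reduce CPU burn when the
-- database is quiet. Values are assumptions to be validated per system.
'debezium.log.mining.batch.size.max'    = '5000000',
'debezium.log.mining.sleep.time.max.ms' = '3000',
'debezium.log.mining.strategy'          = 'online_catalog'
```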
[Attachment: data-sync-hight-cpu.png]