Oracle connector is not retrieving new data


Lucas Rangel Gazire

Aug 28, 2023, 1:49:51 PM
to debezium
Hi Dbz team!

I just created an embedded connector for Oracle and set the snapshot mode to schema_only. After waiting for the topic to be created, I added a new row but didn't receive the event notification. If I set the mode to initial, I get all the data just fine. Is there something else that needs to be done?

This is my current configuration:

"snapshot.locking.mode" -> "none"
"connector.class" -> "io.debezium.connector.oracle.OracleConnector"
"topic.creation.default.partitions" -> "2"
"incremental.snapshot.chunk.size" -> "10000"
"bootstrap.servers" -> "server"
"internal.log.mining.read.only" -> "true"
"schema.history.internal.store.only.captured.tables.ddl" -> "true"
"include.schema.changes" -> "false"
"topic.prefix" -> "mfi.debezium"
"schema.history.internal.kafka.topic" -> "DB_SCHEMA_HISTORY"
"offset.storage.partitions" -> "2"
"topic.creation.default.replication.factor" -> "1"
"offset.storage.topic" -> "OFFSET_TOPIC"
"log.mining.archive.log.only.mode" -> "true"
"database.user" -> "user"
"database.dbname" -> "dbname"
"offset.storage" -> "org.apache.kafka.connect.storage.KafkaOffsetBackingStore"
"schema.history.internal.kafka.bootstrap.servers" -> "server"
"snapshot.max.threads" -> "4"
"log.mining.read.only" -> "true"
"database.port" -> "port"
"database.hostname" -> "hostname"
"database.password" -> "xxxxxxxx"
"name" -> "TEST"
"offset.storage.replication.factor" -> "1"
"table.include.list" -> "XXXX.XXXX"
"snapshot.mode" -> "schema_only"



In the log I saw the following events:

Snapshot ended with SnapshotResult [status=COMPLETED, offset=OracleOffsetContext [scn=111304321401, commit_scn=[]]]
Connected metrics set to 'true'
Starting streaming
Redo Log Group Sizes:
Group #55: 5368709120 bytes
Group #56: 5368709120 bytes
Group #57: 5368709120 bytes
Group #58: 5368709120 bytes
Group #59: 5368709120 bytes
Starting SCN 111304321401 is not yet in archive logs, waiting for archive log switch.
Starting SCN 111304321401 is now available in archive logs, log mining unpaused.

Chris Cranford

Aug 29, 2023, 3:15:20 AM
to debe...@googlegroups.com
Hi Lucas -

As you pointed out, your configuration is using the archive-log-only mode, an operational mode that only mines changes from the archive logs.  Archive logs are transaction redo logs that have become full and no longer represent the online redo log files being used for recovery by the database.  Depending on the volume of changes in the system, archive logs may only be written every several minutes, so you will naturally see a stall on change events while the connector waits for a new archive log to be written.  When making changes in Oracle, changes are first written to the online redo logs, and only once those log files have filled are they moved to the archive logs.  This explains why changes take time to be captured by the connector, as well as the log entries:

Starting SCN 111304321401 is not yet in archive logs, waiting for archive log switch.
Starting SCN 111304321401 is now available in archive logs, log mining unpaused.
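If you want to exercise this mode without waiting for the database's natural log rotation, a log switch can be forced on the Oracle side. This is standard Oracle administration (requiring the ALTER SYSTEM privilege), not a Debezium feature:

```
-- Force the current online redo log to be archived so recent changes
-- become visible to archive-log-only mining:
ALTER SYSTEM SWITCH LOGFILE;

-- Or archive the current log on all instances (e.g. RAC) and wait
-- for the archiving to complete:
ALTER SYSTEM ARCHIVE LOG CURRENT;
```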

Are you capturing changes from a live production system or a standby production system?

Thanks,
Chris

Lucas Rangel Gazire

Aug 29, 2023, 9:23:50 AM
to debezium
Thanks for the reply. Indeed that configuration was wrong; I removed it, but I'm still not receiving any updates.
I'm trying to capture from the read-only database first; if I can confirm that no latency is being added, I will move on to the live data.

Chris Cranford

Aug 30, 2023, 9:23:48 AM
to debe...@googlegroups.com
Hi Lucas,

Can you share your new configuration after the adjustments, so we can confirm we're on the same page?

Thanks,
Chris

Lucas Rangel Gazire

Aug 30, 2023, 9:55:09 AM
to debezium
Hi Chris,

this is my new configuration:

"snapshot.locking.mode" -> "none"
"connector.class" -> "io.debezium.connector.oracle.OracleConnector"
"topic.creation.default.partitions" -> "2"
"query.fetch.size" -> "10000"
"log.mining.archive.destination.name" -> "LOG_ARCHIVE_DEST_1"
"bootstrap.servers" -> "localhost:9092"
"include.schema.changes" -> "false"
"schema.history.internal.store.only.captured.tables.ddl" -> "true"
"topic.prefix" -> "mfi.debezium"
"schema.history.internal.kafka.topic" -> "SCHEMA_HISTORY"
"offset.storage.partitions" -> "2"
"topic.creation.default.replication.factor" -> "1"
"offset.storage.topic" -> "OFFSET"
"database.dbname" -> "dbname"
"database.user" -> "user"
"offset.storage" -> "org.apache.kafka.connect.storage.KafkaOffsetBackingStore"
"log.mining.batch.size.max" -> "10000000"
"schema.history.internal.kafka.bootstrap.servers" -> "localhost:9092"
"snapshot.max.threads" -> "8"
"log.mining.read.only" -> "true"
"database.port" -> "1521"
"database.hostname" -> "hostname"
"log.mining.query.filter.mode" -> "in"
"database.password" -> "debezium"
"log.mining.batch.size.min" -> "10000"
"log.mining.batch.size.default" -> "200000"
"name" -> "NAME"
"offset.storage.replication.factor" -> "1"
"snapshot.mode" -> "schema_only"

Chris Cranford

Aug 30, 2023, 12:16:13 PM
to debe...@googlegroups.com
Hi Lucas,

I don't see a "table.include.list" being specified. I would strongly suggest specifying the tables to capture; otherwise settings like "log.mining.query.filter.mode=in" are quite useless.  Additionally, I would recommend not raising "log.mining.batch.size.max" to 10M, as this can lead to unexpected and potentially fatal errors: running out of SGA memory, or Oracle killing the LogMiner process if it consumes too much SGA space due to such large batches.

Thanks,
Chris

Lucas Rangel Gazire

Aug 30, 2023, 3:11:54 PM
to debezium
Hi Chris,

I removed the batch size setting; also, I had accidentally dropped table.include.list from the copied values, sorry:

"snapshot.locking.mode" -> "none"
"connector.class" -> "io.debezium.connector.oracle.OracleConnector"
"topic.creation.default.partitions" -> "2"
"query.fetch.size" -> "10000"
"log.mining.archive.destination.name" -> "LOG_ARCHIVE_DEST_1"
"bootstrap.servers" -> "localhost:9092"
"include.schema.changes" -> "false"
"schema.history.internal.store.only.captured.tables.ddl" -> "true"
"topic.prefix" -> "mfi.debezium"
"schema.history.internal.kafka.topic" -> "SCHEMA_HISTORY"
"offset.storage.partitions" -> "2"
"topic.creation.default.replication.factor" -> "1"
"offset.storage.topic" -> "OFFSET"
"database.dbname" -> "dbname"
"database.user" -> "user"
"offset.storage" -> "org.apache.kafka.connect.storage.KafkaOffsetBackingStore"
"schema.history.internal.kafka.bootstrap.servers" -> "localhost:9092"
"snapshot.max.threads" -> "8"
"log.mining.read.only" -> "true"
"database.port" -> "1521"
"database.hostname" -> "hostname"
"log.mining.query.filter.mode" -> "in"
"database.password" -> "debezium"
"name" -> "name"
"offset.storage.replication.factor" -> "1"
"table.include.list" -> "SCHEMA.TABLE"
"snapshot.mode" -> "schema_only"