Hi Chris,
Thank you for the detailed explanation.
I have upgraded my Debezium to version 2.3 in order to utilize the surrogate-key feature.
Here is the insert query that I am attempting:
insert into testDB.dbo.debezium_signal(id, type, data) values ('ad-hoc-test', 'execute-snapshot', '{"data-collections":["testDB.dbo.a"],"type":"incremental","surrogate-key":"id"}')
The 'id' column of testDB.dbo.a is numeric, and its values are unique.
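For context, the signaling table was created following the layout suggested by the Debezium documentation (a sketch; the column sizes are illustrative, not my exact DDL):

CREATE TABLE testDB.dbo.debezium_signal (
    id   VARCHAR(42) PRIMARY KEY,  -- arbitrary unique identifier for the signal
    type VARCHAR(32) NOT NULL,     -- signal type, e.g. 'execute-snapshot'
    data VARCHAR(2048) NULL        -- JSON payload with the snapshot parameters
);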
The query ran successfully, but I am encountering the same problem:
2023-09-08 12:39:52,287 INFO SQL_Server|server1|streaming Requested 'INCREMENTAL' snapshot of data collections '[testDB.dbo.a]' with additional condition 'No condition passed' [io.debezium.pipeline.signal.ExecuteSnapshot]
2023-09-08 12:39:52,298 WARN SQL_Server|server1|streaming Incremental snapshot for table 'testDB.dbo.a' was skipped because the table has no primary keys [io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotChangeEventSource]
Could you please help me or suggest what I might be doing wrong? Any comments on this issue would be greatly appreciated.
Thanks, Amit
Hi Chris,
Thank you for getting back to me quickly and clarifying the expected log message. I appreciate your attention to detail.
I cross-checked and reinstalled Debezium 2.3, and now the incremental snapshot with the surrogate key is working fine.
However, I have encountered another issue. After the incremental snapshot, I observed that only one record was present in Kafka, even though the source table contains 8 rows.
I would greatly appreciate any insights or guidance you can offer to help resolve this issue. If you need any additional information or logs, please let me know.
Thanks
~ Amit
Hi Chris,
Thank you for your help and response.
Yes, the surrogate key (id) is unique for all 8 rows, and they have integer values like 3, 11, 12, 13, 14, 9, 10, 15.
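For what it's worth, a sanity check along these lines (a sketch against my table) confirms that the surrogate key is unique and has no NULLs:

-- should return no rows if 'id' is truly unique
SELECT id, COUNT(*) AS cnt
FROM testDB.dbo.a
GROUP BY id
HAVING COUNT(*) > 1;

-- should return 0 if 'id' has no NULLs
SELECT COUNT(*) AS null_ids FROM testDB.dbo.a WHERE id IS NULL;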
However, I am still seeing only the single last row from the table. Here are the complete logs for your reference:
2023-09-12 18:44:52,508 INFO SQL_Server|server1|streaming Requested 'INCREMENTAL' snapshot of data collections '[testDB.dbo.a]' with additional condition 'No condition passed' and surrogate key 'id' [io.debezium.pipeline.signal.actions.snapshotting.ExecuteSnapshot]
2023-09-12 18:44:52,513 WARN SQL_Server|server1|streaming Schema not found for table 'testDB.dbo.a', known tables [testDB.dbo.a, testDB.dbo.debezium_signal] [io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotChangeEventSource]
2023-09-12 18:44:52,516 INFO SQL_Server|server1|streaming Received request to open window with id = 'a8c11f8d-8273-47c2-be65-c820bcd9544b-open', expected = '70ef1fff-b739-4d71-ad94-90703454b6ee', request ignored [io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotContext]
2023-09-12 18:44:52,517 INFO SQL_Server|server1|streaming Received request to close window with id = 'a8c11f8d-8273-47c2-be65-c820bcd9544b-close', expected = '70ef1fff-b739-4d71-ad94-90703454b6ee', request ignored [io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotContext]
2023-09-12 18:44:52,518 INFO SQL_Server|server1|streaming Received request to open window with id = '55be490e-7350-45da-b842-1620ba03ad96-open', expected = '70ef1fff-b739-4d71-ad94-90703454b6ee', request ignored [io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotContext]
2023-09-12 18:44:52,518 INFO SQL_Server|server1|streaming Received request to close window with id = '55be490e-7350-45da-b842-1620ba03ad96-close', expected = '70ef1fff-b739-4d71-ad94-90703454b6ee', request ignored [io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotContext]
2023-09-12 18:45:08,732 INFO || [AdminClient clientId=1--shared-admin] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient]
2023-09-12 18:45:11,380 INFO || Committing files after waiting for rotateIntervalMs time but less than flush.size records available. [io.confluent.connect.s3.TopicPartitionWriter]
2023-09-12 18:45:11,587 INFO || Files committed to S3. Target commit offset for source_db_debezium_signal-0 is 287 [io.confluent.connect.s3.TopicPartitionWriter]
2023-09-12 18:45:16,889 INFO SQL_Server|server1|streaming Skipping read chunk because snapshot is not running [io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotChangeEventSource]
2023-09-12 18:45:16,889 INFO SQL_Server|server1|streaming Received request to open window with id = '55be490e-7350-45da-b842-1620ba03ad96-open', expected = 'a8c11f8d-8273-47c2-be65-c820bcd9544b', request ignored [io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotContext]
2023-09-12 18:45:16,890 INFO SQL_Server|server1|streaming Received request to close window with id = '55be490e-7350-45da-b842-1620ba03ad96-close', expected = 'a8c11f8d-8273-47c2-be65-c820bcd9544b', request ignored [io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotContext]
2023-09-12 18:45:16,890 INFO SQL_Server|server1|streaming Received request to open window with id = '70ef1fff-b739-4d71-ad94-90703454b6ee-open', expected = 'a8c11f8d-8273-47c2-be65-c820bcd9544b', request ignored [io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotContext]
2023-09-12 18:45:16,891 INFO SQL_Server|server1|streaming Received request to close window with id = '70ef1fff-b739-4d71-ad94-90703454b6ee-close', expected = 'a8c11f8d-8273-47c2-be65-c820bcd9544b', request ignored [io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotContext]
2023-09-12 18:45:49,142 INFO SQL_Server|server1|streaming Skipping read chunk because snapshot is not running [io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotChangeEventSource]
2023-09-12 18:45:49,143 INFO SQL_Server|server1|streaming Received request to open window with id = '70ef1fff-b739-4d71-ad94-90703454b6ee-open', expected = '55be490e-7350-45da-b842-1620ba03ad96', request ignored [io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotContext]
2023-09-12 18:45:49,143 INFO SQL_Server|server1|streaming Received request to close window with id = '70ef1fff-b739-4d71-ad94-90703454b6ee-close', expected = '55be490e-7350-45da-b842-1620ba03ad96', request ignored [io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotContext]
2023-09-12 18:45:52,505 INFO SQL_Server|server1|streaming Skipping read chunk because snapshot is not running [io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotChangeEventSource]
2023-09-12 18:46:11,589 INFO || Committing files after waiting for rotateIntervalMs time but less than flush.size records available. [io.confluent.connect.s3.TopicPartitionWriter]
2023-09-12 18:46:11,792 INFO || Files committed to S3. Target commit offset for source_db_debezium_signal-0 is 299 [io.confluent.connect.s3.TopicPartitionWriter]
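As far as I understand, the 'request ignored' lines above refer to the snapshot-window-open/close marker rows that Debezium itself inserts into the signaling table. They can be inspected, and stale markers from earlier attempts cleared, roughly like this (a sketch):

-- inspect all signals, including the window markers Debezium inserts itself
SELECT id, type, data
FROM testDB.dbo.debezium_signal;

-- hypothetical cleanup of stale window markers left by earlier snapshot attempts
DELETE FROM testDB.dbo.debezium_signal
WHERE type IN ('snapshot-window-open', 'snapshot-window-close');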
Any assistance you can provide in resolving this would be greatly appreciated.
~ Thanks
Amit
Hi Chris,
Thank you for your help and response.
I am sending the complete details of my setup so that you can guide me.
Step-01: I have rebuilt the Docker images with Debezium 2.3 and created a source connector for a single table named customers. (A sketch of how CDC is enabled for a table follows the steps below.)
Here are the configurations of my source connector
Step-03: I have updated my source connector with the new configuration.
Step-04: I have triggered an incremental snapshot using the insert query below:
insert into debezium_signal(id, type, data)
values ('ad-hoc-employee', 'execute-snapshot', '{"data-collections":["testDB.dbo.employee"],"type":"incremental","surrogate-key":"id"}');
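For completeness, CDC was enabled for the employee table roughly as follows (a sketch of the standard SQL Server procedure; the exact options in my setup may differ):

USE testDB;
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',        -- schema of the monitored table
    @source_name = N'employee',     -- creates the capture instance dbo_employee
    @role_name = NULL,              -- no gating role (assumption)
    @supports_net_changes = 0;      -- net-changes support is not needed by Debezium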
I have attached a screenshot of the debezium_signal table data; please have a look.
But I am still seeing only the last record from the employee table (name=Mohit) in Kafka.
Here are the logs for your reference:
Table testDB.dbo.employee is new to be monitored by capture instance dbo_employee [io.debezium.connector.sqlserver.SqlServerStreamingChangeEventSource]
monitoring-connect-1 | 2023-09-13 12:59:48,146 INFO || [Producer clientId=server1-schemahistory] Resetting the last seen epoch of partition schema-changes.inventory-0 to 0 since the associated topicId changed from null to WqAvUj64RLCVM4RGtDK5QQ [org.apache.kafka.clients.Metadata]
monitoring-connect-1 | 2023-09-13 12:59:48,151 INFO SQL_Server|server1|streaming Schema will be changed for Capture instance "dbo_employee" [sourceTableId=testDB.dbo.employee, changeTableId=testDB.cdc.dbo_employee_CT, startLsn=00000028:00000ed8:0047, changeTableObjectId=1733581214, stopLsn=NULL] [io.debezium.connector.sqlserver.SqlServerStreamingChangeEventSource]
monitoring-connect-1 | 2023-09-13 12:59:48,387 INFO || [Producer clientId=connector-producer-inventory-connector-0] Resetting the last seen epoch of partition server1-0 to 0 since the associated topicId changed from null to c7H5eFaGSSaJg3Mp7D3cqQ [org.apache.kafka.clients.Metadata]
monitoring-connect-1 | 2023-09-13 13:00:44,385 INFO SQL_Server|server1|streaming Migrating schema to Capture instance "dbo_employee" [sourceTableId=testDB.dbo.employee, changeTableId=testDB.cdc.dbo_employee_CT, startLsn=00000028:00000ed8:0047, changeTableObjectId=1733581214, stopLsn=NULL] [io.debezium.connector.sqlserver.SqlServerStreamingChangeEventSource]
monitoring-connect-1 | 2023-09-13 13:00:44,437 INFO SQL_Server|server1|streaming Migration skipped, no table schema changes detected. [io.debezium.connector.sqlserver.SqlServerStreamingChangeEventSource]
monitoring-connect-1 | 2023-09-13 13:00:44,487 INFO SQL_Server|server1|streaming Requested 'INCREMENTAL' snapshot of data collections '[testDB.dbo.employee]' with additional condition 'No condition passed' and surrogate key 'id' [io.debezium.pipeline.signal.actions.snapshotting.ExecuteSnapshot]
monitoring-connect-1 | 2023-09-13 13:00:44,520 INFO SQL_Server|server1|streaming Incremental snapshot for table 'testDB.dbo.employee' will end at position [1005] [io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotChangeEventSource]
monitoring-connect-1 | 2023-09-13 13:00:45,002 INFO || 2 records sent during previous 00:00:57.377, last recorded offset of {server=server1, database=testDB} partition is {transaction_id=null, incremental_snapshot_correlation_id=ad-hoc-employee, event_serial_no=1, incremental_snapshot_maximum_key=aced0005757200135b4c6a6176612e6c616e672e4f626a6563743b90ce589f1073296c020000787000000001737200116a6176612e6c616e672e496e746567657212e2a0a4f781873802000149000576616c7565787200106a6176612e6c616e672e4e756d62657286ac951d0b94e08b0200007870000003ed, commit_lsn=00000028:000019a0:001c, change_lsn=00000028:000019a0:001b, incremental_snapshot_collections=[{"incremental_snapshot_collections_id":"testDB.dbo.employee","incremental_snapshot_collections_additional_condition":null,"incremental_snapshot_collections_surrogate_key":"id"}], incremental_snapshot_primary_key=aced000570} [io.debezium.connector.common.BaseSourceTask]
monitoring-kafka-1 | 2023-09-13 13:00:45,017 - INFO [data-plane-kafka-request-handler-4:Logging@66] - Creating topic server1.testDB.dbo.debezium_signal with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1))
monitoring-connect-1 | 2023-09-13 13:00:45,095 WARN || [Producer clientId=connector-producer-inventory-connector-0] Error while fetching metadata with correlation id 5 : {server1.testDB.dbo.debezium_signal=LEADER_NOT_AVAILABLE} [org.apache.kafka.clients.NetworkClient]
monitoring-kafka-1 | 2023-09-13 13:00:45,144 - INFO [Controller-1-to-broker-1-send-thread:NetworkClient@937] - [Controller id=1, targetBrokerId=1] Node 1 disconnected.
monitoring-kafka-1 | 2023-09-13 13:00:45,169 - INFO [data-plane-kafka-request-handler-5:Logging@66] - [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(server1.testDB.dbo.debezium_signal-0)
monitoring-kafka-1 | 2023-09-13 13:00:45,189 - INFO [data-plane-kafka-request-handler-5:UnifiedLog$@1787] - [LogLoader partition=server1.testDB.dbo.debezium_signal-0, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
monitoring-kafka-1 | 2023-09-13 13:00:45,192 - INFO [data-plane-kafka-request-handler-5:Logging@66] - Created log for partition server1.testDB.dbo.debezium_signal-0 in /kafka/data/1/server1.testDB.dbo.debezium_signal-0 with properties {}
monitoring-connect-1 | 2023-09-13 13:00:45,207 WARN || [Producer clientId=connector-producer-inventory-connector-0] Error while fetching metadata with correlation id 6 : {server1.testDB.dbo.debezium_signal=LEADER_NOT_AVAILABLE} [org.apache.kafka.clients.NetworkClient]
monitoring-kafka-1 | 2023-09-13 13:00:45,196 - INFO [data-plane-kafka-request-handler-5:Logging@66] - [Partition server1.testDB.dbo.debezium_signal-0 broker=1] No checkpointed highwatermark is found for partition server1.testDB.dbo.debezium_signal-0
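In case it helps, I can also compare the source row count against the CDC change table named in the logs, with queries along these lines (a sketch):

-- all 8 rows should be visible to a plain SELECT against the source table
SELECT COUNT(*) AS source_rows FROM testDB.dbo.employee;

-- the change table populated by CDC (name taken from the logs above)
SELECT COUNT(*) AS captured_rows FROM testDB.cdc.dbo_employee_CT;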
I hope this information will be helpful to you.
Any assistance you can provide in resolving this would be greatly appreciated.
~ Thanks
Amit