How can we trigger a snapshot for a topic that was removed and re-created with zero data?


Amit Kumar Manjhi

Sep 7, 2023, 4:52:52 AM
to debezium
Hi All

I've created a Debezium SQL Server source connector with a `table.include.list` consisting of 'a, b, c, d, debezium_signal', and I can see data for all tables in Kafka.

Next, I removed table 'b' from the `table.include.list` and deleted its topic from Kafka (via Kafdrop). After updating the source connector, I could no longer see table 'b' data in Kafka.

I then re-added table 'b' to the source connector, and I can see the 'b' topic in Kafka, but it contains no data.

How can I trigger a snapshot for a topic that was removed and re-created with zero data?

Any suggestions or comments will be helpful.

Thanks in advance.
~ Amit

Chris Cranford

Sep 7, 2023, 8:05:13 AM
to debe...@googlegroups.com
Hi Amit,

The initial snapshot is something that runs only once, when the connector is first deployed.  If you need to perform an ad-hoc or on-demand snapshot after the fact, you can configure the connector to perform incremental snapshots [1].  This kind of snapshot is slightly different: it runs concurrently with streaming changes, it is resumable, and it can be triggered as needed.
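
For example, once a signaling table is configured via the signal.data.collection property, an ad-hoc incremental snapshot can be requested with an insert like the following (a minimal sketch; the database, table, and signal id values are illustrative):

    INSERT INTO testDB.dbo.debezium_signal (id, type, data)
    VALUES ('ad-hoc-1', 'execute-snapshot',
            '{"data-collections":["testDB.dbo.b"],"type":"incremental"}');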

Thanks,
Chris

[1]: https://debezium.io/documentation/reference/2.4/connectors/sqlserver.html#sqlserver-incremental-snapshots

Amit Kumar Manjhi

Sep 7, 2023, 9:32:20 AM
to debezium
Hi Chris,

Thank you for your quick response.

I tried to perform an incremental snapshot, but it ended with the warning below:

2023-09-07 18:50:31,658 INFO   SQL_Server|server1|streaming  Requested 'INCREMENTAL' snapshot of data collections '[testDB.dbo.abc]' with the additional condition 'No condition passed'   [io.debezium.pipeline.signal.ExecuteSnapshot]

2023-09-07 18:50:31,663 WARN   SQL_Server|server1|streaming  Incremental snapshot for table 'testDB.dbo.abc' skipped cause the table has no primary keys   [io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotChangeEventSource]

Any comments or suggestions would be helpful.

Thanks

Chris Cranford

Sep 7, 2023, 9:47:58 AM
to debe...@googlegroups.com
Hi Amit -

It's ironic that I was just replying in another thread about incremental snapshot pitfalls, and I acknowledged this precise use case.  The universe works in mysterious ways.

So given the message, the table has neither a primary key nor a unique index that acts as one.  Does the table have at least one column that is unique across all rows that you could use as a surrogate key instead?  If so, you can specify the surrogate key column name in the signal (see the sketch after the list below).  If that isn't possible, you can either:

    1. Upgrade to Debezium 2.4.0.Beta1 and use ad-hoc blocking snapshots.
    2. Remove the offsets/history topic and retake a full snapshot of all tables.
    3. Use a temporary connector to re-populate the old topic for that single table.

The last option requires stopping the original connector, making sure to use a dummy history topic and topic prefix, and then using the topic routing SMT to re-route the events to the correct old topic.  This is very error prone and can be tough to get right if you're not extremely familiar with all the moving pieces.  If you're not comfortable with (3), I would highly recommend (1) or (2), depending on whether you can deploy a preview release in your environment.
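
For reference, a signal that specifies a surrogate key would look something like this (a minimal sketch; the table and the hypothetical unique column unique_col are illustrative):

    INSERT INTO testDB.dbo.debezium_signal (id, type, data)
    VALUES ('ad-hoc-b', 'execute-snapshot',
            '{"data-collections":["testDB.dbo.b"],"type":"incremental","surrogate-key":"unique_col"}');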

Thanks,
Chris

Amit Kumar Manjhi

Sep 8, 2023, 3:32:31 AM
to debezium

Hi Chris,

Thank you for the detailed explanation.

I have upgraded my Debezium to version 2.3 in order to utilize the surrogate-key feature.

Here is the insert query that I am attempting:

insert into testDB.dbo.debezium_signal(id, type, data) values ('ad-hoc-test', 'execute-snapshot', '{"data-collections":["testDB.dbo.a"],"type":"incremental","surrogate-key":"id"}')

The 'id' column data type is numeric, and it is unique.

The query ran successfully, but I am encountering the same problem:

2023-09-08 12:39:52,287 INFO   SQL_Server|server1|streaming  Requested 'INCREMENTAL' snapshot of data collections '[testDB.dbo.a]' with additional condition 'No condition passed'   [io.debezium.pipeline.signal.ExecuteSnapshot]

2023-09-08 12:39:52,298 WARN   SQL_Server|server1|streaming  Incremental snapshot for table 'testDB.dbo.a' was skipped because the table has no primary keys   [io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotChangeEventSource]

Could you please help me or suggest what I might be doing wrong? Any comments on this issue would be greatly appreciated.

Thanks, Amit

Chris Cranford

Sep 8, 2023, 8:45:01 AM
to debe...@googlegroups.com
Hi Amit,

I'm not convinced that your upgrade was properly applied.  The first INFO log message does not match what is expected.  It should have read:

    Requested 'INCREMENTAL' snapshot of data collections '[testDB.dbo.a]' with additional condition 'No condition passed' and surrogate key 'id'

If you don't see the "and surrogate key '...'" part of that log message, then you aren't running Debezium 2.3, unfortunately.

Thanks,
Chris

Amit Kumar Manjhi

Sep 11, 2023, 2:03:29 AM
to debezium

Hi Chris,

Thank you for getting back to me quickly and clarifying the expected log message. I appreciate your attention to detail.

I cross-checked and reinstalled Debezium 2.3, and now the incremental snapshot with the surrogate key is working fine.

However, I have encountered another issue: after the incremental snapshot, only one record was present in Kafka, even though the source table contains 8 rows.

I would greatly appreciate any insights or guidance you can offer to help resolve this issue. 

If you need any additional information or logs, please let me know.  

Thanks
~ Amit

Chris Cranford

Sep 11, 2023, 12:50:58 PM
to debe...@googlegroups.com
Hi Amit -

Well, not to state the obvious, but is the surrogate key unique for all 8 rows, or do all the rows share the same value?  If the latter, then only seeing 1 row in the topic makes sense.  If the surrogate key column is unique for all 8 rows, then I'm afraid we need more detail.  Perhaps enable TRACE logging, attempt the incremental snapshot of the 8 rows again, and attach the logs for us to review.
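
One quick way to verify uniqueness (a sketch; it assumes the snapshotted table is testDB.dbo.a and the surrogate key column is id):

    -- any rows returned indicate duplicated surrogate key values
    SELECT id, COUNT(*) AS cnt
    FROM testDB.dbo.a
    GROUP BY id
    HAVING COUNT(*) > 1;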

Thanks,
Chris

Amit Kumar Manjhi

Sep 12, 2023, 9:29:15 AM
to debezium

Hi Chris,

Thank you for your help and response.

Yes, the surrogate key (id) is unique for all 8 rows, and they have integer values like 3, 11, 12, 13, 14, 9, 10, 15.

However, I am still only seeing the single last row from the table.

Here are the complete logs for your reference.


2023-09-12 18:44:52,508 INFO   SQL_Server|server1|streaming  Requested 'INCREMENTAL' snapshot of data collections '[testDB.dbo.a]' with additional condition 'No condition passed' and surrogate key 'id'   [io.debezium.pipeline.signal.actions.snapshotting.ExecuteSnapshot]

2023-09-12 18:44:52,513 WARN   SQL_Server|server1|streaming  Schema not found for table 'testDB.dbo.a', known tables [testDB.dbo.a, testDB.dbo.debezium_signal]   [io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotChangeEventSource]
2023-09-12 18:44:52,516 INFO   SQL_Server|server1|streaming  Received request to open window with id = 'a8c11f8d-8273-47c2-be65-c820bcd9544b-open', expected = '70ef1fff-b739-4d71-ad94-90703454b6ee', request ignored   [io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotContext]
2023-09-12 18:44:52,517 INFO   SQL_Server|server1|streaming  Received request to close window with id = 'a8c11f8d-8273-47c2-be65-c820bcd9544b-close', expected = '70ef1fff-b739-4d71-ad94-90703454b6ee', request ignored   [io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotContext]
2023-09-12 18:44:52,518 INFO   SQL_Server|server1|streaming  Received request to open window with id = '55be490e-7350-45da-b842-1620ba03ad96-open', expected = '70ef1fff-b739-4d71-ad94-90703454b6ee', request ignored   [io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotContext]
2023-09-12 18:44:52,518 INFO   SQL_Server|server1|streaming  Received request to close window with id = '55be490e-7350-45da-b842-1620ba03ad96-close', expected = '70ef1fff-b739-4d71-ad94-90703454b6ee', request ignored   [io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotContext]
2023-09-12 18:45:08,732 INFO   ||  [AdminClient clientId=1--shared-admin] Node 1 disconnected.   [org.apache.kafka.clients.NetworkClient]
2023-09-12 18:45:11,380 INFO   ||  Committing files after waiting for rotateIntervalMs time but less than flush.size records available.   [io.confluent.connect.s3.TopicPartitionWriter]
2023-09-12 18:45:11,587 INFO   ||  Files committed to S3. Target commit offset for source_db_debezium_signal-0 is 287   [io.confluent.connect.s3.TopicPartitionWriter]
2023-09-12 18:45:16,889 INFO   SQL_Server|server1|streaming  Skipping read chunk because snapshot is not running   [io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotChangeEventSource]
2023-09-12 18:45:16,889 INFO   SQL_Server|server1|streaming  Received request to open window with id = '55be490e-7350-45da-b842-1620ba03ad96-open', expected = 'a8c11f8d-8273-47c2-be65-c820bcd9544b', request ignored   [io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotContext]
2023-09-12 18:45:16,890 INFO   SQL_Server|server1|streaming  Received request to close window with id = '55be490e-7350-45da-b842-1620ba03ad96-close', expected = 'a8c11f8d-8273-47c2-be65-c820bcd9544b', request ignored   [io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotContext]
2023-09-12 18:45:16,890 INFO   SQL_Server|server1|streaming  Received request to open window with id = '70ef1fff-b739-4d71-ad94-90703454b6ee-open', expected = 'a8c11f8d-8273-47c2-be65-c820bcd9544b', request ignored   [io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotContext]
2023-09-12 18:45:16,891 INFO   SQL_Server|server1|streaming  Received request to close window with id = '70ef1fff-b739-4d71-ad94-90703454b6ee-close', expected = 'a8c11f8d-8273-47c2-be65-c820bcd9544b', request ignored   [io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotContext]
2023-09-12 18:45:49,142 INFO   SQL_Server|server1|streaming  Skipping read chunk because snapshot is not running   [io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotChangeEventSource]
2023-09-12 18:45:49,143 INFO   SQL_Server|server1|streaming  Received request to open window with id = '70ef1fff-b739-4d71-ad94-90703454b6ee-open', expected = '55be490e-7350-45da-b842-1620ba03ad96', request ignored   [io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotContext]
2023-09-12 18:45:49,143 INFO   SQL_Server|server1|streaming  Received request to close window with id = '70ef1fff-b739-4d71-ad94-90703454b6ee-close', expected = '55be490e-7350-45da-b842-1620ba03ad96', request ignored   [io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotContext]
2023-09-12 18:45:52,505 INFO   SQL_Server|server1|streaming  Skipping read chunk because snapshot is not running   [io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotChangeEventSource]
2023-09-12 18:46:11,589 INFO   ||  Committing files after waiting for rotateIntervalMs time but less than flush.size records available.   [io.confluent.connect.s3.TopicPartitionWriter]
2023-09-12 18:46:11,792 INFO   ||  Files committed to S3. Target commit offset for source_db_debezium_signal-0 is 299   [io.confluent.connect.s3.TopicPartitionWriter]



Any assistance you can provide in resolving this would be greatly appreciated.

~ Thanks
   Amit

Chris Cranford

Sep 12, 2023, 10:56:16 AM
to debe...@googlegroups.com
Hi Amit -

I believe the issue has to do with the WARN in the logs below about not finding the schema for table `testDB.dbo.a`; the incremental snapshot is effectively skipped in this case.  Can you share your full connector configuration, and can you confirm that the signal insert does not include any hidden characters that might make the comparison of the "testDB.dbo.a" table identifiers fail to match?
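
One way to spot hidden characters is to look at the raw bytes of the signal payload (a sketch; it assumes the signal table is testDB.dbo.debezium_signal):

    -- the VARBINARY cast exposes any non-printing bytes in the payload
    SELECT id, data, CAST(data AS VARBINARY(MAX)) AS data_bytes
    FROM testDB.dbo.debezium_signal;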

Thanks,
Chris

Amit Kumar Manjhi

Sep 13, 2023, 9:37:43 AM
to debezium

Hi Chris,

Thank you for your help and response.

Here are the complete details of my setup so that you can guide me.

Step 01: I rebuilt the Docker images with Debezium 2.3 and created a source connector for a single table named customers.
Here is the configuration of my source connector:

{
    "name": "inventory-connector",
    "config": {
        "connector.class": "io.debezium.connector.sqlserver.SqlServerConnector",
        "tasks.max": "1",
        "topic.prefix": "server1",
        "database.hostname": "sqlserver",
        "database.port": "1433",
        "database.user": "sa",
        "database.password": "Password!",
        "database.names": "testDB",
        "schema.history.internal.kafka.bootstrap.servers": "kafka:29092",
        "schema.history.internal.kafka.topic": "schema-changes.inventory",
        "database.encrypt": "false",
        "signal.data.collection": "testDB.dbo.debezium_signal",
        "table.include.list": "dbo.customers,dbo.employee,dbo.debezium_signal"
    }
}

Note: I am able to see the data of the customers table in Kafka.

Step 02: I created a table named employee:

CREATE TABLE employee (
  id INT IDENTITY(1001, 1),
  first_name VARCHAR(255),
  last_name VARCHAR(255),
  email VARCHAR(255)
);

Then I inserted a few rows into the table:

INSERT INTO employee (first_name, last_name, email)
VALUES ('Amit', 'Kumar', 'amit....@xyz.com');
INSERT INTO employee (first_name, last_name, email)
VALUES ('Virat', 'Kumar', 'mukesh...@xyz.com');
INSERT INTO employee (first_name, last_name, email)
VALUES ('Rohit', 'Kumar', 'amit....@xyz.com');
INSERT INTO employee (first_name, last_name, email)
VALUES ('Rahul', '', 'amit....@xyz.com');
INSERT INTO employee (first_name, last_name, email)
VALUES ('Mohit', 'Kumar', '');

After that, I enabled CDC for this table:

EXEC sys.sp_cdc_enable_table
  @source_schema = 'dbo',
  @source_name = 'employee',
  @role_name = NULL,
  @supports_net_changes = 0;
GO

Step 03: I updated my source connector with the configuration:

{
    "connector.class": "io.debezium.connector.sqlserver.SqlServerConnector",
    "tasks.max": "1",
    "topic.prefix": "server1",
    "database.hostname": "sqlserver",
    "database.port": "1433",
    "database.user": "sa",
    "database.password": "Password!",
    "database.names": "testDB",
    "schema.history.internal.kafka.bootstrap.servers": "kafka:29092",
    "schema.history.internal.kafka.topic": "schema-changes.inventory",
    "database.encrypt": "false",
    "signal.data.collection": "testDB.dbo.debezium_signal",
    "table.include.list": "dbo.customers,dbo.employee,dbo.debezium_signal"
}

Step 04: I triggered an incremental snapshot using the insert query below:

insert into debezium_signal (id, type, data)
values ('ad-hoc-employee', 'execute-snapshot', '{"data-collections":["testDB.dbo.employee"],"type":"incremental","surrogate-key":"id"}');

I have attached a screenshot of the debezium_signal data; please have a look. But still, I am able to see only the last record from the employee table (name=Mohit) in Kafka. Here are the logs for your reference:


 Table testDB.dbo.employee is new to be monitored by capture instance dbo_employee   [io.debezium.connector.sqlserver.SqlServerStreamingChangeEventSource]
monitoring-connect-1    | 2023-09-13 12:59:48,146 INFO   ||  [Producer clientId=server1-schemahistory] Resetting the last seen epoch of partition schema-changes.inventory-0 to 0 since the associated topicId changed from null to WqAvUj64RLCVM4RGtDK5QQ   [org.apache.kafka.clients.Metadata]
monitoring-connect-1    | 2023-09-13 12:59:48,151 INFO   SQL_Server|server1|streaming  Schema will be changed for Capture instance "dbo_employee" [sourceTableId=testDB.dbo.employee, changeTableId=testDB.cdc.dbo_employee_CT, startLsn=00000028:00000ed8:0047, changeTableObjectId=1733581214, stopLsn=NULL]   [io.debezium.connector.sqlserver.SqlServerStreamingChangeEventSource]
monitoring-connect-1    | 2023-09-13 12:59:48,387 INFO   ||  [Producer clientId=connector-producer-inventory-connector-0] Resetting the last seen epoch of partition server1-0 to 0 since the associated topicId changed from null to c7H5eFaGSSaJg3Mp7D3cqQ   [org.apache.kafka.clients.Metadata]
monitoring-connect-1    | 2023-09-13 13:00:44,385 INFO   SQL_Server|server1|streaming  Migrating schema to Capture instance "dbo_employee" [sourceTableId=testDB.dbo.employee, changeTableId=testDB.cdc.dbo_employee_CT, startLsn=00000028:00000ed8:0047, changeTableObjectId=1733581214, stopLsn=NULL]   [io.debezium.connector.sqlserver.SqlServerStreamingChangeEventSource]
monitoring-connect-1    | 2023-09-13 13:00:44,437 INFO   SQL_Server|server1|streaming  Migration skipped, no table schema changes detected.   [io.debezium.connector.sqlserver.SqlServerStreamingChangeEventSource]
monitoring-connect-1    | 2023-09-13 13:00:44,487 INFO   SQL_Server|server1|streaming  Requested 'INCREMENTAL' snapshot of data collections '[testDB.dbo.employee]' with additional condition 'No condition passed' and surrogate key 'id'   [io.debezium.pipeline.signal.actions.snapshotting.ExecuteSnapshot]
monitoring-connect-1    | 2023-09-13 13:00:44,520 INFO   SQL_Server|server1|streaming  Incremental snapshot for table 'testDB.dbo.employee' will end at position [1005]   [io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotChangeEventSource]
monitoring-connect-1    | 2023-09-13 13:00:45,002 INFO   ||  2 records sent during previous 00:00:57.377, last recorded offset of {server=server1, database=testDB} partition is {transaction_id=null, incremental_snapshot_correlation_id=ad-hoc-employee, event_serial_no=1, incremental_snapshot_maximum_key=aced0005757200135b4c6a6176612e6c616e672e4f626a6563743b90ce589f1073296c020000787000000001737200116a6176612e6c616e672e496e746567657212e2a0a4f781873802000149000576616c7565787200106a6176612e6c616e672e4e756d62657286ac951d0b94e08b0200007870000003ed, commit_lsn=00000028:000019a0:001c, change_lsn=00000028:000019a0:001b, incremental_snapshot_collections=[{"incremental_snapshot_collections_id":"testDB.dbo.employee","incremental_snapshot_collections_additional_condition":null,"incremental_snapshot_collections_surrogate_key":"id"}], incremental_snapshot_primary_key=aced000570}   [io.debezium.connector.common.BaseSourceTask]
monitoring-kafka-1      | 2023-09-13 13:00:45,017 - INFO  [data-plane-kafka-request-handler-4:Logging@66] - Creating topic server1.testDB.dbo.debezium_signal with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1))
monitoring-connect-1    | 2023-09-13 13:00:45,095 WARN   ||  [Producer clientId=connector-producer-inventory-connector-0] Error while fetching metadata with correlation id 5 : {server1.testDB.dbo.debezium_signal=LEADER_NOT_AVAILABLE}   [org.apache.kafka.clients.NetworkClient]
monitoring-kafka-1      | 2023-09-13 13:00:45,144 - INFO  [Controller-1-to-broker-1-send-thread:NetworkClient@937] - [Controller id=1, targetBrokerId=1] Node 1 disconnected.
monitoring-kafka-1      | 2023-09-13 13:00:45,169 - INFO  [data-plane-kafka-request-handler-5:Logging@66] - [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(server1.testDB.dbo.debezium_signal-0)
monitoring-kafka-1      | 2023-09-13 13:00:45,189 - INFO  [data-plane-kafka-request-handler-5:UnifiedLog$@1787] - [LogLoader partition=server1.testDB.dbo.debezium_signal-0, dir=/kafka/data/1] Loading producer state till offset 0 with message format version 2
monitoring-kafka-1      | 2023-09-13 13:00:45,192 - INFO  [data-plane-kafka-request-handler-5:Logging@66] - Created log for partition server1.testDB.dbo.debezium_signal-0 in /kafka/data/1/server1.testDB.dbo.debezium_signal-0 with properties {}
monitoring-connect-1    | 2023-09-13 13:00:45,207 WARN   ||  [Producer clientId=connector-producer-inventory-connector-0] Error while fetching metadata with correlation id 6 : {server1.testDB.dbo.debezium_signal=LEADER_NOT_AVAILABLE}   [org.apache.kafka.clients.NetworkClient]
monitoring-kafka-1      | 2023-09-13 13:00:45,196 - INFO  [data-plane-kafka-request-handler-5:Logging@66] - [Partition server1.testDB.dbo.debezium_signal-0 broker=1] No checkpointed highwatermark is found for partition server1.testDB.dbo.debezium_signal-0

I hope this information will be helpful to you.


Any assistance you can provide in resolving this would be greatly appreciated.

~ Thanks
   Amit  

incremental_snapshot.png
datainkafka.png

Amit Kumar Manjhi

Sep 13, 2023, 9:41:11 AM
to debezium
* Please ignore the employee table name in the table.include.list in Step 01. I used only a single table, customers:

    "table.include.list": "dbo.customers,dbo.debezium_signal"


jiri.p...@gmail.com

Sep 13, 2023, 9:44:37 AM
to debezium
Hi,

Could you please share the full log and also the data from Kafka in text format so we can see them in full?

Thanks

J.

Amit Kumar Manjhi

Sep 13, 2023, 11:17:37 AM
to debezium
Hi Jiri,

Thank you for your quick response.

I am sending the complete logs and the Kafka data as text files; please find them attached.

Please let me know if you need any other info.

Thanks
Amit
fulllog.txt
data-in-kafka.txt

jiri.p...@gmail.com

Sep 14, 2023, 4:33:45 AM
to debezium
OK, I see the following in the log:

monitoring-connect-1 | 2023-09-13 13:00:49,460 INFO SQL_Server|server1|streaming No data returned by the query, incremental snapshotting of table 'testDB.dbo.employee' finished
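
That message means the chunk query returned no rows, so the incremental snapshot of the table was considered finished. It may also be worth running a similar bounded query by hand to confirm the rows are visible to the connector's session (a sketch only, not the exact SQL Debezium issues; the chunk size of 1024 is illustrative):

    SELECT TOP (1024) *
    FROM testDB.dbo.employee
    WHERE id > 0
    ORDER BY id;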

Could you please enable TRACE level logging for `io.debezium.pipeline.source.snapshot.incremental`, `io.debezium.connector.sqlserver.SqlServerConnection`, and `io.debezium.jdbc.JdbcConnection`?

Thanks

J.

Amit Kumar Manjhi

Sep 14, 2023, 8:25:37 AM
to debezium
Hi Jiri,

Thanks for your response.

I have enabled the suggested logging using the commands below:

curl -s -X PUT -H "Content-Type:application/json" http://localhost:8083/admin/loggers/io.debezium.pipeline.source.snapshot.incremental -d '{"level": "DEBUG"}'

curl -s -X PUT -H "Content-Type:application/json" http://localhost:8083/admin/loggers/io.debezium.connector.sqlserver.SqlServerConnection -d '{"level": "DEBUG"}'

curl -s -X PUT -H "Content-Type:application/json" http://localhost:8083/admin/loggers/io.debezium.jdbc.JdbcConnection -d '{"level": "DEBUG"}'

After that, I observed the following in the logs:

monitoring-connect-1    | 2023-09-14 10:55:08,043 INFO   ||  192.168.16.1 - - [14/Sep/2023:10:55:07 +0000] "PUT /admin/loggers/io.debezium.jdbc.JdbcConnection HTTP/1.1" 200 35 "-" "PostmanRuntime/7.32.3" 153   [org.apache.kafka.connect.runtime.rest.RestServer]
monitoring-connect-1    | 2023-09-14 10:55:26,560 INFO   ||  192.168.16.1 - - [14/Sep/2023:10:55:26 +0000] "PUT /admin/loggers/io.debezium.connector.sqlserver.SqlServerConnection HTTP/1.1" 200 55 "-" "PostmanRuntime/7.32.3" 12   [org.apache.kafka.connect.runtime.rest.RestServer]
monitoring-connect-1    | 2023-09-14 10:55:38,528 INFO   ||  192.168.16.1 - - [14/Sep/2023:10:55:38 +0000] "PUT /admin/loggers/io.debezium.pipeline.source.snapshot.incremental HTTP/1.1" 200 52 "-" "PostmanRuntime/7.32.3" 4   [org.apache.kafka.connect.runtime.rest.RestServer]

I am sending the latest complete logs for you to look over.

Please take a look at the attachment.

Thanks

Complete-Logs_14th-Sep.txt

jiri.p...@gmail.com

Sep 15, 2023, 5:05:18 AM
to debezium
Thanks, unfortunately the log does not contain the additional log messages :-(

Maybe the appender has its own threshold configured? Also, could you please use TRACE, not DEBUG, so we get the most detailed information?

J.
