Sink connector optimization

Vladislav P

Dec 16, 2025, 8:48:25 AM
to debezium
Hi. The problem is as follows: the sink connector cannot handle the load and keeps falling behind the source connector. We have found the root of the problem: DELETE operations take a very long time to complete. In Postgres we can see the connector's session executing many statements of the form "DELETE FROM SCHEMA.TABLE WHERE ID=$1". With deletes disabled via "delete.enabled: false", the sink connector reduces the lag very quickly and keeps up with the source connector. Help me solve this problem.
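
For reference, a minimal way to confirm where the time goes, assuming direct access to the Postgres sink database (the table name and id value are illustrative). Note that EXPLAIN ANALYZE on a DELETE really executes it, so run it inside a rolled-back transaction; a Seq Scan in the resulting plan would explain per-row deletes taking around a second on a large table:

    BEGIN;
    -- Look for "Index Scan" vs. "Seq Scan" on the table in the output.
    EXPLAIN ANALYZE DELETE FROM schema.table WHERE id = 42;
    ROLLBACK;  -- undo the test delete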

Now there are questions:
1) How are the operations inside a batch ("batch.size: 500") executed? Can a single batch contain INSERT/UPDATE/DELETE operations that are executed sequentially?
2) Are DELETE operations not combined when "use.reduction.buffer: true" is enabled? Are only INSERTs combined?
3) How can DELETE operations be optimized? Currently each delete takes about 1 second, and it feels like the batch cannot commit because of the long execution time and will run indefinitely. (A possible fix is sketched after this list.)
4) How can soft deletion of records be implemented, i.e. setting a column such as deleted = true in Postgres when a record is deleted? (A possible approach is sketched after this list.)
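
On question 3: if the sink connector created the target table without a primary key, every "DELETE FROM SCHEMA.TABLE WHERE ID=$1" has to scan the whole table, which by itself can explain roughly a second per row. A sketch of the likely fix, assuming the table currently has no index on ID (names are illustrative):

    -- CONCURRENTLY builds the index without blocking writes to the table.
    CREATE UNIQUE INDEX CONCURRENTLY table_id_idx ON schema.table (id);

With an index in place each delete becomes a single index lookup, so a batch of deletes should commit in milliseconds instead of minutes.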
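
On question 4: Debezium's ExtractNewRecordState SMT can rewrite delete events instead of dropping them. With delete.handling.mode=rewrite (this property has been renamed in newer Debezium versions), the transform emits an update-style record carrying a "__deleted" field set to "true", which the sink writes as an ordinary column; combined with "delete.enabled: false" on the sink this behaves as a soft delete. A sketch of the transform configuration on the source connector:

    transforms=unwrap
    transforms.unwrap.type=io.debezium.transforms.ExtractNewRecordState
    transforms.unwrap.drop.tombstones=true
    transforms.unwrap.delete.handling.mode=rewrite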

Chris Cranford

Dec 16, 2025, 8:17:18 PM
to debe...@googlegroups.com
Hi -

Let's continue the discussion on Zulip, as there's no reason to duplicate the conversation in two places:
#community-jdbc > Commit of offsets timed out

-cc