How do I use Kafka log compaction with Debezium Connect, and how should I configure Debezium?

CC VMaster

Nov 3, 2021, 6:05:08 AM11/3/21
to debezium
As described in the documentation: https://debezium.io/documentation/reference/stable/connectors/mysql.html#mysql-delete-events

Log compaction enables removal of some older messages as long as at least the most recent message for every key is kept. This lets Kafka reclaim storage space while ensuring that the topic contains a complete data set and can be used for reloading key-based state.
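To illustrate the quoted guarantee, here is a minimal Python sketch of compaction semantics (this is not Kafka code; the keys and values are made-up placeholders):

```python
def compact(records):
    """Simulate Kafka log compaction: keep only the most recent value
    for each key. Keys are listed in first-seen order here; real Kafka
    preserves the offsets of the surviving records instead."""
    latest = {}
    for key, value in records:
        latest[key] = value  # a later record for the same key overwrites the earlier one
    return list(latest.items())

# Hypothetical change-event stream: one row inserted, then updated twice.
log = [("id=1", "insert"), ("id=2", "insert"),
       ("id=1", "update-a"), ("id=1", "update-b")]
print(compact(log))  # → [('id=1', 'update-b'), ('id=2', 'insert')]
```

After compaction only the newest record per key survives, so the topic can still be replayed to rebuild key-based state.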

I want to deliver all MySQL records into Kafka and keep them permanently. However, intermediate records such as those produced by update operations take up too much storage space, so I need to reclaim it.

Following the documentation, I configured the Kafka topic with cleanup.policy=compact and min.compaction.lag.ms=600000, but it has no effect.
(attached screenshot: 3e8d0416-5fdd-40d1-96b5-1e5ac2edfb34.png)
I updated the same primary key repeatedly, growing the topic from 3614 to 3620 records, and then waited more than 10 minutes. In theory the superseded records from those updates should have been removed by compaction, but they were not.

How should I configure Kafka or Debezium Connect to make compaction take effect?

Note:
Kafka topic config : 
PartitionCount: 1 ReplicationFactor: 1 Configs: compression.type=snappy,cleanup.policy=compact,segment.bytes=1073741824,min.compaction.lag.ms=600000
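One common reason compaction appears to do nothing: the log cleaner never compacts the active segment, and with segment.bytes=1073741824 (1 GiB) a few thousand small records will never cause a segment roll, so everything stays in the active segment. For testing, forcing faster segment rolls gives the cleaner closed segments to work on (a sketch; the broker address and topic name are placeholders):

```shell
# Hypothetical broker address and topic name; segment.ms=60000 rolls a new
# segment roughly every minute so closed segments become eligible for compaction.
bin/kafka-configs.sh --bootstrap-server localhost:9092 \
  --alter --entity-type topics --entity-name your-topic \
  --add-config segment.ms=60000
```

Note that min.compaction.lag.ms=600000 additionally prevents any record newer than 10 minutes from being compacted, so both conditions must be satisfied before records disappear.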

Debezium connect config:
"producer.override.compression.type":"snappy",
"topic.creation.default.cleanup.policy":"compact",
"topic.creation.default.compression.type":"snappy",
"value.converter":"org.apache.kafka.connect.json.JsonConverter",
"key.converter":"org.apache.kafka.connect.json.JsonConverter"
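If Connect itself creates the topics, topic-level settings must be passed through the topic.creation.* properties (available since Kafka 2.6). A sketch showing only the compaction-related keys, with the lag and roll values as illustrative assumptions:

```json
"topic.creation.default.cleanup.policy": "compact",
"topic.creation.default.min.compaction.lag.ms": "600000",
"topic.creation.default.segment.ms": "60000"
```

These only apply when the topic is first created by Connect; for an existing topic the configs have to be altered on the broker side instead.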