Topic __consumer_offsets increases very fast and takes too much disk space.


zhz shi

May 2, 2016, 11:04:20 PM
to kafka-clients
Hi,

I'm using the latest Kafka server with the default offset configs, and the 0.9 kafka-client with Spring-kafka-integration. On the client side I configure the consumer with enable_auto_commit_config=true and auto_commit_interval_ms_config=100 (a rough sketch of this setup follows the questions below). I find that the topic __consumer_offsets grows very fast and takes too much disk space, but the default offset retention policy (offsets.retention.minutes=1440) does not seem to work. Here are the questions I have about this problem:

1. Is there any way to see the full configuration of a running Kafka broker?
2. What is the recommended broker offset configuration for the __consumer_offsets topic?
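For context, a minimal sketch of the consumer setup described above. This assumes the plain 0.9+ Java client rather than the Spring integration, and the bootstrap servers, group id, and topic name are placeholders:

    import java.util.Collections;
    import java.util.Properties;

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class OffsetCommitExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group");            // placeholder
            // Auto-commit every 100 ms, as described above. Each commit is written
            // to the __consumer_offsets topic, so a very small interval means a lot
            // of traffic on that topic.
            props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
            props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "100");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("example-topic")); // placeholder
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(1000);
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                    }
                }
            }
        }
    }

With an interval this small, each active consumer can write up to roughly ten offset-commit messages per second to __consumer_offsets, which is part of why the topic grows quickly.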

BR, Zhz

gerard...@dizzit.com

May 3, 2016, 3:00:02 AM
to kafka-clients
It's sort of a bug; most likely compaction is not enabled on your broker. You could also switch the offsets topic to deletion and limit its size by time and/or space, but that's a bit risky.
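To check what's described here, a sketch that reads back the broker-level setting (log.cleaner.enable) and the topic-level settings of __consumer_offsets (cleanup.policy, retention.*). It uses the AdminClient from later Kafka releases, not the 0.9 client discussed in this thread; on 0.9 the same information is in the broker's server.properties and in the output of kafka-topics.sh --describe. The bootstrap address and broker id are placeholders:

    import java.util.Arrays;
    import java.util.Map;
    import java.util.Properties;

    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.Config;
    import org.apache.kafka.common.config.ConfigResource;

    public class DescribeOffsetTopicConfig {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder

            try (AdminClient admin = AdminClient.create(props)) {
                // Broker-level settings (e.g. log.cleaner.enable) for broker id 0,
                // and topic-level settings (e.g. cleanup.policy) for __consumer_offsets.
                ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "0");
                ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "__consumer_offsets");

                Map<ConfigResource, Config> configs =
                        admin.describeConfigs(Arrays.asList(broker, topic)).all().get();

                configs.forEach((resource, config) -> {
                    System.out.println("== " + resource + " ==");
                    config.entries().forEach(entry ->
                            System.out.println(entry.name() + " = " + entry.value()));
                });
            }
        }
    }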

zhz shi

May 3, 2016, 4:26:06 AM
to kafka-clients
Yes, I'm using the default value of 'log.cleaner.enable=false', but in that case shouldn't log.cleanup.policy default to 'delete'?

gerard...@dizzit.com

May 4, 2016, 3:35:30 AM
to kafka-clients
No, because you could lose offsets that way. In 0.9.0.1 they 'fixed' the default to log.cleaner.enable=true, so you could change that setting in your broker's server.properties and it should work as intended with 0.9.0.0 as well.

zhz shi

May 4, 2016, 9:25:35 PM
to kafka-clients
Yes, the offsets are cleaned up now after I changed this config. Thanks a lot :-)