Hiya,
We have an issue with the Confluent Docker images (we upgraded them to Kafka 0.9) where Kafka does not delete any logs. Does anyone have any idea why?
Ideally we want to delete logs over an hour old (this is a dev cluster).
Here are our environment variables:
: ${KAFKA_ADVERTISED_HOST_NAME:=$(curl --retry 5 --connect-timeout 3 -s 169.254.169.254/latest/meta-data/local-hostname)}
: ${KAFKA_PORT:=9092}
: ${KAFKA_NUM_NETWORK_THREADS:=3}
: ${KAFKA_NUM_IO_THREADS:=8}
: ${KAFKA_SOCKET_SEND_BUFFER_BYTES:=102400}
: ${KAFKA_SOCKET_RECEIVE_BUFFER_BYTES:=102400}
: ${KAFKA_SOCKET_REQUEST_MAX_BYTES:=104857600}
: ${KAFKA_LOG_DIRS:=/var/lib/kafka}
: ${KAFKA_NUM_PARTITIONS:=1}
: ${KAFKA_NUM_RECOVERY_THREADS_PER_DATA_DIR:=1}
: ${KAFKA_LOG_RETENTION_HOURS:=1}
: ${KAFKA_LOG_RETENTION_BYTES:=547483648}
: ${KAFKA_LOG_SEGMENT_BYTES:=1073741824}
: ${KAFKA_LOG_RETENTION_CHECK_INTERVAL_MS:=10000}
: ${KAFKA_LOG_CLEANER_ENABLE:=true}
: ${KAFKA_ZOOKEEPER_CONNECT:=$ZOOKEEPER_PORT_2181_TCP_ADDR:$ZOOKEEPER_PORT_2181_TCP_PORT}
: ${KAFKA_ZOOKEEPER_CONNECTION_TIMEOUT_MS:=6000}
: ${KAFKA_AUTO_CREATE_TOPICS_ENABLE:=true}
: ${KAFKA_DELETE_TOPIC_ENABLE:=true}
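For context, we're assuming the image's start script maps each KAFKA_* variable onto the matching server.properties key by stripping the prefix, lowercasing, and turning underscores into dots (so KAFKA_LOG_RETENTION_HOURS should land as log.retention.hours). That assumed mapping, roughly:

```shell
# Assumed env-var -> broker property mapping (not verified against the
# image's actual entrypoint): strip KAFKA_, lowercase, underscores to dots.
env_to_prop() {
  echo "${1#KAFKA_}" | tr '[:upper:]' '[:lower:]' | tr '_' '.'
}

env_to_prop KAFKA_LOG_RETENTION_HOURS              # -> log.retention.hours
env_to_prop KAFKA_LOG_RETENTION_CHECK_INTERVAL_MS  # -> log.retention.check.interval.ms
```

If that mapping holds, the retention settings above should be reaching the broker config.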
I was just wondering whether this might have been overridden when we created the topics?
Interestingly, when I go and delete a segment file manually, it seems to kick off the deletion logic:
[2016-01-07 15:25:49,919] INFO Scheduling log segment 0 for log fastly-logs-0 for deletion. (kafka.log.Log)
[2016-01-07 15:25:49,921] ERROR Uncaught exception in scheduled task 'kafka-log-retention' (kafka.utils.KafkaScheduler)
kafka.common.KafkaStorageException: Failed to change the log file suffix from to .deleted for log segment 0
at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:260)
at kafka.log.Log.kafka$log$Log$$asyncDeleteSegment(Log.scala:804)
at kafka.log.Log.kafka$log$Log$$deleteSegment(Log.scala:795)
at kafka.log.Log$$anonfun$deleteOldSegments$1.apply(Log.scala:551)
at kafka.log.Log$$anonfun$deleteOldSegments$1.apply(Log.scala:551)
at scala.collection.immutable.List.foreach(List.scala:381)
at kafka.log.Log.deleteOldSegments(Log.scala:551)
at kafka.log.LogManager.kafka$log$LogManager$$cleanupExpiredSegments(LogManager.scala:421)
at kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:452)
at kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:450)
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:778)
at scala.collection.Iterator$class.foreach(Iterator.scala:742)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:777)
at kafka.log.LogManager.cleanupLogs(LogManager.scala:450)
at kafka.log.LogManager$$anonfun$startup$1.apply$mcV$sp(LogManager.scala:190)
at kafka.utils.KafkaScheduler$$anonfun$1.apply$mcV$sp(KafkaScheduler.scala:110)
at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:60)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
It obviously errors out, as we just deleted the file beforehand.
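For anyone else reading the trace: my reading (an assumption from the log messages, not checked against the 0.9 source) is that the broker deletes a segment in two steps, first renaming foo.log to foo.log.deleted and only later unlinking the renamed file from a scheduled task, so a hand-deleted segment makes the rename blow up exactly like above. A rough sketch of that two-step delete:

```shell
# Rough sketch (assumption, not the actual broker code) of the two-step
# delete the trace suggests: rename first, unlink later on a schedule.
async_delete_segment() {
  # Step 1: rename <segment>.log to <segment>.log.deleted.
  # If the file was already removed by hand, this rename fails, which is
  # presumably what surfaces as the KafkaStorageException above.
  mv -- "$1" "$1.deleted" || {
    echo "Failed to change the log file suffix to .deleted for $1" >&2
    return 1
  }
  # Step 2 (later, from the scheduler): actually remove the renamed file,
  # e.g. rm -f -- "$1.deleted"
}
```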