Hello,
I'm running Confluent Kafka 4.1.0 on Kubernetes, and it has worked without many problems. My default topic retention is set to "forever" (a very large number), and we manually set shorter retention times on topics that don't need to stick around.
Yesterday, a topic got some bad data in it, and instead of working around it in code, I wanted to simply delete the topic. I did so using the kafka-topics tool, and then I got impatient. After checking the list of topics (no more than 10 minutes had passed), I noticed the topic was still there, so I hopped onto Zookeeper, opened the shell, and deleted the /brokers/topics/&lt;topic&gt; znode along with the /admin/delete_topics/&lt;topic&gt; znode and called it a day. This was with the Kafka cluster still up, which was probably a mistake on my part.
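For reference, the steps I ran looked roughly like this (the Zookeeper address and topic name are placeholders, and the znode commands were typed interactively):

```shell
# The supported deletion path:
kafka-topics --zookeeper zookeeper:2181 --delete --topic <topic>

# What I then did by hand inside zookeeper-shell, with the brokers
# still running:
#   rmr /brokers/topics/<topic>
#   rmr /admin/delete_topics/<topic>
```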
After I did this, I went back to the Kafka bash shell and listed topics, and the topic I wanted gone was gone. I figured it did what I wanted it to.
Unfortunately, I then started a new application that consumes topics I did NOT delete, using a brand-new consumer group with `auto.offset.reset` set to "earliest", and it didn't process any of the topics that I know had data to be reprocessed. Every time I had done this before deleting the topic, those other topics would get reprocessed. This time, nothing. After fumbling around thinking it was a client issue, I finally started checking my brokers. Although those topics are still listed, they don't seem to have any data. I ran a kafka-console-consumer on the topics my app was consuming and got nothing. However, there were other topics whose data was there and flowed through the consumer fine.
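The spot check I describe was along these lines (broker address and topic name are placeholders):

```shell
# Fresh read from the earliest available offset; exits after 10s of
# silence, so it prints nothing if the broker has no data for the topic:
kafka-console-consumer --bootstrap-server broker:9092 \
  --topic <topic> --from-beginning --timeout-ms 10000
```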
I'm trying to figure out how I can verify for sure that my data for those topics is gone. I know I didn't delete those topics, and their logs weren't new, so replication should have been done. They are still listed when I run a "--list" command.
Is there any way to check to see if the data is truly gone or could there be something else going on?
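In case it helps: here's what I was planning to try for verification. Broker host, topic names, and the data directory are placeholders; GetOffsetShell ships with Kafka:

```shell
# Latest end offsets per partition (--time -1); all zeros would
# suggest the partitions hold no data:
kafka-run-class kafka.tools.GetOffsetShell \
  --broker-list broker:9092 --topic <topic> --time -1

# Earliest offsets (--time -2); if earliest == latest for every
# partition, the log segments are empty:
kafka-run-class kafka.tools.GetOffsetShell \
  --broker-list broker:9092 --topic <topic> --time -2

# On a broker pod, inspect the segment files directly
# (assuming log.dirs points at /var/lib/kafka/data):
ls -lh /var/lib/kafka/data/<topic>-*
```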