Is there a reason the ability to handle multiple Kafka topics (using a pattern) was removed from the new Kafka indexing service? That would have been hugely useful for my use case: I have tens of Kafka topics (eventually reaching 100) feeding data into my Druid cluster. As it stands, each topic must be handled by a separate task, which means a worker per topic (not counting replicas and partitions). Each worker, if I understand it correctly, is a JVM, and the Kafka tasks attach to a worker for their lifetime, which is effectively never-ending. That adds up to a ton of resources just to run the Kafka indexing tasks.
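To illustrate the constraint: my understanding is that each supervisor spec accepts only a single topic name in its `ioConfig`, so one spec (and its tasks) must be submitted per topic. A rough sketch of the relevant part of the spec, with placeholder values and details elided:

```json
{
  "type": "kafka",
  "dataSchema": { "dataSource": "my-datasource" },
  "ioConfig": {
    "topic": "my-single-topic-name",
    "consumerProperties": {
      "bootstrap.servers": "kafka-broker:9092"
    },
    "taskCount": 1,
    "replicas": 1
  }
}
```

With 100 topics, that becomes 100 supervisor specs and at least 100 long-running task JVMs, before accounting for `taskCount` and `replicas`.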
Any thoughts on how I can work around this issue?
Thanks, Arul