I am checking out the storm-kafka spout in the storm-contrib repo as a replacement for our current Python Kafka spout. Specifically, we want to build a Kafka spout system where the brokers, partitions, and consumers are all dynamic.
I was just going through the code, and it looks like it can handle the case where partitions and brokers change.
However, I could not find anything obvious that handles the case where a consumer dies or a new consumer is added. Can someone explain how partitions get redistributed among the spout tasks in this case?
Thanks.