I'd like to prototype a new Quarkus app that uses Kafka as the internal message hub, with potentially hundreds of topics receiving & storing incoming data.
Then to 'publish' that data to 'consumers' I would like to use gRPC for its more efficient protocol stack. I can envision the 'consumers' getting data a couple of ways:
1. Batch requests. Consumers would request batches of data from known Kafka offsets.
2. Streaming requests. Consumers would receive streams of data from the Kafka topics, starting either from a specified Kafka offset or from the latest data in the topic. E.g. a consumer interested in only the very latest messages would open a stream and receive messages potentially forever.
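To make the two access patterns concrete, here's a rough sketch of the kind of service definition I have in mind (service/message names and field shapes are just placeholders, not a worked-out design):

```proto
syntax = "proto3";

// Hypothetical service exposing Kafka-backed topics to consumers.
service DataHub {
  // 1. Batch: fetch a bounded chunk starting at a known offset.
  rpc FetchBatch(BatchRequest) returns (BatchReply);

  // 2. Streaming: server-streams records from a given offset
  //    (or from the latest data), potentially forever.
  rpc Subscribe(StreamRequest) returns (stream Record);
}

message BatchRequest {
  string topic = 1;
  int64 start_offset = 2;
  int32 max_records = 3;
}

message BatchReply {
  repeated Record records = 1;
  int64 next_offset = 2;  // where the next batch request would resume
}

message StreamRequest {
  string topic = 1;
  int64 start_offset = 2;  // ignored if from_latest is set
  bool from_latest = 3;
}

message Record {
  int64 offset = 1;
  bytes key = 2;
  bytes value = 3;
}
```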
But with the streaming option, lots of questions arise around flow control. What happens if a client can't process at the rate new messages arrive? Whose job is it to manage this? Should the client be required to scale out to handle whatever is provided? Should the server slow down? Either/both? How do you manage all this?
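From what I've read, Quarkus's gRPC support can be written against Mutiny, which follows the reactive-streams model: the subscriber signals demand with request(n) instead of the server pushing blindly, so backpressure ends up shared between both sides. A minimal stdlib-only sketch of that demand-driven idea, using java.util.concurrent.Flow (hypothetical names, not actual gRPC code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class BackpressureDemo {

    // Publishes offsets 0..count-1 to a pull-based subscriber and
    // returns everything the subscriber processed, in order.
    static List<Long> consume(long count) throws InterruptedException {
        SubmissionPublisher<Long> publisher = new SubmissionPublisher<>();
        List<Long> received = new ArrayList<>();
        CountDownLatch done = new CountDownLatch(1);

        publisher.subscribe(new Flow.Subscriber<Long>() {
            private Flow.Subscription subscription;

            @Override public void onSubscribe(Flow.Subscription s) {
                subscription = s;
                s.request(1);            // pull exactly one item at a time
            }
            @Override public void onNext(Long item) {
                received.add(item);      // "process" the record
                subscription.request(1); // signal readiness for the next one
            }
            @Override public void onError(Throwable t) { done.countDown(); }
            @Override public void onComplete() { done.countDown(); }
        });

        for (long offset = 0; offset < count; offset++) {
            // submit() blocks the producer if the subscriber lags too far
            // behind -- the "server slows down" answer, driven by demand.
            publisher.submit(offset);
        }
        publisher.close();
        done.await();
        return received;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(consume(5)); // [0, 1, 2, 3, 4]
    }
}
```

The same request(n) signal would presumably propagate back from the gRPC stream into however the server paces its Kafka consumption, but I don't know what the idiomatic Quarkus wiring for that looks like.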
Are there some examples of this sort of thing using Quarkus? Any non-Quarkus examples of this that I could port to Quarkus, perhaps?
If anyone has any pointers/suggestions on this please let me know.
-Dave