Hello,
Greetings! I have a question about Netty.
While running some experiments, we found that with a simple Netty TCP server that just returns a fixed response, and a single-threaded client that continuously sends data over a single channel for as long as the channel's isWritable() is true, we could get very high throughput: 100K requests per second was normal. In this case the read handler on the client side just drained the response data.
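For reference, the fast single-channel client loop looks roughly like this (a minimal sketch; the class name, message payload, and details are illustrative, not our actual code):

```java
import io.netty.buffer.Unpooled;
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.util.ReferenceCountUtil;

// Keep writing while the channel is writable; drain responses in channelRead.
public class FloodClientHandler extends ChannelInboundHandlerAdapter {
    private static final byte[] REQUEST = "ping".getBytes();

    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        writeWhileWritable(ctx.channel());
    }

    @Override
    public void channelWritabilityChanged(ChannelHandlerContext ctx) {
        // Resume once the outbound buffer drains below the low water mark.
        if (ctx.channel().isWritable()) {
            writeWhileWritable(ctx.channel());
        }
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        // Just drain the fixed response; no per-request bookkeeping.
        ReferenceCountUtil.release(msg);
    }

    private void writeWhileWritable(Channel ch) {
        // write() queues into the outbound buffer, which counts toward
        // writability, so this loop stops at the high water mark.
        while (ch.isWritable()) {
            ch.write(Unpooled.wrappedBuffer(REQUEST));
        }
        ch.flush();
    }
}
```

With this pattern many requests are in flight on the one channel at any moment, which is what keeps the pipe full.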
However, when we modify the clients to use large channel pools, such that no send happens on a channel until the response to the previous request has been received (acquiring the channel on send and releasing it on read), throughput for the same input load drops by almost a factor of 10. We tried to compensate for the per-request round-trip delay by using very large channel pools.
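The pooled variant does roughly the following (a minimal sketch against Netty's ChannelPool API; the class and attribute names are placeholders, and pool/bootstrap construction is omitted):

```java
import io.netty.buffer.Unpooled;
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.pool.ChannelPool;
import io.netty.util.AttributeKey;
import io.netty.util.ReferenceCountUtil;
import io.netty.util.concurrent.FutureListener;

// Acquire a channel, send exactly one request on it, and release the
// channel back to the pool only after the response has been read.
public class PooledClient {
    private static final AttributeKey<ChannelPool> POOL =
            AttributeKey.valueOf("pool");

    public static void sendOne(ChannelPool pool, byte[] request) {
        pool.acquire().addListener((FutureListener<Channel>) f -> {
            if (!f.isSuccess()) {
                return;
            }
            Channel ch = f.getNow();
            ch.attr(POOL).set(pool);
            ch.writeAndFlush(Unpooled.wrappedBuffer(request));
            // Not released here: the channel stays checked out until
            // the read handler sees the response.
        });
    }

    public static class ReleaseOnReadHandler extends ChannelInboundHandlerAdapter {
        @Override
        public void channelRead(ChannelHandlerContext ctx, Object msg) {
            ReferenceCountUtil.release(msg);
            ChannelPool pool = ctx.channel().attr(POOL).get();
            if (pool != null) {
                // Only now may the next request reuse this channel.
                pool.release(ctx.channel());
            }
        }
    }
}
```

So each channel carries at most one outstanding request at a time, and concurrency comes only from the number of pooled channels.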
In this scenario we can support a large number of concurrent channels without any problem (we tried 20,000 on our low-end machines), but the throughput is very low.
Is this expected, especially considering that every channel now carries only sparse traffic (in fact, only one outstanding request at a time for the server to read and service)?
Or, for a scenario like this, where a large number of connections each send very sparse data in a synchronous request-response style, would different Netty settings be more beneficial?
We tried searching the web for Netty benchmarks and expected numbers, but could not find any for the scenario described above.
Help on this would be much appreciated.
Regards,
--samantp.