First, for those who are not clear on what I mean:
If you have one connection, in theory the client could send a request to the server along with an id in a header, and when the server replies with data, it would include the same id in the response, allowing the client to connect the response data to the request it belongs to. That means it would in theory be possible to send multiple concurrent requests to the server over one connection and handle the various responses concurrently.
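As far as I understand, the Redis protocol itself has no per-request id; replies simply come back in the order the commands were sent, so in principle a client could do the matching with a FIFO queue of pending futures. A rough sketch of that idea (all names here are hypothetical, the socket write is stubbed out):

```java
import java.util.Queue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical sketch of multiplexing many requests over one connection.
// Since replies arrive in the order commands were written, a FIFO queue
// of pending futures is enough to match each reply to its request.
class MultiplexedConnection {
    private final Queue<CompletableFuture<String>> pending = new ConcurrentLinkedQueue<>();

    // Called by any thread: write the command and remember who is waiting.
    CompletableFuture<String> send(String command) {
        CompletableFuture<String> reply = new CompletableFuture<>();
        synchronized (this) {          // keep "enqueue" and "write" atomic
            pending.add(reply);
            writeToSocket(command);    // hypothetical socket write
        }
        return reply;
    }

    // Called by a single reader thread for every reply read off the socket.
    void onReply(String reply) {
        pending.remove().complete(reply);
    }

    private void writeToSocket(String command) { /* left out for brevity */ }
}
```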
For instance, if you use the same Jedis connection from several threads, we know Jedis will fail completely at handling the responses, because it does not correlate outgoing requests with incoming response data and will most likely intermingle them.
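For comparison, the way Jedis is normally used safely from multiple threads is one connection per thread, borrowed from a JedisPool, roughly like this (host/port are placeholders):

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;

public class JedisPerThread {
    public static void main(String[] args) throws InterruptedException {
        // Each operation borrows its own connection, so replies can never
        // be attributed to the wrong request.
        JedisPool pool = new JedisPool("localhost", 6379);

        Runnable task = () -> {
            try (Jedis jedis = pool.getResource()) {   // dedicated connection
                jedis.set(Thread.currentThread().getName(), "ok");
                System.out.println(jedis.get(Thread.currentThread().getName()));
            }                                           // returned to the pool here
        };

        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        pool.close();
    }
}
```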
"Redis is a single thread synchronous system and can only handle one command at the time. That's why Jedis uses the same approach."
"Redisson and lettuce use a different approach which is async io from the client side."
This is what I want to get to the bottom of. Those two statements seem contradictory. If Redis is incapable of handling more than one command at a time, and that is the reason Jedis communicates the way it does, then how (if at all) are Redisson and Lettuce able to circumvent this limitation?
They use a different approach how? I am not sure anything truly asynchronous is happening on the client side for Redisson. After all, the synchronous part of Jedis could easily be wrapped in a future and called "async", while in reality requests would simply be queued, something like the sketch below.
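This is what I mean by "fake async": it looks asynchronous to the caller, but every command still runs one at a time in front of a single blocking connection (a sketch of the pattern, not actual Redisson code):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import redis.clients.jedis.Jedis;

public class FakeAsync {
    public static void main(String[] args) {
        // A single-threaded executor keeps the one Jedis connection safe,
        // but it also means the "async" calls are just queued work.
        ExecutorService executor = Executors.newSingleThreadExecutor();
        Jedis jedis = new Jedis("localhost", 6379);

        CompletableFuture<String> a = CompletableFuture.supplyAsync(() -> jedis.get("a"), executor);
        CompletableFuture<String> b = CompletableFuture.supplyAsync(() -> jedis.get("b"), executor);

        System.out.println(a.join() + " " + b.join());
        executor.shutdown();
        jedis.close();
    }
}
```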
If Redisson truly is capable of firing off multiple requests on one connection, how is it doing that? And is that potentially much better for throughput than creating multiple connections, since sending multiple requests per connection ought to be cheaper than opening more connections?
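If I read Lettuce's async API correctly, sharing one connection across several in-flight commands would look roughly like this (keys and address are placeholders), which is what makes me think the multiplexing really does happen on one connection:

```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.RedisFuture;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.async.RedisAsyncCommands;

public class SharedConnectionAsync {
    public static void main(String[] args) throws Exception {
        RedisClient client = RedisClient.create("redis://localhost:6379");
        StatefulRedisConnection<String, String> connection = client.connect();
        RedisAsyncCommands<String, String> commands = connection.async();

        // Both commands are written to the same connection without waiting
        // for the first reply; the client matches replies back as they arrive.
        RedisFuture<String> first = commands.get("key1");
        RedisFuture<String> second = commands.get("key2");

        System.out.println(first.get() + " " + second.get());

        connection.close();
        client.shutdown();
    }
}
```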