I suspect it might be the RESP parser of your Redis client.
If a large string is nested inside an array and the reply spans
multiple TCP packets, a smart parser keeps a stack of where it is in
the reply, so it can resume parsing efficiently when the next TCP
packet arrives. A naive parser instead restarts parsing from the
beginning on every new TCP packet, until the entire value can be
parsed in one go.
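To make the difference concrete, here is a minimal sketch of a resumable RESP parser in Python (my own illustration, not eredis code): unfinished arrays live on a stack, so a reply split across TCP packets is continued rather than re-parsed from the start.

```python
_INCOMPLETE = object()          # sentinel: need more bytes

class _ArrayHeader:
    def __init__(self, n):
        self.n = n

class RespParser:
    """Resumable RESP parser sketch: completed elements stay parsed on
    a stack of unfinished arrays, so feeding the next TCP chunk resumes
    where the previous one stopped."""

    def __init__(self):
        self.buf = b""
        self.stack = []         # frames: [expected_count, items_so_far]

    def feed(self, chunk):
        """Add one TCP chunk; return a complete reply, or None."""
        self.buf += chunk
        while True:
            item = self._parse_one()
            if item is _INCOMPLETE:
                return None                     # wait for the next packet
            if isinstance(item, _ArrayHeader):
                if item.n == 0:
                    item = []                   # empty array is complete
                else:
                    self.stack.append([item.n, []])
                    continue                    # parse its first element
            # Bubble the completed value up through unfinished arrays.
            while self.stack:
                frame = self.stack[-1]
                frame[1].append(item)
                if len(frame[1]) < frame[0]:
                    break                       # array still incomplete
                self.stack.pop()
                item = frame[1]                 # array done; bubble up
            else:
                return item                     # top-level reply complete

    def _parse_one(self):
        nl = self.buf.find(b"\r\n")
        if nl < 0:
            return _INCOMPLETE
        line, rest = self.buf[:nl], self.buf[nl + 2:]
        kind, arg = line[:1], line[1:]
        if kind == b"+":                        # simple string
            self.buf = rest
            return arg.decode()
        if kind == b":":                        # integer
            self.buf = rest
            return int(arg)
        if kind == b"$":                        # bulk string
            n = int(arg)
            if len(rest) < n + 2:
                return _INCOMPLETE              # payload not all here yet
            self.buf = rest[n + 2:]
            return rest[:n].decode()
        if kind == b"*":                        # array header
            self.buf = rest
            return _ArrayHeader(int(arg))
        raise ValueError("unsupported RESP type: %r" % kind)

# A reply split across two "packets" is resumed, not restarted:
p = RespParser()
assert p.feed(b"*2\r\n$5\r\nhel") is None              # incomplete
assert p.feed(b"lo\r\n$5\r\nworld\r\n") == ["hello", "world"]
```

A naive parser would instead throw away its progress and rescan the whole buffer on every chunk, which for a 100k value split into many TCP packets turns parsing into quadratic work.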
I recently fixed this issue in the Erlang client "eredis" here:
Which client are you using?
Some indications that the problem might be the parser:
If myhash is a hash with only one field and a large value, is "HGET
myhash field1" faster than "HGETALL myhash"?
If mylist is a list with only one large element, is "LINDEX mylist 0"
faster than "LRANGE mylist 0 0"?
The only difference within each pair above is whether the string is
wrapped in an array or not, so if the answers to both questions are
"yes", chances are it's the parser.
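To see why these pairs are a good test, here is a small sketch (my own illustration, using the LRANGE case) of the RESP bytes a client receives in each case. The two replies carry the same payload; the array reply only adds a `*1\r\n` header in front, so any large timing difference points at how the client parses arrays, not at the network.

```python
def bulk(s: bytes) -> bytes:
    """RESP bulk string: $<len>\r\n<payload>\r\n (what LINDEX returns)."""
    return b"$%d\r\n%s\r\n" % (len(s), s)

def array(*items: bytes) -> bytes:
    """RESP array: *<count>\r\n followed by each encoded element
    (what LRANGE returns)."""
    return b"*%d\r\n" % len(items) + b"".join(items)

payload = b"x" * 100_000                # the large value from the benchmark

lindex_reply = bulk(payload)            # reply to: LINDEX mylist 0
lrange_reply = array(bulk(payload))     # reply to: LRANGE mylist 0 0

# Same payload; the array wrapper adds only 4 bytes.
assert lrange_reply == b"*1\r\n" + lindex_reply
```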
On 2021-07-12 05:15, Litchy S wrote:
> Hi, I have done some performance tests on Redis hash, list and stream.
> I simply create a string of length 10, 1k or 100k, push it to Redis,
> and get it back.
> When the server and client are on the same machine (localhost:6379),
> all tests take <1ms.
> When the server and client are on different machines (I just chose 2
> machines on the network), all writes take about 10ms (I think this is
> reasonable, and a 10ms write could be regarded as a baseline for the
> network cost between these 2 machines).
> But the read performance is very different: *hash costs only <1ms, but
> list and stream cost 20~70ms*.