MEMORY:
Each Redis instance was run with an 8MB and then a 64MB corpus limit. A single jredis client streamed non-pipelined 16-byte LPUSH requests until memory was full, then reported the number of successful appends and the elapsed time. The pushes were spread round-robin across 10,000 keys.
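The driver loop is easy to reconstruct. A minimal sketch in Python (the original client was jredis; the `make_workload` helper and the `list:N` key naming are illustrative assumptions, not the actual test code):

```python
import itertools

def make_workload(n_keys=10_000, value_size=16):
    """Yield (key, value) pairs for non-pipelined LPUSH calls:
    fixed-size 16-byte values spread round-robin across the keys."""
    keys = [f"list:{i}" for i in range(n_keys)]
    value = b"x" * value_size
    for key in itertools.cycle(keys):
        yield key, value
```

The benchmark then just issues one LPUSH per yielded pair until the server hits its memory limit, counting successful appends and elapsed time.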
Linked lists, 8MB: 54962 in 5 sec -- 152.6 bytes/entry (9.5x)
Linked lists, 64MB: 619580 in 56 sec -- 108.3 bytes/entry (6.8x)
Zip lists, 8MB: 314864 in 31 sec -- 26.6 bytes/entry (1.7x)
Zip lists, 64MB: 3251988 in 306 sec -- 20.6 bytes/entry (1.3x)
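The bytes/entry figures are just the corpus limit divided by the number of successful appends, and the parenthesized multiplier is that cost relative to the 16-byte payload. For instance, for the 8MB linked-list run:

```python
corpus = 8 * 1024 * 1024       # 8MB memory limit, in bytes
appends = 54962                # successful LPUSHes reported above
per_entry = corpus / appends   # total memory cost per stored entry
overhead = per_entry / 16      # relative to the 16-byte payload
print(round(per_entry, 1), round(overhead, 1))  # 152.6 9.5
```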
Speed was always between 10k-11k appends/sec, with no significant change across runs.
THROUGHPUT:
Running the throughput test (heavily pipelined, doing 1M pushes to 1M keys 20 times):
Linked lists average 1M inserts in 5726 msec, or 175k inserts/sec.
Zip lists average 1M inserts in 6756 msec, or 148k inserts/sec.
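The inserts/sec figures follow directly from the averaged times:

```python
# 1M pipelined pushes divided by the average elapsed time
for name, msec in [("linked lists", 5726), ("zip lists", 6756)]:
    rate = 1_000_000 / (msec / 1000)   # inserts per second
    print(name, round(rate / 1000), "k inserts/sec")  # 175k and 148k
```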
Bottom line: zip lists provide 4x as much memory for data, at a cost of 20% throughput. Totally worth it for lists of small items.
robey
Thanks for sharing! The ziplists were indeed added to git-head about a
week ago, but haven't really been announced yet. Glad to see you found
them and did some testing. I would add that there are still some small
optimizations left to do. I expect the bytes/entry to drop to about 18,
which will let you store even more data in the same amount of memory.
Furthermore, for people who don't want the 20% performance drop for
small lists and are happy to accept a bit more memory usage: all
thresholds concerning ziplists are configurable, so don't worry. I'm not
sure these configuration directives are in redis.conf on git-head yet,
but they definitely will be within a couple of days.
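For reference, the list-related directives later shipped in redis.conf under these names (defaults shown; treat the exact values as an assumption against whatever git-head has at any given moment):

```
# Lists with at most this many entries, and with no value longer than
# this many bytes, are stored as ziplists instead of linked lists.
list-max-ziplist-entries 128
list-max-ziplist-value 64
```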
Cheers,
Pieter