On Tue, May 8, 2012 at 2:35 PM, Guillaume Luccisano
<guil...@socialcam.com> wrote:
> Hey everyone,
>
> I'm planning to use a Redis instance to store millions of counters,
> like followers, views, etc.
> I was thinking of putting these in one big hash per kind,
> but was wondering what would be fastest, and what would be the optimal
> way to store them in terms of memory usage?
The fastest way is to store them as standard keys, no nesting.
The optimal way in terms of memory use is to set reasonable limits via
the hash-max-ziplist-entries / hash-max-ziplist-value configuration
options, shard your data, and keep each hash under those limits. The
ziplist encoding will reduce performance (by how much depends on your
ziplist limits), but it will also reduce memory use. You will have to
test both ways to determine whether the performance hit is worth the
reduction in memory use.
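To make the sharding idea concrete, here is a minimal sketch (names and
the shard size are illustrative, not from the post). Each numeric counter
id is mapped to a small hash whose field count stays below
hash-max-ziplist-entries, so Redis keeps the compact ziplist encoding:

```python
# Assumes redis.conf contains something like:
#   hash-max-ziplist-entries 512
#   hash-max-ziplist-value 64
SHARD_SIZE = 512  # keep at or below hash-max-ziplist-entries

def shard_for(kind, counter_id, shard_size=SHARD_SIZE):
    """Map a counter id to a (hash key, field) pair.

    Counters 0..511 land in hash "kind:0", 512..1023 in "kind:1",
    and so on, so no single hash outgrows the ziplist limit.
    """
    bucket = counter_id // shard_size
    return ("%s:%d" % (kind, bucket), str(counter_id % shard_size))

key, field = shard_for("followers", 12345)
print(key, field)  # followers:24 57
# With a real client (e.g. redis-py) you would then increment with:
#   r.hincrby(key, field, 1)
# versus the flat-key approach, which is simply:
#   r.incr("followers:%d" % counter_id)
```

The flat-key version is faster per operation; the sharded-hash version
trades some CPU for a much smaller per-counter memory overhead.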
> Also, if this gets bigger than the available memory, storage on disk
> will probably not work well with big hashes.
If Redis uses more memory than you have available, your performance
will suffer terribly in just about any scenario (it *may not be*
catastrophic if you have an SSD for your swap, and the majority of
your reads/writes are to a small subset of your keys).
Regards,
- Josiah
> Any expert advice on the question?
>