I would actually recommend going about it in the opposite direction.
Start a new Redis instance on an identical system (same architecture,
same Redis version, same configuration file), then create hashes whose
sizes are on the same order of magnitude as the hashes you are
actually seeing:
1. Start with an empty Redis and check the memory use reported by
INFO (used_memory, plus used_memory_rss for the resident size).
2. Add your hash using 4-byte unique keys and values.
3. Re-check the memory use reported by INFO.
4. Repeat, multiplying the hash size by sqrt(2) each time, until you
are close to the maximum size of your hash in production (a sketch of
this loop follows below).
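If it helps, here is a minimal sketch of that loop in Python with
redis-py. Assumptions of mine, not from the thread: a throwaway
instance on localhost:6379 that is safe to FLUSHALL, a starting size
of 1000 entries, a 4M-entry ceiling, and an arbitrary pipeline batch
size.

    import struct
    import redis

    r = redis.Redis()   # throwaway instance on localhost:6379

    size = 1000.0
    while size <= 4_000_000:      # stop near your production hash size
        n = int(size)
        r.flushall()              # step 1: start from an empty instance
        info = r.info()
        base_mem = info['used_memory']
        base_rss = info['used_memory_rss']

        pipe = r.pipeline(transaction=False)
        for i in range(n):
            k = struct.pack('>I', i)   # step 2: 4-byte unique key,
            pipe.hset('bench', k, k)   # reused as the 4-byte value
            if i % 10000 == 9999:      # flush the pipeline in batches
                pipe.execute()
        pipe.execute()

        info = r.info()                # step 3: re-check memory use
        used = info['used_memory'] - base_mem
        rss = info['used_memory_rss'] - base_rss
        print('%8d entries: %d bytes (%.1f bytes/entry overhead), '
              'rss delta %d' % (n, used, used / n - 8.0, rss))
        # 8.0 = 4-byte key + 4-byte value of actual data per entry
        size *= 2 ** 0.5               # step 4: grow by sqrt(2)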
Because you then know how much space a hash of a given size takes,
and you know the size of your actual data (4-byte key, 4-byte value
per entry), you can extrapolate Redis' overhead for data structures,
memory over-allocation, and fragmentation. Re-calculate your expected
size in production, and your numbers should line up pretty well.
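For example, with made-up numbers: if a hash of 100,000 entries with
4-byte keys and values comes out at about 9 MB, the raw data is only
800 KB, so the per-entry overhead is roughly
(9,000,000 - 800,000) / 100,000, or about 82 bytes. A production
entry with a 20-byte key and a 100-byte value would then cost around
82 + 120 = 202 bytes.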
With the version of Redis I am using, I know that Redis uses roughly
80 bytes + my data size to store an entry in a hash. If I have a hash
of 1 million entries (a small hash for us), I know it's going to be 80
megs + whatever data I put in. Redis has improved memory use since we
deployed, so I would suggest you run the tests.
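As a sanity check against production, the estimate is just a
multiplication; the overhead figure and key/value sizes below are
placeholders, so substitute whatever your own tests measure:

    entries = 1_000_000
    overhead = 80               # bytes/entry, from your measurements
    key_len, val_len = 16, 64   # hypothetical production sizes
    total = entries * (overhead + key_len + val_len)
    print('%.0f MB' % (total / 1e6))   # -> 160 MB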
Regards,
- Josiah