Just use a slave per core and a load balancer in front, like haproxy. No need to get fancy.
F.
--
You received this message because you are subscribed to the Google Groups "Redis DB" group.
To unsubscribe from this group and stop receiving emails from it, send an email to redis-db+u...@googlegroups.com.
To post to this group, send email to redi...@googlegroups.com.
Visit this group at http://groups.google.com/group/redis-db.
For more options, visit https://groups.google.com/groups/opt_out.
Yes, my original intent was to distribute read load without using (much) more memory.
> Depending on how latency-sensitive your read ops are, that *might* be enough additional overhead to make your scheme infeasible.

I care more about throughput than latency, so I think it is OK for me.
I have been trying to think of use cases other than mine, but I haven't found many. I am not sure whether web page caching can make use of my idea; perhaps pages updated by other people can safely be served stale for a short time. HyperLogLog is already approximate, so staleness may not be a big problem there, but HLL is also not memory-hungry, so I'm not sure the savings matter. Maybe some people use a lot of HLL sets to count different kinds of objects?
I have finally found time to dive into the Redis source and implement the idea. I am not sure how to test it, though. Command handling is basically untouched, and most of the code shuts down subsystems in the child process. I want to verify that the subsystems (like AOF and replication) are shut down and do nothing, but how can I do that effectively?
Any code review would be very helpful. I have a strong feeling that some subsystem is lurking somewhere, not shut down properly.
As to the Redis source code, I do think it's easy to modify. However, IMHO, the subsystems are not encapsulated well enough. For example, Redis uses server.masterhost to check master/slave status, and the checks are everywhere, even in db.c. This makes me afraid that the read-only slave will wander into places I want to keep it out of. Does anybody else consider this a problem? I have some opinions I'd really like to share.