I have an autoscaling group that only reads from our redis database. To keep the latency as low as possible, I'd like to have a slave on each of these instances. My question is, when does it become inefficient to have a slave per server in an autoscaling group and would that play nicely with instances constantly going up and being terminated? My AS (autoscaling) group currently has about 50 instances. Also, how taxing is replication on each of the servers in the AS group?
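To make that concrete, the setup I have in mind is just a local read-only slave on each instance, with the app reading from 127.0.0.1. Roughly like this (hostnames are placeholders):

    # /etc/redis/redis.conf on each autoscaling instance
    slaveof redis-master.internal 6379
    slave-read-only yes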
Every time your group scales up, each new instance attaches as a new slave. Unless the master already has a background save in progress that the new slave can piggyback on, each new slave triggers a BGSAVE on the master plus a full transfer of the dumped database to that slave. From there, every write command is replicated out to all the slaves.
When a slave connects, it sends a SYNC command. It doesn't matter whether it's the first time it has connected or a reconnection; the full synchronization happens either way.
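If you want to see what that costs in practice, you can watch the master's sync counters as instances come and go. A rough sketch with redis-py (the hostname is a placeholder, and the sync_full counter assumes Redis 2.8+):

    import redis

    # Connect to the master (hostname is a placeholder).
    master = redis.Redis(host='redis-master.internal', port=6379)

    # Each full sync is a BGSAVE plus a full RDB transfer to one slave.
    stats = master.info('stats')
    print('full syncs so far:', stats.get('sync_full'))

    repl = master.info('replication')
    print('connected slaves:', repl.get('connected_slaves'))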
How short are your deadlines?
Within about a week we'd like at least a workable improved solution, but the project can be ongoing.

How much total data do you have?
Our biggest database has an upper limit of 10 million or so (which may increase as we scale).

How much data churn do you have?
I assume this means how often data is deleted. All data becomes useless 24 hours after we first insert a key; after that it's erased.

How sensitive is your app to stale data?
We basically increment a count for each of these keys, but our reads don't need to be perfect. In short, we can tolerate somewhat stale data: if the actual count is 3, reading 2 is OK, but if the actual count is 10, reading 2 is not OK.
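To make the workload concrete, our write path is roughly this shape (key names are made up, and we currently use hset, but hincrby shows the same idea):

    import redis

    r = redis.Redis(host='127.0.0.1', port=6379)

    def record_event(item_id, field):
        key = 'counts:%s' % item_id  # hypothetical key layout
        # Increment the per-field count for this key.
        r.hincrby(key, field, 1)
        # Data is useless 24 hours after the key is first written, so
        # set the TTL only if the key doesn't have one yet (this check
        # has a small race, but two clients setting the same TTL is harmless).
        if r.ttl(key) == -1:
            r.expire(key, 24 * 60 * 60)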
Also, I've read a few of your articles (one on bloom filters, one on why you didn't use bloom filters) and have tried to think of ways to apply them, but haven't come up with anything that fits really well. If you want more details, I'd be more than happy to chat over Google+ or whatever works.
How short are your deadlines?
Yeah, that makes a lot more sense. We'd like to keep it under or around 2 milliseconds. We're running hgets for reads (which could probably be turned into a single hgetall, actually) and hsets for writes. Those are the only two commands we run.
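For reference, the read-side change I mentioned is just collapsing the separate hgets into one hgetall, which turns several round trips into a single one (key names are made up):

    import redis

    r = redis.Redis(host='127.0.0.1', port=6379)

    # Today: one round trip per field.
    clicks = r.hget('counts:123', 'clicks')
    views = r.hget('counts:123', 'views')

    # Instead: one round trip for the whole hash.
    counts = r.hgetall('counts:123')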
How much RAM does it use?
The biggest DB peaks at ~2 GB (usually much lower, though). The 3-4 others are only around ~200 MB each.
Your last idea sounds really promising, since it would take replication load off the master even as the AS group scales up and the databases grow (the tree structure you mentioned). What do you mean by "IO write scaling", though?
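For anyone following along, my understanding of the tree structure is that the autoscaling instances would replicate from an intermediate slave rather than from the master, something like this (hostnames are placeholders):

    # The intermediate slave replicates from the master:
    redis-cli -h relay-1.internal slaveof redis-master.internal 6379

    # Each autoscaling instance's local slave replicates from the
    # intermediate, so new instances never trigger a BGSAVE on the master:
    redis-cli slaveof relay-1.internal 6379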