Thank you for your post; I appreciate the feedback, but I think we are missing each other on what I was originally trying to achieve.

Conflict resolution
----------------------------------
Sorry if I haven't been clear. From my first post:

<firstPost>
For any concurrency issues (if they bother your application), create three pools with a load balancer (with custom monitors):

Redis-Read
Redis-Write
Redis-All

Redis-Write would be in failover mode, so Node1 -> Node2 -> Node3.
Redis-Read would be all the same priority, or something like Node2|Node3 -> Node1 (since Node1 is handling writes).
Anything you need to read immediately after you write goes to "Redis-Write" (for both the read and the write).
For slower-moving data that you only want to read, or where concurrency/latency isn't an issue for your writes, go to "Redis-Read" or "Redis-All" depending on your situation.
</firstPost>

In *NO* way am I proposing a MASTER/MASTER setup where it is okay to write to *any* master and expect conflict resolution to take place. It depends on your data and what you are trying to do. For the most part it works just like it does now with MASTER -> SLAVE -> SLAVE.

For writes, what I am proposing actually looks more like this:

MASTER <-> SLAVE1 <-> SLAVE2

You only write to the master unless there is a failure/downtime/patching, in which case the load balancer sends writes to SLAVE1. All reads where "concurrency matters" (i.e. you write something and immediately read it back) would also go there. You still have the option, though, to write to SLAVE1 or SLAVE2 if your application doesn't care (counters, for instance, or unique views of an email, or whatever).
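To make the read/write split concrete, here is a rough sketch of what the application side could look like (Python with redis-py; the hostnames, key names, and function names are made up for illustration, and the last case assumes the replicas are configured to accept writes):

import redis

# Sketch only: each hypothetical hostname would resolve to one of the
# load-balancer pools described above.
write_pool = redis.Redis(host="redis-write.example.local", port=6379)
read_pool = redis.Redis(host="redis-read.example.local", port=6379)

def save_and_confirm_order(order_id, payload):
    # Read-after-write data: both the write and the immediate read go
    # through the Redis-Write pool, so they always hit the same node.
    write_pool.set(f"order:{order_id}", payload)
    return write_pool.get(f"order:{order_id}")

def get_page_views(page_id):
    # Slower-moving data where a slightly stale value is fine: read from
    # the Redis-Read pool (any node).
    return read_pool.get(f"views:{page_id}")

def record_unique_view(page_id, visitor_id):
    # The "application doesn't care" case (counters, unique views): the
    # write can go to any node. Note this assumes the replicas allow
    # writes (slave-read-only / replica-read-only turned off), which is
    # not the default.
    read_pool.pfadd(f"unique:{page_id}", visitor_id)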
When the master comes back up, it gets the updates it missed and becomes master once more (the load balancer moves traffic back to it).
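That "load balancer moves traffic back" step is what the custom monitors on the pools above would drive. As a rough sketch of what such a monitor could check (Python/redis-py again, function names made up, not a finished implementation), it could simply look at each node's replication role and link status:

import redis

def is_healthy_master(host, port=6379):
    # Hypothetical monitor for the Redis-Write pool: the node must answer
    # PING and report role:master in INFO replication.
    try:
        node = redis.Redis(host=host, port=port, socket_timeout=1)
        return node.ping() and node.info("replication").get("role") == "master"
    except redis.RedisError:
        return False

def is_healthy_replica(host, port=6379):
    # Hypothetical monitor for the Redis-Read pool: the node must be a
    # replica with an established link back to the master.
    try:
        info = redis.Redis(host=host, port=port, socket_timeout=1).info("replication")
        return info.get("role") == "slave" and info.get("master_link_status") == "up"
    except redis.RedisError:
        return False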
RAM is Cheap
------------------------
Both setups have the same total RAM; setup 1 will not be lopsided and can use all of the RAM available. I do believe setup 2 would cost more (and carry more maintenance costs), but I haven't done the math. Now, I will admit setup 2 will scale a hell of a lot better (if you have crazy requirements).
Network reliability
--------------------------------
Now, I do think this was somewhat taken out of context. I in no way think networks are reliable... I read the study and it was like reading a study saying "water is wet". Of course it is. Each device has a certain reliability factor; the more devices you add, the more unreliable the whole becomes. 1 + 1 = 2. What I was talking about, in an earlier post, was network partitions on a much smaller scale: just two switches between three Redis nodes.
<previousPost>
Network Partition / Split brain
--------------------------------------
There is just no good way to deal with this, IMO. It is always messy/complicated. To get around it I've seen attempts where there is a cable connecting the two boxes for the heartbeat/information exchange. While this is a good way to make splits unlikely, it is not a very scalable approach.

The way we tend to do it is:

Switch1 ----------- Switch2
   A                   A
   B                   B
   C                   C

Network cards are set up in a failover configuration (two ports). All traffic from A, B, and C goes through Switch1, unless Switch1 dies or the network card does; then it goes through Switch2.

If Switch2 dies, oh well.
If Switch1 dies, fail over to Switch2.
If a port on Switch1 dies, that node will route through Switch2 and still be able to connect to the other members.
</previousPost>
Am I wrong in assuming this can be considered reliable?
Having said all this, I am now not sure if I should even continue the project. Not that it cannot be done, as it is actually fairly easy, but there is a possibility of people getting burned by not understanding it and then looking unfavorably on Redis (e.g. like people who complain about MySQL when they used a non-ACID storage engine, lost data, and concluded MySQL sucks... possibly a bad example, but you get the point).
I like the Redis project (though I am new to it) and would love to use it in my production environment, but it has some thorns that make it somewhat distasteful, so I was trying to resolve them. :\
Best regards.