Replica set with 3 members in 3 Data Centers with "1 Data center tolerance"

thomas....@prowebce.com

Feb 11, 2016, 6:33:35 AM
to mongodb-user
Hi everybody,

Considering this example:
data center 1: 1 primary, 1 secondary
data center 2: 1 secondary, 1 arbiter
data center 3: 1 arbiter


Is it possible to do the same with 3 members instead of 5? I think yes, but I have not seen this example anywhere:
data center 1: 1 primary
data center 2: 1 secondary
data center 3: 1 arbiter


Thank you very much,
Thomas

Stephen Steneker

Feb 11, 2016, 8:24:08 AM
to mongodb-user

On Thursday, 11 February 2016 22:33:35 UTC+11, thomas.jalabert wrote:

Considering this example:
data center 1: 1 primary, 1 secondary
data center 2: 1 secondary, 1 arbiter
data center 3: 1 arbiter


Is it possible to do the same with 3 members instead of 5? I think yes, but I have not seen this example anywhere:
data center 1: 1 primary
data center 2: 1 secondary
data center 3: 1 arbiter

Hi Thomas,

You generally should include at most one arbiter in a replica set. Since arbiters do not store any data, they count toward the voting majority of the replica set but cannot acknowledge writes. A replica set with one or more arbiters may have different failover scenarios depending on which members fail.
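
For example, in a P/S/A deployment with the secondary down, a write requesting w:"majority" (2 of 3 voting members) can never be acknowledged, because the arbiter holds no data. A minimal sketch in the mongo shell (the "orders" collection and document are just placeholders):

    // With the secondary unavailable, this insert waits for wtimeout and
    // returns a write concern error instead of being acknowledged at
    // w:"majority", since the arbiter cannot acknowledge writes.
    db.orders.insert(
        { sku: "abc123", qty: 1 },
        { writeConcern: { w: "majority", wtimeout: 5000 } }
    )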

Your first example would allow failure of any 1 DC. However, if DC2 fails the maximum write concern that could be acknowledged is w:2; if DC1 fails the maximum write concern that could be acknowledged is w:1. Despite having 5 members configured, this means your application can only safely assume that 1 data-bearing member will be available. A P/S/S configuration across three DCs would actually be more robust, as it would support w:2 or w:majority in the event of failure of any single member/DC.
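
For comparison, a sketch of initiating such a P/S/S deployment with one member per DC (hostnames are placeholders):

    rs.initiate({
      _id: "rs0",
      members: [
        { _id: 0, host: "dc1.example.net:27017" },  // DC1
        { _id: 1, host: "dc2.example.net:27017" },  // DC2
        { _id: 2, host: "dc3.example.net:27017" }   // DC3
      ]
    })

With all three members data-bearing, w:2 and w:majority can still be acknowledged after the loss of any single member/DC.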

Your second example of three members in different DCs is a common case which allows failure of any 1 DC. Again, you could improve resilience by having three data-bearing nodes (i.e. replace the arbiter with a secondary). If one of your data-bearing members is unavailable in a P/S/A configuration you will no longer have any active replication until the second data-bearing node returns, and you can only safely assume that a w:1 write concern can be acknowledged (allowing for potential failover).
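
If your P/S/A set is already running, swapping the arbiter for a secondary is a small change; a sketch, run against the primary, with placeholder hostnames:

    // Remove the DC3 arbiter and add a data-bearing secondary in its place.
    rs.remove("dc3-arbiter.example.net:27017")
    rs.add("dc3.example.net:27017")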

For more information see Three Member Replica Sets.

Regards,
Stephen

thomas....@prowebce.com

Feb 18, 2016, 9:23:26 AM
to mongodb-user
OK.
So, the recommended configurations would be the following?

data center 1: 1 primary, 1 secondary
data center 2: 1 secondary, 1 secondary
data center 3: 1 secondary

And

data center 1: 1 primary
data center 2: 1 secondary
data center 3: 1 secondary

In the first example, if we had an arbiter in place of the secondary in data center 3, then after the failure of 1 data center the maximum write concern that could be acknowledged would be w:2 and not w:3. Since a majority of the 5 voting members is 3, this means we don't prevent rollbacks of data that has been acknowledged to the client.
True?
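
For reference, here is how I would initiate the first layout (hostnames are just placeholders):

    rs.initiate({
      _id: "rs0",
      members: [
        { _id: 0, host: "dc1-a.example.net:27017" },  // DC1
        { _id: 1, host: "dc1-b.example.net:27017" },  // DC1
        { _id: 2, host: "dc2-a.example.net:27017" },  // DC2
        { _id: 3, host: "dc2-b.example.net:27017" },  // DC2
        { _id: 4, host: "dc3-a.example.net:27017" }   // DC3
      ]
    })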

Thank you,
Thomas