Master-Slave Equivalent using Replica Sets


Mike Fisher

May 3, 2016, 7:53:51 PM
to mongodb-user
I have a question regarding the "Replication and MongoDB" manual, release 3.2.4.  I am using MongoDB version 3.2.  I have created a Replica Set with two members running on different machines, and I have configured the replica set to behave like Master/Slave replication (see page 44).  However, I now want to convert the slave member to be the new Master and the old Master to be the new slave.  In section 2.6.4 of the "Replication and MongoDB" manual, there is a subsection entitled "Inverting Master and Slave".  Step 4 of "Inverting Master and Slave" says to move all of the data files that begin with "local" on B out of the dbpath.  However, when I look in the dbpath, I do not see any local.* files.  Where should I find the local.* files?  Is this a change in release 3.2?

Rhys Campbell

May 4, 2016, 3:39:40 AM
to mongodb-user
If you have created a replica set, I don't think you want this section.

From the manual 

Important: Replica sets (page 9) replace master-slave replication for most use cases. If possible, use replica sets rather than master-slave replication for all new production deployments. This documentation remains to support legacy deployments and for archival purposes only.

I've never used the Master->Slave stuff, but I have created a 2-node replica set, which serves essentially the same purpose. Failover can be performed entirely through the mongo shell without having to worry about copying any files. For example, the mongo shell commands below flip the roles of master and slave in a 2-node replica set.

On current master

mongo -u admin -p${PASS} --authenticationDatabase admin <<EOF
    cfg = rs.conf();
    cfg.members[1].priority = 1;
    cfg.members[1].hidden = false;
    cfg.members[1].votes = 1;
    rs.reconfig(cfg);
    var start = new Date().getTime();
    while (new Date().getTime() < start + 5000);
    rs.stepDown( { "secondaryCatchUpPeriodSecs": 120 } )
EOF

On slave to promote

mongo -u admin -p${PASS} --authenticationDatabase admin <<EOF
	cfg = rs.conf();
	cfg.members[0].priority = 0;
	cfg.members[0].hidden = true;
	cfg.members[0].votes = 0;
	rs.reconfig(cfg);
EOF


Rhys Campbell

May 4, 2016, 5:44:15 AM
to mongodb-user
Just a related note...

In cfg.members[X] make sure you are accessing the intended member. The order in your array might be different.
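One way to avoid relying on array order is to look up the index by host name. A quick illustrative sketch in plain JavaScript (the host names and sample cfg object here are hypothetical, standing in for what rs.conf() returns):

```javascript
// Sketch: locate a member's index in the replica set config by host name,
// rather than assuming a fixed position in cfg.members.
var cfg = {
  _id: "rs0",
  members: [
    { _id: 0, host: "db-a.example.com:27017", priority: 1 },
    { _id: 1, host: "db-b.example.com:27017", priority: 0 }
  ]
};

function memberIndex(cfg, host) {
  for (var i = 0; i < cfg.members.length; i++) {
    if (cfg.members[i].host === host) return i;
  }
  return -1; // not found
}

var idx = memberIndex(cfg, "db-b.example.com:27017");
// In the mongo shell you would then modify cfg.members[idx] and call rs.reconfig(cfg).
```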

Stephen Steneker

May 4, 2016, 8:01:27 AM
to mongodb-user

On Wednesday, 4 May 2016 09:53:51 UTC+10, Mike Fisher wrote:

I have a question regarding the “Replication and MongoDB” manual release 3.2.4. I am using MongoDB version 3.2. I have created a Replica Set with two members running on different machines. I have configured the replica set to behave like a Master/Slave replication (see page 44).

Hi Mike,

The Master/Slave configuration is a deprecated legacy deployment option. The instructions you are referencing are specific to both Master/Slave and the MMAP storage engine, so will not be helpful for a new MongoDB 3.2 replica set deployment which will be using WiredTiger as the default storage engine.

For a new deployment you should definitely be using replica sets.

However, I am wanting to convert the slave member to be the new Master and the old Master to be the new slave. In section 2.6.4 of the “Replication and MongoDB” manual, there is a subsection entitled “Inverting Master and Slave”.

The usual goal of a replica set deployment is to enable automatic failover. Replica set primaries are maintained based on a quorum vote of configured members, so require a minimum of three members for automatic failover. If you only want to have two data-bearing members, there is the option of adding a third voting-only arbiter to enable a majority vote in the event of failure.

For more information please see: Three Member Replica Sets.
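The quorum arithmetic behind this can be sketched in a few lines of JavaScript (an illustrative calculation of the election rule, not a MongoDB API):

```javascript
// Majority of voting members needed to elect a primary, and how many
// members can fail while still leaving a majority (fault tolerance).
function majority(votingMembers) {
  return Math.floor(votingMembers / 2) + 1;
}

function faultTolerance(votingMembers) {
  return votingMembers - majority(votingMembers);
}

faultTolerance(2); // 0 -- a two-member set cannot elect a primary if either member fails
faultTolerance(3); // 1 -- three members (or two data-bearing + arbiter) tolerate one failure
```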

A properly configured replica set deployment can automatically handle failover up to the level of fault tolerance you have provisioned. Generally you should consider all members of the replica set as peers in terms of resource and configuration (particularly if you only have three members), so that any data-bearing member is eligible to become primary.

If you really want to have a two member replica set with manual failover (which is highly discouraged) the equivalent helpers you’d be looking for are:

  • Use rs.stepDown(..) to force the current primary to become a secondary and trigger an election. In a two member replica set, this will only work if both members are healthy (a two member replica set does not provide any fault tolerance).

  • Use member priorities to influence the outcome of elections if there is a strong reason to favour a specific replica set member being the primary (for example, with a geographically distributed replica set).

  • If either member is down, you can force reconfiguration to create a new replica set with the surviving member.
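As a rough sketch of the forced-reconfiguration step above: you would trim the config down to the surviving member and pass it to rs.reconfig(cfg, { force: true }) in the mongo shell. The host names and helper below are hypothetical, just to illustrate the shape of the config manipulation:

```javascript
// Illustrative: reduce a replica set config to only the surviving member(s)
// before a forced reconfig. Mirrors what you would do in the mongo shell.
var cfg = {
  _id: "rs0",
  version: 3,
  members: [
    { _id: 0, host: "dc1-db.example.com:27017" }, // down
    { _id: 1, host: "dc2-db.example.com:27017" }  // surviving
  ]
};

function keepSurvivors(cfg, survivingHosts) {
  cfg.members = cfg.members.filter(function (m) {
    return survivingHosts.indexOf(m.host) !== -1;
  });
  return cfg;
}

var newCfg = keepSurvivors(cfg, ["dc2-db.example.com:27017"]);
// newCfg.members now contains only the surviving member;
// in the shell: rs.reconfig(newCfg, { force: true })
```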

Regards,
Stephen

Mike Fisher

May 4, 2016, 2:01:09 PM
to mongodb-user
Stephen,

     Thanks for the information, especially the reference to forced reconfiguration; that will come in handy.  However, I really don't want the automatic failover functionality of replica sets.  For my application, I will have a single database in one data center that will be my primary database.  I wish to replicate that database to other data centers, but I want to be able to manually choose which data center to fail over to in case of an outage or a planned application software upgrade.  So I would really like my replica set to behave like a Master/Slave configuration.

Mike Fisher

May 4, 2016, 2:01:41 PM
to mongodb-user
Rhys,
    Thanks for your reply to my post.  Your instructions worked great!  I was able to change roles for both the Master and the Slave members.  However, I have a question about the syncing of the two databases during this process.  Is it possible that a series of pending write operations on the original Master could be lost during the role reversal?  I ask because the old Master/Slave documentation for "Inverting Master and Slave" talks about halting writes on the original master using the fsync command, creating a new set of local files on the new Master, shutting down the original Master, copying the new Master's local files to the original Master's dbpath, and then restarting the original Master as a slave with the fastsync option.  So it seems like there should be some mechanism to flush the original Master's pending writes to completion, sync the changes to the original slave, and then change roles.  Is that what the rs.stepDown() function does?

Stephen Steneker

May 4, 2016, 4:56:30 PM
to mongodb-user

On Thursday, 5 May 2016 04:01:41 UTC+10, Mike Fisher wrote:

So, it seems like there should be some mechanism to flush out the original Masters write operation buffer to completion, synch the changes to the original slave and then change roles. Is that what the rs.stepDown() function does?

Hi Mike,

Yes, this is effectively what rs.stepDown() does. By default rs.stepDown() waits 10 seconds for an electable secondary to catch up with any changes, but you can set a different period with secondaryCatchUpPeriodSecs, as in Rhys' example. If no electable secondary can catch up within that period, the step down fails and the member with the most current data remains primary. If you reconfigure the replica set with a higher priority for your preferred primary, this will also have the effect of electing the member with the highest priority as primary once it is in sync.

In the event of unexpected shutdown where changes haven’t fully replicated to a secondary that becomes primary, a rollback will occur when the former primary rejoins the replica set. The rollback process reverts writes to make the former primary consistent with the current state of the replica set; documents that are rolled back are written to BSON files in a rollback/ directory for review.

I want to be able to manually choose which data center will be failed over to in case of an outage or a planned application software upgrade.

FYI, you can also use replica set priorities for this purpose: set a higher priority for your primary data centre and your preferred primary member.
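For illustration, the member the set will prefer as primary is the electable one with the highest priority. A plain JavaScript sketch of that selection rule (hypothetical hosts; real elections also depend on member health and oplog position, which this deliberately ignores):

```javascript
// Sketch: find the preferred primary, i.e. the electable member
// (priority > 0) with the highest priority value.
var cfg = {
  _id: "rs0",
  members: [
    { _id: 0, host: "dc1-db.example.com:27017", priority: 2 }, // preferred data centre
    { _id: 1, host: "dc2-db.example.com:27017", priority: 1 },
    { _id: 2, host: "dc1-arb.example.com:27017", priority: 0, arbiterOnly: true }
  ]
};

function preferredPrimary(cfg) {
  var best = null;
  cfg.members.forEach(function (m) {
    if (m.priority > 0 && (best === null || m.priority > best.priority)) {
      best = m;
    }
  });
  return best;
}

preferredPrimary(cfg).host; // the dc1 member, given its higher priority
```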

Manual failover and forced reconfiguration with two members will leave your deployment more exposed to downtime and rollbacks, but I assume that is acceptable for your use case.

I would also encourage you to use “primary” and “secondary” terminology for clarity that your deployment is a replica set. As noted earlier, Master/Slave is a different (and deprecated) deployment topology.

Regards,
Stephen

Mike Fisher

May 4, 2016, 5:00:08 PM
to mongodb-user


After looking into my question some more (see https://docs.mongodb.org/manual/reference/method/rs.stepDown/), the rs.stepDown() command does block writes to the primary and does wait for the secondary member to catch up before stepping down.
