If you have created a replica set, I don't think you want this section.
Important: Replica sets (page 9) replace master-slave replication for most use cases. If possible, use replica sets rather than master-slave replication for all new production deployments. This documentation remains to support legacy deployments and for archival purposes only.
mongo -u admin -p${PASS} --authenticationDatabase admin <<EOF
cfg = rs.conf();
cfg.members[1].priority = 1;
cfg.members[1].hidden = false;
cfg.members[1].votes = 1;
rs.reconfig(cfg);
var start = new Date().getTime();
while (new Date().getTime() < start + 5000);
rs.stepDown( { "secondaryCatchUpPeriodSecs": 120 } )
EOF

On the slave, to promote:

mongo -u admin -p${PASS} --authenticationDatabase admin <<EOF
cfg = rs.conf();
cfg.members[0].priority = 0;
cfg.members[0].hidden = true;
cfg.members[0].votes = 0;
rs.reconfig(cfg);
EOF
On Wednesday, 4 May 2016 09:53:51 UTC+10, Mike Fisher wrote:
I have a question regarding the “Replication and MongoDB” manual release 3.2.4. I am using MongoDB version 3.2. I have created a Replica Set with two members running on different machines. I have configured the replica set to behave like a Master/Slave replication (see page 44).
Hi Mike,
The Master/Slave configuration is a deprecated legacy deployment option. The instructions you are referencing are specific to both Master/Slave and the MMAP storage engine, so they will not be helpful for a new MongoDB 3.2 replica set deployment, which will use WiredTiger as the default storage engine.
For a new deployment you should definitely be using replica sets.
However, I want to convert the slave member to be the new Master and the old Master to be the new slave. In section 2.6.4 of the “Replication and MongoDB” manual, there is a subsection entitled “Inverting Master and Slave”.
The usual goal of a replica set deployment is to enable automatic failover. Replica set primaries are maintained based on a quorum vote of configured members, so require a minimum of three members for automatic failover. If you only want to have two data-bearing members, there is the option of adding a third voting-only arbiter to enable a majority vote in the event of failure.
For more information please see: Three Member Replica Sets.
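For example, a voting-only arbiter can be added to an existing two member replica set from the mongo shell (the arbiter hostname and port below are placeholders):

```javascript
// Run while connected to the current primary.
// "arbiter.example.net:27017" is a placeholder for your arbiter host.
rs.addArb("arbiter.example.net:27017")
```

The arbiter stores no data but votes in elections, giving the set an odd number of votes so a majority can still be reached if one data-bearing member fails.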
A properly configured replica set deployment can automatically handle failover up to the level of fault tolerance you have provisioned. Generally you should consider all members of the replica set as peers in terms of resource and configuration (particularly if you only have three members), so that any data-bearing member is eligible to become primary.
If you really want to have a two member replica set with manual failover (which is highly discouraged) the equivalent helpers you’d be looking for are:
Use rs.stepDown() to force the current primary to become a secondary and trigger an election. In a two member replica set, this will only work if both members are healthy (a two member replica set does not provide any fault tolerance).
Use member priorities to influence the outcome of elections if there is a strong reason to favour a specific replica set member being the primary (for example, with a geographically distributed replica set).
If either member is down, you can force reconfiguration to create a new replica set with the surviving member.
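As a sketch, the two approaches above could look like this in the mongo shell (member indexes and priority values are illustrative, assuming member 1 is the preferred primary):

```javascript
// Influence elections via priorities: run against the current primary.
cfg = rs.conf();
cfg.members[0].priority = 1;
cfg.members[1].priority = 2;       // preferred primary gets the highest priority
rs.reconfig(cfg);

// If the other member is down, force a reconfiguration from the
// surviving member (run directly against that member). The force
// option bypasses the usual safety checks, so use it with care.
cfg = rs.conf();
cfg.members = [ cfg.members[0] ];  // keep only the surviving member
rs.reconfig(cfg, { force: true });
```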
Regards,
Stephen
On Thursday, 5 May 2016 04:01:41 UTC+10, Mike Fisher wrote:
So, it seems like there should be some mechanism to flush the original Master's write operation buffer to completion, sync the changes to the original slave, and then change roles. Is that what the rs.stepDown() function does?
Hi Mike,
Yes, this is effectively what rs.stepDown() does. By default rs.stepDown() waits 10 seconds for an electable secondary to catch up with any changes, but you can set a different period with secondaryCatchUpPeriodSecs, as per Rhys' example. If an electable secondary cannot catch up in this time period, the primary prior to the stepdown will be re-elected as the most current data source. If you reconfigure the replica set with a higher priority for your preferred primary, this will also have the effect of electing the member with the highest priority as primary once it is in sync.
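The catch-up period can also be passed as the second positional argument to the rs.stepDown() shell helper (the values below are illustrative):

```javascript
// Step down for 60 seconds, allowing secondaries up to 120 seconds
// to catch up before the stepdown attempt is abandoned.
rs.stepDown(60, 120)
```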
In the event of unexpected shutdown where changes haven't fully replicated to a secondary that becomes primary, a rollback will occur when the former primary rejoins the replica set. The rollback process reverts writes to make the former primary consistent with the current state of the replica set; documents that are rolled back are written to BSON files in a rollback/ directory for review.
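If you need to review rolled-back documents, the BSON files in that directory can be converted to readable JSON with the bsondump tool that ships with MongoDB (the file name below is an example only; actual names include the database, collection, and a timestamp):

```shell
# Convert a rolled-back collection's BSON file to JSON for review.
bsondump rollback/mydb.mycollection.2016-05-05T12-00-00.0.bson
```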
I want to be able to manually choose which data center will be failed over to in case of an outage or a planned application software upgrade.
FYI, you can also use replica set priorities for this purpose: set a higher priority for your primary data centre and your preferred primary member.
Manual failover and forced reconfiguration with two members will leave your deployment more exposed to downtime and rollbacks, but I assume that is acceptable for your use case.
I would also encourage you to use “primary” and “secondary” terminology for clarity that your deployment is a replica set. As noted earlier, Master/Slave is a different (and deprecated) deployment topology.
Regards,
Stephen