Two PRIMARY nodes on a three-member replica set


etyr...@gmail.com

Mar 2, 2016, 2:14:08 PM
to mongodb-user
Hi All,

I am using MongoDB 3.2.3. I am working on a distributed application with many nodes. We have a MongoDB replica set that we want to run on three of the application nodes. If we remove an application node that is running mongod, we spin up mongod on one of the nodes that is not running mongod, use replSetReconfig to add the new node to the replica set, and then call replSetReconfig again to remove the member we no longer want.
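The two-step sequence above can be sketched in plain JavaScript, using ordinary objects to stand in for the replica set config document. The helper names (`addMember`, `removeMember`) are hypothetical, not driver or shell APIs; in the mongo shell each resulting config would be applied with `db.adminCommand({ replSetReconfig: cfg })`.

```javascript
// Sketch of the two reconfig steps. addMember/removeMember are hypothetical
// helpers that build new config documents; every reconfig must bump "version".

function addMember(config, host) {
  // New member gets the next free _id after the current maximum.
  const nextId = Math.max(...config.members.map(m => m._id)) + 1;
  return {
    ...config,
    version: config.version + 1,
    members: [...config.members, { _id: nextId, host }],
  };
}

function removeMember(config, host) {
  return {
    ...config,
    version: config.version + 1,
    members: config.members.filter(m => m.host !== host),
  };
}

// Example mirroring the thread: add :27020, then remove :27019.
let cfg = {
  _id: "rs0",
  version: 3,
  members: [
    { _id: 0, host: "unununium.local:27017" },
    { _id: 1, host: "unununium.local:27018" },
    { _id: 2, host: "unununium.local:27019" },
  ],
};
cfg = addMember(cfg, "unununium.local:27020");    // version 4, four members, new _id 3
cfg = removeMember(cfg, "unununium.local:27019"); // version 5, back to three members
```

Note that two separate reconfig calls consume two version bumps (3 → 5), which matches the "configVersion" : 5 visible in the status output further down, and the new member receiving _id 3.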

The problem is that I end up with two PRIMARY nodes: the one that was originally the primary, and the node that was just added. So far I have only tried this when removing one of the SECONDARY members. Currently I am running all of the mongod instances on different ports on a single machine while I write and test my code, so it cannot be a network communication issue.

Here is the healthy replica set status before I do the add/remove:

rs0:PRIMARY> rs.status()
{
    "set" : "rs0",
    "date" : ISODate("2016-03-02T17:04:43.528Z"),
    "myState" : 1,
    "term" : NumberLong(1),
    "heartbeatIntervalMillis" : NumberLong(2000),
    "members" : [
        {
            "_id" : 0,
            "name" : "unununium.local:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 55083,
            "optime" : {
                "ts" : Timestamp(1456938256, 3),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2016-03-02T17:04:16Z"),
            "electionTime" : Timestamp(1456883201, 2),
            "electionDate" : ISODate("2016-03-02T01:46:41Z"),
            "configVersion" : 3,
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "unununium.local:27018",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 115,
            "optime" : {
                "ts" : Timestamp(1456938256, 3),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2016-03-02T17:04:16Z"),
            "lastHeartbeat" : ISODate("2016-03-02T17:04:42.558Z"),
            "lastHeartbeatRecv" : ISODate("2016-03-02T17:04:42.557Z"),
            "pingMs" : NumberLong(0),
            "syncingTo" : "unununium.local:27017",
            "configVersion" : 3
        },
        {
            "_id" : 2,
            "name" : "unununium.local:27019",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 67,
            "optime" : {
                "ts" : Timestamp(1456938256, 3),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2016-03-02T17:04:16Z"),
            "lastHeartbeat" : ISODate("2016-03-02T17:04:42.558Z"),
            "lastHeartbeatRecv" : ISODate("2016-03-02T17:04:42.648Z"),
            "pingMs" : NumberLong(0),
            "syncingTo" : "unununium.local:27018",
            "configVersion" : 3
        }
    ],
    "ok" : 1
}


And here is the status again after the add/remove, showing two primary nodes:

rs0:PRIMARY> rs.status()
{
    "set" : "rs0",
    "date" : ISODate("2016-03-02T17:06:33.484Z"),
    "myState" : 1,
    "term" : NumberLong(1),
    "heartbeatIntervalMillis" : NumberLong(2000),
    "members" : [
        {
            "_id" : 0,
            "name" : "unununium.local:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 55193,
            "optime" : {
                "ts" : Timestamp(1456938346, 3),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2016-03-02T17:05:46Z"),
            "electionTime" : Timestamp(1456883201, 2),
            "electionDate" : ISODate("2016-03-02T01:46:41Z"),
            "configVersion" : 5,
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "unununium.local:27018",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 225,
            "optime" : {
                "ts" : Timestamp(1456938346, 3),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2016-03-02T17:05:46Z"),
            "lastHeartbeat" : ISODate("2016-03-02T17:06:32.722Z"),
            "lastHeartbeatRecv" : ISODate("2016-03-02T17:06:32.721Z"),
            "pingMs" : NumberLong(0),
            "syncingTo" : "unununium.local:27017",
            "configVersion" : 5
        },
        {
            "_id" : 3,
            "name" : "unununium.local:27020",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 44,
            "optime" : {
                "ts" : Timestamp(1456938347, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2016-03-02T17:05:47Z"),
            "lastHeartbeat" : ISODate("2016-03-02T17:06:32.722Z"),
            "lastHeartbeatRecv" : ISODate("2016-03-02T17:06:32.050Z"),
            "pingMs" : NumberLong(0),
            "electionTime" : Timestamp(1456938346, 2),
            "electionDate" : ISODate("2016-03-02T17:05:46Z"),
            "configVersion" : 5
        }
    ],
    "ok" : 1
}


Any ideas? Would it be better to do the add and the remove in one call to replSetReconfig? Or perhaps there is something I should wait for between doing the add and the remove?
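For reference, the add and remove could be collapsed into a single config change. A minimal sketch, again with plain objects standing in for the config document (`swapMember` is a hypothetical helper, not a driver or shell API); the result would be passed once to `replSetReconfig`, so the set never passes through an intermediate four-member config:

```javascript
// Replace one member's host in a single reconfig step.
function swapMember(config, oldHost, newHost) {
  const nextId = Math.max(...config.members.map(m => m._id)) + 1;
  return {
    ...config,
    version: config.version + 1,           // single version bump
    members: config.members
      .filter(m => m.host !== oldHost)      // drop the departing member
      .concat([{ _id: nextId, host: newHost }]), // add the replacement
  };
}

const cfg = swapMember(
  {
    _id: "rs0",
    version: 3,
    members: [
      { _id: 0, host: "unununium.local:27017" },
      { _id: 1, host: "unununium.local:27018" },
      { _id: 2, host: "unununium.local:27019" },
    ],
  },
  "unununium.local:27019",
  "unununium.local:27020"
);
// cfg has version 4 and still exactly three members.
```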

Thanks in advance for any suggestions.

Ed

Kevin Adistambha

Mar 6, 2016, 11:18:55 PM
to mongodb-user

Hi Ed,

If you executed rs.initiate() on the new node before it was added to the existing replica set, you might be running into SERVER-22287, which is fixed in the upcoming version 3.2.4.

If you did not execute rs.initiate() on the new node, could you please post the exact sequence of commands that you executed?
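For comparison, a minimal sketch of the usual join sequence (the dbpath is a placeholder; the hostnames and ports are taken from the thread). The key point is that the joining node is started empty, and rs.initiate() is never run on it — only the existing primary issues the membership changes:

```
# On the new node: start mongod with the replica set name only.
# Do NOT run rs.initiate() here.
mongod --port 27020 --dbpath /data/rs0-3 --replSet rs0

# In a mongo shell connected to the current primary:
rs.add("unununium.local:27020")      // add the new member
rs.remove("unununium.local:27019")   // then remove the old one
```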

Best regards,
Kevin
