Keeping an Odd Number of Replica Set Members within a Cluster


A. Jalil @AJ

Sep 26, 2015, 12:08:47 AM
to mongodb-user
Hi Stephen,

I got my RS0 replica set migrated successfully to AWS. But I read somewhere in the MongoDB docs that we are supposed to keep an odd number of members in each replica set. Right now my RS0 has 6 nodes: the 3 old nodes and the 3 new nodes I just added on AWS, which you can see below. To keep an odd number of nodes in RS0, I went ahead and shut down the node server0-3.com, which is why it shows [ not reachable/healthy ] below (I stopped mongod on this server). But when I check the shards, I still see 6 nodes in RS0 (3 old nodes + 3 new AWS nodes), plus the 3 old nodes from RS1, which is what I expected. So I was wondering: is this the proper way to do this, or should I remove this node completely from the RS0 replica set as well as from the sharded cluster?

Please note that eventually I will remove the old nodes completely from the cluster, and things will go back to an odd number of members, but for now I'd like to wait until I see data replicating successfully to the AWS nodes before removing the old ones. I know I could add an arbiter as an alternative, but I don't want to add more work here since I will be removing all the old nodes anyway.


rs0:PRIMARY> rs.status()
{
        "set" : "rs0",
        "date" : ISODate("2015-09-26T02:56:33Z"),
        "myState" : 1,
        "members" : [
                {
                        "_id" : 3,
                        "name" : "server0-1.com:27017",                                                 =>  Old RS0 server-1
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 78490,
                        "optime" : Timestamp(1443235169, 1),
                        "optimeDate" : ISODate("2015-09-26T02:39:29Z"),
                        "electionTime" : Timestamp(1443157717, 1),
                        "electionDate" : ISODate("2015-09-25T05:08:37Z"),
                        "self" : true
                },
                {
                        "_id" : 4,
                        "name" : "server0-2.com:27017",                                              =>  Old RS0 server-2
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 78489,
                        "optime" : Timestamp(1443235169, 1),
                        "optimeDate" : ISODate("2015-09-26T02:39:29Z"),
                        "lastHeartbeat" : ISODate("2015-09-26T02:56:32Z"),
                        "lastHeartbeatRecv" : ISODate("2015-09-26T02:56:32Z"),
                        "pingMs" : 1,
                        "syncingTo" : "server0-1.com:27017"
                },
                {
                        "_id" : 5,
                        "name" : "server0-3.com:27017",                                                 =>  Old RS0 server-3
                        "health" : 0,
                        "state" : 8,
                        "stateStr" : "(not reachable/healthy)",                                  (not reachable cz I shutdown mongo)
                        "uptime" : 0,
                        "optime" : Timestamp(1443235169, 1),
                        "optimeDate" : ISODate("2015-09-26T02:39:29Z"),
                        "lastHeartbeat" : ISODate("2015-09-26T02:56:33Z"),
                        "lastHeartbeatRecv" : ISODate("2015-09-26T02:56:09Z"),
                        "pingMs" : 0,
                        "syncingTo" : "server0-1.com:27017"
                },
                {
                        "_id" : 6,
                        "name" : "server0-2-AWS.com:27017",                                              => The new server-2 I added on AWS
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 78489,
                        "optime" : Timestamp(1443235169, 1),
                        "optimeDate" : ISODate("2015-09-26T02:39:29Z"),
                        "lastHeartbeat" : ISODate("2015-09-26T02:56:33Z"),
                        "lastHeartbeatRecv" : ISODate("2015-09-26T02:56:33Z"),
                        "pingMs" : 1,
                        "syncingTo" : "server0-1.com:27017"
                },
                {
                        "_id" : 7,
                        "name" : "server0-3-AWS.com:27017",                                              => The new server-3 I added on AWS
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 78487,
                        "optime" : Timestamp(1443235169, 1),
                        "optimeDate" : ISODate("2015-09-26T02:39:29Z"),
                        "lastHeartbeat" : ISODate("2015-09-26T02:56:33Z"),
                        "lastHeartbeatRecv" : ISODate("2015-09-26T02:56:32Z"),
                        "pingMs" : 1,
                        "syncingTo" : "server0-1.com:27017"
                },
                {
                        "_id" : 8,
                        "name" : "server0-1-AWS.com:27017",                                         => The new server-1 I added on AWS
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 1024,
                        "optime" : Timestamp(1443235169, 1),
                        "optimeDate" : ISODate("2015-09-26T02:39:29Z"),
                        "lastHeartbeat" : ISODate("2015-09-26T02:56:32Z"),
                        "lastHeartbeatRecv" : ISODate("2015-09-26T02:56:32Z"),
                        "pingMs" : 0,
                        "syncingTo" : "server0-1.com:27017"
                }
        ],
        "ok" : 1
}



> And when I run rs.conf(), I still see the server, despite the fact that I shut down mongod on it:

rs0:PRIMARY> rs.conf()
{
        "_id" : "rs0",
        "version" : 24,
        "members" : [
                {
                        "_id" : 3,
                        "host" : "server0-1.com:27017",                         =>  Old RS0 server
                        "priority" : 100
                },
                {
                        "_id" : 4,
                        "host" : "server0-2.com:27017",                        =>  Old RS0 server
                        "priority" : 50
                },
                {
                        "_id" : 5,
                        "host" : "server0-3.com:27017",                       =>  Old RS0 server -  I am still seeing the server in config even though I stopped mongo on this server 
                        "priority" : 50
                },
                {
                        "_id" : 6,
                        "host" : "server0-2-AWS.com:27017"                 =>  the new server I added in AWS
                },
                {
                        "_id" : 7,
                        "host" : "server0-3-AWS.com:27017"                =>  the new server I added in AWS
                },
                {
                        "_id" : 8,
                        "host" : "server0-1-AWS.com:27017"                =>  the new server I added in AWS
                }
        ],
        "settings" : {
                "getLastErrorDefaults" : {
                        "w" : 1,
                        "wtimeout" : 0
                }
        }
}



Thank you so much !
@AJ

Stephen Steneker

Sep 26, 2015, 4:26:39 AM
to mongodb-user

On Saturday, 26 September 2015 14:08:47 UTC+10, A. Jalil @AJ wrote:

I got my RS0 replica set migrated successfully to AWS. But I read somewhere in the MongoDB docs that we are supposed to keep an odd number of members in each replica set. Right now my RS0 has 6 nodes: the 3 old nodes and the 3 new nodes I just added on AWS, which you can see below. To keep an odd number of nodes in RS0, I went ahead and shut down the node server0-3.com, which is why it shows [ not reachable/healthy ] below (I stopped mongod on this server). But when I check the shards, I still see 6 nodes in RS0 (3 old nodes + 3 new AWS nodes), plus the 3 old nodes from RS1, which is what I expected. So I was wondering: is this the proper way to do this, or should I remove this node completely from the RS0 replica set as well as from the sharded cluster?

Hi AJ,

Shutting down a node does not remove it from the replica set configuration -- this just changes the current state of the node as tracked in rs.status().

A six-node configuration with one member unavailable still requires a strict majority (number-of-nodes/2 + 1) to successfully elect a primary, which would be a minimum of 4 votes assuming the default configuration. The guidance on an odd number of members is a general suggestion to ensure a primary can be elected and your applications can satisfy “majority” (aka replica_safe) write concern. The election mechanics are more nuanced than just the number of nodes; it would be worth having a read of the Replica Set Elections documentation for more detail.

Since you are planning to decommission the old nodes, I would rs.remove() the shut-down node server0-3.com to bring you back to an odd number of nodes (with the majority being your new AWS nodes). Any changes to the replica set config (e.g. adding or removing nodes) will propagate to the sharded cluster it is part of.
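
For example, run against the current primary:

    rs.remove("server0-3.com:27017")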

You could also take the further step of changing the remaining old nodes in rs0 to priority 0 (not eligible to become primary) and hidden: true (not visible to clients). After this configuration change all of the old nodes would still participate in replication until you are ready to decommission them, your primary would always be a node on AWS, and your application clients would always be reading from the AWS nodes. You can make all of these changes in a single configuration update using rs.reconfig().
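
A minimal sketch of that reconfig (assuming the two remaining old nodes sit at array indexes 0 and 1 after the removal -- verify against your own rs.conf() output first):

    cfg = rs.conf()
    cfg.members[0].priority = 0    // old node: not eligible to become primary
    cfg.members[0].hidden = true   // old node: hidden from clients
    cfg.members[1].priority = 0
    cfg.members[1].hidden = true
    rs.reconfig(cfg)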

Regards,
Stephen

A. Jalil @AJ

Sep 26, 2015, 5:36:26 PM
to mongodb-user
Great, I removed the old member successfully using rs.reconfig(). Now I have 5 nodes, and things are good so far.

Now I am trying to make one of the new nodes on AWS eligible to become PRIMARY (changing its priority from zero to non-zero), but I am getting an error: "Cannot set property 'priority' of undefined".

As you can see in my config below, all my AWS nodes' priorities are set to ZERO; I need to change at least one of them to greater than 0 so one of them can become PRIMARY.

Here is what I did:
cfg = rs.conf();
cfg.members[8].priority = 1000;              => trying to set the priority of member[8], the node [ server-1.aws.com ], to 1000
rs.reconfig(cfg);

> I got this Error:
rs0:PRIMARY> cfg.members[8].priority = 1000
2015-09-26T16:19:50.367-0500 TypeError: Cannot set property 'priority' of undefined


Please note, I tried the reverse by resetting the current PRIMARY's priority from 100 to ZERO. The command ran fine (see below), but when I checked the status afterwards, no changes were applied:
rs0:PRIMARY> cfg.members[3].priority = 0;
0
rs0:PRIMARY> rs.reconfig(cfg);
{ "ok" : 1 }

I also noticed member[6]'s priority is not showing... not sure why?


rs0:PRIMARY> rs.conf()
{
        "_id" : "rs0",
        "version" : 29,
        "members" : [
                {
                        "_id" : 3,
                        "host" : "server-01.com:27017",
                        "priority" : 100
                },
                {
                        "_id" : 4,
                        "host" : "server-02.com:27017",
                        "priority" : 50
                },
                {
                        "_id" : 6,
                        "host" : "server-02.aws.com:27017"
                },
                {
                        "_id" : 7,
                        "host" : "server-03.aws.com:27017",
                        "priority" : 0
                },
                {
                        "_id" : 8,
                        "host" : "server-1.aws.com:27017",
                        "priority" : 0
                }
        ],
        "settings" : {
                "getLastErrorDefaults" : {
                        "w" : 1,
                        "wtimeout" : 0
                }
        }
}


Thank you !

@AJ

Stephen Steneker

Sep 26, 2015, 8:07:53 PM
to mongodb-user

On Sunday, 27 September 2015 07:36:26 UTC+10, A. Jalil @AJ wrote:

Great, I removed the old member successfully using rs.reconfig(). Now I have 5 nodes, and things are good so far.

Now I am trying to make one of the new nodes on AWS eligible to become PRIMARY (changing its priority from zero to non-zero), but I am getting an error: "Cannot set property 'priority' of undefined".

As you can see in my config below, all my AWS nodes' priorities are set to ZERO; I need to change at least one of them to greater than 0 so one of them can become PRIMARY.

Here is what I did:
cfg = rs.conf();
cfg.members[8].priority = 1000;              => trying to set the priority of member[8], the node [ server-1.aws.com ], to 1000
rs.reconfig(cfg);

Hi AJ,

When you retrieve the config to edit in the shell, it will be a JavaScript object. members is a 0-indexed array, so if you want to edit your node with _id: 8, it is currently cfg.members[4].
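
For example, the corrected version of your earlier commands would look something like:

    cfg = rs.conf()
    cfg.members[4]                   // check first: this should print the member with "_id" : 8
    cfg.members[4].priority = 1000
    rs.reconfig(cfg)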

Please note, I tried the reverse by resetting the current PRIMARY's priority from 100 to ZERO. The command ran fine (see below), but when I checked the status afterwards, no changes were applied:
rs0:PRIMARY> cfg.members[3].priority = 0;
0
rs0:PRIMARY> rs.reconfig(cfg);
{ "ok" : 1 } 
 
I also noticed member[6]'s priority is not showing... not sure why?

Priorities only show in the rs.conf() output if changed from the default (priority 1).

So I think what you want to do is set your old nodes to priority 0 and your AWS nodes to priority 1 (all equally likely to be elected primary, unless you actually have preferences):

    cfg = rs.conf()
    cfg.members[0].priority = 0   // old node
    cfg.members[1].priority = 0   // old node
    cfg.members[2].priority = 1   // AWS node
    cfg.members[3].priority = 1   // AWS node
    cfg.members[4].priority = 1   // AWS node
    rs.reconfig(cfg)

If you are making a number of config changes at once, you will probably find it easier to copy the config into an external editor and then paste it back into the shell.

You can also launch an external editor from the shell if you have the EDITOR environment variable set.

Regards,
Stephen

A. Jalil @AJ

Sep 26, 2015, 9:23:01 PM
to mongodb-user
I tried your suggestion many times, trying to update the AWS node's priority to 1, but I kept getting this error:   Cannot set property 'priority' of undefined

> So, what I ended up doing is:

1. stepDown              (old server-01.com is no longer primary)
2. removed the node server-01.com
3. re-added it using the _id and priority options, like so:

rs0:PRIMARY> rs.stepDown()
rs0:PRIMARY> rs.remove("server-01.com:27017")
rs0:PRIMARY> rs.add({_id: 10, host: "server-01.com:27017", priority: 0, hidden: false})

Now the old server server-01.com is set to priority 0, so it should not become primary again, and after I ran stepDown again, the new AWS server became primary, which is what I wanted to accomplish. Now that the old servers are set to 0, I can remove them anytime and keep the new AWS servers.

But even after getting things working the way I want, for some reason I still can't update the priority directly using the command below:

cfg.members[10].priority = 0
Cannot set property 'priority' of undefined

The lesson learned: initially, when I added the 3 AWS nodes, they all got priority zero by default, so next time, when I add new AWS nodes to the 2nd replica set RS1, I will make sure to use rs.add with a priority > 0 so any one of them can be elected primary after I step down the old primary, e.g.:  rs.add({_id: 10, host: "server-aws-01.com:27017", priority: 5, hidden: false})


Thanks again !
@AJ




Stephen Steneker

Sep 27, 2015, 12:13:38 AM
to mongodb-user

On Sunday, 27 September 2015 11:23:01 UTC+10, A. Jalil @AJ wrote:

But even after getting things working the way I want, for some reason I still can't update the priority directly using the command below:

cfg.members[10].priority = 0
Cannot set property 'priority' of undefined

Hi AJ,

As I noted in my previous response on this thread, when you retrieve the output of rs.conf() to edit in the shell, it will be a JavaScript object with cfg.members as a 0-indexed array. The range of indexes for a 5-member array is 0..4, so cfg.members[10] does not exist. You have to use the index into the array; the _id of the replica set node is a field of the element at that index.

Before setting the priority, try checking what the array element looks like. For example, the last element in your 5-node members array would be: cfg.members[4].
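
If you would rather not count array positions by hand, you can also look up the index from the _id, e.g.:

    cfg = rs.conf()
    // find the array index of the member whose _id is 8
    var i = cfg.members.map(function(m) { return m._id }).indexOf(8)
    cfg.members[i].priority = 0
    rs.reconfig(cfg)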

The lesson learned: initially, when I added the 3 AWS nodes, they all got priority zero by default, so next time, when I add new AWS nodes to the 2nd replica set RS1, I will make sure to use rs.add with a priority > 0 so any one of them can be elected primary after I step down the old primary, e.g.:  rs.add({_id: 10, host: "server-aws-01.com:27017", priority: 5, hidden: false})

You can change the priority with rs.reconfig() at any time, using the correct syntax. In general I would add nodes with the default config to keep things understandable, so you should not specify “hidden: false” or an _id when adding new nodes. A priority is fine if you really have a preference on which node becomes primary. In general you should think of the replica set nodes as peers, so they should all be equally provisioned and capable of the same workload in the event of failover.

Regards,
Stephen

A. Jalil @AJ

Sep 27, 2015, 1:55:28 AM
to mongodb-user
Yeah, I misunderstood that; I thought I could set the priority to any number > 0 for a primary to get elected.

Here is my conf as it stands now. I removed all the old nodes and only have the new AWS nodes. But the priorities look messed up, I think. Based on what you see below, what should I change them to?

Please note: id 8 is secondary, id 9 is primary, and id 10 is secondary.


rs0:PRIMARY> rs.conf()
{
        "_id" : "rs0",
        "version" : 40,
        "members" : [
                {
                        "_id" : 8,
                        "host" : "server-01.aws.com:27017",
                        "priority" : 2
                },
                {
                        "_id" : 9,
                        "host" : "server-02.aws.com:27017",               <=  primary
                        "priority" : 10
                },
                {
                        "_id" : 10,
                        "host" : "server-03.aws.com:27017",
                        "priority" : 6

A. Jalil @AJ

Sep 27, 2015, 2:11:43 AM
to mongodb-user
I was wondering if I should remove the secondaries one at a time and add them again without using _id or hidden: false, then step down the primary and do the same thing. Do you have a better suggestion?

Thanks.

Stephen Steneker

Sep 27, 2015, 2:25:04 AM
to mongodb-user
On Sunday, 27 September 2015 15:55:28 UTC+10, A. Jalil @AJ wrote:
Yeah, I misunderstood that; I thought I could set the priority to any number > 0 for a primary to get elected.

Hi AJ,

Yes, you can use any number for priority. The priorities are relative, so in your current config the preferred order is server-02 (priority 10), then server-03 (priority 6), then server-01 (priority 2).
 
Here is my conf as it stands now. I removed all the old nodes and only have the new AWS nodes. But the priorities look messed up, I think. Based on what you see below, what should I change them to?

The priority settings are fine if you have some preference on the order in which nodes become primary. However, in the normal case you should have all nodes in the replica set configured with equal resources and priority, so that any node is eligible to become primary.

Having priorities set may introduce more elections, depending on the order in which nodes become unavailable. For example:
  - server-02 (priority 10) is stepped down to upgrade MongoDB versions
  - server-03 (priority 6) is now the preferred primary, and should be elected if available and up to date
  - server-02 is restarted; after catching up via oplog sync, server-02 will become the primary again

In this example, the last election (when server-02 returns) wouldn't be necessary if all nodes had equal priority. Elections impact write availability for your application.

Priority is usually only set in special circumstances: for example, you don't want some nodes to become primary, so you set them to 0 (eg. the nodes you were migrating away from), or you have a preference for which nodes are eligible to become primary because the replica set spans multiple data centres or availability zones (eg. you want to favour nodes in the primary data centre).
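
As a sketch of the multi-data-centre case (a hypothetical layout, assuming members[0] and members[1] are in the preferred data centre):

    cfg = rs.conf()
    cfg.members[0].priority = 2   // preferred data centre
    cfg.members[1].priority = 2   // preferred data centre
    cfg.members[2].priority = 1   // other data centre
    rs.reconfig(cfg)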

So you can leave your config as-is, or consider removing the priority settings so all nodes are equally eligible to become primary.

Regards,
Stephen

Stephen Steneker

Sep 27, 2015, 2:28:02 AM
to mongodb-user

On Sunday, 27 September 2015 16:11:43 UTC+10, A. Jalil @AJ wrote:

I was wondering if I should remove the secondaries one at a time and add them again without using _id or hidden: false, then step down the primary and do the same thing. Do you have a better suggestion?

Hi AJ,

There’s no need to remove & resync the nodes. The only change I would consider is removing the priority settings (via rs.reconfig()), since they likely aren't necessary. Your latest replica set config has only the 3 nodes in AWS, and the only non-default config is the priority settings.

Regards,
Stephen

A. Jalil @AJ

Sep 27, 2015, 12:54:11 PM
to mongodb-user
OK, so let's take this example, _id: 8:

 "_id" : 8,
      "host" : "server-01.aws.com:27017",
       "priority" :  2


To update the priority to 1, I would run these commands:


cfg = rs.conf()
cfg.members[8].priority = 1
rs.reconfig(cfg)


But how do I remove it? Do I leave the priority blank, like so:
cfg = rs.conf()
cfg.members[8].priority = 
rs.reconfig(cfg)


Thank you !
@AJ

Stephen Steneker

Sep 27, 2015, 7:38:10 PM
to mongodb-user

On Monday, 28 September 2015 02:54:11 UTC+10, A. Jalil @AJ wrote:

OK, so let's take this example, _id: 8:

 "_id" : 8,
      "host" : "server-01.aws.com:27017",
       "priority" :  2

Hi AJ,

There are several different ways to edit the config:

1) Copy the rs.conf() output into a separate text editor and then paste back into the shell.

2) Edit via an external editor in the mongo shell (assuming you have the EDITOR environment variable set):

 cfg = rs.conf()
 edit cfg  // Opens in your external $EDITOR (eg. vim / nano / ..)
 rs.reconfig(cfg)

3) Edit directly in the mongo shell:

 cfg = rs.conf()

 // Assuming you have 3 members and want to delete their priorities
 delete cfg.members[0].priority
 delete cfg.members[1].priority
 delete cfg.members[2].priority

 cfg // Check the current value before reconfiguring
 rs.reconfig(cfg)

cfg.members[8].priority = 1

Recalling that JavaScript arrays are 0-indexed, this would set the 9th member of the cfg.members array to priority 1. You have to use the array index, not the _id of the member (the _id is a field within the array element).

Using your example of a single member with _id of 8:

var testcfg = {
        "members" : [
                {
                        "_id" : 8,
                        "host" : "server:27017",
                        "priority" : 2
                }
        ]
}

> testcfg.members[0]
{
  "_id" : 8,
  "host" : "server:27017",
  "priority" : 2
}

> delete testcfg.members[0].priority
true

After editing the real config the same way, you would then apply it:

> rs.reconfig(cfg)

Regards,
Stephen

A. Jalil @AJ

Sep 28, 2015, 1:20:57 PM
to mongodb-user
Hi Stephen,

Per your suggestion, I deleted all the priorities in the array, as you can see below. I am assuming they are all set to 1 now, which is the default, right? Is there a way to retrieve their values and check that they are all set to [1] and not [0]?  Thank you.


rs0:PRIMARY> rs.conf()
{
        "_id" : "rs0",
        "version" : 41,
        "members" : [
                {
                        "_id" : 8,
                        "host" : "server1-aws.com:27017"
                },
                {
                        "_id" : 9,
                        "host" : "server2-aws.com:27017"
                },
                {
                        "_id" : 10,
                        "host" : "server3-aws.com:27017"

Stephen Steneker

Sep 28, 2015, 6:51:32 PM
to mongodb-user

On Tuesday, 29 September 2015 03:20:57 UTC+10, A. Jalil @AJ wrote:

Per your suggestion, I deleted all the priorities in the array, as you can see below. I am assuming they are all set to 1 now, which is the default, right? Is there a way to retrieve their values and check that they are all set to [1] and not [0]?

Hi AJ,

In MongoDB 3.0 or newer the priority is always shown in the rs.conf() output. If you are using an older version of MongoDB (eg. 2.6) the output will not show the default priority, so all of your members are set to priority 1 unless otherwise listed.

You can be certain they aren’t zero if the replica set has a current primary, since priority 0 members cannot become primary ;-).
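
If you want to print the effective values explicitly, something like this works in the shell (a missing priority field means the default of 1):

    rs.conf().members.forEach(function(m) {
        print(m.host + " priority: " + (m.priority === undefined ? 1 : m.priority))
    })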

Regards,
Stephen
