Problems configuring a sharded cluster with single-member replica sets in 3.4


sjmi...@gmail.com

Jan 15, 2017, 12:13:31 PM
to mongodb-user
Hi,
Since configuring a sharded cluster differs in 3.4 from 3.0, I am having some trouble.
I would like some input from group members as to where I may be going wrong.

I would like three config servers and a cluster of three shard servers.
I am following this guide:
https://docs.mongodb.com/manual/tutorial/deploy-shard-cluster/

My config server config is:
sharding:
  clusterRole: configsvr
replication:
  replSetName: rs0
storage:
  dbPath: /data/advice/configdb

This config is the same for all three servers.
After I start the config server on the default port 27019, I initiate the replica set with:
rs.initiate(
  {
    _id : "rs0",
    configsvr: true,
    members: [
      { _id : 0, host : "<ip-1>:27019" }
    ]
  }
)
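
The other two config servers can then be added to the set from the same shell; a minimal sketch, assuming they run on the same default port:

rs.add("<ip-2>:27019")
rs.add("<ip-3>:27019")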

For the shard servers my config is:
sharding:
  clusterRole: shardsvr
replication:
  replSetName: rs0
storage:
  dbPath: /data/advice/db

and for all three shards I start the server with the above config on the default port 27018 and initiate the replica set with:
rs.initiate(
  {
    _id : "rs0",
    members: [
      { _id : 0, host : "<ip-1>:27018" }
    ]
  }
)

Then I configure my mongos router by specifying:
sharding:
  configDB: rs0/<ip-1>:27019,<ip-2>:27019,<ip-3>:27019
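
For completeness, a fuller mongos config sketch; the net section here is an assumption, using the default mongos port:

sharding:
  configDB: rs0/<ip-1>:27019,<ip-2>:27019,<ip-3>:27019
net:
  port: 27017   # default mongos port (assumption; adjust as needed)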


Then I connect to the cluster through mongos using the mongo command
and add the shards:

sh.addShard( "rs0/<ip-1>:27018")
sh.addShard( "rs0/<ip-2>:27018")
sh.addShard( "rs0/<ip-3>:27018")

and enable sharding on my test-db:

sh.enableSharding("test-db")


When I run sh.status() I get this output:

mongos> sh.status()
--- Sharding Status --- 
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("587b76664a6938e9de9510bb")
}
  shards:
        {  "_id" : "rs0",  "host" : "rs0/<ip-1>:27018",  "state" : 1 }
  active mongoses:
        "3.4.1" : 1
 autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
                Balancer lock taken at Sun Jan 15 2017 18:47:26 GMT+0530 (IST) by ConfigServer:Balancer
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours: 
                No recent migrations
  databases:
        {  "_id" : "test-db",  "primary" : "rs0",  "partitioned" : true }

Note that it added only the first shard server to the cluster. Why did it not add the others?


Also when I try to shard a collection on this db I get:
mongos> sh.shardCollection("test-db.output", { key: 1 } )
{
        "ok" : 0,
        "errmsg" : "ensureIndex failed to create index on primary shard: Cannot accept sharding commands if not started with --shardsvr"
}

I don't understand why I get this error, as I connected to the sharded cluster through mongos as per the docs.

Please let me know where I am going wrong in trying to shard with no replication (i.e. with only a single member per replica set).

Thanks
Sachin


Attila Tozser

Jan 15, 2017, 1:05:32 PM
to mongod...@googlegroups.com
I am not sure when it was introduced; as I remember it may have worked before, but now, if you check the code here:


the addShard command checks the replica set name, and if there is already a shard with that set name, it does not add the new one. Have you seen this error message:

"A shard already exists containing the replica set '" ?

I suspect a similar root cause behind the error you observe: the config replica set has the same name as your shard replica sets (rs0).

Would you mind changing the set names to different ones?
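
For example, a sketch with illustrative names (configRS for the config servers; shard1rs, shard2rs, shard3rs for the shards); the mongos configDB string would then have to use the new config set name as well:

// config server set, initiated with its own distinct name:
rs.initiate({ _id: "configRS", configsvr: true,
  members: [ { _id: 0, host: "<ip-1>:27019" } ] })

// each shard gets its own uniquely named set, e.g. on shard 1:
rs.initiate({ _id: "shard1rs",
  members: [ { _id: 0, host: "<ip-1>:27018" } ] })

// then, via mongos (started with configDB: configRS/...):
sh.addShard("shard1rs/<ip-1>:27018")
sh.addShard("shard2rs/<ip-2>:27018")
sh.addShard("shard3rs/<ip-3>:27018")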




Sachin Mittal

Jan 15, 2017, 11:47:04 PM
to mongod...@googlegroups.com
Ok.
I understand that every shard server should have its own unique replica set name, and if that shard needs to be replicated we can add members to it using:
rs.initiate(
  {
    _id : "<unique replicate set name>",
    members: [
      { _id : 0, host : "<ip-1>:27018" },
      <more members here>
    ]
  }
)

Please confirm the same.
I have another question: when we start the other members of that replica set, do we need to run rs.initiate on those members too?

Along the same lines, I have a question about the config server replica set too. Since in production we have a three-member config server deployment, I suppose that would also be called a replica set.
So we need a unique name for that config server set too. Is that correct?

And when we start a config server and call rs.initiate, we provide the following config:
rs.initiate(
  {
    _id : "<unique replicate set name>",
    configsvr: true,
    members: [
      { _id : 0, host : "<ip-1>:27019" },
      { _id : 1, host : "<ip-2>:27019" },
      { _id : 2, host : "<ip-3>:27019" }
    ]
  }
)
Is the config passed to initiate above correct?

Also, do we need to start all three config servers first and then call rs.initiate by connecting to one?
Or run the same initiate config on all three, starting each one by one?

Thanks
Sachin



Kevin Adistambha

Jan 27, 2017, 1:41:07 AM
to mongodb-user

Hi Sachin,

I understand that every shard server should have its own unique replica set name, and if that shard needs to be replicated we can add members to it.

Yes, this is correct. Please note that for production deployments it is recommended that each shard be a replica set; running single-member shards is only appropriate for development.

I have another question: when we start the other members of that replica set, do we need to run rs.initiate on those members too?

No, you run rs.initiate() on the first member only. The rs.initiate() command creates and initiates a replica set. After the first member is initiated, you can use the rs.add() command to add subsequent members of the replica set. Please see Deploy a Replica Set and Add Members to a Replica Set for more details.
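
A minimal sketch of that flow, assuming a shard replica set named shard1rs (an illustrative name) on the default shard port:

// on the first member:
rs.initiate( { _id : "shard1rs", members: [ { _id : 0, host : "<ip-1>:27018" } ] } )

// then, from the same shell, add the remaining members:
rs.add("<ip-2>:27018")
rs.add("<ip-3>:27018")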

Alternatively, you can call rs.initiate() by supplying the replica set configuration which contains all the intended members of the replica set (please see the example here: https://docs.mongodb.com/manual/reference/method/rs.initiate/#example).

Along the same lines, I have a question about the config server replica set too. Since in production we have a three-member config server deployment, I suppose that would also be called a replica set.
So we need a unique name for that config server set too. Is that correct?

This depends on your MongoDB version. In MongoDB 3.0 and earlier, the three config servers are deployed in a mirrored configuration. Starting from MongoDB 3.2, you can optionally deploy the config servers as a replica set (with the mirrored configuration still available for backward compatibility). From MongoDB 3.4 onward, the config servers must be deployed as a replica set; the mirrored option is no longer available.
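
So in 3.4, each config server's configuration file carries both the configsvr role and the shared set name; a sketch, with csReplSet as an illustrative name that is distinct from every shard's set name:

sharding:
  clusterRole: configsvr
replication:
  replSetName: csReplSet   # must differ from the shards' replica set names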

Also, do we need to start all three config servers first and then call rs.initiate by connecting to one?
Or run the same initiate config on all three, starting each one by one?

For best results, you may want to follow the step-by-step procedure outlined in the Deploy a Sharded Cluster page. Please note that the page is for the current MongoDB version (3.4), which mandates the use of a replica set config server. A couple of things worth mentioning (with a sketch of the startup and initiation order after these notes):

  1. The procedure above is a basic illustration of how to deploy a sharded cluster, and does not include enabling security on the deployment. Please follow the procedure outlined in the Deploy Sharded Cluster with Keyfile Access Control page to enable basic authentication. For production deployments, it is strongly recommended to use x.509 instead.

  2. For best results, please use a Fully Qualified Domain Name for each member of the cluster, and avoid hard-coding static IP addresses.
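
To illustrate the initiation order from your last question: start the mongod process on all three config servers first, then connect to just one of them and run rs.initiate() once with the full member list (csReplSet again being an illustrative name):

// run once, from a mongo shell connected to ONE of the three
// already-started config servers:
rs.initiate(
  {
    _id : "csReplSet",
    configsvr: true,
    members: [
      { _id : 0, host : "<ip-1>:27019" },
      { _id : 1, host : "<ip-2>:27019" },
      { _id : 2, host : "<ip-3>:27019" }
    ]
  }
)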

Best regards,
Kevin
