assertion: 11010 count fails:{ assertion: "setShardVersion failed ....}


moskrc

Apr 11, 2011, 4:27:21 AM4/11/11
to mongodb-user
Hi,

I want to configure sharding over the replica set.

> db.runCommand( { listshards : 1 } );
{
    "shards" : [
        {
            "_id" : "rs1",
            "host" : "rs1/server1.domain.com:28000,server5.domain.com:28000,server2.domain.com:28000"
        },
        {
            "_id" : "rs2",
            "host" : "rs2/server3.domain.com:28000,server8.domain.com:28000,server6.domain.com:28000"
        },
        {
            "_id" : "rs3",
            "host" : "rs3/server4.domain.com:28000,server9.domain.com:28000,server7.domain.com:28000"
        }
    ],
    "ok" : 1
}
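Each shard's "host" string encodes the replica-set name before the slash and the member list after it. A minimal sketch of pulling the two apart (illustrative only; the helper name is mine, not a driver API):

```python
def parse_shard_host(host):
    """Split a shard 'host' string of the form 'setName/host1,host2,...'."""
    set_name, _, members = host.partition("/")
    return set_name, members.split(",")

name, members = parse_shard_host(
    "rs1/server1.domain.com:28000,server5.domain.com:28000,server2.domain.com:28000"
)
print(name)     # rs1
print(members)  # ['server1.domain.com:28000', 'server5.domain.com:28000', 'server2.domain.com:28000']
```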

When I try to dump the database, I get an error.

assertion: 11010 count fails:{ assertion: "setShardVersion failed
host[server4.domain.com:28000] { errmsg: "not ma...", assertionCode:
10429, errmsg: "db assertion failure", ok: 0 }

Mongos logs at this moment:

###

Mon Apr 11 01:23:59 [mongosMain] connection accepted from
127.0.0.1:51605 #212
Mon Apr 11 01:23:59 [conn212] setShardVersion failed
host[server4.domain.com:28000] { errmsg: "not master", ok: 0.0 }
Mon Apr 11 01:23:59 [conn212] Assertion: 10429:setShardVersion failed
host[server4.domain.com:28000] { errmsg: "not master", ok: 0.0 }
0x51f4a9 0x69b163 0x69acf2 0x69acf2 0x69acf2 0x576ba6 0x5774b6
0x575630 0x575b31 0x65f661 0x57bdcc 0x631062 0x66432c 0x6761c7
0x57ea3c 0x69ec30 0x322140673d 0x3220cd3d1d
/opt/mongodb/bin/mongos(_ZN5mongo11msgassertedEiPKc+0x129) [0x51f4a9]
/opt/mongodb/bin/mongos [0x69b163]
/opt/mongodb/bin/mongos [0x69acf2]
/opt/mongodb/bin/mongos [0x69acf2]
/opt/mongodb/bin/mongos [0x69acf2]
/opt/mongodb/bin/
mongos(_ZN5boost6detail8function17function_invoker4IPFbRN5mongo12DBClientBaseERKSsbiEbS5_S7_biE6invokeERNS1_15function_bufferES5_S7_bi
+0x16) [0x576ba6]
/opt/mongodb/bin/
mongos(_ZN5mongo17ClientConnections13checkVersionsERKSs+0x1c6)
[0x5774b6]
/opt/mongodb/bin/mongos(_ZN5mongo15ShardConnection5_initEv+0x2d0)
[0x575630]
/opt/mongodb/bin/mongos(_ZN5mongo15ShardConnectionC1ERKNS_5ShardERKSs
+0xa1) [0x575b31]
/opt/mongodb/bin/
mongos(_ZN5mongo15dbgrid_pub_cmds8CountCmd3runERKSsRNS_7BSONObjERSsRNS_14BSONObjBuilderEb
+0x9e1) [0x65f661]
/opt/mongodb/bin/
mongos(_ZN5mongo7Command20runAgainstRegisteredEPKcRNS_7BSONObjERNS_14BSONObjBuilderE
+0x67c) [0x57bdcc]
/opt/mongodb/bin/
mongos(_ZN5mongo14SingleStrategy7queryOpERNS_7RequestE+0x262)
[0x631062]
/opt/mongodb/bin/mongos(_ZN5mongo7Request7processEi+0x29c) [0x66432c]
/opt/mongodb/bin/
mongos(_ZN5mongo21ShardedMessageHandler7processERNS_7MessageEPNS_21AbstractMessagingPortEPNS_9LastErrorE
+0x77) [0x6761c7]
/opt/mongodb/bin/mongos(_ZN5mongo3pms9threadRunEPNS_13MessagingPortE
+0x34c) [0x57ea3c]
/opt/mongodb/bin/mongos(thread_proxy+0x80) [0x69ec30]
/lib64/libpthread.so.0 [0x322140673d]
/lib64/libc.so.6(clone+0x6d) [0x3220cd3d1d]
Mon Apr 11 01:23:59 [conn212] end connection 127.0.0.1:51605


###

Thanks.

Regards,
Vitaliy

Eliot Horowitz

Apr 11, 2011, 5:17:53 AM4/11/11
to mongod...@googlegroups.com
What version is this with?
There was an issue with 1.8.0 (and maybe 1.6) where this could happen.
That issue was fixed in 1.8.1, so you may want to try that.


moskrc

Apr 11, 2011, 7:35:38 AM4/11/11
to mongodb-user
Version 1.8.1. This happens when I try to dump the database through a mongos that is not running on the server that is primary for the selected database. Whew...

Example:

I want to dump the database "cms-prod".

Where is it?

> db.printShardingStatus()

...
databases:
{ "_id" : "cms-prod", "partitioned" : true, "primary" : "rs1" }
...

See that cms-prod is on replica set 1 (rs1).

What is rs1:

> db.runCommand( { listshards : 1 } );
....
"shards" : [
    {
        "_id" : "rs1",
        "host" : "rs1/server1.domain.com:28000,server5.domain.com:28000,server2.domain.com:28000"
    },
...


Aha....

Go to server5.domain.com (for example) and check it:

[moskrc@server5 ~]$ /opt/mongodb/bin/mongo localhost:28000
MongoDB shell version: 1.8.1
connecting to: localhost:28000/test
rs1:SECONDARY> rs.status()
{
"set" : "rs1",
"date" : ISODate("2011-04-11T11:25:02Z"),
"myState" : 2,
"members" : [
{
"_id" : 0,
"name" : "server2.domain.com:28000",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 6644,
"optime" : {
"t" : 1302520423000,
"i" : 254
},
"optimeDate" : ISODate("2011-04-11T11:13:43Z"),
"lastHeartbeat" : ISODate("2011-04-11T11:25:00Z")
},
{
"_id" : 1,
"name" : "server5.domain.com:28000",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"optime" : {
"t" : 1302520423000,
"i" : 254
},
"optimeDate" : ISODate("2011-04-11T11:13:43Z"),
"self" : true
},
{
"_id" : 2,
"name" : "server1.domain.com:28000",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 6644,
"optime" : {
"t" : 1302520423000,
"i" : 254
},
"optimeDate" : ISODate("2011-04-11T11:13:43Z"),
"lastHeartbeat" : ISODate("2011-04-11T11:25:01Z")
}
],
"ok" : 1
}

See that server2.domain.com is PRIMARY.
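For what it's worth, the primary can be picked out of an rs.status()-style document programmatically. A minimal sketch in plain Python (no driver; the member list is abridged from the output above and the helper name is mine):

```python
def find_primary(rs_status):
    """Return the name of the first member reporting stateStr PRIMARY, else None."""
    for member in rs_status.get("members", []):
        if member.get("stateStr") == "PRIMARY":
            return member["name"]
    return None

status = {
    "set": "rs1",
    "members": [
        {"_id": 0, "name": "server2.domain.com:28000", "stateStr": "PRIMARY"},
        {"_id": 1, "name": "server5.domain.com:28000", "stateStr": "SECONDARY"},
        {"_id": 2, "name": "server1.domain.com:28000", "stateStr": "SECONDARY"},
    ],
}
print(find_primary(status))  # server2.domain.com:28000
```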

This means that we can dump the database only through the mongos running on server2.domain.com. If you try to do it from other machines, this error occurs.

How can I fix it?

Thanks.

Eliot Horowitz

Apr 11, 2011, 2:30:08 PM4/11/11
to mongod...@googlegroups.com
I'm very confused about what you are trying to do.

Are you doing the mongodump through mongos, or directly to a node?

moskrc

Apr 11, 2011, 2:39:17 PM4/11/11
to mongodb-user
Yes, through mongos.

moskrc

Apr 11, 2011, 2:44:00 PM4/11/11
to mongodb-user
Now I've also found problems with reading data through different mongos instances. Some reads succeed, some do not. I wrote a script to transfer data from GridFS to the file system. If you connect through certain mongos instances, an error is raised (Python):

raise CorruptGridFile("no chunk #%d" % chunk_number)
gridfs.errors.CorruptGridFile: no chunk #0

What's going on? Maybe a firewall? But all the necessary ports are open. Which logs should I check?
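The error comes from the driver noticing that an expected chunk document is absent: GridFS stores a file's contents as fs.chunks documents numbered n = 0, 1, 2, ... and the reader asserts that each one exists. A toy illustration of that check (plain Python, no driver; function names and the 256 KiB chunk size are my assumptions for the sketch):

```python
import math

def expected_chunk_count(file_length, chunk_size):
    """How many fs.chunks documents a file of this length should have."""
    return math.ceil(file_length / chunk_size) if file_length else 0

def missing_chunk_numbers(chunk_docs, file_length, chunk_size):
    """Chunk numbers the reader would fail on, as in 'no chunk #0'."""
    present = {doc["n"] for doc in chunk_docs}
    return [n for n in range(expected_chunk_count(file_length, chunk_size))
            if n not in present]

# A 600 KiB file with 256 KiB chunks needs chunks 0, 1, 2; chunk 0 is absent here.
docs = [{"n": 1}, {"n": 2}]
print(missing_chunk_numbers(docs, 600 * 1024, 256 * 1024))  # [0]
```

If a mongos with a stale view routes the query to the wrong shard, the chunk documents simply aren't found there, which produces exactly this symptom.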

Eliot Horowitz

Apr 11, 2011, 2:49:57 PM4/11/11
to mongod...@googlegroups.com
What shard key did you use for the chunks collection?

moskrc

Apr 11, 2011, 2:55:49 PM4/11/11
to mongodb-user
"files_id"

> db.fs.chunks.ensureIndex({files_id: 1});
> db.runCommand({ shardcollection : "cms.fs.chunks", key : { files_id : 1 }})

moskrc

Apr 11, 2011, 2:58:02 PM4/11/11
to mongodb-user
> db.printShardingStatus()

cms.fs.chunks chunks:
        rs2     5
        rs3     5
        rs1     6
{ "files_id" : { $minKey : 1 } } -->> { "files_id" : ObjectId("4d9ee390d8b9bb57c80001c2") } on : rs2 { "t" : 2000, "i" : 0 }
{ "files_id" : ObjectId("4d9ee390d8b9bb57c80001c2") } -->> { "files_id" : ObjectId("4d9ee396d8b9bb57c80003f5") } on : rs3 { "t" : 3000, "i" : 0 }
{ "files_id" : ObjectId("4d9ee396d8b9bb57c80003f5") } -->> { "files_id" : ObjectId("4d9ee3abd8b9bb57c80005a1") } on : rs2 { "t" : 4000, "i" : 0 }
{ "files_id" : ObjectId("4d9ee3abd8b9bb57c80005a1") } -->> { "files_id" : ObjectId("4d9ee4ead8b9bb57c80006f2") } on : rs3 { "t" : 5000, "i" : 0 }
{ "files_id" : ObjectId("4d9ee4ead8b9bb57c80006f2") } -->> { "files_id" : ObjectId("4d9ee4f8d8b9bb57c8000832") } on : rs2 { "t" : 6000, "i" : 0 }
{ "files_id" : ObjectId("4d9ee4f8d8b9bb57c8000832") } -->> { "files_id" : ObjectId("4d9ee502d8b9bb57c800097b") } on : rs3 { "t" : 7000, "i" : 0 }
{ "files_id" : ObjectId("4d9ee502d8b9bb57c800097b") } -->> { "files_id" : ObjectId("4d9ee51bd8b9bb57c8000ada") } on : rs2 { "t" : 8000, "i" : 0 }
{ "files_id" : ObjectId("4d9ee51bd8b9bb57c8000ada") } -->> { "files_id" : ObjectId("4d9ee535d8b9bb57c8000be0") } on : rs3 { "t" : 9000, "i" : 0 }
{ "files_id" : ObjectId("4d9ee535d8b9bb57c8000be0") } -->> { "files_id" : ObjectId("4d9ee53fd8b9bb57c8000dbd") } on : rs2 { "t" : 10000, "i" : 0 }
{ "files_id" : ObjectId("4d9ee53fd8b9bb57c8000dbd") } -->> { "files_id" : ObjectId("4d9ee544d8b9bb57c8000fcc") } on : rs3 { "t" : 11000, "i" : 0 }
{ "files_id" : ObjectId("4d9ee544d8b9bb57c8000fcc") } -->> { "files_id" : ObjectId("4d9ee553d8b9bb57c800117d") } on : rs1 { "t" : 11000, "i" : 1 }
{ "files_id" : ObjectId("4d9ee553d8b9bb57c800117d") } -->> { "files_id" : ObjectId("4d9ee563d8b9bb57c8001344") } on : rs1 { "t" : 1000, "i" : 12 }
{ "files_id" : ObjectId("4d9ee563d8b9bb57c8001344") } -->> { "files_id" : ObjectId("4d9ee570d8b9bb57c8001523") } on : rs1 { "t" : 1000, "i" : 13 }
{ "files_id" : ObjectId("4d9ee570d8b9bb57c8001523") } -->> { "files_id" : ObjectId("4d9ee571d8b9bb57c800172b") } on : rs1 { "t" : 1000, "i" : 14 }
{ "files_id" : ObjectId("4d9ee571d8b9bb57c800172b") } -->> { "files_id" : ObjectId("4d9f8731d8b9bb2430000032") } on : rs1 { "t" : 1000, "i" : 15 }
{ "files_id" : ObjectId("4d9f8731d8b9bb2430000032") } -->> { "files_id" : { $maxKey : 1 } } on : rs1 { "t" : 1000, "i" : 16 }
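Each line above is a half-open range [min, max) of files_id values owned by one shard, and mongos routes a read by finding the range that contains the key. A toy router over a few of the ranges above (plain Python; lexicographic comparison of hex strings stands in for real ObjectId ordering, and the MINKEY/MAXKEY sentinels are my stand-ins for $minKey/$maxKey):

```python
MINKEY, MAXKEY = "", "\uffff" * 24  # sort below / above any 24-char hex ObjectId

RANGES = [
    (MINKEY, "4d9ee390d8b9bb57c80001c2", "rs2"),
    ("4d9ee390d8b9bb57c80001c2", "4d9ee396d8b9bb57c80003f5", "rs3"),
    ("4d9f8731d8b9bb2430000032", MAXKEY, "rs1"),
]

def owning_shard(files_id, ranges=RANGES):
    """Return the shard whose [min, max) range contains files_id, or None."""
    for lo, hi, shard in ranges:
        if lo <= files_id < hi:
            return shard
    return None

print(owning_shard("4d9ee391aaaaaaaaaaaaaaaa"))  # rs3
```

If a mongos holds a stale copy of this table, it sends the query to a shard that no longer owns the chunk, which is consistent with the "not master" / missing-chunk symptoms in this thread.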

moskrc

Apr 11, 2011, 4:07:15 PM4/11/11
to mongodb-user
More info:

fs.chunks.stats() gives different readings on different servers:

server2.domain.com (rs1)

> db.fs.chunks.stats()
{
"ns" : "mycms-prod.fs.chunks",
"sharded" : false,
"primary" : "rs1",
"ns" : "mycms-prod.fs.chunks",
"count" : 1871,
"size" : 173588580,
"avgObjSize" : 92778.50347407804,
"storageSize" : 537443328,
"numExtents" : 18,
"nindexes" : 3,
"lastExtentSize" : 93068544,
"paddingFactor" : 1,
"flags" : 1,
"totalIndexSize" : 311296,
"indexSizes" : {
"_id_" : 98304,
"files_id_1_n_1" : 114688,
"files_id_1" : 98304
},
"ok" : 1
}

server9.domain.com (rs3)

> db.fs.chunks.stats()
{
"sharded" : true,
"ns" : "mycms-prod.fs.chunks",
"count" : 4419,
"size" : 565973048,
"avgObjSize" : 128077.17764200045,
"storageSize" : 947802624,
"nindexes" : 3,
"nchunks" : 16,
"shards" : {
"rs1" : {
"ns" : "mycms-prod.fs.chunks",
"count" : 1871,
"size" : 173588580,
"avgObjSize" : 92778.50347407804,
"storageSize" : 537443328,
"numExtents" : 18,
"nindexes" : 3,
"lastExtentSize" : 93068544,
"paddingFactor" : 1,
"flags" : 1,
"totalIndexSize" : 311296,
"indexSizes" : {
"_id_" : 98304,
"files_id_1_n_1" : 114688,
"files_id_1" : 98304
},
"ok" : 1
},
"rs2" : {
"ns" : "mycms-prod.fs.chunks",
"count" : 1276,
"size" : 195976764,
"avgObjSize" : 153586.80564263323,
"storageSize" : 203456000,
"numExtents" : 14,
"nindexes" : 3,
"lastExtentSize" : 37403136,
"paddingFactor" : 1,
"flags" : 1,
"totalIndexSize" : 237568,
"indexSizes" : {
"_id_" : 73728,
"files_id_1_n_1" : 90112,
"files_id_1" : 73728
},
"ok" : 1
},
"rs3" : {
"ns" : "mycms-prod.fs.chunks",
"count" : 1272,
"size" : 196407704,
"avgObjSize" : 154408.57232704404,
"storageSize" : 206903296,
"numExtents" : 15,
"nindexes" : 3,
"lastExtentSize" : 38564352,
"paddingFactor" : 1,
"flags" : 1,
"totalIndexSize" : 221184,
"indexSizes" : {
"_id_" : 65536,
"files_id_1_n_1" : 90112,
"files_id_1" : 65536
},
"ok" : 1
}
},
"ok" : 1
}


What is this?

Thanks

Eliot Horowitz

Apr 11, 2011, 4:19:38 PM4/11/11
to mongod...@googlegroups.com
How many mongos do you have?
Can you give an overview of your architecture?
Can you describe how you started each mongos?
You can also try bouncing the mongos

2011/4/11 moskrc <mos...@gmail.com>:

moskrc

Apr 11, 2011, 4:34:19 PM4/11/11
to mongodb-user
I will prepare materials..

>> You can also try bouncing the mongos
Please describe how to do that.

Gaetan Voyer-Perrault

Apr 11, 2011, 5:00:37 PM4/11/11
to mongod...@googlegroups.com
>> You can also try bouncing the mongos
> Describe me how to do it please..

"bounce" == re-start

(i.e. stop process, wait for clean shutdown, re-start process)

2011/4/11 moskrc <mos...@gmail.com>

moskrc

Apr 11, 2011, 6:14:44 PM4/11/11
to mongodb-user
Wow! I restarted the mongos and now I have access to all files in GridFS. But dumping the database still does not work. It prints an error:

[moskrc@server9 test6]$ /opt/mongodb/bin/mongodump -h localhost:30000 -d mycms-prod
connected to: localhost:30000
DATABASE: mycms-prod to dump/mycms-prod
mycms-prod.cms_comment to dump/mycms-prod/cms_comment.bson
16 objects
mycms-prod.system.indexes to dump/mycms-prod/system.indexes.bson
67 objects
mycms-prod.cms_pdfcontent to dump/mycms-prod/cms_pdfcontent.bson
15 objects
mycms-prod.djangoratings_vote to dump/mycms-prod/
djangoratings_vote.bson
25 objects
mycms-prod.auth_permission to dump/mycms-prod/auth_permission.bson
192 objects
mycms-prod.tracking_pagevisit to dump/mycms-prod/
tracking_pagevisit.bson
assertion: 11010 count fails:{ assertion: "setShardVersion failed
host[server1.domain.com:28000] { errmsg: "not master...",
assertionCode: 10429, errmsg: "db assertion failure", ok: 0 }
[moskrc@server9 test6]$

# How many mongos do you have?
I have 6 mongos. Only 2 are actually in regular use, one for each of 2 applications. Each application has its own mongos.

# Can you give an overview of your architecture?
In total I have 9 servers. On each one runs a mongod with params: shardsvr = true, replSet = rs1 (or rs2, rs3). Three replica sets; each replica set consists of 3 mongods. And three config servers (server4.domain.com:28001, server6.domain.com:28001, server1.domain.com:28001).

# Can you describe how you started each mongos?
bind_ip = 127.0.0.1,123.456.789.12
port = 30000
fork = true
configdb = server4.domain.com:28001,server6.domain.com:28001,server1.domain.com:28001

# You can also try bouncing the mongos
I restarted the mongos instances in use. That helped: the databases are now identical. But the dump is still not working, with the error I posted above.

Thanks!

Eliot Horowitz

Apr 11, 2011, 6:49:09 PM4/11/11
to mongod...@googlegroups.com
Are you sure all processes are 1.8.1?
If so, can you check the mongos for other messages?

moskrc

Apr 12, 2011, 1:50:40 AM4/12/11
to mongodb-user
Hi Eliot,

Yes, I checked. Version 1.8.1 is installed everywhere.

[moskrc@server8 ~]$
[moskrc@server8 ~]$ /opt/mongodb/bin/mongos --version
Mon Apr 11 22:40:31 /opt/mongodb/bin/mongos db version v1.8.1, pdfile
version 4.5 starting (--help for usage)
Mon Apr 11 22:40:31 git version:
a429cd4f535b2499cc4130b06ff7c26f41c00f04
Mon Apr 11 22:40:31 build sys info: Linux bs-linux64.10gen.cc
2.6.21.7-2.ec2.v1.2.fc8xen #1 SMP Fri Nov 20 17:48:28 EST 2009 x86_64
BOOST_LIB_VERSION=1_41
[moskrc@server8 ~]$
[moskrc@server8 ~]$ sudo /opt/mongodb/bin/mongos --port 30000 --
configdb server4.domain.com:28001,server6.domain.com:
28001,server1.domain.com:28001
Mon Apr 11 22:40:34 /opt/mongodb/bin/mongos db version v1.8.1, pdfile
version 4.5 starting (--help for usage)
Mon Apr 11 22:40:34 git version:
a429cd4f535b2499cc4130b06ff7c26f41c00f04
Mon Apr 11 22:40:34 build sys info: Linux bs-linux64.10gen.cc
2.6.21.7-2.ec2.v1.2.fc8xen #1 SMP Fri Nov 20 17:48:28 EST 2009 x86_64
BOOST_LIB_VERSION=1_41
Mon Apr 11 22:40:34 SyncClusterConnection connecting to
[server4.domain.com:28001]
Mon Apr 11 22:40:34 SyncClusterConnection connecting to
[server6.domain.com:28001]
Mon Apr 11 22:40:34 SyncClusterConnection connecting to
[server1.domain.com:28001]
Mon Apr 11 22:40:34 [mongosMain] waiting for connections on port 30000
Mon Apr 11 22:40:34 [websvr] web admin interface listening on port
31000
Mon Apr 11 22:40:34 [Balancer] about to contact config servers and
shards
Mon Apr 11 22:40:34 [Balancer] updated set (rs1) to: rs1/
server2.domain.com:28000,server1.domain.com:28000
Mon Apr 11 22:40:34 [Balancer] updated set (rs1) to: rs1/
server2.domain.com:28000,server1.domain.com:28000,server5.domain.com:
28000
Mon Apr 11 22:40:34 [ReplicaSetMonitorWatcher] starting
Mon Apr 11 22:40:34 [Balancer] updated set (rs2) to: rs2/
server3.domain.com:28000,server8.domain.com:28000
Mon Apr 11 22:40:34 [Balancer] updated set (rs2) to: rs2/
server3.domain.com:28000,server8.domain.com:28000,server6.domain.com:
28000
Mon Apr 11 22:40:34 [Balancer] updated set (rs3) to: rs3/
server4.domain.com:28000,server7.domain.com:28000
Mon Apr 11 22:40:34 [Balancer] updated set (rs3) to: rs3/
server4.domain.com:28000,server7.domain.com:28000,server9.domain.com:
28000
Mon Apr 11 22:40:34 [Balancer] config servers and shards contacted
successfully
Mon Apr 11 22:40:34 [Balancer] balancer id: server8.domain.com:30000
started at Apr 11 22:40:34
Mon Apr 11 22:40:34 [LockPinger] creating dist lock ping thread for:
server4.domain.com:28001,server6.domain.com:28001,server1.domain.com:
28001
Mon Apr 11 22:40:34 [LockPinger] SyncClusterConnection connecting to
[server4.domain.com:28001]
Mon Apr 11 22:40:34 [Balancer] SyncClusterConnection connecting to
[server4.domain.com:28001]
Mon Apr 11 22:40:34 [LockPinger] SyncClusterConnection connecting to
[server6.domain.com:28001]
Mon Apr 11 22:40:34 [Balancer] SyncClusterConnection connecting to
[server6.domain.com:28001]
Mon Apr 11 22:40:34 [LockPinger] SyncClusterConnection connecting to
[server1.domain.com:28001]
Mon Apr 11 22:40:34 [Balancer] SyncClusterConnection connecting to
[server1.domain.com:28001]
Mon Apr 11 22:40:34 [Balancer] SyncClusterConnection connecting to
[server4.domain.com:28001]
Mon Apr 11 22:40:34 [Balancer] SyncClusterConnection connecting to
[server6.domain.com:28001]
Mon Apr 11 22:40:34 [Balancer] SyncClusterConnection connecting to
[server1.domain.com:28001]
Mon Apr 11 22:40:34 [Balancer] warning: dist_lock has detected clock
skew of 216887ms




##### WHEN TRYING TO RUN $ /opt/mongodb/bin/mongodump -h localhost:30000 -d mycms-prod #####

Mon Apr 11 22:40:38 [mongosMain] connection accepted from
127.0.0.1:53990 #1
Mon Apr 11 22:40:38 [conn1] creating WriteBackListener for:
server2.domain.com:28000
Mon Apr 11 22:40:38 [conn1] creating WriteBackListener for:
server1.domain.com:28000
Mon Apr 11 22:40:38 [conn1] creating WriteBackListener for:
server5.domain.com:28000
Mon Apr 11 22:40:38 [conn1] creating WriteBackListener for:
server3.domain.com:28000
Mon Apr 11 22:40:38 [conn1] creating WriteBackListener for:
server8.domain.com:28000
Mon Apr 11 22:40:38 [conn1] creating WriteBackListener for:
server6.domain.com:28000
Mon Apr 11 22:40:38 [conn1] creating WriteBackListener for:
server4.domain.com:28000
Mon Apr 11 22:40:38 [conn1] creating WriteBackListener for:
server7.domain.com:28000
Mon Apr 11 22:40:38 [conn1] creating WriteBackListener for:
server9.domain.com:28000
Mon Apr 11 22:40:38 [conn1] setShardVersion failed
host[server1.domain.com:28000] { errmsg: "not master", ok: 0.0 }
Mon Apr 11 22:40:38 [conn1] Assertion: 10429:setShardVersion failed
host[server1.domain.com:28000] { errmsg: "not master", ok: 0.0 }
0x51f4a9 0x69b163 0x69acf2 0x69acf2 0x69acf2 0x576ba6 0x5774b6
0x575630 0x575b31 0x65f661 0x57bdcc 0x631062 0x66432c 0x6761c7
0x57ea3c 0x69ec30 0x337580673d 0x33750d3d1d
/opt/mongodb/bin/mongos(_ZN5mongo11msgassertedEiPKc+0x129) [0x51f4a9]
/opt/mongodb/bin/mongos [0x69b163]
/opt/mongodb/bin/mongos [0x69acf2]
/opt/mongodb/bin/mongos [0x69acf2]
/opt/mongodb/bin/mongos [0x69acf2]
/opt/mongodb/bin/
mongos(_ZN5boost6detail8function17function_invoker4IPFbRN5mongo12DBClientBaseERKSsbiEbS5_S7_biE6invokeERNS1_15function_bufferES5_S7_bi
+0x16) [0x576ba6]
/opt/mongodb/bin/
mongos(_ZN5mongo17ClientConnections13checkVersionsERKSs+0x1c6)
[0x5774b6]
/opt/mongodb/bin/mongos(_ZN5mongo15ShardConnection5_initEv+0x2d0)
[0x575630]
/opt/mongodb/bin/mongos(_ZN5mongo15ShardConnectionC1ERKNS_5ShardERKSs
+0xa1) [0x575b31]
/opt/mongodb/bin/
mongos(_ZN5mongo15dbgrid_pub_cmds8CountCmd3runERKSsRNS_7BSONObjERSsRNS_14BSONObjBuilderEb
+0x9e1) [0x65f661]
/opt/mongodb/bin/
mongos(_ZN5mongo7Command20runAgainstRegisteredEPKcRNS_7BSONObjERNS_14BSONObjBuilderE
+0x67c) [0x57bdcc]
/opt/mongodb/bin/
mongos(_ZN5mongo14SingleStrategy7queryOpERNS_7RequestE+0x262)
[0x631062]
/opt/mongodb/bin/mongos(_ZN5mongo7Request7processEi+0x29c) [0x66432c]
/opt/mongodb/bin/
mongos(_ZN5mongo21ShardedMessageHandler7processERNS_7MessageEPNS_21AbstractMessagingPortEPNS_9LastErrorE
+0x77) [0x6761c7]
/opt/mongodb/bin/mongos(_ZN5mongo3pms9threadRunEPNS_13MessagingPortE
+0x34c) [0x57ea3c]
/opt/mongodb/bin/mongos(thread_proxy+0x80) [0x69ec30]
/lib64/libpthread.so.0 [0x337580673d]
/lib64/libc.so.6(clone+0x6d) [0x33750d3d1d]
Mon Apr 11 22:40:38 [conn1] end connection 127.0.0.1:53990


Mon Apr 11 22:40:41 [mongosMain] dbexit: received signal 2 rc:0
received signal 2
Mon Apr 11 22:40:41 CursorCache at shutdown - sharded: 0
passthrough: 1
[moskrc@server8 ~]$

Thanks

moskrc

Apr 12, 2011, 1:19:33 PM4/12/11
to mongodb-user
Has anybody else dealt with this? Which logs should I check?

Thanks