explain "connPoolStats" outputs


ZephW

Feb 28, 2012, 11:46:17 PM
to mongod...@googlegroups.com
hi, all,

My mongostat output shows that the number of connections to the primary shards is ever-increasing:

                                      insert  query update delete getmore command flushes mapped  vsize    res non-mapped faults locked % idx miss %     qr|qw   ar|aw  netIn netOut  conn     set repl       time 
YF-C0-SV0001.internal.some.com:11001     0     38      0      0       2      95       0  10.3g    32g  1.16g      21.7g      0        0          0       0|0     3|0    22k    83k 10813 shard_1    M   12:39:12 
YF-C0-SV0002.internal.some.com:11002    *0     *0     *0     *0       0     3|0       0  12.1g  24.7g   683m      12.6g      0        0          0       0|0     0|0   186b     2k   111 shard_2  SEC   12:39:12 
YF-C0-SV0003.internal.some.com:11003     0     31      0      0       2      40       0  10.3g  31.9g  1.22g      21.7g      0        0          0       0|0     2|0     7k    49k 10834 shard_3    M   12:39:12 
YF-C0-SV0004.internal.some.com:11004     0     31      0      0       0      57       0  12.3g    36g  1.11g      23.7g      0        0          0       0|0     2|0    11k    25k 10805 shard_4    M   12:39:12 
YF-C0-SV0005.internal.some.com:11005    *0     *0     *0     *0       0     3|0       0  12.2g  24.8g   760m      12.6g      0        0          0       0|0     0|0   186b     2k   112 shard_5  SEC   12:39:12 
YF-C0-SV0006.internal.some.com:11006    *0     *0     *0     *0       0     4|0       0  12.1g  24.7g  1.01g      12.6g      0        0          0       0|0     0|0   337b     2k   116 shard_6  SEC   12:39:12 
YF-C0-SV0007.internal.some.com:11007    *0     *0     *0     *0       0     3|0       0  12.1g  24.6g   721m      12.5g      0        0          0       0|0     0|0   186b     2k   112 shard_7  SEC   12:39:12 
YF-C0-SV0008.internal.some.com:11008     0     28      0      0       0      48       0  12.2g  27.1g   868m        15g      0        0          0       0|0     1|0     8k    11k  2354 shard_8    M   12:39:12 
YF-C0-SV0009.internal.some.com:11001    *0     *0     *0     *0       0     4|0       0  10.2g  20.9g  1.34g      10.7g      0        0          0       0|0     0|0   337b     2k    67 shard_1  SEC   12:39:12 
YF-C0-SV0010.internal.some.com:11002    *0     *0     *0     *0       0     3|0       0  12.1g  24.7g   699m      12.6g      0        0          0       0|0     0|0   186b     2k   117 shard_2  SEC   12:39:12 
YF-C0-SV0011.internal.some.com:11003    *0     *0     *0     *0       0     3|0       0  12.1g  24.7g   921m      12.6g      0        0          0       0|0     0|0   186b     2k   117 shard_3  SEC   12:39:12 
YF-C0-SV0012.internal.some.com:11004    *0     *0     *0     *0       0     3|0       0  12.1g  24.7g   822m      12.6g      0        0          0       0|0     0|0   186b     2k   109 shard_4  SEC   12:39:12 
YF-C0-SV0013.internal.some.com:11005    *0     *0     *0     *0       0     3|0       0  12.2g  24.8g   762m      12.6g      0        0          0       0|0     0|0   186b     2k   116 shard_5  SEC   12:39:12 
YF-C0-SV0014.internal.some.com:11006    *0     *0     *0     *0       0     9|0       0  12.2g  24.9g  1.01g      12.7g      0        0          0       0|0     0|0   558b     6k   171 shard_6  SEC   12:39:12 
YF-C0-SV0015.internal.some.com:11007    *0     *0     *0     *0       0     3|0       0  12.1g  24.7g   710m      12.5g      0        0          0       0|0     0|0   186b     2k   115 shard_7  SEC   12:39:12 
YF-C0-SV0016.internal.some.com:11008    *0     *0     *0     *0       0     5|0       0  12.1g  24.7g   801m      12.6g      0        0          0       0|0     0|0   310b     4k   127 shard_8  SEC   12:39:12 
YF-C0-SV0017.internal.some.com:11001    *0     *0     *0     *0       0     4|0       0  12.1g  24.8g   852m      12.7g      0        0          0       0|0     0|0   337b     2k   161 shard_1  SEC   12:39:12 
YF-C0-SV0018.internal.some.com:11002     0     35      0      0       0      96       0  12.3g  35.9g  1002m      23.7g      0        0          0       0|0     2|0    22k    66k 10807 shard_2    M   12:39:12 
YF-C0-SV0019.internal.some.com:11003    *0     *0     *0     *0       0     3|0       0  12.1g  24.7g   925m      12.6g      0        0          0       0|0     0|0   186b     2k   112 shard_3  SEC   12:39:12 
YF-C0-SV0020.internal.some.com:11004    *0     *0     *0     *0       0     3|0       0  12.1g  24.7g   994m      12.6g      0        0          0       0|0     0|0   186b     2k   116 shard_4  SEC   12:39:12 
YF-C0-SV0021.internal.some.com:11005     0     28      0      0       0      52       0  10.4g  32.2g  1.05g      21.8g      0        0          0       0|0     2|0     8k    11k 10822 shard_5    M   12:39:12 
YF-C0-SV0022.internal.some.com:11006     0     36      0      0       0      93       0  10.3g    32g   1.1g      21.7g      0        0          0       0|0     1|0    22k   109k 10821 shard_6    M   12:39:12 
YF-C0-SV0023.internal.some.com:11007     0     28      0      0       0      35       0  10.3g  32.1g     1g      21.8g      0        0          0       0|0     2|0     7k    12k 10815 shard_7    M   12:39:12 
                       localhost:31000     0      1      0      0       0       1                 1.34g   258m                 0                                         215b   830b   615          RTR   12:39:12 
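(As a sanity check on the conn column, each mongod also reports its own connection counters via serverStatus; a rough shell sketch against the first primary above, purely for illustration:)

// Illustrative only: read the connection counters straight from one mongod.
// The hostname/port below are just the first shard primary from the mongostat output; adjust as needed.
var shardConn = new Mongo("YF-C0-SV0001.internal.some.com:11001");
var ss = shardConn.getDB("admin").runCommand({ serverStatus: 1 });
printjson(ss.connections);   // shows "current" (open connections) and "available" (remaining headroom)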


So I'm trying to debug the issue. I need help understanding the output of 'connPoolStats'; can someone explain it to me?

mongos> db.adminCommand("connPoolStats")
{
"hosts" : {
"YF-C0-SV0001.internal.some.com:11001::0" : {
"available" : 1,
"created" : 8
},
"YF-C0-SV0001:21000::0" : {
"available" : 1,
"created" : 1
},
"YF-C0-SV0001:21000::30" : {
"available" : 1,
"created" : 1
},
"YF-C0-SV0001:21000,YF-C0-SV0017:21000,YF-C0-SV0009:21000::0" : {
"available" : 3,
"created" : 1655
},
"YF-C0-SV0001:21000,YF-C0-SV0017:21000,YF-C0-SV0009:21000::30" : {
"available" : 1,
"created" : 25
},
"YF-C0-SV0002.internal.some.com:11002::0" : {
"available" : 0,
"created" : 19
},
"YF-C0-SV0003.internal.some.com:11003::0" : {
"available" : 1,
"created" : 8
},
"YF-C0-SV0004.internal.some.com:11004::0" : {
"available" : 1,
"created" : 6
},
"YF-C0-SV0005.internal.some.com:11005::0" : {
"available" : 0,
"created" : 19
},
"YF-C0-SV0006.internal.some.com:11006::0" : {
"available" : 1,
"created" : 19
},
"YF-C0-SV0007.internal.some.com:11007::0" : {
"available" : 0,
"created" : 19
},
"YF-C0-SV0008.internal.some.com:11008::0" : {
"available" : 1,
"created" : 18
},
"YF-C0-SV0009.internal.some.com:11001::0" : {
"available" : 0,
"created" : 2
},
"YF-C0-SV0009:21000::0" : {
"available" : 0,
"created" : 1
},
"YF-C0-SV0009:21000::30" : {
"available" : 1,
"created" : 1
},
"YF-C0-SV0010.internal.some.com:11002::0" : {
"available" : 1,
"created" : 20
},
"YF-C0-SV0011.internal.some.com:11003::0" : {
"available" : 1,
"created" : 20
},
"YF-C0-SV0012.internal.some.com:11004::0" : {
"available" : 0,
"created" : 19
},
"YF-C0-SV0013.internal.some.com:11005::0" : {
"available" : 1,
"created" : 20
},
"YF-C0-SV0014.internal.some.com:11006::0" : {
"available" : 0,
"created" : 19
},
"YF-C0-SV0015.internal.some.com:11007::0" : {
"available" : 1,
"created" : 20
},
"YF-C0-SV0016.internal.some.com:11008::0" : {
"available" : 1,
"created" : 18
},
"YF-C0-SV0017.internal.some.com:11001::0" : {
"available" : 1,
"created" : 3
},
"YF-C0-SV0017:21000::0" : {
"available" : 0,
"created" : 1
},
"YF-C0-SV0017:21000::30" : {
"available" : 1,
"created" : 1
},
"YF-C0-SV0018.internal.some.com:11002::0" : {
"available" : 1,
"created" : 9
},
"YF-C0-SV0019.internal.some.com:11003::0" : {
"available" : 0,
"created" : 19
},
"YF-C0-SV0020.internal.some.com:11004::0" : {
"available" : 1,
"created" : 20
},
"YF-C0-SV0021.internal.some.com:11005::0" : {
"available" : 1,
"created" : 9
},
"YF-C0-SV0022.internal.some.com:11006::0" : {
"available" : 1,
"created" : 7
},
"YF-C0-SV0023.internal.some.com:11007::0" : {
"available" : 1,
"created" : 8
},
"YF-C0-SV0024.internal.some.com:11008::0" : {
"available" : 0,
"created" : 8
},
"shard_1/YF-C0-SV0001.internal.some.com:11001,YF-C0-SV0017.internal.some.com:11001,YF-C0-SV0009.internal.some.com:11001::0" : {
"available" : 1,
"created" : 1
},
"shard_2/YF-C0-SV0002.internal.some.com:11002,YF-C0-SV0018.internal.some.com:11002,YF-C0-SV0010.internal.some.com:11002::0" : {
"available" : 2,
"created" : 2
},
"shard_3/YF-C0-SV0019.internal.some.com:11003,YF-C0-SV0011.internal.some.com:11003,YF-C0-SV0003.internal.some.com:11003::0" : {
"available" : 2,
"created" : 2
},
"shard_4/YF-C0-SV0004.internal.some.com:11004,YF-C0-SV0020.internal.some.com:11004,YF-C0-SV0012.internal.some.com:11004::0" : {
"available" : 2,
"created" : 2
},
"shard_5/YF-C0-SV0021.internal.some.com:11005,YF-C0-SV0013.internal.some.com:11005,YF-C0-SV0005.internal.some.com:11005::0" : {
"available" : 1,
"created" : 1
},
"shard_6/YF-C0-SV0006.internal.some.com:11006,YF-C0-SV0022.internal.some.com:11006,YF-C0-SV0014.internal.some.com:11006::0" : {
"available" : 2,
"created" : 2
},
"shard_7/YF-C0-SV0007.internal.some.com:11007,YF-C0-SV0023.internal.some.com:11007,YF-C0-SV0015.internal.some.com:11007::0" : {
"available" : 1,
"created" : 1
},
"shard_8/YF-C0-SV0008.internal.some.com:11008,YF-C0-SV0024.internal.some.com:11008,YF-C0-SV0016.internal.some.com:11008::0" : {
"available" : 1,
"created" : 1
}
},
"replicaSets" : {
"shard_1" : {
"hosts" : [
{
"ok" : true,
"ismaster" : true,
"hidden" : false,
"secondary" : false,
"pingTimeMillis" : 0
},
{
"ok" : true,
"ismaster" : false,
"hidden" : false,
"secondary" : true,
"pingTimeMillis" : 0
},
{
"ok" : true,
"ismaster" : false,
"hidden" : false,
"secondary" : true,
"pingTimeMillis" : 0
}
],
"master" : 0,
"nextSlave" : 1
},
"shard_2" : {
"hosts" : [
{
"ok" : true,
"ismaster" : false,
"hidden" : false,
"secondary" : true,
"pingTimeMillis" : 0
},
{
"ok" : true,
"ismaster" : true,
"hidden" : false,
"secondary" : false,
"pingTimeMillis" : 0
},
{
"ok" : true,
"ismaster" : false,
"hidden" : false,
"secondary" : true,
"pingTimeMillis" : 0
}
],
"master" : 1,
"nextSlave" : 2
},
"shard_3" : {
"hosts" : [
{
"ok" : true,
"ismaster" : false,
"hidden" : false,
"secondary" : true,
"pingTimeMillis" : 0
},
{
"ok" : true,
"ismaster" : false,
"hidden" : false,
"secondary" : true,
"pingTimeMillis" : 0
},
{
"ok" : true,
"ismaster" : true,
"hidden" : false,
"secondary" : false,
"pingTimeMillis" : 0
}
],
"master" : 2,
"nextSlave" : 1
},
"shard_4" : {
"hosts" : [
{
"ok" : true,
"ismaster" : true,
"hidden" : false,
"secondary" : false,
"pingTimeMillis" : 0
},
{
"ok" : true,
"ismaster" : false,
"hidden" : false,
"secondary" : true,
"pingTimeMillis" : 0
},
{
"ok" : true,
"ismaster" : false,
"hidden" : false,
"secondary" : true,
"pingTimeMillis" : 0
}
],
"master" : 0,
"nextSlave" : 1
},
"shard_5" : {
"hosts" : [
{
"ok" : true,
"ismaster" : true,
"hidden" : false,
"secondary" : false,
"pingTimeMillis" : 0
},
{
"ok" : true,
"ismaster" : false,
"hidden" : false,
"secondary" : true,
"pingTimeMillis" : 0
},
{
"ok" : true,
"ismaster" : false,
"hidden" : false,
"secondary" : true,
"pingTimeMillis" : 0
}
],
"master" : 0,
"nextSlave" : 1
},
"shard_6" : {
"hosts" : [
{
"ok" : true,
"ismaster" : false,
"hidden" : false,
"secondary" : true,
"pingTimeMillis" : 0
},
{
"ok" : true,
"ismaster" : true,
"hidden" : false,
"secondary" : false,
"pingTimeMillis" : 0
},
{
"ok" : true,
"ismaster" : false,
"hidden" : false,
"secondary" : true,
"pingTimeMillis" : 0
}
],
"master" : 1,
"nextSlave" : 0
},
"shard_7" : {
"hosts" : [
{
"ok" : true,
"ismaster" : false,
"hidden" : false,
"secondary" : true,
"pingTimeMillis" : 0
},
{
"ok" : true,
"ismaster" : true,
"hidden" : false,
"secondary" : false,
"pingTimeMillis" : 0
},
{
"ok" : true,
"ismaster" : false,
"hidden" : false,
"secondary" : true,
"pingTimeMillis" : 0
}
],
"master" : 1,
"nextSlave" : 2
},
"shard_8" : {
"hosts" : [
{
"ok" : true,
"ismaster" : true,
"hidden" : false,
"secondary" : false,
"pingTimeMillis" : 0
},
{
"ok" : false,
"ismaster" : true,
"hidden" : false,
"secondary" : false,
"pingTimeMillis" : 0
},
{
"ok" : true,
"ismaster" : false,
"hidden" : false,
"secondary" : true,
"pingTimeMillis" : 0
}
],
"master" : 0,
"nextSlave" : 2
}
},
"createdByType" : {
"master" : 343,
"set" : 12,
"sync" : 1680
},
"totalAvailable" : 36,
"totalCreated" : 2035,
"numDBClientConnection" : 5221,
"numAScopedConnection" : 24,
"ok" : 1
}
mongos> 

Barrie

Feb 29, 2012, 10:58:27 AM
to mongod...@googlegroups.com
What versions are you running for the mongos, mongod, and driver?  

ZephW

Feb 29, 2012, 9:53:12 PM
to mongod...@googlegroups.com
I'm running 2.0.2. The ever-increasing connection issue has been resolved (it was caused by a misconfiguration), but I still need someone to help me understand each field of the connPoolStats output.
Thank you.

Gregor Macadam

Mar 1, 2012, 12:09:11 PM
to mongod...@googlegroups.com
Mongos holds a pool of connections for each host; connections are recycled through the pool.
For each host, "available" shows the number of connections currently in the pool for that host, and "created" shows how many connections have *ever* been created for that host. So the pool will grow and shrink, but "created" just shows how many have been created, *ever*.
Next there are a bunch of stats for each replica set / shard, which I can explain if you want, but maybe you already understand those.

Next "createdByType" shows the number of each type of connection that exists in all the pools. Master is a connection to the shard primary, set is a replica set connection and sync is a connection to the config server. 
Next there is a running total of total number of connections in all pools - "totalAvailable".  
Then there is total number of connections ever created - "totalCreated".  
Then there are two numbers indicating the type of connection - "numDBClientConnection" are normal connections and "numAScopedConnection" are connections which are exception safe - that is the connection will be released if an exception is thrown in the code. 
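If it helps, here's a rough shell snippet (run on the mongos, the same place you ran the command) that just walks the "hosts" section and adds up the per-host numbers; only the field names come from the actual output, the rest is purely for illustration:

// Rough sketch: sum the per-host pool counters reported by connPoolStats (run against the mongos).
var stats = db.adminCommand("connPoolStats");
var sumAvailable = 0, sumCreated = 0;
for (var host in stats.hosts) {
    var pool = stats.hosts[host];
    sumAvailable += pool.available;   // connections currently idle in this host's pool
    sumCreated   += pool.created;     // connections ever created for this host
    print(host + "  available=" + pool.available + "  created=" + pool.created);
}
print("sum of available: " + sumAvailable + "  (reported totalAvailable: " + stats.totalAvailable + ")");
print("sum of created:   " + sumCreated + "  (reported totalCreated: " + stats.totalCreated + ")");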
Make sense?

ZephW

Mar 6, 2012, 3:11:30 AM
to mongod...@googlegroups.com
Gregor, thanks for the explanation. So does "createdByType" show connections *ever* created, or those currently still there? numDBClientConnection is huge; is it an *ever* count as well?

ZephW

Mar 6, 2012, 3:12:13 AM
to mongod...@googlegroups.com
BTW, why is 'sync' so abnormally large?

Adam C

Mar 6, 2012, 9:51:07 AM
to mongod...@googlegroups.com
Sync is large because the configuration database is where the mongos gets its sharding information from (the config database holds all the sharding metadata, and the mongos keeps a copy of it). Hence there will be a large number of sync connections from the mongos perspective (which is where you ran this command).

createdByType is an always-incrementing count, as is numDBClientConnection; they are really just summaries of the per-host information above, broken out in a few different ways for informational purposes.
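If you want to see that for yourself, a rough way is to sample the command twice (the 30-second wait is an arbitrary interval, nothing official):

// Rough sketch: sample connPoolStats twice on the mongos and compare the counters.
var before = db.adminCommand("connPoolStats");
sleep(30 * 1000);                                  // arbitrary 30s pause while the cluster handles traffic
var after = db.adminCommand("connPoolStats");

// totalCreated and createdByType only ever go up; totalAvailable reflects what is pooled right now.
print("totalCreated:   " + before.totalCreated + " -> " + after.totalCreated);
print("totalAvailable: " + before.totalAvailable + " -> " + after.totalAvailable);
print("sync created:   " + before.createdByType.sync + " -> " + after.createdByType.sync);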

Adam

ZephW

Mar 6, 2012, 9:58:31 PM
to mongod...@googlegroups.com
Thank you.