Mongo Memory mapping file size issue

Virendra Agarwal

Jan 29, 2015, 4:41:33 AM1/29/15
to mongod...@googlegroups.com
I am seeing some performance issues in my MongoDB sharded cluster. I checked serverStatus for memory usage on my shards; this is the output:

Shard one 
Primary RAM size 512 GB

    "mem" : {
                "bits" : 64,
                "resident" : 266 GB,
                "virtual" : 2425 GB,
                "supported" : true,
                "mapped" : 1211 GB,
                "mappedWithJournal" : 2422 GB
        }

Secondary RAM size 16 GB

    "mem" : {
                "bits" : 64,
                "resident" : 12663 MB,
                "virtual" : 2308 GB,
                "supported" : true,
                "mapped" : 1153 GB,
                "mappedWithJournal" : 2306 GB
        }

Shard two 
Primary RAM size 512 GB

    "mem" : {
                "bits" : 64,
                "resident" : 256 GB,
                "virtual" : 2420 GB,
                "supported" : true,
                "mapped" : 1200 GB,
                "mappedWithJournal" : 2400 GB
        }

Secondary RAM size 16 GB

    "mem" : {
                "bits" : 64,
                "resident" : 12663 MB,
                "virtual" : 2350,
                "supported" : true,
                "mapped" : 1130 GB,
                "mappedWithJournal" : 2260 GB
        }

Could you please tell me whether this can cause a performance problem, since the mapped size is much higher than the memory available on the servers?
Also, the secondary servers have much less RAM, as they are used only for data backup; no reads or writes go to them.
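As a rough sanity check on these figures, the arithmetic can be sketched like this (`mem_pressure` is a hypothetical helper; the numbers are taken from the output above, all in GB). Note that with journaling enabled, `mappedWithJournal` is roughly twice `mapped`, because the data files are mapped a second time for the journal's private view:

```python
# Compare serverStatus "mem" figures against physical RAM.
# Numbers below are from the shard-one primary in the post (GB).

def mem_pressure(ram_gb, mem):
    """Return the mapped-to-RAM ratio and whether the mapped data outsizes RAM."""
    ratio = mem["mapped"] / ram_gb
    return ratio, mem["mapped"] > ram_gb

shard1_primary = {"resident": 266, "virtual": 2425,
                  "mapped": 1211, "mappedWithJournal": 2422}

ratio, exceeds = mem_pressure(512, shard1_primary)
print(f"mapped is {ratio:.1f}x RAM; exceeds RAM: {exceeds}")
# → mapped is 2.4x RAM; exceeds RAM: True
```

Mapped size exceeding RAM is normal for memory-mapped storage; it only hurts if the *working set* (the data actually touched) does not fit in RAM.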

Will Berkeley

Jan 29, 2015, 10:18:59 AM1/29/15
to mongod...@googlegroups.com
Having resident memory be much smaller than virtual memory is not necessarily a problem with MongoDB using memory-mapped files. The data files are mapped to a region of virtual memory and the operating system controls moving things in and out of resident memory based on what data MongoDB needs to access.
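The mechanism Will describes can be illustrated with a tiny standalone sketch (not MongoDB code): mapping a file only reserves virtual address space, and the OS faults a page into resident memory when it is actually touched.

```python
# Map a file into memory; the mapping counts toward *virtual* size only.
# The OS pages data into resident memory when it is accessed.
import mmap
import os
import tempfile

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\0" * 1024 * 1024)      # a 1 MB "data file"
    path = f.name

with open(path, "r+b") as f:
    mm = mmap.mmap(f.fileno(), 0)     # whole file mapped, nothing resident yet
    first = mm[0]                     # this access faults the page in
    mm.close()

os.remove(path)
print(first)                          # → 0
```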

What are the performance issues that you are seeing? What version of MongoDB are you using? I'd be happy to help you try to investigate performance problems. Based on the information here, it doesn't look like memory is necessarily a problem.

-Will

Virendra Agarwal

Jan 29, 2015, 11:30:01 AM1/29/15
to mongod...@googlegroups.com
Hi Will,
Many thanks for the reply. My performance issue is related to sharding.
I have some sharded collections in my DB, and from time to time my mongos logs show an error saying the metadata lock has been taken for a particular collection.
When that happens, chunk migration stops for that collection.
The use case: I create a daily DB containing several sharded collections, so a new DB with these collections is created every day.
But this metadata lock happens randomly on one collection each day, and all data for that collection then stays on a single shard.

 2015-01-02T14:53:58.983+0530 [conn58] warning: splitChunk failed - cmd: { splitChunk: "DB20150102.locationCount", keyPattern: { articleId: 1, host: 1 }, min: { articleId: MinKey, host: MinKey }, max: { articleId: MaxKey, host: MaxKey }, from: "shard0000", splitKeys: [ { articleId: "", host: "abc.com" } ], shardId: "DB20150102.locationCount-articleId_MinKeyhost_MinKey", configdb: "x.x.x.192:27017,x.x.x.54:27017,x.x.x.55:27017" } result: { who: { _id: "DB20150102.locationCount", state: 1, who: "mongos1:27017:1420185037:1475849446:conn913:869542099", ts: ObjectId('54a660afdc99ecfb22d83c27'), process: "mongos1:27017:1420185037:1475849446", when: new Date(1420189871037), why: "split-{ articleId: MinKey, host: MinKey }" }, ok: 0.0, errmsg: "the collection's metadata lock is taken" }

When this error happens it blocks the DB and writes halt.
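For what it's worth, the `who` document in that warning is the distributed-lock entry the config servers keep in the `config` database's `locks` collection, so it can be inspected directly. A read-only sketch for the mongo shell (the `_id` is the namespace from the log line above; this does not clear the lock):

```javascript
// Connect to a mongos, then switch to the config database:
use config

// The _id is the namespace named in the splitChunk warning.
db.locks.find({ _id: "DB20150102.locationCount" }).pretty()

// state 0 means unlocked; a non-zero state means the lock is held or
// being acquired. A stale "when"/"ts" on a held lock can indicate an
// interrupted split or migration that never released it.
```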