MongoDB Multiple Volume Mount Points Support


gli

Apr 1, 2013, 10:59:03 AM
to mongod...@googlegroups.com

Sorry for my ignorance about MongoDB storage. I looked around, but did not find answers. Here are the questions I have:

 

1. Does MongoDB support multiple volume mount points on Windows Server 2008? For example, four disks of 500MB each.

2. Assuming MongoDB does support multiple mount points, how does it recognize newly added (or mounted) disks?

 
Thanks in advance.
Grant
 

William Zola

Apr 1, 2013, 12:54:21 PM
to mongod...@googlegroups.com
Hi Grant!

Currently, MongoDB stores its data files in a single directory: the one designated by the 'dbpath' parameter.  If you have multiple volumes and you want to use them with MongoDB, the best way to do so is to use the OS facilities to combine them into a single virtual volume, and then point MongoDB at that virtual volume.  You can add additional physical volumes to the virtual volume later, again using the OS facilities.
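For example (a minimal sketch, assuming the combined volume ends up mounted as drive E:, which is a hypothetical letter), you would simply point mongod's dbpath at a directory on that volume:

    rem start mongod against the virtual volume mounted as E: (hypothetical paths)
    mongod --dbpath E:\mongodb\data --logpath E:\mongodb\log\mongod.log --logappend

As the volume grows (by adding physical disks to it), mongod just sees more free space under the same dbpath; nothing on the MongoDB side has to change.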

Let me know if you have further questions.

 -William 

Kevin Rice

Apr 1, 2013, 5:36:47 PM
to mongod...@googlegroups.com
Alternate (simpler?) solution:

Let's say you have four mount points you want to use.  Shard your database into four shards, and when starting your mongods, give each one a dbpath on the mount point you want it to use, roughly as in the sketch below.
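A rough sketch of that layout, assuming the four mount points show up as drives D:, E:, F: and G: (hypothetical letters, ports, and host name) with one shard per mount point:

    rem one shard-server mongod per mount point, each on its own port
    mongod --shardsvr --port 27018 --dbpath D:\data\shard1
    mongod --shardsvr --port 27019 --dbpath E:\data\shard2
    mongod --shardsvr --port 27020 --dbpath F:\data\shard3
    mongod --shardsvr --port 27021 --dbpath G:\data\shard4

    rem then register each one with the cluster, via a mongos
    mongo --port 27017 admin --eval "printjson(db.runCommand({ addShard: 'myhost:27018' }))"

(The config servers and the mongos itself are left out of the sketch; you need those pieces too for sharding to work.)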

-- Kevin J. Rice

William Zola

Apr 1, 2013, 6:55:10 PM
to mongod...@googlegroups.com
Hey Kevin!

This is not recommended.  If you have four mongod processes running on the same machine, they'll contend for memory, and your performance will suffer.  You really want only one mongod process running on a single host.

 -William 

Kevin Rice

Apr 1, 2013, 10:32:30 PM
to mongod...@googlegroups.com
Strongly disagree, William.  In fact, I have a large amount of empirical evidence to the contrary.  I'm on a 192 GB box with 32 processors, running 24 shards on that one box (and the same on 4 other boxes).  They work fine.  In fact, we've proven through comparative tests that we get MUCH better performance on that configuration, mostly because MongoDB hits database/collection locking contention under massively write-heavy application usage.  We're getting upwards of 25k updates/second per box.

There is contention for memory, but it's equal contention: all the shards compete on the same terms, so they all share the RAM roughly equally.  Works fine.



Jaap Taal

Apr 2, 2013, 5:23:35 AM
to mongod...@googlegroups.com
@Kevin,

That's not an ordinary setup.  This only works with huge amounts of RAM available.  I have to agree with William in general: that something works for your setup doesn't mean it's to be recommended.

Jaap Taal
 
[ Q42 BV | tel 070 44523 42 | direct 070 44523 65 | http://q42.nl | Waldorpstraat 17F, Den Haag | Vijzelstraat 72 unit 4.23, Amsterdam | KvK 30164662 ]



Kevin Rice

Apr 2, 2013, 10:11:22 AM
to mongod...@googlegroups.com
Jaap:

<argumentative mode>
Oh yeah!!?!?!??!?

( LOL)

<inquisitive mode>

Thank you - this is a good discussion to be having!  I understand it's not normal to have servers with 192 GB each.  I get that.  So, as noted, I have proof it works nicely with large RAM.  I am interested to hear whether you (or anyone!) have actual experience with, say, an 8 GB box running 2 or 4 shards and seeing some kind of thrashing.  I have not seen any such behavior...

I want to know if anyone has ACTUALLY SEEN bad behavior from multiple (shard) mongo daemons interfering with each other in a measurable way.  If so, what was the slowdown?  Did it show up as increased IO, some kind of swap usage, high CPU load, or what?  I'm not seeing any of these things, and I'm absolutely burying the boxes with an uber-high write load, with most of the read load hitting the secondaries (which is probably typical of a sharded replica-set setup).

We're instrumenting everything, and we see no problems while running for weeks at a time this way. 
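For anyone who wants to look for that kind of interference on their own boxes, the usual first stops are mongostat plus the standard OS tools (the host name below is hypothetical):

    mongostat --host kjr-box1 --rowcount 30 2    # watch faults, locked %, and the qr|qw queue columns
    iostat -x 5                                  # %util and await per device show saturated disks
    vmstat 5                                     # si/so columns show whether the box is actually swapping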

Thanks,
-- Kevin J. Rice
Sears.com / Kmart.com
Chicago, IL


Kelly Stirman

Apr 2, 2013, 10:34:04 AM
to mongod...@googlegroups.com
As a generalization, I think MongoDB tends to hit bottlenecks at the disk, memory, and CPU levels, in that order.

So, if you put multiple mongod on a single machine you are more likely to encounter disk contention. You can of course mitigate this with lots of disk throughput.

And if you put multiple mongod on a single machine, you are more likely to have a situation where your working set doesn't fit in memory. You can of course mitigate this by sizing your memory for your working set appropriately. The OS will manage the memory across the different mongod instances just fine.

Because MongoDB is not CPU intensive, machines with lots of CPUs may be mostly idle. With careful planning you can run multiple mongod on a server to make good use of the CPU without overwhelming disk and memory. Without careful planning, you can... well, make matters worse.

So, as a general rule it is probably best to run one mongod per machine, but if you know what you are doing you can run multiple mongod per machine. 
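If you do run several mongod processes on one box, a rough sanity check (hypothetical ports below) is to compare each instance's resident and mapped memory against the machine's total RAM:

    mongo --port 27018 --eval "printjson(db.serverStatus().mem)"   # resident vs mapped for one mongod
    mongo --port 27019 --eval "printjson(db.serverStatus().mem)"   # ...and the next
    free -m                                                        # what the box as a whole has left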

Sam Millman

Apr 2, 2013, 11:22:36 AM
to mongod...@googlegroups.com
As Kelly noted, IO could hit you here, but then you probably have a good IO setup.  As for the 32 processors: I am guessing that by processors you actually mean cores, in which case I am guessing you don't have a single motherboard housing them all unless you have 4x8 cores, and I am not sure I have seen many boards with that capacity; it is definitely rare (32 physical processors on a single board would, of course, be stupid...).

If you are using separate motherboards for this, then the contention could be the same as on single machines.

Sam Millman

Apr 2, 2013, 11:24:34 AM
to mongod...@googlegroups.com
"I want to know if anyone has ACTUALLY SEEN bad behavior from multiple (shard) mongo daemons interfering with each other in a measurable way?"

Wait, do you mean mongos or mongod?

Kevin Rice

Apr 2, 2013, 1:59:03 PM
to mongod...@googlegroups.com
Kelly:

I agree with you rather completely.  On the hardware side, very true.

Overall, I would add to your rule: put 'locking' before disk IO.  With our massively update-heavy app, the first problem we hit was database/collection locking.  Once we increased the number of shards to reduce that, we hit socket/open-file limits.  Increasing ulimits and listening on additional virtual addresses fixed that.  Then we hit disk IO utilization limits.  Adding separate filesystems per shard and (ultimately) a SAN fixed that, mostly.  After that, we're probably limited by working-set size.  CPU has stayed below 50% even when everything else is maxed out.
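For reference, the open-file part of that is usually just a matter of raising the limit in whatever shell or init script launches each mongod, and then confirming what the running process actually got (the pid below is a placeholder):

    ulimit -n 64000                                # raise max open files/sockets before starting mongod
    grep "open files" /proc/<mongod-pid>/limits    # confirm the limit the running process ended up with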

Sam:

I'm talking about many mongod shard daemons interfering with each other enough to reduce throughput, which is contention I have NOT seen; mine work fine.

Sam:

As an example, hardware-wise, HP ProLiant DL360 G7 boxes provide 2 physical CPUs (Xeon X5650) with 6 cores each, and each core presents two hyperthreaded pseudo-cores (I don't know what these are actually called; it's my term).  What a user sees (via top, or cat /proc/cpuinfo) is 24 cores.  Same goes for 2 x 8-core CPUs x 2 pseudo-cores per real core = 32 cores on a larger version of the same box.
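(For what it's worth, you can get the physical-versus-logical breakdown directly instead of counting /proc/cpuinfo entries:)

    grep -c ^processor /proc/cpuinfo          # logical CPUs, hyperthread siblings included
    lscpu | grep -E "Socket|Core|Thread"      # sockets x cores-per-socket x threads-per-core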

Kevin J. Rice
Sears.com / Kmart.com / Mygofer.com

William Zola

Apr 2, 2013, 5:50:15 PM
to mongod...@googlegroups.com
"I want to know if anyone has ACTUALLY SEEN bad behavior from multiple (shard) mongo daemons interfering with each other in a measurable way?"

I have.  Frequently.  Repeatedly.  I have the (emotional) scars to show for it. 

"If so, what was the slowdown?  Did it show up as increased IO, some kind of swap usage, high CPU load, or what?"

It shows up as slow performance on queries: high page faults, I/O pegged at the maximum the disks can handle, lots of queueing for reads and writes.  It's typically worse on spinning disk than it is on SSDs.

Kevin: One of the things that's worth pointing out is that you have a *highly* unusual setup.  You have a write-heavy load (typical MongoDB installations are read-heavy); you've got more RAM in a single box than most folks have in 10; and (I'm guessing) you're using SSDs heavily RAIDed behind that SAN.  

I'm pretty sure that none of those things was true in the OP's environment.

It's great that your setup works for you.  I'd like to remind you that your circumstances are rare to the point of being almost unique among the readers of this group.

 -William 

Sam Millman

Apr 3, 2013, 3:10:21 AM
to mongod...@googlegroups.com
Ah, you're counting hyperthreads as cores.

As William said, I too have noticed the problem of hardware contention with MongoDB first-hand, even on a high-end machine.  Yours is easily an edge case.

However:


"Adding separate filesystems per shard and (ultimately) a SAN fixed that, mostly.  After that, we're probably limited by working set limitations.  CPU has stayed below 50% even when maxed out on everything else."

That could shed some light on this, since your storage is now separate from your server.

Personally, it seems that you have mitigated the problems you would normally get by distributing your workload.

Kevin Rice

Apr 3, 2013, 2:27:38 PM
to mongod...@googlegroups.com
Thank you, Sam and William, for your input; I appreciate the perspective.  Yes, we're somewhat unusual.
