I did a write-test of a *very* trivial document earlier this evening,
using an m1.large, and was able to hit 20k/sec writes with safe:false,
and 1.6k/sec with safe:true...
Attached: PHP
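(The attachment isn't reproduced here; below is only a rough sketch of
what such a test could look like with the legacy PHP Mongo driver of
that era. The host, database/collection names, and document count are
placeholders.)

<?php
// Hypothetical insert benchmark (not the original attachment).
$m    = new Mongo('mongodb://localhost:27017');   // placeholder host
$coll = $m->selectCollection('bench', 'writes');  // placeholder names
$n    = 100000;

$start = microtime(true);
for ($i = 0; $i < $n; $i++) {
    // safe => false fires and forgets; safe => true blocks on a
    // getLastError round trip per insert, hence the ~10x difference.
    $coll->insert(array('x' => $i), array('safe' => false));
}
printf("%.0f inserts/sec\n", $n / (microtime(true) - $start));
?>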
With safe writes you are most likely limited by the number of client
processes (threads), because each connection runs a sequential
request/response cycle. By reducing the syncDelay you are essentially
writing more, but smaller, chunks of data, which sits much closer to the
limitations of EBS and its low throughput.
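(To illustrate: if safe writes really are bound by the sequential
request/response cycle per connection, running several writer processes
should scale the safe-write rate. A hypothetical sketch using PHP's
pcntl extension, with the same placeholder names as above:)

<?php
// Hypothetical: N independent writer processes, each with its own
// connection, so the safe-write round trips overlap across processes.
$procs   = 8;       // assumed process count
$perProc = 10000;

for ($p = 0; $p < $procs; $p++) {
    if (pcntl_fork() === 0) {                           // child process
        $m    = new Mongo('mongodb://localhost:27017'); // connect after fork
        $coll = $m->selectCollection('bench', 'writes');
        for ($i = 0; $i < $perProc; $i++) {
            $coll->insert(array('p' => $p, 'i' => $i), array('safe' => true));
        }
        exit(0);
    }
}
while (pcntl_wait($status) > 0) {}                      // reap children
?>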
This will most likely be limited by disk I/O at these low numbers.
Please check network/IO performance to find the bottleneck on the
server.
On Tue, Feb 28, 2012 at 7:10 AM, Daniel Hunt <> wrote:
> On Feb 27, 10:37 pm, Ben Wilber <> wrote:
>> What is your flush delay set at? We have to set it low (5 seconds) or it
>> locks the DB for too long. I can get a few thousand inserts/second if
>> there are no concurrent reads. The 250-300 inserts/second limit is when my
>> reads start failing after waiting 3s+ for a lock.
>
> I've not actually set a flush delay, so I presume it's happening every
> 60 seconds.
>
Should I just double/triple my EC2 costs and shard across multiple replica sets?
We're not sharding currently. We have two m2.2xlarge instances in a replica set, with an arbiter on a third machine. We do a lot more reads from the secondary than the primary (we need consistent reads in some areas), but even so the lock % is very high on the primary when doing a lot of inserts. It seems the EBS disks are just too slow, but I'm curious how others are doing this on EC2. Is everyone dealing with performance this slow for mixed read/write loads?
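(For what it's worth, a hypothetical sketch of how reads get pointed at
a secondary with the legacy driver's slaveOkay flag; the replica-set
name, hosts, and collection are made up:)

<?php
// Hypothetical: replica-set connection, reads allowed from a secondary.
$m = new Mongo('mongodb://host1:27017,host2:27017',
               array('replicaSet' => 'rs0'));           // placeholder names
$coll   = $m->selectCollection('app', 'events');
$cursor = $coll->find(array('type' => 'click'))->slaveOkay(true);
foreach ($cursor as $doc) {
    // These reads may lag the primary; consistency-critical queries
    // should skip slaveOkay and stay on the primary.
}
?>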
What is your --syncDelay set at? If you haven't changed it, then it's flushing to disk every 60s. How long did you run your test? I don't see any actual disk flushes happening in the mongostat/iostat output you sent, which would explain the very high concurrent reads/writes; I can't get anywhere near that.
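(If it helps, a quick way to check, using an example value of 5s for the
flush interval:)

mongod --syncdelay 5 ...    # flush dirty pages every 5s instead of 60s
mongostat 1                 # watch the "flushes" and "locked %" columns
iostat -x 1                 # watch volume utilization during the flushes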
It might be worth noting that even with RAID 10 there is a 2 Gbit/s rate limit between an individual EC2 instance and the EBS service as a whole, so that is the maximum throughput you can achieve on EBS from a single node, no matter how many volumes you have.
Also, you have probably already seen it but there might be some additional information here:
http://www.mongodb.org/display/DOCS/Amazon+EC2