> --
> You received this message because you are subscribed to the Google Groups "mongodb-user" group.
> To post to this group, send email to mongod...@googlegroups.com.
> To unsubscribe from this group, send email to mongodb-user...@googlegroups.com.
> For more options, visit this group at http://groups.google.com/group/mongodb-user?hl=en.
>
>
> The script is at: http://dl.dropbox.com/u/7258508/mongo_stress.py
> Cassandra has a stress script. I basically took that and modified it
> to work with MongoDB.
> You will find some parts of the script that do not apply to MongoDB.
>
> The options I ran the script with are: -n 10240 -t 1 -c 1 -S 100000 -d
> 123 -o insert -k
>
> -n 10240: 10240 total writes.
> -t 1: only one thread.
> -c 1: ignore; does not matter for MongoDB.
> -S 100000: about 100KB document size.
> -d 123: ignore; does not matter for MongoDB.
> -o insert: insert operation.
> -k: continue even on errors (btw, I did not get any errors).
>
> I run the above in a loop:
> for i in `seq 1 100`; do
>   python mongo_stress.py -n 10240 -t 1 -c 1 -S 100000 -d 123 -o insert -k >> op.txt
> done
>
> Captured the total time for each 1GB write (i.e. each 10240 writes)
> and plotted it.
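The per-batch timing described above can be sketched roughly as follows. This is a minimal illustration, not the actual mongo_stress.py; `insert_fn`, `make_doc`, and `time_batch` are stand-in names, and the commented pymongo usage is an assumption about how one would wire it up:

```python
import time

DOC_SIZE = 100_000  # like -S 100000: roughly 100KB of payload per document

def make_doc(key):
    # Pad the document body out to the target size, like the -S option.
    return {"_id": key, "payload": "x" * DOC_SIZE}

def time_batch(insert_fn, n_writes):
    # Wall-clock time for one batch of n_writes single-document inserts
    # (the run above uses -n 10240, i.e. ~1GB per batch at ~100KB/doc).
    start = time.time()
    for key in range(n_writes):
        insert_fn(make_doc(key))
    return time.time() - start

# With pymongo this might look like (hypothetical wiring):
#   from pymongo import MongoClient
#   coll = MongoClient()["stress"]["docs"]
#   elapsed = time_batch(coll.insert_one, 10240)
```

Plotting `elapsed` for each of the 100 loop iterations gives the per-1GB-batch curve described above.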
>
What kind of hardware did you do this test on? Which OS / mongo version?
> Amazon AWS. 3 large instances in the same AZ with EBS disks.
> Linux/1.8.1
>
> On May 19, 1:48 pm, Adam Fields <fie...@street86.com> wrote:
>> On May 19, 2011, at 11:24 AM, Gambitg wrote:
>>
>>
This is a production system running at peak capacity, so I can't do an independent stress test, but I can now confirm this anecdotally.

I had an oplog size of 100GB, and I dropped it down to 40GB two days ago. That machine is now performing noticeably better with the smaller oplog (and no other changes I'm aware of). If I had to estimate, I'd say overall performance is up by about 50%, keeping in mind that the machine's throughput metrics aren't solely dependent on write performance. This seems to have had no effect on whether the primary can serve the oplog to the secondary fast enough: this machine is still completely unable to participate in replication under load.

I'd say this is worth investigating further.
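For reference, the oplog size is fixed when mongod first creates it and is set via the `--oplogSize` startup option (in MB); changing it on an existing member means recreating the oplog. A sketch of the 40GB configuration described above (the replica-set name and dbpath here are hypothetical):

```shell
# Check the current oplog size and replication window from the mongo shell:
#   db.printReplicationInfo()

# Start mongod with a 40GB oplog (--oplogSize takes megabytes):
mongod --replSet rs0 --oplogSize 40960 --dbpath /data/db
```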