Hello list,
I won't go into full detail here, mostly because of the lack of formatting.
Full details of the issue can be found here:
http://stackoverflow.com/questions/10032687/mongodb-c-poor-concurrent-read-performance.
I'm testing MongoDB read performance under what I would call
moderate concurrency. It's not even true concurrency: 100 threads
each execute a query that retrieves 20 documents, on a quad-core
machine.
Some info about the test:
- There are 100,000 documents in the database.
- The BSON size of each document is close to 20 KB.
- The server has 8 GB of memory.
For each thread, I first generate 20 random document IDs, then use
the In LINQ extension to retrieve those documents from the database.
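To make the shape of the test concrete, here is a minimal sketch of the harness in Python (the real test uses the C# driver; the query itself is stubbed out here, and all names and constants below are just illustrative):

```python
import random
import time
from concurrent.futures import ThreadPoolExecutor

NUM_DOCS = 100_000   # documents in the collection
BATCH_SIZE = 20      # random IDs fetched per query
NUM_THREADS = 100    # concurrent reader threads

def fetch_batch(ids):
    """Stub for the real read. In the actual test this is an
    {_id: {$in: [...]}} query issued through the C# driver's
    In LINQ extension against MongoDB."""
    return [{"_id": i} for i in ids]

def timed_read(_):
    # Each thread picks 20 random IDs and times the retrieval.
    ids = random.sample(range(NUM_DOCS), BATCH_SIZE)
    start = time.perf_counter()
    docs = fetch_batch(ids)
    elapsed_ms = (time.perf_counter() - start) * 1000
    assert len(docs) == BATCH_SIZE
    return elapsed_ms

def run_test():
    # All 100 reads are submitted at once, so the last threads to be
    # scheduled wait behind the earlier ones.
    with ThreadPoolExecutor(max_workers=NUM_THREADS) as pool:
        return list(pool.map(timed_read, range(NUM_THREADS)))

if __name__ == "__main__":
    latencies = run_test()
    print(f"min {min(latencies):.1f} ms, max {max(latencies):.1f} ms")
```

The point of the sketch is only that the per-thread latency is measured from the thread's own perspective, so queuing delay ahead of the query counts toward its read time.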
The explain() trace for a query is:
{
    "cursor" : "BtreeCursor _id_ multi",
    "nscanned" : 39,
    "nscannedObjects" : 20,
    "n" : 20,
    "millis" : 0,
    "nYields" : 0,
    "nChunkSkips" : 0,
    "isMultiKey" : false,
    "indexOnly" : false,
    "indexBounds" : {
        "_id" : [......Lots of IDs here.....]
    }
}
Read times range from 200-500 ms for the first threads that get
scheduled and execute, up to 9,000 ms or more for the last
threads.
Any ideas would be welcome.
Thank you,
Marcel