MongoDb/C# poor concurrent read performance


marceln

Apr 6, 2012, 2:40:40 AM
to mongodb-user
Hello list,

I will not go into details here, mostly because of lack of formatting.
Full details of the issue can be found here:
http://stackoverflow.com/questions/10032687/mongodb-c-poor-concurrent-read-performance.

I'm testing MongoDB read performance at what I would call moderate
concurrency. It's not even true concurrency: I'm running 100 threads,
each executing a query that retrieves 20 documents, on a quad-core
machine.
Some info about the test:
- There are 100,000 documents in the database.
- The BSON size of each document is close to 20 KB.
- The server has 8 GB of memory.

For each thread, I first generate 20 random document IDs, then use the
In LINQ extension to retrieve them from the database.
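
For reference, the test looks roughly like this (a minimal sketch using the 1.x C# driver's Query.In builder rather than the LINQ extension; the Job class, database name and connection string below are placeholders, not my exact code):

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Threading;
using MongoDB.Bson;
using MongoDB.Driver;
using MongoDB.Driver.Builders;

// Placeholder document class; a property named Id maps to _id by default.
public class Job
{
    public int Id { get; set; }
    public string Description { get; set; }
}

public static class ReadTest
{
    public static void Main()
    {
        var server = MongoServer.Create("mongodb://localhost"); // placeholder connection string
        var jobs = server.GetDatabase("test").GetCollection<Job>("Jobs");

        var threads = new List<Thread>();
        for (int t = 0; t < 100; t++)
        {
            int threadNum = t;
            threads.Add(new Thread(() =>
            {
                // Random is not thread-safe, so each thread gets its own instance.
                var rand = new Random(threadNum);

                // 20 random _ids out of the 100,000 documents.
                var ids = Enumerable.Range(0, 20)
                                    .Select(_ => (BsonValue)rand.Next(1, 100001))
                                    .ToList();

                var sw = Stopwatch.StartNew();
                var docs = jobs.Find(Query.In("_id", ids)).ToList();
                sw.Stop();

                Console.WriteLine("thread {0}: {1} docs in {2} ms",
                                  threadNum, docs.Count, sw.ElapsedMilliseconds);
            }));
        }

        threads.ForEach(th => th.Start());
        threads.ForEach(th => th.Join());
    }
}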
The explain() trace for a query is:

{
        "cursor" : "BtreeCursor _id_ multi",
        "nscanned" : 39,
        "nscannedObjects" : 20,
        "n" : 20,
        "millis" : 0,
        "nYields" : 0,
        "nChunkSkips" : 0,
        "isMultiKey" : false,
        "indexOnly" : false,
        "indexBounds" : {
                "_id" : [......Lots of IDS here.....]
        }
}

The read times range from 200-500 ms for the first threads that get
scheduled and execute, up to 9000 ms or more for the last threads.

Any idea would be welcome.

Thank you,
Marcel

Randolph Tan

Apr 6, 2012, 1:20:42 PM
to mongod...@googlegroups.com
According to the explain, it took the server 0 milliseconds to perform the query. Did you run this explain from the shell or was it from your C# application? Are you running the entire setup locally?

marceln

Apr 6, 2012, 3:21:08 PM
to mongod...@googlegroups.com
Hi Randolph,

I've executed explain locally, on the VM where Mongo is installed.
The C# app is running on my development computer, which is still on the local network.

I've also noticed that the query duration is 0ms, but I wouldn't like to blame it on the network (just yet).

Marcel

marceln

Apr 6, 2012, 3:25:03 PM
to mongod...@googlegroups.com
I'll try a test where I run explain from a remote computer and see how it goes.

marceln

Apr 6, 2012, 3:30:33 PM
to mongod...@googlegroups.com
And here it is. It's a bit crazy, as the duration is still 0 (or sometimes 0 ms). But I don't think it's retrieving the documents. Is it correct to assume that it executes the query on the server when using explain()?
Here's the output this time:

> db.Jobs.find({_id:{$in: [28656,54554,35922,85838,39141,23376,85612,11189,21779,99891,34833,8257,32160,22293,67684,49094,29557,76384,3276,91634]}}).explain();
{
        "cursor" : "BtreeCursor _id_ multi",
        "nscanned" : 39,
        "nscannedObjects" : 20,
        "n" : 20,
        "millis" : 0,
        "nYields" : 0,
        "nChunkSkips" : 0,
        "isMultiKey" : false,
        "indexOnly" : false,
        "indexBounds" : {
                "_id" : [
                        [
                                3276,
                                3276
                        ],
                        [
                                8257,
                                8257
                        ],
                        [
                                11189,
                                11189
                        ],
                        [
                                21779,
                                21779
                        ],
                        [
                                22293,
                                22293
                        ],
                        [
                                23376,
                                23376
                        ],
                        [
                                28656,
                                28656
                        ],
                        [
                                29557,
                                29557
                        ],
                        [
                                32160,
                                32160
                        ],
                        [
                                34833,
                                34833
                        ],
                        [
                                35922,
                                35922
                        ],
                        [
                                39141,
                                39141
                        ],
                        [
                                49094,
                                49094
                        ],
                        [
                                54554,
                                54554
                        ],
                        [
                                67684,
                                67684
                        ],
                        [
                                76384,
                                76384
                        ],
                        [
                                85612,
                                85612
                        ],
                        [
                                85838,
                                85838
                        ],
                        [
                                91634,
                                91634
                        ],
                        [
                                99891,
                                99891
                        ]
                ]
        }
}

Randolph Tan

Apr 6, 2012, 3:45:48 PM
to mongod...@googlegroups.com
No, explain does not send the actual documents. The time you see in explain is the time it takes to perform the query on the server, which does not include the time needed to serialize/deserialize the documents and send them over the network.
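
One way to see the difference from C# (just a rough sketch, assuming the 1.x driver; the connection string, collection name and sample _ids are placeholders) is to compare the millis reported by explain with a stopwatch around the full fetch:

using System;
using System.Diagnostics;
using System.Linq;
using MongoDB.Bson;
using MongoDB.Driver;
using MongoDB.Driver.Builders;

public static class ExplainVsRoundTrip
{
    public static void Main()
    {
        var server = MongoServer.Create("mongodb://localhost");   // placeholder
        var jobs = server.GetDatabase("test").GetCollection("Jobs");

        // A few sample _ids; substitute your own.
        var ids = new BsonValue[] { 3276, 8257, 11189, 21779, 22293 };
        var query = Query.In("_id", ids);

        // Explain runs the query plan on the server but returns no documents.
        var explain = jobs.Find(query).Explain();
        Console.WriteLine("server millis: {0}", explain["millis"]);

        // The full round trip also includes network transfer and BSON deserialization.
        var sw = Stopwatch.StartNew();
        var docs = jobs.Find(query).ToList();
        sw.Stop();
        Console.WriteLine("end-to-end: {0} ms for {1} docs", sw.ElapsedMilliseconds, docs.Count);
    }
}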

marceln

Apr 8, 2012, 5:04:36 PM
to mongod...@googlegroups.com

Hello All,

OK, so... from your experience, is it possible to get read times under 100 ms at moderate concurrency, on commodity hardware?
If yes, please share some of the details.

Thanks

craiggwilson

Apr 9, 2012, 4:38:46 PM
to mongod...@googlegroups.com
Hi marceln,
  I have tried to emulate the issue and I get extremely fast read times (see the two attached files).  I am not crossing a network boundary, so that may be part of it.

  Could you do two things for me: 1) try the attached Program.cs file and see if you get the same results, and 2) provide the complete source of your test program.
  
Thanks...
Program.cs
out.txt

marceln

Apr 10, 2012, 1:49:07 PM
to mongod...@googlegroups.com
Hi Craig,

Thank you for your answer! 
I haven't tested your sample yet (it's essentially the same as mine), but at a quick look I think you're getting those fast read times because the BSON size of your objects is very small.
The only real difference is that I'm dumping around 20 KB of text into the description field.
But I will try your sample nevertheless.

Have you made any extra changes on the server side? Perhaps some indexes or some other configuration change?

Marcel

craiggwilson

Apr 10, 2012, 2:48:31 PM
to mongod...@googlegroups.com
Nope, no changes, no indexes.  What you see there is what you get.  You can modify the CreateJob function to make the description much more massive (I'll do the same).  Remember that the larger the document, the longer it will take to send over the wire and deserialize; nothing can really be done about that.
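
Something along these lines would do it (a hypothetical sketch, not the exact code from the attachment; it assumes a Job class with Id and Description properties):

// Hypothetical CreateJob that pads the description to roughly 20 KB,
// to mimic the document sizes in your test.
private static Job CreateJob(int id)
{
    return new Job
    {
        Id = id,
        Description = new string('x', 20 * 1024) // ~20 KB of text
    };
}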

Is this just a test? If yes, would you mind elaborating on how you're planning to use Mongo in production?  Benchmarks like this are very difficult to use as a basis for comparison because they generally aren't testing realistic scenarios.

Anyways, let me know how it goes.

craiggwilson

Apr 12, 2012, 11:02:03 AM
to mongod...@googlegroups.com
Just checking back...  I ran this with 20 KB in each of the description fields.  The initial threads took a little longer while connections were established, but after that, 3, 4, 5 ms each.  Again, this is running against localhost, so the network overhead isn't accounted for.

Have you had a chance to repeat your tests?


marceln

Apr 12, 2012, 12:20:10 PM
to mongod...@googlegroups.com
Hi Craig,

To answer your first question: this is not just a test. I'm looking for a NoSQL solution to use as the data store for a job board site written in ASP.NET.
It will have around 1 million jobs to start with, whose combined size (properties + description) should be around 20 KB in the worst-case scenario (but usually it will be under 10 KB).
So that's about it. 

I'm trying to get read times under 100 ms for concurrent and random reads (100 users or more). The production server will be more powerful than the server I'm testing on, but still, the times I'm getting now from Mongo are too high.

I will also try to integrate your sample tonight and let you know. I'll run it both locally and on another server.
Thanks again!

Marcel