C# Driver Async performances & issues

Andrea Balducci

Apr 11, 2016, 7:18:30 AM
to mongodb-user
I'm testing the latest C# driver to measure insert performance with the sync / async methods; I need a fast and reliable way to persist a lot of concurrent writes.
I've found that InsertOneAsync (https://github.com/andreabalducci/MongoDbSyncAsyncTests/blob/master/testMongo/Program.cs#L68) is 5x slower than the sync version; I have to .Wait() on the task to avoid saturating the wait queue.
With async / await (https://github.com/andreabalducci/MongoDbSyncAsyncTests/blob/master/testMongo/Program.cs#L90) it's 1.5x slower and subject to MongoWaitQueueFullException.
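Roughly, the patterns under test look like this (a simplified sketch, not the exact benchmark code; docs and collection are placeholders):

    // Sync test: blocking insert inside Parallel.ForEach.
    Parallel.ForEach(docs, doc => collection.InsertOne(doc));

    // "Async" test: call InsertOneAsync but block on the task so the
    // wait queue doesn't saturate.
    Parallel.ForEach(docs, doc => collection.InsertOneAsync(doc).Wait());

    // Async/await test: async lambda inside Parallel.ForEach; this is the
    // variant that hits MongoWaitQueueFullException.
    Parallel.ForEach(docs, async doc => await collection.InsertOneAsync(doc));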

What's wrong with my code?
Is it better to stay with a sync write pattern?

Thanks.


Robert Stam

Apr 11, 2016, 10:05:31 AM
to mongod...@googlegroups.com
As a general rule the async methods will have *slightly* more overhead than the sync methods. So an *individual* async method call will be slightly slower than the sync equivalent. But in a heavily loaded system it is likely that an async implementation will handle higher aggregate throughput since it will require fewer threads.

One combination that is not recommended is calling an async method and then calling Wait on the task. If you're going to block on the result, you're better off calling the sync method in the first place.
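For example (a sketch with a hypothetical collection and doc):

    // Discouraged: pay the async overhead and then block anyway.
    collection.InsertOneAsync(doc).Wait();

    // If you are going to block, call the sync method directly.
    collection.InsertOne(doc);

    // If you can stay asynchronous end to end, await instead of blocking.
    await collection.InsertOneAsync(doc);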

Using Parallel.ForEach for benchmarks is non-deterministic because you don't know what degree of parallelism Parallel.ForEach will use. In fact, Parallel.ForEach will tune the degree of parallelism while the loop is executing, so it can easily be using different degrees of parallelism during the same loop.

Your TestSync and TestAsyncAwait methods are calling different overloads of Parallel.ForEach and we don't know if those two overloads use different heuristics for tuning the degree of parallelism. Most likely they do since you are seeing a larger performance difference than expected between the sync and async implementations.
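One way to take that variable out of the comparison (a sketch using the same placeholder docs and collection) is to pin the degree of parallelism explicitly, so both tests run with the same value:

    var options = new ParallelOptions { MaxDegreeOfParallelism = 8 };

    Parallel.ForEach(docs, options, doc =>
    {
        collection.InsertOne(doc);
    });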

The default connection pool size is 100 connections. So once 100 Threads/Tasks are using a connection, subsequent Threads/Tasks have to wait for a connection to become available. The default size of the connection pool wait queue is 500. So if you have a degree of parallelism over 600 you are subject to MongoWaitQueueFullExceptions. Also, if 500 Threads/Tasks are waiting for a connection, it is likely that some of them will get a TimeoutException if they have to wait too long.

It is important to keep the degree of parallelism and the size of the connection pool matched.
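For example, the pool can be sized through MongoClientSettings (a sketch with assumed values; exact property names can vary between driver versions):

    var settings = MongoClientSettings.FromUrl(new MongoUrl("mongodb://localhost"));
    settings.MaxConnectionPoolSize = 200;  // connections available to concurrent callers
    settings.WaitQueueSize = 1000;         // callers allowed to queue for a connection
    var client = new MongoClient(settings);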

As to whether you should use sync or async, you should be able to achieve your goals with either approach. Just choose the one you prefer or are more comfortable with.



Andrea Balducci

Apr 12, 2016, 4:16:08 AM
to mongodb-user
Modified the source and pushed it to a new branch (https://github.com/andreabalducci/MongoDbSyncAsyncTests/tree/v2).
With MaxDegreeOfParallelism equal to 8 and the default connection pool size of 100, the wait queue is saturated in the Async/Await test.

Andrea Balducci

Apr 12, 2016, 5:00:28 AM
to mongodb-user
Found the issue: Parallel.ForEach does not await an "async Task" lambda.
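For anyone hitting the same thing: Parallel.ForEach only has Action<T>-style overloads, so an async lambda compiles to async void and the loop returns before the inserts complete, which is what lets the wait queue fill up. One alternative sketch (assumed names, using System.Linq) that awaits every insert while bounding concurrency with a SemaphoreSlim:

    var throttle = new SemaphoreSlim(8);  // same bound as MaxDegreeOfParallelism = 8

    var tasks = docs.Select(async doc =>
    {
        await throttle.WaitAsync();
        try
        {
            await collection.InsertOneAsync(doc);
        }
        finally
        {
            throttle.Release();
        }
    });

    await Task.WhenAll(tasks);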

