Mongo Query Performance on Million Records


rohit....@ephesoft.com

Feb 14, 2017, 12:25:54 AM
to mongodb-user, Ajit Deswal, Sourabh Aggarwal, Rohit Khurana
Hello MongoDB,

Greetings of the day!

I have been using MongoDB as the data store for an analytics product I am developing, where the system needs to handle millions of records.

I have been trying to run some aggregations using stages like $match, $group, and $sort, but the queries take far too long to return results. This is not acceptable for our system.

For example, I have one collection with almost 2 million documents, each with 135 fields.

If I execute

db.<collection_name>.find().skip(600000)  // fetch documents after skipping the first 600,000 records

it takes a very long time, on the order of minutes, to return results.


Also, if I apply the $group and $sort operators on the same collection, it usually throws an out-of-memory error.
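
For illustration, the failing aggregation is shaped roughly like this (the field names here are placeholders, not our actual schema):

db.<collection_name>.aggregate([
    { $match: { status: "ACTIVE" } },                      // filter on one of the 135 fields
    { $group: { _id: "$category", total: { $sum: 1 } } },  // group and count
    { $sort: { total: -1 } }                               // sort the grouped results
])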

Please help me improve the performance, as this is becoming a bottleneck for my product.

I hope my question is clear. Please let me know if anything needs clarification.

Hoping for a quick response.

Thanks in Advance!

Rohit Khurana

Rhys Campbell

Feb 14, 2017, 11:24:05 AM
to mongodb-user, ajit....@ephesoft.com, sourabh....@ephesoft.com, arden...@gmail.com
skip() is often not an efficient way to paginate, because the server still has to walk over and discard every skipped document. See here...


You should consider an alternative method. One possibility is range-based pagination: use the last _id value of the previous batch as the starting point for the next query, if that workflow suits you.
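
A minimal sketch of that approach (assuming the default ObjectId _id and a page size of 1000):

// First page: sort on _id so the order is deterministic
var page = db.<collection_name>.find().sort({ _id: 1 }).limit(1000).toArray();

// Remember the last _id seen in this batch
var lastId = page[page.length - 1]._id;

// Next page: seek past lastId instead of skipping.
// This walks the _id index, so it stays fast no matter how deep you page.
db.<collection_name>.find({ _id: { $gt: lastId } }).sort({ _id: 1 }).limit(1000);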


The out-of-memory error can be prevented with the allowDiskUse option of the aggregation framework, which lets blocking stages such as $sort and $group spill to temporary files instead of being capped at 100MB of RAM...
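
For example, reusing the placeholder pipeline from above:

db.<collection_name>.aggregate(
    [
        { $match: { status: "ACTIVE" } },
        { $group: { _id: "$category", total: { $sum: 1 } } },
        { $sort: { total: -1 } }
    ],
    { allowDiskUse: true }  // allow $group/$sort to spill to temp files past the 100MB in-memory limit
)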


To get more specific help you should provide...

1. The query.
2. Sample documents.
3. The output of db.collection.getIndexes() (see the example below).
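
For example, from the shell (mycoll is a placeholder for your collection name):

db.mycoll.getIndexes()                                          // lists the indexes on the collection
db.mycoll.find({ status: "ACTIVE" }).explain("executionStats")  // shows how a query actually executed

The explain() output is also worth including if you have it.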

Cheers,

Rhys

