TTL index in Large Collection having more than 1 Billion documents


Soorya Prakash

Oct 27, 2016, 2:43:59 AM
to mongodb-user
Hi,
  I am trying to use a TTL index on a large collection with more than 1 billion documents. The collection also has frequent read and write operations. In this case, will a TTL index cause any performance degradation, such as slow reads and writes, collection-level locking while the TTL thread removes matching documents, high RAM usage, a MongoDB crash, etc.?




Kevin Adistambha

Nov 7, 2016, 7:28:32 PM
to mongodb-user

Hi Soorya,

> I am trying to use a TTL index on a large collection with more than 1 billion documents. The collection also has frequent read and write operations.

> Will it cause any performance degradation, such as slow reads and writes, collection-level locking while the TTL thread removes matching documents, high RAM usage, a MongoDB crash, etc.?

Performance (TTL or otherwise) depends heavily on a combination of factors, for instance:

A TTL index expires documents using a background thread that runs approximately every 60 seconds (depending on the workload of the server). The additional load generated by the TTL thread should be similar to that of a manual delete process, so if your hardware has the capacity to comfortably serve your reads and writes, the TTL operation should not cause a significant increase in load. See Expiration of Data and TTL Index Delete Operations for more details.
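To make the comparison with a manual delete process concrete, each TTL sweep effectively deletes documents whose indexed date field is older than now minus expireAfterSeconds. A minimal sketch in Python of that selection logic (the createdAt field name and the 3600-second TTL are illustrative assumptions, not from this thread):

```python
from datetime import datetime, timedelta, timezone

def expired_ids(docs, ttl_seconds, now=None):
    """Return the _ids a TTL-style sweep would delete: documents whose
    indexed date field (here: createdAt) is older than now - ttl_seconds."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(seconds=ttl_seconds)
    return [d["_id"] for d in docs if d["createdAt"] < cutoff]

# Fixed "now" so the example is deterministic
now = datetime(2016, 11, 7, 12, 0, tzinfo=timezone.utc)
docs = [
    {"_id": 1, "createdAt": now - timedelta(hours=2)},    # older than TTL -> expires
    {"_id": 2, "createdAt": now - timedelta(minutes=5)},  # still fresh -> kept
]
print(expired_ids(docs, ttl_seconds=3600, now=now))  # [1]
```

The real sweep issues the equivalent of an index-backed delete on the server, so its cost scales with the number of expiring documents per pass rather than with the total collection size.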

Since there is no general answer to performance questions, I would encourage you to perform extensive load testing with and without the TTL index to see if your design can cope with the load. You can use the ttlMonitorEnabled server parameter to enable/disable the TTL thread and measure the effect of the TTL index.
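For reference, the ttlMonitorEnabled parameter mentioned above can be toggled at runtime from the mongo shell against a live deployment (these commands assume sufficient privileges to run setParameter on the admin database):

```javascript
// Disable the TTL background thread, e.g. to capture a load-test baseline
db.adminCommand({ setParameter: 1, ttlMonitorEnabled: false });

// Re-enable it once the comparison run is done
db.adminCommand({ setParameter: 1, ttlMonitorEnabled: true });

// Equivalently, it can be set at startup:
//   mongod --setParameter ttlMonitorEnabled=false
```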

Best regards,
Kevin
