Mongo Newbie here.
It appears that after I created a TTL index, the MongoDB lock rate shot up.
I just installed MMS about an hour ago. I only want to keep the data for 4 days (60 × 60 × 24 × 4 = 345,600 seconds).
Is delete a very expensive operation in MongoDB?
Am I not using the TTL index correctly?
How can I quickly delete/expire a bunch of data in a decent-sized collection (26+ million records)?
A very similar question was posted in this thread:
A capped collection doesn't work well for me because it's not precise, and I would like to shard my collections eventually.
This looks like a possible solution, but will I lose records in the meantime?
http://edgar.tumblr.com/post/38232292665/how-to-delete-large-amount-of-data-of-a-mongodb
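For what it's worth, the approach in that post boils down to deleting expired documents in small batches so the write lock is released between batches. A minimal sketch of that idea in the mongo shell (the ts field matches the TTL field shown below; the batch size, sleep interval, and cutoff are illustrative assumptions, not taken from the post, and the find() needs an index on ts to avoid repeated collection scans):

// Batched manual delete: remove expired documents in small chunks so the
// global write lock is yielded between batches instead of being held for
// one enormous remove().
var cutoff = new Date(Date.now() - 4 * 24 * 60 * 60 * 1000); // 4 days ago
while (true) {
    // Grab the _ids of one batch of expired documents...
    var ids = db.test.find({ ts: { $lt: cutoff } }, { _id: 1 })
                     .limit(1000)
                     .toArray()
                     .map(function (doc) { return doc._id; });
    if (ids.length === 0) break;            // nothing left to expire
    db.test.remove({ _id: { $in: ids } });  // ...and delete only that batch
    sleep(100);                             // let other operations run in between
}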
Server/Mongo info:
3.5.0-45-generic #68~precise1-Ubuntu SMP
MongoDB shell version: 2.4.9
collection stats:
> db.test.stats()
{
    "ns" : "dw.test",
    "count" : 26120286,
    "size" : 4666195584,
    "avgObjSize" : 178.64259158571235,
    "storageSize" : 5503524752,
    "numExtents" : 25,
    "nindexes" : 6,
    "lastExtentSize" : 536600560,
    "paddingFactor" : 1.0000000000629252,
    "systemFlags" : 0,
    "userFlags" : 1,
    "totalIndexSize" : 8035102992,
    "indexSizes" : {
        "_id_" : 874848352,
        "a_1_b_1_c_1" : 2294038432,
        "d_1" : 1528020816,
        "epoch_1" : 1224290592,
        "g_1" : 1298659488,
        "h_1" : 815245312
    },
    "ok" : 1
}
Before creating the TTL index:
insert query update delete getmore command flushes mapped vsize res faults locked db idx miss % qr|qw ar|aw netIn netOut conn time
*0 *0 1877 *0 0 1879|0 1 78.6g 158g 11.7g 0 dw:26.4% 0 0|0 0|0 588k 192k 3 11:19:33
*0 *0 1880 *0 0 1880|0 1 78.6g 158g 11.7g 0 dw:26.2% 0 0|0 0|1 588k 193k 3 11:19:34
*0 *0 1913 *0 0 1914|0 1 78.6g 158g 11.7g 0 dw:27.6% 0 0|0 0|1 599k 196k 3 11:19:35
*0 *0 1931 *0 0 1932|0 1 78.6g 158g 11.7g 0 dw:25.6% 0 0|0 0|1 605k 198k 3 11:19:36
After creating the TTL index (ts is the timestamp field):
"ts" : ISODate("2014-02-01T07:13:26.139Z")
db.test.ensureIndex({ts:1}, {expireAfterSeconds: 345600 })
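(To confirm the expiry option actually took effect, it should appear in the index spec returned by getIndexes():)

// Confirm the TTL option is attached to the new index:
db.test.getIndexes().forEach(function (ix) {
    if (ix.key.ts === 1) printjson(ix); // should include "expireAfterSeconds" : 345600
});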
insert query update delete getmore command flushes mapped vsize res faults locked db idx miss % qr|qw ar|aw netIn netOut conn time
*0 *0 14 *0 0 15|0 1 79.1g 159g 13.8g 0 dw:257.3% 0 0|0 0|1 4k 4k 3 11:42:58
*0 *0 16 *0 0 17|0 1 79.1g 159g 13.9g 0 dw:198.7% 0 2|0 0|0 5k 4k 3 11:43:00
*0 *0 33 *0 0 34|0 2 79.1g 159g 13.8g 0 dw:194.6% 0 1|1 0|0 10k 6k 3 11:43:02
*0 *0 24 *0 0 25|0 1 79.1g 159g 13.9g 0 dw:197.6% 0 2|0 0|1 7k 5k 3 11:43:03
After dropping the TTL index, everything went back to normal:
db.test.dropIndex({"ts":1})
insert query update delete getmore command flushes mapped vsize res faults locked db idx miss % qr|qw ar|aw netIn netOut conn time
*0 *0 1838 *0 0 1839|0 1 79.1g 158g 3.26g 0 dw:24.2% 0 0|0 0|0 583k 188k 4 13:19:23
*0 *0 628 *0 0 628|0 0 79.1g 158g 3.24g 0 dw:8.3% 0 0|0 0|1 199k 66k 4 13:19:25
*0 *0 2072 *0 0 2074|0 1 79.1g 158g 3.25g 0 dw:27.4% 0 0|0 0|0 657k 212k 4 13:19:26
*0 4 604 *0 0 604|0 0 79.1g 158g 3.23g 0 dw:8.1% 0 0|0 0|0 191k 64k 4 13:19:28
*0 *0 2096 *0 0 2098|0 1 79.1g 158g 3.26g 0 dw:26.2% 0 0|0 0|0 665k 215k 4 13:19:29
The reason I ask is that it looks like the TTL monitor started deleting documents straight away and was still busily churning through the backlog when the index was removed. Delete operations take the write lock. mongostat doesn't show the deletions because they are performed by an internal process it isn't privy to. You don't seem to be running a replica set, so unfortunately you can't point mongostat at a secondary to observe the deletions being replicated either. db.stats() will show whether the count and size are going down, though.
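You can also see the TTL deleter's cumulative activity in the "ttl" section of db.serverStatus().metrics, which is where the counters below come from (deletedDocuments and passes accumulate from the time the server started):

db.serverStatus().metrics.ttl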
"ttl" : {
"deletedDocuments" : NumberLong(746805),
"passes" : NumberLong(7651)
}
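If you re-create the index and want to watch the backlog drain, a simple poll of the collection count works too (a sketch; the one-minute interval is an assumption chosen to match how often the TTL background task wakes up):

// Watch TTL deletions shrink the backlog over time.
while (true) {
    print(new Date() + "  count = " + db.test.count());
    sleep(60 * 1000); // the TTL background task runs about once a minute
}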