Documents in the files collection contain some or all of the following fields. Applications may create additional arbitrary fields:
- files.uploadDate
The date the document was first stored by GridFS. This value has the Date type.
db.fs.files.ensureIndex({"uploadDate" : 1}, {expireAfterSeconds : 3600})
--
You received this message because you are subscribed to the Google
Groups "mongodb-user" group.
To post to this group, send email to mongod...@googlegroups.com
To unsubscribe from this group, send email to
mongodb-user...@googlegroups.com
See also the IRC channel -- freenode.net#mongodb
It's not that MongoDB can't handle the deletes; it's that you'll end up with a high lock rate if you let Mongo handle the deletes on a busy server, or on one that was busy and now has a backlog of expired files to remove. As you said, the TTL mechanism is internal so it is faster, but would you rather be able to keep serving reads and adding new files, or make sure old files are deleted exactly on time? I recommend using memcache for the simple lookups where you can, instead of lots of queries that then also have to serve files. Also, one thing to keep in mind with TTL is that it does not delete precisely on schedule; the timing depends on how busy the server is and on scheduling.
http://docs.mongodb.org/manual/tutorial/expire-data/
Note
TTL indexes expire data by removing documents in a background task that runs once a minute. As a result, the TTL index provides no guarantees that expired documents will not exist in the collection. Consider that:
- Documents may remain in a collection after they expire and before the background process runs.
- The duration of the removal operations depends on the workload of your mongod instance.
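The manual-cleanup approach suggested above can be sketched as a scheduled job that selects expired file documents by uploadDate and removes them during off-peak hours. Note that a TTL index on fs.files would remove only the files documents, leaving the matching fs.chunks documents orphaned, so a manual job should delete both. The function below is a plain JavaScript illustration of the selection logic (the collection names fs.files and fs.chunks are the GridFS defaults; the commented shell commands are an assumed sketch, not copied from the docs):

```javascript
// Sketch: pick out GridFS file documents older than a cutoff, the way a
// scheduled cleanup job would, instead of relying on a TTL index.
function expiredFileIds(filesDocs, maxAgeSeconds, now) {
  // Anything uploaded before this moment is considered expired.
  var cutoff = new Date(now.getTime() - maxAgeSeconds * 1000);
  return filesDocs
    .filter(function (doc) { return doc.uploadDate < cutoff; })
    .map(function (doc) { return doc._id; });
}

// In the mongo shell, the equivalent cleanup would look roughly like:
//   var cutoff = new Date(Date.now() - 3600 * 1000);
//   var ids = db.fs.files.find({uploadDate: {$lt: cutoff}}, {_id: 1})
//                        .map(function (d) { return d._id; });
//   db.fs.chunks.remove({files_id: {$in: ids}});  // chunks first
//   db.fs.files.remove({_id: {$in: ids}});        // then the files docs

// Example with in-memory documents and a 1-hour cutoff:
var now = new Date("2013-06-01T12:00:00Z");
var docs = [
  {_id: 1, uploadDate: new Date("2013-06-01T10:00:00Z")}, // 2h old
  {_id: 2, uploadDate: new Date("2013-06-01T11:30:00Z")}  // 30m old
];
var stale = expiredFileIds(docs, 3600, now); // [1]
```

Deleting the chunks before their files documents means a crash mid-cleanup leaves at worst a files document with missing chunks (detectable) rather than unreferenced chunks that nothing will ever clean up.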