|Deleting Data That Has Been Processed||Toby Ho||10/4/12 6:22 AM|
I am writing a batch job to aggregate some data using the aggregation framework. Since the output is potentially larger than the document size limit, I am using a $limit at the top of my pipeline to reduce the number of documents processed at a time. After the aggregation is complete, I save the result to another collection. Now I would like to remove all the records I have processed so far. How would I do this reliably without having to worry about race conditions? Or is there a better way to go about what I am trying to do?
|Re: Deleting Data That Has Been Processed||Stephen Steneker||10/7/12 4:58 PM|
This question has also been discussed on StackOverflow.
As suggested in the comments there, the expected approach is to filter on a deterministic boundary such as a date or _id value rather than relying on $limit.
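For example, a minimal mongo shell sketch of that approach, assuming a source collection named events with a ts timestamp field and a results collection named summaries (collection and field names here are placeholders, not from the original post):

// Fix the boundary first, so the aggregation and the later remove()
// cover exactly the same set of documents.
var boundary = new Date();

var out = db.events.aggregate([
    { $match: { ts: { $lt: boundary } } },
    { $group: { _id: "$category", total: { $sum: "$amount" } } }
]);

// In the 2.2-era shell, aggregate() returns { result: [...], ok: 1 }.
out.result.forEach(function (doc) {
    db.summaries.insert(doc);
});

// Remove only the documents behind the same boundary; anything inserted
// after the boundary stays and is picked up on the next run.
db.events.remove({ ts: { $lt: boundary } });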
|Re: Deleting Data That Has Been Processed||viC||10/8/12 3:54 AM|
In case you don't have criteria to separate them, try this:
1> Rename your collection; new writes will go into a fresh collection under the original name.
2> Do all your processing on the renamed collection, as in the sketch below.
*Warning: it doesn't work well in sharded environments, since renameCollection is not supported on sharded collections.*
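A rough mongo shell sketch of the rename approach, again using placeholder names (events as the live collection, events_processing as a hypothetical staging name, summaries for the output):

// Atomically move the data aside; new writes to "events" will create a
// fresh, empty collection under the original name.
db.events.renameCollection("events_processing");

// Process the renamed (now frozen) collection at leisure.
var out = db.events_processing.aggregate([
    { $group: { _id: "$category", total: { $sum: "$amount" } } }
]);
out.result.forEach(function (doc) {
    db.summaries.insert(doc);
});

// Drop the staging collection once the results are saved.
db.events_processing.drop();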