{
  "_id": ObjectId("5795db397e92fa345def9522"),
  "name": "Simon",
  "history": [
    { "change": -50,  "when": 1469536987003 },
    { "change": -300, "when": 1469536987002 },
    { "change": 100,  "when": 1469536987001 },
    { "change": 500,  "when": 1469536987000 }
  ]
}
Hi Zoro,
In MongoDB, a write operation on a single document is atomic even if it modifies multiple fields within that document. However, in the case you mentioned, computing the sum of change requires an aggregation with $unwind and $group, which cannot run inside an update operation.
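For reference, the pipeline shape needed to total the history entries looks like the comment below (the collection name accounts is an assumption, not from your example); the same computation on the sample document can be done in plain JavaScript:

```javascript
// The aggregation that would be needed, which cannot be part of an update:
//
//   db.accounts.aggregate([
//     { $unwind: "$history" },
//     { $group: { _id: "$_id", total: { $sum: "$history.change" } } }
//   ])
//
// The equivalent computation on the sample document:
const doc = {
  name: "Simon",
  history: [
    { change: -50,  when: 1469536987003 },
    { change: -300, when: 1469536987002 },
    { change: 100,  when: 1469536987001 },
    { change: 500,  when: 1469536987000 }
  ]
};
const total = doc.history.reduce((acc, h) => acc + h.change, 0);
```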
One way to handle this is to keep a precomputed sum at the top level of the document that reflects the current total, for example:
{
  "_id": ObjectId("5795db397e92fa345def9522"),
  "name": "Simon",
  "sum": 250,
  "history": [
    { "change": -50,  "when": 1469536987003 },
    { "change": -300, "when": 1469536987002 },
    { "change": 100,  "when": 1469536987001 },
    { "change": 500,  "when": 1469536987000 }
  ]
}
This way, your update operation can include {sum: {$gt: 0}} in its query filter without having to run an aggregation first. Keep in mind that your application must also update the sum field whenever new entries are added to history.
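A sketch of what such an update could look like is in the comment below (the collection name accounts is an assumption); the same guard-and-update logic is then simulated on a plain JavaScript object:

```javascript
// One possible shape for the guarded update:
//
//   db.accounts.updateOne(
//     { _id: id, sum: { $gt: 0 } },                        // the guard
//     { $inc: { sum: change },                             // keep sum in sync
//       $push: { history: { change: change, when: Date.now() } } }
//   )
//
// The same logic on a plain object:
function applyChange(doc, change, when) {
  if (doc.sum <= 0) return false;       // mirrors the filter { sum: { $gt: 0 } }
  doc.sum += change;                    // mirrors $inc
  doc.history.push({ change, when });   // mirrors $push
  return true;
}

const account = { name: "Simon", sum: 250, history: [] };
applyChange(account, -100, 1469536987004);  // account.sum is now 150
```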
Another way to control concurrency is through some form of two-phase commit. You could implement a simple version with findAndModify: set a field to indicate that the record is being updated (preventing other operations from modifying the document) and retrieve the most up-to-date version, make your change, and then set the field back to its original state so that other operations can update the document again.
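A minimal sketch of the locking step, where the state field name and its values are assumptions of mine; the shell call is shown as a comment and the compare-and-set behaviour is simulated on a plain object:

```javascript
// One possible shape for the lock acquisition:
//
//   db.accounts.findAndModify({
//     query: { _id: id, state: "ready" },        // only lock if nobody holds it
//     update: { $set: { state: "updating" } },
//     new: true                                  // return the up-to-date document
//   })
//
function tryLock(doc) {
  if (doc.state !== "ready") return false;  // another operation holds the lock
  doc.state = "updating";
  return true;
}

function unlock(doc) {
  doc.state = "ready";                      // allow other operations again
}

const account = { name: "Simon", state: "ready" };
const first = tryLock(account);    // true: lock acquired
const second = tryLock(account);   // false: document is already being updated
unlock(account);
```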
One final thing to consider is that, with ever-increasing entries in history, the document might eventually hit the 16MB document size limit. In that case, you could move history to a separate collection and keep only the pre-aggregated sum in the main collection.
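One possible shape for that split, shown as plain objects; the accountId link field and the collection layout are assumptions, not the only way to model it:

```javascript
// Main collection keeps only the running total:
const account = { _id: "5795db397e92fa345def9522", name: "Simon", sum: 250 };

// Each history entry becomes its own document in a separate history
// collection, linked back to the account by accountId:
const entry = { accountId: account._id, change: -50, when: 1469536987003 };
```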
Regards,
Amar