Just to chip in: I see this too. No known associated issues.
Does WT use optimistic concurrency control for findAndModify? So that...
* concurrent clients can each read a matching document at time T, but nothing is locked internally
* client A commits its change at time T+1
* client B attempts to commit its change at time T+2; internally WT detects a write from client A between times T & T+2 --> rollback
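
If it helps to make the question concrete, here is a rough application-level sketch (Python/pymongo) of that read/commit/conflict sequence, using a made-up "version" field. It only illustrates the optimistic pattern being asked about; it is not WiredTiger's internal code.

from pymongo import MongoClient

coll = MongoClient()["test"]["docs"]

def optimistic_update(doc_id, mutate):
    while True:
        doc = coll.find_one({"_id": doc_id})             # read at time T, nothing locked
        new_fields = mutate(doc)                         # compute the change locally
        new_fields["version"] = doc["version"] + 1
        result = coll.update_one(
            {"_id": doc_id, "version": doc["version"]},  # commit only if nobody wrote since T
            {"$set": new_fields},
        )
        if result.modified_count == 1:                   # e.g. client A commits at T+1
            return
        # e.g. client B lands here at T+2: a write happened between T and T+2,
        # so this attempt is rejected and the whole read/modify/write is retried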
Failing scenario:
* There were plenty of documents being updated from time to time to keep statistics about requests for data that was not present in our data source (so that we could, for example, add the missing entries)
* One of our data sources is degenerate (it has few entries and many requests)
* This generated a massive amount of updates to a single document
* Which caused one machine to freak out (all cores occupied by a MongoDB shard)
* A live-lock-like condition was observed: the system was very busy, but no progress was observable
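
For what it's worth, the workload boils down to something like the following sketch (collection/field names and the thread/iteration counts are made up): many threads doing findAndModify against a single hot statistics document.

import threading
from pymongo import MongoClient

coll = MongoClient()["stats"]["missing_requests"]
coll.update_one({"_id": "degenerate-source"},
                {"$setOnInsert": {"count": 0}}, upsert=True)

def hammer(n):
    for _ in range(n):
        # findAndModify equivalent in pymongo; every thread targets the same _id
        coll.find_one_and_update({"_id": "degenerate-source"},
                                 {"$inc": {"count": 1}})

threads = [threading.Thread(target=hammer, args=(50_000,)) for _ in range(32)]
for t in threads:
    t.start()
for t in threads:
    t.join()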
We suspect that because WiredTiger implements optimistic locking on writes, the updates to this one document turned into a tight loop that filled all CPU cores: the same writes were repeated over and over due to optimistic locking conflicts, probably without any sleep or timeout between retries (I would like to get confirmation on that).
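
To make that suspicion explicit, here is a toy model (plain Python, not actual WiredTiger code) of the kind of no-backoff retry loop we think we are seeing: on a conflict each writer just re-reads and retries immediately, so many writers on one document stay busy while commits still land roughly one at a time.

import threading

version = 0                  # stands in for the single hot document's state
guard = threading.Lock()     # only to make the toy conflict check atomic

def writer(num_updates):
    global version
    done = 0
    while done < num_updates:
        seen = version                     # optimistic read, nothing locked
        # ... compute the update based on `seen` ...
        with guard:
            if version == seen:            # nobody committed since our read
                version = seen + 1
                done += 1
        # on a conflict there is no sleep and no timeout: the loop just
        # re-reads and retries immediately, burning CPU until it wins

threads = [threading.Thread(target=writer, args=(100_000,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()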