Duplicate Key Error on Upsert For mongo3.0.x (wiredTiger)


Memory Box

Aug 13, 2015, 6:40:54 AM
to mongodb-dev
Hi, all

I just got a weird error sent through from our application:

When I update records from two processes against a single mongod instance, it complains of a duplicate key error on a collection with a unique index, even though the operation in question was an upsert.

Case code:

 import time
 from pymongo import MongoClient, DESCENDING

 bucket = MongoClient('127.0.0.1', 27017)['test']['foo']
 bucket.drop()
 # Seed one document, then build a unique index on the timestamp field.
 bucket.update({'timestamp': 0}, {'$addToSet': {'_exists_caps': 'cap15'}}, upsert=True, safe=True, j=True, w=1, wtimeout=10)
 bucket.create_index([('timestamp', DESCENDING)], unique=True)
 # Upsert in a loop; each iteration uses a fresh microsecond timestamp.
 while True:
     timestamp = str(int(1000000 * time.time()))
     bucket.update({'timestamp': timestamp}, {'$addToSet': {'_exists_caps': 'cap15'}}, upsert=True, safe=True, w=1, wtimeout=10)


When I run the script in two processes at the same time, PyMongo raises DuplicateKeyError:

Traceback (most recent call last):
  File "test_mongo_update.py", line 13, in <module>
    bucket.update({'timestamp': timestamp}, {'$addToSet': {'_exists_caps': 'cap15'}}, upsert=True, safe=True, w=1, wtimeout=10)
  File "build/bdist.linux-x86_64/egg/pymongo/collection.py", line 552, in update
  File "build/bdist.linux-x86_64/egg/pymongo/helpers.py", line 202, in _check_write_command_response
pymongo.errors.DuplicateKeyError: E11000 duplicate key error collection: test.foo index: timestamp_-1 dup key: { : "1439372554049783" }

  • mongod server: 3.0.4, 3.0.5
  • storage engine: WiredTiger
  • pymongo: 2.8.1

The mongod.conf is attached below.

But this case works fine with the MMAPv1 storage engine.

I've also asked this same question elsewhere.

Any thoughts on what could be going wrong here?

Thanks!
mongod.conf

David Murphy

Aug 13, 2015, 6:45:05 AM
to mongo...@googlegroups.com
Does this happen if you have upsert=False? This really sounds like it's trying to create a second record at the same moment: both processes called upsert at the same time, one lost the race and detected the duplicate created by the other.
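The race described above can be sketched without a live mongod. The `ToyCollection` below is a hypothetical stand-in (not PyMongo's API): its upsert path is deliberately split into a separate find step and insert step, mirroring the non-atomic match-then-insert behavior, and the interleaving is scripted so the failure is deterministic:

```python
class DuplicateKeyError(Exception):
    """Stand-in for pymongo.errors.DuplicateKeyError."""

class ToyCollection:
    """Toy unique-keyed store. The upsert is split into a find step
    and an insert step on purpose, so two clients can interleave."""
    def __init__(self):
        self.docs = {}

    def find_one(self, key):
        return self.docs.get(key)

    def insert(self, key, doc):
        # The "unique index": a second insert of the same key fails.
        if key in self.docs:
            raise DuplicateKeyError("E11000 dup key: %r" % key)
        self.docs[key] = doc

coll = ToyCollection()
key = "1439372554049783"

# Both clients run the find step first; neither sees a document...
seen_a = coll.find_one(key)
seen_b = coll.find_one(key)
assert seen_a is None and seen_b is None

# ...so both take the insert branch of the upsert. The loser of the
# race hits the unique-index violation.
coll.insert(key, {"timestamp": key})
try:
    coll.insert(key, {"timestamp": key})
    outcome = "no error"
except DuplicateKeyError:
    outcome = "duplicate key error"
print(outcome)  # duplicate key error
```

The same interleaving is what two concurrent real upserts can hit when neither matches an existing document.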

David

--
You received this message because you are subscribed to the Google Groups "mongodb-dev" group.
To unsubscribe from this group and stop receiving emails from it, send an email to mongodb-dev...@googlegroups.com.
To post to this group, send email to mongo...@googlegroups.com.
Visit this group at http://groups.google.com/group/mongodb-dev.
For more options, visit https://groups.google.com/d/optout.

Andrew

Sep 9, 2015, 2:02:28 PM
to mongodb-dev
I'm seeing a similar bug on 3.0.6, in a multi-threaded app doing upserts. I don't know for sure yet, but I'm wondering whether it's a bug in WiredTiger's caching: the cache doesn't realize the first record was inserted, so when the second upsert command arrives it is also treated as an insert, and the storage layer beneath the WiredTiger cache is what catches the key exception. I have no proof of this yet, other than firmly believing that an upsert should never throw a duplicate key error when querying by _id with _id as the only index.

Andrew

Sep 9, 2015, 2:02:28 PM
to mongodb-dev
See this MongoDB Jira ticket: https://jira.mongodb.org/browse/SERVER-14322
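The usual workaround discussed for this class of failure is to catch the duplicate-key error and retry the upsert: on the retry the document created by the winning process now exists, so the update path matches it and no insert is attempted. This is a sketch of that pattern, not an official PyMongo API; `DuplicateKeyError` is simulated locally (in real code it comes from `pymongo.errors`) so the sketch runs without a mongod:

```python
class DuplicateKeyError(Exception):
    """Stand-in for pymongo.errors.DuplicateKeyError."""

def upsert_with_retry(do_upsert, max_retries=3):
    """Run do_upsert(); on a duplicate-key race, retry.

    do_upsert is any zero-argument callable that performs the upsert,
    e.g. lambda: bucket.update(query, doc, upsert=True).
    """
    for attempt in range(max_retries):
        try:
            return do_upsert()
        except DuplicateKeyError:
            if attempt == max_retries - 1:
                raise  # give up after exhausting retries

# Simulated race: the first attempt loses the insert race and raises;
# the retry finds the now-existing document and succeeds.
calls = {"n": 0}

def fake_upsert():
    calls["n"] += 1
    if calls["n"] == 1:
        raise DuplicateKeyError("E11000 duplicate key error")
    return {"updatedExisting": True}

result = upsert_with_retry(fake_upsert)
print(result)      # {'updatedExisting': True}
print(calls["n"])  # 2
```

A single retry is normally enough, since the conflicting document is guaranteed to exist once the error has been raised.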

