Failed: restore error: error running merge command: (NoSuchKey) Missing expected field "db"
1188567 document(s) restored successfully. 0 document(s) failed to restore.
Hi,
What is the topology of your deployment, and what version are you upgrading to? I see a similar message when restoring to a config server, as described in SERVER-28137. From the ticket, it appears that you should Upgrade User Authorization Data, but this is just my guess at the cause of the message.
Ignoring the error for now, are you able to confirm that all documents were restored successfully and the database is operational? Can your apps connect to it?
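As a quick first check, you can compare the database list and sizes on both deployments from the mongo shell. This is only a rough indicator, since size on disk can legitimately differ between servers:

// lists every database along with its size on disk
db.adminCommand({ listDatabases: 1 })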
Best regards,
Kevin
Hi,
From your description, it doesn’t sound like you’re running a sharded cluster (the kind with a config server). Rather, it sounds like it’s a replica set. Note that this is only a guess on my part.
I believe if I restore it to the primary node, it will see it as a single-node setup, and then I can convert it to a replica set from there.
This sounds like a good plan. You might want to check out Deploy a Replica Set and Convert a Standalone to a Replica Set which may be relevant to what you’re trying to do.
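For reference, the conversion itself is only a couple of steps. A minimal sketch, where the port, dbpath, and replica set name are placeholders:

# restart the standalone mongod with a replica set name
mongod --port 27017 --dbpath /data/db --replSet rs0

Then, from the mongo shell connected to that node:

rs.initiate()

Once initiated, the node elects itself primary, and you can add more members later with rs.add().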
If you’re new to MongoDB, you might benefit from the free courses at MongoDB University. Of particular interest would be the M001: MongoDB Basics course.
Best regards,
Kevin
Hi Mike,
I installed MongoDB 2.6.12 on my Mac and restored the current 2.6.12 dump from my RHEL box. When I did, I didn’t get over two-thirds of my DBs back.
This shouldn’t happen. May I ask how you took the dump? Did you fsyncLock the server before dumping?
I do realize I am using a dump from the primary node of a replica set (on a RHEL system) and restoring to a single standalone instance on my Mac. The dbpath and logpath are different.
That doesn’t matter. mongodump dumps only the data content in BSON format, and restoring the dump just fills the new server with those BSON documents (which are cross-platform). The indexes are rebuilt by the target server after the restore is done.
The only issue with dumping from 2.6.12 and restoring to 4.2.1 is the users database, since 2.6.12 uses an old version of the auth schema that’s no longer supported in 4.2. See SCRAM for more information on the new auth format, and Upgrade to SCRAM for the procedure to upgrade the auth schema to a more modern auth mechanism. Note that this step is only needed if you want to preserve the user auth information. If you’re fine with rebuilding the user database, this step is not needed, although you should be careful not to import the user database into the newly restored server (or drop it before enabling auth on the new server).
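If you do decide to rebuild the users, one way to avoid importing the old auth data is to skip those collections during the restore. This is only a sketch: --nsExclude requires mongorestore 3.4 or newer, and the dump path is a placeholder:

# skip the old auth collections so they don't end up on the 4.2 server
mongorestore --nsExclude 'admin.system.users' --nsExclude 'admin.system.version' /path/to/dump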
Best regards,
Kevin
# find the host:port of the current primary from rs.status()
HOST=`/opt/mongo/bin/mongo --quiet --eval 'rs.status().members.forEach(function(z){ if (z.stateStr == "PRIMARY") { printjson(z.name); } })'`
# printjson() quotes the name, so slice out the bare host:port (assumes a 28-character string)
SUBSTRING=${HOST:1:28}
# dump everything from the primary into $DEST
/opt/mongo/bin/mongodump -h $SUBSTRING -o $DEST >> $LOGDIR/dump.log
and then run a tar zcvPf on the dump output.
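Something like this (the archive name here is just an example):

tar zcvPf $DEST.tar.gz $DEST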
So...no, we are not using fsyncLock or anything. Should I be using a --drop flag with my mongorestore command?
This was the script I was provided and was told to use.
I actually looked into SCRAM and knew I needed to do the switch/change/etc., but I haven't gotten to a solid enough state to try anything with that. My guess is that we will want to keep the user auth information, but it's good to know that I should "drop" it before enabling auth if we didn't.
Thank you very much!
-Mike
Hi Mike,
It seems to me that the script is quite straightforward, so it shouldn’t be an issue. It is puzzling why you don’t get a full dump, but looking at dump.log might provide some clue (e.g. perhaps a permission issue with the backup user, if you have auth enabled).
If you’re restoring to a new server, you shouldn’t need to use the --drop parameter. Having said that, it’s a good idea to include --drop to ensure that you’re not adding to existing data should the same collection namespace exist in the target server.
Instead of using the script, you might want to follow the procedure in Back Up and Restore with MongoDB Tools manually for testing purposes.
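For example, a minimal manual test could look like this (host names and paths are placeholders):

# dump directly from the primary
mongodump --host primary.example.net:27017 --out /backup/dump
# restore into the test server, dropping any pre-existing collections first
mongorestore --drop --host localhost:27017 /backup/dump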
Best regards,
Kevin
2020-01-02T10:50:55.681-0500 continuing through error: E11000 duplicate key error collection: mydb.system.users index: _id_ dup key: { _id: ObjectId('516f03124f79f867890bc4be') }
< { "name" : "mydb.system.profile", "options" : { "capped" : true, "size" : 1010000 }
The size difference for mydb between the current and restored instances is 5.002GB.
I then restored another db individually, we will call it mydb2; its size was approx 4GB smaller. From what I can see, I have the same number of collections and tables.... This is again restoring from a dump of a 2.6.12 db to the 2.6.12 instance on my Mac.
Another key note: when doing a full restore, it doesn't bring all my dbs back, but if I restore those individually, they come back and show the same size as on my current test instance (.078GB), which is basically what all my other dbs are, except for 3 of them, and one of those 3 is local.
I am not sure why on both fronts. Is there another way to confirm or figure out if I truly have issues or not?
Thanks much!
-Mike
Hi,
{ "name" : "mydb.system.profile", "options" : { "capped" : true, "size" : 1010000 } }
The system.profile collection contains profiler output, which means that you have enabled or have previously enabled the profiler on that specific database. This collection doesn’t contain your data. Note that it’s not recommended to enable profiling on a prod instance, since it’s mainly used for performance troubleshooting purposes and it does involve overhead.
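You can check and turn off the profiler from the mongo shell:

db.getProfilingStatus()   // shows the current profiling level for this database
db.setProfilingLevel(0)   // level 0 disables the profiler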
The size difference for mydb between the current and restored instances is 5.002GB.
Actual size on disk could be smaller in a restored instance since there could be fragmentation in an actively used database. This fragmentation could be worse under the MMAPv1 storage engine (which is what 2.6.12 uses), so disk size is not a measure of how successful a restore is. The output of db.collection.dataSize() would be a more representative size metric. Another thing you can check is the document counts across all restored databases vs. the original databases.
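For example, running something like this against each database on both servers and diffing the output should surface any differences (a rough sketch):

// print the document count and dataSize of every collection in the current db
db.getCollectionNames().forEach(function(c) {
    print(c + " count: " + db[c].count() + ", dataSize: " + db[c].dataSize());
});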
Best regards,
Kevin
To get a count of all documents in MongoDB
Available document count: both 54
To get counts of documents per collection
Result counts for collections: all were the same
To get the dataSize of each collection
Command I ran on mydb2: db.getCollectionNames().forEach(function(collection) { var size = db[collection].dataSize(); print("dataSize for " + collection + ":" + size); });
dataSize per collection differences (the rest of the collections were the same; < = local restore, > = current system):

Collection       local (<)      current (>)
collection1      343152144      346322096
collection2      12224          20416
collection3      96             4194336
collection4      96             160
collection5      1216           5056
collection6      674684192      999598112
Hi Mike,
Unfortunately, all things being equal, the remaining theory for why your dump could be smaller is a bad one: there may be corrupt data that mongodump cannot read. One way to check this is to run the dbHash command on the database on both servers and check whether they return the same hash for every collection. If some collections differ, there could be corrupt documents on the original server. There is no quick way to determine which documents are corrupt, though, so they would have to be compared one by one.
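For example, on each server:

use mydb
db.runCommand({ dbHash: 1 })

The result includes a collections field mapping each collection name to its hash, plus an overall md5 for the database; if the hashes match between the two servers, the data is identical.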
Best regards,
Kevin
User Assertion: 17415:Cannot restore users with schema version 1 to a system with server version 2.5.4 or greater
assertion: 17415 Cannot restore users with schema version 1 to a system with server version 2.5.4 or greater
Any thoughts?
-Mike
Hi,
assertion: 17415 Cannot restore users with schema version 1 to a system with server version 2.5.4 or greater
That’s because the user authorization schema was changed in MongoDB 2.6, so you would need to Upgrade User Authorization Data to 2.6 Format before you can restore this into your 2.6 deployment. Judging from this message, it appears that the deployment was upgraded to 2.6 from an earlier version without upgrading the auth schema.
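The upgrade itself is a single command against the admin database (from the 2.6 upgrade procedure; do back up the existing system.users data first):

db.getSiblingDB("admin").runCommand({ authSchemaUpgrade: 1 })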
Note that the authorization schema changed again in MongoDB 3.0 (see Upgrade to SCRAM), and the old MONGODB-CR mechanism was removed in MongoDB 4.0 (see Compatibility Changes in MongoDB 4.0).
This is part of the reason why the supported binary drop-in upgrade path only moves between successive major versions (2.6 -> 3.0 -> 3.2 -> 3.4 -> 3.6 -> 4.0 -> 4.2): the auth schema upgrade is done for you during each step of the upgrade process. If you want to jump more than one major version, it’s likely you’d need to recreate the users.
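Recreating a user on the new deployment would look something like this (the user name, password, and roles here are purely illustrative):

db.getSiblingDB("admin").createUser({
    user: "appUser",
    pwd: "changeMe",
    roles: [ { role: "readWrite", db: "mydb" } ]
})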
Best regards,
Kevin
I did restore the dump into a new, older 2.4.3 instance I put on my Mac and then upgraded to 2.6.12 without issue or that error.
I then tried to dump/restore to 4.x and it didn't bring over most of the collections, data, etc. again, but at least this time I believe I have my current 2.6 restored to my Mac 2.6 instance correctly (though I still have some checks to do to ensure that is true).
I agree that it looks like I would have to recreate the users as you stated.
Thanks again!