Upgraded MongoDB 2.6 instance to 4.2 using mongodump/restore


M Trostle

Nov 22, 2019, 2:01:22 PM
to mongodb-user
We are running MongoDB 2.6 and wanted to upgrade to the current release.  When reps were onsite, we were told that doing a mongodump and then restoring the data into a recent version was the best approach.  Here is what I received after doing just that:

Failed: restore error: error running merge command: (NoSuchKey) Missing expected field "db"

1188567 document(s) restored successfully. 0 document(s) failed to restore.


All documents restored successfully, but I was told that because of the error I "might miss out on users".

It looks like maybe it was just trying to merge user records, but we don't have user definitions on all of the DBs, so maybe that's why we got the error?  How can I check whether things are fine or not? Any suggestions?

Any help would be appreciated.  I am new to all of this.

Thanks,

-M Trostle

Kevin Adistambha

Dec 3, 2019, 8:48:58 PM
to mongodb-user

Hi,

What is the topology of your deployment, and what version are you upgrading to? I see a similar message when trying to restore to a config server, as described in SERVER-28137. From the ticket, it appears that you may need to Upgrade User Authorization Data, but this is just my guess at the cause of the message.

Ignoring the error for now, are you able to confirm that all documents were restored successfully and the database is operational? Can your apps connect to it?

Best regards,
Kevin

M Trostle

Dec 4, 2019, 9:52:56 AM
to mongodb-user
We have a MongoDB replica set cluster on 3 different VMs running version 2.6.  There are multiple DBs within the setup (per show dbs). I am upgrading to 4.2.  I did a mongodump of the primary node (which I assume is the config server).  FYI, I have never worked with MongoDB before.  The people who dealt with this prior are all gone and the documentation on what they did is not the best.  Anyway, from reading the link you sent, it sounds like maybe they upgraded from 2.4 to 2.6 and possibly didn't upgrade the authorization model?  I was told that some of the DBs have user authentication.

I did a test upgrade on my Mac using the dump from the cluster's primary node.  I was able to do some minor queries (though limited, due to my knowledge) and the database was up and running.  I could see updated files on the OS as well.  My next step is to bring the dump over to what will be the new primary node of the 3 new VMs we built.  I believe that if I restore it to that node, it will come up as a single-node setup, and I can then convert it to a replica set from there.

There is a lot above, and maybe some things I missed.  Hopefully the error is due to the authorization model; I can get that fixed and then move forward from there.  I think my response has a little scope creep, but I wanted to share the whole situation for your understanding.

Thanks for your time and response!

Kevin Adistambha

Dec 5, 2019, 12:12:15 AM
to mongodb-user

Hi,

From your description, it doesn’t sound like you’re running a sharded cluster (the kind with config servers). Rather, it sounds like a replica set. Note that this is only a guess on my part.
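
If you want to confirm, a quick check from the mongo shell will tell you (rs.status() and sh.status() are standard helpers; only the suggestion to run them against your nodes is mine):

    // On a replica set member, rs.status() lists the members and their states;
    // on a plain standalone it returns a "not running with --replSet" error.
    rs.status()

    // sh.status() only works when connected to a mongos (sharded cluster);
    // on a replica set member it errors out, which itself answers the question.
    sh.status()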

I believe that if I restore it to that node, it will come up as a single-node setup, and I can then convert it to a replica set from there.

This sounds like a good plan. You might want to check out Deploy a Replica Set and Convert a Standalone to a Replica Set, which may be relevant to what you’re trying to do.
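
For reference, the conversion itself is short. A minimal sketch, assuming a replica set name of rs0 and placeholder host names:

    # Restart the standalone mongod with a replica set name
    # (or set replication.replSetName in the config file):
    mongod --dbpath /data/db --replSet rs0

Then, from the mongo shell on that node:

    rs.initiate()                      // this node becomes a one-member replica set
    rs.add("host2.example.net:27017")  // add the other members once they are
    rs.add("host3.example.net:27017")  // running with --replSet rs0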

If you’re new to MongoDB, you might benefit from the free MongoDB courses at MongoDB University. Of particular interest would be the M001: MongoDB Basics course.

Best regards,
Kevin

M Trostle

Dec 12, 2019, 9:09:36 AM
to mongodb-user
I started looking into the authorization setup and wanted to find out if it had been set up on the current test instance, so I installed MongoDB 2.6.12 on my Mac and restored the 2.6.12 dump taken from the current instance on my RHEL box.  When I did, more than two-thirds of my DBs didn't come back.  I then checked the 4.2.1 instance I had restored to, and it did indeed show all the DBs, BUT most had either no data or a lot less than the current 2.6.12 test instance.

I can understand having to deal with issues when going to 4.2.1, but problems restoring from a 2.6.12 instance to a 2.6.12 instance were not what I expected.

I do realize I am using a dump from the primary node of a replica set (on a RHEL system) and restoring to a single standalone instance on my Mac.  The dbpath and logpath are different, but I am not sure that matters.

Do you or anyone else have any suggestions?  I think I'm going to run the restore with -vvvvv (I thought 3 v's were the max until I found another post) and see if that tells me anything, and maybe add the --stopOnError flag as well.  I know this is not the typical way to upgrade, but this is the path I was given by a MongoDB rep who was onsite months ago.
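
Something like this is what I have in mind (host and dump path are placeholders for my real ones):

    # Maximum verbosity, and stop at the first error instead of continuing:
    mongorestore -vvvvv --stopOnError --host localhost:27017 /path/to/dump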

Thanks Kevin for all your help so far.  Any help anyone can provide is appreciated!

-Mike

Kevin Adistambha

Dec 15, 2019, 6:49:32 PM
to mongodb-user

Hi Mike,

I installed MongoDB 2.6.12 on my Mac and restored the 2.6.12 dump taken from the current instance on my RHEL box. When I did, more than two-thirds of my DBs didn’t come back.

This shouldn’t happen. May I ask how you took the dump? Did you fsyncLock the server before dumping?
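
For reference, the locking helpers look like this; note that they matter most for file-copy style backups, since mongodump reads through the normal query path:

    // Flush pending writes to disk and block new writes:
    db.fsyncLock()

    // ... take the backup ...

    // Release the lock afterwards:
    db.fsyncUnlock()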

I do realize I am using a dump from the primary node of a replica set (on a RHEL system) and restoring to a single standalone instance on my Mac. The dbpath and logpath are different,

That doesn’t matter. mongodump dumps only the data content in BSON format, and restoring the dump will simply fill the new server with the BSON documents (which are cross-platform). The indexes are rebuilt by the target server once the restore is done.

The only issue with dumping from 2.6.12 and restoring to 4.2.1 is the user data, since 2.6.12 uses an old version of the auth schema that is no longer supported in 4.2. See SCRAM for more information about the new auth format, and Upgrade to SCRAM for the procedure to upgrade the auth schema to a more modern auth mechanism. Note that this step is only needed if you want to preserve the user auth information. If you’re fine with rebuilding the user database, this step is not needed, although you should be careful not to import the user database into the newly restored servers (or to drop it before enabling auth on the new server).
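
If you decide to rebuild the users, one way to keep the old auth data out of the restore is to exclude the system collections that hold it. A sketch, assuming the modern (3.4+) tools, since --nsExclude does not exist in the 2.6-era mongorestore:

    # Restore everything except the stored users and roles:
    mongorestore --nsExclude "admin.system.users" --nsExclude "admin.system.roles" /path/to/dump

Alternatively, after a full restore and before enabling auth, drop them from the shell:

    // Removes all users stored in the admin database:
    db.getSiblingDB("admin").dropAllUsers()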

Best regards,
Kevin

M Trostle

Dec 16, 2019, 12:31:12 PM
to mongodb-user
Hi Kevin!

The backup is an online/hot backup taken on the primary.  The script is basically this:

# Find the PRIMARY; printjson() prints the host name wrapped in quotes,
# e.g. "host1.example.net:27017"
HOST=`/opt/mongo/bin/mongo --quiet --eval 'rs.status().members.forEach( function(z){ if (z.stateStr=="PRIMARY") { printjson(z.name);}})'`

# Strip the surrounding quotes (assumes the host:port string is exactly 28 characters)
SUBSTRING=${HOST:1:28}

# Dump everything from the primary into $DEST, logging to dump.log
/opt/mongo/bin/mongodump -h $SUBSTRING -o $DEST >> $LOGDIR/dump.log

and then we run tar zcvPf on the dump output.


So, no, we are not using fsyncLock or anything.  Should I be using the --drop flag with my mongorestore command?


This was the script I was provided and was told to use.

I actually looked into SCRAM and knew I needed to make the switch, but I haven't gotten to a solid enough state to try anything with that yet.  My guess is that we will want to keep the user auth information, but it's good to know that I can "drop" it on the new server if we didn't.


Thank you very much!


-Mike

Kevin Adistambha

Dec 16, 2019, 7:29:31 PM
to mongodb-user

Hi Mike,

It seems to me that the script is quite straightforward, so it shouldn’t be an issue. It is puzzling why you don’t get a full dump, but looking at dump.log might provide some clue (e.g. perhaps a permission issue for the backup user, if you have auth enabled?).

If you’re restoring to a new server, you shouldn’t need to use the --drop parameter. Having said that, it’s a good idea to include --drop to ensure that you’re not adding to existing data should the same collection namespace exist on the target server.
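
In other words, something along these lines (host and dump path are placeholders):

    # Drop each collection on the target before restoring it, so the result
    # matches the dump instead of merging into whatever is already there:
    mongorestore --drop --host localhost:27017 /path/to/dump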

Instead of using the script, you might want to follow the procedure in Back Up and Restore with MongoDB Tools manually for testing purposes.

Best regards,
Kevin


M Trostle

Jan 2, 2020, 3:23:09 PM
to mongodb-user
Hi Kevin,

Been off for a little while for the holidays.  Anyway, I found nothing of note in dump.log that I could see.

I did try to restore an individual DB that takes almost 6 GB of space (as seen in show dbs), and when restoring it to my blank local 4.2.1 mongo instance, it reported success.

FYI, when I did a test restore of the individual DB to my 4.2.1 instance, it had no failures, but when I reran it with --drop, it had 1 document failure (see error below):

2020-01-02T10:50:55.681-0500 continuing through error: E11000 duplicate key error collection: mydb.system.users index: _id_ dup key: { _id: ObjectId('516f03124f79f867890bc4be') }


I also just did a restore of that same individual DB to my 2.6.12 instance, and it showed 49 namespaces, whereas my current instance has 50.  When comparing collections after the restore, it showed 20, whereas the current has 21.  I listed the collections, and the only difference was a "system.profile" that was missing.  Not sure what to make of that just yet.

I'm trying to figure out what I may be missing, so I thought doing the above might provide some insight.

Thoughts on the above?

Regards,

Mike





M Trostle

Jan 2, 2020, 5:14:50 PM
to mongodb-user
I did some more checks, and on the one individual DB I restored, the missing namespace was this (name and size changed for privacy):

< { "name" : "mydb.system.profile", "options" : { "capped" : true, "size" : 1010000 } 


The size difference for mydb between current and restored is 5.002 GB.


I then restored another DB individually, which we will call mydb2; it came back approx 4 GB smaller.  From what I can see, I have the same number of collections.  This is again restoring from a dump of a 2.6.12 DB to a 2.6.12 instance on my Mac.


Another key note: when doing a full restore, it doesn't bring all my DBs back, but if I restore them individually, they come back and show the same size as on my current test instance (0.078 GB), which is basically the size of all my other DBs except for 3 of them, and one of those 3 is local.


I am not sure why, on both fronts.  Is there another way to confirm or figure out whether I truly have issues or not?


Thanks much!


-Mike


Kevin Adistambha

Jan 2, 2020, 7:23:28 PM
to mongodb-user

Hi,

{ "name" : "mydb.system.profile", "options" : { "capped" : true, "size" : 1010000 } }

The system.profile collection contains profiler output, which means that the profiler is, or was previously, enabled on that specific database. This collection doesn’t contain your data. Note that it’s not recommended to enable profiling on a prod instance, since it’s mainly used for performance troubleshooting purposes and it does involve overhead.
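
You can check and disable it per database with the standard shell helpers:

    // Run against the database in question:
    db.getProfilingStatus()   // e.g. { "was" : 1, "slowms" : 100 } when enabled
    db.setProfilingLevel(0)   // level 0 turns the profiler off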

The size difference for mydb between current and restored is 5.002 GB.

Actual size on disk could be smaller in a restored instance since there could be fragmentation in an actively used database. This fragmentation could be worse under the MMAPv1 storage engine (which is what 2.6.12 uses), so disk size is not a measure of how successful a restore is. The output of db.collection.dataSize() would be a more representative size metric. Another thing you can check is the document counts across all restored databases vs. the original databases.
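
For the counts, a quick sketch you could run per database on both servers and diff:

    // Print the document count of every collection in the current database:
    db.getCollectionNames().forEach(function(c) {
        print(c + ": " + db[c].count());
    });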

Best regards,
Kevin

M Trostle

Jan 3, 2020, 10:05:24 AM
to mongodb-user
Thanks for helping me out with system.profile.  It sounds like if I do the upgrade it would just be removed, so that is good.  I don't have much history of what was done previously, before I was asked to upgrade it.

db.collection.dataSize() didn't work for me as written, but going off of that, I found some commands to run, and this is what I found out:

This comparison was from mydb2, which has an approx 4 GB size difference (via the show dbs command, as stated previously):

To get the count of all documents in the DB:

Available document count: both 54

To get the count of documents in each collection:

Result counts for collections: all were the same

To get the dataSize of each collection, this is the command I ran on mydb2:

    db.getCollectionNames().forEach(function(collection) { size = db[collection].dataSize(); print("dataSize for " + collection + ": " + size); });

dataSize per collection where the two differed (the rest of the collections were the same; local = the restored copy on my Mac, current = the current system):

    collection      local (restored)     current
    collection1     343152144            346322096
    collection2     12224                20416
    collection3     96                   4194336
    collection4     96                   160
    collection5     1216                 5056
    collection6     674684192            999598112


Thanks and Best Regards,

Mike


Kevin Adistambha

Jan 5, 2020, 6:56:51 PM
to mongodb-user

Hi Mike,

Unfortunately, all things being equal, the remaining theory for why your dump could be smaller is a bad one: there may be corrupt data that cannot be read by mongodump. One way to check is to run the dbHash command on the database and check whether both servers return the same hash for all collections. If some collections differ, then there could be corrupt documents on the original server. There is no quick way to determine which documents are corrupt, though, so they would have to be compared one by one.
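
dbHash is run per database; the output has one MD5 per collection plus one for the database as a whole, so you can diff the results from the two servers:

    // Run against the same database on both servers and compare:
    db.runCommand({ dbHash: 1 })
    // output is roughly: { "collections" : { "coll1" : "<md5>", ... }, "md5" : "<md5>", "ok" : 1 }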

Best regards,
Kevin

M Trostle

Jan 8, 2020, 9:28:00 AM
to mongodb-user
I did another restore of the 2.6.12 dump to my Mac (running 2.6.12) and got this; maybe I missed it the first time (though I question that):

User Assertion: 17415:Cannot restore users with schema version 1 to a system with server version 2.5.4 or greater

assertion: 17415 Cannot restore users with schema version 1 to a system with server version 2.5.4 or greater


Any thoughts?


-Mike


Kevin Adistambha

Jan 9, 2020, 9:53:42 PM
to mongodb-user

Hi,

assertion: 17415 Cannot restore users with schema version 1 to a system with server version 2.5.4 or greater

That’s because the user authorization schema was changed in MongoDB 2.6, so you would need to Upgrade User Authorization Data to 2.6 Format before you can restore this into your 2.6 deployment. Judging from this message, it appears that the deployment was upgraded to 2.6 from an earlier version without upgrading the auth schema.
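
The upgrade itself is a single command, run against the admin database on the primary (after backing up the user data):

    // authSchemaUpgrade is the documented command for this migration:
    db.getSiblingDB("admin").runCommand({ authSchemaUpgrade: 1 })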

Note that the authorization schema changed again in MongoDB 3.0 (see Upgrade to SCRAM), and the old MONGODB-CR method was removed in MongoDB 4.0 (see Compatibility Changes in MongoDB 4.0).

This is part of the reason why the supported binary drop-in upgrade path is only between successive major versions (2.6 -> 3.0 -> 3.2 -> 3.4 -> 3.6 -> 4.0 -> 4.2), so the auth schema upgrade is done for you during the upgrade process. If you want to jump across more than one major version, it’s likely you’ll need to recreate the users.

Best regards,
Kevin

M Trostle

Jan 9, 2020, 10:43:24 PM
to mongodb-user
So with my 2.6.12 install on my Mac (a default deploy from the tar), I need to upgrade the schema on my Mac instance before I restore from the dump of my current 2.6.12 instance?

I did restore the dump into an older 2.4.3 instance I put on my Mac and then upgraded it to 2.6.12 without that error or any other issue.

I then tried to dump/restore to 4.x and it again didn't bring over most of the collections, data, etc., but at least this time I believe I have my current 2.6 restored to my Mac 2.6 instance correctly (though I still have some checks to do to confirm that).

I agree that it looks like I would have to recreate the users as you stated.

Thanks again!
