Re: Mongo Backup - Restoring EBS Volumes

Chase

Oct 8, 2012, 11:18:33 AM
to mongod...@googlegroups.com
Bump. I think this is a very relevant issue for the community, and it would be great to have a documented solution.

Sandeep Parikh

Oct 8, 2012, 6:16:27 PM
to mongod...@googlegroups.com
Chase,
  • How did you conduct the backup? Was the database running or did you fsync+lock and then take the snapshots?
  • If you launched another instance of the same AMI with EBS RAID, there could be issues trying to restore the backup onto that box because of device name collisions, etc. (a quick way to check for that is sketched below).
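For example, before attaching the restored volumes, something like this should show what the fresh instance has already assembled on its own (just a sketch; the device names come from whatever the AMI sets up, so they may differ on your box):

# See which md arrays and block devices the new instance brought up by itself
cat /proc/mdstat
lsblk

# See which arrays and volume groups are already registered
mdadm --detail --scan
vgs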
Please send me the keys/addresses and I will definitely take a look (off list is fine).

-Sandeep

On Saturday, October 6, 2012 at 12:50 PM, Chase wrote:

I am following the guide for EC2 backup and restore (http://goo.gl/mzKUK). I have created an EC2 instance from the 10gen-provided AMI on the AWS Marketplace (http://goo.gl/v1PzT), which uses RAID 10 over EBS for storage. I am able to create backup snapshots of all 4 drives. My real problem is with restoring them! My full steps are:

 
  1. Create a new RAID 10 Mongo instance using this: http://goo.gl/v1PzT
  2. Start up mongo and put in some test data to verify it gets backed up
  3. Create snapshots of all 4 RAID drives as described at http://goo.gl/mzKUK
  4. Create a new instance, using the same base AMI from the AWS Marketplace link above in step 1
  5. Create volumes from the snapshots made and attach them to the instance
  6. Where things break: the "Mounting the volume groups" section of the 10gen howto page (http://goo.gl/mzKUK)
    1. mdadm has no --auto-update-homehost option
    2. The following are the commands and results that are causing the issue
mdadm --assemble -u ec655893:c8ed37fa:b05bfb20:fd44dbd4 --no-degraded /dev/md0
mdadm: /dev/md0 has been started with 4 drives.
 
pvscan
Found duplicate PV 5aZHqMeG2jHqmK1c1d1jellaYZIYA6U2: using /dev/md127 not /dev/md0
PV /dev/md127   VG vg0   lvm2 [200.00 GiB / 8.00 MiB free]
Total: 1 [200.00 GiB] / in use: 1 [200.00 GiB] / in no VG: 0 [0   ]

vgscan
Reading all physical volumes.  This may take a while...
Found duplicate PV 5aZHqMeG2jHqmK1c1d1jellaYZIYA6U2: using /dev/md127 not /dev/md0
Found volume group "vg0" using metadata type lvm2

mkdir -p /var/lib/mongodb
cat >> /etc/fstab << EOF
> /dev/mapper/vg0-data /var/lib/mongodb xfs noatime,noexec,nodiratime 0 0
> EOF

mount /var/lib/mongodb
mount: /dev/mapper/vg0-data already mounted or /var/lib/mongodb busy
mount: according to mtab, /dev/mapper/vg0-data is mounted on /data

I believe what is happening is that since both sets of volumes come from the same root AMI, they are registered identically (i.e. same UUIDs?). But I have no idea how to actually restore these volumes.
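My guess is that something like the following would confirm it (I have not run exactly these commands, so treat this as a sketch):

# Compare the RAID array UUIDs -- volumes restored from snapshots of the same
# array carry identical md metadata
mdadm --detail /dev/md0 | grep UUID
mdadm --detail /dev/md127 | grep UUID

# Compare the LVM physical volume UUIDs that pvscan is complaining about
pvs -o pv_name,pv_uuid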

It does seem, though, that since this is a 10gen-provided AMI, there should be a documented way to restore data.

If anyone needs access to this box to test, I have keys and addresses I can post.

Thanks for any help!


Chase Brammer

Oct 8, 2012, 6:29:18 PM
to mongod...@googlegroups.com
Sandeep,

Thank you for responding! I was banging my head on this one. 
  • For how I did the backup: I locked mongo via `db.runCommand({fsync: 1, lock: 1});` and then took the snapshots, just as this guide recommends: http://goo.gl/mzKUK (roughly the sequence sketched after this list)
  • Yes, I did launch another instance from the same AMI (the 10gen-created one). It seems like you would want to restore the backup onto the same AMI to avoid changing the database and operating system configuration, right?
  • I will email you the keys and addresses off-list now
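Roughly, the backup sequence looks like this (sketched with the EC2 API tools for the snapshot step; the volume IDs below are placeholders, not my real ones):

# Flush writes to disk and block further writes while the snapshots are taken
mongo --eval "db.runCommand({fsync: 1, lock: 1})"

# Snapshot each of the 4 EBS volumes backing the RAID 10 array
# (vol-xxxxxxx1..4 are placeholders for the real volume IDs)
for vol in vol-xxxxxxx1 vol-xxxxxxx2 vol-xxxxxxx3 vol-xxxxxxx4; do
    ec2-create-snapshot "$vol" -d "mongo backup $(date +%F)"
done

# Release the lock once the snapshots have been started
mongo --eval "db.fsyncUnlock()"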
Thank you!

--
Chase Brammer

Rudolf Schmidt

Oct 9, 2012, 5:19:31 AM
to mongod...@googlegroups.com
Just out of interest: would the mongodb-maintenance collection of scripts be of interest to you for backing up your data? I could add restore scripts if people need that feature :-)

Chase

Oct 12, 2012, 11:34:31 AM
to mongod...@googlegroups.com
Sandeep emailed me, and it worked! Thank you very much; this was seriously helpful. I have documented the whole recovery process and the commands I used below for others' reference, with a rough consolidated script after the list.

  • Create snapshots of the current volumes using the guide
  • Start and connect to a new mongo instance
  • View logical volumes: lvdisplay -v 
  • Unmount data volume: umount /dev/vg0/data
  • Unmount log volume: umount /dev/vg0/log
  • Unmount journal volume: umount /dev/vg0/journal
  • Remove the volume group: vgremove vg0
  • Show physical volumes: pvs -a
  • Remove the physical volume: pvremove /dev/md127
  • Stop the RAID: mdadm --stop /dev/md127
  • Detach the volumes through AWS and wait for confirmation that they are detached
  • Restart instance
  • Reattach all volumes, wait for confirmation that they are attached
  • View logical volumes, notice that they probably say NOT Available: lvdisplay -v
  • Fix the data volume: lvchange -a y /dev/vg0/data
  • Fix the log volume: lvchange -a y /dev/vg0/log
  • Fix the journal volume: lvchange -a y /dev/vg0/journal
  • View logical volumes, notice that they now say available: lvdisplay -v
  • Mount all the volumes: mount /data, mount /log, mount /journal
  • Remove the mongo lock file (since the snapshot happened with the lock): rm -rf /data/mongod.lock
  • Start mongo up: /etc/init.d/mongod start
  • Wait for mongo to finish allocating and starting up
  • Check data
  • Success!
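Strung together, it looks roughly like this (the detach/reattach and the restart in the middle still happen by hand in the AWS console, and the device and mount names are the ones from the 10gen AMI layout, so adjust for your setup):

# --- Before detaching the volumes ---
umount /dev/vg0/data
umount /dev/vg0/log
umount /dev/vg0/journal
vgremove vg0                    # remove the volume group (prompts to confirm the contained LVs)
pvremove /dev/md127             # remove the LVM label from the old array
mdadm --stop /dev/md127         # stop the RAID device

# (Detach the volumes in AWS, restart the instance, reattach the volumes.)

# --- After reattaching the volumes ---
lvchange -a y /dev/vg0/data     # mark the logical volumes as available again
lvchange -a y /dev/vg0/log
lvchange -a y /dev/vg0/journal
mount /data                     # the AMI's fstab entries map these mount points to the devices
mount /log
mount /journal
rm -rf /data/mongod.lock        # the snapshot was taken while the lock was held
/etc/init.d/mongod start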