We're using 4 EBS volumes (4x100GB) in RAID10, as recommended.
For backups: we lock the database on a secondary and take EBS
snapshots. EBS snapshots are incremental, so the first one takes a
long time but later ones are fast.
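In practice the backup step is something like this (a rough sketch only:
the volume IDs are placeholders and I'm showing the aws CLI; older mongo
shells only have db.runCommand({fsync: 1, lock: 1}) on admin instead of
the fsyncLock() helper):

# flush and lock mongod on the secondary so the data files are consistent on disk
mongo --quiet --eval 'db.fsyncLock()'
# snapshot every EBS volume in the RAID set (incremental after the first run)
for vol in vol-11111111 vol-22222222 vol-33333333 vol-44444444; do
    aws ec2 create-snapshot --volume-id "$vol" --description "mongo backup $(date +%F)"
done
# snapshots are point-in-time as soon as the call returns, so unlock right away
mongo --quiet --eval 'db.fsyncUnlock()'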
For resizing / adding capacity: it's easiest to just bring up a new
server with larger EBS volumes, because our server setup is fully
automated.
We experimented with LVM (for easier resizing and snapshotting) but
found that LVM snapshots kill write performance, so it wasn't worth
it; we're using a plain mdadm RAID setup, something like:
DEVICE=/dev/md0
MD_DRIVES="/dev/sdf /dev/sdg /dev/sdh /dev/sdi"   # the four attached EBS volumes
# -C create, -n 4 devices, -l 10 = RAID10, -p f2 = "far" layout with 2 copies, 256KB chunks
mdadm -C $DEVICE --chunk=256 -n 4 -l 10 -p f2 $MD_DRIVES
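After that it's just a filesystem on /dev/md0 mounted at mongo's dbpath,
for example (ext4 here, use whatever you prefer):

mkfs.ext4 /dev/md0                   # or xfs
mkdir -p /data/db                    # mongod's default dbpath
mount -o noatime /dev/md0 /data/db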
As an added bonus of using EBS snapshots, I can bring a new EC2
instance up with new EBS volumes that are created from snapshots, so
the mongo data is prepopulated and the instance can be online in a few
minutes. That's great for disaster recovery.
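The restore side is roughly the reverse, something like this (all the
IDs, device names and the AZ are placeholders):

# create fresh volumes from the latest snapshots, then attach them to the new instance
aws ec2 create-volume --snapshot-id snap-11111111 --availability-zone us-east-1a
aws ec2 attach-volume --volume-id vol-aaaaaaaa --instance-id i-12345678 --device /dev/sdf
# (repeat for the other three volumes)
# on the new instance, reassemble the array from its members and mount it
mdadm --assemble /dev/md0 /dev/sdf /dev/sdg /dev/sdh /dev/sdi
mount /dev/md0 /data/db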
I don't want to claim that this setup is perfect. We've been trying
for months to debug intermittent locking / connection drops that may
be related to write load or EBS latency. But in terms of backups and
disaster recovery it's not keeping me awake at night.
On Feb 28, 9:37 am, Doron Gutman <gshocko...@gmail.com> wrote:
> Hey,
>
> I'm trying to understand the "proper" setup for using mongodb on
> EC2's EBS drives.
> I've read http://groups.google.com/group/mongodb-user/browse_thread/thread/81a9...