
Cloning a disk


Michael Hopkins

May 8, 2006, 7:28:45 AM

Hi all

What are people's preferences here on how to clone a server disk to a new
larger one? Am about to replace 60G IDE with faster 250G SATA.

I would like it to:

- be difficult to mess up, preferably a debian/ubuntu package that takes
care of all messy details e.g. symlinks, permissions, dates

- be completely reliable

- avoid fiddling about too much i.e. the use of floppies or ftp servers

TIA

Michael


_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/

_/ _/ _/_/_/ Hopkins Research Ltd
_/ _/ _/ _/
_/_/_/_/ _/_/_/ http://www.hopkins-research.com/
_/ _/ _/ _/
_/ _/ _/ _/ 'touch the future'

_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/


Robert Hull

May 8, 2006, 9:47:47 AM
In uk.comp.os.linux, on Mon 08 May 2006 12:28, Michael Hopkins
<michael...@hopkins-research.com> wrote:

> What are people's preferences here on how to clone a server disk to a
> new larger one? Am about to replace 60G IDE with faster 250G SATA.


man fdisk
man cp (pay attention to the -a option)

>
> I would like it to:
>
> - be difficult to mess up,

fdisk is not very easy to mess up

cp will not copy nothing over something, so it is also hard to mess up


> preferably a debian/ubuntu package

Sorry, fdisk and cp are basic shell commands rather than a debian
package :-(

>
> that takes care of all messy details e.g. symlinks, permissions,
> dates

cp -a does this
>
> - be completely reliable

cp is very reliable


>
> - avoid fiddling about too much i.e. the use of floppies or ftp
> servers
>

No need for either
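
A minimal sketch of that route, assuming the old disk is /dev/hda and
the new SATA disk shows up as /dev/sda (device names here are
illustrative, and ext3 is just one choice of filesystem):

  fdisk /dev/sda          # partition the new disk
  mke2fs -j /dev/sda1     # make a filesystem (ext3 in this sketch)
  mount /dev/sda1 /mnt
  cp -ax / /mnt           # -a keeps symlinks, permissions, dates;
                          # -x stays on this filesystem, so /proc and
                          # other mounts are not dragged in

You would still need to put a boot loader on the new disk afterwards.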

> TIA
>
> Michael
>
>
>
_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
>
> _/ _/ _/_/_/ Hopkins Research Ltd
> _/ _/ _/ _/
> _/_/_/_/ _/_/_/ http://www.hopkins-research.com/
> _/ _/ _/ _/
> _/ _/ _/ _/ 'touch the future'
>
>
_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/

Please correct your .sig separator
--
Robert HULL

Archival or publication of this article on any part of thisishull.net
is without consent and is in direct breach of the Data Protection Act

VS

May 8, 2006, 2:01:49 PM
Michael Hopkins wrote:
> What are people's preferences here on how to clone a server disk to a new
> larger one? Am about to replace 60G IDE with faster 250G SATA.


Hi Michael,

The dd program (man dd) can be used to copy partitions
across disks, and may be worth looking at. Since it copies whole
partitions, symlinks, permissions, etc. are maintained across copies.

If both hard disks are on the same machine, booting off Knoppix and
copying partitions should be fairly easy. For example: dd if=/dev/hda1
of=/dev/sda1
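
Since the new partition will typically be larger than the old one, the
copied filesystem can then be grown to fill it; a sketch, assuming
ext2/ext3 landed on /dev/sda1:

  e2fsck -f /dev/sda1    # dd carries over the old, smaller filesystem as-is
  resize2fs /dev/sda1    # grow it to fill the new, larger partition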

Regards,
Vinay.

Greg Hennessy

May 8, 2006, 3:51:02 PM
On Mon, 08 May 2006 12:28:45 +0100, Michael Hopkins
<michael...@hopkins-research.com> wrote:

> What are people's preferences here on how to clone a server disk to a new
> larger one? Am about to replace 60G IDE with faster 250G SATA.

Download the bootable iso of G4U and use it.


http://www.feyrer.de/g4u/


greg
--
Every Villain Is Lemons

Rikishi 42

May 8, 2006, 5:45:22 PM
Michael Hopkins wrote:

> What are people's preferences here on how to clone a server disk to a new
> larger one? Am about to replace 60G IDE with faster 250G SATA.

Put the new disk in the machine, partition it and use something like:
cp -var /source/ /destination/

> I would like it to:
>
> - be difficult to mess up, preferably a debian/ubuntu package that takes
> care of all messy details e.g. symlinks, permissions, dates

Check.

> - be completely reliable
You'd get an error message if there was a problem.

> - avoid fiddling about too much i.e. the use of floppies or ftp servers

Fiddle-free.


--
Research is what I'm doing, when I don't know what I'm doing.
(von Braun)

Michael Hopkins

May 8, 2006, 6:01:44 PM
On 8/5/06 22:45, in article ioq4j3-...@whisper.geuens.org, "Rikishi 42"
<fsck...@telenet.be> wrote:

> Michael Hopkins wrote:
>
>> What are people's preferences here on how to clone a server disk to a new
>> larger one? Am about to replace 60G IDE with faster 250G SATA.
> Put the new disk in the machine, partition it and use something like:
> cp -var /source/ /destination/

Thanks to all for suggestions.

I am used to rsync, so I will probably partition the disk and then go that
way with excludes for /swap & such. I hadn't thought about cp -var but may
look into that too.
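
One way that could look, assuming the new disk's root filesystem is
mounted at /mnt/new (a hypothetical mount point):

  rsync -aHx --numeric-ids --exclude=/swap / /mnt/new/

(-a preserves permissions, dates and symlinks, -H preserves hard links,
and -x keeps rsync on this one filesystem, so /proc, /sys and the new
disk itself are not dragged in.)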

M

Michael Paoli

May 8, 2006, 11:07:23 PM
Michael Hopkins wrote:
> What are people's preferences here on how to clone a server disk to a new
> larger one? Am about to replace 60G IDE with faster 250G SATA.
> I would like it to:
> - be difficult to mess up, preferably a debian/ubuntu package that takes
> care of all messy details e.g. symlinks, permissions, dates
> - be completely reliable
> - avoid fiddling about too much i.e. the use of floppies or ftp servers

The "best" method(s) really depends a lot upon the particular
objectives and conditions/circumstances. Will everything on the
"source" (copying from) disk be ro when the data is to be copied off
of it? Will the data on the "source" disk be wiped/destroyed after
it's successfully copied? What are the concerns/priorities between a
copy that's "as identical as possible" vs. fully functionally
equivalent and logically identical, but doesn't drag along old cruft
(e.g. data from unallocated blocks in the filesystems)?

These can all lead to rather different "answers".

If the "source" filesystems must be rw mounted while the copy is
made, then one needs to use an ordinary tool that reads via the
filesystem, and not the device, e.g. tar, cp, cpio, pax, rsync, etc.
They all have their own various tradeoffs. Using a dump type utility
may or may not be "safe" regarding consistency of data produced, and
such doesn't exist for all filesystem types.
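
As one instance of that filesystem-level approach, a classic tar pipe
(a sketch, assuming the new filesystem is mounted at /mnt/target, a
hypothetical path):

  # copy via the filesystem; -p on extract preserves permissions
  cd / && tar cf - --one-file-system . | (cd /mnt/target && tar xpf -)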

sfdisk is quite powerful, and makes it really easy to screw up a disk
if one doesn't know what one's doing and/or one isn't sufficiently
careful, but I certainly like how it can be used to provide a
detailed dump of a disk's partition information, and can provide it
in a manner suitable for recreating the same partitioning. For a
different target disk (e.g. larger) one could carefully make any
appropriate adjustments, but otherwise use the sfdisk dump output as
a template for partitioning the new disk.
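
For instance (a sketch, with /dev/hda as the old disk and /dev/sda as
the new):

  sfdisk -d /dev/hda > table.txt    # dump the partition table as text
  # edit table.txt, enlarging partitions to suit the bigger disk, then:
  sfdisk /dev/sda < table.txt       # partition the new disk from the dump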

dd will copy things quite exactly - given suitable block sizes and
source/target devices. This could even be done for entire disk(s),
though some tweaking of resultant boot blocks, partition table, etc.,
may be necessary for the results to be most useful. This can also be
done at the level of partitions.
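
For example (device names illustrative):

  dd if=/dev/hda of=/dev/sda bs=1M      # whole disk: boot blocks, table and all
  dd if=/dev/hda1 of=/dev/sda1 bs=1M    # or one partition at a time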

Note that copying things *too* exactly can be hazardous. E.g. in
most circumstances one doesn't want to duplicate filesystem UUIDs,
LVM PV/LV/VG unique identifiers, and likewise for software RAID, etc.
Havoc can result if "unique" identifiers aren't actually unique and
multiple copies of them are accessible to the same system at the same time.
whether or not that is a problem or potential problem depends much
upon the objectives and the particular handling of the source and
target disks. If these identifiers are going to be wiped on the
source disk after they've been copied to the target, then there may
not be much of an issue ... though one still has to be cautious of
matters such as when the kernel, or various modules/utilities have
scanned and recognized these identifiers - if their idea of what
objects exist where is out-of-date and doesn't match to where things
have actually been copied/"moved", nasty hazards could still remain
until that's properly squared away.
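
For ext2/ext3 filesystems, one way to give the copy a fresh identity
is (a sketch; LVM, md and other filesystems have their own tools for
this):

  tune2fs -U random /dev/sda1    # assign a new random filesystem UUID

(and update /etc/fstab afterwards if it refers to filesystems by UUID).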

"Google Groups does not currently support posting to the following
usenet groups:
"alt.os.linux.ubuntu"" ... bah, ... whatever, ... so the newsgroups are
trimmed a bit for now

Simon Waters

May 10, 2006, 2:45:57 PM
On Mon, 08 May 2006 23:01:44 +0100, Michael Hopkins wrote:
>
> Thanks to all for suggestions.
>
> I am used to rsync, so I will probably partition disk and then go that way
> with excludes for /swap & such. I hadn't thought about cp -var but may look
> into that too.

I've just been through both with Debian Sarge:
dd and rsync copies of boxes.

I much preferred dd, which handles XP disk upgrades fine as well;
http://www.debian-administration.org/users/simonw/weblog/34

Roughly, the rsync route was:

Base install on the new server.
Sync apt settings (I used rsync on /etc/apt). apt-get update.
Use dpkg --get-selections/--set-selections to match up the base systems
(probably not strictly required as rsync does most of this, but I figured
better safe than sorry; sketched below). I manually changed the kernel
entries and one or two others in the selections file.
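
The selections sync itself is roughly (a sketch):

  dpkg --get-selections > selections    # on the old box
  dpkg --set-selections < selections    # on the new box
  apt-get dselect-upgrade               # install/remove to match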

Then (and it'll need customising).

rsync -avz \
--exclude /etc/apt \
--exclude /sys \
--exclude /dev \
--exclude /proc \
--exclude /boot \
--exclude "/vmlinu*" \
--exclude "/initrd*" \
--exclude /etc/network \
--exclude /etc/modules \
--exclude /etc/modules.conf \
--exclude /lib/modules \
--exclude /etc/mdadm \
--exclude /etc/fstab \
--exclude /etc/mtab \
--exclude "/var/lib/dpkg/info/kernel-image-2.4.27-2-686-smp*" \
--exclude "/var/lib/dpkg/info/kernel-image-2.6.8-2-386*" \
--exclude /var/spool \
--exclude /var/run \
/ $DESTINATION:/

I then had separate scripts to sync some data from "/var/spool" once the
mail server and other services were stopped. And also one to sort out
"/etc/network/interfaces".

I also wrote a script to stop/start services on both boxes (same set of
services of course, so same script), since that is useful, and easier than
single user mode.

Running "update-grub" on the target is a good idea afterwards depending
what happened to the kernel packages, but of course grub give you plenty
of scope to TRY(!) and rescue things if you didn't.
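
If the boot loader itself never made it onto the new disk, something
like this takes care of it (a sketch; device name illustrative):

  grub-install /dev/sda    # put GRUB in the new disk's MBR
  update-grub              # regenerate the menu against installed kernels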

Chances are your mileage will vary -- I had to mess about a bit because
the new machine couldn't use the same stock Debian kernels as the old. So
far my frankenstein machine doesn't show any ill effects from the move, and
I used the same scripts to clone it back to the original when the hardware
was fixed.

I also did the odd "apt-get clean" and purged CPAN cache, and excluded
backups etc, to speed the process before I ran the rsync.

The good thing about rsync is you can do it again after testing, if it
didn't work. But note I didn't include "--delete" so you risk accumulating
spare files.

Compared to "dd" this all seems very hit and miss to me, I have more faith
in gparted, and the ability of most modern filesystems to be resized, than
in my ability to get this transfer 100%.

Although for dd you generally want a large external hard disk and plenty
of down time, the sarge server I did got swapped with less than 5 minutes
of down time, and most of that was me manually running a handful of
scripts, plus a reboot to make sure it looked solid.

Use "dd", live less close to the edge.

Nix

May 10, 2006, 2:41:55 PM
On 8 May 2006, Michael Paoli spake:

> Note that copying things *too* exactly can be hazardous. E.g. in
> most circumstances one doesn't want to duplicate filesystem UUIDs,
> LVM PV/LV/VG unique identifiers, and likewise for software RAID, etc.

If you're moving something onto a md-created RAID array you don't just
have to worry about the RAID superblock: the underlying filesystems were
(or should have been!) created with knowledge of the RAID stripe size,
and if you get that wrong you get a *drastic* slowdown.

(well, OK, this isn't true for all FSen, but it is true for e.g. ext2
and ext3.)
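
For ext2/ext3 the knob is mke2fs's stride option; a sketch for a
hypothetical array with 64 KiB chunks and 4 KiB filesystem blocks:

  # stride = chunk size / block size = 64 KiB / 4 KiB = 16
  mke2fs -j -E stride=16 /dev/md0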

--
`On a scale of 1-10, X's "brokenness rating" is 1.1, but that's only
because bringing Windows into the picture rescaled "brokenness" by
a factor of 10.' --- Peter da Silva

Nix

May 14, 2006, 7:09:01 AM
On Wed, 10 May 2006, Simon Waters stipulated:

> Then (and it'll need customising).
>
> rsync -avz \
> --exclude /etc/apt \
> --exclude /sys \
> --exclude /dev \
[...]

Myself I compute most of this automatically; otherwise I kept leaving
things out:

my @excluded_files = ('*~', '*.bak', '*.swp', '.newsrc.dribble', '.locatedb', ...);
my %excluded_directories = ( all   => [ 'lost+found', '/var/tmp', '/usr/local/tmp', ... ],
                             hades => [ '/mirror', '/usr/share/dar/catalogues', ... ],
                             ... );
my @excluded_fsen = ( 'proc', 'sysfs', 'msdos', 'devpts', 'tmpfs', 'openpromfs', 'iso9660',
                      'udf', 'usbdevfs', 'minix', 'vfat', 'nfs', 'none' );

# Collate a list of excluded directories for the current backup set.
# This involves combining
#  - the contents of %excluded_directories for the current backup set.
#  - the contents of %excluded_directories applicable to all sets
#  - all filesystems listed as being of a type mentioned in @excluded_fsen
#    (no support, yet, for including supported fsen mounted below those
#    in the tree: when this causes a problem it'll be added)

sub collate_exclusions
{
    my @exclusions = ();
    my %excluded_hash;
    my $root_regexp = '^' . quotemeta ($roots{$backup_set});

    push @exclusions, @{$excluded_directories{$backup_set}}, @{$excluded_directories{all}};

    # Turn the list of excluded filesystem types into a lookup hash.
    map { $excluded_hash{$_} = 1; } @excluded_fsen;

    open MOUNTS, '</etc/mtab' or return @exclusions;

    while (<MOUNTS>)
    {
        my ($dev, $mountpoint, $fs, $options) = split;

        next if !defined ($fs);                # be paranoid :)
        next if ($mountpoint !~ $root_regexp); # Filter out stuff outside the set.

        push @exclusions, $mountpoint if exists $excluded_hash{$fs};
    }

    close MOUNTS;

    # Relativize all the paths, to stop dar mindlessly moaning about it.

    foreach (@exclusions)
    {
        $_ =~ s,^/,,;
    }

    return @exclusions;
}


I do the same sort of thing with .locatedb files, or I just forget to
prune things out of the list of directories that updatedb must scan.
