
What's the best and easy way to copy/move my old slow 320 GB SATA HDD's updated Debian bullseye v11.3 to an old fast 115 GB SSD (going to wipe it clean)?


Ant

May 19, 2022, 10:57:45 AM
Hello.

What's the best and easy way to copy/move my old slow 320 GB SATA HDD's
updated Debian bullseye v11.3 to an old fast 115 GB SSD (going to wipe
it clean)? Yes, the SSD is smaller, but my Debian installation only uses
about 8 GB. I installed Debian using the whole 320 GB drive. I'll still be
using the same 13-year-old PC.

Thank you for reading and hopefully answering soon. :)
--
Quiet cooler week so far, but will today be slammy? Celtics have better get burned by Miami Heat!
Note: A fixed width font (Courier, Monospace, etc.) is required to see this signature correctly.
/\___/\ Ant(Dude) @ http://aqfl.net & http://antfarm.home.dhs.org.
/ /\ /\ \ Please nuke ANT if replying by e-mail.
| |o o| |
\ _ /
( )

The Natural Philosopher

May 19, 2022, 11:16:22 AM
On 19/05/2022 15:57, Ant wrote:
> Hello.
>
> What's the best and easy way to copy/move my old slow 320 GB SATA HDD's
> updated Debian bullseye v11.3 to an old fast 115 GB SSD (going to wipe
> it clean)? Yes, SSD is smaller but my Debian's installation only uses
> about 8 GB. I installed Debian use the whole 320 GB drive. I'll still be
> using the same 13 yrs. old PC.
>
> Thank you for reading and hopefully answering soon. :)
My approach these days is to remove the old hard drive, install the
SSD, reinstall the latest Linux, then install the latest versions of the
apps, and roll any data across by reattaching the old hard drive and
carefully copying over what you want...

--
“it should be clear by now to everyone that activist environmentalism
(or environmental activism) is becoming a general ideology about humans,
about their freedom, about the relationship between the individual and
the state, and about the manipulation of people under the guise of a
'noble' idea. It is not an honest pursuit of 'sustainable development,'
a matter of elementary environmental protection, or a search for
rational mechanisms designed to achieve a healthy environment. Yet
things do occur that make you shake your head and remind yourself that
you live neither in Joseph Stalin’s Communist era, nor in the Orwellian
utopia of 1984.”

Vaclav Klaus

Marco Moock

May 19, 2022, 11:54:49 AM
On Thursday, 19 May 2022, at 09:57:37, Ant wrote:

> What's the best and easy way to copy/move my old slow 320 GB SATA
> HDD's updated Debian bullseye v11.3 to an old fast 115 GB SSD (going
> to wipe it clean)? Yes, SSD is smaller but my Debian's installation
> only uses about 8 GB. I installed Debian use the whole 320 GB drive.
> I'll still be using the same 13 yrs. old PC.

This PC likely doesn't have UEFI, which makes this easier.
You need to shrink your current partition so that it fits on the new
drive; this also shrinks the file system inside it. You can use GParted
for that from a live system booted from USB.
Then you can create a new msdos partition table on your SSD and clone
the partition (not the entire disk, so /dev/sdXN instead of just
/dev/sdX) with dd. Do some research on the alignment of that new
partition, because if it is wrong the SSD will be slower than it should
be. You should also specify a block size for dd.
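A rough sketch of that clone step (device names here are placeholders for
the HDD's Debian partition and the SSD's new partition; check yours with
lsblk first):

lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT    # confirm which device is which
dd if=/dev/sda1 of=/dev/sdb1 bs=4M conv=fsync status=progress
parted /dev/sdb align-check optimal 1   # verify partition 1 on the SSD is aligned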

Tauno Voipio

May 19, 2022, 12:30:02 PM
On 19.5.22 17.57, Ant wrote:
> Hello.
>
> What's the best and easy way to copy/move my old slow 320 GB SATA HDD's
> updated Debian bullseye v11.3 to an old fast 115 GB SSD (going to wipe
> it clean)? Yes, SSD is smaller but my Debian's installation only uses
> about 8 GB. I installed Debian use the whole 320 GB drive. I'll still be
> using the same 13 yrs. old PC.
>
> Thank you for reading and hopefully answering soon. :)


First, you need to shrink the current installation to something
smaller than the new SSD.

Download GParted Live Bootable from GParted pages and install it
to a CD/DVD/USB stick (whichever you can boot from). It is quite
straightforward to shrink the only partition to say 10 GiB.
Check the last block number of the shrunk image to know how much
you need to copy in the next step.

If you can install the new SSD on the hardware together with the
old drive, just boot from the shrunk old drive and use e.g. dd
to copy enough of the old disk to cover the full image.
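One way to work out how much to copy (a sketch; the end sector shown is
made up, and /dev/sda is the old HDD, /dev/sdb the SSD):

fdisk -l /dev/sda    # note the "End" sector of the shrunk partition, e.g. 20971519
# copy the MBR, partition table and the shrunk partition in one pass
dd if=/dev/sda of=/dev/sdb bs=512 count=20971520 status=progress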

If everything has gone well, shut down the computer, swap the
drives so that only the new disk is connected, and boot it. If the
boot succeeds, the next step is to expand the new partition and
file system to fill the SSD, using the bootable GParted again.

--

-TV

Ant

May 19, 2022, 2:08:54 PM
That sounds complex. :/

Ant

May 19, 2022, 3:17:17 PM
FYI. My current HDD's df and /etc/fstab can be found in
https://pastebin.com/raw/zAJM6Npc.

Bobbie Sellers

May 19, 2022, 3:32:37 PM
Why post to so many newsgroups? Seems trollish to me.


On 5/19/22 07:57, Ant wrote:
> Hello.
>
> What's the best and easy way to copy/move my old slow 320 GB SATA HDD's
> updated Debian bullseye v11.3 to an old fast 115 GB SSD (going to wipe
> it clean)? Yes, SSD is smaller but my Debian's installation only uses
> about 8 GB. I installed Debian use the whole 320 GB drive. I'll still be
> using the same 13 yrs. old PC.
>
> Thank you for reading and hopefully answering soon. :)

Do a fresh install and copy back the information you wish to retain.

Bit Twister

May 19, 2022, 3:52:12 PM
On Thu, 19 May 2022 14:17:10 -0500, Ant wrote:
> FYI. My current HDD's df and /etc/fstab can be found in
> https://pastebin.com/raw/zAJM6Npc.
>
>
> In comp.os.linux.setup Ant <a...@zimage.comant> wrote:
>> Hello.
>
>> What's the best and easy way to copy/move my old slow 320 GB SATA HDD's
>> updated Debian bullseye v11.3 to an old fast 115 GB SSD (going to wipe
>> it clean)? Yes, SSD is smaller but my Debian's installation only uses
>> about 8 GB. I installed Debian use the whole 320 GB drive. I'll still be
>> using the same 13 yrs. old PC.


If it were I, I would boot a rescue CD,
https://www.system-rescue.org/
(http://www.sysresccd.org/Download has instructions on copying it to USB),
use gparted to format and label the new partition, then create /src and
/dest mount points, mount the respective partitions, and use rsync to
copy /src to the /dest partition. I would then use
mousepad to change the / mount point's UUID to /dest's UUID.

The operation would be something like
mkdir /src
mkdir /dest
# use gparted to format the SSD; note the new partition's UUID and /dev/xxxx
mount -t auto /dev/sdb1 /src
mount -t auto /dev/xxxxx /dest
rsync --delete -aAHSXxv /src/ /dest
mousepad /dest/etc/fstab # set the / entry's UUID to /dest's UUID

umount /src /dest
reboot



The old install should boot up.
update-grub should rebuild the grub menu to include the new partition's copy.
Boot that device's kernel,
run update-grub,
and run grub-install /dev/xxxx.
A reboot should then let you pick your new install from the new install's grub
menu.
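In commands, that boot-and-fixup sequence might look roughly like this
(a sketch; /dev/xxxx is still a placeholder for the SSD disk device,
not a partition):

update-grub                # from the old install: os-prober adds the new copy to the menu
# reboot and pick the new partition's entry, then from within the new copy:
update-grub
grub-install /dev/xxxx     # install grub to the SSD's MBR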

Hope I got everything correct.


James Moe

May 20, 2022, 12:04:11 AM
On 2022-05-19 07:57, Ant wrote:

> What's the best and easy way to copy/move my old slow 320 GB SATA HDD's
> updated Debian bullseye v11.3 to an old fast 115 GB SSD (going to wipe
> it clean)? Yes, SSD is smaller but my Debian's installation only uses
> about 8 GB. I installed Debian use the whole 320 GB drive.
>
Here are my notes for transferring system disks.

Moving System Partitions or Volumes
Before booting create a list of the /dev/sdXn devices of interest.
sdXn = sdb2, for instance.
Boot a Rescue System or a “Live CD.”
1. Verify the volumes are as expected by mounting and inspecting them.
cd /
mkdir /mnt/dev-old
mkdir /mnt/dev-new
mount /dev/sdXn /mnt/dev-old # the volume to replace or move
mount /dev/sdYn /mnt/dev-new # the target volume
2. Copy the data from old to new.
cd /mnt/dev-old
cp -a . /mnt/dev-new
3. Unmount the volume.
umount /mnt/dev-old
umount /mnt/dev-new
Repeat 1., 2., and 3. for each volume.
4. Clean up.
rmdir /mnt/dev-old
rmdir /mnt/dev-new
5. Create the build environment.
cd /

# Only if /usr or /boot are separate volumes
mkdir /mnt/usr
mkdir /mnt/boot
mount /dev/sdUn /mnt/usr
mount /dev/sdBn /mnt/boot

mount /dev/sdYn /mnt # Mount the root
mount --bind /sys /mnt/sys
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
6. Modify /mnt/etc/fstab as required.
7. Build the boot loader
chroot /mnt
mkinitrd
8. If moving the root volume:
- Run yast::Boot Loader
- Modify "Boot Loader Location" as needed. Usually "Boot from Partition" is okay.
- Verify "Set Active Flag" and "Write generic boot code to MBR" are set.
- Save
9. All done. Restart with the new configuration.
exit
shutdown -r now
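The notes above are written around SUSE tools (mkinitrd, YaST). On Debian,
steps 7 and 8 would roughly translate to the following (a sketch, assuming
the chroot environment built in step 5 and that /dev/sdY is the disk being
booted from):

chroot /mnt
update-initramfs -u -k all     # rebuild the initramfs for the installed kernels
grub-install /dev/sdY          # write the boot loader to the target disk's MBR
update-grub                    # regenerate /boot/grub/grub.cfg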


--
James Moe
jmm-list at sohnen-moe dot com
Think.

The Natural Philosopher

May 20, 2022, 7:06:17 AM
Yes!

Experience suggests that if this sort of thing is something you don't do
every day, this is faster than 'upgrading in place'


--
Gun Control: The law that ensures that only criminals have guns.

Ant

May 22, 2022, 9:08:42 PM
OK. I think I finally got it working now after reading everyone's suggestions (thanks!).

What I did from my memory over my weekend after many trials and errors:
1. Downloaded and burned https://downloads.sourceforge.net/gparted/gparted-live-1.4.0-1-amd64.iso and https://osdn.net/projects/clonezilla/downloads/76513/clonezilla-live-2.8.1-12-amd64.iso/ to two different CD-RW.
2. Made a backup of my original HDD's data! Duh.
3. Booted GParted from its burned CD-RW. Resized my Seagate 320 GB HDD's Debian partition to about 106 GB. Then, on the 115 GB SSD, deleted all partitions, made almost the whole drive a single ext4 file system, and added a 1 GB extended partition at the right end holding a 1 GB swap partition.
4. Rebooted to my HDD to see if its Debian still worked. It did. Thank God!
5. Rebooted to Clonezilla's burned CD-RW and copied Seagate 320 GB HDD's Debian partition to SSD which took under four minutes since it was a small installation.
6. Rebooted to SSD, but it still went to my HDD! So, I found out it was because of the confusing UUIDs from Grub.
7. Physically disconnected HDD's SATA cable and retried. It worked. I was hoping to keep both connected just in case. :(


In comp.os.linux.help Ant <a...@zimage.comant> wrote:
> FYI. My current HDD's df and /etc/fstab can be found in
> https://pastebin.com/raw/zAJM6Npc.


> In comp.os.linux.setup Ant <a...@zimage.comant> wrote:
> > Hello.

> > What's the best and easy way to copy/move my old slow 320 GB SATA HDD's
> > updated Debian bullseye v11.3 to an old fast 115 GB SSD (going to wipe
> > it clean)? Yes, SSD is smaller but my Debian's installation only uses
> > about 8 GB. I installed Debian use the whole 320 GB drive. I'll still be
> > using the same 13 yrs. old PC.

> > Thank you for reading and hopefully answering soon. :)


--
Dang computer problems! Quiet cooler week with the recent very light rain. It's like winter again! Celtics have better get burned by Miami Heat at the end of the eastern conference!

Bit Twister

May 22, 2022, 10:02:02 PM
On Sun, 22 May 2022 20:08:35 -0500, Ant wrote:

> 6. Rebooted to SSD, but it still went to my HDD! So, I found out it was because of the confusing UUIDs from Grub.
> 7. Physically disconnected HDD's SATA cable and retried. It worked. I was hoping to keep both connected just in case. :(

You can have both. They just have to have different UUIDs, an updated /etc/fstab, and grub updated/installed.
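Giving the clone its own UUID is quick if the filesystem is ext4 (a sketch;
/dev/sdb1 is a placeholder for the cloned partition):

tune2fs -U random /dev/sdb1    # assign a fresh, random UUID to the ext4 filesystem
blkid /dev/sdb1                # read the new UUID back
# then put that UUID into the clone's /etc/fstab and re-run update-grub / grub-install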

25.BX945

May 22, 2022, 11:53:51 PM
Correct. You need to tweak 'fstab' AND the old drive. You can't
have two identically UUID identified drives in there. The
alternative - one I like - is to drop the UUID crap entirely
and create NAMED drives in fstab. It's easier to tell what's
what afterwards.

As for the actual xfer ... in theory 'dd' oughtta do it.
Attach your SSD, then "dd if=/dev/sda of=/dev/sdb bs=64k"
is kind of the basic. DO use 'lsblk' to MAKE SURE what
/dev/sd(?) the original and new drives are ! 'dd' is
sometimes nicknamed 'disk destroyer' for a REASON, YOU
have to get it right !

You can add "status=progress" to see what's going on with 'dd'.
One important note ... just because 'dd' says it's done does
NOT mean it's done ... you'll likely still see the drive light
blinking for a few minutes after. Apparently lots of data gets
stored in memory buffers and it takes a little while for all
those to be emptied onto your target drive. Get impatient and
you'll get an incomplete copy. No blinky light ? ASSUME an xtra
five minutes after 'dd' claims it's done.
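One way to take the guesswork out of those lingering buffers is to make 'dd'
flush and then wait on sync (same placeholder devices as above):

dd if=/dev/sda of=/dev/sdb bs=64k status=progress conv=fsync
sync    # returns only once the kernel's pending writes have hit the disk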

THEN disconnect the HDD and reboot using the SSD and see if
it all works. If so, best if you use gparted from a linux
stick to totally clear the old HDD - including changing
its UUID, then reboot with it plugged in as normal. It'll
be detected as a new drive, probably /dev/sdb, and you can
go from there.

(Dual-booters .. you MIGHT run into problems because Winders
is The Great Preventer and might make extra effort to be sure
you can't get there from here. But, why would anyone want a
box with Winders on it ... ???)

In short, there's NO reason to lose your existing - perhaps
highly-customized - distro just to move to an SSD. I do
development stuff and have umpteen zillion apps and libraries
and custom settings. Losing those is a DISASTER - 24 hours+
to start from scratch assuming I can remember ALL the special
settings I've done.

Are SSDs better for everything ? MAYbe not. On the whole, do
not expect them to tolerate as many read/writes as a magnetic
drive. This might be important if you're running a big database
or anything else that does lots of re-indexing all the time.
Also, for security/disposal reasons, you can't blank 'em out
reliably with bleachbit or even 'dd' because of the wear-leveling
system built in. "Dispose" with a large hammer ... maybe one of
those big sparky stun-guns .........

If you're more an "average user" then SSDs oughtta be fine.
There are some deep-deep-down kernel-level tweaks you can also
make to further improve SSD performance. There are assumptions
made, that you have a magnetic drive, and some of that can be
adjusted to your advantage (gamers, do your research).

Ant

May 23, 2022, 12:11:57 AM
In comp.os.linux.setup 25.BX945 <25B...@nada.net> wrote:
> On 5/22/22 10:01 PM, Bit Twister wrote:
> > On Sun, 22 May 2022 20:08:35 -0500, Ant wrote:
> >
> >> 6. Rebooted to SSD, but it still went to my HDD! So, I found out it was because of the confusing UUIDs from Grub.
> >> 7. Physically disconnected HDD's SATA cable and retried. It worked. I was hoping to keep both connected just in case. :(
> >
> > You can have both. They just have to have different UUIDs, updated /etc/fstab and gurub update/installed.

> Correct. You need to tweak 'fstab' AND the old drive. You can't
> have two identically UUID identified drives in there. The
> alternative - one I like - is to drop the UUID crap entirely
> and create NAMED drives in fstab. It's easier to tell what's
> what afterwards...

Why did they even use UUIDs? It's so confusing.

Bit Twister

May 23, 2022, 12:14:47 AM
On Sun, 22 May 2022 23:53:43 -0400, 25.BX945 wrote:
> On 5/22/22 10:01 PM, Bit Twister wrote:
>> On Sun, 22 May 2022 20:08:35 -0500, Ant wrote:
>>
>>> 6. Rebooted to SSD, but it still went to my HDD! So, I found out it was because of the confusing UUIDs from Grub.
>>> 7. Physically disconnected HDD's SATA cable and retried. It worked. I was hoping to keep both connected just in case. :(
>>
>> You can have both. They just have to have different UUIDs, updated /etc/fstab and gurub update/installed.
>
> Correct. You need to tweak 'fstab' AND the old drive. You can't
> have two identically UUID identified drives in there. The
> alternative - one I like - is to drop the UUID crap entirely
> and create NAMED drives in fstab. It's easier to tell what's
> what afterwards.

Very true, and you will have the same problem if NAMED drives have the
same value. I too moved to using labels instead of UUIDs.


>
> In short, there's NO reason to lose your existing - perhaps
> highly-customized - distro just to move to an SSD. I do
> development stuff and have umpteen zillion apps and libraries
> and custom settings. Losing those is a DISASTER - 24 hours+
> to start from scratch assuming I can remember ALL the special
> settings I've done.

Hehe, I always do clean installs. As for custom settings, you either
keep a log of all changes with before/after settings for each file,
OR just write scripts to automate making your changes. It only costs me
about an hour for my scripts to make all my changes.

Bit Twister

May 23, 2022, 12:20:21 AM
On Sun, 22 May 2022 23:11:50 -0500, Ant wrote:

>
> Why did they even use UUIDs? It's so confusing.

Because, every once in a while, multi-drive systems would come up with
different /dev/sdXX assignments.

You would have avoided all this "experience" had you used rsync instead
of dd.

Anssi Saari

May 23, 2022, 3:34:43 AM
a...@zimage.comANT (Ant) writes:

> Why did they even use UUIDs? It's so confusing.

For the case where you'd unplug the old drive after cloning, it's easier.
You don't need to edit /etc/fstab or anything else either: Grub will know
what the root partition is and where to resume from if hibernation is
used, and likewise the kernel will know what the root file system is.

Why do you want both drives in the system anyway? After cloning I do
like to keep the old drive *around* for a while, but not plugged into
anything. It serves as a cloneable backup if needed. After a while at
least, my recently cloned HDD goes into SER recycling, since it's 2007
vintage.

Tauno Voipio

May 23, 2022, 10:36:34 AM
On 23.5.22 4.08, Ant wrote:
> OK. I think I finally got it working now after reading everyone's suggestions (thanks!).
>
> What I did from my memory over my weekend after many trials and errors:
> 1. Downloaded and burned https://downloads.sourceforge.net/gparted/gparted-live-1.4.0-1-amd64.iso and https://osdn.net/projects/clonezilla/downloads/76513/clonezilla-live-2.8.1-12-amd64.iso/ to two different CD-RW.
> 2. Made a back up of my original HDD's datas! Duh.
> 3. Booted gparted from the burned CD-RW. Resized my Seagate 320 GB HDD's Debian partition to about 106 GB. Went to 115 GB SSD, deleted all partitions, and made almost the whole drive as EXT4 FS. Made a new right extended 1 GB partition with a 1 GB swap partition.
> 4. Rebooted to my HDD to see if its Debian still works. It did. Thanks God!
> 5. Rebooted to Clonezilla's burned CD-RW and copied Seagate 320 GB HDD's Debian partition to SSD which took under four minutes since it was a small installation.
> 6. Rebooted to SSD, but it still went to my HDD! So, I found out it was because of the confusing UUIDs from Grub.
> 7. Physically disconnected HDD's SATA cable and retried. It worked. I was hoping to keep both connected just in case. :(
>
>
> In comp.os.linux.help Ant <a...@zimage.comant> wrote:
>> FYI. My current HDD's df and /etc/fstab can be found in
>> https://pastebin.com/raw/zAJM6Npc.
>
>
>> In comp.os.linux.setup Ant <a...@zimage.comant> wrote:
>>> Hello.
>
>>> What's the best and easy way to copy/move my old slow 320 GB SATA HDD's
>>> updated Debian bullseye v11.3 to an old fast 115 GB SSD (going to wipe
>>> it clean)? Yes, SSD is smaller but my Debian's installation only uses
>>> about 8 GB. I installed Debian use the whole 320 GB drive. I'll still be
>>> using the same 13 yrs. old PC.
>
>>> Thank you for reading and hopefully answering soon. :)


Your filesystem (EXT4) on the SSD may still be smaller than the
partition it is in. You can use the GParted CD to check and maybe
resize it.
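From the GParted live CD this can also be done by hand, roughly as follows
(a sketch; /dev/sdb1 is a placeholder for the SSD's root partition, which
must be unmounted at the time):

e2fsck -f /dev/sdb1    # resize2fs insists on a clean, forced check first
resize2fs /dev/sdb1    # with no size given, it grows the ext4 fs to fill the partition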

--

-TV

25.BX945

May 24, 2022, 10:14:25 PM
On 5/23/22 12:11 AM, Ant wrote:
> In comp.os.linux.setup 25.BX945 <25B...@nada.net> wrote:
>> On 5/22/22 10:01 PM, Bit Twister wrote:
>>> On Sun, 22 May 2022 20:08:35 -0500, Ant wrote:
>>>
>>>> 6. Rebooted to SSD, but it still went to my HDD! So, I found out it was because of the confusing UUIDs from Grub.
>>>> 7. Physically disconnected HDD's SATA cable and retried. It worked. I was hoping to keep both connected just in case. :(
>>>
>>> You can have both. They just have to have different UUIDs, updated /etc/fstab and gurub update/installed.
>
>> Correct. You need to tweak 'fstab' AND the old drive. You can't
>> have two identically UUID identified drives in there. The
>> alternative - one I like - is to drop the UUID crap entirely
>> and create NAMED drives in fstab. It's easier to tell what's
>> what afterwards...
>
> Why did they even use UUIDs? It's so confusing.


They thought it would be more "generic" - uniquely identifying
a disk. Alas such a scheme TELLS you NOTHING USEFUL. I like
names that DO tell you something, helps keep track, esp if
you have a box with lots of drives/partitions. I keep one
with EIGHT drives and 12 partitions ... need all the cues
I can get with that one. I don't WANT the UUID idea of
"uniquely identified", assigning human-readable names lets
me just slide in a replacement disk without fartin' around
very much. Fstab just sees "BakDrive3" and doesn't care if
it's the same physical disk as before.

David W. Hodgins

May 24, 2022, 10:53:29 PM
On Tue, 24 May 2022 22:14:16 -0400, 25.BX945 <25B...@nada.net> wrote:
> On 5/23/22 12:11 AM, Ant wrote:
>> Why did they even use UUIDs? It's so confusing.

The use of uuids was a solution to the problem that drive detection can't
be relied on to always happen in the same order. The first drive that's fully
powered up becomes sda, even if it's usually the second drive, i.e. sdb.
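You can see what ended up where, and which UUID/label each filesystem
carries, with something like:

lsblk -f               # tree of devices with FSTYPE, LABEL and UUID columns
blkid /dev/sda1        # or query a single partition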

> They thought it would be more "generic" - uniquely identifying
> a disk. Alas such a scheme TELLS you NOTHING USEFUL. I like
> names that DO tell you something, helps keep track, esp if
> you have a box with lots of drives/partitions. I keep one
> with EIGHT drives and 12 partitions ... need all the cues
> I can get with that one. I don't WANT the UUID idea of
> "uniquely identified", assigning human-readable names lets
> me just slide in a replacement disk without fartin' around
> very much. Fstab just sees "BakDrive3" and doesn't care if
> it's the same physical disk as before.

You don't have to use the uuid. From my fstab ...
LABEL=x7b / ext4 defaults,noatime 1 1

I chose the label x7b as it's an x86_64 install of Mageia 7 (since upgraded to 8)
on /dev/sdb. Like the uuid, if you choose to use a label, it's up to you to ensure
it's unique. Use a label that means something to you, or let the system use the
generated uuid. Your choice.
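Setting such a label on an existing ext4 filesystem is a one-liner ("x7b"
here is just the example label from above; adjust the device to taste):

e2label /dev/sdb1 x7b        # or: tune2fs -L x7b /dev/sdb1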

Regards, Dave Hodgins

Bit Twister

May 25, 2022, 4:00:39 AM
Yep, I use labels, even for swap. Except that for swap I use the partition
label, because each format of swap (usually performed when installing a new
OS) wipes out the media label/UUID.
I usually set the Partition label and media label to the same value.
Those usually become my mount points.

$ grep swap /etc/fstab
PARTLABEL=swap swap swap defaults,nofail 0 0

$ lsblk -o NAME,TYPE,FSTYPE,MOUNTPOINT,LABEL,PARTLABEL
NAME TYPE FSTYPE MOUNTPOINT LABEL PARTLABEL
sda disk
├─sda1 part ext4 mga6 mga6
├─sda2 part ext4 / mga8 mga8
├─sda3 part ext4 mga7 mga7
├─sda4 part ext4 cauldron cauldron
├─sda5 part ext4 /local local local
├─sda6 part ext4 /accounts accounts accounts
├─sda7 part ext4 /misc misc misc
├─sda8 part ext4 /spare spare spare
├─sda9 part ext4 /vmguest vmguest vmguest
└─sda10 part bios_grub
sdb disk
├─sdb1 part swap [SWAP] swap swap
├─sdb2 part ext4 bk_up bk_up
├─sdb3 part ext4 hotbu hotbu
├─sdb4 part ext4 cauldron_bkup cauldron_bkup
├─sdb5 part ext4 /myth myth myth
├─sdb6 part ext4 net_ins net_ins
└─sdb7 part ext4 net_ins_bkup net_ins_bkup
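For reference, setting those up might look like this (a sketch; the
partition number is a placeholder, and PARTLABEL only exists on GPT disks):

parted /dev/sdb name 1 swap    # GPT partition name, i.e. the PARTLABEL
mkswap -L swap /dev/sdb1       # recreate swap with a filesystem LABEL as well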

25.BX945

May 25, 2022, 11:55:12 PM
Indeed. Correctly using LABEL generally solves the sda/sdb/sdc thing.
You DO need to actually label the partitions though. UUID or part
label, both "uniquely identify" - but the latter is far more human
readable.

I am aware of the "problem" mentioned. It used to be an issue with
OpenSuse about ten years ago - you might have to use the emergency
terminal to tweak fstab. I have not seen it with Debian-based distros,
and certainly not lately. I use LABEL in boxes with 4-8 disks pretty
regularly. I think they smartened up the kernel somehow ..