
must i consider zfs or lvm for smr large drive?


Samuel Wales

Aug 19, 2022, 7:20:05 PM
apologies for the subject header being kind of an opinion poll rather
than a question. but it is meant as a question.


until now, i have avoided lvm and zfs determinedly. i have always
been completely satisfied to copy some big partition rather than deal
with the complexity of those. i don't want to get confused about them
when i am debugging or setting up.

i use luks and ext4 and that's enough complexity for me. i get them
right, understand them, and glory in few corner cases.

i have a new 4tb portable external drive. i want it to have a huge partition.

even such things as resizing sound error-prone or complex. more
layers and commands to learn. and zfs is a whole new thing, with, oh,
yeah, you have to use contrib or non-free [can i rely on this being
secure and also available into the future?] and oh, yeah, it's
different from luks, and oh, yeah, do a balance/resilver/whatever.
yes, send/recv beckons.

but now i am thinking, with smr, the drive could pseudo-brick, despite
discard and fstrim. and i might then want to do some kind of, idk, dd
if=/dev/zero of=some-partition to "reset" it. and my 20gb root
partition might be too small for that.

i don't actually know if /dev/zero resets smr to stop shuffling. i am
just speculating.

but if it does, then i might want lvm's or zfs's resizing feature so
that i can do /dev/zero to some lo... gical ... volume? which would
then in my imagination reset smr and then the drive would work again
instead of 3.6tb filled non-writable.
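[to make the speculation concrete: if zeroing really did reset the smr
shingle state -- which is pure guesswork on my part -- the lvm version of
that "reset" might look like the sketch below. the volume-group and lv
names (vg0, scratch) are made up, and DRYRUN=1 only prints the commands
instead of running them.]

```shell
# speculative sketch: carve out a scratch logical volume, zero it, and
# remove it. names (vg0, scratch) are hypothetical; whether zeroing
# actually resets anything on an smr drive is unverified.
# DRYRUN=1 prints each command instead of executing it.
run() { if [ "${DRYRUN:-0}" = 1 ]; then echo "$@"; else "$@"; fi; }

smr_zero_reset() {
  vg="$1"; size="$2"
  run lvcreate -L "$size" -n scratch "$vg"
  run dd if=/dev/zero of="/dev/$vg/scratch" bs=1M status=progress
  run lvremove -y "$vg/scratch"
}
```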

idk if zfs/btrfs has smr features better than ext4 or vice-versa. i
do NOT need snapshotting, raid. my box is old and would not support
deduplication, and i wonder if it would even support zfs at all with
6gb of ram, which always gets filled up by firefox.

so, am i going to need one of these two
more-complex-than-luks-and-ext4 technologies just for safety when the
huge partition fills up? i know they are /desirable/ technologies for
those who like them.

but desirability is not the question at all. :) the question is, for
MY case, is lvm/zfs/btrfs going to be needed for smr?


idk if i am on this mailing list.

preliminary comments below. :)


p.s.

as a preliminary comment, i have partitioned it for booting, my idea
being for it to boot off of anything for quick perfectly-my-env
rescue, not for all-the-time use. i have accessibility issues that
make installing and rescue cd's problematic.

as more preliminary, the thing does not boot on my old bios box no
matter what i try.

and yet more preliminary, it is toshiba canvio basics. it does
spindown or head parking at a ridiculously low delay. idk if hdparm
-y or -Y or scsi-spin or scsiadd or eject or idle3 or what is safest.
or if i should let it rack up those smartctl attrs.
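[for my own notes, what i think those hdparm flags do -- and i may be
wrong: -y requests standby (spin-down) now, -Y requests the deeper
sleep mode, and -S sets the auto-standby timeout, where values 241-251
mean (n-240)*30 minutes. /dev/sdX is a placeholder for the canvio;
DRYRUN=1 just prints the commands.]

```shell
# hdparm spin-down sketch; /dev/sdX is a placeholder device path.
# -S 242 encodes a 1-hour standby timeout (241-251 = units of 30 min).
# DRYRUN=1 prints each command instead of executing it.
run() { if [ "${DRYRUN:-0}" = 1 ]; then echo "$@"; else "$@"; fi; }

spin_down_now()    { run hdparm -y "$1"; }      # standby: spun down now
set_idle_timeout() { run hdparm -S 242 "$1"; }  # auto-standby after 1 hour
```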

and another. i am limited in computer use and have a very large
number of limitations that i cannot go into because it would take too
much out of me to do so. i am not a normal kind of user. but i'd
still like gentle, helpful comments on my question if anybody has
some. i've seen issues with myself and others in the past [not on
this list] with "help" being used as a very transparent, quite obvious
excuse for being a rather extreme jerk, and i'd be interested in
knowing of some accepted things to say that say "thanks, but i do not
want 'help' from you personally at all but others are still very
welcome to contribute as i know already that they are sincere and
helpful" other than quitting the place entirely [at this point always
my best option]. the idea being to encourage sincere others to help
while getting others to realize i do not want help from the problem
person and that my not replying to the problem person does not mean
sincere others can't contribute, i.e. the problem person has not
claimed accepted ownership over helping me and i am in no mood to be
attacked merely for asking a question or having accessibility and
other limitations or for no reason at all.

David Christensen

Aug 19, 2022, 8:40:05 PM
On 8/19/22 16:13, Samuel Wales wrote:
> apologies for the subject header being kind of an opinion poll rather
> than a question. but it is meant as a question.

<snip>


Please post:

# cat /etc/debian_version ; uname -a


What is the make and model of your computer? BIOS or UEFI?


What is the make and model of your SMR drive?


What partition scheme do you use -- MBR or GPT?


How do you intend to use the partitions -- e.g. boot, swap, root, usr,
var, home, data, online backup, offline backup, archives, images,
sneakernet, other?


STFW "zfs smr disk drive":

* I see several reasonably current articles that advise against using
ZFS with SMR drives; notably the Western Digital Red with SMR:

https://www.truenas.com/community/threads/update-wd-red-smr-drive-compatibility-with-zfs.88413/

https://arstechnica.com/gadgets/2020/06/western-digitals-smr-disks-arent-great-but-theyre-not-garbage/

* I see a 2014 OpenZFS conference paper regarding host-aware SMR. I do
not know if this technology made it into OpenZFS, or if Debian includes it:

https://openzfs.org/w/images/2/2a/Host-Aware_SMR-Tim_Feldman.pdf


STFW "btrfs smr disk drive", nothing stands out.


One option would be to benchmark all three choices (ext4, btrfs, zfs).
Make sure to cover all the use-cases, including disaster recovery.


Another option would be to replace the disk with a CMR disk (e.g.
refund, exchange, sale).


David

David Christensen

Aug 19, 2022, 8:50:05 PM
On 8/19/22 17:28, DdB wrote:

> If i use [ZFS] for a single drive, i would at the very least consider
> setting "copies=2" to have at least some redundancy for data, i value.


My SOHO file and backup servers are FreeBSD with encrypted ZFS root. I
use single 2.5" SSD's for the OS drive. I hacked the installer to set
copies=2 for boot and root, and enabled mirror for swap.


David

Dan Ritter

Aug 20, 2022, 6:50:05 AM
Samuel Wales wrote:
> apologies for the subject header being kind of an opinion poll rather
> than a question. but it is meant as a question.

Do NOT use ZFS on an SMR drive.

Don't buy SMR drives unless your sole use for them is as an
archival destination.

If you bought an SMR drive by accident, return it if possible.

-dsr-

Stefan Monnier

Aug 20, 2022, 2:40:05 PM
> i have a new 4tb portable external drive. i want it to have a huge partition.

I love LVM and use it as a matter-of-course everywhere (except for /boot
partition which I still keep as a separate partition out of habit).

But FWIW, using LVM with external drives is not super smooth: it's OK if
the drive is almost always connected, but otherwise I don't think LVM
handles the case of plugging/unplugging the drive smoothly enough
(AFAICT there's no real problem at the lower levels, but at the UI level
it's just not "plug&play" enough IMO).

The main issue is that after plugging the drive in, you need to
"activate" its volumes (e.g. `vgchange -ay`, which AFAICT does not
affect the disk itself but only the host OS, making the volumes appear
under /dev/mapper), and they won't get deactivated automatically when
you unplug it (so you end up with ghost entries in /dev/mapper unless
you're careful to unmount everything and `vgchange -an` before
unplugging).
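A minimal attach/detach pair for that discipline might look like the
sketch below. The volume-group name, LV name, and mountpoint are
placeholders, not anything standard; DRYRUN=1 prints the commands
instead of running them.

```shell
# attach: activate the VG and mount; detach: unmount and deactivate so
# no ghost entries linger in /dev/mapper after unplugging.
# extvg, data, and /mnt/ext are placeholder names.
# DRYRUN=1 prints each command instead of executing it.
run() { if [ "${DRYRUN:-0}" = 1 ]; then echo "$@"; else "$@"; fi; }

attach_ext() {
  run vgchange -ay extvg               # make LVs appear in /dev/mapper
  run mount /dev/extvg/data /mnt/ext
}

detach_ext() {
  run umount /mnt/ext
  run vgchange -an extvg               # deactivate before unplugging
}
```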

Maybe you can activate them once and for all and later just
unplug&replug and that'll work but I wouldn't bet on it (IIRC it depends
on whether it gets the same /dev/sdX label when you plug it back in).

So if your plugging and unplugging is done in a disciplined enough way
(and is already accompanied by running some scripts, e.g. to initiate
a backup onto the drive) I would recommend the use of LVM, but otherwise
you're probably better off without it.

> but now i am thinking, with smr, the drive could pseudo-brick,

Last I heard, SMR drives aren't significantly less reliable than CMR, so
I'm not sure you should base your decision on that. Of course, you'll
want to keep backups (unless that drive is the backup for others,
obviously).


Stefan

to...@tuxteam.de

Aug 20, 2022, 3:00:06 PM
On Sat, Aug 20, 2022 at 02:35:35PM -0400, Stefan Monnier wrote:

[...]

> > but now i am thinking, with smr, the drive could pseudo-brick,
>
> Last I heard, SMR drives aren't significantly less reliable than CMR, so
> I'm not sure you should base your decision on that. Of course, you'll
> want to keep backups (unless that drive is the backup for others,
> obviously).

Agreed. They are designed for another use-case. I don't know whether there
are file systems which work nicely with SMR.

Cheers
--
t

David Christensen

Aug 20, 2022, 6:10:05 PM
> Am 20.08.2022 um 02:43 schrieb David Christensen:
>> My SOHO file and backup servers are FreeBSD with encrypted ZFS root.  I
>> use single 2.5" SSD's for the OS drive.  I hacked the installer to set
>> copies=2 for boot and root, and enabled mirror for swap.

On 8/20/22 00:30, DdB wrote:
> Hey! This sounds like you know, what you are doing, and more advanced
> compared to me. Sorry for having stated the obvious, then.
>
> Have fun with it. :-)
> DdB


The obvious choices are one drive and RAID. Both are supported by the
Debian and FreeBSD installers.


I prefer one drive per OS image, and keep all of my data in RAID on a
file server. The FreeBSD installer is a shell script, and already
features encrypted ZFS on root. Adding "copies=2" was straightforward.
The key was being able to read and write Bourne shell scripts.
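For reference, the property itself is a one-liner once the pool exists.
The dataset name below is an example, not the installer's actual
layout; DRYRUN=1 prints the command instead of running it.

```shell
# Set two copies of every data block on a dataset (example name);
# the installer hack amounts to issuing this at the right point.
# DRYRUN=1 prints the command instead of executing it.
run() { if [ "${DRYRUN:-0}" = 1 ]; then echo "$@"; else "$@"; fi; }

set_copies() { run zfs set copies=2 "$1"; }
# e.g. set_copies zroot/ROOT/default
```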


David