How can I shrink my zfs-on-root partition for FreeBSD?


Raphael O.

Jan 6, 2026, 1:57:19 PM
to freebsd-...@freebsd.org
Hello!

I've had a little mishap.

The initial situation:

I have the following hard drives installed in my PC.
2 TB NVMe (EFI boot partition and the rest Windows)
1 TB NVMe (NTFS formatted, data grave)
2 TB SATA SSD (FreeBSD-ZFS and FreeBSD-Swap)
1 TB SATA SSD (NTFS, games)
500GB SSD (Linux and Windows Games)

Windows, FreeBSD and Linux all share the EFI boot partition.
Works great so far.
The ZFS partition on the 2 TB SATA SSD (the drive FreeBSD lives on) was
previously 128 GB.
Since it was getting tight (I still had free space, but it was 70% full),
I thought I'd grow it to 256 or 512 GB.
Unfortunately, I forgot the -s option in the gpart resize command, so, as
you can imagine, the partition now takes up the whole 2 TB (except for
the few GB of swap).

Now I would like to shrink it again, but that isn't so easy with ZFS.
I've read up on it, but wanted to double-check whether the steps I've
planned are the right ones.

My procedure would look like this:
`zfs snap zroot@today` take a snapshot of the pool

I would then write the snapshot to a file and compress it with gzip
`zfs send -R zroot@today | gzip > /mnt/backup.gz`.
Or should I use
`zfs send -R -c zroot@today > /mnt/zroot-backup.zfs`?
The second one I saw online, but I don't remember where.

Reboot into live shell using a boot stick and destroy the pool.
`zpool destroy zroot`

Delete the partitions using gpart and create a new one (now with the correct size)
Create a zfs pool on the newly created partition
Create mountpoints

Then restore the snapshot from the file
`zcat /mnt/backup.gz < zroot@backup`
and roll back to this snapshot

`zfs rollback zroot@backup`

Would that work? Or is there something I haven't thought of?
Thanks in advance.

-obr

Daniel Tameling

Jan 7, 2026, 2:14:00 PM
to ques...@freebsd.org
> `zcat /mnt/backup.gz < zroot@backup`

I have never tried to do what you want to do, but that command looks wrong to me. I don't even think it does anything useful. You probably want to do something like
`zcat /mnt/backup.gz | zfs recv -F zroot@backup`
`zfs rollback zroot@backup`

I would also recommend that you create a separate backup of all the data you definitely don't want to lose, in case anything goes wrong.

Best regards,
Daniel

Edward Sanford Sutton, III

Jan 7, 2026, 7:36:31 PM
to ques...@freebsd.org
On 1/6/26 11:56, Raphael O. wrote:
> Hello!
>
> I've had a little mishap
>
> The initial situation:
>
> I have the following hard drives installed in my PC.
> 2 TB NVMe (EFI boot partition and the rest Windows)
> 1 TB NVMe (NTFS formatted, data grave)
> 2 TB SATA SSD (FreeBSD-ZFS and FreeBSD-Swap)
> 1 TB SATA SSD (NTFS, games)
> 500GB SSD (Linux and Windows Games)
>
> Windows, FreeBSD and Linux all share the EFI boot partition.
> Works great so far.
> The ZFS partition on the 2 TB SATA SSD (the drive FreeBSD lives on)
> was previously 128 GB.
> Since it was getting tight (I still had free space, but it was 70%
> full), I thought I'd grow it to 256 or 512 GB.
> Unfortunately, I forgot the -s option in the gpart resize command, so,
> as you can imagine, the partition now takes up the whole 2 TB (except
> for the few GB of swap).
>
> Now I would like to shrink it again, but that isn't so easy with ZFS.
> I've read up on it, but wanted to double-check whether the steps I've
> planned are the right ones.
If the zpool autoexpand property is off, then ZFS will not have grown
the pool to use the new space yet. In that case you could simply resize
the partition back down to the size you originally intended. I'd still
make a backup first for good measure, but this may save you a lot of
time restoring one unnecessarily. The default should be off, but confirm
what you have with `zpool get autoexpand zroot`. Off can be a nice
safety net for issues like this incorrect partition size increase.
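
As a rough sketch of that shortcut, booted from your live stick so the
pool is not in use (the device name ada0, partition index 2, and the
256G size are only assumptions here, so check `gpart show` first and
substitute your own values):

gpart resize -i 2 -s 256G ada0
zpool import -N zroot    # confirm the pool still imports, without mounting
zpool scrub zroot        # and let a scrub check it over before relying on it
zpool export zroot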

> My procedure would look like this:
> `zfs snap zroot@today` take a snapshot of the pool
If you have more than one dataset you want to transfer, you likely want
to use `zfs snap -r zroot@today` for a recursive snapshot, since zfs
operates on datasets and not on the pool as a whole. Without it you
would not be backing up the others, like zroot/be/default/usr,
zroot/data/home, etc.

If you are booted on this system while doing it, consider stopping
services or going to single-user mode to avoid snapshotting files that
are currently open or being altered. Not doing so still gives a
filesystem that is consistent for that point in time, but it's as if you
had powered off the computer in the middle of those programs running.
Better than nothing, but for an intentional backup I assume you can
spare a few moments to shut down whatever doesn't need to be running
during the snapshot, if it matters. Since the snapshot itself is made
almost instantly, you can relaunch programs while performing the
transfer, but you will not be backing up later changes without a new
snapshot + transfer; if you need to shorten the gap between snapshot and
transfer, repeat the steps by making another snapshot and doing an
incremental transfer, so the next cycle only needs a short transfer time
(see the sketch below). Restoring a destroyed pool will still need a
full pool transfer.
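
A rough sketch of that cycle, with the second snapshot name and the
backup paths purely placeholders:

zfs snapshot -r zroot@today
zfs send -R zroot@today | gzip > /mnt/backup-full.gz
# services can run again here; later, catch up with a second snapshot:
zfs snapshot -r zroot@today2
zfs send -R -i @today zroot@today2 | gzip > /mnt/backup-incr.gz

On restore you would receive the full stream first and then the
incremental one on top of it.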

> I would then write the snapshot to a file and compress it with gzip
> `zfs send -R zroot@today | gzip > /mnt/backup.gz`.
> Or should I use
> `zfs send -R -c zroot@today > /mnt/zroot-backup.zfs`?
> The second one I saw online, but I don't remember where.

-c causes ZFS to transfer compressed blocks as compressed blocks. If
large blocks are in use but you do not also pass -L, then any large
blocks will be decompressed and recompressed to fit the smaller block
sizes for the transfer.
I should probably look at integrating -c into my workflow, but for most
of my transfers I override compression with a much higher setting for
temporary archival at the receiving destination.
I usually use -eLR for backup/restore tasks. Now that I usually write
the backup as a received stream to a filesystem instead of storing the
stream as a file, I can likely drop the -e.
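
That received-stream style, for what it's worth, is just piping the send
straight into a receive on whatever pool holds the backups; the
backuppool/zroot-copy name below is purely hypothetical:

zfs send -LR zroot@today | zfs recv -uF backuppool/zroot-copy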

I'd consider something other than gzip for compression/decompression,
as there is likely little to be gained and gzip will likely be the
bottleneck of the transfer.
zstd offers faster compression and decompression while normally
achieving a better compression ratio at the same time. Increasing the
compression setting will likely make the compressor the bottleneck if it
wasn't already (it depends on your CPU and disks where the limit is).
Decompression is not impacted the same way, and I've usually had the
best luck somewhere around compression setting 12-15 to get the maximum
theoretical decompression rate.
If you want maximum compression because you will keep the stream for a
long time and want to minimize the space it takes, then I'd likely use
xz. There are other compressors that will compress the stream further,
but at the cost of longer compression time, likely slower decompression,
and likely more RAM. Before trusting one of the more exotic compressors
for a backup, I'd test that its compressed stream restores the original
properly.
If you use -c and -e, I'd assume you will see little benefit when
transferring filesystems that already have compression enabled for their
data. If you have enough disk space then skipping a compressor may be
faster, and if you store the stream on a ZFS filesystem you could still
have lz4 or zstd applied without any special compression/decompression
commands, though the compression will be a little worse than what a
standalone compressor achieves.
Where this backup is going to be written was never described, but I
assume it is not on a ZFS pool.
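
For example, a zstd variant of your gzip pipeline might look like this
(the paths and the level 12 setting are just placeholders to adjust):

zfs send -cLR zroot@today | zstd -c -T0 -12 > /mnt/backup.zst
zstdcat /mnt/backup.zst | zfs recv -udF zroot    # restore later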

> Reboot into live shell using a boot stick and destroy the pool.
> `zpool destroy zroot`
>
> Delete the partitions using gpart and create a new one (now with the
> correct size)
> Create a zfs pool on the newly created partition

> Create mountpoints
The short answer is likely no for this step... In ZFS terminology I
presume you mean 'create datasets'. Mountpoints are backed up as
properties of the original datasets and will be restored accordingly.
With a recursive send, the datasets that were sent are received into the
pool, and since nothing is being overwritten they keep their original
properties, so there is nothing to create after making the pool if you
transferred the full pool's datasets and now want to receive them all.
Unless I'm confused, I also take care of simplifying this with '-d'
during the receive.
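
Put together, the rebuild from the live stick might look roughly like
the following; the device (ada0), partition index, size, pool options,
and boot environment name are all assumptions, so mirror whatever your
original zroot was created with:

gpart delete -i 2 ada0
gpart add -t freebsd-zfs -a 1m -s 256G -i 2 ada0
zpool create -o altroot=/tmp/zroot -O mountpoint=none zroot ada0p2
# then receive the backup stream as described below, and finally:
zpool set bootfs=zroot/ROOT/default zroot    # adjust to your BE name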

> Then restore the snapshot from the file
> `zcat /mnt/backup.gz < zroot@backup`
> and roll back to this snapshot
I'm not all that familiar with zcat, but as written that command would
just decompress /mnt/backup.gz to your terminal, with stdin redirected
from a file called zroot@backup in the current directory; nothing would
be received into the pool. You want to pipe the decompressed stream into
zfs recv instead. I don't use gzip, but assuming the new pool name is
zroot, this command would likely work:

zcat /mnt/backup.gz | zfs recv -udF zroot

If using your alternative -c stream that was not further compressed,
then try:

cat /mnt/zroot-backup.zfs | zfs recv -udF zroot

-u avoids mounting the received datasets, since otherwise the live
system you are running from would start having your old system mounted
over it. -d removes zroot from the received names, but it is respecified
in the receive target here; this also makes it unnecessary to create
those datasets before receiving. -F forces anything existing to be
overwritten.
Are you trying to rename @today to @backup during the receive? I'd
handle that as a separate step, or choose a better snapshot name from
the start, such as the date itself. I use zfs snapshot -r puddle3@`date
-u "+%Y%m%dT%H%M%SZ"` myself to get a date+time label in the ISO style,
which orders the elements from most to least significant, so a sort by
name is a sort by time. You can drop the time of day, or make it easier
to read (but more bloated) by adding separators, and you can rearrange
the numbers out of that sequence if you prefer to treat day as more
significant than month, month as more significant than year, etc.

> `zfs rollback zroot@backup`
`recv -F` already put you in that state at the moment the stream was
received. Being unmounted thanks to 'recv -u' means little pool activity
should occur from the booted system, but you can also use export/import
techniques to specify that it not be mounted. If you really do need to
roll back a pool, remember to roll back every relevant dataset if more
than one exists. There is no recursive rollback command that I am aware
of, but if you have many datasets you may want to script the activity to
avoid a lot of typing and potential typos or missed entries.
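
For example, a small loop like this rolls back every filesystem in the
pool to the same snapshot name (@today here is just a placeholder;
datasets without that snapshot will simply report an error):

zfs list -H -o name -r -t filesystem zroot | while read ds; do
    zfs rollback "${ds}@today"
done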

> Would that work? Or is there something I haven't thought of?
> Thanks in advance.

Most ZFS commands have -n for a dry run (= do nothing) and -v for
verbose output about what is (or would be) done. Others, such as -P on
'zfs create', make some commands print additional details. I store most
of my ZFS commands in text files or scripts so I know what I've done
(zfs also keeps a history that can be referred to), and that helps me
avoid mistakes like forgetting a flag or a typo in a name. It keeps
mistakes to a minimum, but for a few commands I still start with -vn and
only delete the -n after verifying I like the planned action.
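
Applied to your restore, for instance, something like this shows what
would be received without changing anything:

zcat /mnt/backup.gz | zfs recv -vn -udF zroot

and adding -vn to the send command will estimate the stream size without
sending anything.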

> -obr
