Backup/restore recipe


Eugene R

Nov 12, 2025, 11:00:53 AM
to ques...@freebsd.org
Hello,

Does anyone have any howtos/recipes for optimal backup and restore
strategies for a FreeBSD-based server? In particular, a "modern" ZFS
installation (pretty complex dataset tree) on a remote cloud system
accessible via SSH or console, with some external storage via smbfs or S3.

I suppose we will need
- partition layout
- ZFS layout
- /boot directory
- /etc directory (including passwd, fstab, etc)
- filesystem contents (using tar.gz or whatever) and/or
- ZFS data that can be restored directly

I imagine three potential scenarios:
- selective restore of specific files or subtrees to a working FreeBSD
system (this one is reasonably obvious)
- (essentially) exact duplicate of the original system state on the same
or different machine (ideally binary exact if hardware allows)
- functionally equivalent duplicate (i.e., the same filesystem content
over the potentially different low-level layouts)
In cases 2 and 3, we likely will have to start from a clean machine,
possibly with dummy Linux or FreeBSD installation.

I will be grateful for any pointers or explanations.

Best regards
Eugene


Frank Leonhardt

Nov 12, 2025, 11:46:11 AM
to freebsd-...@freebsd.org
First off, ZFS snapshots are your friend. It's very easy to create a
cron job script that'll snapshot everything daily (or whatever) and
rotate them. This allows you to roll back everything or individual
datasets, or just have a look at older files.

Here's a little script I run in a cronjob called "snapshot7days"

-------------------------

#!/bin/sh
# Rotate a seven-day window of recursive snapshots for each dataset
# given on the command line. On the first few runs the destroy/rename
# commands will complain about missing snapshots; that's harmless.
for ds in "$@"
do

zfs destroy -r "${ds}@7daysago"
zfs rename -r "${ds}@6daysago" @7daysago
zfs rename -r "${ds}@5daysago" @6daysago
zfs rename -r "${ds}@4daysago" @5daysago
zfs rename -r "${ds}@3daysago" @4daysago
zfs rename -r "${ds}@2daysago" @3daysago
zfs rename -r "${ds}@yesterday" @2daysago
zfs rename -r "${ds}@today" @yesterday
zfs snapshot -r "${ds}@today"

done

-------------------------

Not exactly complicated.

You run it by passing the datasets you want a snapshot of - e.g.
"snapshot7days zr/jail/webserver zr/jail/dbserver ..."
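If it helps, the cron entry might look something like this (the install path and schedule are illustrative):

```
# /etc/crontab entry - rotate snapshots nightly at 02:00 (illustrative)
0  2  *  *  *  root  /usr/local/sbin/snapshot7days zr/jail/webserver zr/jail/dbserver
```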

The next fun thing is dataset replication - I replicate production
servers to a duplicate off-site. See "zfs send" and "zfs receive". Send
writes to stdout and receive reads from stdin, so you can pipe them over
ssh (or, if local, use nc for speed). Having a replica of the whole
pool on another set of drives off-site is a comforting feeling. And the
best bit is you can do incremental updates (it only transfers the blocks
that have changed between snapshots).
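That incremental send/receive step can be sketched as a small wrapper; the dataset, snapshot, host, and pool names below are all illustrative, and it assumes both snapshots already exist on the sender:

```shell
#!/bin/sh
# Incremental replication sketch. All names (zr/jail/webserver,
# backupserver, destination pool "zr") are illustrative.
replicate_incremental() {
    ds=$1      # dataset, e.g. zr/jail/webserver
    prev=$2    # older snapshot, e.g. yesterday
    cur=$3     # newer snapshot, e.g. today
    host=$4    # backup host reachable over ssh
    pool=$5    # destination pool on the backup host

    # Send only the blocks that changed between the two snapshots.
    # -R includes child datasets and properties; on the receive side,
    # -F rolls the destination back to the last common snapshot,
    # -d maps the sent paths under the destination pool, and
    # -u leaves the received datasets unmounted.
    zfs send -R -i "${ds}@${prev}" "${ds}@${cur}" |
        ssh "$host" "zfs receive -Fdu ${pool}"
}

# Usage (illustrative):
# replicate_incremental zr/jail/webserver yesterday today backupserver zr
```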

If you're backing up on a local network to an SMB server, you can just
pipe the dataset(s) into a large file on it using zfs send. Windoze
won't know how to read it. If you have encrypted datasets you can
send/receive using the --raw option and then Windoze users won't even be
able to dump the file and look at it. By default it decrypts before
sending. If you do this you'll need to have kept the encryption key
somewhere safe and restore it with zfs load-key.
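A sketch of that raw-send round trip, with purely illustrative dataset and file names (tank/secret, a backup file on the SMB mount):

```shell
#!/bin/sh
# Raw send of an encrypted dataset to a file, plus the restore path.
# Dataset names and file paths are illustrative.
backup_raw() {
    ds=$1    # e.g. tank/secret (must have a @today snapshot)
    out=$2   # e.g. /mnt/smb/secret.zfs
    # -w (--raw) sends the blocks still encrypted, so the file is
    # unreadable without the key.
    zfs send -w "${ds}@today" > "$out"
}

restore_raw() {
    file=$1  # the raw stream saved above
    ds=$2    # where to receive it, e.g. tank/restored
    zfs receive "$ds" < "$file"
    # The key you kept somewhere safe is needed before mounting.
    zfs load-key "$ds"
    zfs mount "$ds"
}
```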

I still use tar to dump to tape at file level, just in case ZFS stops
working.

Speaking of tape, it's very easy to dump a dataset to a remote tape
drive: zfs send pool/dataset | ssh user@remote "cat >/dev/sa0"

That way if Amazon toasts your VM you have an air-gapped copy in a
place ransomware can't touch.
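The restore direction is just the pipe reversed; a sketch using the same illustrative names (user@remote, /dev/sa0):

```shell
#!/bin/sh
# Pull the stream back off the remote tape and receive it locally.
# Host, device, and target names are illustrative.
restore_from_tape() {
    host=$1    # e.g. user@remote
    dev=$2     # e.g. /dev/sa0
    target=$3  # e.g. pool/dataset
    ssh "$host" "cat ${dev}" | zfs receive "$target"
}

# Usage (illustrative):
# restore_from_tape user@remote /dev/sa0 pool/dataset
```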

Regards, Frank.




void

Nov 12, 2025, 1:38:22 PM
to ques...@freebsd.org
On Wed, Nov 12, 2025 at 07:00:15PM +0300, Eugene R wrote:
>Hello,
>
>Does anyone have any howtos/recipes for optimal backup and restore
>strategies for a FreeBSD-based server? In particular, a "modern" ZFS
>installation (pretty complex dataset tree) on a remote cloud system
>accessible via SSH or console, with some external storage via smbfs or
>S3.

I use tarsnap

https://www.tarsnap.com/
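For context, a minimal tarsnap workflow looks roughly like this; the archive name and paths are illustrative and assume the machine key is already registered:

```shell
#!/bin/sh
# Minimal tarsnap sketch: create, list, and extract archives.
# Archive names and paths are illustrative.
backup_configs() {
    # Date-stamped archive of the config directories.
    tarsnap -c -f "etc-$(date +%Y%m%d)" /etc /usr/local/etc
}

list_archives() {
    tarsnap --list-archives
}

restore_archive() {
    # Extract the named archive into the current directory.
    tarsnap -x -f "$1"
}
```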
--

Matthias Petermann

Nov 12, 2025, 1:51:19 PM
to ques...@freebsd.org
Hello,

Am 12.11.25 um 17:00 schrieb Eugene R:
I’ve had very good experiences using restic for backups on FreeBSD (and
other platforms). It’s not tied to ZFS, but for me it’s the best overall
compromise between platform independence, snapshot-based versioning,
integrity, and security by default.

Backups can be stored locally or across different cloud backends -
restic supports all major targets via rclone, so you can use S3, Google
Cloud, SMB shares, and more, all with the same workflow.

Restores are fast and flexible - from single files up to full directory
trees - and it integrates easily into automated setups (cron jobs,
systemd timers, or shell scripts).
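A minimal restic round trip might look like this; the repository URL and paths are illustrative, and credentials (RESTIC_PASSWORD, S3 keys) are assumed to come from the environment:

```shell
#!/bin/sh
# Minimal restic workflow sketch. The repository and paths are
# illustrative; credentials come from the environment.
REPO="s3:s3.amazonaws.com/my-backup-bucket"

init_repo()    { restic -r "$REPO" init; }
run_backup()   { restic -r "$REPO" backup /etc /usr/local/etc /home; }
list_backups() { restic -r "$REPO" snapshots; }
restore_last() { restic -r "$REPO" restore latest --target /tmp/restore; }
```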

A bare-metal restore isn’t supported directly, but with good
documentation of your system layout (partitions, ZFS datasets, boot
config, etc.) or some deployment automation, that’s easy to work around.
I typically treat the system setup as code and use restic for all the
actual data.


Best regards
Matthias



--
For everyone who wants to understand and shape digital systems:
new posts every week on architecture, sovereignty, and system design.
👉 https://www.petermann-digital.de/blog


Dan Mahoney (ports)

Nov 12, 2025, 2:00:07 PM
to Matthias Petermann, ques...@freebsd.org
+1 for restic, but I do like the idea of tarsnap because it supports someone who works hard for the project.  They're very similar in how they function (and there's nothing stopping you from using both).

-Dan

Eugene R

Nov 12, 2025, 2:06:35 PM
to ques...@freebsd.org

Hello and thanks to everyone for the suggestions.

Overall, making backups is indeed pretty straightforward using ZFS or an assortment of "agnostic" filetree-based backup solutions. And in the latter case, selective restore is also easy.

What I do not understand is how one should approach restore in scenarios 2 and 3 (and how to stage the backups for them).

- selective restore of specific files or subtrees to a working FreeBSD system (this one is reasonably obvious)

- (essentially) exact duplicate of the original system state on the same or different machine (ideally binary exact if hardware allows, i.e., an equivalent of a cloned server snapshot)

- functionally equivalent duplicate (i.e., the same filesystem content over the potentially different low-level layouts)
In cases 2 and 3, we likely will have to start from a clean machine, possibly with dummy Linux or FreeBSD installation.

Eugene

Frank Leonhardt

Nov 13, 2025, 8:21:01 AM
to ques...@freebsd.org
On 12/11/2025 19:06, Eugene R wrote:

Hello and thanks to everyone for the suggestions.

Overall, making backups is indeed pretty straightforward using ZFS or an assortment of "agnostic" filetree-based backup solutions. And in the latter case, selective restore is also easy.

What I do not understand is how one should approach restore in scenarios 2 and 3 (and how to stage the backups for them).

- selective restore of specific files or subtrees to a working FreeBSD system (this one is reasonably obvious)
- (essentially) exact duplicate of the original system state on the same or different machine (ideally binary exact if hardware allows, i.e., an equivalent of a cloned server snapshot)
- functionally equivalent duplicate (i.e., the same filesystem content over the potentially different low-level layouts)
In cases 2 and 3, we likely will have to start from a clean machine, possibly with dummy Linux or FreeBSD installation.

Eugene

Using ZFS send to replicate the datasets does indeed produce a "binary exact" snapshot for most intents. The blocks might be stored in different places on the destination, but to the rest of the system it's identical. You can copy the datasets onto a pool on a new drive, copy the bootloader onto it, plug it into a server and just boot it back up. If you replicate the datasets to their own pool on a dedicated drive (or drives), you can maintain a ready-to-go clone of the live server.

You can also go into the backup datasets and copy individual files (or whole datasets) out as required. Make sure the backup datasets are set to "read-only" (zfs set readonly=on), or the mere act of accessing them will modify them (atime updates, for a start) and subsequent incremental receives will fail unless you force a rollback.
I've seen suggestions for various file-level backups, which have their place, but you specifically asked to back up the whole running system. If you replicate a ZFS snapshot you get everything in a way that's not possible when copying individual files from a live system. AIX has this facility in JFS2, but it's not common.

Of course, if it's a VM you can take a snapshot of the VMDK and copy that, but it's not actually as good: a ZFS snapshot guarantees the FS is in a consistent state, whereas VMware has no way of telling if it's mid-way through a TXG. But I use real hardware :-)

Regards, Frank.


Daniel Tameling

Nov 13, 2025, 9:21:21 AM
to ques...@freebsd.org
This is how I cloned a VM in the past:

old machine:

bectl create migrate
zfs snapshot zroot/ROOT/migrate@2025
zfs send -prec zroot/ROOT/migrate@2025 | ssh server 'cat > bectl.txt'

zfs snapshot zroot/home@migrate
zfs send -R zroot/home@migrate | ssh server 'cat > home.txt'

new machine: install the same version of FreeBSD. At the end of the installation, don't reboot into the new system yet. If you do, you will end up using the zfs datasets you want to destroy, which makes destroying them impossible. Drop to a shell and execute:

ssh server 'cat bectl.txt' | zfs recv zroot/ROOT/newbe
bectl activate newbe

zfs list -H -o name -t snapshot zroot/home | xargs -n1 zfs destroy
(WARNING: this destroys every existing snapshot of zroot/home!)
ssh server 'cat home.txt' | zfs recv -dF zroot

Reboot now. If everything is working you can destroy the "old" boot environment:
bectl destroy default
bectl rename newbe default

This will give you a new machine with the old home with all snapshots, but only the migrate boot environment without snapshots. I think I did this because my home snapshots were almost the same as the dataset, so there was not much overhead in keeping them, and just sending boot environments means I can also "update" a live system. The catch is the home snapshot: you can't destroy it while logged in as a user whose home is on zroot/home. I think it works if you log in as root.

Also keep in mind that bectl doesn't do a perfect clone. I think typically /usr/src and /var are not included. You might want to check the man page.

To be honest, these notes are quite old as I moved to a different backup strategy. You should test whether they really restore the system before you rely on them for backup.

What I do now is back up boot/loader.conf, etc, and usr/local/etc, save the output of "pkg query -e '%a = 0' %o", and tar+rsync the important files that aren't constantly synced anyway.
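The package-list half of that strategy can be sketched like this (the list file path is illustrative):

```shell
#!/bin/sh
# Save the origins of explicitly installed packages, and reinstall
# them later on a fresh system. The list file path is illustrative.
save_pkg_list() {
    # %a = 0 selects packages installed on request (not as
    # dependencies); %o prints the port origin, e.g. www/nginx.
    pkg query -e '%a = 0' %o > "$1"
}

restore_pkg_list() {
    # Word splitting of the list contents is intended here.
    pkg install -y $(cat "$1")
}

# Usage (illustrative):
# save_pkg_list /root/pkg.list
# restore_pkg_list /root/pkg.list
```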

Best regards,
Daniel

Frank Leonhardt

Nov 13, 2025, 3:28:53 PM
to ques...@freebsd.org

This is the general idea...

Supposing you have server1 as your live server, and backupserver as your backup. I would name the zpools after the server (it's more meaningful than zroot and stops clashes). So the live server's root zpool is called "server1" and the backup server is called something else.

So on your backup server:

1) Insert your blank "clone" hard disk, say da1.
2) Save the partition table from server1 (gpart backup) and get this to backupserver somehow (e.g. cut/paste is quickest)
3) Copy the partition table to da1 (gpart restore)
4) Copy the boot code to it: gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da1
5) Create an empty zpool on da1:  zpool create -o altroot=/mnt -O mountpoint=none server1 /dev/da1p2
6) Set the boot file system zpool set bootfs=server1/ROOT/default server1

On the live server:

7) Snapshot the original data on server 1: zfs snapshot -r server1@backup
8) Copy the data: zfs send -R server1@backup | ssh backupserver "zfs receive -Fduv server1"

Thereafter you can send snapshot deltas and keep it in sync. I don't think I've missed anything, but this should maintain a bootable disk that's identical to the one on server1. If I get a moment I'll test it. I've done this a few times and wish I'd written it down, but in general I'm just preserving the zpool. Setting it up to boot is easy enough. If it's a multi-disk setup, just have multiple disks on backupserver and alter the "zpool create" to match.
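The follow-up delta syncs for step 8 can be sketched as one small function; the pool, snapshot, and host names follow the example above and are illustrative:

```shell
#!/bin/sh
# After the initial full replication, keep the clone in sync by
# sending only the delta between the last replicated snapshot and a
# fresh one. Names (server1, backupserver) follow the example above.
sync_delta() {
    pool=$1   # e.g. server1
    prev=$2   # last snapshot already on the backup, e.g. backup
    cur=$3    # new snapshot to take now, e.g. backup2
    host=$4   # e.g. backupserver

    zfs snapshot -r "${pool}@${cur}"
    zfs send -R -i "${pool}@${prev}" "${pool}@${cur}" |
        ssh "$host" "zfs receive -Fduv ${pool}"
}

# Usage (illustrative):
# sync_delta server1 backup backup2 backupserver
```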

What you don't want to do is omit the altroot and mountpoint=none stuff or you'll end up with two systems mounted on root on the backupserver. It's a bit tricky when that happens. So I'm told :-)




Karl Vogel

Nov 13, 2025, 5:03:55 PM
to ques...@freebsd.org
>> On 12/11/2025 19:06, Eugene R wrote:

> Overall, making backups is indeed pretty straightforward using ZFS or
> an assortment of "agnostic" filetree-based backup solutions.

You can do both. I keep daily snapshots, and I also make snapshots called
"@inc30" for important datasets every 30 minutes. The basic idea:

#!/bin/sh
zfs list -d 3 -Ho name,mountpoint -t filesystem |
while read -r dataset mountpoint
do
    # skip unmounted or redundant datasets
    [ "$mountpoint" = "none" ] && continue
    [ "$mountpoint" = "legacy" ] && continue
    # skip top-level pools like "tank"
    case $dataset in */*) ;; *) continue ;; esac

    snapshot="${dataset}@inc30"

    # files added or modified since the last @inc30 snapshot
    # (zfs diff prints a tab-separated change type and path)
    zfs diff -H "$snapshot" 2> /dev/null |
        grep '^[+M]' |
        cut -f 2 > flist
    # remove crap like cache-files ... from flist here

    # use cpio/tar/whatever to copy flist elsewhere
    rm flist
    zfs destroy "$snapshot"
    zfs snapshot "$snapshot"
done

When I mangle a file, it's usually something I've created in the last
hour or two, so daily snaps don't help. After a few months, I roll
the 30-minute backups into one daily backup.

--
Karl Vogel I don't speak for anyone but myself

Comment: I was in the military, and China stole my freaking DNA profile.
Reply: Gonna be a weird day for you when China's clone army invades us.
--Hacker News conversation, 6 Nov 2025

Frank Leonhardt

Nov 14, 2025, 6:53:20 AM
to ques...@freebsd.org
**** WARNING ****

Actually, don't do it exactly like this. I remembered overnight about
the trouble I'd had with overlapping mountpoints when I did this a few
years ago. You can certainly get an exact copy of all the datasets this
way, but there's a catch-22 on booting. If you have the auto mount
disabled (as above) then it won't boot. If you enable it, it will overlay
the system on the backup server. It can be resolved by booting off
something else (USB), flipping the switch, and then removing the (USB)
drive and rebooting before it goes too wonky. Then you have a bootable
disk. It would probably work if the systems on the backup and live server
were the same.

What I do in practice is run everything live in jails anyway, which
means that other than a few odd files to set up the network (rc.conf and
jail.conf) you just plonk the jails on top of a standard base system. You
can replicate the whole lot, as I said, but making it bootable isn't clean.




David Christensen

Nov 14, 2025, 12:32:54 PM
to ques...@freebsd.org
I do not have any FreeBSD VPS's, but I do have a linode.com Debian
GNU/Linux VPS running the UniFi controller. I subscribe to the backup
service, have done at least one restore (via the web control panel), and
it works for my use-case:

https://techdocs.akamai.com/cloud-computing/docs/backup-service


I can see doing live file-level backup/restore of static (!) system
configuration files and data over SSH, but "exact duplicate of the
original system state" makes me think: shut down the VPS and either make a
copy of the OS virtual disk file or export the VPS to a file via some
tool. The latter requires "outside" access to the VPS -- e.g. web control
panel, SSH, or API.


Does your cloud provider offer a backup solution that can meet your
requirements?


David

