On 9/18/25 20:00, fre...@vanderzwan.org wrote:
>> In any case, I tried deleting the 20250711020000 snapshot.
>> Now the jump in refer just moved to the next:
>>> zfs list -t all -o name,refer,used,usedbychildren,usedbydataset,usedbyrefreservation,usedbysnapshots|grep default
>>> zroot/ROOT/default 62.1G 70.5G 0B 62.1G 0B 8.36G
>>> ...
>>> zroot/ROOT/default@auto_zroot-20250611020000 3.34G 215M - - - -
>>> zroot/ROOT/default@auto_zroot-20250810020000 62.1G 116M - - - -
>>> ...
>> Still "zfs send" generates the same huge amount of data.
> That’s because for some reason the data is still referenced in snapshots.
Then is usedbysnapshots lying?
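Possibly not: usedbysnapshots (and each snapshot's USED) only counts blocks the live filesystem no longer references. Data rewritten between two snapshots but still present in the current dataset travels in an incremental send, yet never shows up in usedbysnapshots. What the send size roughly tracks is the growth in REFER between the endpoints. A quick sketch using the GiB figures quoted above:

```shell
# Rough lower bound on an incremental send crossing the June -> August
# boundary: the growth in REFER between the two snapshots. Those blocks
# are still referenced by the live filesystem, so they count toward
# neither snapshot USED nor usedbysnapshots.
awk 'BEGIN { printf "%.2f GiB\n", 62.1 - 3.34 }'   # prints "58.76 GiB"
```

A send stream of roughly that size across those two snapshots would be consistent with the numbers, even with usedbysnapshots at only a few hundred MB.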
> If you delete all snapshots up to and including 20250810020000 you should see that usage drop to what you expect.
Unfortunately not.
# zfs list -t snap|grep default
zroot/ROOT/default@auto_zroot-20250821020000 41.0M - 62.1G -
zroot/ROOT/default@auto_zroot-20250828020000 33.9M - 62.1G -
zroot/ROOT/default@auto_zroot-20250904020000 28.1M - 62.1G -
zroot/ROOT/default@auto_zroot-20250909020000 10.1M - 62.1G -
zroot/ROOT/default@auto_zroot-20250911020000 10.3M - 62.1G -
zroot/ROOT/default@auto_zroot-20250912020000 1.84M - 62.1G -
zroot/ROOT/default@auto_zroot-20250913020000 5.97M - 62.1G -
zroot/ROOT/default@auto_zroot-20250914020000 5.79M - 62.1G -
zroot/ROOT/default@auto_zroot-20250915020000 4.75M - 62.1G -
zroot/ROOT/default@auto_zroot-20250916020000 2.48M - 62.1G -
zroot/ROOT/default@auto_zroot-20250917020000 2.09M - 62.1G -
zroot/ROOT/default@auto_zroot-20250917210000 1.21M - 62.1G -
zroot/ROOT/default@auto_zroot-20250917220000 1.54M - 62.1G -
zroot/ROOT/default@auto_zroot-20250917230000 1.36M - 62.1G -
zroot/ROOT/default@auto_zroot-20250918000000 1.44M - 62.1G -
zroot/ROOT/default@auto_zroot-20250918010000 1.42M - 62.1G -
zroot/ROOT/default@auto_zroot-20250918020000 1.65M - 62.1G -
zroot/ROOT/default@auto_zroot-20250918030000 1.95M - 62.1G -
zroot/ROOT/default@auto_zroot-20250918040000 964K - 62.1G -
zroot/ROOT/default@auto_zroot-20250918050000 1.61M - 62.1G -
zroot/ROOT/default@auto_zroot-20250918060000 960K - 62.1G -
zroot/ROOT/default@auto_zroot-20250918070000 1.48M - 62.1G -
zroot/ROOT/default@auto_zroot-20250918080000 1.47M - 62.1G -
zroot/ROOT/default@auto_zroot-20250918090000 1.74M - 62.1G -
zroot/ROOT/default@auto_zroot-20250918100000 2.07M - 62.1G -
zroot/ROOT/default@auto_zroot-20250918110000 2.03M - 62.1G -
zroot/ROOT/default@auto_zroot-20250918120000 892K - 62.1G -
zroot/ROOT/default@auto_zroot-20250918130000 1.19M - 62.1G -
zroot/ROOT/default@auto_zroot-20250918140000 1.62M - 62.1G -
zroot/ROOT/default@auto_zroot-20250918150000 1.81M - 62.1G -
zroot/ROOT/default@auto_zroot-20250918160000 1.58M - 62.1G -
zroot/ROOT/default@auto_zroot-20250918170000 1.88M - 62.1G -
zroot/ROOT/default@auto_zroot-20250918180000 1.99M - 62.1G -
zroot/ROOT/default@auto_zroot-20250918190000 1.12M - 62.1G -
zroot/ROOT/default@auto_zroot-20250918200000 972K - 62.1G -
# zfs list -o name,refer,used,usedbysnapshots|grep default
zroot/ROOT/default 62.1G 62.4G 263M
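For what it's worth, the per-snapshot USED figures above sum to even less than the 263M usedbysnapshots, because blocks shared by two or more snapshots count toward usedbysnapshots but toward no single snapshot's USED. A quick check over the values pasted above (suffixes treated as base 1024):

```shell
# Sum the USED column of the snapshot listing (values copied verbatim
# from the output above; K values are converted to M).
awk '{ n = $1 + 0; if ($1 ~ /K$/) n /= 1024; sum += n }
     END { printf "%.0fM total\n", sum }' <<'EOF'
41.0M
33.9M
28.1M
10.1M
10.3M
1.84M
5.97M
5.79M
4.75M
2.48M
2.09M
1.21M
1.54M
1.36M
1.44M
1.42M
1.65M
1.95M
964K
1.61M
960K
1.48M
1.47M
1.74M
2.07M
2.03M
892K
1.19M
1.62M
1.81M
1.58M
1.88M
1.99M
1.12M
972K
EOF
```

This prints "182M total". Both that total and usedbysnapshots are tiny next to the 62.1G REFER, so the snapshots genuinely hold almost nothing unique; whatever inflates the send stream must be data still referenced by the live dataset.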
> One more thing, can you show the output of ’zfs list -o space |grep zroot’ to list all datasets in the pool ?
# zfs list -o space |grep zroot
zroot 1.95T 1.50T 536K 88K 0B 1.50T
zroot/ROOT 1.95T 62.4G 0B 88K 0B 62.4G
zroot/ROOT/default 1.95T 62.4G 263M 62.1G 0B 0B
zroot/ezjail 1.95T 1.40T 328K 128K 0B 1.40T
zroot/ezjail/backup 1.95T 33.9G 9.00G 1.29G 0B 23.7G
zroot/ezjail/backup/cache 1.95T 1.38G 1.20G 182M 0B 0B
zroot/ezjail/backup/postgres 1.95T 22.3G 17.7G 4.55G 0B 0B
zroot/ezjail/backup/tmp 1.95T 128K 0B 128K 0B 0B
zroot/ezjail/basejail 1.95T 6.82G 5.52G 1.31G 0B 0B
zroot/ezjail/dc 1.95T 4.52G 2.59G 793M 0B 1.16G
zroot/ezjail/dc/cache 1.95T 1.16G 1.04G 119M 0B 0B
zroot/ezjail/dc/tmp 1.95T 128K 0B 128K 0B 0B
zroot/ezjail/fs 1.95T 1.21T 5.73G 28.8M 0B 1.21T
zroot/ezjail/fs/cache 1.95T 498M 462M 36.3M 0B 0B
zroot/ezjail/fs/images 1.95T 118G 0B 118G 0B 0B
zroot/ezjail/fs/log 1.95T 89.3G 56.6G 32.7G 0B 0B
zroot/ezjail/fs/shares 1.95T 481G 59.1G 422G 0B 0B
zroot/ezjail/fs/tmp 1.95T 144K 0B 144K 0B 0B
zroot/ezjail/fs/usr 1.95T 547G 1.18G 305M 0B 546G
zroot/ezjail/fs/usr/home 1.95T 546G 31.9G 514G 0B 0B
zroot/ezjail/ids 1.95T 16.2G 2.55G 584M 0B 13.1G
zroot/ezjail/ids/cache 1.95T 96.5M 0B 96.5M 0B 0B
zroot/ezjail/ids/logs 1.95T 12.6G 0B 12.6G 0B 0B
zroot/ezjail/ids/spool 1.95T 364M 0B 364M 0B 0B
zroot/ezjail/ids/tmp 1.95T 128K 0B 128K 0B 0B
zroot/ezjail/mail 1.95T 128G 3.31G 662M 0B 124G
zroot/ezjail/mail/cache 1.95T 558M 449M 110M 0B 0B
zroot/ezjail/mail/clamav 1.95T 414M 0B 414M 0B 0B
zroot/ezjail/mail/imap 1.95T 123G 17.2G 106G 0B 0B
zroot/ezjail/mail/tmp 1.95T 344K 0B 344K 0B 0B
zroot/ezjail/newjail 1.95T 70.8M 16K 70.8M 0B 0B
zroot/ezjail/proxy 1.95T 6.32G 737M 222M 0B 5.39G
zroot/ezjail/proxy/tmp 1.95T 152K 0B 152K 0B 0B
zroot/ezjail/proxy/var 1.95T 5.39G 122M 22.5M 0B 5.24G
zroot/ezjail/proxy/var/cache 1.95T 328M 294M 33.1M 0B 0B
zroot/ezjail/proxy/var/clamav 1.95T 414M 0B 414M 0B 0B
zroot/ezjail/proxy/var/log 1.95T 3.55G 2.54G 1.02G 0B 0B
zroot/ezjail/proxy/var/squid 1.95T 989M 0B 989M 0B 0B
zroot/home 1.95T 22.6G 22.6G 18.8M 0B 0B
zroot/tmp 1.95T 134M 0B 134M 0B 0B
zroot/usr 1.95T 11.8G 0B 88K 0B 11.8G
zroot/usr/obj 1.95T 7.84G 0B 7.84G 0B 0B
zroot/usr/src 1.95T 3.93G 0B 3.93G 0B 0B
zroot/var 1.95T 5.32G 0B 88K 0B 5.32G
zroot/var/audit 1.95T 88K 0B 88K 0B 0B
zroot/var/cache 1.95T 213M 0B 213M 0B 0B
zroot/var/clamav 1.95T 425M 0B 425M 0B 0B
zroot/var/crash 1.95T 88K 0B 88K 0B 0B
zroot/var/dumps 1.95T 88K 0B 88K 0B 0B
zroot/var/log 1.95T 4.69G 3.74G 978M 0B 0B
zroot/var/mail 1.95T 640K 528K 112K 0B 0B
zroot/var/tmp 1.95T 88K 0B 88K 0B 0B
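Since the grep eats the header line: the columns of 'zfs list -o space' are NAME, AVAIL, USED, USEDSNAP, USEDDS, USEDREFRESERV and USEDCHILD, and they satisfy USED = USEDSNAP + USEDDS + USEDREFRESERV + USEDCHILD. A sanity check on the zroot/ROOT/default row (figures copied from it, in GiB):

```shell
# USEDSNAP + USEDDS + USEDREFRESERV + USEDCHILD for zroot/ROOT/default.
awk 'BEGIN { printf "%.1fG\n", 0.263 + 62.1 + 0 + 0 }'   # prints "62.4G"
```

which matches the USED column, so the accounting in the listing is internally consistent.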
> And the output of ‘zpool list’.
# zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
vm 446G 361G 85.4G - - 54% 80% 1.00x ONLINE -
zroot 3.56T 1.50T 2.06T - - 19% 42% 1.00x ONLINE -
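One note on reconciling the two views: zpool SIZE is the pool's raw capacity, while the zfs list figures exclude the slop space ZFS reserves for itself, by default 1/32 of the pool (this assumes a stock OpenZFS spa_slop_shift of 5). That roughly accounts for the gap between the 3.56T pool and the 1.95T AVAIL + 1.50T USED reported above:

```shell
# Pool size minus the default 1/32 slop reservation (assumes the
# spa_slop_shift tunable is at its default).
awk 'BEGIN { printf "%.2fT\n", 3.56 - 3.56 / 32 }'   # prints "3.45T"
```

which lines up with AVAIL + USED (1.95T + 1.50T) from zfs list.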
bye & Thanks
av.