Re: help with full zfs "partitions" - can't delete files


Paul Procacci

unread,
Jun 3, 2024, 4:40:25 PM
to William Dudley, freebsd-questions


On Mon, Jun 3, 2024 at 4:29 PM William Dudley <wfdu...@gmail.com> wrote:
The problem:

FreeBSD 13.3 amd64 system, with
a zfs pool built from two physical drives.
The zfs pool has 7 "partitions" (is that what they're called?)

I was copying files over from another machine and didn't realize that
I filled one of the partitions.

I can't proceed now with this one full partition.
Every single command fails due to "out of space".

That includes:
rm (one file or many)
dd if=/dev/zero of=(some file)
truncate (somefile)
zfs destroy poolname/partitionname
cannot destroy 'poolname/partitionname': out of space

There are no snapshots, I never created any.

Extensive googling has not shown any more than bug reports acknowledging
that this is a problem.

How do I fix this, short of burning the machine to the ground and starting over?

Thanks,
Bill Dudley

This email is free of malware because I run Linux.

They are called datasets.

The dataset *may* have gone readonly.

zfs get all poolname/dataset

Posting the output of the above may help us.
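If the full dump is unwieldy to post, a filtered view covers the properties that matter most here. As a sketch (`poolname/dataset` and `props.txt` are placeholders), save the dump and pull out the relevant lines with awk:

```shell
# Save the full property dump (ignore failure if the dataset name is
# wrong), then filter for the properties most relevant to a
# "can't delete" situation. zfs get output is: NAME PROPERTY VALUE SOURCE.
zfs get all poolname/dataset > props.txt 2>/dev/null || true
awk '$2 ~ /^(readonly|available|mounted|refreservation)$/ {print $2 "=" $3}' props.txt
```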

~Paul
--
__________________

:(){ :|:& };:

William Dudley

unread,
Jun 3, 2024, 4:54:39 PM
to Paul Procacci, freebsd-questions
zfs get all m2pool/gU4
NAME        PROPERTY              VALUE                  SOURCE
m2pool/gU4  type                  filesystem             -
m2pool/gU4  creation              Sun Dec  9 19:13 2018  -
m2pool/gU4  used                  2.34T                  -
m2pool/gU4  available             0B                     -
m2pool/gU4  referenced            2.34T                  -
m2pool/gU4  compressratio         1.00x                  -
m2pool/gU4  mounted               no                     -
m2pool/gU4  quota                 none                   default
m2pool/gU4  reservation           none                   default
m2pool/gU4  recordsize            128K                   default
m2pool/gU4  mountpoint            /u4                    local
m2pool/gU4  sharenfs              off                    default
m2pool/gU4  checksum              on                     default
m2pool/gU4  compression           off                    default
m2pool/gU4  atime                 on                     default
m2pool/gU4  devices               on                     default
m2pool/gU4  exec                  on                     default
m2pool/gU4  setuid                on                     default
m2pool/gU4  readonly              off                    default
m2pool/gU4  jailed                off                    default
m2pool/gU4  snapdir               hidden                 default
m2pool/gU4  aclmode               discard                default
m2pool/gU4  aclinherit            restricted             default
m2pool/gU4  createtxg             480                    -
m2pool/gU4  canmount              on                     default
m2pool/gU4  xattr                 on                     default
m2pool/gU4  copies                1                      default
m2pool/gU4  version               5                      -
m2pool/gU4  utf8only              off                    -
m2pool/gU4  normalization         none                   -
m2pool/gU4  casesensitivity       sensitive              -
m2pool/gU4  vscan                 off                    default
m2pool/gU4  nbmand                off                    default
m2pool/gU4  sharesmb              off                    default
m2pool/gU4  refquota              none                   default
m2pool/gU4  refreservation        none                   default
m2pool/gU4  guid                  16119321983578430568   -
m2pool/gU4  primarycache          all                    default
m2pool/gU4  secondarycache        all                    default
m2pool/gU4  usedbysnapshots       0B                     -
m2pool/gU4  usedbydataset         2.34T                  -
m2pool/gU4  usedbychildren        0B                     -
m2pool/gU4  usedbyrefreservation  0B                     -
m2pool/gU4  logbias               latency                default
m2pool/gU4  objsetid              84                     -
m2pool/gU4  dedup                 off                    default
m2pool/gU4  mlslabel              none                   default
m2pool/gU4  sync                  standard               default
m2pool/gU4  dnodesize             legacy                 default
m2pool/gU4  refcompressratio      1.00x                  -
m2pool/gU4  written               2.34T                  -
m2pool/gU4  logicalused           2.34T                  -
m2pool/gU4  logicalreferenced     2.34T                  -
m2pool/gU4  volmode               default                default
m2pool/gU4  filesystem_limit      none                   default
m2pool/gU4  snapshot_limit        none                   default
m2pool/gU4  filesystem_count      none                   default
m2pool/gU4  snapshot_count        none                   default
m2pool/gU4  snapdev               hidden                 default
m2pool/gU4  acltype               nfsv4                  default
m2pool/gU4  context               none                   default
m2pool/gU4  fscontext             none                   default
m2pool/gU4  defcontext            none                   default
m2pool/gU4  rootcontext           none                   default
m2pool/gU4  relatime              off                    default
m2pool/gU4  redundant_metadata    all                    default
m2pool/gU4  overlay               on                     default
m2pool/gU4  encryption            off                    default
m2pool/gU4  keylocation           none                   default
m2pool/gU4  keyformat             none                   default
m2pool/gU4  pbkdf2iters           0                      default
m2pool/gU4  special_small_blocks  0                      default


Thanks,
Bill Dudley
This email is free of malware because I run Linux.

David Christensen

unread,
Jun 3, 2024, 6:25:28 PM
to ques...@freebsd.org
On 6/3/24 13:53, William Dudley wrote:
> zfs get all m2pool/gU4

<begin sort>

> NAME PROPERTY VALUE SOURCE
> m2pool/gU4 aclinherit restricted default
> m2pool/gU4 aclmode discard default
> m2pool/gU4 acltype nfsv4 default
> m2pool/gU4 atime on default
> m2pool/gU4 available 0B -
> m2pool/gU4 canmount on default
> m2pool/gU4 casesensitivity sensitive -
> m2pool/gU4 checksum on default
> m2pool/gU4 compression off default
> m2pool/gU4 compressratio 1.00x -
> m2pool/gU4 context none default
> m2pool/gU4 copies 1 default
> m2pool/gU4 createtxg 480 -
> m2pool/gU4 creation Sun Dec 9 19:13 2018 -
> m2pool/gU4 dedup off default
> m2pool/gU4 defcontext none default
> m2pool/gU4 devices on default
> m2pool/gU4 dnodesize legacy default
> m2pool/gU4 encryption off default
> m2pool/gU4 exec on default
> m2pool/gU4 filesystem_count none default
> m2pool/gU4 filesystem_limit none default
> m2pool/gU4 fscontext none default
> m2pool/gU4 guid 16119321983578430568 -
> m2pool/gU4 jailed off default
> m2pool/gU4 keyformat none default
> m2pool/gU4 keylocation none default
> m2pool/gU4 logbias latency default
> m2pool/gU4 logicalreferenced 2.34T -
> m2pool/gU4 logicalused 2.34T -
> m2pool/gU4 mlslabel none default
> m2pool/gU4 mounted no -
> m2pool/gU4 mountpoint /u4 local
> m2pool/gU4 nbmand off default
> m2pool/gU4 normalization none -
> m2pool/gU4 objsetid 84 -
> m2pool/gU4 overlay on default
> m2pool/gU4 pbkdf2iters 0 default
> m2pool/gU4 primarycache all default
> m2pool/gU4 quota none default
> m2pool/gU4 readonly off default
> m2pool/gU4 recordsize 128K default
> m2pool/gU4 redundant_metadata all default
> m2pool/gU4 refcompressratio 1.00x -
> m2pool/gU4 referenced 2.34T -
> m2pool/gU4 refquota none default
> m2pool/gU4 refreservation none default
> m2pool/gU4 relatime off default
> m2pool/gU4 reservation none default
> m2pool/gU4 rootcontext none default
> m2pool/gU4 secondarycache all default
> m2pool/gU4 setuid on default
> m2pool/gU4 sharenfs off default
> m2pool/gU4 sharesmb off default
> m2pool/gU4 snapdev hidden default
> m2pool/gU4 snapdir hidden default
> m2pool/gU4 snapshot_count none default
> m2pool/gU4 snapshot_limit none default
> m2pool/gU4 special_small_blocks 0 default
> m2pool/gU4 sync standard default
> m2pool/gU4 type filesystem -
> m2pool/gU4 used 2.34T -
> m2pool/gU4 usedbychildren 0B -
> m2pool/gU4 usedbydataset 2.34T -
> m2pool/gU4 usedbyrefreservation 0B -
> m2pool/gU4 usedbysnapshots 0B -
> m2pool/gU4 utf8only off -
> m2pool/gU4 version 5 -
> m2pool/gU4 volmode default default
> m2pool/gU4 vscan off default
> m2pool/gU4 written 2.34T -
> m2pool/gU4 xattr on default

<end sort>

> Thanks,
> Bill Dudley
> This email is free of malware because I run Linux.


When posting console sessions, please be complete -- prompt, exact
command entered, exact output obtained. For example:

2024-06-03 15:15:29 toor@vf2 ~
# freebsd-version -kru; uname -a
13.3-RELEASE-p1
13.3-RELEASE-p1
13.3-RELEASE-p2
FreeBSD vf2.tracy.holgerdanske.com 13.3-RELEASE-p1 FreeBSD
13.3-RELEASE-p1 GENERIC amd64


Looking at the output of `zfs get all m2pool/gU4`, above:

> m2pool/gU4 type filesystem -

The ZFS dataset "m2pool/gU4" is a file system.

> m2pool/gU4 available 0B -

The file system has zero bytes of available space.

> m2pool/gU4 readonly off default

The file system is read-write.

> m2pool/gU4 mountpoint /u4 local

The file system mount point has been set to "/u4".

> m2pool/gU4 canmount on default

The file system can be mounted.

> m2pool/gU4 mounted no -

The file system is not mounted.


As root, please mount the file system:

# zfs mount m2pool/gU4


Then try removing files and/or directories under /u4.


If either of the above fails, please post the complete console session.
Also, please post a complete console session for the following commands:

# freebsd-version -kru; uname -a

# zpool list m2pool

# zpool status m2pool

# zpool get all m2pool | sort

# mount | grep m2pool

# ls -ld / /u4
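The checklist above can be wrapped in a small script so the whole transcript comes back in one paste. A sketch (the pool name and mountpoint are the ones from this thread; substitute your own):

```shell
#!/bin/sh
# Sketch: run each diagnostic and label its output, continuing past
# failures so one missing tool doesn't stop the collection.
collect() {
    for cmd in "$@"; do
        printf '### %s\n' "$cmd"
        sh -c "$cmd" 2>&1
    done
    return 0
}

collect \
    'freebsd-version -kru; uname -a' \
    'zpool list m2pool' \
    'zpool status m2pool' \
    'zpool get all m2pool | sort' \
    'mount | grep m2pool' \
    'ls -ld / /u4'
```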


David


Edward Sanford Sutton, III

unread,
Jun 3, 2024, 6:31:39 PM
to ques...@freebsd.org
On 6/3/24 13:28, William Dudley wrote:
> The problem:
>
> FreeBSD 13.3 amd64 system, with
> a zfs pool built from two physical drives.

Mirrored or striped layout?

> The zfs pool has 7 "partitions" (is that what they're called?)
>
> I was copying files over from another machine and didn't realize that
> I filled one of the partitions.

What user? What command(s)?

> I can't proceed now with this one full partition.

If the pool is full, all datasets on it should be impacted instead of
just one.

> Every single command fails due to "out of space".
>
> That includes:
> rm (one file or many)
> dd if=/dev/zero of=(some file)
> truncate (somefile)
> zfs destroy poolname/partitionname
> cannot destroy 'poolname/partitionname': out of space

Tried as root? Non-root users are prevented from filling a partition completely. I
thought ZFS always keeps a certain amount free to avoid issues like
being unable to perform the copy-on-write needed to delete data.

> There are no snapshots, I never created any.
>
> Extensive googling has not shown any more than bug reports acknowledging
> that this is a problem.
>
> How do I fix this, short of burning the machine to the ground and
> starting over?
>
> Thanks,
> Bill Dudley
>
> This email is free of malware because I run Linux.

No system, Linux included, can guarantee that. I have malware-infested
Linux botnets reaching out to me every day, though 'usually' not by email.

William Dudley

unread,
Jun 3, 2024, 6:46:29 PM
to ques...@freebsd.org
see below


On Mon, Jun 3, 2024 at 6:31 PM Edward Sanford Sutton, III <mirr...@hotmail.com> wrote:
On 6/3/24 13:28, William Dudley wrote:
> The problem:
>
> FreeBSD 13.3 amd64 system, with
> a zfs pool built from two physical drives.

Mirrored or striped layout?

Striped 

> The zfs pool has 7 "partitions" (is that what they're called?)
>
> I was copying files over from another machine and didn't realize that
> I filled one of the partitions.

What user? What command(s)?

as root, by doing rsync from another machine's disk that is an NFS mount. 

> I can't proceed now with this one full partition.

If the pool is full, all datasets on it should be impacted instead of
just one.

> Every single command fails due to "out of space".
>
> That includes:
> rm (one file or many)
> dd if=/dev/zero of=(some file)
> truncate (somefile)
> zfs destroy poolname/partitionname
> cannot destroy 'poolname/partitionname': out of space

Tried as root? Users are limited from filling a partition fully. I
thought ZFS always forces a certain amount be free to avoid issues like
being unable to COW write to delete data.

All commands as root. 

> There are no snapshots, I never created any.
>
> Extensive googling has not shown any more than bug reports acknowledging
> that this is a problem.
>
> How do I fix this, short of burning the machine to the ground and
> starting over?
>
> Thanks,
> Bill Dudley
>
> This email is free of malware because I run Linux.

No system, Linux included, can guarantee that. I have Linux malware
infested botnets reaching out to me every day though 'usually' not by email.

I know, but it's "mostly" true, compared to people running Winders.

ANYWAY, this might be "solved", in the sense that I have a work around.
Paul Procacci emailed me a suggestion to try this:
sysctl -w vfs.zfs.spa.slop_shift=6
and if that doesn't work, try 7 or 8.  A setting of 7 allows me to delete files.
Not sure if this lets me fully clean up the mess, but so far, so good.
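For context on why this works: vfs.zfs.spa.slop_shift controls the "slop" reserve ZFS holds back, which is roughly pool size divided by 2^slop_shift (OpenZFS clamps it between fixed bounds, and the usual default shift is 5). Raising the shift shrinks the reserve, exposing held-back space so deletes can proceed. A quick sketch of the scaling, assuming a hypothetical 4 TiB pool:

```shell
# Slop reserve is roughly pool_size / 2^slop_shift. Hypothetical 4 TiB pool:
pool_bytes=$((4 * 1024 * 1024 * 1024 * 1024))
for shift in 5 6 7 8; do
    echo "slop_shift=$shift -> reserve ~$((pool_bytes / (1 << shift) / 1073741824)) GiB"
done
```

Once the cleanup is done, it is probably prudent to set the sysctl back to its previous value.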

Bill Dudley 

Mathias Mader

unread,
Jul 3, 2024, 11:58:01 AM
to ques...@freebsd.org
Hey there,

I've had the same problem a couple of months back.
But to my surprise, removing stuff with the `find` command still worked:

find . -name 'something' -exec rm {} \;

I'm not smart enough to tell why or how, but it did work.

Regards,
Mathias

Kevin P. Neal

unread,
Jul 3, 2024, 11:50:38 PM
to William Dudley, freebsd-questions
I'm not sure this was ever answered, so I'll chime in with advice for
the future.

On Mon, Jun 03, 2024 at 04:28:12PM -0400, William Dudley wrote:
> The problem:
> FreeBSD 13.3 amd64 system, with
> a zfs pool built from two physical drives.
> The zfs pool has 7 "partitions" (is that what they're called?)

They're called "datasets". ZFS can use an entire disk, or it can live in
a partition. The whole disk or partition is the container for the pool.
Inside the pool are datasets.

> I was copying files over from another machine and didn't realize that
> I filled one of the partitions.
> I can't proceed now with this one full partition.

Strictly speaking, the pool is full because of the size of the dataset.

The old-school advice for avoiding this is to set the property
refreservation=1G on the top dataset of the pool. I had heard that
this wasn't necessary anymore. But your report sounds like it actually
is still needed.

The reason this works is because it reserves 1G of space for that dataset
so if the pool is otherwise filled there will still be that 1G free. You
aren't supposed to use the top dataset for anything except child datasets.
By convention, at least.

Since ZFS is a copy-on-write filesystem you need free space to make any
changes, but if the pool is totally full you can't make any changes. This
leads to the absurd situation where the pool is so full you can't delete
anything. BUT, if you have that 1G of reserved space in the top dataset
then you'll always have free space and thus can always delete things or
whatever.
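The guard can be expressed as a short sketch. This is shown as a dry run that prints the commands rather than executing them (the pool name from this thread is used as an illustration); pipe the output to sh as root to actually apply it:

```shell
# Dry-run sketch: print the commands that reserve space on a pool's
# top dataset so the pool can never be filled completely.
guard_pool() {
    pool=$1
    reserve=${2:-1G}                       # 1G is the conventional amount
    echo "zfs set refreservation=$reserve $pool"
    echo "zfs get refreservation $pool"    # verify afterwards
}

guard_pool m2pool 1G
```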

> Every single command fails due to "out of space".

(Side note: It doesn't hurt to reserve that 1G of space unless you really,
really need that last little scrap of space. But at that point fragmentation
becomes a very serious issue and you'll have terrible performance, so,
really, don't let it get to that point.)
--
Kevin P. Neal http://www.pobox.com/~kpn/

"Good grief, I've just noticed I've typed in a rant. Sorry chaps!"
Keir Finlow Bates, circa 1998
