But, "df -o i /xxx" returns "df: operation not applicable for FSType
zfs" if /xxx is a zfs filesystem.
Is there a simple command that will tell me how many inodes (or I
guess znodes for ZFS) are in use on a ZFS filesystem, like there is
for UFS? (I am not interested in the maximum number of inodes, which
is fixed at creation time for UFS but dynamically allocated for ZFS.)
I just want a quick way to count the number of files in the
filesystem.
Thanks,
Doug
I guess you could use find and wc before somebody else provides a
quicker way.
Victor
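A minimal sketch of that approach, wrapped in a helper for convenience. Note that this counts directories and symlinks as well as plain files, and -xdev keeps find from descending into other mounted filesystems:

```shell
# Count every directory entry under a mount point without crossing
# filesystem boundaries. Counts directories and symlinks too,
# not just plain files.
count_entries() {
  find "$1" -xdev | wc -l
}

# e.g.  count_entries /xxx    (the OP's example mount point)
```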
I'm no zdb expert but the information is extractable.
For example, for my testpool/test1 file system:
# zdb -dv testpool/test1
Dataset testpool/test1 [ZPL], ID 27, cr_txg 50682, 25.5K, 7 objects
    Object  lvl   iblk   dblk  lsize  asize  type
         0    7    16K    16K    16K  21.0K  DMU dnode
         1    1    16K    512    512     1K  ZFS master node
         2    1    16K    512    512     1K  ZFS delete queue
         3    1    16K    512    512     1K  ZFS directory
         4    1    16K    512    512      0  ZFS plain file
         5    1    16K    512    512      0  ZFS plain file
         6    1    16K    512    512      0  ZFS plain file
If I subtract the first 4 objects, I have 3 files. On a file system
with more files, you could do this and get a ballpark count of the
number of files:
# zdb -d testpool/test1
Dataset testpool/test1 [ZPL], ID 27, cr_txg 50682, 25.5K, 7 objects
Maybe someone has an easier way but counting files seems less
important when you don't have to worry about running out of inodes,
IMHO.
Cindy
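A hedged sketch of that subtraction as a one-liner. zdb's output is an internal, unstable format, so the comma-separated header layout is an assumption based on the example above, and subtracting 4 assumes exactly the four metadata objects shown there (DMU dnode, master node, delete queue, root directory):

```shell
# Hypothetical helper: pull the object total out of the "zdb -d"
# header line and subtract the four metadata objects listed above.
count_from_zdb_header() {
  awk -F', ' '/objects/ { sub(/ objects/, "", $NF); print $NF - 4 }'
}

# In real use:  zdb -d testpool/test1 | count_from_zdb_header
# With the header line from the example above:
echo 'Dataset testpool/test1 [ZPL], ID 27, cr_txg 50682, 25.5K, 7 objects' \
  | count_from_zdb_header
```

For the example dataset this prints 3, matching the three plain files.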
See man ls for the inode and recursion options (-iR1). Filter,
accumulate, and print the output with a pipe; I like using awk for
the second part.
> I just want a quick way to count the number of files in the
> filesystem.
Just curious: What are you going to do with that number, once you've
somehow got it?
Alexander
df -g <filesys>
Then subtract <available> from <total files>:
LC_ALL=C df -g /export | awk '/total files/ { print $9-$7}'
Results depend on how stable the output of "df -g" is. Parameter columns
might shift around.
--
Daniel
> Doug <dy2...@gmail.com> wrote:
>> I just want a quick way to count the number of files in the
>> filesystem.
>
> df -g <filesys>
>
> Then subtract <available> from <total files>:
>
> LC_ALL=C df -g /export | awk '/total files/ { print $9-$7}'
Hm.
--($:~/public_html)-- sudo find /opt/apps/Gentoo/HomeSmall/rootfs/benutzen | wc -l
Password:
40038
--(askwar@winds06)-(16/pts/12)-(16:02:16/2007-11-02)--
--($:~/public_html)-- LC_ALL=C /usr/bin/df -g /opt/apps/Gentoo/HomeSmall/rootfs/benutzen | awk '/total files/ { print $9-$7}'
37211
There's a difference of ~3k "entries". This filesystem is basically not
used right now (i.e., no, it is not the case that somebody deleted 3k files).
Ideas about what's missing from the df count? Or what's duplicated in
the find output?
Alexander
The 'find' program will find not only regular files, but directories and
links too - there are options you can add to include/exclude various
things.
use
find ... -xdev -exec ls -id {} + | cut -c-11 | sort -u | wc -l
to avoid counting hardlinked inodes several times.
> Ideas about what's missing from the df count? Or what's duplicated in
> the find output?
3000 hardlinks is a bit much, but some version control systems like to
use them.
--
Kjetil T.
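A variant of Kjetil's pipeline that takes the first whitespace-delimited field with awk instead of "cut -c-11", which silently miscounts if an inode number is ever wider than 11 characters:

```shell
# Count distinct inodes under a directory, so hardlinked files are
# counted once. awk '{ print $1 }' replaces the fixed-width cut.
count_inodes() {
  find "$1" -xdev -exec ls -id {} + | awk '{ print $1 }' | sort -u | wc -l
}

# Demo: two files plus a hardlink count as 3 inodes
# (the directory itself and two distinct files).
dir=$(mktemp -d)
touch "$dir/a" "$dir/b"
ln "$dir/a" "$dir/a-link"
count_inodes "$dir"
rm -rf "$dir"
```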
Then I just ran "df -g":
...
/export/zfs/glee3  (uu/home/glee3  ):    131072 block size     512 frag size
 1840753152 total blocks   933502876 free blocks   933502876 available   933502886 total files
  933502876 free files    47513604 filesys id     zfs fstype  0x00000004 flag   255 filename length
/export/zfs/grs    (uu/home/grs    ):    131072 block size     512 frag size
 1840753152 total blocks   933502876 free blocks   933502876 available   933618641 total files
  933502876 free files    47513605 filesys id     zfs fstype  0x00000004 flag   255 filename length
...
Needless to say, there are *not* 933502886 files in either of those
directories.
Cheers,
Gary B-)
--
______________________________________________________________________________
Armful of chairs: Something some people would not know
whether you were up them with or not
- Barry Humphries
Well, Daniel said to subtract the "free files" value from the "total
files" value, which results in 933502886-933502876=10 files for
/export/zfs/glee3 and 933618641-933502876=115765 files for
/export/zfs/grs.
By observation, the count seems to include the current and parent
directory (the "." and "..") files contained in each directory.
This is actually pretty much what I was looking for, so I thank Daniel
for the hint.
Using "find . -type f ..." does not work so well when the file system
has lots of files, since it takes a long time to calculate and in the
process causes a high CPU and I/O load, especially when directories
contain lots (thousands, even millions) of files (although ZFS might
handle that better than UFS).
I wanted to know how many files so I can monitor the progress of
programs that create and store lots of files as well as create reports
describing things like, the number of files in home directories for
the month, etc.
Thanks again,
Doug
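One hypothetical way to do that monitoring with the df -g numbers. The mount point, interval, and iteration count are illustrative assumptions, and df -g itself is Solaris-specific; the awk field positions assume the layout quoted earlier in the thread:

```shell
# Sketch of a progress monitor: poll the used-file count while a batch
# job fills the filesystem.  Usage: monitor_files <mountpoint> <secs> <iters>
monitor_files() {
  i=0
  while [ "$i" -lt "$3" ]; do
    used=$(LC_ALL=C df -g "$1" \
      | awk '/total files/ { t=$9 } /free files/ { f=$1 } END { print t-f }')
    printf '%s used files: %s\n' "$(date +%T)" "$used"
    sleep "$2"
    i=$((i + 1))
  done
}

# monitor_files /export/zfs/glee3 60 10
```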
Small correction.
Subtract <free files> from <total files> in the output of df -g:
LC_ALL=C df -g /export | \
awk '/total files/ { t=$9 } /free files/ {f=$1} END { print t-f }'
But for ZFS this shouldn't be an issue.
<free files> == <available> should be the case with ZFS.
> 933618641 total files
> 933502876 free files 47513605 filesys id
> zfs fstype 0x00000004 flag 255 filename length
> ...
>
> Needless to say, there are *not* 933502886 files in either of those
> directories.
Who says so? First, we are not talking about directories, but filesystems.
Second, total files doesn't mean used files.
<used files> = <total files> - <free files>.
So the filesystem above should contain 115765 files.
--
Daniel
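Daniel's corrected pipeline, fed a reconstructed sample shaped like Gary's "df -g" output above. The sample text is reconstructed for illustration, not captured, and the field positions ($9 on the "total files" line, $1 on the "free files" line) are an assumption about the Solaris layout:

```shell
# Reconstructed sample in the Solaris "df -g" shape quoted above.
sample='/export/zfs/grs (uu/home/grs ):    131072 block size    512 frag size
 1840753152 total blocks  933502876 free blocks  933502876 available  933618641 total files
  933502876 free files    47513605 filesys id    zfs fstype  0x00000004 flag   255 filename length'

# Used files = total files - free files.
printf '%s\n' "$sample" \
  | awk '/total files/ { t=$9 } /free files/ { f=$1 } END { print t-f }'
```

This prints 115765, matching the count computed for /export/zfs/grs in the thread.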
> First, we are not talking about directories, but filesystems.
> Second, total files doesn't mean used files.
> <used files> = <total files> - <free files>:
>
> So the filesystem above should contain 115765 files.
>
And it does.
Nice tool!
> The 'find' program will find not only regular files, but directories and
> links too - there are options you can add to include/exclude various
> things.
zdb only lists regular files? Not directories and "special" files?
Alexander
Doesn't matter--you're not supposed to run zdb--it's only for Sun's elite
pack of engineers, not for mere mortal sysadmins.
From the man page:
Since the ZFS file system is
always consistent on disk and is self-repairing, zdb should
only be run under the direction by a support engineer.
and...
Any options supported by this command are internal to Sun
and subject to change at any time.
When exactly did Solaris quit treating sysadmins like professionals?
Pathetic.
Colin