Small files consume 128MB after appending data with ">>"


cern...@gmail.com

Jun 28, 2014, 12:30:24 AM
to fhgfs...@googlegroups.com
Hi,

I am observing some strange behavior on a host running fhgfs 2014.01-r5.  If I create a small file and then append enough data to make it at least 4608 bytes long, the file consumes 262144 512-byte blocks (128MB):

$ dd if=/dev/zero of=testfile bs=1 count=1
1+0 records in
1+0 records out
1 byte (1 B) copied, 0.000867362 s, 1.2 kB/s
$ stat -c "%n %s %b" testfile
testfile 1 0
$ dd if=/dev/zero bs=4607 count=1 >> testfile
1+0 records in
1+0 records out
4607 bytes (4.6 kB) copied, 0.00255441 s, 1.8 MB/s
$ stat -c "%n %s %b" testfile
testfile 4608 262144
$ cat testfile > /dev/null
$ stat -c "%n %s %b" testfile
testfile 4608 9

The file's 128MB block allocation is reflected in "df", and when there are many of these files, they quickly fill up a large disk.

After at least one byte is read from the "overweight" file, its block allocation shrinks back to normal.  But if the file is never touched, it just occupies 128MB indefinitely.
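In case it helps anyone hitting the same thing: a rough way to survey a tree for these "overweight" files is to compare each file's apparent size (stat's %s) against its allocated space (%b, counted in 512-byte blocks). This is only a sketch, assuming GNU find/awk, and the 1 MiB slack threshold is arbitrary:

```shell
# List files whose on-disk allocation exceeds their apparent size by
# more than 1 MiB.  Note: breaks on paths containing whitespace; fine
# for a quick survey of build output directories.
find "${1:-.}" -type f -printf '%s %b %p\n' | awk '
    { alloc = $2 * 512 }                 # %b is in 512-byte blocks
    alloc > $1 + 1048576 { print $3, "size=" $1, "allocated=" alloc }'
```

Normally-allocated small files fall well inside the slack, so only files carrying leftover preallocation are printed.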

If the resulting file is under 4608 bytes, this behavior is not seen:

$ dd if=/dev/zero of=testfile bs=1 count=1
1+0 records in
1+0 records out
1 byte (1 B) copied, 0.00124018 s, 0.8 kB/s
$ stat -c "%n %s %b" testfile
testfile 1 0
$ dd if=/dev/zero bs=4606 count=1 >> testfile
1+0 records in
1+0 records out
4606 bytes (4.6 kB) copied, 0.00233725 s, 2.0 MB/s
$ stat -c "%n %s %b" testfile
testfile 4607 8
$ cat testfile > /dev/null
$ stat -c "%n %s %b" testfile
testfile 4607 8

This is 100% reproducible.  FWIW the problem was originally encountered with the out/.../*.P dependency files when building AOSP (Android).

Any ideas?

Bernd Schubert

Jun 30, 2014, 8:34:02 AM
to fhgfs...@googlegroups.com
Well, it is not reproducible on my test system. However, may I assume
you are using XFS as the underlying file system? Which kernel version?
Did you mount with "allocsize=131072k"? I remember this issue with XFS
and older kernel versions... You probably need to tune allocsize down to
a very small value or use another underlying file system (e.g. ext4).
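For reference, something along these lines should show whether a large allocsize is in effect on a storage target and let you lower it in place, since allocsize is remount-safe (the mount point /data/fhgfs1 below is only a placeholder):

```
# Show current XFS mount options (look for allocsize=...):
grep xfs /proc/mounts

# allocsize can be lowered without unmounting:
mount -o remount,allocsize=64k /data/fhgfs1
```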


Best regards,
Bernd

cern...@gmail.com

Jul 1, 2014, 5:38:21 PM
to fhgfs...@googlegroups.com, bernd.s...@itwm.fraunhofer.de
On Monday, June 30, 2014 5:34:02 AM UTC-7, Bernd Schubert wrote:
Well, it is not reproducible on my test system. However, may I assume
you are using XFS as the underlying file system? Which kernel version?
Did you mount with "allocsize=131072k"? I remember this issue with XFS
and older kernel versions... You probably need to tune allocsize down to
a very small value or use another underlying file system (e.g. ext4).

After checking with the administrator, he confirmed that the file server was indeed set up to use XFS with "allocsize=131072k", per the instructions at [1].  He changed it back to the default 64k and the problem went away.
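For anyone else landing on this thread: the persistent form of that fix is shrinking (or dropping) the allocsize option in the storage target's fstab entry. Device, mount point, and the other options below are placeholders:

```
# /etc/fstab -- before:
/dev/sdb1  /data/fhgfs1  xfs  noatime,allocsize=131072k  0 0
# after:
/dev/sdb1  /data/fhgfs1  xfs  noatime,allocsize=64k      0 0
```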

Thanks for your help.


Frank Kautz

Jul 2, 2014, 4:32:50 AM
to fhgfs...@googlegroups.com
Hello,

Can you send us the kernel version used? Then we will know whether this
issue still happens with newer kernels.

kind regards,
Frank


cern...@gmail.com

Jul 2, 2014, 10:54:16 AM
to fhgfs...@googlegroups.com, frank...@itwm.fraunhofer.de
On Wednesday, July 2, 2014 1:32:50 AM UTC-7, Frank Kautz wrote:
Hello,

Can you send us the kernel version used? Then we will know whether this
issue still happens with newer kernels.

The file server is running RHEL6u4 with its stock 2.6.32 kernel.

The client is running Ubuntu 12.04.1 with 3.2.0-31-generic.