Hi,
I am observing some strange behavior on a host running fhgfs 2014.01-r5. If I create a small file and then append enough data to bring it to at least 4608 bytes, the file consumes 262144 512-byte blocks (128 MiB):
$ dd if=/dev/zero of=testfile bs=1 count=1
1+0 records in
1+0 records out
1 byte (1 B) copied, 0.000867362 s, 1.2 kB/s
$ stat -c "%n %s %b" testfile
testfile 1 0
$ dd if=/dev/zero bs=4607 count=1 >> testfile
1+0 records in
1+0 records out
4607 bytes (4.6 kB) copied, 0.00255441 s, 1.8 MB/s
$ stat -c "%n %s %b" testfile
testfile 4608 262144
$ cat testfile > /dev/null
$ stat -c "%n %s %b" testfile
testfile 4608 9
The file's 128 MiB allocation is reflected in "df", and with many such files even a large disk fills up quickly.
Reading even a single byte from the "overweight" file shrinks its block allocation back to normal, but if the file is never touched, it occupies 128 MiB indefinitely.
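As a stopgap, the read-triggered shrink suggests a cleanup pass. Here is a sketch (assuming GNU find and awk; the starting path, the 1 MiB threshold, and the whitespace-free filenames are all assumptions) that lists files whose on-disk allocation greatly exceeds their apparent size and reads one byte from each:

```shell
# Workaround sketch: %s is apparent size in bytes, %b is allocated
# 512-byte blocks (GNU find).  Flag files whose allocation exceeds
# their size by more than 1 MiB (arbitrary threshold), then read one
# byte from each to trigger the shrink.  Filenames containing
# whitespace are not handled by this simple awk split.
find . -type f -printf '%s %b %p\n' |
  awk '$2 * 512 - $1 > 1048576 { print $3 }' |
  while IFS= read -r f; do
    head -c 1 "$f" > /dev/null
  done
```

On a healthy filesystem the pipeline simply matches nothing, so it is safe to run broadly.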
If the resulting file is under 4608 bytes, the behavior does not occur:
$ dd if=/dev/zero of=testfile bs=1 count=1
1+0 records in
1+0 records out
1 byte (1 B) copied, 0.00124018 s, 0.8 kB/s
$ stat -c "%n %s %b" testfile
testfile 1 0
$ dd if=/dev/zero bs=4606 count=1 >> testfile
1+0 records in
1+0 records out
4606 bytes (4.6 kB) copied, 0.00233725 s, 2.0 MB/s
$ stat -c "%n %s %b" testfile
testfile 4607 8
$ cat testfile > /dev/null
$ stat -c "%n %s %b" testfile
testfile 4607 8
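For reference, the first transcript condenses into this reproduction script (a sketch; run it on an fhgfs mount, where the first stat shows 262144 blocks allocated; on a local filesystem the block count stays small throughout):

```shell
#!/bin/sh
# Reproduce the overweight-file behavior described above.
set -e
f=testfile
dd if=/dev/zero of="$f" bs=1 count=1 2>/dev/null     # create a 1-byte file
dd if=/dev/zero bs=4607 count=1 >> "$f" 2>/dev/null  # append to reach 4608 bytes
stat -c "%n %s %b" "$f"   # on fhgfs: 262144 blocks allocated
cat "$f" > /dev/null      # reading any byte ...
stat -c "%n %s %b" "$f"   # ... shrinks the allocation back
rm -f "$f"
```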
This is 100% reproducible. FWIW the problem was originally encountered with the out/.../*.P dependency files when building AOSP (Android).
Any ideas?