Directory index full?


Robert Citek

May 25, 2010, 4:45:11 PM
to CWE-LUG
Anyone ever come across a scenario where the directory was so full you
couldn't add any more files?

Came across this recently with a warning showing up in
/var/log/kern.log: "Directory index full!" I was able to reproduce
this error with a loopback filesystem by creating a 1 GB file,
formatting it as an ext3 filesystem with a 1024-byte block size, and
mounting it via the loopback device. Here are the commands:

mkdir -p /tmp/test/loop
cd /tmp/test
dd if=/dev/zero bs=1M count=1000 of=ext3.img
mkfs.ext3 -F -b 1024 -i 2048 ext3.img
mount -o loop ext3.img loop
mkdir loop/tmp
cd loop/tmp
nice -n 20 seq 1 500000 | nice -n 20 xargs touch >& /dev/null
ls -U | wc -l
date
grep warn /var/log/kern.log | tail -2
df -i .
df -h .

Here is the output from the last five commands:

+ ls -U
+ wc -l
498992
+ date
Mon May 24 13:34:03 EDT 2010
+ grep warn /var/log/kern.log
+ tail -2
May 24 13:34:02 lucid kernel: [1217415.426398] EXT3-fs warning (device
loop0): ext3_dx_add_entry: Directory index full!
May 24 13:34:02 lucid kernel: [1217415.426964] EXT3-fs warning (device
loop0): ext3_dx_add_entry: Directory index full!
+ df -i .
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/loop0 512000 499004 12996 98% /tmp/test/loop
+ df -h .
Filesystem Size Used Avail Use% Mounted on
/dev/loop0 938M 30M 858M 4% /tmp/test/loop


Notice that the number of files in the folder (498992) is less than
the number specified in the command (500000), indicating not all the
files were created. Also, notice that the log entries reference the
loopback device (loop0) and have a timestamp (13:34:02) close to the
time that the touch command ended (13:34:03). Lastly, notice that
there are still free inodes and plenty of disk space.

Anyone ever seen anything like this?

Regards,
- Robert

Mike B.

May 25, 2010, 5:47:39 PM
to cwe...@googlegroups.com
No. I haven't, but that might be a good post for Ubuntuforums. I
wonder if the same can happen with ext4?

> --
> Central West End Linux Users Group (via Google Groups)
> Main page: http://www.cwelug.org
> To post: cwe...@googlegroups.com
> To subscribe: cwelug-s...@googlegroups.com
> To unsubscribe: cwelug-un...@googlegroups.com
> More options: http://groups.google.com/group/cwelug

Mike B.

May 25, 2010, 5:50:40 PM
to cwe...@googlegroups.com
Hey. Wait. Isn't ext4 the default for lucid? Did you downgrade to
ext3? Or is /var/ on a full partition?

Robert Citek

May 25, 2010, 6:16:05 PM
to cwe...@googlegroups.com
The original system was running 8.04 LTS. To create a model of it on
my Lucid laptop, I used a loopback with ext3. Have not tried it, yet,
with ext4.

Regards,
- Robert

Mike B.

May 26, 2010, 4:42:01 PM
to cwe...@googlegroups.com
gotcha. I wonder if the file system itself is corrupt?

David Dooling

May 26, 2010, 4:59:30 PM
to cwe...@googlegroups.com
On Wed, May 26, 2010 at 03:42:01PM -0500, Mike B. wrote:
> gotcha. I wonder if the file system itself is corrupt?

Looks like it is just a limit of ext3.

http://kerneltrap.org/mailarchive/linux-kernel/2008/5/18/1861684

Looks like the workaround is to use a bigger block size.
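
A minimal sketch of the bigger-block-size workaround (illustrative image name and size; assumes e2fsprogs is installed). Larger blocks give the ext3 htree directory index more room per level:

```shell
# Rebuild the loopback image with 4096-byte blocks instead of 1024-byte ones.
dd if=/dev/zero bs=1M count=64 of=ext3-4k.img 2>/dev/null
mkfs.ext3 -F -q -b 4096 ext3-4k.img
# Confirm the block size the filesystem was built with.
tune2fs -l ext3-4k.img | grep -i 'block size'
```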

--
David Dooling
http://www.politigenomics.com/

Don Ellis

May 26, 2010, 6:54:02 PM
to cwe...@googlegroups.com
This reminds me of way back in the early days when we started getting larger drives with Mac HFS, before HFS+ became available. We could see block sizes up to 32k on a large enough drive, maybe a gig or so.

Imagine creating a file with five characters in it that consumed 32 KB of storage! You would hit the same limit on the number of files you could create before filling up the drive, yet most of the space on the drive could be empty.

--Don Ellis


On Wed, May 26, 2010 at 3:59 PM, David Dooling <ba...@users.sourceforge.net> wrote:
On Wed, May 26, 2010 at 03:42:01PM -0500, Mike B. wrote:
> gotcha.  I wonder if the file system itself is corrupt?

Looks like it is just a limit of ext3.

http://kerneltrap.org/mailarchive/linux-kernel/2008/5/18/1861684

Looks like the workaround is to use a bigger block size.

> On Tue, May 25, 2010 at 5:16 PM, Robert Citek <robert...@gmail.com> wrote:
> > The original system was running 8.04 LTS.  To create a model of it on
> > my Lucid laptop, I used a loopback with ext3.  Have not tried it, yet,
> > with ext4.
> >
> > Regards,
> > - Robert

... 

Robert Citek

May 26, 2010, 6:55:39 PM
to cwe...@googlegroups.com
Thanks, David.

Unfortunately, it's not limited to ext3. Happens in ext4 as well.
The commands:

mkdir -p /tmp/test/loop
cd /tmp/test

dd if=/dev/zero bs=1M count=1000 of=ext4.img
mkfs.ext4 -F -b 1024 -i 2048 ext4.img
mount -o loop ext4.img loop


mkdir loop/tmp
cd loop/tmp
nice -n 20 seq 1 500000 | nice -n 20 xargs touch >& /dev/null
ls -U | wc -l
date
grep warn /var/log/kern.log | tail -2
df -i .
df -h .

The output from the last five commands:

+ ls
+ wc -l
499310
+ date
Wed May 26 17:53:41 EDT 2010


+ grep warn /var/log/kern.log
+ tail -2

May 26 17:53:41 mnx-lucid kernel: [1405455.069927] EXT4-fs warning
(device loop0): ext4_dx_add_entry: Directory index full!
May 26 17:53:41 mnx-lucid kernel: [1405455.086388] EXT4-fs warning
(device loop0): ext4_dx_add_entry: Directory index full!


+ df -i .
Filesystem Inodes IUsed IFree IUse% Mounted on

/dev/loop0 512000 499322 12678 98% /tmp/test/loop


+ df -h .
Filesystem Size Used Avail Use% Mounted on

/dev/loop0 875M 30M 796M 4% /tmp/test/loop

Yes, a solution is to have larger block sizes. However, the file
system that is exhibiting this issue already has 4096-byte blocks.
And Linux can't mount a filesystem via the loopback with a block size
larger than 4096 bytes:

# mkfs.ext3 -F -b 8192 -N 1000000 fs.img

# tune2fs -l fs.img | grep -i 'block size'
Block size: 8192

# mount -o loop fs.img loop
mount: wrong fs type, bad option, bad superblock on /dev/loop0,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so

# dmesg | tail -1
[1408478.274638] EXT3-fs: bad blocksize 8192.
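
That 4096 ceiling comes from the CPU page size: the kernel won't mount a filesystem whose block size exceeds the page size, which is 4096 bytes on x86/x86_64. A quick check of your own limit:

```shell
# The largest mountable block size equals the VM page size
# (typically 4096 on x86/x86_64; larger on some ARM configurations).
getconf PAGE_SIZE
```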

The quick solution seems to be to create one or more subdirectories
and move some of the files into them.

Regards,
- Robert

Robert Citek

May 26, 2010, 8:19:42 PM
to cwe...@googlegroups.com
Apparently, the size of the filesystem is a factor, too. I doubled the
size of the filesystem to 2 GB and had to double the number of files
to trigger the error. I would have expected the limit on the directory
size to remain constant.

+ ls -U
+ wc -l

900545
+ date
Wed May 26 19:48:00 EDT 2010


+ grep warn /var/log/kern.log
+ tail -2

May 26 19:47:59 mnx-lucid kernel: [1412301.524937] EXT3-fs warning
(device loop0): ext3_dx_add_entry: Directory index full!
May 26 19:47:59 mnx-lucid kernel: [1412301.525175] EXT3-fs warning
(device loop0): ext3_dx_add_entry: Directory index full!


+ df -i .
Filesystem Inodes IUsed IFree IUse% Mounted on

/dev/loop0 1024000 900557 123443 88% /tmp/test/loop


+ df -h .
Filesystem Size Used Avail Use% Mounted on

/dev/loop0 1.8G 51M 1.6G 4% /tmp/test/loop

Interesting, although expanding the filesystem is not an option for us, either.

Regards,
- Robert

On Wed, May 26, 2010 at 6:55 PM, Robert Citek <robert...@gmail.com> wrote:
> Yes, a solution is to have larger block sizes.
>

Robert Citek

May 28, 2010, 1:43:10 PM
to cwe...@googlegroups.com
Ran an analogous process using reiserfs:

mkdir -p /tmp/test/loop
cd /tmp/test

dd if=/dev/zero bs=1M count=1000 of=fs.img
yes | mkfs.reiserfs -f fs.img
sudo mount -o loop fs.img loop
mkdir loop/tmp
cd loop/tmp
seq -f%07g 1 10000000 | xargs touch >& /dev/null


ls -U | wc -l
date

tail -3 /var/log/kern.log


df -i .
df -h .

Here's the output from the last five commands:

+ ls -U
+ wc -l

1900000
+ date
Fri May 28 13:32:25 EDT 2010
+ tail -3 /var/log/kern.log
May 28 13:28:39 mnx-lucid kernel: [1562071.373264] REISERFS (device
loop0): checking transaction log (loop0)
May 28 13:28:39 mnx-lucid kernel: [1562071.394933] REISERFS (device
loop0): Using r5 hash to sort names
May 28 13:28:39 mnx-lucid kernel: [1562071.394971] REISERFS (device
loop0): Created .reiserfs_priv - reserved for xattr storage.


+ df -i .
Filesystem Inodes IUsed IFree IUse% Mounted on

/dev/loop0 0 0 0 - /tmp/test/loop


+ df -h .
Filesystem Size Used Avail Use% Mounted on

/dev/loop0 1000M 230M 771M 23% /tmp/test/loop

What's odd is that even though I specified 10 million files, it only
made 1.9 million. I don't know what's up with the inode reporting,
but there is still plenty of space and there is no error in the kernel
logs.

Anyone got a recommendation for a filesystem that can hold 10+ million
files in a single directory?

Regards,
- Robert

David Dooling

May 28, 2010, 11:51:52 PM
to cwe...@googlegroups.com
On Fri, May 28, 2010 at 01:43:10PM -0400, Robert Citek wrote:
> Anyone got a recommendation for a filesystem that can hold 10+ million
> files in a single directory?

Don't do that. Seriously, don't do that. Even if it works, it won't
be pretty, or usable. Hash your file names and put them in two or
three levels of subdirectories based on the first characters of the
hash.
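
A sketch of that hashing scheme (illustrative only: md5sum as the hash and two 2-hex-digit levels, giving 65,536 leaf directories):

```shell
# Store each file under ab/cd/filename, where "abcd" is the first four
# hex digits of an MD5 of the file's name.
name="example-file-0001234"
h=$(printf '%s' "$name" | md5sum)
d1=$(printf '%s' "$h" | cut -c1-2)   # first-level bucket (256 possible)
d2=$(printf '%s' "$h" | cut -c3-4)   # second-level bucket (256 possible)
mkdir -p "$d1/$d2"
touch "$d1/$d2/$name"
```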

> On Wed, May 26, 2010 at 8:19 PM, Robert Citek <robert...@gmail.com> wrote:
> > Apparently, the size the filesystem is a factor, too.  I doubled the
> > size of the filesystem to 2 GB and I had to double the number of files
> > to get the error.  I would have expected the limit on the directory
> > size to remain constant.

--
David Dooling
http://www.politigenomics.com/

Robert Citek

May 29, 2010, 2:22:05 AM
to cwe...@googlegroups.com
On Fri, May 28, 2010 at 11:51 PM, David Dooling
<ba...@users.sourceforge.net> wrote:
> On Fri, May 28, 2010 at 01:43:10PM -0400, Robert Citek wrote:
>> Anyone got a recommendation for a filesystem that can hold 10+ million
>> files in a single directory?
>
> Don't do that.  Seriously, don't do that.  Even if it works, it won't
> be pretty, or usable.  Hash your file names and put them in two or
> three levels of subdirectories based on the first characters of the
> hash.

I agree. Having that many files is a huge resource drain when working
with them, including rsyncing to a backup server.

However, using an alternative filesystem would be a quick-and-dirty
stop-gap until this client can find the resources to implement a
workable solution, such as creating tar archives, splitting the files
into subdirectories, using a database (my second choice), or even
deleting the files (my first choice), many of which they suspect are
unneeded.
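
A toy sketch of the tar-archive option (hypothetical file and batch names):

```shell
# Roll a batch of files into one compressed archive and delete the
# originals, shrinking the directory's entry count by the batch size.
touch file001 file002 file003
tar -czf batch-000.tar.gz file001 file002 file003
rm file001 file002 file003
tar -tzf batch-000.tar.gz    # lists the three archived names
```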

Regards,
- Robert
