[NFS] Re: [PATCH] Large number of dentries and no_subtree_check

Neil Brown

Jul 15, 2001, 7:27:49 PM

On Saturday July 14, mar...@veritas.com wrote:
> Hi Neil,
>
> Could you check this out, and pass to Linus if you agree? Thanks.
>
> nfsfh.c:find_fh_dentry() creates a new disconnected dentry when a
> well-connected one doesn't exist. Fine.
> For a "no_subtree_check" export, a disconnected entry may be sufficient
> (if the inode isn't a directory), but find_fh_dentry() will create a new
> disconnected entry even when a disconnected one already exists. This can
> lead to a large number of dentries being created.
>
> Unreferenced disconnected dentries are removed via memory pressure calls
> to shrink_dcache_memory(), but until this happens the large number of
> entries affects performance.

This was true until 2.4.6-pre2. But at pre3 a patch went in that
changed it.
Previously anonymous dentries were hashed (though with no name, the
hash was pretty meaningless). This meant that they would hang around
after the last reference was dropped. This was actually fairly
pointless as they would never get referenced again, and caused a real
problem as umount wouldn't discard them and so you got the message
printk("VFS: Busy inodes after unmount. "
"Self-destruct in 5 seconds. Have a nice day...\n");

In 2.4.6-pre3 I stopped hashing those dentries so now when the last
reference is dropped, the dentry is freed. So now there will never be
more anonymous dentries than there are active nfsd threads.
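
To illustrate, here is a rough sketch (from memory, with a made-up helper
name; not the exact nfsfh.c code) of the anonymous-dentry path. The only
difference 2.4.6-pre3 makes is that the d_rehash() is gone:

#include <linux/fs.h>
#include <linux/dcache.h>

/* nfsd_get_anon_dentry() is a hypothetical name, for illustration only */
static struct dentry *nfsd_get_anon_dentry(struct inode *inode)
{
	struct dentry *result;

	/* the caller holds a reference on the inode; on success the new
	 * dentry takes that reference over */
	result = d_alloc_root(inode);		/* nameless, disconnected */
	if (result == NULL)
		return NULL;
	result->d_flags |= DCACHE_NFSD_DISCONNECTED;
#if 0
	/* pre-2.4.6-pre3 behaviour: hashing kept the dentry in the dcache
	 * after the last dput(), which is what umount tripped over */
	d_rehash(result);
#endif
	return result;		/* unhashed, so the final dput() frees it */
}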

Now, it may still be useful to re-use an anonymous dentry if one
exists, but I don't think that this is a valid reason any more.

I have in mind that it would be nice to cache recently used
"struct file"s instead of the individual read-ahead parameters in the
ra-cache. And then it would be nicer if we always found the same dentry
for the same file. But I'm not very keen on passing the "need_path"
down to the filesystems, though this seems to be needed.
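
As a sketch of the idea only (all of the names below are made up, nothing
like this exists), each ra-cache entry would pin an open file rather than
just the read-ahead parameters, and filp->f_dentry would then hand back
the same dentry for the same file:

#include <linux/fs.h>

/* hypothetical entry layout, for illustration only */
struct ra_file_cache_entry {
	struct ra_file_cache_entry	*next;
	kdev_t				dev;
	ino_t				ino;
	struct file			*filp;		/* held open between requests */
	unsigned long			last_used;	/* jiffies, for LRU replacement */
};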

So maybe the patch will be used, but probably not for the reason you
wrote it.

Thanks,

NeilBrown

_______________________________________________
NFS maillist - N...@lists.sourceforge.net
http://lists.sourceforge.net/lists/listinfo/nfs

Mark Hemment

Jul 16, 2001, 12:34:23 PM

On Mon, 16 Jul 2001, Neil Brown wrote:
> On Saturday July 14, mar...@veritas.com wrote:
> > nfsfh.c:find_fh_dentry() creates a new disconnected dentry when a
> > well-connected one doesn't exist. Fine.
> > For a "no_subtree_check" export, a disconnected entry may be sufficient
> > (if the inode isn't a directory), but find_fh_dentry() will create a new
> > disconnected entry even when a disconnected one already exists. This can
> > lead to a large number of dentries being created.

> This was true until 2.4.6-pre2. But at pre3 a patch went in that
> changed it.
> Previously anonymous dentries were hashed (though with no name, the
> hash was pretty meaningless). This meant that they would hang around
> after the last reference was dropped. This was actually fairly
> pointless as they would never get referenced again, and caused a real
> problem as umount wouldn't discard them and so you got the message
> printk("VFS: Busy inodes after unmount. "
> "Self-destruct in 5 seconds. Have a nice day...\n");
>
> In 2.4.6-pre3 I stopped hashing those dentries so now when the last
> reference is dropped, the dentry is freed. So now there will never be
> more anonymous dentries than there are active nfsd threads.

I did think about keeping the disconnected dentries unhashed, so they
would be quickly dropped, but the idea of getting into a state of
continuously creating/destroying dentries didn't seem good for
performance (though I didn't confirm that it would hurt SpecFS numbers).

One benefit of quickly dropping the disconnected ones, though, is that
it avoids carrying their "dead-weight" around after a fully-connected
entry has been created (say, via a readdirplus - I've seen this happen
quite a bit), rather than waiting for memory pressure to kick them out.
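
The sort of thing I have in mind, as an illustration only (this is not
the patch itself, and drop_disconnected_alias() is a made-up name):

#include <linux/fs.h>
#include <linux/dcache.h>

/* hypothetical helper: once a connected dentry exists for the inode,
 * unhash any leftover disconnected alias so that the final dput() frees
 * it, instead of waiting for shrink_dcache_memory() to get to it */
static void drop_disconnected_alias(struct inode *inode)
{
	struct dentry *alias = NULL;
	struct list_head *lp;

	spin_lock(&dcache_lock);
	list_for_each(lp, &inode->i_dentry) {
		struct dentry *d = list_entry(lp, struct dentry, d_alias);
		if (d->d_flags & DCACHE_NFSD_DISCONNECTED) {
			alias = dget_locked(d);
			break;
		}
	}
	spin_unlock(&dcache_lock);

	if (alias) {
		d_drop(alias);	/* take it out of the hash ... */
		dput(alias);	/* ... so the last reference frees it */
	}
}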

Wouldn't it be possible to add a call to shrink_dcache_sb() from
kill_super() after the call to shrink_dcache_parent()? I haven't checked
this out at all, but can do. If this does solve the "self-destruct"
problem, then I've got a patch which is slightly more proactive at
chucking out disconnected entries when fully-connected ones exist. Guess
I need to see how the 2.4.6 behaviour performs first.
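
For the kill_super() part, the change would be roughly this (sketch only,
not a real diff - the context lines are from memory of 2.4 fs/super.c):

--- fs/super.c	(sketch)
+++ fs/super.c	(sketch)
@@ kill_super() @@
 	shrink_dcache_parent(root);
+	shrink_dcache_sb(sb);	/* also reap leftover disconnected dentries */
 	dput(root);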

Thanks,
Mark
