
locate, updatedb and NFS mounted filesystems


Rahul
Jul 27, 2010, 7:47:44 PM
Our /home dir is mounted via NFS from another server. Whenever I try to run
"locate" to search for a file it doesn't find it. Of course, if I log in to
the server where the data physically resides, then locate does find the
data.

How can I get locate on this machine to also find the data that resides on
the other machine? Either I have to get locate to read the other machine's
database, or I have to somehow ask updatedb on the local machine to also
index the remote NFS files. Not sure which is the better approach, nor how
to do either.

I tried digging around in the manpages but can't figure out how locate
decides what its default database is. $LOCATE_PATH seems empty.

I do see that in my cron.daily there's a script:

cat mlocate.cron
#!/bin/sh
nodevs=$(< /proc/filesystems awk '$1 == "nodev" { print $2 }')
renice +19 -p $$ >/dev/null 2>&1
/usr/bin/updatedb -f "$nodevs"

But even here I'm not sure where it is storing the default database.

Any tips?

--
Rahul

J G Miller
Jul 27, 2010, 8:37:27 PM
On Tuesday, 27 July, 2010 at 23:47:44h +0000, Rahul complained:

>
> Our /home dir is mounted via NFS from another server. Whenever I try to
> run "locate" to search for a file it doesn't find it.

Because the contents of /home have not been added to the locate database,
most probably because in the config file for updatedb, which is usually
found at /etc/updatedb.conf, there is an entry with

PRUNEFS=" ... nfs nfs4 ..."
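
A quick way to check for that entry (a minimal sketch; /etc/updatedb.conf is the usual mlocate config location, but yours may differ):

```shell
# nfs_pruned CONF: exit 0 if the PRUNEFS line in CONF mentions an
# nfs* filesystem type, meaning updatedb will skip NFS mounts.
nfs_pruned() {
    grep -Eq '^[[:space:]]*PRUNEFS[[:space:]]*=.*nfs' "$1" 2>/dev/null
}

# Typical use:
if nfs_pruned /etc/updatedb.conf; then
    echo "updatedb is configured to skip NFS mounts"
fi
```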

Rahul
Jul 27, 2010, 8:47:06 PM
J G Miller <mil...@yoyo.ORG> wrote in news:i2nu47$ec7$5...@news.eternal-
september.org:

> Because the contents of /home have not been added to the locate database,
> most probably because in the config file for updatedb, which is usually
> found at /etc/updatedb.conf, there is an entry with
>
> PRUNEFS=" ... nfs nfs4 ..."

Thanks for the pointer! Doesn't seem to be the case:

PRUNEFS = "auto afs gfs gfs2 iso9660 sfs udf"
PRUNEPATHS = "/afs /media /net /sfs /tmp /udev /var/spool/cups
/var/spool/squid /var/tmp"

--
Rahul

J G Miller
Jul 28, 2010, 10:54:54 AM
On Wed, 28 Jul 2010 00:47:06 +0000, Rahul wrote:
>
> Thanks for the pointer! Doesn't seem to be the case:

Why do you say "seem"? If it is not there, it is not there! ;)

Assuming you are using mlocate, then go to

/var/lib/mlocate

and examine the file mlocate.db with view (vi{m} read only).

The file is binary but its format is custom to mlocate so
there are no tools other than locate for looking at its content.

Use the search facility :/\/home\/ to see if there are
any entries for your NFS mounted file system.

If there are no entries present then updatedb has not
examined the contents of the file system, which may be
due to user mapping/permission problems.

If there are entries present, then the problem is you
are using locate as a user with insufficient privileges
to locate the file under /home.
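
If paging through the binary in vim is awkward, a crude string scan can answer the same question, since mlocate stores each directory's full pathname in the database (a sketch, not a parser for the mlocate format; the database path is the usual mlocate default):

```shell
# db_has_prefix DB PREFIX: flatten non-printable bytes to newlines
# and look for pathnames starting with PREFIX.  Crude, but enough to
# tell whether any /home entries made it into the database.
db_has_prefix() {
    tr -c '[:print:]' '\n' < "$1" | grep -q "^$2"
}

db_has_prefix /var/lib/mlocate/mlocate.db /home \
    && echo "/home entries present" \
    || echo "no /home entries (or no database here)"
```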

Keith Keller
Jul 29, 2010, 4:24:26 PM
On 2010-07-27, Rahul <nos...@nospam.invalid> wrote:
>
> How can I get locate on the machine to also find the data that resides on
> the other machine? Either I've to get locate to connect to the database of
> the other machine or I can somehow ask updatedb on the local machine to
> also index the remote NFS files. Not sure which is the better approach and
> how to do this.

It depends on how big the NFS server is, and how fast the link between
the client and server is.

If the server is small, and the connection is reasonable, you could in
theory have each client index the NFS share. You'll need some magic to let
the updatedb process see the entire NFS share, such as using no_root_squash
on the server, in order to build a reasonable locate index on the clients.

If the server is large, or the connection is slow, you will want to
build the database on the server and share it out with your clients. I
did this in the past, but don't recall all the details. I suspect that
the easiest way to do this will be to create a symlink to the share on
the server that has the same name as the mount point on the clients,
then build the database just from that directory (you would not want to
index /usr/bin on the server, for example). Store this database
somewhere under the NFS share; on the clients, you'd use -d to specify
the path to the "real" locate database.
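
That scheme might look something like this (a sketch only; all paths are assumptions, and it relies on updatedb's -U flag to limit indexing to one subtree and -o to choose the output file):

```shell
# Server side: index only the exported tree, writing the database
# inside the export so clients can reach it over NFS.
build_shared_db() {
    # $1 = directory to index, $2 = database file to write
    mkdir -p "$(dirname "$2")" &&
    updatedb -U "$1" -o "$2"
}

# Client side: query the shared database through the mount point.
query_shared_db() {
    # $1 = database file, $2 = pattern
    locate -d "$1" "$2"
}

# e.g. on the server:  build_shared_db /export/home /export/home/.mlocate.db
# e.g. on a client:    query_shared_db /home/.mlocate.db foofile
```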

> I do see that in my cron.daily there's a script:

You will probably need to tweak and/or make your own script to build
this special locate db. I believe there's a switch to specify where to
put the locate db on running updatedb.

--keith


--
kkeller...@wombat.san-francisco.ca.us
(try just my userid to email me)
AOLSFAQ=http://www.therockgarden.ca/aolsfaq.txt
see X- headers for PGP signature information

Rahul
Jul 29, 2010, 10:04:09 PM
J G Miller <mil...@yoyo.ORG> wrote in news:i2pgbu$e31$6...@news.eternal-
september.org:

Thanks again for the tips!

> The file is binary but its format is custom to mlocate so
> there are no tools other than locate for looking at its content.
>
> Use the search facility :/\/home\/ to see if there are
> any entries for your NFS mounted file system.

The pattern for /home is not found. On the other hand /etc /var etc. do
result in matches within this binary file.

> If there are no entries present then updatedb has not
> examined the contents of the file system, which may be
> due to user mapping/permission problems.

Not sure which permission is the issue. /home has the following
permissions:

drwxr-xr-x 24 root root 4096 Jul 27 15:51 home

So they are pretty open. What user does updatedb run as? The only thing
that makes /home different than the other dirs is the fact that it resides
on a remote server (NFS). Has to be a mapping issue. But can't find what is
telling updatedb to avoid NFS systems.

--
Rahul

Chris Davies
Jul 30, 2010, 6:34:33 AM
Rahul <nos...@nospam.invalid> wrote:
> PRUNEFS = "auto afs gfs gfs2 iso9660 sfs udf"
> PRUNEPATHS = "/afs /media /net /sfs /tmp /udev /var/spool/cups
> /var/spool/squid /var/tmp"

Is the filesystem containing your home directory NFS mounted as a symbolic
link from something under /net (or one of the other directories listed
in PRUNEPATHS)?

Chris

J G Miller
Jul 30, 2010, 8:10:58 AM
On Friday, July 30th, 2010 at 02:04:09h +0000, Rahul wrote:

> What user does updatedb run as?

You need to start from the top.

Which script is being run to update the locate database?

How is this script being run and by whom?

Joe Beanfish
Jul 30, 2010, 1:19:37 PM

# grep nfs /proc/filesystems
nodev nfs
nodev nfs4
nodev nfsd

So the -f "$nodevs" in the cron script will exclude all NFS filesystems.

You could remove the -f "$nodevs" part so it only uses the settings
in your .conf file.

Note that this may slam the NFS server and/or use a lot of your net
bandwidth.
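
Concretely, that change is just the stock cron script with the -f flag dropped; updatedb then falls back to /etc/updatedb.conf alone for its exclusions. A sketch that stages the modified script (the real target would be /etc/cron.daily/mlocate.cron):

```shell
# Write the modified cron script to a staging path for inspection.
# With the -f "$nodevs" exclusion gone, PRUNEFS/PRUNEPATHS in
# /etc/updatedb.conf decide alone what gets skipped, so NFS mounts
# will be walked on every run.
cat > /tmp/mlocate.cron <<'EOF'
#!/bin/sh
renice +19 -p $$ >/dev/null 2>&1
/usr/bin/updatedb
EOF
chmod 755 /tmp/mlocate.cron
```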

If your mounted systems are the same architecture and version as
the host, it looks like you could write a shell script wrapper that
would run locate with the option(s) to use the database on the mounted
system. Run it once normally for local files, then once for each mounted
filesystem, telling it to use the mounted database.
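
Such a wrapper might look like this (a sketch; the database locations and the /mnt glob are assumptions, and it relies on mlocate's -d accepting a colon-separated list of databases, which lets one invocation cover all of them):

```shell
# collect_dbs LOCAL_DB CANDIDATE...: build a colon-separated list
# from whichever candidate databases actually exist and are readable.
collect_dbs() {
    dbs=$1
    shift
    for d in "$@"; do
        [ -r "$d" ] && dbs="$dbs:$d"
    done
    printf '%s\n' "$dbs"
}

# Wrapper: one query across the local database plus every database
# found under the assumed mount points.
locate_all() {
    locate -d "$(collect_dbs /var/lib/mlocate/mlocate.db /mnt/*/mlocate.db)" "$@"
}
```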

David W. Hodgins
Jul 30, 2010, 4:45:10 PM
On Thu, 29 Jul 2010 22:04:09 -0400, Rahul <nos...@nospam.invalid> wrote:

> So they are pretty open. What user does updatedb run as? The only thing

On my Mandriva 2010.1 system, /etc/cron.daily/mlocate.cron is
run as root, and updates /var/lib/mlocate/mlocate.db.

I have an encrypted filesystem, that I have explicitly excluded
in /etc/updatedb.conf.

The encrypted filesystem is mounted by a script run from my
~/.profile. After it's mounted, I manually run
/usr/bin/updatedb -U /var/mnt/data -l 0 -o /var/mnt/data/mlocate/mlocate.db --prunepaths="" --prunefs=""
and have included in .profile
export LOCATE_PATH="/var/mnt/data/mlocate/mlocate.db"

If I run a locate command as root, it doesn't see the files in
the encrypted filesystem. If I run it under my userid, it does.

I would try a similar approach, where the system that physically
has the filesystem runs an updatedb for just that filesystem,
creating an mlocate.db on that filesystem.

Then have users that remotely mount the filesystem set their
LOCATE_PATH to include that remotely mounted mlocate.db.

Regards, Dave Hodgins


--
Change nomail.afraid.org to ody.ca to reply by email.
(nomail.afraid.org has been set up specifically for
use in usenet. Feel free to use it yourself.)

Rahul
Aug 4, 2010, 1:43:42 AM
Joe Beanfish <j...@nospam.duh> wrote in news:i2v1ja
$b...@news.thunderstone.com:

> Note that this may slam the NFS server and/or use a lot of your net
> bandwidth.
>
> If your mounted system are the same architecture and version as
> the host it looks like you could write a shell script wrapper that
> would run locate with the option(s) to use the database on the mounted
> system. Run once normally for local files. Run once for each mounted
> filesystem telling it to use the mounted database.
>

Thanks for all the tips! In the interest of not slamming the NFS server I
decided the cleanest approach was: index once by running updatedb locally
on the storage server, and then have all remote systems use the same
mlocate.db via NFS.

This is what I did (in case it helps anyone):

On storage server:
cp /var/lib/mlocate/mlocate.db /opt/tmp/ ##put in a daily cron job
chgrp slocate /opt/tmp/mlocate.db ##else locate on remote cannot read db

[ /opt/tmp is exported to all remote systems. ]

On remote system:
alias locate 'locate -d /opt/tmp/mlocate.db'
locate '*foofile*'
/home/foouser/foofile
/home/baruser/foofile
[snip]

Seems to work. It would have been more elegant had a link operation
worked rather than a copy to /opt/tmp. Unfortunately neither soft links
(resolved on the client, where the target doesn't exist) nor hard links
(the database lives on a different filesystem) work here. But if there's
a way around this I'd love to know.
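
One possible way around it: instead of a link, a bind mount can graft the live database directory into the exported tree (a sketch using the paths above; run as root on the storage server, and note the NFS export may need the nohide or crossmnt option before clients can see through the bind mount):

```shell
# share_db_dir: make the live /var/lib/mlocate directory appear
# inside the export, so clients read the current database directly
# with no daily copy.  Requires root on the storage server; the
# group-readability concern from the chgrp step above still applies.
share_db_dir() {
    mkdir -p /opt/tmp/mlocate &&
    mount --bind /var/lib/mlocate /opt/tmp/mlocate
}
```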

Thanks again for all the pointers!

--
Rahul

Rahul
Aug 4, 2010, 1:43:57 AM
Chris Davies <chris-...@roaima.co.uk> wrote in
news:pemai7x...@news.roaima.co.uk:

> Is the filesystem containing your home directory NFS mounted as a
> symbolic link from something under /net (or one of the other
> directories listed in PRUNEPATHS)?
>

Nope. Not the case.

--
Rahul

graeme vetterlein
Mar 12, 2023, 3:30:56 PM
For anybody who ends up here: I had this issue with auto-mounted NFS filesystems. What I did was:


Create /net.browsable

Then added the file : /etc/auto.net.browsable
# GPV 08Jan23 - My best guess
#
# Need to add -browse option to this somehow
#
homes nas:/homes
Reference nas:/Public/Reference
Documents nas:/Public/Documents
library nas:/Public/library

These are some of the automounted filesystems, the ones I want updatedb to index. I did not include, for example, multimedia or temp space.


(Obviously restart autofs, or just wait till the next reboot.)
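
For the -browse part the post was unsure about, the usual place is the auto.master entry that activates the map. A guess at the needed line (the timeout value is arbitrary; the option is spelled --browse or browse depending on autofs version):

```
# /etc/auto.master -- assumed entry activating the map above;
# "browse" pre-creates the mount points so they are visible to updatedb
/net.browsable  /etc/auto.net.browsable  --timeout=300 --browse
```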

29V.X746
Mar 21, 2023, 1:44:32 AM
Um ... let's see if I'm getting it ....

You've got NFS network mounts in fstab and they
don't always work in time for updatedb to deal?

Is this something "sudo crontab -e" and then
"@reboot sleep 15 && mount -a && updatedb <options> &"
could have solved? Also, using "_netdev" in
fstab often deals with those "not quite ready" issues.

Network connections USUALLY come up pretty fast
these days - but not ALWAYS, not ALWAYS in time
to respect what's in fstab. In any case, if what
you want to mount from a network is being a prob
there ARE simple ways to mellow-out the situation.