How can I get locate on the machine to also find the data that resides on
the other machine? Either I have to get locate to connect to the database
on the other machine, or I can somehow ask updatedb on the local machine to
also index the remote NFS files. I'm not sure which is the better approach,
nor how to do it.
I tried digging around in the manpages but can't figure out how locate
decides what its default database is. $LOCATE_PATH seems to be empty.
I do see that in my cron.daily there's a script:
cat mlocate.cron
#!/bin/sh
nodevs=$(< /proc/filesystems awk '$1 == "nodev" { print $2 }')
renice +19 -p $$ >/dev/null 2>&1
/usr/bin/updatedb -f "$nodevs"
But even here I'm not sure where it is storing the default database.
Any tips?
--
Rahul
Because the contents of /home have not been added to the locate database,
most probably because in the config file for updatedb, which is usually
found at /etc/updatedb.conf, there is an entry with
PRUNEFS=" ... nfs nfs4 ..."
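If that were the cause, the fix would be to drop the nfs types from the
pruned list. A sketch of the relevant lines in /etc/updatedb.conf (the
exact values here are illustrative, not anyone's actual config):

```shell
# /etc/updatedb.conf -- "nfs nfs4" removed from PRUNEFS so updatedb
# no longer skips NFS-mounted filesystems (illustrative values)
PRUNEFS="auto afs iso9660 sfs udf"
PRUNEPATHS="/afs /media /net /tmp /var/tmp"
```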
> Because the contents of /home have not been added to the locate database,
> most probably because in the config file for updatedb, which is usually
> found at /etc/updatedb.conf, there is an entry with
>
> PRUNEFS=" ... nfs nfs4 ..."
Thanks for the pointer! Doesn't seem to be the case:
PRUNEFS = "auto afs gfs gfs2 iso9660 sfs udf"
PRUNEPATHS = "/afs /media /net /sfs /tmp /udev /var/spool/cups
/var/spool/squid /var/tmp"
--
Rahul
Why do you say "seem"? If it is not there, it is not there! ;)
Assuming you are using mlocate, then go to
/var/lib/mlocate
and examine the file mlocate.db with view (vim in read-only mode).
The file is binary but its format is custom to mlocate so
there are no tools other than locate for looking at its content.
Use the search facility :/\/home\/ to see if there are
any entries for your NFS mounted file system.
If there are no entries present then updatedb has not
examined the contents of the file system, which may be
due to user mapping/permission problems.
If there are entries present, then the problem is that you are
running locate as a user with insufficient privileges to see
the files under /home.
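One shortcut, since the path names inside mlocate.db are stored as plain
strings: grep -a can scan the binary directly instead of paging through it
in vi. A sketch using a throwaway stand-in file so the idea is clear (on a
real system you would point grep at /var/lib/mlocate/mlocate.db, as root):

```shell
# Build a stand-in "database" holding a couple of NUL-separated path
# strings, then scan it the way you would scan the real mlocate.db:
db=$(mktemp)
printf '/etc/passwd\0/var/log/messages\0' > "$db"   # note: nothing under /home
if grep -aq '/home/' "$db"; then
    echo "entries under /home present"
else
    echo "no /home entries: updatedb never saw that filesystem"
fi
rm -f "$db"
```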
It depends on how big the NFS server is, and how fast the link between
the client and server is.
If the server is small and the connection is reasonable, you could in
theory have each client index the NFS share. You'll need some magic to
let the updatedb process see the entire NFS share, such as using
no_root_squash on the server, in order to build a reasonable locate
index on the clients.
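For reference, the server-side piece of that "magic" is an export option;
a hypothetical /etc/exports line, assuming the share lives at /export/home
and the clients sit on 192.168.0.0/24:

```
# /etc/exports -- no_root_squash lets the clients' root-run updatedb
# traverse the whole share (hypothetical share path and subnet)
/export/home  192.168.0.0/24(rw,no_root_squash)
```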
If the server is large, or the connection is slow, you will want to
build the database on the server and share it out with your clients. I
did this in the past, but don't recall all the details. I suspect that
the easiest way to do this will be to create a symlink to the share on
the server that has the same name as the mount point on the clients,
then build the database just from that directory (you would not want to
index /usr/bin on the server, for example). Store this database
somewhere under the NFS share; on the clients, you'd use -d to specify
the path to the "real" locate database.
> I do see that in my cron.daily there's a script:
You will probably need to tweak that script, or write your own, to build
this special locate db. I believe there's a switch (updatedb -o) to
specify where to put the db when running updatedb.
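A sketch of that recipe in commands. All the names here are assumptions:
the share is exported from /export/home, the clients mount it at /home,
and updatedb is mlocate's. It is guarded so it does nothing on a machine
without mlocate or the share:

```shell
# Server side: alias the export under the clients' mount-point name,
# build a db for just that tree, and store it on the share itself.
if command -v updatedb >/dev/null && [ -d /export/home ]; then
    ln -sfn /export/home /home          # paths in the db now match the clients'
    updatedb -U /home -l 0 -o /export/home/mlocate.db
    echo "shared db written to /export/home/mlocate.db"
else
    echo "skipped: mlocate and/or /export/home not present here"
fi
# Client side: point locate at the db on the mounted share, e.g.
#   locate -d /home/mlocate.db foofile
```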
--keith
--
kkeller...@wombat.san-francisco.ca.us
(try just my userid to email me)
AOLSFAQ=http://www.therockgarden.ca/aolsfaq.txt
see X- headers for PGP signature information
Thanks again for the tips!
> The file is binary but its format is custom to mlocate so
> there are no tools other than locate for looking at its content.
>
> Use the search facility :/\/home\/ to see if there are
> any entries for your NFS mounted file system.
The pattern for /home is not found. On the other hand /etc /var etc. do
result in matches within this binary file.
> If there are no entries present then updatedb has not
> examined the contents of the file system, which may be
> due to user mapping/permission problems.
Not sure which permission is the issue. /home has the following
permissions:
drwxr-xr-x 24 root root 4096 Jul 27 15:51 home
So they are pretty open. What user does updatedb run as? The only thing
that makes /home different from the other dirs is that it resides on a
remote server (NFS). It has to be a mapping issue, but I can't find what
is telling updatedb to avoid NFS filesystems.
--
Rahul
Is the filesystem containing your home directory NFS mounted as a symbolic
link from something under /net (or one of the other directories listed
in PRUNEPATHS)?
Chris
> What user does updatedb run as?
You need to start from the top.
Which script is being run to update the locate database?
How is this script being run and by whom?
# grep nfs /proc/filesystems
nodev nfs
nodev nfs4
nodev nfsd
So the above will exclude all nfs filesystems.
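That pipeline's effect is easy to demonstrate; here the /proc/filesystems
sample is simulated with printf rather than read from the live file:

```shell
# Simulate the extraction the mlocate cron job performs on /proc/filesystems:
nodevs=$(printf 'nodev\tnfs\nnodev\tnfs4\nnodev\tnfsd\n\text4\n' |
         awk '$1 == "nodev" { print $2 }')
echo "$nodevs"    # prints nfs, nfs4 and nfsd, one per line
```

Passed to updatedb via -f, every one of those types becomes a pruned
filesystem, which is why the NFS mounts vanish from the index even though
updatedb.conf itself never mentions nfs.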
You could remove the -f "$nodevs" part so it only uses the settings
in your .conf file.
Note that this may slam the NFS server and/or use a lot of your net
bandwidth.
If your mounted systems are the same architecture and version as
the host, it looks like you could write a shell script wrapper that
runs locate with the option(s) to use the database on the mounted
system: run it once normally for local files, then once for each mounted
filesystem, telling it to use the mounted database.
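Such a wrapper might look like this (a sketch: the /mnt/*/mlocate.db
layout is an assumption, and the function quietly does nothing on a
machine without mlocate):

```shell
# Query the local database first, then every shared database found
# under the assumed /mnt/<share>/mlocate.db layout.
locate_all() {
    command -v locate >/dev/null || return 0   # no mlocate on this machine
    locate -- "$@"
    for db in /mnt/*/mlocate.db; do
        if [ -f "$db" ]; then
            locate -d "$db" -- "$@"
        fi
    done
    return 0
}
locate_all '*foofile*'
```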
> So they are pretty open. What user does updatedb run as? The only thing
On my Mandriva 2010.1 system, /etc/cron.daily/mlocate.cron is
run as root, and updates /var/lib/mlocate/mlocate.db.
I have an encrypted filesystem, that I have explicitly excluded
in /etc/updatedb.conf.
The encrypted filesystem is mounted by a script run from my
~/.profile. After it's mounted, I manually run
/usr/bin/updatedb -U /var/mnt/data -l 0 -o /var/mnt/data/mlocate/mlocate.db --prunepaths="" --prunefs=""
and have included in .profile
export LOCATE_PATH="/var/mnt/data/mlocate/mlocate.db"
If I run a locate command as root, it doesn't see the files in
the encrypted filesystem. If I run it under my userid, it does.
I would try a similar approach, where the system that physically
has the filesystem runs an updatedb for just that filesystem,
creating an mlocate.db on that filesystem.
Then have users that remotely mount the filesystem set their
LOCATE_PATH to include that remotely mounted mlocate.db.
Regards, Dave Hodgins
--
Change nomail.afraid.org to ody.ca to reply by email.
(nomail.afraid.org has been set up specifically for
use in usenet. Feel free to use it yourself.)
> Note that this may slam the NFS server and/or use a lot of your net
> bandwidth.
>
> If your mounted systems are the same architecture and version as
> the host, it looks like you could write a shell script wrapper that
> runs locate with the option(s) to use the database on the mounted
> system: run it once normally for local files, then once for each mounted
> filesystem, telling it to use the mounted database.
>
Thanks for all the tips! In the interest of not slamming the NFS server, I
decided the cleanest approach was to build the index once with updatedb
(locally, on the storage server) and then have all remote systems use the
same mlocate.db via NFS.
This is what I did (in case it helps anyone):
On storage server:
cp /var/lib/mlocate/mlocate.db /opt/tmp/ ##put in a daily cron job
chgrp slocate /opt/tmp/mlocate.db ##else locate on remote cannot read db
[ /opt/tmp is exported to all remote systems. ]
On remote system:
alias locate 'locate -d /opt/tmp/mlocate.db'
locate '*foofile*'
/home/foouser/foofile
/home/baruser/foofile
[snip]
Seems to work. It would have been more elegant to link rather than copy
the db into /opt/tmp. Unfortunately neither symlinks (the client resolves
them against its own filesystem) nor hard links (the db lives on a
different filesystem) work here. But if there's a way around this I'd
love to know.
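One small note on the alias: the form above is csh/tcsh syntax; bash wants
an equals sign. mlocate's -d also accepts a colon-separated list of
databases, so one alias can search both the local and the shared db (paths
as in the post above):

```shell
# bash equivalent of the alias, searching local and NFS-shared dbs in one call:
alias locate='locate -d /var/lib/mlocate/mlocate.db:/opt/tmp/mlocate.db'
```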
Thanks again for all the pointers!
--
Rahul
> Is the filesystem containing your home directory NFS mounted as a
> symbolic link from something under /net (or one of the other
> directories listed in PRUNEPATHS)?
>
Nope. Not the case.
--
Rahul