[2010/08/27 17:30:59, 3] lib/util.c:fcntl_getlock(2064)
fcntl_getlock: lock request failed at offset 75694080 count 65536
type 1 (Function not implemented)
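The "Function not implemented" above is samba's fcntl_getlock() hitting a filesystem that rejects POSIX byte-range locks, which is what a lustre client mounted without flock or localflock does. A minimal sketch of such a probe, assuming nothing beyond the Python standard library (the function name `probe_byte_range_lock` is my own, not anything from samba or lustre):

```python
import errno
import fcntl
import os
import tempfile

def probe_byte_range_lock(path):
    """Attempt a non-blocking exclusive byte-range lock, roughly what
    samba's fcntl_getlock() does under the hood.  Returns True if the
    filesystem honours POSIX locks, False if another process already
    holds the lock, and re-raises OSError(ENOSYS) -- "Function not
    implemented" -- which is what a lustre client mounted without
    flock/localflock reports."""
    fd = os.open(path, os.O_RDWR | os.O_CREAT)
    try:
        # Same count/offset as in the log excerpt above.
        fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB, 65536, 75694080)
        fcntl.lockf(fd, fcntl.LOCK_UN, 65536, 75694080)
        return True
    except OSError as err:
        if err.errno == errno.ENOSYS:
            raise
        return False
    finally:
        os.close(fd)
```

On a local filesystem this simply returns True; on the lustre mount in question it would raise the ENOSYS seen in the log.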
But I also found out about the flock option for lustre. Should I set
flock on all clients? Or can I just use the localflock option on the
fileserver?
David
--
Personally, I liked the university. They gave us money and facilities,
we didn't have to produce anything! You've never been out of college!
You don't know what it's like out there! I've worked in the private
sector. They expect results. -Ray Ghostbusters
_______________________________________________
Lustre-discuss mailing list
Lustre-...@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss
On Aug 27, 2010, at 6:41 PM, David Noriega wrote:
> But I also found out about the flock option for lustre. Should I set
> flock on all clients? or can I just use localflock option on the
> fileserver?
It depends.
If you are 100% sure none of your other clients use flocks in a way similar to samba to
guard their file accesses, AND you don't export the same fs with samba from more than one node, you
can mount with localflock on the samba-exporting node.
Otherwise you need to mount with flock, but please be aware that flock is not exactly cheap in lustre:
every flock operation is a synchronous RPC, and it puts even more load on the MDS. Also, some applications
start to use flock once they see it is available, resulting in possible unexpected slowdowns
(MPI apps in some IO modes without the lustre ADIO driver tend to do this, I think).
Bye,
Oleg
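To put Oleg's cost warning in perspective: a tight lock/unlock loop that is essentially free on a local filesystem can dominate runtime on a flock-mounted lustre client, since each cycle becomes a synchronous RPC. A rough, hypothetical micro-benchmark sketch (standard library only; `timed_flock_cycles` is an invented name, and the RPC-per-operation behaviour is per Oleg's description, not something this code can show on a local fs):

```python
import fcntl
import os
import tempfile
import time

def timed_flock_cycles(path, n=1000):
    """Take and release an flock n times and return the mean latency
    per cycle in seconds.  On a local filesystem this is a cheap
    in-kernel operation; on a lustre client mounted with -o flock each
    cycle is a synchronous RPC to the server, so the same loop can be
    orders of magnitude slower."""
    fd = os.open(path, os.O_RDWR | os.O_CREAT)
    t0 = time.perf_counter()
    for _ in range(n):
        fcntl.flock(fd, fcntl.LOCK_EX)
        fcntl.flock(fd, fcntl.LOCK_UN)
    elapsed = time.perf_counter() - t0
    os.close(fd)
    return elapsed / n
```

Running this on the samba node before and after switching mount options would show how much of the locking cost is the RPC round trip.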
On Mon, Aug 30, 2010 at 10:52 AM, Mark Hahn <ha...@mcmaster.ca> wrote:
>> No, we will only have a single samba server sharing out lustre-backed
>> files. What do you mean in a way similar to samba? What does samba do
>> that is different? We are using lustre to replace our old nfs server
>> for serving up home directories in our cluster and the rest of our
>> systems.
>
> what he meant is that if lustre is backing a single samba server,
> and the shared filesystem is only reached via samba, you can optimize
> from flock to localflock. that is, since flock is relatively
> expensive, localflock provides the same locking behavior within a single
> client, such as the machine running samba. if you have other lustre clients
> also mounting that filesystem, you'll need flock, not localflock, to provide
> consistency.
>
> -mark
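Mark's point about coherence scope can be illustrated with two processes on one machine. Under localflock the check below only ever holds between processes on the same client; with the flock mount option the equivalent check would also hold between processes on different clients. A sketch, assuming Linux and only the standard library (`child_sees_lock` is my own name):

```python
import fcntl
import os
import tempfile

def child_sees_lock(path):
    """Parent takes an exclusive flock; a forked child then verifies it
    cannot also acquire it.  Returns True when the child observed the
    parent's lock, i.e. the two processes share a coherent lock view."""
    fd = os.open(path, os.O_RDWR | os.O_CREAT)
    fcntl.flock(fd, fcntl.LOCK_EX)
    pid = os.fork()
    if pid == 0:
        # Child: a fresh open() gives a separate file description, so the
        # non-blocking flock below must fail while the parent holds the lock.
        cfd = os.open(path, os.O_RDWR)
        try:
            fcntl.flock(cfd, fcntl.LOCK_EX | fcntl.LOCK_NB)
            os._exit(1)  # lock granted -> the processes are not coherent
        except BlockingIOError:
            os._exit(0)  # parent's lock was visible, as expected
    _, status = os.waitpid(pid, 0)
    fcntl.flock(fd, fcntl.LOCK_UN)
    os.close(fd)
    return os.WEXITSTATUS(status) == 0
```

With localflock this is the strongest guarantee you get; it says nothing about a process on another lustre client.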
Similar as in: some app whose locking would conflict with samba's locks, i.e. locking
meant to protect updates made through samba from updates by those other apps,
should they happen at the same moment.
Now that I think about it, I remember that samba does not really use flock, but
rather some oplocks that use different posix lock op codes which lustre
does not implement, so mounting with localflock is the only way to get that
functionality. But it is important to remember that it won't be cluster-coherent
(the locks would only be visible locally on that samba-exporting node).
Typically cluster-coherence would only be important when you have more than one
samba-exporting node, though.
Bye,
Oleg
I was wondering if anyone else had experienced hard-link warnings when
attempting to create the mdsdb, eg:
warning MDS inode git-var (inum 83834835): DB_KEYEXIST: Key/data pair
already exists hard link?
warning MDS inode git-verify-pack (inum 83834835): DB_KEYEXIST: Key/data
pair already exists hard link?
warning MDS inode git-verify-tag (inum 83834835): DB_KEYEXIST: Key/data
pair already exists hard link?
warning MDS inode git-write-tree (inum 83834835): DB_KEYEXIST: Key/data
pair already exists hard link?
Roughly 250 or so of the same type of warning as the above, and all for
the same inode.
If the mdsdb can eventually be created (27GB so far and growing after 12
hours of creation time), will it be safe to use a database created with
those warnings to create the ostdb's, then run lfsck?
This would be: 2.6.18-164.11.1.el5_lustre.1.8.3
Thank you,
Adam
--
Adam Munro
System Administrator | SHARCNET | http://www.sharcnet.ca
Compute Canada | http://www.computecanada.org
519-888-4567 x36453
Yes, this is not unusual, and as it indicates, it is likely due to a hard-linked file. This message has already been removed in our next e2fsprogs release.
> If the mdsdb can eventually be created (27GB so far and growing after 12
> hours of creation time), will it be safe to use a database created with
> those warnings to create the ostdb's, then run lfsck?
This message does not cause any problems.
> This would be: 2.6.18-164.11.1.el5_lustre.1.8.3
Generally, what is more important in a case like this is the e2fsprogs version.
Cheers, Andreas
--
Andreas Dilger
Lustre Technical Lead
Oracle Corporation Canada Inc.