Hi Atul,
Thanks a lot -- this is very helpful!
So assuming the application is performing the following fcntl() call to set a file segment lock:
struct flock fl;
int err;
fl.l_type = F_WRLCK;
fl.l_whence = SEEK_SET; /* l_start is relative to the start of the file */
fl.l_start = 0;
fl.l_len = 0;           /* l_len = 0 means lock to the end of the file */
err = fcntl(file, F_SETLK, &fl);
I should be able to achieve the desired behavior if I enable cluster-wide locking with the /-o flock/ mount option. Is this correct?
Thanks again!
Nochum
---------- Original message ----------
From: Atul Vidwansa <Atul.Vidwa...@Sun.COM>
Date: Jan 8, 1:30 am
Subject: Newbie question: File locking, synchronicity, order, and ownership
To: lustre-discuss-list
Some comments inline..
Nochum Klein wrote:
> Hi Everyone,
> Apologies for what is likely a simple question for anyone who has been
> working with Lustre for a while. I am evaluating Lustre as part of a
> fault-tolerant failover solution for an application component. Based
> on our design using heartbeats between the hot primary and warm
> secondary components, we have four basic requirements of the clustered
> file system:
> 1. *Write Order *- The storage solution must write data blocks to
> shared storage in the same order as they occur in the data
> buffer. Solutions that write data blocks in any other order
> (for example, to enhance disk efficiency) do not satisfy this
> requirement.
> 2. *Synchronous Write Persistence* - Upon return from a
> synchronous write call, the storage solution guarantees that all
> the data have been written to durable, persistent storage.
> 3. *Distributed File Locking* - Application components must be
> able to request and obtain an exclusive lock on the shared
> storage. The storage solution must not assign the locks to two
> servers simultaneously.
AFAIK Lustre does support distributed locking. From wiki.lustre.org:
* /flock/lockf/
POSIX and BSD /flock/lockf/ system calls will be completely coherent
across the cluster, using the Lustre lock manager, but are not
enabled by default today. It is possible to enable client-local
/flock/ locking with the /-o localflock/ mount option, or
cluster-wide locking with the /-o flock/ mount option. If/when this
becomes the default, it is also possible to disable /flock/ for a
client with the /-o noflock/ mount option.
> 4. *Unique Write Ownership* - The application component that has
> the file lock must be the only server process that can write to
> the file. Once the system transfers the lock to another server,
> pending writes queued by the previous owner must fail.
It depends on what level of locking you do. Lustre supports byte-range
locking, so as long as writes do not overlap, multiple writers can write
to the same file.
Cheers,
_Atul
> Can anyone confirm that these requirements would be met by Lustre 1.8?
> Thanks a lot!
> Nochum
> ------------------------------------------------------------------------
> _______________________________________________
> Lustre-discuss mailing list
> Lustre-disc...@lists.lustre.org
>http://lists.lustre.org/mailman/listinfo/lustre-discuss
Nochum Klein wrote:
>
> Hi Atul,
>
> Thanks a lot -- this is very helpful!
>
> So assuming the application is performing the following fcntl() call
> to set a file segment lock:
>
> struct flock fl;
> int err;
>
> fl.l_type = F_WRLCK;
> fl.l_whence = 0;
> fl.l_start = 0;
> fl.l_len = 0; /* len = 0 means until end of file */
>
> err = fcntl(file, F_SETLK, &fl);
>
> I should be able to achieve the desired behavior
>
What is your desired behavior?
>
> if I enable cluster-wide locking with the /-o flock/ mount option. Is
> this correct?
>
Is your application writing to the same file from multiple nodes? If so,
do writes from different nodes overlap? The code above will work fine
if each node writes to its own file, or if multiple nodes write to
different sections of the same file. Otherwise it will result in lock
ping-pong.
Cheers,
_Atul
>
> * /flock/lockf/
>
> POSIX and BSD /flock/lockf/ system calls will be completely coherent
> across the cluster, using the Lustre lock manager, but are not
> enabled by default today. It is possible to enable client-local
> /flock/ locking with the /-o localflock/ mount option, or
> cluster-wide locking with the /-o flock/ mount option. If/when this
> becomes the default, it is also possible to disable /flock/ for a
> client with the /-o noflock/ mount option.
>
> > 4. *Unique Write Ownership* - The application component that has
> > the file lock must be the only server process that can write to
> > the file. Once the system transfers the lock to another server,
> > pending writes queued by the previous owner must fail.
>
> It depends on what level of locking you do. Lustre supports byte-range
> locking, so unless writes overlap, multiple writers can write to same
> file.
>
> Cheers,
> _Atul
>
>
> > Can anyone confirm that these requirements would be met by Lustre 1.8?
>
> > Thanks a lot!
>
> > Nochum
If the client running the primary dies, eventually it will be evicted
from the cluster, its locks will be dropped, and the secondary will be
able to take over.
If the application running on the primary hangs while holding the lock,
then the secondary will not be able to take over.
I would recommend implementing your own locking scheme. A simple
lockfile, opened with O_EXCL|O_CREAT, should suffice.
Nico
--