
NFS Failover


David Parker

Jun 26, 2013, 2:20:01 PM
Hello,

I'm wondering if there is a way to set up a highly-available NFS share using two servers (Debian Wheezy), where the shared volume fails over if the primary server goes down. My idea was to use two NFS servers and keep the exported directories in sync using DRBD. On the client, mount the share via autofs using the "replicated server" syntax.

For example, say I have two servers called server1 and server2, each of which exports the directory /export/data via NFS, and /export/data is a synced DRBD filesystem shared between them. On the client, set up an autofs map file to mount the share and add this line:

    /mnt/data    server1,server2:/export/data

This is close, but it doesn't do what I'm looking for. It seems to round-robin between the two servers whenever the filesystem needs to be mounted, and only tries the other server if the selected one isn't available.
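For reference, the autofs config I'm using looks roughly like this (the map file name here is just an example):

```
# /etc/auto.master - reference a direct map (map file name is arbitrary)
/-    /etc/auto.data

# /etc/auto.data - replicated-server entry; NFS is assumed by default.
# autofs probes the listed hosts when the mount is triggered and picks
# one that responds.
/mnt/data    server1,server2:/export/data
```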

What I'm looking for is a way for the client to be aware of both servers and gracefully fail over between them. I thought about using Pacemaker and Corosync to provide a virtual IP which floats between the servers, but would that work with NFS? Say I have an established NFS mount and server1 fails, and the virtual IP fails over to server2. Wouldn't there be a bunch of NFS socket and state information which server2 is unaware of, rendering the connection useless on the client? Also, data integrity is essential in this scenario, so what about writes to the NFS share which are in flight when the server-side failover takes place?

In full disclosure, I have tried the autofs method but not the Pacemaker/Corosync HA method, so some experimentation might answer my questions. In the meantime, any help would be greatly appreciated.

    Thanks!
    Dave

--
Dave Parker
Systems Administrator
Utica College
Integrated Information Technology Services
(315) 792-3229
Registered Linux User #408177

Dan Ritter

Jun 26, 2013, 3:20:01 PM
http://www.howtoforge.com/high-availability-nfs-with-drbd-plus-heartbeat

is the story of how somebody did that. Ubuntu rather than
Debian, but you should be able to translate easily.

You are correct that NFS is going to be very unhappy with
stateful changes. What you actually need is to use a clustered
filesystem rather than NFS.

-dsr-


--
To UNSUBSCRIBE, email to debian-us...@lists.debian.org
with a subject of "unsubscribe". Trouble? Contact listm...@lists.debian.org
Archive: http://lists.debian.org/20130626190...@randomstring.org

Adrian Fita

Jun 26, 2013, 3:30:02 PM
I have also studied NFS fail-over with Pacemaker/Corosync/DRBD, and it
could work with NFSv3; NFSv4 uses TCP, which makes things very hard. But
even with NFSv3 I stumbled into strange situations (the details of which
I don't really remember); the bottom line is that I decided NFS
fail-over is too fiddly and hard to control reliably. Now I'm studying
Gluster for replicating data between nodes and mounting the Gluster
volumes on the clients via glusterfs - this seems like a much better,
simpler and more robust approach. I suggest you take a look at Gluster;
it's an exceptionally good technology.
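To give an idea of what that looks like, here is a rough sketch of a two-node replicated volume (the hostnames, brick path and volume name are just placeholders to match the earlier example, and the exact commands may vary between Gluster versions):

```
# On server1: add the second node and create a 2-way replicated
# volume with one brick on each server
gluster peer probe server2
gluster volume create datavol replica 2 \
    server1:/export/brick server2:/export/brick
gluster volume start datavol

# On each client: mount with the FUSE client; the client learns
# about both bricks from the volume info, so it keeps working if
# one server goes down
mount -t glusterfs server1:/datavol /mnt/data
```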

--
Adrian Fita



David Parker

Jun 26, 2013, 4:00:02 PM
On Wed, Jun 26, 2013 at 3:20 PM, Adrian Fita <adria...@gmail.com> wrote:
On 06/26/2013 09:11 PM, David Parker wrote:
>
> What I'm looking for is a way to have the client be aware of both
> servers, and gracefully failover between them.  I thought about using
> Pacemaker and Corosync to provide a virtual IP which floats between the
> servers, but would that work with NFS?  Let's say I have an established
> NFS mount and server1 fails, and the virtual IP fails over to server2.
>  Wouldn't there be a bunch of NFS socket and state information which
> server2 is unaware of, therefore rendering the connection useless on the
> client?  Also, data integrity is essential in this scenario, so what
> about active writes to the NFS share which are happening at the time the
> server-side failover takes place?
>

> I have also studied NFS fail-over with Pacemaker/Corosync/DRBD, and it
> could work with NFSv3; NFSv4 uses TCP, which makes things very hard. But
> even with NFSv3 I stumbled into strange situations; the bottom line is
> that I decided NFS fail-over is too fiddly and hard to control reliably.
> Now I'm studying Gluster for replicating data between nodes and mounting
> the Gluster volumes on the clients via glusterfs - this seems like a much
> better, simpler and more robust approach. I suggest you take a look at
> Gluster; it's an exceptionally good technology.


Thank you for the information and suggestions. Dan, thanks for the link; it describes exactly what I'm trying to do. As you both pointed out, it would be easier and safer to use a clustered filesystem instead of NFS for this project. I'll check out GlusterFS; it looks like a great option.

Thanks!

Stan Hoeppner

Jun 26, 2013, 5:10:03 PM
On 6/26/2013 2:54 PM, David Parker wrote:

> As you both pointed out, it
> would be easier and safer to use a clustered filesystem instead of NFS for
> this project. I'll check out GlusterFS, it looks like a great option.

It may be worth clarifying that GlusterFS is not a cluster
filesystem. It is a distributed filesystem. There is a significant
difference between the two.

A distributed filesystem such as Gluster is applicable to your needs, as
you can add/remove clients in an ad hoc manner without issue. A cluster
filesystem is probably not suitable, because you simply can't connect
new nodes in a willy-nilly fashion. None of OCFS, GFS, GPFS, CXFS, etc.
handles this very well, if at all. Cluster filesystems require hardware
fencing between nodes, and one doesn't set up hardware fencing willy-nilly.

--
Stan



Joel Wirāmu Pauling

Jun 26, 2013, 5:20:02 PM
I successfully run NFSv4 and DRBD in clustered mode.

The main thing to do in the NFS config files is to pin the RPC suite
to specific port numbers (rather than dynamic ones) at startup. Also
switch to UDP rather than TCP as the transport (this solves session
issues during failover) - your clients all need to explicitly ensure
they are mounting with the udp option.

You also need to keep the RPC state files on a clustered filesystem
mounted on both nodes (I use GFS2 for this purpose, as it's easier).
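On Debian, for example, the pinning and the client mount options look roughly like this (the port numbers are just examples, not required values):

```
# /etc/default/nfs-kernel-server - fix mountd's port
RPCMOUNTDOPTS="--port 32767"

# /etc/default/nfs-common - fix statd's ports
STATDOPTS="--port 32765 --outgoing-port 32766"

# Client /etc/fstab - force UDP explicitly
server1:/export/data  /mnt/data  nfs  udp,hard,intr  0 0
```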

I have heard great things about Ceph instead of DRBD, but haven't tried
it myself yet.

Igor Cicimov

Jun 26, 2013, 6:30:01 PM

GFS2 itself can be mounted as an NFS share on the client side; you don't even need to run NFS underneath.

Stan Hoeppner

Jun 26, 2013, 9:40:01 PM
On 6/26/2013 5:27 PM, Igor Cicimov wrote:
> Gfs2 it self can be mounted as nfs share on the client side you dont even
> need to run nfs underneath.

Would you mind clarifying exactly what you mean by this?

--
Stan




emmanuel segura

Jun 27, 2013, 4:20:02 AM
> Gfs2 it self can be mounted as nfs share on the client side you dont
> even need to run nfs underneath.

??????

I have a colleague who said the same thing, but I showed him that's not true.

If you have a link for this, I'd appreciate it.

Thanks
Emmanuel


2013/6/27 Igor Cicimov <icic...@gmail.com>



--
this is my life and I live it as long as God wills

Gernot Super

Jun 27, 2013, 6:00:01 AM
preload doesn't seem to get started with systemd on boot; any hints
would be very much appreciated!



Gernot Super

Jun 27, 2013, 6:10:02 AM
On 27.06.2013 11:35, Gernot Super wrote:
> preload doesn't seem to get started with systemd on boot, any hints
> are very appreciated!

more info:

root@debian:/# systemctl start preload.service
root@debian:/# systemctl enable preload.service
Failed to issue method call: No such file or directory



Lisi Reisz

Jun 27, 2013, 6:50:02 AM
On Thursday 27 June 2013 10:35:25 Gernot Super wrote:
> preload doesn't seem to get started with systemd on boot, any hints are
> very appreciated!

You would be more likely to get replies if you avoided hijacking a thread. Of
those who use threading, only the people following the thread you have
hijacked will see your email.

I would start a new thread and ask your question again.

Lisi



Stan Hoeppner

Jun 27, 2013, 1:20:01 PM
Please keep replies on list.

On 6/26/2013 10:44 PM, Igor Cicimov wrote:
> On 27/06/2013 11:36 AM, "Stan Hoeppner" <st...@hardwarefreak.com> wrote:
>>
>> On 6/26/2013 5:27 PM, Igor Cicimov wrote:
>>> Gfs2 it self can be mounted as nfs share on the client side you dont
> even
>>> need to run nfs underneath.
>>
>> Would you mind clarifying exactly what you mean by this?
>>
> Meaning you can use mount -t glusterfs or mount -t nfs on the client side.

You seem to be stating that one can mount a GFS2 filesystem by using

'mount -t glusterfs' or
'mount -t nfs'

This is simply not correct.

--
Stan

