
session trunking with NFS

611 views

Stefan Krueger

Jun 26, 2018, 3:00:05 AM
Hello,

As far as I know, Debian stretch ships with NFS version 4.2. The RFC[1] says NFSv4.1 has the capability for session trunking to improve performance/throughput, so my question is: how can I achieve this? How do I configure the NFS server, and how do I mount it on the client side? There is no hint in the manpage for this.

Thanks in advance!
Best regards
Stefan


[1]https://tools.ietf.org/html/rfc5661#section-2.10

Reco

Jun 26, 2018, 3:10:04 AM
Hi.

On Tue, Jun 26, 2018 at 08:57:25AM +0200, Stefan Krueger wrote:
> Hello,
>
> As far as I know, Debian stretch ships with NFS version 4.2. The RFC[1] says NFSv4.1 has the capability for session trunking to improve performance/throughput, so my question is: how can I achieve this? How do I configure the NFS server, and how do I mount it on the client side? There is no hint in the manpage for this.

The way they describe the feature at [1], it does not seem to be that useful.

Assuming that you don't need a bunch of kernel patches ([1] describes
Debian 7.9), all you need to do is obtain an NFS server with multiple
non-bonded network interfaces, a client with the same, and mount the
NFS share several times into the same directory.
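For what it's worth, that "mount the share several times" setup could be sketched roughly as follows. Everything here is hypothetical: the addresses and export path are made up, and vers=4.1 or later over TCP is assumed to be a prerequisite for session trunking.

```shell
# Hypothetical sketch (needs root and a real NFSv4.1 server to run):
# mount the same export once per server address into one directory.
mount -t nfs -o vers=4.1,proto=tcp 192.0.2.10:/export /mnt/share
mount -t nfs -o vers=4.1,proto=tcp 192.0.2.11:/export /mnt/share
```

Whether the second mount is merged into one trunked session or rejected depends on the kernel in use; this is only the shape of the procedure described above.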

And all you get out of this is the ability to utilize several network
links on both the NFS client and the server for a single client.

Personally, I'd rather use conventional network bonding on the NFS
server and be done with it.

[1] http://packetpushers.net/multipathing-nfs4-1-kvm/

Reco

Michael Stone

Jun 26, 2018, 8:20:04 AM
On Tue, Jun 26, 2018 at 10:07:28AM +0300, Reco wrote:
>Personally I'd rather use conventional network bonding on NFS server,
>and be done with it.

Conventional network bonding doesn't speed up a single stream, which is
why people have been looking for alternatives.

Mike Stone

Reco

Jun 26, 2018, 8:40:05 AM
Hi.
That's something I agree with.

But the way I see things, if one really needs I/O bandwidth, low
latency, and IOPS on consumer-grade hardware, one should use FCoE,
not NFS. Especially if that's a 'one initiator - one target'
configuration.

NFS was designed for multiple clients concurrently accessing the same
shares, and in this scenario bonding seems a much simpler and more
justified solution.

Reco

Stefan K

Jun 27, 2018, 4:40:04 AM
Hi,

today I tried it, but it didn't work:
on my NFS test system I use the 2x1G interfaces;
showmount -e <NFS-IP1>
and
showmount -e <NFS-IP2>
both show me the exports.
Now I mount the NFS share on a server with 10G interfaces (bond). When I mount it with the second NFS IP, I get the error "mount.nfs: mount(2): Device or resource busy".

Did I do something wrong?

best regards
Stefan

> Sent: Tuesday, 26 June 2018 at 09:07
> From: Reco <recov...@gmail.com>
> To: debia...@lists.debian.org
> Subject: Re: session trunking with NFS

Reco

Jun 27, 2018, 6:30:06 AM
Hi.

On Wed, Jun 27, 2018 at 10:32:25AM +0200, Stefan K wrote:
> Hi,
>
> today I tried it, but it didn't work:
> on my NFS test system I use the 2x1G interfaces;
> showmount -e <NFS-IP1>
> and
> showmount -e <NFS-IP2>
> both show me the exports.
> Now I mount the NFS share on a server with 10G interfaces (bond). When I mount it with the second NFS IP, I get the error "mount.nfs: mount(2): Device or resource busy".

From a quick look at the Debian kernel source, version 4.9.88, I
conclude that the multipath NFS feature is definitely included there.
The sources have all the XPRTMULTIPATH defines, the NFS client has the
xprtmultipath.c file included, etc.
What I cannot find (and probably won't look into) is whether the
feature can be disabled during compilation.
So it should work, as long as the client uses NFS protocol version 4.1
or later.

The crucial implementation detail seems to be the need to use TCP to
mount the NFS share, not the default UDP.
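As a side note, one way to confirm what was actually negotiated is to look at the share's line in /proc/mounts. A minimal sketch of that check, using a made-up sample line rather than a live mount:

```shell
# Sample line as it might appear in /proc/mounts after an NFSv4.1
# mount (the line itself is illustrative, not from a real system).
line='192.0.2.10:/export /mnt/share nfs4 rw,vers=4.1,proto=tcp,addr=192.0.2.10 0 0'
# Extract the negotiated protocol version and transport.
vers=$(printf '%s\n' "$line" | sed -n 's/.*vers=\([0-9.]*\).*/\1/p')
proto=$(printf '%s\n' "$line" | sed -n 's/.*proto=\([a-z]*\).*/\1/p')
echo "vers=$vers proto=$proto"   # prints: vers=4.1 proto=tcp
```

On a real system, `grep /mnt/share /proc/mounts` (or `nfsstat -m`) would supply the line; for session trunking, vers should report 4.1 or later and proto should be tcp.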

Reco