
Status of NFS4.1 FS_RECLAIM in FreeBSD 10.1?


Mahmoud Al-Qudsi

May 20, 2015, 7:18:38 PM
Hello,

I have not delved too deeply into either the NFS spec or the FreeBSD nfsd
code, but from my admittedly-limited understanding, it seems that reclaim is
both a mandatory feature and one that is present in the current FreeBSD NFS
v4.1 implementation. Is my understanding of this correct?

My reason for asking is that, when attempting to migrate an ESXi server to a FreeBSD
NFSv4.1 datastore, ESXi throws the following error:

> WARNING: NFS41: NFS41FSCompleteMount:3601: RECLAIM_COMPLETE FS failed: Not
> supported; forcing read-only operation

VMware ESXi 6.0 is able to mount NFSv4.1 shares exported from other
operating systems, so I figured I would ask here on the list before digging
out a copy of tcpdump and going down that rabbit hole.

I can mount and use NFSv3 shares just fine with ESXi from this same server, and
can mount the same shares as NFSv4 from other clients (e.g. OS X) as well.
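
In case it helps, this is roughly how the datastores are being added on the ESXi
side (host name and share path below are placeholders, and I am typing the esxcli
syntax from memory, so treat it as a sketch rather than verbatim):

# esxcli storage nfs41 add -H <nfs-server> -s /tank/vms -v vms-v41
# esxcli storage nfs add -H <nfs-server> -s /tank/vms -v vms-v3

The nfs41 variant is the one that logs the RECLAIM_COMPLETE warning above and
comes up read-only; the plain nfs (v3) variant mounts read-write without complaint.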

Thanks,

Mahmoud Al-Qudsi
NeoSmart Technologies

_______________________________________________
freebsd...@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stabl...@freebsd.org"

Rick Macklem

May 20, 2015, 9:58:03 PM
Mahmoud Al-Qudsi wrote:
> Hello,
>
> I have not delved too deeply into either the NFS spec or the FreeBSD nfsd
> code, but from my admittedly-limited understanding, it seems that reclaim is
> both a mandatory feature and one that is present in the current FreeBSD NFS
> v4.1 implementation. Is my understanding of this correct?
>
Only the global RECLAIM_COMPLETE is implemented. I'll be honest that
I don't even really understand what the "single fs reclaim_complete"
semantics are and, as such, it isn't implemented.

I think it is meant to be used when a file system is migrated from
one server to another (transferring the locks to the new server) or
something like that.
Migration/replication isn't supported. Maybe someday if I figure out
what the RFC expects the server to do for this case.

> My reason for asking is that, when attempting to migrate an ESXi server to a
> FreeBSD NFSv4.1 datastore, ESXi throws the following error:
>
> > WARNING: NFS41: NFS41FSCompleteMount:3601: RECLAIM_COMPLETE FS failed: Not
> > supported; forcing read-only operation
>
This is the first time I've heard of a client using this. The only clients
I've ever had the opportunity to test against are Linux, Solaris and the
FreeBSD one.

> VMware ESXi 6.0 is able to mount NFSv4.1 shares exported from other
> operating systems, so I figured I would ask here on the list before digging
> out a copy of tcpdump and going down that rabbit hole.
>
> I can mount and use NFSv3 shares just fine with ESXi from this same server, and
> can mount the same shares as NFSv4 from other clients (e.g. OS X) as well.
>
This is NFSv4.1 specific, so NFSv4.0 should work, I think. Or just use NFSv3.

rick

Mahmoud Al-Qudsi

May 20, 2015, 10:57:25 PM
On May 20, 2015, at 8:57 PM, Rick Macklem <rmac...@uoguelph.ca> wrote:
> Only the global RECLAIM_COMPLETE is implemented. I'll be honest that
> I don't even really understand what the "single fs reclaim_complete"
> semantics are and, as such, it isn't implemented.

Thanks for verifying that.

> I think it is meant to be used when a file system is migrated from
> one server to another (transferring the locks to the new server) or
> something like that.
> Migration/replication isn't supported. Maybe someday if I figure out
> what the RFC expects the server to do for this case.

I wasn’t clear on whether this was lock reclaiming or block reclaiming. Thanks.

>> I can mount and use NFSv3 shares just fine with ESXi from this same server, and
>> can mount the same shares as NFSv4 from other clients (e.g. OS X) as well.
>>
> This is NFSv4.1 specific, so NFSv4.0 should work, I think. Or just use NFSv3.
>
> rick

For some reason, ESXi doesn’t do NFS v4.0, only v3 or v4.1.

I am using NFS v3 for now, but unless I’m mistaken, since FreeBSD supports
neither “nohide” nor “crossmnt” there is no way for a single export(/import)
to cross ZFS filesystem boundaries.

I am using ZFS snapshots to manage virtual machine images; each machine
has its own ZFS filesystem so I can snapshot and roll back individually. But
this means that under NFSv3 (so far as I can tell), each “folder” (ZFS fs)
must be mounted separately on the ESXi host. I can get around exporting
them each individually with the -alldirs parameter, but client-side, there does
not seem to be a way of traversing ZFS filesystem mounts without explicitly
mounting each and every one - a maintenance nightmare if there ever was one.
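
To make the setup concrete, the layout is roughly the following (pool and dataset
names are made up for illustration):

# zfs create tank/vms/vm01
# zfs create tank/vms/vm02
# zfs snapshot tank/vms/vm01@pre-update
# zfs rollback tank/vms/vm01@pre-update

and the -alldirs export I mentioned looks something like this in /etc/exports:

/tank/vms -alldirs -maproot=root -network 192.168.1.0/24

But on the client side each dataset still ends up being added as a separate
datastore, e.g.:

# esxcli storage nfs add -H <nfs-server> -s /tank/vms/vm01 -v vm01
# esxcli storage nfs add -H <nfs-server> -s /tank/vms/vm02 -v vm02

once per virtual machine, which is the maintenance headache I was referring to.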

The only thing I can think of would be unions for the top-level directory, but I’m
very, very leery of the nullfs/unionfs modules as they’ve been a source of
system instability for us in the past (deadlocks, undetected lock inversions, etc.).
That, and I’d really rather have a maintenance nightmare than a hack.

Would you have any other suggestions?

Thanks,

Mahmoud

Rick Macklem

May 21, 2015, 8:21:01 AM
Well, if you are just doing an NFSv4.1 mount, you could capture
packets during the failed mount attempt with tcpdump and then
email me the raw packet capture, and I can take a look at it.
(tcpdump doesn't handle nfs packets well, but wireshark will accept
a raw packet capture.) Something like:
# tcpdump -s 0 -w <file>.pcap host <nfs-client>
should work.
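
If you want to sanity-check the capture before sending it, tshark (the
command-line side of wireshark) should be able to read it back, assuming a
reasonably recent wireshark with the NFS dissector:

# tshark -r <file>.pcap -Y nfs

which lists the NFS compounds in the trace, one per line.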

When I read RFC-5661 around page #567, it seems clear that the
client should use RECLAIM_COMPLETE with the fs arg false after
acquiring a new clientid, which is what a fresh mount would normally be.
(If the packet capture shows an EXCHANGEID followed by a RECLAIM_COMPLETE
with the fs arg true, I think ESXi is broken, but I can send you a patch
that will just ignore the "true", so it works.)
I think the "true" case is only used when a file system has been "moved"
by a server cluster, indicated to the client via a NFS4ERR_MOVED error
when it is accessed at the old server, but the wording in RFC-5661 isn't
very clear.
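
To spot that quickly in a decoded capture, something along these lines should
work (the exact field labels depend on the wireshark version, so treat this as
a sketch):

# tshark -r <file>.pcap -V -Y nfs | grep -i -B 2 -A 4 reclaim

which should show each RECLAIM_COMPLETE operation together with the
rca_one_fs value the client sent.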

rick

Rick Macklem

May 21, 2015, 8:29:47 AM
Btw, here's a snippet from RFC-5661 (around page #567) that I think
clarifies what the client should be doing on a mount.

Whenever a client establishes a new client ID and before it does the
first non-reclaim operation that obtains a lock, it MUST send a
RECLAIM_COMPLETE with rca_one_fs set to FALSE, even if there are no
locks to reclaim. If non-reclaim locking operations are done before
the RECLAIM_COMPLETE, an NFS4ERR_GRACE error will be returned.

It clearly states that rca_one_fs should be FALSE, which is what all the
clients I have tested against do.

rick

Mahmoud Al-Qudsi

May 23, 2015, 4:24:55 PM

> On May 21, 2015, at 8:19 AM, Rick Macklem <rmac...@uoguelph.ca> wrote:
>
> Well, if you are just doing an NFSv4.1 mount, you could capture
> packets during the failed mount attempt with tcpdump and then
> email me the raw packet capture, and I can take a look at it.
> (tcpdump doesn't handle nfs packets well, but wireshark will accept
> a raw packet capture.) Something like:
> # tcpdump -s 0 -w <file>.pcap host <nfs-client>
> should work.
>
> When I read RFC-5661 around page #567, it seems clear that the
> client should use RECLAIM_COMPLETE with the fs arg false after
> acquiring a new clientid, which is what a fresh mount would normally be.
> (If the packet capture shows an EXCHANGEID followed by a RECLAIM_COMPLETE
> with the fs arg true, I think ESXi is broken, but I can send you a patch
> that will just ignore the "true", so it works.)
> I think the "true" case is only used when a file system has been "moved"
> by a server cluster, indicated to the client via a NFS4ERR_MOVED error
> when it is accessed at the old server, but the wording in RFC-5661 isn't
> very clear.
>
> rick


Thank you kindly.
I am travelling at the moment, but as soon as I can, I will get that to you.

Much appreciated,

andreas...@gmail.com

Mar 1, 2018, 2:26:11 AM
Hi, I am searching around to find out why mounting my NFSv4.1 FreeBSD export in ESXi 6.5u1 is always read-only.
I found this thread, as well as some other posts, about the incorrect ESXi NFS client behavior.
As this is quite old and the problem still exists with ESXi 6.5u1 and FreeBSD 11.1-RELEASE-p6, I would like to ask if there is already a patch available?

Thanks,
Andi