[Question] NFS dataset example: only the server mounts the dataset at /nfs, clients don’t see /nfs

xwt1

Aug 31, 2025, 5:35:18 AM
to cloudlab-users
Hi CloudLab team,  
      My goal:
      I am trying to set up 3 × m510 nodes connected by an experiment LAN (an RDMA-capable setup) and to use the storage example from the docs: each node should mount the same long-term dataset via NFS at /nfs (read/write).
      What I did:
  1. Adapted the example from https://docs.cloudlab.us/advanced-storage.html#(part._storage-example-remote-nfs)
  2. Named the server node nfs and the clients node-1 and node-2.
  3. Attached my adapted profile code (it uses /local/repository/nfs-server.sh and nfs-client.sh); a stripped-down sketch follows below.
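
For reference, here is a stripped-down sketch of what the profile requests. The dataset URN below is a placeholder for my real dataset, and the complete version is attached as profile.py:

    # Sketch only -- the dataset URN is a placeholder, not my real dataset.
    import geni.portal as portal
    import geni.rspec.pg as pg

    pc = portal.Context()
    request = pc.makeRequestRSpec()

    # NFS server node and the experiment LAN it shares with the clients.
    server = request.RawPC("nfs")
    server.hardware_type = "m510"
    lan = request.LAN("lan0")
    lan.addInterface(server.addInterface("if_lan"))

    # Attach the long-term remote dataset to the server, mounted at /nfs.
    bs = request.RemoteBlockstore("dsnode", "/nfs")
    bs.dataset = "urn:publicid:IDN+utah.cloudlab.us:myproject+ltdataset+mydataset"  # placeholder

    # Link between the server and the dataset, with the attributes the
    # storage documentation requires for remote datasets.
    dslink = request.Link("dslink")
    dslink.addInterface(bs.interface)
    dslink.addInterface(server.addInterface("if_ds"))
    dslink.best_effort = True
    dslink.vlan_tagging = True
    dslink.link_multiplexing = True

    # Startup scripts: export /nfs from the server, mount it on the clients.
    server.addService(pg.Execute(shell="sh",
        command="sudo /bin/bash /local/repository/nfs-server.sh"))

    for name in ("node-1", "node-2"):
        client = request.RawPC(name)
        client.hardware_type = "m510"
        lan.addInterface(client.addInterface("if_lan"))
        client.addService(pg.Execute(shell="sh",
            command="sudo /bin/bash /local/repository/nfs-client.sh"))

    pc.printRequestRSpec(request)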
      What I observed:
  1. The experiment starts successfully.
  2. On nfs, the dataset is mounted at /nfs (visible in df -h).
  3. On node-1 and node-2, /nfs is missing and not mounted (no entry in mount or df -h).
      My questions:
  1. What is the recommended change to this profile so that all three nodes reliably mount the dataset at /nfs with read/write access?
  2. Besides naming the server nfs, are there other assumptions (firewall rules, exports format, network interface selection) that I need to satisfy so that the clients can mount /nfs?
Thanks a lot for any guidance!
Best regards,
xwt1  
profile.py

Leigh Stoller

Aug 31, 2025, 7:07:13 AM
to cloudlab-users
Hi. The best approach for problems like this is to send
us a link to the status page of a failed experiment, so
we can look at it while it is running.

Thanks
Leigh

xwt1

Aug 31, 2025, 9:23:45 AM
to cloudlab-users
Thanks for your reply! I restarted the experiment with the same profile, and the status page link is below:
https://www.cloudlab.us/status.php?uuid=1ebb18ee-8663-11f0-bc80-e4434b2381fc

All three instances appear to be running correctly, as I described before, but when I log in to the nfs and node-1 instances, I find that the dataset can only be accessed on the nfs node (see the attached pictures; the dataset is mounted at /nfs), while node-1 cannot access it. As I said before, my goal is 3 × m510 nodes where each node mounts the same long-term dataset via NFS at /nfs (read/write). I don't know why my profile does not work. Could you give me some help?

xwt1
nfs.png
node1.png

Mike Hibler

Aug 31, 2025, 1:28:33 PM
to cloudla...@googlegroups.com
The problem is that you do not have the NFS startup scripts on the nodes
(e.g., /local/repository/nfs-client.sh). The reason for this is that the
NFS profile you were copying from is a so-called "repository-based" profile.
That means the profile and its associated scripts/files live in a git
repository that we clone into /local/repository on all nodes at startup time;
that is where the startup scripts come from. Your profile is a standalone
profile defined by just the geni-lib python script, so there is no
associated place from which to get the NFS startup scripts.

The best solution is to follow the instructions in
https://docs.cloudlab.us/creating-profiles.html
to copy the example NFS profile's git repository
(https://www.cloudlab.us/p/PortalProfiles/nfs-dataset)
into your own repository. Then you can modify the
profile.py script contained in that repository to add your changes.
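
To illustrate the difference, here is a minimal sketch of the pattern (not the exact contents of the example profile): the startup commands point at scripts whose only source is the repository that we clone to /local/repository.

    import geni.portal as portal
    import geni.rspec.pg as pg

    pc = portal.Context()
    request = pc.makeRequestRSpec()

    # In a repository-based profile, the portal clones the profile's git
    # repository to /local/repository on every node, so these script paths
    # exist at startup time.  In a standalone geni-lib profile there is no
    # such clone, and the commands below have nothing to run.
    server = request.RawPC("nfs")
    server.addService(pg.Execute(shell="sh",
        command="sudo /bin/bash /local/repository/nfs-server.sh"))

    client = request.RawPC("node-1")
    client.addService(pg.Execute(shell="sh",
        command="sudo /bin/bash /local/repository/nfs-client.sh"))

    pc.printRequestRSpec(request)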



xwt1

Sep 1, 2025, 3:16:24 AM
to cloudlab-users
Thanks for your guidance! I have set it up properly!
xwt1