Hi!
The question has been asked before, but the answers are inconsistent. I was wondering whether there is a definitive list of flags to use in the tfvars file to mount a bucket on all nodes at a fixed location in each VM's filesystem. No matter which flags I use for mount_options, I end up with a folder whose permissions show as:
d?????????? ? ? ? ? ?
I have a bucket named my-bucket, and the command cd && mkdir test && gcsfuse my-bucket test works post-terraform on the slurm-controller. I would like the same mount to be available on all my nodes, configured at the terraform stage.
Following advice given here and elsewhere on the internet, I added the following to the tfvars file:
network_storage = [{
  server_ip     = "gcs"
  remote_mount  = "my-bucket"
  local_mount   = "/gcs"
  fs_type       = "gcsfuse"
  mount_options = "rw,_netdev,user"
}]
I also tried replacing user with the uid and gid options corresponding to my user. Post-terraform, I get a folder with the question-mark permissions shown above, unreadable. When I run mount /gcs as my user (without sudo), I get a permission denied error. If I create another mountpoint and modify fstab to point to it with the same mount options, sudo mount -a works, but the permissions are the same.
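For reference, the uid/gid variant looked like this (the numeric ids are illustrative; on my VMs they matched my own user):

network_storage = [{
  server_ip     = "gcs"
  remote_mount  = "my-bucket"
  local_mount   = "/gcs"
  fs_type       = "gcsfuse"
  # uid/gid values here are examples for my user, not a recommendation
  mount_options = "rw,_netdev,uid=1001,gid=1001"
}]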
So my questions are:
- Is there any way to make this work, or should I prefix all my jobs/scripts with a gcsfuse command mounting to a local user-owned directory? That seems extreme.
- If yes, what is the best place and ownership for a bucket mountpoint? In / or in /home, owned by root or by the user?
- If the ownership is user, how do I create the mountpoint automatically with the right permissions at the terraform stage?
- If there is no way to do this, could it be done with a root-owned directory and o+rwx permissions? (I believe gcsfuse exposes file_mode and dir_mode options that can go in fstab; see the sketch after this list.) What are the risks of this?
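For that last option, this is roughly the fstab entry I have in mind (the modes are illustrative, and I am not certain gcsfuse honours them exactly this way):

# allow_other lets users other than the mounter access the mount;
# if mounted by a non-root user it also needs user_allow_other in /etc/fuse.conf
my-bucket /gcs gcsfuse rw,_netdev,allow_other,file_mode=777,dir_mode=777 0 0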
Many thanks,
Arthur