Problem rebooting instance after adding an entry to fstab for auto-mounting a GCS bucket on reboot


Kaihu Chen

Oct 21, 2016, 12:05:25 PM
to gce-discussion
I added an entry to /etc/fstab so that my GCS bucket is automatically mounted upon reboot. The entry looks like this:

  console-146515.appspot.com /mnt/mybk gcsfuse rw,user,allow_other  

Everything worked fine when I tested it with 'sudo mount -a'. At this point I could also access the instance through Cloud Shell, a third-party SSH client, WinSCP, etc. without problems.
But as soon as I rebooted the instance, the whole instance became unresponsive, meaning I was unable to access it at all, even though the VM Instances dashboard showed the instance as running normally. I tried this three times in a row, and the instance got bricked this way every time. Can anybody give me some advice on how to resolve this problem? Thanks!


Carlos (Cloud Platform Support)

Oct 21, 2016, 4:57:50 PM
to gce-discussion

Hi Kaihu,

Can you post, step by step, the procedure and commands you used? Additionally, have you tried grabbing more information from the serial console of your VM?


Kaihu Chen

Oct 21, 2016, 6:16:07 PM
to gce-discussion
Ah, I did not know there was such a thing as a serial console. The following is the log I got through 'View serial port > View gcloud command' on 'instance-1' in my account:

Welcome to Cloud Shell! Type "help" to get started.
kaihuchen01@console-146515:~$ gcloud beta compute --project "console-146515" instances get-serial-port-output "instance-1" --zone "us-central1-b"
SeaBIOS (version 1.8.2-20160912_142702-google)
........
(everything above this looks normal; what follows is the area that looks problematic)
[    5.586904] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
[    5.588712] ACPI: Power Button [PWRF]
[    5.589375] input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
[    5.593840] ACPI: Sleep Button [SLPF]
[    5.605626] AVX2 version of gcm_enc/dec engaged.
[    5.609721] ppdev: user-space parallel port driver
[    5.611941] alg: No test for __gcm-aes-aesni (__driver-gcm-aes-aesni)
[    5.616431] alg: No test for crc32 (crc32-pclmul)
[    5.618914] intel_rapl: no valid rapl domains found in package 0
[FAILED] Failed to mount /mnt/mybk.
See 'systemctl status mnt-mybk.mount' for details.
[DEPEND] Dependency failed for Local File Systems.
[  OK  ] Closed UUID daemon activation socket.
[  OK  ] Closed ACPID Listen Socket.
[  OK  ] Stopped Getty on tty1.
[  OK  ] Stopped Serial Getty on ttyS0.
[  OK  ] Stopped getty on tty2-tty6 if dbus and logind are not available.
[  OK  ] Stopped target Graphical Interface.
[  OK  ] Stopped target Multi-User System.
[  OK  ] Stopped Google Compute Engine Shutdown Scripts.
[  OK  ] Stopped Regular background program processing daemon.
[  OK  ] Stopped Google Compute Engine Startup Scripts.
[  OK  ] Stopped Google Compute Engine Clock Skew Daemon.
[  OK  ] Stopped OpenBSD Secure Shell server.
[  OK  ] Stopped Google Compute Engine Accounts Daemon.
[  OK  ] Stopped Google Compute Engine IP Forwarding Daemon.
[  OK  ] Stopped Google Compute Engine Instance Setup.
[  OK  ] Stopped Internet superserver.
[  OK  ] Stopped /etc/rc.local Compatibility.
[  OK  ] Stopped Login Service.
[  OK  ] Reached target Login Prompts.
[  OK  ] Stopped LSB: Expand the filesystem of the mounted ro... possible size.
[  OK  ] Stopped LSB: Start NTP daemon.
[  OK  ] Stopped LSB: PM2 init script.
[  OK  ] Stopped Permit User Sessions.
[  OK  ] Reached target Remote File Systems.
         Starting Trigger Flushing of Journal to Persistent Storage...
[  OK  ] Stopped System Logging Service.
[  OK  ] Stopped target Basic System.
[  OK  ] Reached target Paths.
[  OK  ] Reached target Timers.
[  OK  ] Stopped target System Initialization.
         Starting Create Volatile Files and Directories...
         Starting LSB: Raise network interfaces....
         Starting LSB: Generate ssh host keys if they do not exist...
[  OK  ] Closed Syslog Socket.
[  OK  ] Reached target Sockets.
         Starting Emergency Shell...
[  OK  ] Started Emergency Shell.
[  OK  ] Reached target Emergency Mode.
[  OK  ] Started Create Volatile Files and Directories.
[  OK  ] Started LSB: Generate ssh host keys if they do not exist.
[    6.002530] systemd-journald[160]: Received request to flush runtime journal from PID 1
         Starting Update UTMP about System Boot/Shutdown...
[  OK  ] Started Trigger Flushing of Journal to Persistent Storage.
[  OK  ] Started Update UTMP about System Boot/Shutdown.
         Starting Update UTMP about System Runlevel Changes...
[  OK  ] Started Update UTMP about System Runlevel Changes.
[    6.086384] random: nonblocking pool is initialized
[  OK  ] Started LSB: Raise network interfaces..
[  OK  ] Reached target Network.
[  OK  ] Reached target Network is Online.
Welcome to emergsulogin: root account is locked, starting shell
root@instance-1:~# 
..........

where /mnt/mybk is my mount point in /etc/fstab

What I did was as follows (from the KiTTY SSH console on my PC):
1. sudo mkdir /mnt/mybk
2. sudo chmod 777 /mnt/mybk
3. sudo nano /etc/fstab (then added the entry mentioned earlier and save the file)
4. sudo mount -a (and observed that the bucket got mounted correctly. A quick inspection of the bucket showed everything looked fine; 'df -h' showed that I got 1PB there)
5. Stop the instance from the "VM Instances" dashboard 
6. Start the instance from the "VM Instances" dashboard. Observed that I was unable to connect to it.
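The steps above can be sketched as runnable commands. This is only a sketch: the paths are moved under /tmp so it can run without root; on the real VM the paths are /mnt/mybk and /etc/fstab, and each command needs sudo.

```shell
# Placeholder paths under /tmp; on the real VM these are /mnt/mybk and
# /etc/fstab, and each command runs with sudo.
MNT="/tmp/demo-mybk"
FSTAB="/tmp/demo-fstab-broken"

mkdir -p "$MNT"       # step 1: create the mount point
chmod 777 "$MNT"      # step 2: open up permissions
# step 3: add the fstab entry (this is the entry that later broke the boot)
echo 'console-146515.appspot.com /mnt/mybk gcsfuse rw,user,allow_other' >> "$FSTAB"
# step 4 on the real VM: sudo mount -a, then verify with df -h
cat "$FSTAB"
```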

Please let me know if you need further information. Thanks.

Carlos (Cloud Platform Support)

Oct 24, 2016, 1:08:46 PM
to gce-dis...@googlegroups.com
Hi Kaihu, 

I was able to replicate the behavior. Basically, by using the "noauto" option in /etc/fstab the VM boots up fine.

my-bucket /mount/point gcsfuse rw,noauto,user

When I removed the option, I got "Welcome to emergsulogin: root account is locked, starting shell" in the serial console. According to this link, it seems that systemd won't boot with failed mounts.

Now, according to the gcsfuse documentation, I understand that you need to keep the "noauto" flag and then mount by running "mount /mount/point". Further discussion is carried on here.

Kaihu Chen

Oct 27, 2016, 7:48:50 PM
to gce-discussion
Carlos, 

I read the discussion thread you mentioned, and it seems to me that the conclusions here are:

1. For gcsfuse, not having the 'noauto' option in /etc/fstab is a bad idea, since it can hang the system on reboot, as we have observed.
2. This pretty much means that I have no way to get the system to mount a given bucket automatically on reboot without human intervention. I need this because I want to set up some preemptible instances on Google Cloud, and since such instances can get killed unexpectedly, I want to use a shared bucket that is mounted automatically on reboot so that the next preemptible instance can pick up the remaining job and work on it.

If my understanding above is incorrect, please let me know; otherwise I will assume that the arrangement I have in mind is not feasible.
In any case, your help is much appreciated!



Kamran (Google Cloud Support)

Oct 27, 2016, 11:54:34 PM
to gce-dis...@googlegroups.com

Hi Kaihu,

As described in this article, you can add entries to your /etc/fstab file like the following:

my-bucket /mount/point gcsfuse rw,noauto,user

In order to mount the bucket automatically, you can also add one of the following commands to the /etc/rc.local script:

mount /mount/point

or

mount my-bucket

I hope this helps.

Sincerely,
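Putting the two pieces together (the fstab entry plus the rc.local mount), a minimal sketch might look like this. The bucket name is a placeholder, and the files are written under /tmp so the sketch runs anywhere; on a real VM you would edit /etc/fstab and /etc/rc.local with sudo.

```shell
# Sketch only: writes to files under /tmp so it can run anywhere.
# On a real GCE VM you would edit /etc/fstab and /etc/rc.local with sudo.
BUCKET="my-bucket"            # placeholder bucket name
MNT="/mnt/mybk"               # mount point from this thread
FSTAB="/tmp/demo-fstab"       # stands in for /etc/fstab
RCLOCAL="/tmp/demo-rc.local"  # stands in for /etc/rc.local

# 'noauto' keeps systemd from dropping to emergency mode when the
# gcsfuse mount cannot complete during early boot.
printf '%s %s gcsfuse rw,noauto,user\n' "$BUCKET" "$MNT" > "$FSTAB"

# rc.local runs late in boot, after networking is up, so the mount
# can be performed there instead.
printf '#!/bin/sh\nmount %s\nexit 0\n' "$MNT" > "$RCLOCAL"

cat "$FSTAB" "$RCLOCAL"
```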

Kaihu Chen

Oct 28, 2016, 11:03:51 PM
to gce-discussion
Kamran, that works great for me. Thank you so much for your help!

Kaihu Chen

Oct 29, 2016, 7:09:55 PM
to gce-discussion
I actually have one more question. 

My bucket is now mounted automatically on reboot without problems, but I am unable to modify or create any file there, even using sudo.

My /etc/fstab now looks like the following:

  bucket-name mount-point gcsfuse rw,noauto,user,allow_other,file_mode=777,dir_mode=777 0 0

And I get the following error when attempting to create or modify any file in the bucket. I verified that all files/directories are owned by root with a mask of 777:

$ touch zzz
touch: cannot touch ‘zzz’: Input/output error
$ sudo touch zzz
touch: cannot touch ‘zzz’: Input/output error


Carlos (Cloud Platform Support)

Oct 31, 2016, 1:04:02 PM
to gce-discussion
I believe this last issue is related to the access scope you granted to the GCE instance during its creation.

As an example, it worked for me having this line in /etc/fstab:

mybucket /mnt/dir1 gcsfuse rw,noauto,user

If I create the instance with the default access scopes, I get the same error when trying to write to the bucket (default access to Cloud Storage is read-only).

It certainly worked fine when I created the instance with full access scopes.

For the details on Cloud Storage scopes, you can refer to this article.
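As a sketch of how full access scopes can be granted at creation time (the instance name and zone are placeholders; `storage-full` is a gcloud scope alias for read/write Cloud Storage access, in contrast to the default read-only scope):

```shell
# Placeholder instance name and zone; requires an authenticated gcloud setup.
# 'storage-full' grants read/write access to Cloud Storage, unlike the
# default read-only storage scope.
gcloud compute instances create my-instance \
    --zone us-central1-b \
    --scopes storage-full
```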

Kaihu Chen

Oct 31, 2016, 4:47:37 PM
to gce-discussion

Carlos,

Thanks to your pointer, I am now able to get most of it working, as follows:

1. As you pointed out, I need to create a new instance with full access scopes.
2. In the fstab entry I also have to add 'allow_other', since otherwise the bucket is mounted by root and is invisible to other non-root users.
3. When signed in as a non-root user, I have to use 'sudo' to update files (e.g., sudo touch, sudo nano, etc.), otherwise permission is denied.

This works well enough for me. The one loose end left is that I am unable to make 'chmod' work even with sudo, which is needed for running shell scripts. So far 'sudo chmod' still fails silently.

Carlos (Cloud Platform Support)

Nov 1, 2016, 3:19:10 PM
to gce-discussion
Hi Kaihu,

chmod() is unsupported by gcsfuse, as documented here. Can you provide additional details on your use case?

Kaihu Chen

Nov 1, 2016, 6:44:31 PM
to gce-discussion
Carlos, I see. 

My use case is that I have created some shell scripts in my bucket to make repetitive tasks easier to handle, but at this time I am unable to run them because the execute bit cannot be turned on. I want these scripts to live in the bucket so that they can be shared among many VM instances (otherwise I would have to copy/sync them between instances all the time). Hope this helps.

Alex Martelli

Nov 1, 2016, 6:56:16 PM
to Kaihu Chen, gce-discussion

Assuming, for example, that they're bash scripts, `bash thescriptpath` can of course run them (as long as the user running bash can read the script at all), and similarly for any scripting language -- perl, python, whatever. `chmod +x` is just a convenience, not a must-have.
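Alex's workaround can be sketched as follows; the script path here is a stand-in for a file inside the mounted bucket.

```shell
# A gcsfuse mount does not support chmod, so a script stored in the bucket
# cannot be marked executable; invoking the interpreter directly avoids
# needing the execute bit at all.
SCRIPT="/tmp/demo-script.sh"   # stands in for a script inside the bucket
printf 'echo hello from the bucket\n' > "$SCRIPT"

# No 'chmod +x' required -- bash only needs read access:
bash "$SCRIPT"   # prints "hello from the bucket"
```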


Alex
 


On Tuesday, November 1, 2016 at 3:19:10 PM UTC-4, Carlos (Cloud Platform Support) wrote:
Hi Kaihu,

chmod() is unsupported by gcsfuse as documented here. Can you provide additional details on your use case? 


Kaihu Chen

Nov 1, 2016, 11:03:40 PM
to gce-discussion, Kaihu Chen
Alex, your suggestion works for what I needed. Thank you very much for your prompt help, and I am done here.

Stewart Bryson

Jan 28, 2017, 1:37:22 PM
to gce-discussion
The only problem with the rc.local approach is that it runs after all the systemctl/chkconfig stuff, which starts applications, services, etc. I need the mount point accessible before certain services are started.

An /etc/fstab auto-mount occurs before all the systemctl/chkconfig stuff.

It's not just as simple as having it mount automatically; it also matters when it mounts.
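One way to address this ordering problem, not discussed in this thread, is a systemd oneshot unit that performs the mount and is explicitly ordered before the dependent service. This is only a sketch: the unit name and `myapp.service` are hypothetical, and it assumes the fstab entry still carries 'noauto'.

```ini
# /etc/systemd/system/mnt-gcs.service -- hypothetical unit name.
# Assumes /etc/fstab has the gcsfuse entry with 'noauto'.
[Unit]
Description=Mount GCS bucket before dependent services
After=network-online.target
Wants=network-online.target
Before=myapp.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/mount /mnt/mybk
ExecStop=/bin/umount /mnt/mybk

[Install]
WantedBy=multi-user.target
```

With this in place, `systemctl enable mnt-gcs.service` would make the mount happen at boot, after networking is up but before the named service starts.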