Unable to mount persistent storage; error message is "cannot mount /dev/sdc read-only"


Steve Lorimer

Apr 2, 2019, 9:46:58 AM
to gce-discussion
I have a VM instance with persistent storage attached:

    'disks': [
        ... 
        {
            'autoDelete': False,
            'boot': False,
            'deviceName': 'foo-data',
            'index': 2,
            'interface': 'SCSI',
            'kind': 'compute#attachedDisk',
            'mode': 'READ_ONLY',
            'type': 'PERSISTENT',
        },

The disk shows up in my VM:

NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop0     7:0    0 91.1M  1 loop /snap/core/6531
loop1     7:1    0   57M  1 loop /snap/google-cloud-sdk/76
loop2     7:2    0 56.7M  1 loop /snap/google-cloud-sdk/75
loop3     7:3    0 89.3M  1 loop /snap/core/6673
sda       8:0    0   10G  0 disk 
├─sda1    8:1    0  9.9G  0 part /
├─sda14   8:14   0    4M  0 part 
└─sda15   8:15   0  106M  0 part /boot/efi
sdb       8:32   0    1G  1 disk 
sdc       8:48   0    2G  1 disk 


It has an ext4 filesystem which I previously created on it:

$ lsblk -f /dev/sdc 
NAME FSTYPE LABEL UUID                                 MOUNTPOINT
sdc  ext4         142b28fd-c886-4182-892d-67fdc34b522a 

However, mounting it fails:

$ sudo mkdir /mnt/data
$ sudo mount /dev/sdc /mnt/data
mount: /mnt/data: cannot mount /dev/sdc read-only.

I have previously created a filesystem, successfully mounted the disk and then detached it from the VM instance.

In fact there is another disk for which I used the exact same commands to create the filesystem, mount, etc., and it succeeded in mounting:

$ mount | grep dev\/sd
/dev/sda1 on / type ext4 (rw,relatime,data=ordered)
/dev/sda15 on /boot/efi type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro)
/dev/sdb on /mnt/infra type ext4 (ro,relatime,discard,data=ordered)

I am lost as to what is causing this error. Can anyone help please?

In case it helps, here is fdisk output:

$ sudo fdisk -l
Disk /dev/sda: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 1D8CAF38-A6DB-4E55-B417-D9B0E481A3A5

Device      Start      End  Sectors  Size Type
/dev/sda1  227328 20971486 20744159  9.9G Linux filesystem
/dev/sda14   2048    10239     8192    4M BIOS boot
/dev/sda15  10240   227327   217088  106M EFI System

Partition table entries are not in disk order.

Disk /dev/sdb: 1 GiB, 1073741824 bytes, 2097152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/sdc: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Justin Reiners

Apr 2, 2019, 10:38:29 AM
to Steve Lorimer, gce-discussion
I'm honestly curious: why are you not partitioning the drives?

I've actually never done that in 20+ years of Linux (on ext4 or XFS, that is). I'm not sure whether it works or not, but no partition table just doesn't feel right :)

I've always created and written the single partition to each disk, and then

mkfs.ext4 /dev/sdb1
mkfs.ext4 /dev/sdc1 
mount -t ext4 /dev/sdb1 /mountpoint
mount -t ext4 /dev/sdc1 /mountpoint2

mounting in fstab with something like:
/dev/sdb1   /mountpoint    ext4  defaults 0 0
/dev/sdc1   /mountpoint2  ext4 defaults 0 0 
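A minimal sketch of that partition-then-format workflow, assuming the disk shows up as /dev/sdb and using the mount point names from above (parted shown here; fdisk works just as well):

```shell
# Write a GPT label and one partition spanning the whole disk (destroys existing data!)
sudo parted /dev/sdb --script mklabel gpt mkpart primary ext4 0% 100%

# Format the new partition and mount it
sudo mkfs.ext4 /dev/sdb1
sudo mkdir -p /mountpoint
sudo mount -t ext4 /dev/sdb1 /mountpoint
```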






--
Justin Reiners

Steve Lorimer

Apr 2, 2019, 12:47:49 PM
to Justin Reiners, gce-discussion
I just followed the instructions on here: https://cloud.google.com/compute/docs/disks/add-persistent-disk


Format the disk. You can use any file format that you need, but the most simple method is to format the entire disk with a single ext4 file system and no partition table. If you resize the zonal persistent disk later, you can resize the file system without having to modify disk partitions.
sudo mkfs.ext4 -m 0 -F -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/[DEVICE_ID]

kim...@google.com

Apr 2, 2019, 11:38:39 PM
to gce-discussion
I think the key is in this line of the GCE configuration:
    'mode': 'READ_ONLY'
This means the disk is attached read-only in GCE.

To resolve this, you either need to attach the disk as read/write, or you need to mount the ext4 filesystem as read-only. One option for that is mounting like this:
    mount -oro,noload /dev/sdc /mnt/data
Or setting the device to read only inside the guest:
    blockdev --setro /dev/sdc
The latter has the advantage that you don't need to set any ext4-specific options; it works across all filesystems.
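In shell terms, the two workarounds described above (assuming the read-only disk is /dev/sdc and the mount point from earlier in the thread) would look something like:

```shell
# Option 1 (ext4-specific): mount read-only and skip the journal replay
sudo mount -o ro,noload /dev/sdc /mnt/data

# Option 2 (per the suggestion above, filesystem-agnostic): mark the block
# device read-only inside the guest before mounting
sudo blockdev --setro /dev/sdc
sudo mount -o ro /dev/sdc /mnt/data
```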

Thanks,
Dan


Dan Kimmel

Apr 3, 2019, 6:12:07 PM
to Steve Lorimer, gce-discussion
Yep, that's exactly my guess for what happened (ext4 trying to replay the journal).

Dan

On Wed, Apr 3, 2019 at 3:07 PM Steve Lorimer <steve....@gmail.com> wrote:
Hi Dan

Ah - thank you - it's the `noload` option which I was missing!

My intention is to mount it read-only.

Whilst it would be more correct to specify `-o ro` as an option to mount, if I omit that, the drive will typically still mount, just in read-only mode (and mount will emit a warning to that effect)... 

However, and I think this is what bit me in the a**, I see in the manpage for mount the following for `noload`

Note that, depending on the filesystem type, state and kernel behavior, the system may still write to the device.  For example, ext3 and ext4 will replay the journal if the filesystem is dirty.  

Presumably the filesystem was dirty, and the system tried to write to the disk during mounting, which is why it failed?

This theory would support other observed behaviour, which was I reattached the disk in read/write mode, then detached it and reattached it again in read-only mode... and now I can mount the disk.

Would you agree that this is likely what happened?

Thanks for the help!

Kind regards
Steve



Charlie Reitzel

Jan 8, 2020, 12:45:05 PM
to gce-discussion
Hi All,

Just wanted to drop a line to say thanks.   I experienced this problem in my GKE cluster.   I don't often mess with this layer of Linux (esp in a cloud instance), but I was able to muddle my way through, thanks to you all.

Question: Should I report a bug if the volume in question was created from a snapshot?  I.e., why would an ext4 journal write be required for a brand new persistent disk?  Fwiw, I tried re-creating the read-only disk from a fresh snapshot of the source (writable) disk, with no effect.

For others who may, like me, lack experience with Linux mount commands, here was my procedure to mount and unmount:

1.  Attach the persistent disk to a shell VM instance in the same zone/region in read-only mode.
     Note: The GCE console would not allow writable on the 1st try.
2.  Mount the (now) attached block device on a mount point using the instructions here:


SSH to VM and run the following.

$ sudo lsblk
$ sudo mkdir -p /opt/foo/content
# Assuming block device is /dev/sdb
$ sudo mount -o ro,discard,defaults /dev/sdb /opt/foo/content

Note, this will fail with the ro (read-only) option, reproducing the original issue.  Trying again without ro still fails, because the disk is attached read-only.

Exit VM.

3. Detach the extra disk.  Re-attach it. This time (for me, at least) the console allowed writable access.

Repeat the commands above, this time without the ro mount option.  It will work.  Unmount, exit the VM.

4. Detach extra disk.  Get on with your life ... :--)

hth and thanks for any pointers re bug report.

--Charlie

Anthony (Google Cloud Support)

Jan 8, 2020, 4:03:05 PM
to gce-discussion
Hi Charlie,

Thanks for your input.

You can file a public issue through the following link.

Regarding your inquiry, "why would an ext4 journal write be required for a brand new persistent disk?": normally a brand new persistent disk is not formatted with a journaling file system and thus would be blank. Think of it as a physical hard drive, which upon purchase is normally not formatted until the user formats it.

I hope this helps.

Charlie Reitzel

Jan 8, 2020, 5:49:48 PM
to Anthony (Google Cloud Support), gce-discussion
Hi Anthony,

Thanks for getting back.   I see what you mean, about a brand new, unformatted disk.  However, this was a brand new disk created from a snapshot of another, writable disk.   Does that change anything?

Put another way, should I plan on this mount writable / unmount procedure as a normal part of life with shared, read only disks?

Thanks,
Charlie



--
Charlie Reitzel
CTO, DRUIDapp, Inc.


Theodore Y. Ts'o

Jan 8, 2020, 9:58:52 PM
to Charlie Reitzel, Anthony (Google Cloud Support), gce-discussion
On Wed, Jan 08, 2020 at 05:49:30PM -0500, Charlie Reitzel wrote:
>
> Thanks for getting back. I see what you mean, about a brand new,
> *unformatted* disk. However, this was a brand new disk *created from a
> snapshot* of another, writable disk. Does that change anything?
>
> Put another way, should I plan on this mount writable / unmount procedure
> as a normal part of life with shared, read only disks?

Hi,

I'm the ext4 maintainer, so let me correct some misinformation that
seems to be floating around on this thread.

If an ext4 file system is not cleanly unmounted, the journal needs to
be replayed before it can be safely accessed. This is why normally,
when a file system is mounted read-only, if the file system was not
cleanly unmounted, the Linux kernel will replay the journal before
proceeding to complete the read-only mount. If the device is in
read-only mode, the journal replay will fail, and the mount will be
aborted.

You can override this by using the noload mount option, but that really
is not a great idea: while it does suppress the journal replay, the
file system may not be in a consistent state. This can lead to files
appearing corrupted when they are read, and the kernel may detect a
file system inconsistency so serious that you will see "EXT4-fs error"
messages in the system log.

The root cause is that the snapshot was taken while the disk was
mounted. Don't do that. Instead, either unmount the disk or remount it
read-only before taking a snapshot.
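A sketch of that snapshot-safe sequence, assuming a hypothetical disk named foo-data mounted at /mnt/data in zone us-central1-a (all names here are placeholders):

```shell
# On the VM: quiesce the filesystem before snapshotting
sync
sudo umount /mnt/data        # or: sudo mount -o remount,ro /mnt/data

# Take the snapshot (disk, zone, and snapshot names are placeholders)
gcloud compute disks snapshot foo-data \
    --zone=us-central1-a --snapshot-names=foo-data-snap

# Remount read/write afterwards
sudo mount /dev/sdc /mnt/data
```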

Alternatively, you can create a disk from the snapshot, but then mount
it read/write, and run e2fsck -fy on the disk to make sure the journal
is replayed and the file system's metadata checked to make sure the
file system is in a consistent state. Then you can detach the disk,
and reattach it in read-only mode, so that the disk can be shared
across multiple VM's.

Regards,

- Ted

Charlie Reitzel

Jan 9, 2020, 1:25:10 AM
to Theodore Y. Ts'o, Anthony (Google Cloud Support), gce-discussion
Thanks for clarifying, Ted.  I'll be good and unmount before creating new snapshots!

Digil (Google Cloud Platform Support)

Jan 9, 2020, 10:03:26 AM
to gce-discussion
Supporting the other community member's detailed explanation, you may also want to refer to the help center article about creating a snapshot and follow its best practices before doing so.

Theodore Y. Ts'o

Jan 9, 2020, 12:00:12 PM
to Charlie Reitzel, Anthony (Google Cloud Support), gce-discussion
On Wed, Jan 08, 2020 at 09:58:40PM -0500, Theodore Y. Ts'o wrote:
> Alternatively, you can create a disk from the snapshot, but then mount
> it read/write, and run e2fsck -fy on the disk to make sure the journal
> is replayed

Correction; sorry, I was typing too fast last night.

Either mount it read/write and then unmount it to replay the journal
OR run e2fsck -fy on the disk....

(You can't run e2fsck -fy on a mounted file system.)
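Putting the corrected steps together, a sketch assuming the disk created from the snapshot appears as /dev/sdb:

```shell
# With the disk attached read/write:

# Either mount and unmount once, letting the kernel replay the journal...
sudo mount /dev/sdb /mnt/tmp
sudo umount /mnt/tmp

# ...or replay the journal and check metadata without mounting
sudo e2fsck -fy /dev/sdb

# Then detach, re-attach in READ_ONLY mode, and mount read-only on each VM
sudo mount -o ro /dev/sdb /mnt/data
```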

- Ted

Charlie Reitzel

Jan 9, 2020, 1:42:26 PM
to Theodore Y. Ts'o, Anthony (Google Cloud Support), gce-discussion
No worries, I just unmounted the source disk before creating the snapshot.  It was quick.  However, I found I had to stop and start the VM before I could remount the same disk (rw) after creating the snapshot.

Not a big problem, in my case.  Just a moment of panic when, after an apparently successful remount, there was no data!  I tried detaching and re-attaching the disk, without any change.  Even creating a new disk (from a snapshot) and attaching that did not work.  Luckily, the original disk was fine after I stopped and started the VM from the console.
