Minimal RAM ISO gives squashfs_read_data failed to read block


Jan Robinson

Feb 2, 2021, 3:33:40 AM
to kiwi
Hello All

New to Kiwi, I am trying to build a minimal image that runs in RAM only, in order to deploy an image sent via ssh to this running SLES 15.2 instance.
This is a method we used with a SLES 11 mini image (not built with kiwi) that has been working for many years.
The image boots fine and can even do everything I want.
Commands used in the running system are:
lspci -mn, lsscsi, modprobe, parted, dmidecode - these often fail with the error below.
Receiving the image via ssh always works.

Sometimes this works 100% without the error, but it fails far too often.

Sample error, produced by just typing vi and pressing Enter (or lspci):

blk_update_request: I/O error, dev loop0, sector 310052 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
SQUASHFS error: squashfs_read_data failed to read block 0x97445ba
vi: error while loading shared libraries: /usr/lib/perl5/5.26.1/x86...so: cannot read file data
Input/output error

Type definitions used:
 <type image="iso" bootprofile="default" bootkernel="std" flags="dmsquash" firmware="efi" hybridpersistent_filesystem="ext4" hybridpersistent="true"/>

 <type image="iso" bootprofile="default" bootkernel="std" flags="overlay" firmware="efi" hybridpersistent_filesystem="ext4" hybridpersistent="true"/>

df -hP
Filesystem           Size  Used Avail Use% Mounted on
devtmpfs             126G     0  126G   0% /dev
tmpfs                126G  4.0K  126G   1% /dev/shm
tmpfs                126G   19M  126G   1% /run
tmpfs                126G     0  126G   0% /sys/fs/cgroup
/dev/sr0             303M  303M     0 100% /run/initramfs/live
/dev/mapper/live-rw  1.3G  924M  243M  80% /
tmpfs                 26G     0   26G   0% /run/user/0

Kernel: 5.3.18-24.46-default
kiwi-systemdeps-bootloaders-9.23.5-1.1.x86_64
kiwi-man-pages-9.23.5-1.1.x86_64
kiwi-systemdeps-filesystems-9.23.5-1.1.x86_64
python3-kiwi-9.23.5-1.1.x86_64
kiwi-tools-9.23.5-1.1.x86_64

Is there anything to add to the type definition, or maybe another method to use?

Your support is appreciated.
Jan

Marcus Schäfer

Feb 2, 2021, 3:56:45 AM
to kiwi-...@googlegroups.com
Hi,

> blk_update_request: I/O error, dev loop0, sector 310052 op 0x0:(READ)
> flags 0x0 phys_seg 1 prio class 0
> SQUASHFS error: squashfs_read_data failed to read block 0x97445ba
> vi: error while loading shared libraries: /usr/lib/perl5/5.26.1/x86...so:
> cannot read file data
> Input/output error
>
> Type definitions used.
>
> <type image="iso" bootprofile="default" bootkernel="std"
> flags="dmsquash" firmware="efi" hybridpersistent_filesystem="ext4"
> hybridpersistent="true"/>
>
> <type image="iso" bootprofile="default" bootkernel="std"
> flags="overlay" firmware="efi" hybridpersistent_filesystem="ext4"
> hybridpersistent="true"/>

Ok, your setup looks pretty much standard, and you get a blk_update_request
I/O error on read. This usually points to an issue with the low-level
storage hardware or an incompatibility in the squashfs format.

Therefore my first question is: where do you run this from?
What is /dev/sr0: a USB stick, an SD card, a CD/DVD ... something else?

To check if the issue is related to the storage, please use the
exact same image and run it in a virtual environment via qemu:

$ qemu-kvm -m 4096 -cdrom your-image.iso
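(If KVM turns out to be unavailable on the test machine, the same check can be run with plain software emulation; it is slower but needs no kvm kernel module:)

$ qemu-system-x86_64 -m 4096 -cdrom your-image.iso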

Do you see the same errors when you try to read/write files?

Last question: the image build is for SLE 15, if I got that right.
What system was used to build that image?

Looking forward to your feedback

Regards,
Marcus
--
Public Key available via: https://keybase.io/marcus_schaefer/key.asc
keybase search marcus_schaefer
-------------------------------------------------------
Marcus Schäfer (Res. & Dev.) SUSE Software Solutions Germany GmbH
Tel: 0911-740 53 0 Maxfeldstrasse 5
FAX: 0911-740 53 479 D-90409 Nürnberg
HRB: 21284 (AG Nürnberg) Germany
GF: Felix Imendörffer, Jane Smithard, Graham Norton
http://www.suse.de
-------------------------------------------------------

Jan Robinson

Feb 2, 2021, 5:32:15 AM
to kiwi
Hi Marcus
 

> This usually points to an issue with the low-level
> storage hardware or an incompatibility in the squashfs format.

I have a feeling about this - could be.

> Therefore my first question is: where do you run this from?
> What is /dev/sr0: a USB stick, an SD card, a CD/DVD ... something else?

It runs in memory. The ISO is booted via iDRAC on a Dell server; no CD, USB stick, or SD card.
 
> To check if the issue is related to the storage, please use the
> exact same image and run it in a virtual environment via qemu:
>
> $ qemu-kvm -m 4096 -cdrom your-image.iso

I do not have a virtual environment and have never used qemu.

$ qemu-kvm -m 4096 -cdrom /global/local/myresult/BMW_MINIOS_Sle15.x86_64-15.2.iso
Could not access KVM kernel module: No such file or directory
qemu-system-x86_64: failed to initialize KVM: No such file or directory

Would installing one of these help?

S  | Name                       | Summary                                       | Type
---+----------------------------+-----------------------------------------------+--------
   | docker-machine-driver-kvm2 | KVM driver for docker-machine                 | package
   | ikvm                       | A JVM Based on the Mono Runtime               | package
   | kvm_server                 | KVM Host Server                               | pattern
   | kvm_stat                   | Monitoring Tool for KVM guests                | package
   | kvm_tools                  | KVM Virtualization Host and tools             | pattern
   | patterns-server-kvm_server | KVM Host Server                               | package
   | patterns-server-kvm_tools  | KVM Virtualization Host and tools             | package
i+ | qemu-kvm                   | Wrapper to enable KVM acceleration under QEMU | package
   | system-role-kvm            | Server KVM role definition                    | package

 
> Do you see the same errors when you try to read/write files?

Writing is fine - only small writes are used.
The loop0 messages are always reported on reads: "cannot read".

> Last question: the image build is for SLE 15, if I got that right.
> What system was used to build that image?

The kiwi build server runs SLES 15.2 with kernel 5.3.18-24.37-default.
The same system was used to build the ISO.

Thanks so much.

Marcus Schäfer

Feb 2, 2021, 5:42:15 AM
to kiwi-...@googlegroups.com
Hi,

> Therefore my first question is: where do you run this from?
> What is /dev/sr0: a USB stick, an SD card, a CD/DVD ... something else?
>
> It runs in memory. The ISO is booted via iDRAC on a Dell server; no CD,
> USB stick, or SD card.

Ok, so it is outside our control how Dell maps the given .iso file
into memory?

> To check if the issue is related to the storage, please use the
> exact same image and run it in a virtual environment via qemu:
> $ qemu-kvm -m 4096 -cdrom your-image.iso
>
> I do not have a virtual environment and have never used qemu.
> $ qemu-kvm -m 4096 -cdrom
> /global/local/myresult/BMW_MINIOS_Sle15.x86_64-15.2.iso
> Could not access KVM kernel module: No such file or directory
> qemu-system-x86_64: failed to initialize KVM: No such file or directory

This means the kvm module is not loaded or your machine has no
virtualization capabilities. Try as root:

$ modprobe kvm

==> on Intel hardware this should look like the following

$ lsmod | grep kvm

kvm_intel             270336  0
kvm                   790528  1 kvm_intel
irqbypass              16384  1 kvm

If you see this, run the command again:

$ sudo qemu-kvm -m 4096 -cdrom \
  /global/local/myresult/BMW_MINIOS_Sle15.x86_64-15.2.iso
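(Side note: if "modprobe kvm" alone does not lead to the lsmod output shown above, the CPU-specific module may need to be loaded explicitly; it is kvm_intel on Intel and kvm_amd on AMD:)

$ modprobe kvm_intel    # on AMD hardware: modprobe kvm_amd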


If the kvm module can't load, check whether your CPU supports it:

$ cat /proc/cpuinfo | grep vmx

That should highlight the "vmx" flag. If you can't see it, you
are on a machine without hardware virtualization... which would
be ... odd ... these days.
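(On AMD hardware the flag to look for is "svm" instead:)

$ grep svm /proc/cpuinfo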

> The kiwi build server runs SLES 15.2 with kernel 5.3.18-24.37-default.
> The same system was used to build the ISO.

Ok, that's perfectly fine then.

I really think this is related to how Dell maps the ISO.

The kvm-based check should not show any errors. If that is the case,
we know where to look.

Thanks

Jan Robinson

Feb 2, 2021, 6:22:08 AM
to kiwi
Hi Marcus,
 
> $ lsmod | grep kvm
>
> kvm_intel             270336  0
> kvm                   790528  1 kvm_intel
> irqbypass              16384  1 kvm

Modules loaded:

$ lsmod | grep kvm
kvm_intel             270336  0
kvm                   786432  1 kvm_intel
irqbypass              16384  1 kvm

itadell101:/global/kiwi/sle15 # qemu-kvm -m 4096 -cdrom /global/local/mypxe-result/BMW_MINIOS_Sle15.x86_64-15.2.iso
VNC server running on 127.0.0.1:5900

This is where it sits.

Thanks so much, 
Jan


Marcus Schäfer

Feb 2, 2021, 6:28:11 AM
to kiwi-...@googlegroups.com
Hi,

> itadell101:/global/kiwi/sle15 # qemu-kvm -m 4096 -cdrom
> /global/local/mypxe-result/BMW_MINIOS_Sle15.x86_64-15.2.iso
> VNC server running on 127.0.0.1:5900
>
> This is where it sits.

Is your image configured to use the serial console? Try:

$ sudo qemu-kvm -m 4096 -serial stdio -cdrom \
/global/local/mypxe-result/BMW_MINIOS_Sle15.x86_64-15.2.iso
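(Side note: output only appears on the serial line if the guest kernel's command line contains something like console=ttyS0; with a live ISO that can usually be appended at the boot menu.)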
[Six messages were deleted]

Jan Robinson

Feb 3, 2021, 5:19:36 AM
to kiwi-...@googlegroups.com
Hi.

Replies to the kiwi group get deleted: https://groups.google.com/g/kiwi-images

Is there something I am doing wrong?

This worked:
   $ sudo qemu-kvm -m 4096 -serial stdio -cdrom  path to ISO

Thanks so much,
Jan

Marcus Schäfer

Feb 3, 2021, 5:29:00 AM
to kiwi-...@googlegroups.com
Hi,

> Replies to the kiwi group get deleted:
> https://groups.google.com/g/kiwi-images

Sorry, there were messages in the pending queue... no idea why. I approved them.

> This worked:
> $ sudo qemu-kvm -m 4096 -serial stdio -cdrom path to ISO

Ok, so that means there is nothing wrong with the ISO itself, but with
the way the Dell system manages memory-mapped ISO files. There is
little I can do now. This should be discussed with your hardware vendor.

Jan Robinson

Feb 3, 2021, 5:53:28 AM
to kiwi
Thank you for the support, Marcus; appreciated.
All the best.

Jan Robinson

Feb 12, 2021, 3:00:42 AM
to kiwi
Just an update, in case anyone hits the same issue.

For a RAM-only boot, the type definition below does not give the squashfs error on the loop0 device.
With flags="overlay" it still fails the read on loop0.

This, however, works:
<type image="iso" bootprofile="default" bootkernel="std" flags="dmsquash" kernelcmdline="splash rd.live.ram=1 rd.writable.fsimg=1 rd.live.overlay=auto"/>
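(For anyone who wants to verify the RAM-only behaviour in the booted system, two quick checks; this assumes losetup from util-linux is present in the image:)

$ cat /proc/cmdline   # should show the rd.live.* options
$ losetup -a          # with rd.live.ram=1 the loop file should live under /run, not on /dev/sr0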

Marcus Schäfer

Feb 12, 2021, 4:53:08 AM
to kiwi-...@googlegroups.com
Hi,

> Just an update, in case anyone hits the same issue.
>
> For a RAM-only boot, the type definition below does not give the
> squashfs error on the loop0 device.
>
> With flags="overlay" it still fails the read on loop0.
>
> This, however, works:
>
> <type image="iso" bootprofile="default" bootkernel="std"
> flags="dmsquash" kernelcmdline="splash rd.live.ram=1
> rd.writable.fsimg=1 rd.live.overlay=auto"/>

Interesting. With flags="dmsquash" you switch to the upstream
dmsquash module, which might do the overlay differently. It would
be interesting for me to know how the overlay is done in this
case. Is this system still using the kernel overlayfs, or device mapper?

It would be great if you could send the output of:

lsblk

and

cat /proc/mounts

and

dmsetup ls --tree

Thanks
signature.asc

Jan Robinson

Feb 12, 2021, 7:42:11 AM
to kiwi
Marcus

A pleasure; here is the output.

This running instance's life is very short, only a few minutes; it is just used to install/set up the real image.
Is your question "Is this system still using the kernel overlayfs or devmapper?" answered in the output below?


# lsblk
NAME      MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop0       7:0    0 234.1M  1 loop
loop1       7:1    0   1.2G  0 loop
└─live-rw 254:0    0   1.2G  0 dm   /
sr0        11:0    1   272M  0 rom  /run/initramfs/live
nvme0n1   259:0    0   2.9T  0 disk
nvme1n1   259:1    0   2.9T  0 disk
nvme3n1   259:2    0   2.9T  0 disk
nvme2n1   259:3    0   2.9T  0 disk
nvme4n1   259:4    0   2.9T  0 disk


# cat /proc/mounts
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
devtmpfs /dev devtmpfs rw,nosuid,size=131735748k,nr_inodes=32933937,mode=755 0 0
securityfs /sys/kernel/security securityfs rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /run tmpfs rw,nosuid,nodev,mode=755 0 0
tmpfs /sys/fs/cgroup tmpfs ro,nosuid,nodev,noexec,mode=755 0 0
cgroup /sys/fs/cgroup/unified cgroup2 rw,nosuid,nodev,noexec,relatime 0 0
cgroup /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatime,xattr,name=systemd 0 0
pstore /sys/fs/pstore pstore rw,nosuid,nodev,noexec,relatime 0 0
none /sys/fs/bpf bpf rw,nosuid,nodev,noexec,relatime 0 0
cgroup /sys/fs/cgroup/perf_event cgroup rw,nosuid,nodev,noexec,relatime,perf_event 0 0
cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,nosuid,nodev,noexec,relatime,cpu,cpuacct 0 0
cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio 0 0
cgroup /sys/fs/cgroup/memory cgroup rw,nosuid,nodev,noexec,relatime,memory 0 0
cgroup /sys/fs/cgroup/net_cls,net_prio cgroup rw,nosuid,nodev,noexec,relatime,net_cls,net_prio 0 0
cgroup /sys/fs/cgroup/cpuset cgroup rw,nosuid,nodev,noexec,relatime,cpuset 0 0
cgroup /sys/fs/cgroup/rdma cgroup rw,nosuid,nodev,noexec,relatime,rdma 0 0
cgroup /sys/fs/cgroup/pids cgroup rw,nosuid,nodev,noexec,relatime,pids 0 0
cgroup /sys/fs/cgroup/freezer cgroup rw,nosuid,nodev,noexec,relatime,freezer 0 0
cgroup /sys/fs/cgroup/devices cgroup rw,nosuid,nodev,noexec,relatime,devices 0 0
cgroup /sys/fs/cgroup/hugetlb cgroup rw,nosuid,nodev,noexec,relatime,hugetlb 0 0
/dev/sr0 /run/initramfs/live iso9660 ro,relatime,nojoliet,check=s,map=n,blocksize=2048 0 0
/dev/mapper/live-rw / ext4 rw,relatime 0 0
systemd-1 /proc/sys/fs/binfmt_misc autofs rw,relatime,fd=23,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=79965 0 0
mqueue /dev/mqueue mqueue rw,relatime 0 0
debugfs /sys/kernel/debug debugfs rw,relatime 0 0
hugetlbfs /dev/hugepages hugetlbfs rw,relatime,pagesize=2M 0 0
tmpfs /run/user/0 tmpfs rw,nosuid,nodev,relatime,size=26350984k,mode=700 0 0

# dmsetup ls --tree
live-rw (254:1)
 └─ (7:1)

# df -hP
Filesystem           Size  Used Avail Use% Mounted on
devtmpfs             126G     0  126G   0% /dev
tmpfs                126G  4.0K  126G   1% /dev/shm
tmpfs                126G  1.5G  125G   2% /run
tmpfs                126G     0  126G   0% /sys/fs/cgroup
/dev/sr0             272M  272M     0 100% /run/initramfs/live
/dev/mapper/live-rw  1.2G  907M  235M  80% /
tmpfs                 26G     0   26G   0% /run/user/0

/dev/disk/by-uuid # ls -l
total 0
lrwxrwxrwx 1 root root  9 Feb 12 15:34 2021-02-12-11-49-30-00 -> ../../sr0
lrwxrwxrwx 1 root root 10 Feb 12 15:34 2fa9d64a-5164-4c70-960c-67e3c8b8f3e0 -> ../../dm-0

/dev/disk/by-uuid # lsscsi
[10:0:0:0]   cd/dvd  Linux    Virtual CD       0399  /dev/sr0
[10:0:0:1]   disk    Linux    Virtual Floppy   0399  /dev/sdb

Regards.

Marcus Schäfer

Feb 12, 2021, 9:31:22 AM
to kiwi-...@googlegroups.com
Hi,

> Is your question "Is this system still using the kernel overlayfs or
> devmapper?" answered in the output below?

Yes, see here:

> /dev/mapper/live-rw / ext4 rw,relatime 0 0

> # dmsetup ls --tree
> live-rw (254:1)
>  └─ (7:1)

There is no overlayfs, so the "classic" device mapper method
is used, which seems to be more stable compared to overlayfs.
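(To look at that mapping in more detail, dmsetup can print the snapshot's table; a minimal check, assuming dmsetup is available in the live system:)

$ dmsetup table live-rw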

So you said that with overlayfs (the kiwi default) you get block I/O
errors? This is interesting, because recently another user reported
the same problem when using a live ISO through a memory-mapped file
(a virtual CD-ROM drive attached via the hardware interface).

When I tested your image build I used kvm, which also means the guest
memory was managed via qemu-kvm. I could not see these sorts of issues
in my tests.

My gut feeling tells me something is wrong in overlayfs.