
kvm-arm: qemu always starts with drive in read-only mode


Guido Haase

Aug 19, 2015, 2:00:03 PM
Hi folks,

once again I need your help.

On my cubietruck I installed jessie on microSD card.

My first goal is to get a VM running with kvm-arm and CPU based hw
virtualization started by qemu (for testing), later then started and
administrated by virsh.

Having installed the host system as described here

https://www.debian.org/releases/jessie/armhf/ch05s01.html.en (section 5.1.5)

Afterwards I installed qemu, libvirt-bin, virtinst, virt-top and
bridge-utils. I also checked some dmesg output to verify that the
virtualization options of the CPU are ready for use and kvm-arm is
available:

dmesg |grep SMP
SMP: Total of 2 processors activated (96.00 BogoMIPS).

dmesg |grep CPU
CPU: All CPU(s) started in HYP mode.
CPU: Virtualization extensions available.

dmesg |grep kvm
kvm [1]: interrupt-controller@1c84000 IRQ25
kvm [1]: timer IRQ27
kvm [1]: Hyp mode initialized successfully

I built a jessie bootstrap and put it into an *.img file so qemu can
use it. As hardware to virtualize with qemu I chose a Versatile Express
A15 board. I have a special dtb file and kernel built with

General setup -> Configure standard kernel features (expert users)
General setup -> open by fhandle syscalls
Enable the block layer -> Support for large (2TB+) block devices
and files

for this VM.

After I started the system by command

qemu-system-arm -enable-kvm -smp 1 -m 256 -M vexpress-a15 -cpu host
-kernel /home/guido/kvm-arm/jessie/vexpress-zImage -dtb
/home/guido/kvm-arm/jessie/vexpress-v2p-ca15-tc1.dtb -append
"root=/dev/vda console=ttyAMA0 rootwait" -drive
if=none,file=/home/guido/kvm-arm/jessie/jessie-arm.img,id=factory
-device virtio-blk-device,drive=factory -net
nic,macaddr=02:fd:01:de:ad:34 -net tap -monitor null -serial stdio
-nographic

the VM starts and I'm able to log in. Sadly, the device jessie-arm.img,
i.e. /dev/vda, is always mounted read-only. After logging into the VM I
can remount it read/write, but this is no solution. I want the machine
to boot with the device in read/write mode, because some daemons
already try to write to it during the boot process.

Just for testing I tried the same with a standard open suse image:

qemu-system-arm -enable-kvm -smp 1 -m 256 -M vexpress-a15 -cpu host
-kernel /home/guido/kvm-arm/opensuse/vexpress-zImage -dtb
/home/guido/kvm-arm/opensuse/vexpress-v2p-ca15-tc1.dtb -append
"root=/dev/vda console=ttyAMA0 rootwait" -drive
if=none,file=/home/guido/kvm-arm/opensuse/opensuse-factory.img,id=factory
-device virtio-blk-device,drive=factory -net
nic,macaddr=02:fd:01:de:ad:34 -net tap -monitor null -serial stdio
-nographic

The result is the same: it boots fine, but with /dev/vda in read-only
mode.

Any idea(s) what's going wrong?

Many thanks in advance,

Guido

Riku Voipio

Aug 20, 2015, 5:50:04 AM
On Wed, Aug 19, 2015 at 07:44:34PM +0200, Guido Haase wrote:
> After I started the system by command
>
> qemu-system-arm -enable-kvm -smp 1 -m 256 -M vexpress-a15 -cpu host
> -kernel /home/guido/kvm-arm/jessie/vexpress-zImage -dtb
> /home/guido/kvm-arm/jessie/vexpress-v2p-ca15-tc1.dtb -append
> "root=/dev/vda console=ttyAMA0 rootwait" -drive
> if=none,file=/home/guido/kvm-arm/jessie/jessie-arm.img,id=factory
> -device virtio-blk-device,drive=factory -net
> nic,macaddr=02:fd:01:de:ad:34 -net tap -monitor null -serial stdio
> -nographic

> the VM starts and I'm able to log in. Sadly, the device jessie-arm.img,
> i.e. /dev/vda, is always mounted read-only. After logging into the VM I
> can remount it read/write, but this is no solution. I want the machine
> to boot with the device in read/write mode, because some daemons
> already try to write to it during the boot process.

You can change the -append line to "root=/dev/vda console=ttyAMA0 rootwait rw".
I'm not exactly sure why that doesn't happen by default.
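
For reference, the full invocation with that fix applied would look like
this (an editorial sketch; only the kernel command line in -append
changes, all other options and paths are taken from the command quoted
above):

```
qemu-system-arm -enable-kvm -smp 1 -m 256 -M vexpress-a15 -cpu host \
    -kernel /home/guido/kvm-arm/jessie/vexpress-zImage \
    -dtb /home/guido/kvm-arm/jessie/vexpress-v2p-ca15-tc1.dtb \
    -append "root=/dev/vda console=ttyAMA0 rootwait rw" \
    -drive if=none,file=/home/guido/kvm-arm/jessie/jessie-arm.img,id=factory \
    -device virtio-blk-device,drive=factory \
    -net nic,macaddr=02:fd:01:de:ad:34 -net tap \
    -monitor null -serial stdio -nographic
```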

Riku

Guido Haase

Aug 20, 2015, 8:00:02 AM

Hi all,

due to the replies from Gilberto and Riku (thanks!) I was able to
locate the reason why my /dev/vda was mounted in ro mode after booting,
and I was able to fix it now.

Having started the VM and logged into it, 'mount' shows me /dev/vda is
mounted in ro mode.

So my first step to get my rootfs writable was a

mount -o remount,rw /dev/vda /

because otherwise I couldn't store my changes in /etc/fstab.
Afterwards I opened /etc/fstab and saw just one line:

# UNCONFIGURED FSTAB FOR BASE SYSTEM

So I added this content:

# /etc/fstab: static file system information.
#
# <file system> <mount point> <type> <options> <dump> <pass>
proc      /proc  proc  defaults                                              0  0
/dev/vda  /      ext3  rw,relatime,errors=continue,barrier=0,data=writeback  0  1

Then I rebooted the VM and my rootfs now comes up with /dev/vda mounted
writable :-)

Again, thanks and greetinx

Guido

Guido Haase ♦ ITeHA

Aug 21, 2015, 11:40:03 PM
Hi folks,

just another question about the qemu binary package included in the
current/latest distribution of debian jessie for armhf.

In this blog post

http://blog.flexvdi.com/2015/03/17/enabling-kvm-virtualization-on-the-raspberry-pi-2/

I read about a patch for qemu which should ensure qemu runs my guest on
the core I isolated with the isolcpus option. The related patch is
based on qemu-2.2.0.tar.bz2, which can be downloaded from here

http://wiki.qemu-project.org/download/qemu-2.2.0.tar.bz2

The patch itself contains this content:

------------------------------------------------------------------------

--- qemu-2.2.0/kvm-all.c        2014-12-09 14:45:42.000000000 +0000
+++ qemu-2.2.0.bak/kvm-all.c    2015-03-17 16:44:26.090954294 +0000
@@ -1739,9 +1739,14 @@
 {
     struct kvm_run *run = cpu->kvm_run;
     int ret, run_ret;
+    cpu_set_t kvm_set;
 
     DPRINTF("kvm_cpu_exec()\n");
 
+    CPU_ZERO(&kvm_set);
+    CPU_SET(3, &kvm_set);
+    sched_setaffinity(0, sizeof(cpu_set_t), &kvm_set);
+
     if (kvm_arch_process_async_events(cpu)) {
         cpu->exit_request = 0;
         return EXCP_HLT;

------------------------------------------------------------------------

I think the author of the blog post mentioned above is using the qemu
2.2 sources. But, as I figured out, the current/latest qemu binary for
debian jessie on the armhf architecture is 1:2.1+dfsg-12+deb8u1, i.e.
it's based on the qemu 2.1 sources.

- How can I figure out if it's really necessary to patch the debian
qemu 1:2.1+dfsg-12+deb8u1 source code with the patch mentioned above?

- If so, how can I figure out whether the patch fits the qemu 2.1
sources used as the base for the latest debian qemu binary package?

- Are these tasks usually done by the package maintainer, and if so,
how can I ask/notify her/him?

Please keep in mind that I want to stay as close as possible to the
binaries distributed by the debian community and want to patch them
myself only if it's totally necessary.
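
One mechanical way to answer the second question is a `patch --dry-run`
against the unpacked source tree (e.g. from `apt-get source qemu`). The
sketch below demonstrates the mechanism on a made-up file and hunk; the
file names and patch content are illustrative, not the real qemu
sources:

```shell
#!/bin/sh
# Sketch: dry-run a patch against a source tree to see whether it
# still applies, without modifying anything.
set -e
workdir=$(mktemp -d)
mkdir -p "$workdir/src"
printf 'line1\nline2\n' > "$workdir/src/file.c"
# A tiny illustrative patch in the same -p1 form as the blog's patch.
cat > "$workdir/fix.patch" <<'EOF'
--- a/file.c
+++ b/file.c
@@ -1,2 +1,3 @@
 line1
+inserted
 line2
EOF
cd "$workdir/src"
if patch -p1 --dry-run < ../fix.patch >/dev/null; then
    result="patch applies"
else
    result="patch does NOT apply"
fi
echo "$result"
```

Against the real Debian tree the same dry-run, executed from the
unpacked qemu-2.1 source directory, would report whether the 2.2-based
hunk still finds its context.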

Again, thanks for any feedback in advance. I hope my questions are not
too stupid for you experienced guys here on the lists.

Greetinx

Guido

Tim Fletcher

Aug 23, 2015, 12:00:02 PM
On 22/08/15 04:15, Guido Haase ♦ ITeHA wrote:
> Hi folks,
>
> just another question about the qemu binary package included in the
> current/latest distribution of debian jessie for armhf.
>
> In this blog post
>
> http://blog.flexvdi.com/2015/03/17/enabling-kvm-virtualization-on-the-raspberry-pi-2/

There is a better way of doing this, basically you can use tasksel to
restrict QEMU to the core you have masked off.

I wrote it up here:
https://blog.night-shade.org.uk/2015/05/kvm-on-the-raspberry-pi2/





Tim Fletcher

Aug 24, 2015, 5:00:03 AM
I meant taskset not tasksel, but the details remain the same.
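
For reference, a minimal sketch of the taskset approach (the core
number and the stand-in command are illustrative; on the Raspberry Pi 2
setup from the blog post the isolated core would be 3):

```shell
#!/bin/sh
# Pin a process to one core with taskset instead of patching qemu.
# With isolcpus=3 on the host kernel command line, the real call
# would be:  taskset -c 3 qemu-system-arm -enable-kvm ...
# The demo below pins to core 0 so it runs on any machine, and reads
# back the resulting affinity from /proc.
affinity=$(taskset -c 0 grep Cpus_allowed_list /proc/self/status | cut -f2)
echo "pinned to CPU(s): $affinity"
```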

--

Guido Haase

Aug 26, 2015, 9:00:04 AM
Hi folks,

after having got my vexpress A15 machine up and running fine on my
Cubietruck with jessie for armhf using the qemu-system-arm command
line:

qemu-system-arm \
-enable-kvm \
-smp 1 \
-m 256 \
-M vexpress-a15 \
-cpu host \
-kernel /home/guido/kvm-arm/vexpress-zImage \
-dtb /home/guido/kvm-arm/vexpress-v2p-ca15-tc1.dtb \
-append "root=/dev/vda console=ttyAMA0 rootwait" \
-drive if=none,id=rootfs,file=/home/guido/kvm-arm/debootstr.img \
-device virtio-blk-device,drive=rootfs \
-net nic \
-net tap \
-monitor null \
-serial stdio \
-nographic

I tried to port it to a libvirt *.xml file. Well, after hours of
searching the web and trying some configs I found that this job is a
pain. My first problem is that no value corresponding to the libvirt
documentation is accepted for "vcpu", see below:

-----------------------------------------------------------------------

<domain id='1' type='kvm'
xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
...
<vcpu placement='static'>1</vcpu>
...
</domain>

root@hydra-tmp:/srv/kvm# virsh create ./test.xml
error: Failed to create domain from ./test.xml
error: internal error: early end of file from monitor: possible problem:
kvm_init_vcpu failed: Invalid argument

-----------------------------------------------------------------------

<domain id='1' type='kvm'
xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
...
<vcpu placement='static'>2</vcpu>
...
</domain>

virsh create ./test.xml
error: Failed to create domain from ./test.xml
error: internal error: early end of file from monitor: possible problem:
kvm_init_vcpu failed: Invalid argument

-----------------------------------------------------------------------

<domain id='1' type='kvm'
xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
...
<vcpu>2</vcpu>
...
</domain>

virsh create ./test.xml
error: Failed to create domain from ./test.xml
error: internal error: early end of file from monitor: possible problem:
kvm_init_vcpu failed: Invalid argument

-----------------------------------------------------------------------

<domain id='1' type='kvm'
xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
...
<vcpu>1</vcpu>
...
</domain>

virsh create ./test.xml
error: Failed to create domain from ./test.xml
error: internal error: early end of file from monitor: possible problem:
kvm_init_vcpu failed: Invalid argument

-----------------------------------------------------------------------

On

https://bugzilla.redhat.com/show_bug.cgi?id=1171501
http://lists.nongnu.org/archive/html/qemu-devel/2014-12/msg01093.html

I found this to be a qemu related error, but when starting the qemu vm
natively without libvirt, the vm runs fine and accepts the values
passed over by the command:

qemu-system-arm -enable-kvm -smp 1 ...
qemu-system-arm -enable-kvm -smp 2 ...

I have the following versions of libvirt and tools installed:

ii  libvirt-bin            1.2.9-9  armhf  programs for the libvirt library
ii  libvirt-clients        1.2.9-9  armhf  programs for the libvirt library
ii  libvirt-daemon         1.2.9-9  armhf  programs for the libvirt library
ii  libvirt-daemon-system  1.2.9-9  armhf  libvirt daemon configuration files
ii  libvirt0               1.2.9-9  armhf  library for interfacing with
                                          different virtualization systems
ii  python-libvirt         1.2.9-1  armhf  libvirt Python bindings


If there is no solution, I just need a way to start multiple qemu VMs
on boot. Sadly, the qemu-system-arm option -daemonize doesn't work
together with the options -serial stdio or -nographic.
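
One workaround sketch for the -daemonize limitation (an editorial
suggestion, untested on this setup): send the serial console to a file
with -serial file: and use -display none instead of -nographic, so qemu
no longer needs the controlling terminal. Replacing the last three
options of the command above would give roughly:

```
    ...
    -display none \
    -serial file:/var/log/qemu-vm1-console.log \
    -monitor null \
    -daemonize
```

The log file path is illustrative; the remaining machine, kernel and
drive options stay as in the command above.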

May be, anybody has a hint, a solution or a working .xml file for me.

Many thanks in advance and greetinx

Guido

Tim Fletcher

Aug 28, 2015, 5:20:03 AM
On 26 August 2015 at 13:55, Guido Haase <guw...@web.de> wrote:
> Hi folks,
>
> after having got my vexpress A15 machine up and running fine on my
> Cubietruck with jessie for armhf using the qemu-system-arm command
> line:
>
> If there is no solution, I just need a way to start multiple qemu VMs
> on boot. Sadly, the qemu-system-arm option -daemonize doesn't work
> together with the options -serial stdio or -nographic.
>
> Maybe anybody has a hint, a solution or a working .xml file for me.


The short answer is that libvirt is still mostly focused on x86 virtualisation; as you have found, there are pockets of work on VMs on ARM, but not much yet. The CPU issue for the Cubietruck is one that has bitten me before; if you dig in the QEMU mailing list you'll find bug reports from me regarding it too.

The way to solve your problem is to use standard unix tools to run processes in the background; screen may be the tool you are looking for in this instance.

screen starts a new shell in a process which can be detached from the terminal you are using with ^A d.

--

Guido Haase

Aug 28, 2015, 8:30:05 AM
Hi Gilberto, hi Tim,

many thanks to both of you for your feedback!

As proposed by Gilberto, I put the output of 'lshw -html' and 'cat
/proc/cpuinfo' into the files cubietruck_hw.html and
cubietruck_cpuinfo.txt and made them available for public viewing here:

https://drive.google.com/file/d/0B0IAP2w2eNedWVRUeHVUdUJKT2s/view?usp=sharing
https://drive.google.com/file/d/0B0IAP2w2eNedQzNJU2hQUFp6eVk/view?usp=sharing

Furthermore, I did a lot of tests to get a vm running with qemu
started by libvirt.

As I read and checked myself, the problem seems to be that the VE A15
has a different CPU (A15 w/ 2x A7) than the Cubietruck (A20 w/ 2x A7).
So the statements

<vcpu>1</vcpu>
<vcpu>2</vcpu>

seem to drive libvirt crazy.

Passing all CPU-related info directly through from the hardware,
regardless of which kind of CPU is installed on the board, can be
forced in qemu by choosing '-cpu host'.

Actually, the libvirt equivalent

<cpu mode='host-passthrough'/>

should do the same job. But, as I tested, it doesn't, at least in the
ARM implementation of libvirt. And there are tons of error descriptions
and bugfix proposals for this fault on the web.

Second, I tested several permutations of

<cpu mode='host-model'>
<model fallback='forbid'/>
<topology sockets='1' cores='2' threads='1'/>
</cpu>

but none of them worked for me - so far ...

Third, I gave

<cpu match='exact'>
<model fallback='allow'>A...</model>
<vendor>ARM</vendor>
<topology sockets='1' cores='2' threads='1'/>
</cpu>

a chance. The result is the same: starting the *.xml with virsh keeps
throwing CPU related errors ...

Last but not least, I came to believe the ARM implementation of libvirt
doesn't fit my needs (at least today) because it's not usable at the
moment.

So I decided to start my qemu-system-arm vm(s) natively with qemu. To
make these tasks a little bit more comfortable I start and stop my
vm(s) with systemd as described here:

https://wiki.archlinux.org/index.php/QEMU#Custom_script
https://www.kissmyarch.de/archives/2014/02/28/qemu_systemd_service/index.html

I had to modify these a little bit to get them running with jessie, but
in general it works.
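
For illustration, a minimal unit of the kind those pages describe might
look like this (an editorial sketch, not the exact unit from either
page; the unit name is made up and the qemu options and paths are taken
from the command earlier in this thread):

```ini
[Unit]
Description=QEMU vexpress-a15 VM
After=network.target

[Service]
ExecStart=/usr/bin/qemu-system-arm -enable-kvm -smp 1 -m 256 \
    -M vexpress-a15 -cpu host \
    -kernel /home/guido/kvm-arm/vexpress-zImage \
    -dtb /home/guido/kvm-arm/vexpress-v2p-ca15-tc1.dtb \
    -append "root=/dev/vda console=ttyAMA0 rootwait rw" \
    -drive if=none,id=rootfs,file=/home/guido/kvm-arm/debootstr.img \
    -device virtio-blk-device,drive=rootfs \
    -net nic -net tap \
    -display none -serial null -monitor null
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

-serial null and -display none replace the interactive -serial stdio
-nographic pair, since a service has no terminal to attach to.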

So I'm done now, running 4 qemu-system-arm vm(s) on my Cubietruck.
Again, thanks to everybody for reading and helping me in the past.

Greetinx

Guido