[3.2] qvm-block -A doesn't work reliably anymore?!


David Hobach

Oct 23, 2016, 10:15:59 AM10/23/16
to qubes...@googlegroups.com
Dear all,

after upgrading to 3.2 (in-place) I noticed the following issue:

qvm-block -A fooVM dom0:/var/lib/qubes/appvms/blaVM/private.img
Traceback (most recent call last):
  File "/usr/bin/qvm-block", line 151, in <module>
    main()
  File "/usr/bin/qvm-block", line 105, in main
    block_attach(qvm_collection, vm, dev, **kwargs)
  File "/usr/lib64/python2.7/site-packages/qubes/qubesutils.py", line 429, in block_attach
    vm.libvirt_domain.attachDevice(etree.tostring(disk, encoding='utf-8'))
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 530, in attachDevice
    if ret == -1: raise libvirtError ('virDomainAttachDevice() failed', dom=self)
libvirt.libvirtError: internal error: libxenlight failed to attach disk 'xvdi'

Strangely enough, xvdi still appears in fooVM and can be mounted.
Attempting to attach another file to fooVM, however, fails with the same
error, and xvdj does not appear.

Stranger still, it works perfectly on another Qubes machine I have
(same kernel, same Xen version, though without an in-place upgrade).

Possibly related: I also noticed that the netvm and firewallvm no longer
always start on boot with the 4.4.14-11 kernel; I'm therefore currently
testing 4.1.13-9.

xl info
host : dom0
release : 4.4.14-11.pvops.qubes.x86_64
version : #1 SMP Tue Jul 19 01:14:58 UTC 2016
machine : x86_64
nr_cpus : 4
max_cpu_id : 3
nr_nodes : 1
cores_per_socket : 4
threads_per_core : 1
cpu_mhz : 3399
hw_caps :
bfebfbff:2c100800:00000000:00007f00:77fafbff:00000000:00000021:00002fbb
virt_caps : hvm hvm_directio
total_memory : 16048
free_memory : 71
sharing_freed_memory : 0
sharing_used_memory : 0
outstanding_claims : 0
free_cpus : 0
xen_major : 4
xen_minor : 6
xen_extra : .1
xen_version : 4.6.1
xen_caps : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32
hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler : credit
xen_pagesize : 4096
platform_params : virt_start=0xffff800000000000
xen_changeset :
xen_commandline : placeholder console=none
cc_compiler : gcc (GCC) 5.3.1 20160406 (Red Hat 5.3.1-6)
cc_compile_by : user
cc_compile_domain :
cc_compile_date : Tue Jul 26 11:55:46 UTC 2016
xend_config_format : 4

Dom0 is 100% up-to-date as of today.

Does anyone have an idea how to fix this qvm-block -A issue?

Kind Regards
David

David Hobach

Oct 24, 2016, 1:36:40 AM10/24/16
to qubes...@googlegroups.com
On 10/23/2016 04:15 PM, David Hobach wrote:
> Dear all,
>
> after upgrading to 3.2 (in-place) I noticed the following issue:
>
> qvm-block -A fooVM dom0:/var/lib/qubes/appvms/blaVM/private.img
> Traceback (most recent call last):
>   File "/usr/bin/qvm-block", line 151, in <module>
>     main()
>   File "/usr/bin/qvm-block", line 105, in main
>     block_attach(qvm_collection, vm, dev, **kwargs)
>   File "/usr/lib64/python2.7/site-packages/qubes/qubesutils.py", line 429, in block_attach
>     vm.libvirt_domain.attachDevice(etree.tostring(disk, encoding='utf-8'))
>   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 530, in attachDevice
>     if ret == -1: raise libvirtError ('virDomainAttachDevice() failed', dom=self)
> libvirt.libvirtError: internal error: libxenlight failed to attach disk 'xvdi'
>
> Strangely enough, xvdi still appears in fooVM and can be mounted.
> Attempting to attach another file to fooVM however fails with the same
> error and xvdj does not appear.
>
> Stranger still, it works perfectly on another Qubes machine I have
> (same kernel, same Xen version, though without an in-place upgrade).

Found a stupid workaround:

losetup -f /var/lib/qubes/appvms/blaVM/private.img
qvm-block -A fooVM dom0:loop[justCreated]

works - even for multiple calls.
Funnily enough, qvm-block -a does not work, contrary to its description
in qvm-block -h. It looks like dom0 is a special case that isn't handled
entirely correctly (it's now a loop _device_ rather than a file, isn't it?).
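For reference, the loop-device workaround above can be condensed into a
small dom0 shell sketch (untested guesswork on my part; fooVM, blaVM and
the image path are just the example names from this thread, and
qvm-block exists only in Qubes dom0):

```shell
#!/bin/sh
# Sketch of the workaround: attach the image via a loop device
# instead of passing the file path directly. Run in dom0 only.
IMG=/var/lib/qubes/appvms/blaVM/private.img

LOOPDEV=$(losetup -f)        # next free loop device, e.g. /dev/loop0
losetup "$LOOPDEV" "$IMG"    # bind the image to that device

# qvm-block takes the device name without its /dev/ prefix.
qvm-block -A fooVM "dom0:${LOOPDEV#/dev/}"
```

To undo it later, detach with qvm-block -d and release the loop device
with losetup -d "$LOOPDEV".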

> Possibly related: I also noticed that netvm and firewallVM don't always
> start with the 4.4.14-11 kernel on boot anymore; thus I'm currently
> testing 4.1.13-9.

4.1.13-9 shows the same issue, but the netvm & firewallvm do start via
qvm-start. When I start them by right-clicking the VM in the Qubes
Manager, they don't start though. Not sure what the difference is...
Still interested...

> Kind Regards
> David
>
