On 04.06.2014 18:30, chymian wrote:
> hello marek,
> thanks for taking time to answer.
>
>>> Using LVM logical volumes in VMs.
>>> Following an answer mail from Marek, I tried to get access to the LVs, but
>>> the patch seems to be outdated (out of my head: around 2011, sorry, can't
>>> find the mail at the moment, but it was about patching udev-block-add?? and
>>> some python scripts). Anyway, what is the preferred way of doing that with R2 rc1?
>>> Goal is to use my existing data/home partitions & my KVM VMs.
>>
>> Probably that patch needs to be updated. I don't remember the details, but if
>> you have a separate VG with those VMs, you can probably assign the raw PV to
>> the VM and activate LVM there.
>
> That would work only for my data partitions, and only if I use all of them
> within one VM; it doesn't give me the flexibility to use different LVs of one
> VG in different VMs.
> Within an HVM setup it's already possible, so one could think it's just a
> little tweak somewhere.
Done, will be in the next qubes-core-dom0 (plus qubes-utils) packages.
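Once the updated packages are in place, the workflow should look roughly like this (a sketch only; the names vg0/data and dm-3 are made up, and how the LV shows up in qvm-block -l depends on your setup):

```
$ sudo lvchange -ay vg0/data   # in dom0: activate the LV, if not already active
$ qvm-block -l                 # the LV should now be listed, e.g. as dom0:dm-3
$ qvm-block -a work dom0:dm-3  # attach it to the "work" VM as a block device
```

Inside the VM the LV then appears as an additional xvd* disk and can be mounted as usual.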
>> Anyway, nested VT-x isn't supported in Qubes, so KVM probably won't work in
>> the VM.
>
> I don't plan to nest VMs, I want to use my former KVM VMs as Xen VMs – should
> be doable without changing the VMs, just installing the qubes-tools in them.
>
> So, the question is: how do I get Qubes to see all LVs, so that I can assign
> them to different VMs – basically the same as with the data partitions above,
> but using an LV (with a partition table) instead of root.img in an HVM setup,
> and also not using private.img & volatile.img.
I'm afraid this part (using an LV instead of root.img) can currently be done
only with qvm-start --custom-config.
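A sketch of that approach (the LV name /dev/vg0/kvmroot and the file name are made up for illustration): copy the VM's generated Xen config from its directory under /var/lib/qubes, replace the file-backed root.img disk entry with the raw LV, and pass the modified file to qvm-start:

```
# excerpt of a copied Xen config – only the disk line is changed,
# swapping the file-backed root.img for the raw LV:
disk = [ 'phy:/dev/vg0/kvmroot,xvda,w' ]
```

Then start the VM with something like qvm-start --custom-config=/path/to/modified.conf myvm; the private.img/volatile.img entries can be adjusted in the same file.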
> That, and using plain data partitions within different VMs, would strengthen
> your migration path for existing systems.
>
> If there's no easy way at the moment, I/we probably have to wait for R3
> libvirt support, because this is a typical installation method in
> libvirt-vm-manager – that should do it, right?
> Or will it not be possible to use the Xen setup with it? Meaning: eliminate
> root.img, priv… in favor of an all-LV-based VM?
>
> btw:
> I was playing around with creating a new standalone HVM, which throws a
> critical error:
>
> IndexError: list index out of range at line 158
> of file /usr/lib64/python2.7/site-packages/qubesmanager/create_new_vm.py
The code suggests you've tried to create a non-standalone VM without a template
set. Fixed to make the message clearer.
>>> qvm-block -d not working reliably?
>>> During the process of creating the deb template, I often cross-mounted
>>> root.img to the work VM, to do a chrooted install, etc.
>>> I realized that there is:
>>>
>>> a: no command to see the status of what is qvm-block-mounted where.
>>> b: "qvm-block -A vm file" does work;
>>> "qvm-block -d file" doesn't – it always throws an error.
>>
>> You can list attached devices with qvm-block -l. Attached file will show as
>> "loop" device.
>
> sorry, but no, they don't.
>
>>
>>> c: "qvm-block -d vm" works only sometimes.
>>> I unmounted xvdi within the work VM,
>>> issued "qvm-block -d …",
>>> and checking with blkid still shows xvdi connected.
>>> No way to disconnect except rebooting the work VM.
>>
>> Does qvm-block also list that device as connected?
>
> double checked:
>
> qvm-block -A work dom0:/var/lib/qubes/vm-templates/debian-jessie-mini/root.img
>
> ->> checked in vm work: available using blkid
>
> guenter@dom0 ~
> $ qvm-block -l
> dom0:sda Hitachi_HTS547575A9E384 () 698 GiB
> dom0:sdc SAMSUNG_HM500JI () 465 GiB
> dom0:sda1 Hitachi_HTS547575A9E384 (qubes-/boot) 500 MiB
> dom0:sdc1 SAMSUNG_HM500JI (hunab-ku) 465 GiB
> dom0:sdb4 M4-CT128M4SSD2 () 2 MiB
> dom0:sdb5 M4-CT128M4SSD2 () 26 GiB
> dom0:sdb6 M4-CT128M4SSD2 () 59 GiB
> dom0:sdb1 M4-CT128M4SSD2 (EFI) 188 MiB
> dom0:sdb3 M4-CT128M4SSD2 () 7 GiB
> dom0:sda5 Hitachi_HTS547575A9E384 () 32 GiB
> dom0:sda4 Hitachi_HTS547575A9E384 () 871 KiB
> dom0:sdb8 M4-CT128M4SSD2 () 25 GiB
> dom0:sda3 Hitachi_HTS547575A9E384 () 657 GiB
> dom0:sda2 Hitachi_HTS547575A9E384 () 7 GiB
> guenter@dom0 ~
> $ qvm-block -d dom0:/var/lib/qubes/vm-templates/debian-jessie-mini/root.img
> Usage: qvm-block -l [options]
> usage: qvm-block -a [options] <vm-name> <device-vm-name>:<device>
> usage: qvm-block -A [options] <vm-name> <file-vm-name>:<file>
> usage: qvm-block -d [options] <device-vm-name>:<device>
> usage: qvm-block -d [options] <vm-name>
> List/set VM block devices.
>
> qvm-block: error: Invalid VM or device name: dom0:/var/lib/qubes/vm-
> templates/debian-jessie-mini/root.img
Ah, a file in /var/lib/qubes. Those are considered "system files" and are not
listed by qvm-block by default. You can list them with qvm-block
--show-system-disks.
Anyway, to access a template's root.img, it's better to simply start the
template (if possible). Otherwise you risk mounting the same disk image from
multiple locations, which will most likely cause filesystem corruption. Of
course there are cases (like this one, working on a new template) where
attaching root.img to another VM is beneficial.
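For your sequence above, that means (a sketch; loop0 is illustrative – use whichever device qvm-block actually reports):

```
$ qvm-block --show-system-disks -l   # the attached root.img now shows as a loop device
$ qvm-block -d dom0:loop0            # detach that specific device, matching the
                                     # usage line "qvm-block -d <device-vm-name>:<device>"
```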
>
> ->> There should be a function like this (the opposite of -A), because if one
> has mounted more than one image file to a VM – which is allowed – how else to
> specifically unmount one of them?
>
> $ qvm-block -d work && echo unmounted
> unmounted
>
> ->> still available within work, per blkid, and mountable –
> & qvm-block gives back an ok status instead of an error!
>
>
>
>>
>>> general install question:
>>> So, I used the BTRFS setup, assuming the various partitions of the various
>>> VMs would have subvolumes, snapshots, etc. available. But they do not –
>>> yet?
>>> What is the difference between the LVM, LVM thin-provisioning & BTRFS
>>> setups? I haven't found any description, especially not in the installation
>>> docs or the architecture paper.
>>
>> Qubes currently doesn't use any special features of the underlying
>> device/filesystem. All the data are stored in /var/lib/qubes in sparse files.
>> So unless you manually set up multiple volumes, or use those features by some
>> other means, it's pretty irrelevant which layout you choose. The most tested
>> option is LVM. Regarding the differences, have a look here:
>> http://docs.fedoraproject.org/en-US/Fedora/20/html/Installation_Guide/s1-diskpartsetup-x86.html
>>
>> BTW, there are people here working on making use of LVM volumes, btrfs
>> features, or even ZFS pools, but I haven't seen anything ready for testing
>> yet.
>
> Ah, ok, understood – it's just Fedora-installer heritage, not Qubes OS VM
> provisioning.
Exactly.
>>> dom0
>>> there was a discussion about moving dom0 to debian. great idea, btw – IMHO
> ;)
>>> is it still a goal?
>>
>> It is pretty irrelevant which distribution is in dom0: all your
>> applications, files, etc. are in VMs, and that's where you spend most of
>> your time. So unless there is some really good argument for it, we will not
>> change the dom0 distribution and will instead focus on useful features.
>
> Like: Qubes OS, with its Fedora-centric appearance, has a very small
> footprint in the debian/ubu/mint/etc… world (the biggest desktop Linux base),
> which would probably change fast if you made dom0 a Debian system.
> But hey, I totally understand you, and I can live with Fedora in dom0.
> Marketing-wise it's another decision…