On Fri, Jul 31, 2020 at 02:17:05PM -0700, Jason M wrote:
> I then looked into alternatives to prevent my complete departure from
> Qubes. Marek told me about DomB, which is now in its design stages. It
> would allow me to statically partition my machine (like having 2 dom0 VMs
Not really 2 dom0s in the sense you would need. What you'd need is 2
hardware domains (dom0 by default), and there can still be only one. One
of the domB proposal's goals is to make it more meaningful to have
hardware domain != dom0, by eliminating dom0.
> *GOALS*
> The final goals would be to support all Qubes features and apps.
Yes!!
> *STAGE 1*
> The initial goal is to get Qubes to be able to manage the virtual machines
> (start, stop, etc) using 'qvm-*' tools and *Qubes Manager*. Seamless VM
> video or audio will not be implemented in stage 1, so either a GPU will need
> to be passed through to the VM (which will also be able to provide HDMI
> audio), or access will be via SPICE or VNC. Stage 1 goals include the following:
>
> - Use same template system Qubes currently uses including settings like
> *qvm-prefs*, *features*, *tags*, etc.
> - Obviously support PCI pass-through using Nvidia drivers for RTX GPU
> - Support qrexec communication from host <-> vm
> - Locking down KVM host
> - Securing the network - look into the ability to enable *sys-net* and
> *sys-firewall*
One challenge with the last point is having a VM<->VM network connection
(like sys-net <-> sys-firewall) without exposing the host to the traffic.
Most (all?) traditional KVM setups assume the host is responsible for
routing network traffic.
One idea is to use a socket netdev to connect two VMs directly, but I
worry about the performance...
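For reference, a rough sketch of what the socket netdev looks like at the
QEMU level (port number and NIC model are arbitrary here); libvirt exposes
the same thing as <interface type='server'/> / <interface type='client'/>,
if I remember correctly:

    # on the VM providing the "upstream" end (e.g. sys-net):
    -netdev socket,id=net0,listen=:10000 -device virtio-net-pci,netdev=net0
    # on the VM connecting to it (e.g. sys-firewall):
    -netdev socket,id=net0,connect=127.0.0.1:10000 -device virtio-net-pci,netdev=net0

The frames are still forwarded between the two qemu processes on the host,
but the host itself gets no interface on that link.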
> *FUTURE*
>
> - Seamless windows
Yes, putting this beyond the first stage makes sense. But it also
shouldn't be that hard. Beyond vchan, you "just" need to handle the
KVM-specific shared memory primitive (the ivshmem device) in
gui-daemon/gui-agent.
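FWIW, libvirt can already expose an ivshmem region via a <shmem> device,
so the missing piece is really only on the gui-daemon/gui-agent side. A
minimal sketch (the name and size below are arbitrary, not anything Qubes
uses today):

    <shmem name='qubes-gui'>
      <model type='ivshmem-plain'/>
      <size unit='M'>4</size>
    </shmem>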
> - Audio
Having qrexec and vchan, you will get audio for free.
> - Encrypted memory within each VM (AMD processors)
>
>
> *BUILD STATUS*
>
> I have modified where necessary all Qubes source repos to allow building
> for KVM within a Fedora-32 host and guest. All build modifications used
> conditional tests based on the 'BACKEND_VMM' build configuration option
> which is set to 'kvm'. When 'BACKEND_VMM' is set to 'xen', everything
> builds as normal.
>
> - *vmm-xen*: I still include this package to allow booting into KVM or
> Xen. There is also one dependency on it that I need to remove.
> - *core-libvirt*: Configured to also compile the KVM modules and any
> other modules provided within the Fedora 32 distribution packages.
> - *core-vchan-xen*: Not required. Components that require it use the
> 'BACKEND_VMM' build variable. Nice forward thinking from the Qubes
> developers!
> - *core-vchan-libkvmchan*: Packaged *libkvmchan
> <https://github.com/shawnanastasio/libkvmchan>* code based on the work
> completed by @shawnanastasio <https://github.com/shawnanastasio>.
> - *qubes-core-vchan-kvm*: Packaged *qubes-core-vchan-kvm
> <https://github.com/shawnanastasio/qubes-core-vchan-kvm>* code based on
> the work completed by @shawnanastasio <https://github.com/shawnanastasio>.
> - *linux-utils*: Removed *qmemman* for the KVM build. Not sure if they can
> be adapted for KVM. Will revisit a KVM alternative later.
A balloon driver for KVM exists, but I don't know what interface it uses
for control. qmemman currently uses strictly Xen-specific interfaces, so
disabling it for now makes sense.
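That said, libvirt's memory API should be a usable control knob; a sketch
(the VM name is made up):

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('work')  # example VM name
    # target size in KiB; on KVM this is applied through the virtio balloon
    dom.setMemoryFlags(2 * 1024 * 1024, libvirt.VIR_DOMAIN_AFFECT_LIVE)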
One thing to consider is also enabling memory deduplication in KVM
(KSM). This should nicely save memory when running multiple similar VMs,
but at the same time it is risky in light of speculative-execution and
rowhammer-style attacks.
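Turning it on is just the standard sysfs knob on the host (plus tuning
pages_to_scan/sleep_millisecs):

    echo 1 > /sys/kernel/mm/ksm/run

so it's more a policy/security decision than an implementation effort.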
> - *core-admin*:
> - Added KVM *libvirt* template
> - Added additional conditional 'BACKEND_VMM' for Xen specific build
> and depends
> - Still installs *qubes-qmemman.service* unit files. Not sure if
> they can be adapted to KVM.
> - *qubes* python
> - *other*: Minor changes here and there.
>
>
> *INSTALL STATUS*
>
> - *dom0*:
> - All dom0 packages install without error (minus vmm-stubdom and iso
> related packages)
> - All Qubes services start successfully on boot
>
>
> - *template*:
> - qubes-template-fedora-32 installs within the kvm host. A few manual
> modifications were made to qubes.xml to facilitate this.
One of the installation steps is starting the template VM and connecting
qrexec. Do you mean that part already worked too?
> *WIP*
>
> - *core-admin*
> - *qubes python package*
> - Added a 'hypervisor' module to detect the hypervisor type (xen, kvm,
> etc) for cases like the following, where it is expected that the hypervisor
> is Xen if 'xen.lowlevel' can be imported. In my case the Xen module
> is installed since I also have a Xen boot option:
> - *qubes.app.VMMConnection.init_vmm_connection* change.
> - old: if 'xen.lowlevel.{xs,xc}' in sys.modules:
> - new: if hypervisor_type('xen'):
>
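Such a helper can probably stay trivial. A sketch of what I'd expect (the
function name matches your usage above, but the implementation here is
just illustrative):

    import os

    def hypervisor_type(expected):
        """Return True if the running hypervisor matches 'expected' ('xen' or 'kvm')."""
        try:
            # present (and containing "xen") when running on Xen, including in dom0
            with open('/sys/hypervisor/type') as f:
                current = f.read().strip()
        except FileNotFoundError:
            # no Xen interface; assume KVM if the host exposes /dev/kvm
            current = 'kvm' if os.path.exists('/dev/kvm') else 'unknown'
        return current == expected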
> - There are a few dependencies on Xen such as:
> -
> *qubes.app.xs (xenstore) *
> I was hoping that '*xenstore*' could be used as a standalone
> application (without Xen being activated).
Not needed. Besides qmemman, xenstore is used directly only for really
Xen-specific workarounds that shouldn't be needed on KVM. You can safely
disable xenstore usage in core-admin if running on KVM.
> I have not yet looked at the
> source code but tried starting the *xenstore* service, which
> failed since the '/proc/xen' directory does not exist. Wondering if I
> created a *procfs* entry for '/proc/xen', whether the store would
> run without Xen.
>
> If *xenstore* won't work without Xen then I need to determine
> the best alternative; convert *xenstore* to work without Xen or
> some other solution?
As said above - you don't need that.
>
> - *qubes.ext.pci.attached_devices*:
> - ls('', 'backend/pci'), ls('', 'backend/pci' + domid)
> - read('', devpath + '/domain'), read('', devpath +
> '/num_devs'), read('', devpath + '/dev-' + str(dev))
This is a workaround for a Xen toolstack limitation regarding reporting of
attached devices (extracting how a device is visible inside the VM). You
can also get similar information via libvirt (and hopefully in the case of
KVM it will be more accurate).
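For example, the host-side addresses of assigned PCI devices are in the
domain XML; a rough sketch with the python bindings (the VM name is just
an example):

    import xml.etree.ElementTree as ET
    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('work')  # example VM name
    tree = ET.fromstring(dom.XMLDesc(0))
    for hostdev in tree.findall(".//devices/hostdev[@type='pci']"):
        src = hostdev.find('source/address')
        print(src.get('domain'), src.get('bus'), src.get('slot'), src.get('function'))

The guest-visible address, if libvirt assigned one, is in the hostdev's
own <address> child element.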
> - *qubes.vm.qubesvm.stubdom_xid*: if xs is None: return -1
> # No issue
Yes, no issue, as stubdomains do not exist on KVM.
> - *qubes.vm.qubesvm.start_time*: read('',
> '/vm/{}/start_time').format(self.uuid)
I haven't found a proper libvirt API for this information. In the case of
KVM, I believe you can get this by looking up the qemu process id (libvirt
domain ID?) and checking its start time in /proc.
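Something along these lines should do it (the pid file location is what I
believe libvirt's qemu driver uses by default - an assumption on my side):

    import os

    def vm_start_time(vm_name):
        # qemu pid as tracked by libvirt (path is an assumption)
        with open('/run/libvirt/qemu/{}.pid'.format(vm_name)) as f:
            pid = int(f.read().strip())
        with open('/proc/{}/stat'.format(pid)) as f:
            stat = f.read()
        # field 22 of /proc/<pid>/stat is the start time, in clock ticks since
        # boot; split after the closing ')' as the comm field may contain spaces
        start_ticks = int(stat.rsplit(')', 1)[1].split()[19])
        with open('/proc/stat') as f:
            btime = int(next(l for l in f if l.startswith('btime')).split()[1])
        return btime + start_ticks // os.sysconf('SC_CLK_TCK')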
> - *qubes.vm.qubesvm.create_qdb_entries*: set_permissions('',
> '/local/domain/{}/memory'.format(self.xid, [{'dom': self.xid}])
> - *qubes.vm.qubesvm.get_prefmem*: read('',
> '/local/domain/{}/memory/meminfo').format(self.xid))
Both are needed only with qmemman (and should be moved to a generic API,
like qubesdb, if making qmemman KVM-compatible).
> -
> - *qubes.app.xc (xen connection)*
> - *qubes.app.QubesHost.get_free_xen_memory*:
> physinfo()['free_memory']
I believe it isn't used anywhere outside of qmemman. And qmemman has its
own copy of this function...
> - *qubes.app.QubesHost.is_iommu_supported*:
> physinfo()['virt_caps']
It should be possible to get it from libvirt via capabilities (see `virsh
capabilities` and `virsh domcapabilities`).
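If the capabilities XML turns out to be awkward to parse, a simpler
host-side check is whether the kernel has any populated IOMMU groups
(just a sketch):

    import os

    def is_iommu_supported():
        # /sys/kernel/iommu_groups is populated only when the IOMMU is enabled
        try:
            return bool(os.listdir('/sys/kernel/iommu_groups'))
        except FileNotFoundError:
            return False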
> - *qubes.app.QubesHost.get_vm_stats*: domain_getinfo(int,
> int)['domid', 'mem_kb', 'cpu_time', 'online_vcpus']
There is a proper libvirt API for that (GetAllDomainStats), but it isn't
implemented in the Xen driver. I hope it is in the KVM one.
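In the python bindings that's conn.getAllDomainStats(); a sketch of pulling
the equivalent fields (the stat key names are taken from the libvirt
documentation, worth double-checking against what the KVM driver really
returns):

    import libvirt

    conn = libvirt.open('qemu:///system')
    stats = conn.getAllDomainStats(
        libvirt.VIR_DOMAIN_STATS_CPU_TOTAL
        | libvirt.VIR_DOMAIN_STATS_BALLOON
        | libvirt.VIR_DOMAIN_STATS_VCPU)
    for dom, record in stats:
        print(dom.ID(),
              record.get('balloon.current'),  # current memory, in KiB
              record.get('cpu.time'),         # cpu time, in nanoseconds
              record.get('vcpu.current'))     # number of online vcpus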
>
> - Added a 'hypervisor' script to '/usr/lib/qubes' for other
> scripts like 'startup-misc.sh'
> - if /usr/lib/qubes/hypervisor_type xen; then ...
>
>
> *CURRENT ISSUES TO RESOLVE*
>
> - *xenstore*: Will it work without Xen? If not, convert it so it will,
> or provide another alternative?
> - *qmemman*:
> - Provide KVM alternative
> - Should components like linux-utils that provide xen-only utilities
> have the xen utilities split into another repo like 'linux-utils-xen'?
> Then, when a KVM alternative can be provided, it could be placed in
> 'linux-utils-kvm'?
Ideally the VM part of qmemman could be made hypervisor-agnostic.
> - *Qubes python packages*:
> - Not yet sure how much of it relies on any xen packages. Currently I
> will continue using the hypervisor check and once all python packages are
> functioning correctly with KVM we can look into better ways to handle xen
> vs kvm or other hypervisors.
Ideally no xen-specific python modules should be required for it to work.
There are a few Xen-specific workarounds (and I believe a few KVM-specific
ones won't be avoidable either), but it should be easy to work just fine
without them when you are on the other hypervisor (e.g., using the
`try: import xen...; except ImportError: ...` pattern).
> - *qubes-builder*:
> - For some reason I cannot build with 'DIST_BUILD_TOOLS=1' (standard
> qubes xen components). I always get an error when building dom0-fc32 of
> "sudo: unrecognized option
> '--resultdir=/home/user/qubes/chroot-dom0-fc32/home/user/qubes-src/vmm-xen/pkgs/dom0-fc32'".
> Am I missing another config option?
Missing "mock" package in your build environment?
> - Libvirt often fails to compile using 32 cores, giving some error
> about some file that does not exist (when it does fail, it always fails at
> the same spot with the same error message).
I guess some missing dependency in one of the Makefiles.
> It seems to be compiling too fast,
> or maybe it has something to do with using the BTRFS filesystem. The rpm spec
> for libvirt uses the number of processors available (make -j32 V=1). It will
> build without errors if I add a '.rpmmacros' file containing
> '%_smp_mflags -j10' in the 'chroot-dom0-fc32/home/user' directory. Just
> wondering if there is a way to set the number of jobs per component, or maybe
> switching to using 'DIST_BUILD_TOOLS' will help.
I would start with reporting this specific build issue on libvir-list.
It's also worth checking with an upstream libvirt build, straight from a
git clone of the master branch.
--
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?