I don't seem to be missing anything there.
I could theoretically use a standalone PV-style qube instead of a
standalone HVM. But going that route I might not be able to see the
output of a boot failure, and the grub timeout is set so small (when
installing qubes-core-agent) that I do not have enough time to interact
with grub if I install a broken kernel.
Unfortunately the formula given there to allow networking between hosts
does not work for me, and I am not certain why. I am using Qubes 4.0,
where it is supposed to work. When I follow the instructions, ping works
fine, but TCP connections only make it as far as the firewall VM and I
get a "no route to host" ICMP reply.
I am not certain what the problem is. I have been able to completely
disable the Qubes firewall, and still the SSH packets come back with
"no route to host" while ICMP packets make the round trip. It looks
like there is some clever networking configuration I have not figured
out yet that is causing the problem.
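For anyone who wants to poke at the same symptom, this is the kind of
diagnostic sequence I mean. It is only a sketch: the vif interface name
and the 10.137.0.x addresses below are placeholders for whatever your
particular qubes are using.

```shell
# In the firewall VM: watch what happens to the TCP SYNs and the ICMP
# replies. vif12.0 and 10.137.0.20 stand in for your downstream
# interface and the target qube's address.
sudo tcpdump -ni vif12.0 'tcp port 22 or icmp'

# In the source qube: ICMP echo succeeds...
ping -c 3 10.137.0.20

# ...but a TCP connection draws the "no route to host" reply.
ssh user@10.137.0.20

# Rule sets worth inspecting in the firewall VM (Qubes 4.0 still
# manages these with iptables):
sudo iptables -L -v -n
sudo iptables -t nat -L -v -n
```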
I am going to spin up a second firewall vm and poke some more, and
see if I can get somewhere.
> If you are using HVMs you can, in some cases, install qubes packages,
> and then use tools like qvm-copy. I say, in some cases, because this
> won't work with some targets, like Ubuntu standalones.
Yes. I have explored using the Qubes packages. My initial kernel test
configuration uses Debian 11. Unfortunately the Qubes packages make
the HVM unusable for my testing: they pull in a bunch of stuff I don't
want and take over configuration I need to control for my tests.
Eric