/dev/mapper/qubes_dom0-root does not exist


Micah Lee

Oct 1, 2018, 2:50:54 PM
to qubes...@googlegroups.com
I recently installed Qubes 4.0 on a laptop, installed updates in dom0
and my templates, restored a backup, and did a bunch of custom
configuration. And then when I rebooted, Qubes wouldn't boot up due to a
partitioning error. (It looks like it's the same problem described here
[1].) During boot, I get hundreds of lines that say:

dracut-initqueue[343]: Warning: dracut-initqueue timeout - starting
timeout scripts

Followed by:

dracut-initqueue[343]: Warning: Could not boot.
dracut-initqueue[343]: Warning: /dev/mapper/qubes_dom0-root does not
exist
dracut-initqueue[343]: Warning: /dev/qubes_dom0/root does not exist
Starting Dracut Emergency Shell...

Then it drops me into an emergency shell.

When I run 'lvm lvscan', I can see:

Scanning devices dm-0 for LVM logical volumes qubes_dom0/root
qubes_dom0/swap
  inactive '/dev/qubes_dom0/pool00' [444.64 GiB] inherit
  inactive '/dev/qubes_dom0/root' [444.64 GiB] inherit
  ACTIVE '/dev/qubes_dom0/swap' [15.29 GiB] inherit
  inactive '/dev/qubes_dom0/vm-sys-net-private' [2.00 GiB] inherit

And it continues to list another inactive line for the private and root
volume of each of my VMs. Only swap is active.

I spent a little time trying to troubleshoot this, but ultimately
decided that it wasn't worth the time, since I have a fresh backup. So I
formatted my disk again, reinstalled Qubes, restored my backup, etc.
After installing more updates and rebooting, I just ran into this exact
same problem *again*. I think this could be a Qubes bug.

Any idea how I can fix this situation? The dracut emergency shell
doesn't seem to come with many LVM tools. There's lvm, lvm_scan,
thin_check, thin_dump, thin_repair, and thin_restore. I could boot from
the Qubes USB and drop into a troubleshooting shell to get access to
more tools.

[1]
https://groups.google.com/forum/#!searchin/qubes-users/dracut-initqueue$20could$20not$20boot|sort:date/qubes-users/PR3-ZbZXo_0/G8DA86zhCAAJ

Chris Laprise

Oct 1, 2018, 5:24:19 PM
to Micah Lee, qubes...@googlegroups.com, Marek Marczykowski-Górecki
If you do 'sudo lvdisplay qubes_dom0/root' it will probably say the LV
status is 'NOT available'. This could mean an 'lvchange' somewhere set
those volumes (pool00, root, etc.) with '--setactivationskip y'.

You can attempt to fix it at least temporarily like so:

sudo lvchange -kn -ay qubes_dom0/pool00
sudo lvchange -kn -ay qubes_dom0/root
sudo lvchange -kn -ay qubes_dom0/vm-sys-net-private

Then use lvdisplay to verify the LV status has changed.
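If you have many VM volumes, you can generate one such 'lvchange' line
per inactive LV instead of typing them all. A sketch (dry run only:
/tmp/lvscan.out here is a made-up file mimicking the lvscan output
posted above, and nothing touches LVM until you review and run the
generated /tmp/fix-lvs.sh):

```shell
# Save a copy of the lvscan output (in the emergency shell you would
# run: lvm lvscan > /tmp/lvscan.out). Sample data for illustration:
cat > /tmp/lvscan.out <<'EOF'
  inactive '/dev/qubes_dom0/pool00' [444.64 GiB] inherit
  inactive '/dev/qubes_dom0/root' [444.64 GiB] inherit
  ACTIVE '/dev/qubes_dom0/swap' [15.29 GiB] inherit
  inactive '/dev/qubes_dom0/vm-sys-net-private' [2.00 GiB] inherit
EOF

# Emit one 'lvchange -kn -ay' command per inactive LV into a script
# you can inspect before running it.
awk '$1 == "inactive" {
    gsub(/'\''/, "", $2)        # strip the quotes around the path
    sub(/^\/dev\//, "", $2)     # /dev/qubes_dom0/root -> qubes_dom0/root
    print "lvchange -kn -ay " $2
}' /tmp/lvscan.out > /tmp/fix-lvs.sh

cat /tmp/fix-lvs.sh
```

In the dracut shell you would then run each line through 'lvm' (e.g.
'lvm lvchange -kn -ay qubes_dom0/root'), since the standalone lv*
binaries may not be present there.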

--

Chris Laprise, tas...@posteo.net
https://github.com/tasket
https://twitter.com/ttaskett
PGP: BEE2 20C5 356E 764A 73EB 4AB3 1DC4 D106 F07F 1886

Chris Laprise

Oct 1, 2018, 5:36:22 PM
to Micah Lee, qubes...@googlegroups.com, Marek Marczykowski-Górecki
BTW if you can run 'lvm' in the rescue shell then you can use that for
various lv* commands including 'lvchange'. Just run 'lvm' by itself and
that will put you in an lvm shell where the 'lvchange' command and
others are accessible.