Qubes 4 boot stuck at: "[ OK ] Reached target Basic System. "


cub...@tutamail.com

Apr 3, 2019, 10:22:38 AM
to Qubes Users

Hi,
Please help!
I shut down my Qubes 4 system last night, and now it won't restart. After I provide my disk encryption password, the system gets stuck at:

"[ OK ] Reached target Basic System. "
followed by numerous lines of:
[....] dracut-initqueue[338]: Warning: dracut-initqueue timeout - starting timeout scripts
[....] dracut-initqueue[338]: Warning: dracut-initqueue timeout - starting timeout scripts
[....] dracut-initqueue[338]: Warning: dracut-initqueue timeout - starting timeout scripts
.....
[....] dracut-initqueue[338]: Warning: Could not boot.
[....] dracut-initqueue[338]: Warning: /dev/mapper/qubes_dev0-root does not exist
[....] dracut-initqueue[338]: Warning: /dev/qubes_dev0/root does not exist
Starting Dracut Emergency Shell...

This is very strange. I haven't even updated 'dom0' lately, and the system shut down cleanly. But it just wouldn't start today.

If I can't recover the Qubes OS system as a whole, please help me retrieve the data/files I have in my AppVM volumes. How can this be done?

This is an emergency for me, and I would be immensely grateful for somebody's help to either fix the system's boot so that it can start again, OR to connect my system disk to another OS and retrieve the data.

Thank you !
cubecub



Jayen Desai

Apr 3, 2019, 11:40:40 AM
to qubes-users

Have you tried rescue mode using the installation media? I believe it should help. I once used rescue mode to edit my xen.cfg file, which helped me boot the system again after I had passed some wrong parameters to xen.cfg. You can get to a console by pressing Ctrl+Alt+F5. Maybe it will help you.
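
For what it's worth, the edit itself looked roughly like this on my UEFI machine (from memory, so the details may differ for you):

# from the rescue shell, once the installed system is mounted:
chroot /mnt/sysimage
vi /boot/efi/EFI/qubes/xen.cfg   # fix the bad parameters on the kernel= line
exit                             # leave the chroot, then reboot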

cub...@tutamail.com

Apr 3, 2019, 2:26:46 PM
to Jayen Desai, qubes-users

Apr 3, 2019, 4:40 PM by jayen...@gmail.com:
Hi,
Thank you for your suggestion. I have given it a try, but unfortunately the system hasn't been restored yet. (I'm guessing you were referring to the following commands to enter rescue mode: pkill -9 anaconda; anaconda --rescue)

I also tried mounting my Qubes disk in LinuxMint, using these commands:
cryptsetup luksOpen /dev/sda2 qubes-disk;
then pvscan, pvdisplay, vgscan, vgdisplay, lvscan, lvdisplay.
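
Spelled out, that was roughly (on LinuxMint; /dev/sda2 is my LUKS partition):

sudo cryptsetup luksOpen /dev/sda2 qubes-disk   # unlock the LUKS container
sudo pvscan && sudo pvdisplay                   # list physical volumes
sudo vgscan && sudo vgdisplay                   # list volume groups
sudo lvscan && sudo lvdisplay                   # list logical volumes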

These allowed me to see the logical structure of my qubes_dom0 volume group, including the volumes representing AppVMs, but I was unsuccessful in mounting these volumes to get access to the data.
The command 'lvscan' shows the LV status next to each volume, and only 'swap' was marked as active. Everything else was 'NOT active', including 'root', 'pool00', and the AppVM volumes.

Do you, or anybody else, have any idea how to proceed? I must recover the data. I hope the data didn't get corrupted and only the access path has gone missing.

Please help with other suggestions or, hopefully, working solutions.

Many thanks!


awokd

Apr 3, 2019, 3:47:40 PM
to cub...@tutamail.com, Jayen Desai, qubes-users
cub...@tutamail.com wrote on 4/3/19 6:26 PM:

> Thank you for your suggestion. I have given it a try, but unfortunately the system hasn't been restored yet. (I'm guessing you were referring to the following commands to enter rescue mode: pkill -9 anaconda; anaconda --rescue)
>
> I also tried mounting my Qubes disk in LinuxMint, using these commands:
> cryptsetup luksOpen /dev/sda2 qubes-disk;
> then pvscan, pvdisplay, vgscan, vgdisplay, lvscan, lvdisplay.
>
> These allowed me to see the logical structure of my qubes_dom0 volume group, including the volumes representing AppVMs, but I was unsuccessful in mounting these volumes to get access to the data.
> The command 'lvscan' shows the LV status next to each volume, and only 'swap' was marked as active. Everything else was 'NOT active', including 'root', 'pool00', and the AppVM volumes.
>
> Do you, or anybody else, have any idea how to proceed? I must recover the data. I hope the data didn't get corrupted and only the access path has gone missing.
>
You're in the right area, but I don't see a "vgchange -ay" in your list
of commands?
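
Something along these lines ("vm-work-private" is just an example volume name; yours will differ):

sudo vgchange -ay qubes_dom0                           # activate all LVs in the volume group
sudo lvs                                               # check which LVs actually came up
sudo mount -o ro /dev/qubes_dom0/vm-work-private /mnt  # mount an AppVM's private volume read-only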


cub...@tutamail.com

Apr 4, 2019, 2:32:44 PM
to awokd, Jayen Desai, qubes-users

Apr 3, 2019, 8:47 PM by qubes...@googlegroups.com:
> cub...@tutamail.com wrote on 4/3/19 6:26 PM:
>> Thank you for your suggestion. I have given it a try, but unfortunately the system hasn't been restored yet. (I'm guessing you were referring to the following commands to enter rescue mode: pkill -9 anaconda; anaconda --rescue)
>>
>> I also tried mounting my Qubes disk in LinuxMint, using these commands:
>> cryptsetup luksOpen /dev/sda2 qubes-disk;
>> then pvscan, pvdisplay, vgscan, vgdisplay, lvscan, lvdisplay.
>>
>> These allowed me to see the logical structure of my qubes_dom0 volume group, including the volumes representing AppVMs, but I was unsuccessful in mounting these volumes to get access to the data.
>> The command 'lvscan' shows the LV status next to each volume, and only 'swap' was marked as active. Everything else was 'NOT active', including 'root', 'pool00', and the AppVM volumes.
>>
>> Do you, or anybody else, have any idea how to proceed? I must recover the data. I hope the data didn't get corrupted and only the access path has gone missing.
> You're in the right area, but I don't see a "vgchange -ay" in your list of commands?

Hi,
Yes, I've tried the command "vgchange -ay", and it gives me this error message:
"Check of pool qubes_dom0/pool00 failed (status:1). Manual repair required!
1 logical volume(s) in volume group "qubes_dom0" now active. "

That single active volume is 'swap'.
All other LVs (of which I have 86) have "LV Status" set to "NOT available", and I can't turn them back to active; vgchange wasn't able to activate them either.

Is there any other way to get to the LVs of the LUKS-encrypted Qubes disk?

Are there any dedicated LUKS / LVM2 recovery tools (somebody mentioned 'scalpel', but I haven't tried it yet)?
I would be grateful for any hints pointing in a good direction for retrieving data from my LUKS-encrypted Qubes disk.

Thank you.



awokd

Apr 4, 2019, 2:56:30 PM
to cub...@tutamail.com, qubes-users
cub...@tutamail.com wrote on 4/4/19 6:32 PM:

> Hi,
> Yes, I've tried the command "vgchange -ay", and it gives me this error message:
> "Check of pool qubes_dom0/pool00 failed (status:1). Manual repair required!
> 1 logical volume(s) in volume group "qubes_dom0" now active. "
>
> That single active volume is 'swap'.
> All other LVs (of which I have 86) have "LV Status" set to "NOT available", and I can't turn them back to active; vgchange wasn't able to activate them either.
>
> Is there any other way to get to the LVs of the LUKS-encrypted Qubes disk?
>
> Are there any dedicated LUKS / LVM2 recovery tools (somebody mentioned 'scalpel', but I haven't tried it yet)?
> I would be grateful for any hints pointing in a good direction for retrieving data from my LUKS-encrypted Qubes disk.

You can try qtpie's procedure in
https://www.mail-archive.com/qubes...@googlegroups.com/msg19011.html.
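
If memory serves, the gist of it is a thin-pool metadata repair, roughly like this (back up the raw disk first if you can; the exact steps are in the link):

sudo lvconvert --repair qubes_dom0/pool00   # runs thin_repair against the pool metadata
sudo vgchange -ay qubes_dom0                # then try activating everything again
# if lvconvert refuses, the manual route swaps out the pool's metadata LV
# and runs thin_check / thin_repair on it directly - see the linked post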

awokd

Apr 16, 2019, 5:45:55 AM
to cub...@tutamail.com, qubes-users
awokd wrote on 4/4/19 6:56 PM:
This one just bit me as well. Last I checked, a month or so ago, I
thought I still had something like 250GB free in my primary LVM pool.
Maybe I was looking at the wrong line. I remember a Qubes notification
popped up unexpectedly around a week ago, but it disappeared before I
could read it. I didn't notice whether the tray icon was any different
afterwards, but maybe it should start flashing or show a large red X
whenever the pool containing root is nearly full. I hope this LVM
failure mode is improved in 4.1. Recovery is painful.
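
In hindsight, checking the pool fill levels by hand now and then would have caught it, e.g.:

sudo lvs -o lv_name,data_percent,metadata_percent qubes_dom0   # thin pool data/metadata usage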

I was only able to repair some of the LVM metadata with the above-linked
procedure. The rest was corrupted. Luckily, my critical VMs were among
the survivors, so I could copy out their private volumes before the
reinstall, and I had a recent enough backup of the rest. Hope you
managed to salvage some too.

Ryan Tate

Apr 16, 2019, 2:44:47 PM
to qubes-users
It would be nice if someone from the Qubes team could provide at least some basic tips for Qubes users on how to avoid having our installations completely ruined by what are apparently LVM issues. This case is at least the sixth I'm aware of, counting myself, of a Qubes system totally unable to boot and requiring a restore.

Every time I do a backup or restore now, I live in white-knuckle fear that something might go wrong and I will lose my system and have to restore again, a process that can be laborious and error-prone, not to mention simply time-consuming.

Some simple instructions on how to do whatever needs to be done to keep this from happening -- defensively resize dom0 or other qubes? clear temp files out of a certain dir used by the backup process? twiddle some settings? -- would be incredibly calming. I have a 1TB drive here, so it seems unnecessary for my machine to die because of some VM resizing shuffle. Can't I just allocate a ton of space to whatever this LVM process is?

awokd

Apr 16, 2019, 8:07:54 PM
to qubes...@googlegroups.com
Ryan Tate wrote on 4/16/19 6:44 PM:
> It would be nice if someone from the Qubes team could provide at least some basic tips for Qubes users on how to avoid having our installations completely ruined by what are apparently LVM issues.

Here's why I'm hoping 4.1 will help:
https://github.com/QubesOS/qubes-issues/issues/1872#issuecomment-377638842.

cyber....@tutanota.com

Jan 4, 2020, 7:59:44 PM
to qubes-users
I'm resurrecting this thread to report that I was affected by this problem. I hope a solution will be implemented soon because it takes me the better part of a day to restore my system, and that's a lot of time to lose to an unpredictable glitch.

cyber....@tutanota.com

Jan 11, 2020, 12:15:22 PM
to qubes-users
Yet another of my Qubes machines fell victim to this problem today. That's two separate computers in one week. The first was my production laptop; today, it was my production desktop. Of course, I am inconvenienced by the time I have lost diagnosing the problem and restoring my systems. I love Qubes, and I won't abandon it, but I am worried that other users are not so committed. I cannot be the only user affected by this problem in recent weeks. I am saddened by the thought that other users will abandon Qubes because of this issue, which is both unpredictable and crippling. I urge the developers to implement a fix sooner rather than later.

Claudia

Jan 11, 2020, 2:36:22 PM
to cyber....@tutanota.com, qubes-users

I didn't see your original thread, but I think I had a somewhat similar problem. A workaround was to boot with nomodeset, and ultimately I had to disable sys-usb in order to boot without nomodeset.

cyber....@tutanota.com

Jan 11, 2020, 3:26:10 PM
to qubes-users
Thank you for your reply. If the problem recurs, I would like to try this solution before wiping and reinstalling my entire system, but could you please provide a bit more detail? How would I boot with nomodeset?

Claudia

Jan 11, 2020, 3:38:17 PM
to cyber....@tutanota.com, qubes-users

Boot into the installation media and choose "Rescue a Qubes system", follow the prompts to mount and chroot, then edit /boot/efi/EFI/qubes/xen.cfg (on UEFI systems) and add "nomodeset" at the end of the "kernel=" line for the version that is set as default. On GRUB systems, just press 'e' when you see the GRUB menu at boot, and add nomodeset to the kernel line. You might have to hold down Shift or Esc while booting to make the menu appear.

Or, when it gets stuck, if you are able to switch to tty2 (Ctrl+Alt+F2), you can just do the same thing from there.
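
For a rough idea, the relevant xen.cfg entry ends up looking something like this (the version strings and kernel parameters here are placeholders from a generic install; yours will differ):

[4.19.94-1.qubes.x86_64]
options=loglvl=all dom0_mem=min:1024M dom0_mem=max:4096M
kernel=vmlinuz-4.19.94-1.qubes.x86_64 root=/dev/mapper/qubes_dom0-root rd.lvm.lv=qubes_dom0/root rhgb quiet nomodeset
ramdisk=initramfs-4.19.94-1.qubes.x86_64.img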

cyber....@tutanota.com

Jan 11, 2020, 3:39:43 PM
to qubes-users
Awesome. Thanks. If the problem recurs, I'll try this solution and report back. Thanks again.

David Hobach

Jan 12, 2020, 5:23:20 AM
to cyber....@tutanota.com, qubes-users
What was the original report?

Hanging during boot at that message?

I also get this every now and then, but usually rebooting helps. It
definitely relates to the order in which sys-usb and/or sys-net are
started (maybe the hardware init wasn't completely done yet?). There's a
race condition in there somewhere. There were a few GitHub reports in
the past, and it got better with Qubes 4.

Anyway, you can choose not to start sys-net & sys-usb automatically at
boot, and instead do it manually and/or script it yourself (see the
sketch below). That's what I'm doing, and why rebooting usually
suffices for me.
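
E.g., in dom0 (from memory; I believe the property is just "autostart"):

qvm-prefs sys-net autostart false   # don't start sys-net at boot
qvm-prefs sys-usb autostart false   # don't start sys-usb at boot
# later, start them by hand (or from your own script):
qvm-start sys-net
qvm-start sys-usb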

cyber....@tutanota.com

Jan 12, 2020, 10:39:11 AM
to qubes-users
Thank you for suggesting that I disable auto-start on sys-net and sys-usb. I've made those changes, and from now on, I will manually start sys-net and sys-usb.

The original report was the same as the original post in this thread. I tried rebooting several times, but nothing changed.

I'm just an end-user without any formal training, so the quickest route to restoring my production machines was to reinstall. It's nice to have a few more options now.