FYI, previously lvscan on my system showed root, pool00, and every volume except swap as inactive.
I followed your instructions, but the system still fails to boot. I've run 'vgchange -ay' and saw the following printed a number of times:
device-mapper: table 253:6: thin: Couldn't open thin internal device
device-mapper: reload ioctl on (253:6) failed: no data available
I ran 'lvscan' again, and this time some VMs were marked active, but a number (root, various -back volumes, several -root volumes, etc.) were still inactive.
I'm really terrified everything is gone, as I had just recovered from a backup while my hardware was being fixed, but I don't have the backup anymore.
I have some volumes which don't show as active - it's looking like some data loss.
Something I am getting when I run:
lvconvert --repair qubes_dom0/pool00
WARNING: Sum of all thin volume sizes (2.67TiB) exceeds the size of thin pools and the size of whole volume group (931.02GiB)
Is this something I can fix perhaps?
Also, I have some large volumes which are present. I've considered trying to remove them, but I might hold off until I get data off the active volumes first.
I've run across the thin_dump / thin_check / thin_repair commands. It seems they're used under the hood by lvconvert --repair to check thin volumes.
Is there a way to relate those dev_ids back to the thin volumes lvm can't seem to find?
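(An aside, not from the original exchange: one possible way to relate those dev_ids to LV names - sketched here under the assumption that the pool's metadata can be read, and with device paths that are only examples - is to compare thin_dump's dev_id entries against the device_id values LVM records for each thin LV.)

# Dump the pool metadata to XML; each <device dev_id="N"> element is one thin volume.
# The /dev/mapper path is an example; it exists only while the pool is activated, and
# reading metadata that is being actively written may give inconsistent results.
thin_dump /dev/mapper/qubes_dom0-pool00_tmeta > /tmp/pool00-meta.xml

# LVM's own metadata backup stores the same number as "device_id = N" inside each
# thin LV's section, with the LV name as the section heading a few lines above:
grep -B8 'device_id =' /etc/lvm/backup/qubes_dom0

# On lvm2 versions that support the field, lvs can report it directly:
lvs -o lv_name,thin_id qubes_dom0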
On 7/29/19 10:19 AM, thomas...@gmail.com wrote:
> Thanks for your response.
>
> I have some volumes which don't show as active - it's looking like some data loss.
>
> Something I am getting when I run:
> lvconvert --repair qubes_dom0/pool00
>
>
> WARNING: Sum of all thin volume sizes (2.67TiB) exceeds the size of thin pools and the size of whole volume group (931.02GiB)
>
> Is this something I can fix perhaps?
This is normal. Thin provisioning usually involves over-provisioning,
and that's what you're seeing. Most of our Qubes systems display this
warning when using LVM commands.
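If it helps to see why the warning is harmless, the actual allocation can be compared against the virtual sizes with standard lvm2 reporting fields (a sketch; adjust the VG name as needed):

# Virtual size of each volume vs. how much of the pool it actually uses:
lvs -o lv_name,lv_size,data_percent,metadata_percent qubes_dom0
# The warning only compares the sum of the virtual sizes (2.67 TiB here) against
# the pool/VG size (931.02 GiB); it says nothing about the pool being full.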
>
> Also, I have some large volumes which are present. I've considered trying to remove them, but I might hold off until I get data off the active volumes first..
>
> I've run across the thin_dump / thin_check / thin_repair commands. It seems they're used under the hood by lvconvert --repair to check thin volumes.
>
> Is there a way to relate those dev_ids back to the thin volumes lvm can't seem to find?
If 'lvs' won't show them, then I don't know precisely how. A long time
ago, I think I used 'vgcfgrestore /etc/lvm/archive/<latest-file>' to
resolve this kind of issue.
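(For reference, a rough sketch of that approach - the archive file name below is just an example, and restoring metadata for a VG that contains thin pools is risky and may require --force, so only attempt it once important data has been copied off:)

# List the archived metadata versions available for the VG:
vgcfgrestore --list qubes_dom0
# Restore from a chosen archive file (example name only):
vgcfgrestore -f /etc/lvm/archive/qubes_dom0_00042-1234567890.vg qubes_dom0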
I also recommend seeking help from the wider Linux community, since this
is a basic Linux storage issue.
And of course, a reminder that mishaps like these are a good reason to do the following:
1. After installation, at least double the size of your pool00 tmeta volume (a sketch of how to grow it follows below).
2. Perform regular backups (I'm working on a tool that can back up LVs much more quickly than the Qubes backup tool).
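For point 1, a sketch of how the pool's metadata volume can be grown (assuming free space in the VG; the +1G figure is just an example):

# Grow the hidden pool00_tmeta LV by extending the pool's metadata:
lvextend --poolmetadatasize +1G qubes_dom0/pool00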