RAID 5 failure


Whenow

Jul 10, 2019, 4:23:43 PM7/10/19
to qubes...@googlegroups.com
I have a RAID 5 array that has failed. I physically removed the failed disk and inserted a new one of the same nominal capacity. Then, in dom0, I used mdadm to add the new disk to the array, and it was added successfully. But now, on every reboot, the disk is not automatically re-added to the RAID. I have to create the device node for the new drive's RAID partition and then manually run "sudo mdadm --manage /dev/md127 --add /dev/sdc1" to complete the RAID setup. What do I need to do to make this automatic?

Frédéric Pierret

Jul 11, 2019, 10:44:05 AM7/11/19
to Whenow, qubes...@googlegroups.com

Hi,

This is not Qubes-related. I encourage you to consult the mdadm manual. Normally, after repairing your array, you should do something like:

    1) Backup /etc/mdadm/mdadm.conf

    2) sudo mdadm --detail --scan --verbose > /etc/mdadm/mdadm.conf

    3) Reboot to check
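The steps above can be sketched as a short dom0 shell session. Note that the mdadm.conf path varies by distribution (Debian-style systems use /etc/mdadm/mdadm.conf, while a Fedora-based dom0 typically uses /etc/mdadm.conf), so adjust the path to your system:

```shell
# Back up the existing mdadm configuration before overwriting it.
sudo cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.bak

# Regenerate the ARRAY lines from the currently running arrays,
# so the repaired array (including the new member disk) is recorded.
# tee is used because "sudo cmd > file" would redirect as the
# unprivileged user and fail on a root-owned file.
sudo mdadm --detail --scan --verbose | sudo tee /etc/mdadm/mdadm.conf

# Reboot and verify the array assembles automatically.
sudo reboot
```

One caveat: if the array is assembled early in boot from the initramfs, the updated mdadm.conf may also need to be baked into a regenerated initramfs (e.g. with `dracut -f` on Fedora-based systems) before the change takes effect at boot.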

Best,

Frédéric


Whenow

Sep 18, 2019, 7:43:51 PM9/18/19
to qubes...@googlegroups.com
What are (or how do I find in /boot on my system) the internal commands Qubes uses to assemble a RAID, open a LUKS container on the RAID, and then recognize and open an LVM volume group? I'm working on recovering my system after a RAID disk died and the array fell apart. I was able to reassemble the RAID, I think, and was able to open the encrypted container (which leads me to believe the data inside the RAID is good and consistent), but commands like vgscan always return no volume groups, logical volumes, or physical volumes. Is there an exact, proper sequence for getting my LVM volumes working, or am I probably out of luck? Thanks.
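For reference, the usual stacking order is RAID, then LUKS, then LVM, each opened in turn. A minimal sketch of that sequence follows; the member partitions (/dev/sd[abc]1), the array name /dev/md127, and the mapper name luks-root are assumptions here, so substitute your actual devices:

```shell
# 1) Assemble the RAID array from its member partitions.
sudo mdadm --assemble /dev/md127 /dev/sda1 /dev/sdb1 /dev/sdc1

# 2) Open the LUKS container that sits on top of the array.
#    The decrypted device appears at /dev/mapper/luks-root.
sudo cryptsetup open /dev/md127 luks-root

# 3) Scan for LVM metadata inside the opened container and
#    activate any volume groups found.
sudo pvscan
sudo vgscan
sudo vgchange -ay

# 4) List logical volumes; active ones appear under /dev/<vg>/<lv>.
sudo lvscan
```

If vgscan still reports no volume groups after the container is open, it is worth checking that `pvs` shows a physical volume on the mapper device, and that the device filter in /etc/lvm/lvm.conf is not excluding device-mapper devices; if neither helps, the LVM metadata itself may be damaged.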