0.1RC4 testing and odd RAID1 issue


Tom Schmidt

Mar 22, 2014, 5:48:17 PM
to al...@googlegroups.com
I have two DNS-323 boxes now.  The first is a rev-B1 that I have been running Alt-F 0.1RC2 and 0.1RC3 on, and I used the TryIt mode to test 0.1RC4.  So far it looks really good!  Awesome job, Joao and other contributors.  This box is my production backup server for my home PCs.

The second DNS-323, which I just got, is a rev-C1.  I have 0.1RC3 flashed on it, likewise used the TryIt mode to test 0.1RC4, and it also looks pretty good.  I have a couple of suggestions that I will send in a separate e-mail thread.

As I started to compare the two, I noticed that my original B1 box was not showing any RAID devices in the RAID Creation and Maintenance menu, yet the Status menu did show the array:

Disks
Bay Dev. Model Capacity Power Status Temp Health
right sda ST2000DM001-1CH164 2000.4 GB active or idle 36°C/96.8°F passed
left sdb ST2000DM001-9YN164 2000.4 GB active or idle 38°C/100.4°F passed
RAID
Dev. Capacity Level State Status Action Done ETA
md0 1862.0 GB raid1 clean OK idle

Mounted Filesystems
Dev. Label Capacity Available FS Mode Dirty Automatic FSCK in
md0   1.8TB 507.0GB ext3 RW  27 mounts or 161 days
sda4   486.2MB 449.0MB ext3 RW  9 mounts or 89 days
sdb4   486.2MB 449.0MB ext3 RW  9 mounts or 109 days

So I looked at the /usr/www/cgi-bin/raid.cgi script to see what it was doing.  I found that it uses 'blkid -t TYPE="mdraid"' to find the RAID members.  On my rev-B1, the blkid output looks like this:

[root@DNS-323]# blkid
/dev/mtdblock0: TYPE="minix"
/dev/sda4: UUID="b6be70bc-f0f7-4dc3-a6e0-bba1c24e0729" SEC_TYPE="ext2" TYPE="ext3"
/dev/md0: UUID="3ea807f6-7ef8-4b66-b576-19bb1c59eac7" TYPE="ext3"
/dev/mtdblock1: TYPE="minix"
/dev/sda1: TYPE="swap"
/dev/sda2: UUID="f18ee45c-d88c-4f28-ab58-08ddb499f6fb" SEC_TYPE="ext2" TYPE="ext3"
/dev/sdb4: UUID="046b75ab-1fbd-45f7-a313-3df61690da5f" SEC_TYPE="ext2" TYPE="ext3"
/dev/sdb2: UUID="6edc0857-edb4-4108-a642-0e361650bd55" SEC_TYPE="ext2" TYPE="ext3"
/dev/sdb1: TYPE="swap"
/dev/loop0: TYPE="squashfs"

Whereas on my rev-C1, blkid output looks like this:

[root@DNS-323-C1]# blkid
/dev/mtdblock0: TYPE="minix"
/dev/sda4: UUID="0ea03f52-d6a3-47f7-bfac-dec4c4ed06de" SEC_TYPE="ext2" TYPE="ext3"
/dev/md0: UUID="e7265074-bce5-4cb8-85f6-98b79fa4026d" SEC_TYPE="ext2" TYPE="ext3"
/dev/mtdblock1: TYPE="minix"
/dev/sdb4: UUID="8cc4b5c7-a302-4d58-83d3-f13d1d5ee631" SEC_TYPE="ext2" TYPE="ext3"
/dev/sdb1: TYPE="swap"
/dev/sdb2: UUID="c1ef15ab-4944-89b8-911b-59751e6aaa77" TYPE="mdraid"
/dev/sda1: TYPE="swap"
/dev/sda2: UUID="c1ef15ab-4944-89b8-911b-59751e6aaa77" TYPE="mdraid"

Note that the UUIDs for sda2 and sdb2 are the same on the C1, but on my B1 each has a different UUID and is reported as TYPE="ext3" instead of TYPE="mdraid".
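A quick way to cross-check, independent of blkid, is to ask mdadm directly whether a member partition carries an md superblock (a small sketch using the device names above; the output layout varies with the metadata version):

mdadm --examine /dev/sda2 # prints the md superblock (metadata version, array UUID) if one is present
mdadm --examine /dev/sdb2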

Here is more data from the B1 with the odd RAID1 configuration:
[root@DNS-323]# cat /proc/mdstat
Personalities : [linear] [raid1]
md0 : active raid1 sdb2[3] sda2[2]
      1952466647 blocks super 1.2 [2/2] [UU]

unused devices: <none>
[root@DNS-323]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sun Feb 10 16:38:32 2013
     Raid Level : raid1
     Array Size : 1952466647 (1862.02 GiB 1999.33 GB)
  Used Dev Size : 1952466647 (1862.02 GiB 1999.33 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Sat Mar 22 15:34:18 2014
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : DNS-323:0  (local to host DNS-323)
           UUID : 0d34b0ca:60cd4f5a:a3d3b3a7:5dded66c
         Events : 322553

    Number   Major   Minor   RaidDevice State
       3       8       18        0      active sync   /dev/sdb2
       2       8        2        1      active sync   /dev/sda2

[root@DNS-323]# fdisk -l

Disk /dev/mtdblock0: 0 MB, 65536 bytes
255 heads, 255 sectors/track, 0 cylinders
Units = cylinders of 65025 * 512 = 33292800 bytes

Disk /dev/mtdblock0 doesn't contain a valid partition table

Disk /dev/mtdblock1: 0 MB, 65536 bytes
255 heads, 255 sectors/track, 0 cylinders
Units = cylinders of 65025 * 512 = 33292800 bytes

Disk /dev/mtdblock1 doesn't contain a valid partition table

Disk /dev/mtdblock2: 1 MB, 1572864 bytes
255 heads, 255 sectors/track, 0 cylinders
Units = cylinders of 65025 * 512 = 33292800 bytes

Disk /dev/mtdblock2 doesn't contain a valid partition table

Disk /dev/mtdblock3: 6 MB, 6488064 bytes
255 heads, 255 sectors/track, 0 cylinders
Units = cylinders of 65025 * 512 = 33292800 bytes

Disk /dev/mtdblock3 doesn't contain a valid partition table

Disk /dev/mtdblock4: 0 MB, 196608 bytes
255 heads, 255 sectors/track, 0 cylinders
Units = cylinders of 65025 * 512 = 33292800 bytes

Disk /dev/mtdblock4 doesn't contain a valid partition table

Disk /dev/sda: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks  Id System
/dev/sda1               1          66      530113+ 82 Linux swap
/dev/sda2             131      243201  1952467807+ fd Linux raid autodetect
/dev/sda4              67         130      514080  83 Linux

Partition table entries are not in disk order

Disk /dev/sdb: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks  Id System
/dev/sdb1               1          66      530113+ 82 Linux swap
/dev/sdb2             131      243201  1952467807+ fd Linux raid autodetect
/dev/sdb4              67         130      514080  83 Linux

Partition table entries are not in disk order

Disk /dev/md0: 1999.3 GB, 1999325846528 bytes
2 heads, 4 sectors/track, 488116661 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md0 doesn't contain a valid partition table


Some background on my rev-B1 that led up to this.  I originally had two 1.5TB disks in it as RAID1, running 0.1RC2.  When one drive died, I opted to upgrade to two 2TB mirrored disks.  So I broke the mirror and set both disks up as individual disks.  Then I partitioned the 2TB disk to have a swap partition like the 1.5TB disk, with the remainder as user space, and rsynced the data between the partitions.  Once this was completed, I removed the 1.5TB disk and manually configured the RAID1 for the two 2TB disks using mdadm.  This is apparently why I now have this odd RAID configuration: the partitions still carry independent filesystem UUIDs, but mdadm can still assemble them as RAID1.

So now I need to know whether I should correct this, and how to correct it without losing the data.  I could plug in a USB disk and rsync my data to it, then destroy and rebuild the RAID1 and restore my data.  Or is there an easier way to do this without corrupting my data?
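For reference, the rsync-to-USB route mentioned above would look roughly like this (a sketch only; the USB device name and mount points are examples, not what Alt-F actually assigns):

mount /dev/sdc2 /mnt/usb # the USB disk, wherever it appears
rsync -aH /mnt/md0/ /mnt/usb/ # copy everything off the array (the md0 mount point is an example)
# ...destroy and recreate the RAID1, mkfs, remount...
rsync -aH /mnt/usb/ /mnt/md0/ # restore the data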

My rev-C1 is available for testing commands and Alt-F releases, as I have no data on it.  I just have a couple of old 120GB disks set up on it as RAID1 at this time.

Suggestions?

Thanks in advance.

Tom
-- 
---------------------------------------------------------------------
Tom Schmidt   t...@4schmidts.com
'66 Mustang convertible 289-2V   '09 Shelby GT-500
2003 Smart Passion   2010 BMW X6 xDrive35i
http://www.4schmidts.com/

João Cardoso

Mar 22, 2014, 9:59:25 PM
to al...@googlegroups.com, t...@4schmidts.com
It uses two criteria: partitions of type RAID (0xfd or 0xda for MBR, or 0xfd00 for GPT) and a blkid of type mdraid.
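Both checks can be reproduced by hand with commands that already appear in this thread (a sketch; sda stands for either disk):

blkid -t TYPE="mdraid" # criterion 1: a blkid mdraid signature
fdisk -l /dev/sda # criterion 2, MBR disks: the RAID partition shows Id 'fd' (or 'da')
sgdisk -p /dev/sda # criterion 2, GPT disks: the RAID partition shows type code FD00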
Honestly, if you can fsck the RAID I would leave it as is. There are several ways of doing things, some better than others; some only make handling future issues harder, but what is done is done.

Your RAID is using metadata ver-1.2, which is the default when no --metadata is specified to mdadm. RC3 creates RAID with metadata 1.2 for greater-than-2.2TB disks and 0.9 for smaller disks, but RC4 will use metadata ver-1.0 independently of the disk size (D-Link uses ver-0.9).

The difference is where the RAID metadata is located on the device: at the beginning (ver-1.2) or at the end (ver-0.9 and ver-1.0). Putting the metadata at the device end has advantages when one wants to break a RAID and keep its data, or create a RAID from a standard filesystem and keep the fs data.
This means that, in principle, if you break a 1.2 RAID your data will not be accessible, as the filesystem won't find its superblock at the start of the device component. Creating a RAID from an ordinary fs will destroy the fs superblock at its start, and again the data will not be easily accessible. As a matter of fact the 1.2 RAID metadata is located 4KB from the device start, so the first superblock will not be destroyed, but the fs data that was at the location where the metadata is put will vanish.
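Where the superblock actually sits can be checked on a member partition (a sketch; ver-1.x superblocks report their offsets, ver-0.90 does not):

mdadm --examine /dev/sda2 | grep -i offset # ver-1.2 reports a Data Offset past the superblock; ver-1.0 reports a Super Offset near the device end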

On the other hand, ver-1.0 (or 0.9) metadata is located at the device end, and as an fs is seldom 100% full, this will not destroy any of its data when creating or breaking a RAID.
That's why in the wiki I recommend shrinking a fs before converting it to RAID, so no data will be located at its end (shrinking compacts data towards the device start); after the RAID is created, the fs should be enlarged to occupy all the device space (the RAID device tells the fs that the device ends before the metadata).

I have only used metadata 0.9 on my tests, as the wiki warns, so the above is theoretical.

This is why RC4 will use ver-1.0, which is needed for greater than 2.2TB partitions and is located at the device end.

In short: a normal partition-based fs sees its start and end sectors at the partition start and end; a fs created on a RAID device sees its start/end where the RAID tells it they are, and the RAID "lies" to the fs about where the device starts and ends, depending on where its metadata is located.
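As a rough sketch of the shrink-then-grow sequence described above, for a single ext3 member (this is not the wiki's literal procedure; sizes and device names are only examples, and a backup is assumed):

umount /dev/sda2
e2fsck -f /dev/sda2 # required before resizing
resize2fs /dev/sda2 1800G # shrink so nothing is stored near the device end (size is an example)
mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.0 /dev/sda2 missing # metadata at the device end
resize2fs /dev/md0 # grow the fs back to fill the RAID device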

Why does your system show different results on the B1 and C1 boxes? It depends on what metadata was there before you broke the RAID device, and on the exact sequence of commands and supplied options.

Alt-F can't just cover all possible situations...

In any case, you can try the attached raid.cgi -- copy it to the cgi-bin directory but DON'T activate any option! Does your RAID now appear?
If you have Alt-F packages installed and /Alt-F/usr/www/cgi-bin exists, the raid.cgi will survive reboots (even in TryIt mode), and you will have to delete it (use 'aufs.sh -n' before doing it, you know that).
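For reference, installing the attached script amounts to something like this (a sketch; the source path is simply wherever you saved the attachment):

cp /tmp/raid.cgi /usr/www/cgi-bin/raid.cgi # same location the stock script lives in, per the start of this thread
chmod +x /usr/www/cgi-bin/raid.cgi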


  I could plug in a USB disk and rsync my data to it, then destroy and rebuild the RAID1 and restore my data.  Or is there an easier way to do this without corrupting my data?

"Easy ways" might or might not work; you should always have a backup. And the metadata detail can have serious consequences. There is no way to convert between metadata versions other than breaking the array and rebuilding it, so: if it is working without issues, fsck works fine, and the posted raid.cgi shows your RAID, I wouldn't mess with it.
But if you are a perfectionist or your data is invaluable, make a backup, wait for RC4 and recreate the RAID.

Ah, I intend to test the ver-1.0 metadata "standard"-fs-to-RAID and RAID-break-to-"standard"-fs process without data loss, but recent RC4 issues have delayed its release (and I'm on the road now).

PS: I hate the "standard" fs wording; I only use it because D-Link calls it that and users are used to it. A fs is a fs is a fs! A fs is created on and exists on a device, be it a partition, a RAID or whatever; there are no non-standard fs ;-)
Attachment: raid.cgi

Cem Basu

Mar 23, 2014, 12:09:51 PM
to al...@googlegroups.com, t...@4schmidts.com
For those of us who started with RC3 and created our RAID sets (1.2), what is the recovery path when we go to RC4?  How can we ensure the data/RAID persists and is not at risk of loss?

Thanks

João Cardoso

Mar 23, 2014, 3:52:03 PM
to al...@googlegroups.com, t...@4schmidts.com


On Sunday, March 23, 2014 4:09:51 PM UTC, Cem Basu wrote:
For those of us who started with RC3 and created their RAID sets (1.2), what is the recovery path when we go to RC4?

There is no need; existing RAIDs will be assembled and used as found. As usual with Alt-F, your disks/filesystems/RAID/data will just be used, not touched or changed in any way.

Newly created RAID arrays will (eventually) use metadata ver-1.0, and that will facilitate recovery in case of trouble. 
 
 How can we ensure the data/RAID persists and not at risk to loss?


You have been using Alt-F for some time, and for a reason... and I do make tests; that's your assurance. But read the warranty:

                            NO WARRANTY

  11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW.  EXCEPT WHEN
OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.  THE ENTIRE RISK AS
TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU.  SHOULD THE
PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
REPAIR OR CORRECTION.

  12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.


Thanks

João Cardoso

Mar 23, 2014, 3:59:26 PM
to al...@googlegroups.com, t...@4schmidts.com



On Sunday, March 23, 2014 1:59:25 AM UTC, João Cardoso wrote:

(...)
 
In any case, you can try the attached raid.cgi

Hmmm, it might not work for RC3, as other changes related to the device node creation (/etc/mdev.conf) might be needed.

And pardon my lecture in my previous post. I visited your homepage and understand that you are a knowledgeable unix programmer.

Joao


(...)

Tom Schmidt

Mar 30, 2014, 7:21:09 PM
to al...@googlegroups.com
João,
    I was on vacation for the last week and just returned, so I was finally able to test the updated raid.cgi that you sent.  On my rev-B1 box, the raid.cgi page now looks like this:

RAID Creation
Dev. Type Component 1 Component 2 Spare
md1

RAID Maintenance
Dev. Capacity Level Ver. Components Array RAID Operations Component Operations
md0 1862.0 GB raid1 1.2 sda2 sdb2

So now it does display my raid1 metadata version 1.2 array.

Tom

João Cardoso

Mar 31, 2014, 6:10:00 PM
to
But it seems not to be displaying either the "RAID Operations" or the "Component Operations" options...

on my system:

RAID Maintenance
Dev. Capacity Level Ver. Components Array RAID Operations Component Operations
md0 27.9 GB raid1 0.90 sdb2
md1 27.9 GB raid1 1.0 sdb3
md2 17.7 GB raid1 1.2 sdb4


And for the RAID webUI to be usable, the component partitions must be of type RAID. This is an Alt-F requirement.

Tom Schmidt

Mar 31, 2014, 10:40:13 PM
to al...@googlegroups.com
João,
   Somehow those fields got missed in my cut-n-paste.  Here it is again:

RAID Maintenance
Dev. Capacity Level Ver. Components Array RAID Operations Component Operations
md0 1862.0 GB raid1 1.2 sda2 sdb2

Tom


On 3/31/2014 2:54 PM, João Cardoso wrote:


On Monday, March 31, 2014 12:21:09 AM UTC+1, Tom Schmidt wrote:
But it seems to not be displaying neither the "RAID Operations" nor the "Component Operation" options...

on my system:

RAID Maintenance
Dev. Capacity Level Ver. Components Array RAID Operations Component Operations
md0 27.9 GB raid1 0.90 sdb2
md1 27.9 GB raid1 1.0 sdb3
md2 17.7 GB raid1 1.2 sdb4



João Cardoso

Apr 3, 2014, 12:00:55 PM
to

(...)


I have only used metadata 0.9 on my tests, as the wiki warns, so the above is theoretical.

I have now done the metadata tests, and have experimentally confirmed that you can't use RAID metadata version 1.2 to "promote" a normal filesystem to RAID1 or convert a RAID1 to a "normal" filesystem, as your data will be lost.
For RAID1 metadata 1.0 (and 0.9) both procedures preserve your data as long as you shrink/enlarge the filesystem. So:

- If you currently have a RAID1 with metadata version 1.2, don't break the RAID without doing a backup first.
- If you want to convert a "standard" filesystem to RAID1 on a bigger-than-2.2TB disk using the Alt-F RAID webUI, you will lose your data.

That's not the mdadm author's fault, as preserving data is not the purpose or even a design target when creating a RAID1.
I have updated the wikis accordingly.

It is unfortunate that Alt-F RC3 uses metadata version 1.2 on bigger-than-2.2TB disks. Your data is not at risk; you only lose some flexibility if you need to manipulate the array. For RC4 the metadata version will be displayed in the RAID webUI.
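Until then, the metadata version of an existing array can be checked with the same commands Tom already posted in this thread (md0 as in the outputs above):

cat /proc/mdstat # ver-1.x arrays show e.g. 'super 1.2' after the block count; ver-0.90 arrays print no 'super' tag
mdadm --detail /dev/md0 | grep Version # e.g. 'Version : 1.2'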

Chris Lombardo

Jul 10, 2015, 7:43:59 PM
to al...@googlegroups.com
Joao,

So I want to break my RAID setup using your wiki, but I do not have any options available in the RAID menu.  I have a DNS-323-B1 with two 3TB drives in RAID 1.

Under the RAID menu the UI says the following: 


No partitions of type RAID found, use the Disk Partitioner to create RAID partitions.


Back when I created the RAID 1 setup, I remember following a command-line tutorial that you posted for drives larger than 2TB.  Could you help with the proper command-line commands to break the RAID into single disks without losing the data?  Thanks.

On Thursday, April 3, 2014 at 10:01:07 AM UTC-4, João Cardoso wrote:

(...)



João Cardoso

Aug 13, 2015, 11:51:39 AM
to Alt-F


On Saturday, 11 July 2015 00:43:59 UTC+1, Chris Lombardo wrote:
Joao,

So I want to break my RAID setup using your wiki but I do not have any options available when viewing the RAID menu.  I have a DNS-323-B1 with 2x3 TB drives in RAID 1.

Under the RAID menu the UI says the following: 


No partitions of type RAID found, use the Disk Partitioner to create RAID partitions.


What Alt-F version are you using?
To change the disk partition to type RAID you have to use the command line. 
As your disks are 3TB they must be using GPT; verify that in the Partitioner.
Also verify what metadata version your RAID uses and which disk partitions it is using, and be aware that certain metadata formats cannot be broken while keeping the data. That is explained (quoted) below in this post.
Changing the partition type does not alter the data contained in the partition.

mdadm --detail --scan --verbose # prints RAID metadata and disk partitions in use by RAID devices
sgdisk -p /dev/sda # prints your sda disk GPT partition table. Use also for sdb
sgdisk -t partition_number:fd00 /dev/sda # where partition_number is the one used by the RAID and given by the previous commands. It's probably '2', so use 'sgdisk -t 2:fd00 /dev/sda' to change the partition to type RAID. Use also for sdb.

After using those commands the RAID webUI should be usable, but it does not verify (only displays) the metadata in use.
It is possible to change the metadata from version 0.9 to version 1.0 (another post addresses that), but other conversions are not possible.
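Once the partitions are marked as type RAID, and assuming the metadata turns out to be 0.9 or 1.0 (not 1.2!), the break itself is roughly the following sketch -- it is not the wiki's literal procedure, the device names are the ones from the commands above, and a backup first is still strongly advised:

umount /dev/md0 # unmount the array first
mdadm --stop /dev/md0 # stop the array
mdadm --zero-superblock /dev/sdb2 # remove the RAID metadata from a member (repeat for sda2 if desired)
mount /dev/sdb2 /mnt # with end-of-device metadata the filesystem starts at the partition start and mounts directly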

Cem Basu

Aug 13, 2015, 9:23:59 PM
to Alt-F
You can degrade your two-disk RAID to one disk, format and promote the other drive as a metadata-1.0 RAID, and copy your data over.  When the copy completes, you can delete the first RAID and add its drive to the new 1.0 RAID set.
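A rough sketch of that sequence, assuming the existing array is md0 on sda2/sdb2 as earlier in this thread, ext3, and a verified backup (the mount points are examples):

mdadm /dev/md0 --fail /dev/sdb2 --remove /dev/sdb2 # degrade the old array to one disk
mdadm --zero-superblock /dev/sdb2 # wipe the old metadata from the freed partition
mdadm --create /dev/md1 --level=1 --raid-devices=2 --metadata=1.0 /dev/sdb2 missing # new, degraded ver-1.0 array
mkfs.ext3 /dev/md1 # format the new array
mkdir -p /mnt/new && mount /dev/md1 /mnt/new
rsync -aH /mnt/md0/ /mnt/new/ # copy the data over (source mount point is an example)
mdadm --stop /dev/md0 && mdadm --zero-superblock /dev/sda2 # retire the old array (unmount it first)
mdadm /dev/md1 --add /dev/sda2 # add the freed disk; the mirror then resyncs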
