Help needed on disks from a failed DNS323 (Alt-F)


Peter Read

Jun 10, 2022, 6:33:27 AM
to Alt-F
Hi there, my DNS323 recently suffered a catastrophic failure (smoking main board) and I'm keen to recover the data from the disks. Both are 3TB Toshiba disks and were healthy before the mainboard died. My issue is that I'm unable to read them on any system. They were configured as JBOD using ext3, I believe, but I've tried to read them on a Mac and on a replacement Asustor NAS. I'm trying to use some recovery software, but it simply cannot find a partition, let alone mount the drive.

Does anyone have any suggestions on what to do next?

Many thanks

João Cardoso

Jun 10, 2022, 12:41:01 PM
to Alt-F
You probably need a Linux machine. If you don't have one, boot a PC from a live USB Linux distro; there are several, I have used openSUSE and Ubuntu.
If the RAID was created using the Alt-F webUI, you should have no trouble finding the partitions (GPT/GUID for disks larger than 2TB): the usual layout is a first partition of type swap (type/code 8200) and a second of type RAID (type/code FD00). To see the partitions, use 'fdisk -l /dev/sd<disk>' or 'gdisk -l /dev/sd<disk>' or some variant.
You will most probably need the 'mdadm' command from the command line to assemble the RAID. Be careful here: you *don't* want to create a RAID, you *want* to *assemble* an existing RAID. The 'mdadm --examine /dev/sd<disk><partition>' command will show you the RAID info stored on each disk.
In a graphical environment, 'Disks' (GNOME), 'KDE Partition Manager' or 'gparted' can display the disks and their partitions, but probably cannot assemble the RAID.
I wouldn't rely on automatic recovery tools, as a JBOD RAID is not a very common setup.
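The inspect-then-assemble sequence above can be sketched as a short script. Everything below is a sketch: /dev/sdb, /dev/md0 and the mount point are hypothetical names (check `lsblk` or `dmesg` for the real ones on your machine), and the run() wrapper only prints each command, so the script is safe to execute as-is; drop the wrapper and run the commands as root to do it for real.

```shell
#!/bin/sh
# Dry-run wrapper: print each command instead of executing it.
run() { echo "+ $*"; }

DISK=/dev/sdb    # hypothetical: the old NAS disk as seen on the rescue machine

run gdisk -l "$DISK"             # expect two GPT partitions: 8200 (swap) and FD00 (RAID)
run mdadm --examine "${DISK}2"   # RAID superblock info stored on the data partition
# Assemble the *existing* array read-only -- never --create, which would overwrite it:
run mdadm --assemble --readonly /dev/md0 "${DISK}2"
run mount -o ro /dev/md0 /mnt/recovery
```

For a two-disk JBOD, both data partitions would be listed on the assemble line.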


Peter Read

Jun 10, 2022, 6:25:47 PM
to Alt-F
Thanks for this, João. Do I need to have both disks connected to the Linux machine at the same time to assemble the RAID, or can I handle each disk separately?

gra...@xtra.co.nz

Jun 10, 2022, 6:36:15 PM
to Alt-F
On Windows I use a driver from here: https://sourceforge.net/projects/ext2fsd/

After one of my RAID 0 drives failed, I wanted to replace both drives with higher-capacity drives. This driver enabled me to put the remaining good drive in a caddy and copy all my files across to the new drive array. I have had my DNS323 since the early 2000s and have done this a few times as larger-capacity hard drive prices have come down.

Sorry, I don't know if there is an Apple equivalent for this.

João Cardoso

Jun 10, 2022, 7:48:42 PM
to Alt-F
On Friday, June 10, 2022 at 11:36:15 PM UTC+1 gra...@xtra.co.nz wrote:
On Windows I use  a driver from here:    https://sourceforge.net/projects/ext2fsd/

After one of my raid 0 drives failed,  I wanted to replace both drives with higher capacity drives.   This driver enabled me to put the remaining good drive in a caddy and copy all my files across to the new drive array. 

Are you sure it was a RAID 0 and not a RAID 1? With RAID 0, data is split evenly between drives; if one drive fails, all data is lost. By contrast, with RAID 1 data is duplicated on both drives; if one drive fails, the data is still available on the other drive.
Additionally, with RAID 1 (with metadata version 1.0 or lower), the filesystem is laid out at the start of the partition, so it looks like a normal, standard filesystem that ext2/3/4 tools (such as Ext2Fsd) recognize and can deal with.
So I'm convinced that you had a RAID 1, not a RAID 0.

For a JBOD (Just a Bunch Of Disks), the disks are concatenated one after the other, and the filesystem starts on the first disk/partition. A disk tool, if presented with the first disk alone, will see a truncated ext2/3 filesystem but will not be able to use it, as the disk looks "short": only half of what it should be. You can probably force-mount that first disk and recover that part of the data. If only the second disk is available, it *might* be possible, using the spare and redundant ext2/3/4 filesystem metadata (superblocks) present on that disk and some trickery, to recover that disk's data -- but here I'm speculating.
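The backup-superblock idea can be made concrete, with the same caveat that this is speculative. The partition name below is hypothetical, and the run() wrapper only echoes the commands, so the script writes nothing; the underlying commands are standard e2fsprogs tools.

```shell
#!/bin/sh
# Dry-run wrapper: print each command instead of executing it.
run() { echo "+ $*"; }

PART=/dev/sdb2   # hypothetical: the data partition of the surviving JBOD disk

# 'mke2fs -n' creates nothing (-n = dry run); it reports where the backup
# superblocks *would* be for a filesystem of this size, which is where an
# existing ext2/3/4 filesystem keeps them too.
run mke2fs -n "$PART"
# Read-only check (-n) against backup superblock 32768 (typical for 4K blocks):
run e2fsck -n -b 32768 "$PART"
# The sb= mount option is given in 1KiB units, so 4K block 32768 -> sb=131072:
run mount -o ro,sb=131072 "$PART" /mnt/recovery
```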

So, @rot, just for diagnosis, you can try one disk at a time, seeing whether the tool detects it as a broken, "short" ext2/3/4 filesystem -- that one will be the first disk.
Whatever you do, mount the RAID and then the filesystem in read-only mode, or you risk an automatic filesystem check destroying the otherwise fine metadata (superblocks).
RAID 0 and JBOD are not really RAID (*Redundant* Array of Inexpensive/Independent Disks). And a RAID device is just a new virtual disk: a filesystem must be laid on top of it, as with normal physical disks -- this is a source of much confusion.

Alt-F uses RAID in incremental mode. From the mdadm manual page:
     Incremental Assembly
             Add a single device to an appropriate array.  If the addition of the device makes the array
             runnable, the array will be started.  This provides a convenient interface  to  a  hot-plug
             system.   As  each  device  is  detected, mdadm has a chance to include it in some array as
             appropriate.

So, if I recollect correctly, when a partition is detected its type is read with the 'blkid /dev/<disk><partition>' program, and if a RAID device type is recognized on it, the 'mdadm --incremental /dev/<disk><partition>' command is run. On Alt-F the RAID device is the second partition, so it will be /dev/sda2 or /dev/sdb2 on a two-disk system; a possible sequence would be
mdadm --incremental /dev/sda2   # first disk. On a Linux system booted with one disk already present, the RAID device might be sdb2
mdadm --incremental /dev/sdb2   # or /dev/sdc2 in that case. Then, if /dev/md0 is the resulting RAID device:
mount -o ro /dev/md0 /mnt/your-mount-directory   # 'ro' means read-only

Graham Wilson

Jun 10, 2022, 10:29:57 PM
to al...@googlegroups.com

Oops -- sorry, my RAID is the one where the data is the same on both disks. I have usually had to replace one of the drives every 3 to 4 years over the 15 to 20 years I have had the DNS323.


Peter Read

Jun 12, 2022, 2:33:21 AM
to Alt-F
Hi all, I managed to install both disks in my new NAS (AS5202T) and ran a couple of commands, but I'm still lost on what I'm looking at. Hoping you can point me in the right direction here.

Below is the result of blkid command:
===================================

# blkid

/dev/loop0: UUID="9b4590f8-139a-48cf-bff5-9d54b083bffd" BLOCK_SIZE="1024" TYPE="ext4"

/dev/sda2: UUID="b2e563da-7ca5-719b-1c11-37310e666033" UUID_SUB="b00d23fc-49f0-eae8-e23c-47e9c2073087" LABEL="AS5202T-2D0B:0" TYPE="linux_raid_member" PARTUUID="0c2f5675-9209-432b-aa9e-c55d1cc691f1"

/dev/sda3: UUID="42e067e7-3965-0d5d-cc48-1ce71bca8ac2" UUID_SUB="361af2ba-3a2d-3c61-19ce-5a50dc4f402a" LABEL="AS5202T:126" TYPE="linux_raid_member" PARTUUID="a0be72a5-7a65-4ec7-aad9-8c427ee0bc34"

/dev/sda4: UUID="5ca9717f-7554-dc86-6a03-e8135cf11008" UUID_SUB="ce02c463-1209-c1c5-3ec9-2801630291b8" LABEL="AS5202T-2D0B:1" TYPE="linux_raid_member" PARTUUID="81b7b14d-21ac-46db-aeab-2912ac2bcd7a"

/dev/sdc2: LABEL="Seagate Backup Plus Drive" BLOCK_SIZE="512" UUID="7AB82449B82405EB" TYPE="ntfs" PTTYPE="atari" PARTLABEL="Basic data partition" PARTUUID="8d71bb95-e855-44c8-a500-c0e0f306f787"

/dev/sdc3: LABEL="Seagate Backup Plus 2" BLOCK_SIZE="512" UUID="70CCA943CCA90486" TYPE="ntfs" PARTLABEL="Basic data partition" PARTUUID="819a419a-f6d5-49f8-a623-61070d1b0617"

/dev/md0: UUID="e01b1e9b-8bfd-4d69-b727-1cd161853f98" BLOCK_SIZE="4096" TYPE="ext4"

/dev/md126: UUID="dc5bf433-4af8-4f11-8de7-060e7e60325b" TYPE="swap"

/dev/md1: UUID="e6d0ac05-736e-4448-a8e1-ef4b57033b84" BLOCK_SIZE="4096" TYPE="ext4"

/dev/sda1: PARTUUID="0db7c2f0-4223-4a54-af79-cb1aea2cb2bb"

/dev/sdc1: PARTLABEL="Microsoft reserved partition" PARTUUID="c47bed0e-9614-4502-aad4-6b9cad9bf711"

==============================================

I can tell from here that sdc2 and sdc3 are my external 8TB drive. The rest are unclear to me.



Below is the result of parted -l

============================================

Model: ATA WDC WD20EARS-00M (scsi)

Disk /dev/sda: 2000GB

Sector size (logical/physical): 512B/512B

Partition Table: gpt

Disk Flags: 


Number  Start   End     Size    File system  Name  Flags

 1      1049kB  268MB   267MB

 2      268MB   2416MB  2147MB                     raid

 3      2416MB  4563MB  2147MB                     raid

 4      4563MB  2000GB  1996GB                     raid



Error: /dev/sdb: unrecognised disk label

Model: WD My Book 1130 (scsi)                                             

Disk /dev/sdb: 3001GB

Sector size (logical/physical): 512B/4096B

Partition Table: unknown

Disk Flags: 


Model: Seagate Backup+ Hub BK (scsi)

Disk /dev/sdc: 8002GB

Sector size (logical/physical): 512B/4096B

Partition Table: gpt

Disk Flags: 


Number  Start   End     Size    File system  Name                          Flags

 1      17.4kB  134MB   134MB                Microsoft reserved partition  msftres

 2      135MB   4001GB  4001GB  ntfs         Basic data partition

 3      4001GB  8002GB  4001GB  ntfs         Basic data partition



Error: /dev/mmcblk0: unrecognised disk label

Warning: Error fsyncing/closing /dev/mmcblk0: Input/output error

Retry/Ignore? i                                                           

Model: MMC 8GTF4R (sd/mmc)

Disk /dev/mmcblk0: 7818MB

Sector size (logical/physical): 512B/512B

Partition Table: unknown

Disk Flags: 


Error: /dev/mmcblk0boot0: unrecognised disk label

Model: Generic SD/MMC Storage Card (sd/mmc)                               

Disk /dev/mmcblk0boot0: 4194kB

Sector size (logical/physical): 512B/512B

Partition Table: unknown

Disk Flags: 


Model: Linux Software RAID Array (md)

Disk /dev/md0: 2144MB

Sector size (logical/physical): 512B/512B

Partition Table: loop

Disk Flags: 


Number  Start  End     Size    File system  Flags

 1      0.00B  2144MB  2144MB  ext4



Model: Linux Software RAID Array (md)

Disk /dev/md126: 2144MB

Sector size (logical/physical): 512B/512B

Partition Table: loop

Disk Flags: 


Number  Start  End     Size    File system     Flags

 1      0.00B  2144MB  2144MB  linux-swap(v1)



Error: /dev/mmcblk0boot1: unrecognised disk label

Model: Generic SD/MMC Storage Card (sd/mmc)                               

Disk /dev/mmcblk0boot1: 4194kB

Sector size (logical/physical): 512B/512B

Partition Table: unknown

Disk Flags: 


Model: Linux Software RAID Array (md)

Disk /dev/md1: 1996GB

Sector size (logical/physical): 512B/512B

Partition Table: loop

Disk Flags: 


Number  Start  End     Size    File system  Flags

 1      0.00B  1996GB  1996GB  ext4


=====================================================

For deeper understanding: I have a 2TB WD drive that the NAS boots from and an 8TB drive attached as target storage. The two recovery drives are both 3TB Toshiba units, one loaded inside the NAS via SATA and the second attached via USB (seems to be /dev/sdb). I am confused about /dev/mmcblk0boot0 and /dev/mmcblk0, though. To me it looks like the 'unrecognised disk label' items are the JBOD, but they don't seem to follow any of the logic I've seen anywhere else. I also ran an 'fdisk -l' command with the following output:


==================================

# fdisk -l

Disk /dev/sda: 1863 GB, 2000398934016 bytes, 3907029168 sectors

243201 cylinders, 255 heads, 63 sectors/track

Units: sectors of 1 * 512 = 512 bytes


Device  Boot StartCHS    EndCHS        StartLBA     EndLBA    Sectors  Size Id Type

/dev/sda1    0,32,33     32,162,2          2048     524287     522240  255M 83 Linux

/dev/sda4    0,0,2       0,32,32              1       2047       2047 1023K ee EFI GPT


Partition table entries are not in disk order

fdisk: device has more than 2^32 sectors, can't use all of them

Disk /dev/sdc: 2048 GB, 2199023255040 bytes, 4294967295 sectors

266305 cylinders, 256 heads, 63 sectors/track

Units: sectors of 1 * 512 = 512 bytes


Device  Boot StartCHS    EndCHS        StartLBA     EndLBA    Sectors  Size Id Type

/dev/sdc1    0,0,2       1023,255,63          1 4294967295 4294967295 2047G ee EFI GPT

Disk /dev/md0: 2045 MB, 2144337920 bytes, 4188160 sectors

523520 cylinders, 2 heads, 4 sectors/track

Units: sectors of 1 * 512 = 512 bytes


Disk /dev/md0 doesn't contain a valid partition table

Disk /dev/md126: 2045 MB, 2144337920 bytes, 4188160 sectors

523520 cylinders, 2 heads, 4 sectors/track

Units: sectors of 1 * 512 = 512 bytes


Disk /dev/md126 doesn't contain a valid partition table

Disk /dev/md1: 1859 GB, 1995700174848 bytes, 3897851904 sectors

487231488 cylinders, 2 heads, 4 sectors/track

Units: sectors of 1 * 512 = 512 bytes


Disk /dev/md1 doesn't contain a valid partition table

=======================================

Again I'm confused by the output here. It seemed to me that md0 was the live partition on the NAS, but this states it doesn't have a valid partition table and is only 2045MB... and sdb isn't even listed despite being one of the 3TB disks I'm looking for. I know this is a long post, but any help you can offer would be greatly appreciated.

Cheers 

João Cardoso

Jun 12, 2022, 12:53:53 PM
to Alt-F
On Sunday, June 12, 2022 at 7:33:21 AM UTC+1 rot...@gmail.com wrote:
hi all, I managed to install both disks on my new NAS (As5202T) and ran a couple of commands but I'm still lost on what I'm looking at. Hoping you can point me in the right direction here.
 
An important question: do you remember how your JBOD array was created? Under Alt-F or using the D-Link firmware?

The mmcblk devices belong to the NAS firmware; you can disregard them. The 'fdisk' command seems to only handle old MBR-style partitions (for disks smaller than 2TB), so its output is unreliable here.



Below is the result of blkid command:
===================================

# blkid

sda is a 2TB WDC WD20EARS with a GPT partition table
   sda2, sda3, sda4 partitions all have the type of RAID components (not a RAID device)
sdb's partition table isn't recognized by parted, but it reports a 3TB WD My Book 1130
sdc is an 8TB Seagate Backup+ Hub BK (scsi) with a GPT partition table
   sdc2, sdc3 have type NTFS

You say you have 4 disks, so one is missing (sdd). Some USB adapters do not handle disks or partitions larger than 2TB. Can't you plug both 3TB disks inside the NAS, even if you have to temporarily disconnect one of the 2TB or 8TB disks?

md0 and md1 are RAID devices that were assembled from partitions.
md126 is a RAID device that was created without a meaningful name assigned to it.

So the NAS firmware has tried to auto-assemble the RAID; we need to know what it found. What is the output of 'cat /proc/mdstat'?
It will tell you which RAID devices it found (md0/md1/md126...), the RAID type, which disk partitions make up each RAID device, and the RAID state. E.g.:

[root@DNS-325]# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4]  
md0 : active raid1 sdb2[0] sda2[2]
     1952990119 blocks super 1.0 [2/2] [UU]
     bitmap: 0/15 pages [0KB], 65536KB chunk

You can also use 'mdadm --detail /dev/md0' (or md1 or md126), and 'mdadm --examine /dev/sdNX', e.g. 'mdadm --examine /dev/sda2'.
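To make the /proc/mdstat format above easier to digest, here is a small parser sketch. It is fed a sample in the same shape as the listing above via a heredoc; on a real system you would pipe `cat /proc/mdstat` into it instead.

```shell
#!/bin/sh
# Turn "mdN : active raidX memberA[i] memberB[j] ..." lines into a one-line summary each.
parse_mdstat() {
    awk '/^md/ {
        printf "%s level=%s members=", $1, $4
        for (i = 5; i <= NF; i++) { sub(/\[[0-9]+\]$/, "", $i); printf "%s ", $i }
        print ""
    }'
}

# Sample input matching the mdstat listing above:
parse_mdstat <<'EOF'
Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4]
md0 : active raid1 sdb2[0] sda2[2]
      1952990119 blocks super 1.0 [2/2] [UU]
EOF
# -> md0 level=raid1 members=sdb2 sda2
```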

Peter Read

Jun 12, 2022, 9:07:38 PM
to Alt-F
Hi João,

I'm fairly sure I remember building this using the Alt-F software. I haven't used the D-Link firmware for years, but I can't be 100% certain how it was built.

The sdb drive is actually the one hanging off the USB port and is recognised as the 3TB WD My Book (I snagged an old enclosure and used it to connect my Toshiba 3TB disk). The NAS needs one drive installed to boot, so it cannot run without a drive used for the OS (that's the 2TB WD drive). The strange thing is that the NAS does show the drive in the GUI but lists it as inaccessible.

 # cat /proc/mdstat

Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] 

md1 : active raid1 sda4[0]

      1948925952 blocks super 1.2 [1/1] [U]

      

md126 : active raid1 sda3[0]

      2094080 blocks super 1.2 [2/1] [U_]

      

md0 : active raid1 sda2[0]

      2094080 blocks super 1.2 [2/1] [U_]

      

unused devices: <none>

============================

mdadm --detail /dev/md0

/dev/md0:

           Version : 1.2

     Creation Time : Mon May 30 23:29:50 2022

        Raid Level : raid1

        Array Size : 2094080 (2045.00 MiB 2144.34 MB)

     Used Dev Size : 2094080 (2045.00 MiB 2144.34 MB)

      Raid Devices : 2

     Total Devices : 1

       Persistence : Superblock is persistent


       Update Time : Mon Jun 13 04:10:57 2022

             State : clean, degraded 

    Active Devices : 1

   Working Devices : 1

    Failed Devices : 0

     Spare Devices : 0


Consistency Policy : resync


              Name : AS5202T-2D0B:0

              UUID : b2e563da:7ca5719b:1c113731:0e666033

            Events : 6996


    Number   Major   Minor   RaidDevice State

       0       8        2        0      active sync   /dev/sda2

       -       0        0        1      removed

=================================

# mdadm --detail /dev/md1

/dev/md1:

           Version : 1.2

     Creation Time : Mon May 30 23:29:58 2022

        Raid Level : raid1

        Array Size : 1948925952 (1858.64 GiB 1995.70 GB)

     Used Dev Size : 1948925952 (1858.64 GiB 1995.70 GB)

      Raid Devices : 1

     Total Devices : 1

       Persistence : Superblock is persistent


       Update Time : Mon Jun 13 09:01:09 2022

             State : clean 

    Active Devices : 1

   Working Devices : 1

    Failed Devices : 0

     Spare Devices : 0


Consistency Policy : resync


              Name : AS5202T-2D0B:1

              UUID : 5ca9717f:7554dc86:6a03e813:5cf11008

            Events : 14


    Number   Major   Minor   RaidDevice State

       0       8        4        0      active sync   /dev/sda4


===============================

# mdadm --detail /dev/md126

/dev/md126:

           Version : 1.2

     Creation Time : Mon May 30 23:29:55 2022

        Raid Level : raid1

        Array Size : 2094080 (2045.00 MiB 2144.34 MB)

     Used Dev Size : 2094080 (2045.00 MiB 2144.34 MB)

      Raid Devices : 2

     Total Devices : 1

       Persistence : Superblock is persistent


       Update Time : Mon Jun 13 00:00:26 2022

             State : clean, degraded 

    Active Devices : 1

   Working Devices : 1

    Failed Devices : 0

     Spare Devices : 0


Consistency Policy : resync


              Name : AS5202T:126  (local to host AS5202T)

              UUID : 42e067e7:39650d5d:cc481ce7:1bca8ac2

            Events : 1412


    Number   Major   Minor   RaidDevice State

       0       8        3        0      active sync   /dev/sda3

       -       0        0        1      removed

=============================


Peter Read

Jun 12, 2022, 9:31:01 PM
to Alt-F
I've tried installing either drive in the USB slot and it shows up as a device each time, but with 'unrecognised disk label' and 'partition table unknown' on both disks. I'm sensing defeat here but open to suggestions.

Peter Read

Jun 12, 2022, 9:49:20 PM
to Alt-F
Hi, I tried again, this time removing the other disk in the array inside the NAS itself. When running parted -l I get the following, which shows my 3TB disk with a GPT partition table. I do have the second disk connected via USB, but it doesn't seem to be recognised this time.

parted -l

Model: ATA WDC WD20EARS-00M (scsi)

Disk /dev/sda: 2000GB

Sector size (logical/physical): 512B/512B

Partition Table: gpt

Disk Flags: 


Number  Start   End     Size    File system  Name  Flags

 1      1049kB  268MB   267MB

 2      268MB   2416MB  2147MB                     raid

 3      2416MB  4563MB  2147MB                     raid

 4      4563MB  2000GB  1996GB                     raid



Model: ATA TOSHIBA DT01ACA3 (scsi)

Disk /dev/sdb: 3001GB

Sector size (logical/physical): 512B/4096B

Partition Table: gpt

Disk Flags: 


Number  Start   End     Size    File system     Name  Flags

 1      32.8kB  537MB   537MB   linux-swap(v1)

 2      537MB   3001GB  3000GB  ext4



Model: Seagate Backup+ Hub BK (scsi)

Disk /dev/sdd: 8002GB

Peter Read

Jun 12, 2022, 11:18:53 PM
to Alt-F

In this new setup, with the disk in the other slot of the NAS, I get the following output for the 3TB disk.


mdadm --examine /dev/sdb

/dev/sdb:

   MBR Magic : aa55

Partition[0] :   4294967295 sectors at            1 (type ee)

Peter Read

Jun 12, 2022, 11:37:32 PM
to Alt-F
Seems my other disk is now dead -- I cannot get it recognised on any system at all. If there is any hope of recovering data from the sdb disk above, great.

Peter Read

Jun 13, 2022, 3:24:19 AM
to Alt-F
Made some progress today -- I was able to install the first disk in my NAS, mount it, and copy the data to the 8TB USB disk hanging off it. The second disk is in the freezer to see if I can revive it somehow. Seems strange to me that I was able to mount the disk without any issue, but so be it. I'm a happy camper :)

João Cardoso

Jun 13, 2022, 2:27:54 PM
to Alt-F
On Monday, June 13, 2022 at 8:24:19 AM UTC+1 rot...@gmail.com wrote:
Made some progress today - I was able to mount the first disk on my NAS, mount it and copy the data to the USB 8TB disk hanging off it. The second disk is in the freezer to see if I can revive it somehow. Seems strange to me that I was able to mount the disk without any issue but so be it. I'm a happy camper :)

Then you didn't have a JBOD, you had a RAID 1! You only need one disk (the other contains an exact copy), using the RAID in a degraded state.
You seem to have found a good setup: the 2TB disk the NAS needs to boot plugged into the NAS, one of the old 3TB disks also plugged into the NAS, and the 8TB disk on USB.

It is not possible for me to help further, because your configuration keeps changing and the info posted two posts ago is no longer the current one.
Notice that device names (sda, sdb, md0, md1...) might change when disks are moved or swapped inside the NAS, so your conclusions need to be adapted from one setup to the next.

After identifying the device name (sda/sdb) of a given disk with 'parted', and using 'cat /proc/mdstat' to match each RAID device (md0/md1) with its component disk partitions (sdaX/sdbX) and the RAID type (linear (known as JBOD), raid1, etc.), you can decide what to do next. 'mdadm --detail <RAID-device>' and 'mdadm --examine <disk-partition>' might also be useful.
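That identification walk can also be read straight from sysfs in one pass. A sketch (read-only; 'slaves' is the kernel's name for an array's member devices, and the script just prints a placeholder line when no md arrays exist):

```shell
#!/bin/sh
# Summarize every md array: name, RAID level, and member partitions, via sysfs.
overview_md() {
    printf '%-8s %-8s %s\n' DEVICE LEVEL MEMBERS
    found=0
    for md in /sys/block/md*; do
        [ -d "$md/md" ] || continue   # skips the literal glob when no arrays exist
        found=1
        printf '%-8s %-8s %s\n' "${md##*/}" "$(cat "$md/md/level")" \
            "$(ls "$md/slaves" 2>/dev/null | tr '\n' ' ')"
    done
    [ "$found" -eq 1 ] || echo "(no md arrays found)"
}
overview_md
```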

Out of curiosity, what is the output of 'cat /proc/mdstat' in the setup with the 3TB disk inside the NAS where you were able to mount "it" (i.e., do you mount the RAID device md0/md1 or a disk partition sdaX/sdbX)?

Peter Read

Jun 13, 2022, 11:09:11 PM
to Alt-F
Hi João, I was able to mount the 3TB disk directly at a mount point. Given that, I'm thinking this was a straight single-drive setup (no RAID), with the 2 x 3TB disks formatted as ext4 independently of one another. Sadly the second drive was dead -- it wouldn't even spin up until I changed the controller board, and even then it won't load; it was just identified as a device in Linux. I'm sorry the config was changing all the time, but I had to figure out exactly how to get things mounted -- dropping the second disk into the NAS meant removing it from its volume, USB controller vs SATA, etc.
I'm closing this thread now, and want to truly thank you for all your help and support. Your knowledge is immense and your patience never-ending. Thank you so, so much.

Cheers

Peter
