>2TB LUNs, Firewire and Linux should work on new kernels


Andrew Grover

Aug 18, 2009, 3:06:49 PM
to drobo...@googlegroups.com
It looks like the issue with >2TB LUNs on Linux via Firewire was fixed
in kernel 2.6.31, yay! (Next Ubuntu release "Karmic" will include this
kernel, FWIW.)

Apparently the SBP layer wasn't allocating enough space for the larger
CDBs needed with >2TB LUNs.
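
(If you want to check whether you're on a fixed kernel before trusting a
big LUN over Firewire, this is enough:)

    uname -r    # want 2.6.31 or later for >2TB LUNs over Firewire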

Regards -- Andy

Ido Magal

Aug 18, 2009, 3:10:26 PM
to drobo...@googlegroups.com
any chance there'll be a way to enlarge the LUN without killing the data in the foreseeable future?

Andrew Grover

Aug 18, 2009, 5:08:46 PM
to drobo...@googlegroups.com
On Tue, Aug 18, 2009 at 12:10 PM, Ido Magal<ido....@gmail.com> wrote:
> any chance there'll be a way to enlarge the LUN without killing the data in
> the foreseeable future?

I have no inside information, but I would highly doubt it.

Once I upgrade to Karmic, I'm anticipating a reformat, and then never
again (or at least until I have >16TB of real data... not any time
soon :-)

-- Andy

Peter Silva

Aug 18, 2009, 10:10:38 PM
to drobo...@googlegroups.com
Is that patch specific to Firewire?  Because I can reliably toast LUNs > 2 TiB on my gen 1 USB Drobos...

Andrew Grover

Aug 19, 2009, 1:22:16 AM
to drobo...@googlegroups.com
On Tue, Aug 18, 2009 at 7:10 PM, Peter Silva<infor...@gmail.com> wrote:
> Is that patch specific to Firewire?  Because I can reliably toast LUNs > 2 TiB on my gen 1 USB Drobos...

Yeah, the problem only bites when all three apply: 1) Linux, 2) Firewire,
and 3) a kernel older than 2.6.31.

Regards -- Andy

browner

Aug 23, 2009, 11:56:45 AM
to drobo-talk


On Aug 19, 6:22 am, Andrew Grover <andy.gro...@gmail.com> wrote:
> Yeah, the problem only bites when all three apply: 1) Linux, 2) Firewire,
> and 3) a kernel older than 2.6.31.


I think that in addition to any Linux kernel limitations there are
limitations in the Drobo's firmware that prevent LUNs > 2TiB with
certain file systems. So make sure you are using NTFS or HFS+ rather
than, say, ext3, or you will likely see corruption at some point in
the future.

Chris

Litrik De Roy

Aug 23, 2009, 12:50:39 PM
to drobo...@googlegroups.com

Do you have any references to back that up?
When it comes to Linux support and large LUNs there are plenty of
assumptions floating around, but I have never seen any hard
confirmation (from, for example, Data Robotics).

--
Litrik De Roy
Norio ICT Consulting - http://www.norio.be/

browner

Aug 24, 2009, 6:33:26 AM
to drobo-talk


On Aug 23, 5:50 pm, Litrik De Roy <lit...@gmail.com> wrote:
> Do you have any references to back that up?
> When it comes to Linux support and large LUNs there are plenty of
> assumptions floating around, but I have never seen any hard
> confirmation (from, for example, Data Robotics).

In short, no. I have gleaned this information from the experiences
reported by other users and some small experiments of my own, but it
may not be 100% accurate.

I did contact DR about this issue and asked them specifically to
clarify; however, they simply replied that they would pass my
comments on to the development team.

One interesting thing to note is that when using the Windows Drobo
software to format the Drobo through a DroboShare, ext3 is available
as an option. Notably, though, the software limits you to a 2TiB LUN
just as it does with FAT32, and only allows larger sizes for NTFS and
HFS+. This suggests to me that there is some underlying technical
limitation that DR are aware of.

By all means try it and report back, but I wouldn't trust it with any
precious data, just in case. You might also try filling it up past
the 2TiB point and then deleting data, to see if the Drobo ever
reclaims the free space; failure to reclaim seems to be the first
symptom that things are going wrong.
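
Something like this makes a quick version of that test (mount point is
hypothetical; ~2200 x 1 GiB files takes you just past the 2TiB mark):

    for i in $(seq 1 2200); do
        dd if=/dev/zero of=/mnt/drobo/fill$i bs=1M count=1024 || break
    done
    rm -f /mnt/drobo/fill*    # then watch whether the lights ever go down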

Chris

Peter Silva

Aug 24, 2009, 8:24:58 AM
to drobo...@googlegroups.com
I have a repeatable test case that gets the Drobo extremely confused, to the point where
offloading the data and reformatting, or a reset, is all that will bring it back.  I have
submitted diagnostics to DRI on this topic, but no resolution AFAIK.

to reproduce:

1) Reset your drobo (use the manual pin method...). This erases all data on your drobo and clears all logs.

2) Determine the data space available on your drobo after accounting for parity.
   (Say you have 3 x 500 GB drives: one drive's worth goes to parity, so you have about 1 TB available.)

3) Set your LUN size to > 2 TiB, say 8 TiB.  (I've done it with 4 and 8, don't recall 16...)

4) Build your ext3 fs as per normal. (It will take a very long time; no worries, just let it run.)

5) You should have a pretty, empty drobo with no blue lights on and a ready file system.

6) Write data to the drive until you reach its physical capacity (i.e. you get a write error).
   (All the blue lights fill up, drobom reports increasing capacity, etc., and the drobo gets sluggish.)

7) Remove a large part of the data...
     -- all the blue lights will stay lit.
     -- the drobo will remain sluggish.

8) Take a diagnostic, send it to DRI.


If you do the same thing with a LUN size of 2 TiB, the Drobo recovers the space without
issue.  Actually, you probably want to do the test with 2 TiB first to confirm that, then
repeat with a larger LUN; a scripted version of the core steps is sketched below.
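
Here's that sketch (device and mount point are from my setup; adjust to
taste):

    # assumes the 8 TiB LUN shows up as /dev/sdc
    mkfs.ext3 /dev/sdc                  # step 4: takes a very long time
    mount /dev/sdc /mnt/drobo
    i=0                                 # step 6: write 1 GiB files until
    while dd if=/dev/zero of=/mnt/drobo/fill$i bs=1M count=1024; do
        i=$((i+1))                      #         the drive errors out
    done
    rm -f /mnt/drobo/fill*              # step 7: LEDs stay lit regardless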

ChrisW

Sep 29, 2009, 4:26:25 AM
to drobo-talk
All true... I've just finished about a week's worth of tests with ext3
on a drobo. LUNs < 2TB: perfectly behaved. Over 2TB, everything
seems to work fine, until you run an rm -rf against the volume and
notice that no, the LEDs don't go out.

Very disappointing; the marketing for drobo really doesn't make it
clear that "supports Linux" actually means "supports Linux - as long
as you don't have much data you want to store and need to keep owners
and permissions..." ;)

Sadly, multiple 2TB LUNs won't cut it for the job in hand, and as we
already have a RAID array with more than 2TB in use, the drobo is
sitting empty on the shelf - looking pretty, but pretty useless.

I'm now wondering about possibly using HFS+, which drobo seemingly does
support up to 16TB, and which does offer a journaling file system that
keeps unix owners and permissions. I'm not sure how much of a
performance hit this is compared to ext3, but having said that,
performance of cp and rm on the drobo isn't exactly snappy in the
first place.

Anyone ahead of me with HFS+ tests?? I'm always keen to learn from
mistakes, especially other people's :)

chris w.

ChrisW

Oct 1, 2009, 12:05:43 AM
to drobo-talk
Well... it's now 04:57 AM, and after a whole bunch of tests, I can
say.................... nope, hfsplus doesn't seem the way forward
either.


I used mkfs.hfsplus to build the 8TB volume, which worked fine (though
for some reason it had to be /dev/hdc rather than /dev/hdc1 as I
would have expected). I built it without journaling, and with case
sensitivity off (as that's reported as causing issues on the drobo
site). It is recognised and mounts up fine; file access is a little
slow compared to ext3... but then that wasn't exactly fast. I filled
the drive up to just over 2TB and noted the % used via drobom status (no,
I can't see the LEDs, I'm at home). Then kicked off an rm -rf..... and
after a while (actually hfsplus seems quicker on delete than ext3
did), df tells me I have an empty drive.... and drobom status
tells me.... I still have all my data. Duh!
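
For the record, the commands were roughly these (from memory, so
double-check before copying; mkfs.hfsplus defaults are already
non-journaled and case-insensitive, so no extra flags were needed):

    mkfs.hfsplus -v drobo1 /dev/hdc       # whole device, not /dev/hdc1
    mount -t hfsplus /dev/hdc /mnt/drobo1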

OK... I give up... has anyone got a file system of any type > 2TB
working with a drobo??

Sebastian

Oct 1, 2009, 1:00:38 AM
to drobo-talk
Disclaimer: I do not know the internals of the drobo, though I've read
the Drobo-related patents, and I am also extremely familiar with how
thin-provisioned storage systems work.

It's my experience that the drobo does NOT actually recover space.
All the lights really seem to do is report the amount of PHYSICAL
space allocated underneath the virtual disk. (The virtual disk is
the thin-provisioned LUN the OS sees.)

What happens in thin-provisioned systems like the Drobo (and, for
reference, EqualLogic and LeftHand iSCSI arrays) is that as the OS
writes data to the virtual disk, the device allocates data pages from
the backing store (backing store == physical disks). Once the backing
store pages have been allocated they cannot be freed, as current
filesystems have NO way to tell the storage that space has been
reclaimed. (There is talk floating around of using the TRIM command
to perform the reclaim, but no Linux/Windows filesystems support this
properly yet... and even then the storage device needs to implement
the command.)

What the Drobo seems to do is "peek" under the covers of the FS to
pseudo-reclaim the space, because they know the FS will just come
back and write new data into the same space that has already been
allocated. The issue we seem to be seeing is that if the virtual disk
is greater than 2TB and ext3-formatted, the drobo seems unable to
gauge how much the OS thinks is in use vs. the allocated space in the
virtual disk, and to display that via the LEDs. When this happens it
just keeps displaying the allocated space in the virtual disk's
backing store.

If you want to use 8TB volumes that are thin-provisioned, you simply
need to be prepared to stick additional disks into the drobo BEFORE it
gets to 100%. This will cause the LED lights to go back down some.
E.g.: assume you have an 8TB virtual disk with 4TB of actual hard
drive space (roughly 2TB usable after redundancy). Fill it with 2TB
of data and the drobo will claim 100% full. Add 4TB more... and now
the drobo will claim 50% full again... rinse and repeat until you
actually have enough disks to support the entire 8TB virtual disk.

Note you can remove and re-add data to the drobo as many times as
you want... until you get close to that 100% mark. So, for instance,
throw in enough data to fill it up to 80%, then delete the data: the
drobo will still claim 80% usage. But then put the same (or equally
sized) data back and it will STILL claim 80%.
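
Or the same behaviour as a toy shell model (purely illustrative; the
numbers and the halve-for-redundancy assumption are mine):

    raw_tb=4; allocated_tb=0
    usable_tb() { echo $(( raw_tb / 2 )); }   # redundancy costs ~half
    write_tb() {    # writing grows the high-water mark...
        [ "$1" -gt "$allocated_tb" ] && allocated_tb=$1
        echo "LEDs: $(( 100 * allocated_tb / $(usable_tb) ))% full"
    }
    delete_tb() {   # ...but deleting never shrinks it
        echo "LEDs: still $(( 100 * allocated_tb / $(usable_tb) ))% full"
    }
    write_tb 2      # LEDs: 100% full
    delete_tb 2     # LEDs: still 100% full
    raw_tb=8        # add 4TB more disk...
    write_tb 2      # LEDs: 50% full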

I hope the above makes sense.

Andrew Grover

Oct 1, 2009, 2:44:57 AM
to drobo...@googlegroups.com
On Wed, Sep 30, 2009 at 9:05 PM, ChrisW wrote:
>
> OK... I give up... has anyone got a file system of any type > 2TB
> working with a drobo??

My experience has been that the Drobo takes a while to recognize newly
freed space. I have only used 2TB LUNs so far, but you might see
whether some time (4-8 hours?) makes a difference.

You might also try NTFS on Windows and/or Linux, and see if you can
reproduce the issue using that filesystem.
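
From Linux, something like this would set up the NTFS test (assuming
the LUN appears as /dev/sdc; mkntfs ships with ntfsprogs, and ntfs-3g
provides the read/write driver):

    mkntfs -f /dev/sdc                    # -f = fast format, skips zeroing
    mount -t ntfs-3g /dev/sdc /mnt/drobo1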

Regards -- Andy

Verne Arase

Oct 1, 2009, 11:47:29 PM
to drobo...@googlegroups.com

On Sep 30, 2009, at 11:05 PM, ChrisW wrote:

> OK... I give up... has anyone got a file system of any type > 2TB
> working with a drobo??

My volume is configured to be 16 TB ... currently with 2+1+1+1 TB
drives, and it works just fine.

You know that you have to give it some time to register that it's no
longer at capacity ... it probably takes place during some garbage
collection or something.

-- Verne

ChrisW

Oct 2, 2009, 4:27:50 AM
to drobo-talk

> You know that you have to give it some time to register that it's no  
> longer at capacity ... it probably takes place during some garbage  
> collection or something.

I quickly refilled the drobo to 12%, then deleted the data and left it
to see if it did any housekeeping... it's been more than 24 hours now
and:

df reports:            /dev/sdc  8.0T  1.1G  8.0T  1%  /mnt/drobo1
drobom status reports: /dev/sdc /mnt/drobo1 Drobo disk pack 12% full - ([], 0)

Can you describe the steps you took to configure yours, and confirm
that you have tried a big deletion and watched it reclaim space?

cheers - chrisw

Verne Arase

Oct 2, 2009, 4:56:27 PM
to drobo...@googlegroups.com
On Oct 2, 2009, at 3:27 AM, ChrisW wrote:

> Can you describe the steps you took to configure yours, and confirm
> that you have tried a big deletion and watched it reclaim space?

Actually, practically nothing.

I'm using HFS+ on a MacBook Pro (currently using Snow Leopard).

-- Verne
