
ZFS on Dell with FreeBSD


Albert Shih

Oct 19, 2011, 10:14:43 AM
Hi

Sorry for cross-posting. I don't know which mailing list I should post this
message to.

I would like to use FreeBSD with ZFS on some Dell servers with some
MD1200s (classic DAS).

When we buy an MD1200 we need a PERC H800 RAID card in the server, so we
have two options:

1/ create one LV on the PERC H800, so the server sees a single volume;
put the zpool on this unique volume and let the hardware manage the
RAID.

2/ create 12 LVs on the PERC H800 (so without RAID) and let FreeBSD
and ZFS manage the RAID.

Which one is the best solution?

Any advice about the RAM I need in the server (currently one MD1200, so 12x2 TB disks)?

Regards.

JAS
--
Albert SHIH
DIO Building 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
Telephone: 01 45 07 76 26 / 06 86 69 95 71
Local time:
Wed 19 Oct 2011 16:11:40 CEST

Ivan Voras

Oct 19, 2011, 10:40:23 AM
On 19/10/2011 16:30, Fajar A. Nugraha wrote:
> On Wed, Oct 19, 2011 at 9:14 PM, Albert Shih <Alber...@obspm.fr> wrote:
>> Hi
>>
>> Sorry for cross-posting. I don't know which mailing list I should post this
>> message to.
>>
>> I would like to use FreeBSD with ZFS on some Dell servers with some
>> MD1200s (classic DAS).
>>
>> When we buy an MD1200 we need a PERC H800 RAID card in the server, so we
>> have two options:
>>
>> 1/ create one LV on the PERC H800, so the server sees a single volume;
>> put the zpool on this unique volume and let the hardware manage the
>> RAID.
>>
>> 2/ create 12 LVs on the PERC H800 (so without RAID) and let FreeBSD
>> and ZFS manage the RAID.
>>
>> Which one is the best solution?
>
> Neither.
>
> The best solution is to find a controller which can pass the disk as
> JBOD (not encapsulated as virtual disk). Failing that, I'd go with (1)
> (though others might disagree).

Depending on the requirements and the purpose of the machine, it might
be a good idea to combine the two by having the hardware handle multiple
RAID-1 devices. E.g. if you want to implement RAID-10, you might create N
RAID-1 volumes of two drives each in hardware. This is especially useful
since FreeBSD's ZFS doesn't yet handle hot spares.
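
As a minimal sketch (hypothetical pool and device names; the H800's volumes
typically appear through the mfi(4) driver as mfidN), striping a pool across
such hardware mirrors might look like:

    # each mfidN is a two-drive hardware RAID-1 volume exported by the controller
    zpool create tank mfid1 mfid2 mfid3
    zpool status tank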



Damien Fleuriot

Oct 19, 2011, 10:40:46 AM
On 10/19/11 4:14 PM, Albert Shih wrote:
> Hi
>
> Sorry for cross-posting. I don't know which mailing list I should post this
> message to.
>
> I would like to use FreeBSD with ZFS on some Dell servers with some
> MD1200s (classic DAS).
>
> When we buy an MD1200 we need a PERC H800 RAID card in the server, so we
> have two options:
>
> 1/ create one LV on the PERC H800, so the server sees a single volume;
> put the zpool on this unique volume and let the hardware manage the
> RAID.
>
> 2/ create 12 LVs on the PERC H800 (so without RAID) and let FreeBSD
> and ZFS manage the RAID.
>
> Which one is the best solution?
>
> Any advice about the RAM I need in the server (currently one MD1200, so 12x2 TB disks)?
>
> Regards.
>
> JAS


Why you would post about FreeBSD on the OpenSolaris list is beyond me.


Regarding your options with the ZFS pool, you will want to set your
disks up as JBOD, so you can aggregate them with ZFS and use its native
features (including the self-healing).

If you set up hardware RAID and then create a ZFS pool on top of it,
you'll miss out on the self-healing, because ZFS will not see the
individual drives.


Regarding your RAM needs, you are providing too little information for an
accurate answer.

What will you use the server for?



Regarding the H800 card on FreeBSD, I would test it beforehand if I were
you; there were problems getting the H200 working on 8.2 before.

Jorge Medina

Oct 19, 2011, 10:28:02 AM
On Wed, Oct 19, 2011 at 11:14 AM, Albert Shih <Alber...@obspm.fr> wrote:
> Hi
>
> Sorry for cross-posting. I don't know which mailing list I should post this
> message to.
>
> I would like to use FreeBSD with ZFS on some Dell servers with some
> MD1200s (classic DAS).
>
> When we buy an MD1200 we need a PERC H800 RAID card in the server, so we
> have two options:
>
>        1/ create one LV on the PERC H800, so the server sees a single volume;
>        put the zpool on this unique volume and let the hardware manage the
>        RAID.
>
>        2/ create 12 LVs on the PERC H800 (so without RAID) and let FreeBSD
>        and ZFS manage the RAID.
>
> Which one is the best solution?
>
> Any advice about the RAM I need in the server (currently one MD1200, so 12x2 TB disks)?
>
> Regards.

For the ZFS approach, the second option is better in my opinion.

> JAS
> --
> Albert SHIH
> DIO Building 15
> Observatoire de Paris
> 5 Place Jules Janssen
> 92195 Meudon Cedex
> Telephone: 01 45 07 76 26 / 06 86 69 95 71
> Local time:
> Wed 19 Oct 2011 16:11:40 CEST



--
Jorge Andrés Medina Oliva.
Computer engineer.
IT consultant
http://www.bsdchile.cl

Fajar A. Nugraha

Oct 19, 2011, 10:30:31 AM
On Wed, Oct 19, 2011 at 9:14 PM, Albert Shih <Alber...@obspm.fr> wrote:
> Hi
>
> Sorry for cross-posting. I don't know which mailing list I should post this
> message to.
>
> I would like to use FreeBSD with ZFS on some Dell servers with some
> MD1200s (classic DAS).
>
> When we buy an MD1200 we need a PERC H800 RAID card in the server, so we
> have two options:
>
>        1/ create one LV on the PERC H800, so the server sees a single volume;
>        put the zpool on this unique volume and let the hardware manage the
>        RAID.
>
>        2/ create 12 LVs on the PERC H800 (so without RAID) and let FreeBSD
>        and ZFS manage the RAID.
>
> Which one is the best solution?

Neither.

The best solution is to find a controller which can pass the disk as
JBOD (not encapsulated as virtual disk). Failing that, I'd go with (1)
(though others might disagree).

>
> Any advice about the RAM I need in the server (currently one MD1200, so 12x2 TB disks)?

The more the better :)

Just make sure you do NOT use dedup until you REALLY know what you're
doing (which usually means buying lots of RAM and an SSD for L2ARC).
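
If you do add an L2ARC later, it's a one-liner (hypothetical pool/device names):

    # add an SSD as a cache (L2ARC) device; 'zpool remove tank da14' undoes it
    zpool add tank cache da14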

--
Fajar

Krunal Desai

Oct 19, 2011, 10:52:07 AM
On Wed, Oct 19, 2011 at 10:14 AM, Albert Shih <Alber...@obspm.fr> wrote:
> When we buy an MD1200 we need a PERC H800 RAID card in the server, so we
> have two options:
>
>        1/ create one LV on the PERC H800, so the server sees a single volume;
>        put the zpool on this unique volume and let the hardware manage the
>        RAID.
>
>        2/ create 12 LVs on the PERC H800 (so without RAID) and let FreeBSD
>        and ZFS manage the RAID.
>
> Which one is the best solution?
>
> Any advice about the RAM I need in the server (currently one MD1200, so 12x2 TB disks)?

I know the PERC H200 can be flashed with IT firmware, making it in
effect a "dumb" HBA perfect for ZFS usage. Perhaps the H800 has the
same? (If not, can you get the machine configured with a H200?)

If that's not an option, I think Option 2 will work. My first ZFS
server ran on a PERC 5/i, and I was forced to make 8 single-drive RAID
0s in the PERC Option ROM, but Solaris did not seem to mind that.

--khd

Sergio de Almeida Lenzi

Oct 19, 2011, 11:36:11 AM
I have several Dells with the PERC controller,
and I can say that the best solution is
to use RAID 0 (so ZFS sees both disks) and let
ZFS mirror them.

This works OK, and you can use the ZFS tools to
manage the disks.

However, this does not solve the problem that I
had with the PERC controller.

The problem is that all the storage you have is
behind the PERC controller; even if it is reliable, when
it "breaks", all of your storage (and so the computer...)
is useless. You cannot move the drives (HDs) to another
machine, because the "normal" controller (ad, ada) will
not recognize the disks.

Even if you have a spare PERC controller of the same kind
at hand (and I bet you do not...), the disks are
"signed" by the other (the broken) controller and so
will not be recognized by the new controller.

In my case I had to call Dell support, and only after
several hours could I put the disks online again.

I mounted only one disk (of the ZFS pool) on the new
controller, and even with Dell support on the phone
the new controller wiped out the disk.
A new call (with a different Dell support person) was able to
re-initialize the disk, which included re-installing FreeBSD...
and after that, "attaching" the old disk: using zpool detach,
followed by zpool attach (on the old disk), it then reconstructed
the mirror... resulting in almost 6 hours of downtime
and the loss of one working day for the whole company.
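
In commands, that recovery was roughly the following (hypothetical pool and
disk names):

    # drop the wiped disk from the mirror, then resilver onto it
    zpool detach tank mfid1
    zpool attach tank mfid0 mfid1
    zpool status tank   # watch the resilver progress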

Nowadays, Dell support (here) will not help you if you say
that the OS is FreeBSD; you must tell them that you are
installing Linux... to get the support.

Conclusion: now I prefer the IBM 32XX series...

Ok
that is my story


Sergio.

per...@pluto.rain.com

Oct 20, 2011, 2:15:55 AM
Damien Fleuriot <m...@my.gd> wrote:

> Why would you post about freebsd on opensolaris' list is beyond me.

Presumably because opensolaris is the ZFS upstream.

Любомир Григоров

Oct 19, 2011, 8:00:22 PM
If by OpenSolaris you mean OpenIndiana (OpenSolaris is dead), both OI and
FreeBSD have the same ZFS pool version (28). There is no "native" advantage
to using OI over FreeBSD regarding ZFS.

If you want the latest ZFS with crypto (pool version 30), you need to go with the
closed-source Solaris 11 Express, for which you need to buy a support license (a
no-no) if you want to use it in production or commercially.

2011/10/19 <per...@pluto.rain.com>
--
Lyubomir Grigorov (bgalakazam)

Dave Pooser

Oct 19, 2011, 8:56:28 PM
On 10/19/11 9:14 AM, "Albert Shih" <Alber...@obspm.fr> wrote:

>When we buy an MD1200 we need a PERC H800 RAID card in the server

No, you need a card that includes 2 external x4 SFF8088 SAS connectors.
I'd recommend an LSI SAS 9200-8e HBA flashed with the IT firmware-- then
it presents the individual disks and ZFS can handle redundancy and
recovery.
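
With such an HBA the disks just show up as individual da(4) devices; a quick
way to sanity-check that (a sketch, not actual output):

    camcontrol devlist    # each drive behind the HBA should appear as its own daN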
--
Dave Pooser
Manager of Information Services
Alford Media http://www.alfordmedia.com

Rocky Shek

Oct 19, 2011, 10:23:26 PM
I also recommend the LSI 9200-8E or the new 9205-8E with the IT firmware, based
on past experience.

Also, LSI original HBAs normally get firmware released earlier than the OEM versions.

Plus, most users in the community use LSI HBAs.

Rocky

-----Original Message-----
From: zfs-discu...@opensolaris.org
[mailto:zfs-discu...@opensolaris.org] On Behalf Of Dave Pooser
Sent: Wednesday, October 19, 2011 5:56 PM
To: freebsd-...@freebsd.org; zfs-d...@opensolaris.org
Subject: Re: [zfs-discuss] ZFS on Dell with FreeBSD

On 10/19/11 9:14 AM, "Albert Shih" <Alber...@obspm.fr> wrote:

>When we buy an MD1200 we need a PERC H800 RAID card in the server

No, you need a card that includes 2 external x4 SFF8088 SAS connectors.
I'd recommend an LSI SAS 9200-8e HBA flashed with the IT firmware-- then
it presents the individual disks and ZFS can handle redundancy and
recovery.
--
Dave Pooser
Manager of Information Services
Alford Media http://www.alfordmedia.com



Fajar A. Nugraha

Oct 19, 2011, 10:23:59 PM
On Thu, Oct 20, 2011 at 7:56 AM, Dave Pooser <dave...@alfordmedia.com> wrote:
> On 10/19/11 9:14 AM, "Albert Shih" <Alber...@obspm.fr> wrote:
>
>>When we buy an MD1200 we need a PERC H800 RAID card in the server
>
> No, you need a card that includes 2 external x4 SFF8088 SAS connectors.
> I'd recommend an LSI SAS 9200-8e HBA flashed with the IT firmware-- then
> it presents the individual disks and ZFS can handle redundancy and
> recovery.

Exactly, thanks for suggesting an exact controller model that can
present disks as JBOD.

With hardware RAID, you'd pretty much rely on the controller to behave
nicely, which is why I suggested simply creating one big volume for
ZFS to use (so you pretty much only use features like snapshots,
clones, etc., but don't get ZFS's self-healing). Again, others
might (and do) disagree and suggest using a volume per individual
disk (even when you're still relying on the hardware RAID controller). But
ultimately there's no question that the best possible setup is
to present the disks as JBOD and let ZFS handle them directly.
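
Those features alone can still be worth having, e.g. (hypothetical dataset
names):

    # snapshots, writable clones and rollback all work fine on one big volume
    zfs snapshot tank/data@before-upgrade
    zfs clone tank/data@before-upgrade tank/data-test
    zfs rollback tank/data@before-upgrade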

--
Fajar

Dennis Glatting

Oct 19, 2011, 11:24:53 PM


On Thu, 20 Oct 2011, Fajar A. Nugraha wrote:

> On Thu, Oct 20, 2011 at 7:56 AM, Dave Pooser <dave...@alfordmedia.com> wrote:
>> On 10/19/11 9:14 AM, "Albert Shih" <Alber...@obspm.fr> wrote:
>>
>>> When we buy an MD1200 we need a PERC H800 RAID card in the server
>>
>> No, you need a card that includes 2 external x4 SFF8088 SAS connectors.
>> I'd recommend an LSI SAS 9200-8e HBA flashed with the IT firmware-- then
>> it presents the individual disks and ZFS can handle redundancy and
>> recovery.
>
> Exactly, thanks for suggesting an exact controller model that can
> present disks as JBOD.
>
> With hardware RAID, you'd pretty much rely on the controller to behave
> nicely, which is why I suggested simply creating one big volume for ZFS
> to use (so you pretty much only use features like snapshots, clones, etc.,
> but don't get ZFS's self-healing). Again, others might (and do)
> disagree and suggest using a volume per individual disk (even when you're
> still relying on the hardware RAID controller). But ultimately there's no
> question that the best possible setup is to present the disks as
> JBOD and let ZFS handle them directly.
>

I saw something interesting and different today, which I'll just throw
out.

A buddy has an HP370 loaded with disks (not the only machine that provides
these services, rather the one he was showing off). The 370's disks are
managed by the underlying hardware RAID controller, which he set up as
multiple RAID1 volumes.

ESXi 5.0 is loaded and in control of the volumes, some of which are
partitioned. Consequently, his result is vendor-supported interfaces
between disks, RAID controller, ESXi, and managing/reporting software.

The HP370 hosts multiple FreeNAS instances whose "disks" are the "disks"
(volumes/partitions) from ESXi (all on the same physical hardware). The
FreeNAS instances are partitioned according to their physical and logical
function within the infrastructure, whether by physical or logical
connections. The FreeNAS instances then serve their "disks" to consumers.

We have not done any performance testing. Generally, his NAS consumers are
not I/O pigs, though we want the best performance possible (some consumers
are over the WAN, which may render any HP/ESXi/FreeNAS performance issues
moot). (I want to do some performance testing because, well, it
may have significant amusement value.) A question we have is whether ZFS
(ARC, maybe L2ARC) within FreeNAS is possible or would provide any value.

Damien Fleuriot

Oct 20, 2011, 4:15:44 AM
Possible, yes.
Provides value, somewhat.

You still get to use snapshots, compression, dedup...
You don't get ZFS self-healing though, which IMO is a big loss.

Regarding the ARC, it totally depends on the kind of files you serve and the amount of RAM you have available.

If you keep serving huge, different files all the time, it won't help as much as when clients request the same small/average files over and over again.
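
If the ARC does help, it can be sized explicitly on FreeBSD with a loader
tunable (the value below is only an illustrative guess; size it against your
workload and available RAM):

    # /boot/loader.conf -- cap the ARC so RAM is left for everything else
    vfs.zfs.arc_max="4G"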

Albert Shih

Oct 20, 2011, 5:33:09 AM
On 19/10/2011 at 21:30:31+0700, Fajar A. Nugraha wrote:
> > Sorry for cross-posting. I don't know which mailing list I should post this
> > message to.
> >
> > I would like to use FreeBSD with ZFS on some Dell servers with some
> > MD1200s (classic DAS).
> >
> > When we buy an MD1200 we need a PERC H800 RAID card in the server, so we
> > have two options:
> >
> >        1/ create one LV on the PERC H800, so the server sees a single volume;
> >        put the zpool on this unique volume and let the hardware manage the
> >        RAID.
> >
> >        2/ create 12 LVs on the PERC H800 (so without RAID) and let FreeBSD
> >        and ZFS manage the RAID.
> >
> > Which one is the best solution?
>
> Neither.
>
> The best solution is to find a controller which can pass the disk as
> JBOD (not encapsulated as virtual disk). Failing that, I'd go with (1)
> (though others might disagree).

Thanks. That's going to be very complicated... but I'm going to try.

>
> >
> > Any advice about the RAM I need in the server (currently one MD1200, so 12x2 TB disks)?
>
> The more the better :)

Well, my employer is not so rich.

It's the first time I'm going to use ZFS on FreeBSD in production (I use it on my
laptop, but that means nothing), so what in your opinion is the minimum RAM
I need? Is something like 48 GB enough?

> Just make sure you do NOT use dedup until you REALLY know what you're
> doing (which usually means buying lots of RAM and an SSD for L2ARC).

Ok.

Regards.

JAS
--
Albert SHIH
DIO Building 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
Telephone: 01 45 07 76 26 / 06 86 69 95 71
Local time:
Thu 20 Oct 2011 11:30:49 CEST

Fajar A. Nugraha

Oct 20, 2011, 5:40:38 AM
On Thu, Oct 20, 2011 at 4:33 PM, Albert Shih <Alber...@obspm.fr> wrote:
>> > Any advice about the RAM I need in the server (currently one MD1200, so 12x2 TB disks)?
>>
>> The more the better :)
>
> Well, my employer is not so rich.
>
> It's the first time I'm going to use ZFS on FreeBSD in production (I use it on my
> laptop, but that means nothing), so what in your opinion is the minimum RAM
> I need? Is something like 48 GB enough?

If you don't use dedup (recommended), that should be more than enough.

If you use dedup, search the zfs-discuss archives for the calculation methods that have been posted.
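
As a rough sketch of the back-of-the-envelope estimate usually given (assuming
~320 bytes of ARC per DDT entry and a 128 KB average block size, and taking the
full 12x2 TB as unique data; all three numbers vary by workload):

    24 TB unique data / 128 KB per block ~= 200 million blocks
    200 million DDT entries * 320 bytes  ~= 60 GB of RAM/L2ARC for the DDT alone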

For comparison purposes, you could also look at Oracle's zfs storage
appliance configuration:
https://shop.oracle.com/pls/ostore/f?p=dstore:product:3479784507256153::NO:RP,6:P6_LPI,P6_PROD_HIER_ID:424445158091311922637762,114303924177622138569448

--
Fajar

Albert Shih

Oct 20, 2011, 5:49:13 AM
On 19/10/2011 at 10:52:07-0400, Krunal Desai wrote:
> On Wed, Oct 19, 2011 at 10:14 AM, Albert Shih <Alber...@obspm.fr> wrote:
> > When we buy an MD1200 we need a PERC H800 RAID card in the server, so we
> > have two options:
> >
> >        1/ create one LV on the PERC H800, so the server sees a single volume;
> >        put the zpool on this unique volume and let the hardware manage the
> >        RAID.
> >
> >        2/ create 12 LVs on the PERC H800 (so without RAID) and let FreeBSD
> >        and ZFS manage the RAID.
> >
> > Which one is the best solution?
> >
> > Any advice about the RAM I need in the server (currently one MD1200, so 12x2 TB disks)?
>
> I know the PERC H200 can be flashed with IT firmware, making it in
> effect a "dumb" HBA perfect for ZFS usage. Perhaps the H800 has the
> same? (If not, can you get the machine configured with a H200?)

I'm not sure what you mean when you say «H200 flashed with IT firmware»?

> If that's not an option, I think Option 2 will work. My first ZFS
> server ran on a PERC 5/i, and I was forced to make 8 single-drive RAID
> 0s in the PERC Option ROM, but Solaris did not seem to mind that.

OK.

I don't have a choice (too complex to explain, and it's meaningless here); I
can only buy from Dell at the moment.

On the Dell website I have the choice between:


SAS 6Gbps External Controller
PERC H800 RAID Adapter for External JBOD, 512MB Cache, PCIe
PERC H800 RAID Adapter for External JBOD, 512MB NV Cache, PCIe
PERC H800 RAID Adapter for External JBOD, 1GB NV Cache, PCIe
PERC 6/E SAS RAID Controller, 2x4 Connectors, External, PCIe 256MB Cache
PERC 6/E SAS RAID Controller, 2x4 Connectors, External, PCIe 512MB Cache
LSI2032 SCSI Internal PCIe Controller Card

I've no idea what the first thing is. But from what I understand, the best
solution is the first or the last?

Regards.

JAS

--
Albert SHIH
DIO Building 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
Telephone: 01 45 07 76 26 / 06 86 69 95 71
Local time:
Thu 20 Oct 2011 11:44:39 CEST

Damien Fleuriot

Oct 20, 2011, 5:57:06 AM
The best solution is to get a dumb HBA which will present your drives
directly to the OS (JBOD), then create your ZFS pools there.

Many people have already recommended LSI because it's widely used on the
list.

Also, what do they mean by "SAS 6Gbps External Controller"?

Chuck Swiger

Oct 20, 2011, 12:52:42 PM
Hi--

On Oct 20, 2011, at 2:57 AM, Damien Fleuriot wrote:
> Also, what do they mean by "SAS 6Gbps External Controller" ?

SAS is "serial attached SCSI"; it permits multipath connections to devices and thus is more similar to fibre channel HBAs than SATA, although some SAS controllers will also work with normal SATA drives.

Regards,
--
-Chuck

Damien Fleuriot

Oct 20, 2011, 12:59:15 PM


On 10/20/11 6:52 PM, Chuck Swiger wrote:
> Hi--
>
> On Oct 20, 2011, at 2:57 AM, Damien Fleuriot wrote:
>> Also, what do they mean by "SAS 6Gbps External Controller" ?
>
> SAS is "serial attached SCSI"; it permits multipath connections to devices and thus is more similar to fibre channel HBAs than SATA, although some SAS controllers will also work with normal SATA drives.
>
> Regards,

I know what SAS stands for.

My question was, what do they mean by *external* controller?

Do you get to provide your own?

Chuck Swiger

Oct 20, 2011, 1:05:47 PM
On Oct 20, 2011, at 9:59 AM, Damien Fleuriot wrote:
>> SAS is "serial attached SCSI"; it permits multipath connections to devices and thus is more similar to fibre channel HBAs than SATA, although some SAS controllers will also work with normal SATA drives.
>
> I know what SAS stands for.

OK.

> My question was, what do they mean by *external* controller ?

It means the connections to the devices are external, rather than being intended for internal devices:

http://www.dell.com/content/topics/topic.aspx/global/products/pvaul/topics/en/us/raid_controller?c=us&l=en&cs=555

> Do you get to provide your own ?

Devices? Yes.

Regards,
--
-Chuck

Krunal Desai

Oct 20, 2011, 4:35:55 PM
On Thu, Oct 20, 2011 at 5:49 AM, Albert Shih <Alber...@obspm.fr> wrote:
> I'm not sure what you mean when you say «H200 flashed with IT firmware»?

IT is "Initiator Target", and many LSI chips have a version of their
firmware available that will put them into this mode, which is
desirable for ZFS. This is opposed to other LSI firmware modes like
"IR", which is RAID, I believe (and which you do not want). Since the H200
uses an LSI chip, you can download that firmware from LSI and flash it
to the card, turning it into an IT-mode card and a simple HBA.
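
The reflash itself is roughly the following, using LSI's sas2flash utility
(image and BIOS file names depend on the card and firmware release -- treat
this as an outline and follow LSI's documentation, since erasing the wrong
card can brick it):

    sas2flash -listall                          # identify the adapter
    sas2flash -o -e 6                           # erase the current (IR) flash
    sas2flash -o -f 2118it.bin -b mptsas2.rom   # write IT firmware + boot ROM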

--khd

Koopmann, Jan-Peter

Oct 20, 2011, 4:28:59 PM


>
> On the Dell website I have the choice between:
>
>
> SAS 6Gbps External Controller
> PERC H800 RAID Adapter for External JBOD, 512MB Cache, PCIe
> PERC H800 RAID Adapter for External JBOD, 512MB NV Cache, PCIe
> PERC H800 RAID Adapter for External JBOD, 1GB NV Cache, PCIe
> PERC 6/E SAS RAID Controller, 2x4 Connectors, External, PCIe 256MB Cache
> PERC 6/E SAS RAID Controller, 2x4 Connectors, External, PCIe 512MB Cache
> LSI2032 SCSI Internal PCIe Controller Card
>

The first one is probably an LSI card. However, check with Dell (and if it is LSI, check exactly which card). And check whether, with that controller, they support seeing all the individual drives in the chassis as JBOD.

Otherwise consider buying the chassis without the controller and get just the LSI from someone else.

Regards,
JP

Albert Shih

Oct 27, 2011, 11:32:20 AM
On 19/10/2011 at 19:23:26-0700, Rocky Shek wrote:

Hi.

Thanks for this information.

> I also recommend the LSI 9200-8E or the new 9205-8E with the IT firmware,
> based on past experience.

Do you know if the LSI-9205-8E HBA or the LSI-9202-16E HBA works under FreeBSD 9.0?

Best regards.

Regards.
--
Albert SHIH
DIO Building 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
Telephone: 01 45 07 76 26 / 06 86 69 95 71
Local time:
Thu 27 Oct 2011 17:20:11 CEST

David Magda

Oct 27, 2011, 1:34:50 PM
On Thu, October 27, 2011 11:32, Albert Shih wrote:

>> I also recommend the LSI 9200-8E or the new 9205-8E with the IT firmware,
>> based on past experience
>
> Do you know if the LSI-9205-8E HBA or the LSI-9202-16E HBA works under
> FreeBSD 9.0?

Check the man page for mpt(4):

http://www.freebsd.org/cgi/man.cgi?query=mpt&manpath=FreeBSD+9-current
http://www.freebsd.org/cgi/man.cgi?query=mpt&manpath=FreeBSD+8.2-RELEASE

Or LSI's site:

http://www.lsi.com/products/storagecomponents/Pages/LSISAS9205-8e.aspx
http://www.lsi.com/products/storagecomponents/Pages/LSISAS9202-16e.aspx

Do you know how to use a search engine?

Albert Shih

Oct 28, 2011, 1:53:30 AM
On 27/10/2011 at 13:34:50-0400, David Magda wrote:
> On Thu, October 27, 2011 11:32, Albert Shih wrote:
>
> >> I also recommend the LSI 9200-8E or the new 9205-8E with the IT firmware,
> >> based on past experience
> >
> > Do you know if the LSI-9205-8E HBA or the LSI-9202-16E HBA works under
> > FreeBSD 9.0?
>
> Check the man page for mpt(4):
>
> http://www.freebsd.org/cgi/man.cgi?query=mpt&manpath=FreeBSD+9-current
> http://www.freebsd.org/cgi/man.cgi?query=mpt&manpath=FreeBSD+8.2-RELEASE

Well... I don't find this LSI card in the mpt driver. I find the chipset of the
http://www.lsi.com/products/storagecomponents/Pages/LSISAS9202-16e.aspx in
the mps driver, but I don't know if that's enough to support the card.
This one uses the 2308 chip, and I definitely don't find that chip in the mps driver.

> http://www.lsi.com/products/storagecomponents/Pages/LSISAS9202-16e.aspx
>
> Do you know how to use a search engine?

I don't know, you tell me ;-)

I'm going to spend a lot of money to buy some cards; I just hope I can be sure
the cards are going to work...

Thanks

Regards.

JAS



--
Albert SHIH
DIO Building 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
Telephone: 01 45 07 76 26 / 06 86 69 95 71
Local time:
Fri 28 Oct 2011 07:48:55 CEST

Vincent Hoffman

Oct 28, 2011, 9:37:06 AM
On 28/10/2011 06:53, Albert Shih wrote:
> On 27/10/2011 at 13:34:50-0400, David Magda wrote:
>> On Thu, October 27, 2011 11:32, Albert Shih wrote:
>>
>>>> I also recommend the LSI 9200-8E or the new 9205-8E with the IT firmware,
>>>> based on past experience
>>> Do you know if the LSI-9205-8E HBA or the LSI-9202-16E HBA works under
>>> FreeBSD 9.0?
>> Check the man page for mpt(4):
>>
>> http://www.freebsd.org/cgi/man.cgi?query=mpt&manpath=FreeBSD+9-current
>> http://www.freebsd.org/cgi/man.cgi?query=mpt&manpath=FreeBSD+8.2-RELEASE
> Well... I don't find this LSI card in the mpt driver. I find the chipset of the
> http://www.lsi.com/products/storagecomponents/Pages/LSISAS9202-16e.aspx in
> the mps driver, but I don't know if that's enough to support the card.
>
>> Or LSI's site:
>>
>> http://www.lsi.com/products/storagecomponents/Pages/LSISAS9205-8e.aspx
> This one uses the 2308 chip, and I definitely don't find that chip in the mps driver.
>
>> http://www.lsi.com/products/storagecomponents/Pages/LSISAS9202-16e.aspx
>>
>> Do you know how to use a search engine?
> I don't know, you tell me ;-)
>
> I'm going to spend a lot of money to buy some cards; I just hope I can be sure
> the cards are going to work...

There is a fair chance that any newer LSI/PERC that supports SAS may
be supported by the mfi driver.
For example, on a Dell R410:
mfiutil -u0 show adapter
mfi0 Adapter:
Product Name: PERC H700 Adapter
Serial Number: 0CP00UO
Firmware: 12.10.0-0025
RAID Levels: JBOD, RAID0, RAID1, RAID5, RAID6, RAID10, RAID50
Battery Backup: present
NVRAM: 32K
Onboard Memory: 512M
Minimum Stripe: 8k
Maximum Stripe: 1M

mfi0@pci0:3:0:0: class=0x010400 card=0x1f161028 chip=0x00791000
rev=0x05 hdr=0x00
vendor = 'LSI Logic / Symbios Logic'
device = 'MegaRAID SAS 2108 [Liberator]'
class = mass storage


I am currently having some issues with a similar controller, but that's a
different firmware, rebadged by Supermicro.
So far I haven't had any issues with this Dell, but it's been under very
light load and only up for a month.
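
Note the "JBOD" in the RAID levels above: when the firmware supports it,
something along these lines should expose the drives individually (a sketch;
check mfiutil(8) for the exact drive-naming syntax):

    mfiutil show drives       # list physical drives and their IDs
    mfiutil create jbod 8 9   # one single-drive volume per listed drive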

Vince
> Thanks
>
> Regards.
>
> JAS

Eduardo

Oct 28, 2011, 10:57:58 AM
On Wed, Oct 19, 2011 at 1:36 PM, Sergio de Almeida Lenzi
<lenzi....@gmail.com> wrote:
> I have several Dells with the PERC controller,
> and I can say that the best solution is
> to use RAID 0 (so ZFS sees both disks) and let
> ZFS mirror them.
>
> This works OK, and you can use the ZFS tools to
> manage the disks.
>
> However, this does not solve the problem that I
> had with the PERC controller.

Hi,

I have one observation regarding a similar situation... if I am using
the PERC to create a mirror of the 2 boot disks (using UFS), and
the PERC crashes/dies or gets replaced, I used to think that I
could boot from an individual disk, since in this case the data is
there and does not need the PERC to see it... is this correct? Or do
you think that I will not be able to boot the system? (Or even if it
is not the boot disk, I should be able to mount one of the drives
anyway...)

Thanks!