
LSI MegaRAID SAS 9240 with mfi driver?


Jan Mikkelsen

Mar 30, 2012, 12:15:51 AM
to freebsd...@freebsd.org, j...@freebsd.org
Hi,

I have a loan LSI MegaRAID SAS 9240-4i controller for testing.

According to the LSI documentation, this device provides the MegaRAID interface, and the BIOS message mentions MFI. The LSI driver for this device also lists support for the 9261, which I know is supported by mfi(4). Based on all this, I was hopeful that mfi(4) would work with the 9240.

The pciconf -lv output is:

none3@pci0:1:0:0: class=0x010400 card=0x92411000 chip=0x00731000 rev=0x03 hdr=0x00
vendor = 'LSI Logic / Symbios Logic'
device = 'MegaRAID SAS 9240'
class = mass storage
subclass = RAID

I added this line to src/sys/dev/mfi/mfi_pci.c

{0x1000, 0x0073, 0xffff, 0xffff, MFI_FLAGS_GEN2, "LSI MegaRAID SAS 9240"},
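
For reference, the entry sits in the mfi_identifiers probe table in mfi_pci.c, near the existing GEN2 entries. Roughly like this (the surrounding lines are quoted from memory, so treat them as approximate):

struct mfi_ident {
        uint16_t        vendor;
        uint16_t        device;
        uint16_t        subvendor;
        uint16_t        subdevice;
        int             flags;
        const char      *desc;
} mfi_identifiers[] = {
        /* ... existing entries ... */
        {0x1000, 0x0079, 0xffff, 0xffff, MFI_FLAGS_GEN2, "LSI MegaSAS Gen2"},
        /* my addition; MFI_FLAGS_GEN2 is a guess based on the 9261 */
        {0x1000, 0x0073, 0xffff, 0xffff, MFI_FLAGS_GEN2, "LSI MegaRAID SAS 9240"},
        {0, 0, 0, 0, 0, NULL}
};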

It gave this result (tried with hw.mfi.msi set to 0 and to 1):

mfi0: <LSI MegaRAID SAS 9240> port 0xdc00-0xdcff mem 0xfe7bc000-0xfe7bffff,0xfe7c0000-0xfe7fffff irq 16 at device 0.0 on pci1
mfi0: Using MSI
mfi0: Megaraid SAS driver Ver 3.00
mfi0: Frame 0xffffff8000285000 timed out command 0x26C8040
mfi0: failed to send init command

The firmware is package 20.10.1-0077, which is the latest on the LSI website.

Is this path likely to work out? Any suggestions on where to go from here?

Thanks,

Jan Mikkelsen


John Baldwin

Mar 30, 2012, 10:03:04 AM
to Jan Mikkelsen, freebsd...@freebsd.org
On Friday, March 30, 2012 12:06:40 am Jan Mikkelsen wrote:
> Hi,
>
> I have a loan LSI MegaRAID SAS 9240-4i controller for testing.
>
> According to the LSI documentation, this device provides the MegaRAID interface, and the BIOS message mentions MFI. The LSI driver for this device
> also lists support for the 9261, which I know is supported by mfi(4). Based on all this, I was hopeful that mfi(4) would work with the 9240.
>
> The pciconf -lv output is:
>
> none3@pci0:1:0:0: class=0x010400 card=0x92411000 chip=0x00731000 rev=0x03 hdr=0x00
> vendor = 'LSI Logic / Symbios Logic'
> device = 'MegaRAID SAS 9240'
> class = mass storage
> subclass = RAID
>
> I added this line to src/sys/dev/mfi/mfi_pci.c
>
> {0x1000, 0x0073, 0xffff, 0xffff, MFI_FLAGS_GEN2, "LSI MegaRAID SAS 9240"},
>
> It gave this result (tried with hw.mfi.msi set to 0 and to 1):
>
> mfi0: <LSI MegaRAID SAS 9240> port 0xdc00-0xdcff mem 0xfe7bc000-0xfe7bffff,0xfe7c0000-0xfe7fffff irq 16 at device 0.0 on pci1
> mfi0: Using MSI
> mfi0: Megaraid SAS driver Ver 3.00
> mfi0: Frame 0xffffff8000285000 timed out command 0x26C8040
> mfi0: failed to send init command
>
> The firmware is package 20.10.1-0077, which is the latest on the LSI website.
>
> Is this path likely to work out? Any suggestions on where to go from here?

You should try the updated mfi(4) driver that Doug (cc'd) is soon going to
merge into HEAD. It syncs up with the mfi(4) driver on LSI's website, which
supports several cards that the current mfi(4) driver does not. (I'm not
fully sure whether the 9240 is in that group. Doug might know, however.)

--
John Baldwin

Doug Ambrisko

Mar 30, 2012, 10:17:45 AM
to John Baldwin, Jan Mikkelsen, freebsd...@freebsd.org
Yes, this card is supported by the mfi(4) in projects/head_mfi. Looks
like we fixed a couple of last-minute bugs found when trying to create a
RAID with mfiutil. This should be fixed now. I'm going to start the
merge to -current today. The version in head_mfi can run on older
versions of FreeBSD with the changes that Sean did.

Note that I wouldn't recommend the 9240 since it can't have a battery
option. NVRAM is the key to the speed of mfi(4) cards. However, that
won't stop us from supporting it.

Doug A.

Jan Mikkelsen

Mar 30, 2012, 5:54:30 PM
to Doug Ambrisko, freebsd...@freebsd.org, John Baldwin
Hi,

On 31/03/2012, at 1:14 AM, Doug Ambrisko wrote:

> John Baldwin writes:
> | On Friday, March 30, 2012 12:06:40 am Jan Mikkelsen wrote:
> | ...
> | > Is this path likely to work out? Any suggestions on where to go from here?
> |
> | You should try the updated mfi(4) driver that Doug (cc'd) is soon going to
> | merge into HEAD. It syncs up with the mfi(4) driver on LSI's website, which
> | supports several cards that the current mfi(4) driver does not. (I'm not
> | fully sure whether the 9240 is in that group. Doug might know, however.)
>
> Yes, this card is supported by the mfi(4) in projects/head_mfi. Looks
> like we fixed a couple of last-minute bugs found when trying to create a
> RAID with mfiutil. This should be fixed now. I'm going to start the
> merge to -current today. The version in head_mfi can run on older
> versions of FreeBSD with the changes that Sean did.
>
> Note that I wouldn't recommend the 9240 since it can't have a battery
> option. NVRAM is the key to the speed of mfi(4) cards. However, that
> won't stop us from supporting it.

Thanks.

I don't know what changes Sean did. Are they in 9.0-release, or do I need -stable after a certain point? I'm assuming I should be able to take src/sys/dev/mfi/... and src/usr.sbin/mfiutil/... from -current.

The performance is an interesting thing. The write performance I care about is ZFS raidz2 with 6 x JBOD disks (or 6 x single disk raid0) on this controller. The 9261 with a BBU performs well but obviously costs more.

I can see the BBU being important for controller based raid5, but I'm hoping that ZFS with JBOD will still perform well. I'm ignorant at this point, so that's why I'm trying it out. Do you have any experience or expectations with a 9240 being used in a setup like that?

Regards,

Jan.

Doug Ambrisko

Mar 30, 2012, 6:22:42 PM
to Jan Mikkelsen, freebsd...@freebsd.org, John Baldwin
Jan Mikkelsen writes:
| Hi,
|
| On 31/03/2012, at 1:14 AM, Doug Ambrisko wrote:
|
| > John Baldwin writes:
| > | On Friday, March 30, 2012 12:06:40 am Jan Mikkelsen wrote:
| > | ...
| > | > Is this path likely to work out? Any suggestions on where to go from here?
| > |
| > | You should try the updated mfi(4) driver that Doug (cc'd) is soon going to
| > | merge into HEAD. It syncs up with the mfi(4) driver on LSI's website, which
| > | supports several cards that the current mfi(4) driver does not. (I'm not
| > | fully sure whether the 9240 is in that group. Doug might know, however.)
| >
| > Yes, this card is supported by the mfi(4) in projects/head_mfi. Looks
| > like we fixed a couple of last-minute bugs found when trying to create a
| > RAID with mfiutil. This should be fixed now. I'm going to start the
| > merge to -current today. The version in head_mfi can run on older
| > versions of FreeBSD with the changes that Sean did.
| >
| > Note that I wouldn't recommend the 9240 since it can't have a battery
| > option. NVRAM is the key to the speed of mfi(4) cards. However, that
| > won't stop us from supporting it.
|
| Thanks.
|
| I don't know what changes Sean did. Are they in 9.0-release, or do I
| need -stable after a certain point? I'm assuming I should be able to
| take src/sys/dev/mfi/... and src/usr.sbin/mfiutil/... from -current.

It's in the SVN projects/head_mfi repo. You can browse it via the web at:
http://svnweb.freebsd.org/base/projects/head_mfi/

It's not in -current yet. I'm working on that. I just did all the
merges and looked them over. Now I'm doing a compile test;
then I can check it into -current.
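
If you want to grab just those two directories before the merge lands,
something like this should work (the svn:// URL is my guess from the
web path above):

svn co svn://svn.freebsd.org/base/projects/head_mfi/sys/dev/mfi sys/dev/mfi
svn co svn://svn.freebsd.org/base/projects/head_mfi/usr.sbin/mfiutil usr.sbin/mfiutil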

| The performance is an interesting thing. The write performance I care
| about is ZFS raidz2 with 6 x JBOD disks (or 6 x single disk raid0) on
| this controller. The 9261 with a BBU performs well but obviously costs more.

This will need some clarification in the future: JBOD is not the
same as a single-disk RAID. If I remember correctly from testing
JBOD versus single-disk RAID, JBOD is slower. A single-disk RAID is
faster since it can use the RAID cache. However, without the battery
you risk losing data on a power outage etc. Without the battery, the
performance of JBOD and a single-disk RAID should be about the same.

A real JBOD, as presented by LSI's firmware, shows up as /dev/mfisyspd<n>
entries. JBOD is a newer LSI feature.

| I can see the BBU being important for controller based raid5, but I'm
| hoping that ZFS with JBOD will still perform well. I'm ignorant at this
| point, so that's why I'm trying it out. Do you have any experience or
| expectations with a 9240 being used in a setup like that?

The battery or NVRAM benefit doesn't depend on the RAID type being used:
the cache, in NVRAM (write-back) mode, says done whenever it has space in
the cache for the write, and eventually it will hit the disk. Without the
cache working in this mode the write can't be acknowledged until the disk
says done, so performance suffers. With a single-disk RAID you have been
using the cache.

Now you can force using the cache without NVRAM but you have to acknowledge
the risk of that.
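
If memory serves, mfiutil can flip that per volume; something along
these lines (going from memory, check mfiutil(8)):

# show the current cache settings for a volume
mfiutil cache mfid0
# keep the write cache on even with a dead or missing battery -- risky
mfiutil cache mfid0 bad-bbu-write-cache enable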

Doug A.

Jan Mikkelsen

Mar 30, 2012, 6:53:31 PM
to Doug Ambrisko, freebsd...@freebsd.org, John Baldwin
On 31/03/2012, at 9:21 AM, Doug Ambrisko wrote:

> Jan Mikkelsen writes:
> | I don't know what changes Sean did. Are they in 9.0-release, or do I
> | need -stable after a certain point? I'm assuming I should be able to
> | take src/sys/dev/mfi/... and src/usr.sbin/mfiutil/... from -current.
>
> It's in the SVN projects/head_mfi repo. You can browse it via the web at:
> http://svnweb.freebsd.org/base/projects/head_mfi/
>
> It's not in -current yet. I'm working on that. I just did all the
> merges and looked them over. Now I'm doing a compile test;
> then I can check it into -current.

OK, will check it out.

> | The performance is an interesting thing. The write performance I care
> | about is ZFS raidz2 with 6 x JBOD disks (or 6 x single disk raid0) on
> | this controller. The 9261 with a BBU performs well but obviously costs more.
>
> This will need some clarification in the future: JBOD is not the
> same as a single-disk RAID. If I remember correctly from testing
> JBOD versus single-disk RAID, JBOD is slower. A single-disk RAID is
> faster since it can use the RAID cache. However, without the battery
> you risk losing data on a power outage etc. Without the battery, the
> performance of JBOD and a single-disk RAID should be about the same.
>
> A real JBOD, as presented by LSI's firmware, shows up as /dev/mfisyspd<n>
> entries. JBOD is a newer LSI feature.

OK, interesting. I was told by the distributor that the 9240 supports JBOD mode but the 9261 doesn't. I'm interested in testing it out with ZFS.

>
> | I can see the BBU being important for controller based raid5, but I'm
> | hoping that ZFS with JBOD will still perform well. I'm ignorant at this
> | point, so that's why I'm trying it out. Do you have any experience or
> | expectations with a 9240 being used in a setup like that?
>
> The battery or NVRAM benefit doesn't depend on the RAID type being used:
> the cache, in NVRAM (write-back) mode, says done whenever it has space in
> the cache for the write, and eventually it will hit the disk. Without the
> cache working in this mode the write can't be acknowledged until the disk
> says done, so performance suffers. With a single-disk RAID you have been
> using the cache.

With RAID-5 it is important because a single update requires two writes, and a failure in the window where one write has completed and the other has not could cause data corruption. I don't know whether the controller really handles this case.

I guess I'm hopeful that ZFS will take over the role the NVRAM plays on the controller. I can see how the controller in isolation is clearly slower without a BBU because it has to expose the higher layers to the disk latency.

> Now you can force using the cache without NVRAM but you have to acknowledge
> the risk of that.

Yes, I understand the risk, and it is one I do not want to take. All the 9261s I have deployed have a BBU and go into write through mode if the battery has a problem.

I think I need to test it in the context of ZFS and see how it works without controller NVRAM.
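
The test I have in mind is roughly this, assuming the six drives show up
as the /dev/mfisyspd<n> devices you mentioned:

zpool create tank raidz2 mfisyspd0 mfisyspd1 mfisyspd2 \
    mfisyspd3 mfisyspd4 mfisyspd5
zpool status tank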

Regards,

Jan.

Doug Ambrisko

Mar 30, 2012, 7:25:34 PM
to Jan Mikkelsen, freebsd...@freebsd.org, John Baldwin
Correct, JBOD is not supported on all cards, and depending on how the
card ships it may need to be enabled. Again, JBOD is not RAID on a single
disk. Also, to clarify: "mfiutil create jbod" does a RAID for each drive,
which isn't the same definition of JBOD that LSI talks about. They
are two different animals. MegaCli can enable the LSI JBOD feature and
create JBODs. I'm not really sure what the value of JBOD support is;
I haven't seen any kind of performance gains.

| > | I can see the BBU being important for controller based raid5, but I'm
| > | hoping that ZFS with JBOD will still perform well. I'm ignorant at this
| > | point, so that's why I'm trying it out. Do you have any experience or
| > | expectations with a 9240 being used in a setup like that?
| >
| > The battery or NVRAM benefit doesn't depend on the RAID type being used:
| > the cache, in NVRAM (write-back) mode, says done whenever it has space in
| > the cache for the write, and eventually it will hit the disk. Without the
| > cache working in this mode the write can't be acknowledged until the disk
| > says done, so performance suffers. With a single-disk RAID you have been
| > using the cache.
|
| With RAID-5 it is important because a single update requires two writes,
| and a failure in the window where one write has completed and the other
| has not could cause data corruption. I don't know whether the controller
| really handles this case.

That shouldn't be a problem, since the acknowledgement won't happen until
the writes are all done, and if any fail then the I/O should fail back
to the OS.

| I guess I'm hopeful that ZFS will take over the role the NVRAM plays on
| the controller. I can see how the controller in isolation is clearly
| slower without a BBU because it has to expose the higher layers to the
| disk latency.

All ZFS should really be doing is adding another level of caching.
Without an NVRAM cache, you can't really get the performance gain.

| > Now you can force using the cache without NVRAM but you have to acknowledge
| > the risk of that.
|
| Yes, I understand the risk, and it is one I do not want to take. All
| the 9261s I have deployed have a BBU and go into write through mode if
| the battery has a problem.
|
| I think I need to test it in the context of ZFS and see how it works
| without controller NVRAM.

Well, then you can simulate the performance of the 9240 on the 9261s
by disabling the battery and the cache! Feel free to do the test on
the 9240 as well. I can't see anything being faster without the NVRAM cache.

Doug A.

Jan Mikkelsen

Apr 16, 2012, 5:40:56 AM
to Doug Ambrisko, freebsd...@freebsd.org

On 31/03/2012, at 1:14 AM, Doug Ambrisko wrote:

> John Baldwin writes:
> | On Friday, March 30, 2012 12:06:40 am Jan Mikkelsen wrote:
> | > Hi,
> | >
..
>
> | > I have a loan LSI MegaRAID SAS 9240-4i controller for testing.

I have just imported the mfi(4) driver and mfiutil(8) into a 9.0-RELEASE tree to try this out.

When the system boots with two fresh drives attached, they show up as usable JBOD disks. However, I cannot use mfiutil to create anything with them. Every drive gives

"mfiutil: Drive n not available"

Is this expected behaviour? How can I create a raid1 volume using mfiutil and clean disks?

I tried using MegaCli from the LSI website (versions 8.02.16 and 8.02.21), but they can't even detect the controller. I know you said at some point that a very recent version of MegaCli was required. What version is necessary?

dmesg:

mfi0: <Drake Skinny> port 0xdc00-0xdcff mem 0xfe7bc000-0xfe7bffff,0xfe7c0000-0xfe7fffff irq 16 at device 0.0 on pci1
mfi0: Using MSI
mfi0: Megaraid SAS driver Ver 4.23
mfi0: 7021 (387925223s/0x0020/info) - Shutdown command received from host
mfi0: 7022 (boot + 4s/0x0020/info) - Firmware initialization started (PCI ID 0073/1000/9241/1000)
mfi0: 7023 (boot + 4s/0x0020/info) - Firmware version 2.120.244-1482
mfi0: 7024 (boot + 5s/0x0020/info) - Package version 20.10.1-0077
mfi0: 7025 (boot + 5s/0x0020/info) - Board Revision 03A
mfi0: 7026 (boot + 33s/0x0002/info) - Inserted: PD 32(e0xff/s1)
mfisyspd0: <MFI System PD> on mfi0
mfisyspd0: 1907729MB (3907029168 sectors) SYSPD volume
mfisyspd0: SYSPD volume attached
mfisyspd1: <MFI System PD> on mfi0
mfisyspd1: 1907729MB (3907029168 sectors) SYSPD volume
mfisyspd1: SYSPD volume attached


Thanks,

Jan Mikkelsen

Doug Ambrisko

Apr 16, 2012, 12:36:34 PM
to Jan Mikkelsen, freebsd...@freebsd.org
Jan Mikkelsen writes:
| On 31/03/2012, at 1:14 AM, Doug Ambrisko wrote:
| > John Baldwin writes:
| > | On Friday, March 30, 2012 12:06:40 am Jan Mikkelsen wrote:
| > | > Hi,
| > | >
| ...
You might want to include the output of:
mfiutil show drives
and the command you are using to try to create the RAID.

| Is this expected behaviour? How can I create a raid1 volume using
| mfiutil and clean disks?

I'm not sure if mfiutil can switch disks from JBOD mode to RAID.
I don't see any reason why it shouldn't. It can't go from RAID to
real JBOD mode since it doesn't have code to support that.

| I tried using MegaCli from the LSI website (versions 8.02.16 and
| 8.02.21), but they can't even detect the controller. I know you
| said at some point that a very recent version of MegaCli was
| required. What version is necessary?

What syntax did you use? The usage is cryptic. I've never
seen a MegaCli that couldn't access the card. What I meant by a
more recent MegaCli is that earlier versions didn't have the JBOD
commands. I have 8.00.46, which knows about JBOD.

| dmesg:
|
| mfi0: <Drake Skinny> port 0xdc00-0xdcff mem 0xfe7bc000-0xfe7bffff,0xfe7c0000-0xfe7fffff irq 16 at device 0.0 on pci1
| mfi0: Using MSI
| mfi0: Megaraid SAS driver Ver 4.23
| mfi0: 7021 (387925223s/0x0020/info) - Shutdown command received from host
| mfi0: 7022 (boot + 4s/0x0020/info) - Firmware initialization started (PCI ID 0073/1000/9241/1000)
| mfi0: 7023 (boot + 4s/0x0020/info) - Firmware version 2.120.244-1482
| mfi0: 7024 (boot + 5s/0x0020/info) - Package version 20.10.1-0077
| mfi0: 7025 (boot + 5s/0x0020/info) - Board Revision 03A
| mfi0: 7026 (boot + 33s/0x0002/info) - Inserted: PD 32(e0xff/s1)
| mfisyspd0: <MFI System PD> on mfi0
| mfisyspd0: 1907729MB (3907029168 sectors) SYSPD volume
| mfisyspd0: SYSPD volume attached
| mfisyspd1: <MFI System PD> on mfi0
| mfisyspd1: 1907729MB (3907029168 sectors) SYSPD volume
| mfisyspd1: SYSPD volume attached

You are definitely in real JBOD mode, with the drives showing up as
/dev/mfisyspd0 and /dev/mfisyspd1. You can access the drives through
those device nodes to do some experiments, if you want to.
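
For a quick raw sanity check on one of them, something like:

diskinfo -ct /dev/mfisyspd0

will give you command overhead and transfer rate numbers.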

Doug A.

Jan Mikkelsen

Apr 16, 2012, 10:16:50 PM
to Doug Ambrisko, freebsd...@freebsd.org

On 17/04/2012, at 2:32 AM, Doug Ambrisko wrote:

> Jan Mikkelsen writes:
> | On 31/03/2012, at 1:14 AM, Doug Ambrisko wrote:
..
The MegaCli problem was an embarrassing operator error which I can't blame on the bad UI.

"mfiutil create jbod …" doesn't create a JBOD disk, it creates a raid0 volume. I think that was expected. The biggest problem with this controller and just mfiutil is that you can't get a drive from the JBOD state to the unconfigured-good state, and a blank disk starts in JBOD. So to do any setup you need to resort to the BIOS utility or MegaCli.

To change each disk from JBOD to "Unconfigured-good" so that it can be used to create a volume, I needed to do:

MegaCli -PDMakeGood -Physdrv '[64:1]' -force -a0

Obviously with the right drivespec. Once they're in this state I can use mfiutil to create volumes.

I can get drives from the unconfigured-good state to JBOD by doing "MegaCli -PDMakeJBOD …".
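
So the whole dance for a fresh pair of disks comes out roughly like this
(the enclosure:slot values are from my test box, and the device IDs for
mfiutil come from "mfiutil show drives"):

# JBOD -> Unconfigured-good, once per drive
MegaCli -PDMakeGood -Physdrv '[64:1]' -force -a0
MegaCli -PDMakeGood -Physdrv '[64:2]' -force -a0
# then build the mirror with mfiutil, using the drive IDs it reports
mfiutil create raid1 4,5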

This is just in a little test machine with a few drives. Now that it is working to this level, I will get a server with a 9240 and give it a proper run. I'll also try the driver out in one of the 9261-based servers we've got here.

Thanks!

Jan.

John Baldwin

Apr 17, 2012, 1:57:47 PM
to freebsd...@freebsd.org, Jan Mikkelsen
On Monday, April 16, 2012 10:15:10 pm Jan Mikkelsen wrote:
>
> On 17/04/2012, at 2:32 AM, Doug Ambrisko wrote:
>
> > Jan Mikkelsen writes:
> > | On 31/03/2012, at 1:14 AM, Doug Ambrisko wrote:
> ...
It should be very easy to add a 'good' command to mfiutil. Actually, there
already is a 'good' command. Have you tried using that?

# mfiutil good <n>

--
John Baldwin

Jan Mikkelsen

Apr 17, 2012, 7:22:34 PM
to John Baldwin, freebsd...@freebsd.org

On 18/04/2012, at 3:51 AM, John Baldwin wrote:
> It should be very easy to add a 'good' command to mfiutil. Actually, there
> already is a 'good' command. Have you tried using that?
>
> # mfiutil good <n>

Missed that. Works fine. Sorry for the noise.

Regards,

Jan.