
HP MSA50 on SPARC Solaris 10?


chris
Jun 7, 2013, 9:50:59 AM

Hi,

In the never-ending quest to reduce power consumption, I'm trying to
get an HP MSA50 drive enclosure running on one of the lab machines.
The box is a 1U-high enclosure that takes up to 10 x 2.5" SAS drives.
There's an input and an output socket to allow connection to a second
identical box. I know this is heresy, but they are neat boxes that
draw ~100 watts full of drives.

The question is, does anyone have any experience of running these
boxes on a Sun system? AFAICS, they are just a JBOD box with a SAS
expander at the connector end. Using an LSI MegaRAID PCIe card, the
drives can be seen from Linux x86, and RAID 5 etc. can be configured
from the power-on BIOS utility, but of course this won't work on a
SPARC system. I would be interested to know what controllers might
work, preferably PCI-X to start with, though PCIe would work in other
machines. I don't need hardware RAID, as I would prefer to use ZFS...

Regards,

Chris

Doug McIntyre
Jun 7, 2013, 12:33:53 PM

While I haven't done this on Solaris SPARC (just x86)...

I'd start with LSI SAS HBAs. Solaris has been using LSI HBAs for some
time, and they seem to add support for them quite quickly.

The release notes for Solaris 10 8/11 say it supports the SAS 2308
chipset HBAs:

http://docs.oracle.com/cd/E23823_01/html/821-2730/gkugu.html#glbdn

These cards are common and come by default with "IT" firmware
(e.g. the LSI SAS 9207-8e). That would be a PCIe card.

I think Oracle/Sun shipped SAS1068-based cards for SPARC systems at
one point. These are PCI-X based chipsets, and I would expect most
updates of Solaris 10 and newer to support them just fine.

cindy.sw...@gmail.com
Jun 7, 2013, 1:04:45 PM

I would also check the Solaris HCL, here:

http://www.oracle.com/webfolder/technetwork/hcl/data/sol/components/views/disk_controller_all_results.page1.html

If your model is supported and you can create a ZFS storage pool on
the devices, make sure you write some test data and read it back.
Then check zpool status, iostat -En, fmadm faulty, and so on. I might
be paranoid, but I have seen some strange results on unsupported LSI
HBAs recently.
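
A minimal sketch of that check sequence, assuming a hypothetical pool
name (testpool) and hypothetical device names; substitute whatever
format reports on your system:

  # create a pool, write some test data, then read it back
  zpool create testpool raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0
  dd if=/dev/urandom of=/testpool/testfile bs=1024k count=1024
  digest -a md5 /testpool/testfile   # forces a full read of the file
  zpool scrub testpool               # has ZFS verify its own checksums

  # then look for anything unusual
  zpool status -v testpool
  iostat -En
  fmadm faulty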

Thanks, Cindy

chris
Jun 8, 2013, 8:43:58 AM

On 06/07/13 17:04, cindy.sw...@gmail.com wrote:

>> I think Oracle/Sun shipped SAS1068-based cards for SPARC systems at
>> one point. These are PCI-X based chipsets, and I would expect most
>> updates of Solaris 10 and newer to support them just fine.
>
> I would also check the Solaris HCL, here:
>
> http://www.oracle.com/webfolder/technetwork/hcl/data/sol/components/views/disk_controller_all_results.page1.html
>
> If your model is supported and you can create a ZFS storage pool on
> the devices, make sure you write some test data and read it back.
> Then check zpool status, iostat -En, fmadm faulty, and so on. I might
> be paranoid, but I have seen some strange results on unsupported LSI
> HBAs recently.
>
> Thanks, Cindy

Hi,

Thanks for the replies. I have an LSI PCI-X controller, not Sun
branded, but it has the 1068 chipset; however, it has no external
connector. Solaris does see the card, but when an adapter plate (SAS
drive cables <-> SFF-8470) is used to translate that to the external
array, it can't see the drives, even with a reboot -- -r or devfsadm
-c disk. There's nothing wrong with the controller or the array, but
possibly something in the cabling. Looking at the various options, it
looks like the LSI SAS1068X might be a good place to start, as it is
PCI-X, has 8 ports and has two external SFF-8470 4-lane connectors
that match the MSA50.
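
For reference, the rescan sequence being tried here is the standard
Solaris one; cfgadm -al is also worth running to see whether the HBA
reports any attached targets at all:

  devfsadm -c disk   # rebuild /dev/dsk links for newly attached disks
  cfgadm -al         # list attachment points; shows whether the HBA
                     # sees any targets even before device links exist
  format             # should list the enclosure's drives once visible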

I take on board the unsupported bit, but the small HP SAS arrays
might be useful to others if they can be made to work. I can write a
bit of C code to randomly write and then read back files, and parts
thereof, to exercise the setup for a few days or more as well. None
of this is critical infrastructure; it's more a bit of experimental
stuff :-)...
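
As a sketch of that sort of exerciser, here is a shell equivalent of
the C idea (the mount point and file sizes are arbitrary
placeholders):

  #!/bin/sh
  # Write random files, record their checksums, then re-read and
  # compare them on every pass. /testpool/exercise is a placeholder.
  DIR=/testpool/exercise
  mkdir -p $DIR
  pass=0
  while true; do
      pass=`expr $pass + 1`
      for i in 1 2 3 4 5; do
          dd if=/dev/urandom of=$DIR/f$i bs=1024k count=256 2>/dev/null
          digest -a md5 $DIR/f$i > $DIR/f$i.md5
      done
      for i in 1 2 3 4 5; do
          new=`digest -a md5 $DIR/f$i`
          old=`cat $DIR/f$i.md5`
          [ "$new" = "$old" ] || echo "pass $pass: mismatch on $DIR/f$i"
      done
  done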

Regards,

Chris

chris
Jun 8, 2013, 8:51:08 AM

On 06/08/13 12:43, chris wrote:

Sorry, typo: the controller should have been the SAS3800X,
not the SAS1068X...

ITguy
Jun 8, 2013, 11:46:33 AM

> I know this is heresy, but they are neat boxes that draw ~100 watts
> full of drives.
> ...
> I don't need hardware RAID, as I would prefer to use ZFS...

I don't understand why Sun/Oracle stopped selling simple SAS JBOD storage. Solaris/ZFS was specifically designed to use a bunch of cheap, "dumb" disks and give great performance and reliability. Why would the company that owns Solaris/ZFS stop selling this type of storage? WTF??

Not everyone needs/wants a dedicated SAN or NAS. Sometimes a healthy amount of locally attached storage is good enough.

Ian Collins
Jun 8, 2013, 4:02:34 PM

ITguy wrote:
>> I know this is heresy, but they are neat boxes that draw ~100 watts
>> full of drives. ... I don't need hardware RAID, as I would prefer
>> to use ZFS...
>
> I don't understand why Sun/Oracle stopped selling simple SAS JBOD
> storage. Solaris/ZFS was specifically designed to use a bunch of
> cheap, "dumb" disks and give great performance and reliability. Why
> would the company that owns Solaris/ZFS stop selling this type of
> storage? WTF??

Margins.

--
Ian Collins

Sami Ketola
Jun 9, 2013, 7:13:16 AM

chris <me...@devnull.com> wrote:
> Thanks for the replies. I have an LSI PCI-X controller, not Sun
> branded, but it has the 1068 chipset; however, it has no external
> connector. Solaris does see the card, but when an adapter plate (SAS
> drive cables <-> SFF-8470) is used to translate that to the external
> array, it can't see the drives, even with a reboot -- -r or devfsadm
> -c disk. There's nothing wrong with the controller or the array, but
> possibly something in the cabling. Looking at the various options, it
> looks like the LSI SAS1068X might be a good place to start, as it is
> PCI-X, has 8 ports and has two external SFF-8470 4-lane connectors
> that match the MSA50.

That card is most likely a RAID card with RAID firmware on it. You
would need to flash it with non-RAID (IT) firmware in order to use it
with Solaris.

Sami

chris
Jul 21, 2013, 10:46:39 AM

Just an update on the MSA50 disk-box investigation.

I found an LSI3442X, which is a PCI-X HBA with a mini-D-looking
SFF-8470 connector on the bracket, and also an HP 4-lane
InfiniBand-style SAS cable with SFF-8470 connectors at each end. The
test machine was a V240 running Solaris 10.

I put 5 x 146 GB drives across the top of the MSA50, with blanks in
the other five bays. Format can see all the drives, and I ran 5 shell
windows, each doing format / analyze / write, read, compare, so that
there was a reasonable amount of activity and contention on the bus.
No errors at all were logged and all drives label correctly. Creating
a ZFS pool from all the drives was no problem.
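
The pool creation step, as a sketch with hypothetical device names
(the real names depend on the controller and target numbering):

  # five drives in a single raidz vdev; c1t0d0..c1t4d0 are examples
  zpool create msapool raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0
  zpool status msapool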

I have a T2000 in the lab, which has PCIe low-profile slots. I need a
PCIe version of the LSI3442 to test that configuration, but so far
the PCI-X version does seem to work. Obviously more work needs to be
done for production use, but it looks promising.

The HP MSA series boxes are pretty low cost second-hand and there
really is no equivalent from Sun / Oracle AFAICS...

Regards,

Chris

Doug McIntyre
Jul 21, 2013, 10:07:36 PM

chris <me...@devnull.com> writes:
>The HP MSA series boxes are pretty low cost second-hand and there
>really is no equivalent from Sun / Oracle AFAICS...

That's interesting, although I wouldn't expect any problems based on
my experience. The price does seem right on the used 2.5" MSA boxes.
Too bad nobody seems to sell them with disk trays, though; that adds
another $50 unless there is a cheap source of trays.

Too bad the 3.5" units all seem pretty pricey on the used market.

Most of what I've found before is 3rd party, and it all feels "cheap"
for what you pay. It would be nicer to have HP disk trays for the
built-up storage that I've added on.


chris
Jul 26, 2013, 6:04:43 PM

My MSA box had no caddies, but I found 5 HP-badged ones for < 4.00 UKP
each, so not too bad. The drives were Sun-badged 146 GB types from a
stack of servers bought a couple of years ago. As this was an
investigation, I didn't want to spend too much until there were at
least some results. Power consumption with 5 drives is ~70 watts,
much lower than a 3.5" drive equivalent.

I have access to an MSA60 with 12 x 450 GB 3.5" drives, but the
connector on that one is the weird push-in-and-lock type, so I need
to find another 4-lane cable to suit. The cables are an astronomical
price, even on fleabay, so I'll just bide my time until one turns up
:-)...

Chris
