
External storage options Sun Fire V880?


mathog

Apr 1, 2013, 1:08:57 PM
We have an old Sun Fire V880 with 6 internal FC disks (all 73Gb) plus 5
external U320 disks in a JBOD (off an added adapter, all disks 147Gb,
split between two channels). Everything is backed up to tape (over a
U320 bus shared with some of the SCSI disks.) The oldest of these disks
have been running continuously since 2003. In other words, disk
failures in the not too distant future are extremely likely. We have
one spare SCSI disk (not plugged in) and one FC disk that is running but
empty, and so could be used as a spare. The machine boots from one of
the FC disks; we do not know whether it would boot from a SCSI disk, never
having tried it.

The machine runs an older version of Oracle which in turn is used by the
key application, a piece of software produced by a company that no
longer exists. So there is no way this software can be migrated to a
newer machine, not even to a newer version of Solaris (we are stuck at
Solaris 9).

Assuming this was your machine and you had to keep it running, naturally
for as little money as possible, what would you do?

My thinking is that while SCSI and FC disks are still available, they
are quite expensive, and many of the ones advertised are refurbs and/or
otherwise not quite trustworthy. So it seems like the best thing to do
would be to get another external storage box that connects to the V880
over the U320 and holds a few largish SATA2/3 disks to replace all of
these older, smaller disks. External storage containers of that sort
show up on ebay at a low cost quite frequently, and if need be they can
still be bought new. If the machine could boot off that external array
that would be best, otherwise, after moving the data, pull the 5 FC
disks which are not the system disk and set them aside as spares. (And
pray that they do not freeze up permanently while powered down and kept
in storage!) 100% uptime is not required, nor is hot swapping of drives.
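
For the data move itself I would expect the usual dump-and-restore to be
enough, assuming the filesystems are plain UFS (the device names below are
made up, just to show the shape of it):

    # prepare a filesystem on one of the new disks and mount it
    newfs /dev/rdsk/c2t0d0s0
    mount /dev/dsk/c2t0d0s0 /mnt/newdisk
    # copy one filesystem from an old disk onto the new one
    ufsdump 0f - /dev/rdsk/c1t1d0s0 | (cd /mnt/newdisk && ufsrestore rf -)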

I considered eSATA, but a quick search did not turn up an adapter that
would work on the V880 + Solaris 9.

Your thoughts?

Thanks,

David Mathog


Doug McIntyre

Apr 1, 2013, 1:50:02 PM
mathog <dma...@gmail.com> writes:
>My thinking is that while SCSI and FC disks are still available, they
>are quite expensive, and many of the ones advertised are refurbs..

At the rate that I tend to burn up SATA/SAS disks, I'd trust many of
the older SCSI and FC disks to last much longer than a new SATA.

Lifecycle of a typical SATA disk = warranty period +/- 30% and it will
die a fiery death with near 90% certainty.
Seagate SATA disks = 2 year warranty.
Seagate SAS disks = 3 year warranty (unless you get some older ones at 5 yr).

Lifecycle of SCSI or FC disk == until it wears out. Maybe 10-12 years?


I'm not saying don't have a backup plan. But don't discount old
enterprise gear as obsolete next to the newer and "better" hardware,
especially given how crappy SATA disks are. They are all ticking
timebombs as far as I'm concerned, and I would never deploy anything
SATA/SAS without it being in a very, very redundant array with a ton of
spares available (at least 50% of the disks, spread between both hot and
cold spares).

Sun sold a SAS HBA with an LSI 3080 based chipset for that system.
There are probably many LSI cards with the same chipset available.



John D Groenveld

Apr 1, 2013, 1:59:15 PM
In article <kjcer9$g0b$1...@dont-email.me>, mathog <dma...@gmail.com> wrote:
>The machine runs an older version of Oracle which in turn is used by the
>key application, a piece of software produced by a company that no
>longer exists. So there is no way this software can be migrated to a
>newer machine, not even to a newer version of Solaris (we are stuck at
>Solaris 9).
>
>Assuming this was your machine and you had to keep it running, naturally
>for as little money as possible, what would you do?

Acquire a newer system and migrate your application as a
Solaris 9 branded zone:
<URL:http://docs.oracle.com/cd/E22645_01/html/820-4490/>
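
Roughly, the move looks like the sketch below; the zone, path, and archive
names are just placeholders, and the doc above has the real procedure and
prerequisites:

    # on the V880, capture the Solaris 9 system into a flash archive
    flarcreate -S -n s9system /net/somehost/export/s9system.flar

    # on the Solaris 10 target, with the Solaris 9 Containers packages installed
    zonecfg -z s9zone "create -t SYSsolaris9; set zonepath=/zones/s9zone; commit"
    zoneadm -z s9zone install -u -a /net/somehost/export/s9system.flar
    zoneadm -z s9zone boot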

John
groe...@acm.org

Ian Collins

Apr 1, 2013, 9:16:26 PM
Doug McIntyre wrote:
> mathog <dma...@gmail.com> writes:
>> My thinking is that while SCSI and FC disks are still available, they
>> are quite expensive, and many of the ones advertised are refurbs..
>
> At the rate that I tend to burn up SATA/SAS disks, I'd trust many of
> the older SCSI and FC disks to last much longer than a new SATA.
>
> Lifecycle of a typical SATA disk = warranty period +/- 30% and it will
> die a fiery death with near 90% certainty.
> Seagate SATA disks = 2 year waranty.
> Seagate SAS disks = 3 year waranty (unless you get some older ones at 5 yr)
>
> Lifecycle of SCSI or FC disk == until it wears out. Maybe 10-12 years?

I see such comments quite often, but they don't reflect my experience.

I have a batch of consumer Seagate SATA disks that have been in use in
one or other of my home systems since 2007 in less than ideal (hot!)
environments. I've yet to lose one.

One of my clients has three Thumpers (from about 2008) that have been in
continuous, heavy use. So far one has lost two drives (early, bad
firmware), the others none.

--
Ian Collins

Casper H.S. Dik

Apr 2, 2013, 4:42:45 AM
mathog <dma...@gmail.com> writes:

>The machine runs an older version of Oracle which in turn is used by the
>key application, a piece of software produced by a company that no
>longer exists. So there is no way this software can be migrated to a
>newer machine, not even to a newer version of Solaris (we are stuck at
>Solaris 9).

Have you tried running the application in a Solaris 9 zone on Solaris 10?

I'm hoping that you also have a plan to replace the application
eventually; the hardware will die at some point.

Casper

chris

Apr 2, 2013, 11:32:24 AM
On 04/02/13 01:16, Ian Collins wrote:

>> Lifecycle of SCSI or FC disk == until it wears out. Maybe 10-12 years?
>
> I see such comments quite often, but they don't reflect my experience.
>
> I have a batch of consumer Seagate SATA disks that have been in use in
> one or other of my home systems since 2007 in less than ideal (hot!)
> environments. I've yet to loose one.
>
> One of my clients has three Thumpers (from about 2008) that have been in
> continuous, heavy use. So far one has lost two drives (early, bad
> firmware), the others none.
>

Modern SATA drives are pretty good, but the technology comes from
consumer computing, not pro computing, and the fact that they are much
cheaper must mean something. The other thing to consider is that SATA
drives are usually 7200 rpm, not 10 or 15k, which can have a significant
effect on access time and application performance.

On balance, I would stick with fibre channel or SCSI. Perhaps buy another
disk array. There are a lot of pretenders in the storage business, but
something like a second-user EMC or Xylogics 14 or 16 drive box and
controller is quite low cost now, even full of 73 or 146GB drives. That's
enough space for a RAID config with multiple hot-swap spares, and you can
set all that up before migrating. Compact size, but built like a brick
outhouse, with all the parts very conservatively rated. SAS, especially
2.5", is still too expensive for the better quality stuff, but I wouldn't
touch SATA for any mission-critical stuff when there's so much higher-spec
second-user pro kit around these days. You just need to shop around a bit
to get the best deal...

Regards,

Chris


mathog

Apr 2, 2013, 1:40:09 PM
Doug McIntyre wrote:
> mathog <dma...@gmail.com> writes:
>> My thinking is that while SCSI and FC disks are still available, they
>> are quite expensive, and many of the ones advertised are refurbs..
>
> At the rate that I tend to burn up SATA/SAS disks, I'd trust many of
> the older SCSI and FC disks to last much longer than a new SATA.


I wasn't going to employ consumer grade SATA.

So are you saying the claimed 1.2 x 10^6 hour MTBF of a WD5002ABYS RE3
(for instance) has no basis in fact? There are lots of enterprise SATA
drives with similar MTBF numbers.
both SATA and SAS variants, presumably differing only in the interface
electronics. It may be that SATA interface electronics are more prone
to failure than SAS electronics, but I have no reason to think that is
the case.

If the disk doesn't have a 5 year warranty on it then the manufacturer
has demonstrated that they have no faith in the quality of the drive.
Actually, in a sense none of them do, since no disk has, say, a 10 year
warranty, which the manufacturer could offer if the claimed MTBF was at
all accurate and in excess of a million hours. (Is it just me, or does
it not seem like there is some sort of gentleman's agreement between the
manufacturers that no disk will ever be sold with a warranty longer than
5 years?)

Doug McIntyre

Apr 2, 2013, 7:21:23 PM
mathog <dma...@gmail.com> writes:
>Doug McIntyre wrote:
>> mathog <dma...@gmail.com> writes:
>>> My thinking is that while SCSI and FC disks are still available, they
>>> are quite expensive, and many of the ones advertised are refurbs..
>>
>> At the rate that I tend to burn up SATA/SAS disks, I'd trust many of
>> the older SCSI and FC disks to last much longer than a new SATA.

>I wasn't going to employ consumer grade SATA.

>So are you saying the claimed 1.2 x10^6 hour MTBF of a WD5002ABYS RE3
>(for instance) has no basis in fact?

I had over 100 of the WD 400 RE2 drives.

At least *75%* of them had failed by the 4-6 year mark. Sometimes I'd
have 6 go out in one shelf at a time. That storage system is long
gone; it wasn't worth fighting to keep alive.

In another NetApp drive array here that a customer has, they have 96x
Seagate 450GB SAS disks in it at about the 2-year mark. They've had
at least 10 drives go out and need replacing.

In another vendor's storage system of mine here, they're using 28 x
Seagate Constellation drives, and I've replaced 5-6 of them within
the first year of operation.

OTOH, my old-school NetApp filers have had only like one bad disk in 5
years of deployment. Or the customer with the NetApp above has several
Thumpers without a single disk failure.

It definitely could be how hard the disk is worked, as that array with
over 100 WD RE2 disks was worked at something like a 98% duty cycle. It
never had any opportunity to slow down.

So, yes, I'm definitely seeing lots of disk failures, and I have
piles of bad disks.

chris

Apr 3, 2013, 1:08:22 PM
On 04/02/13 23:21, Doug McIntyre wrote:

>
> So, yes, I'm definately seeing lots of disk failures, and I have
> piles of bad disks.

What do you think of / what's your experience with the 2.5" 10k drives
which seem to be the current industry standard? Some of the last V series
machines used those and they do seem to be much faster on boot and general
access times. Are they as reliable as the older 3.5" drives, for example?
Much lower power consumption as well.

Have quite a few of those kicking around here now, but no disk box as
yet to put them in. The HP MSA50 / 70 series look good on paper and a
standard LSI Logic controller will work out of the box on Solaris with
those, but they are still quite expensive here in the UK.

You are right about the older FC arrays though. Although the castings
may look the same, I do wonder if they selected the best platters,
lowest vibration motors and head assemblies for those drives, with the
balance going to the low end. They have quite remarkable reliability...

Regards,

Chris

cindy swearingen

Apr 3, 2013, 2:00:45 PM
Speaking of reliability...did you all see this:

http://youtu.be/fAUvfqLEWuA

I don't quite understand all of this because eventually you see an S11.1
release, but the uptime is quite impressive, and consider that the disks
were active for 10 years.

Thanks, Cindy

YTC#1

Apr 3, 2013, 3:37:25 PM
On 04/ 3/13 07:00 PM, cindy swearingen wrote:
> On Apr 3, 11:08 am, chris <m...@devnull.com> wrote:
>> On 04/02/13 23:21, Doug McIntyre wrote:
>>
>>
>>
>>> So, yes, I'm definately seeing lots of disk failures, and I have
>>> piles of bad disks.
>>
>> What do you think / experience of the 2.5" 10k drives which seem to
>> be the current industry standard ?. Some of the last V series machines
>> used those and they do seem to be much faster on boot and general
>> access times. Are they as reliable as the older 3.5's, for example ?.
>> Much lower power consumption as well.
>>
>> Have quite a few of those kicking around here now, but no disk box as
>> yet to put them in. HP MSA50 / 70 series look good on paper and a
>> standard lsi logic controller will work out of the box on Solaris with
>> those, but still quite expensive here in the uk.
>>
>> You are right about the older fc arrays though. Although the castings
>> may look the same, I do wonder if they selected the best platters,
>> lowest vibration motors and head assemblies for those drives, with the
>> balance going to the low end. They have quite remarkable reliability...
>>
>> Regards,
>>
>> Chris
>
> Speaking of reliability...did you all see this:
>
> http://youtu.be/fAUvfqLEWuA

Anyone who names machines after owls in a children's book needs shutting
down......

>
>

mathog

Apr 4, 2013, 1:52:56 PM
Doug McIntyre wrote:
> mathog <dma...@gmail.com> writes:
>> Doug McIntyre wrote:
>>> mathog <dma...@gmail.com> writes:
>>>> My thinking is that while SCSI and FC disks are still available, they
>>>> are quite expensive, and many of the ones advertised are refurbs..
>>>
>>> At the rate that I tend to burn up SATA/SAS disks, I'd trust many of
>>> the older SCSI and FC disks to last much longer than a new SATA.
>
>> I wasn't going to employ consumer grade SATA.
>
>> So are you saying the claimed 1.2 x10^6 hour MTBF of a WD5002ABYS RE3
>> (for instance) has no basis in fact?
>
> I had over 100 of the WD 400 RE2 drives.
>
> At least *75%* of them had failed in the 4-6 year mark. Sometimes I'd
> have 6 go out in one shelf at a time. That storage system is long
> gone, it wasn't worth fighting to keep alive.

The claim that the drives provide a 1.2 x 10^6 hour MTBF is a form of
implied warranty, one (very) incompatible with your observed 75% failure
rate in <6 years. Since this involves 100 disks it might be worth
pursuing an implied warranty claim.
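
For what it's worth, here is a quick sanity check using a simple
exponential failure model (just a sketch, and the model no doubt flatters
the drives):

    awk 'BEGIN { mtbf = 1.2e6; hours = 6 * 365 * 24; n = 100;
                 printf("expected failures: %.1f of %d\n",
                        n * (1 - exp(-hours/mtbf)), n) }'

That predicts roughly 4 failures out of 100 drives over 6 years, nowhere
near 75.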

Their defense would most likely be that the environment in which these
drives were mounted was hostile - which is possible. Probably the
temperature was within range, but excessive vibration has been an issue
in some storage subsystems, and it tends to get worse when the system is
packed full of disks and run nonstop. That is not a strong defense
though since that is the expected environment for this sort of
enterprise disk.

Or they might claim that the remaining 25% would actually have run 1.2 x
10^7 hours more, so that the claimed MTBF was correct. I would hate to
be the lawyer who had to use that defense in court!

chris

Apr 4, 2013, 5:24:34 PM
On 04/03/13 19:37, YTC#1 wrote:

>
> Anyone who names machines after owls in a childrens book needs shutting
> down......
>

That sounds a bit intolerant :-), but anyway, I used to name my machines
after surrealists, though there's more variety now, including one called
tridac.

You need to plug it into Google, along with "rae", to understand the
significance....

YTC#1

Apr 5, 2013, 5:17:42 AM
On 04/ 4/13 10:24 PM, chris wrote:
> On 04/03/13 19:37, YTC#1 wrote:
>
>>
>> Anyone who names machines after owls in a childrens book needs
>> shutting down......
>>
>
> That sounds a bit intolerant :-), but anyway, I used to name my

I prefer to think I am a realist :-)

> machines after surrealists, but there's more variety now, including
> one called tridac.
>
> You need to plug it in to google, along with "rae", to undestand the
> significance....

That's the problem, I CBA. I just want to know what the machine is used
for and where it is located; the name should reflect that. :-)

Too many years delivering systems, I suppose :-(


>

Doug McIntyre

Apr 7, 2013, 9:48:29 AM
chris <me...@devnull.com> writes:
... I wrote:
>> So, yes, I'm definately seeing lots of disk failures, and I have
>> piles of bad disks.

>What do you think / experience of the 2.5" 10k drives which seem to
>be the current industry standard ?. Some of the last V series machines
>used those and they do seem to be much faster on boot and general
>access times. Are they as reliable as the older 3.5's, for example ?.
>Much lower power consumption as well.

I don't have 5+ years of time on them in my data center (maybe just 3-4
years?), nor any large storage arrays, just a few smaller ones like the
HP StorageWorks NAS shelf, so there are maybe only 100-200 2.5" drives
in my DC so far.

I've only had to replace a handful of 2.5" devices, so overall the
ratio has been better. But also I think they have been used more for
lighter duty storage tasks. I did have the one in my laptop go "odd",
but not outright bad. I replaced it with a hybrid, but that isn't
the same as the enterprise storage 2.5" devices.

My gut feeling (without longer-term data) is that overall, with less
surface area and less heat, they probably will have better reliability.
So far they have been better, but obviously the storage provided is much
smaller. (That's just fine for me; I'd rather have 450GB disks in a
storage array than 3TB or 4TB disks.)

cindy swearingen

Apr 8, 2013, 11:51:07 AM
On Apr 7, 7:48 am, Doug McIntyre <mer...@dork.geeks.org> wrote:
I agree but have other reasons as well...

In a Solaris / ZFS world, mirrored pools are flexible, reliable, and
perform better in most cases, so having disks under 1 TB is probably more
realistic from a mirroring standpoint. It might be easier to convince
admins to waste disk space in the GB range rather than the TB range. It's
not really a waste, it's data protection, but still...
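
For example, a small mirrored pool with a hot spare is just (device names
made up):

    zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0
    zpool add tank spare c0t4d0
    zpool status tank

Half the raw capacity goes to redundancy, which is the "waste" I mean.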

Thanks, Cindy

Michael

Apr 10, 2013, 2:08:10 PM
Hi,
The SATA drives are crap. For the past two years I have struggled using
them, and every second time I scrub the zpools at least one drive fails;
then, when rebuilding from a hot spare, the next one fails (:

The trusty E450 (1998), fully loaded with 300GB SCSI drives that are maybe
7-8 years old, has still never failed, ever!

So, you should maybe get small SAS drives, preferably 2.5 inch, and even
better get everything from a trusty vendor such as Oracle, IBM, HP, or
EMC (Dell); then maybe the selected drives they use are okay, unlike the
ones from normal distributors!!

