mathog <dma...@gmail.com> writes:
>Doug McIntyre wrote:
>> mathog <dma...@gmail.com> writes:
>>> My thinking is that while SCSI and FC disks are still available, they
>>> are quite expensive, and many of the ones advertised are refurbs.
>>
>> At the rate that I tend to burn up SATA/SAS disks, I'd trust many of
>> the older SCSI and FC disks to last much longer than a new SATA.
>I wasn't going to employ consumer-grade SATA.
>So are you saying the claimed 1.2 x 10^6 hour MTBF of a WD5002ABYS RE3
>(for instance) has no basis in fact?
I had over 100 of the WD 400 RE2 drives.
At least *75%* of them had failed by the 4-6 year mark. Sometimes I'd
have 6 go out in one shelf at a time. That storage system is long
gone; it wasn't worth fighting to keep it alive.
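For what it's worth, here's a quick back-of-envelope sketch in Python
of what a 1.2 x 10^6 hour MTBF would have predicted for that array,
assuming the usual constant-failure-rate reading of vendor MTBF
figures (the fleet size and time window are from my anecdote above):

HOURS_PER_YEAR = 8766      # calendar average, including leap years
mtbf_hours = 1.2e6         # the claimed MTBF
fleet_size = 100           # drives in that old array
years = 5                  # middle of the 4-6 year window

# Annualized failure rate implied by the MTBF (constant-rate model)
afr = HOURS_PER_YEAR / mtbf_hours            # ~0.73% per year

expected = fleet_size * afr * years          # ~3.7 drives over 5 years
observed = 75                                # "at least 75%" of the fleet

print(f"Implied AFR: {afr:.2%}")             # Implied AFR: 0.73%
print(f"Expected failures: {expected:.1f}")  # Expected failures: 3.7
print(f"Observed failures: {observed}")      # Observed failures: 75

The claimed MTBF predicts three or four failures over five years, not
seventy-five. That gap is exactly what I'm describing.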
Another customer's NetApp drive array here has 96 Seagate 450GB SAS
disks in it, at about the two-year mark. They've had at least 10
drives go out and need replacing.
Another vendor's storage system of mine here uses 28 Seagate
Constellation drives, and I've replaced 5-6 of them within the first
year of operation.
OTOH, my old-school NetApp filers have had only about one bad disk in
5 years of deployment, and the customer with the NetApp above has
several Thumpers without a single disk failure.
It could definitely be how hard the disks are worked, as that array
with over 100 WD RE2 disks ran at something like a 98% duty cycle. It
never had any opportunity to slow down.
So, yes, I'm definitely seeing lots of disk failures, and I have
piles of bad disks.