Looking at a single ASP configured with 8 drives either as a single 8-
drive RAID set or as two 4-drive sets. What's the better option?
Statistically, if you have two drives fail, the 2x4 RAID would give you a
57% chance of having a single failed drive in each set. In other words,
if two drives fail in a 1x8 set you have a 100% chance of data loss; in a
2x4 set you have only a 43% chance of data loss.
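The 43/57 split follows from conditional probability: once one drive has failed, the second failure lands on one of the 7 remaining drives, and 3 of those share the first drive's set. A minimal sketch (the function name is my own, not from any IBM tool):

```python
from fractions import Fraction

def p_data_loss(total_drives, set_size):
    """Chance that two random drive failures land in the same RAID set
    (i.e. data loss), assuming equal-size sets and random failures."""
    # After the first failure, set_size - 1 of the remaining
    # total_drives - 1 drives share the failed drive's set.
    return Fraction(set_size - 1, total_drives - 1)

loss = p_data_loss(8, 4)   # 3/7, about 43%
split = 1 - loss           # 4/7, about 57%
print(f"data loss: {float(loss):.0%}, one failure per set: {float(split):.0%}")
```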
But of course we've all heard that nothing is free, so I'm wondering what
other implications there are. I'm assuming performance would be
affected, does anybody have any info on this?
Any other thoughts/comments?
Charles Wilt
I've only seen 1 drive failure in 4 years on 2 AS/400's, and even then
the drive was replaced as a precaution, not because the drive was at
fault.
I think that a dual drive failure in any system is highly unlikely.
Thus I would go for a 10-disk array, getting better performance and losing
less capacity to parity.
And of course create a great backup strategy using journals. Depending
on the complexity and size of the system, use BRMS or another backup tool.
Cheers,
stef
We've only had 1 drive fail in 6 years in our 9402-400 with 10 drives
configured as 1x2-drive mirror, 2x4-drive RAID. Note that in this case,
the reason for two RAID arrays is that each set of 4 drives is of a
different size.
I'd agree that a dual drive failure is highly unlikely, but given the
recent history of IBM drive failures, it perhaps isn't as unlikely as
we would like.
I'm simply wondering what the cost would be for the added reliability.
Charles
http://www.iseriesnetwork.com/resources/artarchive/index.cfm?fuseaction=viewarticle&CO_ContentID=11154&channel=art
"Customers who stick with RAID can take a few steps to make their system
more reliable. First, iSeries customers should at least mirror their load
source, the most critical iSeries drive, Breisacher says. The system
can’t power up unless it can read the licensed internal code and
operating system information that the load source carries.
Also, RAID users should always keep their parity sets down to four
drives, Breisacher says. “If you want to decrease your chance of failure,
spreading the parity across four drives gives you a better shot of system
uptime than if you spread it across ten,” he says."
David Breisacher is the CEO of iSeries disk vendor BCC Technologies
Charles
In article <MPG.17555adc8...@news.easynews.com>,
cw...@nospam.miamiluken.com says...
If you have 8 drives, all of the same capacity, running off of the same IOP,
how can you configure two different parity sets?
If you're not having ongoing problems with your current bank of drives, I
wouldn't change anything.
Mike
The "recent history" of IBM drive failure is now old news; those issues have
been addressed by PTF, or the bad batch of drives has long since been
replaced due to failure. I had a customer plagued by 6 drive failures over a
6-month period a year ago, and at no time did we see multiple failures.
I haven't seen a system placed by us in the past 1-2 years that has had any
drive failures yet, and some have been out there 3-4 years, including MES
upgrades. The DASD problems were limited to small batches and were short
lived.
If you're that mission-critical, you should be looking at mirrored system
level protection and/or have a respectable (read FAST) tape unit - DASD
failure is not the only thing you need to concern yourself with.
RT
"Charles Wilt" <cw...@nospam.miamiluken.com> wrote in message
news:MPG.17557af47...@news.easynews.com...
Actually, I'm considering this for a new box we haven't got yet.
Charles
In article <jyXG8.1555$Hl3....@newsread1.prod.itd.earthlink.net>,
michael.p...@onemain.com says...
Assuming you have 8 drives and two drives fail:
with a single 8-drive array you have a 100% chance of data loss
with 2 4-drive arrays you have only a 43% chance of data loss
Charles
In article <3CEC945C...@aracnet.com>, kgo...@aracnet.com says...
The likelihood of a single drive failing is remote; the likelihood of two
drives failing at once, even more so.
Nonetheless, it could happen and it appears that a simple config change
could make the probability of data loss significantly less. The change
costs no $$, and the only question is what effect it may have on
performance. Nobody seems to have hard data, but BCC's engineers think
it may cost 5-10% worst case.
For larger systems this doesn't seem to matter so much. For example 32
drives the chance of data loss with 2 failed drives is:
8x4-drives = 10%
4x8-drives = 23%
for 100 drives:
25x4-drives = 3%
10x10-drives = 9%
A quick conclusion is that once you have to have multiple arrays because of
the number of drives, this really doesn't matter much.
The question is whether it makes sense to have multiple arrays when you
can, even though you don't have to.
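The figures above can be checked with the same conditional-probability argument as before: given two failures, the second failed drive shares the first drive's set with probability (set size - 1)/(total drives - 1). A quick sketch (function name is my own):

```python
from fractions import Fraction

def p_data_loss(total_drives, set_size):
    """Chance two random failures hit the same parity set,
    assuming equal-size sets and random, independent failures."""
    return Fraction(set_size - 1, total_drives - 1)

for total, size in [(32, 4), (32, 8), (100, 4), (100, 10)]:
    sets = total // size
    print(f"{sets}x{size}-drives: {float(p_data_loss(total, size)):.0%}")
```

Running this reproduces the numbers quoted: 10%, 23%, 3%, and 9%.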
Charles
In article <jd_G8.83784$UV4.143234@rwcrnsc54>,
russan...@no.slimey.spammers.attbi.noteven.com says...
If you parity stripe across 8 drives, you will have better response times
and lower disk busy values if that parity set has to rebuild the data during
a disk outage than if you've only striped across 4 drives. Rebuilding the
data on the replacement drive will also occur sooner, lessening the chance
of a multiple disk outage occurring.
I can only assume that given the recent price reductions, you're looking at
17GB drives regardless of your storage needs, so the above issue should be
considered in the final decision.
Given the number of total spindles you've been discussing, it also appears
that you have given consideration to the minimum processor requirements for
number of spindles re:performance.
Are you putting an LTO tape unit on this machine?
Maybe you've already thought all this out; I just couldn't remember the
whole thread.
RT
"Charles Wilt" <cw...@nospam.miamiluken.com> wrote in message
news:MPG.175818412...@news.easynews.com...
In article <mltH8.70245$L76.113316@rwcrnsc53>,
russan...@no.slimey.spammers.attbi.noteven.com says...
> Sorry, no hard data either but consider this:
>
> If you parity stripe across 8 drives, you will have better response times
> and lower disk busy values if that parity set has to rebuild the data during
> a disk outage than if you've only striped across 4 drives. Rebuilding the
> data on the replacement drive will also occur sooner, lessening the chance
> of a multiple disk outage occurring.
Actually, according to the IBM V5R1 Performance Capabilities Manual, 4
parity arms perform better than 8 if running exposed with 1 failed drive,
because you have to read ALL remaining (3 or 7) arms. It doesn't really say
anything about the rebuild times, but it seems to me that a rebuild of a 4-
arm set would be faster, since you are only reading 3 other arms to write
what is missing.
>
> I can only assume that given the recent price reductions, you're looking at
> 17GB drives regardless of your storage needs, so the above issue should be
> considered in the final decision.
Actually, looking at BCC 15K 8GB-FAST (17GB short-stroked to 8GB), even
with 8GB usable it's about a 4x increase in DASD from our current box.
>
> Given the number of total spindles you've been discussing, it also appears
> that you have given consideration to the minimum processor requirements for
> number of spindles re:performance.
Yep, which is why we are going with the BCC :-)
>
> Are you putting an LTO tape unit on this machine?
Yes, either LTO or AIT3.
I obviously missed that part and made an assumption based on general system
performance.
"The estimated time to rebuild a DASD is approximately 30 minutes for an
8-arm array on a dedicated system with no other jobs running. If other
concurrent jobs being run on the system are requesting 130 IOs per second to
this DASD subsystem, the rebuild time will increase to approximately 1
hour."
Regards,
RT
"Charles Wilt" <cw...@nospam.miamiluken.com> wrote in message
news:MPG.1758488cf...@news.easynews.com...