New SATA Spinning Platter Hard Drives - Ensuring They Function Properly


Kurtois

unread,
Dec 2, 2012, 7:54:34 PM12/2/12
to arch-r...@googlegroups.com
I purchased a couple of Seagate 3TB hard drives during the recent "Black Friday" discount flurry. Now I'm interested in hearing folks' thoughts about ensuring the drives are in good working order.

I have heard about...

Using this tool:

I also read up on SpinRite by Gibson Research:

Lastly, I read something about a Linux tool that could be used (ddrescue?).

Bottom Line:
I am interested to hear what users do when they get a new drive that they want to store personal data on. 

Kurtois

unread,
Dec 2, 2012, 8:02:09 PM12/2/12
to arch-r...@googlegroups.com
This podcast looks to have some interesting discussion on the topic:

Chris Weiss

unread,
Dec 2, 2012, 8:10:17 PM12/2/12
to arch-r...@googlegroups.com
ddrescue is a modified version of dd that will retry on failure and lower the read block size on errors to try and recover as much data as possible. On a healthy drive, it's no different than dd.
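For what it's worth, ddrescue's core trick — retry failed reads with progressively smaller block sizes, and pad what can't be read — can be sketched in Python. This is a hypothetical toy that works on ordinary files; the real tool operates on block devices and tracks its progress in a mapfile:

```python
import os

def rescue(src_path, dst_path, block_size=1 << 16, min_block=512):
    """Copy src to dst, shrinking the read size around errors (ddrescue-style sketch)."""
    size = os.path.getsize(src_path)
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        pos = 0
        n = block_size
        while pos < size:
            n = min(n, size - pos)
            try:
                src.seek(pos)
                data = src.read(n)
            except OSError:
                if n > min_block:
                    n = max(min_block, n // 2)  # retry this region with smaller reads
                    continue
                data = b"\x00" * n              # unreadable even at min size: pad with zeros
            dst.seek(pos)
            dst.write(data)
            pos += n
            n = block_size                      # reset to full block size after a success
    return size
```

On a healthy source, every read succeeds and this degenerates into a plain block copy, which matches the "no different than dd" observation.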

If you want to do a health check, use a tool to read the S.M.A.R.T. data and see what it says. Each HDD vendor also has their own testing tool you can use.
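The S.M.A.R.T. attributes generally considered most predictive (reallocated, pending, and offline-uncorrectable sector counts) can be pulled out of `smartctl -A` text with a few lines of Python. A rough sketch — the ten-column layout is assumed from smartmontools' usual attribute table, and real output can vary by drive:

```python
PREDICTIVE = ("Reallocated_Sector_Ct", "Current_Pending_Sector", "Offline_Uncorrectable")

def worst_attributes(smartctl_output, keys=PREDICTIVE):
    """Extract raw values for failure-predictive attributes from `smartctl -A` text."""
    found = {}
    for line in smartctl_output.splitlines():
        parts = line.split()
        # attribute rows have 10 columns: ID# NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
        if len(parts) >= 10 and parts[1] in keys:
            found[parts[1]] = int(parts[9])
    return found
```

Any nonzero raw value on those three attributes of a brand-new drive is a reasonable cue to return it.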

I don't think any of these tools will be any good for predicting quality though.

EschewObfuscation

unread,
Dec 2, 2012, 9:51:47 PM12/2/12
to arch-r...@googlegroups.com
SMART is pretty notorious for telling you "your drive just died" rather than "it's about to die".

unRAID is free in a three-drive configuration, and now that drives of 2TB and up are available, one motherboard and flash stick dedicated to supporting 3 drives is not an unreasonable server. If you pay, you can add many more drives for greater space efficiency, but the copy you get is then married to the flash stick, so should the stick fail, you can't immediately get back in operation. You'd have to buy another copy.

Even so, it's effective, easy to set up, and not terribly expensive. You might try it with 3 drives and decide later if you want to expand.

Since unRAID is its own Linux, booting from the flash stick, you do not need a boot drive. You could attach a floppy or CD drive from which to boot and run SpinRite when you wish to run it against the individual drives. So unRAID and SpinRite are not mutually exclusive. Not a bad solution, imo.


Kurtois

unread,
Dec 2, 2012, 10:25:36 PM12/2/12
to arch-r...@googlegroups.com
Thanks for the reply.

Watching this video indicates that magnetic media has *always* had imperfections in the platters that hard drive manufacturers could not eliminate, dating back to the days of MB hard drives. 




Ben West

unread,
Dec 2, 2012, 11:33:12 PM12/2/12
to arch-r...@googlegroups.com
This is true, and I believe drive manufacturers have almost always employed various methods of error correction, data encoding, and reserve sectors to tolerate some quantity of physical defects on the medium, whether present out of the factory or developed over time.

That said, since the original question was about how to test a newly purchased drive for fitness, I guess you could either run exhaustive testing (whether using a tool provided by the manufacturer or by a third party), or do some sort of active burn-in testing: write dummy data to the entire drive and read it back for verification.

From an abstract perspective, I'd recommend active burn-in testing.  That might let you catch any defects that are still lurking in the near end of the 'bathtub curve,' and presumably let you still return the drive for replacement.  Testing for defects that doesn't involve actually writing to a region of disk can only provide so much test coverage.
http://theproaudiofiles.com/more-dependable-hard-drives/
http://www.wetware.co.nz/2009/07/stress-test-burn-in-test-for-hard-drives/
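A minimal write-then-verify burn-in of the kind described above can be sketched like this. Python, operating on a plain file path purely for illustration — pointing anything like this at a raw device would destroy any data on it:

```python
import random

def burn_in(path, size_bytes, chunk=1 << 20, seed=1234):
    """Write a reproducible pseudorandom pattern to `path`, then verify it reads back intact."""
    rng = random.Random(seed)
    with open(path, "wb") as f:
        written = 0
        while written < size_bytes:
            n = min(chunk, size_bytes - written)
            f.write(rng.randbytes(n))
            written += n
    rng = random.Random(seed)  # replay the identical pseudorandom stream for verification
    mismatched = 0
    with open(path, "rb") as f:
        read = 0
        while read < size_bytes:
            n = min(chunk, size_bytes - read)
            if f.read(n) != rng.randbytes(n):
                mismatched += 1  # this chunk did not read back as written
            read += n
    return mismatched  # 0 means every chunk verified clean
```

The pseudorandom pattern (rather than all zeros) has the side benefit of being hard for any caching or compression layer to fake.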

Here is a report Google recently did on hard drive failures, which found that most drive failures occur within the first 3 months of usage.
http://static.googleusercontent.com/external_content/untrusted_dlcp/labs.google.com/en/us/papers/disk_failures.pdf

This would suggest drive manufacturers weren't doing their own in house burn-in testing thoroughly enough to weed out the units with an unacceptable level of defects.  Which shouldn't be surprising.  The hard drive market is fiercely competitive, and exhaustive burn-in testing time that scales linearly with drive capacity can quickly become too long to bother with.

--
Ben West

EschewObfuscation

unread,
Dec 3, 2012, 9:48:51 AM12/3/12
to arch-r...@googlegroups.com
Well, if we want to split hairs, technically that's not true; the reason "hat box" mainframe drives were once so expensive is that only "perfect" ones made it through manufacturing QA. But as far as drives you or I could afford, yes, getting the cost down to our reach required that manufacturers adopt a "good enough is good enough" approach, which involves a quick scan and mapping-out of bad sectors at manufacturing (or possibly at first powerup). That change of philosophy occurred back in the 5-10 MB days, prehistoric from most people's perspective.

When drives broke the 120 GB per platter threshold, they did it by switching to perpendicular magnetic recording. What many people don't know is that this pushes areal density so high that drives now RELY on error correction for normal operation out of the box. Which is a reason, beyond normal caution, to never, ever trust a single drive anymore. Current drives, to achieve the density they have, are actually less reliable than those of a couple of generations ago.



Chris Weiss

unread,
Dec 3, 2012, 10:02:14 AM12/3/12
to arch-r...@googlegroups.com
Well, if you want to split the split hairs, that QA process weeded out the imperfect platters that would affect intended usage; the imperfections still existed. In fact, the ones that made it into mainframe systems also still had imperfections, just either at a level their equipment could not detect, or existing in such a way that they would not manifest as a problem in the mainframe's drive.

Greer Carper

unread,
Dec 3, 2012, 10:30:44 AM12/3/12
to arch-r...@googlegroups.com
Have there been any noteworthy developments in RAID configuration, be it RAID controllers, software, etc., in the last few years?

3 years ago I looked into getting an Areca 12-port RAID controller + battery backup and using it for RAID 5 in a dedicated server, but ended up sticking with a NAS RAID 5 box. Now I'm reaching the point where said NAS box has no space and I'm resorting to JBOD for my data. Are there other avenues I should consider?

Ryan/baslisks

unread,
Dec 3, 2012, 10:35:43 AM12/3/12
to arch-r...@googlegroups.com
Tape drives. 50 bucks for 3 terabytes of space. The problem is that the drives themselves are crazy expensive.

Chris Weiss

unread,
Dec 3, 2012, 10:58:43 AM12/3/12
to arch-r...@googlegroups.com
You should be able to upgrade the drives in the NAS.

Block-level deduplication is the "big new thing," but it takes a decent CPU and a crapload of RAM to work. I've played some with lessfs on Linux, but being userspace, it's less than stellar. I think ZFS has the best free implementation right now.
http://hub.opensolaris.org/bin/view/Community+Group+zfs/dedup
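The core idea of block-level dedup — store each unique block once, keyed by its hash, and keep a per-file recipe of hashes — can be sketched in a few lines of Python. This is an in-memory toy, nothing like a real filesystem implementation (which is exactly where the CPU and RAM cost comes from: the hash table has to cover every block on disk):

```python
import hashlib

BLOCK = 4096  # fixed block size, as in ZFS's simplest (non-variable) dedup case

def dedup_blocks(data, block=BLOCK):
    """Split data into fixed-size blocks; store each unique block once, keyed by its hash."""
    store = {}   # hash -> block bytes (the deduplicated pool)
    recipe = []  # ordered hashes needed to reconstruct the original data
    for i in range(0, len(data), block):
        chunk = data[i:i + block]
        h = hashlib.sha256(chunk).hexdigest()
        store.setdefault(h, chunk)  # only the first copy of a block is kept
        recipe.append(h)
    return store, recipe

def rebuild(store, recipe):
    """Reassemble the original data from the pool and the recipe."""
    return b"".join(store[h] for h in recipe)
```

Data with many repeated blocks shrinks to one stored copy per unique block plus a list of hashes; data with no repeats gains nothing and still pays the hashing cost.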

Kurtois

unread,
Dec 4, 2012, 1:07:14 PM12/4/12
to arch-r...@googlegroups.com
First off, thanks to folks chiming in on this topic.

That said, from my research since my last post, I have noted that zeroing out the drive might be the best approach:

I think this agrees with what Ben West posted, i.e. an active burn-in.

Lastly, I was expecting Google Groups to report back (send email) when a reply was made to this topic. I received no such email. Anyone else know if this capability exists?

Chris Weiss

unread,
Dec 4, 2012, 1:22:18 PM12/4/12
to arch-r...@googlegroups.com
I changed my user prefs to send emails for all. Maybe without that you have to specifically subscribe to a thread?