
Truth or marketing?

62 views

David

Jun 29, 2016, 6:49:02 AM
Just opened the retail box on my new WD Red 3TB drive (bought the retail
version as it was the same price as a bare drive for some reason).

There is a table on the back of the box explaining what various drives in
the range are good for.

Blue - nothing much and 2 years warranty

Black - gaming, video editing, performance, 5 years warranty

Red - NAS, RAID compatible (!), 3 years warranty

Purple - surveillance systems with up to 32 cameras, 3 years warranty.


The box claims cloning software for each but there is no software in the
box. I assume that it may be available after registering the drive. The
support portal asks for registration. Oh, and the URL in the documentation
for support of the retail kit no longer exists.

http://www.wdc.com/en/products/products.aspx?id=810

says that there is special firmware in the WD Red to make RAID "better":
NASware 3.0.

"Built into every WD Red hard drive, NASware 3.0's advanced technology
improves your system's storage performance by increasing compatibility,
integration, upgradeability, and reliability."

Still trying to find out exactly what it is, but I should be fitting the
drive not surfing the net.

Is there something special or is this just marketing bollocks?

As far as I can see any drive should work in a RAID array.

I recognise that if the WD Red doesn't power down to save electricity it
may give better long term reliability, but what is the special requirement
to be "RAID compatible"?

Ah, well, back to fettling SATA cables.

Cheers


Dave R

--
Windows 8.1 on PCSpecialist box

Ian

Jun 29, 2016, 7:08:24 AM
On 2016-06-29, David <wib...@btintenet.com> wrote:
> Just opened the retail box on my new WD Red 3TB drive (bought the retail
> version as it was the same price as a bare drive for some reason).
>
> There is a table on the back of the box explaining what various drives in
> the range are good for.
>
> Blue - nothing much and 2 years warranty
>
> Black - gaming, video editing, performance, 5 years warranty
>
> Red - NAS, RAID compatible (!), 3 years warranty
>
> Purple - surveillance systems with up to 32 cameras, 3 years warranty.

> As far as I can see any drive should work in a RAID array.
>
> I recognise that if the WD Red doesn't power down to save electricity it
> may give better long term reliability, but what is the special requirement
> to be "RAID compatible"?

It has been suggested [citation required] that the primary difference
between server (RAID compatible) and desktop drives is their error
recovery behaviour. Desktop drives will try and try and try again until
they get a successful read; server drives will throw in the towel quickly
in the assumption (hope) that the RAID has the data elsewhere. I'd be
interested to know if this is true for WD black/red...
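Ian's suggestion is testable on drives that support SCT Error Recovery Control (the mechanism behind TLER): `smartctl -l scterc /dev/sdX` reports the read/write recovery time limits. As a minimal sketch (the sample output below is hypothetical, not captured from a real WD drive), the report can be parsed like this:

```python
import re

# Hypothetical sample of `smartctl -l scterc /dev/sdX` output; a TLER-capable
# drive reports bounded recovery times, while a pure desktop drive typically
# reports "Disabled" and retries a bad sector more or less indefinitely.
SAMPLE_OUTPUT = """\
SCT Error Recovery Control:
           Read:     70 (7.0 seconds)
          Write:     70 (7.0 seconds)
"""

def parse_scterc(text):
    """Return {'Read': seconds, 'Write': seconds}; None means disabled."""
    limits = {}
    for direction, raw in re.findall(r"(Read|Write):\s+(\d+|Disabled)", text):
        # smartctl reports the limit in deciseconds (units of 100 ms)
        limits[direction] = None if raw == "Disabled" else int(raw) / 10.0
    return limits

print(parse_scterc(SAMPLE_OUTPUT))  # {'Read': 7.0, 'Write': 7.0}
```

A drive that answers "Disabled" here behaves like Ian's desktop case: it keeps retrying rather than reporting the error within a bounded time.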


--
Ian

"Tamahome!!!" - "Miaka!!!"

Adrian Caspersz

Jun 29, 2016, 7:21:15 AM
On 29/06/16 12:08, Ian wrote:
> Desktop drives will try and try and try again until
> they get a successful read; server drives will throw in the towel quickly
> in the assumption (hope) that the RAID has the data elsewhere. I'd be
> interested to know if this is true for WD black/red...
>

I've got a Seagate consumer drive rescued from an old Sky+ box. I
presume it just delivers the data, dropouts and all, within a strict time
window.

This all looks like manufacturer binning: dividing production into
different SKUs by intended use, and slapping on proprietary firmware to
conceal it.

Like done with memory chips.

--
Adrian C

Johnny B Good

Jun 29, 2016, 6:38:35 PM
On Wed, 29 Jun 2016 10:48:59 +0000, David wrote:

> Just opened the retail box on my new WD Red 3TB drive (bought the retail
> version as it was the same price as a bare drive for some reason).
>
> There is a table on the back of the box explaining what various drives
> in the range are good for.
>
> Blue - nothing much and 2 years warranty
>
> Black - gaming, video editing, performance, 5 years warranty
>
> Red - NAS, RAID compatible (!), 3 years warranty
>
> Purple - surveillance systems with up to 32 cameras, 3 years warranty.
>
>
> The box claims cloning software for each but there is no software in the
> box. I assume that it may be available after registering the drive. The
> support portal asks for registration. Oh, and the URL in the
> documentation for support of the retail kit no longer exists.
>
> http://www.wdc.com/en/products/products.aspx?id=810
>
> says that there is special firmware in the WD Red to make RAID "better":
> NASware 3.0.

Presumably it has TLER[1] to prevent the RAID controller firmware or
software from prematurely marking the drive as bad and dropping it out of
the array.

>
> "Built into every WD Red hard drive, NASware 3.0's advanced technology
> improves your system's storage performance by increasing compatibility,
> integration, upgradeability, and reliability."
>
> Still trying to find out exactly what it is, but I should be fitting the
> drive not surfing the net.

You should *first* be testing the drive with Western Digital's WDIDLE3
utility to verify the head unload time-out setting. Just *don't* be
surprised[2] if this is set to an insanely short 8 seconds instead of a more
useful 5 minutes (300 seconds).

Although it's possible to disable this time-out function completely
(sometimes - the disable feature seems hit and miss; it either works or
it doesn't), it's probably best to allow head unloading, but at a less
insane rate - 5 minutes seems just about right, imo, for a desktop
drive.

Unloading the heads off the platter to reduce power consumption by
some two or three hundred milliwatts (it allows the head positioning
servo control module to be powered down into a standby state) was first
introduced in their laptop drives over a decade ago, where the 8 second
time-out period had no premature-wear consequences (WD laptop drives
with head unload counts of 3 and 5 million cycles are still working
perfectly fine) and even provided the bonus of reducing the chance of a
nasty head crash in the event of gross mechanical shock, an all too
prevalent cause of disk failure in laptops.

It seems this feature was used by WD in their larger desktop cousins as
a means of improving their "Green Credentials" with their absurdly short
8 second time-out period being set as the default to maximise the power
savings the typical dumb "Tech Reviewers" would report in order to win
the "Our Drive Is Greener Than Every Other Brand Of Drive" award, even
though they knew full well of its life shortening effect.

>
> Is there something special or is this just marketing bollocks?

Not entirely: TLER is a real specification that marks off enterprise
grade drives from the common herd of commodity desktop drives.

>
> As far as I can see any drive should work in a RAID array.

You'd think so, wouldn't you? Well, think again. A more meaningful
acronym would be RAED, as in Redundant Array of Expensive Drives, since
the extended time now required to recover hard to read data blocks on
modern desktop drives using PRML and similar track reading DSP
techniques means a perfectly good drive (by the modern standards of
commodity desktop drives) may end up being ejected from the array by an
overly impatient RAID controller.

>
> I recognise that if the WD Red doesn't power down to save electricity it
> may give better long term reliability,

I assume you're referring to spin down power saving. The improvement
essentially comes about by reducing the frequency of thermal cycling
events. HDDs much prefer a temperature stable mode of operation over
their platter spindle bearings being given frequent periods of respite.

With disk drive manufacturers quite happy to quote MTBF figures to the
nearest half million hours (half to one and a half million hours being
typical), you're left to conclude that spindle bearing life is not an
issue. However, operational temperature and, more importantly, thermal
cycling *is* a known correlating factor in the failure rates of modern
electronic assemblies, so it seems only wise to minimise extreme
temperature swings, especially in view of the reliability data set
published by Google some years back, in which no simple linear
relationship between temperature and failure rate could be seen (those
drives were *never* subjected to spin down power savings abuse - they
ran 24/7).

However, having said all that, this will only work in your favour if
you've taken the trouble to make sure the head unload time-out value has
been maxed out to the 300 seconds upper limit before placing it into
service.

And, whilst I'm mentioning this issue of insanely short head unload time-
out values, Western Digital aren't the only HDD manufacturer to suffer
such lunacy, they're just the only manufacturer who happens to keep this
setting out of the collection of power management settings embodied in
the APM feature set. IOW, it's quite possible to set other brands of HDD
into a head unloading frenzy equally as insane as WD's default,
presumably by picking the most aggressive power savings option in the
APM. DAMHIK, I just know. :-(


>but what is the special requirement to be "RAID compatible"?

Hint: TLER

>
> Ah, well, back to fettling SATA cables.
>

JOOI, what does the fettling of SATA cables involve these days?

Back in the early days of SATA, for me, fettling a SATA cable was simply
the matter of pre-forming the rather stiff and latchless cables to
utilise their springiness to help retain the connector plugs in the SATA
sockets.

I learnt early on to recognise the distinctive symptom of bad SATA
connection problems, along with the quick (if temporary) fix of a
firm slap against the side of the PC case to unstick the hang event,
taking note of the need for another fettling session at the next
available opportunity (which could simply be as soon as the system wasn't
running any critical jobs - you could literally hot swap a SATA data
cable out without any consequences other than the desired one). :-)

[1] Time Limited Error Recovery

[2] I was *very* surprised when checking a brand new 4TB WD RED with the
WDIDLE3 utility just over 2 1/2 years ago, only to discover this had also
been afflicted with an 8 second time-out value by default. Needless to
say, I fixed this and set it to 300 seconds before adding it to my NAS
box even though this was the very first WD drive, ime, that I had been
able to completely disable the head unloading feature on.

On balance I felt that the head unload feature on a less aggressive 5
minute time-out would do more good than harm. As a matter of interest,
the SMART log for this drive shows a head unload cycle count of just 6563
versus a *claimed* 14,579 PoH value. I *know* that it has actually
clocked up nearly 200 hours more than the 4TB Hitachi DeskStar which
correctly shows a PoH figure of 23874 (along with a head unload cycle
count of 40297 - about 1.7 head unload events per hour, anything below 5
per hour should be no cause for concern - the WD RED has done even
better, at a mere 0.273 head unload cycles per hour!).
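For what it's worth, the rates quoted above check out; a quick arithmetic sketch, taking the WD Red's true power-on hours as roughly 200 more than the Hitachi's 23,874 (per the claim, since its own PoH counter is suspect):

```python
# Figures quoted above: Hitachi DeskStar PoH 23,874 with 40,297 unload
# cycles; WD Red unload count 6,563 with true PoH assumed ~200 hours more
# than the Hitachi's (its own 14,579 PoH reading being suspect).
hitachi_rate = 40297 / 23874
wd_red_rate = 6563 / (23874 + 200)

print(f"Hitachi: {hitachi_rate:.1f} unloads/hour")   # about 1.7 per hour
print(f"WD Red:  {wd_red_rate:.3f} unloads/hour")    # about 0.273 per hour
```

Both results match the figures in the post, and both sit comfortably under the suggested 5-per-hour concern threshold.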

HTH & HAND :-)

--
Johnny B Good

David

Jun 30, 2016, 5:38:14 AM
On Wed, 29 Jun 2016 22:38:33 +0000, Johnny B Good wrote:

> On Wed, 29 Jun 2016 10:48:59 +0000, David wrote:
>
>> Just opened the retail box on my new WD Red 3TB drive (bought the
>> retail version as it was the same price as a bare drive for some
>> reason).
<snip>
>> Still trying to find out exactly what it is, but I should be fitting
>> the drive not surfing the net.
>
> You should *first* be testing the drive with Western Digital's WDIDLE3
> utility to verify the head unload time-out setting. Just *don't* be
> surprised[2] if this is set to an insanely short 8 seconds instead of a
> more useful 5 minutes (300 seconds).
<snip>
>
>
>> Ah, well, back to fettling SATA cables.
>>
>>
> JOOI, what does the fettling of SATA cables involve these days?
>
<snip>
>
> HTH & HAND :-)

Thanks - first point; I assume that I can WDIDLE3 the HDD at any point
when it is not mounted? Not just first time out of the box? Hmm...looks
like it. I think the unload count is O.K. on my other WD Red; I will check
again.

Fettling SATA cables?

Because of the layout of my Silverstone HTPC box I have some cabling
issues. The hard drives go into removable boxes (open frames) secured
against one side of the case. The middle one overlaps the SATA connectors
on the mother board. Thus I have to remove the middle HDD case if I want
to get at the SATA connectors for any reason, such as to cable in a device
in the other two boxes or temporarily connect another drive with the case
lid off. Today's solution is to populate all the SATA connectors (apart
from the one shared with the eSATA port) with cables, label them all up
with the connector number, and tie the spares up for future use.

I am also having interesting times generally with fitting HDDs.

The two 3.5" HDD boxes, holding 3 each, also have provision for cooling
fans. I decided to fit a couple of slimline quiet fans into the boxes to
try and quieten everything down - more slow spinning fans should in theory
move a similar amount of air with less noise.

However with these fans fitted I have to move the HDDs out one screw
fixing slot, which means they reach further out into the case. This in
turn is causing me additional cabling problems because the drive cabling
is fighting with the power leads from the PSU (third drive case) and some
of the leads from the mother board (middle drive case lowest slot).

I don't know if I'm missing something obvious, or if the modular power
leads from the PSU are in an unusual place and the case is really intended
for mini-ITX not full ITX, or something......

However I am about to start juggling drives around to see if I can get the
minimum cable interaction.

Current drives in the 3.5" cases are now

2 * WD 3TB Red 3.5"

2 * Crucial SSDs sharing an Icy Dock

So populating 3 of the 6 available spaces.

There is the supposed capacity for 6 3.5" drives but I am damned if I can
see how to fit them with the additional fans installed, and am pretty sure
I would have problems using the slot in the middle cage directly over the
SATA connectors anyway.

Loads of fun.

I may have to relocate the fan from one cage just to get the 3.5" drives
to settle in.

David

Jun 30, 2016, 7:27:36 AM
On Thu, 30 Jun 2016 09:38:13 +0000, David wrote:

> On Wed, 29 Jun 2016 22:38:33 +0000, Johnny B Good wrote:
>
>> On Wed, 29 Jun 2016 10:48:59 +0000, David wrote:
>>
>>> Just opened the retail box on my new WD Red 3TB drive (bought the
>>> retail version as it was the same price as a bare drive for some
>>> reason).
> <snip>
>>> Still trying to find out exactly what it is, but I should be fitting
>>> the drive not surfing the net.
>>
>> You should *first* be testing the drive with Western Digital's WDIDLE3
>> utility to verify the head unload time-out setting. Just *don't* be
>> surprised[2] if this is set to an insanely short 8 seconds instead of a
>> more useful 5 minutes (300 seconds).
> <snip>
<snip>

Looking at the WD site, and the software available, I read:

"This firmware modifies the behavior of the drive to wait longer before
positioning the heads in their park position and turning off unnecessary
electronics. This utility is designed to upgrade the firmware of the
following hard drives: WD1000FYPS-01ZKB0, WD7500AYPS-01ZKB0,
WD7501AYPS-01ZKB0. CAUTION: Do not attempt to run this software on any
hard drives other than what is listed above."

So according to WD I should not run wibble diddle 3 on my WD Red.

Looking at using idle3 under Linux to check what the settings are.

dennis@home

Jun 30, 2016, 7:29:50 AM
On 29/06/2016 23:38, Johnny B Good wrote:

>
> It seems this feature was used by WD in their larger desktop cousins as
> a means of improving their "Green Credentials" with their absurdly short
> 8 second time-out period being set as the default to maximise the power
> savings the typical dumb "Tech Reviewers" would report in order to win
> the "Our Drive Is Greener Than Every Other Brand Of Drive" award, even
> though they knew full well of its life shortening effect.

My WD Green drives are currently managing about one head load/unload
cycle per hour, if the SMART data is correct.

Johnny B Good

Jun 30, 2016, 11:26:03 AM
As I suggested, a rate of 5 or fewer head unload events per hour is
nothing to be concerned about. I simply based this on the fact that WD
quote lifetime rating limits of 300 and 600 thousand cycles for their
commodity and enterprise class drives respectively. Assuming the lower
limit applies, a steady 5 per hour gives you a PoH figure of 60,000
hours (a total run time of some 6.8 years) before the drive ends up
running on "Borrowed Time" (assuming it even lasts that long before
failing from any number of other random causes, or gets retired from
active duty on account of a capacity upgrade). To paraphrase Bill Gates,
"6.8 years ought to be enough for anyone!" :-)
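The arithmetic behind that 6.8 year figure, as a quick check:

```python
# WD's quoted load/unload rating for commodity drives, and the suggested
# "no cause for concern" unload rate of 5 per hour.
rated_cycles = 300_000
rate_per_hour = 5

hours = rated_cycles / rate_per_hour      # power-on hours to exhaust the rating
years = hours / (24 * 365.25)             # converted to calendar years of 24/7 use

print(f"{hours:.0f} hours, about {years:.1f} years")  # 60000 hours, about 6.8 years
```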

However, the rate at which a WD drive still using the default 8 second
time-out value racks up unload cycles can vary tremendously depending on
its usage pattern. Your current average of one per hour may simply be a
matter of serendipity.

Not so in the case of the sysadmins who, some 5 or 6 years ago,
discovered head unload event counts in excess of 150,000 after just 6
months of service in their server boxes, provoking cries of outrage
against Western Digital for such a cunning stunt. It was almost on a par
with Seagate's FreeAgent drive "specials": a doomed-to-fail attempt to
mitigate overheating in unventilated enclosures by mistaking power
saving spin down for a temperature limiting remedy for their hot running
drives. The saving grace in WD's case was the existence of a remedy to
fix the problem.

If you haven't done so already, I'd recommend running the WDIDLE3
utility (afaicr, it's available on the UBCD) to make sure the drives are
set for a 300 second time-out value or, if you prefer, disabled. Failing
that, at least check the SMART logs once a month to make sure the
situation hasn't suddenly gotten out of hand (the worst case scenario
could be 350 or more head unload events per hour, say updating a log
file every ten seconds with no other activity to reset the time-out
period).
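That worst-case figure follows directly from the 8 second time-out: a log write every ten seconds means the heads re-load and then unload again before the next write. A quick check, including the rate implied by the sysadmins' reported counts:

```python
# One log write every 10 seconds, with an 8 second unload time-out, means
# one unload/load cycle per write: 360 cycles per hour, comfortably over 350.
worst_case_per_hour = 3600 // 10

# The reported ~150,000 unloads in roughly 6 months of 24/7 service
# corresponds to a (still ruinous) average rate of about 35 per hour.
sysadmin_rate = 150_000 / (6 * 30 * 24)

print(worst_case_per_hour)   # 360
print(round(sysadmin_rate))  # 35
```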

Running WDIDLE3 might require making temporary changes in the CMOS setup
to set the SATA interface into an IDE compatible (or equivalent) mode,
and possibly require that you have just one of those drives connected at
a time. This last requirement may or may not be true, but it's worth
bearing in mind if the WDIDLE3 utility doesn't work properly when more
than one WD drive is connected.

I understand that there is now a *nix equivalent to the WDIDLE3 utility
available to reprogram the head unload time-out value. I've never had
cause thus far to try it out so can't comment any further.

--
Johnny B Good

Mike Scott

Jun 30, 2016, 2:49:04 PM
On 30/06/16 16:26, Johnny B Good wrote:
...
>
> I understand that there is now a *nix equivalent to the WDIDLE3 utility
> available to reprogram the head unload time-out value. I've never had
> cause thus far to try it out so can't comment any further.
>

I've just been bitten by this problem. I bought a blue drive some months
ago not realising there were head load/unload issues. (I'd checked Tom's
guide, and a couple of other places; none mentioned the issue)

After my freebsd server had been running for three weeks or so, I
realised I could hear the drive spinning up and down far too frequently.

Anyway, the start/stop and load counts were 15305 and 65770. So I'm now
using a program 'ataidle' at startup:
/usr/local/sbin/ataidle -P 254 /dev/ada0

which clearly disables the timeout completely. I believe the same
program runs in linux too.

(Although, mea culpa, the numbers started ramping again after a (rare)
reboot: I think /usr/local/sbin can't be in the PATH in rc.local :-( )



--
Mike Scott (unet2 <at> [deletethis] scottsonline.org.uk)
Harlow Essex
"The only way is Brexit" -- anon.

Johnny B Good

Jun 30, 2016, 3:26:07 PM
On Thu, 30 Jun 2016 09:38:13 +0000, David wrote:

> On Wed, 29 Jun 2016 22:38:33 +0000, Johnny B Good wrote:
>
>> On Wed, 29 Jun 2016 10:48:59 +0000, David wrote:
>>
>>> Just opened the retail box on my new WD Red 3TB drive (bought the
>>> retail version as it was the same price as a bare drive for some
>>> reason).
> <snip>
>>> Still trying to find out exactly what it is, but I should be fitting
>>> the drive not surfing the net.
>>
>> You should *first* be testing the drive with Western Digital's WDIDLE3
>> utility to verify the head unload time-out setting. Just *don't* be
>> surprised[2] if this is set to an insanely short 8 seconds instead of a
>> more useful 5 minutes (300 seconds).
> <snip>
>>
>>
>>> Ah, well, back to fettling SATA cables.
>>>
>>>
>> JOOI, what does the fettling of SATA cables involve these days?
>>
> <snip>
>>
>> HTH & HAND :-)
>
> Thanks - first point; I assume that I can WDIDLE3 the HDD at any point
> when it is not mounted? Not just first time out of the box? Hmm...looks
> like it. I think the unload count is O.K. on my other WD Red; I will
> check again.

Yes, it can be run any time. It's never too late until it *is* too late.

The WDIDLE3 utility is a DOS program, which means you need to set the
SATA interface to IDE compatibility mode in the CMOS setup for it to
work.

I understand that there's now a *nix version of this utility which
probably doesn't require any such (temporary) changes to be made in the
CMOS setup (but as I've never had reason to try it, I'm only surmising
here).

>
> Fettling SATA cables?
>
> Because of the layout of my Silverstone HTPC box I have some cabling
> issues. The hard drives go into removable boxes (open frames) secured
> against one side of the case. The middle one overlaps the SATA
> connectors on the mother board. Thus I have to remove the middle HDD
> case if I want to get at the SATA connectors for any reason, such as to
> cable in a device in the other two boxes or temporarily connect another
> drive with the case lid off. Today's solution is to populate all the
> SATA connectors (apart from the one shared with the eSATA port) with
> cables, label them all up with the connector number, and tie the spares
> up for future use.
>
> I am also having interesting times generally with fitting HDDs.

I know where you're coming from. I too have similar, if perhaps not
quite so extreme, issues with the re-purposed second-hand Gateway 2000
(desktop layout) case used to house my 4 disk NAS build. It only had
provision for a two-drive bay (located on the RHS of the box) with a
cdrom and floppy drive bay in the middle.

With a modern ATX or micro-ATX MoBo, there was sufficient room to mount
the two additional drives on brass MoBo stand off pillars to the base of
the case immediately below the cdrom/floppy drive bay and in the space to
the LHS of that. All the HDDs are nicely placed to bask in the incoming
fresh air thus saving the need for additional fans over and above the
thermostatic fan in the 145 watt rated (old skool rating) mini ITX styled
PSU (into which I had to add the innards of a 5.2v smpsu wallwart to
beef up the pathetic 100mA-rated 5VSB rail[1]).

>
> The two 3.5" HDD boxes, holding 3 each, also have provision for cooling
> fans. I decided to fit a couple of slimline quiet fans into the boxes to
> try and quieten everything down - more slow spinning fans should in
> theory move a similar amount of air with less noise.

You may be able to do away with those extra fans by opening up what are
often pathetically inadequate ventilation slots or holes. It's
surprising how effective a standard 80mm PSU fan can be as the sole
source of ventilation when the case is modded to improve its breathing
(especially so when the idling consumption is only a hundred watts or
less - in my case, circa 50 to 51 watts all drives spinning).

>
> However with these fans fitted I have to move the HDDs out one screw
> fixing slot, which means they reach further out into the case. This in
> turn is causing me additional cabling problems because the drive cabling
> is fighting with the power leads from the PSU (third drive case) and
> some of the leads from the mother board (middle drive case lowest slot).
>
> I don't know if I'm missing something obvious, or if the modular power
> leads from the PSU are in an unusual place and the case is really
> intended for mini-ITX not full ITX, or something......
>
> However I am about to start juggling drives around to see if I can get
> the minimum cable interaction.

Good luck with that. If you're prepared to get your hands dirty with
a bit of gross DIY activity (tinsnips, pliers and drills), you may
enjoy even greater success without so much reliance on "Luck". :-)

>
> Current drives in the 3.5" cases are now
>
> 2 * WD 3TB Red 3.5"
>
> 2 * Crucial SSDs sharing an Icy Dock
>
> So populating 3 of the 6 available spaces.
>
> There is the supposed capacity for 6 3.5" drives but I am damned if I
> can see how to fit them with the additional fans installed, and am
> pretty sure I would have problems using the slot in the middle cage
> directly over the SATA connectors anyway.
>
> Loads of fun.
>
> I may have to relocate the fan from one cage just to get the 3.5" drives
> to settle in.

I discovered, several years ago now, that the key to a nice "Cool 'n'
Quiet" system is the inclusion of generously sized ventilation slots/
holes and removal of any sources of turbulent airflow in the path of your
standard axial cooling fan(s). These fans are quite capable of shifting
considerable volumes of air provided they're not asked to fight
backpressure (or its equivalent, suction drag).

The most common affronts to the principles of efficient ventilation are
the vent slots stamped out of the sheet steel panels in PC PSUs (as well
as in some cases). Here, there is an aesthetically pleasing and effective
remedy requiring little more than a suitable pair of pliers by which to
twist the metal strips between the slots by 45 to 60 degrees. This not
only preserves the safety barrier function and improve its appearance as
well as increase the effective cross sectional area, it also reduces, if
not eliminates, drag and noise inducing turbulence, creating an
improvement in airflow out of all proportion to visible expectations.

System ventilation is one area where a bit of well thought
out attention to detail can pay off big time. It's surprising just how
little airflow is required to keep drive temperatures to within 10 to 15
degrees of ambient which is enough to permit operation right up to 40
degree room temperatures without undue risk of data loss.

[1] It took me quite a while to realise why most of the alternative MoBos
I'd tried in place of the original P75 board were failing to fire up. The
worst of it all was that the reason was documented on the PSU label
itself if I'd only taken the trouble to examine it in detail.

When it finally dawned on me that the 100mA rating might be the issue, I
made up a 3 cell AA battery pack to use as a substitute 5VSB rail to
retest the recalcitrant MoBos, discovering that once fired up, the 4.5v
assistance was no longer needed to keep it running. At that point, I was
tempted to just fit a 3 cell battery pack with a momentary push to start
switch I could hold in whilst pressing the normal on/off button but
decided on a more elegant solution in the form of a re-purposed PCB
removed from a 1.2A 5.2 volt wallwart fitted within the existing PSU
using a diode to connect to the 5VSB line inside so as to hold it up to a
reduced if adequate 4.7 volts. Once booted, this 5VSB line magically rose
up to 5 volts anyway.

This mod was done quite a few years ago and, as I expected, has been
working reliably ever since. Although each additional component increases
the failure rate of the overall system, I didn't believe that would be
much of an issue in this case since the naked smpsu circuit board runs
far cooler inside the mini-ITX psu than it ever could inside its
original plastic plugtop overcoat. Moreover, the only time it is
subjected to any loading stress is during the brief time between mains
power on and the pressing of the on/off button. Once started, no current
is drawn from it leaving it to handle the quarter of a watt standby
consumption it would normally be expected to deal with from within the
cosy confines of its original plastic enclosure.

This wasn't the only modification I eventually found myself having to
apply. As I upgraded to later MoBos that weren't able to power the cpu vrm
from the 5 volt rail changing the 4 pin 12 volt MoBo connector's status
from "optional" to "mandatory", I had to add a 12v CPU VRM connector
lead. This was followed by the addition of a dual connector SATA power
lead - I think I was happy to carry on using my existing Molex to SATA
power adapters at that stage rather than remove one of the dual molex
leads to make space for a second SATA power cable (things were getting
rather crowded in the cable bundle exit hole by then).

You might question the "wisdumb" of modifying a mini ITX PSU that was
already ancient before I started using it. Part of the answer lies in
the fact that it isn't a standard ATX form factor (although a SFF ATX
PSU [2] slots nicely into its place with very minor fettling work); the
rest lies in its higher than usual efficiency (79% - not quite enough to
qualify as a Bronze 80 PSU, but a damn sight better than the 66 to 70
percent typical of most cheap commodity ATX supplies; testing with
typical cheap commodity ATX PSUs showed a consumption some ten watts
greater than I was obtaining with that venerable mini ITX unit).

Also, unlike the maximum ratings of the cheap 'n' cheerful 300 to 450
watt ATX PSUs, which would be good only for a matter of seconds before
going BANG! (as was often true of the more ambitious 600 watt kit
reviewed in various PC magazines, where they literally did just that
after a mere ten seconds at maximum load), the 145 watt figure was an
Old Skool rating: the unit would carry this loading 24/7 indefinitely,
with a +50% surge rating measured in tens of minutes rather than mere
seconds. It may have seemed underpowered but for the task at hand it was
more than amply specced.

[2] I keep a 270 Watt SFF ATX psu in the spares box to insure against the
day that the original mini-ITX finally expires from old age. The reason
I'm not using it *instead* of the mini ITX psu is because it would make
the NAS box consume an extra two watts. Still and all, as a spare unit,
it's still noticeably more efficient than a typical ATX psu.

When it comes to PSU efficiency, I've looked at *all* the options and
the plain fact is, compared to what I'm using and what I've got in the
"Spares Box", investing in a decent long life Bronze 80 or better 240
watt mini ITX psu will never repay itself within a ten year period.

Otoh, if I were upgrading from a commodity 300 W ATX psu with its
typical 67% efficiency, I probably would see a ROI after a mere 4 or 5
years (but even that is marginal - the Bronze 80 PSU may not even last
that long).

My plan is to run the 145 W mini ITX psu into the ground and then do
likewise with its currently designated replacement. By that point I may
well have obtained another suitable Bronze 80 or better rated SFF ATX
psu from a flea market trader for very little expense, or I'll be faced
with an unavoidable investment in a Bronze 80 or better specced mini ITX
psu which, by then, may well be considered the minimum standard for home
IT kit and therefore free of any premium pricing penalty. Even if I'm
caught short on 'drop-in' replacements, I know I can always bodge up a
temporary fix if needs be (that's the beauty of home built kit :-).

--
Johnny B Good

Mike Tomlinson

Jun 30, 2016, 5:47:39 PM
En el artículo <20160630201...@chronos.eternal-september.org>,
Chronos <use...@chronos.org.uk> escribió:

>http://idle3-tools.sourceforge.net/

Many thanks for that. Stuck a 2TB Green in my Microserver yesterday and
the head unload count was already over 800.

Now disabled.

--
(\_/)
(='.'=) systemd: the Linux version of Windows 10
(")_(")

Adrian Caspersz

Jun 30, 2016, 6:42:10 PM
On 30/06/16 22:46, Mike Tomlinson wrote:
> En el artículo <20160630201...@chronos.eternal-september.org>,
> Chronos <use...@chronos.org.uk> escribió:
>
>> http://idle3-tools.sourceforge.net/
>
> Many thanks for that. Stuck a 2TB Green in my Microserver yesterday and
> the head unload count was already over 800.
>
> Now disabled.
>

Hmm, haven't done it - I've known about the issue for some time but it
seems a hole in the ground fitted my head....

I know that stating the following publicly is going to accelerate the
likelihood that these two discs are going to fail in the next week[1].
But anyway, I've got the following two green drives in a old ReadyNas
Duo v1.

Model: WDC WD20EARS-00MVWB0
Power On Hours 47307 (5.4 years)
Load Cycle Count 1380670

Model: WDC WD20EARX-00PASB0
Power On Hours 34356 (3.9 years)
Load Cycle Count 970498

About 30 load cycles per hour.
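Those per-hour figures can be sanity-checked straight from the SMART
numbers quoted above (rough arithmetic only, using plain awk):

```shell
# Rough check of the "about 30 load cycles per hour" estimate, using
# the Load Cycle Count and Power On Hours values quoted above.
awk 'BEGIN {
  printf "WD20EARS: %.1f cycles/hour\n", 1380670 / 47307
  printf "WD20EARX: %.1f cycles/hour\n", 970498 / 34356
}'
```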


[1] Go to the bookies, place ya bets ...

--
Adrian C

Johnny B Good

Jun 30, 2016, 8:13:19 PM
Don't be too hard on Western Digital. Using an 8 second time-out on
their laptop drives going right back to their IDE models doesn't seem to
present any problems (e.g. 3 and 5 million cycles clocked up without any
other issues). Assuming my sample of two laptop drives is typical, there
is, in the case of laptop drives, the side benefit of reduced risk of
head crashes during high G-Force transient events (an all too common
failure mode with laptops).

However, this automatic head unloading feature doesn't seem to scale
very favourably to the larger desktop drives (and it seems WD must have
understood this since they specified 300 and 600 thousand cycle limits on
life for their commodity and enterprise drives respectively).
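As a back-of-envelope illustration of why the 8 second default scales so
badly against those limits (my arithmetic, not WD's; it assumes a
pathological workload that lets the drive idle just past the time-out
between every access):

```shell
# An 8 second time-out allows up to 3600 / 8 = 450 head unload cycles
# per hour on a lightly loaded drive, so a 300,000 cycle rating could,
# in the worst case, be used up in well under a month of power-on time.
awk 'BEGIN {
  per_hour = 3600 / 8
  printf "%.0f cycles/hour, 300k limit in %.0f hours\n", per_hour, 300000 / per_hour
}'
```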

Another point worth considering is that such excessive head unloading
cycle counts aren't limited to WD product alone. Both Samsung and Hitachi
can have their head unloading time-out values set on a short fuse simply
by setting an aggressive power saving option in their APM (WD it seems
chose to keep it entirely separated from the APM options - it's just a
terrible shame on them that they chose such a short time-out value as a
default).

A few years back when I became concerned over a few UNC errors showing
in the SMART logs of a 3TB Hitachi Cool Spin drive and posted my SMART
logs for opinions, the non-zero MZER count on one of my Spin Points was
pointed out and a closer inspection revealed that the drive had managed
to clock up just over a million head unload cycles (its twin had a mere
168 thousand or so clocked up at that time). Subsequent testing when I
finally retired the drive about a year later, strongly suggested that the
MZER events were the result of the high head unloading count. Ho hum, if
only I'd noticed sooner! :-(

--
Johnny B Good

Johnny B Good

Jun 30, 2016, 8:40:02 PM
On Thu, 30 Jun 2016 20:17:41 +0100, Chronos wrote:

> On Thu, 30 Jun 2016 19:49:03 +0100 Mike Scott
> <usen...@scottsonline.org.uk.invalid> wrote:
>
>> Anyway, the start/stop and load counts were 15305 and 65770. So I'm now
>> using a program 'ataidle' at startup:
>> /usr/local/sbin/ataidle -P 254 /dev/ada0
>
> idle3ctl sets it permanently on the drive on Linux:
>
> http://idle3-tools.sourceforge.net/

Yeah, that's the best way to deal with the problem. In fact, in this
case with *all* WD drive models, it's the *only* Linux way to fix the
problem since the ataidle command can't touch the head unload timer
because WD chose to keep it totally separate from the APM feature list.

IOW, as far as Western Digital HDDs are concerned, no APM level exists
to affect the head unloading timer.

So, there are just two solutions to this problem, WDIDLE3 under DOS or
idle3ctl under Linux, and they both do the same thing: reprogram the head
unload time-out value stored in the drive's firmware. Once reprogrammed
or disabled, it will remain that way regardless of how many PCs you hawk
it around to (e.g. installed into a USB2/eSATA portable drive enclosure).

Mike Tomlinson

Jul 1, 2016, 2:09:44 AM
En el artículo <dtlli1...@mid.individual.net>, Adrian Caspersz
<em...@here.invalid> escribió:

> Load Cycle Count 1380670

eep.

Mind you, if it's still working after >5 years, you've had your money's
worth.

Just checked mine and I see the HL count is still increasing - then I
spotted this:

"Please power cycle your drive off and on for the new setting to be
taken into account. A reboot will not be enough"

Sigh. Now done.

dennis@home

Jul 1, 2016, 5:36:10 AM
On 01/07/2016 01:13, Johnny B Good wrote:

> Another point worth considering is that such excessive head unloading
> cycle counts aren't limited to WD product alone. Both Samsung and Hitachi
> can have their head unloading time-out values set on a short fuse simply
> by setting an aggressive power saving option in their APM (WD it seems
> chose to keep it entirely separated from the APM options - it's just a
> terrible shame on them that they chose such a short time-out value as a
> default).
>

I don't really see why it should be a problem: the drives use a ramp-
loading head, so nothing should ever touch the platter. I wonder if
the ramp wears so it stops working?


Johnny B Good

Jul 1, 2016, 9:25:53 AM
There's obviously some sort of wear mechanism involved, else why specify
a maximum EoL limit figure?

In the early days of 'modern' IDE drives using "Voice Coil" drive with
servo track following, a spring was used to bias the heads to travel
towards the "Landing Zone" in-board of the innermost cylinder position
where they'd be latched into the parking position as a result of power
down (or power loss). The latch was operated by a solenoid to unlatch the
heads by the disk controller as part of its power on initialisation
sequence.

Later on, the need for such a 'parking spring' was eliminated by making
use of the energy stored in the platters to drive the spindle motor as a
braking generator to provide the few seconds of power required to allow
the head servo circuits to *actively* drive the heads to a landing zone
which could now be a parking ramp next to the outer edge of the platter
assembly rather than a special landing zone area close to the spindle.

The big advantage of this being that actual head to platter contact
could be entirely eliminated with a secondary function (taking advantage
of this elimination of head to platter contact) of allowing an additional
way to shave a few hundred milliwatts off the drive's semi-idle
consumption, albeit with a few hundred milliseconds (as opposed to
several seconds in the case of platter spin down/ spin up) delay.

Now, this new active head parking technique when it was used in laptop
drives was immediately seized upon as an additional power saving
mechanism, a feature so prized in the case of battery powered portable
computing as to be genuinely useful. Since the performance penalty was so
slight compared to all the other performance penalties inherent in a
portable computer system, it was used as one of the many unspecified
proprietary power saving design features used to minimise laptop HDD
power consumption.

I've no doubt that WD at least, saw the use of a very short 8 second
time-out as an acceptable compromise in regard of performance with an
important side effect of reducing the risk of head crashes due to the
very real possibility of the portable battery powered computer (laptop or
tablet) being subjected to gross mechanical accelerations (drop events)
as a matter of normal use (if the heads are safely parked gross
mechanical shocks can't cause head crash events).

It would seem that wear was not an issue with laptop scaled head parking
ramps (if my seeing 8 to 10 year old IDE laptop drives still functioning
perfectly even with 3 and 5 million head unload cycles clocked up in the
SMART logs is anything to go by).

Typically (where do you think the idea of a built in user programmable 1
to 15 minute spin down power saving feature came from?), this head unload
power saving technique was transferred to the larger desktop drives 'Lock
Stock and Barrel' (even to the inclusion of the entirely inappropriate 8
second time-out... Fools!!!).

WD's big mistake in transferring this power saving technique from the
laptop drive case to the desktop drive models lay in not re-scaling the
time period range to something more like a 30 seconds minimum out to a 20
minute maximum. At the very least, they should have reprogrammed the
default to the 300 seconds maximum rather than the 8 seconds minimum.

The benefit of an 8 second time-out as a head crash risk reduction
feature would be totally lost on a disk drive expected to be protected
against such events anyway. The desktop cases housing these *desktop*
drives would normally only be exposed to mechanical shock events whilst
being physically relocated when they'd be powered down and the drive
heads safely parked.

I suppose a "Devil's Advocate" could claim there is a case for retaining
the 8 second minimum for when such drives are fitted into portable
external enclosures which are less protected against such mechanical
shock events whilst the drive is in an operational state but I'd rather
forego such a dubious benefit (it's not a guaranteed method to eliminate
the risk of head crash events, just a means of reducing this risk in
laptop usage - i.e. there was a chance (as opposed to no chance) that
serendipity would step in and save your bacon when you did something
stupid).

What beggars belief is that WD, despite all the cries of outrage 5 or 6
years ago, still fails to take even the simplest of steps to remedy the
problem even to this day[1]. Quite frankly, the only thing saving them
from joining Seagate in the "Stunning Cunts" club is the availability of
that WDIDLE3 utility to allow the more savvy computer user to apply the
obvious fix for themselves.

[1] If anyone has evidence to the contrary, please feel free to chip in
and set the record straight. :-)

--
Johnny B Good

Johnny B Good

Jul 1, 2016, 9:45:48 AM
On Fri, 01 Jul 2016 06:55:27 +0100, Mike Tomlinson wrote:

> En el artículo <dtlli1...@mid.individual.net>, Adrian Caspersz
> <em...@here.invalid> escribió:
>
>> Load Cycle Count 1380670
>
> eep.
>
> Mind you, if it's still working after >5 years, you've had your money's
> worth.
>
> Just checked mine and I see the HL count is still increasing - then I
> spotted this:
>
> "Please power cycle your drive off and on for the new setting to be
> taken into account. A reboot will not be enough"
>
> Sigh. Now done.

Wow! I forgot about *that* "Gotcha!". Mind you, a power cycle is
virtually guaranteed when using WD's WDIDLE3 dos based utility. However,
that reminds me of my paranoia over dealing with the million cycle (and
168 thousand cycle) Samsung SpinPoints in my NAS4Free box when changing
the APM settings on those drives to *NO* power savings where I went out
of my way to completely shutdown the NAS and power cycle reboot it
afterwards just to make damn sure the head unload feature had been well
and truly disabled. Subsequent SMART checks revealed that I'd managed to
completely stop these counts from incrementing any further.

Anyway, for those using the Linux version, "idle3ctl", where there may be
no obvious reason to power cycle the drives, this posting is a very
valuable "Heads Up" warning.

I'm afraid I didn't mention the need since thus far, I've only ever used
WDIDLE3, which virtually guarantees such power cycling, so I took it for
granted that this would be true for everyone else, including users of the
idle3ctl utility.

My advice for anyone making APM changes is to check the SMART logs
afterwards to confirm the change of setting (at least in the case of
attempts to reduce or eliminate the head unloading count rate) and be
prepared to completely power cycle the box if the anticipated signs of
change don't show up in the SMART logs.
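A minimal way to pull the relevant figure out of smartmontools output for
such a check might look like this (run here against a captured sample
line so it works anywhere; on a live system you would pipe real
`smartctl -A /dev/sdX` output in instead - the device name is a
placeholder):

```shell
# Extract the raw Load_Cycle_Count value from smartctl -A style output.
# A sample attribute line stands in for a live drive here.
sample='193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       1380670'
echo "$sample" | awk '/Load_Cycle_Count/ { print $NF }'
```

Run it before and after the settings change; if the number keeps climbing
while the drive sits idle, the new setting hasn't taken.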

--
Johnny B Good

Jaimie Vandenbergh

Jul 1, 2016, 10:27:55 AM
Johnny B Good <johnny...@invalid.ntlworld.com> wrote:
>
>
> Wow! I forgot about *that* "Gotcha!". Mind you, a power cycle is
> virtually guaranteed when using WD's WDIDLE3 dos based utility. However,
> that reminds me of my paranoia over dealing with the million cycle (and
> 168 thousand cycle) Samsung SpinPoints in my NAS4Free box when changing
> the APM settings on those drives to *NO* power savings where I went out
> of my way to completely shutdown the NAS and power cycle reboot it
> afterwards just to make damn sure the head unload feature had been well
> and truly disabled. Subsequent SMART checks revealed that I'd managed to
> completely stop these counts from incrementing any further.

All this excitement reminded me to check the counts in my Microservers, and
they're all under 3000. Over half of them are WDs, red and green. I've never
intentionally twiddled them. FreeNAS defaulting to fixing them, I wonder?

Cheers - Jaimie

David

Jul 1, 2016, 11:19:24 AM
Possibly they no longer have that particular firmware setting.

The WD site lists drives which should be "wibble diddled" but it is a
short list.

"This firmware modifies the behavior of the drive to wait longer before
positioning the heads in their park position and turning off unnecessary
electronics. This utility is designed to upgrade the firmware of the
following hard drives: WD1000FYPS-01ZKB0, WD7500AYPS-01ZKB0,
WD7501AYPS-01ZKB0.

CAUTION: Do not attempt to run this software on any hard drives other than
what is listed above."

For wdidle3_1_05.

Jaimie Vandenbergh

Jul 1, 2016, 11:32:21 AM
Aha! My smallest is 2TB so no wonder.

Cheers - Jaimie

Johnny B Good

Jul 1, 2016, 12:11:26 PM
That looks like a list of ancient drive models. WDIDLE3 v1.05 works on
older 44-pin IDE laptop drives just as effectively as it does on the very
recent 6TB Green (and the not so recent 4TB REDs). I suspect that they
were concerned with running it on what might now be considered
prehistoric models at the time and simply forgot to update the notice.

--
Johnny B Good

Johnny B Good

Jul 1, 2016, 12:11:40 PM
On Fri, 01 Jul 2016 14:27:54 +0000, Jaimie Vandenbergh wrote:

Not in the case of Western Digital drives, the head unload timer is
completely independent of the APM settings which is all FreeNAS (and N4F)
have control over. Now, the other brands of HDD, otoh, *can* have this
timer upset via the APM settings, as I found to my cost a few years ago
with those Samsung SpinPoints :-(

Either WD has finally heeded user demands and set the timer to a less
damaging value or else it's just fortuitous that your usage pattern has
been thwarting their propensity to park their heads every 8 seconds.

The only way to be certain about this is to either run the WDIDLE3
utility (requiring a reboot into DOS, and very likely a temporary
adjustment in the CMOS setup to enable IDE mode) or else use idle3ctl
via a shell session at the console or via an SSH login, assuming it can
be run from a plug-in thumb drive.

In view of the fact that the drives require power cycling to effect any
changes of this setting, it's probably easier to shut the server down and
reboot from a UBCD pen drive or CD and run WDIDLE3 from there. You might
need to have each WD drive connected only one at a time (possibly with
any non-WD drives unplugged as well for good measure) when using WDIDLE3.

--
Johnny B Good

Johnny B Good

Jul 1, 2016, 12:20:48 PM
I wouldn't jump to any conclusion based on an out of date warning
notice. :-(

When I checked out a brand new unused 4TB RED nearly 2 years(?) ago, I
doubt anyone could have been more surprised than I to discover it set to
the 8 second default. Then ditto just 9 months ago when I checked out a
brand spanking new 6TB Green (both of which I set to the 300 seconds max).

Believe me, it's safe to use WDIDLE3 on your 2TB drives. I doubt it's
even possible to buy any of the previous models to which that warning
notice may have related.

--
Johnny B Good

dennis@home

Jul 2, 2016, 6:31:07 AM
Synology NAS boxes don't suffer from these load cycles with retail green
drives. I don't know if the drives suffered from it in the first place
as they only live in the NAS.

I have had a drive fail after a few months and WD replaced the 4TB drive
with a 6TB drive; a shame the Synology's simple mirroring doesn't use the
extra space, though.

David

Jul 3, 2016, 10:23:06 AM
On Fri, 01 Jul 2016 13:25:52 +0000, Johnny B Good wrote:

<massive snip>
>
> What beggars belief is that WD, despite all the cries of outrage 5 or 6
> years ago, still fails to take even the simplest of steps to remedy the
> problem even to this day[1]. Quite frankly, the only thing saving them
> from joining Seagate in the "Stunning Cunts" club is the availability of
> that WDIDLE3 utility to allow the more savvy computer user to apply the
> obvious fix for themselves.
>
> [1] If anyone has evidence to the contrary, please feel free to chip in
> and set the record straight. :-)

Just installed and ran idle3ctl under Mint on my two 3TB WD Reds.

If I understand the output correctly both are set to 300.0 seconds.

Once I've configured Pan on Mint then I will cut and paste the results.

David

Jul 3, 2016, 11:59:07 AM
david@MintHTPC ~ $ sudo idle3ctl -g /dev/sdb
Idle3 timer set to 138 (0x8a)
david@MintHTPC ~ $ sudo idle3ctl -g /dev/sdc
Idle3 timer set to 138 (0x8a)
david@MintHTPC ~ $ sudo idle3ctl -g103 /dev/sdc
Idle3 timer set to 300.0s (0x8a)

Hopefully the third test (choosing the output as if from wdidle3 v1.03)
is showing that the time out is 300 seconds.


Cheers


Dave R


--
Mint on HTPC

Johnny B Good

Jul 3, 2016, 1:06:41 PM
On Sun, 03 Jul 2016 14:23:05 +0000, David wrote:

Thanks for that feedback, Dave.

JOOC, I tried running idle3ctl on my Linux Mint 17.1 box to have a look
at the output for my 2TB WD Green (a retiree from the NAS box about a
year back) and got this curious result:

kepler john # idle3ctl -g105 /dev/sdc
Idle3 timer set to 3720.0s (0xfc)

I also got the same result using the -g103 option btw.

This roused my curiosity even further because I was anticipating either
a 300 seconds or disabled result so I trawled my collection of spare HDDs
(mostly Samsungs) to dig out its partner in crime, the other 2TB WD Green
retiree and attached it to the workbench Al Fresco setup for a UBCD
session hosted run of WDIDLE3.

After making a false start, forgetting to reset the SATA mode to non-
RAID (that MoBo's codeword for "IDE compatible"), I discovered I had
disabled the timer on this drive. Taking advantage of the hot plug
feature of SATA, I disconnected it and inserted it into the e-SATA
connected docking station attached to the Linux Mint box and eventually
got the following result:

kepler john # idle3ctl -g103 /dev/sdh
Idle3 timer is disabled

which matched the WDIDLE3 result. I then decided to reconnect it back to
my Al Fresco box and use WDIDLE3's /S option to reprogram it to a 300
seconds time out value which it did just nicely (no problems due to
having unplugged it live from the test system). After verifying the
result using the /R command line switch, I then hot unplugged it and
introduced it back to the docking station to run the following:

kepler john # idle3ctl -g105 /dev/sdh
Idle3 timer set to 300.0s (0x8a)

which confirms that the "300.0s" and "disabled" results *do* match up
with both of the later versions of WDIDLE3's "Reality", leaving the
'impossible' "3720.0s" value as a bit of a conundrum since, according to
the documentation (and testing with larger values for the /S parameter),
300 seconds is the maximum programmable value short of the next larger
value of 'infinity', i.e "disabled".
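For what it's worth, idle3-tools describes the raw timer byte as being in
tenths of a second up to 128 and in 30 second units above that, which
would neatly account for all three readings; here's a little sketch of
that encoding (my reading of the tool's convention, not anything WD
publishes, so treat it as an assumption):

```shell
# Decode a WD idle3 raw timer byte the way idle3-tools interprets it
# (assumed encoding: raw 1-128 = tenths of a second,
#  raw 129-255 = (raw - 128) * 30 seconds).
decode_idle3() {
  if [ "$1" -le 128 ]; then
    awk -v r="$1" 'BEGIN { printf "%.1fs\n", r / 10 }'
  else
    awk -v r="$1" 'BEGIN { printf "%.1fs\n", (r - 128) * 30 }'
  fi
}
decode_idle3 80    # 0x50 -> 8.0s, the infamous factory default
decode_idle3 138   # 0x8a -> 300.0s, as reported by idle3ctl earlier
decode_idle3 252   # 0xfc -> 3720.0s, the 'impossible' value above
```

On that reading, 0xfc is a perfectly legal raw value in the encoding even
though WDIDLE3 itself will only *set* values up to 300 seconds.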

It seems your understanding of the output is correct but as to the
origin of that setting, in view of my experience with that 6TB WD Green
about ten months ago, I rather suspect it might be more to do with the
disk initialisation process employed by your NAS box (I believe these
were used in a NAS box previously - possibly I'm conflating this with
someone else's experience) than down to a change of heart by Western
Digital themselves.

--
Johnny B Good

David

Jul 4, 2016, 6:50:27 AM
The older one was initialised (AFAICR) under W7. The latest was
initialised under Mint.

I don't think that anything took place which would have touched the time
outs in the disc firmware.

From random access to the web I think the problem is more likely to be
found on older drives and more recent Green drives (presumably to bump up
the "green" credentials).

Anyway, very relieved to have rescued the data off the failing Seagate,
which is now looking terminal. From its position on the shelf.

I do note that the high values you are seeing are included in the idle3ctl
documentation - I was puzzling over why these values were shown, which
led me to question the results I was seeing and/or the documentation.

Johnny B Good

Jul 6, 2016, 3:48:39 AM
On Mon, 04 Jul 2016 10:50:26 +0000, David wrote:

> On Sun, 03 Jul 2016 17:06:40 +0000, Johnny B Good wrote:

====snip====

>>
>> It seems your understanding of the output is correct but as to the
>> origin of that setting, in view of my experience with that 6TB WD Green
>> about ten months ago, I rather suspect it might be more to do with the
>> disk initialisation process employed by your NAS box (I believe these
>> were used in a NAS box previously - possibly I'm conflating this with
>> someone else's experience) than down to a change of heart by Western
>> Digital themselves.
>
> The older one was initialised (AFAICR) under W7. The latest was
> initialised under Mint.
>
> I don't think that anything took place which would have touched the time
> outs in the disc firmware.
>
> From random access to the web I think the problem is more likely to be
> found on older drives and more recent Green drives (presumably to bump
> up the "green" credentials).

My experience of the timer setting up to now has been that *every
single* WD drive (including a couple of 8 year old or so 44 pin IDE
laptop units - a 160GB replacement for the original 80GB drive my ten
year old Acer 3660 laptop was supplied with, and a later 250GB upgrade
replacement for that) has revealed itself to be sporting the 8 second
timer value.

The desktop drives (a couple of 2TB Greens, a 4TB Red and the latest 6TB
Green from just 10 months ago) all had been supplied with the 8 second
default time-out applied from brand new. I was eyeing up an old 40 pin IDE
320GB drive unit with a view to interrogating its head unload timer
setting when I was testing the 2TB Green unit, but I couldn't be arsed to
faff around with an 80 wire IDE ribbon cable and the need to power cycle
reboot the whole machine, so I left that thought to 'simmer' on the back
burner for now. If anyone's curious, I'm quite happy to run the test if
they'd like me to.

>
> Anyway, very relieved to have rescued the data of the failing Seagate
> which is now looking terminal. From its position on the shelf.

Ah well, that's Seagates for you. When they go, they go (to paraphrase
an Aldi or Lidl marketing slogan). The head actuator magnets make
extremely powerful 'fridge magnets' I do believe. :-)

>
> I do note that the high values you are seeing are included in the
> idle3ctl documentation - I was puzzling over why these values were shown
> which lead me to question the results I was seeing and/or the
> documentation.

I must take a closer look at the idle3ctl docs to see what that's all
about (I'm afraid to say neither info nor man mentioned 'rogue' values - I
got the same manual/info file in each case). It looks like I'll have to
expend some more google-fu to find the references to 'rogue values' which
you mentioned.

Considering that setting the drive using the WDIDLE3 utility to either
'disabled' or the 300 seconds maximum showed as such when checked using
idle3ctl, the oddball value of 3720.0s that I first saw is a bit of a
mystery to say the least. Perhaps it's a "Water Mark" value that's
interpreted by the drive (and reported by WDIDLE3) as either the 300
seconds maximum value or 'disabled' for Western digital's benefit in
resolving any RMA disputes - just guessing here (and you'd be right to
assume, from my manifest cynicism, that I'm a child of the 50s) :-).

--
Johnny B Good

Mike Scott

Jul 6, 2016, 4:29:36 AM
On 06/07/16 08:48, Johnny B Good wrote:
...
>
> My experience of the timer setting up to now has been that *every
> single* WD drive (including a couple of 8 year old or so 44 pin IDE
> laptop units - a 160GB replacement to the original 80GB drive my ten year
> old Acer 3660 laptop was supplied with and a later 250GB upgrade
> replacement for that) have all revealed themselves to be sporting the 8
> second timer value.
>
> The desktop drives (a couple of 2TB Greens, a 4TB Red and the latest 6TB
> Green from just 10 months ago) all had been supplied with the 8 second
> default time-out applied from brand new. I was eyeing up an old 40 pin IDE
...

Interesting. I've checked our only desktop 3.5" Caviar Blue 500GB drive,
and the load and start/stop counts are around the 3300 mark after about
2 years running (power-on hours 3000). My wife turns it on and off
multiple times per day, so it suggests there's no timer active.

Johnny B Good

Jul 6, 2016, 10:59:20 PM
On Wed, 06 Jul 2016 09:29:35 +0100, Mike Scott wrote:

> On 06/07/16 08:48, Johnny B Good wrote:
> ...
>>
>> My experience of the timer setting up to now has been that *every
>> single* WD drive (including a couple of 8 year old or so 44 pin IDE
>> laptop units - a 160GB replacement to the original 80GB drive my ten
>> year old Acer 3660 laptop was supplied with and a later 250GB upgrade
>> replacement for that) have all revealed themselves to be sporting the 8
>> second timer value.
>>
>> The desktop drives (a couple of 2TB Greens, a 4TB Red and the latest
>> 6TB
>> Green from just 10 months ago) all had been supplied with the 8 second
>> default time-out applied from brand new. I was eyeing up an old 40 pin
>> IDE
> ...
>
> Interesting. I've checked our only desktop 3.5" caviar blue 500Gb drive,
> and the load and start/stop counts are around the 3300 mark after about
> 2 years running (power on hours 3000) . My wife turns it on and off
> multiple times per day, so it suggests there's no timer active.

Assuming a 6 day week for 104 weeks, that works out to an average of 4.8
hours a day. If your missus is rebooting it 5 or 6 times a day, you could
simply be seeing the head unload cycles triggered by power down events
alone.
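Checking that average with awk:

```shell
# 3000 power-on hours spread over 104 six-day weeks:
awk 'BEGIN { printf "%.1f hours/day\n", 3000 / (104 * 6) }'
```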

Otoh, if you're running windows Vista, that poor old drive may never get
a chance to expire even an 8 second head unload timer. :-)

The only way to know for sure is to check it with the WDIDLE3 utility
(it's one of the many such utilities included on the UBCD which can be
burnt to a CD or the iso image used to create a bootable USB pen drive).

I still haven't gotten around to testing that WD 320GB IDE drive yet.

--
Johnny B Good

Mike Scott

Jul 7, 2016, 3:17:45 AM
On 07/07/16 03:59, Johnny B Good wrote:
> On Wed, 06 Jul 2016 09:29:35 +0100, Mike Scott wrote:
>
>> On 06/07/16 08:48, Johnny B Good wrote:
>> ...
>>>
>>> My experience of the timer setting up to now has been that *every
>>> single* WD drive (including a couple of 8 year old or so 44 pin IDE
...
>>> replacement for that) have all revealed themselves to be sporting the 8
>>> second timer value.
>> ...
>>
>> Interesting. I've checked our only desktop 3.5" caviar blue 500Gb drive,
>> and the load and start/stop counts are around the 3300 mark after about
>> 2 years running (power on hours 3000) . My wife turns it on and off
>> multiple times per day, so it suggests there's no timer active.
>
> Assuming a 6 day week for 104 weeks, that works out to an average of 4.8
> hours a day. If your missus is rebooting it 5 or 6 times a day, you could
> simply be seeing the head unload cycles triggered by power down events
> alone.

Exactly the point I was trying to make - the evidence for this
particular WD drive is that there is no timer built into the drive. Cf
your earlier comment that /all/ your own drives have it.

>
> Otoh, if you're running windows Vista, that poor old drive may never get
> a chance to expire even an 8 second head unload timer. :-)

Mint and freebsd all through here, bar a dual boot (mint/XP) lappy, and
one with vista purely to run Garmin's irritatingly windows-only GPS
software. But I digress :-}

>
> The only way to know for sure is to check it with the WDIDLE3 utility
> (it's one of the many such utilities included on the UBCD which can be
> burnt to a CD or the iso image used to create a bootable USB pen drive).
>
> I still haven't gotten around to testing that WD 320GB IDE drive yet.
>


--

Johnny B Good

Jul 7, 2016, 12:14:27 PM
On Thu, 07 Jul 2016 08:17:43 +0100, Mike Scott wrote:

> On 07/07/16 03:59, Johnny B Good wrote:
>> On Wed, 06 Jul 2016 09:29:35 +0100, Mike Scott wrote:
>>
>>> On 06/07/16 08:48, Johnny B Good wrote:
>>> ...
>>>>
>>>> My experience of the timer setting up to now has been that *every
>>>> single* WD drive (including a couple of 8 year old or so 44 pin IDE
> ...
>>>> replacement for that) have all revealed themselves to be sporting the
>>>> 8 second timer value.
>>> ...
>>>
>>> Interesting. I've checked our only desktop 3.5" caviar blue 500Gb
>>> drive,
>>> and the load and start/stop counts are around the 3300 mark after
>>> about 2 years running (power on hours 3000) . My wife turns it on and
>>> off multiple times per day, so it suggests there's no timer active.
>>
>> Assuming a 6 day week for 104 weeks, that works out to an average of
>> 4.8
>> hours a day. If your missus is rebooting it 5 or 6 times a day, you
>> could simply be seeing the head unload cycles triggered by power down
>> events alone.
>
> Exactly the point I was trying to make - the evidence for this
> particular WD drive is that there is no timer built into the drive. Cf
> your earlier comment that /all/ your own drives have it.

Well, I've yet to test that 320GB desktop drive. I may have an answer to
that question ("*Are* all my WD drives 'blessed' with the idle3
feature?") later today.

>
>
>> Otoh, if you're running windows Vista, that poor old drive may never
>> get
>> a chance to expire even an 8 second head unload timer. :-)
>
> Mint and freebsd all through here, bar a dual boot (mint/XP) lappy, and
> one with vista purely to run Garmin's irritatingly windows-only GPS
> software. But I digress :-}

As did I simply to take a 'Pot Shot' at one of MSFT's finest shining
examples. :-)

Funnily enough, the one and only machine I have that's afflicted with
MSFT's 'Finest' (and 'ready to go') may prove useful when I start
experimenting with VMWare's latest free tools to try and fix some issues
I have with a P2V vdmk image of the win2k setup I was running prior to my
replacing it with Linux Mint last year (the result of the latest MoBo
upgrade exhausting the limits of win2k's capabilities).

Dual booting has never held any appeal for me (I have enough problems
choosing what to wear when I get out of bed in the morning). Luckily for
me, PC hardware became powerful enough some five or more years back to
allow a more elegant solution (imo, at least): virtual machine instances
of alternative OSes, which neatly sidestep the boot time decision as to
(paraphrasing an MSFT advertising slogan) "Where do you want to crash
today?"

Dual booting (more like multi booting in a lot of cases these days) is
a useful feature when experimenting with alternative OSen (and if you
normally shut the machine down at the end of each day, you get the
opportunity to decide each following morning or afternoon or evening),
but it's largely limited to working around unreasonably restrictive
"System Requirements", such as (just for a recent example) VMware's
exclusion of even winXP from the list of MSFT OSen supported by its
latest P2V conversion tool (which now appears to be sans *nix support).

I much prefer the virtualisation solution, especially now that "Entry
Level" PC hardware routinely specifies quad core GHz clocked CPUs with
virtualisation support built in and RAM sizes from a minimum of 4GB and
upwards.

Since CPU clock speeds have been stalled at around the 3 to 4 GHz mark
over the past half decade or so, and *all* CPU makers' solution to this
lack of progress on the clock speed front has been to use multiple cores
so their marketing divisions can use the "number of cores times GHz"
equation to push sales, virtualisation has become a very effective way to
utilise all those additional CPU cores as well as free you from the boot
time decision (which, admittedly for most, is an automatic default choice
of their favoured OS, 90 odd percent or more of the time).

When I started to take a more serious interest in 'modern' Linux distros
about 4 years ago (I knew the day would come when my next major hardware
upgrade would finally push win2k beyond its limits, and MSFT had nothing
further to offer that I would be happy to touch without benefit of a ten
foot barge pole), I took a particular interest in virtualisation, in
particular, Oracle's VirtualBox which, like Linux was also free.

At the time, 4 years ago using Ubuntu 12.04 and VBox ver 4.xxx on my
modestly specced test rig, I was quite impressed by the performance I was
seeing running the Quake 2 and Unreal games software in both win2k and
winXP VMs (with win2k, unsurprisingly, being the performance winner) on a
micro ATX board with a modestly specced Nvidia based on-board graphics
adapter.

By the time I was faced with that ultimate hardware upgrade last year, I
was ready to make the leap from MSFT's truly finest example of a windowed
OS to a Linux distro. The choice was merely a matter of sorting out the
details. Ubuntu had gained some rather weird ideas about the desktop GUI
long before then so I knew I had to look for a better 'Modernised Debian'
distro which proved to be Linux Mint as best as I could decide at the
time.

Now, some 15 months on, I feel a lot more 'settled' with my change of
desktop environment. The 'Culture Shock' was some small factor in my
sense of unease, though it could have been much worse had I not grown up
with MSDOS's CLI from the days of MSDOS 3.3, used a windowed desktop GUI
on a routine basis since around 1996/97 with win95osr2, developed BASIC
and then Z80 Assembler programming skills on a ZX80 and then a Transam
Tuscan S100 Bus computer (both built from kits), and sampled Linux
courtesy of the SuSE 6 CD supplied with the Linux For Dummies book given
to me by my BiL in the late 90s. For the reasons given, that unease soon
dissipated into a vague displeasure at the sheer unmitigated stupidity of
the Linux developer community, in particular those involved with the
various desktop environments (it was as if they'd never *ever* had the
privilege of using Windows 2000 Professional to show them where MSFT had
gone so horribly wrong with all their subsequent OSes, which must have
unduly influenced their apparent aping of the Vista/Win7 desktop file
managers).

One remaining 'annoyance' with Linux distros is their inexplicably
dire performance when dealing with SMB/CIFS network shares (BSD seems to
manage this with ease if FreeNAS/NAS4Free is any indicator - I've yet to
test a desktop version of a BSD distro; there aren't too many choices
available compared to the hundreds of Linux distros).

The other major annoyance is the tearing screen effect which afflicts
movie file playback. However, this is a problem more to do with the
graphics chip manufacturers (Nvidia and ATI/AMD) not co-operating as
much as they could with the *nix devs, and in this case there's always
hope that this will change for the better and a driver update will
finally be released to fix the problem. I live in hope (but not with
bated breath).

I've become more used to the differences in the details of using the KDE
desktop and the win2k desktop experience. For example, just a mere month
or so back, I finally discovered that the KDE devs had hidden the right
mouse button drag 'n drop function on the left mouse key, "In Plain
Sight" so to speak. :-)

When it comes to File Managers and desktop user interfaces, the saying,
"There's none so blind as those who *refuse* to see." describes the
situation rather succinctly imho. Still, and all, I live and learn and
the rough edges are beginning to smooth out. After all, with no
preconceived notions to get in the way, who's to say that left click drag
'n' drop *isn't* the more logical method? :-)

--
Johnny B Good

Phil

Jul 8, 2016, 7:30:12 AM
In message <qJ2fz.43746$rI....@fx37.am4>, Johnny B Good
<johnny...@invalid.ntlworld.com> writes
>On Mon, 04 Jul 2016 10:50:26 +0000, David wrote:
>
>> On Sun, 03 Jul 2016 17:06:40 +0000, Johnny B Good wrote:
>
>====snip====
>
> My experience of the timer setting up to now has been that *every
>single* WD drive (including a couple of 8 year old or so 44 pin IDE
>laptop units - a 160GB replacement to the original 80GB drive my ten year
>old Acer 3660 laptop was supplied with and a later 250GB upgrade
>replacement for that) have all revealed themselves to be sporting the 8
>second timer value.
>
> The desktop drives (a couple of 2TB Greens, a 4TB Red and the latest 6TB
>Green from just 10 months ago) all had been supplied with the 8 second
>default time-out applied from brand new.

I've just finished a new build (first in ~8 years, so a few changes to
work through) and the WD Blue 2TB that I bought in June had the timer
set to 8s. I'll have a look at the manufacture date next time I have the
case open.


Then I went back to re-examine the WD drive that I fitted in the current
machine in 2014. That had POH of 12457 and LCC over 200,000. The timer
was set to 12.8 seconds -- I don't know what happened there but I did
have a go at it with WDIDLE3 sometime in the past and obviously didn't
do it right :-( I've now set that timer to 300s and the LCC count seems
to have duly stopped climbing so fast.

--
Phil

David

Jul 8, 2016, 8:30:34 AM
This reminds me that I have some 2.5" ex-laptop drives (WD Black I think)
replaced by SSDs and going into towers.

Not sure if I should be tinkering with them as they don't seem to have the
same problems as the 3.5" drives.

Johnny B Good

Jul 8, 2016, 8:23:23 PM
You may want to check them just the same after you've read the following:


I've just set that 320GB drive up on my test rig (it only took a minute
to fish the drive out of the drawer, grab the spare 80 wire IDE ribbon
cable off the desk and swap the 40 wire optical drive cable out to hitch
the drive to the one and only IDE port) and fired up the wdidle3 utility
using the /r option (which I thought ought to be safe with non idle3
drives); after a few seconds' delay I got an ATA command error report,
indicating that the drive isn't idle3 capable.

This is the first Western digital drive I've checked that doesn't
support the idle3 feature (it's a Western Digital Caviar WD3200
manufactured in March 2005). Curious about how it had fared from
subjecting it to the wdidle3 test, I rebooted and let the bios use the
default boot order (FDD, HDD, USB pen drive) and saw a grub boot loader
error message. Obviously I've used this drive in the past to test a Linux
distro on.

I rebooted with the UBCD pen drive and fired up Parted Magic to do some
more testing. The SMART stats look fine (no sign of a head unload count,
as expected) so I decided to run the quick 2 minute self test which
completed without error about ten minutes ago. I followed this up by
running the 5 minute conveyance test which only now has just completed,
also without error.

From what I can see of the contents, this must be one of the early
Ubuntus I was testing (the mnt folder in the root shows a modified date
of April 2012), probably Ubuntu 12.04. It looks like that drive has spent
most of the past 3 years packed away in the drawer waiting to be re-
purposed. Anyway, it doesn't seem to have come to any harm from its
encounter with wdidle3 which suggests there's no risk of damage testing
pre-idle3 drives (at least if you only test with the /R option).

On disconnecting the 320GB drive at the end of my tests, I checked out
my (very small) stock of working IDE laptop drives and found the 160GB
unit (a WD 1600 BEVE manufactured December 2007) which I've just retested
with the wdidle3 utility which reports the timer as being disabled,
confirming that the idle3 feature wasn't limited to just SATA models (IOW,
having an IDE 40 pin interface is no immunity against the curse of the
idle3 timer default, hence my curiosity over that 320GB desktop drive).

I don't think it's safe to assume that pre Eco-Green models are
necessarily free of the idle3 timer issue so it might be worth checking
any late model IDE desktop drives as well as earlier SATA units before
discounting this possibility.

Using the wdidle3 utility with the /R switch seems a safe enough way to
test drive models which lack this idle3 feature. The alternative is to
use a SMART log reporting tool to check whether or not there is a head
unload cycle count parameter listed.
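The SMART-log alternative mentioned above is easy to script. Here's a minimal sketch that looks for attribute 193 (Load_Cycle_Count) in `smartctl -A` output; the sample text below is illustrative, not captured from a real drive:

```python
# Look for SMART attribute 193 (Load_Cycle_Count) in `smartctl -A` output.
# If the attribute is absent, the drive presumably has no idle3-style
# head unload counter to worry about. Sample text is illustrative only.

def load_cycle_count(smartctl_a_output):
    """Return the raw Load_Cycle_Count, or None if attribute 193 is absent."""
    for line in smartctl_a_output.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[0] == "193" and fields[1] == "Load_Cycle_Count":
            return int(fields[-1])  # RAW_VALUE is the last column
    return None

sample = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  9 Power_On_Hours          0x0032   096   096   000    Old_age   Always       -       3132
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       807
"""

print(load_cycle_count(sample))  # -> 807
```

On a live system you'd feed it the output of `smartctl -A /dev/sdX`; a None result suggests the drive lacks the attribute and, most likely, the idle3 feature.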

Anyhow, there you have it. That 320GB drive *doesn't* have an idle3
timer after all! Question answered!

--
Johnny B Good

David

Jul 9, 2016, 7:42:02 AM
On Sat, 09 Jul 2016 00:23:22 +0000, Johnny B Good wrote:

> On Fri, 08 Jul 2016 12:30:31 +0000, David wrote:
>
<massive snip>
>
> Anyhow, there you have it. That 320GB drive *doesn't* have an idle3
> timer after all! Question answered!


However, not the question I thought I had posed. :-)

As I understand it, the idle time head unload was brought in to protect
2.5" drives in laptops from accidental damage from shocks whilst the head
was flying, and also to reduce power consumption (a benefit when using
battery not mains).

The strategy was a success with (AFAIK) no major reduction in drive life.

For some as yet unclear reason, the same unload timer was incorporated
into 3.5" drives which didn't really need it. Perhaps in the Green drives
to artificially reduce the power consumption at the expense of
reliability? For further unknown reasons this had a much more detrimental
impact on drive life than was observed for 2.5" drives.

So the question is; if 8 second unload time seems to be O.K. on 2.5"
drives in laptops, is there any need to change the value for 2.5" drives
which are moved to desktop boxes?

Rob Morley

Jul 9, 2016, 3:53:56 PM
On 9 Jul 2016 11:42:01 GMT
David <wib...@btintenet.com> wrote:

> So the question is; if 8 second unload time seems to be O.K. on 2.5"
> drives in laptops, is there any need to change the value for 2.5"
> drives which are moved to desktop boxes?
>
There is if you're using it as an always (or mostly) on machine that
may see lots of small but frequent disk access (e.g. running a Torrent
client, home media server or similar) because it will suffer from the
frequent cycling.

Johnny B Good

Jul 9, 2016, 11:12:12 PM
On Sat, 09 Jul 2016 11:42:01 +0000, David wrote:

> On Sat, 09 Jul 2016 00:23:22 +0000, Johnny B Good wrote:
>
>> On Fri, 08 Jul 2016 12:30:31 +0000, David wrote:
>>
> <massive snip>
>>
>> Anyhow, there you have it. That 320GB drive *doesn't* have an idle3
>> timer after all! Question answered!
>
>
> However, not the question I thought I had posed. :-)
>
> As I understand it, the idle time head unload was brought in to protect
> 2.5" drives in laptops from accidental damage from shocks whilst the
> head was flying, and also to reduce power consumption (a benefit when
> using battery not mains).

You and I must think alike[1]. This was almost exactly the same
conclusion I arrived at (but see note [1]) after discovering the idle3
feature in IDE laptop drives that predated its notorious appearance in
those eco-green desktop drives so many years back now (how long ago was
it? Five, six years?).
>
> The strategy was a success with (AFAIK) no major reduction in drive
> life.

That was my conclusion after discovering head unload counts of 3 and 5
MILLION!! cycles with no apparent detriment.

>
> For some as yet unclear reason, the same unload timer was incorporated
> into 3.5" drives which didn't really need it. Perhaps in the Green
> drives to artificially reduce the power consumption at the expense of
> reliability? For further unknown reasons this had a much more
> detrimental impact on drive life than was observed for 2.5" drives.

Again, we seem to be thinking alike (I wonder if my previous postings
over the years on this very subject have been an influential factor in
your perceptions of Western Digital's use of the idle3 function? :-)

I assumed that the reliability of head unloading technology becomes
somewhat compromised when scaled up to the dimensions of a desktop drive.
Since it seems WD were aware of this (witness the EoL rating figures of
300 and 600 thousand head unload cycles), the use of an 8 second time-out
to gain a few hundred mW (platters spinning) idle state power consumption
saving, would seem to have been a cynical abuse designed to help them win
"Eco-Friendliness" kudos and awards from the easily impressed technical
reviewers whose time constraints would lead them to miss the deleterious
effects that weren't going to become apparent for at least another 6 to
12 months.
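A back-of-envelope sketch shows how fast an 8 second timer could eat those EoL ratings on a box with frequent but spaced-out disk access (the one-cycle-per-minute pattern is purely an assumed illustration, not a measurement):

```python
# Back-of-envelope: hours (and days) to reach a head-unload EoL rating,
# assuming a hypothetical one unload/load cycle per minute on a 24/7 box
# left on the 8 second default timer.
def hours_to_eol(eol_cycles, cycles_per_hour=60):
    return eol_cycles / cycles_per_hour

for rating in (300_000, 600_000):
    h = hours_to_eol(rating)
    print(f"{rating:,} cycles: {h:,.0f} h (~{h / 24:.0f} days)")
```

At that assumed rate the 300,000 cycle rating is gone in well under a year, which would fit the "deleterious effects after 6 to 12 months" timeline.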

This sleight of hand did the trick as far as marketing success went but
the cries of outrage from observant sysops, after the honeymoon period
was over, merely persuaded WD to offer their wdidle3 utility rather than
change the default time-out to a more life enhancing setting. Incredibly
this went on for at least the next three years afaics.

Perhaps their legal eagles had advised them not to respond to their
customers' demands (at least not straight away) in case it led to a class
action lawsuit over the issue of "Mis-sold Product" (after all, it would
fail to comply with its eco-friendly specs which would have been an
important part of their customers' purchasing decisions).

>
> So the question is; if 8 second unload time seems to be O.K. on 2.5"
> drives in laptops, is there any need to change the value for 2.5" drives
> which are moved to desktop boxes?
>
I'd say yes, there is a benefit (even if no *urgent* need) to disable or
extend the time-out to 300 seconds. Whilst re-loading the heads is a much
quicker operation than the 3 to 5 seconds required to spin up the
platters on a 2.5 inch drive (7 to 12 seconds on 3.5 inch drives), it
still represents a few dozen milliseconds worth of delay on the initial
access.

Furthermore, if each access is spaced more than 8 seconds apart, you'll
be faced with a noticeable delay each time compared to the more usual 15
to 20 ms whilst the heads are flying over the platters under control of
the track servo controller, possibly as much as 100 milliseconds - I
don't have any published figures to hand for this particular performance
metric but I think this is a reasonable guess.
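For what it's worth, that guess can be turned into a rough hourly cost (all figures here are the estimates from the text above, not published measurements):

```python
# Rough latency cost of the idle3 reload: when accesses are spaced more than
# 8 s apart, each first access pays an assumed ~100 ms head reload instead of
# a typical ~15-20 ms seek (both figures are the guesses from the text).
def extra_latency_s(accesses_per_hour, reload_ms=100.0, seek_ms=17.5):
    return accesses_per_hour * (reload_ms - seek_ms) / 1000.0

print(f"~{extra_latency_s(300):.1f} s added per hour")  # -> ~24.8 s added per hour
```

Not a huge amount of wall-clock time, but every one of those delays lands on the first access after an idle gap, which is exactly when a user is waiting.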

[1] You have this backwards. The power saving aspect was the primary
consideration with the *reduction* in head crash risk in the event of
gross mechanical perturbations through mishandling or aggressively
slammed down laptop lids being a secondary and beneficial side effect.

There were more effective (and expensive) ways to eliminate the risk of
damage through gross mechanical shock events. The reduction in head crash
risk was merely a side effect of a power saving strategy which took
advantage of the improved head parking mechanism (dedicated parking ramps
outboard of the platters replacing the use of a dedicated landing zone on
the platter surface itself in-board of the innermost tracks).

--
Johnny B Good

Johnny B Good

Jul 9, 2016, 11:31:51 PM
Maybe not so much the wear and tear aspect[1] as the reduction in access
speed due to the head loading delay (possibly as much as an extra 100ms).

In a system powered from a mains supply (possibly with a UPS in line)
and in a box that is unlikely to suffer mechanical disturbance whilst
powered up, the use of head unloading to save a few dozen milliwatts per
drive (the *primary* function of head unloading) makes no sense
whatsoever considering the cost in performance terms. When using laptop
drives to substitute for desktop drives, the last thing you want to do is
further degrade their already mediocre performance.

[1] Witness the 3 and 5 MILLION!!! head unload cycle counts I've observed
on two older IDE interfaced WD laptop drives with no evidence of any
problems whatsoever.

--
Johnny B Good

Mike Tomlinson

Jul 9, 2016, 11:46:57 PM
In article <e2jgz.398651$jB.3...@fx34.am4>, Johnny B Good <johnny-
b-g...@invalid.ntlworld.com> wrote:

> I'd say yes, there is a benefit (even if no *urgent* need) to disable or
>extend the time-out to 300 seconds

Does the panel have a view on whether disabling it or setting it to 300s
is best? This would be for a 3.5" WD Green running 24/7 in a NAS with
intermittent access (mainly movie storage).

This drive:

[root@microserver tmp]# smartctl -i /dev/sde

smartctl 5.42 2011-10-20 r3458 [i686-linux-2.6.18-410.el5.centos.plus]

=== START OF INFORMATION SECTION ===
Device Model: WDC WD20EZRX-00D8PB0
Serial Number: WD-WMC4MXXXXXXX
LU WWN Device Id: 5 0014ee 0591dcc3d
Firmware Version: 80.00A80
User Capacity: 2,000,398,934,016 bytes [2.00 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Device is: Not in smartctl database [for details use: -P showall]
ATA Version is: 8
ATA Standard is: ACS-2 (revision not indicated)
Local Time is: Sun Jul 10 04:30:36 2016 BST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

[root@microserver tmp]# smartctl -A /dev/sde | grep "^193"

193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       793

--
(\_/)
(='.'=) systemd: the Linux version of Windows 10
(")_(")

Johnny B Good

Jul 11, 2016, 12:24:31 PM
Ok then, seeing as how you've "Shown me yours", "I'll show you mine." :-)

=== START OF INFORMATION SECTION ===
Model Family: Western Digital Green
Device Model: WDC WD60EZRX-00MVLB1
Serial Number: WD-WXXXXXXXXXXX
LU WWN Device Id: 5 0014ee 20b434348
Firmware Version: 80.00A80
User Capacity: 6,001,175,126,016 bytes [6.00 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: 5700 rpm
Device is: In smartctl database [for details use: -P show]
ATA Version is: ACS-2, ACS-3 T13/2161-D revision 3b
SATA Version is: SATA 3.1, 6.0 Gb/s (current: 3.0 Gb/s)
Local Time is: Mon Jul 11 14:39:18 2016 UTC
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       0
  3 Spin_Up_Time            0x0027   201   201   021    Pre-fail  Always       -       8908
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       12
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x002e   200   200   000    Old_age   Always       -       0
  9 Power_On_Hours          0x0032   096   096   000    Old_age   Always       -       3132
 10 Spin_Retry_Count        0x0032   100   253   000    Old_age   Always       -       0
 11 Calibration_Retry_Count 0x0032   100   253   000    Old_age   Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       12
192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       3
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       807
194 Temperature_Celsius     0x0022   121   114   000    Old_age   Always       -       31
196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0030   200   200   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x0008   200   200   000    Old_age   Offline      -       0

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
Num  Test_Description    Status                   Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed without error       00%        3098        -
# 2  Short offline       Completed without error       00%        2930        -
# 3  Short offline       Completed without error       00%        2762        -
# 4  Short offline       Completed without error       00%        2594        -
# 5  Short offline       Completed without error       00%        2427        -
# 6  Short offline       Completed without error       00%        2259        -
# 7  Short offline       Completed without error       00%        2091        -
# 8  Short offline       Completed without error       00%        1923        -
# 9  Short offline       Completed without error       00%        1755        -
#10  Short offline       Completed without error       00%        1587        -
#11  Short offline       Completed without error       00%        1420        -
#12  Short offline       Completed without error       00%        1252        -
#13  Short offline       Completed without error       00%        1084        -
#14  Short offline       Completed without error       00%         916        -
#15  Short offline       Completed without error       00%         748        -
#16  Short offline       Completed without error       00%         580        -
#17  Short offline       Completed without error       00%         413        -
#18  Short offline       Completed without error       00%        1438        -
#19  Short offline       Completed without error       00%        1270        -
#20  Short offline       Completed without error       00%        1102        -
#21  Short offline       Completed without error       00%         934        -
==================================================================

That's quite obviously referring to the 6TB Green I installed 278 days
and 13 hours ago (the system uptime figure ties in with the purchase
invoice date).

If you take note of the reported PoH figure of 3132 and the 807 head
unload cycles clocked up since then (obviously I chose not to disable
it), it would appear to have averaged 1 event per 3 hours, 52 minutes and
51.7 seconds (a rate of 0.2576628 events per hour).

However, since I know that that drive has *actually* clocked up the 6685
hours calculated from the system uptime, the head unloading rate is
actually just slightly less than half those values (8 hours, 17 minutes
and 1.56 seconds per head unload cycle or 0.120718 events per hour).
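For anyone who wants to check the arithmetic, a couple of lines reproduce both rates from the figures quoted above:

```python
# Apparent vs real head-unload rates for the 6TB Green, using the figures
# quoted above (807 cycles; 3132 reported PoH; ~6685 h of real uptime).
def unload_rate(cycles, hours):
    """Return (events per hour, seconds per event)."""
    return cycles / hours, hours * 3600 / cycles

for label, hours in (("apparent", 3132), ("real", 6685)):
    per_hour, secs = unload_rate(807, hours)
    h, rem = divmod(secs, 3600)
    m, s = divmod(rem, 60)
    print(f"{label}: {per_hour:.7f} events/h, one every {int(h)} h {int(m)} m {s:.1f} s")
    # -> apparent: 0.2576628 events/h, one every 3 h 52 m 51.7 s
    # -> real:     0.1207181 events/h, one every 8 h 17 m 1.6 s
```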

In case you're wondering why I trust the system uptime more than the
reported PoH in the SMART log, just take a close look at the weekly
scheduled short offline test results. The LifeTime hours column (actually
the PoH figure clocked up when each test was run) reveals that every so
often the figure decrements by *about* a thousand hours from the expected
value (strangely, *not* the 1024 hours one might have expected from a
systemic error in a counter register, which one might well expect to be
storing its count as a binary number).

If I seem rather blasé about this "Dorian Gray Syndrome", it's because
I've already gotten well used to this with the 4TB RED I commissioned
nearly 3 years ago. It's currently displaying a PoH figure of just 14860
which a rough calculation indicates is short of its true value by some
10,000 hours! The "193" figure shows 6590 unload cycles (the drive's idle3
time-out was set for 300 seconds). I'll leave you to work out the
apparent and the real head unloading rates for yourself.

It was quite a surprise to discover that a later, larger capacity and
different model Western Digital HDD should suffer exactly the same DGS
symptoms as that 4TB WD RED exhibited (and still exhibits). The thing is,
when I posted these observations to this NG about 2 1/2 years ago, I
never thought I'd land up answering my own question, "Has anyone else
seen this peculiar behaviour?". I suppose if you want a job doing well,
you have to do it yourself. :-(

Oh, before I forget (and in case the conclusion isn't already obvious),
I'd recommend the 300 second time-out setting rather than completely
disabling it.

Other manufacturers also use a head unload time-out, normally a much
more sane value than WD's 8 second default. However, the big difference
here is that this time-out setting can be altered in a proprietary way by
changes made in the APM settings (WD's APM makes no changes to the idle3
timer value).

If you're going to try more aggressive power savings via the APM
interface, be warned that you may end up matching the insanity of WD's
default setting, as an unfortunate experience with a 2TB Samsung
SpinPoint demonstrated a couple or three years back: it managed to top
the ONE MILLION mark as a result of my experimenting with various APM
settings a year or so earlier, before I finally spotted just how high the
head unload count had gone. :-( If you do change APM settings on a non-WD
drive, keep an eye on the head unload counter afterwards to make sure you
haven't accidentally set it into WD self destruct mode.
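One simple way to "keep an eye on" the counter is to snapshot the raw 193 value twice, some hours apart, and project a daily rate; a minimal sketch (the counts and interval below are made-up illustrations, not measurements):

```python
# Project a daily head-unload rate from two Load_Cycle_Count snapshots
# taken some hours apart (the values here are illustrative only).
def cycles_per_day(count_then, count_now, hours_between):
    return (count_now - count_then) * 24 / hours_between

rate = cycles_per_day(count_then=120_400, count_now=120_640, hours_between=6)
print(f"{rate:.0f} cycles/day")  # -> 960 cycles/day
```

Anything much above a handful of cycles per day on a mains-powered box would suggest the APM change has re-enabled aggressive unloading.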

--
Johnny B Good

Mike Tomlinson

Jul 13, 2016, 10:00:45 AM
In article <2LPgz.687908$WR.2...@fx43.am4>, Johnny B Good <johnny-
b-g...@invalid.ntlworld.com> wrote:

> Ok then, seeing as how you've "Shown me yours", "I'll show you mine." :-)

oo-er, missus :)

[chomp]

> Oh, before I forget (and in case the conclusion isn't already obvious),
>I'd recommend the 300 second time-out setting rather than completely
>disabling it

noted, thanks.

As the drive is in a NAS that's on 24/7, and which also contains four
2TB HGST Ultrastars not exactly noted for their power efficiency, you may
guess I'm not that bothered about power saving. Long-term reliability
is more important to me.

I'll leave the head unload timer disabled. Time will tell.

Thanks again for taking the trouble to reply, Johnny.