
NAND flash misery


Vladimir Vassilevsky

Jun 27, 2008, 12:10:29 PM

Guess how many bad blocks are typical for NAND flash of several GB
capacity? As many as 2 percent! There can be whole areas of
hundreds of megabytes of contiguous bad cells, as well as randomly
scattered ones.

It is possible to run an extensive read/write test to find most of
the unreliable blocks, but it takes many hours.
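
For illustration, here is a minimal sketch of such a test. The flash is
simulated in RAM so the sketch is self-contained; on real hardware the two
access functions would talk to the device, the sizes would be the real
geometry, and the whole device would be scanned (which is why it takes
hours). All names and sizes are invented for the sketch.

#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define NUM_BLOCKS 16            /* toy value; real parts have thousands */
#define BLOCK_SIZE 2048          /* toy value; real erase blocks: 32-256KB */

static uint8_t sim_flash[NUM_BLOCKS][BLOCK_SIZE];

static void flash_write_block(int blk, const uint8_t *buf)
{
    memcpy(sim_flash[blk], buf, BLOCK_SIZE);
    if (blk == 5)                       /* injected marginal cell */
        sim_flash[blk][100] ^= 0x01;
}

static void flash_read_block(int blk, uint8_t *buf)
{
    memcpy(buf, sim_flash[blk], BLOCK_SIZE);
}

int main(void)
{
    static uint8_t wr[BLOCK_SIZE], rd[BLOCK_SIZE];
    for (int blk = 0; blk < NUM_BLOCKS; blk++) {
        int bad = 0;
        /* alternate 0x55/0xAA to exercise both bit polarities */
        for (int pass = 0; pass < 2 && !bad; pass++) {
            memset(wr, pass ? 0xAA : 0x55, BLOCK_SIZE);
            flash_write_block(blk, wr);
            flash_read_block(blk, rd);
            bad = memcmp(wr, rd, BLOCK_SIZE) != 0;
        }
        if (bad)
            printf("block %d suspect\n", blk);
    }
    return 0;
}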

I didn't encounter this problem until we started to use the high
capacity CF cards. Bad blocks were very rare on cards of 1GB
and below. Since the flash itself is hidden behind the IDE interface and
a compatible file system, and the read/write performance is critical, it
is generally impossible to apply an error correction scheme.

I was under the impression that flash is more reliable than HDD; now I see
that it is not so. Do you know how reliable the IDE flash drives are?


Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant
http://www.abvolt.com

Didi

Jun 27, 2008, 12:29:32 PM
Vladimir Vassilevsky wrote:
> ...

>
> I was under the impression that flash is more reliable than HDD; now I see
> that it is not so. Do you know how reliable the IDE flash drives are?
>

Vladimir,
if flash were a viable and reliable replacement for HDDs, it would have
happened years ago and the costs would have come down. They are
not, and given their limited number of write cycles they are bound to
stay out of the way of normal disks (which have achieved an amazing
level of performance).
Thanks for posting the details you measured; now I'll know not to buy
a 2G SD card for my camera (not that I need any more than the 1G I
have now, I have never used > 1/3 of its capacity anyway :-) ).

Didi

------------------------------------------------------
Dimiter Popoff Transgalactic Instruments

http://www.tgi-sci.com
------------------------------------------------------
http://www.flickr.com/photos/didi_tgi/sets/72157600228621276/

Original message: http://groups.google.com/group/comp.arch.embedded/msg/e9e96546454a52ab?dmode=source

cs_po...@hotmail.com

Jun 27, 2008, 12:55:33 PM
On Jun 27, 12:10 pm, Vladimir Vassilevsky <antispam_bo...@hotmail.com>
wrote:

> Guess how many bad blocks are typical for NAND flash of several GB
> capacity? As many as 2 percent! There can be whole areas of
> hundreds of megabytes of contiguous bad cells, as well as randomly
> scattered ones.

I thought the IDE interface was supposed to hide that from the host by
mapping in spares?

Isn't it also supposed to do wear leveling behind your back?

cs_po...@hotmail.com

Jun 27, 2008, 12:59:20 PM
On Jun 27, 12:29 pm, Didi <d...@tgi-sci.com> wrote:

> Vladimir,
> if flash were a viable and reliable replacement for HDDs, it would have
> happened years ago and the costs would have come down. They are
> not, and given their limited number of write cycles they are bound to
> stay out of the way of normal disks (which have achieved an amazing
> level of performance).

Flash is at the moment becoming a viable replacement for many
applications, both high end and low end - witness flash disks turning
up in everything from servers, to high end ultraportable notebooks, to
low end ones like the EeePC and even cheaper competitors.

This wasn't reasonable until the most recent generations of devices
started beating the performance and price point of the 1.8" mechanical
drives.

I don't necessarily think flash will be a replacement for large, cheap
mechanical drives, but for applications that only need a few GB, or
for applications where space and weight count and 32-64 GB is enough,
it's already gaining market penetration.

Write cycles could be an issue, but many of the ultraportable gadgetry
applications will see system replacement before that happens. And the
replacement will probably be 2-4x the GB/$ of the original.

Vladimir Vassilevsky

Jun 27, 2008, 1:10:17 PM

cs_po...@hotmail.com wrote:


>> Guess how many bad blocks are typical for NAND flash of several GB
>> capacity? As many as 2 percent! There can be whole areas of
>> hundreds of megabytes of contiguous bad cells, as well as randomly
>> scattered ones.
>
> I thought the IDE interface was supposed to hide that from the host by
> mapping in spares?

It clearly does that, but only to a limited extent. Also, when a file
gets corrupted because of a bad block, it is too late to remap it.

> Isn't it also supposed to do wear leveling behind your back?

There are a whole lot of things that the internal controller could
probably do, but we can only guess at what it actually does.

John Devereux

Jun 27, 2008, 1:37:30 PM
Vladimir Vassilevsky <antispa...@hotmail.com> writes:

> cs_po...@hotmail.com wrote:
>
>
>>> Guess how many bad blocks are typical for NAND flash of several GB
>>> capacity? As many as 2 percent! There can be whole areas of
>>> hundreds of megabytes of contiguous bad cells, as well as randomly
>>> scattered ones.
>>
>> I thought the IDE interface was supposed to hide that from the host by
>> mapping in spares?
>
> It clearly does that, but only to a limited extent. Also, when a file
> gets corrupted because of a bad block, it is too late to remap it.

Just because the device is manufactured with bad blocks does not
*automatically* mean that more will follow. (At least you have not
demonstrated that.)

It's not like a load of bad blocks appearing on a magnetic drive,
where you know the whole drive is probably on the way out.

>> Isn't it also supposed to do wear leveling behind your back?
>
> There are a whole lot of things that the internal controller could
> probably do, but we can only guess at what it actually does.

It's the same for conventional disks isn't it?

--

John Devereux

Jim Stewart

Jun 27, 2008, 2:35:09 PM
Didi wrote:
> Vladimir Vassilevsky wrote:
>> ...
>>
>> I was under the impression that flash is more reliable than HDD; now I see
>> that it is not so. Do you know how reliable the IDE flash drives are?
>>
>
> Vladimir,
> if flash were a viable and reliable replacement for HDDs, it would have
> happened years ago and the costs would have come down. They are
> not, and given their limited number of write cycles they are bound to
> stay out of the way of normal disks (which have achieved an amazing
> level of performance).
> Thanks for posting the details you measured; now I'll know not to buy
> a 2G SD card for my camera (not that I need any more than the 1G I
> have now, I have never used > 1/3 of its capacity anyway :-) ).

It also explains why the 8GB Photo Hard Drive I bought for my Nikon is
rock-solid, while I've had many intermittent and non-reproducible
problems with larger flash cards.

robert...@yahoo.com

Jun 27, 2008, 5:56:37 PM
On Jun 27, 11:10 am, Vladimir Vassilevsky <antispam_bo...@hotmail.com>
wrote:

> Guess how many bad blocks are typical for NAND flash of several GB
> capacity? As many as 2 percent! There can be whole areas of
> hundreds of megabytes of contiguous bad cells, as well as randomly
> scattered ones.


FWIW, for a 4GB flash device, 2% would be 80MB.

You'll also notice that most (all?) flash manufacturers follow the
"1GB=1,000,000,000 bytes" rule common for disk drives. That leaves them
plenty of space for manufacturing defects and wear leveling: 6.8%, in
fact, since (4*2^30 - 4*10^9) / (4*2^30) = 0.069.

Dombo

Jun 28, 2008, 5:48:26 AM
Didi wrote:

> Vladimir Vassilevsky wrote:
>> ...
>>
>> I was under the impression that flash is more reliable than HDD; now I see
>> that it is not so. Do you know how reliable the IDE flash drives are?
>>
>
> Vladimir,
> if flash were a viable and reliable replacement for HDDs, it would have
> happened years ago and the costs would have come down. They are
> not, and given their limited number of write cycles they are bound to
> stay out of the way of normal disks (which have achieved an amazing
> level of performance).

When the platter densities of (mechanical) HDDs went up, at a certain
point HDD manufacturers had to resort to error correction schemes to
obtain reliable operation. A modern HDD would be unusable without ECC.
It appears that high density flash is going in the same direction.

David Brown

Jun 28, 2008, 6:28:18 AM

NAND flash always has defects in manufacturing - the devices are
designed to cope with a certain level of faults to make manufacturing
cheaper (the same applies to many other types of chips, and hard disks).
Each sector in NAND has extra space for error correction and detection
(IIRC, 512 byte sectors are actually 528 bytes in size). Bad blocks can
be detected and marked during manufacture and testing, and blocks that
go bad (due to wearing out) are detected in use and the data moved to
different blocks.
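
As a sketch of the manufacture-time marking: on small-page NAND the usual
convention (it varies by vendor and page size, so treat the offset as an
assumption) is that a factory-bad block reads non-0xFF at byte 5 of the
spare area of its first page. The scan below simulates the spare areas in
RAM to stay self-contained:

#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define NUM_BLOCKS 8
#define SPARE_SIZE 16            /* spare bytes per 512-byte page */
#define BAD_MARK_OFFSET 5        /* common small-page convention */

/* simulated spare areas; on real hardware this is a raw read of the
   spare bytes of each block's first page */
static uint8_t spare[NUM_BLOCKS][SPARE_SIZE];

int main(void)
{
    memset(spare, 0xFF, sizeof spare);    /* erased state */
    spare[3][BAD_MARK_OFFSET] = 0x00;     /* factory-marked bad block */

    for (int blk = 0; blk < NUM_BLOCKS; blk++)
        if (spare[blk][BAD_MARK_OFFSET] != 0xFF)
            printf("block %d factory-marked bad\n", blk);
    return 0;
}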

The same thing is done with hard disks - the controller detects bad
blocks, and re-maps them. There are a few differences, however - on
hard disks, you get bad blocks in manufacturing but it is rare that a
good block goes bad in use. With flash, the controller can almost
always spot a bad block and recover the data (since it's normally a
single bit failure, the ECC will fix it), while on a hard disk you lose
data. And on flash, a remapping makes no difference to performance - on
a hard disk, it's equivalent to file fragmentation.

CF cards and other earlier flash devices are not that great at wear
levelling and bad block handling (that's one of the reasons for
flash-specific file systems like YAFFS and JFFS2). Modern IDE, SATA and
SAS flash drives are far better. Good manufacturers quote MTBF numbers
that are orders of magnitude higher than for hard disks, and wear is no
longer a practical issue for larger flash disks (I've seen flash disks
spec'ed for *continuous* 20 MB/s writes for years). See also
<http://wiki.eeeuser.com/ssd_write_limit> - a 4 GB Eee PC disk should be
fine for a normal user for 25 years. And since wear is levelled across
a disk, a 128 GB disk will survive 32 times as much use for the same time.

Vladimir Vassilevsky

Jun 28, 2008, 9:09:06 AM

"David Brown" <da...@westcontrol.removethisbit.com> wrote in message
news:486611d3$0$14988$8404...@news.wineasy.se...

> Vladimir Vassilevsky wrote:
> >
> > Guess how many bad blocks are typical for NAND flash of several GB
> > capacity? As many as 2 percent! There can be whole areas of
> > hundreds of megabytes of contiguous bad cells, as well as randomly
> > scattered ones.
> >
> > It is possible to run an extensive read/write test to find most of
> > the unreliable blocks, but it takes many hours.
> >
> > I didn't encounter this problem until we started to use the high
> > capacity CF cards. Bad blocks were very rare on cards of 1GB
> > and below. Since the flash itself is hidden behind the IDE interface and
> > a compatible file system, and the read/write performance is critical, it
> > is generally impossible to apply an error correction scheme.
> >
> > I was under the impression that flash is more reliable than HDD; now I see
> > that it is not so. Do you know how reliable the IDE flash drives are?
> >
>
> NAND flash always has defects in manufacturing - the devices are
> designed to cope with a certain level of faults to make manufacturing
> cheaper (the same applies to many other types of chips, and hard disks).
> Each sector in NAND has extra space for error correction and detection
> (IIRC, 512 byte sectors are actually 528 bytes in size). Bad blocks can
> be detected and marked during manufacture and testing, and blocks that
> go bad (due to wearing out) are detected in use and the data moved to
> different blocks.

The utterly bad blocks are detected at manufacturing; however, there is a
bunch of unreliable blocks that takes hours of testing to discover. If a
bad block is detected in use, it means that the data is already lost.
It is too late to hide it by remapping.

> CF cards and other earlier flash devices are not that great at wear
> levelling and bad block handling (that's one of the reasons for
> flash-specific file systems like YAFFS and JFFS2).

The actual erase block size in NAND flash is something like 32/64/128/256KB,
bigger for the higher capacity devices. What this implies: any write
operation through the IDE interface is actually a read - copy - erase -
modify - write at the controller level. The other consequence of that is
the speed penalty for writes misaligned to the erase block size. Since
this kitchen is hidden behind IDE, there is no point in using YAFFS or JFFS or
such. A disk cache with 32k blocks makes a lot of sense, though.
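
A sketch of that read - copy - erase - modify - write cycle, with the flash
simulated as a plain array (geometry and names are assumptions, not a real
controller): a 512-byte sector write landing inside a 128KB erase block
physically rewrites the whole block, i.e. 256 times the payload, which is
where the misalignment penalty comes from.

#include <stdint.h>
#include <string.h>

#define ERASE_BLOCK (128u * 1024u)
#define SECTOR 512u

static uint8_t device[4u * ERASE_BLOCK];   /* simulated flash array */
static uint8_t shadow[ERASE_BLOCK];        /* controller RAM buffer */

static void write_sector(uint32_t lba, const uint8_t *data)
{
    uint32_t base = (lba * SECTOR) / ERASE_BLOCK * ERASE_BLOCK;
    uint32_t off  = (lba * SECTOR) % ERASE_BLOCK;

    memcpy(shadow, device + base, ERASE_BLOCK);  /* read whole block */
    memset(device + base, 0xFF, ERASE_BLOCK);    /* erase */
    memcpy(shadow + off, data, SECTOR);          /* modify one sector */
    memcpy(device + base, shadow, ERASE_BLOCK);  /* program whole block */
    /* 512 bytes requested, 128KB physically rewritten: 256x overhead */
}

int main(void)
{
    uint8_t sector[SECTOR] = { 0 };
    write_sector(7, sector);     /* lands 3.5KB into erase block 0 */
    return 0;
}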


Vladimir Vassilevsky
DSP and Mixed Signal Consultant
www.abvolt.com


David Brown

Jun 28, 2008, 12:08:48 PM

The point of ECC - Error checking and *correcting* - is that slightly
bad blocks do not lead to lost data. The most common problems in flash
blocks (excluding any totally failed blocks found in manufacturing) are
single-bit errors - bits that can't erase or program properly. These
single-bit errors do not lead to data loss, and the flash controller can
easily detect and correct them. It is even possible that the controller
will continue using a block with bad bits, and will not disable the
block until a certain number of bad bits have been found (I don't know
what error rates are used here in practice).
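
To make the "syndrome points at the failing bit" idea concrete, here is a
toy single-error-correcting code built on the same principle as the
per-sector line/column parity ECC in NAND (real devices protect 256 or 512
data bytes with 3 spare bytes; this sketch shrinks the buffer and invents
the layout to stay readable):

#include <stdint.h>
#include <stdio.h>

#define NBYTES 32                /* toy sector */

/* XOR of (position+1) of every set bit, plus overall parity: for a
   single flipped bit, stored^recomputed recovers the bit's address */
static uint32_t ecc_calc(const uint8_t *d)
{
    uint32_t xorpos = 0, ones = 0;
    for (uint32_t i = 0; i < NBYTES * 8; i++)
        if (d[i / 8] & (1u << (i % 8))) {
            xorpos ^= i + 1;     /* +1 so bit 0 has a nonzero address */
            ones++;
        }
    return (xorpos << 1) | (ones & 1);
}

/* returns -1 if clean, else the bit position it corrected in place */
static int ecc_fix(uint8_t *d, uint32_t stored)
{
    uint32_t now = ecc_calc(d);
    if (now == stored)
        return -1;
    uint32_t pos = ((now ^ stored) >> 1) - 1;   /* single-bit syndrome */
    d[pos / 8] ^= 1u << (pos % 8);
    return (int)pos;
}

int main(void)
{
    uint8_t data[NBYTES] = "NAND sectors carry spare ECC";
    uint32_t ecc = ecc_calc(data);   /* would live in the spare area */
    data[3] ^= 0x10;                 /* simulate one stuck bit */
    printf("corrected bit %d\n", ecc_fix(data, ecc));   /* bit 28 */
    return 0;
}

Like the real per-sector code, this only guarantees correcting a single
flipped bit; a block that develops more has to be retired.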

>> CF cards and other earlier flash devices are not that great at wear
>> levelling and bad block handling (that's one of the reasons for
>> flash-specific file systems like YAFFS and JFFS2).
>
> The actual erase block size in NAND flash is something like 32/64/128/256KB,
> bigger for the higher capacity devices. What this implies: any write
> operation through the IDE interface is actually a read - copy - erase -
> modify - write at the controller level. The other consequence of that is
> the speed penalty for writes misaligned to the erase block size. Since
> this kitchen is hidden behind IDE, there is no point in using YAFFS or JFFS or
> such. A disk cache with 32k blocks makes a lot of sense, though.
>

YAFFS and JFFS are designed for when you have direct access to the flash.
When you are using a controller that handles the wear levelling and
block placement (such as for CF cards and IDE/SAS/SATA controllers), you
should not use flash-specific file systems of that sort.

Didi

Jun 28, 2008, 4:12:37 PM

True, but HDDs don't wear out with writing - and flash does.
This is a major advantage flash does not promise to catch up
with - at least for the time being.

Habib Bouaziz-Viallet

Jun 29, 2008, 2:39:32 AM
On Fri, 27 Jun 2008 11:10:29 -0500, Vladimir Vassilevsky wrote:

> Since the flash itself is hidden behind the IDE interface and
> a compatible file system, and the read/write performance is critical, it
> is generally impossible to apply an error correction scheme.

"... generally impossible to apply an error correction scheme ..."
Are you really a serious guy or you comes here on subjects that you do not
master?

I will try to make the kind of "Vlad response" :
Is it a hobbyist project ? I think this is. mmmhh, Let see ... who cares
about the safety of hobbyists projects ?

>
> Vladimir Vassilevsky
> DSP and Mixed Signal Design Consultant http://www.abvolt.com

--
HBV

David Brown

Jun 29, 2008, 7:04:21 AM
Didi wrote:
> Dombo wrote:
>> Didi wrote:
>>> Vladimir Vassilevsky wrote:
>>>> ...
>>>>
>>>> I was under the impression that flash is more reliable than HDD; now I see
>>>> that it is not so. Do you know how reliable the IDE flash drives are?
>>>>
>>> Vladimir,
>>> if flash were a viable and reliable replacement for HDDs, it would have
>>> happened years ago and the costs would have come down. They are
>>> not, and given their limited number of write cycles they are bound to
>>> stay out of the way of normal disks (which have achieved an amazing
>>> level of performance).
>> When the platter densities of (mechanical) HDDs went up, at a certain
>> point HDD manufacturers had to resort to error correction schemes to
>> obtain reliable operation. A modern HDD would be unusable without ECC.
>> It appears that high density flash is going in the same direction.
>
> True, but HDDs don't wear out with writing - and flash does.
> This is a major advantage flash does not promise to catch up
> with - at least for the time being.
>

HDDs wear out through use. The lifetime is roughly dependent on the
time the hard disk has been powered up, and how much the head is moved.
It's thus fairly independent of the size. Flash wears out through
erase-write cycles on blocks. So the more blocks you have to spread the
wear, the longer the lifetime, and the more you read rather than write,
the longer the lifetime. So as flash drives get bigger, they are
surpassing HDDs for reliability and lifetime. For common desktop usage,
a 32 GB flash disk will probably far outlast a typical hard disk - with
256 GB and bigger flash disks, even high quality hard disks are no
longer competitive on reliability and lifetime for real applications.

The big issue is the cost per GB - hard disks are still much, much
cheaper. A second issue is speed - standard hard disks are
significantly faster than standard flash disks. But that will change -
flash speeds are easily scaled (just use several devices in parallel)
once the price is right.

dalai lamah

Jun 29, 2008, 11:06:25 AM
One fine day, Vladimir Vassilevsky typed:

> Guess how many bad blocks are typical for NAND flash of several GB
> capacity? As many as 2 percent!

If with "typical" you mean "the grade used for low-cost consumer
electronics", you are right. But NAND normally come at least in two or
three different grade options: no bad blocks, 2% initial bad blocks without
dynamic bad blocks (until the minimum number of erase cycles), 2% bad block
with dynamic bad blocks, etc.

> I was under the impression that flash is more reliable than HDD; now I see
> that it is not so. Do you know how reliable the IDE flash drives are?

I believed that solid-state drives would have taken over a lot more easily,
too. Instead the good old rotating junk continues to resist very well,
except in very hot, very vibration-prone, or very space-constrained
applications.

--
emboliaschizoide.splinder.com

Vladimir Vassilevsky

Jun 29, 2008, 10:19:32 AM

"David Brown" <da...@westcontrol.removethisbit.com> wrote in message
news:48676bc8$0$14992$8404...@news.wineasy.se...

> HDDs wear out through use. The lifetime is roughly dependent on the
> time the hard disk has been powered up, and how much the head is moved.

An HDD has one big problem: its lifetime tends to come to an end when it
falls on the floor.
Yes, there are special HDDs, accelerometers, suspensions and such;
however, that makes HDDs much less attractive than the flash alternatives.
Besides, a flash card is easy to swap.

> It's thus fairly independent of the size.

But the small and light HDD is more likely to survive contact with the
floor :)

> Flash wears out through
> erase-write cycles on blocks. So the more blocks you have to spread the
> wear, the longer the lifetime, and the more you read rather than write,
> the longer the lifetime.

The flash write endurance (as stated by the manufacturers) is on the order
of millions of cycles. This is more than enough for many applications; there
is really no point in bothering about wear leveling. The problem I am
facing is the infant mortality of the unreliable cells.

> So as flash drives get bigger, they are
> surpassing HDDs for reliability and lifetime. For common desktop usage,
> a 32 GB flash disk will probably far outlast a typical hard disk - with
> 256 GB and bigger flash disks, even high quality hard disks are no
> longer competitive on reliability and lifetime for real applications.
>
> The big issue is the cost per GB - hard disks are still much much
> cheaper. A second issue is speed - standard hard disks are
> significantly faster than standard flash disks.

A common 300x 8GB Lexar CF card from Wal-Mart sustains read/write speeds
on the order of 25...30MB/s. Those are the numbers that I measured; the
upper limit is because of the hardware, not the flash.

> But that will change -
> flash speeds are easily scaled (just use several devices in parallel)
> once the price is right.

Falling on the floor, weight and size, power consumption, instant readiness
and heat dissipation are problems of HDDs which flash doesn't have.

Vladimir Vassilevsky
DSP and Mixed Signal Consultant
www.abvolt.com


Vladimir Vassilevsky

Jun 29, 2008, 10:53:28 AM

"Habib Bouaziz-Viallet" <ha...@rigel.systems> wrote in message
news:48672e24$0$21859$426a...@news.free.fr...

> On Fri, 27 Jun 2008 11:10:29 -0500, Vladimir Vassilevsky wrote:
>
> > Since the flash itself is hidden behind the IDE interface and
> > a compatible file system, and the read/write performance is critical, it
> > is generally impossible to apply an error correction scheme.
> "... generally impossible to apply an error correction scheme ..."
> Are you really a serious guy, or do you come here on subjects that you do not
> master?

You blockheads should bow to the opportunity of receiving a lesson of
wisdom.

> I will try to make the kind of "Vlad response":
> Is it a hobbyist project? I think it is. mmmhh, let's see ... who cares
> about the safety of hobbyist projects?

Good comment, but where is the meat?
Would you suggest a solution which:

1) Is small, low power, robust and swappable.
2) Sustains read/write speeds of at least 5MB/s.
3) Is compatible with standard flash card readers regardless of OS.


Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant

www.abvolt.com


David Brown

Jun 29, 2008, 11:59:03 AM
Vladimir Vassilevsky wrote:
> "David Brown" <da...@westcontrol.removethisbit.com> wrote in message
> news:48676bc8$0$14992$8404...@news.wineasy.se...
>
>> HDDs wear out through use. The lifetime is roughly dependent on the
>> time the hard disk has been powered up, and how much the head is moved.
>
> An HDD has one big problem: its lifetime tends to come to an end when it
> falls on the floor.
> Yes, there are special HDDs, accelerometers, suspensions and such;
> however, that makes HDDs much less attractive than the flash alternatives.
> Besides, a flash card is easy to swap.
>
>> It's thus fairly independent of the size.
>
> But the small and light HDD is more likely to survive contact with the
> floor :)
>

If you are planning on dropping your drive, flash certainly has the
advantage!

>> Flash wears out through
>> erase-write cycles on blocks. So the more blocks you have to spread the
>> wear, the longer the lifetime, and the more you read rather than write,
>> the longer the lifetime.
>
> The flash write endurance (as stated by the manufacturers) is on the order
> of millions of cycles. This is more than enough for many applications; there
> is really no point in bothering about wear leveling. The problem I am
> facing is the infant mortality of the unreliable cells.
>

Write endurance is not normally in the millions for NAND flash - perhaps
you are thinking of NOR flash, which is inherently more reliable (but
costs more per bit)? With wear levelling, write endurance is not a big
issue for modern flash disks - without it, you would wear out parts of
your disk (such as the FAT on FAT-formatted drives).
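
A minimal sketch of the dynamic wear-levelling decision (data structures
invented for the illustration): instead of rewriting a hot sector in place,
the controller directs it at the free erase block with the lowest erase
count, so hot data such as the FAT migrates across the whole device.

#include <stdint.h>
#include <stdio.h>

#define BLOCKS 8

static uint32_t erase_count[BLOCKS];   /* persisted per-block wear */
static int      is_free[BLOCKS];

static int pick_least_worn_free(void)
{
    int best = -1;
    for (int b = 0; b < BLOCKS; b++)
        if (is_free[b] && (best < 0 || erase_count[b] < erase_count[best]))
            best = b;
    return best;    /* -1 means no free block: garbage-collect first */
}

int main(void)
{
    for (int b = 0; b < BLOCKS; b++) {
        is_free[b] = 1;
        erase_count[b] = (uint32_t)(b * 100);   /* pretend wear history */
    }
    printf("next write goes to block %d\n", pick_least_worn_free());
    return 0;
}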

Vladimir Vassilevsky

Jun 29, 2008, 1:02:21 PM

David Brown wrote:

> Vladimir Vassilevsky wrote:
>
>> The flash write endurance (as stated by the manufacturers) is on the
>> order of millions of cycles. This is more than enough for many applications;
>> there is really no point in bothering about wear leveling. The problem I am
>> facing is the infant mortality of the unreliable cells.
>>
>
> Write endurance is not normally in the millions for NAND flash - perhaps
> you are thinking of NOR flash, which is inherently more reliable (but
> costs more per bit)?

Millions of cycles is what is stated in the datasheets for the finished
devices like flash cards or jump drives. We can only guess what methods
are actually employed inside. Perhaps the wear leveling is taken care
of. However, the reliability appears to be surprisingly low, especially
for the higher capacity devices.


Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant
http://www.abvolt.com

Stefan Reuther

Jun 29, 2008, 5:18:37 PM
Vladimir Vassilevsky wrote:
> "David Brown" <da...@westcontrol.removethisbit.com> wrote in message
>> CF cards and other earlier flash devices are not that great at wear
>> levelling and bad block handling (that's one of the reasons for
>> flash-specific file systems like YAFFS and JFFS2).
>
> The actual erase block size in NAND flash is something like 32/64/128/256KB,
> bigger for the higher capacity devices. What this implies: any write
> operation through the IDE interface is actually a read - copy - erase -
> modify - write at the controller level.

Not if the implementor of the controller firmware did his homework. If
that's the case, a 512-byte block write ends up as a 512-byte block
write at the flash device, at a new address, and the old address is just
marked as unusable. No read/erase/modify/write for every operation.
Garbage compaction happens some time later in the background.
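
A sketch of that write path, with invented structures (real firmware also
persists the map in each page's spare area and handles wrap-around and
compaction, all omitted here):

#include <stdint.h>
#include <stdio.h>

#define PAGES 64
#define NO_PAGE 0xFFFFu

static uint16_t map[PAGES];     /* logical sector -> physical page */
static uint8_t  stale[PAGES];   /* old copies awaiting background erase */
static uint16_t next_free;      /* append point of the log */

static void write_sector(uint16_t lsec /*, const void *data */)
{
    if (map[lsec] != NO_PAGE)
        stale[map[lsec]] = 1;   /* just mark the old copy, no erase now */
    map[lsec] = next_free++;    /* data would be programmed here */
}

int main(void)
{
    for (int i = 0; i < PAGES; i++)
        map[i] = NO_PAGE;
    write_sector(7);
    write_sector(7);            /* rewrite: no read-erase-modify cycle */
    printf("sector 7 at page %u; page 0 stale = %u\n",
           (unsigned)map[7], (unsigned)stale[0]);
    return 0;
}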

>> Since this kitchen is hidden behind IDE, there is no point in
>> using YAFFS or JFFS or such.

Exactly. Although I heard some controller firmware used in consumer
flash disks optimizes based on the assumption that the file system at
the other end of the IDE or USB interface is FAT.

> A disk cache with 32k blocks makes a lot of sense, though.

Not more or less for a flash disk than any other disk.


Stefan

Vladimir Vassilevsky

Jun 29, 2008, 6:01:04 PM

Stefan Reuther wrote:

> Vladimir Vassilevsky wrote:
>
>> The actual erase block size in NAND flash is something like 32/64/128/256KB,
>> bigger for the higher capacity devices. What this implies: any write
>> operation through the IDE interface is actually a read - copy - erase -
>> modify - write at the controller level.
>
> Not if the implementor of the controller firmware did his homework.

This is what is described in the appnotes from Samsung and SanDisk.

> If
> that's the case, a 512-byte block write ends up as a 512-byte block
> write at the flash device, at a new address, and the old address is just
> marked as unusable. No read/erase/modify/write for every operation.
> Garbage compaction happens some time later in the background.

So something like FAT or MFT has to be maintained internally, and it has
to be done in the background instead of with every transaction. This
scheme doesn't seem to be very applicable to removable media.

>> Since this kitchen is hidden behind IDE, there is no point in
>> using YAFFS or JFFS or such.
>
> Exactly. Although I heard some controller firmware used in consumer
> flash disks optimizes based on the assumption that the file system at
> the other end of the IDE or USB interface is FAT.
>
>
>> A disk cache with 32k blocks makes a lot of sense, though.
> Not more or less for a flash disk than any other disk.

There is a very significant penalty in the speed of flash write
operations at the IDE level if short blocks are used. The difference
can be as high as 10 times or so. According to the appnotes, this
happens due to the read - modify - write cycle.

>
>
> Stefan

VLV

Dombo

Jun 30, 2008, 5:55:13 AM
David Brown wrote:

Modern hard disks rely (heavily) on ECC too; likewise, a hard disk can
spot blocks becoming bad before they become unrecoverable. In other
words, the situation for traditional hard disks is not that different.

Dombo

Jun 30, 2008, 5:59:25 AM
David Brown wrote:

> Didi wrote:
>> Dombo wrote:
>>> Didi wrote:
>>>> Vladimir Vassilevsky wrote:
>>>>> ...
>>>>>
>>>>> I was under the impression that flash is more reliable than HDD; now I see
>>>>> that it is not so. Do you know how reliable the IDE flash drives are?
>>>>>
>>>> Vladimir,
>>>> if flash were a viable and reliable replacement for HDDs, it would have
>>>> happened years ago and the costs would have come down. They are
>>>> not, and given their limited number of write cycles they are bound to
>>>> stay out of the way of normal disks (which have achieved an amazing
>>>> level of performance).
>>> When the platter densities of (mechanical) HDDs went up, at a certain
>>> point HDD manufacturers had to resort to error correction schemes to
>>> obtain reliable operation. A modern HDD would be unusable without ECC.
>>> It appears that high density flash is going in the same direction.
>>
>> True, but HDDs don't wear out with writing - and flash does.
>> This is a major advantage flash does not promise to catch up
>> with - at least for the time being.
>
> HDDs wear out through use. The lifetime is roughly dependent on the
> time the hard disk has been powered up, and how much the head is moved.
> It's thus fairly independent of the size.

Also, the number of spin-ups of ordinary HDDs is only guaranteed to
about 50,000 cycles. This can be a limiting factor when one needs to
employ aggressive power saving strategies.

Stefan Reuther

Jun 30, 2008, 2:13:53 PM
Vladimir Vassilevsky wrote:
> Stefan Reuther wrote:
>> Vladimir Vassilevsky wrote:
>>> "David Brown" <da...@westcontrol.removethisbit.com> wrote in message
>>> The actual erase block size in NAND flash is something like 32/64/128/256KB,
>>> bigger for the higher capacity devices. What this implies: any write
>>> operation through the IDE interface is actually a read - copy - erase -
>>> modify - write at the controller level.
>>
>> Not if the implementor of the controller firmware did his homework.
>
> This is what described in the appnotes from Samsung and Sandisk.

Interesting. Failure-prone for removable media, and it reduces the part's
lifetime :->

>> If that's the case, a 512-byte block write ends up as a 512-byte block
>> write at the flash device, at a new address, and the old address is just
>> marked as unusable. No read/erase/modify/write for every operation.
>> Garbage compaction happens some time later in the background.
>
> So something like FAT or MFT has to be maintained internally, and it has
> to be done in the background instead of with every transaction. This
> scheme doesn't seem to be very applicable to removable media.

The remapping table can be reconstructed at any time from the data on
the media. The scheme is called a log-structured file system. The
implementations I've seen so far (Green Hills and YAFFS, although I've
only started looking closer at the latter) work this way.
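
A sketch of the mount-time reconstruction (the tag layout is invented;
YAFFS keeps comparable per-page tags): if each physical page carries its
logical sector number and a sequence number, scanning the media and keeping
the newest copy per sector rebuilds the whole map with no table stored
separately.

#include <stdint.h>
#include <stdio.h>

#define PAGES 8
#define SECTORS 4
#define UNUSED 0xFFFFFFFFu

struct tag { uint32_t lsec, seq; };     /* stored in each page's spare */

static const struct tag tags[PAGES] = {
    {0, 1}, {1, 1}, {0, 2},             /* sector 0 was rewritten */
    {UNUSED, 0}, {UNUSED, 0}, {UNUSED, 0}, {UNUSED, 0}, {UNUSED, 0},
};

int main(void)
{
    uint32_t map[SECTORS], best[SECTORS] = { 0 };
    for (int s = 0; s < SECTORS; s++)
        map[s] = UNUSED;

    for (uint32_t p = 0; p < PAGES; p++) {
        struct tag t = tags[p];
        if (t.lsec < SECTORS && t.seq >= best[t.lsec]) {
            best[t.lsec] = t.seq;       /* newer copy wins */
            map[t.lsec] = p;
        }
    }
    printf("sector 0 -> page %u (latest copy)\n", (unsigned)map[0]);
    return 0;
}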


Stefan

cs_po...@hotmail.com

Jun 30, 2008, 3:34:33 PM
On Jun 30, 2:13 pm, Stefan Reuther <stefan.n...@arcor.de> wrote:

> Interesting. Failure-prone for removable media, and it reduces the part's
> lifetime :->

Not really. When you think about it, writes and read-modify-writes
are the embedded controller's only easy opportunity for moving
frequently modified data to a less used block - this is the easy
way to accomplish wear leveling.

I would also assume that enough energy can be stored in an on-card
capacitor to flush a full block write from cache RAM to the NAND, at
least provided that the destination is already erased, which you'd
probably do preemptively since the destination is going to be a
different (less worn) location anyway.
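
A back-of-envelope check of the capacitor idea, with assumed (not
datasheet) numbers: one erase-block program drawing about 100mA for about
2ms, bridged by a capacitor allowed to sag from 5.0V down to a 3.6V
regulator dropout:

#include <stdio.h>

int main(void)
{
    double i  = 0.100;              /* program current, A (assumed) */
    double t  = 0.002;              /* program time, s (assumed) */
    double dv = 5.0 - 3.6;          /* usable voltage sag, V */
    double c  = i * t / dv;         /* Q = I*t, C = Q/dV */
    printf("C >= %.0f uF\n", c * 1e6);   /* ~143 uF: feasible on-card */
    return 0;
}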

Habib Bouaziz-Viallet

Jul 1, 2008, 2:15:02 PM
On Sun, 29 Jun 2008 09:53:28 -0500, Vladimir Vassilevsky wrote:

> "Habib Bouaziz-Viallet" <ha...@rigel.systems> wrote in message
> news:48672e24$0$21859$426a...@news.free.fr...
>> On Fri, 27 Jun 2008 11:10:29 -0500, Vladimir Vassilevsky wrote:
>>
>> > Since the flash itself is hidden behind the IDE interface and
>> > a compatible file system, and the read/write performance is critical, it
>> > is generally impossible to apply an error correction scheme.
>> "... generally impossible to apply an error correction scheme ..."
>> Are you really a serious guy, or do you come here on subjects that you do not
>> master?
>
> You blockheads should bow to the opportunity of receiving a lesson of
> wisdom.

And I bow in front of you, Vlad, or should I say in front of your wisdom.


>
>> I will try to make the kind of "Vlad response":
>> Is it a hobbyist project? I think it is. mmmhh, let's see ... who cares
>> about the safety of hobbyist projects?
>
> Good comment,

Really ?


> Would you suggest a solution which:
>
> 1) Is small, low power, robust and swappable.
> 2) Sustains read/write speeds of at least 5MB/s.
> 3) Is compatible with standard flash card readers regardless of OS.
>

Robust ???? Compatible ????
To meet these conditions I suggest avoiding NAND-flash-based technology.

If I had the ultimate solution for data storage I would probably be on a
beach in the Bahamas with charming creatures around me :-)


> Vladimir Vassilevsky
> DSP and Mixed Signal Design Consultant
> www.abvolt.com

--
HBV

Vladimir Vassilevsky

Jul 1, 2008, 7:02:45 PM

Habib Bouaziz-Viallet wrote:

> On Sun, 29 Jun 2008 09:53:28 -0500, Vladimir Vassilevsky wrote:
>>"Habib Bouaziz-Viallet" <ha...@rigel.systems> wrote in message
>>news:48672e24$0$21859$426a...@news.free.fr...
>>> On Fri, 27 Jun 2008 11:10:29 -0500, Vladimir Vassilevsky wrote:
>>>
>>>
>>>> Since the flash itself is hidden behind the IDE interface and
>>>> a compatible file system, and the read/write performance is critical, it
>>>> is generally impossible to apply an error correction scheme.
>>>
>>> "... generally impossible to apply an error correction scheme ..."
>>> Are you really a serious guy, or do you come here on subjects that you do not
>>> master?


>>You blockheads should bow to the opportunity of receiving a lesson of
>>wisdom.
>
> And I bow in front of you, Vlad, or should I say in front of your wisdom.

Eh? Wasn't it said to bow to the _opportunity_?

>>> I will try to make the kind of "Vlad response":
>>> Is it a hobbyist project? I think it is. mmmhh, let's see ... who cares
>>> about the safety of hobbyist projects?
>>
>>Good comment,
>
> Really ?

Yes. But don't get a swelled head.

>
>>Would you suggest a solution which:
>>
>> 1) Is small, low power, robust and swappable.
>> 2) Sustains read/write speeds of at least 5MB/s.
>> 3) Is compatible with standard flash card readers regardless of OS.
>>
> Robust ???? Compatible ????
> To meet these conditions I suggest avoiding NAND-flash-based technology.
> If I had the ultimate solution for data storage I would probably be on a
> beach in the Bahamas with charming creatures around me :-)

If your goal is a beach with charming creatures, you are perhaps in the
wrong kind of business for that. Even the ultimate storage solution
won't help.


Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant

http://www.abvolt.com


msg

Jul 7, 2008, 10:17:31 AM
David Brown wrote:
<snip>

> Good manufacturers quote MTBF numbers
> that are orders of magnitude higher than for hard disks, and wear is no
> longer a practical issue for larger flash disks (I've seen flash disks
> spec'ed for *continuous* 20 MB/s writes for years). See also
> <http://wiki.eeeuser.com/ssd_write_limit> - a 4 GB Eee PC disk should be
> fine for a normal user for 25 years. And since wear is levelled across
> a disk, a 128 GB disk will survive 32 times as much use for the same time.

I don't follow this logic; are you saying that for statistically average
usage in a system that doesn't thrash, wear-levelling on a larger device
has more room to work and thus gives a longer life-cycle? Doesn't this
argument fall apart with modern VM OS usage and thrashing?

Michael

David Brown

Jul 7, 2008, 2:37:29 PM

Modern OSes (even Windows) do not thrash unless you try to run too much
with too little RAM.

And yes, if you have a larger disk to spread the writes, then there will
be fewer erase/writes per block for the same amount of write usage -
that's fairly obvious maths.

And even if you do occasionally thrash the swap file or swap partition,
the argument still applies - a larger disk will last longer.
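
The wear arithmetic behind that, with assumed figures (the endurance and
write volume below are illustrative, not vendor data): total writable bytes
scale as capacity times endurance, so doubling capacity doubles lifetime at
the same write rate.

#include <stdio.h>

int main(void)
{
    double cap_gb    = 4.0;       /* Eee PC class disk */
    double endurance = 1e5;       /* erase cycles per block (assumed) */
    double daily_gb  = 40.0;      /* heavy sustained writes (assumed) */

    double total_gb = cap_gb * endurance;          /* 400,000 GB */
    printf("%.0f years\n", total_gb / daily_gb / 365.0);   /* ~27 */
    return 0;
}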
