
Flash Drive Recommendations


Peter

Jun 7, 2011, 5:52:35 AM
Does anyone have a recommendation for a good-quality flash drive (memory
stick) with decent read and write speeds and a total capacity of 4 or
8 GB? I recently bought a 4 GB SanDisk Cruzer Slice USB 2.0 flash drive
(model CZ37), but it is much slower to write than an older SanDisk flash
drive. No precise figures, just watching the transfer time on screen.

Googling for reviews shows some variation even within the same
manufacturer, so a Linux user's recommendation would be much appreciated.
The latest model came with preloaded software for Windows users, which I
don't need for Linux.

Some time ago I think someone on this newsgroup suggested reformatting
flash drives to ext2 rather than the supplied FAT32. Are there any
downsides to doing this?

Any thoughts, please.

chris

Jun 7, 2011, 6:03:13 AM
On 07/06/11 10:52, Peter wrote:
> Does anyone have a recommendation for a good-quality flash drive (memory
> stick) with decent read and write speeds and a total capacity of 4 or
> 8 GB? I recently bought a 4 GB SanDisk Cruzer Slice USB 2.0 flash drive
> (model CZ37), but it is much slower to write than an older SanDisk flash
> drive. No precise figures, just watching the transfer time on screen.

I have a Kingston G2 4GB, which seems 'fast enough' for me. I often use
it for copying ISOs and it's fine for that.

> Some time ago I think someone on this newsgroup suggested reformatting
> flash drives to ext2 rather than the supplied FAT32. Are there any
> downsides to doing this?

Other than that it won't work transparently with Windows PCs or Macs, no.
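
If you do want to try it, the reformat itself is a one-liner. A rough
sketch, assuming the stick shows up as /dev/sdb1 (check dmesg after
plugging it in - the device name here is only an example, and mkfs
destroys whatever is on it):

umount /dev/sdb1                   # make sure nothing on it is mounted
mkfs.ext2 -L usbstick /dev/sdb1    # -L sets an arbitrary volume label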

Giovanna Stefani

Jun 7, 2011, 6:54:47 AM
On Tue, 07 Jun 2011 10:52:35 +0100, Peter wrote:
> The latest model came with preloaded software for Windows users, which I
> don't need for Linux.

They do that because 95% of domestic computer users have that OS.
A quick format and it is all gone.

I use SanDisk memory sticks and have had no trouble with them.

GS

Nick Leverton

Jun 7, 2011, 7:23:54 AM
In article <iskscd$7us$1...@localhost.localdomain>,
Peter <not...@nospam.co.uk> wrote:
>
>Some time ago I think someone on this newsgroup suggested reformatting
>flash drives to ext2 rather than the supplied FAT32. Are there any
>downsides to doing this?

Wear levelling can be a downside. Some of the embedded controllers in
these devices are FAT-aware and will reallocate the FAT allocation blocks
on rewrite, transparently to the O/S, so as to spread the wear across
the entire device instead of a single block. With non-FAT filesystems,
rewrite of a block will stay in the same block so that a heavily used
block will quickly use up the wear allowance for that area of the
flash chip.

That said, with modern wear limits being in the tens of thousands it's
unlikely to affect casual use e.g. for transferring photos, videos or
even use as a live boot device.

However using ext3 or ext4 and writing much data would wear it out more
quickly (ext3's journal involves static blocks for the head sector
and pointers AIUI), and constantly updating a file or directory with
atime enabled would wear out that file's inode. I have seen this on
CF cards used for data logging - as far as we could figure out what was
happening, anyway. We reformatted them so that the embedded controller
could allocate a new wear block and they were fine again.
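
If atime wear is the worry, the easy mitigation is to mount the stick
with access-time updates turned off. A minimal sketch, assuming the
stick is /dev/sdb1 with a mount point at /mnt/usb (both names are just
examples):

mount -o noatime /dev/sdb1 /mnt/usb

or the equivalent line in /etc/fstab:

/dev/sdb1  /mnt/usb  ext2  noatime  0  0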

Nick
--
Serendipity: http://www.leverton.org/blosxom (last update 29th March 2010)
"The Internet, a sort of ersatz counterfeit of real life"
-- Janet Street-Porter, BBC2, 19th March 1996

Theo Markettos

Jun 7, 2011, 10:31:12 AM
Peter <not...@nospam.co.uk> wrote:
> Googling for reviews shows some variation even within the same
> manufacturer, so a Linux user's recommendation would be much appreciated.
> The latest model came with preloaded software for Windows users, which I
> don't need for Linux.

There are three performance figures you need to worry about:

Bulk read speed (MB/s)
Bulk write speed (MB/s)
Access time (ms)

If you're copying big files, the bulk speeds matter. If you're using
small files, the access times dominate. If you're using the drive as swap,
access times really matter.
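
For the two bulk figures, rough-and-ready numbers can be had from plain
dd - a sketch, assuming the stick is mounted at /mnt/usb (path and sizes
are just examples, and drop_caches needs root):

# bulk write: conv=fsync makes dd flush to the stick before it reports
# its MB/s figure, so you aren't just timing the page cache
dd if=/dev/zero of=/mnt/usb/speedtest bs=1M count=256 conv=fsync

# bulk read: empty the page cache first, or you measure RAM instead
echo 3 > /proc/sys/vm/drop_caches
dd if=/mnt/usb/speedtest of=/dev/null bs=1M

Access times are another matter; dd won't tell you those.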

Getting stats on access times is hard. Having a drive support ReadyBoost on
Vista means the access time is <1ms, but the drive can have poor bulk
speeds. Manufacturers tend to have a large range of different devices with
different packaging and different performance, so you have to look at the
specific model (e-shop websites are often bad at telling you this).

I need to replace my router's swap drive, so I did a lot of staring at
specs a few weeks ago and ended up with a Lexar JumpDrive 120x 2GB, which
was 7 quid from Amazon/MyMemory. I have yet to do speed tests on it - is
there a good speed-test program (one that includes seek times)?

bonnie++ -s512 -r0 (so use 512MB flash and don't complain that my RAM is
bigger than the USB stick [I have 1.5GB RAM]) gives, on the Lexar's default
FAT format:

Version 1.96 ------Sequential Output------ --Sequential Input- --Random-
Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
etoile 512M 379 93 18188 9 11391 9 1039 93 701003 80 473.4 5
Latency 43785us 323ms 323ms 28465us 6747us 24058us


Version 1.96 ------Sequential Create------ --------Random Create--------
etoile -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 63 73 +++++ +++ 715 90 63 74 +++++ +++ 228 84
Latency 134ms 27805us 24410us 131ms 11849us 55917us

Seq Out Block = Bulk write
Seq In Block = Bulk read
Random seek latency = Access time

Obviously read caching is happening there, as I don't think it can read at
700MB/s! I also didn't use -b, which disables write caching. The other
stats are a little suspect (a 24ms seek looks very slow to me, and doesn't
match up with 473/second), so evidently I need a better benchmark.

Also, is there a handy Linux app for checking if a flash drive contains the
reported capacity? (Something like dd if=/dev/urandom | tee /dev/sdf |
md5sum but it needs to be synchronised ). I'm always wary of buying fake
flash drives, and would like some way to check their actual storage capacity.

Theo

Geoff Clements

Jun 7, 2011, 2:31:33 PM
chris wrote:


>> Some time ago I think someone on this newsgroup suggested reformatting
>> flash drives to ext2 rather than the supplied FAT32. Are there any
>> downsides to doing this?
>
> Other than that it won't work transparently with Windows PCs or Macs

and file permissions can get in the way when transferring across machines.


--
Geoff

Folderol

Jun 7, 2011, 3:28:22 PM

I find the fact that ext2 doesn't work on Windows a distinct advantage - nobody
at work wants to 'borrow' my memory sticks :)

--
Will J G

Daniel James

Jun 8, 2011, 8:30:44 AM
In article <iskscd$7us$1...@localhost.localdomain>, Peter wrote:
> Googling for reviews shows some variations even within the same
> manufacturer ...

The actual Flash memory chips and even the controllers used can vary
between batches from the same manufacturer, even for sticks that are
nominally the same model. The manufacturers just buy whatever's
cheapest when they need to start the production run.

I've had good experiences with sticks from HP, Crucial, PNY, Kingston,
and Corsair (in no particular order), and bad experiences with one of
two sticks from SanDisk (but that's not statistically significant). We
benchmarked a number of sticks at work, and IIRC the HP and PNY sticks
we tested were consistently good, but the others were not far behind.

I've also had some freebie sticks (handed out with the slides after a
presentation) that were great, and some that were terrible.

Unfortunately the best advice I can offer is to buy from a reputable
maker and avoid cheap unbranded sticks. Even that's not foolproof.

Cheers,
Daniel.

Message has been deleted

Peter

Jun 8, 2011, 12:18:59 PM
On 07/06/11 15:31, Theo Markettos wrote:
>
> Also, is there a handy Linux app for checking if a flash drive
> contains the reported capacity? (Something like dd if=/dev/urandom
> | tee /dev/sdf | md5sum but it needs to be synchronised ). I'm
> always wary of buying fake flash drives, and would like some way to
> check their actual storage capacity.

I'm not sure if this helps, but I have used:

# cfdisk /dev/sdb1

which appears to report correctly but may not be reliable. Replace
'sdb1' with your flash drive's name as appropriate, of course, or you
could end up with 'toast'! Similarly, GUI-wise, GParted shows the flash
drive's usage, capacity, etc.

Anyway, many thanks to all for your replies. I am now much better
informed and will select my flash drives carefully.

Peter

Message has been deleted

news1...@moo.uklinux.net

Jun 8, 2011, 4:27:15 PM
Peter <not...@nospam.co.uk> wrote:
> Some time ago I think someone on this newsgroup suggested reformatting
> flash drives to ext2 rather than the supplied FAT32. Are there any
> downsides to doing this?

Er, one of these?

http://en.wikipedia.org/wiki/Flash_file_system

#Paul

Theo Markettos

Jun 8, 2011, 6:52:32 PM
Huge <Hu...@nowhere.much.invalid> wrote:
> The fake flash disk I bought on eBay looked, smelled, tasted and sounded
> like a 32GB one. The packaging was OK. It reported its size OK. The only
> giveaway was that if you put more than a few GB of data on it, the data
> beyond the first few GB was jumbled nonsense.

Yup, had a similar experience with a MemoryStick Pro Duo, which had 128MB
rather than the 2GB claimed. It was only when I started taking pictures on
it that I noticed. I went to a different shop and bought another one. That
turned out to be fake too.

> (Is it the case that these things are all made the same, then marked up
> according to how much good capacity is on them?

Well, it's possible the flash chips are made oversize and then individual
bad blocks mapped out, like on hard drives. But you'd have truly terrible
yield if even half the capacity had gone through bad blocks, so it's
unlikely that a 32GB device is being downgraded to a 16GB one.

Smaller capacity devices are cheaper because they're made on 'last year's'
production line (fab), where the equipment has already been paid for and the
tooling expenses are less.

> Apparently the scam is to buy 2GB ones and reformat/repackage/mark them so
> they look like 32GB ones...)

Even if you buy them direct from the bona fide manufacturer they can still
be fake:
http://www.bunniestudios.com/blog/?p=918

Theo

Message has been deleted

alexd

Jun 10, 2011, 2:55:04 PM
Meanwhile, at the uk.comp.os.linux Job Justification Hearings, Theo
Markettos chose the tried and tested strategy of:

> Huge <Hu...@nowhere.much.invalid> wrote:

>> (Is it the case that these things are all made the same, then marked up
>> according to how much good capacity is on them?

> Smaller capacity devices are cheaper because they're made on 'last year's'
> production line (fab), where the equipment has already been paid for and
> the tooling expenses are less.

I think the under-sized fakes are made using circular-buffer flash chips
designed for applications where data is written continuously, e.g. CCTV.

--
<http://ale.cx/> (AIM:troffasky) (UnSoEs...@ale.cx)
19:52:23 up 5 days, 9:37, 5 users, load average: 0.17, 0.19, 0.22
"People believe any quote they read on the internet
if it fits their preconceived notions." - Martin Luther King

Mike Tomlinson

Jun 13, 2011, 3:01:54 AM
In article <0lo*m2...@news.chiark.greenend.org.uk>, Theo Markettos
<theom...@chiark.greenend.org.uk> wrote:

>Also, is there a handy Linux app for checking if a flash drive contains the
>reported capacity?

Write a suitably large file to it with a pre-generated md5sum then
compare that with a md5sum on the file written to the drive?

A lot of fake flash drives claim double the real capacity, so as long as
you write a file that is more than 50% of the stated size, that should
be a realistic test.
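
As a sketch (assuming the stick is mounted at /mnt/usb and you have a
few GB free to stage the file - paths and size are examples only):

dd if=/dev/urandom of=/tmp/big.bin bs=1M count=3000
md5sum /tmp/big.bin
cp /tmp/big.bin /mnt/usb/ && sync
# remount the stick before checking, or you may just re-read the
# page cache rather than the flash itself
md5sum /mnt/usb/big.bin    # should match the first sum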

I quite often copy large files (4.7GB DVD ISOs) around using a variety
of hardware and transfer methods (memory sticks, USB hard drives, eSATA,
NFS, scp, FTP etc.), starting with an md5sum taken at the first stage,
and am always quietly impressed that the md5sum taken at the end shows
the file is intact despite having been through several different
transfer methods. I suppose I shouldn't be impressed, really, but I am.
It's a testament to robust error checking.

--
(\__/)
(='.'=)
(")_(")


Theo Markettos

Jun 13, 2011, 1:09:43 PM
Mike Tomlinson <mi...@jasper.org.uk> wrote:
> Write a suitably large file to it with a pre-generated md5sum then
> compare that with a md5sum on the file written to the drive?

Well yes, but if you don't have a large enough file lying about you have to
create one, which means having a number of GB free in the first place, plus
you have to write N GB to disc, read it back to md5sum, then read it again,
write to the flash, read it back and md5sum that. If you have a 32GB flash
stick it could be rather slow.

Ideally you'd generate some random data in RAM, put it through md5sum as you
do so, write it to flash, then read it back and md5sum it. That needs no
extra spare disc space, and saves two hard drive reads and one write. I'm
sure there must be a proper tool to do this with a nice interface etc,
rather than some program I could hack up, hence my inquiry.
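
In the meantime, the nearest I can get with stock tools is something
like this - a sketch only, assuming bash (for the >( ) construct), root,
and that /dev/sdb really is the stick; it overwrites the whole device,
so triple-check the name and pick count= to suit the claimed size:

# write a fixed number of blocks of random data, checksumming exactly
# the bytes that go past
dd if=/dev/urandom bs=1M count=4000 iflag=fullblock \
  | tee >(md5sum > /tmp/written.md5) \
  | dd of=/dev/sdb bs=1M iflag=fullblock conv=fsync

# read back exactly the same byte count, bypassing the page cache so
# we see what the flash really stored, then compare
dd if=/dev/sdb bs=1M count=4000 iflag=direct | md5sum > /tmp/readback.md5
diff /tmp/written.md5 /tmp/readback.md5 && echo "looks genuine"

Fixing count= on both sides is what keeps the checksums byte-synchronised
(run the commands one at a time; the md5sum inside >( ) finishes
asynchronously).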

Theo

Message has been deleted

Richard Kettlewell

Jun 13, 2011, 2:43:39 PM
Huge <Hu...@nowhere.much.invalid> writes:

> A *tool*? What kind of Linux user are you? :o)
>
> dd if=/dev/zero count=64000000 | tee /dev/sdb1/file | md5sum
>
> Followed by;
>
> md5sum /dev/sdb1/file
>
> You might be able to use /dev/random as input, but I suspect you'll
> exhaust the randomness pool pretty quickly.
>
> Hopefully, whatever file system is on the USB stick won't generate
> a sparse file from all zeroes (like Solaris does), else this won't work.
>
> (Note; this is idle noodlings, so don't be surprised if it's wrong!)

The trouble with using 0s is that when the stick 'loops back round',
you'll just be reading the earlier 0s back. You want something
non-repetitive but predictable.

For instance: https://github.com/ewxrjk/vbig/tree/release
(a bit rough and ready, mind :-)

--
http://www.greenend.org.uk/rjk/

Message has been deleted

Mike Tomlinson

Jun 14, 2011, 8:59:16 AM
In article <hSm*ve...@news.chiark.greenend.org.uk>, Theo Markettos
<theom...@chiark.greenend.org.uk> writes

>Well yes, but if you don't have a large enough file lying about you have to
>create one, which means having a number of GB free in the first place

Hard disc space isn't exactly at a premium on most modern PCs.

>, plus
>you have to write N GB to disc, read it back to md5sum, then read it again,
>write to the flash, read it back and md5sum that. If you have a 32GB flash
>stick it could be rather slow.

OK then, stick a DVD in the drive, then:

dd if=/dev/scd0 of=/dev/flash-disk ; md5sum /dev/scd0 /dev/flash-disk

If the checksums match, you know you have an exact copy.

> I'm
>sure there must be a proper tool to do this with a nice interface etc,
>rather than some program I could hack up, hence my inquiry.

I have seen one mentioned in a usenet group, but wasn't paying much
attention. You'll certainly get useful answers in
comp.sys.ibm.pc.hardware.storage, but killfile the group idiot, Rod Speed.

--
Mike Tomlinson

Mike Tomlinson

Jun 14, 2011, 9:11:32 AM
In article <aB2+nbCk...@jasper.org.uk>, Mike Tomlinson
<mi...@jasper.org.uk> writes

>I have seen one mentioned in a usenet group

This one:

http://www.heise.de/ct/Redaktion/bo/downloads/h2testw_1.4.zip

Grimdows only tho, which makes it kinda OT for here.

--
Mike Tomlinson

Theo Markettos

Jun 14, 2011, 10:18:27 AM
Huge <Hu...@nowhere.much.invalid> wrote:
> A *tool*? What kind of Linux user are you? :o)
>
> dd if=/dev/zero count=64000000 | tee /dev/sdb1/file | md5sum

ITYF I posted exactly that in <news:m2...@news.chiark.greenend.org.uk>
(using /dev/urandom to make non-wrapping data, as has already been
mentioned).

However it doesn't work (or the version I last tried didn't work: it wrote
to raw block devices rather than files for one thing). The reason it
doesn't work is due to buffered I/O: tee discovers that it's reached end of
file when it tries to write the final+1 block. But as they run in parallel,
that block has already been pushed through md5sum, so the md5sum isn't
byte-synchronised with the stream that has actually been written to disc,
and gives a different sum. There's no way to know if it was just an
out-by-one error on the length, or whether the data read back is radically
different.

Hence more care is required, and a proper program that synchronises I/O
rather than just plays with pipes is required.

And as for 'disc space is cheap', I'm often doing this on a netbook with
only 8GB of SSD.

Theo

Theo Markettos

Jun 14, 2011, 10:28:06 AM
Mike Tomlinson <mi...@jasper.org.uk> wrote:
> OK then, stick a DVD in the drive, then:
>
> dd if=/dev/scd0 of=/dev/flash-disk ; md5sum /dev/scd0 /dev/flash-disk
>
> if the checksums match you know you have an exact copy.

md5sum /dev/scd0 is actually more tricky than it looks: I've tried it on a
few DVD writers that are otherwise working fine, and what happens is the
number of blocks you get back before EOF varies, even repeated straight
after each other and on the same disc (all self-written DVD-Rs, I was
actually trying to fix a DVD writing problem). So you get different sums
based on the number of blocks that come off each time. I don't understand
why, but can only assume some kind of optical variation or a problem with
Linux's SATA DVD support. It doesn't make a difference when reading
ordinary ISO9660 discs where there's no reason to stray off the filesystem.

Theo

Mike Tomlinson

Jun 14, 2011, 11:42:36 AM
In article <gZD*9V...@news.chiark.greenend.org.uk>, Theo Markettos
<theom...@chiark.greenend.org.uk> writes

>md5sum /dev/scd0 is actually more tricky than it looks

oh? Fedora 14 DVD, self-burnt:

[root@blerg tmp]# dd if=/dev/scd0 of=/tmp/test.img ; md5sum /dev/scd0 /tmp/test.img
6956608+0 records in
6956608+0 records out
3561783296 bytes (3.6 GB) copied, 771.531 s, 4.6 MB/s
112fef4270bfc8611a714f8ef8478ca0 /dev/scd0
112fef4270bfc8611a714f8ef8478ca0 /tmp/test.img
[root@blerg tmp]#

>: I've tried it on a
>few DVD writers that are otherwise working fine, and what happens is the
>number of blocks you get back before EOF varies, even repeated straight
>after each other and on the same disc (all self-written DVD-Rs, I was
>actually trying to fix a DVD writing problem).

Just a thought - are you finalizing the discs?

> So you get different sums
>based on the number of blocks that come off each time.

oh?

[root@blerg tmp]# md5sum /dev/scd0 ; md5sum /dev/scd0 ; md5sum /dev/scd0 ; md5sum
/dev/scd0 ; md5sum /dev/scd0
112fef4270bfc8611a714f8ef8478ca0 /dev/scd0
112fef4270bfc8611a714f8ef8478ca0 /dev/scd0
112fef4270bfc8611a714f8ef8478ca0 /dev/scd0
^C (got bored)
[root@blerg tmp]#

> I don't understand
>why, but can only assume some kind of optical variation

No, you should read back exactly what you wrote. If not, something is amiss.

> or a problem with
>Linux's SATA DVD support.

Other things that come to mind are iffy DVD firmware and/or a memory fault.

--
Mike Tomlinson

Nix

Jun 14, 2011, 1:25:25 PM
On 14 Jun 2011, Theo Markettos said:
> Hence more care is required, and a proper program that synchronises I/O
> rather than just plays with pipes is required.

Sounds like a job for <http://www.amazon.com/dp/0833030477>

(only you'd have to get the random numbers in somehow, and I think
typing or scanning might be rather laborious.)

--
NULL && (void)

Tom Anderson

Jun 14, 2011, 1:55:50 PM

Returning - momentarily, I assure you! - to seriousness, what's wrong
with:

I=0; while [[ $I -lt 1000000000 ]]; do echo $I; I=$((I + 1)); done

?

That should generate about 10 GB of data, exactly the same every time,
containing no swathes of zero bytes, consuming about 1 MB of resident set
to do so (and another 102 MB of address space, at least on my machine -
dynamic linkers, eh?).

Writing a program to verify the contents of the file is left as an
exercise for the reader.
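
Though arguably the verifier needn't be a program at all: regenerate the
same stream and let cmp do the work. A sketch, assuming the counter
output was written to /mnt/usb/testfile (the path is an example):

# seq emits the same 0..999999999 stream as the loop, rather faster
seq 0 999999999 | cmp - /mnt/usb/testfile && echo "contents intact"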

tom

--
Re-enacting the future

Theo Markettos

Jun 14, 2011, 2:21:45 PM
Tom Anderson <tw...@urchin.earth.li> wrote:
> On Tue, 14 Jun 2011, Nix wrote:
>
> > On 14 Jun 2011, Theo Markettos said:
> >> Hence more care is required, and a proper program that synchronises I/O
> >> rather than just plays with pipes is required.
> >
> > Sounds like a job for <http://www.amazon.com/dp/0833030477>
> >

"Customers who viewed this item also viewed:

Tuscan Whole Milk, 1 Gallon, 128 fl oz
3M 8979N Performance Plus Nuclear Duct Tape, Slate Blue 48m
Fresh Whole Rabbit
Male Testicular Exam Model Anatomy"

Hmm...



> Returning - momentarily, i assure you! - to seriousness, what's wrong
> with:
>
> I=0; while [[ $I -lt 1000000000 ]]; do echo $I; I=$((I + 1)); done
>
> ?

Nothing wrong as such: the data can be whatever you like as long as it isn't
a repeating pattern. The issue is the I/O, and getting it synchronised so
that you write exactly the same number of bytes to flash as you md5sum.

> Writing a program to verify the contents of the file is left as an
> exercise for the reader.

If you aren't using md5sum and just checking the numbers then you don't have
that problem. Though I don't know how it compares speed-wise with md5...

Theo

Theo Markettos

Jun 14, 2011, 6:11:59 PM
Mike Tomlinson <mi...@jasper.org.uk> wrote:
> In article <gZD*9V...@news.chiark.greenend.org.uk>, Theo Markettos
> <theom...@chiark.greenend.org.uk> writes
>
> >: I've tried it on a
> >few DVD writers that are otherwise working fine, and what happens is the
> >number of blocks you get back before EOF varies, even repeated straight
> >after each other and on the same disc (all self-written DVD-Rs, I was
> >actually trying to fix a DVD writing problem).
>
> Just a thought - are you finalizing the discs?

I think so. This finalises, doesn't it?

wodim cdimage.iso

> No, you should read back exactly what you wrote. If not, something is amiss.
>
> > or a problem with
> >Linux's SATA DVD support.
>
> Other things that come to mind are iffy DVD firmware and/or a memory fault.

Drives were Tsstcorp SH-S223 something, I think. I didn't have any others
to test. There is definitely some weirdness with them on Linux that I
haven't figured out - whether it's firmware, SATA controller, libata,
kernel, I'm not sure.

Anyway, rather OT for this thread...

Theo

Nix

Jun 15, 2011, 4:37:33 PM
On 14 Jun 2011, Theo Markettos uttered the following:

> Tom Anderson <tw...@urchin.earth.li> wrote:
>> On Tue, 14 Jun 2011, Nix wrote:
>>
>> > On 14 Jun 2011, Theo Markettos said:
>> >> Hence more care is required, and a proper program that synchronises I/O
>> >> rather than just plays with pipes is required.
>> >
>> > Sounds like a job for <http://www.amazon.com/dp/0833030477>
>> >
>
> "Customers who viewed this item also viewed:
>
> Tuscan Whole Milk, 1 Gallon, 128 fl oz
> 3M 8979N Performance Plus Nuclear Duct Tape, Slate Blue 48m
> Fresh Whole Rabbit
> Male Testicular Exam Model Anatomy"
>
> Hmm...

All notable for not entirely serious reviews. They didn't care about the
*products* at all :)

--
NULL && (void)

Nix

Jun 15, 2011, 4:40:17 PM
On 14 Jun 2011, Theo Markettos verbalised:

> Mike Tomlinson <mi...@jasper.org.uk> wrote:
>> > or a problem with
>> >Linux's SATA DVD support.
>>
>> Other things that come to mind are iffy DVD firmware and/or a memory fault.
>
> Drives were Tsstcorp SH-S223 something, I think. I didn't have any others
> to test. There is definitely some weirdness with them on Linux that I
> haven't figured out - whether it's firmware, SATA controller, libata,
> kernel, I'm not sure.

Oh, they suck. I have two and they're just barely capable of reading a
DVD without falling over in a heap *if* the DVD is utterly flawless. Any
errors and they introduce extra errors (32767 sectors' worth) after
spending next to eternity deciding that they can't read that one sector:
obviously a firmware bug. That's just my firmware. Others have reported
different bugs, in burning, CD audio, you name it. Plus they're
region-locked in hardware and you can't reverse it.

I bought a cheap USB DVD burner. It is many times more useful and
capable. The TSSTcorp DVD burners are basically wasting two drive bays
at this point.

Avoid avoid.

--
NULL && (void)

Theo Markettos

Jun 15, 2011, 9:41:13 PM
Nix <nix-ra...@esperi.org.uk> wrote:
> Oh, they suck. I have two and they're just barely capable of reading a
> DVD without falling over in a heap *if* the DVD is utterly flawless. Any
> errors and they introduce extra errors (32767 sectors' worth) after
> spending next to eternity deciding that they can't read that one sector:
> obviously a firmware bug. That's just my firmware. Others have reported
> different bugs, in burning, CD audio, you name it. Plus they're
> region-locked in hardware and you can't reverse it.

Thanks, will know to avoid in future. We appear to have a pile of them
though :(

Theo

Nix

Jun 16, 2011, 11:30:03 AM
On 16 Jun 2011, Theo Markettos told this:

They're very common in server hardware. I have no idea why: they're
incapable even of reading many vendor install CDs...

--
NULL && (void)

Daniel James

Jun 19, 2011, 12:38:19 PM
In article <hZD*RC...@news.chiark.greenend.org.uk>, Theo Markettos wrote:
> Drives were Tsstcorp SH-S223 something, I think. I didn't have any others
> to test. There is definitely some weirdness with them on Linux ...

There seems to be some weirdness with them regardless of OS ... I see lots
of reports of read and write failures.

Mine is behaving better since I reflashed it with firmware SB06 but I'm not
convinced it's doing everything right ... still, I did manage to produce a
bootable LiveCD from an ISO with it (from Ubuntu AMD64) yesterday, which is
a first for that drive!

Cheers,
Daniel.

Theo Markettos

Mar 23, 2012, 2:58:06 PM
On 7 Jun 2011, in Message-ID: <0lo*m2...@news.chiark.greenend.org.uk>
I wrote:
> I need to replace my router's swap drive, so I did a lot of staring at
> specs a few weeks ago and ended up with a Lexar Jumpdrive 120x 2GB which
> was 7 quid from Amazon/MyMemory. I have yet to do speed tests on it - is
> there a good speed test program (that includes seek times?)

[following up to a very old thread]

I've just come across a filesystem speed tester that looks like it will
do the job for flash. Haven't tried it in anger, but it seems promising.
It's 'iozone':
http://www.iozone.org/
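
A typical flash-flavoured invocation, pieced together from the docs
(flags as per the iozone manual, but treat it as a starting point rather
than gospel; the file path is an example):

# -a: full automatic mode; -e: include flush/fsync in the timings;
# -I: use O_DIRECT so the page cache doesn't flatter the numbers
iozone -a -e -I -s 100m -f /mnt/usb/iozone.tmp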

Some results of using it on various flash devices can be found here:
http://www.altechnative.net/2012/01/25/flash-module-benchmark-collection-sd-cards-cf-cards-usb-sticks/

Theo