
Anyone have a Sandisk Extreme III card?


void.no....@gmail.com

Feb 21, 2009, 4:53:22 PM
I have a Sandisk Extreme III 8 GB SD card (the 30MB/sec edition), and
I tried copying files from the hard drive to it and back, and timed
the transfers. I copied 2 files - a 400 MB and 700 MB file (total of
1.1 GB) - from the HD to the SD card, and it took 75 seconds, so
that's a write speed of about 15 MB/sec for the SD card. Then I copied
the files back to the HD and it took 90 seconds, so that's a read
speed of about 12 MB/sec for the SD card. Doesn't seem to live up to
its 30 MB/sec billing. I am using the SanDisk MobileMate SDDR-104 USB
card reader, if that makes a difference. Is anyone able to get 30 MB/
sec transfers with this card?
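
A test like that is easy to script; the sketch below is a minimal Python
version, with placeholder paths for the source file and the card's mount
point. Note that OS write caching can flatter the write figure unless the
card is flushed or unmounted before timing stops.

import os, shutil, time

SRC = "/home/user/testfile.bin"      # placeholder: a large file on the hard drive
DST = "/media/sdcard/testfile.bin"   # placeholder: a path on the mounted SD card

size_mb = os.path.getsize(SRC) / (1024.0 * 1024.0)

start = time.time()
shutil.copyfile(SRC, DST)            # HD -> card (card write speed)
print("write: %.1f MB/s" % (size_mb / (time.time() - start)))

start = time.time()
shutil.copyfile(DST, SRC + ".copy")  # card -> HD (card read speed)
print("read:  %.1f MB/s" % (size_mb / (time.time() - start)))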

D-Mac

Feb 21, 2009, 8:38:00 PM

I use this exact same card exclusively with my Nikon and Fuji DSLRs and
just tried your "test".

Not 30 MB/s but 22 MB/s which is about the limit of USB2 - according to
my identical test on a remote hard drive. The name "mobile mate" has me
wondering if it's USB or USB2.

I'm not saying you are doing this but... The USB ports on the front of
most cases are NOT USB2 but the older and much slower USB ports. It's
the ports on the back (directly off the mainboard) that are USB2.

Otherwise, you can check the serial number on the bottom of the card at
Sandisk's web site to see if it's a forgery. Sandisk is the most forged
flash card in the world.

D-Mac.info

void.no....@gmail.com

Feb 21, 2009, 9:59:19 PM
On Feb 21, 8:38 pm, D-Mac <alienjo...@y7mail.com> wrote:

> void.no.spam....@gmail.com wrote:
> > I have a Sandisk Extreme III 8 GB SD card (the 30MB/sec edition), and
> > I tried copying files from the hard drive to it and back, and timed
> > the transfers. I copied 2 files - a 400 MB and 700 MB file (total of
> > 1.1 GB) - from the HD to the SD card, and it took 75 seconds, so
> > that's a write speed of about 15 MB/sec for the SD card. Then I copied
> > the files back to the HD and it took 90 seconds, so that's a read
> > speed of about 12 MB/sec for the SD card. Doesn't seem to live up to
> > its 30 MB/sec billing. I am using the SanDisk MobileMate SDDR-104 USB
> > card reader, if that makes a difference. Is anyone able to get 30 MB/
> > sec transfers with this card?
>
> I use this exact same card exclusively with my Nikon and Fuji DSLRs and
> just tried your "test".
>
> Not 30 MB/s but 22 MB/s which is about the limit of USB2 - according to
> my identical test on a remote hard drive. The name "mobile mate" has me
> wondering if it's USB or USB2.

It is USB 2.0, according to Sandisk's web site. And USB 1.1 has a max
speed of only 1.5 MB/sec.

> I'm not saying you are doing this but... The USB ports on the front of
> most cases are NOT USB2 but the older and much slower USB ports. It's
> the ports on the back (directly off the mainboard) that are USB2.

May I ask what type of card reader you are using? I suspect the
MobileMate, although it is USB 2.0, doesn't support 30 MB/sec.

> Otherwise, you can check the serial number on the bottom of the card at
> Sandisk's web site to see if it's a forgery. Sandisk is the most forged
> flash card in the world.

I got mine from Adorama, who is an authorized Sandisk dealer, so I
doubt it's a forgery.

David J Taylor

Feb 22, 2009, 1:48:41 AM
void.no....@gmail.com wrote:
[]

> It is USB 2.0, according to Sandisk's web site. And USB 1.1 has a max
> speed of only 1.5 MB/sec.

USB 2.0 means nothing - it needs to be "USB 2.0 hi-speed". USB 2.0 can
still be "full-speed", but that's only 12Mb/s (1.5MB/s).

The fastest I've seen off any USB 2.0 solid-state device I've tested is
18.5MB/s, and that was a 4GB memory stick (and watch those memory sticks,
some write a /lot/ slower than they can read). From a hard-disk I've seen
a speed of 30MB/s (Seagate FreeAgent Go 320GB).

Would a Firewire card reader be faster?

David

nospam

Feb 22, 2009, 3:00:54 AM
In article <dz6ol.38286$Sp5....@text.news.virginmedia.com>, David J
Taylor <david-...@blueyonder.neither-this-bit.nor-this.co.uk> wrote:

> Would a Firewire card reader be faster?

yes, assuming the card itself isn't the limiting factor. check rob
galbraith's benchmarks.

David J Taylor

Feb 22, 2009, 3:23:51 AM

Checking:

http://www.robgalbraith.com/bins/reader_report_multi_page.asp?cid=6007-9392

It does look as if FireWire is faster. Pity he doesn't do SD reader tests
either on FireWire or on the PC. I gave up CF cards years ago, and I'm a
PC rather than a Mac user.

Thanks,
David

Ron Hunter

Feb 22, 2009, 3:59:09 AM
The nature of flash media is that they write slower than many other
types of memory. They are much faster than earlier versions, and will
probably get somewhat faster in the future. Right now, none of them are
fast enough to really make full use of the theoretical 480mbps speed of
USB 2.0 Hi-Speed, let alone Firewire speeds.
Still, they are fast enough for the purpose of saving digital images,
and are usually as fast as the camera hardware can write the data.

nospam

Feb 22, 2009, 5:27:44 AM
In article <7I-dnSv4vaV8jjzU...@giganews.com>, Ron Hunter
<rphu...@charter.net> wrote:

> Right now, none of them are
> fast enough to really make full use of the theoretical 480mbps speed of
> USB 2.0 Hi-Speed, let alone Firewire speeds.

actually quite a few are:

<http://www.robgalbraith.com/bins/reader_report_all.asp?cid=6007-9392&ca
rd_type=CompactFlash>

Message has been deleted

ASAAR

Feb 22, 2009, 8:52:34 AM
On Sun, 22 Feb 2009 06:48:41 GMT, David J Taylor wrote:

> The fastest I've seen off any USB 2.0 solid-state device I've tested is
> 18.5MB/s, and that was a 4GB memory stick (and watch those memory sticks,
> some write a /lot/ slower than they can read. From a hard-disk I've seen
> a speed of 30MB/s (Seagate FreeAgent Go 320GB).

A number of cards were tested here for both read and write speed
(and more) :

http://www.tomshardware.com/reviews/sdhc-memory-card,2143.html

David J Taylor

Feb 22, 2009, 10:08:19 AM

Thanks, although I don't buy SD cards that expensive, as none of my
cameras need them. The fastest reader I have is my Sharkoon:

http://www.hardware.info/en-UK/productdb/bGdkZZiamJXKaMg/viewproduct/Sharkoon_FlexiDrive_XC_SDHC/

Cheers,
David

Tzortzakakis Dimitrios

Feb 22, 2009, 11:44:53 AM

<void.no....@gmail.com> wrote in message
news:e4509441-905b-41d5...@e24g2000vbe.googlegroups.com...
I recently read an article in a German online magazine, www.spiegel.de, about how
a hard drive's free space, when formatted, does not exactly match the
advertised size. For example, my 320 GB Hitachi Deskstar SATA 2, 10400 rpm,
formatted is 298 GB (Win XP, Greek).
YMMV.


--
Tzortzakakis Dimitrios
major in electrical engineering
mechanized infantry reservist
hordad AT otenet DOT gr


ASAAR

Feb 22, 2009, 12:04:39 PM
On Sun, 22 Feb 2009 15:08:19 GMT, David J Taylor wrote:

> Thanks, although I don't buy SD cards that expensive, as none of my
> cameras need them. The fastest reader I have is my Sharkoon:

Hmm. I didn't check prices in the article, but I have a number of
the Kingston and Transcend SDHC (2,4,8 and 16GB) cards and some of
them appeared to be mentioned in the article. While I can't recall
what I paid for most of these, they were all quite reasonably
priced. I think J&R (local to me) had 4GB Kingston SDHC and 4GB
SanDisk CF card for about $6, but I paid about $15 several months
earlier for a slower 8 GB Class 2 SanDisk card because that was good
enough for my mp3 player/jpg viewer.

Jürgen Exner

Feb 22, 2009, 12:45:29 PM
"Tzortzakakis Dimitrios" <no...@nospam.com> wrote:
>I recently read an article on a german online magazine, www.spiegel.de , how
>a hard drive's free space, when formatted, does not exacltly match the
>advertised size. For example, my 320 GBHitachi deskstar sata 2 10400 rpm
>formatted is 298 GB (Win XP greek).

Well, dahhhhh! Big surprise.

First of all, any file system has administrative overhead. Where do you
suppose directories, free sector list, etc. are stored?

And second there's the difference between decimal and binary "kilo",
"mega", and "giga", the former based on 10^3 and used by HD
manufacturers, the latter based on 2^10 and commonly used in computer
science. That difference is 2.4%.
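
A quick illustration of how that gap compounds at each prefix step,
sketched in Python:

# Decimal vs. binary prefixes: the shortfall grows with each step.
for name, power in [("kilo", 1), ("mega", 2), ("giga", 3)]:
    decimal = 1000 ** power
    binary = 1024 ** power
    print("%s: %.1f%%" % (name, (binary - decimal) * 100.0 / decimal))
# prints roughly: kilo: 2.4%  mega: 4.9%  giga: 7.4%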

jue

John McWilliams

Feb 22, 2009, 12:55:53 PM
Larry Thong wrote:

> David J Taylor wrote:
>
>> The fastest I've seen off any USB 2.0 solid-state device I've tested
>> is 18.5MB/s, and that was a 4GB memory stick (and watch those memory
>> sticks, some write a /lot/ slower than they can read. From a
>> hard-disk I've seen a speed of 30MB/s (Seagate FreeAgent Go 320GB).
>
> The problem with USB in any format is it inherently sucks and was never
> designed for performance.

Good for keyboards and printers, but crap for volume.


>
>> Would a Firewire card reader be faster?
>

> Relative to what? Firewire is still a better form of USB but it isn't
> the cure all. When these manufacturers start using SAS or SATA for
> connecting devices, only then they will achieve high performance and
> reliability.

In the meantime FW800 is here, works, and, at least for most Mac users,
eminently available. eSATA requires more stuff.

--
John McWilliams

David J Taylor

Feb 22, 2009, 12:57:03 PM

7.3% at the gigabyte level.

1GB => 1073741824 bytes.

Cheers,
David

John McWilliams

Feb 22, 2009, 12:57:40 PM

Yes, this has been true since the beginning. A certain amount of
headroom is needed for drivers and secret sauce.

--
john mcwilliams

David J Taylor

Feb 22, 2009, 12:58:54 PM

It was just the class of card - I normally buy SanDisk Ultra, and not the
Extreme. Amongst the different manufacturers there is quite a speed
difference, and the USB memory sticks (as opposed to cards) can be very
slow writers.

Cheers,
David

David J Taylor

Feb 22, 2009, 1:16:32 PM
John McWilliams wrote:
> Larry Thong wrote:
[]

>> The problem with USB in any format is it inherently sucks and was
>> never designed for performance.
>
> Good for keyboards and printers, but crap for volume.


I have Windows systems here routinely handling 40GB/day 24 x 7 over USB
2.0, so it's not that bad.
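
For scale, that daily volume works out to a fairly modest average rate; a
back-of-envelope sketch in Python:

# 40 GB spread evenly over 24 hours, as an average sustained rate.
gb_per_day = 40
mb_per_sec = gb_per_day * 1024.0 / (24 * 60 * 60)
print("%.2f MB/s average" % mb_per_sec)   # about 0.47 MB/s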

I don't think anyone claims it provides the ultimate disk performance, but
the 100-300GB 2.5-inch portable drives are very handy backup devices....

David

Jürgen Exner

Feb 22, 2009, 1:30:58 PM
>Jürgen Exner wrote:
>> And second there's the difference between decimal and binary "kilo",
>> "mega", and "giga", the former based on 10^3 and used by HD
>> manufacturers, the later based on 2^10 and commonly used in computer
>> science. That difference is 2.4%.
>
>7.3% at the gigabyte level.
>
>1GB => 1073741824 bytes.

True, you are right. I was thinking kB only.

A long time ago in the good old days it was customary to use lower
case to denote decimal and upper case to denote binary. So 1kB would be
1000 bytes and 1KB would be 1024 bytes.

jue

D-Mac

Feb 22, 2009, 6:39:57 PM
void.no....@gmail.com wrote:

>
> May I ask what type of card reader you are using? I suspect the
> MobileMate, although it is USB 2.0, doesn't support 30 MB/sec.
>

No idea of the brand of reader I have. It's incorporated in a floppy
drive and is part of the PC. It is quite a lot faster than the "all in one"
reader I used before upgrading. I believe it's about the fastest reader
in the studio of 4 PCs, although the Mac seems to be even faster and it,
too, has a built-in reader.

D-Mac.info

Ron Hunter

Feb 23, 2009, 4:15:17 AM

The theoretical speed of Firewire800 is 800mbps, which would be 100
megabytes/second. None of the cards approaches that. They don't even
meet the lower 480mbps speed of USB 2.0 Hi-Speed. Still, 47 MB/s is a
VERY good data rate, and I suspect it exceeds the speed at which any
current camera can send data, or take it.
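
For reference, the nominal bus rates in megabits divide by eight to give a
raw byte-per-second ceiling; real payload throughput is lower because of
protocol overhead. A quick Python tabulation of the nominal figures only:

# Nominal bus rates in Mb/s; dividing by 8 gives the raw ceiling in MB/s.
# Real payload throughput is lower because of protocol overhead.
buses = [("USB 1.1 full-speed", 12),
         ("USB 2.0 hi-speed", 480),
         ("FireWire 400", 400),
         ("FireWire 800", 800)]
for name, mbit in buses:
    print("%-18s %4d Mb/s = %5.1f MB/s raw" % (name, mbit, mbit / 8.0))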

Ron Hunter

Feb 23, 2009, 4:16:23 AM
Larry Thong wrote:
> David J Taylor wrote:
>
>> The fastest I've seen off any USB 2.0 solid-state device I've tested
>> is 18.5MB/s, and that was a 4GB memory stick (and watch those memory
>> sticks, some write a /lot/ slower than they can read. From a
>> hard-disk I've seen a speed of 30MB/s (Seagate FreeAgent Go 320GB).
>
> The problem with USB in any format is it inherently sucks and was never
> designed for performance.
>
>> Would a Firewire card reader be faster?
>
> Relative to what? Firewire is still a better form of USB but it isn't the
> cure all. When these manufacturers start using SAS or SATA for connecting
> devices, only then they will achieve high performance and reliability.
>
Firewire is generally faster than USB 2.0, but as long as it is fast
enough for the desired purpose, differences are irrelevant.

Ron Hunter

Feb 23, 2009, 4:22:45 AM
Yes. Raw figures for HD storage count the overhead involved with the
formatting, such as sector headers, directory structures, and allocation
tables. If you live in a Windows world, the losses are VASTLY greater,
with system space which can run into multi-gigabytes for things like
swapfile, and 'system resource' areas.

whisky-dave

Feb 23, 2009, 6:41:40 AM

"John McWilliams" <jp...@comcast.net> wrote in message
news:gns3mk$bv4$2...@news.motzarella.org...

Not forgetting the magic smoke; if you see that then the chances are
something's really fraked up ;-)

David J Taylor

Feb 23, 2009, 6:54:37 AM
Ron Hunter wrote:
[]

> Yes. Raw figures for HD storage count the overhead involved with the
> formatting, such as sector headers, directory structures, and
> allocation tables. If you live in a Windows world, the losses are
> VASTLY greater, with system space which can run into multi-gigabytes
> for things like swapfile, and 'system resource' areas.

UNIX and its variants need swapfile space and metadata space as well; it's
not just Windows. Things like System Restore on Windows can be turned
off if you need to save space.

One other recommendation I would make is not to keep disks too full -
perhaps something like 75% full - so that they can be defragmented more
easily for best performance.
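
A quick way to check how full a drive is, sketched with Python 3.3+'s
shutil.disk_usage; the drive path below is just a placeholder:

# Check how full a drive is (Python 3.3+); the path is a placeholder.
import shutil

usage = shutil.disk_usage("C:\\")          # or "/" on Unix
percent_used = 100.0 * usage.used / usage.total
print("%.0f%% used, %.1f GB free" % (percent_used, usage.free / 1e9))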

At least disk storage space is cheap now - I hate to think how much I paid
for my first hard disk in 1978 (?) - a top of the range 16MB or 20MB unit
IIRC!

Cheers,
David

David J Taylor

Feb 23, 2009, 6:56:06 AM
Ron Hunter wrote:
[]

> The theoretical speed of Firewire800 is 800mbps, which would be 100
> megabytes/second. None of the cards approaches that. They don't even
> meet the lower 480mbps speed of USB 2.0 Hi-Speed. Still, 47 MB/s is a
> VERY good data rate, and I suspect it exceeds the speed at which any
> current camera can send data, or take it.

For me, where the speed matters most is not at the camera, but when
downloading 4GB of photos from SD card to the PC.

Cheers,
David

nospam

Feb 23, 2009, 7:19:04 AM
In article <krKdnQm9Mu279D_U...@giganews.com>, Ron Hunter
<rphu...@charter.net> wrote:

> >> Right now, none of them are
> >> fast enough to really make full use of the theoretical 480mbps speed of
> >> USB 2.0 Hi-Speed, let alone Firewire speeds.
> >
> > actually quite a few are:
> >
> > <http://www.robgalbraith.com/bins/reader_report_all.asp?cid=6007-9392&ca
> > rd_type=CompactFlash>
>
> The theoretical speed of Firewire800 is 800mbps, which would be 100
> megabytes/second. None of the cards approaches that.

true, but they exceed fw400 & usb2

> They don't even
> meet the lower 480mbps speed of USB 2.0 Hi-Speed. Still, 47 MB/s is a
> VERY good data rate, and I suspect it exceeds the speed at which any
> current camera can send data, or take it.

47 mb/s is *faster* than what both usb2 and firewire 400 can provide.
due to a lot of overhead in usb2, you will never see 48 mbytes/sec.
30ish is typical. firewire 400 easily reaches 35-40 mbytes/sec.

Floyd L. Davidson

Feb 23, 2009, 7:28:42 AM
>Ron Hunter wrote:
>[]
>> Yes. Raw figures for HD storage count the overhead involved with the
>> formatting, such as sector headers, directory structures, and
>> allocation tables. If you live in a Windows world, the losses are
>> VASTLY greater, with system space which can run into multi-gigabytes
>> for things like swapfile, and 'system resource' areas.
>
>UNIX and its variant need swapfile space and metadata space as well, it's
>not just Windows. Things like System Restore on Windows can be turned
>off, if you must to save space.

That's quite true. System space isn't the significant
difference between OS's, though it is true that Unix
variations have always made an effort to minimize the
amount of space used. (Things like reducing the amount
of swap space necessary by making executables
non-writable, which allows the executable binary to be
paged in only as needed. Because of that the program
area of a executable need never be written to swap.)

The historical difference between disk usage on Unix
and Windows had to do with file system block allocation.
Microsoft was never one to look ahead and design today's
OS with the idea in mind that tomorrow's hardware would
be different. Unix on the other hand was designed right
from the start by people researching how to write an OS
to match tomorrow's needs. Hence when a large
Winchester disk was 10Mb, MSDOS used a filesystem that
divided up a 10Mb disk very nicely... and Unix used a
filesystem that could divide up a 2G disk very nicely.
When 500Mb disks became available those MS filesystems
were horrible at allocating space efficiently (the first
byte allocated caused a huge block to be locked up).

One of the things I recall getting a chuckle out of was
a fellow from SGI explaining how, as they were trying to
take over the computer imaging market 15 years or so
ago, they realized that at some point they would have...
*5 Gigabyte* filesystems! And of course that meant
programs like /fsck/ (the File System CHeck utility)
needed to be rewritten simply because, as they originally
worked, it would take literally all day to reboot a
crashed system if it had two or three 5 Gig filesystems
that required checking. (Today a 50 Gig filesystem can
be checked with fsck in a few minutes at worst.)

>One other recommendation I would make is not to keep disks too full -
>perhaps something like 75% full - so that they can be defragmented more
>easily for best performance.

That's another problem with even the current MS
filesystems.

On a Unix system 95% would be about the maximum. Plus
defragmentation is absolutely an unnecessary waste of
time on any decently written filesystem (as long as you
keep it below 95% full, though it doesn't make a lot of
difference after that either).

>At least disk storage space is cheap now - I hate to think how much I paid
>for my first hard disk in 1978 (?) - a top of the range 16MB or 20MB unit
>IIRC!

Lordy yes! And just imagine what we'll be using in
another decade. Mind boggling...

--
Floyd L. Davidson <http://www.apaflo.com/floyd_davidson>
Ukpeagvik (Barrow, Alaska) fl...@apaflo.com

David J Taylor

Feb 23, 2009, 8:57:24 AM
Floyd L. Davidson wrote:
[]

> That's quite true. System space isn't the significant
> difference between OS's, though it is true that Unix
> variations have always made an effort to minimize the
> amount of space used. (Things like reducing the amount
> of swap space necessary by making executables
> non-writable, which allows the executable binary to be
> paged in only as needed. Because of that the program
> area of a executable need never be written to swap.)

Also true in Windows, I believe, where non-writeable portions of the
executable are not written to swap.

> The historical difference between disk useage on Unix
> and Windows had to do with file system block allocation.
> Microsoft was never one to look ahead and design today's
> OS with the idea in mind that tomorrow's hardware would
> be different. Unix on the other had was designed right
> from the start by people researching how to write an OS
> to match tomorrow's needs. Hence when a large
> Winchester disk was 10Mb, MSDOS used a filesystem that
> divided up a 10Mb disk very nicely... and Unix used a
> filesystem that could divide up a 2G disk very nicely.
> When 500Mb disks became available those MS filesystems
> were horrible at allocating space efficiently (the first
> byte allocated caused a huge block to be locked up).

Although today's file system from Microsoft, NTFS, no longer has that
limitation.

> One of the things I recall getting a chuckle out of was
> a fellow from SGI explaining how, as they were trying to
> take over the computer imaging market 15 years or so
> ago, they realized that at some point they would have...
> *5 Gigabyte* filesystems! And of course that meant
> programs like /fsck/ (the File System CHeck utility)
> needed to be rewritten simply because as the originally
> worked it would take literally all day to reboot a
> crashed system if it had two or three 5 Gig filesystems
> that required checking. (Today a 50 Gig filesystem can
> be checked with fsck in a few minutes at worst.)

Yes, it was bad enough with a 500MB disk - UNIX took about 45 minutes to
reboot!

>> At least disk storage space is cheap now - I hate to think how much
>> I paid for my first hard disk in 1978 (?) - a top of the range 16MB
>> or 20MB unit IIRC!
>
> Lordy yes! And just imagine what we'll be using in
> another decade. Mind boggling...

.. and what /will/ we be storing there? The change I've seen in my own
use is that disk is now so cheap that I use disk for backup. There's no
longer the need to clean off old stuff to make way for the new, and you
can use simple, external, portable USB disks for backup. I recently got a
320GB one for a good price. All the photos I've ever taken don't more
than half fill that disk, to a first order. I still do backup onto DVD,
but I wonder for how much longer.

Cheers,
David

Floyd L. Davidson

Feb 23, 2009, 4:11:28 PM
>Floyd L. Davidson wrote:
>[]
>> That's quite true. System space isn't the significant
>> difference between OS's, though it is true that Unix
>> variations have always made an effort to minimize the
>> amount of space used. (Things like reducing the amount
>> of swap space necessary by making executables
>> non-writable, which allows the executable binary to be
>> paged in only as needed. Because of that the program
>> area of a executable need never be written to swap.)
>
>Also true in Windows, I believe, where non-writeable portions of the
>executable are not written to swap.

Only in more recent versions. But the use of ELF binaries,
as opposed to COFF, has been in Linux for example, for a long
long time.

Any Unix that took 45 minutes to boot wasn't being
admin'd correctly. That would be absurd!

Maybe you were using SCO UNIX? :-)

Or something else that was actually a 10-15 year old
kernel.

>>> At least disk storage space is cheap now - I hate to think how much
>>> I paid for my first hard disk in 1978 (?) - a top of the range 16MB
>>> or 20MB unit IIRC!
>>
>> Lordy yes! And just imagine what we'll be using in
>> another decade. Mind boggling...
>
>.. and what /will/ we be storing there? The change I've seen in my own
>use if that disk is now so cheap that I use disk for backup. There's no
>longer the need to clean off old stuff to make way for the new, and you
>can use simple, external, portable USB disks for backup. I recently got a
>320GB one for a good price. All the photos I've ever taken don't more
>than half fill that disk, to a first order. I still do backup onto DVD,
>but I wonder for how much longer.

I'm running two machines, each with multiple terabyte
disk arrays. Not only is disk backup useful... I can't
imagine the cost of anything else! (I'm just mind
boggled at the idea of how big these things are though.
50G partitions...)

David J Taylor

Feb 23, 2009, 7:40:07 PM
Floyd L. Davidson wrote:
[]

> Only in more recent versions. But the use of ELF binaries,
> as opposed to COFF, has been in Linux for example, for a long
> long time.

I don't regard any version of Windows based on the 16-bit kernel as
anything other than a toy. I'm talking Windows NT, 2000, XP etc.
Windows-32 has had NTFS since 1992-1993, some 17 years.

[]


> Any Unix that took 45 minutes to boot wasn't being
> admin'd correctly. That would be absurd!
>
> Maybe you were using SCO UNIX? :-)
>
> Or something else that was actually a 10-15 year old
> kernel.

Yes, this must have been about 12-15 years back, when 450MB was a large
disk!

Cheers,
David

Floyd L. Davidson

Feb 23, 2009, 10:15:59 PM
>Floyd L. Davidson wrote:
>[]
>> Only in more recent versions. But the use of ELF binaries,
>> as opposed to COFF, has been in Linux for example, for a long
>> long time.
>
>I don't regard any version of Windows based on the 16-bit kernel as
>anything other than a toy. I'm talking Windows NT, 2000, XP etc.
>Windows-32 has had NTFS since 1992-1993, some 17 years.

NTFS isn't the issue. A kernel that pages the binary is
what made a difference. I assume that did begin with
Windows NT for the Microsoft OS's.

>[]
>> Any Unix that took 45 minutes to boot wasn't being
>> admin'd correctly. That would be absurd!
>>
>> Maybe you were using SCO UNIX? :-)
>>
>> Or something else that was actually a 10-15 year old
>> kernel.
>
>Yes, this must have been about 12-15 years back, when 450MB was a large
>disk!

But I mean the kernel in use at that time would have had
to have been 10-15 years old *then*. Otherwise,
somebody wasn't doing sys admin right.

In practice, if you want a fast booting system, Linux
even today only requires about a 200Mb root partition,
which /fsck/ can zip through in seconds before booting
the system.

Anything that takes longer than that is doing so *only*
because the systems admin has decided that it is okay to
take a long time to boot. And 45 minutes has always
been ridiculous, and I'm hard pressed to even imagine
how that could have been done. I knew of machines with
3Gb of disk that booted faster than that in the late
80s!

David J Taylor

Feb 24, 2009, 2:27:18 AM
Floyd L. Davidson wrote:
[]

> NTFS isn't the issue. A kernel that pages the binary is
> what made a difference. I assume that did begin with
> Windows NT for the Microsoft OS's.

NTFS makes much better use of disk space than the large clusters in FAT32,
and we were talking a little about disk usage.

I suspect, but would need to read the books to find out, that as soon as
386-style virtual memory came into use that read-only pages were
available, so possibly Windows-16 version 3.1, but I still think of those
as "toy" versions. Windows NT has always been like this. If you haven't
read Windows Internals 4, you may find it interesting.

http://www.amazon.com/exec/obidos/ASIN/0735619174/systemsinternals?creative=327641&camp=14573&link_code=as1

I see that version 5 was due last month - and now includes Vista.

> Anything that takes longer than that is doing so *only*
> because the systems admin has decided that it is okay to
> take a long time to boot. And 45 minutes has always
> been ridiculous, and I'm hard pressed to even imagine
> how that could have been done. I knew of machines with
> 3Gb is disk that booted faster than that in the late
> 80s!

This was a 450MB disk which had not been shut down properly - i.e. the OS
had crashed, so fsck was run. Normally it booted much more quickly, of
course.

Cheers,
David

Floyd L. Davidson

Feb 24, 2009, 3:13:37 AM
>Floyd L. Davidson wrote:
>[]
>> NTFS isn't the issue. A kernel that pages the binary is
>> what made a difference. I assume that did begin with
>> Windows NT for the Microsoft OS's.
>
>NTFS makes much better use of disk space than the large clusters in FAT32,
>and we were talking a little about disk usage.

That was from a discussion of reducing the size of a
swap area by paging non-writable binaries. It has
*nothing* directly to do with the file system.

The move from a FAT32 DOS type of file system to
something that can allocate smaller blocks is a
different topic, though certainly a valid one in
the overall context of the discussion.

>I suspect, but would need to read the books to find out, that as soon as
>386-style virtual memory came into use that read-only pages were
>available, so possibly Windows-16 version 3.1, but I still think of those
>as "toy" versions. Windows NT has always been like this. If you haven't
>read Windows Internals 4, you may find it interesting.
>
> http://www.amazon.com/exec/obidos/ASIN/0735619174/systemsinternals?creative=327641&camp=14573&link_code=as1
>
>I see that version 5 was due last month - and now includes Vista.
>
>> Anything that takes longer than that is doing so *only*
>> because the systems admin has decided that it is okay to
>> take a long time to boot. And 45 minutes has always
>> been ridiculous, and I'm hard pressed to even imagine
>> how that could have been done. I knew of machines with
>> 3Gb is disk that booted faster than that in the late
>> 80s!
>
>This was a 450MB disk which had not been shut down properly - i.e. the OS
>had crashed, so fsck was run. Normally it booted much more quickly, of
>course.

But unless the disk was bad, there is simply *no* way
that fsck would have taken any 45 minutes to run through
a 450Mb disk unless something else was terribly wrong.

If the system were properly administered it would have
had less than 50 Mb in a root partition that would have
been usable in seconds and then a relatively "normal"
boot process would have taken place even while the rest
of the disk is being fixed. But 45 minutes sounds like
the disk was bad and had *thousands* of "bad" files.
Nothing can help that, except a new disk.

David J Taylor

Feb 24, 2009, 3:31:55 AM
Floyd L. Davidson wrote:
> "David J Taylor"
[]

>> This was a 450MB disk which had not been shut down properly - i.e.
>> the OS had crashed, so fsck was run. Normally it booted much more
>> quickly, of course.
>
> But unless the disk was bad, there is simply *no* way
> that fsck would have taken any 45 minutes to run through
> a 450Mb disk unless something else was terribly wrong.
>
> If the system were properly administered it would have
> had less than 50 Mb in a root partition that would have
> been usable in seconds and then a relatively "normal"
> boot process would have taken place even while the rest
> of the disk is being fixed. But 45 minutes sounds like
> the disk was bad and had *thousands* of "bad" files.
> Nothing can help that, except a new disk.

Floyd, I thought it ought to be faster as well, but the system suppliers
said this was typical. The disk was OK; it was just that the OS crash caused
fsck to be run, and that took the time.

Fortunately, I now run only Windows and FreeBSD, and such problems are but
a distant memory.

Cheers,
David

Ron Hunter

Feb 24, 2009, 4:03:50 AM
One quibble.
NTFS still has a minimum file system block, which on my drives is 4k. I
often wonder why no one has written a good 'small file driver' that
would reduce this waste. Guess with HD prices approaching $100/terabyte
no one sees a need.

David J Taylor

Feb 24, 2009, 4:55:08 AM
Ron Hunter wrote:
[]

> One quibble.
> NTFS still has a minimum file system block, which on my drives is 4k.
> I often wonder why no one has written a good 'small file driver' that
> would reduce this waste. Guess with HD prices approaching
> $100/terabyte no one sees a need.

The UNIX systems I have seen are worse - minimum block size 8KB - although
whether that's typical I don't know. You can format NTFS drives with a
512-byte allocation unit if you want.

Also remember that very small files occupy /no/ space on NTFS, the file
contents are actually written into the directory block, so the data itself
occupies no extra space on disk. NTFS is quite good in practice.

Cheers,
David

Floyd L. Davidson

Feb 24, 2009, 3:13:31 PM
>Ron Hunter wrote:
>[]
>> One quibble.
>> NTFS still has a minimum file system block, which on my drives is 4k.
>> I often wonder why no one has written a good 'small file driver' that
>> would reduce this waste. Guess with HD prices approaching
>> $100/terabyte no one sees a need.
>
>The UNIX systems I have seen are worse - minimum block size 8KB - although

Actually, both have a default cluster size of 4KB, and
that is not a design flaw in either case. Smaller
blocks mean more allocation overhead, larger mean more
unused space per file. It's a tradeoff, but for general
purpose filesystems a 4KB cluster is optimum.
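
The cost of larger clusters can be estimated directly: each file wastes,
on average, about half a cluster of "slack". A rough Python sketch for a
directory tree; the path is a placeholder, and it ignores filesystem
tricks such as storing very small files inside metadata structures:

# Rough estimate of slack space for a tree of files at a given cluster size.
import os

CLUSTER = 4096                      # try 512, 4096, 8192, ...
ROOT = "/home/user/photos"          # placeholder directory

slack = total = 0
for dirpath, dirnames, filenames in os.walk(ROOT):
    for name in filenames:
        size = os.path.getsize(os.path.join(dirpath, name))
        total += size
        slack += (-size) % CLUSTER  # unused bytes in the file's last cluster
print("%.1f MB slack on %.1f MB of data" % (slack / 1e6, total / 1e6))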

>whether that's typical I don't know. You can format NTFS drives with a
>512-byte allocation unit if you want.

There are filesystems (various Reiser filesystems for
example) which Unix systems can use that do "tail
packing", and basically do not waste space, plus they
are extremely efficient with many small size files.
Others (ext3 for example) have decided specifically not
to implement that in favor of avoiding the inefficiency
in other areas.

>Also remember that very small files occupy /no/ space on NTFS, the file
>contents are actually written into the directory block, so the data itself
>occupies no extra space on disk.

What you described would mean that each directory block
is pre-allocated with a usually unused huge amount of
space. That would be grossly inefficient.

Instead, space is allocated in one place instead of
another. Either way it occupies "extra space on disk".

The difference is that directory blocks are allocated in
smaller sizes, so there is indeed *less* extra space.

>NTFS is quite good in practice.

NTFS is good in comparison to older filesystems from
MSDOS, but it has way too many roots in back
compatibility. (Does it still require defragging???)

David J Taylor

Feb 25, 2009, 2:44:30 AM
Floyd L. Davidson wrote:
> "David J Taylor"
[]
>> Also remember that very small files occupy /no/ space on NTFS, the
>> file contents are actually written into the directory block, so the
>> data itself occupies no extra space on disk.
>
> What you described would mean that each directory block
> is pre-allocated with a usually unused huge amount of
> space. That would be grossly inefficient.
>
> Instead, space is allocated in one place instead of
> another. Either way it occupies "extra space on disk".
>
> The difference is that directory blocks are allocated in
> smaller sizes, so there is indeed *less* extra space.

I can't find the exact figures, Floyd, but I think that with NTFS the
directory is allocated in 1KB units, so you might as well use any spare
space for the file's data rather than using an extra disk cluster. We're
not talking huge files or gross inefficiencies.

Cheers,
David
