CP/M free space

Richard Plinston

May 3, 2001, 2:16:27 PM

Chuck F (cbfal...@my-deja.com) (cbfal...@XXXXworldnet.att.net)
wrote:
> Axel Berger wrote:
>
> > It is perhaps not politically correct to say so here, but I actually
> > consider FAT to be an improvement. After all, we are talking a time
> > when RAM was really scarce. The only way to make the CPM system work is
> > the way it is actually done, by keeping an allocation map in RAM. On
> > big disks with not too big allocation blocks - and with an OS designed
> > to work in as little as 16 kB they had better be small - this is a lot
> > of RAM taken up. Now you can use FAT entirely on disk. Of course it
> ... snip ...
>
> On the contrary, the CP/M map takes one bit per block.

With a maximum number of blocks for an HD this may take up 8Kb of the
available RAM. Keeping the blocks to say 8000 for a hard disk keeps
this down to 1 Kb, but this is right off the top of the TPA. On 16-bit
systems, or even on MP/M, this is not a problem (it is held in the banked
memory of MP/M).
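
To put rough numbers on that, here is a quick back-of-the-envelope sketch
(mine, purely for illustration; it assumes nothing beyond one bit per block):

#include <stdio.h>

/* Size of CP/M's in-RAM allocation vector at one bit per block,
   for a couple of illustrative block counts. */
int main(void)
{
    unsigned long block_counts[] = { 65536UL, 8000UL };

    for (int i = 0; i < 2; i++) {
        unsigned long bytes = (block_counts[i] + 7) / 8;  /* one bit per block */
        printf("%6lu blocks -> %5lu bytes of allocation vector\n",
               block_counts[i], bytes);
    }
    return 0;
}

65536 blocks (the most that 16-bit block numbers allow) comes to 8192 bytes,
and 8000 blocks comes to 1000 bytes, hence the 8Kb and 1Kb figures above.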

> The FAT
> system takes 12, 16, or 32 bits per block. Keeping them in memory
> is essential to any reasonable speed.

But with a FAT system it is only ever necessary to hold _one_ free space
block number; it uses this to get the next one as it releases the
cluster to the requesting process. Certainly buffer space is required
for the FAT table, but using the 'high water mark' means that new files
are written (and take space) in a localised area (at least until the end
of the disk is reached, when it has to search for free blocks, which slows
the whole thing down).
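
Roughly, the allocator only needs to carry something like this between
requests (a sketch with invented names, ignoring how FAT12/FAT16 entries are
actually packed on disk):

#include <stdint.h>

#define FAT_FREE 0u                 /* a zero entry marks a free cluster here */

struct fat {
    uint16_t *entry;                /* one entry per cluster, already unpacked */
    uint32_t  clusters;             /* table size; data clusters are 2..clusters-1 */
    uint32_t  next_free;            /* the single 'high water mark' hint, >= 2 */
};

/* Hand out the next free cluster, scanning forward from the hint and
   wrapping once at the end of the table.  Returns 0 when the disk is full
   (cluster 0 is never allocatable in FAT). */
uint32_t fat_alloc_cluster(struct fat *f)
{
    uint32_t span = f->clusters - 2;

    for (uint32_t n = 0; n < span; n++) {
        uint32_t c = 2 + (f->next_free - 2 + n) % span;
        if (f->entry[c] == FAT_FREE) {
            f->entry[c] = 0xFFFF;   /* end-of-chain until the file grows */
            f->next_free = (c + 1 < f->clusters) ? c + 1 : 2;
            return c;
        }
    }
    return 0;
}

The free-space state carried from one allocation to the next is one number,
not a bit map; the price is the linear search once the high water mark wraps.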

> FAT exists because Gates did it that way for his earliest Basic,
> which pre-dated general use of CP/M. It is the design of an 18
> year old.

Nah, he must have stolen it from someone else ;-)

I believe that 'his' Basic for the Altair was based on the source code of a
DEC Basic, whose source in PDP assembler was certainly available
at the computer centre at Harvard that he frequented. This would have
been relatively easy to recode for the 8080.

> Every file system has its plusses and minuses. CP/M is quite safe
> when changing floppies because of the allocation map check sum; on
> the other hand a floppy change under FAT can destroy everything.
> However FAT includes a disk description in the boot sector, and
> CP/M does not.

That was certainly a large problem in MS-DOS up through version 2.11.
It was solved in 3.something though, wasn't it?

Actually I recall that with 3.x there was no problem in changing a
diskette with DOS because it remembered the identity of it and refused to
work with the new diskette. What _was_ a serious problem was if you
took a diskette to another machine, added a file, and then returned it
to the original machine. DOS would then quite happily trash the whole
thing because it 'knew' it was the same diskette. CP/M (and
Concurrent-CP/M-86 etc) used the directory hash and noticed it had
changed.

In the days of 'floppy-net' for DOS machines there had to be a strict
protocol for handling file transfers.

Harold Bower

May 3, 2001, 4:17:24 PM

Richard Plinston wrote:
>
> Chuck F (cbfal...@my-deja.com) (cbfal...@XXXXworldnet.att.net)
> wrote:
> > Axel Berger wrote:
> >
> > > It is perhaps not politically correct to say so here, but I actually
> > > consider FAT to be an improvement. After all, we are talking a time
> > > when RAM was really scarce. The only way to make the CPM system work is
> > > the way it is actually done, by keeping an allocation map in RAM. On
> > > big disks with not too big allocation blocks - and with an OS designed
> > > to work in as little as 16 kB they had better be small - this is a lot
> > > of RAM taken up. Now you can use FAT entirely on disk. Of course it
> > ... snip ...
> >
> > On the contrary, the CP/M map takes one bit per block.
>
> With a maximum number of blocks for an HD this may take up 8Kb of the
> available RAM. Keeping the blocks to say 8000 for a hard disk keeps
> this down to 1 Kb but this is right off the top of the TPA. On 16 bit
> systems, or even on MP/M this is not a problem (it is held in the banked
> memory of MP/M.

Also, the CP/M 'Block' size (related to the MS-DOS 'Cluster' size) was
variable beginning with CP/M 2.2 and could go up to 64K bytes per
Block. On partitions of up to 20 MB I commonly use 4K while others use
8K. This winds up allowing a very large drive (or partition) size.

> > The FAT
> > system takes 12, 16, or 32 bits per block. Keeping them in memory
> > is essential to any reasonable speed.
>
> But with a FAT system it is only ever necessary to hold _one_ free space
> block number, it uses this to get the next one as it releases the
> cluster to the requesting process. Certainly buffer space is required
> for the FAT table but using the 'high water mark' means that new files
> are written (and take space) in a localised area (at least until the end
> of disk is reached then it has to search for free blocks which slows the
> whole thing down.

Yes, but at the cost of error recovery, as Chuck was pointing out. I too
suffered the trashing of disks in MS-DOS but not in CP/M.

[snip]


> > Every file system has its plusses and minuses. CP/M is quite safe
> > when changing floppies because of the allocation map check sum; on
> > the other hand a floppy change under FAT can destroy everything.
> > However FAT includes a disk description in the boot sector, and
> > CP/M does not.

That is, once the original CP/M glitch was corrected by Bridger Mitchell,
a fix which (with his permission) we incorporated into ZSDOS.

> That was certainly a large problem in MS-DOS up through versions 2.11.
> It was solved in 3.something though wasn't it ?
>
> Actually I recall that with 3.x there was no problem in changing a
> diskette with DOS because it remebered the identity of it and refused to
> work with the new diskette. What _was_ a serious problem was if you
> took a diskette to another machine and added a file and then returned it
> to the original machine. DOS would then quite happily trash the whole
> thing because it 'knew' it was the same diskette. CP/M (and
> Concurrent-CP/M-86 etc) used the directory hash and noticed it had
> changed.
>
> In the days of 'floppy-net' for DOS machines there had to be a strict
> protocol for handling file transfers.

Why don't we try the 32MB disk size on for size as well? MSDOS did not
reliably fix this until 5.x, while we had fixes in the CP/M world well
before, by computing absolute disk sector addresses with 24-bit or wider
math. It did not require any changes to disk formats at all (contrary
to the FAT method), but simply the addition of extended math to the BIOS
(primarily), which was loaded at boot time. I had a Z-System merrily
perking with five 20 MB partitions on a 100 MB drive, and Jim Thale had a
200 MB drive merrily perking, well before MSDOS fixed the 32MB
barrier...all with the natural definitions of CP/M 2.2 in the DPH/DPB
structure.
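
In outline the BIOS change amounts to nothing more than doing the sector
arithmetic in something wider than 16 bits, along these lines (a C sketch
purely for illustration; the real thing was a few lines of assembler in the
BIOS, and the names here are made up):

#include <stdint.h>

/* Map a CP/M track/sector pair onto an absolute sector of the physical
   drive.  Nothing on the media changes; the only requirement for big
   partitions is that the multiply and add do not overflow 16 bits. */
uint32_t abs_sector(uint16_t track, uint16_t sector,
                    uint16_t sectors_per_track, uint32_t partition_base)
{
    return partition_base + (uint32_t)track * sectors_per_track + sector;
}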

Hal

nos...@nouce.bellatlantic.net

May 3, 2001, 7:58:55 PM

On Thu, 03 May 2001 19:16:27 +0100, Richard Plinston
<rip...@Azonic.co.nz> wrote:

>> > the way it is actually done, by keeping an allocation map in RAM. On
>> > big disks with not too big allocation blocks - and with an OS designed
>> > to work in as little as 16 kB they had better be small - this is a lot
>> > of RAM taken up. Now you can use FAT entirely on disk. Of course it
>> ... snip ...
>>
>> On the contrary, the CP/M map takes one bit per block.
>
>With a maximum number of blocks for an HD this may take up 8Kb of the
>available RAM. Keeping the blocks to say 8000 for a hard disk keeps
>this down to 1 Kb but this is right off the top of the TPA. On 16 bit
>systems, or even on MP/M this is not a problem (it is held in the banked
>memory of MP/M.


One BIT per block, and with the typical block size for larger-than-1MB
disks (2K, or more commonly 4K) this is not that bad, as disks were
also limited to 8MB!!! That's 65536 128-byte logical sectors, and using a
4K allocation size that would be 2048 BITS or 256 BYTES, not that bad
in reality.

The average system with a 10MB disk usually had 52-56K of free RAM.

Allison

>> FAT exists because Gates did it that way for his earliest Basic,
>> which pre-dated general use of CP/M. It is the design of an 18
>> year old.

No it didn't!

>I believe that 'his' Basic for the Altair was based on source code for a
>DEC Basic for which the source in PDP assembler was certainly available
>at the computer centre at Harvard that he frequented. This would have
>been relatively easy to recode for 8080.

Very difficult to go from the PDP-10 (a 36-bit machine) or worse the PDP-11
(a very CISC 16-bit machine) to the lowly 8080.

>> However FAT includes a disk description in the boot sector, and
>> CP/M does not.

Not so. That was done later to facilitate interchangeability. Later
CP/M 2.x machines had that as well. It was a matter of standardization
that PCs enforced. CP/M machines could and did allocate parts of the
disk for media description and self-configuration of drivers.

Allison


Richard Plinston

May 4, 2001, 2:28:49 PM

> Why don't we try the 32MB disk size on for size as well?

First, it was a limit on partition size. MS-DOS could have larger disks
if appropriately partitioned. It was also not a problem for OEM versions.
I used a Wyse machine that had MS-DOS 3.1 and an 80 MByte drive in a
single partition. The problem only occurred in MS-branded versions and
PC-DOS; most OEMs such as Wyse, Compaq, etc. did not have the limit at
all.

Fortunately the mechanism was compatible with DR-DOS 3.4 and
Concurrent-DOS, the latter being the main OS on that machine.

> MSDOS did not reliably fix this until 5.x

Actually it was the IBM rewrite that fixed this in the MS/IBM versions.
IBM were annoyed that they were stuck with 32MB partitions while other
OEMs were not, so they incorporated a fix into PC-DOS 3.4/4.0 and gave the
source back to MS, who then broke it, released it as MS-DOS 4.01, and
announced to the world that they had "finally broken the 32Mb barrier".

Richard Plinston

May 5, 2001, 3:53:19 AM

> >With a maximum number of blocks for an HD this may take up 8Kb of the
> >available RAM. Keeping the blocks to say 8000 for a hard disk keeps
> >this down to 1 Kb but this is right off the top of the TPA.

> One BIT per block

Exactly, and with 16-bit block numbers the "maximum number of blocks for
an HD" is 64K, thus 64K BITS, and thus this bit array "may take up to
8Kb of the available RAM".

> and with the typical block size for larger than 1mb
> disks (2k or more commonly 4k) this is not that badas disks were
> also limited to 8mb!!! thats 65536 128byte logical sectors and using
> 4k allocation size that would be 2048 BITS or 256 BYTES not that bad
> in reality.

Exactly, as I said: "keeping the blocks to say 8000 [in number] for a
hard disk keeps this down to 1 Kb". I ran a machine with CP/M format
drives that had two 50 MByte drives, a 20 MByte drive and a tape streamer.
I also have used CP/M drives of 80Mb and 160MByte, though not with 2.2
of course.

> The average system with a 10mb disk was usually 52-56k free ram.

Exactly why you need to keep the free space bit array down in size on
CP/M systems.

> >DEC Basic for which the source in PDP assembler

> Very difficult to go from PDP-10 (a 36bit machine)

It is not a PDP-10, it is DEC-10 (and not a PDP).

> or worse PDP-11 (a very CISC 16bit machine) to the lowly 8080.

Why do you think it to be 'worse'? Intel developed its 8080 software
on the PDP-11 (as did Gary when working for Intel, and so did DRI). The
Altair Basic was also developed and compiled on a PDP, or did you think that
Bill keyed it all in using the front panel switches?

bill

May 4, 2001, 8:50:07 PM

Richard Plinston wrote:
.....

> Actually it was the IBM rewrite that fixed this in the MS/IBM versions.
> IBM were annoyed that they were stuck with 32MB partitions and other
> OEMs were not so they incorporated it into PC-DOS 3.4/4.0 and gave the
> source back to MS who then broke it and released this as MS-DOS 4.01 and
> announced to the world that they had: "finally broken the 32Mb barrier".

What do you have to say about the fact that 86-DOS could
support >1 gigabyte drives (in 1980)?

Or rather, why did IBM want that cut down to 32 megs before
Personal Computer Disk Operating System 1.0 was released?

Bill
Tucson, AZ

anon...@bogus_address.con

May 5, 2001, 2:13:35 AM

On 2001-05-04 bi...@SunSouthWest.com said:

>What do you have to say about the fact that 86-DOS could
>support >1 gigabyte drives (in 1980)?
>
>Or rather, why did IBM want that cut down to 32 megs before
>Personal Computer Disk Operating System 1.0 was released?
>
>Bill
>Tucson, AZ

Why do =you= think IBM wanted a 32 meg limit, Bill?

Richard Plinston

May 6, 2001, 4:37:57 AM

On 2001-05-04 bi...@SunSouthWest.com said:

>What do you have to say about the fact that 86-DOS could
>support >1 gigabyte drives (in 1980)?

First I would ask you to support such a claim.

I have understood that QDOS used CP/M media. This is because SCP would
build the system on a hard drive using CP/M and then swap the S100
processor board for an 8086 and reboot. Early versions of 86-DOS and
similar may have used some other system for disks, but FAT was added after
MS took over development. Later 86-DOS and SCP-DOS releases were rebadged
versions of the developed MS-DOS and thus had FAT.

The limit of 32MByte derives directly from 16-bit FAT entries used as
sector pointers: 64K x 512 bytes.
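
(Spelled out: 65,536 sectors x 512 bytes each = 33,554,432 bytes, i.e.
32 MBytes.)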

How did 86-DOS organise its disks ?

>Or rather, why did IBM want that cut down to 32 megs before
>Personal Computer Disk Operating System 1.0 was released?

PC-DOS 1.0 was released on the IBM-PC (not XT) which did not support
hard disks. The XT was released with PC-DOS 2.0.

Why do you think that it was IBM that wanted to 'cut down to 32MByte' ?
It was MS that changed to FAT (which was an MS design for DEC
Stand-Alone Basic).

bill

May 7, 2001, 5:09:53 PM

Richard Plinston wrote:
>
> On 2001-05-04 bi...@SunSouthWest.com said:
>
> >What do you have to say about the fact that 86-DOS could
> >support >1 gigabyte drives (in 1980)?
>
> First I would ask you to support such a claim.

R.T.F.M. !!!

> I have understood that QDOS used CP/M media. .....

Show me QDOS.

And I don't mean that menuing thing from Gazelle.

If QDOS exists, then *somebody* must be able to
produce it! Where is it? Anybody ??

I'll tell you why you can't produce it. It doesn't exist.

QDOS is a LIE, a fabrication by Gates and Tim Paterson to
cover up their theft of CP/M. It wasn't a THING, it was
the PROCESS by which Paterson used DRI copyrighted materials
in the hands of his sometime employer, Bill Gates, in clear
violation of a signed license agreement, to take over Gary
Kildall's intellectual property.

All the license agreements, all the reviews, all the actual
physical disks and printed manuals are 86-DOS. So are the
magazine advertisements. What does all that suggest?

There IS no QDOS. And, somebody's lying.

'n that's the fact, Jack.

And, if that's libel, Gates knows where to find me.


Bill
Tucson, AZ

anon...@bogus_address.con

May 8, 2001, 12:03:21 AM

On 2001-05-07 bi...@SunSouthWest.com said:

>Show me QDOS.
>
>And I don't mean that menuing thing from Gazelle.
>
>If QDOS exists, then *somebody* must be able to
>produce it! Where is it? Anybody ??
>
>I'll tell you why you can't produce it. It doesn't exist.

Hee-hee! Nice to see that you're still with us, Bill! :)

Show me 'Cairo.'

And I don't mean that place in Egypt.

If 'Cairo' exists, then *somebody* must be able to
produce it! Where is it? Anybody ??

I'll tell you why you can't produce it. It doesn't exist.

Chewy509

May 8, 2001, 2:36:56 AM

>
> Show me 'Cairo.'

>
> If 'Cairo' exists, then *somebody* must be able to
> produce it! Where is it? Anybody ??
>
> I'll tell you why you can't produce it. It doesn't exist.

It's called Windows NT 4.0... "Cairo" is the internal project name...

Chewy509...


legu...@iro.umontreal.ca

May 8, 2001, 1:22:42 PM

bill <bi...@sunsouthwest.com> wrote:

: Richard Plinston wrote:
:>
:> On 2001-05-04 bi...@SunSouthWest.com said:
:>
:> >What do you have to say about the fact that 86-DOS could
:> >support >1 gigabyte drives (in 1980)?
:>
:> First I would ask you to support such a claim.

: R.T.F.M. !!!

:> I have understood that QDOS used CP/M media. .....

: Show me QDOS.

: And I don't mean that menuing thing from Gazelle.

: If QDOS exists, then *somebody* must be able to
: produce it! Where is it? Anybody ??

: I'll tell you why you can't produce it. It doesn't exist.

Hi,
I did my master's degree thesis on the history of OSes, including DOS.
I've seen in many reference books that QDOS was released in July 1980 as
QDOS 0.10; it was renamed 86-DOS 0.30 in December 1980, and then SCP
released 86-DOS 1.00 in April 1981. All these were for the S-100 bus on 8"
single density. Tim Paterson ported it to the IBM PC format (160K DSDD)
during the months of June and July.
I have the 86-DOS 1.00 files, but not 86-DOS 0.30 or QDOS. Do the latter exist?
I'm pretty sure they do, because all the literature I came across says
so, but I can't be sure until I have a chance to get the software I've been
searching for for years.

Louis-Luc

bill

May 8, 2001, 11:10:36 PM

legu...@iro.umontreal.ca wrote:

> Hi
> I've made my masters degree thesis on the history of OS, including DOS.
> I've seen in many reference books that QDOS was released in July 1980 as
> QDOS 0.10, it was renamed 86-DOS 0.30 in December 1980, and then SCP
> released 86-DOS 1.00 in April 1981. All these were for S-100 bus on 8"
> single density. Tim Paterson ported it to the IBM PC format (160K DSDD)
> during the months of June and July.
> I have the 86-DOS 1.00 files, but not 86-DOS 0.30 ou QDOS. Do the latter exist?
> I'm pretty sure it does exist, beause all the litterature I came across say
> so, but I can't be sure until I have a chance to get the software I've been
> searching for years.

The person I got 86-DOS version 0.3 docs from, as well as upgrade docs
to version 1.0, didn't think anything earlier had ever been 'released'.

He saved docs and software from the trash at a *very* large OEM.

Tim Paterson claimed (about 1-1/2 years ago) less than thirty copies
of version 0.3 were ever distributed, and those only to OEM potential
customers, not to the general public.


Clearly, whatever it was, was being called 86-DOS before December, 1980.

Please tell us which ''reference books'' say that QDOS was released.

Without trying to be definitive, take a look at page 173 of the
August, 1980 issue of Byte Magazine. This magazine was probably
on news stands in July; advertising cut-offs usually come anywhere
from a month to two months prior, meaning Seattle Computer had
been calling this product 86-DOS at least around May, 1980. And
as I mentioned, this isn't definitive; it may have been earlier.

It's a general rule of evidence that statements closest in time to
an event are considered more accurate than 'recovered' later ones.
In that light, and again without vouching for the actual factual
accuracy of it, the 'MS-DOS Encyclopedia' (1987) includes facts
according to Tim Paterson and Bill Gates that are greatly at odds
with the versions they were telling later. The 'Encyclopedia' was
published by Microsoft. Either they weren't telling the truth then,
or they aren't telling the truth now.

For example, I have never seen any other mention of M-DOS, at least
not the one mentioned in the 'Encyclopedia' which, it was claimed,
was the basis for the FAT directory structure we've all come to know
and love. And you thought it derived from Basic-86? (Page 8, Para.3)

For that matter, has anyone disassembled Stand Alone Disk Basic?
Does it use a recognizable FAT structure anything like DOS?

Bill
Tucson, AZ


Richard Plinston

May 11, 2001, 1:32:20 AM

bill wrote:
>
> The person I got 86-DOS version 0.3 docs from, as well as upgrade docs
> to version 1.0, didn't think anything earlier had ever been 'released'.

It doesn't have to have been released as a product to be given a name.

> Tim Paterson claimed (about 1-1/2 years ago) less than thirty copies
> of version 0.3 were ever distributed, and those only to OEM potential
> customers, not to the general public.

> Without trying to be definitive, take a look at page 173 of the


> August, 1980 issue of Byte Magazine. This magazine was probably
> on news stands in July; Advertising cut-offs usually come anywhere
> from a month to two months prior - meaning Seattle Computer had
> been calling this product 86-DOS at least around May, 1980. And
> as I mentioned, this isn't definitive, it may have been earlier.

Regardless of what it was called when it had been developed enough to be
released, it may well have had the name QDOS while it was being
written, or do you think that they print the release documents _before_
they start writing the code?

As a comparison you may consider: ADOS, NEWDOS, DOS 5.0, DOS/286,
CP/DOS. These were all names applied to a product under development at
various times, according to Ed Iacobucci, but when it was released it
was called OS/2.

It would be silly to claim that 'DOS 5.0' _never_existed_ just because
the released versions were called OS/2, in fact just as silly as claiming
that QDOS _never_existed_ just because it was renamed before it was
released.

(Actually, the prereleases did display the DOS 5.0 name.)

Jim Bianchi

May 10, 2001, 4:44:48 PM

Given the choice between eating his brussel sprouts and posting to
comp.os.cpm, Richard Plinston wrote:
>Regardless of what it was called when it had been developed enough to be
>released it may well have had the name of QDOS while it was being written,
>or do you think that they print the release documents _before_ they start
>writing the code ?

Sure. Happens all the time. Ever hear of 'vaporware?'

>As a comparison you may consider: ADOS, NEWDOS, DOS 5.0, DOS/286, CP/DOS.
>These were all names applied to a product under development at various
>times, according to Ed Iacobucci, but when it was released it was called
>OS/2.

Ahh, and the Spanish version was called 'dos DOS,' right?

--
ji...@sonic.net
Eclectic Garbanzo BBS, (707) 539-1279

Linux: gawk, date, finger, wait, unzip, touch, nice, suck, strip, mount,
fsck, umount, make clean, sleep. (Who needs porn when you have /usr/bin?)

Jack Crenshaw

May 14, 2001, 2:04:33 PM

In article <3AF1A07B...@Azonic.co.nz>, Richard Plinston says...

>
>But with a FAT system it is only ever necessary to hold _one_ free space
>block number, it uses this to get the next one as it releases the
>cluster to the requesting process. Certainly buffer space is required
>for the FAT table but using the 'high water mark' means that new files
>are written (and take space) in a localised area (at least until the end
>of disk is reached then it has to search for free blocks which slows the
>whole thing down.

With all this talk of file structures lately, I'm surprised that no one has
commented on the incredible amount of disk thrashing that goes on with PC-based
machines. I definitely don't recall having this problem with CP/M machines,
either floppy- or HD-based.

As a matter of fact, a common practice with CP/M floppies was to store one's
most oft-used utilities first, on a clean floppy. This would put them in sector
order (can we say "defragmented"?) in the tracks closest to the directory.
Because of CP/M's use of sector skew factors, small files would load in a
fraction of a second (average access time = 1/600 s = 1.7 ms, plus the time to
move the head one track). End result: no discernible difference in speed
between built-in commands like DIR and floppy-based commands like SDIR.
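
For anyone who never looked inside a BIOS, the skew table is just a
logical-to-physical sector translation built so that consecutive logical
sectors sit several physical sectors apart around the track. A rough sketch
of how such a table can be generated (my own illustration; real BIOSes
carried a hand-built table, and sector numbering and skew varied by format):

#include <stdio.h>

#define SPT  26   /* sectors per track, e.g. an 8" SSSD format */
#define SKEW  6   /* physical sectors between consecutive logical sectors */

/* Build xlt[logical] = physical, both 0-based here. */
static void build_skew(int xlt[SPT])
{
    int used[SPT] = {0};
    int phys = 0;

    for (int log = 0; log < SPT; log++) {
        while (used[phys])          /* slot taken: slide to the next free one */
            phys = (phys + 1) % SPT;
        xlt[log] = phys;
        used[phys] = 1;
        phys = (phys + SKEW) % SPT;
    }
}

int main(void)
{
    int xlt[SPT];

    build_skew(xlt);
    for (int i = 0; i < SPT; i++)   /* the same mapping SECTRAN does in the BIOS */
        printf("logical %2d -> physical %2d\n", i, xlt[i]);
    return 0;
}

With the head sitting on the track, the next logical sector arrives under the
head a few milliseconds after the previous one has been processed, instead of
a full revolution later.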

In my first exposure to the Apple Macintosh, I was instantly struck by the
fact that the poor floppy drive was getting thrashed unmercifully, with the head
going all over the disk, seemingly a different place for each sector. The
now-familiar brrrt-buzz-buzz, chatter-chatter was new to me at the time, and
shocking.

A few years later, we had HD Macs that did the same thing, only faster. Later
yet, I was exposed to the NeXT, which did the same thing. Later yet, here we
are with Windows, which not only thrashes the HD unmercifully, but does so
even when no one is sitting at the keyboard <!> (Garbage collection, I presume).

I'd be interested in the opinion of others as to the reason for all this
thrashing. My own personal opinion is that the difference is the difference
between a FAT and the in-RAM allocation map of CP/M. I suspect that MS-DOS
and Windows have to keep going back to the FAT to update it as files are being
written/accessed.

Comments, anyone?

Jack W. Crenshaw
jcr...@earthlink.net

Harold Bower

May 14, 2001, 4:46:18 PM

Jack Crenshaw wrote:
>
> In article <3AF1A07B...@Azonic.co.nz>, Richard Plinston says...
> >
> >But with a FAT system it is only ever necessary to hold _one_ free space
> >block number, it uses this to get the next one as it releases the
> >cluster to the requesting process. Certainly buffer space is required
> >for the FAT table but using the 'high water mark' means that new files
> >are written (and take space) in a localised area (at least until the end
> >of disk is reached then it has to search for free blocks which slows the
> >whole thing down.
>
> With all this talk of file structures lately, I'm surprised that no one has
> commented on the incredible amount of disk thrashing that goes on with PC-based
> machines. I definitely don't recall having this problem with CP/M machines,
> either floppy- or HD-based.

There was a bit of head movement in CP/M systems under certain
circumstances as well. When searching for a file, the system always
needed to access the Directory on the disk, so when loading a bunch of
config files (under an alias in ZCPR 3.x systems), disks would seek
Directory, move to file and load, seek Dir, move to file...etc. I wrote
a little Resident module in the early '80s to minimize this because I
was worried about burning out my one-and-only 80-track drive at the
time. I later used it in a TCJ article (LINKPRL with SPEEDUP as the
example program). It cached the directory from a specified drive in
high memory and would access it for all reads (and write thru on writes)
to get head positioning data, then simply move from program to program
as needed. It cut down significantly on head movement and made the old
4 MHz Z80 quite snappy.

I believe that one of the effects you notice with the MS-DOS 'thrashing'
is the lack of deblocking in the driver code. Many of the more popular
CP/M formats in the 'Double' and 'Quad' density areas (800k 80-track and
DS 8" disks) used 1024-byte physical sectors so only one seek and read
was needed to get a kilobyte of data, where two 512-byte reads are
needed in the MS-DOS system. The CP/M 2 and greater deblocker with the
auto-increment of 'unallocated' sectors resulted in a seemingly smoother
operation.

> As a matter of fact, a common practice with CP/M floppies was to store one's
> most oft-used utilities first, on a clean floppy. This would put them in sector
> order (can we say "defragmented") in the tracks closest to the directory.
> Because of CP/M's use of sector skew factors, small files would load in a
> fraction of a second (average access time = 1/600 s = 1.7 ms, plus the time to
> move the head one track). End result: No discernible difference in speed
> between built-in commands like DIR and floppy-based commands like SDIR.

There were other programs by the late 80s/early 90s that reorganized
CP/M disks as well (at least as Z-system utilities).

> In my first exposure to the Apple MacIntosh, I was instantly struck by the
> fact that the poor floppy drive was getting thrashed unmercifully, with the head
> going all over the disk, seemingly a different place for each sector. The
> now-familiar brrrt-buzz-buzz, chatter-chatter was new to me at the time, and
> shocking.

Agree, except that I saw (and heard) it on Mac's predecessor, the Lisa.
In addition, the RPM variations were unnerving as the track seeking
moved to different regions of the disk.

> A few years later, we had HD Macs that did the same thing, only faster. Later
> yet, I was exposed to the NeXt, which did the same thing. Later yet, here we
> are with Windows, which not only thrashes the HD unmercifully, but does so
> even when no one is sitting at the keyboard <!> (Garbage collection, I presume).
>
> I'd be interested in the opinion of others as to the reason for all this
> thrashing. My own personal opinion is that the difference is the difference
> between a FAT and the in-RAM allocation map of CP/M. I suspect that MDDOS
> and Windows have to keep going back to the FAT to update it as files are being
> written/accessed.

That may be one reason. If the FAT (one or both copies) were not
periodically updated, anything allocated since the last update would not be
there the next time the disk was used after a failure (or shutdown). CP/M's
bitmap of allocated units was in memory, but was rebuilt every time the
drive was (re)logged. Locating where specific files were still required a
seek to the directory (see above), and allocation data was written to the
directory each time a file was closed. (Note: a way to use disk space and
not leave a trace was to open a file, which gained disk allocation via the
in-memory bit map, use the space, but never close the file...hence no
trace. See Steve Russel's SLR assemblers and linkers.)

Another reason might be sub-optimally configured buffers, numbers of
file 'handles', and possibly mismatched interleave (HD)/skew (FD) values
for a given processor/OS overhead. Since most CP/M systems only had a
single disk buffer (controlled by the BIOS), it was easier to adjust the
system parameters and applications to obtain 'smooth' operation. For
example, it is common practice in CP/M to have the application host a
(large) buffer and read/write large chunks at a time to optimize IO
with relatively little disk head movement. The trend in MS-DOS and "C"
seems to be something along the lines of "while (put (output, (get
(input, ch))));" (pardon the butchered C-like structure) for a
byte-at-a-time transfer, with the BIOS thrashing between 512-byte chunks
of input and output files constantly (particularly with mal-configured
buffering). Additionally, if files are read and written a byte at a
time, the overhead involved can lead to IO misses due to latency, causing
further problems (primarily slowdowns).
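
To make the contrast concrete, here are the two styles side by side (a rough
sketch of my own in portable C, not code from any real system; note that a
decent stdio will quietly buffer the first version too, so the difference
shows up mainly where buffering is small, absent, or badly configured):

#include <stdio.h>
#include <stdlib.h>

/* Style 1: byte-at-a-time, the "while (put (get ()))" idiom. */
static void copy_bytewise(FILE *in, FILE *out)
{
    int ch;

    while ((ch = getc(in)) != EOF)
        putc(ch, out);
}

/* Style 2: the CP/M habit -- the application owns a large buffer and
   moves big chunks per call, so the head stays put for longer. */
static void copy_chunked(FILE *in, FILE *out)
{
    enum { CHUNK = 32 * 1024 };     /* size chosen arbitrarily for the sketch */
    char *buf = malloc(CHUNK);
    size_t n;

    if (buf == NULL)
        return;
    while ((n = fread(buf, 1, CHUNK, in)) > 0)
        fwrite(buf, 1, n, out);
    free(buf);
}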

> Comments, anyone?

As a side note, the work I did on porting UZI to the Z180 used multiple
buffers, and it is amazing how fast and smoothly the CP/M programs ran
under a CP/M emulator in the UZI environment. When not constrained to
the single BIOS IO buffer and with 'proper' buffer management, IO is
surprisingly smooth.

> Jack W. Crenshaw
> jcr...@earthlink.net

Hal

Jack Crenshaw

May 15, 2001, 12:22:37 PM

In article <3B004449...@worldnet.att.net>, Harold Bower says...

>
>Jack Crenshaw wrote:
>>
>> With all this talk of file structures lately, I'm surprised that no one has
>> commented on the incredible amount of disk thrashing that goes on with PC-based
>> machines. I definitely don't recall having this problem with CP/M machines,
>> either floppy- or HD-based.
>
>There was a bit of head movement in CP/M systems under certain
>circumstances as well. When searching for a file, the system always
>needed to access the Directory on the disk, so when loading a bunch of
>config files (under an alias in ZCPR 3.x systems), disks would seek
>Directory, move to file and load, seek Dir, move to file...etc.

I understand. Another of the neat features of CP/M utilities was that
one could write itty-bitty apps in assembly language. For example, my
directory program, imaginatively called "ls", separated directories from
files, alpha-sorted both, and showed file size as well as name. It still
fit into one 4k DSDD block.

I can see how the many little files of a Unix wannabe like Z-System could
cause more thrashing.


>I wrote
>a little Resident module in the early '80s to minimize this because I
>was worried about burning out my one-and-only 80-track drive at the
>time. I later used it in a TCJ article (LINKPRL with SPEEDUP as the
>example program). It cached the directory from a specified drive in
>high memory and would access it for all reads (and write thru on writes)
>to get head positioning data, then simply move from program to program
>as needed. It cut down significantly on head movement and made the old
>4 MHz Z80 quite snappy.

Neat.

>I believe that one of the effects you notice with the MS-DOS 'thrashing'
>is the lack of deblocking in the driver code. Many of the more popular
>CP/M formats in the 'Double' and 'Quad' density areas (800k 80-track and
>DS 8" disks) used 1024-byte physical sectors so only one seek and read
>was needed to get a kilobyte of data, where two 512-byte reads are
>needed in the MS-DOS system. The CP/M 2 and greater deblocker with the
>auto-increment of 'unallocated' sectors resulted in a seemingly smoother
>operation.

For that matter, in this day and time, one wonders why we don't just read
the entire _TRACK_ into RAM.

>> In my first exposure to the Apple MacIntosh, I was instantly struck by the
>> fact that the poor floppy drive was getting thrashed unmercifully, with the head
>> going all over the disk, seemingly a different place for each sector. The
>> now-familiar brrrt-buzz-buzz, chatter-chatter was new to me at the time, and
>> shocking.
>
>Agree, except that I saw (and heard) it on Mac's predecessor, the Lisa.

Yeah, now that you mention it, I first heard it there too.

>In addition, the RPM variations were unnerving as the track seeking
>moved to different regions of the disk.

ROFL!!! Good point.

>That may be one reason. If the FAT (one or both copies) are not
>periodically updated, any failure (or shutdown) would not be around the
>next time it were used. CP/M's bitmap of allocated units was in memory,
>but was rebuilt everytime the drive was (re)logged. To locate where
>specific files were still required a seek to the directory (see above),
>and allocation data was written to the directory each time a file was
>closed (note a system to use disk space and not leave a trace was to
>open (which gained disk allocation via the in-memory bit map, use the
>space, but never close the file..hence no trace...See Steve Russel's SLR
>assemblers and linkers).

_WAY_ cool!!! You are another SLR fan, I take it. Me, too. I was completely
stunned by the speed of his disk I/O. Now I know at least one of his secrets.

>Another reason might be sub-optimally configured buffers, numbers of
>file 'handles', and possibly mismatched interleave (HD)/skew (FD) values
>for given processor/OS overhead. Since most CP/M systems only had a
>single disk buffer (controlled by the BIOS), it was easier to adjust the
>system parameters and applications to obtain 'smooth' operation. For
>example, It is common practice in CP/M to have the application host a
>(large) buffer, and read/write large chunks at a time to optimize IO
>with relative little disk head movement. The trend in MS-DOS and "C"
>seems to be something along the lines of "while (put (output, (get
>(input, ch))));" (pardon the butchered C-like structure) for a
>byte-at-a-time transfer with the BIOS thrashing between 512-byte chunks
>of input and output files constantly (particularly with mal-configured
>buffering). Additionally, if files are read and written a byte at a
>time, the overhead involved can lead to IO misses due to latency causing
>further problems (primarily slowdowns).

Thanks for the insights.

Jack


CBFalconer

May 15, 2001, 7:29:07 PM

Jack Crenshaw wrote:
>
> In article <3B004449...@worldnet.att.net>, Harold Bower says...
> >
> >Jack Crenshaw wrote:
> >>
> >> With all this talk of file structures lately, I'm surprised that no one has
> >> commented on the incredible amount of disk thrashing that goes on with PC-based
> >> machines. I definitely don't recall having this problem with CP/M machines,
> >> either floppy- or HD-based.
> >
> >There was a bit of head movement in CP/M systems under certain
> >circumstances as well. When searching for a file, the system always
> >needed to access the Directory on the disk, so when loading a bunch of
> >config files (under an alias in ZCPR 3.x systems), disks would seek
> >Directory, move to file and load, seek Dir, move to file...etc.
>
> I understand. Another of the neat features of CP/M utilities was that
> one could write itty-bitty apps in assembly language. For example, my
> directory program, imaginatively called "ls", separated directories from
> files, alpha-sorted both, and showed file size as well as name. It still
> fit into one 4k DSDD block.

There was a utility called LRUN which allowed all the bitty
routines to be packed in a .LBR and executed from there. This cut
the average wastage down to 64 bytes per utility. My CCP+ was
arranged to incorporate this feature, so that the contents of COMMAND.LBR
were automatically on the search path. In addition CCP+ could call
a default program if nothing was found, which I set up to be the
PCD interpreter. This had a similar path ending in PCDS.LBR. PCD
executables tended to be 1/4 the size of machine language
executables, so with all this a single system floppy held a lot.
The search was never objectionably long.

In fact, I could even create a 256k memory drive with the
CoPower88 and mount the often used stuff there, especially the
compiler. The compiler implemented virtual code memory, so a fast
source file hid all the thrashing.

>
> I can see how the many little files of a Unix wannabe like Z-System could
> cause more thrashing.
>

--
Chuck F (cbfal...@my-deja.com) (cbfal...@XXXXworldnet.att.net)
http://www.qwikpages.com/backstreets/cbfalconer :=(down for now)
(Remove "NOSPAM." from reply address. my-deja works unmodified)
mailto:u...@ftc.gov (for spambots to harvest)


Bruce Morgen

May 15, 2001, 8:35:08 PM

CBFalconer <cbfal...@my-deja.com> wrote:

I'm not sure it predates your CCP+, but I first encountered that "CMDRUN"
feature in Rick Conn's ZCPR2. In the later iterations of ZCPR, we had the
ultimate extended command processor in Jay Sage's ARUNZ.COM, and many of us
worked on Rick's more refined LRUN utility, LX.COM. With stuff like that
you could get serious work done on a two-floppy system in a highly
automated manner that at the same time made the most of limited disk space.

I also remember Jim Lopushinski's(sp?) brilliant LBRDISK concept -- an RSX
(CP/Mish TSR) that treated an .LBR file as a CP/M disk drive. With LBRDISK,
I was able to squeeze an entire ROGUE game onto a 191KB SS/DD Kaypro II
diskette -- not just the executable, but all the little data files too!


>
>In fact, I could even create a 256k memory drive with the
>CoPower88 and mount the often used stuff there, especially the
>compiler. The compiler implemented virtual code memory, so a fast
>source file hid all the thrashing.
>

Those CoPower cards were great for the swap file when running Bridger
Mitchell's Backgrounder II -- not as quick as the main memory ramdisk in my
SB-180, but a real eye-opener on my Kaypro 4-84!

bill

May 15, 2001, 9:33:59 PM

Jack Crenshaw wrote:

> With all this talk of file structures lately, I'm surprised that no one has
> commented on the incredible amount of disk thrashing that goes on with PC-based
> machines. I definitely don't recall having this problem with CP/M machines,
> either floppy- or HD-based.

> I'd be interested in the opinion of others as to the reason for all this


> thrashing. My own personal opinion is that the difference is the difference
> between a FAT and the in-RAM allocation map of CP/M. I suspect that MDDOS
> and Windows have to keep going back to the FAT to update it as files are being
> written/accessed.
>
> Comments, anyone?

Not too difficult, really.

For reasons known only to Microsoft and the Justice Department,
Windows keeps *three* separate time/date stamps for every file:
created; changed (modified); and last accessed. This last one
seems of particular interest to Law Enforcement types.

Imagine reading several hundred files to get Windows up and running.
Then multiply the number of files by the 'average' disk access time,
even assuming you've 'optimized' your file locations.

You'll come out pretty close to the minutes it takes to ''boot''.

Creating a 'library' won't work unless you read *everything* out
of it at once. Otherwise, it'll be: 1) go to the directory and
find out where the file is; 2) go read the file; 3) back to the
directory to write that you've ''accessed'' it, over and over again.

You want to read a file while running a program? You gotta write
that you've read it. Need dozens of nice little .DLLs for your
program? You get all those useless writes in the bargain.

Pity that few so-called ''programmers'' know anything about
this or optimize their file usage to minimize the grief.


Bill
Tucson, AZ

Harold F. Bower

May 16, 2001, 12:32:11 AM

bill wrote:
[snip]

> For reasons known only to microsoft and the Justice Department
> windows keeps *three* separate time/date stamps for every file:
> created; changed (modified); and last accessed. This last one
> seems of particular interest to Law Enforcement types.
[snip]

> Creating a 'library' won't work unless you read *everything* out
> of it at once. Otherwise, it'll be: 1) go to the directory and
> find out where the file is; 2) go read the file; 3) back to the
> directory to write that you've ''accessed'' it, over and over again.
>
> You want to read a file while running a program? You gotta write
> that you've read it. Need dozens of nice little .DLLs for your
> program? You get all those useless writes in the bargain.
>
> Pity that few so called ''programmers'' know anything about
> this, and optimize their file usage to minimize the grief.

Actually, many of us who used Bridger Mitchell's DateStamper mod to CP/M
(circa 1984, earlier on Kaypros) did know about it, and used a feature
which circumvented the write to 'Last Access' times. DateStamper also
stored all three times (as does Joe Wright's NZTIME), but the MSB on one
character in the Name/Type of a directory entry caused the program to
skip the time update, which actually followed the sequence: 1) go to the
directory and find the file entry; 2) if the NoUpdate bit is set, skip
updating the "access" time; 3) read the file.

The 'Last Accessed' times had a valuable purpose in CP/M RBBS systems,
where they were used to delete files which had not been accessed in some
time, to conserve valuable space (at that time).

Hal

Bill Marcum

May 16, 2001, 3:13:13 PM

bill wrote in message <3B01D9...@SunSouthWest.com>...

>
>For reasons known only to microsoft and the Justice Department
>windows keeps *three* separate time/date stamps for every file:
>created; changed (modified); and last accessed. This last one
>seems of particular interest to Law Enforcement types.
>
I don't know if law enforcement has anything to do with it, but three
time stamps per file is a feature that Microsoft copied from Unix.
Although, if you visit a Unix newsgroup and say something about "file
creation time", you'll get an earful.

Alex Plantema

May 20, 2001, 5:01:40 PM

Harold Bower wrote in message <3B004449...@worldnet.att.net>...

>Another reason might be sub-optimally configured buffers, numbers of
>file 'handles', and possibly mismatched interleave (HD)/skew (FD) values
>for given processor/OS overhead. Since most CP/M systems only had a
>single disk buffer (controlled by the BIOS), it was easier to adjust the
>system parameters and applications to obtain 'smooth' operation. For
>example, It is common practice in CP/M to have the application host a
>(large) buffer, and read/write large chunks at a time to optimize IO
>with relative little disk head movement. The trend in MS-DOS and "C"
>seems to be something along the lines of "while (put (output, (get
>(input, ch))));" (pardon the butchered C-like structure) for a
>byte-at-a-time transfer with the BIOS thrashing between 512-byte chunks
>of input and output files constantly (particularly with mal-configured
>buffering). Additionally, if files are read and written a byte at a
>time, the overhead involved can lead to IO misses due to latency causing
>further problems (primarily slowdowns).

With the introduction of the concept of file handles in MS-DOS 2, with their
one-byte-at-a-time transfers, programmers were no longer encouraged to buffer
their data. A system call for each byte slows a program down anyway.
On the other hand, CP/M 3 introduced separate buffers for directory sectors
and data sectors, which reduced the number of head movements as well, and
multirecord I/O, which did encourage data buffering.

Alex.

Jack Crenshaw

May 23, 2001, 11:33:32 AM

In article <3B01D9...@SunSouthWest.com>, bill says...
>
>Not too difficult, really.
>
>For reasons known only to microsoft and the Justice Department
>windows keeps *three* separate time/date stamps for every file:
>created; changed (modified); and last accessed. This last one
>seems of particular interest to Law Enforcement types.
>
>Imagine reading several hundred files to get windows up and running.
>Then multiply the number of files by the 'average' disk access time,
>even assuming you've 'optimized' your file locations.
>
>You'll come out pretty close to the minutes it takes to ''boot''.
>
>Creating a 'library' won't work unless you read *everything* out
>of it at once. Otherwise, it'll be: 1) go to the directory and
>find out where the file is; 2) go read the file; 3) back to the
>directory to write that you've ''accessed'' it, over and over again.
>
>You want to read a file while running a program? You gotta write
>that you've read it. Need dozens of nice little .DLLs for your
>program? You get all those useless writes in the bargain.
>
>Pity that few so called ''programmers'' know anything about
>this, and optimize their file usage to minimize the grief.
>

That's a fascinating bit of data, Bill. It needs a much wider
dissemination. Thanks for sharing it.

Jack


Richard Plinston

May 24, 2001, 2:27:40 AM

>
> In article <3B01D9...@SunSouthWest.com>, bill says...
> >
> >Creating a 'library' won't work unless you read *everything* out
> >of it at once. Otherwise, it'll be: 1) go to the directory and
> >find out where the file is; 2) go read the file; 3) back to the
> >directory to write that you've ''accessed'' it, over and over again.

Wrong. It does not update the 'access time' on every read or write; it
only updates the time on each file open (or close, in the case of the create
and update times). So the _actual_ sequence is, for access time:

* Go to the directory and find the file
* update the file entry that was just read
* go read file as required

or for create:

* Create directory entry (with access time)
* Write file
* update directory entry with size, FAT and date/time
(note that the entry requires updating anyway)

The _only_ overhead for access time is that the directory entry is
updated once, at the point where it has just been read. No additional
searches or multiple updates at all.

That is, the file access time is not, as you imagined, the 'last time
anything in the file was accessed', but is actually 'the last time the
file was opened'.

CP/M Plus, MP/M and many subsequent DRI systems also catered for
multiple date/time stamps on files, including access time if this was
required.
