Has anyone built disk images with 1024 directory entries for RomWBW?


Jim McGinnis

Apr 24, 2020, 10:46:18 AM
to retro-comp
I knew that some of my projects would become problem children when fully deployed on a single 8MB drive due to the DIR entries limit.

Some of my assembly projects (SYSLIB and variants as well) just barely fit WHEN I am using CPM+ and have initialized the directory with date/time stamping. There is plenty of "SPACE," but I am running short of DIR entries.

I am sure there are arguments to be made for 512 vs. 1024 on an 8MB drive. I have reached a practical limit that will require working around the DIR limit, which can be done.

Has anyone investigated the impact on existing tools in RomWBW that might be assuming 512 DIR entries for the 8MB drives?

Has anyone created a compatible disk image for RomWBW with 1024 DIR entries?


This is such a great place to hang out, along with comp.os.cpm, RC2014-Z*, and Altair-Duino.
I cannot emphasize enough how helpful and educational the posters have been here...

Best regards and stay COVID-safe...

Jim




Jim McGinnis

Apr 24, 2020, 12:28:55 PM
to retro-comp
I found the diskdefs used for the current HD images in Wayne's Git tree.

I will try to build an image using the unmodified data below first. Then try an experiment with "maxdir 1024" and see what happens. Might be a mess.

# RomWBW 8MB Hard Disk, LU 0-3
diskdef wbw_hd0
  seclen 512
  tracks 65
  sectrk 256
  blocksize 4096
  maxdir 512
  skew 0
  boottrk 1
  os 2.2
end

diskdef wbw_hd1
  seclen 512
  tracks 130
  sectrk 256
  blocksize 4096
  maxdir 512
  skew 0
  boottrk 66
  os 2.2
end

diskdef wbw_hd2
  seclen 512
  tracks 195
  sectrk 256
  blocksize 4096
  maxdir 512
  skew 0
  boottrk 131
  os 2.2
end

diskdef wbw_hd3
  seclen 512
  tracks 260
  sectrk 256
  blocksize 4096
  maxdir 512
  skew 0
  boottrk 196
  os 2.2
end

Wayne Warthen

Apr 24, 2020, 12:37:37 PM
to retro-comp
On Friday, April 24, 2020 at 7:46:18 AM UTC-7, Jim McGinnis wrote:

I am sure there are arguments to be made for 512 vs. 1024 on an 8MB drive. I have reached a practical limit that will require working around the DIR limit, which can be done.

Actually, it is not much of a debate.  It absolutely should have been 1024 originally.  The choice of 512 goes back over a decade to when RomWBW came to life; it was used simply because it was compatible with the other work being done at the time.

So, why haven't I changed it???  Quite simply, it is because such a change would have disastrous consequences for anyone who just upgraded their ROM and then started accessing their existing disks.  It would appear to work initially, but would immediately corrupt the disk contents.

To clarify the problem: you can't simply create a new disk image with expanded directory space.  The disk directory format lives inside the CP/M CBIOS (by DRI design).
 
Has anyone investigated the impact on existing tools in RomWBW that might be assuming 512 DIR entries for the 8MB drives?

None of the existing tools would be impacted.  They would all just work.  The issue is the CBIOS DPB.
 
Has anyone created a compatible disk image for RomWBW with 1024 DIR entries?

I have.  It is trivial to do.

I am very open to thoughts from this group.  Do people think I should just make the change and warn everyone?

-Wayne

Jim McGinnis

Apr 24, 2020, 12:42:38 PM
to retro...@googlegroups.com
Wayne,

I get it. Yes, it could make a mess.

I would be happy to be an "early adopter" or "beta tester" if the push comes.

The only things the image would need for me to be successful are the basic CP/M 3 baseline plus the "new" Kermit for CP/M 3 that you added to the repository. (Thanks!)


Oh, is it possible to create a compile-time configuration switch for CBIOS?

And then we have expanded the maintenance burden twofold...   ;-(   (Sorry for thinking about it...)

Cheers
Jim

Wayne Warthen

Apr 24, 2020, 1:04:56 PM
to retro-comp
On Friday, April 24, 2020 at 9:42:38 AM UTC-7, Jim McGinnis wrote:
Oh, is it possible to create a compile time configuration switch for CBIOS? Is that possible?

Yes.  Might require setting a config switch in multiple places, but should be possible.  I was actually thinking about this when I was responding to your prior message.  I will work on this.  Can probably post something later today.  Be prepared to do your own build.  For now, I'm not going to create any pre-built ROM or disk images with this config set out of fear that they will get picked up and used unintentionally.

-Wayne 

Alan Cox

Apr 24, 2020, 1:16:06 PM
to retro-comp
> I am very open to thoughts from this group.  Do people think I should just make the change and warn everyone?

It would be better IMHO to fix it compatibly.

My suggestion would be:
- Recognize PC partition tables (at least primary ones are trivial; the others are harder).
- If there isn't one (no AA55 marker), then assume the legacy layout.
- If there is a primary partition table entry with a type field we pick, then use that; one of the sectors can contain the BPB and other relevant information.  That is how CP/M fixed the mess on CP/M-86.
- If there isn't one with our type field, then assume the legacy layout.

Amstrad took a slightly different approach: the media has a byte on it which tells you which of several BPBs to use. That might be simpler to implement and much easier to check. So you'd have a sector with a table of

[bpbtype.b][sectoroffset.32][label] (and maybe a preferred drive letter so you can do default bindings at boot)

Even if you just picked 64 bits of unused space on the current layout and stuck a magic number in it, the odds of anyone getting a misdetect on an old disk are basically zero, so you could use that in a sector plus the table of type/offsets.

There were a few other similar systems on various platforms, but generally they come down to putting a way to describe the BPB, and an offset, on the media.

Alan

 

Wayne Warthen

Apr 24, 2020, 1:52:57 PM
to retro-comp
I was wondering if someone was going to bring this up.  Yes, it can be done this way.  I realize that use of a partition table and embedding the DPB would be the ideal solution, but it makes a lot of stuff pretty painful under legacy CP/M and compatible products.  The use of a media byte is probably pretty workable.  I will work on this since the issue continues to come up.

-Wayne

Philip Hoeffer

Apr 24, 2020, 3:19:51 PM
to Wayne Warthen, retro-comp
Not that it is up for a vote, but I bang my head against the 512 limit daily. I would give almost anything for 1024 directory entries. So my vote would be for a build-time decision?

Maybe something in the config files... #define "1KDIRENT equ 1024" for build time?
Also an additional entry in diskdefs like hd0-1k?

Ty Hoeffer
Palmyra, VA


Richard Deane

Apr 25, 2020, 4:22:45 AM
to retro-comp
Wayne - I was just about to email you to ask for 1024 support as I run out of directory entries long before running out of free space.
Happy to take whatever method wins in cabinet - some good ideas being put forward.
Happy to be an early adopter / beta tester. I am pretty much set up to do all app software deployment via xmodem and lbr files (since you provide nulu in rom), so it should be easy for me to upgrade / downgrade.

Richard

Phillip Stevens

Apr 25, 2020, 6:49:12 AM
to retro-comp
Wayne Warthen wrote:
None of the existing tools would be impacted.  They would all just work.  The issue is the CBIOS DPB.
 
Has anyone created a compatible disk image for RomWBW with 1024 DIR entries?
I have.  It is trivial to do.

I was going to wait until Spencer's new simplified PPIDE module arrived and then do the 1024-file, 16MB modification for CP/M-IDE, but it is taking an age, so... it is now done.

As Wayne notes, it is a trivial change. Just declare that there are 1024 directory entries, and reserve sufficient allocation blocks for them. That changes only 3 bytes in the BIOS.
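
For the curious, those three bytes are the DPB's two-byte DRM field (the highest directory entry number) and the AL0 byte of the directory allocation bitmap. A small C sketch of my own (not Phillip's BIOS code), assuming the 4096-byte block size used throughout this thread:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    for (unsigned maxdir = 512; maxdir <= 2048; maxdir *= 2) {
        unsigned drm    = maxdir - 1;                    /* DRM field, 2 bytes        */
        unsigned dirblk = maxdir * 32 / 4096;            /* 4K blocks holding the dir */
        unsigned mask   = (0xFFFFu << (16 - dirblk)) & 0xFFFFu; /* AL0/AL1 bitmap     */
        printf("maxdir %4u -> DRM %4u, AL0 %02X, AL1 %02X\n",
               maxdir, drm, (mask >> 8) & 0xFF, mask & 0xFF);
    }
    return 0;
}

Going from 512 to 1024 entries moves DRM from 511 to 1023 and AL0 from 0xF0 to 0xFF: two bytes plus one, matching the "3 bytes" above.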

It took only a little while to rework all my drives into the new format. And it was quite simple using cpmtools commands to copy the contents of the legacy-formatted drive to a directory, and then copy them back onto a newly formatted drive.

> cpmcp -f rc2014v1-16MB USER.CPM 0:*.* ~/Desktop/cpm_drives/user.cpm/
> cpmcp -f rc2014-16MB USER.CPM ~/Desktop/cpm_drives/user.cpm/*.* 0:

I am very open to thoughts from this group.  Do people think I should just make the change and warn everyone?

Yes. Absolutely make the change.

The only request I'd have is to use powers of two for the sector and track counts.
This makes it trivial (bit shifting) to calculate the LBA on the IDE drive.

There's no reason to use odd sector and track values, since this is not legacy-encumbered. I used these values for reference.

diskdef rc2014-16MB
  seclen 512
  tracks 1024
  sectrk 32
  blocksize 4096
  maxdir 1024
  skew 0
  boottrk -
  os 2.2
end

For me it would be useful to have just one disk format for all my rc2014, yaz180, and scz180 machines.

Cheers, Phillip

Douglas Miller

Apr 25, 2020, 8:12:26 AM
to retro...@googlegroups.com
Just a caution for everyone that is implementing this. More than the DPB may need to be changed. I'm not familiar with the CBIOS, but will talk CP/M3 in general. If the DPB declares the drive "removable", i.e. CKS != 0, then you must increase the CSV buffer to match the new directory size - or else corruption and crashing will ensue. Also, the directory hash buffer needs to be enlarged for the new directory size or you will get very bizarre behavior.

If you are not using directory hash, you really should reconsider - especially with such a large directory. CP/M needs all the help it can get to avoid being bogged-down by long directory searches.

I guess if you are telling GENCPM to create all the buffers, then it will be automatic.

Wayne Warthen

Apr 25, 2020, 12:35:28 PM
to retro-comp
Hi Folks,

For what it is worth, I am fully committed to making this change.  I produced a build-time version (conditional assembly and disk image building) that works fine.  I am currently pursuing changes that would allow all supported OSes to adapt to either legacy or new directory format on a per-slice basis.  I also have this mostly working in the lab, though this dynamic detection stuff is proving to be much uglier than I expected.

@Douglas, yes, you are absolutely right about the complexity of these changes.  There is a lot to account for.

@Phillip, yes, I remember that you asked to change the geometry previously.  See question #2 below.

Since I am doing this, I don't really want to do it again in the future.  So, I have questions for the community... :-)
  1. How many directory entries are desired?  Everyone is talking about 1024, but I have tested all the way up to the max of 2048.  Bear in mind Douglas' cogent comments about CP/M 3 and directory size, although hashing is in use.
  2. Currently, the slice size used by RomWBW is 8MB + 128K.  This sounds odd, but I originally did it to allow a 128K system area in addition to a maximum sized filesystem (8MB).  I regret doing this now (as Phillip points out) because the slice sizes are not a power of 2.  It would be better to make the slice sizes 8MB and use a bit of that for the system area.  If I make this change it would preclude any backward compatibility because slices would now be at new offsets.
  3. I am quite nervous about folks losing data as a result of this change.  All of you following this thread are highly competent.  But many users of RomWBW are new to CP/M.  I can probably catch and block the scenario where someone upgrades their ROM and then tries to use an old disk format.  However, I don't yet see a way to block/warn someone using an old ROM to access a new disk format.  I know this is the least likely scenario, but I have been a software developer long enough to know how important data integrity is.  Anyone have any ideas how I could keep an old ROM/OS image from mangling a new disk format?  If I implement backward compatibility, it will prevent a new ROM from mangling an old disk, but it would not prevent an old ROM from mangling a new disk.
I am going to focus on some implementation details for the moment while I watch for replies to these questions.

I expect to hear different opinions and I hope that everyone will understand that I ultimately need to pick what I am most comfortable with.

Thanks!

Wayne

Jim McGinnis

Apr 25, 2020, 4:48:52 PM
to retro-comp
Greetings Wayne

Replies below...

1. How many directory entries are desired?  Everyone is talking about 1024, but I have tested all the way up to the max of 2048.  Bear in mind Douglas' cogent comments about CP/M 3 and directory size, although hashing is in use.

Given that I am mostly running CP/M 3, 1024 seems more than adequate, unless we move to a 16MB slice size, and then 2048 does make sense. I am very happy with 8MB and 1024.

2. Currently, the slice size used by RomWBW is 8MB + 128K.  This sounds odd, but I originally did it to allow a 128K system area in addition to a maximum sized filesystem (8MB).  I regret doing this now (as Phillip points out) because the slice sizes are not a power of 2.  It would be better to make the slice sizes 8MB and use a bit of that for the system area.  If I make this change it would preclude any backward compatibility because slices would now be at new offsets.

Moving to powers of 2 is logical. The impact is not arduous to handle.

3. I am quite nervous about folks losing data as a result of this change.  All of you following this thread are highly competent.  But many users of RomWBW are new to CP/M.  I can probably catch and block the scenario where someone upgrades their ROM and then tries to use an old disk format.  However, I don't yet see a way to block/warn someone using an old ROM to access a new disk format.  I know this is the least likely scenario, but I have been a software developer long enough to know how important data integrity is.  Anyone have any ideas how I could keep an old ROM/OS image from mangling a new disk format?  If I implement backward compatibility, it will prevent a new ROM from mangling an old disk, but it would not prevent an old ROM from mangling a new disk.

I expect there to be some risks as outlined above. My intention is to fully move forward across all of my systems so that "hd compatibility" will be universally the same for each of them. I do not have so many existing HD images that this presents a significant problem for me.

I ultimately need to pick what I am most comfortable with.
Carry on! I expect that your comfort is primary for your decision making process, as it should be. Very fine.


Best regards,

Jim

Alan Cox

Apr 25, 2020, 6:02:55 PM
to retro-comp
Given that I am mostly running CP/M 3, 1024 seems more than adequate, unless we move to a 16MB slice size, and then 2048 does make sense. I am very happy with 8MB and 1024.

For CP/M 3 with banking it's essentially free. Ditto for MP/M. For CP/M 2.x less so as you eat into your TPA more.

I've always used 32MB volumes with CP/M 3 because you end up with no shifts or maths in the disk I/O. Sector becomes LBA byte 0, track LBA byte 1, and drive LBA byte 2, and it's easier to use more of the media. For CP/M 2 the same trick sort of works with 8MB volumes, except you do have to shift it all right twice to get from a 128-byte-sector LBA to a 512-byte one - and you are stuck with 8MB limits for CP/M 2.2 anyway.
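
To illustrate, a C sketch of my own (assuming 512-byte sectors, 256 sectors per track, and 256 tracks per 32MB volume, as Alan describes): the BIOS-supplied values drop straight into the LBA bytes with no arithmetic.

#include <stdint.h>
#include <stdio.h>

static uint32_t lba_32mb(uint8_t drive, uint8_t track, uint8_t sector) {
    /* drive selects a 32MB volume; track and sector arrive from the BIOS */
    return ((uint32_t)drive << 16) | ((uint32_t)track << 8) | sector;
}

int main(void) {
    printf("LBA = %lu\n", (unsigned long)lba_32mb(1, 2, 3)); /* 66051 */
    return 0;
}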

Alan

Phillip Stevens

Apr 25, 2020, 6:30:38 PM
to retro-comp
For CP/M 2 the same trick sort of works with 8MB volumes except you do have to shift it all right twice to get from a 128 byte sector LBA to a 512 byte one - and you are stuck with 8MB limits for CP/M 2.2 anyway.

Alan,

I’ve heard that here before, iirc. That CP/M 2.2 can’t support 16MB drives.

Is there a BDOS corner case you’re aware of?

I’ve used 16 MB for a while with nothing obvious breaking.

Is there something you can point out?

Cheers, Phillip

Douglas Miller

Apr 25, 2020, 6:47:54 PM
to retro-comp
It has to do with the math being performed. In the CP/M 2.2 BDOS it uses a 16-bit integer (register pair) to calculate the ARECORD (absolute record number). Since records are 128 bytes, that is 65536*128 = 8MB. That is the largest "drive" that can be addressed. If you've been using a 16MB drive on 2.2, you probably have not tried to allocate blocks beyond the 8MB boundary and have just been lucky. Or else you've been overwriting/corrupting older files and not noticed.
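
A sketch of the failure mode (mine, not the actual BDOS code): with a 4096-byte block size there are 32 records per block, so the first block past the 8MB line computes record number 65536, which wraps to 0 in 16-bit arithmetic and lands right on the directory.

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t block = 2048;                     /* first 4K block beyond 8MB   */
    uint16_t arecord = (uint16_t)(block * 32); /* 16-bit math, as in the BDOS */
    printf("block %lu -> record %u\n", (unsigned long)block, arecord); /* 0 */
    return 0;
}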

Phillip Stevens

Apr 25, 2020, 8:17:30 PM
to retro-comp
Phillip Stevens wrote:
I’ve heard that here before, iirc. That CP/M 2.2 can’t support 16MB drives.

Is there a BDOS corner case you’re aware of?

I’ve used 16 MB for a while with nothing obvious breaking.
Is there something you can point out?

Douglas Miller wrote:
It has to do with the math being performed. In the CP/M 2.2 BDOS it uses a 16-bit integer (register pair) to calculate the ARECORD (absolute record number). Since records are 128 bytes, that is 65536*128 = 8MB. That is the largest "drive" that can be addressed. If you've been using a 16MB drive on 2.2, you probably have not tried to allocate blocks beyond the 8MB boundary and have just been lucky. Or else you've been overwriting/corrupting older files and not noticed.

Ahh. I see. That's not a corner case at all. In fact I'm surprised that I didn't read the limitation in the old DRI guides anywhere.

As 16MB is so "large" in terms of CP/M, I've only ever pushed full drives when doing disk performance testing with 1MB and 2MB files. So the corruption by record wrap may not have been obvious.

Anyway, now that's clear let me calculate again.

A directory entry is 32 Bytes, and we have up to 16 Allocation Blocks in which to store Directory Blocks. Those numbers, we can't change.

Previously, I've used a Block size of 4096 Bytes, which means that up to 2048 directory entries (files) could be stored in the Directory Blocks.
And, quite logically, it is then possible to store 2048 − 16 = 2032 files of up to 4096 Bytes each, since the directory itself consumes 16 allocation blocks. So that works well too.
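
Spelled out as a quick sketch (my arithmetic, assuming the 4096-byte block size and an exactly 8MB drive):

#include <stdio.h>

int main(void) {
    unsigned blocksize   = 4096;
    unsigned max_entries = 16 * blocksize / 32;           /* 16 dir blocks, 32B entries = 2048 */
    unsigned total_blocks = 8 * 1024 * 1024 / blocksize;  /* 8MB drive = 2048 blocks           */
    unsigned one_block_files = total_blocks - 16;         /* directory eats 16 blocks = 2032   */
    printf("entries %u, one-block files %u\n", max_entries, one_block_files);
    return 0;
}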

Provided the hard drive is "fixed" there's no real overhead to allowing up to 2048 files.

So, I'll do my rework towards this outcome.

diskdef rc2014-8MB
  seclen 512
  tracks 512
  sectrk 32
  blocksize 4096
  maxdir 2048
  skew 0
  boottrk -
  os 2.2
end

If I'm not mistaken (???), as long as the seclen and tracks are contiguous data, there is no real dependency on these numbers when calculating the LBA, as long as each BIOS calculates the right record number in terms of LBA?

The only real issue is the blocksize and maxdir numbers (and of course skew).

P.

Douglas Miller

Apr 25, 2020, 8:52:36 PM
to retro...@googlegroups.com


On Saturday, April 25, 2020 at 7:17:30 PM UTC-5, Phillip Stevens wrote:
...


If I'm not mistaken (???), as long as the seclen and tracks are contiguous data, there is no real dependency on these numbers when calculating the LBA, as long as each BIOS calculates the right record number in terms of LBA?

The only real issue is the blocksize and maxdir numbers (and of course skew).

P.

Correct. In fact, it's actually a huge waste of CPU in the case of devices like this that can/do use LBA. Since the BDOS starts with the ARECORD, all you really need to do is compute "ARECORD << 2" to convert to 512B blocks (LBA), then issue that to the controller. But instead, the BDOS goes through a divide operation by SPT, and sends the quotient and remainder to the BIOS as track and sector, where they get recombined into LBA.

BTW, a file may contain more than one directory entry. So, DRM+1 is not necessarily the number of files you could create, just the max number of directory entries. If you have large files, you cannot create as many actual files.

For what it's worth, I have been using a heuristic to compute a default DRM when creating HDD partition tables (for another project - different partitioning scheme - the partition sector includes the DPBs). I divide the DSM+1 value by the number of blocks per directory entry (16 or 8 depending on the candidate EXM) to get a minimum value, and possibly apply a user-provided multiplier (e.g. "1.5"). Basically, that min value is the number of directory entries required to allocate every block on the disk (sort of). Since most files never consist of completely-full directory entries, that number is usually too small and needs to be increased. As an impractical upper limit, you could say that it was the number of allocation blocks (i.e. can you fill the disk with one-block files?). And, you have to consider that the number should (but might not be required to) be based on whole allocation blocks and must be <= 16 blocks (ALV0 is only 16 bits). My heuristic goes through a bunch of different block sizes and sees what values work best (or not at all). It would be nice to have a more formal, scientific heuristic. Not sure if anyone has ever polished such a thing. I seem to recall some macros from DRI for CP/M 3 that might have tried to do that sort of thing, although I seem to recall they often produced sub-optimal values.
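
My loose reading of that heuristic, as a C sketch (parameter names are mine, and this is not Douglas' actual code):

#include <stdio.h>

static unsigned suggest_maxdir(unsigned dsm, unsigned bls, double mult) {
    unsigned per_entry = (dsm > 255) ? 8 : 16;  /* blocks one entry can map  */
    unsigned min_e = (dsm + 1) / per_entry;     /* map every block once      */
    unsigned want  = (unsigned)(min_e * mult);  /* user-provided multiplier  */
    unsigned epb   = bls / 32;                  /* entries per alloc block   */
    unsigned blks  = (want + epb - 1) / epb;    /* round up to whole blocks  */
    if (blks > 16) blks = 16;                   /* ALV0/AL0/AL1 ceiling      */
    return blks * epb;
}

int main(void) {
    /* 8MB drive, 4K blocks: DSM = 2047 */
    printf("suggested maxdir: %u\n", suggest_maxdir(2047, 4096, 1.5)); /* 384 */
    return 0;
}

For an 8MB drive with 4K blocks and a 1.5 multiplier this comes out at 384 entries, which is indeed well under the values being debated here.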

Phillip Stevens

Apr 25, 2020, 9:46:40 PM
to retro-comp
Douglas Miller wrote:
Correct. In fact, it's actually a huge waste of CPU in the case of devices like this that can/do use LBA. Since the BDOS starts with the ARECORD, all you really need to do is compute "ARECORD << 2" to convert to 512B blocks (LBA), then issue that to the controller. But instead, the BDOS goes through a divide operation by SPT, and sends the quotient and remainder to the BIOS as track and sector, where they get recombined into LBA.

I never tried to pick the BDOS apart, preferring to leave it as its maker intended. But perhaps it would be worth putting some thought into this now, since I'm not trying to support non-LBA drives?
If there's a division to be removed, then it would certainly improve disk throughput. Added to the list.

P.

Douglas Miller

Apr 25, 2020, 10:55:57 PM
to retro-comp
Oops, just realized that my LBA conversion expression was wrong, should be "ARECORD >> 2". It's different with CP/M 3 since you, presumably, would be letting the BDOS handle the de/blocking.

If one wanted to make a BDOS that could handle floppies but still take this optimization, you could maybe trigger it when SPT is set to zero (i.e. "SPT=0 means use LBA"). Still need to work out whether 16-bit is a large enough LBA or if some other use of BIOS settrk/setsec is needed.

With a larger SPT value, as I think is being used here, the cost of the division is less. But still could be worth eliminating (along with the reconversion in the BIOS).
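
The two routes side by side, as a C sketch of mine (values assume 32 host sectors per track, i.e. SPT = 128 CP/M records):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint16_t arecord = 60000; /* 128-byte record near the end of an 8MB drive */
    uint16_t spt = 128;       /* 32 host sectors/track x 4 CP/M records each  */

    /* today's round trip: BDOS divides, BIOS recombines */
    uint16_t track  = arecord / spt;
    uint16_t sector = arecord % spt;
    uint32_t lba_a  = ((uint32_t)track * spt + sector) >> 2;

    /* the shortcut for LBA media: one shift */
    uint32_t lba_b  = (uint32_t)arecord >> 2;

    printf("%lu == %lu\n", (unsigned long)lba_a, (unsigned long)lba_b); /* 15000 */
    return 0;
}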

Wayne Warthen

Apr 26, 2020, 12:11:52 AM
to retro-comp
So, here is the current DPB for RomWBW (legacy):

diskdef wbw_hd0
  seclen 512
  tracks 65
  sectrk 256
  blocksize 4096
  maxdir 512
  skew 0
  boottrk 1
  os 2.2
end

First, I'll just say that I don't want to modify BDOS, but happy to accommodate doing things such that it could be done.

Second, it is turning out to be a little troublesome to use 2048 directory entries in CP/M 3.  When using directory hashing, it is only possible to hash six drives.  With 1024 dir entries, I can hash 15 drives.  Annoying that I can't do all 16, but good enough.  So, it is kind of a choice between hashed directories of 1024 entries and unhashed directories of 2048 entries.

I did a little test (very unscientific).  I copied all the files in the current zsdos drive to other user areas until I ran out of space.  I ran out of space on the drive well before using up the 1024 directory entries.  I generally think the file sizes represent a typical usage scenario and I think it indicates 1024 entries is pretty workable.  Thoughts?

Taking everything into account, I am currently thinking this:

diskdef wbw_hd0
  seclen 512
  tracks 511
  sectrk 32
  blocksize 4096
  maxdir 1024
  skew 0
  boottrk 1
  os 2.2
end

This is very close to what you suggested Phillip.  However, it reduces maxdir to 1024.  I am open to making it 2048 if that is preferred by most folks with the understanding that we lose directory hashing in CP/M 3.  The other change is that tracks has been reduced by one so that a boottrk can be carved out of the 8MB space.  This will make a slice exactly 8MB.  That will really help the math required to use slices (which is not inside BDOS).

Phillip, does that fit your objectives reasonably?

None of this answers the question of backward compatibility, but that is a different topic...

Thanks,

Wayne
 


Douglas Miller

Apr 26, 2020, 8:05:55 AM
to retro-comp


On Saturday, April 25, 2020 at 11:11:52 PM UTC-5, Wayne Warthen wrote:
...
Phillip, does that fit your objectives reasonably?

None of this answers the question of backward compatibility, but that is a different topic...

Thanks,

Wayne
 

I'll make a few observations, and since I'm not particularly invested in this platform you can consider them as you wish.

A) By decreasing the SPT from 256 to 32 you will increase the number of subtract loops done in the BDOS by a factor of 8. You might want to be certain that the simplification of the BIOS really offsets that overhead. I would wonder if a little shifting and combining in the BIOS is really worse than those 8x loops. I've not seen the BIOS code, so maybe it is not optimized as much as it could be.

B) Yes, hash tables take up a lot of space. I'm not familiar with the memory mapping available on this platform, so not sure how much space you have for hash tables.

C) While it wastes some space (unused portion of an allocation block), you don't have to use a power of two for the number of directory entries. You could bump it to, say, 768 and maybe have enough memory to hash all drives. Just make sure you reserve the whole last (partially used) allocation block in ALV0 (see the sketch just after this list).

D) Again, not familiar with this platform, but if a partition sector(s) is used on the disk, might want to look into an upgrade there whereby DPBs are also stored along with the partition offsets. As you mention, backward compatibility is another issue, so probably look for a way to build in a version byte.

E) While not directly related to the directory size issue, if a new memory mapping card is being considered, I would suggest making the MMU allow direct copy between banks (eliminating the need to copy into common memory, then back to the other bank). This allows the disk buffers to be placed in banked memory instead of common, and increases the TPA. That can also allow the common memory boundary to be raised which can give you more banked memory for hash tables.
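
A worked example of observation (C), using the 4096-byte blocks discussed in this thread (my sketch, not Douglas' code): 768 entries happen to fill exactly six allocation blocks, while something like 700 leaves the sixth block partly used but must still reserve it whole.

#include <stdio.h>

int main(void) {
    unsigned sizes[] = { 700, 768 };
    for (int i = 0; i < 2; i++) {
        unsigned entries = sizes[i];
        unsigned blocks  = (entries * 32 + 4095) / 4096;         /* round up       */
        unsigned mask    = (0xFFFFu << (16 - blocks)) & 0xFFFFu; /* AL0/AL1 bitmap */
        printf("%u entries -> %u dir blocks, AL0/AL1 = %04X\n",
               entries, blocks, mask);                           /* both: 6, FC00  */
    }
    return 0;
}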

Phillip Stevens

Apr 26, 2020, 8:09:05 AM
to retro-comp
Wayne Warthen wrote:
First, I'll just say that I don't want to modify BDOS, but happy to accommodate doing things such that it could be done.

Yes, imho it feels sort of wrong to rewrite that stuff too. Although I have to admit to swapping out one function for an LDIR, because it was such an obvious fix (shame).

Second, it is turning out to be a little troublesome to use 2048 directory entries in CP/M 3. So, it is kind of a choice between hashed directories of 1024 entries and unhashed directories of 2048 entries.

Not an expert, but isn't it a bit unnecessary to do hashing for data stored on a modern SD or SSD?  They have their own error correction and wear levelling algorithms at the physical layer, which present corrected bits on their SPI or IDE interfaces. If the hashing is for other reasons then OK, but if it is purely to protect against disk errors I'd call CP/M 3 directory hashing obsolete.

(Note this isn't saying that file hashing for data transmission, security or proof of identity is unnecessary. Otherwise I'd be calling out block-chain too).
 
I did a little test (very unscientific).  I copied all the files in the current zsdos drive to other user areas until I ran out of space.  I ran out of space on the drive well before using up the 1024 directory entries.  I generally think the file sizes represent a typical usage scenario and I think it indicates 1024 entries is pretty workable.  Thoughts?

Reading through Douglas' post above, I hear that the magic answer lies somewhere between.

I am currently leaning towards 2048 because it doesn't cause limitations.
Yes, it is unlikely that someone will need 2032 files of less than 4096 Bytes on one drive, but they could and it would be supported. 

Aside from the hashing issue in CP/M 3, for most people 2048 files doesn't cost anything, as they're using CP/M 2.2.
But, looking at it from the other side, until last week everyone was happy with 512 files.
 
Taking everything into account, I am currently thinking this:

diskdef wbw_hd0
  seclen 512
  tracks 511
  sectrk 32
  blocksize 4096
  maxdir 1024
  skew 0
  boottrk 1
  os 2.2
end


This is very close to what you suggested Phillip.  However, it reduces maxdir to 1024.  I am open to making it 2048 if that is preferred by most folks with the understanding that we lose directory hashing in CP/M 3.  The other change is that tracks has been reduced by one so that a boottrk can be carved out of the 8MB space.  This will make a slice exactly 8MB.  That will really help the math required to use slices (which is not inside BDOS).

I think it works very well for the hd0 drive, with the boot track. 
For your hd1,..., hdn drives (without the boot track) you could then do the full 8 MB, with either 2048 or 1024 files.

So an open question is on hashing, or no hashing for CP/M 3.0 ???

diskdef wbw_hdn
  seclen 512
  tracks 512
  sectrk 32
  blocksize 4096
  maxdir 1024
  skew 0
  boottrk -
  os 2.2
end

Phillip, does that fit your objectives reasonably?

Absolutely.

But, just brainstorming a little...

Since most of the systems we're talking about now boot from ROM, having a boot track is a bit of an anachronism.

I think it pushes the start of the directory out by one track, which means that a "normal" non-boot drive can't be written starting at the first LBA on the drive; everything is pushed out by one track.

I wonder if it would be possible to put the boot track as the LAST track in the first 8 MB slice, rather than the first track in the slice?
What would that imply?

As long as the boot loader knew where to find it, it should be the same result.
And it would allow any "new" wbw_hdn drive to be written to the first slice and be usable (except for the last track).
Possible? Interesting?

Cheers, P.

Douglas Miller

Apr 26, 2020, 8:15:35 AM
to retro-comp


On Sunday, April 26, 2020 at 7:09:05 AM UTC-5, Phillip Stevens wrote:
...
Not an expert, but isn't it a bit unnecessary to do hashing for data stored on a modern SD or SSD?  They have their own error correction and wear levelling algorithms at the physical layer, which present corrected bits on their SPI or IDE interfaces. If the hashing is for other reasons then OK, but if it is purely to protect against disk errors I'd call CP/M 3 directory hashing obsolete.

(Note this isn't saying that file hashing for data transmission, security or proof of identity is unnecessary. Otherwise I'd be calling out block-chain too).
 
...
Cheers, P.

The hashing I'm talking about is CP/M 3 "directory hashing", which means that the hash table allows the BDOS to quickly locate what directory entries it needs for a given file (even empty ones), without doing *any* I/O. Even on a fast IDE/CF device, I think that is a significant boost.

Phillip Stevens

Apr 26, 2020, 8:20:43 AM
to retro-comp
Phillip Stevens wrote:
Not an expert, but isn't it a bit unnecessary to do hashing for data stored on a modern SD or SSD?  They have their own error correction and wear levelling algorithms at the physical layer, which present corrected bits on their SPI or IDE interfaces. If the hashing is for other reasons then OK, but if it is purely to protect against disk errors I'd call CP/M 3 directory hashing obsolete.
(Note this isn't saying that file hashing for data transmission, security or proof of identity is unnecessary. Otherwise I'd be calling out block-chain too).

Douglas Miller wrote:
The hashing I'm talking about is CP/M 3 "directory hashing", which means that the hash table allows the BDOS to quickly locate what directory entries it needs for a given file (even empty ones), without doing *any* I/O. Even on a fast IDE/CF device, I think that is a significant boost.

Then definitely, under "proof of identity", worth doing for CP/M 3. ;-)

P.

Phillip Stevens

Apr 26, 2020, 8:35:38 AM
to retro...@googlegroups.com
Douglas Miller wrote:
A) By decreasing the SPT from 256 to 32 you will increase the number of subtract loops done in the BDOS by a factor of 8. You might want to be certain that the simplification of the BIOS really offsets that overhead. I would wonder if a little shifting and combining in the BIOS is really worse than those 8x loops. I've not seen the BIOS code, so maybe it is not optimized as much as it could.

That is a significant impact, and something worth taking into account. Given that the only objective is to get the 8MB drive mapped contiguously onto LBA addresses, it makes no physical difference how sectrk or tracks are defined. Now that you've brought up the division (or subtract loops) in the BDOS, it makes sense to maximise sectrk to a reasonable number, rather than 32.

A question. IIRC sectors count from 1 (rather than 0), right?
If so, then it would be "messy" to have 256 as the sectrk. Unless there was a decrement or similar.

Otherwise if sectors count from 0, then it would be ok (and much more efficient) to have 256, as the bit shifting LBA code is easier if anything.

This is something I've never accounted for, as I didn't study the BDOS code previously (obviously an oversight).

P.

Douglas Miller

Apr 26, 2020, 8:49:43 AM
to retro-comp
The BDOS calls the BIOS SECTRN function with a 0-based logical sector number (16-bit, in BC). Then, the BDOS calls SETSEC with whatever value was returned by SECTRN (HL -> BC). So, the BIOS has full control (unless I've missed something) over whether the sector is 0-based or 1-based.

I think this all started with the 8" SD (IBM format) floppies, where the physical sector number started with 1. Very old versions of CP/M would pass a 1-based sector number (with no SECTRN function) to the BIOS. I think that got carried forward, to a detriment.

Phillip Stevens

Apr 26, 2020, 9:07:16 AM
to retro-comp
Douglas Miller wrote:
A) By decreasing the SPT from 256 to 32 you will increase the number of subtract loops done in the BDOS by a factor of 8. You might want to be certain that the simplification of the BIOS really offsets that overhead. I would wonder if a little shifting and combining in the BIOS is really worse than those 8x loops. I've not seen the BIOS code, so maybe it is not optimized as much as it could.

That is a significant impact, and something worth taking into account. Given that the only objective is to get the 8MB drive mapped contiguously onto LBA addresses, it makes no physical difference how sectrk or tracks are defined. Now that you've brought up the division (or subtract loops) in the BDOS, it makes sense to maximise sectrk to a reasonable number, rather than 32.

A question. IIRC sectors count from 1 (rather than 0), right?
If so, then it would be "messy" to have 256 as the sectrk. Unless there was a decrement or similar.

Otherwise if sectors count from 0, then it would be ok (and much more efficient) to have 256, as the bit shifting LBA code is easier if anything.

This is something I've never accounted for, as I didn't study the BDOS code previously (obviously an oversight).


Douglas Miller wrote: 
The BDOS calls the BIOS SECTRN function with a 0-based logical sector number (16-bit, in BC). Then, the BDOS calls SETSEC with whatever value was returned by SECTRN (HL -> BC). So, the BIOS has full control (unless I've missed something) over whether the sector is 0-based or 1-based.

My sectran code looks like this, so it is 0 based I guess. ;-)

sectran:            ; translate sector number passed from BDOS in BC
    ld      h,b
    ld      l,c
    ret

So that makes it easy (and it's exactly what Alan previously said): the SPT should be 256, and the sector is then simply LBA byte 0.

That then reduces the number of tracks down to 64 for 8MB drives, with the track masked into LBA byte 1, correct?
This can then be added to the origin LBA of the slice (or any FATFS file).
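
As a sketch (my code, assuming the 256 sectors/track and 64 tracks per 8MB drive just described), the whole translation becomes:

#include <stdint.h>
#include <stdio.h>

static uint32_t drive_lba(uint32_t origin, uint8_t track, uint8_t sector) {
    /* track (0..63) is LBA byte 1, sector (0..255) is LBA byte 0 */
    return origin + ((uint32_t)track << 8) + sector;
}

int main(void) {
    /* last sector of an 8MB drive starting at LBA 0 -> 16383 */
    printf("%lu\n", (unsigned long)drive_lba(0, 63, 255));
    return 0;
}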

And the general definition looks more like this... right?

diskdef wbw_hdn
  seclen 512
  tracks 64
  sectrk 256
  blocksize 4096
  maxdir 1024
  skew 0
  boottrk -
  os 2.2
end

OR (for CP/M 2.2)

diskdef wbw_hdn
  seclen 512
  tracks 64
  sectrk 256
  blocksize 4096
  maxdir 2048
  skew 0
  boottrk -
  os 2.2
end

Anna Christina Naß

Apr 26, 2020, 9:09:14 AM
to retro...@googlegroups.com
On 26.04.20 at 14:09, Phillip Stevens wrote:

Hi,

As I'm not into BIOS programming at all, I think I will be fine with
whatever solution will result, but just one remark from me here:

> Since most of the systems we're talking about now boot from ROM, having
> a boot track is a bit of an anachronism.

I'm using RomWBW as 'ROM', but I boot my CP/M 3 from the CF card - so I
think I use the boot track :)

And just one thought from a more "users" perspective:

I like the idea that came up some days ago that all CP/M slices reside
in one primary partition of a CF/SD/HDD.
So the CP/M slices are safe from being overwritten by PC software as
they (as a whole) can properly be seen there.

And at least my Linux "fdisk" command lists partition type "52" as
"CP/M" and type "db" as "CP/M / CTOS" :)

Regards,
Anna

Bill Shen

Apr 26, 2020, 9:16:26 AM
to retro-comp
Douglas,
Interesting comment about what the BDOS does with sector & track calculation.  From day one my CF disk definition has used 1024 for sectrk, mainly to help me with sector/track calculation in the BIOS.  I have not thought about what the BDOS does with it.  So does it help or hurt the overall calculation with the fairly large sectrk value?
  Bill
PS, I have track offset of 1 and that does eat up 128K of CF memory, but I use the first track to store bootstrap, monitor, and CP/M BIOS/BDOS/CCP.

# RAM disk in Tiny68K
# BLS of 4096
# 1024 sectors per track
# 63 tracks
diskdef t68kram
  seclen 128
  tracks 63
  sectrk 1024
  blocksize 4096
  maxdir 512
  skew 0
  boottrk 1
  os 2.2
end

Douglas Miller

Apr 26, 2020, 9:25:30 AM
to retro-comp


On Sunday, April 26, 2020 at 8:16:26 AM UTC-5, Bill Shen wrote:
Douglas,
Interesting comment about what the BDOS does with sector & track calculation.  From day one my CF disk definition has used 1024 for sectrk, mainly to help me with sector/track calculation in the BIOS.  I have not thought about what the BDOS does with it.  So does it help or hurt the overall calculation with the fairly large sectrk value?
  Bill
PS, I have track offset of 1 and that does eat up 128K of CF memory, but I use the first track to store bootstrap, monitor, and CP/M BIOS/BDOS/CCP.

# RAM disk in Tiny68K
# BLS of 4096
# 1024 sectors per track
# 63 tracks
diskdef t68kram
  seclen 128
  tracks 63
  sectrk 1024
  blocksize 4096
  maxdir 512
  skew 0
  boottrk 1
  os 2.2
end



...

A large SPT value will reduce the number of subtractions done by the BDOS (to divide ARECORD by SPT). So, in general, a larger SPT would be better (considering only the BDOS division loop). As you point out, though, it means you have less granularity for the boot track(s). Like everything else in life, absolutely nothing is absolute.

Douglas Miller

Apr 26, 2020, 9:54:00 AM
to retro-comp


On Sunday, April 26, 2020 at 8:07:16 AM UTC-5, Phillip Stevens wrote:
...

And the general definition looks more like this... right?

diskdef wbw_hdn
  seclen 512
  tracks 64
  sectrk 256
  blocksize 4096
  maxdir 1024
  skew 0
  boottrk -
  os 2.2
end

OR (for CP/M 2.2)

diskdef wbw_hdn
  seclen 512
  tracks 64
  sectrk 256
  blocksize 4096
  maxdir 2048
  skew 0
  boottrk -
  os 2.2
end
 
I'll just share that my "logistical nightmare" alarm is going off in the back of my lizard brain. Seems like a proliferation of lots of formats (DPBs) that must be hard-coded into the BIOS? (Maybe the format is not hard-coded, in which case my concerns may be off-base.)

It just seems like you have a case where some users want to customize certain partitions for certain use cases (lots of little files, in this case). Having DPBs hard-coded in the BIOS just isn't conducive to this sort of diversity, IMHO. Of course, it's no trivial task to convert to some more-complicated scheme, especially at this late date. I think I feel uncomfortable because I've been using a partitioning scheme (that originated on SASI disks in the early 80's) whereby the partition table, drive config parameters, and DPBs are all stored on a "magic sector" (as it was called). Obviously, the device driver needs to read and honor this magic sector - as does boot and any other code that depends on the disk layout - but it relieves one from having to recompile/regenerate the OS all the time and worry about accidental mismatches totally corrupting your disk. Maybe users these days do not keep anything precious on the disk, so blowing it away and rebuilding is fine. But just seems like an opportunity to improve a difficult situation.
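
For flavour, a purely illustrative C layout for that kind of "magic sector"; the field names and sizes are my invention, not the actual SASI-era format Douglas describes:

#include <stdint.h>

struct magic_part {
    uint32_t lba_offset;  /* where this partition starts on the disk    */
    uint8_t  dpb[15];     /* the 15-byte CP/M DPB to use for it         */
    uint8_t  flags;       /* e.g. bootable, preferred drive letter      */
};

struct magic_sector {
    uint32_t signature;   /* magic number to detect/validate the scheme */
    uint8_t  version;     /* lets the format evolve without ambiguity   */
    struct magic_part part[8];
};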

I'll also add that I am able to use cpmtools with these "magic sector" disks, although with a little patching. I have a program that will print out the cpmtools diskdef for one/all partitions (based on data in the magic sector), and I also have a patched version of cpmtools that allows me to use ad-hoc diskdefs (env vars or commandline options) to better integrate with the magic sector. I've not been able to get the owner of cpmtools to accept these changes yet, but will continue to make my case.

Wayne Warthen

Apr 26, 2020, 4:33:52 PM
to retro-comp
So, a lot of interesting ideas...  I am going to try and consolidate them and provide some thoughts:
  • Putting system track at end of slice instead of start
It is a creative idea and I see why it would optimize the math required for calculating sector offsets for slices. However, I don't think I am going to get comfortable with this.  There are many tools and documents that expect system tracks to be at the start.  The system tracks value in the DPB is not an offset, it is a quantity.  I am worried about code breakage and confusion.  Sorry Phil.
  • Directory entries (and hashing)
I'm really struggling with this one.  I like the idea of maxing it out (2048) just to make sure I never deal with this again.  However, in every real-world test I can come up with, 1024 seems to be entirely sufficient for an 8MB hard disk.  I'm not sure I got Douglas' heuristic right because I come up with a number much less than 1024.  However, as long as the average file size is >= 8K, you will not run out of directory entries.  And I dislike precluding directory hashing.  Probably inclined to use 1024 directory entries unless someone can come up with a plausible real-world example that would need more.
  • Moving CP/M slices into a dedicated partition
Anna is right that it would help with data integrity.  This is really not that hard and is an elegant solution, but it has a couple of downsides.  Technically, it means doing a sector I/O on every disk login.  It also means doing a 32-bit addition on every sector seek.  The bigger concern is operational.  A user would be required to run fdisk before they could use any disk slices.  Use of fdisk seems to be a stumbling point for many users.  I would like to hear more thoughts on the pros and cons of this.
  •  Sectors per track
As long as it is a power of 2, the math is as optimal as it can be.  I acknowledge that a higher SPT reduces the iterations in the BDOS divide.  However, I have seen some code that will choke on values of 256 or higher because they assume that a single byte is sufficient to store the value.  I currently like 32 -- just feels about right.
  •  Starting sector number on a track
I don't use skew on any of my formats and really don't ever plan to.  So, BDOS will always consider the first sector to be zero.  I don't think there is a lot more to discuss there.

I didn't see any answers to my challenge regarding how to keep people from corrupting data by using mismatched software and disk formats.  However, I did realize something.  The current RomWBW disk format has a 128K system area in front of the filesystem (which is silly, don't ask).  If the new format has a more reasonable system area (maybe 8K or 16K), then anyone using mismatched software/disk formats will see an empty or garbage directory when attempting to access a disk.  It is not an ideal solution, but it does mean that a user would have some indication things are wrong right away.  Maybe that, along with copious warnings everywhere, would be sufficient.

I guess that is enough for now...  Thanks for all the input!

-Wayne
 

 

Alan Cox

Apr 26, 2020, 6:06:55 PM
to retro-comp
> The current RomWBW disk format has a 128K system area in front of the filesystem (which is silly, don't ask).

Anything using partition tables expects that space to be usable and
free. Fuzix for example uses those blocks. So if you do that you can
no longer build hybrid fdisk/ROMWBW disks safely because the first 63
sectors are the boot area (legacy) and 2048 (modern). I wonder if
that's why you originally created a 128K system area?

Phillip Stevens

Apr 26, 2020, 7:26:54 PM
to retro-comp
Wayne Warthen wrote:
So, a lot of interesting ideas...  I am going to try and consolidate them and provide some thoughts:
  • Putting system track at end of slice instead of start
It is a creative idea and I see why it would optimize the math required for calculating sector offsets for slices. However, I don't think I am going to get comfortable with this.  There are many tools and documents that expect system tracks to be at the start.  The system tracks value in the DPB is not an offset, it is a quantity.  I am worried about code breakage and confusion.  Sorry Phil.

No problem. Was just a random thought bubble. :-)
  • Sectors per track
As long as it is a power of 2, the math is as optimal as it can be.  I acknowledge that a higher SPT reduces the iterations in the BDOS divide.  However, I have seen some code that will choke on values of 256 or higher because they assume that a single byte is sufficient to store the value.  I currently like 32 -- just feels about right.

I think I understand that the sectrk & tracks numbers really don't matter for the actual format, as long as the data is stored at contiguous LBA addresses.
So it is (in this situation with LBA drives) purely a BIOS internal concern. Therefore, you don't really have to "decide" on that point at all.

I'll do some testing / quantification on an RC2014 BIOS soon to see how much difference 32 & 512 makes vs 256 & 64.
Just for interest. I imagine it will pay out when the drive is quite full?

The one open question for me is whether it is possible to differentiate hd0, with a boot track, from hd1, ..., hdn with no boot track?
hd0 == system drive
hdn == data drive

That would allow systems that don't use a boot track to use the full 512 tracks of an 8 MB slice.
And that would make the BIOS math much easier (=faster) for those hdn drives.

diskdef wbw_hdn
  seclen 512
  tracks 512
  sectrk 32
  blocksize 4096
  maxdir 1024
  skew 0
  boottrk -
  os 2.2
end

Douglas Miller

Apr 26, 2020, 8:06:43 PM
to retro-comp


On Sunday, April 26, 2020 at 6:26:54 PM UTC-5, Phillip Stevens wrote:
...

I'll do some testing / quantification on an RC2014 BIOS soon to see how much difference 32 & 512 makes vs 256 & 64.
Just for interest. I imagine it will pay out when the drive is quite full?

...

Right, the overhead for the division will be at its worst near the end of the drive. It will be interesting to see how measurable the difference is. With floppies, or even spinning harddisks, it might be swamped enough by the disk overhead. CF has a better chance of being noticeable. I suspect a ramdisk would be more so.

Wayne Warthen

Apr 26, 2020, 9:50:15 PM
to retro-comp
Anything using partition tables expects that space to be usable and
free. Fuzix for example uses those blocks. So if you do that you can
no longer build hybrid fdisk/ROMWBW disks safely because the first 63
sectors are the boot area (legacy) and 2048 (modern). I wonder if
that's why you originally created a 128K system area ?

I'd like to claim I was thinking of that when I did it, but no.  Lucky coincidence I guess.

I can easily (and will) ensure a system area of 64 sectors.    How important is it to accommodate the modern standard of 2048 sectors?  I can do that, but would certainly not want to carve 1MB out of the 8MB.  I would need to rethink how to handle that.

Probably another argument in favor of using a CP/M partition to hold the slices.

Thanks Alan.

-Wayne 

Wayne Warthen

Apr 26, 2020, 9:59:13 PM
to retro-comp
On Sunday, April 26, 2020 at 4:26:54 PM UTC-7, Phillip Stevens wrote:
  • Sectors per track
As long as it is a power of 2, the math is as optimal as it can be.  I acknowledge that a higher SPT reduces the iterations in the BDOS divide.  However, I have seen some code that will choke on values of 256 or higher because they assume that a single byte is sufficient to store the value.  I currently like 32 -- just feels about right.

I'll do some testing / quantification on an RC2014 BIOS soon to see how much difference 32 & 512 makes vs 256 & 64.
Just for interest. I imagine it will pay out when the drive is quite full?

That will be interesting to hear back on. 

The one open question for me is whether it is possible to differentiate hd0 with a boot track, from hd1, ..., hdn to have no boot track?
hd0 == system drive
hdn == data drive

That would allow systems that don't use a boot track to use the full 512 tracks of an 8 MB slice.
And that would make the BIOS math much easier (=faster) for those hdn drives.

diskdef wbw_hdn
  seclen 512
  tracks 512
  sectrk 32
  blocksize 4096
  maxdir 1024
  skew 0
  boottrk -
  os 2.2
end

Any slice of a hard disk can be booted by RomWBW, and it has turned out to be a very useful capability because that is what allows me to have a "combo" disk that will boot any of 5 OSes.  So, I am not inclined to make it only the first slice.  I'm not sure the math to handle it amounts to much.

LBA = (slice << 14) | (++track << 5) | sector

The only impact is incrementing track which is pretty minor, right?  The track increment is handled internally in BDOS, so CBIOS does not do that itself.
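
Wayne's expression as a C sketch (mine). The +1 is the formula exactly as written above; per Wayne's note, in the real CBIOS the track arriving from the BDOS already includes the reserved-track offset, so the BIOS would not add it itself.

#include <stdint.h>
#include <stdio.h>

static uint32_t slice_lba(uint8_t slice, uint16_t track, uint8_t sector) {
    /* 8MB slice = 16384 sectors (<< 14); 32-sector tracks (<< 5) */
    return ((uint32_t)slice << 14) | ((uint32_t)(track + 1) << 5) | sector;
}

int main(void) {
    /* slice 1, first data track -> 16416 */
    printf("%lu\n", (unsigned long)slice_lba(1, 0, 0));
    return 0;
}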

Thanks,

Wayne

Jose Luis Collado

Apr 26, 2020, 10:50:29 PM
to retro-comp
Wayne, IMHO as a user (and incompetent programmer) multi-slice boot capability is one of the most useful features of ROMWBW’s recent updates, so I vote for planned changes not disabling this.

Cheers, JL.

Phillip Stevens

Apr 26, 2020, 10:59:48 PM
to retro-comp
Wayne Warthen wrote:
Any slice of a hard disk can be booted by RomWBW and it has turned out to be a very useful capability because that is what allows me to have a "combo" disk that will boot any of 5 OSes.  So, I am not inclined to make it only the first slice.

Ahh. Ok. I didn't understand that feature to boot an OS from any slice. Not something I've used.
Then of course it makes sense to use one disk format across all the slices, and have them all the same.

P.

Phillip Stevens

Apr 27, 2020, 7:44:19 AM
to retro-comp
Wayne Warthen wrote:
  • Sectors per track
As long as it is a power of 2, the math is as optimal as it can be.  I acknowledge that a higher SPT reduces the iterations in the BDOS divide.  However, I have seen some code that will choke on values of 256 or higher because they assume that a single byte is sufficient to store the value.  I currently like 32 -- just feels about right.

I'll do some testing / quantification on an RC2014 BIOS soon to see how much difference 32 & 512 makes vs 256 & 64.
Just for interest. I imagine it will pay out when the drive is quite full?

That will be interesting to hear back on.

Well, I haven't done testing, but I have done a code review.

Based on the example BIOS implementation (provided by DRI, which I've followed closely) for the chkuna (check un-allocated) routine within the write routine, there is a single Byte comparison with the CPMSPT constant to work out whether the next track is required.

This CPMSPT constant is going to be 4x the configured host sectors per track. If the host sectors per track is 32, then the CPMSPT will be 128, which will work. Anything more than HSTSPT of 32, and the single Byte comparison won't work any more.
And, at that point you start heading out on your own tack with your own BIOS implementation. The CPMSPT is held in two Bytes, so it can be larger, and the BDOS expects that it may be larger, so the issue is simply within the BIOS.

But, my BIOS can't handle anything larger than a HSTSPT of 32. Other BIOS implementations may be similar.
And if I'm remembering correctly, that's why I ended up using a HSTSPT of 32 too. Thinking about this has reminded me that 32 wasn't a coincidence.

Anyway, I think that's totally done now.

P.

Alan Cox

Apr 27, 2020, 9:03:47 AM
to retro...@googlegroups.com
I can easily (and will) ensure a system area of 64 sectors.    How important is it to accommodate the modern standard of 2048 sectors?  I can do that, but would certainly not want to carve 1MB out of the 8MB.  I would need to rethink how to handle that.

Not very. Modern systems software will barf or will install a magic chain loader in the base area and do other things to cope. They had to for upgrade-in-place to work. Older OS installs will blindly write over the area, but of course only the first 63 sectors.

Alan

Wayne Warthen

Apr 28, 2020, 3:36:33 PM
to retro-comp
On Monday, April 27, 2020 at 4:44:19 AM UTC-7, Phillip Stevens wrote:
But, my BIOS can't handle this larger than HSTSPT of 32. Other BIOS implementations may be similar.
And if I'm remembering correctly, that's why I too ended up using a HSTSPT of 32 too. Thinking about this has reminded me that 32 wasn't a coincidence.

Thanks for looking into this -- I think you saved me some grief.  I did have the idea in the back of my mind that I should probably not go beyond 32 SPT, but couldn't remember exactly why.  I am now remembering that there are many other BIOSes out there that use a single byte for SPT.  I'm sure it seemed reasonable at the time and improved code performance.

As a quick update, I have managed to prototype a solution with a partition table entry.  Essentially, if there is a partition table and it has an entry with a type of 0x52 (CP/M), then that partition will be used for RomWBW slices and the slices will be in the new format.  If not, RomWBW falls back to the old format.  If the system has multiple physical storage devices, you can freely mix old and new, but all slices on a given device will be either old or new.  This means that existing storage devices can be used easily.  Anyone wanting the new format, just creates a CP/M partition.  The distributed disk images will be updated to be the new format.
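
The detection rule, sketched in C (a hypothetical helper of mine, not RomWBW's actual code; it assumes a little-endian host for the memcpy):

#include <stdint.h>
#include <string.h>

/* returns the starting LBA of the first type-0x52 partition, or -1 for legacy */
static long find_cpm_partition(const uint8_t mbr[512]) {
    if (mbr[510] != 0x55 || mbr[511] != 0xAA)
        return -1;                              /* no partition table: legacy */
    for (int i = 0; i < 4; i++) {
        const uint8_t *e = mbr + 446 + 16 * i;  /* 16-byte partition entry    */
        if (e[4] == 0x52) {                     /* type byte: CP/M            */
            uint32_t lba;
            memcpy(&lba, e + 8, 4);             /* start LBA, little-endian   */
            return (long)lba;
        }
    }
    return -1;                                  /* no CP/M entry: legacy      */
}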

I am currently targeting this for the new DPB:

  seclen 512
  tracks 511
  sectrk 32
  blocksize 4096
  maxdir 1024
  skew 0
  boottrk 1

Still not 100% locked into the directory entries, but leaning toward 1024.

I need a few more days to pull all this together, but I think I can do it.

-Wayne

Phillip Stevens

Apr 30, 2020, 7:17:58 AM
to retro...@googlegroups.com
Phillip Stevens wrote:
I'll do some testing / quantification on an RC2014 BIOS soon to see how much difference 32 & 512 makes vs 256 & 64.
Just for interest. I imagine it will pay out when the drive is quite full?

Well, I haven't done testing, but I have done a code review.

Based on the example BIOS implementation (provided by DRI, which I've followed closely) for the chkuna (check un-allocated) routine within the write routine, there is a single Byte comparison with the CPMSPT constant to work out whether the next track is required.

This CPMSPT constant is going to be 4x the configured host sectors per track. If the host sectors per track is 32, then the CPMSPT will be 128, which will work. Anything more than HSTSPT of 32, and the single Byte comparison won't work any more.
And, at that point you start heading out on your own tack with your own BIOS implementation. The CPMSPT is held in two Bytes, so it can be larger, and the BDOS expects that it may be larger, so the issue is simply within the BIOS.

But, my BIOS can't handle anything larger than an HSTSPT of 32. Other BIOS implementations may be similar.
And if I'm remembering correctly, that's why I ended up using an HSTSPT of 32 too. Thinking about this has reminded me that 32 wasn't a coincidence.

Anyway, I think that's totally done now.

Famous last words... 

I've now rewritten the BIOS to use 256 Sectors per Track and 64 Tracks.

To test, I've copied a 1MB file on an 8MB CP/M-IDE drive, on an SSD DOM. The disk is 6.8MB full when the copy starts, so the file copy is towards the maximum capacity of the drive.

b> a:pip random5.txt=random4.txt

where random4.txt is a random binary 1MB file.



With 256 Sectors per Track the 1MB copy takes 67 seconds. But, with 32 Sectors per Track it takes a whole 68 seconds.

So, I've saved a whole second.
And about 10 bytes.


Perhaps there's another test that can be suggested to expose more improvement?

Cheers, Phillip


Douglas Miller

unread,
Apr 30, 2020, 7:41:15 AM4/30/20
to retro-comp
On Thursday, April 30, 2020 at 6:17:58 AM UTC-5, Phillip Stevens wrote:
...
Perhaps there's another test that can be suggested to expose more improvement?

Cheers, Phillip


I suspect you'd see the greatest improvement wherever there is minimal overhead for I/Os. That would probably be CP/M 3 PIP (or any multi-sector I/O on CP/M 3 with a BIOS that leverages that). In that case, the I/Os go directly from the device to/from PIP memory (no buffering or deblocking causing extra mem-mem copying). While it's pretty obvious that the BDOS division will be faster, realizing a noticeable difference in "real life" will depend on a lot of things. Slow devices, like floppies, would make the division loop a tiny part of the whole I/O. And any extra overhead between read and write (user processing of data) will shrink the significance as well.

Phillip Stevens

unread,
Apr 30, 2020, 8:17:25 AM4/30/20
to retro-comp
 Phillip Stevens wrote:
Perhaps there's another test that can be suggested to expose more improvement?

Douglas Miller wrote:
I suspect you'd see the greatest improvement wherever there is minimal overhead for I/Os. That would probably be CP/M 3 PIP (or any multi-sector I/O on CP/M 3 with a BIOS that leverages that). In that case, the I/Os go directly from the device to/from PIP memory (no buffering or deblocking causing extra mem-mem copying). While it's pretty obvious that the BDOS division will be faster, realizing a noticeable difference in "real life" will depend on a lot of things.

I was wondering whether copying lots of random small files rather than one large file would make a greater difference?
But since the sector/track has to be calculated for each read and write cycle, I guess it is the same test in the end.

Probably the RC2014 driving an SSD DOM is a good test of divide latency, because all of the other systems are fast compared to the CPU.

As another example, I just checked in the same code changes on a Z180 with the same PPIDE and SSD, and there the 1MB disk to disk copy takes 15 seconds before and after the change.
I can't hand-time a difference. There the CPU is less of a bottleneck, so the divide doesn't have as much impact.

Anyway, an interesting investigation, and my code is now better for doing it.

P.

Bill Shen

unread,
Apr 30, 2020, 8:41:53 AM4/30/20
to retro-comp
15 seconds for a megabyte PIP is really fast for CP/M2.2.  Have you tried CP/M3 PIP?

With my hardware of a 22MHz Z80, an 8-bit CF interface, and a CF disk with 256 sectors/track, a CP/M3 PIP of a megabyte file takes 14 seconds but CP/M2.2 PIP takes 37 seconds.
  Bill

Phillip Stevens

unread,
Apr 30, 2020, 9:01:59 AM4/30/20
to retro-comp
 Phillip Stevens wrote:
As another example, I just checked in the same code changes on a Z180 with the same PPIDE and SSD, and there the 1MB disk to disk copy takes 15 seconds before and after the change.

 Bill Shen wrote:
15 seconds for a megabyte PIP is really fast for CP/M2.2.  Have you tried CP/M3 PIP?

With my hardware of a 22MHz Z80, an 8-bit CF interface, and a CF disk with 256 sectors/track, a CP/M3 PIP of a megabyte file takes 14 seconds but CP/M2.2 PIP takes 37 seconds.

I think the PPIDE is pretty good at getting "performance" numbers, especially driving an SSD. The Z180 is 36MHz with 1 memory and 2 I/O wait states, so that's not particularly special.  The BIOS is simple too, with COMMON memory disk I/O routines. No banking or anything else to get in the way.  So that helps too.

I haven't built a CP/M 3 version yet, I'm afraid. I haven't found an application that requires CP/M 3, so there's little incentive. Still looking for a round 'tuit.

Phillip
Alan Cox

unread,
Apr 30, 2020, 1:49:03 PM4/30/20
to retro-comp
I think the PPIDE is pretty good at getting "performance" numbers, especially driving an SSD. The Z180 is 36MHz 1-Memory 2-I/O Wait States, so that's not particularly special.  The BIOS is simple too, with COMMON memory disk I/O routines. No banking or otherwise to get in the way.  So that helps too.

I haven't built a CP/M 3 version yet, I'm afraid. I haven't found an application that requires CP/M 3, so there's little incentive. Still looking for a round 'tuit.

The main things you get from banked CP/M 3 are speed, TPA, better error messages and resident modules (e.g. there was one for nice keyboard editing history).

When you do a larger sized I/O on CP/M 3 it doesn't get deblocked and goes direct to the user map. It's vastly faster than CP/M 2 as a result. CP/M 2.2 isn't really testing the actual device speed, merely the CPU speed with a modern disk. The CF adapter is noticeably faster with CP/M 3 on my 8085, and even more so with Fuzix on the RC2014 board, because I also use the Z80DMA there so the CF runs at Z80 DMA speed direct to user space, not CPU speed to user space.

Alan


Phillip Stevens

unread,
May 2, 2020, 9:39:42 AM5/2/20
to retro-comp
Phillip Stevens wrote:
As another example, I just checked in the same code changes on a Z180 with the same PPIDE and SSD, and there the 1MB disk to disk copy takes 15 seconds before and after the change.
I can't hand-time a difference. There the CPU is less of a bottleneck, so the divide doesn't have as much impact.

I'm not sure how many times it is possible to polish a piece of code, before it becomes unhealthy.
Anyway, here goes.

I've checked in some code for the Z180 to do the buffer copy using the DMA Controller, and I'm now getting consistently 14 seconds for a 1MB copy on a full 8MB drive. One second saved.

The RC2014 Z80 doesn't have DMA, but using an unrolled LDI (rather than LDIR) saves 532 cycles per CP/M sector and shaves another 2 seconds off the time, now at 65 seconds for the same 1MB copy on a full drive.
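(For the record, the arithmetic behind that saving: LDIR costs 21 T-states per byte while repeating, 16 on the last, whereas a straight-line LDI is 16 T-states per byte. So a 128 Byte CP/M record saves roughly 128 x 5 = 640 T-states, less the DJNZ overhead of looping the 32-LDI block four times - in the right ballpark for the 532 cycles quoted.)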

Since one has about 4x the CPU clock speed of the other, the roughly 4x performance difference is entirely expected.

Phillip

Douglas Miller

unread,
May 2, 2020, 10:00:29 AM5/2/20
to retro-comp
It does seem strange that using the DMAC only improved things 6%. But, that all depends on what percentage of the I/O is spent handling the overhead. If this was CP/M 2.2 (no multi-sector count I/O), that might explain it. I would have guessed that the DMAC would transfer the 512 byte block at 3-4 cycles per byte, as compared to LDIR 21 cycles per byte. But, if the overhead is so large that it swamps out the transfer, then you won't notice it much. And if you don't run the DMAC in burst mode, it will be slower. Provided, of course, the HD interface can run at 3-4 cycles per byte.

Of course, the cost of unrolling the LDIR is that you lose another 256+ bytes of TPA.


Alan Cox

unread,
May 2, 2020, 10:12:26 AM5/2/20
to retro-comp
The RC2014 Z80 doesn't have DMA, but using an unrolled LDI (rather than LDIR) saves 532 cycles per CP/M sector and shaves another 2 seconds off the time, now at 65 seconds for the same 1MB copy on a full drive.

Since one has about 4x the CPU clock speed of the other, the 4x performance difference is exactly natural.



Unless I am missing something here, the 512 byte copy on a CF adapter
takes 18000 cycles without DMA (that's just the drive set up, block
transfer with LDI in and out). You do 2048 of these for the transfer
plus some metadata, let's say 10% as a reasonable transfer count cost.
That is 40 million cycles, or a bit over 5 seconds, for the Z80 RC2014
board with LDI. You are taking 65 seconds, so your disk performance is
5% of your actual performance.

In other words you are watching the wrong ball - there is a very large
per block constant (for a given CPU speed) in your experiment which is
telling you that most of the overhead is somewhere else.

You can interpret it two ways
1. I agree - if you are using CP/M 2.2 then your raw disk performance
doesn't matter at modern speeds (it was a problem with 1980s hard
disks hence CP/M 3 fixing it) and you are correct that PPIDE is about
as fast as CF because the bottleneck simply isn't the device.

or

2. You need to fix the big overhead in the core code

All this looks like what I've seen on other devices. Bitbang SD cards
feel fine with CP/M despite being at best 20K/second raw transfer
rate.

Alan

Phillip Stevens

unread,
May 2, 2020, 10:15:31 AM5/2/20
to retro-comp
Douglas Miller wrote:
> It does seem strange that using the DMAC only improved things 6%. But, that all depends on what percentage of the I/O is spent handling the overhead.... Provided, of course, the HD interface can run at 3-4 cycles per byte.

It is the PPIDE interface causing the delay. Twiddling the RD and WR lines and managing the two-byte transfers is (relatively) complex and slow. Just using the DMAC for the CP/M <-> host copy.

> Of course, the cost of unrolling the LDIR is that you lose another 256+ bytes of TPA.

It is not fully unrolled, just 32 LDI instructions.
And it fitted in slack space. So no loss really. Luckily. :-)

P.

Phillip Stevens

unread,
May 2, 2020, 10:36:55 AM5/2/20
to retro-comp
Alan Cox wrote:
> In other words you are watching the wrong ball - there is a very large
> per block constant (for a given CPU speed) in your experiment which is
> telling you that most of the overhead is somewhere else.

Yes. I think the ball I’m missing is PIP as a test tool. The result is much less than what I’d expect to see using compiled C against BDOS I/O.

> 2. You need to fix the big overhead in the core code.

> All this looks like what I've seen on other devices. Bitbang SD cards
> feel fine with CP/M despite being at best 20K/second raw transfer
> rate.

Yes, I see about 1/8th the throughput for CSIO SD vs PPIDE SSD too.

I'll have a play with a C copy on the improved BIOS versions tomorrow.
I'd expect it to be an order of magnitude faster. More fun. :-)




Phillip Stevens

unread,
May 3, 2020, 9:12:30 AM5/3/20
to retro...@googlegroups.com
It's raining and cold, and I can't go outside anyway. Ideal for some tedious benchmarking.

TL;DR
Firstly, to stay on topic, this is about whether the 256 SPT setting is faster than 32 SPT. Well it is, but not by much.
And secondly, never throw shade on CP/M tools. I know nothing. PIP is pretty damn fast.

The equipment is one standard RC2014 Plus with 64KB RAM, and one of Spencer's new IDE Modules (it makes no difference whether it's Spencer's or Ed's Module, but Spencer just launched the new IDE Module).

On Sunday, 3 May 2020 00:12:26 UTC+10, Alan Cox wrote:
Unless I am missing something here,the 512 byte copy on a CF adapter
takes 18000 cycles without DMA (that's just the drive set up , block
transfer with LDI in and out). You do 2048 of these for the transfer
plus some meta data lets say 10% as a reasonable transfer count cost.
That is 40 million cycles or a bit over 5 seconds for the Z80 RC2014
board with LDI. You are taking 65 seconds, so your disk performance is
5% of your actual performance.

To get to the bottom of this, I've done a test using a C program and z88dk drivers. It takes 21.5 seconds to copy a 1048576 Byte file (throughput is double as the same disk is being read and written).

This is way longer than Alan mentions, so digging deeper, the core of the C program looks like this...

    for (;;)
    {
        br = fread(buffer, sizeof(char), BUFFER_SIZE, In);
        if (br == 0) break;      // eof or error
        bw = fwrite(buffer, sizeof(char), BUFFER_SIZE, Out);
        if (bw != br) break;     // error or disk full
    }

which assembles down to this...

667   00AB              l_main_00114:
668   00AB  21 00 00    ld hl,_buffer
669   00AE  E5          push hl
670   00AF  21 01 00    ld hl,0x0001
671   00B2  E5          push hl
672   00B3  21 00 10    ld hl,0x1000
673   00B6  E5          push hl
674   00B7  2A 00 10    ld hl, (_In)
675   00BA  E5          push hl
676   00BB  CD 00 00    call _fread
677   00BE  F1          pop af
678   00BF  F1          pop af
679   00C0  F1          pop af
680   00C1  F1          pop af
681   00C2  7C          ld a,h
682   00C3  4D          ld c,l
683   00C4  47          ld b,a
684   00C5  B5          or a, l
685   00C6  28 1E        jr Z,l_main_00107
686   00C8  C5          push bc
687   00C9  21 00 00    ld hl,_buffer
688   00CC  E5          push hl
689   00CD  21 01 00    ld hl,0x0001
690   00D0  E5          push hl
691   00D1  21 00 10    ld hl,0x1000
692   00D4  E5          push hl
693   00D5  2A 02 10    ld hl, (_Out)
694   00D8  E5          push hl
695   00D9  CD 00 00    call _fwrite
696   00DC  F1          pop af
697   00DD  F1          pop af
698   00DE  F1          pop af
699   00DF  F1          pop af
700   00E0  C1          pop bc
701   00E1  AF          xor a,a
702   00E2  ED 42        sbc hl,bc
703   00E4  28 C5        jr Z,l_main_00114
704   00E6              l_main_00107:

Not looking too bad, yet.
So what about the IDE Module drivers? How many cycles do they need to work?

1309  F133              ide_rdblk2:
1310  F133  16 48           ld d,__IO_IDE_DATA|__IO_IDE_RD_LINE ; 7
1311  F135  ED 51           out (c),d               ; 12 and assert read pin
1312  F137  0E 20           ld c,__IO_PIO_IDE_LSB   ;  7 drive lower lines with lsb
1313  F139  ED A2           ini                     ; 16 read the lower byte (HL++)
1314  F13B  0C              inc c                   ;  4 drive upper lines with msb
1315  F13C  ED A2           ini                     ; 16 read the upper byte (HL++)
1316  F13E  0C              inc c                   ;  4 drive control port
1317  F13F  16 08           ld d,__IO_IDE_DATA      ;  7
1318  F141  ED 51           out (c),d               ; 12 deassert read pin
1319  F143  10 EE           djnz ide_rdblk2         ; 13 keep iterative count in b

1383  F180              ide_wrblk2:
1384  F180  16 28           ld d,__IO_IDE_DATA|__IO_IDE_WR_LINE ; 7
1385  F182  ED 51           out (c),d               ; 12 and assert write pin
1386  F184  0E 20           ld c,__IO_PIO_IDE_LSB   ;  7 drive lower lines with lsb
1387  F186  ED A3           outi                    ; 16 write the lower byte (HL++)
1388  F188  0C              inc c                   ;  4 drive upper lines with msb
1389  F189  ED A3           outi                    ; 16 write the upper byte (HL++)
1390  F18B  0C              inc c                   ;  4 drive control port
1391  F18C  16 08           ld d,__IO_IDE_DATA      ;  7
1392  F18E  ED 51           out (c),d               ; 12 deassert write pin
1393  F190  10 EE           djnz ide_wrblk2         ; 13 keep iterative count in b

So the IDE Module is not quite as efficient as the CF Module, by the look of it.

98 cycles per 2 bytes, or 25088 per 512 Byte sector. Or, written another way, it can do a raw 1MB Read or Write into hstbuf in 6.9689 seconds.
So, the 21.5 seconds for the 1MB copy consists of 14 seconds for the raw data transfer and 7 seconds of housekeeping.
Perhaps not so bad after all.
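(Checking that against the clock: 25088 cycles x 2048 host sectors = 51,380,224 cycles, which at the standard 7.3728MHz RC2014 clock - my assumption - is 51,380,224 / 7,372,800 = 6.97 seconds.)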

But, how do we look from inside CP/M?

The RC2014 Z80 doesn't have DMA, but using an unrolled LDI (rather than LDIR) saves 532 cycles per CP/M sector and shaves another 2 seconds off the time, now at 65 seconds for the same 1MB copy on a full drive.

I've tested the four combinations of 32 SPT and 256 SPT on an empty 8MB drive and a full 8MB drive. The results look like this, using PIP to do the file copy.

SPT     Empty     Full - using PIP
32      46 sec    69 sec
256     42 sec    65 sec

Reversing the LDI optimisation costs about 1.2 seconds, so there are only about 3 seconds in it across the spread of disk fullness.

In other words you are watching the wrong ball - there is a very large
per block constant (for a given CPU speed) in your experiment which is
telling you that most of the overhead is somewhere else.

In another post I cast derogatory remarks at PIP, saying that I was sure it would be soaking up the 21 seconds that are getting lost here.

So I wrote a simple C program, using the z88dk CP/M stdio functions (from the classic library), to do a file copy.
Pretty much the same code as above.

The result was not pretty. My CP/M C program is much worse than PIP at doing file copies.
Next time, I'll keep quiet. PIP is very good.

SPT     Empty    Full - using C program z88dk classic stdio
256     53 sec   80 sec

So where did that 20 to 30 seconds of "fat" get added onto the file access?

Was it coming from the deblocking algorithm in my BIOS?

Well no. The unrolled LDI 32 version takes about 4.8 seconds for the transfer, and the LDIR version takes about 6 seconds for the transfer.
That leaves a remaining 14 seconds getting lost inside the BDOS and PIP code, somewhere.

And that is something that I still can't find.

EDIT. I've written a simple assembly program CP.COM that replicates the PIP fast copy algorithm. It is a little faster than PIP (about 2 to 5%), and it shows that PIP is very efficient.
This demonstrates that the overhead comes from within BDOS.

1. I agree - if you are using CP/M 2.2 then your raw disk performance
doesn't matter at modern speeds (it was a problem with 1980s hard
disks hence CP/M 3 fixing it) and you are correct that PPIDE is about
as fast as CF because the bottleneck simply isn't the device.

So to put this I/O thing to bed properly, I ran the same tests on the YAZ180 at 36.864MHz, using the same PPIDE and the same SSD DOM, with the C program on z88dk FatFS.
It takes 5.5 seconds to copy a 1048576 Byte file (throughput is double as the same disk is being read and written).

The I/O performance is almost linear with respect to the CPU frequency. So the system I/O performance is not contributing at all to these numbers.
It is all coming from within the CP/M BDOS core code and application, and instruction time.
 
2. You need to fix the big overhead in the core code
All this looks like what I've seen on other devices. Bitbang SD cards
feel fine with CP/M despite being at best 20K/second raw transfer
rate.

And, I still can't find the issue in the BDOS core code. It is not BIOS, and it is not sector deblocking. And, there's nothing else left.
That will have to be a problem for another rainy afternoon.

Phillip

Mark T

unread,
May 3, 2020, 9:52:17 AM5/3/20
to retro-comp
Maybe the code generated from C could be modified. There would be no need to pop the arguments for the first call and then push them again for the second. I guess that would defeat the purpose of the benchmark though.

I thought C arguments on the stack were in reverse order to those in the example; is that standard C order?

Mark

Alan Cox

unread,
May 3, 2020, 10:32:21 AM5/3/20
to retro-comp
> Maybe the code generated from c could be modified. Would be no need to pop the arguments for the first call and then push them again for the second. Guess that would defeat the purpose of the benchmark though.

Depends on the compiler. Most of them use the pushed value as the memory storage for the value if it won't fit in registers, and you can also use & on them and change them.

> I thought C arguments on the stack were reverse order to those in the example, is that standard C order?

C deliberately does not specify how arguments are passed. They might be in registers, on a stack, through magic call instructions whatever. Ditto the order. It's why for

            foo(a(),b())

C carefully does not say whether a or b gets evaluated first.


The first rule of optimization is "don't do it"

CP/M 3 solves most of the speed issue by a rather simple change. When you read a chunk of data and a whole physical disk block is being read to user space in one call then it does the I/O direct to the user space from the BIOS. No deblocking, no double copies. On a floppy that's not a huge difference, on a period hard disk it's a big difference, on a modern CF card that runs at memory bus speed it's a huge difference. You can even use the Z180 DMA for it. The CP/M 3 PIP command also knows about and takes great care to exploit this.
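From the application side, exploiting that looks roughly like this sketch (assuming z88dk's bdos() helper and an FCB already opened with BDOS 15; CP/M 3 only, since function 44 doesn't exist on 2.2):

#include <cpm.h>        /* z88dk classic library: bdos() helper assumed */

#define MULTI_CNT 128   /* 128 x 128-byte records = 16KB per call */

/* Returns the BDOS 20 result (0 = ok, 1 = EOF, ...). */
int read_chunk(void *fcb, void *buf)
{
    bdos(44, MULTI_CNT);        /* set multi-sector count          */
    bdos(26, (int)buf);         /* set DMA to the caller's buffer  */
    return bdos(20, (int)fcb);  /* sequential read, 16KB in one go */
}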

Hard disks were after all a bit of an afterthought for a microcomputer when CP/M was first written.

Alan

Douglas Miller

unread,
May 3, 2020, 10:32:44 AM5/3/20
to retro...@googlegroups.com


On Sunday, May 3, 2020 at 8:12:30 AM UTC-5, Phillip Stevens wrote:
...
And, I still can't find the issue in the BDOS core code. It is not BIOS, and it is not sector deblocking. And, there's nothing else left.
That will have to be a problem for another rainy afternoon.

Phillip

Don't be too sure yet. Here are some things to think about:

PIP, especially on 2.2, is not perfect at auto-detecting "text" vs. "binary" files, so make sure you are using the "O" (object/binary) option. Also make certain you are *not* using the "V" (verify) option. When PIP thinks a file is text, it will process each character.

PIP should be filling memory with (part of) a file, then writing it to the destination. But, on CP/M 2.2 that will involve, at a minimum, two transfers of each block of data. The first is the INIR (et al.) between the device and the deblock buffer in the BIOS. The second is the LDIR (or probably slower code) between the deblock buffer and the user buffer/memory. On CP/M 3, because of the multi-sector optimization that PIP takes, there is only one transfer: between device and user buffer/memory (except for a possible final partial-sector I/O if the file is not an even number of blocks).
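In C terms, the 2.2 path boils down to something like this (a sketch with invented names - dev_read_sector stands in for whatever the driver actually provides):

#include <stdint.h>
#include <string.h>

#define HSTSIZ 512   /* host sector size */
#define CPMSIZ 128   /* CP/M record size */

static uint8_t hstbuf[HSTSIZ];

extern void dev_read_sector(uint32_t lba, uint8_t *buf); /* hypothetical */

/* CP/M 2.2 deblocked read: two copies per record.  CP/M 3 multi-sector
   I/O does the device transfer straight into user memory instead. */
void read_record(uint32_t lba, int subrec, uint8_t *user_dma)
{
    dev_read_sector(lba, hstbuf);                        /* copy #1 */
    memcpy(user_dma, hstbuf + subrec * CPMSIZ, CPMSIZ);  /* copy #2 */
}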

I've seen BIOSes that are worse, copying data yet another (third) time. These copy operations are not fast, even if Z80 instructions are used.

I'll also add that the CP/M 3 optimization requires a BIOS that does the right thing: pass physical sectors/blocks to the BDOS and does not do the deblocking itself. I've seen plenty of CP/M 3 BIOSes that are rather simple clones of a CP/M 2.2 BIOS and thus rob CP/M 3 of the multisector optimization.

Mark T

unread,
May 3, 2020, 12:58:40 PM5/3/20
to retro-comp
I was aware that C didn’t specify the order of evaluation, but I always thought that the location on the stack was consistent.

I guess I was just lucky when I used variable argument passing similar to printf; it probably only worked with the compiler I used at the time. Or maybe the calling convention was declared - it was a long time ago.

Mark

Douglas Miller

unread,
May 3, 2020, 1:38:02 PM5/3/20
to retro-comp
Usually, each compiler has its own calling convention. With the compiler being constant, the calling convention should be also. Some might have variable rules for number of arguments vs. register/stack location. But I think 8080/Z80 compilers always used the stack, albeit possibly in different orders.

Interocitor Steve

unread,
May 3, 2020, 2:23:49 PM5/3/20
to retro-comp
What is RomWBW?  =Steve.

Wayne Warthen

unread,
May 3, 2020, 5:56:06 PM5/3/20
to retro-comp
On Sunday, May 3, 2020 at 11:23:49 AM UTC-7, Interocitor Steve wrote:
What is RomWBW?  =Steve.


RomWBW provides a complete software system for a wide variety of hobbyist Z80/Z180 CPU-based systems produced by several developer communities.

General features include:

  • Banked memory services for several banking designs
  • Disk drivers for RAM, ROM, Floppy, IDE, CF, and SD
  • Serial drivers including UART (16550-like), ASCI, ACIA, SIO
  • Video drivers including TMS9918, SY6545, MOS8563, HD6445
  • Real time clock drivers including DS1302, BQ4845
  • Multiple OS support including CP/M 2.2, ZSDOS, CP/M 3, ZPM3
  • Built-in VT-100 terminal emulation support

Phillip Stevens

unread,
May 3, 2020, 7:11:47 PM5/3/20
to retro...@googlegroups.com
Yes, sorry, the example that I picked was probably the most contorted version of the truth. There's background reading, but to keep it relatively short:

z88dk has two compilers. 
  1. sccz80, which grew out of Small C and uses L->R argument passing. Where there are two arguments it will pass the right one in registers, and the left on the stack. char is passed as int because this suits z80 stack management better. This is the house compiler, maintained by the team (suborb really). New features and bug fixes come quickly.
  2. sdcc, which is multi-targeted and uses standard C R->L argument passing. With more than one argument it will always use the stack. It is maintained externally, and is patched by the z88dk team to use library functions for most of its intrinsic calls.
That said, both compilers can be forced to do unnatural calling (from their point of view) by using the -stdc or -smallc flags respectively when invoked.

The z88dk classic library has the file access capabilities in the CP/M target. The z88dk new library doesn't yet have file access capabilities as it is not totally finished (I work around this by using ChaN FatFS as an external library. But that is written in C, and so it won't be going into the core new library.)

So the example code I provided was using the classic library with L->R calling, but compiled with sdcc using the -smallc flag to force it to that calling convention.
 
Mark T wrote:
I was aware that C didn’t specify the order of evaluation, but I always thought that the location on the stack was consistent.

I guess I was just lucky when I used variable argument passing similar to printf, probably only worked with the compiler I used at the time. Or maybe the calling convention was declared, it was a long time ago.

The other twist is that z88dk (and the compilers) implement both __callee and __fastcall flags for function declaration. A function can be either ordinary, or one of callee or fastcall. If it is ordinary, the caller is responsible for cleaning up the stack (as in my example code). If there is only one argument then fastcall can be used, passing the argument in DEHL (or a subset). Multiple arguments are supported by callee, where the called function clears the stack. fastcall/callee is the "best" calling convention, and the new library tries to implement this when it can. But, because a function called by pointer can't be callee or fastcall, both ordinary and special options for the function calling need to be implemented.
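For reference, on the sdcc side these show up as function-type annotations, something like (declarations only, as a sketch):

/* One argument, passed in (DE)HL instead of on the stack: */
extern int square(int x) __z88dk_fastcall;

/* Stack arguments, but the callee pops them on return: */
extern int plot(int x, int y) __z88dk_callee;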

Unfortunately fopen() fread() and fwrite() in the classic library are ordinary functions, so the stack needs to be cleared after calling them.

Douglas Miller wrote:
Usually, each compiler has it's own calling convention. With the compiler being constant, the calling convention should be also. Some might have variable rules for number of arguments vs. register/stack location. But I think 8080/Z80 compilers always used the stack, albeit possibly different order.

For z88dk, the z80 library functions are usually written in a best mixture of sdcc R->L arguments, with sccz80 RHS on registers LHS stack in fastcall or callee where possible. Then, to get the right connection to all the different options some simple glue code is written.

Oh, and one more twist. Function name mangling between C and assembly is usually simply the addition of an underscore. This works consistently, except when building a sccz80 library function, where mangling is not done. So another piece of function-name glue needs to be written to support sdcc calling into the classic library.

(Note: the above is not in any way guaranteed to be correct. My understanding is usually faulty and incomplete.)

Totally off topic,
Phillip

Interocitor Steve

unread,
May 4, 2020, 8:42:00 PM5/4/20
to retro-comp
Thanks, Wayne.  Sounds great.  =Steve.

Phillip Stevens

unread,
May 13, 2020, 6:40:31 AM5/13/20
to retro...@googlegroups.com
On this cycle "cost" of various things, some further thoughts.

 
The equipment is one standard 7.3MHz RC2014 Plus with 64KB RAM, and one of Spencer's new IDE Modules (it makes no difference whether it's Spencer's or Ed's Module, but Spencer just launched the new IDE Module, available on Tindie).

To get to the bottom of this, I've done a test using a C program and z88dk IDE drivers. It takes 21.5 seconds to copy a 1048576 Byte file (throughput is double as the same disk is being read and written).

SPT     Empty - using C program z88dk with ChaN FATFS onto FAT32
256     21 sec

The raw BIOS time is 98 cycles per 2 bytes, or 25088 per 512 Byte sector. Or, written another way, it can do a raw 1MB Read or Write into the system hstbuf in 6.9689 seconds.
So, the 21.5 seconds for the 1MB copy consists of 14 seconds for the raw data transfer and 7 seconds of housekeeping. Not too bad.

But, how do we look from inside CP/M?
I've tested the four combinations of 32 SPT and 256 SPT on an empty 8MB drive and a full 8MB drive. The results look like this, using PIP to do the file copy.

SPT     Empty     Full - using PIP to standard BDOS
32      46 sec    69 sec
256     42 sec    65 sec
In other words you are watching the wrong ball - there is a very large
per block constant (for a given CPU speed) in your experiment which is
telling you that most of the overhead is somewhere else.

In another post I cast derogatory remarks at PIP, saying that I was sure it would be soaking up the 21 seconds that are getting lost here.

So I wrote a simple C program, using the z88dk CP/M stdio functions (from the classic library), to do a file copy.
The result was not pretty. My CP/M C program is much worse than PIP at doing file copies. 
Next time, I'll keep quiet. PIP is very good.

SPT     Empty    Full - using C program z88dk classic stdio
256     53 sec   80 sec

So where did that 20 to 30 seconds of "fat" get added onto the CP/M file access?

Was it coming from the deblocking algorithm in my BIOS?

Well no. The unrolled LDI 32 version takes about 4.8 seconds for the transfer, and the LDIR version takes about 6 seconds for the transfer.
That leaves a remaining 14 seconds getting lost inside the BDOS and PIP code, somewhere.

EDIT. I've written a simple assembly program CP.COM that replicates the PIP fast copy algorithm. It is a little faster than PIP (about 2 to 5%), and it shows that PIP is very efficient.
This demonstrates that the overhead comes from within the BDOS.
 
SPT     Empty - using assembly calls to standard BDOS
256     39 sec

And, I still can't find the issue in the BDOS core code. It is not BIOS, and it is not sector deblocking. And, there's nothing else left.
That will have to be a problem for another rainy afternoon.

Mark T wrote:
Maybe the code generated from C could be modified. There would be no need to pop the arguments for the first call and then push them again for the second.

1.
I dug into why the C program using the z88dk classic library was adding 14 seconds onto the BDOS time.
It turns out that the library code originated in the HITECH C stdio library, and it used the RRAN BDOS 33 and WRAN BDOS 34 calls all the time. This invokes the BDOS code to locate the correct record from the random-record pointer bytes on every call, which slows things down.
Also, PIP uses the strategy of reading the maximum number of 128 Byte records that can be counted with one byte (255), which is nearly 32kB of buffer, before writing all of that to the disk sequentially. I copied this strategy to good advantage.

Changing the z88dk stdio library to use the READ BDOS 20 and WRITE BDOS 21 when possible shaved about 50% off the overhead spent in BDOS, taking the extra time for using the classic library down to 6 seconds, and a total of 45 sec.
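To make the difference concrete, here is a sketch of the two call paths (again assuming z88dk's bdos() helper; bytes 33-35 of the FCB are the standard random-record field):

#include <cpm.h>        /* z88dk: bdos() helper assumed */
#include <stdint.h>

/* Random write: the BDOS must turn the 3 byte record number into an
   extent + record position on every single call. */
void write_random(uint8_t *fcb, uint16_t rec)
{
    fcb[33] = rec & 0xFF;       /* r0 */
    fcb[34] = rec >> 8;         /* r1 */
    fcb[35] = 0;                /* r2 */
    bdos(34, (int)fcb);         /* BDOS 34 WRAN  */
}

/* Sequential write: the BDOS just advances its own position. */
void write_sequential(uint8_t *fcb)
{
    bdos(21, (int)fcb);         /* BDOS 21 WRITE */
}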

2.
But, the time 39 seconds spent in BDOS (using assembly calls to BDOS, in comparison to the theoretical 14 seconds for the raw data transfer and 5 seconds for deblocking) is still quite large.
So, I reasoned that my standard DRI BDOS (with some 8080isms removed), was just bad.

I've now built the "best" CP/M 2.2 system possible, by building an NZ-COM system. This provides arguably the best CCP and BDOS written for the CP/M 2.2 BIOS API.
I've done the test again using the same PIP application and same BIOS, but now with the NZ-COM BDOS providing the file system I/O calls.
 
SPT     Empty - using PIP to NZ-COM BDOS
256     47 sec

The NZ-COM BDOS, although written specifically for the Z80, is actually substantially slower than the standard DRI BDOS. It adds 5 seconds onto the PIP copy, and when repeated using the simple CP assembly program the result is the same.
I guess the point here is that "all that comfort comes at a cost". The cost is not only the TPA consumed vs the DRI BDOS; it can be measured in a substantial performance degradation too.

The logical next step here is to rewrite the standard BDOS to avoid the need to generate generalised sector and track information from the record number, and just write directly to the BIOS using relative record or LBA information, as Douglas suggested.
Another project, another time.

Phillip

fritzeflink

unread,
Jul 16, 2020, 5:43:16 PM7/16/20
to retro-comp


On Friday, April 24, 2020 at 18:37:37 UTC+2, Wayne Warthen wrote:
On Friday, April 24, 2020 at 7:46:18 AM UTC-7, Jim McGinnis wrote:

I am sure there are debates to be made for 512 vs 1024 on an 8MB drive. I have reached a practical limit that will require working around the DIR limit. Which can be done.

Actually, it is not much of a debate.  It absolutely should have been 1024 originally.  The choice of 512 goes back over a decade when RomWBW came to life and was used simply because it was compatible with the other work being done at the time.


I am very open to thoughts from this group.  Do people think I should just make the change and warn everyone?

-Wayne


Hi Wayne,


As my CPU280 is in retirement, I got an SC126 with a CF module, and I'm playing with CP/M3 and my old Z3PLUS archives.
First I had to look up what RomWBW is, and it's a fine compilation that I'll have to read a lot about. Big thumbs up.

Now, running with aliases and playing with scripts and xmodem, I realized that there is plenty of space but no more directory entries available.

So I found these posts and would like to have 1024 dir entries too - mostly for CP/M3.
As I'm a technician and not a programmer, please give me a hint.

Pellatonian

unread,
Jul 17, 2020, 2:58:42 PM7/17/20
to retro-comp
1024 directory entries makes clear sense for most people.  I have avoided problems by defining more disks (most of my RomWBW systems have 16 drives configured) so I can separate projects onto different disks and no one drive is overloaded.

However, if the default number of directory entries is changed, can I suggest that a RomWBW configuration tag be created that allows the existing DPB layout to be used.  There are a lot of systems out there using the current setup which will be locked out of newer RomWBW revisions and features until they find a way to reload their drives in the new format.



Wayne Warthen

unread,
Jul 17, 2020, 3:43:24 PM7/17/20
to retro-comp
On Friday, July 17, 2020 at 11:58:42 AM UTC-7, Pellatonian wrote:
1024 directory entries makes clear sense for most people.  I have avoided problems by defining more disks (most of my ROMWBW systems have 16 drives configured) so I can separate projects onto different disks so no one drive is overloaded.

However, if the default number of directory entries is changed can i suggest that a ROMWBW configuration tag be created that allows the existing DPB layout to be used.  There are a lot of systems out there using the current setup which will be locked out of newer ROMWBW revisions and features until they find a way to reload their drives in the new format.

The new code is backward compatible.  If you upgrade to the dev branch and use an old hard disk (CF, SD, etc.), it will work fine (although limited to 512 entries).  1024 entries are only used for slices that are contained in a partition of type 0x2E (which does not exist in the old hard disk format).

Jim McGinnis

unread,
Jul 22, 2020, 11:06:45 AM7/22/20
to retro-comp
Just a heads-up. The new 1024 DIR entry disk images work great with two caveats:

1) The single OS prebuilt images are missing the prepended "hdnew_prefix.bin" and will not work correctly.
The "combo" prebuilt image does have the prepended bin file and works flawlessly to boot each slice.

2) And, one other "gotcha": the INITDIR tool does not work correctly/locks up in the CPM3 environment when run against the new disk format. Likely there is a dependency there. This tool works very well using the old 512 DIR disks.

I have added "issues' here:  https://github.com/wwarthen/RomWBW/issues

Wayne, I hope your move is going well and without surprises....

Best regards,

Jim

Frank P.

unread,
Jul 22, 2020, 11:29:31 AM7/22/20
to retro-comp
On Wednesday, July 22, 2020 at 11:06:45 AM UTC-4, Jim McGinnis wrote:
1) The single OS prebuilt images are missing the prepended "hdnew_prefix.bin" and will not work correctly.
The "combo" prebuilt image does have the prepended bin file and works flawlessly to boot each slice.

I'm a little confused by that statement, since the combo image is exactly 6 times the size of each of the single OS images, at least in RomWBW-v3.0.1-Package. Where would this "prepended" file be? In each image?

Or are you discussing something in the RomWBW dev tree? I must say I haven't been paying as much attention to this topic as I should have, so if I'm asking something stupid, please let me know.

Jim McGinnis

unread,
Jul 22, 2020, 12:13:49 PM7/22/20
to retro...@googlegroups.com
Hmm... I made an assumption that should never have been asserted. Yes, it is the "dev" tree, which is the only place (I think) these new images for the new HD formats exist right now. My bad.

When building images from the dev branch:
The "combo" image is constructed using the individual stand-alone images and ALSO prepends the bin file.
None of the stand-alone individual images is prepended with the bin file.

Prerequisites for building "new" images with 1024 DIR entries:

1) Individual slice images.
2) The bin file that is prepended to any disk image of one or more slices.
3) Final images that contain one or more boot-capable (or not) images (slices) and the image file is prepended with the bin file.

By experimentally prepending the bin file to the CPM3 image to make a single slice image, the image boots and works correctly.
Just for test purposes I created hdnew_jim.img using a modified Build.cmd file in \dev\Source\Images

This is an extract of the cmd file near the end with my added script lines...

echo.
echo Building New Hard Disk Images...
echo.
call BuildDisk.cmd cpm22 wbw_hdnew ..\cpm22\cpm_wbw.sys
call BuildDisk.cmd zsdos wbw_hdnew ..\zsdos\zsys_wbw.sys
call BuildDisk.cmd nzcom wbw_hdnew ..\zsdos\zsys_wbw.sys
call BuildDisk.cmd cpm3 wbw_hdnew ..\cpm3\cpmldr.sys
call BuildDisk.cmd zpm3 wbw_hdnew ..\cpm3\cpmldr.sys
call BuildDisk.cmd ws4 wbw_hdnew

if exist ..\BPBIOS\bpbio-ww.rel call BuildDisk.cmd bp wbw_hdnew

copy hdnew_prefix.bin ..\..\Binary\

echo.
echo Building Combo Disk (new format) Image...
copy /b hdnew_prefix.bin + ..\..\Binary\hdnew_cpm22.img + ..\..\Binary\hdnew_zsdos.img + ..\..\Binary\hdnew_nzcom.img + ..\..\Binary\hdnew_cpm3.img + ..\..\Binary\hdnew_zpm3.img + ..\..\Binary\hdnew_ws4.img ..\..\Binary\hdnew_combo.img

echo.
echo Building Jim Disk (new format) cpm3 Image...
copy /b hdnew_prefix.bin + ..\..\Binary\hdnew_cpm3.img ..\..\Binary\hdnew_jim.img


The"combo" image works wonderfully. You can see in the cmd file that the bin file is prepended to the result img file.
But none of the single slice images have that bin file prepended - except the test slice image I made. It works just fine.

The script can be user-modified to create the right images.

Here is a suggested temporary addition to the existing Build.cmd file. The name of the final file is arbitrary and likely will not be "clean"ed up by the makefile without mods to the makefile. Expect this method to change with a future release of the dev branch. YMMV.
echo.
echo Building final (1024 DIR) cpm22 Image...
copy /b hdnew_prefix.bin + ..\..\Binary\hdnew_cpm22.img ..\..\Binary\hdnew_cpm22_1024.img
echo Building final (1024 DIR) cpm3 Image...
copy /b hdnew_prefix.bin + ..\..\Binary\hdnew_cpm3.img ..\..\Binary\hdnew_cpm3_1024.img
echo Building final (1024 DIR) zsdos Image...
copy /b hdnew_prefix.bin + ..\..\Binary\hdnew_zsdos.img ..\..\Binary\hdnew_zsdos_1024.img
echo Building final (1024 DIR) nzcom Image...
copy /b hdnew_prefix.bin + ..\..\Binary\hdnew_nzcom.img ..\..\Binary\hdnew_nzcom_1024.img
echo Building final (1024 DIR) zpm3 Image...
copy /b hdnew_prefix.bin + ..\..\Binary\hdnew_zpm3.img ..\..\Binary\hdnew_zpm3_1024.img


Cheers

Jim

Jim McGinnis

unread,
Jul 22, 2020, 12:28:55 PM7/22/20
to retro-comp
Here is a simple text version in case anyone wants to snag it (typos fixed, I hope!):

echo.
echo Building Final Disk (1024 DIR) cpm22 Image...

copy /b hdnew_prefix.bin + ..\..\Binary\hdnew_cpm22.img ..\..\Binary\hdnew_cpm22_1024.img
echo Building Final Disk (1024 DIR) zsdos Image...

copy /b hdnew_prefix.bin + ..\..\Binary\hdnew_zsdos.img ..\..\Binary\hdnew_zsdos_1024.img
echo Building Final Disk (1024 DIR) nzcom Image...

copy /b hdnew_prefix.bin + ..\..\Binary\hdnew_nzcom.img ..\..\Binary\hdnew_nzcom_1024.img
echo Building Final Disk (1024 DIR) cpm3 Image...

copy /b hdnew_prefix.bin + ..\..\Binary\hdnew_cpm3.img ..\..\Binary\hdnew_cpm3_1024.img
echo Building Final Disk (1024 DIR) zpm3 Image...

copy /b hdnew_prefix.bin + ..\..\Binary\hdnew_zpm3.img ..\..\Binary\hdnew_zpm3_1024.img
echo Building Final Disk (1024 DIR) ws4 Image...
copy /b hdnew_prefix.bin + ..\..\Binary\hdnew_ws4.img ..\..\Binary\hdnew_ws4_1024.img

Frank P.

unread,
Jul 22, 2020, 12:48:04 PM7/22/20
to retro-comp
Thanks for clarifying that Jim. I think I'll wait until this all settles out before I delve into this. Since I have a couple "custom" HD images in addition to the 6 stock ones, is there a way to convert an old HD image to an "hdnew_*" image?

Jim McGinnis

unread,
Jul 22, 2020, 1:39:09 PM7/22/20
to retro-comp
Frank,
In the absence (to my knowledge) of a simple conversion tool, I migrated all my drives by introducing a 1024-DIR-capable image into the system and then manually copying the 512-DIR-capable image files and slices to the new disk. Yes, it required several hours.

I have a PPIDE SSD drive (32GB) that is the main drive in my SC126 system. It contains all the files I want to preserve when moving to the 1024 DIR capable image and ROM. I also have two (2) uSD cards installed - each 16GB.

1. Create a new image on one of the uSD drives. Install the new SC126 RomWBW FLASH (I used 3.3.1-pre.21). Boot that uSD card image so that you can read old and new format drives.
2. Create FAT partition space directories for all the slices (drives) from the main hard disk on the uSD card.
3. Copy the contents of the old main drive slices (disks) to the FAT area on the uSD card by creating directories there for each slice/drive on the main disk.
4. Remove the main drive and install a new image to it. Reinstall. Setup all the slices/drives as needed.
5. Copy the uSD FAT partition slice/drive backup files to the newly imaged main drive slices/disks.
6. Copy the ROM drive (containing all the new baseline files for the new ROM) to each slice/disk on the new disk drive, as appropriate. This step keeps all the tools in sync with the ROM and OS (HBIOS/CBIOS baselines) on the drives.

Note that the HD images all seem to create two partitions on the HD drive. I confirmed this using the FDISK80 tool. When I insert the disk into a reader on my Windows 10 PC, it normally detects two new drives. One is FAT(32) and the other is unrecognized. Windows will ask to reformat the RomWBW partition. I ignore Windows. A lot.

It is painful, but it works. There are other methods that are PC based using zx.exe and cpmtools or PC side emulators. But I chose Z80 native...

Cheers
Jim


The RomWBW partition is sized for 64 slices (8MB x 64 = 512MB).
The FAT partition is sized at 512MB also.

There is plenty of space in the FAT partition unless you are an extraordinary user of slices on the main disk.



Frank P.

unread,
Jul 22, 2020, 2:35:13 PM7/22/20
to retro-comp
You have to be a little careful with those numbers - each slice is (per the Hard Disk Anatomy document) 8MiB+128KiB System Area. So you need to allow 8519680 bytes (8320KiB) per slice, just like in the stock images. For 64 slices, you need to reserve exactly 520MiB, not 512 (it's that whole "System Area" thing). I have my (rather large, but what I had on hand) 32GB SD card partitioned for 256 slices (why not?), followed by a FAT32 partition that fills the rest of the card (28ish GiB). Not going to be filling either of those soon :)

Jim McGinnis

unread,
Jul 22, 2020, 3:52:52 PM7/22/20
to retro-comp
Well, I guess your argument is with the numbers taken from FDISK80 reporting, not me. Go figure...

Jim McGinnis

unread,
Jul 22, 2020, 5:42:33 PM7/22/20
to retro...@googlegroups.com

Continuing on your point about "being careful", and for the benefit of some readers...

I think what you were trying to point out is that the number of usable slices is less than 64, which has been a painful issue with small CF cards, etc. 64 MB CF cards always had a compromised last slice. Point: the last slice may be compromised or useless.

I think, correct me if I am wrong, the size of a SLICE = 8.125 MB (8320 KiB as you stated) where MB = 1024 x 1024 Bytes - as is reported by FDISK80. The 1/8 MB = 128KiB and is the overhead you mentioned for the slice.

So, if FDISK80 reports 512M - and you do the LBA math, it is only capable of supporting 63 full slices.

63.0154 slices, approximately. That last partial slice is, for all practical purposes, lost.

To get all 64, the allocation needs to be +520MB, not +512MB.

FDISK80 reserves the first track for the partition table. So, the first partition occurs at 1:0:1 instead of 0:0:0. This is evident in FDISK80.

The FAT16 partition, once formatted, loses some space as well. For FAT16, it is about 24KB - 32KB after Windows 10 or FAT.COM formats the FAT16 partition. So, for a 512MB FAT16 partition that has been formatted, the residual space is 536,584,192 bytes instead of 536,608,768 bytes.

Wayne's defaults of 63 full slices and "about" 512MB for the FAT partition are just fine settings for me.

For perspective, the entire Walnut Creek CD-ROM is about 648 MB. (Not arguing the ISO size on CD vs on any particular file system, etc.)  How much FAT16/32 style storage do I need on my uSD card or SSD - not that much! Just silly for my use.

The disks are so large that it seems like a fuel tanker is parked in the yard for running my lawn mower - how technology has changed.

And how much storage is needed today to get anything done?   I don't want to participate in that conversation.

Thanks Frank. Excellent point.

Now I wish FDISK80 would simply allow you to specify more than 8 reserved slices. How about 256 (or 255 if that is required)?

Looks like a project for someone with the time!

Cheers!  Be safe!

Frank P.

unread,
Jul 22, 2020, 7:28:08 PM7/22/20
to retro-comp
To be clear, I did not use FDISK80 to partition my 32GB card, I used Windows administrative tool Computer Management, and within that Disk Management. That allowed me to create an unformatted volume in the first 2,181,038,080 bytes (and then some to round up to the next highest MiB that Disk Management would allow) of the card, and then fill the remainder of the card with a FAT32 volume (originally I used the maximum size of a FAT16 volume, 4GiB, but then I learned that the FAT command would work with a FAT32 file system but would only support a single partition, so I just let it go out to the full size of the card).

Jim McGinnis

unread,
Jul 22, 2020, 8:06:44 PM7/22/20
to retro-comp
Well Frank, that has to be the most traditional way of configuring drives and partitions, and it has probably been missed by most. I had never thought to use the Admin tools and plug-ins to manage a disk from the start for a CP/M type system.  I have used Linux dd and related commands but find them more complicated to use. It makes great sense to do just what you have done once you know the math for the number of slices required, etc.

And it is a set of tools that many will find familiar. More familiar than FDISK80...

Thanks for the hint and clarification.

Wayne Warthen

unread,
Jul 22, 2020, 10:47:48 PM7/22/20
to retro-comp
On Wednesday, July 22, 2020 at 8:06:45 AM UTC-7, Jim McGinnis wrote:
1) The single OS prebuilt images are missing the prepended "hdnew_prefix.bin" and will not work correctly. 
The "combo" prebuilt image does have the prepended bin file and works flawlessly to boot each slice.

The biggest issue is my lack of documentation of the new format.  :-(

The new format slices must live inside a partition, so essentially hdnew_prefix.bin is just the hard disk partition area.  As you have noticed, the individual "new" slices do not have the prefix.  That is because I assume that people will want to concatenate the individual slice images.  If I prepended the prefix to them, they could not be concatenated.  So... the idea is that to create a custom hdnew image, you concatenate the prefix followed by whichever slices you want.  Since hdnew_combo.img is already a set of concatenated slices, the prefix is already prepended.

The "old" format did not rely on a partition; the slices just started at the beginning of the hard disk.  So, the old slices could just be concatenated by themselves.  The new format is, well, different.

I hope this helps.  I don't have time to create more detailed doc right now.

Regardless, I am thrilled to see such interest in the new format.

Thanks!

Wayne 

Jim McGinnis

unread,
Jul 23, 2020, 9:07:46 AM7/23/20
to retro-comp
Wayne, earlier in the thread I posted a possible temporary update to the Build.cmd file for the hdnew images build area at the end:

It does create a new file type with the partition area prepended.
Example:  hdnew_prefix.bin + hdnew_cpm3.img ==> hdnew_cpm3_1024.img

But as I suggested in the posts, while this may work, it may not be the final "turn-key" solution that you desire.

echo.
echo Building Final Disk (1024 DIR) cpm22 Image...
copy /b hdnew_prefix.bin + ..\..\Binary\hdnew_cpm22.img ..\..\Binary\hdnew_cpm22_1024.img

echo Building Final Disk (1024 DIR) zsdos Image...
copy /b hdnew_prefix.bin + ..\..\Binary\hdnew_zsdos.img ..\..\Binary\hdnew_zsdos_1024.img

echo Building Final Disk (1024 DIR) nzcom Image...
copy /b hdnew_prefix.bin + ..\..\Binary\hdnew_nzcom.img ..\..\Binary\hdnew_nzcom_1024.img

echo Building Final Disk (1024 DIR) cpm3 Image...
copy /b hdnew_prefix.bin + ..\..\Binary\hdnew_cpm3.img ..\..\Binary\hdnew_cpm3_1024.img

echo Building Final Disk (1024 DIR) zpm3 Image...
copy /b hdnew_prefix.bin + ..\..\Binary\hdnew_zpm3.img ..\..\Binary\hdnew_zpm3_1024.img

echo Building Final Disk (1024 DIR) ws4 Image...
copy /b hdnew_prefix.bin + ..\..\Binary\hdnew_ws4.img ..\..\Binary\hdnew_ws4_1024.img


Yep. I am enjoying the additional new DIR headroom!  Thanks again!

Jim