
huge memory model


Paul Edwards

Aug 22, 2020, 6:46:23 PM
It just occurred to me that maybe I was wrong
to use the large memory model for PDOS/86,
ending up with a truly horrible and wasteful use
of memory. Maybe I should have used the huge
memory model instead:

https://en.wikipedia.org/wiki/Intel_Memory_Model

In normal processing, most time is spent
running applications (which may be "large")
rather than the operating system itself, so
it shouldn't matter if PDOS/86 is a bit less
efficient.

The main kludge I did was in memory management.
In order to not have to change my memory
management routines (memmgr), I instead divide
all memory requests by 16 and suballocate
space from a 0x5000 byte block, and then scale
up to the real memory address.

BFN. Paul.

Paul Edwards

Aug 22, 2020, 7:08:54 PM
Should size_t be "long" or "unsigned long" in
the huge memory model?

Borland seems to make it just an "unsigned int"
in all memory models.

Thanks. Paul.

wolfgang kern

Aug 23, 2020, 3:52:23 AM
I use two memory managers in my OS: one works on 1KB chunks, similar
to and partly compatible with Himem.sys functions, and the second covers
large blocks of any size in three granularity steps (4K, 64K, 1M).
But there is only one Get_Mem function; the requested size chooses which one to use.
__
wolfgang

Paul Edwards

Aug 29, 2020, 6:13:49 AM
If a (possibly modified) huge memory model
did normalization of all pointers before
using them, and had size_t equal to long,
it wouldn't have helped on the 80286.

Or did the 80286 have another way of
having buffers greater than 64k?

BFN. Paul.

wolfgang kern

Aug 29, 2020, 8:58:27 AM
My first 80286 (may it rest in pieces now) had a memory extension card
with 2.5 MB RAM in addition to the 640K, and games used IIRC DOS4GW or
something similar to access the whole RAM (my memory fades after more
than four decades have passed).

Internal memory pointers were 32 bit then anyway, either linear or CS:IP
pairs.

"memory models" are just compiler issues, I never cared for such in my OS.
__
wolfgang

Alexei A. Frounze

Aug 29, 2020, 11:56:27 AM
On Saturday, August 29, 2020 at 3:13:49 AM UTC-7, Paul Edwards wrote:
> If a (possibly modified) huge memory model
> did normalization of all pointers before
> using them, and had size_t equal to long,
> it wouldn't have helped on the 80286.

Smaller C's huge memory model operates with 32-bit
(in reality, only 20 bits are usable) physical addresses,
which are converted into segment:offset pairs just before
accessing the memory. This uses i80386 instructions and
is quite a bit of overhead. The i80286 would be much
worse.

> Or did the 80286 have another way of
> having buffers greater than 64k?

You can preinitialize the GDT and/or LDT to make segment
selectors additive (or addable?), just like in real mode.
But unlike real mode, protected mode gives you access to
all 16 MB.

Alex

Paul Edwards

Aug 29, 2020, 6:12:00 PM
On Sunday, 30 August 2020 01:56:27 UTC+10, Alexei A. Frounze wrote:

> You can preinitialize the GDT and/or LDT to make segment
> selectors additive (or addable?), just like in real mode.
> But unlike real mode, protected mode gives you access to
> all 16 MBs.

Could you elaborate on this please? How
were the addresses addable to give a
16 MiB address space?

Thanks. Paul.

Alexei A. Frounze

Aug 30, 2020, 12:11:20 AM
The i80286 segment descriptor has a 24-bit physical base
address and a 16-bit limit (one less than the segment size
in the normal case).

The segment selector has 13 index bits if we
ignore the 2 bits of the privilege level and the bit
that selects between the GDT and the current LDT.

So, you can fill the GDT/LDT with 8192 segment descriptors
(well, 8191 or slightly fewer because of the NULL descriptor
and some other system segments) with linearly increasing
physical base addresses and limit=65535.

16 MB / 8192 = 2048.
So 2048 bytes is the minimum spacing between segment
bases, and with 8192 of them you'd cover all 16 MB.

IOW, you can have a GDT/LDT configuration, where every
segment starts on a 2KB boundary and is 64KB long.
Adjacent segments overlap.
This is much like in real mode, but with the start address
being a multiple of 2KB instead of 16 bytes.

[If you fill both the GDT and the LDT in such a manner
and use the segment selector bit that selects either
the GDT or the LDT, you'll have 14 address bits in the
selector and double the total number of the segments to
16384. With this you can lower the segment start address
from being a multiple of 2048 to a multiple of 1024.]

And then your normalizing pointer increment routine
would be something like:

; in/out: ds:bx the logical address;
; out bx is normalized to be less than 2048
; in: dx:cx the byte offset to add
; to the logical address
; destroyed: dx:cx
add bx, cx
adc dl, 0
mov dh, dl
mov dl, bh
and dl, 11111000b
and bh, 00000111b
mov cx, ds
add dx, cx
mov ds, dx
ret

With this normalization you can handle up to 62KB
worth of data without crossing a segment boundary,
which is handy for copying or searching in large
blocks of memory.

It would be performance overkill to use this routine
often, e.g. on every byte, word, qword or tword
access, but it'll work every time.

So, given a 24-bit physical address in a pair of
16-bit registers, you can always convert it into
a segment:offset pair to access memory through
the properly configured segments.

Borland Pascal 7 used this scheme in protected mode
and DPMI and Windows both supported it.
There was this global variable in BP7, SelectorInc,
which you could add to a segment selector to move
by 64KB in the linear/physical address space and
thus access objects larger than 64KB.
The configuration that I describe corresponds to
SelectorInc = 256.

Alex

Paul Edwards

Aug 30, 2020, 1:58:42 AM
On Sunday, 30 August 2020 14:11:20 UTC+10, Alexei A. Frounze wrote:

> With this normalization you can handle up to 62KB
> worth of data without crossing a segment boundary,

Thanks for the explanation Alex. I had
expected the answer to be that the
segment is shifted by 8 bits instead
of 4 bits.

BFN. Paul.

Alexei A. Frounze

Aug 30, 2020, 4:47:51 AM
To expand on how to do this:
; in: dh:ax is the 24-bit physical address
; out: ds:bx is the corresponding logical address
; using the GDT at privilege level 0
; destroyed: dl
; limitation: because of the NULL descriptor, this
; cannot cover physical addresses
; between 0 and 2KB
mov bx, ax
mov dl, bh
and dl, 11111000b
and bh, 00000111b
mov ds, dx
ret

Interestingly, this is just one instruction longer
than what Smaller C uses in the huge model in real mode
with i80386 instructions and 32-bit registers.

muta...@gmail.com

Mar 9, 2021, 3:55:15 AM
On Sunday, August 30, 2020 at 2:11:20 PM UTC+10, Alexei A. Frounze wrote:

> And then your normalizing pointer increment routine
> would be something like:
>
> ; in/out: ds:bx the logical address;
> ; out bx is normalized to be less than 2048
> ; in: dx:cx the byte offset to add
> ; to the logical address
> ; destroyed: dx:cx
> add bx, cx

> It would be performance overkill to use this routine
> often, e.g. on every byte, word, qword or tword
> access, but it'll work every time.

I'd like to revisit this issue.

I don't mind the performance issue. I just want my
programs to behave correctly.

I like the idea of a subroutine being called every time
a seg:off is added to or subtracted from, producing a
normalized pointer. I want the normalized pointer to still
be a proper seg:off though; I don't want it to be a linear
address. I just want the offset reduced to under 16.

And I would like a different executable and different
subroutine to handle the 80286 using your above scheme
of mapping the 16 MiB address space.

It occurs to me that if we are using subroutines and
32-bit pointers, this should just be a target of
GCC. I'm not sure if GCC can cope with "int" being
16-bit. Other targets of GCC 3.2.3 seem to have 16-bit
"int".

But I expect size_t to be 32-bit.

I don't really care if "int" is 16-bit or 32-bit.

I would like to avoid using the "near" and "far" keywords.
I don't mind different memory models, but my focus is
on "huge" so that I can have a large size_t, as is appropriate
for a machine with more than 64k of memory.

I found the ia16.md here:
i16gcc.zip
i16gcc\SOURCE\I16GCC\I16GCC.DIF

I don't mind at all that people want to use tricks to reduce
space usage by making pointers 16-bits instead of 32-bits.
Or 32-bits instead of 64-bits. I just don't want those differences
to be visible in C code. Even in the C runtime library I really
expect that to be relegated to some "glue" when interacting
with the OS, not in open C code.

I don't mind the segmented pointers either, so long as
when you need to pay the price for that, you pay the
price, which is a lot of subroutine calls. As opposed to
being stuck with a size_t of 16 bits.

I'm wondering how much work is required to convert
ia16.md into handling huge memory model with 32-bit
size_t, if I'm willing to simply generate lots of subroutine
calls.

Anyone have any idea? I would like to contact the author
after I have some idea what is involved. Maybe subroutines
would only take 200 lines of code changes to the ia16
target.

Thanks. Paul.

muta...@gmail.com

Mar 9, 2021, 4:14:37 AM
On Tuesday, March 9, 2021 at 7:55:15 PM UTC+11, muta...@gmail.com wrote:

> I don't mind at all that people want to use tricks to reduce
> space usage by making pointers 16-bits instead of 32-bits.
> Or 32-bits instead of 64-bits. I just don't want those differences
> to be visible in C code. Even in the C runtime library I really
> expect that to be relegated to some "glue" when interacting
> with the OS, not in open C code.

I think a small memory model program needs to be
given a 16-bit pointer to the OS API, plus a 16-bit
function to call whenever it wants to call a routine
contained within the OS API, i.e. these things should
be within the module's "address space".

It is the responsibility of the 32-bit OS to look after its
16-bit executables.

I don't need MSDOS compatibility.

And similarly, a 64-bit OS needs to be aware of its
32-bit executables, and make sure the OS API is
visible to it.

BFN. Paul.

muta...@gmail.com

Mar 9, 2021, 4:40:11 AM
On Tuesday, March 9, 2021 at 8:14:37 PM UTC+11, muta...@gmail.com wrote:

> I think a small memory model program needs to be
> given a 16-bit pointer to the OS API, plus a 16-bit
> function to call whenever it wants to call a routine
> contained within the OS API. ie these things should
> be within the module's "address space".

Actually, I think the OS API should just be given
as a series of integers, as the OS API could be
16-bit, 32-bit or 64-bit. The OS should know from
the number what the application is trying to do,
and take care of calling the desired API correctly.
Even if it is a 16-bit tiny memory model program
calling a 64-bit OS_DISK_READ() function.

BFN. Paul.

muta...@gmail.com

Mar 9, 2021, 5:45:06 AM
Also we might be on a Commodore 128. I don't know
how that works, but presumably when you do an OS
call, it may actually switch to the other bank to process
it.

The goal is to determine how to write an operating
system in C that can run anywhere, with the assumption
that the OS actually fits into that space.

Actually we may have the same issue with an 80386
with 8 GiB of memory. It is the same model as the
Commodore 128. The hardware would presumably
provide a mechanism to do bank-switching.

The Z80 does that too I think.

BFN. Paul.

muta...@gmail.com

Mar 9, 2021, 1:15:05 PM
On Tuesday, March 9, 2021 at 8:40:11 PM UTC+11, muta...@gmail.com wrote:

> Actually, I think the OS API should just be given
> as a series of integers, as the OS API could be
> 16-bit, 32-bit or 64-bit. The OS should know from
> the number what the application is trying to do,
> and take care of calling the desired API correctly.
> Even if it is a 16-bit tiny memory model program
> calling a 64-bit OS_DISK_READ() function.

I think fread() should be implemented in the C
library as:

__osfunc(__OS_FREAD, ptr, size, nmemb, stream);

BFN. Paul.

muta...@gmail.com

Mar 9, 2021, 1:28:58 PM
On Wednesday, March 10, 2021 at 5:15:05 AM UTC+11, muta...@gmail.com wrote:

> I think fread() should be implemented in the C
> library as:
>
> __osfunc(__OS_FREAD, ptr, size, nmemb, stream);

Which would eventually result in an fread() function
in the OS being executed, likely still in user mode (or,
for a more complicated example, fscanf(), same story),
until it is rationalized into a PosReadFile() (an
internal-use-only function), which in a decent operating
system will do an interrupt, but that is left to the
discretion of the OS vendor.

So you're now probably in supervisor mode, and you
need to implement PosReadFile(). The OS is the only
thing that knows what files look like on a FAT-16, so
it retrieves the required sectors by doing its own call
to fread(), this time it is a BIOS call, like this:

__biosfunc(__BIOS_FREAD, ptr, size, nmemb, stream);

which once again gets translated into a call to the
BIOS fread() function (or similar, could be fscanf()),
which in turn gets rationalized into a call to BosReadFile()
which treats the entire disk as a file, and retrieves the
requested data.

BosReadFile() will possibly be translated into an
interrupt which puts the CPU into BIOS state instead
of supervisor state.

There is probably no reason to have a separate
__biosfunc and __osfunc or PosReadFile and
BosReadFile.

So perhaps __func() and __ReadFile().

As I said, these are internal functions. Everyone is
required to execute the C90 functions as the only
official interface.

BFN. Paul.

muta...@gmail.com

Mar 9, 2021, 5:42:51 PM
On Tuesday, March 9, 2021 at 7:55:15 PM UTC+11, muta...@gmail.com wrote:

> I like the idea of a subroutine being called every time
> a seg:off is added/subtracted to, producing a normalized
> pointer. I want the normalized pointer to still be a proper
> seg:off though, I don't want it to be a linear address. I
> just want the offset reduced to under 16.

Actually, what do existing compilers like TCC
and Watcom do when producing huge memory
model MSDOS executables?

If they're already doing what I need, maybe it is
as simple as changing size_t from unsigned int
to unsigned long in PDPCLIB and then build
with the existing huge memory model capability
of these existing compilers???

Thanks. Paul.

muta...@gmail.com

Mar 9, 2021, 7:45:48 PM
On Wednesday, March 10, 2021 at 5:28:58 AM UTC+11, muta...@gmail.com wrote:

> __biosfunc(__BIOS_FREAD, ptr, size, nmemb, stream);

I guess the existing int86 calls people use could
remain, and just translate that into a BIOS function
code.

And if we're running under an emulator, or have
a S/390 co-processor attached to our 80386
box, we should be able to switch processors,
not just memory models.

BFN. Paul.

muta...@gmail.com

Mar 9, 2021, 7:51:45 PM
On Wednesday, March 10, 2021 at 11:45:48 AM UTC+11, muta...@gmail.com wrote:

> And if we're running under an emulator, or have
> a S/390 co-processor attached to our 80386
> box, we should be able to switch processors,
> not just memory models.

The same process that is used to switch to a
coprocessor can be used to switch to real mode,
when the 16-bit program in question is using
real mode instructions instead of 80386 protected
mode instructions (assuming you can write 16-bit
programs using an 80386). The distinction between
16-bit and 32-bit is blurry to me anyway. Are we
talking about int, data pointers, code pointers, or
biggest register? What if there is a 32-bit register
but no-one uses it because it's really slow? Does it
start or stop being 32-bit?

BFN. Paul.

muta...@gmail.com

Mar 9, 2021, 7:59:11 PM
On Wednesday, March 10, 2021 at 11:45:48 AM UTC+11, muta...@gmail.com wrote:

> And if we're running under an emulator, or have
> a S/390 co-processor attached to our 80386
> box, we should be able to switch processors,
> not just memory models.

All the emulators could be combined, so that we
have a computer with a massive number of
co-processors, and can run executables from
anywhere.

Assuming they're all written in C90 and are following
the agreed-upon OS convention.

Or maybe interrupts can be intercepted and converted
into the agreed convention.

BFN. Paul.

Alexei A. Frounze

Mar 9, 2021, 8:26:40 PM
On Tuesday, March 9, 2021 at 2:42:51 PM UTC-8, muta...@gmail.com wrote:
> On Tuesday, March 9, 2021 at 7:55:15 PM UTC+11, muta...@gmail.com wrote:
> > I like the idea of a subroutine being called every time
> > a seg:off is added/subtracted to, producing a normalized
> > pointer. I want the normalized pointer to still be a proper
> > seg:off though, I don't want it to be a linear address. I
> > just want the offset reduced to under 16.
> Actually, what do existing compilers like TCC
> and Watcom do when producing huge memory
> model MSDOS executables?

The 16-bit models in Borland/Turbo C and Watcom C still limit object/array sizes to under 64KB and size_t is 16-bit regardless of the 16-bit memory model (however, ptrdiff_t is 32-bit in the huge model).

So, even though the pointer is far/huge enough, it's not enough to transparently hide segmentation and its 64KB size limits.

My compiler supports arrays/objects larger than 64KB in its huge model. But it uses 80386 registers and instructions.

Alex

muta...@gmail.com

Mar 9, 2021, 8:27:19 PM
On Wednesday, March 10, 2021 at 11:59:11 AM UTC+11, muta...@gmail.com wrote:

> All the emulators could be combined, so that we
> have a computer with a massive number of
> co-processors, and can run executables from
> anywhere.
>
> Assuming they're all written in C90 and are following
> the agreed-upon OS convention.

You can assume that all the executables are either
ASCII or EBCDIC, ie consistent, the same as the
data on the hard disk.

You can take a USB stick wherever you want, and
the speed will depend on what coprocessors are
present.

The OS could automatically select the right executables
for the current hardware.

If you have the right hardware, programs will run at
native speed.

When a 68000 executable requests a service from an
80386 OS (even via the agreed mechanism), it is not
just the integer size that needs to be taken into account,
but in this case, the endianness.

BFN. Paul.

muta...@gmail.com

unread,
Mar 9, 2021, 8:31:05 PM3/9/21
to
On Wednesday, March 10, 2021 at 12:26:40 PM UTC+11, Alexei A. Frounze wrote:

> The 16-bit models in Borland/Turbo C and Watcom C still limit object/array
> sizes to under 64KB and size_t is 16-bit regardless of the 16-bit memory
> model (however, ptrdiff_t is 32-bit in the huge model).
>
> So, even though the pointer is far/huge enough, it's not enough to transparently hide segmentation and its 64KB size limits.

Thanks. So what does Turbo C etc actually do in huge model
then? How is it different from large?

Thanks. Paul.

Alexei A. Frounze

Mar 9, 2021, 8:53:15 PM
Cumulative size of static objects is 64KB in large vs 1MB in huge.
Basically, how many 64-KB data segments are used for your non-heap variables.

Alex

muta...@gmail.com

Mar 9, 2021, 11:20:53 PM
I don't think this is the correct approach for the BIOS. The
BIOS necessarily behaves differently, such as providing
extra services to help the boot sector to load the next
sector. In addition, the OS can't have:

fopen("\\0x190", "r+b");
(to get the BIOS to open device x'190' which it recognizes)
as well as
fopen("config.sys", "r");
(to help the OS itself to read files)

Only the OS has this problem of not being able to
distinguish which is which, and needing different
calls. And the FILE pointer may be different too,
with different buffer sizes for example.

I think BIOS calls need to be done as bios->fread()
and for the file pointer you pass a void * which was
returned from bios->fopen, except for bios->stdout,
bios->stdin, bios->stderr, bios->bootdev which are
all provided (and pre-opened) before the boot sector
is given control.

So that means the OS functions are free to be defined
as an enum, i.e. OS_FOPEN, OS_FREAD etc, and then a
C library function such as fread() will be defined as one line:
(*__osfunc)(OS_FREAD, ptr, size, nmemb, stream);
where __osfunc is a function pointer provided by the OS.

A simple implementation can simply use the enum as
an index into a structure. Hmmm, maybe __osfunc needs
to know where to find its data block too, so maybe the
first parameter should be __osdata, also given to the
executable when it is invoked.

In the OS itself, which has to actually provide a real
implementation of fread/fscanf, it will eventually
resolve into a call to something like PosReadFile()
(that name is never exposed, so not important),
which is where an interrupt may be generated,
depending on how advanced the implementation is.

BFN. Paul.

muta...@gmail.com

Mar 10, 2021, 12:16:06 AM
On Wednesday, March 10, 2021 at 3:20:53 PM UTC+11, muta...@gmail.com wrote:

> a C library function such as fread() will be defined as one line:
> (*__osfunc)(OS_FREAD, ptr, size, nmemb, stream);
> where __osfunc is a function pointer provided by the OS.

It occurs to me that all my static global variables in
stdio.c will need to have space allocated for them
in the executable (or at least via malloc) rather than
using ones that are defined in the OS executable.

And when the OS is running, it also needs its own
copy of those variables, unless we attempt to
compile twice.

I have variables like this:

static int inreopen = 0;

FILE *__userFiles[__NFILE];

I'm guessing that all of these static variables need to
be put into a new structure, and I can put those structures
into stdio.h, string.h (at least for use by strtok) so long as
I call the structure __SOMETHING and then the startup
code needs to do a malloc for all of these structures,
possibly a single malloc, and the OS needs to preserve
this address (it has access to the C library's global
variables) whenever it does a system().

It's probably better to have a __stdio, __string etc global
variable (all saved and restored over a system() call)
rather than requiring stdio.c to be aware of all the other
things like string.c.

Although at the time an application's call to fopen()
resolves to an OS call of PosOpenFile(), we really
want the OS to get its version of the global variables
restored, in case it decides to call fopen() itself to
look at a permissions.txt or something to see whether
it is willing to satisfy the PosOpenFile() request or not.

So it looks like I will need some sort of header file
such as __start.h which includes all the other header
files to get their structures, and defines a single global
variable that can then be accessed by everyone else,
so the OS can save the previous state on every
Pos*() call. Saving the state is basically part of doing
a switch from application to OS.

BFN. Paul.

muta...@gmail.com

Mar 10, 2021, 12:48:07 AM
On Wednesday, March 10, 2021 at 4:16:06 PM UTC+11, muta...@gmail.com wrote:

> It occurs to me that all my static global variables in
> stdio.c will need to have space allocated for them
> in the executable (or at least via malloc) rather than
> using ones that are defined in the OS executable.

I know what this is called - making it naturally reentrant.

BFN. Paul.

Rod Pemberton

Mar 10, 2021, 3:15:14 AM
On Tue, 9 Mar 2021 21:16:05 -0800 (PST)
"muta...@gmail.com" <muta...@gmail.com> wrote:

> I call the structure __SOMETHING and then the startup
> code needs to do a malloc for all of these structures,
> possibly a single malloc, and the OS needs to preserve
> this address (it has access to the C library's global
> variables) whenever it does a system().

If malloc() isn't available yet, you might want to code
another function for your OS like alloca() or sbrk().

--
Diplomacy with dictators simply doesn't work.

muta...@gmail.com

Mar 10, 2021, 7:58:33 AM
On Wednesday, March 10, 2021 at 12:53:15 PM UTC+11, Alexei A. Frounze wrote:

> > Thanks. So what does Turbo C etc actually do in huge model
> > then? How is it different from large?

> Cumulative size of static objects is 64KB in large vs 1MB in huge.
> Basically, how many 64-KB data segments are used for your non-heap variables.

I have no use for that, at least at the moment. So I
shouldn't be asking that GCC IA16 person for huge
memory model, I should be asking him for large
memory model, but with data pointers able to cope
with 32-bit integers added to them, with normalization?

BTW, another thing I realized was that with my new
minimal BIOS and a 32-bit OS, I could have a new
computer built (ie, in Bochs!) that has the entire
1 MiB free instead of being limited to 640k.

No graphics etc of course, but I'm not trying to
support that, I'm only after text, including ANSI
escape codes, to be sent through to the BIOS.

BFN. Paul.

muta...@gmail.com

Mar 14, 2021, 8:25:07 AM
On Wednesday, March 10, 2021 at 12:26:40 PM UTC+11, Alexei A. Frounze wrote:

> The 16-bit models in Borland/Turbo C and Watcom C still limit object/array sizes to under 64KB and size_t is 16-bit regardless of the 16-bit memory model (however, ptrdiff_t is 32-bit in the huge model).
>
> So, even though the pointer is far/huge enough, it's not enough to transparently hide segmentation and its 64KB size limits.
>
> My compiler supports arrays/objects larger than 64KB in its huge model. But it uses 80386 registers and instructions.

I've been giving this more thought, and I'd like to abstract
the problem before inquiring about changes to the C90
standard to create a C90+ standard.

If I have a S/3X0 that only has 32-bit registers available,
I'd like to change the machine to reuse 3 of the 16 registers
as segment registers.

So there will be 32:32 far pointers available.

I'm not sure what range of memory that should cover, but
let's say 64 GiB. (any suggestion?).

I want the compiler to be able to generate far data pointers
and near code pointers.

I want to be able to allocate 8 GiB of memory, even though
size_t tops out at 4 GiB. I need a different function, not malloc().

I don't want to burden the compiler with a formal "long long"
data type.

I want long to be 32-bits.

I want to declare a:

char huge *p;

to point to my above-size_t memory block.

I don't expect to be able to do a strlen() of p, but I do expect
to be able to do p++ to traverse the entire 8 GiB memory
block, perhaps looking for the character 'Q', with the segment
register being automatically adjusted by the compiler, at its
discretion.

I'd like to represent a 64-bit value to be given to huge_malloc()
by two unsigned longs, both containing 32-bit values, even on
machines where longs are 128 bits.

I'd also like a:
char huge *z;
z = addhuge(p, unsigned long high, unsigned long low);

A subhuge too.

The same functions can exist in all compilers, including
MSDOS, even if they just return NULL for huge_malloc()
for obvious reasons. But even MSDOS can give you a
memory block bigger than 64k, so if you request 128k
using huge_malloc(), no worries, you'll get it.

I think:
char far *
has a different meaning. That is a once-off segmented
reference, but still restricted to size_t ie 4 GiB.

Any thoughts?

Thanks. Paul.

muta...@gmail.com

Mar 14, 2021, 8:56:02 AM
On Sunday, March 14, 2021 at 11:25:07 PM UTC+11, muta...@gmail.com wrote:

> I'm not sure what range of memory that should cover, but
> let's say 64 GiB. (any suggestion?).

What about if the segment register were to shift
left the full 32-bits?

This is all Windows virtual memory anyway.

Real hardware with non-virtual memory could use
a more realistic shift value.

BFN. Paul.

muta...@gmail.com

Mar 14, 2021, 10:04:16 AM
On Sunday, March 14, 2021 at 11:25:07 PM UTC+11, muta...@gmail.com wrote:

> char huge *p;
>
> to point to my above-size_t memory block.
>
> I don't expect to be able to do a strlen() of p, but I do expect
> to be able to do p++ to traverse the entire 8 GiB memory
> block, perhaps looking for the character 'Q', with the segment
> register being automatically adjusted by the compiler, at its
> discretion.

Rather than rely on the compiler, how about:

p = HUGE_ADDINT(p, 5);
p = HUGE_ADDLONG(p, 7);

etc

and if you have a magical compiler, that just translates to
p = p + 5;
Without a magical compiler, you do a function call or
whatever.
If you are on a platform without segmented memory,
it also translates to:
p = p + 5;

BFN. Paul.

muta...@gmail.com

Mar 14, 2021, 11:38:06 AM
On Sunday, March 14, 2021 at 11:56:02 PM UTC+11, muta...@gmail.com wrote:

> What about if the segment register were to shift
> left the full 32-bits?

How about the 8086 be adjustable so that it can
shift the full 16-bits? Software could have been
written to hedge against the shift value rather
than assuming that "4" was set in stone.

BFN. Paul.

muta...@gmail.com

Mar 14, 2021, 12:09:25 PM
On Monday, March 15, 2021 at 2:38:06 AM UTC+11, muta...@gmail.com wrote:

> How about the 8086 be adjustable so that it can
> shift the full 16-bits? Software could have been
> written to hedge against the shift value rather
> than assuming that "4" was set in stone.

And INT 21H function 48H should have returned
a far pointer instead of just the segment.

That dooms every allocation to be done on a 64k
boundary if you wish to shift 16 bits.

BFN. Paul.

muta...@gmail.com

Mar 14, 2021, 12:14:22 PM
On Monday, March 15, 2021 at 3:09:25 AM UTC+11, muta...@gmail.com wrote:

> And INT 21H function 48H should have returned
> a far pointer instead of just the segment.
>
> That dooms every allocation to be done on a 64k
> boundary if you wish to shift 16 bits.

Regardless, it just so happens that I have 4 GiB
of memory anyway, so if allocations using the
old API go on a 64k boundary, so be it.

Can I make PDOS/86 handle a shift of either 4
bits or 16 bits, as per some setting in the FAT
boot sector? Or some BIOS call? I'm guessing
the BIOS won't be able to cope with that? How
many places is "4" hardcoded?

BFN. Paul.

muta...@gmail.com

Mar 14, 2021, 12:41:30 PM
On Monday, March 15, 2021 at 3:14:22 AM UTC+11, muta...@gmail.com wrote:

> Can I make PDOS/86 handle a shift of either 4
> bits or 16 bits, as per some setting in the FAT
> boot sector? Or some BIOS call? I'm guessing
> the BIOS won't be able to cope with that? How
> many places is "4" hardcoded?

Can PDOS/86 run GCC 3.2.3 which is a 3 MB
executable? Assuming the full 4 GiB address
space is available.

BFN. Paul.

muta...@gmail.com

Mar 14, 2021, 1:20:34 PM
On Monday, March 15, 2021 at 3:41:30 AM UTC+11, muta...@gmail.com wrote:

> Can PDOS/86 run GCC 3.2.3 which is a 3 MB
> executable? Assuming the full 4 GiB address
> space is available.

Once everyone has recompiled their programs
to use large memory model, can 32-bit flat
pointers co-exist with segmented memory
doing 16 bit shifts?

BFN. Paul.

muta...@gmail.com

Mar 14, 2021, 1:32:58 PM
On Monday, March 15, 2021 at 4:20:34 AM UTC+11, muta...@gmail.com wrote:

> Once everyone has recompiled their programs
> to use large memory model, can 32-bit flat
> pointers co-exist with segmented memory
> doing 16 bit shifts?

At that stage, they WILL be flat pointers. What is
necessary is for the 32-bit instructions to not use
CS and DS and ES. Just pretend they are set to 0.
No need to convert the tiny/small/compact/medium
memory model programs to large/huge at all.

I think.

BFN. Paul.

muta...@gmail.com

unread,
Mar 14, 2021, 1:45:36 PM3/14/21
to
On Monday, March 15, 2021 at 4:32:58 AM UTC+11, muta...@gmail.com wrote:

> At that stage, they WILL be flat pointers. What is
> necessary is for the 32-bit instructions to not use
> CS and DS and ES. Just pretend they are set to 0.
> No need to convert the tiny/small/compact/medium
> memory model programs to large/huge at all.

Perhaps what was needed was a "load absolute"
instruction.

LABS ds:bx,0xb8000

It will do the required bit shifting. No-one ever needs to know.
And a STABS too of course.
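
In C terms, what a hypothetical LABS would compute for a 4-bit shift can be sketched like this (LABS/SEG_SHIFT are made-up names, not real instructions or macros):

```c
/* Sketch only: split an absolute address into a normalized segment
   and offset, the way a hypothetical LABS would, for a 4-bit shift. */
#define SEG_SHIFT 4
#define LABS_SEG(abs) ((unsigned int)((abs) >> SEG_SHIFT))
#define LABS_OFF(abs) ((unsigned int)((abs) & ((1UL << SEG_SHIFT) - 1)))
```

The point is that only these two macros (or instructions) need to know the shift; the rest of the code never does.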

BFN. Paul.

muta...@gmail.com

unread,
Mar 14, 2021, 1:53:01 PM3/14/21
to
On Monday, March 15, 2021 at 4:45:36 AM UTC+11, muta...@gmail.com wrote:

> It will do the required bit shifting. No-one ever needs to know.
> And a STABS too of course.

A STABS instruction on the S/380 will require 2 32-bit
longs. What if we have 128-bit addresses and 32-bit
longs? How much is enough? I guess you'll be forced
to recompile when that happens?

BFN. Paul.

muta...@gmail.com

unread,
Mar 14, 2021, 3:21:27 PM3/14/21
to
On Monday, March 15, 2021 at 4:45:36 AM UTC+11, muta...@gmail.com wrote:

> LABS ds:bx,0xb8000
>
> It will do the required bit shifting. No-one ever needs to know.
> And a STABS too of course.

You need to update segments when loading a
medium/large/huge MSDOS executable, so you
need to know what ds is currently, and you need
to be on a segment boundary too, which may be
an expensive 64k if you are doing a 16-bit shift
back in the old days.
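
The loader-side fixup can be sketched like this (hedged: illustrative names and layout, not actual PDOS code; each relocation entry names a word in the image that must have the load segment added):

```c
/* Sketch of an MZ-style segment fixup.  The shift parameter locates
   the word in the flat image whether segment units are 16 bytes
   (shift 4) or 64 KiB (shift 16). */
struct reloc { unsigned short off; unsigned short seg; };

static void apply_relocs(unsigned char *image, const struct reloc *r,
                         int n, unsigned short load_seg, int shift)
{
    int i;
    for (i = 0; i < n; i++) {
        unsigned long where =
            ((unsigned long)r[i].seg << shift) + r[i].off;
        unsigned int val = image[where] | (image[where + 1] << 8);
        val = (val + load_seg) & 0xFFFFU;  /* add the load segment */
        image[where] = (unsigned char)(val & 0xFF);
        image[where + 1] = (unsigned char)(val >> 8);
    }
}

static int reloc_selftest(void)
{
    unsigned char image[4] = { 0x34, 0x12, 0, 0 };
    struct reloc r[1];
    r[0].off = 0; r[0].seg = 0;
    apply_relocs(image, r, 1, 0x100, 4);
    return image[0] == 0x34 && image[1] == 0x13;  /* 0x1234 + 0x100 */
}
```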

But how is a C program meant to update the
segment in a portable manner? E.g. in S/380
where you have a 64-bit segmented address,
assuming you had 7 GiB of code.

You will have segments of 0 and 1 in the executable.

Let's say you load to a 16 GiB location. You will be
aware of that absolute location, as you needed to
do a 64-bit addition (for the 7 GiB) without using
64-bit registers (which you don't have). So it would
have been some HUGE_ADDLONG() calls, presumably
in 2 GiB chunks, as you can't do 4 GiB.

There is a similar issue I faced with PDOS/86. I
can use 64k clusters for FAT-16 but I can't actually
read that amount. It's 1 byte too big to represent in
a 16-bit integer.
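
One way around the off-by-one is to move the cluster as two 32 KiB halves, each of which fits a 16-bit count. A sketch (not PDOS code; memcpy() stands in for the real INT 13h transfer):

```c
#include <string.h>

/* Sketch: a 64 KiB cluster is one byte too big for a 16-bit count,
   so transfer it as two 32 KiB halves. */
static unsigned char disk_cluster[65536UL]; /* stand-in for on-disk data */

static void read_cluster_64k(unsigned char *buf)
{
    memcpy(buf, disk_cluster, 32768U);
    memcpy(buf + 32768U, disk_cluster + 32768U, 32768U);
}

static int cluster_selftest(void)
{
    static unsigned char out[65536UL];
    disk_cluster[0] = 1;
    disk_cluster[65535UL] = 2;
    read_cluster_64k(out);
    return out[0] == 1 && out[65535UL] == 2;
}
```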

Not sure.

BFN. Paul.

muta...@gmail.com

unread,
Mar 14, 2021, 3:29:46 PM3/14/21
to
On Monday, March 15, 2021 at 4:53:01 AM UTC+11, muta...@gmail.com wrote:

> A STABS instruction on the S/380 will require 2 32-bit
> longs. What if we have 128-bit addresses and 32-bit
> longs? How much is enough? I guess you'll be forced
> to recompile when that happens?

How about a union of a long and a void *, so that the
storage is aligned for both, and then inspecting
sizeof(void *) and sizeof(long) before analyzing it?
Maybe it is a job for unsigned char, not long. The rule
for STABS should be set in stone (little-endian vs big-endian).
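
The union idea, as a sketch ("absaddr" is an illustrative name):

```c
/* Storage aligned for both a long and a pointer, sized for whichever
   is bigger, inspected via sizeof at run time. */
union absaddr {
    unsigned long ul;
    void *p;
    unsigned char bytes[sizeof(void *) > sizeof(long)
                        ? sizeof(void *) : sizeof(long)];
};
```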

BFN. Paul.

Rod Pemberton

unread,
Mar 14, 2021, 5:29:40 PM3/14/21
to
On Sun, 14 Mar 2021 05:25:06 -0700 (PDT)
"muta...@gmail.com" <muta...@gmail.com> wrote:

> I've been giving this more thought, and I'd like to abstract
> the problem before inquiring about changes to the C90
> standard to create a C90+ standard.
>
> If I have a S/3X0 that only has 32-bit registers available,
> I'd like to change the machine to reuse 3 of the 16 registers
> as segment registers.
>
> So there will be 32:32 far pointers available.
>
> I'm not sure what range of memory that should cover, but
> let's say 64 GiB. (any suggestion?).
>
> I want the compiler to be able to generate far data pointers
> and near code pointers.

The C specifications don't support segmented address pointers.

E.g., the LCC C compiler for DOS eliminated near and far pointers to
comply with the C specification. Versions 3.5, 3.6 have them.
Versions 4.5, 4.6 don't. I.e., the later versions don't support 16-bit
x86 code (which must add a segment and offset for huge/far pointers),
only 32-bit x86 code (with a segment/selector that doesn't change).

> I want to be able to allocate 8 GiB of memory, even though
> size_t is 4 GiB. I need a different function, not malloc().

Do you actually need a new malloc()? You might.

Allocating a contiguous memory block for C objects and memory
allocations is a requirement of C.

So, multiple calls to malloc(), e.g., two 4GiB calls, would work,
IF AND ONLY IF,
you can guarantee that the memory allocator allocates both blocks
contiguously. E.g.,

__set_contiguous_multiple_allocations(1);
malloc(4GiB);
malloc(4GiB);
__set_contiguous_multiple_allocations(0);

Where, __set_contiguous_multiple_allocations() is a custom function
that turns contiguous allocations on/off within the memory allocator,
for repeated calls to malloc(). Of course, now you need access and
control of the memory allocator, which you may not have, in addition to
access and control of the C compiler proper.
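
Purely illustrative, here is one way such a scheme could be faked in portable C: carve both blocks out of one reservation (__set_contiguous_multiple_allocations() and contig_malloc() are hypothetical; real malloc() gives no contiguity guarantee):

```c
#include <stdlib.h>

/* Toy allocator: while "contiguous mode" is on, successive requests
   come out of one big reservation, so they are adjacent. */
static char *contig_base = NULL;
static size_t contig_used = 0;

static void *contig_malloc(size_t total_reserve, size_t n)
{
    void *p;
    if (contig_base == NULL) {
        contig_base = malloc(total_reserve);
        contig_used = 0;
    }
    if (contig_base == NULL) return NULL;
    p = contig_base + contig_used;
    contig_used += n;
    return p;
}

static int contig_selftest(void)
{
    char *a = contig_malloc(1024, 512);
    char *b = contig_malloc(1024, 512);
    return a != NULL && b - a == 512;  /* adjacent blocks */
}
```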

> I don't want to burden the compiler with a formal "long long"
> data type.
>
> I want long to be 32-bits.
>
> I want to declare a:
>
> char huge *p;
>
> to point to my above-size_t memory block.
>

But, in this example, you have "to burden the compiler with a formal"
"huge" pointer type ... Same difference? I.e., I see an advantage to
supporting "long long" but see no advantage to support "huge" or "far"
or "near", if you don't need to do so.

> I don't expect to be able to do a strlen() of p

Why not?

strlen() is just a loop that detects a zero byte/word (which usually
maps to a nul char '\0' on most implementations, i.e., because
they're the same size byte/word and char for most platforms).

strlen() should work on an infinite length string.

> but I do expect to be able to do p++ to traverse the entire 8 GiB
> memory block

Same thing. No difference.

> perhaps looking for the character 'Q', with the segment
> register being automatically adjusted by the compiler, at its
> discretion.

What?

Are you saying you want another string terminator like nul '\0' for C
but using the character 'Q'? What for? Unnecessary...

> I'd like to represent a 64-bit value to be given to huge_malloc()
> by two unsigned longs, both containing 32-bit values, even on
> machines where longs are 128 bits.

Instead of passing a 64-bit value into a malloc() variant, why wouldn't
you have a malloc() variant that allocated 4KB or 64KB blocks of memory
at a time, instead of allocating bytes of memory at a time like
malloc()? E.g., 32-bit x (4KB per allocation). This wouldn't give you
a 64-bit address range, but it would eliminate the need for extending
integers or pointers, or passing in a segment etc.

> I'd also like a:
> char huge *z;
> z = addhuge(p, unsigned long high, unsigned long low);
>
> A subhuge too.

You're beginning to really complicate things ...

> The same functions can exist in all compilers, including
> MSDOS, even if they just return NULL for huge_malloc()
> for obvious reasons. But even MSDOS can give you a
> memory block bigger than 64k, so if you request 128k
> using huge_malloc(), no worries, you'll get it.

AISI, your only real problem is with values larger than 32-bit. You
need an additional keyword to indicate the increased size, be it "long
long" for integers/pointers or just a "huge" or "far" for pointers.

--
Clinton: biter. Trump: grabber. Cuomo: groper. Biden: mauler.

Rod Pemberton

unread,
Mar 14, 2021, 5:31:45 PM3/14/21
to
On Sun, 14 Mar 2021 12:21:26 -0700 (PDT)
"muta...@gmail.com" <muta...@gmail.com> wrote:

> But how is a C program meant to update the
> segment in a portable manner? E.g. in S/380
> where you have a 64-bit segmented address,
> assuming you had 7 GiB of code.

C doesn't support segmented address pointers, but C compilers for DOS
do, i.e., huge, far, near etc, because the address range for 16-bit
x86 was too small without adding in the segment. The address range
for 32-bit is quite large without adding in the base address of the
selector for the segment. So, the segment is usually not changed, nor
added to the 32-bit offset for 32-bit x86 code. See my reply up thread
for more on this issue.

muta...@gmail.com

unread,
Mar 15, 2021, 1:04:46 AM3/15/21
to
On Monday, March 15, 2021 at 8:29:40 AM UTC+11, Rod Pemberton wrote:

> > I want the compiler to be able to generate far data pointers
> > and near code pointers.

> The C specifications don't support segmented address pointers.

C90+ will (or may).

> So, multiple calls to malloc(), e.g., two 4GiB calls, would work,
> IF AND ONLY IF,
> you can guarantee that the memory allocator allocates both blocks
> contiguously. E.g.,
>
> __set_contiguous_multiple_allocations(1);
> malloc(4GiB);
> malloc(4GiB);
> __set_contiguous_multiple_allocations(0);

Yes, you may be able to get that to work, but I think
the correct abstraction is far_malloc64().

> > I don't want to burden the compiler with a formal "long long"
> > data type.
> >
> > I want long to be 32-bits.
> >
> > I want to declare a:
> >
> > char huge *p;
> >
> > to point to my above-size_t memory block.
>
> But, in this example, you have "to burden the compiler with a formal"
> "huge" pointer type ... Same difference?

No. Most implementations will be allowed to
get away with:

#define huge
#define far_malloc(a, b) ((a == 0) ? NULL : malloc(b))

> I.e., I see an advantage to
> supporting "long long"

It's too much work to expect an MSDOS compiler
to do all that for you. You may as well ask for a
long long long long long long too. This is not the
right approach. C90 had it right, stopping at long,
but allowing that to be 64-bit or 128-bit or 256-bit
or whatever technology allows.

> but see no advantage to support "huge" or "far"
> or "near", if you don't need to do so.

The advantage is that you don't need new
instructions or registers or calling conventions
on S/370 to suddenly support accessing more
than 4 GiB of memory. You simply need to
recompile your program with an appropriate
compiler.

Quibbling aside.

> > I don't expect to be able to do a strlen() of p
> Why not?

That's precisely what size_t is for. That's what you
can support "normally". If you can support 64-bit
strlen() then set size_t to a 64-bit value.

> strlen() is just a loop that detects a zero byte/word (which usually
> maps to a nul char '\0' on most implementations, i.e., because
> they're the same size byte/word and char for most platforms).

Yes, and it will cut out at size_t and wrap.

> strlen() should work on an infinite length string.

There aren't many infinite things in this world. :-)

> > but I do expect to be able to do p++ to traverse the entire 8 GiB
> > memory block

> Same thing. No difference.

Nope. Segmented memory will wrap when the offset
reaches the maximum.
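
A huge pointer avoids that wrap because every adjustment renormalizes, moving surplus offset into the segment. A sketch, assuming a 4-bit shift (illustrative types, not any compiler's real representation):

```c
/* After normalization the offset is always < 16, so incrementing can
   never wrap it at 64 KiB the way a plain far pointer's offset would. */
struct hugeptr { unsigned int seg; unsigned int off; };

static void huge_norm(struct hugeptr *p)
{
    p->seg += p->off >> 4;  /* surplus paragraphs into the segment */
    p->off &= 0xFU;
}

static int huge_norm_selftest(void)
{
    struct hugeptr p;
    p.seg = 0xB000U;
    p.off = 0x8123U;
    huge_norm(&p);
    return p.seg == 0xB812U && p.off == 3U;
}
```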

> > perhaps looking for the character 'Q', with the segment
> > register being automatically adjusted by the compiler, at its
> > discretion.

> What?
>
> Are you saying you want another string terminator like nul '\0' for C
> but using the character 'Q'? What for? Unnecessary...

No, it was an example application. If you have a
simple application that looks for a 'Q' then you
can go while (*p != 'Q') p++;

Then you will know where 'Q' is. Don't look at me, I
don't write many applications. :-)

> > I'd like to represent a 64-bit value to be given to huge_malloc()
> > by two unsigned longs, both containing 32-bit values, even on
> > machines where longs are 128 bits.

> Instead of passing a 64-bit value into a malloc() variant, why wouldn't
> you have a malloc() variant that allocated 4KB or 64KB blocks of memory
> at a time, instead of allocating bytes of memory at a time like
> malloc()? E.g., 32-bit x (4KB per allocation). This wouldn't give you
> a 64-bit address range, but it would eliminate the need for extending
> integers or pointers, or passing in a segment etc.

The whole point is to get a 64-bit address range.
On systems that only have 32-bit registers, but
lots of memory.

> > I'd also like a:
> > char huge *z;
> > z = addhuge(p, unsigned long high, unsigned long low);
> >
> > A subhuge too.

> You're beginning to really complicate things ...

Adding a 64-bit value to a 64-bit pointer on a system
with only 32-bit registers requires a function call, or
a large macro.

It's a complicated scenario. That's why we have a
separation between near and far memory. Near
memory is simple.
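
A sketch of what addhuge() has to do: advance a 64-bit value held as two 32-bit halves using only 32-bit arithmetic, computing the carry manually (names and layout are illustrative, not S/380 reality):

```c
struct huge64 { unsigned long hi; unsigned long lo; };

static struct huge64 addhuge(struct huge64 p,
                             unsigned long high, unsigned long low)
{
    unsigned long oldlo = p.lo;
    p.lo = (p.lo + low) & 0xFFFFFFFFUL;
    /* unsigned wraparound: a carry occurred iff the sum shrank */
    p.hi = (p.hi + high + (p.lo < oldlo ? 1UL : 0UL)) & 0xFFFFFFFFUL;
    return p;
}

static int addhuge_selftest(void)
{
    struct huge64 p;
    p.hi = 0;
    p.lo = 0xFFFFFFFFUL;
    p = addhuge(p, 0, 1);  /* should carry into the high half */
    return p.hi == 1 && p.lo == 0;
}
```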

> > The same functions can exist in all compilers, including
> > MSDOS, even if they just return NULL for huge_malloc()
> > for obvious reasons. But even MSDOS can give you a
> > memory block bigger than 64k, so if you request 128k
> > using huge_malloc(), no worries, you'll get it.

> AISI, your only real problem is with values larger than 32-bit. You

Yes, 32 is already difficult for 8086 to handle. I'm not
willing to make matters worse. God only knows how
the Commodore 64 supports 32-bit longs. I haven't
reached that point yet, but it's on my journey. I want to
write a C program for the C64.

> need an additional keyword to indicate the increased size, be it "long
> long" for integers/pointers or just a "huge" or "far" for pointers.

The additional keyword huge/far is dead simple to
implement on a C64 or standard S/370. It is simply
ignored.

long long is ridiculous unless the standard allows
it to be 32-bits or 16 bits. But that completely
defeats the purpose of why it is being added.

BFN. Paul.

muta...@gmail.com

unread,
Mar 15, 2021, 1:13:15 AM3/15/21
to
On Monday, March 15, 2021 at 8:31:45 AM UTC+11, Rod Pemberton wrote:

> > But how is a C program meant to update the
> > segment in a portable manner? E.g. in S/380
> > where you have a 64-bit segmented address,
> > assuming you had 7 GiB of code.

> C doesn't support

Who died and made ISO God?

> segmented address pointers, but C compilers for DOS
> do, i.e., huge, far, near etc, because the address range for 16-bit
> x86 was too small without adding in the segment. The address range

And were they wrong to do that? No, they weren't.
I thought it was strange at the time, but no, they
were right. For extraordinary situations, use a
far pointer. E.g. memmgr.c when being built for
PDOS/86. It needs to go beyond size_t. Normal
applications can be limited to size_t, but not
extraordinary ones. I guess if you have C compiler
support you can just make everything a huge
pointer without the keyword. Maybe that is in fact
the proper approach?

But if you have compiler support in place, you can
still code the extraordinary situation (going above
size_t) and you may be able to have a 128-bit far
pointer on a 16-bit system with 16-bit normal
pointers. In fact, you could even have a situation
where the segment is shifted (128-16) bits left
to access memory way out somewhere else,
while only occupying a 16-bit segment and
16-bit offset.

> for 32-bit is quite large without adding in the base address of the
> selector for the segment. So, the segment is usually not changed, nor
> added to the 32-bit offset for 32-bit x86 code. See my reply up thread
> for more on this issue.

80386 is not my only target. 8086 is another target.

BFN. Paul.

muta...@gmail.com

unread,
Mar 15, 2021, 1:40:19 AM3/15/21
to
On Monday, March 15, 2021 at 4:04:46 PM UTC+11, muta...@gmail.com wrote:

> No. Most implementations will be allowed to
> get away with:
>
> #define huge

Also:

#define far

> #define far_malloc(a, b) ((a == 0) ? NULL : malloc(b))

Sorry, should be:

#define far_malloc(a, b) ((a != 0) ? NULL : malloc(b))

ie refuse any high 32-bit request.
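
Put together, a flat-memory implementation could satisfy the proposed macros like this (sketch only, not part of any ratified C90+; the keyword vanishes and any request with a nonzero high word is refused):

```c
#include <stdlib.h>

#define huge
#define far_malloc(hi, lo) (((hi) != 0) ? NULL : malloc(lo))

static int far_malloc_selftest(void)
{
    char huge *p = far_malloc(0, 16);  /* fits below size_t: granted */
    char huge *q = far_malloc(1, 0);   /* high word set: refused */
    int ok = p != NULL && q == NULL;
    free(p);
    return ok;
}
```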

Another thing I should add is that in the extraordinary
situation of memmgr(), it would STILL have a #define
of whether you wanted to activate all the far pointer
manipulation instead of just operating within the
limits of size_t.

BFN. Paul.

muta...@gmail.com

unread,
Mar 15, 2021, 2:03:48 AM3/15/21
to
On Monday, March 15, 2021 at 4:13:15 PM UTC+11, muta...@gmail.com wrote:

> were right. For extraordinary situations, use a
> far pointer. E.g. memmgr.c when being built for

Correction.

For extraordinary situations, use a huge pointer.

For unusual situations, such as the occasional
reference to absolute address 0xb8000, feel
free to use a far pointer.

For normal situations, just use an appropriate
memory model so that you don't need to pollute
your code with "far" crap.

Or perhaps it should be:

char ABSADDR *ptr;

And then an implementation can do either:

#define ABSADDR
or
#define ABSADDR far

> PDOS/86. It needs to go beyond size_t. Normal
> applications can be limited to size_t, but not
> extraordinary ones. I guess if you have C compiler
> support you can just make everything a huge
> pointer without the keyword. Maybe that is in fact
> the proper approach?

But even if that is the ideal approach (which is not
true if you are interested in speed - I don't really
care personally at this stage), MSDOS was around
for a very long time, but not a single C compiler
even produced magical huge pointers. Only
"Smaller C" does that, and only with 80386
instructions, and I don't think it is C90-compliant
yet.

BFN. Paul.

Alexei A. Frounze

unread,
Mar 15, 2021, 3:05:49 AM3/15/21
to
On Sunday, March 14, 2021 at 11:03:48 PM UTC-7, muta...@gmail.com wrote:
...
> But even if that is the ideal approach (which is not
> true if you are interested in speed - I don't really
> care personally at this stage), MSDOS was around
> for a very long time, but not a single C compiler
> even produced magical huge pointers. Only
> "Smaller C" does that, and only with 80386
> instructions, and I don't think it is C90-compliant
> yet.

I'm thinking of making some improvements w.r.t. compliance,
but I'm not planning to support every single thing that's in the
standard (anywhere between ANSI C and C99). For example,
I'm not going to ever support functions with what's known as
identifier-list (as opposed to parameter-type-list), which is
already absent from C2020.
VLAs is another questionable feature, which the latest standards
(again, C2020 in particular) make optional.
Complex types are OK, but low priority.
Some math functions are low priority as well.
Wide characters / Unicode is going to be incomplete too.
Likely the same with time zones, "saving" and leap seconds.

C2020 adds alignment, atomics, threads, attributes, etc. None
of that is in the plans. Though, anonymous unions are.

Alex

Rod Pemberton

unread,
Mar 15, 2021, 3:52:58 AM3/15/21
to
On Sun, 14 Mar 2021 22:04:45 -0700 (PDT)
"muta...@gmail.com" <muta...@gmail.com> wrote:

> > > I don't expect to be able to do a strlen() of p
> >
> > Why not?
>
> That's precisely what size_t is for. That's what you
> can support "normally". If you can support 64-bit
> strlen() then set size_t to a 64-bit value.
>

a) No, that's not what size_t is for. size_t is the type of the result
of the sizeof operator, which is usually "unsigned long" for ANSI C.

b) I'm sorry. I clearly made a mistake here. I don't normally think of
C string functions as having their return type limited, because it's
not usually an issue. And, I was only thinking about how the code for
strlen() generally works, not about how strlen() was declared. Yes,
you're correct that the string functions returns are limited, and
limited to size_t for ANSI C. Also, you're correct that size_t would
need to be larger to comply with the C specifications for strlen(), or
you'd need to use a different return type for strlen() or any function
that returned size_t e.g., up sized to "unsigned long long".

> > > but I do expect to be able to do p++ to traverse the entire 8 GiB
> > > memory block
>
> > Same thing. No difference.
>
> Nope. Segmented memory will wrap when the offset
> reaches the maximum.

I suspect you were getting at some other issue here, than what I'm
about to respond to below, but I suspect that I'm not getting it.

I meant that the strlen() function will use a pointer like p++ to
increment through the string to find the nul terminator.

If the pointer wraps at the offset maximum (which it will for segmented
memory) when incrementing p++, then it'll do the same within the
strlen() function, because it too increments a pointer just like p++.

So, AISI, there is no difference between "being able to do a strlen() of
p" and "able to do p++ to traverse the entire 8GiB memory block" as both
increment a pointer, both will wrap if segmented, or not wrap if not,
etc. ATM, the only way that I can figure that the two pointers both
won't wrap for segmented memory is if they're declared differently,
e.g., one is normal and one "huge" or "far".

strlen() is usually coded like so using a pointer which is being
incremented (from P.J. Plauger's "The Standard C Library"):

size_t (strlen)(const char *s)
{
    const char *sc;

    for (sc = s; *sc != '\0'; ++sc);
    return (sc - s);
}

(Let's not get into a comp.lang.c style argument over whether his
use of '\0' here instead of zero is the correct way to detect the
nul terminator for rare word-addressed machines. It's correct for
common byte-addressed machines.)

> The whole point is to get a 64-bit address range.
> On systems that only have 32-bit registers, but
> lots of memory.

It'll need to be a constructed pointer, i.e., add two values together,
or you'll need functions to separately set the segment. E.g., DJGPP
compiler (based on GCC) has a function to set the FS selector, and has
functions to do far memory accesses, e.g., farpeek, farpoke, and
another one to do memory moves between the application's C space, and
low memory which is outside of C's address space for the application.

> God only knows how the Commodore 64 supports 32-bit longs.

It's been way too long now since I coded for the C64's 6510 (6502
variant) to know how to answer that (last code around 1992 ...?). If
you'd asked me a decade ago, maybe I could give a reasonable answer...
AIR, 6502 was only 8-bit. However, the 6502 processor was more
RISC-like, i.e., few registers, accumulator based instructions, index
registers, a scratch page in low memory, and had some "powerful"
addressing modes, etc. Offhand, I don't recall anymore how the 8-bit
processor handled the 16-bit addresses. Perhaps, the addresses were
stored in memory only? ... You'd need to look it up.

> I haven't reached that point yet, but it's on my journey. I want to
> write a C program for the C64.

Okay, I thought you were potentially joking previously about the C128
etc, so I didn't really respond ... I also never used the C128.

For example, the C64 only has 64KB of memory, and half of that is
underneath the ROMs, which must be disabled to access memory underneath.
(FYI, the C64 programmer's reference manual has one of the bits reversed
for that ...) So, since it's memory limited, I don't know how you're
going to fit an OS coded in C onto it. Compression?

Also, I'm not sure if the C64 had a C compiler. I only used BASIC and
a commercial assembler package for it. Perhaps, Ron Cain's Small C is
available now? You'll probably have to track down a C64 archive on the
internet.

muta...@gmail.com

unread,
Mar 15, 2021, 6:44:58 AM3/15/21
to
On Monday, March 15, 2021 at 6:05:49 PM UTC+11, Alexei A. Frounze wrote:

> > "Smaller C" does that, and only with 80386
> > instructions, and I don't think it is C90-compliant
> > yet.

> I'm thinking of making some improvements w.r.t. compliance,
> but I'm not planning to support every single thing that's in the
> standard (anywhere between ANSI C and C99). For example,
> I'm not going to ever support functions with what's known as
> identifier-list (as opposed to parameter-type-list), which is
> already absent from C2020.

I think it would be good to change the C90 standard
to whatever YOU are actually willing to support. This
is the real limit, tested by real world exercise.

I would like to adjust to you.

> VLAs is another questionable feature, which the latest standards
> (again, C2020 in particular) make optional.
> Complex types are OK, but low priority.

Ok, I'm not sure what that actually is.

> Some math functions are low priority as well.
> Wide characters / Unicode is going to be incomplete too.
> Likely the same with time zones, "saving" and leap seconds.

Those things are the C library. I have no interest in that.

> C2020 adds alignment, atomics, threads, attributes, etc. None
> of that is in the plans. Though, anonymous unions are.

Ok, I don't know if I need any of that. I'm not sure
what they are.

Here we go ...

I downloaded your latest Smaller C from here:

https://github.com/alexfru/SmallerC

I checked the license and it looks very liberal
to me, thank you.

I found the largest .c file and compiled it with
bcc32. I was shocked that it compiled out of
the box.

But would it run?

It did, but just welcomed me with:

C:\devel\pdos\pdpclib>smlrc
Error in "" (1:1)
Input file not specified

I tried --help, -h, -?, nothing would give me usage.
I checked the source file to find out what the help
string was, and didn't find a mention of 8086 or
something like that that would have helped me.

I did of course figure out:
smlrc world.c world.s
which worked, but I can't recognize the assembler,
so didn't know what format it was producing.

I went looking for documentation and found what
looked like what I wanted - "-huge".

I wasn't that concerned about what happened with
the assembler at that point, I wanted to see if it
would choke on PDPCLIB.

It didn't like this:

-#if defined(__CMS__) || defined(__MVS__) || defined(__VSE__)
+#ifdef JUNK___ /* defined(__CMS__) || defined(__MVS__) || defined(__VSE__) */

I use that heavily, so it was disappointing. But I persevered
and found other things it choked on, such as:

-#if ('\x80' < 0)
+#if 1 || ('\x80' < 0)

+ return (0.0);
+#ifdef JUNK___
return (x >= y) ?
(x >= -y ? atan(y/x) : -pi/2 - atan(x/y))
:
(x >= -y ? pi/2 - atan(x/y)
: (y >= 0) ? pi + atan(y/x)
: -pi + atan(y/x));
+#endif

i = (long)value;
- value -= i;
+ /* value -= i; */

-#define offsetof(x, y) (size_t)&(((x *)0)->y)
+/*#define offsetof(x, y) (size_t)&(((x *)0)->y)*/

And I managed to get a clean compile.

Next thing was to see if I could get my version
of nasm to accept it. I didn't have any examples
of how to run nasm in my own code. nasm -h
gave usage, but not what I was looking for, ie
how to do an assembly only. I went back to your
documentation to see if you mentioned how to
run nasm, which you did. I tried that, but nasm
didn't like -f bin on your assembler output. But
the "-f" gave me a hint that I needed "-f" and I
was wondering why I didn't see that in the usage.

I found it in the first line:
usage: nasm [-@ response file] [-o outfile] [-f format] [-l listfile]
but that is useless. I needed something like this:
-t assemble in SciTech TASM compatible mode
They should have put "-f" before "-t" with a description.
It took a while, but I finally figured out this cryptic message:

For a list of valid output formats, use -hf.
For a list of debug formats, use -f <form> -y.

I tried nasm -hf
and got close to what I needed.
I latched on to this:
aout Linux a.out object files
and since I was familiar with a.out, I gave that a go,
but it didn't like your assembler code either.

Then I noticed:
obj MS-DOS 16-bit/32-bit OMF object files

And yay, I got what I wanted, a clean assemble!

So we're in business.

For some reason I went back to your documentation
and noticed this:

* most of the _preprocessor_. The core compiler (smlrc) relies on an
external preprocessor (smlrpp (which is a version of ucpp adapted for use
with Smaller C) or gcc).

But I couldn't find a source file with that name.
I went back to the documentation, but didn't
find that.

But I had noticed a smlrpp.md so I tried looking in
that and found:

gcc -Wall -O2 -o smlrpp -DSTAND_ALONE -DUCPP_CONFIG arith.c assert.c cpp.c eval.c lexer.c macro.c mem.c nhash.c

I tried using that exact command where I found
the source code, and it worked with no warnings
or anything.

A few attempts and I found:
smlrpp -h

This led me to try:
smlrpp -zI -I . -o math.e math.c

and I was surprised that it ran without error. I checked
math.e and it looked reasonable.

So now to put my code back.

It worked! I no longer had preprocessor problems.

Compilation errors were mainly involving floating
point, but I don't care about that.

+#if !defined(__SMALLERC__)
fpval*=10.;
+#endif

This is the only one I care about:

+#if !defined(__SMALLERC__)
tms.tm_wday = dow(tms.tm_year + 1900, mo, da);
+#endif

Why is it choking on that?

smlrc -huge -I . time.e time.s
Error in "time.c" (310:322)
Expression too long

Oh, I know why. It's a huge macro.

#define dow(y,m,d) \
((((((m)+9)%12+1)<<4)%27 + (d) + 1 + \
((y)%400+400) + ((y)%400+400)/4 - ((y)%400+400)/100 + \
(((m)<=2) ? ( \
(((((y)%4)==0) && (((y)%100)!=0)) || (((y)%400)==0)) \
? 5 : 6) : 0)) % 7)

That's not very neat to work around. I need to
provide a function instead. I have one of those
too, but is there a reason why there is a limit on
this valid C code?

I'm happy to just return random dow's for now.
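
Transliterating the macro into a function keeps the expression small enough for smlrc. Same arithmetic as the dow() macro above; the 0 = Sunday convention is inferred from the dates in this thread:

```c
static int dow_fn(int y, int m, int d)
{
    int c = y % 400 + 400; /* keep intermediate values positive */
    int adj = 0;
    if (m <= 2)
        adj = ((y % 4 == 0 && y % 100 != 0) || y % 400 == 0) ? 5 : 6;
    return ((((m + 9) % 12 + 1) << 4) % 27 + d + 1
            + c + c / 4 - c / 100 + adj) % 7;
}
```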

Now for what counts - pdos.c

smlrc -huge -I . pdos.e pdos.s
Error in "fat.h" (84:27)
Identifier table exhausted

I don't know why it's complaining about that:

/*Structure for Directory Entry */
typedef struct {
unsigned char file_name[8]; /*Short file name (0x00)*/
unsigned char file_ext[3]; /*Short file extension (0x08)*/
unsigned char file_attr; /*file attributes (0x0B)*/
unsigned char extra_attributes; /*extra attributes (0x0C)*/

I can't progress without that.

All code can be found here:

https://sourceforge.net/p/pdos/gitcode/ci/master/tree/

My Smaller C mods have been committed.

Thanks for the great product!!! Any idea about that fat.h?
I may be able to produce a heaps better PDOS/86 that
does proper memory allocation.

BFN. Paul.

Alexei A. Frounze

unread,
Mar 16, 2021, 2:29:33 AM3/16/21
to
You're not using it right. :)

If you're on Linux, you can make it. If you're on DOS/Windows,
you can just use the included binaries. You can recompile
it too with DJGPP or MinGW (OW should work as well), but
there's no make file for those, though you can just figure it
out from the Linux make file and/or documentation.

After that you should...
Well, smlrcc.md tells you how to use the compiler once it's
built or simply used out of the bin*/ directories (you should
preserve the relative directory structure for those .EXEs to
be able to find the .a libraries).

Note that the generated assembly is for NASM and is
expected to be assembled into an ELF object file irrespective
of the format of the final executable.
smlrcc.md mentions this as well, there's no mistake.
The ELF object files are to be linked using the dedicated linker,
smlrl, which comes with the compiler as well.
It knows how to handle some special sections produced for
DOS EXEs (for huge and unreal models).

Alex

muta...@gmail.com

unread,
Mar 16, 2021, 4:29:32 AM3/16/21
to
On Tuesday, March 16, 2021 at 5:26:28 PM UTC+11, Alexey F wrote:

> You're not using it right. :)

Sure, but instead of:

C:\devel\pdos\pdpclib>smlrc
Error in "" (1:1)
Input file not specified

Perhaps you could say:

C:\devel\pdos\pdpclib>smlrc
Error in "" (1:1)
Input file not specified
Sucker, you'll have to read smlrc.md


> If you're on Linux, you can make it. If you're on DOS/Windows,
> you can just use the included binaries. You can recompile
> it too with DJGPP or MinGW (OW should work as well), but
> there's no make file for those, though you can just figure it
> out from the Linux make file and/or documentation.

bcc32 (Borland C) compiles it too. Any reason I shouldn't use that?

> After that you should...
> Well, smlrcc.md tells you how to use the compiler once it's
> built or simply used out of the bin*/ directories (you should
> preserve the relative directory structure for those .EXEs to
> be able to find the .a libraries).

I don't need .a libraries. :-)

> The ELF object files are to be linked using the dedicated linker,
> smlrl, which comes with the compiler as well.

Thanks. I have that in place now too.

Next thing I need to do is write some 8086 nasm code
to replace my existing masm code for MSDOS so that
I can produce the required elf32 object code.

I have code like this:

mov ax, sp
mov cl, 4
shr ax, cl ; get sp into pages
mov bx, ss
add ax, bx
add ax, 2 ; safety margin because we've done some pushes etc
mov bx, es
sub ax, bx ; subtract the psp segment

; free initially allocated memory

mov bx, ax
mov ah, 4ah
int 21h

I need to think about the best way for Smaller C to fit
in with this.
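For reference, the arithmetic that startup code performs can be modeled in C (a sketch only; the parameter names stand for the SS, SP and ES/PSP values the assembly reads, and are not any real API):

```c
#include <assert.h>

/* Compute how many 16-byte paragraphs to keep when shrinking
   the program's allocation with INT 21h/AH=4Ah, mirroring the
   assembly above: sp scaled to paragraphs, plus the stack
   segment, plus a safety margin, minus the PSP segment. */
unsigned int paras_to_keep(unsigned int ss_seg,
                           unsigned int sp_reg,
                           unsigned int psp_seg)
{
    unsigned int paras = sp_reg >> 4;   /* shr ax,cl: sp in paragraphs */
    paras += ss_seg;                    /* add ax,bx: add stack segment */
    paras += 2;                         /* safety margin for pushes etc */
    paras -= psp_seg;                   /* subtract the PSP segment */
    return paras;
}
```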

BFN. Paul.

muta...@gmail.com

unread,
Mar 16, 2021, 5:05:46 AM3/16/21
to
I didn't actually realize you could do 32-bit memory
access in real mode using an 80386. I thought you
needed to go into protected mode.

Assuming I have an 80386+ where segment
registers are shifted 16 bits, not 4 bits, I would
like a 32-bit OS that runs 32-bit programs with
CS = 0 and 16-bit programs with CS = wherever
they have been relocated to, but not necessarily
on a 64k boundary. I would like the 16-bit modules
to be relocatable at the offset level, so that they
could e.g. have a starting location of 0x500.

I would like these 16-bit modules to also be loadable
by PDOS/86 running on standard 8086 hardware,
remaining relocatable despite the fact that
the segment registers will be shifted 4 bits, not 16.

And I would like these 16-bit modules to be loaded
on a standard 80386 where segment registers are
also shifted 4 bits.

So. Is it possible to write magical 16-bit programs?
They need to just trust that they have been appropriately
relocated and not make any assumptions about how
many bits the segment registers will be shifted.

C code doesn't generate silly assumptions like that.
You have to go out of your way to butcher the code.

BFN. Paul.

muta...@gmail.com

unread,
Mar 16, 2021, 5:55:25 AM3/16/21
to
On Tuesday, March 16, 2021 at 8:05:46 PM UTC+11, muta...@gmail.com wrote:

> C code doesn't generate silly assumptions like that.
> You have to go out of your way to butcher the code.

A magical huge pointer will need to know though.
But it can get the number of bits to add to the
segment register whenever the offset is about to
be exceeded from a global variable. Which would
be 1 for a 16-bit shift machine and 4096 for a 4-bit
shift machine.
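A sketch of that idea in C (the function name and parameters are illustrative; seg_per_64k stands for the proposed global, set by the OS at load time):

```c
#include <assert.h>

/* "Magical huge pointer" advance: when the offset would cross
   a 64KB boundary, bump the segment by a run-time value instead
   of a hardcoded constant. seg_per_64k would be 0x1000 on a
   4-bit-shift machine and 1 on a 16-bit-shift machine. */
void huge_advance(unsigned int *seg, unsigned int *off,
                  unsigned long delta, unsigned int seg_per_64k)
{
    unsigned long total = (unsigned long)*off + delta;
    *seg += (unsigned int)(total >> 16) * seg_per_64k;
    *off = (unsigned int)(total & 0xFFFFu);  /* low 16 bits */
}
```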

BFN. Paul.

muta...@gmail.com

unread,
Mar 29, 2021, 3:18:32 AM3/29/21
to
On Tuesday, March 16, 2021 at 8:55:25 PM UTC+11, muta...@gmail.com wrote:

> > C code doesn't generate silly assumptions like that.
> > You have to go out of your way to butcher the code.

> A magical huge pointer will need to know though.
> But it can get the number of bits to add to the
> segment register whenever the offset is about to
> be exceeded from a global variable. Which would
> be 1 for a 16-bit shift machine and 4096 for a 4-bit
> shift machine.

So where are we on this?

How do I avoid hardcoding the number 4 in my executables?

Including both freeing memory on startup:

mov ax, sp
mov cl, 4
shr ax, cl ; get sp into pages

and setting a global variable as to how much magical
huge pointers should add to the segment register
whenever a segment boundary is crossed.

Perhaps the first can be eliminated by producing
executables with the:

https://wiki.osdev.org/MZ

If both the minimum and maximum allocation fields are cleared, MS-DOS will attempt to load the executable as high as possible in memory.


Maybe a new INT 21H call is needed for executables
with magical huge pointers?

BFN. Paul.

muta...@gmail.com

unread,
Mar 29, 2021, 5:17:19 AM3/29/21
to
On Monday, March 29, 2021 at 6:18:32 PM UTC+11, muta...@gmail.com wrote:

> If both the minimum and maximum allocation fields are cleared, MS-DOS will attempt to load the executable as high as possible in memory.

I have built a huge executable with Smaller C and
I can see that those fields are not being cleared.

000000 4D5AA000 8C000000 02006B0E 6B0ED217 MZ........k.k...

Both x'0a' and x'0c' are set to 0E6B.
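Decoding that dump directly: the minimum and maximum allocation fields sit at offsets 0x0A and 0x0C of the MZ header, little-endian, so the bytes 6B 0E read back as 0x0E6B. A small check in C:

```c
#include <assert.h>

/* read a little-endian 16-bit field */
unsigned int le16(const unsigned char *p)
{
    return p[0] | ((unsigned int)p[1] << 8);
}

/* first 16 bytes of the dump shown above */
const unsigned char mz_hdr[16] = {
    0x4D, 0x5A, 0xA0, 0x00, 0x8C, 0x00, 0x00, 0x00,
    0x02, 0x00, 0x6B, 0x0E, 0x6B, 0x0E, 0xD2, 0x17
};
```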

> Maybe a new INT 21H call is needed for executables
> with magical huge pointers?

And I realized that Smaller C must have this embedded
in it somewhere. Because it needs to convert flat
addresses into segment:offset.
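The conversion itself is simple if you assume the usual 4-bit segment shift; one way a huge-model runtime can do it (a sketch, not Smaller C's actual code):

```c
#include <assert.h>

/* Turn a flat (linear) real-mode address into a normalized
   seg:off pair with the offset below 16, assuming segments
   are shifted 4 bits. */
void flat_to_segoff(unsigned long flat,
                    unsigned int *seg, unsigned int *off)
{
    *seg = (unsigned int)(flat >> 4);
    *off = (unsigned int)(flat & 0xFu);
}
```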

BFN. Paul.

Alexei A. Frounze

unread,
Mar 29, 2021, 10:19:25 PM3/29/21
to
On Monday, March 29, 2021 at 2:17:19 AM UTC-7, muta...@gmail.com wrote:
> I have built a huge executable with Smaller C and
> I can see that those fields are not being cleared.
>
> 000000 4D5AA000 8C000000 02006B0E 6B0ED217 MZ........k.k...
>
> Both x'0a' and x'0c' are set to 0E6B.

.bss and stack aren't stored in the .EXE but obviously need memory,
so these aren't 0...
The max isn't FFFF to make memory available to nested .EXEs.

Any problem with any of these two?

Alex

muta...@gmail.com

unread,
Mar 30, 2021, 5:29:26 AM3/30/21
to
On Tuesday, March 30, 2021 at 1:19:25 PM UTC+11, Alexei A. Frounze wrote:

> > I have built a huge executable with Smaller C and
> > I can see that those fields are not being cleared.
> >
> > 000000 4D5AA000 8C000000 02006B0E 6B0ED217 MZ........k.k...
> >
> > Both x'0a' and x'0c' are set to 0E6B.

> .bss and stack aren't stored in the .EXE but obviously need memory,
> so these aren't 0...
> The max isn't FFFF to make memory available to nested .EXEs.
>
> Any problem with any of these two?

No specific problem. I would just like to know what "best
practice" is. Do modern .exe files set both to 0 so that
the executable can be loaded into high memory?

Or is that what old .exe files do?

And either way, isn't "standard practice" for MSDOS to
make the entire memory available and then it's up to
the executable to reduce the amount of memory used?

BFN. Paul.

Scott Lurndal

unread,
Mar 30, 2021, 1:45:26 PM3/30/21
to
"muta...@gmail.com" <muta...@gmail.com> writes:
>On Tuesday, March 30, 2021 at 1:19:25 PM UTC+11, Alexei A. Frounze wrote:
>
>> > I have built a huge executable with Smaller C and
>> > I can see that those fields are not being cleared.
>> >
>> > 000000 4D5AA000 8C000000 02006B0E 6B0ED217 MZ........k.k...
>> >
>> > Both x'0a' and x'0c' are set to 0E6B.
>
>> .bss and stack aren't stored in the .EXE but obviously need memory,
>> so these aren't 0...
>> The max isn't FFFF to make memory available to nested .EXEs.
>>
>> Any problem with any of these two?
>
>No specific problem. I would just like to know what "best
>practice" is. Do modern .exe files set both to 0 so that
>the executable can be loaded into high memory?

Modern .exe files don't support segmentation and "high memory".

muta...@gmail.com

unread,
Mar 30, 2021, 2:01:24 PM3/30/21
to
On Wednesday, March 31, 2021 at 4:45:26 AM UTC+11, Scott Lurndal wrote:

> >No specific problem. I would just like to know what "best
> >practice" is. Do modern .exe files set both to 0 so that
> >the executable can be loaded into high memory?

> Modern .exe files don't support segmentation and "high memory".

I'm talking about modern MSDOS .exe files. I want to
know what the best practice was for MSDOS during
its entire life, and even beyond. And then when I
understand that, I'll see how PDOS/86 can grow from
there so that it can support relocation at the offset
level and also non-4 segment shift values.

I'm guessing there will be no choice but to create a
new executable format.

BFN. Paul.

Scott Lurndal

unread,
Mar 30, 2021, 4:11:19 PM3/30/21
to
"muta...@gmail.com" <muta...@gmail.com> writes:
>On Wednesday, March 31, 2021 at 4:45:26 AM UTC+11, Scott Lurndal wrote:
>
>> >No specific problem. I would just like to know what "best
>> >practice" is. Do modern .exe files set both to 0 so that
>> >the executable can be loaded into high memory?
>
>> Modern .exe files don't support segmentation and "high memory".
>
>I'm talking about modern MSDOS .exe files.

Now there's an oxymoron.

> I want to
>know what the best practice was for MSDOS during
>its entire life, and even beyond. And then when I
>understand that, I'll see how PDOS/86 can grow from
>there so that it can support relocation at the offset
>level and also non-4 segment shift values.
>
>I'm guessing there will be no choice but to create a
>new executable format.

You can do as you will. It is your project, after all.

There is basically no interest in MSDOS in the wider
operating system community (in fact, it is a stretch
to consider MSDOS an operating system at all).

muta...@gmail.com

unread,
Mar 30, 2021, 4:29:53 PM3/30/21
to
On Wednesday, March 31, 2021 at 7:11:19 AM UTC+11, Scott Lurndal wrote:

> > I want to
> >know what the best practice was for MSDOS during
> >its entire life, and even beyond. And then when I
> >understand that, I'll see how PDOS/86 can grow from
> >there so that it can support relocation at the offset
> >level and also non-4 segment shift values.
> >
> >I'm guessing there will be no choice but to create a
> >new executable format.

> You can do as you will. It is your project, after all.

Sure. That's not in dispute. What's in question is
whether I need a new executable format to achieve
my twin goals. And if so, what it needs to look like.
And also, whether I need a new MSDOS call. And
also what the 8086+ hardware should look like.
And also, when you add a new MSDOS interrupt,
what happens when you run a new MSDOS
executable on an old MSDOS? Is there a carry
flag set on an unknown AH/AL value when doing an
INT 21H? Or something like that?

> There is basically no interest in MSDOS in the wider
> operating system community

Perhaps there should be. Some people prepare
for the Rapture. I prepare for time warp.

Regardless, as Rick from "The Young Ones" said ...

That's all very well,
but, after years of stagnation,

TV has woken up to the need
for locally-based minority programmes,

made by amateurs
and of interest to only two people!

It's important, right?
It's now and I want to watch!

...

Did you see that?

The voice of youth!
They're still wearing flared trousers!


BFN. Paul.

Scott Lurndal

unread,
Mar 30, 2021, 4:53:22 PM3/30/21
to
"muta...@gmail.com" <muta...@gmail.com> writes:
>On Wednesday, March 31, 2021 at 7:11:19 AM UTC+11, Scott Lurndal wrote:
>
>> > I want to
>> >know what the best practice was for MSDOS during
>> >its entire life, and even beyond. And then when I
>> >understand that, I'll see how PDOS/86 can grow from
>> >there so that it can support relocation at the offset
>> >level and also non-4 segment shift values.
>> >
>> >I'm guessing there will be no choice but to create a
>> >new executable format.
>
>> You can do as you will. It is your project, after all.
>
>Sure. That's not in dispute. What's in question is
>whether I need a new executable format to achieve
>my twin goals.

Ok. Then you can either design your own, or leverage
a very successful one. One that's extensible while
maintaining backward compatibility.

https://en.wikipedia.org/wiki/Executable_and_Linkable_Format

muta...@gmail.com

unread,
Apr 8, 2021, 6:17:48 AM4/8/21
to
In the last 24 hours I found out that there already
is such a thing as a "huge" pointer. It seems the
only problem is that you need to explicitly declare
such things, instead of just giving an option to the
compiler to make all pointers huge.

So I think what I want to do is ask the gcc IA16
people to provide a new memory model. It can't
be "huge", because that just uses far pointers.

So what should the name of the new memory
model be?

Thanks. Paul.

muta...@gmail.com

unread,
May 8, 2021, 11:42:12 PM5/8/21
to
On Wednesday, March 10, 2021 at 12:53:15 PM UTC+11, Alexei A. Frounze wrote:

> > > The 16-bit models in Borland/Turbo C and Watcom C still limit object/array
> > > sizes to under 64KB and size_t is 16-bit regardless of the 16-bit memory
> > > model (however, ptrdiff_t is 32-bit in the huge model).
> > >
> > > So, even though the pointer is far/huge enough, it's not enough to transparently hide segmentation and its 64KB size limits.

> > Thanks. So what does Turbo C etc actually do in huge model
> > then? How is it different from large?

> Cumulative size of static objects is 64KB in large vs 1MB in huge.
> Basically, how many 64-KB data segments are used for your non-heap variables.

Maybe we were talking cross-purposes, but I've just
looked at the generated code from Watcom C with "-mh".

C:\devel\pdos\pdpclib\xxx8>type foo.c
int foo(char *p, unsigned long x)
{
p += x;
return (*p);
}

C:\devel\pdos\pdpclib\xxx8>type comph.bat
wcl -q -w -c -I. -mh -zl -D__MSDOS__ -fpi87 -s -zdp -zu -ecc foo.c


0000 _foo:
0000 55 push bp
0001 89 E5 mov bp,sp
0003 C4 5E 06 les bx,dword ptr 0x6[bp]
0006 8B 4E 0C mov cx,word ptr 0xc[bp]
0009 89 D8 mov ax,bx
000B 8C C2 mov dx,es
000D 8B 5E 0A mov bx,word ptr 0xa[bp]
0010 9A 00 00 00 00 call __PIA
0015 89 C3 mov bx,ax
0017 8E C2 mov es,dx
0019 26 8A 07 mov al,byte ptr es:[bx]
001C 30 E4 xor ah,ah
001E 5D pop bp
001F CB retf

That assembler code looks exactly like I want.
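The __PIA helper presumably does normalized huge-pointer addition. A guess at its behavior in C (an assumption about Watcom's runtime, not documented behavior; the result is packed seg:off to mirror the dx:ax register pair):

```c
#include <assert.h>

/* Add a 32-bit delta to a seg:off huge pointer and renormalize
   so the offset ends up below 16 (4-bit segment shift assumed).
   Returns seg in the high 16 bits, off in the low 16 bits. */
unsigned long pia(unsigned int seg, unsigned int off,
                  unsigned long delta)
{
    unsigned long linear = ((unsigned long)seg << 4) + off + delta;
    return (((linear >> 4) & 0xFFFFUL) << 16) | (linear & 0xFUL);
}
```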

All I need to do is provide my own C library, with size_t
as an unsigned long, and recompile everything as
huge memory model.

All with 8086 instructions.

BFN. Paul.