
huge memory model


Paul Edwards

Aug 22, 2020, 6:46:23 PM
It just occurred to me that maybe I was wrong
to use the large memory model for PDOS/86,
ending up with a truly horrible and wasteful use
of memory. Maybe I should have used the huge
memory model instead:

https://en.wikipedia.org/wiki/Intel_Memory_Model

In normal processing, most time is spent
running applications (which may be "large")
rather than the operating system itself, so
it shouldn't matter if PDOS/86 is a bit less
efficient.

The main kludge I did was in memory management.
In order to not have to change my memory
management routines (memmgr), I instead divide
all memory requests by 16 and suballocate
space from a 0x5000 byte block, and then scale
up to the real memory address.
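A minimal C sketch of the scaling kludge described here (the names, the bump allocator, and the 0x20000 arena base are all made up for illustration; the real memmgr differs): requests are rounded up to 16-byte units, bookkept in a 0x5000-unit arena, and the result is scaled back up to a real address.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical sketch: memmgr only tracks a 0x5000-unit arena, but
   every unit stands for 16 real bytes, so the arena can describe
   0x5000 * 16 = 0x50000 bytes of real memory. */

#define SCALE 16
static unsigned long arena_base = 0x20000UL; /* assumed start of real memory */
static size_t next_free = 0;                 /* trivial bump-allocator stand-in */

unsigned long scaled_alloc(unsigned long real_size)
{
    /* round up to a multiple of SCALE, then bookkeep in scaled units */
    size_t units = (size_t)((real_size + SCALE - 1) / SCALE);
    size_t off = next_free;
    if (off + units > 0x5000) return 0;      /* arena exhausted */
    next_free = off + units;
    /* scale back up to the real memory address */
    return arena_base + (unsigned long)off * SCALE;
}
```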

BFN. Paul.

Paul Edwards

Aug 22, 2020, 7:08:54 PM
Should size_t be "long" or "unsigned long" in
the huge memory model?

Borland only seem to make it an "unsigned int"
in all memory models.

Thanks. Paul.

wolfgang kern

Aug 23, 2020, 3:52:23 AM
I use two memory managers in my OS: one works on 1 KB chunks, similar to
and partly compatible with the Himem.sys functions, and the second covers
large blocks of any size in three granularity steps (4K, 64K, 1M).
But there is only one Get_Mem function; the requested size chooses which one to use.
__
wolfgang

Paul Edwards

Aug 29, 2020, 6:13:49 AM
If a (possibly modified) huge memory model
did normalization of all pointers before
using them, and had size_t equal to long,
it wouldn't have helped on the 80286.

Or did the 80286 have another way of
having buffers greater than 64k?

BFN. Paul.

wolfgang kern

Aug 29, 2020, 8:58:27 AM
My first 80286 (may it rest in pieces now) had a memory extension card
with 2.5 MB RAM in addition to the 640K, and games used IIRC DOS4GW or
something similar to access the whole RAM (my memory fades after nearly
four decades).

Internal memory pointers were 32 bit then anyway, either linear or CS:IP
pairs.

"memory models" are just compiler issues, I never cared for such in my OS.
__
wolfgang

Alexei A. Frounze

Aug 29, 2020, 11:56:27 AM
On Saturday, August 29, 2020 at 3:13:49 AM UTC-7, Paul Edwards wrote:
> If a (possibly modified) huge memory model
> did normalization of all pointers before
> using them, and had size_t equal to long,
> it wouldn't have helped on the 80286.

Smaller C's huge memory model operates with 32-bit
(in reality, only 20 bits are usable) physical addresses,
which are converted into segment:offset pairs just before
accessing the memory. This uses i80386 instructions and
is quite a bit of overhead. The i80286 would be much
worse.

> Or did the 80286 have another way of
> having buffers greater than 64k?

You can preinitialize the GDT and/or LDT to make segment
selectors additive (or addable?), just like in real mode.
But unlike real mode, protected mode gives you access to
all 16 MBs.

Alex

Paul Edwards

Aug 29, 2020, 6:12:00 PM
On Sunday, 30 August 2020 01:56:27 UTC+10, Alexei A. Frounze wrote:

> You can preinitialize the GDT and/or LDT to make segment
> selectors additive (or addable?), just like in real mode.
> But unlike real mode, protected mode gives you access to
> all 16 MBs.

Could you elaborate on this please? How
were the addresses addable to give a
16 MiB address space?

Thanks. Paul.

Alexei A. Frounze

Aug 30, 2020, 12:11:20 AM
The i80286 segment descriptor has a 24-bit base physical
address and a 16-bit limit (one less than the segment size
in the normal case).

The segment selector has 13 bits of address if we
ignore the 2 bits of the privilege level and the bit
that selects between the GDT and the current LDT.

So, you can fill the GDT/LDT with 8192 segment descriptors
(well, 8191 or slightly fewer because of the NULL descriptor
and some other system segments) with linearly increasing
physical base addresses and limit=65535.

16 MB / 8192 = 2048.
2048 bytes would be the minimum segment size and with 8192
of them you'd cover all 16MB.

IOW, you can have a GDT/LDT configuration, where every
segment starts on a 2KB boundary and is 64KB long.
Adjacent segments overlap.
This is much like in real mode, but with the start address
being a multiple of 2KB instead of 16 bytes.

[If you fill both the GDT and the LDT in such a manner
and use the segment selector bit that selects either
the GDT or the LDT, you'll have 14 address bits in the
selector and double the total number of the segments to
16384. With this you can lower the segment start address
from being a multiple of 2048 to a multiple of 1024.]
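That descriptor fill can be sketched in C. This is a simplified model only: a real i80286 descriptor packs base, limit, and access rights into an 8-byte entry in a specific layout, which is ignored here in favor of a plain struct.

```c
#include <assert.h>

/* Simplified model of the scheme above: descriptor i gets
   base = i * 2048 and limit = 0xFFFF, so selector value i*8
   names a 64KB window starting at the i-th 2KB boundary.
   Adjacent windows overlap, much like real-mode segments. */

struct desc { unsigned long base; unsigned int limit; };

#define NDESC 8192

void fill_descriptors(struct desc *gdt)
{
    unsigned int i;
    for (i = 1; i < NDESC; i++) {   /* slot 0 stays the NULL descriptor */
        gdt[i].base  = (unsigned long)i * 2048UL;
        gdt[i].limit = 0xFFFF;      /* 64KB segment (limit = size - 1) */
    }
}
```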

And then your normalizing pointer increment routine
would be something like:

; in/out: ds:bx the logical address;
; out bx is normalized to be less than 2048
; in: dx:cx the byte offset to add
; to the logical address
; destroyed: dx:cx
add bx, cx
adc dl, 0
mov dh, dl
mov dl, bh
and dl, 11111000b
and bh, 00000111b
mov cx, ds
add dx, cx
mov ds, dx
ret

With this normalization you can handle up to 62KB
worth of data without crossing a segment boundary,
which is handy for copying or searching in large
blocks of memory.

It would be performance overkill to use this routine
often, e.g. on every byte, word, qword or tword
access, but it will work every time.

So, given a 24-bit physical address in a pair of
16-bit registers, you can always convert it into
a segment:offset pair to access memory through
the properly configured segments.

Borland Pascal 7 used this scheme in protected mode
and DPMI and Windows both supported it.
There was this global variable in BP7, SelectorInc,
which you could add to a segment selector to move
by 64KB in the linear/physical address space and
thus access objects larger than 64KB.
The configuration that I describe corresponds to
SelectorInc = 256.
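As a quick sanity check of that SelectorInc value, the arithmetic can be written out directly (granularity here means the spacing between consecutive descriptor bases):

```c
#include <assert.h>

/* Moving 64KB in linear space crosses (65536 / granularity)
   descriptors; selectors step by 8, so the selector delta is
   (65536 / granularity) * 8.  With 2KB granularity that is 256,
   matching Borland Pascal 7's SelectorInc. */
unsigned int selector_inc(unsigned long granularity)
{
    return (unsigned int)((65536UL / granularity) * 8UL);
}
```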

Alex

Paul Edwards

Aug 30, 2020, 1:58:42 AM
On Sunday, 30 August 2020 14:11:20 UTC+10, Alexei A. Frounze wrote:

> With this normalization you can handle up to 62KB
> worth of data without crossing a segment boundary,

Thanks for the explanation Alex. I had
expected the answer to be that the
segment is shifted by 8 bits instead
of 4 bits.

BFN. Paul.

Alexei A. Frounze

Aug 30, 2020, 4:47:51 AM
To expand on how to do this:
; in: dh:ax is the 24-bit physical address
; out: ds:bx is the corresponding logical address
; using the GDT at privilege level 0
; destroyed: dl
; limitation: because of the NULL descriptor, this
; cannot cover physical addresses
; between 0 and 2KB
mov bx, ax
mov dl, bh
and dl, 11111000b
and bh, 00000111b
mov ds, dx
ret

Interestingly, this is just one instruction longer
than what Smaller C uses in the huge model in real mode
with i80386 instructions and 32-bit registers.
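For reference, a C model of the same conversion, under the 2KB-granular GDT described earlier in the thread (the NULL-descriptor limitation for physical addresses below 2KB applies here just as in the assembly):

```c
#include <assert.h>

/* With descriptors on 2KB boundaries and 8-byte selector steps,
   the selector is just physical-address bits 11..23 shifted into
   place, and the offset is the low 11 bits. */
void phys_to_logical(unsigned long phys,
                     unsigned int *sel, unsigned int *off)
{
    *sel = (unsigned int)((phys >> 8) & 0xFFF8UL); /* == (phys >> 11) << 3 */
    *off = (unsigned int)(phys & 0x07FFUL);
}
```

Recovering the physical address from selector and offset ((sel >> 3) * 2048 + off) gives back the input, which is what makes the selectors "additive".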

muta...@gmail.com

Mar 9, 2021, 3:55:15 AM
On Sunday, August 30, 2020 at 2:11:20 PM UTC+10, Alexei A. Frounze wrote:

> And then your normalizing pointer increment routine
> would be something like:
>
> ; in/out: ds:bx the logical address;
> ; out bx is normalized to be less than 2048
> ; in: dx:cx the byte offset to add
> ; to the logical address
> ; destroyed: dx:cx
> add bx, cx

> It would be a perf overkill do use this routine
> often, e.g. on every byte, word, qword or tword
> access, but it'll work every time.

I'd like to revisit this issue.

I don't mind the performance issue. I just want my
programs to behave correctly.

I like the idea of a subroutine being called every time
a seg:off is added/subtracted to, producing a normalized
pointer. I want the normalized pointer to still be a proper
seg:off though, I don't want it to be a linear address. I
just want the offset reduced to under 16.
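What "offset reduced to under 16" means in real mode can be sketched with a hypothetical helper: fold everything but the low 4 bits of the offset into the segment, so seg:off remains a proper segmented pointer to the same linear address.

```c
#include <assert.h>

/* Hypothetical real-mode normalization: segments advance in
   16-byte paragraphs, so move (off >> 4) paragraphs into the
   segment and keep only the low 4 bits in the offset. */
void normalize(unsigned int *seg, unsigned long *off)
{
    *seg = (unsigned int)(*seg + (*off >> 4));
    *off &= 0xFUL;
}

/* the linear address seg*16 + off is unchanged by normalize() */
unsigned long linear(unsigned int seg, unsigned long off)
{
    return (unsigned long)seg * 16UL + off;
}
```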

And I would like a different executable and different
subroutine to handle the 80286 using your above scheme
of mapping the 16 MiB address space.

It occurs to me that if we are using subroutines
and 32-bit pointers, this should just be a target of
GCC. I'm not sure whether GCC can cope with "int"
being 16-bit. Other targets of GCC 3.2.3 seem to
have 16-bit "int".

But I expect size_t to be 32-bit.

I don't really care if "int" is 16-bit or 32-bit.

I would like to avoid using the "near" and "far" keywords.
I don't mind different memory models, but my focus is
on "huge" so that I can have a large size_t, as is appropriate
for a machine with more than 64k of memory.

I found the ia16.md here:
i16gcc.zip
i16gcc\SOURCE\I16GCC\I16GCC.DIF

I don't mind at all that people want to use tricks to reduce
space usage by making pointers 16-bits instead of 32-bits.
Or 32-bits instead of 64-bits. I just don't want those differences
to be visible in C code. Even in the C runtime library I really
expect that to be relegated to some "glue" when interacting
with the OS, not in open C code.

I don't mind the segmented pointers either, so long as
when you need to pay the price for that, you pay the
price, which is a lot of subroutine calls. As opposed to
being stuck with a size_t of 16 bits.

I'm wondering how much work is required to convert
ia16.md into handling huge memory model with 32-bit
size_t, if I'm willing to simply generate lots of subroutine
calls.

Anyone have any idea? I would like to contact the author
after I have some idea what is involved. Maybe subroutines
would only take 200 lines of code changes to the ia16
target.

Thanks. Paul.

muta...@gmail.com

Mar 9, 2021, 4:14:37 AM
On Tuesday, March 9, 2021 at 7:55:15 PM UTC+11, muta...@gmail.com wrote:

> I don't mind at all that people want to use tricks to reduce
> space usage by making pointers 16-bits instead of 32-bits.
> Or 32-bits instead of 64-bits. I just don't want those differences
> to be visible in C code. Even in the C runtime library I really
> expect that to be relegated to some "glue" when interacting
> with the OS, not in open C code.

I think a small memory model program needs to be
given a 16-bit pointer to the OS API, plus a 16-bit
function to call whenever it wants to call a routine
contained within the OS API. ie these things should
be within the module's "address space".

It is the responsibility of the 32-bit OS to look after its
16-bit executables.

I don't need MSDOS compatibility.

And similarly, a 64-bit OS needs to be aware of its
32-bit executables, and make sure the OS API is
visible to it.

BFN. Paul.

muta...@gmail.com

Mar 9, 2021, 4:40:11 AM
On Tuesday, March 9, 2021 at 8:14:37 PM UTC+11, muta...@gmail.com wrote:

> I think a small memory model program needs to be
> given a 16-bit pointer to the OS API, plus a 16-bit
> function to call whenever it wants to call a routine
> contained within the OS API. ie these things should
> be within the module's "address space".

Actually, I think the OS API should just be given
as a series of integers, as the OS API could be
16-bit, 32-bit or 64-bit. The OS should know from
the number what the application is trying to do,
and take care of calling the desired API correctly.
Even if it is a 16-bit tiny memory model program
calling a 64-bit OS_DISK_READ() function.

BFN. Paul.

muta...@gmail.com

Mar 9, 2021, 5:45:06 AM
Also we might be on a Commodore 128. I don't know
how that works, but presumably when you do an OS
call, it may actually switch to the other bank to process
it.

The goal is to determine how to write an operating
system in C that can run anywhere, with the assumption
that the OS actually fits into that space.

Actually we may have the same issue with an 80386
with 8 GiB of memory. It is the same model as the
Commodore 128. The hardware would presumably
provide a mechanism to do bank-switching.

The Z80 does that too I think.

BFN. Paul.

muta...@gmail.com

Mar 9, 2021, 1:15:05 PM
On Tuesday, March 9, 2021 at 8:40:11 PM UTC+11, muta...@gmail.com wrote:

> Actually, I think the OS API should just be given
> as a series of integers, as the OS API could be
> 16-bit, 32-bit or 64-bit. The OS should know from
> the number what the application is trying to do,
> and take care of calling the desired API correctly.
> Even if it is a 16-bit tiny memory model program
> calling a 64-bit OS_DISK_READ() function.

I think fread() should be implemented in the C
library as:

__osfunc(__OS_FREAD, ptr, size, nmemb, stream);

BFN. Paul.

muta...@gmail.com

Mar 9, 2021, 1:28:58 PM
On Wednesday, March 10, 2021 at 5:15:05 AM UTC+11, muta...@gmail.com wrote:

> I think fread() should be implemented in the C
> library as:
>
> __osfunc(__OS_FREAD, ptr, size, nmemb, stream);

Which would eventually result in an fread() function
in the OS being executed, likely still in user mode (or,
for a more complicated example, fscanf(), same story),
until it is rationalized into a call to PosReadFile() (an
internal-use-only function), which in a decent operating
system will do an interrupt, but that is left to the
discretion of the OS vendor.

So you're now probably in supervisor mode, and you
need to implement PosReadFile(). The OS is the only
thing that knows what files look like on a FAT-16, so
it retrieves the required sectors by doing its own call
to fread(), this time it is a BIOS call, like this:

__biosfunc(__BIOS_FREAD, ptr, size, nmemb, stream);

which once again gets translated into a call to the
BIOS fread() function (or similar, could be fscanf()),
which in turn gets rationalized into a call to BosReadFile()
which treats the entire disk as a file, and retrieves the
requested data.

BosReadFile() will possibly be translated into an
interrupt which puts the CPU into BIOS state instead
of supervisor state.

There is probably no reason to have a separate
__biosfunc and __osfunc or PosReadFile and
BosReadFile.

So perhaps __func() and __ReadFile().

As I said, these are internal functions. Everyone is
required to execute the C90 functions as the only
official interface.

BFN. Paul.

muta...@gmail.com

Mar 9, 2021, 5:42:51 PM
On Tuesday, March 9, 2021 at 7:55:15 PM UTC+11, muta...@gmail.com wrote:

> I like the idea of a subroutine being called every time
> a seg:off is added/subtracted to, producing a normalized
> pointer. I want the normalized pointer to still be a proper
> seg:off though, I don't want it to be a linear address. I
> just want the offset reduced to under 16.

Actually, what do existing compilers like TCC
and Watcom do when producing huge memory
model MSDOS executables?

If they're already doing what I need, maybe it is
as simple as changing size_t from unsigned int
to unsigned long in PDPCLIB and then build
with the existing huge memory model capability
of these existing compilers???

Thanks. Paul.

muta...@gmail.com

Mar 9, 2021, 7:45:48 PM
On Wednesday, March 10, 2021 at 5:28:58 AM UTC+11, muta...@gmail.com wrote:

> __biosfunc(__BIOS_FREAD, ptr, size, nmemb, stream);

I guess the existing int86 calls people use could
remain, and just translate that into a BIOS function
code.

And if we're running under an emulator, or have
a S/390 co-processor attached to our 80386
box, we should be able to switch processors,
not just memory models.

BFN. Paul.

muta...@gmail.com

Mar 9, 2021, 7:51:45 PM
On Wednesday, March 10, 2021 at 11:45:48 AM UTC+11, muta...@gmail.com wrote:

> And if we're running under an emulator, or have
> a S/390 co-processor attached to our 80386
> box, we should be able to switch processors,
> not just memory models.

The same process that is used to switch to a
coprocessor can be used to switch to real mode,
when the 16-bit program in question is using
real mode instructions instead of 80386 protected
mode instructions (assuming you can write 16-bit
programs using an 80386). The distinction between
16-bit and 32-bit is blurry to me anyway. Are we
talking about int, data pointers, code pointers, or
biggest register? What if there is a 32-bit register
but no-one uses it because it's really slow? Does it
start or stop being 32-bit?

BFN. Paul.

muta...@gmail.com

Mar 9, 2021, 7:59:11 PM
On Wednesday, March 10, 2021 at 11:45:48 AM UTC+11, muta...@gmail.com wrote:

> And if we're running under an emulator, or have
> a S/390 co-processor attached to our 80386
> box, we should be able to switch processors,
> not just memory models.

All the emulators could be combined, so that we
have a computer with a massive number of
co-processors, and can run executables from
anywhere.

Assuming they're all written in C90 and are following
the agreed-upon OS convention.

Or maybe interrupts can be intercepted and converted
into the agreed convention.

BFN. Paul.

Alexei A. Frounze

Mar 9, 2021, 8:26:40 PM
On Tuesday, March 9, 2021 at 2:42:51 PM UTC-8, muta...@gmail.com wrote:
> On Tuesday, March 9, 2021 at 7:55:15 PM UTC+11, muta...@gmail.com wrote:
> > I like the idea of a subroutine being called every time
> > a seg:off is added/subtracted to, producing a normalized
> > pointer. I want the normalized pointer to still be a proper
> > seg:off though, I don't want it to be a linear address. I
> > just want the offset reduced to under 16.
> Actually, what do existing compilers like TCC
> and Watcom do when producing huge memory
> model MSDOS executables?

The 16-bit models in Borland/Turbo C and Watcom C still limit object/array sizes to under 64KB and size_t is 16-bit regardless of the 16-bit memory model (however, ptrdiff_t is 32-bit in the huge model).

So, even though the pointer is far/huge enough, it's not enough to transparently hide segmentation and its 64KB size limits.

My compiler supports arrays/objects larger than 64KB in its huge model. But it uses 80386 registers and instructions.

Alex

muta...@gmail.com

Mar 9, 2021, 8:27:19 PM
On Wednesday, March 10, 2021 at 11:59:11 AM UTC+11, muta...@gmail.com wrote:

> All the emulators could be combined, so that we
> have a computer with a massive number of
> co-processors, and can run executables from
> anywhere.
>
> Assuming they're all written in C90 and are following
> the agreed-upon OS convention.

You can assume that all the executables are either
ASCII or EBCDIC, ie consistent, the same as the
data on the hard disk.

You can take a USB stick wherever you want, and
the speed will depend on what coprocessors are
present.

The OS could automatically select the right executables
for the current hardware.

If you have the right hardware, programs will run at
native speed.

When a 68000 executable requests a service from an
80386 OS (even via the agreed mechanism), it is not
just the integer size that needs to be taken into account,
but in this case, the endianness.

BFN. Paul.

muta...@gmail.com

Mar 9, 2021, 8:31:05 PM
On Wednesday, March 10, 2021 at 12:26:40 PM UTC+11, Alexei A. Frounze wrote:

> The 16-bit models in Borland/Turbo C and Watcom C still limit object/array
> sizes to under 64KB and size_t is 16-bit regardless of the 16-bit memory
> model (however, ptrdiff_t is 32-bit in the huge model).
>
> So, even though the pointer is far/huge enough, it's not enough to transparently hide segmentation and its 64KB size limits.

Thanks. So what does Turbo C etc actually do in huge model
then? How is it different from large?

Thanks. Paul.

Alexei A. Frounze

Mar 9, 2021, 8:53:15 PM
Cumulative size of static objects is 64KB in large vs 1MB in huge.
Basically, how many 64-KB data segments are used for your non-heap variables.

Alex

muta...@gmail.com

Mar 9, 2021, 11:20:53 PM
I don't think this is the correct approach for the BIOS. The
BIOS necessarily behaves differently, such as providing
extra services to help the boot sector to load the next
sector. In addition, the OS can't have:

fopen("\\0x190", "r+b");
(to get the BIOS to open device x'190' which it recognizes)
as well as
fopen("config.sys", "r");
(to help the OS itself to read files)

Only the OS has this problem of not being able to
distinguish which is which, and needing different
calls. And the FILE pointer may be different too,
with different buffer sizes for example.

I think BIOS calls need to be done as bios->fread()
and for the file pointer you pass a void * which was
returned from bios->fopen, except for bios->stdout,
bios->stdin, bios->stderr, bios->bootdev which are
all provided (and pre-opened) before the boot sector
is given control.

So that means the OS functions are free to be defined
as an enum, ie OS_FOPEN, OS_FREAD etc, and then the
C library function fopen() will be defined as one line:
(*__osfunc)(OS_FOPEN, ptr, size, nmemb, stream);
Where __osfunc is a function pointer provided by the OS.

A simple implementation can simply use the enum as
an index into a structure. Hmmm, maybe __osfunc needs
to know where to find its data block too, so maybe the
first parameter should be __osdata, also given to the
executable when it is invoked.
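A toy model of this enum-plus-function-pointer scheme. The names __osfunc, __osdata and OS_FOPEN echo the proposal above; the dispatch body and the call counters are invented purely for illustration, not how PDOS actually does it.

```c
#include <assert.h>
#include <stdarg.h>

/* The OS hands the application one function pointer (__osfunc) plus
   an opaque data block (__osdata); every C library call funnels
   through it with an enum naming the requested service. */
enum osfunc { OS_FOPEN, OS_FCLOSE, OS_FREAD };

typedef int (*osfunc_t)(void *osdata, enum osfunc fn, va_list args);

/* stand-in OS implementation: just count calls per service */
static int counters[3];

static int demo_osfunc(void *osdata, enum osfunc fn, va_list args)
{
    (void)args;
    ((int *)osdata)[fn]++;
    return 0;
}

static osfunc_t __osfunc = demo_osfunc;
static void *__osdata = counters;

/* what the C library side of fopen()/fread() might reduce to */
int call_os(enum osfunc fn, ...)
{
    va_list args;
    int rc;
    va_start(args, fn);
    rc = __osfunc(__osdata, fn, args);
    va_end(args);
    return rc;
}
```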

In the OS itself, which has to actually provide a real
implementation of fread/fscanf, it will eventually
resolve into a call to something like PosReadFile()
(that name is never exposed, so not important),
which is where an interrupt may be generated,
depending on how advanced the implementation is.

BFN. Paul.

muta...@gmail.com

Mar 10, 2021, 12:16:06 AM
On Wednesday, March 10, 2021 at 3:20:53 PM UTC+11, muta...@gmail.com wrote:

> C library function fopen() will be defined as one line:
> (*__osfunc)(OS_FOPEN, ptr, size, nmemb, stream);
> Where __osfunc is a function pointer provided by the OS.

It occurs to me that all my static global variables in
stdio.c will need to have space allocated for them
in the executable (or at least via malloc) rather than
using ones that are defined in the OS executable.

And when the OS is running, it also needs its own
copy of those variables, unless we attempt to
compile twice.

I have variables like this:

static int inreopen = 0;

FILE *__userFiles[__NFILE];

I'm guessing that all of these static variables need to
be put into a new structure, and I can put those structures
into stdio.h, string.h (at least for use by strtok) so long as
I call the structure __SOMETHING and then the startup
code needs to do a malloc for all of these structures,
possibly a single malloc, and the OS needs to preserve
this address (it has access to the C library's global
variables) whenever it does a system().

It's probably better to have a __stdio, __string etc global
variable (all saved and restored over a system() call)
rather than requiring stdio.c to be aware of all the other
things like string.c.

Although at the time an application's call to fopen()
resolves to an OS call of PosOpenFile(), we really
want the OS to get its version of the global variables
restored, in case it decides to call fopen() itself to
look at a permissions.txt or something to see whether
it is willing to satisfy the PosOpenFile() request or not.

So it looks like I will need some sort of header file
such as __start.h which includes all the other header
files to get their structures, and defines a single global
variable that can then be accessed by everyone else,
so the OS can save the previous state on every
Pos*() call. Saving the state is basically part of doing
a switch from application to OS.

BFN. Paul.

muta...@gmail.com

Mar 10, 2021, 12:48:07 AM
On Wednesday, March 10, 2021 at 4:16:06 PM UTC+11, muta...@gmail.com wrote:

> It occurs to me that all my static global variables in
> stdio.c will need to have space allocated for them
> in the executable (or at least via malloc) rather than
> using ones that are defined in the OS executable.

I know what this is called - making it naturally reentrant.

BFN. Paul.

Rod Pemberton

Mar 10, 2021, 3:15:14 AM
On Tue, 9 Mar 2021 21:16:05 -0800 (PST)
"muta...@gmail.com" <muta...@gmail.com> wrote:

> I call the structure __SOMETHING and then the startup
> code needs to do a malloc for all of these structures,
> possibly a single malloc, and the OS needs to preserve
> this address (it has access to the C library's global
> variables) whenever it does a system().

If malloc() isn't available yet, you might want to code
another function for your OS like alloca() or sbrk().

--
Diplomacy with dictators simply doesn't work.

muta...@gmail.com

Mar 10, 2021, 7:58:33 AM
On Wednesday, March 10, 2021 at 12:53:15 PM UTC+11, Alexei A. Frounze wrote:

> > Thanks. So what does Turbo C etc actually do in huge model
> > then? How is it different from large?

> Cumulative size of static objects is 64KB in large vs 1MB in huge.
> Basically, how many 64-KB data segments are used for your non-heap variables.

I have no use for that, at least at the moment. So I
shouldn't be asking that GCC IA16 person for huge
memory model, I should be asking him for large
memory model, but with data pointers able to cope
with 32-bit integers added to them, with normalization?

BTW, another thing I realized was that with my new
minimal BIOS and a 32-bit OS, I could have a new
computer built (ie, in Bochs!) that has the entire
1 MiB free instead of being limited to 640k.

No graphics etc of course, but I'm not trying to
support that, I'm only after text, including ANSI
escape codes, to be sent through to the BIOS.

BFN. Paul.

muta...@gmail.com

Mar 14, 2021, 8:25:07 AM
On Wednesday, March 10, 2021 at 12:26:40 PM UTC+11, Alexei A. Frounze wrote:

> The 16-bit models in Borland/Turbo C and Watcom C still limit object/array sizes to under 64KB and size_t is 16-bit regardless of the 16-bit memory model (however, ptrdiff_t is 32-bit in the huge model).
>
> So, even though the pointer is far/huge enough, it's not enough to transparently hide segmentation and its 64KB size limits.
>
> My compiler supports arrays/objects larger than 64KB in its huge model. But it uses 80386 registers and instructions.

I've been giving this more thought, and I'd like to abstract
the problem before inquiring about changes to the C90
standard to create a C90+ standard.

If I have a S/3X0 that only has 32-bit registers available,
I'd like to change the machine to reuse 3 of the 16 registers
as segment registers.

So there will be 32:32 far pointers available.

I'm not sure what range of memory that should cover, but
let's say 64 GiB. (any suggestion?).

I want the compiler to be able to generate far data pointers
and near code pointers.

I want to be able to allocate 8 GiB of memory, even though
size_t can only express 4 GiB. I need a different function, not malloc().

I don't want to burden the compiler with a formal "long long"
data type.

I want long to be 32-bits.

I want to declare a:

char huge *p;

to point to my above-size_t memory block.

I don't expect to be able to do a strlen() of p, but I do expect
to be able to do p++ to traverse the entire 8 GiB memory
block, perhaps looking for the character 'Q', with the segment
register being automatically adjusted by the compiler, at its
discretion.

I'd like to represent a 64-bit value to be given to huge_malloc()
by two unsigned longs, both containing 32-bit values, even on
machines where longs are 128 bits.

I'd also like a:
char huge *z;
z = addhuge(p, unsigned long high, unsigned long low);

A subhuge too.

The same functions can exist in all compilers, including
MSDOS, even if they just return NULL for huge_malloc()
for obvious reasons. But even MSDOS can give you a
memory block bigger than 64k, so if you request 128k
using huge_malloc(), no worries, you'll get it.
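On a flat-memory platform the proposed addhuge() degenerates to plain pointer addition. A hedged sketch, assuming each unsigned long half carries at most 32 significant bits as proposed above (on a segmented target this would instead adjust the segment part):

```c
#include <assert.h>

/* Sketch of addhuge() where "huge" pointers are ordinary pointers
   and the 64-bit displacement arrives as two 32-bit halves. */
char *addhuge(char *p, unsigned long high, unsigned long low)
{
    /* shift in two steps so this stays legal C90 even where
       unsigned long is only 32 bits (high must then be 0) */
    return p + (((high << 16) << 16) + low);
}
```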

I think:
char far *
has a different meaning. That is a once-off segmented
reference, but still restricted to size_t ie 4 GiB.

Any thoughts?

Thanks. Paul.

muta...@gmail.com

Mar 14, 2021, 8:56:02 AM
On Sunday, March 14, 2021 at 11:25:07 PM UTC+11, muta...@gmail.com wrote:

> I'm not sure what range of memory that should cover, but
> let's say 64 GiB. (any suggestion?).

What about if the segment register were to shift
left the full 32-bits?

This is all Windows virtual memory anyway.

Real hardware with non-virtual memory could use
a more realistic shift value.

BFN. Paul.

muta...@gmail.com

Mar 14, 2021, 10:04:16 AM
On Sunday, March 14, 2021 at 11:25:07 PM UTC+11, muta...@gmail.com wrote:

> char huge *p;
>
> to point to my above-size_t memory block.
>
> I don't expect to be able to do a strlen() of p, but I do expect
> to be able to do p++ to traverse the entire 8 GiB memory
> block, perhaps looking for the character 'Q', with the segment
> register being automatically adjusted by the compiler, at its
> discretion.

Rather than rely on the compiler, how about:

p = HUGE_ADDINT(p, 5);
p = HUGE_ADDLONG(p, 7);

etc

and if you have a magical compiler, that just translates to
p = p + 5;
Without a magical compiler, you do a function call or
whatever.
If you are on a platform without segmented memory,
it translates to:
p = p + 5;
also.

BFN. Paul.

muta...@gmail.com

Mar 14, 2021, 11:38:06 AM
On Sunday, March 14, 2021 at 11:56:02 PM UTC+11, muta...@gmail.com wrote:

> What about if the segment register were to shift
> left the full 32-bits?

How about the 8086 be adjustable so that it can
shift the full 16-bits? Software could have been
written to hedge against the shift value rather
than assuming that "4" was set in stone.
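Hedging against the shift value would amount to parameterizing address formation, something like this sketch (shift = 4 reproduces the 8086's actual behavior):

```c
#include <assert.h>

/* Segment-to-linear translation with the shift amount as a
   parameter instead of a hardwired 4. */
unsigned long make_linear(unsigned int seg, unsigned int off,
                          unsigned int shift)
{
    return ((unsigned long)seg << shift) + off;
}
```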

BFN. Paul.

muta...@gmail.com

Mar 14, 2021, 12:09:25 PM
On Monday, March 15, 2021 at 2:38:06 AM UTC+11, muta...@gmail.com wrote:

> How about the 8086 be adjustable so that it can
> shift the full 16-bits? Software could have been
> written to hedge against the shift value rather
> than assuming that "4" was set in stone.

And INT 21H function 48H should have returned
a far pointer instead of just the segment.

That dooms every allocation to be done on a 64k
boundary if you wish to shift 16 bits.

BFN. Paul.

muta...@gmail.com

Mar 14, 2021, 12:14:22 PM
On Monday, March 15, 2021 at 3:09:25 AM UTC+11, muta...@gmail.com wrote:

> And INT 21H function 48H should have returned
> a far pointer instead of just the segment.
>
> That dooms every allocation to be done on a 64k
> boundary if you wish to shift 16 bits.

Regardless, it just so happens that I have 4 GiB
of memory anyway, so if allocations using the
old API go on a 64k boundary, so be it.

Can I make PDOS/86 handle a shift of either 4
bits or 16 bits, as per some setting in the FAT
boot sector? Or some BIOS call? I'm guessing
the BIOS won't be able to cope with that? How
many places is "4" hardcoded?

BFN. Paul.

muta...@gmail.com

Mar 14, 2021, 12:41:30 PM
On Monday, March 15, 2021 at 3:14:22 AM UTC+11, muta...@gmail.com wrote:

> Can I make PDOS/86 handle a shift of either 4
> bits or 16 bits, as per some setting in the FAT
> boot sector? Or some BIOS call? I'm guessing
> the BIOS won't be able to cope with that? How
> many places is "4" hardcoded?

Can PDOS/86 run GCC 3.2.3 which is a 3 MB
executable? Assuming the full 4 GiB address
space is available.

BFN. Paul.

muta...@gmail.com

Mar 14, 2021, 1:20:34 PM
On Monday, March 15, 2021 at 3:41:30 AM UTC+11, muta...@gmail.com wrote:

> Can PDOS/86 run GCC 3.2.3 which is a 3 MB
> executable? Assuming the full 4 GiB address
> space is available.

Once everyone has recompiled their programs
to use large memory model, can 32-bit flat
pointers co-exist with segmented memory
doing 16 bit shifts?

BFN. Paul.

muta...@gmail.com

Mar 14, 2021, 1:32:58 PM
On Monday, March 15, 2021 at 4:20:34 AM UTC+11, muta...@gmail.com wrote:

> Once everyone has recompiled their programs
> to use large memory model, can 32-bit flat
> pointers co-exist with segmented memory
> doing 16 bit shifts?

At that stage, they WILL be flat pointers. What is
necessary is for the 32-bit instructions to not use
CS and DS and ES. Just pretend they are set to 0.
No need to convert the tiny/small/compact/medium
memory model programs to large/huge at all.

I think.

BFN. Paul.

muta...@gmail.com

unread,
Mar 14, 2021, 1:45:36 PM
to
On Monday, March 15, 2021 at 4:32:58 AM UTC+11, muta...@gmail.com wrote:

> At that stage, they WILL be flat pointers. What is
> necessary is for the 32-bit instructions to not use
> CS and DS and ES. Just pretend they are set to 0.
> No need to convert the tiny/small/compact/medium
> memory model programs to large/huge at all.

Perhaps what was needed was a "load absolute"
instruction.

LABS ds:bx,0xb8000

It will do the required bit shifting. No-one ever needs to know.
And a STABS too of course.

BFN. Paul.

muta...@gmail.com

unread,
Mar 14, 2021, 1:53:01 PM
to
On Monday, March 15, 2021 at 4:45:36 AM UTC+11, muta...@gmail.com wrote:

> It will do the required bit shifting. No-one ever needs to know.
> And a STABS too of course.

A STABS instruction on the S/380 will require 2 32-bit
longs. What if we have 128-bit addresses and 32-bit
longs? How much is enough? I guess you'll be forced
to recompile when that happens?

BFN. Paul.

muta...@gmail.com

unread,
Mar 14, 2021, 3:21:27 PM
to
On Monday, March 15, 2021 at 4:45:36 AM UTC+11, muta...@gmail.com wrote:

> LABS ds:bx,0xb8000
>
> It will do the required bit shifting. No-one ever needs to know.
> And a STABS too of course.

You need to update segments when loading a
medium/large/huge MSDOS executable, so you
need to know what ds is currently, and you need
to be on a segment boundary too, which may be
an expensive 64k if you are doing a 16-bit shift
back in the old days.

But how is a C program meant to update the
segment in a portable manner? E.g. in S/380
where you have a 64-bit segmented address,
assuming you had 7 GiB of code.

You will have segments of 0 and 1 in the executable.

Let's say you load to a 16 GiB location. You will be
aware of that absolute location, as you needed to
do a 64-bit addition (for the 7 GiB) without using
64-bit registers (which you don't have). So it would
have been some HUGE_ADDLONG() calls, presumably
in 2 GiB chunks, as you can't do 4 GiB.

There is a similar issue I faced with PDOS/86. I
can use 64k clusters for FAT-16 but I can't actually
read that amount. It's 1 byte too big to represent in
a 16-bit integer.

Not sure.

BFN. Paul.

muta...@gmail.com

unread,
Mar 14, 2021, 3:29:46 PM
to
On Monday, March 15, 2021 at 4:53:01 AM UTC+11, muta...@gmail.com wrote:

> A STABS instruction on the S/380 will require 2 32-bit
> longs. What if we have 128-bit addresses and 32-bit
> longs? How much is enough? I guess you'll be forced
> to recompile when that happens?

How about a union of a long and a void *, to ensure it
is aligned for both, and then inspect sizeof(void *)
and sizeof(long) before analyzing it? Maybe it is a
job for unsigned char, not long. The rule for STABS
should be set in stone (little-endian vs big-endian).

BFN. Paul.

Rod Pemberton

unread,
Mar 14, 2021, 5:29:40 PM
to
On Sun, 14 Mar 2021 05:25:06 -0700 (PDT)
"muta...@gmail.com" <muta...@gmail.com> wrote:

> I've been giving this more thought, and I'd like to abstract
> the problem before inquiring about changes to the C90
> standard to create a C90+ standard.
>
> If I have a S/3X0 that only has 32-bit registers available,
> I'd like to change the machine to reuse 3 of the 16 registers
> as segment registers.
>
> So there will be 32:32 far pointers available.
>
> I'm not sure what range of memory that should cover, but
> let's say 64 GiB. (any suggestion?).
>
> I want the compiler to be able to generate far data pointers
> and near code pointers.

The C specifications don't support segmented address pointers.

E.g., the LCC C compiler for DOS eliminated near and far pointers to
comply with the C specification. Versions 3.5, 3.6 have them.
Versions 4.5, 4.6 don't. I.e., the later versions don't support 16-bit
x86 code (which must add a segment and offset for huge/far pointers),
only 32-bit x86 code (with a segment/selector that doesn't change).

> I want to be able to allocate 8 GiB of memory, even though
> size_t is 4 GiB. I need a different function, not malloc().

Do you actually need a new malloc()? You might.

Allocating a contiguous memory block for C objects and memory
allocations is a requirement of C.

So, multiple calls to malloc(), e.g., two 4GiB calls, would work,
IF AND ONLY IF,
you can guarantee that the memory allocator allocates both blocks
contiguously. E.g.,

__set_contiguous_multiple_allocations(1);
malloc(4GiB);
malloc(4GiB);
__set_contiguous_multiple_allocations(0);

Where, __set_contiguous_multiple_allocations() is a custom function
that turns contiguous allocations on/off within the memory allocator,
for repeated calls to malloc(). Of course, now you need access and
control of the memory allocator, which you may not have, in addition to
access and control of the C compiler proper.

> I don't want to burden the compiler with a formal "long long"
> data type.
>
> I want long to be 32-bits.
>
> I want to declare a:
>
> char huge *p;
>
> to point to my above-size_t memory block.
>

But, in this example, you have "to burden the compiler with a formal"
"huge" pointer type ... Same difference? I.e., I see an advantage to
supporting "long long" but see no advantage to support "huge" or "far"
or "near", if you don't need to do so.

> I don't expect to be able to do a strlen() of p

Why not?

strlen() is just a loop that detects a zero byte/word (which usually
maps to a nul char '\0' on most implementations, i.e., because
they're the same size byte/word and char for most platforms).

strlen() should work on an infinite length string.

> but I do expect to be able to do p++ to traverse the entire 8 GiB
> memory block

Same thing. No difference.

> perhaps looking for the character 'Q', with the segment
> register being automatically adjusted by the compiler, at its
> discretion.

What?

Are you saying you want another string terminator like nul '\0' for C
but using the character 'Q'? What for? Unnecessary...

> I'd like to represent a 64-bit value to be given to huge_malloc()
> by two unsigned longs, both containing 32-bit values, even on
> machines where longs are 128 bits.

Instead of passing a 64-bit value into a malloc() variant, why wouldn't
you have a malloc() variant that allocated 4KB or 64KB blocks of memory
at a time, instead of allocating bytes of memory at a time like
malloc()? E.g., 32-bit x (4KB per allocation). This wouldn't give you
a 64-bit address range, but it would eliminate the need for extending
integers or pointers, or passing in a segment etc.

> I'd also like a:
> char huge *z;
> z = addhuge(p, unsigned long high, unsigned long low);
>
> A subhuge too.

You're beginning to really complicate things ...

> The same functions can exist in all compilers, including
> MSDOS, even if they just return NULL for huge_malloc()
> for obvious reasons. But even MSDOS can give you a
> memory block bigger than 64k, so if you request 128k
> using huge_malloc(), no worries, you'll get it.

AISI, your only real problem is with values larger than 32-bit. You
need an additional keyword to indicate the increased size, be it "long
long" for integers/pointers or just a "huge" or "far" for pointers.

--
Clinton: biter. Trump: grabber. Cuomo: groper. Biden: mauler.

Rod Pemberton

unread,
Mar 14, 2021, 5:31:45 PM
to
On Sun, 14 Mar 2021 12:21:26 -0700 (PDT)
"muta...@gmail.com" <muta...@gmail.com> wrote:

> But how is a C program meant to update the
> segment in a portable manner? E.g. in S/380
> where you have a 64-bit segmented address,
> assuming you had 7 GiB of code.

C doesn't support segmented address pointers, but C compilers for DOS
do, i.e., huge, far, near etc, because the address range for 16-bit
x86 was too small without adding in the segment. The address range
for 32-bit is quite large without adding in the base address of the
selector for the segment. So, the segment is usually not changed, nor
added to the 32-bit offset for 32-bit x86 code. See my reply up thread
for more on this issue.

muta...@gmail.com

unread,
Mar 15, 2021, 1:04:46 AM
to
On Monday, March 15, 2021 at 8:29:40 AM UTC+11, Rod Pemberton wrote:

> > I want the compiler to be able to generate far data pointers
> > and near code pointers.

> The C specifications don't support segmented address pointers.

C90+ will (or may).

> So, multiple calls to malloc(), e.g., two 4GiB calls, would work,
> IF AND ONLY IF,
> you can guarantee that the memory allocator allocates both blocks
> contiguously. E.g.,
>
> __set_contiguous_multiple_allocations(1);
> malloc(4GiB);
> malloc(4GiB);
> __set_contiguous_multiple_allocations(0);

Yes, you may be able to get that to work, but I think
the correct abstraction is far_malloc64().

> > I don't want to burden the compiler with a formal "long long"
> > data type.
> >
> > I want long to be 32-bits.
> >
> > I want to declare a:
> >
> > char huge *p;
> >
> > to point to my above-size_t memory block.
>
> But, in this example, you have "to burden the compiler with a formal"
> "huge" pointer type ... Same difference?

No. Most implementations will be allowed to
get away with:

#define huge
#define far_malloc(a, b) ((a == 0) ? NULL : malloc(b))

> I.e., I see an advantage to
> supporting "long long"

It's too much work to expect an MSDOS compiler
to do all that for you. You may as well ask for a
long long long long long long too. This is not the
right approach. C90 had it right, stopping at long,
but allowing that to be 64-bit or 128-bit or 256-bit
or whatever technology allows.

> but see no advantage to support "huge" or "far"
> or "near", if you don't need to do so.

The advantage is that you don't need new
instructions or registers or calling conventions
on S/370 to suddenly support accessing more
than 4 GiB of memory. You simply need to
recompile your program with an appropriate
compiler.

Quibbling aside.

> > I don't expect to be able to do a strlen() of p
> Why not?

That's precisely what size_t is for. That's what you
can support "normally". If you can support 64-bit
strlen() then set size_t to a 64-bit value.

> strlen() is just a loop that detects a zero byte/word (which usually
> maps to a nul char '\0' on most implementations, i.e., because
> they're the same size byte/word and char for most platforms).

Yes, and it will cut out at size_t and wrap.

> strlen() should work on an infinite length string.

There's not many infinite things in this world. :-)

> > but I do expect to be able to do p++ to traverse the entire 8 GiB
> > memory block

> Same thing. No difference.

Nope. Segmented memory will wrap when the offset
reaches the maximum.

> > perhaps looking for the character 'Q', with the segment
> > register being automatically adjusted by the compiler, at its
> > discretion.

> What?
>
> Are you saying you want another string terminator like nul '\0' for C
> but using the character 'Q'? What for? Unnecessary...

No, it was an example application. If you have a
simple application that looks for a 'Q' then you
can go while (*p != 'Q') p++;

Then you will know where 'Q' is. Don't look at me, I
don't write many applications. :-)

> > I'd like to represent a 64-bit value to be given to huge_malloc()
> > by two unsigned longs, both containing 32-bit values, even on
> > machines where longs are 128 bits.

> Instead of passing a 64-bit value into a malloc() variant, why wouldn't
> you have a malloc() variant that allocated 4KB or 64KB blocks of memory
> at a time, instead of allocating bytes of memory at a time like
> malloc()? E.g., 32-bit x (4KB per allocation). This wouldn't give you
> a 64-bit address range, but it would eliminate the need for extending
> integers or pointers, or passing in a segment etc.

The whole point is to get a 64-bit address range.
On systems that only have 32-bit registers, but
lots of memory.

> > I'd also like a:
> > char huge *z;
> > z = addhuge(p, unsigned long high, unsigned long low);
> >
> > A subhuge too.

> You're beginning to really complicate things ...

Adding a 64-bit value to a 64-bit pointer on a system
with only 32-bit registers requires a function call, or
a large macro.

It's a complicated scenario. That's why we have a
separation between near and far memory. Near
memory is simple.

> > The same functions can exist in all compilers, including
> > MSDOS, even if they just return NULL for huge_malloc()
> > for obvious reasons. But even MSDOS can give you a
> > memory block bigger than 64k, so if you request 128k
> > using huge_malloc(), no worries, you'll get it.

> AISI, your only real problem is with values larger than 32-bit. You

Yes, 32 is already difficult for 8086 to handle. I'm not
willing to make matters worse. God only knows how
the Commodore 64 supports 32-bit longs. I haven't
reached that point yet, but it's on my journey. I want to
write a C program for the C64.

> need an additional keyword to indicate the increased size, be it "long
> long" for integers/pointers or just a "huge" or "far" for pointers.

The additional keyword huge/far is dead simple to
implement on a C64 or standard S/370. It is simply
ignored.

long long is ridiculous unless the standard allows
it to be 32-bits or 16 bits. But that completely
defeats the purpose of why it is being added.

BFN. Paul.

muta...@gmail.com

unread,
Mar 15, 2021, 1:13:15 AM
to
On Monday, March 15, 2021 at 8:31:45 AM UTC+11, Rod Pemberton wrote:

> > But how is a C program meant to update the
> > segment in a portable manner? E.g. in S/380
> > where you have a 64-bit segmented address,
> > assuming you had 7 GiB of code.

> C doesn't support

Who died and made ISO God?

> segmented address pointers, but C compilers for DOS
> do, i.e., huge, far, near etc, because the address range for 16-bit
> x86 was too small without adding in the segment. The address range

And were they wrong to do that? No, they weren't.
I thought it was strange at the time, but no, they
were right. For extraordinary situations, use a
far pointer. E.g. memmgr.c when being built for
PDOS/86. It needs to go beyond size_t. Normal
applications can be limited to size_t, but not
extraordinary ones. I guess if you have C compiler
support you can just make everything a huge
pointer without the keyword. Maybe that is in fact
the proper approach?

But if you have compiler support in place, you can
still code the extraordinary situation (going above
size_t) and you may be able to have a 128-bit far
pointer on a 16-bit system with 16-bit normal
pointers. In fact, you could even have a situation
where the segment is shifted (128-16) bits left
to access memory way out somewhere else,
while only occupying a 16-bit segment and
16-bit offset.

> for 32-bit is quite large without adding in the base address of the
> selector for the segment. So, the segment is usually not changed, nor
> added to the 32-bit offset for 32-bit x86 code. See my reply up thread
> for more on this issue.

80386 is not my only target. 8086 is another target.

BFN. Paul.

muta...@gmail.com

unread,
Mar 15, 2021, 1:40:19 AM
to
On Monday, March 15, 2021 at 4:04:46 PM UTC+11, muta...@gmail.com wrote:

> No. Most implementations will be allowed to
> get away with:
>
> #define huge

Also:

#define far

> #define far_malloc(a, b) ((a == 0) ? NULL : malloc(b))

Sorry, should be:

#define far_malloc(a, b) ((a != 0) ? NULL : malloc(b))

ie refuse any high 32-bit request.

Another thing I should add is that in the extraordinary
situation of memmgr(), it would STILL have a #define
of whether you wanted to activate all the far pointer
manipulation instead of just operating within the
limits of size_t.

BFN. Paul.

muta...@gmail.com

unread,
Mar 15, 2021, 2:03:48 AM
to
On Monday, March 15, 2021 at 4:13:15 PM UTC+11, muta...@gmail.com wrote:

> were right. For extraordinary situations, use a
> far pointer. E.g. memmgr.c when being built for

Correction.

For extraordinary situations, use a huge pointer.

For unusual situations, such as the occasional
reference to absolute address 0xb8000, feel
free to use a far pointer.

For normal situations, just use an appropriate
memory model so that you don't need to pollute
your code with "far" crap.

Or perhaps it should be:

char ABSADDR *ptr;

And then an implementation can do either:

#define ABSADDR
or
#define ABSADDR far

> PDOS/86. It needs to go beyond size_t. Normal
> applications can be limited to size_t, but not
> extraordinary ones. I guess if you have C compiler
> support you can just make everything a huge
> pointer without the keyword. Maybe that is in fact
> the proper approach?

But even if that is the ideal approach (which is not
true if you are interested in speed - I don't really
care personally at this stage), MSDOS was around
for a very long time, but not a single C compiler
even produced magical huge pointers. Only
"Smaller C" does that, and only with 80386
instructions, and I don't think it is C90-compliant
yet.

BFN. Paul.

Alexei A. Frounze

unread,
Mar 15, 2021, 3:05:49 AM
to
On Sunday, March 14, 2021 at 11:03:48 PM UTC-7, muta...@gmail.com wrote:
...
> But even if that is the ideal approach (which is not
> true if you are interested in speed - I don't really
> care personally at this stage), MSDOS was around
> for a very long time, but not a single C compiler
> even produced magical huge pointers. Only
> "Smaller C" does that, and only with 80386
> instructions, and I don't think it is C90-compliant
> yet.

I'm thinking of making some improvements w.r.t. compliance,
but I'm not planning to support every single thing that's in the
standard (anywhere between ANSI C and C99). For example,
I'm not going to ever support functions with what's known as
identifier-list (as opposed to parameter-type-list), which is
already absent from C2x.
VLAs are another questionable feature, which the latest standards
(again, C2x in particular) make optional.
Complex types are OK, but low priority.
Some math functions are low priority as well.
Wide characters / Unicode is going to be incomplete too.
Likely the same with time zones, "saving" and leap seconds.

Later standards (C11 onward) add alignment, atomics, threads, attributes, etc. None
of that is in the plans. Though, anonymous unions are.

Alex

Rod Pemberton

unread,
Mar 15, 2021, 3:52:58 AM
to
On Sun, 14 Mar 2021 22:04:45 -0700 (PDT)
"muta...@gmail.com" <muta...@gmail.com> wrote:

> > > I don't expect to be able to do a strlen() of p
> >
> > Why not?
>
> That's precisely what size_t is for. That's what you
> can support "normally". If you can support 64-bit
> strlen() then set size_t to a 64-bit value.
>

a) No, that's not what size_t is for. size_t is the type returned by
the sizeof() operator, which is usually "unsigned long" for ANSI C.

b) I'm sorry. I clearly made a mistake here. I don't normally think of
C string functions as having their return type limited, because it's
not usually an issue. And, I was only thinking about how the code for
strlen() generally works, not about how strlen() was declared. Yes,
you're correct that the string functions returns are limited, and
limited to size_t for ANSI C. Also, you're correct that size_t would
need to be larger to comply with the C specifications for strlen(), or
you'd need to use a different return type for strlen() or any function
that returned size_t e.g., up sized to "unsigned long long".

> > > but I do expect to be able to do p++ to traverse the entire 8 GiB
> > > memory block
>
> > Same thing. No difference.
>
> Nope. Segmented memory will wrap when the offset
> reaches the maximum.

I suspect you were getting at some other issue here, than what I'm
about to respond to below, but I suspect that I'm not getting it.

I meant that the strlen() function will use a pointer like p++ to
increment through the string to find the nul terminator.

If the pointer wraps at the offset maximum (which it will for segmented
memory) when incrementing p++, then it'll do the same within the
strlen() function, because it too increments a pointer just like p++.

So, AISI, there is no difference between "being able to do a strlen() of
p" and "able to do p++ to traverse the entire 8GiB memory block" as bo