
Why's the stack grow down?


Ralph Melton

Nov 29, 1990, 6:56:17 PM
"For largely historical reasons, the stack on most computers grows towards
smaller addresses." (Scott Knaster, _How to Write Macintosh Software_)

Can anyone tell me about those historical reasons? What was the first
computer/system/language to use a stack growing downwards?

There must be good folklore in this topic.

Ralph

--
Ralph Melton The White Rabbit ral...@portia.stanford.edu

"When you hear of a storybook romance, you don't think of the storybook
as being _Alice in Wonderland_ . . ."

The Unknown User

Nov 29, 1990, 7:28:06 PM

In article <1990Nov29....@portia.Stanford.EDU> ral...@elaine26.stanford.edu (Ralph Melton) writes:
>"For largely historical reasons, the stack on most computers grows towards
>smaller addresses." (Scott Knaster, _How to Write Macintosh Software_)
>Can anyone tell me about those historical reasons? What was the first
>computer/system/language to use a stack growing downwards?

I don't know if you JUST want folklore, but it seems to make a lot
more sense having a stack growing downward from the highest address...

	If you have the stack growing upwards from some arbitrary address,
you'll have less space for a stack than if you start at the end of memory
and grow down, where you have lots of room...

The O/S is usually at low numbered areas of memory, so you can't
(or DON'T usually) start at 0 and go up...

These are just my reasons, based on many other conventions...

--
/Apple II(GS) Forever! unk...@ucscb.ucsc.edu MAIL ME FOR INFO ABOUT CHEAP CDs\
|WRITE TO ORIGIN ABOUT ULTIMA VI //e and IIGS! Mail me for addresses, & info. |
\ "Dammit Bev, is it you inside or is it the clown?" -IT by Stephen King /

Rick Smith

Nov 30, 1990, 10:17:05 AM
>Ralph Melton writes:
>>"For largely historical reasons, the stack on most computers grows towards
>>smaller addresses." (Scott Knaster, _How to Write Macintosh Software_)
>>Can anyone tell me about those historical reasons? What was the first
>>computer/system/language to use a stack growing downwards?

To use stacks efficiently, the instruction set needs to provide some form
of index register (actually, base register) addressing. The address of the
top of the stack goes in the register, of course. Instructions then can
address items on the stack by using offsets from that register. *These
offsets were always POSITIVE in early machines.* Thus, you wouldn't be
able to fetch items off the stack easily if the stack grew upwards.

Who did it First? Hard to say. The Manchester Mark I was the first to use
index registers. The IBM 704 probably did the most to popularize them in
the US. Unlike the older computers, modern machines often make a functional
distinction between base and index registers. In older machines, base and
index registers were interchangeable.

Strictly speaking, then, any machine capable of base register addressing
could easily support stacks. However, Real Stack Support implies that the
instruction set has built-in addressing modes for pushing and popping,
particularly for subroutine calls. The Burroughs B5000 series is
reputed to have been doing this in the early 60s, as part of its hardware
support for efficient execution of Algol programs.

A related bit of trivia: What was the first machine that allowed *negative*
offsets from base registers? That would have been the first machine that
could easily support stacks that grow upwards... The PDP-11 supported
signed base offsets in 1970; I can't think of any before that...

Rick.
sm...@sctc.com Arden Hills, Minnesota

Craig Burley

Nov 30, 1990, 10:20:48 AM

Ralph Melton writes:

>"For largely historical reasons, the stack on most computers grows towards
>smaller addresses." (Scott Knaster, _How to Write Macintosh Software_)
>
>Can anyone tell me about those historical reasons? What was the first
>computer/system/language to use a stack growing downwards?
>
>There must be good folklore in this topic.

I don't think it's folklore as much as sensible practice.

My understanding is that it is based on the assumption that you've got a
known memory space into which the program is loaded starting at some point;
the program will use stack space (LIFO, Last-In-First-Out); the program
will use heap space (AIAO, Arbitrary-In-Arbitrary-Out); and (the kicker, I
think), when it wants to extend a given chunk of the heap, it will tend to
want to do so by adding to the end of it rather than the beginning.

Given the last assumption, it is clear the implementation will be faster if
it is more likely that a chunk can be extended (a la "realloc" in C) by adding
to the end of it instead of adding to the beginning and then copying its
previous contents downward in memory. By avoiding the copy, things go faster.

Another assumption is that most often, the heap chunk to be reallocated is
the most recently allocated one, or the most "prominent" (i.e. at the "top"
of the heap, if there is such a thing).

To make this all work, the heap should grow from low memory addresses to high
ones, i.e. from the bottom to the top (upwards). Also, before true heap
storage, many systems and programs were (and perhaps still are) written to
get more storage by simply writing beyond the end of the program as defined
by some linker symbol; I've done programming in this kind of environment.

So, if the heap grows above the program, and of course the size of the heap
is not known at link/load time, obviously the way to allow the most contiguous
space for a heap is to put the program at the lowest address available. (Yes,
I know I've gone through a long explanation to arrive at what seems a natural
conclusion, but it is useful to be aware that were it not for one or two of
these assumptions, there would be no real reason other than "naturalness" to
load the program so its highest-relative-address datum was placed at the highest
available address in the memory space; there are other reasons as well having
to do with virtual-memory implementations, but these came after the other
issues were well understood and so were designed with the "natural" assumptions
in mind, as is evident when one looks at VM architectures on a variety of
machines with different memory and stack architectures.)

Now, if the program starts at address 0 (I'll use 0 to mean the lowest available
address) and ends at address P, and the heap therefore starts at address P+1
so it has the most contiguous space in which to grow, where does one put the
stack? On Prime 50 series and earlier machines, the architecture happened
to dictate that the stack grew upward, unlike more "modern" machines, so one
had to, at link/load time, allocate the remaining space by hand (or adopt a
default) and establish the limit on heap space and stack storage by setting
the address of the beginning of the stack (and thus the end of heap space).
This was within a 64Kword (128KB) space, usually.

But if you don't know or care about expected stack vs. heap usage (that is,
you don't want programmers on your machine to have to care), you can design
your machine to grow the stack downwards from the top of memory, T. (I call
these down-stack machines vs. up-stack machines like the Prime; some people
might use the terms high-stack and low-stack respectively, but I won't use
these terms here.)

So the stack grows downwards from T towards 0, while the heap grows upwards
from P+1 to H, the current top of the heap. As long as SP, the stack pointer,
is always > H (let's ignore off-by-one errors please :-), everything's fine.

This approach not only allows programmers to not worry much about where to
put the stack, it allows programs to exist that might otherwise have a very
hard time: for example, a program that uses and then flushes a lot of heap
storage, and later has extensive recursion, thus using and flushing a lot of
stack storage. So the lowest value of SP reached can be quite a bit lower than
the highest value of H reached, as long as SP > H at any particular point in
time. Growing a stack upwards from a fixed address does not allow this
capability (you can get around it in painful ways, of course!).

Note that on Prime 50 series machines, the fact that they grew stacks upwards
did offer them several advantages, initially and later on. Given a 64K
space (T=64K-1), a fixed stack address S for an upwards-growing stack, and
the usual 0..P, P+1..H, it was more efficient to detect, if desired, stack
overflow, since that was simply a matter of overflowing an address instead
of checking whether SP > H (and on most downward-growing stack machines I've
looked at, the functional unit that moves SP has no idea of the current value
of H), though I can't remember if they actually used this feature in the
old 64K days. And since the heap was fixed (and presumably properly
managed by custodian procedures), it was easy and natural to make sure it
didn't overflow into the stack (H < S at all times). (The heap
routines on down-stack machines do of course know where SP is, so they can
check fairly easily when trying to allocate more memory. More on how
modern machines detect stack overflow into the heap later.)

When Prime went to a segmented architecture, they still ran old 64K programs
the same way (64K was the size of a segment), but multi-segmented programs
got themselves a stack in its own segment. This meant that due to the
(sometimes annoying) way in which segmented architectures differ from
flat-space machines, one could not simply overflow the heap into the stack
without having bugs in the program (the heap typically had its own set of
segments separate from the program and its data, I think), and further a
stack overflow consisted of hitting the end of a segment, which the hardware
and operating system handled by extending the stack across multiple segments
if available (or reporting a stack overflow if not).

Further, a procedure in Prime machine code using the segmented-access mode
(64V or 32I modes, initially) could easily extend the stack by an arbitrary
amount using the STEX instruction for the usual reasons (entering a PL/I
BEGIN block with its own variables, or its C equivalent, or needing
fleeting storage quickly). Because the stack grew upwards, the extension
was placed at the end of the current stack frame (unless it hit the top of
the segment; the instruction left flags or the L-reg set to indicate if this
happened, as I recall) so it became part of a new contiguous stack frame
most of the time. Also, because the extension came after the current frame
in an upwards-growing stack, and because the stack could only be extended
using STEX (no MOV ...-(SP) or some such thing), the stack frame always
contained a valid pointer to where the next frame should be allocated,
saving the necessity for two stack-related registers (like VAX's SP and FP,
or Frame Pointer, the latter pointing to the juicy procedural stuff like
passed arguments, return location, previous stack frame, and the like),
needing only an SP. (Prime 50 series machines had four segmented base
address registers, PB for Program Base -- presumably constant instructions
and data; LB for Link Base -- changeable data; SB for Stack Base -- stack
frame plus extensions; and XB for temporary Base -- references through
arbitrary pointers via offsets.)

Sure, if a program needed to STEX on top of a previous STEX and wanted to
treat both extensions as a contiguous space (the latter being "above" the
former in memory, the usual scenario), it might have to STEX again and
copy if the second STEX broke the contiguity of the frame due to reaching
the end of a segment. But a) that's due to the segmented architecture,
not growing stacks upwards, b) it doesn't necessarily happen often or at all
for a given program, so the copying and extra extension aren't really expensive
in that sense, and c) programs on downward-stack machines wishing to do the
same thing (SUB SP,n and later SUB SP,p) must always copy the data written at
the first extension down to where the second extension has gone so that the
new data can be viewed as following the end of the original in a contiguous
way.

How do modern down-stack and up-heap machines detect collisions between the
stack and the heap? Either they don't (various personal computers are
probably this way, though the Macintosh does implement a periodic check of the
SP against the top of the heap during clock interrupts, which can catch many
typical overflow conditions -- after they happen!), they do but it's
time-consuming (generate code to check any time something is actually
allocated), or they can use a technique that depends on a paged-architecture
virtual-memory machine: the operating system can arbitrarily pick a page serving
as a sort of DMZ (DeMilitarized Zone, something to do with Vietnam for those
of you who are too young to remember :-) between the heap and the stack. The
stack grows downwards towards this DMZ page, and the heap grows upwards towards
it. The DMZ page isn't a real page in RAM; it's a page table entry that causes
a trap to the OS when any reference is made to it (as with most pages in a
given program's VM space) and also is recognized by the OS as the DMZ when
and if the stack does enter it.

If the stack reaches down to a new page, ordinarily the OS trap handler just
makes (and hopefully zeros out :-) a new page in its page table entries and
lets the program keep running, using the new page for the stack.

But when the stack reaches the DMZ page, the OS trap handler either just bombs
with a stack overflow (the cheap, quick way to deal with things) or it first
makes sure it can make a new DMZ page out of a currently nonexistent page in the
memory space somewhere below the current one. If it can, it means the heap
hasn't grown up into the page just below the current DMZ page, so it makes the
new DMZ page (reducing the amount of space available for heap storage), unmakes
the previous one, and continues as if it was a normal stack fault (it turns
the previous DMZ page into a usable page).

Similarly, if the OS trap handler sees that the reference to the DMZ page
came from the heap (either the OS allows programs to grow the heap just by
referencing upper pages in the same way it allows them to grow the stack by
referencing lower ones, or more typically, it requires an explicit call by
the program to get more pages for heap storage and recognizes that the call
constitutes a conceptual "reference" to or through the DMZ page), it can
relocate the DMZ page upwards as long as it makes certain the stack isn't
already there. (It is probably ok to check the stack pointer; if it is
currently above the desired new DMZ, then even if the new DMZ page already
exists, meaning the stack once grew down to that point, it is ok to delete
that page and any existing contiguous lower pages because a subsequent
overflow of the stack into that new DMZ will be detected only if it actually
happens, and it might not -- the heavy-stack-usage period of the program
might be over.)

Is this DMZ page approach perfect? That is, will it detect any overflow of
a heap into stack or stack into heap (assuming other bugs or bizarre usages
of the machine aren't present)? No. For the heap, it might be, but there
is an easy way to overflow the stack without (immediate, at least) detection:
if a stack frame is larger than a page, it might be that it gets allocated
when the SP is near enough to the top of the DMZ page that the new frame's
initial contents lie below the bottom of the DMZ. If the page where the
frame's initial contents go (not necessarily the page directly below the DMZ
page for a stack frame more than two pages long) doesn't already exist, then
the OS can (and should) treat the trap as a stack overflow (unless simply
referencing nonexistent pages is also the way to grow the heap under that
OS). But if the page exists, it means the heap has grown to that point, and
in fact the stack may already have overwritten live heap storage, and there
is no trap (aside from the usual page-fault trap, which the OS shouldn't be
doing any special checking for).

The solution? There are several: one is to ask the OS (if it's nice) to
allocate as many contiguous pages for the DMZ as necessary to cover the
largest stack frame or stack extension (or extensions if references to
the extension contents don't happen right away) in the program (this
technique is easy if the compilers, linkers, and, perhaps, assemblers that
built the program keep track of this info somehow); another is to generate
code such that anytime a stack frame or extension larger than one page
is made, immediate useless references are made into the extension, one
per page size, to encompass the entire frame or extension to trigger
a fault if the DMZ lies within it (but this technique could conceivably
be thwarted by a very clever machine that recognizes useless reads, so
maybe one should write zeros instead of doing useless reads); finally, you
could give up and rewrite your program in FORTRAN 77 (-: .

Had enough? I know I have! It'll be a scream if this whole explanation
turns out to be dead wrong....
--

James Craig Burley, Software Craftsperson bur...@ai.mit.edu

Rick Smith

Nov 30, 1990, 10:26:19 AM
unk...@ucscb.UCSC.EDU (The Unknown User) writes:

> If you have the stack growing upwards from some arbitrary address,
>you'll have less space for a stack than if you start at the end of memory
>and go down, then you have lots of room...

In many, if not most, execution environments the program begins with One
Big Piece of memory. Starting from the top or starting from the bottom
hardly matters. "'Historically'" people run their stack starting from one
end and their persistent memory allocations (e.g. buffers) from the other.
Stack overflow is checked by comparing the "sbrk" or "dp" pointer (or whatever
your system calls it) against the top of the stack.

> The O/S is usually at low numbered areas of memory, so you can't
>(or DON'T usually) start at 0 and go up...

In this modern age of virtual addresses, it probably doesn't matter if you
start at zero or not, you can't "see" the OS or the interrupt vectors in
your address space anyway. RT-11, one of the classic single-user OSes
(Before PCs), lived in upper memory so you could do your own interrupt
hacking.

Dik T. Winter

Nov 30, 1990, 8:57:58 PM
In article <1990Nov30....@sctc.com> sm...@sctc.com (Rick Smith) writes:
> >Ralph Melton writes:
> >>"For largely historical reasons, the stack on most computers grows towards
> >>smaller addresses." (Scott Knaster, _How to Write Macintosh Software_)
> >>Can anyone tell me about those historical reasons? What was the first
> >>computer/system/language to use a stack growing downwards?
I remember being quite surprised when confronted with our new PDP 11/45 that
the stack was growing downwards. The machines I had used until then had
always an upward growing stack (Electrologica X8, CDC Cyber). But at least
the stacks on my desk still grow upwards.

>
> To use stacks efficiently, the instruction set needs to provide some form
> of index register (actually, base register) addressing. The address of the
> top of the stack goes in the register, of course. Instructions then can
> address items on the stack by using offsets from that register. *These
> offsets were always POSITIVE in early machines.* Thus, you wouldn't be
> able to fetch items off the stack easily if the stack grew upwards.
Positive? I do not think so. In general the offset would take just as
many bits as a register, and overflow during address calculation would simply
be ignored. Or else the offset would be sign extended. This was true for
the CDC Cyber and the Electrologica X8, and I just checked the manuals, it
is also true for the DEC-10 and CDC 3200. Of these the X8 and DEC-10 have
instructions to directly support stacks, and both grow upwards.
>
> Who did it First?
See below.

> Hard to say.The Manchester Mark I was the first to use
> index registers. The IBM 704 probably did the most to popularize them in
> the US.
But the 704 did of course make it very complicated. It had 3 index registers
and you could add 1, 2 or all 3 index registers to the address. (This
explains the limit of 3 dimensions for arrays in older Fortrans.)

> Unlike the older computers, modern machines often make a functional
> distinction between base and index registers.
Eh? Modern machines make no distinction at all, there is just one kind of
register.

> In older machines, base and
> index registers were interchangable.
This conforms to C thinking about arrays.

>
> Strictly speaking, then, any machine capable of base register addressing
> could easily support stacks. However, Real Stack Support implies that the
> instruction set has built-in addressing modes for pushing and poping,
> particularly for subroutine calls. The Burroughs B5000 series is
> reputed to have been doing this in the early 60s, as part of its hardware
> support for efficient execution of Algol programs.
I have a Burroughs manual somewhere, but not here at home, so I can not check
the date. The X8 had complete support for stacks as early as 1965. DEC-10
dates from 1967 or thereabouts, I believe.

>
> A related bit of trivia: What was the first machine that allowed *negative*
> offsets from base registers? That would have been the first machine that
> could easily support stacks that grow upwards... The PDP-11 supported
> signed base offsets in 1970; I can't think of any before that...
As indicated above, there were many machines that supported it before that time.

It must be remembered that in the early sixties stack handling was not yet
completely understood. I must have somewhere reports from Princeton University
from the 1962-1965 time frame, detailing how to do stacks for Algol 60. And
to quash another followup: heaps were not yet thought about; those came later.
(But the CDC Cyber Algol-68 compiler dealt very well with upward growing
stacks and downward growing heaps.)

$64,000 question: who was the first that did downward growing stacks?
From all information I have available, it must be DEC with the PDP-11.
It has the two addressing modes auto-increment and auto-decrement.
Auto-increment is post-increment while auto-decrement is pre-decrement.
Still this allows for both upward growing and downward growing stacks.
For a downward growing stack the stackpointer points to the last element
pushed onto the stack, while for an upward growing stack the stackpointer
points to the first free element on the stack. What made the stack
downward-growing is the stack overflow trap. This trap was generated if in
autodecrement mode an address was referenced below 0400. (And this again
quashes another thread: downward stacks were not chosen because you could
start at top of memory and grow downwards until you hit a limit.) In later
versions this fixed address was supplanted by a modifiable address, including
green, orange and red zones.

The concept of a stack must have come fairly late in the design because the
DG Nova does not implement stacks (it has some memory addresses that operate
as auto-increment registers and some that operate as auto-decrement registers,
but they do not overlap). And I think myself that the auto-increment and
auto-decrement modes on the 11 were more inspired by the concept of walking
through lists (just like the DG Nova) than by the true idea of a stack, but
the modes came in handy to implement a stack.
--
dik t. winter, cwi, amsterdam, nederland
d...@cwi.nl

Steve Masticola

Nov 30, 1990, 10:37:08 PM
Ralph Melton writes:

> "For largely historical reasons, the stack on most computers grows towards
> smaller addresses." (Scott Knaster, _How to Write Macintosh Software_)

> Can anyone tell me about those historical reasons? What was the first
> computer/system/language to use a stack growing downwards?

I don't think it's _just_ historical reasons. Assume you want to
dedicate x amount of memory for a stack and detect overflows. If you
place the stack boundary at an arbitrary point and grow (either down
or up), you'll have to compare the stack pointer against the stack
boundary constant to detect an overflow.

On the other hand, if you place the stack boundary at zero, the
overflow detection logic in the adder will detect overflow for you
without having to keep the stack boundary constant around.

Growing toward the bottom would work too, if the memory was populated
at the bottom of the address space (ignoring virtual memory) and if
the address space could be represented in exactly one word. In most
machines these conditions are not met.

Do people still put the stack from zero to x? Who knows? Probably not
in a 6502 :-)

- Steve (mast...@athos.rutgers.edu)

The Unknown User

Dec 2, 1990, 5:02:42 AM

In article <jdarcy.660121026@zelig> jda...@encore.com (Jeff d'Arcy) writes:
.meis...@osf.org (Michael Meissner) writes:
.>unk...@ucscb.UCSC.EDU (The Unknown User) writes:
.>| The O/S is usually at low numbered areas of memory, so you can't
.>| (or DON'T usually) start at 0 and go up...
.>
.>REAL O/S's aren't in the user's accessable address space to begin with
.>and use virtual memory (or least base registers) :-).
.designed* with downward-growing stacks in mind. Then there's the 6502, which
.uses page 0 (256 bytes) almost like a register set, which would make an upward-
.growing stack at a low address very undesirable since you have better uses for
.that space.

Yeah, I most likely thought of that when I was originally posting my
first reply, but didn't state it...

.I think the basis for all this is the fact that memory always starts at 0
.but you don't necessarily know how far it goes. Therefore it's easier to
.let the bootstrap program (if there is one) load stuff starting at 0 and
.have the real OS worry about where to put the stack. Since the bottom of
.memory is already taken, the logical place is the top of memory, and that
.means the stack should grow downward.

That's the same thing I was (attempting to) say, but you've said
it a lot better! thanks.

Bernie Cosell

Dec 2, 1990, 8:59:59 AM
sm...@sctc.com (Rick Smith) writes:

}>Ralph Melton writes:
}>>"For largely historical reasons, the stack on most computers grows towards
}>>smaller addresses." (Scott Knaster, _How to Write Macintosh Software_)
}>>Can anyone tell me about those historical reasons? What was the first
}>>computer/system/language to use a stack growing downwards?

}To use stacks efficiently, the instruction set needs to provide some form
}of index register (actually, base register) addressing. The address of the
}top of the stack goes in the register, of course. Instructions then can
}address items on the stack by using offsets from that register. *These
}offsets were always POSITIVE in early machines.*

Oh??

}Who did it First? Hard to say.The Manchester Mark I was the first to use
}index registers. The IBM 704 probably did the most to popularize them in
}the US. Unlike the older computers, modern machines often make a functional
}distinction between base and index registers. In older machines, base and
}index registers were interchangable.

The problem here is that while the IBM 70x machines had the capability of doing
'stacks', they didn't actually *have* stacks, and so any stack implementation
was mostly up to the programmer to do as she pleased.

}... The Burroughs B5000 series is
}reputed to have been doing this in the early 60s, as part of its hardware
}support for efficient execution of Algol programs.

And I don't remember the exact chronology, but DEC's PDP-6 also had explicit
stack operations. The PDP-6's stack grew *up*: the loader initialized the
stack pointer to be at the end of your program.

}A related bit of trivia: What was the first machine that allowed *negative*
}offsets from base registers? That would have been the first machine that
}could easily support stacks that grow upwards... The PDP-11 supported
}signed base offsets in 1970; I can't think of any before that...

The PDP-11 was the first machine _I_ ever saw that was set up to do stacks
_downwards_ [because of the way pre-increment and post-decrementer [or was it
the other way around? :-)] were set up].

The PDP-6 did signed offsets just fine. The -6 actually gave away a bit of the
offset to allow it to be a proper signed number, and so the question of
adding/subtracting it in the hardware was irrelevant [and it allowed things
like a very simple C convention where you have a 'frame pointer' pointing to
your return address, and formals have negative offsets and locals have positive
offsets].

/Bernie\

Michael Meissner

Dec 2, 1990, 1:25:47 AM
In article <95...@darkstar.ucsc.edu> unk...@ucscb.UCSC.EDU (The
Unknown User) writes:

| I don't know if you JUST want folklore, but it seems to make a lot
| more sense having a stack growing downward from the highest address...
|
| If you have the stack growing upwards from some arbitrary address,
| you'll have less space for a stack than if you start at the end of memory
| and go down, then you have lots of room...

Under most systems, the heap grows in the opposite direction from the
stack, so you get the same number of bytes no matter which way the
stack grows.

However, I did have the experience of implementing the low levels of
the UNIX library on a machine whose hardware used stacks that grew
upwards (the Data General MV). If the user did not call sbrk
directly, but just called malloc and friends, we sbrk'ed the entire
available address space and had the stack grow up and heap grow down.
However, because sbrk's semantics require it to grow up, we were
caught between a rock and a hard place, and reserved by default a 1/4
megabyte hole for the sbrk activity. There were all sorts of link
time options to specify using a fixed size stack, fixed size heap,
reserving a given size of memory from stack/heap, etc.

| The O/S is usually at low numbered areas of memory, so you can't
| (or DON'T usually) start at 0 and go up...

REAL O/S's aren't in the user's accessible address space to begin with
and use virtual memory (or at least base registers) :-). Some of us
prefer to have a little bit more security than glorified program
loaders. Also, on the MIPS and VAX computers, the OS is in the upper
regions of memory.

--
Michael Meissner email: meis...@osf.org phone: 617-621-8861
Open Software Foundation, 11 Cambridge Center, Cambridge, MA, 02142

Considering the flames and intolerance, shouldn't USENET be spelled ABUSENET?

Jeff d'Arcy

Dec 2, 1990, 1:57:06 AM
meis...@osf.org (Michael Meissner) writes:
>unk...@ucscb.UCSC.EDU (The Unknown User) writes:
>| The O/S is usually at low numbered areas of memory, so you can't
>| (or DON'T usually) start at 0 and go up...
>
>REAL O/S's aren't in the user's accessable address space to begin with
>and use virtual memory (or least base registers) :-).

Even the most real OS needs a stack regardless of whether it's in user mode,
supervisor mode, executive mode, kernel mode, or ala mode. While there are
most certainly exceptions (page faults, interrupts. . .oops), most versions
of U*X expect the kernel to start at 0. This makes downward-growing stacks
starting at high addresses the logical choice. In another universe, the Mac
puts all sorts of global stuff down in low memory and everything runs in the
same 68K supervisor mode so the same considerations apply. Come to think of
it, the 68K predecrement/postincrement addressing modes are *specifically
designed* with downward-growing stacks in mind. Then there's the 6502, which
uses page 0 (256 bytes) almost like a register set, which would make an upward-
growing stack at a low address very undesirable since you have better uses for
that space.

I think the basis for all this is the fact that memory always starts at 0
but you don't necessarily know how far it goes. Therefore it's easier to
let the bootstrap program (if there is one) load stuff starting at 0 and
have the real OS worry about where to put the stack. Since the bottom of
memory is already taken, the logical place is the top of memory, and that
means the stack should grow downward.

Sorry if I've repeated others' statements; I just joined this discussion.

--

Jeff d'Arcy, Generic Software Engineer - jda...@encore.com
I *am* the evil twin

Don Stokes

unread,
Dec 2, 1990, 10:05:18 AM12/2/90
to
sm...@sctc.com (Rick Smith) writes:

> A related bit of trivia: What was the first machine that allowed *negative*
> offsets from base registers? That would have been the first machine that
> could easily support stacks that grow upwards... The PDP-11 supported
> signed base offsets in 1970; I can't think of any before that...

PDP-11s don't really have signed base offsets, it's just the effect of a
16 bit address space with 16 bit offsets. Any n bit address machine
with n bit offsets will display this "feature". Can you say "modulo"?

Don Stokes, ZL2TNM / / Home: d...@zl2tnm.gp.co.nz
Systems Programmer, GP Print Ltd /GP/ Wellington, Work: d...@gp.co.nz
________________________________/ /__New_Zealand_____PSI:__(5301)47000028::DON

Jeff d'Arcy

unread,
Dec 2, 1990, 10:09:45 AM12/2/90
to
unk...@ucscb.UCSC.EDU (The Unknown User) writes:
>jda...@encore.com (Jeff d'Arcy) writes:
>> [stuff]

>>I think the basis for all this is the fact that memory always starts at 0
>> [more stuff]

>
> That's the same thing I was (attempting to) say, but you've said
>it a lot better! thanks.

You're quite welcome. Unfortunately, I must correct the above statement. On
*most machines* memory starts at 0. The box I'm working on now is actually
one of few exceptions; some buffoon in ******* decided that memory would end
at 0x40000000, so the starting address depends on how much memory you have.
This has forced us to remove lots of U*X dependencies on starting at physical
address 0, plus we have to deal with the kernel getting loaded into different
areas depending upon configuration. YUK! Fortunately, there aren't too many
hardware designers obnoxious enough to pull stuff like that.

Hans Mulder

unread,
Dec 3, 1990, 8:42:11 AM12/3/90
to
In article <Nov.30.22.37....@athos.rutgers.edu> mast...@athos.rutgers.edu (Steve Masticola) writes:
>Do people still put the stack from zero to x? Who knows? Probably not
>in a 6502 :-)

Well, if you really want to know:
In a 6502 the stack grows downwards from 0x01ff to 0x0100.
If you push too much, the stack pointer wraps around, back to 0x01ff again.
And then if you pop things it wraps just as easily.

Actually, the S register is just an 8 bit register (like all registers,
except the PC), and if you use it as a stack pointer, the addressing part
of the 6502 sticks on the 0x01 high byte.


Lovely little piece of silicon, that 6502.

Nice day to you all,

Hans Mulder h...@fwi.uva.nl

Magnus Olsson

unread,
Dec 3, 1990, 11:08:57 AM12/3/90
to
In article <jdarcy.660121026@zelig> jda...@encore.com (Jeff d'Arcy) writes:

>Then there's the 6502, which
>uses page 0 (256 bytes) almost like a register set, which would make an upward-
>growing stack at a low address very undesirable since you have better uses for
>that space.

Sorry, but for the 6502 this consideration is totally irrelevant.

The 6502's stack pointer is only one byte long, and the stack is always confined
to page 1 (hex 100-1FF), so it can never interfere with page 0 variables.

And saying that page 0 is like a register set is exaggerating things a bit.
I'd rather say that page 0 is the only place where you can use all the addressing
modes. Some of these modes require only an 8-bit address and hence are faster
than "normal" absolute addressing, but there's really nothing very exciting about
page 0 - it's just like *any* part of RAM on a non-lobotomized processor.

Magnus Olsson | \e+ /_
Dept. of Theoretical Physics | \ Z / q
University of Lund, Sweden | >----<
Internet: mag...@thep.lu.se | / \===== g
Bitnet: THEPMO@SELDC52 | /e- \q

Rick Smith

unread,
Dec 3, 1990, 12:39:16 PM12/3/90
to
d...@cwi.nl (Dik T. Winter) writes:

> > top of the stack goes in the register, of course. Instructions then can
> > address items on the stack by using offsets from that register. *These
> > offsets were always POSITIVE in early machines.* Thus, you wouldn't be
> > able to fetch items off the stack easily if the stack grew upwards.

>Positive? I do not think so. In general the offset would take just as
>many bits as a register, and overflow during address calculation would simply
>be ignored. Or else the offset would be sign extended. This was true for

My first experience at implementing stacks was in the early 70s, on the
IBM/360 series. You *had* to implement
downward growing stacks because there was no "overflow." We had a 16MB
address space and 0.5 MB of addressable memory. The unsigned address
computation simply would point itself to nonexistent memory and generate
an addressing exception.

However, your point is well taken with respect to machines whose memory
size tended to match their address space.

> > Unlike the older computers, modern machines often make a functional
> > distinction between base and index registers.
>Eh? Modern machines make no distinction at all, there is just one kind of
>register.

You are right, any register may be used as either an index register or
a base register. What I meant, however, is that a register acting as an
index register may behave differently than one behaving as a base register.
For example, if you refer to long word data in an indexed instruction, some
machines will interpret the index value as the number of long words rather
than the number of addressable units (e.g. bytes), in effect, multiplying
the value by four before applying it to the address computation.

I meant the comment more along the lines of the phrase, "Kids, don't try
this one at home!"

Linus Torvalds

unread,
Dec 4, 1990, 6:36:50 AM12/4/90
to
In article <jdarcy.660121026@zelig> jda...@encore.com (Jeff d'Arcy) writes:
> ....... with downward-growing stacks in mind. Then there's the 6502, which
>uses page 0 (256 bytes) almost like a register set, which would make an upward-
>growing stack at a low address very undesirable since you have better uses for
>that space.

The 6502 indeed uses the low 256 bytes as a faster memory, but that is not the
reason it uses a downward growing stack (if I remember correctly from my
days programming a VIC-20). The stack has its "own" address space that
is just 256 bytes, from $0100 to $01FF, and will wrap around on a stack
overflow, as the stack pointer is just 8 bits wide. An upward growing
stack would be quite possible, as the stack pointer is always relative
to $0100. It seems it grows downwards just because most others used it
this way. Maybe some other historical reasons?

Flames to: torv...@cs.helsinki.fi "Linus Torvalds"

------
Sorry, no sig
