
History question: why do stacks commonly grow down?


Ivan Godard

Nov 25, 2012, 1:49:19 AM
My best guess is that by growing down you can address function locals by
positive offsets off the stack pointer, saving having to implement a
frame pointer register. Mere speculation, but supporting the explanation
is that all early larger machines (that I'm acquainted with) had stacks
that grew up, but also had no reason to economize on a single register.

Of course, addressing locals off the stack pointer means you can't do
alloca() or equivalent; probably that's why all modern machines maintain
a frame pointer.

These days, is there any advantage to one direction over the other?

Robert Wessel

Nov 25, 2012, 2:48:23 AM
On Sat, 24 Nov 2012 22:49:19 -0800, Ivan Godard <iv...@ootbcomp.com>
wrote:

>My best guess is that by growing down you can address function locals by
>positive offsets off the stack pointer, saving having to implement a
>frame pointer register. Mere speculation, but supporting the explanation
>is that all early larger machines (that I'm acquainted with) had stacks
>that grew up, but also had no reason to economize on a single register.
>
>Of course, addressing locals off the stack pointer means you can't do
>alloca() or equivalent; probably that's why all modern machines maintain
>a frame pointer.


Probably more important is that many machines used to be programmed
assuming a single-threaded model: you'd load the program at the
beginning of the address space, start the heap just past that, and
start the stack at the end of the address space. Then the heap and
stack can grow toward each other and you can use the entire address
space without any additional heroics.


>These days, is there any advantage to one direction over the other?


Not really.

Ivan Godard

Nov 25, 2012, 2:52:40 AM
Yes, but you could start the stack at the bottom (growing up) and the
heap at the top and have the same effect. Why was stack-down chosen,
when it seems unnatural at least given the saucer-stack analogy we all
first learned?

Stephen Sprunk

Nov 25, 2012, 3:31:57 AM
On 25-Nov-12 00:49, Ivan Godard wrote:
> My best guess is that by growing down you can address function locals by
> positive offsets off the stack pointer, saving having to implement a
> frame pointer register. Mere speculation, but supporting the explanation
> is that all early larger machines (that I'm acquainted with) had stacks
> that grew up, but also had no reason to economize on a single register.

It should be just as easy to address locals by negative offsets from an
upward-growing stack pointer, so that seems moot.

> Of course, addressing locals off the stack pointer means you can't do
> alloca() or equivalent; probably that's why all modern machines maintain
> a frame pointer.

You can do alloca() without a frame pointer if needed, and that still
has no bearing on whether the stack grows upward or downward.

> These days, is there any advantage to one direction over the other?

Neither. However, if one assumes a single stack and a single heap, then
it is most logical for them to grow from opposite ends of the address
space toward each other. Which grows in which direction is merely a
matter of convention.

S

--
Stephen Sprunk "God does not play dice." --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSS dice at every possible opportunity." --Stephen Hawking

Antti Louko

Nov 25, 2012, 3:43:00 AM
On 2012-11-25 10:31, Stephen Sprunk wrote:

> Neither. However, if one assumes a single stack and a single heap, then
> it is most logical for them to grow from opposite ends of the address
> space toward each other. Which grows in which direction is merely a
> matter of convention.

The heap tends to contain executable code more often than the stack
does. Thus it is simpler to allow execution only in the region that
also holds the static program code, e.g. the start of the address space.

nm...@cam.ac.uk

Nov 25, 2012, 4:25:26 AM
In article <k8sf1p$7u1$1...@dont-email.me>,
Ivan Godard <iv...@ootbcomp.com> wrote:
>
>My best guess is that by growing down you can address function locals by
>positive offsets off the stack pointer, saving having to implement a
>frame pointer register. Mere speculation, but supporting the explanation
>is that all early larger machines (that I'm acquainted with) had stacks
>that grew up, but also had no reason to economize on a single register.

(a) Only if you have no negative addressing and (b) if you update the
stack pointer immediately after entry rather than immediately before
the call.

>Of course, addressing locals off the stack pointer means you can't do
>alloca() or equivalent; probably that's why all modern machines maintain
>a frame pointer.

It can do the scoped forms of local allocation, which is what almost
all languages that have local allocators have used. The looser
semantics of alloca() very rarely help and make it difficult to
manage space (needing a frame pointer isn't the only problem).

>These days, is there any advantage to one direction over the other?

Was there ever? I don't remember a systematic one and, even on a
single architecture, different compilers often used different
directions.


Regards,
Nick Maclaren.

nm...@cam.ac.uk

Nov 25, 2012, 6:23:24 AM
In case anyone is interested, this is one of the better ways to
implement local allocation, a rising stack, and use nothing but
positive offsets. I cannot tell you when it was first used, but
I would guess early 1960s. Note that it makes the following
assumptions:

(a) The local allocation is scoped, as in Algol 60, Fortran and
C99 and (with a secondary stack) Algol 68 and alloca();
(b) All local objects are fixed-size, and there are a fixed number
of arguments;
(c) All other objects and variable length argument lists go onto a
secondary stack, which also gives better cache usage;
(d) Interrupts and system calls use a separate stack, which gives
much better RAS and security.

Stack pointer -> Register save area for current procedure
[Optional] Address of secondary stack top
[Optional] Address of enclosing name scopes
Arguments to current procedure
Local variables for top-level scope
[As required] Local values for subsidiary scopes

Note that a register is NOT needed for the current stack top, as
the compiled code knows exactly how much local data is being used.
The address of the secondary stack and enclosing name scopes can
be kept in registers, either all of the time or over the call, but
need not be.

The calling sequence consists of:
[Optional] Storing the address of the secondary stack
[Optional] Storing the address of the current enclosing name
scopes
Storing the arguments
Bumping the stack pointer
Calling the new procedure

The entry sequence consists of:
Saving the registers, including the stack pointer
[Optional] Saving the address of the secondary stack, but only
if it is used by this procedure

The exit sequence consists of:
Restoring the registers, including the stack pointer
Returning

There is absolutely no difficulty in adding automatic stack extension
(including discontiguous stacks) and/or stack overflow checking to this.
The compiler knows the maximum amount of stack used by a procedure, so
it can put in a test on entry immediately after saving the arguments
that there is still enough space. If not, it gets more or aborts, as
required.

So what's the problem? :-)


Regards,
Nick Maclaren.

nm...@cam.ac.uk

Nov 25, 2012, 7:04:02 AM
In article <k8sv3c$8nv$1...@needham.csi.cam.ac.uk>, <nm...@cam.ac.uk> wrote:
>In case anyone is interested, this is one of the better ways to
>implement local allocation, a rising stack, and use nothing but
>positive offsets. ...

I forgot to mention that you can, of course, use the heap instead
of a secondary stack (as is done by most compilers today), but you
obviously have to free the space explicitly if you do.


Regards,
Nick Maclaren.

Gerald Breuer

Nov 25, 2012, 10:08:53 AM
If the stack grew upward and you pushed an item with
*++s = x (pseudocode, not C),
you would have to know the size of the last item in order to
push a further item.
If you pushed an item and then grew the stack upward with
*s++ = x (pseudocode, not C - really ^^),
you would have to address the top item on the stack with
an offset.
With a stack that grows downward and pushes with
*--s = x,
you don't have to know the size of the last item, and you
can address the top item with a zero offset.

Michael S

Nov 25, 2012, 10:53:05 AM
On Nov 25, 5:09 pm, Gerald Breuer <Gerald.Bre...@googlemail.com>
wrote:
I like your theory.

Andy (Super) Glew

Nov 25, 2012, 12:55:54 PM
Conjecture:

In the beginning there were statically allocated variables.
No heap. No stack. Allocated starting at address 0 or low, and growing up.

Then came stacks. To maximize space, allocated starting at address
high, and growing down until they met the statics.

Then came heaps. ...



--
The content of this message is my personal opinion only. Although I am
an employee (currently of MIPS Technologies; in the past of companies
such as Intellectual Ventures and QIPS, Intel, AMD, Motorola, and
Gould), I reveal this only so that the reader may account for any
possible bias I may have towards my employer's products. The statements
I make here in no way represent my employers' positions on the issue,
nor am I authorized to speak on behalf of my employers, past or present.

Bakul Shah

Nov 25, 2012, 1:46:19 PM
May be because first came static allocation, then
stack allocation and then heap memory allocation?

nm...@cam.ac.uk

Nov 25, 2012, 2:06:45 PM
In article <50B25BAA...@SPAM.comp-arch.net>,
Andy (Super) Glew <an...@SPAM.comp-arch.net> wrote:
>
>Conjecture:
>
>In the beginning there were statically allocated variables.
>No heap. No stack. Allocated starting at address 0 or low, and growing up.
>
>Then came stacks. To maximize space, allocated starting at address
>high, and growing down until they met the statics.

Sorry, but no. I don't go back to the early days of stacks, but
that wasn't the case even when I started. Yes, static allocation
came first, but early stacks grew up as often as down, and there
was usually no heap. Heap didn't really come into widespread use
until memory became plentiful.

The layout of static at the bottom and a stack falling to meet it
was introduced into the wider world by Unix (hence: not before
1970), though it may have come from earlier DEC systems.


Regards,
Nick Maclaren.

Ivan Godard

Nov 25, 2012, 3:33:56 PM
Plausible for machines that are using the stack for temporary storage
the way we use registers today. So the model would be of most use for
low-register-count machines with fast memory compared to processor
speed: 8080 and friends, Nova, and so on.

John Levine

Nov 25, 2012, 4:20:37 PM
>In the beginning there were statically allocated variables.
>No heap. No stack. Allocated starting at address 0 or low, and growing up.
>
>Then came stacks. To maximize space, allocated starting at address
>high, and growing down until they met the statics.

A reasonable theory, but it's hard to find support for it. Two
influential machines with stacks were the 1964 vintage PDP-6 (PDP-10,
DEC-20) and the 1969 PDP-11 (VAX). On the PDP-6, stacks grew up, on
the PDP-11, they grew down. Both can do positive or negative index
offsets equally well, no preference there. Bell et al in "Computer
Engineering" note the stack directions but don't give any motivation.

The PDP-11 did a lot of things in what were then seen as perverse ways,
notably little-endian addressing. The downward-growing stack might
have been partly that, partly the idea that the program and heap sit
at the bottom of memory and the stack at the top. On the other hand,
the original PDP-11/20 had a stack overflow trap when the stack
crossed location 0400, but the book notes that nobody used it and it
was dropped from later versions.

So your guess is as good as any, but it's still a guess.

Serious stack machines like the B5500 had stacks that grew up, but
since there was no indexing off the stack pointer, the direction was
largely invisible other than to OS memory allocation.

--
Regards,
John Levine, jo...@iecc.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. http://jl.ly

ken...@cix.compulink.co.uk

Nov 25, 2012, 4:27:24 PM
In article <k8sioh$l3i$1...@dont-email.me>, iv...@ootbcomp.com (Ivan Godard)
wrote:

> Yes, but you could start the stack at the bottom (growing up

Not with the 6502; page zero was needed. With CP/M, page zero was used
for various system functions, so you could not put the stack there
either. The TPA was between reserved space at the top of memory and
reserved space at the bottom. The TRS-80 had the bottom 16K of memory
reserved for system use. In addition, I believe the push and pop
operations on those processors were hardwired.

Ken Young

Robert Wessel

Nov 26, 2012, 12:13:41 AM
On Sun, 25 Nov 2012 15:27:24 -0600, ken...@cix.compulink.co.uk wrote:

>In article <k8sioh$l3i$1...@dont-email.me>, iv...@ootbcomp.com (Ivan Godard)
>wrote:
>
>> Yes, but you could start the stack at the bottom (growing up
>
> Not with the 6502 page zero was needed.

On the 6502, the stack was fixed to be on page 1, and the stack
pointer was eight bits.

Quadibloc

Nov 30, 2012, 10:01:15 PM
I would think that growing down was just the natural way to fill
memory, adopted without thinking.

And it has one advantage. The same program, running in two computers,
one with 4,096 memory locations, and another with the memory expanded
to 8,192 words, could run with, say, a stack of 128 elements in the
first computer, and of 4,224 elements in the second if the stack was
placed at the end of the program.

John Savard

John Levine

Nov 30, 2012, 10:10:22 PM
>I would think that growing down was just the natural way to fill
>memory, adopted without thinking.

Nope. On the PDP-6 and -10, stacks grow up. On the PDP-11, they grow
down. Someone decided to switch the direction.

Mike Hore

Dec 2, 2012, 8:40:21 PM
On 1/12/12 2:10 PM, John Levine wrote:
>> I would think that growing down was just the natural way to fill
>> memory, adopted without thinking.
>
> Nope. On the PDP-6 and -10, stacks grow up. On the PDP-11, they grow
> down. Someone decided to switch the direction.
>

From my memory, the PDP-11 was the first. That's the first time I
remember any stack growing down. It suited the instructions, since the
address modes included PRE-decrement and POST-increment. So with the
stack growing down, you could address the top item with a zero offset
and save a whole 16 bits. ISTR the first papers on the PDP-11 made a
bit of noise about this.

Cheers, Mike.


---------------------------------------------------------------
Mike Hore mike_h...@OVE.invalid.aapt.net.au
---------------------------------------------------------------

Joe keane

Dec 3, 2012, 11:40:52 PM
In article <k9bseu$2j0v$1...@leila.iecc.com>,
John Levine <jo...@iecc.com> wrote:
>On the PDP-6 and -10, stacks grow up.

That's weird.

>On the PDP-11, they grow down.

I think PDP-11 (mini) invented this.

Then 6502 (home computer) just copied that.

And the RISC 'workstations' used the same thing.

But they all died, so we use 'micros' now...

[1/2*:-)]

Torben Ægidius Mogensen

Dec 5, 2012, 7:37:00 AM
Yes. And that meant that you couldn't put frames on the stack, as you
would quickly run out of stack space. So languages that needed frames
put these elsewhere, using a 16-bit stack pointer stored in page 0 and
(zero-page),Y addressing.

Since offsets (stored in Y) are positive, it made most sense to let the
frame stack grow downwards (like the return stack in page 1).

Torben