The short answer: it was designed that way.
The 8086 was (assembly-source) compatible with the 8080. Stanley Mazor of
the 8080 design team explains why the 8080 stack was designed that way:
"Deleting the on-chip stack saved chip area, but was a net advantage to the
user - now the stack had unlimited size. I defined the stack as growing
downward from the high end of memory; this facilitated indexing into the
stack and simplified displaying the stack. This was abandoned on the 8086."
http://www.xnumber.com/xnumber/Microcomputer_invention.htm
By "abandoned ...," I think he means the stack was put back on-chip for the
8086.
Numerous past threads on comp.arch, comp.arch.embedded,
alt.folklore.computers, or comp.dsp will blame the 8080, PDP-11, or K&R C...
depending on the author...
Rod Pemberton
Or does he mean that it being stuck at the high end of memory
was abandoned, as it could now reside in arbitrary locations?
Phil
--
Dear aunt, let's set so double the killer delete select all.
-- Microsoft voice recognition live demonstration
That was my first thought: that the stack was SP-based on x86 and not fixed
to the top of memory (which I'm assuming from his statement, since I don't
know much about the 8080... Altairs and Imsais were a couple of years before
I got into computers). But, after a reread, I said "What, that doesn't make
much sense... Why would he mention the 'abandonment' of something as trivial
and irrelevant as the stack being at the top of memory? Using the word
'abandoned' would be overkill. Now, if he was talking about testing out
ideas in silicon..." The rest of the article indicated most of their changes
were intended to give a 10x speedup. So, the smallest contributors to
performance lost their on-chip silicon. I suspect the designers found an
in-memory stack on the 8080 to be an order of magnitude slower than an
on-chip stack, considering the memory speeds (500ns or slower) of the era.
The 6502 (a year and a half or so after the 8080) was the first
microprocessor with a pipeline (not the first pipelined CPU - that was
apparently Seymour Cray's CDC 6600), so it probably wasn't much more than
one order of magnitude. Anyway, I'd assume that once the designers had more
silicon to work with, they put the stack back on the silicon. I'd have to do
some research to find out for sure, but it's a highly plausible assumption
to me...
Rod Pemberton
> That was my first thought: that the stack was SP based on x86 and not
> fixed to top of memory (which I'm assuming from his statement since I
> don't know much about the 8080...
The 8080, like the 8086, had a 16-bit stack pointer, which allowed the
stack to be placed anywhere in memory, and 14 stack-related
instructions. This was a major improvement over its predecessor, the
8008, which had an internal stack of seven 14-bit entries, and no
instructions to directly manipulate it.
There has never been a case where the stack was fixed to the top of
[r/w] memory.
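A toy C model of what PUSH and POP do on the 8080 makes the point: SP is
just a 16-bit register indexing ordinary RAM, so the stack can be put
wherever the programmer likes (the mem[] array and the helper names below
are invented purely for illustration):

#include <stdint.h>
#include <stdio.h>

/* Toy model of the 8080's memory-based stack.  Only the push/pop
 * behaviour mirrors the real PUSH rp / POP rp instructions.           */
static uint8_t  mem[0x10000];       /* flat 64K address space          */
static uint16_t sp;                 /* the 8080's 16-bit stack pointer */

static void push16(uint16_t v)      /* PUSH rp: store high, then low   */
{
    mem[--sp] = (uint8_t)(v >> 8);
    mem[--sp] = (uint8_t)(v & 0xFF);    /* stack grows downward        */
}

static uint16_t pop16(void)         /* POP rp: fetch low, then high    */
{
    uint16_t lo = mem[sp++];
    uint16_t hi = mem[sp++];
    return (uint16_t)((hi << 8) | lo);
}

int main(void)
{
    sp = 0x0000;                    /* like LXI SP,0: first push wraps  */
    push16(0x1234);                 /* ...around to 0xFFFF/0xFFFE       */
    printf("SP=%04X\n", (unsigned)sp);
    printf("popped %04X\n", (unsigned)pop16());
    return 0;
}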
-- Chuck
According to the link Rod gave, by contrast, the 4004 had a 4-level
stack on the silicon of the microprocessor itself.
For the 8086, the stack location is governed by the SS: register,
complete with address wrap-around within the 64K segment, as you know. So
the interpretation you suggest in your question is the only one that fits.
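To make the wrap-around concrete (the SS value below is hypothetical):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* 8086 real-mode stack addressing: physical = SS*16 + SP, with SP
     * wrapping around inside its 64K segment.                          */
    uint16_t ss = 0x2000;
    uint16_t sp = 0x0000;           /* stack "at the top" of its segment */

    printf("SS:SP = %04X:%04X -> physical %05lX\n",
           (unsigned)ss, (unsigned)sp, (unsigned long)ss * 16 + sp);

    sp -= 2;                        /* a PUSH wraps SP around to 0xFFFE  */
    printf("after PUSH: %04X:%04X -> physical %05lX\n",
           (unsigned)ss, (unsigned)sp, (unsigned long)ss * 16 + sp);
    return 0;
}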
Steve
> Phil
> --
> Dear aunt, let's set so double the killer delete select all.
> -- Microsoft voice recognition live demonstration
"MCS6500 Microcomputer Family Programming Manual," 2nd ed., (C) MOS
Technology, Inc., January 1976
pg 52:
"5.1 Concepts of Pipelining and Program Sequence"
" The overlap of fetching the next memory location while interpreting
the current data from memory minimizes the operation time of a normal 2-
or 3-byte instruction and is referred to as _pipelining_. It is this
feature
that allows a 2-byte instruction to only take 2 clock times and a 3-byte
instruction to be interpreted in 3 clock cycles."
(typewrite underscore of "pipelining" appears to be original text)
Someone should probably slap that up on the 6502 Wikipedia page, since the
page seems to dismiss this even though the 1-stage pipeline is the primary
reason the 6502 had a throughput similar to the 8080 while running at 1/4
of its clock.
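Rough numbers, with the cycle counts quoted from memory (so worth
double-checking): an immediate load is MVI r,d8 at 7 T-states on the 8080,
about 3.5 microseconds at 2 MHz, versus LDA #imm at 2 cycles on a 1 MHz
6502, i.e. 2 microseconds. The fetch overlap described above is exactly
what gets the 6502's common instructions down to 2-4 cycles in the first
place.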
Rod Pemberton
However, "downward from the high end of memory" does imply that.
If the guy's going to be ambiguous, then it's not surprising
that we're clutching at straws trying to work out what contrast
he was trying to make.
Happy new year to all those yet to reach it, and happy 2008
to all those who already have.
> If the guy's going to be ambiguous, then it's not surprising
> that we're clutching at straws trying to work out what contrast
> he was trying to make.
Indeed so. However, since most of the items which he was talking
about were NOT, in fact, abandoned, I think
that I know what he did mean.
The 8080 was a flat-memory-model CPU. Although the stack could be
placed at any location in physical memory, it made a lot of sense for
the programmer to put it at the highest possible address, to keep it as
far away as possible from the heap -- because the heap grew upward and
the stack grew downward.
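In flat-model terms (numbers invented for illustration), the free space is
simply whatever lies between the two, which is why you want them to start
as far apart as possible:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Flat 64K 8080-style layout: code and data sit low, the heap grows
     * up from the end of the data, the stack grows down from the top.   */
    uint16_t heap_top = 0x4800;     /* hypothetical current heap break    */
    uint16_t sp       = 0xFF00;     /* hypothetical current stack pointer */

    printf("room left before heap and stack collide: %u bytes\n",
           (unsigned)(sp - heap_top));
    return 0;
}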
The 8086, on the other hand, was a segmented-memory-model CPU: it could
address 1024K of memory in total, but each segment covered only 64K of it.
In this model, the heap and the stack would normally sit in different
segments, and thus in different regions of physical memory, and so could
not collide. From the point of view of a hardware designer, one could
reasonably say that the concept of a flat memory model with the stack at
the top had been abandoned.
However, IBM chose to map all four segments to the same physical
memory, thus restoring the flat memory model (at the hardware level),
and, of course, programmers kept on putting the stack at the top of
memory.
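To put numbers on the segmented case (segment values invented for
illustration): give DS and SS different bases and the heap and the stack
land in disjoint physical regions; make them equal, as in the flat mapping
above, and the old collision problem returns.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* 8086 real-mode arithmetic: physical = segment*16 + offset.        */
    uint16_t ds = 0x1000;           /* hypothetical data segment (heap)   */
    uint16_t ss = 0x3000;           /* hypothetical stack segment         */

    unsigned long heap_base = (unsigned long)ds * 16;          /* 0x10000 */
    unsigned long stack_top = (unsigned long)ss * 16 + 0xFFFF; /* 0x3FFFF */

    printf("heap  : %05lX..%05lX\n", heap_base, heap_base + 0xFFFF);
    printf("stack : %05lX..%05lX\n", (unsigned long)ss * 16, stack_top);
    /* No DS offset can ever reach an SS address here; with ds == ss the
     * two 64K windows coincide and they can collide again.              */
    return 0;
}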
-- Chuck