I just read the "Intel 386 Microprocessor Design and Development Oral
History Panel"
<
http://archive.computerhistory.org/resources/text/Oral_History/Intel_386_Design_and_Dev/102702019.05.01.acc.pdf>.
Here I'll discuss some of the points mentioned there.
One interesting aspect was that at its start (in 1982) the 386
project was just the compatible upgrade for the stopgap 8086 line
(while the 432 was less than a shining success), with the company
putting more resources into the P7 project (eventually the i960MX).
While the 386 was being developed, IBM PC compatibles became a huge
success with a huge binary-software base, and the 386 project became
king.
The most interesting aspect of the 386 to me is that it turned the
specialized registers of the 8086 into general-purpose registers.
This is covered in this oral history in very little space:
|[John] Crawford: [My model] gave us the 32-bit extensions but without
|having a mode bit and without having a huge difference in the
|instruction sets. As I mentioned earlier, the price for that was we
|kept the eight-register model which was a drawback of the X86, but not
|a major one. I guess in addition I got half credit for the register
|extension because I was able to generalize the registers. On the 8086
|they were quite specialized and the software people hated that
|too. One of the things I was able to get into the architecture was to
|allow any register to be used to address memory as a base or an index
|register, and was even able to squeeze in a scale factor for the
|index.
I had programmed the 286 in assembly language in a course, and it
looked like a mess, while the 386 looks much more regular.  But I
think the only relevant difference is that instead of the
[BX|BP + SI|DI] addressing modes, the 386 supports [reg],
offset[reg], and offset[reg+reg*1|2|4|8].  If you then ignored all
the special-register instructions like LODS (which were slower than
the equivalent sequences of general-register instructions on the 486
anyway), you had general-purpose registers.
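To make that concrete, here is a small sketch in C (the helper name
and the register assignments in the comments are mine, for
illustration only): an access to a 4-byte array element maps directly
onto the 386's scaled-index mode, while 16-bit code has to maintain
the scaled offset itself.

#include <stddef.h>

/* Sum an array of 4-byte ints.  On the 386 the load in the loop body
   can be a single instruction with a scaled index, e.g.
   mov eax,[ebx+esi*4]; the 8086/286 addressing modes have no scale
   factor (and no general base/index registers), so 16-bit code has
   to keep a separately maintained byte offset instead. */
long sum(const int *a, size_t n)
{
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += a[i];      /* 386: one load, base + index*4 */
    return s;
}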
The 386 also supports the 8086 addressing-mode encoding, but as far
as I am concerned, I would not have noticed if it had been behind a
supervisor-settable mode bit (like the 64-bit and 32-bit instruction
sets on AMD64); I never mixed the two addressing-mode encodings, and
never saw any code that did.  You can mix them with the address-size
prefix byte (67h), and the default for a code segment is selected by
the D bit in its descriptor.
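As an illustration (encodings written down from memory, so check
them before relying on them): the same opcode and ModRM bytes decode
differently depending on the address-size attribute in effect, and
the 67h prefix flips that attribute for a single instruction.

/* 8B /r is MOV reg,r/m; the ModRM byte 00 selects [BX+SI] under
   16-bit addressing but [EAX] under 32-bit addressing. */
unsigned char in16[]  = { 0x8B, 0x00 };        /* 16-bit code: mov ax,[bx+si] */
unsigned char in32[]  = { 0x8B, 0x00 };        /* 32-bit code: mov eax,[eax]  */
unsigned char mixed[] = { 0x67, 0x8B, 0x00 };  /* 32-bit code, address-size
                                                  prefix: mov eax,[bx+si]     */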
Another interesting aspect is how fast the project was. Started in
1982, it taped out in 1985, and the Compaq Deskpro 386 was announced
in September 1986. And this time span included quite a long
"definition" (architecture, not microarchitecture) time. Supposedly
these days CPUs take 5 years to develop (although that's a number I
read many years ago; how true is it these days?).
Finally, they said quite a bit about the 286 in this panel. In
particular, they made this elaborate segment scheme to compete with
the "Zilog MMU". And apparently neither the customers nor most at
Intel liked it (including the designer of the 8086). Interestingly, I
worked on a 286 box running Xenix in a course, and it seems to me that
if you can live with 64KB processes (and swapping instead of paging),
the 286 MMU does ok.
For the 386 they apparently wanted to add paging (it is not really
mentioned that way; they only say something about a large flat
address space), and it took them a long time to come up with the
combination of segments and paging that they eventually selected:
segments map into a linear address space, and paging then maps the
linear address space to physical memory.  This does not look
particularly sensible to me, but if you follow the premise that OSs
use either segments or paging, but not really both at once, it's
probably the solution that's cheapest to implement.
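For reference, a minimal sketch of that two-step translation in C
(the helper names and the much-simplified structures are mine;
present/permission bits and fault handling are omitted, and the
tables are assumed to be directly addressable):

#include <stdint.h>

typedef struct { uint32_t base, limit; } segdesc; /* simplified */

/* Step 1: segmentation turns segment+offset into a linear address. */
uint32_t to_linear(const segdesc *sd, uint32_t offset)
{
    /* offset > sd->limit would raise #GP on the real chip */
    return sd->base + offset;
}

/* Step 2: paging maps the linear address to a physical address via
   a two-level walk: 10 bits of directory index, 10 bits of table
   index, 12 bits of page offset (4KB pages, as on the 386). */
uint32_t to_physical(const uint32_t *pagedir, uint32_t linear)
{
    const uint32_t *pagetab =
        (const uint32_t *)(uintptr_t)(pagedir[linear >> 22] & ~0xFFFu);
    uint32_t frame = pagetab[(linear >> 12) & 0x3FF] & ~0xFFFu;
    return frame | (linear & 0xFFF);
}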
Back to the 286: What the panelists stated was that at least in 1982
(too late for the 286) Intel was aware that the 68000 with its large
flat address space was much more attractive than the Zilog MMU
features.  And Bob Childs apparently had already told them that when
the 286 was "defined":
|[Jim] Slager: I remember I think I was with Bob Childs and we were
|telling him all the great things about the 286 and he said, "Well, do
|people really want that? Don't they just really want large address
|spaces?" Oh man, <laughter> because he had it 100 percent right, but
|we didn't listen
So let's consider what might have happened if they had listened.
One approach would have been to just extend the 8086 to 32 bits,
including the segment registers (which would still be used with a
4-bit shift).  That should easily fit into the 134,000 NMOS
transistors of the real 80286 (twice as many as the 68000 supposedly
has).  Provide for an off-chip paging MMU.
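The address arithmetic of that hypothetical design would just be the
8086 formula with wider operands (this is my reading of the sketch
above, and the function name is made up):

#include <stdint.h>

/* Hypothetical alternative-286: 8086-style segmentation, but with
   32-bit segment registers and offsets, still combined via the
   8086's 4-bit shift; that yields up to a 36-bit address space. */
uint64_t linear_address(uint32_t seg, uint32_t offset)
{
    return ((uint64_t)seg << 4) + (uint64_t)offset;
}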
If there had been any transistors left, John Crawford's
general-purpose register addressing modes would have been an ideal
addition.  However, if we assume a 16-bit external interface like
the 286 had, the question is whether the longer encodings of the
general addressing modes (the base+index forms require an extra SIB
byte) would have caused enough of a slowdown to discourage their
use.
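Concretely (again, encodings from memory): a base+index load is two
bytes in the 16-bit encoding but three bytes in the general one.

unsigned char old_mode[] = { 0x8B, 0x01 };       /* mov ax,[bx+di]      */
unsigned char new_mode[] = { 0x8B, 0x04, 0xB8 }; /* mov eax,[eax+edi*4]:
                                                    ModRM plus SIB byte */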
If Intel had gone that way, it would be interesting to see how
various software projects would have played out.  Would the MMU have
become a universal add-on to the alternative-reality 286?  How would
OS/2 have developed?  How would Windows have developed?
- anton
--
'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
Mitch Alsup, <
c17fcd89-f024-40e7...@googlegroups.com>