rickman <gnu...@gmail.com> writes:
>On 9/16/2016 4:37 AM, Anton Ertl wrote:
>> Anyway, Intel could have achieved assembly-source compatibility while
>> still providing a flat address space.
>
>Huh? The 8080 has a flat address space. How do you extend that to a
>larger address space without segments? The basic instructions assume 16
>bit addresses. Are you suggesting a totally different instruction set
>on *top* of the 8 bit register instructions?
There are various ways to do it.
The 65C816 extended the 6502 instruction set to support a 16MB address
space, and was not only assembly language compatible, but even binary
compatible.
The MIPS, SPARC, and Power architectures were extended from 32-bit to
64-bit and were not only assembly language compatible, but even binary
compatible, and similar things happened to the S/360 architecture.
The 386 architecture included the 8086 and 80286 architectures through
its 16-bit modes.
Similarly, the AMD64 architecture has modes for running 386
architecture programs and 8086 and 80286 programs in addition to its
64-bit mode for its 64-bit architecture.
ARM went in the same direction with ARMv8: it provides both a 64-bit
architecture and a 32-bit architecture, with modes for executing
programs in either.
I am sure there are other examples.
>>> In the end Intel ended up with a full 32 bit CPU with 32 bit addressing
>>> and a 64 bit CPU with 64 bit addresses. The purpose of the segment
>>> registers now is not so much a way to utilize a large address space, but
>>> to separate the various memory sections and support security.
>>
>> The only purpose of segment registers (in 64-bit mode) is to provide
>> thread-local storage. Instead of extending segment descriptors for
>> 64-bit use (which would have required more space), segments in 64-bit
>> mode were reduced in capability and can no longer be used for
>> security; they have not been used for security before that, anyway.
>> Segment fans always whine about security; paging goes home and fscks
>> the system:-).
>
>You seem to be totally ignoring the incremental nature of the
>development of the technology. Remember how the 8086 was available
>earlier than the 68000? That's because they didn't try to solve
>tomorrow's problems today. The 68000 has numerous issues such as higher
>transistor count and slower throughput compared to the 8086. It was
>only if you looked many years ahead that the 68000 looked like a better
>design.
I did not have to look ahead. I just had to look at the assembly
language. Anyway, I don't know what this has to do with my refutation
of your claim about segments being used for security these days.
Although, come to think of it, maybe with "now" you mean a time when
the 80286 was current and there were attempts to use protected mode
for security (maybe in OS/2 1.x, now 25 years gone).
> In the end the x86 line dominated, so clearly it was the better
>choice.
They had to invent a new architecture for 32 bits, while the 68000
line could continue with its existing architecture. And the 8086 does
not dominate; AMD64 (and ARMv8) dominates. AMD64 still carries the
8086 along in some dark corner, but that is hardly used these days, if
at all.
As for better choice: If IBM had chosen the 68000, it would have
dominated, and it would clearly have been the better choice.
As for performance,
<http://performance.netlib.org/performance/html/dhrystone.data.col0.html>
lists an 8MHz 8086 (AT&T PC6300) at 0.44 Dhrystone MIPS, a 7.16MHz
68000 (Amiga 1000) at 0.54 Dhrystone MIPS, and an IBM PC (4.77MHz
8088) at 0.22 Dhrystone MIPS. So no, they did not choose the 8088 for
performance.