an...@mips.complang.tuwien.ac.at (Anton Ertl) wrote:
> The IBM 360 is often credited with introducing or at least widely
> popularising 8-bit bytes and byte addressing, and probably rightly so,
> but I wonder how the IBM 360 influenced the early microprocessor
> makers. Maybe, if Intel had decided to make a 7007 instead of the
> 8008, we would be using the AMD56 architecture now.
Judging from Intel's other horrific blunders and foolish missteps over the
past decades, they were and are relatively hostile to outside influences,
and they seem to have gone out of their way several times to shoot themselves
in the foot. IBM has always had a 4K instruction segment relative to the
code base register. Intel's 64K segment was positively luxurious in
comparison, but they had so few registers available that they made their
system much harder to use than it should have been. IBM made the transitions
from 24-bit to 31-bit to 64-bit addressing painless and absolutely transparent.
You can link 24-, 31-, and 64-bit code in one executable, and programs in any
mode or residency can call other programs in any mode and residency. 64-bit
code doesn't require any larger data footprint than 24-bit code except where
you need to store an actual 64-bit address, since there are instructions to
load addresses cleanly from operands of various lengths. The code footprint
is the same because IBM had plenty of room in their instruction encodings.
The 64-bit registers are usable in 31- and 24-bit mode.
AMD and Intel flubbed the 32- to 64-bit changeover so badly that they still
haven't fixed it, and doing it correctly will probably require changes to ELF
and various OSes, which isn't going to happen overnight. Nobody seemed to
have a clue on this, even though they should have seen it coming from the
16- to 32-bit changeover.
Intel's instruction encoding is bizarre and complicated in the extreme. It's
amusing to see how numb Intel users have become to this; they don't even
consider it odd anymore. On Intel, some instructions that are available in
16/32-bit mode (AAA comes to mind) don't work in 64-bit mode, you can't use
registers in the wrong mode, and you can't link an executable that contains
programs in different modes. I'm sure the Intel experts among you know even
more problems and inconsistencies. As far as I know, IBM has none of these
problems. IBM is very careful with evolution. Object code from the 1960s
still runs on the latest hardware. At the same time, there is very little
apparent legacy burden on the instruction set, software, or OS.
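For a concrete taste of the mode-dependent encoding mess: in x86-64, the
bytes 0x40 through 0x4F became the REX prefix (0100WRXB), but in 16/32-bit
mode those very same bytes are one-byte INC/DEC instructions, so a decoder
has to interpret them differently depending on the mode. A hedged little
Python sketch of REX decoding (my own illustration, not any official tool):

```python
# x86-64 REX prefix: one byte of the form 0100WRXB (0x40..0x4F).
# In 16/32-bit mode these same byte values encode one-byte INC/DEC reg,
# so the meaning of the byte depends on the processor mode.

def decode_rex(byte: int):
    """Return the REX fields (W, R, X, B) if byte is a REX prefix, else None."""
    if 0x40 <= byte <= 0x4F:
        return {
            "W": (byte >> 3) & 1,  # 1 -> 64-bit operand size
            "R": (byte >> 2) & 1,  # extends the ModRM reg field
            "X": (byte >> 1) & 1,  # extends the SIB index field
            "B": byte & 1,         # extends the ModRM r/m (or opcode reg) field
        }
    return None  # not a REX prefix in 64-bit mode

print(decode_rex(0x48))  # REX.W, as seen on 64-bit MOVs
print(decode_rex(0x90))  # not in the REX range
```

Repurposing the short INC/DEC encodings is also why those one-byte forms
simply don't exist in 64-bit mode — exactly the kind of inconsistency
complained about above.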
Given that IBM was there first in all cases and did things seamlessly and
elegantly (and their tech pubs are freely available online), there really
isn't any excuse for Intel's consistent history of embarrassing fuckups.
> So was it an influence coming from the IBM 360 that made Intel (and
> also Motorola and other Microprocessor makers) decide to go with 8-bit
> bytes, was it technical reasons (like better compatibility with 4-bit
> CPUs), or was it just an arbitrary decision (unlikely given that all
> early microprocessors I know of (except the 4004) used 8-bit bytes).
I think Motorola and the other chipmakers probably did keep a close eye on
IBM, although nobody seems to have adopted their hardware/software-as-a-package
philosophy, except perhaps, to a much lesser extent, Sun with Solaris/SPARC,
and that was also a very nice platform.
> So, why did microprocessor makers decide on byte addressing and an
> 8-bit byte. Did they want compatibility with something influenced by
> the IBM 360, or was it something else?
Unless there are people who worked at Intel in the 1970s participating in
this list, I think the best we can do is conjecture. Personally, I'm very
sick of Intel's shitty "engineering" and I'm glad to see other companies and
architectures make headway whenever they can.