
[?] Why should distros be called i386 for a 32-bit PC, and amd64 for a 64-bit PC, when Intel Core PCs are also 64-bit systems?


Susmita/Rajib

Mar 14, 2021, 7:10 AM
While Intel PCs are also 64-bit processors?

For instance, my current laptop is a Lenovo IdeaPad 320-15ISK 80XH01FKIN
15.6-inch laptop (6th Gen Core i3-6006U/4GB/2TB/Integrated Graphics),
with a 64-bit processor.

It can't be that the intellectuals, technocrats and cognitive elites
involved in developing this complete OS and its packages are all
mistaken in their perceptions. That seems impossible.

So why such naming?

Could I be educated in this regard please?

The Wanderer

Mar 14, 2021, 7:30 AM
On 2021-03-14 at 06:49, Susmita/Rajib wrote:

> While Intel PCs are also 64-bit processors?

Because of the history of the processor microarchitectures involved.

The x86 processor line (32-bit and older) was, to the best of my
knowledge, originally an Intel thing. Before i386 (where the 'i' may
stand for Intel, I'm not sure), there was the 286, and other things earlier
than that; if my offhand memory is accurate, the oldest one was
probably called the 8086. After i386 came the 486, 586, and 686; current
Debian 32-bit packages are actually compiled against the 686 baseline,
not 386 as such.
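
A minimal sketch of how this naming surfaces to a C programmer, assuming
GCC or Clang (both predefine __i386__ when targeting 32-bit x86 and
__x86_64__ when targeting amd64):

  /* arch_name.c: report which x86 flavour this binary was built for.
     Build with "gcc arch_name.c", or "gcc -m32 arch_name.c" for a
     32-bit binary on an amd64 box (needs the 32-bit support packages). */
  #include <stdio.h>

  int main(void)
  {
  #if defined(__x86_64__)
      puts("compiled for amd64 (x86_64)");
  #elif defined(__i386__)
      puts("compiled for 32-bit x86 (the i386 family)");
  #else
      puts("compiled for some other architecture");
  #endif
      return 0;
  }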

Intel owns the patents for the 32-bit x86 CPU architecture, and licenses
them to other companies for a price. AMD is one of those other
companies; that's how AMD is allowed to create 32-bit x86 CPUs.

When 64-bit came along, rather than extending the x86 line, Intel
started from scratch and designed an entire new CPU architecture. That
got called ia64, and it never caught on; it eventually failed in the
marketplace, except possibly in very limited market segments.

At around the same time, AMD created a 64-bit CPU architecture which
extended the x86 line, and was backwards compatible with existing
software. That got called amd64, and is also sometimes called x86_64, or
other names in addition. It caught on, and became so successful that
Intel abandoned its ia64 approach and started making amd64 CPUs itself.

AMD owns the patents for the 64-bit amd64 CPU architecture, and licenses
them to other companies for a price. Intel is one of those other
companies; that's how Intel is allowed to create 64-bit amd64 CPUs.


Or, put briefly: because AMD created the underlying design for how that
type of CPU works, even if Intel is the one making the specific CPU
model in question.

Does that make sense?

--
The Wanderer

The reasonable man adapts himself to the world; the unreasonable one
persists in trying to adapt the world to himself. Therefore all
progress depends on the unreasonable man. -- George Bernard Shaw


Andrei POPESCU

Mar 14, 2021, 7:30 AM
Because the architecture was created by AMD:

https://en.wikipedia.org/wiki/X86-64#AMD64

Kind regards,
Andrei
--
http://wiki.debian.org/FAQsFromDebianUser

Eduardo M KALINOWSKI

Mar 14, 2021, 7:30 AM
Because AMD was first in developing what is now known as amd64; at the
time, Intel was pursuing its non-i386-compatible architecture, ia64. Later
Intel also implemented the same architecture as AMD, but the name stuck.

I believe the i in i386 and similar stands for Intel, even though several
other companies also made i386 chips.


--
Eduardo M KALINOWSKI
edu...@kalinowski.com.br

to...@tuxteam.de

Mar 14, 2021, 7:40 AM
As a direct response to your subject, I quote: "Why should distros
be called i386 for a 32-bit PC, and amd64 for a 64-bit PC,
when Intel Core PCs are also 64-bit systems?"

Because the currently successful Intel architecture (Core, etc.)
is (more or less) a copy of AMD's.

The history goes roughly like this: Intel designed a 64-bit
architecture to replace their aging 32-bit line of *86 processors
(386 and followers), which they called Itanium [1]. As sometimes
happens with such technology jumps, it was too ambitious and
its market acceptance was a bit disappointing (this happened to
Intel a couple of times in its history, whenever it tried to
break out of its compatibility treadmill [2]. It is interesting
to see how they have themselves become victims of the very
technological lock-in they take advantage of).

Anyway: AMD saw its window of opportunity and came up with a far
more conservative 64-bit architecture which was much more
backward compatible, the AMD64, also sometimes called x86-64 [3].
Basically, they kept the instruction set and added wider (64-bit)
registers and... many more of them (the small register count was a
known weakness of the 32-bit x86 family).
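
The part of that change a C program can observe portably is the pointer
and long width; the doubled register count only shows up in the
generated machine code. A tiny sketch, assuming a Linux system with gcc
and its 32-bit support packages:

  /* widths.c: the same source sees 4-byte pointers and longs when
     built with "gcc -m32" and 8-byte ones with "gcc -m64". */
  #include <stdio.h>

  int main(void)
  {
      printf("sizeof(void *) = %zu\n", sizeof(void *));
      printf("sizeof(long)   = %zu\n", sizeof(long));
      return 0;
  }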

To keep AMD from eating all of their lunch, Intel had to follow,
so that's why they copied an architecture from AMD who copied
it from Intel :-)

Well, more or less. Follow the links below for the whole, long
story.

And oh, Linux ran on Itanium, too. Linus Torvalds didn't like
that architecture, which he called "Itanic" (although I think
others came up with that name).

Cheers

[1] https://en.wikipedia.org/wiki/Itanium
[2] https://en.wikipedia.org/wiki/Intel_processors#32-bit_processors:_the_non-x86_microprocessors
[3] https://en.wikipedia.org/wiki/AMD64

- t

Roberto C. Sánchez

Mar 14, 2021, 7:40 AM
That is an excellent summary.

The only thing I would add is that Intel, in an effort not to appear
completely buffoonish, does not refer to amd64. They might use x86_64
more commonly now (I'm not certain), but for quite some time they used
the designation "EM64T" (standing for "Extended Memory 64 Technology").
Occasional references to em64t can be found in some places. It refers
to the same thing: amd64 <-> x86_64 <-> em64t.

It was a bitter pill for Intel to swallow that they, as the originators
of the x86 architecture, could have been so far off when it came to the
development of the successor 64-bit architecture.

Regards,

-Roberto

--
Roberto C. Sánchez

Andrei POPESCU

Mar 14, 2021, 7:50 AM
On Sun, 14 Mar 21, 07:19:25, The Wanderer wrote:
>
> When 64-bit came along, rather than extending the x86 line, Intel
> started from scratch and designed an entire new CPU architecture. That
> got called ia64, and it never caught on; it eventually failed in the
> marketplace, except possibly in very limited market segments.

As far as I recall from articles at the time, there were good reasons to
use the opportunity of the transition from 32 to 64 bits to create a
completely new architecture.

Apparently the x86 architecture has some significant deficiencies, which
probably explains why it's now being challenged more and more by ARM
(comparable performance at a fraction of the power consumption).

Regardless of the merits (or not) of the ia64 architecture, Intel simply
tried to force the industry to follow its lead, at significant
additional costs (see RAMBUS), but the industry chose amd64 instead.

In hindsight, it probably would have been wiser for Intel to make the
transition to ia64 as smooth as possible and charge higher prices later,
with AMD out of the game. Fortunately for us (consumers) they were
overconfident.

Peter Ehlert

Mar 14, 2021, 7:50 AM
It's historical.
I'm old, I was there.
I'm old, so I don't remember the details.
AMD was the first on the market with 64-bit hardware. (I was an early
adopter.)
Packages and kernels were named "amd64" to distinguish them from the
32-bit versions.

*It's not a good thing to change names... a gazillion things would be
affected.

to...@tuxteam.de

Mar 14, 2021, 8:00 AM
On Sun, Mar 14, 2021 at 04:25:38AM -0700, Peter Ehlert wrote:

[...]

> AMD was the first on the market with 64bit hardware. (I was an early
> adopter)

Well, nearly. Itanium Merced was 2001 [1] (although you wouldn't buy
/that/ as a private person), DEC Alpha was even 1992 [2]; it was the
first 64-bit hardware which ran Linux.

The first AMD64 aka x86-64 CPU was the Opteron, from AMD, 2003 [3]. But
yes, it was (if you ignore for a moment the second-hand Alphas)
the first that us mere mortals could, you know, buy.

Cheers

[1] https://en.wikipedia.org/wiki/Itanium#Itanium_(Merced):_2001
[2] https://en.wikipedia.org/wiki/DEC_Alpha
[3] https://en.wikipedia.org/wiki/Opteron

- t

songbird

Mar 14, 2021, 8:20 AM

Susmita/Rajib

Mar 14, 2021, 8:40 AM
On 14/03/2021, The Wanderer <wand...@fastmail.fm> wrote:
> To: debia...@lists.debian.org
> Subject: Re: [?] Why should distros be called i386 for a 32-bit
> PC, and amd64 for a 64-bit PC, when Intel Core PCs are also 64-bit
> systems
> Date: Sun, 14 Mar 2021 07:19:25 -0400
... ... [snipped] ... ... [snipped] ... ...
>
> AMD owns the patents for the 64-bit amd64 CPU architecture, and licenses
> them to other companies for a price. Intel is one of those other
> companies; that's how Intel is allowed to create 64-bit amd64 CPUs.
>
>
> Or, put briefly: because AMD created the underlying design for how that
> type of CPU works, even if Intel is the one making the specific CPU
> model in question.
>
> Does that make sense?
... ... [snipped] ... ... [snipped] ... ...

Yes, of course, brilliantly explained: how ia64 failed to get enough
market share and was thus dropped. One might be stirred to verify the
information contained in the post, but this compact overview transmits
the overall idea beautifully.

Thank you very much for this post. Much appreciated.

Sven Hartge

Mar 14, 2021, 10:00 AM
Andrei POPESCU <andreim...@gmail.com> wrote:
> On Sun, 14 Mar 21, 07:19:25, The Wanderer wrote:

>> When 64-bit came along, rather than extending the x86 line, Intel
>> started from scratch and designed an entire new CPU architecture.
>> That got called ia64, and it never caught on; it eventually failed in
>> the marketplace, except possibly in very limited market segments.

> As far as I recall from articles at the time, there were good reasons
> to use the opportunity of the transition from 32 to 64 bits to create
> a completely new architecture.

> Regardless of the merits (or not) of the ia64 architecture, Intel
> simply tried to force the industry to follow its lead, at significant
> additional costs (see RAMBUS), but the industry chose amd64 instead.

IA64 (Itanium) was completely incompatible with the installed i386 base.
The first CPUs had a (very slow) compatibility layer, assisted by
software, so you could run your "legacy" 16-bit/32-bit applications.

Also, the CPU was designed so that many complexities were delegated to
the compiler, which was expected to create optimal code, but the
compilers at the time were not up to the task, greatly hampering the new
architecture.

Intel envisioned IA64 to be the go-to processor for centralized
server-based loads, whereas the i386 was for your on-the-desk PC. (Just
as, at the start of the IBM PC era, the PC was envisioned as a
kinda-smart terminal for the mainframe. The SysRq key is the last
remnant of that legacy.)

Note: when IA64 was designed (starting in 1994 at HP) we were nowhere
near the RAM and frequency limits of the 32-bit i386 architecture,
so it made sense, somewhat.

But years passed, the i386 architecture got better and better, stuff
like MMX, SSE and AVX was incorporated, and IA64 couldn't really keep
up.

And when AMD then presented their AMD64 architecture, which could run
legacy 8-bit/16-bit/32-bit code as fast as the new code, allowing for a
smooth transition, the nickname "Itanic" for IA64 became true: it had
been dead on arrival.



--
Sigmentation fault. Core dumped.

Susmita/Rajib

Mar 14, 2021, 10:20 AM
Thank you, Mr. Roberto C. Sánchez, Mr. Andrei Popescu, Mr. Eduardo M
Kalinowski, Mr. Tomas and Mr. songbird, who posted their replies to
educate me. The Wanderer was so superlative in his exposition that the
rest of my teachers were compelled to play a supporting role. This always
occurs in any cooperative venture, but that doesn't attenuate anyone's
contribution to this cooperative game in any way.
I returned to check the replies to my thread, only to find the post of
The Wanderer, and it was so brilliant that I wrote him a reply thanking
him for his wonderful explanation.
When I returned to my thread a second time, I was pleasantly surprised
to find you all here.
Thank you for also acknowledging the excellent short article of The
Wanderer, directly or indirectly, and limiting yourselves to
complementary posts. Much appreciated.
Best
Rajib

John Hasler

Mar 14, 2021, 11:50 AM
The Wanderer wrote:
> It caught on, and became so successful that Intel abandoned its ia64
> approach and started making amd64 CPUs itself.

Which was unfortunate as the x86 architecture needed to die.
--
John Hasler
jha...@newsguy.com
Elmwood, WI USA

Stefan Monnier

Mar 14, 2021, 3:00 PM
> Well, nearly. Itanium Merced was 2001 [1] (althoug you wouldn't buy
> /that/ as a private person), DEC Alpha was even 1992 [2];

FWIW, MIPS was there even a bit earlier with their R4000 (tho the
software support for it only appeared some years later: they first
wanted to have an installed base to which to deploy the software), which
I believe was the first 64-bit microprocessor.

IIRC the claim back then was that adding 64-bit support to the R4000 was
rather cheap (it increased the die area by only a few percent, and
64-bit adds were still fast enough not to slow down the overall chip's
frequency).

The same must have been true for the Opterons (except that the increase
in die area must have been even smaller, since the CPU core itself had
become a much smaller part of the overall die because of the
incorporation of things like the memory controller and the L1 and L2
caches).

So it was a great move on the part of AMD: cheap to implement but with
an enormous marketing impact.


Stefan

Stefan Monnier

Mar 14, 2021, 3:20 PM
> IA64 (Itanium) was completely incompatible with the installed i386 base.
> The first CPUs had a (very slow) compatibility layer, assisted by
> software, so you could run your "legacy" 16-bit/32-bit applications.

The original plan/claim was that the support for legacy i386
applications would be "just as fast". This never materialized
(unsurprisingly: it's easy to make a CPU that can efficiently run several
slightly different instruction sets (ISAs), like your average amd64 CPU,
which can run applications using the amd64 ISA, the i386 ISA, the 80286
ISA or the 8086 ISA, more or less; but it's much harder to make a CPU
that can efficiently run very different ISAs).

> Also, the CPU was designed so that many complexities were delegated to
> the compiler to create the most optimal code, but the compilers at the
> time were not up to the task, greatly hampering the new architecture.

More specifically, it depended on solving problems against which
compiler writers had banged their heads for several decades already (and
still do). Worse: it was based on "old new ideas"; IOW, it was trying to
solve problems that were already starting to disappear, but was set to
bump into new problems that were already starting to appear.

The name Itanic came from the fact that it seemed likely (even quite
early on, meaning a long time before the name "Itanium" was announced)
to several (most?) knowledgeable CPU designers to lead to a monumental
failure ;-)

> Note: when IA64 was designed (starting in 1994 at HP) we were nowhere
> near the RAM and frequency limits of the 32-bit i386 architecture,
> so it made sense, somewhat.

Indeed. Also, they wanted to move away from the i386 instruction set so
as not to be bothered by pre-existing licensing agreements with AMD, and
thus making sure there'd be no competing implementation. The IA64
architecture was quite complex, and there are reasons to believe that
complexity was seen as a virtue (makes it easier to get more patents and
keep competitors out).

> But years passed, the i386 architecture got better and better, stuff
> like MMX, SSE and AVX was incorporated, and IA64 couldn't really keep
> up.

The IA64 architecture was a resounding success in one area tho: it
killed most of the competition that was coming from "above" (at least
DEC's Alpha, SGI's MIPS, HP's PA, and it likely sped up the demise of
Sun's SPARC, I don't think it had much impact on POWER or PowerPC, OTOH)
and thus helped open up the server (and supercomputer) market for Intel
(and AMD).

> And when AMD then presented their AMD64 architecture, which could run
> legacy 8-bit/16-bit/32-bit code as fast as the new code, allowing for a
> smooth transition, the nickname "Itanic" for IA64 became true: it had
> been dead on arrival.

To make matters worse, the IA64 arrived very late to the market (IIRC
something like 3 years later than planned).


Stefan

Cmdte Alpha Tigre Z

Mar 14, 2021, 10:30 PM
A perfect explanation. All those additional contributions from the
others were also very good.

Susmita/Rajib

Mar 15, 2021, 12:10 AM
I return to thank Dr. Stefan Monnier, Mr. John Hasler, Mr. Sven
Hartge, Mr. Peter Ehlert and Cmdte Alpha Tigre Z, in addition to The
Wanderer, Mr. Roberto C. Sánchez, Mr. Andrei Popescu, Mr. Eduardo M
Kalinowski, Mr. Tomas and Mr. songbird, whom I have thanked already
(in this case, a second Thank You can only do good and show more
appreciation for the inputs).

I have never come across a gentler individual than Dr. Monnier, who
self-captions his own posts as [OFFTOPIC].

Following his lead, I was tempted to mark this thread off-topic,
acknowledging his being so conscientious, but restrained myself, as I
didn't consider the question off-topic.

Thank you very, very much for all your inputs. Please put this thread
to rest and focus instead on helping seekers who need your support. I
have had enough information already from the post of The Wanderer.

Actually, I would have very much needed your precious inputs, had I a
plan to write an article on the topic.

Maybe Debian should make a summary of all the information collected
here and post an article on its page as a pre-emptive clarification of
the flavours that Debian is available in, and not let the information
accumulated here go to waste.

Best
Rajib

Sven Hartge

Mar 15, 2021, 4:20 AM
Stefan Monnier <mon...@iro.umontreal.ca> wrote:

>> Note: when IA64 was designed (starting in 1994 at HP) we were nowhere
>> near the RAM and frequency limits of the 32-bit i386 architecture,
>> so it made sense, somewhat.

> Indeed. Also, they wanted to move away from the i386 instruction set
> so as not to be bothered by pre-existing licensing agreements with
> AMD, and thus making sure there'd be no competing implementation. The
> IA64 architecture was quite complex, and there are reasons to believe
> that complexity was seen as a virtue (makes it easier to get more
> patents and keep competitors out).

HP then also poured additional stuff into the architecture to make
migration from PA-RISC easier. I imagine this also made stuff vastly
more complex.

>> But years passed, the i386 architecture got better and better, stuff
>> like MMX, SSE and AVX was incorporated, and IA64 couldn't really keep
>> up.

> The IA64 architecture was a resounding success in one area tho: it
> killed most of the competition that was coming from "above" (at least
> DEC's Alpha, SGI's MIPS, HP's PA, and it likely sped up the demise of
> Sun's SPARC, I don't think it had much impact on POWER or PowerPC,
> OTOH) and thus helped open up the server (and supercomputer) market
> for Intel (and AMD).

I think IBM is big enough, old enough, and established enough with
POWER that a "young whippersnapper" like Intel is no real danger to them
in their own enclosed mainframe walled garden. I believe Apple moving
away from PowerPC did more damage to IBM's aspirations in that market.

For the others: they were either on board from the start (like HP),
were already dead (like DEC/Compaq), or were slipping into the embedded
market (like MIPS).

And SPARC: after Sun was bought by Oracle, the end was more or less
immediately clear.

>> Dnd when AMD then presented their AMD64 architecture that could run
>> legacy 8bit/16bit/32bit code as fast as the new code, allowing for a
>> smooth transition, the nickname "Itanic" for IA64 became true: It had
>> been dead on arrival.

> To make matters worse, the IA64 arrived very late to the market (IIRC
> something like 3 years later than planned).

Indeed. The German computer magazine c't had many interesting articles
about the IA64 architecture and quite early painted its dark future,
because of ever-slipping sales figures, performance problems, the
failure to deliver on promises made, and the increasing pressure of the
i386/amd64 architectures.

Regards,

Andrei POPESCU

Mar 15, 2021, 4:50 AM
On Sun, 14 Mar 21, 15:17:39, Stefan Monnier wrote:
>
> The original plan/claim was that the support for legacy i386
> applications would be "just as fast". This never materialized
> (unsurprisingly: it's easy to make a CPU that can efficiently run several
> slightly different instruction sets (ISAs), like your average amd64 CPU,
> which can run applications using the amd64 ISA, the i386 ISA, the 80286
> ISA or the 8086 ISA, more or less; but it's much harder to make a CPU
> that can efficiently run very different ISAs).

Apple seems to be doing quite well with the M1. Apparently it has a few
custom instructions to speed up x86 emulation. They also have the
benefit of controlling the software and now also the hardware stack.

There's already work in progress to port Linux mainline (and
consequently Debian) to the Apple M1 :)

to...@tuxteam.de

Mar 15, 2021, 5:10 AM
On Mon, Mar 15, 2021 at 09:15:10AM +0100, Sven Hartge wrote:

[...]

> For the others: they were either on board from the start (like HP),
> were already dead (like DEC/Compaq), or were slipping into the embedded
> market (like MIPS).

MIPS had its chance to become the unified architecture for high-end
workstations [1], until it was bought up by Silicon Graphics (SGI).
The buyout was, on the one hand, bitterly needed by MIPS, because they
needed that cash injection, and by SGI, because they depended on the
MIPS architecture.

On the other hand, though, all other workstation developers, in fierce
competition with SGI, didn't want /that/ dependency and went to look
for/make other architectures (Power, Alpha, PA, you name it).

So on the one hand, we might have, these days, been running on MIPS;
on the other hand, we wouldn't have ARM, and (who knows) soon,
RISC-V. And Linus Torvalds wouldn't have had this cool stint at
Transmeta. But that is a totally different kettle of fish.

Or is it?

> --
> Sigmentation fault. Core dumped.

:-)

Reminds me of an error message somewhere deep in TeX's or
METAFONT's bowels (sorry, from memory, therefore imprecise)
asking for "...someone to fix me fix me".

Cheers

[1] https://en.wikipedia.org/wiki/Advanced_Computing_Environment

- t

Sven Hartge

Mar 15, 2021, 6:20 AM
to...@tuxteam.de wrote:
> On Mon, Mar 15, 2021 at 09:15:10AM +0100, Sven Hartge wrote:

>> For the others: they were either on board from the start (like HP),
>> were already dead (like DEC/Compaq), or were slipping into the embedded
>> market (like MIPS).

> MIPS had its chance to become the unified architecture for high-end
> workstations [1]. Until it was bought up by Silicon Graphics (SGI).
> Which, on the one hand was bitterly needed by MIPS, because they
> needed that cash injection, and by SGI, because they depended on the
> MIPS architecture.

> On the other hand, though, all other workstation developers, in fierce
> competition with SGI, didn't want /that/ dependency and went to look
> for/make other architectures (Power, Alpha, PA, you name it).

> So on the one hand, we might have, these days, been running on MIPS;
> on that other hand, we wouldn't have ARM, and -- who knows, soon,
> Risc-V. And Linus Torvalds wouldn't have had this cool stint at
> Transmeta. But that is a totally different kettle of fish.

Another rumor I read was that IBM, when developing the first IBM PC in
1980, opted to use the 8086/8088 CPU instead of the also available M68k
CPU because the Intel one was less powerful, so it would not be in
competition with the mainframes the PC was primarily supposed to
interface with.

If this rumor is true and IBM had acted differently, the PC ecosystem
today would also look quite different.

Regards,
Sven.

Dan Ritter

Mar 15, 2021, 6:50 AM
Susmita/Rajib wrote:
> Maybe Debian should make a summary of all the information collected
> here and post an article on its page as a pre-emptive clarification of
> the flavours that Debian is available in, and not let the information
> accumulated here go to waste.


Wikipedia has quite a good article at
https://en.wikipedia.org/wiki/X86-64

which is also linked at

https://en.wikipedia.org/wiki/AMD64

-dsr-

to...@tuxteam.de

Mar 15, 2021, 7:10 AM
On Mon, Mar 15, 2021 at 11:09:35AM +0100, Sven Hartge wrote:

[...]

> Another rumor I read was that IBM, when developing the first IBM PC in
> 1980, opted to use the 8086/8088 CPU instead of the also available M68k
> CPU because the Intel one was less powerful so it would not be in
> competition with the mainframes the PC was supposed to interface with
> primarily.

Too lazy to research now, but it sounds credible, yes.

> If this rumor is true and IBM had acted differently, the PC ecosystem
> today would also look quite differently.

Or the Z8000. Absolutely. 8086 was, architecturally, the worst possible
choice at that time.

Cheers
- t
signature.asc

Sven Hartge

Mar 15, 2021, 7:40 AM
Having had a 68k would have been awesome. No stupid memory segmentation;
32-bit instructions and internal address size, 24-bit external address
size.

Imagine a PC with a 4GB addressable memory space in 1980.
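
The arithmetic behind those numbers, as a small C sketch (the 68000's
programming model was a flat 32-bit address space, while its external
bus brought out 24 address bits):

  /* m68k_range.c: address-range arithmetic for the 68000. */
  #include <stdio.h>

  int main(void)
  {
      /* 24 external address lines -> 16 MiB physically reachable */
      printf("24-bit external: %llu MiB\n", (1ULL << 24) >> 20);
      /* 32-bit flat programming model -> 4 GiB of address space */
      printf("32-bit internal: %llu GiB\n", (1ULL << 32) >> 30);
      return 0;
  }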


to...@tuxteam.de

Mar 15, 2021, 8:00 AM
On Mon, Mar 15, 2021 at 12:34:42PM +0100, Sven Hartge wrote:

[...]

> Having had a 68k would have been awesome. No stupid memory segmentation,

So were the Z8000, NS32K and many others. The horrible segmentation
thing on the '86 was the tribute to backward compatibility, which is the
price you pay for market dominance :-)

> 32bit instructions and internal address size, 24bit external address size.
>
> Imagine a PC with 4GB adressable memory space in 1980.

Yup.

Cheers
- t

IL Ka

Mar 15, 2021, 8:20 AM

> No stupid memory segmentation,

IMHO segmentation was a good idea originally.
You could have separate segments for code and data, and since the 286 it
has been possible to protect them (AFAIK segments were also used to
separate user space and kernel space).
But with the advent of virtual memory (386), they became an obsolete
legacy.

Intel is full of such things: hardware context switching, some old MMX
instructions, I/O ports, real mode: nobody needs any of that in 2021, but
it all still exists and occupies space in the Intel development manual.
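
For reference, the real-mode addressing that this segmentation grew out
of is simple arithmetic: a 16-bit segment shifted left by 4, plus a
16-bit offset, gives a 20-bit physical address. A minimal C sketch (the
example values are arbitrary):

  /* real_mode.c: 8086 real-mode address formation.  Different
     segment:offset pairs can alias the same physical byte. */
  #include <stdio.h>
  #include <stdint.h>

  static uint32_t phys(uint16_t seg, uint16_t off)
  {
      return ((uint32_t)seg << 4) + off;   /* segment * 16 + offset */
  }

  int main(void)
  {
      printf("1234:0005 -> %05X\n", (unsigned)phys(0x1234, 0x0005));
      printf("1000:2345 -> %05X\n", (unsigned)phys(0x1000, 0x2345));
      return 0;   /* both print 12345 */
  }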

Joe

Mar 15, 2021, 9:00 AM
I can. It would have cost as much as a mainframe to make full use of it.

--
Joe

Andrei POPESCU

Mar 15, 2021, 9:40 AM
On Mon, 15 Mar 21, 09:22:26, Susmita/Rajib wrote:
>
> Thank you very, very much for all your inputs. Please put this thread
> to rest and focus instead on helping seekers who need your support. I
> have had enough information already from the post of The Wanderer.

Lengthy, more or less offtopic threads are sort of a tradition on
debian-user during the freeze, mostly because the level of issues is at
its lowest ;)

> Actually, I would have very much needed your precious inputs, had i a
> plan to write an article on the topic.

Most of the information in this thread isn't quoting any (authoritative)
sources[1]; it's probably better to look for other sources for an
article.

> May be, Debian should make a summary of all the information collected
> from here and post an article on its page for a pre-emptive
> clarification on the flavours that Debian is available in, and not let
> the information accumulated here go waste.

For Debian's purposes the information can probably be summarized as a
Frequently Asked Question, like:

Q: Why is the 64-bit x86 architecture named 'amd64' whereas the 32-bit
version is named 'i386'?
A: Because the AMD64 / x86-64 architecture was introduced by AMD.

(probably with a link to the corresponding Wikipedia page)
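
For anyone who wants to check the names on their own system:
"dpkg --print-architecture" prints Debian's label (e.g. amd64), and the
kernel's machine string is available via uname(2), as in this minimal C
sketch:

  /* machine.c: print the kernel's machine name ("x86_64" on an amd64
     install, "i686" or similar on 32-bit x86). */
  #include <stdio.h>
  #include <sys/utsname.h>

  int main(void)
  {
      struct utsname u;
      if (uname(&u) != 0) {
          perror("uname");
          return 1;
      }
      printf("machine: %s\n", u.machine);
      return 0;
  }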


This already exists in some form in at least one place on Debian's
sites, so I'll ask you instead:

Where should this information be added to be more easily found by
users less familiar with Debian (and its history)?


[1] Wikipedia generally provides a good introduction to a specific
topic, but it's *not* authoritative. It can be a good start for further
research though.

For Debian's own decisions a good source could be mailing list posts
from Debian Developers at the time announcing the new architecture.

Gene Heskett

Mar 15, 2021, 9:40 AM
That, IIRC, was a new, super-shiny thing from Zilog. No experience with
it, but if it was as unreliable as the Z-80 was, I'm not sorry it
failed. The Z-80 had an instruction that swapped the
foreground/background register sets. But it only worked on odd hours of
the day. And there was no way of testing whether the command had worked
without sacrificing 1 of the three registers, in both sets.

When I finally got schmardter and wrote a test loop to check it, I called
Zilog, and it was out of their 90-day warranty. They would not replace
it. I should have called them and gotten a sample, but I'm honest and
told them the truth. I never again used a Zilog chip in anything.

I was then on a small-town AM/FM radio station's budget, developing an
Automatic Transmitter System for a temperature-picky FM transmitter that
really ought to have been replaced, starting with the brand label on the
front panel.

This was in 1980, and the late '70s saw many ma-and-pa small-town
broadcasters severely impacted by trying to replace aging tube
transmitters with early solid-state versions before the tech was mature
enough to be as dependable as the tube models. It took another ten years
before semiconductor failure rates went below those of electrolytic
capacitors.

Now design-rule violations by the gear makers are responsible for a good
share of the failure bugs. But they are a distant part of the list, well
behind electrolytic caps, whose technology has not been seriously
improved in a hundred years now. Even Tesla has put money into new
versions, and come up short, or they would be in their cars replacing the
dangerous lithium batteries right now.

> Cheers
> - t
Take care and stay safe and well, Tomas.

Cheers, Gene Heskett
--
"There are four boxes to be used in defense of liberty:
soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
If we desire respect for the law, we must first make the law respectable.
- Louis D. Brandeis
Genes Web page <http://geneslinuxbox.net:6309/gene>

Gene Heskett

Mar 15, 2021, 10:00 AM
No it wouldn't, and we had it by the late '80s with the advent of 68040
and 68060 accelerator boards for the Amigas. But that flat memory
model and poor production QC doomed it. Any program could misfire
and write into another program's memory space, crashing the
whole Mary Ann. Then Commode-door brought out a 68060 board for the
4000s. Major failure, because that $1600, 4 square inches of PCB had
every electrolytic capacitor installed in reverse polarity. Too damned
compact to be easily fixed, but I did two of them anyway.

Yup, I am a card-carrying CET. What else could I do?

to...@tuxteam.de

Mar 15, 2021, 10:00 AM
On Mon, Mar 15, 2021 at 09:31:05AM -0400, Gene Heskett wrote:
> On Monday 15 March 2021 07:05:02 to...@tuxteam.de wrote:
>
> > On Mon, Mar 15, 2021 at 11:09:35AM +0100, Sven Hartge wrote:
> >
> > [...]
> >
> > > Another rumor I read was that IBM, when developing the first IBM PC
> > in 1980, opted to use the 8086/8088 CPU instead of the also available
> > > M68k CPU because the Intel one was less powerful so it would not be
> > > in competition with the mainframes the PC was supposed to interface
> > > with primarily.
> >
> > Too lazy to research now, but it sounds credible, yes.
> >
> > > If this rumor is true and IBM had acted differently, the PC
> > > ecosystem today would also look quite differently.
> >
> > Or the Z8000. Absolutely. 8086 was, architecturally, the worst
> > possible choice at that time.
> >
> That, IIRC was a new, super shiny, thing from zilog. No experience with
> it, but if it was as unreliable as the z-80, was, I'm not sorry it
> failed. The Z-80 had an instruction that swapped the

[...]

I take that back. The Z8000 was a 16-bit data / 24-bit address thing; it
did have a segmented architecture, so it wasn't as "clean" as I
remembered it. At that time I was just a little student, so my
"experience" with that stuff was to drool over design articles
in the usual magazines (EE, AFAIR).

Cheers
- t

John Hasler

Mar 15, 2021, 10:10 AM
Gene writes:
> That, IIRC, was a new, super-shiny thing from Zilog. No experience
> with it, but if it was as unreliable as the Z-80 was, I'm not sorry
> it failed. The Z-80 had an instruction that swapped the
> foreground/background register sets. But it only worked on odd hours
> of the day. And there was no way of testing whether the command had
> worked without sacrificing 1 of the three registers, in both sets.

I used lots of Z80s and had good luck with them. I wrote an OS for my
first Z80 homebrew computer that used register swapping to service
interrupts and print in the background. It worked quite well. Most
applications used only one register set, though, due to the need for
Intel compatibility.

My first Unix machine was an Onyx with a Z8000 running System III. The
8 inch disk got flaky after about ten years but other than that it was
quite reliable. Odd architecture, though. I would have preferred 68k.

Gene Heskett

Mar 15, 2021, 10:10 AM
Snerk. We all did that back in the day, Tomas. That and similar
magazines were this 8th-grade graduate's electronics education. Do they
still exist today? Retired now, so the subs expired.

to...@tuxteam.de

Mar 15, 2021, 10:10 AM
On Mon, Mar 15, 2021 at 10:02:12AM -0400, Gene Heskett wrote:

[...]

> Snerk. We all did that back in the day, Tomas. That and similar
> magazines were this 8th-grade graduate's electronics education. Do they
> still exist today? Retired now, so the subs expired.

Some of them: https://www.ee.com/

Cheers
- t

Stefan Monnier

Mar 15, 2021, 10:50 AM
>> Another rumor I read was that IBM, when developing the first IBM PC in
>> 1980, opted to use the 8086/8088 CPU instead of the also available M68k
>> CPU because the Intel one was less powerful so it would not be in
>> competition with the mainframes the PC was supposed to interface with
>> primarily.
> Too lazy to research now, but it sounds credible, yes.

I'm sure there were several different factors, and it's hard to know
which were more important (often the more personal and less technical
factors are the more important ones in those areas, but also the hardest
to track down and verify). Another important factor (linked to pragmatic
constraints of overall production cost and availability of all the
various components at particular dates) made it important to use an 8-bit
interface between the CPU and the system (which arguably also ensured it
was no threat performance-wise to the rest of IBM's lineup).
That's another reason why they went with the 8088 rather than the 8086,
and also another reason why they went with Intel rather than Motorola,
since the 68008 wasn't available yet.


Stefan

to...@tuxteam.de

Mar 15, 2021, 11:00 AM
On Mon, Mar 15, 2021 at 10:45:15AM -0400, Stefan Monnier wrote:
> >> Another rumor I read was that IBM, when developing the first IBM PC in
> >> 1980, opted to use the 8086/8088 CPU instead of the also available M68k
> >> CPU because the Intel one was less powerful so it would not be in
> >> competition with the mainframes the PC was supposed to interface with
> >> primarily.
> > Too lazy to research now, but it sounds credible, yes.
>
> I'm sure there have been several different factors and it's hard to know
> which were more important (often the more personal and less technical
> factors are the more important ones in those areas, but the hardest to
> track down and verify).

ISTR that the Big Iron and the small-stuff factions within IBM were
in fierce competition at the time. That's why the idea seemed plausible
to me.

> [...] Another important factor (linked to pragmatic
> constraints of overall production cost and availability of all the
> various components at particular dates) made it important to use an 8-bit
> interface between the CPU and the system (which arguably also ensured it
> was no threat performance-wise to the rest of IBM's lineup).
> That's another reason why they went with the 8088 rather than the 8086,
> and also another reason why they went with Intel rather than Motorola,
> since the 68008 wasn't available yet.

...the outcome was surely the product of multiple factors. IBM was a
complex beast at the time!

Cheers
- t

Stefan Monnier

Mar 15, 2021, 11:10 AM
>> Indeed. Also, they wanted to move away from the i386 instruction set
>> so as not to be bothered by pre-existing licensing agreements with
>> AMD, and thus making sure there'd be no competing implementation. The
>> IA64 architecture was quite complex, and there are reasons to believe
>> that complexity was seen as a virtue (makes it easier to get more
>> patents and keep competitors out).
> HP then also poured additional stuff into the architecture to make
> migration from PA-RISC easier. I imagine this also made stuff vastly
> more complex.

It has all the signs of a "design by committee" where you get the union
of all the ideas, indeed :-(

But I think for such a thing to get the time and funding needed to get
to production, there needs to be a commitment to the idea that such
complexity is good.

> I think, IBM is big enough and old enough and established enough with
> POWER that a "young whippersnapper" like Intel is no real danger to them
> in their own enclosed Mainframe walled garden. I believe Apple moving
> away from PowerPC did more damage to IBMs aspirations in that market.

Agreed.

> For the others: they were either on board from the start (like HP),
> were already dead (like DEC/Compaq), or were slipping into the embedded
> market (like MIPS).

I didn't want to imply that they would have survived (that slice of the
CPU market was shrinking fast anyway: after the Pentium Pro, they were
not noticeably faster than PCs any more and the market was too small to
keep financing the development of leading CPUs, especially since for
high-end machines all the value was in the interconnect rather than the
CPUs anyway), but the IA64 was explicitly the end of it for them (and
that happened long before the first IA64 CPU was available).

> And SPARC: after Sun was bought by Oracle, the end was more or less
> immediately clear.

But that took place much later: the IA64 buzz that killed Alpha/PA/MIPS
was in the '90s, whereas Oracle bought Sun in 2009.

> Indeed. The German computer magazine c't had many interesting articles
> about the IA64 architecture and also quite early painted its dark
> future, because of ever slipping sales figures, performance problems,
> the failure to deliver on made promises and the increasing pressure of
> the i386/amd64 architectures.

From a purely technical perspective, it's hard to understand how Intel
managed to pour so much energy into such an obviously bad idea.
The explanations all seem to be linked to market strategies.


Stefan

Michael Stone

Mar 15, 2021, 11:10 AM
More. Memory was often the largest line item back then, and ordinary
mainframes couldn't afford much of it. The Cray 2 was a game-changer in
the supercomputer space with its 1Gbyte memory capacity. Mostly those
were bought by three letter agencies, but some really large corporations
and universities with very generous donors got one.

Stefan Monnier

Mar 15, 2021, 11:30 AM
>> The original plan/claim was that the support for legacy i386
>> applications would be "just as fast". This never materialized
>> (unsurprisingly: it's easy to make a CPU that can efficiently run several
>> slightly different instruction sets (ISAs), like your average amd64 CPU,
>> which can run applications using the amd64 ISA, the i386 ISA, the 80286
>> ISA or the 8086 ISA, more or less; but it's much harder to make a CPU
>> that can efficiently run very different ISAs).
> Apple seems to be doing quite well with the M1.

But that's not a CPU that runs amd64 code: the amd64 code is executed on
it by software emulation rather than by hardware emulation. And indeed,
Intel could have developed an efficient software emulation of amd64 for
its Itanium which could have been more efficient than its own
hardware emulator.

[ Similarly, at some point in time, DEC's Alpha was claimed to be the
fastest processor to run i386 code, via its software emulation. ;-) ]

Apple has a lot of experience in that kind of emulation (having done it
for the transition from Motorola's 68K to PowerPC, then again from
PowerPC to i386, and now from amd64 to ARM (notice they relied on
hardware emulation for the i386 to amd64 transition)).

But note that they only do emulation for applications AFAIK, which is
easier than doing a "full" emulation that lets you run an actual OS
(like `qemu` does).

> There's already work in progress to port Linux mainline (and
> consequently Debian) to the Apple M1 :)

Since the M1 implements the ARM instruction set, I don't think there's
much work to do here, indeed (most likely the hardest part is to fight
Apple's opaqueness).

Last I heard Debian works on the M1 already :-), but its Emacs package
doesn't :-(


Stefan

Nicholas Geovanis

Mar 15, 2021, 12:30 PM
On Sun, Mar 14, 2021, 1:50 PM Stefan Monnier <mon...@iro.umontreal.ca> wrote:
> Well, nearly. Itanium Merced was 2001 [1] (althoug you wouldn't buy
> /that/ as a private person), DEC Alpha was even 1992 [2];

> FWIW, MIPS was there even a bit earlier with their R4000 (tho the
> software support for it only appeared some years later: they first
> wanted to have an installed base to which to deploy the software), which
> I believe was the first 64-bit microprocessor.

And the demise of the DEC Alpha was quite unfortunate. It was super-fast
and OSF/1 was rock-solid. But DEC lost the competitive bid on that
project, and Sequent/Dynix, based on hundreds of 486 CPUs, won it.
Sequent is now owned by IBM and deep-sixed: they really bought the
customer base instead.

The final pedantry is that, contrary to an earlier post, the first IBM PCs were built around the 8088, not the 8086.


Dan Ritter

Mar 15, 2021, 12:40 PM
Stefan Monnier wrote:
> > There's already work in progress to port Linux mainline (and
> > consequently Debian) to the Apple M1 :)
>
> Since the M1 implements the ARM instruction set, I don't think there's
> much work to do here, indeed (most likely the hardest part is to fight
> Apple's opaqueness).
>
> Last I heard Debian works on the M1 already :-), but its Emacs package
> doesn't :-(

Graphics is currently the blocker. Framebuffer works, but
getting the GPU working beyond that will probably be fun for
someone.

https://asahilinux.org/2021/03/progress-report-january-february-2021/

contains lots of useful info.

-dsr-

Michael Stone

Mar 15, 2021, 12:40 PM
On Sun, Mar 14, 2021 at 10:44:00AM -0500, John Hasler wrote:
>The Wanderer wrote:
>> It caught on, and became so successful that Intel abandoned its ia64
>> approach and started making amd64 CPUs itself.
>
>Which was unfortunate as the x86 architecture needed to die.

Moving to ia64 would have been much, much worse. Luckily it was unlikely
to have ever happened once people got to touch actual silicon.


On Sun, Mar 14, 2021 at 02:50:10PM -0400, Stefan Monnier wrote:
>So it was a great move on the part of AMD: cheap to implement but with
>an enormous marketing impact.

It had much more than a marketing impact, because x86 was a PITA for
more than 2GB of RAM, and that was getting cheap and becoming a common
problem by 2003. Switching to Opteron for 8G or 16G servers was a huge
win vs x86, with better scaling for multiprocessor configurations.
(These were becoming more common as well, and Intel was still using an
old (obsolete?) flat SMP bus, whereas AMD arrived on the scene with a far
superior NUMA architecture based on HyperTransport, designed in
partnership with what was left of the old DEC Alpha team.) It was simply
the right product at the right time.
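
The arithmetic behind that pain is easy to check; a short C sketch (the
48-bit figure is what typical amd64 implementations expose for virtual
addresses):

  /* addr_space.c: how far each addressing scheme reaches. */
  #include <stdio.h>

  int main(void)
  {
      printf("32-bit flat:       %llu GiB\n", (1ULL << 32) >> 30); /* 4 */
      printf("36-bit PAE:        %llu GiB\n", (1ULL << 36) >> 30); /* 64 */
      printf("48-bit amd64 virt: %llu GiB\n", (1ULL << 48) >> 30); /* 262144 */
      return 0;
  }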


On Sun, Mar 14, 2021 at 03:17:39PM -0400, Stefan Monnier wrote:
>> But years passed, the i386 architecture got better and better, stuff
>> like MMX, SSE and AVX was incorporated, and IA64 couldn't really keep
>> up.
>
>The IA64 architecture was a resounding success in one area tho: it
>killed most of the competition that was coming from "above" (at least
>DEC's Alpha, SGI's MIPS, HP's PA, and it likely sped up the demise of
>Sun's SPARC, I don't think it had much impact on POWER or PowerPC, OTOH)
>and thus helped open up the server (and supercomputer) market for Intel
>(and AMD).

Yes: SGI, HP, & DEC (Compaq, then HP) all preemptively killed off their
CPU lines based on the promises made for ia64. When ia64 turned out to
be late and the performance turned out to be disappointing, it was too
late to revive their previous architectures and recapture the customers
that had already abandoned ship for x86 and later amd64. It worked out
really well for Intel, and really badly for everybody else.


On Mon, Mar 15, 2021 at 09:15:10AM +0100, Sven Hartge wrote:
>Stefan Monnier <mon...@iro.umontreal.ca> wrote:
>> The IA64 architecture was a resounding success in one area tho: it
>> killed most of the competition that was coming from "above" (at least
>> DEC's Alpha, SGI's MIPS, HP's PA, and it likely sped up the demise of
>> Sun's SPARC, I don't think it had much impact on POWER or PowerPC,
>> OTOH) and thus helped open up the server (and supercomputer) market
>> for Intel (and AMD).
>
>I think, IBM is big enough and old enough and established enough with
>POWER that a "young whippersnapper" like Intel is no real danger to them
>in their own enclosed Mainframe walled garden. I believe Apple moving
>away from PowerPC did more damage to IBMs aspirations in that market.

IBM didn't want to be just a mainframe manufacturer; they really wanted
to amortize the costs of those CPUs across multiple product lines.
They actually made a good number of high-end computing sales for a few
years by being the only player left standing, until amd64 just became
too compelling. They still have some very large deployments, but their
overall market share is not what they'd hoped for.

>For the others: they were either on board from the start (like HP),
>were already dead (like DEC/Compaq), or were slipping into the embedded
>market (like MIPS).

At the time ia64 was announced, Alpha & MIPS processors were in some of
the largest and most successful systems in the world. With further
development they could have remained there, but their management was
convinced that ia64 was going to have an unbeatable performance
advantage and that they couldn't compete with the R&D money Intel was
pouring in. With hindsight it's clear that neither was true, but these
decisions were made in the late '90s, and Intel hadn't yet run into the
brick wall of making the compiler magic actually work. The architecture
that was in the worst shape was PA-RISC, which is why HP had gone in
with Intel on ia64 in the first place. (And, of course, the Alpha had
no future once HP bought Compaq.) Also with hindsight, even if ia64 had
been successful, this strategy would have destroyed the companies,
because it was premised on the idea that even if they were all selling
the same computers they'd somehow be able to keep their margins and lock
customers in with proprietary OSs or some other proprietary magic. The
industry went in a very different direction and preferred open software
architectures, and that probably would have been true even with a
successful ia64. HPaq & SGI bet on the wrong horse in every way.

The cloud revolution of the 2010s might have unfolded very differently
if some of the high-performance architectures from the late '90s could
have hung on long enough for the Linux convergence to offer them a way
out of the Unix wars. (Or they might have kept the Unix wars going. Who
knows.) Instead, people are only now trying to break out of the
monoculture by pushing what was, 25 years ago, one of the least
successful and least capable of the RISC architectures (ARM) into the
high-performance realm, for lack of other options in a space utterly
dominated by amd64.

Michael Stone

Mar 15, 2021, 12:50 PM
On Mon, Mar 15, 2021 at 11:03:59AM -0400, Stefan Monnier wrote:
>From a purely technical perspective, it's hard to understand how Intel
>managed to pour so much energy into such an obviously bad idea.
>The explanations all seem to be linked to market strategies.

They just had too much easy money coming in from the Windows/x86 desktop
monopoly. It took years before they really had to justify in a critical
way the money they were spending.

John Hasler

Mar 15, 2021, 12:50 PM
Gene writes:
> No it wouldn't, and we had it by the late '80s with the advent of
> 68040 and 68060 accelerator boards for the Amigas. But that flat
> memory model and poor production QC doomed it. Any program could
> misfire and write into another program's memory space, crashing the
> whole Mary Ann.

Starting in '82, the 68010 added virtual memory and virtualization
support.

John Hasler

Mar 15, 2021, 1:00 PM
Michael Stone writes:
> ...HP bought Compaq.

Compaq bought HP and then renamed themselves HP. The name was all they
really wanted, of course. HP had already spun off their instrumentation
division (the real HP) as Agilent.

Michael Stone

Mar 15, 2021, 1:20 PM
On Mon, Mar 15, 2021 at 11:55:40AM -0500, John Hasler wrote:
> Michael Stone writes:
>> ...HP bought Compaq.
>
>Compaq bought HP and then renamed themselves HP. The name was all they
>really wanted, of course.

That's a strange way to position it, since HP gave Compaq shareholders
HP shares (leading to 36% ownership by Compaq shareholders and 64%
ownership by HP shareholders), HP's management was in charge of the
resulting company, HP's employees got the lion's share of retention
bonuses, and Compaq's (DEC's) legacy products were the ones that were
quickly killed off. The entire deal was focused on the (dead-end) PC
businesses, and the legacy architectures of both companies weren't given
much attention. HP eventually spun off enterprise systems into its own
company; maybe if they'd done that with both HPs and DECs assets back in
2002 (or if Compaq had left DEC alone) and Carly Fiorina had just kept
the PC sales she was distracted by, the HP/DEC legacy lines could have
done better. Or they might have still died--but they were certainly
never going to succeed when owned by people who didn't care about them.

Celejar

Mar 15, 2021, 1:40 PM
On Mon, 15 Mar 2021 12:39:10 -0400
Michael Stone <mst...@debian.org> wrote:

...

> On Mon, Mar 15, 2021 at 09:15:10AM +0100, Sven Hartge wrote:
> >Stefan Monnier <mon...@iro.umontreal.ca> wrote:
> >> The IA64 architecture was a resounding success in one area tho: it
> >> killed most of the competition that was coming from "above" (at least
> >> DEC's Alpha, SGI's MIPS, HP's PA, and it likely sped up the demise of
> >> Sun's SPARC, I don't think it had much impact on POWER or PowerPC,
> >> OTOH) and thus helped open up the server (and supercomputer) market
> >> for Intel (and AMD).
> >
> >I think, IBM is big enough and old enough and established enough with
> >POWER that a "young whippersnapper" like Intel is no real danger to them
> >in their own enclosed Mainframe walled garden. I believe Apple moving
> >away from PowerPC did more damage to IBMs aspirations in that market.
>
> IBM didn't want to just be a mainframe manufacturer, they really wanted
> to amortize the costs for those CPUs against multiple product lines.
> They actually made a good number of high end computing sales for a few
> years by being the only player left standing, until amd64 just became
> too compelling. They still have some very large deployments, but their
> overall market share is not what they'd hoped for.

Apparently POWER is having a bit of a resurgence lately due to its
openness and non-x86ness:

https://www.osnews.com/story/133093/review-blackbird-secure-desktop-a-fully-open-source-modern-power9-workstation-without-any-proprietary-code/

Of course, Raptor seems to be a tiny player, and it's hard to see how
they'll get any traction since the pricing isn't very competitive,
apparently at least in part due to the chicken-and-egg market share
problem, but it's an exciting development to watch.

Celejar

John Hasler

Mar 15, 2021, 1:50 PM
I guess I misremembered. After the merger they certainly *acted* as if
Compaq management was in charge.

Gene Heskett

Mar 15, 2021, 2:20 PM
On Monday 15 March 2021 12:40:51 John Hasler wrote:

> Gene writes:
> > No it wouldn't, and we had it by the late '80s with the advent of
> > 68040 and 68060 accelerator boards for the Amigas. But that flat
> > memory model and poor production QC doomed it. Any program could
> > misfire and write into another program's memory space,
> > crashing the whole Mary Ann.
>
> Starting in '82 the 68010 added virtual memory and virtualization
> suport.

But by then the Amiga design was frozen until the funeral.

Michael Stone

Mar 15, 2021, 3:10 PM
On Mon, Mar 15, 2021 at 01:35:42PM -0400, Celejar wrote:
>Apparently POWER is having a bit of a resurgence lately due to its
>openness and non-x86ness:
>
>https://www.osnews.com/story/133093/review-blackbird-secure-desktop-a-fully-open-source-modern-power9-workstation-without-any-proprietary-code/
>
>Of course, Raptor seems to be a tiny player, and it's hard to see how
>they'll get any traction since the pricing isn't very competitive,
>apparently at least in part due to the chicken-and-egg market share
>problem, but it's an exciting development to watch.

That doesn't do much for IBM's bottom line. :)

Stefan Monnier

Mar 15, 2021, 3:30 PM
>> No it wouldn't, and we had it by the late '80s with the advent of
>> 68040 and 68060 accelerator boards for the Amigas. But that flat
>> memory model and poor production QC doomed it. Any program could
>> misfire and write into another program's memory space, crashing the
>> whole Mary Ann.
> Starting in '82, the 68010 added virtual memory and virtualization support.

[ I can't remember any discussion of virtualization for that.
Back then this only existed on things like IBM mainframes, and no one in
the workstation-and-lower markets cared about it, AFAIK. ]

Note that this is only true in the sense of "wifi ready" (a laptop that
came without any wifi card but maybe with an antenna in the bezel): the
68010 was a very minor improvement over the 68000 which just fixed some
blunders that had made it (almost) impossible to provide support for
virtual memory. You needed additional hardware (like an external MMU) to
get virtual memory on the 68010, and that usually ended up very costly
in terms of performance.

Virtual memory only became vaguely usable with the 68020 (and then
actually usable on the 68030).


Stefan

Sven Hartge

unread,
Mar 15, 2021, 3:30:04 PM3/15/21
to
I'm not saying to put it in, only to have a flat 32bit address range.

Just like current 64bit systems don't have 16 exabytes of memory in
them.

(I still vividly remember using memmaker and manually ordering the
drivers in config.sys and autoexec.bat to shave another 2KB off lower
memory so the IPX driver would fit and Doom would run.)
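
(For anyone who never had the pleasure, here is a minimal sketch of the
idea, with hypothetical driver names and paths: load the memory managers
first, enable upper memory blocks, then push everything possible out of
the first 640KB.

  REM config.sys -- order matters: memory managers come first
  DEVICE=C:\DOS\HIMEM.SYS
  DEVICE=C:\DOS\EMM386.EXE NOEMS
  DOS=HIGH,UMB
  REM DEVICEHIGH loads a device driver into upper memory if it fits
  DEVICEHIGH=C:\DRIVERS\MOUSE.SYS

  REM autoexec.bat -- LOADHIGH (LH) does the same for TSRs
  LH C:\NET\IPXODI.COM

Every driver that lands in a UMB instead of conventional memory is a few
more KB for Doom.)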



--
Sigmentation fault. Core dumped.

Sven Hartge

unread,
Mar 15, 2021, 3:40:04 PM3/15/21
to
Stefan Monnier <mon...@iro.umontreal.ca> wrote:

> From a purely technical perspective, it's hard to understand how Intel
> managed to pour so much energy into such an obviously bad idea. The
> only explanations seem all to be linked to market strategies.

This history repeats itself for Intel on several fronts:

Look at the NetBurst Pentium 4 disaster, which was scrapped as soon as
the Israel division showed their improved design based on the P3, which
ran laps around the P4 while at the same time using far less power and
achieving better yields.

Or the discussion about ECC for desktop devices. Intel argues "not
needed", which is, if you follow the Rowhammer issues, not true. AMD
just does it and it works.

Then there was FB-DIMM back around 2008. Nice idea; just, again, too
expensive and disconnected from the market in the end.

And, all in all, the rather slow improvements on the CPU front: the
piecemeal 5% increases sold as "big achievements" every year, while at
the same time many of those improvements turned out to be major
security problems.

I personally am really glad that AMD got their stuff together again and,
with their Zen architectures, showed Intel how it's done.

What AMD now needs is a hit in the low, lower and ultra-low power
segment.

Regards,

Dan Ritter

unread,
Mar 15, 2021, 3:50:04 PM3/15/21
to
Sven Hartge wrote:
> Stefan Monnier <mon...@iro.umontreal.ca> wrote:
>
> > From a purely technical perspective, it's hard to understand how Intel
> > managed to pour so much energy into such an obviously bad idea. The
> > only explanations seem all to be linked to market strategies.
>
> This history repeats itself for Intel on several fronts:
>
> Or the discussion about ECC for desktop devices. Intel argues "not
> needed", which is, if you follow the Rowhammer issues, not true. AMD
> just does it and it works.

Intel knew that their argument was bull: they owned the market
and needed ways of subdividing their CPUs to fit every price
point. Turning off ECC support was one of those ways.

That strategy started with the 80486, when they brought out a
cheap version called the 80486SX which "lacked" a floating point
unit. The SX had the floating point unit; it was just turned
off. Worse: if you purchased the 80487 math coprocessor to enable
floating point support, what you got was a full 486 that turned
off the original.

> Then there was FB-DIMM back around 2008. Nice idea; just, again, too
> expensive and disconnected from the market in the end.

Intel wanted more pricing points.

> I personally am really glad that AMD got their stuff together again and,
> with their Zen architectures, showed Intel how it's done.
>
> What AMD now needs is a hit in the low, lower and ultra-low power
> segment.

They've got the low and lower parts now: 35W and 15W 4000-series
APUs, from the Renoir design. Stefan and I were just talking
about how you can't buy one with a normal motherboard right now
because they are entirely allocated to systems integrators. AMD
is selling 100% of production.

They don't have any 7W or lower parts, but those things aren't
very interesting compared to ARM64 architecture, where Qualcomm
and Apple and any number of smaller shops are doing great things
in the tablet and phone space.

-dsr-

Stefan Monnier

unread,
Mar 15, 2021, 4:00:04 PM3/15/21
to
>>So it was a great move on the part of AMD: cheap to implement but with
>>an enormous marketing impact.
> It had much more than a marketing impact, because x86 was a PITA for
> more than 2GB of RAM, and RAM was getting cheap enough that this was
> becoming a common problem by 2003. Switching to Opteron for 8G or 16G
> servers was a huge win vs x86, with better scaling for multiprocessor
> configurations. (These were becoming more common as well, and Intel was
> still using an old (obsolete?) flat SMP bus whereas AMD arrived on the
> scene with a far superior NUMA architecture based on HyperTransport --
> designed in partnership with what was left of the old DEC Alpha team.)
> It was simply the right product at the right time.

I think the performance of the Opteron would have been
sufficient to make it quite successful even if limited to 32bit.
And Microsoft took its time before releasing a version of Windows for
amd64, so most of the machines sold between 2003 and 2005 were running
in 32bit mode, AFAICT.

So I think the marketing impact of Opteron's support of the new amd64
ISA during the 2003-2005 window was more important than the technical
impact. But you're right that 64bit support was really becoming
important right around that time: PAE was not as satisfactory a solution
(which is why AMD went ahead with amd64: the situation was becoming
untenable).
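
(To put rough numbers on it: 32bit pointers address 2^32 bytes = 4GB,
and with the usual kernel/user split a process actually saw only 2-3GB
of that. PAE widened the *physical* address space to 36 bits (2^36 =
64GB) but left each process with the same 32bit virtual space, so large
workloads were back to bank-switching-style contortions. 64bit pointers
simply made the problem go away.)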

> At the time ia64 was announced, Alpha & MIPS processors were in some of
> the largest and most successful systems in the world. With further
> development they could have remained there, but their management was
> convinced that ia64 was going to have an unbeatable performance
> advantage and that they couldn't compete with the R&D money Intel was
> pouring in.

They could have survived a few years more, definitely. But SGI only had
a good presence in supercomputers and computer graphics, which were
pretty small markets where the CPU didn't matter that much, so it was
very costly for them to keep developing new top-of-the-line processors
in addition to top-of-the-line GPUs and interconnects. They were
already in poor financial health and needed to start designing their
systems around someone else's CPU.

DEC was even worse off because they didn't actually own any particular
segment of the market (apart from the VMS segment, which was not
getting many new customers) and PCs running Pentium Pros (and
successors) were taking over the workstation market.

In retrospect maybe DEC and SGI should have merged and then partnered
with AMD (as you note above some of DEC's processor design team indeed
ended up at AMD on the Opteron project), but I think it would have taken
a crapload of foresight and/or faith to do that.

[ Not sure what part HP could have played there; I wasn't very familiar
with their products (besides drooling over the idea of a 2MB L1 cache,
that is). ]


Stefan

Anssi Saari

unread,
Mar 15, 2021, 5:00:05 PM3/15/21
to
Dan Ritter <d...@randomstring.org> writes:

> Intel knew that their argument was bull: they owned the market
> and needed ways of subdividing their CPUs to fit every price
> point. Turning off ECC support was one of those ways.

> That strategy started with the 80486, when they brought out a
> cheap version called the 80486SX which "lacked" a floating point
> unit. The SX had the floating point unit; it was just turned
> off.

Initially, yes: a panic move when AMD brought out their 40 MHz 386. It
worked and got popular, and later the 486SX was manufactured separately
with a smaller die and no floating point unit.

As for the ECC support in Ryzen CPUs, as I understand it, it's a bit of
a mess. Sure, the CPUs support it, but if it's not validated by the
motherboard manufacturer, how do you know it actually works reliably?

Michael Stone

unread,
Mar 15, 2021, 5:10:05 PM3/15/21
to
On Mon, Mar 15, 2021 at 03:50:56PM -0400, Stefan Monnier wrote:
>In retrospect maybe DEC and SGI should have merged and then partnered
>with AMD (as you note above some of DEC's processor design team indeed
>ended up at AMD on the Opteron project), but I think it would have taken
>a crapload of foresight and/or faith to do that.

Yeah, the biggest thing they lacked was faith in their own products. I
remember being in meetings with SGI folks explaining how the future was
going to be Windows on ia64, and immediately wondering who our new
supplier would be. The argument was always financial, but in reality the
problem was misallocation of resources, not lack of resources. (I guess
after billions of dollars thrown away on failed strategies the problem
does become a lack of resources, but it didn't start out that way.) They
would have had to become smaller and more focused, and big companies
don't shrink easily.

Dan Ritter

unread,
Mar 15, 2021, 5:30:04 PM3/15/21
to
Anssi Saari wrote:
> Dan Ritter <d...@randomstring.org> writes:
>
> As for the ECC support in Ryzen CPUs, as I understand it it's a bit of a
> mess. Sure the CPUs support it but if it's not validated by motherboard
> manufacturers, how do you know it actually works reliably?

... by trying it out and reporting the results to others, and
reading their results and reporting your confirmation.

This isn't a thing that the motherboard manufacturer can put in
by accident.

Anyway. If you need ECC support, you buy an EPYC server and get
registered ECC support. If you would like to have ECC as a feature, you
get a Ryzen board that's reported to work, and you get unbuffered ECC
with single-bit correction and double-bit error detection.

Then you overclock the RAM to generate errors, check that they show up
in your system log, and bring it back down to normal speed.
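
(For the record, on a Linux box the kernel's EDAC subsystem is one way
to watch this happen; assuming the relevant driver is loaded (amd64_edac
on these machines), the per-memory-controller counters live in sysfs:

  cat /sys/devices/system/edac/mc/mc0/ce_count   # corrected errors
  cat /sys/devices/system/edac/mc/mc0/ue_count   # uncorrected errors

A ce_count that climbs while overclocked and stays at zero at stock
speed is exactly the confirmation described above.)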

At last report: normal desktop Ryzens (nothing with a G suffix
unless it also has a PRO marking) on any ASRock, most ASUS, and
some Gigabyte motherboards will support this. To the best of my
current knowledge, no MSI motherboards do.

-dsr-

Andrei POPESCU

unread,
Mar 15, 2021, 5:50:05 PM3/15/21
to
On Lu, 15 mar 21, 20:24:56, Sven Hartge wrote:
>
> (I still vividly remember using memmaker and manually ordering the
> drivers in config.sys and autoexec.bat to shave another 2KB off lower
> memory so the IPX driver would fit and Doom would run.)

For me it was Warcraft :)

And for some game (possibly also Warcraft) I had to pretend to have a
sound card by listing its driver in config.sys; otherwise it wouldn't
even start.

Kind regards,
Andrei
--
http://wiki.debian.org/FAQsFromDebianUser

Andrei POPESCU

unread,
Mar 15, 2021, 6:10:04 PM3/15/21
to
On Lu, 15 mar 21, 11:19:55, Stefan Monnier wrote:
>
> Last I heard Debian works on the M1 already :-), but its Emacs package
> doesn't :-(

No surprise considering Emacs is itself a full OS :p

(sorry, could not resist)

Andrei POPESCU

unread,
Mar 15, 2021, 6:10:04 PM3/15/21
to
On Lu, 15 mar 21, 17:21:39, Dan Ritter wrote:
>
> At last report: normal desktop Ryzens (nothing with a G suffix
> unless it also has a PRO marking)

Do you have a reliable source for the lack of ECC support in G suffix
processors?

And why would it work for PRO processors instead?

I think it's unlikely AMD has 2 different cores for PRO and non-PRO;
it's more likely it either works for both or neither.

Christian Groessler

unread,
Mar 15, 2021, 6:20:05 PM3/15/21
to
On 3/15/21 10:47 PM, Andrei POPESCU wrote:
> On Lu, 15 mar 21, 20:24:56, Sven Hartge wrote:
>> (I still vividly remember using memmaker and manually ordering the
>> drivers in config.sys and autoexec.bat to shave another 2KB off lower
>> memory so the IPX driver would fit and Doom would run.)
> For me it was Warcraft :)
>
> And for some game (possibly also Warcraft) I had to pretend having a
> sound card by listing the driver in config.sys, otherwise it wouldn't
> even start.


For me it was "Worms".

And I was using QEMM and Quarterdeck Manifest to get maximum free
memory in the lower 640k.

regards,
chris

Roberto C. Sánchez

unread,
Mar 15, 2021, 6:20:05 PM3/15/21
to
On Tue, Mar 16, 2021 at 12:08:51AM +0200, Andrei POPESCU wrote:
> On Lu, 15 mar 21, 11:19:55, Stefan Monnier wrote:
> >
> > Last I heard Debian works on the M1 already :-), but its Emacs package
> > doesn't :-(
>
> No surprise considering Emacs is itself a full OS :p
>
Yeah, but it could really do with a decent text editor :-p

> (sorry, could not resist)
>
(neither could I)

Regards,

-Roberto

--
Roberto C. Sánchez

Dan Ritter

unread,
Mar 15, 2021, 7:10:04 PM3/15/21
to
Andrei POPESCU wrote:
> On Lu, 15 mar 21, 17:21:39, Dan Ritter wrote:
> >
> > At last report: normal desktop Ryzens (nothing with a G suffix
> > unless it also has a PRO marking)
>
> Do you have a reliable source for the lack of ECC support in G suffix
> processors?
>
> And why would it work for PRO processors instead?
>
> I think it's unlikely AMD has 2 different cores for PRO and non-PRO,
> it's more likely it either works for both or neither.


https://www.asrock.com/mb/AMD/X570%20Taichi/index.asp#Specification

I'm going to omit a bunch of details:

- AMD Ryzen series CPUs (Vermeer) support ... ECC & non-ECC, un-buffered memory*
- AMD Ryzen series CPUs (Matisse) support ... ECC & non-ECC, un-buffered memory*
- AMD Ryzen series APUs (Renoir) support ... ECC & non-ECC, un-buffered memory*
- AMD Ryzen series CPUs (Pinnacle Ridge) support ... ECC & non-ECC, un-buffered memory*
- AMD Ryzen series CPUs (Picasso) support non-ECC, un-buffered memory*

* For Ryzen series CPUs (Picasso), ECC is only supported with PRO CPUs.


The first APUs were the Raven Ridge 2200G and 2400G, which aren't
even supported on current motherboards.

The next were the Picassos, 3200G and 3400G; there's an explicit
statement that only the PRO versions support ECC.

The current ones are the Renoir 4000 series, and I haven't got a
reliable source saying that ECC is PRO-only there -- but I
strongly suspect it.

It's not the cores that differ between the PROs and non- -- it's
the I/O chiplet.

-dsr-

Sven Hartge

unread,
Mar 16, 2021, 3:20:04 AM3/16/21
to
Ooooh, look at Mr Fancy here, cheating with 3rd party products to get
ahead :)

I'll throw in the special "maximise XMS memory" boot disk I had for
Comanche, because that game just *hated* emm386.exe. But without EMM386,
stuff like "LOADHIGH" to push drivers into the UMBs wasn't available,
and I was too lazy to add another branch to my already convoluted
config.sys boot menu (roughly the kind of thing sketched below).
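
(For the uninitiated: MS-DOS 6 let you define boot-menu branches right
in config.sys. A minimal sketch, with made-up section names:

  [MENU]
  MENUITEM=UMB, Normal boot with EMM386 and UMBs
  MENUITEM=XMS, Maximum XMS without EMM386

  [UMB]
  DEVICE=C:\DOS\HIMEM.SYS
  DEVICE=C:\DOS\EMM386.EXE NOEMS
  DOS=HIGH,UMB

  [XMS]
  DEVICE=C:\DOS\HIMEM.SYS
  DOS=HIGH

  [COMMON]
  FILES=30

Each branch gets its own section, and [COMMON] applies to all of them --
which is exactly how these menus got convoluted.)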

to...@tuxteam.de

unread,
Mar 16, 2021, 4:20:04 AM3/16/21
to
On Tue, Mar 16, 2021 at 12:08:51AM +0200, Andrei POPESCU wrote:
> On Lu, 15 mar 21, 11:19:55, Stefan Monnier wrote:
> >
> > Last I heard Debian works on the M1 already :-), but its Emacs package
> > doesn't :-(
>
> No surprise considering Emacs is itself a full OS :p
>
> (sorry, could not resist)

https://www.informatimago.com/linux/emacs-on-user-mode-linux.html

:-)

Cheers
- t

songbird

unread,
Mar 16, 2021, 8:30:05 AM3/16/21
to
Nicholas Geovanis wrote:
> On Sun, Mar 14, 2021, 1:50 PM Stefan Monnier <mon...@iro.umontreal.ca>
> wrote:
...
>> FWIW And MIPS was there even a bit earlier with their R4000 (tho the
>> software support for it only appeared some years later: they first
>> wanted to have an installed base to which to deploy the software), which
>> I believe was the first 64bit microprocessor.
>
> And the demise of the DEC Alpha was quite unfortunate. It was super-fast
> and OSF/1 was rock-solid. But DEC lost the competitive bid on that project
> and Sequent/Dynix, based on hundreds of 486 CPUs, won it. Now owned by IBM
> and deep-sixed; they really bought the customer base instead.

i wondered what happened to them, but didn't look into it.
when the university got rid of the mainframe we switched to
Sequent machines. the two cabinets replaced the entire
floor of Univac hardware (and all the AC and power costs).
the other nice thing was not listening to those printers
hammering away.


songbird

Martin Smith

unread,
Mar 16, 2021, 9:30:04 AM3/16/21
to
When I was working in the Mullard stores in the '60s, they had an
enormous computer in a very large air-conditioned hall about a mile from
the factory. I don't know precisely what it was, but it ran off punched
tape, and in a side room at the stores we had what was called a line
printer that printed out invoice/advice note pairs. It really was like a
machine gun, printing a line at a time.




--
Martin