
RISC vs CISC? Call a spade a spade?


Douglas W. Jones

Aug 2, 1991, 6:06:44 PM
From article <11...@mitech.com>, by g...@mitech.com (George J. Carrette):
>
> But what operating environments make it possible for parts of a ".o" file
> to be modifiable, and other parts not? Or make it impossible for unprivileged
> code to create something that looks like a ".o" file?

The IBM System 38 and AS 400 systems do this. As originally developed, this
group of architectures was in the ultra-CISC category, with a very secure
capability-based segmented memory architecture implemented by a cumbersome
multi-layer interpreter structure that had significant performance problems.
They eliminated the top layer of the interpreter by changing it to a loader.

The compilers still generate what is essentially CISC code, and the loader
compiles this to what they call microcode (because it was originally
intended to be the microcode layer that implemented the CISC interpreter).
The microcode level is comparatively RISCy, with much of the safety of
the code dependent on the correctness of the loader. The hardware
protection on this family of machines is strong enough that I believe that
the loaded code is indeed protected from any unsafe run-time modification,
and I believe that the combination of loader and RISCy hardware interpreter
is a safe implementation of the CISC instruction set generated by the
compilers.

> No vendor actually intends to depend on the COMPILER to generate
> "correct" code which won't crash the system.

I believe that the loader on the System 38 / AS 400 is essentially the
final stage of the code generator for all compilers on that system, and
IBM really does seem to depend on it to generate correct crash-proof code.
However, I'm not an IBM person, and the System 38 / AS 400 architectures
are proprietary and not well described even in the internal IBM documents
I've seen. If anyone from IBM would like to clarify what's going on in
the AS 400, I'd sure appreciate it.

Doug Jones
jo...@cs.uiowa.edu

George J. Carrette

Aug 2, 1991, 11:28:21 AM
For the last 14 years a good deal of my involvement with computers has
been the implementation of interpreters (and associated assemblers,
translators, and compilers). What are these interpreters?
(1) microcode to interpret an instruction set.
(2) hardware to interpret an instruction set.
(3) software to interpret lisp, basic, whatever.

Finally, a single message sent to me on 1-AUG-91 has made me realize
why I get this funny feeling that there is either a potentially
serious robustness problem with current RISC implementations, *or* in
the final analysis no real RISC superiority over CISC. Here is the
message:

> To: g...@mitech.com (George J. Carrette)
> Subject: Re: Feedback on system robustness testing
> I saw the posting, on Alt.sources. I saved it, ran it on a machine,
> it crashed, and I gave the source to a vendor rep. and said, "Fix It!"
> Thanks for doing us a service. Any vendor which depends on the compiler
> to generate "correct" code which won't crash the system is STUPID.

Just a very straightforward statement by a possibly unsophisticated user.

It has been a general rule of interpreter implementation that a good
way to speed up an interpreter is to remove as much runtime error checking
as possible. For lisp/basic you could call it syntax checking, and for
machine instruction sets you can call it instruction-decoding.

As much of the responsibility for that error checking as possible should
be placed on the COMPILER. The fastest interpreters tend to get really
confused and to die horrible deaths if given bogus, non-syntax-checked
input data.

This goes for all three types of interpreters, microcode, hardware, and
software. Further efficiency (space and time) gains may be had by
making assumptions (enforced by the compiler) about the context of
certain operations being interpreted.
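
For concreteness, here is a minimal sketch of the trade-off; the opcodes
and both loops are invented for illustration, not taken from any real
interpreter:

    /* run_checked() validates everything at run time; run_fast()
     * assumes a compiler already verified the program. */
    enum { OP_PUSH, OP_ADD, OP_HALT };

    int run_checked(const int *code, int len)
    {
        int stack[64], sp = 0, pc = 0;
        while (pc < len) {
            switch (code[pc]) {
            case OP_PUSH:
                if (sp >= 64 || pc + 1 >= len)
                    return -1;              /* runtime check */
                stack[sp++] = code[++pc];
                pc++;
                break;
            case OP_ADD:
                if (sp < 2)
                    return -1;              /* runtime check */
                sp--;
                stack[sp - 1] += stack[sp];
                pc++;
                break;
            case OP_HALT:
                return sp > 0 ? stack[sp - 1] : 0;
            default:
                return -1;                  /* bad opcode */
            }
        }
        return -1;
    }

    int run_fast(const int *code)
    {
        /* No checks: bogus, non-syntax-checked input overruns the
         * stack, runs off the end of code[], or loops forever. */
        int stack[64], sp = 0, pc = 0;
        for (;;) {
            switch (code[pc]) {
            case OP_PUSH: stack[sp++] = code[++pc]; pc++; break;
            case OP_ADD:  sp--; stack[sp - 1] += stack[sp]; pc++; break;
            case OP_HALT: return stack[sp - 1];
            }
        }
    }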

This is all well and good. But what operating systems have mechanisms
to enforce the SECURITY of data structures (parts of executable programs)
so that the "code" part of the output of a compiler is a privileged
data structure that the user cannot modify?

If a "compilation" is done on-the-fly as part of an execution/paging
kind of operation, such as the one employed by a defunct VLIW hardware
company using technology out of Yale, then the output of this "compilation"
(not much more than a decompression in this case, but it serves to
illustrate) is indeed easy to make protected and privileged.

But what operating environments make it possible for parts of a ".o" file
to be modifiable, and other parts not? Or make it impossible for unprivileged
code to create something that looks like a ".o" file?

On the other hand, I know that if a big enough fire-wall is built
around the interpreter user-context (sometimes at great expense: it makes
exception handling, context-switching and debugging features very
expensive operations, and no popular benchmarks pay much attention to
those things!) then all manner of problems of this nature can in theory
be ameliorated.

No vendor actually intends to depend on the COMPILER to generate
"correct" code which won't crash the system.

However, it should be obvious to all observers, given the experience
in the last year with the CRASHME program, that SOME RISC VENDORS are
not even bothering to put in place the most elementary of engineering
structures (e.g. testing) to deal with the issue.

-gjc

Henry Spencer

Aug 3, 1991, 8:31:27 PM
In article <11...@mitech.com> g...@mitech.com (George J. Carrette) writes:
>However, it should be obvious to all observers, given the experience
>in the last year with the CRASHME program, that SOME RISC VENDORS are
>not even bothering to put in place the most elementary of engineering
>structures (e.g. testing) to deal with the issue.

Why should this be obvious? As far as I know, *none* of the RISC vendors
who got bitten by "crashme" had to revise their hardware to respond. The
usual problem was operating-system bugs in response to bizarre kinds of
traps. The notion that the failures were the result of unorthodox sequences
of instructions somehow bypassing system protection is a CISCist fantasy.
None of the RISC machines depend on their compilers for their protection.
(A few CISC machines do.)

Yes, it would have been nice if the system bugs had been found earlier.
Note, however, that some CISC machines crashed for similar reasons.
Despite CISCist superstition, the "crashme" results say nothing about
the merits of RISCs vs CISCs.

To paraphrase a comment originally made about fiber optics vs. comsats,
"if OS bugs are the biggest problem the CISC people can find with RISC,
they must really be getting desperate".
--
Arthritic bureaucracies don't tame new | Henry Spencer @ U of Toronto Zoology
frontiers. -Paul A. Gigot, WSJ, on NASA | he...@zoo.toronto.edu utzoo!henry

Chris Torek

Aug 3, 1991, 9:07:39 AM
In article <11...@mitech.com> g...@mitech.com (George J. Carrette) writes:
>No vendor actually intends to depend on the COMPILER to generate
>"correct" code which won't crash the system.

This is not the case. The Burroughs A-series machines apparently
depended on the compiler's producing valid code, for instance. But
this is rare, to be sure.

>However, it should be obvious to all observers, given the experience
>in the last year with the CRASHME program, that SOME RISC VENDORS are
>not even bothering to put in place the most elementary of engineering
>structures (e.g. testing) to deal with the issue.

This has always been the case, whether or not the word `RISC' or
`CISC' appears. Indeed, you can even delete the word `VENDORS'.
The following is a true statement:

Some groups do insufficient systems testing (for the following
applications: ...).

Unfortunately, it is not properly inflammatory, and thus people keep
insisting on replacing `Some groups' with things like `RISC chip makers'
or `Unix programmers'. Without a qualifier, it then becomes a false
(but sufficiently inflammatory) statement.

The more interesting question `of RISCs and CISCs, which fail more
often?' can only be answered after first defining RISC and CISC, and
`failing often', much more precisely than most people who start this
are willing to comprehend. (If they were willing to invest enough
effort into comprehending the issue, they would probably not be so
sloppy in starting these flame wars.) Even then the measures are hard
to select.

A final question: `If you remove complex instructions or complex
addressing modes from an architecture, do implementations become harder
to test'? It should be obvious that the answer to this is `not
often.' Of course, this is not what the CISC fans wish to point out.
--
In-Real-Life: Chris Torek, Lawrence Berkeley Lab CSE/EE (+1 415 486 5427)
Berkeley, CA Domain: to...@ee.lbl.gov

Frank D. Cringle

Aug 4, 1991, 5:25:04 AM
In article <11...@mitech.com> g...@mitech.com (George J. Carrette) writes:
>Finally, a single message sent to me on 1-AUG-91 has made me realize
>why I get this funny feeling that there is either a potentially
>serious robustness problem with current RISC implementations, *or* in
>the final analysis no real RISC superiority over CISC.

Can anyone explain this phenomenon? I mean why George keeps posting
crashme and then complaining that the perverse RISC users are failing
to admit that it blew their machines to smithereens. Maybe they did
as I did: compile it, run it a few times with a variety of parameters,
watch the messages about caught signals scroll past, and then find the
exercise rather boring as nothing shows the least sign of crashing.
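
For anyone who missed the alt.sources posting, the heart of the thing
fits on one page. This is a paraphrase of the idea, not George's actual
source:

    /* Paraphrase of the crashme idea: fill a buffer with random
     * bytes, call it as a function, and catch whatever signals
     * come back. */
    #include <signal.h>
    #include <setjmp.h>
    #include <stdio.h>
    #include <stdlib.h>

    static jmp_buf env;

    static void catcher(int sig)
    {
        printf("caught signal %d\n", sig);
        longjmp(env, 1);
    }

    int main(void)
    {
        static unsigned char buf[64];
        void (*badfunc)(void) = (void (*)(void)) buf;
        int i, trial;

        signal(SIGILL, catcher);
        signal(SIGSEGV, catcher);
        signal(SIGBUS, catcher);

        for (trial = 0; trial < 10000; trial++) {
            for (i = 0; i < 64; i++)
                buf[i] = rand() & 0xff;
            if (setjmp(env) == 0)
                (*badfunc)();       /* jump into the garbage */
        }
        return 0;
    }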

Here are the details for your log, George (if you are counting negative
results): DG Aviion AV300 (Motorola 88K) running DG/UX 4.31.

If we need some credentials here:


>For the last 14 years a good deal of my involvement with computers has
>been the implementation of interpreters (and associated assemblers,
>translators, and compilers). What are these interpreters?
> (1) microcode to interpret an instruction set.
> (2) hardware to interpret an instruction set.
> (3) software to interpret lisp, basic, whatever.

Over the last 20 years I have written microcode for /370-clone mainframes
(privileged instructions and machine check code), designed hardware for
same, written CAD software (simulators for digital logic) and now design
controllers for mainframe peripherals. I share your concern for robust
implementation and error checking -- I find these goals most easily
achieved on the basis of a clear and simple design.

--
Frank D. Cringle | Tel. +49 231 5599 124
Dr. Materna GmbH | f...@materna.DE
Vosskuhle 37
D-4600 Dortmund 1

Craig Jackson drilex1

Aug 4, 1991, 5:42:43 PM
In article <11...@mitech.com> g...@mitech.com (George J. Carrette) writes:
>It has been a general rule of interpreter implementation that a good
>way to speed up an interpreter is to remove as much runtime error checking
>as possible. For lisp/basic you could call it syntax checking, and for
>machine instruction sets you can call it instruction-decoding.

This has always been true in single-thread programs (macro or micro).
I do not know enough about present hardware to know whether it is
an issue in current real hardware implementations, where lots of things
are overlapped. I have been led to believe that it is not true.

>This is all well and good. But what operating systems have mechanisms
>to enforce the SECURITY of data structures (parts of executable programs)
>so that the "code" part of the output of a compiler is a privileged
>data structure that the user cannot modify?
>

>But what operating environments make it possible for parts of a ".o" file
>to be modifiable, and other parts not? Or make it impossible for unprivileged
>code to create something that looks like a ".o" file?

The answer to both questions is "The Unisys A-Series, which started
architectural life as the Burroughs B6700". Compilers are anointed code
files, and only the console operator can anoint them. Executable files
are anointed in a different way, and only an anointed compiler can
create them.

There is a special compiler that creates the operating system--it
cannot produce normal executable programs. It can only produce system
libraries and bootable operating system files.
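
In rough C terms, the policy amounts to something like this; my own
sketch of the concept, with invented field names, not anything Unisys
ships:

    /* The stamps live in the codefile's metadata, written only by
     * the system; ordinary user code has no way to forge them. */
    struct codefile {
        int stamped_executable;  /* set when an anointed compiler writes it */
        int anointed_compiler;   /* set only by the console operator */
    };

    int may_execute(const struct codefile *cf)
    {
        return cf->stamped_executable;
    }

    int may_stamp_output(const struct codefile *creator)
    {
        return creator->anointed_compiler;
    }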

>No vendor actually intends to depend on the COMPILER to generate
>"correct" code which won't crash the system.

Burroughs/Unisys has done so for over 25 years.

>However, it should be obvious to all observers, given the experience
>in the last year with the CRASHME program, that SOME RISC VENDORS are
>not even bothering to put in place the most elementary of engineering
>structures (e.g. testing) to deal with the issue.

As none of the problems which have been analyzed have come from a chip
failing to detect an undefined opcode, and since some of the failures
have occurred on CISC chips, I would suggest that this is an unsupported
statement.
--
Craig Jackson
dri...@drilex.dri.mgh.com
{bbn,axiom,redsox,atexnet,ka3ovk}!drilex!{dricej,dricejb}

peter da silva

Aug 4, 1991, 8:34:48 PM
In article <74...@ns-mx.uiowa.edu>, jo...@pyrite.cs.uiowa.edu (Douglas W. Jones) writes:
> From article <11...@mitech.com>, by g...@mitech.com (George J. Carrette):
> > But what operating environments [...] make it impossible for unprivileged
> > code to create something that looks like a ".o" file?

Burroughs (now Unisys) has a whole set of minis that work this way.

The whole system was based on Algol.
--
Peter da Silva; Ferranti International Controls Corporation; +1 713 274 5180;
Sugar Land, TX 77487-5012; `-_-' "Have you hugged your wolf, today?"

Ronald G Minnich

Aug 4, 1991, 8:48:27 PM
In article <11...@mitech.com> g...@mitech.com (George J. Carrette) writes:
>This is all well and good. But what operating systems have mechanisms
>to enforce the SECURITY of data structures (parts of executable programs)
>so that the "code" part of the output of a compiler is a privileged
>data structure that the user cannot modify?

Burroughs MCP, on the stack machines, which were your basic CISC,
near as I can tell.
I had several programs which would cause the B7900 to die a horrible
death, either in Fortran77 or Algol.
It was the compiler-writer's job to fix the compiler so that these
programs would not cause the machine to crash and reboot.
(I was a hardware hacker at this stage, so I luckily did not
have to worry about ever fixing this stuff, praise be).
The programs were as simple as a trivial string scan.
The problems with this architecture's lack of security began real
early (i.e. the 60s) and at least until I left Burroughs (1983) had
not gone away, and had gotten more complex. There was no
protection in the architecture against certain simple things, and
compilers were privileged things (not just ANYONE could create
runnable object code).

>But what operating environments make it possible for parts of a ".o" file
>to be modifiable, and other parts not? Or make it impossible for unprivileged
>code to create something that looks like a ".o" file?

See above.
Kind of a mess ...
But note that this was not a RISC, in any sense of the word.
ron
--
Quote of the month:"The 300 answering machines at (name suppressed) that
pass for tech and sales support have been amazingly _unhelpful_."
--Terrence Talbot

George J. Carrette

Aug 5, 1991, 5:29:15 AM
In article <fdc.681297904@elwood>, f...@Materna.DE (Frank D. Cringle) writes:
> Maybe they did
> as I did: compile it, run it a few times with a variety of parameters,
> watch the messages about caught signals scroll past, and then find the
> exercise rather boring as nothing shows the least sign of crashing.

Run a test of this nature for a few moments of your limited human
attention span? For shame, for shame.

If that is your attitude toward software testing, good luck.

> I share your concern for robust
> implementation and error checking -- I find these goals most easily
> achieved on the basis of a clear and simple design.

If it sells iron, why not?

But seriously, when you combine the hardware and the operating system
together, "clear and simple" are not the words to describe anything that
is useful.

-gjc

Perry Scott

Aug 6, 1991, 12:55:28 PM
>Here are the details for your log, George (if you are counting negative
>results): DG Aviion AV300 (Motorola 88K) running DG/UX 4.31.

Ditto for the HP9000/720 running HP-UX 8.05. Unfortunately, all we can
report is "we didn't find any defects" (but it was sure quick not
finding them. :-) That doesn't mean there isn't a combination of
parameter values that won't crash the system.

"crashme" tests the operating system as well as the underlying hardware.
As such, it's hard to sort out the results you're getting.

Perry Scott
HP Ft Collins

Crispin Cowan

Aug 8, 1991, 11:40:01 AM
In article <11...@mitech.com> g...@mitech.com (George J. Carrette) writes:
>Finally, a single message sent to me on 1-AUG-91 has made me realize
>why I get this funny feeling that there is either a potentially
>serious robustness problem with current RISC implementations, *or* in
>the final analysis no real RISC superiority over CISC.
...

>However, it should be obvious to all observers, given the experience
>in the last year with the CRASHME program, that SOME RISC VENDORS are
>not even bothering to put in place the most elementary of engineering
>structures (e.g. testing) to deal with the issue.

The behaviour of CRASHME is really simple to explain:
-no CPU faults have ever been found
-lots of OS faults have been found
-more OS faults have been found in OSes for RISC boxes than CISC
-RISC is now defined as any architecture introduced since 1985
What do we notice from the above? CRASHME crashes YOUNG OSes. RISC
machines tend to have newer implementations or ports for their operating
systems, and thus haven't had the same opportunity to dig out the last
few bugs that the dusty old CISC OS implementations have had. Big
surprise. Being RISC is not an OS security risk, being new is.

Crispin
-----
Crispin Cowan, CS grad student, University of Western Ontario
Phyz-mail: Middlesex College, MC28-C, N6A 5B7
E-mail: cri...@csd.uwo.ca Voice: 519-661-3342
Temporarily at: w1c...@watson.ibm.com
"If you want an operating system that is full of vitality and has a
great future, use OS/2." --Andy Tanenbaum

Guy Harris

Aug 9, 1991, 3:17:18 PM
>What do we notice from the above? CRASHME crashes YOUNG OSes. RISC
>machines tend to have newer implementations or ports for their operating
>systems,

Well, *sort* of.

In the case of SunOS, for example, 95% or so of SunOS 4.x for 68K is built
from the exact same source code as SunOS 4.x for SPARC, the exceptions
being:

1) much of the compilers (much of the compiler *is* shared, though - for
example, the grammar for the C language doesn't change, other than
the hack for "alloca()", just because you're on a SPARC rather than a
68K);

2) the assembler;

3) some stuff in the linker (most of it is shared, though);

4) some stuff in the run-time linker;

5) some of the run-time support, and the whizzo assembler-language
implementations of some routines, in the C library;

6) machine-dependent stuff in the kernel (the bulk of the kernel is
shared).

The bugs in question tended to show up in 6).

The question then is whether:

1) the stuff in 6) is newer on SPARC than on 68K; it pretty much is, as
the first Sun-4s came out after the first Sun-3s and much after the
Sun-2s.

2) the stuff in 6) is trickier on SPARC than on 68K due to the fact that
SPARC is a RISC and the 68K isn't. Maybe, maybe not; some more stuff
may trap to the kernel on RISCs, but that stuff doesn't get done by
microcode on RISCs, either. Is it easier to do microcode than trap
handlers? Is there something *intrinsic* to microcode that causes it
to be more thoroughly, or more correctly, checked out?

George J. Carrette

Aug 9, 1991, 12:05:01 PM
In article <1991Aug8.1...@watson.ibm.com>, w1c...@watson.ibm.com (Crispin Cowan) writes:
> What do we notice from the above? CRASHME crashes YOUNG OSes. RISC
> machines tend to have newer implementations or ports for their operating
> systems, and thus haven't had the same opportunity to dig out the last
> few bugs that the dusty old CISC OS implementations have had. Big
> surprise. Being RISC is not an OS security risk, being new is.

You can only draw this false conclusion if you are young and inexperienced.

The interesting comparison is between YOUNG CISC and YOUNG RISC.
(You are claiming the only comparison being made is between OLD CISC
and YOUNG RISC). Of course, you have to have been around for a few years,
over a decade, to have had significant experience with more than one YOUNG CISC.

[Furthermore. There IS A SPARC HARDWARE BUG. And unfortunately I have
promised NOT TO DISCLOSE what it is. The agreement is that if SUN doesn't
do anything about fixing it or recognizing it in another 6 months or so
then a certain party will disclose it.]

[Also, it would be nice if you could show how all the latest reported
CRASHME crashes (in version 1.2) are software bugs, by giving exact
references.]

My experience with early OS's on CISC machines makes me say
that the YOUNG RISC OS's are more flaky than the YOUNG CISC ones were.

Furthermore, some of the old CISC OS's are clearly EVOLVING FASTER
AND GETTING SIGNIFICANT NEW FEATURES FASTER than the young UNIX ports.

What we are seeing now in Unix is RELATIVELY STABLE stuff. So what is
going to happen with the radical rewrites and new features that people
like AT&T have planned?

The MARKET for computers is just a lot different now. There is a large
amount of growth in a market where people are willing to put up with
a much lower level of functionality and robustness in order to get a
better price/performance ratio. This used to be called the PC market.

That is what has changed. The fact that computer instruction sets are
changing? well, that hasn't changed.

-gjc

Guy Harris

Aug 10, 1991, 2:39:58 PM
>[Furthermore. There IS A SPARC HARDWARE BUG.

Does that mean "a bug intrinsic to the SPARC architecture" or "a bug
present in some, but not necessarily all, SPARC implementations"? (An
answer of "I have promised not to disclose what it is" will be treated
as an answer of "a bug present in some, but not necessarily all, SPARC
implementations".)

George J. Carrette

Aug 12, 1991, 4:31:50 AM
In article <93...@auspex.auspex.com>, g...@auspex.auspex.com (Guy Harris) writes:
>>[Furthermore. There IS A SPARC HARDWARE BUG.
>
> Does that mean "a bug intrinsic to the SPARC architecture" or "a bug
> present in some, but not necessarily all, SPARC implementations"?

A bug in some.

And, a bug NOT FOUND by using CRASHME. It was found by a more directed
search and analysis of exceptional instructions. By somebody who knew something
about the structure of the instruction set, as compared with CRASHME which
knows nothing structural.

In all honesty CRASHME is going to be pretty lousy at putting the hardware
into interesting states as long as it can crash things so quickly due to
OS bugs.

When serious people (as compared with part-time hacks in this area like myself)
use random-input testing methods they get a rough idea of how many
billions of runs they need, and they have the motivation and resources to
run tests 24 hours a day, for weeks and months on end.

----------------------

Frankly I'm surprised that nobody has ATTACKED my leading comments about
CRASHME (and forgive me, but I proved to myself that I had to attach at least
some mildly leading and outrageous stuff to it, to motivate even a few
people to want to run it and consider the results):

that it really didn't prove anything except that the software testing
groups (if any) at certain manufacturers don't have any tests like
CRASHME. And, even when they saw the results of CRASHME from a year ago,
they decided not to incorporate such tests into any kind of regular program.

-gjc

Crispin Cowan

Aug 12, 1991, 4:13:37 PM
In article <93...@auspex.auspex.com> g...@auspex.auspex.com (Guy Harris) writes:
> Is it easier to do microcode than trap
> handlers? Is there something *intrinsic* to microcode that causes it
> to be more thoroughly, or more correctly, checked out?
Two things:
1. In most cases, you know that the microcode is going to be
cast in iron, so you get really careful & thorough about
your testing.

2. The functionality of microcode is generally smaller/simpler,
facilitating rigorous/exhaustive testing. Whether CISC or
RISC, the range of possible inputs to OS kernel code & trap
handlers is much larger than that of a CPU, and thus testing
it exhaustively becomes virtually impossible.

On the other side, once an error in microcode is committed to silicon,
it's pretty difficult to fix :-), while trap handler bugs can be fixed
with OS patches or a new release. Certainly not pleasant, but better
than getting out the chip extractor.

Chip Salzenberg

Aug 12, 1991, 12:23:55 PM
According to g...@auspex.auspex.com (Guy Harris):

> Is there something *intrinsic* to microcode that causes it
> to be more thoroughly, or more correctly, checked out?

Perhaps the difficulty of microcode fixes causes manufacturers to take
more care with desk-checking microcode than with (relatively) easy to
fix operating system code.
--
Chip Salzenberg at Teltronics/TCT <ch...@tct.com>, <uunet!pdn!tct!chip>
If you meet Ken Thompson on the road, kill him.

George J. Carrette

Aug 13, 1991, 6:57:28 AM
In article <1991Aug12....@watson.ibm.com>, w1c...@watson.ibm.com (Crispin Cowan) writes:
> ... while trap handler bugs can be fixed
> with OS patches or a new release. Certainly not pleasant, but better
> than getting out the chip extractor.

Of course, another thing is certain: TRAP HANDLERS and such can be BROKEN
by OS patches and new releases. By subtle changes in the optimizations
provided by the compiler, etc.

Steve Correll

Aug 13, 1991, 2:24:57 PM
In article <12...@mitech.com> g...@mitech.com (George J. Carrette) writes:
>Furthermore, some of the old CISC OS's are clearly EVOLVING FASTER
>AND GETTING SIGNIFICANT NEW FEATURES FASTER than the young UNIX ports.
>
>The MARKET for computers is just a lot different now. There is a large
>amount of growth in a market where people are willing to put up with
>a much lower level of functionality and robustness in order to get a
>better price/performance ratio. This used to be called the PC market.

I'm privileged to use the most popular CISC architecture in the world, running
its most popular operating system. While I don't see how you can describe
DOS/Windows as an "old" CISC OS, it certainly is adding significant new features
faster than Unix: some time soon, DOS/Windows will learn to use the CPU as a
32-bit machine, and Microsoft has said that if we stick with DOS/Windows we
will soon have true memory protection, true multiprogramming, and perhaps even
demand paging. Meanwhile, I reboot a lot between jobs just to be safe.

Executing random bit patterns under DOS/Windows might tell us a lot about the
inherent superiority of CISC as a platform for operating systems, but I regret
that I am unable to offer the use of my hard disc for the experiment. :-)

Chris Torek

Aug 13, 1991, 4:13:43 PM
In article <1991Aug12....@watson.ibm.com> w1c...@watson.ibm.com
(Crispin Cowan) points out that
>... trap handler bugs can be fixed with OS patches or a new release.
>Certainly not pleasant, but better than getting out the chip extractor.

In article <13...@mitech.com> g...@mitech.com (George J. Carrette) writes:
>Of course, another thing is certain: TRAP HANDLERS and such can be BROKEN
>by OS patches and new releases.

This much is true; however:

>By subtle changes in the optimizations provided by the compiler, etc.

this is false on, for instance, the SPARC. My trap handlers do not depend
on subtle optimizations (or lack thereof), and I suspect Sun's do not
either (or, if they do, it is unintentional, i.e., a bug-in-waiting).

There are pieces of code in the 4BSD VAX kernel that depend on the
compiler not optimizing, but the VAX is a CISC, by anyone's measure,
and these pieces are easily fixed.

Certainly, if the compiler itself is broken and emits bad code, this
could break the operation of a trap handler---but it could equally well
break anything else. This argument does not point any direction at
all, save to the existence of bugs---but we knew that already.

Crispin Cowan's point is valid: if you think you can fix it later, you
have less incentive to fix it now. This is more likely to mislead
those who construct software than those who construct hardware: the
hardware people are more acutely aware of the cost of fixing something
later. If you discover a bug in hardware, you may not fix it at all
(due to the perceived and/or calculated cost), leaving it for the
software people to fix, if possible. (A case in point: the VAX probe
instruction, if close to a page boundary, failed on the 11/780. A
prefetch was done in the mode the probe used for probing. As far as I
know, this bug was never fixed. The workaround was to make sure that
no probe instruction was within 8 bytes of the end of a page.)
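
(In other words, the workaround reduces to a placement check; a sketch,
assuming the VAX's 512-byte page:)

    /* A probe instruction must not begin within 8 bytes of the end
     * of a 512-byte page. */
    int probe_placement_ok(unsigned long addr)
    {
        return (addr % 512) < 504;  /* last 8 bytes are off limits */
    }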

Thus, the hardware people can justify extra time and expense for
verification and testing (this is not intended to be redundant:
`verification' includes `desk checking', and `testing' means `not
*just* desk checking or formal methods'). The software people may
not see any reason to do this, or their managers may deny it, and
the result is likely to be approval at higher levels because the
product will ship sooner.


--
In-Real-Life: Chris Torek, Lawrence Berkeley Lab CSE/EE (+1 415 486 5427)
Berkeley, CA Domain: to...@ee.lbl.gov

new area code as of September 2, 1991: +1 510 486 5427

Giridhar Rao

Aug 13, 1991, 10:42:45 PM
> on subtle optimizations (or lack thereof), and I suspect Sun's do not
> either (or, if they do, it is unintentional, i.e., a bug-in-waiting).
>
> There are pieces of code in the 4BSD VAX kernel that depend on the
> compiler not optimizing, but the VAX is a CISC, by anyone's measure,

I saw so many discussions on this newsgroup about RISC vs. CISC.
Can some knowledgeable netter tell me what CISC stands for?
Is it something to do with microprogrammed control?

Thanks very much.

--Giri.

> and these pieces are easily fixed.



--
* --- / / Computer Science Department *
* / _ o o__ o __/ /_ __ o__ Arizona State University *
* /___)_/_ / (_/_(__)_/ \_/_>_/ (_ gd...@enuxha.eas.asu.edu *
:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

Herman Rubin

Aug 14, 1991, 9:09:27 AM
In article <16...@dog.ee.lbl.gov>, to...@elf.ee.lbl.gov (Chris Torek) writes:

..........................

> There are pieces of code in the 4BSD VAX kernel that depend on the
> compiler not optimizing, but the VAX is a CISC, by anyone's measure,
> and these pieces are easily fixed.

This is nonsense. "Optimizing" means producing the "best" code for the
purpose. If certain types of code modification defeat the purpose, they
are not optimizations.
--
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907-1399
Phone: (317)494-6054
hru...@l.cc.purdue.edu (Internet, bitnet) {purdue,pur-ee}!l.cc!hrubin(UUCP)

Chris Torek

Aug 14, 1991, 2:47:23 PM
>In article <16...@dog.ee.lbl.gov> to...@elf.ee.lbl.gov (Chris Torek) writes:
>>There are pieces of code in the 4BSD VAX kernel that depend on the
>>compiler not optimizing, but the VAX is a CISC, by anyone's measure,
>>and these pieces are easily fixed.

In article <16...@mentor.cc.purdue.edu>
hru...@pop.stat.purdue.edu (Herman Rubin) writes:
>This is nonsense. "Optimizing" means producing the "best" code for the
>purpose. If certain types of code modification defeat the purpose, they
>are not optimizations.

`Optimization' is not just making the program run as fast as possible,
nor is it just making the program as small as possible, nor is it just
making the best use of sneaky encodings. Optimization requires selecting
the proper balance between all three.

The code I was thinking of is for Unibus error recovery:

/*
* This routine is called by the locore code to process a UBA
* error on an 11/780 or 8600. The arguments are passed
* on the stack, and value-result (through some trickery).
* In particular, the uvec argument is used for further
* uba processing so the result aspect of it is very important.
* It must not be declared register.
*/

If the compiler elects to put uvec in a register, or notices that
changes to uvec merely change a dead variable and so removes them,
things will go badly. The most straightforward way to fix it is
to pass uvec via a pointer. Since this code is run almost never,
slight increases in space or time here are practically irrelevant.
The major result will be that the comment about the uvec argument
can go away, and people need not be confused by it.
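
That is, roughly the following, with hypothetical names (a sketch, not
the actual kernel source):

    /* The result goes back through a pointer, so the store to *uvec
     * is an ordinary store the optimizer must preserve. */
    static int recompute_uvec(int olduvec)
    {
        return olduvec + 1;     /* stand-in for the real recovery work */
    }

    int ubaerror(int err, int *uvec)
    {
        *uvec = recompute_uvec(*uvec);
        return err;
    }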

Joe Buck

Aug 14, 1991, 3:10:03 PM
In article <16...@mentor.cc.purdue.edu>, hru...@pop.stat.purdue.edu (Herman Rubin) writes:
|> In article <16...@dog.ee.lbl.gov>, to...@elf.ee.lbl.gov (Chris Torek) writes:
|>
|> ..........................
|>
|> > There are pieces of code in the 4BSD VAX kernel that depend on the
|> > compiler not optimizing, but the VAX is a CISC, by anyone's measure,
|> > and these pieces are easily fixed.
|>
|> This is nonsense. "Optimizing" means producing the "best" code for the
|> purpose. If certain types of code modification defeat the purpose, they
|> are not optimizations.

By that standard, there is no such thing as an "optimizing" compiler --
no compiler always produces the "best" code. Language is an evolving
thing. "optimizing compiler" has taken on a specific meaning because
of use.

One reason optimizing C compilers broke kernel code is that K&R C didn't
have the "volatile" keyword -- there was no way to tell the C compiler
that a particular word was really a device register and that it would
change. For that reason, the programmer might write a chunk of code
that tests a bit in a device register several times. An optimizing
compiler, not knowing that the bit could change by itself, would decide
to eliminate all but the first test, saving the result in a CPU register,
say.

This is no longer a problem because of the "volatile" keyword -- which
indicates that a variable may be changed by "someone else" -- in ANSI C.
But ANSI C was not available when the 4BSD Vax kernel was written. At
the time, kernel code was often written using C as assembly language:
execute this code exactly as it is written, do not rearrange to "produce
the equivalent result more efficiently" since we're counting on side effects.
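
Concretely (the device address and names here are invented for
illustration):

    #define DEV_READY 0x01

    /* K&R C: nothing tells the compiler the register can change by
     * itself, so it may load the word once and spin on a stale copy. */
    unsigned *csr_plain = (unsigned *) 0xffff4000UL;

    void wait_ready_old(void)
    {
        while ((*csr_plain & DEV_READY) == 0)
            ;
    }

    /* ANSI C: volatile forces a fresh load on every iteration. */
    volatile unsigned *csr = (volatile unsigned *) 0xffff4000UL;

    void wait_ready(void)
    {
        while ((*csr & DEV_READY) == 0)
            ;
    }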

I suggest that you be more careful about spouting off with phrases like
"This is nonsense".

--
Joe Buck
jb...@galileo.berkeley.edu {uunet,ucbvax}!galileo.berkeley.edu!jbuck

Donald Lindsay

Aug 14, 1991, 11:45:27 PM

In article <93...@auspex.auspex.com> g...@auspex.auspex.com (Guy Harris) writes:
> Is it easier to do microcode than trap
> handlers? Is there something *intrinsic* to microcode that causes it
> to be more thoroughly, or more correctly, checked out?

Microcode is much harder to write than trap handlers, and much harder
to check out.

It's harder to write because the microcoder needs to know the
hardware much more intimately. Even worse, the visible architecture
is likely to be changed slightly, after the code development has been
started. Plus, microcode runs into time dependencies: its correctness
may depend on being fast enough, or on looking at a certain bus
exactly so many clocks after some other event.

It's harder to check out because it's typically convoluted.
Microcoding is the last, true home of gross hacks. Why? Well, first,
*anything* that makes some macroinstruction run one clock faster is
going to get put in. Second, there's often a space crunch. Remember,
the microcode may have to do quite a lot: perhaps diagnostics,
perhaps a console debugger.

As for how hacky it gets ... imagine an 8-deep call stack, and code
that *overflows* the stack because it was faster to push three times
than pop five. This is a real example, from one of the early
MicroVaxen, and sure enough, early copies of that chip were shipped
with a bug that could crash the microcode.
--
Don D.C.Lindsay Carnegie Mellon Robotics Institute

Herman Rubin

Aug 15, 1991, 6:43:32 AM
In article <1991Aug14.1...@agate.berkeley.edu>, jb...@forney.berkeley.edu (Joe Buck) writes:
> In article <16...@mentor.cc.purdue.edu>, hru...@pop.stat.purdue.edu (Herman Rubin) writes:
> |> In article <16...@dog.ee.lbl.gov>, to...@elf.ee.lbl.gov (Chris Torek) writes:

..........................

> |> > There are pieces of code in the 4BSD VAX kernel that depend on the
> |> > compiler not optimizing, but the VAX is a CISC, by anyone's measure,
> |> > and these pieces are easily fixed.

> |> This is nonsense. "Optimizing" means producing the "best" code for the
> |> purpose. If certain types of code modification defeat the purpose, they
> |> are not optimizations.

> By that standard, there is no such thing as an "optimizing" compiler --
> no compiler always produces the "best" code. Language is an evolving
> thing. "optimizing compiler" has taken on a specific meaning because
> of use.

That there is no such thing as a compiler producing the best code is something
with which I completely agree, but which seems to be lost on the field. The
purpose of compilers is to enable the USER to get his job done, and preferably
efficiently. Apart from the study of compilers themselves, they have little
other function. If an "optimizing compiler" produces bad code, who needs it?

> One reason optimizing C compilers broke kernel code is that K&R C didn't
> have the "volatile" keyword -- there was no way to tell the C compiler
> that a particular word was really a device register and that it would
> change.

If you have seen my postings, you would know that the idea that a computer
language is even adequate is at best a chimera. If the C language had been
recognized as incomplete from the beginning, a programmer could have taken
care of the problem. THIS problem is taken care of by THIS new keyword;
the next one will take a different treatment. I can even provide examples
of natural situations where efficiency considerations can be communicated
even to those with little knowledge of computers, but existing software
does a poor job with them.

.......................

> I suggest that you be more careful about spouting off with phrases like
> "This is nonsense".

I suggest that the computer field be more careful about misusing existing
languages, like English and mathematics.

herr...@iccgcc.decnet.ab.com

Aug 15, 1991, 12:50:32 PM
In article <16...@mentor.cc.purdue.edu>, hru...@pop.stat.purdue.edu (Herman Rubin) writes:
> In article <16...@dog.ee.lbl.gov>, to...@elf.ee.lbl.gov (Chris Torek) writes:
>
> ..........................
>
>> There are pieces of code in the 4BSD VAX kernel that depend on the
>> compiler not optimizing, but the VAX is a CISC, by anyone's measure,
>> and these pieces are easily fixed.
>
> This is nonsense. "Optimizing" means producing the "best" code for the
> purpose. If certain types of code modification defeat the purpose, they
> are not optimizations.

Of course! We all know it is an abuse of the language. But who
is going to be first and advertise his Code Improver, admitting
he hasn't finished his optimizer?

dan herrick d...@NCoast.org

David Gudeman

Aug 15, 1991, 2:54:43 PM
In article <16...@mentor.cc.purdue.edu> Herman Rubin writes:
]That there is no such thing as a compiler producing the best code is
]something with which I completely agree, but which seems to be lost on
]the field.

Hardly. One of the first things they usually tell you in a compilers
class is that the word "optimize" is a misnomer that sticks around for
historical reasons, and that the problem of producing the "best
possible" code is intractable.

]... If an "optimizing compiler" produces bad code, who needs it?

You do. You are posting news with a program that was probably
compiled by a bad compiler. In any case it was certainly passed along
by code that was produced by a bad compiler (pcc). The large majority
of applications running today are nowhere near as well-compiled as
they could be, yet they work and make life easier for millions of
people.

The use of high-level languages, supported by easy-to-write poorly
optimizing compilers (or even interpreters) has led the way in
bringing all the benefits of computer technology to industrialized
countries. If we thought we weren't allowed to produce code that
isn't an artwork of efficiency, very little code would ever be
produced and computers would be much less useful.
--
David Gudeman
gud...@cs.arizona.edu
noao!arizona!gudeman

Clark L. Coleman

Aug 15, 1991, 3:52:03 PM

In article <16...@mentor.cc.purdue.edu>, hru...@pop.stat.purdue.edu (Herman Rubin) writes:
> In article <16...@dog.ee.lbl.gov>, to...@elf.ee.lbl.gov (Chris Torek) writes:
>
> ..........................
>
>> There are pieces of code in the 4BSD VAX kernel that depend on the
>> compiler not optimizing, but the VAX is a CISC, by anyone's measure,
>> and these pieces are easily fixed.
>
> This is nonsense. "Optimizing" means producing the "best" code for the
> purpose. If certain types of code modification defeat the purpose, they
> are not optimizations.

It seems to me that everyone is missing the point by talking about the
"optimizer" being naughty in this case. The authors of the code needed
programming language semantics that were not available (namely, the
"volatile" keyword and its implicit constraints on the optimizer.) So,
instead of finding semantic constraints that fit their needs (e.g.
writing the volatile code in assembly language and using an assembler
that promises not to eliminate code), they just crossed their fingers,
said a prayer, and used C. No doubt there were many good reasons for
this decision, which I don't want to debate here. The point is that
the "optimizer" did not break any contract made between the programming
language designers and the users of that language, so why have half a
dozen people followed up on this thread as if it had?

Repeat: the compiler did nothing wrong. There was no ANSI C contract
in effect, no "volatile", no promises not to remove the code redundancy.


-----------------------------------------------------------------------------
"The use of COBOL cripples the mind; its teaching should, therefore, be
regarded as a criminal offence." E.W.Dijkstra, 18th June 1975.
||| cl...@virginia.edu (Clark L. Coleman)

Herman Rubin

Aug 16, 1991, 9:03:09 AM
In article <1991Aug15.2...@pony.Ingres.COM>, j...@Ingres.COM (Jon Krueger) writes:
> Just when you thought the foot couldn't go in any deeper,

> hru...@pop.stat.purdue.edu (Herman Rubin) writes:
>
> > I suggest that the computer field be more careful about
> > misusing existing languages, like English and mathematics.
>
> Then I take it you're going to stop using "mean" as a technical
> term in statistics. It was an existing word in English.

The use of mean for average in English happens to precede its use
in statistics. In fact, it came in directly from a transliteration
of the French "moyen". And I would very definitely abolish the term
statistical significance as totally misleading even in statistics.
At the time it was introduced and popularized, the evidence that it
has essentially nothing to do with real significance was not available.

There are many bad terms in mathematics, and generally mathematicians
recognize them as bad (if they think about them at all), and most of
these are quite old. I believe even statistical significance is quite
old, at least most of a century, and therefore resistant to change.
But the "normal" use of the word "optimizing" should be of considerable
concern to computer people, and therefore they should not have given
this a technical meaning substantially opposed to the usual one. The
evidence that the technical term does not correspond with the usual
one was available at least as far back as computers go.

Jon Krueger

Aug 15, 1991, 6:10:58 PM
Herman Rubin removes all doubt:

> "Optimizing" means producing the "best" code for the purpose.

Dearest Herman,

When you have learned the accepted definitions of technical terms,
please feel free to ask us to indulge you in your private definitions.

Thank you ever so much.

-- Jon

Jon Krueger

Aug 15, 1991, 7:06:14 PM
Just when you thought the foot couldn't go in any deeper,
hru...@pop.stat.purdue.edu (Herman Rubin) writes:

> I suggest that the computer field be more careful about
> misusing existing languages, like English and mathematics.

Then I take it you're going to stop using "mean" as a technical
term in statistics. It was an existing word in English.

-- Jon

Piercarlo Grandi

Aug 16, 1991, 10:16:06 AM

On 15 Aug 91 19:52:03 GMT, cl...@hemlock.cs.Virginia.EDU (Clark L.
Coleman) said:

clc5q> the compiler did nothing wrong. There was no ANSI C
clc5q> contract in effect, no "volatile", no promises not to
clc5q> remove the code redundancy.

But there was the Classic C contract, which was that the compiler would
faithfully, straightforwardly translate the programmer's code. The
language designer himself was the same person that wrote the V7 drivers,
taking advantage of that contract.

The problem only exists if you assume that V7 and [34]BSD and SYS[35]
were written in ANSI C; that's not the case, they were written in
Classic C, and Classic C gave you, more or less explicitly, the
appropriate guarantees that no aggressive optimization would be done.

ANSI C removed such guarantees, violating precedent, mostly for the sake
of compiler vendors in the very competitive IBM PC market; indeed the
ANSI C contract makes some sense only in a single tasking OS with
memory/IO space mapped peripherals; conversely it is not the safe
solution in practice in a true multitasking or multithreaded
environment (thus the old Posix requirement that volatile be the default
in such an environment, going back to Classic C).

I remember a report of a talk by John Mashey, in which he gave the
harrowing details of how unpleasant it had been to rewrite the MIPS Unix
kernel from Classic C to ANSI C, and all the dangers thereof.

Architecture is about tradeoffs, *robust* tradeoffs. If only this were
better known!
--
Piercarlo Grandi | ARPA: pcg%uk.ac...@nsfnet-relay.ac.uk
Dept of CS, UCW Aberystwyth | UUCP: ...!mcsun!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK | INET: p...@aber.ac.uk

Roger B.A. Klorese

Aug 16, 1991, 12:01:28 PM
In article <PCG.91Au...@aberda.aber.ac.uk> p...@aber.ac.uk (Piercarlo Grandi) writes:
>I remember a report of a talk by John Mashey, in which he gave the
>harrowing details of how unpleasant it had been to rewrite the MIPS Unix
>kernel from Classic C to ANSI C, and all the dangers thereof.

I think you are imagining this. RISC/os is written in Classic C, with the
addition of "volatile."
--
ROGER B.A. KLORESE MIPS Computer Systems, Inc.
MS 6-05 930 DeGuigne Dr. Sunnyvale, CA 94088 +1 408 524-7421
rog...@mips.COM {ames,decwrl,pyramid}!mips!rogerk
"Stupidity is evil waiting to happen." -- Clay Bond

Clark L. Coleman

Aug 16, 1991, 3:23:36 PM
In article <PCG.91Au...@aberda.aber.ac.uk> p...@aber.ac.uk (Piercarlo Grandi) writes:
>
>On 15 Aug 91 19:52:03 GMT, cl...@hemlock.cs.Virginia.EDU (Clark L.
>Coleman) said:
>
> clc5q> the compiler did nothing wrong. There was no ANSI C
> clc5q> contract in effect, no "volatile", no promises not to
> clc5q> remove the code redundancy.
>
>But there was the Classic C contract, which was that the compiler would
>faithfully, straightforwardly translate the programmer's code. The
>language designer himself was the same person that wrote the V7 drivers,
>taking advantage of that contract.
>
>The problem only exists if you assume that V7 and [34]BSD and SYS[35]
>were written in ANSI C; that's not the case, they were written in
>Classic C, and Classic C gave you, more or less explicitly, the
>appropriate guarantees that no aggressive optimization would be done.

I'm going to have to call your bluff on this one.

Please document the "Classic C contract" that your code would not be
optimized aggressively enough to create a distinction between volatile
and nonvolatile (i.e., document the assertion that "volatile" was
unnecessary in pre-ANSI C, based on some statement in K&R, preferably.)

C was invented in a time of less aggressive compilers than we have today.
Perhaps people confused the implementations with which they were familiar
with the language definition itself. That's a pretty common occurrence.
And when their crummy code was exposed by a new generation of compilers,
they got what they deserved.

Chapter and verse from K&R, please. (BTW, what does "more or less
explicitly" mean? It was explicit or it wasn't.)

Andy Glew

Aug 16, 1991, 8:00:02 AM

clc5q> the compiler did nothing wrong. There was no ANSI C
clc5q> contract in effect, no "volatile", no promises not to
clc5q> remove the code redundancy.

pcg> But there was the Classic C contract, which was that the compiler would
pcg> faithfully, straightforwardly translate the programmer's code. The
pcg> language designer himself was the same person that wrote the V7 drivers,
pcg> taking advantage of that contract.
pcg>
pcg> The problem only exists if you assume that V7 and [34]BSD and SYS[35]
pcg> were written in ANSI C; that's not the case, they were written in
pcg> Classic C, and Classic C gave you, more or less explicitly, the
pcg> appropriate guarantees that no aggressive optimization would be done.

Where, pray tell, is the "Classic C" contract expressed?

--

Andy Glew, gl...@ichips.intel.com
Intel Corp., M/S JF1-19, 5200 NE Elam Young Pkwy,
Hillsboro, Oregon 97124-6497

This is a private posting; it does not indicate opinions or positions
of Intel Corp.

Intel Inside (tm)

John Mashey

Aug 16, 1991, 7:32:33 PM
In article <70...@spim.mips.COM> rog...@mips.com (Roger B.A. Klorese) writes:
>In article <PCG.91Au...@aberda.aber.ac.uk> p...@aber.ac.uk (Piercarlo Grandi) writes:
>>I remember a report of a talk by John Mashey, in which he gave the
>>harrowing details of how unpleasant it had been to rewrite the MIPS Unix
>>kernel from Classic C to ANSI C, and all the dangers thereof.

>I think you are imagining this. RISC/os is written in Classic C, with the
>addition of "volatile."

I think there is multiple confusion caused by the typical game-of-telephone
problem.

Here is the standard story:
1) In 4Q85, we had a C Compiler that could compile itself with global
optimization turned on, and use the result to compile itself again,
and get the same thing. It was also adequate to compile the UNIX
kernel, albeit without global optimization turned on.
2) In early 1986, we started to do -O on the kernel, and it was
indeed harrowing at first, because:
a) hardly anyone had implemented volatile in an optimizing
compiler yet, and its true implications weren't quite
understood.
b) Of course there were bugs in the optimizer.
3) hence, when we just blindly turned on -O (as volatile was in
process of being implemented), things broke everywhere, i.e.,
loops like:
        while (p->devicestatus != OK)
                junk = p->deviceinput;
and the compiler optimized everything away.

Then, we got volatile in the compiler, and declared volatile .... *p;
...and it still broke, because it still saw that junk (which had never been
mentioned) was never used again, and hence that statement disappeared.

4) It became clear after a while that:
any load or store to a volatile variable that would have happened
with simplistic code ... must happen in exactly the same order and
number ... or systems programmers go nuts.
So, our compiler folks did that.

5) And finally, there was the general issue of debugging an optimizer
when using the kernel. This was the case where it would almost
work optimized, and we had to do binary search to find the module
where 1 store was being omitted.
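
The shape of the final rule in 4), as I reconstruct it (this is not the
MIPS source):

    struct device {
        int devicestatus;
        int deviceinput;
    };
    #define OK 1

    void drain(volatile struct device *p)
    {
        int junk = 0;
        while (p->devicestatus != OK)
            junk = p->deviceinput;  /* junk is dead, but the volatile
                                       load must still happen, exactly
                                       once per iteration */
        (void) junk;
    }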

Now, the only ANSI C issue in this whole story is the fact that
we were able to add volatile to our existing compilers, rather than
some MIPS-specific keyword, and know we were at least heading in
a standards-oriented direction.

Most of the problem was dealing with new compilers doing global optimization
on code, where (at that time) the number of people in the world who had
ever dealt with the resulting issues inside the kernel was small...
Figuring out what to make volatile was pretty straightforward: make
every pointer to a device structure volatile, plus a few other places.

6) We started on this 1Q86, and had most of it in pretty reasonable shape
by 2Q86, and shipped -O'd kernels in production around Sept/Oct of 1986.
--
-john mashey DISCLAIMER: <generic disclaimer, I speak for me only, etc>
UUCP: ma...@mips.com OR {ames,decwrl,prls,pyramid}!mips!mash
DDD: 408-524-7015, 524-8253 or (main number) 408-720-1700
USPS: MIPS Computer Systems MS 1/05, 930 E. Arques, Sunnyvale, CA 94088-3650

Chip Salzenberg

Aug 16, 1991, 8:13:31 AM
According to hru...@pop.stat.purdue.edu (Herman Rubin):

>If you have seen my postings, you would know that the idea that a computer
>language is even adequate is at best a chimera.

The only chimera around here is Herman's Amazing Language (HAL): it
slices, it dices, it uses CISC instructions, it even makes Julienne
fries (whatever they are).

I've asked Herman time and time again for a HAL spec. So far,
nothing.

Herman, can't you at least *specify* what you so loudly demand?


--
Chip Salzenberg at Teltronics/TCT <ch...@tct.com>, <uunet!pdn!tct!chip>

"He's alive; he's fat; and he's fighting crime. He's Elvis: FBI."

Bill Mangione-Smith

Aug 17, 1991, 3:49:57 AM
In article <28ABC1...@tct.com> ch...@tct.com (Chip Salzenberg) writes:

>According to hru...@pop.stat.purdue.edu (Herman Rubin):
>>If you have seen my postings, you would know that the idea that a computer
>>language is even adequate is at best a chimera.
>
>The only chimera around here is Herman's Amazing Language (HAL): it
>slices, it dices, it uses CISC instructions, it even makes Julienne
>fries (whatever they are).

As much as I hate to admit it, I agree with Herman about the use of the term
optimizing. I didn't care much, till I wrote a paper talking about a
register-optimal and speed-optimal code scheduler. Since we all know this
is trouble wrapped inside real trouble, people of course kept assuming I
didn't really mean either optimal. Then they would want to discuss speed
improvements. And of course I would say "no, it's optimal!". Trust me, this
is frustrating.

bill

George J. Carrette

Aug 17, 1991, 9:56:26 AM
> C was invented in a time of less aggressive compilers than we have today.
> Perhaps people confused the implementations with which they were familiar
> with the language definition itself. That's a pretty common occurrence.

Bad code in the BSD kernel? Mere pikers in comparison to this kind of thing:

Personally I have seen code written in LISP that depended on knowing
that the compiler would allocate local variables in a very specific
order into internal processor registers, e.g.

    (defun %draw-line (a b c d)
      (prog (i j k l m)
        do-some-stuff
        (%%instruction-draw-line-frob-case-1)
        ...))

Where certain *microcode* was written depending on the locations of the
i, j, k, l, m, etc.

Of course, it may have been easier to have written the whole thing in
microcode, or better yet, to have had an optimizing lisp->microcode
compiler.

-gjc

Richard Harter

Aug 17, 1991, 4:54:59 PM
In article <28ABC1...@tct.com>, ch...@tct.com (Chip Salzenberg) writes:

> The only chimera around here is Herman's Amazing Language (HAL): it
> slices, it dices, it uses CISC instructions, it even makes Julienne
> fries (whatever they are).

> I've asked Herman time and time again for a HAL spec. So far,
> nothing.

> Herman, can't you at least *specify* what you so loudly demand?

But he has, he has. Your problem is that your specification to
implementation software is inadequate. You need the 3-D machine
which has exactly three instructions:

DWIM -- Do What I Mean
DWIW -- Do What I Want
DIAB -- Do It Again Baby
--
Richard Harter: SMDS Inc. Net address: uunet!smds!rh Phone: 508-369-7398
US Mail: SMDS Inc., PO Box 555, Concord MA 01742. Fax: 508-369-8272
In the fields of Hell where the grass grows high
Are the graves of dreams allowed to die.

Henry Spencer

unread,
Aug 17, 1991, 8:36:04 PM8/17/91
to
In article <BILLMS.91A...@budada.eecs.umich.edu> bil...@budada.eecs.umich.edu (Bill Mangione-Smith) writes:
>As much as I hate to admit it, I agree with Herman about the use of the term
>optimizing. I didn't care much, till I wrote a paper talking about a
>register optimal, and speed optimal, code scheduler. Since we all know this
>is trouble wrapped inside real trouble, people of course kept assuming I
>didn't really mean either optimal...

It is agreed that the terminology is, uh, less than optimal. :-) However,
it *is* the consensus terminology, and complaining that it's "incorrect"
(as opposed to unfortunate) just marks one as a pedantic twit.
--
Any program that calls itself an OS | Henry Spencer @ U of Toronto Zoology
(e.g. "MSDOS") isn't one. -Geoff Collyer| he...@zoo.toronto.edu utzoo!henry

Perry Scott

unread,
Aug 15, 1991, 11:48:03 AM8/15/91
to
>Crispin Cowan's point is valid: if you think you can fix it later, you
>have less incentive to fix it now.
>
>Chris Torek


This isn't just a testing issue and RISC vs CISC, it's an economics
issue. Revenue (which pays the R&D budget) is derived from the sale of
iron. Therefore, software is shipped shortly after it appears to work
"good enough" on the new iron. More testing implies lost revenue,
sometimes to the tune of several hundred million dollars per month.
Manufacturers look at the bottom line and choose the "good enough" that
is least expensive in terms of lost revenue vs post-sales support costs.
While crashme is indeed finding defects, are they serious enough to stop
shipments? Only if you have a very unusual application.

Of course these are my views, not necessarily HP's.

Perry Scott
In the software trenches at HP Ft Collins

Torben Ægidius Mogensen

unread,
Aug 19, 1991, 6:02:21 AM8/19/91
to
to...@elf.ee.lbl.gov (Chris Torek) writes:

>Crispin Cowan's point is valid: if you think you can fix it later, you
>have less incentive to fix it now. This is more likely to mislead
>those who construct software than those who construct hardware: the
>hardware people are more acutely aware of the cost of fixing something
>later. If you discover a bug in hardware, you may not fix it at all
>(due to the perceived and/or calculated cost), leaving it for the
>software people to fix, if possible. (A case in point: the VAX probe
>instruction, if close to a page boundary, failed on the 11/780. A
>prefetch was done in the mode the probe used for probing. As far as I
>know, this bug was never fixed. The workaround was to make sure that
>no probe instruction was within 8 bytes of the end of a page.)

A similar bug was found in the 6502 processor, and I don't think this
was ever fixed. If you made an indirect jump, and the instruction was
placed such that the two-byte address field in the instruction spanned
a (256 byte) page boundary, then the instruction would use the first
byte on the same page rather than the first byte on the next page for
the most significant byte of the address.
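
In C terms, a model of the buggy address fetch looks roughly like this
(a sketch, not anything from the original posting; the wrap within the
page is the whole bug):

    /* Model of the 6502 indirect-JMP quirk: when the pointer's low
       byte is 0xFF, the high byte of the target is fetched from the
       start of the *same* 256-byte page instead of the next one. */
    unsigned short buggy_indirect_jmp(const unsigned char mem[65536],
                                      unsigned short ptr)
    {
        unsigned char lo = mem[ptr];
        unsigned char hi = mem[(ptr & 0xFF00) | ((ptr + 1) & 0x00FF)];
        return (unsigned short) (lo | (hi << 8));
    }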

It was fairly easy to fix during compilation, as you just inserted a
NOP before the jump when necessary.

Torben Mogensen (tor...@diku.dk)

Wm E Davidsen Jr

unread,
Aug 19, 1991, 10:46:06 AM8/19/91
to

| C was invented in a time of less aggressive compilers than we have today.
| Perhaps people confused the implementations with which they were familiar
| with the language definition itself. That's a pretty common occurrence.
| And when their crummy code was exposed by a new generation of compilers,
| they got what they deserved.

You mean when the compiler stopped generating code based on what the
programmer wrote, and started trying to "understand" what the programmer
meant, rather than what s/he said.
--
bill davidsen (davi...@crdos1.crd.GE.COM -or- uunet!crdgw1!crdos1!davidsen)
GE Corp R&D Center, Information Systems Operation, tech support group
Moderator comp.binaries.ibm.pc and 386-users digest.

ge...@galton.uchicago.edu

unread,
Aug 19, 1991, 2:34:35 PM8/19/91
to

You know, I'm sure, that all of us users out here just hate it when you
guys do that. The only reason we keep buying this crud is that all of
the computer companies make computers that are about equally broken,
i.e. thoroughly.

If there were a company that made a computer that actually worked, all
of the MIPS and features in the world wouldn't save its competition. It
would be Japan vs. Detroit all over again.

Regards,

Charlie

P. S. Nothing personal. I know HP is no worse than anybody else.

Charles Geyer
Department of Statistics
University of Chicago
ge...@galton.uchicago.edu

Clark L. Coleman

unread,
Aug 19, 1991, 6:10:37 PM8/19/91
to
In article <36...@crdos1.crd.ge.COM> davi...@crdos1.crd.ge.com (bill davidsen) writes:
>In article <1991Aug16.1...@murdoch.acc.Virginia.EDU> cl...@hemlock.cs.Virginia.EDU (Clark L. Coleman) writes:
>
>| C was invented in a time of less aggressive compilers than we have today.
>| Perhaps people confused the implementations with which they were familiar
>| with the language definition itself. That's a pretty common occurrence.
>| And when their crummy code was exposed by a new generation of compilers,
>| they got what they deserved.
>
> You mean when the compiler stopped generating code based on what the
>programmer wrote, and started trying to "understand" what the programmer
>meant, rather than what s/he said.

Well, I'm not sure that compilers today try to "understand" what the
programmer meant. I think what happened historically in compiler design
was that we noticed some really bad code coming out of code generators
(e.g. dead code, common subexpressions not being eliminated, etc.) and
noticed that the programmer could not get rid of most of it. For example,
common addressing expressions in A[i] := A[i] + 2. And so compiler
designers came up with optimization techniques to handle these cases.
It turned out that a common subexpression elimination algorithm would
get rid of CSEs that COULD have been eliminated by the programmer, as
well as those that were not visible in the source code.
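
To make that concrete (a sketch in C; the Pascal-ish assignment above
says the same thing):

    /* Both the load and the store of a[i] share the address
       computation a + i*sizeof(int).  No source-level rewrite can
       name that subexpression, so only the compiler can eliminate
       the duplicate. */
    void bump(int a[], int i)
    {
        a[i] = a[i] + 2;
    }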

Now we get the problem that was discussed with respect to "volatile"
variables in C. Before there was such a thing as "volatile", the
systems programmer using C should have used assembly language routines
to test device registers and the like, so that no language contract is
assumed that turns out to be nonexistent. The best alternative to
assembly language is probably to extend some language like C with some
keyword like "volatile" so that the semantics are appropriate. Another
alternative would be to turn off all optimizations that might have the
effect that was discussed in the Berkeley Unix kernels; the slowness
of the resulting code would be unacceptable to most users, with the
entire operating system compiled with most optimizations turned off
entirely. And there is another alternative that I have not seen discussed:
The compiler could somehow keep track of the difference between the
optimizations that could have been done by the programmer and the ones
that are not visible in the source code (e.g. addressing expressions.)
My opinion is that the compiler would then have the following properties:

1) It would be slow, because of the complexity of tagging pieces of code
as volatile and nonvolatile and checking the tags during all phases
of optimization.

2) The programmers would inherit work previously done by the compiler,
creating all sorts of temporary variables, etc., as is currently
being discussed in this thread by other posters.

3) The market for the compiler would be virtually nonexistent, limited
to a few Usenet arguers who aren't in the position of purchasing
compilers anyway.

If you think such a compiler is a great idea, you should design it and
sell it for a small fortune to the wide market that is waiting with
bated breath for this "minimal optimizing" compiler. If you are just
railing against the fact that the market has a demand that is mostly
contrary to your own tastes, what's the point? I see far more complaints
about the lack of ability for today's compilers to optimize than I see
complaints about too much optimization.

Furthermore, I think the majority opinion is definitely that programmers
don't want to have to analyze which procedures to inline, which variables
to hold in registers, which loops to unroll, etc. For these higher level
optimizations, most programmers want a compiler that can do at least as
good a job as they could do. Not having to spend the time analyzing such
things improves their productivity. Why should I sit around counting
variable references in order to know which ones to specify as "register",
or estimating the probable code size of a function in comparison to the
probable call/return overhead code size, etc.? Let the compiler do it.

If we carry your approach to its logical conclusion, the compiler should
not put a variable into a register unless specified as "register" by the
programmer (after all, the programmer would have said "register" if he
wanted it --- why try to guess what the programmer means?), nor inline
a procedure (the programmer can do that himself, too --- don't treat
him like an imbecile who doesn't know what he's doing.) Tail merging,
head merging --- the programmer can do those. And the C programmer can
do a shift instead of a multiply by a power of 2. Etc., etc., ad nauseam.
The market for compilers has chosen otherwise, because most of us value
our time more than that.

Wm E Davidsen Jr

unread,
Aug 20, 1991, 9:01:20 AM8/20/91
to

| If you think such a compiler is a great idea, you should design it and
| sell it for a small fortune to the wide market that is waiting with
| bated breath for this "minimal optimizing" compiler.

What market? Virtually all compilers can have optimization turned off,
that's all that's needed.

| If you are just
| railing against the fact that the market has a demand that is mostly
| contrary to your own tastes, what's the point?

I see no demand for a compiler which generates incorrect code. Most
vendors of them pretend that there's no problem, and that the code
emitted by the compiler performs the logic described by the source code.
However, in many cases it doesn't.

| I see far more complaints
| about the lack of ability for today's compilers to optimize than I see
| complaints about too much optimization.

You see what you look for... every makefile with optimize turned off
for some modules is a complaint about too much optimization; maybe
you're not counting that.

Alexander Vrchoticky

unread,
Aug 20, 1991, 12:10:23 PM8/20/91
to
davi...@crdos1.crd.ge.COM (Wm E Davidsen Jr) writes:

> You see what you look for... every makefile with optimize turned off
>for some modules is a complaint about too much optimization, maybe
>you're not counting that.

if a program works as intended with optimization turned off but fails
when optimization is turned on the problem can be one of the following:

o the optimizer is buggy, i.e. the transformations it does are not
faithful to the documented semantics of the language.

o the program contains unwarranted assumptions about
the implementation, i.e. the program is not valid under the
documented semantics of the language.

neither of the above has anything to do with the amount of optimization.
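
A stock example of the second case (a sketch; the names are
illustrative):

    /* Works unoptimized, hangs when optimized: without a volatile
       access the compiler may hoist the load of `done` out of the
       loop.  The bug is in the program's assumptions, not in the
       optimizer. */
    extern int done;            /* set by a signal handler */

    void wait_for_done(void)
    {
        while (!done)           /* should be a volatile access */
            ;
    }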

--
Alexander Vrchoticky | al...@vmars.tuwien.ac.at
TU Vienna, CS/Real-Time Systems | +43/222/58801-8168
"it's never enough to see how others fall in love" (the blue aeroplanes)

Andy Glew

unread,
Aug 20, 1991, 6:13:47 AM8/20/91
to

> You know, I'm sure, that all of us users out here just hate it when you
> guys do that. The only reason we keep buying this crud is that all of
> the computer companies make computers that are about equally broken,
> i.e. thoroughly.

Well, I worked at at least one company that was trying really hard not
to ship software crud.

That is the right thing to do.

And it won some points with customers. It just didn't win enough
points to make up for the typical 3-6 month delay to ship a really
high quality product (in this case, OS).

> If there were a company that made a computer that actually worked, all
> of the MIPS and features in the world wouldn't save its competition. It
> would be Japan vs. Detroit all over again.

The key thing is to integrate software quality control in the software
design cycle so that it does not require a 3-6 month delay to deliver
a more reliable, better tested, etc. product. It typically does require
more manpower at early stages of the project.

By the way - everything I have said about software quality control
applies equally well to hardware quality control.

Stephen E. Witham

unread,
Aug 20, 1991, 3:27:52 PM8/20/91
to
In article <1991Aug18....@zoo.toronto.edu>, he...@zoo.toronto.edu (Henry Spencer) writes:
>
> It is agreed that the terminology is, uh, less than optimal. :-) However,
> it *is* the consensus terminology, and complaining that it's "incorrect"
> (as opposed to unfortunate) just marks one as a pedantic twit.

Just one more pedantic twit saying, "optimize" means make as good as
possible, and "improve" means make better.

Sure, it's good to know that there has been a warped use of the words
"optimize" and "optimal," and I suppose someone who hasn't had it
pointed out to them should be excused for following the unfortunate,
incorrect, habit, but once you've been informed, there isn't much
excuse for keeping it up.

Why should we computerists be satisfied with cretinisms?

--Steve

Henry Spencer

unread,
Aug 20, 1991, 9:46:05 PM8/20/91
to
In article <8...@smds.UUCP> s...@smds.UUCP (Stephen E. Witham) writes:
>Sure, it's good to know that there has been a warped use of the words
>"optimize" and "optimal," and I suppose someone who hasn't had it
>pointed out to them should be excused for following the unfortunate,
>incorrect, habit, but once you've been informed, there isn't much
>excuse for keeping it up.

Unfortunately, there is one: being understood when you talk to other
specialists. Lots of good English words, like "compiler", have special
meanings in this context, and often they are wrong by plain-English
standards. (A "compiler" is someone who gathers things together, not
someone who translates between languages; the very first computer-type
compilers were in fact more like what we now call linkers.) "Optimize"
is just a particularly unfortunate example of a word whose meaning in a
specialized context (compilers) differs from its plain-English meaning.
If you want to be understood in a compiler context, you use it the way
compiler people use it. Most people don't think Linguistic Purity is
worth having to explain themselves all the time.

>Why should we computerists be satisfied with cretinisms?

Why should computerists be satisfied with a view of language popular among
high-school English teachers but long since abandoned by linguists?
Languages in general, and the meanings of words in particular, change.
A language is defined by the way people use it, not by the high-school
textbooks. Compiler people use "optimize" to mean "improve", the same
way they use "compiler" to mean "translator". Too bad, but that's the
way it is.

Linley Gwennap

unread,
Aug 20, 1991, 7:19:59 PM8/20/91
to
Charlie Geyer posts: (much repetition from previous articles deleted)

> If there were a company that made a computer that actually worked, all
> of the MIPS and features in the world wouldn't save its competition. It
> would be Japan vs. Detroit all over again.

Think about this for a minute, Charlie. What if Vendor A said, "We have a
50 MIPS workstation but we can't ship it yet because we haven't finished
the mandatory 12-month test cycle, during which we run every application
ever invented to verify that they work, then we run semi-infinite code
combinations to avoid bugs caused by hand-generated code like CRASHME."
What if Vendor B said "We have a 50 MIPS workstation that we can ship
to you today!" You bring the Vendor B box in, demo it, and all of your
programs seem to work fine. What do you do, Charlie? My guess is that
you and your friends will eventually put Vendor A out of business because
no one will wait for their box.

Of course, you may argue that it shouldn't take 12 months of testing to
verify that a CPU has absolutely no bugs. You may argue that a new CPU
should work right the first time. Of course, you may not have designed
a CPU before.
---------------------------------------------------------------------------
DISCLAIMER: The views expressed here do not        Linley Gwennap
represent the views of the Hewlett-Packard         PA-RISC Marketing
Company. Caveat emptor.                            Hewlett-Packard

dafu...@sequent.com

unread,
Aug 21, 1991, 11:05:59 AM8/21/91
to
Heck, compiler/translator's nothing. I've been working for years to
eliminate the word "bug" from the lexicon and replace it with "defect".

Somehow software engineers pay more attention when you call their
software defective...

(Who was it that said the artist kids himself most gracefully?)
--
Dave Fuller                  This is the biased hyper-signature, or the
Sequent Computer Systems     null signature divided by zero, minus any
(503) 578-5063 (voice)       ideas that I represent Sequent.
dafu...@sequent.com          It means specific things to certain people.

Piercarlo Grandi

unread,
Aug 21, 1991, 9:15:32 PM8/21/91
to
On 20 Aug 91 16:10:23 GMT, al...@vmars.tuwien.ac.at (Alexander Vrchoticky) said:

alex> if a program works as intended with optimization turned off but
alex> fails when optimization is turned on the problem can be one of the
alex> following:

alex> o the optimizer is buggy, i.e. the transformations it does are not
alex> faithful to the documented semantics of the language.

alex> o the program contains unwarranted assumptions about the
alex> implementation, i.e. the program is not valid under the documented
alex> semantics of the language.

An amusing truism that the concerned readership of comp.arch knows quite
well.

alex> neither of the above has anything to do with the amount of
alex> optimizations.

The issue that you seem untroubled by is that architects have a problem:
architecture without engineering does not seem very useful. Where
architecture meets engineering, design tradeoffs happen, as engineering
is about probabilities and cost effectiveness.

There exist indeed stark design tradeoffs between the quality of the
semantics of a language and the ability to do reliable optimization in
the compiler and the ability of the programmer to write useful programs.

So, it does happen that architects have to consider the following
engineering problems:

* that buggy optimizers do happen;

* the tendency of an optimizer to be buggy is related to its size and
complexity;

* both size and complexity of an optimizer are related to both the
quality of the language semantics and the optimization sought;

* it also happens that programmers write buggy programs;

* the percentage of bugs/buggy programs is related to the quality
of the semantics of the language, among other factors.

Architects therefore tend to strive to design architectures that solve
the engineering problem (minimize the chances and opportunities for bugs
in programs and code generators while having good performance), they
don't take the attitude "we don't care about buggy programs and buggy
code generators".

For example I argue that doing high level optimizations in low level
languages is poor architecture because it creates engineering problems
that would not exist if high level optimizations were applied to
languages at the same level.

Whether my contention is correct or not, the issue of which
compiler/language/programmer architecture (where to put the layer
boundaries, which shape these should be) minimizes engineering problems
is an interesting research problem; dismissing the case where
engineering problems do manifest themselves is not very interesting.

Paul Leyland

unread,
Aug 22, 1991, 4:45:12 AM8/22/91
to
In article <14...@pt.cs.cmu.edu> lind...@cs.cmu.edu (Donald Lindsay) writes:
> In article <93...@auspex.auspex.com> g...@auspex.auspex.com (Guy Harris) writes:
>> Is it easier to do microcode than trap
>> handlers? Is there something *intrinsic* to microcode that causes it
>> to be more thoroughly, or more correctly, checked out?
>
> Microcode is much harder to write than trap handlers, and much harder
> to check out.

It depends greatly on the microarchitecture and the development
environment. I was principal microcoder with High Level Hardware.
Their Orion minicomputer, built around AMD 2901s and a 2910, was
designed for easy microcoding. The control store and instruction
mapper were held in RAM. There were good development tools, including
assembler, librarian, linker, disassembler, loader, and unloader.
There were Unix device special files which enabled microcode and
instruction decode tables to be read *and written* on a running
machine.

In my opinion, writing and testing microcode on that machine was
significantly easier than writing and testing a trap handler. Writing
a complete IEEE floating point arithmetic package in microcode took me
about three months. Maybe a thousand lines of functioning code. Only
one bug was ever discovered in it, so far as I know. (That had to do
with incorrect rounding when a subtraction left an answer which was
zero except for the first guard bit. I incorrectly returned a zero
answer, when I should have rounded up. Anyone who relies on a
half-significant-bit result deserves all they get, IMHO 8-). Having
said that, I (almost) successfully wrote trap-handling microcode.
Significantly harder than the FP stuff, even though it was only a
tenth the size. Turned out to have a bug if a page fault happened on
dual-address instructions whose addresses were exactly 2Mb apart and
both failed in turn. That was discovered and fixed after I left HLH.

> It's harder to write because the microcoder needs to know the
> hardware much more intimately.

That's no bad thing, IMHO.

> ... Even worse, the visible architecture
> is likely to be changed slightly, after the code development has been
> started.

All the more reason to talk to people, especially your co-workers.

> Plus, microcode runs to time dependencies - its correctness
> may depend on being fast enough, or on looking at a certain bus
> exactly so many clocks after some other event.

True.

> It's harder to check out because it's typically convoluted.

You're forgetting the golden rule. First get it right, then get it
fast. Check out the algorithms. Then optimise and compare test
cases. I wrote several times as much test code as I did microcode.

> Microcoding is the last, true home of gross hacks.

True 8-). Why do you think it's so much fun 8-)


> ... Why? Well, first,
> *anything* that makes some macroinstruction run one clock faster, is
> going to get put in. Second, there's often a space crunch. Remember,
> the microcode may have to do quite a lot: perhaps diagnostics,
> perhaps a console debugger.

Very true. Still no excuse for not having a good understanding of the
architecture and the behaviour of the code.

As for how hacky it gets ... imagine an 8-deep call stack, and code
that *overflows* the stack because it was faster to push three times
than pop five. This is a real example, from one of the early
MicroVaxen, and sure enough, early copies of that chip were shipped
with a bug that could crash the microcode.

How about five consecutive conditional control transfers on a
pipelined sequencer with no annul override on the instruction in the
pipeline? Real "hop, skip and jump" flow of control. Again, a real
example. Or the movc3 instruction which did a block move faster than
the architecture manual said was possible, because the manual writer
had forgotten that it was permissible to lock out DRAM refresh for
dozens of microseconds at a time. Again, real shipped code.

Paul
--
Paul Leyland <p...@convex.oxford.ac.uk> | Hanging on in quiet desperation is
Oxford University Computing Service | the English way.
13 Banbury Road, Oxford, OX2 6NN, UK | The time is come, the song is over.
Tel: +44-865-273200 Fax: +44-865-273275 | Thought I'd something more to say.

peter da silva

unread,
Aug 22, 1991, 9:58:46 AM8/22/91
to
In article <PCG.91Au...@aberdb.aber.ac.uk>, p...@aber.ac.uk (Piercarlo Grandi) writes:
> For example I argue that doing high level optimizations in low level
> languages is poor architecture because it creates engineering problems
> that would not exist if high level optimizations were applied to
> languages at the same level.

However, the availability and reliability of tools is an engineering problem,
too. It happens that a certain low level language has a wide variety of
implementations on a wide variety of platforms. In terms of minimising the
cost of a solution to a problem, it may be a better use of the designer's
time to use this language, do high level optimizations, and use highly
optimizing compilers than to rewrite the program several times for several
non-intersecting sets of higher level languages.

That's why people use Fortran and why Fortran programs often have comments
in their headers like "do not optimise this at level 3 on the G compiler".

That's why people use C and fill their code with #ifdefs and obscure devices.

The same arguments used to apply to Basic and Pascal as well, except that
there are now sufficient variants of these that you can no longer treat them
as the same languages.

> Whether my contention is correct or not, the issue of which
> compiler/language/programmer architecture (where to put the layer
> boundaries, which shape these should be) minimizes engineering problems
> is an interesting research problem; dismissing the case where
> engineering problems do manifest themselves is not very interesting.

True, but don't let people use this as an argument against radical
optimizations where they're justified.
--
Peter da Silva; Ferranti International Controls Corporation; +1 713 274 5180;
Sugar Land, TX 77487-5012; `-_-' "Have you hugged your wolf, today?"

Donald Lindsay

unread,
Aug 22, 1991, 4:31:57 PM8/22/91
to

In article <PCL.91Au...@black.prg.ox.ac.uk>
p...@prg.ox.ac.uk (Paul Leyland) writes:
>>> Is it easier to do microcode than trap handlers?
>> It's harder to write because the microcoder needs to know the
>> hardware much more intimately.
>That's no bad thing, IMHO.
>... Or the movc3 instruction which did a block move faster than
>the architecture manual said was possible, because the manual writer
>had forgotten that it was permissible to lock out DRAM refresh for
>dozens of microseconds at a time.

I agree that it's good to know your machine intimately. From your
examples, apparently you did.

There is sometimes a problem, namely, that one can't find out the
details! Normally, that's not a microcoder's problem, because he's
usually down the hall from the perpetrators, ahh, logic designers. In
your case, it could have been a problem, because you were microcoding
a purchased chip. I'm glad to hear that it was well documented.

Do any chip designers make a (defined to be correct) simulator
available to OS groups?

Many (a majority?) of the people who write trap handlers are dealing
with chips that were recently designed, somewhere far far away.
Often, the chips (and the documentation) are early samples and have
numerous failings. Learning intimate details can be a struggle, with
months where folklore substitutes for knowledge. If anyone has war
stories that don't violate nondisclosure, please, post.
--
Don D.C.Lindsay    Is there anything to do this Sunday
                   within driving distance of Stanford?

Dan Westerberg

unread,
Aug 22, 1991, 8:55:55 PM8/22/91
to
The mini-computer that I am currently helping to build has a completely
microcoded CPU (with some minor exceptions). Personally, I *love* microcode.
As an ASIC designer, microcode gives me that warm and fuzzy feeling because I
know that should a bug show up in my silicon, it's quite possible to work
around it with microcode.

Our microcoders are top-notch people and are as familiar with our architecture
as the hardware engineers. This has proved beneficial a number of times, like
when a microcoder comes to me and says "I'd like to do XYZ, which is allowed,
but what if ABC also occurs?"

Several 'holes' in our architecture have been found this way.

Also, I'll admit that microcode can be difficult to write, but we've provided
our people with a full-fledged simulator of our CPU to test out their code on.
This means that the microcode handed off to the engineers has been cleared of
silly bugs. (We also have excellent tool developers who wrote our assembler,
allocator, and microcode simulator, much faster than I believed possible).

We do have the advantage that we are building our own CPU, based upon a 20
year old, well-defined architecture; and our microcoders are right across
the hall.

I'm an advocate of microcode, because should a bug be shipped to customers, it's
much easier to ship out a floppy with new microcode than to re-spin a chip or
replace chips/boards in the field.

Just my own thoughts,

Dan

p.s. -- This isn't meant to say that I'm against RISC or non-microcoded CPU's,
because I absolutely love the SPARCstation2 sitting on my desk :)

--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~ "These walls that still surround me ~ ~
~ Still contain the same old me ~ Dan Westerberg ~
~ Just one more who's searching for ~ da...@hob8.prime.com ~
~ The world that ought to be" ~ ~
~ - Neil Peart ~ Prime Computer, Framingham, MA ~
~ ~ ~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Sean Eric Fagan

unread,
Aug 23, 1991, 12:35:52 AM8/23/91
to
In article <1991Aug2...@hob8.prime.com> da...@hob8.prime.com (Dan Westerberg) writes:
>I'm an advocate of microcode, because should a bug be shipped to customers,
>it's much easier to ship out a floppy with new microcode than to re-spin
>a chip or replace chips/boards in the field.

Uhm, what makes you think you can't replace the chips in the field if there
is no microcode?

Also, why not just skip the microcode and use the microcode language as the
assembly language?

--
Sean Eric Fagan | "What *does* that 33 do? I have no idea."
s...@kithrup.COM | -- Chris Torek
-----------------+ (to...@ee.lbl.gov)
Any opinions expressed are my own, and generally unpopular with others.

Scott E. Townsend

unread,
Aug 23, 1991, 7:47:54 AM8/23/91
to
In article <1991Aug22.2...@cs.cmu.edu> lind...@cs.cmu.edu (Donald Lindsay) writes:

[ stuff deleted ]


>
>Many (a majority?) of the people who write trap handlers are dealing
>with chips that were recently designed, somewhere far far away.
>Often, the chips (and the documentation) are early samples and have
>numerous failings. Learning intimate details can be a struggle, with
>months where folklore substitutes for knowledge. If anyone has war
>stories that don't violate nondisclosure, please, post.
>--
>Don D.C.Lindsay Is there anything to do this Sunday
> withing driving distance of Stanford?

Well here's one. We've built a parallel processor here based on the 88000.
When I was working on the various trap handlers, things just wouldn't quite
work all the time. Aside from trying to do this with only the board-level
debugger on the MVME181 (apparently the Government isn't set-up to rent
expensive stuff like processor-specific logic analyzers/emulators), a few
details of the chip not documented in the standard 88000 programmer's
manual made things more challenging. For instance, the destination register
of the instruction preceding an RTE might get corrupted.

That was last year. Motorola now ships CPU and CMMU errata sheets with
their VME boards. So though it's too late for my previous battle, they
have improved their communication to those developers "far far away".

NOTE: This is NOT a flame against Moto or the 88000. I _like_ the 88000,
but at least last year it suffered from some undocumented behaviour
which was difficult to know about if you weren't well-connected.

--
---------------------------------------------------------------------------
Scott Townsend, Sverdrup Technology Inc. NASA Lewis Research Center Group
fs...@bach.lerc.nasa.gov

Jeff Kenton OSG/UEG

unread,
Aug 23, 1991, 9:27:04 AM8/23/91
to
In article <1991Aug23....@eagle.lerc.nasa.gov>, fs...@bach.lerc.nasa.gov (Scott E. Townsend) writes:

|>
|> Well here's one. We've built a parallel processor here based on the 88000.
|> When I was working on the various trap handlers, things just wouldn't quite
|> work all the time. Aside from trying to do this with only the board-level
|> debugger on the MVME181 (apparently the Government isn't set-up to rent
|> expensive stuff like processor-specific logic analyzers/emulators), a few
|> details of the chip not documented in the standard 88000 programmer's
|> manual made things more challenging. For instance, the destination register
|> of the instruction preceeding an RTE might get corrupted.
|>
|> That was last year. Motorola now ships CPU and CMMU errata sheets with
|> their VME boards. So though it's too late for my previous battle, they
|> have improved their communication to those developers "far far away".
|>

Motorola was shipping errata sheets with their earliest, pre-release versions
of the 88K chips in the spring of 1989, and I'm sure they continue to do so.
Still, I can't imagine doing the low level stuff without a logic analyzer.
You often need to find those chip bugs yourself in the early days, so you can
tell the manufacturer where they are.

-----------------------------------------------------------------------------
== jeff kenton Consulting at ken...@decvax.dec.com ==
== (617) 894-4508 (603) 881-2742 ==
-----------------------------------------------------------------------------

John ffitch

unread,
Aug 23, 1991, 8:14:43 AM8/23/91
to
I would like to support Paul Leyland's comments. I too wrote
microcode for the HLH Orion -- in my case not so much but just
additional instructions to support LISP function calling, CAR/CDR and
some simple functions. It helped to have such good tools, and I never
felt that coding it was too hard. One just had to think a bit.

Yes, it is the last home of the true hacker, and I am not ashamed of
that.

I should say that I started microcode on a PERQ, where we replaced the
whole microcode with our own BCPL instruction set. The tools were
awful, just the 3 x 7-segment display which counted the number of
times one popped an empty stack......

==John

George J. Carrette

unread,
Aug 23, 1991, 5:19:54 AM8/23/91
to
In article <1991Aug22.2...@cs.cmu.edu>, lind...@cs.cmu.edu (Donald Lindsay) writes:
> Many (a majority?) of the people who write trap handlers are dealing
> with chips that were recently designed, somewhere far far away.
> Often, the chips (and the documentation) are early samples and have
> numerous failings. Learning intimate details can be a struggle, with
> months where folklore substitutes for knowledge. If anyone has war
> stories that don't violate nondisclosure, please, post.
> --

Maybe somebody with the knowledge can start by explaining why various
SPARC machines running SUNOS 4.4.1 die using crashme version 1.2 on one of:

%crashme 9 29748 58774 4
%crashme 10 1000 10

Some people have made statements on comp.arch that they KNOW that
all the crashme problems are due to software defects. So how about some
concrete examples? (Like, what exact software defect causes the above crash?)

I did have some interesting private mail where it was suggested that
computer manufacturers who use a commercial CHIP would want to keep
any undocumented details that they found out about by experimentation
a SECRET, because disclosing information about documentation problems
and even chip hardware bugs, even to the CHIP VENDOR, would only
help the other hardware vendors who were also using the same chip.

So for example, say there was a bug in the multiply instruction of
some chip that had a simple software work-around you could add to the compiler.
If you kept it a secret then you could end up being the only vendor
around who could run certain programs correctly. Of course, by not
telling the chip vendor you would make it more difficult for the vendor
to correct the problem.

Of course, any time when there is a special cozy relationship between
chip designers, vendors, and a limited subset of hardware and
software manufacturers, there can be even more problems with this sort
of nasty-secret arrangement.


-gjc

Joe Buck

unread,
Aug 23, 1991, 2:57:58 PM8/23/91
to
In article <14...@mitech.com> g...@mitech.com (George J. Carrette) writes:
>Maybe somebody with the knowledge can start by explaining why various
>SPARC machines running SUNOS 4.4.1 die using crashme version 1.2 on one of:

(you presumably mean SunOS 4.1.1, which is shot full of bugs.)


> %crashme 9 29748 58774 4
> %crashme 10 1000 10

>Some people have made statements on this comp.arch that they KNOW that
>all the crashme problems are due to software defects. So how about some
>concrete examples? (Like, what exact software defect causes the above crash?)

Generally, when you execute garbage you get a software trap. If the
trap handler is written incorrectly it will cause the machine to crash.
These trap handlers, for the most part, haven't been tested on random
garbage, which is why the existence of CRASHME is a great service --
it can be used by OS people to thoroughly test their trap handlers.
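
For reference, the core of the idea is tiny (a hedged sketch, not the
actual crashme source; all names are illustrative):

    /* Fill a buffer with pseudo-random bytes and jump to it.  A sound
       OS should kill the process with SIGILL/SIGSEGV, never panic. */
    #include <stdlib.h>

    typedef void (*badboy)(void);

    void try_one(unsigned seed, int nbytes)
    {
        static unsigned char buf[4096];
        int i;

        srand(seed);
        for (i = 0; i < nbytes && i < (int) sizeof buf; i++)
            buf[i] = (unsigned char) rand();
        ((badboy) (void *) buf)();   /* execute the garbage */
    }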

Now I can't say I KNOW that there's no way that crashme can cause a
Sparc chip to lock up, it's just that no one's demonstrated it yet.

>So for example, say there was a bug in the multiply instruction of
>some chip that had a simple software work-around you could add to the compiler.
>If you kept it a secret then you could end up being the only vendor
>around who could run certain programs correctly. Of course, by not
>telling the chip vendor you would make it more difficult for the vendor
>to correct the problem.

Early versions of the 386 had a multiply bug, and this might be what
you're talking about.

>Of course, any time when there is a special cozy relationship between
>chip designers, vendors, and a limited subset of hardware and
>software manufacturers, there can be even more problems with this sort
>of nasty-secret arrangement.

Fortunately, the existence of gcc works against this. Now, every time
someone comes out with a new chip they hire somebody (like Cygnus
Support) to port gcc to it -- then the source code is public for
all to see.

--
Joe Buck
jb...@galileo.berkeley.edu {uunet,ucbvax}!galileo.berkeley.edu!jbuck


Joe Buck

unread,
Aug 23, 1991, 4:14:14 PM8/23/91
to
In article <1991Aug23....@news.larc.nasa.gov> klu...@grissom.larc.nasa.gov ( Scott Dorsey) writes:

>In article <1991Aug23.1...@agate.berkeley.edu> jb...@forney.berkeley.edu (Joe Buck) writes:
>>
>>Now I can't say I KNOW that there's no way that crashme can cause a
>>Sparc chip to lock up, it's just that no one's demonstrated it yet.
>
> I bet I can write a trap handler for a SPARC machine that crashme will
>cause to lock up. Writing bad code isn't difficult to do if your intent
>is to write bad code. The problem here is that it's difficult to write
>good code...

No, you misunderstand me. When crashme first came out, some people were
claiming that the problems were in RISC chips themselves -- they thought
that hardware flaws were causing the crashes. When I said "no one's
demonstrated it yet", I meant that no one's demonstrated, as some people
were saying, that crashme exposed flaws in RISC chips that could not be
fixed with software, that the flaws were fundamental.

Henry Spencer

unread,
Aug 23, 1991, 4:41:37 PM8/23/91
to
In article <1991Aug22.2...@cs.cmu.edu> lind...@cs.cmu.edu (Donald Lindsay) writes:
>Often, the chips (and the documentation) are early samples and have
>numerous failings. Learning intimate details can be a struggle, with
>months where folklore substitutes for knowledge. If anyone has war
>stories that don't violate nondisclosure, please, post.

Not quite a war story, but a good quote... Mike Tilson of HCR, commenting
on the problems of porting compilers and Unix to new machines: "system
programmers don't expect to have to learn how to use logic analyzers".

eric smith

unread,
Aug 23, 1991, 5:09:23 PM8/23/91
to

ma...@mips.com (John Mashey) writes:

>Most of the problem was dealing with new compilers doing global optimization
>on code, where (at that time) the number of people in the world who had
>ever dealt with the resulting issues inside the kernel was small...
>Figuring out what to make volatile was pretty straightforward: make
>every pointer to a device structure volatile, plus a few other places.

The problems of using compiler optimization on kernel code can go beyond
the issue of code that references volatile locations being optimized out.
In our first port of V7 to the 68000 at Altos in the early '80s, we ran
into the following problem (OK, it's been a long time and the details
are a little fuzzy).

A certain well-known SIO chip was accessed by sending it a sequence of
bytes beginning with the internal register number. In the case where
this register number was zero, the code inexplicably didn't work. After
resorting to a logic analyzer we saw that an extra data reference was
being generated. It turned out that the compiler translated the
statement "sio->reg = 0" into something like "andb reg,0" where the
andb instruction belonged to a class of processor operations that did
a *read* before a write. The chip ignored the fact that it was a read,
merely seeing it as a data reference, and the communication got pretty
confused.
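
In source form the pattern was roughly this (a sketch; the layout and
names are illustrative, not the actual Altos driver):

    /* The register-select byte is written first, then the data. */
    struct sio {
        volatile unsigned char reg;    /* register-select port */
        volatile unsigned char data;   /* data port */
    };

    void sio_put(struct sio *sio, unsigned char regno, unsigned char val)
    {
        sio->reg = regno;   /* with regno == 0 the compiler chose a
                               clear/and form that did a read cycle
                               first -- one extra access as seen by
                               the chip */
        sio->data = val;
    }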

-----
Eric Smith
er...@sco.com
er...@infoserv.com
CI$: 70262,3610

Paul Campbell

unread,
Aug 23, 1991, 12:24:45 PM8/23/91
to
In article <1991Aug22.2...@cs.cmu.edu> lind...@cs.cmu.edu (Donald Lindsay) writes:
>
>Do any chip designers make a (defined to be correct) simulator
>available to OS groups?
>

Yes, in a past life I got Unix up to a shell prompt running on a chip
simulator (took about an hour on an Amdahl to get it to run to this
point - if you disabled the clearing of memory at boot time) before
we ever saw working hardware - we even found some chip bugs on the way.

If you are designing hardware you should be doing this!! You need to get
it to your software people asap; it can save you a silicon turn or two
very late in the game, and it also gives you faster time to market, since
the worst part of the kernel port will be done as soon as the hardware is
available.

Of course you should code your model in Verilog/VHDL and GIVE AWAY copies
to the people who want to use your chips - one of the first questions
I ask of vendors these days is "will you give me a Verilog model". You
can bet that if one vendor can and one can't I'm going to choose the
one who can .... because we're going to do full board simulation before we
tape out our own chips, so it saves us writing a model.


Paul


--
Paul Campbell UUCP: ..!mtxinu!taniwha!paul AppleLink: CAMPBELL.P

Tom Metzger's White Aryan Resistance has been enjoined to stop selling Nazi
Bart Simpson t-shirts - Tom of course got it wrong, Bart is yellow, not white.

Andy Glew

unread,
Aug 23, 1991, 8:36:30 PM8/23/91
to

> A certain well-known SIO chip was accessed by sending it a sequence of
> bytes beginning with the internal register number. In the case where
> this register number was zero, the code inexplicably didn't work. After
> resorting to a logic analyzer we saw that an extra data reference was
> being generated. It turned out that the compiler translated the
> statement "sio->reg = 0" into something like "andb reg,0" where the
> andb instruction belonged to a class of processor operations that did
> a *read* before a write. The chip ignored the fact that it was a read,
> merely seeing it as a data reference, and the communication got pretty
> confused.

An example of the class of problems that make me think that memory
mapped I/O, for "active" I/O devices, is not necessarily a good thing.
By "active" I/O devices, I mean devices where reads can have side
effects.

Memory mapping "passive" I/O, like frame buffers, queues that are read
by Ethernet controllers, etc., where there is actually something very
similar to real RAM behind the memory address, is okay, but memory
mapping active I/O devices is not necessarily such a good idea.
(Careful use of uncached memory and write activated control registers
in write-through is similarly okay).

A while back a similar example was posted, where the programmer had
tried to do something like "mio->reg ^= 0xF00". The compiler emitted
an immediate-to-memory XOR, which used a bus transaction different
from that used for normal LD/ST. The I/O board ignored this bus
transaction.

Or how about:

    mem->reg.byte0 = 0;
    mem->reg.byte1 = 0;

being combined by the compiler into a 16-bit store? That may very well
not be safe if the order of I/O operations is important.

The last is a good example. ANSI C volatile has several target
audiences, including writers of parallel applications, signal
handlers, and I/O drivers.
Now, you might declare byte0 and byte1 above as volatile. But the
writer of parallel programs probably would like the compiler to
combine the writes.
So, what does ANSI do? I suspect that it makes it illegal to
combine the writes. I.e. the language is deficient for parallel
programming. Conversely, if volatile does not forbid combining the
writes, the language is deficient for writing memory mapped I/O device
drivers.
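
For the driver audience the declaration would look something like this
(a sketch; whether a given compiler then refrains from merging the
stores is exactly the question):

    struct dev_reg {
        volatile unsigned char byte0;
        volatile unsigned char byte1;
    };

    void reset(struct dev_reg *mem)
    {
        mem->byte0 = 0;   /* one byte store, by itself, in order */
        mem->byte1 = 0;   /* not merged into a single 16-bit write */
    }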

If you teach your compiler to handle all of the various cases, how
much additional work is it to tell the compiler that "If an object is
declared with the I/O attribute, use I/O operations to read and write
to it"? Or, for that matter, if the programmer has to worry about
exactly what bus transactions get emitted to a memory-mapped device,
why shouldn't the programmer then be saying IO_write(location,data)?


I/O instructions have one big advantage: they are easy to distinguish
from memory instructions. It is harder to distinguish memory mapped I/O
from ordinary memory accesses - you detect cacheability, e.g., later
in the pipeline.

Phil Ngai

unread,
Aug 24, 1991, 2:27:05 AM8/24/91
to
he...@zoo.toronto.edu (Henry Spencer) writes:
>Not quite a war story, but a good quote... Mike Tilson of HCR, commenting
>on the problems of porting compilers and Unix to new machines: "system
>programmers don't expect to have to learn how to use logic analyzers".

Maybe they should.

--
I want my "Freedom of Association"! (tm of soc.motss)

Michael O'Dell

unread,
Aug 24, 1991, 9:10:26 AM8/24/91
to
The first-order real problem with the famous serial chip in question
is the indirect addressing crock to get to the registers.
Note that the manufacturer DID make a version of the chip which
allowed you to address all the registers with address lines without
the "register-register". I seem to remember that it had several
lossages - one being funny timing requirements, but the other was that
it was noticably more expensive, and the package larger (more address
pins, don't you know) and hence it never got used in designs. the
smaller footprint and lower price was too much to pass up.

The place memory-mapped I/O is a problem is now that the CPUs of
common machines can do some significant number of instructions during
one VME bus cycle. The Prisma P1 could do about 100 instructions in
the time it took for a LOAD from VME space to complete.
So when we looked at peripheral controllers to qualify, the controller
which needed 10-odd writes and 20-odd reads to VME space in order to
start a transaction lost out to the board which needed only one or two.

This is one of the real advantages of the new mezzanine busses like
SBus and TurboBus - you can touch things on less than geologic
time scales. And keep in mind these differences are only going to get
bigger.

When trying to improve I/O speeds for non-supercomputing workloads,
improving Megabytes/second won't help. You gotta address
Transactions/second - the name of the game is Latency, not Bandwidth.

This means that machines MUST be more compact, and the bus protocols
much simpler and almost certainly synchronous. The days of using a
general-purpose bus (like VME) for I/O are coming to an end.
We can get higher-performance and lower-price controllers
by putting them on I/O distribution buses.

-Mike O'Dell

Bellcore sharing MY opinions??? Not in my lifetime.

John R. Levine

unread,
Aug 24, 1991, 9:41:58 AM8/24/91
to
In article <GLEW.91Au...@pdx007.intel.com> gl...@pdx007.intel.com (Andy Glew) writes:

> mem->reg.byte0 = 0; mem->reg.byte1 = 0;

>[might be] combined by the compiler into a 16 bit store...

>ANSI C volatile has several target audiences, including writers of parallel
>applications, signal handlers, and I/O drivers.
> Now, you might declare byte0 and byte1 above as volatile. But the
>writer of parallel programs probably would like the compiler to
>combine the writes.
> So, what does ANSI do?

In my draft of the standard, section 2.1.2.3 says that "At sequence points,
volatile objects are stable in the sense that previous evaluations are
complete and subsequent evaluations have not yet occurred." Footnote 64 to
section 3.5.3 says "A volatile declaration may be used to describe an object
corresponding to a memory-mapped input/output port or an object accessed by an
asynchronously interrupting function. Actions on objects so declared shall
not be "optimized out" by an implementation or reordered except as permitted
by the rules for evaluating expressions."

I have long claimed, albeit not always very coherently, that volatile is a
snare and a delusion. It does seem to be useful for variables shared between
a mainline routine and a signal handler running on the same CPU, but for memory-mapped
I/O and variables shared between multiple processors its simplistic memory
model just isn't adequate.

For memory-mapped I/O, the example above shows a typical problem. Other
messages have noted that obscure implementation details, e.g. a clear
instruction that does a read/modify/write, can cause major trouble with I/O
ports that do something on each reference. It also seems to me that given
that memory-mapped I/O code is inherently machine specific, it is silly to
invest a lot of effort in defining a putatively portable way to write it. I'd
be happier with some pragmas that told the local compiler about the warts of
the devices in a way that was adequate to produce reliable code.

On multiprocessors, write-behind caches make volatile variables nearly
useless. On a multiprocessor IBM 370, for example, a processor is allowed to
delay memory writes arbitrarily long as viewed by other processors. (It has
to be internally consistent, but you don't need volatile for that.) You can
force out delayed writes with a pipe-drain no-op, but that is very slow. It
looks to me like a conforming implementation on a 370 should put a pipe-drain
after every store to a volatile variable. I doubt users would be very happy
about that. They'd need some way to identify synchronization points, perhaps
with a pragma or a SYNCPOINT() macro, but with explicit syncpoints there's no
need even to store shared variables after every reference, much less force
them to memory.
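
A sketch of what that might look like (every name here is hypothetical,
not from any real compiler):

    extern int shared_flag;          /* shared with other processors;
                                        with explicit syncpoints it
                                        needn't even be volatile */
    extern void pipe_drain(void);    /* assumed to emit the 370's
                                        pipe-drain no-op */

    #define SYNCPOINT() pipe_drain()

    void publish(void)
    {
        shared_flag = 1;    /* may linger in a write-behind cache */
        SYNCPOINT();        /* must be visible to other CPUs here */
    }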

--
John R. Levine, IECC-by-the-sea, Harvey Cedars NJ "Where TAT-9 comes ashore"
jo...@iecc.cambridge.ma.us, {ima|spdcc|world}!iecc!johnl

John Mashey

unread,
Aug 24, 1991, 2:22:52 PM8/24/91
to
In article <1991Aug24....@amd.com> ph...@brahms.amd.com (Phil Ngai) writes:
>he...@zoo.toronto.edu (Henry Spencer) writes:
>>Not quite a war story, but a good quote... Mike Tilson of HCR, commenting
>>on the problems of porting compilers and Unix to new machines: "system
>>programmers don't expect to have to learn how to use logic analyzers".
>
>Maybe they should.

There are 2-3 distinct categories of system programmer activities.
In some cases the same people may do all of them; in some cases
the skill-sets are rather different:

1. NEW CHIP, NEW HARDWARE
2. NEW CHIP, OLD HARDWARE
3. OLD CHIP, NEW HARDWARE
4. OLD CHIP, OLD HARDWARE

In case 4, one is doing operating system work on stable hardware,
and using a logic analyzer is probably unnecessary.

In cases 2 and 3, there may be some logic analyzer work, although one
hopes that the diagnostics folks get thru that ... of course, UNIX is almost
always the killer diag that finds new bugs in something.

In case 1, some use of a logic analyzer will likely occur early in the
process.

From past experience, cases 1 and 4 are radically different, and if you've
experienced 4, but not 1, you might be in for a shock when you encounter 1.
(We've actually been fairly lucky at MIPS, as the bugs in early chips
have generally been workable-around in software; being able to boot UNIX
on an RTL-level model of the hardware in advance of tapeout is a big
help ... well, it's more like making the difficult possible.
Still, new chip+hardware bringup is an unusual art.)

--
-john mashey DISCLAIMER: <generic disclaimer, I speak for me only, etc>
UUCP: ma...@mips.com OR {ames,decwrl,prls,pyramid}!mips!mash
DDD: 408-524-7015, 524-8253 or (main number) 408-720-1700
USPS: MIPS Computer Systems MS 1/05, 930 E. Arques, Sunnyvale, CA 94088-3650

Blair P. Houghton

unread,
Aug 24, 1991, 3:58:16 PM8/24/91
to
I don't think it's that complex, really, it just means that
the latches should be latched when the access is made, and
the access should be made only when the latches are latched
(the latches being virtual or the side-effect of predetermined
synchrony, not necessarily hardware that latches.)

The appellation 'volatile' certainly isn't concerned with
protocol on the bus.

The "don't optimize-out volatiles" feature merely prevents
the ANSI C implementation from throwing away variables
which are assigned-to/read-from and are never involved in a
corresponding read-from/assigned-to operation.

--Blair
"Slippery little suckers."
-Julia Roberts

Robert Heiss

unread,
Aug 24, 1991, 4:11:41 PM8/24/91
to
In article <13...@scolex.sco.COM> er...@sco.COM (eric smith) writes:
[surprised that 68K storing 0 to a device register also caused a read]

That 68000 quirk sold a lot of logic analyzers ... sigh! Even with
compiler optimization turned off, our Sun device drivers suffered from
spurious reads. Even disassembling the code and studying it instruction
by instruction revealed no obvious error.

The CISC 68000 has a CLRB instruction specialized for storing zero. It's
so simple, anyone (even some compiler experts) would assume that it is the
smallest, fastest and safest method of storing zero. Surprise, it's merely
the smallest.

On some 68000 implementations (the original 68000, I believe; the 68010
fixed it) CLRB has read-modify-write
timing identical to NEGB and NOTB. Why? Maybe doing it that way saves a
microinstruction, who knows. The result of this clever economy is that
CLRB is slower than storing a register to memory, and CLRB is unsafe for
access to volatile memory locations.
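
The usual workaround, in C terms (a sketch; whether a given compiler
cooperates by emitting MOVE.B from a data register is, of course, up
to the compiler):

    volatile unsigned char *const devreg =
        (volatile unsigned char *) 0xFE0000;   /* illustrative address */

    void clear_devreg(void)
    {
        unsigned char zero = 0;   /* keep the zero in a register */
        *devreg = zero;           /* MOVE.B Dn,(An): write-only cycle */
    }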

--
Robert Heiss r...@wilbur.coyote.trw.com

Henry Spencer

unread,
Aug 24, 1991, 6:37:18 PM8/24/91
to
In article <28B6BD...@deneva.sdd.trw.com> r...@wilbur.coyote.trw.com (Robert Heiss) writes:
>On some 68000 implementations (68010 I think) CLRB has read-modify-write
>timing identical to NEGB and NOTB. Why? Maybe doing it that way saves a
>microinstruction, who knows...

Plus ca change... The same situation occurred with some instructions on
some versions of the pdp11. The typical reason is that a whole group of
instructions is being handled by common microcode, with only the ALU
operation being modified by the precise choice of instruction.

Doug Gwyn

unread,
Aug 24, 1991, 8:45:17 PM8/24/91
to
In article <1991Aug24.1...@iecc.cambridge.ma.us> jo...@iecc.cambridge.ma.us (John R. Levine) writes:
-In article <GLEW.91Au...@pdx007.intel.com> gl...@pdx007.intel.com (Andy Glew) writes:
->ANSI C volatile has several target audiences, including writers of parallel
->applications, signal handlers, and I/O drivers.
-I have long claimed, albeit not always very coherently, that volatile is a
-snare and a delusion. It does seem to be useful for variables shared between
-a mainline routine and a signal running on the same CPU, but for memory mapped
-I/O and variables shared between multiple processors its simplistic memory
-model just isn't adequate.
-For memory-mapped I/O, the example above shows a typical problem. ...

I don't know why you're even discussing this. "volatile" qualification
was NOT INTENDED to serve as a mechanism in support of parallel threads
of execution accessing shared data. It was NOT INTENDED to guarantee
highly implementation-specific semantics concerning bus access etc.
Faulting it because it does not serve a function it was never designed
to serve is rather a waste of time. Such applications require a
different form of support, presumably via one or more conforming
extensions beyond the basic C standard.

David Wright

unread,
Aug 24, 1991, 10:24:20 PM8/24/91
to
In article <1991Aug23.0...@kithrup.COM> s...@kithrup.COM (Sean Eric Fagan) writes:
>In article <1991Aug2...@hob8.prime.com> da...@hob8.prime.com (Dan
>Westerberg) writes:
>>I'm an advocate of microcode, because should a bug be shipped to customers,
>>it's much easier to ship out a floppy with new microcode than to re-spin
>>a chip or replace chips/boards in the field.
>
>Uhm, what makes you think you can't replace the chips in the field if there
>is no microcode?

But that requires a service call. New microcode, at least on a Prime
of recent vintage, requires only shipping a new floppy to a customer.

-- David Wright, not officially representing Stardent Computer Inc
wri...@stardent.com or uunet!stardent!wright

Nick Felisiak

unread,
Aug 25, 1991, 10:56:22 AM8/25/91
to
In article <1991Aug23....@maths.bath.ac.uk> jp...@maths.bath.ac.uk (John ffitch) writes:
>
>I should say that I started microcode on a PERQ, where we replaced the
>whole microcode with our own BCPL instruction set. The tools were
>awful, just the 3 x 7-segment display which counted the number of
>times one popped an empty stack......
>

Oh, not true John; the screen, being store-mapped, was accessible from the
microcode. All you needed was a magnifying glass to count the pixels :-)!
I'll never forget coming into an office to find two people staring at a Perq
screen saying things like "it is, you know". They later explained they'd
found a page table entry in the middle of the screen!

I was amused when I first saw the Accent project's 'lights'. They had
a thin strip at the top of the screen divided into a number (32?) of
'lights', which could be switched on or off by target-level instructions.
The microcode just filled the appropriate area of store with all zeros or
all ones.

Nick
--
Nick Felisiak ni...@spider.co.uk
Spider Systems Limited +44 31 554 9424

Doug McDonald

unread,
Aug 25, 1991, 11:30:22 AM8/25/91
to

In article <28B6BD...@deneva.sdd.trw.com> r...@wilbur.coyote.trw.com (Robert Heiss) writes:
[the 68000 CLR quirk: storing 0 also causes a read]

But presumably this was documented, yes???

The PDP-11 did similar things and they most certainly were documented.
I never had a problem with one of those. About 80x86s I don't know -
I've never built nor bought an interface for one of those where a
read alone did something.

Doug McDonald

victor yodaiken

unread,
Aug 25, 1991, 11:44:11 AM8/25/91
to
In article <28B6BD...@deneva.sdd.trw.com> r...@wilbur.coyote.trw.com (Robert Heiss) writes:
>In article <13...@scolex.sco.COM> er...@sco.COM (eric smith) writes:
>[surprised that 68K storing 0 to a device register also caused a read]
>
>That 68000 quirk sold a lot of logic analyzers ... sigh! Even with

Ah, but the early 68000s had some behavior that defeated logic analyzers.
We had a few that got very confused, every now and then, trying to
execute an "rte" from a trap (system call) when the next user mode
instruction was a "rts". I'm not sure I remember this right, but it
may have been that the switch of stack pointers was not always completed
in time.

Doug Gwyn

unread,
Aug 25, 1991, 3:38:30 PM8/25/91
to
In article <1991Aug25....@iecc.cambridge.ma.us> jo...@iecc.cambridge.ma.us (John R. Levine) writes:
>In article <17...@smoke.brl.mil> gw...@smoke.brl.mil (Doug Gwyn) writes:
>>... "volatile" qualification >was NOT INTENDED to serve as a mechanism

>>in support of parallel threads of execution accessing shared data. It
>>was NOT INTENDED to guarantee highly implementation-specific semantics
>>concerning bus access etc.
>I agree that they require a different form of support, but the standard and
>rationale make it pretty clear that "volatile" was indeed intended for both
>I/O ports and shared data.

Okay, I overstated it slightly. I meant that we thought that "volatile"
qualification might be NECESSARY for such applications; however, I don't
think the majority of X3J11 thought that it would really be SUFFICIENT
for them. As you pointed out (not quoted here), full semantics for such
things are quite complex and no simple universal language feature could
possibly cover all the bases.

>As Ken Thompson said in his paper on the Plan 9 C compiler, explaining its
>divergences from ANSI: "Volatile seems to have no meaning, so it is hard to
>tell if ignoring it is a departure from the standard."

Ken is very smart but also very opinionated. "Volatile" does have a
meaning, or we wouldn't have bothered with it. It can be safely ignored
by a conforming implementation only if the implementation treats ALL
object accesses as volatile-qualified; however, that precludes many
common optimization techniques. Just because "volatile" does not mean
what one might wish about memory-mapped I/O access does not imply that
it has no other meaning.

The main areas of the C standard where volatile qualification is relevant
are in the definitions of sig_atomic_t and setjmp().
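
A minimal sketch of that sanctioned use, the signal-flag idiom:

    #include <signal.h>

    static volatile sig_atomic_t got_sigint = 0;

    static void handler(int sig)
    {
        got_sigint = 1;   /* assigning to a volatile sig_atomic_t is
                             the only static-object access the
                             standard blesses inside a handler */
    }

    int main(void)
    {
        signal(SIGINT, handler);
        while (!got_sigint)
            ;             /* volatile forces a fresh load each time */
        return 0;
    }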

Phil Brownfield

unread,
Aug 25, 1991, 11:50:56 AM8/25/91
to
In article <28B6BD...@deneva.sdd.trw.com> r...@wilbur.coyote.trw.com (Robert Heiss) writes:
>On some 68000 implementations (68010 I think) CLRB has read-modify-write
>timing identical to NEGB and NOTB. Why? Maybe doing it that way saves a
>microinstruction, who knows. The result of this clever economy is that
>CLRB is slower than storing a register to memory, and CLRB is unsafe for
>access to volatile memory locations.

Just to clarify which implementations do what, the CLR instruction does
an operand read on the original MC68000 and derivatives without any
instruction set extensions (e.g. MC68008, MC68302). But on the MC68010
and successors, CLR avoids reading the operand before overwriting it.
Except for MOVE, CLR is the only MC68000 instruction that both affects
condition codes and writes a memory location without needing to read it
first, so, as I heard it, microcode was shared with ordinary R-M-W
instructions as a design simplification.

MOVE.B #0,<ea> is as fast as CLR.B <ea> on the original '000, without
the memory mapped I/O quirk. Larger machine code, though.
--
Phil Brownfield | ph...@motaus.sps.mot.com
Speaking for myself. | {cs.utexas.edu!oakhill, mcdchg}!motaus!phil
Don't mistake kindness for weakness - Albert Collins

Henry Spencer

unread,
Aug 25, 1991, 6:14:53 PM8/25/91
to
In article <1991Aug25.1...@ux1.cso.uiuc.edu> mcdo...@aries.scs.uiuc.edu (Doug McDonald) writes:
>But presumably this was documented, yes???
>The PDP-11 did similar things and they most certainly were documented.

Oh yeah? Where? It wasn't documented on the 11 any more than it was
documented on the 68000. (Unless you count the fact that the 11s shipped
with complete microcode listings -- while these may have been able to
answer such questions in principle, in practice reading them was a
considerable adventure on the higher-end 11s, which used very horizontal
microcode.) Both design groups used the same philosophy: document the
architecture well and skimp on documenting the implementations. Works
okay until something like this, where implementation properties are
relevant, crops up.

I well remember the cries of "it's doing *what*?!?" when we tried to
produce specific types of bus cycles on an 11/45 for some hardware work.

Steve Correll

unread,
Aug 24, 1991, 7:57:00 PM8/24/91
to
In article <1991Aug2...@hob8.prime.com> da...@hob8.prime.com (Dan Westerberg) writes:
>...Personally, I *love* microcode.
>As an ASIC designer, microcode gives me that warm and fuzzy feeling because I
>know that should a bug show up in my silicon, it's quite possible to work
>around it with microcode.

Interestingly, this is practical to a limited extent even with a
non-microcoded RISC CPU. In a previous job, I "fixed" bugs in the pipeline
control logic of early revisions of a well-known RISC chip by adding
heuristics to the instruction scheduler within its assembler. The tradeoffs
are certainly different: with microcode, you can make the fix go away when
future hardware no longer needs it, but with macrocode, it may be hard to ask
customers to regenerate all the object files containing a fix. Thus, you're
less likely to make a temporary fix if it significantly degrades performance.

Chris Torek

unread,
Aug 25, 1991, 10:19:31 PM8/25/91
to
In article <1991Aug25.1...@motaus.sps.mot.com> ph...@motaus.sps.mot.com (Phil Brownfield) writes:
>Just to clarify which implementations do what, the CLR instruction does
>an operand read on the original MC68000 and derivatives without any
>instruction set extensions (e.g. MC68008, MC68302). But on the MC68010
>and successors, CLR avoids reading the operand before overwriting it.

Yes. One of the amusing aspects of the `CLR situation' was a Heurikon
board that was shipped with some documentation noting that CLR does a
read and should not be used on memory mapped locations. We replaced
their PROMs with our own (including a much faster bcopy/bzero and a
small disassembler) and part of the job involved looking at their
code. I noticed that it used CLR instructions to write to memory
mapped registers, despite their own warning: It only worked because
the boards used 68010s. Yet while they were depending on the 68010,
they were still using memory zero and copy routines that were optimized
for the 68000. (On the 000, moveml is the fastest way to copy and
clear; on the 010, using the loop mode is MUCH faster.)
--
In-Real-Life: Chris Torek, Lawrence Berkeley Lab CSE/EE (+1 415 486 5427)
Berkeley, CA Domain: to...@ee.lbl.gov
new area code as of September 2, 1991: +1 510 486 5427

Larry Philps

unread,
Aug 25, 1991, 1:45:55 PM8/25/91
to
In <73...@spim.mips.COM> ma...@mips.com (John Mashey) writes:

> In article <1991Aug24....@amd.com> ph...@brahms.amd.com (Phil Ngai) writes:
> >he...@zoo.toronto.edu (Henry Spencer) writes:
> >>Not quite a war story, but a good quote... Mike Tilson of HCR, commenting
> >>on the problems of porting compilers and Unix to new machines: "system
> >>programmers don't expect to have to learn how to use logic analyzers".
> >
> >Maybe they should.
>
> There are 2-3 distinct categories of system programmer activities:
> In some cases, the same people may do all of them, in some cases
> the skill-sets are rather different:
>
> 1. NEW CHIP, NEW HARDWARE
> 2. NEW CHIP, OLD HARDWARE
> 3. OLD CHIP, NEW HARDWARE
> 4. OLD CHIP, OLD HARDWARE
>
> In case 4, one is doing operating system work on stable hardware,
> and using a logic analyzer is probably unnecessary.

Actually, I would say there should be a 3rd dimension on your table,
in particular OLD/NEW OS. HCR was in the business of porting Unix
and in many cases the hardware had been running a proprietary OS for
ages before we saw it. So we really were in your case 4, but were
putting a new OS on the hardware.


There are a few cases in which a logic analyzer comes in very
handy for instruction tracing.

1) System crashes on a corrupted data structure, and the routine
that caused the corruption is not in the current call chain.
Even with a kernel debugger, a stack trace will not help at
this point. However, the system *usually* does not last long
after the corruption, and being able to look at an instruction
trace for the last couple of hundred instructions before the
corruption often pinpoints the problem.

2) System totally locks up. It's not in the monitor, but it ain't
talking to you. You would just die to know the current
instruction pointer ...

3) (This really happened once, and we had to rent a logic analyzer
for a couple of days to find out what was going on ...)
We had an OS that blew the kernel stack, and the code for
kernel stack recovery contained a bug. When the kernel stack
overflowed, and then the attempt to recover failed, the kernel
just gave up and toggled the CPU reset line. So we went from
multi-user mode to boot time memory checks with no indication
of where the system was when the error occurred.

---
Larry Philps, SCO Canada, Inc.
Postman: 130 Bloor St. West, 10th floor, Toronto, Ontario. M5S 1N5
InterNet: lar...@sco.COM or larryp%sco...@uunet.uu.net
UUCP: {uunet,utcsri,sco}!scocan!larryp
Phone: (416) 922-1937

Michael Tilson

unread,
Aug 25, 1991, 4:07:28 PM8/25/91
to
ph...@brahms.amd.com (Phil Ngai) writes:

> he...@zoo.toronto.edu (Henry Spencer) writes:
> >Not quite a war story, but a good quote... Mike Tilson of HCR, commenting
> >on the problems of porting compilers and Unix to new machines: "system
> >programmers don't expect to have to learn how to use logic analyzers".
>
> Maybe they should.

Actually, in the cases I was referring to, they had to. Kernel programmers
responsible for building new systems on new chips or prototype hardware
will need to at least be comfortable working with those who know how to
work a logic analyzer, and should be capable of learning how to use one
themselves if needed. This does come as a surprise to many.

Dan Westerberg

unread,
Aug 26, 1991, 1:00:13 PM8/26/91
to
In article <9...@taniwha.UUCP>, pa...@taniwha.UUCP (Paul Campbell) writes:
|>
|> Of course you should code your model in Verilog/VHDL and GIVE AWAY copies
|> to the people who want to use your chips - one of the first questions
|> I ask of vendors these days is "will you give me a Verilog model", you
|> can bet that if one vendor can and one can't I'm going to choose the
|> one who can .... because we're going to do full board simulation before we
|> tape out our own chips so it saves us writing a model.
|>

I agree that intensive simulation using Verilog/VHDL is a necessity in designing
chips these days, but I don't think it's the optimal solution for the software
people.

At least in Verilog, if you get a system simulation running with several large
ASICs modelled at an RTL level, the simulation gets *immense* very quickly. Even
with large amounts of CPU and memory available, the simulation environment just
is not very conducive to booting operating systems. I admit, however, that I've
not had the privilege of trying this on a supercomputer-class machine.

I believe that a more realistic environment for software testing is a hardware
prototype in the lab, using hardware emulation technology. This provides the
*real* environment that the chips and software will eventually run under, and at
a speed potentially several orders of magnitude faster than simulation can
provide.


Dan

--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~ "These walls that still surround me ~ ~
~ Still contain the same old me ~ Dan Westerberg ~
~ Just one more who's searching for ~ da...@hob8.prime.com ~
~ The world that ought to be" ~ ~
~ - Neil Peart ~ Prime Computer, Framingham, MA ~
~ ~ ~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Steve Correll

unread,
Aug 26, 1991, 4:30:35 PM8/26/91
to
>In article <GLEW.91Au...@pdx007.intel.com> gl...@pdx007.intel.com (Andy Glew) writes:
>
> mem->reg.byte0 = 0; mem->reg.byte1 = 0;
>[might be] combined by the compiler into a 16 bit store...
> So, what does [the] ANSI [volatile declaration] do?

In article <1991Aug24.1...@iecc.cambridge.ma.us> jo...@iecc.cambridge.ma.us (John R. Levine) writes:
>In my draft of the standard, section 2.1.2.3 says that "At sequence points,
>volatile objects are stable in the sense that previous evaluations are
>complete and subsequent evaluations have not yet occurred."

Because the end of the first statement constitutes a "sequence point", I
think you can make a pretty good argument that a compiler which combines
the two volatile byte stores into a 16-bit store has violated the ANSI
standard by either leaving the first evaluation incomplete or performing
the subsequent evaluation early. But the other criticisms of "volatile"
remain valid. System-specific needs call for system-specific solutions,
such as a pragma which says, "this is an I/O operation" and a code
generator which understands the requirements of I/O operations on the target
machine.
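
Restating the quoted example as C (the struct layout is a
hypothetical stand-in for Glew's mem->reg):

    struct dev { volatile unsigned char byte0, byte1; };

    void clear_reg(struct dev *mem)
    {
        mem->byte0 = 0;   /* sequence point: this volatile store must
                             be complete before the next begins, */
        mem->byte1 = 0;   /* so fusing the two into one 16-bit store
                             would be non-conforming */
    }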

Andy Glew

unread,
Aug 26, 1991, 9:27:58 AM8/26/91
to
>
>Do any chip designers make a (defined to be correct) simulator
>available to OS groups?
>

In my past life at Gould I was one of the OS group users of such a
simulator. We booted UNIX, promptly checkpointed, and ran multiple
processes on the simulator. (I don't know if we ever finished running
MUSbus (now KENbus).)

Q/A: what is "defined to be correct"? Most OS testing does not
require 100% accurate timing, just proper execution of instructions;
most does not require cache, although some does (especially if you are
trying to verify that the OS flushes cache correctly).
Opinion: what you need is a spectrum of simulators, ranging from
as accurate as possible (and slow) to simulators that execute
instructions but are not timing accurate. With luck, you can compile
all of these simulators from the same description.

Andy Glew

unread,
Aug 26, 1991, 9:42:36 AM8/26/91
to
>About 80x86s I don't know - I've never built nor bought an interface
>for one of those where a read alone did something.

I believe that I encountered such a "read-activated" I/O device on an
8088 vintage IBM PC - several years ago, before I worked for Intel.
What was it, a UART that could be memory mapped as well as I/O mapped?

I'd be interested in any other "read-activated" memory mapped I/O
devices people can tell me about. Or, for that matter, any memory
mapped I/O. Send email.

Paul Campbell

unread,
Aug 26, 1991, 12:49:54 AM8/26/91
to
In article <13...@scolex.sco.COM> er...@sco.COM (eric smith) writes:
>
>being generated. It turned out that the compiler translated the
>statement "sio->reg = 0" into something like "andb reg,0" where the

I bet it was more like 'clrb (An)'; the early 68k
clr instruction actually did a read and then a write
which blew away lots of devices like this (thus showing
the dangers of using microcode subroutines without
thinking about them - or deciding to save a few bytes of
microcode ROM). The usual code to fix this looked like:

zero = 0;
sio->reg = zero;

with appropriate comments pointing out why this shouldn't
be changed. Of course a modern optimizing compiler would notice
that zero was always 0 and optimize the code back to
'clrb reg(An)' :-) and the next generation of kernel people
would go through the same hell. I think that later versions of the
68k series fixed this problem.
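
A sketch of the belt-and-suspenders variant that survives such an
optimizer ('sio' as in the code above), by making the scratch
variable itself volatile:

    volatile char zero = 0;
    sio->reg = zero;   /* the compiler may not assume what a volatile
                          load yields, so it must move a register to
                          memory rather than emit clrb */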

peter da silva

unread,
Aug 27, 1991, 10:42:46 AM8/27/91
to
In article <GLEW.91Au...@pdx007.intel.com>, gl...@pdx007.intel.com (Andy Glew) writes:
> I believe that I encountered such a "read-activated" I/O device on an
> 8088 vintage IBM PC - several years ago, before I worked for Intel.
> What was it, a UART that could be memory mapped as well as I/O mapped?

There are valid performance reasons for using a read-activated device. Suppose
you're working over a slow bus... if you can read stuff off a queue of events
(say, characters in a FIFO) with one read per character instead of a read and
a write you can handle data twice as fast.
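
A sketch of that one-read-per-character loop (the device and its
addresses are hypothetical):

    /* Hypothetical FIFO: each read of the data register pops one
       queued character as a side effect. */
    volatile unsigned char *fifo_data  = (volatile unsigned char *) 0xF0001000;
    volatile unsigned char *fifo_count = (volatile unsigned char *) 0xF0001001;

    int drain(unsigned char *buf)
    {
        int i, n = *fifo_count;      /* bytes currently queued */
        for (i = 0; i < n; i++)
            buf[i] = *fifo_data;     /* one bus read per byte */
        return n;
    }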

The other way of getting this performance is to expose the whole queue to
the CPU, and have it read it from sequential locations. This is a better
solution today... but it wasn't workable back when all your I/O was mapped
into a single 4K page.

A good source of weird memory-mapped devices would be an old PDP-11 manual.
--
Peter da Silva; Ferranti International Controls Corporation; +1 713 274 5180;
Sugar Land, TX 77487-5012; `-_-' "Have you hugged your wolf, today?"

John Mashey

unread,
Aug 28, 1991, 1:54:34 AM8/28/91
to
In article <GLEW.91Au...@pdx007.intel.com> gl...@pdx007.intel.com (Andy Glew) writes:
>I just wonder if anyone, anywhere, has been foolish enough to put them
>in cached memory? (With software explicitly cache flushing by knowing
>the cache structure).

Well, not on purpose :-)

I recall a bug a loonnng time ago. MIPSers were trying to debug a QIC tape,
and it had the weird behavior that:
a) It failed, occasionally when running on a quiet system.
b) It worked fine when busy.
This, of course, is counter to most OSers' experience, where something
fails on a loaded system, and works fine when you try to isolate it
(an adaptation of Heisenberg's Uncertainty Principle to computer debugging :-)

Recall that the R2000 allocates two 512MB chunks of virtual address space
that are direct-mapped on top of the first 512MB of memory, but one cached
and the other uncached (i.e., there is a cached address and an uncached
address for each such byte, differing only by setting a bit near the top of
the address.)

Light dawned, finally, that someone had used the cached version of
an address of some device control/status register. When the system was
idle, it sometimes got a hit in the cache, fouling up the software.
When the system was busy, the offending cache line got replaced often
enough that a device access caused a cache miss, and the correct
data was gotten.
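
A sketch of the aliasing involved, using the standard R2000 kseg
conventions (the device address is hypothetical):

    /* kseg0 (0x80000000) and kseg1 (0xA0000000) are both direct
       windows onto the first 512MB of physical memory; they differ
       only in one high address bit, and only kseg1 bypasses the
       cache. Device registers must be touched through kseg1. */
    #define KSEG0(pa) ((volatile unsigned long *)(0x80000000 | (pa)))
    #define KSEG1(pa) ((volatile unsigned long *)(0xA0000000 | (pa)))

    unsigned long read_csr(void)
    {
        /* *KSEG0(0x1F000000) was the bug: on a quiet system it could
           hit a stale cache line instead of the device. */
        return *KSEG1(0x1F000000);   /* always a real bus cycle */
    }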

Martin Golding

unread,
Aug 27, 1991, 5:44:10 PM8/27/91
to
lind...@cs.cmu.edu (Donald Lindsay) writes:

>Often, the chips (and the documentation) are early samples and have
>numerous failings. Learning intimate details can be a struggle, with
>months where folklore substitutes for knowledge. If anyone has war
>stories that don't violate nondisclosure, please, post.

One set of Motorola 68000's had an interesting and exciting flaw...
At elevated but less than maximum spec'd temperatures, the extended add
instruction (something like ADDX.B -(Rx),-(Ry)) wouldn't bother to carry,
for very specific bit patterns. Our computer case was not exactly the last
word in cooling, and our customers don't believe in coddling their computers
(LOTS of horror stories about that) so their accounting runs would sometimes
produce funny answers.

Imagine trying to _debug_ something like this. Imagine spending days with
the engineer and probes, and putting the computer in a box and checking
temperatures, and finally finding the problem. Imagine contacting the
manufacturer and hearing "Oh. Yeah, we knew about that". Ooops.

Of course, when we put a periodic test in the kernel to detect the failure
and shut down, our own support people wanted it removed so they wouldn't
have to do a service call ...

I kind of liked the idea of having a built-in overheat test in the CPU,
but I got overruled and we eventually swapped out all the chips.

BTW, Motorola was more than cooperative once we found the problem. Our
association with them, over all, has been long, pleasant, and profitable
(for us, anyway. I don't get to see their books :-).


Martin Golding | sync, sync, sync, sank ... sunk:
DoD #0236 | He who steals my code steals trash.
HOG #still pending | (Twas mine, tis his, and will be slave to thousands.)
A poor old decrepit Pick programmer. Sympathize at:
mcspdx!adpplz!martin or mar...@adpplz.uucp

eric smith

unread,
Aug 27, 1991, 8:11:44 PM8/27/91
to

pa...@taniwha.UUCP (Paul Campbell) writes:

>In article <13...@scolex.sco.COM> er...@sco.COM (eric smith) writes:
>>
>>being generated. It turned out that the compiler translated the
>>statement "sio->reg = 0" into something like "andb reg,0" where the

> I bet it was more like 'clrb (An)'; the early 68k
> clr instruction actually did a read and then a write
> which blew away lots of devices like this (thus showing
> the dangers of using microcode subroutines without
> thinking about them - or deciding to save a few bytes of
> microcode ROM). The usual code to fix this looked like:

> zero = 0;
> sio->reg = zero;

Yes, you're exactly correct, it was a clrb instruction (not andb or
xorb). Ok, I know, but I did say it was a long time ago, and anyway
I'm just a software guy whose brain is slowly fossilizing. And you
are exactly correct about the fix.

-----
Eric Smith
er...@sco.com
er...@infoserv.com
CI$: 70262,3610

Blaine Gaither

unread,
Aug 28, 1991, 12:25:16 AM8/28/91
to
>Do any chip designers make a (defined to be correct) simulator
>available to OS groups?

The earliest example I am familiar with is the simulator for the
Burroughs (RIP) B2500, which was developed on the B5500 while the B2500
hardware was being developed. The OS was booted on the simulator, and
the compilers were tested there as well. When the hardware was ready,
it was a very short bring-up (days, not weeks).

This was circa 1970. There may have been earlier examples.
--

Blaine Gaither
Amdahl Corporation
143 No. 2nd East St., Rexburg, Idaho 83440-1619
UUCP:{ames,decwrl,sun,uunet}!amdahl!tetons!bdg (208) 356-8915
INTERNET: b...@idaho.amdahl.com
