
Interesting thread in comp.lang.eiffel


Bertran...@eiffel.com
Jun 26, 2000, 3:00:00 AM

The point made in the IEEE Computer article by Jean-Marc Jezequel
and myself (see
http://www.eiffel.com/doc/manuals/technology/contract/ariane)
is not that using Eiffel would by itself have avoided the bug.
Such a statement wouldn't make much sense (it's obvious that
in any general-purpose programming language you can mess things up).

But we do make the point that in the Design by Contract
culture pervasive in Eiffel, and directly supported
by the language (including its inheritance mechanism,
exception handling, documentation tools, debugging tools...)
the standard and almost obligatory way of reusing a module is
through its contract, including its conditions of use (precondition).
The normal procedure is to document a routine through its
contract and, when reusing it, to check that every call satisfies
the contract. In a software development culture that has assimilated
and integrated the principles of Design by Contract
the first task of quality assurance is to check that
every call takes the contracts into account. In the Ariane case
this wouldn't even have required a test; just static inspection.
Any organization that used Eiffel and had even a minimal
quality assurance process adapted to the Eiffel technology
would have done that, even if the quality assurance process
were otherwise imperfect. (That's the important point:
obviously, if the QA process is perfect, assuming that's
possible, it will catch errors. The important question is
how a serious but possibly imperfect process, routinely
associated with a certain technology, will perform.)

So the gist of the argument in our article is: a standard
Eiffel practice would most likely have caught the problem.
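To make the argument concrete: the reused Ariane-4 routine converted a 64-bit floating-point value to a 16-bit signed integer, and the unstated condition of use was that the value fit. A minimal sketch of such a contract, in Python rather than Eiffel (the identifiers and the in-range value are illustrative; only the 64-bit-to-16-bit conversion is from the published inquiry report):

```python
# Illustrative sketch (Python, not Eiffel): the Ariane 501 failure came
# from converting a 64-bit float to a 16-bit signed integer. Written as
# an explicit precondition, the condition of use is visible to every
# client, and a static inspection of calls can check it.

INT16_MIN, INT16_MAX = -32768, 32767

def to_int16(value):
    # Precondition (the routine's contract): the value must fit in
    # a 16-bit signed integer.
    assert INT16_MIN <= value <= INT16_MAX, \
        "precondition violated: value does not fit in 16 bits"
    return int(value)

# A caller reusing this routine must establish the precondition first:
horizontal_bias = 1234.5          # hypothetical in-range reading
print(to_int16(horizontal_bias))  # prints 1234
```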

What's interesting is not a claim of the form "If they had done
X they would have avoided it". This is true of many `X' and,
after the fact, it's easy by definition to come up with an `X'
since we know what the error was. What's interesting is that
the actual X mentioned above -- checking that contracts are
satisfied -- is an integral part of the Eiffel method, routinely
applied, not an ex-post-facto suggestion based on anything
Ariane-specific.

It's also quite easy and cheap to do. In contrast, the recommendations
of the commission that examined the crash and published the report
involved "more of the same" quality assurance techniques -- more
management, more tests -- leading to ever higher costs, without
the guarantee that systemic problems such as not requiring
and enforcing explicit contracts won't again lead to similar
results, however many millions of dollars are spent on better
management practices and more extensive testing.

Many of the same observations, by the way, apply to the September
1999 loss of the Mars Climate Orbiter (see
http://mars.jpl.nasa.gov/msp98/news/mco990930.html).

So, no, use of Eiffel doesn't guarantee you won't make errors,
but it certainly helps avoid a large class of common and
damaging errors.

--
Bertrand Meyer
Interactive Software Engineering
ISE Building, 2nd Floor, 270 Storke Road Goleta CA 93117
Phone 805-685-1006, Fax 805-685-6869, http://eiffel.com


Sent via Deja.com http://www.deja.com/
Before you buy.

David Gillon
Jun 26, 2000, 3:00:00 AM

Bertran...@eiffel.com wrote:
> But we do make the point that in the Design by Contract
> culture pervasive in Eiffel, and directly supported
> by the language (including its inheritance mechanism,
> exception handling, documentation tools, debugging tools...)
> the standard and almost obligatory way of reusing a module is
> through its contract, including its conditions of use (precondition).

But in the Ariane 501 case the reuse was at the LRU level, not software.
They treated the Inertial Reference System as a COTS plug-in, assuming
the Ariane 4 design requirements (contract?) remained valid and did not
need to be re-evaluated. The systems analysis failure which allowed this
was outside of the IRS development, meaning that Eiffel would only make
a difference if applied across the entire Ariane 5 system development.
Not knowing Eiffel, I'm unsure how well its DBC facilities would cope
with requirements flow down across completely different systems -- if
the interface requirements specification must be updated manually in
such cases then the historical failure can still occur in precisely the
same manner.

> The normal procedure is to document a routine through its
> contract and, when reusing it, to check that every call satisfies
> the contract.

But how well does this scale to cover re-use of independently developed
LRUs?

> In a software development culture

It's probably a mistake to think of a programme on the scale of Ariane 5
as having a single engineering culture. I'm not familiar enough with the
programme to know whether the IRS was sourced internally to Arianespace
or from an outside subcontractor, but even for the internal case the
development of IRS interface requirements, IRS requirements and IRS
software might well be separated across different sites, even countries.
We do know that the initial development for Ariane 4 and the
reassessment for Ariane 5 are separated by a space of years, so the
decision to use the system unchanged likely involved a largely different
set of engineers.

> the first task of quality assurance is to check that
> every call takes the contracts into account. In the Ariane case
> this wouldn't even have required a test; just static inspection.

I'm not certain this would have made a difference. The error seems to
have been an assumption that, in Eiffel terms, the contract was
unchanged, and so did not need to be reassessed. They likely had
processes that might have detected the error if invoked, but time and
cost pressure caused them to make the assumption that they were not
needed in this case.



> So the gist of the argument in our article is: a standard
> Eiffel practice would most likely have caught the problem.

If it had been applied. But even if such a process were in place, would
it have been applied in this case? Or would the decision have been taken
that it didn't need to be applied? I suspect the error and its avoidance
are fundamentally a human factors issue, rather than an engineering one.

> Many of the same observations, by the way, apply to the September
> 1999 loss of the Mars Climate Orbiter (see
> http://mars.jpl.nasa.gov/msp98/news/mco990930.html).

Including that this is apparently a failure in the interface
requirements specification between two separately developed processes,
not a simple invocation of one routine from another.

--

David Gillon

Gisle Sælensminde
Jun 26, 2000, 3:00:00 AM

In article <8j67p8$afd$1...@nnrp1.deja.com>, Bertran...@eiffel.com wrote:
>The point made in the IEEE Computer article by Jean-Marc Jezequel
>and myself (see
>http://www.eiffel.com/doc/manuals/technology/contract/ariane)
>is not that using Eiffel would by itself have avoided the bug.
>Such a statement wouldn't make much sense (it's obvious that
>in any general-purpose programming language you can mess things up).
>
>But we do make the point that in the Design by Contract
>culture pervasive in Eiffel, and directly supported
>by the language (including its inheritance mechanism,
>exception handling, documentation tools, debugging tools...)
>the standard and almost obligatory way of reusing a module is
>through its contract, including its conditions of use (precondition).
>The normal procedure is to document a routine through its
>contract and, when reusing it, to check that every call satisfies
>the contract. In a software development culture that has assimilated
>and integrated the principles of Design by Contract
>the first task of quality assurance is to check that
>every call takes the contracts into account. In the Ariane case
>this wouldn't even have required a test; just static inspection.
>Any organization that used Eiffel and had even a minimal
>quality assurance process adapted to the Eiffel technology
>would have done that, even if the quality assurance process
>were otherwise imperfect. (That's the important point:
>obviously, if the QA process is perfect, assuming that's
>possible, it will catch errors. The important question is
>how a serious but possibly imperfect process, routinely
>associated with a certain technology, will perform.)
>
>So the gist of the argument in our article is: a standard
>Eiffel practice would most likely have caught the problem.

I'm sure that if the Ariane 5 software had been written in
a different language from Ada, someone would have written articles
saying that with "standard Ada practice" the bug would have been
avoided. The problem was that a subsystem was moved from Ariane 4
to Ariane 5 without realistic testing. The Ariane 4 software
developers deliberately omitted a test, since analysis concluded
that this could never happen, and for Ariane 4 this was indeed
the case. Since this is poor management more than anything else,
I really wonder how a language can help against poor management.

As I understand it, you don't say that Eiffel in itself could have
avoided the problem, but that the design by contract mindset could have
avoided it. Design by contract is a nice concept which I would
like to see spread, but I don't think that it would have helped.
The problem was the decision to move a component from
Ariane 4 to Ariane 5 without sufficient testing. This is bad
practice anyway, and no language or design methodology can avoid
bad management.


>What's interesting is not a claim of the form "If they had done
>X they would have avoided it". This is true of many `X' and,
>after the fact, it's easy by definition to come up with an `X'
>since we know what the error was. What's interesting is that
>the actual X mentioned above -- checking that contracts are
>satisfied -- is an integral part of the Eiffel method, routinely
>applied, not an ex-post-facto suggestion based on anything
>Ariane-specific.
>
>It's also quite easy and cheap to do. In contrast, the recommendations
>of the commission that examined the crash and published the report
>involved "more of the same" quality assurance techniques -- more
>management, more tests -- leading to ever higher costs, without
>the guarantee that systemic problems such as not requiring
>and enforcing explicit contracts won't again lead to similar
>results, however many millions of dollars are spent on better
>management practices and more extensive testing.

I cannot see how design by contract could have avoided the failure. The
device worked as designed; much of the problem was that it was moved
to an environment it was not designed for, without testing and analysis.

>Many of the same observations, by the way, apply to the September
>1999 loss of the Mars Climate Orbiter (see
>http://mars.jpl.nasa.gov/msp98/news/mco990930.html).

The problem was caused by someone feeding orbital data into the Mars
Climate Orbiter using feet as the unit, while the program expected meters.
NASA uses the SI system, but Americans seem to be more used to
the old English units, so this is largely a cultural problem. Probably
the software developers hadn't thought of this even in their
wildest dreams, so chances are that they wouldn't have put anything
like this into the contract if they had used Eiffel and the design by
contract methodology. How could we know in hindsight?

>So, no, use of Eiffel doesn't guarantee you won't make errors,
>but it certainly helps avoid a large class of common and
>damaging errors.

Design by contract can help discover bugs and problems during
testing and ensure that the implementation is correct with respect to
the design, but it doesn't help if realistic testing is not performed,
nor can it improve bad design decisions. If you look back on comp.lang.ada,
you will in fact see that many make the claim that Ada and its
mindset will reduce such errors. A good language design and design
methodology will help in many cases, but it cannot replace common sense.

--
Gisle Sælensminde ( gi...@ii.uib.no )


Tarjei T. Jensen
Jun 26, 2000, 3:00:00 AM

Bertran...@eiffel.com wrote

>But we do make the point that in the Design by Contract
>culture pervasive in Eiffel, and directly supported
>by the language (including its inheritance mechanism,
>exception handling, documentation tools, debugging tools...)
>the standard and almost obligatory way of reusing a module is
>through its contract, including its conditions of use (precondition).


You're skating on thin ice here. There is no reason to believe that Ada
programmers are less conscientious than Eiffel programmers. Quite the contrary.
Ada has a strong engineering culture.

Besides, this was running on 1750 hardware. Was there any Eiffel compiler
available at the time for the CPUs used? Are there any Eiffel compilers
available for space-hardened CPUs?


Greetings,


Al Christians
Jun 26, 2000, 3:00:00 AM

"Tarjei T. Jensen" wrote:
>
> Bertran...@eiffel.com wrote
> >But we do make the point that in the Design by Contract
> >culture pervasive in Eiffel, and directly supported
> >by the language (including its inheritance mechanism,
> >exception handling, documentation tools, debugging tools...)
> >the standard and almost obligatory way of reusing a module is
> >through its contract, including its conditions of use (precondition).
>
> You're skating on thin ice here.

Didn't the managers decide to re-use the code without reviewing it?
I'd think that if you don't look at the code, it doesn't matter much
what language it isn't written in.

A while back I downloaded an Eiffel demo from B. Meyer's company, an
example of the results the programming culture he espouses. It was
a cute demo that ran under Windows, presented some GUI elements, and
moved them around on the screen in response to mouse movements. About
three mouse clicks later, Windows was dead. Meyer is a superb
pontificator, but all his design by contract cerebration couldn't
keep his own demo, his own firm's best foot forward, flying any
longer than the Ariane flew. Perhaps that is because real OS's,
eg Windows, and real rockets, eg Ariane, are not bound by contracts.


Al

bertran...@my-deja.com
Jun 26, 2000, 3:00:00 AM

"Tarjei T. Jensen" <tarjei...@kvaerner.com> wrote:

>
> There is no reason to believe that Ada
> programmers are less conscientious than Eiffel programmers.

Of course not. Who claimed that?

> Quite the contrary.

The contrary? Evidence?

> Ada has a strong engineering culture.

Very true. But that culture doesn't include
the discipline of Design by Contract.

bertran...@my-deja.com
Jun 26, 2000, 3:00:00 AM

Al Christians <ach...@easystreet.com> wrote:

> Didn't the managers decide to re-use the code without reviewing it?
> I'd think that if you don't look at the code, it doesn't matter much
> what language it isn't written in.

You are missing the point. It's not a matter of reviewing
the code; it's a matter of not reusing code without a specification.

The code of a module needs to be reviewed by the developers of the
module in question. For users ("clients") of the module, what matters
is the specification -- the contract. You don't reuse without
a contract.

> A while back I downloaded an Eiffel demo from B. Meyer's company.
> [...] About three mouse clicks later, Windows was dead.

Fair enough. We've released software with bugs before.
We're learning, like everyone else, and a GUI demo is not
a mission-critical system developed with the kind of attention
such systems deserve. In addition one would need to know the
details and the context. I don't know what that demo was, but
obviously it worked elsewhere (otherwise we wouldn't have released
it), so it's impossible to know what the problem was and whose
"fault" was involved (the demo, the OS, the installation
etc.) Assuming it was indeed a bug in ISE's software --
in the absence of precise information we have by default
to take responsibility for it -- and you want to dismiss
a whole methodology on the basis of that example, no one
can really criticize you.

There are people building large, complex systems, and there
are people teaching how to build systems. Usually these two
communities are pretty much disjoint; the former produce
software, the latter produce papers and books. In our (ISE's) case
we do both: we developed and teach techniques for building
better software, and we also sell a sophisticated development
environment (ISE Eiffel), with thousands of library classes,
interfaces to all kinds of technologies (COM, the Windows API,
SQL, X Windows, GTK, C++, Java, NAG, CORBA etc. etc.) and
versions for many platforms (Windows, Linux, many Unixes, VMS).
This inevitably exposes us to the kind of criticism expressed
in Mr. Christians's message whenever we mess up. So be it.
We are not perfect, but we do as we say (i.e. we use our
own technology), and the result is for everyone to download
and see: for worse, but also (as I hope you'll realize if
you explore the environment further) for better. And it
keeps us honest: a reader can't crash a slightly flawed paper,
but a user can crash a slightly buggy demo program (so that,
among other things, the bug gets fixed).

Al Christians
Jun 26, 2000, 3:00:00 AM

bertran...@my-deja.com wrote:
>
> you want to dismiss a whole methodology on the basis of that example,
> no one can really criticize you.

No. But one actual example of what actually happened when I actually
ran code produced your way ought to carry more weight than one
hypothetical example you present of what you say would have happened if
only someone else had produced code your way. Or am I overly obsessed
with reality?

Has any rocket or other flying vehicle or projectile ever actually flown
with real-time Eiffel software on board keeping it in the air and on
course?


Al

Bertrand Meyer
Jun 26, 2000, 3:00:00 AM

David Gillon wrote:
>
> [...]
> But in the Ariane 501 case the reuse was at the LRU level, not software.
> They treated the Inertial Reference System as a COTS plug-in, assuming
> the Ariane 4 design requirements (contract?) remained valid and did not
> need to be re-evaluated. [...]

> Not knowing Eiffel, I'm unsure how well its DBC facilities would cope
> with requirements flow down across completely different systems -- if
> the interface requirements specification must be updated manually in
> such cases then the historical failure can still occur in precisely the
> same manner.

If you do look into Eiffel and Design by Contract you will see that
it is the client's responsibility to ensure a precondition before the
call. That the operation has been built with another technology
doesn't matter; we deal with "external" features all the time
("external" is a keyword of Eiffel) and, when encapsulating them,
equip them with contracts exactly as if they were internal.

[BM] > The normal procedure is to document a routine through its
> contract and, when reusing it, to check that every call satisfies
> the contract.


>
> But how well does this scale to cover re-use of independently developed
> LRUs?

That's exactly the point. In an environment fostering Design by Contract
the obligatory path to reuse is through contracts. If you don't have
a contract, you examine the reused element and equip it with a contract.
The intellectual discipline leads you to ask "what are the conditions
of use of this thing?". You don't reuse it until you have the answer to
that question. Of course the answer might be wrong. But the key step
is to ask the question. In the case under discussion this would most
likely have prompted the realization that the operation had specific
constraints which the caller had to meet.
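The discipline described here can be sketched outside Eiffel as well. A hedged Python illustration (all names and the velocity figure are hypothetical) of equipping a reused element with a contract:

```python
# Hypothetical sketch: wrapping a reused routine with an explicit
# contract. reused_sensor stands in for an element reused from an
# earlier project; the wrapper records its conditions of use.

MAX_VELOCITY = 3600.0   # condition of use discovered when asking
                        # "what are the conditions of use of this thing?"

def reused_sensor(velocity):
    # Stand-in for the reused element; it silently assumes the old
    # flight envelope.
    return 0.98 * velocity

def measured_velocity(velocity):
    # Precondition: the client must satisfy the contract before calling.
    assert 0.0 <= velocity <= MAX_VELOCITY, \
        "call violates the reused element's conditions of use"
    return reused_sensor(velocity)
```

A caller that cannot establish the precondition is forced to confront the mismatch before reuse, which is exactly the step skipped in the Ariane case.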

-- Bertrand Meyer

Stanley R. Allen
Jun 26, 2000, 3:00:00 AM

bertran...@my-deja.com wrote:
>
> "Tarjei T. Jensen" <tarjei...@kvaerner.com> wrote:
>
> > Ada has a strong engineering culture.
>
> Very true. But that culture doesn't include
> the discipline of Design by Contract.
>

And the Eiffel culture does not include the discipline of
protected objects.

This is a monumentally stupid discussion. We all know that
Eiffel has features X and Y that Ada does not have, and vice
versa.

The problem of software reliability will never be solved by
any language or feature of any language, nor will it ever be
solved by 'the culture' surrounding a language or technology.

When software failures are reported in the news, be they in
aerospace or telecommunications or banking systems, the ONLY
RATIONAL NON-DECEPTIVE CLAIM that we as supporters of Ada or
Eiffel or DBC can make is that our technologies COULD have
REDUCED THE PROBABILITY of the failure, and give reasons why.


--
Stanley Allen
mailto:Stanley_R...@raytheon.com

Joachim Durchholz
Jun 26, 2000, 3:00:00 AM

David Gillon <david....@gecm.com> wrote:
>
> Not knowing Eiffel, I'm unsure how well its DBC facilities would cope
> with requirements flow down across completely different systems -- if
> the interface requirements specification must be updated manually in
> such cases then the historical failure can still occur in precisely the
> same manner.

Actually this is possible. DbC modelling is indeed applicable to
hardware, software, and even business processes, and across levels of
abstraction.

For the Ariane-5 example, you can do this (just showing the gross
outline and omitting *lots* of detail, i.e. ignore scaling and precision
issues in real numbers etc.):


deferred class ROCKET
   -- A model of the physical Ariane rocket.
   -- This is just a specification (the class is "deferred",
   -- which is Eiffel terminology for an abstract class).

feature

   x, y, z: REAL is deferred
         -- Position in space.
         -- (In reality we'd probably use a more suitable
         -- coordinate system.)
         -- (We'd probably even use a POSITION_VECTOR class with
         -- suitably defined conversions etc. to keep ROCKET at
         -- a manageable size, and to get a useful level of
         -- abstraction.)
      end

   dx, dy, dz: REAL is deferred
         -- Current speed.
      end

   maximum_velocity: REAL is deferred
         -- Highest velocity this rocket is designed to reach
         -- (referenced by the invariant below).
      end

   velocity: REAL is deferred
         -- This might be a function or a value. At the interface
         -- level, Eiffel doesn't care about the difference.
      ensure -- A postcondition.
         Result = sqrt (dx * dx + dy * dy + dz * dz)
      end

invariant
   velocity <= maximum_velocity

end


The purpose of this class isn't executable code (though one could write
a concrete subclass for simulation purposes). Rather, it's just a
specification.

The IRS could be programmed like this:


class INERTIAL_REFERENCE_SYSTEM

feature {NONE}
   -- {NONE} means the same as "protected" in C++.
   -- (One could do a "selective export" by giving
   -- a list of class names instead of NONE, for
   -- more granular control over who gets to see
   -- a group of features.)

   reality: ROCKET

feature -- Public features: no {NONE} restriction here

   maximum_velocity: REAL is 3600.0
         -- Maximum velocity up to which the IRS is guaranteed
         -- to work.

   precision: REAL is 0.5
         -- Absolute precision of the sensor.

   measured_x: REAL is
         -- Sensor output.
      require
         reality.velocity <= maximum_velocity
      do
         -- (The actual hardware measurement is outside this sketch.)
      ensure
         (reality.x - Result).abs <= precision
      end

   -- Similarly for measured_y and measured_z.
   -- Again, we'd probably return all three values
   -- as an object of some VECTOR class. In this case,
   -- we'd probably not use REAL either, but some
   -- scaled integer type.

end


Now the programmer who uses an object of type INERTIAL_REFERENCE_SYSTEM
and asks for its measured_x will automatically see the precondition. At
that point, he'll likely check that the precondition matches that of the
ROCKET that he has (either as an abstract specification, or even as a
software simulation). Either way, he'll want to verify that the
precondition of 'measured_x' matches the invariant of ROCKET -- and when
he's using an ARIANE_5 object, he'll see that its 'maximum_velocity' is
indeed above the 'maximum_velocity' value of INERTIAL_REFERENCE_SYSTEM.

(This doesn't mean that DbC would have actually caught the problem. The
concrete situation was that no part of the Ariane-5 software was
actually accessing the IRS; depending on the details of modelling, this
might have been noted or not. The best design method cannot make up for
design mistakes; however, it can make mistakes more obvious.)

> > The normal procedure is to document a routine through its
> > contract and, when reusing it, to check that every call satisfies
> > the contract.
>
> But how well does this scale to cover re-use of independently
> developed LRUs?

The IRS is hardware - but you can use the INERTIAL_REFERENCE_SYSTEM
class as outlined above as a replacement for a Detail Design Document of
the hardware. The team building the main control software of the Ariane
can make a ROCKET_WITH_SENSORS class that has a ROCKET and a deferred
version of INERTIAL_REFERENCE_SYSTEM; this class is, again, a
specification of the Ariane that can be used as a design background,
just as the programmers of INERTIAL_REFERENCE_SYSTEM used ROCKET as
design background.
The power of this method is that the original specifications from ROCKET
will automatically become part of the specification of the main control
software. No expensive and error-prone transformation of design
specifications is required. (This benefit is available only insofar
as all levels of the system are designed using the same specification
language; in the worst case, the written specifications of external
parts would need to be rewritten as specification
classes. After that, these classes make a good superclass for simulation
software, and can serve as documentation; if it's done in Eiffel, the
Eiffel tools will automatically trace any high-level software
specification back to its origins in external-parts specifications.
This should give excellent traceability with a minimum of overhead on
the side of the designer or programmer.)

> > In a software development culture
> > the first task of quality assurance is to check that
> > every call takes the contracts into account. In the Ariane case
> > this wouldn't even have required a test; just static inspection.

> I'm not certain this would have made a difference. The error seems to
> have been an assumption that, in Eiffel terms, the contract was
> unchanged, and so did not need to be reassessed. They likely had
> processes that might have detected the error if invoked, but time and
> cost pressure caused them to make the assumption that they were not
> needed in this case.

Agreed. However, if contracts are expressed in a uniform way and are
accessible through a powerful class browser (such as those available for
all commercial Eiffel compilers), checking is much easier, so it's more
likely that this test would have been done statically.

Given the concrete circumstances of the Ariane-5 crash, I'm not sure
that it would have made a difference. As you say, it's an easy thing to
overlook, even if everything is fully specified - in fact everything was
fully specified, the inconsistency just never became obvious.
Yet I still think there is at least some effect. However, the real
advantage of DbC in a safety-critical section is not that it would have
made the Ariane crash less likely by whatever margin, it's that it makes
keeping the code consistent with the specifications much easier. The
specification is available in the form of assertions, it's even possible
to write simulating subclasses (of ROCKET for testing I_R_S, of I_R_S to
test the main control software) that actually test them, so it's easy to
write a quick-and-dirty test that exercises the software against changed
hardware requirements.
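The check the client programmer performs can also be sketched in Python, as a loose analogue of the Eiffel classes above. The 3600.0 figure matches the INERTIAL_REFERENCE_SYSTEM sketch; the per-rocket velocity numbers are invented for illustration:

```python
# Loose Python analogue of the Eiffel sketch above: comparing a rocket
# model's invariant against the IRS precondition. Per-rocket numbers
# are illustrative only.

IRS_MAX_VELOCITY = 3600.0   # velocity up to which the IRS is specified

class Rocket:                  # analogue of the deferred ROCKET class
    maximum_velocity = None    # "deferred": concrete rockets supply it

class Ariane4(Rocket):
    maximum_velocity = 3000.0  # illustrative

class Ariane5(Rocket):
    maximum_velocity = 5000.0  # illustrative

def irs_precondition_satisfiable(rocket):
    # The IRS precondition can always be met only if the rocket never
    # exceeds the velocity the IRS is specified for.
    return rocket.maximum_velocity <= IRS_MAX_VELOCITY

print(irs_precondition_satisfiable(Ariane4))  # True: reuse was safe
print(irs_precondition_satisfiable(Ariane5))  # False: mismatch surfaces
```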

> Including that this is apparently a failure in the interface
> requirements specification between two separately developed processes,
> not a simple invocation of one routine from another.

I hope I have demonstrated that DbC is applicable even in this case.

Finally, let me apply a disclaimer: this is how I *think* that DbC
could be applied to safety-critical software and hardware interfaces. I
have seen safety-critical programming from a short distance, so I hope
the ideas developed in this post are useful - but if I were charged with
introducing DbC into a safety-critical project, I'd do a few experiments
first.

Regards,
Joachim
--
This is not an official statement from my employer or from NICE.
Reply-to address changed to discourage unsolicited advertisements.

Joachim Durchholz
Jun 26, 2000, 3:00:00 AM

Tarjei T. Jensen <tarjei...@kvaerner.com> wrote:
>
> Besides, this was running on 1750 hardware. Was there any
> Eiffel compiler available at the time for the CPUs used?
> Are there any Eiffel compilers available for space-hardened CPUs?

Did it have a C compiler? If yes, it also had an Eiffel compiler. Most
Eiffel compilers in existence can emit C code.

Keith Thompson
Jun 26, 2000, 3:00:00 AM

gi...@spurv.ii.uib.no (Gisle Sælensminde) writes:
[...]

> >Many of the same observations, by the way, apply to the September
> >1999 loss of the Mars Climate Orbiter (see
> >http://mars.jpl.nasa.gov/msp98/news/mco990930.html).
>
> The problem was caused by someone feeding orbital data into the Mars
> Climate Orbiter using feet as the unit, while the program expected meters.

Actually, I think it was pounds vs. Newtons.

> NASA uses the SI system, but Americans seem to be more used to
> the old English units, so this is largely a cultural problem. Probably
> the software developers hadn't thought of this even in their
> wildest dreams, so chances are that they wouldn't have put anything
> like this into the contract if they had used Eiffel and the design by
> contract methodology. How could we know in hindsight?

As I recall, NASA uses SI, but the contractor (I *think* it was
Lockheed Martin) uses English units internally. They had procedures
for reconciling the units; they just missed it in this one case.

I've heard that someone on the project had noticed a problem some time
before the spacecraft was lost (the number in question was a thrust,
and this was during a phase when only minor course corrections were
being done), but didn't have time to follow up on it.

A more recent report is at
<http://mars.jpl.nasa.gov/msp98/news/mco991110.html>.
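A units mismatch of this kind is exactly what an interface contract can make explicit. A hedged sketch in Python (the names are illustrative of the idea, not of the actual MCO software; the conversion factor is the standard lbf-to-newton one):

```python
# Illustrative sketch (not the actual MCO software): making the unit
# part of the interface, so a pounds-force vs. newtons mix-up fails
# loudly instead of silently corrupting trajectory data.

LBF_TO_NEWTONS = 4.44822  # standard conversion factor

class Force:
    def __init__(self, newtons):
        self.newtons = newtons

def record_thrust(force):
    # Precondition: callers must supply a Force, not a bare number,
    # so the unit conversion cannot be skipped unnoticed.
    assert isinstance(force, Force), "contract: thrust must carry units"
    return force.newtons

thrust_lbf = 25.5
print(record_thrust(Force(thrust_lbf * LBF_TO_NEWTONS)))  # converted first
```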

--
Keith Thompson (The_Other_Keith) k...@cts.com <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://www.sdsc.edu/~kst>
Welcome to the last year of the 20th century.

Keith Thompson
Jun 26, 2000, 3:00:00 AM

"Joachim Durchholz" <joachim durc...@halstenbach.com> writes:
[...]

> Did it have a C compiler? If yes, it also had an Eiffel compiler. Most
> Eiffel compilers in existence can emit C code.

Do Eiffel compilers generate *portable* C code? I.e., does porting an
Eiffel compiler consist merely of copying the generated C code from
one platform to another and recompiling it? If so, I'm impressed --
especially if this works for the 1750 (which, among other oddities,
has 16-bit storage units).

Howard W. LUDWIG
Jun 26, 2000, 3:00:00 AM

Joachim Durchholz wrote:

> DbC modelling is indeed applicable to hardware, software, and even
> business processes, and across levels of abstraction.
>
> For the Ariane-5 example, you can do this (just showing the gross outline
> and omitting *lots* of detail, i.e. ignore scaling and precision issues in
> real numbers etc.):

[Although scaling seems to have been a key issue in the case of
Ariane 5.]


> deferred class ROCKET
> -- A model of the physical Ariane rocket.
> -- This is just a specification (the class is "deferred",
> -- which is Eiffel terminology for an abstract class).
>
> feature
>
> x, y, z: REAL is deferred
> -- Position in space.
> -- (In reality we'd probably use a more suitable
> -- coordinate system.)
> -- (We'd probably even use a POSITION_VECTOR class with
> -- suitably defined conversions etc. to keep ROCKET at
> -- a manageable size, and to get a useful level of
> -- abstraction.)
> end
>
> dx, dy, dz: REAL is deferred
> -- Current speed.
> end
>
> velocity: REAL is deferred
> -- This might be a function or a value. At the interface
> -- level, Eiffel doesn't care about the difference.
> ensure -- A postcondition.
> Result = sqrt (dx * dx + dy * dy + dz * dz)
> end

I am inadequately knowledgeable of Design by Contract and
Eiffel, so somebody please help me out of my confusion. I
would normally expect some calculations (for this type of
application domain anyway) with Result being computed in a
"do" section between the preconditions ("require" section)
and the post-conditions ("ensure" section). What is ensured
above looks just like what I would expect to see in the "do"
section. If you duplicate the "do" computations (which I
grant are not listed in the code snippet above, but I would
expect to see there in practice) in the "ensure" section, of
course they are going to match and the postcondition does
not really check anything. What am I missing here? What do
you do to put truly meaningful checks in postconditions as
distinct from your nominal calculations in the "do" section?


> invariant
> velocity <= maximum_velocity
> end
>
> The purpose of this class isn't executable code (though one could write a
> concrete subclass for simulation purposes). Rather, it's just a
> specification.
>
> The IRS could be programmed like this:
>
> class INERTIAL_REFERENCE_SYSTEM
>
> feature {NONE}
> -- {NONE} means the same as "protected" in C++.
> -- (One could do a "selective export" by giving
> -- a list of class names instead of NONE, for
> -- a more granular control over who gets to see
> -- a group of features.)
>
> reality: ROCKET
>
> feature -- Public features: no {NONE} restriction here
>
> maximum_velocity: REAL is 3600.0
> -- Maximum velocity up to which the IRS is guaranteed
> -- to work.
>
> precision: REAL is 0.5
> -- Absolute precision of sensor.

The problem with the Mars Climate Orbiter (which Bertrand Meyer
referenced in an earlier posting in this thread) was a mismatch
of measurement units between two separate computer programs.
A valuable mechanism for keeping such units straight and finding
mismatches is an interface specification which states clearly
which units shall be used (what 1.0 means for floating point and
what 1 [LSB] means for scaled integer) -- no functional
equivalence accepted -- and then to have the units explicitly
included as part of the executed software (that is, not just in
comments). Code like the above, with "3600.0" and "0.5" but no
units, seems to continue the tradition of careless, shortcut
software, which will not alleviate problems such as the one with
MCO. No language guarantees an adequate specification of
contract.
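To make the "units as part of the executed software" idea concrete, here is one way to do it even in plain C (an illustrative sketch of mine, not code from either project; the type and function names are invented): wrap raw numbers in unit-tagged types, so that the only path between units is an explicit, reviewable conversion and a mix-up becomes a compile-time error.

```c
#include <assert.h>

/* Hypothetical unit-tagged types: a bare double carries no unit,
   but one struct per unit makes mixing them a type error. */
typedef struct { double value; } newtons;
typedef struct { double value; } pound_force;

/* The only way from pound-force to newtons is this explicit
   conversion (1 lbf = 4.4482216152605 N, exact by definition). */
static newtons newtons_from_lbf(pound_force f)
{
    newtons n;
    n.value = f.value * 4.4482216152605;
    return n;
}

/* A thruster interface that accepts only newtons; passing a
   pound_force directly is rejected by the compiler. */
static double commanded_thrust(newtons f)
{
    return f.value;
}
```

A call such as commanded_thrust(f) with a pound_force argument then fails to compile, instead of flying with a silent factor-of-4.45 error of the kind that doomed MCO.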

> would need their specifications rewritten as specification classes. After

> ideas developed in this post are useful - but if I were charged with
> the [...] I'd do a few experiments first.

> Regards,
> Joachim

Another part of the problem was synchronization of events.
It is clear the constraint would have been violated in any
case at higher altitudes. It is also clear that at very
low altitudes the constraint is not violated. The problem
was that the transition point occurred earlier for Ariane 5
than for Ariane 4. This seems to me to be a dynamic issue,
not a static issue, so that static checking would not work.
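For reference, the operation that actually failed on Ariane 501 was an unprotected conversion of a 64-bit floating-point horizontal-bias value into a 16-bit signed integer. The envelope assumption behind that conversion can at least be written down as an explicit precondition; sketched here in C with assert (illustrative only, not the actual SRI code, which was in Ada):

```c
#include <assert.h>

/* Precondition: the Ariane 4 analysis guaranteed that the
   horizontal bias fits in a signed 16-bit integer.  Stating the
   assumption as an executable check means a run (or simulation)
   of an Ariane 5 trajectory fails loudly at the first
   out-of-envelope value instead of producing garbage. */
static short bias_to_int16(double horizontal_bias)
{
    assert(horizontal_bias >= -32768.0 && horizontal_bias <= 32767.0);
    return (short)horizontal_bias;
}
```

Which supports the dynamic-versus-static point: the check only fires when an Ariane 5 profile is actually run through it; by itself it proves nothing statically.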

In terms of the culture and the thought process, the software
in question was regarded as serving no purpose after lift-
off and was, therefore, harmless; the fallacy of the logic
yielding that consequence became painfully apparent.
Inductive reasoning based on the success of every Ariane 4
launch and the reuse of the same hardware/software system
(the SRI) in Ariane 5 furthered the fallacious thinking.
But the conclusion was still, basically, there is no contract
to satisfy or violate--if software is doing nothing, then
there is nothing to assure. Therefore, the contract is
[vacuously] satisfied. Do we really think that most people,
even those steeped in DbC, would really pursue the contract
analysis further to verify up-front--by system analysis,
not after-the-fact system integration and verification--that
the velocity constraints would be or would not be met during
the time after lift-off that the software in question continues
to run, and that the velocity constraints are "don't cares"
after the software was scheduled to turn off? It is nice for
us to look in hindsight and examine what can we do to prevent
repeating this mistake. That is good, and we can put extra
or different steps in our process to make sure it will not
happen again in similar circumstances, and, had those steps
already been in place, would not have happened. But I think
it naive to say that some already-existing (in 1996)
design process or concept would almost certainly have caught
the problem. (System testing, which was already accepted
practice but was deliberately skipped in this case, was shown
after the fact to have been adequate.)
units of measurement but is supposedly an example of DbC does
not give me confidence that this approach will reliably find
critical errors for programs like MCO and Ariane 5. Please
don't get me wrong when I say this, because I see much value
in the DbC concept from what I have read about it, and I have
no doubt that it is a useful approach that will catch many
major flaws in a design--in some ways it seems to extend the
valuable concept of the peer review process by supporting more
automated checking, and by providing more substantive data
for human reviewers; however, to regard DbC as a cure-all
eliminating all substantive flaws from a specification or a
design is a delusion. The proverbial lack of a "silver bullet"
still seems to hold true.

Howard W. LUDWIG

David Starner

Jun 26, 2000
On Tue, 27 Jun 2000 00:56:24 +0200, Joachim Durchholz <joachimdo...@halstenbach.com> wrote:
>> especially if this works for the 1750 (which, among other oddities,
>> has 16-bit storage units).
>
>Well, I wouldn't expect that this is a problem. Of course, the proof of
>such a pudding is in the eating, so I won't make any bold claims until I
>have seen such a beast run. But these portability issues are largely
>solved in C, so Eiffel just relies on the C semantics.

They're solved in C? For what value of solved? Most C programmers just
don't worry about 16 bit systems. The GNU project (in an era of 16 bit
machines) decided that it wasn't worth their trouble to write for 16
bit machines. C99 has/will improve things some, but I wouldn't say the
issues are solved, even largely.
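As one concrete instance of the 16-bit problem (a generic C89 idiom, nothing GNU- or Eiffel-specific): on a target like the 1750, int may be only 16 bits wide, so code that needs 32 bits has to select a type from the guarantees <limits.h> actually provides rather than just assuming int is 32 bits:

```c
#include <assert.h>
#include <limits.h>

/* C89 only guarantees int >= 16 bits and long >= 32 bits, so
   code needing 32 bits must pick its type accordingly. */
#if INT_MAX >= 2147483647L
typedef int i32;     /* int is wide enough on this target */
#else
typedef long i32;    /* fall back to long (at least 32 bits) */
#endif
```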

--
David Starner - dstar...@aasaa.ofe.org
http/ftp: x8b4e53cd.dhcp.okstate.edu
"A dynamic character with an ability to survive certain death and
a questionable death scene leaving no corpse? Face it, we'll never
see her again." - Sluggy Freelance

Joachim Durchholz

Jun 27, 2000
Keith Thompson <k...@cts.com> wrote:
> "Joachim Durchholz" <joachim durc...@halstenbach.com> writes:
> [...]
> > Did it have a C compiler? If yes, it also had an Eiffel compiler.
Most
> > Eiffel compilers in existence can emit C code.
>
> Do Eiffel compilers generate *portable* C code? I.e., does porting an
> Eiffel compiler consist merely of copying the generated C code from
> one platform to another and recompiling it?

Largely. There are always library differences. Sometimes the C compiler
is stranger than the hardware that it was written for. But such problems
are rare.
The worst problems are usually getting all the tools (compiler, linker,
loader, whatever) called with the right options and in the right order.
For example, a port to gcc sort-of failed because the GNU tools were
unable to read .lib files in the Microsoft variant of the coff format.
(Linking the same libraries as DLLs would have worked fine but wasn't
considered an option initially.)

> If so, I'm impressed --

This is not really necessary. The generated code is quite
straightforward, so if there's a bug in the code generation (be it
portability or something else), it will usually be present all over the
generated C code.
C works remarkably well as a "portable assembler", with one exception:
It's darn difficult to make the generated executable react reliably to
integer overflows.
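To illustrate why (a generic C idiom, not what any particular Eiffel compiler emits): signed overflow is undefined behaviour in C, so a checked-arithmetic runtime cannot simply add and then inspect the result; it has to test the operands beforehand:

```c
#include <assert.h>
#include <limits.h>

/* Signed overflow is undefined behaviour in ANSI C, so checked
   arithmetic must reject the operation up front.  Returns
   nonzero if a + b would overflow an int. */
static int add_overflows(int a, int b)
{
    return (b > 0 && a > INT_MAX - b) ||
           (b < 0 && a < INT_MIN - b);
}
```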

> especially if this works for the 1750 (which, among other oddities,
> has 16-bit storage units).

Well, I wouldn't expect that this is a problem. Of course, the proof of
such a pudding is in the eating, so I won't make any bold claims until I
have seen such a beast run. But these portability issues are largely
solved in C, so Eiffel just relies on the C semantics.

(This is not part of the language definition, it's just what *most*
Eiffel compilers do. There is a significant minority that compiles
directly to machine language; in such a case, standard portability
issues must be addressed again.)

Peter Horan

Jun 27, 2000
"Howard W. LUDWIG" wrote:

> > velocity: REAL is deferred
> > -- This might be a function or a value. At the interface
> > -- level, Eiffel doesn't care about the difference.
> > ensure -- A postcondition.
> > Result = sqrt (dx * dx + dy * dy + dz * dz)
> > end
>
> I am inadequately knowledgeable of Design by Contract and
> Eiffel, so somebody please help me out of my confusion. I
> would normally expect some calculations (for this type of
> application domain anyway) with Result being computed in a
> "do" section between the preconditions ("require" section)
> and the post-conditions ("ensure" section). What is ensured
> above looks just like what I would expect to see in the "do"
> section. If you duplicate the "do" computations (which I
> grant are not listed in the code snippet above, but I would
> expect to see there in practice) in the "ensure" section, of
> course they are going to match and the postcondition does
> not really check anything. What am I missing here? What do
> you do to put truly meaningful checks in postconditions as
> distinct from your nominal calculations in the "do" section?

The point in the general case is that you are able to specify
the postcondition without also having to write the code to
deliver the goods at the same time. In other words, the
postcondition says _what_ the relationship between result and
the parameters must be without saying _how_ this relationship
is obtained. It is a specification, not code.

As soon as you write the code, in this example, it is obvious
the postcondition will be satisfied, because the two sides of
the yet-to-be-written assignment and the two sides of the
postcondition are the same. But what about
a postcondition such as
ensure
exists: file.exists
where things are not so similar?
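To illustrate the what-versus-how split with something runnable (my own sketch, using C's assert in place of Eiffel's require/ensure): a sorting routine is the classic case, because the postcondition ("the result is ordered") shares no code with the implementation, so the check is genuinely independent of it:

```c
#include <assert.h>
#include <stddef.h>

/* The "do" section says HOW (insertion sort); the ensure-style
   check at the end says WHAT (adjacent elements are ordered).
   The two share no code, so the check really checks something. */
static void sort_ints(int *a, size_t n)
{
    size_t i, j;

    for (i = 1; i < n; ++i) {          /* the "do" section */
        int key = a[i];
        for (j = i; j > 0 && a[j - 1] > key; --j)
            a[j] = a[j - 1];
        a[j] = key;
    }
    for (i = 1; i < n; ++i)            /* "ensure": sorted */
        assert(a[i - 1] <= a[i]);
}
```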

> > feature -- Public features: no {NONE} restriction here
> >
> > maximum_velocity: REAL is 3600.0
> > -- Maximum velocity up to which the IRS is guaranteed
> > -- to work.
> >
> > precision: REAL is 0.5
> > -- Absolute precision of sensor.
>
> The problem with the Mars Climate Orbiter (which Bertrand Meyer
> referenced in an earlier posting in this thread) was a mismatch
> of measurement units between two separate computer programs.
> A valuable mechanism for keeping such units straight and finding
> mismatches is an interface specification which states clearly
> which units shall be used (what 1.0 means for floating point and
> what 1 [LSB] means for scaled integer) -- no functional
> equivalence accepted -- and then to have the units explicitly
> included as part of the executed software (that is, not just in
> comments). Code like the above, with "3600.0" and "0.5" but no
> units, seems to continue the tradition of careless, shortcut
> software, which will not alleviate problems such as the one with
> MCO. No language guarantees an adequate specification of
> contract.

It is only intended as an example. So specify the units also.
But I agree that there is always room for overlooking things.

> The sample snippet of code that ignores
> units of measurement but is supposedly an example of DbC does
> not give me confidence that this approach will reliably find
> critical errors for programs like MCO and Ariane 5. Please

It was only an example, as I suggested above. The point about
specification is that it can also develop as understanding
improves, and it can be developed ahead of the code.

> don't get me wrong when I say this, because I see much value
> in the DbC concept from what I have read about it, and I have
> no doubt that it is a useful approach that will catch many
> major flaws in a design--in some ways it seems to extend the
> valuable concept of the peer review process by supporting more
> automated checking, and by providing more substantive data
> for human reviewers; however, to regard DbC as a cure-all
> eliminating all substantive flaws from a specification or a
> design is a delusion. The proverbial lack of a "silver bullet"
> still seems to hold true.

I have not read the discussion as claiming that DbC is a silver
bullet. But I guess that is because I use DbC and would not
be without it. I do read the current discussion as "maybe DbC
would have helped".
--
Peter Horan School of Computing and Mathematics
pe...@deakin.edu.au Deakin University
+61-3-5227 1234 (Voice) Geelong
+61-3-5243 7723 (Home) Victoria 3217
+61-3-5227 2028 (FAX) Australia

-- The Eiffel Guarantee: Specification to Implementation (http://www.elj.com)

Thomas Beale

Jun 27, 2000

"Gisle Sælensminde" wrote:

> As I understand, you don't say that Eiffel in itself could have avoided
> the problem, but that the design by contract mindset could have
> avoided it. Design by contract is a nice concept which I would
> like to see spread, but I don't think that it would have helped.
> The problem was the decision to move a component from
> Ariane 4 to Ariane 5 without sufficient testing. This is bad
> practice anyway, and no language or design methodology can avoid
> bad management.

You may be right, but this is a reductionist way of looking at things.

The question is whether such bad practice would have occurred in a _culture_
of design by contract. The question isn't whether one event might have
taken place if the culture were the same but Eiffel was used, but
rather whether it would have taken place in a different development culture.

The assertion being debated is more or less that: DbC represents a different
culture; such a different culture could well have averted the error, since in
the DbC mindset, _everything_ that is re-used has its contracts reviewed &
tested.

As a long-time user of Eiffel, I can certainly say that DbC, when used in a
pervasive way, does constitute a cultural change, and could well prevent such
"reuse" errors.

By the way, for Ada users, I don't think that Ada was at fault; if DbC were
being used in the Ada development, the cultural benefits would be gained, _if_
it was actually implemented in the language (or some reasonable library or
macro addition - whatever one does in Ada!). Like any nice idea, it does
little for real projects unless implemented directly in the tools.

- thomas beale

Thomas Beale

Jun 27, 2000

"Tarjei T. Jensen" wrote:

> Bertran...@eiffel.com wrote
> >But we do make the point that in the Design by Contract
> >culture pervasive in Eiffel, and directly supported
> >by the language (including its inheritance mechanism,
> >exception handling, documentation tools, debugging tools...)
> >the standard and almost obligatory way of reusing a module is
> >through its contract, including its conditions of use (precondition).
>

> You're skating on thin ice here. There is no reason to believe that Ada
> programmers are less conscientious than Eiffel programmers. Quite the contrary.


> Ada has a strong engineering culture.

The above comment does not say otherwise. As an engineer (and software engineer)
who has worked on realtime async systems, I know indeed that Ada people are
conscientious developers. The above comment just says that if the DbC approach (as
found in Eiffel) were used (wherever) then things might have been different.

> Besides, this was running on 1750 hardware. Was there any Eiffel compiler
> available at the time for the CPUs used? Are there any Eiffel compilers
> available for space-hardened CPUs?

Depends. Is there an ANSI C compiler for the 1750? An Eiffel system was made to run
inside an HP printer...

- thomas beale


Thomas Beale

Jun 27, 2000

Al Christians wrote:

> A while back I downloaded an Eiffel demo from B. Meyer's company, an
> example of the results the programming culture he espouses. It was
> a cute demo that ran under Windows, presented some GUI elements, and
> moved them around on the screen in response to mouse movements. About
> three mouse clicks later, Windows was dead. Meyer is a superb
> pontificator, but all his design by contract cerebration couldn't
> keep his own demo, his own firm's best foot forward, flying any
> longer than the Ariane flew. Perhaps that is because real OS's,
> eg Windows, and real rockets, eg Ariane, are not bound by contracts.
>
> Al

One has to ask: what OS did you run the thing under? If 95 or 98, you know
who to call...

Secondly, would you use Windows-anything in a rocket? (Please don't say yes)

- thomas beale

David Gillon

Jun 27, 2000

Joachim Durchholz wrote:

> Actually this is possible. DbC modelling is indeed applicable to
> hardware, software, and even business processes, and across levels of
> abstraction.

But how well does it function across multiple projects and companies,
each with their own development practices?

> The power of this method is that the original specifications from ROCKET
> will automatically become part of the specification of the main control
> software. No expensive and error-prone transformation of design
> specifications is required. (This benefit will be available only as far
> as all levels of the system are designed using the same specification
> language

On a programme such as Ariane 5, and certainly if we want to scale this
to the general case of safety critical vehicle management software, you
are looking at multiple teams working in multiple locations under
multiple development environments. Eiffel itself, or any other
requirements/implementation tool is effectively a decoy here, the real
issue is whether there is an advantage to using Design By Contract in
these cases. I don't see anything to indicate that the transfer of
interface requirements expressed as DbC data between companies would be
inherently safer than the transfer of the same information as a formal
interface requirements document.

> Agreed. However, if contracts are expressed in a uniform way and are
> accessible through a powerful class browser (such as those available for
> all commercial Eiffel compilers), checking is much easier

It's the 'if' that is the problem.

> > Including that this is apparently a failure in the interface
> > requirements specification between two separately developed processes,
> > not a simple invocation of one routine from another.
>
> I hope I have demonstrated that DbC is applicable even in this case.

Possibly, but a properly handled Interface Requirements document should
have caught the same errors. NASA/LockMart were bitten by trying to go
faster, better, cheaper and allowing their reviewing process to decay to
the point that something as fundamental as a units inconsistency was not
spotted.

--

David Gillon

Richard Kenner

Jun 27, 2000
In article <3958361A...@deakin.edu.au> Peter Horan <pe...@deakin.edu.au> writes:
>It is only intended as an example. So specify the units also.
>But I agree that there is always room for overlooking things.

And that's exactly the point. *No* language or methodology can prevent
people from overlooking things, and that's precisely what happened in
both of these incidents.

Peter Horan

Jun 27, 2000
I don't understand your excitement. I agree with you.

Howard wrote before my comment and I was agreeing with him.


> No language guarantees an adequate specification of contract.

And I also added afterwards:


> The point about specification is that it can also develop as understanding
> improves, and it can be developed ahead of the code.

Marin D. Condic

Jun 27, 2000
Thomas Beale wrote:
> The question is whether such bad practice would have occurred in a _culture_
> of design by contract. The question isn't whether one event might have
> taken place if the culture were the same but Eiffel was used, but
> rather whether it would have taken place in a different development culture.
>
Yeah, but that is a lot like saying "If we lived in a culture of
philosopher-saints, there would be no world hunger." or "If we lived in
a culture of pacifism, there would be no war". Sure, if the Ariane guys
lived in a different world, it wouldn't have been the world they lived
in. In a different universe, the rocket may not have blown up.

My assertion has been that had they bothered to do *any* kind of
checking and testing, they would have found the problem. DbC may have
found the problem in advance. Plugging the damned thing in and running a
flight profile through it would also have found it. They hosed up
because someone said "don't bother checking it out."

Using Ariane-5 to tout Eiffel/DbC or any other technology or process
seems to me to be unfair. Either the claim is "Language/Process X and
*only* X would have saved the day" or the claim is "Language/Process X
and just about any other Language/Process that was properly applied
would have saved the day." In the former case, there is a dearth of
evidence to back this up. In the latter case, it really isn't much of a
claim.

> The assertion being debated is more or less that: DbC represents a different
> culture; such a different culture could well have averted the error, since in
> the DbC mindset, _everything_ that is re-used has its contracts reviewed &
> tested.
>

Do you think that a bunch of metal benders would be infected by a
software process? DbC may be just fine within the world of software but
remember that the bulk of the rocket building exercise is done by
mechanical engineers, electrical engineers, physicists, etc., who A) are
not likely to be versed in any sort of software technology/process and
B) have their own share of development problems which nobody has ever
demonstrated are amenable to the same processes as software.

The claim that DbC/Eiffel would have saved the day is either too
incredible to be believed (and certainly undemonstrated) or it is so
general as to amount to a statement of support for "mom, apple pie and
Chevrolet." I don't think that Ariane-5 makes a good backdrop for
hawking Eiffel/DbC or any other technology. It was a management
screw-up, pure and simple. We've been having those since Adam bit the
apple and I'll lay odds that we'll continue to have them until Gabriel
blows the horn.

MDC
--
======================================================================
Marin David Condic - Quadrus Corporation - http://www.quadruscorp.com/
Send Replies To: m c o n d i c @ q u a d r u s c o r p . c o m
Visit my web site at: http://www.mcondic.com/

"Some people think programming Windows is like nailing jello to the
ceiling... easy with the right kind of nails."

-- Ivor Horton - Beginning Visual C++ 6
======================================================================

Al Christians

Jun 27, 2000
"Marin D. Condic" wrote:
> The claim that DbC/Eiffel would have saved the day is either too
> incredible to be believed (and certainly undemonstrated) or it is so
> general as to amount to a statement of support for "mom, apple pie and
> Chevrolet."

Exactly. The IBM methodology of 40 years back, inscribed on plaques
worldwide, would have saved the day just as well. It was a 1-word
methodology: "Think". The 3-word methodology of "design by
contract", like most others, is neither as strong nor as universally
applicable. The Ariane problem was not a design problem -- it was a
use problem.


Al

Berend de Boer

Jun 27, 2000

Keith Thompson wrote:


> Do Eiffel compilers generate *portable* C code? I.e., does porting an
> Eiffel compiler consist merely of copying the generated C code from

> one platform to another and recompiling it? If so, I'm impressed --


> especially if this works for the 1750 (which, among other oddities,
> has 16-bit storage units).

Most Eiffel compilers generate plain ANSI C code. And you probably can
use any cross compiler you wish.

Groetjes,

Berend. (-:

Thomas Beale

Jun 28, 2000

"Marin D. Condic" wrote:

> Thomas Beale wrote:
> > The question is whether such bad practice would have occurred in a _culture_
> > of design by contract. The question isn't whether one event might have
> > taken place if the culture were the same but Eiffel was used, but
> > rather whether it would have taken place in a different development culture.
> >
> Yeah, but that is a lot like saying "If we lived in a culture of
> philosopher-saints, there would be no world hunger." or "If we lived in
> a culture of pacifism, there would be no war". Sure, if the Ariane guys
> lived in a different world, it wouldn't have been the world they lived
> in. In a different universe, the rocket may not have blown up.

I disagree - we're just talking software (and engineering) culture here - you can
go into two different companies and see wildly different development (and for that
matter, management) cultures.

> My assertion has been that had they bothered to do *any* kind of
> checking and testing, they would have found the problem. DbC may have

But they apparently did do a lot of checking and testing; just not on things being
re-used, if I understand correctly.

> found the problem in advance. Plugging the damned thing in and running a
> flight profile through it would also have found it. They hosed up
> because someone said "don't bother checking it out."
>
> Using Ariane-5 to tout Eiffel/DbC or any other technology or process
> seems to me to be unfair. Either the claim is "Language/Process X and
> *only* X would have saved the day" or the claim is "Language/Process X
> and just about any other Language/Process that was properly applied
> would have saved the day." In the former case, there is a dearth of
> evidence to back this up. In the latter case, it really isn't much of a
> claim.

It probably isn't much of a claim that something like DbC might have saved the day!

> > The assertion being debated is more or less that: DbC represents a different
> > culture; such a different culture could well have averted the error, since in
> > the DbC mindset, _everything_ that is re-used has its contracts reviewed &
> > tested.
> >
> Do you think that a bunch of metal benders would be infected by a
> software process? DbC may be just fine within the world of software but
> remember that the bulk of the rocket building exercise is done by
> mechanical engineers, electrical engineers, physicists, etc., who A) are
> not likely to be versed in any sort of software technology/process and
> B) have their own share of development problems which nobody has ever
> demonstrated are amenable to the same processes as software.

Fair enough; probably there is a deep argument about basic engineering quality
processes which need to occur in mech/elec / software engineering alike, and which
need to cross borders in a complex engineering project like a rocket.

- thomas beale

bertran...@my-deja.com

Jun 28, 2000
Al Christians <ach...@easystreet.com> wrote:

> The IBM methodology of 40 years back, inscribed on plaques
> worldwide, would have saved the day just as well. It was a 1-word
> methodology: "Think". The 3-word methodology of "design by
> contract", like most others, is neither as strong nor as universally
> applicable.

A rather ill-considered comment, in my opinion. Design by Contract
is not a 3-word methodology. It has a 3-word name, and is supported
by lots of articles, including a fat book (too fat in some
people's view).

No one has the revealed truth but if you want to dismiss Design
by Contract I think you need more serious arguments than "there is
no silver bullet" (sure -- did anyone claim there is one? --
but that's not a good enough reason to dismiss proposed techniques)
and, now, "a three-word methodology is not enough".

> The Ariane problem was not a design problem -- it was a
> use problem.

Right, more precisely a reuse problem (as the article by
Jezequel and myself indeed argues). Design by Contract has a lot to
say about reuse (as well as design, but also documentation,
analysis, testing, management, and other applications).
In fact I have argued, in the paper and elsewhere, that
it's plain foolish to reuse without contracts.
You may disagree with the way the methodology addresses
the problem at hand, but you can't deny that it includes
it in its scope.

Ken Garlington

Jun 28, 2000
"Marin D. Condic" <mcondic...@acm.com> wrote in message
news:3958B07B...@acm.com...
[snip]

>
> My assertion has been that had they bothered to do *any* kind of
> checking and testing, they would have found the problem. DbC may have
> found the problem in advance. Plugging the damned thing in and running a
> flight profile through it would also have found it. They hosed up
> because someone said "don't bother checking it out."

Well, I think this is unfair to the IRS development team (assuming this is
what you meant by "them"). As noted in the report, and in my notes on the
subject at

http://www.flash.net/~kennieg/ariane.html#s3.1.5

the IRS team did not have access to the correct flight profile. If they had
run the Ariane 4 flight profile, they would not have triggered the error. Granted,
it would have been smart to run an end-to-end test, but that would not have
been sufficient without the proper test data. (Of course, the absence of
this data also implies some difficulties with detecting the error through a
DbC approach, as described in my notes...)

Veli-Pekka Nousiainen

Jun 28, 2000
"Thomas Beale" <tho...@deepthought.com.au> wrote in message
news:395888DF...@deepthought.com.au...

LOL

> - thomas beale
>
Veli-Pekka

Marin D. Condic

Jun 28, 2000
Ken Garlington wrote:
>
> "Marin D. Condic" <mcondic...@acm.com> wrote in message
> news:3958B07B...@acm.com...
> [snip]
> >
> > My assertion has been that had they bothered to do *any* kind of
> > checking and testing, they would have found the problem. DbC may have
> > found the problem in advance. Plugging the damned thing in and running a
> > flight profile through it would also have found it. They hosed up
> > because someone said "don't bother checking it out."
>
> Well, I think this is unfair to the IRS development team (assuming this is
> what you meant by "them"). As noted in the report, and in my notes on the
> subject at
>
I think the "them" I was referring to is that nebulous "them" or "they"
that we blame everything on. Not necessarily the same "them" or "they"
that covered up the alien landings at Roswell and are currently busy
following me around and tapping my phone lines. :-) (Anybody keeping up
with recent reports of cattle mutilations? :-)

What I meant - and I should have been more precise - was that the IRS in
conjunction with the given vehicle was going to be exposed to certain
conditions. Heat, vibration, Gamma rays, flight envelope, etc. There had
to be *somebody* in the organization who looked at the IRS and made
decisions such as: "Heat? Same as Ariane 4 and it worked fine there. No
need to test." This "somebody" (for whatever reasons) did not include
"flight profile" as part of what the IRS should be checked out
against. Maybe they *did* put it in a "shake-and-bake" because the
vibration and temperature ranges were different. If *I* were the systems
engineer responsible for that part, I'd want to verify physical
characteristics like that if they were different from prior flights.

So "they" decided that there was no need to run a test - either on the
IRS itself or in integration with the vehicle - that included the
Ariane-5 flight profile. (Or maybe it just didn't occur to them? Same
thing - it *should* have!) This, in my mind, would be pretty basic and
would normally be part of just about everyone's development process.
They'd have done that way back in the 1960's space program, long before
the invention of DbC/Eiffel, Structured Programming, Peer Reviews or
anything else that someone wants to claim would have saved the day.

Would DbC/Eiffel have saved the day? If DbC/Eiffel would have
pimp-slapped that unknown systems engineer who excluded that test and
forced him to include it, then I guess you can say that DbC/Eiffel would
have saved the day. But since just about any sound engineering practice
would have indicated that the device needed to be run against the new
profile, the claim doesn't amount to much and smells suspiciously of
"marketing". (Does brushing your teeth with Crest give you cleaner,
whiter teeth? Yes. And the same applies if you brushed your teeth with
Colgate, baking soda or sand. So what? The important part is to brush
your teeth!)

This is why I contend that it was a management problem and not a
methodology or technology problem. Somebody out there did *not* do
something that they should have known needed to be done. So far as I
know, there has never been, and never will be, a methodology or
technology that will guarantee that people won't shoot themselves in the
foot from time to time. If a claim is made that DbC/Eiffel would create
a "no foot shooting culture" - that is interesting, but not compelling.
Most cultures I know of frown upon and discourage their members from
shooting themselves in the foot. Yet every so often, a bridge in Tacoma
falls down, a skywalk in a hotel atrium collapses, a space shuttle blows
up, etc. Coming back after the fact and saying "my pet technology would
have prevented that!" is pretty well useless unless it can rightfully
claim that it would have caught and fixed the specific fault. Since the
fault was a lapse of human judgement - not a technological weakness - I
find such a claim difficult to believe and even harder to demonstrate.

Igor Boukanov

unread,
Jun 28, 2000, 3:00:00 AM6/28/00
to
In comp.lang.eiffel Marin D. Condic <mcondic...@acm.com> wrote:
> Ken Garlington wrote:
> .....

Exactly! In Eiffel/DbC, mostly the checks related to static typing can be done
by the compiler; to check the rest you have to run the code. And if one
decided not to test, then it does not matter whether it was DbC or not.

Regards, Igor

Al Christians

unread,
Jun 28, 2000, 3:00:00 AM6/28/00
to
Thomas Beale wrote:
> An Eiffel system was made to run
> inside an HP printer...
>

Tell us about this, if you can. When I last checked for Eiffel
success stories, there were only a few, and the HP printer thing
was the most recent one that looked at all significant. But I
hear from acquaintances at HP that they developed a JVM for
controlling printers, that they were most pleased with the JVM, and
that Eiffel was not anything major in HP's printer products. What's
the real story?


Al

Al Christians

unread,
Jun 28, 2000, 3:00:00 AM6/28/00
to
"Marin D. Condic" wrote:
> If a claim is made that DbC/Eiffel would create
> a "no foot shooting culture" - that is interesting, but not > compelling.

Here's what I find on page 128 of the only DBC book I have that
predates the Ariane problem, _Seamless_OO_Software_Architecture_,
by Walden and Nerson:

".. restricting reuse to what can be successfully planned will not
leave us much better off than with the old technologies. ...In the
meantime, we are much better off with 'accidental reuse' than with
no reuse at all."


Al

Darren New

unread,
Jun 28, 2000, 3:00:00 AM6/28/00
to
Bertrand Meyer wrote:
> That's exactly the point. In an environment fostering Design by Contract
> the obligatory path to reuse is through contracts. If you don't have
> a contract, you examine the reused element and equip it with a contract.

Here's something I don't understand. From the actual report:

+=+=+
The reason why the active SRI 2 did not send correct attitude data was that
the unit had declared a failure due to a software exception.

The internal SRI software exception was caused during execution of a data
conversion from 64-bit floating point to 16-bit signed integer value. The
floating point number which was converted had a value greater than what
could be represented by a 16-bit signed integer. This resulted in an Operand
Error. The data conversion instructions (in Ada code) were not protected
from causing an Operand Error, although other conversions of comparable
variables in the same place in the code were protected.

The reason for the three remaining variables, including the one denoting
horizontal bias, being unprotected was that further reasoning indicated that
they were either physically limited or that there was a large margin of
safety, a reasoning which in the case of the variable BH turned out to be
faulty.

+=+=+

Translating this into Eiffel terms, it sounds like they're saying "we had a
64-bit number that we converted to a 16-bit number. The number was too big,
so it violated the precondition on the conversion, threw an exception which
we decided not to rescue, and that blew up the program. We decided not to
rescue it, because we looked at it and figured it wouldn't ever trigger the
precondition in this case."
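
In rough Python terms (a hypothetical sketch: the names, limits and recovery behaviour are illustrative, not the actual Ada flight code), the difference between the unprotected and protected conversions described in the report looks like:

```python
# Hypothetical sketch of the conversion described in the report.
# Names, limits and the recovery strategy are illustrative only;
# the real flight software was Ada and is not reproduced here.

INT16_MIN, INT16_MAX = -32768, 32767

def to_int16_unprotected(x: float) -> int:
    """Mirror of the unprotected conversion: an out-of-range value
    raises the fatal exception (the report's Operand Error)."""
    n = int(x)  # truncate toward zero, like a float-to-int cast
    if not INT16_MIN <= n <= INT16_MAX:
        raise OverflowError("Operand Error: value exceeds 16-bit range")
    return n

def to_int16_protected(x: float) -> int:
    """Mirror of a protected conversion: the out-of-range case is
    handled (saturating here; the report does not say exactly how
    the protected conversions recovered)."""
    return max(INT16_MIN, min(INT16_MAX, int(x)))
```

On an Ariane 4 trajectory the value stayed in range and both behave identically; on the larger Ariane 5 value only the protected form survives, which is the whole dispute in this thread.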

So it looks like they *had* a contract, and they *did* look at clients, and
in some of them added code to recover from exceptions, but didn't in this
case, because they had examined the code and the contracts.

So I really fail to see how Eiffel could possibly have prevented this error.
There were contracts, the programmers looked at them during reuse, and they
got it wrong because they couldn't test it experimentally due to constraints
entirely outside the language. Saying "If they'd only used Eiffel they would
have not made that mistake" is just bogus.

(Especially since Eiffel doesn't even *have* 64-bit and 16-bit integers.)

--
Darren New / Senior MTS & Free Radical / Invisible Worlds Inc.
San Diego, CA, USA (PST). Cryptokeys on demand.
"You know Lewis and Clark?" "You mean Superman?"

Berend de Boer

unread,
Jun 28, 2000, 3:00:00 AM6/28/00
to

Darren New wrote:

> So it looks like they *had* a contract, and they *did* look at clients, and
> in some of them added code to recover from exceptions, but didn't in this
> case, because they had examined the code and the contracts.

Let me repeat it again, sigh: there were TWO teams. The first team wrote
the code and did the contract stuff (if you can say that). They
documented it, not in the code but in some other place.

The SECOND TEAM REUSED THIS STUFF.


> There were contracts, the programmers looked at them during reuse, and they

There were no contracts basically, the FIRST team probably didn't know
that such a thing existed. The SECOND team didn't look at the contracts
during reuse. The contracts (if any) were done by the FIRST team.
Everything you quoted applies to the FIRST team, not to the SECOND team
that reused the code.

Secondly, contracts (let's assume they knew about them) belong in the
code, not in some other document.

Thirdly, you can compile (certain) classes (or clusters) with or without
precondition checking on, so there is no run-time penalty for the final
code to specify them anyway.

Groetjes,

Berend. (-:

mjs...@my-deja.com

unread,
Jun 28, 2000, 3:00:00 AM6/28/00
to
In article <395A4FBC...@pobox.com>,

Berend de Boer <ber...@pobox.com> wrote:

> Secondly, contracts (let's assume they knew about them) belong in the
> code, not in some other document.

Is reading through tens or hundreds of thousands of lines of code
*really* the best way to document such limits to the outside world?
Why, in fact, should anybody even be re-reading the working, verified,
tested code at all? The SRI module should be treated as a black box
with a documented functionality (including operational limits). Re-
reading the code in this case would make no more sense than re-
verifying the hardware schematics. Putting contracts in the code may
help in development and maintenance, but surely a separate interfacing
and operational limits document is exactly what should be presented to
black-box re-users.

Mike

Eirik Mangseth

unread,
Jun 28, 2000, 3:00:00 AM6/28/00
to

<mjs...@my-deja.com> wrote in message news:8jdm3m$rq4$1...@nnrp1.deja.com...

Which incidentally is what the short and flat-short tools available in most
Eiffel environments provide. You don't have to pore over thousands of lines,
just the interface of the class(es), including assertions.

>
> Mike

Best regards

Eirik Mangseth
Stockpoint Nordic AS
Lysaker, Norway

"If I can't Eiffel in heaven, I won't go"

Eirik Mangseth

unread,
Jun 28, 2000, 3:00:00 AM6/28/00
to

"Al Christians" <ach...@easystreet.com> wrote in message
news:395A190E...@easystreet.com...

Proving what? That the same management that insisted on continuing with
C decided to run with the herd? It's called "management by magazine"
and it's not very commendable, but very common.

> What's the real story?

The evangelist left and management bought
into Sun's marketing machine trading quality
for fashion.

>
>
> Al

Eirik M

mjs...@my-deja.com

unread,
Jun 28, 2000, 3:00:00 AM6/28/00
to
In article <uNt65.3411$MS3....@news1.online.no>,

OK, I don't know what the output of such tools look like, but then the
question becomes: is reading thousands of such interface contracts
*really* the best way to document such limits to the outside world,
when only a few dozen are pertinent to the hardware module interface?
Note, BTW, that it's not at all clear that the overflowing value had
not been manipulated before being used, so that limiting the
conversion to +/- 32767 may have implied a backwards limit on the
original Horizontal Bias input of +/- 14.33 or somesuch. Is it really
arguable that placing this limit (contract) in an interface document
is not as good as having it (perhaps with different limits) buried
among thousands of other contracts?

BTW, is it actually standard Eiffel programming practice to guard all
float-to-integer conversions with contracts? (I'll be surprised if it
is)

Mike

Patrick Schoenbach

unread,
Jun 28, 2000, 3:00:00 AM6/28/00
to
On Wed, 28 Jun 2000 20:10:45 GMT,
mjs...@my-deja.com <mjs...@my-deja.com> wrote:

>Putting contracts in the code may help in development and maintenance,
>but surely a separate interfacing and operational limits document is
>exactly what should be presented to black-box re-users.

Just asking. Are you familiar with the notion of short and flat-short
form in Eiffel?

--
Best regards,
Patrick

--
----------------------------------------
Patrick Schoenbach
Interactive Software Engineering, Inc.
email: Patrick.S...@eiffel.com
URL: http://www.eiffel.com

Darren New

unread,
Jun 28, 2000, 3:00:00 AM6/28/00
to
Berend de Boer wrote:
>
> Darren New wrote:
>
>
> > So it looks like they *had* a contract, and they *did* look at clients, and
> > in some of them added code to recover from exceptions, but didn't in this
> > case, because they had examined the code and the contracts.
>
> Let me repeat it again, sigh: there were TWO teams. The first team wrote
> the code and did the contract stuff (if you can say that). They
> documented it, but not in the code but in some other place.
>
> The SECOND TEAM REUSED THIS STUFF.

Yes. So when the second team reused this stuff, was the contract that was
violated one of the interfaces to the "cluster", or was BH an internal
variable getting passed around between routines local to the entire
ensemble?

> > There were contracts, the programmers looked at them during reuse, and they
>
> There were no contracts basically,

Bullfeathers. Of course there were. Ada just doesn't call them contracts
like Eiffel does. If an operation throws an exception because the input
isn't appropriate for the operation to be performed, that's a contract, yes?

> the FIRST team probably didn't know
> that such a thing existed.

I would be really, really surprised if anyone writing flight control software
for $500 million rockets didn't know what an assertion is, or what an ADT
is.

> The SECOND team didn't look at the contracts
> during reuse. The contracts (if any) were done by the FIRST team.
> Everything you quoted applies to the FIRST team, not to the SECOND team
> that reused the code.

So using Eiffel wouldn't have solved the problem, because the SECOND team
didn't look at the contracts during reuse. Seems pretty simple to me.

Using Eiffel but not looking at the contracts would not have avoided the
problem. Using Ada and looking at the contracts would have solved the
problem. So the language used is independent of the result, in this case.

I'm not sure why having all the contracts buried in the source code is
better than having them in a documentation binder, if the code is physically
tied to an object you're buying, like an IRS. I would hate to have to learn
how to send email by reading the source code of sendmail. Isn't it better to
have, say, an RFC to tell you what you need to do?

> Secondly, contracts (let's assume they knew about them) belong in the
> code, not in some other document.

They *are* in the code. That's why the program threw an exception.
Otherwise, it would have worked like Eiffel and *ignored* the overflow.

Everyone involved in the coding knew that sometimes, in Ada, a float being
cast to an integer throws an exception. It's a standard Ada kind of thing.
You get to turn it on and off in the code. The contract was right there. "BH
has to fit in a 16 bit integer at this point in the code." In what relevant
sense is that not just as much a contract as anything else in Eiffel?

> Thirdly, you can compile (certain) classes (or clusters) with or without
> precondition checking on, so there is no run-time penalty for the final
> code to specify them anyway.

Well, actually, not in this case. In this case, the problem was that a float
overflowed a 16-bit integer. How do you selectively turn off such checking
in Eiffel, on a conversion-by-conversion basis? Bonus points for even
*specifying* such a conversion in Eiffel. Remember, four of the calls *were*
checked, and three were *not* checked. So Eiffel doesn't give you the kind
of control over checking they needed anyway.

mjs...@my-deja.com

unread,
Jun 28, 2000, 3:00:00 AM6/28/00
to
In article
<slrn8lkq5j.39h.P...@workstation.int.solidsoft.iksys.de>,

Patrick.S...@eiffel.com wrote:
> On Wed, 28 Jun 2000 20:10:45 GMT,
> mjs...@my-deja.com <mjs...@my-deja.com> wrote:
>
> >Putting contracts in the code may help in development and maintenance,
> >but surely a separate interfacing and operational limits document is
> >exactly what should be presented to black-box re-users.
>
> Just asking. Are you familar with the notion of short and flat-short
> form in Eiffel?

Nope -- would it be able to separate out only those contracts that applied
to the physical black-box inputs, and label those contracts in a clear
manner (ignoring the internal Eiffel class, method and variable names,
which the interfacer doesn't care about)? For example, in the Eiffel
Ariane paper are the following lines:

require
horizontal_bias <= Maximum_bias

but what the interfacer wants is going to be more like:

Horizontal Bias (connector J4, pin 6) valid range: 0 to +3.45V (0 to 62
m/s)

(you get the idea...)
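
The two views can be reconciled mechanically. A hypothetical Python sketch, reusing the invented numbers from the example above (connector J4/pin 6, 0 to +3.45V, 0 to 62 m/s): the boundary code checks the hardware-level limit from the interface document, and the code-level precondition then follows from it.

```python
# Hypothetical sketch relating a hardware interface-document limit
# to a code-level contract. The connector, voltage range and scale
# are the invented numbers from the example above, not real specs.

V_MIN, V_MAX = 0.0, 3.45        # documented valid range at J4, pin 6
MAX_BIAS = 62.0                 # m/s, the code-level Maximum_bias
SCALE = MAX_BIAS / V_MAX        # m/s per volt

def horizontal_bias(volts: float) -> float:
    # require: input within the interface-document range
    assert V_MIN <= volts <= V_MAX, "input outside documented range"
    bias = volts * SCALE
    # ensure: the code-level contract is implied by the input check
    # (small tolerance for floating-point rounding)
    assert bias <= MAX_BIAS + 1e-9, "horizontal_bias <= Maximum_bias"
    return bias
```

The point of the sketch is that the interface-document limit and the in-code contract are the same fact stated at two levels; neither makes the other redundant.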

Al Christians

unread,
Jun 28, 2000, 3:00:00 AM6/28/00
to
Eirik Mangseth wrote:
>
> The evangelist left and management bought
> into Sun's marketing machine trading quality
> for fashion.
>

Can you expound any examples of bad behavior of HP printers caused
by this lack of quality? Those things are everywhere, so if you
can point out the problems, you would have a better example
in favor of DBC than the Ariane V, as half the computerized world
could easily verify the flaws first-hand.

What do you call an evangelist who doesn't convert anyone?


Al

bertran...@my-deja.com

unread,
Jun 29, 2000, 3:00:00 AM6/29/00
to
Al Christians wrote:

> Can you expound any examples of bad behavior of HP printers caused
> by this lack of quality? Those things are everywhere, so if you
> can point out the problems, you would have a better example
> in favor of DBC than the Ariane V, as half the computerized world
> could easily verify the flaws first-hand.

Thanks for the opportunity. I am not in a position to comment
on HP's strategy. But I can certainly cite some of what happened
during the development that Thomas Beale and Eirik Mangseth have
mentioned in this thread and which is documented on our site.
Among the notable events in that development:

- Because the Eiffel modules were being called by
the main body of C/C++ software, contracts in the
Eiffel part (preconditions) brought up bugs that
had been dormant in the C part, most likely for
years. This caused quite a stir as a bug in a laser
printer is not something to be taken lightly.

- Later in the project a violated contract (which
had not failed during simulation) showed that the
*hardware* (a chip from an external supplier, more
precisely its floating-point power operator) was
faulty! It caused even more of a stir, and interesting
discussions with the supplier.

As an anecdote, we were told in both cases that the initial
finger-pointing was at Eiffel -- "See this strange tool you
brought into the project, it breaks the software!" -- until
it was realized that the fault lay elsewhere and the Eiffel
contracts had actually served to evidence it. This is not
unlike the Ariane case (in different circumstances): reuse
errors. Only in these cases the errors were found during
development, directly thanks to the contracts.

There is a report with more details, in the form of an interview
of the project leader, at
http://www.eiffel.com/eiffel/projects/hp/creel.html.

This is a pretty good example of the kind of difficult real-time
system for which Eiffel has been shown to shine. To answer an
earlier message by Mr. Christians, I don't know of any rockets
guided by Eiffel software. But many mission-critical systems
with extremely harsh requirements are functioning today (some of
them continuously for the past several years), written in Eiffel.
The page at http://www.eiffel.com/eiffel/projects gives a few
pointers. I must qualify it by noting that (frustratingly
for ISE) there are a number of others (such as the price
reporting system of a major financial exchange, and several
large defense-related applications) that we have been unable
to discuss publicly, often because of the customers' quite
understandable desire to keep their leading edge. However
the projects that are documented on the Web page already include
some quite interesting stuff.

--
Bertrand Meyer
Interactive Software Engineering
ISE Building, 2nd Floor, 270 Storke Road Goleta CA 93117
Phone 805-685-1006, Fax 805-685-6869, http://eiffel.com

Berend de Boer

unread,
Jun 29, 2000, 3:00:00 AM6/29/00
to

mjs...@my-deja.com wrote:

> Note, BTW, that it's not at all clear that the overflowing value had
> not been manipulated before being used, so that in limiting the
> conversion to +/- 32767 that may have implied a backwards limit on the
> original Horizontal Bias input of +/- 14.33 or somesuch. Is it really
> argueable that placing this limit (contract) in an interface document
> is not as good as having it (perhaps with different limits) buried
> among thousands of other contracts?

Then the question becomes: is reading thousands of such limits *really*
the best way to document such limits to the outside world?

You just don't have a single directory with 1000s of classes of course.
Any programmer with at least a bit of experience tries to organize
things somewhat. And whether it is classes or documents, you have to
organize them either way.


> BTW, Is it actually standard Eiffel programming practice to guard all
> float-to-integer conversions with contracts? (I'll be surprised if it
> is)

No. I just bail out, as most (Ada) programs do. Sometimes the costs can
be a little too high. Then you do the research the first team did. And
they did a good job. But their code could not be reused in every
situation.

Groetjes,

Berend. (-:

Tarjei T. Jensen

unread,
Jun 29, 2000, 3:00:00 AM6/29/00
to

Al Christians wrote
>Can you expound any examples of bad behavior of HP printers caused
>by this lack of quality? Those things are everywhere, so if you
>can point out the problems, you would have a better example
>in favor of DBC than the Ariane V, as half the computerized world
>could easily verify the flaws first-hand.

We have quite a number of HP printers, and the problem rate, especially with
regard to JetDirect cards, is high enough to be annoying. It is not uncommon
for the TCP/IP stack to crash. Some of the printers do not wake reliably when
in power-saving mode.

As far as I can see, everything that was discovered in the HP project would
have been detected by any other programming language which does range checking
on variables. Other people have had similar experiences with Ada (search
c.l.a. for Dowhile Jones postings).

Greetings,

pc_c...@my-deja.com

unread,
Jun 29, 2000, 3:00:00 AM6/29/00
to
In article <8jdm3m$rq4$1...@nnrp1.deja.com>,

mjs...@my-deja.com wrote:
> In article <395A4FBC...@pobox.com>,
> Berend de Boer <ber...@pobox.com> wrote:
>
> > Secondly, contracts (let's assume they knew about them) belong in the
> > code, not in some other document.
>
> Is reading through 10s or 100s or 1000s of thousands of lines of code
> *really* the best way to document such limits to the outside world?
> Why, in fact, should anybody even be re-reading the working, verified,
> tested code at all? The SRI module should be treated as a black box
> with a documented functionality (including operational limits).

Documented functionality (including operational limits) == DbC Contracts

> Re-reading the code in this case would make no more sense than
> re-verifying the hardware schematics. Putting contracts in the code may
> help in development and maintenance, but surely a separate interfacing
> and operational limits document is exactly what should be presented to
> black-box re-users.
>
>

A "separate interfacing and operational limits document" is exactly
what is the problem.

Consider the following simple case.

In the graphics library OpenGL there exist two functions, glBegin()
and glEnd(), which are used to begin and end the definition of a list
of vertex data. After calling glBegin() there is a limited set of valid
OpenGL calls you can make until you call glEnd() and thus indicate
that you are finished defining your list of vertex data.

However what happens if you use a non-valid call between a
glBegin() and glEnd() pair? Well if you're lucky and have access to
official or correct OpenGL documentation such as the official
"OpenGL Programming Guide" you can read on page 48:

"... most other OpenGL calls generate an error [between a glBegin()
and a glEnd()]. Some vertex array commands, such as
glEnableClientState()
and glVertexPointer(), when called between glBegin() and glEnd(),
have undefined behaviour but do not necessarily generate an error.
... These cases should be avoided, and debugging them may be more
difficult."

Wow! That's good information to have! However, if you only read the
code, or rather the API function signatures (which are what the OpenGL
library API presents), you will only see:

void glBegin(GLenum mode);

and

void glEnd(void);

Not that much information about the restrictions on their use to be
seen there! However if we use contracts and a language supporting
contracts, we could present an API like this ('--' indicates a comment):

drawing_a_primitive: BOOLEAN
      -- Is a drawing primitive currently being defined?
      -- I.e. has a `gl_begin' been called and
      -- no corresponding `gl_end' been called yet?

gl_begin (mode: INTEGER) is
   require
      valid_state: not drawing_a_primitive
      valid_geometric_primitive: is_valid_geometric_primitive (mode)
   ensure
      valid_drawing_state: drawing_a_primitive

gl_end is
   require
      valid_drawing_state: drawing_a_primitive
   ensure
      valid_state: not drawing_a_primitive

And then consider this non-valid OpenGL call (between a glBegin()
and a glEnd()):

egl_translate_f (x, y, z: REAL) is
      -- Multiplies the current matrix by a translation matrix.
   require
      valid_state: not drawing_a_primitive

Note that *everything* other than the lines beginning with '--' is
compilable program code. Not only that, but the 'require' and
'ensure' clauses are also part of the code's official API. Now by
just *reading* the code's official API, which contains the "Documented
functionality (including operational limits)", we *know how* the
code is meant to be used!

We *know* that "egl_translate_f" should *not* be called if we're
between a "gl_begin" and a "gl_end" call! (You could argue that I
could have called the visible API feature "drawing_a_primitive",
"between_a_gl_begin_and_a_gl_end_call" instead to make its
meaning even more explicit.)

This is the way "Documented functionality (including operational
limits)", contracts, specifications or what ever you want to call the
semantics of the code, should be documented:

Integrated with the code!
Readable with the code!
Compilable with the code!
And testable with the code!
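
As a runnable analogue of the contracts above (a Python sketch in which assert plays the role of require/ensure; the Eiffel above is the real interface, this just shows the checks executing):

```python
# Python analogue of the gl_begin/gl_end contracts above.
# assert statements stand in for Eiffel's require/ensure clauses.

class GLSketch:
    def __init__(self):
        # The query from the contracts: is a primitive being defined?
        self.drawing_a_primitive = False

    def gl_begin(self, mode: int) -> None:
        # require valid_state: not drawing_a_primitive
        assert not self.drawing_a_primitive, "nested gl_begin"
        self.drawing_a_primitive = True
        # ensure valid_drawing_state: drawing_a_primitive

    def gl_end(self) -> None:
        # require valid_drawing_state: drawing_a_primitive
        assert self.drawing_a_primitive, "gl_end without gl_begin"
        self.drawing_a_primitive = False
        # ensure valid_state: not drawing_a_primitive

    def translate_f(self, x: float, y: float, z: float) -> None:
        # require valid_state: not drawing_a_primitive
        # (this call is invalid between gl_begin and gl_end)
        assert not self.drawing_a_primitive, "translate inside begin/end"
```

Calling translate_f between gl_begin and gl_end now fails loudly at the contract, instead of exhibiting the "undefined behaviour" the Programming Guide warns about.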

I guess it's just that if you haven't really worked with contracts and
DbC you can't really appreciate the power of using DbC.

/Paul

pc_c...@my-deja.com

unread,
Jun 29, 2000, 3:00:00 AM6/29/00
to
In article <8jd4bb$na7$1...@toralf.uib.no>,

bouk...@sentef2.fi.uib.no (Igor Boukanov) wrote:
>
> Exactly! In Eiffel/DbC, mostly the checks related to static typing can be
> done by the compiler; to check the rest you have to run the code. And if one
> decided not to test, then it does not matter whether it was DbC or not.
>

I do not agree. DbC means *integrating* contracts, and thus the
specification of the code, with the implementation of the code. It
is impossible to reuse a given piece of code written with DbC without
actually seeing the contracts. It is impossible to reuse a given method/
function/procedure without seeing the contracts belonging to the
given method/function/procedure. Just as it is impossible not to see
and take into consideration the signature of the method/function/
procedure.

Seeing the contracts will alert a potential programmer, about to
reuse some piece of code, to the specific contracts/specifications
for the code he/she is about to reuse. It will make him/her think
about how he/she can reuse the code. And that is independent of
testing the code with contracts turned on or not.

Of course there's no guarantee that just because a language supports
DbC, programmers using the language will always specify contracts.
But the culture promoted by being able to specify contracts and
eventually using contracts will definitely reduce the amount of errors,
due to wrong reuse, in production code.

Just as a programming culture of consistent coding style and consistent
choice of module/class/function/attribute etc. names will in general
make the code more easy to support and/or reuse. There is of course
no guarantee for it (since many other factors also affect how easy a
given piece of code is to support/reuse), but generally well written
consistent code is easier to support and/or reuse.

So code with contracts is generally harder to mis-reuse than code
without contracts. And in my personal experience an order of *magnitude*
harder to mis-reuse than code without contracts. Add to that the ability
to check the contracts (ie. specifications) against the implementation
in runtime tests and you have a formidable tool for developing high-
quality reusable code.

I fully agree with the slogan "Reuse without contracts is sheer folly".
Just as delivering production code without testing is sheer folly. I am
convinced that in 5 or 10 years time very few people will consider
developing any new software without support for contracts along
the lines of Eiffel/DbC.

That said, of course no one can say that Eiffel/DbC would definitely
have caught the Ariane bug, but then again, as I've understood, no one
has claimed so.

/Paul Cohen

Thomas Beale

unread,
Jun 29, 2000, 3:00:00 AM6/29/00
to

Al Christians wrote:

> Thomas Beale wrote:
> > An Eiffel system was made to run inside an HP printer...
> >
>
> Tell us about this, if you can. When I last checked for Eiffel
> success stories, there were only a few, and the HP printer thing
> was the most recent one that looked at all significant. But I
> hear from acquaintances at HP say that they developed a JVM for
> controlling printers, they were most pleased with the JVM, and

> Eiffel was not anything major in HP's printer products. What's
> the real story?

Well, for success stories (others, please excuse my making noise about
this, but since we're all cross-posted to c.l.a they might be interested
:-) see under the AMP and GEHR headings at www.eiffel.com or at
http://www.deepthought.com.au/it/eiffel/eif_bus_sys/main.book.pdf
and http://www.gehr.org, particularly at
http://www.gehr.org/technical/source/Documentation/index.html if you
just want to dive into some online Eiffel....

(These are just projects in which I have personal involvement, there are
many others, no doubt bigger and better. )

For the printer, I note Bertrand has already replied on this.

- thomas beale

Ken Garlington

unread,
Jun 29, 2000, 3:00:00 AM6/29/00
to
<pc_c...@my-deja.com> wrote in message news:8jfabb$1d8$1...@nnrp1.deja.com...

> That said, of course no one can say that Eiffel/DbC would definitely
> have caught the Ariane bug, but then again, as I've understood, no one
> has claimed so.

The exact claim was: "Does this mean that the [Ariane 5] crash would
automatically have been avoided had the mission used a language and method
supporting built-in assertions and Design by Contract? Although it is always
risky to draw such after-the-fact conclusions, the answer is probably
yes..."

mjs...@my-deja.com

unread,
Jun 29, 2000, 3:00:00 AM6/29/00
to
In article <395AE477...@pobox.com>,

Berend de Boer <ber...@pobox.com> wrote:
>
>
> mjs...@my-deja.com wrote:
>
> > Note, BTW, that it's not at all clear that the overflowing value had
> > not been manipulated before being used, so that limiting the
> > conversion to +/- 32767 may have implied a backwards limit on the
> > original Horizontal Bias input of +/- 14.33 or somesuch. Is it really
> > arguable that placing this limit (contract) in an interface document
> > is not as good as having it (perhaps with different limits) buried
> > among thousands of other contracts?
>
> Then the question becomes: is reading thousands of such limits *really*
> the best way to document such limits to the outside world?

Not at all the same, because the vast majority (99%?) of contracts in
the code do not percolate out to affect the interface specs. For
example, there might be ten different places where a particular input,
or some manipulation of it, is limited, but there can only be *one*
actual limit on that input (the intersection of all the internal
limits).
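
That percolation can be made concrete (a hypothetical Python sketch; the limits are invented, except the +/- 14.33 figure mentioned earlier in the thread): the single limit the interface document needs to state is the intersection of every internal range that constrains the same input.

```python
# Hypothetical sketch: several internal contracts constrain the same
# input; the one interface-level limit is their intersection.
# The ranges are invented for illustration.

internal_limits = [
    (-100.0, 100.0),   # e.g. a conversion guard
    (-14.33, 14.33),   # e.g. a scaling step's precondition
    (-50.0, 75.0),     # e.g. a table-lookup bound
]

def interface_limit(limits):
    """Intersect closed ranges: the tightest low and high bounds."""
    lo = max(low for low, _ in limits)
    hi = min(high for _, high in limits)
    assert lo <= hi, "internal contracts are contradictory"
    return lo, hi
```

Only the resulting pair belongs in the interface document; the other internal checks stay buried in the code, which is the asymmetry being argued here.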

Mike

mjs...@my-deja.com

Jun 29, 2000, 3:00:00 AM6/29/00
to
In article <395B63A0...@deepthought.com.au>,
Thomas Beale <tho...@deepthought.com.au> wrote:

>
>
> mjs...@my-deja.com wrote:
>
> > surely a separate interfacing
> > and operational limits document is exactly what should be presented to
> > black-box re-users.
>
> In normal software engineering parlance, this would be "requirements" and
> "test cases".

Umm...OK...that changes everything...

Darren New

Jun 29, 2000, 3:00:00 AM6/29/00
to
Thomas Beale wrote:
> Ah - you're missing another point about documentation, code etc which relates to
> all languages: the _primary expression_ of the contracts is in the code
> (remembering that in Eiffel and even well-written C++, and I guess Ada as well),
> but it can certainly be replicated elsewhere in documentary form.

That works only when you only have one version of a program. I.e., by doing
this, you're basically assuming that all versions of the program you write
will be structured the same way. It really doesn't work very well, methinks,
for stuff like internet protocols.

> Most software engineers have done or experienced similar things. The "Soda" tools
> for C++ do this kind of thing. So there is no need to talk of anything being
> "buried" in the code.

Fair enough. But given that the restrictions were "buried" in outside
documentation, I wouldn't imagine that putting it in the code would make it
*less* buried.

> > > Secondly, contracts (let's assume they knew about them) belong in the
> > > code, not in some other document.
> >

> > They *are* in the code. That's why the program threw an exception.
> > Otherwise, it would have worked like Eiffel and *ignored* the overflow.
>

> I don't know what was in the code, but what you are talking about sounds more like
> an exception mechanism, not a contract mechanism.

class int16
fromFloat(x) : ...
require -32K < x <= 32K

It's probably not out there *quite* so clearly, but...
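Darren's sketch can be fleshed out as a runnable illustration (Python, with hypothetical names like `int16_from_float`; the actual flight code was Ada, and this is only an analogue of the precondition being discussed):

```python
INT16_MIN, INT16_MAX = -32768, 32767

def int16_from_float(x: float) -> int:
    # The "contract" of the conversion: x must fit in a signed
    # 16-bit integer, otherwise the call is a precondition violation.
    if not (INT16_MIN <= x <= INT16_MAX):
        raise ValueError(
            f"precondition violated: {x} not in [{INT16_MIN}, {INT16_MAX}]")
    return int(x)

print(int16_from_float(1234.5))   # -> 1234
try:
    int16_from_float(40000.0)     # out of range, like the BH value
except ValueError as e:
    print("caught:", e)
```
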

> And Eiffel's contract mechanism
> would not have ignored anything, on the contrary it would have induced an
> exception;

That's what crashed the rocket. An exception was induced because of a
numeric conversion. Eiffel's contract mechanism would have ignored it
because there wasn't CPU time to be checking preconditions.

> the point is that the exception would have been induced at a
> well-defined place in the code - where the contractual precondition was, not in
> some far away place, making debugging difficult.

So what you're saying is, had they used Eiffel, they would have calculated
what values of BH wouldn't fit in a 16-bit integer, and propagated that
information up the call stack in the form of preconditions on whatever
variables go into BH. But there's still no evidence that would have solved
the problem, due to the nature of BH.

> One could say this was an implicit assertion of the form "BH <= 32767 and BH >=
> -32768" every time integers are handled in Ada. So there is an _assertion_ but no
> contract - a contract would consist of assertions for pre- and post- conditions
> (maybe just pre-condition - nothing stopping that) for the routine doing the call -

Yes. You just gave it. The routine doing the call that converted BH to an
integer had a precondition on that call that BH had to fit in a 16-bit
integer. I'm not sure why you don't think this is a contract.

> in other words they would have been understood as part of the routine
> specification, not just an implicit assertion somewhere in the code. A contract
> could be thought of as design-level assertions.

I'm aware of what assertions are, and what contracts are. I'm saying that
the routine that converted BH to an integer had a precondition that BH was
in range. BH wasn't. The error wasn't caught, because the original
programmers knew it didn't need to get checked.

> > Well, actually, not in this case. In this case, the problem was that a float
> > overflowed a 16-bit integer. How do you selectively turn off such checking
> > in Eiffel, on a conversion-by-conversion basis? Bonus points for even
>

> Well, 16 and 64 bit integers don't exist natively in Eiffel (many argue they
> should) but if they did, failed conversions would generate the same kinds of
> exceptions as divide-by-zero and segmentation fault already do.

And your rocket would crash.

> You can write
> catch-all rescue clauses or override the default_rescue routine in classes to stop
> the whole thing dying easily enough.

Uh, no. There wasn't CPU time to be doing that. And they *did* have the
equivalent of a default_rescue, which dumped debugging information onto the
bus, which crashed the rocket. Why do you think Eiffel's default_rescue
would have any ability to keep the rocket flying, *even* assuming Eiffel
supported the application to start with?

Jason Voegele

Jun 29, 2000, 3:00:00 AM6/29/00
to
Darren New <dn...@san.rr.com> wrote in message
news:395B7BAE...@san.rr.com...

> Thomas Beale wrote:
> > Most software engineers have done or experienced similar things. The
> > "Soda" tools for C++ do this kind of thing. So there is no need to talk
> > of anything being "buried" in the code.
>
> Fair enough. But given that the restrictions were "buried" in outside
> documentation, I wouldn't imagine that putting it in the code would make
> it *less* buried.

Sure it would: no matter whether you were looking at the code or at the
external documentation, the contract would be there right in front of you.
Thus "less buried".

Jason Voegele

Bertrand Meyer

Jun 29, 2000, 3:00:00 AM6/29/00
to
Tarjei T. Jensen wrote:

> As far as I can [see] everything that were discovered in the HP [printer] project
> would have been detected by any other programming language which does range checking on
> variables.

As a matter of fact, no. It is true that range checks are very useful,
but they capture only a small part of what can be expressed and
detected through contracts. The hardware defect that came out
during the HP project and caused such a shock was (as mentioned in my
earlier posting) a flaw in the hardware power operator in the chip,
and it manifested itself during testing through a violation of an
assertion of the form

2 ^ i <= N

(where ^ is the power operator; if my memory is correct i was 8 and N
was 256). This cannot be expressed as a simple interval type
declaration.

I believe the other problems encountered and detected also involved
non-trivial assertions. With contracts you can use the full power
of boolean expressions (and indeed, with Eiffel's agent mechanism,
arbitrary high-order functionals); for example you can have
a precondition of the form

some_property (x)

where `some_property' is a function describing a complex
condition. The actual conditions used in practice vary from
things like "this reference is non-void, i.e. that object
exists" (very simple, but repeatedly useful, and not expressible
as an interval type declaration) to rather complex ones,
such as "this object is present in the hash table",
"that window has been displayed at least once", "that list
of integers includes at least one positive value",
"that list of cars includes at least one non-domestic model" etc.
They include simple range checks but go far beyond them.
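As a hedged illustration of this point (an invented Python sketch, where `has_positive` stands in for the thread's `some_property`), a precondition can invoke an arbitrary boolean function, which no interval type declaration can express:

```python
def has_positive(xs):
    """A complex condition that a range check cannot express:
    the list must contain at least one positive value."""
    return any(v > 0 for v in xs)

def first_positive(xs):
    # Precondition in the spirit of `some_property (x)` above.
    assert has_positive(xs), "precondition: list must contain a positive value"
    return next(v for v in xs if v > 0)

# An assertion of the 2 ^ i <= N form is likewise beyond interval types:
i, N = 8, 256
assert 2 ** i <= N

print(first_positive([-3, -1, 4, 2]))  # -> 4
```
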

-- Bertrand Meyer
ISE Inc., Santa Barbara
http://www.tools-conferences.com http://www.eiffel.com

Robert Dewar

Jun 29, 2000, 3:00:00 AM6/29/00
to
In article <395BCE66...@eiffel.com>,

Bertran...@eiffel.com wrote:
> Tarjei T. Jensen wrote:
>
> > As far as I can [see] everything that were discovered in the HP [printer] project
> > would have been detected by any other programming language which does range checking on
> > variables.
>
> As a matter of fact, no. It is true that range checks are very useful,
> but they capture only a small part of what can be expressed and
> detected through contracts.


But the real issue is whether contracts as provided in Eiffel
are sufficiently more useful than the simple pragma Assert
found in typical Ada compilers. All the examples in this
message are in the category of simple assertions, and I think
it is far from clear that a case can be made for anything
more complex. Yes, I know examples can be constructed, but
I am struck by the fact that virtually 100% of examples
given do not need the additional complexity.

Thomas Beale

Jun 30, 2000, 3:00:00 AM6/30/00
to

Eirik Mangseth wrote:

> <mjs...@my-deja.com> wrote in message news:8jdm3m$rq4$1...@nnrp1.deja.com...
> > In article <395A4FBC...@pobox.com>,

> > Putting contracts in the code may
> > help in development and maintenance, but surely a separate interfacing
> > and operational limits document is exactly what should be presented to
> > black-box re-users.
>

> Which incidentally is what the short and short/flat tools available in most
> Eiffel environments provide. You don't have to pore over thousands of lines,
> just the interface of the class(es) including assertions.

It's not just "incidentally" either; this is a core point - flat/short is a form
akin to unix man pages or the black-box documentation of any library - but with
contracts. So it's exactly the black-box description you should be looking at,
since it's likely to be the only formal one.

Another comment: something like the following has been said a few times (maybe
more than a few): "it was a management screw-up, and nothing can stop that".
Others have been implying (if not saying directly; this probably includes
Bertrand) that the point is that a contract mentality can indeed be applied
between whole teams of people, whole sub-projects and so on. In other words,
one part of the job of managing a giant project like Ariane can in fact be
carried out _using contracts_. To do this would require the managers of the
different teams to agree on contracts (and the rest of the formal interface
specifications) as one (of the many) protocols for co-ordinating their teams.

- thomas beale

Thomas Beale

Jun 30, 2000, 3:00:00 AM6/30/00
to

mjs...@my-deja.com wrote:

> surely a separate interfacing
> and operational limits document is exactly what should be presented to
> black-box re-users.

In normal software engineering parlance, this would be "requirements" and
"test cases".

- thomas beale

Thomas Beale

Jun 30, 2000, 3:00:00 AM6/30/00
to

Darren New wrote:

> I'm not sure why having all the contracts buried in the source code is
> better than having them in a documentation binder, if the code is physically
> tied to an object you're buying, like an IRS. I would hate to have to learn
> how to send email by reading the source code of sendmail. Isn't it better to
> have, say, an RFC to tell you what you need to do?

Ah - you're missing another point about documentation, code etc which relates to
all languages: the _primary expression_ of the contracts is in the code
(remembering that in Eiffel and even well-written C++, and I guess Ada as well),
but it can certainly be replicated elsewhere in documentary form.

In 1990 I was doing this with C which contained embedded comments for contracts;
the self-extracting man pages were run through an nroff format script and
interpolated with DDDs (Detailed Design Documents). In some cases - for notable
modules, they were interpolated through higher level documents.

More recently, I have used the common practice of using Eiffel -> MML -> FrameMaker
files which are interpolated into a FrameMaker book; all this can be done on unix
at least using make files and command lines to Frame.

Even more recently, I have stopped bothering with this practice, and just resorted
to putting HTML links into architecture documents (or wherever), which point to
complete online web trees of flat/short views of source. (Remember that the effects
of flat/short and selective export reduce what you see to what really is public).
See http://www.gehr.org for an example (follow the Eiffel link; follow a few
clusters down to classes, you will start seeing contracts. No code though....).

Most software engineers have done or experienced similar things. The "Soda" tools
for C++ do this kind of thing. So there is no need to talk of anything being
"buried" in the code.

> > Secondly, contracts (let's assume they knew about them) belong in the
> > code, not in some other document.
>
> They *are* in the code. That's why the program threw an exception.
> Otherwise, it would have worked like Eiffel and *ignored* the overflow.

I don't know what was in the code, but what you are talking about sounds more like
an exception mechanism, not a contract mechanism. And Eiffel's contract mechanism
would not have ignored anything, on the contrary it would have induced an
exception; the point is that the exception would have been induced at a
well-defined place in the code - where the contractual precondition was, not in
some far away place, making debugging difficult.

> Everyone involved in the coding knew that sometimes, in Ada, a float being
> cast to an integer throws an exception. It's a standard Ada kind of thing.
> You get to turn it on and off in the code. The contract was right there. "BH
> has to fit in a 16 bit integer at this point in the code." In what relevant
> sense is that not just as much a contract as anything else in Eiffel?

One could say this was an implicit assertion of the form "BH <= 32767 and BH >=
-32768" every time integers are handled in Ada. So there is an _assertion_ but no
contract - a contract would consist of assertions for pre- and post- conditions
(maybe just pre-condition - nothing stopping that) for the routine doing the call -
in other words they would have been understood as part of the routine
specification, not just an implicit assertion somewhere in the code. A contract
could be thought of as design-level assertions.

> > Thirdly, you can compile (certain) classes (or clusters) with or without
> > precondition checking on, so there is no run-time penalty for the final
> > code to specify them anyway.
>
> Well, actually, not in this case. In this case, the problem was that a float
> overflowed a 16-bit integer. How do you selectively turn off such checking
> in Eiffel, on a conversion-by-conversion basis? Bonus points for even

Well, 16 and 64 bit integers don't exist natively in Eiffel (many argue they
should) but if they did, failed conversions would generate the same kinds of
exceptions as divide-by-zero and segmentation fault already do. You can write
catch-all rescue clauses or override the default_rescue routine in classes to stop
the whole thing dying easily enough.

- thomas beale

Thomas Beale

Jun 30, 2000, 3:00:00 AM6/30/00
to

Darren New wrote:

> Thomas Beale wrote:
> > Ah - you're missing another point about documentation, code etc which relates to
> > all languages: the _primary expression_ of the contracts is in the code
> > (remembering that in Eiffel and even well-written C++, and I guess Ada as well),
> > but it can certainly be replicated elsewhere in documentary form.
>

> That works only when you only have one version of a program. I.e., by doing
> this, you're basically assuming that all versions of the program you write
> will be structured the same way. It really doesn't work very well, methinks,
> for stuff like internet protocols.

Do you mean documenting internet protocols from their code? I'm not saying that what is
in the code is the only documentation - far from it - you will see as much from the
technical documents on www.gehr.org; but the _formal expression_ wants to be in only one
place - in the software where it can not only be expressed formally, but where it can be
computed. And there should be formal expressions of many things which are not formally
expressed in today's software.


> > Most software engineers have done or experienced similar things. The "Soda" tools
> > for C++ do this kind of thing. So there is no need to talk of anything being
> > "buried" in the code.
>

> Fair enough. But given that the restrictions were "buried" in outside documentation, I
> wouldn't imagine that putting it in the code would make it *less* buried.

It would if you put it there first, and if that was the natural place to put it. In the
last business system I did, we spent 8 weeks doing requirements with 5 people, and put
the formal statements of business entities and contracts into Eiffel classes and/or the
BON case tool. We didn't think of this as coding, but we were certainly creating content
in source files. The Eiffel created thus was used to generate HTML documents for the
project web, for review by business analysts (i.e. people who can't ordinarily read
source code).

> > And Eiffel's contract mechanism
> > would not have ignored anything, on the contrary it would have induced an
> > exception;
>

> That's what crashed the rocket. An exception was induced because of a
> numeric conversion. Eiffel's contract mechanism would have ignored it
> because there wasn't CPU time to be checking preconditions.

I should be clearer: with contracts turned on at runtime, the exception would have been
generated at the point where the precondition is found; in other words you know in
advance that if you want to protect against this particular event, you can (for example)
put a rescue clause on that routine. If contracts were not compiled to execute at
runtime, the exception would have occurred..... who knows where? probably in the same
place in this particular case, if the overflow was trapped as a hardware error. In which
case, a rescue routine could still have been written in the right place at compile time.
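The point about recovering at a well-defined place can be sketched (illustrative Python, hypothetical names; a `try`/`except` here stands in very roughly for an Eiffel `rescue` clause on the calling routine):

```python
INT16_MIN, INT16_MAX = -32768, 32767

def convert_bh(bh):
    # With contracts "on", the violation surfaces here, at a
    # well-defined place: the precondition of the conversion routine.
    if not (INT16_MIN <= bh <= INT16_MAX):
        raise AssertionError("precondition: BH out of 16-bit range")
    return int(bh)

def guarded_convert(bh):
    # Rough analogue of protecting the routine with a rescue clause:
    # recover with a documented saturating fallback instead of dying.
    try:
        return convert_bh(bh)
    except AssertionError:
        return INT16_MAX if bh > 0 else INT16_MIN
```

Whether saturating is an acceptable recovery is of course an application decision; the sketch only shows that the handler sits next to the contract it protects.
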

> > the point is that the exception would have been induced at a
> > well-defined place in the code - where the contractual precondition was, not in
> > some far away place, making debuggin difficult.
>

> So what you're saying is, had they used Eiffel, they would have calculated
> what values of BH wouldn't fit in a 16-bit integer, and propagated that
> information up the call stack in the form of preconditions on whatever

potentially; in the form of precondition assertion violation exceptions. It is still up
to the software engineers to put in some kind of rescue code though.

> variables go into BH. But there's still no evidence that would have solved
> the problem, due to the nature of BH.
>

> > One could say this was an implicit assertion of the form "BH <= 32768 and BH >=
> > -32767" everytime integers are handled in Ada. So there is an _assertion_ but no
> > contract - a contract would consist of assertions for pre- and post- conditions
> > (maybe just pre-condition - nothing stopping that) for the routine doing the call -
>

> Yes. You just gave it. The routine doing the call that converted BH to an
> integer had a precondition on that call that BH had to fit in a 16-bit
> integer. I'm not sure why you don't think this is a contract.

A contract is an explicitly stated agreement between client and supplier modules /
classes. There can be many more assertions than contracts. E.g. in Eiffel you can write
"check" instructions wherever you want. In C, C++ you can put "assert(...)" wherever you
want. But these are not seen as part of the contract of the routine. One might argue they
should be, but to make that a realistic proposition you would need tools to extract such
statements and add them to the existing explicit contracts, to provide the complete
documentation of the contract for each routine.

> > in other words they would have been understood as part of the routine
> > specification, not just an implicit assertion somewhere in the code. A contract
> > could be thought of as design-level assertions.
>

> I'm aware of what assertions are, and what contracts are. I'm saying that
> the routine that converted BH to an integer had a precondition that BH was
> in range. BH wasn't. The error wasn't caught, beause the original
> programmers knew it didn't need to get checked.

Ok, so let's call this a contract; then the question becomes why we

> ...> You can write


> > catch-all rescue clauses or override the default_rescue rotuine in classes to stop
> > the whole thing dying easily enough.
>

> Uh, no. There wasn't CPU time to be doing that.

This then is just bad design, to have _no_ CPU time whatsoever available to handle even
the simplest exception handling (and to do it without polluting other memory).

But the real argument about contracts is that if they are part of the inbuilt formal
statement of the software, and by extension, part of the documentation, then re-using a
module with a contract should have prompted the engineers doing the re-use to re-evaluate
all the contracts, and then do testing based on what they decided. So something was
missing here.

> And they *did* have the
> equivalent of a default_rescue, which dumped debugging information onto the
> bus, which crashed the rocket. Why do you think Eiffel's default_rescue
> would have any ability to keep the rocket flying, *even* assuming Eiffel
> supported the application to start with?

Obviously this doesn't happen in normal systems, but I realise we are talking about some
real-time kernel here, so what would have happened would have depended on how the
compiled code was run on this particular kernel. The last time I worked on such systems
was with Motorola VERSAdos on distributed VME control equipment designed to run power
distribution systems, trains etc. We certainly did not let missed exceptions occur, nor
overwrite other memory areas. There are too many lives at stake. (More lives than for the
rocket, although I am sure our "real-time" was not quite as fast as Ariane's
"real-time"!).

- thomas beale


Darren New

Jun 30, 2000, 3:00:00 AM6/30/00
to
Thomas Beale wrote:
> but the _formal expression_ wants to be in only one
> place - in the software where it can not only be expressed formally, but where it can be
> computed.

That's my point. Which code? Yours? Mine? Microsoft's?

> And there should be formal expressions of many things which are not formally
> expressed in today's software.

There are. Estelle. LOTOS. SDL. All spring to mind as better formalisms for
network protocols (for example) than Eiffel. All of which are computable.
(Well, maybe not SDL, it's been too long to remember.)

> I should be clearer: with contracts turned on at runtime, the exception would have been
> generated at the point where the precondition is found; in other words you know in
> advance that if you want to protect against this particular event, you can (for example)
> put a rescue clause on that routine.

No. You can't. You didn't have the CPU time to be checking that.

> > > the point is that the exception would have been induced at a
> > > well-defined place in the code - where the contractual precondition was, not in
> > > some far away place, making debuggin difficult.
> >
> > So what you're saying is, had they used Eiffel, they would have calculated
> > what values of BH wouldn't fit in a 16-bit integer, and propagated that
> > information up the call stack in the form of preconditions on whatever
>
> potentially; in the form of precondition assertion violation exceptions. It is still up
> to the software engineers to put in some kind of rescue code though.

Let's see... They *did* put in the assertions. The exception didn't occur
until after the device was supposed to be ignored anyway. The fact that the
device was turned on later after reuse caused its input to still be
considered by other parts of the system when the overflow did occur. The
rescue code was there, it's what caused the rocket to crash. I still fail to
see where Eiffel would have made the rocket fly.

> > Yes. You just gave it. The routine doing the call that converted BH to an
> > integer had a precondition on that call that BH had to fit in a 16-bit
> > integer. I'm not sure why you don't think this is a contract.
>
> A contract is an explcitly stated agreement between client and supplier modules /
> classes.

Yes. The supplier module being INTEGER or FLOAT or whatever, the caller
being the one passing in the out-of-range value for BH, right?

> > I'm aware of what assertions are, and what contracts are. I'm saying that
> > the routine that converted BH to an integer had a precondition that BH was
> > in range. BH wasn't. The error wasn't caught, beause the original
> > programmers knew it didn't need to get checked.
>
> Ok, so let's call this a contract; then the question becomes why we

Yeeessssss? :-)

> > ...> You can write
> > > catch-all rescue clauses or override the default_rescue rotuine in classes to stop
> > > the whole thing dying easily enough.
> >
> > Uh, no. There wasn't CPU time to be doing that.
>
> This then is just bad design, to have _no_ CPU time whatsoever available to handle even
> the simplest of excpetion handling (and to do it without polluting other memory).

No, they handled it in 4 out of 7 of the cases. There wasn't "no" CPU time
whatsoever. There was limited CPU time, and they used it to check the
exceptions they thought it might throw. They didn't pollute other memory.
Have you not read the report? Or even the three paragraphs of it I posted?
They had turned off checking in the routine because they'd proven it
couldn't happen.

> But the real argument about contracts is that if they are part of the inbuilt formal
> statement of the software, and by extension, part of the documentation, then re-using a
> module with a contract should have prompted the engineers doing the re-use to re-evaluate
> all the contracts, and then do testing based on what they decided. So something was
> missing here.

Yeah! Something was missing! But it wasn't Eiffel. :-) They had the
documentation. They knew where it was and what it needed. They just came to
the wrong conclusion. I just haven't seen any evidence that had they been
using Eiffel instead of Ada, they would have magically realized that some
circumstances had changed and needed to be different, given that they didn't
have current test data.

You know, I *like* Eiffel. It's good. It formalizes a lot of stuff that
other languages don't. I just don't think it would have prevented the crash
in this case. It's easy to *say* "Well, you should go through every routine
in millions of lines of code that already work reliably and run this
hardware properly, to look for something different." But you can do that
with Ada or with Eiffel.

> > And they *did* have the
> > equivalent of a default_rescue, which dumped debugging information onto the
> > bus, which crashed the rocket. Why do you think Eiffel's default_rescue
> > would have any ability to keep the rocket flying, *even* assuming Eiffel
> > supported the application to start with?
>
> Obviously this doesn't happen in normal systems, but I realise we are talking about some
> real-time kernel here, so what would have happened would have depended on how the
> compiled code was run on this particular kernel.

Yah. In the rocket, it caused a CPU trap, which set bits in its diagnostic
register, that were interpreted as instructions. Oops. But since nobody was
even supposed to be paying attention to that unit after takeoff (it was an
alignment routine), there's no reason to think Eiffel would have especially
helped.

Oh well. Enough nonsense. :-)

Ed Falis

Jun 30, 2000, 3:00:00 AM6/30/00
to
Robert Dewar <robert...@my-deja.com> wrote:
> But the real issue is whether contracts as provided in Eiffel are
> sufficiently more useful than the simple pragma Assert found in
> typical Ada compilers. All the examples in this message are in the
> category of simple assertions, and I think it is far from clear that a
> case can be made for anything more complex. Yes, I know examples can
> be constructed, but I am struck by the fact that virtually 100% of
> examples given do not need the additional complexity.


Having used both Ada assertions and Eiffel DbC, I find one significant
difference, and one notational convenience in the latter.

The first is that preconditions and postconditions are inherited by
descendants, and have strict rules about the manner in which they can be
extended, which are checked by the compiler. This could be done
manually, but would involve a lot of cutting and pasting with pragma
Assert.

Invariants, which must hold following initialization, and after exit
from any exported routine, are much more easily implemented in the
Eiffel notation. Again, they could be done with pragma Assert, but at a
cost of a great deal of drudgery. There would still be no compiler
support to ensure that they'd been put in.
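Both mechanisms can be mimicked by hand, which shows the drudgery involved (an illustrative Python sketch with invented classes; Eiffel checks all of this automatically, and its `require else` rule is what the descendant's weakened precondition imitates):

```python
class Stack:
    def __init__(self):
        self.items = []
        self._check_invariant()

    def _check_invariant(self):
        # Class invariant: must hold after creation and after
        # exit from every exported routine.
        assert len(self.items) >= 0

    def _pop_pre(self):
        # Precondition for pop; descendants may only *weaken* it.
        return len(self.items) > 0

    def push(self, x):
        self.items.append(x)
        self._check_invariant()

    def pop(self):
        assert self._pop_pre(), "precondition violated: empty stack"
        x = self.items.pop()
        self._check_invariant()
        return x

class DefaultingStack(Stack):
    # Weakened precondition, in the spirit of Eiffel's `require else`:
    # popping an empty stack is now a legal call that yields None.
    def _pop_pre(self):
        return super()._pop_pre() or True

    def pop(self):
        if not self.items:
            return None
        return super().pop()

print(DefaultingStack().pop())  # -> None
```

Every `_check_invariant()` call here is exactly the hand-placed drudgery being described, with no compiler support to ensure none was forgotten.
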

I haven't given much thought as to whether pragma Assert could simulate
the use of Eiffel agents in predicates, but I suspect from the cases
where I've used them that it would get pretty complex. And I did find
them very useful in the last substantial piece of software I built.

So, I do find DbC as implemented in Eiffel to be more powerful, cleaner
and more useful than pragma Assert. But, like most things, it depends on
how you use them, and how integrated they are in the development
process.

- Ed

Igor Boukanov

Jun 30, 2000, 3:00:00 AM6/30/00
to
In comp.lang.eiffel pc_c...@my-deja.com wrote:
> I do not agree. DbC means *integrating* contracts, and thus the
> specification of the code, with the implementation of the code. It
> is impossible to reuse a given piece of code written with DbC without
> actually seeing the contracts. It is impossible to reuse a given method/
> function/procedure without seeing the contracts belonging to the
> given method/function/procedure. Just as it is impossible not to see
> and take into consideration the signature of the method/function/
> procedure.
>
> Seeing the contracts will alert a potential programmer, about to
> reuse some piece of code, to the specific contracts/specifications
> for the code he/she is about to reuse. It will make him/her think
> about how he/she can reuse the code. And that is independent of
> testing the code with contracts turned on or not.

But what if the contract is complicated, with a lot of conditions
etc., and one is under pressure to deliver a product? One may
decide that the current understanding of the contract is good enough
to proceed with reuse, and IMO that is a management decision. And if
you cannot fully test because tests are expensive, then the error
would be caught only at delivery time.

Of course anything that reduces the possibility of violating the
specifications is a plus, and Eiffel's integration of contracts
with class/method signatures is extremely helpful, but then it
would be interesting to know whether the Ariane situation was complicated
enough that the particular reuse error could not be seen at a glance.

Regards, Igor

Robert Dewar

Jun 30, 2000, 3:00:00 AM6/30/00
to
In article <2000063000...@192.168.0.2>,

Ed Falis <efa...@pop.ne.mediaone.net> wrote:
> So, I do find DbC as implemented in Eiffel to be more
powerful, cleaner
> and more useful than pragma Assert. But, like most things, it
depends on
> how you use them, and how integrated they are in the
development
> process.


Yes, indeed, and my point is simply that in the great majority
of Eiffel code I have seen, even including examples that are
posted to demonstrate this facility, it is almost always the
case that simple assertion pragmas would have done as well.

gmc...@my-deja.com

Jun 30, 2000, 3:00:00 AM6/30/00
to
In article <8ji9k3$8bk$1...@nnrp1.deja.com>,

On the level of individual code snippets, you're right. However Ed's
point was the power obtained from DBC's support for inheritance.
Spend some time browsing the source and the flat forms to an
extensive library, and you get to see how this plays out. A child
of say, COLLECTION, such as ARRAY, can extend the contract
in various ways, but it doesn't need to reproduce COLLECTION's
assertions. Other code generalized to handle COLLECTIONs need not worry
about the nature of the extensions, since the ARRAY still implicitly
binds to the COLLECTION's contract. And all this happens even as
functions are renamed or redefined and implementation details shift
around in the inheritance hierarchy.
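
As an illustration only (hypothetical classes, with plain Python assertions
standing in for Eiffel's require/ensure clauses, which the compiler inherits
automatically), the pattern can be sketched like this:

```python
class Collection:
    """Parent: owns the contract for `put`."""

    def __init__(self):
        self._items = []

    def put(self, item):
        # COLLECTION's precondition: descendants need not restate it;
        # callers rely on it for any COLLECTION.
        assert item is not None, "require: item /= Void"
        self._do_put(item)
        # COLLECTION's postcondition: the item is now present.
        assert item in self._items, "ensure: has(item)"

    def _do_put(self, item):
        self._items.append(item)


class Array(Collection):
    """Child: redefines the implementation, inherits the contract."""

    def _do_put(self, item):
        # A different implementation detail; the caller-visible contract
        # of `put` is still COLLECTION's, unchanged and unrepeated.
        self._items.insert(0, item)


def fill(c, values):
    # Generalized code: written against COLLECTION's contract only.
    for v in values:
        c.put(v)
    return len(c._items)
```

Here `fill` knows nothing about ARRAY: `fill(Array(), [1, 2, 3])` succeeds,
while `fill(Array(), [None])` trips the precondition inherited from the parent.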

Ed Falis

unread,
Jun 30, 2000, 3:00:00 AM6/30/00
to
Robert Dewar <robert...@my-deja.com> wrote:
> Yes, indeed, and my point is simply that in the great majority of
> Eiffel code I have seen, even including examples that are posted to
> demonstrate this facility, it is almost always the case that simple
> assertion pragmas would have done as well.

Point well-taken.

- Ed


Igor Boukanov

unread,
Jun 30, 2000, 3:00:00 AM6/30/00
to
In comp.lang.eiffel gmc...@my-deja.com wrote:
> In article <8ji9k3$8bk$1...@nnrp1.deja.com>,

> On the level of individual code snippets, you're right. However Ed's
> point was the power obtained from DBC's support for inheritance.
> Spend some time browsing the source and the flat forms to an
> extensive library, and you get to see how this plays out. A child
> of say, COLLECTION, such as ARRAY, can extend the contract
> in various ways, but it doesn't need to reproduce COLLECTION's
> assertions. Other code generalized to handle COLLECTIONs need not worry
> about the nature of the extensions, since the ARRAY still implicitly
> binds to the COLLECTION's contract.

Unless you use that select trick or covariant redefinitions, which
permit ARRAY to violate COLLECTION's contract. I must admit it is not
a frequent problem, but still, you do not have it in Ada, do you?

Regards, Igor

mjs...@my-deja.com

unread,
Jun 30, 2000, 3:00:00 AM6/30/00
to
In article <395BFB31...@san.rr.com>,

dn...@san.rr.com wrote:
>
> > But the real argument about contracts is that if they are part of
> > the inbuilt formal statement of the software, and by extension, part
> > of the documentation, then re-using a module with a contract should
> > have prompted the engineers doing the re-use to re-evaluate all the
> > contracts, and then do testing based on what they decided. So
> > something was missing here.
>
> Yeah! Something was missing! But it wasn't Eiffel. :-) They had the
> documentation. They knew where it was and what it needed. They just
> came to the wrong conclusion. I just haven't seen any evidence that
> had they been using Eiffel instead of Ada, they would have magically
> realized that some circumstances had changed and needed to be
> different, given that they didn't have current test data.

This seems like as good a place as any to ask another question.
Something that nobody here has yet mentioned (unless I missed it) is
that the offending routine only ran for a preset time (about 40 seconds
after liftoff) and then stopped. Therefore, the offending signal was
only constrained during those 40 seconds of execution. How does one
express this in an Eiffel contract?

Mike

Robert A Duff

unread,
Jun 30, 2000, 3:00:00 AM6/30/00
to
Bertrand Meyer <Bertran...@eiffel.com> writes:

> I believe the other problems encountered and detected also involved
> non-trivial assertions. With contracts you can use the full power
> of boolean expressions (and indeed, with Eiffel's agent mechanism,
> arbitrary high-order functionals); for example you can have
> a precondition of the form
>
> some_property (x)

What is the "agent" mechanism?

- Bob

Bertrand Meyer

unread,
Jun 30, 2000, 3:00:00 AM6/30/00
to
I wrote:

>>>> I believe the other problems encountered and detected also involved
>>>> non-trivial assertions. With contracts you can use the full power
>>>> of boolean expressions (and indeed, with Eiffel's agent mechanism,
>>>> arbitrary high-order functionals); for example you can have
>>>> a precondition of the form

>>>> some_property (x)

Robert A Duff asked:

> What is the "agent" mechanism?

It's a mechanism for adding the full power of functional operators
to Eiffel in a type-safe way. An agent is an object representing
an operation (usually a routine) ready to be applied to certain
operands; some of the operands ("closed") are set in the agent's
definition, other ("open") are provided only when the agent is
called. It's really lambda expressions, or combinators; the
mechanism is similar in spirit to Smalltalk's closures (but typed).
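
A rough analogue in another language (this is not Eiffel syntax, and the
names are invented): the closed/open-operand distinction corresponds to
partial application.

```python
from functools import partial

def scaled_add(scale, x, y):
    # An operation with three operands.
    return scale * (x + y)

# "Closed" operand: scale is fixed when the agent is built.
agent = partial(scaled_add, 10)

# "Open" operands: x and y are supplied only when the agent is called.
assert agent(2, 3) == 50   # 10 * (2 + 3)
```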

Agents were introduced into Eiffel more than a year ago and have
already been extensively used in a number of applications,
in particular in the financial industry, where they have proved essential.
They play a major role in the new EiffelVision 2 library.

Among released versions I believe only ISE supports agents for the
moment, but it's definitely an Eiffel construct, with nothing specific
to ISE Eiffel. The description is public as part of the ETL revision
(see reference below). The leader of one other compiler development has told
me that he was planning to support them. (I haven't had the opportunity
to ask others.) My opinion is that agents have proved to be an integral
part of the Eiffel approach (even if that hadn't been recognized initially)
and that everyone will soon support them. People who have started using
agents don't want to go back... In particular, in spite of a comment
to the contrary that I saw some time ago (whether on this forum or
another I forgot, sorry), they fit very well with the rest of the
O-O approach, and in practice it becomes quickly clear when you
should use agents and when they're not appropriate.

Agents have particularly important applications to iterators,
numerical programming, financial applications, and graphical programming.

You may find a good overview of agents in the article by
Paul Dubois, Mark Howard, Michael Schweitzer, Emmanuel Stapf
and myself, "From Calls to Agents", as part of the Eiffel
Column in JOOP (Journal of Object-Oriented Programming),
September 1999. (I wasn't able to find it on the JOOP
web site at sigs.com -- if someone finds it, please post
the URL. I have the text, of course, but I don't think I
can make it available because of copyright issues.)

[By the way, let me throw in a pitch for the
Eiffel column in JOOP. It has provided lots
of information over the months, at least so
I hope, and it's nice that JOOP is, as it has
always been, very receptive to publishing
Eiffel articles. Please patronize them.]

An online introduction to agents appears at

http://www.eiffel.com/doc/manuals/language/agent/

in the form of a chapter from the manuscript-in-progress
of a future new edition of "Eiffel: The Language".

-- Bertrand Meyer

Eirik Mangseth

unread,
Jul 1, 2000, 3:00:00 AM7/1/00
to
Is there a performance penalty when using agents?
I do not wish to detract from their usefulness, 'cause
that's certainly been proven, so I'm just curious as to
how they compare to using loops directly.

Best regards

Eirik Mangseth (who really wishes he could attend TOOLS 2000)
Stockpoint Nordic AS, Lysaker, Norway

"If I can't Eiffel in heaven, I won't go"

"Bertrand Meyer" <Bertrand_Meyer/nos...@eiffel.com> wrote in message
news:395D113D...@eiffel.com...

Bertrand Meyer

unread,
Jul 1, 2000, 3:00:00 AM7/1/00
to
Eirik Mangseth wrote:

> Is there a performance penalty when using agents?
> I do not wish to detract from their usefulness, 'cause
> that's certainly been proven, so I'm just curious to
> how they compare to using loops directly.

Yes, there is an overhead. For an iterator that applies a void
action the current estimate is an execution time ratio of about
3 between an agent-based iterator and a directly programmed loop.
Of course in a realistic application the action will do something
significant and the more it does the lower this factor will be,
but the penalty remains significant.
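
The factor of 3 quoted above is for the Eiffel implementation; purely as an
illustration of where such overhead comes from (one indirect call per
element), here is a sketch in Python. Any ratio it measures is specific to
that runtime, not to Eiffel.

```python
import timeit

def do_all(items, action):
    # Agent-style iterator: applies `action` (the agent) to each item.
    for x in items:
        action(x)

items = list(range(1000))

# Functional equivalence of the two styles:
out_agent = []
do_all(items, lambda x: out_agent.append(x + 1))  # indirect call per element
out_loop = [x + 1 for x in items]                 # directly programmed loop
assert out_agent == out_loop

# Relative cost (the ratio varies by runtime and workload):
t_agent = timeit.timeit(lambda: do_all(items, lambda x: x + 1), number=200)
t_loop = timeit.timeit(lambda: [x + 1 for x in items], number=200)
print(f"agent/loop time ratio: {t_agent / t_loop:.1f}")
```

As in the Eiffel case, the heavier the action body, the smaller the relative
overhead becomes.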

We do not recommend getting rid of all loops to replace them
by calls to iterators using agents, at least not without
having checked the performance effect in your particular
case. The main current applications of agents are elsewhere:
graphical programming (as in EiffelVision 2), numerical
programming, financial applications, more extensive
assertions. On the first and last of these, note that:

- With EiffelVision 2, it is possible thanks to
agents to have little or no "glue" code (the
C in MVC) between the business model of an application
(what it actually does), and its GUI clusters.
Thanks to agents you can directly associate
actions from the business model with events
from the graphical interface. This builds on the
notion of command, already present in
EiffelVision 1, but makes it much simpler
to use.

- Regarding assertions, the prospect of a fully
contracted EiffelBase -- a set of data structure
and algorithms classes equipped with full
specifications -- seems attainable.

The current performance overhead, by the way, is not inevitable.
There will always remain a penalty but it may be possible
to bring it down considerably if the goal of loop-free programming
becomes paramount to Eiffel users. Outside of iterator-like uses,
agents have proved to pose no global performance problem.

-- Bertrand Meyer
ISE Inc.
http://www.eiffel.com http://www.tools-conferences.com

Robert Dewar

unread,
Jul 2, 2000, 3:00:00 AM7/2/00
to
In article <395E5D16...@eiffel.com>,

Bertrand_Meyer/Nos...@eiffel.com wrote:
> Yes, there is an overhead. For an iterator that applies a void
> action the current estimate is an execution time ratio of
> about 3 between an agent-based iterator and a directly
> programmed loop.

I don't see that as fundamental in the language design, it seems
to me that a compiler with appropriate global knowledge can
perfectly well optimize this case. Whether or not the
optimization is in practice worthwhile is another issue.

Eirik Mangseth

unread,
Jul 2, 2000, 3:00:00 AM7/2/00
to

"Bertrand Meyer" <Bertrand_Meyer/nos...@eiffel.com> wrote in message
news:395E5D16...@eiffel.com...

> Eirik Mangseth wrote:
>
> > Is there a performance penalty when using agents?
> > I do not wish to detract from their usefulness, 'cause
> > that's certainly been proven, so I'm just curious to
> > how they compare to using loops directly.
>
> Yes, there is an overhead. For an iterator that applies a void
> action the current estimate is an execution time ratio of about
> 3 between an agent-based iterator and a directly programmed loop.
> Of course in a realistic application the action will do something
> significant and the more it does the lower this factor will be,
> but the penalty remains significant.

I then assume that as the Eiffel closure (a.k.a. agent :) technology
matures it will gain in performance due to better optimisation techniques.

>
> We do not recommend getting rid of all loops to replace them
> by calls to iterators using agents, at least not without
> having checked the performance effect in your particular
> case.

As always.

> The main current applications of agents are elsewhere:
> graphical programming (as in EiffelVision 2),

Yes, I like how they can be (and are) used in EV2.

> numerical
> programming, financial applications,

This is an area where I would like to learn more about
how people have gone about using closures.

> more extensive
> assertions. On the first and last of these, note that:
>
> - With EiffelVision 2, it is possible thanks to
> agents to have little or no "glue" code (the
> C in MVC) between the business model of an application
> (what it actually does), and its GUI clusters.
> Thanks to agents you can directly associate
> actions from the business model with events
> from the graphical interface. This builds on the
> notion of command, already present in
> EiffelVision 1, but makes it much simpler
> to use.
>
> - Regarding assertions, the prospect of a fully
> contracted EiffelBase -- a set of data structure
> and algorithms classes equipped with full
> specifications -- seems attainable.

Yes, the ability to have quantifiers like for_all and
there_exists can't be anything but a boon to DbC.
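
In Python terms (a sketch only; Eiffel's agent-based quantifiers are library
features and look different), such quantified contract clauses read like:

```python
def sqrt_all(values):
    # Precondition with a universal quantifier ("for_all"):
    assert all(v >= 0 for v in values), "require: for_all v: v >= 0"
    result = [v ** 0.5 for v in values]
    # Postcondition, also universally quantified over the results:
    assert all(abs(r * r - v) < 1e-9 for r, v in zip(result, values))
    # An existential quantifier ("there_exists") reads the same way with any():
    assert any(r >= 0 for r in result) or not result
    return result
```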

>
> The current performance overhead, by the way, is not a fatality.
> There will always remain a penalty but it may be possible
> to bring it down considerably if the goal of loop-free programming
> becomes paramount to Eiffel users. Outside of iterator-like uses,
> agents have proved to pose no global performance problem.
>
> -- Bertrand Meyer
> ISE Inc.
> http://www.eiffel.com http://www.tools-conferences.com

Eirik M

Bertrand Meyer

unread,
Jul 2, 2000, 3:00:00 AM7/2/00
to
Robert Dewar wrote:
>
> In article <395E5D16...@eiffel.com>,
> Bertrand_Meyer/Nos...@eiffel.com wrote:
> > Yes, there is an overhead. For an iterator that applies a void
> > action the current estimate is an execution time ratio of
> > about 3 between an agent-based iterator and a directly
> > programmed loop.
>
> I don't see that as fundamental in the language design, it seems
> to me that a compiler with appropriate global knowledge can
> perfectly well optimize this case.

Yes indeed. As was noted elsewhere in my message we think we
could bring this overhead down considerably. But since
Eirik Mangseth asked whether there is an overhead, the answer
had to be about what the overhead is today, not what it could be.
Hand-waving is easy...

> Whether or not the
> optimization is in practice worthwhile is another issue.

An issue for the language users to decide. For current uses
of agents the overhead doesn't seem to be a problem. If users
start clamoring for a loop-free, iterator-based style of programming,
then we'll have to take a second look at the issue.

Joachim Durchholz

unread,
Jul 4, 2000, 3:00:00 AM7/4/00
to
<mjs...@my-deja.com> wrote:
>
> Nope -- would it be able to separate out only those contracts that
> applied to the physical black-box inputs, and label those
> contracts in a clear manner (ignoring the internal Eiffel class,
> method and variable names, which the interfacer doesn't care
> about)? For example, in the Eiffel Ariane paper are the
> following lines:
>
> require
> horizontal_bias <= Maximum_bias
>
> but what the interfacer wants is going to be more like:
>
> Horizontal Bias (connector J4, pin 6)
> valid range: 0 to +3.45V (0 to 62 m/s)

I'd model the black box as an Eiffel class. I'd need a class framework
for identifying connectors and pins. In the black box class, I'd need
some specifications to provide interpretation for the voltages.

To get hardware and software integrated, I'd build an Eiffel class that
simulated the hardware (to be precise: everything that's outside the
domain of the productive Eiffel software). The routines of that
simulation would be used only in contracts. (This also defines how much
needs to be simulated: if the simulation has a function that's never
called from a contract, then that function is obviously superfluous.)
Then I'd build a second class with the same interface, but one that
returns its function values by inspecting the real hardware (querying
ADCs etc.).
The common part of these classes is the specification of the hardware.
I'd factor this specification out into an abstract class. Actually the
hardware supplier should probably write the abstract class and the
hardware-interface class, but given an ordinary hardware specification I
can write it myself (this is essentially just establishing traceability
from my software to the hardware specs, so this is an effort that I'll
have to spend anyway). The simulation class will have a lot of
additional machinery for initialization, simulation control, etc., just
like a normal hardware simulation.

Actually this pattern applies at multiple levels. It can be used to
describe the interface between software and memory-mapped ADC address,
between ADC and external connectors, between connectors and physical
sensor, and between physical sensor and physical reality. Adjacent
interfaces in this chain can be combined, yielding a connector-physical
interface, an ADC-physical interface, a memorymap-physical interface,
and ultimately an Eiffel-physical interface. The contracts on each
combined interface can be derived immediately from the contracts of each
component interface. If the specifications are formal enough and proper
tools are available, the combined interfaces should even be
automatically derivable, eliminating a source of potential errors. The
only thing that still needs to be written by hand is the software
simulation for each software-to-<whatever> interface that one actually
cares to write.
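
A minimal sketch of the layered pattern in Python (all names, the pin
assignment, and the 0..3.45 V range are hypothetical, taken from the earlier
example in this thread):

```python
from abc import ABC, abstractmethod

class HorizontalBiasSource(ABC):
    """Abstract class: the factored-out hardware specification."""

    MAX_VOLTS = 3.45  # assumed valid range at the connector

    @abstractmethod
    def volts(self) -> float:
        """Raw reading at connector J4, pin 6 (hypothetical)."""

    def bias_m_per_s(self) -> float:
        v = self.volts()
        # The contract lives once, here, and binds every implementation:
        assert 0.0 <= v <= self.MAX_VOLTS, "reading outside specified range"
        return v * (62.0 / self.MAX_VOLTS)  # 0..3.45 V maps to 0..62 m/s

class SimulatedSensor(HorizontalBiasSource):
    """Simulation class: used from contracts and tests only."""
    def __init__(self, v: float):
        self._v = v
    def volts(self) -> float:
        return self._v

class RealSensor(HorizontalBiasSource):
    """Hardware-interface class: would inspect the real hardware."""
    def volts(self) -> float:
        raise NotImplementedError("query the memory-mapped ADC here")
```

`SimulatedSensor(3.45).bias_m_per_s()` yields 62 m/s; feeding an out-of-range
voltage trips the shared contract in the abstract class, regardless of which
concrete class produced the reading.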

Caveat: This all is theory. I'd like to set up a small example to
demonstrate this, but I'm lacking the time to do it properly right now.
(Building the abstractions for voltages, connectors, fixed-point
numbers, physical-reality-to-sensor-voltage translation etc. all
requires time to do it right, and I don't want to produce more examples
like the one above.)
I'll post any examples on the Eiffel Forum Wiki at
http://efsa.sourceforge.net/cgi-bin/view/Main/WebHome but I can't tell
when I'll be ready. I'll announce it there when it's done.

Regards,
Joachim
--
This is not an official statement from my employer or from NICE.
Reply-to address changed to discourage unsolicited advertisements.


Joachim Durchholz

unread,
Jul 4, 2000, 3:00:00 AM7/4/00
to
David Starner <dvd...@x8b4e53cd.dhcp.okstate.edu> wrote:
> On Tue, 27 Jun 2000 00:56:24 +0200, Joachim Durchholz
> <joachimdo...@halstenbach.com> wrote:
> >> especially if this works for the 1750 (which, among other oddities,
> >> has 16-bit storage units).
> >
> >Well, I wouldn't expect that this is a problem. Of course, the proof
> >of such a pudding is in the eating, so I won't make any bold claims
> >until I have seen such a beast run. But these portability issues are
> >largely solved in C, so Eiffel just relies on the C semantics.
>
> They're solved in C? For what value of solved?

In the sense of "a reliable, Posix-compatible C compiler exists". It
doesn't have to be gcc.
Porting an Eiffel compiler to a new C compiler is a man-week of raw work
(discounting the paperwork) - been there, done that.
C has well-known portability problems. Eiffel compilers that emit code
for a variety of C compilers will, of course, avoid these problems as
far as possible. The nice thing about C is not that it's portable - C
isn't. The nice thing about C is that its portable subset is so large
that you can do useful things with it.

Joachim Durchholz

unread,
Jul 4, 2000, 3:00:00 AM7/4/00
to
<mjs...@my-deja.com> wrote:
> In article <395AE477...@pobox.com>,
>
> Not at all the same, because the vast majority (99%?) of contracts in
> the code do not percolate out to affect the interface specs.

In this case, the inner contract was either irrelevant for the
interface, or the interface spec is incomplete and thus buggy.
Inner contracts do not automatically percolate up. You have to verify
that every routine called within a routine has its preconditions
established prior to the call; if the caller doesn't establish the
precondition, the caller's caller must establish it, so it must be added
to the preconditions of the caller.
This is work, but it pays off even for the programmers (because they
will see any errors really quickly), so they will actually like to write
these preconditions. This is a huge improvement over Detailed Design
Documents that programmers have to write otherwise.
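
A small sketch of that percolation in Python (hypothetical routines; plain
assertions standing in for require clauses):

```python
import math

def safe_sqrt(x):
    # The inner routine's precondition:
    assert x >= 0, "require: x >= 0"
    return math.sqrt(x)

def magnitude(dx, dy, scale):
    # This caller cannot establish safe_sqrt's precondition for an
    # arbitrary scale, so the constraint percolates up into its own
    # precondition, visible to *its* callers in turn:
    assert scale >= 0, "require: scale >= 0 (needed by safe_sqrt)"
    return safe_sqrt(scale * (dx * dx + dy * dy))
```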

> For
> example, there might be ten different places where a particular input,
> or some manipulation of it, are limited, but there can only be *one*
> actual limit on that input (the intersection of all the internal
> inputs).

There are two cases:
a) The input is limited, and the computation places a constraint on the
input (such as taking a square root implies that the number be
nonnegative). This constraint *will* percolate to the outside, unless
some other stricter constraint properly includes the internal
constraint, in which case we will have situation (b).
b) The computation is limited, but the input values for the computation
are already guaranteed from a contract at the external interface of the
software. In this case, the constraint is percolating from the interface
to the internal contract.

Joachim Durchholz

unread,
Jul 4, 2000, 3:00:00 AM7/4/00
to
Darren New <dn...@san.rr.com> wrote:
>
> > And there should be formal expressions of many things which are
> > not formally expressed in today's software.
>
> There are. Estelle. LOTOS. SDL. All spring to mind as better
> formalisms for network protocols (for example) than Eiffel. All
> of which are computable.

Hmm... I know you're a big fan of these specification methods, but I
don't see how they help.
I just finished reading a few seminar slides about Lotos, so my
understanding is limited to Lotos, and may be further restricted by my
lack of understanding, so please let me elaborate:

From what I gather, the main point of Lotos is that it specifies a
sequence of events (or actions, depending on the point of view).
I can specify these in Eiffel with no real difficulty as well, so
there's no real difference here. Well, OK, in Eiffel specifications are
more implicit: the sequence constraints must be expressed using
preconditions on the object's state. I.e. instead of writing "open file,
read a few lines of data, close the file" I have to write "'open'
ensures 'opened', 'read' and 'close' have a precondition of 'opened'".
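
As a sketch (hypothetical class; assertions stand in for require/ensure),
the same sequencing constraint in state form:

```python
class DataFile:
    """Sequencing expressed through state-based pre- and postconditions."""

    def __init__(self):
        self.opened = False
        self._lines = ["first line", "second line"]

    def open(self):
        self.opened = True
        assert self.opened          # ensure: opened

    def read(self):
        assert self.opened, "require: opened"
        return self._lines.pop(0)

    def close(self):
        assert self.opened, "require: opened"
        self.opened = False
```

Calling `read` before `open` violates the precondition, so the legal call
sequence is encoded implicitly rather than as an explicit event trace.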

Now the question is how this difference makes Lotos more useful or not.
I cannot deny that a Lotos specification is simpler to write. On the
minus side, I'd expect that a Lotos specification tends to be
overspecific.

Of course, an overspecified specification is simpler to verify: you
don't have to verify all cases but only those that are specified. (This
is probably also the reason why safety-critical programming tends to be
overspecific: V&V becomes easier because the number of potential
execution histories to test becomes less.)

On the other hand, I see that the slides on Lotos contain remarks like
"Verification tools will often return inconclusive results" (this is not
a literal quote, just what I gathered). This is, in a sense, a
disappointment: Lotos sacrifices expressivity and *still* doesn't give
me automatic verification.
Admittedly, this criticism is a little cheap: no automatic verifiers for Eiffel exist either.
Actually Eiffel was never optimized for automatic verification, so I'm
pretty sure that some things are more difficult than they need be.
However, I do expect that a language designed for verification could use
DbC and do away with explicit sequence constraints à la Lotos; I even
speculate that specifications in such a language wouldn't be
unmanageably more difficult to verify than a Lotos specification:
assertions nail down exactly those sequence constraints that count; it
is possible that this would even guide a verifier to quickly find those
constraints that are relevant for proving whatever property it's about
to prove.

No, the shortcomings of Eiffel's DbC are of a different, entirely more
trivial nature: It's impossible or difficult to write assertions with
existential or universal quantifiers, and it's impossible to express
temporal constraints on routines of descendants (stuff like "the value
of Current.key will never change while Current is entered into a sorted
list").

(I agree that the run-time checking aspect of DbC, while helpful in
general, wouldn't have helped in the Ariane case. Actually Bertrand's
paper doesn't claim this; Bertrand's paper claims that adding contracts
to the code would have made it easier to detect the inconsistency, and
from my experience with DbC I'd say that this claim is plausible even if
it isn't falsifiable.)

> You know, I *like* Eiffel. It's good. It formalizes a lot of stuff
> that other languages don't. I just don't think it would have
> prevented the crash in this case. It's easy to *say* "Well, you
> should go through every routine in millions of lines of code that
> already work reliably and run this hardware properly, to look for
> something different." But you can do that with Ada or with Eiffel.

Actually just a specific part of DbC would have been sufficient, namely
contracts on routines (including contracts that constrain physical
reality - pure software contracts would indeed not have helped, the
failed precondition was that the rocket's horizontal velocity wouldn't
exceed a specific limit). The other part of DbC (the rules for
inheritance) wouldn't have been necessary to prevent the Ariane crash
(it would have been important in other cases where the root of the
failure had been in a misunderstanding about what a dynamically
dispatched routine's semantics is).

Joachim Durchholz

unread,
Jul 4, 2000, 3:00:00 AM7/4/00
to
Robert Dewar <robert...@my-deja.com> wrote:
>
> Yes, indeed, and my point is simply that in the great majority
> of Eiffel code I have seen, even including examples that are
> posted to demonstrate this facility, it is almost always the
> case that simple assertion pragmas would have done as well.

It's somewhat difficult to post real-life examples in a newsgroup.
Besides, people wouldn't want to read 10+K of source code to verify that
it's easier to write assertions in Eiffel than in Ada.

The important point is that preconditions and postconditions are placed
at a position where an automatic tool can easily pick them off to
generate a detailed design document. C or Ada assert instructions cannot
do that: the tool would be unable to distinguish interface assertions
from assertions that happen to be coded at the beginning or end of a
routine.
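
One way to see the difference: if contracts are attached as structured
metadata rather than buried in the routine body, a documentation tool can
recover them by introspection. A Python sketch (the decorator and names are
invented for illustration):

```python
import functools

def require(desc, pred):
    """Attach a machine-readable precondition to a routine."""
    def wrap(f):
        @functools.wraps(f)
        def checked(*args, **kwargs):
            assert pred(*args, **kwargs), "require violated: " + desc
            return f(*args, **kwargs)
        checked.contract = desc   # visible to tools, unlike an inline assert
        return checked
    return wrap

@require("x >= 0", lambda x: x >= 0)
def int_sqrt(x):
    return int(x ** 0.5)

# A "design document generator" needs only the metadata:
doc_line = f"{int_sqrt.__name__}: require {int_sqrt.contract}"
```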

Joachim Durchholz

unread,
Jul 4, 2000, 3:00:00 AM7/4/00
to
Ken Garlington <Ken.Gar...@computer.org> wrote:
> <pc_c...@my-deja.com> wrote:
>
> > That said, of course no one can say that Eiffel/DbC would definitely
> > have caught the Ariane bug, but then again, as I've understood, no
> > one has claimed so.
>
> The exact claim was: "Does this mean that the [Ariane 5] crash would
> automatically have been avoided had the mission used a language and
> method supporting built-in assertions and Design by Contract? Although
> it is always risky to draw such after-the-fact conclusions, the answer
> is probably yes..."

I think the problem here is one of mutual misunderstanding: Eiffelists
tend to underestimate the complexity and constraints of safety-critical
programming, while programmers in the safety-critical field tend to
underestimate the power of Design by Contract.

I'm in the lucky position of having done a bit of both. From this
position I can say that (1) DbC could have been used to make the
constraints more explicit, to the point that it would have been
plainly obvious for the engineers who integrated the IRS into the
Ariane-5 platform, (2) as always, it depends on the thoroughness that
was applied when using the method, (3) DbC has a much better
effort-to-effect ratio than any other method of keeping detailed design
documents in sync with actual code, so the prospects for point (2) are
better than one would think at first, and (4) the actual effect would
have to be studied in a few test projects before relying on it for large
systems.

Joachim Durchholz

unread,
Jul 4, 2000, 3:00:00 AM7/4/00
to
Igor Boukanov <bouk...@sentef2.fi.uib.no> wrote:
>
> But what if the contract is complicated, with a lot of conditions
> etc., and one is under pressure to deliver a product?

That's an interesting point.
With current Eiffel implementations, it is unsolved: you can make the
contracts more abstract, by putting them into boolean functions, but
then the exact meaning of the contract will not be as obvious as it
would have been before.
However, routines that check contracts are usually simple enough to be
subjected to automated analysis. It should be possible (and in fact not
too difficult) to write a contract verifier tool. The interesting
question here is whether this tool can be made to give diagnostics that
are meaningful for a programmer or a code reviewer.

Darren New

unread,
Jul 4, 2000, 3:00:00 AM7/4/00
to
Joachim Durchholz wrote:
> established prior to the call; if the caller doesn't establish the
> precondition, the caller's caller must establish it, so it must be added
> to the preconditions of the caller.

Well, they did. The caller's caller's caller's caller's caller was flaming
rocket exhaust. It established the precondition by exploding in mid-air. No
problem. ;-)

Seriously, at some point, you have to stop working with preconditions and
start working with external input checking, coming from (say) clocks and
inertial guidence gyroscopes and lasers. It's not obvious from the report
that there was any place to put the precondition, or that it shouldn't have
been an input value check, which might have caused the same problem.

--
Darren New / Senior MTS & Free Radical / Invisible Worlds Inc.
San Diego, CA, USA (PST). Cryptokeys on demand.

"We have large financial earlobes."

wv...@my-deja.com

unread,
Jul 4, 2000, 3:00:00 AM7/4/00
to
In article <8jsvrm$1a1lu$1...@ID-9852.news.cis.dfn.de>,

"Joachim Durchholz" <joachim....@halstenbach.com> wrote:
> David Starner <dvd...@x8b4e53cd.dhcp.okstate.edu> wrote:
> > On Tue, 27 Jun 2000 00:56:24 +0200, Joachim Durchholz
> > <joachimdo...@halstenbach.com> wrote:
>
> C has well-known portability problems.

It has? If C has portability problems, they are probably easier to
circumvent than in any other language that I know of. Can you name an
example of a larger body of code than Linux that runs on Intel, Alpha,
SPARC, SMP, etc.? Last time I counted, /usr/src/linux was over 1.5
million lines of C code.

Ken Garlington

unread,
Jul 4, 2000, 3:00:00 AM7/4/00
to
"Joachim Durchholz" <joachim dot durc...@halstenbach.com> wrote in message
news:8jt4i0$18ec7$1...@ID-9852.news.cis.dfn.de...

> Ken Garlington <Ken.Gar...@computer.org> wrote:
> > <pc_c...@my-deja.com> wrote:
> >
> > > That said, of course no one can say that Eiffel/DbC would definitely
> > > have caught the Ariane bug, but then again, as I've understood, no
> > > one has claimed so.
> >
> > The exact claim was: "Does this mean that the [Ariane 5] crash would
> > automatically have been avoided had the mission used a language and
> > method supporting built-in assertions and Design by Contract? Although
> > it is always risky to draw such after-the-fact conclusions, the answer
> > is probably yes..."
>
> I think the problem here is one of mutual misunderstanding: Eiffelists
> tend to underestimate the complexity and constraints of safety-critical
> programming, while programmers in the safety-critical field tend to
> underestimate the power of Design by Contract.
>
> I'm in the lucky position of having done a bit of both.

The domain of interest is not just "safety-critical programming" - it's
subcontracted real-time embedded high-integrity programming, which is a much
tighter corner. Every one of the terms of this phrase had a bearing in the
Ariane 5 incident, IMHO.

> From this
> position I can say that (1) DbC could have been used to make the
> constraints more explicit, to the point that it would have been
> plainly obvious for the engineers who integrated the IRS into the

> Ariane-5 platform...

Post the code, and let's see. I claim I can rely simply on the Ariane 5
report -- presumably generated by an unbiased source -- to demonstrate that
the issue would have been far less than "plainly obvious" in the Ariane 5
case.

> (2) as always, it depends on the thoroughness that
> was applied when using the method, (3) DbC has a much better
> effort-to-effect ratio than any other method of keeping detailed design
> documents in sync with actual code, so the prospects for point (2) are
> better than one would think at first,

Unfortunately, the "detailed design" is a minor player in the Ariane 5 case,
at best, since we are talking about the interaction of large-scale systems
engineering with software engineering.

> and (4) the actual effect would
> have to be studied in a few test projects before relying on it for large
> systems.

I think I would be much more sanguine about the power of DbC in the
_specific_ circumstances of an Ariane-5 project after it's been demonstrated
in a few _representative_ projects.

Ken Garlington

unread,
Jul 4, 2000, 3:00:00 AM7/4/00
to
"Joachim Durchholz" <joachim dot durc...@halstenbach.com> wrote in message
news:8jt4mc$194u7$1...@ID-9852.news.cis.dfn.de...

> Robert Dewar <robert...@my-deja.com> wrote:
> >
> > Yes, indeed, and my point is simply that in the great majority
> > of Eiffel code I have seen, even including examples that are
> > posted to demonstrate this facility, it is almost always the
> > case that simple assertion pragmas would have done as well.
>
> It's somewhat difficult to post real-life examples in a newsgroup.
> Besides, people wouldn't want to read 10+K of source code to verify that
> it's easier to write assertions in Eiffel than in Ada.

An interesting comment... and one that I think has bearing on the potential
disadvantages of complex assertions that attempt to capture subtle
system-wide constraints. If you've had Fagan training, you know that there
are very real limits on the ability of human readers to efficiently process
such information.

> The important point is that preconditions and postconditions are placed
> at a position where an automatic tool can easily pick them off to
> generate a detailed design document. C or Ada assert instructions cannot
> do that: the tool would be unable to distinguish interface assertions
> from assertions that happened to be coded at the beginning or end of a
> routine.

It is reasonably trivial to generate detailed design documents from Ada
source -- I have a simple tool that generates Interleaf-compatible files
from Ada text -- and you can readily suppress parts of the program that are
not of interest. One of the more straightforward ways of doing this is to
use extended comments, such as those used by ADADL for years.

As an alternative, one "feature" of a prior version of GNAT was the ability
to put assertions in the declarative (interface) part of a specification. In
fact, I used this technique in my example of scaling at
http://www.flash.net/~kennieg/ariane.html#a1. I think GNAT may have been
changed to prohibit this behavior, but I've never checked it. Certainly, if
assertions are limited to range checks, it is in fact quite easy in Ada to
incorporate those into the interface, and tools such as SPARK (and the
compiler itself, for that matter) depend upon this feature.
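The range-check approach described here can be sketched in miniature (Python
stands in for Ada purely for illustration; the function is a hypothetical
stand-in for the Ariane-style horizontal-bias conversion, not the actual
flight code):

```python
def to_int16(value: float) -> int:
    # Hypothetical stand-in for a 64-bit-float to 16-bit-integer
    # conversion: the range check is stated at the interface boundary,
    # not buried inside the body.
    assert -32768.0 <= value <= 32767.0, f"{value} exceeds 16-bit signed range"
    return int(value)
```

A caller passing an out-of-range value (as the unprotected Ariane conversion
did) trips the check immediately instead of silently wrapping.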

Darren New

unread,
Jul 5, 2000, 3:00:00 AM7/5/00
to
Joachim Durchholz wrote:
> Hmm... I know you're a big fan of these specification methods,

Not especially. I'm just familiar with them.

And before going further, let me disclaim that it's been a decade since I
looked at this stuff, so detailed syntax might be wrong.

> but I don't see how they help.

I'll try to give some examples of stuff you can do with (say) LOTOS that you
can't do with Eiffel.

> From what I gather, the main point of Lotos is that it specifies a
> sequence of events (or actions, depending on the point of view).

Yes. But the thing to remember is that these events/actions are essentially
I/O actions. They're interactions between the implementation and the outside
world.

> I can specify these in Eiffel with no real difficulty as well, so
> there's no real difference here.

No, not really. When LOTOS says
!prompt ; ?password ; (epsilon ; command_loop | close_session)
it's saying first I wait until you're ready for me to
issue a prompt, then I wait until you give me a password,
then I do something you can't influence, followed by
either going into the command loop or closing the session.

It does *not* say what I'm doing in the meantime. It does *not*
say how I decide whether to go into the command loop or to
close the session. It give no code at all, actually.

It's saying something very different than what you can really
specify easily in Eiffel.
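Read as a constraint on observable traces, that behaviour expression admits
exactly two complete traces (the internal action is unobservable, so it does
not appear in them). A minimal trace-check sketch, in Python purely for
illustration - real LOTOS tooling does far more than enumerate traces:

```python
# The two complete observable traces the behaviour expression permits;
# the internal action 'epsilon' is unobservable and so is absent.
ALLOWED_TRACES = {
    ("prompt", "password", "command_loop"),
    ("prompt", "password", "close_session"),
}

def spec_accepts(trace):
    """True iff the observed I/O trace is one the specification permits."""
    return tuple(trace) in ALLOWED_TRACES
```

The point is that the specification constrains only the interaction with the
outside world, saying nothing about the computation in between.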

> Well, OK, in Eiffel specifications are
> more implicit: the sequence constraints must be expressed using
> preconditions on the object's state. I.e. instead of writing "open file,
> read a few lines of data, close the file" I have to write "'open'
> ensures 'opened', 'read' and 'close' have a precondition of 'opened'".

But this is where it is different. The "file handle" object can express its
allowable sequence. The code that calls the file handle object can express
what it wants to do. But Eiffel doesn't give you any means of looking at the
code and knowing those two will go together.
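The state-based encoding described above can be sketched like this (Python
stands in for Eiffel, with assertions playing the role of contract clauses;
the names are illustrative). Note that it expresses only what the file handle
itself allows - which is exactly the limit being pointed out here:

```python
class File:
    """Sequence constraints encoded as preconditions on object state:
    'open' ensures 'opened'; 'read' and 'close' require it."""

    def __init__(self):
        self.opened = False

    def open(self):
        self.opened = True            # postcondition: opened

    def read(self):
        assert self.opened, "precondition violated: file not opened"
        return "line of data"

    def close(self):
        assert self.opened, "precondition violated: file not opened"
        self.opened = False
```

Nothing in this encoding relates the handle's allowable sequence to the
sequence a particular caller actually performs.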

Actually, LOTOS is utterly different from that. LOTOS is far closer to SCOOP
than it is to single-threaded Eiffel.

Eiffel's sequence constraints aren't like LOTOS's, as there's no concept of
making a decision about what to do next. There's no way to look at (say) a
chunk of an Eiffel program and find the maximum number of stack elements
this particular stack object will ever hold. (I can say this with confidence
because Eiffel doesn't have formal semantics.)

> Now the question is how this difference makes Lotos more useful or not.

It does, because you've kind of missed the semantics. You're looking at one
LOTOS statement and saying "it tells me what sequence I can do things in."
But that's the basic statement of LOTOS, and the power comes from looking at
a *bunch* of LOTOS statements and saying "here's the allowable way for all
of them to play together."

> I cannot deny that a Lotos specification is simpler to write. On the
> minus side, I'd expect that a Lotos specification tends to be
> overspecific.

Actually, LOTOS specifications tend to say exactly and only what you need to
do, once you realize that a LOTOS action is actually an I/O event and not a
computation event.

> Of course, an overspecified specification is simpler to verify: you
> don't have to verify all cases but only those that are specified.

LOTOS doesn't overspecify. Eiffel overspecifies. Take, for example, an
FTP-like server. I can write a LOTOS spec to say that there's a login, a
store, a fetch, and a delete, and this is what it looks like on the wire to
the client. I can't do that in Eiffel. I can't write something that says
"You have to give me a valid login in such-and-such a format before I'll let
you fetch a file" or "whether you may store any particular file depends on
what the permissions on that file are and how you logged in."

And you're using "verify" differently than LOTOS does. Eiffel uses "verify"
to mean "doesn't violate preconditions." LOTOS already never violates
preconditions - that's the definition of the language (not unlike SCOOP).
LOTOS uses "verify" to mean "this LOTOS program outputs something compatible
with that LOTOS program."

For example, I don't think it's possible to look at the source code for an
FTP server written in Eiffel, and for an FTP client written in Eiffel, and
say "These two programs, when run together, will not deadlock." You can do
that in LOTOS. That's what it's for.

I also doubt you can take an Eiffel FTP server, and another Eiffel FTP
server, and say "will any client that works with the first FTP server
without deadlocking also work with the second FTP server without
deadlocking?" You can with LOTOS. That's what it's for. (Note that this
means you could formally prove that a program upgrade will not break other
programs, for example.)

The sort of thing you can answer with LOTOS is "If I store a file on an FTP
server, then come back and ask for it later, and the only thing that ran is
*that* client, will the file still be there? Yes/no/maybe?"

> On the other hand, I see that the slides on Lotos contain remarks like
> "Verification tools will often return inconclusive results" (this is not
> a literal quote, just what I gathered). This is, in a sense, a
> disappointment: Lotos sacrifices expressivity and *still* doesn't give
> me automatic verification.

Wrong kind of verification. LOTOS verification is "does this code implement
this specification?" It's not "is this code internally correct" but rather
"does this code do what the customer asked?" The customer defines the legal
I/O, and then you refine that until you're generating the I/O, and then you
formally prove that your code is a refinement of the spec. You can't do this
with DbC.

Note that DbC can't address this. If you define "validate" as "the code
matches the assertions" then you can't validate it and it's nowhere near
what LOTOS lets you do anyway; it's also overspecified. If you define
"validate" as "the DbC does what the customer asked for" then you still
can't validate it.

LOTOS is designed not to write programs, but to write specifications. I can
write a LOTOS spec, give it to a dozen people, let them implement it in
whatever language they want, and be sure that they'll all talk to each
other. If I do that in Eiffel, all I have is a dozen copies of the same
program in the same language. Why bother specifying with DbC when I can just
write the code and compile it?

In LOTOS, I can write a *second* program, and know what its relationship to
the first program is. Are they equivalent? Is one an upgrade of the other?
Is one a client of the other? (I'm forgetting all the technical terms that
define such relationships.)

> This is a little cheap - no automatic verifiers for Eiffel exist.

Nor can they, as Eiffel is defined today.

> Actually Eiffel was never optimized for automatic verification, so I'm
> pretty sure that some things are more difficult than they need be.
> However, I do expect that a language designed for verification could use
> DbC and do away with explicit sequence constraints à la Lotos;

That's sort of where Estelle comes in, but Estelle is less formal than
LOTOS. But the thing to remember is that Eiffel *cannot* do the job of
LOTOS, because Eiffel doesn't define parallelism, nor does it define
interactions with the outside world, nor does it (when you get right down to
it) define its fundamental data types. E.g., Where's the assertion that says
the values in arrays don't spontaneously change, in that if I call "item"
twice in a row I get the same value both times? Where's the assertion that
says if I put a string assignment inside a loop I get a different reference
each time, and didn't this behavior change at one point? For that matter,
where's the assertion that the values of variables don't spontaneously
change? Where's the assertion that "dispose" will be called before an "out
of memory" exception is generated? All these sorts of things are necessary
before you can even think about trying to prove something formally about a
program.

> I even
> speculate that specifications in such a language wouldn't be
> unmanageably more difficult to verify than a Lotos specification:
> assertions nail down exactly those sequence constraints that count; it
> is possible that this would even guide a verifier to quickly find those
> constraints that are relevant for proving whatever property it's about
> to prove.

Ummm... Perhaps, but I'd tend to doubt it. Remember that in LOTOS if you say
"Do A or B or C"
and I say
"Do C or D"
then C is what happens. No amount of DbC is going to tell you what is coming
over the socket.
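In its simplest case, that composition rule - when two processes must
synchronize, only actions both offer can occur - reduces to a set
intersection (Python, illustrative only; full LOTOS synchronization is
defined over gates and value offers, not bare sets):

```python
def synchronized_offers(p_offers, q_offers):
    """Actions that can actually happen when two processes must
    synchronize: the intersection of what each is willing to do."""
    return set(p_offers) & set(q_offers)
```

So "Do A or B or C" composed with "Do C or D" leaves only C - a fact derived
from the two specifications together, which no per-routine contract expresses.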

> No, the shortcomings of Eiffel's DbC are of a different, entirely more
> trivial nature: It's impossible or difficult to write assertions with
> existential or universal quantifiers, and it's impossible to express
> temporal constraints on routines of descendants (stuff like "the value
> of Current.key will never change while Current is entered into a sorted
> list").

Well, they're also incapable of indicating what output occurred, what input
is acceptable, whether there was enough memory for what you're trying to do,
whether your numbers wrapped or lost precision, how the operating system
behaves, or anything about concurrent operations.
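For contrast, here is how the quantified assertions mentioned above read in a
language where they are easy to write (a Python sketch using all/any; no
claim is made here about how any particular Eiffel compiler handles agents):

```python
def insert_sorted(xs, x):
    """Insert x into sorted list xs, with quantified postconditions."""
    result = sorted(xs + [x])
    # universal quantifier: every adjacent pair is ordered
    assert all(a <= b for a, b in zip(result, result[1:]))
    # existential quantifier: the new element is present
    assert any(v == x for v in result)
    return result
```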

Since LOTOS is about temporal constraints, not being able to express
temporal constraints is very important. Heck, I don't even know it's
possible to *prove* that preconditions don't change the state of the system;
i.e., it's legal in Eiffel to have a precondition that includes a command,
which pretty much blows the whole argument there. :-) "ensure socket.close
= 0"
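The hazard of a contract clause that is itself a command can be shown in
miniature (Python; `Connection` and its `close` are hypothetical stand-ins
for the socket example, not a real API):

```python
class Connection:
    def __init__(self):
        self.open = True

    def close(self):
        # A command: it changes state and returns a status code.
        self.open = False
        return 0

conn = Connection()
# Writing the contract as 'ensure close = 0' means that *evaluating*
# the assertion executes the command:
assert conn.close() == 0
# The act of checking the contract has torn down the connection.
print(conn.open)   # False
```

This is why contract clauses are supposed to be pure queries: a checker that
evaluates them must not change the state of the system it is checking.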

Incidentally, I think my trying to clarify further why LOTOS is much more
powerful specification-wise than Eiffel is is not unlike trying to explain
to someone who never used assertions why DbC is good. Very difficult,
because it's hard to show how the whole thing fits together.

I think that one thing that might help is to get some graduate students
together to work out a formal specification for Eiffel. All of a sudden,
there's a *real* buzzword for safety-conscious managers. Eiffel is suddenly a
formally-defined language with precise semantics! :-)

Larry Kilgallen

unread,
Jul 5, 2000, 3:00:00 AM7/5/00
to

That is quite impressive. I had just assumed that Linux code to
address various platforms would use the wretched "#if" and "#ifdef"
constructs rather than really having portable code.

David Starner

unread,
Jul 5, 2000, 3:00:00 AM7/5/00
to

Actually, for the most part, Linux does what an Ada programmer trying
to port an Ada kernel to various platforms would do: stick the
implementation specifics behind functions and constants and just
use the appropriate interface for the appropriate platform.

Linux is a terrible example of portability, though. It's heavily GCC
specific and it's a kernel, which means it has large non-portable parts
anyway. Since it doesn't use the standard C library, it doesn't worry
about differences in such, either.

The statement in this thread about only a couple weeks to retarget
the Eiffel compiler to each C compiler doesn't say great things about
the portability of C to me. How long would it take to retarget it to
a new Ada compiler? (Or, for something of a low blow, how long does it
take to retarget it for each new JVM?)

--
David Starner - dstar...@aasaa.ofe.org
http/ftp: x8b4e53cd.dhcp.okstate.edu
"A dynamic character with an ability to survive certain death and
a questionable death scene leaving no corpse? Face it, we'll never
see her again." - Sluggy Freelance

Bertrand Meyer

unread,
Jul 5, 2000, 3:00:00 AM7/5/00
to
Robert I. Eachus wrote:
>
> [...] (And note that DbC
> could not have avoided anything here. The people who built the Ariane 4
> software were not allowed to see the Ariane 5 specifications.)

I don't understand your point. It's the other way around.
When you reuse something you need to see its specifications.

Please tell me what I am missing.

-- Bertrand Meyer

Robert I. Eachus

unread,
Jul 6, 2000, 3:00:00 AM7/6/00
to

Joachim Durchholz wrote:

> However, routines that check contracts are usually simple enough to be
> subjected to automated analysis. It should be possible (and in fact not
> too difficult) to write a contract verifier tool. The interesting
> question here is whether this tool can be made to give diagnostics that
> are meaningful for a programmer or a code reviewer.

There is an extremely important point that keeps getting overlooked
in this discussion. The destruction of the Ariane 5 was not caused by the
shutdown of the inertial guidance units, but by the stack breaking apart
when the engines were deflected too far. With the guidance system dead,
the Ariane 5 was doomed, but the engine deflection commanded by the
flight control computer could have been caused by real, correct inputs
due to wind shear.

In other words, the specific fault tree which occurred might have
been avoided if the alignment program which crashed had been turned off
after 35 seconds instead of 40, but the probability of a similar crash
would have remained high. It was not just the inertial unit which was
reused from the Ariane 4 without evaluation or testing, it was the whole
guidance suite, including all the structural constants such as the
strength and moments of the stack.

It doesn't matter what methodology you use if you never check that
the control equations match the actual vehicle. I'm just surprised that
the Ariane 5 got as high off the pad as it did. (And note that DbC
could not have avoided anything here. The people who built the Ariane 4
software were not allowed to see the Ariane 5 specifications.)
Ken Garlington

unread,
Jul 6, 2000, 3:00:00 AM7/6/00
to
"Bertrand Meyer" <Bertrand_Meyer/nos...@eiffel.com> wrote in message
news:3963DEBF...@eiffel.com...
> Robert I. Eachus wrote:
> >
> > [...] (And note that DbC

> > could not have avoided anything here. The people who built the Ariane 4
> > software were not allowed to see the Ariane 5 specifications.)
>
> I don't understand your point. It's the other way around.
> When you reuse something you need to see its specifications.
>
> Please tell me what I am missing.

It's the "subcontracted" part of "subcontracted real-time embedded
safety-critical software."

The prime contractor (customer) releases a specification to the
subcontractor (vendor). However, that specification is a _subset_ of the
total system (i.e. complete air vehicle and associated ground support
environment). The customer makes judgements as to how much information is
necessary. This is very much an art form -- releasing too much information
can overly constrain the vendor's ability to either innovate or reuse
existing capabilities; releasing too little information can cause the
procured item to fail to meet the overall system needs (as spectacularly
demonstrated in the Ariane 5 case).

Having been on both sides of this situation more than once, as both the
customer and the vendor, it's a lot harder to get the balance right for a
complex procurement than most people realize. Unless the customer has a
great deal of insight into the vendor's technology and processes, and vice
versa, it's extremely easy to get it wrong. There are a lot of pressures
that make it difficult to get this insight -- legal constraints (for
competitive procurements, specifications can't favor one vendor over
another); procurement schedule constraints (having to get specifications
"frozen" early in the process in order to get final costs established);
money constraints (for example, the vendor may charge more for a complex
specification than a simple one, plus may charge extra for the costs
associated with updating the specification later); schedule constraints (for
example, either or both parties may push to reuse existing capabilities to
get to market faster); proprietary data constraints (the vendor and customer
may compete in certain fields, and so are loathe to exchange proprietary
details); different backgrounds of the personnel involved (the customer
representative may know how to build great airframes, but know nothing about
the uniqueness of software-driven systems); and so forth.

There have been some attempts to correct this situation. For example, the
Integrated Product Team (IPT) approach attempts to give the vendor greater
access to the system-wide (and user-oriented) view of the world, while the
more recent acquisition reform concept attempts to minimize unnecessary
constraints. Sometimes these approaches work well, and sometimes they don't.

This is why I was not surprised to see comments in the inquiry report such
as:

-- "it was jointly agreed not to include the Ariane 5 trajectory data in the
SRI requirements and specification"

-- "the systems specification of the SRI does not indicate operational
restrictions that emerge from the chosen implementation"

-- "The supplier of the SRI was only following the specification given to
it, which stipulated that in the event of any detected exception the
processor was to be stopped."

-- "A more transparent organisation of the cooperation among the partners in
the Ariane 5 programme must be considered. Close engineering cooperation,
with clear cut authority and responsibility, is needed to achieve system
coherence, with simple and clear interfaces between partners."

I allude to all of this at

http://www.flash.net/~kennieg/ariane.html#s3.1.5

but maybe it's worth adding some of the information above to my paper as
well.

Ken Garlington

unread,
Jul 6, 2000, 3:00:00 AM7/6/00
to
"Robert I. Eachus" <riea...@earthlink.net> wrote in message
news:3963CDDE...@earthlink.net...

> It was not just the inertial unit which was
> reused from the Ariane 4 without evaluation or testing, it was the whole
> guidance suite, including all the structural constants such as the
> strength and moments of the stack.

If I understand you correctly, I think you're wrong. See section 2.3 of the
inquiry report (repeated below). It clearly states that the overall guidance
suite was extensively tested using real data. Note especially the
description of closed-loop testing related to "trajectories degraded with
respect to atmospheric parameters." However, the inertial unit component
was simulated during most of these tests, so the impacts of the actual
inertial unit to the overall system were not fully qualified.

-----

2.3 THE TESTING AND QUALIFICATION PROCEDURES

The Flight Control System qualification for Ariane 5 follows a standard
procedure and is performed at the following levels :

- Equipment qualification
- Software qualification (On-Board Computer software)
- Stage integration
- System validation tests.

The logic applied is to check at each level what could not be achieved at
the previous level, thus eventually providing complete test coverage of each
sub-system and of the integrated system.

Testing at equipment level was in the case of the SRI conducted rigorously
with regard to all environmental factors and in fact beyond what was
expected for Ariane 5. However, no test was performed to verify that the SRI
would behave correctly when being subjected to the count-down and flight
time sequence and the trajectory of Ariane 5.

It should be noted that for reasons of physical law, it is not feasible to
test the SRI as a "black box" in the flight environment, unless one makes a
completely realistic flight test, but it is possible to do ground testing by
injecting simulated accelerometric signals in accordance with predicted
flight parameters, while also using a turntable to simulate launcher angular
movements. Had such a test been performed by the supplier or as part of the
acceptance test, the failure mechanism would have been exposed.

The main explanation for the absence of this test has already been mentioned
above, i.e. the SRI specification (which is supposed to be a requirements
document for the SRI) does not contain the Ariane 5 trajectory data as a
functional requirement.

The Board has also noted that the systems specification of the SRI does not
indicate operational restrictions that emerge from the chosen
implementation. Such a declaration of limitation, which should be mandatory
for every mission-critical device, would have served to identify any
non-compliance with the trajectory of Ariane 5.

The other principal opportunity to detect the failure mechanism beforehand
was during the numerous tests and simulations carried out at the Functional
Simulation Facility ISF, which is at the site of the Industrial Architect.
The scope of the ISF testing is to qualify :

- the guidance, navigation and control performance in the whole flight
envelope,
- the sensors redundancy operation,
- the dedicated functions of the stages,
- the flight software (On-Board Computer) compliance with all equipment of
the Flight Control Electrical System.

A large number of closed-loop simulations of the complete flight simulating
ground segment operation, telemetry flow and launcher dynamics were run in
order to verify :

- the nominal trajectory
- trajectories degraded with respect to internal launcher parameters
- trajectories degraded with respect to atmospheric parameters
- equipment failures and the subsequent failure isolation and recovery

In these tests many equipment items were physically present and exercised
but not the two SRIs, which were simulated by specifically developed
software modules. Some open-loop tests, to verify compliance of the On-Board
Computer and the SRI, were performed with the actual SRI. It is understood
that these were just electrical integration tests and "low-level" (bus
communication) compliance tests.

It is not mandatory, even if preferable, that all the parts of the subsystem
are present in all the tests at a given level. Sometimes this is not
physically possible or it is not possible to exercise them completely or in
a representative way. In these cases it is logical to replace them with
simulators but only after a careful check that the previous test levels have
covered the scope completely.

This procedure is especially important for the final system test before the
system is operationally used (the tests performed on the 501 launcher itself
are not addressed here since they are not specific to the Flight Control
Electrical System qualification).

In order to understand the explanations given for the decision not to have
the SRIs in the closed-loop simulation, it is necessary to describe the test
configurations that might have been used.

Because it is not possible to simulate the large linear accelerations of the
launcher in all three axes on a test bench (as discussed above), there are
two ways to put the SRI in the loop:

A) To put it on a three-axis dynamic table (to stimulate the Ring Laser
Gyros) and to substitute the analog output of the accelerometers (which can
not be stimulated mechanically) by simulation via a dedicated test input
connector and an electronic board designed for this purpose. This is similar
to the method mentioned in connection with possible testing at equipment
level.

B) To substitute both the analog output of the accelerometers and the
Ring Laser Gyros via a dedicated test input connector with signals produced
by simulation.

The first approach is likely to provide an accurate simulation (within the
limits of the three-axis dynamic table bandwidth) and is quite expensive;
the second is cheaper and its performance depends essentially on the
accuracy of the simulation. In both cases a large part of the electronics
and the complete software are tested in the real operating environment.

When the project test philosophy was defined, the importance of having the
SRIs in the loop was recognized and a decision was taken to select method B
above. At a later stage of the programme (in 1992), this decision was
changed. It was decided not to have the actual SRIs in the loop for the
following reasons :

- The SRIs should be considered to be fully qualified at equipment level.
- The precision of the navigation software in the On-Board Computer depends
critically on the precision of the SRI measurements. In the ISF, this
precision could not be achieved by the electronics creating the test
signals.
- The simulation of failure modes is not possible with real equipment, but
only with a model.
- The base period of the SRI is 1 millisecond whilst that of the simulation
at the ISF is 6 milliseconds. This adds to the complexity of the interfacing
electronics and may further reduce the precision of the simulation.

The opinion of the Board is that these arguments were technically valid, but
since the purpose of a system simulation test is not only to verify the
interfaces but also to verify the system as a whole for the particular
application, there was a definite risk in assuming that critical equipment
such as the SRI had been validated by qualification on its own, or by
previous use on Ariane 4.

While high accuracy of a simulation is desirable, in the ISF system tests it
is clearly better to compromise on accuracy but achieve all other
objectives, amongst them to prove the proper system integration of equipment
such as the SRI. The precision of the guidance system can be effectively
demonstrated by analysis and computer simulation.

Under this heading it should be noted finally that the overriding means of
preventing failures are the reviews which are an integral part of the design
and qualification process, and which are carried out at all levels and
involve all major partners in the project (as well as external experts). In
a programme of this size, literally thousands of problems and potential
failures are successfully handled in the review process and it is obviously
not easy to detect software design errors of the type which were the primary
technical cause of the 501 failure. Nevertheless, it is evident that the
limitations of the SRI software were not fully analysed in the reviews, and
it was not realised that the test coverage was inadequate to expose such
limitations. Nor were the possible implications of allowing the alignment
software to operate during flight realised. In these respects, the review
process was a contributory factor in the failure.

Robert I. Eachus

unread,
Jul 6, 2000, 3:00:00 AM7/6/00
to
Bertrand Meyer wrote:
>
> I don't understand your point. It's the other way around.
> When you reuse something you need to see its specifications.
>
> Please tell me what I am missing.

What you are missing is that, prior to the Ariane 5 crash no one
person or organization ever had access to both the Ariane 4 software to
be reused, and the Ariane 5 specifications.
This was a deliberate result of the project management structure.
Arianespace is a consortium of companies located in several countries,
and data rights were jealously guarded by national governments. The
developers of the Ariane 4 software wanted to check it against the
Ariane 5 specifications, since it was to be reused, but their company
was never given permission to do so, in part because it wasn't
developing any software for the Ariane 5!

On the systems integration side the original plan was to do a full
up system simulation. However, this was cancelled for "financial"
reasons before the company developing the test
rig received the re-used software source.

Incidently, you should also know that the software developers did
the right thing. They questioned seven "unprotected" (can't happen)
locations for exceptions in the code. (Remember, they were developing
for the Ariane 4, where this really was a can't-happen.) Higher
management reviewed the places the software developers wanted to add
checks, and allowed them to add, I think, two of the seven.

Notice also that if ANY software engineering had been done for the
Ariane 5, the program that crashed would have been shut down at T = 0,
not T = +40. Running the alignment program after T = 0 was a special
requirement which only applied to the Ariane 4, and was actually needed
on at least one launch. (If the countdown was stopped just before T =
0, there was not enough time to reset the on board clock. If the
alignment program was shutdown, it would take about an hour to realign
the gyros. This was not an issue on Ariane 5.)

So this was not a software engineering problem. This was a problem
specific to multi-national projects, where nations were allowed to
prevent the export of information. Not only did this
compartmentalization prevent the discovery of the requirements conflict,
but no one individual was ever in a position to determine that this
interface had not been tested or even checked. (The company who was
supposed to develop the test rig hadn't seen the software development
information before the test rig was cancelled, etc.)

Robert I. Eachus

unread,
Jul 6, 2000, 3:00:00 AM7/6/00
to
Ken Garlington wrote:

> I allude to all of this at
>
> http://www.flash.net/~kennieg/ariane.html#s3.1.5
>
> but maybe it's worth adding some of the information above to my paper as
> well.

Please do. The Ariane 5 crash is the software equivalent of
"Galloping Gertie." Prior to the the copllapse of the Tacoma Narrows
Bridge in 1940, bridge designers ignored dynamic loading of bridges and
developed based on static analysis. The George Washington
Bridge was built before the Tacoma Narrows Bridge. It was designed to
be two levels from the start, but due to the depression only one level
was completed, so the structure had a large
safety margin. However, the Tacoma Narrows Bridge was not only more
flexible than the GW Bridge, but the winds in the area could gust
strongly. This allowed the bridge deck to develop lift and unload the
supporting cables. The rest is history. (see:
http://www.ketchum.org/wind.html or
http://www.icaen.uiowa.edu/~hawkeng/spring_97/articles/gertie.html)

The lesson that the Ariane disaster should teach is that a software
engineer cannot sign off on a design without access to all of the
requirements information.

Robert I. Eachus

Jul 6, 2000, 3:00:00 AM
Ken Garlington wrote:
>
> "Robert I. Eachus" <riea...@earthlink.net> wrote in message
> news:3963CDDE...@earthlink.net...
> > It was not just the inertial unit which was
> > reused from the Ariane 4 without evaluation or testing, it was the whole
> > guidance suite, including all the structural constants such as the
> > strength and moments of the stack.
>
If I understand you correctly, I think you're wrong. See section 2.3 of the
> inquiry report (repeated below). It clearly states that the overall guidance
> suite was extensively tested using real data. Note especially the
> description of closed-loop testing related to "trajectories degraded with
> respect to atmospheric parameters." However, the inertial unit component
> was simulated during most of these tests, so the impacts of the actual
> inertial unit to the overall system were not fully qualified.

I rechecked, and I stand by my statement. You may be
thinking--correctly--that a lot of tests were run in the ICF, and that
they covered all of the test objectives that they were responsible
for--obviously incorrect. "Some open-loop tests, to verify compliance
of the On-Board Computer and the SRI, were performed with the actual
SRI." It is understood that these were just electrical integration tests
and "low-level (bus communication) compliance tests." In other words,
the "full-up" tests did not involve simulated flight dynamics.

You have to understand that the report, while correct, was also a
political document. The statement below: "When the project test
philosophy was defined, the importance of having the SRIs in the loop
was recognized and a decision was taken to select method B above. At a
later stage of the programme (in 1992), this decision was changed."

implies, but does not state, that the change was to choose alternative
A. In fact, those particular tests were not performed at all and not
replaced by analysis. Those were the only tests that could have shown
that the engines could be deflected beyond the point of structural
failure.

You could contend, and the final paragraph you quoted seems to, that
the restriction of engine deflections to those compatible with the
physical structure was a responsibility of the SRI and not of the flight
control software as a whole:

> Nevertheless, it is evident that the
> limitations of the SRI software were not fully analysed in the reviews, and
> it was not realised that the test coverage was inadequate to expose such
> limitations. Nor were the possible implications of allowing the alignment
> software to operate during flight realised. In these respects, the review
> process was a contributory factor in the failure.

But whether the blame goes to the SRI or to some other part of the
system, the stack did indeed rupture. That was a Class 1 failure, and
the failure mode was not discovered during analysis or testing. (My
understanding is that the engines on the Ariane 4 could not be deflected
enough to cause structural failure. But the engines on the Ariane 5 are
more powerful, and the stack is taller allowing aerodynamic forces to
create more torque.)

Bertrand Meyer

Jul 6, 2000, 3:00:00 AM
Robert I. Eachus wrote:

> [...] What you are missing is that, prior to the Ariane 5 crash, no one
> person or organization ever had access to both the Ariane 4 software to
> be reused, and the Ariane 5 specifications.
> This was a deliberate result of the project management structure.
> Arianespace is a consortium of companies located in several countries,
> and data rights were jealously guarded by national governments. The
> developers of the Ariane 4 software wanted to check it against the
> Ariane 5 specifications, since it was to be reused, but their company
> was never given permission to do so, in part because it wasn't
> developing any software for the Ariane 5!

I am afraid we are on different wavelengths. Why was it the task of
the Ariane 4 software developers to learn about Ariane 5? It's the
other way around! If I reuse something, I am responsible for finding
out what its contract is and whether it fits *my* needs.
It's not the business of the reused element's developers
to know what I am doing with it! It is their responsibility,
however, to make sure that their stuff *has* a contract, i.e.
a specification of what it is supposed to do and under what conditions.

(In the case of software developed for reuse -- probably
not the case here -- it is actually *impossible* for the
developers of the reused modules to know about the clients
(the reusing modules). If they had to, the modules couldn't
be called reusable any more.)
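[The contract discipline described above can be sketched in a few lines.
The following is an illustrative Python rendering only -- not Eiffel, and
not the actual SRI code; the routine name, the 16-bit bound, and the
`require` helper are assumptions for illustration, loosely modeled on the
published accounts of the conversion that failed:]

```python
# Illustrative sketch ONLY: Eiffel-style Design by Contract rendered in
# Python. The routine name, the 16-bit bound, and the 'require' helper
# are assumptions; the real SRI code was Ada and is not reproduced here.

INT16_MIN, INT16_MAX = -32768, 32767

def require(condition: bool, message: str) -> None:
    """Minimal stand-in for Eiffel's 'require' (precondition) clause."""
    if not condition:
        raise AssertionError("precondition violated: " + message)

def convert_horizontal_bias(bh: float) -> int:
    # Contract: callers must guarantee the value is representable in
    # 16 bits. Reusing this routine means checking every call site
    # against this clause -- by static inspection, before any test runs.
    require(INT16_MIN <= bh <= INT16_MAX, "bh fits in a 16-bit integer")
    return int(bh)

print(convert_horizontal_bias(1234.5))   # prints 1234
```

[The point is not the mechanism but the discipline: the precondition is
part of the routine's published interface, so a reuser has something
concrete to check each call against.]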

Blaming national vanities seems to me rather facile here.

> So this was not a software engineering problem. This was a problem
> specific to multi-national projects, where nations were allowed to
> prevent the export of information.

I don't see any evidence of that. Any large project has politics.
The fights of departments within companies can be as bad
as those of cooperating nations. In this case, I think what you
write reinforces the understanding that this was a technical
and managerial software problem -- a software engineering problem,
which would have been prevented by the proper software engineering
principles and practices, including Design by Contract.
The rest -- such as national enmities -- is project folklore.
The kind of stories we all like to hear at cocktail parties
at software engineering conferences, and possibly true, but
a scapegoat when it comes to understanding failure reasons.

As a matter of fact, the presence of political or other extraneous
influences *reinforces* the necessity of applying good software engineering
principles. *Any* substantial project will encounter
non-technical sources of conflict. The only
way to avoid letting them wreck the project is to make sure that
serious technical policies are followed. If someone does tell
you "I can't get a clue to what this module does, the damn
{English | Germans | Americans | French | Italians} (pick your
favorite enemy) won't tell me what I need", it is a simple
matter to say "well, in that case we can't use it and we're
better off writing our own". It is precisely the role of
technical and management policies to mitigate the possible
effect of such external and pernicious influences when and if
they do arise. If the proper software engineering policies are not
applied, it's all too easy then for us software folks to
blame the politics and someone's (someone else's of course,
always) narrow-minded national favoritism.

-- Bertrand Meyer
ISE
http://www.eiffel.com http://www.tools-conferences.com

Ken Garlington

Jul 7, 2000, 3:00:00 AM
"Robert I. Eachus" <riea...@earthlink.net> wrote in message
news:3965137B...@earthlink.net...

> Ken Garlington wrote:
> >
> > "Robert I. Eachus" <riea...@earthlink.net> wrote in message
> > news:3963CDDE...@earthlink.net...
> > > It was not just the inertial unit which was
> > > reused from the Ariane 4 without evaluation or testing, it was the whole
> > > guidance suite, including all the structural constants such as the
> > > strength and moments of the stack.
> >
> > If I understand you correctly, I think you're wrong. See section 2.3 of the
> > inquiry report (repeated below). It clearly states that the overall guidance
> > suite was extensively tested using real data. Note especially the
> > description of closed-loop testing related to "trajectories degraded with
> > respect to atmospheric parameters." However, the inertial unit component
> > was simulated during most of these tests, so the impacts of the actual
> > inertial unit to the overall system were not fully qualified.
>
> I rechecked, and I stand by my statement. You may be
> thinking--correctly--that a lot of tests were run in the ICF, and that
> they covered all of the test objectives that they were responsible
> for--obviously incorrect. "Some open-loop tests, to verify compliance
> of the On-Board Computer and the SRI, were performed with the actual
> SRI." It is understood that these were just electrical integration tests
> and "low-level (bus communication) compliance tests." In other words,
> the "full-up" tests did not involve simulated flight dynamics.

For this to make sense, I assume you're defining "full-up" to mean
"including a real vs. a simulated SRI." Is this correct? If so, it's true
that the "full-up" tests did not involve simulated flight dynamics, since
there were NO "full-up" tests including an actual SRI. The "open-loop" tests
described in your quote are separate from the closed-loop tests that were
performed.

However, your original comment involved the "whole guidance suite" not being
tested. As noted in the inquiry: "A large number of closed-loop simulations
of the complete flight simulating ground segment operation, telemetry flow
and launcher dynamics were run... In these tests many equipment items were
physically present and exercised but not the two SRIs... It would have been
technically feasible to include almost the entire inertial reference system
in the overall system simulations which were performed. For a number of
reasons it was decided to use the simulated output of the inertial reference
system, not the system itself or its detailed simulation. *** Had the system
been included, the failure could have been detected. ***"

It's not that the tests were incomplete -- it's that the system they were
testing was incomplete _with respect to the SRIs_.

The report also says that other elements in the control chain were analyzed,
particularly the Flight Control System (which implements the control laws).
"In accordance with its termes of reference, the Board has examined possible
other weaknesses, primarily in the Flight Control System. No weaknesses were
found which were related to the failure..." Again, the report makes clear
that the IRS was the weak link, not just the first link to break!

> You have to understand that the report, while correct, was also a
> political document. The statement below: "When the project test
> philosophy was defined, the importance of having the SRIs in the loop
> was recognized and a decision was taken to select method B above. At a
> later stage of the programme (in 1992), this decision was changed."

> implies, but does not state, that the change was to choose alternative
> A. In fact, those particular tests were not performed at all and not
> replaced by analysis.

The tests were not performed against a real SRI. The report notes this.
However, it does make clear that the system integration test _definition_
(vs. its _environment_) was sufficient (see above).

> Those were the only tests that could have shown
> that the engines could be deflected beyond the point of structural
> failure.
>
> You could contend, and the final paragraph you quoted seems to, that
> the restriction of engine deflections to those compatible with the
> physical structure was a responsibility of the SRI and not of the Fight
> control software as a whole:

I don't get that from the paragraph I quoted earlier. My point is as noted
above -- the system under test was not adequately represented.

> > Nevertheless, it is evident that the
> > limitations of the SRI software were not fully analysed in the reviews, and
> > it was not realised that the test coverage was inadequate to expose such
> > limitations. Nor were the possible implications of allowing the alignment
> > software to operate during flight realised. In these respects, the review
> > process was a contributory factor in the failure.
>

> But whether the blame goes to the SRI or to some other part of the
> system, the stack did indeed rupture. That was a Class 1 failure, and
> the failure mode was not discovered during analysis or testing. (My
> understanding is that the engines on the Ariane 4 could not be deflected
> enough to cause structural failure. But the engines on the Ariane 5 are
> more powerful, and the stack is taller allowing aerodynamic forces to
> create more torque.)

I don't see these conclusions supported anywhere in the report. As to engine
power, it says that the _trajectories_ are different: "The value of BH was
much higher than expected because the early part of the trajectory of Ariane
5 differs from that of Ariane 4 and results in considerably higher
horizontal velocity values." Furthermore, I am fairly confident that
erroneous flight control laws, or flight control laws given simultaneous
dual failures of a critical input like IRS values, could crash an Ariane 4
just as easily as an Ariane 5.
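
[The mechanism behind the out-of-range BH value can be sketched as
follows -- a hedged Python simulation of what the inquiry report
describes, not the actual Ada flight code; all names and the sample
values are illustrative assumptions:]

```python
# Hedged sketch of the failure mechanism described in the inquiry report:
# the Ariane 5 trajectory yields a horizontal-bias value too large for a
# 16-bit conversion, the resulting exception goes unhandled, and the
# channel shuts down. Names and sample values are illustrative only.

INT16_MIN, INT16_MAX = -32768, 32767

def to_int16(value: float) -> int:
    """Like an unguarded Ada integer conversion: traps on overflow."""
    if not INT16_MIN <= value <= INT16_MAX:
        raise OverflowError("operand error: %r exceeds 16 bits" % value)
    return int(value)

def sri_channel(bh: float) -> str:
    """Both redundant channels ran identical software, so both fail alike."""
    try:
        to_int16(bh)
        return "nominal"
    except OverflowError:
        return "channel shutdown"

print(sri_channel(20000.0))   # Ariane 4-like value: "nominal"
print(sri_channel(64000.0))   # Ariane 5-like value: "channel shutdown"
```

[Note that the redundant channel offers no protection here: identical
software given identical inputs fails identically, which is why the fault
tree's assumption that hardware failure dominates was so costly.]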

As to the failure mode: certainly it was known that an IRS failure would
cause the result seen. The fault tree below that, however, was faulty in
that it assumed IRS hardware failure was much more likely than software
failure. (That's not uncommon, by the way; I see a lot of fault trees that
make the same mistake, particularly in respect to reused designs.)

It's not surprising that the flight control laws would have been correct,
while the IRS would be faulty. It's almost impossible to design control laws
without a great deal of data about the airframe flight characteristics and
limits, while it is more likely (and was the case here) to design an IRS
without this information.

Are you speaking from some personal conversations with the aerodynamics
toads at Arianespace? If that's the case, I'll go along with your
interpretation, strange though it sounds. I'm just basing my
interpretation on the inquiry report and my own experience with flight
control systems development...
