
Ada vs Ruby


Marc Heiler

Apr 15, 2008, 7:26:25 AM

Hi,

On http://www.gcn.com/print/27_8/46116-1.html Ada is touted briefly.

The sentences that jumped out at me most (and hurt my brain a bit)
were these:

"[...] Ada has a feature called strong typing. This means that for every
variable a programmer declares, he or she must also specify a range of
all possible inputs.[...]"

"[...] This ensures that a malicious hacker can’t enter a long string of
characters as part of a buffer overflow attack or that a wrong value
won’t later crash the program. [...]"

But clearly that is simple to do in Ruby as well (and I have never
heard of a buffer overflow outside of the C world anyway): just specify
which input range is allowed and discard the rest, warn the programmer,
or simply convert the input to the nearest allowed value - am I missing
something? Maybe there are other reasons why Ada is still so en vogue
for aviation software, but I don't really get it (other than legacy
code that has been sitting there for thousands of years already). Maybe
it is a paradigm that is only possible in Ada.
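
Something like this, I mean - just a sketch, and RangedValue is a
made-up class for illustration, not anything standard:

  class RangedValue
    attr_reader :value

    def initialize(range, value)
      @range = range
      self.value = value
    end

    # Reject anything outside the allowed range at assignment time.
    # (Clamping to the nearest allowed value would be the other option.)
    def value=(v)
      raise ArgumentError, "#{v} not in #{@range}" unless @range.include?(v)
      @value = v
    end
  end

  md = RangedValue.new(1..31, 30)
  md.value = 33   # raises ArgumentError at runtime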

Ruby being too slow would be something I could not quite understand,
insofar as you could write parts in C anyway, or you could use Lua (in
the case of replacing Ada) - I'd figure Lua would be quite fast.
Somehow, despite that, Ada is still in use; to me it seems like a
"dead" language (meaning no one really learns it because there are
better alternatives available).

The biggest confusion I have here is simply that strong typing is
touted as a very good thing to have. I don't know whether that is the
case or not, but it seems to me that this is more "behaviour" that is
imposed on the programmer anyway (as in, he must do extra work to
ensure his variables are a certain way, etc.).
For example, the "strong typing" as described here appears to me more
like "force the programmer to do this and that". This may have
advantages in the long run - I don't know, maybe fewer bugs or no
buffer overflow problems - but to me it is still forcing the programmer
to comply. I don't get what is so great about having to worry about so
many details. And on blogs you do sometimes see proponents of this
approach scold the people who use another one (not only typing, but
also test-driven development and so on...)
--
Posted via http://www.ruby-forum.com/.

Robert Dober

Apr 15, 2008, 7:53:29 AM

On Tue, Apr 15, 2008 at 1:26 PM, Marc Heiler <shev...@linuxmail.org> wrote:
> Hi,
>
> On http://www.gcn.com/print/27_8/46116-1.html Ada is touted briefly.
>
> The sentence(s) that most jumped into my eye (and hurt my brain a bit)
> was this:
>
> "[...] Ada has a feature called strong typing. This means that for every
> variable a programmer declares, he or she must also specify a range of
> all possible inputs.[...]"
>
> "[...] This ensures that a malicious hacker can't enter a long string of
> characters as part of a buffer overflow attack or that a wrong value
> won't later crash the program. [...]"
>
> But clearly that is simple to do in ruby as well (and I never heard of a
> buffer overflow outside of the C world anyway): Just specify which input
> range would be allowed and discard the rest, warn the programmer, or
> simply convert it to the nearest allowed value - am I missing on
> something? Maybe there are some other reasons why Ada is still so en
> vogue for aviation software but I dont really get it (other than legacy
> code that was sitting there for thousand of years already). Maybe it is
> a paradigm that is only possible in Ada.
I was lucky enough to write an Ada debugger in Ada for Ada83 in 1986,
and I have to tell you that it was indeed revolutionary for its safety
concepts. Agility was of course not at all a design requirement of the
DoD, which chose the final design of the language as proposed by Jean
Ichbiah.

http://en.wikipedia.org/wiki/Ada_%28programming_language%29

As you can read above there is some discussion about the real value of
Ada, but I have to admit that living in the Ada world and being paid to
do nothing else than use and study it was a nice time, and it put me
into a mindset of its own.

It is for sure the champion of early failure (its compiler probably
detects more potential runtime errors, especially in multitasking, than
any other), and I believe that this makes it very valuable in
mission-critical domains.


> Ruby being too slow would be something I could not quite understand
> insofar that, after all you could write parts in C anyway, or you could
> use (in the case of replacing ADA) Lua - I'd figure Lua would be quite
> fast. Somehow despite that Ada is still in use, to me it seems like a
> "dead" language (means noone really learns it because there are better
> alternatives available)

Dead? I would be very much surprised - it's just restricted to the
domains where it is useful.


>
> The biggest confusion I get here is simply that strong typing is touted
> as a very good thing to have.

Under some conditions it is.


>I dont know if this is the case or not,
> but it seems to me that this is more "behaviour" that is imposed onto
> the programmer anyway (as in, he must do extra work to ensure his
> variables are a certain way etc..)

Oh, it is an awful lot of work, but less than in C++, I feel.


> For example, the "strong typing" as described here appears to me more a
> "force the programmer to do this and that".

Wait a second, it is still the programmer who is defining the types ;)


>This may have advantages in
> the long run, I dont know, maybe fewer bugs or no buffer overflow
> problems, but to me it still is forcing the programmer to comply. I dont
> get what is so great about having to worry about many details. And on
> blogs you do sometimes see proponents of this solution scold on the
> people that use another solution (not only typing, but also test driven
> development and so on...)

If I had been an Ada programmer for the last 20 years, I definitely
would not know about the other domains and the usefulness of duck
typing and agile development.
It is an old story repeating itself, like history. There were people
who programmed in assembler (or even machine code) for a living, and
then they were asked about Fortran - what do you think they said?

Robert


--
http://ruby-smalltalk.blogspot.com/

---
Whereof one cannot speak, thereof one must be silent.
Ludwig Wittgenstein

Michael Neumann

Apr 15, 2008, 8:28:58 AM

Marc Heiler wrote:
> Hi,
>
> On http://www.gcn.com/print/27_8/46116-1.html Ada is touted briefly.
>
> The sentence(s) that most jumped into my eye (and hurt my brain a bit)
> was this:
>
> "[...] Ada has a feature called strong typing. This means that for every
> variable a programmer declares, he or she must also specify a range of
> all possible inputs.[...]"
>
> "[...] This ensures that a malicious hacker can’t enter a long string of
> characters as part of a buffer overflow attack or that a wrong value
> won’t later crash the program. [...]"
>
> But clearly that is simple to do in ruby as well (and I never heard of a
> buffer overflow outside of the C world anyway): Just specify which input
> range would be allowed and discard the rest, warn the programmer, or
> simply convert it to the nearest allowed value - am I missing on
> something? Maybe there are some other reasons why Ada is still so en
> vogue for aviation software but I dont really get it (other than legacy
> code that was sitting there for thousand of years already). Maybe it is
> a paradigm that is only possible in Ada.

You're right. The problem in C is that C strings do not carry a length;
they are just pointers, and strings have to be zero-terminated. That is
a very bad thing. Imagine there is no terminating zero: any call to a
string-related function will read through arbitrary memory and will
most likely result in a crash. And determining the length of a string
is O(n). But the real security issue is that some functions that read
input don't take a maximum length. The function gets(3) is one example:
it reads a line into a buffer regardless of how long the buffer is.

But this is more a library-related problem than a language-related one.
There are string libraries out there for C that are safe.
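
By contrast, a tiny Ruby illustration (Ruby strings carry their own
length, and reads can be given an explicit upper bound, so there is no
fixed-size buffer to overrun):

  chunk = $stdin.read(4096)     # read at most 4096 bytes
  puts chunk.length if chunk    # length is stored, not found by scanning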

A huge problem in general is that you can't trust the compiler,
especially not optimizing compilers: they might produce buggy code even
if your program is correct. That's where Ada shines - Ada compilers
have to pass a lot of tests before they get a certificate.

Then, the C language is not type-safe: you can do all kinds of type
casts, and there are numerous constructs in C that increase the
possibilities for errors. Ada is a lot better here too. For example,
you can limit the range of an integer.

Furthermore, Ada has built-in support for tasks and synchronization
primitives. C and C++ just can't do that reliably, as there is no
language support. That's why C++0x, the upcoming version of C++,
exists: one of its goals is to make C++ multithread-safe.
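
For comparison, Ruby at least ships threads and a Mutex in its standard
distribution (a toy sketch, not a real-time claim):

  require 'thread'    # Mutex (core in newer Rubies)

  lock    = Mutex.new
  counter = 0
  threads = (1..4).map do
    Thread.new { 1000.times { lock.synchronize { counter += 1 } } }
  end
  threads.each(&:join)
  puts counter        # => 4000, because the increments were serialized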

And Ada's language specification is very detailed, whereas that of C
leaves many things open, which is not desirable - you don't want any
surprises here. This problem came up recently in the GNU Compiler
Collection (GCC), where the behaviour of the generated code changed
simply because the C spec didn't specify it. This broke some
applications and operating systems, and possibly introduced a lot of
unknown bugs. Not something you can build reliable software on.

> Ruby being too slow would be something I could not quite understand
> insofar that, after all you could write parts in C anyway, or you could
> use (in the case of replacing ADA) Lua - I'd figure Lua would be quite
> fast. Somehow despite that Ada is still in use, to me it seems like a
> "dead" language (means noone really learns it because there are better
> alternatives available)

You will never ever be able to use Ruby for aviation software, neither
Lua, Python, Perl etc.

It's not about slowness. Realtime systems can be slow as long as they
meet their deadlines. Indeed, a lot of real-time systems are very slow.
They use 20-year-old technology - no caches, no speculation, etc. -
just because in real-time systems you always have to calculate with the
longest possible execution time, and modern processors only improve the
average execution time.

Ada is not that bad at all. It's a beautiful language, maybe a bit
verbose, but very powerful. Personally, I like it more than C++.

> The biggest confusion I get here is simply that strong typing is touted
> as a very good thing to have. I dont know if this is the case or not,
> but it seems to me that this is more "behaviour" that is imposed onto
> the programmer anyway (as in, he must do extra work to ensure his
> variables are a certain way etc..)
> For example, the "strong typing" as described here appears to me more a
> "force the programmer to do this and that". This may have advantages in
> the long run, I dont know, maybe fewer bugs or no buffer overflow
> problems, but to me it still is forcing the programmer to comply. I dont
> get what is so great about having to worry about many details. And on
> blogs you do sometimes see proponents of this solution scold on the
> people that use another solution (not only typing, but also test driven
> development and so on...)

Well, in the case of safety-critical software, you don't want to have
runtime exceptions. This software must not have errors - at least,
that's desirable ;-)

Duck-typing doesn't guarantee you anything at compile-time.

Regards,

Michael


Robert Dober

Apr 15, 2008, 9:29:51 AM

On Tue, Apr 15, 2008 at 2:28 PM, Michael Neumann <mneu...@ntecs.de> wrote:
> Marc Heiler wrote:
<snip>

>
> You will never ever be able to use Ruby for aviation software, neither
> Lua, Python, Perl etc.
Wanna bet?

Avdi Grimm

Apr 15, 2008, 10:15:14 AM

On Tue, Apr 15, 2008 at 9:29 AM, Robert Dober <robert...@gmail.com> wrote:
> > You will never ever be able to use Ruby for aviation software, neither
> > Lua, Python, Perl etc.
> Wanna bet?

I think it depends on what is meant by "aviation software". I
wouldn't use Ruby for embedded avionics, for several reasons. But I
might use it (or Lua, or...) to power a visual display of the state of
that avionics, for example.

--
Avdi

Home: http://avdi.org
Developer Blog: http://avdi.org/devblog/
Twitter: http://twitter.com/avdi
Journal: http://avdi.livejournal.com

Robert Dober

Apr 15, 2008, 10:21:34 AM

On Tue, Apr 15, 2008 at 4:15 PM, Avdi Grimm <av...@avdi.org> wrote:
> On Tue, Apr 15, 2008 at 9:29 AM, Robert Dober <robert...@gmail.com> wrote:
> > > You will never ever be able to use Ruby for aviation software, neither
> > > Lua, Python, Perl etc.
> > Wanna bet?
>
> I think it depends on what is meant by "aviation software". I
> wouldn't use Ruby for embedded avionics, for several reasons. But I
> might use it (or Lua, or...) to power a visual display of the state of
> that avionics, for example.
>
You know, one can bet any value on statements like "X will never
happen". When am I going to pay? I can only win.
Sorry, could not resist ;).
R.

britt.s...@gmail.com

Apr 15, 2008, 12:31:08 PM

On Apr 15, 6:26 am, Marc Heiler <sheve...@linuxmail.org> wrote:
> Hi,
>
> On http://www.gcn.com/print/27_8/46116-1.html Ada is touted briefly.

>
> The sentence(s) that most jumped into my eye (and hurt my brain a bit)
> was this:
>
> "[...] Ada has a feature called strong typing. This means that for every
> variable a programmer declares, he or she must also specify a range of
> all possible inputs.[...]"
>

I am an Ada programmer. The quoted statement from the GCN article is
not correct as written - "must" should be "may". Many languages,
including C++ and Java, claim to be strongly typed. Strong typing is a
very desirable language feature. One key difference with Ada is that it
supports strong typing plus optional range constraints on primitive
(e.g. integer, fixed-point and floating-point) types.

> "[...] This ensures that a malicious hacker can't enter a long string of
> characters as part of a buffer overflow attack or that a wrong value
> won't later crash the program. [...]"
>
> But clearly that is simple to do in ruby as well (and I never heard of a
> buffer overflow outside of the C world anyway): Just specify which input
> range would be allowed and discard the rest, warn the programmer, or
> simply convert it to the nearest allowed value - am I missing on
> something? Maybe there are some other reasons why Ada is still so en
> vogue for aviation software but I dont really get it (other than legacy
> code that was sitting there for thousand of years already). Maybe it is
> a paradigm that is only possible in Ada.
>
> Ruby being too slow would be something I could not quite understand
> insofar that, after all you could write parts in C anyway, or you could
> use (in the case of replacing ADA) Lua - I'd figure Lua would be quite
> fast. Somehow despite that Ada is still in use, to me it seems like a
> "dead" language (means noone really learns it because there are better
> alternatives available)

Ada is far from dead - it's a great general-purpose language and is
currently being used on new projects. In the high-assurance domains
where it is principally used, there is currently nothing better,
certainly not C++ or Java. There is also the SPARK (www.sparkada.com)
subset of Ada and its associated formal-methods-based static analysis
tools. I use SPARK and, though it requires a certain mindset to use
effectively, I think it's the "real deal" for producing the
highest-quality code (i.e., code free of initial defects). We really
don't expect to find many bugs during debugging or formal testing, at
least not many that can't be traced back to a missing or ambiguous
requirement.

>
> The biggest confusion I get here is simply that strong typing is touted
> as a very good thing to have. I dont know if this is the case or not,
> but it seems to me that this is more "behaviour" that is imposed onto
> the programmer anyway (as in, he must do extra work to ensure his
> variables are a certain way etc..)
> For example, the "strong typing" as described here appears to me more a
> "force the programmer to do this and that". This may have advantages in
> the long run, I dont know, maybe fewer bugs or no buffer overflow
> problems, but to me it still is forcing the programmer to comply. I dont
> get what is so great about having to worry about many details. And on
> blogs you do sometimes see proponents of this solution scold on the
> people that use another solution (not only typing, but also test driven
> development and so on...)
> --
> Posted via http://www.ruby-forum.com/.

"worry about many details" isn't great fun but its necessary for
safety and/or security critical software. If a well specified
programming language and its associated compilers/ static analysis
tools help me to manage the details all the way from the big picture
design down to bit-level ASIC interfaces, then I welcome the help.

- Britt

frame...@gmail.com

Apr 15, 2008, 1:20:19 PM

Interesting thread... also because I use both Ruby and Ada. No, better:
since I _love_ both Ruby and Ada. Yes, they could not be more
different, and no, I do not have any split-personality problem (at
least, none that I am aware of... :-)

In my personal experience, they are both great languages and each one
"shines" in its field. I use Ruby for small to medium-large
applications, where "duck typing" allows you to write good and flexible
software in little time. However, I discovered that when I go to large
or very large applications, a pedantic language like Ada (which will
not allow you to write sqrt(5), because "5" is an integer and not a
float... my first Ada program...) is a better choice, since many errors
are caught at compile time and many others in just the first few runs,
by the checks automatically inserted by the compiler. For example, if
you write

type Month_Day is new Integer range 1..31;

MD : Month_Day := 30;

MD := MD + 3;

you will get a runtime error because MD exits the allowed range.
In C this bug could comfortably sleep for centuries...

Moreover, if you define

type Counter is new Integer;

Ada's strong typing will prevent you from assigning a value of type
Month_Day to a variable of type Counter (the magic word is "new"), and
this makes a lot of sense, unless in your application it really is
meaningful to convert a day into a counter. I discovered that when your
software grows larger, this kind of constraint, which you _ask the
compiler_ to enforce on you, can really help. [There are *lots* of
discussions about the usefulness of introducing new incompatible types.
The sentence above is just my opinion, based on some personal
experience. I hope I did not open a new can of worms...]

Maybe your initial productivity (measured in lines of code written per
unit of time) will be lower because of the loss of flexibility, but if
your software is very large you gain in debugging and maintenance time.

Of course, if you just want to extract data from a CSV file, or write
a wget-like program, Ada can be a "gun for mosquitos."

Todd Benson

Apr 15, 2008, 1:46:22 PM


You can "type" your variables in Ruby if you have to. I don't think
that's the problem. It's the possibly reckless meta-programming in
libraries you use (I'm not talking about you, Trans, I think Facets is
great).
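
For instance, a hand-rolled check like this (illustrative only - Tally
is a made-up class, and this is a convention, not a language feature):

  class Tally
    attr_reader :counter

    # Accept only Integers, in the spirit of a typed declaration.
    def counter=(n)
      raise TypeError, "expected Integer, got #{n.class}" unless n.is_a?(Integer)
      @counter = n
    end
  end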

Being an engineer and a db guy, you would think that Ruby is the most
god-awful thing I've ever seen. Well, it has its place.

For realtime, Michael is right about the "time of execution" being
_the_ important thing. I would like to see in the future, however, a
Ruby that talks to the hardware like RTLinux or QNX. I'd take up such a
project myself, except I don't know enough C or assembly. I suppose
you'd have to make certain objects allowed to have free rein over the
processor/memory. Like an Object#become_real, though that's a little
scary :)

Todd

Bill Kelly

Apr 15, 2008, 4:00:29 PM


From: <frame...@gmail.com>

>
> For example, if you write
>
> type Month_Day is new Integer range 1..31;
>
> MD : Month_Day := 30;
>
> MD := MD + 3;
>
> you will get a runtime error because MD exit from the allowed range.
> In C this bug could comfortably sleeps for centuries...

The example you've provided causes me to wonder whether such
language level range limiting could instill a false sense of
security in the programmer.

Please have your Ada program send me an email on February 31st!

<grin>

Seems like range checking would work well for Month range 1..12;
but not so well for Month_Day... ?
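
Ruby's standard date library makes the same point - whether a day is
valid depends on the month, which a flat 1..31 range can't express:

  require 'date'

  Date.valid_date?(2008, 1, 31)   # truthy: a real date
  Date.valid_date?(2008, 2, 31)   # falsy: no such day, yet 31 is in 1..31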


Regards,

Bill

Eleanor McHugh

Apr 15, 2008, 7:54:44 PM

On 15 Apr 2008, at 13:28, Michael Neumann wrote:
> You will never ever be able to use Ruby for aviation software, neither
> Lua, Python, Perl etc.

You provide the budget, I'll provide the code ;) Having designed and
implemented avionics systems I see nothing in Ruby or any other
scripting language that would stand in the way of using it to do the
same thing. In fact Lua began its life as a language for device
control. That's not to say that MRI is particularly suited to the
task, but the necessary changes could be made if anyone wanted to
without having to change the language syntax and semantics.

> It's not about slowness. Realtime systems can be slow as long as
> they meet their deadlines. Indeed, a lot of real-time systems are
> very slow.
> They use 20 year old technology, no caches, no speculation etc.,
> just because in real-time systems, you always have to calculate with
> the
> longest possible execution time, and modern processors only improve
> average execution time.

It's true that realtime execution is easier when you get the execution
windows balanced, but it's mostly about coding defensively and knowing
how to handle failure states and recover when calculations exceed
their desired execution budget. The latter is particularly important
as many calculations have unpredictable run-time characteristics.

As for the reason 20-year-old technology is so popular, you don't have
to look much further than the low cost of that generation of processors
and the low computational requirements of many problems: a PIC17C42,
for example, has all the grunt you could ever want for steering a light
aircraft, and a DragonBall is more than adequate for real-time GPS
navigation. Chucking even a Pentium at these jobs would be overkill
unless you want to run a Windows kernel.

> Well, in the case of safety critical software, you don't want to
> have runtime exceptions. This software must not have errors, at
> least it's desirable ;-)

There's nothing wrong with runtime exceptions so long as you figure out
what the correct fail-safe behaviour of the system is and make sure it
takes it. In fact, for high-spec aviation systems, where there's a
statistical risk of cosmic-ray interference flipping bits at run-time,
I'd want to see the fail-safe strategy before I even considered the
rest of the system design (although admittedly that was a consideration
that always made me laugh when I was doing my CAA certifications ;).

> Duck-typing doesn't guarantee you anything at compile-time.

True. But nothing guarantees you anything at run-time, including 100%
compliance at compile-time. That's why most CS and IS degrees have
lectures explaining the difference between Verification (what your
compiler does) and Validation (what you do before you start coding).

As a rule of thumb, even the highest-quality systems will have one bug
for every 30,000 lines of source code (only 1% of the bug density of
standard shrink-wrap applications), which in a large system can still
amount to tens of thousands of defects. These are not 'errors' in the
sense that a compiler understands them, but genuine misunderstandings
of the problem space in question that will lead to actively dangerous
software states.


Ellie

Eleanor McHugh
Games With Brains
http://slides.games-with-brains.net
----
raise ArgumentError unless @reality.respond_to? :reason

Rick DeNatale

Apr 15, 2008, 10:27:12 PM

On Tue, Apr 15, 2008 at 7:54 PM, Eleanor McHugh
<ele...@games-with-brains.com> wrote:
> On 15 Apr 2008, at 13:28, Michael Neumann wrote:
>
> > You will never ever be able to use Ruby for aviation software, neither
> > Lua, Python, Perl etc.

> > Well, in the case of safety critical software, you don't want to have
> > runtime exceptions. This software must not have errors, at least it's
> > desirable ;-)
>
> There's nothing wrong with runtime exceptions so long as you figure out
> what the correct fail-safe behaviour of the system is and make sure it
> takes it. In fact for high-spec aviation systems where there's a
> statistical risk of cosmic ray interference flipping bits at run-time
> I'd want to see the fail-safe strategy before I even considered the
> rest of the system design (although admittedly that was a consideration
> that always made me laugh when I was doing my CAA certifications ;).

This argument is giving me a flash back to a decade or two ago.

Bjarne Stroustrup used to use the same argument against Smalltalk,
saying that he wouldn't want to fly in an airplane whose autopilot
could throw a MessageNotFound exception.

I would counter that I'd rather fly on that plane than the one with
the C++ autopilot, which would instead branch to a random location
because a dangling pointer caused a jump through a virtual function
table that wasn't really a virtual function table anymore.

> > Duck-typing doesn't guarantee you anything at compile-time.
> >
>
> True. But nothing guarantees you anything at run-time, including 100%
> compliance at compile-time. That's why most CS and IS degrees have lectures
> explaining the difference between Verification (what your compiler does) and
> Validation (what you do before you start coding).

Amen, Sister! And languages which rely on static typing have a
tendency to do much more random things when things go wrong. Languages
like Ruby tend to have a more vigilant runtime.

--
Rick DeNatale

My blog on Ruby
http://talklikeaduck.denhaven2.com/

Arved Sandstrom

Apr 15, 2008, 10:33:34 PM

"Marc Heiler" <shev...@linuxmail.org> wrote in message
news:056c8e2d83a73b69...@ruby-forum.com...
[ SNIP ]

> The biggest confusion I get here is simply that strong typing is touted
> as a very good thing to have. I dont know if this is the case or not,
> but it seems to me that this is more "behaviour" that is imposed onto
> the programmer anyway (as in, he must do extra work to ensure his
> variables are a certain way etc..)
> For example, the "strong typing" as described here appears to me more a
> "force the programmer to do this and that". This may have advantages in
> the long run, I dont know, maybe fewer bugs or no buffer overflow
> problems, but to me it still is forcing the programmer to comply. I dont
> get what is so great about having to worry about many details. And on
> blogs you do sometimes see proponents of this solution scold on the
> people that use another solution (not only typing, but also test driven
> development and so on...)

It sounds like by strong typing you actually mean static explicit
typing, as in Java or C. Bear in mind that you can have static typing
without explicit declarations, for example where type inference is
used, as in Haskell or F# (or to some extent in C# 3.0). This removes
one of your objections: inconvenience.

AHS


Phillip Gawlowski

Apr 15, 2008, 10:40:04 PM


Rick DeNatale wrote:

|
| Amen, Sister! And languages which rely on static typing have a
| tendency to do much more random things when things go wrong. Language
| like Ruby tend to have a more vigilant runtime.
|

I wouldn't fly in an aeroplane that relies on the runtime to catch errors.

Take the Space Shuttle as an extreme. Does the language breed perfection
in the Shuttle's source, or is it the process NASA uses?

I bet you dollars to doughnuts that it is the process, with
more-than-due-diligence in writing and testing the software. That the
requirements are clear cut and well understood is another bonus.

Languages don't matter. Compilers don't matter. Process, however, does.

Or methodology. TDD has its benefits, as does BDD. Without these, the
Agile way wouldn't work. QA is the key, not the language.

Don't just take my word for it:

http://www.nap.edu/html/statsoft/chap2.html

The above link has a case study on NASA's process for developing the
Space Shuttle's flight control software.

--
Phillip Gawlowski
Twitter: twitter.com/cynicalryan

You've got to stand up and live before you can sit down and write.

M. Edward (Ed) Borasky

Apr 15, 2008, 10:52:49 PM

Rick DeNatale wrote:
> This argument is giving me a flash back to a decade or two ago.
>
> Bjarne Stroustrup used to use the same argument against Smalltalk,
> saying that he wouldn't want to fly in an airplane whose autopilot
> could throw a MessageNotFound exception.
>
> I would counter argue saying that I'd rather fly on that plane than
> the one with the C++ autopilot which instead would branch to a random
> location because a dangling pointer caused a branch to a virtual
> function through a virtual function table which really wasn't a
> virtual function table anymore.

And there is the apocryphal story that when John Glenn buckled himself
into the Mercury spacecraft, he turned to one of the aides and said,
"Just remember ... every piece of equipment here was provided by the low
bidder." :)


Rick DeNatale

Apr 15, 2008, 11:49:16 PM

On Tue, Apr 15, 2008 at 10:52 PM, M. Edward (Ed) Borasky
<zn...@cesmail.net> wrote:

> And there is the apocryphal story that when John Glenn buckled himself into
> the Mercury spacecraft, he turned to one of the aides and said, "Just
> remember ... every piece of equipment here was provided by the low bidder."

Actually, I'm pretty sure that was Wally Schirra - much more his style than Glenn's.

And Project Mercury is a particular interest of mine.

http://www.mercuryspacecraft.com/wiki runs on the same server in my
house as my blog.

Rick DeNatale

Apr 15, 2008, 11:59:47 PM

On Tue, Apr 15, 2008 at 10:40 PM, Phillip Gawlowski
<cmdja...@googlemail.com> wrote:
> Rick DeNatale wrote:
>
> |
> | Amen, Sister! And languages which rely on static typing have a
> | tendency to do much more random things when things go wrong. Language
> | like Ruby tend to have a more vigilant runtime.
> |
>
> I wouldn't fly in an aeroplane that relies on the runtime to catch errors.
>
> Take the Space Shuttle as an extreme. Does the language breed perfection
> in the Shuttle's source, or is it the process NASA uses?
>
> I bet you dollars to doughnuts that it is the process, with
> more-than-due-diligence in writing and testing the software. That the
> requirements are clear cut and well understood is another bonus.
>
> Languages don't matter. Compilers don't matter. Process, however, does.
>
> Or methodology. TDD has its benefits, as does BDD. Without these, the
> Agile way wouldn't work. QA is the key, not that language.

I was pondering this thread earlier today, before I pitched in, and
was going to draw an analogy with Frank Borman's comments during the
Senate committee hearing on the Apollo 1 fire. He said that the real
cause of the fire was "a lack of imagination" about the dangers of
doing ground testing with the spacecraft filled with pure O2 at
sea-level atmospheric pressure.

Relying on static-typing to 'prevent' fatal errors exhibits the same
kind of lack of imagination about the range of possible failure modes.
Nothing is perfect, but I'll take disciplined testing over relying on
ceremonial static typing any day.

> Don't just take my word for it:
>
> http://www.nap.edu/html/statsoft/chap2.html
>
> The above link has a case study on NASA's process for developing the
> Space Shuttle's flight control software.

Of course, even with good process it's still hard to get it right the
first time - remember "the bug heard round the world," which kept
Columbia on the pad during the first attempt to launch STS-1?

http://portal.acm.org/citation.cfm?id=1005928.1005929

Phillip Gawlowski

Apr 16, 2008, 12:21:33 AM


Rick DeNatale wrote:

|
| I was pondering this thread earlier today, and before I pitched in,
| and was going to draw an analogy with Frank Borman's comments during
| the Senate commitee hearing on the Apollo 1 fire. He said that the
| real cause of the fire, was "a lack of imagination" about the dangers
| of doing ground testing with the spacecraft filled with pure O2 at
| sea-level atmospheric pressure.
|
| Relying on static-typing to 'prevent' fatal errors exhibits the same
| kind of lack of imagination about the range of possible failure modes.
| Nothing is perfect, but I'll take disciplined testing over relying on
| ceremonial static typing any day.

Indeed. It is about knowing the limits of a language, and its features,
too. Not just "What can @language do?", but also "What can't @language
do?" needs to figure into it.

And a lot of math can figure into it, too. Jim Weirich told an anecdote
to that effect in his keynote at MWRC08: a bug hits once in a million
calls. The piece of hardware using the buggy software stalled "once,
maybe twice a day". After a bit of math, it turned out the code was
called ~1.3 million times in 8 hours - resulting in a failure "once or
twice a day".

Faith is good. Testing (unit tests, functional tests, integration test,
regression tests, usability tests, acceptance tests...) is better.

As Knuth once said: "Beware of this code. I have merely proven it
correct, not tested it" (or something along those lines, anyway).

|
| Of course even with good process, it's still hard to get it right the
| first time, remember "the bug heard round the world," which kept
| Columbia on the pad during the first attempt to launch STS-1?

No, I don't remember that. I was a wee one when the Shuttle Program
started. :)

However, without process, any process, it is impossible to get things
right at *any* time.

The difficulty is in picking the most correct approach to a problem. The
NASA process doesn't necessarily translate into, say, corporate or web
development, or any situations where requirements change rapidly and/or
are not well understood (in the case of the Space Shuttle, the
requirements were well understood. Or so I hope. Business processes
aren't necessarily well understood, or can even be expressed).

I wouldn't use Agile to build Flight control software. But I wouldn't
use a statistical methodology to build a billing system, either.

Long story short: in today's world we don't just have to be
multilingual in the languages we speak, but also adaptable to the
methodologies we are able to work in.

Well, computer science seems to be maturing, and thus software
development, too.

| http://portal.acm.org/citation.cfm?id=1005928.1005929

Dang, I'll have to find an alternative to this link (lacking the means
to access this resource, unfortunately).

--
Phillip Gawlowski
Twitter: twitter.com/cynicalryan

~ I'm looking for something that can deliver a 50-pound payload of snow
~ on a small feminine target. Can you suggest something? Hello...?
~ --- Calvin



Matt Todd

Apr 16, 2008, 4:45:03 AM

I'd much rather be damn sure and also have exception handling for what
I don't expect.

You know, because exceptions to the rules of life can happen and they
aren't always what you expect. Because life isn't always as linear as
we'd hope.

Matt Todd

ThoML

Apr 16, 2008, 5:05:35 AM

> for example where type inference is used, like in Haskell or
> F# (or to some extent in C# 3.0)

IIRC D is capable of doing some type inferencing too. But D is still
on my to-be-learned list.

Eleanor McHugh

Apr 16, 2008, 9:09:28 AM

On 16 Apr 2008, at 10:15, Michael T. Richter wrote:
> Is there any (serious) language made after, say, 1985 that doesn't
> have exception handling? Static typing or dynamic typing, strong
> typing or weak typing -- they pretty much all have some kind of
> exception handling mechanism.


It's not enough to have the mechanism, you also have to code the
system to use it intelligently otherwise you won't fail-safe.

Eleanor McHugh

Apr 16, 2008, 9:09:38 AM

On 16 Apr 2008, at 03:40, Phillip Gawlowski wrote:
> I wouldn't fly in an aeroplane that relies on the runtime to catch
> errors.

I wouldn't fly in an aeroplane where a runtime error couldn't be
caught. That's because there will be runtime errors regardless of how
well designed and analysed the code is.

> Take the Space Shuttle as an extreme. Does the language breed
> perfection
> in the Shuttle's source, or is it the process NASA uses?

That process includes implementing fail-safe conditions for runtime
errors. Without those, the developers would be legally culpable for any
deaths that occurred as a result of their negligence. Waking up in the
morning knowing that is surprisingly good at focusing one's attention
on detail...

> I bet you dollars to doughnuts that it is the process, with
> more-than-due-diligence in writing and testing the software. That the
> requirements are clear cut and well understood is another bonus.
>
> Languages don't matter. Compilers don't matter. Process, however,
> does.
>
> Or methodology. TDD has its benefits, as does BDD. Without these, the
> Agile way wouldn't work. QA is the key, not that language.

The jury is still out on TDD and BDD. None of my friends in the
avionics industry has much confidence in these techniques, but the
main goal there is systems which don't kill people or destroy millions
of dollars of equipment. The only argument I see in favour of that
particular brand of agile development is that the problems involved
are essentially human rather than technical, and the code is just a
way of forcing people to make decisions in a timely fashion.

Also whilst QA techniques transfer fairly well between languages, if
given the choice between two languages with different levels of
verbosity it is always advisable to use the less verbose language:
there's less to test, less to go wrong, and less likelihood of
muddling your (often vague) requirements.

Robert Dober

Apr 16, 2008, 9:39:46 AM

On Wed, Apr 16, 2008 at 4:27 AM, Rick DeNatale <rick.d...@gmail.com> wrote:
<snip>

> > True. But nothing guarantees you anything at run-time, including 100%
> > compliance at compile-time. That's why most CS and IS degrees have lectures
> > explaining the difference between Verification (what your compiler does) and
> > Validation (what you do before you start coding).
>
> Amen, Sister! And languages which rely on static typing have a
> tendency to do much more random things when things go wrong. Language
> like Ruby tend to have a more vigilant runtime.

Reminds me of the old story about Donald Knuth (I do not know if it is
actually true), who was lecturing on formal proofs of code and was
asked by a student whether the code actually worked. He replied:
"I do not have any idea; I only proved it correct, I never tested it."
Although most of you know this story, I believe it is particularly of
interest in this context.

Cheers
Robert

Phillip Gawlowski

Apr 16, 2008, 9:42:49 AM


Eleanor McHugh wrote:
| On 16 Apr 2008, at 03:40, Phillip Gawlowski wrote:
|> I wouldn't fly in an aeroplane that relies on the runtime to catch
|> errors.
|
| I wouldn't fly in an aeroplane where a runtime error couldn't be caught.
| That's because there will be runtime errors regardless of how well
| designed and analysed the code is.

Of course. But solely relying on trusting that the runtime will do The
Right Thing isn't the way to go. Error catching and handling is a tool
to the user, not a silver bullet.

|> Take the Space Shuttle as an extreme. Does the language breed perfection
|> in the Shuttle's source, or is it the process NASA uses?
|
| That process includes implementing fail safe conditions for runtime
| errors. Without those the developers would be legally culpable for any
| deaths that occurred as a result of their negligence. Waking up in the
| morning knowing that is surprisingly good at focusing the attention on
| detail...

And introducing large amounts of stress that are counterproductive. ;)

I doubt, however, that there is a single undefined state in the Space
Shuttle's software. No uncaught exception, no reliance on language
features to do the right things, but well understood and diligent
implementation of those, together with rigorous QA.

|
| The court is still out on TDD and BDD. None of my friends in the
| avionics industry has much confidence in these techniques, but the main
| goal there is systems which don't kill people or destroy millions of
| dollars of equipment. The only argument I see in favour of that
| particular brand of agile development is that the problems involved are
| essentially human rather than technical and the code is just a way of
| forcing people to make decisions in a timely fashion.

As I said in another reply in this thread, methodologies are but one
skill set. What works for a billing system doesn't necessarily work for
a cruise missile or the A380. Different problem domains require
different solutions.

And Agile's domain is in the face of changing or evolving requirements.

I suspect that aeronautical problems are well understood, and that the
requirements (while not easily gathered) are determined well before the
first line of code is written.

As far as I understand it, TOPCASED does work like this:
http://www.heise-online.co.uk/open/TOPCASED-System-development-using-Open-Source--/features/110028

| Also whilst QA techniques transfer fairly well between languages, if
| given the choice between two languages with different levels of
| verbosity it is always advisable to use the less verbose language:
| there's less to test, less to go wrong, and less likelihood of muddling
| your (often vague) requirements.

No silver bullets. Picking the right tool for the job is key.

But what use is a less verbose language, if only a handful of people
understand it well enough? Sure, often there is time to train, but
sometimes there is not.

Trade offs are everywhere, and none of them are easy.


--
Phillip Gawlowski
Twitter: twitter.com/cynicalryan

~ - You know you've been hacking too long when...
..you dream you have to write device drivers for your refrigerator,
washing machine, and other major household appliances before you can use
them.



Sean O'Halpin

Apr 16, 2008, 9:48:43 AM

On Wed, Apr 16, 2008 at 2:39 PM, Robert Dober <robert...@gmail.com> wrote:
>
> Reminds me of the old story about Donald Knuth (I do not know if it is
> actually true) who was lecturing formal code proves and was asked by a
> student if the code actually worked now. He replied:
> I do not have any idea I only proved it correct I never tested it.
> Although most of you know this story I believe that it is particularly
> of interest in this context.
>
> Cheers
> Robert

FYI: http://www-cs-faculty.stanford.edu/~knuth/faq.html - see the last
question on the page.

Regards,
Sean

Robert Dober

Apr 16, 2008, 10:24:06 AM

On Wed, Apr 16, 2008 at 3:48 PM, Sean O'Halpin <sean.o...@gmail.com> wrote:

> FYI: http://www-cs-faculty.stanford.edu/~knuth/faq.html - see last
> question on page.

I stand gladly corrected, and need a memory extension :(

So he added at the end of a paper:
"Beware of bugs in the above code; I have only proved it correct, not
tried it."


Thx

Eleanor McHugh

Apr 16, 2008, 12:06:26 PM

On 16 Apr 2008, at 15:06, Michael T. Richter wrote:

> On Wed, 2008-04-16 at 22:09 +0900, Eleanor McHugh wrote:
>>
>> >> I'd much rather be damn sure and also have exception handling for
>> >> what
>> >> I don't expect.
>
>> > Is there any (serious) language made after, say, 1985 that doesn't
>> > have exception handling? Static typing or dynamic typing, strong
>> > typing or weak typing -- they pretty much all have some kind of
>> > exception handling mechanism.
>
>> It's not enough to have the mechanism, you also have to code the
>> system to use it intelligently otherwise you won't fail-safe.
>
> Well, yeah. But Matt made it sound like exception handling was a
> rare beast and a major decision criterion for selecting languages.
> I can only think of one language left in wide, common use that
> doesn't have exception handling: C. (I'm sure others will
> immediately jump up and list others, but that's just life.;)

And I've seen a lot of C programmers code their own with longjmp ;)

Eleanor McHugh

Apr 16, 2008, 12:23:09 PM

On 16 Apr 2008, at 14:42, Phillip Gawlowski wrote:
> I doubt, however, that there is a single undefined state in the Space
> Shuttle's software. No uncaught exception, no reliance on language
> features to do the right things, but well understood and diligent
> implementation of those, together with rigorous QA.

It's a lovely idea, but ponder the impact of Gödel's Incompleteness
Theorems or Turing's proof of the Halting Problem. In practice there
are program states which can occur which cannot be identified in
advance because they are dependent on interactions with the
environment, or are artefacts of the underlying problem space.

That's why run-time error handling and fail-safe behaviour are so
important regardless of the rigour of QA processes.

> As I said in another reply in this thread, methodologies are but one
> skill set. What works for a billing system doesn't necessarily work
> for
> a cruise missile or the A380. Different problem domains require
> different solutions.
>
> And Agile's domain is in the face of changing or evolving
> requirements.
>
> I suspect that aeronautical problems are well understood, and
> requirements (while not easily) determined well before the first
> line of
> code is written.

Never rely upon suspicions when talking with people who actually know
for sure. As I pointed out earlier in this thread I've written and
certified cockpit systems (for both civilian and paramilitary use) and
requirements have tended to be just as amorphous as in any other
industry I've subsequently worked in. The main difference has been one
of management realising in the former case that good systems rely on
good code and that this is something a small percentage of developers
can produce, whereas in the latter there's a belief that any two
coders are interchangeable so long as the process and tools are right.

Personally I'll always bet on a small team of motivated hackers
determined to understand their problem domain over a larger team of
professional developers with the latest tools and methodologies but a
less consuming passion.

> No silver bullets. Picking the right tool for the job is key.
> But what use is a less verbose language, if only a handful of people
> understand it well enough? Sure, often there is time to train, but
> sometimes there is not.


If I have a large safety-critical or mission-critical codebase that
needs maintaining I'm more interested in finding developers who
understand the problem domain than who understand the language it's
developed in. Any half-competent developer will pick up a new language
in a matter of weeks, but learning a problem domain can take years.

Phillip Gawlowski

Apr 16, 2008, 12:57:10 PM


Eleanor McHugh wrote:

| It's a lovely idea, but ponder the impact of Gödel's Incompleteness
| Theorems or Turing's proof of the Halting Problem. In practice there are
| program states which can occur which cannot be identified in advance
| because they are dependent on interactions with the environment, or are
| artefacts of the underlying problem space.
|
| That's why run-time error handling and fail-safe behaviour are so
| important regardless of the rigour of Q&A processes.

Sure. But to know these states, the software should be tested as
thoroughly as possible. I somehow doubt that anybody using something
mission-critical wants to call the hotline during the final approach
of a plane, or when a surgical robot gets fantasies of being SkyNet. ;)

Anyway, this problem is (AFAIK) countered by using redundant
implementations of the hardware and software (well, as far as
possible) to minimize the effect of unknown states.

I don't think I have ever heard of a pilot encountering an unhandled
exception during normal operation, for example. I guess we mean the
same thing, after all.

(At least I don't see a contradiction in our arguments?)

|
| Never rely upon suspicions when talking with people who actually know
| for sure. As I pointed out earlier in this thread I've written and
| certified cockpit systems (for both civilian and paramilitary use) and
| requirements have tended to be just as amorphous as in any other
| industry I've subsequently worked in. The main difference has been one
| of management realising in the former case that good systems rely on
| good code and that this is something a small percentage of developers
| can produce, whereas in the latter there's a belief that any two coders
| are interchangeable so long as the process and tools are right.

How does that rebut my assertion that the requirements are nonetheless
well understood? Requirements can change for many reasons, and not all
of them relate to the actual software; some concern only its
implementation.

I mean, we pretty much know the physics that make flight work, for
example. That a different airframe needs different software to work is
obvious (can't trim a fighter the same as a jumbo, for example).

However, the math stays the same; "just" the implementation changes
(which, as I fully recognize, is a challenge in itself). And, sooner
or later, the requirements have to, for want of a better term, gel
into something that doesn't change anymore (or at least not as easily
as in more conventional development situations)?

Mind you, I'm not discounting your expertise in the matter at all.

| Personally I'll always bet on a small team of motivated hackers
| determined to understand their problem domain over a larger team of
| professional developers with the latest tools and methodologies but a
| less consuming passion.

Same here.

|
| If I have a large safety-critical or mission-critical codebase that
| needs maintaining I'm more interested in finding developers who
| understand the problem domain than who understand the language it's
| developed in. Any half-competent developer will pick up a new language
| in a matter of weeks, but learning a problem domain can take years.

Well, I kind of assumed that as a given. ;)

I'd be interested in the kinds of trade-offs that have to be made in
this particular problem domain (since I can't speak from experience,
never claimed to, and didn't mean to imply as much).

I haven't worked on anything more mission-critical than CRUD-style
apps, and I can only infer from my knowledge what kinds of problems
development teams face.

Still, it seems to me that no level of genius can create software of
the kind necessary for the Space Shuttle, or even an average airplane,
without the level of testing that NASA or Boeing brings to bear on
their software.

After all, smart people should recognize the difficulties they face
when working on mission-critical software?


--
Phillip Gawlowski
Twitter: twitter.com/cynicalryan

Rule of Open-Source Programming #48:

The number of items on a project's to-do list always grows or remains
constant.



Lionel Bouton

Apr 16, 2008, 1:55:41 PM

Phillip Gawlowski wrote:
> [...]

> Anyway, this problem is (AFAIK, anyway), countered by using redundant
> implementations of the hardware and software (well, as far as possible,
> anyway), to minimize the effect of unknown states.
>

This solves some problems but not all of them. If the software and
hardware are designed based on flawed specifications, you get a rocket
explosion (Ariane V's first test flight: the 3 redundant systems all
failed one after the other because some flight-profile constraints
were reused from Ariane IV but weren't applicable to the new
rocket...).

> I don't think that I ever heard of a pilot encountering an unhandled
> exception during normal operation, for example. I guess we mean the
> same, after all.

I think it happened at least once on an Airbus, where a pilot had to
switch to manual controls or deactivate a safety measure because the
autopilot was going to bring the plane to a stall (sensors reporting
incorrect measurements). I couldn't find the reference for this
specific instance, but Google brought up other problems:

# Unforeseen conditions leading to autopilot misbehaving :
http://aviation-safety.net/database/record.php?id=19940630-0
http://shippai.jst.go.jp/en/Detail?fn=0&id=CA1000621

# Software glitches putting aircraft in danger :
http://online.wsj.com/article/SB114895279859065931-search.html?KEYWORDS=flight+check&COLLECTION=wsjie/6month

# several incidents, look for the "Cause" lines to filter the problems
# caused by flight systems.
http://www.airsafety.com/aa587/faa_fltcont.pdf

> |
> | Never rely upon suspicions when talking with people who actually know
> | for sure. As I pointed out earlier in this thread I've written and
> | certified cockpit systems (for both civilian and paramilitary use) and
> | requirements have tended to be just as amorphous as in any other
> | industry I've subsequently worked in. The main difference has been one
> | of management realising in the former case that good systems rely on
> | good code and that this is something a small percentage of developers
> | can produce, whereas in the latter there's a belief that any two coders
> | are interchangeable so long as the process and tools are right.

In my experience this latter case is widespread in companies with no
real in-house CS knowledge that believe they can manage IT projects
themselves (with pointy-haired bosses :-)). It's hard to realize that
you need sharp minds to produce good code when your own view of
software building is limited to playing with Lego bricks... This is a
recurrent problem for CS people: making other people aware of the
inherent complexities of software design.

>
> I mean, we pretty much know the physics that make flight work, for
> example. That a different airframe needs different software to work is
> obvious (can't trim a fighter the same as a jumbo, for example).

If only you could have advised Arianespace :-)

Lionel

Arved Sandstrom

Apr 16, 2008, 2:15:35 PM

"ThoML" <mica...@gmail.com> wrote in message
news:33ecf938-6e8f-4599...@s50g2000hsb.googlegroups.com...

AFAIK the type inferencing in D is similar to that in C# 3.0. IOW, you
can omit the type on a declaration if the compiler can infer it from
the initializer. Sounds trivial, but it does save typing if the
datatype is quite complex. Where it will really save time is in
combination with object-initializer syntax, to provide anonymous types.
I don't know enough about it, but I'm guessing that such anonymous
types would then fit in well with the lambda expressions also showing
up in C# now.

AHS


Francis Burton

Apr 16, 2008, 2:35:48 PM

In article <5E300D89-09B9-4788...@games-with-brains.com>,

Eleanor McHugh <ele...@games-with-brains.com> wrote:
>On 16 Apr 2008, at 14:42, Phillip Gawlowski wrote:
>> I doubt, however, that there is a single undefined state in the Space
>> Shuttle's software. No uncaught exception, no reliance on language
>> features to do the right things, but well understood and diligent
>> implementation of those, together with rigorous QA.
>
>It's a lovely idea, but ponder the impact of Gödel's Incompleteness
>Theorems or Turing's proof of the Halting Problem. In practice there
>are program states which can occur which cannot be identified in
>advance because they are dependent on interactions with the
>environment, or are artefacts of the underlying problem space.

I'm not sure how the Halting Problem relates to presence or
absence of undefined states. What does it mean to call a state
"undefined" anyway, in this context? Presumably there's always
a finite number of states, which may be extremely large for most
programs that perform useful calculations. If we start with very
small, simple (and not useful) programs, we can enumerate all
the states. Are any of these undefined? As we increase the size
of the program, the number of states increases, presumably at
an alarming rate. At what point do we become unable to identify
program states?

An example of a simple program that does a barely useful task
is one that reads an input level from a 16 bit A/D, say, and
writes half that value to a 16 bit D/A. Can we be confident we
can write a 100% reliable and correct program to perform this
task? If not, why not? If so, let us increase the complexity
of the task progressively in small increments. At what point
are we forced to admit that we cannot be sure our program does
what it is meant to do?
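
For what it's worth, the task itself is tiny - a minimal Ruby sketch,
where adc and dac are hypothetical wrappers around the converter
registers rather than any real driver API:

def transfer_half(adc, dac)
  sample = adc.read        # 0..65535 from the 16-bit A/D
  dac.write(sample >> 1)   # halve it; the result still fits in 16 bits
end

Even here, though, correctness already leans on unstated assumptions:
that read never returns an out-of-range value, and that truncating
division is the rounding the specification intended.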

I'm not trying to prove you wrong; I just want to get a better
handle on the problem.

Francis

Arved Sandstrom

unread,
Apr 16, 2008, 2:39:47 PM4/16/08
to
"Eleanor McHugh" <ele...@games-with-brains.com> wrote in message
news:E28E04F4-C633-487A...@games-with-brains.com...

> On 15 Apr 2008, at 13:28, Michael Neumann wrote:
>> You will never ever be able to use Ruby for aviation software, neither
>> Lua, Python, Perl etc.
>
> You provide the budget, I'll provide the code ;) Having designed and
> implemented avionics systems I see nothing in Ruby or any other
> scripting language that would stand in the way of using it to do the
> same thing. In fact Lua began its life as a language for device
> control. That's not to say that MRI is particularly suited to the
> task, but the necessary changes could be made if anyone wanted to
> without having to change the language syntax and semantics.
[ SNIP ]

I know nothing of avionics software, but I'd assume
http://en.wikipedia.org/wiki/Avionics_software is reasonably accurate. Half
of the stuff in that article is what you'd like to do on any project if you
didn't have impossible deadlines and shabby processes, and the other half is
simply extra rigour because errors are much less acceptable.

What I don't see is any particular emphasis on specific languages.
Considering that there seems to be no shortage of avionics software written
in C/C++, I don't immediately see why Ruby or Python wouldn't work either,
especially considering the intense process the software goes through.

I tend not to discount any particular language prima facie. I recall over
ten years ago having a colleague demonstrate a responsive, reliable (as near
as I could tell) and feature-rich moving map display program for small boat
navigation, and I asked him what it was written in. He replied, Visual
Basic. He later went on to sell it commercially.

I'm inclined to think that 90%+ of software reliability comes from training,
experience and above all, process. Not the programming language.

AHS


Robert Dober

unread,
Apr 16, 2008, 2:44:19 PM4/16/08
to
On Wed, Apr 16, 2008 at 6:23 PM, Eleanor McHugh
<ele...@games-with-brains.com> wrote:
> On 16 Apr 2008, at 14:42, Phillip Gawlowski wrote:
>
> > I doubt, however, that there is a single undefined state in the Space
> > Shuttle's software. No uncaught exception, no reliance on language
> > features to do the right things, but well understood and diligent
> > implementation of those, together with rigorous QA.
> >
>
> It's a lovely idea, but ponder the impact of Gödel's Incompleteness
> Theorems or Turing's proof of the Halting Problem. In practice there are
> program states which can occur which cannot be identified in advance because
> they are dependent on interactions with the environment, or are artefacts of
> the underlying problem space.
>
I am not sure, but on first approach I believe that neither Gödel nor
Turing apply, because they are talking about systems describing
themselves. IIRC it is a theorem in TNT(1) making an assumption about
TNT in the first case, and a Turing machine reading the description of
a Turing machine on its tape in the second case.
I do not believe that aircraft control systems have this degree of
self-awareness, but I stand to be corrected if I am wrong, because
although I have been taught a lot about TMs and TNT I do not know a lot
about aircraft control.

> That's why run-time error handling and fail-safe behaviour are so important
> regardless of the rigour of Q&A processes.

That however I agree with!

(1) http://en.wikipedia.org/wiki/Typographical_Number_Theory
Cheers

Mike Silva

unread,
Apr 16, 2008, 3:58:06 PM4/16/08
to
On Apr 16, 2:39 pm, "Arved Sandstrom" <asandst...@accesswave.ca>
wrote:

> What I don't see is any particular emphasis on specific languages.
> Considering that there seems to be no shortage of avionics software written
> in C/C++, I don't immediately see why Ruby or Python wouldn't work either,
> especially considering the intense process the software goes through.
>
> I tend not to discount any particular language prima facie. I recall over
> ....

> I'm inclined to think that 90%+ of software reliability comes from training,
> experience and above all, process. Not the programming language.

But that still leaves 10%-. For example, as noted here
(http://www.praxis-his.com/sparkada/pdfs/spark_c130j.pdf), in an analysis
of safety-critical code written in three languages (C, Ada and SPARK),
all of it already certified to DO-178B Level A (the most stringent
level), it was found that the SPARK code had one tenth the residual
error rate of the Ada code, and the Ada code had only one tenth the
residual rate of the C code. That's a 100:1 difference in residual
error rates in code all of which was certified to the highest aviation
standards. Would anybody argue that putting out safety-critical
software with an error rate 100 times greater than the current art
allows is a good thing? In fact, would anybody argue that it is not
grossly negligent?

Oh, and the anecdote about the compiler finding in minutes a bug that
had defied testing for a week should not be lightly dismissed either.

Eleanor McHugh

unread,
Apr 16, 2008, 7:11:35 PM4/16/08
to
On 16 Apr 2008, at 17:57, Phillip Gawlowski wrote:
> I'd be interested in the kinds of trade-offs that have to be made in
> this particular problem domain (since I can't speak from experience,
> and never claimed to, either, and didn't mean to imply as much).
>
> I haven't worked on anything more mission critical than CRUD-style
> apps, and I can only infer from my knowledge what kind of problems
> development teams face.
>
> Still, it seems to me that no level of genius can create software such
> as is necessary for the Space Shuttle or a more average airplane,
> without the level of testing NASA or Boeing brings to bear for their
> software.

Oh definitely. Testing that code performs correctly is essential to
any embedded development process, as is validating that the code
written solves the correct problem. The latter is by far the more
difficult though.

The guidelines for developing civilian aviation software are
documented in RTCA-DO178B (see http://en.wikipedia.org/wiki/DO-178B)
which is abstract, non-prescriptive and an excellent alternative to
sleeping pills. Numerous concrete processes have emerged to suit how
various teams work, but in general the more critical the software then
the more that testing will result in hand-analysis of both source and
object code. Unit testing will be heavy on white boxing so the
majority of tests are likely to be disposable with unit changes but
there's lots of fun to be had with unglamorous and time-consuming old-
school software engineering (SLOCs, cyclomatic complexity, various
forms of test partitioning) that's independent of implementation
language or life-cycle methodology.

The maintenance of a clear audit trail on requirements and requirement
changes is essential for civil certification, so processes which lack
effective change control mechanisms are inappropriate. However I've
used RAD and Agile approaches (especially evolutionary prototyping)
successfully and had them pass certification so the myth that aviation
development is always monolithic waterfall is definitely unfounded.

In terms of the actual tradeoffs in mission critical systems (aviation
or otherwise) most come down to smoothing interaction with external
stimuli and breaking up costly computations and database queries into
discrete manageable chunks. There's very little genius required, just
careful attention to detail and an ability to analyse problem spaces:
that's probably why so many physicists, chemists and applied
mathematicians end up in this particular discipline.

For a theoretical foundation I recommend "Cybernetics" by Norbert
Wiener although it's a dense read.

Eleanor McHugh

unread,
Apr 16, 2008, 7:17:42 PM4/16/08
to
On 16 Apr 2008, at 19:40, Arved Sandstrom wrote:
> I tend not to discount any particular language prima facie. I recall
> over
> ten years ago having a colleague demonstrate a responsive, reliable
> (as near
> as I could tell) and feature-rich moving map display program for
> small boat
> navigation, and I asked him what it was written in. He replied, Visual
> Basic. He later went on to sell it commercially.

Much kudos to your friend. Twelve years ago I did the same thing in VB
for helicopters and whilst it was pushing the hardware at that time,
it was still usable. Of course these days most mobile phones have more
computational grunt and memory than that :)

Eleanor McHugh

unread,
Apr 16, 2008, 7:31:12 PM4/16/08
to

Any process that is algorithmic is necessarily implementable as a
Turing machine so I'd argue that the very act of coding the system
with a process defines TNT(1) whilst the target system itself becomes
TNT. Therefore until the system runs in situ and responds to its
environment one cannot make any firm statements regarding when the
system will halt. And if you can't tell when an autopilot will halt,
you have the potential for all kinds of mayhem...

Of course this is a terrible abuse of Church-Turing, but it seems to
fit the real world pretty well.

>> That's why run-time error handling and fail-safe behaviour are so
>> important
>> regardless of the rigour of Q&A processes.
> That however I agree with!

:)


Ellie
Who wonders what the hell this thread will look like in Google searches.

Eleanor McHugh

unread,
Apr 16, 2008, 9:21:54 PM4/16/08
to
On 16 Apr 2008, at 19:40, Francis Burton wrote:
> I'm not sure how the Halting Problem relates to presence or
> absence of undefined states. What does it mean to call a state
> "undefined" anyway, in this context? Presumably there's always
> a finite number of states, which may be extremely large for most
> programs that perform useful calculations. If we start with very
> small, simple (and not useful) programs, we can enumerate all
> the states. Are any of these undefined? As we increase the size
> of the program, the number of states increases, presumably at
> an alarming rate. At what point do we become unable to identify
> program states?

You're making the mistake of viewing program states as a discrete set,
all of which can be logically enumerated. If that were the case then
whilst complexity would make managing the creation of complex software
difficult, it would still be theoretically possible to create
'correct' programs. However Godel's incompleteness theorems tell us
that for any mathematical system based upon a set of axioms there will
be propositions consistent with that system which cannot be proven or
disproved by application of the system (i.e. they are unprovable,
which is what I meant by the casual short-hand 'undefined').

Both Turing machines and Register machines are axiomatic mathematical
systems and therefore can enter states which in terms of their axioms
are unknowable. A program is essentially a meta-state comprised of
numerous transient states and is thus a set of meta-propositions
leading to state propositions which need to be proved, any of which
may be unknowable. This applies equally to both the runtime behaviour
of the program _and_ to the application of any formal methods used to
create it.

For most day-to-day programming Godel incompleteness is irrelevant, in
the same way that quantum indeterminacy can be ignored when playing
tennis, but when you build large software systems which need to be
highly reliable unknowable states do have potential to wreak havoc.
This is why beyond a certain level of complexity it's helpful to use
statistical methods to gain additional insight beyond the normal
boundaries of a development methodology.

Now the Halting Problem is fascinating because it's very simple in
conception: given a program and a series of inputs (which in Godel's
terms comprises a mathematical system and a set of axioms) determine
whether or not the program will complete. Turing proved that in the
general case this problem is insoluble, and not only does this place
an interesting theoretical limitation on all software systems but it
also applies to sub-programs right the way down to the finest grain of
detail. Basically anytime a program contains a loop condition the
Halting Problem will apply to that loop.

So in essence we're left with a view of software development in which
we can never truly know if a program is correct or even if it will halt.
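
A concrete taste of this in Ruby (a sketch): whether the following loop
terminates for every positive integer is the Collatz conjecture, which
remains unproven. One innocent-looking loop condition and certainty is
already gone:

def collatz_steps(n)
  steps = 0
  until n == 1                       # does this always terminate? nobody knows
    n = n.even? ? n / 2 : 3 * n + 1
    steps += 1
  end
  steps
end

collatz_steps(27)   # => 111, but there is no proof that every n gets here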

> An example of a simple program that does a barely useful task
> is one that reads an input level from a 16 bit A/D, say, and
> writes half that value to a 16 bit D/A. Can we be confident we
> can write a 100% reliable and correct program to perform this
> task? If not, why not? If so, let us increase the complexity
> of the task progressively in small increments. At what point
> are we forced to admit that we cannot be sure our program does
> what it is meant to do?

That's very difficult to say for sure. Conceptually I'd have
confidence in the old BASIC standby of:

10 PRINT "HELLO WORLD"
20 GOTO 10

as this will run infinitely _and_ is intended to. But of course under
the hood the PRINT statement needs to be implemented in machine code
and that implementation could itself be incorrect, so even with a very
trivial program (from a coder's perspective) we see the possibility of
incorrect behaviour.

However balancing this is the fact that a program exists across a
finite period of time and is subject to modification and improvement.
This means that the set of axioms can be adjusted to more closely
match the set of propositions, in principle increasing our confidence
that the program is correct for the problem domain in question.

> I'm not trying to prove you wrong; I just want to get a better
> handle on the problem.

Douglas Hofstadter has written extensively on Godel and computability
if you want to delve deeper, but none of this is easy stuff to get
into, as it runs counter to our common-sense view of mathematics.

Tom Cloyd

unread,
Apr 16, 2008, 11:16:42 PM4/16/08
to
My thanks to the core contributors to this fascinating thread. It's
stretched me well past my boundaries on several points, but also
clarified some key learning I've picked up from my own field (psychology
& psychotherapy).

For me, the take-away here (and this is not at all news to me) is that
valid formal approaches to reliability are efficient (at least at times)
and powerful, and should definitely be used - WHEN WE HAVE THEM. The
problem is that they all stop short of the full spectrum of reality in
which our processes must survive. Thus we must ultimately leave the
comfort of deduction and dive into the dragon realm of inferential
processes. Ultimately, there simply is no substitute for, or adequate
simulation of, reality. Sigh.

Thanks, again. I've much enjoyed plowing through these posts.

t.

--

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Tom Cloyd, MS MA, LMHC
Private practice Psychotherapist
Bellingham, Washington, U.S.A: (360) 920-1226
<< t...@tomcloyd.com >> (email)
<< TomCloyd.com >> (website & psychotherapy weblog)
<< sleightmind.wordpress.com >> (mental health issues weblog)
<< directpathdesign.com >> (web site design & consultation)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


Arved Sandstrom

unread,
Apr 17, 2008, 12:16:55 AM4/17/08
to
"Mike Silva" <snarf...@yahoo.com> wrote in message
news:a6f8591e-8e47-4d01...@c65g2000hsa.googlegroups.com...

On Apr 16, 2:39 pm, "Arved Sandstrom" <asandst...@accesswave.ca>
wrote:
> What I don't see is any particular emphasis on specific languages.
> Considering that there seems to be no shortage of avionics software
> written
> in C/C++, I don't immediately see why Ruby or Python wouldn't work either,
> especially considering the intense process the software goes through.
>
> I tend not to discount any particular language prima facie. I recall over
> ....
> I'm inclined to think that 90%+ of software reliability comes from
> training,
> experience and above all, process. Not the programming language.

******************************


But that still leaves 10%-. For example, as noted here
(http://www.praxis-his.com/sparkada/pdfs/spark_c130j.pdf), in an analysis
of safety-critical code written in three languages (C, Ada and SPARK),
all of it already certified to DO-178B Level A (the most stringent
level), it was found that the SPARK code had one tenth the residual
error rate of the Ada code, and the Ada code had only one tenth the
residual rate of the C code. That's a 100:1 difference in residual
error rates in code all of which was certified to the highest aviation
standards. Would anybody argue that putting out safety-critical
software with an error rate 100 times greater than the current art
allows is a good thing? In fact, would anybody argue that it is not
grossly negligent?

Oh, and the anecdote about the compiler finding in minutes a bug that
had defied testing for a week should not be lightly dismissed either.

******************************

I won't dispute the fact that some languages have more inherent support for
"correct" programming than others do. SPARK wouldn't be the only one; Eiffel
and various functional languages come to mind also. For others you can get
add-ons, such as JML for Java (see
http://en.wikipedia.org/wiki/Design_by_contract)

Having said that, it seems to me that the better correctness of programs in
SPARK or Ada compared to C/C++, say, would also be due to the qualities of
organizations that tend to use/adopt these languages. Those qualities
include programmer competence/experience/education, organizational
standards, processes in place, and external requirements (as in legal ones
for avionics or medical software). Not to mention, there is a correlation
between the ease of use of a language and the rate of poor coding (I may get
flak for that statement), which is not necessarily a fault of that language.
Note that by ease of use I do not mean masterability, I simply mean how
quickly a programmer can write something that sort of works.

For example, is shabby software written in Java or C or Python or PHP or
JavaScript shabby because one of those languages was chosen, or is it shabby
because the requirements analysis sucks, design is basically absent, there
is no documentation, testing is a myth, and the coders haven't mastered the
language? I've seen more than a few ads in my area advertising Web developer
jobs for $9 or $10 an hour...you could use the best language in the world at
a job like that and you'd still end up with crap. Conversely, get a team of
really experienced and smart coders who are well-versed in process, have
management backing for process, and I don't see the language of choice
mattering _that_ much. IOW, in that MoD analysis you refer to, was
everything else equal? Throw Ruby at a CMM Level 5 team and I wonder whether
the product is going to be an order or two of magnitude worse than if they
had Ada. Myself I doubt it.

AHS


Arved Sandstrom

unread,
Apr 17, 2008, 1:23:51 AM4/17/08
to
"Eleanor McHugh" <ele...@games-with-brains.com> wrote in message
news:28AE88BF-5240-4A14...@games-with-brains.com...

> On 16 Apr 2008, at 17:57, Phillip Gawlowski wrote:
>> I'd be interested in the kinds of trade-offs that have to be made in
>> this particular problem domain (since I can't speak from experience,
>> and never claimed to, either, and didn't mean to imply as much).
>>
>> I haven't worked on anything more mission critical than CRUD-style
>> apps, and I can only infer from my knowledge what kind of problems
>> development teams face.
>>
>> Still, it seems to me that no level of genius can create software such
>> as is necessary for the Space Shuttle or a more average airplane,
>> without the level of testing NASA or Boeing brings to bear for their
>> software.
>
> Oh definitely. Testing that code performs correctly is essential to
> any embedded development process, as is validating that the code
> written solves the correct problem. The latter is by far the more
> difficult though.
[ SNIP ]

On the latter note - validation as opposed to verification - it's important
to add that strictly speaking requirements should address the user's real
needs, not what the user thinks they are. Raise your hand if you've ever
worked on a project where you got some (ostensibly) fairly clear
requirements from a client, built an application that validates, presented
it to the client, and found out they really wanted something else.

It's why good business analysts should be worth more than architects or
coders. For the record, I'm not a BA - I just recognize their value.

AHS


Robert Dober

unread,
Apr 17, 2008, 2:25:05 AM4/17/08
to
On Thu, Apr 17, 2008 at 1:31 AM, Eleanor McHugh

Oh, I was not clear enough, I am afraid. I challenge that you can prove
that one cannot prove whether your aircraft control system will halt or
not. I even believe that in theory it can be proven. And if you are not
talking about theory then I agree with you:
it is not a fair statement.
This is not nitpicking, because I think that serious work is still done
in automatic theorem proving, and the day might come when even
complex systems can be proven to be correct.
Gödel and Turing only show that *complete* systems cannot be correct;
they say nothing about *complex* ones.

Cheers
Robert


>
>
>
> >
> > > That's why run-time error handling and fail-safe behaviour are so
> important
> > > regardless of the rigour of Q&A processes.
> > >
> > That however I agree with!
> >
>
> :)
>
>
> Ellie
> Who wonders what the hell this thread will look like in Google searches.
>
>
>
> Eleanor McHugh
> Games With Brains
> http://slides.games-with-brains.net
> ----
> raise ArgumentError unless @reality.responds_to? :reason
>
>
>
>

--

Phillip Gawlowski

unread,
Apr 17, 2008, 3:58:30 AM4/17/08
to

Eleanor McHugh wrote:
|
| Much kudos to your friend. Twelve years ago I did the same thing in VB
| for helicopters and whilst it was pushing the hardware at that time, it
| was still usable. Of course these days most mobile phones have more
| computational grunt and memory than that :)

But the mobile phones aren't necessarily as reliable as, say, the
hardware and operating system of an avionics system.

Considering that an operating system is an abstraction, and that
abstractions, more often than not, are leaky, I contend that specialized
hardware doesn't use much of an operating system, thus eliminating a
large set of undefined (rather, unprovable as per Godel) states,
correct? Only providing the bare minimum of APIs needed for the software
on the application level to function properly, or dispensing with
operating systems entirely, working on the bare metal (I think the
Apollo project computers functioned like that, but correct me if I'm wrong).

In my experience, the more complex software gets, the more error-prone
it is. I notice this in my PDA, which performs rock solid, only needing
a driver upgrade for SD cards above 64 MB (well, at the time this thing
was made, cards larger than 64 MB weren't widely available yet, which
shows that not all requirements can be gathered beforehand); in my old
smart phone, which failed in every possible situation; and in all the
operating systems I've used with some depth so far.

So, isn't it part of the requirement gathering process, or the design
process, during software engineering to cut down on unnecessary
complexities and abstractions, too?

And that influences interface design for the user, too, I've noticed.
After all, the US Air Force's guidelines for user interfaces
fill a 478-page book:

"From 1984 to 1986, the U.S. Air Force compiled existing usability
knowledge into a single, well-organized set of guidelines for its user
interface designers. I was one of several people who advised the project
(in a small way), and thus received a copy of the final 478-page book in
August 1986."

http://www.useit.com/alertbox/20050117.html

Hm, it seems to always come down to near-perfect requirements during design.

--
Phillip Gawlowski
Twitter: twitter.com/cynicalryan

Don't stop with your first draft.
~ - The Elements of Programming Style (Kernighan & Plaugher)



Eivind Eklund

unread,
Apr 17, 2008, 5:29:44 AM4/17/08
to
On Thu, Apr 17, 2008 at 3:21 AM, Eleanor McHugh
<ele...@games-with-brains.com> wrote:
> Now the Halting Problem is fascinating because it's very simple in
> conception: given a program and a series of inputs (which in Godel's terms
> comprises a mathematical system and a set of axioms) determine whether or
> not the program will complete. Turing proved that in the general case this
> problem is insoluble, and not only does this place an interesting
> theoretical limitation of all software systems but it also applies to
> sub-programs right the way down to the finest grain of detail. Basically
> anytime a program contains a loop condition the Halting Problem will apply
> to that loop.
>
> So in essence we're left with a view of software development in which we
> can never truly know if a program is correct or even if it will halt.

In general, proofs about Turing machines only apply with an
infinite-length tape - in other words, to a computer with infinite
memory.

The halting proof only proves that there exist programs whose halting
we can't prove, not that it isn't possible to prove things about
some programs.

The Goedel proof is about complete logical systems; in the case of a
computer program, we are working with a system where we have other
formalisms under it that's providing axioms that we don't prove, just
assume.

In the absence of hardware failure (something we of course can't prove,
and a problem in the real world), we can trivially prove halting
for a lot of cases. E.g., the simple program "halt" halts. "halt if
true" halts. "halt if 1 == 1" would generally halt, assuming no
redefinition of ==. And so on.
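
In Ruby terms the same trivially decidable cases might look like this
(a sketch, assuming no core methods have been redefined):

exit            # halts, unconditionally
exit if true    # halts: the guard is a literal
exit if 1 == 1  # halts, assuming Integer#== is untouched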

I appreciate and respect your general contributions in this thread; I
just couldn't let this particular argument stand, as it confuses people
about what the theorems say.

Eivind.

Phillip Gawlowski

unread,
Apr 17, 2008, 6:26:49 AM4/17/08
to

Eivind Eklund wrote:

|
| The Goedel proof is about complete logical systems; in the case of a
| computer program, we are working with a system where we have other
| formalisms under it that's providing axioms that we don't prove, just
| assume.

But a language that is Turing-complete is a complete logical system, is
it not?

Since you can express any Turing machine in a Turing-complete language,
you have to necessarily deal with Goedel's incompleteness, don't you?

And language side-effects are there, no matter what language, due to the
way it (or its compiler) is implemented.

| In the absence of hardware failure (something we of course can't prove,
| and a problem in the real world), we can trivially prove halting
| for a lot of cases. E.g., the simple program "halt" halts. "halt if
| true" halts. "halt if 1 == 1" would generally halt, assuming no
| redefinition of ==. And so on.

There are logical proofs that P is non-P (details of which are an
exercise for the reader, since I have misplaced Schaum's Outline of Logic).

That a state is provable says nothing about the *conditions* under which
this proof is valid. In a complete system, any proof is valid, so that the
proof that halt == true equates to false is a side-effect under the
(in-)correct circumstances.

Now, most applications don't reach this level of complexity, but the
level of complexity is there; you just usually don't have to care about it.

Conversely, if every state of a program were provable, the processes
like DO-178B wouldn't be necessary, would they?

--
Phillip Gawlowski
Twitter: twitter.com/cynicalryan

~ Why do we drink cow's milk? Who was the first guy who first looked at
a cow and said "I think I'll drink whatever comes out of these things
when I squeeze 'em!"? -- Calvin



Eleanor McHugh

unread,
Apr 17, 2008, 6:46:07 AM4/17/08
to
On 17 Apr 2008, at 07:25, Robert Dober wrote:
> On Thu, Apr 17, 2008 at 1:31 AM, Eleanor McHugh
> <ele...@games-with-brains.com> wrote:
>> Of course this is a terrible abuse of Church-Turing, but it seems
>> to fit
>> the real world pretty well.
>
> Oh I was not clear enough I am afraid. I challenge that you can prove
> that one cannot prove if your Aircraft control system can halt or not.
> I even believe that in theory it can be proven. And if you are not
> talking about theory than I agree with you
> it is not a fair statement.
> This is not nitpicking because I think that serious work is still done
> in automatic proving of theorems and the day might come where even
> complex systems can be proven to be correct.
> Gödel and Turing only show that *complete* systems cannot be correct,
> they say nothing about *complex*.

It's an interesting semantic point, in that complex is not necessarily
the same as complete. But likewise a Turing machine is an ideal device
which cannot exist in the real world: it requires both infinite
storage and infinite time so perforce there are computations which it
could make that cannot be made using the means available within our
physical environment. However for sake of argument let's suppose that
our complex system is being implemented on such an ideal device and
that our interest is in determining whether or not Godel
incompleteness is significant.

What matters is not specifically how complex the system is, in that a
highly complex system from our perspective may still be composed of
provable states, but how many of its states are unprovable within the
context of the complete system of which it forms a subset. We can
therefore see that the application of Godel to a complex but
incomplete system operating in ideal conditions would result in a
probabilistic incompleteness in that system that at least
theoretically could be measured.

However this then leads to the question of whether or not the complex
system is in fact only a subset of a larger complete system, or a
complete system in its own right. Considering that it is itself based
on a set of axioms (the requirements for the system) and therefore in
that regard complete, I would argue that it was not a subset at all.
This appears to create a probabilistic, relativistic incompleteness
(which is analogous to the viewpoint that physics has reached
regarding our physical universe).

But even if we allow the subset to be an incomplete system we can see
that increasing complexity increases the number of states in the
system, thus increasing the likelihood of included states being drawn
from the full set of states possible in the complete system. Yet
again, we can see that there is a probability of included states being
unprovable which beyond a certain level of complexity specific to the
system would be significant in preventing a formal proof of all the
included states.

Given that Godel's incompleteness applies to systems as simple as
arithmetic, and that even simple desktop applications often contain
orders of magnitude more axioms than this, the concept of formal proof
should always be seen in light of this probabilistic incompleteness.
That is not to say that attempting to prove a system is futile, but
merely a recognition that beyond a certain level of complexity
specific to a given system there will always be an element of
unpredictability even when environmental adjustment of the system does
not apply.


Ellie

Francis Burton

unread,
Apr 17, 2008, 6:56:20 AM4/17/08
to
In article <4806C111...@comcast.net>,

Tom Cloyd <tomc...@comcast.net> wrote:
>My thanks to the core contributors to this fascinating thread. It's
>stretched me well past my boundaries on several points, but also
>clarified some key learning I've picked from my own field (psychology &
>psychotherapy).

Here is a website with some papers that I found in my cursory
research of the topic and which might be interesting:

http://www.praxis-his.com/sparkada/publications_journals.asp

The Philosophical Transactions paper by Roderick Chapman reports
that, using SPARK (a well-behaved subset of Ada), "Proof of the
absence of run-time errors has been performed on programs of the
order of 100 000 lines of code."

Francis

Eleanor McHugh

unread,
Apr 17, 2008, 7:04:34 AM4/17/08
to
On 17 Apr 2008, at 10:29, Eivind Eklund wrote:
> The Halting proof only proves that there exists programs that we can't
> prove halting about, not that it isn't possible to prove things around
> some programs.
>
> The Goedel proof is about complete logical systems; in the case of a
> computer program, we are working with a system where we have other
> formalisms under it that's providing axioms that we don't prove, just
> assume.
>
> In the absence of hardware failure (something we of course can't prove
> and is a problem in the real world), we can trivially prove halting
> for a lot of cases. E.g, the simple program "halt" halts. halt if
> true halts. halt if 1 == 1 would generally halt, assuming no
> redefinition of ==. An so on.
>
> I appreciate and respect your general contributions in this thread; I
> just couldn't let this particular argument stand, as it confuse people
> about what the theorems say.

I fully agree with you that for degenerate cases the halting problem
is trivial. Unfortunately these tend to be the exceptions in real
world programs as opposed to the rule. For safety-critical systems
where unexpected halting is a run-time exception that needs to be
handled I think it is highly relevant, although it can rapidly become
a red herring if viewed out of context.

Rick DeNatale

unread,
Apr 17, 2008, 7:10:39 AM4/17/08
to
On Wed, Apr 16, 2008 at 12:57 PM, Phillip Gawlowski
<cmdja...@googlemail.com> wrote:
> Eleanor McHugh wrote:
>
> | It's a lovely idea, but ponder the impact of Gödel's Incompleteness
> | Theorems or Turing's proof of the Halting Problem. In practice there are
> | program states which can occur which cannot be identified in advance
> | because they are dependent on interactions with the environment, or are
> | artefacts of the underlying problem space.
> |
> | That's why run-time error handling and fail-safe behaviour are so
> | important regardless of the rigour of Q&A processes.
>
> Sure. But to know these states, the software should be tested as
> thoroughly as possible. I somehow doubt that anybody using something
> mission-critical to flying or medical health wants to call the hotline
> during the final approach of a plane or when a surgical robot gets
> fantasies of being SkyNET. ;)

Yes, testing, not a blind faith in whatever language is being used,
and its compiler.

> Anyway, this problem is (AFAIK, anyway), countered by using redundant
> implementations of the hardware and software (well, as far as possible,
> anyway), to minimize the effect of unknown states.

Of course this isn't perfect either. In fact "The Bug Heard Round the
World", which I mentioned earlier in this thread, was a failure of
redundancy.

The Shuttle has, or at least did in the early days, redundant on-board
computers which monitor the health and behavior of shuttle systems,
with voting used to find discrepancies. The hardware is/was comprised
of (3, I think) identical IBM 4Pi computers, with one of those running a
totally independently implemented software load. When control of the
launch/mission is transferred to this system, the separate processors
run in parallel, and their outputs are compared. If they disagree,
the launch is aborted.
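
The voting idea itself is simple - a hedged Ruby sketch of majority
voting among redundant channels (illustrative only, not the actual
shuttle logic):

def vote(outputs)
  counts = Hash.new(0)
  outputs.each { |o| counts[o] += 1 }          # tally each channel's answer
  winner, count = counts.max_by { |_, c| c }
  count > outputs.size / 2 ? winner : :abort   # no majority: halt the launch
end

vote([42, 42, 42])  # => 42
vote([42, 42, 40])  # => 42 (majority outvotes one bad channel)
vote([42, 41, 40])  # => :abort (no majority at all)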

Of course all of this worked well during the pre-STS1 mission sims.

However, on the day of the launch, there was a clock skew between the
redundant computers, so the output from one lagged just a bit behind
the others, and the system halted the launch, unnecessarily as it
turned out, at T-3

--
Rick DeNatale

My blog on Ruby
http://talklikeaduck.denhaven2.com/

Sean O'Halpin

unread,
Apr 17, 2008, 7:36:57 AM4/17/08
to
On Thu, Apr 17, 2008 at 10:29 AM, Eivind Eklund <eek...@gmail.com> wrote:
> In the absence of hardware failure (something we of course can't prove
> and is a problem in the real world), we can trivially prove halting
> for a lot of

(trivlal :)

> cases. E.g, the simple program "halt" halts.

If 'halt' is defined as implying that the program halts, then you're
not really proving anything here. Like saying that true implies true.
You have to take that as an axiom.

> halt if
> true halts. halt if 1 == 1 would generally halt, assuming no
> redefinition of ==. An so on.

Once you invoke conditions, you are invoking some formal system of
axioms (like your assumption above) which you have to define. I grant
that there are very simple formal systems which are 'complete' but
they are generally not 'interesting'. If you want even simple integer
arithmetic[1], you're subject to Goedel's incompleteness theorem. And
this is just in the abstract. As soon as you put even the 'halt'
program on real hardware, all bets are off. You simply cannot prove
anything about it.

I think I'd rather fly on a plane whose avionics software had been
written by someone who assumed that they had made mistakes somewhere
and that both their software and the hardware it was running on were
going to fail in some unforeseeable way (and had implemented the
appropriate safeguards both in process and error recovery) than by
someone who assumed that proving their software correct was sufficient
(though I realise that is a bit of a straw man :)

On a separate note, I'm continually surprised that computer 'science'
is so heavily biased to the mathematical theoretical side at the
expense of the empirical and pragmatic. Almost every single code
example in all textbooks and papers has 'error handling omitted for
clarity'! It's no wonder we have all these planes falling out of the
sky :)

Best regards,
Sean

[1] The number theorists are now shrieking in disbelief: Simple?
Integers? Arithmetic?! :)

Phillip Gawlowski

unread,
Apr 17, 2008, 7:51:14 AM4/17/08
to

Rick DeNatale wrote:

|
| Yes, testing, not a blind faith in whatever language is being used,
| and it's compiler.

Indeed.

|> Anyway, this problem is (AFAIK, anyway), countered by using redundant
|> implementations of the hardware and software (well, as far as possible,
|> anyway), to minimize the effect of unknown states.
|
| Of course this isn't perfect either. In fact "The Bug Heard Round the
| World", which I mentioned earlier in this thread, was a failure of
| redundancy.

Perfection is an ideal that we can only approach asymptotically, never
achieve (since we, as human beings, aren't perfect).

| Of course all of this worked well during the pre-STS1 mission sims.
|
| However, on the day of the launch, there was a clock skew between the
| redundant computers, so the output from one lagged just a bit behind
| the others, and the system halted the launch, unnecessarily as it
| turned out, at T-3
|

That it was an unnecessary halt is probably the benefit of hindsight.
Unfortunately, I can only assume that it was so, since I cannot find a
free version of the paper you linked to earlier.

Without the benefit of hindsight, the problem of the skewed clocks could
have had a much wider impact than it actually did, masking deeper
problems of the software and/or hardware used.

In such a case, we enter the area of risk management: is it worth
risking the whole mission on something that hasn't been done before at
this scale? While there was knowledge of space flight at the time,
thanks to the Apollo and Mercury programs, something like the Space
Shuttle was new, and very different from the "throw away" capsules used
before, with different approaches to solving the problem of getting
something into orbit and back again, preferably all in one piece.

With the lives and money at stake with the Shuttle program, the decision
to cancel was wise, IMO, even though it turned out to be unnecessary.

One could even claim that the systems performed as planned, and
prevented a catastrophe. Without actual empirical testing we probably
won't know for sure, and can only speculate.


In the end, though, this shows that no amount of software or hardware
can replace judgment calls made by human beings. Technology can only
assist in making decisions. And in the cases where humans cannot make
decisions (like a Shuttle launch, where automation has to be used), the
use of technology (and not just languages and compilers and processes)
still requires humans from the get-go.

I think that the movie WarGames touched on this topic in a good and
decent way, as did Crimson Tide (in a not very related way, though
it demonstrates my point of not putting too much trust in process).

- --
Phillip Gawlowski
Twitter: twitter.com/cynicalryan

Zero G and I feel fine.



Eleanor McHugh

unread,
Apr 17, 2008, 7:53:29 AM4/17/08
to
On 17 Apr 2008, at 04:16, Tom Cloyd wrote:
> My thanks to the core contributors to this fascinating thread. It's
> stretched me well past my boundaries on several points, but also
> clarified some key learning I've picked from my own field
> (psychology & psychotherapy).
>
> For me, the take-away here (and this is not at all news to me) is
> that valid formal approaches to reliability are efficient (at least
> at times) and powerful, and should definitely be used - WHEN WE HAVE
> THEM. The problem is that they all stop short of the full spectrum
> of reality in which our processes must survive. Thus we must
> ultimately leave the comfort of deduction and dive into the dragon
> realm of inferential processes. Ultimately, there simply is no
> substitute for, or adequate simulation of, reality. Sigh.

That's the new physics for you ;)


Ellie

Eivind Eklund

unread,
Apr 17, 2008, 9:09:10 AM4/17/08
to
On Thu, Apr 17, 2008 at 12:26 PM, Phillip Gawlowski
<cmdja...@googlemail.com> wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
> Eivind Eklund wrote:
>
> |
> | The Goedel proof is about complete logical systems; in the case of a
> | computer program, we are working with a system where we have other
> | formalisms under it that's providing axioms that we don't prove, just
> | assume.
>
> But a language that is Turing-complete is a complete logical system, is
> it not?

No. A computer language is a different kind of beast; it is not a way
to state truths about things, but a set of rules for mathematical
operations.

The relevance of the incompleteness is in the space of what programs
can exist, and what we can prove about them.

The incompleteness theorem says that there will be statements we can
make about some programs that will be true but that we will not be
able to prove true.

The halting theorem says, specifically, that there exist
programs that will halt for which we cannot prove halting - under the
assumption of infinite memory. Under the assumption of finite memory,
we are dealing with a finite state machine, and a proof is *in theory*
possible by enumerating all the states and which state follows each
state.

In practice, of course, enumerating all states and the transitions from
them very very quickly becomes intractable.
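
For a deterministic machine the enumeration idea fits in a few lines of
Ruby (a toy sketch in which step maps a state to its successor and
:halt marks termination):

def halts?(start, step)
  seen = {}
  state = start
  until state == :halt
    return false if seen[state]   # revisited a state: we are cycling forever
    seen[state] = true
    state = step.call(state)
  end
  true
end

step = ->(s) { { a: :b, b: :c, c: :halt }[s] }
halts?(:a, step)   # => true

The catch is exactly the one above: seen must be able to hold every
reachable state, which for a machine of any realistic size is
astronomically many.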

But as far as I can tell, incompleteness does not enter into it; only
practical feasibility.

> | In the absence of hardware failure (something we of course can't prove,
> | and a problem in the real world), we can trivially prove halting
> | for a lot of cases. E.g., the simple program "halt" halts. "halt if
> | true" halts. "halt if 1 == 1" would generally halt, assuming no
> | redefinition of ==. And so on.
>
> There are logical proofs that P is non-P (details of which are an
> exercise for the reader, since I have misplaced Schaum's Outline of Logic).
>
> That a state is provable says nothing about the *conditions* under which
> this proof is valid. In a complete system, any proof is valid, so that the
> proof that halt == true equates to false is a side-effect under the
> (in-)correct circumstances.

I have no idea what you are trying to say here. Could you reformulate?

Specifically, the following makes no sense to me:

"In a complete system, any proof is valid" (this seems to depend on
some weird definition of complete system)
"There are logical proofs that P is non-P" (this seems like either an
example of a proof with subtle errors or proof-by-contradiction
proving that a particular statement X is false; i.e., if we can prove
P!=P if we assume X, then we know that X is false.)

> Now, most applications don't reach this level of complexity, but the
> level of complexity is there, you usually don't have to care about it.
>
> Conversely, if every state of a program were provable, the processes
> like DO-178B wouldn't be necessary, would they?

If we knew we had a perfect spec and could practically prove all the
relevant aspects of transformation from spec to software/hardware, I
guess we would be able to just say "Prove spec to software" instead of
having any other standard. Alas, to make software and hardware is a
human endeavor - even assuming we could prove halting properties of
our real world state machines on a perfect computer, this is only a
small part of systems development.

Eivind.

Eivind Eklund

unread,
Apr 17, 2008, 9:34:56 AM4/17/08
to
On Thu, Apr 17, 2008 at 1:36 PM, Sean O'Halpin <sean.o...@gmail.com> wrote:
> On Thu, Apr 17, 2008 at 10:29 AM, Eivind Eklund <eek...@gmail.com> wrote:
> > In the absence of hardware failure (something we of course can't prove
> > and is a problem in the real world), we can trivially prove halting
> > for a lot of
>
> (trivlal :)

A surprisingly non-trivial number of cases are trivial.

> > cases. E.g, the simple program "halt" halts.
>
> If 'halt' is defined as implying that the program halts, then you're
> not really proving anything here. Like saying that true implies true.
> You have to take that as an axiom.

I am proving that a specific program will halt. This is a very
trivial proof; that was the intent. You could of course use an "X:
goto X" for the halt example; it does the same thing.

>
> > halt if
> > true halts. halt if 1 == 1 would generally halt, assuming no
> > redefinition of ==. An so on.
>
> Once you invoke conditions, you are invoking some formal system of
> axioms (like your assumption above) which you have to define. I grant
> that there are very simple formal systems which are 'complete' but
> they are generally not 'interesting'. If you want even simple integer
> arithmetic[1], you're subject to Goedel's incompleteness theorem.

My claim wasn't that reasoning about the system is not subject to
Goedel's incompleteness theorem - it is. It was that the properties
that were described as being a result of Goedel's incompleteness
theorem were in fact not related to that theorem. That state proving
is impossible in the case of an infinite-memory computer, and often
infeasible in the case of finite-memory computers, is, to the best of my
knowledge, a fully separate result.

Eivind.

Phillip Gawlowski

unread,
Apr 17, 2008, 9:36:50 AM4/17/08
to

Eivind Eklund wrote:

|
| No. A computer language is a different kind of beast; it is not a way
| to state truths about things, but a set of rules for mathematical
| operations.

Which makes it a complete logical system, though not necessarily in the
sense of abstract logic.

| I have no idea what you are trying to say here. Could you reformulate?

Sure.

| Specifically, the following makes no sense to me:
|
| "In a complete system, any proof is valid" (this seems to depend on
| some weird definition of complete system)

Logically complete. That is, a system that implements abstract logic in
its entirety. Abstract logic is at the root of Godel's incompleteness
theorem. Essentially, you have something that can be expressed
with abstract logic. But that means that there are things that cannot
be proven with abstract logic *within the logical system itself*.
Whence, an incompleteness theorem.

| "There are logical proofs that P is non-P" (this seems like either an
| example of a proof with subtle errors or proof-by-contradiction
| proving that a particular statement X is false; i.e., if we can prove
| P!=P if we assume X, then we know that X is false.)

1. P → Q Premise
2. P → (Q → ¬P) Premise
~ 3. P Assumption
~ 4. Q 1,3 MP
~ 5. Q → ¬P 2,3 MP
~ 6. ¬P 4,5 MP
~ 7. P & ¬P 3,6 Conj
8. ¬P

Ergo: P = not-P. ;)

(Note: The characters are UTF-8, in case you can't see them.)

The complete discussion of this proof:
http://www.iep.utm.edu/p/prop-log.htm#SH5e

Godel's first theorem:
"For any consistent formal, recursively enumerable theory that proves
basic arithmetical truths, an arithmetical statement that is true, but
not provable in the theory, can be constructed. That is, any effectively
generated theory capable of expressing elementary arithmetic cannot be
both consistent and complete."

Which drives logicians mad, since abstract logic has to be both complete
and non-contradictory. ;)

Anyway: Computer languages that are Turing complete are both complete
and contradictory (since Godel's theorem exists), given a sufficient
algorithm. However, in >=90% of cases, this doesn't matter.

|
| If we knew we had a perfect spec and could practically prove all the
| relevant aspects of transformation from spec to software/hardware, I
| guess we would be able to just say "Prove spec to software" instead of
| having any other standard. Alas, to make software and hardware is a
| human endeavor - even assuming we could prove halting properties of
| our real world state machines on a perfect computer, this is only a
| small part of systems development.

Not really, thanks to Godel. If we can prove it, it's either incomplete
(so we don't have the perfect specs), or contradictory (so we don't have
the perfect specs either).

I hold the opinion that Godel's Incompleteness Axiom is a misnomer, and
it should be called Godel's Incompleteness Paradox.

--
Phillip Gawlowski
Twitter: twitter.com/cynicalryan

Abstain from wine, women, and song; mostly song.
%Abstinence is as easy to me, as temperance would be difficult.
~ -- Samuel Johnson



Robert Dober

unread,
Apr 17, 2008, 10:26:10 AM4/17/08
to
On Thu, Apr 17, 2008 at 3:09 PM, Eivind Eklund <eek...@gmail.com> wrote:
> On Thu, Apr 17, 2008 at 12:26 PM, Phillip Gawlowski
> <cmdja...@googlemail.com> wrote:
> >
> > Eivind Eklund wrote:
> >
> > |
> > | The Goedel proof is about complete logical systems; in the case of a
> > | computer program, we are working with a system where we have other
> > | formalisms under it that's providing axioms that we don't prove, just
> > | assume.
> >
> > But a language that is Turing-complete is a complete logical system, is
> > it not?
Forgive me for adding a comment, as Eivind has been much clearer than me
in describing the Halting Problem and completeness. I
also think it was nice of him to add that your posts are usually very
valuable; I agree indeed.
But your last question can maybe be explained in simple words.

Being Turing-complete means in theory that you can solve all problems
that a Turing machine can solve (given unlimited memory) (1).
E.g.
Ruby being Turing-complete means therefore that you can write a Ruby
program for which one cannot determine whether it will halt or not. But
chances are slim that such a Ruby program is written by chance, and
furthermore it would need infinite memory.
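
Such a program need not look exotic, either. A sketch: the following
Ruby loop halts if and only if some even number greater than 2 is not a
sum of two primes, so nobody currently knows whether it halts -
Goldbach's conjecture says it never does:

require 'prime'

n = 4
n += 2 while Prime.each(n).any? { |p| Prime.prime?(n - p) }
puts n   # reached only if a counterexample to Goldbach is ever found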

(1) A TM that solves a problem halts, and if it halts it can only use a
finite amount of its endless tape, so theoretically Ruby
can do the same thing even if no limit on the memory needed can be
established in advance :)

<snip>

Cheers
Robert

Robert Dober

unread,
Apr 17, 2008, 10:30:41 AM4/17/08
to
De falsum quodlibet, nice try ;)
IOW you can prove anything from a wrong premise, as false -> X is
always true; indeed what you proved was
false -> (P && !P)
which is correct of course.
<snip>

> Anyway: Computer languages that are Turing complete are both complete
> and contradictory (since Godel's theorem exists), given a sufficient
> algorithm. However, in >=90% of cases, this doesn't matter.
>
>
> |
> | If we knew we had a perfect spec and could practically prove all the
> | relevant aspects of transformation from spec to software/hardware, I
> | guess we would be able to just say "Prove spec to software" instead of
> | having any other standard. Alas, to make software and hardware is a
> | human endeavor - even assuming we could prove halting properties of
> | our real world state machines on a perfect computer, this is only a
> | small part of systems development.
>
> Not really, thanks to Godel. If we can prove it, it's either incomplete
> (so we don't have the perfect specs), or contradictory (so we don't have
> the perfect specs either).
>
> I hold the opinion that Godel's Incompleteness Axiom is a misnomer, and
> it should be called Godel's Incompleteness Paradox.
Is it really called an axiom? An axiom cannot be proven; it should be
called a theorem.

Phillip Gawlowski

unread,
Apr 17, 2008, 10:41:46 AM4/17/08
to

Robert Dober wrote:
|> 1. P → Q Premise
|> 2. P → (Q → ¬P) Premise
| De falsum quodlibet, nice try ;)
| IOW You can prove anything with a wrong premise as false -> X is
| always true indeed what you proved was
| false -> (P && !P)
| which is correct of course.

Outside of propositional logic, yes. But I did warn that this doesn't
necessarily apply, and provided a link for a thorough critique of the
proof by the reader. :)

| Is it really called an axiom? An axiom cannot be proven, it should be
| called a Theorem.

Sorry, my mistake. It *is* a theorem. Still a misnomer since the theorem
is more of a paradox.

--
Phillip Gawlowski
Twitter: twitter.com/cynicalryan

~ "When life gives you a lemon, make lemonade." -Susie "I say, when
life gives you a lemon, wing it right back and add some lemons of your
own!" -Calvin



Sean O'Halpin

unread,
Apr 17, 2008, 12:18:34 PM4/17/08
to
On Thu, Apr 17, 2008 at 2:34 PM, Eivind Eklund <eek...@gmail.com> wrote:
> My claim wasn't that reasoning about the system is not subject to
> Goedel's incompleteness theorem - it is.

Ah, I see.

> It was that the properties
> that was described as being a result of Goedel's incompleteness
> theorem was in fact not related to that theorem. That state proving
> is impossible in the case of an infinite memory computer and often
> infeasible the case of finite memory computers is, to the best of my
> knowledge, a fully separate result.

I just looked up Wikipedia on the halting problem[1] - I quote:

"...any finite-state machine, if left completely to itself, will fall
eventually into a perfectly periodic repetitive pattern. The duration
of this repeating pattern cannot exceed the number of internal states
of the machine..."

which agrees with your statement, though the article continues:

Minsky warns us, however, that machines such as computers with e.g. a
million small parts, each with two states, will have on the order of
2^1,000,000 possible states:

"This is a 1 followed by about three hundred thousand zeroes ...
Even if such a machine were to operate at the frequencies of cosmic
rays, the aeons of galactic evolution would be as nothing compared to
the time of a journey through such a cycle" (Minsky p. 25)

Minsky exhorts the reader to be suspicious -- although a machine may
be finite, and finite automata "have a number of theoretical
limitations":

"...the magnitudes involved should lead one to suspect that
theorems and arguments based chiefly on the mere finiteness [of] the
state diagram may not carry a great deal of significance" (ibid).

When you consider that 1 million bits is about 128K, that is a sobering thought.
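
(Ruby's bignums make it easy to sanity-check Minsky's figure, by the way
-- a throwaway one-liner:)

# Count the decimal digits of 2**1_000_000.
puts (2 ** 1_000_000).to_s.length  # => 301030
# i.e. on the order of 10**301029 -- Minsky's "three hundred thousand zeroes".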

Regards,
Sean

[1] http://en.wikipedia.org/wiki/Halting_problem#Common_pitfalls

Todd Benson

unread,
Apr 17, 2008, 1:29:21 PM4/17/08
to
On Thu, Apr 17, 2008 at 12:13 PM, Robert Dober <robert...@gmail.com> wrote:
> 2008/4/17 Phillip Gawlowski <cmdja...@googlemail.com>:

>
> >
> > Robert Dober wrote:
> > |> 1. P → Q Premise
> > |> 2. P → (Q → ¬P) Premise
> > | Ex falso quodlibet, nice try ;)
> > | IOW you can prove anything from a false premise, as false -> X is
> > | always true. Indeed, what you proved was
> > | false -> (P && !P)
> > | which is correct of course.
> >
> > Outside of propositional logic, yes. But I did warn that this doesn't
> > necessarily apply, too, and provided a link for thorough critique of the
> > proof by the reader. :)
> Oops I missed it, nice trick anyway.

>
> >
> >
> > | Is it really called an axiom? An axiom cannot be proven; it should be
> > | called a theorem.
> >
> > Sorry, my mistake. It *is* a theorem. Still a misnomer since the theorem
> > is more of a paradox.
> I see no paradox in it, the paradox is the proof of the theorem right?
> The theorem itself just says that such paradoxes will occur in a
> complete system, but I admit it is difficult to accept that as not
> being paradoxical itself. :=)
> IIRC even Bertrand Russell did not believe Gödel's theorem and there
> were other prominent mathematicians defying it.
> Gödel was waaaay ahead of his time.
>
> Cheers
> Robert

Fascinating conversation! It comes up every once in a while in
database talk lists.

A formal logic system proves that it cannot prove everything that's true
within the system (it's not talking about itself, is it? :).

I love it!

Todd

Phillip Gawlowski

unread,
Apr 17, 2008, 1:35:08 PM4/17/08
to

Robert Dober wrote:

| Oops I missed it, nice trick anyway.

Yeah, abstract logic allows for neat stunts (and a lot of sales for
aspirin, too). :P

| I see no paradox in it, the paradox is the proof of the theorem right?
| The theorem itself just says that such paradoxes will occur in a
| complete system, but I admit it is difficult to accept that as not
| being paradoxical itself. :=)
| IIRC even Bertrand Russell did not believe Gödel's theorem and there
| were other prominent mathematicians defying it.
| Gödel was waaaay ahead of his time.

Well, the theorem is counter-intuitive in its nature, and paradoxical.

After all, any consistent system should be provable, but isn't. But if
it isn't provable, it isn't consistent, and yet it is.

The theorem is, in a way, its own proof. :P

--
Phillip Gawlowski
Twitter: twitter.com/cynicalryan

~ - You know you've been hacking too long when...
..you think "grep keys /dev/pockets" or "grep homework /dev/backpack"



Robert Dober

unread,
Apr 17, 2008, 3:09:08 PM4/17/08
to
On Thu, Apr 17, 2008 at 7:35 PM, Phillip Gawlowski
<cmdja...@googlemail.com> wrote:

> After all, any consistent system should be provable, but isn't. But if
> it isn't provable, it isn't consistent, and yet it is.

No, Gödel is not talking about consistent systems; he has only (that is
a strange adjective in this context, but you know how I mean it)
proven that all *complete* systems are inconsistent.

Eivind, can you explain this better? Or am I wrong after all? I really
liked how you put things last time; it felt kind of, gosh, that is exactly
what I should have said...

Cheers
Robert

Mike Silva

unread,
Apr 17, 2008, 7:02:48 PM4/17/08
to
On Apr 17, 12:16 am, "Arved Sandstrom" <asandst...@accesswave.ca>
wrote:
>
> ....Having said that, it seems to me that the better correctness of programs in
> SPARK or Ada compared to C/C++, say, would also be due to the qualities of
> organizations that tend to use/adopt these languages.....

I think there's a lot to be said for this. Organizations that choose
bad tools when better tools are available show that at some level they
are not properly serious, and/or not properly informed (which points
again to not being properly serious).

Mike

Phillip Gawlowski

unread,
Apr 17, 2008, 7:41:58 PM4/17/08
to

Robert Dober wrote:
| On Thu, Apr 17, 2008 at 7:35 PM, Phillip Gawlowski
| <cmdja...@googlemail.com> wrote:
|
|> After all, any consistent system should be provable, but isn't. But if
|> it isn't provable, it isn't consistent, and yet it is.

| No, Gödel is not talking about consistent systems; he has only (that is
| a strange adjective in this context, but you know how I mean it)
| proven that all *complete* systems are inconsistent.

Even so, it is still a paradox. ;)

--
Phillip Gawlowski
Twitter: twitter.com/cynicalryan

~ "I suppose the secret to happiness is learning to appreciate the
moment."
- -Calvin



Tom Cloyd

unread,
Apr 18, 2008, 12:54:48 AM4/18/08
to
"...no amount of software nor hardware can replace judgment calls made
by human beings. Technology can only
assist in making decisions."

For what it's worth, in my profession (clinical applied psychology/
psychotherapy), it's written into our professional ethics that decisions
are always to be made by people, not by some testing device, instrument,
or technology. Sometimes we merely review and approve, but that human is
required to be there. Very few people object to this, especially after a
little reflection.

Cross-validation of process, eh?

t.


--

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Tom Cloyd, MS MA, LMHC
Private practice Psychotherapist
Bellingham, Washington, U.S.A: (360) 920-1226
<< t...@tomcloyd.com >> (email)
<< TomCloyd.com >> (website & psychotherapy weblog)
<< sleightmind.wordpress.com >> (mental health issues weblog)
<< directpathdesign.com >> (web site design & consultation)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


Tom Cloyd

unread,
Apr 18, 2008, 1:12:55 AM4/18/08
to
Well, yes. And it's also classical Zen Buddhism - we kill the Buddha we
meet on the road to avoid being distracted from the road. All
idealizations and representations fail, and we must endeavor not to be
taken in. When formalisms are complete enough not to fail, they become
clones - copies, not representations. Psychologically, this is why
thinking too much betrays the thinker. The dialectic between
representation (formalism) and reality is ongoing and unavoidable, if
one wishes to minimize crashes of all sorts.

What utterly fascinates me is that this clearly seems to be true in
cockpits AND people's love lives. That makes it a very good truth
indeed. My earlier expressed appreciation derives precisely from my
delight at seeing the same truth I know well in my home environment
emerging here in a very different (for me) environment. "Delight" is the
precisely correct description of my reaction to seeing this, although it
does not well represent the reality of that reaction. (!)

Phillip Gawlowski

unread,
Apr 18, 2008, 1:36:10 AM4/18/08
to

Tom Cloyd wrote:

| For what it's worth, in my profession (clinical applied psychology/
| psychotherapy), it's written into our professional ethics that decisions
| are always to be made by people, not by some testing device, instrument,
| or technology. Sometimes we merely review and approve, but that human is
| required to be there. Very few people object to this, especially after a
| little reflection.

It is a sad state of affairs if this has to be written down and people
have to think about it before it makes sense to them.

This reminds me of the shock the late Joseph Weizenbaum felt when
people accepted ELIZA as more than a toy, which led to his seminal work
"Computer Power and Human Reason"[0], arguing my case better than I ever
could.

This quote encompasses it, methinks:
"I want them [teachers of computer science] to have heard me affirm that
the computer is a powerful new metaphor for helping us understand many
aspects of the world, but that it enslaves the mind that has no other
metaphors and few other resources to call on. The world is many things,
and no single framework is large enough to contain them all, neither
that of man's science nor of his poetry, neither that of calculating
reason nor that of pure intuition."[1]

It is sad that we, as human beings, so eagerly submit ourselves to the
seeming rule of computers (SkyNET and its counterparts in
science-fiction, anyone?).

| Cross-validation of process, eh?

Yes, indeed.

[0] http://en.wikipedia.org/wiki/Computer_Power_and_Human_Reason
[1] http://www.smeed.org/1735

--
Phillip Gawlowski
Twitter: twitter.com/cynicalryan

Use the good features of a language; avoid the bad ones.


~ - The Elements of Programming Style (Kernighan & Plaugher)


Phillip Gawlowski

unread,
Apr 18, 2008, 1:43:57 AM4/18/08
to

Tom Cloyd wrote:

| Well, yes. And it's also classical Zen Buddhism - we kill the Buddha we
| meet on the road to avoid being distracted from the road. All
| idealizations and representations fail, and we must endeavor not to be
| taken in. When formalisms are complete enough not to fail, they become
| clones - copies, not representations. Psychologically, this is why
| thinking too much betrays the thinker. The dialectic between
| representation (formalism) and reality is ongoing and unavoidable, if
| one wishes to minimize crashes of all sorts.

Or, on a wider scale, the difference and conflict between perception,
perception of self, and reality (which can be objective, or not), in all
its forms.

After all, every thing we create reflects our self, on one level or
another, be these things physical or not.

I think Plato's Allegory of the Cave applies, too.

"The things which we perceive as real are actually just shadows on a
wall. Just as the escaped prisoner ascends into the light of the sun, we
amass knowledge and ascend into the light of true reality: where ideas
in our minds can help us understand the form of 'The Good'." [0]

| What utterly fascinates me is that this clearly seems to be true in
| cockpits AND people's love lives. That makes it a very good truth
| indeed. My earlier expressed appreciation derives precisely from my
| delight at seeing the same truth I know well in my home environment
| emerging here in a very different (for me) environment. "Delight" is the
| precisely correct description of my reaction to seeing this, although it
| does not well represent the reality of that reaction. (!)

Well, it is not all that surprising, considering that humans are
involved in all of this. ;)

I share the delight, in a way, from my philosophical background, myself.


[0] http://en.wikipedia.org/wiki/Allegory_of_the_cave

P.S.: My random quote add-on for Thunderbird worries me in its
randomness, producing quotes that somehow relate to the email I'm going
to write...

--
Phillip Gawlowski
Twitter: twitter.com/cynicalryan

Zen: the sound of the ax chopping. Chopping logic.
~ -- Edward Abbey



Robert Dober

unread,
Apr 18, 2008, 3:16:37 AM4/18/08
to
On Fri, Apr 18, 2008 at 7:36 AM, Phillip Gawlowski
<cmdja...@googlemail.com> wrote:
<snip>

>
> This quote encompasses it, methinks:
> "I want them [teachers of computer science] to have heard me affirm that
> the computer is a powerful new metaphor for helping us understand many
> aspects of the world, but that it enslaves the mind that has no other
> metaphors and few other resources to call on. The world is many things,
> and no single framework is large enough to contain them all, neither
> that of man's science nor of his poetry, neither that of calculating
> reason nor that of pure intuition."[1]
That pretty much is why I find Gödel's theorem all save paradoxical.
It showed me, fortunately I was young enough to fully accept it as a
truth, that formalism cannot do everything (as does the halting
problem). Without this knowledge I might as well still think the
contrary, which would indeed reduce my own awareness of the greater
picture.

Now, not to become too serious, I deduce from Gödel's theorem that if a
human being were to fully understand the nature of the human brain, at
least one of the following things would happen:
(1) 42 becomes nil
(2) Life, the universe and everything would vanish immediately.
(42) All of Douglas Adams' works will be put on the index.
(SSSSSSSSSS0) I will try to find the error in Gödel's proof.

Cheers
Robert
<snip>

Sylvain COURTECUISSE

unread,
Apr 18, 2008, 3:22:50 AM4/18/08
to
[Note: parts of this message were removed to make it a legal post.]

unsubscribe


**************************
Si vous n'etes pas le destinataire designe de ce message ou une personne autorisee a l'utiliser, toute distribution, copie, publication ou usage a quelques fins que ce soit des informations dans ce message sont interdits. Merci d'informer immediatement l'expediteur par messagerie, et, de detruire ce message.
This e-mail is confidential. If you are not the addressee or an authorized recipient of this message, any distribution, copying, publication or use of this information for any purpose is prohibited. Please notify the sender immediately by e-mail and then delete this message.
**************************

Robert Dober

unread,
Apr 18, 2008, 3:39:19 AM4/18/08
to
2008/4/18 Sylvain COURTECUISSE <scourt...@gfi.fr>:
LOL, we will not tell anybody that you tried - unsuccessfully, BTW - to
unsubscribe from this group (which indeed is a shame ;)
But please try to send this to the administration address of this mailing list.

HTH
Robert

Phillip Gawlowski

unread,
Apr 18, 2008, 3:41:32 AM4/18/08
to

Robert Dober wrote:

| That pretty much is why I find Gödel's theorem all save paradoxical.
| It showed me, fortunately I was young enough to fully accept it as a
| truth, that formalism cannot do everything (as does the halting
| problem). Without this knowledge I might as well still think the
| contrary, which would indeed reduce my own awareness of the greater
| picture.

We humans are neither consistent nor logical, though. We are still
guided by imperatives that we have little control over, for example
fear, lust, greed, envy, gluttony... We can control them, but only if
we a) are aware of them, and b) have the intellect (Freud's super-ego)
to keep them in check. ;)

Not to mention that Godel's Incompleteness Theorem applies to abstract
concepts more than human nature. Machines and abstract systems are in
conflict with human nature, necessitating process for interaction (from
social rules, ethics [distinct from morals, which are more on a
meta-level], software development methodologies, what have you) in
meaningful and consistent terms.

The game of Chinese whispers (Stille Post in Germany) demonstrates this
quite efficiently, as does the Mythical Man-Month: adding people to a
late project makes it later, since communication overhead grows with the
square of the team size (see the sketch below).
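
(A back-of-the-envelope sketch in Ruby, assuming Brooks's usual count of
n*(n-1)/2 pairwise channels, which is what grows quadratically:)

# Pairwise communication channels in a team of n people.
def channels(n)
  n * (n - 1) / 2
end

[5, 10, 20].each { |n| puts "#{n} people -> #{channels(n)} channels" }
# 5 people -> 10 channels
# 10 people -> 45 channels
# 20 people -> 190 channels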

Alas, process has the problem of creating friction and stifling
creativity if taken to the extreme. The balance has to be found between
human nature and process, and this is a constant struggle.

Too much process stifles creativity and the well-being of those
participating in it, while no process endangers the success of
the task at hand (whatever that task may be).

| Now, not to become too serious, I deduce from Gödel's theorem that if a
| human being were to fully understand the nature of the human brain, at
| least one of the following things would happen:
| (1) 42 becomes nil
| (2) Life, the universe and everything would vanish immediately.
| (42) All of Douglas Adams' works will be put on the index.
| (SSSSSSSSSS0) I will try to find the error in Gödel's proof.

Which are probable events, just not likely. :P


--
Phillip Gawlowski
Twitter: twitter.com/cynicalryan

Don't sacrifice clarity for small gains in "efficiency".


~ - The Elements of Programming Style (Kernighan & Plaugher)

Robert Dober

unread,
Apr 18, 2008, 4:04:37 AM4/18/08
to
On Fri, Apr 18, 2008 at 9:41 AM, Phillip Gawlowski
<cmdja...@googlemail.com> wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
> Robert Dober wrote:
>
> | That pretty much is why I find Gödel's theorem all save paradoxical.
I completely agree with you on this, but the interesting thing is that
Gödel's theorem is just such a realistic one, in a real world far away
from the abstractions of an ideal world, because it showed that even
the abstract, logical world was not ideal as soon as something got
complex enough to be "interesting". That is why his theorem was so
contested, I suppose.
When you say paradox, do you as a matter of fact contest the theorem?
Maybe this is simply my wrong interpretation of the term?
>
>
> | Now, not to become too serious, I deduce from Gödel's theorem that if a
> | human being were to fully understand the nature of the human brain, at
> | least one of the following things would happen:
> | (1) 42 becomes nil
> | (2) Life, the universe and everything would vanish immediately.
> | (42) All of Douglas Adams' works will be put on the index.
> | (SSSSSSSSSS0) I will try to find the error in Gödel's proof.

>
> Which are probable events, just not likely. :P
Well I am glad you like my humor ;)
Cheers
Robert

Phillip Gawlowski

unread,
Apr 18, 2008, 5:03:21 AM4/18/08
to

Robert Dober wrote:

| I completely agree with you on this, but the interesting thing is that
| Gödel's theorem is just such a realistic one, in a real world far away
| from the abstractions of an ideal world, because it showed that even
| the abstract, logical world was not ideal as soon as something got
| complex enough to be "interesting". That is why his theorem was so
| contested, I suppose.
| When you say paradox, do you as a matter of fact contest the theorem?
| Maybe this is simply my wrong interpretation of the term?

Oh, I don't contest its existence, far from it.

Look at this definition of paradox:

"a statement or proposition that seems self-contradictory or absurd but
in reality expresses a possible truth."[0]

I merely state that Godel's Incompleteness Paradox would be closer to
the truth of Godel's assertion than the term theorem can convey. :)


[0] http://dictionary.reference.com/browse/paradox

--
Phillip Gawlowski
Twitter: twitter.com/cynicalryan

~ "That's the problem with nature, something's always stinging you
~ or oozing mucous all over you. Let's go and watch TV."
~ --- Calvin



Robert Dober

unread,
Apr 18, 2008, 5:31:43 AM4/18/08
to
On Fri, Apr 18, 2008 at 11:03 AM, Phillip Gawlowski
<cmdja...@googlemail.com> wrote:
>
> Robert Dober wrote:

> Look at this definition of paradox:
>
> "a statement or proposition that seems self-contradictory or absurd but
> in reality expresses a possible truth."[0]
>
> I merely state, that Godel's Incompleteness Paradox would be closer to
> the truth of Godel's assertion, than the term theorem can transport. :)

ok, if I were a Romulan I would say that this is acceptable ;)
Really enjoyed the discussion.

Phillip Gawlowski

unread,
Apr 18, 2008, 6:09:49 AM4/18/08
to

Robert Dober wrote:

| ok, if I were a Romulan I would say that this is acceptable ;)
| Really enjoyed the discussion.

So do I. :)

A nice exchange, and so very polite, too.

--
Phillip Gawlowski
Twitter: twitter.com/cynicalryan

Rule of Open-Source Programming #4:

If you don't work on your project, chances are that no one will.



Rick DeNatale

unread,
Apr 18, 2008, 7:53:35 AM4/18/08
to
On Fri, Apr 18, 2008 at 3:39 AM, Robert Dober <robert...@gmail.com> wrote:
> 2008/4/18 Sylvain COURTECUISSE <scourt...@gfi.fr>:
>
> > unsubscribe
> >
> >
> > **************************
> > Si vous n'etes pas le destinataire designe de ce message ou une personne autorisee a l'utiliser, toute distribution, copie, publication ou usage a quelques fins que ce soit des informations dans ce message sont interdits. Merci d'informer immediatement l'expediteur par messagerie, et, de detruire ce message.
> > This e-mail is confidential. If you are not the addressee or an authorized recipient of this message, any distribution, copying, publication or use of this information for any purpose is prohibited. Please notify the sender immediately by e-mail and then delete this message.
> > **************************
> LOL we will not tell anybody that you tried - unsucessfully BTW - to
> unsubscribe from this group (which indeed is a shame ;)
> But please try to send this to the administration address of this mailing list.

I'm surprised that someone from Descartes' homeland would be leaving
a conversation which has taken such a philosophical turn! <G>

Maybe we should discuss whether we are actually addressees or
authorized recipients of the message. I get the slight scent of one of
Hofstadter's "Strange Loops" here. <G>

--
Rick DeNatale

My blog on Ruby
http://talklikeaduck.denhaven2.com/

M. Edward (Ed) Borasky

unread,
Apr 18, 2008, 9:18:27 PM4/18/08
to
Rick DeNatale wrote:

> Maybe we should discuss whether we are actually addressees or
> authorized recipients of the message. I get the slight scent of one of
> Hofstadter's "Strange Loops" here. <G>
>

Speaking of Ada ...

http://www.gcn.com/print/27_8/46116-1.html

adaw...@sbcglobal.net

unread,
Apr 19, 2008, 2:03:35 AM4/19/08
to

"Mike Silva" <snarf...@yahoo.com> wrote in message
news:4db7bd18-0df0-401e...@a22g2000hsc.googlegroups.com...

On Apr 17, 12:16 am, "Arved Sandstrom" <asandst...@accesswave.ca>
wrote:
>
> ....Having said that, it seems to me that the better correctness of programs in
> SPARK or Ada compared to C/C++, say, would also be due to the qualities of
> organizations that tend to use/adopt these languages.....

MS>>I think there's a lot to be said for this. Organizations that choose
MS>>bad tools when better tools are available show that at some level they
MS>>are not properly serious, and/or not properly informed (which points
MS>>again to not being properly serious).

I have often wondered why someone would choose an error-prone language
such as C++ and expect an error-free result. More recently, I have been
looking more closely at Java and have learned that it too is far more
error-prone than one might expect.

As to Ada. Some have touted Ada's type-safety as an important feature of the
language. This is certainly one important feature. There are others that are
not
as immediately obvious -- features not found in most other languages -- that
contribute to the better engineering model provided by Ada. Although many
Ada programmers do not understand Chapter Eight of the Ada Language
Reference Manual, the visibility model of the language is, when understood
and used as part of a software design, a powerful part of what makes Ada
so robust for safety-critical software. The architectural model of an Ada
program also lends itself to the design of well-formed, easy-to-read, and
scalable software. That is, as programs become larger, as they tend to do
in real software, Ada tends to scale up a little better than most other
languages.
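
(One can emulate the runtime half of Ada's type-safety in Ruby -- here is
a toy sketch of an Ada-style ranged type; the names are mine, and what
Ruby cannot give you is the compile-time rejection:)

# Reject out-of-range values at assignment time, Ada-subtype style.
class RangedInt
  attr_reader :value

  def initialize(value, range)
    raise ArgumentError, "#{value} not in #{range}" unless range.include?(value)
    @value = value
  end
end

day = RangedInt.new(12, 1..31)  # fine
# RangedInt.new(42, 1..31)      # raises ArgumentError at runtime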

Ada is not the right language for small, toy programs, but it begins to show its
power for programs in the 100 KSLOC range or higher. We have Ada software
in the million SLOC range and higher. In those kinds of systems, Ada really
outshines most competing languages. This is, in part, due to its
architectural constructs: the package model, the separate compilation model, the child
library unit model, and the way both inheritance and genericity are designed
into the language.

Richard Riehle


Robert Dober

unread,
Apr 19, 2008, 2:59:08 AM4/19/08
to
On Sat, Apr 19, 2008 at 8:10 AM, <adaw...@sbcglobal.net> wrote:

> Ada is not the right language for small, toy programs, but it begins to show its
> power for programs in the 100 KSLOC range or higher. We have Ada software
> in the million SLOC range and higher. In those kinds of systems, Ada really
> outshines most competing languages. This is, in part, due to its
> architectural
> constructs: the package model, the separate compilation model, the child
> library unit model, and the way both inheritance and genericity are designed
> into the language.

May I humbly add the Rendez Vous tasking model, of course chosen by a
Frenchman ;).

adaw...@sbcglobal.net

unread,
Apr 19, 2008, 12:52:20 PM4/19/08
to

"Robert Dober" <robert...@gmail.com> wrote in message
news:335e48a90804182359p7c8...@mail.gmail.com...

RD> May I humbly add the Rendez Vous tasking model, of course chosen by a
RD> Frenchman ;).
RD> Robert

You may so add. In its earliest versions, that model did have some problems.
With Ada 95 and Ada 2000, that model has improved greatly and is now one
of the best you can find.

Be aware, though, that Jean Ichbiah did not invent that model "from scratch."
Important contributions from Dijkstra, Hoare, Per Brinch Hansen, and many
others preceded and informed it. It is not a purely French invention.

Richard Riehle


Robert Dober

unread,
Apr 19, 2008, 1:07:40 PM4/19/08
to
Thank you for this update; I was not aware of the initial problems,
BTW. I did not, however, want to indicate that Jean had invented this
model -- I did not think so -- I just thought he chose it because of the
French name, and that as a joke of course.
But he still refined the model himself, if I understand you correctly;
interesting indeed.

Robert

Marc Heiler

unread,
Apr 19, 2008, 1:31:21 PM4/19/08
to
I guess I never expected this to become that big :)
(But all the better for the interesting tidbits covered here;
I even added Ada to the list of languages I will write a
few things in, but right now I am playing with Smalltalk.)

It seems to be quite usable even without hype, though:
http://ramaze.net/

PS: And if you think there are too few examples, write an email or
tell them!
--
Posted via http://www.ruby-forum.com/.

Marc Heiler

unread,
Apr 19, 2008, 1:34:24 PM4/19/08
to
Marc Heiler wrote:
> I guess I never expected this to become that big :)
> (But all the better for the interesting tidbits covered here;
> I even added Ada to the list of languages I will write a
> few things in, but right now I am playing with Smalltalk.)

Whoops, sorry -- pasted the wrong stuff and hit return; having too many
tabs open is confusing. :/

But what I wanted to say here is that the basic premise seems to be
that Ada is interesting/useful for large scale software whereas
Ruby seems/is not?

Maybe I should have called this thread not "Ada vs Ruby" but more
"Ruby on large production scale". :)

(But I admit, I am clueless about it. I don't even know if there are
"large scale python" apps out there. If there were, though,
I guess Ruby would be perfectly fine as well.)

Robert Dober

unread,
Apr 19, 2008, 2:23:01 PM4/19/08
to
It seems that this thread is to go on forever ;)
Do you not believe that Rails is already on large production scale?

Cheers
Robert



Phillip Gawlowski

unread,
Apr 19, 2008, 2:33:30 PM4/19/08
to

Robert Dober wrote:

| Do you not believe that Rails is already on large production scale?

Depends on your definition of large. Twitter runs on Rails. But nothing
like Amazon, let alone Google.

I have the feeling that most Rails applications are deployed in
intranets, to fill a particular, well-defined need, but nothing as
"general purpose" and exposed as Amazon yet.

--
Phillip Gawlowski
Twitter: twitter.com/cynicalryan

~ - You know you've been hacking too long when...
..just when you finish writing the be-all-end-all program for your
computer (has everything-AI, MIDI, productivity stuff, excellent games,
desktop video, etc.) the entire computer industry upgrades to the "next
best chip."



M. Edward (Ed) Borasky

unread,
Apr 19, 2008, 3:39:14 PM4/19/08
to
Phillip Gawlowski wrote:
>
> Robert Dober wrote:
>
> | Do you not believe that Rails is already on large production scale?
>
> Depends on your definition of large. Twitter runs on Rails. But nothing
> like Amazon, let alone Google.
>
> I have the feeling that most Rails applications are deployed in
> intranets, to fill a particular, well-defined need, but nothing as
> "general purpose" and exposed as Amazon yet.

Well ... the three "700-pound gorillas" in large-scale web application
deployment are LAMP (Linux-Apache-MySQL-PHP), "assorted Java platforms",
and WISA (Windows-IIS-SQL Server-ASP). LMMR (Linux-Mongrel-MySQL-Rails)
is probably in the round-off noise, and I'd be extremely surprised if
anything that's turning a profit is running on any kind of Windows
platform using Rails.

As far as I know, Twitter is really it, and I don't have any clues at
all whether Twitter is profitable. I don't even know what their business
model or "unique selling proposition" is.

Phillip Gawlowski

unread,
Apr 19, 2008, 4:00:40 PM4/19/08
to

M. Edward (Ed) Borasky wrote:

| Well ... the three "700-pound gorillas" in large-scale web application
| deployment are LAMP (Linux-Apache-MySQL-PHP), "assorted Java platforms",
| and WISA (Windows-IIS-SQL Server-ASP). LMMR (Linux-Mongrel-MySQL-Rails)
| is probably in the round-off noise, and I'd be extremely surprised if
| anything that's turning a profit is running on any kind of Windows
| platform using Rails.

Additionally, Rails gets lost in the "Server is Apache" reports of
Netcraft and others, unless they explicitly scan for
index.[html|php|asp] pages.

Well, it is not like it matters.

| As far as I know, Twitter is really it, and I don't have any clues at
| all whether Twitter is profitable. I don't even know what their business
| model or "unique selling proposition" is.

Their exit strategy: users (and probably a cut from the
tweet-to-sms/sms-to-tweet charges), since they are not serving ads
(thank the Lord for that).

Though, Twitter is the most *visible* Rails application. What about
github, Gitorious, 37signals' range of applications? How those compare
user-wise would be interesting to see.

However, another interesting metric would be the number of start-ups
using Rails or Ruby to get a really fast time-to-market.

In the end, though, all this is more of an e-penis contest than of real
worth. After all, Ruby is interesting enough to win over more and more
mindshare. ;)

--
Phillip Gawlowski
Twitter: twitter.com/cynicalryan

~ - You know you've been hacking too long when...

..in non-computer related situation you start thinking that whatever
you are doing, it could be done more easily in a shell script.



s.ross

unread,
Apr 19, 2008, 4:36:34 PM4/19/08
to
On Apr 19, 2008, at 11:33 AM, Phillip Gawlowski wrote:
> Depends on your definition of large. Twitter runs on Rails. But nothing
> like Amazon, let alone Google.
>
> I have the feeling that most Rails applications are deployed in
> intranets, to fill a particular, well-defined need, but nothing as
> "general purpose" and exposed as Amazon yet.

The implication of this post, intended or not, is that there are tons
of large-scale public facing sites, none of them running any Ruby
code. There are relatively few large-scale public facing sites,
period, as compared to the number of Web sites out there now. There's
no indicator of how much Ruby code is in use performing non-Internet
related tasks. The point is, when Amazon, eBay, and Google got their
start, Ruby would not have been a language that came to mind as a
first choice. Consider that these three date back to the mid-90s!

The corollary implication, intended or not, is that none of these
sites could benefit from Ruby or from Rails. The answer to that is not
clear. Much of the code on larger scale sites has been C/C++ or Perl
up to this point. Taking Moore's Law into account, it seems feasible
that at some point, improvement in Ruby's performance characteristics,
along with increase in affordable hardware capability would make Ruby
just as obvious a choice as C/C++ or Perl were when the initial
decisions were made to use them on these large sites. That point could
be now. Amazon is using Rails for some of their new stuff -- not sure
exactly what -- and I know it's on everyone's radar.
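
(Back-of-the-envelope, assuming the usual reading of Moore's Law as a
doubling roughly every two years -- the numbers are illustrative, not a
prediction:)

# Rough compounding: how many doublings fit into a span of years?
def hardware_budget(years, doubling_period = 2.0)
  2 ** (years / doubling_period)
end

puts hardware_budget(10)  # => 32.0, i.e. a ~32x budget in a decade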

There are a number of Rails apps that are handling large traffic
volumes, and Twitter is not the only one. Distilling all Ruby-backed
sites to Twitter isn't fair to the technology, as there are millions
of pages served a day by Rails apps, as well as by some of the less
mainstream frameworks like merb, iowa, ramaze, etc. I don't have a
handle on that, but it's worth noting that the absence of a huge
catalog of "humongous site success stories" doesn't imply narrow
adoption or failure. (BTW: A number of the US political candidates,
including at least one of the presidential ones, are running Rails
applications. They get lots of traffic :)

Just my $.02

Phillip Gawlowski

unread,
Apr 19, 2008, 5:11:53 PM4/19/08
to

s.ross wrote:
| On Apr 19, 2008, at 11:33 AM, Phillip Gawlowski wrote:
|> Depends on your definition of large. Twitter runs on Rails. But nothing
|> like Amazon, let alone Google.
|>
|> I have the feeling that most Rails applications are deployed in
|> intranets, to fill a particular, well-defined need, but nothing as
|> "general purpose" and exposed as Amazon yet.
|
| The implication of this post, intended or not, is that there are tons of
| large-scale public facing sites, none of them running any Ruby code.

No, it isn't. I was talking specifically about Ruby and Rails, and on
the front end of a web application, *and* adding the caveat that it
depends on what is considered "large".

Nothing more, nothing less.

(And let's face it, Rails is where you've got to look if you want Ruby in
any scale of use that matters more than well within a statistical error
margin.)

Anything else you draw out of it is a hasty generalization on your part.

| There are relatively few large-scale public facing sites, period, as
| compared to the number of Web sites out there now. There's no indicator
| of how much Ruby code is in use performing non-Internet related tasks.

There are a few RQueue installations performing well in RQueue's niche:
painless setup of simple clusters, where more traditional approaches
like Beowulf are too heavyweight to use or administrate.

Does that equal large scale deployment, especially compared to the
amount of code used by Google, of which only a small percentage is
actually Ruby? It doesn't.

| The point is, when Amazon, Ebay, and Google got their start, Ruby would
| not have been a language that came to mind as a first choice. Consider
| that these three date back to the mid-90s!

Your point? There still is nothing even *close* to Amazon, much less
Google, despite Ruby being 15 years old, nor is there anything
approaching those usage numbers.

A payment processing system using Ruport and Ruby to generate a report
for Chrysler's management wouldn't be large-scale deployment, either.
After all, at most 87,000 people are affected directly or indirectly by
that use.

No internal application alone matters. The scale is way too small.

| The corollary implication, intended or not, is that none of these sites
| could benefit from Ruby or from Rails.

Fallacious conclusion.

| The answer to that is not clear.
| Much of the code on larger scale sites has been C/C++ or Perl up to this
| point. Taking Moore's Law into account, it seems feasible that at some
| point, improvement in Ruby's performance characteristics, along with
| increase in affordable hardware capability would make Ruby just as
| obvious a choice as C/C++ or Perl were when the initial decisions were
| made to use them on these large sites. That point could be now. Amazon
| is using Rails for some of their new stuff -- not sure exactly what --
| and I know it's on everyone's radar.

Being on everyone's radar does not equal actual use. It took enterprises
almost a decade to adopt Java. And *that* had Sun's dollars behind it,
as well as the commitment of a large corporation.

Something as risky as Ruby (development could cease today, with no
further work) is fighting an uphill battle in corporate environments.
Just ask the Red Hat or Novell guys how they are feeling about that matter.

While eventually Ruby will be making inroads, it probably won't be an
epiphany at Google compelling them to throw away their existing
codebase. It'll take a new player reaching wide adoption by users, as
well as founders and funders buying into Ruby (and not just Rails).

| There are a number of Rails apps that are handling large traffic
| volumes, and Twitter is not the only one. Distilling all Ruby-backed
| sites to Twitter isn't fair to the technology, as there are millions of
| pages served a day by Rails apps, as well as by some of the less
| mainstream frameworks like merb, iowa, ramaze, etc. I don't have a
| handle on that, but it's worth noting that the absence of a huge catalog
| of "humongous site success stories" implies narrow adoption or failure.

Tough luck. Twitter is the most visible Rails and Ruby application to
date. It has buzz, hype, users, and mindshare beyond merb's or Wave's
developers. Additionally, Rails has the most visibility outside of the
Ruby community as "ruby's killer app". Compare the Rails questions ending
up in ruby-talk to the merb questions, for example. The amount of
false-positives in that area is strongly in favor of Rails.

| (BTW: A number of the US political candidates, including at least one of
| the presidential ones are running Rails applications. They get lots of
| traffic :)

So? Doesn't make them large deployments, nor something that lasts.
McObamaton 2008 will disappear sooner or later. Something like Amazon or
Twitter sticks around. A blip of usage in a year does not a trend make,
nor does it mean large scale adoption.

Notice further, that the question wasn't really about the amount of
traffic, but usage.

And my assertion regarding Ruby and Rails deployment still holds: It's
mostly intranet, for specific purposes. Nothing as general as Amazon,
Google, or other applications.

Heck, Silverlight has larger deployments (Aston-Martin for the DBS
site, Hard Rock Cafe's Memorabilia website, the official Halo community
site) than Rails and Ruby together.

All your assumptions and conclusions are based on the mix-up that
mission-critical equals large deployment, which is humbug. A few lines
of code can be more mission-critical than the whole code base together
(see the Ariane V maiden-flight explosion, or STS-1's aborted first launch).

--
Phillip Gawlowski
Twitter: twitter.com/cynicalryan

~ I think we dream so we don't have to be apart so long. If we're in
each other's dreams, we can play together all night. -- Calvin



Rimantas Liubertas

unread,
Apr 19, 2008, 8:25:57 PM4/19/08
to
<...>

> No, it isn't. I was talking specifically about Ruby and Rails, and on
> the front end of a web application,
<...>

> Your point? There still is nothing even *close* to Amazon, much less
> Google, despite Ruby being 15 years old,

<...>

So is it about Ruby or Ruby on Rails?

> Nothing as general as Amazon,
> Google, or other applications.

<...>

Yeah, we have a bunch of amazons and googles on the every corner of the
internet, all written in all kinds of languages, except Ruby.
Sigh.


Regards,
Rimantas
--
http://rimantas.com/

s.ross

unread,
Apr 19, 2008, 11:15:16 PM4/19/08
to
Hello--

On Apr 19, 2008, at 2:11 PM, Phillip Gawlowski wrote:

> Anything else you draw out of it is a hasty generalization on your
> part.
>

..

> Fallacious conclusion.
>
..

> Tough luck.

> All your assumptions and conclusions are based on the mix-up that
> mission-critical equals large deployment, which is humbug.

My mistake. I failed to grasp the significance and correctness of your
respectful discussion of these points.


adaw...@sbcglobal.net

unread,
Apr 21, 2008, 11:48:21 AM4/21/08
to

"s.ross" <cwd...@gmail.com> wrote in message
news:09981649-FB48-47D5...@gmail.com...

>
> The corollary implication, intended or not, is that none of these
> sites could benefit from Ruby or from Rails. The answer to that is not
> clear. Much of the code on larger scale sites has been C/C++ or Perl
> up to this point. Taking Moore's Law into account, it seems feasible
> that at some point, improvement in Ruby's performance characteristics,
> along with increase in affordable hardware capability would make Ruby
> just as obvious a choice as C/C++
>
It might become the case that Ruby will continue to evolve so it will be
more appropriate for large-scale software development involving many
programmers focused on safety-critical software. The key to this is
"evolution."

Early programming languages such as COBOL and Fortran have continued
to evolve, and their current standards are impressive. Most people are
unaware of the large improvements in those languages. Some, such as PL/I,
have not evolved very well, largely remaining true to the imperfect design of
that language in its earliest incarnations. For a language to retain a
following, it must evolve to accommodate the approaches to software practice. This means
the original designers (or stewards) of the language must be willing to abandon
or improve some features to keep the language design up-to-date.

Evolution has been an important part of the C++ standard, the Ada standard,
and the Java [non-] standard. In fact, instead of abandoning a legacy language
(e.g., COBOL) in favor of an entirely new language, it is often better to simply
upgrade to the new compiler for the language already in-use. Just as Fortran
is no longer the FORTRAN of old, so too are C++ and Ada improved from
their earlier versions. Those who remember the first Ada compilers and
criticize Ada based on that experience would benefit from learning how it has improved
in recent years. Those who remember COBOL-74 and learned to detest it,
would be amazed to see how the language has opened to more options for
practical programming. In the case of COBOL, many of the earlier, terribly
messy, features are still in place for the purpose of upward compatibility, but
those features can be ignored using newer features without sacrificing one's
knowledge of the fundamental language design.

Ruby will most certainly evolve. The language seems to be designed so it
can evolve. What will be necessary, to ensure that the evolution of Ruby
is not haphazard and self-limiting, is careful analysis of each new change.
One of the more important changes that Ruby needs is a better model of
"design-by-contract." I am not a Ruby expert, so I do not presume to
know what changes are most appropriate, but for design of large-scale
software such as that targeted by Ada, I think there could be some structural
and architectural improvements in Ruby.

Also, a newer language, named SCALA, has some design features that make
it very interesting. Other language designs, during future evolutionary steps,
could learn from the design of SCALA. As I look at SCALA and Ruby,
I see the potential for Ruby learning from SCALA.

Most important, when a language is not designed to evolve, or is designed so
it cannot evolve, that language is guaranteed to fall into disuse over time and
even become inappropriate for its intended niche.

Richard Riehle


Robert Dober

unread,
Apr 21, 2008, 1:13:06 PM4/21/08
to
On Mon, Apr 21, 2008 at 5:55 PM, <adaw...@sbcglobal.net> wrote:
>
<snip>

> Ruby will most certainly evolve. The language seems to be designed so it
> can evolve. What will be necessary, to ensure that the evolution of Ruby
> is not haphazard and self-limiting, is careful analysis of each new change.
> One of the more important changes that Ruby needs is a better model of
> "design-by-contract."
That is a point of particular interest; being completely ignorant of
the concept myself, I would highly appreciate it if you could kindly
either elaborate on this a little bit or give some pointers, or both,
if you insist ;).

> I am not a Ruby expert, so I do not presume to
> know what changes are most appropriate, but for design of large-scale
> software such as that targeted by Ada, I think there could be some structural
> and architectural improvements in Ruby.
>
> Also, a newer language, named SCALA, has some design features that make
> it very interesting. Other language designs, during future evolutionary steps,
> could learn from the design of SCALA. As I look at SCALA and Ruby,
> I see the potential for Ruby learning from SCALA.

>
> Most important, when a language is not designed to evolve, or is designed so
> it cannot evolve, that language is guaranteed to fall into disuse over time and
> even become inappropriate for its intended niche.

I guess that will happen anyway, but it can happen much later if a
language evolves in the right direction; strictly speaking, this is
implied by the definition of evolution itself.
>
> Richard Riehle
>
Cheers
Robert

s.ross

unread,
Apr 21, 2008, 1:30:04 PM4/21/08
to
[Note: parts of this message were removed to make it a legal post.]


On Apr 21, 2008, at 10:13 AM, Robert Dober wrote:

>> Ruby will most certainly evolve. The language seems to be
>> designed so it
>> can evolve. What will be necessary, to ensure that the evolution
>> of Ruby
>> is not haphazard and self-limiting, is careful analysis of each new
>> change.
>> One of the more important changes that Ruby needs is a better model
>> of
>> "design-by-contract."
> That is a point of particular interest; being completely ignorant of
> the concept myself, I would highly appreciate it if you could kindly
> either elaborate on this a little bit or give some pointers, or both,
> if you insist ;).
>

http://en.wikipedia.org/wiki/Design_by_contract
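
In Ruby terms, a minimal hand-rolled sketch of the idea might look like
this (explicit pre- and postcondition checks around one method; the class
and names are made up for illustration, and real DbC support, as in
Eiffel, is far richer -- class invariants, inherited contracts, and so on):

# A hand-rolled contract: check the precondition on entry and the
# postcondition on exit, raising if either is violated.
class Account
  attr_reader :balance

  def initialize(balance)
    @balance = balance
  end

  def withdraw(amount)
    # Precondition: amount must be positive and covered by the balance.
    raise ArgumentError, "precondition violated" unless amount > 0 && amount <= @balance
    @balance -= amount
    # Postcondition: the balance must never go negative.
    raise "postcondition violated" if @balance < 0
    @balance
  end
end

acct = Account.new(100)
acct.withdraw(30)     # => 70
# acct.withdraw(100)  # would raise ArgumentError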


Robert Dober

unread,
Apr 21, 2008, 1:55:59 PM4/21/08
to
Well, I know what it is -- theoretically only, that is, which means I
do not know much, if you see what I mean; thx anyway...
I was interested in some specific ideas for Ruby, of course; I am not
shy when it comes to asking :-P

R.
