The Silver Bullet: Why Software Is Bad

Traveler

Jun 21, 2002, 6:43:16 PM
The Software Reliability Crisis

This is something that has been bothering me for quite some time. A
recent article (see below) in Technology Review rekindled my interest
in the subject. I have added a new page to my site on the crisis and
on what can be done to solve it once and for all. There *is* a silver
bullet, Fred Brooks and other incorrigible naysayers notwithstanding.
Check it out.

TR article:
http://www.technologyreview.com/articles/mann0702.asp


The Silver Bullet:
http://home1.gte.net/res02khr/AI/Reliability.htm

Temporal Intelligence:
http://home1.gte.net/res02khr/AI/Temporal_Intelligence.htm


Lyle McKennot

Jun 26, 2002, 7:33:59 PM

Traveler <eight...@hotmail.com> wrote:

>The Software Reliability Crisis


>on what can be done to solve it once and for all. There *is* a silver
>bullet, Fred Brooks and other incorrigible naysayers notwithstanding.

Louis Savain, I read your web site.
Most amusing.
Connectionism and AI are the silver bullets?

LOL!

You are Mentifex's disciple, I presume?

BTW, do a search for Louis Savain on Google Groups to see some of
Savain's other amusing posts.

Also see:
http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&oe=utf-8&frame=right&th=dd3d3abe930e43f7&seekm=20000414215150.11833.00001584%40ng-ca1.aol.com#link1


Peter da Silva

Jun 26, 2002, 7:34:05 PM

In article <3d139df4$1...@news.ucsc.edu>,

Traveler <eight...@hotmail.com> wrote:
> on what can be done to solve it once and for all. There *is* a silver
> bullet, Fred Brooks and other incorrigible naysayers notwithstanding.

You write: "These unarguable facts squarely and decisively refute Fred
Brooks' 'No Silver Bullet' arguments. The question is, what is it about
biological nervous systems that makes them so reliable?"

Apart from the obvious flaws in your argument (complex brains do exhibit
failure modes that simpler brains don't, for example), I think a
five-billion-year development lead time is probably more than most
software development managers are willing to put up with.

Show us a way to easily design complex parallel event-driven systems that
is more efficient than Darwin's trial-and-error technique, and you'll have
something.

--
I've seen things you people can't imagine. Chimneysweeps on fire over the roofs
of London. I've watched kite-strings glitter in the sun at Hyde Park Gate. All
these things will be lost in time, like chalk-paintings in the rain. `-_-'
Time for your nap. | Peter da Silva | Har du kramat din varg, idag? 'U`


Jim Provost

Jun 26, 2002, 7:34:09 PM

Unfortunately, business models and competitiveness issues force software
out the door ASAP. The cost of repairing errors and bugs is tenfold (if
not more) that of incorporating good design principles.

Dr. David Parnas' ideas on information hiding and modularization are
intended to address such issues, and they align closely with the
components and interfaces you speak of. I consider it an honour to
have studied under his supervision.


--
Jim Provost
B.Eng in Software Engineering
McMaster University
Product Manager
ESPONSIVE Communications
www.esponsive.com


"Traveler" <eight...@hotmail.com> wrote in message
news:3d139df4$1...@news.ucsc.edu...

Traveler

Jun 28, 2002, 2:42:25 AM
In article <3d1a4157$1...@news.ucsc.edu>, Lyle McKennot
<sp...@spam.menot.com> wrote:

>
>Traveler <eight...@hotmail.com> wrote:
>
>>The Software Reliability Crisis
>>on what can be done to solve it once and for all. There *is* a silver
>>bullet, Fred Brooks and other incorrigible naysayers notwithstanding.
>
>Louis Savain, I read your web site.
>Most amusing.
>Connectionism and AI are the silver bullets?
>
>LOL!
>
>You are Mentifex's disciple, I presume?

IOW, stone the messenger to death and ignore the message.

>BTW, do a search for Louis Savain on Google Groups to see some of
>Savain's other amusing posts.
>
>Also see:
>http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&oe=utf-8&frame=right&th=dd3d3abe930e43f7&seekm=20000414215150.11833.00001584%40ng-ca1.aol.com#link1

Those were some of my most irreverent and enjoyable posts. I truly
miss those days.

Traveler

Jun 28, 2002, 2:42:30 AM
In article <3d1a4161$1...@news.ucsc.edu>, "Jim Provost"
<jprovos...@esponsive.com> wrote:

>
>Unfortunately, business models and competitiveness issues force software
>out the door ASAP. The cost of repairing errors and bugs is tenfold (if
>not more) that of incorporating good design principles.

The tools should enforce the right design principles IMO.

>Dr. David Parnas' ideas on information hiding and modularization are
>intended to address such issues, and they align closely with the
>components and interfaces you speak of. I consider it an honour to
>have studied under his supervision.

He's not the only one. People like Brad Cox have been saying the same
thing for years. They are 100% right, IMO, at least as far as the use
of components and strongly typed connectors is concerned. But they
are sorely mistaken in one respect: the use of algorithms within
components. That is the main reason that hardware ICs will always be
much more reliable than software ICs. Once signal timing is working
properly in a chip, it never fails afterward, barring physical
failures. To obtain the reliability of hardware, we must emulate its
signal-based, parallel environment.
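
To make this concrete, here is a rough sketch in Python (the names and
structures are hypothetical, my own invention for illustration; this
is not an existing system). Each cell performs a single elementary
operation and contains no algorithm; a tiny kernel propagates signals
between cells in discrete cycles, the way clocked signals propagate
through an IC:

  # Hypothetical sketch of a signal-based cell network. Cells contain
  # no algorithms; each performs one elementary operation and signals
  # its target cells when it fires.
  class Cell:
      def __init__(self, name, op):
          self.name, self.op = name, op
          self.targets = []              # cells signaled when this fires

  def run(signals, cycles):
      # Kernel loop: each cycle, every signaled cell fires exactly once
      # and forwards its result, like one clock tick in hardware.
      for _ in range(cycles):
          nxt = []
          for cell, value in signals:
              result = cell.op(value)
              print(cell.name, "fired ->", result)
              nxt += [(t, result) for t in cell.targets]
          signals = nxt

  double = Cell("double", lambda v: v * 2)
  add_one = Cell("add_one", lambda v: v + 1)
  double.targets.append(add_one)
  run([(double, 3)], cycles=2)           # double fires (6), then add_one (7)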

Traveler

Jun 28, 2002, 2:42:33 AM

In article <3d1a415d$1...@news.ucsc.edu>, pe...@abbnm.com (Peter da
Silva) wrote:

>
>In article <3d139df4$1...@news.ucsc.edu>,
>Traveler <eight...@hotmail.com> wrote:
>> on what can be done to solve it once and for all. There *is* a silver
>> bullet, Fred Brooks and other incorrigible naysayers notwithstanding.
>
>You write: "These unarguable facts squarely and decisively refute Fred
>Brooks' 'No Silver Bullet' arguments. The question is, what is it about
>biological nervous systems that makes them so reliable?"
>
>Apart from the obvious flaws in your argument (complex brains do exhibit
>failure modes that simpler brains don't, for example),

Not a very good example, since it does not put the failures into
perspective by comparing them to the underlying complexity.

> I think a
>five-billion-year development lead time is probably more than most
>software development managers are willing to put up with.

This may indeed be true in the case of lower life forms like insects,
but it is not true for higher mammals, especially primates. The major
part of the brain's software is not innate. It is constructed through
experience with the environment during the lifetime of the organism.

But be that as it may, nature must be doing something right. And we
don't need to hide our heads in the sand and complain that we can't
ever figure it out because nature is an impenetrable omniscient God.
We can. Neurobiologists know that the reason for the high reliability
of the nervous system has to do with the accurate timing of neural
spikes.

>Show us a way to easily design complex parallel event-driven systems that
>is more efficient than Darwin's trial-and-error technique, and you'll have
>something.

Darwin's technique could hardly be labeled efficient, since it took
five billion years, as you say. But I don't need to dig into nature's
secrets to show you how it can be done. Computer board designers do it
all the time. They take a bunch of ICs, hook them up together, and
expect them to work reliably. Why? Because they know that the timing
of the various signals that course through the circuitry will continue
to work as designed.

Stefan Birbacher

Jul 1, 2002, 1:51:00 AM

pe...@abbnm.com (Peter da Silva) wrote:


>Apart from the obvious flaws in your argument (complex brains do exhibit
>failure modes that simpler brains don't, for example)

A computer with dyslexia, aphasia or ADHD would be fun :-)


Stefan Birbacher

Jul 1, 2002, 1:51:03 AM
Traveler <eight...@hotmail.com> wrote:


>But be that as it may, nature must be doing something right. And we
>don't need to hide our heads in the sand and complain that we can't
>ever figure it out because nature is an impenetrable omniscient God.
>We can. Neurobiologists know that the reason for the high reliability
>of the nervous system has to do with the accurate timing of neural
>spikes.

The wide variation in neuronal signaling times suggests that you are
clueless.

Try sophisticated heuristics, massive redundant capacity,
n-dimensional distributed holographic representation of data, and
sophisticated routing algorithms as explanations instead.

Study some neurophysiology and neuroanatomy before spouting garbage.

But then you net-kooks never do...


Myles

Jul 1, 2002, 1:51:05 AM

"Traveler" <eight...@hotmail.com> wrote in message
news:3d1bf746$1...@news.ucsc.edu...
> > [Parnas]

> He's not the only one. People like Brad Cox have been saying the same
> thing for years. They are 100% right, IMO, at least as far as the use
> of components and strongly typed connectors is concerned. But they
> are sorely mistaken in one respect: the use of algorithms within
> components.

Software without any code!
It is easy to have no bugs if you have no code, I guess. But then you have
no functionality either.

M.

Traveler

Jul 4, 2002, 6:13:33 PM
In article <3d1fdfb7$1...@news.ucsc.edu>, Stefan Birbacher
<birbach...@yahoo.com> wrote:

>Traveler <eight...@hotmail.com> wrote:
>
>
>>But be that as it may, nature must be doing something right. And we
>>don't need to hide our heads in the sand and complain that we can't
>>ever figure it out because nature is an impenetrable omniscient God.
>>We can. Neurobiologists know that the reason for the high reliability
>>of the nervous system has to do with the accurate timing of neural
>>spikes.
>
>The wide variation in neuronal signaling times suggests that you are
>clueless.

World-class neurobiology researchers like Christof Koch, Rufin
VanRullen, Simon Thorpe, Terrence Sejnowski, and Henry Markram prove
you wrong. Way wrong. Do a search on Google and get some education
before criticizing. The precision of neuronal spike timing is a
well-known fact. In humans, timing resolution is on the order of 1
millisecond. In bats and other animals, especially in the auditory
cortex, it is on the order of microseconds. The highly accurate
echo-locating capability of bats and other animals is legendary. And
that's just for starters. Don't even get me started on the accuracy
and precision of the retina and visual cortex.

>Try sophisticated heuristics, massive redundant capacity,
>n-dimensional distributed holographic representation of data, and
>sophisticated routing algorithms as explanations instead.

N-dimensional holographic representation? Routing algorithms? In the
brain? You have to be kidding. Are you making this stuff up as you go?
And massive redundant capacity is a myth, FYI. We need all the one
hundred billion neurons in the brain. Why? Because the complexity of
the brain's sensory and behavioral space is nothing short of
astronomical. Besides, a bee's brain has about 2 million neurons. Yet
the bee's behavior is amazingly complex, not to mention extremely
robust. Not much room there for redundancy, I might add. But then
again, one never knows with those holographic memories. :-) I guess
Pribram's holographic nonsense really did some serious damage.

>Study some neurophysiology and neuroanatomy before spouting garbage.

Look who's talking.

>But then you net-kooks never do...

Heaping ridicule on the messenger is no substitute for doing your
homework. Goodbye.

Traveler

Jul 4, 2002, 6:13:36 PM

I realize you're trying your best to heap ridicule on something that
either threatens you or you're having trouble understanding, but I'll
explain the difference between data and code, just in case you did not
already know. Code is anything that can be converted by a compiler or
assembler into a set of binary instruction words that can be directly
executed by a CPU. In the system that I am proposing, the only code is
the OS kernel. Everything else is data. That is what I mean when I say
that no new code is added.
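
To make the distinction concrete, here is a toy sketch in Python (the
record format is hypothetical, invented just for this illustration).
The kernel below is the only executable code in the picture; the
"application" is nothing but a data structure that the kernel
interprets:

  # Hypothetical sketch: the kernel is the only executable code;
  # the application is pure data that the kernel interprets.
  application = [
      ("set", "A", 5),            # data records, not CPU instructions
      ("add", "A", "A", 2),       # A = A + 2
      ("show", "A"),
  ]

  def kernel(program):
      env = {}
      for record in program:      # interpret each record as data
          if record[0] == "set":
              env[record[1]] = record[2]
          elif record[0] == "add":
              env[record[1]] = env[record[2]] + record[3]
          elif record[0] == "show":
              print(record[1], "=", env[record[1]])

  kernel(application)             # prints A = 7; no new code compiled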

Peter da Silva

Jul 4, 2002, 6:13:38 PM
In article <3d1fdfb4$1...@news.ucsc.edu>,

Oh, you've used Windows XP too?

Jim Provost

Jul 12, 2002, 11:03:03 PM

Hmm... intriguing. So everything is either "kernel" or "not kernel." My
only concern, however, is that CPUs would need far more speed to "fake"
parallelism to such an extent.


--
Jim Provost


Product Manager
ESPONSIVE Communications
www.esponsive.com

"Traveler" <eight...@hotmail.com> wrote in message

news:3d24ba80$1...@news.ucsc.edu...

Peter da Silva

Jul 12, 2002, 11:03:06 PM

In article <3d24ba80$1...@news.ucsc.edu>,

Traveler <eight...@hotmail.com> wrote:
> Code is anything that can be converted by a compiler or
> assembler into a set of binary instruction words that can be directly
> executed by a CPU.

This sounds like the "procedural versus declarative" paradigm. The problem
is that there are interpreters and compilers for declarative languages...
the distinction between code and data is at the very least subject to
interpretation.

> In the system that I am proposing, the only code is
> the OS kernel. Everything else is data. That is what I mean when I say
> that no new code is added.

Whether you call what you add "code" or "data", getting it right is the
hard part. A description of relationships between objects can be treated
as either. For another example, do you consider legal statutes to be code
or data? Either way, history tells us that it's very hard to get them right.

Casper H.S. Dik

Jul 17, 2002, 8:27:26 PM
Traveler <eight...@hotmail.com> writes:

>N-dimensional holographic representation? Routing algorithms? In the
>brain? You have to be kidding. Are you making this stuff up as you go?
>And massive redundant capacity is a myth, FYI. We need all the one
>hundred billion neurons in the brain. Why? Because the complexity of
>the brain's sensory and behavioral space is nothing short of
>astronomical. Besides, a bee's brain has about 2 million neurons. Yet
>the bee's behavior is amazingly complex, not to mention extremely
>robust. Not much room there for redundancy, I might add. But then
>again, one never knows with those holographic memories. :-) I guess
>Pribram's holographic nonsense really did some serious damage.


There is some redundancy, as parts of the brain can take over when
substantial parts of the brain are missing and still function at
near-normal levels. However, it seems inconceivable that we would develop
a power-hungry brain and not use it. The brain uses so much of our
energy intake that evolution surely would have favoured smaller brains
if we didn't need the current size.

Casper
--
Expressed in this posting are my opinions. They are in no way related
to opinions held by my employer, Sun Microsystems.
Statements on Sun products included here are not gospel and may
be fiction rather than truth.


Traveler

Jul 17, 2002, 8:27:38 PM
In article <3d2f8a5a$1...@news.ucsc.edu>, pe...@abbnm.com (Peter da
Silva) wrote:

>
>In article <3d24ba80$1...@news.ucsc.edu>,
>Traveler <eight...@hotmail.com> wrote:
>> Code is anything that can be converted by a compiler or
>> assembler into a set of binary instruction words that can be directly
>> executed by a CPU.
>
>This sounds like the "procedural versus declarative" paradigm. The problem
>is that there are interpreters and compilers for declarative languages...
>the distinction between code and data is at the very least subject to
>interpretation.
>
>> In the system that I am proposing, the only code is
>> the OS kernel. Everything else is data. That is what I mean when I say
>> that no new code is added.
>
>Whether you call what you add "code" or "data", getting it right is the
>hard part. A description of relationships between objects can be treated
>as either. For another example, do you consider legal statutes to be code
>or data? Either way, history tells us that it's very hard to get them right.

Code and data are indeed a matter of private definition. But your
point is well taken. The application designer still needs to design
the application and do it right. Fortunately, by eliminating the need
to write directly executable code and by automating a large portion of
the software generation process, a huge number of opportunities for
failure are also eliminated. Reliable software and high productivity
are two of the things that a pure signal-based paradigm promises.

Casper H.S. Dik

Jul 17, 2002, 8:27:27 PM

Traveler <eight...@hotmail.com> writes:

>I realize you're trying your best to heap ridicule on something that
>either threatens you or you're having trouble understanding, but I'll
>explain the difference between data and code, just in case you did not
>already know. Code is anything that can be converted by a compiler or
>assembler into a set of binary instruction words that can be directly
>executed by a CPU. In the system that I am proposing, the only code is
>the OS kernel. Everything else is data. That is what I mean when I say
>that no new code is added.

Is that a proper definition? I note the word "can" in "can be converted".

Your code is data driven, so it's not unreasonable to assume that at least
some of the data can be converted into code. Only you keep it as data and
interpret it. Sounds like a hairy definition game.

Traveler

Jul 17, 2002, 8:27:31 PM

In article <3d2f8a57$1...@news.ucsc.edu>, "Jim Provost"
<jprovos...@esponsive.com> wrote:

>
>Hmm... intriguing. So everything is either "kernel" or "not kernel." My
>only concern, however, is that CPUs would need far more speed to "fake"
>parallelism to such an extent.

No more so than running FORTH or Visual Basic programs. As I write on
my site, FORTH has certainly proven itself as a real-time embedded
language for all sorts of applications. Also, the performance slowdown
depends on how far down one's level of granularity is. I stop at the
elementary code statement level, e.g., (A = B + C), (A > B), etc...
There is no need to go any further.

One of the main reasons that speed is not such a problem is this: the
kernel does not need to update every object in the system during every
execution cycle, only those relatively few objects that need updating.
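
In sketch form (hypothetical structures, for illustration only), one
execution cycle looks something like this:

  # Hypothetical sketch of one execution cycle: the kernel visits only
  # the objects whose inputs changed, not every object in the system.
  def execution_cycle(update_list, dependents, recompute):
      # update_list: objects that received a signal this cycle
      # dependents:  maps an object to the objects listening to it
      # recompute:   performs one elementary update, e.g. A = B + C
      next_update_list = []
      for obj in update_list:
          if recompute(obj):               # did the value change?
              next_update_list += dependents.get(obj, [])
      return next_update_list              # usually a small fraction

The cost per cycle is proportional to the number of changed objects,
not to the size of the system.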

If special FORTH processors can speed up FORTH code to unparalleled
performance levels, there is every reason to suppose that the use of
special processors can achieve the same results in my model.
Additionally, I am offering something that neither FORTH nor Visual
Basic can touch: highly improved reliability. In most applications,
reliability is more important than speed.

Traveler

Jul 24, 2002, 1:55:44 PM
In article <3d35fd5f$1...@news.ucsc.edu>, "Casper H.S. Dik"
<Caspe...@Sun.COM> wrote:

>
>Traveler <eight...@hotmail.com> writes:
>
>>I realize you're trying your best to heap ridicule on something that
>>either threatens you or you're having trouble understanding, but I'll
>>explain the difference between data and code, just in case you did not
>>already know. Code is anything that can be converted by a compiler or
>>assembler into a set of binary instruction words that can be directly
>>executed by a CPU. In the system that I am proposing, the only code is
>>the OS kernel. Everything else is data. That is what I mean when I say
>>that no new code is added.
>
>Is that a proper definition? I note the word "can" in "can be converted".

Well, in the original accepted definition from the early days of
computing, IIRC, code meant a sequence of CPU instruction bytes.

>Your code is data driven, so it's not unreasonable to assume that at least
>some of the data can be converted into code.

I am sure it can, and I do envision that special processors can be
built to execute most of this data directly. But be that as it may, in
the model that I am proposing, there is a lot of stuff going on at the
executable level that would still be inaccessible to the application
designer.

>Only you keep it as data and
>interpret it. Sounds like a hairy definition game.

It may be hairy to you, but an operating system that does not allow
directly executable code is not the same as one that does. The whole
point of what I am saying has nothing to do with definitions; it has
to do with keeping the application designer out of trouble.

Mr. Myles (I ran into him elsewhere on Usenet), like so many knee-jerk
reactionaries who have little in the way of positive contributions to
make, decided to ignore the message and latched onto a side issue to
make himself feel good. My mistake was to take his criticism
seriously.

Traveler

Jul 24, 2002, 1:55:47 PM
In article <3d35fd5e$1...@news.ucsc.edu>, "Casper H.S. Dik"
<Caspe...@Sun.COM> wrote:

>Traveler <eight...@hotmail.com> writes:
>
>>N-dimensional holographic representation? Routing algorithms? In the
>>brain? You have to be kidding. Are you making this stuff up as you go?
>>And massive redundant capacity is a myth, FYI. We need all the one
>>hundred billion neurons in the brain. Why? Because the complexity of
>>the brain's sensory and behavioral space is nothing short of
>>astronomical. Besides, a bee's brain has about 2 million neurons. Yet
>>the bee's behavior is amazingly complex, not to mention extremely
>>robust. Not much room there for redundancy, I might add. But then
>>again, one never knows with those holographic memories. :-) I guess
>>Pribram's holographic nonsense really did some serious damage.
>
>
>There is some redundancy, as parts of the brain can take over when
>substantial parts of the brain are missing and still function at
>near-normal levels.

Well, I am sorry, but that is not redundancy. That is just
adaptability. Adaptability is the result of synaptic plasticity and
the ability to grow new dendrites and axonal branches. It takes time
for the brain to form new neural pathways.

In the context of reliability, redundancy has to do with having
several identically programmed backup systems or components. In case
of the sudden failure of one component, one of the redundant systems
is able to instantly take over the role of the failed part.
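
A crude sketch of the idea (a hypothetical example, not any particular
system): identical units stand by, and the first healthy one answers,
so a sudden failure is invisible to the caller.

  # Hypothetical sketch of redundancy in the reliability sense:
  # identically programmed backups take over instantly on failure.
  def read_value(units):
      for unit in units:            # identical redundant components
          try:
              return unit()         # first healthy unit answers
          except RuntimeError:
              continue              # failed unit is skipped instantly
      raise RuntimeError("all redundant units failed")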

The visual cortex does indeed have parallel areas that are programmed
for the recognition of similar objects (lines, orientations,
boundaries, etc.). But that still is not redundancy. Why? Because the
failure of one area severely curtails overall cortical functioning.
Ask anybody with impaired vision. In a truly redundant system, a
sudden limited failure would be imperceptible.

>However, it seems inconceivable that we would develop
>a power-hungry brain and not use it. The brain uses so much of our
>energy intake that evolution surely would have favoured smaller brains
>if we didn't need the current size.

Exactly. In fact, it does favor smaller brains in more primitive life
forms. Smaller brains come at a price. Bees are not about to put
artificial satellites in orbit around the Earth any time soon. :-)
They need every one of those two million or so neurons to do the
things that bees do.

Peter da Silva

Jul 24, 2002, 1:55:56 PM

Have you ever talked to a bloke named "Arthur T. Murray" or "Mentifex"?