
Dealing with complexity


H. E. Taylor
Jul 1, 2003, 2:31:54 AM
Greetings,
One of the books I have been reading lately is "The Next
Fifty Years, Science in the First Half of the 21st Century".
It is a collection of essays by 25 third-party experts.
One of them is by Jaron Lanier. His essay is about
complexity. He makes an interesting statement:

"Since the complexity of software is currently limited by the
ability of human engineers to explicitly analyze and manage it,
we can be said to have already reached the complexity ceiling
of software as we know it. If we don't find a different way
of thinking about and creating software, we will not be writing
programs bigger than about 10 million lines of code, no matter
how fast, plentiful or exotic our processors become."
-Jaron Lanier (The Next Fifty Years, page 219)

Most of my experience is in relatively small PC and embedded systems.
I wonder, is Lanier's perspective common on larger systems?
<curious>
-het


--
"Simplicity is the ultimate sophistication." -olde Apple ad

Computer Links: http://www.autobahn.mb.ca/~het/clinks.html
H.E. Taylor http://www.autobahn.mb.ca/~het/

philo
Jul 1, 2003, 12:53:48 AM

"H. E. Taylor" <h...@despam.autobahn.mb.ca> wrote in message
news:3F00F2...@despam.autobahn.mb.ca...

> Greetings,
> One of the books I have been reading lately is "The Next
> Fifty Years, Science in the First Half of the 21st Century".
> It is a collection of essays by 25 third-party experts.
> One of them is by Jaron Lanier. His essay is about
> complexity. He makes an interesting statement:
>
> "Since the complexity of software is currently limited by the
> ability of human engineers to explicitly analyze and manage it,
> we can be said to have already reached the complexity ceiling
> of software as we know it. If we don't find a different way
> of thinking about and creating software, we will not be writing
> programs bigger than about 10 million lines of code, no matter
> how fast, plentiful or exotic our processors become."
> -Jaron Lanier (The Next Fifty Years, page 219)
>
> Most of my experience is in relatively small PC and embedded systems.
> I wonder, is Lanier's perspective common on larger systems?
> <curious>
>


Although plenty of predictions do come true,
most do not.
And although there may be a limit to how simple something can be,
there seems to be no limit to complexity!


del cecchi
Jul 1, 2003, 2:55:23 AM

"philo" <ph...@plazaearth.com> wrote in message
news:vg1msvk...@corp.supernews.com...
Depends on what you mean by "program". How many lines of code in
MVS/OS390? OS400 including SLIC? Windows XP? Word XP? Unix? Oracle?
DB2?

Who is this John Lanier guy? Somebody I should have heard of?

Seems to me that modular programming, object orientation, or just
languages with higher level constructs will resolve many of his
concerns. One doesn't design an airplane or bridge by writing down how
to mine ore, smelt it, make bolts and rivets, roll sheet etc.

del cecchi
>


Larry__Weiss
Jul 1, 2003, 3:27:25 AM
del cecchi wrote:
> Who is this John Lanier guy? Somebody I should have heard of?
>

Jaron Lanier. Not John. And, sure, at least the Wired magazine
readers here have heard of him.

Here's a sample of his writing:
http://www.wired.com/wired/archive/8.12/lanier_pr.html

It's vintage Wired, of course. I sort of dig it. YMMV.

Robert Myers
Jul 1, 2003, 4:22:21 AM
On Mon, 30 Jun 2003 19:31:54 -0700, "H. E. Taylor"
<h...@despam.autobahn.mb.ca> wrote:

>Greetings,
> One of the books I have been reading lately is "The Next
> Fifty Years, Science in the First Half of the 21st Century".
> It is a collection of essays by 25 third-party experts.
> One of them is by Jaron Lanier. His essay is about
> complexity. He makes an interesting statement:
>
> "Since the complexity of software is currently limited by the
> ability of human engineers to explicitly analyze and manage it,
> we can be said to have already reached the complexity ceiling
> of software as we know it. If we don't find a different way
> of thinking about and creating software, we will not be writing
> programs bigger than about 10 million lines of code, no matter
> how fast, plentiful or exotic our processors become."
> -Jaron Lanier (The Next Fifty Years, page 219)
>
> Most of my experience is in relatively small PC and embedded systems.
> I wonder, is Lanier's perspective common on larger systems?
><curious>
>-het

FWIW:

http://www.jdmag.wpafb.af.mil/bogus%20parts.pdf

"a single domestic passenger airplane alone can contain as many as 6
million parts"

http://www.majorprojects.org/pubdoc/677.pdf

" Aircraft carrier project--a naval project with 30 million parts (a
submarine has only 8 million parts)."

Why would software be any harder?

A paradigm already exists for scaling programs to arbitrarily large
sizes. It's called a network.

RM

Jon Leech
Jul 1, 2003, 5:56:18 AM
In article <aN6Ma.1222$IP6....@eagle.america.net>,

del cecchi <dce...@msn.com> wrote:
>Who is this John Lanier guy? Somebody I should have heard of?

His early claim to fame was involvement in and promotion of early
Virtual Reality efforts, when that was hot in the early 90s. Seems to
have gone on to industry punditry since.
Jon
__@/

Nick Maclaren
Jul 1, 2003, 7:11:45 AM

In article <aN6Ma.1222$IP6....@eagle.america.net>,

"del cecchi" <dce...@msn.com> writes:
|> "philo" <ph...@plazaearth.com> wrote in message
|> news:vg1msvk...@corp.supernews.com...
|> > "H. E. Taylor" <h...@despam.autobahn.mb.ca> wrote in message
|> > news:3F00F2...@despam.autobahn.mb.ca...
|> > > Greetings,
|> > > One of the books I have been reading lately is "The Next
|> > > Fifty Years, Science in the First Half of the 21st Century".
|> > > It is a collection of essays by 25 third-party experts.
|> > > One of them is by Jaron Lanier. His essay is about
|> > > complexity. He makes an interesting statement:
|> > >
|> > > "Since the complexity of software is currently limited by the
|> > > ability of human engineers to explicitly analyze and manage it,
|> > > we can be said to have already reached the complexity ceiling
|> > > of software as we know it. If we don't find a different way
|> > > of thinking about and creating software, we will not be writing
|> > > programs bigger than about 10 million lines of code, no matter
|> > > how fast, plentiful or exotic our processors become."
|> > > -Jaron Lanier (The Next Fifty Years, page 219)
|> > >
|> > > Most of my experience is in relatively small PC and embedded
|> > > systems.
|> > > I wonder, is Lanier's perspective common on larger systems?
|> > > <curious>

No, but it should be. He is largely right, and a very, very few
of us have been saying that for decades. Except that what he should
have said is that "we will not be writing WORKING programs bigger
than about 10 million lines of code, ...."

|> > although plenty of predictions do come true
|> > most do not.
|> > although there may be a limit as to how simple something can be
|> > there seems to be no limit to complexity!
|> >
|> Depends on what you mean by "program". How many lines of code in
|> MVS/OS390? OS400 including SLIC? Windows XP? Word XP? Unix? Oracle?
|> DB2?

Quite. And even "limit". It has already got to the stage that MOST
non-trivial problems diagnosed in the field are never identified as
to even an approximate cause. Whole projects have had to be abandoned
after man-centuries of work because they couldn't be got to work.
There are whole areas that people rely on which, I have reason
to believe, nobody in the world understands in any detail.

I doubt that we shall pay attention until we hit a real disaster.

|> Seems to me that modular programming, object orientation, or just
|> languages with higher level constructs will resolve many of his
|> concerns. One doesn't design an airplane or bridge by writing down how
|> to mine ore, smelt it, make bolts and rivets, roll sheet etc.

No, but they would push the limits further. Instead of modern
systems being bug-ridden and almost unusable heaps, debuggable
only by calling in the dwindling number of people with really
low-level hacking experience, they could be fairly robust and
largely self-diagnosing.

We could be talking about a factor of 10 reduction in development
effort and administrator effort, a factor of 100 reduction in
visible problems, a factor of 100 increase in software MTBF in
problem situations, coupled with a factor of 10 increase in size.
There is plenty of margin :-(

There is no way to ELIMINATE the problems of complexity, however.

And, no, I do NOT mean Project Eliza, which completely misses the
point.


Regards,
Nick Maclaren.

Nick Maclaren
Jul 1, 2003, 7:24:41 AM

In article <2o12gvc6ishbpa20n...@4ax.com>,

Robert Myers <rmy...@rustuck.com> writes:
|>
|> FWIW:
|>
|> http://www.jdmag.wpafb.af.mil/bogus%20parts.pdf
|>
|> "a single domestic passenger airplane alone can contain as many as 6
|> million parts"
|>
|> http://www.majorprojects.org/pubdoc/677.pdf
|>
|> " Aircraft carrier project--a naval project with 30 million parts (a
|> submarine has only 8 million parts)."
|>
|> Why would software be any harder?

Because it is less well engineered. Also, do you know how many
such military engineering systems have significant and even major
components that are decommissioned before they are got to work in
the field?

|> A paradigm already exists for scaling programs to arbitrarily large
|> sizes. It's called a network.

Yeah, right. I was having run-ins with the "structured programming"
dogmatists back in the 1960s and 1970s on this one, and the common
errors have not changed.

By splitting programs into functions of at most 20 lines long (yes,
seriously), you may be able to understand every function at a glance.
You will not, however, be able to understand their interactions. So
you split the program into separate ones of at most 20 functions,
and can now understand every program. But you will not be able to
understand the network of programs. And so on.

The same thing applies to hardware. It is easy to analyse and debug
race and other inconsistency conditions involving two entities. As
networks grow in complexity, it becomes harder and harder. There
comes a point (approaching, in some networks) where most failure
time (down time or debugging) is not associated with a problem in
ANY component, but is associated with the network structure itself.
TANSTAAFL.

Have you ever tried to report a problem to a large vendor that is
due SOLELY to the underlying computational assumptions of three or
more separately developed components being subtly incompatible?
I have. Guess how far I got.
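Nick's "subtly incompatible underlying assumptions" failure mode can be made concrete with a toy (hypothetical names, Python): each function below is trivially correct against its own specification, and the bug exists only in their combination, because the two components assume different units.

```python
def fuel_needed_kg(distance_km: float) -> float:
    """Component A: fuel burn in KILOGRAMS for a given distance."""
    return distance_km * 0.03

def within_budget(fuel_lb: float) -> bool:
    """Component B: budget check, specified in POUNDS."""
    return fuel_lb <= 200.0

def mission_feasible(distance_km: float) -> bool:
    # Reads sensibly at a glance, and neither component is "wrong" --
    # but this silently feeds kilograms into a pounds interface.
    return within_budget(fuel_needed_kg(distance_km))

# 4000 km needs 120 kg = ~265 lb: over budget, yet the check passes.
```

No amount of per-function inspection finds this; only the interaction, i.e. the shared assumption, is broken.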


Regards,
Nick Maclaren.


Dean Kent
Jul 1, 2003, 7:28:30 AM
"philo" <ph...@plazaearth.com> wrote in message
news:vg1msvk...@corp.supernews.com...
>
>
>
> although plenty of predictions do come true
> most do not.
> although there may be a limit as to how simple something can be
> there seems to be no limit to complexity!
>

If you had written 'someone' instead of 'something', I probably would have
had to disagree with you. ;-).

Regards,
Dean

>


Peter "Firefly" Lund
Jul 1, 2003, 7:29:33 AM
On Tue, 1 Jul 2003, Nick Maclaren wrote:

> |> Why would software be any harder?
>
> Because it is less well engineered. Also, do you know how many

No. Because you don't have high-level screws. 10 million lines of mostly
flat-line code broken into pieces that interact very little with each
other would be a lot easier to understand than 10 million typical lines.

It would also be even easier if we did apply the engineering we know and
cut it down to say 5 or 10 thousand lines (probably less).

-Peter

Nick Maclaren
Jul 1, 2003, 7:43:02 AM

In article <Pine.LNX.4.55.03...@ask.diku.dk>,

"Peter \"Firefly\" Lund" <fir...@diku.dk> writes:
|> On Tue, 1 Jul 2003, Nick Maclaren wrote:
|>
|> > |> Why would software be any harder?
|> >
|> > Because it is less well engineered. Also, do you know how many
|>
|> No. Because you don't have high-level screws. 10 million lines of mostly
|> flat-line code broken into pieces that interact very little with each
|> other would be a lot easier to understand than 10 million typical lines.

I suggest learning a little more about the design of such systems.
Your point is correct, but your viewpoint of their problems is
overly simplistic.

|> It would also be even easier if we did apply the engineering we know and
|> cut it down to say 5 or 10 thousand lines (probably less).

I know of no such example. If you have ever tried to write a
serious product and keep it down to that size, you will have found
out that only some of the bloat is due to engineering incompetence.
Rather more is due to interfaces being designed non-mathematically,
but there is a large residue of special cases, messy situations and
so on that can't be handled compactly.

Factors of 10 reduction are often possible; factors of 100, perhaps.
I do not believe factors of 1,000 and more, as you are claiming.


Regards,
Nick Maclaren.

Gerrit Muller
Jul 1, 2003, 7:57:07 AM
Nick Maclaren wrote:
<...snip...>

> Factors of 10 reduction are often possible; factors of 100, perhaps.
> I do not believe factors of 1,000 and more, as you are claiming.
>
> Regards,
> Nick Maclaren.

I recently explored this area in a short article at (long link may be
broken in parts by the newsreader):
www.extra.research.philips.com/natlab/sysarch/BloatingExploredPaper.pdf
"Exploration of the bloating of software"

regards Gerrit

--
Gaudi systems architecting:
http://www.extra.research.philips.com/natlab/sysarch/

Peter "Firefly" Lund
Jul 1, 2003, 8:32:43 AM
On Tue, 1 Jul 2003, Nick Maclaren wrote:

> I suggest learning a little more about the design of such systems.

I suggest you get better at reading metaphors.

-Peter

Tom Gardner
Jul 1, 2003, 8:41:13 AM
Robert Myers <rmy...@rustuck.com> wrote in
news:2o12gvc6ishbpa20n...@4ax.com:

The problem is with so-called "emergent behaviour", in which
each component is well defined and understood, but the interactions
of a large number of components aren't.

A simple non-technical analogy (always dangerous :) is that
- the properties of each grain of sand are well
  defined and understood.
- they don't help when trying to understand why
  heaps of sand are conical with (IIRC) a 35 degree
  half angle, which is only very weakly dependent on
  the size and shape of the individual grains.
Nowhere in the "definition" of a grain of sand is that
35 degrees "encoded".

But of course good engineering can delay the point at which
unexpected emergent behaviour appears.
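The sand-heap analogy has a standard computational counterpart, the Bak-Tang-Wiesenfeld sandpile model: the entire per-cell rule is a few lines, yet avalanche sizes across the grid follow power-law statistics that no individual cell encodes. A minimal sketch (Python; grains toppling over the boundary are lost):

```python
def relax(grid):
    """Topple every cell holding 4+ grains until the pile is stable.

    The whole 'physics' of one cell is right here: lose 4 grains,
    give 1 to each orthogonal neighbour. Avalanche behaviour emerges
    only from many cells interacting.
    """
    n = len(grid)
    unstable = True
    while unstable:
        unstable = False
        for i in range(n):
            for j in range(n):
                if grid[i][j] >= 4:
                    unstable = True
                    grid[i][j] -= 4
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        if 0 <= i + di < n and 0 <= j + dj < n:
                            grid[i + di][j + dj] += 1
    return grid
```

Dropping grains one at a time on a large grid and recording avalanche sizes reproduces the heavy-tailed distribution; nothing in `relax` mentions it.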

Nick Maclaren
Jul 1, 2003, 9:14:16 AM

In article <Pine.LNX.4.55.03...@ask.diku.dk>,
"Peter \"Firefly\" Lund" <fir...@diku.dk> writes:

Sigh. Yes, I read it as a metaphor. It was a bad one, which
is why I responded. The equivalents to the bolts (or screws)
in a ship are the symbols like parentheses in a program.

There are equivalently higher-level concepts in naval architecture,
and those are the ones that cause the trouble. Such as the open
communications ducts that caused the fire in HMS Sheffield, which
has an exact analogy in the failure of software. And, like the
latter, it was known to be a problem in the 1960s (in high-rise
buildings), yet had been ignored by the builders of that class of
vessel in the 1980s.

The fact that modern passenger aircraft don't crash as often as
modern computer systems is not because they are simpler, but
because traditional engineering discipline is used.


Regards,
Nick Maclaren.

Rupert Pigott
Jul 1, 2003, 9:34:38 AM
"Robert Myers" <rmy...@rustuck.com> wrote in message
news:2o12gvc6ishbpa20n...@4ax.com...

[SNIP]

> http://www.jdmag.wpafb.af.mil/bogus%20parts.pdf
>
> "a single domestic passenger airplane alone can contain as many as 6
> million parts"
>
> http://www.majorprojects.org/pubdoc/677.pdf
>
> " Aircraft carrier project--a naval project with 30 million parts (a
> submarine has only 8 million parts)."
>
> Why would software be any harder?

Simple: Because only in the software industry would you expect less
than a handful of people to develop and maintain such a system. I
imagine that Boeing et al. have > 5 people designing an airliner, and
there are almost certainly > 5 when it comes to actually fabricating
the sucker.

Cheers,
Rupert


Peter "Firefly" Lund
Jul 1, 2003, 9:56:28 AM
On Tue, 1 Jul 2003, Nick Maclaren wrote:

> Sigh.

Sighing makes you sound old, Nick.

-Peter

jmfb...@aol.com
Jul 1, 2003, 8:28:57 AM
In article <10570520...@saucer.planet.gong>,

On a plane one day, I sat next to a guy who owned a company that
made just airplane wings. He told me it took 9 months to make one.
That gives just a hint about complexity.

/BAH

Subtract a hundred and four for e-mail.

Morten Reistad
Jul 1, 2003, 10:35:26 AM
In article <bdrcvp$4hr$1...@pegasus.csx.cam.ac.uk>,

Nick Maclaren <nm...@cus.cam.ac.uk> wrote:
>
>In article <2o12gvc6ishbpa20n...@4ax.com>,
>Robert Myers <rmy...@rustuck.com> writes:
>|>
>|> FWIW:
>|>
>|> http://www.jdmag.wpafb.af.mil/bogus%20parts.pdf
>|>
>|> "a single domestic passenger airplane alone can contain as many as 6
>|> million parts"
>|>
>|> http://www.majorprojects.org/pubdoc/677.pdf
>|>
>|> " Aircraft carrier project--a naval project with 30 million parts (a
>|> submarine has only 8 million parts)."
>|>
>|> Why would software be any harder?

Because there are more interactions.

A bolt holding a radar mast on an aircraft carrier does just that. It
can be inspected, maintained and changed by a person knowing about
bolts and naval warship construction. [S]He does not have to know much
about radar systems.

A failure of this bolt will only lead to failure of one subsystem
(besides someone getting hit by a falling mast).

Ditto for the whole radar subsystem. This can be assembled, tested and
validated pretty much in isolation.

>Because it is less well engineered. Also, do you know how many
>such military engineering systems have significant and even major
>components that are decommissioned before they are got to work in
>the field?
>
>|> A paradigm already exists for scaling programs to arbitrarily large
>|> sizes. It's called a network.
>
>Yeah, right. I was having run-ins with the "structured programming"
>dogmatists back in the 1960s and 1970s on this one, and the common
>errors have not changed.

They missed the ball big-time. The big potential for re-use is
not of subroutines. It is of binary, working programs with defined
interfaces, often rigorously standardized. Cf. e-mail.

>By splitting programs into functions of at most 20 lines long (yes,
>seriously), you may be able to understand every function at a glance.
>You will not, however, be able to understand their interactions. So
>you split the program into separate ones of at most 20 functions,
>and can now understand every program. But you will not be able to
>understand the network of programs. And so on.
>
>The same thing applies to hardware. It is easy to analyse and debug
>race and other inconsistency conditions involving two entities. As
>networks grow in complexity, it becomes harder and harder. There
>comes a point (approaching, in some networks) where most failure
>time (down time or debugging) is not associated with a problem in
>ANY component, but is associated with the network structure itself.
>TANSTAAFL.


I have been observing this bit in the open source movement
for a while. It seems the individual is able to handle projects
of about 70,000 lines of code well. When they go through this
roof wear and tear sets in, and sidekicks etc. get recruited.
It only helps a little. Unless this is broken down they hit the
wall at around twice that number of lines.

Sendmail has stayed at 55-60k for a long time. The code gets
honed as new functionality is introduced. xv is an example where
it has gotten too big. It stopped at around 110k lines, but
nothing significant has happened after 80k lines.

The Linux and FreeBSD kernels are around 2+ million lines of code,
and they work because there are sub-projects and well defined
interfaces, each maintainable by a single person.
A good third of the code goes into device drivers for various
hardware.

The system libraries and core utilities are about as big again,
but can benefit from a separate team of people.

And then add X. Double again.

Add again some editors, compilers utilities etc. and you have
doubled again. And that is only the baseline.

All this is done by "artisan programmers", and it shows that
it can be done. Piecemeal, steadily and in small increments.

I have seen too many large (10M+ line) projects where complexity
is really forced on them by an inappropriate architecture. You could
probably get almost an order of magnitude decrease by introducing
appropriate languages and layers. [But please let them be
a little more comprehensible than sendmail's.]

Example : a stock accounting system. There are three hairy
problems associated with these

1) Transactional integrity and security validatable by any auditor.
2) Business logic of outlandish complexity.
3) Large transaction volumes with near realtime response demands.

If done in a monolithic fashion complexity will explode.

You need to separate 1 and 2 and make a little language for
the business complexity, where the language itself guarantees
1 and has a shot at 3. I have never seen this done.
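A sketch of what such a separation might look like (hypothetical, Python): the business rules are plain data, and only the interpreter touches the ledger, so point 1 is guaranteed by the "little language" itself no matter how outlandish the rule that produced the postings.

```python
class IntegrityError(Exception):
    pass

def apply_rule(ledger, postings):
    """Apply one business rule's output: a list of (account, amount).

    Double-entry integrity is checked here, once, for every rule:
    postings must net to zero, and they commit atomically -- the
    rule author cannot violate point 1 even by accident.
    """
    if sum(amount for _, amount in postings) != 0:
        raise IntegrityError("postings do not balance")
    staged = dict(ledger)                   # stage, then commit: all-or-nothing
    for account, amount in postings:
        staged[account] = staged.get(account, 0) + amount
    ledger.clear()
    ledger.update(staged)
```

The hairy business logic (point 2) lives entirely in whatever generates the postings lists; the integrity property no longer multiplies with it.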

>Have you ever tried to report a problem to a large vendor that is
>due SOLELY to the underlying computational assumptions of three or
>more separately developed components being subtly incompatible?
>I have. Guess how far I got.

I have had the privilege of setting up operations for such
software. Lesson#1 : You can write any SLA you want as long as
there is a stringent validation for entry into the SLA. I never
saw more than around 1/4th of projects ever make it into SLA
terms; i.e. past external validation.

-- mrr


Morten Reistad
Jul 1, 2003, 10:39:26 AM
In article <bdre26$56m$1...@pegasus.csx.cam.ac.uk>,

Nick Maclaren <nm...@cus.cam.ac.uk> wrote:
>
>In article <Pine.LNX.4.55.03...@ask.diku.dk>,
>"Peter \"Firefly\" Lund" <fir...@diku.dk> writes:
>|> On Tue, 1 Jul 2003, Nick Maclaren wrote:
>|>
>|> > |> Why would software be any harder?
>|> >
>|> > Because it is less well engineered. Also, do you know how many
>|>
>|> No. Because you don't have high-level screws. 10 million lines of mostly
>|> flat-line code broken into pieces that interact very little with each
>|> other would be a lot easier to understand than 10 million typical lines.
>
>I suggest learning a little more about the design of such systems.
>Your point is correct, but your viewpoint of their problems is
>overly simplistic.
>
>|> It would also be even easier if we did apply the engineering we know and
>|> cut it down to say 5 or 10 thousand lines (probably less).
>
>I know of no such example. If you have ever tried to write a
>serious product and keep it down to that size, you will have found
>out that only some of the bloat is due to engineering incompetence.
>Rather more is due to interfaces being designed non-mathematically,
>but there is a large residue of special cases, messy situations and
>so on that can't be handled compactly.

You need a way to isolate the residue of special cases. A language,
a subsystem. You need to bring their complexity contribution down to
additive levels, not multiplicative.
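The arithmetic behind "additive, not multiplicative" is worth making explicit: n special cases isolated behind one interface add roughly n items to what a maintainer must understand, while n special cases free to interact contribute on the order of n(n-1)/2 pairwise combinations, before even counting higher-order interactions. A back-of-envelope check (Python):

```python
def additive(n):
    # isolated special cases: understand each one, once
    return n

def pairwise(n):
    # freely interacting special cases: every pair is a
    # potential interaction to reason about
    return n * (n - 1) // 2

# 50 special cases: 50 things to learn if isolated,
# 1225 pairwise interactions if not.
```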

>Factors of 10 reduction are often possible; factors of 100, perhaps.
>I do not believe factors of 1,000 and more, as you are claiming.

I don't believe most large projects really can be rolled back that
drastically. They must be compartmentalized instead.

-- mrr

Jan C. Vorbrüggen
Jul 1, 2003, 12:14:27 PM
> " Aircraft carrier project--a naval project with 30 million parts (a
> submarine has only 8 million parts)."
>
> Why would software be any harder?

Because people insist on designing it in a strongly-coupled way.

And even when designing as loosely-coupled hierarchical systems, airplanes
et al. develop surprising "emergent" interactions - some of which are hard
to solve even in principle. An example would be the Warsaw Airbus incident,
or even the Ariane 501 failure.

Jan

Jan C. Vorbrüggen
Jul 1, 2003, 12:11:38 PM
> Such as the open communications ducts that caused the fire in
> HMS Sheffield, which has an exact analogy in the failure of software.
> And, like the latter, it was known to be a problem in the 1960s (in
> high-rise building), yet had been ignored by the builders of that class
> of vessel in the 1980s.

Known? It was ignored when a terminal at Düsseldorf airport was built,
which led to the deaths of 17 (IIRC) people. When I first read the account
in the paper, I couldn't believe it.

Jan

Nick Maclaren
Jul 1, 2003, 12:20:09 PM

In article <e5ordb.8u3.ln@acer>,

Morten Reistad <m...@reistad.priv.no> writes:
|> >|>
|> >|> " Aircraft carrier project--a naval project with 30 million parts (a
|> >|> submarine has only 8 million parts)."
|> >|>
|> >|> Why would software be any harder?
|>
|> Because there are more interactions.
|>
|> A bolt holding a radar mast on an aircraft carrier does just that. It
|> can be inspected, maintained and changed by a person knowing about
|> bolts and naval warship construction. [S]He does not have to know much
|> about radar systems.
|>
|> A failure of this bolt will only lead to failure of one subsystem
|> (beside someone getting hit by a falling mast).
|>
|> Ditto for the whole radar subsystem. This can be assembled, tested and
|> validated pretty much in isolation.

While there is some truth in that, most of the independence is
the result of careful design. Let us take your bolt as an example.

The fitting is designed so that a failure of the bolt does not
cause the movement of the mast to degrade some other component.
It is also designed so that the bolt can be replaced without
having to dismantle the mast, let alone having to disable some
other component. The bolt is designed so that its probability of
failure is related to the severity of the consequence of its
failure. It is designed so that expected human error will not
cause the bolt to fail in use. And so on.

This doesn't always work, but failure to achieve such targets is
regarded as a fault in the design. This is not so in most modern
software.
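A rough software analogue of that bolt discipline (a sketch under stated assumptions, not a prescription): wrap each subsystem so that its failure degrades only itself, by design, rather than leaving the blast radius to chance.

```python
def isolated(subsystem, fallback=None):
    """Run a subsystem so that its failure is contained.

    Like the bolt fitting: the consequence of failure is a design
    decision made at the joint, not a property left to emerge.
    """
    def run(*args, **kwargs):
        try:
            return subsystem(*args, **kwargs)
        except Exception:
            return fallback        # degrade this subsystem only
    return run

# a failing radar feed no longer takes down the caller
radar = isolated(lambda: 1 / 0, fallback="no contact")
```

As with the bolt, this only works if the fallback value is itself designed so the caller can proceed safely with it.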


Regards,
Nick Maclaren.

Joseph Seigh
Jul 1, 2003, 12:13:13 PM

Nick Maclaren wrote:
>
> |> Seems to me that modular programming, object orientation, or just
> |> languages with higher level constructs will resolve many of his
> |> concerns. One doesn't design an airplane or bridge by writing down how
> |> to mine ore, smelt it, make bolts and rivets, roll sheet etc.
>
> No, but they would push the limits further. Instead of modern
> systems being bug-ridden and almost unusable heaps, debuggable
> only by calling in the dwindling number of people with really
> low-level hacking experience, they could be fairly robust and
> largely self-diagnosing.
>
> We could be talking about a factor of 10 reduction in development
> effort and administrator effort, a factor of 100 reduction in
> visible problems, a factor of 100 increase in software MTBF in
> problem situations, coupled with a factor of 10 increase in size.
> There is plenty of margin :-(
>
> There is no way to ELIMINATE the problems of complexity, however.
>

Another part of the problem is people-related. They need to stop
promoting programmers who are really good at dealing with complexity
into system designers. It seems like a good idea, but it's not:
the last thing these system designers are going to do is design simpler
systems, level the playing field, and give up a major advantage they
have over other programmers.

Joe Seigh

jonah thomas
Jul 1, 2003, 12:45:05 PM
Nick Maclaren wrote:
> Robert Myers <rmy...@rustuck.com> writes:

> |> A paradigm already exists for scaling programs to arbitrarily large
> |> sizes. It's called a network.

> Yeah, right. I was having run-ins with the "structured programming"
> dogmatists back in the 1960s and 1970s on this one, and the common
> errors have not changed.

> By splitting programs into functions of at most 20 lines long (yes,
> seriously), you may be able to understand every function at a glance.
> You will not, however, be able to understand their interactions. So
> you split the program into separate ones of at most 20 functions,
> and can now understand every program. But you will not be able to
> understand the network of programs. And so on.

The way I understand that dogma, you need first to have your small
functions interact only across a known simple interface. So the
interaction of functions is known and simple, and you can ignore their
details and look only at the inputs and outputs. Then you make them
interact using programs that are no more complicated than those first
functions. The high-level programs must also be short and simple and
interact only through simple known interfaces. And so on, hierarchically.
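That hierarchy can be sketched mechanically (an illustrative Python toy, not anyone's real system): every level, high or low, exposes the same narrow value-in/value-out interface, so composing units at one level produces a unit composable at the next.

```python
from functools import reduce

def compose(*stages):
    """Build a higher-level 'program' from lower-level ones; the result
    has the same narrow interface, so it composes again one level up."""
    return lambda x: reduce(lambda value, stage: stage(value), stages, x)

# level 0: trivial functions; level 1: a "program" of functions
normalize = compose(str.strip, str.lower)
# level 2: a "network" of programs, still the same simple interface
pipeline = compose(normalize, lambda s: s.replace(" ", "_"))
```

The interactions stay knowable precisely because every unit is reachable only through the one interface; the cost is exactly the efficiency loss discussed below in the thread.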

This approach can sometimes work well. But you lose a sort of
efficiency in allowing only simple hierarchical interactions. If you're
lucky everything works fine. If you're unlucky you get a system with
such poor performance that it isn't worth using even though it doesn't
do anything unexpected.

When it works, one of the things that happens is that the interfaces
seem intuitive. If they don't, then you'll drown in numbers of simple
interactions. Even when it's simple components connected simply, you can
still end up with so many of them that you can't keep track.

> The same thing applies to hardware. It is easy to analyse and debug
> race and other inconsistency conditions involving two entities. As
> networks grow in complexity, it becomes harder and harder. There
> comes a point (approaching, in some networks) where most failure
> time (down time or debugging) is not associated with a problem in
> ANY component, but is associated with the network structure itself.
> TANSTAAFL.

By analogy I'd say that you might need hierarchical networks where you
never get too many units interacting at once. Then you at least have
the possibility to track down the problems. But performance degrades
across the hierarchy even in the best case. Still, what choice do you
have? If the alternative is sacrificing a system administrator to the
network gods every full moon....

Tarjei T. Jensen
Jul 1, 2003, 1:11:18 PM
Morten Reistad wrote:
> Example : a stock accounting system. There are three hairy
> problems associated with these
>
> 1) Transactional integrity and security validable by any auditor.
> 2) Business logic of outlandish complexity.
> 3) Large transaction volumes with near realtime response demands.
>
> If done in a monolithic fashion complexity will explode.
>
> You need to separate 1 and 2 and make a little language for
> the business complexity, where the language itself guarantees
> 1 and has a shot at 3. I have never seen this done.

I'm not sure that there is much of a problem apart from 3. In all sorts
of accounting systems I know of, 1 and 2 are solved by breaking the job
into little steps which are executed in turn.

Most people I know get into trouble when trying to automate accounting
calculations (e.g. for their tax returns). The problem is that they try to
do too much in one go. When I try to convince them that this is a problem
already solved, they refuse to believe me and insist that the accounting
stuff is really difficult. The interesting bit is that they seem to have no
problem doing the sums properly on their paper tax return forms.

greetings,

jmfb...@aol.com

Jul 1, 2003, 11:30:15 AM
In article <3F018251...@cavtel.net>,

It's not intuitive! The interfaces are documented. Anything that
doesn't follow the call gets an immediate error return.

>
>> The same thing applies to hardware. It is easy to analyse and debug
>> race and other inconsistency conditions involving two entities. As
>> networks grow in complexity, it becomes harder and harder. There
>> comes a point (approaching, in some networks) where most failure
>> time (down time or debugging) is not associated with a problem in
>> ANY component, but is associated with the network structure itself.
>> TANSTAAFL.
>
>By analogy I'd say that you might need hierarchical networks where you
>never get too many units interacting at once. Then you at least have
>the possibility to track down the problems. But performance degrades
>across the hierarchy even in the best case. Still, what choice do you
>have? If the alternative is sacrificing a system administrator to the
>network gods every full moon....

I don't understand. We used to call it black box development.
Rather similar to a box in a flow chart. If you are outside the
box, the only thing you need to know is what goes in or what comes
out and NOT what happens inside.

The complexity of programming is that there exists nested black
boxes. If you keep going lower and lower and lower, you can
find yourself back in the electric generator at the power station
and then there are all those black boxes in the physics and wiring
areas.

The fun is arranging things so that a CATCH-22 becomes a nested
black box within itself. That's a lot of fun. :-)

Robert Myers

Jul 1, 2003, 2:05:54 PM
On 1 Jul 2003 07:24:41 GMT, nm...@cus.cam.ac.uk (Nick Maclaren) wrote:
>
>In article <2o12gvc6ishbpa20n...@4ax.com>,
>Robert Myers <rmy...@rustuck.com> writes:
>|>
>|> FWIW:
>|>
>|> http://www.jdmag.wpafb.af.mil/bogus%20parts.pdf
>|>
>|> "a single domestic passenger airplane alone can contain as many as 6
>|> million parts"
>|>
>|> http://www.majorprojects.org/pubdoc/677.pdf
>|>
>|> " Aircraft carrier project--a naval project with 30 million parts (a
>|> submarine has only 8 million parts)."
>|>
>|> Why would software be any harder?
>
>Because it is less well engineered.

If by less well engineered you mean that most software never goes
through a design, specification, and test process anything like
something built for the military or intended for civilian flight, I
agree with you, but there is nothing fundamental about software that
dictates that it be built that way.

>Also, do you know how many
>such military engineering systems have significant and even major
>components that are decommissioned before they are got to work in
>the field?
>

If you take a subscription to Aviation Week you can follow at least
some of the misadventures of aerospace engineering projects in real
time.

The American aerospace industry has been repeatedly belittled, at
least in the American press, as being this inept sinkhole for money.
Even a sympathetic insider, after having sat through just a few
interminable meetings, might wonder how anything ever gets done.

The reality is that the industry has acquired an astonishing mastery
of complexity and an ability to marshal heterogeneous resources from
the most unlikely places to bring about sometimes quite amazing
results.

For all its reputation for coverups, aerospace has been incredibly
unsuccessful in hiding its screwups. The spectacular successes, on
the other hand, are often concealed from public view under pain of a
prison sentence.

>|> A paradigm already exists for scaling programs to arbitrarily large
>|> sizes. It's called a network.
>
>Yeah, right. I was having run-ins with the "structured programming"
>dogmatists back in the 1960s and 1970s on this one, and the common
>errors have not changed.
>
>By splitting programs into functions of at most 20 lines long (yes,
>seriously), you may be able to understand every function at a glance.
>You will not, however, be able to understand their interactions. So
>you split the program into separate ones of at most 20 functions,
>and can now understand every program. But you will not be able to
>understand the network of programs. And so on.
>
>The same thing applies to hardware. It is easy to analyse and debug
>race and other inconsistency conditions involving two entities. As
>networks grow in complexity, it becomes harder and harder. There
>comes a point (approaching, in some networks) where most failure
>time (down time or debugging) is not associated with a problem in
>ANY component, but is associated with the network structure itself.
>TANSTAAFL.

I believe in lots of things most other people don't, including magic
and free lunches. The internet works. Bank ATM networks work. The
global financial system works. The telephone system works.

Jan Vorbrüggen <jvorbr...@mediasec.de> hit it dead on:

>> " Aircraft carrier project--a naval project with 30 million parts (a
>> submarine has only 8 million parts)."
>>
>> Why would software be any harder?
>

>Because people insist on designing it in a strongly-coupled way.
>

If the eyes of students don't roll heavenward when they are presented
with a network stack, told how useful the concept is, and then watch
how actual software breaks the layering model right from the git-go,
they don't belong in engineering.

I deliberately did *not* choose some common term from software
engineering, like structured programming or object oriented
programming.

A network is such a powerful paradigm for software design because each
node has a finite number of ports. Like an ancient walled city, all
information must come in or go out through the gates. Each city is
governed separately. Requests for information go out through a gate
(and can be logged completely if necessary) and responses come back in
through a gate (and can be logged completely if necessary). The
communication protocol is transparent, and communication traffic can be
intercepted outside the city gates, so that no principality's word has
to be taken for it as to what it is saying to the other principalities
or what it is hearing from other principalities.

When the number of city-states becomes unmanageable, nations form. If
their borders are porous, it is not intentional. Defining actual
nations without randomly porous borders is impossible, but in the case
of software, it is not. When nations become too large, you form
confederations of nations. At every level of governance the rules are
the same: each principality manages its own internal affairs,
communicates with the outside world only through a finite number of
ports, and uses a communication protocol that can be intercepted and
deciphered by an authorized examiner of network traffic outside the
city walls.
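The walled-city rules above translate almost literally into code: private state, a small set of named gates, and a log of everything that crosses them. A hypothetical Python sketch (all names invented):

```python
# The "walled city" paradigm: each principality manages its own state and
# talks to the world only through named gates, where all traffic is logged.

class City:
    def __init__(self, name):
        self.name = name
        self._state = {}            # internal affairs: invisible outside
        self.gate_log = []          # everything crossing a gate is recorded

    def handle(self, gate, request):
        """The only way in or out: traffic at a named gate, fully logged."""
        self.gate_log.append(("in", gate, request))
        response = self._govern(request)
        self.gate_log.append(("out", gate, response))
        return response

    def _govern(self, request):     # internal governance, free to change
        if request[0] == "set":
            _, key, value = request
            self._state[key] = value
            return ("ok",)
        if request[0] == "get":
            return ("ok", self._state.get(request[1]))
        return ("error", "unknown request")

athens = City("Athens")
athens.handle("north_gate", ("set", "grain", 40))
reply = athens.handle("north_gate", ("get", "grain"))
```

No principality's word has to be taken for what it said: an authorized examiner reads `gate_log` outside the walls.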

By no very great stretch of the imagination, the internet is just
one big program. By a slightly smaller stretch of the imagination, my
humble intranet is a program, and a very complicated one, at that. By
no stretch of the imagination at all, very complicated programs
designed, written, and supervised by independent parties can cooperate
over a far-flung network to perform a single task in a way that is
no different from a "Hello, World" program.

As to "emergent behavior". Yes, such things happen. That's how
systems engineers stay employed. :-).

>
>Have you ever tried to report a problem to a large vendor that is
>due SOLELY to the underlying computational assumptions of three or
>more separately developed components being subtly incompatible?
>I have. Guess how far I got.
>

When an aerospace contractor buys anything other than services, there
is a specification. There are even specifications for specifications.
All questions of culpability for subcontractors come down to one
question: did the subcontractor deliver parts that meet the agreed to
specification, or not. The general contractor is responsible for
selecting and coordinating subcontractors in such a way that the
overall system works. If all subcontractors have delivered according
to spec, it is the general contractors' job to fix it. That's why
they get the big bucks.

If you've bought software and/or hardware from a general contractor,
you almost certainly bought a support contract with it. If your
vendor isn't meeting the terms of the support contract and you are
having a hard time enforcing them, you have my sympathy, with no trace
of sarcasm. Huge companies in this business have survived very hard
times by going to great lengths to avoid leaving their customers
feeling that way.

RM


Nick Maclaren

Jul 1, 2003, 2:18:28 PM

In article <9h13gvs2ac1mh3tld...@4ax.com>,

Robert Myers <rmy...@rustuck.com> writes:
|>
|> If by less well engineered you mean that most software never goes
|> through a design, specification, and test process anything like
|> something built for the military or intended for civilian flight, I
|> agree with you, but there is nothing fundamental about software that
|> dictates that it be built that way.

Precisely. I agree with you completely.

|> If you take a subscription to Aviation Week you can follow at least
|> some of the misadventures of aerospace engineering projects in real
|> time.
|>
|> The American aerospace industry has been repeatedly belittled, at
|> least in the American press, as being this inept sinkhole for money.
|> Even a sympathetic insider, after having sat through just a few
|> interminable meetings, might wonder how anything ever gets done.

Why single the Americans out? I thought that was generally true :-)

|> The reality is that the industry has acquired an astonishing mastery
|> of complexity and an ability to marshal heterogeneous resources from
|> the most unlikely places to bring about sometimes quite amazing
|> results.

Yes. I am amazed at how well it does.

|> For all its reputation for coverups, aerospace has been incredibly
|> unsuccessful in hiding its screwups. The spectacular successes, on
|> the other hand, are often concealed from public view under pain of a
|> prison sentence.

Nice one!

|> If you've bought software and/or hardware from a general contractor,
|> you almost certainly bought a support contract with it. If your
|> vendor isn't meeting the terms of the support contract and you are
|> having a hard time enforcing them, you have my sympathy, with no trace
|> of sarcasm. Huge companies in this business have survived very hard
|> times by going to great lengths to avoid leaving their customers
|> feeling that way.

It is actually the problem that there is no definition of whether
software works or not. At any level :-(

You might be surprised at how few software vendors have a reputation
for honouring the spirit (or even the letter) of their contracts.
The real question is how badly they let you down. Their problem is
that, IF they took every little bug and other fault seriously, they
would go broke.


Regards,
Nick Maclaren.

Tom Gardner

Jul 1, 2003, 2:25:50 PM
Morten Reistad <m...@reistad.priv.no> wrote in news:e5ordb.8u3.ln@acer:

> A bolt holding a radar mast on an aircraft carrier does just that. It
> can be inspected, maintained and changed by a person knowing about
> bolts and naval warship construction. [S]He does not have to know much
> about radar systems.
>
> A failure of this bolt will only lead to failure of one subsystem
> (beside someone getting hit by a falling mast).
>
> Ditto for the whole radar subsystem. This can be assembled, tested and
> validated pretty much in isolation.

Completely false in all respects, because of the
"rusty bolt" problem!

Theory: rusty bolts act as crude semiconductors, and rectify
any EM radiation falling on them. The result of rectification,
as any radio engineer knows, is that the bolt generates and
re-radiates harmonics and subharmonics of the incident radiation.
In other words, energy gets splatted indiscriminately across
the radio spectrum. The consequence is that it is very
difficult to predict which combination of systems will work
together, and which will interfere; you have to suck it and see.
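The arithmetic behind the splatter is trivial; what makes prediction hard is that every junction multiplies the set of frequencies in play. A rough sketch of the harmonic part alone (ignoring subharmonics and mixing products, with purely illustrative frequencies):

```python
# A rusty-bolt junction rectifies incident RF and re-radiates harmonics:
# integer multiples of the fundamental. Which victim bands they land in
# is simple arithmetic, which is why co-siting radar and satcom is hard.

def harmonics(fundamental_mhz, count=5):
    """First `count` harmonics of a transmitter, in MHz."""
    return [fundamental_mhz * n for n in range(1, count + 1)]

def hits_band(freqs, band):
    """Which re-radiated products fall inside a (low, high) victim band?"""
    low, high = band
    return [f for f in freqs if low <= f <= high]

products = harmonics(300.0)                        # 300 .. 1500 MHz
clobbered = hits_band(products, (1150.0, 1350.0))  # victim band gets hit
```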

Practice: HMS Sheffield was sunk because its long-range radar
had been turned off in order to allow satellite communications;
thus missing the Exocet coming down its throat.

Anne & Lynn Wheeler

Jul 1, 2003, 2:32:57 PM
nm...@cus.cam.ac.uk (Nick Maclaren) writes:
> No, but they would push the limits further. Instead of modern
> systems being bug-ridden and almost unusable heaps, debuggable
> only by calling in the dwindling number of people with really
> low-level hacking experience, they could be fairly robust and
> largely self-diagnosing.
>
> We could be talking about a factor of 10 reduction in development
> effort and administrator effort, a factor of 100 reduction in
> visible problems, a factor of 100 increase in software MTBF in
> problem situations, coupled with a factor of 10 increase in size.
> There is plenty of margin :-(
>
> There is no way to ELIMINATE the problems of complexity, however.
>
> And, no, I do NOT mean Project Eliza, which completely misses the
> point.

some activity from dependable computing, some reference:
http://www.hdcc.cs.cmu.edu/may01/index.html
and
http://www.hdcc.cs.cmu.edu/index.html
http://www.scs.cmu.edu/hot/2000/11/nasa.html
--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
Internet trivia 20th anv http://www.garlic.com/~lynn/rfcietff.htm

Anne & Lynn Wheeler

Jul 1, 2003, 2:34:54 PM
Jan C. Vorbrüggen <jvorbr...@mediasec.de> writes:
> Known? It was ignored when a terminal at Düsseldorf airport was
> built, which lead to the death of 17 (IIRC) people. When I first
> read the account in the paper, I couldn't believe it.

is it my imagination or were the terminals at Düsseldorf and Dresden
done by the same architect?

jonah thomas

Jul 1, 2003, 2:44:32 PM
jmfb...@aol.com wrote:
> jonah thomas <j2th...@cavtel.net> wrote:

>>When it works, one of the things that happens is that the interfaces
>>seem intuitive. If they don't, then you'll drown in numbers of simple
>>interactions. When it's simple components connected simply, you have
>>the possibility still to just get so many of them that you can't keep
>>track.

> It's not intuitive! The interfaces are documented. Anything that
> doesn't follow the call gets an immediate error return.

Yes. But if it isn't intuitive, you'll likely make lots of errors in
spite of the documentation. When it works, usually it turns out to be
intuitive and also well-documented.

>>>The same thing applies to hardware. It is easy to analyse and debug
>>>race and other inconsistency conditions involving two entities. As
>>>networks grow in complexity, it becomes harder and harder. There
>>>comes a point (approaching, in some networks) where most failure
>>>time (down time or debugging) is not associated with a problem in
>>>ANY component, but is associated with the network structure itself.
>>>TANSTAAFL.

>>By analogy I'd say that you might need hierarchical networks where you
>>never get too many units interacting at once. Then you at least have
>>the possibility to track down the problems. But performance degrades
>>across the hierarchy even in the best case. Still, what choice do you
>>have? If the alternative is sacrificing a system administrator to the
>>network gods every full moon....

> I don't understand. We used to call it black box development.
> Rather similar to a box in a flow chart. If you are outside the
> box, the only thing you need to know is what goes in or what comes
> out and NOT what happens inside.

The first theory was that if every individual black box works
'correctly' then a network of them will also work correctly. But you
can get network problems that no individual component will show. And
the more components in the network the harder it is to find out how the
problems start.

To the extent that you can solve software problems by using simple
components connected simply in hierarchies (and this doesn't solve
everything) maybe you could apply the same approach to networks. Simple
networks are easier to fix.

So break a big network into small networks, and each of them can be made
to work. Then you have a network of networks, and there are a limited
number of networks in the big network, and maybe you can fix that. And
then the network of networks of networks.

At each level you can hope to treat the components of that level as
black boxes. But when you make the hierarchy you've restricted the
communication to established channels. This may degrade the performance
in the best case.
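The network-of-networks idea can be sketched: in a hierarchy, a message between two leaves climbs to the nearest common parent and back down, so at each level you only reason about a handful of children instead of the full mesh. A toy Python sketch (the topology is invented):

```python
# Hierarchy as a complexity cap: a message between two leaves must climb
# to the nearest common ancestor and back down, so at every level you only
# reason about a small, fixed set of children - never the full mesh.

def route(tree, src, dst):
    """Path from src up to the nearest common ancestor, then down to dst.
    `tree` maps each node to its parent; roots do not appear as keys."""
    def path_to_root(node):
        chain = [node]
        while node in tree:
            node = tree[node]
            chain.append(node)
        return chain

    up = path_to_root(src)
    down = path_to_root(dst)
    common = next(n for n in up if n in down)   # nearest shared ancestor
    return up[:up.index(common) + 1] + list(reversed(down[:down.index(common)]))

# Two sub-networks under one backbone: the backbone is the only channel
# between them - the "established channel" that restricts communication.
tree = {"host_a": "net1", "host_b": "net1", "host_c": "net2",
        "net1": "backbone", "net2": "backbone"}

hops = route(tree, "host_a", "host_c")
```

The price is visible in the hop count: cross-network traffic always pays the climb through the hierarchy, which is the best-case performance degradation mentioned above.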

> The complexity of programming is that there exists nested black
> boxes. If you keep going lower and lower and lower, you can
> find yourself back in the electric generator at the power station
> and then there are all those black boxes in the physics and wiring
> areas.

> The fun is arranging things so that a CATCH-22 becomes a nested
> black box within itself. That's a lot of fun. :-)

"Doctor, it hurts when I do this."
"Then don't do that."

Robert Myers

Jul 1, 2003, 3:30:45 PM
On Tue, 01 Jul 2003 10:44:32 -0400, jonah thomas <j2th...@cavtel.net>
wrote:

>
>At each level you can hope to treat the components of that level as
>black boxes. But when you make the hierarchy you've restrictrd the
>communication to established channels. This may degrade the performance
>in the best case.
>

A TCO calculation that can never be done: the incredible life-cycle
cost increases that are incurred every time someone creates a data
path that saves a little time in the present moment and creates
endless opportunities for mischief in the future.

<snip>


>
>"Doctor, it hurts when I do this."
>"Then don't do that."

RM

Paul Wallich

Jul 1, 2003, 3:55:20 PM
Robert Myers wrote:

> On Mon, 30 Jun 2003 19:31:54 -0700, "H. E. Taylor"
> <h...@despam.autobahn.mb.ca> wrote:
>
>
>>Greetings,
>> One of the books I have been reading lately is "The Next
>> Fifty Years, Science in the First Half of the 21st Century".
>> It is a collection of essays by 25 third party experts.
>> One of the them is by Jaron Lanier. His essay is about
>> complexity. He makes an interesting statement:
>>
>> "Since the complexity of software is currently limited by the
>> ability of human engineers to explicitly analyze and manage it,
>> we can be said to have already reached the complexity ceiling
>> of software as we know it. If we don't find a different way
>> of thinking about and creating software, we will not be writing
>> programs bigger than about 10 million lines of code, no matter
>> matter how fast, plentiful or exotic our processors become."
>> -Jaron Lanier (The Next Fifty Years, page 219)
>>
>> Most of my experience is in relatively small PC and embedded systems.
>> I wonder, is Lanier's perspective common on larger systems?
>><curious>
>>-het
>
>

> FWIW:
>
> http://www.jdmag.wpafb.af.mil/bogus%20parts.pdf
>
> "a single domestic passenger airplane alone can contain as many as 6
> million parts"
>
> http://www.majorprojects.org/pubdoc/677.pdf
>
> " Aircraft carrier project--a naval project with 30 million parts (a
> submarine has only 8 million parts)."
>
> Why would software be any harder?

Those counts for aircraft, submarines and such aren't really comparable,
because although there are zillions of pieces, the vast majority of
them are instantiations of a much smaller number of masters. Counting
that way is like counting lines of code after inlining all of your
calls. Where you do have oodles of different part types, physical
designs benefit from the continuous nature of physical properties (each
one isn't an entirely new problem) and from the fact that you usually
can't put the wrong components together (think of bolt sizes and
connector layouts as strong typing).

One famous case of a complex physical design going horribly wrong was
the Hartford Civic Center roof, a space frame built from hundreds of
thousands of apparently identical components. It turned out that some of
the rods and connectors were made of high-strength steel and intended to
go in particular areas of stress concentration, but that the workers
had not heeded the fine print of which identical-looking part to put
where. As a result, loads followed paths that weren't designed to
sustain them, and the whole thing fell down.

Analogies with unexpected interactions in information systems left as an
exercise....

paul

Jan C. Vorbrüggen

Jul 1, 2003, 3:56:00 PM
> is it my imagination or were terminals at dusseldorf and dresden
> done by the some architect?

I couldn't say. The bug was in the low-level code, as it were, which is
not the usual level for architects to work at. And the implementors violated
all agreed-upon rules of the art in so doing, of course.

So what else is new.

Jan

Charlton Wilbur

Jul 1, 2003, 4:29:29 PM
Robert Myers <rmy...@rustuck.com> writes:

> If by less well engineered you mean that most software never goes
> through a design, specification, and test process anything like
> something built for the military or intended for civilian flight, I
> agree with you, but there is nothing fundamental about software that
> dictates that it be built that way.

I concur that there's nothing fundamental that dictates that it must
be built that way, but I think it's the intangible nature of
software that leads to its being built that way.

When you build an airplane, the later you decide to change things, the
more it costs. Change it on paper, and it might cost you a man-month.
Change it after the first prototypes are built, and it might cost you
10 man-years. Fail to test it adequately, and the results will be
obvious. Software, being intangible, isn't perceived as having this
problem. (This is why, when a user says to me, "It will never need to
do X," I clarify that we share an understanding of 'never', and I get
the statement in writing anyway, because it's been my experience that
'never' means 'for the next three to five months' when used in that
construction.)

The need for stable design and rigorous testing is apparent to PHBs
for a number of reasons: for one thing, the airplane is *tangible*,
and the PHB can see just how a small change can ripple through the
whole design. Another thing is regulation: designs must be approved
by certified people and must meet certain standards. There's also
legal liability; most software systems disclaim as much liability as
possible. Thus it's apparent to the PHB that clear specifications are
more important than wiggle room.

In software, that's not often the case. A PHB who commits to a
certain set of specifications will be responsible if that certain set
of specifications doesn't get the job done, and so there's a good
reason for not committing to specifications -- leave it to the
programmers to work out, and if the software system doesn't do what it
needs, it will be the programmers' fault. With any luck, there won't
even be any blame assigned for it, or at least none that sticks; just
a lot of frustrated programmers and a PHB who's looking for the next
project to kill.

Charlton


Robert Myers

Jul 1, 2003, 4:48:58 PM
On Tue, 01 Jul 2003 11:55:20 -0400, Paul Wallich <p...@panix.com> wrote:

>
>Those counts for aircraft, submarines and such aren't really comparable,
>because although there are zillions of pieces, the vast majority of
>them are instantiations of a much smaller number of masters. Counting
>that way is like counting lines of code after inlining all of your
>calls. Where you do have oodles of different part types, physical
>designs benefit from the continuous nature of physical properties (each
>one isn't an entirely new problem) and from the fact that you usually
>can't put the wrong components together (think of bolt sizes and
>connector layouts as strong typing).
>
>One famous case of a complex physical design going horribly wrong was
>the Hartford Civic Center roof, a space frame built from hundreds of
>thousands of apparently identical components. It turned out that some of
>the rods and connectors were made of high-strength steel and intended to
> go in particular areas of stress concentration, but that the workers
>had not heeded the fine print of which identical-looking part to put
>where. As a result, loads followed paths that weren't designed to
>sustain them, and the whole thing fell down.
>

Why would you imagine that aerospace systems would be immune to such
problems?

Example of one "part":

"The type of jackscrew assembly involved in the accident was
originally designed by Douglas Aircraft in 1965 for the DC-9. It
weighs 100 pounds, is two inches in diameter, and costs $60,000."

Excessive (thousandths of an inch) wear to the threads of the part in
question was determined to be the probable cause of an Alaska Airlines
flight nosediving into the Pacific.

On balance, I think the parts count analogy for aerospace systems is
conservative.

Whether that is true or not, the point is that, whether the software
industry has adopted the methodologies or not, successful
methodologies exist and have been demonstrated successfully for
managing mind-bending levels of complexity, and I see no reason to
place an upper bound on the level of complexity that can be managed
for any kind of engineering system, including software.

RM


Peter "Firefly" Lund

Jul 1, 2003, 5:12:09 PM
On Tue, 1 Jul 2003, Paul Wallich wrote:

> them are instantiations of a much smaller number of masters. Counting
> that way is like counting lines of code after inlining all of your
> calls.

Precisely. Maybe Nick will read it this time.

-Peter

Nick Maclaren

Jul 1, 2003, 5:14:55 PM

In article <bdc3gv4hk1tmi6gra...@4ax.com>,

Robert Myers <rmy...@rustuck.com> writes:
|>
|> Whether that is true or not, the point is that, whether the software
|> industry has adopted the methodologies or not, successful
|> methodologies exist and have been demonstrated successfully for
|> managing mind-bending levels of complexity, and I see no reason to
|> place an upper bound on the level of complexity that can be managed
|> for any kind of engineering system, including software.

What is abundantly clear, however, is that complexity increases
cost and delay. Good methodology helps to reduce this but, for
any fixed methodology, there is sooner or later a vast increase
in cost and delay caused by complexity. One can argue whether
it is exponential or not, but it is often close to it.

So, while there may not be a theoretical upper bound, there most
definitely is a practical one. We don't know of a methodology
for handling systems a hundred times more complex than a modern
jumbo jet, for example. Yet building software is cheap enough
that such products can be contemplated - and are.


Regards,
Nick Maclaren.


jonah thomas

Jul 1, 2003, 5:21:55 PM
Robert Myers wrote:

> Whether that is true or not, the point is that, whether the software
> industry has adopted the methodologies or not, successful
> methodologies exist and have been demonstrated successfully for
> managing mind-bending levels of complexity, and I see no reason to
> place an upper bound on the level of complexity that can be managed
> for any kind of engineering system, including software.

How long does it take to design a new airplane? How much change from
the last design?

Maybe the time-to-market is part of the problem? If we were only
willing to wait 4 years for a well-designed application for Win2000,
we'd get well-designed applications for Win2000. In fact we could have
had some well-designed apps for Win98 by sometime last year, and
well-designed apps for Win95 by 1999.

But then, why should apps be forced to run on too-new, poorly-designed
OSs? Surely a complicated OS shouldn't be allowed out the door for at
least 8 years, after say 3 years in beta testing and maybe an extra 5
years of redesign.

The central problem is maybe that having a lot of errors is usually
acceptable in software. It isn't like planes falling out of the sky.
You just shrug and reboot.

Short of liability lawsuits from the relatives of defunct customers,
perhaps we could get much better reliability if, after the beta testing
is done, we went through about a year's worth of gamma testing in which
any error is likely to kill the top management of the company. It would
make for great TV commercials, kind of like the Maytag ones. The calm,
collected company president sitting at his desk with a piano suspended
over him. He contentedly works away. Every now and then he looks up
and looks kind of thoughtful. "I'm not worried. Our product is 100%
reliable. I'd stake my life on it."

Maybe there wouldn't be quite so much schedule pressure if it was done
like that.

Stan Barr

Jul 1, 2003, 5:26:53 PM
On Tue, 01 Jul 03 08:28:57 GMT, jmfb...@aol.com <jmfb...@aol.com> wrote:
>
>On a plane one day, I sat next to a guy who owned a company that
>made just airplane wings. He told me it took 9 months to make one.
>That gives just a hint about complexity.

A friend of mine makes Airbus wings just a few miles from here, and they
do take months to make - even with their sophisticated computer controlled
machinery. (They machine the skin, with integral stiffening ribs, from a
*huge* flat billet of aluminum!)
They fly them out in the fattest plane you've ever seen...

--
Cheers,
Stan Barr stanb .at. dial .dot. pipex .dot. com
(Remove any digits from the addresses when mailing me.)

The future was never like this!

Tim Shoppa

Jul 1, 2003, 5:30:18 PM
nm...@cus.cam.ac.uk (Nick Maclaren) wrote in message news:<bdre26$56m$1...@pegasus.csx.cam.ac.uk>...

> In article <Pine.LNX.4.55.03...@ask.diku.dk>,
> "Peter \"Firefly\" Lund" <fir...@diku.dk> writes:
> |> On Tue, 1 Jul 2003, Nick Maclaren wrote:
> |>
> |> > |> Why would software be any harder?
> |> >
> |> > Because it is less well engineered. Also, do you know how many
> |>
> |> No. Because you don't have high-level screws. 10 million lines of mostly
> |> flat-line code broken into pieces that interact very little with each
> |> other would be a lot easier to understand than 10 million typical lines.
>
> I suggest learning a little more about the design of such systems.
> Your point is correct, but your viewpoint of their problems is
> overly simplistic.
>
> |> It would also be even easier if we did apply the engineering we know and
> |> cut it down to say 5 or 10 thousand lines (probably less).
>
> I know of no such example. If you have ever tried to write a
> serious product and keep it down to that size, you will have found
> out that only some of the bloat is due to engineering incompetence.
> Rather more is due to interfaces being designed non-mathematically,
> but there is a large residue of special cases, messy situations and
> so on that can't be handled compactly.
>
> Factors of 10 reduction are often possible; factors of 100, perhaps.

Largish complex systems often rely on tools such as table look-up to
reduce the amount of C code and maximize the ability for at least the
tables to be machine-checked for consistency. This is really shifting
the solution domain from "C code" (or whatever general-purpose
computing language they use that week) to writing down the tables.
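Table look-up in miniature: the special cases live in a table that a tool can check for consistency, and the code shrinks to one generic loop. A hypothetical sketch in Python (the rates and structure are invented), in the spirit of the table-driven systems described above:

```python
# Table-driven design: the special cases live in data, not control flow,
# and the table itself can be machine-checked for consistency before use.

SHIPPING_TABLE = [
    # (max_weight_kg, zone, price)
    (1.0, "domestic", 4.00),
    (5.0, "domestic", 7.50),
    (1.0, "overseas", 9.00),
    (5.0, "overseas", 19.00),
]

def check_table(table):
    """A tool-style consistency check: positive prices, and weight
    brackets strictly increasing within each zone."""
    by_zone = {}
    for max_w, zone, price in table:
        assert price > 0, f"non-positive price for {zone}"
        assert max_w > by_zone.get(zone, 0.0), f"weights out of order in {zone}"
        by_zone[zone] = max_w

def quote(table, weight, zone):
    """The entire 'business logic': one generic lookup loop."""
    for max_w, row_zone, price in table:
        if zone == row_zone and weight <= max_w:
            return price
    raise ValueError("no rate for this shipment")

check_table(SHIPPING_TABLE)
price = quote(SHIPPING_TABLE, 3.2, "overseas")
```

Counting lines of `quote` alone, while ignoring the table and the checker, is exactly the cheating described below.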

> I do not believe factors of 1,000 and more, as you are claiming.

I've seen some extreme cases of a factor of 1000, but in those cases
they were *only* counting lines of C code and ignoring library calls
and the complexity of the tables, even ignoring the tools used to
build the tables. i.e. they were cheating.

MLOCs (Millions of Lines of Code) are one way to measure system complexity
but certainly not the best one!

Tim.

Paul Wallich

unread,
Jul 1, 2003, 5:54:51 PM7/1/03
to
Robert Myers wrote:

Nope, that's not a part. It's an assembly. Consists of a bunch of parts
put together.

> Excessive (thousandths of an inch) wear to the threads of the part in
> question was determined to be the probable cause of an Alaska Airlines
> flight nosediving into the Pacific.

Although thousandths of an inch may sound tiny and hard to detect,
anyone who has worked with mechanical systems will tell you that that
kind of margin is enormous for things like screw threads. An
undergraduate physics student in his first or second session in front of
a toolroom lathe can machine parts to a tolerance of a few thousandths
of an inch.

> On balance, I think the parts count analogy for aerospace systems is
> conservative.

Note also that addendum: originally designed almost 40 years ago for a
completely different aircraft, and still in service. How many
40-year-old code modules are still in daily use?

And more to the point, for any that are, does a central authority have
the phone number of every person using them, so that they can be
notified by day's end if a bug is found?

> Whether that is true or not, the point is that, whether the software
> industry has adopted the methodologies or not, successful
> methodologies exist and have been demonstrated successfully for
> managing mind-bending levels of complexity, and I see no reason to
> place an upper bound on the level of complexity that can be managed
> for any kind of engineering system, including software.

As someone else pointed out, We don't really seem to be willing to pay
the price, either in time or money, that it would cost to implement such
a system. And the IT industry fights tooth and nail against any efforts
to impose the kind of liability incentives that are standard for more
rigorous forms of engineering.

In my limited experience, the larger a cohesive project you get, the
further behind best practices it falls, perhaps because of the amount of
effort that needs to be put into managing just the simplest parts of the
complexity, like making sure all those thousands of programmers get paid.

paul

Pete Fenelon

unread,
Jul 1, 2003, 6:04:09 PM7/1/03
to
In alt.folklore.computers Stan Barr <sta...@dial.pipex.com> wrote:
> A friend of mine makes Airbus wings just a few miles from here, and they
> do take months to make - even with their sophisticated computer controlled
> machinery. (They machine the skin, with integral stiffening ribs, from a
> *huge* flat billet of aluminum!)
> They fly them out in the fattest plane you've ever seen...

The Guppy? Incredible thing - I've seen it somewhere near Manchester -
heading for Broughton?

Isn't there an Airbus-based replacement for the Guppy on the way?

pete
--
pe...@fenelon.com "there's no room for enigmas in built-up areas" HMHB

J Ahlstrom

unread,
Jul 1, 2003, 6:12:28 PM7/1/03
to

Robert Myers wrote:
>
> FWIW:
>
> http://www.jdmag.wpafb.af.mil/bogus%20parts.pdf
>
> "a single domestic passenger airplane alone can contain as many as 6
> million parts"

5,900,000 of which are rivets ?

>
> http://www.majorprojects.org/pubdoc/677.pdf
>
> " Aircraft carrier project--a naval project with 30 million parts (a
> submarine has only 8 million parts)."
>

> Why would software be any harder?
>

> A paradigm already exists for scaling programs to arbitrarily large
> sizes. It's called a network.
>

> RM
>

Peter da Silva

unread,
Jul 1, 2003, 6:09:05 PM7/1/03
to
In article <aN6Ma.1222$IP6....@eagle.america.net>,

del cecchi <dce...@msn.com> wrote:
> Seems to me that modular programming, object orientation, or just
> languages with higher level constructs will resolve many of his
> concerns.

None of those are sufficient or necessary to solve this problem.

The real issue is finding the right ways to slice a problem up into
parts that can be meaningfully addressed separately.

Actually, that's not the real issue. We do know many ways of doing
this that work reasonably well, such as splitting a program up into
cooperating components with well defined and understood interfaces
is a big one. "How many lines of code in the Internet?"

The real issue is getting people to do this in a world where someone
can honestly claim that a web browser is an integral part of an operating
system without being shipped off to the funny farm.

--
#!/usr/bin/perl
$/="%\n";chomp(@_=<>);print$_[rand$.]

Peter da Silva, just another Perl poseur.

Peter da Silva

unread,
Jul 1, 2003, 6:19:22 PM7/1/03
to
In article <2o12gvc6ishbpa20n...@4ax.com>,

Robert Myers <rmy...@rustuck.com> wrote:
> http://www.jdmag.wpafb.af.mil/bogus%20parts.pdf
>
> "a single domestic passenger airplane alone can contain as many as 6
> million parts"
>
> http://www.majorprojects.org/pubdoc/677.pdf
>
> " Aircraft carrier project--a naval project with 30 million parts (a
> submarine has only 8 million parts)."
>
> Why would software be any harder?

Why would software be any easier? 10 million lines of code puts a program
somewhere between a 747 and the Enterprise. Yes, some of those lines of code
are very simple, but some of those parts are screws.

> A paradigm already exists for scaling programs to arbitrarily large
> sizes. It's called a network.

Or an operating system, or anything else that lets you say "we can fit this
into 10 million lines and get it working without worrying about that, and we
can fit that into 10 million lines without worrying about this".

Peter da Silva

unread,
Jul 1, 2003, 6:21:23 PM7/1/03
to
In article <bdrcvp$4hr$1...@pegasus.csx.cam.ac.uk>,

Nick Maclaren <nm...@cus.cam.ac.uk> wrote:
> By splitting programs into functions of at most 20 lines long (yes,
> seriously), you may be able to understand every function at a glance.
> You will not, however, be able to understand their interactions. So
> you split the program into separate ones of at most 20 functions,
> and can now understand every program. But you will not be able to
> understand the network of programs. And so on.

Right. The job of finding the right places to split a problem up so that
each part is comprehensible *and* the interactions between each part are
comprehensible is one of the hardest things to do... and the hardest to
undo if you get it wrong.

Peter da Silva

unread,
Jul 1, 2003, 6:30:18 PM7/1/03
to
In article <bdru9p$kcj$1...@pegasus.csx.cam.ac.uk>,
Nick Maclaren <nm...@cus.cam.ac.uk> wrote:
> The fitting is designed so that a failure of the bolt does not
> cause the movement of the mast to degrade some other component.
> It is also designed so that the bolt can be replaced without
> having to dismantle the mast, let alone having to disable some
> other component. The bolt is designed so that its probability of
> failure is related to the severity of the consequence of its
> failure. It is designed so that expected human error will not
> cause the bolt to fail in use. And so on.

Ah yes, another point.

Each component in a software system has to be measured against all the
*kinds* of components in a hardware system. In fact, when you have very
many components in a software system doing the same job (the 'screws')
then either they're actually custom parts (drivers, or all the individually
shaped tiles on the Space Shuttle), or a design flaw (programmers A, B,
and C all wrote their own memory manager).

So in the software system, there isn't any analog to "the bolt", there's
"all the bolts of this type", and if you need to change them you need to
make sure that change doesn't change any of the places that bolt is used.

This also means that a 10 million line software system is probably more
complex than the 30 million component aircraft carrier.

jonah thomas

unread,
Jul 1, 2003, 6:51:26 PM7/1/03
to
Peter da Silva wrote:

> The real issue is getting people to do this in a world where someone
> can honestly claim that a web browser is an integral part of an operating
> system without being shipped off to the funny farm.

Do we have a world where someone can honestly claim that a web browser
is an integral part of an operating system?

Eugene Miya

unread,
Jul 1, 2003, 8:03:29 PM7/1/03
to
In article <aN6Ma.1222$IP6....@eagle.america.net>,
del cecchi <dce...@msn.com> wrote:
>> "H. E. Taylor" <h...@despam.autobahn.mb.ca> wrote in message
>> news:3F00F2...@despam.autobahn.mb.ca...

>> > One of the books I have been reading lately is "The Next
>> > Fifty Years, Science in the First Half of the 21st Century".
>> > It is a collection of essays by 25 third party experts.
>> > One of them is by Jaron Lanier. His essay is about
>> > complexity.
>> >
>> > If we don't find a different way
>> > of thinking about and creating software, we will not be writing
>> > programs bigger than about 10 million lines of code, no matter
>> > matter how fast, plentiful or exotic our processors become."
>> > -Jaron Lanier (The Next Fifty Years, page 219)
>>
>Depends on what you mean by "program". How many lines of code in
>MVS/OS390? OS400 including SLIC? Windows XP? Word XP? Unix? Oracle?
>DB2?

Actually, he just gave a talk on that subject (May before I went on
vacation, you can find the abstract in a.f.c. on google).
It's hard to summarize what he said, so I was thinking of inviting
him to work and having them do all the seminar arranging stuff.


>Who is this John Lanier guy? Somebody I should have heard of?

Jaron is a character.
He's more in the human interface, virtual reality (VPL glove),
graphics crowd than in high end computing (but he has sat on serious
HPCC committees).

Should you have heard of him? I dunno.
Jaron used to be a pretty serious coder in some ways,
not so batch oriented, much more interface interactive programming.

>Seems to me that modular programming, object orientation, or just
>languages with higher level constructs will resolve many of his

>concerns. One doesn't design an airplane or bridge by writing down how
>to mine ore, smelt it, make bolts and rivets, roll sheet etc.

I think a better analysis of the types of complexity would be better.
I've never been a fan of line of code as a metric. There are instances,
Szilard and the bomb, and the SR-71, where knowing basic materials like
stuff while seemingly trivial made critical differences in the final
product (boron used in making graphite and distilled not tap water used
in finishing and cleaning Ti). And I'd guess there were similar Cray
stories.

Larry__Weiss

unread,
Jul 1, 2003, 7:03:10 PM7/1/03
to
Peter da Silva wrote:
> This also means that a 10 million line software system is probably more
> complex than the 30 million component aircraft carrier.
>

Do large-scale mechanical designers have a tool available to them
that would be analogous to a programming language compiler?

That is, a mechanized way to generate an optimized "nuts and bolts"
assembly from a higher-level specification?

Peter da Silva

unread,
Jul 1, 2003, 7:03:01 PM7/1/03
to
In article <3F01D82E...@cavtel.net>,

Five years of lawsuits over that and Microsoft still has the essential
components of Internet Explorer embedded like a tumor in Windows, because
they convinced enough people that it was essential that it be there.

Whether they were honest about it, there are enough people who believe them
that I have to say we live in that world.

jonah thomas

unread,
Jul 1, 2003, 7:17:40 PM7/1/03
to

Not usually. What they have is lists of standard parts with known
properties. The goal is to find ways to use the standard parts so they
work right.

Programmers would have something like that if they had serious code
re-use. But it's a different kind of system.

If you form custom aluminum parts and you then rivet them together,
there's a question of what kind of rivets to use and how closely to space
them and a few things like that. You have standard parts with a new
part that they fit into. The new part is like a glue that sticks
together the standard parts. I think there's a fundamental difference
between an automobile chassis and a standard headlight you can put on
it. But I'm having trouble defining the difference. Somehow I think
there's more difference between the standard parts and the "glue" parts
in mechanical systems than there is in code. But I don't know how to
say what the difference is.

Jeffrey Dutky

unread,
Jul 1, 2003, 7:19:09 PM7/1/03
to
Robert Myers <rmy...@rustuck.com> wrote:
> FWIW:

>
> http://www.jdmag.wpafb.af.mil/bogus%20parts.pdf
>
> "a single domestic passenger airplane alone can contain as many as 6
> million parts"
>
> http://www.majorprojects.org/pubdoc/677.pdf
>
> " Aircraft carrier project--a naval project with 30 million parts (a
> submarine has only 8 million parts)."
>
> Why would software be any harder?
>

Two reasons:

First, in a physical device, interactions between parts are usually
limited by location (each part only affects other, nearby, parts)
whereas in software, every line of code can affect ANY piece of the
program state. This means that, in general, physical systems have a
complexity that is something like N**k (N = number of parts, k = some
degree of local interaction, N>>k) while software has a complexity
much closer to N**N (every part has potential interaction with every
other part).
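A toy calculation makes the gap concrete (the exponents above are
illustrative; even counting only *pairwise* interactions, the difference
between local and global coupling is dramatic):

```python
# Count potential pairwise interactions for N parts.

def local_interactions(n, k):
    # Physical system: each part can touch at most k nearby parts,
    # so the number of interacting pairs grows linearly in n.
    return n * k // 2

def global_interactions(n):
    # Software: any line can touch any piece of program state,
    # so every pair of parts is a potential interaction.
    return n * (n - 1) // 2

n = 1_000_000  # a million "parts"
print(local_interactions(n, 6))   # a few million pairs
print(global_interactions(n))     # roughly 5 * 10**11 pairs
```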

Second, humans have much more experience managing the construction
of large mechanical systems than with constructing large information
systems. Management, as a science, is based, almost entirely, on vast
amounts of empirical data: We just don't have enough of it, yet, to
know what we are doing as Software Engineers. Other disciplines of
engineering, on the other hand, have much larger and older bodies of
knowledge to draw upon.

- Jeff Dutky

Chris Hedley

unread,
Jul 1, 2003, 7:11:38 PM7/1/03
to
According to Pete Fenelon <pe...@fenelon.com>:

> The Guppy? Incredible thing - I've seen it somewhere near Manchester -
> heading for Broughton?
>
> Isn't there an Airbus-based replacement for the Guppy on the way?

A funny thing flew over Oxford today: looked a bit like a Hercules,
only with four jet engines, and painted the usual military grey. No
idea what it was, but it was flying bloody low.

Chris.
--
"If the world was an orange it would be like much too small, y'know?" Neil, '84
Currently playing: random early '80s radio stuff
http://www.chrishedley.com - assorted stuff, inc my genealogy. Gan canny!

Peter da Silva

unread,
Jul 1, 2003, 7:21:43 PM7/1/03
to
In article <3F01DAEE...@airmail.net>,

Larry__Weiss <l...@airmail.net> wrote:
> Peter da Silva wrote:
> > This also means that a 10 million line software system is probably more
> > complex than the 30 million component aircraft carrier.

> Do large-scale mechanical designers have a tool available to them
> that would be analogous to a programming language compiler?

Sure. Lots of them.

> That is, a mechanized way to generate an optimized "nuts and bolts"
> assembly from a higher-level specification?

Um, no, that's not the analog of a compiler. A compiler is like Autocad and
plotters and numerically controlled machines.

The source code you feed into the compiler... *that* is the "nuts and bolts"
assembly already.

Toon Moene

unread,
Jul 1, 2003, 7:59:02 PM7/1/03
to
Peter da Silva wrote:

> In article <2o12gvc6ishbpa20n...@4ax.com>,
> Robert Myers <rmy...@rustuck.com> wrote:

>>Why would software be any harder?

> Why would software be any easier? 10 million lines of code puts a program
> somewhere between a 747 and the Enterprise.

Isn't that the fundamental answer? If you commission a 10**7-line
software project, the resulting product should have approximately
the same cost as a 747.

Of course, if you happen to have 10**8 customers who are willing to buy
a bit-exact copy of that software *and use it under exactly the same
conditions* the cost at which you supply said software to your customers
will be only 10**(-8) of that 747 ...

--
Toon Moene - mailto:to...@moene.indiv.nluug.nl - phoneto: +31 346 214290
Saturnushof 14, 3738 XG Maartensdijk, The Netherlands
Maintainer, GNU Fortran 77: http://gcc.gnu.org/onlinedocs/g77_news.html
GNU Fortran 95: http://gcc-g95.sourceforge.net/ (under construction)

J Ahlstrom

unread,
Jul 1, 2003, 8:34:12 PM7/1/03
to

Peter might have meant
"where someone can successfully claim..."

JKA

Eugene Miya

unread,
Jul 1, 2003, 10:34:00 PM7/1/03
to
In article <2o12gvc6ishbpa20n...@4ax.com>,
Robert Myers <rmy...@rustuck.com> wrote:
>On Mon, 30 Jun 2003 19:31:54 -0700, "H. E. Taylor"
><h...@despam.autobahn.mb.ca> wrote:
>> "The Next Fifty Years, Science in the First Half of the 21st Century".
>> One is by Jaron Lanier. His essay is about complexity.
>>
>> "Since the complexity of software is currently limited by the
>> ability of human engineers to explicitly analyze and manage it,
>> we can be said to have already reached the complexity ceiling
>> of software as we know it. If we don't find a different way

>> of thinking about and creating software, we will not be writing
>> programs bigger than about 10 million lines of code, no matter
>> how fast, plentiful or exotic our processors become."
>> -Jaron Lanier (The Next Fifty Years, page 219)
>>
>> Most of my experience is in relatively small PC and embedded systems.
>> I wonder, is Lanier's perspective common on larger systems?

>
>FWIW:
>
>http://www.jdmag.wpafb.af.mil/bogus%20parts.pdf
>
>"a single domestic passenger airplane alone can contain as many as 6
>million parts"

What is it worth?
Are you attempting to equate a line of code to a plane part?

>http://www.majorprojects.org/pubdoc/677.pdf
>
>" Aircraft carrier project--a naval project with 30 million parts (a
>submarine has only 8 million parts)."

Are you attempting to equate a line of code to ship parts?

>Why would software be any harder?

I can think of a number of reasons, likely some enumerated by other
follow ups. One would get into the vague issues of natural language.
You can start with Brooks.

>A paradigm already exists for scaling programs to arbitrarily large
>sizes. It's called a network.

Huh? What kind of network? Did Forrest cover the address space greater
than 32-bits thread in comp.arch a while back?

Steve O'Hara-Smith

unread,
Jul 1, 2003, 8:51:31 PM7/1/03
to
On Tue, 1 Jul 2003 12:39:26 +0200
Morten Reistad <m...@reistad.priv.no> wrote:

MR> You need a way to isolate the residue of special cases. A language,
MR> a subsystem. You need to bring their complexity contribution down to
MR> additive levels, not multiplicative.

Yes! That's the key, getting the right set of abstractions to
describe the general area of problems well.

Then when patterns are found in the special cases these can often
be usefully abstracted to enrich the vocabulary, simplify the overall
system and increase flexibility.
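One way to picture "additive, not multiplicative" in code (a hedged
sketch; the device classes are invented): if every special case is
isolated behind one common interface, adding a case adds one adapter,
instead of adding a branch to every operation in the system.

```python
# Entangled style would put "if legacy: ..." branches in every
# operation, so cases multiply (ops x cases). Isolated style keeps
# each special case in one self-contained adapter, so cases add.

class Normal:
    def read(self):
        return "normal-read"
    def write(self, x):
        return f"normal-write:{x}"

class LegacyDevice:
    # ALL of this device's quirks live here, and nowhere else.
    def read(self):
        return "legacy-read"
    def write(self, x):
        return f"legacy-write:{x!r}"

def run(dev):
    # Generic code never mentions the special cases.
    return dev.read(), dev.write(42)
```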

--
C:>WIN | Directable Mirrors
The computer obeys and wins. |A Better Way To Focus The Sun
You lose and Bill collects. | licenses available - see:
| http://www.sohara.org/

Sander Vesik

unread,
Jul 1, 2003, 10:53:40 PM7/1/03
to
In comp.arch J Ahlstrom <jahl...@cisco.com> wrote:
>
>
> Robert Myers wrote:
>>
>> FWIW:
>>
>> http://www.jdmag.wpafb.af.mil/bogus%20parts.pdf
>>
>> "a single domestic passenger airplane alone can contain as many as 6
>> million parts"
>
> 5,900,000 of which are rivets ?
>

And really - if somebody wants to compare the number of components in an
airplane to software, they should compare parts to opcodes.

--
Sander

+++ Out of cheese error +++

dada

unread,
Jul 2, 2003, 12:36:22 AM7/2/03
to
<snip>

I believe that the basic idea is to manage complexity, and isolate it as much as
possible. The main advantage that we have with software is that if it is built
correctly, the pieces can be tested much earlier than the finished product. Most
of the complexity that I see every day is in interaction between systems
(systems, components, programs, subprograms, etc.), and these can be mitigated by
identifying these critical interactions as early as possible. Thus, the earlier
in the implementation that integration can be applied, the more it helps expose
errors/incompatibilities in the design. Each piece can then be thought of as that
black box that everyone wants to achieve.

The problem I have seen is that the PHBs do not want the time spent in putting
together these early systems (that takes time, which we all know is money), and
rather want tangible results (we have successfully coded x% of the project...
earned value and all that). They want the full engineering performed up front,
instead of abstracting the system, doing the early integration, finding the
problems early (when it is less costly to fix, by the way) and then going on to
the next level (yes, I am a "fan" of spiral development).

Thanks
Joe, NY

Anne & Lynn Wheeler

unread,
Jul 2, 2003, 12:46:17 AM7/2/03
to
eug...@cse.ucsc.edu (Eugene Miya) writes:
> I think a better analysis of the types of complexity would be better.
> I've never been a fan of line of code as a metric. There are instances,
> Szilard and the bomb, and the SR-71, where knowing basic materials like
> stuff while seemingly trivial made critical differences in the final
> product (boron used in making graphite and distilled not tap water used
> in finishing and cleaning Ti). And I'd guess there were similar Cray
> stories.

K.I.S.S.

there are also all the boyd stories about the f15, f16, etc ... trying
to get it right .... and then there is the air force air-to-air
missile when some really bright people miss a very simple point
... what is the hottest part of a jet? misc.
http://www.garlic.com/~lynn/subboyd.html#boyd

specific:
http://www.garlic.com/~lynn/99.html#120 atomic History
http://www.garlic.com/~lynn/2003j.html#13 A Dark Day

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
Internet trivia 20th anv http://www.garlic.com/~lynn/rfcietff.htm

Robert Myers

unread,
Jul 2, 2003, 3:59:14 AM7/2/03
to
On Tue, 01 Jul 2003 11:12:28 -0700, J Ahlstrom <jahl...@cisco.com>
wrote:

>
>
>Robert Myers wrote:
>>
>> FWIW:
>>
>> http://www.jdmag.wpafb.af.mil/bogus%20parts.pdf
>>
>> "a single domestic passenger airplane alone can contain as many as 6
>> million parts"
>
>5,900,000 of which are rivets ?
>

Estimating the surface area of a Boeing 747 to be about 30,000 square
feet, that would allow for about 200 rivets per square foot of surface
area. Even with one foot square surface panels, rivets one inch on
center would only give 25 rivets per square foot. That's the best my
envelope can do at this time of night.
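The same envelope, in executable form (the numbers are the thread's own
estimates, not measured figures):

```python
rivets = 5_900_000   # the tongue-in-cheek figure from upthread
area_sqft = 30_000   # rough 747 surface-area estimate from this post

per_sqft = rivets / area_sqft
print(per_sqft)      # about 197 rivets per square foot of surface
```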

RM

Stan Barr

unread,
Jul 2, 2003, 5:01:44 AM7/2/03
to
On Tue, 01 Jul 2003 18:04:09 -0000, Pete Fenelon <pe...@fenelon.com> wrote:
>
>Isn't there an Airbus-based replacement for the Guppy on the way?

I believe so, but I've not seen it yet.

Robert Myers

unread,
Jul 2, 2003, 5:07:33 AM7/2/03
to
On 1 Jul 2003 14:34:00 -0800, eug...@cse.ucsc.edu (Eugene Miya) wrote:

>>
>>FWIW:
>>
>>http://www.jdmag.wpafb.af.mil/bogus%20parts.pdf
>>
>>"a single domestic passenger airplane alone can contain as many as 6
>>million parts"
>
>What is it worth?
>Are you attempting to equate a line of code to a plane part?
>
>>http://www.majorprojects.org/pubdoc/677.pdf
>>
>>" Aircraft carrier project--a naval project with 30 million parts (a
>>submarine has only 8 million parts)."
>
>Are you attempting to equate a line of code to ship parts?
>

I have written or supervised the writing of complicated software to
simulate or to analyze complicated aerospace hardware, so the
comparison seems natural to me. I introduced it because I have been
truly impressed by the ability of large aerospace organizations (full
disclosure: my customers, not my employers) to cope with complexity.
If the comparison seems inappropriate or forced to you, feel free to
find another, but from what little I know of your background, I am
surprised that the comparison would not seem natural to you.

>>Why would software be any harder?
>
>I can think of a number of reasons, likely some enumerated by other
>follow ups. One would get into the vague issues of natural language.
>You can start with Brooks.
>

Maybe it's just that it's the end of a long day. Maybe it is that I
got so many pugilistic responses to what seemed like a fairly benign
comparison (truth be told, I think I touched a nerve). Whatever the
reason, Brooks' article strikes me as a lame laundry list of
techniques with no real unifying insight as to the problem he claims
to be addressing. I'll try reading it again tomorrow.

I have said in another thread recently, and I will say it again, that,
when it comes to understanding of algorithms, we are like children. I
am therefore naturally suspicious of articles that attempt to draw
general conclusions about methods of describing algorithms via
software, since I don't know how you can draw conclusions about
methods of describing what you don't fully understand to begin with.

>>A paradigm already exists for scaling programs to arbitrarily large
>>sizes. It's called a network.
>
>Huh? What kind of network?

I responded to the same question posed by Nick Maclaren in some detail
in this thread.

What I described is a message-passing paradigm for stitching together
software that is common in clusters but I don't think is used for
routine programming. I also gave a reasonably clear description of
why I think the model is, to all appearances, indefinitely
extensible.

As I said in responding to Nick, I think Jan C. Vorbrüggen got it
exactly right when he said that what is different about software is
that it is written in a strongly-coupled way. A message-passing
paradigm is one way of strictly limiting the coupling and allowing for
complete monitoring of the interactions between separate program
modules.

Notice, please, that I am not proposing a message-passing paradigm for
all programming. I am proposing it as a way of coping with extremely
complex systems.
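A minimal sketch of that message-passing discipline (Python rather than a
real cluster library; the module names are invented): the two "programs"
share no state, and because every interaction crosses one narrow channel,
it can be completely monitored.

```python
import queue

# Every interaction is a message that can be logged, filtered, or replayed.
log = []

def send(q, msg):
    log.append(msg)   # complete monitoring of the interface
    q.put(msg)

def doubler(inbox, outbox):
    # A "program" that knows nothing about its peers except messages.
    x = inbox.get()
    send(outbox, x * 2)

inbox, outbox = queue.Queue(), queue.Queue()
send(inbox, 21)
doubler(inbox, outbox)
print(outbox.get())   # 42
print(log)            # [21, 42] -- the whole interaction, visible
```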

>Did Forrest cover the address space greater
>than 32-bits thread in comp.arch a while back?

The fact that the thread began with "Now that The Forrest Curve has
become obviously true..." was more than a little off-putting to me,
since I take the Forrest Curve about as seriously as I took the
article entitled "IT Doesn't Matter" in the Harvard Business Review.
Both discussions might as well have started with "Let's suppose that
no one will ever think of entirely new ways of using computers."

In any case, the thread has no particular relevance to the argument I
made in responding to Nick, because the 32-bit thread dealt with
programs on a single machine, and I take the very broad view that if
two programs can talk to each other, they might as well be a single
program.

RM

Ken Moore

unread,
Jul 1, 2003, 7:55:37 PM7/1/03
to
In article <Xns93AB9CF857F46...@158.234.29.254>, Tom
Gardner <gardne...@TRAPlogica.com> writes
>Practice: HMS Sheffield was sunk because its long-range radar
>had been turned off in order to allow satellite communications;
>thus missing the Exocet coming down its throat.

That's why it was hit. The hit killed people and made a large hole, but
it didn't sink Sheffield immediately. It would probably have survived
if the cable insulation had not been flammable, or if the bulkhead
transitions had had some sort of flame trap.

--
Ken Moore
k...@mooremusic.org.uk
Web site: http://www.mooremusic.org.uk/
I reject emails > 300k automatically: warn me beforehand if you want to send one

Jan C. Vorbrüggen

unread,
Jul 2, 2003, 7:09:49 AM7/2/03
to
> Where you do have oodles of different part types, physical designs benefit
> from the continuous nature of physical properties (each one isn't an
> entirely new problem)

I agree with the premise - benefit from the continuous nature - but the
reason this helps, IMO, is that such systems aren't "brittle" in their
behaviour, meaning that a small change in the input usually means a small
change in the output. While there are examples where this isn't true (e.g.,
crossing a phase transition), brittleness is the usual behaviour of software:
even of defensively coded software, not to speak of the usual stuff.
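The contrast is easy to demonstrate (a toy sketch; both functions are
invented for illustration): a continuous physical response degrades
gracefully for a slightly off-nominal input, while a discrete lookup
flips from working to failing on a one-character change.

```python
# Physical analogue: output varies continuously with input,
# so a slightly out-of-spec load gives a slightly different answer.
def beam_deflection(load):
    return load / 100.0

# Software analogue: a one-character change in the input flips
# the behaviour entirely -- the classic brittleness.
def route(path):
    table = {"/home": "home-page", "/help": "help-page"}
    return table.get(path, "404")

print(beam_deflection(100), beam_deflection(101))  # 1.0 vs 1.01
print(route("/home"), route("/hom"))               # home-page vs 404
```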

> and from the fact that you usually can't put the wrong components together
> (think of bolt sizes and connector layouts as strong typing).

...which, however, still needs design to work - asymmetric connectors, for
instance, and you'd better not mix metric and non-metric bolts and tools,
because there are some common sizes that _almost_ fit.

Jan

Jan C. Vorbrüggen

unread,
Jul 2, 2003, 7:13:44 AM7/2/03
to
> > "a single domestic passenger airplane alone can contain as many as 6
> > million parts"
> 5,900,000 of which are rivets ?

No, those are at most a few hundred thousand. If I remember the number from
the film showing how an A320 is being built correctly, they said 145,000.

But even counting parts isn't easy. Is every stringer a part?

Jan

Jan C. Vorbrüggen

unread,
Jul 2, 2003, 7:21:02 AM7/2/03
to
> There are instances, Szilard and the bomb, and the SR-71, where knowing
> basic materials like stuff while seemingly trivial made critical
> differences in the final product (boron used in making graphite and
> distilled not tap water used in finishing and cleaning Ti). And I'd
> guess there were similar Cray stories.

Asimov also wrote a nice little story, based on a society where knowledge
and expertise is extremely fractioned and specialised (so that nobody has
the holistic view, and the different "experts" do not allow themselves to
critique their colleagues from a different field) and the biological
properties of Beryllium.

Jan

Jan C. Vorbrüggen

unread,
Jul 2, 2003, 7:26:07 AM7/2/03
to
> Second, humans have a much more experience managing the construction
> of large mechanical systems than with constructing large information
> systems. Management, as a science, is based, almost entirely, on vast
> amounts of empiracle data: We just don't have enough of it, yet, to
> know what we are doing as Software Engineers. Other disciplines of
> engineering, on the other hand, have much larger and older bodies of
> knowledge to draw upon.

While the relative "more" is surely true, I do believe we have "enough" of
it: Once in a while a serious software project does come through on schedule
and on (or even under) budget, a recent example (as far as I can tell from
the news) being the system running the London Congestion Charge - in a
country that has probably the most horrible record of large public software
projects being scrapped after wasting hundreds of millions. However, there
is no accepted "state of the art" to software project management that is
routinely applied, and that indeed the project's customer demands to be used.

Jan

Nick Maclaren

unread,
Jul 2, 2003, 7:40:58 AM7/2/03
to

In article <BmeJLUA5ceA$Ew...@mooremusic.org.uk>, Ken Moore <k...@i12.com> writes:
|> In article <Xns93AB9CF857F46...@158.234.29.254>, Tom
|> Gardner <gardne...@TRAPlogica.com> writes
|> >Practice: HMS Sheffield was sunk because its long-range radar
|> >had been turned off in order to allow satellite communications;
|> >thus missing the Exocet coming down its throat.
|>
|> That's why it was hit. The hit killed people and made a large hole, but
|> it didn't sink Sheffield immediately. It would probably have survived
|> if the cable insulation had not been flammable, or if the bulkhead
|> transitions had had some sort of flame trap.

While it is off-topic, the analogy is relevant.

When that occurred, I remembered the problem arising in some of the
first high-rise blocks that were built with communications ducts.
Lift shafts were protected against fire (by law), but people had
forgotten the ducts. That was in the 1960s or 1970s.

Cycling home from work, I designed a cheap way of building a flame
trap that would have stopped that problem. Just a hinged metal
flap (alternating in direction) with a bag of silica gel up against
the side it moves towards. Yes, it would get in the way of
maintenance but not catastrophically.

Now to the relevance. I have begged more vendors and developers
than I care to think for the use of similar techniques in kernels,
device interfaces and so on. Just a very simple trap so that,
when one component melts down, it doesn't take the rest of the
system with it. No attempt to recover, no attempt to be clever,
but an almost bulletproof flame trap to stop disaster spreading.
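The flame-trap idea translates directly into software practice: put a hard process boundary between the caller and any component you can't trust, so that a meltdown on one side cannot spread to the other. A minimal Python sketch, assuming nothing about real kernels or drivers (the untrusted "component" here is just an arbitrary snippet run in a child interpreter):

```python
import subprocess
import sys

def call_with_flame_trap(code, timeout=5):
    """Run an untrusted component behind a process boundary.

    Whatever the component does - crash, hang, scribble over its
    own memory - the caller only ever sees an exit status and
    captured output. No attempt to recover, no attempt to be
    clever: just containment.
    """
    try:
        proc = subprocess.run(
            [sys.executable, "-c", code],
            capture_output=True, text=True, timeout=timeout,
        )
    except subprocess.TimeoutExpired:
        return None  # component hung; the hang is contained
    if proc.returncode != 0:
        return None  # component melted down; the caller survives
    return proc.stdout.strip()
```

A well-behaved snippet returns its output; one that dies abruptly just yields an error indication to the caller, which keeps running.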

Almost every time I do, I can see such a trap come down over their
eyes, to block out radical ideas from entering their brain :-(

I then get told that the solution is not to write broken devices
or drivers, never to install devices or drivers you can't trust,
that there is no way a CPU or kernel can protect itself from a
broken (by design, not failed) device or driver, and so on. I
can't get the message across that good engineering is designing
for failure modes, not how well things work when everything goes
according to plan.

Nor can I explain to most of those people that very rare failures
should be protected against, if they are likely to cause disaster.
In the HPC context, a catastrophic failure is not a system crash,
so much as a too-frequent system crash that you can't locate the
cause of or reduce the probability of. But that is a detail.

Anyone recognise my dislike of IA-64 and Infiniband in the above?
To name but two things ....


Regards,
Nick Maclaren.

Nick Maclaren

Jul 2, 2003, 8:34:45 AM

God help me, he could be describing the present.


Regards,
Nick Maclaren.

Tom Gardner

Jul 2, 2003, 8:54:54 AM
Ken Moore <k...@i12.com> wrote in news:BmeJLUA5ceA$Ew...@mooremusic.org.uk:

> In article <Xns93AB9CF857F46...@158.234.29.254>, Tom
> Gardner <gardne...@TRAPlogica.com> writes
>>Practice: HMS Sheffield was sunk because its long-range radar
>>had been turned off in order to allow satellite communications;
>>thus missing the Exocet coming down its throat.
>
> That's why it was hit. The hit killed people and made a large hole, but
> it didn't sink Sheffield immediately. It would probably have survived
> if the cable insulation had not been flammable, or if the bulkhead
> transitions had had some sort of flame trap.

True enough.

My father used to tell a story about a nuclear reactor,
a power failure, a technician crawling through a duct
with a candle (!), insulation set on fire, and all
three "independent" scram circuits being routed through
that duct.

Next somebody will mention optical fibres and backhoes.

Glen Herrmannsfeldt

Jul 2, 2003, 9:40:10 AM

"Tim Shoppa" <sho...@trailing-edge.com> wrote in message
news:bec993c8.0307...@posting.google.com...

> Largish complex systems often rely on tools such as table look-up to
> reduce the amount of C code and maximize the ability for at least the
> tables to be machine-checked for consistency. This is really shifting
> the solution domain from "C code" (or whatever general-purpose
> computing language they use that week) to writing down the tables.
>
> > I do not believe factors of 1,000 and more, as you are claiming.
>
> I've seen some extreme cases of a factor of 1000, but in those cases
> they were *only* counting lines of C code and ignoring library calls
> and the complexity of the tables, even ignoring the tools used to
> build the tables. i.e. they were cheating.

I would say that you should include the input data to such tools, but not
the tools themselves. You wouldn't, for example, include the size of the
compiler used to compile a project in the complexity of the project, though
the compiler must exist.
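Tim's point about shifting complexity out of general-purpose code and into machine-checkable tables can be shown in a few lines; a hedged Python sketch, where the toy state machine and its tables are invented purely for illustration:

```python
# A toy protocol state machine expressed as a table rather than
# nested if/else code. Unlike the equivalent branchy code, the
# table can be checked mechanically for consistency before it runs.
STATES = {"idle", "open", "closed"}
EVENTS = {"connect", "data", "close"}

# (state, event) -> next state; missing entries mean "event ignored".
TRANSITIONS = {
    ("idle", "connect"): "open",
    ("open", "data"): "open",
    ("open", "close"): "closed",
}

def check_table(transitions):
    """Machine-check the table: every state and event named in it
    must come from the declared sets, and every target must be a
    known state - the consistency check the code form lacks."""
    for (state, event), target in transitions.items():
        assert state in STATES and event in EVENTS and target in STATES

def step(state, event):
    # One line of driver code replacing a whole switch statement.
    return TRANSITIONS.get((state, event), state)
```

Counting only `step` as "the code" while ignoring `TRANSITIONS` is exactly the LOC cheating described above.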

>MLOC's (Millions of Lines of Code) are one way to measure system
>complexity but certainly not the best one!

Remember MIPS, Meaningless Indicator of Processor Speed. Someone will need
a new definition for MLOC's.

-- glen


Nick Maclaren

Jul 2, 2003, 9:59:21 AM

In article <_LxMa.1690$C36....@rwcrnsc51.ops.asp.att.net>,

"Glen Herrmannsfeldt" <g...@ugcs.caltech.edu> writes:
|> "Tim Shoppa" <sho...@trailing-edge.com> wrote in message
|> news:bec993c8.0307...@posting.google.com...
|>
|> > Largish complex systems often rely on tools such as table look-up to
|> > reduce the amount of C code and maximize the ability for at least the
|> > tables to be machine-checked for consistency. This is really shifting
|> > the solution domain from "C code" (or whatever general-purpose
|> > computing language they use that week) to writing down the tables.
|> >
|> > > I do not believe factors of 1,000 and more, as you are claiming.
|> >
|> > I've seen some extreme cases of a factor of 1000, but in those cases
|> > they were *only* counting lines of C code and ignoring library calls
|> > and the complexity of the tables, even ignoring the tools used to
|> > build the tables. i.e. they were cheating.
|>
|> I would say that you should include the input data to such tools, but not
|> the tools themselves. You wouldn't, for example, include the size of the
|> compiler used to compile a project in the complexity of the project, though
|> the compiler must exist.

It depends on context. If you are assessing the potential reliability
of the resulting program in context, then you do need to be concerned
about that. A vast, complex compiler is likely to be buggier than a
small, simple one.

And, of course, you must include any tools that are specific to the
project, or even customised for it. Like Tim, I have seen claims
based on moving the real code out to a library and denying that the
library was part of the program (even though it had to be developed
specially for that program!)

The other form of cheating, which it appears that Peter Lund was
using, is to use the OUTPUT from tools rather than their INPUT as
the measure of complexity. It is trivial to get factors of thousands
or even millions if you do that. Yes, I really do mean that there
are program and data generators that will turn (say) 1 KB into 1 GB,
and not be totally stupid!

|> > MLOC's (Millions of Lines of Code) are one way to measure system
|> > complexity but certainly not the best one!
|>
|> Remember MIPS, Meaningless Indicator of Processor Speed. Someone will need
|> a new definition for MLOC's.

Yes. As with MIPS, MLOCs is reasonably meaningful if not abused,
but useless when abused. Meaningless Level of Operational
Complexity?

In the context of the consequence of human error, all reasonable
measures consider almost all of the input provided by the human,
in the form that they were provided. Whether or not that should
include libraries and compilers is context dependent, but it is
very rarely meaningful to measure the output from tools - UNLESS
it is then going to be inspected or modified by a human.


Regards,
Nick Maclaren.

Per Ekman

Jul 2, 2003, 10:12:53 AM
Ken Moore <k...@i12.com> writes:

> In article <Xns93AB9CF857F46...@158.234.29.254>, Tom
> Gardner <gardne...@TRAPlogica.com> writes
> >Practice: HMS Sheffield was sunk because its long-range radar
> >had been turned off in order to allow satellite communications;
> >thus missing the Exocet coming down its throat.
>
> That's why it was hit. The hit killed people and made a large hole, but
> it didn't sink Sheffield immediately. It would probably have survived
> if the cable insulation had not been flammable, or if the bulkhead
> transitions had had some sort of flame trap.

ISTR reading that the Exocet didn't detonate but that the engine
exhaust set the aluminum hull on fire. Don't have any reference
though.

*p


Morten Reistad

Jul 2, 2003, 8:22:44 AM
In article <3F02299D...@stny.rr.com>, dada <sm...@stny.rr.com> wrote:
><snip>
>
>I believe that the basic idea is to manage complexity, and isolate it as much as
>possible. The main advantage that we have with software is that if it is built
>correctly, the pieces can be tested much earlier than the finished product. Most
>of the complexity that I see everyday is in interaction between systems (
>systems, components, programs, subprograms, etc.), and these can be mitigated by
>identifying as early as possible these critical interactions. Thus, the earlier
>in the implementation that integration can be applied, the better it can help expose
>errors/incompatibilities in the design. Each piece can then be thought of as that
>black box that everyone wants to achieve.

There seems to be a pretty good consensus in this group about what to do
and not to do in terms of software design; and there have been numerous
suggestions. The suggestions all attack complexity and try to establish
quality control points at different points along the route.

The downside with these methods seems to be a slight time delay, but this
should be recoverable through better scalability.

>The problem I have seen is that the PHBs do not want the time spent in putting
>together these early systems (that takes time, which we all know is money), and
>rather want tangible results (we have successfully coded x% of the project...
>earned value and all that). They want the full engineering performed up front,
>instead of abstracting the system, doing the early integration, finding the
>problems early (when it is less costly to fix, by the way) and then going on to
>the next level (yes I am a "fan" of spiral development).

If we set labels on it I tend to call it "iterative"; and "towards-the-middle".
A little top-down to establish structure; a little bottom-up to establish
control over the worst pitfalls in terms of external interfaces, and let
the middle sort itself out in an iterative fashion.

Now, to something completely different

I have been looking at Open Source "products"; we could use these
as case studies of how to and how not to. They are almost completely out in the
open, and can be scrutinized.

The metrics are impressive. I ran some scripts on my "build box", where I
build FreeBSD utilities for myself and friends. I ran some scripts over
the source files, and did a rudimentary line count. This is just a raw
line count, for metric counts we'll probably have to reduce these by up to
a third.

The FreeBSD kernel, device drivers etc. are 2.4 M LOC.
Add the utilities (everything in /usr/src) and we are at 7.4 M LOC.
Add my "minimal set" of tools (basic Gnu, emacs, gcc, lynx, trn/leafnode,
mh, etc.). These contain around 5.1 M LOC, exclusive of X11; and
the total is 12.5 M LOC. This is the source for my expectation of a civilized
unix server, sans user graphics.

I haven't gotten the script to run over the X11 tree, but this is
probably well documented elsewhere.

If I add together all the code I find in the source trees that I
have built (and this includes X11, mozilla, java, openoffice etc) I find
close to 80 million lines of code. Some of these are duplicates, but
not very much. FreeBSD is very well organized in this respect. I would
guess I have build somewhat more than half the projects in the ports
tree.

The vast bulk of these projects have a software quality that ranges
well beyond what is normal. They are almost all run by software artisans.
And they seem to be pulling off a collective complexity that runs well
beyond the normal pain point in the software industry.

Buggy projects are normally clearly tagged as such; ref wine (alpha state).

I think the software industry should take due note and study what has been
accomplished here.

-- mrr


Tom Gardner

Jul 2, 2003, 10:48:57 AM
nm...@cus.cam.ac.uk (Nick Maclaren) wrote in
news:bduadp$kp2$1...@pegasus.csx.cam.ac.uk:

> It is trivial to get factors of thousands
> or even millions if you do that. Yes, I really do mean that there
> are program and data generators that will turn (say) 1 KB into 1 GB,
> and not be totally stupid!
>

I recently had some salesmen who thought it impressive
that their code generator had spat out 550,000 LOC of C++.
They didn't like it when I was unimpressed because, by
that "reasoning", my codegen could have spat
out 1,000,000 LOC.

No, they didn't do anything to justify their desired
impression that 550KLOC was appropriate or beneficial :)

jmfb...@aol.com

Jul 2, 2003, 8:49:22 AM
In article <3F019E50...@cavtel.net>,
jonah thomas <j2th...@cavtel.net> wrote:
>jmfb...@aol.com wrote:
>> jonah thomas <j2th...@cavtel.net> wrote:
>
>>>When it works, one of the things that happens is that the interfaces
>>>seem intuitive. If they don't, then you'll drown in numbers of simple
>>>interactions. When it's simple components connected simply, you have
>>>the possibility still to just get so many of them that you can't keep
>>>track.
>
>> It's not intuitive! The interfaces are documented. Anything that
>> doesn't follow the call gets an immediate error return.
>
>Yes. But if it isn't intuitive, you'll likely make lots of errors in
>spite of the documentation. When it works, usually it turns out to be
>intuitive and also well-documented.

Nothing about computers is intuitive.

>
>>>>The same thing applies to hardware. It is easy to analyse and debug
>>>>race and other inconsistency conditions involving two entities. As
>>>>networks grow in complexity, it becomes harder and harder. There
>>>>comes a point (approaching, in some networks) where most failure
>>>>time (down time or debugging) is not associated with a problem in
>>>>ANY component, but is associated with the network structure itself.
>>>>TANSTAFFL.
>
>>>By analogy I'd say that you might need hierarchical networks where you
>>>never get too many units interacting at once. Then you at least have
>>>the possibility to track down the problems. But performance degrades
>>>across the hierarchy even in the best case. Still, what choice do you
>>>have? If the alternative is sacrificing a system administrator to the
>>>network gods every full moon....
>
>> I don't understand. We used to call it black box development.
>> Rather similar to a box in a flow chart. If you are outside the
>> box, the only thing you need to know is what goes in or what comes
>> out and NOT what happens inside.
>
>The first theory was that if every individual black box works
>'correctly' then a network of them will also work correctly. But you
>can get network problems that no individual component will show. And
>the more components in the network the harder it is to find out how the
>problems start.
>
>To the extent that you can solve software problems by using simple
>components connected simply in hierarchies (and this doesn't solve
>everything) maybe you could apply the same approach to networks. Simple
>networks are easier to fix.

No, they're not. I watched bit gods work on "simple" networks.
Anything having to do with comm (interaction with human fingers)
is complex, frustrating, and fraught with CATCH-22s and deadly
embraces. Google (I hope that works) for a post that talked
about Bob Clements' IRMA bit (named for the gal who could
consistently type in such a way to reproduce the problem) in
TOPS-10.

>
>So break a big network into small networks, and each of them can be made
>to work. Then you have a network of networks, and there are a limited
>number of networks in the big network, and maybe you can fix that. And
>then the network of networks of networks.

You are approaching networks in an incorrect way. You are trying
to think of each node as a black box and you can't IF you're debugging
a network problem.

>
>At each level you can hope to treat the components of that level as
>black boxes. But when you make the hierarchy you've restrictrd the
>communication to established channels. This may degrade the performance
>in the best case.

But the "components" of a network are not the individual computers. Components of
a network are [fingers fumbling for words it doesn't have] the functions;
I wanted to type layers but that's not quite correct.

>
>> The complexity of programming is that there exists nested black
>> boxes. If you keep going lower and lower and lower, you can
>> find yourself back in the electric generator at the power station
>> and then there are all those black boxes in the physics and wiring
>> areas.
>
>> The fun is arranging things so that a CATCH-22 becomes a nested
>> black box within itself. That's a lot of fun. :-)
>
>"Doctor, it hurts when I do this."
>"Then don't do that."

No. A CATCH-22 is when it also hurts when you don't do that.

/BAH

Subtract a hundred and four for e-mail.

jmfb...@aol.com

Jul 2, 2003, 9:00:40 AM
In article <bec993c8.0307...@posting.google.com>,
sho...@trailing-edge.com (Tim Shoppa) wrote:
>nm...@cus.cam.ac.uk (Nick Maclaren) wrote in message
news:<bdre26$56m$1...@pegasus.csx.cam.ac.uk>...
>> In article <Pine.LNX.4.55.03...@ask.diku.dk>,
>> "Peter \"Firefly\" Lund" <fir...@diku.dk> writes:

>> |> On Tue, 1 Jul 2003, Nick Maclaren wrote:
>> |>
>> |> > |> Why would software be any harder?
>> |> >
>> |> > Because it is less well engineered. Also, do you know how many
>> |>
>> |> No. Because you don't have high-level screws. 10 million lines of mostly
>> |> flat-line code broken into pieces that interact very little with each
>> |> other would be a lot easier to understand than 10 million typical lines.
>>
>> I suggest learning a little more about the design of such systems.
>> Your point is correct, but your viewpoint of their problems is
>> overly simplistic.
>>
>> |> It would also be even easier if we did apply the engineering we know and
>> |> cut it down to say 5 or 10 thousand lines (probably less).
>>
>> I know of no such example. If you have ever tried to write a
>> serious product and keep it down to that size, you will have found
>> out that only some of the bloat is due to engineering incompetence.
>> Rather more is due to interfaces being designed non-mathematically,
>> but there is a large residue of special cases, messy situations and
>> so on that can't be handled compactly.
>>
>> Factors of 10 reduction are often possible; factors of 100, perhaps.

>
>Largish complex systems often rely on tools such as table look-up to
>reduce the amount of C code and maximize the ability for at least the
>tables to be machine-checked for consistency. This is really shifting
>the solution domain from "C code" (or whatever general-purpose
>computing language they use that week) to writing down the tables.
>
>> I do not believe factors of 1,000 and more, as you are claiming.
>
>I've seen some extreme cases of a factor of 1000, but in those cases
>they were *only* counting lines of C code and ignoring library calls
>and the complexity of the tables, even ignoring the tools used to
>build the tables. i.e. they were cheating.
>
>MLOC's (Millions of Lines of Code) are one way to measure system
>complexity but certainly not the best one!

I usually thought in total CPU cycles to get my bit to
your bit; that included the ones outside the room and the
cycles needed to "create" that bit. Note that this doesn't
include time to go get coffee and other stuff.

Jan C. Vorbrüggen

Jul 2, 2003, 11:28:36 AM
> Nothing about computers is intuitive.

I don't agree with that. A lot of things are much more intuitive to me
than to others (as deduced from observation and experience). I don't think
I'm the only person wired that way.

Jan

Jan C. Vorbrüggen

Jul 2, 2003, 11:30:40 AM
> My father used to tell a story about a nuclear reactor, a power failure,
> a technician crawling through a duct with a candle (!), insulation set on
> fire, and all three "independent" scram circuits being routed through that
> duct.

Rings a bell. Although I think there was a solid reason for the candle: They
were looking for a draught indicating where the wire routing was headed, IIRC.
That doesn't excuse this nor the other "bugs" in that incident, in particular
the last one you mention.

Jan

Jan C. Vorbrüggen

Jul 2, 2003, 11:38:35 AM
> |> Asimov also wrote a nice little story, based on a society where knowledge
> |> and expertise is extremely fractioned and specialised (so that nobody has
> |> the holistic view, and the different "experts" do not allow themselves to
> |> critique their colleagues from a different field) and the biological
> |> properties of Beryllium.
> God help me, he could be describing the present.

Oh, I'm sure it was pure satire, with a lot of jabs at science culture
(posturing, tirading, and so on), but it was quite overboard. I've been
there, and it's not _quite_ as bad. Although that passage in "The Double
Helix" when Chargaff visits Cambridge would fit right in.

Jan

Stephen Sprunk

Jul 2, 2003, 12:22:16 PM
"Morten Reistad" <m...@reistad.priv.no> wrote in message
news:e5ordb.8u3.ln@acer...

> >|> " Aircraft carrier project--a naval project with 30 million parts (a
> >|> submarine has only 8 million parts)."
> >|>
> >|> Why would software be any harder?
>
> Because there are more interactions.

That depends on the compartmentalization.

And it also depends on the interface between compartments being implemented
as specified (or vice versa).

> A bolt holding a radar mast on an aircraft carrier does just that. It
> can be inspected, maintained and changed by a person knowing about
> bolts and naval warship construction. [S]He does not have to know much
> about radar systems.
>
> A failure of this bolt will only lead to failure of one subsystem
> (beside someone getting hit by a falling mast).

Unless, of course, that bolt only partially fails, allowing the radar mast
to vibrate. These vibrations might travel to other nearby parts and cause
more failures...

> They missed the ball big-time. The big potential for re-use is
> not of subroutines. It is of binary, working programs with defined
> interfaces; often rigorously standardized. Cf. e-mail.

This is exemplified in the proliferation of *ix command line utilities such
as tr, sed, cp, cat, etc. which each do one thing and do it perfectly every
time -- Microsoft Word does the same exact functions, but it does it all in
one place requiring not only code to decide which function to perform but
also code to make sure functions don't interfere with each other.

> >Have you ever tried to report a problem to a large vendor that is
> >due SOLELY to the underlying computational assumptions of three or
> >more separately developed components being subtly incompatible?
> >I have. Guess how far I got.

I worked at such a vendor; my experience was that any _reproducible_ bug was
patched within a few days. Unfortunately, most bugs reported were
never reproduced in a lab and quickly closed due to lack of information.

> I have had the privilege of setting up operations for such
> software. Lesson #1: You can write any SLA you want as long as
> there is a stringent validation for entry into the SLA. I never
> saw more than around 1/4th of projects ever make it into SLA
> terms; i.e. past external validation.

Or you learn to price the business such that you still make an acceptable
profit even if you fail every SLA...

S

--
Stephen Sprunk "God does not play dice." --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSS dice at every possible opportunity." --Stephen Hawking

Nick Maclaren

Jul 2, 2003, 12:10:25 PM

I wouldn't bet on it. It is now VERY hard for someone from area A
(or even a generalist) to publish a result in area B in a 'technical'
journal, let alone get taken seriously. And computing is definitely
one of those areas B!

I have personal experience of this, and know a fair number of other
people who have. Areas of computing like complexity theory are
particularly bad examples of B, but there are lots of equally bad
ones in the older sciences, too.

It is fairly common for something to be rejected because it conflicts
with Authority, when a cursory inspection finds that the standard
Authoritative reference in area B is obviously false to anyone who
has a clue. Where "has a clue" is relative to area A.

In order to get such things published, you FIRST have to tackle the
Authorised Beliefs head on. I can witness that it isn't enough
just to demonstrate the flaw and provide counter-examples :-(

I agree that it is not yet universal ....


Regards,
Nick Maclaren.

Mattias Engdegård

Jul 2, 2003, 12:13:23 PM
Per Ekman <p...@pdc.kth.se> writes:

>ISTR reading that the Exocet didn't detonate but that the engine
>exhaust set the aluminum hull on fire. Don't have any reference
>though.

I don't think you can find a reference for that, because her hull and
superstructure were made of steel. The missile's remaining fuel was
indeed the chief cause of the fire.

Joe Seigh

Jul 2, 2003, 12:36:00 PM

Stephen Sprunk wrote:
>
> > >Have you ever tried to report a problem to a large vendor that is
> > >due SOLELY to the underlying computational assumptions of three or
> > >more separately developed components being subtly incompatible?
> > >I have. Guess how far I got.
>
> I worked at such a vendor; my experience was that any _reproducible_ bug was
> patched within a few days. Unfortunately, most bugs reported were
> never reproduced in a lab and quickly closed due to lack of information.
>

"Not reproducible" is a major cop-out. I'm surprised that most vendors are
allowed to get away with this excuse, but apparently they do.

A lot of "not reproducible" bugs are thread-related, and this goes a long way
toward explaining the sorry state of threaded programming. You can have incompetent
programmers write crap threaded code and get away with it because most of
the bugs are "not reproducible".

Joe Seigh

Pete Fenelon

Jul 2, 2003, 12:43:22 PM
In alt.folklore.computers Nick Maclaren <nm...@cus.cam.ac.uk> wrote:
>
> I wouldn't bet on it. It is now VERY hard for someone from area A
> (or even a generalist) to publish a result in area B in a 'technical'
> journal, let alone get taken seriously. And computing is definitely
> one of those areas B!

Slightly too easy in some Area Bs: ;)

http://www.physics.nyu.edu/faculty/sokal/

pete
--
pe...@fenelon.com "there's no room for enigmas in built-up areas" HMHB

Nick Maclaren

Jul 2, 2003, 12:49:31 PM

In article <56f3f90f6652dc90...@free.teranews.com>,

"Stephen Sprunk" <ste...@sprunk.org> writes:
|>
|> > >Have you ever tried to report a problem to a large vendor that is
|> > >due SOLELY to the underlying computational assumptions of three or
|> > >more separately developed components being subtly incompatible?
|> > >I have. Guess how far I got.
|>
|> I worked at such a vendor; my experience was that any _reproducible_ bug was
|> patched within a few days. Unfortunately, most bugs reported were
|> never reproduced in a lab and quickly closed due to lack of information.

Those aren't the sort of things I am talking about - even with
the most massive resources and best will in the world, they
couldn't have been fixed within days. Oh, yes, a kludge for
my particular example could have been produced, but that is not
the same. And many WERE reproducible.

I do mean the UNDERLYING assumptions, and any fix would have
involved a significant redesign to parts of at least one and
often all three components. NONE of the components failed;
NONE was in breach of its specification; NONE of the individual
behaviours was at fault in isolation. But the aggregate didn't
work.

To resolve the issue would have needed a discussion between at
least three architects - not necessarily a long or hard one - to
decide to use a common set of assumptions. Typically who is
responsible for handling or diagnosing errors. In one case, it
could have been resolved in 5 minutes; in others it was much
nastier.

Two architects planning together is a personal interaction;
three doing so is a strategy meeting :-)


Regards,
Nick Maclaren.

Bernd Paysan

Jul 2, 2003, 1:20:37 PM
Eugene Miya wrote:
>>" Aircraft carrier project--a naval project with 30 million parts (a
>>submarine has only 8 million parts)."
>
> Are you attempting to equate a line of code to ship parts?

Especially when the parts are all identical. How many nuts and bolts, screws
and rivets of a certain kind are used? The complexity of such a project is
more the number of different parts and the way they are arranged. It's all
information theory: how much information does a single rivet bear if you
make your ship out of thousands of rectangular steel plates, each bent a bit
and all bound together with hundreds of rivets on each side?
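The information-theory framing can be made concrete: the information a single part "bears" is the Shannon entropy of the part-type distribution, and a ship made of nearly identical rivets carries very few bits per part. A sketch, where the inventory numbers are hypothetical:

```python
from math import log2

def bits_per_part(counts):
    """Shannon entropy H = -sum(p * log2(p)) over part types:
    the average information, in bits, conveyed by naming one
    part's type. Identical parts in bulk drive this toward zero."""
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

# Hypothetical inventory: almost everything is the same rivet.
ship = {"rivet": 990_000, "plate": 9_000, "mast": 1_000}
```

With this inventory the entropy comes out well under a tenth of a bit per part, which is the sense in which raw part counts overstate complexity.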

--
Bernd Paysan
"If you want it done right, you have to do it yourself"
http://www.jwdt.com/~paysan/

Ken Hagan

Jul 2, 2003, 1:49:56 PM
Joe Seigh wrote:
>
> The not reproducible is a major cop out. I'm suprised that most
> vendors are allowed to get away with this excuse but apparently they
> do.

I am not a lawyer, but if someone had me up in court and failed to
produce evidence, I'd be more than a bit miffed if I was convicted.
However, World War Two isn't reproducible and yet no-one doubts
that it happened, so I suppose you are right.

Perhaps the problem is that the only evidence available tends to be
a crash dump. If more programs were capable of recording their own
execution history then we might have fewer bugs.
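One cheap way for a program to record its own execution history is an in-memory ring buffer of recent events that is dumped only when something goes wrong; a minimal sketch (the event format is invented for illustration):

```python
from collections import deque

class FlightRecorder:
    """Keep only the last N events in memory: negligible overhead
    in normal operation, and exactly the evidence a crash dump
    lacks when an irreproducible failure finally happens."""
    def __init__(self, capacity=1000):
        self.events = deque(maxlen=capacity)

    def log(self, event):
        self.events.append(event)

    def dump(self):
        # Called from a crash handler: recent history, oldest first.
        return list(self.events)

recorder = FlightRecorder(capacity=3)
for i in range(5):
    recorder.log(f"step {i}")
# Only the last three events survive, ready for post-mortem.
```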

I have very little experience of software outside the PC mass-market.
Is this something that I'd find in other kinds of computer system?
To what extent can it be done by a third party without detailed knowledge of the
software package being monitored? (For example, could the OS vendor
help users by shipping a standard "baby sitting" program?)


Nick Maclaren

Jul 2, 2003, 2:05:22 PM

In article <bduo0b$rj9$1$8302...@news.demon.co.uk>,

"Ken Hagan" <K.H...@thermoteknix.co.uk> writes:
|> Joe Seigh wrote:
|> >
|> > The not reproducible is a major cop out. I'm suprised that most
|> > vendors are allowed to get away with this excuse but apparently they
|> > do.
|>
|> I am not a lawyer, but if someone had me up in court and failed to
|> produce evidence, I'd be more than a bit miffed if I was convicted.
|> However, World War Two isn't reproducible and yet no-one doubts
|> that it happened, so I suppose you are right.
|>
|> Perhaps the problem is that the only evidence available tends to be
|> a crash dump. If more programs were capable of recording their own
|> execution history then we might have fewer bugs.

Quite. This is one of my hobby horses. It used to be good practice.

|> I have very little experience of software outside the PC mass-market.
|> Is this something that I'd find in other kinds of computer system?

Rarely, but it used to be more common.

|> To what extent can it be done by a without detailed knowledge of the
|> software package being monitored? (For example, could the OS vendor
|> help users by shipping a standard "baby sitting" program?)

Effectively not. Black box approaches tend to produce excessive
output, be unhelpful, or both.

What the system COULD do is to provide some decent tools for such
diagnosis - such as being able to log all signals, with source,
destination and properties. HP-UX certainly used to be able to
do a lot of this, under the name of auditing. And most mainframes
could, of course.


Regards,
Nick Maclaren.

jmfb...@aol.com

Jul 2, 2003, 12:13:06 PM
In article <Xns93AC783292948...@158.234.29.254>,

I bet they fall for all the Viagra spams, too.

Robert Myers

Jul 2, 2003, 2:35:36 PM
On 2 Jul 2003 06:20:37 -0700, bernd....@gmx.de (Bernd Paysan)
wrote:

>It's all
>information theory: how much information does a single rivet bear if you
>make your ship out of thousands rectangular steel plates bend a bit and all
>bound together with hundreds rivets on each side?

On the other hand, if an entire structure is held together with a
fastener of a single type, then the most subtle flaw in analysis or
quality control failure for that type of fastener could lead to
disaster. If there are millions of anything in your design, you'd
better give a commensurate amount of time to thinking about the
anything.

Don't have the time to look up the reference, but there was a fairly
recent incident in which a well-respected structural engineer
miscalculated the wind loading on an already-completed skyscraper in
NYC. The fix: tear the building apart piece by piece from the inside
and weld the joints that had previously only been bolted.

An even more dramatic example: an MIT study shows that the WTC towers
might not have collapsed if the horizontal structural members had been
attached with two bolts instead of just one.

For those who are interested in learning what might have been done outside
the field of vision of their own computer monitor,

"information theory" "managing complexity"

is an interesting google search topic. People have done a great deal
of thinking about this problem in many contexts. It would be pure
hubris to imagine that the best thinking on this subject necessarily
comes out of software engineering.

RM

Eugene Miya

unread,
Jul 2, 2003, 5:17:36 PM7/2/03
to
In article <f19af6fc.03070...@posting.google.com>,
Bernd Paysan <bernd....@gmx.de> wrote:

Hey, we missed you at the ca-fest.

>Eugene Miya wrote:
>>>" Aircraft carrier project--a naval project with 30 million parts (a
>>>submarine has only 8 million parts)."
>>
>> Are you attempting to equate a line of code to ship parts?
>
>Especially when the parts are all identical. How many nuts and bolts, screws
>and rivets of a certain kind are used? The complexity of such a project is
>more the number of different parts and the way they are arranged. It's all
>information theory: how much information does a single rivet bear if you
>make your ship out of thousands of rectangular steel plates, bent a bit and
>all bound together with hundreds of rivets on each side?

I decided to leave part diversity out. That's the # of unique ICs in
the Cray-1 thread.

I realize that IT theory can be considered in this way, but I think
this detracts from the materials science and other empirical sciences
at this time. We lack the universal tape reader to decode that tape.
And I know about the argument keeping the information of the individual
bolt (the easiest being bar code technology and far more methods of ID).
I think the infrastructure is the more important place to view the
complex object. This is why the military-industrial complex to build
subs in Groton fights for every sub contract it can get.

Diversity complicates the discussion.....
The question is how much noting it adds to the value of the thread
(which is largely 1-D).


Glen Herrmannsfeldt

unread,
Jul 2, 2003, 4:21:21 PM7/2/03
to

"Nick Maclaren" <nm...@cus.cam.ac.uk> wrote in message
news:bduadp$kp2$1...@pegasus.csx.cam.ac.uk...

>
> In article <_LxMa.1690$C36....@rwcrnsc51.ops.asp.att.net>,
> "Glen Herrmannsfeldt" <g...@ugcs.caltech.edu> writes:

> |> I would say that you should include the input data to such tools, but not
> |> the tools themselves. You wouldn't, for example, include the size of the
> |> compiler used to compile a project in the complexity of the project, though
> |> the compiler must exist.
>
> It depends on context. If you are assessing the potential reliability
> of the resulting program in context, then you do need to be concerned
> about that. A vast, complex compiler is likely to be buggier than a
> small, simple one.

Hmm. Maybe it needs a different degree of complexity. The object code of a
hello world program is not increased in complexity by the size of the
compiler. Only a small portion of the compiler will be used, anyway.

Now, say I compile a C program containing a large table of high complexity
data. (The source of that data being left unsaid for now.) Still only a
small part of the compiler will be used, and not many more bugs are likely
to be found.

But yes, more complex programs tend to use more of the compiler, and uncover
bugs. Maybe, though, it is that the compiler complexity is distributed over
all programs that it compiles?

> And, of course, you must include any tools that are specific to the
> project, or even customised for it. Like Tim, I have seen claims
> based on moving the real code out to a library and denying that the
> library was part of the program (even though it had to be developed
> specially for that program!)

So if it is only useful for that project it counts. If it is useful for
other projects, is its complexity distributed over all the projects?

> The other form of cheating, which it appears that Peter Lund was
> using, is to use the OUTPUT from tools rather than their INPUT as
> the measure of complexity. It is trivial to get factors of thousands
> or even millions if you do that. Yes, I really do mean that there
> are program and data generators that will turn (say) 1 KB into 1 GB,
> and not be totally stupid!

I think, then, that you have to look at the actual complexity of the data.
A million digits of 0 have no complexity, while a million digits of pi could
be considered to have complexity. Though those million digits could be
generated by a ten-line program, so should they be considered to have ten
lines of complexity? As above, there is some dependence on the compiler.
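The ten-line-program point can be made literal. Gibbons' unbounded spigot
emits as many decimal digits of pi as you like from a fixed, tiny
description (a sketch for illustration; the choice of algorithm is mine,
not something from this thread):

```python
def pi_digits(n):
    """First n decimal digits of pi, via Gibbons' unbounded spigot."""
    q, r, t, k, m, x = 1, 0, 1, 1, 3, 3
    digits = []
    while len(digits) < n:
        if 4 * q + r - t < m * t:
            digits.append(m)            # next digit is settled; emit it
            q, r, m = 10 * q, 10 * (r - m * t), (10 * (3 * q + r)) // t - 10 * m
        else:                           # not settled yet: refine the state
            q, r, t, k, m, x = (q * k, (2 * q + r) * x, t * x, k + 1,
                                (q * (7 * k + 2) + r * x) // (t * x), x + 2)
    return digits
```

In Kolmogorov terms, the million digits have at most the complexity of
this program plus the number n, however long the output gets.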

-- glen


Eugene Miya

unread,
Jul 2, 2003, 5:44:45 PM7/2/03
to
In article <ucm4gvor5eg0qfh3e...@4ax.com>,
Robert Myers <rmy...@rustuck.com> wrote:
>On 1 Jul 2003 14:34:00 -0800, eug...@cse.ucsc.edu (Eugene Miya) wrote:
>>Are you attempting to equate a line of code to a plane part?
>>
>>>http://www.majorprojects.org/pubdoc/677.pdf

>>>
>>>" Aircraft carrier project--a naval project with 30 million parts (a
>>>submarine has only 8 million parts)."
>>
>>Are you attempting to equate a line of code to ship parts?
>
>I have written or supervised the writing of complicated software to
>simulate or to analyze complicated aerospace hardware, so the
>comparison seems natural to me. I introduced it because I have been
>truly impressed by the ability of large aerospace organizations (full
>disclosure: my customers, not my employers) to cope with complexity.
>If the comparison seems inappropriate or forced to you, feel free to
>find another, but from what little I know of your background, I am
>surprised that the comparison would not seem natural to you.

My first high school job was designing parts (fasteners and stiffeners)
for the B-1 bomber [that isn't as impressive as it might sound, as there
were over 50,000 people doing that].

I would hold with the academic (Brooks) line that the degree of
complexity exceeds anything so far known in the physical realm.

If you plan to induct based on numbers like this, you need to show me,
as a start, that 1 line of code is the necessary and sufficient comparison
to a single part of a sub/ship/plane. That's just the start,
we'll get to the next step after that.

>>>Why would software be any harder?
>>

>>I can think of a number of reasons, likely some enumerated by other
>>follow ups. One would get into the vague issues of natural language.
>>You can start with Brooks.
>
>Maybe it's just that it's the end of a long day. Maybe it is that I
>got so many pugilistic responses to what seemed like a fairly benign
>comparison (truth be told, I think I touched a nerve). Whatever the
>reason, Brooks' article strikes me as a lame laundry list of
>techniques with no real unifying insight as to the problem he claims
>to be addressing. I'll try reading it again tomorrow.
>
>I have said in another thread recently, and I will say it again, that,
>when it comes to understanding of algorithms, we are like children. I
>am therefore naturally suspicious of articles that attempt to draw
>general conclusions about methods of describing algorithms via
>software, since I don't know how you can draw conclusions about
>methods of describing what you don't fully understand to begin with.

The net is pugilistic because it shares an academic, message passing
history. Your comparison is only benign because it's on the net and not
in active use that I can see. Brooks gives no solution other than "it's
better to plan right" in his book and article. Sending word messages is
its only effective method of working. Typical (you don't see much
AutoCAD being exchanged here).

What mankind does know about complex systems is to make them as simple as
possible and no simpler. From that in the past few decades came the
term "over-engineering." Rather than look at carriers look at the
Titanic, if sub, look at the Thresher, the Scorpion, and the various
Soviet era subs. The problem with those comparisons is that they
suffered catastrophic failures. You only have to know enough of the
algorithm. And that's not enough for the system (system != algorithm).


>>>A paradigm already exists for scaling programs to arbitrarily large
>>>sizes. It's called a network.
>>
>>Huh? What kind of network?
>
>I responded to the same question posed by Nick Maclaren in some detail
>in this thread.
>
>What I described is a message-passing paradigm for stitching together
>software that is common in clusters but I don't think is used for
>routine programming. I also gave a reasonably clear description of
>why I think the model is, to all appearances, indefinitely
>extensible.
>
>As I said in responding to Nick, I think Jan C. Vorbrüggen got it
>exactly right when he said that what is different about software is
>that it is written in a strongly-coupled way. A message-passing
>paradigm is one way of strictly limiting the coupling and allowing for
>complete monitoring of the interactions between separate program
>modules.

Message passing from the 60s to today is performance limited.
This is a common criticism (e.g., call by reference vs. call by value)
and is the way COMMON blocks got created.
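For reference, the paradigm under discussion can be sketched in a few
lines of Python (a toy illustration of the coupling-limiting idea, not a
claim about its performance): each "module" touches the rest of the
system only through queues, so every interaction is a message that a
monitor could record.

```python
import queue
import threading

def worker(inbox, outbox):
    """A module whose only connection to the rest of the system is a
    pair of queues; every interaction is a visible message."""
    while True:
        msg = inbox.get()
        if msg is None:        # sentinel: shut down
            break
        outbox.put(("squared", msg * msg))

inbox, outbox = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(inbox, outbox))
t.start()

log = []                       # a monitor could record every message here
for n in (2, 3):
    inbox.put(n)
    log.append(outbox.get())

inbox.put(None)                # tell the worker to stop
t.join()
```

The price, as noted above, is copying and queueing overhead on every
interaction, which is exactly the performance complaint.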


>Notice, please, that I am not proposing a message-passing paradigm for
>all programming. I am proposing it as a way of coping with extremely
>complex systems.

Packet switching.
Data flow.
etc.

>>Did Forrest cover the address space greater
>>than 32-bits thread in comp.arch a while back?
>
The fact that the thread began with "Now that The Forrest Curve has
>become obviously true..." was more than a little off-putting to me,
>since I take the Forrest Curve about as seriously as I took the
>article entitled "IT Doesn't Matter" in the Harvard Business Review.
>Both discussions might as well have started with "Let's suppose that
>no one will ever think of entirely new ways of using computers."

That's why Jon was trolling.


>In any case, the thread has no particular relevance to the argument I
>made in responding to Nick, because the 32-bit thread dealt with
>programs on a single machine, and I take the very broad view that if
>two programs can talk to each other, they might as well be a single
>program.

This is the use of arrows in chemistry. It boils down to thermodynamics
and whether processes are reversible. An absolute "might as well be.."
would be like tossing out eqn in
... chem | pic | tbl | eqn | troff ...
which is not invertible. Sometimes I want 2 separate programs.

Gotta run.
