
software architecture and abstraction


'

Jul 11, 2003, 6:23:55 AM
Say you have an array module and you want to implement a dict module
(array of key-value pairs) on top of it. The traditional way to do
this is to say, ok, the fact that dicts are built out of arrays is an
implementation detail, the dict module is a black box, and no one who
uses the dict module needs to know that the array module exists. The
interfacing to the array module is taken care of by the dict module so
your code calls dict.create(), and dict.create() calls array.create(),
and then fiddles with the new array to make it into a dict before
returning it to your code. This is the traditional function-based
abstraction model.
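
In Python-like pseudocode (all names here are invented for
illustration, not from any real library), the traditional model might
look like this:

    # The black-box model: the backing array is an internal detail.
    class Array:
        def __init__(self):
            self.items = []

    class Dict:
        def __init__(self):
            # Dict decides for itself that an Array backs it; callers
            # of Dict() never see the Array module at all.
            self._backing = Array()

        def put(self, key, value):
            self._backing.items.append((key, value))

        def get(self, key):
            for k, v in self._backing.items:
                if k == key:
                    return v
            raise KeyError(key)

    d = Dict()    # like dict.create(); array creation happens inside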

Now I am considering a model that is not based on functions and in
which abstraction is not so important. The best way to think about it
is, the task of building a program is the same as the task of building
a physical machine, the only difference between building a software
machine and building a real machine is that the software machine can
be modified more easily. So, reasoning by analogy: if you were
building a real machine, there would be no reason to hide the array
module inside the dict module.
chip, the array chip does not physically reside inside the dict chip.
When you are wiring your physical machine, you as the designer of the
system know perfectly well that the array chip exists and in fact your
construction method might go like this: build the array chip, build
the dict chip, and then wire the two together. In code then, the
sequence of events would be like array.create(), dict.create(),
connect( array, dict ). The advantage is that the array is just
another component connected to the dict so that, for instance, a
suitably written list module might be put in its place without having
to rewrite the dict module. What we have done here is decouple the
array module from the dict module, and coupled it to the code that
calls the dict module instead. This is not the traditional way of
implementing abstraction but it is not clear to me that it is worse in
any way.
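
The same thing sketched in the wiring style (again, the names are
invented): any store offering the same operations can be wired in,
including a list in place of an array.

    # The wiring model: the store is created by the caller and
    # connected afterwards, like connect( array, dict ).
    class ListStore:
        def __init__(self):
            self.items = []

    class Dict:
        def __init__(self):
            self._backing = None          # no store yet; wired later

        def connect(self, store):
            self._backing = store

        def put(self, key, value):
            self._backing.items.append((key, value))

    store = ListStore()                   # array.create() / list.create()
    d = Dict()                            # dict.create()
    d.connect(store)                      # connect( store, dict )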

Maybe you don't want to connect dict to an array; maybe you want to
connect it to a list. Maybe you don't want dict to create an array;
maybe you already have an array that you created earlier that you want
to connect to the dict. Traditionally modules are built so they hide
their implementation; you pass in some parameters to the create method
and it takes care of all the rest. The question is whether the array
is considered internal to the dict or not; you could consider it just
another argument to the dict create function. If the array is
considered internal to the dict, then we need a dictarray module, a
dictlist module, and I suppose a dicthashtable module; but if the array
is not considered internal to the dict, I imagine the same dict module
could be used in all these cases.
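
Concretely, the store could simply be an argument to creation (an
illustrative sketch; the store classes are placeholders):

    # One Dict module covering dictarray, dictlist, and so on.
    class ArrayStore:
        def __init__(self):
            self.items = []

    class ListStore:
        def __init__(self):
            self.items = []

    class Dict:
        def __init__(self, store):        # passed in, not created here
            self._backing = store

        def put(self, key, value):
            self._backing.items.append((key, value))

    d1 = Dict(ArrayStore())               # a dict over an array
    d2 = Dict(ListStore())                # the same Dict over a list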

This is an example of a more general question, namely, when we
assemble a system from many parts, where and when do the connections
happen? One logical extreme is that every module is responsible in
and of itself for making all the necessary connections to make an
"object" work, so that when the module.create() function returns there
is no more work to be done. Another extreme is to say, suppose you
were building a physical machine, you would start by gathering all the
parts you need first and then spend a long time soldering wires and so
on. Most code connects objects in a haphazard fashion; if you wanted
to find out all the objects in a program and how they are connected
you would have to jump around all over the code; it's not all listed
in one place. Suppose programs were written in a different way;
suppose the first section created all the objects used by the program
and the second section was responsible for connecting them all. Then
if you wanted to understand the communications topology of the objects
in the program it would be much easier. But it might be harder to
understand in other ways; your program could potentially contain
thousands of objects and the code that wired them together could
easily become unmanageable.
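
As a toy sketch of that two-phase shape (every class and name here is
invented):

    class Component:
        def __init__(self, name):
            self.name = name
            self.peers = []

        def connect(self, other):
            self.peers.append(other)

    # --- Section 1: create every object; nothing is wired yet ---
    store  = Component("array")
    table  = Component("dict")
    parser = Component("parser")

    # --- Section 2: all the wiring, so the communications topology
    # --- of the whole program is readable in this one place ---
    table.connect(store)
    parser.connect(table)

    for c in (store, table, parser):
        print(c.name, "->", [p.name for p in c.peers])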

What do you think?

Adam

Robert Klemme

Jul 11, 2003, 7:16:57 AM

"'" <ack9...@yahoo.com> schrieb im Newsbeitrag
news:45b780c3.03071...@posting.google.com...

> What do you think?

The parallel between electronic systems and software is badly chosen and
inappropriate. The machine or human that has to build up a circuit board
must be well aware of each chip needed, its position and connections. A
software engineer building a complex component from other components need
not know which other components are used by the building blocks he uses.
In fact this is a principle used to manage complexity of software.

However, this was the design-time and programming view. At deployment
time, one needs to be aware of all physical components that have to be
deployed. But this isn't as hard as knowing all software components used
since often many components (i.e. classes) reside in a single physical
component (class library).

Regards

robert

H. S. Lahman

Jul 11, 2003, 11:24:30 AM
Responding to '...

> What do you think?

As far as I can tell you have discovered standard OO practice. B-) At
the OOA/D level one has:

           1     organized by      *
    [Dict] -------------------------- [KeyValue]
                 composed of

where Array is just a particular implementation of the 1:* relationship
at the 3GL level. Dict and KeyValue will interact on a peer-to-peer
basis. That allows one to instantiate different collaborations by
defining different relationships without touching the implementations of
either [Dict] or [KeyValue]. Peer-to-peer collaboration is one of the
things that distinguish OO construction from the traditional
hierarchical construction of procedural development.

At the OOP level the decision about whether to embed the KeyValues in
Dict is a tactical decision driven by: access constraints (if classes
other than [Dict] must access KeyValues, then they should not be embedded
in [Dict]'s implementation); referential integrity constraints (if the
life cycle of a KeyValue is tied to the life cycle of the corresponding
Dict instance, then enforcing that can be easier by embedding); and
efficiency constraints (embedding can save an indirection).
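
For instance, sketched in Python (an invented example, showing the two
tactics side by side):

    class KeyValue:
        def __init__(self, key, value):
            self.key, self.value = key, value

    class DictEmbedded:
        # KeyValues live and die with the Dict: referential integrity
        # is free and an indirection is saved, but no other class can
        # reach them.
        def __init__(self):
            self._pairs = []

        def put(self, key, value):
            self._pairs.append(KeyValue(key, value))

    class DictShared:
        # KeyValues are held by reference: other classes can access
        # them too, but life-cycle integrity must be enforced elsewhere.
        def __init__(self, pairs):
            self.pairs = pairs

    shared = []                  # other objects may also hold `shared`
    d = DictShared(shared)
    d.pairs.append(KeyValue("a", 1))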


*************
There is nothing wrong with me that could
not be cured by a capful of Drano.

H. S. Lahman
h...@pathfindersol.com
Pathfinder Solutions -- We Make UML Work
http://www.pathfindersol.com
(888)-OOA-PATH


Universe

Jul 11, 2003, 12:40:53 PM
H. S. Lahman wrote:

> ... Peer-to-peer collaboration is one of the

> things that distinguish OO construction from the traditional
> hierarchical construction of procedural development.

Don't believe the lie.

See among others:

Large-scale C++ Engineering - John Lakos
    // generally applicable, especially for
    // statically reliant OOPLs - C++, Eiffel (book)

Object-Oriented Layered Architecture and Subsystems - Elliott Coates
    // myself, OO Magazine, Nov. 1995
    // http://www.radix.net/~universe

Elliott
--
OO software rests upon class abstractions expressed in class method
*behaviors*. The sum of class method behaviors is the overall class
*role*. Class role should have primary responsibility for managing class
data such that the impact of class data is driven by the operation of
class methods.
~*~ Get with OO fundamentals, get OO. ~*~
-
We Don't Need US Fat Cat's Geopolitical Hegemony!
People of the World Demand US Stop Now!
-
* http://www.radix.net/~universe *
@Elliott 2003 my comments ~ newsgroups+bitnet OK

Thomas Gagné

Jul 13, 2003, 10:54:03 AM
It's unfortunate Adam chose Array from which to construct his dictionary. I
suspect he's programming in Java (not that there's anything wrong with that).

Arrays conjure images of APIs for accessing elements by index, for() loops
iterating through them, and maybe even static sizes. What too many OO
programmers are unfamiliar with are more abstract classes of collections,
like indexable and non-indexable, sequenceable and non-sequenceable, sets,
sorted/unsorted, etc.--and one of these might have been a better starting
place from which to create a dictionary.


--
.tom
remove dashes in email for replies
http://isectd.sourceforge.net

David C DiNucci

Jul 13, 2003, 11:50:52 AM
Robert Klemme wrote:
>
> "'" <ack9...@yahoo.com> schrieb im Newsbeitrag
> news:45b780c3.03071...@posting.google.com...
>
> > What do you think?
>
> The parallel between electronic systems and software is badly chosen and
> inappropriate. The machine or human that has to build up a circuit board
> must be well aware of each chip needed, its position and connections. A
> software engineer bulding a complex component from other components need
> not know which other components are used by the building blocks he uses.
> In fact this is a principle used to manage complexity of software.

If those "other components" have effects which are still visible at
higher levels of abstraction, then the software engineer *does* need to
know, and hiding their use doesn't help to manage complexity of the
software, it helps to increase and obfuscate it.

In many ways, the parallel between electronic systems and software is a
good one. See "Software Cabling" (e.g. PDSE'97 paper, or references at
www.elepar.com/references.html). OO falls out for free.

-- Dave
-----------------------------------------------------------------
David C. DiNucci Elepar Tools for portable grid,
da...@elepar.com http://www.elepar.com parallel, distributed, &
503-439-9431 Beaverton, OR 97006 peer-to-peer computing

Robert Klemme

Jul 14, 2003, 7:24:11 AM

"David C DiNucci" <da...@elepar.com> schrieb im Newsbeitrag
news:3F117FDC...@elepar.com...

> Robert Klemme wrote:
> >
> > "'" <ack9...@yahoo.com> schrieb im Newsbeitrag
> > news:45b780c3.03071...@posting.google.com...
> >
> > > What do you think?
> >
> > The parallel between electronic systems and software is badly chosen
and
> > inappropriate. The machine or human that has to build up a circuit
board
> > must be well aware of each chip needed, its position and connections.
A
> > software engineer bulding a complex component from other components
need
> > not know which other components are used by the building blocks he
uses.
> > In fact this is a principle used to manage complexity of software.
>
> If those "other components" have effects which are still visible at
> higher levels of abstraction, then the software engineer *does* need to
> know, and hiding their use doesn't help to manage complexity of the
> software, it helps to increase and obfuscate it.

Hmmm... To me this sounds like bad usage of abstraction: if there are
hidden side effects in a lower level of abstraction that are not made
explicit (e.g. via callback interfaces), then you have the typical
situation of spaghetti code IMHO. Can you please give an example where
this is not the case?

> In many ways, the parallel between electronic systems and software is a
> good one. See "Software Cabling" (e.g. PDSE'97 paper, or references at
> www.elepar.com/references.html). OO falls out for free.

Interesting point. I'll give it a thought.

Regards

robert

Universe

Jul 14, 2003, 3:38:40 PM
David C DiNucci wrote:

> Robert Klemme wrote:
>>
>> "'" <ack9...@yahoo.com> schrieb im Newsbeitrag
>> news:45b780c3.03071...@posting.google.com...
>>
>> > What do you think?
>>
>> The parallel between electronic systems and software is badly chosen and
>> inappropriate. The machine or human that has to build up a circuit board
>> must be well aware of each chip needed, its position and connections. A
>> software engineer bulding a complex component from other components need
>> not know which other components are used by the building blocks he uses.
>> In fact this is a principle used to manage complexity of software.

If one accurately reflects on the nature of software development, there are
numerous levels of abstraction. E.g. systems where a PowerBuilder-coded GUI
at higher levels employs lower-level code modules created in C++. (Which
again debunks the Lahman mythology that peer-to-peer interaction is a
hallmark of the OO paradigm.)

What should be hidden or visible depends upon the project's proper level of
abstraction. Each level existing above lower ones ideally hides the
complexity of lower levels.

> If those "other components" have effects which are still visible at
> higher levels of abstraction, then the software engineer *does* need to
> know, and hiding their use doesn't help to manage complexity of the
> software, it helps to increase and obfuscate it.

If lower-level operation is visible at higher levels, its manifestation
may or may not be complex, on the one hand, and may or may not be
appropriate, depending upon project-wide design, planning and tradeoffs.
To repeat, ideally we strive to hide the complexity of lower levels
from higher levels.

We hide complexity from other levels, and indeed the same level, through
a number of mechanisms, idioms, techniques, etc.:
` indirection - references, pointers, polymorphism
` polymorphism
` abstraction - carving out cohesive, coherent and level conceptions
` facades
` encapsulation - private internals<=>public externals
` decomposition - creating jointly cohesive and coherent module groups
` layering - decomposing horizontally
` blocks - decomposing vertically and horizontally simultaneously
` slices - [you guessed it]
` etc and so on 'ad infinitum' :-}

> In many ways, the parallel between electronic systems and software is a
> good one.

Correctamundo!

A camp that will remain "namelexx" denies almost *any* and *all* analogy
between software development and other forms of development, period.

> See "Software Cabling" (e.g. PDSE'97 paper, or references at
> www.elepar.com/references.html). OO falls out for free.

Will do, thanks for all.

Now I really like the following domain.

> -- Dave
> -----------------------------------------------------------------
> David C. DiNucci Elepar Tools for portable grid,
> da...@elepar.com http://www.elepar.com parallel, distributed, &
> 503-439-9431 Beaverton, OR 97006 peer-to-peer computing

"Distributed Objects Everywhere" - DOE
"Distributed Object Management" - DOM

Yeesss...!

jonah thomas

Jul 14, 2003, 4:14:51 PM
Robert Klemme wrote:
> "David C DiNucci" <da...@elepar.com> schrieb
>>Robert Klemme wrote:

>>>The parallel between electronic systems and software is badly chosen
>>>and inappropriate. The machine or human that has to build up a circuit
>>>board must be well aware of each chip needed, its position and
>>>connections.

During part of his design effort can't he use higher-level abstract
components?

>>>A software engineer bulding a complex component from other components
>>>need not know which other components are used by the building blocks he
>>>uses.

He may need to know, if they aren't re-entrant.

>>>In fact this is a principle used to manage complexity of software.

>>If those "other components" have effects which are still visible at
>>higher levels of abstraction, then the software engineer *does* need to
>>know, and hiding their use doesn't help to manage complexity of the
>>software, it helps to increase and obfuscate it.

> Hmmm... To me this sounds like bad usage of abstraction: if there are
> hidden side effects in a lower level of abstraction that are not made
> explicit (e.g. via callback interfaces), then you have the typical
> situation of spaghetti code IMHO. Can you please give an example where
> this is not the case?

I can't. In every case it happens by mistake. If it had been done
correctly there would be no problem. But then, if the design had been
done correctly without using abstraction then there would be no problem
there, either.

There are lots of iatrogenic problems in software, but they're responses
to earlier problems. When they don't work then of course the proponents
point out that they have been mis-used. But the problems they were
responses to came from people mis-using earlier technology. So we have
layers of technology that are hard to use, and each new layer has the
claim that it will be easier to use, and in practice each one generates
mistakes.

Alan Gauld

Jul 14, 2003, 5:52:37 PM
On Mon, 14 Jul 2003 16:14:51 -0400, jonah thomas
<j2th...@cavtel.net> wrote:
> >>>The parallel between electronic systems and software is badly chosen
> >>>and inappropriate.

At the design level they are similar but I agree the
manufacturing analogy is badly misplaced.

> >>>The machine or human that has to build up a circuit
> >>>board must be well aware of each chip needed, its position and
> >>>connections.
>
> During part of his design effort can't he use higher-level abstract
> components?

Yes, but the person doing the assembly is very unlikely to be the
designer. In my time as an electronics designer I hardly ever
assembled my own designs (my time cost too much!); I handed them
over to a technician to assemble. This is very similar to the
traditional mainframe coding model where designers hand flow
charts to programmers to code up. But it's not at all similar to
the more modern styles of software development.

This separation of design and construction is one of the
differences between SE and traditional engineering. Whether the
difference needs to exist or not is a different question (as
already said, it doesn't usually exist in the mainframe world).

Very often the reason for the designer and programmer being the
same lies in the small size of the projects and the lack of
completeness in the designs - certainly most software component
designs are far less rigorous or complete than the corresponding
electronic component (i.e. circuit board level) design. Also many
software engineers only work on one or two projects concurrently
whereas IME an electronics engineer will often work on 4 or more
designs at the same time.

Alan G.
http://www.freenetpages.co.uk/hp/alan.gauld/

Nick Maclaren

Jul 15, 2003, 4:55:29 AM

In article <105825697...@doris.uk.clara.net>, "Paul Campbell" <p.au.l.ca...@ob.jectvi.sion.c.o.u.k> writes:
|>
|> "'" <ack9...@yahoo.com> wrote in message news:45b780c3.03071...@posting.google.com...

|>
|> > Suppose programs were written in a different way;
|> > suppose the first section created all the objects used by the program
|> > and the second section was responsible for connecting them all.
|> > Then if you wanted to understand the communications topology of the objects
|> > in the program it would be much easier.
|>
|> An object itself is the thing that knows best what other services/facilities
|> it needs.

|>
|> > But it might be harder to
|> > understand in other ways; your program could potentially contain
|> > thousands of objects and the code that wired them together could
|> > easily become unmanageable.
|> >
|> > What do you think?
|>
|> I think you've answered your own question.
|>
|> Paul C.

Nick Maclaren

Jul 15, 2003, 5:03:17 AM

In article <105825697...@doris.uk.clara.net>,
"Paul Campbell" <p.au.l.ca...@ob.jectvi.sion.c.o.u.k> writes:
|> "'" <ack9...@yahoo.com> wrote in message news:45b780c3.03071...@posting.google.com...
|>
|> > Suppose programs were written in a different way;
|> > suppose the first section created all the objects used by the program
|> > and the second section was responsible for connecting them all.
|> > Then if you wanted to understand the communications topology of the objects
|> > in the program it would be much easier.
|>
|> An object itself is the thing that knows best what other services/facilities
|> it needs.

This is one of the great dogmas of the Object Orientation religion.
It is sometimes true, and sometimes false.

There are many circumstances under which an operation needs multiple
objects of different types, any one of which could be regarded as
primary. It is seriously harmful to attach that operation to an
arbitrary one of those objects, because it then promotes the model
used by that object to being dominant over the others.

There are a fair number of circumstances under which an object can
be viewed in many different, not wholly compatible, ways. It is
also seriously harmful to attach all of those to the object, because
it exposes incompatibilities that otherwise would not exist.

The programming model described by whatsisname is, indeed, a good
one - in the cases where it is appropriate. There is no panacea.


Regards,
Nick Maclaren.

jonah thomas

Jul 15, 2003, 8:13:29 AM
Nick Maclaren wrote:
> "Paul Campbell" <p.au.l.ca...@ob.jectvi.sion.c.o.u.k> writes:

> |> An object itself is the thing that knows best what other services/facilities
> |> it needs.

> This is one of the great dogmas of the Object Orientation religion.
> It is sometimes true, and sometimes false.

I've never heard of a case where there was no object-compatible way to
do it. But sometimes the object-compatible ways are not intuitive.

> There are many circumstances under which an operation needs multiple
> objects of different types, any one of which could be regarded as
> primary. It is seriously harmful to attach that operation to an
> arbitrary one of those objects, because it then promotes the model
> used by that object to being dominant over the others.

You could ask each object for the data you need, and then do the
operation. Perhaps present results to each object that needs to know
results from this operation. Do you need a new object associated with
this operation? It depends.

There's always some way to use objects that won't bog you down in
endless complexity. The question is whether objects give you enough
benefit to justify finding it.

Nick Maclaren

Jul 15, 2003, 8:33:24 AM

In article <3F13EFE9...@cavtel.net>, jonah thomas <j2th...@cavtel.net> writes:
|> Nick Maclaren wrote:
|> > "Paul Campbell" <p.au.l.ca...@ob.jectvi.sion.c.o.u.k> writes:
|>
|> > |> An object itself is the thing that knows best what other services/facilities
|> > |> it needs.
|>
|> > This is one of the great dogmas of the Object Orientation religion.
|> > It is sometimes true, and sometimes false.
|>
|> I've never heard of a case where there was no object-compatible way to
|> do it. But sometimes the object-compatible ways are not intuitive.

Yes, of course, you can pervert any problem into an object oriented
one, but that isn't helpful if it makes the original problem almost
unrecognisable, and it CERTAINLY doesn't imply that "An object itself
is the thing that knows best what other services/facilities it needs."

The same applies to any other programming paradigm.

|> > There are many circumstances under which an operation needs multiple
|> > objects of different types, any one of which could be regarded as
|> > primary. It is seriously harmful to attach that operation to an
|> > arbitrary one of those objects, because it then promotes the model
|> > used by that object to being dominant over the others.
|>
|> You could ask each object for the data you need, and then do the
|> operation. Perhaps present results to each object that needs to know
|> results from this operation. Do you need a new object associated with
|> this operation? It depends.

NO, NO, NO! You are thinking only of TRIVIAL operations. The cases
that I am referring to are where the VIEW of object A depends on the
operation being performed (involving objects A and B, here) and the
VALUE AND TYPE of object B. And vice versa, of course.

|> There's always some way to use objects that won't bog you down in
|> endless complexity. The question is whether objects give you enough
|> benefit to justify finding it.

Not in my experience. There are a small number of problems which
fit the object oriented model so badly that forcing them into it
simply increases the complexity without any corresponding benefit.


Regards,
Nick Maclaren.

jonah thomas

Jul 15, 2003, 8:42:54 AM
Nick Maclaren wrote:
> jonah thomas <j2th...@cavtel.net> writes:

> |> There's always some way to use objects that won't bog you down in
> |> endless complexity. The question is whether objects give you enough
> |> benefit to justify finding it.

> Not in my experience. There are a small number of problems which
> fit the object oriented model so badly that forcing them into it
> simply increases the complexity without any corresponding benefit.

If I'm right, there were some ways to do it that wouldn't have increased
the complexity much. But in your (almost certainly quite competent)
opinion it wasn't worth continued exploration looking for them.

So my position is difficult to falsify. Maybe there's some sort of
abstract math available which would prove me wrong.

Gerrit Muller

Jul 15, 2003, 9:40:15 AM
Nick Maclaren wrote:
<...snip...>>
> Not in my experience. There are a small number of problems which
> fit the object oriented model so badly that forcing them into it
> simply increases the complexity without any corresponding benefit.
>
In about 1990 we tried to redesign a processing-dominated problem in an
object-oriented manner. This was a total fiasco, because the natural
problem domain is concerned with other issues, such as processing and
memory efficiency. We might have benefitted from doing the housekeeping
in OO fashion, but the focus of the designers was too black and white
(everything is an object). Using OO in this domain put the focus of the
designers on the wrong issues.

In the same time frame the medical imaging workstation was conceived,
see articles on
http://www.extra.research.philips.com/natlab/sysarch/MedicalImaging.html
which was one of the early successful industrial-size products based
on OO technology (in this case programmed in Objective-C). The nature of
the problems in this product matched the OO paradigm much better.

>
> Regards,
> Nick Maclaren.

regards Gerrit

--
Gaudi systems architecting:
http://www.extra.research.philips.com/natlab/sysarch/

Nick Maclaren

Jul 15, 2003, 9:50:29 AM

In article <3F13F6CE...@cavtel.net>,

I think that you are right that it is rare for the paradigm to be
completely hopeless - just as you can usually solve any problem
using any sufficiently flexible paradigm, though some approaches
are more suited to some tasks than others. I have, in the past,
clarified code by merging functions and using explicit branching
instead of conditionals - the key being that I was modelling a
complex state transition operation.

An example of a requirement that is inherently unsuited to object
orientation is cross-object consistency checking, of the sort where
an operation takes a large number of objects, ALL of which are
optional! It checks the N-way consistencies for which the objects
are present and ignores the rest. If you try to bind such checks
to objects, you often end up with a combinatoric explosion.


Regards,
Nick Maclaren.

Hoff, Todd

Jul 15, 2003, 10:03:18 AM
ack9...@yahoo.com (') wrote:
>Suppose programs were written in a different way;
> suppose the first section created all the objects used by the program
> and the second section was responsible for connecting them all. Then
> if you wanted to understand the communications topology of the objects
> in the program it would be much easier. But it might be harder to
> understand in other ways; your program could potentially contain
> thousands of objects and the code that wired them together could
> easily become unmanageable.

I'm curious as to how you will create the objects in the first place.
Typically objects and methods are created because some responsibility
needs implementing, which implies some other object requires this
service. So objects arise from their interactions. You are proposing
that objects just get created out of thin air and then get connected.
I don't see how you could do this.

--
Do a caterpillar and butterfly speak the same language?

Nick Maclaren

Jul 15, 2003, 10:19:35 AM

In article <20030715100318.557$8...@newsreader.com>,

It's an old approach, and has been used very successfully.

In that model, an object consists of its data (state) and the
operations that apply SOLELY to that object - such as displaying
it. All other operations are added as interactions.
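
For instance (an invented sketch, in Python): the object keeps its
state and the operations that apply solely to itself, while a
two-object operation is attached to neither.

    import math

    class Point:
        def __init__(self, x, y):
            self.x, self.y = x, y

        def display(self):                # applies solely to this object
            print("(%g, %g)" % (self.x, self.y))

    def distance(a, b):                   # an interaction between two
        return math.hypot(a.x - b.x,      # objects, owned by neither
                          a.y - b.y)

    p, q = Point(0, 0), Point(3, 4)
    print(distance(p, q))                 # 5.0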

H. S. Lahman

Jul 15, 2003, 1:13:47 PM
Responding to Campbell...

>>Suppose programs were written in a different way;
>>suppose the first section created all the objects used by the program
>>and the second section was responsible for connecting them all.
>> Then if you wanted to understand the communications topology of the objects
>>in the program it would be much easier.
>
>

> An object itself is the thing that knows best what other services/facilities
> it needs.

I have to agree with Maclaren here. This strikes me as the classic view
of procedural/functional programming where messages are imperatives for
someone else to do something specific. But OO abstraction and
encapsulation demand that an object capture /intrinsic/ properties of
the underlying entity without depending upon the overall solution context.

The best way to guarantee that is to think of messages as announcements
of what the sender has done. That's why messages are defined in OOA/D
at a different level of abstraction (e.g., UML Interaction Diagrams)
than individual class abstractions or method descriptions. One
organizes the sequence of operations for the solution by connecting
those announcements with those who care about them (i.e., for whom the
sender's behavior is a precondition in the solution context).

Paul Campbell

Jul 16, 2003, 4:24:44 AM

"H. S. Lahman" <h.la...@verizon.net> wrote in message news:3F143653...@verizon.net...

> Responding to Campbell...
>
> >>Suppose programs were written in a different way;
> >>suppose the first section created all the objects used by the program
> >>and the second section was responsible for connecting them all.
> >> Then if you wanted to understand the communications topology of the objects
> >>in the program it would be much easier.
> >
> >
> > An object itself is the thing that knows best what other services/facilities
> > it needs.
>
> I have to agree with Maclaren here. This strikes me as the classic view
> of procedural/functional programming where messages are imperatives for
> someone else to do something specific. But OO abstraction and
> encapsulation demand that an object capture /intrinsic/ properties of
> the underlying entity without depending upon the overall solution context.
>
> The best way to guarantee that is to think of messages as announcements
> of what the sender has done.

Yes, the objects still have to listen for specific announcements, i.e. they
know themselves what they should be responding to. The message paths are
thus still essentially properties of the client objects and not something
configured externally as the OP suggested.

WRT imperative vs event/state driven, I don't see your approach as being
intrinsically superior at all - you are just restricting all your object
communication to the multicasting listener pattern, which in turn forces
you to decompose all methods into fragments of functionality small enough
that they never span a message-receive boundary. Of course such fragmented
methods are more "independently testable" but they also (IMO) serve to
obscure natural linear stepwise (yes, admittedly procedural) collaborations.

Paul C.


Tom Gardner

Jul 16, 2003, 8:26:45 AM
nm...@cus.cam.ac.uk (Nick Maclaren) wrote in
news:bf10r5$k85$1...@pegasus.csx.cam.ac.uk:
> An example of a requirement that is inherently unsuited to object
> orientation is cross-object consistency checking, of the sort where
> an operation takes a large number of objects, ALL of which are
> optional! It checks the N-way consistencies for which the objects
> are present and ignores the rest. If you try to bind such checks
> to objects, you often end up with a combinatoric explosion.

That sounds like a pretty difficult problem whichever way you
choose to attack it. How would you approach the problem?

Nick Maclaren

Jul 16, 2003, 8:40:52 AM

In article <Xns93BA88C7D6A99...@158.234.29.254>,

In an unstructured way :-) I write a procedure that has a series
of blocks of code, which correspond to each subset of the arguments
being checked, and string them end to end.

Naturally, where common code can be split off into functions, I do
that, but my experience is that forcing structure onto unstructured
problems merely reduces clarity.
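
The shape of that, sketched in Python (the argument names and the
checks themselves are invented):

    # Cross-object consistency checking over optional arguments: one
    # plain procedure, one block per subset that happens to be present.
    def check_consistency(order=None, customer=None, invoice=None):
        problems = []

        if order is not None and customer is not None:
            if order["customer_id"] != customer["id"]:
                problems.append("order does not belong to customer")

        if order is not None and invoice is not None:
            if invoice["total"] != order["total"]:
                problems.append("invoice total differs from order total")

        if order is not None and customer is not None \
                and invoice is not None:
            if invoice["bill_to"] != customer["address"]:
                problems.append("bill-to is not the customer address")

        return problems        # absent objects are simply skipped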


Regards,
Nick Maclaren.

Tom Gardner

Jul 16, 2003, 10:01:35 AM
nm...@cus.cam.ac.uk (Nick Maclaren) wrote in
news:bf3h4k$mak$1...@pegasus.csx.cam.ac.uk:

>
> In article <Xns93BA88C7D6A99...@158.234.29.254>,
> Tom Gardner <gardne...@TRAPlogica.com> writes:
>|> nm...@cus.cam.ac.uk (Nick Maclaren) wrote in
>|> news:bf10r5$k85$1...@pegasus.csx.cam.ac.uk:
>|> > An example of a requirement that is inherently unsuited to object
>|> > orientation is cross-object consistency checking, of the sort where
>|> > an operation takes a large number of objects, ALL of which are
>|> > optional! It checks the N-way consistencies for which the objects
>|> > are present and ignores the rest. If you try to bind such checks
>|> > to objects, you often end up with a combinatoric explosion.
>|>
>|> That sounds pretty difficult problem whichever way you
>|> choose to attack it. How would you approach the problem?
>
> In an unstructured way :-) I write a procedure that has a series
> of blocks of code, which correspond to each subset of the arguments
> being checked, and string them end to end.

Sounds like the Command (and maybe Visitor) pattern :)
OOP is a very nice nail, and I've got a hammer :)


> Naturally, where common code can be split off into functions, I do
> that, but my experience is that forcing structure onto unstructured
> problems merely reduces clarity.

If you are having to force it then all you have to do is
recast your problem or solution. Oops, I forgot the quotes
around the "all".

H. S. Lahman

Jul 16, 2003, 1:53:45 PM
Responding to Campbell...

>>I have to agree with Maclaren here. This strikes me as the classic view
>>of procedural/functional programming where messages are imperatives for
>>someone else to do something specific. But OO abstraction and
>>encapsulation demand that an object capture /intrinsic/ properties of
>>the underlying entity without depending upon the overall solution context.
>>
>>The best way to guarantee that is to think of messages as announcements
>>of what the sender has done.
>
>
> Yes the objects still have to listen for specific announcements i.e. they know
> themselves what they should be responding to. The message paths are thus
> still essentially properties of the client objects and not somthing configured
> externally as the OP suggested.

That is true in OOPLs because, as 3GLs, they employ procedural
message passing where there is no distinction between message
and method. But in OOA/D they are separated. In UML one can define a
Reception to describe the messages to be accepted and then separately
map the Reception to a particular Method in the object implementation.
That can be done after all object responsibilities have been defined.

>
> WTR to imperative vs event/state driven, I dont see your approach as being
> intrisically superior at all - you are just restricting all your object communication
> to the mulitcasting listener pattern which in turn forces you decompose all
> methods into small enough fragments of functionality such that they never span
> a message receive boundary. Of course such fragmented methods are
> more "independently testable" but they also (IMO) serve to obscure natural
> linear stepwise (yes admittedly procedural) collaberations.

I'm not talking about broadcasting. If events are announcements, then
there is no reason to think about them when designing the sender
behavior (i.e., one concentrates on doing what will be announced). Once
the behaviors are defined one can step up to the Interaction Diagram
level and decide who cares about them in the context of solution flow of
control.

One identifies sender, receiver, and message at that level. (IOW, every
behavior has a precondition in the solution sequence so one finds the
postcondition that corresponds and generates the message there.) Then
one can backfill the details of associating the message with sender and
receiver. (Then the OOPLs' procedural message passing is just syntactic
sugar for those details.)

The reason that this is inherently better is that the announcement
mindset precludes any hard-wired implementation dependencies on solution
sequence or chains of hierarchical specifications down the call stack.
By not thinking about context when implementing behaviors one avoids
dependencies between behavior implementations. One can do the same
thing with an imperative approach, but it requires a whole lot more
conscious developer discipline to do so.
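
A minimal sketch of the announcement style (invented names; this is
only one possible realization, not a particular MDA profile):

    # Senders announce what they have done; a separate wiring level
    # decides who cares.  Valve and Pump never name each other.
    class Announcer:
        def __init__(self):
            self._listeners = []

        def subscribe(self, receiver):
            self._listeners.append(receiver)

        def announce(self, event):
            for receiver in self._listeners:
                receiver(event)

    class Valve(Announcer):
        def close(self):
            # do the work, then announce the fact; no imperative
            # "now YOU do X" aimed at a specific collaborator
            self.announce("valve_closed")

    class Pump:
        def on_event(self, event):
            if event == "valve_closed":   # our precondition is met
                print("pump: shutting down")

    # The Interaction Diagram level: connect announcement to listener.
    v, p = Valve(), Pump()
    v.subscribe(p.on_event)
    v.close()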

gr...@cs.uwa.edu.au

Jul 17, 2003, 3:20:34 AM
In comp.lang.misc H. S. Lahman <h.la...@verizon.net> wrote:
: I have to agree with Maclaren here. This strikes me as the classic view
: of procedural/functional programming where messages are imperatives for
: someone else to do something specific.

Eh? Messages? Imperatives? "do"ing something?
I don't think you understand what functional programming is.
Better limit yourself to discussing procedural programming.

-Greg

H. S. Lahman

Jul 17, 2003, 1:45:21 PM
Responding to Gregm...

Functional programming is based upon lambda calculus, it is structured
around procedural message passing, and state is defined in terms of
input and output (i.e., functions are invoked to modify state and they
return the new state). That leads to exactly the same hierarchical
paradigm for flow of control as procedural programming. The difference
lies in the elimination of persistent state (e.g., only functions are
used, not procedures) and there is much more support for parametric
polymorphism in the FPLs (e.g., an input can be a function).

Uncle Bob (Robert C. Martin)

Jul 17, 2003, 2:58:26 PM
gr...@cs.uwa.edu.au might (or might not) have written this on (or
about) Thu, 17 Jul 2003 07:20:34 +0000 (UTC), :

H.S. has been around for a *long* time. Maybe even longer than me.
It's not wise to challenge his knowledge of software.


Robert C. Martin | "Uncle Bob"
Object Mentor Inc.| unclebob @ objectmentor . com
PO Box 5757 | Tel: (800) 338-6716
565 Lakeview Pkwy | Fax: (847) 573-1658 | www.objectmentor.com
Suite 135 | | www.XProgramming.com
Vernon Hills, IL, | Training and Mentoring | www.junit.org
60061 | OO, XP, Java, C++, Python |

Paul Campbell

Jul 17, 2003, 4:22:33 PM

"H. S. Lahman" <h.la...@verizon.net> wrote in message news:3F159132...@verizon.net...
> Responding to Campbell...

It also precludes seeing genuine sequential interactions that span objects
and yet form part of the same logical workflow/use case, nicely encapsulated
in one place away from other such multi-object interactions.

> By not thinking about context when implementing behaviors one avoids
> dependencies between behavior implementations. One can do the same
> thing with an imperative approach, but it requires a whole lot more
> conscious developer discipline to do so.

IMO it's a plain tradeoff - in return for removing the possibility of
behaviour dependencies you are splitting up natural multi-object
interactions and making them less tractable.

Personally I prefer to actually see multi-object algorithms directly rather
than have them emerge from interacting state machines.
Paul C.


Peter "Firefly" Lund

Jul 17, 2003, 4:45:57 PM
On Thu, 17 Jul 2003, Uncle Bob (Robert C. Martin) wrote:

> H.S. has been around for a *long* time. Maybe even longer than me.
> It's not wise to challenge his knowledge of software.

Perhaps. I'm still as confused as gregm about what "H.S." means.
Who knows, maybe he is right and it's just a case of "when I use a word it
means exactly what I want it to mean".

Functional programming is not usually taken to be about messages.

Object-oriented programming can be.

-Peter

gr...@cs.uwa.edu.au

Jul 17, 2003, 11:49:46 PM
In comp.lang.misc H. S. Lahman <h.la...@verizon.net> wrote:
: Responding to Gregm...

:> In comp.lang.misc H. S. Lahman <h.la...@verizon.net> wrote:
:> : ... the classic view of procedural/functional programming where

:> : messages are imperatives for someone else to do something specific.
:>
:> Eh? Messages? Imperatives? "do"ing something?
:> I don't think you understand what function programming is.

: Functional programming is based upon lambda calculus,

OK.

: it is structured around procedural message passing,

No way. No procedures, no message passing. Not in pure functional
programming, anyway.

: and state is defined in terms of input and output

State is not defined at all in a pure functional language: a program
is merely a function mapping a stream of input events to a stream of
output events.
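
(By way of illustration only, in Python, which is of course not a pure
language: the whole program is one mapping,

    def respond(event):
        return "echo: " + event       # output depends on input alone

    def program(events):              # stream in, stream out, no state
        return [respond(e) for e in events]

with nothing updated anywhere.)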

: (i.e., functions are invoked to modify state and they return the new state).

Not in the least. Functions are merely equations that together define
the mapping. They do not _do_ anything. They are not invoked, instead
the run-time system uses them to determine the correct output stream
given the input stream. They definitely do not modify state - they
don't even have a _concept_ of state. They don't return anything,
either, let alone a state.

: That leads to exactly the same hierarchical paradigm for flow of
: control as procedural programming.

A pure functional program does not have a flow of control. The run-time
system may choose to use definition A before, after, or whilst it uses
definition B, but "control" is never handed to the program itself.

: The difference lies in the elimination of persistent state (e.g.,
: only functions are used, not procedures) and there is much more
: support for parametric polymorphism in the FPLs

Those are two of a very long list of features that are used to
differentiate the spectrum of languages from pure imperative
to pure functional. They are by no means the defining features
of functional languages. As I said: I think you should limit
yourself to making claims about things you understand, or you
risk that people will not take you seriously even when you do
know what you are talking about.

-Greg

gr...@cs.uwa.edu.au

Jul 17, 2003, 11:55:57 PM
In comp.lang.misc "Uncle Bob (Robert C. Martin)" <u.n.c.l...@objectmentor.com> wrote:
: gr...@cs.uwa.edu.au might (or might not) have written this on (or

: about) Thu, 17 Jul 2003 07:20:34 +0000 (UTC), :
:>In comp.lang.misc H. S. Lahman <h.la...@verizon.net> wrote:
:>: functional programming where messages are imperatives for
:>: someone else to do something specific.
:>Eh? Messages? Imperatives? "do"ing something?
:>I don't think you understand what function programming is.

: H.S. has been around for a *long* time.

<checking posting history> Aaaah. I thought I recognised
"hierarchical paradigm for flow of control" from somewhere.
Yep, he's got quite a history. :)

: It's not wise to challenge his knowledge of software.

Right you are. In my experience it is not worth trying to
discuss _anything_ with him, such is his destitution of
ability to comprehend what is said to him. Perhaps I am
too harsh: he may well have a lot of interesting things
to say about topics he understands a bit better, but I
can only judge from what he says about functional
programming, and he certainly understands next to nothing
about that, despite marathon discussions in the past with
some particularly knowledgeable and eloquent FPers.

-Greg

Shayne Wissler

Jul 18, 2003, 1:38:59 AM
> : It's not wise to challenge his knowledge of software.

This might just be the most asinine comment I've seen posted here...

The minute we start believing in that sort of garbage we should all turn in
our computers for stone tablets.

David C DiNucci

Jul 18, 2003, 3:14:36 AM
I (and I'm sure most others familiar with functional programming) agree
completely with almost all of your post, except perhaps:

gr...@cs.uwa.edu.au wrote:
> ...[Functions] do not _do_ anything. They are not invoked, instead
> the run-time system uses them to determine the correct output stream
> given the input stream.

Referential transparency, a central tenet of functional programming,
still requires that a demanded reference be evaluated (or looked up, if
memoized) to determine the value that it can be replaced with. The
lambda calculus applies alpha and beta reduction in certain (partial)
orderings to reduce a function application to the value which it
represents. Those sentences contain verbs. To say that a function does
not "do" anything could be applied just as easily to procedural programs
which do not "do" anything: In each case, the "run-time system uses
them to determine the correct output stream[s] given the input
stream[s]."

In fact, you're treading a really thin line to say "they don't return
anything, either", when you just described something as an "output
stream". From that assertion, it almost sounds like a function just
states a relationship between the two streams, and so therefore might be
just as suitable for reverse computation.

If a function doesn't do anything, then I guess one of the things it
isn't doing is computing.

-- Dave
-----------------------------------------------------------------
David C. DiNucci Elepar Tools for portable grid,
da...@elepar.com http://www.elepar.com parallel, distributed, &
503-439-9431 Beaverton, OR 97006 peer-to-peer computing

S Perryman

Jul 18, 2003, 5:52:41 AM
<gr...@cs.uwa.edu.au> wrote in message news:bf7r4d$5fg$2...@enyo.uwa.edu.au...

I have programmed in two FPs (HOPE and Lisp) .
I did FP in my final yr as an undergrad.

Even with all the above, I still consider my knowledge of FP to be
quite basic. Which begs the question :

If someone who has been exposed to the concepts of referential
transparency, Beta reduction, the Y combinator, standard and normal
order evaluation, the Church-Rosser theorem, the S/K/I combinators,
program transformation, lambda-lifting etc

has 'basic' knowledge, then what should the FP bods make of
someone whose summary of FP is :

: Functional programming is based upon lambda calculus,

: it is structured around procedural message passing,


Regards,
Steven Perryman


David C DiNucci

Jul 18, 2003, 6:15:08 AM
Robert Klemme wrote:
>
> "David C DiNucci" <da...@elepar.com> wrote in message
> news:3F117FDC...@elepar.com...
> > Robert Klemme wrote:
> > > A software engineer building a complex component from other
> > > components need not know which other components are used by the
> > > building blocks he uses. In fact this is a principle used to
> > > manage complexity of software.
> >
> > If those "other components" have effects which are still visible at
> > higher levels of abstraction, then the software engineer *does* need to
> > know, and hiding their use doesn't help to manage complexity of the
> > software, it helps to increase and obfuscate it.
>
> Hmmm... To me this sounds like bad usage of abstraction: if there are
> hidden side effects in a lower level of abstraction that are not made
> explicit (e.g. via callback interfaces), then you have the typical
> situation of spaghetti code IMHO. Can you please give an example where
> this is not the case?

I guess I never addressed your question, perhaps for good reason.
You're basically right, but it really depends on the definition of "side
effects" and "explicit".

Most of my work has been in the parallel and distributed arena. One
example that is (was?) often used there is the "Brock Ackerman anomaly",
in which the behavior (and correctness) of a component in a system may
depend on more than just the relationship of the values in the output
stream to those in the input stream. It may also depend (for example)
on the ordering between when inputs are accepted by the module relative
to when outputs are emitted by it. This sort of "anomaly" may not be of
much relevance in very conventional OO systems, where the circumstances
under which an object, say, emits a message may be more constrained.

Approaches like temporal logic or CSP/CCS take these sort of
observational behaviors into account, and it would certainly be possible
to include such behavioral descriptions regarding relative orderings of
inputs and outputs as part of a module spec to know more fully how that
module would behave within a system. That could be regarded as "making
explicit" those "hidden side effects", as you observe. However, in
terms of Software Cabling, which is the programming approach I mentioned
earlier, I often wonder whether it might not be as (or more) useful to
simply expose some of the important implementation aspects of the lower
level abstraction to make clear its concurrency behavior rather than to
necessarily use a new (or old, like CCS) notation to try to express it
otherwise. In each case, they are formal systems with formal rules, and
I believe may be reasoned about almost at the same level.

Purposely exposing implementation "details" is certainly counter to
traditional wisdom, but in the long run, it may really come down to
which form of information is most useful to the user of the abstraction,
and is easiest to deliver to them.

> > In many ways, the parallel between electronic systems and software is a
> > good one. See "Software Cabling" (e.g. PDSE'97 paper, or references at
> > www.elepar.com/references.html). OO falls out for free.
>
> Interesting point. I'll give it a thought.

Thanks. I'd like to hear any thoughts you may have.

H. S. Lahman

Jul 18, 2003, 10:42:45 AM
Responding to Campbell...

>>The reason that this is inherently better is that the announcement
>>mindset precludes any hard-wired implementation dependencies on solution
>>sequence or chains of hierarchical specifications down the call stack.
>
>
> It also precludes seeing genuine sequential interactions that span objects
> and yet from part of the same logical workflow/usecase nicely encapsulated in
> one place away from other such multi-object interactions.

I am not sure I understand this point. What you seem to be describing
is a subsystem that encapsulates a particular subject matter. "Seeing"
such subsystems is a methodological issue. But at the object
interaction level within a subsystem the sequences are all defined by
the developer's solution algorithm; one just defines those sequences at
the Interaction Diagram level.

>
>
>>By not thinking about context when implementing behaviors one avoids
>>dependencies between behavior implementations. One can do the same
>>thing with an imperative approach, but it requires a whole lot more
>>conscious developer discipline to do so.
>
>
> IMO its a plain tradeoff - in return for removing the posibility of behaviour
> dependencies you are splitting up natral multi-object interactions and making them
> less tractable.
>
> Personally I prefer to actually see multi-object algorithms direcly rather than have
> them emearge from interacting state machines.

That is exactly why one defines where events are generated and where
they go at the Interaction Diagram level. One observes the solution
flow of control at a much higher level of abstraction where one is only
interested in object responsibilities and not their implementations.

BTW, this is not a state machine issue; using object state machines is
relevant only to particular MDA profiles. The same thing is true for
all object implementations. [As I recall this subthread started when I
asserted that in a well-formed OO application methods that implement
behavior responsibilities should never return values (other than error
returns) because that immediately creates a dependence of the caller
implementation on the specification of the called behavior.]

Ken Moore

Jul 18, 2003, 5:20:13 AM
In article <bf7qoq$5fg$1...@enyo.uwa.edu.au>, gr...@cs.uwa.edu.au writes:
>[...]
> H.S.Lahman wrote

>: That leads to exactly the same hierarchical paradigm for flow of
>: control as procedural programming.
>
>A pure functional program does not have a flow of control. The run-time
>system may choose to use definition A before, after, or whilst it uses
>definition B, but "control" is never handed to the program itself.

and gregm wrote elsewhere:

>Functional programming is not usually taken to be about messages.

Indeed. I got into functional programming (SISAL) in 1993 because I
thought that the popular paradigm for programming massively parallel
systems, message passing, was cumbersome and could well result in the
production of unreliable programs. Functional programming maps onto
parallel hardware in a way that does not require the programmer to
consider messages or flow of control and SISAL, in particular, has
structures that largely avoid accidental introduction of unnecessary
impediments to parallel execution. The compiler of a functional
language has all the information it needs to ensure that race conditions
will not arise.

--
Ken Moore
k...@mooremusic.org.uk
Web site: http://www.mooremusic.org.uk/
I reject emails > 300k automatically: warn me beforehand if you want to send one

Uncle Bob (Robert C. Martin)

Jul 18, 2003, 3:07:51 PM
"Shayne Wissler" <thal...@yahoo.com> might (or might not) have
written this on (or about) Fri, 18 Jul 2003 05:38:59 GMT, :

I was trying to be polite. I could have said: It's not wise to make
stupidly rude comments just for our own gratification.

Shayne Wissler

Jul 18, 2003, 8:35:37 PM

"Uncle Bob (Robert C. Martin)" <u.n.c.l...@objectmentor.com> wrote in
message news:p8hghvs6mqpqm7h5u...@4ax.com...

> "Shayne Wissler" <thal...@yahoo.com> might (or might not) have
> written this on (or about) Fri, 18 Jul 2003 05:38:59 GMT, :
>
> >> : It's not wise to challenge his knowledge of software.
> >
> >This might just be the most asinine comment I've seen posted here...
> >
> >The minute we start believing in that sort of garbage we should all turn in
> >our computers for stone tablets.
>
> I was trying to be polite. I could have said: It's not wise to make
> stupidly rude comments just for our own gratification.

It's wise not to twist one's meaning while endeavoring to be polite. ;)


Shayne Wissler

David C DiNucci

Jul 20, 2003, 6:19:51 AM
Ken Moore wrote:
> Indeed. I got into functional programming (SISAL) in 1993 because I
> thought that the popular paradigm for programming massively parallel
> systems, message passing, was cumbersome and could well result in the
> production of unreliable programs. ...

> ... The compiler of a functional
> language has all the information it needs to ensure that race conditions
> will not arise.

That's an interesting way of putting it. Since a function is, by
definition, deterministic, and a correct compiler, by definition,
produces a semantically identical program on output as it got on input,
it is true that race conditions (which are usually defined as
resulting in observable non-determinism) will not arise on output,
because in a truly functional language they cannot be specified in the
input.

However, if you are implying that race conditions (or, more generally,
non-determinism) are a sign of "unreliable programs", I disagree. Race
conditions have gotten a bad name only because they often arise as the
symptom of a bug in message passing or shared memory programs, and don't
offer much help in tracing that bug down. Please don't shoot the
messenger (though maybe you can go ahead and shoot the "messager").
Sometimes (actually, fairly often, in the real world) you *want* to
specify/program non-determinism, and functional programs aren't
generally of much help.

John Feo, of the SISAL group at LLNL, proposed a set of problems to be
solved in many languages, which have become known as the Salishan
Problems. One was called the "Doctor's Office", and described patients
coming into a doctor's office, sitting in a waiting room, being helped
by a doctor, etc., on a first-come first-served basis. Whenever
something is to happen on a "first-come first-served basis", and you
can't determine beforehand who or what will come first, the course of
the computation is non-deterministic. (Another common example is the
complete output stream on a printer, if it is produced in the order
which programs create and close their output files, since one cannot
necessarily determine that order.) Though my recollection of the
presentation of solutions at Salishan is rusty (and I never did see
John's book documenting the solutions), I do recall that the functional
programming people generally had to fake the non-determinism by using
pseudo-random number generators. Other programming approaches, like my
own LGDF2 (a.k.a. F-Nets, the basic computational model for Software
Cabling) and Chandy and Misra's Unity, were able to very naturally
express the non-determinism intrinsic to the problem.
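
To make the non-determinism point concrete, here is a minimal C sketch
(invented, and not one of the Salishan solutions): two "patients"
contend for one "doctor" on a first-come first-served basis, and which
one is served first depends on the scheduler, not on anything written
in the source.

#include <stdio.h>
#include <pthread.h>

static pthread_mutex_t doctor = PTHREAD_MUTEX_INITIALIZER;

static void *patient(void *arg)
{
    /* whoever arrives at the lock first is served first */
    pthread_mutex_lock(&doctor);
    printf("doctor sees patient %s\n", (char *)arg);
    pthread_mutex_unlock(&doctor);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, patient, "A");
    pthread_create(&b, NULL, patient, "B");
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;   /* both output orders are correct: the race is the spec */
}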

In the case of F-Nets (or Software Cabling), non-determinism may or may
not be a bug, just like any other part of the specification. It is easy
to statically find all potential sources of non-determinism in an F-Net,
very much like a Petri Net, and thus to statically decide whether each
is desirable/intended or not. Even if they are intentional, a compiler
can make the same analyses to efficiently trace all non-deterministic
choices made during an execution, allowing them to be replayed later.
In any case, reconstructing the path of computation from inputs to
output is relatively straightforward, just as in functional languages.

Ken Moore

unread,
Jul 20, 2003, 12:07:31 PM7/20/03
to
In article <3F1A6CC7...@elepar.com>, David C DiNucci
<da...@elepar.com> writes

>However, if you are implying that race conditions (or, more generally,
>non-determinism) are a sign of "unreliable programs", I disagree.

In the context of engineering analysis, different results from the same
input are not usually what you want. For some stable computations
(e.g. relaxation methods), indeterminate relative finishing times for
different parallel threads may not matter, because the solutions still
converge to the accuracy required. In other sorts of
computation the results can be misleading.

> Race
>conditions have gotten a bad name only because they often arise as the
>symptom of a bug in message passing or shared memory programs, and don't
>offer much help in tracing that bug down. Please don't shoot the
>messenger (though maybe you can go ahead and shoot the "messager").
>Sometimes (actually, fairly often, in the real world) you *want* to
>specify/program non-determinism, and functional programs aren't
>generally of much help.

Fair enough.

Uncle Bob (Robert C. Martin)

unread,
Jul 20, 2003, 3:25:19 PM7/20/03
to
"Shayne Wissler" <thal...@yahoo.com> might (or might not) have
written this on (or about) Sat, 19 Jul 2003 00:35:37 GMT, :

You are probably right.

David C DiNucci

unread,
Jul 20, 2003, 6:32:50 PM7/20/03
to
Ken Moore wrote:
>
> In article <3F1A6CC7...@elepar.com>, David C DiNucci
> <da...@elepar.com> writes
> >However, if you are implying that race conditions (or, more generally,
> >non-determinism) are a sign of "unreliable programs", I disagree.
>
> In the context of engineering analysis, different results from the same
> input are not usually what you want.

Yes, "usually", but it also depends on what you regard as "different
results". For example, if the result is a set, and the order in which
the elements of the set are listed is unimportant to you (as it is
mathematically), then why should you specify that order as part of the
algorithm description (as you must in some functional
languages/problems), especially if the algorithm can be sped up
significantly by not specifying it? There are other data structures,
besides sets, where the same sort of logic applies.
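
For instance (a sketch, using OpenMP's reduction clause as one
off-the-shelf way to leave the order unspecified): the combination
order below is chosen by the runtime, so the last digits of the sum may
vary with the thread count, which is fine precisely when the result is
order-insensitive enough for your purposes.

#include <stdio.h>

int main(void)
{
    double sum = 0.0;
    int i;
    /* the order in which partial sums are combined is unspecified */
    #pragma omp parallel for reduction(+:sum)
    for (i = 0; i < 1000000; i++)
        sum += 1.0 / (i + 1.0);
    printf("%.15f\n", sum);
    return 0;
}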

This also reminds me of a scientific code I peripherally worked on in
grad school. In parallel (on a Cray, I believe), the program yielded a
seemingly wildly different answer than the scientist who had supplied the
program had seen when he ran it sequentially on the same machine, so we
(logically) assumed that the parallel version had a bug. However, it
turned out that even running the original code sequentially on another
machine (a Sun) turned up the same "wrong" answer. We (primarily Lise
Storc) traced the problem back to a particular floating point
comparison, which was so close that the differences in arithmetic on the
different machines altered the outcome and sent the program in wildly
different directions. It turned out that the program was looking for
some sort of minimum energy state, and upon hearing about our debugging
effort, the scientist decided that both answers were basically right,
they were just different energy wells that it had fallen into. (This
was before the hoopla about chaos theory.)
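
The flavour of that failure, reconstructed in C (this is not the
original code): when a computed value lands almost exactly on a
comparison threshold, a difference in the last place between two
machines' arithmetic can send the run down entirely different branches.

#include <stdio.h>

int main(void)
{
    double energy = 0.1 + 0.2;   /* not exactly 0.3 in binary; IEEE
                                    doubles round it a hair above */
    if (energy <= 0.3)           /* a one-ulp change in the arithmetic
                                    flips this branch */
        printf("descend into well 1\n");
    else
        printf("descend into well 2\n");
    return 0;
}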

gr...@cs.uwa.edu.au

unread,
Jul 21, 2003, 1:26:36 AM7/21/03
to
In comp.lang.misc S Perryman <q...@q.com> wrote:
: If someone who has been exposed to the concepts of referential
: transparency, Beta reduction, the Y combinator, standard and normal
: order evaluation, the Church-Rosser theorem, the S/K/I combinators,
: program transformation, lambda-lifting etc
: has 'basic' knowledge, then what should the FP bods make of
: someone whose summary of FP is :

: : Functional programming is based upon lambda calculus,
: : it is structured around procedural message passing,

Umm, probably that they didn't really understand it as abstract
concepts, and are trying to describe it to themselves in terms of
things they are more comfortable/experienced with. Or equally
likely, that they are trying to give a rough, dumbed-down precis
to an audience with those qualities, and probably sacrificing
accuracy for familiarity.

Perhaps if you reword "procedural message passing" as "data flow",
you will get a closer approximation, at least for lazy languages.
As you get stricter or less pure, the structure and "flow of
control" vary widely, doubtless passing through "procedural message
passing" at some points.

-Greg

Paul Campbell

unread,
Jul 22, 2003, 12:44:39 PM7/22/03
to

"H. S. Lahman" <h.la...@verizon.net> wrote in message news:3F180774...@verizon.net...

> I am not sure I understand this point. What you seem to be describing
> is a subsystem that encapsulates a particular subject matter. "Seeing"
> such subsystems is a methodological issue. But at the object
> interaction level within a subsystem the sequences are all defined by
> the developer's solution algorithm; one just defines those sequences at
> the Interaction Diagram level.

> > IMO it's a plain tradeoff - in return for removing the possibility of behaviour
> > dependencies you are splitting up natural multi-object interactions and making them
> > less tractable.
> >
> > Personally I prefer to actually see multi-object algorithms directly rather than have
> > them emerge from interacting state machines.
>
> That is exactly why one defines where events are generated and where
> they go at the Interaction Diagram level. One observes the solution
> flow of control at a much higher level of abstraction where one is only
> interested in object responsibilities and not their implementations.

Well I guess my point is that this "higher level of abstraction" is not expressed
in terms of classes and methods - it is "outside" them somehow. However you
dress it up, this higher level interaction is an algorithm. I don't want to see these
high level interaction algorithms in a separate layer outside (or "above") class
behaviour because IMO they relate to specific class collaborations more than
they relate to each other, so it makes more sense to me to make them an integral
part of that collaboration. This may sometimes mean using "controller"/"god"
classes - but it works just fine for me.
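
The shape of what I mean, sketched in C with invented names: the
multi-object algorithm is one readable routine on the controller, not
an emergent property of interacting state machines.

#include <stdio.h>

typedef struct { int stock; } Inventory;
typedef struct { Inventory inv; } Controller;

static int  inventory_reserve(Inventory *i) { return i->stock-- > 0; }
static void payment_charge(void)            { printf("charged\n"); }
static void shipping_schedule(void)         { printf("scheduled\n"); }

/* the collaboration, visible top to bottom in one place */
static void place_order(Controller *c)
{
    if (!inventory_reserve(&c->inv))
        return;
    payment_charge();
    shipping_schedule();
}

int main(void)
{
    Controller c = { { 1 } };
    place_order(&c);
    return 0;
}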

In a nutshell - what you like to see in some form of interaction specification, I like
to see in some actual class method somewhere. I still remain unconvinced that
my preference is intrinsically inferior. E.g. in respect of unit testing, your approach
simply moves the specification of key system behaviour above the class level and
consequently out of the class-level unit testing domain, so *of course* that makes class
level unit testing easier. But all you have done is shift the problem up a level.

Paul C.


H. S. Lahman

unread,
Jul 22, 2003, 5:56:09 PM7/22/03
to
Responding to Campbell...

> In a nutshell - what you like to see in some form of interaction specification, I like
> to see in some actual class method somewhere. I still remain unconvinced that
> my preference is intrinsically inferior. E.g. in respect of unit testing, your approach
> simply moves the specification of key system behaviour above the class level and
> consequently out of the class-level unit testing domain, so *of course* that makes class
> level unit testing easier. But all you have done is shift the problem up a level.

The hierarchical approach is not /intrinsically/ inferior; the
advantages and disadvantages are relative to context. In a context
where requirements are volatile it is at a disadvantage because it
requires more developer discipline to avoid context dependencies in
methods. If the solution sequence is hard-wired into method
implementations, then one must modify those implementations if the
requirements change and a new sequence is required (rather than simply
rerouting existing messages). OTOH, in a context where requirements are
rock solid the hierarchical approach has an advantage because it is a
more intuitive fit to the computation model on which computers and 3GLs
are built, so one should be able to do initial development faster.

As far as unit testing is concerned, being easier is important because
of the combinatorial problem of exhaustive testing. In general it is
not feasible to do exhaustive unit testing of hierarchical programs
without stubbing called functions. But that breaks the chain of
specification so that the test is incomplete. One can break that chain
in functional programming because the side effects of persistent state
variables have been eliminated (i.e., so long as each function in the
hierarchy is unit tested, stubbing is valid and the testing is
complete). That is not true for either procedural or OO development
where there is persistent state and, consequently, the potential for
side effects.

[One can turn this around and argue that OO's emphasis on method
context-independence is a primary mechanism in controlling persistent
state in OO development. If one can exhaustively validate a method
against its specification when generated messages have been /removed/
(i.e., the method under test has no dependence on the response to the
message), then one has properly eliminated side effects due to
persistent state relative to that method's specification.]
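
A minimal C sketch of the stubbing problem (all names invented): to
unit test process() we stub out lookup(), but the real lookup() has a
side effect on persistent state that the stub does not reproduce, so
the stubbed test validates something weaker than the real composition.

#include <stdio.h>

static int table[10];                      /* persistent state */

static int lookup(int key)                 /* real version: side effect */
{
    table[key % 10]++;                     /* each call bumps a counter */
    return table[key % 10];
}

static int lookup_stub(int key)            /* stub: side effect gone */
{
    (void)key;
    return 1;
}

static int process(int key, int (*find)(int))
{
    return find(key) + find(key);          /* depends on the side effect */
}

int main(void)
{
    printf("stubbed: %d\n", process(7, lookup_stub));   /* 2 */
    printf("real:    %d\n", process(7, lookup));        /* 3 */
    return 0;
}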

Mayer Goldberg

unread,
Jul 28, 2003, 7:43:07 PM7/28/03
to
> : what should the FP bods make of
> : someone whose summary of FP is :
>
> : : Functional programming is based upon lambda calculus,
> : : it is structured around procedural message passing,
>
> Umm, probably that they didn't really understand it as abstract
> concepts, and are trying to describe it to themselves in terms of
> things they are more comfortable/experienced with. Or equally
> likely, that they are trying to give a rough, dumbed-down precis
> to an audience with those qualities, and probably sacrificing
> accuracy for familiarity.
>
> Perhaps if you reword "procedural message passing" as "data flow",
> you will get a closer approximation, at least for lazy languages.
> As you get stricter or less pure, the structure and "flow of
> control" vary widely, doubtless passing through "procedural message
> passing" at some points.

You can rationalise anything in this way. I remember TA'ing a C course
for non-comp sci majors, and the course was structured in a funny way:
The last two weeks were spent on a crash course in Fortran of all
things (it really did crash). So on the final, the instructor asked
"What's the differnece between a function and a subroutine" and I
happened to get the examination book of someone that wrote "Functions
are integers you can call; Subroutines are arrays that return to
main." (I memorised this prose word-for-word; This is not a joke)

Now you can rationalise this too: Functions in C are accessed through
a pointer to some code, and a pointer is just an integer...
Subroutines are pointers to memory, and you can think of memory as an
array, so there you have it. I prefer though what one of our system
people said when I told him of this brilliant answer: "Not a fu**ing
clue!"

I actually photocopied the page with that answer and for a while had
it posted on my door.

Mayer

Rupert Pigott

unread,
Jul 28, 2003, 8:54:24 PM7/28/03
to
"Mayer Goldberg" <gma...@cs.bgu.ac.il> wrote in message
news:19b9b128.03072...@posting.google.com...

One way ticket mo.... :)

Cheers,
Rupert


David C DiNucci

unread,
Jul 29, 2003, 3:45:24 AM7/29/03
to
Mayer Goldberg wrote:
> ... So on the final, the instructor asked
> "What's the difference between a function and a subroutine" and I
> happened to get the examination book of someone that wrote "Functions
> are integers you can call; Subroutines are arrays that return to
> main." (I memorised this prose word-for-word; This is not a joke)
>
> Now you can rationalise this too: Functions in C are accessed through
> a pointer to some code, and a pointer is just an integer...
> Subroutines are pointers to memory, and you can think of memory as an
> array, so there you have it.

I actually meant it the other way--i.e. "Functions are (like) arrays
that return to main." This is because a function call in Fortran has
nearly identical syntax and potential contexts as an array element
reference. Sometimes it even goes further--e.g.:

function n(i,j)
dimension k(10,20)
k(i,j) = 1000 / i + 73 * j
n = k(i,j)
return
end

Function n will reliably return the same answer whether or not the
"dimension" statement is removed (i.e. whether k is a function or an
array) as long as i and j are within bounds of the array (or at least
0 <= (i-1) + 10*(j-1) < 200, given Fortran's column-major layout).

It seems I always got the tough graders.

-- Dave

NOSPA...@sebastian9.com

unread,
Jul 30, 2003, 5:36:41 AM7/30/03
to

According to jonah thomas <j2th...@cavtel.net>:

> Robert Klemme wrote:
> > "David C DiNucci" <da...@elepar.com> schrieb
> >>Robert Klemme wrote:
> >>>In fact this is a principle used to manage complexity of software.
>
> >>If those "other components" have effects which are still visible at
> >>higher levels of abstraction, then the software engineer *does* need to
> >>know, and hiding their use doesn't help to manage complexity of the
> >>software, it helps to increase and obfuscate it.
>
> > Hmmm... To me this sounds like bad usage of abstraction: if there are
> > hidden side effects in a lower level of abstraction that are not made
> > explicit (e.g. via callback interfaces), then you have the typical
> > situation of spaghetti code IMHO. Can you please give an example where
> > this is not the case?
>
> I can't. In every case it happens by mistake. If it had been done
> correctly there would be no problem. But then, if the design had been
> done correctly without using abstraction then there would be no problem
> there, either.

Indeed. In my experience, the place where a developer most needs
to be able to cut through all those nice levels of abstraction is
in the debugger. When something has, almost by definition, gone
wrong with one or more of the levels of abstraction, all that
layering can get in the way of trying to figure out what has actually
happened.

I've had the "joy" of trying to debug a nicely-layered
system on HP-UX, where you can't invoke any functions in the
process' address space once the program has faulted.
So, e.g., once the debugger has caught a seg fault, or failed assert,
you can't invoke the API to give you any information about that
nice opaque pointer that seems to be related to the problem.
You either have to catch the program just before the point of
fault, so you can still invoke functions from within the debugger,
or you have to manually undo enough levels of abstraction so that
you can access the data in terms of the underlying language elements.
If the code had exposed enough of the implementation details
so that the debugger could understand the objects in question
(or alternately, if the debugger understood the various layers
of abstraction), this would have been a lot easier.
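
The pattern in question, sketched in C (names invented): the library
hands out an opaque handle, and the only way to decode it is to call
back into the library -- which is exactly what you cannot do once the
process has faulted.

#include <stdio.h>

typedef struct widget widget_t;            /* all clients see is this */

struct widget { int fd; char name[32]; };  /* normally hidden in widget.c */

/* the API's own inspector: fine while the debugger can still call
   functions in the process, useless once it has faulted */
static void widget_dump(const widget_t *w)
{
    printf("widget \"%s\" (fd %d)\n", w->name, w->fd);
}

int main(void)
{
    widget_t w = { 3, "spooler" };
    widget_dump(&w);
    return 0;
}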

--
Dave Wallace (Remove NOSPAM from my address to email me)
It is quite humbling to realize that the storage occupied by the longest
line from a typical Usenet posting is sufficient to provide a state space
so vast that all the computation power in the world can not conquer it.

Jan C. Vorbrüggen

unread,
Jul 30, 2003, 6:40:05 AM7/30/03
to
> I've had the "joy" of trying to debug a nicely-layered
> system on HP-UX, where you can't invoke any functions in the
> process' address space once the program has faulted.

Bad luck, having to use such a broken system.

Jan

Nick Maclaren

unread,
Jul 30, 2003, 10:32:40 AM7/30/03
to

In article <3F27A085...@mediasec.de>,
Jan C. Vorbrüggen <jvorbr...@mediasec.de> writes:
|> > I've had the "joy" of trying to debug a nicely-layered
|> > system on HP-UX, where you can't invoke any functions in the
|> > process' address space once the program has faulted.
|>
|> Bad luck, having to use such a broken system.

That is a matter of opinion. I have attempted (and failed) to
use debuggers that could be used to display complicated objects
ONLY by invoking functions in the process's address space. Now,
those really ARE broken!


Regards,
Nick Maclaren.

Uncle Bob (Robert C. Martin)

unread,
Jul 30, 2003, 1:56:45 PM7/30/03
to
nm...@cus.cam.ac.uk (Nick Maclaren) might (or might not) have written
this on (or about) 30 Jul 2003 14:32:40 GMT, :

IMHO all debuggers are broken to one extent or another. IMHO the
practice of relying on a debugger is broken.

I use print statements, and I use them very sparingly. I write lots
of unit tests, and get each one working before going on to the next.
This helps me avoid a lot of debugging. I spend very little time
debugging nowadays.
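
For anyone who hasn't seen the style, a minimal sketch in C of what I
mean (assert-based, no framework assumed; the unit under test is a toy
invented for illustration): each test is made to pass before the next
one is written.

#include <assert.h>
#include <string.h>

/* unit under test: length of a string ignoring trailing spaces */
static size_t trimmed_len(const char *s)
{
    size_t n = strlen(s);
    while (n > 0 && s[n - 1] == ' ')
        n--;
    return n;
}

int main(void)
{
    assert(trimmed_len("abc") == 3);     /* test 1: make it pass... */
    assert(trimmed_len("abc  ") == 3);   /* ...then write test 2 */
    assert(trimmed_len("") == 0);        /* and so on */
    return 0;
}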

Jan C. Vorbrüggen

unread,
Jul 31, 2003, 3:23:43 AM7/31/03
to
> IMHO all debuggers are broken to one extent or another.

Have you ever used the VMS debugger?

> IMHO the practice of relying on a debugger is broken.

"relying"? Agreed. But that is another matter.

Jan

hack

unread,
Jul 31, 2003, 1:06:34 PM7/31/03
to
In article <3F28C3FF...@mediasec.de>,
Jan C. Vorbrüggen <jvorbr...@mediasec.de> wrote:

All programs are broken (have bugs); debuggers are programs, so...

There is a real issue here, however. Many debuggers have (sometimes
nonobvious) dependencies on the software environment, and on certain
software conventions being observed. This *may* be sufficient when
only source-level bugs that can't break those assumptions are to be
tracked down, but is grossly deficient when debugging assembly code,
or tracking down compiler bugs. Lose the stack pointer, and the
debugger may get so confused it makes things worse; at the very least
good debuggers should detect broken assumptions and warn the user
(e.g. "Environment messed up too badly, can't continue debugging,
please kill the session"). Of course, conveying that last message
may be impossible at that point. (Indeed, having an independent
I/O interface, or a reliable way to share an existing interface at
a low-enough level, is essential here.)

I'm familiar with a debugger that makes no assumptions whatsoever
about software conventions -- it only knows and cares about the
Principles of Operation of the machine. Its only dependency is on
the integrity of the storage area it occupies, and the probability
of trespass while the debugger is not watching can be minimised by
making the OS believe that this storage does not exist. (Trespass
is actively prevented when the debugger is in control, e.g. while
single-stepping or tracing.) This approach to debugging is of
course very machine-specific -- but it is completely independent
of the operating system used on that machine (except possibly for
initial installation, though even that can be avoided by IPLing
the debugger first, and then booting the OS under the debugger by
single-stepping the boot code, with macros to watch out for those
critical things the debugger needs to be aware of, e.g. interrupt
vector initialisation, or storage table initialisation).

This debugger may of course have bugs of its own -- but at least
it's robust by design. In general, when doubts arise, one should
be able to believe the debugger. Unfortunately, in practice many
debuggers are developed as an afterthought, and are not exercised
well enough, or in sufficiently demanding situations, so that when
an unusual bug is being tracked down, it is often more likely that
the debugger is confused. That's sad.

I've never used VMS. Perhaps its debugger fits the model of "when
in doubt, always believe the debugger". That would be nice!

Michel.

Nick Maclaren

unread,
Jul 31, 2003, 3:01:22 PM7/31/03
to
In article <bgbiaq$dom$1...@news.btv.ibm.com>, hack <ha...@watson.ibm.com> wrote:
>
>I'm familiar with a debugger that makes no assumptions whatsoever
>about software conventions -- it only knows and cares about the
>Principles of Operation of the machine. Its only dependency is on
>the integrity of the storage area it occupies, and the probability
>of trespass while the debugger is not watching can be minimised by
>making the OS believe that this storage does not exist. ...

So am I. And I was an extremely heavy and advanced user of it.
I find typical modern debuggers almost completely useless, because
they help only with the problems for which I don't need one in the
first place!

A competently designed operating system will do a lot better than
minimise the probability of trespass - it will eliminate it. For
example, one of the great mistakes of MVT/MVS was to restrict the use
of program keys to privileged code; there is no reason why the
debugger should not have been able to run in (say) key 9 when
debugging a program running in key 8. Similarly, some Unices do
not require the debugger to interact with a stub in the program, and
so are automatically immune from trespass; in fact, I regard any
Unix/debugger that does require a stub as defective (perhaps due to
deficient hardware, but still).

>This debugger may of course have bugs of its own -- but at least
>it's robust by design. In general, when doubts arise, one should
>be able to believe the debugger. Unfortunately, in practice many
>debuggers are developed as an afterthought, and are not exercised
>well enough, or in sufficiently demanding situations, so that when
>an unusual bug is being tracked down, it is often more likely that
>the debugger is confused. That's sad.

No, it's not quite that. They are designed to help tyros locate
"value errors" in their programs, and not to help with severe
overwriting, bugs in the compiler or bugs in the run-time system.
Yes, most errors are like the former, but they are TRIVIAL to locate
compared to the latter classes.


Regards,
Nick Maclaren.

Uncle Bob (Robert C. Martin)

unread,
Jul 31, 2003, 8:08:08 PM7/31/03
to
ha...@watson.ibm.com (hack) might (or might not) have written this on
(or about) 31 Jul 2003 17:06:34 GMT, :

>In article <3F28C3FF...@mediasec.de>,
>Jan C. Vorbrüggen <jvorbr...@mediasec.de> wrote:
>>> IMHO all debuggers are broken to one extent or another.
>>
>>Have you ever used the VMS debugger?
>>
>>> IMHO the practice of relying on a debugger is broken.
>>
>>"relying"? Agreed. But that is another matter.
>>
>> Jan
>
>All programs are broken (have bugs); debuggers are programs, so...
>
>There is a real issue here, however. Many debuggers have (sometimes
>nonobvious) dependencies on the software environment, and on certain
>software conventions being observed.

That's true, but it wasn't the issue I was alluding to. IMHO the
practice of relying on a debugger is a broken practice. I have met
teams whose first reaction to *anything* is to fire up the debugger
instead of thinking. I once watched a developer painstakingly set up
breakpoints and carefully step through the code spending fifteen or
twenty minutes only to reach a line of code that he already knew was
broken. He just couldn't shake his reliance on the debugger. That
reliance had turned into a significant liability.

Andy Glew

unread,
Aug 1, 2003, 1:06:25 AM8/1/03
to
> I use print statements, and I use them very sparingly. I write lots
> of unit tests, and get each one working before going on to the next.
> This helps me avoid a lot of debugging. I spend very little time
> debugging nowadays.
>
>
> Robert C. Martin | "Uncle Bob"
> Object Mentor Inc.| unclebob @ objectmentor . com

Way cool! comp.arch,
let me introduce Robert Martin,
prominent in the XP (Extreme Programming) community.

2 of my favorite things, computer architecture and XP, coming together -
which is what I have been trying to do since 1997.

Phlip

unread,
Aug 1, 2003, 2:13:48 AM8/1/03
to
Andy Glew wrote:

Pish-posh! Don't you know XP programmers don't bother with architecture, and
they just throw together any old things that work?!

(Run, duck & hide;)

--
Phlip
http://www.c2.com/cgi/wiki?TestFirstUserInterfaces


Jan C. Vorbrüggen

unread,
Aug 1, 2003, 4:19:50 AM8/1/03
to
> All programs are broken (have bugs); debuggers are programs, so...

Oh sure. Like when that (otherwise) wonderful VMS debugger in a point
release acquired the naive O(n^2) algorithm for sorting its symbol table...
quite an embarrassment.

> There is a real issue here, however. Many debuggers have (sometimes
> nonobvious) dependencies on the software environment, and on certain
> software conventions being observed. This *may* be sufficient when
> only source-level bugs that can't break those assumptions are to be
> tracked down, but is grossly deficient when debugging assembly code,
> or tracking down compiler bugs.

Just so. Having been brought up on VMS and its (initially) command-line
debugger, I just couldn't believe how deficient the usual Unix et al.
debuggers were, in spite of their glitzy GUI.

> I'm familiar with a debugger that makes no assumptions whatsoever
> about software conventions -- it only knows and cares about the
> Principles of Operation of the machine. Its only dependency is on
> the integrity of the storage area it occupies, and the probability
> of trespass while the debugger is not watching can be minimised by
> making the OS believe that this storage does not exist.

The OS doesn't write-protect its code!?

The VMS kernel/interupt-mode debugger, (X)DELTA, seems to fit your
description otherwise.

> I've never used VMS. Perhaps its debugger fits the model of "when
> in doubt, always believe the debugger". That would be nice!

Yep.

You do have the chance to use it via the Internet, although not at
the level you'd want. Or you might get one of the two or three VAX
emulators, install VMS (free "hobbyist" license available) on that
and play around.

Jan

Phlip

unread,
Aug 1, 2003, 4:37:34 AM8/1/03
to
Jan C. Vorbrüggen wrote:

> Just so. Having been brought up on VMS and its (initially) command-line
> debugger, I just couldn't believe how deficient the usual Unix et al.
> debuggers were, in spite of their glitzy GUI.

Draw a distinction here:

- all platforms should support a non-heinous and full-featured debugger
- programmers should avoid using it by any means necessary

Fortunately the latter is easy, given a little discipline.

--
Phlip
http://www.c2.com/cgi/wiki?TestFirstUserInterfaces


Nick Maclaren

unread,
Aug 1, 2003, 5:18:53 AM8/1/03
to

In article <iFpWa.625$p76.29...@newssvr15.news.prodigy.com>,
"Phlip" <phli...@yahoo.com> writes:
|> Jan C. Vorbrüggen wrote:
|>
|> > Just so. Having been brought up on VMS and its (initially) command-line
|> > debugger, I just couldn't believe how deficient the usual Unix et al.
|> > debuggers were, in spite of their glitzy GUI.
|>
|> Draw a distinction here:
|>
|> - all platforms should support a non-heinous and full-featured debugger
|> - programmers should avoid using it by any means necessary
|>
|> Fortunately the latter is easy, given a little discipline.

Oh, yeah? What proportion of your debugging time do you spend on
tracking down code generation errors in the compiler (almost always
associated with aggressive optimisation) and/or errors in the
system libraries and run-time system?

Without either the source of those components or a decent, transparent
debugger, you are Up Shit Creek. It can be done, and I have spent far
too much of my life doing it, but it is unmitigated hell.


Regards,
Nick Maclaren.

Phlip

unread,
Aug 1, 2003, 9:33:11 AM8/1/03
to
> |> > Just so. Having been brought up on VMS and its (initially)
> |> > command-line debugger, I just couldn't believe how deficient the
> |> > usual Unix et al. debuggers were, in spite of their glitzy GUI.
> |>
> |> Draw a distinction here:
> |>
> |> - all platforms should support a non-heinous and full-featured debugger
> |> - programmers should avoid using it by any means necessary
> |>
> |> Fortunately the latter is easy, given a little discipline.
>
> Oh, yeah? What proportion of your debugging time do you spend on
> tracking down code generation errors in the compiler (almost always
> associated with aggressive optimisation) and/or errors in the
> system libraries and run-time system?
>
> Without either the source of those components or a decent, transparent
> debugger, you are Up Shit Creek. It can be done, and I have spent far
> too much of my life doing it, but it is unmitigated hell.

The word "distinction" did not imply only one statement could be true.

Take a deep breath. And the narrowest test case possible.

--
Phlip
http://www.c2.com/cgi/wiki?TestFirstUserInterfaces


Nick Maclaren

unread,
Aug 1, 2003, 9:41:47 AM8/1/03
to

In article <r_tWa.633$gt6.30...@newssvr15.news.prodigy.com>,
"Phlip" <phli...@yahoo.com> writes:
|> > |>
|> > |> Draw a distinction here:
|> > |>
|> > |> - all platforms should support a non-heinous and full-featured
|> debugger
|> > |> - programmers should avoid using it by any means necessary
|> > |>
|> > |> Fortunately the latter is easy, given a little discipline.
|> >
|> > Oh, yeah? What proportion of your debugging time do you spend on
|> > tracking down code generation errors in the compiler (almost always
|> > associated with aggressive optimisation) and/or errors in the
|> > system libraries and run-time system?
|> >
|> > Without either the source of those components or a decent, transparent
|> > debugger, you are Up Shit Creek. It can be done, and I have spent far
|> > too much of my life doing it, but it is unmitigated hell.
|>
|> The word "distinction" did not imply only one statement could be true.

I never assumed that it did.

|> Take a deep breath. And the narrowest test case possible.

No problem. My test case is when all programs are correct.
Yes, I agree that it is easy to avoid needing a debugger in
that case :-)

However, my points were and are that it is NOT easy to avoid
needing a debugger when dealing with real problems, discipline
or not, and that a simple "value error only" debugger is not a
lot of use.


Regards,
Nick Maclaren.

Phlip

unread,
Aug 1, 2003, 9:50:46 AM8/1/03
to
> However, my points were and are that it is NOT easy to avoid
> needing a debugger when dealing with real problems, discipline
> or not, and that a simple "value error only" debugger is not a
> lot of use.

I don't have a dog in the '"value error only" debugger' fight.

> Regards,
> Nick Maclaren.

But try this:

- only make < 10 edits before hitting the test button

Now suppose you get a mysterious error. Not something dumb like you forgot a
function parameter.

In this case, you should tell your editor to return the source to the state
the code was in when all the tests last passed; then you test again, and
will probably get a pass.

So you make the same edits again, or a very careful subset of the edits.

Now you get your mystery bug again.

Back off again, and change the test so you will need fewer edits.

If you have the bug after changing just one line, you know where to put your
first breakpoint.

"Design for Testing" means we will modularize components in space, so we can
use a probe to test each one in isolation. I suggest we isolate edits in
time, the same way, so we can either discard or probe the >edit< in
isolation.

Yes, there are still loopholes in this procedure. This mystery bug could
affect the editor's rollback ability. But the procedure still narrows!

--
Phlip
http://www.c2.com/cgi/wiki?TestFirstUserInterfaces


Ken Hagan

unread,
Aug 1, 2003, 10:11:58 AM8/1/03
to
Phlip wrote:
>
> But try this:
>
> - only make < 10 edits before hitting the test button

I think you've completely missed the point here. The bugs Nick
is referring to just aren't in your code. Using a binary search
to determine the smallest change in your code that provokes the
bug in the compiler or OS is not going to help you.

You use a debugger to debug *other* people's code, not your own.


jonah thomas

unread,
Aug 1, 2003, 10:25:08 AM8/1/03
to
Phlip wrote:

>>However, my points were and are that it is NOT easy to avoid
>>needing a debugger when dealing with real problems, discipline
>>or not, and that a simple "value error only" debugger is not a
>>lot of use.

> I don't have a dog in the '"value error only" debugger' fight.

> But try this:

> - only make < 10 edits before hitting the test button

> Now suppose you get a mysterious error. Not something dumb like you forgot a
> function parameter.

> In this case, you should tell your editor to return the source to the state
> the code was in when all the tests last passed; then you test again, and
> will probably get a pass.

> So you make the same edits again, or a very careful subset of the edits.

> Now you get your mystery bug again.

> Back off again, and change the test so you will need fewer edits.

> If you have the bug after changing just one line, you know where to put your
> first breakpoint.

That works when the changes aren't coordinated. If you're changing a
method you'll have to make multiple changes to get it to all fit
together, and you can't pick and choose among them.

> "Design for Testing" means we will modularize components in space, so we can
> use a probe to test each one in isolation.

Yes. That works.

> I suggest we isolate edits in
> time, the same way, so we can either discard or probe the >edit< in
> isolation.

So make *one* edit and then do unit tests on the small routine you
changed. If the interface hasn't changed then that will settle it.

> Yes, there are still loopholes in this procedure. This mystery bug could
> affect the editor's rollback ability. But the procedure still narrows!

The bug is in the editor? It's affecting the editor's files?

Anyway, that method works when it's your own code.

When you have to use buggy libraries written by other people, then there
are testing issues. When you don't have source code for the buggy
libraries, even worse.

Given the choice I avoid buggy libraries and avoid libraries I don't
have source for.

hack

unread,
Aug 1, 2003, 11:00:53 AM8/1/03
to
In article <3F2A22A6...@mediasec.de>,
Jan C. Vorbrüggen <jvorbr...@mediasec.de> wrote:
>> [M. Hack:]

>> I'm familiar with a debugger that makes no assumptions whatsoever
>> about software conventions -- it only knows and cares about the
>> Principles of Operation of the machine. Its only dependency is on
>> the integrity of the storage area it occupies, and the probability
>> of trespass while the debugger is not watching can be minimised by
>> making the OS believe that this storage does not exist.
>
>The OS doesn't write-protect its code!?

Some do, some don't, and perhaps what I'm debugging is the OS' failure
to protect something it thought it did! Also, those portions of the OS
that run with DAT (Address Translation) off and key 0 (for some OSs this
can be the entire kernel) have no protection available (or only a tiny
but critical amount, namely interrupt vectors).

I like the description of the VMS kernel/interrupt debugger. I'm glad
to see others have taken kernel debugging seriously. I was very proud
of the fact that my debugger (PRY, in case anybody has heard this name
before) is capable of single-stepping though another debugger's (or
the OS') first-level interrupt handlers, including the single-step
interrupt handler (so one copy of PRY can debug another one at the
lowest level). The MP issues are quite interesting, btw.

One more thing I'd like to mention. You contrast a simple reliable
line-mode interface with a fancy GUI. There is an intermediate position
here: a character-array display (fixed-width font of course) that keeps
interesting items on the screen without having to scroll all the time,
and a separate command and/or message area. This is a big step up in
usability over a simple scrolling line-by-line interface, without going
ape with a fancy GUI (and hence supportable with a stand-alone reliable
console driver). This is the interface PRY offers. But it may be a
matter of taste: the one IBM product (probably defunct by now) that used
PRY as its kernel debugger (AIX/ESA) used a dbx-like line-mode front end
"because that's what customers would expect", and the command to enter
full-screen mode might not even have been documented (don't remember).

Michel.

Jan C. Vorbrüggen

unread,
Aug 1, 2003, 11:31:16 AM8/1/03
to
> Some do, some don't, and perhaps what I'm debugging is the OS' failure
> to protect something it thought it did! Also, those portions of the OS
> that run with DAT (Address Translation) off and key 0 (for some OSs this
> can be the entire kernel) have no protection available (or only a tiny
> but critical amount, namely interrupt vectors).

Not a design I like. In VMS's boot sequence, quite early address translation
is turned on, and the kernel code is protected from writing. You can of course
call routines to change that (temporarily, usually) - e.g. for the debugger's
single-step mode 8-) - but it's unlikely this happens because of a bug.

> One more thing I'd like to mention. You contrast a simple reliable
> line-mode interface with a fancy GUI. There is an intermediate position
> here: a character-array display (fixed-width font of course) that keeps
> interesting items on the screen without having to scroll all the time,
> and a separate command and/or message area.

That is in fact what the VMS debugger provides in command-line mode. It
uses a package similar to curses to do that.

Jan

Sander Vesik

unread,
Aug 1, 2003, 1:39:18 PM8/1/03
to
In comp.arch Phlip <phli...@yahoo.com> wrote:
>> However, my points were and are that it is NOT easy to avoid
>> needing a debugger when dealing with real problems, discipline
>> or not, and that a simple "value error only" debugger is not a
>> lot of use.
>
> I don't have a dog in the '"value error only" debugger' fight.
>
>> Regards,
>> Nick Maclaren.
>
> But try this:
>
> - only make < 10 edits before hitting the test button
>
> Now suppose you get a mysterious error. Not something dumb like you forgot a
> function parameter.
>
> In this case, you should tell your editor to return the source to the state
> the code was in when all the tests last passed; then you test again, and
> will probably get a pass.
>
> So you make the same edits again, or a very careful subset of the edits.
>
> Now you get your mystery bug again.
>
> Back off again, and change the test so you will need fewer edits.
>
> If you have the bug after changing just one line, you know where to put your
> first breakpoint.

No you don't. See, the problem is that you are assuming that the bug manifested
because it was in your code, in one of the 10 edits. This need not be so - it might
have manifested just because you made a change, so some code got optimised
differently or moved around.

>
> "Design for Testing" means we will modularize components in space, so we can
> use a probe to test each one in isolation. I suggest we isolate edits in
> time, the same way, so we can either discard or probe the >edit< in
> isolation.
>
> Yes, there are still loopholes in this procedure. This mystery bug could
> affect the editor's rollback ability. But the procedure still narrows!

No. It is just an application of inappropriate design methodology to software,
and then making claims based on that.

>
> --
> Phlip
> http://www.c2.com/cgi/wiki?TestFirstUserInterfaces
>
>

--
Sander

+++ Out of cheese error +++

ma...@sandbridgetech.com

unread,
Aug 1, 2003, 4:09:05 PM8/1/03
to

You've clearly never debugged some of the nastiest bugs around.

How about the following:
- added a print statement, program crashed - hard.
- delete print statement, everything works.

At this point we've followed your advice. We have a one-line delta that
causes a bug to appear/disappear. Obviously, the source of the bug is
that line, right?
- unfortunately that print statement is just
fprintf(stderr, "%s %d: result=%f\n", __FILE__, __LINE__, res);
and res is a double.

So, let's do the next step.
- ran under debugger, program works fine.
This is where you really, really want a good set of debugging tools.


By the way the bug wasn't any of the obvious ones - we don't have
array-overruns, or any of the other things that cause heisenbugs in our
code.

It took quite a bit of searching to isolate the problem - it turned out
that passing %.63f to fprintf with certain very small values caused an
off-by-one error in one of the formatting library routines, causing the
header of the next memory block to be corrupted. Eventually, much, much
later this caused the same memory to be malloc'd as parts of two
different blocks. The extra fprintf's contribution was that it caused
the corrupted block to either be realloc'd or free'd (and I still don't
know which it was).
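
The general shape of that class of failure, reconstructed in C (this is
not the actual library bug): a one-byte overrun past an allocated block
can land in the allocator's metadata for the *next* block, and nothing
goes wrong until that block is freed or realloc'd much later, far from
the real culprit.

#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *a = malloc(16);
    char *b = malloc(16);   /* may sit right after a's block */
    memset(a, 'x', 17);     /* off-by-one: byte 17 can clobber the
                               heap header between the blocks */
    free(b);                /* the crash, if any, happens here */
    free(a);
    return 0;
}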

Having a non-perturbing debugger might have helped enormously.

When you've debugged code where the bug source is:
- compiler errors
- library errors
- race conditions in operating system routines
- intermittent hardware failures
maybe you'll be a little less inclined to give simplistic answers.

Rupert Pigott

unread,
Aug 1, 2003, 6:28:29 PM8/1/03
to
<ma...@sandbridgetech.com> wrote in message
news:3F2AC8E1...@sandbridgetech.com...

>
> You've clearly never debugged some of the nastiest bugs around.
>
> How about the following:
> - added a print statement, program crashed - hard.
> - delete print statement, everything works.

I actually had the reverse of that happen a couple of
times. Turned out that the stack was trashed sometime
before the printf... Scared the living crap out of me !

Cheers,
Rupert


Nick Maclaren

unread,
Aug 2, 2003, 6:29:28 AM8/2/03
to
In article <10597769...@saucer.planet.gong>,
Rupert Pigott <r...@dark-try-removing-this-boong.demon.co.uk> wrote:

A COUPLE OF TIMES? Boggle. It is one of the most common effects
in HPC - what with all the optimisation bugs, standards ambiguities and
plain user errors. Or did you not mean 'reverse', in which case I
agree with you - I have seen it only half a dozen times.

The reason is that printf is a function call, and therefore the
compiler usually synchronises its data with the abstract model at
the point of call. This means that whole classes of register bugs
are likely to disappear as soon as you insert a printf call. Of
course, MOST of those are because the user has broken an aliasing
rule or similar :-)

The interesting effect of a critical register being trashed and so
a function call crashing, COMBINED WITH that register being restored
if there is no function call, is quite rare. It happens, though.
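
A sketch of the kind of register bug meant here (an invented example;
the store through ip breaks C's aliasing rules):

#include <stdio.h>

float f = 1.0f;

int main(void)
{
    int *ip = (int *)&f;   /* an int lvalue now refers to a float
                              object: undefined behaviour */
    *ip = 0;               /* an optimiser may assume this cannot
                              touch f, and keep using 1.0f */

    /* Uncommenting the next line "fixes" the bug: at a call the
       compiler must allow for a global like f changing, so it
       reloads f from memory and sees the zeroed bits. */
    /* printf("checkpoint\n"); */

    printf("%f\n", f);     /* may print 1.000000 or 0.000000 */
    return 0;
}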


Regards,
Nick Maclaren.

Rupert Pigott

unread,
Aug 2, 2003, 8:09:16 AM8/2/03
to
"Nick Maclaren" <nm...@cus.cam.ac.uk> wrote in message
news:bgg3q8$334$1...@pegasus.csx.cam.ac.uk...

> In article <10597769...@saucer.planet.gong>,
> Rupert Pigott <r...@dark-try-removing-this-boong.demon.co.uk> wrote:
> ><ma...@sandbridgetech.com> wrote in message
> >news:3F2AC8E1...@sandbridgetech.com...
> >>
> >> You've clearly never debugged some of the nastiest bugs around.
> >>
> >> How about the following:
> >> - added a print statement, program crashed - hard.
> >> - delete print statement, everything works.
> >
> >I actually had the reverse of that happen a couple of
> >times. Turned out that the stack was trashed sometime
> >before the printf... Scared the living crap out of me !
>
> A COUPLE OF TIMES? Boggle. It is one of the most common effects
> in HPC - with all of optimisation bugs, standards ambiguities and
> plain user errors. Or did you not mean 'reverse', in which case I
> agree with you - I have seen it only half a dozen times.

As in :
1) Zut alors ! My code is funted !
2) Argh, no, the debugger is lying to me.
3) I'll put in a printf stub to tell me what the code
thinks is going on because the debugger is lying to me.
4) Holy funt ! My code is working...
5) OK, I'll take the printf out, maybe I was running
the wrong binary or something...
6) Shit, it's broke again...

<repeat ad nauseum until I finally worked out that I
should look elsewhere for the bug>

Cheers,
Rupert


Uncle Bob (Robert C. Martin)

unread,
Aug 2, 2003, 5:27:18 PM8/2/03
to
"Uncle Bob (Robert C. Martin)" <u.n.c.l...@objectmentor.com>
might (or might not) have written this on (or about) Wed, 30 Jul 2003
12:56:45 -0500, :

>nm...@cus.cam.ac.uk (Nick Maclaren) might (or might not) have written
>this on (or about) 30 Jul 2003 14:32:40 GMT, :
>
>>
>>In article <3F27A085...@mediasec.de>,
>>Jan C. Vorbrüggen <jvorbr...@mediasec.de> writes:
>>|> > I've had the "joy" of trying to debug a nicely-layered
>>|> > system on HP-UX, where you can't invoke any functions in the
>>|> > process' address space once the program has faulted.
>>|>
>>|> Bad luck, having to use such a broken system.
>>
>>That is a matter of opinion. I have attempted (and failed) to
>>use debuggers that could be used to display complicated objects
>>ONLY by invoking functions in the process's address space. Now,
>>those really ARE broken!
>
>IMHO all debuggers are broken to one extent or another. IMHO the
>practice of relying on a debugger is broken.
>
>I use print statements, and I use them very sparingly. I write lots
>of unit tests, and get each one working before going on to the next.
>This helps me avoid a lot of debugging. I spend very little time
>debugging nowadays.

NEWS FLASH.

I just fired up a debugger for the first time in about a year. I had
a fairly complex problem that I just *had* to breakpoint my way
through to understand. Fortunately the IntelliJ debugger worked well
enough to let me see the problem.

I hope it's another year before I have to use it again.

jonah thomas

unread,
Aug 2, 2003, 6:41:24 PM8/2/03
to
Uncle Bob (Robert C. Martin) wrote:

> I just fired up a debugger for the first time in about a year. I had
> a fairly complex problem that I just *had* to breakpoint my way
> through to understand. Fortunately the IntelliJ debugger worked well
> enough to let me see the problem.

> I hope it's another year before I have to use it again.

So, did you simplify the responsible code to the point that it won't
need a debugger again?

If I think I need a debugger for my own code, it's a sign that something
is drastically wrong.

That code is going to have to be replaced sometime. Why not now?

So why fiddle around with a debugger to "fix the bug" when you're going
to have to rewrite anyway?

Bill Todd

unread,
Aug 2, 2003, 8:56:58 PM8/2/03
to

"jonah thomas" <j2th...@cavtel.net> wrote in message
news:3F2C3E14...@cavtel.net...

> Uncle Bob (Robert C. Martin) wrote:
>
> > I just fired up a debugger for the first time in about a year. I had
> > a fairly complex problem that I just *had* to breakpoint my way
> > through to understand. Fortunately the IntelliJ debugger worked well
> > enough to let me see the problem.
>
> > I hope it's another year before I have to use it again.
>
> So, did you simplify the responsible code to the point that it won't
> need a debugger again?
>
> If I think I need a debugger for my own code, it's a sign that something
> is drasticly wrong.

I guess you've never had the pleasure of having to solve non-trivial coding
problems (either intrinsically knotty ones, or ones where performance
requirements dictate complex approaches). Desk-checking, and
armchair ruminations if something still goes awry, can certainly remove
*most* of the need for a debugger in such cases, but can hardly eliminate it
100% of the time.

I don't *expect* to have to debug code that I've written, and I document every
error discovered at run time in the hope that I can learn how to avoid the
same kind of error in the future. The majority of such errors do not in
fact require use of a debugger to uncover: reflection on their symptoms,
perhaps with the source code in front of me as a memory aid, is usually
sufficient. Unfortunately, the majority of the time the root cause turns
out to be brain-fade due to time pressure...

- bill

jonah thomas

unread,
Aug 2, 2003, 9:20:33 PM8/2/03
to
Bill Todd wrote:
> "jonah thomas" <j2th...@cavtel.net> wrote

>>Uncle Bob (Robert C. Martin) wrote:

>>>I just fired up a debugger for the first time in about a year. I had
>>>a fairly complex problem that I just *had* to breakpoint my way
>>>through to understand. Fortunately the IntelliJ debugger worked well
>>>enough to let me see the problem.

>>>I hope it's another year before I have to use it again.

>>So, did you simplify the responsible code to the point that it won't
>>need a debugger again?

>>If I think I need a debugger for my own code, it's a sign that something
>>is drasticly wrong.

> I guess you've never had the pleasure of having to solve non-trivial coding
> problems (either intrinsically knotty ones, or ones where performance
> requirements dictate complex approaches).

When there's time to rewrite the code I've usually found a way to reduce
the complexity. Or at least spread it out to the point that it doesn't
look complex and I can handle it.

It's time pressure that can make me try for the false economy of fixing
bad code instead of writing better code.

> Desk-checking and
> armchair-ruminations if something still goes awry can certainly remove
> *most* of the need for a debugger in such cases, but can hardly eliminate it
> 100% of the time.

Agreed. When the code is complex enough that you can't look at it and
see what's going on, it doesn't work well to look at it and try to
imagine what's going on.

> I don't *expect* to have to debug code that I've written, and document every
> error discovered at run time in the hope that I can learn how to avoid the
> same kind of error in the future. The majority of such errors do not in
> fact require use of a debugger to uncover: reflections on their symptoms,
> perhaps with the source code in front of me as a memory aid, is usually
> sufficient. Unfortunately, the majority of the time the root cause turns
> out to be brain-fade due to time pressure...

Yes. So we patch up the stuff that's hard to understand because there
isn't time to do it right, and later on when somebody else has to debug
it again they'll be earning their pay.

I tell myself that it's justified, because there's a chance that the
final product won't be good enough that anybody will have to maintain
it. "If it isn't worth doing at all, then it isn't worth doing right."

Bill Todd

unread,
Aug 2, 2003, 9:41:16 PM8/2/03
to

"jonah thomas" <j2th...@cavtel.net> wrote in message
news:3F2C6361...@cavtel.net...
> Bill Todd wrote:

...

> > I don't *expect* to have to debug code that I've written, and I document
> > every error discovered at run time in the hope that I can learn how to
> > avoid the same kind of error in the future. The majority of such errors
> > do not in fact require use of a debugger to uncover: reflection on their
> > symptoms, perhaps with the source code in front of me as a memory aid,
> > is usually sufficient. Unfortunately, the majority of the time the root
> > cause turns out to be brain-fade due to time pressure...
>
> Yes. So we patch up the stuff that's hard to understand because there
> isn't time to do it right, and later on when somebody else has to debug
> it again they'll be earning their pay.

No, that was not what I was talking about.

1. While *most* things can be made reasonably easy to understand (though
sometimes at the expense of performance), some are *intrinsically* difficult
to understand (whether due to innate problem complexity or to the complexity
created by performance requirements), even when they're 'done right'.

2. The kind of error I was referring to was not the kind that's necessarily
difficult to understand, just the result of inattention at a critical
moment. Fixing it makes the result what it should have been in the first
place, not some kind of patched-up mess.

- bill

jonah thomas

unread,
Aug 2, 2003, 10:11:55 PM8/2/03
to
Bill Todd wrote:
> "jonah thomas" <j2th...@cavtel.net>

>>Yes. So we patch up the stuff that's hard to understand because there
>>isn't time to do it right, and later on when somebody else has to debug
>>it again they'll be earning their pay.

> No, that was not what I was talking about.

> 1. While *most* things can be made reasonably easy to understand (though
> sometimes at the expense of performance), some are *intrinsically* difficult
> to understand (whether due to innate problem complexity or to the complexity
> created by performance requirements), even when they're 'done right'.

> 2. The kind of error I was referring to was not the kind that's necessarily
> difficult to understand, just the result of inattention at a critical
> moment. Fixing it makes the result what it should have been in the first
> place, not some kind of patched-up mess.

Yes. And to make #2 hard to find and correct requires that it also be #1.

Andy Glew

unread,
Aug 2, 2003, 11:56:53 PM8/2/03
to
> You use a debugger to debug *other* people's code, not your own.

You use a debugger to figure out when other people's code,
which doesn't have a test suite, is broken.

Then you write a test for it and decide if you need to fix it.

Andy Glew

unread,
Aug 3, 2003, 12:19:33 AM8/3/03
to
>-Mayan, addressing Phlip:

> When you've debugged code where the bug source is:
> - compiler errors
> - library errors
> - race conditions in operating system routines
> - intermittent hardware failures
> maybe you'll be a little less inclined to give simplistic answers.


Mayan, I have debugged to a successful conclusion
each of the scenarios you describe;
and yet I can say that, since I started applying test-first,
my programming productivity and reliability have increased
and I have more fun, since I get stuck less often.

I've probably learned as much from the XP community,
especially Phlip, as I have from comp.arch about computer hardware.

----
To Nick, who talks about spending so much time debugging
optimizing compilers: well, maybe we shouldn't worry about optimizing
so much, particularly in this day and age when it is becoming a truism
that CPUs are faster than most apps need.

Nick Maclaren

unread,
Aug 3, 2003, 5:50:02 AM8/3/03
to
In article <p30Xa.469$Bk.30...@newssvr21.news.prodigy.com>,
Andy Glew <andy-gle...@yahoo.com> wrote:
>
>To Nick, who talks about spending so much time debugging
>optimizing compilers: well, maybe we shouldn't worry about optimizing
>so much, particularly in this day and age when it is becoming a truism
>that CPUs are faster than most apps need.

Well, I can tell that you aren't in the HPC area :-)

You have a good point, except for two things (excluding the HPC
requirements, of course);

1) Modern CPUs demand increasingly aggressive optimisation to
deliver their performance. While that didn't start with the modern
incarnation of RISC, it has been monotonically increasing (as an
average) since then, and the current leader in this respect is of
course the IA-64 architecture.

2) Most optimisation problems are caused by the program's code
breaking one or more of the often arcane aliasing constraints (and
similar rules) of the language. This means that 'eliminating' them
by reducing optimisation is merely hiding them, and delaying their
reappearance.

Most of the time I spend in such areas is tracking down whether the
problem is a user error, a compiler error or a standards error.
In order to do that, you have to identify the precise cause :-(

I agree with you that a better solution would be simpler, cleaner
languages compiled for simpler, cleaner architectures. I expect
such technology to be delivered by flying pig.


Regards,
Nick Maclaren.

Rupert Pigott

unread,
Aug 3, 2003, 8:12:15 AM8/3/03
to
"Nick Maclaren" <nm...@cus.cam.ac.uk> wrote in message
news:bgilsa$reo$1...@pegasus.csx.cam.ac.uk...

[SNIP]

> I agree with you that a better solution would be simpler, cleaner
> languages compiled for simpler, cleaner architectures. I expect
> such technology to be delivered by flying pig.

I suspect that the best chance you'll get of seeing pigs flying
is over mainland China. They appear to be working towards high(ish)
end CPUs from the ground up. They seem capable of balancing the use of
external technology with developing their own, and of course they
have an excellent (if not unparalleled) engineering heritage.

Cheers,
Rupert


John F. Carr

unread,
Aug 3, 2003, 10:34:56 AM8/3/03
to
In article <3F2AC8E1...@sandbridgetech.com>,

<ma...@sandbridgetech.com> wrote:
>
>You've clearly never debugged some of the nastiest bugs around.
>
>How about the following:
>- added a print statement, program crashed - hard.
>- delete print statement, everything works.
>
>At this point we've followed your advice. We have a one line delta that
>causes a bug to appear/disappear. Obviously, the source of the bug is
>that line, right:
>- unfortunately that print statement is just
> fprintf(stderr, "%s %d: result=%f\n", __FILE__, __LINE__, res);
>and res is a double.
>
>So, lets do the next step.
>- ran under debugger, program works fine.
>This is where you really, really want a good set of debugging tools.

The evolution of a SPARC process using register windows is
nondeterministic. Interrupts, task switching, debugging, and system
calls all cause changes to program memory. You're supposed to design
the program so it doesn't care. Sometimes the system itself cares.

One day I came across this bug: emulated multiply instruction on SPARC
returns invalid result if a window overflow occurs during the invalid
instruction trap and the output register is %o0.


The symptom is, program crashes when compiled for SPARC v8 but run on
SPARCstation IPX. Adding printf and using the debugger both "fix"
the problem. What sort of debugging tool would help with this bug?
It is inherent in the nature of the system that stopping the program
to inspect it will flush the register windows to memory. Then the
trap to the kernel will not overflow the register window. Printf has
the same effect. I don't remember how I tracked down the root cause.


(I was using Solaris 2.0 or 2.1. That's when we learned about "jumbo
patches". The bug was already fixed but we had been doing our own OS
support before we switched to Solaris and it didn't occur to us that
we would need to ftp 50 megabytes of patches from the vendor to get a
semi-working system.)

--
John Carr (j...@mit.edu)

Benjamin Ylvisaker

unread,
Aug 3, 2003, 10:37:33 AM8/3/03
to
On 11 Jul 2003 03:23:55 -0700
ack9...@yahoo.com (') wrote:

> In code then, the sequence of events would be like array.create(),
> dict.create(), connect( array, dict ).

Close, but no cigar. I recommend googling around a bit for "higher
order modules". In SML-ish syntax, your example would look like:

functor Fdict (structure storage : STORAGE) =
struct
...
end

structure array : STORAGE = ...
structure list : STORAGE = ...

structure arrDict = Fdict (structure storage = array)
structure listDict = Fdict (structure storage = list)

In English: First, we declare an abstract module, where the
implementation of the storage mechanism is abstracted away and called
"storage". We explicitly state that this storage module must satisfy
specification "STORAGE". So in the omitted definition of Fdict we can
refer to storage as if it were an actual module supporting everything
specified in STORAGE. Next we declare two modules called array and list
that meet the STORAGE specification. Next we create two new modules by
instantiating the abstract module Fdict with array and list,
respectively.
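
For readers who don't speak ML, roughly the same shape can be sketched
in C, with a table of function pointers standing in for the STORAGE
signature. All the names below are invented for illustration:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* The STORAGE "signature": what any backing store must provide. */
struct storage_ops {
    void       *(*create)(void);
    void        (*put)(void *store, unsigned slot, const char *value);
    const char *(*get)(void *store, unsigned slot);
};

/* One "structure" matching STORAGE: a fixed-size array. */
enum { SLOTS = 16 };
static void *arr_create(void) { return calloc(SLOTS, sizeof(const char *)); }
static void arr_put(void *s, unsigned slot, const char *v)
{
    ((const char **)s)[slot % SLOTS] = v;
}
static const char *arr_get(void *s, unsigned slot)
{
    return ((const char **)s)[slot % SLOTS];
}
static const struct storage_ops array_storage = { arr_create, arr_put, arr_get };

/* The "functor body": dict code written only against storage_ops. */
struct dict { const struct storage_ops *ops; void *store; };

static unsigned toy_hash(const char *key) { return (unsigned)strlen(key); }

static struct dict dict_create(const struct storage_ops *ops)
{
    struct dict d = { ops, ops->create() };
    return d;
}

static void dict_put(struct dict *d, const char *key, const char *value)
{
    d->ops->put(d->store, toy_hash(key), value);  /* toy hash; collisions ignored */
}

static const char *dict_get(struct dict *d, const char *key)
{
    return d->ops->get(d->store, toy_hash(key));
}

int main(void)
{
    struct dict d = dict_create(&array_storage);  /* cf. Fdict(array) */
    dict_put(&d, "pi", "3.14159");
    printf("pi -> %s\n", dict_get(&d, "pi"));
    return 0;
}

The difference is that SML checks the whole signature match statically,
while C checks only the raw function-pointer types.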

I apologize if higher order modules have been mentioned already in this
thread. I only read c.a occasionally and just stumbled upon this
particular thread.

On a more philosophical level, I think that drawing software engineering
ideas from circuit engineering ideas is usually (but not always) a bad
idea. Some hardware engineers like to taunt software engineers by
pointing out that they have managed to deal with exponentially
increasing complexity (that is, transistor count), while software
engineers have only managed paltry increases in program complexity (that
is, instruction/line count). However, this comparison is ludicrous;
circuits increase in size from one generation to the next mostly by
replicating the exact same structure hundreds, thousands, or millions of
times. By comparison, it is totally senseless to just copy and paste a
bunch of code in a program. I believe that the kind of complexity that
circuit designers most often have to deal with (interfacing the physical
to the logical world) and the kind of complexity that software engineers
most often have to deal with (building and understanding large logical
systems) are largely incomparable.

Benjamin

Uncle Bob (Robert C. Martin)

unread,
Aug 3, 2003, 11:24:11 AM8/3/03
to
jonah thomas <j2th...@cavtel.net> might (or might not) have written
this on (or about) Sat, 02 Aug 2003 18:41:24 -0400, :

>Uncle Bob (Robert C. Martin) wrote:
>
>> I just fired up a debugger for the first time in about a year. I had
>> a fairly complex problem that I just *had* to breakpoint my way
>> through to understand. Fortunately the IntelliJ debugger worked well
>> enough to let me see the problem.
>
>> I hope it's another year before I have to use it again.
>
>So, did you simplify the responsible code to the point that it won't
>need a debugger again?

Not yet. What we found was unexpected complexity between two
interacting systems. As it happens another pair in the project had
been looking at that for completely different reasons, and was in the
process of refactoring it. My little debugging session served to up
the stakes in that effort. The resulting structure of the system will
be better than before, and the problem I was troubleshooting will
disappear.

jonah thomas

unread,
Aug 3, 2003, 12:08:55 PM8/3/03
to
Uncle Bob (Robert C. Martin) wrote:
> jonah thomas <j2th...@cavtel.net>
>>Uncle Bob (Robert C. Martin) wrote:

>>>I just fired up a debugger for the first time in about a year. I had
>>>a fairly complex problem that I just *had* to breakpoint my way
>>>through to understand. Fortunately the IntelliJ debugger worked well
>>>enough to let me see the problem.

>>>I hope it's another year before I have to use it again.

>>So, did you simplify the responsible code to the point that it won't
>>need a debugger again?

> Not yet. What we found was unexpected complexity between two
> interacting systems. As it happens another pair in the project had
> been looking at that for completely different reasons, and was in the
> process of refactoring it. My little debugging session served to up
> the stakes in that effort. The resulting structure of the system will
> be better than before, and the problem I was troubleshooting will
> disappear.

Good! That's how it ought to go.

George N. White III

unread,
Aug 3, 2003, 4:21:10 PM8/3/03
to
On Sun, 3 Aug 2003, Nick Maclaren wrote:

> In article <p30Xa.469$Bk.30...@newssvr21.news.prodigy.com>,
> Andy Glew <andy-gle...@yahoo.com> wrote:
> >
> >To Nick, who talks about spending so much time debugging
> >optimizing compilers: well, maybe we shouldn't worry about optimizing
> >so much, particularly in this day and age when it is becoming a truism
> >that CPUs are faster than most apps need.
>
> Well, I can tell that you aren't in the HPC area :-)
>
> You have a good point, except for two things (excluding the HPC
> requirements, of course);
>
> 1) Modern CPUs demand increasingly aggressive optimisation to
> deliver their performance. While that didn't start with the modern
> incarnation of RISC, it has been monotonically increasing (as an
> average) since then, and the current leader in this respect is of
> course the IA-64 architecture.

And in general gcc gives up considerable performance compared to
commercial tools, but some people prefer gcc because they have encountered
fewer bugs (probably in part because so much code is first written using
gcc on ix86-linux boxes, so coding idioms in many application areas are
gcc-friendly).

> 2) Most optimisation problems are caused by the program's code
> breaking one or more of the often arcane aliasing constraints (and
> similar rules) of the language. This means that 'eliminating' them
> by reducing optimisation is merely hiding them, and delaying their
> reappearance.
>
> Most of the time I spend in such areas is tracking down whether the
> problem is a user error, a compiler error or a standards error.
> In order to do that, you have to identify the precise cause :-(

Few groups have the resources to do this -- sometimes they try their
code on 4 different platforms and stick with the one where the
answers conform to expectations. What fraction of problems caused
by user or compiler error are ever detected?

> I agree with you that a better solution would be simpler, cleaner
> languages compiled for simpler, cleaner architectures. I expect
> such technology to be delivered by flying pig.

It's the American way. Look at the US auto industry in 1955--1965. Then
the Japanese came up with simpler, cleaner designs and the US auto
industry has been playing catchup for nearly 40 years. I expect
simpler, cleaner languages compiled for simpler, cleaner architectures
will be delivered by FedEx from China, India, and Korea.

--
George N. White III <aa...@chebucto.ns.ca>

Phlip

unread,
Aug 3, 2003, 11:19:19 PM8/3/03
to
jonah thomas wrote:

> Uncle Bob (Robert C. Martin) wrote:

> > I hope it's another year before I have to use it again.
>
> So, did you simplify the responsible code to the point that it won't
> need a debugger again?

Because you don't know when to stop.

You stop making code self-documenting when your pair can't think of a
question. You stop removing lines of code when your tests break.

The only way to polish the endeavor "let this code only need a debugger
yearly" is to run the project for a year. Otherwise you don't know if your
attempt worked.

Andy Glew wrote:

> I've probably learned as much from the XP community,
> especially Phlip, as I have from comp.arch about computer hardware.

Thanks! However, from the point of view of the XP community, then all these
issues...

> > - compiler errors
> > - library errors
> > - race conditions in operating system routines
> > - intermittent hardware failures

...are essentially someone else's problem. The simplicity rules ensure we
usually don't write compilers or wire hardware. We leave that to comp.arch!
;-)

--
Phlip
http://www.c2.com/cgi/wiki?TestFirstUserInterfaces


Nick Maclaren

unread,
Aug 4, 2003, 3:19:19 AM8/4/03
to
In article <3f2d1d90$0$3936$b45e...@senator-bedfellow.mit.edu>,

John F. Carr <j...@mit.edu> wrote:
>
>The evolution of a SPARC process using register windows is
>nondeterministic. Interrupts, task switching, debugging, and system
>calls all cause changes to program memory. You're supposed to design
>the program so it doesn't care. Sometimes the system itself cares.
>
>One day I came across this bug: emulated multiply instruction on SPARC
>returns invalid result if a window overflow occurs during the invalid
>instruction trap and the output register is %o0.
>
>The symptom is, program crashes when compiled for SPARC v8 but run on
>SPARCstation IPX. Adding printf and using the debugger both "fix"
>the problem. What sort of debugging tool would help with this bug?

A very low-level and well-integrated one :-)

>It is inherent in the nature of the system that stopping the program
>to inspect it will flush the register windows to memory. Then the
>trap to the kernel will not overflow the register window. Printf has
>the same effect. I don't remember how I tracked down the root cause.

Yes, I have had the same problem on other systems. You could get a
similar effect, for example, by breaking the aliasing rules between
pseudovector and scalar accesses on a Hitachi SR2201. Once I found
the reason, I could track down the cause - a C standard ambiguity,
YET AGAIN :-( I have also had it under MVT, but that was my fault!

The way that a good debugger helps is that it allows you to display
and/or check and/or control execution transparently at a sufficient
number of points in the code that you can straddle the failing area.
It is then icepack on head time, with a precise specification of the
architecture in one hand, a copy of the code in the other, and the
before and after states in the third. I assume that is what you did.

With only a source-level or even dump-evaluating debugger, it is a
real pain.


Regards,
Nick Maclaren.

Nick Maclaren

unread,
Aug 4, 2003, 4:58:08 AM8/4/03
to

In article <Pine.LNX.4.56.03...@cerberus.cwmannwn.nowhere>,

"George N. White III" <aa...@chebucto.ns.ca> writes:
|> On Sun, 3 Aug 2003, Nick Maclaren wrote:
|>
|> > 2) Most optimisation problems are caused by the program's code
|> > breaking one or more of the often arcane aliasing constraints (and
|> > similar rules) of the language. This means that 'eliminating' them
|> > by reducing optimisation is merely hiding them, and delaying their
|> > reappearance.
|> >
|> > Most of the time I spend in such areas is tracking down whether the
|> > problem is a user error, a compiler error or a standards error.
|> > In order to do that, you have to identify the precise cause :-(
|>
|> Few groups have the resources to do this -- sometimes they try their
|> code on 4 different platforms and stick with the one where the
|> answers conform to expectations. What fraction of problems caused
|> by user or compiler error are ever detected?

If they don't have the resources to do that, they don't have them to
make effective use of their program. Running a known broken program
in an environment where it does not fail obviously isn't a great
solution for the future. It is particularly nasty when you CAN'T
check your results and yet need to rely on them.

Quite a lot of the problems that I am talking about are where that
WAS done, but changing requirements and environments have turned an
'avoidable' bug into an 'unavoidable' one. E.g. as the data size
goes up, the number of systems on which it 'works' goes down, until
there are none left!

|> > I agree with you that a better solution would be simpler, cleaner
|> > languages compiled for simpler, cleaner architectures. I expect
|> > such technology to be delivered by flying pig.
|>
|> It's the American way. Look at the US auto industry in 1955--1965. Then
|> the Japanese came up with simpler, cleaner designs and the US auto
|> industry has been playing catchup for nearly 40 years. I expect
|> simpler, cleaner languages compiled for simpler, cleaner architectures
|> will be delivered by FedEx from China, India, and Korea.

Cleaner, perhaps. Simpler, no. Modern cars are a positive minefield
of gimmicks, 90% of which are of very dubious benefit. Modern IT
systems are very similar.


Regards,
Nick Maclaren.

David Gay

unread,
Aug 4, 2003, 11:57:20 AM8/4/03
to

Benjamin Ylvisaker <benj...@alumni.cmu.edu> writes:
> On 11 Jul 2003 03:23:55 -0700
> ack9...@yahoo.com (') wrote:
>
> > In code then, the sequence of events would be like array.create(),
> > dict.create(), connect( array, dict ).
>
> Close, but no cigar. I recommend googling around a bit for "higher
> order modules". In SML-ish syntax, your example would look like:
>
> functor Fdict (structure storage : STORAGE) =
> struct
> ...
> end
>
> structure array : STORAGE = ...
> structure list : STORAGE = ...
>
> structure arrDict = Fdict (structure storage = array)
> structure listDict = Fdict (structure storage = list)
>
> In English: First, we declare an abstract module, where the
> implementation of the storage mechanism is abstracted away and called
> "storage". We explicitly state that this storage module must satisfy
> specification "STORAGE". So in the omitted definition of Fdict we can
> refer to storage as if it were an actual module supporting everything
> specified in STORAGE. Next we declare two modules called array and list
> that meet the STORAGE specification. Next we create two new modules by
> instantiating the abstract module Fdict with array and list,
> respectively.

Functors are nice, but they do limit you to a tree connection structure.
There's a whole bunch of "component" systems out there (I had thought that
"component" was the new upcoming buzzword, I'm surprised nobody mentioned
it yet ;-)) where you have some kind of more-or-less separate language
for specifying how modules/components/objects/pick-your-favourite-name
should be connected together.

A reasonably nice example to look at is Mesa's (or its follow-on, Cedar,
both from Xerox) configuration language. Components are either modules
(executable code) or configurations (a connected set of modules),
components import and export named interfaces. A configuration connects the
interfaces of its components using a distinct configuration language...
There doesn't seem to be any manual for Mesa on the web alas (it's
from the 70s/80s - I did find one dead link. There's probably a live
one somewhere out there).
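
To give the flavour (this is not Mesa syntax -- just a C sketch of the
shape of the idea, with invented names): modules export and import
named interfaces, and a separate configuration does nothing but bind
the two together.

#include <stdio.h>

/* A named interface. */
struct log_iface {
    void (*write)(const char *msg);
};

/* Module A *exports* an implementation of log_iface. */
static void stderr_write(const char *msg) { fprintf(stderr, "%s\n", msg); }
static const struct log_iface log_to_stderr = { stderr_write };

/* Module B *imports* a log_iface; it neither knows nor cares which one. */
struct worker {
    const struct log_iface *log;
};
static void worker_run(struct worker *w) { w->log->write("working..."); }

/* The configuration: no logic of its own, only wiring.  In Mesa/Cedar
   this part would live in the separate configuration language. */
static struct worker configure(void)
{
    struct worker w = { &log_to_stderr };  /* bind B's import to A's export */
    return w;
}

int main(void)
{
    struct worker w = configure();
    worker_run(&w);
    return 0;
}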

--
David Gay
dg...@acm.org

David C DiNucci

unread,
Aug 4, 2003, 11:00:30 PM8/4/03
to
David Gay wrote:

> Functors are nice, but they do limit you to a tree connection structure.
> There's a whole bunch of "component" systems out there (I had thought that
> "component" was the new upcoming buzzword, I'm surprised nobody mentioned
> it yet ;-)) where you have some kind of more-or-less separate language
> for specifying how modules/components/objects/pick-your-favourite-name
> should be connected together.

I'm not sure how to interpret the smiley, but somebody (maybe a
"nobody") did indeed mention a component system very early in this
thread:

In msg <3F117FDC...@elepar.com> on Sun Jul 13, 2003, David DiNucci
wrote:

] In many ways, the parallel between electronic systems and software is a
] good one. See "Software Cabling" (e.g. PDSE'97 paper, or references at
] www.elepar.com/references.html). OO falls out for free.

True, I did not clearly say it was a "component" system, but had I
listed all the applicable adjectives, I'd have been accused of
"Mentifexing"--e.g. a visual concurrent language-independent graphical
object-oriented scalable (latency tolerant) component/module
coordination and interconnection high-level glue language/notation for
portable, fault-tolerant, high-performance, real-time
software-engineering for uniprocessor, parallel (shared-memory and
message-based), distributed, wireless, and heterogeneous platforms. :-)
Oops, left out p2p, function based, Petri-net related, transaction
based, grid, modeling, executable specifications...

> A reasonably nice example to look at is Mesa's (or its follow-on, Cedar,
> both from Xerox) configuration language. Components are either modules
> (executable code) or configurations (a connected set of modules),
> components import and export named interfaces. A configuration connects the
> interfaces of its components using a distinct configuration language...

Software Cabling is a reasonably nicer example. It fits the description
you give almost perfectly, but has a much nicer formal basis--e.g.
unlike Cedar, the programmer rarely (or never) thinks about constructs
like "threads" and "monitors", because they are buried in the
abstraction offered by the formal execution model (F-Nets).

> There doesn't seem to be any manual for Mesa on the web alas (it's
> from the 70s/80s - I did find one dead link. There's a probably a live
> one somewhere out there).

Software Cabling is alive (and I'm destined to keep it so). Some of the
on-line info is somewhat outdated and/or somewhat hard to follow, but
other parts aren't bad (e.g. the paper cited above).

-- Dave
-----------------------------------------------------------------
David C. DiNucci Elepar Tools for portable grid,
da...@elepar.com http://www.elepar.com parallel, distributed, &
503-439-9431 Beaverton, OR 97006 peer-to-peer computing

Tara Krishnaswamy

unread,
Aug 6, 2003, 7:51:54 PM8/6/03
to

da...@sebastian9.com wrote:

>
> I've had the "joy" of trying to debug a nicely-layered
> system on HP-UX, where you can't invoke any functions in the
> process' address space once the program has faulted.

Here is what the HPUX debugger folks have to say:

Not true. If you are running under the debugger, you get control before
the program terminates:

/tmp_mnt/home/vobadm/coulter/EXAMP/makecore

Program received signal SIGBUS, Bus error.
0x28f0 in main () at makecore.c:10
10 *ptr = 1;
(gdb) p "asdf"
$1 = "asdf"
(gdb) p printf("asdf\nqwer\nqwer\n"
A syntax error in expression, near `'.
(gdb) p printf("asdf\nqwer\nqwer\n")
asdf
qwer
qwer
$2 = 15
(gdb)


You, of course, need to have a valid stack pointer, at least.
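
(makecore.c itself isn't shown above; presumably it is something like
the following guess -- any program that faults will do, the point being
that gdb can still call printf() in the stopped process:)

#include <stdio.h>

int main(void)
{
    int *ptr = (int *)1;  /* misaligned/invalid address */
    *ptr = 1;             /* faults -- SIGBUS on HP-PA, as in the session */
    printf("never reached\n");
    return 0;
}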

-----

Tara
