but there is only one review there yet.
I do not need another introduction to Python, but if this is really an
advanced book then I would be interested.
So, if anybody has already read this book, it would be nice if she or he
could give a short review of it.
thanks
markus
> Has anybody already read the book "Python Programming Patterns"
> published by Prentice Hall?
> I thought this would be about design patterns in python
> but the review at amazon says different things about this book.
I was one of the tech editors. Haven't seen the final version,
but what I saw was very good.
It is *not* an intro book. The first couple chapters cover the
grammar and basics, but only a *very* experienced programmer
could learn Python from it.
It's also *not* a rehash of the GOF book translated into Python.
There's a fair amount on patterns in there, but from a very
different perspective. (More strongly, it's not a rehash of
anything - this is a very original book.)
I would put the book more (but not completely) in the "algorithms"
category. The example code is very sophisticated and
thought-provoking (one example is continually expanded and
revised). Not sure many people outside a CS dept will find
the example directly useful, but if you're an experienced
Pythoneer (or at least an experienced programmer with some
knowledge of Python), the book will jog your brain in some
new directions.
(I'm not surprised by the Amazon review - I am a long-time
Pythoneer and a very long-time programmer, and I had to study
many of the examples carefully before understanding how they
worked. But I found the experience rewarding.)
-- Gordon
http://www.mcmillan-inc.com/
I haven't seen the book, but I have to wonder about the above. Over the 25
or so years I've been programming, I've gradually shifted my priorities
from elegance, cleverness, and efficiency, to simplicity and ease of
understanding. We do a lot of code review at work, and I'm always the guy
at the meetings arguing for rewriting it so it's easier to understand.
It seems to me that if somebody who is experienced both in algorithms and in
the particular language a program is written in still has to study the code
carefully to figure out what it's doing, it's probably not very good code.
"As a younger programmer, I prided myself on the number of lines of
code I wrote. As an older programmer, I pride myself on the number
of lines of code that I reduce."
There is a certain intersection of the two geometries of your life,
where elegance and cleverness overlap simplicity and comprehensibility.
Or so I believe, and strive for.
C//
The book is terse and dense with information. Personally, I like that.
There's many an 800-page computer book with 20 pages of information. By the
end of the book, the example code has implemented multi-threaded, undo/redo-capable
transactions against a dbm-type storage. The hardest thing to follow
was his (highly efficient) implementation of an algorithm for computing
whether one set is a subset of another. So it's not for people who don't
like low-level stuff.
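(For readers wondering what a subset computation involves: the book's actual implementation isn't reproduced in this thread, so the sketch below is purely illustrative. It contrasts a straightforward membership-based subset test with a bit-vector variant of the kind an efficiency-minded author might use; all names here are hypothetical.)

```python
# Hypothetical illustration - NOT the book's implementation.
# Plain subset test: every element of a must appear in b.
def is_subset(a, b):
    b_members = set(b)          # O(len(b)) to build, O(1) membership tests
    return all(x in b_members for x in a)

# Bit-vector variant: map each possible element to a bit position,
# then subset testing collapses to one integer operation.
def to_bits(items, index):
    bits = 0
    for x in items:
        bits |= 1 << index[x]
    return bits

def is_subset_bits(a_bits, b_bits):
    # a is a subset of b iff a's bits add nothing new to b's bits
    return a_bits | b_bits == b_bits

index = {"red": 0, "green": 1, "blue": 2}
a = to_bits(["red", "blue"], index)
b = to_bits(["red", "green", "blue"], index)
```

The bit-vector form trades a setup cost (building the element-to-bit index) for very cheap repeated subset queries, which is the usual motivation for this kind of low-level representation.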
-- Gordon
http://www.mcmillan-inc.com/
I side with Roy's opinions purely because it is almost never the code author
who has to maintain the code (at least on the systems I have worked on over
the last 20 years :-)). Generally the standard of maintenance programmers is
just not as good as that of developers (otherwise they would be developing,
wouldn't they? :-)), and so there is a well recognised tenet that I attempt
to follow - code for the lowest common denominator. Code that is difficult
to read will always be difficult to maintain - and you can end up with some
real messes made by people changing code they don't understand! :-)
So if this book contains code and examples that require that kind of thought,
then it might not be such a good book to buy. I am Australian, and the exchange
rate is roughly 2:1, so whilst I noticed the book on the shelves of my local
bookstore the other day, I doubt that I will purchase a copy after reading
these comments :-). Shame, I was tempted to because of the sections on
patterns...
Peter
"Courageous" <jkr...@san.rr.com> wrote in message
news:tof92u86c40aja5s8...@4ax.com...
> Actually, I doubt there is any intersection - or perhaps I should
> qualify that by an intersection in the realms of code that you and only
> you will ever have to look at or maintain.
>
> I side with Roy's opinions purely because it is almost never the code
> author that has to maintain the code (at least on systems I have worked
> on over the last 20 years :-)).
This is ridiculous. I said I had to study some things to understand what was
going on, and that has become "ewww, it must be bad code".
Every API has two sides. One side is supposed to be easy to use and
understand. The other side almost certainly isn't. If the internals of
Python dictionaries were easy to understand, their performance would suck.
If you're only interested in using APIs, this book probably isn't for you.
But it's presumptuous to equate "challenging" with "bad".
-- Gordon
http://www.mcmillan-inc.com/
Who said the code was bad? Not I! Your original email implied that you had
to put some amount of study into understanding the examples - there was an
implication that the examples were not straightforward and therefore
required some considerable time and effort on your part to understand what
was going on.
Nobody has tried to equate this with "bad code", just bad coding practice
(in our opinion - well, Roy's and mine :-)) - a very different thing.
If you take objection to our comments, perhaps you should clarify and
quantify just how much effort was required to understand the examples, and
why. I.e. if the book is more "algorithmic" in nature, then perhaps you
cannot get away from examples that take some considerable study effort - if
that is the case, then perhaps you should add that clarification/caveat to
your comment. Otherwise I'll have to stick with my decision (unless I stood
in the book store and read the book! :-)).
Peter
"Gordon McMillan" <gm...@hypernet.com> wrote in message
news:Xns9180CE1BAC65...@199.171.54.214...
> No Gordon, that's a ridiculous assumption!
>
> Who said the code was bad? Not I!
From the first reply to my first post:
>> it's probably not very good code.
And you (apparently) agreed.
> Your original email implied that you
> had to put some amount of study into understanding the examples - there
> was an implication that the examples were not straightforward and
> therefore required some considerable time and effort on your part to
> understand what was going on.
Yes.
> Nobody has tried to equate this with "bad code", just bad coding
> practice (in our opinion - well, Roy's and mine :-)) - a very different
> thing.
I fail to discern the distinction, particularly as stated above.
> Perhaps if you take objection to our comments you should perhaps
> clarify and quantify your comments on just how much effort was required
> to understand the examples? and why? i.e. if the book is more
> "algorithmic" in nature then perhaps you cannot get away from examples
> that do take some considerable study effort - if this is the case then
> perhaps you should provide that clarification/caveat on your comment.
Already did so. From my second post:
>> The hardest thing to follow was his (highly efficient)
>> implementation of an algorithm for computing whether one set
>> is a subset of another. So it's not for people who don't
>> like low-level stuff.
> Otherwise I'll have to stick with my decision (unless I stood in the
> book store and read the book! :-)).
I have no objection to your decision. All three posts I've made have said
that the book is not for everyone. I just don't want you (plural) to chase
off people who could get something from the book.
Save your money. Just don't slur the book based on an unfounded inference.
-- Gordon
http://www.mcmillan-inc.com/
> Generally the standard of maintenance programmer is just
> not as good as developers (otherwise they would be developing, wouldn't
> they? :-)) and so it is a well recognised tenet that I attempt to follow -
> code for the lowest common denominator.
Ugh! That hits a little too close to home for me. While it sounds
reasonable, it flies in the face of my current situation: I'm forced to
maintain thousands of lines of C code written by a man who was clearly very
intelligent but a horrible programmer. He knew what he needed to do but
didn't bother to learn the common idioms of C. I understand he learned C
by reading a book on an airplane trip.
<bitch>
Worse, since he was "developing" this system, he of course ran into
unexpected customer demands (as will anyone), but rather than rework the
system to support these extensions, he hacked them into place using #if
defined(SOME_CUSTOMER). These often appear in the middle of functions that
span literally hundreds of lines. Global variables abound. Maybe
one-third of the functions have correct prototypes. When I started work
porting this mess to gcc, I got over 7000 warnings and errors. Fixing many
of these resulted in /more/ warnings. I discovered that he was simply
redirecting any non-fatal warnings in his build process to a file so they
wouldn't clutter his screen.
</bitch>
Anyway, my point is, I think your statement is unfounded. There are
undoubtedly cases where what you said is true, but to assume it's the
common case is wrong - especially when you consider that many people
(myself included) learned to program by developing applications, so the
quality of coding in those apps is poor (I would probably cringe if I were
to review code I wrote 10 years ago - some poor sod is probably cursing me
right now as he tries to decipher some mess I wrote =)
Besides, applications are often written without a good specification (or
only a very basic one) so the developer is left winging it, adding
features, learning new toolkits, etc, until the original architecture is
obscured beyond comprehension. Take a look at many of the open-source
projects around and you'll often find that one of the first things
"maintainers" do upon acquiring a system is recode large portions of it to
make it more readable/maintainable, etc.
That said, I don't disagree with your tenet of coding for the LCD. While
not always possible, it's good to keep in mind when one is tempted to write
"clever" code =)
Regards,
--
Cliff Wells
Software Engineer
Logiplex Corporation (www.logiplex.net)
(503) 978-6726 x308
(800) 735-0555 x308
"Cliff Wells" <logiplex...@earthlink.net> wrote in message
news:mailman.100957449...@python.org...
> On Mon, 24 Dec 2001 06:59:21 +1100
> "Peter Milliken" <peter.m...@gtech.com> wrote:
>
> > Generally the standard of maintenance programmer is just
> > not as good as developers (otherwise they would be developing, wouldn't
> > they? :-)) and so it is a well recognised tenet that I attempt to follow -
> > code for the lowest common denominator.
>
<snip on the bitching :-)>
Sorry to hear of these experiences, I guess we all run into this kind of
situation sooner or later :-). Good luck with unravelling the mess.
> Anyway, my point is, I think your statement is unfounded. There are
> undoubtedly cases where what you said is true, but to assume it's the
> common case is wrong - especially when you consider that many people
> (myself included) learned to program by developing applications, so the
> quality of coding in those apps is poor (I would probably cringe if I were
> to review code I wrote 10 years ago - some poor sod is probably cursing me
> right now as he tries to decipher some mess I wrote =)
>
I am an Electrical Engineer with a single semester of Fortran and Basic :-).
The rest is self-taught, on-the-job training that spans (depending how you
count it) at least 20 - 25 years. We all "cringe" when we think of our
earliest efforts - I still produce awful code when I make the mistake of
thinking I can just bash out a program off the top of my head - personally,
I have to sit down and think the entire application through and do a design
and requirements analysis (not in that order :-)) otherwise I end up with a
poorly designed and coded mess :-). Perhaps others can say they produce
works of art off the top of their heads without design that they would be
proud to have other programmers review - I can't.
Most of us, at some stage, got our early introduction in a maintenance
environment (mine was maintaining 24 6"-thick volumes of listings of
assembler code which another company had been hired to provide comments for;
they ended up with approximately one comment per listing page - back in '79,
I think it was :-)). I currently work in a maintenance environment. So
sometimes you tend to make statements that are a bit sweeping. But I still
stick by the general statement that typically, the majority of programmers
in maintenance roles would not be there if they had a choice. I am an
exception, I have a choice, but my choice was to get out of defence - and to
erase a "stigma" of 20 years in an industry isn't easy! :-)
> Besides, applications are often written without a good specification (or
> only a very basic one) so the developer is left winging it, adding
> features, learning new toolkits, etc, until the original architecture is
> obscured beyond comprehension. Take a look at many of the open-source
> projects around and you'll often find that one of the first things
> "maintainers" do upon acquiring a system is recode large portions of it to
> make it more readable/maintainable, etc.
>
Yes, I have a poor background to comment here, and this may sound (almost
certainly! :-)) overly harsh, because I spent most of my work experience in
the defence industry. But to me, "pressure to market", "no time to
analyse", "no time to design", "customer doesn't know what he wants, so we
have to wing it" - are at the end of the day just excuses for the fact that
*all* the people involved in the project can't be brought around to the
concept/idea that you should sit down and plan the project step by step and
not proceed to the next step until the previous one is complete, i.e. sit
down with the customer and analyse the requirements - prototype if
necessary, but get them nailed down and worked out until the customer is
happy that he knows what he wants! Then the programmer should sit down and
use the analysis to generate a design (documentation! ugh, how many
programmers want to spend their life doing documentation! - there is always
some excuse not to! :-) "gee, if we did that then the competition would beat
us to market and we wouldn't have a job" - true I am sure in some cases, but
I doubt this is true for the majority of cases/projects!). Then the code can
be generated from the design. Overly idealistic I am sure, but that is my
opinion :-)
People talk about the "software crisis" - the only crisis I have seen (in my
experience :-)) is that your typical programmer doesn't want to do
"paperwork" - all he/she wants to do is generate the next coding
masterpiece. A formal analysis means paperwork; a good design means
paperwork (and what is worse - others have to be able to read and understand
it! :-)). But most programmers want to cut code - nothing else :-). Your
point about the open-source projects proves the point. Nobody bothered, i.e.
wanted, to sit down and work out the requirements properly - that would have
taken "unnecessary time"; "we know what we want - let's start coding!" :-)
Hence others come in after the fact and re-write the entire thing - a very
expensive exercise!
Peter
This is also known as the "waterfall model" of software development.
It just doesn't work, as many decades of industry experience have amply
shown. Therefore, if some people "can't be brought around to the
concept/idea", there may be some excellent reasons for that; e.g., the
people who resist this horrid idea may have some relevant experience, or
have read some good book on the subject, or just plain have some common
sense and on-target intuition.
First of all, development needs to proceed in far smaller increments than
"a whole application system", as implied by:
> down with the customer, analyse the requirements - prototype if necessary,
> but get them nailed down and worked out until the customer is happy that
> he knows what he wants!
Second, it's imperative to have continuous "feedback loops" from each stage
back to previous ones.
Domain analysis is fed back from requirement analysis. Both are fed back
from architectural design. The latter absolutely needs feedback from
subsystem design. And so on, and so forth.
> Then the programmer should sit down and use the analysis to
> generate a design (documentation! ugh, how many programmers want to spend
> their life doing documentation! - there is always some excuse not to! :-)
The world is chock full of software projects which have generated far more
"documentation" than useful code. Paperwork in out-of-control amounts is
trivially easy to generate, compared to _working_ code _that does what's
actually needed_. The "waterfall model" is responsible for more useless
reams of empty words being generated than any other software development
model, I believe.
Being a budding author, as well as an established developer, I wouldn't
mind "spending my life doing documentation" IF I evaluated that the
end-product of my efforts would be more useful that way. When I write, I
have to carefully weigh the amount of English text I produce versus
examples, snippets, exercises, for example. Just as, when I code, I have
to weigh similar considerations. The goals are somewhat different, but not
by all that much (Knuth's theory of "Literate Programming" makes them
almost coincide: writing a software system is just the same as writing a
publishable paper about the workings of that software system -- you write
just one document, with the two things suitably intertwined and fed-back,
then process it with different post-processors to generate either a, say,
runnable Pascal program source or a printable TeX markup source).
"No battle plan survives contact with the enemy", and no "pure" software
design survives contact with the hardware. Unless you have structured
mechanisms in place for feeding the lessons of lower-level experiences back
to the more abstract layers, you're going to lose the battle (not deliver
good and useful working code) or win it in ways that have really very
little to do with the written-down "abstractions". The "central planning"
pipedreams of unmourned "real-socialist" economies provide another good
example of this phenomenon.
> understand it! :-)). But most programmers want to cut code - nothing else
Assuming this assertion of yours holds, then the only good software
development methodologies will be those that take it into account and work
with it: i.e., any methodology that assumes otherwise is a monumental
disaster. Waterfall, for example.
Analogy: "most workers want to make money -- nothing else". This isn't
really true (just as your assertion isn't), but, if it were, what would
follow? Obviously, that the only sensible economic systems are those that
take this into account and work with it. In other words, from Marx's
materialistic assumption (that making money is the only motivator), there
follows that centralized planning (and socialism in general) is unworkable,
and the only viable economics approaches are those which leverage people's
greed for money into a constructive system (Smith's "invisible hand" --
""It is not from the benevolence of the butcher, the brewer, or the baker,
that we expect our dinner, but from their regard to their own interest.").
Similarly, then, if it's true that writing code (actually: good, working,
_useful_ code) is a programmer's key motivator, then the only viable
methodologies will be those that leverage this into a constructive system.
"Agile Programming" (particularly in the "Extreme Programming" flavor) does
give code its rightfully central place among all artifacts of software
development. If you don't, then you're pandering to the *bureaucrat*'s
motivations: the key thing is writing huge formalized reports, producing
paperwork for its own sake; that anything 'useful' must come out of the
process is maybe an unfortunate necessity, but the dream system would
produce nothing but paperwork, paperwork about the paperwork, and so on.
(The "Yes, Minister" episode "The Compassionate Society" is a good
caricature of this: most doctors want to cure the sick, nothing else, but
the real bureaucrat knows that a "well-working" hospital is one with 500
paper pushers pushing "documentation"... the fact that no funds were left
to hire any doctors, or admit any patient, is a mere minutia).
Curing people _effectively_ requires producing a small but essential
paperwork trail (with *second* priority to actually making sick people
better, please notice); so does coding effectively -- producing code that
actually does useful stuff -- a small but essential paperwork trail needs
to be there (with *second* priority). In both cases, the key use of the
paperwork is *future* "maintenance". But the patient had better be alive
(the software system working), or else perfect paperwork about it can only
serve to make bureaucrats happy.
So, for example, coding *TESTS* is one of the best ways to express a
component's specifications: it's concise and unambiguous in a way that
human language never evolved to be. Turn the urge-to-code into "test-first
development" (both design, and coding) and your system's quality is poised
to take a huge jump upwards. Involve the customer in the tests, given that
_their_ motivation is to have a system that works to solve their actual
problems, and another jump up in usefulness awaits. Less paperwork, more
code -- code that is more likely to solve the real problems and ensure that
they are indeed solidly and reliably solved -- THAT is what's worth
dreaming of (or, worth doing something about: that's not as hard as all
that, since _many_ developers share such dreams, and now that Agile
Programming is finally getting decent exposure, it's not all that hard to
sell to customers, either!).
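For what it's worth, here is a minimal sketch of what "tests as
specifications" can look like in Python. The Cart component and its behavior
are invented purely for illustration; the point is that the test cases,
written first, read as a concise and unambiguous spec of the component:

```python
import unittest

class Cart:
    """A hypothetical component, invented for illustration only."""
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

# The tests double as the specification: each test name states a
# requirement, and the assertions pin the behavior down precisely.
class CartSpec(unittest.TestCase):
    def test_empty_cart_totals_zero(self):
        self.assertEqual(Cart().total(), 0)

    def test_total_sums_item_prices(self):
        cart = Cart()
        cart.add("book", 30)
        cart.add("pen", 2)
        self.assertEqual(cart.total(), 32)

# Run the spec programmatically (avoids unittest.main()'s sys.exit).
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(CartSpec))
```

Notice how little prose is needed: a customer who can read the test names
can check whether the spec matches what they actually want.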
If you want waterfalls, try Niagara -- rather pretty place. Just leave
them out of software development, PLEASE.
Alex
[snip]
> This is also known as the "waterfall model" of software development.
>
> It just doesn't work, as many decades of industry experience have amply
> shown.
The "waterfall" model comes from the days when most development was
mainframe batch systems. It "worked" (to some limited degree) in that
environment because of the following conditions:
- there were only a half-dozen or so basic building blocks in mainframe
batch systems (sort, merge, select, extract...)
- the scope of the projects (successfully managed through "waterfall") was
generally tiny compared to the scope of the system in which they fit
- there were senior people on the project who could balance the "top down"
with some "bottom up", but keep it secret
Violate any one of those (too many technical choices; too big a system; or
people who aren't very familiar with the system's context) and the resultant
system was iteration 1 of an iterative prototype too expensive to complete.
More generally, you could describe a successful "waterfall" project as an
iterative prototype that worked the first time through.
What Peter had right was that *thinking* about the problem is always
necessary, even in an iterative prototype :-).
-- Gordon
http://www.mcmillan-inc.com/
Who said the code was bad? Not I! Your original email implied that you had
to put some amount of study into understanding the examples - the
implication being that the examples were not straightforward and therefore
required considerable time and effort on your part to understand what was
going on.
Nobody has tried to equate this with "bad code", just bad coding practice
(in our opinion - well, Roy's and mine :-)) - a very different thing.
If you take objection to our comments, perhaps you should clarify and
quantify just how much effort was required to understand the examples, and
why? I.e. if the book is more "algorithmic" in nature then perhaps you
cannot get away from examples that do take some considerable study effort -
if that is the case then perhaps you should attach that clarification/caveat
to your comment. Otherwise I'll have to stick with my decision (unless I
stand in the book store and read the book! :-)).
Peter
"Gordon McMillan" <gm...@hypernet.com> wrote in message
news:Xns9180CE1BAC65...@199.171.54.214...
Yes, what is described is "waterfall" - I object violently (:-)) to the
concept that it "didn't work", though. Seems to me that there is a lot of
"well, it didn't work for all situations, so there must be something wrong
with it" thinking around the place - waterfall works very well in many
situations, but not all; there is *no* definitive model (to my knowledge
:-)) that works well in *every* situation. In fact, waterfall has worked
very successfully on some extremely large projects - waterfall was the
"standard" for many years in defence contracts. Sure, there were some
absolute disasters, but there were also many resounding successes - in fact,
I would suggest that analysis of the disasters would many times show that
the waterfall model wasn't adhered to, i.e. I worked on one project that
went for 7.5 years and was cancelled by the customer - they were still
changing the SRS when the contract was cancelled - it doesn't matter what
model you use, it is very hard to hit a moving target. So there are "many
decades" of industry experience that show it did work extremely well. The
limitations of the model are well known though, and hence the rise of other
models such as spiral etc. I won't bother to go into the whys and
wherefores; these are all adequately covered in software development
textbooks.
I would just caution both of you to be a little less dismissive of waterfall
(sorry Gordon, I don't think you are being dismissive of it but the
"background" of waterfall that you give is wrong - in my experience :-)) -
the concepts are still sound. As far as I am concerned the other models are
purely modifications of the concepts found in waterfall (get each step right
before proceeding to the next) and whether you use waterfall, spiral or
whatever is purely a question of judgement or trade-off for the situation
you are developing for.
AFAIK, waterfall is perfectly appropriate if you can get your mind around
the problem in one go and the customer's requirements are definitive - use
one of the other models otherwise :-)
Peter
"Gordon McMillan" <gm...@hypernet.com> wrote in message
news:Xns9188626EA5C2...@199.171.54.213...
The key concept in any form of modern software development methodology is to
acknowledge that you *CANNOT* confidently "get a step right before
proceeding to the next": feedback and iteration are not optional.
It makes about as much sense to describe this as "purely modification of"
waterfall, as to describe communism as purely modification of the concepts
found in liberalism (let each citizen pursue his personal advantage freely
within the laws): sure, just change the laws so you can only do what the
Central Committee tells you to do, and there you are.
> AFAIK, waterfall is perfectly appropriate if you can get your mind around
> the problem in one go and the customer's requirements are definitive - use
> one of the other models otherwise :-)
How do you KNOW that "the customer's requirements are definitive"?
Let's say: use waterfall when you are ABSOLUTELY certain that the
customer's requirements are definitive and you have understood them
correctly and completely at once. To make "absolutely certain" more
concrete, let's say that you will undertake to disembowel yourself
ritually if it turns out you were wrong -- that the requirements were
NOT quite as clear, well-communicated, well-understood, definitive and/or
nailed-down as you thought. If you're not willing to commit to this,
then you're not ABSOLUTELY certain, and you shouldn't use waterfall.
Now THIS seems a workable operating definition to me. Besides, the
few developers self-assured enough to operate this way will soon be
out of the gene pool, enhancing the breed. A win-win proposition.
The only real defect of waterfall as applied over the last few decades
is the lack of a precise definition of "absolutely certain", and by
the hara-kiri clause I think we remedy that quite effectively.
Alex
To which I would add so as to emphasize the inevitability of change: "How do
you KNOW that the customer's requirements however definitive they might be
NOW, will not change before you're done?"
// mark
Agreed, depends on your definition of getting it right - that is why
feedback and iteration will always be with us.
> It makes about as much sense to describe this as "purely modification of"
> waterfall, as to describe communism as purely modification of the concepts
> found in liberalism (let each citizen pursue his personal advantage freely
> within the laws): sure, just change the laws so you can only do what the
> Central Committee tells you to do, and there you are.
>
Well, I think "looser" on this one than you do :-) Software development is a
stepwise process of determining requirements, producing a design, coding to
that design, testing the code and delivering to a customer - the ultimate
model :-). How you break that up or organise it for a particular project
is what spawns the so-called "models". I don't know of any model that
doesn't have this basic process embedded in it.
> > AFAIK, waterfall is perfectly appropriate if you can get your mind
> > around the problem in one go and the customer's requirements are
> > definitive - use one of the other models otherwise :-)
>
> How do you KNOW that "the customer's requirements are definitive"?
How can you BUILD something if you don't know what it is? If you are a
software house then you have to get an agreed contractual basis as to what
you deliver - otherwise you are always changing what you deliver - which is
fine if you are paid on a time and materials basis, but not many customers
want to enter into that kind of open-ended arrangement. So, just like hiring
a plumber to do a job: you ask for a quote, he asks you what the job is and
then advises the quote based on what you tell him - the same thing has to
happen in software, otherwise you go out of business backwards. Of course,
if you have the "luxury" of developing for an in-house part of the company
then you (the supplier) and the customer can screw around to your heart's
content until one day a manager somewhere notices the dollars disappearing
into a black hole and cans the entire thing (or makes the pair of you stop
screwing around! :-)).
So nailing down the requirements is always the first step - agreement means
that (at least at that moment :-)) when the customer signs on the dotted
line everyone believes that to the best of their knowledge the requirements
are definitive enough to proceed to build the product. The majority of
failures that I have seen and read about have one underlying common thread -
the requirements were still being changed at the time the project was
cancelled.
>
> Let's say: use waterfall when you are ABSOLUTELY certain that the
> customer's requirements are definitive and you have understood them
> correctly and completely at once. To make "absolutely certain" more
> concrete, let's say that you will undertake to disembowel yourself
> ritually if it turns out you were wrong -- that the requirements were
> NOT quite as clear, well-communicated, well-understood, definitive and/or
> nailed-down as you thought. If you're not willing to commit to this,
> then you're not ABSOLUTELY certain, and you shouldn't use waterfall.
>
I think you are getting too rigid here - all software models are based upon
the fundamental steps of requirements analysis, design, code and test. Even
models based upon rapid prototyping follow the same basic steps (the
requirements are very loose, obviously; the aim of prototyping is to refine
them) - the prototype is built to a set of understood requirements, the
prototype has some form of design, there is coding and there is testing.
ABSOLUTE has to be applied if you want to stay in business - if the
requirements change (as they will) as the project progresses then the
customer can ask for a modification and project management deals with it,
i.e. estimates cost and schedule impact, determines in conjunction with the
customer where and how the change will be slotted in - it might be agreed
that the change can be progressed AFTER the current system has been finished
and "sold off", or the decision might be to fit the changed requirements in
ASAP.
So I am getting confused by your responses - all you seem to do is blast
waterfall - I haven't seen any attempt at some constructive response here, a
proposal for some other model because it fits better into your concepts of
software development or personal situations.
All I have said is that waterfall can and has worked in the past and
shouldn't be dismissed upon the argument that "it hasn't worked in the past,
so therefore let's not consider it" (my interpretation of your responses).
This seems a nonsense argument at best.
> Now THIS seems a workable operating definition to me. Besides, the
> few developers self-assured enough to operate this way will soon be
> out of the gene pool, enhancing the breed. A win-win proposition.
> The only real defect of waterfall as applied over the last few decades
> is the lack of a precise definition of "absolutely certain", and by
> the hara-kiri clause I think we remedy that quite effectively.
>
To me, the gene pool just seems to swirl and swirl with no signs of reaching
any superior state. It appears more to me that the new genes entering the
pool from colleges every year think they are the next Leonardo Da Vinci of
software development, that they can ignore the lessons of the past and just
stride out there with supreme confidence that they know what is right -
hence people like me (who have been here for several decades now) see the
same mistakes being made time and time again. I am sure that you are just as
confident as I am that you "know what is right". I have seen what works and
what doesn't, I don't pretend that I know it all, but what I do know is
correct for the circumstances I have experienced :-). So waterfall can and
does work in the right context. So do the other models - there is no single
model that will work best for all circumstances.
So, unless you are "absolutely certain" then I hope you are on a time and
materials contract, otherwise there will be a win-win situation and the gene
pool will have one less participant :-)
Peter
> How can you BUILD something if you don't know what it is? If you are a
> software house then you have to get an agreed contractual basis as to
> what you deliver - otherwise you are always changing what you deliver -
> which is fine if you are paid on a time and materials basis but not
> many customers want to enter into that kind of open ended arrangement.
Probably true for big software houses, but as a very small (one person)
shop, I find almost no one is interested in a fixed-price bid. That way they
can change their minds without disturbing the budget.
-- Gordon
http://www.mcmillan-inc.com/
By *finding out* what it is. The process of "building", in the widest
sense of the word, is the process of finding out what is being built.
> software house then you have to get an agreed contractual basis as to
> what you deliver - otherwise you are always changing what you deliver -
> which is
Remember that well over 90% of software development is in-house, so that the
specific business models you may have in mind are quite marginal in the
world of software development as a whole. Further, a good slice of what is
NOT "in house" is development of software to be sold (or rented, subscribed
to, included as novelty gift in potato chip packs, etc) "off the shelf", so
there is absolutely no "agreed contractual basis" with your customers until
AFTER you have the software ready for sale (rent, subscription, etc).
> in software otherwise you go out of business backwards. Of course, if
> you have the "luxury" of developing for an inhouse part of the company
> then you
> (the supplier) and the customer can screw around to your hearts content
Except that this is not the customer's interest: they want SOLUTIONS to
their problems, the latter often being rather ill-defined ones. So, it's
in their best interest to work with you to help find out exactly what they
need, and with what priority in terms of what needs to work first, what can
come later, what would "be nice to have" but not vital, and so on.
Further, YOUR interests are also to similarly develop successful, useful,
usable, software systems -- systems that get used and deliver results.
That's assuming you're an engineer at heart, rather than a bureaucrat whose
key goals are following established procedure and always having somebody
else to blame when things go wrong, of course.
> So nailing down the requirements is always the first step - agreement
> means
It almost never is. Use velcro or masking tape, not nails.
The requirements are going to be refined, prioritized, changed in scope,
refactored, reprioritized, etc, throughout the life of a successful
software system. Those nails would get bent out of shape and rust far too
soon to be any real use.
> failures that I have seen and read about have one underlying common
> thread -
> the requirements were still being changed at the time the project was
> cancelled.
I.e., the customer was in the process of finding out what they actually
needed -- a perfectly normal interactive discovery process -- and somewhere
in some decision-making positions were people who did NOT realize this and
were not prepared for it. Most likely because they harbored [expletive
deleted] notions about "nailing down the requirements is always the first
step".
BTW, you may not read about them as failures, but examples also abound
of software that WAS painstakingly specified in minute detail before any
step towards building it was taken, then later delivered -- and never
used, being totally irrelevant to the business needs. In my (indirect)
experience, this tends to happen in public procurement, where laws and
regulations may indeed mandate such an absurd process. It also happens
in development for "off-the-shelf" sale (etc), in which case it tends to
be more visible since the end product, despite fulfilling its "nailed down"
specs perfectly, doesn't sell.
It may not be something you will read about as a failure, but, both as
a taxpayer AND as an engineer, I consider these disasters, which are far
from unheard of, every bit as bad as other development disasters.
> > Let's say: use waterfall when you are ABSOLUTELY certain that the
> > customer's requirements are definitive and you have understood them
> > correctly and completely at once. To make "absolutely certain" more
> > concrete, let's say that you will undertake to disembowel yourself
> > ritually if it turns out you were wrong -- that the requirements were
> > NOT quite as clear, well-communicated, well-understood, definitive
> > and/or
> > nailed-down as you thought. If you're not willing to commit to this,
> > then you're not ABSOLUTELY certain, and you shouldn't use waterfall.
>
> I think you are getting too rigid here - all software models are based
> upon the fundamental steps of requirements analysis, design, code and
> test. Even
You may call them "steps" if you wish, and you have missed quite a few
crucial ones (such as "deploy", "document and/or train", "tune", ...),
but that doesn't make them anywhere near as separate and/or sequential as
all that.
> models based upon rapid prototyping follow the same basic steps (the
> requirements are very loose obviously, the aim of prototyping is to refine
> them) - the prototype is built to a set of understood requirements, the
> prototype has some form of design, there is coding and there is testing.
"There is" understanding, design, coding, testing, and other things yet
"there are". But the optimal sequencing, interleaving, merging and
splitting of the various activities is not rigidly fixed -- YOU tell
ME "I am getting too rigid"; I, on the other hand, claim Agile Development
processes are the very antithesis of rigidity, and Waterfall IS rigidity in
software development -- an inappropriate, inferior and unsuitable
methodology.
> So I am getting confused by your responses - all you seem to do is blast
> waterfall - I haven't seen any attempt at some constructive response
> here, a
All I want to do in this thread is make sure waterfall is and stays blasted,
since benighted discussants are trying to revive the zombie. Excellent
textbooks, articles, courses etc. abound on many superior alternatives,
such as the Rational Unified Process, Extreme Programming, and so forth.
> proposal for some other model because it fits better into your concepts of
> software development or personal situations.
Read anything published in the field in the last few years and you'll
find nothing but. Meanwhile, the monster of Waterfall needs to be
spiked through the heart each and every time it threatens to revive, which
is basically what I'm doing here.
> All I have said is that waterfall can and has worked in the past and
It can't and hasn't, in any relevant case (see below).
> shouldn't be dismissed upon the argument that "it hasn't worked in the
> past, so therefore let's not consider it" (my interpretation of your
> responses).
> This seems a nonsense argument at best.
It has never worked, it can never work, it's an absurdity based on a
total misconception of the nature of programming, of engineering, of
the human mind, of human society, and of the universe. There, good
enough for you?
You're of course driving me to rather extreme assertions, just as I
would be if (e.g.) debating some other similar absurdity which every
historical experience and common sense show can't work, yet upon
which altar blood (true or metaphorical) has been shed aplenty in
the past and threatens to be shed again until and unless the nightmare
is driven forever from the sublunar world.
There ARE trivial 'projects' (not worth the name of 'projects') which
basically consist in doing for the hundredth time much the same thing
as you've done the last 99 times. Boring, but true; in this case the
amount of "interleaving" needed may diminish, potentially down to a
lower bound of zero. This is much like saying that "communism" can
work within a small, close-knit family where everybody trusts and loves
each other: in such a scenario there is no need to account for "whose
money it is", who's earning and who's spending, and so on. But it
*JUST DOESN'T SCALE* -- it's no basis for economics, which has to deal
with the vast majority of interesting cases, where people, far from
loving each other, may barely _stand_ each other. Idealizing and
extrapolating the tiny "loving family" scenario leads to horrors that
range from endless queues outside of empty shops all the way to rivers
of blood down the streets. Wrong-headed software development approaches
so far have less potential for utter disaster -- but, deaths HAVE resulted
from "waterfall" development (specifically its hubris-like lack of
check-backs and continuous verification), as any reader of the Risks
column will know.
Waterfall proponents traditionally wanted to dismiss such occurrences
as (paraphrasing) "operator error". That's a good symptom of a system
that's not fit for use by human beings, and the rationalization attempts
for the system. Sure, there's nothing wrong with communism per se, only
when you apply it to human beings do problems appear (for all I know,
Martians may be quite happy with it). Perfect beings might be able to
apply Waterfall -- I don't really care about them (being perfect, they'll
surely be good at picking their methodologies anyway): "Know then thyself,
presume not God to scan; The proper study of Mankind is Man".
Man is fallible, and has evolved in an environment of trial-and-error,
continuous feedback, continuous adjustment. The worst failures come when
one's pride rears up and proclaims that one HAS fully grasped the issues,
so no more tentative, feedback, adjustments, etc, are possible -- when one
denies the fallibility, or fails to accept the indispensable "checks and
balances" that compensate for it and make it bearable. "Pride comes before
a fall".
On the subject of "operator error", see Perrow, "Normal Accidents"; and to
some extent Norman, "Design of Everyday Things". On inflexibility
(resulting from excessive pride from past successes) as the key aspect in
the (economic) fall of every Empire, Carlo Cipolla has an excellent short
article in his delightful, highly mixed book "Le Tre Rivoluzioni" (there's
probably an English version somewhere too, given that Cipolla has written
and taught in English even more than in Italian -- I can look for it, if
you want to try and read it). Kennedy's "Rise and Fall of the Great
Powers" is, to some extent, a look at the same issue with wider scope.
Waterfall is highly reminiscent of each of these general issues.
> any superior state. It appears more to me that the new genes entering the
> pool from colleges every year think they are the next Leonardo Da Vinci of
A failure, as an engineer, of course -- he applied "waterfall", rather than
iteratively testing and checking that he knew what he was doing. So, none
of his machines ever really worked well, and some of his incredible works of
art (artists can afford far more pride than engineers can) were not as
lasting as those of his contemporaries (who, more humbly, tried things in
small increments and kept double-checking with customers...).
> software development, that they can ignore the lessons of the past and
I.e., human fallibility? If new graduates are so full of misplaced pride
that they think they can use Waterfall, rather than solid, interactive
mingled-steps development, I entirely blame their teachers.
> stride out there with supreme confidence that they know what is right -
> hence people like me (who have been here for several decades now) see the
> same mistakes being made time and time again. I am sure that you are just
Yes, I do see Waterfall being praised -- but more likely by people who are
my contemporaries, such as you, than by young brilliant graduates (who
are more likely, at least, to have read the recent relevant literature).
> confident as I am that you "know what is right". I have seen what works
It's exactly because *I know that I don't know* (and that others don't know
either), that I consider it a horror to praise and defend a methodology
that could only work for perfect (and mind-reading) developers who are
developing for idealized customers who DO know exactly what they need.
> what doesn't, I don't pretend that I know it all, but what I do know is
> correct for the circumstances I have experienced :-). So waterfall can and
> does work in the right context. So do the other models - there is no
> single model that will work best for all circumstances.
No, but, as I don't care about what works or doesn't for hypothetical
Martians, I do know there's a model that will work horribly in all
interesting circumstances: it's called Waterfall.
> So, unless you are "absolutely certain" then I hope you are on a time and
> materials contract, otherwise there will be a win-win situation and the
That I am absolutely certain that Waterfall doesn't work for human beings
doesn't mean I already know what does work best for any given situation,
since there are so many other possibilities; it means I, and the rest of
the team, _including the customer_, can *find out*, and interactively as
well as iteratively develop and refine solutions and methods that "work
well enough" to deliver significant business value, reliably and repeatably.
Alex
And you don't call "finding out" requirements analysise? :-)
I'll drop the rest of the email now......
No, because, when the "finding out" process is finished, the end result is
generally knowledge *embodied in a working product*. "Analysis" (even, I
presume, in your alternate spelling thereof) normally carries connotations
of omitting such equally crucial procedures as design, implementation,
testing,
deployment, documentation, training, tuning. And yet, you aren't sure you
really know "what it is" until the various procedures converge and business
value is delivered (and only the customer is really qualified to judge about
this last and crucial point -- unless you manage to inveigle some customers
as at least part-time members of the team, you'll only get indirect future
indications of this, e.g. as filtered by marketing personnel who might well
lack other aspects of understanding).
For another potentially useful analogy on the subject, try reading
George Soros' "The Alchemy of Finance: Reading the Mind of the Market".
He doesn't write as well as he speculates (that would be a tall order
indeed), but there is one aspect of his thinking, as here expressed, that
makes the book relevant in this context. He frames his speculations as
_experiments testing out specific financial theories_. Understanding is
reached when a theory works (an investment makes money), and even more
when a theory is disproved (an investment loses money) -- yes, Soros is
an unabashed Popperian, and quite proud of it too. Some others may be
better at verbalizing their theories, and rationalizing about them, but
"the process of building is the finding out of what is being built" holds
quite well here. For example, such "details" (ha!) as hedging strategies
and fallback positions concretely embody the need to meta-understand:
develop concrete and detailed models, not only of what a theory predicts,
but of _how confidently_ it predicts it, and what follows from various
degrees of theory-failure. Similarly, in software development, we may
adopt different flavors and intensities of "defensive programming" and
"flexibility-maintaining strategies/investments" that embody, not just
our best current understanding of what we're building, but also meta-
understanding about how confident we are of specs' stability and what
happens when (...not "if"...:-) the specs slide along various possible
axes, or "fault lines", of variability.
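As a toy illustration of that last point (the function, the record format,
and the failure modes are all hypothetical, invented for this sketch), a
"defensive" boundary can encode not just the current spec but our guesses
about how it might slide:

```python
# Hypothetical sketch of defensive programming at a spec "fault line":
# we currently expect (name, amount) pairs, but we guard the boundary so
# that when the spec slides (extra fields, non-numeric amounts) the
# failure is loud and local rather than silent and far downstream.
def total_amount(records):
    total = 0.0
    for rec in records:
        # Shape check: tolerate extra trailing fields (a likely way for
        # the spec to grow), but reject records that are too short.
        if not isinstance(rec, tuple) or len(rec) < 2:
            raise ValueError("unexpected record shape: %r" % (rec,))
        name, amount = rec[0], rec[1]
        # Type check: a non-numeric amount means the spec has slid along
        # an axis we did NOT anticipate, so fail immediately.
        if not isinstance(amount, (int, float)):
            raise TypeError("non-numeric amount for %r: %r" % (name, amount))
        total += amount
    return total
```

The choice of which checks to write is exactly the "meta-understanding":
each guard is a recorded bet about which parts of the spec are stable and
which are likely to move.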
I posted a few hours ago on another thread a similar analogy with the
works of Renaissance builder Biagio Rossetti (as opposed to ones who
were both great architects AND superb theoreticians, such as Palladio
or Alberti). Studying the _process_ by which all of these greats of
the past reached their best results is somewhat enlightening, too,
although we must not forget that they were dealing with technologies
far older and more stable than Information Processing is today. Hint,
anyway: if you think their masterpieces saw the light through a linear,
"waterfall" process of specification, then blueprinting, then bricks
and stones, then touching up... you are sadly mistaken here, too. I
think (having never studied this in depth) that today, several further
centuries later, most building is probably "technologically stable"
enough to proceed this way, maybe -- most buildings being just about
identical to previous, already-existing buildings, and technology making
it prohibitively expensive to go for customization versus "one size
fits all". But some very relevant buildings are still done the good
old way, with a lot of interaction and feedback between "stages" that are
anything BUT separate. Read, for example, the interviews Renzo Piano
gave back when his Kansai airport building was one of the few that proved
able to withstand the Kobe earthquake of 1995 "without so much as a
broken window". He credits the input he received from the construction
workers (and took into account, iteratively revising design, etc) as a
major contribution to the structure's amazing robustness...
Alex
"Gov. Ruth Ann Miner has scrapped plans for a computerized accounting
and purchasing system after four years of design costing $7.4
million." The article goes on to note that the system would have cost
another $7 million to implement and $2 million a year to operate and
would be obsolete from the day delivered because of already planned
changes in the state's accounting procedures.
It strikes me (and the Governor) as scandalous that so-called
professionals would spend so much time endlessly designing and never
produce anything that worked and was useful.
Terry J. Reedy
Unfortunately, that's the way Government projects work. Until
someone convinces the people who control the purse strings
(congress, legislators) that you can't get good, repeatable results
without incremental delivery, the same messes will happen.
When you put out a bid for a project to be delivered in one
lump, 5 years down the road, it's going to fail. That's one of the
more guaranteed statements in this universe.
John Roth
> It strikes me (and the Governor) as scandalous that so-called
> professionals would spend so much time endlessly designing and never
> produce anything that worked and was useful.
Some problems are *really* hard. Diseconomies of scale doom some projects
to failure. I bet the cause here was accommodation of legacy infrastructure.
Neil
> When you put out a bid for a project to be delivered in one
> lump, 5 years down the road, it's going to fail. That's one of the
> more guaranteed statements in this universe.
Deeply ironic, isn't it...
--
[ Kyle Cordes * ky...@kylecordes.com * http://kylecordes.com ]
[ Consulting, Training, and Software development tips and ]
[ techniques: Java, Delphi, ASTA, BDE Alternatives Guide, ]
[ JB Open Tools, EJB, Web applications, methodologies, etc. ]
Isn't that the way all projects, government or not, work? I remember
horrifying numbers quoted in places such as Robert Glass's column in
Comm. of the ACM to the effect that 50% (or maybe it was 60%, or 90%)
of projects fail by either not producing a deliverable, or producing
a deliverable that doesn't work, or by producing a working deliverable
that doesn't solve the problem.
--amk (www.amk.ca)
Don't interrupt me when I'm eulogizing.
-- Dr Judson/Fenric, in "The Curse of Fenric"
> "John Roth" <john...@ameritech.net> writes:
> > Unfortunately, that's the way Government projects work. Until
>
> Isn't that the way all projects, government or not, work? I remember
> horrifying numbers quoted in places such as Robert Glass's column in
> Comm. of the ACM to the effect that 50% (or maybe it was 60%, or 90%)
> of projects fail by either not producing a deliverable, or producing
> a deliverable that doesn't work, or by producing a working deliverable
> that doesn't solve the problem.
I don't know about all projects in general, but for "large" projects (ones
that were planned to take a year or more to create) the number was much
closer to 90% than 50%. Worse, the majority of both successes and failures
tend to be way over budget too.
-Dave