The article has been updated with a reply from Craig Larman to an email I sent
him telling him that Isaac Gouy's comments had been posted.
I posted the article in some kind of spirit of fairness. I'm not at all sure how
it has advanced our understanding of the topics of Agile and Extreme Programming.
I'm giving the best advice I have. You get to decide if it's true for you.
Craig Larman's public comment echoes the courtesy with which he
received my private email about "Agile and Iterative" a year ago.
In publishing there's a long history of providing Errata for texts. Now
that we have web pages for books and personal web pages it's very easy
to provide corrections for a published text.
We only benefit when corrections to misstatements are made available -
finding bugs is one thing, fixing them another.
> Like Ron, the reason I like agile methods is also that
> *I* have had a good experience with them, but that is only the first step
> in any scientific process of discovery.
We need a book that describes to bosses why what we do is so much better,
in ways that are too obvious to us to need explanation. Craig's book has
been lauded as a good first attempt, leveraging existing research. But...
> We really do need real solid data
> that's as unambiguous as possible to confirm our intuitions.
...such research is incredibly expensive, on the order of $100 million per
project, and there is no economically viable way to assemble control groups. We
are asking how to test the scalability of practices that obviously work
perfectly well in small, anecdotal projects.
The control groups would have to write very expensive and utterly
disposable projects. Such teams would have to compete with each other by
matching features, the bad way, instead of distinguishing their features,
the good way. So the population of teams would not be able to sell these
expensive programs, leaving the researcher with invoices adding up to
billions of dollars.
So, until we find a way around this research bottleneck, we have surveys
of existing projects. Regardless of how we interpret those data, the input is
still not of scientific quality.
When data is not reported accurately our interpretations are built on sand.
"Agile and Iterative" rightly gives several pages to the research by
MacCormack and colleagues, but misstates their results. See examples
#3 and #4.
That isn't about how we interpret the data, it's about being given the correct data to interpret in the first place.
>I must say that I find Craig Larman's reply to be rather disappointing.
>This is a book that seems to be widely quoted so to find out that it
>was not carefully researched from the source material is indeed
>disquieting. I really applaud Ron for putting your errata on his Web
>site, but on the other hand I am unhappy about the lengths he goes to
>in order to distance himself from them. Like Ron, the reason I like
>agile methods is also that *I* have had a good experience with them,
>but that is only the first step in any scientific process of discovery.
>We really do need real solid data that's as unambiguous as possible to
>confirm our intuitions. I am guessing that because software is such a
>social process it really is hard to get data that no one can argue
>with. It will probably always be possible to say "Yeah, but this one
>variable over here was not controlled." Still, one must accurately
>represent one's source material, whatever it might be.
There has never been any good science in software development by team, any more
than there's science in football by team. I don't see how there can be ... there
is absolutely NOTHING that can be controlled.
> There has never been any good science in software development by team, any
> more than there's science in football by team. I don't see how there can
> be ... there is absolutely NOTHING that can be controlled.
You control via brute force, by adding more and more teams to the
population until the margins of error get narrow enough for relevant
demographics and statistics.
Hence, a real study would cost billions of dollars.
Maybe if we spectated and sold beer commercials, we could finance such a study.
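Phlip's brute-force argument can be sketched with a little arithmetic. The sketch below is purely illustrative: the normal-approximation margin of error, the unit standard deviation, and the $100 million per-project cost are assumptions taken from the discussion above, not measurements.

```python
import math

def margin_of_error(n_teams, stdev=1.0, z=1.96):
    # 95% confidence margin of error for a sample mean; shrinks as 1/sqrt(n).
    return z * stdev / math.sqrt(n_teams)

def study_cost(n_teams, cost_per_project=100_000_000):
    # Total cost if every team in the population writes one disposable project.
    return n_teams * cost_per_project

# Halving the margin of error requires quadrupling the number of teams,
# while cost grows linearly with each team added.
for n in (25, 100, 400):
    print(n, round(margin_of_error(n), 3), study_cost(n))
```

The asymmetry is the point: precision improves only with the square root of the population, while the invoices grow linearly, which is how a believable study ends up costing billions.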
"I guess that any misstatements on my part that Isaac identified come
from the fact that I wrote the last book while mostly on the road and I
relied usually on second-hand references (e.g., on a paper on the web
or in a book I had with me that referred to the source paper) rather
than the original source references, and that second-hand summaries may
not give the clearest picture"
Regarding Phlip's comments, I think that one can do studies at least to
give an indication of success. For example, take some projects that
were done using a non-agile approach and then some comparable projects
that claim to have been done in an agile way, and decide on some
metrics for comparison. It's certainly not a controlled environment,
but at least some initial correlations could be established. One could
then work on developing some controlled studies just to get some finer
insight into specific correlations. My point is, I think just throwing
up our hands in despair may not be the best answer. But yeah, software
is complicated and I'm sure it's really hard to prove theories about
methodologies. Here's an experiment I would find interesting. Take two
groups, one considered according to some predefined metrics to be
"skilled" and another to be "less skilled". Then give them each a
project to do in an agile way, and in a non-agile way. I wonder how the
relative levels of success, say from 1-10 would look.
"skilled" team ? ?
"less skilled" team ? ?
Anyway, I ramble, but one thing is for sure. Taking existing studies
and not properly explaining the results of those studies, then
publishing an influential book, and finally saying "oh sorry, some
stuff may not be accurate because I got my data second-hand while I was
on the road"... wow, that's really awful.
> ..."skilled" and another to be "less skilled" Then give them each a
> project to do in an agile way, and in a non-agile way. I wonder how the
> relative levels of success, say from 1-10 would look.
> agile non-agile
> "skilled" team ? ?
> "less skilled" team ? ?
> Anyway, I ramble, but one thing is for sure.
Ralph Johnson used to start his freshman class half on RUP and
half on XP. (Note this is comparing iterative to iterative.) The XP half
kept getting better marks, so he switched everyone to starting with XP.
That's hard evidence - for tiny school-sized projects.
http://c2.com/cgi/wiki?ZeekLand <-- NOT a blog!!!
The misstatements are differences between the published studies and the
report of those published studies in "Agile and Iterative". It has
nothing to do with whether the published studies were good or bad or scientific.
It has everything to do with whether we can trust that when "Agile and
Iterative" says the study found "X" then the study really did find "X".
In examples #3 and #4 the studies found "not X" and yet "Agile and
Iterative" reports that they found "X" - the results are misstated.
>The misstatements are differences between the published studies and the
>report of those published studies in "Agile and Iterative". It has
>nothing to do with whether the published studies were good or bad or scientific.
>It has everything to do with whether we can trust that when "Agile and
>Iterative" says the study found "X" then the study really did find "X".
>In examples #3 and #4 the studies found "not X" and yet "Agile and
>Iterative" reports that they found "X" - the results are misstated.
Broken record, Isaac. Time for something that takes us to the next step.
I once spent some time (well, at least a couple of minutes) thinking of
an experiment that could provide some evidence on this matter. I thought
about something similar but gave up eventually.
The first problem is finding good "predefined metrics" to select the
teams. Especially when choosing the more skilled team one should make
sure they have equal experience of both of the methodologies, so the
results don't warp towards the one they're more familiar with.
The second problem is somewhat related to the first one. If the results
were to be comparable between the two rounds, the tasks should be the
same (or very similar). Then again, doing the same task twice would
probably give better results the second time independent of the methodology used.
One might try using two teams of each skill level (four teams in total)
so that these teams would use the methodologies in different order*,
but again, the selection of teams could very well be questioned.
*skilled team, 1st agile, 2nd non-agile
skilled team, 1st non-agile, 2nd agile
less skilled team, 1st agile, 2nd non-agile
less skilled team, 1st non-agile, 2nd agile
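For what it's worth, that four-team counterbalancing can be enumerated mechanically. This is only a sketch of the design in the footnote above; the labels are placeholders, not a claim about how such a study would actually be run.

```python
from itertools import product

# Each skill level tries both orderings, so the learning effect of
# doing the same task twice is balanced across methodologies.
skill_levels = ["skilled", "less skilled"]
orders = [("agile", "non-agile"), ("non-agile", "agile")]

design = [
    {"skill": skill, "first": first, "second": second}
    for skill, (first, second) in product(skill_levels, orders)
]

for row in design:
    print(f'{row["skill"]} team: 1st {row["first"]}, 2nd {row["second"]}')
```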
Vladimir said "...Still, one must accurately represent one's source
material, whatever it might be. "
Ron replied "There has never been any good science in software
development by team,... "
And Isaac replied "It has nothing to do with whether the published
studies were ...scientific. ... In examples #3 and #4 the studies
found "not X" and yet "Agile and
Iterative" reports that they found "X" - the results are misstated. "
Sounds like a valid response to Ron's mail to me. Surely this point
("scientific or not is irrelevant") is not a "broken record"? It looks
like a direct response to a point raised by Ron. Imo, if Isaac's claims
are invalid they should be rebutted, not brushed under the carpet.
I'm with Vladimir when he says "This is a book that seems to be widely
quoted so to find out that it was not carefully researched from the
source material is indeed disquieting."
> Sounds like a valid response to Ron's mail to me. Surely this point
> ("scientific or not is irrelevant") is not a "broken record"? It looks
> like a direct response to a point raised by Ron. Imo, if Isaac's claims
> are invalid they should be rebutted, not brushed under the carpet.
Ron has placed Isaac's claim on his web site. And that won't stop Isaac
from announcing it again, and again, and again.
Isaac Gouy is a perennial poster on comp.software.extreme-programming,
where his primary focus has been to "debunk" Craig Larman's book,
"Agile and Iterative Development: A Manager's Guide". He has written an
article, and I have agreed, in a spirit of open discussion, to post it
on XProgramming.com. Here it is. (Updated: a reply from Craig Larman.)
To me, that's basically saying something along the lines that this guy
is a kook and don't take him seriously, but you know, here are his
silly objections anyway.
Ron has never posted anything that suggests he agrees in any way with
Isaac's comments, and in fact, his first post to this thread says
I posted the article in some kind of spirit of fairness. I'm not at all sure how
it has advanced our understanding of the topics of Agile and Extreme Programming.
Well, maybe it hasn't advanced our understanding, but on the other
hand, clearly neither has Craig Larman's book. Since this book cites
the source material, and Craig has publicly admitted that he didn't
read the source material, he was effectively being dishonest in
referencing it. To me, that's very serious. If he had quoted the
secondary sources that he actually got his information from, and said
as much, that would have been at least honest. If the high profile
advocates of agile ideas like Ron simply pooh-pooh objections such as
Isaac's I think that really hurts the agile cause. And I think it makes
a certain amount of sense for Isaac to constantly clarify what he is
saying if people simply go around it, ignoring the substance of his
point. To me agile is about being honest and having courage to say and
do the right things, whether or not it's palatable to do so. I am very
confused by the fact that Ron and you Phlip keep acting as if this is
no big deal and Isaac is overblowing the matter.
Vladimir Levin wrote:
> ... I am very
> confused by the fact that Ron and you Phlip keep acting as if this is
> no big deal and Isaac is overblowing the matter.
You are arriving at the end of a long and tired history here.
Unlike Isaac, we have read, written, and debated many other materials,
besides this one book. (Including Ron's achingly fair and polite reviews of
the unbalanced and tedious book /Extreme Programming Refactored/.)
And, unlike Isaac, we are not afraid to tell these newsgroups what process
we use. Isaac has repeatedly refused to discuss his own practices.
So suppose we were at a conference on open heart surgery. You discuss a
technique that you have used before, with interesting results. Maybe a
technique supported with partial research, but no real formal study yet.
Now suppose I have overheard some claims against this technique, so I repeat
them. You ask me to back my claims, and I cannot; I can only repeat them.
You ask me what experience I have with open-heart surgery, and I dodge the question.
Right now you can't tell if I'm really a surgeon, or if I'm just wearing a
conference badge and parroting the arguments I hear around me. And this puts
you in a particularly dangerous situation, because if you remember any of
my prattle, the next time your scalpels are inside someone's heart, my
prattle could kill your patient.
That is the position you take when you decide that Larman's book is not
worth reading, due to minor omissions in its citations. You are declaring
that your opinion has been swayed by someone who has not yet revealed the
slightest personal understanding of the topic.
Here is Ron Jeffries suggestion - Wed, May 17 2006 6:55 pm
(I don't know what you mean by "he repeatedly invited Isaac to write
that page directly" - where are those invitations?)
Here is my counter-suggestion - Thurs, May 18 2006 12:43 pm
Here is the announcement that I had completed the document suggested by
Ron Jeffries and provided it to him - Mon, Jun 12 2006 7:37 am
I made 4 other postings between May 18 and Jun 12 trying to figure out
why after Ron Jeffries had gone to the trouble of ordering a copy of
James Martin's book he seemed unwilling to share the information - I
still find it puzzling.
Individuals discussing their experiences with software in a public
forum and making suggestions based on personal experience, that's fine.
A book however, especially one quoted and referenced whenever people
want information about agile, that should be more robust. The author of
such a book should read the material he or she cites. It's just that
simple. Anything else is just not... acceptable...
I read the long and tiring thread you referred to. You and Ron keep
saying something to the effect that you're sure the book is fine even
if there might be minor problems with interpretation. Then Ron says
that even if Isaac is right, what difference does it make?
Just because you believe Larman's conclusions are correct and match
your experience, doesn't mean his apparently poor scholarship (which he
admits to!) should be held up for as evidence that agile works for
people who may not have any experience with agile. The whole point of
such a book is to present additional evidence beyond just "agile works
for me, so it'll work for you."
I remember attending a great talk by Robert Martin. In it he talks
about the original paper that people have cited in favour of waterfall.
Apparently the paper actually doesn't advocate waterfall, and if you
read further, it actually talks about iterations (little waterfalls
chained together). Because this paper was misrepresented, Uncle Bob
told us, waterfall became entrenched in software. What's good for the
goose is good for the gander.
That's it. I'm really out this time! I have a life. Really, I do!
>Sounds like a valid response to Ron's mail to me. Surely this point
>("scientific or not is irrelevant") is not a "broken record"? It looks
>like a direct response to a point raised by Ron. Imo, if Isaac's claims
>are invalid they should be rebutted, not brushed under the carpet.
Isaac has been saying nothing except that he sees flaws in some of
Larman's interpretations, for months now.
I published Isaac's report on my site, one of the highest-ranked Agile sites in
the world. And I wrote to Larman, who replied, graciously thanking Isaac for his
input, saying that he hoped Isaac would help with his next book because he seems
like a thoughtful individual, and acknowledging that there might be some errors
in his book.
And still Isaac repeats the same thing again and again. He's on record, the
author is on record, the points have been made.
What's most significant to me and others who actually DO Agile is that whether
Larman misinterpreted some of the articles or not -- the important thing about
Agile ought to be whether and how it works, not whether some guy's book from a
few years ago has errors in it.
Errors, if any, in Larman's book, have no impact whatsoever on the truth of
whether, when, and how Agile is useful. The truth of every word in Larman's book
would also have no impact on the truth about Agile.
I'd just like to move on to discovering the truth about Agile. While there may
be some flaws in Larman's book, I think its general message is accurate, and
that the book as a whole is valuable.
But I'd like, one of these first decades, to set the question of some guy's book
aside and start dealing with the truth of Agile, not whether Larman figured out
what obsolete thinkers like James Martin said and whether it tells us anything.
> Individuals discussing their experiences with software in a public
> forum and making suggestions based on personal experience, that's fine.
And if they refuse to discuss their own personal experience, we adjust our
acceptance of their opinions...
> A book however, especially one quoted and referenced whenever people
> want information about agile, that should be more robust. The author of
> such a book should read the material he or she cites. It's just that
> simple. Anything else is just not... acceptable...
You are delightfully naive about publishing cycles, and the time given to
authors to research.
And you also sound quite ready to read Larman's book and understand it, so
please dive in.
> I read the long and tiring thread you referred to. You and Ron keep
> saying something to the effect that you're sure the book is fine even
> if there might be minor problems with interpretation. Then Ron says
> that even if Isaac is right, what difference does it make?
I'm talking about years of behavior, not just one thread. But thanks for
performing the effort!
> Just because you believe Larman's conclusions are correct and match
> your experience, doesn't mean his apparently poor scholarship (which he
> admits to!) should be held up for as evidence that agile works for
> people who may not have any experience with agile. The whole point of
> such a book is to present additional evidence beyond just "agile works
> for me, so it'll work for you."
And the majority of the evidence which the book cites is sound. The accounts
of MIL-STD-2167 are jaw-dropping.
Isaac's behavior is remarkable; he has had many more years than Larman had
to research all the source PDFs. And when he found one little discrepancy,
he triumphantly announced it here, and harped on it ever since.
BTW did you know the first XP project, the C3 project, was CANCELLED?! And
its source code taken offline and NEVER USED???!!!
> I remember attending a great talk by Robert Martin. In it he talks
> about the original paper that people have cited in favour of waterfall.
> Apparently the paper actually doesn't advocate waterfall, and if you
> read further, it actually talks about iterations (little waterfalls
> chained together). Because this paper was misrepresented, Uncle Bob
> told us, waterfall became a entrenched in software. What's good for the
> goose is good for the gander.
That is completely specious (and typical Uncle Bob). Pointy haired bosses
reinvent waterfall, without reading any damned paper, each time code-and-fix
fails, and they say "next time, we will get all the requirements right,
first!" I have heard them say that with my own ears.
You yourself published a great report on your own agile experiences, and I
certainly hope it doesn't attract the endless mindless trolling that some
other reports attract.
> I am very
>confused by the fact that Ron and you Phlip keep acting as if this is
>no big deal and Isaac is overblowing the matter.
Let me be clear: I do think Isaac is overblowing the matter. I do not value
these discoveries very highly, even assuming them to be absolutely accurate.
I think that the /conclusions/ in Larman's book match the world as I know it. I
expect that /most/ of the interpretations he has made of most of the literature
are pretty valid. I expect that, not because of an extensive study of old
documents, but because of an extensive study of Agile methods, and how they
actually work in my hands and the hands of the teams I work with.
It seems to me that Isaac's arguments are like those of a defense attorney
tearing down the testimony of an honest, well-meaning, and ultimately human
witness, by picking away at details of the testimony, blowing discrepancies out
of proportion to the fundamental truth (or falsity) of the proposition at hand.
I'm supposing that the details Isaac picks at are accurate. I have not as yet
read the original materials Isaac refers to, so I cannot put my own
interpretation into the mix. But I can see that Isaac has looked hard at the
materials, and I am confident that he's reporting accurately what the materials
say. So I'm entirely open to the idea that if and when I go back to this old
literature, I, too, might think that Larman made mistakes.
And so what? Larman's /conclusions/ aren't wrong, and my bet would be that
/most/ of his interpretations are not wrong either. I'm sure his conclusions are
right, because the teams I work with are making those conclusions come true.
Now I honestly don't know what Isaac's motivation in all this is. He might just
be a guy who is really interested in looking up old references and seeing if
newer references interpret them correctly. I can't get my head around that
motivation, because I'd only do it if I was trying to prove someone wrong. So,
because of my own personality, I frankly assume that Isaac is trying to prove
Larman wrong, and implicitly therefore, to prove that Agile doesn't work.
I don't /know/ that. Isaac hasn't said that. That's just the motivation that
seems most likely to me. I'd like to be entirely wrong about that. If the
motivation was to see flaws in Larman's work and publish those flaws, that
mission is accomplished, and in my opinion it's time for a new mission.
I emphasize that I don't know what Isaac's real motivation may be. I can only
observe what he does. He consistently refuses to discuss what Agile really is,
whether it works, or how it works. He just says, again and again and again, that
his interpretation of this or that more basic source to Larman's book is
different from Larman's.
I've hosted his work, I've elicited a statement from Larman thanking Isaac and
acknowledging that he may well be right. Now I'm just wondering how many more
nails we have to drive in this coffin before we can get back to something I'd
find more interesting, namely figuring out whether, when, and how Agile actually works.
That's what I care about, you see ... figuring out what works in software
development, how to make it better, how to help people to do better. It's not
much of a life, but it's my life.
>I made 4 other postings between May 18 and Jun 12 trying to figure out
>why after Ron Jeffries had gone to the trouble of ordering a copy of
>James Martin's book he seemed unwilling to share the information - I
>still find it puzzling.
It was easy to one-click the book. Studying it would take much more time than
ordering it. Now I have it, and I can look at it if and when I want to.
I stated clearly here and on my web site that I have things to do that I find
more interesting than reading an old James Martin book and seeing whether Larman
misinterpreted it or whether you misinterpreted your interpretation of Larman's
interpretation. I am assuming that when I do that, I'll agree with you that he
misinterpreted it. Whether I do or not, I think your ideas deserve a forum, and
I felt it appropriate to give them a forum on my site. I don't have to agree
with things to give them a place to be seen.
I just don't care whether Larman made a mistake. I know him and am confident
that he may have made a mistake, but that he wasn't dissembling. I didn't write
his book, so any mistakes in it are his. I agree with the conclusions in it, and
until you or someone else goes through every single piece of evidence in it, I'm
likely to continue to think that there is material of value in it.
What I personally care about is not what James Martin said, or what Craig Larman
said that James Martin said, or what you said about what James Martin and Craig
Larman said. What I care about is figuring out how to write software more
effectively and how to help people do that.
I'm happy to have you spend your time cross-checking people's books, because
it's your life. It doesn't happen to help me, because I learned what I know
about software development not just by reading books but by reading books and
then trying things, and then reasoning about why what happened happened.
My interest is in doing software, and helping others do software. My interest is
in understanding how software building works, and helping others understand
that. My interest isn't in cross-checking other people's books. That's a worthy
profession, just not one that I personally value, much less one that I want to
go into on my own.
So when I tire of watching the grass grow, I might cross-check your
cross-checking of Larman's checking of Martin's work. But either way I read the
literature, it doesn't affect my understanding of Agile at all -- because my
understanding of Agile comes from doing it, not from reading about it.
So there it is. You seem all het up about this and you worked hard enough to
actually write an article, and it is potentially interesting and potentially
important so I published it. I don't find it valuable to me, but it might be
valuable to someone, so I published it.
So now people give me a raft of crap for expressing the truth: I don't value
those findings very highly. Well, sorry. I don't value them highly. If others do,
that's great. I published the material so that people could see it and think about it.
Now could we talk about how to build software, not how Craig Larman writes books?
Aaargh!!! I had a cut and paste error which resulted in only half the
post appearing on the list. Here is the full message. Please read (and
respond to) this post. Apologies to everyone.
Ron Jeffries said
"I published Isaac's report on my site, one of the highest-ranked Agile
the world. And I wrote to Larman, who replied, graciously thanking
Isaac for his
input, saying that he hoped Isaac would help with his next book because
like a thoughtful individual, and acknowledging that there might be
in his book. "
Yes, thank you. I, for one, am grateful to you (Ron) for that. I would
never have become aware of this issue if you hadn't.
"What's most signficant to me and others who actually DO Agile is that
Larman misinterpreted some of the articles or not -- the important
Agile ought to be whether and how it works, not whether some guy's book
few years ago has errors in it. "
I think this is very fair. However, Isaac's position seems (to me as an
observer on the sidelines) to be completely orthogonal to this. He does
not seem to be disputing (or confirming) the experiential "truth" or
effectiveness of agile. (If there is any past history between Ron,
Isaac, Philip etc I am not aware of it). He seems to be focussed on
asserting that anyone who uses the results of a set of studies as
"evidence" for Agile's effectiveness should make very sure to get the
conclusions right. This is orthogonal to the superiority or otherwise
of XP. I think this is a fair position too and (agreeing with
Vladimir) *dismissing* this as irrelevant is a mistake, even if one is
not directly interested in any "evidence".
Since I ***don't know Isaac from Adam***, I assume he is honestly
trying to help.
Allow me to quote Richard Feynman
(One should have) "a kind of scientific integrity, a principle of
scientific thought that corresponds to a kind of utter honesty--a kind
of leaning over backwards. For example, if you're doing an experiment,
you should report everything that you think might make it invalid--not
only what you think is right about it: other causes that could possibly
explain your results; and things you thought of that you've eliminated
by some other experiment, and how they worked--to make sure the other
fellow can tell they have been eliminated.
Details that could throw doubt on your interpretation must be given, if
you know them. You must do the best you can--if you know anything at
all wrong, or possibly wrong--to explain it. If you make a theory, for
example, and advertise it, or put it out, then you must also put down
all the facts that disagree with it, as well as those that agree with
it. There is also a more subtle problem. When you have put a lot of
ideas together to make an elaborate theory, you want to make sure, when
explaining what it fits, that those things it fits are not just the
things that gave you the idea for the theory; but that the finished
theory makes something else come out right, in addition.
In summary, the idea is to give all of the information to help others
to judge the value of your contribution; not just the information that
leads to judgement in one particular direction or another. "
While "agile" (or any other sw methodology for that matter) is not a
science or even a scientific theory, some of this attitude might help
us all move forward.
I think Feynman's "In summary, the idea is to give all of the
information ... not just the information that leads to judgement in
one particular direction or another " is Isaac's motivation (again I
could be wrong, but I don't see any evidence of Isaac wanting to "tear
anyone down" etc, at least on this thread). Larman as a person is not
particularly important here (though personally, as Vladimir said, this
degree of intellectual sloppiness in an author of his calibre is
disappointing) but it *is* important that *any* *offered* evidence does
not misstate the results of studies etc.
If someone points out that the proposed evidence (by Larman or others)
does not hang together or misstates things, perhaps we should (imvvho)
acknowledge that and conclude that "We thought there were studies
indicating that agile works better than conventional methodologies. The
information provided by Isaac seems to invalidate that belief. However
in our *experience* it does seem to work better and we should
investigate this further" .
I think this reconciles Isaac's position ("anyone who claims that
studies show that XP is better damn well get his conclusions right")
and Ron's ("what matters is the experiential reality of agile") without
dismissing anyone's ideas.
Philip, fwiw, I don't think that Isaac's personal methodology is
particularly relevant *for the points he is trying to make here*.
(please note the emphasis). If I say that "Isaac doesn't talk about how
he develops software therefore his conclusions about Larman's evidence
should be rejected", I would be guilty of a classic ad hominem ("attack
the man not the argument") error. I think I may have missed out on some
past history here. Your (Philip's) frustration comes through clearly :-)
and your posts are very eloquent.
I for one, thank Isaac for doing such painstaking work to validate
Larman's proposed "evidence" and point out the errors therein and Ron
for having the intellectual honesty to publish it on his site, and for
pointing to the experiential superior effectiveness of agile.
We are all wiser for having considered these issues in detail.
PS: everything I said above is *not* intended to criticize anyone or
prolong any argument. I've had my say. Thank you all for listening and
for your opinions. I have gained greatly from reading this thread.
> Allow me to quote Richard Feynman
Try Carl Sagan. In Cosmos, he described the flap in the scientific community
regarding /Worlds in Collision/ by Immanuel Velikovsky. That author proposed
that certain Biblical events, such as the sun rising in the West, were
caused by Venus entering our solar system catastrophically, and disturbing
the orbit and even rotation of Earth, before settling into its current orbit.
Velikovsky's book was, of course, a popular fable of scientific credence
similar to /The da Vinci Code/. And his theory is trivially falsifiable: The
inner planets all have Keplerian orbits. They rotate in the same direction,
orbit in the same direction, have similar planes of ecliptic, and occupy
gradually increasing distances from the sun. All follow the simpler
explanation of condensing from primordial dust. The Earth's rotation on its
axis can be calculated via paleoastronomic techniques to derive from a
faster rotation during the Hadean Eon, slowed down over time by the
influence of the Moon's tides. Velikovsky was wrong.
Sagan didn't bust on Velikovsky; anyone is allowed to publish falsifiable theories. In
"Cosmos", Sagan busted on the scientific community for attempting to repress
and censor Velikovsky. "The worst aspect of the Velikovsky affair is not
that his hypotheses were wrong or in contradiction to firmly established
facts, but that some who called themselves scientists attempted to suppress
Velikovsky's ideas."
Guys, we are not repressing Isaac, and he doesn't deserve the status of a
martyr to science!
But Isaac is not falsifiable. Those of us who actually take a positive
stance are the ones assuming the risk of being falsified.
> Philip, fwiw, I don't think that Isaac's personal methodology is
> particularly relevant *for the points he is trying to make here*.
> (please note the emphasis). If I say that "Isaac doesn't talk about how
> he develops software therefore his conclusions about Larman's evidence
> should be rejected", I would be guilty of a classic ad hominem ("attack
> the man not the argument") error. I think I may have missed out on some
> past history here. Your(Philip's) frustration comes through clearly :-)
> and your posts are very eloquent.
Thanks, I think.
If I were among Sagan's "some who called themselves scientists", I would
propose that Larman's book is right because Gouy is wrong. That's not the
road to science.
Ultimately, the author of a claim is responsible for presenting its
evidence. Reading old PDFs is insufficient, so I ask Gouy to back things up
with personal experience. The entire programming industry has been moving
irresistibly, and with overwhelming evidence, towards highly iterative
techniques, and Larman's book is just the tip of the iceberg. So when I ask
Gouy to help us by participating in this process of discovery, and can't
even beat out of him whether he has any experience programming at all, then
his position as martyr to science becomes indistinguishable from one of this
newsgroup's biggest time-wasters.
<nice stuff about Sagan snipped>
> But Isaac is not falsifiable. Those of us who actually take a positive
> stance are the ones assuming the risk of being falsified.
Isaac's claim is that in particular instances, Larman cites studies and
draws *logically unsound conclusions* from them, often with weak or
absent evidence. At least that is what I understand.
This is not falsifiable?
His example #1, for instance (from the web page put up on Ron's site),
looks like a strong argument.
If Isaac's claims are *not* falsifiable, then of course there is no
point in giving them any attention. However are we sure they are not
falsifiable? When he says, for example,
"sometimes we are not given the evidence that's in the source,
sometimes the evidence has been changed, sometimes the evidence seems
not to be in the source." - that looks fairly easily falsifiable to me,
especially since he has given a numbered list with direct quotations.
> Isaac's claim is that in particular instances, Larman cites studies and
> draws *logically unsound conclusions* from them, often with weak or
> absent evidence. At least that is what I understand.
> This is not falsifiable?
Gouy's inner claim is trivially falsifiable, and has not been disproven yet.
Larman admitted to it.
This entire spew of threads started, months ago, when a newbie requested
literature comparing Waterfall to Agile. I cited Larman's book.
Gouy then claimed I should not cite that book because it misrepresents
evidence. This outer claim is not falsifiable - the claim that the entire
book should not be read.
One piece of evidence in the book's favor is how well it has been accepted within
the programming community. It compares Waterfall to several variations of
iterative development, and it cites iterative development's roots in ancient
1960s projects. Everyone (who is anyone) has found the research to be very
helpful, especially in coaching teams not to use Waterfall.
Please show where I claimed you should not cite that book. Please show
where I claimed that the entire book should not be read.
> I for one, thank Isaac for doing such painstaking work to validate
> Larman's proposed "evidence" and point out the errors therein AND Ron
> for having the intellectual honesty to publish it on his site, and for
> pointing to the experiential superior effectiveness of agile.
I'll second that, and add a thank-you for the Feynman quote.
> Everyone (who is anyone) has found the research to be very
> helpful, especially in coaching teams not to use Waterfall.
Speaking as one who did read a few of the primary sources (Royce, Harlan
Mills, etc.) though not Craig's book, I find a lot more practical
coaching guidance in e.g. Weinberg or Cockburn.
The research, especially the older articles, is mostly interesting as a
reminder that the field has a longer history than I personally remember,
and a habit of reinventing the wheel over and over.
1) Let me be clear: I do not say that my interpretation of this or that
basic source is different from Craig Larman's. I say Chapter 6 misstates the study it cites.
Chapter 6 says "The study identified four practices that were
statistically correlated with the most successful projects:
1. An iterative lifecycle..."
That is categorically untrue - an iterative lifecycle was not one of
the four practices that were statistically correlated with the most
successful projects. That fact is misstated in Chapter 6.
2) As you don't honestly know what my motivations are, why are you once
again putting forward insinuations and negative speculation about my
motives?
Please respect the courtesy that I've shown to you - trust in the good
intentions of others and verify what they say.
When a consultancy announces they've found a security problem, that
isn't enough to make the problem go away.
When Microsoft confirms that the security problem exists, that isn't
enough to make the problem go away.
When Microsoft releases a patch for the security problem, that is
enough to make the problem go away.
When a book is published with incorrect information, we make the
problem go away by publishing corrections - now that every book and
every author has a website it's really easy to publish corrections.
I guess the subtitle that Ron Jeffries has added ": an accusation from
Isaac Gouy" is intended to make a dull list of misstatements seem
dramatic and attract the attention of more readers.
But now the language is unbalanced - "Reply from Craig Larman" just
isn't at the same dramatic level. When the misstatements are subtitled
"an accusation" the reader will wonder why the response isn't subtitled