From Chapter 6 - Research Evidence: Iterative and Evolutionary, Size
Research, Productivity Research, Quality and Defect Research - I looked
at 8 of the 16 references.
For 7 of the 8 references, there seem to be significant discrepancies
between the original source material and how that material is presented
in "Agile and Iterative"; or the original source fails to present
evidence for a conclusion that is then repeated in "Agile and
Iterative".
The source material I looked at is available online or through public
libraries, so there's no need to rely on my quotations.
> I expect that some of Craig's interpretations will be open to
> question, others not so.
I'd be happy to see these discrepancies explained - the best answer I
can give is that they are 'mistakes'.
> I expect the book to stand as providing some reasonable and interesting
> evidence, not to be struck down as full of falsehoods. I expect this
> because (a) I know Craig and I know he worked hard on it, even if it's
> flawed, and (b) I'm quite sure that Agile works, and therefore I expect
> evidence to be out there.
"Agile and Iterative" makes a clear distinction between IID and
prototyping - "the software resulting from each iteration is not a
prototype or proof of concept" p11; and makes a clear distinction
between IID and incremental delivery - "Incremental delivery is often
confused with iterative development" p20.
That makes it particularly odd when an original study talks about
prototyping but "Agile and Iterative" describes it as IID, and
particularly odd when an original study talks about incremental
delivery but "Agile and Iterative" describes it as timeboxed
iterations.
The evidence may well be "out there" - but afaict it isn't in the
Evidence chapter of "Agile and Iterative".
Timeboxing DuPont
http://groups-beta.google.com/group/comp.software.extreme-programming/browse_frm/thread/87c3d62a61304103/b3083b0916686207?_done=%2Fgroup%2Fcomp.software.extreme-programming%3F&_doneTitle=Back+to+topics&_doneTitle=Back&&d#b3083b0916686207
http://groups-beta.google.com/group/comp.software.extreme-programming/browse_frm/thread/8dc24b299fce308b/e20a07d85e6b3595?_done=%2Fgroup%2Fcomp.software.extreme-programming%3F&_doneTitle=Back+to+topics&_doneTitle=Back&&d#e20a07d85e6b3595
MacCormack's Harvard Studies
http://groups-beta.google.com/group/comp.software.extreme-programming/browse_frm/thread/97c04b6f0004be8a/4a91f7da0db4e663?_done=%2Fgroup%2Fcomp.software.extreme-programming%2Fthreads%3Fstart%3D30%26order%3Drecent%26&_doneTitle=Back&&d#4a91f7da0db4e663
> The evidence may well be "out there" - but afaict it isn't in the
> Evidence chapter of "Agile and Iterative".
We had a very good XP San Diego meeting last night, hearing the
perspective of an experienced Project Manager from SAIC.
SAIC is a research and engineering firm serving military, government,
and aerospace contracts. They have traditionally used Waterfall
methodology (coding to a plan), or Spiral methodology (many small
iterative Waterfalls). They have all been everywhere that the studies
cited in /Agile and Iterative/ claim they were.
Their programmers began to investigate the "Agile" methodologies, and
they now host the XP San Diego Users Group meetings.
On Thursday night, their speaker was Mary Rodney, the first project
manager at SAIC to run an XP project.
Nine months ago, Mary reported to XPSD that she was about to start an
XP project, and that she would tell us how she did in nine months.
On schedule, here are some highlights:
* higher managers expected a schedule with detailed goals
and speculative requirements
* Mary worked in 21 day sprints, with fixed requirements,
but few details planned in advance
* the project used Scrum management, with standup meetings
and rules
* the team used all XP practices except Test-Driven
Development
* the pair programming and light test rig generated very high
code quality
* pairs rotated often, and the total LOC went down often
* all requirements came by discovery - by interaction with
external teams
* the project used no legacy code
* the team had to write emulators to match external hardware
* the project delivered 3 milestones, each after 3 sprints
* each milestone had to pass a rigorous manual test in
a large, complex hardware rig
* the second milestone went online at a site in Europe
* the project was on time and under budget
Mary's job as Project Manager was to channel all the information
produced by the test results and interactions, to interpret and guide
its top-level metrics. The project's success surprised everyone at
SAIC. Its programmers commented that they worked the _fewest_ hours per
week of any project. They worked only one Saturday.
Those seeking numeric evidence should compare XP's sustainable pace
practice to its ability to hit milestones and deadlines. Put another
way, Mary did not enforce Sustainable Pace by fiat. All the other
practices enforced it as a side-effect of very high velocity.
Oh, and the project was an HMI for gamma-ray scanners used at airports
and shipping terminals...
--
Phlip
"Please state the nature of the programming emergency"
What does this mean? "They have all been everywhere that the studies
cited in /Agile and Iterative/ claim they were."
I don't recall SAIC being mentioned in "Agile and Iterative" - which
study was it?
> > SAIC is a research and engineering firm serving military, government,
> > and aerospace contracts. They have traditionally used Waterfall
> > methodology (coding to a plan), or Spiral methodology (many small
> > iterative Waterfalls). They have all been everywhere that the studies
> > cited in /Agile and Iterative/ claim they were.
>
> What does this mean? "They have all been everywhere that the studies
> cited in /Agile and Iterative/ claim they were."
Ah, we must speak by the card, or equivocation will undo us...
The purpose of the kinds of studies that /Agile and Iterative/ cites is
to convert incomplete data into just enough statistics to predict
outcomes.
SAIC has experienced the range of methodologies cited, and has
experienced their predicted outcomes.
Note Note NOTE: I said SAIC experienced the range of outcomes predicted
by the studies that /Agile and Iterative/ cited. I did NOT say SAIC
experienced the range of outcomes that /Agile and Iterative/ predicted.
So amend my sentence to "They have all been everywhere that the
studies cited in /Agile and Iterative/ claim such organizations would
be."
--
Phlip
> Note Note NOTE: I said SAIC experienced the range of outcomes predicted
> by the studies that /Agile and Iterative/ cited. I did NOT say SAIC
> experienced the range of outcomes that /Agile and Iterative/ predicted.
However, I WILL say that SAIC experienced the outcomes that /Agile and
Iterative/ predicted for XP.
--
Phlip
http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces
No. The aim of the studies cited, and of the chapter which cites them,
is to provide evidence to support a thesis/contention. Not to predict an
outcome.
> SAIC has experienced the range of methodologies cited, and has
> experienced their predicted outcomes.
>
> Note Note NOTE: I said SAIC experienced the range of outcomes predicted
> by the studies that /Agile and Iterative/ cited. I did NOT say SAIC
> experienced the range of outcomes that /Agile and Iterative/ predicted.
>
> So amend my sentence to "They have all been everywhere that the
> studies cited in /Agile and Iterative/ claim such organizations would
> be."
Gosh, Phlip was able to present an anecdotal XP/Agile success story!
What were the odds?
> No. The aim of the studies cited, and of the chapter which cites them,
> is to provide evidence to support a thesis/contention. Not to predict
> an outcome.
Statistics is the discipline of measuring probabilities that predict
the next event in a series.
The aim of the studies was to measure and (yes) leverage statistics.
> Gosh, Phlip was able to present an anecdotal XP/Agile success story!
> What were the odds?
Well, let's count the successes. SAIC are no slackers, and their
previous projects don't fail often. The success was that the _first_
completely XP project at SAIC exceeded _all_ previous projects for
delivery and accurate scheduling. Three deliveries, two of those
external, in nine months, for the HMI and database for gamma ray
scanners.
Oh, and they had to satisfy ISO _and_ CMMI audits, while doing it.
But hey - maybe it was just beginner's luck!
--
Phlip
It tells us nothing about the accuracy of the information that is
regurgitated into other contexts:
"This reminds me of the passages from /Agile & Iterative Development/
by Craig Larman, indicating that Waterfall performed strictly is less
productive than performed loosely."
http://groups.yahoo.com/group/extremeprogramming/message/103123
I am interested in hearing about the SAIC experience but don't see what
it has to do with *this* topic - it would make a nice new discussion
all on its own.
Somehow I doubt that. If they're a CMM(i) shop, I presume that they
know what the auditors want to see, so they know how to give it to
them. That's a long way ahead of people who are sorta vague on
both and trying to find their way out of the fog.
CMM(I) level 5 I presume, at least from the comments about
measurements.
Anyway, a shop that's regularly successful with plan-driven isn't
precisely rare, but it's unusual enough to be worth noting.
Are the slides available?
John Roth
I actually have no interest in *this* topic. Beating on Larman doesn't
attract me at all. It seems like the kind of idle intellectual pursuit
that's followed by people with nothing better to do with their time.
John Roth
>
> CMM(I) level 5 I presume, at least from the comments about
> measurements.
We forgot to ask. The deal was, as usual, that the artifacts needed to pass
audit were simply introduced into the backlog.
> Anyway, a shop that's regularly successful with plan-driven isn't
> precisely rare, but it's unusual enough to be worth noting.
The point: Average success with plan-driven, and very successful with XP.
> Are the slides available?
I'll try to remember to ping when they go up.
--
Phlip
http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces
1) Call me naive, but let me repeat: "I trust the author's good
intentions, and verify the writing against the cited references".
2) Scenario: We find mistakes in a published program, and we bring
those mistakes to the attention of the programmer, marketeers and
program users.
Is that 'beating on' the programmer?
Should we just ignore the mistakes?
3) John, given that you "have no interest in *this* topic" does your
posting have any purpose beyond making insinuations?
Take, for example, this statement of purpose of the management
committee of a rather famous, or infamous organization:
"Real Management is Joint Personality achieved through Interfusion
based on the instincts of Vision, Humility and Love."
>But then, I should know by now that
>the stated values in a movement or organization offer no indication of
>its actual conduct.
Read that again, and notice how it applies from both perspectives.
There is, of course, another way to view this whole Spy vs. Spy
intrigue. It is just possible that everybody involved is trying the
best they can to be honorable.
-----
Robert C. Martin (Uncle Bob) | email: uncl...@objectmentor.com
Object Mentor Inc. | blog: www.butunclebob.com
The Agile Transition Experts | web: www.objectmentor.com
800-338-6716
"The aim of science is not to open the door to infinite wisdom,
but to set a limit to infinite error."
-- Bertolt Brecht, Life of Galileo
> Well, let's count the successes. SAIC are no slackers, and their
> previous projects don't fail often.
...with at least one notable exception recently...
Laurent
?
Google News just reports signing deals with various dodgy outfits.
(NASA, China, etc.)
--
Phlip
My guess: Laurent is referring to VCF.
http://www.usdoj.gov/oig/testimony/0502/index.htm
> My guess: Laurent is referring to VCF.
> http://www.usdoj.gov/oig/testimony/0502/index.htm
Yup. That's been discussed previously, I didn't think to be more
explicit. I'm assuming we're referring to the same SAIC, the ones who're
webquartered at saic.com, but I could be wrong. That same web site has
an interesting "rebuttal" of the conclusion that VCF should be scuttled,
by the way. Interesting and relevant, perhaps, to this thread. (Or, more
precisely, the thread that this thread is turning into.)
> My guess: Laurent is referring to VCF.
> http://www.usdoj.gov/oig/testimony/0502/index.htm
Uh, I suspect that political forces attempting to preserve the
situation that routes all top-level FBI and CIA data thru the
White House alone might impair any modern effort to simplify the
collation of that data. So that signal is out-of-band with respect to
methodology!
--
Phlip
Another day, another conspiracy theory - of course, the CIA are
unmentioned in any of the congressional testimony...
The SAIC testimony makes it seem like just another high-risk, overly
ambitious project:
'Without defined requirements or an enterprise architecture for the FBI
IT systems, this was a high risk approach that reflected the post 9/11
atmosphere. Here is where SAIC made honest mistakes. We should have
made known that this approach was too ambitious.'
Or did they embrace change without knowing how?
'Often, however, the agents would look at the development product and
reject it. They would then demand more changes to the design in a
trial-and-error, "we-will-know-it-when-we-see-it" approach to
development.'
...
'SAIC expressed concern over the affect of these changes on cost and
schedule; however, we clearly failed to get the cumulative effect of
these changes across to the FBI customer. We accept responsibility for
this failure to elevate our concerns.'
> The SAIC testimony makes it seem like just another high-risk, overly
> ambitious project:
>
> 'Without defined requirements or an enterprise architecture for the FBI
> IT systems, this was a high risk approach that reflected the post 9/11
> atmosphere. Here is where SAIC made honest mistakes. We should have
> made known that this approach was too ambitious.'
>
> Or did they embrace change without knowing how?
If I told you there was no evidence that this project was Agile-style,
would that change how you choose to distort things?
--
Phlip
If you said that the congressional testimony made no statement about
the methodology used on this project, I would agree with you.
I've provided the URLs for the congressional testimony, and yet you
accuse me of distorting things. People who take the trouble to read the
testimony will see for themselves that the accusation is false.
Phlip,
Isaac is quoting from SAIC's testimony in Washington, and the text is
on SAIC's website, so what makes you think it's distorted?
Of course they are. We just aren't any closer to a shared understanding.
Try this, using Isaac's programming metaphor.
The chapter in Mr. Larman's book is a piece of code, the
output being 'a conclusion backed by evidence', the tests being
the 16 studies cited. 8 of the tests are run, 7 of them fail.
What happens next?
Maybe the tests are ambiguous...
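To make the metaphor concrete - a minimal sketch in Python, where the
numbers come from this thread and nothing else is from the book:

    # Hypothetical sketch of the metaphor: the Evidence chapter as code,
    # the cited studies as its test suite.
    cited = 16   # studies the chapter cites
    run = 8      # studies checked so far against their primary sources
    failed = 7   # checked studies that don't support the chapter's claims

    if failed > 0:
        # In code terms: most of the tests that were run are red.
        # Either the code gets reworked, or the failing tests are shown
        # to be ambiguous - the two outcomes this thread is debating.
        print(f"{failed} of {run} examined studies fail; {cited - run} never run")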
>> I actually have no interest in *this* topic. Beating on Larman doesn't
>> attract me at all. It seems like the kind of idle intellectual pursuit
>> that's followed by people with nothing better to do with their time.
>>
>It's nice to see that Isaac's display of the XP values of courage and
>communication are respected. But then, I should know by now that
>the stated values in a movement or organization offer no indication of
>its actual conduct.
Isaac does seem to some of us to be on a long quest to debunk Larman's
book. That does not interest some of us; it seems idle and
ill-directed.
Some of us think that a more interesting question is whether, when,
how XP and Agile practices work, and are interested in exploring that
realm rather than debunking it.
It's OK that Isaac is doing whatever he's doing, and to quest after
whatever he's really questing after. If he's using courage to do it,
good for him.
As for communication, I, for one, am not understanding the value of
the exercise. But then I am in a different group, perhaps, from Isaac:
I've done XP, been with many teams trying to do XP, and so my need for
concrete data is lower.
I am in another group as well, the one that believes that concrete
data from one project, or a thousand projects, only applies weakly to
any other project. So once I read a bunch of stories that say "We
tried this, it worked great," the details don't excite me much. Isaac
seems to me to have more interest in the details. That's good too.
Regards,
--
Ron Jeffries
www.XProgramming.com
I'm giving the best advice I have. You get to decide if it's true for you.
>
>"Robert C. Martin" <uncl...@objectmentor.com> wrote in message
>news:knvo215utpb7drb12...@4ax.com...
>> On Sun, 6 Mar 2005 16:34:37 -0500, "Scott Kinney"
>> <saki...@ix.netcom.com> wrote:
>>
>> >But then, I should know by now that
>> >the stated values in a movement or organization offer no indication of
>> >its actual conduct.
>>
>> Read that again, and notice how it applies from both perspectives.
>>
>> There is, of course, another way to view this whole Spy vs. Spy
>> intrigue. It is just possible that everybody involved is trying the
>> best they can to be honorable.
>
>Of course they are. We just aren't any closer to a shared understanding.
>
>Try this, using Isaac's programming metaphor.
>
>The chapter in Mr. Larman's book is a piece of code, the
>output being 'a conclusion backed by evidence', the tests being
>the 16 studies cited. 8 of the tests are run, 7 of them fail.
I don't agree with the metaphor. We aren't dealing with anything as
unambiguous as code. Rather we are dealing with human motives and
interpretations.
I think Isaac has done a lot of work, and has brought up some very
good points. I appreciate this work, and have learned from it.
On the other hand, Larman also did a lot of work and brought up some
very good points. I have learned a lot from it as well.
I do not find that Isaac's work invalidates Larman's. I do find that
it adds perspective.
I'm not on a quest to debunk anything.
Looking at just a handful of studies has dragged out over a couple of
months (it's hard to find the time). The intention of this thread was
to wrap up, to provide an outcome for those who were waiting:
"I have not done the deep study of Larman's book that Isaac seems to
have undertaken, and await the outcome."
-snip-
> As for communication, I, for one, am not understanding the value of
> the exercise. But then I am in a different group, perhaps, from Isaac:
> I've done XP, been with many teams trying to do XP, and so my need for
> concrete data is lower.
Ron, you for one, directed me to chapter 6 of "Agile and Iterative" -
my fault seems to be that I followed Principle 129:
Principles of Software Development, Alan Davis, 1995
Principle 129: "Don't believe everything you read."
> I am in another group as well, the one that believes that concrete
> data from one project, or a thousand projects, only applies weakly to
> any other project.
So, is the evidence against the dreaded 'waterfall' weakly applicable?
> So once I read a bunch of stories that say "We tried this, it worked
> great," the details don't excite me much.
A bunch of better stories would read 'We baselined our current process;
We tried this, it worked great compared to our current process.'
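As a minimal sketch of what such a baselined story could record (every
number and field name below is invented purely for illustration):

    # Hypothetical baseline-vs-trial record; all figures are made up.
    baseline = {"weeks_per_release": 26, "defects_found_late": 40}
    trial = {"weeks_per_release": 9, "defects_found_late": 11}

    for metric in baseline:
        ratio = baseline[metric] / trial[metric]
        print(f"{metric}: {ratio:.1f}x better than our baseline")

Even rough numbers like these make 'it worked great' a comparison
instead of a recollection.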
> > So once I read a bunch of stories that say "We tried this, it worked
> > great," the details don't excite me much.
>
> A bunch of better stories would read 'We baselined our current process;
> We tried this, it worked great compared to our current process.'
I thought I gave you a story like that.
Someone this curious about XP should ... try it?
--
Phlip
Isaac is not on a quest to debunk Larman's book. You, as noted in
another post, recommended that Isaac read Larman's "Evidence" chapter.
Mr. Martin on Amazon also touted the "Evidence" chapter, saying:
"Carlton, You should get hold of Craig Larman's book "Iterative and Agile
Management". It has some of the best information about the failings of
up-front requirements that I have seen. He quotes from dozens of different
peer-reviewed research studies that date back to the 70's and 80's showing
that the vast majority of software project failure can be traced to up-front
requirements and waterfall mentality."
This information is so significant that I can't believe it's not more
widely known."
and in an XP wiki
"Larman, in his AgileAndIterativeManagement book,
shows the research. There is a lot of it, peer reviewed, covering thousands
upon thousands of projects. The results are that up front requirements lead
to an overwhelming amount of project failure. The numbers are terrifying.
(Sorry for all the superlatives but I think they are warranted.)"
The book provides an excellent survey of agile and iterative methods;
it's well written, engaging, and draws very instructive comparisons
between methods to show their distinctive styles.
It's the "Evidence" chapter with which Isaac and I take issue. In 8 of the 9
cited studies
read (and I'm including the "Chaos Report" citations as part of that.) there
are errors of
fact, misrepresentations of the studies' conclusions. In short, the evidence
presented doesn't
support the conclusions offered. That's it.
If, as Mr. Martin suggests, it's just a matter of interpretation, then
show, using the cited material, where interpretations differ.
>> Isaac does seem to some of us to be on a long quest to debunk
>> Larman's book. That does not interest some of us; it seems idle and
>> ill-directed.
>
> I'm not on a quest to debunk anything.
OK ... I'm just reporting that it looks that way to me, and maybe some
other folks.
>
>Looking at just a handful of studies has dragged out over a couple of
>months (it's hard to find the time). The intention of this thread was
>to wrap up, to provide an outcome for those who were waiting:
>
>"I have not done the deep study of Larman's book that Isaac seems to
>have undertaken, and await the outcome."
Yes. If I were to raise issues with things so far, I'd support the
dragging-out thing, which makes it less exciting, and I'm concerned
that we don't have Larman in the conversation.
>
>
>-snip-
>> As for communication, I, for one, am not understanding the value of
>> the exercise. But then I am in a different group, perhaps, from Isaac:
>> I've done XP, been with many teams trying to do XP, and so my need for
>> concrete data is lower.
>
>Ron, you for one, directed me to chapter 6 of "Agile and Iterative" -
>my fault seems to be that I followed Principle 129:
>
>Principles of Software Development, Alan Davis, 1995
>Principle 129: "Don't believe everything you read."
>
I'm not imputing fault to you. You're doing good stuff that some
people value. I value it somewhat, but to a lesser degree, because
I've got data that I personally find more useful: personal data.
I would like to have data like that in Larman's book that was not
readily refutable, because it's good to have, if for no other reason
than to move to the next phase of the conversation with someone who
asks for data. I generally assume that if the data were perfectly
obvious and true, they would then raise an objection closer to their
real one. So if his stuff isn't solid, I'm disappointed that I can't
safely and honestly point people to it.
But I'm not making my decisions based on Craig's numbers.
>
>> I am in another group as well, the one that believes that concrete
>> data from one project, or a thousand projects, only applies weakly to
>> any other project.
>
>So, is the evidence against the dreaded 'waterfall' weakly applicable?
In the same sense as any other evidence, on projects, yes. Does that
mean I think waterfall might be good? No, but it's not the data that
convinces me.
>
>
>> So once I read a bunch of stories that say "We tried this, it worked
>> great," the details don't excite me much.
>
>A bunch of better stories would read 'We baselined our current process;
>We tried this, it worked great compared to our current process.'
When someone said "I tried this and it works great," I take them to be
saying "in comparison with what we did before." I figure they were
there and that they probably know. Call me gullible.
>
>"Ron Jeffries" <ronje...@acm.org> wrote in message
>news:022u211nqcttu34rv...@4ax.com...
>>
>> Isaac does seem to some of us to be on a long quest to debunk Larman's
>> book. That does not interest some of us; it seems idle and
>> ill-directed.
>>
>>
>Is that the royal "us", or are you the spokesman for a group?
I am speaking for me, and making an educated guess, based on the
responses of others.
>
>Isaac is not on a quest to debunk Larman's book. You, as noted in
>another post, recommended that Isaac read Larman's "Evidence" chapter.
>Mr. Martin on Amazon also touted the "Evidence" chapter, saying:
>"Carlton, You should get hold of Craig Larman's book "Iterative and Agile
>Management". It has some of the best information about the failings of
>up-front requirements that I have seen. He quotes from dozens of different
>peer-reviewed research studies that date back to the 70's and 80's showing
>that the vast majority of software project failure can be traced to up-front
>requirements and waterfall mentality."
Yes, I made that recommendation. And Isaac says that he is not out to
debunk. I am reporting that it appears to me that he is out to debunk.
Both can be true.
>
>This information is so significant that I can't believe it's not more
>widely known."
>
>and in an XP wiki
>"Larman, in his AgileAndIterativeManagement book, shows the research.
>There is a lot of it, peer reviewed, covering thousands upon thousands
>of projects. The results are that up front requirements lead to an
>overwhelming amount of project failure. The numbers are terrifying.
>(Sorry for all the superlatives but I think they are warranted.)"
Did I say that? It doesn't sound like me, but I could be mistaken.
>
>The book provides an excellent survey of agile and iterative methods;
>it's well written, engaging, and draws very instructive comparisons
>between methods to show their distinctive styles.
>
Yes, I agree ...
>
>It's the "Evidence" chapter with which Isaac and I take issue. In 8 of
>the 9 cited studies read (and I'm including the "Chaos Report" citations
>as part of that) there are errors of fact and misrepresentations of the
>studies' conclusions. In short, the evidence presented doesn't support
>the conclusions offered. That's it.
OK. Sounds like debunking to me.
>
>If, as Mr. Martin suggests, it's just a matter of interpretation, then
>show, using the cited material, where interpretations differ.
That's Bob's position to support further or not. I wouldn't bother,
since I don't expect anyone to change position even if the LGJ came
down and said that Craig's interpretations were correct.
And as I reported elsewhere and elsewhen, I've done projects in a
zillion ways and I like XP best, and I've helped lots of other teams
who have seen improvements. Someone else's studies won't help me
decide, though I'd like to have some non-debunkable studies to toss to
people who claim to care.
Honestly, and I mean no insult by this, I feel that if either you or
Isaac were really interested in whether XP would work for you, you'd
be trying it, not reading books and examining their claims. So I
conclude that neither of you is looking for a reason to try it, or
even trying to decide to try it.
I could be wrong on that ... I'm just reporting what my interpretation
is. Nor do I think such a position, if it was yours, is necessarily
wrong. I'm just saying that if I were on commission for selling XP,
I'd stop making sales calls on you guys.
>If, as Mr. Martin suggests, it's just a matter of interpretation, then
>show, using the cited material, where interpretations differ.
I think both sides have had their say. Larman published his
interpretation of the data; and Isaac has posted his. While I respect
the work that Isaac put into this, and can see his points, I don't
think they completely invalidate Larman's conclusions.
BTW I still think the superlatives apply. It astounds me that people
will staunchly defend waterfall when there is so much data out there
calling it into question.
Malign motivations again! See below.
> >Looking at just a handful of studies has dragged out over a couple of
> >months (it's hard to find the time). The intention of this thread was
> >to wrap up, to provide an outcome for those who were waiting:
> >
> >"I have not done the deep study of Larman's book that Isaac seems to
> >have undertaken, and await the outcome."
>
> Yes. If I were to raise issues with things so far, I'd support the
> dragging-out thing, which makes it less exciting, and I'm concerned
> that we don't have Larman in the conversation.
I've provided Mr Larman with notes on these issues. (And to shortcut
the obvious questions, I have no intention of discussing private
correspondence.)
-snip-
> I'm not imputing fault to you.
It would have been great if someone had pointed out specific faults in
what I had written about the discrepancies between "Agile and
Iterative" and the cited studies.
It would have been ordinary to look at the evidence and admit: that one
is obviously a mistake, that one is open to different interpretations,
that one you've got wrong...
imo, simply ignoring what I had written (lack of interest) would be
fine also.
Instead of addressing (or ignoring) the issues, the snide attacks on my
motivations continue.
> I would like to have data like that in Larman's book that was not
> readily refutable
typo? Presumably we'd like to have claims that *were* readily refutable
[falsifiable, testable] but had not been refuted.
-snip-
> So if his stuff isn't solid, I'm disappointed that I can't
> safely and honestly point people to it.
Me too.
> But I'm not making my decisions based on Craig's numbers.
Unfortunately, some 'mistakes' have already been taught on a university
course (maybe next year they'll change that part of the course into a
comparison between primary and secondary sources).
Maybe someplace a manager now expects the team to quadruple
productivity with timeboxing-by-itself.
> >> I am in another group as well, the one that believes that concrete
> >> data from one project, or a thousand projects, only applies weakly
> >> to any other project.
> >
> >So, is the evidence against the dreaded 'waterfall' weakly applicable?
>
> In the same sense as any other evidence, on projects, yes. Does that
> mean I think waterfall might be good? No, but it's not the data that
> convinces me.
Well, there might even still be people who don't look at the data, and
'believe' in waterfall.
> When someone said "I tried this and it works great," I take them to
> be saying "in comparison with what we did before." I figure they were
> there and that they probably know. Call me gullible.
I'd figure they were there, and we know from studies of memory and
recall that there's no chance of that comparison being accurate -
people are not computers.
> I'd figure they were there, and we know from studies of memory and
> recall that there's no chance of that comparison being accurate -
Before accurate, it has to be meaningful...
> people are not computers.
...which is why so-called "anecdotal" comparisons *can* be meaningful.
People are not lab rats either.
I do have an interest in what it means when someone says, "We decided to
measure productivity in this particular way, and when we looked here,
here, here and there we noticed a difference - the places that did X
yielded a result of 100 and the places that did Y yielded a result of
200."
I think in most if not all cases, our very next question would be this:
"Hmmm, now *why* would that of all things happen ?" (In the other cases,
it was our first question and we did the study as a different way of
framing it.) You'll note that anyone with "anecdotal" experience is
justified in asking that very same question of what she observed.
What "why" questions could we be asking right here, right now, based on
what we now (thanks to your primary-source inquiries) know about the
studies quoted by Larman?
Recollection of the details (qualitative or quantitative) of a complex
situation is unreliable.
We can base anecdotal comparisons on information we recorded at the
time...
> We can base anecdotal comparisons on information we recorded at the
> time...
Recording can interfere with the task at hand... Sometimes
constructively so: I hear that people who keep journals are more
effective. (But how you'd prove that is tricky - they didn't keep
journals of the period when they weren't journaling, did they...)
> Recollection of the details (qualitative or quantitative) of a complex
> situation is unreliable.
You don't need reliable recall of details to ask insightful questions.
Or to arrive at insightful conclusions. (Cf. that recent pop science
book by Malcolm Gladwell, /Blink/.)
What "why" questions could we be asking right here, right now, based on
what we now (thanks to your primary-source inquiries) know about the
studies quoted by Larman?
What in software development has you puzzled?
>Instead of addressing (or ignoring) the issues, the snide attacks on my
>motivations continue.
Isaac, I'm not attacking you or your motivations. I am reporting that
to me, your actions appear to be indistinguishable from someone who is
trying to debunk the book. I'm not even saying that debunking is bad,
I'm just saying that's what it looks like from here.
You said that you aren't doing that, and I acknowledged that as well,
in the same note.
If the book is bunk, it needs to be debunked. I myself am not
interested in doing it, because I'm not making decisions based on it.
I am watching the conversation with some interest so that I'll know
better what you find.
Other people do seem to find more "charitable" interpretations of the
data than you do. That's OK too. I wish we could know what the real
facts were, not even just those reported in the original documents.
It's good that you're doing this even if you were debunking and if you
are trying to keep an open mind, better yet.
And it bores the hell out of me, though I'd like to see the final
results. Not everything interests me, even very important things.
That's a problem with me, not a problem with you.
>Recollection of the details (qualitative or quantative) of a complex
>situation is unreliable.
>
>We can base anecdotal comparisons on information we recorded at the
>time...
"I don't like sheep eyeballs."
"Have you tried them?"
"Yes"
"What do they taste like?"
"I don't remember but I hated them."
Thank you for that very illuminating anecdote.
As I've said - I'm done with it.
It becomes dispiriting after the first couple of mistakes, once you
realize people have taken it at face value.
Ron, given your choice of verbiage in other posts ("idle",
"ill-directed", "bashing Larman") and your choice to call Isaac's
motives into question instead of addressing the question at hand, this
is hard to believe. To paraphrase, "your actions appear to be
indistinguishable from someone who is attacking Isaac."
>
> If the book is bunk, it needs to be debunked.
For someone who, when it suits him, will parse meaning to within
a gnat's eyelash, I'm puzzled at your consistent unwillingness to
distinguish between:
debunking the misuse of specific primary sources
debunking a chapter in a book
and
debunking a book.
> Other people do seem to find more "charitable" interpretations of the
> data than you do.
That's not precisely true. Other people simply assert that there must
be more charitable interpretations of the data. No one has actually
offered any.
There is a difference between saying "Isaac has bad motivations,"
which I am /not/ saying, and "Isaac appears to me to be trying to
debunk the book," which I /am/ saying.
Note carefully: /appears to me/. This is feedback on what Isaac has
said; on his writing; not an imputation of motivation.
If Isaac is trying to be even handed, and I gather from his comments
that he is, then my feedback might be useful to him in how he writes
up further commentary.
Isaac is not his writings; no one is his writings. His writings
suggest certain things to me. I feed those suggestions back so that he
can revise if he cares to.
Regards,
Ron