
Elves in the Night [Stupid XP Question Number 6614]


Phlip

Jan 2, 2000
eXtreme Newsgroupies:

Consider this hypothetical conversation between a recent college
graduate and his or her veteran project lead. They practice the
Spiral development lifecycle...

----

Junior: [epithet!] Look at all these stoopid features we gotta do.

Senior: Always focus on the Present, Obi-wan. That requirements
document contains core features that represent the common project
functionality. That's the first cycle. Then all that long list of
extra features can be typed in as lists of calls into the common
code. That's the second cycle.

Junior: It's gonna take a year to do the first cycle. And we got
approval for more bodies, but how can they work on the extra
features before the common features are done?

Senior: They can't. The project will have to wait for them.

Junior: Whoa! I got it! We just gotta run like a fab[rication unit],
with 3 shifts! All we gotta do is hire an Evening Shift, and then a
Graveyard Shift.

Senior: I really doubt you'll be able to recruit quality people
willing to give up Prime Time Television to work with limited
supervision to help others get daylight credit for a project.

Junior: No, wait - this could work. Because every time I leave work
I write an e-mail to my relief programmer on the next shift. Then
they read the e-mail, open the source, and pick up where I left off!
But... but... I could be on a "Swing Shift" that overlapped the two
regular shifts. And I'd, uh, hang around and help the second shift
start, you know...

Senior: You have much to learn about the ways of software before you
become a Project Lead, Grasshopper.

Junior: But that's how GNU does it! You change something and check
it in, and then someone in Sri Lanka or somewhere checks it out and
keeps going!

Senior: And GNU projects always ship on time, right?

----

Okay. Now run time back to the beginning, inject eXtreme Programming
into each of their brains, and start time again.

--
Phlip
======= http://users.deltanet.com/~tegan/home.html =======

Ronald E Jeffries

Jan 2, 2000
On 02 Jan 2000 00:30:40 EST, "Phlip" <new_...@see.web.page> wrote:

>eXtreme Newsgroupies:
>
>Consider this hypothetical conversation between a recent college
>graduate and his or her veteran project lead. They practice the
>Spiral development lifecycle...

I'm sorry, Obi-Wan. This one eludes me entirely. Please try again.

Ron Jeffries
http://www.XProgramming.com
Meditation is futile. You will be aggravated.

Robert C. Martin

Jan 2, 2000

Phlip <new_...@see.web.page> wrote in message
news:84mnq0$9...@chronicle.concentric.net...

> eXtreme Newsgroupies:
>
> Consider this hypothetical conversation between a recent college
> graduate and his or her veteran project lead. They practice the
> Spiral development lifecycle...
>
> ----
>
> Junior: [epithet!] Look at all these stoopid features we gotta do.
>
> Senior: Always focus on the Present, Obi-wan.

Senior: Estimate each feature and then ask the customer which features he
wants done first. Allow him to choose only enough features for a three-month
release. Then, from those features, ask him to pick the ones you
should work on in the first iteration. Don't let him choose more than
three weeks' worth. Break those three weeks' worth of features into tasks,
taking advantage of any commonality between them. Rearrange the tasks
into the order you think most appropriate for development, and then
implement those tasks.

Junior: But it's not likely that the customer will first choose the features
that help us build the core architecture of the system. Without such a core
it will be harder to build the system as a whole. Don't we need an
architecture? Don't we have to design the architecture around *all* the
features?

Senior: Young Obi-wan, do not delude yourself into thinking that features
are stable. By the time you finish that first release of three months, the
features that the customer did not select for that release may not be needed
anymore, or may have changed form dramatically. Indeed, it is likely that
even the features within the release will change significantly. You don't
want to base your architecture on things that are likely to change, do you?
Wouldn't you rather base your architecture on the features that the customer
thought most important?

Junior: I see, master. Still, it troubles me to think that there are
abstractions and commonalities that we purposely ignore simply because the
customer doesn't choose the features that support them in the first release.

Senior: And, young Padawan, do you think that when the customer does choose
those features you will not be able to create the necessary abstractions and
commonalities? Do you think the code you have written is frozen in place
and cannot be changed?

Junior: Clearly not, master, but I dread the rework.

Senior: When an artist draws guidelines on his canvas and later uses them
to support the details of his painting, the guidelines disappear. Is this
rework? When a software engineer finds commonality between something he is
implementing and something that was implemented months ago, he uses what
exists as the guideline for his abstraction. Is this rework, or is this
enhancement?

Junior: I fail to see the difference, master.

Senior: Had you created the abstraction when you saw the first feature, you
would have been guessing, young Jedi. You would have added infrastructure
to the software that the software did not need at that time. And for the
months thereafter you would have had to carry and maintain that
infrastructure in the hopes that one day you would need it in its current
form. Finally, when the customer selects the feature that justifies
the abstraction, you must hope that the feature hasn't changed so much as to
make that abstraction inappropriate. Changing an anticipated abstraction
when that anticipation fails is true rework, my naive young Padawan.
Delaying abstraction until needed is simply prudence, like the artist's
guideline.

Junior: Ah, I see, master. If you delay abstraction, then the code moves
from one appropriate state to another appropriate state. It never has too
much infrastructure, and the code that exists at any point in time is
always the simplest and most appropriate for the suite of features it
supports.

Senior: You grasp the concepts well. Meditate more on these matters while
I cut the head off this red-faced Sith.


--

Robert C. Martin | OO Mentoring | Training Courses:
Object Mentor Inc. | rma...@objectmentor.com | OOD, Patterns, C++, Java,
PO Box 85 | Tel: (800) 338-6716 | Extreme Programming.
Grayslake IL 60030 | Fax: (847) 548-6853 | http://www.objectmentor.com

"One of the great commandments of science is:
'Mistrust arguments from authority.'" -- Carl Sagan

Richard J. Botting

Jan 2, 2000
>
>Consider this hypothetical conversation between a recent college
>graduate and his or her veteran project lead.
[...]

>Junior: No, wait - this could work. Because every time I leave work
>I write an e-mail to my relief programmer on the next shift. Then
>they read the e-mail, open the source, and pick up where I left off!

ummmm.... I worked on a project like this, but
before there was e-mail, and when it took 20+
hours to get a single test case run, using FORTRAN.

It worked well, possibly because
(1) we were good at programming
(2) we only added features and abstractions
when we needed them.
(3) we had occasional noisy meetings
(4) we programmed defensively with run time
checks for pre-conditions ... this caught some
bugs quickly.
(5) No deadline, boss, market, ... pressures.

"I sense fear in him, fear leads to the dark side
of the source."

rbotting at CSUSB edu
Computer Scientist, Sys Admin, Consultant, Researcher, and Reviewer
http://www.csci.csusb.edu/dick


Dave Thomas

Jan 2, 2000
Phlip & "Robert C. Martin" <rma...@oma.com> wrote:

> A Star Wars catechism about "not needing it".

But...

I'm troubled by the overall XP argument here. It goes something like:

1. XP is about reducing risk
2. Cost is a factor in risk
3. Doing something now costs more than doing something later (partly
because of the time value of money, partly because of uncertainty).
4. Therefore, don't do something because you _think_ you might need
it, as doing so now costs more than doing so later, and that
increases risk, and that's against XP.

However..

I think there are at least two counter-arguments.

One I've heard before--it will cost _more_ later, because you've more
interfaces to change when you add the component you deferred
implementing. I could see this argued either way. In Ron's
"well-structured system", you could say that the decoupling should
minimize this impact. Maybe, but there'll always be _some_ additional
cost.

There is a second argument though that I haven't seen yet.

Often, I build projects by first implementing a kind of
meta-solution--an infrastructure which is parameterized somehow. With
the correct parameters, it solves the user's problem. These parameters
could be values in a config file, a mini-language, some kind of
environmental scan, just about anything.

I do this because it minimizes risk, but in a different way to the
somewhat negative cost-reducing focus of YAGNI. To my mind, this
approach reduces risk because it allows my applications to roll with
the punches. There are at least two reasons why. Firstly, starting at
the meta-layer *forces* me to design in a decoupled, componentized
way. The various system pieces don't know about each other directly,
because their interactions are scripted elsewhere. It's a discipline
which produces lots of "good fences". Secondly, and to my mind more
importantly, it gives me systems that are easily changed--when the user
says they want to do something differently, often we can accomplish it
with a simple parameter change.[1]

To my mind, this is serious risk reduction. While the XP approach may
achieve linear savings, it's at the expense of the occasional
super-linear cost. Spending the additional week or so up front to
produce a truly flexible and configurable system is, in my
experience, well worth it.

So I feel that YAGNI is too simplistic--there are times where a bit of
up-front investment will be rewarded many times over.

Now I suspect I may hear (in a Hal-like voice) "we know you feel that
way, Dave, but why not just try it our way and see. We think you'll
like it". It's just that at times I _have_ tried it that way--I've
been too lazy or felt I was too busy to invest up-front in a solid
base for a development. Almost without exception I've regretted it.


Regards


Dave


Footnotes:
[1] I had a client in England who called me down to ask about a
change to a system we wrote. He told me there was a limited budget to
do what he felt was a major change--no more than a man-month. We made
the change in front of him during the meeting.

--
Thomas Consulting.
Innovative and successful developments with Unix, Java, C, and C++.

Now in bookstores:
The Pragmatic Programmer. www.pragmaticprogrammer.com/ppbook/

Phlip

Jan 2, 2000
Ronald E Jeffries wrote:

> I'm sorry, Obi-Wan. This one eludes me entirely. Please try again.

If 3 XP coders are O productive, then 9 could be O * 3 productive
working in 3 shifts: adding an evening shift and a graveyard shift.

(PS I'd appreciate certain regulars stringing up my Dialogue
members - as they were meant to be - if certain regulars hadn't
missed the meaning of the title. Germanic fairy tale about a cobbler
with too much work, but he starts finding shoes done in the morning,
and he determines they were implemented by nocturnal supernatural
beings with too much time on their hands.)

Should be easy to shoot down. Try this: The 3rd shift would have
limited input and output (supervision and visibility). Unlike a real
fab with a dedicated process system, this fab would have no
automated work tickets printed every midnight, and no way except
anecdotes to track the 3rd shift's progress independent of the
project's progress as a whole. Administration nightmare ensues.

Michael Schuerig

Jan 2, 2000
Phlip <new_...@see.web.page> wrote:

> Ronald E Jeffries wrote:
>
> > I'm sorry, Obi-Wan. This one eludes me entirely. Please try again.
>
> If 3 XP coders are O productive, then 9 could be O * 3 productive
> working in 3 shifts. An evening shift and a graveyard shift.
>
> (PS I'd appreciate certain regulars stringing up my Dialogue
> members - as they were meant to be - if certain regulars hadn't
> missed the meaning of the title. Germanic fairy tale about a cobbler
> with too much work, but he starts finding shoes done in the morning,
> and he determines they were implemented by nocturnal supernatural
> beings with too much time on their hands.)

What role does the cobbler's overly curious wife play in your allegory?
In the tale -- at least the version I recall -- she lies in wait for the
elves ("Heinzelmaennchen") and disturbs their nightly pursuits. The
little folks don't like this at all and leave the cobbler's house.

Michael

--
Michael Schuerig
mailto:schu...@acm.org
http://www.schuerig.de/michael/

Phlip

Jan 2, 2000
Michael Schuerig wrote:

> What role does the cobbler's overly curious wife play in your allegory?
> In the tale -- at least the version I recall -- she lies in wait for the
> elves ("Heinzelmaennchen") and disturbs their nightly pursuits. The
> little folks don't like this at all and leave the cobbler's house.

That's easy. Upper management (the wife, obviously) gets paranoid of
the 3rd shift and clamps down on them. Then they all quit and get
daylight jobs.

BTW IS ANYONE GONNA ANSWER THE ACTUAL QUESTION??? ("Should software
dev run in 3 shifts if you actually need a little speed, and if so
will XP grease the system or hurt it?")

Phlip

Jan 2, 2000
Robert C. Martin <rma...@oma.com> wrote:

> Senior:  Had you created the abstraction when you saw the first feature, you
> would have been guessing young Jedi.  You would have added infrastructure to
> the software, that the software did not need at that time.  And for the
> months thereafter you would have had to carry and maintain that
> infrastructure in the hopes that one day you would need it in its current
> form.  Finally, when the customer finally selects the feature that justifies
> the abstraction you must hope that the feature hasn't changed so much as to
> make that abstraction inappropriate.  Changing an anticipated abstraction
> when that anticipation fails is true rework, my naive young Padawan.
> Delaying abstraction until needed is simply prudence, like the artist's
> guideline.
 
I'm certain this has never happened before, but you seem to have replicated a Dilbert posted on the exact same day:
 

Patrick Logan

Jan 2, 2000
In comp.object Dave Thomas <Da...@thomases.com> wrote:

: So I feel that YAGNI is too simplistic--there are times where a bit


: of up-front investment will be rewarded many times over.

Lately I have been tending more toward YAGNI and enjoying the
benefits. Because of other principles like Once And Only Once, I have
been able to add "meta" facilities when and where I really can benefit
from it. Maybe "You Aren't Gonna Need It" should be renamed to be
"Wait Until You Really Need It".

--
Patrick Logan patric...@home.com

Andy Glew

Jan 2, 2000
> BTW IS ANYONE GONNA ANSWER THE ACTUAL QUESTION??? ("Should software
> dev run in 3 shifts if you actually need a little speed, and if so
> will XP grease the system or hurt it?")

I've worked in de-facto 2-shift software systems
- US / Israel - and it can be nice.

However, it worked best when one site was mainly
users, testing through use, plus quick and dirty code fixes,
while the other shift was *real* programming.

It was *sweet* when a bug you logged when you left work was fixed
by the time you came in.

The biggest problem when we tried to do true simultaneous
development was communication - making people truly understand
what was going on. XP might make this harder, since XP seems
to deprecate some of the formal communications tools - issue
tracking system, ECOs, etc. - eliminating their overhead in favour
of less formalized person to person communication.

However, I suspect that true multishift could work, if you arranged for
regular, once monthly or so, contact between all shifts.


Michael C. Feathers

Jan 2, 2000
Dave Thomas <Da...@Thomases.com> wrote in message
news:m2zouoz...@zip.local.thomases.com...

> There is a second argument though that I haven't seen yet.
>
> Often, I build projects by first implementing a kind of
> meta-solution--an infrastructure which is parameterized somehow. With
> the correct parameters, it solves the user's problem. These parameters
> could be values in a config file, a mini-language, some kind of
> environmental scan, just about anything.
>
> I do this because it minimizes risk, but in a different way to the
> somewhat negative cost-reducing focus of YAGNI. To my mind, this
> approach reduces risk because it allows my applications to roll with
> the punches. There are at least two reasons why. Firstly, starting at
> the meta-layer *forces* me to design in a decoupled, componentized
> way. The various system pieces don't know about each other directly,
> because their interactions are scripted elsewhere. It's a discipline
> which produces lots of "good fences". Secondly, and to my mind more
> importantly, it gives me systems that are easily changed--when the user
> says they want to do something differently, often we can accomplish it
> with a simple parameter change.[1]
>
> To my mind, this is serious risk reduction. While the XP approach may
> achieve linear savings, it's at the expense of the occasional
> super-linear cost. Spending the additional week or so up front to
> produce a truly flexible and configurable system is, in my
> experience, well worth it.
>
> So I feel that YAGNI is too simplistic--there are times where a bit of
> up-front investment will be rewarded many times over.
>
> Now I suspect I may hear (in a Hal-like voice) "we know you feel that
> way, Dave, but why not just try it our way and see. We think you'll
> like it". It's just that at times I _have_ tried it that way--I've
> been too lazy or felt I was too busy to invest up-front in a solid
> base for a development. Almost without exception I've regretted it.

Dave, can you give a concrete example of one of these
meta-solutions? I ask because I've seen that term applied
to a wide variety of things.

Whenever I hear meta, I start to think about the TypeObject
pattern and active object models. I have evolved into the former
to reduce duplication in a system, but I'm not sure
that is the meta you are speaking of.


---------------------------------------------------
Michael Feathers mfea...@objectmentor.com
Object Mentor Inc. www.objectmentor.com
Training/Mentoring/Development
-----------------------------------------------------
"You think you know when you can learn, are more sure when
you can write, even more when you can teach, but certain when
you can program. " - Alan Perlis


Michael Schuerig

Jan 2, 2000
Michael C. Feathers <mfea...@acm.org> wrote:

> Dave Thomas <Da...@Thomases.com> wrote in message
> news:m2zouoz...@zip.local.thomases.com...
> > There is a second argument though that I haven't seen yet.
> >
> > Often, I build projects by first implementing a kind of
> > meta-solution--an infrastructure which is parameterized somehow. With
> > the correct parameters, it solves the user's problem. These parameters
> > could be values in a config file, a mini-language, some kind of
> > environmental scan, just about anything.
> >
> > I do this because it minimizes risk, but in a different way to the
> > somewhat negative cost-reducing focus of YAGNI. To my mind, this
> > approach reduces risk because it allows my applications to roll with
> > the punches. There are at least two reasons why. Firstly, starting at
> > the meta-layer *forces* me to design in a decoupled, componentized
> > way. The various system pieces don't know about each other directly,
> > because their interactions are scripted elsewhere. It's a discipline
> > which produces lots of "good fences". Secondly, and to my mind more
> > importantly, it gives me systems that are easily changed--when the user
> > says they want to do something differently, often we can accomplish it
> > with a simple parameter change.[1]

[snip]

> Dave, can you give a concrete example of one of these
> meta-solutions? I ask because I've seen that term applied
> to a wide variety of things.
>
> Whenever I hear meta, I start to think about the TypeObject
> pattern and active object models. I have evolved into the former
> to reduce duplication in a system, but I'm not sure
> that is the meta you are speaking of.

I, too, feel a bit uncomfortable with Dave's use of the "meta"-prefix. I
don't think it's wrong, but it is different from what people who have
read stuff such as "The Art of the Metaobject Protocol" expect. I for
one expect something that's not specific to any particular domain. In
contrast, my understanding of the approach that Dave (together with Andy
Hunt in "The Pragmatic Programmer") advocates is such that I'd rather
call it "Separate Mechanism From Policy". Only mechanism, that which is
needed by any application in the domain, is hard-coded; the specific
application is a malleable layer on top of that.
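Schuerig's "Separate Mechanism From Policy" reading can be put in concrete terms. The following is a hypothetical Python sketch, not code from the thread; the names (`run_reports`, `policy`) are invented for illustration. The mechanism knows *how* to run any report; the policy, kept as plain data, says *which* reports to run:

```python
# A minimal sketch of "Separate Mechanism From Policy" (hypothetical
# example; all names invented for illustration).

# Mechanism: hard-coded, domain-generic code that knows HOW to run
# any configured report, but not WHICH reports exist.
def run_reports(policy, data):
    """Run each configured report over the data and collect results."""
    results = {}
    for name, spec in policy.items():
        values = [row[spec["column"]] for row in data]
        results[name] = spec["reduce"](values)
    return results

# Policy: the malleable layer on top of the mechanism. Changing what
# the application does is an edit here, not in run_reports().
policy = {
    "total_payments":  {"column": "payment", "reduce": sum},
    "largest_payment": {"column": "payment", "reduce": max},
    "member_count":    {"column": "member",  "reduce": len},
}

data = [
    {"member": "ann", "payment": 30},
    {"member": "bob", "payment": 45},
]

print(run_reports(policy, data))
# {'total_payments': 75, 'largest_payment': 45, 'member_count': 2}
```

Adding a new report is a one-line change to the policy dictionary; the mechanism never needs to be touched or retested.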

Dave Thomas

Jan 2, 2000
"Michael C. Feathers" <mfea...@acm.org> writes:

> Dave, can you give a concrete example of one of these
> meta-solutions? I ask because I've seen that term applied
> to a wide variety of things.
>
> Whenever I hear meta, I start to think about the TypeObject
> pattern and active object models. I have evolved into the former
> to reduce duplication in a system, but I'm not sure
> that is the meta you are speaking of.

Well, there are a number of different levels. Let's look at a couple.

1. The first was a web-based membership administration system for a
school program. We donated the work, so we were motivated to get it
done fast.

The heart of the database was a set of 6 or so tables describing
memberships, contacts, payments, status etc. These were
administered and queried by individuals, regional administrators,
and a state-level administrator. Individuals query based on their
membership number, regions based on a few more criteria, and the
overall administrator could query on just about everything.

When it came time to code the query screens, the easy (and quickest)
answer would have been to code the big one, and then somehow
parameterize it to produce the smaller query screens. That's all
the client asked for.

Instead, we chose to implement a query descriptor--a data structure
that was parsed to produce the HTML form, and then also used to
generate the SQL query given a form's content (including all
necessary joins and the like). The descriptor looked something
like (in PHP):

"con_name" => array ( 'comp' => CONTAINS,
'label' => 'Coordinator',
'table' => 'contact'),

"mem_state" => array ( 'comp' => EQUALS,
'label' => 'Current status',
'table' => 'membership',
'ddlb' => 'select stt_id, stt_name
from state_name'),

"mem_pay_method" => array ( 'comp' => EQUALS,
'label' => 'Payment method',
'table' => 'membership',
'radio' => array('C' => 'Check',
'P' => 'Purchase order',
'V' => 'Credit card',
'*' => 'Special')),


This added perhaps a day to the time. However, we subsequently had
numerous changes to both the database schema and the search
criteria. We implemented each using simple edits to the
descriptor--not a single line of executable code was needed. This
was a big win for us--it saved us time (important as we weren't
billing) and it meant we could react to requests from clients
quickly--normally within minutes.
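The descriptor idea can be sketched end to end in Python (a hypothetical reconstruction, not the original PHP; `form_fields` and `where_clause` are invented names): one data structure drives both the form fields and the generated, parameterized WHERE clause.

```python
# Hypothetical sketch of a query descriptor: a single data structure
# that drives both form generation and SQL generation.

CONTAINS, EQUALS = "contains", "equals"

descriptor = {
    "con_name":  {"comp": CONTAINS, "label": "Coordinator",    "table": "contact"},
    "mem_state": {"comp": EQUALS,   "label": "Current status", "table": "membership"},
}

def form_fields(desc):
    """Derive the labelled input fields for the query form."""
    return [(field, spec["label"]) for field, spec in desc.items()]

def where_clause(desc, form_values):
    """Build a parameterized WHERE clause from the submitted values."""
    clauses, params = [], []
    for field, value in form_values.items():
        spec = desc[field]
        if spec["comp"] == CONTAINS:
            clauses.append(f"{spec['table']}.{field} LIKE ?")
            params.append(f"%{value}%")
        else:
            clauses.append(f"{spec['table']}.{field} = ?")
            params.append(value)
    return " AND ".join(clauses), params

sql, params = where_clause(descriptor, {"con_name": "Smith", "mem_state": "A"})
print(sql)     # contact.con_name LIKE ? AND membership.mem_state = ?
print(params)  # ['%Smith%', 'A']
```

A schema or search-criteria change is then an edit to `descriptor` alone, which is the property Dave describes: no executable code changes.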

2. We recently coded an applet/servlet application which allowed
consumers of metered services to view various aspects of their
usage on a web browser. The client had a fixed set of reports
(perhaps 4 tables and 6 graphs) which they specified. However,
anticipating change, we chose not to build these 10 reports into
the code. Instead we implemented them as dynamically loaded
classes, looked up using names stored in a properties file. So,
when the client adds a new report to the application, the core
doesn't even need to be downloaded again--it just picks up the
classes on the fly. This gave us a number of benefits.

- the applet was smaller, and loaded quicker
- reports could be added on the fly
- testing was far more modular
- the interface to reporting was remarkably clean

Not a big deal, but a small amount of extra work up front saved us
a lot throughout the rest of the project.
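The dynamically loaded reports can be sketched in Python rather than the original Java (a hypothetical illustration; the real system used a properties file and Java class loading, and `load_reports` and the config keys are invented here). Stdlib callables stand in for the report classes:

```python
import importlib

# Hypothetical sketch of config-driven report loading: the core looks
# up reports by name at run time instead of hard-coding them, so a new
# report is a config edit plus a new class, not a core change.

config = {
    "usage_total": "builtins.sum",   # stdlib stand-ins for report classes
    "usage_peak":  "builtins.max",
}

def load_reports(cfg):
    """Resolve each 'module.attr' name in the config to a callable."""
    reports = {}
    for name, dotted in cfg.items():
        module_name, attr = dotted.rsplit(".", 1)
        module = importlib.import_module(module_name)
        reports[name] = getattr(module, attr)
    return reports

reports = load_reports(config)
usage = [3, 7, 5]
print(reports["usage_total"](usage))  # 15
print(reports["usage_peak"](usage))   # 7
```

As in Dave's applet, the lookup table is the only thing that knows the full set of reports, which is what keeps the core small and the testing modular.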

3. I designed a switch for airline protocols. The initial
implementation was for a protocol called ALC--that was to be the
focus of the product. However, we actually implemented a bus-based
modular communications system, which was configured totally at
run-time. You could add and change comms modules, filters, and
application processors. You could patch inputs to filters to
processors to outputs, all using an external configuration. None of
this was part of the initial requirement, and it probably cost a
month up front. However, as the product became successful, all of
these features were used. The client appreciated the speed with
which we could react to marketing requirements--the benefits
accrued in both the technical and business domains.

None of these are earth-shattering. I'm sure we all develop systems
such as these every day. None of these systems had a stated customer
requirement for flexibility. However, if we hadn't built the
flexibility in from the start, we'd have paid a whole lot more down
the road. We'd either throw out a heap of existing code while adding
the code we should have had from the start, or we'd end up hard coding
each alternative as it came along. Both ways would be expensive, both
at the time and for on-going maintenance.

Using our collective experience, I contend that good developers have a
nose for areas that are likely to need to be flexible. I feel strongly
that if we then ignore that experience and build just for today, we're
being unprofessional and negligent. Our role is not just to transcribe
user requirements into opcodes. We also have a duty to guide
developments based on our knowledge, experience and ethics.


Regards


Dave

Ronald E Jeffries

Jan 2, 2000
On 02 Jan 2000 10:55:37 -0600, Dave Thomas <Da...@Thomases.com> wrote:

>To my mind, this is serious risk reduction. While the XP approach may
>achieve linear savings, it's at the expense of the occasional
>super-linear cost. Spending the additional week or so up front to
>produce a truly flexible and configurable system is, in my
>experience, well worth it.

What's even cheaper and of lower risk is to do the simplest thing
until you know what the objects really want to be, and only THEN
automate them, just as much, and no more, than you really need.

A team I'm working with at the moment needs to implement somewhere
between a zillion and a bazillion "rules". They implemented them
manually for an iteration, refactoring the system until the
rule-supporting objects were lean and mean. The next iteration they
addressed automation of rule generation. I haven't seen the results
yet but the discussion sounded like generating them as fast as you
could type in the detail settings.

Nice thing about the approach is that it doesn't invest in any
generation features that aren't needed, and it focuses on getting the
base objects right early, which is a good thing.


>
>So I feel that YAGNI is too simplistic--there are times where a bit of
>up-front investment will be rewarded many times over.

There might be. And the smarter you are, the better you'll do at
investing up front, because the more likely you are to guess right.
What surprises me, because I think I'm pretty smart, is how much of
the time I get the target a bit wrong and the tool a bit overbuilt,
compared to what I get when I do the simple thing. My record, so far,
seems to be perfect, based on (a) keeping track of what I was going to
do and then didn't, and (b) keeping track of how well my tool worked
when I couldn't resist doing it.

Kent made me think about one of these at the XP Immersion. His
comments got me thinking about what the "rules" project I talk about
did. We did only a day of up-front experimentation, and we "knew" what
the objects needed to be. I have a glimmer of a picture of what they'd
have been had we done even more pure YAGNI, and by G I think they
would be better. I'm going to try to get the team to do an experiment
and find out.

But don't try this idea if you're really good at building tools up
front. You won't like it. ;->

Regards,

Phlip

Jan 2, 2000
<brou...@yahoo.com> wrote:

> "Phlip" <new_...@see.web.page> wrote:
>
> >BTW IS ANYONE GONNA ANSWER THE ACTUAL QUESTION??? ("Should software
> >dev run in 3 shifts if you actually need a little speed, and if so
> >will XP grease the system or hurt it?")
>

> I had assumed the question was facetious.

As did others...

> I see no advantage in using three
> shifts. Just triple the number of people working during the day.
> Obviously, if one woman takes nine months to bear a child, three women
> can get the job done in three months.

Women don't take two thirds of their day off from gestation.

> If you're constantly having to document unfinished work at the end of each
> shift, and then having to review what went on without your knowledge at the
> beginning, I can easily see a highly efficient programmer losing 25% of his
> time just on the extra overhead of difficult communications. So having
> three shifts might double your production. And that's assuming everybody is
> communicating effectively.

Ah, but Junior in my Non-Platonic Dialog now has XP on the brain.
This means she or he is now going to say that Pair-Programming,
overlapping all these shifts, is going to sail straight thru any
issues of what time of day it is, or who fixed what last.

I certainly wouldn't say that. But Junior (who considers Genesis
over-the-hill and an ardent trend-follower) sure would.

> In these times of low unemployment, I can't see wanting to waste people's
> time like this just to get a product out the door slightly faster. It would
> make more sense to make sure each team is working on an independent project,
> and then you'd realize closer to 3x the productivity of one team.

If these were independent projects, and if workstations and cubicles
are cheaper than rampant overtime pay, then these projects could all
work the first shift.

And what about GNU?

Dave Thomas

Jan 2, 2000
schu...@acm.org (Michael Schuerig) writes:

> I, too, feel a bit uncomfortable with Dave's use of the "meta"-prefix. I
> don't think it's wrong, but it is different from what people who have
> read stuff such as "The Art of the Metaobject Protocol" expect. I for
> one expect something that's not specific to any particular domain. In
> contrast, my understanding of the approach that Dave (together with Andy
> Hunt in "The Pragmatic Programmer") advocates is such that I'd rather
> call it "Separate Mechanism From Policy". Only mechanism, that which is
> needed by any application in the domain, is hard-coded; the specific
> application is a malleable layer on top of that.

That's a great characterization. Build nuggets of domain-specific
code, and knit them together to build the application you need at the
time. The only qualification I'd make is that often the lower level
ends up being domain independent. We've been dragging the same trace
routines around with us now for 5 years.

I guess on reflection it _is_ a kind of YAGNI--we're saying defer
anything application-specific as long as possible, and then make it
easy to change. So in spirit we're XPers, we just bend the rules...

Robert C. Martin

Jan 2, 2000, 3:00:00 AM

Phlip <new_...@see.web.page> wrote in message
news:84o8vg$c...@chronicle.concentric.net...

> BTW IS ANYONE GONNA ANSWER THE ACTUAL QUESTION??? ("Should software
> dev run in 3 shifts if you actually need a little speed, and if so
> will XP grease the system or hurt it?")

Ah, a clear question deserves a clear answer.

It seems to me that XP is an enabler of a multi-shift operation. On the
other hand, I think the shifts need to overlap. We don't want the entire
team changing all at once. Rather, we'd like shifts to overlap each other
by 2-4 hours so that the shift that starts gets to work with the current
shift for awhile before that shift stops working.

Robert C. Martin

Jan 2, 2000, 3:00:00 AM

Dave Thomas <Da...@Thomases.com> wrote in message
news:m2zouoz...@zip.local.thomases.com...
>
> I'm troubled by the overall XP argument here. It goes something like:
>
> 1. XP is about reducing risk
> 2. Cost is a factor in risk
> 3. Doing something now costs more than doing something later (partly
> because of the time value of money, partly because of uncertainty).
> 4. Therefore, don't do something because you _think_ you might need
> it, as doing so now costs more than doing so later, and that
> increases risk, and that's against XP.
>
> However..
>
> I think there are two at least counter-arguments.
>
> One I've heard before--it will cost _more_ later, because you've more
> interfaces to change when you add the component you deferred
> implementing. I could see this argued either way. In Ron's
> "well-structured system", you could say that the decoupling should
> minimize this impact. Maybe, but there'll always be _some_ additional
> cost.

Some languages are harder to refactor than others. C++ is the hardest of
the lot. Smalltalk is very easy. Java is somewhere in the middle. If you
are programming in a language that is hard to refactor (because of interface
dependencies for example) then you must manage dependencies while you are
coding. This does not violate YAGNI, because you definitely *are* going to
need to keep your dependencies under control.
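Martin gives no code for "manage dependencies while you are coding," but the usual mechanism is depending on an interface rather than a concrete class. A minimal sketch in Python (every name here is invented for illustration; it is not code from the thread):

```python
from abc import ABC, abstractmethod

class MessageSink(ABC):
    """Abstraction owned by the high-level code; implementations plug in."""
    @abstractmethod
    def send(self, text: str) -> None: ...

class ListSink(MessageSink):
    """One concrete implementation; swapping it needs no edits elsewhere."""
    def __init__(self) -> None:
        self.log = []
    def send(self, text: str) -> None:
        self.log.append(text)

def publish_report(sink: MessageSink, lines) -> None:
    # The high-level policy depends only on the interface, so the
    # concrete sink can change without rippling through this code.
    for line in lines:
        sink.send(line.upper())

sink = ListSink()
publish_report(sink, ["build ok", "tests pass"])
print(sink.log)  # ['BUILD OK', 'TESTS PASS']
```

The point is the direction of the dependency, not the mechanism; in C++ the same effect takes abstract base classes and rather more discipline, which is Martin's argument for managing it continuously rather than refactoring it in later.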


>
> There is a second argument though that I haven't seen yet.
>
> Often, I build projects by first implementing a kind of
> meta-solution--an infrastructure which is parameterized somehow. With
> the correct parameters, it solves the user's problem. These parameters
> could be values in a config file, a mini-language, some kind of
> environmental scan, just about anything.

I do this too. But only when it is very clear that the benefits will exceed
the costs. There is no doubt that there is a cost to going meta. When you
go meta you are betting a great deal that your abstractions are correct, and
the flexibility you are buying yourself is going to be used a lot. If you
are satisfied, up front, that you are going to need the flexibility you are
designing, then so be it. If you are unsure, then it is probably better to
wait. The first time you find you need the flexibility, *then* implement
it. i.e. the second one pays for generality.

XP is not anti-meta. XP simply puts tension in the decision to go meta.
You go meta only if you know you must.
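As a toy illustration of the parameterized infrastructure Dave describes, "going meta" can be as small as moving the sequence of steps out of code and into data. All of the step names below are invented for the example; this is a sketch of the idea, not anyone's actual system:

```python
# Policy lives in data: this could just as well come from a config file.
config = {"steps": ["strip", "title", "exclaim"]}

# Mechanism: a fixed vocabulary of primitive operations.
OPS = {
    "strip":   str.strip,
    "title":   str.title,
    "exclaim": lambda s: s + "!",
}

def run(cfg, text):
    # The config decides which primitives run, and in what order.
    for step in cfg["steps"]:
        text = OPS[step](text)
    return text

print(run(config, "  hello world "))  # Hello World!
```

The cost Martin warns about is visible even at this scale: the `OPS` vocabulary is a bet, made up front, on which primitives will ever be needed.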

> So I feel that YAGNI is too simplistic--there are times where a bit of
> up-front investment will be rewarded many times over.

The problem is predicting those times in advance. I have been the
beneficiary of such up front investment -- and it feels good to know you
guessed right. But I've also been burned by investing too much in
generality that wasn't needed.

XP resolves the dilemma by saying: Pay only for what you know you need now,
pay for generality now, when you need the generality now.

> It's just that at times [...] I was too busy to invest up-front in a
> solid base for a development. Almost without exception I've regretted it.

Of course. No one is suggesting that you should not create a solid base for
development. Again, you pay now, for what you need now. If you need some
infrastructure in the current iteration, build it. But if you think you
might need some infrastructure one day, wait until that day, and then build
it.

Phlip

Jan 2, 2000, 3:00:00 AM
Robert C. Martin wrote:

> It seems to me that XP is an enabler of a multi-shift operation. On
> the other hand, I think the shifts need to overlap. We don't want the
> entire team changing all at once. Rather, we'd like shifts to overlap
> each other by 2-4 hours so that the shift that starts gets to work
> with the current shift for awhile before that shift stops working.

That plop you just heard was yet another Sacred Cow hitting the
deck. The answer under any other lifecycle to this question was a
screaming "No!"

But programmers good enough to leave alone to refactor all night are
hard to find. And if you tell them they are going to miss their
favorite traffic jam just to satisfy some hare-brained ideal of
running a project thru continuous all-nighters _without_ all the
drugs that usually requires, they will split for some normo project
across town under a WaterFall lifecycle and do it the old-fashioned
way. So...

Place one team on Silicon Coast (you know, Irvine California). Then
put another in Hawaii, another in Hong Kong, another in Khartoum,
and another in Curitiba. Then wire them all up with a little
consumer-grade teleconferencing equipment. Once the ball got
rolling, the infrastructure of this huge "mundo-team" would remain
in place for the new projects under new customers, and so on. On top
of the speed gains of XP would come speed gains of 24/5
productivity.

This is logistically impossible, and the worst idea I ever had.
(Unless, of course, if anyone out there agrees with it.)

thi

Jan 2, 2000, 3:00:00 AM
"Phlip" <new_...@see.web.page> writes:

> And what about GNU?

gnu programmers do not sleep; they blink very deliberately.

thi

Dave Thomas

Jan 2, 2000, 3:00:00 AM
"Phlip" <new_...@see.web.page> writes:

>
> Place one team on Silicon Coast (you know, Irvine California). Then
> put another in Hawaii, another in Hong Kong, another in Khartoum,
> and another in Curitaba. Then wire them all up with a little
> consumer-grade teleconferencing equipment. Once the ball got
> rolling, the infrastructure of this huge "mundo-team" would remain
> in place for the new projects under new customers, and so on. On top
> of the speed gains of XP would come speed gains of 24/5
> productivity.

Apart from the delay while you Fedex the 3x5 cards around the world
once a day... ;-)

Dave Thomas

Jan 2, 2000, 3:00:00 AM
"Robert C. Martin" <rma...@oma.com> writes:

> XP is not anti-meta. XP simply puts tention in the decision to go meta.
> You go meta only if you know you must.
>

> Dave:

> > So I feel that YAGNI is too simplistic--there are times where a bit of
> > up-front investment will be rewarded many times over.
>
> The problem is predicting those times in advance. I have been the
> beneficiary of such up front investment -- and it feels good to know you
> guessed right. But I've also been burned by investing too much in
> generality that wasn't needed.

But it's more than 'feeling good' and 'getting burned', isn't it? In
the XP risk model, the 'feeling good' actually represents cost
savings. In my experience, the right underlying structure can make
these substantial--the cost of adding new functions is halved or
better. The 'burn' is increased cost--you bet up-front and lost.

My belief is that for an experienced developer, we're looking at the
venture capital success formula here: arithmetic losses, geometric
gains. You invest n days up front, on the basis that you're pretty
certain to see returns of 10n down the road.

Elsewhere, people have argued that you add this infrastructure when
you see the need--effectively when the second example of its use
occurs. My experience is that often that would be an expensive
proposition--the kind of design I'm talking about here is structural,
not just procedural. We're talking about refactoring the metaphor, not
just the code.

So, my problem is that XP as espoused doesn't allow me to use my
experience to reduce risk. It says 'add it when you need it', 'the
first use only pays what it must'. I'd just like to see a tad more
flexibility there, allowing me to say "well, I can't guarantee it, but
I strongly suspect we'll need XYZ, and if I'm right, it'll pay for
itself 10 times over. Implementing it now will take n days, but adding it
retroactively will affect everything written to that date, and will
cost at least 3n days. If I'm right 50% of the time, it pays more to
do it now".
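Dave's bet can be made explicit with one line of expected-value arithmetic. The numbers are his; only the framing as expected cost is an editorial sketch:

```python
n = 1.0                  # cost of implementing XYZ up front, in days
p_right = 0.5            # "If I'm right 50% of the time"
retrofit = 3 * n         # "adding it retroactively ... at least 3n days"

expected_now = n                     # paid whether or not XYZ is needed
expected_defer = p_right * retrofit  # paid only in futures where it is

print(expected_now, expected_defer)  # 1.0 1.5
```

On these numbers building now wins; Martin's counter, in effect, is that `p_right` and the 3n multiplier are exactly the quantities nobody estimates reliably.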

I think this is probably a somewhat academic argument. My guess is
that in real life, common sense wins out over the strict letter
of the method. After all, I suspect XP coders use a manifest constant
the first time they need a fixed value, not the second. I just get
nervous when I read the somewhat extreme and absolute tone of some
of the writings.

Regards, and thanks for some interesting discussions.

Phlip

Jan 3, 2000, 3:00:00 AM
Dave Thomas <Da...@Thomases.com> wrote:

> "Phlip" <new_...@see.web.page> writes:

> > Place one team on Silicon Coast (you know, Irvine California). Then
> > put another in Hawaii, another in Hong Kong, another in Khartoum,
> > and another in Curitiba. Then wire them all up with a little
> > consumer-grade teleconferencing equipment. Once the ball got
> > rolling, the infrastructure of this huge "mundo-team" would remain
> > in place for the new projects under new customers, and so on. On top
> > of the speed gains of XP would come speed gains of 24/5
> > productivity.
>
> Apart from the delay while you Fedex the 3x5 cards around the world
> once a day... ;-)

Tree killer. 8-]

Dave Thomas

Jan 3, 2000, 3:00:00 AM
Dave Thomas <Da...@Thomases.com> writes:

> "Phlip" <new_...@see.web.page> writes:
>
> > Place one team on Silicon Coast (you know, Irvine California). Then
> > put another in Hawaii, another in Hong Kong, another in Khartoum..
>
> Apart from the delay while you Fedex the 3x5 cards around the world
> once a day... ;-)

Oops, I forgot, the customer can carry the cards between the teams.

Dave

Hans Wegener

Jan 3, 2000, 3:00:00 AM
"Robert C. Martin" wrote:

> XP resolves the dilemma by saying: Pay only for what you know you need now,
> pay for generality now, when you need the generality now.

I don't see how XP really resolves anything if you mean the dilemma: "Will this
feature pay off or not?" People are notoriously bad at predicting that. The
principle merely puts tension in the decision to go meta, as you said it, and
that's a very good thing to do. Yet, as I read your words you mean to say that
you are only willing to pay for what you need right now. IMHO that's another
word for shortsightedness. This is not to say XPers are shortsighted persons -
not at all. But the principle doesn't aid you in becoming more farsighted, and
from time to time this is really what you need.

The problem becomes less fuzzy when you take a look at project size and domain
complexity: lack of domain knowledge and sheer complexity are major reasons for
running over budget etc. We all have heard, read or even witnessed these
stories. Starting small and releasing early and often is one of the best
countermeasures. In such situations a wrong decision (on average) also has a
comparatively small effect. Go YAGNI. But if I knew the beast well and had
things under control I would be foolish not to start and think big. Here YAGNI
would (again on average) be more expensive.

Wrapping up, you have to relate things to the problem at hand. In some
situations XP really makes perfect sense. But there are many others where XP
adds to the risk instead of reducing it. The art is to tell where the borderline
is, and that is still a matter of experience, not of process.

HW
--
Phone: +41-1-334 66 51
Fax: +41-1-334 50 60
Web: http://www.credit-suisse.ch

Ronald E Jeffries

Jan 3, 2000, 3:00:00 AM
On 02 Jan 2000 23:46:15 EST, "Phlip" <new_...@see.web.page> wrote:

>Place one team on Silicon Coast (you know, Irvine California). Then
>put another in Hawaii, another in Hong Kong, another in Khartoum,
>and another in Curitiba. Then wire them all up with a little
>consumer-grade teleconferencing equipment. Once the ball got
>rolling, the infrastructure of this huge "mundo-team" would remain
>in place for the new projects under new customers, and so on. On top
>of the speed gains of XP would come speed gains of 24/5
>productivity.
>
>This is logistically impossible, and the worst idea I ever had.
>(Unless, of course, if anyone out there agrees with it.)

I believe this discussion may be forgetting the XP Communication
value, which suggests things like having all the programmers in the
same room (at the same time). People trying pair programming remotely
report that it works but very poorly. Never seeing the people on the
other shift seems more than a little problematical in communication.

A lot of why collective code ownership works is that everyone hears
all the conversations. It might be difficult to come in to work and
find a bunch of code you'd never heard of.

Still, it might work. Me, I'd be satisfied to get a single shift
working well ...

Ronald E Jeffries

Jan 3, 2000, 3:00:00 AM
On 02 Jan 2000 19:39:55 -0600, Dave Thomas <Da...@Thomases.com> wrote:

>I guess on reflection it _is_ a kind of YAGNI--we're saying defer
>anything application-specific as long as possible, and then make it
>easy to change. So in spirit we're XPers, we just bend the rules...

I think the "Business Value First" rule might be a poor one to start
bending ...

What you're doing is Framework Building, seems to me. It's a strategy
that can work, and certainly one that works for you. But I'd have to
say that it's not an XP strategy. That's OK, just not the same as what
we teach.

Ronald E Jeffries

Jan 3, 2000, 3:00:00 AM
On Sun, 2 Jan 2000 22:20:19 -0600, "Robert C. Martin"
<rma...@oma.com> wrote:

Of course on any real project, whoever's there must do whatever s/he
thinks best. However ...

>XP is not anti-meta. XP simply puts tention in the decision to go meta.
>You go meta only if you know you must.

As the official XpHammer, I'd go a bit further. You know "you must"
only when you have, in your hand, a story calling for a second
instance of whatever the customer has asked for. And no cheating - you
only work on one card at a time.

The XP rule is business value first. There's just no way for an XP
project to say "wait a while while we build some framework" if the
customer wants his checks added up. If we already have framework, then
sure, use it if it is really applicable. Otherwise, build what you
really need and let it evolve into framework.

IMO the above could be an incorrect strategy but it is the XP
strategy.

Just one guy's opinion, of course. Let's ask Beck in a couple of
weeks.

Robert C. Martin

Jan 3, 2000, 3:00:00 AM

<brou...@yahoo.com> wrote in message
news:g9tvOEuL+Jwgdy...@4ax.com...

> "Phlip" <new_...@see.web.page> wrote:
>
> >BTW IS ANYONE GONNA ANSWER THE ACTUAL QUESTION??? ("Should software
> >dev run in 3 shifts if you actually need a little speed, and if so
> >will XP grease the system or hurt it?")
>
> I had assumed the question was facetious. I see no advantage in using
> three shifts. Just triple the number of people working during the day.
> Obviously, if one woman takes nine months to bear a child, three women
> can get the job done in three months.

The advantage is that you need one third the resources, i.e. space, desks,
computers, licenses, etc.

> If you're constantly having to document unfinished work at the end of
> each shift, and then having to review what went on without your
> knowledge at the beginning, I can easily see a highly efficient
> programmer losing 25% of his time just on the extra overhead of
> difficult communications. So having three shifts might double your
> production. And that's assuming everybody is communicating effectively.

If you are pair programming, and if your shifts overlap by 50%, then the
communications overhead should be minimal. At shift change only one member
of each pair changes. Thus, continuity can be preserved.

Of course I've never tried this, or seen it tried, so it's speculation on my
part.

Robert C. Martin

Jan 3, 2000, 3:00:00 AM

Dave Thomas <Da...@Thomases.com> wrote in message
news:m2iu1cy...@zip.local.thomases.com...

> schu...@acm.org (Michael Schuerig) writes:
>
> > I, too, feel a bit uncomfortable with Dave's use of the "meta"-prefix. I
> > don't think it's wrong, but it is different from what people who have
> > read stuff such as "The Art of the Metaobject Protocol" expect. I for
> > one expect something that's not specific to any particular domain. In
> > contrast, my understanding of the approach that Dave (together with Andy
> > Hunt in "The Pragmatic Programmer") advocates is such that I'd rather
> > call it "Separate Mechanism From Policy". Only mechanism, that which is
> > needed by any application in the domain, is hard-coded; the specific
> > application is a malleable layer on top of that.
>
> That's a great characterization. Build nuggets of domain-specific
> code, and knit them together to build the application you need at the
> time. The only qualification I'd make is that often the lower level
> ends up being domain independent. We've been dragging the same trace
> routines around with us now for 5 years.
>
> I guess on reflection it _is_ a kind of YAGNI--we're saying defer
> anything application-specific as long as possible, and then make it
> easy to change. So in spirit we're XPers, we just bend the rules...

Building software in this way is often a good solution. But not always.
Sometimes, perhaps even often, the best cost/benefit trade off is the more
direct solution. Unfortunately, the problem is not simply a binary
decision. One cannot simply decide to go meta or not. Rather, individual
portions of projects may benefit from a meta approach, while others may not.

For example, twenty years ago I worked on one of the very first voice mail
systems. We had to be able to dial phone numbers to deliver voice mail
messages. Dialling numbers is very tricky. The number you dial depends
upon a whole set of complex criteria including: Your area code, the target
area code, whether you are behind a PBX, whether the target is an extension
of that PBX, whether the target is on a different part of that PBX, whether
there are special prefixes for certain exchanges, etc, etc, etc. There was
no way we could hard code all these rules. To make matters worse, the rules
were different for every installation. So we created a nice little
meta-language that let us write scripts for dialling numbers.
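The original dialling scripts are long gone, so the following is only a hypothetical miniature of such a meta-language: per-installation rewrite rules, interpreted at run time. Every rule name here is invented for the sketch:

```python
def dial_script(rules, number, my_area):
    """Apply per-installation rewrite rules to get the digits to dial."""
    digits = number
    for rule in rules:
        kind, *args = rule.split()
        if kind == "strip-area" and digits.startswith(my_area):
            digits = digits[len(my_area):]   # local call: drop the area code
        elif kind == "prefix":
            digits = args[0] + digits        # e.g. "9" to escape a PBX
    return digits

# One installation's "script": treat 312 numbers as local, escape the PBX.
rules = ["strip-area", "prefix 9"]
print(dial_script(rules, "3125551234", "312"))  # 95551234
```

Each installation ships a different rule list instead of different compiled code, which is the whole payoff of going meta in this case.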

Most complex systems are hybrids of meta and non-meta designs. We go meta
when we need the flexibility and can afford the cost of the meta-design. We
avoid meta when the rules don't change often.

XP is not anti-meta. It just puts a lot of tension in the decision.

Robert C. Martin

Jan 3, 2000, 3:00:00 AM

Ronald E Jeffries <ronje...@acm.org> wrote in message
news:E659BB72DCD35CC5.D3416223...@lp.airnews.net...

> On 02 Jan 2000 19:39:55 -0600, Dave Thomas <Da...@Thomases.com> wrote:
>
> >I guess on reflection it _is_ a kind of YAGNI--we're saying defer
> >anything application-specific as long as possible, and then make it
> >easy to change. So in spirit we're XPers, we just bend the rules...
>
> I think the "Business Value First" rule might be a poor one to start
> bending ...
>
> What you're doing is Framework Building, seems to me. It's a strategy
> that can work, and certainly one that works for you. But I'd have to
> say that it's not an XP strategy. That's OK, just not the same as what
> we teach.

It's not so much a difference in technique, as a difference in priority.
Frameworks can be generated in XP, but only once the software has concretely
demonstrated that the framework is necessary. The once and only once rule
tends to generate local frameworks, but only after the duplication has shown
up. Thus, frameworks are given a lower priority in XP. They are only
generated when the need is irrefutable.

Dave Harris

Jan 3, 2000, 3:00:00 AM
new_...@see.web.page (Phlip) wrote:
> But programmers good enough to leave alone to refactor all night are
> hard to find.

What do you mean by "leave alone"? Are you supposing that managers,
customers and other support staff won't work all night too?

"Customer on site" is supposed to be an XP practice, so having programmers
work at midnight will not be very XP unless you have a representative of
the customer who will do the same. (As I understand it - I'm not an XP
guy.)

Dave Harris, Nottingham, UK | "Weave a circle round him thrice,
bran...@cix.co.uk | And close your eyes with holy dread,
| For he on honey dew hath fed
http://www.bhresearch.co.uk/ | And drunk the milk of Paradise."

Robert C. Martin

Jan 3, 2000, 3:00:00 AM

Dave Thomas <Da...@Thomases.com> wrote in message
news:m266xbz...@zip.local.thomases.com...

> "Robert C. Martin" <rma...@oma.com> writes:
>
> > XP is not anti-meta. XP simply puts tension in the decision to go meta.
> > You go meta only if you know you must.
> >
> > Dave:
> > > So I feel that YAGNI is too simplistic--there are times where a bit of
> > > up-front investment will be rewarded many times over.
> >
> > The problem is predicting those times in advance. I have been the
> > beneficiary of such up front investment -- and it feels good to know you
> > guessed right. But I've also been burned by investing too much in
> > generality that wasn't needed.
>
> But it's more than 'feeling good' and 'getting burned', isn't it? In
> the XP risk model, the 'feeling good' actually represents cost
> savings. In my experience, the right underlying structure can make
> these substantial--the cost of adding new functions is halved or
> better. The 'burn' is increased cost--you bet up-front and lost.

Yes, the right underlying structure can make the savings substantial. As
you say, the cost of adding functions can be halved. In XP, however, we
won't create this structure until we have two functions that would have
benefited. When we see the second function, we refactor the existing
design until the second function is easy to add. All subsequent functions
then benefit from this.

Thus, we aren't abandoning the right underlying structure, we are just
demanding that the code show us that it's absolutely needed. We also wait
until the code shows us exactly what that structure should be.
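Martin's "wait for the second function" rule, as a before/after sketch. The report functions are invented purely for illustration:

```python
# First story: write it directly, with no speculative generality.
def total_invoices(invoices):
    return sum(inv["amount"] for inv in invoices)

# Second story arrives (totalling expenses), making the duplication
# concrete. Now refactor: extract the structure both functions have
# proven we need, and let later functions ride on it.
def total(records, field):
    return sum(r[field] for r in records)

def total_invoices_v2(invoices):
    return total(invoices, "amount")

def total_expenses(expenses):
    return total(expenses, "cost")

invoices = [{"amount": 10}, {"amount": 5}]
expenses = [{"cost": 2}, {"cost": 3}]
print(total_invoices_v2(invoices), total_expenses(expenses))  # 15 5
```

The generic `total` only exists because two concrete uses demanded it; that is the rule in miniature.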

> My belief is that for an experienced developer, we're looking at the
> venture capital success formula here: arithmetic losses, geometric
> gains. You invest n days up front, on the basis that you're pretty
> certain to see returns of 10n down the road.

But at what failure rate? The majority of startups don't experience the
geometric growth. Their up front guesses were wrong. They struggle over
years to make a small incremental gain on the initial investment, and then
fold or are absorbed. Investors still find the model useful because the
occasional geometric success swamps the preponderance of failures.

Can we afford this model in building software projects? Does the occasional
geometric success really provide enough benefit to overcome the times we
guess wrong? It seems to me that project failure due to overengineering is
not all that uncommon.

How do many startup companies really succeed? They stay nimble. They
change when the market changes. They try not to invest too much into an
approach until the market begins to pull that solution from them. Then they
invest like crazy. i.e. they are market driven.

How can a software project succeed using this formula? By not investing in
unproven infrastructure. By doing the things that the customer thinks are
most important, and then optimizing the design within that context. When
duplication arises because of a lack of infrastructure, add the
infrastructure and kill the duplication.


>
> Elsewhere, people have argued that you add this infrastructure when
> you see the need--effectively when the second example of its use
> occurs. My experience is that often that would be an expensive
> proposition--the kind of design I'm talking about here is structural,
> not just procedural. We're talking about refactoring the metaphor, not
> just the code.

Yes, there is certainly rework involved. But since the rework is done on
the *second* instance, it's just not that much rework. Yes, there will be
times when some good idea is missed, and a larger refactoring is necessary.
We live with that. We count it better to have the code force us into a
better infrastructure than to force that infrastructure on a project that
doesn't need it.

> So, my problem is that XP as espoused doesn't allow me to use my
> experience to reduce risk.

Yes it does. It just asks you to wait until the risk is actually manifest.
As you work in an XP project, you will find all kinds of opportunities for
adding infrastructure. But you wait, rather than immediately adding it.
You wait until it's clear that the infrastructure is really needed.

This isn't asking a lot. It is reasonable to ask that you avoid extra
infrastructure that you aren't sure you need.

> It says 'add it when you need it', 'the
> first use only pays what it must'. I'd just like to see a tad more
> flexibility there, allowing me to say "well, I can't guarantee it, but
> I strongly suspect we'll need XYZ, and if I'm right, it'll pay for
> itself 10 times over. Implementing it now will take n days, but adding it
> retroactively will affect everything written to that date, and will
> cost at least 3n days. If I'm right 50% of the time, it pays more to
> do it now".

That's a lot of 'ifs'. How certain are you that they are correct? How
confident are you in the 10X benefit, or the 3X cost, or the 50% accuracy?
In effect you are gambling with a lot of variables; and this increases your
variance. Now, what does your customer want? Does the customer want
variability, or predictability? Consider these two options (forgive the
inappropriate use of a normal distribution):

1. Mean = 2 man years. Sigma= 2 man years.
2. Mean = 3 man years. Sigma= 1 man years.

Which will the customer be more interested in? In my experience, the
customer will go for option 2. He'll be willing to pay a higher average
cost for more predictability.
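Martin's two options can be compared concretely by asking how likely each is to blow past some budget, say 5 man-years. The threshold is an editorial choice, and the normal model is as rough as he admits:

```python
import math

def p_over(threshold, mean, sigma):
    # Upper tail of a normal distribution via the complementary
    # error function: P(X > t) = erfc((t - mean) / (sigma*sqrt(2))) / 2.
    z = (threshold - mean) / (sigma * math.sqrt(2))
    return 0.5 * math.erfc(z)

p1 = p_over(5, mean=2, sigma=2)  # option 1: cheap on average, high variance
p2 = p_over(5, mean=3, sigma=1)  # option 2: dearer on average, predictable

print(round(p1, 3), round(p2, 3))  # 0.067 0.023
```

Option 2 costs more on average yet is roughly a third as likely to run past 5 man-years, which is the predictability the customer is buying.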

> I think this is probably a somewhat academic argument. My guess is
> that in real life, common sense wins out over the strict letter
> of the method. After all, I suspect XP coders use a manifest constant
> the first time they need a fixed value, not the second. I just get
> nervous when I read the somewhat extreme and absolute tone of some
> of the writings.

I think you should stay nervous. Using XP, a program is built from one
failing test case to the next. And the granularity of those test cases is
remarkably small -- on the order of a few dozen lines of code. Don't
presume that XPers actually still add all the infrastructure up front that
"common sense" would dictate -- they don't. Instead, they make each failing
test case pass, one at a time. After each test case passes (or before they
write the next test case) they refactor to remove duplication and simplify
the design. The absolute tone of the writings reflects the behavior of the
XPers. Infrastructure is added after the fact by refactoring something that
already exists.
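The rhythm Martin describes, one failing test at a time with refactoring in between, looks like this in miniature. Bare asserts stand in for a real unit-test framework, and the checks example echoes Jeffries' "checks added up":

```python
# Step 1: the test is written first and fails until just enough
# code exists to pass it.
def add_up(checks):
    return sum(checks)

assert add_up([100, 250]) == 350   # first test, now green

# Step 2: the next failing test drives the next sliver of behavior.
assert add_up([]) == 0             # an empty checkbook totals zero

# Step 3: refactor -- remove duplication, simplify the design --
# then re-run every test before writing the next one.
print("green")
```

Infrastructure, on this account, only ever appears inside step 3, as a refactoring of code the tests already forced into existence.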

Dave Thomas

Jan 3, 2000, 3:00:00 AM
Ronald E Jeffries <ronje...@acm.org> writes:

> On 02 Jan 2000 19:39:55 -0600, Dave Thomas <Da...@Thomases.com> wrote:
>
> >I guess on reflection it _is_ a kind of YAGNI--we're saying defer
> >anything application-specific as long as possible, and then make it
> >easy to change. So in spirit we're XPers, we just bend the rules...
>
> I think the "Business Value First" rule might be a poor one to start
> bending ...

But my point is that the framework or whatever does add business
value, and it adds it early on. If my experience tells me I'm likely
to need to program at the meta level, then I would like a methodology
with the flexibility to allow me to.

brou...@yahoo.com writes:
>
> I think what XP is saying is that if you aren't experienced enough in the
> problem domain to know everything before hand, you will probably be better
> off by deferring as many decisions as possible.
>

This sounds very sensible, and is the kind of flexibility I'm seeking.

Regards

Robert C. Martin

Jan 3, 2000, 3:00:00 AM

Hans Wegener <hans.w...@credit-suisse.ch> wrote in message
news:38707FEB...@credit-suisse.ch...

> "Robert C. Martin" wrote:
>
> > XP resolves the dilemma by saying: Pay only for what you know you
> > need now, pay for generality now, when you need the generality now.
>
> I don't see how XP really resolves anything if you mean the dilemma:
> "Will this feature pay off or not?" People are notoriously bad at
> predicting that. The principle merely puts tension in the decision to
> go meta, as you said it, and that's a very good thing to do. Yet, as I
> read your words you mean to say that you are only willing to pay for
> what you need right now. IMHO that's another word for shortsightedness.
> This is not to say XPers are shortsighted persons - not at all. But
> the principle doesn't aid you in becoming more farsighted, and from
> time to time this is really what you need.

XP defines shortsighted to mean: "you got caught a third time." In other
words, if it happens once, you pay for it. If it happens twice you figure
it'll happen again, and you generalize it. If you find yourself paying a
third time, you screwed up.

> The problem becomes less fuzzy when you take a look at project size
> and domain complexity: lack of domain knowledge and sheer complexity
> are major reasons for running over budget etc. We all have heard, read
> or even witnessed these stories. Starting small and releasing early
> and often is one of the best countermeasures. In such situations a
> wrong decision (on average) also has a comparatively small effect. Go
> YAGNI. But if I knew the beast well and had things under control I
> would be foolish not to start and think big. Here YAGNI would (again
> on average) be more expensive.

I'm not convinced of that. The overall cost of building the project *might*
be higher; however there is another factor. If you work only on those
things that are most important to the customer, the customer starts to get
benefit early. That benefit starts to pay for the cost of the project
before the project is complete. Thus the net cost may be lower.

XP says: "Generate value for the customer every day. Generate the greatest
value early. Don't invest in infrastructure until you are damned sure that
it will help you generate more value for the customer *sooner* rather than
later."


>
> Wrapping up, you have to relate things to the problem at hand. In some
> situations XP really makes perfect sense. But there are many others
> where XP adds to the risk instead of reducing it. The art is to tell
> where the borderline is, and that is still a matter of experience, not
> of process.

I can't imagine a scenario in which XP adds risk. I consider the risk of
building the wrong infrastructure up-front to be greater than the risk of
waiting until the need for the infrastructure is demonstrable.

Andrew Hunt

unread,
Jan 3, 2000, 3:00:00 AM1/3/00
to
On Mon, 3 Jan 2000 08:54:32 -0600, Robert C. Martin <rma...@oma.com> wrote:

> I think you should stay nervous. Using XP, a program is built from one

That's dangerous. When Dave's nervous, it's usually for a very good
reason.

> failing test case to the next. And the granularity of those test cases is
> remarkably small -- on the order of a few dozen lines of code. Don't
> presume that XPers actually still add all the infrastructure up front that
> "common sense" would dictate -- they don't. Instead, they make each failing
> test case pass, one at a time. After each test case passes (or before they
> write the next test case) they refactor to remove duplication and simplify
> the design. The absolute tone of the writings reflects the behavior of the
> XPers. Infrastructure is added after the fact by refactoring something that
> already exists.
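The cycle quoted above -- failing test, make it pass, refactor -- can be sketched in a few lines. Python and the names here are purely illustrative; any language and test framework would do.

```python
import unittest

# Step 1 (red): write one small failing test for the next bit of behavior.
class StackTest(unittest.TestCase):
    def test_push_then_pop_returns_last_item(self):
        s = Stack()
        s.push(42)
        self.assertEqual(s.pop(), 42)

# Step 2 (green): the simplest code that makes that test pass.
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

# Step 3 (refactor): remove duplication and simplify before writing the
# next failing test -- then repeat, a few dozen lines at a time.
```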

Side question:
But don't you run the risk of being caught in a local minimum/maximum this way?
How do you pull back to see the forest, instead of the trees?

If I look back on all the projects I've worked on over the years, the
number of times that over-engineering a project has *saved* the
project far outweighs the number of times that it has lain unused.

XP says to only build what you need for today, and that's fine (and
keeps our heads from exploding), but a fundamental truth here is that
CUSTOMERS ALWAYS WANT MORE. They may not even know it yet, but they
will.

We're not advocating building huge Frameworks (capital F) when you
first start up a project. We're saying build the fundamental parts of
speech that allow you to talk about a particular problem domain. You
can then express any number of arbitrary "sentences" using the combinations
of objects and services that you've defined to solve the problem and
create an application. If you need more services to fulfill a
requirement, go ahead and create them -- as needed -- and then use
them.

The only difference I see in the thread here is we tend to create the
upper level in a more flexible form than the straight code used in the
rest of the application. For us, this is more flexible and powerful
with minimal added cost.

Your mileage may vary :-)

--
Andrew Hunt, The Pragmatic Programmers, LLC.
Innovative Object-Oriented Software Development
--
Our New Book: "The Pragmatic Programmer" Published by Addison-Wesley Oct 1999
(see www.pragmaticprogrammer.com/ppbook)
--

todd...@my-deja.com

Jan 3, 2000, 3:00:00 AM
In article <ho3c4.2822$h3.7...@ord-read.news.verio.net>,

"Robert C. Martin" <rma...@oma.com> wrote:
> I can't imagine a scenario in which XP adds risk. I consider the risk of
> building the wrong infrastructure up-front to be greater than the risk of
> waiting until the need for the infrastructure is demonstrable.

Given this, would it be safe to assume you no longer use
templates, abstract base classes, visitors, factories,
worry about dependency inversions, etc.? Having read your posts for
several years I'd find this hard to believe, but at the same time
these practices would seem to have very little place in XP, as they
are all future considerations of problems yet to appear.


Sent via Deja.com http://www.deja.com/
Before you buy.

Dave Thomas

Jan 3, 2000, 3:00:00 AM
"Robert C. Martin" <rma...@oma.com> writes:

> > So, my problem is that XP as espoused doesn't allow me to use my
> > experience to reduce risk.
>
> Yes it does. It just asks you to wait until the risk is actually manifest.
> As you work in an XP project, you will find all kinds of opportunities for
> adding infrastructure. But you wait, rather than immediately adding it.
> You wait until it's clear that the infrastructure is really needed.

So I'm only allowed to add a nightly job to back up the repository
after we've had a disk crash and lost all the source?

That's not as facetious a question as it sounds at first.

My experience tells me it's good to take backups. I set them up on all
new projects. I don't wait until I bump into the real-world need
before doing it.

Why is this different from saying "my experience tells me we'll need a
logging and tracing facility for everyone to use. Let's code one up
front"?

Panu Viljamaa

Jan 3, 2000, 3:00:00 AM
Hans Wegener wrote:

> "Robert C. Martin" wrote:
> > XP resolves the dilemma by saying: Pay only for what you know you need now,
> > pay for generality now, when you need the generality now.
>
> I don't see how XP really resolves anything if you mean the dilemma: "Will this
> feature pay off or not?" People are notoriously bad at predicting that. The
> principle merely puts tension in the decision to go meta, as you said it, and
> that's a very good thing to do. Yet, as I read your words you mean to say that
> you are only willing to pay for what you need right now. ...

One question I haven't yet seen is: "Reduces risk for whom? For the developer or
the client?"

From a developer's point of view it is a great strategy to do only what the customer
asks for and has committed to by being part of the (ongoing) requirements phase. It
is a terrible failure *for the developer* to hear "You spent all this time on the
meta-framework we didn't even ask for!? And now where is the feature XYZ we really
need ?!". Your answer "I was trying to reduce YOUR risk by building some
flexibility into the system first" doesn't really help. In general, clients are
happy when you do exactly what they ask, perhaps a bit more but not much.

The situation is different when you are *building for yourself*, for instance when
creating a product or a product line.

I'd like to compare this to playing chess: You can simply 'solve one problem' at a
time OR you can have a strategy for winning the game, perhaps sacrificing some pawns
along the way. The latter is possible only if you are a master chess-player. If
you're a novice programmer, you'll probably spend far too much time trying to build
a framework to start with, and you're better off just doing what the client asks, even
if that means two months of work that a well-designed framework would have reduced
to 2 minutes. The point is: the client would never know.

- Panu


Ell

Jan 3, 2000, 3:00:00 AM
Panu Viljamaa <pa...@zoologic.com> wrote:

#I'd like to compare this to playing chess: You can simply 'solve one problem' at a
#time OR you can have a strategy for winning the game, perhaps sacrificing some pawns
#along the way. The latter is possible only if you are a master chess-player.

No way. It is the chess rookies who are going piecemeal, who react bit by
bit to every new event. It is the masters who build frameworks for victory.
Who use forethought and anticipation to their advantage. That is a major
reason why the XP bit-by-bit, piecemeal strategy is such a criminal fraud.

Elliott
--
:=***=: Objective * Holistic * Overall pre-code Modelling :=***=:
Hallmarks of the best SW Engineering
study Craftite vs. Full Blown OO: http://www.access.digex.net/~ell
copyright 1999 Elliott. exclusive of others' writing. may be copied freely
only in comp., phil., sci. usenet & bitnet & otug.

Patrick Logan

Jan 3, 2000, 3:00:00 AM
In comp.object Panu Viljamaa <pa...@zoologic.com> wrote:

: The situation is different when you are *building for yourself*,
: for instance when creating a product or a product line.

I don't see how it is different in this case. You have to build
something that will sell. You have to build it in time to sell
it. Doing just enough to get the job done well seems like the right
goal. You have not spent any more than necessary to sell what you
think will make the most money.

: I'd like to compare this to playing chess: You can simply 'solve one
: problem' at a time OR you can have a strategy for winning the game,
: perhaps sacrificing some pawns along the way.

XP does not solve one problem at a time, right? The Planning Game is
how you plan for everything that you know, when you know it. How is
this different from chess?

--
Patrick Logan patric...@home.com

Panu Viljamaa

Jan 3, 2000, 3:00:00 AM
Patrick Logan wrote:

I'm not trying to argue against XP, but on behalf of the practice of
building frameworks, even if they do not seem 'necessary'. I'm also trying
to raise some discussion about whether "Developer's risk = Client's risk"
and whether the ways to reduce them are exactly the same. I'm not much of
a chess player, so correct me if I'm wrong ...

Let's imagine a client hires you to "help them play the game". The
objective of the game is not simply checkmate, but to trade your white
pieces for black ones, maximizing the value of the black pieces they own
in the end: pawn = $1, knight = $3, etc. Every turn also costs time.

Your client will tell you, "eat that pawn, we need it". But because you're
an experienced player (perhaps not a grand master yet), you can point out
why you should eat another one first. By playing multiple games your
client soon realizes it pays for them to pay for your advice. For the
client black pieces are much more valuable, so they are eager to trade
their whites for them. They win, they are happy. The question is, would
investing in a basic strategy/framework allow them to win more?

If you're a master player, you would start organizing your pieces into
some strategic positions and patterns. But you would have a hard time
explaining to the client why you are wasting time on this, instead of
getting immediate business value by eating up the opponent. You would
perhaps say "my intuition tells me" this configuration "might be good in
the end game". Building a strategic advantage in chess is like building a
framework in software. You can't say exactly which problems it will
solve later, but your experience tells you (rightly or wrongly) that it
usually pays off eventually.

Whether spending some extra time building a framework to prepare for
unforeseen problems, instead of solving the immediately visible ones, is
in the spirit of XP I couldn't say. Yet it seems to me the way to go,
especially if you have experience in doing so. The purpose of building one
is to make the software *adaptable*, and in that sense you could consider
it to 'be XP'.

Again, think of chess. Unforeseen problems arise all the time. You can't
follow any single recipe to win the game (which is not to say that reading
books on start-, middle- and end-games is not useful). If any piece you
eat brings 'some' value, a simple strategy like "attack the black pieces"
will 'solve the problem'. The question is could a more strategic game
initiative bring you (the developer, or you the client ?) more value, less
risk ?

I would say it really depends on the situation whether you "are gonna need
it again". Following XP simple-mindedly seems to suggest a mind-set of
"just solving the client's problems" instead of a) trying to solve several
(potentially) recurring problems at once, and b) trying to ask more questions
to discover what the 'real' problems are. I suspect "solving the client's
problems first" reduces the risk for the developer more than it does for
the client. You can then say "I just did what you asked me to". It may
make you 'part of the solution', and it makes sure you are not 'part of
the problem'.

- Panu
P.S. A great title for a chess book: "Extreme Chess"

Dave Thomas

Jan 3, 2000, 3:00:00 AM
Patrick Logan <pat...@c837917-a.potlnd1.or.home.com> writes:

> XP does not solve one problem at a time, right? The Planning Game is
> how you plan for everything that you know, when you know it. How is
> this different from chess?

This raises an interesting YAGNI question.

Does XP recommend that developers read and study stuff they're not
currently working on, or do they wait until they have a particular
need, and then find the most focused book available?

Good chess players spend years studying the foundations of the game,
memorizing openings, gambits, endgames, all on the basis that they
_might_ need it. It's one of the things that makes them successful.

Robert C. Martin

Jan 3, 2000, 3:00:00 AM

Ell <univ...@radix.net> wrote in message
news:3874fb00...@news1.radix.net...
> Panu Viljamaa <pa...@zoologic.com> wrote:
>
> #I'd like to compare this to playing chess: You can simply 'solve one
> #problem' at a time OR you can have a strategy for winning the game,
> #perhaps sacrificing some pawns along the way. The latter is possible
> #only if you are a master chess-player.
>
> No way. It is the chess rookies who are going piecemeal, who react bit by
> bit to every new event. It is the masters who build frameworks for victory.
> Who use forethought and anticipation to their advantage. That is a major
> reason why the XP bit-by-bit, piecemeal strategy is such a criminal fraud.

When you play chess against someone who is very good, you can't simply
depend upon a well-thought-through strategy. Your opponent will think it
through with you and know how to defend against it. Rather, you must put
pressure on your opponent by creating several credible threats while
remaining as nimble as possible to take advantage of any small error or
oversight he might make. Advantage is slowly accumulated by whittling away
at the opponent.

The point is that neither player can predict how the game will go from the
outset. The game will evolve, just as any project evolves. The trick is to
stay nimble enough to view project evolution as an advantage rather than a
liability.

Patrick Logan

Jan 3, 2000, 3:00:00 AM
In comp.object Panu Viljamaa <pa...@zoologic.com> wrote:

: Building a strategic advantage in chess is like building a framework
: in software. You can't exactly say which exact problems it will
: solve later, but your experience tells you (rightly or wrongly) that
: it usually pays off eventually.

If this were really true then it would be the best advice to follow. I
don't see a lot of evidence that it is really true. In fact, as Robert
Martin wrote not too long ago, XP may be an optimal way to build the
framework(s) that are most valuable. Rather than build a framework
that may or may not be needed, use an XP-like practice to be ready to
build the frameworks that really are needed.

: The purpose of building one is to make the software *adaptable* and
: in that sense you could consider it to 'be XP'.

It is not for me to say if this is "XP". I don't think so. From what
I've read, XP says "You Aren't Gonna Need It".

: The question is could a more strategic game initiative bring you
: (the developer, or you the client ?) more value, less risk ?

There have already been people writing about their success in
anticipation of what may be needed in the future. I have had successes
and failures in doing the same. I am attracted to XP as possibly an
*optimal* way to anticipate what may be needed in the future.

: You can then say "I just did what you asked me to". It may make you
: 'part of the solution', and it makes sure you are not 'part of the
: problem'.

With XP, though, not only did you do what the customer asked for, you also:

* Managed the changes in what (s)he asked for.

* Managed the code to be ready for those changes.

* Tested the code to prove it does what has been asked for.

XP is _far_ from a so-called "piecemeal" approach. It is an
integrated, in-touch approach that anticipates the need for future
changes. While it prepares your code and your understanding for future
changes, at the same time it hedges against what the specific changes
will be until they become more concrete.

--
Patrick Logan patric...@home.com

Patrick Logan

Jan 3, 2000, 3:00:00 AM
In comp.object Dave Thomas <Da...@thomases.com> wrote:

: Does XP recommend that developers read and study stuff they're not
: currently working on, or do they wait until they have a particular
: need, and then find the most focused book available?

I don't see where XP says anything about skill development outside of
the specific project. XP is a project management technique. Do any
other approaches to developing software mention these concerns? Is
there some reason they should? Or does that concern get addressed
elsewhere?

--
Patrick Logan patric...@home.com

Robert C. Martin

Jan 3, 2000, 3:00:00 AM

<brou...@yahoo.com> wrote in message
news:3871a37a....@news.newsguy.com...

> If you've already solved a problem before so that you already know what you
> need to do, I don't see any benefit to XP's incremental development. Might
> as well develop the infrastructure you *know* you will need via a waterfall
> model, and then switch over to XP after that.

I look at it differently than that. If I already knew the domain cold, I
would still use XP simply because XP will give me the simplest design that
could possibly work. I might be *sure* about all the infrastructure I'll
need, but XP will only let me add it if it's truly needed. And there can be
a big difference between what I'm sure will be needed, and what really is
needed.

> I think what XP is saying is that if you aren't experienced enough in the
> problem domain to know everything before hand, you will probably be better
> off by deferring as many decisions as possible.

I'd amend that to simply: "You are always better off deferring as many
decisions as possible."

> Incremental development isn't necessarily the best solution for *all*
> problems.

True. If the cost of change is very high, then we don't want to iterate.
But if we can keep the cost of change low, through unit tests, refactoring,
pair programming, and simplicity, then iteration is probably the best
approach.

Robert C. Martin

Jan 3, 2000, 3:00:00 AM

Andrew Hunt <an...@toolshed.com> wrote in message
news:slrn871jj...@workbench.toolshed.com...
> On Mon, 3 Jan 2000 08:54:32 -0600, Robert C. Martin <rma...@oma.com> wrote:
>

> > I think you should stay nervous. Using XP, a program is built from one
>
> That's dangerous. When Dave's nervous, it's usually for a very good
> reason.

I've read you guys' book. I really like it and recommend it. So I respect
Dave's nervousness. I shared it until recently. But I've begun to realize
that I was nervous about the wrong thing. I was nervous that XP was
preventing engineers from creating well thought through designs. But I've
realized that XP doesn't do this at all. Rather, XP forces engineers to
abandon speculation, to apply their design skills to keeping the code as flexible
and malleable as possible, and to generalize only those things for which
duplication already exists. Designs are still well thought through. Code
is still clean and flexible. It's just that the infrastructure-urge is
leashed until proven necessary. Nowadays I get nervous about unleashing the
infrastructure urge.


>
> > failing test case to the next. And the granularity of those test cases is
> > remarkably small -- on the order of a few dozen lines of code. Don't
> > presume that XPers actually still add all the infrastructure up front that
> > "common sense" would dictate -- they don't. Instead, they make each failing
> > test case pass, one at a time. After each test case passes (or before they
> > write the next test case) they refactor to remove duplication and simplify
> > the design. The absolute tone of the writings reflects the behavior of the
> > XPers. Infrastructure is added after the fact by refactoring something that
> > already exists.
>

> Side question:
> But don't you run the risk of being caught in a local minimum/maximum this way?
> How do you pull back to see the forest, instead of the trees?

Often a local minimum is good enough. The 80-20 rule applies. Twenty
percent of the effort may get us 80 percent towards the minimum. It may
require the remaining 80 percent of the effort to reclaim the final 20
percent toward the minimum.

Still, there are occasions when you get stuck at a local minimum that is too
high to tolerate. An XP team does not abandon its design and abstraction
skills. It just holds them on a tight leash. If the team decides that a
fundamental change is needed to escape from a local minimum, then that
change will be made part of the next release. Such a decision would only be
made if there were substantial benefits to be had.

> If I look back on all the projects I've worked on over the years, the
> number of times that over-engineering a project has *saved* the
> project far outweighs the number of times that it has lain unused.

In 25 years I don't think I've ever *saved* a project by anticipating
architecture. Certainly I've helped one or two. But I've also harmed some.
I've come to the realization that the guesswork isn't worth it. Instead, it
is better to keep the design as simple and flexible as possible.

> XP says to only build what you need for today, and that's fine (and
> keeps our heads from exploding), but a fundamental truth here is that
> CUSTOMERS ALWAYS WANT MORE. They may not even know it yet, but they
> will.

That is the driving force behind XP. Customers want more. XP works because
it makes it OK for the customer to ask for more, or change what they've
already asked for. When a customer says, "I've changed my mind, now I want
X", XPers do not bemoan the lost infrastructure, they do not gnash their
teeth over all the careful preparation that is now obviated, they just get
on with it. And as they get on with it, they simplify, and take advantage
of duplications to create abstractions. They keep the design as simple and
flexible as possible.

> We're not advocating building huge Frameworks (capital F) when you
> first start up a project. We're saying build the fundamental parts of
> speech that allow you to talk about a particular problem domain. You
> can then express any number of arbitrary "sentences" using the combinations
> of objects and services that you've defined to solve the problem and
> create an application. If you need more services to fulfill a
> requirement, go ahead and create them -- as needed -- and then use
> them.

XP shortens this even more. Ignore everything outside the current iteration
(which lasts for about three weeks). Find commonality within the elements
of the iteration. Build each feature in the simplest way possible, and then
refactor to take advantage of commonality.

As a result, we don't write definitions for words that are never used; and
we don't make the definitions more elaborate than they need to be. We still
get the words, and we still build the sentences; it's just that there is more
tension in the way that they are created.


>
> The only difference I see in the thread here is we tend to create the
> upper level in a more flexible form than the straight code used in the
> rest of the application. For us, this is more flexible and powerful
> with minimal added cost.

XP demands that flexibility in every part of the code. No code can be
checked in until it has been refactored to the simplest state the engineers
can find.

Robert C. Martin

Jan 3, 2000, 3:00:00 AM

Dave Thomas <Da...@Thomases.com> wrote in message
news:m24scva...@zip.local.thomases.com...

> "Robert C. Martin" <rma...@oma.com> writes:
>
> > > So, my problem is that XP as espoused doesn't allow me to use my
> > > experience to reduce risk.
> >
> > Yes it does. It just asks you to wait until the risk is actually manifest.
> > As you work in an XP project, you will find all kinds of opportunities for
> > adding infrastructure. But you wait, rather than immediately adding it.
> > You wait until it's clear that the infrastructure is really needed.
>
> So I'm only allowed to add a nightly job to back up the repository
> after we've had a disk crash and lost all the source?

If your customer has not made nightly backups a priority, then they must be
willing to lose all the source. Clearly you want to make sure the customer
knows what they are risking. But if they prioritize other features above
the nightly backup, you have no business building the nightly backup first.
It's their nickel, after all.

> That's not as facetious a question as it sounds at first.

Nor did I interpret it facetiously. Indeed, it is right to the point.
There are things that make a tremendous amount of engineering sense that may
make no business sense. We, as engineers, have a tough time understanding
this. Still, we exist to serve the business. When the business
countermands our engineering instincts, the business wins.

This doesn't mean we sacrifice on quality. Those features that business
tells us to create, we create with the best of our skills. But we do not
add features that business has not made a priority.

> My experience tells me it's good to take backups. I set them up on all
> new projects. I don't wait until I bump into the real-world need
> before doing it.

And since you, as the software engineer, are the customer of the development
environment, you have that right. But you don't have that right with
customer features. If the customer does not want you to back up the
employee database every night, even after you have warned him of the risk,
then you don't back it up.

> Why is this different from saying "my experience tells me we'll need a
> logging and tracing facility for everyone to use. Let's code one up
> front"?

If that statement is true, then at least two people in the first iteration
will need it. That need will be identified early on, either before any code
is written, or before too much code is written.

It works like this. Bill and Bob pair up to work on feature X. They
realize that they will have to emit messages and write them to a private log
file. A few hours later, Bill pairs with Jim. Bill sees that Jim has also
been writing to a log file. Bill, Jim, and Bob realize that there is
duplication in their efforts. They quickly poll the team to see if others
have faced this need. If so, the team quickly convenes to design a logging
facility. Everybody changes their code (only a few hours worth have been
written) and they go on from there.

This scenario repeats often because pairs change frequently. By the end of
the first few days of an XP project, everybody has seen all the code and is
familiar with the basic design of the system. They have also found
commonalities and refactored them into abstractions and infrastructure.
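That convergence might look like this in miniature. A hypothetical Python sketch (the class name and log format are invented): Bill's and Jim's near-duplicate private log writers get factored into one shared facility, and both call sites switch over.

```python
import time

# What Bill and Jim each wrote independently -- near-duplicate code:
#
#     with open("billing.log", "a") as f:
#         f.write(time.ctime() + ": posted invoice\n")
#
# Once the duplication is spotted, it is refactored into one shared facility:
class Log:
    def __init__(self, sink):
        self._sink = sink  # any writable file-like object

    def write(self, message):
        self._sink.write(f"{time.ctime()}: {message}\n")
```

Both pairs then change their few hours' worth of code to use `Log`, and the abstraction is grounded in real, existing need rather than speculation.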

Is there rework? Certainly. But it is minimal. And the benefit is that
the abstractions, once created, are based upon something very real. Many is
the time I've had to rework large chunks of code because others had written
to my bad abstractions that were based upon what I *thought* would be
needed.

Robert C. Martin

Jan 3, 2000, 3:00:00 AM

<todd...@my-deja.com> wrote in message news:84qr22$jnh$1...@nnrp1.deja.com...
> In article <ho3c4.2822$h3.7...@ord-read.news.verio.net>,

> "Robert C. Martin" <rma...@oma.com> wrote:
> > I can't imagine a scenario in which XP adds risk. I consider the risk of
> > building the wrong infrastructure up-front to be greater than the risk of
> > waiting until the need for the infrastructure is demonstrable.
>
> Given this, would it be safe to assume you no longer use
> templates, abstract base classes, visitors, factories,
> worry about dependency inversions, etc.? Having read your posts for
> several years I'd find this hard to believe, but at the same time
> these practices would seem to have very little place in XP, as they
> are all future considerations of problems yet to appear.
>

It astounds me how often I get asked this question. People have this very
odd view of XP. They think that it eliminates all good design and prevents
any kind of intelligent thought about software structure.

Of course I have not given up any of the things you mention. Look at the
way I wrote my book. How many times did I create designs and write code,
only to back away and say "ick!", and then refactor the designs and the code
into something better? That was the fundamental pattern throughout the book!
It is also one of the fundamental patterns of XP.

XPers don't code blindly. They *design*. But they use code as a design
tool; just as I did in my book. XPers *do* add infrastructure, but only
when the existing code could benefit from it. Again, you see this
frequently in my book.

I find it odd that I must justify my affinity to XP. The most consistent
compliment I get regarding my papers and my books is that I show my train of
thought through the design process, and show the errors I make and how I
correct them. Yet when XP formalizes this very approach, it is interpreted
as a repudiation of design.

Make no mistake, XP is *heavily* design oriented. But it does not allow you
to build castles in the air. If you want to use a visitor, you'd better be
able to show how the code will improve. If you want to invert dependencies,
you'd better be able to show that it solves some existing problem, or makes
the structure of the code more flexible.
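For instance, a dependency inversion justified by code that already exists might look like this hypothetical Python sketch (all names invented): the abstraction is extracted only after a second concrete output device has appeared, so the inversion demonstrably makes the structure more flexible.

```python
from abc import ABC, abstractmethod

# The abstraction is introduced only because a second concrete output
# device showed up -- that existing duplication is what justifies it.
class Output(ABC):
    @abstractmethod
    def emit(self, text): ...

class ConsoleOutput(Output):
    def emit(self, text):
        print(text)

class StringOutput(Output):
    def __init__(self):
        self.lines = []

    def emit(self, text):
        self.lines.append(text)

class Report:
    def __init__(self, output: Output):
        self._output = output  # depends on the abstraction, not a device

    def run(self):
        self._output.emit("report body")
```

`Report` no longer names any concrete device, so a new output destination needs no change to `Report` at all.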

XP does not represent a change in my thinking. Though it wasn't described
as well as Kent had described it, you will see many of the basic concepts of
XP in my books and writings over the last half decade.

Robert C. Martin

Jan 3, 2000, 3:00:00 AM

Panu Viljamaa <pa...@zoologic.com> wrote in message
news:3870F454...@zoologic.com...

[regarding XP, and doing exactly what the customer wants]

> The situation is different when you are *building for yourself*, for
> instance when creating a product or a product line.

I'm unconvinced. Developers still have a customer then. Their customer is
the marketing group, or the folks who have the responsibility to define the
feature set of the product. The customer is the person who has the
authority and responsibility to say: "If you build these features first,
customers will buy it."

> I'd like to compare this to playing chess: You can simply 'solve one
> problem' at a time OR you can have a strategy for winning the game,
> perhaps sacrificing some pawns along the way. The latter is possible
> only if you are a master chess-player. If you're a novice programmer,
> you'll probably spend far too much time trying to build a framework to
> start with, and you're better off just doing what the client asks, even if
> that means two months of work that a well-designed framework would have
> reduced to 2 minutes. The point is: the client would never know.

Building a framework is a particularly difficult thing to do. The chances of
success diminish rapidly with the size of the framework. The only reliable
way that I have found to build a framework is to build it in conjunction
with two or three applications that use it.

Sometimes you get lucky, and the framework you build works out well. We tend
to remember those lucky times. I also remember the 70,000 lines of
framework I had to discard outright. (Actually I could have kept it, but I
believe the project would have died a horrible death if I had.) The point
is that there are very, very few grandmaster chess players.

Dave Thomas

Jan 3, 2000, 3:00:00 AM
Patrick Logan <pat...@c837917-a.potlnd1.or.home.com> writes:

Sorry, I was being obtuse. I keep coming back to the concept that
JustInTime design and coding seems to encourage developers to work
only up to the end-of-the-day horizon. SeeingFurther has always been
important to me, and I'm struggling mightily with the idea of giving
it up. So, I was using the patently ridiculous idea of not bothering
to read or study until you have a concrete requirement to further
illustrate problems I feel with a JIT approach.

However, I'm really not speaking from a position of authority--I've
not yet experienced a strict XP regime. Who knows, I may be happy to
fling my foresight to the four winds?

Panu Viljamaa

Jan 3, 2000, 3:00:00 AM

"Robert C. Martin" wrote:

> Panu Viljamaa <pa...@zoologic.com> wrote in message

> > The situation is different when you are *building for yourself*, for
> > instance when creating a product or a product line.
>
> I'm unconvinced. Developers still have a customer then. Their customer is
> the marketing group, or the folks who have the responsibility to define the
> feature set of the product. The customer is the person who has the
> authority and responsibility to say: "If you build these features first,
> customers will buy it."

If the responsibilities are well-defined, you'll be happy to leave the marketing
group the responsibility of specs - and keep them happy by doing only what they
ask for.

By *building for yourself* I mean programmers have some stake in the profits
from the final product, perhaps in the form of stock options or bonuses. In such
a situation you would be willing at least to investigate the feasibility of a
framework, even though nobody in the marketing group asked for it (none of the
'customers' will ask for it). A framework might give you big payoffs later in
terms of derivative products.

This touches on the subject of code-ownership. If my client owns the code (and
the profits from it), I have little incentive to invest in adaptability and
maintainability. 'Ad hoc' solutions will in fact bring me money in the form of
maintenance/enhancement contracts later. In this situation I certainly don't
want to spend the client's money (much less my own) on building things they
didn't ask for. If I 'own' the code on the other hand I'm interested in being
able to reuse and adapt it in the future.

> Building a framework is a particulary difficult thing to do. The chances of
> success diminish rapidly with the size of the framework. The only reliable
> way that I have found to build a framework is to build it in conjunction
> with two or three applications that use it.

I agree. Certainly it is a risky endeavor. But in any business, you take risks
in return for higher expected profits.

- Panu


Michael C. Feathers

unread,
Jan 3, 2000, 3:00:00 AM1/3/00
to

Phlip <new_...@see.web.page> wrote in message
news:84p9in$b...@chronicle.concentric.net...
> Place one team on Silicon Coast (you know, Irvine California). Then
> put another in Hawaii, another in Hong Kong, another in Khartoum,
> and another in Curitaba. Then wire them all up with a little
> consumer-grade teleconferencing equipment. Once the ball got
> rolling, the infrastructure of this huge "mundo-team" would remain
> in place for the new projects under new customers, and so on. On top
> of the speed gains of XP would come speed gains of 24/5
> productivity.
>
> This is logistically impossible, and the worst idea I ever had.
> (Unless, of course, if anyone out there agrees with it.)

Unfortunately, I've heard it before... from a manager
in passing. It is hard to figure out whether it is
a joke and who the joke is on. :-)

Someone ought to trace it. I'm pretty sure this
was a Dilbert strip once. Elbonian outsourcing
era. Anyone remember?

Where is Deja-Strip when you need it?


---------------------------------------------------
Michael Feathers mfea...@objectmentor.com
Object Mentor Inc. www.objectmentor.com
Training/Mentoring/Development
-----------------------------------------------------
"You think you know when you can learn, are more sure when
you can write, even more when you can teach, but certain when
you can program. " - Alan Perlis

Robert C. Martin

unread,
Jan 3, 2000, 3:00:00 AM1/3/00
to

Dave Thomas <Da...@Thomases.com> wrote in message
news:m23dse9...@zip.local.thomases.com...

> Sorry, I was being obtuse. I keep coming back to the concept that
> JustInTime design and coding seems to encourage developers to work
> only up to the end-of-the-day horizon. SeeingFurther has always been
> important to me, and I'm struggling mightily with the idea of giving
> it up.

XP is not asking you to give it up. It's just asking you to delay
implementation of what you foresee until you have proof positive that you
need it. You'll still get to implement your favorite infrastructures, but
only those that are actually required.

Think about the XP rule "Once and only once". Imagine a system with *no*
duplication, either direct or indirect. No segments of code that look like
they were cut and pasted from somewhere else, no funny little routines that
do almost the same kind of thing, no stretches of similar looking code, etc,
etc. Anyone looking at such a system would be impressed at the forethought
of the designers. They'd be amazed at how well they had anticipated every
abstraction, every commonality.

But the code wasn't anticipated directly. The design evolved and improved
as duplication was identified and eliminated.
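Martin's point can be illustrated with a small hypothetical sketch (the report functions and names here are invented for illustration, not from anyone's posted code): two routines start as near-duplicates, and eliminating the duplication yields the shared abstraction that nobody had to anticipate up front.

```python
# Hypothetical sketch of "once and only once": the common shape of two
# report routines is extracted only after it appears twice, rather than
# being designed in advance.

def _report(title, items, fmt):
    # The shared abstraction, discovered by removing duplication.
    lines = [title] + [fmt(item) for item in items]
    return "\n".join(lines)

def invoice_report(invoices):
    return _report("INVOICES", invoices,
                   lambda inv: f"{inv['id']}: {inv['total']:.2f}")

def payment_report(payments):
    return _report("PAYMENTS", payments,
                   lambda pay: f"{pay['id']}: {pay['amount']:.2f}")

print(invoice_report([{"id": "A1", "total": 9.5}]))   # INVOICES / A1: 9.50
print(payment_report([{"id": "P7", "amount": 3.0}]))  # PAYMENTS / P7: 3.00
```

A reader looking only at the final code might credit the authors with foresight; in fact `_report` fell out of the second refactoring.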

> However, I'm really not speaking from a position of authority--I've
> not yet experienced a strict XP regime. Who knows, I may be happy to
> fling my foresight to the four winds?

No, but you may find that delaying its implementation a bit could be
profitable.

Robert C. Martin

unread,
Jan 3, 2000, 3:00:00 AM1/3/00
to

Dave Thomas <Da...@Thomases.com> wrote in message
news:m2iu1a9...@zip.local.thomases.com...

> Patrick Logan <pat...@c837917-a.potlnd1.or.home.com> writes:
>
> > XP does not solve one problem at a time, right? The Planning Game is
> > how you plan for everything that you know, when you know it. How is
> > this different from chess?
>
> This raises an interesting YAGNI question.
>
> Does XP recommend that developers read and study stuff they're not
> currently working on, or do they wait until they have a particular
> need, and then find the most focused book available?

XP doesn't say anything at all about how developers choose to better
themselves.

> Good chess players spend years studying the foundations of the game,
> memorizing openings, gambits, endgames, all on the basis that they
> _might_ need it. It's one of the things that makes them successful.

And any professional software engineer will study techniques, methods,
languages, and patterns on the premise that they *will* need them one day.
But programmers are not programs. If we put everything into a program that
it _might_ need one day, we'd never finish it.

Ronald E Jeffries

unread,
Jan 3, 2000, 3:00:00 AM1/3/00
to
On 03 Jan 2000 09:17:33 -0600, Dave Thomas <Da...@Thomases.com> wrote:

>But my point is that the framework or whatever does add business
>value, and it adds it early on. If my experience tells me I'm likely
>to need to program at the meta level, then I would like a methodology
>with the flexibility to allow me to.

The XP rule is simple. If the customer doesn't ask for a Framework,
you can't work on it. You can and should evolve the software at every
stage to be just as general as it then needs to be, and no more
general. Experience suggests that this generates just exactly the
framework you actually need.

The rationale is that we have to write the methodology to say what all
the team members do. We can't say that the smartest guy can write
frameworks if he is more than 87% sure he'll be OK. We need a
definitive rule, and the one we use, we're quite sure, does no harm.

I don't recall Beck ever suggesting that there was an exception. I'm
quite certain that his current personal development technique wouldn't
write a line of framework before it was actually needed.

But you don't have to do it that way. You can build the framework up
front if you want to. There are plenty of people who recommend that
approach. The XP folks just aren't among them. We could be wrong.

Regards,

Ron Jeffries
http://www.XProgramming.com
Meditation is futile. You will be aggravated.

Ronald E Jeffries

unread,
Jan 3, 2000, 3:00:00 AM1/3/00
to
On Tue, 04 Jan 2000 01:36:50 GMT, univ...@radix.net (Ell) wrote:

>But "evolution" is not the same as "piecemeal" and "non-holistic".
>[holistic: the interaction of parts to serve the whole]. "Evolution" in my
>observations typically happens most efficiently in a holistic, non-piecemeal
>manner.

Once again we see the Phasist misapprehension in action. Incremental
development does not militate against a holistic understanding and
advancement of the system.

All systems are developed incrementally - none are written in one
moment. The question is the order of implementation.

Bottom-up implementations tend to do nothing useful until the very end
of the project. Top-down implementations often have the same problem,
though they may tend to have better architecture when completed. XP
suggests a different order of implementation - most valuable thing
first, and provides a family of practices that allow this to result in
a good architecture as well as delivery of known value on a
predictable schedule. "It's a good thing. (tm)"

Ronald E Jeffries

unread,
Jan 3, 2000, 3:00:00 AM1/3/00
to
On Mon, 03 Jan 2000 13:29:07 GMT, brou...@yahoo.com wrote:

>Incremental development isn't necessarily the best solution for *all*
>problems.

I used to think that as well. I've changed my mind by virtue of using
incremental development for everything, for nearly four years now.
Unquestionably, if one is sufficiently smart, and sufficiently lucky
(it takes both), more design up front can pay off. Incremental
development might not be best for some folks here.

Ronald E Jeffries

unread,
Jan 3, 2000, 3:00:00 AM1/3/00
to
On Mon, 03 Jan 2000 18:44:23 -0600, Panu Viljamaa <pa...@zoologic.com>
wrote:

>By *building for yourself* I mean programmers have some stake in the profits
>from the final product, perhaps in the form of stock options or bonuses. In such
>a situation you would be willing at least to investigate the feasibility of a
>framework, even though nobody in the marketing group asked for it (none of the
>'customers' will ask for it). A framework might give you big payoffs later in
>terms of derivative products.

This may be true. However, if the business people have determined what
the company needs, it is neither prudent nor ethical for the
development team to second-guess them and build what isn't asked for.

I've been there, and done that. What I know how to do now, and didn't
then, is to build what they ask for, but to build it in a way that
won't lock me in for the future, and that will allow me flexibility
for additional use later.

You don't have to do it that way, however. Just be aware that you're
spending money that wasn't given to you to spend.

Phlip

unread,
Jan 3, 2000, 3:00:00 AM1/3/00
to
todd...@my-deja.com wrote:

> Given this would it be safe to assume you nolonger use
> templates, abstract base classes, visitors, factories,
> worry about dependency inversions, etc? Having read your posts for
> several years i'd find this hard to believe, but at the same time
> these practices would seem to have very little place in XP as they
> are all future considerations of problems yet to appear.

There is a difference between cautious optimism that code you can
think of won't really be needed and blind idiocy. Of course you can
apply the correct amount of infrastructure to code to accept the current
functionality and a _guess_ at where the code will need to be expanded.

But... the details you list are not _functionality_, they are _support_
elements between which one hangs functionality.

Support elements are cheap, self-documenting, type-safe, and documented
in books like /Design Patterns/.

Functionality - the 'if' statements and 'print' statements and such
written inside blocks of supporting logic - is, by contrast, expensive,
obscure, dynamically risky, and a burden to maintain. Therefore, don't
write it until after proving you need it. YAGNI.
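Phlip's distinction can be made concrete with a hypothetical sketch (the `Report` example is invented here for illustration): the abstract base class and its dispatch are the cheap, self-documenting "support elements"; the one-line concrete bodies are the "functionality" that YAGNI says to defer until a real requirement proves the need.

```python
from abc import ABC, abstractmethod

# Support element: cheap, self-documenting, type-safe structure of the
# kind catalogued in /Design Patterns/.
class Report(ABC):
    @abstractmethod
    def render(self) -> str: ...

    def publish(self) -> str:
        # Shared scaffolding that functionality hangs on.
        return f"== {self.render()} =="

# Functionality: the concrete statements hung inside the scaffolding,
# written only once a requirement demanded them (YAGNI).
class SalesReport(Report):
    def render(self) -> str:
        return "sales: 3 units"

print(SalesReport().publish())  # == sales: 3 units ==
```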

--
Phlip at politizen dot com (address munged)
======= http://users.deltanet.com/~tegan/home.html =======
-- Whip me. Beat me. Make me install Oracle. --

Ronald E Jeffries

unread,
Jan 3, 2000, 3:00:00 AM1/3/00
to
On Mon, 03 Jan 2000 21:40:17 GMT, Patrick Logan
<pat...@c837917-a.potlnd1.or.home.com> wrote:

>: The purpose of building one is to make the software *adaptable* and
>: in that sense you could consider it to 'be XP'.
>
>It is not for me to say if this is "XP". I don't think so. From what
>I've read, XP says "You Aren't Gonna Need It".

I think it might be for me to say. Patrick is right. XP describes how
to build software adaptably by working solely on business value. If we
learn how to do that, we can deliver more value now and not hurt
ourselves in the future.

Ronald E Jeffries

unread,
Jan 3, 2000, 3:00:00 AM1/3/00
to
On 03 Jan 2000 15:14:43 -0600, Dave Thomas <Da...@Thomases.com> wrote:

>Does XP recommend that developers read and study stuff they're not
>currently working on, or do they wait until they have a particular
>need, and then find the most focused book available?

Of course developers should study everything they can stand. Good
programmers, with any methodology, out-perform programmers who are not
so good.

It's important to have a deep bag of tricks, and important to
understand as much as possible in the huge universe of programming.

Then, when we implement, we implement as simply as could possibly
work, digging as shallowly into our bag of tricks as possible.

One of the greatest advantages of a huge bag of tricks is courage.
Having already implemented every program in the universe gives one
great confidence that the current problem will yield. And that frees
the mind to find the simple solutions. Fear is the mindkiller.

Ell

unread,
Jan 4, 2000, 3:00:00 AM1/4/00
to
Panu Viljamaa <pa...@zoologic.com> wrote:

The following paragraph lays out a great concept and highly worthy goal!

#Whether spending some extra time to build a framework to prepare for some
#unforeseen problems instead of solving the immediately visible problems is
#in the spirit of XP I couldn't say.. Yet it seems to me the way to go,
#especially if you have experience in doing so. The purpose of building one
#is to make the software *adaptable* and in that sense you could consider
#it to 'be XP'.
# ...
#The question is could a more strategic game
#initiative bring you (the developer, or you the client ?) more value, less
#risk ?

I'll continue by responding to the post by addressing additional issues
mentioned in the post:

#I would say it really depends on the situation whether you "are gonna need
#it again". Following XP simple-mindedly seems to suggest a mind-set of
#"just solving the client's problems" instead of a) trying to solve several
#(potentially) recurring problems at once, and b) trying to ask more questions
#to discover what the 'real' problems are. I suspect "solving the client's
#problems first" reduces the risk for the developer more than it does for
#the client. You can then say "I just did what you asked me to". It may
#make you 'part of the solution', and it makes sure you are not 'part of
#the problem'.

Suppose the information system *amplifies* existing problems? In that case,
"going along to get along" is harmful.

#- Panu
#P.S. A great title for a chess book: "Extreme Chess"

"Extreme Chess: An Exercise in Inefficiency"

Ell

unread,
Jan 4, 2000, 3:00:00 AM1/4/00
to
"Robert C. Martin" <rma...@oma.com> wrote:

#
#Ell <univ...@radix.net> wrote in message
#news:3874fb00...@news1.radix.net...
#> Panu Viljamaa <pa...@zoologic.com> wrote:
#>
#> #I'd like to compare this to playing chess: You can simply 'solve one
#> #problem' at a time OR you can have a strategy for winning the game,
#> #perhaps sacrificing some pawns along the way. The latter is possible
#> #only if you are a master chess-player.
#>
#> No way. It is the chess rookies who are going piecemeal, who react bit
#> by bit to every new event. It is the masters who build frameworks for
#> victory. Who use forethought and anticipation to their advantage. That
#> is a major reason why the XP bit-by-bit, piecemeal strategy is such a
#> criminal fraud.

#When you play chess against someone who is very good, you can't simply
#depend upon a well thought through strategy. Your opponent will think it
#through with you and know how to defend against it. Rather you must put
#pressure on your opponent by creating several credible threats while
#remaining as nimble as possible to take advantage of any small error or
#oversight he might make. Advantage is slowly accumulated by whittling away
#at the opponent.
#
#The point is that neither player can predict how the game will go from the
#outset. The game will evolve; just as any project evolves. The trick is to
#stay nimble enough to view project evolution as an advantage rather than a
#liability.

The opponent may or may *not* discern your strategy. But often a strategy can
be implemented in spite of what the opponent knows.

Finally, the chess masters and others I observe are often implementing one or
more strategies which complement each other. Frequently there is a larger
strategy which the others serve. Being nimble is like apple pie: everyone is
for it. It's a given, so to speak.

Ell

unread,
Jan 4, 2000, 3:00:00 AM1/4/00
to
"Robert C. Martin" <rma...@oma.com> wrote:

#The point is that neither player can predict how the game will go from the
#outset. The game will evolve; just as any project evolves. The trick is to
#stay nimble enough to view project evolution as an advantage rather than a
#liability.

Certainly everyone views project evolution as an advantage - in fact it is
the goal of the project.

But "evolution" is not the same as "piecemeal" and "non-holistic".
[holistic: the interaction of parts to serve the whole]. "Evolution" in my
observations typically happens most efficiently in a holistic, non-piecemeal
manner.

Elliott

Ell

unread,
Jan 4, 2000, 3:00:00 AM1/4/00
to
Patrick Logan <pat...@c837917-a.potlnd1.or.home.com> wrote:

#In comp.object Dave Thomas <Da...@thomases.com> wrote:
#
#: Does XP recommend that developers read and study stuff they're not
#: currently working on, or do they wait until they have a particular
#: need, and then find the most focused book available?

#I don't see where XP says anything about skill development outside of
#the specific project. XP is a project management technique. Do any
#other approaches to developing software mention these concerns? Is
#there some reason they should? Or does that concern get addressed
#elsewhere?

I don't know if it does in this case, but methodology often does have
far-reaching implications. And the regular use of XP, it seems to me, has
for the most part bad far-reaching implications. To me XP reflects and also
encourages a "study for immediate events only" approach to scholarship.
They certainly apply the same notion with a passion during actual project
development.

Phlip

unread,
Jan 4, 2000, 3:00:00 AM1/4/00
to
Ronald E Jeffries wrote:

> As the official XpHammer, I'd go a bit further. You only know "you
> must" when you have, in your hand, a story calling for a second
> instance of whatever the customer has asked for. And no cheating - you
> only work on one card at a time.
>
> The XP rule is business value first. There's just no way for an XP
> project to say "wait a while while we build some framework" if the
> customer wants his checks added up. If we already have framework, then
> sure, use it if it is really applicable. Otherwise, build what you
> really need and let it evolve into framework.

It's all really easy.

First the customer asks for a 2-digit year field in a date record.
You code and deploy a program following that request, and get paid.
The customer starts using the code profitably.

Then the customer presents a new request for a 4-digit field in a
date record. So (after rolling your eyes) you just change the
UnitTests to expect the century in the year field; refactor the
record definition so the UnitTests compile; change the date-handling
functionality so the century conducts thru it correctly; write a
script that changes the database entities and upgrades the contents;
pass all the UnitTests, and redeploy the program.

See?
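The steps Phlip lists can be sketched in miniature. Everything below — the record layout, the function name, the pivot-window rule — is invented for illustration, not anyone's production code: first the UnitTests change to expect the century, then the record handling is refactored until they pass again.

```python
# Hypothetical sketch of the 2-digit -> 4-digit year refactoring.
# Step 1: rewrite the UnitTests to expect the century.
# Step 2: refactor the record handling until the tests pass.

def upgrade_record(record, pivot=50):
    """Widen a legacy 2-digit year to 4 digits (the 'upgrade script').

    Years below the pivot are read as 20xx, the rest as 19xx -- a
    common windowing assumption, chosen here purely for illustration.
    """
    yy = record["year"]
    if yy < 100:  # legacy 2-digit value
        century = 2000 if yy < pivot else 1900
        record = dict(record, year=century + yy)
    return record

# The rewritten UnitTests:
assert upgrade_record({"year": 99})["year"] == 1999
assert upgrade_record({"year": 7})["year"] == 2007
assert upgrade_record({"year": 1984})["year"] == 1984  # already upgraded
```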

--
Phlip
======= http://users.deltanet.com/~tegan/home.html =======


Vladimir Trushkin

unread,
Jan 4, 2000, 3:00:00 AM1/4/00
to
> I've read you guys' book. I really like it and recommend it. So I
> respect Dave's nervousness. I shared it until recently. But I've begun
> to realize that I was nervous about the wrong thing. I was nervous that
> XP was preventing engineers from creating well thought through designs.
> But I've realized that XP doesn't do this at all. Rather, XP forces
> engineers to abandon speculation, and to apply their design to keep the
> code as flexible and malleable as possible, and to generalize only
> those things for which duplication already exists.

Yeah, and they introduce flexibility in different ways... Do you expect them
to be identical with respect to design abilities?

Vladimir

Vladimir Trushkin

unread,
Jan 4, 2000, 3:00:00 AM1/4/00
to

> When you play chess against someone who is very good, you can't simply
> depend upon a well thought through strategy.

The analogy is weak. Does the program play against you (consciously) when you
create it? ;-)

Vladimir

Vladimir Trushkin

unread,
Jan 4, 2000, 3:00:00 AM1/4/00
to
> XP defines shortsighted to mean: "you got caught a third time." In other
> words, if it happens once, you pay for it. If it happens twice you figure
> it'll happen again, and you generalize it. If you find yourself paying a
> third time, you screwed up.

And what do you do after several such 'catches'? In my opinion, you end up
establishing something like a formal process, having gone all the way and
made all the mistakes on your own.

Best Wishes,
Vladimir

Vladimir Trushkin

unread,
Jan 4, 2000, 3:00:00 AM1/4/00
to

> XP suggests a different order of implementation - most valuable thing
> first, and provides a family of practices that allow this to result in
> a good architecture as well as delivery of known value on a
> predictable schedule. "It's a good thing. (tm)"

Does it mean you adhere mostly to Evolutionary Prototyping strategy?

Vladimir Trushkin

unread,
Jan 4, 2000, 3:00:00 AM1/4/00
to
> With XP, though, not only did you do what the customer asked for, you
> also:
>
> * Managed the changes in what (s)he asked for.
>
> * Managed the code to be ready for those changes.
>
> * Tested the code to prove what has been asked for.

Only system (behavior) level testing can help prove you've got what has
been asked for. Developers, biased toward the internal design, can't abstract
from that knowledge completely, so they're bad performers of testing at the
system level. The unit-level testing done in XP on the development side can't
prove you've got what the customer expected (asked for); you have proven only
that you've got what you've been thinking of. I can't agree with your 3rd
statement.

Best Wishes,
Vladimir

Ronald E Jeffries

unread,
Jan 4, 2000, 3:00:00 AM1/4/00
to

Having apparently not read the assignment, you may have missed that XP
has two levels of testing, Unit Testing, specified by the programmers,
and Functional Testing, specified by the customers. It's the latter
that is mostly referred to here.

Hans Wegener

unread,
Jan 4, 2000, 3:00:00 AM1/4/00
to
"Robert C. Martin" wrote:

> I'm not convinced of that. [...] That benefit starts to pay for the cost of
> the project
> before the project is complete. Thus the net cost may be lower.

Could be, but to my knowledge that hasn't been demonstrated yet. I would not
say it's impossible, but I'm really sceptical. This is where I would be happy
to see more tangible results rather than guesses.

> I can't imagine a scenario in which XP adds risk. I consider the risk of
> building the wrong infrastructure up-front to be greater than the risk of
> waiting until the need for the infrastructure is demonstrable.

If I read you right, you cannot imagine any situation where people assess the
requirements wrongly and find out at a point in time where it costs more than
thinking big in the beginning would have...?

HW
--
Phone: +41-1-334 66 51
Fax: +41-1-334 50 60
Web: http://www.credit-suisse.ch

todd...@my-deja.com

unread,
Jan 4, 2000, 3:00:00 AM1/4/00
to
In article <KW9c4.2863$h3.7...@ord-read.news.verio.net>,

"Robert C. Martin" <rma...@oma.com> wrote:
> It astounds me how often I get asked this question. People have
>this very odd view of XP.

It astounds me how you are astounded.

>They think that it eliminates all good design and prevents
>any kind of intelligent thought about software structure.

Let's see: you can't predict the need for a bug tracking system or a
logging facility, etc., and you can't predict that you should
back up your stuff, and you can't do a lot of other common
sense things one learns from experience -
and it is my idea of XP that is odd?

> Of course I have not given up any of the things you mention. Look at
> the way I wrote my book. How many times did I create designs and
> write code, only to back away and say "ick!", and then refactor the
> designs and the code into something better?

My understanding is "ick" would not be sufficient justification
to change code. It would take at least being bitten 2 times
for the code to be refactored. Your early refactoring doesn't
seem to be consistent.

> I find it odd that I must justify my affinity to XP.

You have to do no such thing. But not to expect questions
is naive. Your past and present selves seem at odds even if
internally you have found a reconciliation.

> Make no mistake, XP is *heavily* design oriented.

I know you have a lot of design principles, but I
don't recall XP promoting any in the traditional sense.
Read the web sites and email and you'll see no mention of DIP etc.

>But it does not allow you to build castles in the air.

And you did build airy castles in the past? You did change code for
no benefit or reason in the past?

>If you want to use a visitor, you'd better be
> able to show how the code will improve.

And before, you would sprinkle visitors about for no reason?

>If you want to invert dependencies,
> you'd better be able to show that it solves some existing problem,

My understanding is that you would not even think or worry about
dependencies until it became a twice-seen problem, and then you
would refactor. You would do the simple thing that works first,
and this would clearly preclude worrying about all these issues.

>the structure of the code more flexible.

Worrying about flexible software structure ahead of time seems
in direct opposition to the idea of constant refactoring
and doing the simplest thing that works. I understand it's
in your nature to think of these things, but it does not
seem to be in the nature of XP. I don't recall Ron or other
XPers talking like this. The more XP approach would be to get
something working, and if you have a problem later, then refactor
it. Worrying about making the refactoring "easier" etc. would seem
to be unnecessary work, because you don't know what will need changing,
and in general seems wildly inconsistent with XP.


Sent via Deja.com http://www.deja.com/
Before you buy.

Patrick Logan

unread,
Jan 4, 2000, 3:00:00 AM1/4/00
to
In comp.object Phlip <new_...@see.web.page> wrote:

: First the customer asks for a 2-digit year field in a date record.
: You code and deploy a program following that request, and get paid.
: The customer starts using the code profitably.

Part of requirements analysis is separating requirements-specific
information from implementation-specific information. If the customer
asks for two digit years, you have to find out why. Maybe they are
just getting caught up in how they think it *should* be
implemented. Maybe you can show them why that choice would be a
problem. Or maybe it is because they *have* to display a date on a
device that only has six LED digits. Now you have an engineering
problem to get date calculations correct as well as to determine the
best way to display the dates on that device.
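Logan's LED example separates the two concerns cleanly; here is a hypothetical sketch of that separation (the function name and format are invented for illustration): dates are stored with full 4-digit years so calculations stay correct, and only the display layer squeezes them onto six digits.

```python
from datetime import date

def led_display(d: date) -> str:
    """Render a date for a 6-digit LED panel (DDMMYY) -- display only.

    The stored date keeps its full 4-digit year, so date arithmetic
    stays correct; truncation happens solely at the output device.
    """
    return f"{d.day:02d}{d.month:02d}{d.year % 100:02d}"

d = date(2000, 1, 4)
# Calculation uses the real year, so century boundaries are safe:
assert (d - date(1999, 12, 31)).days == 4
print(led_display(d))  # 040100
```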

--
Patrick Logan patric...@home.com

univ...@saltmine.radix.net

unread,
Jan 4, 2000, 3:00:00 AM1/4/00
to
In comp.object Robert C. Martin <rma...@oma.com> wrote:

> And any professional software engineer will study techniques, methods,
> languages, and patterns on the premise that they *will* need them one day.
> But programmers are not programs. If we put everything into a program that
> it _might_ need one day, we'd never finish it.

Not every program will need everything. That is unless we intend for
every program to be used, or potentially usable for every
imaginable purpose.

Elliott

univ...@saltmine.radix.net

unread,
Jan 4, 2000, 3:00:00 AM1/4/00
to
In comp.object Michael C. Feathers <mfea...@acm.org> wrote:

> Phlip <new_...@see.web.page> wrote in message

>> Place one team on Silicon Coast (you know, Irvine California). Then
>> put another in Hawaii, another in Hong Kong, another in Khartoum,
>> and another in Curitaba. Then wire them all up with a little
>> consumer-grade teleconferencing equipment. Once the ball got
>> rolling, the infrastructure of this huge "mundo-team" would remain
>> in place for the new projects under new customers, and so on. On top
>> of the speed gains of XP would come speed gains of 24/5
>> productivity.
>>
>> This is logistically impossible, and the worst idea I ever had.
>> (Unless, of course, if anyone out there agrees with it.)

> Unfortunately, I've heard it before.. from a manager
> in passing. It is hard to figure out whether it is
> a joke and who the joke is on. :-)

Whooa!

What I and I thought most others with questions about XP meant by
"framework" is nothing of this sort. I meant structure and architecture
within the code of the project software. Like the frameworks discussed in
the GoF book for the various graphics packages they refer to.

Elliott


univ...@saltmine.radix.net

unread,
Jan 4, 2000, 3:00:00 AM1/4/00
to
In comp.object Ronald E Jeffries <ronje...@acm.org> wrote:

> On Mon, 03 Jan 2000 13:29:07 GMT, brou...@yahoo.com wrote:
>>Incremental development isn't necessarily the best solution for *all*
>>problems.

> I used to think that as well. I've changed my mind by virtue of using
> incremental development for everything, for nearly four years now.
> Unquestionably, if one is sufficiently smart, and sufficiently lucky
> (it takes both), more design up front can pay off. Incremental
> development might not be best for some folks here.

A recent common error: equating IID with the piecemeal approach.

Why not do what RUP and I do? We do iterative and incremental development
(IID) with upfront planning whose overall design may change before each
major iteration.

IID was associated with RUP-like processes before the XP/oma approaches were
even twinkles in their founders' eyes.

Elliott

univ...@saltmine.radix.net

unread,
Jan 4, 2000, 3:00:00 AM1/4/00
to
In comp.object Ronald E Jeffries <ronje...@acm.org> wrote:
> On Tue, 04 Jan 2000 01:36:50 GMT, univ...@radix.net (Ell) wrote:

>>But "evolution" is not the same as "piecemeal" and "non-holistic".
>>[holistic: the interaction of parts to serve the whole]. "Evolution" in my
>>observations typically happens most efficiently in a holistic, non-piecemeal
>>manner.

> Once again we see the Phasist misapprehension in action. Incremental
> development does not militate against a holistic understanding and
> advancement of the system.

I think I know that IID can work with the holistic approach, having said
so for 8 years before XP was mentioned on comp.object.

You are in error to fail to see that IID has been, for at least 10 years,
and still is a cornerstone of RUP-like process, and to make it seem
like XP was first and, more importantly, that it correctly applies IID. All
the literature supports this and not your take.

> All systems are developed incrementally - none are written in one
> moment. The question is the order of implementation.

It's relative to a time frame. What seems to be incremental in one
context is not in another. Not all systems are developed incrementally.
For instance a star often goes into phasar modality suddenly. While there
may be increments, there is also a suddenness. Sometimes a solution gels
suddenly. Failure to realize that is the mistake of an overall
conservative philosophical view of the world.

> Bottom-up implementations tend to do nothing useful until the very end
> of the project. Top-down implementations often have the same problem,
> though they may tend to have better architecture when completed. XP
> suggests a different order of implementation - most valuable thing
> first, and provides a family of practices that allow this to result in
> a good architecture as well as delivery of known value on a
> predictable schedule. "It's a good thing. (tm)"

"It's an unfortunate thing. (tm)"

Unless the approach examines the problem space as a whole (holistically),
and then creates an overall plan based upon that, which is subject to
iterative change and replanning, it is typically not going to be the most
efficient form of development.

Elliott


Patrick Logan

Jan 4, 2000, 3:00:00 AM
to
In comp.object Vladimir Trushkin <trus...@iname.com> wrote:

:> With XP, though, not only did you do what the customer asked for,
:> you also:
:>
:> * Managed the changes in what (s)he asked for.
:>
:> * Managed the code to be ready for those changes.
:>
:> * Tested the code to prove what has been asked for.

: Only System (Behavior) level testing can help prove you've got what has
: been asked for. Developers, biased toward the internal design, can't
: abstract away from that knowledge completely, so they're poor performers
: of System-level testing. The unit-level testing done in XP on the
: development side can't prove you've got what the customer expected
: (asked for); you have proven only that you've got what you've been
: thinking of. I can't agree with your 3rd statement.

XP prescribes user-level functional tests as much as it does
developer-level unit tests.

--
Patrick Logan patric...@home.com

Martijn Meijering

Jan 4, 2000, 3:00:00 AM
to Dave Thomas
Dave Thomas wrote:
> Sorry, I was being obtuse. I keep coming back to the concept that
> JustInTime design and coding seems to encourage developers to work
> only up to the end-of-the-day horizon. SeeingFurther has always been
> important to me, and I'm struggling mightily with the idea of giving
> it up.

"SeeingFurther" is not a bad thing. The bad thing is making premature
decisions. In XP you would preferably use "SeeingFurther" to assess
risk. XP has a rule that says "worst things first". Last week I read an
article by one of the programmers of "X-wing vs TIE-fighter". One of
their problems was getting decent network play over the internet. From
the article it is clear that they didn't (and still don't...) know
enough about how the internet works. If you don't know enough about
something or if you are not sure if you know enough, that's a risk. In
XP you would talk it through with your colleagues, do some back of
envelope calculations, do a CRC session, or even write a spike. If this
mitigates the risks ("Hey, it's not so hard after all!"), that's fine.
Otherwise, it would be a good thing to move the risky user story
forward.

Martijn

Panu Viljamaa

Jan 4, 2000, 3:00:00 AM
to
Phlip wrote:

> First the customer asks for a 2-digit year field in a date record.
> You code and deploy a program following that request, and get paid.
> The customer starts using the code profitably.

Think I get your drift. Some $250 billion later, we have now refactored
our computer software to be Y2K ready. It seems everybody profited and
the programmers in the '50s did 'just the right thing'.

- Panu

> ...

Ell

Jan 5, 2000, 3:00:00 AM
to
Martijn Meijering <mmei...@wi.leidenuniv.nl> wrote:

#Dave Thomas wrote:
#> Sorry, I was being obtuse. I keep coming back to the concept that
#> JustInTime design and coding seems to encourage developers to work
#> only up to the end-of-the-day horizon. SeeingFurther has always been
#> important to me, and I'm struggling mightily with the idea of giving
#> it up.

#"SeeingFurther" is not a bad thing. The bad thing is making premature
#decisions. In XP you would preferably use "SeeingFurther" to assess
#risk. XP has a rule that says "worst things first". Last week I read an
#article by one of the programmers of "X-wing vs TIE-fighter". One of
#their problems was getting decent network play over the internet. From
#the article it is clear that they didn't (and still don't...) know
#enough about how the internet works. If you don't know enough about
#something or if you are not sure if you know enough, that's a risk. In
#XP you would talk it through with your colleagues, do some back of
#envelope calculations, do a CRC session, or even write a spike. If this
#mitigates the risks ("Hey, it's not so hard after all!"), that's fine.

This kind of prototyping is also an integral part of RUP-like processes. RUP
has always advocated exploring high risks by implementing and coding.

#Otherwise, it would be a good thing to move the risky user story
#forward.

RUP has advocated putting the riskiest portions first along with testing risky
portions as I say above.

XP offers nothing good that is really new and profound that RUP didn't always
include. XP's major new parts, like taking a constant, ongoing piecemeal
approach, have been found to be inefficient for most types of sw development.

Ell

Jan 5, 2000, 3:00:00 AM
to
<univ...@saltmine.radix.net> wrote:

#In comp.object Ronald E Jeffries <ronje...@acm.org> wrote:

#> univ...@radix.net (Ell) wrote:
#>>But "evolution" is not the same as "piece meal" and "non-holistic".
#>>[holistic: the interaction of parts to serve the whole]. "Evolution" in my
#>>observations typically happens most efficiently in a holistic, non-piece meal
#>>manner.

#> Once again we see the Phasist misapprehension in action. Incremental
#> development does not militate against a holistic understanding and
#> advancement of the system.

#I think I know IID can work with the holistic approach that after saying
#it for 8 years before XP was mentioned on comp.object.

Correction:

#I think I know IID can work with the holistic approach. []
That after showing how
#it
does
#for 8 years before XP was mentioned on comp.object.

#You are in error to fail to see that IID always has been for at least 10
#years and still is a cornerstone of RUP like process. To make it seem
#like XP was first and more importantly that it correctly applies IID. All
#the literature supports this and not your take.

Correction:

#You are in error to fail to see that IID always has been for at least 10
#years, and still is a cornerstone of RUP like process.


You are in error to

# make it seem like XP was first
to,
#and more importantly that it correctly applies, IID
in software development.
#All the literature supports
what I just said []
#and not your take.

Robert C. Martin

Jan 5, 2000, 3:00:00 AM
to

Vladimir Trushkin <trus...@iname.com> wrote in message
news:OxN0WeoV$GA.221@cpmsnbbsa04...

No. They work in pairs, and the pairs change often. So their differences in
skills and styles will tend to homogenize throughout the team.


--

Robert C. Martin | OO Mentoring | Training Courses:
Object Mentor Inc. | rma...@objectmentor.com | OOD, Patterns, C++, Java,
PO Box 85 | Tel: (800) 338-6716 | Extreme Programming.
Grayslake IL 60030 | Fax: (847) 548-6853 | http://www.objectmentor.com

"One of the great commandments of science is:
'Mistrust arguments from authority.'" -- Carl Sagan

Robert C. Martin

Jan 5, 2000, 3:00:00 AM
to

<brou...@yahoo.com> wrote in message
news:pvhxOMSM6zLM96...@4ax.com...

> "Robert C. Martin" <rma...@oma.com> wrote:
>
> >If your customer has not made nightly backups a priority, then they must
> >be willing to lose all the source. Clearly you want to make sure the
> >customer knows what they are risking. But if they prioritize other
> >features above the nightly backup, you have no business building the
> >nightly backup first. It's their nickel, after all.
>
> >There are things that make a tremendous amount of engineering sense that
> >may make no business sense. We, as engineers, have a tough time
> >understanding this. Still, we exist to serve the business. When the
> >business countermands our engineering instincts, the business wins.
>
> The customer doesn't always know best. If he did, he wouldn't be paying
> for our expertise.

And we have the responsibility to educate him. But he still has to make the
business decisions. If something we know impacts those decisions, we have
to inform him. But we can't make the decisions for him.
>
> If a doctor, lawyer, accountant, or *real* engineer followed a customer's
> request to the detriment of what the customer wanted (but didn't know),
> he'd probably be liable for malpractice.

If I need surgery, but I refuse surgery, is the doctor liable? Certainly he
would be liable if he did not tell me I needed surgery. Certainly he would
be liable if he did not explain to me what the repercussions of refusing
surgery would be. But if I still refuse, he is not liable. And I have the
right to refuse!

> I wonder if our perception is also colored by how much stake we have in
> the project. If we're merely hired hands getting paid by the hour to do
> somebody else's project, I can see your point.

It has nothing to do with how much stake you have. You are an engineer. Do
you want the customer making technical decisions? Clearly not! Yet the
customer has a huge stake in the project. By the same token, the customer
does not want you to make business decisions!

Engineers cannot presume to know more about the customer's business than the
customer knows. Engineers cannot usurp the business decisions that are the
customer's right to make.

> Some people may prostitute themselves and give the clients what the
> clients want...even if that means giving them enough rope to hang
> themselves.

You describe the rope. You explain the shape of the noose, and how it might
find its way over their head. You describe what the drop and the sudden
stop could do to them. If they are determined to take the risk anyway, then
you help them do it, and you do everything in your power at a technical
level to lessen the risk.

> On the other hand, if we're working on a fixed bid or with a salaried
> staff, I think bowing to every whim--especially when it goes counter to
> good engineering sense--could be a costly mistake that will burn the
> developers.

Nobody is talking about going against good engineering sense. Instead we
are talking about not usurping business prerogative.

Robert C. Martin

Jan 5, 2000, 3:00:00 AM
to

<todd...@my-deja.com> wrote in message news:84t65u$94l$1...@nnrp1.deja.com...
> In article <KW9c4.2863$h3.7...@ord-read.news.verio.net>,

> "Robert C. Martin" <rma...@oma.com> wrote:
> > It astounds me how often I get asked this question. People have
> >this very odd view of XP.
>
> It astounds me how you are astounded.

(sigh)

>
> >They think that it eliminates all good design and prevents
> >any kind of intelligent thought about software structure.
>
> Let's see, you can't predict the need for bug tracking system or a
> logging facility, etc and you can't predict that you should
> backup your stuff, and you can't do a lot of other common
> sense things one learns from experience,
> and it is my idea of XP that is odd?

I prefer to wait until the bug flux demands the use of a tracking system. I
prefer to wait until there is a definite need for the logging facility. I
will always back up my source code, as is my right as the customer of the
development environment. I will submit to my customer if he decides that
backing up *his* data is low priority. After all, getting a prototype to a
trade show by a certain date may be much more important to him than the
creation of a backup facility for data that doesn't exist yet.


>
> > Of course I have not given up any of the things you mention. Look at
> > the way I wrote my book. How many times did I create designs and
> > write code, only to back away and say "ick!", and then refactor the
> > designs and the code into something better?
>
> My understanding is "ick" would not be sufficient justification
> to change code. It would take at least being bitten 2 times
> for the code to be refactored. Your early refactoring doesn't
> seem to be consistent.

Refactoring in XP is done any time there is an 'ick'. Kent refers to these
as "Code Smells". If you don't like the code for some substantial reason,
or if the code is not as simple and clear as you can make it, then refactor
it. By the same token, don't anticipate features you aren't sure about.
Don't refactor the code for generality in advance of needing that
generality. The anticipated features may not appear, or may not appear in
the form you are anticipating.
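A minimal sketch of this "ick"-driven refactoring (the thread contains no code; the Python example and all its names are illustrative, mine, not the poster's):

```python
# Hypothetical "before": the tax rule is written twice -- a duplication
# smell, the kind of "ick" that triggers an immediate refactoring.
def invoice_total_before(prices):
    return sum(p + p * 0.08 for p in prices)

def quote_total_before(prices):
    return sum(p + p * 0.08 for p in prices)   # same rule, copied

# "After": the rule lives in exactly one place. Note that no speculative
# generality (tax bands, currencies, ...) was added -- only the
# duplication was removed.
TAX_RATE = 0.08

def with_tax(price):
    return price * (1 + TAX_RATE)

def invoice_total(prices):
    return sum(with_tax(p) for p in prices)

def quote_total(prices):
    return sum(with_tax(p) for p in prices)
```

The behaviour is unchanged; only the smell is gone.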


>
> > Make no mistake, XP is *heavily* design oriented.
>
> I know you have a lot of design principles, but I
> don't recall XP promoting any in the traditional sense.
> Read the web sites and email and you'll see no mention of DIP etc.

That is correct. Though you will find some crossover at
http://c2.com/cgi/wiki?ExtremeNormalForm. Kent's rule, OnceAndOnlyOnce is
strongly related to OCP. LSP, DIP, and ISP are simply ways of achieving
OCP. I agree that the notion of principles is not as well developed in XP,
but that's something I intend to correct. The philosophical aim of XP is
*more* design, not less.


>
> >But it does not allow you to build castles in the air.
>
> And you did build airy castles in the past? You did change code for
> no benefit or reason in the past?

No. At least I hope not. That's my point. There is no "before". My
viewpoint hasn't changed. XP does not represent a fundamental shift in my
thinking.


>
> My understanding is that you would not even think or worry about
> dependencies until it became a twice seen problem and then you
> would refactor.

No. You would not anticipate new features. But you would certainly worry
about dependencies! In order to make the code as simple as possible, you
have to minimize dependencies. Thus, refactoring to manage dependencies is
crucial to the success of XP. Without it, the code gets badly tangled, and
resists further refactoring.
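A toy illustration of refactoring to manage a dependency (example and names are mine, under the assumption that "dependency" here means a hard-wired concrete collaborator):

```python
# "Before": the report constructs its collaborator itself, so every
# caller -- including a unit test -- drags the console along.
class ConsolePrinter:
    def write(self, text):
        print(text)

class TangledReport:
    def publish(self):
        ConsolePrinter().write("totals: 42")   # concrete dependency baked in

# "After": the dependency is handed in from outside, so a test can pass
# a fake, and the report no longer knows which device it writes to.
class Report:
    def __init__(self, writer):
        self.writer = writer        # anything with a .write(text) method

    def publish(self):
        self.writer.write("totals: 42")

class FakeWriter:
    """A test double that records what was written."""
    def __init__(self):
        self.lines = []

    def write(self, text):
        self.lines.append(text)
```

The refactored form is no more general than the tangled one; it is simply less coupled, which is what keeps further refactoring possible.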

> You would do the simple thing that works first
> and this would clearly preclude worrying about all these issues.

The rule has two parts.

1. Do the simplest thing that passes the current test case.
2. Refactor to the simplest design. (i.e. manage dependencies).
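A toy walk-through of the two parts (example mine, not from the thread):

```python
# Part 1: the simplest thing that passes the current test case.
# The first test says only that 1996 is a leap year.
def is_leap_v1(year):
    return year % 4 == 0

assert is_leap_v1(1996)

# More test cases arrive (1900 is not a leap year, 2000 is); each is
# made to pass with as little code as possible, and part 2 -- refactor
# to the simplest design -- leaves the three rules stated once each,
# in order of precedence.
def is_leap(year):
    if year % 400 == 0:
        return True
    if year % 100 == 0:
        return False
    return year % 4 == 0

assert is_leap(1996) and is_leap(2000)
assert not is_leap(1900) and not is_leap(2001)
```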


>
> Worrying about flexible software structure ahead of time seems
> in direct opposition to the idea of constant refactoring
> and doing the simplest thing that works.

Not at all. To a method that depends upon refactoring, keeping the code
flexible is of paramount importance.

> I understand it's
> in your nature to think of these things, but it does not
> seem to be in the nature of XP.

In my talks with Kent, Ron, and Martin, I have found them to be just as
focused upon good design issues as I am.

> I don't recall ron or other XPers talking like this.

Ron is the one who published the two part view of "The simplest thing that
could possibly work".

> The more XP approach would be to get
> something that working and if you have a problem later then refactor
> it.

No! You get it working first, then you refactor the design into its
simplest, cleanest, and most flexible form.

> Worrying about making the refactoring "easier" etc would seem
> to be unnecessary work because you don't know what will need changing
> and in general seems wildly inconsistent with XP.

Refactoring is at the heart of XP. Each refactoring step enables further
refactoring. That's what it's all about.

This is not hacking! Every change in the code is a change towards a better
design. Every change in the code enhances its conformance to the principles
of good OO design. THAT is an overriding imperative of XP. Pairs are honor
bound not to check in dirty, inflexible code. Period. When a pair finds
such code, they are honor bound to refactor it into a better form.

Martijn Meijering

Jan 5, 2000, 3:00:00 AM
to univ...@radix.net
Ell wrote:
> This kind of prototyping is also an integral part of RUP like process. RUP
> has always advocated exploring high risks by implementing and coding.

Agreed.



> #Otherwise, it would be a good thing to move the risky user story
> #forward.
>
> RUP has advocated putting the riskiest portions first along with testing risky
> portions as I say above.

Agreed.



> XP offers nothing good that is really new and profound that RUP didn't always
> include. XP's major new parts, like taking a constant, ongoing piecemeal
> approach, have been found to be inefficient for most types of sw development.

Incorrect. Let me give you a few examples of significant differences
between RUP and XP:

1)

The high level view of an RUP project is a number of releases (or
cycles, I haven't got my RUP stuff handy). This is similar to how
successful products like Word, Access, WP, Corel Draw etc have had new
major releases each year. Each release is divided into phases (not to be
confused with the waterfall phases): inception, elaboration,
construction and transition. The phases are divided into iterations.
Each iteration contains classical activities such as analysis, design,
coding, testing etc and ends in some tangible result. This can be a
version with reduced functionality, but often is a document, model or
prototype that couldn't be taken into production immediately. Still, it
provides a measure of feedback and risk reduction.

An XP project starts with something like RUP's Inception, though with
different rituals and fewer and simpler artifacts. Then there is the
time between the start of development and the first operational release.
In XP we keep this as short as possible, because we consider it to be
extremely risky. Then comes the main part of an XP project, a continuous
stream of production quality releases in short iterations (1 to 4
weeks). Some of these releases can be internal releases and it is a good
idea to have your new system running in parallel with the old system if
there is one, but it is very important that releases are taken into
production.

So, for XP the construction/transition distinction isn't very useful.
The focus is on delivering business value and getting feedback, so the
two concepts are deliberately not separated. Therefore it is typically
not very useful to think in terms of RUP releases. You can group
individual releases into major releases if that makes sense. Whether it
makes sense depends on the cost of deployment, whether you are doing
bespoke, shrinkwrap or in house development etc.

At some point the stream of user stories dries up, and you're left with
just minor maintenance until the system is replaced.

2)

RUP wisely advises to move risky stuff and stuff that has a great impact
on the architecture first. XP goes much further than this by moving it
to a smaller level of granularity (XP user stories are 1-3 weeks of
ideal programming time) and making it the main risk management
technique. You start small, so you can deliver business value early and
don't have to invest much, and so you can go fast. XP, like other
strategies, divides complicated problems up into easier ones. But it
turns out that it is usually straightforward to divide them into pieces
that are so small they're not just manageable but genuinely easy. The optimal
size is much smaller than people realise and the difference in
productivity is much larger than people realise. In order to make this
work you must do merciless refactoring. Many people seem to think this
is only so much hot air. This comes from a misunderstanding of what
refactoring is and how merciless it should be. We really mean it when we
say "refactor mercilessly". When I try to teach people XP, I find that I
have to discourage people from adding speculative infrastructure
("cowboy analysis") but that I have to encourage them strongly to do
merciless refactoring.

3)

RUP does advocate rigorous testing (who wouldn't), but XP places great
emphasis on continuous unit testing and writing unit tests first. Code
that compiles and runs gives more feedback than code that doesn't
(through a program database a la MS VC++, through debugging statements
etc). Code with unit tests gives far, far more feedback than code
that merely compiles. Because of this most bugs are found almost
instantaneously. Because of this debugging is orders of magnitude more
productive. You just don't spend ages debugging anymore. You can
pinpoint most bugs in a minute or even less.
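In miniature, the test-first habit looks like this (Python and the names are mine; an XP team of the time would likely have used JUnit or its Smalltalk ancestor):

```python
# The test is written first and fails (parse_version doesn't exist yet);
# then just enough code is written to make it pass.
def test_parse_version():
    assert parse_version("1.2.3") == (1, 2, 3)
    assert parse_version("10.0") == (10, 0)

def parse_version(text):
    return tuple(int(part) for part in text.split("."))

# Re-running the whole suite after every few minutes of work is what
# pinpoints a bug: whatever broke is in the code touched since the last
# green run.
test_parse_version()
```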

4)

RUP advocates good coding standards, but XP insists on once and only
once, very small methods, very short parameter lists, very small
objects, systematically chosen and continually refined identifiers.
During coding, you spend a good deal of time reading code. Not pages on
end, but a paragraph here and then a paragraph there while you are doing
something else like finding a bug, or finding out where to add a
parameter and things like that. When you do that, you have to parse
what's going on, what the variables in scope are and so forth. If you
don't pay attention to it, you may not even notice how much time you're
spending on reading because you are focussed on something else. If you
make the small effort of extracting tiny methods, you'll save yourself
untold hours of reading.
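A sketch of the tiny-methods payoff (illustrative example, not from the thread):

```python
# "Before": one expression the reader must parse in full on every visit,
# with the subtotal rule stated twice.
def charge_before(order):
    return sum(q * p for q, p in order) * (
        0.9 if sum(q * p for q, p in order) > 100 else 1.0)

# "After": tiny named methods; each call site now reads as a sentence,
# and the subtotal rule is stated once.
DISCOUNT_THRESHOLD = 100
DISCOUNT_FACTOR = 0.9

def subtotal(order):
    return sum(quantity * price for quantity, price in order)

def qualifies_for_discount(order):
    return subtotal(order) > DISCOUNT_THRESHOLD

def charge(order):
    if qualifies_for_discount(order):
        return subtotal(order) * DISCOUNT_FACTOR
    return subtotal(order)
```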

I could go on, but I won't because this message is getting long.

Regards,

Martijn

Martijn Meijering

Jan 5, 2000, 3:00:00 AM
to Panu Viljamaa
Panu Viljamaa wrote:
>
> Phlip wrote:
>
> > First the customer asks for a 2-digit year field in a date record.
> > You code and deploy a program following that request, and get paid.
> > The customer starts using the code profitably.
>
> Think I get your drift. Some $250 billion later, we have now refactored
> our computer software to be Y2K ready. It seems everybody profited and
> the programmers in the '50s did 'just the right thing'.

Nope, that's not refactoring, that's a bug fix, or a requirements change
if you're kind. A refactoring is a behaviour preserving transformation
of code into a more maintainable form. If those '50s programmers had
practiced merciless refactoring, the bug fix would have been trivial.
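A miniature of that point (Python and the names are mine): the refactoring preserves today's behaviour while giving the 2-digit-year assumption a single home, so the eventual fix touches one line.

```python
# "Before": the two-digit-year assumption is restated at every use site.
def age_scattered(birth_yy, today_yy=99):
    return (1900 + today_yy) - (1900 + birth_yy)

# "After" a behaviour-preserving refactoring: the assumption has exactly
# one home. The program still computes the same (wrong-past-1999)
# answers, but the Y2K fix -- say, a pivot window -- now happens here,
# and only here.
def full_year(yy):
    return 1900 + yy        # the only place that knows about "19xx"

def age(birth_yy, today_yy=99):
    return full_year(today_yy) - full_year(birth_yy)
```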

Martijn

Martijn Meijering

Jan 5, 2000, 3:00:00 AM
to
"Robert C. Martin" wrote:
> > My understanding is "ick" would not be sufficient justification
> > to change code. It would take at least being bitten 2 times
> > for the code to be refactored. Your early refactoring doesn't
> > seem to be consistent.
>
> Refactoring in XP is done any time there is an 'ick'. Kent refers to these
> as "Code Smells". If you don't like the code for some substantial reason,
> or if the code is not as simple and clear as you can make it, then refactor
> it. By the same token, don't anticipate features you aren't sure about.
> Don't refactor the code for generality in advance of needing that
> generality. The anticipated features may not appear, or may not appear in
> the form you are anticipating.

Also, XP has a very low 'ick'-level. XP-ers go 'ick' much more often
than most programmers.

Martijn

Panu Viljamaa

Jan 5, 2000, 3:00:00 AM
to
Martijn Meijering wrote:

> Panu Viljamaa wrote:
>
> > Think I get your drift. Some $250 billion later, we have now refactored
> > our computer software to be Y2K ready. It seems everybody profited and
> > the programmers in the '50s did 'just the right thing'.
>
> Nope, that's not refactoring, that's a bug fix, or a requirements change if
> you're kind. A refactoring is a behaviour preserving transformation of code
> into a more maintainable form. If those '50s programmers had practiced
> merciless refactoring, the bug fix would have been trivial.

Got me there, it's not refactoring.

However, the Y2K problem seems to me a direct consequence of the other
XP-related maxims "You ain't gonna need it", "Do what the customer asks", and
"Business Value first".

What I've been trying to point to in some of my previous postings is that
simplistic slogans should not be taken as cure-all, apply-always dogmas;
rather, their applicability depends on the situation and its participants. We
*should* be skeptical (yet keep an open mind). We should take into account
the whole context, and the different interest groups involved. We must ask
"Business Value for Whom?". You the developer? Your employer? Your client's
project manager? Your client's company? Society as a whole? I'm pretty
sure the benefits are not equally distributed to each of these, and we should
be honest, or at least explicit, about it. (Some would say we shouldn't.)

Kent Beck's quite explicit about it; he simply says "This works for us".

You can argue that the original decision to use 2 digits for dates made a lot
of sense, and created value, for some of the groups involved; and that it was caused by
the noble motto "To do what the customer asks for". So, don't blame us
programmers. Instead thank us for solving *your* Y2K problem.

- Panu


Robert C. Martin

Jan 5, 2000, 3:00:00 AM
to

Vladimir Trushkin <trus...@iname.com> wrote in message
news:euFNzioV$GA.277@cpmsnbbsa04...

They shouldn't happen. If they do, you find out what you overlooked and try
not to do that again. This is no different from any formal process. You
find the source of error, and find a way to eliminate it.

XP is a formal process. It is just not a heavyweight process. It
eliminates a lot of the safety nets under the assumption that the cost isn't
worth the risk.

Robert C. Martin

Jan 5, 2000, 3:00:00 AM
to

Vladimir Trushkin <trus...@iname.com> wrote in message
news:eJpnSmoV$GA.269@cpmsnbbsa04...

The entire analogy is weak. The point is that your beginning game plan in
chess does not include your winning move. And your software development
plan does not include your final move either. You can't see that far in
advance.

The world of software is a world of change. Requirements are never static,
they almost always change; often changing with extreme frequency. Any plan
made early will soon be invalidated by the ever changing requirements. A
software process must be able to deal efficiently with this kind of change.
Any process that invests huge amounts of time in up front plans, is doomed
to be inefficient with regard to changing requirements.

Robert C. Martin

Jan 5, 2000, 3:00:00 AM
to

Hans Wegener <hans.w...@credit-suisse.ch> wrote in message
news:3871EDA8...@credit-suisse.ch...

> "Robert C. Martin" wrote:
>
> > I'm not convinced of that. [...] That benefit starts to pay for the
> > cost of the project before the project is complete. Thus the net cost
> > may be lower.
>
> Could be, but to my knowledge that hasn't been demonstrated yet. I would
> not say it's impossible, but I'm really sceptical. This is where I would
> be happy to see more tangible results rather than guesses.
>
> > I can't imagine a scenario in which XP adds risk. I consider the risk
> > of building the wrong infrastructure up-front to be greater than the
> > risk of waiting until the need for the infrastructure is demonstrable.

Sorry, this was a screwup on my part.

There are certainly scenarios where XP has not been tried. XP is a risk in
any such scenario until it has been tried and found to work.

> If I read you right you can not imagine any situation where people
> assess the requirements wrongly and find out at a point in time where it
> costs more than thinking big in the beginning...?

Correct. Simply because thinking big in the beginning is also risky. You
must think and model for long periods of time without any concrete feedback.
You can make errors early that are not detected until much later, after many
other decisions have been predicated on those errors. Also, it is my
experience that the requirements are never initially correct.

In any case I agree that we need more tangible numbers. I agree that the
enthusiasm built around XP is not resting on the firmest of empirical
foundations. I am eager to correct that because from my own empirical
observations of XP and XP like projects, it works very very well.

Robert C. Martin

Jan 5, 2000, 3:00:00 AM
to

Vladimir Trushkin <trus...@iname.com> wrote in message
news:uynu5roV$GA.221@cpmsnbbsa04...

>
> > XP suggests a different order of implementation - most valuable thing
> > first, and provides a family of practices that allow this to result in
> > a good architecture as well as delivery of known value on a
> > predictable schedule. "It's a good thing. (tm)"
>
> Does it mean you adhere mostly to Evolutionary Prototyping strategy?
>

We evolve simple systems into more complex systems. The simple systems
implement the features that are most valuable to the customer. The
evolutions add more and more features, in the order that the customers want
to see them.

But we do not abandon architecture. At every iteration we refactor the code
into the simplest, cleanest, and most flexible form for the features it
supports. Thus the architecture evolves with the code. The project is not
a bunch of features bolted together in a mishmash. We don't stop coding
when it works. We get it to work first, and then we spend a lot of time
giving it the right structure.

Ell

Jan 6, 2000, 3:00:00 AM
to
Martijn Meijering <mmei...@wi.leidenuniv.nl> wrote:

#Ell wrote:
#> This kind of prototyping is also an integral part of RUP like process. RUP
#> has always advocated exploring high risks by implementing and coding.

#Agreed.

#> #Otherwise, it would be a good thing to move the risky user story
#> #forward.

#> RUP has advocated putting the riskiest portions first along with testing risky
#> portions as I say above.

#Agreed.

#> XP offers nothing good that is really new and profound that RUP didn't always
#> include. XP's major new parts like taking a constant, on-going piece meal
#> approach have been found to inefficient for most types of sw development.

#Incorrect. Let me give you a few examples of significant differences
#between RUP and XP:

Before going on, I will say that pair programming may be useful in some
circumstances. Beyond that I'm not impressed.

Martijn Meijering continues:

#1)
#
#The high level view of an RUP project is a number of releases (or
#cycles, I haven't got my RUP stuff handy). This is similar to how
#successful products like Word, Access, WP, Corel Draw etc have had new
#major releases each year. Each release is divided into phases (not to be
#confused with the waterfall phases): inception, elaboration,
#construction and transition. The phases are divided into iterations.
#Each iteration contains classical activities such as analysis, design,
#coding, testing etc and ends in some tangible result. This can be a
#version with reduced functionality, but often is a document, model or
#prototype that couldn't be taken into production immediately. Still, it
#provides a measure of feedback and risk reduction.
#
#An XP project starts with something like RUP's Inception, though with
#different rituals and fewer and simpler artifacts. Then there is the
#time between the start of development and the first operational release.
#In XP we keep this as short as possible, because we consider it to be
#extremely risky.

RUP also wants this as short as possible.

# Then comes the main part of an XP project, a continuous
#stream of production quality releases in short iterations (1 to 4
#weeks). Some of these releases can be internal releases and it is a good
#idea to have your new system running in parallel with the old system if
#there is one, but it is very important that releases are taken into
#production.

A successful RUP project also has a stream of production quality releases in
as short an iteration as possible. The limiting factors are being clear that
the last release tested successfully, that there is an overall understanding
of the key issues involved in the next release, and that there is an overall
design for the whole system and the next release.

#So, for XP the construction/transition distinction isn't very useful.
#The focus is on delivering business value and getting feedback, so the
#two concepts are deliberately not separated. Therefore it is typically
#not very useful to think in terms of RUP releases. You can group
#individual releases into major releases if that makes sense. Whether it
#makes sense depends on the cost of deployment, whether you are doing
#bespoke, shrinkwrap or in house development etc.
#
#At some point the stream of user stories dries up, and you're left with
#just minor maintenance until the system is replaced.

User stories should all be assessed at the start of the project to identify
the highest risks and which are key for the client. You never mention this
holistic identification of the highest-risk stories _among them all_, which
RUP does.

#2)
#
#RUP wisely advises to move risky stuff and stuff that has a great impact
#on the architecture first. XP goes much further than this by moving it
#to a smaller level of granularity (XP user stories are 1-3 weeks of
#ideal programming time) and making it the main risk management
#technique. You start small, so you can deliver business value early and
#don't have to invest much, and so you can go fast. XP, like other
#strategies, divides complicated problems up into easier ones.

But the key coding foundation for all the code may not be that small. It
certainly should come first, though.

# But it
#turns out that it is usually really easy to divide them into pieces that
#are so small they're not just manageable but really easy. The optimal
#size is much smaller than people realise and the difference in
#productivity is much larger than people realise. In order to make this
#work you must do merciless refactoring. Many people seem to think this
#is only so much hot air. This comes from a misunderstanding of what
#refactoring is and how merciless it should be. We really mean it when we
#say "refactor mercilessly". When I try to teach people XP, I find that I
#have to discourage people from adding speculative infrastructure
#("cowboy analysis") but that I have to encourage them strongly to do
#merciless refactoring.

Infrastructure may be necessary to code all portions most easily, as I just
alluded to.
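
The merciless refactoring Martijn describes can be sketched in miniature.
Below is a hypothetical Python fragment (the function names and the report
formats are invented for illustration), where two near-duplicate functions
are refactored so the shared rule lives in exactly one place:

```python
# Before: two functions that duplicate the same totalling and
# formatting logic.
def format_invoice(lines):
    total = sum(qty * price for qty, price in lines)
    return "INVOICE total: %.2f" % total

def format_quote(lines):
    total = sum(qty * price for qty, price in lines)
    return "QUOTE total: %.2f" % total

# After merciless refactoring: once and only once. The totalling
# rule and the formatting rule each appear a single time.
def _total(lines):
    return sum(qty * price for qty, price in lines)

def format_document(kind, lines):
    return "%s total: %.2f" % (kind.upper(), _total(lines))
```

The point is not the toy example but the habit: every time duplication
appears, it is factored out immediately rather than left to accumulate.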

#3)
#
#RUP does advocate rigorous testing (who wouldn't), but XP places great
#emphasis on continuous unit testing and writing unit tests first. Code
#that compiles and runs gives more feedback than code that doesn't
#(through a program database a la MS VC++, through debugging statements
#etc). Code with unit tests gives far, far more feedback than code
#that merely compiles.

Sure, and our RUP does unit tests.
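
For readers who haven't seen the test-first discipline, here is a minimal
sketch in Python using the standard unittest module (the Stack class is a
made-up example). The test is written before the code it exercises:

```python
import unittest

# Written first: the test states what the code must do, and fails
# until the code below exists.
class StackTest(unittest.TestCase):
    def test_new_stack_is_empty(self):
        self.assertTrue(Stack().is_empty())

    def test_push_then_pop_returns_last_item(self):
        s = Stack()
        s.push(42)
        self.assertEqual(s.pop(), 42)

# Written second: just enough code to make the tests pass.
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

    def is_empty(self):
        return not self._items
```

Because the tests run continuously, every small change gets immediate
feedback, which is the "far, far more feedback" being claimed above.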

#4)
#
#RUP advocates good coding standards, but XP insists on once and only
#once, very small methods, very short parameter lists, very small
#objects, systematically chosen and continually refined identifiers.
#During coding, you spend a good deal of time reading code. Not pages on
#end, but a paragraph here and then a paragraph there while you are doing
#something else like finding a bug, or finding out where to add a
#parameter and things like that. When you do that, you have to parse
#what's going on, what the variables in scope are and so forth. If you
#don't pay attention to it, you may not even notice how much time you're
#spending on reading because you are focussed on something else. If you
#make the small effort of extracting tiny methods, you'll save yourself
#untold hours of reading.

Methods should be as small as possible. That is Programming 101. All my
teachers and others have taught me that no function should be over a page
long.
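
The "tiny methods" point goes further than page-long functions. A
hypothetical Python sketch (the order-shipping domain is invented for
illustration) shows the kind of extraction meant here, where each piece
states its intent in its name so later readers parse less:

```python
# Before: the reader must parse the whole body to learn the rules.
def ship_order_v1(order):
    if not order["items"]:
        raise ValueError("empty order")
    total = sum(i["qty"] * i["price"] for i in order["items"])
    if total > 100:
        total *= 0.9
    return {"customer": order["customer"], "charge": round(total, 2)}

# After extracting tiny, systematically named methods: each name
# carries one intention, and ship_order reads like a sentence.
def validate(order):
    if not order["items"]:
        raise ValueError("empty order")

def subtotal(order):
    return sum(i["qty"] * i["price"] for i in order["items"])

def with_bulk_discount(total):
    return total * 0.9 if total > 100 else total

def ship_order(order):
    validate(order)
    charge = round(with_bulk_discount(subtotal(order)), 2)
    return {"customer": order["customer"], "charge": charge}
```

A paragraph read here and a paragraph read there now costs one glance at a
name instead of a re-parse of the logic.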

Later, and Regards,

Robert C. Martin

unread,
Jan 6, 2000, 3:00:00 AM1/6/00
to

Vladimir Trushkin <trus...@iname.com> wrote in message
news:#56A00oV$GA.274@cpmsnbbsa04...

> > With XP, though, not only did you do what the customer asked for, you
> also:
> >
> > * Managed the changes in what (s)he asked for.
> >
> > * Managed the code to be ready for those changes.
> >
> > * Tested the code to prove what has been asked for.
>
> Only system (behavior) level testing can help prove you've got what was
> asked for. Developers, biased toward the internal design, can't abstract
> away from that knowledge completely, so they are poor performers of
> system-level testing.

This may be so. That is why XP requires the customer to specify the
functional (system) tests. Those tests are based upon the user stories (use
cases).

> Unit-level testing, done in XP on the development side, can't prove
> you've got what the customer expected (asked for); you have only proven
> you've got what you were thinking of. I can't agree with your 3rd
> statement.

Since XP demands that functional tests, specified by the customer, be
executed and passed at every release (and even more often than that), I
think point three is valid.
