How do we calculate ROI for following Agile practices in our projects?


Ashish Arvind Pathak

Nov 26, 2012, 5:23:17 AM
to scruma...@googlegroups.com
I am working for a software development company that develops large, complex enterprise solutions for big corporations.
It's been 5 years since we embraced Scrum as the methodology to plan and develop our software solutions.
But we have stagnated with our agile adoption across BUs/Product Lines. Also, each BU/PL has its own interpretation of Agile/Scrum practices. Not everyone is trained, and the fundamentals are weak among people who have recently come on board.
We did a quick dipstick survey (based on the Nokia Test) with 5 project teams across the organization and found that we averaged 5 on a scale of 10.

Our Sr. Management is interested in investing in increasing operational efficiency but is looking for a business case/ROI to fund this as a "project".

We don't really have any data on how agile/scrum has helped us improve our operational efficiency (quality, productivity) in the last 5 years.

I am looking for pointers as to how we can articulate this ROI and make a business case out of it.

Has anyone here gone through this? Done this? Used metrics or something else to convince their Sr. Management to invest in Agile as a tool for improving overall operational efficiency?

Regards,
--
Ashish Pathak

Derek

Nov 26, 2012, 5:39:50 AM
to scruma...@googlegroups.com
In a word: Velocity. 

I advise that you spin up a new, well-trained team and set it to work on a new project. As part of continuous improvement, you should quickly observe increasing velocity. That should be a VERY convincing example.

--
Derek Davidson



Mark Levison

Nov 26, 2012, 7:43:03 AM
to scruma...@googlegroups.com

Derek, when you say velocity, I assume you mean actually producing product vs. measuring Story points. If you use the Story point measure as a proxy for productivity, I can double (or triple) it in about 10 minutes.

Cheers
Mark

George Dinwiddie

Nov 26, 2012, 7:49:04 AM
to scruma...@googlegroups.com
Ashish,

On 11/26/12 5:23 AM, Ashish Arvind Pathak wrote:
> I am working for a software development company that is into developing
> large complex enterprise solutions for big corporations.
> It's been 5 years since we embraced Scrum as the methodology to plan and
> develop our software solutions.
> But we have stagnated with our agile adoption across BUs/Product Lines.
> Also, each BU/PL have their own interpretation of Agile/Scrum practices.
> Not all are trained and fundamentals are weak with people who have
> recently come on-board.
> We did a quick dipstick survey (based on Nokia Test) with 5 project
> teams across of organization and found that we averaged 5 on a scale of 10.

Do you think that you ARE achieving increases in productivity?

> Our Sr. Management is interested in investing in increasing operational
> efficiency but are looking for a business case/ROI to fund this as a
> "project".
>
> We don't really have any data around how agile/scrum has helped us
> improve our operational efficiency (quality, productivity) in last 5 years.
>
> I am looking for pointers as to how we can articulate this ROI and make
> a business case out of it.

If you're not yet measuring your performance in some fashion, you can't
measure improvement, and therefore can't calculate ROI.

> Has anyone here gone through this? Done this? Used metrics or something
> else to convince their Sr. Management to invest in Agile as a tool to
> improving overall operational efficiency?

In what ways do you see Agile helping you to improve operational efficiency?

- George

--
----------------------------------------------------------------------
* George Dinwiddie * http://blog.gdinwiddie.com
Software Development http://www.idiacomputing.com
Consultant and Coach http://www.agilemaryland.org
----------------------------------------------------------------------

Derek Davidson

Nov 26, 2012, 7:58:38 AM
to scruma...@googlegroups.com
Hi Mark

I'd be surprised if it took 10 minutes! :)

But, here's the thing: if you guesstimated all of your stories up-front, and if they weren't subject to 're-estimation', then an increasing velocity is a good measure of increasing productivity.

Like any statistic, it's VERY easy to massage velocity to give the appearance of increased productivity, but anyone involved with Scrum would spot this from a million miles away.

-- 
Derek Davidson


Michael Vizdos

Nov 26, 2012, 8:23:15 AM
to scruma...@googlegroups.com

Hi.

Great stuff so far.

A suggestion, since you have been down this path for a while:

Ask your customers.

Ask your stakeholders.

Ask your end users.

Ask the teams including ScrumMasters and product owners.

The answers are there.

Mike Vizdos


Ashish Arvind Pathak

Nov 26, 2012, 9:45:31 AM
to scruma...@googlegroups.com
Hi George,

Do you think that you ARE achieving increases in productivity?

Yes, the teams do feel that they are delivering more (features/defect fixes/enhancements) than they were 5 years ago. The teams are convinced that Agile is working for them. The problem is translating what they "feel" into hard numbers... Sr. Mgmt. just loves numbers.


If you're not yet measuring your performance in some fashion, you can't measure improvement, and therefore can't calculate ROI.

It's not as if we aren't measuring anything. We do have CSI (for each issue) that gives us a feel for what customers think of our products and services. Internally, we track defect burndown vis-a-vis our plans; we know how much time features take to release; we know what % automation we have (very, very little)... I think I am struggling to put it all together and make a compelling story out of it.

In what ways do you see Agile helping you to improve operational efficiency?

My own assessment is that we could use agile principles to optimize our team structures (move from component teams to feature teams). Reduce waste by getting the PO (product management) to interact more effectively with development teams. We are weak on automation and TDD. We are not very good at prioritizing backlogs and making trade-offs... and we then often push delivery dates... we conveniently forget the principle of the time-box. In general, most teams only remember/follow the rules of agile/Scrum but forget the principles on which these rules are based. There are a few of us in the organization who feel that we need to understand the principles, and we are trying to change things. But it is a mindset change for the teams, and hence the frustration. Unfortunately, our Sr. Mgmt. too isn't conversant with the principles :-P

Thanks very much for the replies. I would like to hear more on this from more people here.

Thanks and Regards,

Ashish





--
Ashish Pathak
J-804, Sapphire Park, Park Street,
Aundh Chest Hospital Road,
Wakad, Pune - 411057
Mobile: 9923151369



Mark Levison

Nov 26, 2012, 2:08:22 PM
to scruma...@googlegroups.com
Let me game that metric for you. In the sprints where you measure me, I won't write unit tests, I won't refactor, and ATDD? What's that? You will get what you measure. In addition, there will be lots of new stories in the future to estimate. For more ideas about how badly the use of velocity can go wrong, see http://www.infoq.com/news/2011/11/velocity-highsmith

Cheers
Mark


--
Cheers
Mark Levison
Agile Pain Relief Consulting | Writing
Proud Sponsor of Agile Tour Gatineau Ottawa Nov 28, Toronto 26 and Montreal 24

George Dinwiddie

Nov 26, 2012, 11:12:32 PM
to scruma...@googlegroups.com
Ashish,

On 11/26/12 9:45 AM, Ashish Arvind Pathak wrote:
> Hi George,
>
> *Do you think that you ARE achieving increases in productivity?*
>
> Yes, the teams do feel that they are delivering more (features/defect
> fixes/enhancements) than 5 years back. The teams are convinced that
> Agile is working for them. The problem is in articulating what they
> "feel" into hard numbers....Sr. Mgmt. just loves #s.

Well that's a good start. Many teams don't achieve productivity gains,
for various reasons.

BTW, have you asked senior management about how they think productivity
or ROI can be measured or estimated? That might be a good conversation
to start.

> *If you're not yet measuring your performance in some fashion, you can't
> measure improvement, and therefore can't calculate ROI.*
>
> It's not as if we aren't measuring anything. We do have CSI (for each
> issue) that gives us a feel of what customers think of our products and
> services. Internally, we track defect burndown vis-a-vis our plans; we
> know how much time features take to release, we know how much %
> automation we have (very very less)....I think I am struggling to put it
> all together and make a compelling story out of it.

What's CSI?

The questions on
http://blog.gdinwiddie.com/2009/08/18/diy-projectprocess-evaluation-kit/
might help you come up with some things the execs would like. The post
http://blog.gdinwiddie.com/2009/12/27/tracking-your-investments/ follows
this up with some cautions.

One measure I like (that I got from a client) is number of reported
problems in the first 30 days after a release. This is a wonderful
trailing indicator of quality. Combining this with cycle time might give
you a feel for an increase in productivity that's not at the expense of
quality.
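
As a rough sketch, counting those escaped defects might look something like this (the release dates and defect reports below are made-up data, just to show the 30-day window):

    from datetime import date, timedelta

    # Made-up release dates and defect report dates, for illustration only.
    releases = {"R1": date(2012, 6, 1), "R2": date(2012, 9, 1)}
    defect_reports = [date(2012, 6, 5), date(2012, 6, 20),
                      date(2012, 7, 15), date(2012, 9, 10)]

    def defects_within_30_days(release_date, reports):
        """Count defects reported in the first 30 days after a release."""
        window_end = release_date + timedelta(days=30)
        return sum(1 for reported in reports if release_date <= reported <= window_end)

    for name, released in releases.items():
        print(name, defects_within_30_days(released, defect_reports))
    # R1 2   (the July 15 report falls outside R1's 30-day window)
    # R2 1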

> *In what ways do you see Agile helping you to improve operational
> efficienc**y?*
>
> My ownassessment is that we could use agile principles to optimize or
> team structures (move from component teams to feature teams). Reduce
> waste by using getting PO (product management) interact more effectively
> with development teams. We are weak on automation and TDD. We are not
> very good at prioritizing backlogs and doing trade offs....and we then
> often push delivery dates...we conveniently forget the principle of
> time-box. In general, most teams only remember/follow the rules of
> agile/Scrum but forget the principles on which these rules are
> based.There are a few of us in the organization who feel that we need to
> understand the principles and we are trying to change things. But it is
> a mindset change for the teams and hence the frustration. Unfortunately,
> our Sr. Mgmt. too isn't conversant with the principles :-P

So, you've got a job ahead of you educating them. For senior
management, though, I find that explaining things in lean terms tends to
work better than agile terms. It speaks better to the business needs,
and focuses less on the details of development. For the development
organization, keep talking about the principles. They're the key to
"getting it" in my experience.

Remember that this is a two-way conversation. These people know a lot,
even if they don't know Agile. Expect to learn more than you teach.
Learn what they look for, and try to use that to express your successes.

> Thanks very much for the replies. I wish to hear more on this from more
> people here.

I'm glad to help all I can. Please do keep us informed on your progress,
though.

John Clifford

Nov 27, 2012, 2:10:50 PM
to scruma...@googlegroups.com
I think velocity (story points per sprint) is a good indicator of productivity... with the following provision: I strongly recommend AGAINST re-estimating already-estimated backlog items (stories) at sprint planning.

Why? Because the estimates never decrease in value. This is because the teams don't understand that estimates shouldn't factor in uncertainty... that is handled by velocity (the more uncertainty, the lower the velocity).

If you preclude re-estimating backlog items, you get rid of people trying to game the system by showing velocity increases where there is no productivity increase.

Most people don't really understand estimation, and how to estimate properly in an Agile environment. Here's a link to a free webinar on our company website that explains this: http://www.construx.com/Page.aspx?cid=3625

If you want to put a cost figure on productivity, you can express productivity in terms of how much velocity you get per dollar spent. For example, if your team of 8 costs your organization $50k per two-week sprint, and you are initially achieving a velocity of 20 points per sprint, each point of functionality is roughly $2.5K. If you double your velocity, then each point of functionality costs you $1.25K. Over time, your ability to estimate the cost of an estimated product becomes increasingly precise, and increasingly useful.
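
As a minimal sketch of that arithmetic (the sprint cost, velocities, and backlog size below are just example figures, and the helper name is made up):

    def cost_per_point(sprint_cost_usd, velocity_points):
        """Cost of one point of delivered functionality, per sprint."""
        return sprint_cost_usd / velocity_points

    # Example figures: a team of 8 costing $50k per two-week sprint.
    print(cost_per_point(50_000, 20))   # 2500.0 -> roughly $2.5K per point
    print(cost_per_point(50_000, 40))   # 1250.0 -> $1.25K per point once velocity doubles

    # A rough cost forecast for a backlog estimated at 300 points:
    print(300 * cost_per_point(50_000, 20))   # 750000.0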

For reference, I've worked with several teams and have helped them show senior management real, measurable increases in velocity/productivity... and I'm talking about improvements in terms of an order of magnitude over time. Savings include over a million dollars per year on R & D costs. That is something that impresses even business folks.

Regards,

John Clifford
Construx Software, Inc.

Mark Levison

Nov 27, 2012, 2:25:40 PM
to scruma...@googlegroups.com
Wow. This seems like about the worst use of velocity I can imagine. Do this at your peril. See http://agilepainrelief.com/notesfromatooluser/2010/02/misuse-of-velocity-in-agile-projects.html (caveat my writing) and http://jimhighsmith.com/2011/11/02/velocity-is-killing-agility/

We're trying to quantify something that can't readily be quantified - after all, story points are just a measure of effort. If you're going to measure anything, make it the value delivered to the customer, which isn't related to effort.

Reality check - the person who created Story Points (Ron J) doesn't use them anymore. If memory serves, it's in part because of suggestions like this.

Cheers
Mark


John Clifford

Nov 27, 2012, 4:30:19 PM
to scruma...@googlegroups.com
I think the problem here may be definitional... we may be talking past each other. 

Value is a measure of worth. Like story points, value is in the eye of the beholder; some customers (and some team members/POs) believe certain aspects of functionality are worth more than others. I don't have a problem with measuring value delivered over time, but it is orthogonal to productivity. Maybe value over time is a good metric for effectiveness... and that is certainly something to understand and measure.

Velocity isn't a measure of effort, it's a measure of effort achieved per unit of time (story points per sprint). In that sense it is a good indicator of productivity. Like any metric, there must be some rigor around establishing velocity in order to prevent 'gaming.' Developing and enforcing policies that prevent gaming is the responsibility of the Scrum Master (with assistance from engineering management). Having said all of this, the problems people have with using velocity as a metric stem from misuse of velocity; their solution is to throw the baby out with the bathwater.

Mark, look at your examples and see how many of the things you object to are caused by 'gaming', and by the ineffectiveness of the Scrum Master at preventing that gaming.

Michael Mallete

Nov 27, 2012, 6:15:48 PM
to scruma...@googlegroups.com
I'm with Mark on this. John, I just totally disagree with what you said.

"Our highest priority is to satisfy the customer through early and continuous delivery of valuable software," the first Agile principle states.

Does "effort achieved per unit of time" really matter if your customer isn't getting anything valuable? Is the amount of "effort achieved per unit of time" equal to the amount of customer value?

In the end it's not how hard you work or how much you produce, but how fast you deliver value to your customer. How much did the margins grow? Did we hit ROI yet? And also, does our team still continuously try to improve? Are they learning more? Are they still intact?

salamat,
mike mallete



Dan Rawsthorne

Nov 27, 2012, 6:54:03 PM
to scruma...@googlegroups.com
Exactly. StoryPoints are a measure of size, so they should be a measure of how much stuff the Story added to the increment. Many people think this is hard to do, so they estimate effort, and do it in a way that is not normalized, either. Not so good. It is relatively easy to create velocity metrics that are additive, actually measure size, and are proportional to effort (as long as environmental variables don't change). I'm improving it in the next book on project management, but read chapter 3.7. Then come burn me at the stake :)

After all, StoryPoints, like Function Points and Use Case Points before them, are nothing more than relative measures of Ideal Effort or Intrinsic Difficulty. Not that hard to estimate, not that hard to normalize. Not very useful for Sprint Planning, but very useful for Release Planning. Just sayin'...

Dan Rawsthorne, PhD, PMP, CST
3Back.com
Author of Exploring Scrum: the Fundamentals

Dan Rawsthorne

Nov 27, 2012, 6:56:40 PM
to scruma...@googlegroups.com
It is a good measure of productivity as long as StoryPoints are about the size of the product. A good size metric is functional tests passed, or scenarios released, or even use cases released. It is interesting to think about how to measure the size of the delta between product increments. But, since we know we are incrementing the product, there should be some way to measure it, right? That is productivity... how big is the increment delivered each sprint? If you can put numbers on it, this is velocity.

Dan Rawsthorne, PhD, PMP, CST
3Back.com
Author of Exploring Scrum: the Fundamentals

RonJeffries

Nov 27, 2012, 7:50:12 PM
to scruma...@googlegroups.com
Dan,

On Nov 27, 2012, at 6:56 PM, Dan Rawsthorne <draws...@gmail.com> wrote:

It is a good measure of productivity as long as StoryPoints are about size of the product. A good size metric is functional tests passed, or scenarios releases, or even use cases releases. It is interesting to think about how to measure the size of the delta between product increments. But, since we know we are incrementing the product, there should be some way to measure it, right? That is productivity... how big is the increment delivered each sprint? If you can put numbers on it, this is velocity.

Yes, that is velocity. And it is wrong. It's wrong because:
  • It can be misused, and usually is;
  • It can be gamed, and usually is;

And most of all, because it is working on the short end of the lever. What's important in Agile is steering the project by selecting things to do and things to defer, not working the cost side of the equation and trying to improve it.

A team that is focusing on velocity is not focusing on value. I wish I had never invented velocity, if in fact I did.

Ron Jeffries
www.XProgramming.com
There's no word for accountability in Finnish. 
Accountability is something that is left when responsibility has been subtracted. 
--Pasi Sahlberg

Dan Rawsthorne

Nov 27, 2012, 11:16:42 PM
to scruma...@googlegroups.com
Don't understand your comment, but ok. I guess you're saying that since people often use productivity measures for evil purposes, you shouldn't have productivity metrics, and I think I disagree. :)


Dan Rawsthorne, PhD, PMP, CST
3Back.com
Author of Exploring Scrum: the Fundamentals

RonJeffries

Nov 28, 2012, 6:27:38 AM
to scruma...@googlegroups.com
Well, there is plenty of evidence that they never work, but my present comments are that estimates and velocity are almost invariably misused, and, more important, that they then operate counter to Agile's real strength.

On Nov 27, 2012, at 11:16 PM, Dan Rawsthorne <dan.raw...@drdansplace.com> wrote:

Don't understand your comment, but ok. I guess you're saying that since people often use productivity measures for evil purposes, you shouldn't have productivity metrics, and I think I disagree. :)


Ron Jeffries
I'm not bad, I'm just drawn that way.  -- Jessica Rabbit

Dan Rawsthorne

Nov 28, 2012, 6:59:30 AM
to scruma...@googlegroups.com
I don't know what you mean by work; I've seen them work just fine for decades... This is not really about estimates and velocity, this is about what to measure for productivity. You need something that describes Yesterday's Weather when you are making budgets for future work, right? And I mean big chunks of work, not simply Sprints - you've got to have enough work for the 'Law of Large Numbers' to kick in... just sayin'... If you can't measure yesterday's weather then you can't use it to predict tomorrow's weather in any meaningful way... this isn't a Development Team thing, this is a Project/Product/Portfolio Management (which is outside the Team) thing.


Dan Rawsthorne, PhD, PMP, CST
3Back.com
Author of Exploring Scrum: the Fundamentals

RonJeffries

Nov 28, 2012, 7:13:43 AM
to scruma...@googlegroups.com
Dan ...

On Nov 28, 2012, at 6:59 AM, Dan Rawsthorne <draws...@gmail.com> wrote:

I don't know what you mean by work, I've seen them work just fine for decades... 

That's why projects have all been working just fine for decades, then, I take it?

Ron Jeffries
www.XProgramming.com Before you contradict an old man, my fair friend, you should endeavor to understand him. - George Santayana

John Clifford

Nov 28, 2012, 12:41:30 PM
to scruma...@googlegroups.com
Why is it that people seem to assume that measuring velocity precludes measuring value? These are not mutually exclusive.

Velocity is not a measure of how hard you exert yourself, it's a measure of how effectively you exert yourself. Teams can put forth a considerable amount of effort and yet still not deliver potentially shippable increments of functionality at the end of a sprint. 

Measuring and tracking velocity, when done correctly, is a valuable tool for answering your team-related questions ("are we improving? are we learning?"). This is the metric for the team to consider... are we doing the thing right, and getting better at it?

Value is more properly the domain of the owners of the 'what' on the project, e.g., the Product Owner and project sponsors and stakeholders. This is the metric for the organization to consider... are we doing the right thing, and getting better at it?

Two separate questions, two separate metrics... and we should use the appropriate tool for the job at hand. BTW, I agree with Dan Rawsthorne... story points are an extremely valuable tool for release planning in terms of being able to balance demand against capacity. 

John Clifford

Nov 28, 2012, 12:55:33 PM
to scruma...@googlegroups.com, ronje...@acm.org
Ron, I think a Scrum team should be focused on velocity... on how well they can execute and how they can improve their ability to execute. I think the larger organization should be focused on value, on the 'what'. 

(I'm defining "Scrum team" as in Schwaber's original Scrum book... the people who actually do the work; the Scrum team plus Scrum Master and Product Owner I refer to as 'Delivery Team.')

You say that velocity is wrong because it can be misused, and usually is (and 'gaming' is a form of misuse). I agree with your preconditions (that it is misused), but I disagree with your conclusion (it is wrong). I have worked with too many Scrum teams that have used velocity correctly (no gaming, no misusing) as a tool to evaluate the effectiveness of their development process changes, and have seen substantial improvements and cost savings, to believe that velocity, when used properly, is a Bad Thing.

Re steering the project, IMO this is more properly done based upon information from outside the Scrum team, in alignment with the organization's wider objectives, and based on feedback from sponsors and stakeholders (including but not limited to customers). Certainly the Product Owner plays an important role here, and should be very interested in whether or not the Delivery team is effectively prioritizing/ordering/etc. to deliver the highest value to the customer as early as possible. 

In organizations of any size, we have to count on people focusing first on their primary responsibilities. The Product Owner is ultimately responsible for maximizing ROI and minimizing TTM via the ordering of the Product Backlog, and based upon feedback from all sources. Let the business focus on what is most valuable, let the team focus on what is most effective.

John Miller

Nov 28, 2012, 1:03:09 PM
to scruma...@googlegroups.com, ronje...@acm.org
+1
Any metric can be abused. I believe every metric should come with a Warning Label explaining how and when not to use it : )

Ron, what are the alternatives to Story Points that you prefer, which may provide useful data to help the team self-organize to improve and assist in Release Planning?

Great discussion. Thanks!
John

Ram Srinivasan

Nov 28, 2012, 1:31:53 PM
to scruma...@googlegroups.com, ronje...@acm.org
1. Predictability/due-date performance of a story. Example: in Release 1, a 2-point story (if you wish to use story points) was completed in anywhere from 1 day to 4 days. From this data we can say that we have 80% confidence that a 2-point story can be delivered in 3 days. You can compare Release 1 and Release 2 statistics to see how well your team is doing (a small sketch of this calculation appears below).

2. Cycle time/Lead time calculations

3. How often are you releasing - short, frequent releases increase ROI (http://evolvebeyond.com/release-duration1/). Mike Cohn talks about Quantum Scrum, where people release software every day.

One other metric you can track is escaped defects (it does not address release planning, but it does address the OP's subject - "How do we calculate ROI for following Agile Practices").
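
A small sketch of the percentile calculation behind point 1, using made-up completion times (the nearest-rank percentile here is just one simple way to do it):

    import math

    # Made-up completion times (in days) for 2-point stories in Release 1.
    release_1_two_point_days = [1, 2, 2, 3, 1, 4, 2, 3, 2, 3]

    def nearest_rank_percentile(values, pct):
        """Smallest value such that at least pct percent of samples fall at or below it."""
        ordered = sorted(values)
        index = math.ceil(pct / 100 * len(ordered)) - 1
        return ordered[index]

    # "We have 80% confidence that a 2-point story can be delivered in N days."
    print(nearest_rank_percentile(release_1_two_point_days, 80))   # 3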


Ram

John Miller

Nov 28, 2012, 1:54:37 PM
to scruma...@googlegroups.com
Thanks Ram!

So that I am clear, are you suggesting using Kanban metrics such as CFDs? Once it hits the Sprint? One of the great virtues of the Kanban community is some very data-rich metrics, and it is good to hear of them being applied to Scrum teams.

For lead time, again, once it hits the Sprint or once it is in the backlog? Is this redundant in a Sprint, where your committed lead time for a "batch" of stories is already determined? Or are you looking at a per-story lead time? For example, a typical story takes 3 days to complete within a 2-week Sprint?


Seems like an interesting Balanced Scorecard approach. 

Dan Rawsthorne

Nov 29, 2012, 8:43:17 AM
to scruma...@googlegroups.com
Some have.

Dan Rawsthorne, PhD, PMP, CST
3Back.com
Author of Exploring Scrum: the Fundamentals

Michael James

Nov 29, 2012, 8:54:37 AM
to scruma...@googlegroups.com
I have seen a chart of velocity AND scope increase help Product Owners get a more realistic sense of what they might have by a given release date, generally driving a Product Owner to cut scope.

I have not seen it benefit the development team when used as a target to increase.  I suppose it could be used as a sanity check on a Sprint commitment, but even that seems to cause more harm than good.

--mj
(Michael)

George Dinwiddie

Nov 29, 2012, 10:37:11 AM
to scruma...@googlegroups.com
Dan,

On 11/27/12 11:16 PM, Dan Rawsthorne wrote:
> Don't understand your comment, but ok. I guess you're saying that since
> people often use productivity measures for evil purposes, you shouldn't
> have productivity metrics, and I think I disagree. :)

My experience has been that when people use "productivity measures" with
the best of intentions, they still produce "evil" outcomes. Trying to
measure productivity using velocity, in my experience, makes it pretty
useless for planning. In accord with Goodhart's law
(http://www.atm.damtp.cam.ac.uk/mcintyre/papers/LHCE/goodhart.html), the
desire to increase productivity exerts an insidious pressure against
the reliability of velocity to measure anything.

If you're going to use anything as an analog of productivity, I would
suggest RTFs (Running Tested Features). The side effect will likely be
smaller features, but that's not a bad thing.

RonJeffries

Nov 29, 2012, 1:24:39 PM
to scruma...@googlegroups.com
John,

On Nov 28, 2012, at 12:41 PM, John Clifford <john.c...@construx.com> wrote:

Measuring and tracking velocity, when done correctly, is a valuable tool for answering your team-related questions ("are we improving? are we learning?"). This is the metric for the team to consider... are we doing the thing right, and getting better at it?

The key phrase is "done correctly". If velocity is treated as a metric, then it is a metric that needs to be improved. Inside the team, we don't really need a metric, especially not that one, to know if we are improving. If velocity becomes a public metric, then someone in power can decide that the team needs to improve the number. It is easy to improve the number, and improving the way we do the work is not the easiest way.

I have seen velocity used improperly as a public metric many times. I have never seen it used properly. Of course I've only been doing this for fifteen years and haven't visited everyone yet, so there may be some teams out there who publicly display velocity and do not game it. I would love to meet such a team someday.

Measuring and tracking velocity is very difficult to "do correctly". The political traps, and the psychological traps, are extremely difficult to avoid, to the point where I believe it is counter-productive to recommend velocity as a metric. And since either I invented it, or was standing there when it was invented, I consider that to be an indictment people should take seriously.

There are better ways to know how you're doing, and in the hands of almost any merely human manager, velocity is giving razor blades to babies. It's possible to do better, and people should.

Ron Jeffries
www.XProgramming.com
It's true hard work never killed anybody, but I figure, why take the chance?
-- Ronald Reagan



Ram Srinivasan

Nov 29, 2012, 1:40:13 PM
to scruma...@googlegroups.com
+1 to Ron's answers. Could not help posting this blog post I read quite some time back:

http://www.industriallogic.com/blog/stop-using-story-points/

Funny quote from the above blog
She looked at me funny and said "These days around here if you sneeze, you get a story point."


Ram

John Miller

Nov 29, 2012, 4:25:03 PM
to scruma...@googlegroups.com
I have seen velocity used incorrectly. I have seen it used "correctly". I myself used velocity as a CIO: I understood the purpose and used it as intended.

"Inside the team, we don't really need a metric," -Are you making a case that a team does not need metrics?

"There are better ways to know how you're doing" -Please elaborate on the better ways. 

Most organizations, especially larger ones, are not at the hyper-trust level where they can do without team metrics. As much as one may want to push and stretch the organization to this level of trust, not providing metrics to an organization that requires them may be a sure way to a career-limiting future. In cases like this, what is the lesser evil, or what are the safest metrics?

Thank You,
John 
Sent from my iPhone. It likes to sabotage my grammar. 


Michael Vizdos

Nov 29, 2012, 4:42:13 PM
to scruma...@googlegroups.com
I am going to hop on this one too (smile)...

Better ways: DELIVER. Consistently. Your customers say "WOW." Customers. Revenue-generating end-users.

Here's the thing that large organizations need to realize... focusing on metrics, not trusting, and worrying about career limiting moves is your own mess to own -- with or without agile or scrum.  It is a dysfunction.  It is messed up. 

And other organizations will step up and DELIVER and WOW customers.  And thank you.

My big clients understand this. They are worried about "three guys in a garage." Well... at least the ones that seem to *get it* are.

Good luck.



 
Thank you.

- mike vizdos
   www.michaelvizdos.com/contact

John Miller

Nov 29, 2012, 6:05:36 PM
to scruma...@googlegroups.com
Yes. Delighting customers should be the key metric. 
But... my question is for those organizations that will require a metric: outside of velocity, what are the alternatives?
For those that need to understand basic progress for release planning, if you do not use Velocity, what is the best alternative?


Thank You,
John 
Sent from my iPhone. It likes to sabotage my grammar. 

Michael Vizdos

Nov 29, 2012, 6:36:04 PM
to scruma...@googlegroups.com

I am not sure you are going to get an answer you want to hear here.

Mike

John Miller

Nov 29, 2012, 6:39:59 PM
to scruma...@googlegroups.com
I am not sure either : )

Alan Dayley

Nov 29, 2012, 7:22:41 PM
to scruma...@googlegroups.com
The side effects, good and mostly bad, of any metric are so serious, so pernicious and so risky that the specific context of the need must be carefully considered. You must balance the measurement need against options of other metrics, against options of changes that are not metrics but fill the desired need, against behavior changes that will occur, and against many other possible options and side effects.

Given that your question provides near zero context, any answer we give would probably be unhelpful.

I have seen teams who simply report what the organization wants to see so they can get on with shipping product.  Not that I advocate such a practice.  I also saw only one team, on a visit years ago, that was reporting completely nonsensical numbers in their metrics.  The fact that no one questioned the numbers proved that the metric was not needed.  They continued to report nonsense to avoid getting in trouble for not reporting metrics.

Metrics can be important to discover improvements and make visible things that help make decisions.  And they can cause more damage than the help they are supposed to provide.  Choose carefully!

Having said all that, here is a handy reference to help you think: http://dhondtsayitsagile.blogspot.se/2012/10/agile-metrics.html

Alan

John Miller

Nov 29, 2012, 7:38:18 PM
to scruma...@googlegroups.com
Hi Alan,

I very much enjoyed the blog post you included. The Side Effects and Caution labels are perfect, and every metric should include these right on the report or chart. Thank You!

I am not sure about zero context. I mentioned release planning, which is a pretty common context, especially for organizations that need to release at a specific time, such as seasonal releases or a big marketing event.
Stating that Velocity and Story Points do not work or are not effective also does not give context about where they do not work.

Life is full of compromises, and one of those may be a CIO stating they need a metric, regardless of how un-Agile the request/need is.

Dan Rawsthorne

Nov 29, 2012, 9:41:52 PM
to scruma...@googlegroups.com
Exactly. Not a team thing, IMHO.

Dan Rawsthorne, PhD, PMP, CST
3Back.com
Author of Exploring Scrum: the Fundamentals

Dan Rawsthorne

Nov 29, 2012, 9:43:44 PM
to scruma...@googlegroups.com
That's essentially what I suggested, that StoryPoints measure some form of functional size, not effort. I have used RTFs, running tests, relative size, they all work, but not relative effort.

Dan Rawsthorne, PhD, PMP, CST
3Back.com
Author of Exploring Scrum: the Fundamentals

George Dinwiddie

Nov 29, 2012, 11:27:50 PM
to scruma...@googlegroups.com
John,

On 11/29/12 6:05 PM, John Miller wrote:
> Yes. Delighting customers should be the key metric.
> But...my question is, for those organizations who will require a metric.
> Outside of velocity, what are the alternatives?
> For those that need to understand basic progress for release planning.
> If you do not use Velocity, what is the best alternative?

Have you considered RTFs?

George Dinwiddie

Nov 29, 2012, 11:30:20 PM
to scruma...@googlegroups.com
Dan,

On 11/29/12 9:43 PM, Dan Rawsthorne wrote:
> That's essentially what I suggested, that StoryPoints measure some form
> of functional size, not effort. I have used RTFs, running tests,
> relative size, they all work, but not relative effort.

I'm not sure how to use Story Points to measure value. A count of
features, yes. But not estimates of any sort of size.

John Miller

Nov 30, 2012, 12:27:24 AM
to scruma...@googlegroups.com
I have not but I am researching it now, thanks to your suggestion. 
Actually reading Dan's presentation on Agile metrics from Agile 2009 conference. http://agile2009.agilealliance.org/files/session_pdfs/Rawsthorne_AgileMetrics_v6d.pdf

Do you feel it is a better measure of progress than Velocity?

What I am sensing from the anti-story-point/velocity camp, admittedly those much smarter than I am, is to not have any metrics. I am not sure I am sold, as I feel metrics can be used wisely to give clues. I have been researching all day (as this list always challenges my assumptions) for any studies that show if/how/when metrics hurt/help performance. I am entirely aware of the danger of misused metrics. I was in IT management for 10 years, and saw first hand how the metrics I gathered on a team, with good intentions, produced the wrong outcomes. I took responsibility for it, and removed metrics from my team. My question to the anti-velocity folks is: are all in-process metrics bad? Are there no in-process metrics that you believe can help a team? Am I misunderstanding your position?

Thank You,
John
"Science is organized knowledge. Wisdom is organized life."
—Immanuel Kant

Dan Rawsthorne

Nov 30, 2012, 1:44:57 AM
to scruma...@googlegroups.com
Well, StoryPoints are, by definition, a relative measure of size. I don't know why people decided that size equaled effort. After all, StoryPoints were invented so that Velocity could be used (via yesterday's weather) to make guesses about what would happen in the future - they were intended to produce a productivity metric. If StoryPoints represent actual effort (at some level of abstraction), they can't do this, which people are just now figuring out - it's been clear to some of us for years. Therefore, the size they represent can't represent actual effort, and never could, because then they don't do the main thing they were intended to do.

In other words, StoryPoints must be a relative estimate of the size of the product, not the effort used to produce the product, and then the question is "how do you estimate that?" I always figured it was similar to how you make relative estimates of Function Points or Use Case Points, because a Story was a 'new term' for Requirement, just as 'function' or 'use case' were older terms for Requirement. No biggie... it made perfect sense to me then, and it still does. As Mike Cohn says in the book, there is no standard definition of what StoryPoints mean, other than they represent size, so they are typically estimates where people take all sorts of things into consideration. In other words, we know what StoryPoints are supposed to do, but we're really not sure how to make them do that. ok...

BTW, in my opinion it's not hard to do an estimation that works; a relative measure of inherent functional complexity (which is part of the problem, not the solution) does the trick. I call this 'Intrinsic Difficulty' in Chapter 3.7, and call it 'Ideal Effort' in the new book I'm working on - same idea, though. Of course, it takes a few paragraphs to describe and define it; it's precisely the same philosophical problem that Project Managers have when trying to describe Earned Value, actually... and this is because Earned Value is supposed to be the Ideal Cost of an Activity, rather than the Actual Cost- which is the same philosophical distinction. In any case, it's not something that members of the Development Team in Scrum need concern themselves with - it's purely a management thing. Luckily, this is the main thing that the Scrum community has taken from this discussion - that Velocity is not useful for a Scrum Team, and that StoryPoint estimates are not useful for a Scrum Team. This is good, and is almost as good as realizing the BurnDowns aren't useful either. A maturation of the process... love it.

In other words, Velocity is a Project Management thing, and thus occurs outside the Scrum Team - we all know that Scrum is not a PM framework, but projects done in scrum need to be managed. We also realize that pushing the metrics down inside the team in a coercive way is evil. still good. ok, enough...

Anyway...

If you were trying to have a productivity measure, one way is to simply count the functional tests that define the Stories.
Here's why: Incremental Development is: 1. The test doesn't work, 2. you write some code, 3. now the test works. The passing test represents the size of the functional increment you just produced. Now, how do you count, or measure, or estimate, that size? This is the question upon us if we're trying to do a productivity measure. Simplest thing, count the tests (basically, count RTFs).

The COSMIC Function Point (CFP) standard is a simple extension/refinement of this idea, defining the functional tests as S, M, or L based on how many interactions the functional flow the test represents has with external entities (this is the basic idea, not the actual definition). So, doing StoryPoint estimates that are basically relative measures of CFPs is an effective, and easy, way to get a reasonable productivity metric that can be used for software project management. If you can find any of the talks I've given on Release Planning in the last five years or so, you can see this stuff in there.
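
A minimal sketch of that kind of size-based velocity, with made-up S/M/L thresholds and weights (illustrative only, not the actual COSMIC definitions):

    # Made-up S/M/L weights keyed by how many external interactions the
    # functional flow behind a story's test touches. Thresholds and weights
    # are illustrative only, not the actual COSMIC Function Point rules.
    def size_points(external_interactions):
        if external_interactions <= 2:
            return 1   # Small
        if external_interactions <= 5:
            return 3   # Medium
        return 5       # Large

    # Stories completed this sprint, by number of external interactions (made up).
    completed_stories = [1, 4, 2, 7, 3]

    velocity = sum(size_points(n) for n in completed_stories)
    print(velocity)   # 1 + 3 + 1 + 5 + 3 = 13 size points this sprint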

Maybe you'll like the new book, as soon as I get it out - I don't know when, though...

Anyway, to bed... Dan  ;-)


Dan Rawsthorne, PhD, PMP, CST
3Back.com
Author of Exploring Scrum: the Fundamentals

Dan Rawsthorne

Nov 30, 2012, 1:57:49 AM
to scruma...@googlegroups.com
well, RTFs are simply defining StoryPoint(Story) = 1 for all functional Stories. It's a reasonable, non-granular, estimate to use, as long as each RTF is defined by the same number of Functional Tests (I like real small functional stories, defined by one acceptance test). I like it, but the process geeks that I work with want something with more 'teeth' than that, usually, so I do the CFP thing I talked about in the previous email. The PDF you are reading is watered down quite a bit from others I have given. This was done because some people thought that the more robust CFP stuff was 'too much' for the audience at Agile 2009. The attached PDF is a more robust version of the talk.

Dan Rawsthorne, PhD, PMP, CST
3Back.com
Author of Exploring Scrum: the Fundamentals

FixedPriceContract_90MinVersion_v1d.pdf

George Dinwiddie

Nov 30, 2012, 5:09:44 AM
to scruma...@googlegroups.com
John,

On 11/30/12 12:27 AM, John Miller wrote:
> I have not but I am researching it now, thanks to your suggestion.
> Actually reading Dan's presentation on Agile metrics from Agile 2009
> conference.
> http://agile2009.agilealliance.org/files/session_pdfs/Rawsthorne_AgileMetrics_v6d.pdf
>
> Do you feel it is a better measure of progress than Velocity?

Yes, I think RTFs are a better measure of progress than velocity.
Velocity is about cost, and estimated cost at that. RTFs are about
value. Crudely measured value, but value nonetheless.

In those slides, Dan seems to make some assumptions that I don't make. I
find http://xprogramming.com/articles/jatrtsmetric/ to be a better
description of RTF.

On page 31, Dan says, RTFs "requires our regression test suite to be
divided up by story, so that we know which stories are broken when tests
don’t pass." That may make it easy to calculate the number when a team
is routinely breaking functionality, but it's certainly not necessary.
My assumption is that the team keeps the regression tests passing all
the time (by running them all the time), so you don't need to use the
tests to count them. If you do have a regression, asking yourself some
thoughtful questions will give you a quick idea of how many features
were broken. And, if you do have a regression, the important thing is to
fix the code rather than count the number of broken features.

The purpose of the regression test suite is to PREVENT regressions, not
count them. A secondary purpose is to document the expected behavior.
Organizing the suite for counting regressions works against the more
important goal of documenting the expected behavior.

The user stories are an artifact of development, a means of breaking the
work down into small pieces, each of which provides business value.
After a story is running, it should continue to run. It no longer has an
identity of its own. It's just part of the system. Indeed, a later
story might modify the expectations that were in place for an earlier
story (because it handles a variant flow, perhaps) or even contradict
those expectations (because of a change in the desires). Keeping the
tests organized by story, rather than by system feature, is analogous to
not refactoring your code, but leaving functionality wherever it was
first coded.

Of course, I've seen shops that recommend against refactoring because it
damages their traceability matrix. This is another example of a tool
being used with good intentions but bad outcomes. They end up with
fragile code full of duplication. Having a traceability matrix that
extends to the code creates a system that encourages that, and
discourages good object-oriented practices.

> What I am sensing from the anti-story point/velocity camp, admittedly
> those much smarter than I am, is to not have any metrics. I am not sure
> I am sold, as, I feel metrics can be used wisely to give clues. I have
> been researching all day, as, this list always challenges my
> assumptions, on any studies that show if/how/when metrics hurts/helps
> performance. I am entirely aware of the danger of misused metrics. I was
> in IT management for 10 years, and saw it first hand, how the metrics I
> gathered on a team, with good intentions, produced the wrong outcomes. I
> took responsibility for it, and removed metrics from my team. My
> question is to the anti-velocity folks, , are all in process metrics
> bad? Are there no in process metrics that you believe can help a team?
> Am I misunderstanding your position?

What do you mean by "in process metrics?"

I find velocity useful, when it is useful, for planning purposes. Short
term, it helps a team plan how much work might fit into a sprint. Longer
term, it helps a PO plan how much might be done by a given date. In both
cases, the number itself is not so important as the shape of the
burndown, burnup, or continuous flow diagram chart. You can get many
clues from the shape of these charts, but not so much from the numbers.
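
A small sketch of that longer-term planning use, with made-up velocities and backlog size:

    import math

    recent_velocities = [18, 22, 19, 21]       # points per sprint, last few sprints (made up)
    remaining_backlog_points = 160

    average_velocity = sum(recent_velocities) / len(recent_velocities)   # 20.0
    sprints_needed = math.ceil(remaining_backlog_points / average_velocity)
    print(sprints_needed)   # 8 sprints, i.e. roughly 16 weeks with two-week sprints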

When I was writing
http://idiacomputing.com/pub/BetterSoftware-BurnCharts.pdf for Better
Software about the clues in the shape of burn charts, I quickly realized
that none of the shapes actually told you what was happening. They gave
you clues, yes, but required you to go see and ask questions to
understand what was happening.

A team can help themselves by looking at these charts and asking
themselves thoughtful questions. A team can also help themselves by
watching the flow of stories on a sprint kanban board and asking
themselves thoughtful questions. There are many ways a team can help
themselves, and they all seem to include asking themselves thoughtful
questions. Collecting metrics is only one of many ways to stimulate
those questions. Unfortunately, when looking at the numbers rather than
the shape of a plot over time, the questions that are stimulated seem to
be generally less helpful, and sometimes pernicious.

George Dinwiddie

Nov 30, 2012, 5:22:33 AM
to scruma...@googlegroups.com
Dan,

On 11/30/12 1:44 AM, Dan Rawsthorne wrote:
> Well, StoryPoints are, by definition, a relative measure of size. I
> don't know why people decided that size equaled effort. After all,
> StoryPoints were invented so that Velocity could be used (via
> yesterday's weather) to make guesses about what would happen in the
> future - they were intended to produce a productivity metric. If
> StoryPoints represent actual effort (at some level of abstraction), they
> can't do this, which people are just now figuring out - it's been clear
> to some of us for years. Therefore, the size they represent can't
> represent actual effort, and never could, because then they don't do the
> main thing they were intended to do.

Story points are estimates given by the people doing the work. Not by
the business. Not by the managers. Therefore they are related to the
amount of work required, and not very related to the amount of value
gained by that work. What "size" do you mean if it's not related to the
time and effort required to implement the story?

- George

Dan Rawsthorne

Nov 30, 2012, 7:39:17 AM
to scruma...@googlegroups.com
Ah, George, welcome to the question of the ages. What is size if it's not related to the effort?" Well, size isn't very useful if it doesn't relate to effort in some way, but that does not mean that it an estimation of effort. Let's say the RTFs are our measure of size, for instance, that velocity is simply a count of RTFs completed. Then this is clearly related to effort, the more RTFs, the more effort. So far, so good. Are they related to Product? that's a different question, but the one the Project Manager needs to know. Are the numbrer of RTFs going to change if the Product doesn't change? If RTFs represent passing tests, then the answer is "no". But then the effort is variable, and that's cool. The team then "owns" the effort it takes, while the Customer, who produced the RTF (developed the story) owns the size. Like I said before, this is complicated, and is one of the major things that messes PMs up. Not the Team's problem, though. All they have to do is worry about effort - will the RTF fit in the Sprint or not?

The PM, on the other hand, needs to know how fast the RTFs are (and, hopefully, will be) produced.

Which leads to the notion of the size of the Story. The Story is a work unit: it produces product when done, and consumes effort. It, itself, is neither. The Story "Paint this room" is neither a painted room nor 8 hours and a can of paint. One is benefit, the other is cost. The size of a work unit is an estimate of the amount of product received, and the cost of the work unit is obvious. Effort is a cost; Size is size. Then the issue becomes how to measure the size so that the PM gets meaningful metrics that help in managing the product. The original definitions of StoryPoints, and there are several, were an attempt to produce a Size that is useful for Velocity. In fact, Mike Cohn seems to define them as 'something that measures size that will be useful for velocity', or something like that (I don't have the book in front of me...).

Now we are finally realizing, as a group, that effort is not it. So, what is? RTFs? Functional Complexity? What? And who estimates it? Well, we know that the Team 'owns' its effort, so if it's effort, the Team must do the estimation. We also know that the Team has the best idea about how it will be implemented, so if it's Functional Complexity we probably want the Team to estimate it. If it's simply a count of the number of Stories (RTFs), then whoever is creating the RTFs may as well do the counting, and so on.

There is something about passing tests in a regression test suite here. Ok, here's the idea. Whatever the size of a Story is, if you're measuring the size of the Increment, it's a sum of the Sizes of the Stories that are completed. And a story becomes 'unDone' if its tests start to fail. So, breaking tests decreases the size of the increment, and causes negative velocity. Not a good thing... just sayin'...
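
A minimal sketch of that bookkeeping, with invented story names and sizes (an illustration, not something from the original message): the Increment's size is the sum of the sizes of Done stories, a story stops counting as soon as its tests fail, and velocity is just the change in Increment size, which can indeed go negative.

    from dataclasses import dataclass

    @dataclass
    class Story:
        name: str
        size: int          # relative size (e.g., a count of RTFs or story points)
        tests_pass: bool   # a story counts as "Done" only while its tests still pass

    def increment_size(stories):
        # The size of the Increment is the sum of the sizes of Done stories.
        return sum(s.size for s in stories if s.tests_pass)

    def sprint_velocity(size_before, size_after):
        # Velocity as the change in Increment size; if previously Done stories
        # become unDone (their tests break), this can go negative.
        return size_after - size_before

    # Hypothetical example: two stories were Done last sprint (total size 8);
    # this sprint one story's tests broke and a size-3 story was finished.
    before = increment_size([Story("login", 5, True), Story("search", 3, True)])
    after = increment_size([Story("login", 5, False), Story("search", 3, True),
                            Story("export", 3, True)])
    print(sprint_velocity(before, after))   # -> -2, a negative velocity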

Enough, I've got to get packing. Back home today!  Dan  ;-)

Dan Rawsthorne, PhD, PMP, CST
3Back.com

George Dinwiddie

unread,
Nov 30, 2012, 9:03:15 AM11/30/12
to scruma...@googlegroups.com
Dan,

I've only got a moment, here, but wanted to mention a later thought I
had. Story points are not an aspect of the resulting system. We can look
at the code that's written and size it in lines of code or function
points, but not in story points. So it's not that sort of size, either.

Josh Kerievsky called story points NUTs, for Nebulous Units of Time.

Mike Cohn did fairly well at codifying the use of user stories and story
points, but in doing so, tried to nail them down into something more
specific. I remember the discussions on his book list when he was
circulating drafts for review, and there was much disagreement. His book
represents the way he sees story points, but is not, to my mind, a
definitive definition. In fact, I think that story points lose their
point when you try to pin them down to something specific.

- George

RonJeffries

unread,
Nov 30, 2012, 9:08:37 AM11/30/12
to scruma...@googlegroups.com
Dan,

On Nov 29, 2012, at 8:43 AM, Dan Rawsthorne <dan.raw...@drdansplace.com> wrote:

Some have.

Name two. :)

Ron Jeffries
I'm not bad, I'm just drawn that way.  -- Jessica Rabbit

RonJeffries

unread,
Nov 30, 2012, 9:10:42 AM11/30/12
to scruma...@googlegroups.com
Hi John,

On Nov 29, 2012, at 4:25 PM, John Miller <agiles...@gmail.com> wrote:

I have seen velocity used incorrectly. I have seen it used "correctly". I myself used Velocity as a CIO; I understood the purpose and used it as it was intended.

What is the purpose of velocity? How was it intended to be used? And how do you know? Remember: I was there when it was invented and possibly actually invented it. So this is a trick question.
Wisdom begins when we understand the difference between "that makes no sense" and "I don't understand". -- Mary Doria Russell

RonJeffries

unread,
Nov 30, 2012, 9:12:47 AM11/30/12
to scruma...@googlegroups.com
John,

On Nov 29, 2012, at 4:25 PM, John Miller <agiles...@gmail.com> wrote:

Most organizations, especially larger ones, are not at the level of trust where they can do without team metrics. As much as one may want to push and stretch the organization to this level of trust, not providing metrics for an organization that requires them may be a sure way to a career-limiting future. In cases like this, what is the lesser evil, or the safest metric?

Running, tested features.

Ron Jeffries
www.XProgramming.com
There's no word for accountability in Finnish. 
Accountability is something that is left when responsibility has been subtracted. 
--Pasi Sahlberg

RonJeffries

unread,
Nov 30, 2012, 9:15:25 AM11/30/12
to scruma...@googlegroups.com
On Nov 29, 2012, at 6:05 PM, John Miller <agiles...@gmail.com> wrote:

But...my question is, for those organizations who will require a metric. Outside of velocity, what are the alternatives?

Best alternative: educate them.

Next best: figure out what they really want to know and tell them that. In aid of that: what does your organization, which I guess must require a metric, really want to know?

For those that need to understand basic progress for release planning: if you do not use Velocity, what is the best alternative?

Sort by value. Start. Notice how long things take. Make decisions accordingly. Notice what gets in the way. Remove those obstacles.

Do the work.
Sometimes you just have to stop holding on with both hands, both feet, and your tail, to get someplace better. 
Of course you might plummet to the earth and die, but probably not: you were made for this.

RonJeffries

unread,
Nov 30, 2012, 9:16:19 AM11/30/12
to scruma...@googlegroups.com
John,

On Nov 29, 2012, at 6:39 PM, John Miller <agiles...@gmail.com> wrote:

I am not sure either : )

You might not [get the answer you want to hear]. "But if you try sometimes, you might find, you get what you need."

RonJeffries

unread,
Nov 30, 2012, 9:18:56 AM11/30/12
to scruma...@googlegroups.com
Hi John,

On Nov 29, 2012, at 7:38 PM, John Miller <agiles...@gmail.com> wrote:

I am not sure about 0 context. I mentioned release planning , which is a pretty common context, especially for organizations where they need to release at a specific time, such as seasonal releases or a big marketing event. 

If you already know the release date, what do you suppose Release Planning would do for you?

Ron Jeffries
I'm really pissed off by what people are passing off as "agile" these days.
You may have a red car, but that does not make it a Ferrari.
  -- Steve Hayes

John Miller

unread,
Nov 30, 2012, 10:50:27 AM11/30/12
to scruma...@googlegroups.com
Indeed : )

Dan Rawsthorne

unread,
Dec 1, 2012, 12:18:26 AM12/1/12
to scruma...@googlegroups.com
determine scope?
Dan Rawsthorne, PhD, PMP, CST
3Back.com

RonJeffries

unread,
Dec 1, 2012, 12:48:04 AM12/1/12
to scruma...@googlegroups.com
Dan,
On Dec 1, 2012, at 12:18 AM, Dan Rawsthorne <dan.raw...@drdansplace.com> wrote:

determine scope?

Not really. Why bother?

Dan Rawsthorne

unread,
Dec 1, 2012, 12:53:25 AM12/1/12
to scruma...@googlegroups.com
Man, you really have NO Project Manager in you, do you?  LOL!

Dan Rawsthorne, PhD, PMP, CST
3Back.com
Author of Exploring Scrum: the Fundamentals

RonJeffries

unread,
Dec 1, 2012, 12:54:54 AM12/1/12
to scruma...@googlegroups.com
Dan,

On Dec 1, 2012, at 12:53 AM, Dan Rawsthorne <draws...@gmail.com> wrote:

Man, you really have NO Project Manager in you, do you?  LOL!

I've been doing software for over a half century and I have never once seen a release plan that wasn't bullshit.

Dan Rawsthorne

unread,
Dec 1, 2012, 1:12:08 AM12/1/12
to scruma...@googlegroups.com
that's too bad. I haven't seen too many that were valid in all their details, but I've seen a lot that were basically sound: these use cases, this date, this team, this cost... and that's what I define as a release plan, basically. The problem is, of course, that you need people who actually believe in yesterday's weather... and I've seen a lot of them in the military.

Dan Rawsthorne, PhD, PMP, CST
3Back.com
Author of Exploring Scrum: the Fundamentals

RonJeffries

unread,
Dec 1, 2012, 1:17:47 AM12/1/12
to scruma...@googlegroups.com
Dan,

On Dec 1, 2012, at 1:12 AM, Dan Rawsthorne <draws...@gmail.com> wrote:

that's too bad. I haven't seen too many that were valid in all their details, but I've seen a lot that were basically sound: these use cases, this date, this team, this cost... and that's what I define as a release plan, basically. The problem is, of course, that you need people who actually believe in yesterday's weather... and I've seen a lot of them in the military.

That's probably why they don't give the people on the ground much autonomy, because they know that plans always go as planned.

No, wait, they do give them autonomy, and they do know that plans never go as planned. So they plan to learn and consider options. Unfortunately, the IT profession plans in order to believe in the plan and insist on it. I'd not let them run a war.

Dan Rawsthorne

unread,
Dec 1, 2012, 12:11:40 PM12/1/12
to scruma...@googlegroups.com
Neither would I. But I'd let a General run an IT shop.

Dan Rawsthorne, PhD, PMP, CST
3Back.com
Author of Exploring Scrum: the Fundamentals

Julien

unread,
Dec 1, 2012, 6:38:33 PM12/1/12
to scruma...@googlegroups.com, draws...@gmail.com

Hi all,

Very interesting discussion. I would even say a classic.
For me, the big problem is when organizations try to measure the productivity of software development teams based on the software throughput of the team.
There are many ways of doing it indeed. They all can be useful in some situations. But not for evaluating team performance.
For me, team throughput or developer productivity is just an intermediate variable, and one that will never be measured well. It is like trying to measure the performance of a writer by how many well-written pages he writes per day. The correlation with the potential success of the book is still very weak.

Every piece of software is used for a reason and brings value to the end user for a reason. Trying to understand the value generated for the customer, and how to improve it, is much more meaningful, as it more directly affects your company's performance.

My conclusion is: Measuring software development productivity (based on throughput) is most of the time a waste of time. Some exceptions exist. Example: you are an outsourcing company (and your customer knows very well what he wants (sounds scary)) and the customer wants to make sure you can deliver the most functionality per $. In this case, team productivity is a clear competitive advantage. It might be worth measuring.

Using velocity as a productivity measure is a waste of time too, whether it is in story points or function points or whatever. And it is EXTREMELY frustrating when we try to "sell" Agile to the business. It would be so great to show before-Agile and after-Agile velocity numbers. The fact is that it is often just not the best way to sell it, in my opinion.

Some Agilist friends told me they used velocity at the team level and that it helped keep the team motivated to use Agile practices as they saw their own velocity increase. No management involved in this, of course. For me, it could only mean that they improved their Definition of Done. Why not show this instead?

I think that using velocity in story points is fine for long-term planning.

At least, that is how I use it.

I also think it is of limited value. If you are really potentially shippable every sprint, velocity is meaningful and usually you do not wait that much before you release.
The more we feel we need velocity, the longer our releases are. And often it is because our Definition of Done is very limited.
If you need velocity to plan for more than, say, 3-4 months' time, it usually means that you cannot release within 3-4 months and that your ability to release your product as desired is limited.
In this case, your velocity might well be a meaningless number because your DoD is so limited, and therefore you need hardening sprints... In this case: extend your DoD! It is very tough work in many cases, but much more meaningful and rewarding than debating velocity numbers which cannot be used well.

For me, the main value of story points is that they go well with release planning. I have found story points useful for beginners to Agile, as they forced them to break their previous estimation habits, which were not team-based and which were often held as a commitment against them.

If a team works well together, can estimate and commit together, then any unit is fine for me.

So I am not against velocity, nor story points either. Their usefulness is just much more limited than we think.

To come back to the original question: How do we calculate the ROI for following Agile practices?

I have a question : Why are you using these practices in the first place?

If you are using them to improve productivity, just measure whatever productivity you were measuring before. One of the biggest problems I have seen is that people tell me they want to use Agile methods to improve productivity, but they never measured it before…

Any leading indicators (before release) of the effects of Agile practices are tricky. Probably the best you could use is a CFD (cumulative flow diagram) to visualize your queues (read the Reinertsen book, it is quite good) and expected cycle times.
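
As a purely illustrative sketch (the story log and dates below are made up, and this is not from Julien's message), a CFD column and per-story cycle times can be computed from nothing more than the dates stories entered and left "in progress":

    from datetime import date

    # Hypothetical story log: (story, date work started, date Done).
    # None means the story has not reached that state yet.
    log = [
        ("story-1", date(2012, 11, 5), date(2012, 11, 9)),
        ("story-2", date(2012, 11, 6), date(2012, 11, 14)),
        ("story-3", date(2012, 11, 12), None),
    ]

    def cfd_counts(log, day):
        # One column of a cumulative flow diagram: how many stories have
        # entered each state by the given day. Growing queues show up as the
        # gap between the "started" and "done" bands widening over time.
        started = sum(1 for _, s, _ in log if s and s <= day)
        done = sum(1 for _, _, d in log if d and d <= day)
        return {"started": started, "done": done, "in_progress": started - done}

    def cycle_times(log):
        # Cycle time per finished story, in days, from start to Done.
        return [(name, (d - s).days) for name, s, d in log if s and d]

    print(cfd_counts(log, date(2012, 11, 13)))  # {'started': 3, 'done': 1, 'in_progress': 2}
    print(cycle_times(log))                     # [('story-1', 4), ('story-2', 8)]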

Some good lagging indicators are: Time-to-market, ROI, customer satisfaction. Normally you should improve on those.

If you are NOT willing to wait to see the lagging indicators, then you probably need to send whoever would like to see the benefit of Agile practices through leading indicators to an at-least-two-week bootcamp training on Agile and Lean product development for managers (Reinertsen’s book on product development flow is useful for this). After that they might understand that your evidence of improvement on a few leading indicators (queue reductions, cycle-time reductions, co-location and face-to-face communication with the customer, faster feedback, adaptations after sprint review feedback, happier teams, etc.) will really make a difference. Leading indicators will not convince people who do not have the right mental models of what product development is. And unfortunately for us, the vast majority of our industry still does not. I think this is what it all boils down to. There is no easy answer to this one.


Julien.

John Miller

unread,
Dec 1, 2012, 7:08:57 PM12/1/12
to scruma...@googlegroups.com
In my case, to manage stakeholders', my boss's, and the board's expectations. Especially when they are hell-bent on asking "When?" and "When do I get what I want?". Velocity was a helpful guide and much better than traditional methods. Again, it helped; it did not guarantee anything.


It was useful for the team to help guide how much to commit to in a Sprint.

I await some Socratic follow up : )

Thank You,
John 
Sent from my iPhone. It likes to sabotage my grammar. 

John Clifford

unread,
Dec 4, 2012, 12:00:56 AM12/4/12
to scruma...@googlegroups.com, ronje...@acm.org
I've seen a lot of release plans that relied on hope as the primary project management strategy, i.e., BS. I've seen a very few that relied on some underlying data to give a sense of what could be expected based upon the Cone of Uncertainty. I've been very successful on software projects using traditional, task-based project management techniques when the problem domain was well understood, I owned the scope and the resources, and I could set the date after some upfront planning and work breakdown. The reason I went over to Scrum was because these techniques did not work in a startup environment when running product feature teams, because the product managers had no clue as to what we should build, kept changing their minds, refused to prioritize, etc. Scrum was initially a grasp at sanity, an excuse for the business people to leave my teams alone for a month so we could get something done without being randomized... so their poor planning no longer constituted our crisis. A funny thing happened on the way to sanity, though... I found that this stuff actually worked! In problem domains with high uncertainty! And, not only that, it helped the teams identify and fix their process problems so they could get more work done with the same effort... because they weren't doing stupid things that didn't contribute to getting the product out the door. Yes, we used velocity, or lack of it, as the impetus to have some painful discussions.

"Responding to change over following a plan." That doesn't mean that we shouldn't plan. "Over" is not the same as "instead of." The idea of a release plan is to give the people paying for the project an idea of what is realistic. After all, don't the people paying for a project have the right to some indication as to what they're going to get for their money and how long it's going to take? The reason we track velocity is so we can update the release schedule (# sprints to completion, or what scope will be completed by a desired date) at the end of every sprint, getting a more precise estimate of what we're going to get or how long it's going to take as we go along. If we make changes to the scope, we can see the effect of those changes on the schedule/budget immediately, so that we can get stakeholders to either buy in, or disavow those changes.

Ron, you sound very cynical. I tell the people I train and coach that skepticism, a healthy doubt, is good, but cynicism, a refusal to consider, isn't. I can understand skepticism and even cynicism... but if we believe this will never work, then we will be right. Instead, we have to believe that a reasonable plan based on solid data has a reasonable chance, and that we must give up on hope as a valid project management strategy.

The folks I've trained and coached to successful projects via Scrum are no longer cynical, and increasingly no longer skeptical. Seeing is believing. This stuff works, and it is simple... but not easy.

Mark Levison

unread,
Dec 4, 2012, 1:28:03 AM12/4/12
to scruma...@googlegroups.com
John -

On Mon, Dec 3, 2012 at 9:00 PM, John Clifford <john.c...@construx.com> wrote:
"Responding to change over following a plan." That doesn't mean that we shouldn't plan. "Over" is not the same as "instead of."

Food for thought - of all of us, Ron is better qualified than most to understand what "over" means in that context - after all, he was there when the manifesto got written.
 

Ron, you sound very cynical. I tell the people I train and coach that skepticism, a healthy doubt, is good, but cynicism, a refusal to consider, isn't. I can understand skepticism and even cynicism... but if we believe this will never work, then we will be right. Instead, we have to believe that a reasonable plan based on solid data has a reasonable chance, and that we must give up on hope as a valid project management strategy.

I've been arguing with Ron for ~7 yrs now - in that time I've learned that, while grumpy, cynical, etc., he's often right. When Ron pushes me hard, I tend to question my assumptions.

Maybe you will one day find a way to measure ROI; I spent a few years looking and gave up. Maybe you will help me see what I missed.

Off to battle with Eclipse
Mark Levison
Agile Pain Relief Consulting | Writing
Proud Sponsor of Agile Tour Gatineau Ottawa Nov 28, Toronto 26 and Montreal 24

RonJeffries

unread,
Dec 4, 2012, 6:11:09 AM12/4/12
to scruma...@googlegroups.com
Hi, John,

On Dec 4, 2012, at 12:00 AM, John Clifford <john.c...@construx.com> wrote:

"Responding to change over following a plan." That doesn't mean that we shouldn't plan. "Over" is not the same as "instead of." The idea of a release plan is to give the people paying for the project an idea of what is realistic. After all, don't the people paying for a project have the right to some indication as to what they're going to get for their money and how long it's going to take? The reason we track velocity is so we can update the release schedule (# sprints to completion, or what scope will be completed by a desired date) at the end of every sprint, getting a more precise estimate of what we're going to get or how long it's going to take as we go along. If we make changes to the scope, we can see the effect of those changes on the schedule/budget immediately, so that we can get stakeholders to either buy in, or disavow those changes.

Yes. Remember that I happened to be standing there when we wrote that. I'm even in the front-page picture. :)

As your story points out, these things work ... but only when used as we intended in the manifesto.


Ron, you sound very cynical. I tell the people I train and coach that skepticism, a healthy doubt, is good, but cynicism, a refusal to consider, isn't. I can understand skepticism and even cynicism... but if we believe this will never work, then we will be right. Instead, we have to believe that a reasonable plan based on solid data has a reasonable chance, and that we must give up on hope as a valid project management strategy.

Only very experienced, John, only very experienced. It is common to speak, on this list and elsewhere, as if "estimates" are a good thing. We forget that most of the people here do not come from long and successful Scrum and Agile experience, as you and I do. As you yourself said:
"I've seen a lot of release plans that relied on hope as the primary project management strategy, i.e., BS. I've seen a very few that relied on some underlying data to give a sense of what could be expected based upon the Cone of Uncertainty. I've been very successful on software projects using traditional, task-based project management techniques when the problem domain was well understood, I owned the scope and the resources, and I could set the date after some upfront planning and work breakdown. "

The people who come here and ask about estimates are not speaking from that kind of successful experience. Frankly, if they were, they wouldn't be asking: they would already know. And estimates are not central to success with Scrum. Yes, the idea does occur in the Scrum Core, and yes, in my opinion, it should be surrounded with barbed wire, caveats, and possibly a moat.

The reason is that people beginning with Scrum generally come to it from a more conventional project management outlook, where you just figure out everything that has to be done, estimate it all, add it up, and voila! you have a project plan that you just have to execute. As your experience discovered, that trick only works in quite rare circumstances. As you put it, where you "owned the scope and the resources".  In the case of most of our listeners here, that is emphatically not the case.

Instead, we find that most of the questions here are about how to improve estimates, so that they will be accurate. When we drill in, we see teams who are projecting when they'll be done and then pushing the teams to complete everything. We see them measuring teams on accuracy. We see product owners who are just doling out the backlog items with little or no control over anything, just "following the plan". 

As you discovered and report, Scrum works best when the Product Owner uses the backlog creatively to deliver the best possible product. I have found that focus on estimates is not helpful to getting there.

The folks I've trained and coached to successful projects via Scrum are no longer cynical, and increasingly no longer skeptical. Seeing is believing. This stuff works, and it is simple... but not easy.

Yes, well, as coach on the first XP project, and one of the authors of the manifesto, I am in no way cynical about this stuff working. However, I do not find that passing through "estimation" is conducive to learning what works, and thus I continue to recommend alternatives.

I could be wrong, however it is not from cynicism that I say what I do, but from a half century in software and fifteen+ years at the core of Agile. Estimation, other than inside the team, is a very questionable practice, and needs to be handled with extreme care.

Dan Rawsthorne

unread,
Dec 4, 2012, 11:48:43 AM12/4/12
to scruma...@googlegroups.com
Yes, Ron, and that is precisely what my next book will be about... how to do it in a scrummish way (I'm afraid to say right way, since people tend to think there is only one right way)...  Dan  ;-)

Dan Rawsthorne, PhD, PMP, CST
3Back.com
Author of Exploring Scrum: the Fundamentals

John Clifford

unread,
Dec 4, 2012, 1:00:27 PM12/4/12
to scruma...@googlegroups.com, ronje...@acm.org
 
The people who come here and ask about estimates are not speaking from that kind of successful experience. Frankly, if they were, they wouldn't be asking: they would already know. And estimates are not central to success with Scrum. Yes, the idea does occur in the Scrum Core, and yes, in my opinion, it should be surrounded with barbed wire, caveats, and possibly a moat.

The reason is that people beginning with Scrum generally come to it from a more conventional project management outlook, where you just figure out everything that has to be done, estimate it all, add it up, and voila! you have a project plan that you just have to execute. As your experience discovered, that trick only works in quite rare circumstances. As you put it, where you "owned the scope and the resources".  In the case of most of our listeners here, that is emphatically not the case.

Instead, we find that most of the questions here are about how to improve estimates, so that they will be accurate. When we drill in, we see teams who are projecting when they'll be done and then pushing the teams to complete everything. We see them measuring teams on accuracy. We see product owners who are just doling out the backlog items with little or no control over anything, just "following the plan". 

I believe, based upon some experience and demonstrated success, that it IS possible to have an ACCURATE release plan, based upon story point estimates and velocity estimates, that takes into account the Cone of Uncertainty. I believe, again based upon some experience and demonstrated success, that this can be continued despite uncertainty in terms of scope and resources, within the constraints of that uncertainty.

I also agree with you, again based upon some bitter experience, that too many people don't understand what an accurate estimate is, don't understand the difference between accuracy and precision, have no idea about the Cone of Uncertainty, and really don't understand the difference between a goal and a commitment. I believe that is because too many organizations, and managers, would rather have precisely wrong estimates than approximately right estimates, and they have no idea of how to respond to change, in terms of planning. Even worse, they don't know what they don't know, and are unwilling to learn. I also believe, based on some experience, that because of this lack of understanding of what an estimate is, and isn't, and why we estimate, most of the effort we expend on estimation is wasted effort, and results from trying to be more precise than our current level of knowledge will allow. The illusion of certainty is one of the more seductive cobblestones on the road to project hell.

And, I also believe that the chief impediment to more successful projects in most organizations is not located at the team level. It's easy to fix teams. The hard part is fixing the underlying faulty assumptions and dysfunctional behavior higher up in the organization... the refusal to prioritize, the unwillingness to face reality, the avoidance of responsibility and attempting to hold the engineering staff accountable for inadequate business/product planning and decision-making. It's this type of thinking that wants to use velocity as a club. This last paragraph is really outside the domain of Scrum, as a project management process wrapper. Solving it is more of an HR issue... :-)

Yves Hanoulle

unread,
Dec 4, 2012, 1:06:40 PM12/4/12
to scruma...@googlegroups.com
Are you aware that the cone is something non-scientific, that was made up?
See Laurent Bossavit's book, The Leprechauns of Software Engineering.

Scrambled by my Yphone

John Clifford

unread,
Dec 4, 2012, 3:40:48 PM12/4/12
to scruma...@googlegroups.com
The Cone of Uncertainty is a concept, not a physical entity. And, that it is a cone, and not a cloud or amorphous blob, is not an essential property. However, the concept of acknowledging uncertainty and its impact on our projects, and then reducing it via conscious effort is valid.

Yves Hanoulle

unread,
Dec 4, 2012, 4:13:18 PM12/4/12
to scruma...@googlegroups.com

2012/12/4 John Clifford <john.c...@construx.com>

The Cone of Uncertainty is a concept, not a physical entity. And, that it is a cone, and not a cloud or amorphous blob, is not an essential property. However, the concept of acknowledging uncertainty and its impact on our projects, and then reducing it via conscious effort is valid.
No, it's not. There is scientific evidence that the uncertainty is not reduced.
Please read Laurent's book to know more.

 

Michael James

unread,
Dec 4, 2012, 7:25:59 PM12/4/12
to scruma...@googlegroups.com
Has this turned into a discussion over whether forecasting beyond one Sprint has any value at all?  Forecasting beyond one Sprint can have some value as long as we don't take it too seriously.  We must explain to anyone looking at these forecasts that the team's commitment is Sprint to Sprint and the Product Owner is expected to discover new things he wants more than his previous plans.  If they don't understand that, I'd be reluctant to show them the forecasts.

Despite what various models assert, Product Owners do have a better idea of what features might be done two Sprints from now than twenty Sprints from now.

--mj

John Clifford

unread,
Dec 5, 2012, 12:13:35 AM12/5/12
to scruma...@googlegroups.com
Yves, I went ahead and read Laurent's book, specifically the chapter on the Cone of Uncertainty. I agree with his assertion that to assume the Cone is a strict mathematical model is to assume falsely. Again, let me have you re-read my posts in this thread. The concept of the Cone of Uncertainty is valid; Laurent says the Cone means no more than 'the future is uncertain...' but I will add to his summary interpretation (in agreement with what he wrote earlier in the chapter): the future IS uncertain, but the further out it is, the more uncertain it is. That is the concept of the Cone of Uncertainty. Assigning mathematical precision to uncertainty at specific points in the development lifecycle is, in my and Construx's view, illustrative rather than declarative (to give people ideas, not set these hard points in stone).

What does that mean in terms of release planning? It means that, while you may have some idea (an "estimate," if you will) of what you can get done by a specific date, you don't really know; however, the closer you get to the date, the better an idea you will have of what you will get done. That's because you know what you already HAVE gotten done, you know what's left, and you should have a better idea of about how much you can get done based upon your track record. Yes, there is still uncertainty, up until the last day of the last sprint. It's okay. We don't have to know everything to an absolute certainty to know enough to make valid decisions.
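
One crude, purely illustrative way to express that narrowing in code: bound the forecast with the slowest and fastest sprints observed so far, and watch the band tighten as history accumulates and the remaining scope shrinks. A real model might use proper statistics; the sketch below only shows the shape of the idea, with invented numbers.

    import math

    def completion_range(remaining_points, observed_velocities):
        # Optimistic bound uses the fastest sprint seen so far, pessimistic
        # the slowest. As scope burns down, the two forecasts converge.
        optimistic = math.ceil(remaining_points / max(observed_velocities))
        pessimistic = math.ceil(remaining_points / min(observed_velocities))
        return optimistic, pessimistic

    # Early in the release: a wide range.
    print(completion_range(120, [14, 26]))            # -> (5, 9) sprints
    # Later, with more history and less remaining scope: a narrower range.
    print(completion_range(40, [14, 26, 20, 21]))     # -> (2, 3) sprints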

What does this mean in terms of commitment (promises made, by the team to itself, and to others)? I teach that it is foolish to promise something if you don't know you can deliver it. I teach that the sprint commitment is something made by the team, to the team, for the team, as a means of establishing a goal for the sprint. The sprint commitment is not a promise to the organization that something WILL be done by the end of the sprint, it is a hypothesis; the team believes it can get these items done in the duration of a sprint (every sprint is an experiment, an example of PDCA in terms of the commitment, what happens during the experiment/sprint, and how we explore the results of that experiment at the retrospective before we form our next hypothesis). And it assuredly should not be a promise to an external customer. The Kanban principle of divorcing cadence from deployment is key to making promises to customers we can keep, because we don't promise something that isn't done. This is key: product owners should not represent the team's sprint commitment as a promise for a specific delivery at a specific time to the organization. Making promises you (the Product Owner) can't personally deliver on, because you don't do the work, is how a Product Owner gets in trouble. Now, promising what you already HAVE done is a good strategy. (One of my Seven Principles for Project Success is 'Make realistic plans based upon estimates, make realistic commitments based upon demonstrated performance.') Having written this paragraph, I hope this explains why I think much "estimation" is waste and adds no value, in agreement with others, e.g., Ron. 

To get back on topic (calculating ROI), you really won't know what the ROI is on a project until the project is complete and the money has come in... but you will know what the current investment is at any point, and you will know the current functionality produced at any point (by 'point' I mean at a sprint boundary). Good project stewardship should include looking at where we are versus where we wanted to be, and where we want to be, and then deciding whether or not continued investment is the right decision. There is no perfect, foolproof way to do this, so we have to use approximations for perfection and do the best we can with what we have. Release planning based upon story point estimates, and a plan that is updated after every sprint, gives us some guidance. It's good enough, because it is about as good as it's going to get (historical data on the current project).
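
A back-of-the-envelope sketch of that stewardship view (the figures are invented, and attributing a currency value to shipped features is assumed rather than solved here): classic ROI computed at each sprint boundary from cumulative investment and the value attributed to what has actually shipped.

    def roi_to_date(value_delivered, investment):
        # Classic ROI: (return - investment) / investment. Until the project
        # ends and the money actually comes in, both inputs are estimates.
        return (value_delivered - investment) / investment

    # Hypothetical per-sprint snapshots: cumulative team cost vs. the business
    # value (in the same currency) attributed to features released so far.
    snapshots = [
        # (sprint, cumulative cost, cumulative value delivered)
        (1, 40_000, 0),
        (4, 160_000, 90_000),
        (8, 320_000, 410_000),
    ]
    for sprint, cost, value in snapshots:
        print(sprint, round(roi_to_date(value, cost), 2))
        # sprint 1: -1.0, sprint 4: -0.44, sprint 8: 0.28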

Perhaps the real problem is not with story points, or velocity, or determining value, or the Cone of Uncertainty. It is with people who do not understand uncertainty or ambiguity, who insist on perfect knowledge even when it isn't possible and then believe or BS that they know what they don't know, who pedantically and rigidly treat guidelines as rules. Effective release planning and project management IS possible, but not if this rigid, inflexible, pedantic, rule-clinging, control-obsessed, give-me-the-fish-so-I-don't-have-to-think approach cannot be abandoned. I often quote Schwaber's "Prime Directive of Scrum" to these people: Use Common Sense. Sometimes invoking the Prime Directive even works.... :-) Otherwise, perhaps the organization, at whatever level, is not yet ready to face its problems and solve them. 

Yves Hanoulle

unread,
Dec 5, 2012, 11:03:49 AM12/5/12
to scruma...@googlegroups.com
Thanks.
I see we agree more than I thought we did.
;-)

> you really won't know what the ROI is on a project until the project is complete and the money has come in
Yes, and if you deliver in smaller chunks you can already see part of the money (with part of the investment),
a.k.a. the whole Lean Startup idea.

Yes on the ambiguity and the problems with people who don't understand uncertainty.

>Schwaber's "Prime Directive of Scrum" to these people: Use Common Sense
The problem with common sense is that the bigger the group is, the smaller the common sense seems to be,

with this image as "proof"

And some things in Agile are counterintuitive...

y




2012/12/5 John Clifford <john.c...@construx.com>
