The problems you're describing indicate significant gaps in your application of Scrum. They would likely be best addressed by having a qualified coach come in and work with the team to cement an effective understanding of Scrum principles and practices within the team and organization.
Mike Cohn's excellent book on User Stories provides additional information on agile estimation, and more information is available on his website mountaingoatsoftware.com.
While others might want to take a stab at this, I don't think this is a topic that lends itself to resolution over email, so I'll leave my comments to the recommendations above.
Thanks,
Vernon
> --
> You received this message because you are subscribed to the Google Groups "Scrum Alliance - transforming the world of work." group.
> To post to this group, send email to scruma...@googlegroups.com.
> To unsubscribe from this group, send email to scrumallianc...@googlegroups.com.
> For more options, visit this group at http://groups.google.com/group/scrumalliance?hl=en.
>
However, when estimating the "entire project", the best approach is probably to estimate the entire backlog of PBIs in story points and divide that by the velocity in story points per sprint; that gives you the best current estimate of how many sprints the entire backlog will take. This estimate should improve over time as the velocity stabilizes and the team gets better at estimating.
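As a minimal sketch of that arithmetic (the backlog size, velocity, and function name below are invented for illustration, not taken from this thread):

```python
# Forecast remaining sprints from total backlog size and velocity.
import math

def sprints_remaining(backlog_points, velocity_per_sprint):
    """Best current estimate of sprints left: points divided by velocity, rounded up."""
    return math.ceil(backlog_points / velocity_per_sprint)

# e.g. a 200-point backlog and a velocity of 23 points per sprint:
print(sprints_remaining(200, 23))  # -> 9 sprints
```

Re-running this after every sprint, with the latest velocity, is what makes the forecast improve over time.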
It is quite normal to perceive an inconsistency between estimates of a story in story points and estimates of tasks in hours. The story points tell you something about the magnitude of the problem that needs to be solved. (I hope this is obvious: the PBI is usually estimated in backlog grooming, when only the PBI and acceptance criteria are known, whereas the team only begins to think about the solution during planning, when the PBI is broken up into tasks or SBIs.) The task estimates in hours tell you something about an actual solution to the problem. It is possible that the team has certain strengths in some areas and weaknesses in others. Perhaps a large problem of 20 points could be solved with a clever solution that takes 4 hours, whereas a small problem of 5 points could take 10 hours to solve because, for example, the team is unfamiliar with some technology.
But I wouldn't get too hung up about it; the hour estimates of tasks are only used for the sprint burndown chart, and many teams don't even bother estimating them, simply counting the number of tasks remaining for the burndown.
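A burndown that just counts open tasks needs nothing more than a tally per day. A toy sketch (the day names and counts are made up):

```python
# A task-count burndown: no hour estimates, just how many tasks
# remain open at the end of each day (illustrative data).
daily_open_tasks = [("Mon", 24), ("Tue", 21), ("Wed", 19), ("Thu", 14), ("Fri", 9)]

for day, remaining in daily_open_tasks:
    # Crude text chart: one '#' per open task.
    print(f"{day:3} {'#' * remaining} ({remaining} tasks left)")
```

If the bar shrinks day over day, the sprint is burning down; no hour arithmetic is needed.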
One thing I noticed was that you were talking about a design phase. This could be a serious problem, if it means what I think it does. I am sorry to say, but you can only really calculate a good value for velocity when you are delivering complete software each sprint, which is the normal way of using Scrum to develop software. Normally we take a PBI or user story and do all the tasks (analysis, design, development and test) on that story within the sprint, so we finish with actual running software. I am concerned that by "design phase" you might mean only taking analysis documents and turning them into design documents or something similar. That will give you a value of velocity that won't help you predict the delivery of actual software at all, and Scrum is not usually used this way anyway.
Hopefully I misunderstood what you meant by "design phase".
And to answer the final question: the relation of story points to hours is velocity. I don't know if this is useful, but I see the time required to complete a PBI as having two components: one that can be estimated beforehand, at least as a relative unit in comparison to other PBIs, and one that is only measurable afterwards and depends a lot on the solution that is yet to be chosen and the capacity of the team. The story points value relates to the first component, and velocity encapsulates the second.
Basically it doesn't make sense to estimate PBIs in hours, because you just don't know what the solution will look like. However, velocity tells us how long it took us to solve problems of certain sizes in the past, so we can use that to make an estimate in hours if we really need to.
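As a hedged sketch of that conversion (the sprint history and the 40-point item are invented), observed velocity is the only bridge from points back to calendar time:

```python
# Translate a story-point estimate into sprints using past velocity.
def average_velocity(points_completed_per_sprint):
    return sum(points_completed_per_sprint) / len(points_completed_per_sprint)

history = [18, 22, 25, 23]            # points completed in the last four sprints
velocity = average_velocity(history)  # 22.0 points per sprint

# A 40-point slice of backlog at that velocity takes about 1.8 sprints:
print(round(40 / velocity, 1))
```

Multiply the sprint count by the sprint length if someone really insists on hours; the point is that the hours come from measured history, not from up-front guesses about the solution.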
> You want to get a potentially shippable product, fine. However, if you need
> a finished product by a certain date, you will
> need to spend more time in analysis in some sprints. What do you think about
> this?
To do software development with Scrum well, you must deliver
software, in a potentially shippable form, in every Sprint, from the
first to the last. Naturally, the software toward the end will have
more features and will be a better product in many ways.
Assuming a flat number of people on the team, their ability to
produce features should be approximately flat from the beginning to
the end. Therefore the amount of design that is needed is also
roughly flat from the beginning to the end. That said, early Sprints
will be more exploratory in both analysis (feature selection) and
design, and later ones more focused on quality and the like.
This is not an authorization to have analysis Sprints at the
beginning and Testing Sprints at the end.
Ron Jeffries
www.XProgramming.com
Don't confuse more exact with better. -- Brian Marick
As others have said, this doesn't sound kosher. To be honest, I no
longer recommend breaking stories down into tasks, but into thinner
stories that can be proven by an example. See
http://blog.gdinwiddie.com/2011/05/01/splitting-user-stories/ for a
really simple way to do that (as well as some links to other ways). And
I suggest delivering working software from the get-go, even though it
might not be much. See
http://blog.gdinwiddie.com/2011/05/25/avoiding-iteration-zero/ for more
on that.
I'd be happy to answer any questions you have on those articles and how
to apply that to your situation. I'm headed out to the Agile
Development Practices Conference in the morning, though, and probably
won't keep up with the mailing list while I'm there. Ping me at
gdinwiddie at idiacomputing.com if I miss an important question here.
- George
P.S. The abbreviation of estimate is "guess." Don't worry about the
inconsistency. Do track your velocity over time and see if it settles
into a groove.
--
----------------------------------------------------------------------
* George Dinwiddie * http://blog.gdinwiddie.com
Software Development http://www.idiacomputing.com
Consultant and Coach http://www.agilemaryland.org
----------------------------------------------------------------------
--
Hello Gustavo,
By doing more analysis at the beginning, are you trying to reduce the risk of not being able to deliver that set of important features by the certain date?
btw, don't almost all the projects we deal with have that certain date and that definite set of features handed to us anyway?
Hello, Gustavo. On Sunday, June 5, 2011, at 3:18:46 PM, you wrote:
> If we can not deliver for sure, we just do not start or stop the analysis.
How do you know whether you can deliver for sure without doing the
analysis? If you know you can deliver for sure before analysis, why
bother to do the analysis?
Ron Jeffries
www.XProgramming.com
The practices are not the knowing: they are a path to the knowing.
--
Hello, Gustavo. On Sunday, June 5, 2011, at 4:33:25 PM, you wrote:
>> > If we can not deliver for sure, we just do not start or stop the
>> analysis.
>>
>> How do you know whether you can deliver for sure without doing the
>> analysis?
> You can not. Did I say you can?
I think so. I think you said "If we can not deliver for sure, we
just do not start [or stop] the analysis." If you don't start it,
how do you know?
Ron Jeffries
www.XProgramming.com
In times of stress, I like to turn to the wisdom of my Portuguese waitress,
who said: "Olá, meu nome é Marisol e eu serei sua garçonete."
("Hello, my name is Marisol and I will be your waitress.")
-- after Mark Vaughn, Autoweek.
--
> I saw you saying something about complex vs. tedious work when estimating Stories, and
> what I say to my teams is that tedious work should also be taken into
> account when estimating in Story Points.
> Story Points should indicate the relative SIZE of a Story, and in this
> measure of "size" I ask them to take into account:
> - Complexity
> - Risk (never-done-before features, new technology, etc.)
> - Amount of tedious work (and I'd also add that the more tedious the work
> and the more of it there is, the less programmers focus on the tasks
> and the more bugs are generated).
> So a Story with low complexity but a lot of tedious work might be bigger than
> a more complex one.
When we invented Story Points, we had in mind simply "how long will
it take to do this story". If one must use them, this still seems to
me to be the best definition.
Ron Jeffries
www.XProgramming.com
Master your instrument, master the music,
and then forget all that *!xy!@ and just play. -- Charlie Parker
Then why not just use time?
>> When we invented Story Points, we had in mind simply "how long will
>> it take to do this story". If one must use them, this still seems to
>> me to be the best definition.
> Then why not just use time?
Currently Chet and I do not recommend this kind of estimation at
all, as it too often generates the kinds of problems discussed
here.
Story points were invented for political reasons:
At the time we invented story points, the team in question had been
estimating in "Ideal Time", the time it would take to do the story
if not interrupted. We had already found "Actual Time" to be too
hard to use, both in estimating and politically, since in our
environment, "estimate" meant "promise" to many managers. Ideal Time
was supposed to be a new kind of unit that didn't have that
promissory aspect.
What we found in use, however, was that Ideal Time was itself a
problem because now you'd say you could do something in two Ideal
days, and five days later it would be done. Management couldn't seem
to make use of this.
By this time we had a hard deadline, and our "Product Owner" was
pretty good at managing scope, so we didn't have much need for
managers to be seeing how we did on estimates. They just couldn't
help it when they saw things like "days". So we invented story
points, with some definition like "one story point is one-half an
ideal day".
Time is probably a better way of estimating if the environment is
healthy enough to understand the difference between estimate and
promise. In my experience few organizations are.
Day in and day out, the main purpose of estimation is to decide how
much work to put in the Sprint. I find that rather than do some kind
of numerology on time, utilization, who's on vacation, and all the
other factors people worry about, this works better:
Make the stories all pretty small, no more than two or three days'
work for a pair. Stories this small will be easier to understand,
and the law of large numbers will be on your side.
After the Product Owner gets what looks like enough stories up on
the wall during Sprint Planning, the development team looks at
them, look each other in the eyes, and draw a line, committing as
a group to the stories above the line.
TL;DR
Story Points were invented to obfuscate duration so that certain
managers would not pressure the team over estimates. Using elapsed
time is probably better if the environment is healthy enough not to
obsess over meeting the estimates. Slicing stories small and
committing as a team to the batch provides better commitment, more
accurate selection of work, easier tracking, and less political
pressure.
Ron Jeffries
www.XProgramming.com
The main reason that testing at the end of a development cycle finds
problems is not that problems were put in near the end, it is that
testing was put off until then.
I concur with the approach you and Chet are now recommending, though I find it works best with more mature teams. For teams with less experience, I still find that story points, completely devoid of discussions of time (strictly discussing and comparing relative effort against a baseline or, as more stories are added, against multiple points of reference), along with velocity, give teams a good tool for estimating and planning. After estimating, I still encourage the teams to do a 'gut check' to make sure they're confident of being able to deliver on their commitment. I never encourage the use of time because, as you describe below, it confuses the issue. I see story points and relative estimation as an intermediate step towards the more ideal approach of commitment-based (sprint) planning.
Thanks,
Vernon
Just my (very late in the thread) two bits...
Cheers,
Jan Beaver, PhD, CSP
On 6/4/11 10:41 AM, Patti wrote:
> Hi Manoj,
> Thanks for your reply,
>
> no, we have not done analysis in all these sprints, though we are doing
> some investigation in some cases. We are using a lot of technologies
> new to the team. Most of the team ramped up on these and trained
> before we started the sprints. As I described to Vernon, we are
> building a product from scratch, so each story requires putting
> frameworks or parts of frameworks in place. In the scrum training I
> took, I asked the group how they handle a story that does not create a
> feature, such as investigating and implementing a framework versus 'I
> want to start an acquisition so I can see the data as it is
> collected'. They suggested having a story for those tasks also,
> so we would make the implementation part of a story and maybe make the
> investigation and decision a story that delivers a decision and a brief
> document with examples of how to use it.
For learning about something so that you can make a decision, I suggest
treating it as a "spike." This is different from a story, in that it's
not creating deliverable functionality. Start with the question you
want to answer, to give a goal to the work. Such a question might be
"How do you use framework X to do Y?" Also, set a time limit on the
work so that it doesn't become a permanent research project. If the
time limit is reached without answering the question, make an explicit
decision whether to extend the spike or to try a different approach.
> I would ask the community: how do you start a large project from
> scratch like this with scrum, when the first few months are putting
> the architecture in place?
Grow the architecture as the code requires it. Let deliverable
functionality be the driver.
- George