How do you handle test tasks?


Thomas

Aug 29, 2008, 10:22:02 AM
to VersionOne-users
Dear all,

I wonder how you handle the testing of new features.

In our project setting, the developers fix bugs or implement features
during one iteration, and the testers then test them in the following
iteration (based on a freshly installed MSI of the application).

As far as I understand, VersionOne is rather strict in handling tasks
and tests associated with features/defects: the task has to be in the
same iteration.

Does anybody see a way to implement our workflow, with its
one-iteration offset between implementation and testing? And how would
you do it?

Thanks

Thomas

Palmer, Chris

Aug 29, 2008, 7:25:56 PM
to versiono...@googlegroups.com
Thomas,

We run our iterations the same way you described below. What we do is
set up a separate story for all testing activities. For example, we
have a story that we copy from one iteration to the next and update
with new tests as we go. We call it our Regression Suite. This has
been working out quite well. So, that's how we do it. Hope this helps;
let me know if you have any additional questions.

Thanks,
Chris P

Thomas

Sep 1, 2008, 7:17:38 AM
to VersionOne-users
Chris,

Thanks a lot for your answer. It is good to know that we are not alone
in the way we do it ;).

I understand how you work with your regression suite story, but I
have a couple of questions:

1. How do you relate the finished stories to the regression suite
story? Do you add the stories to the list of "This feature depends on
these features" in the regression suite?

2. Who adds the stories to the regression suite? Every developer
individually? And when: right after finishing a story, or at the end of
the iteration? And how do you keep track that every story was placed
in the regression suite?

Thanks for sharing your experience.

Cheers,

Thomas

Thomas

Sep 1, 2008, 2:39:46 AM
to VersionOne-users
Chris,

Thanks a lot for your answer. Now I know that we are not alone in how
we do it :).

Your solution sounds interesting, and I was just playing around a bit
with the concept, as I am currently setting up the next iteration
(it's Monday, after all).

I have a couple of questions in regard to your regression suite story:

1. I understand that you copy the regression suite story from
iteration to iteration, but how do you reference the finished features
and defects? Are you adding them to the "This feature depends on
these features" relation?

2. Who adds the stories to the regression suite? Every developer
herself? And when: after she finishes her story, or at the end of the
iteration?

Thanks for sharing your experience.

Cheers,

Thomas

Luanne

Sep 1, 2008, 9:55:33 AM
to VersionOne-users
I also wanted to add information from a previous post about
regression test suites and how to establish them within VersionOne.

Assuming a project structure such as:

Product
* Release 1

Adding Tests to the Suite
1. Add a new backlog item in the Product project, called "Master
Regression Suite".
2. If you know of regression tests that need to be added right away,
add the Test assets to this backlog item.
3. Each time you add a new Test asset to a backlog item, make a copy
of it by using the copy action (available in the actions menu for
each item); the copy will appear right next to the original, under
the same story.
4. Edit the copied Test.
5. There will be a field (third one down) called "Parent Workitem"
with a little magnifying glass. Click the magnifying glass.
6. In the popup window, click the Filters action, choose "Find",
type "Master Regression Suite", and click Go (or press Enter).
7. Your master suite BI will be found; click the Select link next to
it, at the far right of the grid.
8. Your copied test is now part of the Regression Suite.

Using the Suite
1. Using the copy action, make a copy of the Master Regression Suite
BI (this will also copy all the Tests underneath).
2. Edit the BI. Change the Project to the Release you want it in
and the Sprint to the sprint you will execute it in.

-Luanne

Luiz Claudio Parzianello (GBA-TI)

Sep 1, 2008, 11:10:54 AM
to versiono...@googlegroups.com
Dear friends,

Our team is working with a User Story template based on:

Title: < User Story Title >
Description: As a < who / role >, I want < what / functional requirement > so that < why / benefit >.
Acceptance Criteria: To check this, I want to verify:

1) < Criteria 1 >
2) < Criteria 2 >
3) < ... >

In VersionOne, we've standardized:

User Story Title = Backlog Item Title
Description = Backlog Item Description
Acceptance Criteria = Expected Results (from an Acceptance Test created in the Backlog Item)

My question is: how can I build a report with the following fields (BI = Backlog Item; AT = Acceptance Test)?

BI_ID | BI_TITLE | BI_DESC | AT_EXPECTED_RESULTS | SPRINT

Using "Test Quicklist Report" we got only:

AT_ID | BI_TITLE | AT_EXPECTED_RESULTS | SPRINT

That is, we can't use BI_ID and BI_DESC in a test report. Any suggestions?
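If your instance exposes the VersionOne REST Data API (rest-1.v1), the same data can be pulled as a custom query instead of relying on the built-in Quicklist report. The sketch below is hypothetical: the host name is a placeholder, and the attribute names (`Parent.Number`, `Parent.Description`, `ExpectedResults`, `Timebox.Name`) are assumptions that should be verified against your instance's metadata before use.

```python
# Hypothetical sketch: build a VersionOne Data API query that selects
# the parent backlog item's ID, title, and description alongside each
# acceptance test's expected results and sprint. Host and attribute
# names are assumptions, not confirmed for this installation.
from urllib.parse import urlencode

BASE = "https://server/VersionOne/rest-1.v1/Data/Test"  # placeholder host

def build_report_query(sprint_name):
    """Return a query URL for the BI_ID | BI_TITLE | BI_DESC |
    AT_EXPECTED_RESULTS | SPRINT report described above."""
    params = {
        "sel": ",".join([
            "Parent.Number",       # BI_ID
            "Parent.Name",         # BI_TITLE
            "Parent.Description",  # BI_DESC
            "ExpectedResults",     # AT_EXPECTED_RESULTS
            "Timebox.Name",        # SPRINT
        ]),
        "where": f"Timebox.Name='{sprint_name}'",
    }
    return BASE + "?" + urlencode(params)

print(build_report_query("Sprint 12"))
```

The resulting URL could then be fetched with any HTTP client using your normal VersionOne credentials.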

[]'s

Luiz Parzianello

Palmer, Chris

Sep 1, 2008, 11:35:56 AM
to versiono...@googlegroups.com
Thomas,

We don't associate our regression suite with a story, but rather with a release. So we know that stories X, Y, and Z will go out with release 2006.x.x. To associate a regression suite with a release, we create a goal for each release and then connect that goal to each story and defect, including the regression suite story.

I hope that helps.

Thanks,
Chris P.

Thomas

Sep 2, 2008, 4:23:07 AM
to VersionOne-users
Chris,

Thanks again for sharing. Also thanks to Luanne for pointing me to the
other thread.

I'll let you know how we decide to handle it.
BTW: here is V1's way: http://community.versionone.com/KnowledgeBase/FAQs/Q10561.aspx

Cheers,

Thomas

Kevin S.

Sep 2, 2008, 12:55:35 PM
to VersionOne-users
Thomas-

Disclaimer: I'm entering this discussion late, and my input may
not be helpful to you. I'm simply providing some Scrum-focused
process feedback in case you or others can benefit from it (I see
these threads as a historical reference for future new users, both in
V1 and Agile).

Flags go up in my head whenever someone says that their testing
team works on the prior iteration's work.

This is problematic for the following reasons:
- the development team has moved on to new work
- if the testing team finds something, they must "interrupt" the
development team to go back and fix the bug
- this throws off velocity (or velocity is gamed and false)
...and this whole concept feels like a mini-waterfall, since it ignores
that the product should be "potentially shippable" at the end of an
iteration.

What we did on my last team was have a CI (continuous integration)
build that ran every night, separate from the check-in test/build
process. If everything passed, the test environment was
automatically updated in the middle of the night. Thus, our test team
was part of the sprint team, and they tested stories as they were
built. Stories were not closed unless Dev and Test agreed AND the
customer accepted them. This removes the pitch-over and splash-back
feeling.
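The nightly gate described above boils down to one rule: only refresh the shared test environment when the full build is green. A minimal sketch, with `run_full_build` and `update_test_environment` as hypothetical stand-ins for a team's real scripts:

```python
# Minimal sketch of a nightly CI gate: the shared test environment is
# only updated when the full test/build passes, so testers always work
# against a known-good build from within the sprint.
def nightly_ci(run_full_build, update_test_environment):
    """Update the test environment only on a green build."""
    if run_full_build():
        update_test_environment()
        return "deployed"
    return "skipped"  # testers keep yesterday's known-good build

print(nightly_ci(lambda: False, lambda: None))
```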

It can also remove a lot of the interruptions and debates that pit QA
against Dev.

Having said all of that, you may not technically be able to do this
today. Or your culture may not be ready to be truly agile. As a
pragmatist, I say whatever you are doing is better than nothing. I just
wanted to throw this out there as a potential goal to strive for.

Thomas

Sep 4, 2008, 3:42:22 AM
to VersionOne-users
Hi Kevin,

I appreciate your input and don't think it's too late.

The same flags went up in my head, and I started this discussion
partly to get some feedback on our process, too. We definitely are not
truly agile, but I like to build on the advantages of the methodology.
I also believe that changes have to be pragmatic and are only
successful if they are "lived" right after they are applied.

We do automated unit testing and are currently pushing continuous
integration, but we lack automatic deployment of the builds onto
our test system.
At the moment, this is still a manual task (we are deploying MSIs and
database dumps) and the major reason for the one-iteration phase
shift between implementation and testing of features. Once
installed, we then run additional automated tests via the GUI (using
TestComplete).

Your thoughts made me consider how much it would take to eliminate
these manual steps - not much, it seems. Then we would have more
frequent automated "GUI tests" and an ideal base for manual tests...
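As an illustration of how small that manual step can become: a silent MSI install is essentially one `msiexec` call (`/i` install, `/qn` no UI, `/l*v` verbose log are standard Windows Installer switches). The sketch below is a hypothetical post-build script; the paths are placeholders, and restoring the database dump is left out.

```python
# Hypothetical post-build deployment sketch: install the freshly built
# MSI unattended on the test machine. /i, /qn, and /l*v are standard
# Windows Installer command-line switches; all paths are placeholders.
import subprocess

def install_command(msi_path, log_path="install.log"):
    """Build a silent, logged msiexec install command."""
    return ["msiexec", "/i", msi_path, "/qn", "/l*v", log_path]

def deploy(msi_path, dry_run=True):
    cmd = install_command(msi_path)
    if dry_run:
        return cmd  # just show what would run
    subprocess.run(cmd, check=True)  # raises CalledProcessError on failure
    return cmd

print(" ".join(deploy(r"builds\nightly\App.msi")))
```

Hooked into the end of a nightly CI build, a script like this would close the gap between implementation and testing without waiting an iteration.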

Thanks for sharing,

Thomas