QA in agile environments

James Thigpen

Aug 12, 2011, 12:32:02 PM
to altnet...@googlegroups.com
How does QA fit into an agile/lean process? How can QA function in an iterative environment? How can QA be effective without being a place you throw things over the wall to? How are you doing QA?

It's been a while since this thread went around, so I wanted to see how people were doing it these days.

Thanks,

-jt

Adron Hall

Aug 12, 2011, 12:54:20 PM
to altnet...@googlegroups.com
IMHO - if QA or "Testing" exists in a company, I'm of the ATDD camp (which is not the same thing as TDD).  I'm not sure I'm sold on every single testing ideal of ATDD, but I'm a big fan of the general attitude.  Basically...
  • Test/QA needs to be involved from the inception of a project; as soon as developers or even paper-prototyping UX people are working on a project, QA/Test needs to be brought in too.
  • Exploratory testing (along with the traditional automated testing, etc.) is fundamental to usability & high-quality applications & over time helps with maintainability.
  • UX + QA + Dev must communicate, communicate, and communicate some more!  :)
  • Project management and business analysts shouldn't be involved at this level, but if they are - they should ONLY be clearing the path, not doing any of that "allocating resources, i.e. people" stuff at this level. If the team isn't self-organizing... then, well, you've got other issues aside from where QA/Test fits into agile.  :P
What is not agile and not very helpful:
  • Eternal click, click, click to confirm every single thing.
  • A lack of automation for things that should be automated (rough sketch after this list).
  • UI-level click-through testing doesn't seem to yield much.  Exploratory testing is good; UI clicky-click-click is bad.
  • Putting any disconnect in the lines of communication between UX + QA + Dev - i.e., do NOT put analysts or project managers in between these groups.
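Roughly the kind of thing I mean by "should be automated" - a minimal sketch (Python + pytest + requests; the endpoint and fields are invented) of a service-level check that replaces an eternal click-to-confirm:

    import requests

    BASE_URL = "http://localhost:8080"  # assumed local test deployment

    def test_order_total_includes_tax():
        # One POST and two asserts instead of re-clicking this every release.
        resp = requests.post(f"{BASE_URL}/orders", json={"sku": "ABC-1", "qty": 2})
        assert resp.status_code == 201
        body = resp.json()
        assert body["total"] > body["subtotal"]  # tax actually applied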
-Adron








--
Adron B Hall
Tech: http://compositecode.com
Transit: http://transitsleuth.com
Twitter: http://www.twitter.com/adronbh

James Thigpen

Aug 12, 2011, 1:25:01 PM
to altnet...@googlegroups.com
Thanks Adron, great feedback.

We're not far off from this. Our testing is feature-oriented and exploratory in nature (to the best of my knowledge). What are some examples of things that should be automated? Are we talking about Selenium tests or something like that? We've not had much luck with Selenium as far as the value it provides, but it's possible we're doing it wrong.

We're also a continuous-deployment shop, so we acutely feel any bottleneck in our development process. Does anyone have tips for keeping cycle times to a minimum w.r.t. QA? Does it happen in staging? Does it happen in production? Is it asynchronous to the product delivery pipeline?

-jt

Chris Bilson

Aug 13, 2011, 11:06:37 AM
to altnet...@googlegroups.com
TL;DR: Let testers go first, make a test plan, developers follow, everyone lives happily ever after.

One thing a team I was on tried before (at Parametric) is letting QA get in front of developers in the value chain - i.e., they talk with business customers to figure out how to test, which forces the thinking-through of all the features wanted. Testers are usually better at disciplined thinking, and at projecting themselves into the role of someone using the system, than goofball developers are. Our testers were seasoned, too, so they were able to communicate with the business people really well.

Once we have a test plan, we start building things. Regardless of whether we're doing manual testing or automated testing, developers have a clearer idea of what they are shooting for when they see the test plan / automated test suite sitting there in front of them, and testers don't feel like they are being brought in at the last minute with little information and no idea of what parts of the system are critical to test.
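To make that concrete: the test plan can even live in the repo as executable stubs. Here's a sketch (pytest; the feature and case names are invented) of what "sitting there in front of them" might look like:

    import pytest

    # Tester-authored plan, checked in before any production code exists.
    @pytest.mark.skip(reason="planned: rebalance ignores restricted securities")
    def test_rebalance_skips_restricted_securities():
        ...

    @pytest.mark.skip(reason="planned: rebalance honors the minimum cash buffer")
    def test_rebalance_preserves_minimum_cash_buffer():
        ...

Each stub flips from skipped to real as the feature lands, so everyone can see how much of the plan is actually done.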

I think if you do it that way, you increase the odds that what the developers think is done is actually done, which means every process after the test plan is created is smoother and more efficient. Good developers will realize that having a test plan makes their job easier, build to the plan, and get done faster. For less-good developers, it will be obvious to everyone where the problem is.

I only got to see this in action for a few iterations. I think it was working well, but I'm not sure what happened after I left. We don't really have traditional testers where I am now - business experts do the testing, so it's not the same problem.

Maybe this "test plan first" way of doing things is what the model-driven development movement should have been driving toward: modeling in terms of test cases seems like it would be a lot more valuable than modeling in terms of software components, and good testers can kick most architects' butts.

Having testers that don't understand the business is a real drag on productivity. In my mind they should know the business as well as or better than the developers. If that means training of some kind, spending more time with business people, or whatever, it should happen. If you aren't letting them grow in that direction, you are wasting effort and increasing stress on the team.

--c


P.S.: I think this blog post was what set me off: http://thecleancoder.blogspot.com/2010/08/qa-or-when-do-you-flip-panc...

P.P.S.: Sorry this was way longer than I originally intended.

Justin Bozonier

Aug 13, 2011, 11:59:04 AM
to altnet...@googlegroups.com
TL;DR: Chris +1

At Milliman, we had our best success when one of our business users made a table of every input and the corresponding expected output before we got started on it. 

After we were done he then tested our feature by running through each case as well.
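That kind of table translates almost directly into a table-driven test. A sketch (pytest; the pricing function and the numbers are made up):

    import pytest
    from pricing import calculate_premium  # hypothetical module under test

    @pytest.mark.parametrize("age, smoker, expected_premium", [
        (30, False, 120.00),  # one row per line of the business user's table
        (30, True,  310.00),
        (65, False, 480.00),
    ])
    def test_premium_matches_the_business_table(age, smoker, expected_premium):
        assert calculate_premium(age, smoker) == expected_premium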

For the Millimanites here, I'm talking about Noah. He's a bad ass.

Another thing we did was get our business customers to start using new features ASAP - before the features were even released or done.

Shawn Neal

Aug 13, 2011, 12:10:08 PM
to altnet...@googlegroups.com
Letting the testers get out in front of the devs is an interesting idea.  Almost sounds like a business analyst who happens to also test the end product.  I guess that's not all that different from what we're now doing at Daptiv, which seems to be working really well.  At the beginning of the sprint (before commit) we'll spend a day or two breaking stories down, test casing, and tasking.  The entire team is responsible for owning the test cases, not just the tester or SDET on the team.  This has helped in lots of ways.

Stories are smaller and better defined.  The devs are acutely aware of the need to get value to the manual/ad hoc tester ASAP, and having smaller stories really helps avoid the "throw it over the wall at the end of the sprint" syndrome.

Test casing by the entire team before commit has really helped size stories more accurately.  Although the acceptance criteria often dictates the 'happy path' tasks, test casing often dictates the 'unhappy path' tasks, which directly affect story size.

Having the entire team aware of and owning the test cases has dropped our defect rate significantly. We try to automate all our test cases (Selenium, REST API, QUnit), which the entire team is responsible for, not just the SDET.  It's not uncommon to finish a story without a single defect being logged.
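For flavor, the Selenium side of that can stay pretty thin. A sketch (Python Selenium bindings; the URL and element IDs are invented):

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    def test_login_smoke():
        driver = webdriver.Firefox()
        try:
            driver.get("http://staging.example.com/login")
            driver.find_element(By.ID, "username").send_keys("testuser")
            driver.find_element(By.ID, "password").send_keys("not-a-real-secret")
            driver.find_element(By.ID, "submit").click()
            assert "Dashboard" in driver.title  # landed in the right place
        finally:
            driver.quit()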

Stories are almost always completed at the end of the sprint, which wasn't always the case before.  And when stories are done, they're done, done; and everyone knows it.

What it really comes down to is good teamwork.  There's mutual respect between devs and testers, and it's understood that they all bring value to the table - some of it coding skills and some of it domain expertise.

Chris Bilson

Aug 13, 2011, 12:29:43 PM
to altnet...@googlegroups.com
Yes, exactly. Noah is a perfect example of this. (too bad no one else will know what we are talking about.)

Like Noah, I think testers would do themselves and their teams a tremendous service if they really dig into the business domain they are working in. If you are working on a portfolio management system for a mutual fund, for example, you need to know what a custodian bank is, what a benchmark is, what a corporate action is, etc. Not knowing these things is not doing your job, IMO. It would be like testing cars off an assembly line and not knowing what a steering wheel is. Not to say that developers are much better, but it's even more important for testers, I think.

Alan Ridlehoover

Aug 13, 2011, 6:30:47 PM
to altnet...@googlegroups.com
+1 to getting the testing to happen in front of development.

What follows is TL;DR... <plug shame="off">Unless you want to read it on my blog: http://codegardener.com.</plug>

Testing is a huge domain. If you're familiar with Brian Marick's testing quadrant, you know that there are four basic areas that testing covers:

* Business Facing tests in Support of Programming (Business Requirements testing - Does the code do what it should?)
* Business Facing tests to Critique the Product (Business Defect testing - Does the code do something it shouldn't? Are there missing requirements?)
* Technology Facing tests in Support of Programming (Technical Requirement testing - Does this method do what the developer intended?)
* Technology Facing tests to Critique the Product (Technical defect testing - Are there leaks? Can it handle a load? Is it fast enough?)

Typically, testers focus on the business facing tests. And, people with specialized technical skills focus on the technology facing tests. (Developers on the support programming side; Performance testers on the critique product side.)

None of these tests can be run before the software is written. But the tests in support of programming can be written before the code, and metrics for perf/load/stress can be defined before the code is written. I recommend doing all of that (unless perf/load/stress isn't important to you). Obviously, exploratory testing is something that has to wait for the code to be written.

If I were designing an agile team from scratch, I would propose the following approach:

1. During planning: 
* Track requirements as user stories.
* Document acceptance criteria with each story, including perf/load/stress criteria (on the back of the 3x5 card, in Rally or TFS, etc.)
2. During an iteration: 
* One pair works on one story at a time.
* Acceptance tests are automated first, based on acceptance criteria (see the sketch after this list).
* Code is written using TDD
* Story is not functionally complete until all acceptance tests are passing (for the right reasons - no hard coded answers left)
3. After story is functionally complete:
* Original pair leverages existing acceptance tests in perf/load/stress tests to determine if those criteria are met.
* Tweak code as necessary to meet perf/load/stress acceptance criteria.
* Story is not perf/load/stress complete until all perf/load/stress acceptance tests are passing
4. Exploratory testing should happen outside the constraints of a single story. Limiting it to a single story would put blinders on that could negatively impact the effort. But, it is important that it happen. Perhaps the team sets aside time during the day or iteration for banging on the software.
5. Once all acceptance tests are passing
* Ship it!
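To ground steps 2 and 3, here's roughly what "acceptance test first, then reuse it for perf" can look like (pytest sketch; the story, endpoint, and budget are all invented):

    import time
    import requests

    BASE_URL = "http://localhost:8080"  # assumed test deployment

    # Step 2: written first from the card's acceptance criteria;
    # it stays red until the story is functionally complete.
    def test_search_returns_only_matching_customers():
        resp = requests.get(f"{BASE_URL}/customers", params={"q": "smith"})
        assert resp.status_code == 200
        assert all("smith" in c["name"].lower() for c in resp.json())

    # Step 3: the perf criterion from the same card, reusing the same call.
    def test_search_meets_its_response_time_budget():
        start = time.perf_counter()
        requests.get(f"{BASE_URL}/customers", params={"q": "smith"})
        assert time.perf_counter() - start < 0.5  # card said "under 500 ms"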

Variations:

1. Have the entire team bang out the acceptance tests at the beginning of the iteration.  I've seen this done. It works. But, quite often, tests get written for stories that end up getting cut from the iteration due to time constraints. That is excess inventory sitting on the production floor until those stories make it into another iteration. In other words, doing this encourages the accumulation of waste.

2. If you're concerned about a single pair working a story from beginning to end, mix it up. Give pairs one day to work on something, or 4 hours, or two - whatever works for you. Then switch things up, preferably by keeping one person on the story and bringing in a new pair partner. The next time you switch, the older partner leaves.

3. Even though exploratory testing should not be constrained by a single story, it really is important to do it before shipping the software. Microsoft calls this a bug bash. They give away prizes for the most bugs, and the hardest to find bugs. But, they don't do it until very late in their process. It would be most agile to do it continuously.

Alan

Kelly Sommers

Aug 13, 2011, 10:40:55 PM
to altnet...@googlegroups.com
Hiya,

I've recently joined a team that is by FAR the most agile I've been on, and our QA people are within our teams. Each team currently has a QA person on it.

That means the QA person is involved in everything I am involved in as a developer. She's involved in sprint planning, each morning's scrum, and the sprint demo at the end of the sprint.

The best QA group I've worked with (no coincidence: it was under the same QA manager as this one) was treated as a team just like the dev teams.

I found that having QA people embedded like this made them far more knowledgeable. These QA people knew the product and its inner capabilities better than I did, even though I built it. They were in the same meetings as me, and I tend to move on after building something. It was amazing having QA people who were so deeply versed in the product.

Since QA was so knowledgeable, they could easily help out with sales engineering or support when resources were low, like during vacation periods. I'm sure they didn't enjoy that, but it's a testament to their knowledge and in-depth understanding of the system.

I've been in some environments where dev and QA had an adversarial relationship and that never works out well. 

Later!
Kell

Kevin Klinemeier

Aug 13, 2011, 11:26:21 PM
to altnet...@googlegroups.com
The expression I like is this:

The #1 priority of agile testing is to provide immediate feedback to these two questions: Is anything broken? Are we done yet?
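One cheap way to wire those two questions into a build (sketch only; pytest-based, and the directory layout and "pending" marker are invented):

    import subprocess
    import sys

    # "Is anything broken?" -- answered by the exit code of the whole suite.
    broken = subprocess.run(["pytest", "tests"]).returncode != 0

    # "Are we done yet?" -- acceptance tests still carrying the custom
    # "pending" marker are stories that aren't finished.
    pending = subprocess.run(
        ["pytest", "acceptance", "-m", "pending", "--collect-only", "-q"],
        capture_output=True, text=True,
    )

    print("BROKEN" if broken else "all green")
    print(pending.stdout.strip() or "nothing pending -- done")
    sys.exit(1 if broken else 0)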

-Kevin

Michael Ibarra

Aug 14, 2011, 1:37:47 AM
to altnet...@googlegroups.com
I'm really proud of the way we do things on my team. Although we're always looking for areas of improvement, I think we've got a really good thing going.

We don't have a QA team. We have a team. QA and dev together from start to finish. The whole team owns and is responsible for every aspect of the story pipeline, so there's really no wall (physical or logical) to throw code over. We define acceptance criteria together, we pair program (as much as we can) together on our tests and code, and we trade off responsibilities when it comes to our release process.

We find great value in this because, as Chris pointed out, there is a different mentality between seasoned testers and devs. We value this balance, and we seek to learn as much as we can from each other in the hopes of becoming better at our own disciplines.

So far this has worked out great and we're continuing to improve.

I'd personally avoid "testers going first" (or any one discipline going first, for that matter) because it sounds like a breeding ground for a "not my job" mentality. When any part of the team doesn't feel like they have ownership (or an equal stake) in a story, people are likely to disengage.

My 2 cents, anyway. HTH.

Mike
--
********************************
Michael Ibarra
bm2...@gmail.com

Alan Ridlehoover

Aug 14, 2011, 4:28:17 AM
to altnet...@googlegroups.com
Agreed. "Testers" don't go first. "Testing" does.

Paul McCallick

Aug 15, 2011, 12:00:17 PM
to Seattle area Alt.Net
Of all of the challenges we've faced in agile, determining how QA works is the biggest. I can't say we've totally figured it out, but here are some things that have worked:

- Create cross-functional teams that work a story from beginning to end; don't separate the roles.

- Test automation infrastructure needs the attention of people with architecture skills - typically the devs. This is a good thing. It starts with your devs providing technical horsepower for the tests and ends with QA teaching them what needs to be tested. Eventually the whole team is involved in the tests, which is what you want.

- For legacy systems, it is sometimes more realistic to create tools that help with manual testing rather than automated tests.

- Beware of automated tests that don't exercise the whole stack. You can end up with tests that are just technical masturbation rather than something you can rely upon to ensure the product works. For us this means Selenium, or testing as close to the UI as possible (see the sketch below).
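Here's one shape a whole-stack check can take (sketch; the endpoints and fields are invented): go in through the real front door and read back through it too, so the test only passes when routing, serialization, business logic, and the database all cooperate.

    import requests

    BASE_URL = "http://staging.example.com"  # assumed full deployment, no mocks

    def test_created_invoice_survives_the_round_trip():
        created = requests.post(
            f"{BASE_URL}/api/invoices",
            json={"customer_id": 42, "amount": 99.50},
        )
        assert created.status_code == 201

        # Read it back through the same public surface it was written through.
        fetched = requests.get(f"{BASE_URL}/api/invoices/{created.json()['id']}")
        assert fetched.status_code == 200
        assert fetched.json()["amount"] == 99.50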

This is probably obvious, but it's worth stating: if you have QA folks who aren't willing to rethink how they work, then you've got a serious uphill climb ahead of you.


Mario Pareja

Aug 15, 2011, 1:52:24 PM
to altnet...@googlegroups.com
That was a pretty useful and candid list of points - thanks for sharing.

James Thigpen

Aug 15, 2011, 7:45:44 PM
to altnet...@googlegroups.com
Thanks everyone, this has been a fantastic amount of feedback to sort through. This was exactly what I was hoping for.

From this list, I see several things my team can try to improve on, and some things that will help me plot a long-term vision for what QA looks like in our organization.

Thanks!

-jt