Of all the challenges we've faced in agile, figuring out how QA
works has been the biggest. I can't say we've totally figured it out,
but here are some things that have worked:
- Create cross-functional teams that work a story from beginning to
end; don't separate the roles.
- Test automation infrastructure needs the attention of people with
architecture skills - typically the devs. This is a good thing. It
starts with your devs providing technical horsepower to the tests and
ends with QA teaching them what needs to be tested. Eventually the
whole team is involved in the tests, which is what you want.
- For legacy systems sometimes it is more realistic to create tools
that help with manual testing rather than automated tests.
- Beware of automated tests that don't exercise the whole stack. You
can end up with tests that are just technical masturbation rather than
something you can rely on to ensure the product works. For us that
means Selenium, or testing as close to the UI as possible (see the
sketch after this list).
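For illustration, here's a minimal sketch of the kind of whole-stack
check I mean, using Selenium's Python bindings. The URL, element IDs,
and credentials are hypothetical placeholders, not a real deployment:

    # Whole-stack smoke test: drives a real browser against a running
    # deployment, so it exercises the UI, app server, and database together.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Firefox()
    try:
        driver.get("http://staging.example.com/login")

        # Log in through the real UI instead of stubbing the auth layer.
        driver.find_element(By.ID, "username").send_keys("qa-user")
        driver.find_element(By.ID, "password").send_keys("secret")
        driver.find_element(By.ID, "login-button").click()

        # Assert on something only the full stack can produce: a page
        # rendered from data that actually came back from the server.
        heading = driver.find_element(By.ID, "dashboard-title").text
        assert "Dashboard" in heading, "login did not reach the dashboard"
    finally:
        driver.quit()

A test like this is slower than a unit test, but when it passes you
know the product actually works end to end.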
This is probably obvious, but it's worth stating: if you have QA folks
who aren't willing to rethink how they work, then you've got a serious
uphill climb ahead of you.
On Aug 14, 1:28 am, Alan Ridlehoover <aridlehoo...@me.com> wrote:
> Agreed. "Testers" don't go first. "Testing" does.
> On Aug 13, 2011, at 8:26 PM, Kevin Klinemeier wrote:
>
> > The expression I like is this:
>
> > The #1 priority of agile testing is to provide immediate feedback to these two questions: Is anything broken? Are we done yet?
>
> > -Kevin
>
> >> On Aug 13, 2011, at 9:10 AM, Shawn Neal <neal.sh...@gmail.com> wrote:
>
> >> Letting the testers get out in front of the devs is an interesting idea. Almost sounds like a business analyst who happens to also test the end product. I guess that's not all that different from what we're now doing at Daptiv, which seems to be working really well. At the beginning of the sprint (before commit) we'll spend a day or two breaking stories down, test casing, and tasking. The entire team is responsible for owning the test cases, not just the tester or SDET on the team. This has helped in lots of ways.
>
> >> Stories are smaller and better defined. The devs are acutely aware of the need to get value to the manual/ad hoc tester ASAP, and having smaller stories really helps avoid the "throw it over the wall at the end of the sprint" syndrome.
>
> >> Test casing by the entire team before commit has really helped us size stories more accurately. While the acceptance criteria often dictate the 'happy path' tasks, test casing often dictates the 'unhappy path' tasks, which directly affect story size.
>
> >> Having the entire team aware of and owning the test cases has dropped our defect rate significantly. We try to automate all our test cases (Selenium, REST API, QUnit), which the entire team is responsible for, not just the SDET. It's not uncommon to finish a story without a single defect being logged.
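(For illustration, an automated REST API check of the sort described above might look like the sketch below, in Python with the requests library. The endpoint, payload, and field names are hypothetical, not Daptiv's actual API.)

    # Hypothetical REST API test: create a resource, then read it back.
    import requests

    BASE = "http://staging.example.com/api"

    def test_create_and_fetch_project():
        # Create through the public API, exactly as a client would.
        created = requests.post(BASE + "/projects", json={"name": "smoke-test"})
        assert created.status_code == 201
        project_id = created.json()["id"]

        # Read it back and verify the round trip.
        fetched = requests.get(BASE + "/projects/" + str(project_id))
        assert fetched.status_code == 200
        assert fetched.json()["name"] == "smoke-test"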
>
> >> Stories are almost always completed at the end of the sprint, which wasn't always the case before. And when stories are done, they're done done, and everyone knows it.
>
> >> What it really comes down to is good teamwork. There's mutual respect between devs and testers, and it's understood that they all bring value to the table, some of it coding skills and some of it domain expertise.
>
> >> On Sat, Aug 13, 2011 at 8:06 AM, Chris Bilson <cbil...@pobox.com> wrote:
> >> TL;DR: Let testers go first, make a test plan, developers follow, everyone lives happily ever after.
>
> >> One thing a team I was on tried before (at Parametric) is letting QA get in front of developers in the value chain, i.e., they talk with the business customers, figure out how to test, and force the thinking through of all the features wanted. Testers are usually better at disciplined thinking and at projecting themselves into the role of someone using the system than goofball developers are. Our testers were seasoned, too, so they were able to communicate with the business people really well.
>
> >> Once we have a test plan, we start building things. Regardless of whether we're doing manual or automated testing, developers have a clearer idea of what they're shooting for when they see the test plan / automated test suite sitting there in front of them, and testers don't feel like they're being brought in at the last minute with little information and no idea of which parts of the system are critical to test.
>
> >> I think if you do it that way, you increase the odds that what the developers think is done is actually done, which means every process after the test plan is created is smoother and more efficient. Good developers will realize that having a test plan makes their job easier, build to the plan, and get done faster. For less good developers, it will be obvious to everyone where the problem is.
>
> >> I only got to see this in action for a few iterations. I think it was working well, but I'm not sure what happened after I left. We don't really have traditional testers where I am now - business experts do the testing, so it's not the same problem.
>
> >> Maybe this "test plan first" way of doing things is what the model-driven development movement should have been driving toward: modeling in terms of test cases seems like it would be a lot more valuable than modeling in terms of software components, and good testers can kick most architects' butts.
>
> >> Having testers who don't understand the business is a real drag on productivity. In my mind they should know the business as well as or better than the developers. If that means training of some kind, spending more time with business people, or whatever, it should happen. If you aren't letting them grow in that direction, you are wasting effort and increasing stress on the team.
>
> >> --c
>
> >> P.S.: I think this blog post was what set me off: http://thecleancoder.blogspot.com/2010/08/qa-or-when-do-you-flip-panc...
>
> >> P.P.S.: Sorry this was way longer than I originally intended.
>
> >> On Fri, Aug 12, 2011 at 10:25, James Thigpen <james.r.thig...@gmail.com> wrote:
> >> Thanks Adron, great feedback.
>
> >> We're not far off from this. Our testing is feature-oriented and exploratory in nature (to the best of my knowledge). What are some examples of things that should be automated? Are we talking about Selenium tests or something like that? We've not had much luck with Selenium as far as the value it provides goes, but it's possible we're doing it wrong.
>
> >> We're also a continuous deployment environment, so we acutely feel any bottleneck that occurs in our development process. Does anyone have any tips for helping keep cycle times to a minimum w.r.t. QA? Does it happen in staging? Does it happen in production? Is it asynchronous to the product delivery pipeline?
>
> >> -jt
>
> >> On Fri, Aug 12, 2011 at 9:54 AM, Adron Hall <adronh...@gmail.com> wrote:
> >> IMHO, if QA or "Testing" exists in a company, I'm of the ATDD camp (which is not tied to TDD). I'm not sure I'm sold on every single test ideal of ATDD, but I'm a big fan of the general attitude. Basically...
> >> Test/QA needs to be involved from the inception of a project; as soon as developers or even paper-prototyping UX people are working on a project, QA/Test needs to be brought in too.
> >> Exploratory testing (along with traditional automated testing, etc.) is fundamental to usability and high-quality applications, and over time it helps maintainability.
> >> UX + QA + Dev must communicate, communicate, and communicate some more! :)
> >> Project managers and business analysts shouldn't be involved at this level, but if they are, they should ONLY be clearing the path and not doing any of that "allocating resources, i.e. people" at this level. If the team isn't self-organizing... then, well, you've got other issues aside from where QA/Test fits into agile. :P
> >> What is not agile and not very helpful:
> >> - Eternal click, click, click to confirm every single thing.
> >> - A lack of automation for things that should be automated.
> >> - The actual UI testing doesn't seem to yield much: exploratory testing is good, UI clicky click click is bad.
> >> - Putting any disconnect in the communication between UX + QA + Dev, i.e., do NOT put analysts or project managers in between these groups.
> >> -Adron
>
> >> On Fri, Aug 12, 2011 at 9:32 AM, James Thigpen <james.r.thig...@gmail.com> wrote:
> >> How does QA fit into an agile/lean process? How can QA function in an iterative environment? How can QA be effective while not being a place you throw things over the wall to? How are you doing QA?
>
> >> It's been a while since this thread went around, so I wanted to see how people were doing it these days.
>
> >> Thanks,
>
> >> -jt
>