
How to get to 0.3: QA aspect


Ulf Stroehler

Jun 12, 2006, 1:27:29 PM
to dev-apps...@lists.mozilla.org
Heading towards 0.3, I'd like to elaborate a bit on QA work from a high-
level perspective and how it could look in an ideal world. Nothing's set
in stone; this is just my personal reasoning to give you an idea.

Overview
========
We are looking at two products (SB + LN) right now, with currently
partly different features and code. The goal for 0.3 is to align SB and
Lightning. If that alignment covers user experience, feature set, and
sharing of source code between these two applications, this is the right
time to align and strengthen QA efforts as well.

Goal(s)
=======
Have defined processes and metrics that allow assessment of the overall
quality of both products.

Others have stated the following:
* "spread QA over entire development period [..]" [1]
* [..] "try to supplement nightly testers with something structured too" [1]
* "We should also use community more (blogs, ng, etc)" [1]
* "Have a good idea of the health and stability of the codebase" [2]

Scope
=====
* develop QA community
* have different sets of test cases (smoke, component, and regression tests)
* have different types of test harnesses (manual, automated GUI, and unit
tests)

Test coverage
=============
Focus on new individual features and bug fixes with a combination of
executing component test cases and ad-hoc user testing. Ideally these
steps are performed before a fix/feature is integrated into the
designated stable source branch (e.g. with the introduction of a
"QA-review" flag in Bugzilla).
Reliable regression testing of the existing code base on a regular basis
(involving the QA community).

Test cycles
===========
* Individual feature/bug fix testing on request ("QA-review" flag)
* Regression tests on a daily basis
* Perhaps special test cycles for specific builds (e.g. release
candidate or localization builds)

Test Results
============
Test results and statistics should be visible, e.g. by utilizing Litmus
(help needed).

How do you know when you're done
================================
* when all required features are implemented
* when all P1 and P2 bugs are fixed/verified
* when all stopper bugs are fixed/verified (stopper bug criteria TBD)
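These exit criteria lend themselves to a mechanical check. A minimal
sketch, assuming each bug is represented as a dict with invented
'priority', 'status', and 'blocker' fields (not an actual Bugzilla
schema):

```python
def blocking_bugs(bugs):
    """Return the bugs that still block the release under the criteria
    above: any P1/P2 or stopper bug that is not yet fixed/verified.

    The dict keys and status values are illustrative only.
    """
    done = {"RESOLVED", "VERIFIED"}
    return [b for b in bugs
            if (b["priority"] in ("P1", "P2") or b["blocker"])
            and b["status"] not in done]

def release_ready(bugs):
    """A release is 'done' when no blocking bugs remain."""
    return not blocking_bugs(bugs)
```

The stopper criteria themselves (what counts as a 'blocker') would still
have to be decided by hand, as noted above.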


Comments?

-Ulf

[1]http://wiki.mozilla.org/Calendar:Status_Meetings:2006-06-08#How_to_get_to_0.3.3F
[2]http://groups.google.com/group/mozilla.dev.apps.calendar/browse_thread/thread/7b2610ffdde88549/08906d667406bf50?lnk=st&q=%22Organizing+Lightning+QA%22&rnum=2#08906d667406bf50

Clint Talbert

Jun 15, 2006, 11:38:47 AM
Ulf Stroehler wrote:
...snip...

> If that alignment covers user experience, feature set and
> sharing of source code among these two applications, this is the right
> time to align and strengthen QA efforts also.
I think you're exactly right.

>
> Goal(s)
> =======
> have defined processes and metrics that allow assessment of the overall
> quality of both products.
>
> Others have stated the following:
> * "spread QA over entire development period [..]" [1]
> * [..] "try to supplement nightly testers with something structured too" [1]
> * "We should also use community more (blogs, ng, etc)" [1]
> * "Have a good idea of the health and stability of the codebase" [2]

These are excellent goals. I think we need to distill these so that they
are a little more specific and measurable. Perhaps we could re-state
these goals like so:
1. Formalize QA test procedures
A. Involve/Develop Calendar QA community into a structured QA
organization
i. Find the people already doing calendar QA, perhaps that could
be done by holding a once a week calendar QA chat on IRC.
ii. Increase QA visibility: Blog and post in the NG about QA matters
      iii. Institute a bug summary in the calendar weekly meeting, where
a QA calendar person briefly outlines the bug stats since last week (x
closed, y open, z in test, etc.)

B. Spread QA work out over the entire spectrum of development
i. Define smoke and component tests for existing features and
components.
ii. Have component based test cases for new features
iii. Have QA be a voice in feature design
iv. Install a regression test/unit test harness to check builds
automatically
v. Create automated GUI tests
2. Have metrics for health and stability of the codebase
A. Report bug metrics
B. Perform code-coverage analysis of our tests
C. Perform memory profiling of our tests (looking for leaks)
D. Perform stress testing (giant calendars, for example)
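The weekly bug summary in 1.A.iii and the bug metrics in 2.A could be
produced by a small script. A sketch, assuming a CSV export from
Bugzilla with a 'bug_status' column (the column name and status values
are guesses at the export format, not verified):

```python
import csv
from collections import Counter

def weekly_bug_stats(csv_path):
    """Tally bug statuses from a Bugzilla CSV export into the
    'x closed, y open' style summary used in the weekly meeting.

    'bug_status' and the status strings are assumed names.
    """
    with open(csv_path, newline="") as f:
        counts = Counter(row["bug_status"] for row in csv.DictReader(f))
    # Counter returns 0 for statuses absent from the export
    return {
        "closed": counts["RESOLVED"] + counts["VERIFIED"],
        "open": counts["NEW"] + counts["ASSIGNED"] + counts["REOPENED"],
    }
```

Running this against last week's and this week's exports and diffing the
two dicts would give the week-over-week numbers for the meeting.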

These are my ideas for how to make these broad goals into specific
goals, with small sub-goals that we can do bit by bit to get calendar QA
where we'd like it to be. What do you think?

> Test coverage
> =============
> Focus on new individual feature and bug fixes with a combination of
> executing component test cases and ad-hoc user testing. Ideally these
> steps are performed beforehand a fix/feature is integrated into the
> considered stable source branch (e.g. with the introduction of a
> "QA-review" flag in bugzilla)
> Reliably regression testing of the existing code base on a regular basis
> (involve QA community)
>

These are great ideas that should be included in the goal breakout
above, I think. But I'm not sure where you see this fitting in. It
seems to me it could go in either 1 or 2.

> Test Results
> ============
> test results and statistics should be visible e.g. by utilizing Litmus
> (help needed)

Definitely. I have exchanged some email with Chris Cooper. He's very
willing to help us get testcases defined and imported into the Litmus
tool. He's also fixed a few problems I noted with it. (You can now
search for testcases without having to remember the testcase ID number!).


These are all great ideas, Ulf.

I think the question is: where do we go from here?

Do you think the goals are fleshed out enough? If not, what needs adding
or removing?

Which goal should we start with?

Should we go ahead with starting up a well-publicized calendar QA chat
once a week or something, just to help flush these folks out of the
woodwork?

Thanks,
Clint

Simon Paquet

Jun 17, 2006, 9:51:32 AM
And on the seventh day Ulf Stroehler spoke:

Hi Ulf,

I meant to comment on your post way earlier, but didn't find the time.
Thanks for the great wrap-up. I just have some minor comments.

>heading towards 0.3 I'd like to elaborate a bit on QA work from a high
>level perspective and how it could look in an ideal world. Nothing's set
>in stone, just my personal reasoning to give you an idea.
>
>Overview
>========
>we are looking at two products (SB + LN) right now, with currently
>partly different features and code. The goal for 0.3 is to align SB and
>Lightning. If that alignment covers user experience, feature set and
>sharing of source code among these two applications, this is the right
>time to align and strengthen QA efforts also.

Right.

>Goal(s)
>=======
>have defined processes and metrics that allow assessment of the overall
>quality of both products.
>
>Others have stated the following:
>* "spread QA over entire development period [..]" [1]
>* [..] "try to supplement nightly testers with something structured too" [1]
>* "We should also use community more (blogs, ng, etc)" [1]
>* "Have a good idea of the health and stability of the codebase" [2]

While these are very worthy goals, I think it is not very likely that
we can reach these goals and keep them going during the remaining
0.3 cycle.

To be honest, if we could reach these goals when we reach 1.0 I would be
a very happy man.

>Scope
>=====
>* develop QA community
>* have different sets of test cases (smoke-, component-, regression tests)
>* have different types of test harnesses (manual, automatic GUI and unit
> tests)

It would probably be best to try to speak to the Mozilla Corporation QA
staff (Marcia Knous, Tracy Walker) to get a feeling for how much work
would be involved in that.

>How do you know when you're done
>================================
>when all required features are implemented
>when all P1 and P2 bugs are fixed/verified
>when all stopper bugs are fixed/verified (stopper bug criteria tbd)

AFAIK we haven't really utilized the P1-P5 priority fields in Bugzilla.
Perhaps we should start to use them. What do the developers think?

Simon
--
Sunbird/Lightning/Calendar Website Maintainer:
http://www.mozilla.org/projects/calendar
Sunbird/Calendar blog: http://weblogs.mozillazine.org/calendar

Ulf Stroehler

Jun 19, 2006, 5:39:11 AM
Salut Simon,

thanks for your post.

Simon Paquet wrote:

>> Goal(s)
>> =======
>> have defined processes and metrics that allow assessment of the overall
>> quality of both products.
>>
>> Others have stated the following:
>> * "spread QA over entire development period [..]" [1]
>> * [..] "try to supplement nightly testers with something structured too" [1]
>> * "We should also use community more (blogs, ng, etc)" [1]
>> * "Have a good idea of the health and stability of the codebase" [2]
>
> While these are very worthy goals, I think it is not very likely that
> we can reach these goals and keep them going during the remaining
> 0.3 cycle.
>
> To be honest, if we could reach these goals when we reach 1.0 I would be
> a very happy man.

Completely agreed, these are long-term goals.

>> Scope
>> =====
>> * develop QA community
>> * have different sets of test cases (smoke-, component-, regression tests)
>> * have different types of test harnesses (manual, automatic GUI and unit
>> tests)
>
> It would probably be best to try to speak to the Mozilla Corporation QA
> staff (Marcia Knous, Tracy Walker) to get a feeling for how much work
> would be involved in that.

Would you be able to make the contact? I very much like Clint's idea of
a regular calendar QA chat. I'd be glad if Marcia and Tracy (and anyone
interested) could help kick off a calendar QA community.

Thanks,
-Ulf

Ulf Stroehler

Jun 20, 2006, 7:56:52 AM
Thanks, Clint, for working out the goal statement and rendering it more
precisely. I completely agree with your #1 and #2.

> 1. Formalize QA test procedures

> 2. Have metrics for health and stability of the codebase


Clint Talbert wrote:

> These are excellent goals. I think we need to distill these so that they
> are a little more specific and measurable. Perhaps we could re-state
> these goals like so:
> 1. Formalize QA test procedures
> A. Involve/Develop Calendar QA community into a structured QA
> organization
> i. Find the people already doing calendar QA, perhaps that could
> be done by holding a once a week calendar QA chat on IRC.
> ii. Increase QA visibility: Blog and post in the NG about QA matters
>      iii. Institute a bug summary in the calendar weekly meeting,
> where a QA calendar person briefly outlines the bug stats since last
> week (x closed, y open, z in test, etc.)
>
> B. Spread QA work out over the entire spectrum of development
> i. Define smoke and component tests for existing features and
> components.
> ii. Have component based test cases for new features
> iii. Have QA be a voice in feature design
> iv. Install a regression test/unit test harness to check builds
> automatically
> v. Create automated GUI tests

brilliant.

> 2. Have metrics for health and stability of the codebase
> A. Report bug metrics
> B. Perform code-coverage analysis of our tests
> C. Perform memory profiling of our tests (looking for leaks)
> D. Perform stress testing (giant calendars, for example)

great.

>
> These are my ideas for how to make these broad goals into specific
> goals, with small sub-goals that we can do bit by bit to get calendar QA
> where we'd like it to be. What do you think?
>

This all sounds great to me and states exactly what I have in mind too.
I had also worked out the goals in more detail, but these are almost
congruent with yours and differ only in wording.

>> Test coverage
>> =============
>> Focus on new individual features and bug fixes with a combination of
>> executing component test cases and ad-hoc user testing. Ideally these
>> steps are performed before a fix/feature is integrated into the
>> designated stable source branch (e.g. with the introduction of a
>> "QA-review" flag in Bugzilla).
>> Reliable regression testing of the existing code base on a regular basis
>> (involving the QA community).
>>
> These are great ideas that should be included in the goal breakout
> above, I think. But, I'm not sure where you see this fitting in. It
> seems to me it could go either in 1 or 2.
>

This was intended as part of the QA procedure formalization, hence #1.
Perhaps this is a bit much for a few sentences. I'd suggest coming back
to this point once we have established the regular QA chats you
proposed.

>> Test Results
>> ============
>> test results and statistics should be visible e.g. by utilizing Litmus
>> (help needed)
>
> Definitely. I have exchanged some email with Chris Cooper. He's very
> willing to help us get testcases defined and imported into the Litmus
> tool. He's also fixed a few problems I noted with it. (You can now
> search for testcases without having to remember the testcase ID number!).
>
>
> These are all great ideas, Ulf.
>
> I think the question is: where do we go from here?
>
> Do you think the goals are fleshed out enough? If not, what needs adding
> or removing?
>

I don't think there's much we're currently missing. We agreed on goals
and broke them into smaller pieces. If we feel any of these need
additions or a change in direction, we could discuss that in the QA
chat you proposed. Besides, what we started here is already developing
into a small (and not yet complete) test plan. We have already begun
working out QA procedures top to bottom. The next steps would probably
be to get buy-in from other people, then publish our ideas and start
executing.

> Which goal should we start with?
>
> Should we go ahead with starting up a well-publicized calendar QA chat
> once a week or something, just to help flush these folks out of the
> woodwork?
>

Yeah, starting with the chat sounds good to me.

> Thanks,
> Clint

What do other people think about this?

Thanks,
-Ulf


Clint Talbert

Jun 20, 2006, 12:25:19 PM
Ulf, Simon,

I have added this post to the agenda for tomorrow's status meeting.
In general, I wanted to see if anyone has comments about the QA goals as
we have decided on them, and I hope to find a good time to hold the QA
chats from week to week.

http://wiki.mozilla.org/Calendar:Status_Meetings:2006-06-21

Clint

Clint Talbert

Jun 22, 2006, 11:26:08 AM

From that meeting, we decided to go ahead with the QA chat. Scheduling
the first of these QA chats is going to be delicate with the July 4th
holiday in the U.S.

I would like to propose either of the following times for our first QA Chat:
Thursday, June 29, 16:00 UTC
Thursday, July 6, 16:00 UTC

You can use this link to see what time 16:00 UTC is in your location:
http://www.timeanddate.com/worldclock/fixedtime.html?month=6&day=29&year=2006&hour=16&min=0&sec=0&p1=0
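For readers who prefer to compute it themselves, converting the meeting
time to local time is a few lines with Python's standard library (the
function name is mine; only stdlib `datetime` is used):

```python
from datetime import datetime, timezone

def meeting_local_time(year, month, day, hour_utc):
    """Return the given UTC meeting time expressed in the local
    timezone of the machine running the script."""
    utc_dt = datetime(year, month, day, hour_utc, tzinfo=timezone.utc)
    # astimezone() with no argument converts to the system local zone
    return utc_dt.astimezone()
```

For example, `meeting_local_time(2006, 6, 29, 16)` gives the first chat
time in your own timezone.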

Note to all readers: If you've ever been interested in testing Lightning
and Sunbird, if you've been looking for a way to help out, then please
join us for this short IRC-based meeting. If you can't make it, please
respond to this post so that we can invite you to the next QA Chat.

Feel free to propose other dates/times.

Thanks,
Clint

Clint Talbert

Jun 23, 2006, 7:06:59 PM

The first calendar QA Chat will be on:

> Thursday, June 29, 16:00 UTC
>
> You can use this link to see what time 16:00 UTC is in your location:
> http://www.timeanddate.com/worldclock/fixedtime.html?month=6&day=29&year=2006&hour=16&min=0&sec=0&p1=0
>

We will discuss:
Calendar QA goals
Types of tests/overall test vision.
Ways to bring the calendar QA community together
-->How to formalize QA efforts so that effort is not duplicated
-->How to determine the amount of test coverage we have on the
calendar products
-->How to begin writing testcases

Other thoughts and ideas?

It will be an hour long, at most.
We will be chatting in the #calendar-qa channel.

Hope to see you there.

Clint


Clint Talbert

Jun 27, 2006, 7:38:35 PM
Ulf Stroehler wrote:

> Simon Paquet wrote:
>> It would probably be best to try to speak to the Mozilla Corporation QA
>> staff (Marcia Knous, Tracy Walker) to get a feeling for how much work
>> would be involved in that.
>
> Would you be able to make the contact? I very much like Clint's idea of
> a regular calendar QA chat. I'd be glad if Marcia and Tracy (and anyone
> interested) could help kick off a calendar QA community.

Since I had Marcia's contact information and since she was also
mentioned in our last status meeting, I contacted her. She said that
she'd love to participate in one of these chats, but may not be
available this Thursday due to a conflict.

Clint
