
Usefulness of the VERIFIED state?

Justin Dolske

Sep 30, 2009, 7:14:34 PM
to
[Followups to m.d.quality]

Should we generally abolish the VERIFIED state/keyword/flag of bugs, and
instead move to a model where verification is only done as a special case?

The current process strikes me as not having a good cost/benefit ratio.
It seems like it must take a significant amount of QA time, but as a
developer it doesn't feel like this verification step adds much to my
confidence that we're shipping solid code. Especially for bugfixes which
include automated tests, or are in areas with extensive existing test
coverage. In fact, sometimes bugs are verified with comments along the
lines of "it was checked in and tests passed." I've even seen some
verified just because there have been no more comments in the bug since
checkin.

One thing I've never seen happen is a bug being reopened because the
verification step found problems. I assume it happens but is rare -- do
QA or drivers track how often this happens?

So, I have to wonder if there's a better way to improve quality by
changing the current verification process. For example:

* Use a "verification-wanted" flag, for specific bugs that would benefit
from having a real human focus on the fix and try to break it in unusual
ways. Maybe drivers would want to liberally add this to branch bugs.

* Spend the newfound time on general branch QA (since we've historically
had problems with low numbers of branch beta users, and introducing
unexpected regressions).

Thoughts? What do others think of the way we use the VERIFIED state? Is
the current process fine, or can we make it more useful?

Justin

John J Barton

Sep 30, 2009, 7:34:45 PM
to
Justin Dolske wrote:
>
> Thoughts? What do others think of the way we use the VERIFIED state? Is
> the current process fine, or can we make it more useful?

Just a comment: in the Firebug project we use VERIFIED for only one
purpose: the reporter of the bug tried the fix and it worked for them,
and their contribution is acknowledged. So for us, VERIFIED has nothing
to do with the software and everything to do with community: we are
thanking them for being the one in a thousand who took the time to
report enough for us to fix the bug. I think it helps, but I don't have
a survey or such to back that up.

jjb

L. David Baron

Sep 30, 2009, 9:05:43 PM
to dev-q...@lists.mozilla.org
On Wednesday 2009-09-30 16:14 -0700, Justin Dolske wrote:
> The current process strikes me as not having a good cost/benefit ratio.
> It seems like it must take a significant amount of QA time, but as a
> developer it doesn't feel like this verification step adds much to my
> confidence that we're shipping solid code. Especially for bugfixes which
> include automated tests, or are in areas with extensive existing test
> coverage. In fact, sometimes bugs are verified with comments along the
> lines of "it was checked in and tests passed." I've even seen some
> verified just because there have been no more comments in the bug since
> checkin.

Strongly agreed.

> One thing I've never seen happen is a bug being reopened because the
> verification step found problems. I assume it happens but is rare -- do
> QA or drivers track how often this happens?

I don't remember seeing this happen for quite a few years, at least
not for reasons other than confusion about which builds should
contain the fix.


I think one of the things that could be useful about verification is
that it could involve testing of other related areas and development
of further tests. This is particularly useful while the code is
still fresh in the developer's mind and therefore the cost of
returning to fix additional issues is lower.

I think verification is useful in so far as it is an efficient way
of discovering important problems. The problems discovered could be
that the original bug isn't fixed at all, isn't completely fixed, or
that there are other related problems. I think the way it's
practiced in the Mozilla project today is not useful, given that I
don't see problems being discovered as a result of bug verification.
And I agree that much of the energy spent on it could be better
spent doing other things that would be more efficient ways of
finding important problems or finding more information about known
problems that can lead to a fix.

-David

--
L. David Baron http://dbaron.org/
Mozilla Corporation http://www.mozilla.com/

Juan Becerra

Sep 30, 2009, 9:50:26 PM
to L. David Baron, dev-q...@lists.mozilla.org
On 9/30/09 6:05 PM, L. David Baron wrote:
> On Wednesday 2009-09-30 16:14 -0700, Justin Dolske wrote:
>
>> The current process strikes me as not having a good cost/benefit ratio.
>> It seems like it must take a significant amount of QA time, but as a
>> developer it doesn't feel like this verification step adds much to my
>> confidence that we're shipping solid code. Especially for bugfixes which
>> include automated tests, or are in areas with extensive existing test
>> coverage. In fact, sometimes bugs are verified with comments along the
>> lines of "it was checked in and tests passed." I've even seen some
>> verified just because there have been no more comments in the bug since
>> checkin.
>>
> Strongly agreed.

>
>
>> One thing I've never seen happen is a bug being reopened because the
>> verification step found problems. I assume it happens but is rare -- do
>> QA or drivers track how often this happens?
>>
> I don't remember seeing this happen for quite a few years, at least
> not for reasons other than confusion about which builds should
> contain the fix.
>
>
>
This does not happen often, but recently, during 3.5.1 and 3.5.2 testing,
we found that at least one fix had not applied correctly, and verifying
another led to finding a related issue: bug 502604 and bug 503144,
respectively.

But before this I can't remember the last time this happened.

> I think one of the things that could be useful about verification is
> that it could involve testing of other related areas and development
> of further tests. This is particularly useful while the code is
> still fresh in the developer's mind and therefore the cost of
> returning to fix additional issues is lower.
>
>
> I think verification is useful in so far as it is an efficient way
> of discovering important problems. The problems discovered could be
> that the original bug isn't fixed at all, isn't completely fixed, or
> that there are other related problems. I think the way it's
> practiced in the Mozilla project today is not useful, given that I
> don't see problems being discovered as a result of bug verification.
> And I agree that much of the energy spent on it could be better
> spent doing other things that would be more efficient ways of
> finding important problems or finding more information about known
> problems that can lead to a fix.
>
> -David
>
>

It would be great if we could spend less time doing bug verifications,
and more time testing around bug fixes, which could lead us to discover
hidden problems or regressions. Traditionally it has been difficult to
scope out where to test around a fix, and in the case of bug fixes like
the ones mentioned above it is especially hard, because it isn't obvious
to QA or the developers where the hidden problems could be.

It might be even better if we could focus our attention on gathering
more information about known problems, as David says, as well as on
following a bug through its life-cycle, as Shaver has suggested with
his idea of human-based testing.

juanb

Murali Nandigama

Sep 30, 2009, 10:39:08 PM
to
It is very easy to generate stats on the number of bugs reopened after
they went through RESOLVED/FIXED, if you want to test your hypothesis.

It is said in the world of QA [not just within Mozilla] that a fix
which is not verified has gone through "faith-based" testing. That
is why the motto of QA is:

In God we trust ... everything else we TEST :)

Cheers
Murali

Murali Nandigama

Sep 30, 2009, 11:03:02 PM
to
I just ran a quick query on the Bugzilla database back end [dump up to
September 23,2009] . There are 2522 instances of bugs being moved from
RESOLVED to REOPENED since 2009-01-01. This includes 2276 distinct bug
Ids. So, some got reopened more than once apparently.
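
Roughly, that count comes from the status-change history. Against the
stock Bugzilla schema -- bugs_activity holds one row per field change,
with the old value in "removed" and the new value in "added", joined to
fielddefs for the field name -- it looks something like the following.
This is an illustrative sketch only; exact table and column names can
differ between Bugzilla versions, so don't treat it as the exact query
that was run.

  SELECT COUNT(*)                  AS transitions,
         COUNT(DISTINCT ba.bug_id) AS distinct_bugs
  FROM   bugs_activity ba
         JOIN fielddefs fd ON fd.id = ba.fieldid
  WHERE  fd.name     = 'bug_status'   -- status-field changes only
    AND  ba.removed  = 'RESOLVED'     -- old value
    AND  ba.added    = 'REOPENED'     -- new value
    AND  ba.bug_when >= '2009-01-01';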

A query on the front end Bugzilla interface
https://bugzilla.mozilla.org/buglist.cgi?query_format=advanced&short_desc_type=allwordssubstr&short_desc=&long_desc_type=allwordssubstr&long_desc=&bug_file_loc_type=allwordssubstr&bug_file_loc=&status_whiteboard_type=allwordssubstr&status_whiteboard=&keywords_type=allwords&keywords=&emailtype1=substring&email1=&emailtype2=substring&email2=&bugidtype=include&bug_id=&votes=&chfieldfrom=2009-01-01&chfieldto=Now&chfield=bug_status&chfieldvalue=REOPENED&cmdtype=doit&order=Reuse+same+sort+as+last+time&field0-0-0=noop&type0-0-0=noop&value0-0-0=

gets a count of 2546 bugs that have been reopened since 2009-01-01.

Thanks
Murali


L. David Baron

Sep 30, 2009, 11:08:36 PM
to dev-q...@lists.mozilla.org
On Wednesday 2009-09-30 20:03 -0700, Murali Nandigama wrote:
> I just ran a quick query on the Bugzilla database back end [dump up to
> September 23,2009] . There are 2522 instances of bugs being moved from
> RESOLVED to REOPENED since 2009-01-01. This includes 2276 distinct bug
> Ids. So, some got reopened more than once apparently.

But how many of those were part of the verification process, and how
many of them were from other things?

And do we know how many of those were actually correctly reopened?

Murali Nandigama

Oct 1, 2009, 1:01:26 AM
to

https://bugzilla.mozilla.org/buglist.cgi?query_format=advanced&short_desc_type=allwordssubstr&short_desc=&long_desc_type=allwordssubstr&long_desc=&bug_file_loc_type=allwordssubstr&bug_file_loc=&status_whiteboard_type=allwordssubstr&status_whiteboard=&keywords_type=allwords&keywords=&emailtype1=substring&email1=&emailtype2=substring&email2=&bugidtype=include&bug_id=&votes=&chfieldfrom=2009-01-01&chfieldto=Now&chfield=bug_status&chfieldvalue=REOPENED&cmdtype=doit&order=Reuse+same+sort+as+last+time&field0-0-0=resolution&type0-0-0=changedfrom&value0-0-0=FIXED

There are 1787 bugs that are actually RESOLVED->FIXED and then got
REOPENED. This means either QA or the Reporter actually took pains to
validate the RESOLUTION and decided to REOPEN it. Whether they are
correctly REOPENED or not is a subjective matter. There are many
examples where bugs go through a RESOLVED->REOPENED->RESOLVED->
REOPENED->UNCONFIRMED cycle; that means it is a stalemate situation.
The question is, should we monitor them or not?
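
For reference, a rough back-end equivalent of that front-end query, with
the same schema caveats as the sketch in my earlier post (illustrative
only, not necessarily the exact query that was run):

  SELECT COUNT(DISTINCT st.bug_id) AS reopened_after_fixed
  FROM   bugs_activity st
         JOIN fielddefs fs     ON fs.id = st.fieldid AND fs.name = 'bug_status'
         JOIN bugs_activity re ON re.bug_id = st.bug_id
         JOIN fielddefs fr     ON fr.id = re.fieldid AND fr.name = 'resolution'
  WHERE  st.added    = 'REOPENED'     -- status changed to REOPENED ...
    AND  st.bug_when >= '2009-01-01'  -- ... during the window
    AND  re.removed  = 'FIXED';       -- and resolution moved away from FIXED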

But to answer the original question, it is NOT uncommon to REOPEN bugs
during VERIFICATION and so there is benefit in NOT abandoning the
process.

Tanner M. Young

Oct 1, 2009, 1:21:34 AM
to dev-q...@lists.mozilla.org
On Wed, Sep 30, 2009 at 11:01 PM, Murali Nandigama <
murali.n...@gmail.com> wrote:

>
>
> But to answer the original question, it is NOT uncommon to REOPEN bugs
> during VERIFICATION and so there is benefit in NOT abandoning the
> process.

> _______________________________________________
> dev-quality mailing list
> dev-q...@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-quality
>


I agree.

Another observation: MozWebQA reopens a lot of verified bugs, because
we deal with web site pushes, which often break; therefore, many of the
RESOLVED->REOPENED->RESOLVED->REOPENED bugs are from us.

It also depends on who is doing the QA work and what project it is: some
projects care more about verification than others do because of the QA
presence and the nature of verifications.

For me, it would be more helpful to give bugs clearer, simpler testcases
and steps to reproduce than to remove the ability to verify bugs, since
the bugs I verify require very specific (or convoluted) environments,
which I have to create in order to verify them. Furthermore, I have
encountered enough issues that weren't totally fixed that it merits
keeping the practice.

Tanner M. Young

Boris Zbarsky

Oct 1, 2009, 1:33:29 AM
to
On 10/1/09 1:01 AM, Murali Nandigama wrote:
> There are 1787 bugs that are actually RESOLVED->FIXED and then got
> REOPENED. This means either QA or the Reporter actually took pains to
> validate the RESOLUTION and decided to REOPEN it.

Or the fix turned the tree orange or caused performance regressions and
was backed out...

> But to answer the original question, it is NOT uncommon to REOPEN bugs
> during VERIFICATION and so there is benefit in NOT abandoning the
> process.

I just checked the first 5 bugs (in bug# order: 22310, 32931, 37382,
39912) on the list. Not a scientific sample at all, but:

First one was reopened because it had been marked FIXED incorrectly
(only a partial fix was checked in) and the patch author requested that
it be reopened. I guess that counts as verification (though not QA
verification).

Second one was reopened because someone had incorrectly bulk-marked a
bunch of mailnews bugs "resolved expired". In particular, this bug did
NOT go from FIXED to REOPENED during your timeframe. It did go from
FIXED to reopened in November 2000, as part of a QA verification step
back then.

Third one, similar story (reopened from FIXED in 2000), though there
someone just drive-by resolved it (incorrectly) and reporter immediately
reopened.

Fourth one was never reopened from a FIXED state (was moved from FIXED
to WORKSFORME, however).

Fifth one was incorrectly resolved fixed by Netscape QA in 2001 and
reopened immediately. Otherwise, same story as second through fourth.

If I look at the last 5 bugs in the query (519542, 519582, 519613,
519618, 519723), in hopes of avoiding the mailnews expiration thing,
I see the following:

First one: reopened because it caused a regression (the correct procedure
is to just handle the regression in the separate regression bug, which
was already filed at that point).

Second one: Reopened more or less as part of verification (as in "we
haven't seen this take effect") though not by QA.

Third one: Reopened by reporter because the fix was incomplete.

Fourth one: Reopened by Rey. This might have been verification; hard to
tell.

Fifth one: Reopened because the fix did not fix all cases, apparently
(though not by QA that I can tell).

It doesn't help that of these 5 bugs not a single one was a product bug;
they were all infrastructure bugs. Does QA even do infrastructure
verification? If not, why were such bugs included in the query?

Based on the 10 bugs above, I think the claim that QA verification
commonly reopens bugs is pretty tenuous... Having a query with more
relevant bugs (e.g. ones actually reopened from FIXED recently and in
products where QA actually does verification) would possibly be more
interesting.

-Boris

Boris Zbarsky

Oct 1, 2009, 1:41:30 AM
to
On 10/1/09 1:21 AM, Tanner M. Young wrote:
> It also depends on who is doing the QA work and what project it is: some
> projects care more about verification than others do because of the QA
> presence and the nature of verifications.

OK, that I agree with. I suspect some of our products have much more
effective verification than others; my experience is mostly with the
Core product, where verification has usually been of limited usefulness
(both now and back in the Netscape days, iirc).

I do think the question of whether VERIFIED (and verification as it is
practiced in general) is worth it in Core and Firefox products is one
that we still don't have a good answer to...

-Boris

Mark Banner

Oct 1, 2009, 5:32:06 AM
to
On 01/10/2009 00:14, Justin Dolske wrote:
> [Followups to m.d.quality]
>
> Should we generally abolish the VERIFIED state/keyword/flag of bugs, and
> instead move to a model where verification is only done as a special case?

I'm really interested in the answers to this question. At the moment, I
can't actually answer it because I don't know - once Thunderbird 3 is
out I expect we'll be thinking about this a bit more.

However, here are a couple of thoughts, based on initial impressions, on
what I'd want verification for, taking into account the goal of taking
up the minimum of QA time:

- If the bug/patch has a comprehensive test case ensure that the correct
patch has landed for the release (obviously tests should be passing
because the tree should be green). Hence a quick five minute test.

- I was going to write here something along the lines of: if it doesn't
have a full set of tests, then follow the STR to see if the verifier can
reproduce the bug, or get the reporter to test the build.

However I guess this second step is where it starts taking QA time. I
can see a few issues as well:

* The Steps to Reproduce aren't always that clear or easy to actually
make occur, which makes verification time-consuming for QA.

* The reporter may not want/have time to test the bug especially if this
is now on a branch and it is a month or more after they actually filed
the original bug (due to the necessary delays for getting into a stable
branch). Likewise, getting others interested may also be difficult.

This almost makes me think that verification should, where possible,
take place on trunk builds before a patch is approved for branch. This
would be closer to the time the bug is tested, and the verification on
branch would just be a rubber stamp confirming that the right thing has
been committed.


I'm also wondering: when going through bugs, is QA taking the opportunity
to make sure that the bug has tests (and if so, that in-testsuite+ is
set), or, if it doesn't, whether a litmus test can be written (and
getting that test into litmus)?

I know developers should be doing this anyway, but are we missing cases
where it isn't happening?


Anyway, like I implied at the start I haven't tried to use it in earnest
yet (or work with QA who are). So I'll be reading comments with interest.

Standard8

Gervase Markham

Oct 1, 2009, 5:42:39 AM
to
On 01/10/09 04:03, Murali Nandigama wrote:
> I just ran a quick query on the Bugzilla database back end [dump up to
> September 23,2009] . There are 2522 instances of bugs being moved from
> RESOLVED to REOPENED since 2009-01-01. This includes 2276 distinct bug
> Ids. So, some got reopened more than once apparently.

Can you do a breakdown by product, so that we can see if it happens much
in Core or Firefox?
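
Presumably that would just be the earlier sketch with the products table
joined in -- again assuming the stock schema, and untested:

  SELECT p.name   AS product,
         COUNT(*) AS reopenings
  FROM   bugs_activity ba
         JOIN fielddefs fd ON fd.id = ba.fieldid
         JOIN bugs b       ON b.bug_id = ba.bug_id
         JOIN products p   ON p.id = b.product_id
  WHERE  fd.name     = 'bug_status'
    AND  ba.removed  = 'RESOLVED'
    AND  ba.added    = 'REOPENED'
    AND  ba.bug_when >= '2009-01-01'
  GROUP BY p.name
  ORDER BY reopenings DESC;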

Gerv

jmaher

Oct 1, 2009, 7:03:20 AM
to

In the mobile project we have found a handful of bugs (~2%) that we
could not verify. If the bug has a test case it is good to run that
case on all branches. Right now the verified keyword doesn't do
anything to ensure we verified it on all branches. In my last job we
didn't do verification, to save QA time, and three times that I can
remember we shipped a product where a small, minor fix had been checked
in but wasn't working. Unfortunately this was seen by a customer each
time and resulted in a lot of busy work to get another version out the
door.

So I think a verification process is valid, but is it required for all
bugs? If there is an automated testcase, should we still be testing it
by hand? I would say yes, as testing in environments other than the
clean generated profile used for automation will always find new
issues.

-Joel

Boris Zbarsky

Oct 1, 2009, 9:22:31 AM
to
On 10/1/09 5:32 AM, Mark Banner wrote:
> - If the bug/patch has a comprehensive test case ensure that the correct
> patch has landed for the release (obviously tests should be passing
> because the tree should be green). Hence a quick five minute test.

I should note here that by far the most useful post-checkin QA I've seen
as a developer is Jesse filing bugs on assertions that started firing
after the checkin. Sprinkling the code liberally with assertions means
that his fuzzers end up catching all sorts of fun edge cases pretty
easily...

The testcase that's landed is almost never comprehensive, for what it's
worth; we have too many codepaths for that. Something that would be
really useful (but time-consuming, and requiring some expertise in the
area the bug is in, possibly) is trying to see how the tests that exist
(including whatever testcase the bug was filed with for dom/layout bugs)
could be tweaked in ways likely to cause failures... I think we tend to
have a bit of "this test passed, so all is good" tunnel-vision without
digging around in the test space much.

> * The Steps to Reproduce aren't always that clear or easy to actually
> make occur, which makes verification time-consuming for QA.

Yes, agreed. And some bugs are unclear on the fact that a debug build
is needed to reproduce, etc. Is there something we could do to make
this better for QA?

> This almost makes me think that verification should, where possible,
> take place on trunk builds before a patch is approved for branch.

This would be lovely, but there's one problem: a patch might act very
differently on a branch and on trunk. Verification of correct behavior
on the branch is more likely to catch issues than on trunk.

Perhaps if we focused on making sure there are fairly detailed tests
before pushing to branch, and landed those tests on the branch, we could
alleviate this... maybe.

-Boris

Mike Beltzner

Oct 1, 2009, 9:44:45 AM
to dev-q...@lists.mozilla.org
On 2009-10-01, at 1:14 AM, Justin Dolske wrote:

> [Followups to m.d.quality]

I obey!

> Should we generally abolish the VERIFIED state/keyword/flag of bugs,
> and instead move to a model where verification is only done as a
> special case?

As I understand it, QA uses this state in preparation for release,
going through all the bugs that were marked FIXED during a time period
and changing the status to VERIFIED once they've established that the
STR are resulting in the expected behaviour.

Not sure if there's a set of bugs there that QA doesn't really deal
with, thus stay unverified, but as far as I understand bugs will often
stay in FIXED but not VERIFIED for a long time - usually until we gear
up for a release.

On the branches they switch to using a verifiedN keyword.

> The current process strikes me as not having a good cost/benefit
> ratio. It seems like it must take a significant amount of QA time,
> but as a developer it doesn't feel like this verification step adds
> much to my confidence that we're shipping solid code. Especially for
> bugfixes which include automated tests, or are in areas with
> extensive existing test coverage. In fact, sometimes bugs are
> verified with comments along the lines of "it was checked in and
> tests passed." I've even seen some verified just because there have
> been no more comments in the bug since checkin.

I think that the verification stage is something we *should* be doing,
but yes, I agree that oftentimes it can be done without specific QA
involvement (i.e. if we have test coverage that verifies it, or if the
developer can verify it with a screenshot or by trying the STR
themselves), in which case it should be pretty simple to do. We should
also be checking that bugs are verified before shipping, instead of
just fixed, I think.

I'm more interested in cases where verification is hard so that we can
gain confidence that not only is the original problem fixed, but also
that we haven't regressed other metrics like performance or
functionality.

Perhaps what we want to do is change the set of steps required to mark
a bug as verified, and then hold hard on the requirement that we only
ship changes that have been verified in the build?

> * Spend the newfound time on general branch QA (since we've
> historically had problems with low numbers of branch beta users, and
> introducing unexpected regressions).

I think here we want to automate that testing as much as possible. IMO
the problem isn't the number of users we have, but the ability to be
broad in our exposure to sites. I suspect that much of our QA team
browses the web the same way, and few of them are avid Farmville
users, for example.

> Thoughts? What do others think of the way we use the VERIFIED state?
> Is the current process fine, or can we make it more useful?

More useful! Moar!

cheers,
mike

Aakash Desai

Oct 1, 2009, 12:23:39 PM
to dev-q...@lists.mozilla.org
Here's my quick take and thoughts on this:

Things I've Seen:
- WebQA: every single push I've been a part of for WebQA has resulted in some re-opened bugs, or bugs that opened up a number of other bugs.

- Fennec: it's a project early in its development; the community isn't there yet, so a lot of bug fixes end up being re-opened or causing new bugs to come out. There's also this: I get a number of complaints from developers about duplicate bugs that aren't really duplicates. They're separate issues that *may* trace back to a root cause in the originally filed bug, but they have to be verified on their own after that bug is fixed.

- Firefox: it's stable and has a strong community of dog-fooders (250 million+) looking at it on a daily basis. Verifying every single bug that goes through our system there just doesn't scale well anymore and we're seeing it in the form of gray hairs on a lot of our QA folks.

During Verification...
- On top of making sure the bug is fixed, we go out and add in-litmus? flags to our bugs, double-check whether the developer has added the in-testsuite+ flag to the bug, and generally make sure the bug report is clean. This is a nice-to-have, but there's going to be a need for someone else (or better quality of work from those who touch the bug) to check up on these sorts of things.

My Suggestion:
- Identify blocker, critical and major bugs and verify those on a consistent basis, while having the verifyme keyword added to other bugs if someone thinks it's necessary. This process would also require developers/drivers/QA-ers to really bear down on those in-testsuite? and in-litmus? flags when patches are pushed.

Overall, I really love that this discussion is happening, because this is becoming a pain point on our side and I would really like to find a better solution.

+1 hug from me for the person that finds a solution everyone loves/likes/abides by.

-- Aakash

Daniel Veditz

Oct 1, 2009, 7:46:48 PM
to Justin Dolske
Justin Dolske wrote:
> One thing I've never seen happen is a bug being reopened because the
> verification step found problems. I assume it happens but is rare -- do
> QA or drivers track how often this happens?

On the stable branches--where we use keywords rather than the
status--we try to verify all of the security fixes and most of the rest,
and we have definitely caught problems doing this.

I wouldn't know how to query for actual numbers but my gut impression is
that this happens to at least one or two bugs every point release.

-Dan

shirish शिरीष

Oct 4, 2009, 4:12:59 PM
to Aakash Desai, dev-q...@lists.mozilla.org
in-line :-

On Thu, Oct 1, 2009 at 21:53, Aakash Desai <ade...@mozilla.com> wrote:

<snipped>

> - Firefox: it's stable and has a strong community of dog-fooders (250 million+) looking at it on a daily basis. Verifying every single bug that goes through our system there just doesn't scale well anymore and we're seeing it in the form of gray hairs on a lot of our QA folks.

Slightly OT, but where did you get the 250 million+ number? I would
definitely be interested to know more about those dog-fooders: how
many of them are on branch and trunk, and how many of them update
daily. I am sure somebody has stats on this. Is it published
somewhere? That would make for some interesting reading on the state
of the project ;)

> -- Aakash

--
Regards,
Shirish Agarwal शिरीष अग्रवाल
My quotes in this email licensed under CC 3.0
http://creativecommons.org/licenses/by-nc/3.0/
http://flossexperiences.wordpress.com
065C 6D79 A68C E7EA 52B3 8D70 950D 53FB 729A 8B17

Robert O'Callahan

Oct 4, 2009, 6:41:21 PM
to
On 1/10/09 12:14 PM, Justin Dolske wrote:
> The current process strikes me as not having a good cost/benefit ratio.

I've been thinking the same thing.

> One thing I've never seen happen is a bug being reopened because the
> verification step found problems. I assume it happens but is rare -- do
> QA or drivers track how often this happens?

I've seen it once or twice maybe, but it doesn't seem worth the effort.

> * Spend the newfound time on general branch QA (since we've historically
> had problems with low numbers of branch beta users, and introducing
> unexpected regressions).

This seems like a better investment. Another possibility is that QA only
verify stable-branch landings.

Rob

Henrik Skupin

Oct 5, 2009, 5:09:57 AM
to dev-q...@lists.mozilla.org
Boris Zbarsky wrote on 01.10.09 07:33:

>> But to answer the original question, it is NOT uncommon to REOPEN bugs
>> during VERIFICATION and so there is benefit in NOT abandoning the
>> process.
>

> Based on the 10 bugs above, I think the claim that QA verification
> commonly reopens bugs is pretty tenuous... Having a query with more
> relevant bugs (e.g. ones actually reopened from FIXED recently and in
> products where QA actually does verification) would possibly be more
> interesting.

I don't think that reopening bugs during verification by QA is a common
action. It will be done by developers when the build fails or a test
turns orange immediately in the next unit test run on buildbot. But QA
has been advised by a couple of developers not to reopen a bug when the
verification is done a couple of days later, but instead to file a new
regression bug which blocks the original one. So only counting the
numbers for Resolved:Fixed => Reopened seems to be wrong.

To get a real count of the regressions caused by a new feature or a bug
fix, we should also check for dependent bugs which are regressions.

--
Henrik Skupin
QA Execution Engineer
Mozilla Corporation

Henrik Skupin

Oct 5, 2009, 5:28:25 AM
to dev-q...@lists.mozilla.org
Robert O'Callahan wrote on 05.10.09 00:41:

>> One thing I've never seen happen is a bug being reopened because the
>> verification step found problems. I assume it happens but is rare -- do
>> QA or drivers track how often this happens?
>
> I've seen it once or twice maybe, but it doesn't seem worth the effort.

As I said in another post I sent to this list a couple of minutes ago,
QA normally files new regression bugs instead of reopening the original
implementation bugs. Reopening will normally only happen if the
fix/feature fails on all platforms.

>> * Spend the newfound time on general branch QA (since we've historically
>> had problems with low numbers of branch beta users, and introducing
>> unexpected regressions).
>
> This seems like a better investment. Another possibility is that QA only
> verify stable-branch landings.

I don't think that this is a good idea as long as we don't have
automated tests (as far as possible) for each and every new
implementation bug. Let me give some examples which show that verifying
a bug on trunk makes sense and shortens the delay on branches when a
regression is identified earlier in the cycle:

* Bug 501031: liboggz seeking bisection is inaccurate
=> Regression: Bug 518169: Setting currentTime doesn't seek video anymore
==> Regression: Bug 519155: Chained Ogg / Theora files refuse to play
entirely

* Bug 471214: Join function objects transparently, clone via read
barrier ...
=> Regression: Bug 520319: Adding a new credit card on amazon fails
=> Regression: Bug 518103: Cannot type into the phonebook search box ...

There are a couple more examples which show that verifying bugs
before the patch lands on the branches makes sense. Given the number of
daily nightly users, regressions will often be found much earlier and
may already be covered before QA verifies a bug. That's a big help too.

The question is which bugs QA should focus on during the verification
process. I agree that they should be verified on a severity basis, so
minor/normal fixes shouldn't be treated the same way as
major/critical/blocker ones.

Boris Zbarsky

Oct 5, 2009, 11:21:57 AM
to
On 10/5/09 5:09 AM, Henrik Skupin wrote:
> I don't think that reopening bugs during verification by QA is a common
> action. It will be done by developers when the build fails or a test
> turns orange immediately in the next unit test run on buildbot. But QA
> has been advised by a couple of developers not to reopen a bug when the
> verification is done a couple of days later, but instead to file a new
> regression bug which blocks the original one. So only counting the
> numbers for Resolved:Fixed => Reopened seems to be wrong.

To be clear, there are two things going into verification. There's the
"is this bug fixed?" verification-stamp. There's the "ok, what did this
fix break, or what bugs can we find around the original bug?" activity.
The latter is, imo, critical. Certainly it's incredibly useful to me,
as a developer. It would not lead to the bug being reopened, as you say.

The VERIFIED state, however, is the "is this bug fixed?" stamp. The
usefulness of that is what is in question.

-Boris

Daniel Veditz

Oct 5, 2009, 9:51:46 PM
to
Henrik Skupin wrote:
> As said in another post i have sent a couple of minutes before to this
> list QA normally files new regression bugs instead of reopening
> implementation bugs again. Reopening will normally only happen if the
> fix/feature fails on all platforms.

Yes, but if you don't have the "verified" state, how do you keep track
of which bugs you've checked and which you haven't? Whether you
reopen the original bug or file a new one, you've done work and added
value to the bug process.

Daniel Veditz

Oct 5, 2009, 10:02:34 PM
to
shirish शिरीष wrote:
> On Thu, Oct 1, 2009 at 21:53, Aakash Desai <ade...@mozilla.com>
> wrote:
>> - Firefox: it's stable and has a strong community of dog-fooders
>> (250 million+) looking at it on a daily basis.
>
> Slightly OT, but where did you get the 250 million+ number? I would
> definitely be interested to know more about those dog-fooders: how
> many of them are on branch and trunk, and how many of them update
> daily. I am sure somebody has stats on this. Is it published
> somewhere? That would make for some interesting reading on the state
> of the project ;)

250 million is a rough guesstimate of the total number of Firefox users
(~25% of 1 Billion web users -- both numbers severely rounded). I
certainly wouldn't characterize those people as "dog-fooders".

For nightly updaters we have a few tens of thousands, spread among
Firefox 3.0.x, 3.5.x, 3.6pre-release and trunk (3.7+). I'd call those
people "dog-fooders", but that doesn't mean they are all seriously
testing and verifying bugs. Most of them are just using the product and
willing to report when things go wrong in their daily use.

Henrik Skupin

Oct 6, 2009, 5:32:55 AM
to Daniel Veditz, dev-q...@lists.mozilla.org
Daniel Veditz wrote on 06.10.09 03:51:

Oh, we do set the verified state when we can at least verify the major
part of the fix, excluding whatever has regressed or hasn't been fixed
and has been punted to a new bug. My concern above was just about
getting the numbers right.

Justin Dolske

Oct 8, 2009, 9:44:51 PM
to
On 10/1/09 6:44 AM, Mike Beltzner wrote:

> Not sure if there's a set of bugs there that QA doesn't really deal
> with, thus stay unverified, but as far as I understand bugs will often
> stay in FIXED but not VERIFIED for a long time - usually until we gear
> up for a release.

Which is really the worst time to be verifying bugs. People are rushed,
and if bugs are found to not be fixed, they're less likely to get
attention unless they're blockers.

It would be a lot better to use QA time on focused inspection of the
bugs/areas that matter, closer to the time projects and big patches are
landing.

> Perhaps what we want to do is change the set of steps required to mark a
> bug as verified, and then hold hard on the requirement that we only ship
> changes that have been verified in the build?

Yes. I think that most bugs (on trunk) should enter that state
automatically. Otherwise we'll just end up playing the same game of
shuffling huge piles of bugs from one state to the other. Then use the
extra time to do focused testing on areas that had code churn, and bugs
that were explicitly marked as needing extra verification.

For branches, it sounds like we want every bit of verification we can
get, so we probably shouldn't change the process there (yet :-).

> I think here we want to automate that testing as much as possible.

Automated testing will help some, but not in the form we currently have.
E.g., there's been talk of having a Valgrind box spider through Delicious
/ Reddit URLs. That would be useful.

But I think there's still going to be a strong need for manual QA in
areas where significant code changes are made, because there will always
be interesting ways to break things that we didn't think to test. [And,
alas, areas that have poor test coverage.]

Justin

Justin Dolske

Oct 8, 2009, 10:31:31 PM
to
On 10/5/09 2:28 AM, Henrik Skupin wrote:

> Let me give some examples which show that verifying
> a bug on trunk makes sense and shortens the delay on branches when a
> regression is identified earlier in the cycle:
>
> * Bug 501031: liboggz seeking bisection is inaccurate
> => Regression: Bug 518169: Setting currentTime doesn't seek video anymore
> ==> Regression: Bug 519155: Chained Ogg / Theora files refuse to play
> entirely

I don't see how verification helped here.

Bug 501031 was, in fact, initially marked VERIFIED by QA. Then, 5 days
later, you noticed a problem while working on something else and filed
518169. Bug 519155 was filed by a community member, and was fixed before
518169 was verified.

So verification didn't help here (and actually missed a regression). But
doing focused inspection of the feature that was modified (by writing
more tests, in this case) is exactly the kind of thing I'm suggesting
QA spend more time doing.

> * Bug 471214: Join function objects transparently, clone via read
> barrier ...
> => Regression: Bug 520319: Adding a new credit card on amazon fails
> => Regression: Bug 518103: Cannot type into the phonebook search box ...

I also don't see how verification helped here.

471214 is an esoteric JS engine bug. I don't have a clue how anyone that
isn't a JS engine hacker would verify it. The 2 bugs you cite were filed
by people dogfooding -- not QA / verification -- which pinned blame on
471214 by looking for a regression range.

If anything, this is again an example of what I'm proposing. Spend more
time on general testing, less on bug microverification. [Just to be
clear: the bisection you did to identify the range is something I would
call hugely helpful!]

> The question is which bugs QA should focus on during the verification
> process. I agree that they should be verified on a severity basis, so
> minor/normal fixes shouldn't be treated the same way as
> major/critical/blocker ones.

I don't think that's quite the right metric. It would be better to
verify based on the perceived risk of the patch. A fix for a minor bug
involving a megapatch to fragile code probably benefits from explicit
verification. But a 1-line fix to well-understood code for a P1 blocker
probably doesn't benefit.

Justin
