As discussed at the Gecko 1.9 meeting we are reviving the trunk
sheriffs. People have already started their duty, as detailed on the
schedule here: http://wiki.mozilla.org/Sheriff_Schedule. If you can't
make your time, please just edit and sign up for a different day.
All the info on what a sheriff does:
http://wiki.mozilla.org/Sheriff_Duty
Regression Policy:
http://www.mozilla.org/hacking/regression-policy.html
Other useful stuff:
http://www.mozilla.org/hacking/working-with-seamonkey.html
We can discuss this at the Gecko 1.9 meeting tomorrow if anyone has any
questions.
Best,
Schrep
> As discussed at the Gecko 1.9 meeting we are reviving the trunk sheriffs.
Sounds good to me.
> People have already started their duty, as detailed on the
> schedule here: http://wiki.mozilla.org/Sheriff_Schedule. If you can't
> make your time please just edit and sign up for a different day.
>
> All the info on what a sheriff does:
> http://wiki.mozilla.org/Sheriff_Duty
>
> Regression Policy:
> http://www.mozilla.org/hacking/regression-policy.html
That's the PERFORMANCE regression policy.
What is the FUNCTIONAL regression policy?
I've been wondering about this for some time.
Over the last year or two, we've had several enhancement checkins that
caused significant numbers of regressions, quite a few of which STILL
are not fixed, some of which are crashers.
One RFE caused something like 33 regression bugs to be filed, of which
about 20 are still unresolved. I keep wondering why some sheriff didn't
(and doesn't) back that out.
Of course, some of them have been in for SO LONG now, that it's kinda
too late for a sheriff to take corrective action now. But then one
wonders why the regressions are allowed to live for SO long.
Maybe this question belongs in m.governance?
> Over the last year or two, we've had several enhancement checkins that
> caused significant numbers of regressions, quite a few of which STILL
> are not fixed, some of which are crashers.
>
> One RFE caused something like 33 regression bugs to be filed, of which
> about 20 are still unresolved. I keep wondering why some sheriff didn't
> (and doesn't) back that out.
Have you asked the sheriff to do so, or at least brought it to their
attention? Ultimately, module owners are responsible for the code in their
modules, and you can appeal to them if there is a question.
Could you be more specific, please?
--BDS
> Nelson Bolyard wrote:
>
>> Over the last year or two, we've had several enhancement checkins that
>> caused significant numbers of regressions, quite a few of which STILL
>> are not fixed, some of which are crashers.
>>
>> One RFE caused something like 33 regression bugs to be filed, of which
>> about 20 are still unresolved. I keep wondering why some sheriff didn't
>> (and doesn't) back that out.
>
> Could you be more specific, please?
I don't know which one(s) Nelson is thinking about,
but I would suggest
Darin's [ Bug 326273 – Implement nsIThreadManager ]
as an example.
Not questioning the improvement(s) made by this checkin,
but wondering about the various long-standing regressions:
tab drag'n'drop indicator, tree not scrolling, (Flash/JS) memory leak, ...
Yes, that's the one that triggered my question. There are many others
also, each less severe, but taken together they are quite a reduction
in usability.
> Not questioning the improvement(s) made by this checkin,
> but wondering about the various long-standing regressions:
> tab drag'n'drop indicator, tree not scrolling, (Flash/JS) memory leak, ...
... numerous password prompt issues, including a crash
See the whole list (for that bug) at
https://bugzilla.mozilla.org/showdependencytree.cgi?id=326273&hide_resolved=1
Note that some of the regressions are shown as blockers of that bug, and
others as dependencies of it.
But my question really is about policy. IMO, the trunk has been going
steadily down hill over the past 18 months, with new regressions
appearing that never seem to get fixed. I don't understand why this is
being tolerated. Seems like there's no pressure to fix regressions.
I wish there was a policy that created such pressure.
(Does this belong in .governance ? )
The trunk is more stable than it ever was in the time when I found
development in the Mozilla community most exciting, i.e. back in the
days when Mozilla didn't even have a version number and we went with
milestones instead.
The trunk is a bleeding-edge development beast; it's not intended to be
production quality. If it were, our product would be as current, new
and exciting as Netscape 4.75 was when it was released.
Regressions are what new, big code changes cause all the time, with no
chance to reasonably avoid them. And in most cases, those code changes
fix more problems than they regress. Some regressions are found neither
during private testing nor during review and peer testing; they are
only found when some wider audience is testing that stuff (e.g. the
current problem of SeaMonkey tab titles not showing up due to a Core
change, while Firefox's tabbrowser works correctly). Some regressions are
known even before checkin, but it's more helpful and less work to land
the patch and fix the regressions afterwards, as long as the trunk stays
dogfood-able.
This is a good and reasonable development strategy. In closed source,
trunk would be the in-house bleeding-edge development dump (those
companies often don't even review all of that). As we are open source,
everyone can access and test that code, and find and file the
regressions, so that they get fixed over time.
Only after all the needed feature work for a new release (say FF3 in
that case) has been landed, there will be a stabilizing phase with a
feature freeze or something resembling it, where regression-fixing becomes
the primary target and Betas are made. That doesn't mean regression
fixing is unimportant right now, but it's not the main target of
development atm. The stabilizing phase for FF3 has not yet begun. And
it's good this way.
Anyone who can not tolerate regressions should never even think of
touching trunk.
At least that's my opinion regarding that topic.
Robert Kaiser
> The trunk is more stable than it ever was in the time when I found
Yes, the building process works well,
but the legacy features are not that stable.
> Regressions are what new, big code changes cause all the time, with no
> chance to reasonably avoid them. And in most cases, those code changes
We understand that regressions (can) happen;
the "complaint" is about regressions that affect (daily) trunk testing:
* Memory leak from Darin's patch: forces you to close and restart the app
"regularly", which has prevented long-running testing ... for months.
* Blank tabs from Jonas's patch: very annoying for daily users ... but
this case is different, as it seems it will be fixed within days.
> Only after all the needed feature work for a new release (say FF3 in
> that case) has been landed, there will be a stabilizing phase
> ... And it's good this way.
Not too sure that it is so good that way:
in the past, there have often been complaints about reporting/blocking
bugs too late in the cycle (both those bugs and the ones they may be
hiding); moreover, that means digging into code that was "forgotten" for
a long time.
In another thread, sorting of (Core/FF3) blocking bugs has now started,
and target releases have been selected: this is (now) good news, as you say.
What makes us wonder is the "discovery" that there is a "huge"
number of such blockers/regressions...
> Anyone who can not tolerate regressions should never even think of
> touching trunk.
I think we do tolerate that they happen;
it could even be said that catching them is the point of testing
nightlies, couldn't it?
The point here is how long they last after being reported...
> At least that's my opinion regarding that topic.
And this is mine :->
That's really really wrong. All regressions should be one or the other
for any given bug (preferably blockers of the bug, imo, but consistency
is more important than which exact direction is used).
> But my question really is about policy. IMO, the trunk has been going
> steadily down hill over the past 18 months, with new regressions
> appearing that never seem to get fixed.
Yes. It has. :( This is part of the reason why we now have over 300 bugs
that are blocking1.9+, and why we're re-instituting sheriffs, as far as
I can see.
-Boris
True, but the point is to fix them once that's happened.
> with no chance to reasonably avoid them.
Not quite true, but we're working on the avoidance methods (e.g testing).
> And in most cases, those code changes fix more problems than they regress.
While this may be true, the exact things that are regressed matter more
than how many of them are, in some ways.
> Some regressions are known even before checkin, but it's more helpful and less work to land
> the patch and fix the regressions afterwards
Somewhere here you should have "in a timely manner".
> as long as the trunk stays dogfood-able.
This is one key point. Our trunk is only sort of dogfood-able now, and
many times over the last 5-6 months it hasn't been dogfood-able at all.
> As we are open source,
> everyone can access and test that code, and find and file the
> regressions, so that they get fixed over time.
That last conclusion doesn't necessarily follow. To get them fixed you
need someone fixing them.
> Only after all the needed feature work for a new release (say FF3 in
> that case) has been landed, there will be a stabilizing phase
The problem with that approach is that the longer a regression is in the
tree, the harder it gets to fix (not to mention that it becomes very
hard to back out the patch if it becomes clear that the regression is
not fixable and the cure is worse than the disease).
Also, we need to be in this "stabilizing phase" as of a month or two ago
if we're aiming for a 2007 release, imo. How long do you estimate it
will take us to fix 300+ nontrivial (else they would have been fixed
already) bugs?
> Anyone who can not tolerate regressions should never even think of
> touching trunk.
There's a world of difference between "regressions" and "regressions
that never get fixed".
-Boris
>> And in most cases, those code changes fix more problems than they
>> regress.
Oh, I'm sure they fix problems that were inconveniencing the developer
who made them, but what about their impact on USERS? The regressions
listed for bug 326273 are all broken user features. Name one user
feature that was actually fixed or enhanced by that checkin. It may
have been elegant, but it broke WAY too much to have been tolerated as
it was.
> While this may be true, the exact things that are regressed matter more
> than how many of them are, in some ways.
And there are mega-exceptions. Checkins that cause multiple tens of
regressions, including crashes. No way will anyone prove that that
checkin was a net positive. It should have been backed out, or the
person who did it should work around the clock to fix the damage.
That's the missing policy, IMO.
>> Some regressions are known even before checkin, but it's more helpful
>> and less work to land the patch and fix the regressions afterwards
If so, then it ought not to take 12 months or more to fix the regressions,
and the people who know about the regressions a priori should not walk
away from the job until the regressions are fixed.
> Somewhere here you should have "in a timely manner".
>
>> as long as the trunk stays dogfood-able.
>
> This is one key point. Our trunk is only sort of dogfood-able now, and
> many times over the last 5-6 months it hasn't been dogfood-able at all.
>
>> As we are open source, everyone can access and test that code, and
>> find and file the regressions, so that they get fixed over time.
>
> That last conclusion doesn't necessarily follow. To get them fixed you
> need someone fixing them.
We're very unlikely to get volunteers to spend large amounts of effort,
rewriting formerly working code to get it to work again, after it was
broken by someone else's checkin. This demotivates developers and drives
them away. They think "why should I keep working on this when others can
break my code and I must pay for their mistakes?" and "I worked hard to
get that working, and now person X has broken it. Let HIM fix it."
And we want to avoid a culture in which some developers are allowed to
cause regressions without being forced to fix them, while other developers
are required to fix theirs. That happened at Netscape and there was
nearly a revolt over it.
>> Only after all the needed feature work for a new release (say FF3 in
>> that case) has been landed, there will be a stabilizing phase
>
> The problem with that approach is that the longer a regression is in the
> tree, the harder it gets to fix (not to mention that it becomes very
> hard to back out the patch if it becomes clear that the regression is
> not fixable and the cure is worse than the disease).
I fully expect that FF3 will ship with most of the regressions from bug
326273 still unfixed. Many of them are over a year old now. It's
obvious that no-one thinks it's his job to fix them. I don't believe
that 2 months before FF3 releases a lot of developers will suddenly
realize that they should have been working on these things all along.
Also, when a regression becomes a year old, or more, it becomes necessary
to CONVINCE people that these ARE regressions. People reason: "it's
worked this way for 6 months, therefore it's worked that way forever,
and it should not be changed now."
> Also, we need to be in this "stabilizing phase" as of a month or two ago
> if we're aiming for a 2007 release, imo. How long do you estimate it
> will take us to fix 300+ nontrivial (else they would have been fixed
> already) bugs?
>
>> Anyone who can not tolerate regressions should never even think of
>> touching trunk.
>
> There's a world of difference between "regressions" and "regressions
> that never get fixed".
Exactly, regressions that last 3-4, even 6 weeks are one thing.
Regressions that last 17 months are quite another.
> -Boris
The problem is that about half the Mozilla developers think that regressions
should be listed as blockers, and half think they should be dependents.
When a bug is resolved/fixed, no-one is clear on how to interpret blockers
of that bug that are still unfixed. Evidently they weren't really blockers,
or else the bug should not have been marked fixed. And a bug that was
caused by the "fix" for another bug is clearly not dependent on that fix.
If anything, it is dependent on the undoing of that fix.
I think we need new categories of links to other bugs, including
"caused" and "caused by" for regressions, and "see also" for related bugs
that are neither causal nor blockers nor dependencies.
IIRC, it paved the way for network traffic during modal dialogs.
--
Blake Kaplan
Sure. That's why I said we should pick one or the other for this bug
and just do it consistently (for this bug). If someone has already
gone through and sorted out which bugs in those dep lists are
regressions, it would be great to put the results somewhere so we can
change the deps (assuming said person doesn't change the deps himself).
> When a bug is resolved/fixed, no-one is clear on how to interpret blockers
> of that bug that are still unfixed. Evidently they weren't really blockers,
> or else the bug should not have been marked fixed.
The way I tend to think of it is that they're blockers for landing the
bug fix on a branch, say. Or rather that they _should_ have been
blockers for the bug landing but were simply not discovered in time.
But again, I'm fine with either direction as long as we're consistent
for any given bug.
> I think we need new categories of links to other bugs
Absolutely. There are existing bugs on Bugzilla for it. :(
-Boris
Which is in fact what causes a lot of the regressions, because code that
was not expecting this broke.
But yes, that was in fact the main point of the change, and the main
user benefit -- now a modal dialog posed from one Firefox (or Seamonkey,
or Thunderbird) window doesn't prevent network access started from other
windows before the dialog came up from completing.
-Boris
The question about policy has been answered: we had a policy problem
back then, but we don't have one today. We have sheriffs back, we have
perf testing, feature testing, we have daily smoketests by QA. All of
which are things we didn't have a year ago.
It doesn't help with fixing the regressions that Darin got reassigned
since then, of course.
If you care about that bug, and I read you do, mind filing a bug "Fix
nsIThreadManager regressions" and make sure that all regressions do
block that bug? And then you can start to see which of those are
reproducible (threads make me ask that), and request blocking1.9 for
those bugs. At least for those that you don't want fx3 to ship with.
That's the only way to get things done here.
Axel
I've been using 1.8 branch over that period, so I can't clearly tell -
I'll be able to tell more from personal experience once suiterunner
lands and I'll move to that for daily usage.
From earlier experience, though, I've seen that what different people
call dogfood-able varies a lot, as I was using milestone builds and
nightlies in the old days on a daily production basis when most
Netscape engineers considered the codebase non-dogfood-able...
>> As we are open source, everyone can access and test that code, and
>> find and file the regressions, so that they get fixed over time.
>
> That last conclusion doesn't necessarily follow. To get them fixed you
> need someone fixing them.
Right. And I guess it's probably not a good idea to keep code in when its
creator doesn't actively work on its regressions, esp. if they break
dogfood-ability.
Which reminds me that it might be a good idea to start using dogfood and
catfood keywords again in Bugzilla.
>> Only after all the needed feature work for a new release (say FF3 in
>> that case) has been landed, there will be a stabilizing phase
>
> The problem with that approach is that the longer a regression is in the
> tree, the harder it gets to fix (not to mention that it becomes very
> hard to back out the patch if it becomes clear that the regression is
> not fixable and the cure is worse than the disease).
Sure, and we probably need to do a better job of fixing regressions. Or,
actually, those among us who cause them - I'm not writing a lot of code
myself, I'm just OKing temporary regressions made by others in SeaMonkey
currently to get suiterunner landed (which is nearly as dogfood-able
right now as xpfe trunk - with mailcompose crashing on both, they're
probably labeled "unusable" anyways until that regression is fixed...)
> Also, we need to be in this "stabilizing phase" as of a month or two ago
> if we're aiming for a 2007 release, imo. How long do you estimate it
> will take us to fix 300+ nontrivial (else they would have been fixed
> already) bugs?
Well, this might be a difference between us: I'm not aiming for a
2007 release, as I doubt that the app I'm planning for (SeaMonkey 2)
will be able to make that. I myself would be quite happy to see
Gecko 1.9 slip to spring 2008 or so.
But then, I'm no Firefox or Gecko lead, and those are who decide on
schedules, I guess ;-)
>> Anyone who can not tolerate regressions should never even think of
>> touching trunk.
>
> There's a world of difference between "regressions" and "regressions
> that never get fixed".
Right. I meant to be talking about the former, which are hard to avoid
even if you're doing good testing (you might be able to reduce their
number, though). The latter is surely something we don't want, at
least where they are not intentional: things like "unmaintained Qt
port doesn't build any more" or "FF3 doesn't work on Win9x" are surely
regressions, but ones that are probably intended and will not be fixed.
Robert Kaiser
I thought we had exactly that policy in place? I think that it's
probably only a matter of executing that policy.
Robert Kaiser
> But my question really is about policy. IMO, the trunk has been going
> steadily down hill over the past 18 months, with new regressions
> appearing that never seem to get fixed. I don't understand why this is
> being tolerated. Seems like there's no pressure to fix regressions.
> I wish there was a policy that created such pressure.
This, along with Boris' comment that "our trunk is only sort of
dogfood-able now" concerns me, as that's not the perception I have.
I wonder if these various regressions are time-bombs which are quietly
piling up, or if they're just the inevitable result of changes to a
complex product trading one set of bugs for another (albeit with a net
benefit, one hopes!).
I've been dogfooding trunk on OS X for quite a few months, and "grumpy
Schrep" has been strongly encouraging folks to be dogfooding trunk as
well. My general impression is that while there are certainly a number
of annoyances and quirks, it's entirely dogfoodable. I don't use FF2 at all.
I think it would be good to have a discussion (this discussion, I
suppose) on what the concerns are, and evaluate what needs to be done to
make sure the problem is scoped and on-track for resolution for Firefox 3.0.
Justin
I agree. If you are seeing bugs you would not want Firefox 3 to ship
with, please file them in bugzilla and notify the appropriate module
owners so they can be triaged.
I won't go so far as to say to nominate them as blocking-1.9/firefox-3,
because I'm unsure of exactly what the policy there is, but I think
everyone can agree that noises should be made about critical bugs that
need to be fixed before a release.
-Colin
It all depends on your task set and on your environment.
For example, on my machine (Linux) about one in three SVG testcases in
Bugzilla causes trunk Gecko to hang X; if I'm lucky I ssh in from
another machine quickly enough and kill the browser; 5-20 minutes later
(depending on how fast I moved) X stops hogging all the CPU and starts
reacting to input again. If I'm not lucky, the computer stops
responding to the network and the only thing I can do is a hard reboot.
Now this clearly interferes with day to day work: if I make a layout or
XBL or content change and it causes an SVG regression (a normal
occurrence, really)... what am I supposed to do? Load the testcase in
the bug and hope that I don't hit the above symptoms? Ignore the bug?
Try to figure out what's going on by inspection of the testcase in a
text editor? Upgrade my OS and hope a newer X version deals better
with whatever we're throwing at it so I can continue working on trunk?
Fwiw, this last seems to be the official policy at the moment.... But
that doesn't change the fact that this is a dogfood regression for me.
> I wonder if these various regressions are time-bombs which are
> quietly piling up, or if they're just the inevitable result of
> changes to a complex product trading one set of bugs for another
> (albeit with a net benefit, one hopes!).
Some of both; net benefit is hard to gauge because the cost-benefit
equation is so different for different people and in different situations.
> I've been dogfooding trunk on OS X for quite a few months, and
> "grumpy Schrep" has been strongly encouraging folks to be dogfooding
> trunk as well. My general impression is that while there are
> certainly a number of annoyances and quirks, it's entirely
> dogfoodable.
Your text inputs must work more often than mine do, then. I've not been
able to use trunk for more than an hour or two at a time on OS X the
last few times I tried; doing the "click in a textfield, alt-tab,
alt-tab" dance every minute is so distracting that it just makes it
impossible to get work done at anything resembling 100% productivity.
> and evaluate what needs to be done to make sure the problem is scoped
> and on-track for resolution for Firefox 3.0.
I feel that we need to:
1) Nominate bugs for blocking Gecko 1.9 (or Firefox 3) as needed
2) Triage said nominations, with regressions likely getting
blocking status
3) Find people to fix the blocking bugs.
Step 3 is hard.
-Boris
I would. Given the number of blocking+ bugs already, and how many more
are likely to appear given the number of nominations we have, the
chances that something which is _not_ a blocker, and not on the pet peeve
list of someone who goes and fixes it, will get fixed are slim, I suspect.
-Boris
Should we go for using the dogfood (and maybe catfood) keywords in
Bugzilla again, and maybe do stats on them, trying to get them (at least
dogfood) down to zero even for nightlies and alphas?
Robert Kaiser
> It all depends on your task set and on your environment.
>
> For example, on my machine (Linux)
"Oh, Linux."
You've got a point there. At least, my impression is that Linux needs
the most help of the 3 main platforms.
Were you just using SVG on Linux as an example to highlight differences in
task set / environment, or have there been recent regressions (Cairo :-)
that started killing X?
> Some of both; net benefit is hard to gauge because the cost-benefit
> equation is so different for different people and in different situations.
Absolutely.
>> My general impression is that while there are
>> certainly a number of annoyances and quirks, it's entirely
>> dogfoodable.
>
> Your text inputs must work more often than mine do, then.
Probably not! I end up doing the same kinds of workarounds as you
mention, but I just haven't personally found it to be something that
stops me from getting work done. I suppose this is the kind of
regression that falls into the "death by a thousand papercuts" category...
> I feel that we need to:
>
> 1) Nominate bugs for blocking Gecko 1.9 (or Firefox 3) as needed
> 2) Triage said nominations, with regressions likely getting
> blocking status
> 3) Find people to fix the blocking bugs.
Perhaps we should track 'dogfood' bugs as a standard part of the weekly
FF3/Gecko meetings? Although, being more aggressive about backing out
new regressions is probably (but not always) a more efficient way to do
that for the future.
I also wonder if it might help to have longer terms for sheriffs? Say, 2
days to a week? That might help by having stronger ownership for backing
out checkins which don't immediately manifest themselves as a flaming
tinderbox inferno.
Justin
Both. Clearly, if you're on Windows or OS X (or even just a newer X
version), whatever this particular regression is (and yes, it's fallout
from turning on cairo) is not a dogfood issue for you.
> Probably not! I end up doing the same kinds of workarounds as you
> mention, but I just haven't personally found it to be something that
> stops me from getting work done.
It doesn't so much stop me as reduce the amount of work I can do. Which
means I can more efficiently deal with my bug list if I don't use trunk
for my day to day browsing... which is bad, because then I'm not testing
the changes I make to it before checking them in. Which leads to more
regressions from those changes, usually.
> Perhaps we should track 'dogfood' bugs as a standard part of the weekly
> FF3/Gecko meetings?
The problem, as discussed, is defining "dogfood".... I do think it's
worth tracking regression bugs. It's hard to say which regressions are
1.9-only based on bugzilla, but we currently have 545 open bugs with the
"regression" keyword that were filed in the Core product 3 months after
1.8 branched or later (I used Nov 1, 2005 as the three months date).
433 of them were filed in the last 12 months (so since May 22, 2006).
The vast majority of these bugs are unowned (assigned to nobody@, etc).
Ideally this number would hit something close to 0 before we ship 1.9.
In my opinion.
-Boris
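[For reference, the kind of query Boris describes can be expressed as a classic Bugzilla buglist URL. The parameter names below are the standard buglist.cgi ones (keywords, chfield for "filed after" a date), but the exact values are an illustrative assumption, not the query he actually ran:]

```
https://bugzilla.mozilla.org/buglist.cgi?product=Core
    &keywords=regression&resolution=---
    &chfield=%5BBug%20creation%5D&chfieldfrom=2005-11-01&chfieldto=Now
```

Changing chfieldfrom to 2006-05-22 would give the 12-month subset he mentions.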
Have you tried logging in locally on a text console? (Ctrl-Alt-F2, go back to
X with Ctrl-Alt-F7). In my experience, even when X is hung, it's usually
possible to log in on /dev/tty and renice (or kill) the misbehaving Firefox
process. Then within a few _seconds_ X starts reacting to input again.
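[As a sketch, that recovery sequence from the text console might look like the shell snippet below. The process name "firefox-bin" (the usual binary name for trunk builds of that era) and the exact pgrep/renice options are assumptions; adapt them to your setup.]

```shell
# Run from a text console (Ctrl-Alt-F2) while X appears hung;
# switch back to the X session afterwards with Ctrl-Alt-F7.
tame() {
    # -o: pick the oldest matching process; -x: require an exact name match
    pid=$(pgrep -o -x "$1")
    if [ -z "$pid" ]; then
        echo "no $1 process found"
        return 0
    fi
    renice -n 19 -p "$pid"   # deprioritize it first, so X can recover
    kill "$pid"              # polite SIGTERM; escalate to kill -9 only if ignored
}

tame firefox-bin
```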
Best regards,
Tony.
--
If a jury in a criminal trial stays out for more than twenty-four
hours, it is certain to vote acquittal, save in those instances where
it votes guilty.
-- Joseph C. Goulden
1) The FC4 version of X screws up the graphics state somehow so that
those consoles only show garbage.
2) Once X is hung, I can't even switch to a different virtual terminal,
because it ignores Ctrl-Alt-F*. If this were not the case, I would just
switch to the other X server I have running on vt8.
-Boris
So I think one thing that needs to be added to the regression policy
is a bit about what should be done by somebody who closes the tree.
Closing the tree is essentially asking everybody who wants to check
in to look into the regression instead, and this can be very
inefficient if everybody is duplicating the same work.
So to close the tree, I think whoever is closing it should ensure:
* There is a bug on file about the problem, linked from the top of
http://tinderbox.mozilla.org/Firefox/ .
* If it's a performance or unit test regression, the bug says which
tinderboxes, when, and has a bonsai URL to the list of checkins
in the window that may have caused the regression.
* If it's a functional regression, the bug has steps to reproduce,
the dates of the nightly builds immediately surrounding the
regression (or narrower window if known), and a bonsai URL to the
list of checkins that could have caused the regression.
This should typically take about 5-10 minutes.
As more information is discovered, it can then be added to the bug.
(I say this from experience in dealing with tree closures where this
information was not provided, and I had to spend 10 minutes figuring
out the above information before determining that the regression had
nothing to do with anything I checked in or knew about. Presumably
50 other people were spending the same 10 minutes -- or at least
they should have been -- so it's much more efficient for it to be
done once.)
-David
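[Filled in, David's checklist might yield a tree-closure bug along the lines of this skeleton; every name, number, and date here is a placeholder, not a real tinderbox, build, or checkin:]

```
Summary: Tree closed: <Tp/Ts/unit-test name> regression on <tinderbox>, <date>

Regression window: last good build  <YYYY-MM-DD HH:MM>
                   first bad build  <YYYY-MM-DD HH:MM>
Suspect checkins:  <bonsai URL covering that window>
Steps to reproduce (functional regressions only):
  1. ...
```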
--
L. David Baron <URL: http://dbaron.org/ >
Technical Lead, Layout & CSS, Mozilla Corporation
> So to close the tree, I think whoever is closing it should ensure:
> * There is a bug on file about the problem, linked from the top of
> http://tinderbox.mozilla.org/Firefox/ .
> * If it's a performance or unit test regression, the bug says which
> tinderboxes, when, and has a bonsai URL to the list of checkins
> in the window that may have caused the regression.
> * If it's a functional regression, the bug has steps to reproduce,
> the dates of the nightly builds immediately surrounding the
> regression (or narrower window if known), and a bonsai URL to the
> list of checkins that could have caused the regression.
> This should typically take about 5-10 minutes.
>
> As more information is discovered, it can then be added to the bug.
Perhaps the initial info should also be included in the hook email.
(Which otherwise I find pretty useless.)
Peter.
I just updated to the 20070522 nightly trunk build from a build a few
weeks old. Definitely not edible dogfood any more! Sheesh!
Bugzilla, here I come.
>> But my question really is about policy. IMO, the trunk has been going
>> steadily down hill over the past 18 months, with new regressions
>> appearing that never seem to get fixed. I don't understand why this is
>> being tolerated. Seems like there's no pressure to fix regressions.
>> I wish there was a policy that created such pressure.
>>
>> (Does this belong in .governance ? )
>
> The question about policy has been answered: we had a policy problem
> back then, but we don't have one today. We have sheriffs back, we have
> perf testing, feature testing, we have daily smoketests by QA. All of
> which are things we didn't have a year ago.
But what is the definition of a functionality bug (not performance bug)
that is deemed so severe that it warrants (or demands) that a patch be
backed out? Who decides? How do we nominate a patch to be backed out?
> It doesn't help with fixing the regressions that Darin got reassigned
> since then, of course.
I think you're saying that Darin got reassigned after landing the new
nsIThreadManager but before he could fix the regressions. Yes?
That's truly unfortunate. This suggests to me that someone thought it
unimportant that the trunk remain edible dog food. :(
> If you care about that bug, and I read you do, mind filing a bug "Fix
> nsIThreadManager regressions"
https://bugzilla.mozilla.org/show_bug.cgi?id=381699
> and make sure that all regressions do block that bug?
Well, there are 23 open bugs still marked as blocking the nsIThreadManager
bug (bug https://bugzilla.mozilla.org/show_bug.cgi?id=326273), all of
which were filed after that patch landed (IINM), making them all
candidates to be regressions caused by it.
There are also 11 open bugs marked as blocked by that bug, some of which
seem like they are actually regressions, but I'm not sure.
I've initially marked only 5 bugs as blockers of bug 381699. Those bugs
are either obviously regressions or are ones that I confirm still occur
and began when nsIThreadManager landed.
I'd appreciate others having a look at the other blockers of bug 326273
to help determine which ones should be made blockers of the new bug 381699.
> And then you can start to see which of those are
> reproducable (threads make me ask that), and request blocking1.9 for
> those bugs. At least for those that you don't want fx3 to ship with.
They are now so marked. (Boris beat me to it for most :)
> That's the only way to get things done here.
Let's hope these fixes do get done soon. As it is, the trunk regressions
are now bad enough (and I don't mean only this one) that I must revert
to an older, more usable, trunk version. :(
>> This, along with Boris' comment that "our trunk is only sort of
>> dogfood-able now" concerns me, as that's not the perception I have.
>
> I just updated to the 20070522 nightly trunk build from a build a few
> weeks old. Definitely not edible dogfood any more! Sheesh!
I/others stick with SM nightly -20070515.
> Bugzilla, here I come.
Another example, regressions from same Core patch:
the tabs rendering was broken on SeaMonkey for some days;
and Jonas is still trying to fix email Compose "break down(s)"...
Bug 53901 – nsXULElement::CloneNode sets IS_IN_DOCUMENT flag
NB: Don't get me wrong: I'm sad about the not-dogfood state, but I'm happy
Jonas is fixing the regressions.
> I think you're saying that Darin got reassigned after landing the new
> nsIThreadManager but before he could fix the regressions. Yes?
That's how it feels:
*Trunk was closed
*Darin landed his patch
*Darin spent some time fixing build issues, and maybe some app
regressions
*Trunk was reopened
*Never saw him again; meanwhile, the regressions are still piling up.
> I'd appreciate others having a look at the other blockers of bug 326273
> to help determine which ones should be made blockers of the new bug 381699.
I thought bug 326273 would be enough,
but if the new one can help then you did well.
> Let's hope these fixes do get done soon.
Indeed !
Just to be clear, Darin was not "reassigned." He's no longer working on
Mozilla stuff, so we need to figure out how to move forward from here.
The tracking bug is helpful (thanks Nelson) but as Bz indicates the
primary problem is finding the right resources to help fix this - which
we've been working on, and we welcome any volunteers :-).
Mike
Here's my take on the sets of issues raised in this thread:
* A clearer written functional regression policy would be helpful
** Any volunteers to post a draft :-)?
* Bug 326273 – Implement nsIThreadManager - has caused particular pain.
As I mentioned in a previous message the main developer is no longer
available so the primary problem here is finding folks to help.
* Dogfoodability: This is why we restarted sheriffs, smoketesting, and
I've been badgering everyone I see to dogfood. It is clear everyone's
use cases are a little different - I'm currently on Vista and the trunk
runs surprisingly well for me.
* Please file bugs and nominate them for blocking. We do have folks
doing triage and tracking the blockers list at every Gecko 1.9 meeting
(http://people.mozilla.com/~schrep/noms.jpg) and
(http://people.mozilla.com/~schrep/blockers.jpg). As you can see from
the second graph the blocking list is scarily flat. Which IMHO is the
*biggest* problem here.
Did I miss anything?
Best,
Schrep
On May 19, 12:23 pm, Nelson Bolyard wrote:
> Mike Schroepfer wrote:
>
>> As discussed at the Gecko 1.9 meeting we are reviving the trunk sheriffs.
>
> Sounds good to me.
>
>> People have already starting their duty as detailed on the
>> schedule here: http://wiki.mozilla.org/Sheriff_Schedule. If you can't
>> make your time please just edit and sign up for a different day.
>>
>> All the info on what a sheriff does:
>> http://wiki.mozilla.org/Sheriff_Duty
>>
>> Regression Policy:
>> http://www.mozilla.org/hacking/regression-policy.html
>
> That's the PERFORMANCE regression policy.
> What is the FUNCTIONAL regression policy?
>
> I've been wondering about this for some time.
>
> Over the last year or two, we've had several enhancement checkins that
> caused significant numbers of regressions, quite a few of which STILL
> are not fixed, some of which are crashers.
>
> One RFE caused something like 33 regression bugs to be filed, of which
> about 20 are still unresolved. I keep wondering why some sheriff didn't
> (and doesn't) back that out.
>
> Of course, some of them have been in for SO LONG now, that it's kinda
> too late for a sheriff to take corrective action now. But then one
> wonders why the regressions are allowed to live for SO long.
>
> Maybe this question belongs in m.governance?
>
>
>> Other useful stuff:
>> http://www.mozilla.org/hacking/working-with-seamonkey.html
>>
>> We can discuss at the Gecko 1.9 meeting tomorrow if anyone has any
>> questions.
>>
>> Best,
>>
>> Schrep
>
>
>> That's the PERFORMANCE regression policy.
>> What is the FUNCTIONAL regression policy?
>>
The most important aspect of functional regressions is finding them in
the first place. It's extremely difficult to back out a patch after a
week or so, once all the incoming bugs have been triaged. We need more
unit tests, so we can back out after one hour. For example, turning on
Cocoa widgets caused 10 layout regression tests to fail yesterday, and
so it was backed out. It will be fixed and our product will be better
for it, and we've avoided a bunch of time-consuming interactions with
bug reporters in bugzilla.
It is highly likely that landing a patch with as many problems as the
thread manager initially had would cause every single one of our test
suites to fail these days. It is getting better, and it's going to keep
improving.
- Rob
IMHO, that's actually fine as the developer who caused those regressions
is actively working on them and getting more and more of them fixed. As
long as that happens in a timely manner, I think that's what trunk is
actually for.
Robert Kaiser
I'm really not sure we're testing the right things for that. Most of
the serious regressions involve interaction of multiple systems with
multiple event queues (modal dialogs, plug-ins, etc). That's not the
sort of thing unit testing catches, and in particular I doubt that any
of the tests we have would catch the sort of issues threadmanager caused.
Which really just means we need more involved tests...
-Boris
Actually, some of the resolved blocking/depending bugs are already in
our test suite. I'm not saying it would have caught all of them. It
would have focused the initial cleanup effort on a smaller number of bugs.
> Which really just means we need more involved tests...
Sure, but I don't think that blocks testing of any feature, though it
may block 100% coverage. If it's too hard to write integration tests for
a particular feature, write unit tests for the logic that the code
performs, and skip coverage for the integration points. That still gets
you pretty far.
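Rob's approach (extract the logic and unit-test it without the integration points) can be sketched roughly like this; Python is used for brevity, and every name here is hypothetical, not from the Mozilla tree:

```python
import unittest

def next_runnable(pending, blocked_ids):
    """Return the first pending task id that is not blocked, or None.

    Pure scheduling logic: no event queue, no threads, no UI, so it can
    be unit-tested in isolation from the integration points.
    """
    for task_id in pending:
        if task_id not in blocked_ids:
            return task_id
    return None

class NextRunnableTest(unittest.TestCase):
    def test_skips_blocked_tasks(self):
        self.assertEqual(next_runnable([3, 1, 2], {3}), 1)

    def test_returns_none_when_everything_is_blocked(self):
        self.assertIsNone(next_runnable([1], {1}))
```

Run with `python -m unittest`. The integration point (the real event queue) stays uncovered, but a regression in the decision logic itself now fails within the hour, not after a week of triage.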
- Rob
There are things automated tests can catch. There are some they already do,
but no test suite is perfect, and test suites can (and should) always be
perfected.
Then there are things automated tests normally can't catch, and I'm thinking
for example of intermittent errors caused by race conditions on heavily loaded
systems (which are my pet peeve). For that, we need as many hands as possible
using the software at all stages of production (including alpha, beta,
release, and all the corresponding nightlies) and on all relevant platforms
(such as "various reasonably current but not only the latest" versions of
Windows, Linux and Mac operating systems), and knowing how (and being willing)
to report any bugs that bite them. Of course, people won't be willing to give a
product a "real-life" test if it hasn't already got some decent level of
usability. So we're back to bettering the test suites again, but also to
making sure that all bugs get identified and fixed as fast as humanly
possible. Not a small task, because everything works hand in hand: all sides
of the problem must be tackled at once.
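The intermittent, load-dependent failures mentioned above are typically lost-update races. A minimal sketch (Python, purely illustrative; nothing here is Mozilla code) of why they evade ordinary test runs:

```python
import threading

def count_up(n_threads=8, n_iters=2000, use_lock=True):
    """Increment a shared counter from several threads; return the total.

    With use_lock=False each increment is a separate read and write, the
    classic lost-update race: another thread may increment between the
    two steps, and its update is then silently overwritten. Whether any
    updates are actually lost depends on scheduling and load, which is
    exactly why a single quick test run so rarely catches it.
    """
    state = {"n": 0}
    lock = threading.Lock()

    def worker():
        for _ in range(n_iters):
            if use_lock:
                with lock:                  # atomic read-modify-write
                    state["n"] += 1
            else:
                local = state["n"]          # read...
                state["n"] = local + 1      # ...write, possibly stale

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return state["n"]
```

The locked variant always returns n_threads * n_iters; the unlocked one returns at most that, and only sometimes less, which is why many hands running real workloads on loaded systems remain the best detector.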
Sheriffs here, and another thread about stale report requests and what to do
about them, are among the signs that lead me to believe that an attempt is
indeed made to tackle all sides of the problem at once and not let something
rot to unmanageable proportions. Of course, no one is perfect, and some bugs
aren't easy to track, so some of them get fixed faster than others. That's to
be expected, as long as no true bug gets "forgotten". As someone who can
report bugs but not fix them, let me tell you guys that watching both the
seriousness and the openness of the whole process is heartwarming to say the
least.
Best regards,
Tony.
--
There are three kinds of lies: Lies, Damn Lies, and Statistics.
-- Disraeli
> I just updated to the 20070522 nightly trunk build from a build a few
> weeks old. Definitely not edible dogfood any more! Sheesh!
> Bugzilla, here I come.
After filing a bunch of new bugs, I started going backwards in 7-day
steps. The first step was WORSE than 20070522, so I guess someone
is fixing some bugs. I finally ended up going all the way back to
20070418. :(
Yes, but right now, I don't think our biggest shortcoming is failure to
FIND bugs. It's failure to FIX bugs within a reasonable time after they're
found.
I guess this is one of the problems with a nearly-all volunteer labor
force. People work on what they want to, which is mostly new features,
not bugs. Are volunteer open-source projects doomed to this problem?
> So I think one thing that needs to be added to the regression policy
> is a bit about what should be done by somebody who closes the tree.
> Closing the tree is essentially asking everybody who wants to check
> in to look into the regression instead, and this can be very
> inefficient if everybody is duplicating the same work.
>
> So to close the tree, I think whoever is closing it should ensure:
> * There is a bug on file about the problem, linked from the top of
> http://tinderbox.mozilla.org/Firefox/ .
> * If it's a performance or unit test regression, the bug says which
> tinderboxes, when, and has a bonsai URL to the list of checkins
> in the window that may have caused the regression.
> * If it's a functional regression, the bug has steps to reproduce,
> the dates of the nightly builds immediately surrounding the
> regression (or narrower window if known), and a bonsai URL to the
> list of checkins that could have caused the regression.
> This should typically take about 5-10 minutes.
Under what circumstances does a Sheriff back out a patch due to
functional regressions? From what you wrote above, I gather it's "never". :(
How long do we allow a regression to remain before we pull it out?
How long until we decide it's too late to pull it out?
Clearly the answer to the first question should be less than the second. :)
How does a dog food eater invoke the Sheriff?
When an update breaks functionality so badly that one must stop using it,
how does that person notify the sheriff that something needs to be backed
out? Clearly, setting the "blocking1.9" flag to ? or + isn't it.
(Witness the number of such flags now.)
Finally, the trunk is SO BROKEN now that I just can't use it, and had to
go back a MONTH to get a usable build. The bugs are filed. There ought
to be some way of notifying dog food eaters like me when someone thinks
that enough of them have been fixed that the trunk might be usable again.
Downloading the nightly build every day, only to see that it is unusable,
yet again, is too inefficient. :(
> Serge Gautherie schrieb:
>> Another example, regressions from same Core patch:
>> the tabs rendering was broken on SeaMonkey for some days;
>> and Jonas is still trying to fix email Compose "break down(s)"...
>>
>> Bug 53901 – nsXULElement::CloneNode sets IS_IN_DOCUMENT flag
>
> IMHO, that's actually fine as the developer who caused those regressions
> is actively working on them and getting more and more of them fixed. As
That's why I wrote "I'm happy Jonas fixes the regressions".
> long as that happens in a timely manner, I think that's what trunk is
> actually for.
Yet, for 10 days already I have not been testing nightlies anymore:
download, check whether the bug(s) are still there or have morphed,
revert to (SM) -20070515.
On the "policy" discussion, I think this kind of checkin should be
backed out quite soon once it becomes obvious that the number/impact of
regressions grows faster than the author/reviewers/... can fix them.
I see this happening: people backing out, fixing known reg., relanding,
eventually backing out again if new regression_s_ are found, ...
> Hey There,
>
> Just to be clear, Darin was not "reassigned." He's no longer working on
To add to what I wrote, I think the (default) plan after Darin
"quit" was "Darin's patch is working as intended; now it's up to the
module owners to check/fix their own code to cope with the new
behaviour", but that obviously did not happen (yet).
Maybe checking that the app, as a whole, had been updated as needed
should have happened on (or been moved back to) a branch?
(And I would help with user-testing on such a branch...)
> Mozilla stuff. So we need to figure out how to move forward from here.
> The tracking bug is helpful (thanks Nelson) but as Bz indicates the
> primary problem is find the right resources to help fix this - which
> we've been working on and welcome any volunteers :-).
It seems some people are trying/starting to look into some of the
(nsITM) regressions :-)
Thanks to them !
> Mike
Perhaps I wasn't clear. We are finding more and more bugs within hours
and backing them out quickly. In other words, they aren't sticking
around to cause functional regressions. I agree that we are /currently/
in the hole you describe. I just think we've taken steps to mitigate
this problem in the future. A little bit of organization is needed to
promptly address the regressions we have now. There are many. I don't
know what the best way to deal with that issue is.
A second point. From mail headers, I see you are using SeaMonkey on
Windows. Is the nightly for Firefox similarly inedible for you? Talkback
tells me that the nightlies for 5/22 and 5/23 were crashy and awful. In
general, the Windows Firefox nightlies have treated me pretty well. Do
the problems persist for you there? (They probably will.)
- Rob
> A second point. From mail headers, I see you are using SeaMonkey on
> Windows. Is the nightly for Firefox similarly inedible for you?
Nearly all the problems that have made trunk nightlies inedible for me
are in Mail/News. So I think the relevant question is whether
ThunderBird is similarly inedible.
It has been my experience that nearly 100% of the mail/news bugs I find
in SeaMonkey are also identically found in Thunderbird, because they're
actually "core" bugs, even when they affect UI.
> Talkback tells me that the nightlies for 5/22 and 5/23 were crashy and awful.
I gather you're referring to SeaMonkey nightlies. Yes?
I have not found them to be more prone to crashing than mid-April builds,
but the UI regressions were numerous.
> In general, the Windows Firefox nightlies have treated me pretty well. do
> the problems persist for you there? (they probably will)
I am now running the Gecko/20070527 SeaMonkey/1.5a build. Quite a few
(but not all) of the recent UI regressions seem to have been fixed.
Of the recent email composer regressions, Bug 381722 remains.
Crashes and other issues due to the nsIThreadManager checkin remain the
biggest stability problem for me.
> - Rob
Given that I am dealing with a regression in an unrelated area that is
going on 2 years old and I am unlikely to find the cause, you can safely
state that I now hate with a passion regressions that linger on the
trunk. Or on the branch for that matter. With that said...
I am glad to see useful ideas, lessons and actions coming out of the
mess of bug 342810 as written by many posters in this thread. Smart
changes will help improve Quality.
But given that this bug is a classic "testcase" in its own right of what
can go wrong, I suggest further under-the-microscope analysis may be
advisable to squeeze everything possibly useful out of it.
As one who was on the forefront of bug 342810 and related bugs I hope
the following comments are helpful. With apologies for repeats, consider:
* no mention in any bug of Darin's reassignment nor who should be pegged
with responsibility for regressions
* it took 4 months to get blocking1.9+ for a well-defined, critical bug
* did the bug get less attention because a plugin was mentioned in the
title?
* do major patches need to give attention to pervasive plugins and
perhaps even major add-ons - in the development process, unit testing,
and post checkin tests (tinderbox, litmus, etc)
* for major checkins, should an easily identifiable meta bug with
regression in the title be simultaneously filed as a blocker to be a
container for regressions to be linked to? like Bug 381699 "Fix
regressions due to new nsIThreadManager"
* beyond the reactivation and continued use of sheriffs, is there a
policy or process to *explicitly* monitor for and *actively* seek
important regressions after the landing of huge (or critical) patches
like bug 326273?
Finally, extending the last point and along the lines of "all bugs are
shallow given enough eyes", it would seem there were simply not enough
eyes. Consider Bug 340260 (tied with timeless' bug 340283 as the first
regression bugs filed against threadmanager): it beat bug 326273 by 3
weeks, but it was *still* more than *3 weeks* after the threadmanager
checkin. Why?
* overlooked because it was filed as a suite bug?
* not enough eyes because it was filed as a suite bug?
* not enough people using trunk?
* on the cusp of the vacation season?
* (add your own ideas)
I doubt sheriff oversight would have caught this memory problem and as
has been pointed out, regression and unit testing have their limits,
sheer lack of manpower being just one. So parallel to the idea of
enacting a functional regression policy I submit a corollary to "all
bugs are shallow given enough eyes" -- "all regressions are more likely
to be found given enough eyes, and the sooner the better so it's easier
to place blame."
Ergo, as Schrep proposes, it would be useful to have more people using
trunk as their daily tool - despite the dangers and limitations inherent
with encouraging people to use trunk in that manner. Rather than
discouraging people from using trunk, which is what I see most, we should
develop and publicize a program to encourage people to use trunk in
appropriate and useful ways.
Wayne
On 5/27/2007 6:45 PM, Nelson Bolyard wrote:
> Robert Sayre wrote (addressing me):
>
>> A second point. From mail headers, I see you are using SeaMonkey on
>> Windows. Is the nightly for Firefox similarly inedible for you?
>
> Nearly all the problems that have made trunk nightlies inedible for me
> are in Mail/News. So I think the relevant question is whether
> ThunderBird is similarly inedible.
Thunderbird trunk has been highly usable for many, many months, with
brief exceptions. I've been using trunk for 9 months or more, except
for the last month, because of the TB2 rollout.
> It has been my experience that nearly 100% of the mail/news bugs I find
> in SeaMonkey are also identically found in Thunderbird, because they're
> actually "core" bugs, even when they affect UI.
I wonder how the number of people using SM trunk compares to the number
using trunk FF+TB?
>> Talkback tells me that the nightlies for 5/22 and 5/23 were crashy and awful.
>
> I gather you're referring to SeaMonkey nightlies. Yes?
> I have not found them to be more prone to crashing than mid-April builds,
> but the UI regressions were numerous.
>
>> In general, the Windows Firefox nightlies have treated me pretty well. do
>> the problems persist for you there? (they probably will)
For daily use (usage beyond bug triage), bug 342810 is the only thing
that made FF trunk unusable for me. If not for that, I would have been
using it continuously for more than 12 months (I gave up in December).
Out of curiosity... bug number?
> * no mention in any bug of Darin's reassignment
It was never publicly announced by anyone (including Darin); just sort of made
its way through the grapevine by word of mouth, as far as I can tell.
> nor who should be pegged with responsibility for regressions
That's because there was no one. We still don't know who has the cycles to take
some of the remaining regressions from threadmanager.
> * it took 4 months to get blocking1.9+ for a well-defined, critical bug
Is that 4 months from when it was nominated? Or from when 1.9 triage started?
It looks like I nominated it on 2006-10-13 and we didn't start doing 1.9 triage
(actually looking at nominations) until at least 2007-01-18. Before this point
it was impossible to get blocking1.9+ for anything.
At that point we had some 500 bugs nominated; it typically takes a few minutes
per bug, so spending 6-7 hours a week on triage (as we were effectively doing)
it takes a few weeks to get through a list that size. In this case it took
about 4 weeks to even get to the bug; as soon as it was read it was marked
blocking+.
> * did the bug get less attention because a plugin was mentioned in the
> title?
Possible, yes. For two reasons. First of all, we have very few developers who
know the plug-in code; the ones who don't avoid it like the plague (for good
reasons!). Second, whenever a plug-in is involved there's uncertainty about
whose bug this really is.... Unfortunate, but true.
> * do major patches need to give attention to pervasive plugins and
> perhaps even major add-ons - in the development process, unit testing,
> and post checkin tests (tinderbox, litmus, etc)
Ideally, yes.
> * for major checkins, should an easily identifiable meta bug with
> regression in the title be simultaneously filed as a blocker to be a
> container for regressions to be linked to?
I think it's fine to just track the regressions on the original bug as long as
that's actually done. At least I've found that to be quite sufficient for my bugs.
> * beyond the reactivation and continued use of sheriffs, is there a
> policy or process to *explicitly* monitor for and *actively* seek
> important regressions after the landing of huge (or critical) patches
> like bug 326273?
I don't think so.
> * overlooked because it was filed as a suite bug?
Likely.
> * not enough eyes because it was filed as a suite bug?
Definitely.
> * not enough people using trunk?
In my opinion, yes.
> * on the cusp of the vacation season?
Also likely....
For what it's worth, years ago, it used to be that I would read the list of bugs
filed daily sometime toward the evening and reassign/triage things as needed,
flagging regressions, etc. I stopped doing that after the product split,
because most bugs were getting filed in Firefox, even if they were Core bugs.
So reading the Core buglist for the day was generally already well-triaged by
the filers. And Firefox:General came to have such a high noise-to-signal ratio
(where "noise" for my purposes included UI bugs) that it wasn't worth my time to
read it daily.
I did try to read things with a 3-month time gap for a while (to catch things
getting reassigned out of Firefox but not really flagged well), but that has
fallen by the wayside; at this point I'm almost 18 months behind, and will
probably never catch up. And of course this means that I have no idea what bugs
are being moved from Firefox to Core...
I think some others are reading the daily bug lists in Firefox now, so perhaps
what we need to do is:
1) Have some people triaging the incoming Firefox:General bugs, moving them to
Core as needed.
2) Set up a query for Core bugs that were either filed or moved to Core in the
last 24 hours. Core-specific triagers could handle those...
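Point (2) could be a saved Bugzilla search. A rough sketch of building such a query URL follows (Python; the chfield/chfieldfrom parameter names and the relative-date syntax are assumptions about Bugzilla's buglist.cgi change-field search, so verify them against the installed version):

```python
from urllib.parse import urlencode

def core_triage_url(base="https://bugzilla.mozilla.org/buglist.cgi"):
    """Build a buglist.cgi URL for bugs whose Product field changed
    within the last day, restricted to Core (i.e. bugs moved into Core).

    Parameter names and the "-1d" relative-date syntax are assumptions
    about Bugzilla's change-field search, not confirmed against a
    specific Bugzilla version.
    """
    params = {
        "product": "Core",
        "chfield": "product",    # the field whose change we search for
        "chfieldfrom": "-1d",    # changed within the last day
        "chfieldto": "Now",
    }
    return base + "?" + urlencode(params)
```

A second query with chfield set to the bug-creation pseudo-field would cover newly filed Core bugs; the two lists together are what the Core-specific triagers would read daily.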
Thoughts?
-Boris
bz, We exchanged email about a month ago under the topic "character
help" regarding Bug 237624 (well actually bug 310290 which I have just
fixed up). Because the palmsync extension is not a mainstream
application I can understand why bug 310290 and also 311371 might not
have gotten picked up as a regression. OK, not a big deal at the time.
But 2 years after the patch landed that caused 310290, it is killer (and
frustrating obviously) because it is not a slam dunk to narrow the
regression range for reasons that I won't bore the group with.
>> * no mention in any bug of Darin's reassignment
>
> It was never publicly announced by anyone (including Darin); just sort
> of made its way through the grapevine by word of mouth, as far as I can
> tell.
>
>> nor who should be pegged with responsibility for regressions
>
> That's because there was no one. We still don't know who has the cycles
> to take some of the remaining regressions from threadmanager.
>
>> * it took 4 months to get blocking1.9+ for a well-defined, critical bug
>
> Is that 4 months from when it was nominated? Or from when 1.9 triage
> started? It looks like I nominated it on 2006-10-13 and we didn't start
> doing 1.9 triage (actually looking at nominations) until at least
> 2007-01-18. Before this point it was impossible to get blocking1.9+ for
> anything.
yes, 4 months from your nomination. After the long wait I figured it
was awaiting some massive 1.9 triage. But in as much as blocking status
raises the visibility of the bug in the eyes of developers, the solution
might (big might) have come sooner.
In retrospect perhaps I should have raised the bug severity from
critical to blocking - but because of the doubtful effectiveness of that
move and not wishing to be labeled a rabble rouser I wimped out.
>> * did the bug get less attention because a plugin was mentioned in the
>> title?
>
> Possible, yes. For two reasons. First of all, we have very few
> developers who know the plug-in code; the ones who don't avoid it like
> the plague (for good reasons!). Second, whenever a plug-in is involved
> there's uncertainty about whose bug this really is.... Unfortunate, but
> true.
Having tried to diagnose this, and having tried to get Adobe to comment,
I understand that perspective. But in this case, that mentality bit back ...
>> * do major patches need to give attention to pervasive plugins and
>> perhaps even major add-ons - in the development process, unit testing,
>> and post checkin tests (tinderbox, litmus, etc)
>
> Ideally, yes.
Hopefully it should be easy for someone to develop a "short" list?
>> * for major checkins, should an easily identifiable meta bug with
>> regression in the title be simultaneously filed as a blocker to be a
>> container for regressions to be linked to?
>
> I think it's fine to just track the regressions on the original bug as
> long as that's actually done. At least I've found that to be quite
> sufficient for my bugs.
In the main, yes. But for major bugs like this, which inevitably
generate many regressions, it is (as someone pointed out) quite difficult
to see which linked bugs are regressions and which are linked for other
reasons.
Schrep, did anyone take you up on the functional testing issue?
Question for whomever - are major checkins/landings announced at some
central location, without the noise of the daily hundreds of "simple"
checkins? The kind of thing which would "put everyone on notice" so
they can watch out, and help with the fallout if they are so inclined?
Are these reasons we have any control over in the here and now? Or at least
reasons we can mitigate in the future for checkins happening now? If so, it
might be worth discussing them.
> But in as much as blocking status
> raises the visibility of the bug in the eyes of developers
Not until people started focusing on blockers, which was another several months
after triage started....
> In retrospect perhaps I should have raised the bug severity from
> critical to blocking
If it was preventing daily trunk testing, probably yes...
> Question for whomever - are major checkins/landings announced at some
> central location, without the noise of the daily hundreds of "simple"
> checkins?
We've had abortive attempts to do this in the past. It might be worth doing
this at either the quality or developer blog, perhaps... The quality blog would
make a lot of sense, actually.
-Boris
Unfortunately, Bugzilla is a mess in some way nowadays, as quite a few
components are in the wrong products, and Bugzilla sucks when it comes
down to sorting such things out.
I smell that some day we need to do some really big restructuring of
Bugzilla components...
Robert Kaiser
> Unfortunately, Bugzilla is a mess in some ways nowadays, as quite a few
> components are in the wrong products, and Bugzilla sucks when it comes
> down to sorting such things out.
> I suspect that some day we'll need to do some really big restructuring
> of Bugzilla components...
I concur.
Have bugs been filed on getting these moved?
> and Bugzilla sucks when it comes
> down to sorting such things out.
How so? We can shift large chunks of bugs around without too much
effort. Yes, it generates a lot of bugmail, but if people know it's
coming and we do it all at once, they can filter it.
Gerv
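(As an aside: the bulk move Gerv describes is mechanical once a plan is
agreed. A minimal sketch, assuming a modern Bugzilla install with the
REST API enabled; the product/component names below are placeholders,
not a real proposal:)

```python
import json

# Hypothetical move plan: maps a bug's current (product, component)
# to the (product, component) it should move to. Placeholder names.
MOVE_PLAN = {
    ("Core", "MailNews: Example"): ("MailNews Core", "General"),
}

def build_move_request(bug_id, old_product, old_component):
    """Return (url, JSON body) for a PUT /rest/bug/<id> update that
    moves a bug per MOVE_PLAN, or None if the bug's current location
    is not in the plan."""
    target = MOVE_PLAN.get((old_product, old_component))
    if target is None:
        return None
    new_product, new_component = target
    url = "https://bugzilla.mozilla.org/rest/bug/%d" % bug_id
    body = json.dumps({
        "product": new_product,
        "component": new_component,
        # A real request also needs an api_key, and usually a version
        # value valid in the target product; omitted in this sketch.
    })
    return url, body
```

Actually sending the PUTs (and batching the resulting bugmail, as Gerv
notes) is left out; the point is only that the move itself is scriptable
once the plan is agreed.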
Not sure... I at least ran into this when I wanted to file a bug on
toolkit history and looked in the Toolkit and Core products before
finding it under Firefox.
I also think we should have a good definition of what belongs in Toolkit
and what in Core, and sort that part out.
And then, there's our nice bug about the Mozilla Application Suite
product, which is wrong in multiple ways but doesn't get corrected
because neither justdave nor reed has time to write up the script they
need to correct it, and apparently nobody else has the knowledge to
write it.
>> and Bugzilla sucks when it comes down to sorting such things out.
>
> How so? We can shift large chunks of bugs around without too much
> effort. Yes, it generates a lot of bugmail, but if people know it's
> coming and we do it all at once, they can filter it.
We probably should review the list then and file bugs to get those done
that are in the wrong places.
Robert Kaiser
Seems to me it would be better to have one bug, have a discussion, and
then do all the moving. Moving a number of components around without an
overall view of what the result might look like could end up with
something that's no more coherent than things are now...
--
Michael
Also, that bug does not address the subject of this discussion --
components in the wrong products -- SeaMonkey has quite a few components
in Core. That bug covers a reorganization of the components already in
Moz App Suite (mostly renaming and consolidation). The return on
investment for that is lower than for simply[1] moving components over
from Core.
[1] Yes, I know it's not simple!
--
Andrew Schultz
ajsc...@verizon.net
http://www.sens.buffalo.edu/~ajs42/
Right. (Although the fact that we don't have one is not Bugzilla's
fault. :-)
> And then, there's our nice bug about the Mozilla Application Suite
> product which is wrong in multiple ways but doesn't get corrected due to
> neither justdave nor reed having time to write up the script they need
> for correcting it and nobody else having the knowledge to write it up,
> apparently.
Which bug is this?
> We probably should review the list then and file bugs to get those done
> that are in the wrong places.
Good idea :-)
Gerv
Right. All the problems in component organization I was talking about
are actually the fault of us having years of history, not the fault of
Bugzilla itself.
Some reorganizations seem to not be as easy as one would hope in
Bugzilla though.
>> And then, there's our nice bug about the Mozilla Application Suite
>> product, which is wrong in multiple ways but doesn't get corrected
>> because neither justdave nor reed has time to write up the script
>> they need to correct it, and apparently nobody else has the knowledge
>> to write it.
>
> Which bug is this?
https://bugzilla.mozilla.org/show_bug.cgi?id=298904
>> We probably should review the list then and file bugs to get those
>> done that are in the wrong places.
>
> Good idea :-)
I think we need some discussion first, so I'll open a new thread here
about this.