
Super-review, what shall we do with you?


Dave Townsend

Nov 6, 2012, 1:09:13 PM
to mozilla-g...@lists.mozilla.org
We've had a policy requiring super-review for certain kinds of patches
for a long time. It's changed a couple of times but the current policy
(http://www.mozilla.org/hacking/reviewers.html) primarily requires
super-review for any patch that introduces or changes an API. Basically
any function in a JS file or JSM is covered here, or at least that is my
reading of it. The reasoning is pretty straightforward: designing APIs
well up front reduces the future maintenance burden and hopefully means
we don't have to make changes that break add-ons.

The problem is that I frequently come across patches that were landed
without super-review despite meeting the requirements. This suggests
that many reviewers aren't aware of the policy or don't have the same
interpretation of it as I do. We probably need to do a better job of
making sure that all reviewers understand the policy and are
following it.

But I haven't yet seen an issue arise from this lack of SR. Does that
mean that the policy is too restrictive and we need to change it to more
closely match how reviewers are working?

Either we're setting ourselves up for big problems down the road or we
have a policy that is in some cases hindering development. Those are the
extremes, of course, and the reality is probably somewhere in between,
but I'd like to hear other people's thoughts about this.

Dave Townsend

Nov 23, 2012, 2:29:36 PM
to mozilla-g...@lists.mozilla.org
I've written a blog post suggesting a change to the definition of which
APIs require super-review. I'd appreciate it if people read it and gave
me some feedback:
http://www.oxymoronical.com/blog/2012/11/What-is-an-API

Justin Dolske

Nov 24, 2012, 9:06:04 PM
to
On 11/6/12 10:09 AM, Dave Townsend wrote:
> We've had a policy requiring super-review for certain kinds of patches
> for a long time. It's changed a couple of times but the current policy
> (http://www.mozilla.org/hacking/reviewers.html) primarily requires
> super-review for any patch that introduces or changes an API.

I think we should consider jettisoning/rewriting that part of the
policy. It doesn't match what we've been doing in reality(*), and we
don't seem to be in a terrible state as a result.

A bit of data: by my crude Bugzilla search, I count 104 sr+'d bugs that
have been fixed in the last 3 months. (http://is.gd/45WAxl) By product:

Core - 75
Toolkit - 9
Mailnews - 8
NSS+NSPR - 6
TB+SM+CM - 6

(*) Core's pretty clearly still using sr+, so if it's working and they
want to keep it that's just fine. I direct my comments at Toolkit and
Firefox.

The policy page goes into the motivations a bit, which are generally
sound, but I think that a big part of the problem is that it's presented
in absolutist terms... Any API change. Any cross-module change. Only
refactoring has some wiggle room via "significant". These things all
vary in degree, as does the cost of making a mistake.

I think we should consider rewriting these rules as guidelines and
placing the expectation on reviewers to determine if/when superreview is
needed. This isn't that different from how reviewers operate in Toolkit
and Firefox now... Instead of hyperspecialized reviewers, it's a general
pool with the expectation that members are able to self-determine what
they're able to review or when they should flag someone else for
additional review.

This would also help from a new-contributor angle -- the whole "what is
sr? When do I need it? Who does it?" question is hard to answer until
one learns the ropes. But it's reasonable to expect reviewers to know.

Justin

Robert Kaiser

Nov 25, 2012, 10:21:32 PM
to
Justin Dolske schrieb:
> I think we should consider jettisoning/rewriting that part of the
> policy. It doesn't match what we've been doing in reality(*)

Yes, that's why we almost f***ed up 17.0 and needed to do a last-minute
reversion of patches that changed IIDs while on beta, for example.
If even applying proper sr wouldn't have caught that problem, our
problem lies deeper, of course.

Robert Kaiser

L. David Baron

Nov 25, 2012, 10:29:55 PM
to Robert Kaiser, dev-pl...@lists.mozilla.org
I don't think that's relevant, since I don't think anybody's
proposing that we have re-review and re-superreview of patches for
aurora and beta.

-David

--
𝄞 L. David Baron http://dbaron.org/ 𝄂
𝄢 Mozilla http://www.mozilla.org/ 𝄂

Gregory Szorc

Nov 25, 2012, 10:51:15 PM
to L. David Baron, Robert Kaiser, dev-pl...@lists.mozilla.org
On 11/25/12 7:29 PM, L. David Baron wrote:
> On Monday 2012-11-26 04:21 +0100, Robert Kaiser wrote:
> I don't think that's relevant, since I don't think anybody's
> proposing that we have re-review and re-superreview of patches for
> aurora and beta.

I concur. IMO we should be applying static analysis, auditing, etc. to
catch these types of issues. [Super] reviewers shouldn't need to waste
brain cycles on style checking, IID change requirements, etc.

I concede we currently generally don't do a very good job at this. As a
reviewer and someone who cares about "quality," this annoys me because I
know it is something that could largely be solved through decent
automation and tools. Not having this wastes my time as a reviewer and
increases the probability of "bad things" slipping through review and
into the code base.

smaug

Nov 26, 2012, 7:17:34 AM
to Gregory Szorc, L. David Baron, Robert Kaiser
On 11/26/2012 05:51 AM, Gregory Szorc wrote:
> On 11/25/12 7:29 PM, L. David Baron wrote:
>> On Monday 2012-11-26 04:21 +0100, Robert Kaiser wrote:
>>> Justin Dolske schrieb:
>>>> I think we should consider jettisoning/rewriting that part of the
>>>> policy. It doesn't match what we've been doing in reality(*)
>>>
>>> Yes, that's why we almost f***ed up 17.0 and needed to do a
>>> last-minute reversion of patches that changed IIDs while on beta,
>>> for example.
>>> If even applying proper sr wouldn't have caught that problem, our
>>> problem lies deeper, of course.
>>
>> I don't think that's relevant, since I don't think anybody's
>> proposing that we have re-review and re-superreview of patches for
>> aurora and beta.
>
> I concur. IMO we should be applying static analysis, auditing, etc. to catch these types of issues. [Super] reviewers shouldn't need to waste brain
> cycles on style checking, IID change requirements, etc.
+1. Super-review should be needed for those web- or addon-facing API
changes which aren't coming from Web specs (or where the spec is still
very unstable and/or controversial).


>
> I concede we currently generally don't do a very good job at this.
(not sure what 'this' refers to)


> As a reviewer and someone who cares about "quality," this annoys me because I know
> it is something that could largely be solved through decent automation and tools.
Yes. We certainly should have at least a coding-style checker and a
uuid update checker.
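
For illustration, here is a minimal sketch of what such a uuid update
checker could look like. This is a hypothetical standalone tool, not
anything that exists in the tree: it reads a unified diff on stdin and
warns when an .idl file is modified without any uuid(...) line changing.

#include <iostream>
#include <string>

int main() {
  std::string line, file;
  bool isIdl = false, changed = false, uuidBumped = false;

  // Report the previous file if it was a changed .idl without a bump.
  auto flush = [&]() {
    if (isIdl && changed && !uuidBumped)
      std::cerr << "warning: " << file << " changed without a uuid bump\n";
  };

  while (std::getline(std::cin, line)) {
    if (line.rfind("+++ ", 0) == 0) {        // header line of the next file
      flush();
      file = line.substr(4);
      isIdl = file.size() > 4 &&
              file.compare(file.size() - 4, 4, ".idl") == 0;
      changed = uuidBumped = false;
    } else if (isIdl && !line.empty() && line.rfind("--- ", 0) != 0 &&
               (line[0] == '+' || line[0] == '-')) {
      changed = true;                        // an added or removed line
      if (line.find("uuid(") != std::string::npos)
        uuidBumped = true;                   // the uuid annotation changed
    }
  }
  flush();                                   // don't forget the last file
  return 0;
}

A real checker would need to know which interface each hunk touches;
this only shows the shape of the automation.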




-Olli

Ehsan Akhgari

Nov 26, 2012, 11:08:56 AM
to smaug, L. David Baron, Robert Kaiser, dev-pl...@lists.mozilla.org
On 2012-11-26 7:17 AM, smaug wrote:
>> As a reviewer and someone who cares about "quality," this annoys me
>> because I know
>> it is something that could largely be solved through decent automation
>> and tools.
> Yes. We certainly should have at least a coding-style checker and a
> uuid update checker.

https://bugzilla.mozilla.org/show_bug.cgi?id=813809

Cheers,
Ehsan

Benjamin Smedberg

Nov 26, 2012, 12:13:05 PM
to Dave Townsend, dev-pl...@lists.mozilla.org
On 11/6/2012 1:09 PM, Dave Townsend wrote:
> We've had a policy requiring super-review for certain kinds of patches
> for a long time. It's changed a couple of times but the current policy
> (http://www.mozilla.org/hacking/reviewers.html) primarily requires
> super-review for any patch that introduces or changes an API.
> Basically any function in a JS file or JSM is covered here, or at
> least that is my reading of it. The reasoning is pretty
> straightforward: designing APIs well up front reduces the future
> maintenance burden and hopefully means we don't have to make changes
> that break add-ons.
>
> The problem is that I frequently come across patches that were landed
> without super-review despite meeting the requirements. This suggests
> that many reviewers aren't aware of the policy or don't have the same
> interpretation of it as I do. We probably need to do a better job of
> making sure that all reviewers understand the policy and are
> following it.
>
> But I haven't yet seen an issue arise from this lack of SR. Does that
> mean that the policy is too restrictive and we need to change it to
> more closely match how reviewers are working?
As the primary author of the previous requirements, let me dump some
knowledge about the original intent, and then let the discussion go
where it may. I don't think I have particularly strong opinions about it.

* At the time, super-review was required for *all* patches.
* Super-review was a burden for super-reviewers and slowed the project down.
* There was a sense that our process had historically been very good at
detailed code review, and very spotty at "design" review.
* What super-reviewers really wanted to do was apply their
cross-functional knowledge and experience to the issues where that
experience would be useful, without slowing down the more mundane
bugfixing aspects of the project.
* At the time (the Firefox 3 beta period, IIRC), we had recently
"discovered" how disruptive some JS changes could be. Specifically,
extensions rely extensively on some functions in browser.js and in the
XBL bindings for <browser> and <tabbrowser>, and even minor changes in
their behavior could have unexpected consequences. So we didn't want to
limit the policy to "IDL-defined XPCOM APIs" but rather recognize that
there are some very important APIs in JavaScript. This was before JSMs
even existed, IIRC.

The wording of the current policy was meant to capture these general
sentiments by requiring integration and API review. We talked at length
about whether to rename it to "design/integration review".
Superreviewers were asked to specifically look at the following aspects
of the code:

* Understanding the interactions between modules and making sure that we
weren't introducing technical debt with too-tight coupling, but also
weren't going architecture-astronaut and designing too-general APIs.
* Understanding the ways that changes affect addon compatibility, and
making value judgements about whether breaking changes were worthwhile.
* Applying experience to APIs to try to bring consistency.
* Superreviewers agreed that for the most part they were not going to do
line-by-line code review on patches.

Some of these functions may have since been taken over by other groups.
The `addon-compat` keyword, for example, may solve some of the problems
related to extension-facing APIs. I do think we still aren't always as
deliberate as we should be about the cost/reward of changing/maintaining
addon APIs, and the keyword is mainly used as a notification for "we
broke something; Jorge please help addons clean up after us".

It was never our intent that "pseudo APIs" cover all JS code, and there
is certainly a lot of reviewer judgement involved in deciding whether a
particular new function is just implementation, or an API, or for now an
implementation that extensions might force to become an API in the
future.

--BDS

L. David Baron

Nov 26, 2012, 2:19:02 PM
to Justin Dolske, dev-pl...@lists.mozilla.org
On Saturday 2012-11-24 18:06 -0800, Justin Dolske wrote:
> On 11/6/12 10:09 AM, Dave Townsend wrote:
> >We've had a policy requiring super-review for certain kinds of patches
> >for a long time. It's changed a couple of times but the current policy
> >(http://www.mozilla.org/hacking/reviewers.html) primarily requires
> >super-review for any patch that introduces or changes an API.
>
> I think we should consider jettisoning/rewriting that part of the
> policy. It doesn't match what we've been doing in reality(*), and we
> don't seem to be in a terrible state as a result.
>
> A bit of data: by my crude Bugzilla search, I count 104 sr+'d bugs
> that have been fixed in the last 3 months. (http://is.gd/45WAxl) By
> product:
>
> Core - 75
> Toolkit - 9
> Mailnews - 8
> NSS+NSPR - 6
> TB+SM+CM - 6
>
> (*) Core's pretty clearly still using sr+, so if it's working and
> they want to keep it that's just fine. I direct my comments at
> Toolkit and Firefox.

I would not interpret this as showing that sr is working in Core; by
the definition of what requires sr, I expect at least a decimal
order of magnitude more patches should have had sr.

(I think in practice, when the policy changed to forbid r+sr from
one reviewer, we mostly stopped using sr as a result.)

Neil

Dec 1, 2012, 7:28:04 PM
to
On a side note, what can we do about checking for unusually verbose or
inefficient constructs? Examples:

> static const PRUnichar* kResetBackupDirectory =
> NS_LITERAL_STRING("resetBackupDirectory").get();

This is technically incorrect on systems that don't support a 16-bit
char type (short wchar_t or char16_t). (When can we require a real
UTF-16 type?)

> nsString type = NS_LITERAL_STRING("npapi-carbon-event-model-failure");

This is very inefficient because the string gets copied into an
allocated buffer.
Both of the above could have either used NS_NAMED_LITERAL_STRING or been
inlined into the single use.
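
For instance, both fixes might look like this. A sketch assuming the
2012-era XPCOM string macros; Consume is a hypothetical stand-in for
whatever the real single caller was.

#include "nsString.h"

// Hypothetical consumer, standing in for the snippet's real caller.
void Consume(const nsAString& aType);

void Example() {
  // NS_NAMED_LITERAL_STRING declares a literal-backed string object:
  // no heap allocation and no copy of the character data.
  NS_NAMED_LITERAL_STRING(type, "npapi-carbon-event-model-failure");
  Consume(type);

  // Or, with only one use, inline the temporary at the call site:
  Consume(NS_LITERAL_STRING("npapi-carbon-event-model-failure"));
}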

> nsCOMPtr<nsIVariant>
> d(already_AddRefed<nsIVariant>(XPCVariant::newVariant(ccx, v)));

We have a templated function that saves you from needlessly retyping the
type:
nsCOMPtr<nsIVariant> d(dont_AddRef(XPCVariant::newVariant(ccx, v)));



Gregory Szorc

Dec 1, 2012, 8:04:19 PM
to Neil, dev-pl...@lists.mozilla.org
On 12/1/2012 4:28 PM, Neil wrote:
> On a side note, what can we do about checking for unusually verbose or
> inefficient constructs? Examples:
We could create a compiler plugin that examines the AST for known
badness. See bug 733873.
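
For example, here is a rough sketch, as an illustration rather than the
actual bug 733873 work, of a clang AST matcher that would flag Neil's
first anti-pattern, calling get() on a temporary string:

#include "clang/ASTMatchers/ASTMatchFinder.h"
#include "clang/ASTMatchers/ASTMatchers.h"

using namespace clang;
using namespace clang::ast_matchers;

// Match `<temporary>.get()`: the pointer is only valid until the end of
// the full expression, so storing it is a use-after-free in waiting.
static const auto kDanglingGet =
    cxxMemberCallExpr(callee(cxxMethodDecl(hasName("get"))),
                      on(materializeTemporaryExpr()))
        .bind("danglingGet");

class DanglingGetCallback : public MatchFinder::MatchCallback {
public:
  void run(const MatchFinder::MatchResult &Result) override {
    const auto *Call =
        Result.Nodes.getNodeAs<CXXMemberCallExpr>("danglingGet");
    if (!Call)
      return;
    DiagnosticsEngine &Diags = Result.Context->getDiagnostics();
    unsigned ID = Diags.getCustomDiagID(
        DiagnosticsEngine::Warning,
        "get() called on a temporary string; the pointer will dangle");
    Diags.Report(Call->getExprLoc(), ID);
  }
};

(This matches get() on any temporary; a real checker would restrict it
to our string classes.)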

Justin Dolske

Dec 1, 2012, 8:37:03 PM
to
On 12/1/12 4:28 PM, Neil wrote:
> On a side note, what can we do about checking for unusually verbose or
> inefficient constructs? Examples:

I don't think this has anything to do with sr policy, nor should it.

Justin

Blake Kaplan

Dec 3, 2012, 7:12:41 PM
to
Neil <ne...@parkwaycc.co.uk> wrote:
>> static const PRUnichar* kResetBackupDirectory =
>> NS_LITERAL_STRING("resetBackupDirectory").get();

Isn't this an anti-pattern anyway, because the string (and the memory it
owns) will get destructed at the end of the full expression, leaving
kResetBackupDirectory pointing at freed memory?
--
Blake Kaplan

Neil

Dec 4, 2012, 5:00:19 AM
to
Yes, this is why I said it is technically incorrect, but we use a string
literal (L"" or u"") where we can; only on systems that don't support
UTF-16 string literals do we allocate memory. If no such systems remain,
we should really switch to NS_L("resetBackupDirectory") throughout.
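
Roughly, the mechanism would be as below. This is an illustrative
sketch, with _Example names to make clear these aren't the tree's real
definitions, which differ in detail.

// Where wchar_t is already 2 bytes (e.g. Windows), a plain wide literal
// works; elsewhere a C++11 char16_t literal gives 16-bit code units.
#ifdef HAVE_CPP_2BYTE_WCHAR_T            // assumed config macro
typedef wchar_t PRUnichar_Example;
#define NS_L_Example(s) L##s
#else
typedef char16_t PRUnichar_Example;
#define NS_L_Example(s) u##s
#endif

// Points at static storage for the literal: no allocation, and (unlike
// NS_LITERAL_STRING("...").get()) nothing that can dangle.
static const PRUnichar_Example* kResetBackupDirectory =
    NS_L_Example("resetBackupDirectory");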

--
Warning: May contain traces of nuts.

L. David Baron

Dec 4, 2012, 6:15:45 AM
to Neil, dev-pl...@lists.mozilla.org
But if we want to use NS_L widely, we should first rename it to have
a name that looks more like u"" (which is what it does, since it
always produces 2-byte characters) than L"".

Dave Townsend

Dec 7, 2012, 3:28:34 PM
to mozilla-g...@lists.mozilla.org
It's a bit difficult to draw a consensus from everyone here, but mostly
it seems that people think the status quo, with module owners and
reviewers using their judgement, is working out. I'm going to email all
the module owners directly and ask that they look over the policy and
communicate with their peers about what their expectations are.