
The future of commit access policy for core Firefox


Mike Connor

Mar 9, 2017, 4:53:49 PM
to dev-planning, dev-platform, governance, firefox-dev
(please direct followups to dev-planning, cross-posting to governance,
firefox-dev, dev-platform)


Nearly 19 years after the creation of the Mozilla Project, commit access
remains essentially the same as it has always been. We've evolved the
vouching process a number of times, CVS has long since been replaced by
Mercurial & others, and we've taken some positive steps in terms of
securing the commit process. And yet we've never touched the core idea of
granting developers direct commit access to our most important
repositories. After a large number of discussions since taking ownership
over commit policy, I believe it is time for Mozilla to change that
practice.

Before I get into the meat of the current proposal, I would like to outline
a set of key goals for any change we make. These goals have been informed
by a set of stakeholders from across the project including the engineering,
security, release and QA teams. It's inevitable that any significant
change will disrupt longstanding workflows. As a result, it is critical
that we are all aligned on the goals of the change.


I've identified the following goals as critical for a responsible commit
access policy:


- Compromising a single individual's credentials must not be sufficient
to land malicious code into our products.
- Two-factor auth must be a requirement for all users approving or
pushing a change.
- The change that gets pushed must be the same change that was approved.
- Broken commits must be rejected automatically as a part of the commit
process.


In order to achieve these goals, I propose that we commit to making the
following changes to all Firefox product repositories:


- Direct commit access to repositories will be strictly limited to
sheriffs and a subset of release engineering.
- Any direct commits by these individuals will be limited to fixing
bustage that automation misses and handling branch merges.
- All other changes will go through an autoland-based workflow.
   - Developers commit to a staging repository, with scripting that
   connects the changeset to a Bugzilla attachment, and integrates with
   review flags.
- Reviewers and any other approvers interact with the changeset as
today (including ReviewBoard if preferred), with Bugzilla flags as the
canonical source of truth.
- Upon approval, the changeset will be pushed into autoland.
- If the push is successful, the change is merged to mozilla-central,
and the bug updated.

I know this is a major change in practice from how we currently operate,
and my ask is that we work together to understand the impact and concerns.
If you find yourself disagreeing with the goals, let's have that discussion
instead of arguing about the solution. If you agree with the goals, but
not the solution, I'd love to hear alternative ideas for how we can achieve
the outcomes outlined above.

-- Mike

L. David Baron

Mar 9, 2017, 5:14:43 PM
to dev-pl...@lists.mozilla.org
On Thursday 2017-03-09 16:53 -0500, Mike Connor wrote:
> I've identified the following goals as critical for a responsible commit
> access policy:
...
> - The change that gets pushed must be the same change that was approved.

I'm curious what this goal means. In particular, does it mean that
you're trying to end "review+ if you make the following changes",
and require that reviewers re-review the revisions no matter what?

(If it does mean that, then that's a substantial increase on
reviewer load; if it doesn't, then I'm curious what definition of
"the same" you're using.)

> In order to achieve these goals, I propose that we commit to making the
> following changes to all Firefox product repositories:
>
>
> - Direct commit access to repositories will be strictly limited to
> sheriffs and a subset of release engineering.
> - Any direct commits by these individuals will be limited to fixing
> bustage that automation misses and handling branch merges.
> - All other changes will go through an autoland-based workflow.
> - Developers commit to a staging repository, with scripting that
> connects the changeset to a Bugzilla attachment, and integrates
> with review
> flags.
> - Reviewers and any other approvers interact with the changeset as
> today (including ReviewBoard if preferred), with Bugzilla flags as the
> canonical source of truth.
> - Upon approval, the changeset will be pushed into autoland.
> - If the push is successful, the change is merged to mozilla-central,
> and the bug updated.

I'm curious whether this will mean that ReviewBoard will be required, or
whether there will still be a way to use attachment-based workflows. (I ask
this because I still consider the ReviewBoard UI unacceptable for
changes that are likely to require re-review. See
https://bugzilla.mozilla.org/show_bug.cgi?id=1285874#c20 for my form
response on the topic, although I've been a little less picky about
requiring attachments in all cases lately, when I think things
aren't likely to require multiple rounds of review.)

-David

--
𝄞 L. David Baron http://dbaron.org/ 𝄂
𝄢 Mozilla https://www.mozilla.org/ 𝄂
Before I built a wall I'd ask to know
What I was walling in or walling out,
And to whom I was like to give offense.
- Robert Frost, Mending Wall (1914)

Lawrence Mandel

Mar 9, 2017, 5:20:18 PM
to L. David Baron, dev-planning
Mark Côté's team is currently working on enabling autoland from Bugzilla,
so MozReview will not be required.

Lawrence

Mark Côté

Mar 9, 2017, 5:31:59 PM
Also fwiw we have (finally) made some progress on the bugs you mentioned
there.

Mark


Andrew McCreight

Mar 9, 2017, 5:32:01 PM
to dev-planning
On Thu, Mar 9, 2017 at 2:14 PM, L. David Baron <dba...@dbaron.org> wrote:

> On Thursday 2017-03-09 16:53 -0500, Mike Connor wrote:
> > I've identified the following goals as critical for a responsible commit
> > access policy:
> ...
> > - The change that gets pushed must be the same change that was
> approved.
>
> I'm curious what this goal means. In particular, does it mean that
> you're trying to end "review+ if you make the following changes",
> and require that reviewers re-review the revisions no matter what?
>

That's what it sounds like to me. Also, a strict reading of this would
imply that rebasing will require re-review.


>
> (If it does mean that, then that's a substantial increase on
> reviewer load; if it doesn't, then I'm curious what definition of
> "the same" you're using.)
>

In practice, I doubt anybody will look at these re-reviews, so I'm not sure
how much it will help our security.

Bobby Holley

Mar 9, 2017, 6:00:06 PM
to Mike Connor, dev-planning, dev-platform, governance, firefox-dev
At a high level, I think the goals here are good.

However, the tooling here needs to be top-notch for this to work, and the
standard approach of flipping on an MVP and dealing with the rest in
followup bugs isn't going to be acceptable for something so critical to our
productivity as landing code. The only reason more developers aren't
complaining about the MozReview+autoland workflow right now is that it's
optional.

The busiest and most-productive developers (ehsan, bz, dbaron, etc) tend
not to pay attention to new workflows because they have too much else on
their plate. The onus needs to be on the team deploying this to have the
highest-volume committers using the new system happily and voluntarily
before it becomes mandatory. That probably means that the team should not
have any deadline-oriented incentives to ship it before it's ready.

Also, getting rid of "r+ with comments" is a non-starter.

bholley



Eric Rescorla

Mar 9, 2017, 6:01:55 PM
to Mike Connor, dev-planning, dev-platform, governance, firefox-dev
First, let me state that I am generally in support of this type of change.

More comments below.

On Thu, Mar 9, 2017 at 1:53 PM, Mike Connor <mco...@mozilla.com> wrote:

> (please direct followups to dev-planning, cross-posting to governance,
> firefox-dev, dev-platform)
>
>
> Nearly 19 years after the creation of the Mozilla Project, commit access
> remains essentially the same as it has always been. We've evolved the
> vouching process a number of times, CVS has long since been replaced by
> Mercurial & others, and we've taken some positive steps in terms of
> securing the commit process. And yet we've never touched the core idea of
> granting developers direct commit access to our most important
> repositories. After a large number of discussions since taking ownership
> over commit policy, I believe it is time for Mozilla to change that
> practice.
>
> Before I get into the meat of the current proposal, I would like to
> outline a set of key goals for any change we make. These goals have been
> informed by a set of stakeholders from across the project including the
> engineering, security, release and QA teams. It's inevitable that any
> significant change will disrupt longstanding workflows. As a result, it is
> critical that we are all aligned on the goals of the change.
>
>
> I've identified the following goals as critical for a responsible commit
> access policy:
>
>
> - Compromising a single individual's credentials must not be
> sufficient to land malicious code into our products.
>
I think this is a good long-term goal, but I don't think it's one we need
to achieve now. At the moment, I would settle for "only a few privileged
people can single-handedly land malicious code into our products". In
support of this, I would note that what you propose below is insufficient
to achieve your stated objective, because there will still be
single-person admin access to machines.


> - Two-factor auth must be a requirement for all users approving or
> pushing a change.
> - The change that gets pushed must be the same change that was
> approved.
> - Broken commits must be rejected automatically as a part of the
> commit process.
>
>
> In order to achieve these goals, I propose that we commit to making the
> following changes to all Firefox product repositories:
>
>
> - Direct commit access to repositories will be strictly limited to
> sheriffs and a subset of release engineering.
> - Any direct commits by these individuals will be limited to fixing
> bustage that automation misses and handling branch merges.
>
I think this is a good eventual goal, but in the short term, I think it's
probably useful to keep checkin-needed floating around, especially given
the somewhat iffy state of the toolchain.


>
> - All other changes will go through an autoland-based workflow.
> - Developers commit to a staging repository, with scripting that
> connects the changeset to a Bugzilla attachment, and integrates with review
> flags.
> - Reviewers and any other approvers interact with the changeset as
> today (including ReviewBoard if preferred), with Bugzilla flags as the
> canonical source of truth.
> - Upon approval, the changeset will be pushed into autoland.
>
Implicit in this is some sort of hierarchy of reviewers (tied to what was
previously L3 commit access?) that says who can review a patch. Otherwise,
I can just create a dummy account, r+ my own patch, and Land Ho! IIRC
Chromium does this by saying that LGTM implies autolanding only if the
reviewer could have landed the code themselves.
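
A minimal sketch of that kind of check; the level numbers mirror the scm
levels, but the lookup table and account names are invented for
illustration:

# Hypothetical: accept a review+ for autolanding only if the reviewer
# holds commit access at least as high as the repository requires.
REQUIRED_LEVEL = 3                      # e.g. mozilla-central == scm_level_3
ACCESS_LEVELS = {"reviewer_a": 3,       # invented accounts and levels
                 "new_contributor": 1}

def review_counts_for_autoland(reviewer: str) -> bool:
    return ACCESS_LEVELS.get(reviewer, 0) >= REQUIRED_LEVEL

assert review_counts_for_autoland("reviewer_a")
assert not review_counts_for_autoland("new_contributor")  # dummy-account r+ rejected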

-Ekr




Ehsan Akhgari

Mar 9, 2017, 6:02:55 PM
to Andrew McCreight, dev-planning
On Thu, Mar 9, 2017 at 5:31 PM, Andrew McCreight <amccr...@mozilla.com>
wrote:

> On Thu, Mar 9, 2017 at 2:14 PM, L. David Baron <dba...@dbaron.org> wrote:
>
> > On Thursday 2017-03-09 16:53 -0500, Mike Connor wrote:
> > > I've identified the following goals as critical for a responsible
> commit
> > > access policy:
> > ...
> > > - The change that gets pushed must be the same change that was
> > approved.
> >
> > I'm curious what this goal means. In particular, does it mean that
> > you're trying to end "review+ if you make the following changes",
> > and require that reviewers re-review the revisions no matter what?
> >
>
> That's what it sounds like to me. Also, a strict reading of this would
> imply that rebasing will require re-review.
>

On very practical grounds, I find the former unacceptable and the latter
impractical. I can already barely keep up with my review load, and other
people who need to do a lot of reviews have similar issues; the additional
review load resulting from this would be significant.


>
> >
> > (If it does mean that, then that's a substantial increase on
> > reviewer load; if it doesn't, then I'm curious what definition of
> > "the same" you're using.)
> >
>
> In practice, I doubt anybody will look at these re-reviews, so I'm not sure
> how much it will help our security.
>

I know that I will not in any real detail, even if the process forces me to
check a checkbox somewhere...

--
Ehsan

Ehsan Akhgari

Mar 9, 2017, 7:11:07 PM
to Mike Connor, dev-planning
On 2017-03-09 4:53 PM, Mike Connor wrote:
> * Direct commit access to repositories will be strictly limited to
> sheriffs and a subset of release engineering.
> o Any direct commits by these individuals will be limited to
> fixing bustage that automation misses and handling branch merges.
> * All other changes will go through an autoland-based workflow.
> o Developers commit to a staging repository, with scripting that
> connects the changeset to a Bugzilla attachment, and integrates
> with review flags.

How is this going to work for the use case of being able to land changes
quickly when, in the judgement of the patch author, the patch doesn't need
to go through the try server, while still being able to monitor the
landing and quickly follow up with a fix if that judgement proves wrong?
Will there still be a tree like inbound where committers like myself have
full commit access and can use their judgement accordingly?

Mark Côté

Mar 9, 2017, 8:08:09 PM
A mistake we made with MozReview was rolling out a new review tool at
the same time as the new push-based/autoland automation and workflow.
This had a number of repercussions, including requiring users to learn
and use a new review tool in order to benefit from autoland and the
commit-series approach, but also making it difficult to prioritize fixes
and improvements to the various pieces of the system.

Having recognized this, we're decoupling this automation and hooking it
up to BMO, so we can concentrate on getting it right, since most people
are used to using BMO to review patches (this is Project Conduit that
I've blogged and posted about in the mozilla.tools newsgroup). We've
also grown the team, which was previously understaffed for the size of
the project. We should have some initial pieces ready to demo in a few
weeks, but we won't officially launch it until it's good and ready.

(Having a richer and more powerful code-review tool is also still a
goal, but we'll deal with that later.)

Mark

Bobby Holley

Mar 9, 2017, 8:20:12 PM
to Mark Côté, dev-planning
On Thu, Mar 9, 2017 at 5:08 PM, Mark Côté <mc...@mozilla.com> wrote:

> A mistake we made with MozReview was rolling out a new review tool at the
> same time as the new push-based/autoland automation and workflow. This had
> a number of repercussions, including requiring users to learn and use a new
> review tool in order to benefit from autoland and the commit-series
> approach, but also making it difficult to prioritize fixes and improvements
> to the various pieces of the system.
>
> Having recognized this, we're decoupling this automation and hooking it up
> to BMO, so we can concentrate on getting it right, since most people are
> used to using BMO to review patches (this is Project Conduit that I've
> blogged and posted about in the mozilla.tools newsgroup).


This sounds like the right approach.

That said, there can still be hitches in the landing process that are not
necessarily related to the review tool (recall all the edge cases that bz
hit a few weeks ago). Decoupling from the review interface is a good first
step, but building a replacement for {git,hg} push that satisfies
everyone's use-cases is equally hard.

We've also grown the team, which was previously understaffed for the size
> of the project. We should have some initial pieces ready to demo in a few
> weeks, but we won't officially launch it until it's good and ready.
>

That's great to hear.

Mark Côté

Mar 9, 2017, 8:55:49 PM
On 2017-03-09 8:19 PM, Bobby Holley wrote:
> On Thu, Mar 9, 2017 at 5:08 PM, Mark Côté <mc...@mozilla.com> wrote:
>
>> A mistake we made with MozReview was rolling out a new review tool at the
>> same time as the new push-based/autoland automation and workflow. This had
>> a number of repercussions, including requiring users to learn and use a new
>> review tool in order to benefit from autoland and the commit-series
>> approach, but also making it difficult to prioritize fixes and improvements
>> to the various pieces of the system.
>>
>> Having recognized this, we're decoupling this automation and hooking it up
>> to BMO, so we can concentrate on getting it right, since most people are
>> used to using BMO to review patches (this is Project Conduit that I've
>> blogged and posted about in the mozilla.tools newsgroup).
>
>
> This sounds like the right approach.
>
> That said, there can still be hitches in the landing process that are not
> necessarily related to the review tool (recall all the edge cases that bz
> hit a few weeks ago). Decoupling from the review interface is a good first
> step, but building a replacement for {git,hg} push that satisfies
> everyone's use-cases is equally hard.

Indeed, it is. First, I don't know that Conduit, or any tool, can
satisfy *everyone's* use cases. But we're certainly going to aim for
the common ones.

The other point I was trying to get across is that by decoupling the
automation, we can focus on it and not split our time between that and
improvements to (and sometimes fixes for) a code-review tool at the same
time. Also the way we're developing the new system, as separate, small
services, will make the turnaround time on improvements much faster
(working within Review Board's extension system was a terrible idea for
something like this).


Mark

Boris Zbarsky

Mar 9, 2017, 10:10:25 PM
On 3/9/17 8:19 PM, Bobby Holley wrote:
> That said, there can still be hitches in the landing process that are not
> necessarily related to the review tool (recall all the edge cases that bz
> hit a few weeks ago).

Most of those were due to opinionated tools whose opinions match neither
reality nor each other.

The fewer tools in the mix at a time, the better from that point of view.

That said, I strongly urge people working on these tools to actually
consider the various workflows we use and make sure the tools don't assume
common ones away. The issues I ran into also had to do with the
opinionated tools having the opinion that certain use cases don't exist;
use cases that were brought up as a problem with these tools several years
ago... This sort of thing is OK for optional-to-use tools, but less OK for
mandatory ones.

> but building a replacement for {git,hg} push that satisfies
> everyone's use-cases is equally hard.

A good start here is establishing what the use-cases are.

-Boris

P.S. I am still thinking about the original policy questions here;
trying not to rathole too much on the mechanism

Gijs Kruitbosch

Mar 10, 2017, 5:58:09 AM
to Bobby Holley, Mike Connor
On 09/03/2017 22:59, Bobby Holley wrote:
> Also, getting rid of "r+ with comments" is a non-starter.

+ a lot on this.

I think the trust implicit in granting commit access, the reviewer's
discretion and trust in the patch author, and said author's ability to
correctly deal with comments without requiring the reviewer to have
another look, all mean that removing "r+ with comments" ought not to be a
goal.

If the reviewer doesn't trust the author (from either an ability or
intent perspective) to fix the comments, they shouldn't give "r+ with
nits fixed".

If we don't trust the author to not 'sneak in' unrelated changes off an
r+'d patch, we ought not to give them commit access.

~ Gijs

Gijs Kruitbosch

Mar 10, 2017, 6:02:22 AM
to Bobby Holley, Mike Connor
On 10/03/2017 10:58, Gijs Kruitbosch wrote:
> If we don't trust the author to not 'sneak in' unrelated changes off an
> r+'d patch, we ought not to give them commit access.

Addendum: if we feel this sets the bar for commit access too high,
perhaps we need more than 1 level of commit access for this, so that the
tooling enforces trust for the "r+ with comments" case. If I have L3
commit access and review the patch of someone else with L3 commit
access, I don't need to worry about this, but it would be mildly more
palatable for this to be different if I'm reviewing stuff for someone
with L1 commit access.

Either that or the ability for the (L3) reviewer to adjust minor nits
while reviewing, so that the patch doesn't need to go through a whole
other cycle of review. Of course, that then offers the reviewer the
ability to "sneak in" more changes... :-)

~ Gijs

Masatoshi Kimura

Mar 10, 2017, 7:38:57 AM
to Mike Connor, dev-planning, dev-platform, governance, firefox-dev
On 2017/03/10 6:53, Mike Connor wrote:
> - Two-factor auth must be a requirement for all users approving or
> pushing a change.

I have no mobile devices. How can I use 2FA?

Previously it was suggested that I buy a new PC and an SSD just to shorten
the build time. Now do I have to buy a smartphone just to contribute to
Mozilla? What's next?

--
VYV0...@nifty.ne.jp
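
For what it's worth, TOTP-style second factors don't strictly require a
phone; any machine that can compute an HMAC can generate the codes. A
minimal sketch of RFC 6238 TOTP using only the Python standard library
(the secret below is a made-up example value):

import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Generate a time-based one-time password (RFC 6238, SHA-1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period            # 30-second time step
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # example secret, prints a 6-digit code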

Axel Hecht

Mar 10, 2017, 7:49:31 AM
Am 09.03.17 um 22:53 schrieb Mike Connor:
<--->

> I know this is a major change in practice from how we currently operate,
> and my ask is that we work together to understand the impact and concerns.
> If you find yourself disagreeing with the goals, let's have that discussion
> instead of arguing about the solution. If you agree with the goals, but
> not the solution, I'd love to hear alternative ideas for how we can achieve
> the outcomes outlined above.
>

Hi,

I actually have issues with the goals, so I cut out the impl parts.

Here are goals that I'd add, to be explicitly stated:

- We want as many people as possible to change the Firefox code base.
- We want to be able to get changes to the Firefox code base out to the
release audience as quickly as possible.

How these goals or technical decisions work in favor of the project is
really hard to evaluate without some threat models.

Also, you've been vague on which repositories you intend to cover, and I
think that too depends on the threat models.

So, along the lines of David's ask in .platform, I'd really appreciate
more details on the problems we're hoping to mitigate.

Axel

Mike Hoye

Mar 10, 2017, 8:49:41 AM
to dev-pl...@lists.mozilla.org


On 2017-03-10 7:49 AM, Axel Hecht wrote:
> - We want as many people as possible to change the Firefox code base.

For whatever it's worth, I want the Firefox development process to be as
accessible and participatory as possible to as many people as possible,
but that's not quite the same as "as many people as possible can change
the codebase". Automation that removes unnecessary complexity is a big help
in reducing barriers to participation.

- mhoye

Ehsan Akhgari

Mar 10, 2017, 9:40:05 AM
to Mike Hoye, dev-planning
On Fri, Mar 10, 2017 at 8:49 AM, Mike Hoye <mh...@mozilla.com> wrote:

>
>
> On 2017-03-10 7:49 AM, Axel Hecht wrote:
>
>> - We want as many people as possible to change the Firefox code base.
>>
>
> For whatever it's worth, I want the Firefox development process to be as
> accessible and participatory as possible to as many people as possible, but
> that's not quite the same as "as many people as possible can change
> codebase". Automation that removes unnecessary complexity is a big help in
> reducing barriers to participation.
>

Automation that adds unnecessary complexity is a big hindrance to
participation as well. Maybe there is a good reason why we should be doing
this, but the proposal doesn't really mention what problem it's actually
trying to solve beyond vague mentions of the stakeholders setting these
four goals.

For example, what problem is the first goal solving? ("Compromising a
single individual's credentials must not be sufficient to land malicious
code into our products.") Does anyone believe that is the only way
malicious code can get into Firefox? Anyone can submit malicious code
under a pseudonym in Bugzilla, and we know from past experience, time and
time again, that mistakes will remain uncaught during review and testing
due to the nature of software.

--
Ehsan

Ben Hearsum

Mar 10, 2017, 9:59:57 AM
I _think_ the threat model here is a bad actor or someone with a highly
privileged account being coerced into committing something bad.

Boris Zbarsky

Mar 10, 2017, 11:05:35 AM
On 3/9/17 4:53 PM, Mike Connor wrote:
> - Compromising a single individual's credentials must not be sufficient
> to land malicious code into our products.

This is a good goal in general. Devil is in the details, of course...

> - Two-factor auth must be a requirement for all users approving or
> pushing a change.

This seems reasonable if we're willing to help contributors with this
(e.g. hand out yubikeys as needed).

> - The change that gets pushed must be the same change that was approved.

This sounds great at first glance, but I suspect it is hellish in
practice, as others have pointed out in the thread. Given the need to
rebase, this is completely impossible to achieve as stated. We should
figure out what the _real_ goal is here. It's possible it's actually more
than one goal.

> - Broken commits must be rejected automatically as a part of the commit
> process.

This looks good.

I think there are some critical goals missing. Some of these may be
something we're assuming everyone shares as a goal, but I'd rather make
them explicit, because I think some of these are not in fact being
considered in the specific changes being proposed. I think some of
these are more contentious than others....

- Pushing is possible.
- Pushing is possible for non-MoCo-employees.
- Pushing is not a serious time drain on the patch author. The term
"serious" needs definition.
- Pushing is not a serious time drain on the patch reviewer. The
term "serious" needs definition.
- There should be little incentive to do one push instead of
multiple pushes for work that is conceptually separate.

I may be missing some, honestly. I have tried to think about edge cases
here, but I may be having a failure of imagination.

We should have a clear concept of workflows that we consider "in scope"
for this system in terms of evaluating the "not a time drain" goals. I
have explicitly not put any specific workflows in my list of critical
goals, though I consider some workflows critical. I mean workflows in
terms of how work is organized into commits/pushes/bugs, not workflows
like branch-based vs mq vs git vs hg vs whatever. As a concrete
example, pushing multiple things that depend on each other in practice
but not conceptually should probably not be considered an edge case.
That is, I'm talking about changes to the same code (hence need to be
pushed in the right order) but belonging to separate bugs and needing to
be tracked separately and whatnot. [1]

Basically, the current set of goals is more or less all defined from a
security perspective, and seems to be treating usability as a non-goal.
I think approaching the problem from that perspective will create an
unusable system [2], and we need to have specific usability goals in
mind as we design this. The above are my suggestions for high-level
usability goals, but I would like to see what suggestions others have.

I am explicitly not writing down specific proposals that I think would
help achieve the "not a serious time drain" goals, but I do have some
thoughts here and am happy to drill down into them if we agree on those
goals in general.

-Boris

[1] The current mozreview+autoland workflow falls down badly on this,
for example. See https://bugzilla.mozilla.org/show_bug.cgi?id=1344432

[2] Citation needed, but I think this is a commonly understood thing
about software security. Examples like encrypted email, our old sync
service, etc abound. And those are examples where people _were_
considering usability!

Steven MacLeod

Mar 10, 2017, 11:34:26 AM
to dev-pl...@lists.mozilla.org
I think "r+ with nits" would be much less of an issue if the
trust/permissions
model was less coarse, both in level of permission and breadth of code it
encompasses. It seems a little bit insane to me that a developer who
primarily
works on area X of the codebase and is given scm_level_3 due to trust in
that
area can then land to any other part of the tree.

Attempts to fix this for parts of mozilla-central have been implemented with
push hooks[1]. Having a generic way to assign this ownership/trust for all
parts
of the codebase (and actually making it enforceable, rather than just
checking
a user providable string in a commit message) would give automation the
information it needs to relax certain requirements in explicitly defined
cases.

For example, it probably makes sense to allow the owner of a particular file
to land there when given an r+ on a previous revision of the change,
especially
if that review came from another owner or peer of that file. If you'd like
to
keep an audit trail, automation could even send out an interdiff of what was
reviewed and what was landed, after the fact.

It seems to me like it's time that an actual machine readable code ownership
system is implemented (Like most large orgs have). Allowing some level of
trust
to keep people productive makes sense to me, but I don't think the current
scm_level system provides enough control to grant reasonable trust we can
enforce with automation.

As Gijs mentioned, you could also allow reviewers to just fix the nits
themselves. Have tools support the reviewer pushing up an alternative fix
(or
editing directly in a review interface) and suggesting it as a change which
the
author could accept and land after viewing the differences.


[1]
https://hg.mozilla.org/hgcustom/version-control-tools/file/tip/hghooks/mozhghooks/prevent_webidl_changes.py
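
As a rough illustration of what a machine-readable ownership check in a
push hook could look like, here is a small sketch; the ownership map
format, path patterns, and account names are invented, not the format
version-control-tools actually uses:

# Hypothetical check: every file touched by a landing must fall under a
# path whose listed owners include one of the approvers of record.
import fnmatch

# Invented ownership map; a real system would read this from the tree.
OWNERS = {
    "dom/webidl/*": {"webidl_peer_1", "webidl_peer_2"},
    "layout/*":     {"layout_peer"},
    "*":            {"sheriff"},          # fallback for unowned paths
}

def approvers_cover_files(files, approvers):
    """Return (ok, offending_path) for a set of touched files."""
    for path in files:
        owners = set()
        for pattern, names in OWNERS.items():
            if fnmatch.fnmatch(path, pattern):
                owners |= names
        if not owners & set(approvers):
            return False, path
    return True, None

ok, offending = approvers_cover_files(
    ["dom/webidl/Window.webidl"], {"webidl_peer_1"})
assert ok
ok, offending = approvers_cover_files(
    ["layout/base/nsFrame.cpp"], {"webidl_peer_1"})
assert not ok and offending == "layout/base/nsFrame.cpp"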

L. David Baron

Mar 10, 2017, 11:54:54 AM
to dev-pl...@lists.mozilla.org
On Friday 2017-03-10 11:05 -0500, Boris Zbarsky wrote:
> - Pushing is possible.
> - Pushing is possible for non-MoCo-employees.
> - Pushing is not a serious time drain on the patch author. The term
> "serious" needs definition.
> - Pushing is not a serious time drain on the patch reviewer. The
> term "serious" needs definition.
> - There should be little incentive to do one push instead of
> multiple pushes for work that is conceptually separate.

I like the way you're looking at specific usability goals, although
the last one made me think that perhaps some of the goals could be
higher level, e.g.:

- doesn't provide incentives that reduce our ability to get prompt
feedback from nightly users (e.g., by delaying code landing)
- doesn't provide incentives that reduce our ability to find
regressions via bisection (e.g., by merging landings together)

L. David Baron

Mar 10, 2017, 11:59:00 AM
to Steven MacLeod, dev-pl...@lists.mozilla.org
On Friday 2017-03-10 11:33 -0500, Steven MacLeod wrote:
> It seems to me like it's time that an actual machine readable code ownership
> system is implemented (Like most large orgs have). Allowing some level of
> trust
> to keep people productive makes sense to me, but I don't think the current
> scm_level system provides enough control to grant reasonable trust we can
> enforce with automation.

There's a big tradeoff here, which is that we don't want to
discourage refactoring or cleanup changes that touch large parts of
the codebase. Such changes are important for the long-term health
of the codebase. To allow them, we've generally allowed the owners
of an API to review straightforward changes to users of that API
across all parts of the tree, because getting review from every
module touched is a significant amount of work and delay, and
strongly discourages making such refactoring / cleanup changes.

Steven MacLeod

Mar 10, 2017, 12:08:55 PM
to L. David Baron, dev-pl...@lists.mozilla.org
On Fri, Mar 10, 2017 at 11:58 AM, L. David Baron <dba...@dbaron.org> wrote:

> On Friday 2017-03-10 11:33 -0500, Steven MacLeod wrote:
> > It seems to me like it's time that an actual machine readable code
> ownership
> > system is implemented (Like most large orgs have). Allowing some level of
> > trust
> > to keep people productive makes sense to me, but I don't think the
> current
> > scm_level system provides enough control to grant reasonable trust we can
> > enforce with automation.
>
> There's a big tradeoff here, which is that we don't want to
> discourage refactoring or cleanup changes that touch large parts of
> the codebase. Such changes are important for the long-term health
> of the codebase. To allow them, we've generally allowed the owners
> of an API to review straightforward changes to users of that API
> across all parts of the tree, because getting review from every
> module touched is a significant amount of work and delay, and
> strongly discourages making such refactoring / cleanup changes.
>

In the model I'm proposing, I think there could still be a place for a
very small number of very trusted developers who could approve changes
repository-wide. Vast refactorings could rely on sign-off from one of
these developers. I assume the number of changes actually requiring such
sign-off would be low enough that it wouldn't overburden this small group?
Even if their review were just a rubber stamp indicating that they trust
the owner of the API making the refactoring and the reviewer who doesn't
have full access.

Boris Zbarsky

Mar 10, 2017, 12:32:01 PM
On 3/10/17 11:54 AM, L. David Baron wrote:
> - doesn't provide incentives that reduce our ability to get prompt
> feedback from nightly users (e.g., by delaying code landing)
> - doesn't provide incentives that reduce our ability to find
> regressions via bisection (e.g., by merging landings together)

Yes, I like those very much.

-Boris

Mike Connor

Mar 10, 2017, 12:50:02 PM
to L. David Baron, dev-planning
On Thu, Mar 9, 2017 at 5:14 PM, L. David Baron <dba...@dbaron.org> wrote:

> On Thursday 2017-03-09 16:53 -0500, Mike Connor wrote:
> > I've identified the following goals as critical for a responsible commit
> > access policy:
> ...
> > - The change that gets pushed must be the same change that was
> approved.
>
> I'm curious what this goal means. In particular, does it mean that
> you're trying to end "review+ if you make the following changes",
> and require that reviewers re-review the revisions no matter what?
>
> (If it does mean that, then that's a substantial increase on
> reviewer load; if it doesn't, then I'm curious what definition of
> "the same" you're using.)


Replying to this specific part, since there are a lot of expected and
understandable concerns here.

That's a potential end state, and one end of the spectrum of strictness of
code review/approval. That spectrum ranges from "trust approved developers
to make good changes, don't do code review" to "every line of code that
lands should be explicitly reviewed first." Leaving aside productivity
concerns for the moment, I think we would all agree that code review is a
critical part of our process, because developers are human and humans make
mistakes. As a result, the closer we get to doing formal review on final
versions, instead of mostly-done versions, the better off we should be.
That will take time, and I think it's the right goal to work toward.

Big picture, I agree that doing this today presents a lot of challenges,
especially for reviewers currently carrying a huge load. We don't *yet*
have great tooling in place to easily review the differences between
submissions (and ideally recognize when it's just
whitespace/syntax/comments typos getting fixed). We have a lot of
reviewers barely keeping up with reviews (or worse, always falling
behind). We don't have an easy enough process for running patches through
automated testing in advance of reviews. And most difficult of all,
patches are frequently not in a "ready to land" state when submitted for
review, meaning a clean review is relatively rare. I believe we can and
should work to address all of these issues, especially the last one.

To expand a bit on that last point: if we can trust core/known developers
to competently make necessary changes prior to checkin, we should also be
able to expect them to fix most/all of those issues *before* they submit
those patches for review. My experience as both author and reviewer has
generally been that doing code review on something 90% finished takes far
more time than something that's been refined thoroughly prior to
submission. I'd love to see us tighten up the expectations on patch
authors to only request review when they believe something is fully ready
to land. (If you want early/high-level feedback, there's a flag for that.)

Assuming we can make significant progress on all of those challenges, and
the net reviewer overhead is about the same, are there other reasons we
wouldn't want to move to a much stricter review process?

-- Mike

Kartikaya Gupta

Mar 10, 2017, 1:00:54 PM
to Mike Connor, L. David Baron, dev-planning
On Fri, Mar 10, 2017 at 12:49 PM, Mike Connor <mco...@mozilla.com> wrote:
> To expand a bit on that last point: if we can trust core/known developers
> to competently make necessary changes prior to checkin, we should also be
> able to expect them to fix most/all of those issues *before* they submit
> those patches for review.

Not necessarily, because oftentimes those issues are specific to the
reviewer's preferences and the area of code being touched, which the
patch author may not be totally familiar with.

> Assuming we can make significant progress on all of those challenges, and
> the net reviewer overhead is about the same, are there other reasons we
> wouldn't want to move to a much stricter review process?

I think the burden of proof is on anybody claiming they can do this
without increasing net reviewer overhead. Without actually showing
that can be done, this whole line of reasoning is kind of moot.

Cheers,
kats

Nicholas Nethercote

Mar 10, 2017, 8:03:01 PM
to Mike Connor, L. David Baron, dev-planning
On Sat, Mar 11, 2017 at 4:49 AM, Mike Connor <mco...@mozilla.com> wrote:

>
> To expand a bit on that last point: if we can trust core/known developers
> to competently make necessary changes prior to checkin, we should also be
> able to expect them to fix most/all of those issues *before* they submit
> those patches for review. My experience as both author and reviewer has
> generally been that doing code review on something 90% finished takes far
> more time than something that's been refined thoroughly prior to
> submission. I'd love to see us tighten up the expectations on patch
> authors to only request review when they believe something is fully ready
> to land. (If you want early/high-level feedback, there's a flag for that.)
>

I frequently make minor changes to patches after review that haven't been
suggested by reviewers, often after sleeping on it. These are probably
more often in comments than in code. Maybe I'm unusual in this respect.

I'm surprised that Servo and Rust haven't come up yet, because they do
already have a model much like the one suggested -- landing approval must
come from an authorized reviewer.

In my experience it's not a disaster but it is a hassle at times and
motivates against suggesting (as a reviewer) or making (as an author) minor
changes -- it's easy to be lazy and say "close enough". And in the case
where minor changes are requested, the final approval often is a complete
rubber stamp -- I've had Servo patches authorized by people uninvolved with
the PR simply because they were in my timezone and so online at the time.

Nick

Cameron McCormack

Mar 10, 2017, 8:17:52 PM
to Nicholas Nethercote, Mike Connor, L. David Baron, dev-planning
On Sat, Mar 11, 2017, at 09:02 AM, Nicholas Nethercote wrote:
> I'm surprised that Servo and Rust haven't come up yet, because they do
> already have a model much like the one suggested -- landing approval must
> come from an authorized reviewer.

In Servo, reviewers sometimes do "r+ with changes" by saying "delegate+"
in the PR, which tells the landing bot that the PR submitter can mark
the patch as reviewed themselves (even after pushing some updates).
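
A small sketch of the approval rule that implies; the data model below is
invented for illustration and is not the actual homu/bors implementation:

# Hypothetical model of "r+ with delegation": after a reviewer grants
# delegate+, the author's own sign-off on an updated revision is enough.
from dataclasses import dataclass, field

@dataclass
class PullRequest:
    author: str
    authorized_reviewers: set
    delegated: bool = False            # set when a reviewer says "delegate+"
    signoffs: set = field(default_factory=set)

def can_land(pr: PullRequest) -> bool:
    if pr.signoffs & (pr.authorized_reviewers - {pr.author}):
        return True                     # a normal, independent r+
    return pr.delegated and pr.author in pr.signoffs

pr = PullRequest("author", {"reviewer"})
pr.delegated = True                     # reviewer: "r+ with changes, delegate+"
pr.signoffs.add("author")               # author marks the updated patch reviewed
assert can_land(pr)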

Ehsan Akhgari

Mar 10, 2017, 8:51:46 PM
to Nicholas Nethercote, Mike Connor, L. David Baron, dev-planning
On 2017-03-10 8:02 PM, Nicholas Nethercote wrote:
> On Sat, Mar 11, 2017 at 4:49 AM, Mike Connor <mco...@mozilla.com> wrote:
>
>>
>> To expand a bit on that last point: if we can trust core/known developers
>> to competently make necessary changes prior to checkin, we should also be
>> able to expect them to fix most/all of those issues *before* they submit
>> those patches for review. My experience as both author and reviewer has
>> generally been that doing code review on something 90% finished takes far
>> more time than something that's been refined thoroughly prior to
>> submission. I'd love to see us tighten up the expectations on patch
>> authors to only request review when they believe something is fully ready
>> to land. (If you want early/high-level feedback, there's a flag for that.)
>>
>
> I frequently makes minor changes to patches after review that haven't been
> suggested by reviewers. Often after sleeping on it. These are probably more
> often in comments than code. Maybe I'm unusual in this respect.

This is part of my usual workflow as well.

> I'm surprised that Servo and Rust haven't come up yet, because they do
> already have a model much like the one suggested -- landing approval must
> come from an authorized reviewer.
>
> In my experience it's not a disaster but it is a hassle at times and
> motivates against suggesting (as a reviewer) or making (as an author) minor
> changes -- it's easy to be lazy and say "close enough". And in the case
> where minor changes are requested, the final approval often is a complete
> rubber stamp -- I've had Servo patches authorized by people uninvolved with
> the PR simply because they were in my timezone and so online at the time.

That is quite sad to hear... As a reviewer, oftentimes the changes that
I ask for in my r+ with nits are changes I actually care about quite a
lot for various reasons depending on the context... To me, r+ with nits
means I'm trusting the author to be able to make the changes without
oversight, not that the changes are unimportant. Even if all such
changes are trivial nits, this is a good way to build up technical debt.

Ehsan Akhgari

Mar 10, 2017, 9:06:52 PM
to Cameron McCormack, L. David Baron, dev-planning, Nicholas Nethercote, Mike Connor
On Fri, Mar 10, 2017 at 8:17 PM, Cameron McCormack <c...@mcc.id.au> wrote:

> On Sat, Mar 11, 2017, at 09:02 AM, Nicholas Nethercote wrote:
> > I'm surprised that Servo and Rust haven't come up yet, because they do
> > already have a model much like the one suggested -- landing approval must
> > come from an authorized reviewer.
>
> In Servo, reviewers sometimes do "r+ with changes" by saying "delegate+"
> in the PR, which tells the landing bot that the PR submitter can mark
> the patch as reviewed themselves (even after pushing some updates).
>

OK, something like this would address the issue of reviewers refraining
from asking for nits. But the details of the implementation matter here...

--
Ehsan

smaug

Mar 10, 2017, 10:23:27 PM
to Bobby Holley, Mike Connor, dev-platform, governance, firefox-dev
On 03/10/2017 12:59 AM, Bobby Holley wrote:
> At a high level, I think the goals here are good.
>
> However, the tooling here needs to be top-notch for this to work, and the
> standard approach of flipping on an MVP and dealing with the rest in
> followup bugs isn't going to be acceptable for something so critical to our
> productivity as landing code. The only reason more developers aren't
> complaining about the MozReview+autoland workflow right now is that it's
> optional.
>
> The busiest and most-productive developers (ehsan, bz, dbaron, etc) tend
> not to pay attention to new workflows because they have too much else on
> their plate. The onus needs to be on the team deploying this to have the
> highest-volume committers using the new system happily and voluntarily
> before it becomes mandatory. That probably means that the team should not
> have any deadline-oriented incentives to ship it before it's ready.
>
> Also, getting rid of "r+ with comments" is a non-starter.

FWIW, with my reviewer hat on, I think that is not true, _assuming_ there
is a reliable interdiff for patches. And not only for MozReview patches,
but for all patches. (And MozReview's interdiff isn't reliable; it has
data-loss issues, so it doesn't really count here.)
I'd be OK doing a quick r+ if interdiff were working well.



-Olli
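
For illustration, the core of an interdiff is just a diff between the
reviewed revision and the revision about to land; a minimal sketch with
Python's difflib (the file contents are invented, and a real interdiff
also has to cope with rebases and shifted context):

# Minimal interdiff sketch: diff the reviewed revision of a file against
# the revision that is about to land, so a reviewer can r+ only the delta.
import difflib

reviewed = "int add(int a, int b) {\n  return a + b;\n}\n"
landing  = "int add(int a, int b) {\n  MOZ_ASSERT(a >= 0);\n  return a + b;\n}\n"

interdiff = difflib.unified_diff(
    reviewed.splitlines(keepends=True),
    landing.splitlines(keepends=True),
    fromfile="reviewed", tofile="landing")
print("".join(interdiff))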


smaug

Mar 10, 2017, 10:55:18 PM
to Bobby Holley, Mike Connor, governance, firefox-dev
On 03/11/2017 05:23 AM, smaug wrote:
> On 03/10/2017 12:59 AM, Bobby Holley wrote:
>> At a high level, I think the goals here are good.
>>
>> However, the tooling here needs to be top-notch for this to work, and the
>> standard approach of flipping on an MVP and dealing with the rest in
>> followup bugs isn't going to be acceptable for something so critical to our
>> productivity as landing code. The only reason more developers aren't
>> complaining about the MozReview+autoland workflow right now is that it's
>> optional.
>>
>> The busiest and most-productive developers (ehsan, bz, dbaron, etc) tend
>> not to pay attention to new workflows because they have too much else on
>> their plate. The onus needs to be on the team deploying this to have the
>> highest-volume committers using the new system happily and voluntarily
>> before it becomes mandatory. That probably means that the team should not
>> have any deadline-oriented incentives to ship it before it's ready.
>>
>> Also, getting rid of "r+ with comments" is a non-starter.
>
> FWIW, with my reviewer hat on, I think that is not true, _assuming_ there is a reliable interdiff for patches.
> And not only MozReview patches but for all the patches. (and MozReview interdiff isn't reliable, it has dataloss issues so it
> doesn't really count there.).
> I'd be ok to do a quick r+ if interdiff was working well.
>

But I could rephrase this differently. During the last couple of weeks
I've been doing pretty much only reviews, again, so increasing reviewing
time significantly is a non-starter for me. If we can find more reviewers,
great; if not, can we please figure out processes which make reviewing
less time-consuming? I'd hope for a positive answer to both of those
questions. (A quick r+ doesn't change much here _if_ interdiff were
working.)

Though, I do honestly think that our reviewing process would feel quite a
bit better if everyone always prioritized reviews over pretty much
everything else (some deadlines would be valid exceptions to this rule;
during those, the review queue could stay closed). If that were happening,
review requests might spread more evenly.


-Olli

Trevor Saunders

Mar 12, 2017, 2:09:04 PM
to Mike Connor, L. David Baron, dev-planning
On Fri, Mar 10, 2017 at 12:49:50PM -0500, Mike Connor wrote:
> On Thu, Mar 9, 2017 at 5:14 PM, L. David Baron <dba...@dbaron.org> wrote:
>
> > On Thursday 2017-03-09 16:53 -0500, Mike Connor wrote:
> > > I've identified the following goals as critical for a responsible commit
> > > access policy:
> > ...
> > > - The change that gets pushed must be the same change that was
> > approved.
> >
> > I'm curious what this goal means. In particular, does it mean that
> > you're trying to end "review+ if you make the following changes",
> > and require that reviewers re-review the revisions no matter what?
> >
> > (If it does mean that, then that's a substantial increase on
> > reviewer load; if it doesn't, then I'm curious what definition of
> > "the same" you're using.)
>
>
> Replying to this specific part, since there's a lot of expected and
> understandable concerns here.
>
> That's a potential end state, and one end of the spectrum of strictness of
> code review/approval. That spectrum ranges from "trust approved developers
> to make good changes, don't do code review" to "every line of code that
> lands should be explicitly reviewed first." Leaving aside productivity
> concerns for the moment, I think we would all agree that code review is a
> critical part of our process, because developers are human and humans make
> mistakes. As a result, the closer we get to doing formal review on final
> versions, instead of mostly-done versions, the better off we should be.
> That will take time, and I think it's the right goal to work toward.

If we pretend the goal has no downside then of course it seems like a
good idea. However it's really hard for me to imagine a world where it's
worth the time to review patches that amount to

+ MOZ_ASSERT(x);

after I asked you to please assert x there. The worst downsides would
seem to be forgetting to add the assert, which often isn't that big a
deal, or you adding a wrong assert, which will show up soon enough.

Even more extreme, consider a patch like

-#include <IUnknown.h>
+#include <iunknown.h>

For that to have fixed things on a case-sensitive filesystem, it
basically has to have been the correct change.

Finally, for

-class X : public Y
+class X final : public Y

I can't think, off the top of my head, of a way to screw that up that a
compiler would still accept.

So until we live in a world where getting such a change re-reviewed takes
less time than dealing with the bug mail for it does today, I'm inclined
to think r+ with nits for the first case and standing rubber stamps for
the latter two make a lot of sense.

> Big picture, I agree that doing this today presents a lot of challenges,
> especially for reviewers currently carrying a huge load. We don't *yet*
> have great tooling in place to easily review the differences between
> submissions (and ideally recognize when it's just
> whitespace/syntax/comments typos getting fixed). We have a lot of
> reviewers barely keeping up with reviews (or worse, always falling
> behind). We don't have an easy enough process for running patches through
> automated testing in advance of reviews. And most difficult of all,
> patches are frequently not in a "ready to land" state when submitted for
> review, meaning a clean review is relatively rare. I believe we can and
> should work to address all of these issues, especially the last one.

I share the experience of seeing plenty of patches that aren't perfectly
ready to land, but I think most of the reasons are pretty unavoidable.
I'd break them up into a couple of groups.

First we have patches that are just badly designed. As people learn
the area they will get better at avoiding writing those patches, but it
seems likely there will always be some.

Then we have the patches that missed some corner case or race condition.
Ideally we might be able to catch all of those with tests, but I'm kind
of skeptical, especially when we look at the number of possibly busted
tests we find ourselves disabling.

The third group is patches missing things like asserts, maybe missing a
comment for something tricky, or maybe just containing a comment typo.
Any of those might easily slip by the author, so I'd expect we'll keep
seeing them.

> To expand a bit on that last point: if we can trust core/known developers
> to competently make necessary changes prior to checkin, we should also be
> able to expect them to fix most/all of those issues *before* they submit
> those patches for review. My experience as both author and reviewer has
> generally been that doing code review on something 90% finished takes far
> more time than something that's been refined thoroughly prior to
> submission. I'd love to see us tighten up the expectations on patch
> authors to only request review when they believe something is fully ready
> to land. (If you want early/high-level feedback, there's a flag for that.)

Given what I observed above, I tend to come to some different
conclusions.

First, I don't think the people I work with have a particularly big
problem with asking for review on half-done, sloppy patches.

Second, while they could fix the issues before asking for review, they
very often reasonably don't know about them. So the choice is between
making them aware and then re-reviewing, or trusting them to get it
right / being OK with them messing it up once in a while.

So to the degree that I review patches that aren't ready to land, the
problems in them are not the major time sinks in reviewing the patch.

> Assuming we can make significant progress on all of those challenges, and
> the net reviewer overhead is about the same, are there other reasons we
> wouldn't want to move to a much stricter review process?

I think there are some really big assumptions in there, but it's hard to
disagree with wanting to print free money.

Trev

Daniel Veditz

Mar 12, 2017, 8:44:47 PM
to Trevor Saunders, dev-planning
On 3/12/17 6:54 AM, Trevor Saunders wrote:
> If we pretend the goal has no downside then of course it seems like a
> good idea. However its really hard for me to imagine a world where its
> worth the time to review patches that amount to
>
> + MOZ_ASSERT(x);
>
> after I asked you to please assert x there. The worst downsides would
> seem to be forgeting to add the assert which often isn't that big a
> deal, or you adding a wrong assert which will show up soon enough.

That assumes the developers you review have good intentions--which is
almost always true. Like most security-inspired restrictions, however,
the worry is about someone with bad intentions. It's similar to
insurance: you pay every year while hoping that your house does NOT burn
down, and when it doesn't, the premium feels like a waste.

Hypothetically, an evil person could take an "r+ with nits" and then
check in the patch with bad code that was never seen by another human
and which may not get noticed. When we put autoland into the mix, this
essentially gives L3 commit access to anyone who can put together a
helpful "almost right" patch. I think there's not a lot of controversy
about imposing this restriction on patch submitters who are not yet
trusted.

This does add overhead when applied to developers who are, in fact,
trusted. On the surface, as clearly noted in this thread, that seems
insane. But at the top of this thread, the worry was "what if a trusted
committer's credentials get compromised?" Currently the answer is "We're
screwed." We can hope the sheriffs would notice an odd patch during
uplift, but the sheriffs are busy, and any attacker who was any good
would make sure the patch looked totally normal on the surface.

No easy answers.

-Dan Veditz

Eric Rescorla

Mar 12, 2017, 9:56:25 PM
to Daniel Veditz, Trevor Saunders, dev-planning
Top-posting because I'm not really responding to Dan.

I'd like to suggest we take a step back here from the technical
mechanisms and ask a simpler policy question, namely:

"What is the minimum number of 'trusted' people that should be required
to land a patch"?

Currently, I suspect the answer is effectively 0: a random person can
post a patch to Bugzilla, mark it "r=<committer>" (with some claim that
it was in IRC) and then mark it checkin-needed and (I suspect, but
perhaps the Sheriffs will correct me) it will get landed. Alternately,
you can create a patch which gets r+ with nits, and then update with
some malicious code and r=<committer>.

So, I would ask: do people believe that this is an acceptable state of
affairs or should the minimum number of 'trusted' people required to
land a patch be 1 or more?

-Ekr

Kartikaya Gupta

Mar 12, 2017, 11:19:57 PM
to Eric Rescorla, dev. planning, Daniel Veditz
Here's a thought to potentially mitigate some of the delays associated with
re-reviewing for nits and rebases: post-commit review. That is, person A
authors the patch, puts it up for review as usual, person R reviews, says
"r+ with nits fixed". Person A can fix nits/rebase/etc and land right away,
but the patch then still requires the re-review by R after landing.
Obviously in the case where no changes are required, we can skip the
post-commit review.

This doesn't fix the total reviewer time spent, but it does fix the extra
(potentially multiple rebase) timezone/round-trip latency for person A.
That being said, it would require tooling support to make sure that reviews
don't get lost and buy-in from people to actually do the post-commit
reviews. In my experience with the QuantumRender work where we tried this,
a number of patches got left behind and without somebody following up on
them they would have been missed. So I'm not sure this option is actually a
good one, but I'll throw it in this thread as something to consider.
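
To be concrete about the tooling piece, something like the following is
all I really mean (a rough Python sketch; every name here is
hypothetical, this isn't an existing tool):

from dataclasses import dataclass, field
from enum import Enum
from typing import Dict, List


class State(Enum):
    LANDED_PENDING_REREVIEW = "landed, awaiting post-commit review"
    REREVIEW_DONE = "post-commit review complete"


@dataclass
class Landing:
    bug_id: int
    revision: str     # changeset hash that actually landed
    author: str
    reviewer: str
    state: State = State.LANDED_PENDING_REREVIEW


@dataclass
class PostCommitQueue:
    landings: Dict[str, Landing] = field(default_factory=dict)

    def record(self, landing: Landing) -> None:
        # Called when an "r+ with nits" patch lands with author-applied fixes.
        self.landings[landing.revision] = landing

    def mark_rereviewed(self, revision: str) -> None:
        self.landings[revision].state = State.REREVIEW_DONE

    def pending_for(self, reviewer: str) -> List[Landing]:
        # The list the tooling would nag the reviewer about.
        return [l for l in self.landings.values()
                if l.reviewer == reviewer
                and l.state is State.LANDED_PENDING_REREVIEW]

The "patches got left behind" problem above is essentially what happens
when nothing implements that last query.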

Another option is that person R implements the nit fixes and does the
landing instead of bouncing it back to A. This has obvious problems as
well, but again, throwing more ideas out there for the sake of discussion.

Boris Zbarsky

Mar 13, 2017, 9:32:24 AM
On 3/12/17 8:44 PM, Daniel Veditz wrote:
> This does add overhead when applied to developers who are, in fact,
> trusted. On the surface, clearly noted in this thread, that seems
> insane. But at the top of this thread, the worry was "what if a trusted
> committer's credentials get compromised?" Currently the answer is "We're
> screwed". We can hope the sheriffs would notice an odd patch during
> uplift, but the sheriffs are busy and any attacker who was any good
> would make sure the patch looked totally normal on the surface.

I feel like there's an important point to be made here. If I had
compromised a developer's credentials and were trying to get an attack
past a "review right before landing" system, I would do it as follows:

1) In one patch, introduce some code that relies on an invariant that
currently holds.

2) In another, non-conflicting, patch, introduce a violation of that
invariant and fix up the other places depending on it, but not the one
from item 1.

3) Have different people review the two patches.

4) Make sure I have both reviews before landing either patch, so the
reviewers can't catch the problem. There are various ways of doing
this, like making both patches depend on yet a third patch that I make
sure gets reviewed slowly.

This is a somewhat higher bar than just "add the malicious code and land
it", of course. But this is also what I came up with after about 30
seconds' worth of thought...

-Boris

Boris Zbarsky

Mar 13, 2017, 9:36:08 AM
On 3/12/17 9:55 PM, Eric Rescorla wrote:
> Alternately, you can create a patch which gets r+ with nits, and
> then update with some malicious code and r=<committer>.

Speaking as a reviewer, for people I don't trust I pretty much never
give "r+ with nits", because I don't trust them to address the nits
correctly.

Even in cases when I do (e.g. if it's a typo in a comment), I do exactly
the "last review", sometimes post-landing, that we seem to be talking about.

I certainly have no problem formalizing this for the untrusted patch
author case.

Addressing the "compromised trusted account" case sanely is much harder,
not least because the number of patches involved is suddenly much
larger. :(

> So, I would ask: do people believe that this is an acceptable state of
> affairs or should the minimum number of 'trusted' people required to land a patch be 1
> or more?

Obviously the latter. ;)

-Boris

David Burns

Mar 13, 2017, 10:37:50 AM
to Mike Connor, dev-planning, dev-platform, governance, firefox-dev
As the manager of the sheriffs, I am in favour of this proposal.

The reasons are as follows (and note that there are only 3 paid sheriffs
to try to cover the world):

* A number of r+-with-nits patches end up in the sheriffs' queue for
checkin-needed. This puts the onus on the sheriffs, not the reviewer, to
ensure that the right thing has been done. The sheriffs do not have the
context knowledge of the patch, never mind the knowledge of the system
being changed.

* The throughput of patches into the trees is only going to increase. If
there are failures and the sheriffs need to back out, this can be a long
process depending on the failure, leading to pile-ups of broken patches.

A number of people have complained that using autoland doesn't allow us
to fail forward on patches. While that is true, there is the ability to
do T-shaped try pushes to make sure that you at least compile on all
platforms. This can easily be done from MozReview (note: I am not
suggesting we move to MozReview).

Regarding the burden on reviewers, the comments in this thread just
highlight how broken our current process is by having to flag individual
people for reviews. This leads to a handful of people doing 50%+ of the
reviews on the code. We, at least visibly, don't do enough to encourage
new committers, which would lighten the load enough to allow re-reviews
when there are nits. Also, we need to do work to automate the removal of
nits to limit the amount of re-review that reviewers have to do.

We should try to mitigate the security problem and fix our nit problem
instead of complaining that we can't handle re-reviews because of nits.

David


Byron Jones

Mar 13, 2017, 10:45:43 AM
to David Burns, dev-planning, dev-platform, governance, Mike Connor, firefox-dev
David Burns wrote:
> We should try mitigate the security problem and fix our nit problem
> instead of bashing that we can't handle re-reviews because of nits.
One way tooling could help here is to allow the reviewer to make minor
changes to the patch before it lands.
i.e. "r+, fix typo in comment before landing" would become "r+, I fixed
the comment typo"

--
glob — engineering productivity — moz://a

Ehsan Akhgari

Mar 13, 2017, 11:04:15 AM
to Boris Zbarsky, dev-planning@lists.mozilla.org
I'm really struggling to imagine what kind of an actor we are trying to
protect against. There is a cost to going through the pain of
compromising a trusted developer's account, authoring a patch without
their knowledge, getting it landed, getting it correctly approved for an
uplift and actually uplifted to our release branches, waiting for the
vulnerability to get deployed, and then profiting. This puts a price tag
on this attack vector, in terms of financials and logistics, and it
risks detection when breaking into a machine used by a Mozilla developer.

What kind of an actor would choose to actually go through this, versus,
let's say, buying a Firefox 0-day from a vulnerability vendor (which
would perhaps cost a bit more in financials but a lot less in
logistics), or just looking through our oceans of unsafe C++ code
fishing for fresh 0-days? (Honest question, not just rhetorical.)

Also, in all honesty, I feel like our review process is pretty
ineffective against this attack vector at any rate. I don't know about
other reviewers, but I'm going to admit that most of the time when I'm
reviewing code coming from trusted Mozilla colleagues who I have worked
with for many years, I implicitly trust the code as being non-malicious,
whereas when I review code coming from contributors who I don't
recognize, I usually try to treat the code as potentially containing
disastrous security-sensitive mistakes. :/

Steve Fink

Mar 13, 2017, 11:25:50 AM
to dev-pl...@lists.mozilla.org
Count me as another one skeptical about adding a review step. (I work
most closely with someone 8 hours off of my timezone; I'd bet there's a
correlation between opinions on final review vs timezone offsets.)

The most promising option in my mind would be to (1) improve tooling on
interdiffs, and then (2) automatically post an interdiff to the bug
*after* it lands. Then we could choose whether that final interdiff
requires review or whether it's just an FYI. It doesn't prevent someone
from sneaking in some huge security hole, but it makes it a lot more
visible and obvious when they do -- which provides some amount of
discouragement from it happening in the first place. For slightly more
visibility, you could post all of these ex post facto interdiffs to some
easily-skimmable place.
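
As a sketch of what (2) could look like, assuming the reviewed patch and
the landed changeset are both available as plain patch files, and
leaning on the interdiff tool from patchutils (the "post it to the bug"
step is elided here):

import subprocess
import sys


def interdiff(reviewed_patch: str, landed_patch: str) -> str:
    """Diff-of-diffs between what was r+'ed and what actually landed.

    Relies on the `interdiff` utility from patchutils being installed.
    """
    result = subprocess.run(["interdiff", reviewed_patch, landed_patch],
                            capture_output=True, text=True, check=True)
    return result.stdout


if __name__ == "__main__":
    delta = interdiff(sys.argv[1], sys.argv[2])
    if not delta.strip():
        print("Landed patch matches the reviewed patch; nothing to post.")
    else:
        # A real hook would attach `delta` to the bug (e.g. via the
        # Bugzilla REST API) and flag it as an FYI or for post-landing review.
        print(delta)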

IIUC, we're trying to scale here. One problem with scaling is that there
are more cracks to sneak security flaws into. Another is scaling the
rate of changes landing in the tree. More process helps the first and
hurts the second, so we need to find a balance. I submit that ex post
facto interdiffs are a reasonable point of tradeoff.

Obviously, anything we could do to help *both* sides of the equation
would be great. That includes better interdiffs and the autoposted nit
interdiffs, both of which would be useful even without process changes.
(I'm guessing that scanning a stream of nit interdiffs would point
towards some tooling improvements that could eliminate the need for a
fair number of final touch-ups.)

If we *do* end up requiring a review on the final version, whether
retroactive or blocking, we could also reduce the friction slightly via
tooling that allows comment changes through. Maybe even variable renames
or formatting changes. But I doubt that changes the overall picture
significantly.
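
For the "allows comment changes through" part, the check doesn't need to
be fancy. A rough sketch of the heuristic I have in mind (Python,
C/C++-style comments only; string literals and plenty of other cases
aren't handled):

import re

# Matches // line comments and /* ... */ block comments.
_COMMENT_RE = re.compile(r"//[^\n]*|/\*.*?\*/", re.DOTALL)


def normalize(source: str) -> str:
    """Strip C/C++-style comments and collapse all whitespace."""
    without_comments = _COMMENT_RE.sub(" ", source)
    return re.sub(r"\s+", " ", without_comments).strip()


def only_comments_or_whitespace_changed(before: str, after: str) -> bool:
    """True when the compile-relevant content of both versions is identical."""
    return normalize(before) == normalize(after)


assert only_comments_or_whitespace_changed(
    "int x = 1;  // old comment",
    "int x = 1;  /* reworded comment */")
assert not only_comments_or_whitespace_changed(
    "int x = 1;",
    "int x = 2;  // sneaky")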

Eric Rescorla

Mar 13, 2017, 1:34:09 PM
to Boris Zbarsky, dev. planning
On Mon, Mar 13, 2017 at 6:36 AM, Boris Zbarsky <bzba...@mit.edu> wrote:

> On 3/12/17 9:55 PM, Eric Rescorla wrote:
>
>> Alternately, you can create a patch which gets r+ with nits, and
>> then update with some malicious code and r=<committer>.
>>
>
> Speaking as a reviewer, for people I don't trust I pretty much never give
> "r+ with nits", because I don't trust them to address the nits correctly.
>

Actually, I wish I had written this differently. Say I get an r+ w/o
nits: I suspect that the sheriffs will accept an updated patch (e.g.,
ostensibly with a comment fix) that is marked r=<foo>.


>> So, I would ask: do people believe that this is an acceptable state of
>> affairs or should the minimum number of 'trusted' people required to land
>> a patch be 1
>> or more?
>>
>
> Obviously the latter. ;)


Me too. And I think "trust" in this case at least arguably should be
defined as "trusted by Mozilla" (e.g., L3 committer). So, one possibility
would be to have a policy like the following:

- Every CL must either be written by someone trusted OR r+ed by someone
trusted.
- If a patch is r+ with nits, then the final patch must be posted by
someone trusted.

This would ensure that every CL that lands was signed off on in its final
form by someone trusted.
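
Expressed as a landing gate, the rule is roughly this (a sketch only;
the inputs are whatever flags the landing tooling can actually
determine):

def may_land(author_trusted: bool,
             reviewer_trusted: bool,
             changed_after_review: bool,
             final_patch_posted_by_trusted: bool) -> bool:
    # Rule 1: every CL is written by someone trusted OR r+ed by someone trusted.
    rule1 = author_trusted or reviewer_trusted
    # Rule 2: if the patch changed after the r+ (nit fixes, rebases), the
    # final version must have been posted by someone trusted.
    rule2 = (not changed_after_review) or final_patch_posted_by_trusted
    return rule1 and rule2


# Untrusted contributor, trusted reviewer, nothing changed after r+: lands.
assert may_land(False, True, False, False)
# Untrusted contributor fixes the nits themselves and pushes the result: blocked.
assert not may_land(False, True, True, False)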

Does this seem crazy?

-Ekr


>
>
> -Boris

Bobby Holley

Mar 13, 2017, 1:41:12 PM
to Eric Rescorla, Boris Zbarsky, dev. planning
On Mon, Mar 13, 2017 at 10:33 AM, Eric Rescorla <e...@rtfm.com> wrote:

> On Mon, Mar 13, 2017 at 6:36 AM, Boris Zbarsky <bzba...@mit.edu> wrote:
>
> > On 3/12/17 9:55 PM, Eric Rescorla wrote:
> >
> >> Alternately, you can create a patch which gets r+ with nits, and
> >> then update with some malicious code and r=<committer>.
> >>
> >
> > Speaking as a reviewer, for people I don't trust I pretty much never give
> > "r+ with nits", because I don't trust them to address the nits correctly.
> >
>
> Actually, I wish I had written this differently. Say I get an r+ w/o nits,
> I suspect
> that the sheriffs will accept an updated patch (e.g., ostensibly with a
> comment
> fix) that is marked r=<foo>.
>
>
> So, I would ask: do people believe that this is an acceptable state of
> >> affairs or should the minimum number of 'trusted' people required to
> land
> >> a patch be 1
> >> or more?
> >>
> >
> > Obviously the latter. ;)
>
>
> Me too. And I think "trust" in this case at least arguably should be
> defined as
> "trusted by Mozilla" (e.g., L3 committer). So, one possibility would be
> have a
> policy like the following.
>
> - Every CL must either be written by someone trusted OR r+ed by someone
> trusted.
> - If a patch is r+ with nits, then the final patch must be posted by
> someone
> trusted.
>
> This would ensure that every CL that lands was signed off on in its final
> form
> by a someone trusted.
>
> Does this seem crazy?
>

This seems like a very reasonable compromise to me. It gets the overhead
out of the way in the hot paths but still gives us some organization-wide
guarantees.


>
> -Ekr
>
>
> >
> >
> > -Boris

Ehsan Akhgari

Mar 13, 2017, 2:04:41 PM
to Bobby Holley, Eric Rescorla, Boris Zbarsky, dev. planning
On 2017-03-13 1:40 PM, Bobby Holley wrote:
> On Mon, Mar 13, 2017 at 10:33 AM, Eric Rescorla <e...@rtfm.com> wrote:
>
>> On Mon, Mar 13, 2017 at 6:36 AM, Boris Zbarsky <bzba...@mit.edu> wrote:
>>
>>> On 3/12/17 9:55 PM, Eric Rescorla wrote:
>>>
>>>> Alternately, you can create a patch which gets r+ with nits, and
>>>> then update with some malicious code and r=<committer>.
>>>>
>>>
>>> Speaking as a reviewer, for people I don't trust I pretty much never give
>>> "r+ with nits", because I don't trust them to address the nits correctly.
>>>
>>
>> Actually, I wish I had written this differently. Say I get an r+ w/o nits,
>> I suspect
>> that the sheriffs will accept an updated patch (e.g., ostensibly with a
>> comment
>> fix) that is marked r=<foo>.
>>
>>
>> So, I would ask: do people believe that this is an acceptable state of
>>>> affairs or should the minimum number of 'trusted' people required to
>> land
>>>> a patch be 1
>>>> or more?
>>>>
>>>
>>> Obviously the latter. ;)
>>
>>
>> Me too. And I think "trust" in this case at least arguably should be
>> defined as
>> "trusted by Mozilla" (e.g., L3 committer). So, one possibility would be
>> have a
>> policy like the following.
>>
>> - Every CL must either be written by someone trusted OR r+ed by someone
>> trusted.
>> - If a patch is r+ with nits, then the final patch must be posted by
>> someone
>> trusted.
>>
>> This would ensure that every CL that lands was signed off on in its final
>> form
>> by a someone trusted.
>>
>> Does this seem crazy?
>>
>
> This seems like a very reasonable compromise to me. It gets the overhead
> out of the way in the hot paths but still gives us some organization-wide
> guarantees.

I still think this has mostly the same issues as have been raised in the
thread so far, even though at first glance it may seem a bit nicer than
the original proposal: for example, there is still the issue of whether
reviewers actually treat code coming from someone named Ehsan Akhgari as
potentially malicious code, what this means for rebases, who will be
responsible for the final sign-off on the patch in the r+-with-nits
situation, etc.

Note that it seems we have now modified our goal slightly. Originally
it seems we were trying to ensure we don't get incoming security bugs by
ensuring that any commit that ultimately gets checked in gets an r+.
Now we are saying that every commit needs to be r+ed or authored by
someone with L3 access. Are we still protecting against incoming
security vulnerabilities?

Bobby Holley

Mar 13, 2017, 2:08:21 PM
to Ehsan Akhgari, Eric Rescorla, Boris Zbarsky, dev. planning
On Mon, Mar 13, 2017 at 11:04 AM, Ehsan Akhgari <ehsan....@gmail.com>
wrote:

> On 2017-03-13 1:40 PM, Bobby Holley wrote:
> > On Mon, Mar 13, 2017 at 10:33 AM, Eric Rescorla <e...@rtfm.com> wrote:
> >
> >> On Mon, Mar 13, 2017 at 6:36 AM, Boris Zbarsky <bzba...@mit.edu>
> wrote:
> >>
> >>> On 3/12/17 9:55 PM, Eric Rescorla wrote:
> >>>
> >>>> Alternately, you can create a patch which gets r+ with nits, and
> >>>> then update with some malicious code and r=<committer>.
> >>>>
> >>>
> >>> Speaking as a reviewer, for people I don't trust I pretty much never
> give
> >>> "r+ with nits", because I don't trust them to address the nits
> correctly.
> >>>
> >>
> >> Actually, I wish I had written this differently. Say I get an r+ w/o
> nits,
> >> I suspect
> >> that the sheriffs will accept an updated patch (e.g., ostensibly with a
> >> comment
> >> fix) that is marked r=<foo>.
> >>
> >>
> >> So, I would ask: do people believe that this is an acceptable state of
> >>>> affairs or should the minimum number of 'trusted' people required to
> >> land
> >>>> a patch be 1
> >>>> or more?
> >>>>
> >>>
> >>> Obviously the latter. ;)
> >>
> >>
> >> Me too. And I think "trust" in this case at least arguably should be
> >> defined as
> >> "trusted by Mozilla" (e.g., L3 committer). So, one possibility would be
> >> have a
> >> policy like the following.
> >>
> >> - Every CL must either be written by someone trusted OR r+ed by someone
> >> trusted.
> >> - If a patch is r+ with nits, then the final patch must be posted by
> >> someone
> >> trusted.
> >>
> >> This would ensure that every CL that lands was signed off on in its
> final
> >> form
> >> by a someone trusted.
> >>
> >> Does this seem crazy?
> >>
> >
> > This seems like a very reasonable compromise to me. It gets the overhead
> > out of the way in the hot paths but still gives us some organization-wide
> > guarantees.
>
> I still think this has mostly the same issues as has been raised in the
> thread so far, even though at first glance it may seem a bit nicer than
> the original proposal:


Hm, maybe I'm missing something obvious, but I don't follow.


> for example, there is still the issue of whether
> reviewers actually treat code coming from someone named Ehsan Akhgari as
> potentially malicious code,


I assumed they wouldn't.


> what this means for rebases, who will be
> responsible for the final sign off on the patch in the r+-with-nits
> situation, etc.
>

The point is that r+-with-nits is only allowed in the case where the person
doing the nit fixup is trusted.

Mike Connor

Mar 13, 2017, 2:12:57 PM
to Eric Rescorla, Boris Zbarsky, dev. planning
This is something I've been thinking about a bunch over the weekend. I
think trusted needs a smaller scope than "one of the hundreds of people
with L3 access." As Bobby noted, the primary issue is the hot paths, so we
should look at how to explicitly define those paths.

While there are some fuzzy lines between modules (and the pan-tree rewrite
case needs a bit more discussion), I think we can quickly get to a point
where module owners and peers are defined in-tree, in a machine-readable
format that we could use to a) validate that the reviewer(s) of a patch are
the appropriate people and b) suggest potential reviewers. b) is
especially crucial in terms of better balancing review load, which is
something David Burns noted earlier. I'm almost certain that we could do a
much better job of balancing review loads.
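
To make "machine-readable" concrete, here's a rough sketch of the kind of
thing I mean (the format, paths and names below are made up purely for
illustration, not a proposal for the actual schema):

# Hypothetical in-tree metadata (say, a tools/owners.toml parsed at landing
# time) mapping path prefixes to owners, peers, and explicitly trusted authors.
MODULES = {
    "dom/":     {"owner": "alice", "peers": ["bob", "carol"], "trusted": ["dana"]},
    "netwerk/": {"owner": "erin",  "peers": ["frank"],        "trusted": []},
}


def module_for(path):
    """Longest-prefix match of a changed file to its module entry."""
    matches = [(prefix, meta) for prefix, meta in MODULES.items()
               if path.startswith(prefix)]
    if not matches:
        raise LookupError("no module metadata covers %s" % path)
    return max(matches, key=lambda m: len(m[0]))[1]


def valid_reviewer(path, reviewer):
    # a) validate that the reviewer of a patch is an appropriate person.
    meta = module_for(path)
    return reviewer == meta["owner"] or reviewer in meta["peers"]


def may_carry_over_review(path, author):
    # Authors who are owners/peers of, or explicitly trusted for, the code
    # in question may carry a review+ over nit fixes and rebases.
    meta = module_for(path)
    return (author == meta["owner"]
            or author in meta["peers"]
            or author in meta["trusted"])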

If we had that, we could reframe this idea as:

* every patch must be reviewed by an appropriate reviewer (verifiable)
* patch authors who are *also* owners/peers or otherwise marked as trusted
for the code in question (membership in SR group, or explicitly trusted by
the module owner) are permitted to carry over a review.
* carrying over a review requires the original patch be marked explicitly
(i.e. review+ and review-with-nits+)

This does require explicitly marking people as trusted, but I don't think
that's a huge burden, since that list of people doesn't change often.

I think this tightens things up, without creating a significant amount of
pain.

-- Mike


Eric Rescorla

Mar 13, 2017, 2:19:58 PM
to Ehsan Akhgari, Boris Zbarsky, Bobby Holley, dev. planning
On Mon, Mar 13, 2017 at 11:04 AM, Ehsan Akhgari <ehsan....@gmail.com>
wrote:

> On 2017-03-13 1:40 PM, Bobby Holley wrote:
> > On Mon, Mar 13, 2017 at 10:33 AM, Eric Rescorla <e...@rtfm.com> wrote:
> >
> >> On Mon, Mar 13, 2017 at 6:36 AM, Boris Zbarsky <bzba...@mit.edu>
> wrote:
> >>
> >>> On 3/12/17 9:55 PM, Eric Rescorla wrote:
> >>>
> >>>> Alternately, you can create a patch which gets r+ with nits, and
> >>>> then update with some malicious code and r=<committer>.
> >>>>
> >>>
> >>> Speaking as a reviewer, for people I don't trust I pretty much never
> give
> >>> "r+ with nits", because I don't trust them to address the nits
> correctly.
> >>>
> >>
> >> Actually, I wish I had written this differently. Say I get an r+ w/o
> nits,
> >> I suspect
> >> that the sheriffs will accept an updated patch (e.g., ostensibly with a
> >> comment
> >> fix) that is marked r=<foo>.
> >>
> >>
> >> So, I would ask: do people believe that this is an acceptable state of
> >>>> affairs or should the minimum number of 'trusted' people required to
> >> land
> >>>> a patch be 1
> >>>> or more?
> >>>>
> >>>
> >>> Obviously the latter. ;)
> >>
> >>
> >> Me too. And I think "trust" in this case at least arguably should be
> >> defined as
> >> "trusted by Mozilla" (e.g., L3 committer). So, one possibility would be
> >> have a
> >> policy like the following.
> >>
> >> - Every CL must either be written by someone trusted OR r+ed by someone
> >> trusted.
> >> - If a patch is r+ with nits, then the final patch must be posted by
> >> someone
> >> trusted.
> >>
> >> This would ensure that every CL that lands was signed off on in its
> final
> >> form
> >> by a someone trusted.
> >>
> >> Does this seem crazy?
> >>
> >
> > This seems like a very reasonable compromise to me. It gets the overhead
> > out of the way in the hot paths but still gives us some organization-wide
> > guarantees.
>
> I still think this has mostly the same issues as has been raised in the
> thread so far, even though at first glance it may seem a bit nicer than
> the original proposal: for example, there is still the issue of whether
> reviewers actually treat code coming from someone named Ehsan Akhgari as
> potentially malicious code


I was assuming that they did not.


, what this means for rebases, who will be
> responsible for the final sign off on the patch in the r+-with-nits
> situation, etc.
>

This would depend on the original patch author. Specifically:

- If the original patch author (or really the person posting the final
version) was trusted, then they would be able to (once they had r+) make
final changes, rebase, etc.

- If the original patch author was untrusted, then they would need final
signoff on rebases, nits, etc. from someone trusted.


> Note that it seems now we have modified our goal slightly. Originally
> it seems we were trying to ensure we don't get incoming security bugs by
> ensuring that any commit that ultimately gets checked in gets an r+.
> Now we are talking about every commit needs to be r+ed or authored by
> someone with L3 access. Are we still protecting against incoming
> security vulnerabilities?
>

That's certainly the intent of what I'm suggesting. Note that this is
intended to enforce a weaker security condition than the one that Mike's
proposal claimed to be trying to enforce (specifically that compromise of
a single trusted person's credentials would not allow them to land
malicious code). However, as I noted originally, I do not believe that
that proposal actually achieved those objectives. The intent of this
proposal is, rather, that at least one trusted person must sign off on
any code before it lands.

-Ekr

Nathan Froyd

Mar 13, 2017, 2:35:13 PM
to Mike Connor, Eric Rescorla, Boris Zbarsky, dev. planning
On Mon, Mar 13, 2017 at 2:12 PM, Mike Connor <mco...@mozilla.com> wrote:
> While there's some fuzzy lines between modules (and the pan-tree rewrite
> case needs a bit more discussion), I think we can quickly to get to a point
> where module owners and peers are defined in-tree, in a machine-readable
> format that we could use to a) validate that the reviewer(s) of a patch are
> the appropriate people and b) suggest potential reviewers.

I think you're going to want to handle delegation of reviews, e.g.

1) I own/peer a module and I would like somebody to review my patches
so I can get a sense of how thoroughly they review code and/or to help
provide feedback on their review skills.

2) Similar to the above, maybe I get tagged for review, but I would
like somebody else who doesn't own/peer the module to review the code
because I judge them capable enough. This and the above case are
mostly about growing new reviewers.

2a) Perhaps I would just like a second set of eyes on the patch (e.g.
threading issues) and am comfortable handing off review to somebody
who is more expert in such matters and whose judgement I trust to
handle any other issues capably.

3) Sometimes I get asked about reviewing particular things and I ask
the person who I know wrote the original code to review instead
because they are more familiar than I am. Maybe this can be addressed
with ever-finer lists of code owners.

4) Would just like to reiterate that the rewriting code case needs
some thought, as somebody who has done such rewrites (and appreciated
getting a single review) and as somebody who has reviewed code for
such rewrites.

If we require reviewers to appear in a trusted-list in-tree, then
actually growing new reviewers is going to be a real pain.

> * patch authors who are *also* owners/peers or otherwise marked as trusted
> for the code in question (membership in SR group, or explicitly trusted by
> the module owner) are permitted to carry over a review.

This idea could also be expanded to handle the delegation cases above,
of course.

-Nathan

Ehsan Akhgari

Mar 13, 2017, 2:35:45 PM
to Eric Rescorla, Boris Zbarsky, Bobby Holley, dev. planning
On 2017-03-13 2:19 PM, Eric Rescorla wrote:
>
>
> On Mon, Mar 13, 2017 at 11:04 AM, Ehsan Akhgari <ehsan....@gmail.com> wrote:
>
> On 2017-03-13 1:40 PM, Bobby Holley wrote:
> > On Mon, Mar 13, 2017 at 10:33 AM, Eric Rescorla <e...@rtfm.com> wrote:
> >
> >> On Mon, Mar 13, 2017 at 6:36 AM, Boris Zbarsky <bzba...@mit.edu> wrote:
> >>
> >>> On 3/12/17 9:55 PM, Eric Rescorla wrote:
> >>>
> >>>> Alternately, you can create a patch which gets r+ with nits, and
> >>>> then update with some malicious code and r=<committer>.
> >>>>
> >>>
> >>> Speaking as a reviewer, for people I don't trust I pretty much
> never give
> >>> "r+ with nits", because I don't trust them to address the nits
> correctly.
> >>>
> >>
> >> Actually, I wish I had written this differently. Say I get an r+
> w/o nits,
> >> I suspect
> >> that the sheriffs will accept an updated patch (e.g., ostensibly
> with a
> >> comment
> >> fix) that is marked r=<foo>.
> >>
> >>
> >> So, I would ask: do people believe that this is an acceptable
> state of
> >>>> affairs or should the minimum number of 'trusted' people
> required to
> >> land
> >>>> a patch be 1
> >>>> or more?
> >>>>
> >>>
> >>> Obviously the latter. ;)
> >>
> >>
Did you see my response here?
<https://groups.google.com/d/msg/mozilla.dev.planning/nUP-j72I9bE/7OFmSMuqDgAJ>
I don't understand how any of this is useful without this assumption.

> , what this means for rebases, who will be
> responsible for the final sign off on the patch in the r+-with-nits
> situation, etc.
>
>
> This would depend on the original patch author. Specifically:
>
> - If the original patch author (or really the person posting the final
> version)
> was trusted, then they would be able to (once they had r+) make final
> changes, rebase, etc.
>
> - If the original patch author was untrusted, the they would need final
> signoff
> on rebases, nits, etc. from someone trusted.

OK, to me, this essentially reads as: if an L3 committer is malicious,
we're screwed, which is basically the status quo. Am I understanding
the pre- and post-change picture correctly under your suggestion?

(I agree on the other part of this proposal about *non-L3 committers*
being malicious: this proposal would inject a hook into the review
process where we can scan those commits for malicious code. But we would
need to do a good job somehow of highlighting people's commit access
rights in the UI of the tool we use to do code reviews, so that the
reviewer can quickly assess whether they're supposed to treat the patch
author as potentially malicious.)

> Note that it seems now we have modified our goal slightly. Originally
> it seems we were trying to ensure we don't get incoming security bugs by
> ensuring that any commit that ultimately gets checked in gets an r+.
> Now we are talking about every commit needs to be r+ed or authored by
> someone with L3 access. Are we still protecting against incoming
> security vulnerabilities?
>
>
> That's certainly the intent of what I'm suggesting. Note that this is
> intended
> to enforce a weaker security condition than the one that Mike's proposal
> claimed to be trying to enforce (specifically that compromise of a single
> trusted person's credentials would not allow them to land malicious code).
> However, as I noted originally, I do not believe that that proposal actually
> achieved those objectives.

Yes, this much we can agree on. :-)

> The intent of this proposal is, rather, that at
> least one trusted person must sign off on any code before it lands.

I still don't think this proposal achieves any meaningful security
benefits where code is coming from L3 committers. With this proposal,
we're just as screwed if an L3 committer starts to act maliciously as we
are today.

(FWIW, if I may interject some personal opinions here: if I were to redo
our commit access policy, I would probably approach things from a
different angle. I don't think there is anything we can do to remove
the requirement around trust in our L3 committers without basically
removing commit access, and hence the need to trust humans, which IMHO
would be the wrong end of the security/usability spectrum. Instead I
would probably focus a bit on verification measures, for example around
whether the L3 committers actually keep acting in good faith, and
investigations of our disclosed security vulnerabilities, if we aren't
doing that already.)

Eric Rescorla

Mar 13, 2017, 2:42:07 PM
to Ehsan Akhgari, Boris Zbarsky, Bobby Holley, dev. planning
On Mon, Mar 13, 2017 at 11:35 AM, Ehsan Akhgari <ehsan....@gmail.com>
wrote:

> On 2017-03-13 2:19 PM, Eric Rescorla wrote:
> >
> >
> > On Mon, Mar 13, 2017 at 11:04 AM, Ehsan Akhgari <ehsan....@gmail.com> wrote:
> >
> > On 2017-03-13 1:40 PM, Bobby Holley wrote:
> > > On Mon, Mar 13, 2017 at 10:33 AM, Eric Rescorla <e...@rtfm.com> wrote:
> > >
> > >> On Mon, Mar 13, 2017 at 6:36 AM, Boris Zbarsky <bzba...@mit.edu> wrote:
> > >>
> > >>> On 3/12/17 9:55 PM, Eric Rescorla wrote:
> > >>>
> > >>>> Alternately, you can create a patch which gets r+ with nits, and
> > >>>> then update with some malicious code and r=<committer>.
> > >>>>
> > >>>
> > >>> Speaking as a reviewer, for people I don't trust I pretty much
> > never give
> > >>> "r+ with nits", because I don't trust them to address the nits
> > correctly.
> > >>>
> > >>
> > >> Actually, I wish I had written this differently. Say I get an r+
> > w/o nits,
> > >> I suspect
> > >> that the sheriffs will accept an updated patch (e.g., ostensibly
> > with a
> > >> comment
> > >> fix) that is marked r=<foo>.
> > >>
> > >>
> > >> So, I would ask: do people believe that this is an acceptable
> > state of
> > >>>> affairs or should the minimum number of 'trusted' people
> > required to
> > >> land
> > >>>> a patch be 1
> > >>>> or more?
> > >>>>
> > >>>
> > >>> Obviously the latter. ;)
> > >>
> > >>
Yes.

> I don't understand how any of this is useful without this assumption.

I'm not following. The reason that this is not necessary is because (by
hypothesis) you are trusted, and this proposal is not intended to protect
against attack by trusted people. In order to do that, we would need to
ensure signoff by *two* trusted people.


> , what this means for rebases, who will be
> > responsible for the final sign off on the patch in the r+-with-nits
> > situation, etc.
> >
> >
> > This would depend on the original patch author. Specifically:
> >
> > - If the original patch author (or really the person posting the final
> > version)
> > was trusted, then they would be able to (once they had r+) make final
> > changes, rebase, etc.
> >
> > - If the original patch author was untrusted, the they would need final
> > signoff
> > on rebases, nits, etc. from someone trusted.
>
> OK, to me, this essentially reads as if an L3 committer is malicious,
> we're screwed, which is basically the status quo. Am I understanding
> the pre-and post- picture correctly under your suggestion?
>

Yes. As noted above, we would need to do something more aggressive
than mconnor is proposing to address this threat.


> (I agree on the other part of this proposal about *non-L3 committers*
> being malicious, this proposal would inject a hook into the review
> process where we can scan those commits for malicious code, but we would
> need to do a good job somehow at highlighting people's commit access
> rights in the UI of the tool we use to do code reviews so that the
> reviewer can quickly assess whether they're supposed to treat the path
> author as potentially malicious.)
>

Yes.


> The intent of this proposal is, rather, that at
> > least one trusted person must sign off on any code before it lands.
>
> I still don't think this proposal achieves any meaningful security
> benefits where code is coming from L3 committers. With this proposal,
> we're as screwed in case an L3 committer starts to act maliciously as we
> are today.
>

I agree that it does not. It merely attempts to ensure that the code
comes from L3 committers.

-Ekr

Botond Ballo

Mar 13, 2017, 2:59:14 PM
to Daniel Veditz, Trevor Saunders, dev-planning
On Sun, Mar 12, 2017 at 8:44 PM, Daniel Veditz <dve...@mozilla.com> wrote:
> Hypothetically, an evil person could take a "r+ with nits" and then
> check in the patch with bad code that was never seen by another human
> and which may not get noticed. When we put autoland into the mix this
> essentially gives L3 commit access to anyone who can put together a
> helpful "almost right" patch.

I don't see how that's the case. To actually land something on
Autoland, you need to have L3 commit access. So, if this hypothetical
evil person does not themselves have L3 commit access, after getting
an "r+ with nits" and adding malicious code along with the nits, it's
still their reviewer who needs to submit to Autoland, and thus has the
opportunity to review their changes before submitting them.

Cheers,
Botond

Boris Zbarsky

Mar 13, 2017, 3:30:49 PM
On 3/13/17 1:33 PM, Eric Rescorla wrote:
> Actually, I wish I had written this differently. Say I get an r+ w/o nits,
> I suspect
> that the sheriffs will accept an updated patch (e.g., ostensibly with a
> comment fix) that is marked r=<foo>.

This seems entirely too plausible. :(

> Me too. And I think "trust" in this case at least arguably should be
> defined as
> "trusted by Mozilla" (e.g., L3 committer). So, one possibility would be
> have a
> policy like the following.

This seems reasonable, if that's the goal. But this is not the goal
mconnor had in his original post. I'd love to get to the point where we
agree on the goals.

> - Every CL must either be written by someone trusted OR r+ed by someone
> trusted.
> - If a patch is r+ with nits, then the final patch must be posted by someone
> trusted.

This doesn't quite address your "r+ without nits, then the patch author
updates it anyway" scenario; presumably we would need something to
address that too.

-Boris

David Major

Mar 13, 2017, 3:34:50 PM
to dev-pl...@lists.mozilla.org
How do you plan to handle code in external libraries?

As long as anyone can submit a random patch to libwhatever and get a
free pass into Firefox the next time we pull an update, it's not clear
to me what the proposed restrictions are really buying us.

(And I don't think that requiring an L3 developer to review such merges
would be sufficient. Mozilla may lack the expertise on those libraries,
lack the time to review the details of large merges, or run into the
problems of retro-review that kats mentioned.)


Boris Zbarsky

Mar 13, 2017, 3:42:23 PM
On 3/13/17 2:12 PM, Mike Connor wrote:
> If we had that, we could reframe this idea as:

I have a hard time evaluating this proposal without understanding the
threat model we are trying to address. That is, what the actual "goal"
is. Can you explain what that is?

-Boris

P.S. I can imagine various goals that might be addressed by this
proposal, most obviously detection of "unusual activity" a la credit
card fraud departments. But I'm not sure whether those are the goals
we're setting out here, and I think it would be good to be _very_
explicit about our goals, so we can evaluate the specific proposals
against them.


Justin Dolske

Mar 13, 2017, 3:48:52 PM
On 3/12/17 5:44 PM, Daniel Veditz wrote:

> That assumes the developers you review have good intentions--which is
> almost always true. Like most security-inspired restrictions, however,
> the worry is about someone with bad intentions. It's similar to
> insurance in that you pay every year but really you hope that your house
> does NOT burn down, but in that case it feels like a waste.

Along those lines, it seems worthwhile to note (and I haven't seen this
raised yet?) that at the core this is also a human-factors problem, and
those are notoriously difficult to fix with technology.

Specifically, here: if a reviewer has already decided that a patch is
"r+ with fixes", it's unlikely that the followup patch is going to get a
vigorous, detailed review. Especially if the process is perceived as
pointless overhead, causing delays, and 99.9999% of the time the patch
author is not trying to sneak in malicious code.

So a simple "every patch must be reviewed" requirement which doesn't
address that in some way isn't really going to change much from the
status quo -- you'd get "reviews", but only as a paperwork formality.

To improve that, we'd need to do things like asking what it would take
to have reviewers give "r+ with fixes" less often, understanding if the
risk can be mitigated (by more/less trust in some authors), etc.

> No easy answers.

Indeed.

Justin

Eric Rescorla

Mar 13, 2017, 4:02:09 PM
to Boris Zbarsky, dev. planning
On Mon, Mar 13, 2017 at 12:30 PM, Boris Zbarsky <bzba...@mit.edu> wrote:

> On 3/13/17 1:33 PM, Eric Rescorla wrote:
>
>> Actually, I wish I had written this differently. Say I get an r+ w/o nits,
>> I suspect
>> that the sheriffs will accept an updated patch (e.g., ostensibly with a
>> comment fix) that is marked r=<foo>.
>>
>
> This seems entirely too plausible. :(
>
> Me too. And I think "trust" in this case at least arguably should be
>> defined as
>> "trusted by Mozilla" (e.g., L3 committer). So, one possibility would be
>> have a
>> policy like the following.
>>
>
> This seems reasonable, if that's the goal. But this is not the goal
> mconnor had in his original post. I'd love to get to the point where we
> agree on the goals.


Me too. I have come to the conclusion that mconnor's goal cannot be achieved
without substantial disruption.


>> - Every CL must either be written by someone trusted OR r+ed by someone
>> trusted.
>> - If a patch is r+ with nits, then the final patch must be posted by
>> someone
>> trusted.
>>
>
> This doesn't quite address your "r+ without nits, then the patch author
> updates it anyway" scenario; presumably we would need something to address
> that too.


Sorry, I should have clarified this. In any case, the final (landed)
patch must be either reviewed or posted by someone trusted. The nits
thing is a side issue (which you would think I would have realized from
earlier in my own message!)

Eric Shepherd (Sheppy)

Mar 13, 2017, 4:24:04 PM
to Justin Dolske, dev-pl...@lists.mozilla.org

> On Mar 13, 2017, at 3:48 PM, Justin Dolske <dol...@mozilla.com> wrote:
>
> Specifically, here: if a reviewer has already decided that a patch is "r+ with fixes", it's unlikely that the followup patch is going to get a vigorous, detailed review. Especially if the process is perceived as pointless overhead, causing delays, and 99.9999% of the time the patch author is not trying to sneak in malicious code.
>
> So a simple "every patch must be reviewed" requirement which doesn't address that in some way isn't really going to change much from the status quo -- you'd get "reviews", but only as a paperwork formality.


That brings to mind this question, for me: does it make more sense to shoot for a slightly improved review process, but find and implement practices that have automated security verification performed on the code when it's submitted for review, even before the reviewer sees it? Before you jump: yes, I know these tools aren't able to catch all the varieties of issues that exist, but it would certainly help. (Personally, this is an area we could apply some resources to, if only by donation of funds.)

Anyway, once the automated security scans are complete and validated, only then would the reviewer actually do their review. They'd still have to do security-related checks, but they'd have backup, and hopefully the really tiny changes, such as "nit" fixes, would have a pretty low risk factor given the automated analysis. If it gets sent back for changes of any kind, it goes through the security scans again.
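
To illustrate the shape of what I'm describing (just a sketch: it assumes clang-tidy is available and that we can enumerate the C/C++ files a patch touches; a real setup would need proper compile flags and a carefully chosen set of checks):

import re
import subprocess
import sys


def touched_cpp_files(patch_text):
    """Pull the C/C++ files a unified diff touches out of its '+++ b/' lines."""
    paths = re.findall(r"^\+\+\+ b/(\S+)", patch_text, re.MULTILINE)
    return [p for p in paths if p.endswith((".cpp", ".cc", ".h"))]


def scan(patch_file):
    """Run clang-tidy over the touched files; nonzero exit means findings."""
    with open(patch_file) as f:
        targets = touched_cpp_files(f.read())
    found_problems = False
    for path in targets:
        # A real setup would supply proper compile flags (compile_commands.json)
        # and a tuned set of checks.
        proc = subprocess.run(["clang-tidy", "-quiet", path],
                              capture_output=True, text=True)
        if proc.stdout.strip():
            print(proc.stdout)
            found_problems = True
    return 1 if found_problems else 0


if __name__ == "__main__":
    sys.exit(scan(sys.argv[1]))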

I may or may not have made sense, but hopefully I did. Just throwing thoughts out there.


Eric Shepherd
Senior Technical Writer
Mozilla Developer Network <https://developer.mozilla.org/>
Blog: https://www.bitstampede.com/
Twitter: https://twitter.com/sheppy

Lawrence Mandel

Mar 13, 2017, 4:24:44 PM
to Eric Rescorla, Boris Zbarsky, dev. planning
One issue with r+ with nits that we ran into last year is that the
resulting patch/diff is often committed directly to the repo and not
uploaded back to Bugzilla or MozReview. This makes it difficult to audit
the changes to the repo. Keeping the review system in sync with what lands
(regardless of the review requirements) will make it easier to automate a
repo audit and reduce the time that our reviewers need to spend looking at
code changes in the audit scenario. Any concerns with making it a
requirement that the final patch/diff is documented in the bug/review tool
rather than landing directly? (Assuming we adopt a working autoland system
as the way to land code this will be obviously enforced by the tool.)
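
To illustrate the kind of audit this would enable, a rough sketch (the
matching rule is deliberately naive, and fetching the approved patch
from the review tool is left out):

import hashlib
import re
import subprocess


def landed_patch(rev, repo="."):
    """Export a landed changeset from the repo as a plain patch (Mercurial)."""
    return subprocess.run(["hg", "export", "-R", repo, "-r", rev],
                          capture_output=True, text=True, check=True).stdout


def referenced_bug(commit_message):
    # How the audit would tie a changeset back to the bug whose approved
    # attachment it should be compared against.
    match = re.search(r"[Bb]ug (\d+)", commit_message)
    return int(match.group(1)) if match else None


def normalized_digest(patch_text):
    # Compare only the diff body, not commit metadata, dates, or node hashes.
    body = "\n".join(line for line in patch_text.splitlines()
                     if line.startswith(("+", "-", "@@", "diff")))
    return hashlib.sha256(body.encode("utf-8")).hexdigest()


def landed_matches_approved(rev, approved_patch_text, repo="."):
    """True when what landed is, modulo metadata, what was approved."""
    return (normalized_digest(landed_patch(rev, repo))
            == normalized_digest(approved_patch_text))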

Lawrence


Boris Zbarsky

unread,
Mar 13, 2017, 4:35:32 PM3/13/17
to
On 3/13/17 4:24 PM, Lawrence Mandel wrote:
> Any concerns with making it a
> requirement that the final patch/diff is documented in the bug/review tool
> rather than landing directly?

Just one: can we make this not send bugmail to people? At least as an
option?

>(Assuming we adopt a working autoland system
> as the way to land code this will be obviously enforced by the tool.)

Indeed.

-Boris

Bobby Holley

unread,
Mar 13, 2017, 4:36:28 PM3/13/17
to Lawrence Mandel, Eric Rescorla, Boris Zbarsky, dev. planning
On Mon, Mar 13, 2017 at 1:24 PM, Lawrence Mandel <lma...@mozilla.com>
wrote:

> One issue with r+ with nits that we ran into last year is that the
> resulting patch/diff is often committed directly to the repo and not
> uploaded back to Bugzilla or MozReview. This makes it difficult to audit
> the changes to the repo. Keeping the review system in sync with what lands
> (regardless of the review requirements) will make it easier to automate a
> repo audit and reduce the time that our reviewers need to spend looking at
> code changes in the audit scenario. Any concerns with making it a
> requirement that the final patch/diff is documented in the bug/review tool
> rather than landing directly?
>

Submitting the final patches to two places instead of one seems like
busywork to me, and I don't do it (even though some do). I don't know what
it buys us, given that pulsebot posts the hashes of the pushed changes in
the bug.

So I would object to this.


>
> Lawrence
>
> On Mon, Mar 13, 2017 at 4:01 PM, Eric Rescorla <e...@rtfm.com> wrote:
>
> > On Mon, Mar 13, 2017 at 12:30 PM, Boris Zbarsky <bzba...@mit.edu>
> wrote:
> >
> > > On 3/13/17 1:33 PM, Eric Rescorla wrote:
> > >
> > >> Actually, I wish I had written this differently. Say I get an r+ w/o
> > nits,
> > >> I suspect
> > >> that the sheriffs will accept an updated patch (e.g., ostensibly with
> a
> > >> comment fix) that is marked r=<foo>.
> > >>
> > >
> > > This seems entirely too plausible. :(
> > >
> > > Me too. And I think "trust" in this case at least arguably should be
> > >> defined as
> > >> "trusted by Mozilla" (e.g., L3 committer). So, one possibility would
> be
> > >> have a
> > >> policy like the following.
> > >>
> > >
> > > This seems reasonable, if that's the goal. But this is not the goal
> > > mconnor had in his original post. I'd love to get to the point where
> we
> > > agree on the goals.
> >
> >
> > Me too. I have come to the conclusion that mconnor's goal cannot be
> > achieved
> > without substantial disruption.
> >
> >
> > - Every CL must either be written by someone trusted OR r+ed by someone
> > >> trusted.
> > >> - If a patch is r+ with nits, then the final patch must be posted by
> > >> someone
> > >> trusted.
> > >>
> > >
> > > This doesn't quite address your "r+ without nits, then the patch author
> > > updates it anyway" scenario; presumably we would need something to
> > address
> > > that too.
> >
> >
> > Sorry, I should have clarified this. In any case, the final (landed)
> patch
> > must be
> > either reviewed or posted by someone trusted. The nits thing is a side
> > issue
> > (which you would think I would have realized from earlier in my own
> > message!)
> >

Mike Connor

unread,
Mar 13, 2017, 4:38:59 PM3/13/17
to Bobby Holley, dev. planning, Eric Rescorla, Lawrence Mandel, Boris Zbarsky
On Mon, Mar 13, 2017 at 4:35 PM, Bobby Holley <bobby...@gmail.com> wrote:

> On Mon, Mar 13, 2017 at 1:24 PM, Lawrence Mandel <lma...@mozilla.com>
> wrote:
>
> > One issue with r+ with nits that we ran into last year is that the
> > resulting patch/diff is often committed directly to the repo and not
> > uploaded back to Bugzilla or MozReview. This makes it difficult to audit
> > the changes to the repo. Keeping the review system in sync with what
> lands
> > (regardless of the review requirements) will make it easier to automate a
> > repo audit and reduce the time that our reviewers need to spend looking
> at
> > code changes in the audit scenario. Any concerns with making it a
> > requirement that the final patch/diff is documented in the bug/review
> tool
> > rather than landing directly?
> >
>
> Submitting the final patches to two places instead of one seems like
> busywork to me, and I don't do it (even though some do). I don't know what
> it buys us, given that pulsebot posts the hashes of the pushed changes in
> the bug.
>
> So I would object to this.


I'd agree that posting a patch to two places is a non-starter.  That's a
problem we can and should solve with automation.  It should be possible to
commit somewhere, have it show up in Bugzilla, and have it land where it
needs to go.
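As a hand-wavy sketch of the Bugzilla half of that glue (assuming Bugzilla's standard REST attachment endpoint and an API key in the environment; the real plumbing would live in the landing service, not in a local script):

"""Hypothetical glue: export a local changeset and attach it to a bug so
review flags can be set on it. Assumes Bugzilla's REST attachment endpoint
and an API key in BUGZILLA_API_KEY."""

import base64
import os
import subprocess
import requests

BUGZILLA = "https://bugzilla.mozilla.org/rest"

def export_patch(rev, repo="."):
    """Export the local changeset as a patch."""
    out = subprocess.run(
        ["hg", "-R", repo, "export", "-r", rev],
        capture_output=True, text=True, check=True,
    )
    return out.stdout

def attach_to_bug(bug_id, patch_text, summary):
    """Attach the patch to the bug so reviewers can interact with it there."""
    payload = {
        "ids": [bug_id],
        "data": base64.b64encode(patch_text.encode("utf-8")).decode("ascii"),
        "file_name": "patch.diff",
        "summary": summary,
        "content_type": "text/plain",
        "is_patch": True,
    }
    resp = requests.post(
        f"{BUGZILLA}/bug/{bug_id}/attachment",
        json=payload,
        headers={"X-BUGZILLA-API-KEY": os.environ["BUGZILLA_API_KEY"]},
    )
    resp.raise_for_status()
    return resp.json()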

-- Mike

Aki Sasaki

unread,
Mar 13, 2017, 4:41:25 PM3/13/17
to Eric Rescorla, Ehsan Akhgari, Boris Zbarsky, dev. planning, Bobby Holley
On Mon, Mar 13, 2017 at 11:41 AM, Eric Rescorla <e...@rtfm.com> wrote:

> On Mon, Mar 13, 2017 at 11:35 AM, Ehsan Akhgari <ehsan....@gmail.com>
> wrote:
>
> > On 2017-03-13 2:19 PM, Eric Rescorla wrote:
> > >
> > >
> > > On Mon, Mar 13, 2017 at 11:04 AM, Ehsan Akhgari <
> ehsan....@gmail.com
> > > <mailto:ehsan....@gmail.com>> wrote:
> > >
> > > On 2017-03-13 1:40 PM, Bobby Holley wrote:
> > > > On Mon, Mar 13, 2017 at 10:33 AM, Eric Rescorla <e...@rtfm.com
> > > <mailto:e...@rtfm.com>> wrote:
> > > >
> > > >> On Mon, Mar 13, 2017 at 6:36 AM, Boris Zbarsky <
> bzba...@mit.edu
> > > <mailto:bzba...@mit.edu>> wrote:
> > > >>
> > > >>> On 3/12/17 9:55 PM, Eric Rescorla wrote:
> > > >>>
> > > >>>> Alternately, you can create a patch which gets r+ with nits,
> and
> > > >>>> then update with some malicious code and r=<committer>.
> > > >>>>
> > > >>>
> > > >>> Speaking as a reviewer, for people I don't trust I pretty much
> > > never give
> > > >>> "r+ with nits", because I don't trust them to address the nits
> > > correctly.
> > > >>>
> > > >>
> > > >> Actually, I wish I had written this differently. Say I get an r+
> > > w/o nits,
> > > >> I suspect
> > > >> that the sheriffs will accept an updated patch (e.g., ostensibly
> > > with a
> > > >> comment
> > > >> fix) that is marked r=<foo>.
> > > >>
> > > >>
> > > >> So, I would ask: do people believe that this is an acceptable
> > > state of
> > > >>>> affairs or should the minimum number of 'trusted' people
> > > required to
> > > >> land
> > > >>>> a patch be 1
> > > >>>> or more?
> > > >>>>
> > > >>>
> > > >>> Obviously the latter. ;)
> > > >>
> > > >>
> > > >> Me too. And I think "trust" in this case at least arguably
> should
> > be
> > > >> defined as
> > > >> "trusted by Mozilla" (e.g., L3 committer). So, one possibility
> > > would be
> > > >> have a
> > > >> policy like the following.
> > > >>
> > > >> - Every CL must either be written by someone trusted OR r+ed by
> > > someone
> > > >> trusted.
> > > >> - If a patch is r+ with nits, then the final patch must be
> posted
> > by
> > > >> someone
> > > >> trusted.
> > > >>
> > > >> This would ensure that every CL that lands was signed off on in
> > > its final
> > > >> form
> > > >> by a someone trusted.
> > > >>
> > > >> Does this seem crazy?
> > > >>
> > > >
I think we're going in the right direction in this thread: moving from less
secure to more secure. But if the threat is compromised L3 creds, what
sort of solutions might help mitigate that? (Assume one full set of
compromised L3 creds. Prevent, or mitigate, shipping malicious code.) Can
that solution be designed in such a way that it can coexist with at least
close-to-current levels of developer productivity?

I would guess addressing review and commit policy, as we are discussing in
this thread, would be at least part of that solution. Requiring 2 L3
committers to sign off on certain types of commits may be a solution.
Pre-merge-day tree-review may be a solution that requires less day-to-day
overhead, but adds a significant task every N weeks, and doesn't protect
Nightly users. A pre-Nightly branch (or set of branches) where changes can
accumulate until they are given another round of cumulative review may be a
solution. Any solution that leverages automation to lessen manual overhead
would be worth investigating, as long as we have some trust in that
automation.
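To make the first of those options concrete, here is a minimal sketch of a "two trusted sign-offs for sensitive paths" check (the account list, path prefixes, and policy are placeholders, not a proposal for the actual values):

"""Back-of-the-envelope sketch of a two-sign-off check (hypothetical check,
not an existing hook). Assumes the landing request carries the accounts that
approved it and that we can look up which accounts hold L3."""

# Placeholder set; in practice this would come from the access-control system.
L3_ACCOUNTS = {"alice@example.com", "bob@example.com", "carol@example.com"}

# Placeholder list of paths considered sensitive enough to need two sign-offs.
SENSITIVE_PATH_PREFIXES = ("security/", "dom/crypto/", "toolkit/mozapps/update/")

def touches_sensitive_paths(files):
    return any(f.startswith(SENSITIVE_PATH_PREFIXES) for f in files)

def landing_allowed(author, approvers, files):
    """Require one trusted sign-off normally, two distinct ones for sensitive
    paths. The author counts as a sign-off only if they are themselves L3."""
    signoffs = {a for a in approvers if a in L3_ACCOUNTS}
    if author in L3_ACCOUNTS:
        signoffs.add(author)
    required = 2 if touches_sensitive_paths(files) else 1
    return len(signoffs) >= required

# Example: an L3 author plus one distinct L3 reviewer satisfies the
# sensitive-path rule.
assert landing_allowed("alice@example.com", ["bob@example.com"],
                       ["security/manager/ssl/nsNSSComponent.cpp"])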

aki



>
>
> (I agree on the other part of this proposal about *non-L3 committers*
> > being malicious, this proposal would inject a hook into the review
> > process where we can scan those commits for malicious code, but we would
> > need to do a good job somehow at highlighting people's commit access
> > rights in the UI of the tool we use to do code reviews so that the
> > reviewer can quickly assess whether they're supposed to treat the patch
> > author as potentially malicious.)
> >
>
> Yes.
>
>
> > The intent of this proposal is, rather, that at
> > > least one trusted person must sign off on any code before it lands.
> >
> > I still don't think this proposal achieves any meaningful security
> > benefits where code is coming from L3 committers. With this proposal,
> > we're as screwed in case an L3 committer starts to act maliciously as we
> > are today.
> >
>
> I agree that it does not. It merely attempts to ensure that the code comes
> from
> L3 committers.
>
> -Ekr

Mike Connor

unread,
Mar 13, 2017, 4:42:45 PM3/13/17
to Boris Zbarsky, dev-planning
On Mon, Mar 13, 2017 at 4:35 PM, Boris Zbarsky <bzba...@mit.edu> wrote:

> On 3/13/17 4:24 PM, Lawrence Mandel wrote:
>
>> Any concerns with making it a
>> requirement that the final patch/diff is documented in the bug/review tool
>> rather than landing directly?
>>
>
> Just one: can we make this not send bugmail to people? At least as an
> option?


At the least, it should probably send mail, and maybe even an interdiff, to
the reviewer(s).

Otherwise it's sort of "trust and never verify" as a stance. Not sure that
quite makes sense.

-- Mike

Steve Fink

unread,
Mar 13, 2017, 5:10:48 PM3/13/17
to dev-pl...@lists.mozilla.org
On 03/13/2017 01:23 PM, Eric Shepherd (Sheppy) wrote:
>> On Mar 13, 2017, at 3:48 PM, Justin Dolske <dol...@mozilla.com> wrote:
>>
>> Specifically, here: if a reviewer has already decided that a patch is "r+ with fixes", it's unlikely that the followup patch is going to get a vigorous, detailed review. Especially if the process is perceived as pointless overhead, causing delays, and 99.9999% of the time the patch author is not trying to sneak in malicious code.
>>
>> So a simple "every patch must be reviewed" requirement which doesn't address that in some way isn't really going to change much from the status quo -- you'd get "reviews", but only as a paperwork formality.
>
> That brings to mind this question, for me: does it make more sense to shoot for a slightly improved review process while also finding and implementing practices that run automated security verification on the code when it’s submitted for review, even before the reviewer sees it? Before you jump, yes, I know these tools aren’t able to catch all the varieties of issues that exist, but it would certainly help. (Personally, this is an area to which we could apply some resources, if only by donating funds.)
>
> Anyway, once the automated security scans are complete and validated, only then would the reviewer actually do their review. They’d still have to do security-related checks, but they’d have backup, and hopefully the really tiny changes, such as "nit" fixes, would have a pretty low risk factor given the automated analysis. If it gets sent back for changes — of any kind — it goes through security scans again.
>
> I may or may not have made sense, but hopefully I did. Just throwing thoughts out there.

Lest people find this unrealistic, I would like to point out that within
the JS engine at least, we sort of already have this via fuzzing. It's
not an analysis of a specific change; it's an analysis of the overall
state after a set of changes, with mostly automated bisection when
something is found. I believe it provides the sorts of benefits Sheppy is
describing, but in a retroactive way -- reviewers don't have the results
in hand at the time of review, but at least they can depend on one class
of problems being caught before too long.

Obviously, it relies heavily on how fuzzable a component is.
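For reference, the automated-bisection half of that loop can be driven by Mercurial's built-in bisect. A minimal sketch, assuming a reproducer script that exits non-zero when the fuzz test case reproduces the problem:

"""Sketch of driving automated bisection with `hg bisect`. Assumes a
reproducer command that exits 0 when the test case does NOT reproduce
the problem and non-zero when it does."""

import subprocess

def bisect_regression(repo, good_rev, bad_rev, reproducer_cmd):
    """Have `hg bisect` run the reproducer at each step until it pinpoints
    the first changeset where the test case starts failing."""
    def hg(*args):
        subprocess.run(["hg", "-R", repo, *args], check=True)

    hg("bisect", "--reset")
    hg("bisect", "--good", good_rev)
    hg("bisect", "--bad", bad_rev)
    # With --command, hg runs the reproducer at each candidate revision and
    # uses its exit status to mark the revision good (0) or bad (non-zero).
    hg("bisect", "--command", reproducer_cmd)

# Example (repo path, revisions, and script name are illustrative only):
# bisect_regression("mozilla-central", "abc123", "def456", "./run-fuzz-case.sh")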

Mike Connor

unread,
Mar 13, 2017, 5:18:04 PM3/13/17
to Nathan Froyd, Eric Rescorla, Boris Zbarsky, dev. planning
On Mon, Mar 13, 2017 at 2:35 PM, Nathan Froyd <nfr...@mozilla.com> wrote:

> On Mon, Mar 13, 2017 at 2:12 PM, Mike Connor <mco...@mozilla.com> wrote:
> > While there's some fuzzy lines between modules (and the pan-tree rewrite
> > case needs a bit more discussion), I think we can quickly to get to a
> point
> > where module owners and peers are defined in-tree, in a machine-readable
> > format that we could use to a) validate that the reviewer(s) of a patch
> are
> > the appropriate people and b) suggest potential reviewers.
>
> I think you're going to want to handle delegation of reviews, e.g.
>
> 1) I own/peer a module and I would like somebody to review my patches
> so I can get a sense of how thoroughly they review code and/or to help
> provide feedback on their review skills.
>
> 2) Similar to the above, maybe I get tagged for review, but I would
> like somebody else who doesn't own/peer the module to review the code
> because I judge them capable enough. This and the above case are
> mostly about growing new reviewers.
>
> 2a) Perhaps I would just like a second set of eyes on the patch (e.g.
> threading issues) and am comfortable handing off review to somebody
> who is more expert in such matters and whose judgement I trust to
> handle any other issues capably.
>

My assumption is that "must be reviewed by someone already trusted" is a
minimum bar, and would not exclude additional reviews from another party.

In terms of growing new reviewers, the approach I've used many times in the
past was to have new reviewers take the initial pass, and then do a final
review myself. That way they get to learn, I get to evaluate and coach
them on their review approach, and I remain confident that the review bar
was sufficiently high.  Once I stop catching issues that they missed,
they're ready to be a full peer.

3) Sometimes I get asked about reviewing particular things and I ask
> the person who I know wrote the original code to review instead
> because they are more familiar than I am. Maybe this can be addressed
> with ever-finer lists of code owners.
>

Finer-grained lists can work, or the owner/peer who is delegating a
specific review can formally approve (mark r+) based on the work of the
delegate.

I'd frame this as delegating work, but not responsibility.


> 4) Would just like to reiterate that the rewriting code case needs
> some thought, as somebody who has done such rewrites (and appreciated
> getting a single review) and as somebody who has reviewed code for
> such rewrites.
>

My thinking, as you noted further down, is that we can identify the set of
reviewers who can approve a broad change. I'm reasonably confident that
the list of people who should be reviewing pan-tree changes is much smaller
than the number of people with domain expertise.


> If we require reviewers to appear in a trusted-list in-tree, then
> actually growing new reviewers is going to be a real pain.


See above. I think the responsibility angle is key when delegating work.
"Trainees" can mark r+, but that r+ isn't enough to land. The owner/peer
can make their own decision on how much they validate that work, from a
full re-review to a rubber stamp, but they're assuming responsibility
either way.
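As an entirely hypothetical illustration of how the in-tree, machine-readable owners data mentioned earlier could back this up (placeholder file format, module paths, and names; a trainee's r+ alone would not satisfy the check):

"""Illustrative sketch of validating reviewers against in-tree owner data.
Nothing like this exists in-tree today; the module paths and names are
placeholders."""

# Stand-in for a parsed, machine-readable in-tree owners file.
MODULES = {
    "dom/": {"owners": {"owner_a"}, "peers": {"peer_b", "peer_c"}},
    "js/src/": {"owners": {"owner_d"}, "peers": {"peer_e"}},
}

def reviewer_covers(reviewer, path):
    """True if the reviewer is an owner or peer of any module containing path."""
    for prefix, entry in MODULES.items():
        if path.startswith(prefix):
            if reviewer in entry["owners"] or reviewer in entry["peers"]:
                return True
    return False

def validate_landing(touched_files, reviewers):
    """Every touched file needs at least one qualifying reviewer."""
    return all(
        any(reviewer_covers(r, f) for r in reviewers)
        for f in touched_files
    )

# A dom/ change reviewed by a dom/ peer passes; one reviewed only by a
# js/src/ peer does not.
assert validate_landing(["dom/base/Example.cpp"], ["peer_b"])
assert not validate_landing(["dom/base/Example.cpp"], ["peer_e"])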

-- Mike

Justin Dolske

unread,
Mar 13, 2017, 5:18:16 PM3/13/17
to
On 3/13/17 1:23 PM, Eric Shepherd (Sheppy) wrote:

>
> That brings to mind this question, for me: does it make more sense to
> either shoot for a slightly improved review process but find and
> implement practices that have automated security verifications being
> performed on the code when it’s submitted for review, even before the
> reviewer sees it? Before you jump, yes, I know these tools aren’t
> able to catch all the varieties of issues that exist, but it would
> certainly help.

Yeah, automated security scanning is... hard.

But a related example I had in mind was stuff like eslint -- the OG "r+
with nits" was a review on substance plus corrections for (sometimes
arbitrary) style.

Ongoing improvements for seeing integrated test results are similarly
useful. Making the "it passed tests, now please review" flow easy is an
improvement over landing a reviewed patch plus 2 unreviewed followups
for test bustage. [Which of course has never ever happened to me. *cough*]

Justin

Trevor Saunders

unread,
Mar 13, 2017, 5:23:33 PM3/13/17
to Mike Connor, Boris Zbarsky, dev-planning
On Mon, Mar 13, 2017 at 04:42:35PM -0400, Mike Connor wrote:
> On Mon, Mar 13, 2017 at 4:35 PM, Boris Zbarsky <bzba...@mit.edu> wrote:
>
> > On 3/13/17 4:24 PM, Lawrence Mandel wrote:
> >
> >> Any concerns with making it a
> >> requirement that the final patch/diff is documented in the bug/review tool
> >> rather than landing directly?
> >>
> >
> > Just one: can we make this not send bugmail to people? At least as an
> > option?
>
>
> At the least, it should probably send mail, and maybe even an interdiff, to
> the reviewer(s).

Now, the case where Boris is the reviewer is different from the case where
Boris is just cc'd on the bug.  However, sending the mail is only useful if
Boris chooses to do something with it other than delete it unread, which it
sounds like he may well do; and even if I intended to do differently, I can
easily see doing the same because something more important is going on at
the moment.

> Otherwise it's sort of "trust and never verify" as a stance. Not sure that
> quite makes sense.

Well, if you want it to be more than that you need to somehow force
people to look at things and think about them, which is rather hard.

Trev

>
> -- Mike

Nicholas Nethercote

unread,
Mar 13, 2017, 8:53:11 PM3/13/17
to Lawrence Mandel, Eric Rescorla, Boris Zbarsky, dev. planning
On Tue, Mar 14, 2017 at 7:24 AM, Lawrence Mandel <lma...@mozilla.com>
wrote:
>
> One issue with r+ with nits that we ran into last year is that the
> resulting patch/diff is often committed directly to the repo and not
> uploaded back to Bugzilla or MozReview. This makes it difficult to audit
> the changes to the repo.

I always look at the hg commit link rather than posted patches, precisely
because I know that's exactly what landed. Even if the patch author doesn't
explicitly change anything after getting review, rebasing can still change
the patch.

Nick

jma...@mozilla.com

unread,
Mar 14, 2017, 8:15:48 AM3/14/17
to
I just r-'d a patch last night because a rebase caused it to do unintended things. While I don't see this every day, it seems that rebases cause many issues (I see one monthly), and given that data I would say any rebased patch should be re-reviewed. How often do patch authors examine their patch in detail after a rebase? Just looking at the diff can be OK, but seeing the source code in more context can point out code being added in the wrong place (or in some cases duplicated).

-Joel

Axel Hecht

unread,
Mar 14, 2017, 9:35:54 AM3/14/17
to
On 14.03.17 at 13:15, jma...@mozilla.com wrote:
One point on rebases:

autoland, AFAIK, *never* lands the changeset it was asked to land.

It always rebases, and often adjusts the commit message to change the r?
to r+. (Does it also change the reviewer to the actual reviewer? I'd
have to check the code.)

I'm pretty sure that at the pace that we're changing the head of our
repository, we won't get any code landed without rs=foopy on rebases,
and thus hash changes.

Which also raises an interesting question about how much we can prove
about the landed patch at the time it lands.
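For what it's worth, the message rewrite itself is mechanically trivial. A toy sketch of what a landing service could do here, not a claim about autoland's actual behaviour:

"""Toy illustration of commit-message rewriting by a landing service
(hypothetical; whether autoland substitutes the actual reviewer would
need to be checked against its code, as noted above)."""

import re

def rewrite_review_annotation(message, actual_reviewers):
    """Replace a trailing 'r?reviewer' request with 'r=' plus the reviewers
    who actually granted review; append one if no request was present."""
    granted = ",".join(actual_reviewers)
    rewritten, count = re.subn(r"r\?[\w.,-]+", "r=" + granted, message)
    return rewritten if count else message + " r=" + granted

assert rewrite_review_annotation(
    "Bug 123 - Frob the widget. r?alice", ["alice"]
) == "Bug 123 - Frob the widget. r=alice"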

Axel

Ehsan Akhgari

unread,
Mar 15, 2017, 10:54:07 AM3/15/17
to Mike Connor, Bobby Holley, dev. planning, Eric Rescorla, Boris Zbarsky, Lawrence Mandel
On 2017-03-13 4:38 PM, Mike Connor wrote:
> On Mon, Mar 13, 2017 at 4:35 PM, Bobby Holley <bobby...@gmail.com> wrote:
>
>> On Mon, Mar 13, 2017 at 1:24 PM, Lawrence Mandel <lma...@mozilla.com>
>> wrote:
>>
>>> One issue with r+ with nits that we ran into last year is that the
>>> resulting patch/diff is often committed directly to the repo and not
>>> uploaded back to Bugzilla or MozReview. This makes it difficult to audit
>>> the changes to the repo. Keeping the review system in sync with what
>> lands
>>> (regardless of the review requirements) will make it easier to automate a
>>> repo audit and reduce the time that our reviewers need to spend looking
>> at
>>> code changes in the audit scenario. Any concerns with making it a
>>> requirement that the final patch/diff is documented in the bug/review
>> tool
>>> rather than landing directly?
>>>
>>
>> Submitting the final patches to two places instead of one seems like
>> busywork to me, and I don't do it (even though some do). I don't know what
>> it buys us, given that pulsebot posts the hashes of the pushed changes in
>> the bug.
>>
>> So I would object to this.
>
>
> I'd agree that posting a bug to two places is a non-starter. That's a
> problem we can and should solve with automation. It should be possible to
> commit to somewhere, have it show up in Bugzilla, and have it land where it
> needs to go.

I feel like I'm starting to reply to every message in this thread with
"rebases, rebases, rebases". ;-)

More seriously, what Lawrence is asking for is simply impossible.
You can't know what the final patch will be before you push it, because
we prohibit pushing more than one head, so you may have to do an
unlimited number of rebases before you actually succeed in landing your
patch.

Once your patch has landed, pulsebot already sends a link to the
landed commit to the bug, and that is what has to be counted on as the
actual source of truth about what actually landed.  The fact that
MozReview is incapable of reflecting that in its UI is just a deficiency
of the review tool which we could fix. I use Phabricator for
contributing to LLVM, and there, as soon as I manage to push the final
version of my patch to their SVN repo, the Phabricator UI links my review
page to the actual SVN revision that landed (example:
https://reviews.llvm.org/D16761 says
"Was committed in r260265, but reverted in r260536.")

Lawrence Mandel

unread,
Mar 15, 2017, 11:11:10 AM3/15/17
to Ehsan Akhgari, dev. planning, Eric Rescorla, Bobby Holley, Mike Connor, Boris Zbarsky
On Wed, Mar 15, 2017 at 10:53 AM, Ehsan Akhgari <ehsan....@gmail.com>
wrote:
OK. Clearly I didn't quite think that one through. Point taken.

Lawrence

Gregory Szorc

unread,
Mar 21, 2017, 6:40:46 PM3/21/17
to Mike Connor, dev-planning, dev-platform, governance, firefox-dev
On Thu, Mar 9, 2017 at 1:53 PM, Mike Connor <mco...@mozilla.com> wrote:

> (please direct followups to dev-planning, cross-posting to governance,
> firefox-dev, dev-platform)
>
>
> Nearly 19 years after the creation of the Mozilla Project, commit access
> remains essentially the same as it has always been. We've evolved the
> vouching process a number of times, CVS has long since been replaced by
> Mercurial & others, and we've taken some positive steps in terms of
> securing the commit process. And yet we've never touched the core idea of
> granting developers direct commit access to our most important
> repositories. After a large number of discussions since taking ownership
> over commit policy, I believe it is time for Mozilla to change that
> practice.
>
> Before I get into the meat of the current proposal, I would like to
> outline a set of key goals for any change we make. These goals have been
> informed by a set of stakeholders from across the project including the
> engineering, security, release and QA teams. It's inevitable that any
> significant change will disrupt longstanding workflows. As a result, it is
> critical that we are all aligned on the goals of the change.
>
>
> I've identified the following goals as critical for a responsible commit
> access policy:
>
> - Compromising a single individual's credentials must not be
> sufficient to land malicious code into our products.
> - Two-factor auth must be a requirement for all users approving or
> pushing a change.
> - The change that gets pushed must be the same change that was
> approved.
> - Broken commits must be rejected automatically as a part of the
> commit process.
>
>
> In order to achieve these goals, I propose that we commit to making the
> following changes to all Firefox product repositories:
>
> - Direct commit access to repositories will be strictly limited to
> sheriffs and a subset of release engineering.
> - Any direct commits by these individuals will be limited to fixing
> bustage that automation misses and handling branch merges.
> - All other changes will go through an autoland-based workflow.
> - Developers commit to a staging repository, with scripting that
> connects the changeset to a Bugzilla attachment, and integrates with review
> flags.
> - Reviewers and any other approvers interact with the changeset as
> today (including ReviewBoard if preferred), with Bugzilla flags as the
> canonical source of truth.
> - Upon approval, the changeset will be pushed into autoland.
> - If the push is successful, the change is merged to
> mozilla-central, and the bug updated.
>
> I know this is a major change in practice from how we currently operate,
> and my ask is that we work together to understand the impact and concerns.
> If you find yourself disagreeing with the goals, let's have that discussion
> instead of arguing about the solution. If you agree with the goals, but
> not the solution, I'd love to hear alternative ideas for how we can achieve
> the outcomes outlined above.
>

(Responding to several topics from the top post because I don't want to
spam with multiple replies.)

I think a lot of people are conflating "push access" with "ability to land
something." The distinction is that the former grants you privileges to `hg
push` directly to repos on hg.mozilla.org, while the latter allows delegation
of this ability. It is easy to conflate them because today "push access"
(level 3) gives you permission to trigger the landing (via the autoland
service via MozReview). But this is only because it was convenient to
implement this way.

I want to explicitly state that we're moving towards N=~10 people having raw
push access to the Firefox repos, with the majority of landings occurring via
autoland (via Conduit via MozReview/Bugzilla). This reduces our attack surface
area (less exposure to compromised SSH keys), establishes better auditing (the
history of a change will be on Bugzilla/MozReview and not on a random local
machine), eliminates the potential for bugs or variance with incorrect rebasing
done on local machines, and allows us to do extra things as part of the
landing/rebase, such as automatically reformatting code to match unified code
styles, doing binding generation, etc. They say all problems in computer
science can be solved by another level of indirection. Deferred landing
(autoland) is such an indirection, and it solves a lot of problems.

It will be happening. Most of us will lose access to push directly to the
Firefox repos. We won't be losing the ability to initiate a push, however. For
the record, I'm not in favor of making this change until the tooling is such
that it won't be a significant workflow/productivity regression. This is a
reason why it hasn't been made yet.

A handful have commented on whether a rebase invalidates a previous r+.
This is a difficult topic. Strictly speaking, a rebase invalidates
everything because a change in a commit being rebased over could invalidate
assumptions. Same goes for a merge commit. In reality, most rebases are
harmless and we shouldn't be concerned with their existence. In the cases
where it does matter, we can help prevent things from falling through the
cracks by having the review tool detect when in-flight reviews touch the
same files / topics and route them to the same reviewer(s) and/or notify
the different reviewer(s) so people are aware of potential for conflict.
This feature was conceived before MozReview launched but (sadly) hasn't
been implemented.
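To sketch what that detection might look like (a hypothetical helper, not MozReview code):

"""Small sketch of in-flight conflict detection. Given the file sets of open
review requests, flag pairs that overlap so they can be routed to, or at
least surfaced for, the same reviewer(s)."""

from itertools import combinations

def find_conflicts(open_reviews):
    """open_reviews: mapping of review-request id -> set of touched files.
    Returns a list of (id_a, id_b, shared_files) for overlapping requests."""
    conflicts = []
    for (a, files_a), (b, files_b) in combinations(open_reviews.items(), 2):
        shared = files_a & files_b
        if shared:
            conflicts.append((a, b, sorted(shared)))
    return conflicts

# Example with made-up request ids and paths:
open_reviews = {
    "D100": {"netwerk/protocol/http/nsHttpChannel.cpp"},
    "D101": {"netwerk/protocol/http/nsHttpChannel.cpp", "netwerk/base/moz.build"},
    "D102": {"layout/style/ServoStyleSet.cpp"},
}
print(find_conflicts(open_reviews))  # -> [('D100', 'D101', [...])]

The review tool would then route, or at least cross-notify, the reviewers on any pair it reports.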

There have been a few comments on interdiffs when rebases are in play. I
want to explicitly state that there is no single "correct" way to display a
diff much less an interdiff much less an interdiff when a rebase is
involved. This is why tools like Git have multiple diffing algorithms
(minimal, patience, histogram, Myers). This is why there are different ways
of rendering a diff (unified, side-by-side, 3-way). Rendering a simple diff
is hard. Rendering an interdiff is harder. Unfortunately, central review
tools force a specific rendering on users. ReviewBoard does allow some
tweaking of diff behavior (such as ignore whitespace). But no matter what
options you use, someone will be unhappy with it. An example is MozReview's
handling of interdiffs. It used to hide lines changed by a rebase that
weren't changed in the commit up for review. But that was slightly buggy in
some situations and some people wanted to actually see those changes, so
the behavior was changed to show all file changes in the interdiff. This
made a different set of people unhappy because the changes were spurious
and contributed noise. In summary, you can't please all the people all the
time. So you have to pick a reasonable default then debate about adding
complexity to placate the long tail or handle corner cases. Standard
product design challenges. On top of that, technical users are the 1% and
are a very difficult crowd to please.

I'm sensitive to concerns that removal of "r+ with nits" would be a net
productivity regression. We should be thinking of ways to make landing code
easier, not harder. This is a larger discussion spanning several topics, of
course. But the point I want to make is we have major, systemic problems
with code review latency today. Doing away with "r+ with nits" exacerbates
them, which is part of the reason I think people feel so passionately about
it. I feel like there needs to be a major cultural shift that emphasizes
low review latency, especially for new and high volume contributors. Tools
can help some. But tools only encourage behavior, not enforce it. I feel
like some kind of forcing function (either social/cultural or formal in
policy) is needed. What exactly, I'm not sure. At this point (where I see
very senior engineers with massive review queues), I fear the problem may
have to be addressed by management to adequately correct. (Yes, I hated
having to type the past few sentences.)

I think Steven MacLeod's response was highly insightful and is worth
re-reading multiple times. Pursuant to his lines of thought, I think there
is room to have more flexible policies for different components. For
example, high-value code like anything related to cryptography or security
could be barred from having "r+ with nits" but low-impact, non-shipping
files like in-tree documentation could have a very low barrier to change.
Of course, this implies flexible tools to accommodate flexible workflows
which may imply Mozilla's continued investment in those tools (which is in
contrast to a mindset held by some that we should use tools in an
off-the-shelf configuration instead of highly tailoring them to our
workflows). I like the simplicity of one policy for everything but am not
convinced it is best. I'd like more details on what other large companies
do here.

I hope this post doesn't revive this thread and apologize in advance if it
does. Please consider replying privately instead of poking the bear.