I've reviewed your feedback regarding the first proposal
(http://groups.google.com/group/mozilla.dev.amo/browse_thread/thread/b37b30ab41d7bebd)
and made some adjustments based on it.
The main difference in this version is that it includes an author
reputation system that will be added to the criteria editors will have
available to evaluate add-ons and decide if they are "reliable" or not.
Please have a look and tell me what you think.
Doc: http://docs.google.com/View?id=dcxw3w5k_25hn9nd8vg
Thanks!
- Jorge
I think a slight adjustment may make the process more efficient and
more tailored to individual editors' attitudes.
Code revision and evaluation of an author's reliability are two
different tasks, and even though sometimes it may be handy for an
editor to do the two things together, it may not be so in all cases.
In addition, an editor may just not be certain as to whether an add-on
deserves a 'reliable' status. If I understand correctly, that's the
rationale behind the ability to delegate the decision to an admin and
not just postponing it. But the same decision could be made by another
editor too, by definition.
I'd change:
"Admins will have access to a queue for pending reliable add-on
decisions, and will be able to change this flag for any add-on."
To:
"Admins and editors will have access to a queue for pending reliable
add-on decisions. Admins will be able to change this flag for any
add-on."
After a review, editors can decide to give the 'reliable' status
immediately, or "nominate" the add-on for 'reliable' status (like in
the current proposal), but they will then be able to see their own as
well as other editors' nominations. In this way, a possible workflow
for an editor may be to spend a day doing code revision, and the next
day visiting authors' websites and AMO profiles to decide whether
the add-ons they nominated deserve the 'reliable' status. I guess
the latter task is similar to what experienced editors already do for
public add-on nominations - the only difference is that editors,
instead of authors, nominate the add-ons.
With regard to the author scoring system, one based on the data
already available in the AMO database may be used for "automatic
nominations". Otherwise, I see no use in having a score associated
with each characteristic (such as "Mozillian") when this information
may just be stored in specific fields in the author's private profile,
and is only used sporadically, during 'reliable' add-on nominations.
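Just to make the "automatic nominations" idea concrete, here is a rough
sketch; the field names, weights and threshold are all made up, not
actual AMO data:

    # Rough sketch of an automatic nomination rule based only on data the
    # AMO database presumably already has. Field names, weights and the
    # cut-off are hypothetical.

    def should_auto_nominate(author):
        """Return True if an author's add-on looks worth nominating."""
        score = 0
        score += min(author["account_age_days"] // 180, 4)   # longevity, capped
        score += 2 * author["public_addons"]                  # track record
        score += author["approved_updates"]                   # clean reviews
        score -= 3 * author["rejected_updates"]               # penalties
        return score >= 8                                      # arbitrary threshold

    # Hypothetical usage:
    # if should_auto_nominate({"account_age_days": 900, "public_addons": 2,
    #                          "approved_updates": 6, "rejected_updates": 1}):
    #     print("queue this add-on for a 'reliable' decision")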
I'm not an editor, and as an author these details will only be
marginally relevant to me, but I hope these suggestions help anyway.
Regards,
Paolo
1.1 Reliable add-ons
Please drop the condition that the add-on has to exist for a year. If
it is reasonably mature and past updates have proved fine, it should
be possible to become reliable earlier.
1.2 Updates for reliable add-ons
This could easily be gamed by uploading an update in the hope of it
going public immediately, then removing and re-uploading it if that
didn't work the first time.
10 updates is IMHO too many (especially if the add-on is not mature).
1.4 Author reputation system
Knowing a person doesn't tell you anything about his or her coding
skills. E.g., I know a Mozilla employee with great extension ideas, but
it sometimes takes up to half a dozen rejected updates until he uploads
a version ready for the public. Reputation only helps against malicious
people.
3.1 Language pack and search engine reviews
a) Language packs: Tell l10n to help, or use an auto-update channel of
the coming update channel system. I have seen Pike redirecting new
language pack creators to AMO, where they should upload their language
packs and collect feedback. Unfortunately, in most cases we won't have
someone who understands the translation. Maybe an automated l10n
check/validation can be integrated. compare-locales.py?
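To make that concrete, the check could be as simple as comparing the
keys of a localized file against the en-US reference. This is just a
sketch with made-up paths, not compare-locales.py itself:

    # Minimal sketch of an automated l10n sanity check: report keys that are
    # missing from, or left over in, a localized .properties file. Paths are
    # hypothetical; a real check would walk the whole language pack.

    def read_keys(path):
        """Return the set of keys defined in a .properties file."""
        keys = set()
        with open(path, encoding="utf-8") as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith(("#", "!")):
                    continue  # skip blank lines and comments
                key = line.split("=", 1)[0].strip()
                if key:
                    keys.add(key)
        return keys

    reference = read_keys("en-US/browser.properties")  # hypothetical path
    localized = read_keys("de/browser.properties")     # hypothetical path

    print("missing translations:", sorted(reference - localized))
    print("obsolete entries:", sorted(localized - reference))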
b) I abandoned search engine reviews for a few reasons:
- No one has ever asked me for a search engine on AMO (except their
developers).
- If the search functionality is similar to that of the site itself,
people should get it from there.
- There is no reliable help when checking nominations (e.g. whether
something like it already exists, whether it is
official/unofficial/affiliate/...).
4.1 Sandboxed add-on updates
Some people want to check what changed before installing the add-on, so
maybe there should be a prompt with a checkbox ("Do you want to install
this untested update? [ ] Don't ask me again for this update")
P.S. The "Note" is outdated.
Additional comments:
1. Auto-approve new versions with changes only in install.rdf and/or
language files (including the addition of new translations); see the
sketch after this list. This needs a metric to check that the developer
doesn't game the "Updated add-ons" list.
2. Maybe auto-approve versions compatible only with experimental
versions but request review if the maxVersion gets increased to include
stable versions.
3. Breaking approach: let editors review, but everything that hasn't
been approved within N days (N ~ 14) should be pushed out as an
automatic update anyway.
4. Introduce watchlists/blocklists for authors.
5. Exclude some add-ons from editor review: e.g., menu extensions for
websites aren't worth the effort, such as the Amazon store extensions
from https://addons.mozilla.org/firefox/user/4794950
6. Create VMs where a script replays the input stream of a real user
while a bunch of add-ons is installed, and another script checks
whether any of them tries to read/modify data outside the profile
directory or to send data out.
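For comment 1, the "only install.rdf and/or language files changed"
test could be done mechanically by diffing the two XPI packages. A
sketch; the list of "safe" paths is an assumption, not AMO policy:

    # Sketch: decide whether an update changes only install.rdf and locale
    # files, by comparing file hashes inside the old and new XPI.
    import hashlib
    import zipfile

    SAFE_FILES = ("install.rdf",)
    SAFE_PREFIXES = ("chrome/locale/", "locale/")  # translation files

    def digests(xpi_path):
        """Map every file inside the XPI to a hash of its contents."""
        with zipfile.ZipFile(xpi_path) as xpi:
            return {name: hashlib.sha1(xpi.read(name)).hexdigest()
                    for name in xpi.namelist() if not name.endswith("/")}

    def only_safe_changes(old_xpi, new_xpi):
        old, new = digests(old_xpi), digests(new_xpi)
        changed = ({name for name in new if old.get(name) != new[name]} |
                   {name for name in old if name not in new})
        return all(name in SAFE_FILES or name.startswith(SAFE_PREFIXES)
                   for name in changed)

    # Hypothetical usage:
    # if only_safe_changes("myaddon-1.0.xpi", "myaddon-1.0.1.xpi"):
    #     print("candidate for auto-approval")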
Bye
Archaeopteryx
Mozilla employees participate in the meritocracy just like anyone
else.
This isn't about merit, it's about trust.
You, mkaply, could write an extension which was poorly structured, had
terrible UI, and often didn't do what it was supposed to. But it would
still get trusted pretty quickly because we know who you are, and you
are a trustworthy guy.
Similarly, Mozilla employees are people we know, and we can trust them
pretty easily. The other option is making reviewers review their patches
carefully for malicious code. What do you think the chances are of there
actually being any?
Gerv
We could in some way link it to hg accounts, i.e. those with tree
check-in privileges are trusted.
--
Brian King
Need Mozilla Project Hosting?
http://mozdev.org
Yes, that's an interesting idea.
And as Gervase said, this is not an issue of merit, or even trust to
some extent. It's about identity, and knowing that somebody is well
known within the community.
Remember that we will still check the first versions of any add-on, and
there will still be other checks in place. We're not giving complete
freedom to the add-ons we label as 'reliable'.
- Jorge Villalobos
Yes, that's an interesting idea.
100% of the security problems we have caught in add-on reviews were
accidental, not malicious -- the developer just doesn't realize
evaluating remote code with chrome privileges is bad.
I am leery of giving anyone immediate trust just because they are known.
Employees or known community members who aren't familiar with add-on
development will still easily make these same mistakes. I think tying it
to tree check-in access is a much better idea.
Well, fine, but we can use automated checking to take 80-90% of the load of
checking for accidental mistakes (like eval-ing web code, etc.). Automated
checking is never able to check for malicious code (especially as our automation
would be open source and thus available to everyone), but trust can be useful in
ensuring people are at least not being malicious, thus meaning that I (or any
other editor) waste less of my time in doing code analysis and ensuring the
add-on isn't super-evil.
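To be clear about the kind of automation I mean: even a naive scan would
flag the common accidental mistakes. A toy sketch, nothing like a real
validator, and the pattern list is only an example:

    # Toy sketch: flag obviously risky constructs in an add-on's JS files so
    # editors can focus their attention elsewhere. Patterns are examples only.
    import re
    import zipfile

    RISKY = [
        (re.compile(r"\beval\s*\("), "eval()"),
        (re.compile(r"\bsetTimeout\s*\(\s*['\"]"), "setTimeout with a code string"),
        (re.compile(r"new\s+Function\s*\("), "Function constructor"),
    ]

    def scan_xpi(path):
        findings = []
        with zipfile.ZipFile(path) as xpi:
            for name in xpi.namelist():
                if not name.endswith(".js"):
                    continue
                text = xpi.read(name).decode("utf-8", errors="replace")
                for lineno, line in enumerate(text.splitlines(), 1):
                    for pattern, label in RISKY:
                        if pattern.search(line):
                            findings.append((name, lineno, label))
        return findings

    # Hypothetical usage:
    # for name, lineno, label in scan_xpi("some-addon.xpi"):
    #     print("%s:%d: %s" % (name, lineno, label))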
~ Gijs
> I am leery of giving anyone immediate trust just because they are known.
> Employees or known community members who aren't familiar with add-on
> development will still easily make these same mistakes. I think tying it
> to tree check-in access is a much better idea.
One downside to this is that commit access is only loosely tied to
"trust". And, AIUI, the long-term goal is to make it more about "do we
think this person follows the rules", since code shouldn't even be
checked in until it's gone through review.
I haven't been following the issue closely, but I think we've
historically been loath to give employees any special privileges.
Perhaps we can just avoid the issue -- realistically, how many employees
who wouldn't already get reputation boosts through other means are
posting extensions?
Justin
I think this particular point about Mozilla employees is getting way
more attention than it should. In reality it's just a measure of trust
given to authors working on Labs, or other people who probably already
have a very good reputation on AMO.
This is not a free pass for anybody with an @mozilla address; it's just
one of a couple of metrics we would use to decide that an add-on can be
trusted *after* it has gone through a number of manual reviews and spent
a significant amount of time in public. It's no big deal, really.
- Jorge Villalobos
Yes; and as the security system we use for Hg improves, we should be
able to allow people some form of Hg access with much less trust. So
"anyone with Hg access" won't be a great trust metric.
"Anyone with m-c access", on the other hand, is much better.
> I haven't been following the issue closely, but I think we've
> historically been loath to give employees any special privileges.
If it were tied to some form of checkin access, it wouldn't just be
employees.
Gerv
+100. You are Google. Mozilla will give you a part of AMO
https://addons.mozilla.org/google/google_toolbar_beta_win.html
so that users do not have to deal with that yellow bar asking for
confirmation that the site google.com wants to install software.
Since we essentially have no idea when our users will receive updates,
AMO is not a viable distribution channel for us (yet).
Given that context, it's encouraging to see these issues being
acknowledged and addressed. The proposal makes sense, appears
responsive to frequently voiced issues, and formalizes the idea that
good add-ons are based on longevity, user feedback, and author
reputation (one of our primary authors is Johan Sundström of
Greasemonkey, so I'm obviously in favor of this).
However, my belief is that the AMO review process needs a different
philosophy. The current axiom is that add-ons are "Guilty until
repeatedly proven innocent". I would recommend a more permissive
approach such as "Penalized if regarded as evil". You would admit add-
ons more leniently, and rely on user feedback to place applications in
the Penalty Box. Here it is important to distinguish between
applications that are simply Dysfunctional (lack of quality) vs. those
that are Evil (lack of conscience). The former will either Adapt or
Die, and the latter will be penalized. The process for getting
released from the Penalty Box could follow the described proposal.
Frequent trips to the Penalty Box could result in automatic review for
updates, etc.
The idea here is to transform the speculative process of making
educated guesses about an application based on subjective data and
heuristics (e.g. the code validator) into a reactive process based on
real data from real users. I can understand why a company such as
Apple creates barriers for admittance into their App Store -- they
need to protect their turf and profits. I feel that an organization
such as AMO needs to adopt a more benevolent stance and rely on the
passion and integrity of Firefox users to help make qualitative
decisions about add-ons.
--Ranjit Padmanabhan, MashLogic
The problem with this is that you can't make an "Evil" tag stick. If you
adopt an "innocent until proven guilty" model, then addons found guilty
will just be submitted under new accounts with new names in order to
recover their innocence.
However, I do think that the fact that an addon is backed by a company
who (if e.g. their website has an EV cert) you have a way of contacting
and suing, should be a factor in determining trustworthiness. If
MashLogic released an addon which stole banking data, we know where to
send the writs.
Gerv
I wouldn't yet advocate an "innocent until proven guilty" aka "Free
Pass" model. I think that the "5 approvals/1 year" threshold to prove
innocence is far too stringent and somewhat demoralizing to
developers. It also apparently over-extends the (volunteer) reviewers
-- our most recent update sat in the sandbox for 80 days without any
feedback from AMO.
A threshold of 1 approval/3 months should be an adequate deterrent for
those who want to re-enter the system under new guises.
There's another thing that's worth considering. In my experience
Firefox add-on users are a savvy lot, so I don't think they need all
these layers of protection.
> However, I do think that the fact that an addon is backed by a company
> who (if e.g. their website has an EV cert) you have a way of contacting
> and suing, should be a factor in determining trustworthiness. If MashLogic
> released an addon which stole banking data, we know where to
> send the writs.
That's a good suggestion. Perhaps credit could be given for certificates
and signed code. Our xpi is signed (which implies some level of
business identity verification), so when we do solve World Hunger,
there's an address to which Nobel nominations can be sent.
Ranjit
As a quick update, the implementation of this general idea has been
suspended, as we've found better ways to increase the efficiency of the
editor team:
http://blog.mozilla.com/addons/2009/11/13/burning-down-the-add-on-review-queues/
This doesn't mean that we aren't considering long term solutions to this
problem, but now that waiting times are reaching a more acceptable level
we can consider less radical approaches.
I won't forget the ideas in this proposal or the great feedback you
gave. Some of this will probably make it to AMO eventually.
Thanks.
- Jorge Villalobos
Jorge, I had posted a long reply to this yesterday night, but it got
lost in the Google Groups online editor nirvana :(
I have a copy here (it's actually also in the discussion at the link
you posted, but it is awaiting moderation) - please DO consider these
suggestions as well:
The reviewing queue is a bit like the door to the law in Kafka's short
story "Before the Law" – you never know when your extension will be
let in ;)
Anyway, if you are really interested in cutting down the review queue, I
would ask you to also look at Fx's little brother Thunderbird; it seems
the review queue there suffers because of the new Fx release.
My personal viewpoint on releasing is "release early and often", which
leaves my extension with 4 (!) updates that have not yet been
reviewed, the latest one offering an ever increasing feature set and
lots of desirable bug fixes – and just today I released the 5th
one [which now includes support for 2 more hosts and 3 more locales,
compared to the last published version]. I do not know whether this
has again reset my waiting time in the queue, but in
general it seems to boil down to a publishing cycle of roughly one
published update per month (usually I integrate several bug fixes
during that time, so it is not exclusively negative).
- all thanks to the diligence of the users who constantly come up with
ideas, requests and bug reports. There is a highly interesting
article, "The Cathedral and the Bazaar" by Eric S. Raymond (see
catb.org), which points out the difference between commercial and open
source software release cycles and comes to the conclusion that a
higher release frequency is indeed a good guarantee of high software
quality – obviously it also generates a lot more
administrative work. I think one of the methods to cut down on the
amount of work is fairly obvious: people who have reviewed an
extension before (and wish to do so) should be preferred as reviewers
for its updates.
In my case I have a fairly complex extension with a lot of added value
for long-term users, but this is not obvious at first glance to a
casual user – the more you use this extension, the more useful features
you will discover. For reviewers, testing this functionality can of
course be a daunting task – you need to set up test folders, do drag /
copy operations on emails, customize the extension's interface and so
on and so on…
How much easier is it if you already use the extension and might even
be interested in what's new – wouldn't it be great if reviewers were
notified of updates to any extension they have installed on their
system and have reviewed previously? Surely somebody who uses a piece
of software daily is somewhat of an expert.
I would propose a "mentoring system" where AMO reviewers could
voluntarily adopt some of their favorite extensions and be contactable
by the authors about important updates.
In emergencies (such as showstopper bug fixes) I have done so before,
using more oblique methods such as project owners' mailing lists and
ICQ, but it would be great if there were an official method openly
available to developers of extensions that have already been
published.
The other thing that would surely diminish the problem of "stale"
updates would be a more obvious presentation of newer versions on the
AMO pages. I must say that it was slightly better in an earlier
incarnation of the AMO website; now the link to the latest versions
hides shamefully at the bottom of the page, and it's misleadingly
labeled "View Older Versions". This is not obvious to end users.
IMHO users are more likely to look at the top 3 reviews, find that
their host version is supported or a certain bug is fixed, and be
pointed from there to the unreviewed version, than to periodically
"poke" the versions page to see whether a new version is available.
But you know this already, as you're trying to cut down the queue – it
is just not obvious how it can be done without increasing manpower…
Also, there is a link for getting the latest published version of
each extension; would it be possible (and desirable) to craft one for
the latest "experimental" version as well? Or do you think that would
open the door to all sorts of security risks?
thanks
Axel
On Sep 16, 6:30 pm, Jorge Villalobos <jo...@mozilla.com> wrote:
> Hello,
>
> My name is Jorge Villalobos and I'm the new Add-ons Developer Relations
> Lead at Mozilla.
>
> One of the most challenging tasks of my position consists in improving
> the add-on review process, which is currently too slow for most add-on
> authors and simultaneously very taxing to our current editor team.
>
> Justin Scott provided a proposal for this some time ago, which garnered
> lots of useful feedback. I've used Justin's proposal and your responses
> to create this new proposal, which attempts to find the right balance
> between the reliability of our current system and the quick review time
> that we all want to have.
>
> Please have a look and leave your feedback and questions in this thread.
>
> Doc: http://docs.google.com/View?id=dcxw3w5k_25hn9nd8vg
>
> Thanks!
>
> - Jorge
Hi Jorge, I did a quick review; here is my feedback:
1.1 When you use the term 'updates', do you mean published updates, or
sandboxed ones as well? My experience is that there are about 3 to 4
updates before one is published (due to the length of the review queue
and frequent feedback from users), so this distinction is important. I
think in some ways the time factor is probably more important. One
thing that cannot be measured at all by this is the amount of effort
that went into creating a new release; this is probably something that
the reviewers themselves are best placed to judge (one hopes there
are software developers among the reviewers, although this is probably
not necessary).
On review queue length: an add-on's place in the queue should not be
pushed back by additional releases; this is very important, as
developers do not want to be penalized for adding value or fixing bugs
early. Can you add a point clarifying this?
Also, could queue length somehow be made visible to the developers?
(Something like "your extension is likely to be reviewed within N
days"...)
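Even a back-of-the-envelope estimate would help. A sketch; both input
numbers are hypothetical and would come from AMO's own queue statistics:

    # Sketch: estimated review wait time from queue position and throughput.
    import math

    queue_position = 120    # add-ons ahead of mine in the queue (hypothetical)
    reviews_per_day = 35    # recent average editor throughput (hypothetical)

    estimated_days = math.ceil(queue_position / reviews_per_day)
    print("Your extension is likely to be reviewed within %d days."
          % estimated_days)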
1.4 is a good idea, although it is probably hard to measure how
committed a member is. E.g. I am certainly highly committed to my
extension, its updates and its users, and very interested in
improving the community and the quality of the documentation, but
even with a high time investment it is often not clear how and where
best to contribute. I think that if there were a "points" system for
things like adding content to the MDC pages (such as coding examples,
API descriptions etc.), this would probably have a positive effect on
that as well.
4.1 is an excellent idea - this should be done on a "per-extension"
basis; users always have "favorite" extensions for which they want to
be notified of the latest versions, and there is also an element of
trust involved. You have probably seen the "Update Notifier" extension
- it would be an excellent starting point for implementing such a
system; the UI to configure which extensions are allowed "sandboxed
updates" (you could also use the term 'bleeding edge updates') could
be developed with Update Notifier as a starting point.
The question is, why does this require a change in the extensions
themselves? Surely this is mainly targeted at newer versions of the
extensions (and not at downgrades), so users would either choose "the
latest sandboxed version" or "the public version" - and there could be
a revert mechanism built in, in case the add-on breaks. Another
advantage of having this within the software would be the creation of
"roll-back statistics", which could be a vital tool for extension
developers to gauge take-up of new features / changes / bugs in new
versions.
5.1 I am not a fan of binary add-ons (IE plugins being a good example
- they seem to be counter-productive, but maybe that's just because of
their commercial nature), mainly because of security concerns - I
would *hate* it if somebody came up with a "poisoned" extension
(maybe including an obfuscated address harvester or similar) that
would slip through the review system just because it contained a
payload in binary format. The damage to the reputation of the Mozilla
platform could be quite devastating.
--
my 2 cents, thanks for listening.
Axel