
What's next after Firefox 3 and Gecko 1.9?


schrep

May 19, 2008, 5:38:13 PM
There have been several discussions of 1.9.1/Moz2 and future product
releases here
(http://groups.google.com/group/mozilla.dev.planning/browse_frm/thread/24ff58d12b89295b#,
http://groups.google.com/group/mozilla.dev.planning/browse_frm/thread/e64e0ea2d6008faa#)
and it has been discussed many times in the Mozilla 2 meetings. Not
everyone could make those meetings but more importantly not everyone was
aware we would be discussing 1.9.1/moz2 there.

In order to drive us closer to conclusion I wanted to consolidate my
understanding of the current plan in *early* draft form below. I
propose we iterate on this here and take a final form to the
wikis/blogs for wider distribution.

So please edit and add questions to the draft below...

Best,

Schrep


*** DRAFT PLAN ****
*******************

Product Releases
----------------

Firefox 2.0.0.x and 3.0.x:

We'll continue the maintenance schedule we set with Firefox 1.5 - namely
doing scheduled maintenance releases of Firefox 3.0.x and Firefox
2.0.0.x every 6-8 weeks. These releases will focus on security and
stability improvements and cannot add new features, change UX, or break
extensions unless it is required for security purposes or is otherwise
critical. Since Firefox 3.1 is coming out so quickly after 3.0 I
expect the size of these maintenance releases to be smaller than they
have been for the 2.0.0.x series. Support releases for Firefox 2.0.0.x
will terminate approximately 6 months after the shipment of Firefox 3,
at the latest. This coincides with the approximate ship date of
Firefox 3.1.

Firefox 3.1:

There were a number of features that we held back from Firefox 3 because
they weren't quite ready - but they were nearly complete. These
include things like cross-site XHR, native JSON DOM bindings, ongoing performance
tuning, awesomebar++, better system integration, etc. This, along with
the overall quality of Gecko 1.9 as a basis for mobile and the desire to
get new platform features out to web developers sooner, has led us to
want to do a second release of Firefox this year. This release would
be date-driven and targeted at the end of 2008. Any features not ready
in time will move to the next major release. This is currently planned
to be based on Gecko 1.9.1 - but if there are solid technical reasons
for breaking frozen APIs we will bump the version number to Mozilla2.

Firefox Mobile:

There are already devices shipping with early versions of Gecko 1.9 at
the core. More are coming soon and we'll be releasing milestones of
full branded versions of Firefox (with XUL and the Firefox team taking a
lead in the user experience) later this year. This lines up well with
Firefox 3.1 and a synchronized release schedule will make everything run
more smoothly.

Firefox 4:

Firefox 4 will incorporate some of the more aggressive platform
improvements in Mozilla2. It is far too early to set a shipping date
but an initial target would be sometime in late 2009. Mozilla2 work has
been underway for > 8 months.

Platform Releases
-----------------

Gecko 1.9.0.x:

Release date: By end of June 2008
Status: Stable
Accepted Changes: Security, stability, performance, minor enhancements
that do not change *any* exposed APIs. All changes must pass all
regression tests and not regress performance.
Development Tree: CVS Trunk
Bugzilla Flags: blocking1.9.0.x
Planning Center:

Gecko 1.9.1:

Release date: Beta stable summer 2008, production stable end of 2008
Status: In development
Accepted Changes: Anything that doesn't break frozen APIs. Passing
unit tests and proof of no negative impact on perf from Talos required
before check-in
Development Tree: Hg mozilla-central - will move to a branch before end
of summer 2008
Bugzilla Flags: blocking1.9.1
Planning Center:

Mozilla2:

Release date: Beta stable mid 2009
Status: In development
Accepted Changes: Anything that doesn't regress functionality or performance
Development Tree: Hg mozilla-central
Bugzilla Flags: ?
Planning Center: http://wiki.mozilla.org/Mozilla_2


Q&A
------

Q: I'm running a project based on Gecko and am confused as to whether to
ship on 1.9.0.x, 1.9.1, or Mozilla2.

A: If you are shipping before Nov/Dec 2008 we recommend you ship on
Gecko 1.9.0.x since it is the stable release. If your product is
shipping after Dec 2008 we'd recommend you use Gecko 1.9.1.x since it
will be stable by then. If you are shipping later you may want to
consider Mozilla2.

Q: Will Firefox 3.1 ship on 1.9.1 or Mozilla2?

A: We will start with 1.9.1 but if there is a compelling technical
reason to break frozen APIs the drivers and module owners may decide to
do so. If we break frozen APIs we will call it Gecko 2.0 and give fair
notice.

Q: .... please add common questions here..

Jonas Sicking

May 19, 2008, 6:55:37 PM
Q: Will *all* APIs remain compatible between 1.9.1 and 1.9 as they were
between 1.8.1 and 1.8?

A: No, only frozen APIs will remain compatible. I.e., we are free to
change as many APIs between 1.9 and 1.9.1 as we did between 1.8 and
1.9.

At least that's what I'm hoping the answer is :)

/ Jonas

Mike Beltzner

May 19, 2008, 7:03:34 PM
to jo...@sicking.cc, dev-pl...@lists.mozilla.org
What would that answer mean for add-on compatibility?

cheers,
mike

_______________________________________________
dev-planning mailing list
dev-pl...@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-planning

Jonas Sicking

May 19, 2008, 7:15:45 PM
I think we gained very little as far as add-on compatibility goes by
freezing as many APIs as we did. Only binary extensions have anything to
gain by the level of freezing that we used, script-only extensions will
not be affected.

I don't have any data on how many of our top-extensions have binary
components though.

The cost of freezing all interfaces was quite high. It uglified new code
a good bit as well as cost in performance and memory (though probably
not to a significant extent).

This was sort of ok when the changes went in to a branch that is
eventually going to die. It would suck a lot more to do that to
mozilla-central where we'd have to clean up the mess later, or live with
it forever.

/ Jonas

Mike Beltzner

May 19, 2008, 7:23:49 PM
to jo...@sicking.cc, dev-pl...@lists.mozilla.org
Heh, so it would mean compat would be affected, then :)

I'm not saying we should or shouldn't freeze APIs, and most definitely we
should make the decision based on previous experience. I just wanted to
make sure I understood the impact.

Jonas Sicking

May 19, 2008, 7:30:03 PM
Yes, compat would definitely be affected. But I believe the benefit/cost
ratio is low enough that it's not worth it.

chris hofmann

May 19, 2008, 8:19:12 PM
to Jonas Sicking, dev-pl...@lists.mozilla.org

To add some data:

I recently saw that overall about 10% of add-ons on the AMO site today
have binary components.

More importantly, I'm not sure how this 10% is weighted by actual
use. I do know there are several in the top 20, 50, 100... That would
be the key to compatibility and the ability to recover from changes.

We should also plan on a much longer ship cycle if we start introducing
significant levels of API changes. In Firefox 2 it took about a 2-3
month push to get extension compatibility to an acceptable shipping
level, and for the most part it was just a testing and version
compatibility rev'ing exercise. In Firefox 3 it was more like a 5-6
month push, with a lot more work, and with a lot more people working
hard to make that happen. In the end we didn't quite get to the level
of extension compatibility we had for Firefox 2, but we came pretty close.

This is not to say we should/should not freeze APIs. This is to say
that the number and area of API changes is one of the factors that will
determine a realistic schedule, since we need a reasonable level of
extension compatibility to ship any release.

The best thing that could be done is to map out *which* APIs might
change, then do a bit of estimating on what the impact of the changes
might be on extension compatibility and the back end of the schedule.
Where is a good area to start mapping out the proposed changes?

The other thing is that if we are going to change APIs, we need to get
this work done early, and enforce some early API freeze dates.
Dragging the API changes out into late betas also lengthens the period
needed to get extension developers working on compatibility, and will
turn out to lengthen the overall shipping schedule.

Extension compatibility also relies on a wide variety of areas beyond
binary compatibility, so freeze dates ought to account for all those areas.


-chofmann

John J Barton

May 19, 2008, 9:18:04 PM
Jonas Sicking wrote:
> I think we gained very little as far as add-on compatibility goes by
> freezing as many APIs as we did. Only binary extensions have anything to
> gain by the level of freezing that we used, script-only extensions will
> not be affected.

Mozilla seems to have several different kinds of APIs, so I'm not clear
on which freezing/compatibility issues are being discussed. But late in
FF3, the effective API for extensions changed because of changes in the
security model. At least for Firebug this had far more impact than any
reworking of APIs I can imagine. Similarly with the new https requirement on
extensions. So if some good comes from evolving the APIs, it seems
like it would not be a significant additional cost for extensions.

John.

Boris Zbarsky

May 19, 2008, 9:58:12 PM
chris hofmann wrote:
> The best thing that could be done is to map out *which* APIs might
> change, then do a bit of estimating on what the impact of the changes
> might be on extension compatibility and the back end of the schedule.
> Where is a good area to start mapping out the proposed changes?

Can we turn that around? Would it be possible, given an interface, to scan AMO
for binary extensions that use that interface in their binary blobs?

> The other thing is that if we are going to change APIs, we need to get
> this work done early, and enforce some early API freeze dates.

The problem is that API changes are not the end in and of themselves, usually.
They are often needed for our core "internal" APIs to address other issues that
come up. No one is suggesting changing APIs for the fun of it, I don't think.

For example, note that if a blocker issue comes up in RC1 right now that can be
most easily fixed by changing nsIContent we would change nsIContent before we
ship final. The question is whether 1.9.1 should be held to a higher bar than
this (as 1.8.1 was).

-Boris

Jeff Walden

May 20, 2008, 2:38:35 AM
to schrep
schrep wrote:
> Gecko 1.9.0.x:
>
> Release date: By end of June 2008
> Status: Stable
> Accepted Changes: Security, stability, performance, minor enhancements
> that do not change *any* exposed APIs. All changes must pass all
> regression tests and not regress performance.

I'd go one further here: all changes must be accompanied with automated regression tests or must have manual litmus tests accompanied with an explanation why automated testing cannot be done. Barring unusual circumstances where a test can be written that works on 1.9.1 but doesn't work on 1.9.0 (thinking of damage-related tests that might rely on compositor, for example), the tests must land in 1.9.0 as part of the patch (after landing in 1.9.1, of course) and must run on tinderboxen on checkin.

> Gecko 1.9.1:
>
> Release date: Beta stable summer 2008, production stable end of 2008
> Status: In development
> Accepted Changes: Anything that doesn't break frozen APIs. Passing
> unit tests and proof of no negative impact on perf from Talos required
> before check-in

How many tests (or reasons why tests cannot be written) can we require here? Branch obviously can't have a lower bar than trunk, but that doesn't say anything about what should happen on trunk. I think we should set the bar for trunk fairly high and require a reviewed (even if it's just a sanity check) test be committed with each fix (or the explanation why it can't be tested and a litmus ? if appropriate). If it fails without the patch and succeeds with, that's good enough. On a case-by-case basis you'd want to require some bugs to have a greater number, depth, and concentration of tests, but even if each bug had just a single shallow test that'd be an improvement over where we are now.

Jeff

Sergey Yanovich

May 20, 2008, 4:51:15 AM
Jeff Walden wrote:

> schrep wrote:
>> Gecko 1.9.1:
>>
>> Release date: Beta stable summer 2008, production stable end of 2008
>> Status: In development
>> Accepted Changes: Anything that doesn't break frozen APIs. Passing
>> unit tests and proof of no negative impact on perf from Talos required
>> before check-in
>
> How many tests (or reasons why tests cannot be written) can we require
> here? Branch obviously can't have a lower bar than trunk, but that
> doesn't say anything about what should happen on trunk. I think we
> should set the bar for trunk fairly high and require a reviewed (even if
> it's just a sanity check) test be committed with each fix (or the
> explanation why it can't be tested and a litmus ? if appropriate). If
> it fails without the patch and succeeds with, that's good enough. On a
> case-by-case basis you'd want to require some bugs to have a greater
> number, depth, and concentration of tests, but even if each bug had just
> a single shallow test that'd be an improvement over where we are now.

This should be the rule!

My experience tells me that if there is no test for a feature, there is no
feature. "Test" has a wide meaning here; it may be automated or manual. But
manual tests are unreliable, so tests should be automated unless that's
impossible.

Apart from protecting features, tests simplify communication between
developers. Arguments like "this regresses bug XXXXXX" are often
rebutted with "that's OK, we have a new vision", but there is a general
consensus that breaking unit tests isn't acceptable.

--
Sergey Yanovich

Mike Shaver

May 20, 2008, 12:40:47 PM
to Jonas Sicking, dev-pl...@lists.mozilla.org
On Mon, May 19, 2008 at 7:15 PM, Jonas Sicking <jo...@sicking.cc> wrote:
> Only binary extensions have anything to
> gain by the level of freezing that we used, script-only extensions will
> not be affected.

As you've proposed, non-additive changes to unfrozen APIs can and will
break script-only extensions. We broke a lot of script-only
extensions in 1.9, and we will have a much shorter "get your stuff
up-to-date" cycle than we had for 1.9.

I don't think we should freeze any more APIs, since I agree that it
has not paid for itself (and has led us to premature enfreezification
on APIs that nobody wants to use), but I don't think that we should
leap from that position to one of total API abandon.

We should _break_ as few APIs as possible, but no fewer. Additive
changes to unfrozen APIs should be looked at with a kind eye, and more
disruptive changes should come alongside a case for why the API break
is significantly better than an additional new API or such clumsier
alternative.

Mike

schrep

May 20, 2008, 1:05:37 PM
to Mike Shaver, Jonas Sicking, dev-pl...@lists.mozilla.org
Mike Shaver wrote:
> We should _break_ as few APIs as possible, but no fewer. Additive
> changes to unfrozen APIs should be looked at with a kind eye, and more
> disruptive changes should come alongside a case for why the API break
> is significantly better than an additional new API or such clumsier
> alternative.
>

Agreed. No gratuitous API changes - but changes for good technical
reasons can be discussed and reviewed. Even minor changes to the chrome
can break add-ons, and the shorter cycle of the next release means
extension authors have less time to react. So we should make that
transition as easy for them as possible.

Mike


schrep

May 20, 2008, 1:08:15 PM
to Jeff Walden
Jeff Walden wrote:
> schrep wrote:
>> Gecko 1.9.0.x:
>>
>> Release date: By end of June 2008
>> Status: Stable
>> Accepted Changes: Security, stability, performance, minor enhancements
>> that do not change *any* exposed APIs. All changes must pass all
>> regression tests and not regress performance.
>
> I'd go one further here: all changes must be accompanied with automated
> regression tests or must have manual litmus tests accompanied with an
> explanation why automated testing cannot be done. Barring unusual
> circumstances where a test can be written that works on 1.9.1 but
> doesn't work on 1.9.0 (thinking of damage-related tests that might rely
> on compositor, for example), the tests must land in 1.9.0 as part of the
> patch (after landing in 1.9.1, of course) and must run on tinderboxen on
> checkin.
>

Totally agreed. Our increase in automated regression testing has really
saved us time and time again - and our bar for test coverage (either
manual or automated) should continue to rise.


>> Gecko 1.9.1:
>>
>> Release date: Beta stable summer 2008, production stable end of 2008
>> Status: In development
>> Accepted Changes: Anything that doesn't break frozen APIs. Passing
>> unit tests and proof of no negative impact on perf from Talos required
>> before check-in
>
> How many tests (or reasons why tests cannot be written) can we require
> here? Branch obviously can't have a lower bar than trunk, but that
> doesn't say anything about what should happen on trunk. I think we
> should set the bar for trunk fairly high and require a reviewed (even if
> it's just a sanity check) test be committed with each fix (or the
> explanation why it can't be tested and a litmus ? if appropriate). If
> it fails without the patch and succeeds with, that's good enough. On a
> case-by-case basis you'd want to require some bugs to have a greater
> number, depth, and concentration of tests, but even if each bug had just
> a single shallow test that'd be an improvement over where we are now.

++ same as above.

Boris Zbarsky

May 20, 2008, 4:40:23 PM
Mike Shaver wrote:
> We should _break_ as few APIs as possible, but no fewer. Additive
> changes to unfrozen APIs should be looked at with a kind eye

Does that mean not revving the IID for such changes (which just add a method to
the end of the vtable) on the branch? Otherwise we get binary extension issues no
matter what, right?

-Boris

Benjamin Smedberg

May 20, 2008, 4:48:28 PM

I don't think we're optimizing for binary extensions, but rather for JS
usage: the fewer JS extensions we break in this short cycle, the better off
we are (of course, the same rule applies in general!)

--BDS

Boris Zbarsky

May 20, 2008, 4:51:01 PM
Benjamin Smedberg wrote:
> I don't think we're optimizing for binary extensions, but rather for JS
> usage: the fewer JS extensions we break in this short cycle, the better
> off we are (of course, the same rule applies in general!)

I can pretty much guarantee that sicking was worried about things like
nsIContent and nsIDocument and nsPIDOMWindow, not about the very very few cases
in 1.8.1 where we created MOZILLA_1_8_BRANCH scriptable interfaces.

In general, doing an extra scriptable interface is less of a
footprint/performance issue, and can be made completely transparent using
classinfo if needed... It does take more work on the back end to do it that
way, of course.

In any case, binary extensions are the main reason we had a complete interface
freeze in 1.8.1, as I recall. The question is how we want to handle them in 1.9.1.

-Boris

Gervase Markham

May 20, 2008, 5:21:01 PM
schrep wrote:
> Firefox 3.1:
>
> There were a number of features that we held back from Firefox 3 because
> they weren't quite ready - but they were nearly complete. These include
> things like XHR, native JSON DOM bindings, ongoing performance tuning,
> awesomebar++, better system integration, etc.

Is there a source to flesh out that "etc.", or for now is it just an "etc."?

For example, is <video> on the list?

> Firefox Mobile:
>
> There are already devices shipping with early versions of Gecko 1.9 at
> the core. More are coming soon and we'll be releasing milestones of full
> branded versions of Firefox (with XUL and the Firefox team taking a lead
> in the user experience) later this year.

Where might I go to find out what platforms we are targeting for these
releases?

Gerv

Robert Accettura

May 20, 2008, 5:38:12 PM
to Mike Shaver, dev-pl...@lists.mozilla.org, Jonas Sicking
On Tue, May 20, 2008 at 12:40 PM, Mike Shaver <mike....@gmail.com> wrote:

>
> We should _break_ as few APIs as possible, but no fewer. Additive

> changes to unfrozen APIs should be looked at with a kind eye, and more
> disruptive changes should come alongside a case for why the API break
> is significantly better than an additional new API or such clumsier
> alternative.
>

Is there any way to perhaps analyze the source code of open source products out
there for which APIs they are currently using? So decisions could be made
with some data at hand? Either spidering open source svn/cvs/hg repos, or
perhaps even a script that could be run by the API user when they build, with
a report sent to some webtool that kept track of APIs and the number
of instances.

Then when deciding if it's worth breaking API _____, there's solid data on
whom that affects outside of the mozilla tree.

--
Robert Accettura
rob...@accettura.com

Axel Hecht

May 20, 2008, 5:54:16 PM
Gervase Markham wrote:
> schrep wrote:
>> Firefox 3.1:
>>
>> There were a number of features that we held back from Firefox 3 because
>> they weren't quite ready - but they were nearly complete. These include
>> things like XHR, native JSON DOM bindings, ongoing performance tuning,
>> awesomebar++, better system integration, etc.
>
> Is there a source to flesh out that "etc.", or for now is it just an
> "etc."?
>
> For example, is <video> on the list?

I guess that the spreadsheets linked from
http://wiki.mozilla.org/Firefox3/StatusMeetings/2008-05-20#Platform_Post_1.9
are bestest right now.

Axel

beel...@gmail.com

May 20, 2008, 6:29:11 PM
Dear Firefox developers,
I think it would be meaningful if you used the 3.1 release to
assimilate feedback from Firefox 3.0 users. Version 3.0
introduces too many new features, and so far only a group of early
adopters (relatively small compared to the number of ordinary
users) has tried this version. Once 3.0 is released as final there
will be many more users. I think you should add another primary
objective for the 3.1 release: collecting feedback from this broad
base of Firefox 3.0 users, evaluating it, and using it to advance
the new features added in 3.0. IMHO this should be more important
than integrating the almost-complete features.
Regards
Andy

PS: sorry for my English, I'm not a native speaker ...

Mike Beltzner

May 20, 2008, 6:54:04 PM
to ge...@mozilla.org, dev-pl...@lists.mozilla.org
Hey Gerv,

That rough list is meant to illustrate the size and scope of the changes, as
well as the things we know we want to get. We'll flesh it out more in the
usual ways, building product plans for each of those releases once we agree
to the general roadmap and layout of releases.

cheers,
mike

----- Original Message -----
From: dev-planni...@lists.mozilla.org
<dev-planni...@lists.mozilla.org>
To: dev-pl...@lists.mozilla.org <dev-pl...@lists.mozilla.org>
Sent: Tue May 20 14:21:01 2008
Subject: Re: What's next after Firefox 3 and Gecko 1.9?

schrep wrote:
> Firefox 3.1:
>
> There were a number of features that we held back from Firefox 3 because
> they weren't quite ready - but they were nearly complete. These include
> things like XHR, native JSON DOM bindings, ongoing performance tuning,
> awesomebar++, better system integration, etc.

Is there a source to flesh out that "etc.", or for now is it just an "etc."?

For example, is <video> on the list?

> Firefox Mobile:


>
> There are already devices shipping with early versions of Gecko 1.9 at
> the core. More are coming soon and we'll be releasing milestones of full
> branded versions of Firefox (with XUL and the Firefox team taking a lead
> in the user experience) later this year.

Where might I go to find out what platforms we are targeting for these
releases?

Gerv

Mook

May 21, 2008, 3:09:25 AM
Robert Accettura wrote:
> On Tue, May 20, 2008 at 12:40 PM, Mike Shaver <mike....@gmail.com> wrote:

> Is there any way to perhaps analyze source code of open source products out
> there for what API's they are currently using?

There's an old list at http://www.mozpad.org/doku.php?id=api_analysis
but I believe that includes JS usage as well. At least, the list of
projects may be useful as a starting point to find people who are
receptive.

That might be skewed towards certain projects, though. nsISchema is the
third most-used interface; nsIEventQueue sits at #8 with 430; somebody
probably forked the textbox bindings (nsISSM).

Unfortunately, I don't think treehydra etc. would be that useful, since
not all code compiles with GCC.

--
Mook

skierpage

May 21, 2008, 5:24:01 PM
On May 19, 2:38 pm, schrep <sch...@mozilla.com> wrote:
> There were a number of features that we held back from Firefox 3 ...    These
> include things like XHR

Now ComputerWorld says Firefox 3 doesn't support XmlHttpRequest,
http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9087558
(I added a comment about the missing "cross-site" part).

Reporters are watching like a hawk so they can misconstrue whatever
people say ;-)

Mike Beltzner

May 21, 2008, 5:32:28 PM
to skie...@gmail.com, dev-pl...@lists.mozilla.org
That's not going to change, and I'm more afraid of us changing the way we work
in the potentially futile hope of modifying the way reporters report on us.

Gregg's a good guy, and while I would have appreciated him mentioning that
Schrep explicitly called this a "draft" plan, he's trying to report honestly
on what he's observing. I don't think it's a problem at all, and we should
continue to educate journalists and observers to the fact that this is how
software development works. People propose plans, people discuss them, and
figure out if it's the right thing to do. We just do all of that in the
open. It's a double whammy - people need to get used to learning how
software is made, and the idea that we do it in the open.

Let's work with people like Gregg to help him understand what we're doing,
and perhaps encourage him to interact with us here.

cheers,
mike

----- Original Message -----
From: dev-planni...@lists.mozilla.org
<dev-planni...@lists.mozilla.org>
To: dev-pl...@lists.mozilla.org <dev-pl...@lists.mozilla.org>
Sent: Wed May 21 14:24:01 2008
Subject: Re: What's next after Firefox 3 and Gecko 1.9?

Mark Finkle

May 24, 2008, 8:43:43 PM
In addition to choffman's points, we really need to make "workarounds"
available to show extension developers how to rewrite the old code to
the new code. We have had a decent set of good examples in the 1.9
timeframe, but there are times where changes are made and no
background is given for how to convert existing code. This slows
things down.

Mark Finkle

May 24, 2008, 8:49:29 PM
On May 19, 9:58 pm, Boris Zbarsky <bzbar...@mit.edu> wrote:
> chris hofmann wrote:
> > The best thing that could be done is to map out *which* APIs might
> > change, then do a bit of estimating on what the impact of the changes
> > might be on extension compatibility and the back end of the schedule.  
> > Where is a good area to start mapping out the proposed changes?
>
> Can we turn that around?  Would it be possible, given an interface, to scan AMO
> for binary extensions that use that interface in their binary blobs?

Sure, I have routinely asked AMO to give me a grep of add-ons using
certain interfaces. They are usually very prompt with the information,
especially when it's used to help keep extensions functional.

>
> > The other thing is that if we are going to change APIs, we need to get
> > this work done early, and enforce some early API freeze dates.  
>
> The problem is that API changes are not the end in and of themselves, usually.
> They are often needed, for our core "internal" APIs to address other issues that
> come up.  No one is suggesting changing APIs for the fun of it, I don't think.
>
> For example, note that if a blocker issue comes up in RC1 right now that can be
> most easily fixed by changing nsIContent we would change nsIContent before we
> ship final.  The question is whether 1.9.1 should be held to a higher bar than
> this (as 1.8.1 was).

If a blocker comes up in RC1 and we fix it by changing nsIContent in
such a way that breaks extensions, then I think we have done a bad
thing. At this late stage, we certainly should favor non-ideal ways
(short-lived private interfaces, maybe) over outright breakage. If we
do make breaking changes (for extensions) we should be moving into
some tactical mode of extension developer outreach - before the code
lands.

We don't get a bad rap from extension developers for no good reason.

Boris Zbarsky

May 25, 2008, 12:08:16 AM
Mark Finkle wrote:
>> Can we turn that around? Would it be possible, given an interface, to scan AMO
>> for binary extensions that use that interface in their binary blobs?
>
> Sure, I have routinely asked AMO to give me a grep of add-ons using
> certain interfaces. They are usually very prompt with the information,
> especially when it's used to help keep extensions functional.

I'm not sure a grep would catch binary blobs using the interface. Those are the
things we're worrying about here, not JS using the interface.

> If a blocker comes up in RC1 and we fix it by changing nsIContent in
> such a way that breaks extensions, then I think we have done a bad
> thing.

Given that extensions shouldn't be using nsIContent to start with, really, I
don't think we are.

> At this late stage, we certainly should be favor non-ideal ways
> (short-lived private interfaces, maybe) over outright breakage.

The question is whether adding 4 bytes to every single DOM node is worth
avoiding possibly breaking binary blob extensions that are using internal APIs
that we clearly say are very unstable. On the 1.8 branch, the official answer
to that was "yes", but it might have been the wrong tradeoff. I certainly think
it's the wrong tradeoff before we ship.

> If we do make breaking changes (for extensions) we should be moving into
> some tactical mode of extension developer outreach - before the code
> lands.

_Any_ change to _any_ interface is a breaking change for binary blob extensions.
Again, this is why we didn't make them on the 1.8 branch. I fully agree on
outreach for API changes at this late date or on the branch in general.

> We don't get a bad rap from extension developers for no good reason.

A bad rap from extension developers who are using internal interfaces we tell
them not to use?

Would we care about a bad rap from extension developers that binary-patched
libxul and complained when the library bit pattern changed in a security update?

If we wouldn't, where is the line drawn and why?

-Boris

Sergey Yanovich

May 25, 2008, 3:57:22 AM
Boris Zbarsky wrote:
> _Any_ change to _any_ interface is a breaking change for binary blob
> extensions.

Could you elaborate a bit on why this happens? It's because the IID should
change whenever the interface changes, right?

IIUC, if an interface that an extension *implements* changes, the
extension will crash. If an extension only *calls* methods/attributes of
an interface, it will also crash when the order of methods/attributes in
the interface changes, when methods/attributes are removed from the
interface, or when methods/attributes are inserted into the interface
anywhere but at the end. However, if methods/attributes are appended at
the end of the interface, everything should be fine.

The last case is an exception to the quoted rule, right?

--
Sergey Yanovich

Mike Shaver

May 25, 2008, 7:59:14 AM
to Boris Zbarsky, dev-pl...@lists.mozilla.org
On Sun, May 25, 2008 at 12:08 AM, Boris Zbarsky <bzba...@mit.edu> wrote:
> A bad rap from extension developers who are using internal interfaces we tell
> them not to use?

And for which we provide supported alternatives for their use cases, or not?

> Would we care about a bad rap from extension developers that binary-patched
> libxul and complained when the library bit pattern changed in a security update?
>
> If we wouldn't, where is the line drawn and why?

They're different to me because extension authors are likely to be
copying the use-of-internal-interface pattern from our code, but are
unlikely to be copying binary patching from anywhere reputable. (We
have been concerned with getting a bad rap in the past about similarly
fragile behaviours.)

I don't know what you mean by "internal", exactly, or why we would
have internal interfaces that were accessible at all by textual name
from other code without explicit opt-in (by which I mean defining
something like MOZILLA_INTERNAL_API, not "including the wrong header
file"), but there are many use cases that are simply not workable
without use of non-frozen interfaces at the least. nsIContent --
beyond the IID identity, I guess -- is protected MOZILLA_INTERNAL_API,
but a file like that in a public/ directory with commented and
useful-sounding methods seems like it's basically baiting you to set
MOZILLA_INTERNAL_API. (If you're finding things like SetAttr or
GetText via lxr, you won't even see the tiny and cryptic comment at
the top of the class declaration.) Things in our own extensions/
directory (xforms, TAF, spellcheck) use nsIContent quite liberally, to
say nothing of SVG and editor and...

Given that there is precious little documentation explaining the right
way to interact with our content model (especially if performance is a
concern), I think a good chunk of the burden for extension developers
using Impure Interfaces falls to us as well. If we actually told
people what to do instead, from nsIContent.h or a page liberally
linked from there, I would feel better about giving developers a hard
time about using it, especially given some time after that
documentation was available for other right-thinking examples to build
up. Right now, I think the story is "use the DOM interfaces", which
means figuring out what sort of node .textContent is on, getting a
bunch of QIs right, and being totally bereft of examples to follow to
see if you got the XPCOM arcana correct. "Go read the W3C
specification text" is pretty unsatisfying! (If the DOM interfaces
are so great, why doesn't more of our own code use them?)

Nobody has time to write alternative-to-internals docs, I'm sure I
will hear, and I expect that there will be a number of defensive
reactions to what I write here, but those decisions are economics.
Right now we're saying that there are more important things than
helping people get weaned off of private interfaces, by making the
Right Thing also the Easier To Figure Out Thing. That might well be
the right decision in terms of resources, but it means that we have to
bear the cost of well-meaning and valuable extension developers
getting wed to the capabilities of nsIContent and other such friends.

Mike

Boris Zbarsky

May 25, 2008, 10:57:22 AM
Sergey Yanovich wrote:
> Could you elaborate a bit, why this happens? Because IID should change,
> whenever interface changes, right?

That's correct. Which means the extension that uses that interface is likely to
stop working, if it gets pointers to that interface via QI. If it gets them
from other interfaces, it would get a pointer to a vtable that differs from what
it expects.

> However, if method/attributes are inserted at the end of the interface, everything
> should be fine.

If the extension is not getting the interface via QI, yes.

> The last case is an exception to the quoted rule, right?

Sure.

-Boris

Boris Zbarsky

May 25, 2008, 11:07:44 AM
Mike Shaver wrote:
> And for which we provide supported alternatives for their use cases, or not?

That's a good question. I'd love to provide different interfaces for their use
cases; my past attempts to gather information about such use cases, when
people posted in newsgroups about their use of nsIContent etc., have failed.

> They're different to me because extension authors are likely to be
> copying the use-of-internal-interface pattern from our code, but are
> unlikely to be copying binary patching from anywhere reputable.

I buy this, ok. I admit the comparison was silly.

For what it's worth, I agree that the bar for nsIContent changes after ship (if
any) should be very high. I'm just not convinced that it should be quite that
high before ship, even at the point where we are now. But I might be wrong on
this depending on what we've been communicating to extension developers...

> I don't know what you mean by "internal", exactly

I'm mostly thinking our main layout/content data representation: nsINode,
nsIContent, nsIFrame. Those are the ones that would be most painful to never
ever ever change.

> nsIContent -- beyond the IID identity, I guess -- is protected MOZILLA_INTERNAL_API,
> but a file like that in a public/ directory with commented and
> useful-sounding methods seems like it's basically baiting you to set
> MOZILLA_INTERNAL_API.

Yeah, that's a bad deal. :(

> Things in our own extensions/ directory (xforms, TAF, spellcheck)

Yeah. Part of the reason there is performance, certainly for TAF and
spellcheck.... Perhaps in the new mmgc world using nsIDOMNode and company won't
be quite as painful in terms of incessant refcounting. But given the way the
DOM tosses things on lots and lots of interfaces, there could still be QI costs.
:(

> Given that there is precious little documentation explaining the right
> way to interact with our content model (especially if performance is a
> concern)

More importantly, the lack of a right way to interact with our content model if
performance is a concern.

> I think a good chunk of the burden for extension developers
> using Impure Interfaces falls to us as well.

Agreed.

> Right now, I think the story is "use the DOM interfaces", which
> means figuring out what sort of node .textContent is on, getting a
> bunch of QIs right, and being totally bereft of examples to follow to
> see if you got the XPCOM arcana correct.

That's a pretty good summary, yes.

> Nobody has time to write alternative-to-internals docs

The docs would be less of a problem if we had an actual alternative. What we
_really_ need here time-investment wise is either some API design or making
nsIDOM* suck less or both.

-Boris

Mike Shaver

May 25, 2008, 12:13:13 PM
to Boris Zbarsky, dev-pl...@lists.mozilla.org
On Sun, May 25, 2008 at 11:07 AM, Boris Zbarsky <bzba...@mit.edu> wrote:
> The docs would be less of a problem if we had an actual alternative. What we
> _really_ need here time-investment wise is either some API design or making
> nsIDOM* suck less or both.

I have a very strong preference for the latter or something close to
it, since it would also improve content performance.

(Something close to it might be "replace nsIDOM* and make content use
that as well", for example.)

Mike

Benjamin Smedberg

May 25, 2008, 10:03:30 PM
Boris Zbarsky wrote:

> Yeah. Part of the reason there is performance, certainly for TAF and
> spellcheck.... Perhaps in the new mmgc world using nsIDOMNode and
> company won't be quite as painful in terms of incessant refcounting.
> But given the way the DOM tosses things on lots and lots of interfaces,
> there could still be QI costs. :(

Coupla random notes, not really disagreeing with anyone:

* This whole discussion becomes increasingly irrelevant for moz2: the JS
APIs will be the canonical/stable ones, and internal/binary APIs may change
as needed

* We can certainly condense some of the DOM APIs: there's no need to keep
separate nsIDOMX, nsIDOM3X, nsIDOMXInternal interfaces that have evolved due
to early freezing etc.

* I do think we should be tackling the problem of being able to implement
TAF and spellcheck from JS in ways that are roughly equivalent to binary
performance: it can be done with a better crossing layer and tracing.

--BDS

Boris Zbarsky

May 26, 2008, 11:27:38 PM
Mike Shaver wrote:
> On Sun, May 25, 2008 at 11:07 AM, Boris Zbarsky <bzba...@mit.edu> wrote:
>> The docs would be less of a problem if we had an actual alternative. What we
>> _really_ need here time-investment wise is either some API design or making
>> nsIDOM* suck less or both.
>
> I have a very strong preference for the latter or something close to
> it, since it would also improve content performance.

Yeah, agreed.

-Boris

Boris Zbarsky

May 26, 2008, 11:29:24 PM
Benjamin Smedberg wrote:
> * This whole discussion becomes increasingly irrelevant for moz2: the JS
> APIs will be the canonical/stable ones, and internal/binary APIs may
> change as needed

Sure, but the question on the table right now is 1.9.1 (and 1.9.0), I think.

> * I do think we should be tackling the problem of being able to
> implement TAF and spellcheck from JS in ways that are roughly equivalent
> to binary performance: it can be done with a better crossing layer and
> tracing.

Right now, even implementing them in C++ on top of nsIDOM* would come nowhere
close to using nsIContent performance-wise. The better crossing layer there
might actually help over raw nsIDOM* use from C++ (reduce QIs). And of course
reducing the refcounting and virtual function calls and out params etc involved
would all help...

-Boris

Mark Finkle

May 27, 2008, 8:54:00 AM
On May 25, 12:08 am, Boris Zbarsky <bzbar...@mit.edu> wrote:
> Mark Finkle wrote:

> > If a blocker comes up in RC1 and we fix it by changing nsIContent in
> > such a way that breaks extensions, then I think we have done a bad
> > thing.
>
> Given that extensions shouldn't be using nsIContent to start with, really, I
> don't think we are.

> > We don't get a bad rap from extension developers for no good reason.
>
> A bad rap from extension developers who are using internal interfaces we tell
> them not to use?

I was speaking in a more general sense (applying to any externally
usable interface), using your specific nsIContent case as an example.

Boris Zbarsky

May 27, 2008, 11:21:22 AM
Mark Finkle wrote:
> I was speaking in a more general sense (applying to any externally
> usable interface), using your specific nsIContent case as an example.

It's a bad example to generalize from, and the one most relevant to sicking's
original question.

-Boris

anaesthetica

Jun 19, 2008, 1:32:05 PM
On May 19, 5:38 pm, schrep <sch...@mozilla.com> wrote:
> There have been several discussions of 1.9.1/Moz2 and future product
> releases here
> (http://groups.google.com/group/mozilla.dev.planning/browse_frm/thread...,http://groups.google.com/group/mozilla.dev.planning/browse_frm/thread...)
> and it has been discussed many times in the Mozilla 2 meetings. Not
> everyone could make those meetings but more importantly not everyone was
> aware we would be discussing 1.9.1/moz2 there.
>
> In order to drive us closer to conclusion I wanted to consolidate my
> understanding of the current plan in *early* draft form below.  I
> propose we iterate on this here and take a final form to the
> wiki's/blogs for wider distribution.
>
> So please edit, add questions, to the draft below...
>
> Best,
>
> Schrep
>
> *** DRAFT PLAN ****
> *******************
>
> Product Releases
> ----------------
>
> Firefox 2.0.0.x and 3.0.x:
>
> We'll continue the maintenance schedule we set with Firefox 1.5 - namely
> doing scheduled maintenance releases of Firefox 3.0.x and Firefox
> 2.0.0.x every 6-8 weeks.  These releases will focus on security and
> stability improvements and cannot add new features, change UX, or break
> extensions unless it is required for security purposes or is otherwise
> critical.   Since Firefox 3.1 is coming out so quickly after 3.0 I
> expect the size of these maintenance releases to be smaller than they
> have been for the 2.0.0.x series.  Support releases for Firefox 2.0.0.x
> will terminate at the latest approximately 6 months after the shipment
> of Firefox 3.    This coincides with the approximate ship date of
> Firefox 3.1.

>
> Firefox 3.1:
>
> There were a number of features that we held back from Firefox 3 because
> they weren't quite ready - but they were nearly complete.    These
> include things like XHR, native JSON DOM bindings, ongoing performance
> tuning,  awesomebar++, better system integration, etc.  This along with
> the overall quality of Gecko 1.9 as a basis for mobile and the desire to
> get new platform features out to web developers sooner has led us to
> want to do a second release of Firefox this year.   This release would
> be date-driven and targeted at the end of 2008.   Any features not ready
> in time will move to the next major release.  This is currently planned
> to be based on Gecko 1.9.1 - but if there are solid technical reasons
> for breaking frozen APIs we will bump the version number to Mozilla2.
>
> Firefox Mobile:
>
> There are already devices shipping with early versions of Gecko 1.9 at
> the core.  More are coming soon and we'll be releasing milestones of
> full branded versions of Firefox (with XUL and the Firefox team taking a
> lead in the user experience) later this year.  This lines up well with
> Firefox 3.1 and a synchronized release schedule will make everything run
> more smoothly.
>
> Firefox 4:
>
> Firefox 4 will incorporate some of the more aggressive platform
> improvements in Mozilla2.   It is far too early to set a shipping date
> but an initial target would be sometime in late 2009.  Mozilla2 work has
> been underway for > 8 months
>
> Platform Releases
> -----------------

>
> Gecko 1.9.0.x:
>
> Release date: By end of June 2008
> Status: Stable
> Accepted Changes: Security, stability, performance, minor enhancements
> that do not change *any* exposed APIs.   All changes must pass all
> regression tests and not regress performance.
> Development Tree: CVS Trunk
> Bugzilla Flags: blocking1.9.0.x
> Planning Center:

>
> Gecko 1.9.1:
>
> Release date: Beta stable summer 2008, production stable end of 2008
> Status: In development
> Accepted Changes: Anything that doesn't break frozen APIs.   Passing
> unit tests and proof of no negative impact on perf from Talos required
> before check-in
> Development Tree: Hg mozilla-central - will move to a branch before end
> of summer 2008
> Bugzilla Flags: blocking1.9.1
> Planning Center:
>
> Mozilla2:
>
> Release date: Beta stable mid 2009
> Status: In development
> Accepted Changes: Anything that doesn't regress functionality or performance
> Development Tree: Hg mozilla-central
> Bugzilla Flags: ?
> Planning Center: http://wiki.mozilla.org/Mozilla_2
>
> Q&A
> ------
>
> Q: I'm running a project based on Gecko and am confused as to whether to
> ship on 1.9.0.x, 1.9.1, or Mozilla2?
>
> A: If you are shipping before Nov/Dec 2008 we recommend you ship on
> Gecko 1.9.0.x since it is the stable release.  If your product is
> shipping after Dec 2008 we'd recommend you use Gecko 1.9.1.x since it
> will be stable by then.   If you are shipping later you may want to
> consider Mozilla2
>
> Q: Will Firefox 3.1 ship on 1.9.1 or Mozilla2?
>
> A: We will start with 1.9.1 but if there is a compelling technical
> reason to break frozen APIs the drivers and module owners may decide to
> do so.  If we break frozen APIs we will call it Gecko 2.0 and give fair
> notice.
>
> Q: .... please add common questions here..

Regarding whether or not Fx3.1 will ship on 1.9.1 or 2.0--I think the
Mozilla2 name is psychologically significant and should not be used
even if there's a valid reason to break APIs on what would otherwise
be 1.9.1.

Mozilla2 has a broader significance in terms of the direction and
revitalization of Gecko (as was outlined by Brendan Eich almost two
years ago (wow time flies)).

If 1.9.1 needs to break APIs, I think it would be a better idea to
call it 1.10.0, even though it's somewhat lame to have point numbers
reach 10+. Since Fx3.1 is a *minor release* and the revisions of
Gecko (even if they do break some APIs) will still be comparatively
minor, going to 1.10.0 will make more psychological sense than going
to 2.0.
