To drive us closer to a conclusion, I wanted to consolidate my
understanding of the current plan in *early* draft form below. I
propose we iterate on it here and take the final form to the
wikis/blogs for wider distribution.
So please edit, and add questions to, the draft below...
Best,
Schrep
**** DRAFT PLAN ****
*******************
Product Releases
----------------
Firefox 2.0.0.x and 3.0.x:
We'll continue the maintenance schedule we set with Firefox 1.5 - namely
doing scheduled maintenance releases of Firefox 3.0.x and Firefox
2.0.0.x every 6-8 weeks. These releases will focus on security and
stability improvements and cannot add new features, change UX, or break
extensions unless it is required for security purposes or is otherwise
critical. Since Firefox 3.1 is coming out so quickly after 3.0, I
expect these maintenance releases to be smaller than they have been
for the 2.0.0.x series. Support for Firefox 2.0.0.x will end no later
than roughly six months after Firefox 3 ships, which coincides with
the approximate ship date of Firefox 3.1.
Firefox 3.1:
There were a number of features that we held back from Firefox 3 because
they weren't quite ready - but they were nearly complete. These
include things like XHR, native JSON DOM bindings, ongoing performance
tuning, awesomebar++, better system integration, etc. This, along
with the overall quality of Gecko 1.9 as a basis for mobile and the
desire to get new platform features out to web developers sooner, has
led us to want to do a second release of Firefox this year. This
release would
be date-driven and targeted at the end of 2008. Any features not ready
in time will move to the next major release. This is currently planned
to be based on Gecko 1.9.1 - but if there are solid technical reasons
for breaking frozen APIs we will bump the version number to Mozilla2.
Firefox Mobile:
There are already devices shipping with early versions of Gecko 1.9 at
the core. More are coming soon and we'll be releasing milestones of
full branded versions of Firefox (with XUL and the Firefox team taking a
lead in the user experience) later this year. This lines up well with
Firefox 3.1, and a synchronized release schedule will make everything
run more smoothly.
Firefox 4:
Firefox 4 will incorporate some of the more aggressive platform
improvements in Mozilla2. It is far too early to set a shipping date,
but an initial target would be sometime in late 2009. Mozilla2 work
has been underway for > 8 months.
Platform Releases
-----------------
Gecko 1.9.0.x:
Release date: By end of June 2008
Status: Stable
Accepted Changes: Security, stability, performance, minor enhancements
that do not change *any* exposed APIs. All changes must pass all
regression tests and not regress performance.
Development Tree: CVS Trunk
Bugzilla Flags: blocking1.9.0.x
Planning Center:
Gecko 1.9.1:
Release date: Beta stable summer 2008, production stable end of 2008
Status: In development
Accepted Changes: Anything that doesn't break frozen APIs. Passing
unit tests and proof of no negative impact on perf from Talos are
required before check-in.
Development Tree: Hg mozilla-central - will move to a branch before end
of summer 2008
Bugzilla Flags: blocking1.9.1
Planning Center:
Mozilla2:
Release date: Beta stable mid 2009
Status: In development
Accepted Changes: Anything that doesn't regress functionality or performance
Development Tree: Hg mozilla-central
Bugzilla Flags: ?
Planning Center: http://wiki.mozilla.org/Mozilla_2
Q&A
------
Q: I'm running a project based on Gecko and am confused as to whether
to ship on 1.9.0.x, 1.9.1, or Mozilla2.
A: If you are shipping before Nov/Dec 2008 we recommend you ship on
Gecko 1.9.0.x since it is the stable release. If your product is
shipping after Dec 2008 we'd recommend you use Gecko 1.9.1.x since it
will be stable by then. If you are shipping later you may want to
consider Mozilla2.
Q: Will Firefox 3.1 ship on 1.9.1 or Mozilla2?
A: We will start with 1.9.1 but if there is a compelling technical
reason to break frozen APIs the drivers and module owners may decide to
do so. If we break frozen APIs we will call it Gecko 2.0 and give fair
notice.
Q: .... please add common questions here..
A: No, only frozen APIs will remain compatible. I.e., we are free to
change as many APIs between 1.9 and 1.9.1 as we did between 1.8 and
1.9.
At least that's what I'm hoping the answer is :)
/ Jonas
I don't have any data on how many of our top extensions have binary
components, though.
The cost of freezing all interfaces was quite high. It uglified new
code a good bit, and it cost us some performance and memory (though
probably not to a significant extent).
This was sort of ok when the changes went in to a branch that is
eventually going to die. It would suck a lot more to do that to
mozilla-central where we'd have to clean up the mess later, or live with
it forever.
/ Jonas
I'm not saying we should or shouldn't freeze APIs, and most definitely
we should make the decision based on previous experience. Just wanted
to make sure I understood the impact.
I recently saw that overall about 10% of add-ons on the AMO site today
have binary components.
More importantly, I'm not sure how this 10% is weighted as far as
actual use. I do know there are several in the top 20, 50, 100...
That would be the key to compatibility and the ability to recover from
changes.
We should also plan on a much longer ship cycle if we start
introducing significant levels of API changes. In Firefox 2 it took
about a 2-3 month push to get extension compatibility to an acceptable
shipping level, and for the most part it was just a testing and
version-compatibility rev'ing exercise. In Firefox 3 it was more like
a 5-6 month push, with a lot more work, and with a lot more people
working hard to make that happen. In the end we didn't quite get to
the level of extension compatibility we had for Firefox 2, but we came
pretty close.
This is not to say we should or should not freeze APIs. It is to say
that the number and area of API changes is one of the factors that
will determine a realistic schedule, since we need a reasonable level
of extension compatibility to ship any release.
The best thing that could be done is to map out *which* APIs might
change, then do a bit of estimating on what the impact of the changes
might be on extension compatibility and the back end of the schedule.
Where is a good area to start mapping out the proposed changes?
The other thing is that if we are going to change APIs, we need to get
this work done early, and enforce some early API freeze dates.
Dragging the API changes out into late betas also lengthens the period
needed to get extension developers working on compatibility, and will
turn out to lengthen the overall shipping schedule.
Extension compatibility also relies on a wide variety of areas beyond
binary compatibility, so freeze dates ought to account for all those areas.
-chofmann
Mozilla seems to have several different kinds of APIs so I'm not clear
on what freezing/compatibility issues are being discussed. But late in
FF3, the effective API for extensions changed because of changes in
security model. At least for Firebug this had far more impact than any
reworking of APIs I can imagine. Similarly the new https requirement
on extensions. So if some good comes from evolving the APIs, it seems
like it would not be a significant additional cost for extensions.
John.
Can we turn that around? Would it be possible, given an interface, to scan AMO
for binary extensions that use that interface in their binary blobs?
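Something along these lines might work for the native blobs. A rough,
untested sketch: the IID value below is a placeholder, and it assumes
the compiler emitted the IID as static data in little-endian layout
(typical for NS_DEFINE_IID / NS_GET_IID usage, but not guaranteed):

#include <cstdint>
#include <cstring>
#include <fstream>
#include <iostream>
#include <iterator>
#include <vector>

// Same layout as nsIID: 4-2-2-8 bytes, no padding.
struct IID {
  uint32_t m0;
  uint16_t m1, m2;
  uint8_t  m3[8];
};

// Placeholder value, not a real IID.
static const IID kTargetIID =
  { 0x12345678, 0x1234, 0x5678,
    { 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08 } };

int main(int argc, char** argv) {
  if (argc < 2) {
    std::cerr << "usage: " << argv[0] << " component.so\n";
    return 2;
  }
  std::ifstream in(argv[1], std::ios::binary);
  std::vector<char> blob((std::istreambuf_iterator<char>(in)),
                         std::istreambuf_iterator<char>());

  // Serialize the IID the way a little-endian compiler lays it out.
  unsigned char needle[16];
  std::memcpy(needle,     &kTargetIID.m0, 4);
  std::memcpy(needle + 4, &kTargetIID.m1, 2);
  std::memcpy(needle + 6, &kTargetIID.m2, 2);
  std::memcpy(needle + 8, kTargetIID.m3,  8);

  for (std::size_t i = 0; i + sizeof(needle) <= blob.size(); ++i) {
    if (std::memcmp(blob.data() + i, needle, sizeof(needle)) == 0) {
      std::cout << "IID found at offset " << i << "\n";
      return 0;
    }
  }
  std::cout << "IID not found\n";
  return 1;
}

It would miss components that construct the IID at runtime, but it
ought to catch the common static case.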
> The other thing is that if we are going to change APIs, we need to get
> this work done early, and enforce some early API freeze dates.
The problem is that API changes are usually not ends in and of
themselves. They are often needed for our core "internal" APIs to
address other issues that come up. No one is suggesting changing APIs
for the fun of it, I don't think.
For example, note that if a blocker issue comes up in RC1 right now that can be
most easily fixed by changing nsIContent we would change nsIContent before we
ship final. The question is whether 1.9.1 should be held to a higher bar than
this (as 1.8.1 was).
-Boris
I'd go one further here: all changes must be accompanied by automated
regression tests, or must have manual Litmus tests accompanied by an
explanation of why automated testing cannot be done. Barring unusual
circumstances where a test can be written that works on 1.9.1 but
doesn't work on 1.9.0 (thinking of damage-related tests that might
rely on the compositor, for example), the tests must land in 1.9.0 as
part of the patch (after landing in 1.9.1, of course) and must run on
the tinderboxen on checkin.
> Gecko 1.9.1:
>
> Release date: Beta stable summer 2008, production stable end of 2008
> Status: In development
> Accepted Changes: Anything that doesn't break frozen APIs. Passing
> unit tests and proof of no negative impact on perf from Talos are
> required before check-in.
How many tests (or reasons why tests cannot be written) can we require
here? The branch obviously can't have a lower bar than the trunk, but
that doesn't say anything about what should happen on trunk. I think
we should set the bar for trunk fairly high and require a reviewed
(even if it's just a sanity check) test be committed with each fix (or
the explanation why it can't be tested, plus a litmus ? if
appropriate). If it fails without the patch and succeeds with it,
that's good enough. On a case-by-case basis you'd want to require some
bugs to have a greater number, depth, and concentration of tests, but
even if each bug had just a single shallow test, that would be an
improvement over where we are now.
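To make "fails without the patch and succeeds with it" concrete,
here's a toy standalone example - not tied to any particular harness,
and the function is made up:

#include <cstdio>

// Stand-in for the function a hypothetical patch fixes; without the
// patch the upper clamp is missing, and the last case below fails.
int ClampToByte(int v) {
  if (v < 0) return 0;
  if (v > 255) return 255;  // the line the patch adds
  return v;
}

int main() {
  struct { int in, want; } cases[] = { {-5, 0}, {42, 42}, {999, 255} };
  for (const auto& c : cases) {
    if (ClampToByte(c.in) != c.want) {
      std::printf("FAIL: ClampToByte(%d) != %d\n", c.in, c.want);
      return 1;  // non-zero exit is what the tinderbox would flag
    }
  }
  std::printf("PASS\n");
  return 0;
}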
Jeff
This should be the rule!
My experience says that if there is no test for a feature, there is no
feature. "Test" has a wide meaning here; it may be automated or
manual. But manual tests are unreliable, so tests should be automated
unless that's impossible.
Apart from protecting features, tests simplify communication between
developers. Arguments like "this regresses bug XXXXXX" are often
rebutted with "that's OK, we have a new vision", but there is a
general consensus that breaking unit tests isn't acceptable.
--
Sergey Yanovich
As you've proposed, non-additive changes to unfrozen APIs can and will
break script-only extensions. We broke a lot of script-only
extensions in 1.9, and we will have a much shorter "get your stuff
up-to-date" cycle than we had for 1.9.
I don't think we should freeze any more APIs, since I agree that it
has not paid for itself (and has led us to premature enfreezification
on APIs that nobody wants to use), but I don't think that we should
leap from that position to one of total API abandon.
We should _break_ as few APIs as possible, but no fewer. Additive
changes to unfrozen APIs should be looked at with a kind eye, and more
disruptive changes should come alongside a case for why the API break
is significantly better than an additional new API or such clumsier
alternative.
Mike
Agreed. No gratuitous API changes - but changes for good technical
reasons can be discussed and reviewed. Even minor changes to the
chrome can break add-ons, and the shorter cycle of the next release
means extension authors have less time to react. So we should make
that transition as easy for them as possible.
Mike
Totally agreed. Our increase in automated regression testing has really
saved us time and time again - and our bar for test coverage (either
manual or automated) should continue to rise.
>> Gecko 1.9.1:
>>
>> Release date: Beta stable summer 2008, production stable end of 2008
>> Status: In development
>> Accepted Changes: Anything that doesn't break frozen APIs. Passing
>> unit tests and proof of no negative impact on perf from Talos are
>> required before check-in.
>
> How many tests (or reasons why tests cannot be written) can we require
> here? The branch obviously can't have a lower bar than the trunk, but
> that doesn't say anything about what should happen on trunk. I think
> we should set the bar for trunk fairly high and require a reviewed
> (even if it's just a sanity check) test be committed with each fix (or
> the explanation why it can't be tested, plus a litmus ? if
> appropriate). If it fails without the patch and succeeds with it,
> that's good enough. On a case-by-case basis you'd want to require some
> bugs to have a greater number, depth, and concentration of tests, but
> even if each bug had just a single shallow test, that would be an
> improvement over where we are now.
++ same as above.
Does that mean no revving of the IID for such changes (ones that just
add a method to the end of the vtable) on the branch? Otherwise we get
binary extension issues no matter what, right?
-Boris
I don't think we're optimizing for binary extensions, but rather for JS
usage: the fewer JS extensions we break in this short cycle, the better off
we are (of course, the same rule applies in general!)
--BDS
I can pretty much guarantee that sicking was worried about things like
nsIContent and nsIDocument and nsPIDOMWindow, not about the very very few cases
in 1.8.1 where we created MOZILLA_1_8_BRANCH scriptable interfaces.
In general, doing an extra scriptable interface is less of a
footprint/performance issue, and can be made completely transparent
using classinfo if needed... It does take more work on the back end to
do it that way, of course.
In any case, binary extensions are the main reason we had a complete interface
freeze in 1.8.1, as I recall. The question is how we want to handle them in 1.9.1.
-Boris
Is there a source to flesh out that "etc.", or for now is it just an "etc."?
For example, is <video> on the list?
> Firefox Mobile:
>
> There are already devices shipping with early versions of Gecko 1.9 at
> the core. More are coming soon and we'll be releasing milestones of full
> branded versions of Firefox (with XUL and the Firefox team taking a lead
> in the user experience) later this year.
Where might I go to find out what platforms we are targeting for these
releases?
Gerv
>
> We should _break_ as few APIs as possible, but no fewer. Additive
> changes to unfrozen APIs should be looked at with a kind eye, and more
> disruptive changes should come alongside a case for why the API break
> is significantly better than an additional new API or such clumsier
> alternative.
>
Is there any way to perhaps analyze the source code of open source
products out there for which APIs they are currently using, so
decisions could be made with some data at hand? Either spidering open
source svn/cvs/hg repos, or perhaps even a script that could be run by
the API user when they build, with a report sent to some webtool that
kept track of APIs and the number of instances.
Then, when deciding whether it's worth breaking API _____, there's
solid data on whom that affects outside of the mozilla tree.
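Something like this is the shape I'm imagining - a rough sketch
(C++17), with the interface list and file extensions made up for
illustration:

#include <filesystem>
#include <fstream>
#include <iostream>
#include <map>
#include <string>

namespace fs = std::filesystem;

int main(int argc, char** argv) {
  const char* root = argc > 1 ? argv[1] : ".";
  // Hypothetical watch list; a real run would take the full set of
  // non-frozen interfaces under discussion.
  const std::string interfaces[] = { "nsIContent", "nsIDocument",
                                     "nsPIDOMWindow" };
  std::map<std::string, int> counts;

  for (const auto& entry : fs::recursive_directory_iterator(root)) {
    if (!entry.is_regular_file()) continue;
    const auto ext = entry.path().extension();
    if (ext != ".cpp" && ext != ".h" && ext != ".js") continue;

    std::ifstream in(entry.path());
    std::string line;
    while (std::getline(in, line))
      for (const auto& name : interfaces)
        for (std::string::size_type pos = line.find(name);
             pos != std::string::npos;
             pos = line.find(name, pos + name.size()))
          ++counts[name];
  }

  for (const auto& [name, n] : counts)
    std::cout << name << ": " << n << " occurrences\n";
}

The counts would be noisy (comments and strings match too), but as a
first cut it would show roughly who depends on what.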
--
Robert Accettura
rob...@accettura.com
I guess the spreadsheets linked from
http://wiki.mozilla.org/Firefox3/StatusMeetings/2008-05-20#Platform_Post_1.9
are the best bet right now.
Axel
PS: sorry for my English, I'm not a native speaker ...
That rough list is meant to illustrate the size and scope of the changes, as
well as the things we know we want to get. We'll flesh it out more in the
usual ways, building product plans for each of those releases once we agree
to the general roadmap and layout of releases.
cheers,
mike
> Is there any way to perhaps analyze the source code of open source products out
> there for which APIs they are currently using?
There's an old list at http://www.mozpad.org/doku.php?id=api_analysis
but I believe that includes JS usage as well. At least, the list of
projects may be useful as a starting point to find people who are
receptive.
That might be skewed towards certain projects, though. nsISchema is
the third most-used interface; nsIEventQueue sits at #8 with 430 uses;
somebody probably forked the textbox bindings (nsISSM).
Unfortunately, I don't think treehydra etc. would be that useful, since
not all code compiles with GCC.
--
Mook
Now ComputerWorld says Firefox 3 doesn't support XmlHttpRequest,
http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9087558
(I added a comment about the missing "cross-site" part).
Reporters are watching like a hawk so they can misconstrue whatever
people say ;-)
Gregg's a good guy, and while I would have appreciated him mentioning
that Schrep explicitly called this a "draft" plan, he's trying to
report honestly on what he's observing. I don't think it's a problem
at all, and we should continue to educate journalists and observers
about the fact that this is how software development works. People
propose plans, people discuss them, and figure out if it's the right
thing to do. We just do all of that in the open. It's a double whammy
- people need to get used to learning how software is made, and to the
idea that we do it in the open.
Let's work with people like Gregg to help him understand what we're doing,
and perhaps encourage him to interact with us here.
cheers,
mike
Sure, I have routinely asked AMO to give me a grep of add-ons using
certain interfaces. They are usually very prompt with the information,
especially if it's used to help keep extensions functional.
>
> > The other thing is that if we are going to change APIs, we need to get
> > this work done early, and enforce some early API freeze dates.
>
> The problem is that API changes are usually not ends in and of themselves.
> They are often needed for our core "internal" APIs to address other issues that
> come up. No one is suggesting changing APIs for the fun of it, I don't think.
>
> For example, note that if a blocker issue comes up in RC1 right now that can be
> most easily fixed by changing nsIContent we would change nsIContent before we
> ship final. The question is whether 1.9.1 should be held to a higher bar than
> this (as 1.8.1 was).
If a blocker comes up in RC1 and we fix it by changing nsIContent in
such a way that breaks extensions, then I think we have done a bad
thing. At this late stage, we certainly should favor non-ideal ways
(short-lived private interfaces, maybe) over outright breakage. If we
do make breaking changes (for extensions) we should be moving into
some tactical mode of extension developer outreach - before the code
lands.
We don't get a bad rap from extension developers for no good reason.
I'm not sure a grep would catch binary blobs using the interface. Those are the
things we're worrying about here, not JS using the interface.
> If a blocker comes up in RC1 and we fix it by changing nsIContent in
> such a way that breaks extensions, then I think we have done a bad
> thing.
Given that extensions shouldn't be using nsIContent to start with, really, I
don't think we are.
> At this late stage, we certainly should favor non-ideal ways
> (short-lived private interfaces, maybe) over outright breakage.
The question is whether adding 4 bytes to every single DOM node is
worth it to avoid possibly breaking binary blob extensions that are
using internal APIs that we clearly say are very unstable. On the 1.8
branch, the official answer to that was "yes", but it might have been
the wrong tradeoff. I certainly think it's the wrong tradeoff before
we ship.
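For anyone wondering where the 4 bytes come from: every extra
interface an object implements adds another vtable pointer to each
instance - 4 bytes per node on a 32-bit build, 8 on 64-bit. A toy
standalone illustration with made-up names:

#include <iostream>

struct nsIThing        { virtual void Frob() = 0; };
struct nsIThing_Branch { virtual void Frob2() = 0; };  // "branch" addition

struct NodeOld : public nsIThing {
  void Frob() {}
};                      // one vtable pointer per instance

struct NodeBranch : public nsIThing, public nsIThing_Branch {
  void Frob() {}
  void Frob2() {}
};                      // two vtable pointers per instance

int main() {
  std::cout << "old: " << sizeof(NodeOld)
            << ", with branch interface: " << sizeof(NodeBranch)
            << "\n";    // differs by sizeof(void*)
}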
> If we do make breaking changes (for extensions) we should be moving into
> some tactical mode of extension developer outreach - before the code
> lands.
_Any_ change to _any_ interface is a breaking change for binary blob extensions.
Again, this is why we didn't make them on the 1.8 branch. I fully agree on
outreach for API changes at this late date or on the branch in general.
> We don't get a bad rap from extension developers for no good reason.
A bad rap from extension developers who are using internal interfaces we tell
them not to use?
Would we care about a bad rap from extension developers that binary-patched
libxul and complained when the library bit pattern changed in a security update?
If we wouldn't, where is the line drawn and why?
-Boris
Could you elaborate a bit on why this happens? The IID should change
whenever the interface changes, right?
IIUC, if an interface that an extension *implements* changes, the
extension will crash. If an extension only *calls* some
methods/attributes of an interface, and the order of methods/attributes
in the interface changes, or methods/attributes are removed from the
interface, or methods/attributes are inserted into the interface
anywhere but at the end, the extension will crash. However, if
methods/attributes are inserted at the end of the interface, everything
should be fine.
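To make the vtable mechanics concrete, here is a compile-only toy
sketch - made-up names, with plain C++ virtuals standing in for XPCOM
interfaces:

// A binary extension compiled against the old header calls virtual
// methods by vtable slot number, not by name.
struct nsIFoo {                // the "1.9" layout
  virtual void First() = 0;    // slot 0
  virtual void Second() = 0;   // slot 1 -- old binaries call this slot
};

struct nsIFoo_appended {       // append-only change
  virtual void First() = 0;    // slot 0, unchanged
  virtual void Second() = 0;   // slot 1, unchanged: old *callers* are fine
  virtual void Third() = 0;    // new slot 2, invisible to old callers
};
// But an extension *implementing* the old nsIFoo provides no slot 2,
// so handing its object to new code that calls Third() crashes. And
// if the IID is revved, the old extension's QI for the old IID fails.

struct nsIFoo_inserted {       // insertion shifts every later slot
  virtual void First() = 0;    // slot 0
  virtual void Inserted() = 0; // slot 1 -- an old caller asking for
  virtual void Second() = 0;   //   "Second" by slot now reaches Inserted()
};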
The last case is an exception to the quoted rule, right?
--
Sergey Yanovich
And for which we provide supported alternatives for their use cases, or not?
> Would we care about a bad rap from extension developers that binary-patched
> libxul and complained when the library bit pattern changed in a security update?
>
> If we wouldn't, where is the line drawn and why?
They're different to me because extension authors are likely to be
copying the use-of-internal-interface pattern from our code, but are
unlikely to be copying binary patching from anywhere reputable. (We
have been concerned with getting a bad rap in the past about similarly
fragile behaviours.)
I don't know what you mean by "internal", exactly, or why we would
have internal interfaces that were accessible at all by textual name
from other code without explicit opt-in (by which I mean defining
something like MOZILLA_INTERNAL_API, not "including the wrong header
file"), but there are many use cases that are simply not workable
without use of non-frozen interfaces at the least. nsIContent --
beyond the IID identity, I guess -- is protected by MOZILLA_INTERNAL_API,
but a file like that in a public/ directory with commented and
useful-sounding methods seems like it's basically baiting you to set
MOZILLA_INTERNAL_API. (If you're finding things like SetAttr or
GetText via lxr, you won't even see the tiny and cryptic comment at
the top of the class declaration.) Things in our own extensions/
directory (xforms, TAF, spellcheck) use nsIContent quite liberally, to
say nothing of SVG and editor and...
Given that there is precious little documentation explaining the right
way to interact with our content model (especially if performance is a
concern), I think a good chunk of the burden for extension developers
using Impure Interfaces falls to us as well. If we actually told
people what to do instead, from nsIContent.h or a page liberally
linked from there, I would feel better about giving developers a hard
time about using it, especially given some time after that
documentation was available for other right-thinking examples to build
up. Right now, I think the story is "use the DOM interfaces", which
means figuring out what sort of node .textContent is on, getting a
bunch of QIs right, and being totally bereft of examples to follow to
see if you got the XPCOM arcana correct. "Go read the W3C
specification text" is pretty unsatisfying! (If the DOM interfaces
are so great, why doesn't more of our own code use them?)
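For concreteness, this is roughly what that looks like from C++ - a
sketch from memory against the Gecko headers (textContent lives on
nsIDOM3Node in this era, if I recall correctly), so treat the details
as approximate:

#include "nsCOMPtr.h"
#include "nsIDOMNode.h"
#include "nsIDOM3Node.h"
#include "nsStringAPI.h"

nsresult GetTextOfNode(nsIDOMNode* aNode, nsAString& aResult)
{
  // The arcana: nsIDOMNode itself has no textContent, and nothing in
  // the headers points you at the DOM Level 3 interface that does.
  nsCOMPtr<nsIDOM3Node> node3 = do_QueryInterface(aNode);
  if (!node3)
    return NS_ERROR_NO_INTERFACE;
  return node3->GetTextContent(aResult);
}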
Nobody has time to write alternative-to-internals docs, I'm sure I
will hear, and I expect that there will be a number of defensive
reactions to what I write here, but those decisions are economics.
Right now we're saying that there are more important things than
helping people get weaned off of private interfaces, by making the
Right Thing also the Easier To Figure Out Thing. That might well be
the right decision in terms of resources, but it means that we have to
bear the cost of well-meaning and valuable extension developers
getting wed to the capabilities of nsIContent and other such friends.
Mike
That's correct. Which means the extension that uses that interface is likely to
stop working, if it gets pointers to that interface via QI. If it gets them
from other interfaces, it would get a pointer to a vtable that differs from what
it expects.
> However, if methods/attributes are inserted at the end of the interface, everything
> should be fine.
If the extension is not getting the interface via QI, yes.
> The last case is an exception to the quoted rule, right?
Sure.
-Boris
That's a good question. I'd love to provide different interfaces for
their use cases; my past attempts to gather information about such use
cases, when people posted in newsgroups about their use of nsIContent
and the like, have failed.
> They're different to me because extension authors are likely to be
> copying the use-of-internal-interface pattern from our code, but are
> unlikely to be copying binary patching from anywhere reputable.
I buy this, ok. I admit the comparison was silly.
For what it's worth, I agree that the bar for nsIContent changes after ship (if
any) should be very high. I'm just not convinced that it should be quite that
high before ship, even at the point where we are now. But I might be wrong on
this depending on what we've been communicating to extension developers...
> I don't know what you mean by "internal", exactly
I'm mostly thinking our main layout/content data representation: nsINode,
nsIContent, nsIFrame. Those are the ones that would be most painful to never
ever ever change.
> nsIContent -- beyond the IID identity, I guess -- is protected by MOZILLA_INTERNAL_API,
> but a file like that in a public/ directory with commented and
> useful-sounding methods seems like it's basically baiting you to set
> MOZILLA_INTERNAL_API.
Yeah, that's a bad deal. :(
> Things in our own extensions/ directory (xforms, TAF, spellcheck)
Yeah. Part of the reason there is performance, certainly for TAF and
spellcheck.... Perhaps in the new mmgc world using nsIDOMNode and company won't
be quite as painful in terms of incessant refcounting. But given the way the
DOM tosses things on lots and lots of interfaces, there could still be QI costs.
:(
> Given that there is precious little documentation explaining the right
> way to interact with our content model (especially if performance is a
> concern)
More importantly, the lack of a right way to interact with our content model if
performance is a concern.
> I think a good chunk of the burden for extension developers
> using Impure Interfaces falls to us as well.
Agreed.
> Right now, I think the story is "use the DOM interfaces", which
> means figuring out what sort of node .textContent is on, getting a
> bunch of QIs right, and being totally bereft of examples to follow to
> see if you got the XPCOM arcana correct.
That's a pretty good summary, yes.
> Nobody has time to write alternative-to-internals docs
The docs would be less of a problem if we had an actual alternative.
What we _really_ need here, time-investment-wise, is either some API
design, or making nsIDOM* suck less, or both.
-Boris
I have a very strong preference for the latter or something close to
it, since it would also improve content performance.
(Something close to it might be "replace nsIDOM* and make content use
that as well", for example.)
Mike