
Re: breaking sites for what?


Henri Sivonen

Feb 7, 2012, 4:02:05 AM
to dev-platform, dev-apps...@lists.mozilla.org
On Tue, Feb 7, 2012 at 10:09 AM, Asa Dotzler <a...@mozilla.org> wrote:
> I'd like to better understand why we think it's the right trade-off to break
> major websites in order to shave some bits off of our HTTP headers.
>
> http://hsivonen.iki.fi/accept-charset/
> https://bugzilla.mozilla.org/show_bug.cgi?id=572652
>
> What is the user benefit here that outweighs the loss of compatibility with
> a major site -- in this case, the second most popular translation service on
> the Web?

First of all, breaking one major site is nothing new. For example,
Facebook pokes at the edges of the Web platform so hard that sometimes,
in order to make the platform better, we break Facebook. Native JSON
and the unification of namespaces in HTML and XHTML DOMs come to mind.
With Facebook this is OK, because developing a Web site is a core
competence of Facebook and they are remarkably quick to adapt to
browser fixes.

Also, we break various Google properties all the time, often because
Google is doing it wrong with browser sniffing but also because we
first added some moz-prefixed stuff and then took it away or changed
it. In the case of Firefox 10, various Google properties broke because
the version number started having double digits. The situation with
Google is pretty much OK, too. Developing Web apps is a core competence
of Google as well; they can and do adapt to browser changes.

Because developing Web sites/apps is the core competence of companies
like Facebook and Google and because the changes usually involve them
choosing a code path *they already have in place* for WebKit or IE,
"breaking" Facebook/Google sites/apps in a way that causes us to ask
them to put us on their already-existing WebKit (or IE) codepath is,
in my view, *much* less of a problem than breaking random long-tail
sites that aren't actively maintained and that belong to people or
companies who don't have Web stuff as their core competence.

Moreover, Facebook and Google are pushing the boundaries of the Web
platform harder than the long tail is. If we couldn't ask them to make
a change once in a while, that would hamper the evolution of the Web
much more than keeping the long tail working does.

In the case at hand, the only known breakage was on the properties of
a company that's supposed to be in the same bucket as Facebook and
Google. Keeping Web stuff running should be a core competence of
Yahoo! Also, the nature of their breakage was similar to what we see
with Facebook/Google: Since Babelfish already worked in IE and Safari,
Yahoo! must already have a code path that works without
Accept-Charset. Also, continuing to send Accept-Charset only to keep
Babelfish working would mean bloating our HTTP requests for all of the
long tail.

Also, it's not new for the call about which big sites should be
capable of fixing their stuff to go wrong. For example, I broke a
MySpace feature in the 3.6 cycle. I had a patch ready that changed the
platform so that the (crazy) MySpace feature would have worked (i.e.
the breakage never had to reach a release), but the code review
process blocked me from changing Gecko to accommodate the (remarkably
silly) code at MySpace. But as it turns out,
https://bugzilla.mozilla.org/show_bug.cgi?id=542875 is still open.
(But then, MySpace isn't that relevant anymore.)

When the only known breakage is a major site that should have the
capability to adapt by putting us on their WebKit or IE code path, I
think refraining from changing Gecko is the wrong call. I'd even
rather start doing the Safari thing with site-specific hacks than stop
evolving Gecko because one big site makes a bad UA-sniffed assumption.

(Let's move this to dev-platform.)

--
Henri Sivonen
hsiv...@iki.fi
http://hsivonen.iki.fi/

Henri Sivonen

Feb 7, 2012, 4:10:38 AM
to dev-platform
On Tue, Feb 7, 2012 at 11:02 AM, Henri Sivonen <hsiv...@iki.fi> wrote:
> On Tue, Feb 7, 2012 at 10:09 AM, Asa Dotzler <a...@mozilla.org> wrote:
>> I'd like to better understand why we think it's the right trade-off to break
>> major websites in order to shave some bits off of our HTTP headers.
>>
>> http://hsivonen.iki.fi/accept-charset/
>> https://bugzilla.mozilla.org/show_bug.cgi?id=572652
>>
>> What is the user benefit here that outweighs the loss of compatibility with
>> a major site -- in this case, the second most popular translation service on
>> the Web?

And for those just tuning into the thread on dev-platform, as dbaron
said on the other list:
Taking bytes out of our HTTP boilerplate has the performance benefit
of making HTTP requests smaller, which affects whether a request fits
in one TCP packet. Since the length of the request URL is variable,
there's no single threshold that makes the boilerplate too long or
short enough. The shorter the boilerplate, the longer the request URL
can be before the request spills over into an extra packet.
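
To make the arithmetic concrete, here's a minimal sketch (the header
values are representative of a Firefox-era request, not the exact
bytes we send):

  // Counts how much of a single ~1460-byte TCP segment the fixed
  // request boilerplate consumes, leaving the rest as URL budget.
  #include <cstdio>
  #include <cstring>

  int main() {
    const char* boilerplate =
        "GET / HTTP/1.1\r\n"
        "Host: example.org\r\n"
        "User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:10.0) "
            "Gecko/20100101 Firefox/10.0\r\n"
        "Accept: text/html,application/xhtml+xml,application/xml;"
            "q=0.9,*/*;q=0.8\r\n"
        "Accept-Language: en-us,en;q=0.5\r\n"
        "Accept-Encoding: gzip, deflate\r\n"
        "Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7\r\n"
        "Connection: keep-alive\r\n\r\n";
    const size_t kSegment = 1460;  // typical Ethernet TCP payload
    size_t fixed = strlen(boilerplate);
    printf("boilerplate: %zu bytes, URL budget: %zu bytes\n",
           fixed, kSegment - fixed);
  }

The Accept-Charset line alone is ~48 of those bytes, counting its
CRLF.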

Henri Sivonen

Feb 7, 2012, 5:26:28 AM
to dev-platform, dev-apps...@lists.mozilla.org
On Tue, Feb 7, 2012 at 11:31 AM, Asa Dotzler <a...@mozilla.org> wrote:
> On 2/7/2012 1:02 AM, Henri Sivonen wrote:
>>
>> On Tue, Feb 7, 2012 at 10:09 AM, Asa Dotzler<a...@mozilla.org>  wrote:
>>>
>>> I'd like to better understand why we think it's the right trade-off to
>>> break
>>>
>>> major websites in order to shave some bits off of our HTTP headers.
>>>
>>> http://hsivonen.iki.fi/accept-charset/
>>> https://bugzilla.mozilla.org/show_bug.cgi?id=572652
>>>
>>> What is the user benefit here that outweighs the loss of compatibility
>>> with
>>> a major site -- in this case, the second most popular translation service
>>> on
>>> the Web?
>>
>>
>> First of all, breaking one major site is nothing new. For example
>
>
> I don't accept that as justification for doing more of it. "We get a major
> win" might be good enough justification but "We've done it in the past"
> isn't good enough, IMO.

I agree that that's not justification enough. I responded that way
because your message made it look like you were saying this was some
kind of new phenomenon. Upon re-reading what you said, I don't think
the wording of your message actually suggested anything about the
recency of the phenomenon. Sorry.

(Quotes re-ordered below.)

> You're making a case for breaking sites when there is a clear win for the
> platform AND we have reasonable confidence that the major sites will fix
> themselves
...
> Sometimes breakage is hard to avoid. I get that. Sometimes breaking sites
> gives the platform a big boost. I get that. This, IMO, is not one of those
> cases.
...
> This is not new bloat. We've been carrying it forever. We could have
> continued to carry it.

The DOM namespace unification thing that broke Facebook (and that
Safari had already done, so Facebook had a code path for it) was bloat
we had been carrying around forever. Before making the change *and
some further changes that it enabled*, we didn't know what the full
benefit would be. The change turned out to be an overall DOM
performance win even though it was originally motivated by code
aesthetics, simplicity and maintainability.

As for breaking Facebook by introducing native JSON, we *could* have
told TC39 to give the entry point to native JSON a different name
since the name they used was taken by Facebook.

We don't *have to* break some Google code by taking away or changing
moz-prefixed animation APIs. We *could* keep the moz-prefixed stuff
around forever while *also* implementing the W3C spec without prefix.

We didn't *have to* break some Google code by moving the version
number in the UA string to double digits. We *could* have done what
Opera did and carry a longer UA string around forever.

What's out of the ordinary here is that the patch got backed out from
four release trains.

> Apparently Chrome isn't in a hurry to drop it and
> they're hyper-sensitive to performance issues. I just don't get the rush
> knowing it was going to break things that were not likely to get fixed with
> haste.

The more likely explanation is that there's a ton of stuff in a
browser and they didn't happen to notice the situation around this
particular thing. A couple of people on their team seemed to act as if
they considered sending Accept-Charset a bug in Chrome after the
correction in http://hsivonen.iki.fi/accept-charset/ alerted them to
it.

In a way, it's sad that we can't be leading on stuff like this. This
wouldn't have happened if IE hadn't already been the leader. And now
it seems that we should be waiting for Chrome to move before us, too.
:-(

> When this broke yahoo search with non-ascii characters and babelfish we
> backed it out. I was there when we did it the first couple of times. I
> wasn't there advocating for backing it out later because I'd assumed we all
> agreed that breaking Yahoo was not an acceptable cost for this "win". At
> some point, half of the Yahoo breakage was fixed by Yahoo and people decided
> that was good enough.

FWIW, I was a bit surprised to see that the patch made it to release
before Y! fixed Babelfish.

> Except that we knew about it ahead of time and they weren't in any hurry in
> fixing it.

Although I was surprised that the patch made it to release before
Babelfish had been fixed, in retrospect, I think deferring the patch
indefinitely would have set a bad precedent. I think we shouldn't be
signaling to Google / Facebook / Yahoo! / Live.com/Hotmail/Bing that
we'll refrain from aligning Gecko with IE or Safari behavior if they
put us on a different code path by browser sniffing and are slow to
fix it.

If we are no longer OK with breaking sites/apps in a "too big to fail"
category, I think we should start doing Safari/Chrome-style (not
IE-style or Opera-style) site-specific hacks instead of letting others
make us defer alignment with IE/Safari by multiple release trains.

> (Do we even have data that gives us any confidence at all that
> other much smaller sites weren't broken by this?)

We never have confidence beyond not seeing an influx of bug reports.

> Winning over users is much more
> difficult and losing them is a whole lot easier.

Indeed.

(Trying to move this to dev-platform again, since this thread is
really about how we change Gecko instead of how we change Firefox
around it.)

Asa Dotzler

Feb 7, 2012, 5:45:05 AM
to
On 2/7/2012 2:26 AM, Henri Sivonen wrote:

> If we are no longer OK with breaking sites/apps in a "too big to fail"
> category, I think we should start doing Safari/Chrome-style (not
> IE-style or Opera-style) site-specific hacks instead of letting others
> make us defer alignment with IE/Safari by multiple release trains.

I would like to hear more about this. Can you explain how Safari and
Chrome do it and why that's superior to how IE and Opera do it?

- A

PS, I don't know what the heck I'm doing here but I think I've set
followup-to to be m.d.platform.

David Rajchenbach-Teller

Feb 7, 2012, 6:03:11 AM
to Henri Sivonen, dev-platform, dev-apps...@lists.mozilla.org
On 2/7/12 11:26 AM, Henri Sivonen wrote:
> We don't *have to* break some Google code by taking away or changing
> moz-prefixed animation APIs. We *could* keep the moz-prefixed stuff
> around forever while *also* implementing the W3C spec without prefix.

What is the rationale for dropping the moz-prefixed stuff, btw?

I tend to believe that it makes sense to keep it (at least for some
buffer period) – not all web apps/sites can afford to have someone on
watch for such tiny-but-frequent changes.


Cheers,
David

--
David Rajchenbach-Teller, PhD
Performance Team, Mozilla


Henri Sivonen

Feb 7, 2012, 6:54:51 AM
to dev-pl...@lists.mozilla.org
On Tue, Feb 7, 2012 at 12:45 PM, Asa Dotzler <a...@mozilla.org> wrote:
> On 2/7/2012 2:26 AM, Henri Sivonen wrote:
>
>> If we are no longer OK with breaking sites/apps in a "too big to fail"
>> category, I think we should start doing Safari/Chrome-style (not
>> IE-style or Opera-style) site-specific hacks instead of letting others
>> make us defer alignment with IE/Safari by multiple release trains.
>
> I would like to hear more about this. Can you explain how Safari and Chrome
> do it and why that's superior to how IE and Opera do it?

Opera downloads (and keeps updating) a JavaScript program that can
modify Web pages when they load, etc.

IE8 introduced an updating list of sites that get the Compatibility
Mode that emulates IE7 wholesale instead of varying just one thing
that needs to be varied for a given site. Since IE9, they've had a
bunch of switches they can flip on a more granular basis. The list of
which sites get which switches flipped is updated independently of the
IE binary.

Safari has an app-wide flag for enabling site-specific hacks but they
don't have a generic mechanism for hacks beyond that. When they need a
hack for something too-big-to-fail, they write an alternative code
path in C++ in the relevant part of the codebase and make it
conditional on identifying what the hack is targeting and the app-wide
flag. The targeting can be by host name, obviously, but there has been
a hack that targeted an old version of Mediawiki. That hack recognized
a particular CSS file that shipped with Mediawiki. There isn't
elaborate infrastructure for targeting the hack. You just write enough
code to figure out whether the hack should apply, in whatever way you
can figure that out from the C++ context you are in. There's no
mechanism for shipping hacks to users, or taking individual hacks
away, other than the general browser update mechanism. Since these
hacks are
conditional on a global flag, it's possible to offer a Debug menu item
for easily toggling them off for testing.

Chrome also does its site-specific hacks in C++ where the relevant
code is, without a generic mechanism. I'm not sure if they expose a way
to turn all the hacks off for testing.

The Safari way of doing site-specific hacks is superior, because it
allows any kind of behavior change with little up-front abstraction.
To adopt the same approach, we'd need to expose a libxul-global flag
in e.g. nsContentUtils and maybe add some "What site am I on?" helper
methods on an as-needed basis.
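
As a rough sketch of what that could look like (the names here are
hypothetical, not actual Gecko API):

  // A libxul-global kill switch plus one narrowly targeted hack,
  // living directly in the code path it modifies, Safari-style.
  #include <string>

  namespace mozilla {

  // Imagine this mirroring an app-wide pref so all hacks can be
  // toggled off at once for testing, like Safari's Debug menu item.
  static bool gSiteSpecificHacksEnabled = true;

  // A Necko-level hack could call this to keep sending Accept-Charset
  // to one too-big-to-fail host only.
  bool ShouldSendAcceptCharset(const std::string& aHost) {
    return gSiteSpecificHacksEnabled && aHost == "babelfish.yahoo.com";
  }

  }  // namespace mozilla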

The IE8 approach of having an IE7 mode is overkill for us, because
the kind of breakage we see from Gecko changes tends to be limited to
some particular thing instead of requiring a whole bunch of
intertwingled behaviors from Firefox 2.

The IE9/IE10 approach of having a bunch of toggles that get flipped on
multiple sites is overkill for us, because so far, the kind of
breakage that we've shipped or been close to shipping tends to have
affected one big site at a time, so a generic targeting mechanism
would have been just overhead.

The Opera approach requires the problem either to be fixable from JS
using common Web platform APIs or to have new JS-exposed APIs added,
so to implement a hack to send Accept-Charset to Babelfish, we would
have needed Necko changes, a new JS-exposed API and some JS to drive
the API. With the Safari approach, the changes involved would have
been well-contained Necko changes. The Opera approach also has the
problem that once you've built all the generic infrastructure for it,
it becomes too easy to deploy compared to the Safari approach, so
there's a risk of ending up with more hacks. I believe Opera indeed
has more hacks than Safari, though not all of that can be attributed
to the mechanism. Some of it may be attributable to their market
position.

Boris Zbarsky

Feb 7, 2012, 9:35:02 AM
to
On 2/7/12 6:03 AM, David Rajchenbach-Teller wrote:
> On 2/7/12 11:26 AM, Henri Sivonen wrote:
>> We don't *have to* break some Google code by taking away or changing
>> moz-prefixed animation APIs. We *could* keep the moz-prefixed stuff
>> around forever while *also* implementing the W3C spec without prefix.
>
> What is the rationale for dropping the moz-prefixed stuff, btw?

Assuming you really mean "dropping" as opposed to changing:

1) Keeping it around in perpetuity is bad for the web, since it allows
sites to be written to work in Gecko only.

2) Keeping around two separate codepaths (the prefixed one and the
standard one) increases codesize and reduces code maintainability,
making it harder to make other improvements.

> I tend to believe that it makes sense to keep it (at least for some
> buffer period)

We do. We typically ship the unprefixed version, then a few releases
later remove the prefixed one. This gives people time to adapt.
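
As a sketch of that lifecycle (the property name is real, but the
release numbers are illustrative, not an actual schedule):

  // A prefixed alias is kept for a few releases after the unprefixed
  // version ships, then dropped once sites have had time to adapt.
  struct PropertyAlias {
    const char* prefixed;
    const char* unprefixed;
    int unprefixedSince;  // release that shipped the standard name
    int aliasRemovedIn;   // release in which the prefixed alias dies
  };

  static const PropertyAlias kAliases[] = {
      {"-moz-animation", "animation", 16, 20},  // numbers hypothetical
  };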

>– not all web apps/sites can afford to have someone on
> watch for such tiny-but-frequent changes.

Quite honestly, if they can't watch for changes they should not be using
explicitly experimental features!

The flip side is we should not be prefixing stuff that's no longer
experimental, of course.

-Boris

Patrick McManus

Feb 7, 2012, 10:08:29 AM
to Henri Sivonen, dev-platform
On Tue, 2012-02-07 at 11:10 +0200, Henri Sivonen wrote:
> On Tue, Feb 7, 2012 at 11:02 AM, Henri Sivonen <hsiv...@iki.fi> wrote:
> > On Tue, Feb 7, 2012 at 10:09 AM, Asa Dotzler <a...@mozilla.org> wrote:
> >> I'd like to better understand why we think it's the right trade-off to break
> >> major websites in order to shave some bits off of our HTTP headers.
> >>
> >> http://hsivonen.iki.fi/accept-charset/
> >> https://bugzilla.mozilla.org/show_bug.cgi?id=572652
> >>
> >> What is the user benefit here that outweighs the loss of compatibility with
> >> a major site -- in this case, the second most popular translation service on
> >> the Web?
>

I'm all for more use of feature blacklists and a generic infrastructure
there. It seems to be the ugly way to get past the 1% breakage that
sometimes holds us back. For instance, I'm told that's the way Chrome
deploys SSL False Start which a rare few broken servers choke on.

> And for those just tuning into the thread on dev-platform, as dbaron
> said on the other list:
> Taking bytes out of our HTTP boilerplate has the performance benefit
> of making HTTP requests smaller, which affects whether a request fits
> in one TCP packet. Since the length of the request URL is variable,
> there's no single threshold that makes the boilerplate too long or
> short enough. The shorter the boilerplate, the longer the request URL
> can be before the request spills over into an extra packet.

That's the right overall point, but the details are a little off, and
somehow I can't resist pedantically chiming in.

The packet boundary isn't super important, but the congestion window
is. If you overflow that, you take a full round-trip time (~100ms) of
delay before being able to send again. Client-side cwnd is usually 3
or 4 packets large (4.5 - 6KB), so even big URLs fit in it
comfortably. However, the truly variable part that causes the hit is
most often cookies. So every byte is relevant, but I bet telemetry
would show us that the size of Accept-* wasn't the tipping point for
very many old-school HTTP transactions.

However, optimizations like pipelining and SPDY have more than 1 request
outstanding at a time (which is the source of their awesomeness!) - so
they are far more likely to bump into that limit. The gains here are
huge, so it's certainly something we want to enable. In those cases
reducing header sizes by any amount increases the number of requests
that tcp will allow in one flight, and thus the limit of your effective
parallelism, which is a really big deal because round trips really are
the limiting factor in networking performance for most page scenarios.
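
A back-of-the-envelope illustration (all numbers are assumptions, not
telemetry):

  // With pipelining/SPDY, every queued request shares the initial
  // congestion window, so per-request header bytes cap parallelism.
  #include <cstdio>

  int main() {
    const int kCwnd = 4 * 1460;           // ~4-segment initial cwnd
    const int sizes[] = {700, 600, 500};  // assumed request sizes
    for (int bytes : sizes) {
      printf("%d-byte requests: %d fit in the first flight\n",
             bytes, kCwnd / bytes);
    }
  }

Under those assumptions, shaving 200 bytes per request takes you from
8 requests in the first flight to 11.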

As a final aside, this is why SPDY header compression is important:
not so much because of the reduced transfer time of a compressed
header per se, but because you can send more requests in one cwnd,
which is a really big deal.

hth.

-Patrick


Gervase Markham

Feb 7, 2012, 11:58:29 AM
to Patrick McManus, Henri Sivonen
On 07/02/12 15:08, Patrick McManus wrote:
> However, optimizations like pipelining and SPDY have more than 1 request
> outstanding at a time (which is the source of their awesomeness!) - so
> they are far more likely to bump into that limit. The gains here are
> huge, so it's certainly something we want to enable. In those cases
> reducing header sizes by any amount increases the number of requests
> that tcp will allow in one flight, and thus the limit of your effective
> parallelism, which is a really big deal because round trips really are
> the limiting factor in networking performance for most page scenarios.

So would it be fair to say that, for SPDY and pipelining, while it's not
possible to predict the behaviour for any particular site (due to
cookies), it is true that each single byte shaved off our default HTTP
boilerplate has value, because it means that a slightly greater
percentage of request sets will get in under various significant
boundaries? And that each byte has equal value, that is to say, saving
10 bytes in one spot has the same value as saving 10 bytes in another
spot?

If that's so, then we have proven benefit for any change, proportional
to the amount of bytes removed. The difficulty then is quantifying the
downside, and doing the comparison.

Gerv

Patrick McManus

Feb 7, 2012, 12:42:37 PM
to Gervase Markham, Henri Sivonen, dev-pl...@lists.mozilla.org
On Tue, 2012-02-07 at 16:58 +0000, Gervase Markham wrote:
> On 07/02/12 15:08, Patrick McManus wrote:
> > However, optimizations like pipelining and SPDY have more than 1 request
> > outstanding at a time (which is the source of their awesomeness!) - so
> > they are far more likely to bump into that limit. The gains here are
> > huge, so it's certainly something we want to enable. In those cases
> > reducing header sizes by any amount increases the number of requests
> > that tcp will allow in one flight, and thus the limit of your effective
> > parallelism, which is a really big deal because round trips really are
> > the limiting factor in networking performance for most page scenarios.
>
> So would it be fair to say that, for SPDY and pipelining, while it's not
> possible to predict the behaviour for any particular site (due to
> cookies), it is true that each single byte shaved off our default HTTP
> boilerplate has value, because it means that a slightly greater
> percentage of request sets will get in under various significant
> > boundaries? And that each byte has equal value, that is to say,
> > saving 10 bytes in one spot has the same value as saving 10 bytes in
> > another spot?
>

Basically, yes. It's more true for sub-resources than other things,
because the characterization of the web says sub-resources come in big
runs, whereas base page URIs, XHRs, etc. not so much. The typical web
page now has 55-90 sub-resources, depending on how you want to weight
the data
(http://www.stevesouders.com/blog/2012/02/01/http-archive-2011-recap/)



Gervase Markham

Feb 7, 2012, 12:52:31 PM
to
On 07/02/12 17:42, Patrick McManus wrote:
> Basically, yes. It's more true for sub-resources than other things,
> because the characterization of the web says sub-resources come in big
> runs, whereas base page URIs, XHRs, etc. not so much. The typical web
> page now has 55-90 sub-resources, depending on how you want to weight
> the data
> (http://www.stevesouders.com/blog/2012/02/01/http-archive-2011-recap/)

We already have different boilerplate for different sorts of request.
One option would be to start by eliminating Accept-Charset (and perhaps
some other stuff; we should probably do an audit) for image, CSS and
script loads.
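
To sketch what that audit could lead to (the enum and logic are
hypothetical, not actual Necko structure):

  // Vary the fixed headers by load type so sub-resource fetches,
  // which come in big runs, carry less boilerplate per request.
  enum class LoadType { Document, Image, Style, Script };

  bool WantsAcceptCharset(LoadType aType) {
    // At most top-level documents would keep Accept-Charset; images,
    // CSS and scripts never usefully consume it.
    return aType == LoadType::Document;
  }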

Gerv


Asa Dotzler

Feb 7, 2012, 7:14:43 PM
to
On 2/7/2012 8:58 AM, Gervase Markham wrote:

> If that's so, then we have proven benefit for any change, proportional
> to the amount of bytes removed. The difficulty then is quantifying the
> downside, and doing the comparison.

I'm a big SPDY fan and I can't wait for widespread adoption. But I want
to remind folks that "We have proven benefit" for approximately one
major website on the Web today and the downside is changing things that
potentially millions of sites depend on.

- A

Igor Bukanov

Feb 7, 2012, 7:46:35 PM
to Patrick McManus, Henri Sivonen, dev-platform
On 7 February 2012 16:08, Patrick McManus <mcm...@ducksong.com> wrote:
> I'm all for more use of feature blacklists and a generic infrastructure
> there. It seems to be the ugly way to get past the 1% breakage that
> sometimes holds us back. For instance, I'm told that's the way Chrome
> deploys SSL False Start which a rare few broken servers choke on.

This is also the way Opera has enabled HTTP pipelining for something
like 10 years. I suppose it must be somewhat ugly code to detect all
those
broken HTTP proxies etc., but it has clear performance benefits for
the majority of working sites.

Henri Sivonen

Feb 8, 2012, 1:20:58 AM
to dev-pl...@lists.mozilla.org
On Tue, Feb 7, 2012 at 4:35 PM, Boris Zbarsky <bzba...@mit.edu> wrote:
> Quite honestly, if they can't watch for changes they should not be using explicitly experimental features!

If a site breaks, it doesn't make a difference to the end user
perception of Firefox whether the site broke because we
changed/removed an experimental or non-experimental feature. When we
ship "experimental" features in the release channel and evangelize the
features, we set up an attractive nuisance for Web developers. It's
really our fault if Web developers use experimental features that
we've shipped and evangelized on Hacks. (If we don't want authors to
use the features, we shouldn't ship the features in release.)

James Graham of Opera summed this up on #whatwg yesterday:
'My point of view is that on the web there are two options: 1) "don't
ship" 2) "ship and make any future changes sensitive to the legacy
established by shipping". Acting like there is an option 3) "Ship and
then make backwards incompatible changes justified by the fiction that
the legacy doesn't exist because it has a funny name" is just
delusion.'
http://krijnhoetmer.nl/irc-logs/whatwg/20120207#l-853

That's why I argued in
http://hsivonen.iki.fi/vendor-prefixes/#whattodo that we should first
experiment without letting features reach the release channel until we
are OK with stopping making changes that break sites too much, and
when we do let features reach the release channel and content that
depends on the feature has emerged, we should refrain from breaking
changes (i.e. say "No" to late bikeshedding at the W3C).

Are we really going to treat all the APIs that the WebAPI team is
making as "experimental" and be OK with breaking them in the future?
Even if Babelfish starts using some of them? I think we shouldn't ship
those APIs before we are OK with supporting them "forever". If
shipping B2G poses a schedule constraint that requires shipping, so be
it. I think we should ship unprefixed and then live with what has been
shipped and evolve the features within the constraints posed by
content that will have emerged to depend on the API. (Just like we
evolve various now-old APIs within the constraints posed by legacy.)

Robert O'Callahan

Feb 8, 2012, 2:23:51 AM
to Henri Sivonen, dev-pl...@lists.mozilla.org
On Wed, Feb 8, 2012 at 7:20 PM, Henri Sivonen <hsiv...@iki.fi> wrote:

> James Graham of Opera summed this up on #whatwg yesterday:
> 'My point of view is that on the web there are two options: 1) "don't
> ship" 2) "ship and make any future changes sensitive to the legacy
> established by shipping". Acting like there is an option 3) "Ship and
> then make backwards incompatible changes justified by the fiction that
> the legacy doesn't exist because it has a funny name" is just
> delusion.'
> http://krijnhoetmer.nl/irc-logs/whatwg/20120207#l-853
>

That sounds reasonable but there's a spectrum between 1 and 2. There are
degrees of shipping.

> That's why I argued in
> http://hsivonen.iki.fi/vendor-prefixes/#whattodo that we should first
> experiment without letting features reach the release channel until we
> are OK with stopping making changes that break sites too much, and when we
> do let features reach the release channel and content that depends on
> the feature has emerged, we should refrain from breaking changes (i.e.
> say "No" to late bikeshedding at the W3C).
>

For SVG OpenType fonts my current plan is to ship the feature preffed off
by default. People who want to experiment with it can easily turn it on in
any new-enough Firefox build. But I can't see sites creating a dependency
on it by requiring all their users to flip about:config flags.
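
To spell out the mechanics (the pref name and helper are illustrative,
not the real implementation):

  // The feature code ships in release builds but stays dormant
  // unless the user flips an about:config-style pref.
  #include <map>
  #include <string>

  static std::map<std::string, bool> gPrefs = {
      {"svg.opentype-fonts.enabled", false},  // off by default
  };

  bool SvgOpenTypeFontsEnabled() {
    auto it = gPrefs.find("svg.opentype-fonts.enabled");
    return it != gPrefs.end() && it->second;
  }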

I think that'll work great for some features. However, for some other
features there's a competitiveness problem. If another browser is shipping
a feature on-by-default, and we don't ship or ship off-by-default, that
will negatively affect us.

Rob
--
"If we claim to be without sin, we deceive ourselves and the truth is not
in us. If we confess our sins, he is faithful and just and will forgive us
our sins and purify us from all unrighteousness. If we claim we have not
sinned, we make him out to be a liar and his word is not in us." [1 John
1:8-10]

Henri Sivonen

Feb 8, 2012, 3:10:54 AM
to dev-pl...@lists.mozilla.org
On Wed, Feb 8, 2012 at 9:23 AM, Robert O'Callahan <rob...@ocallahan.org> wrote:
> On Wed, Feb 8, 2012 at 7:20 PM, Henri Sivonen <hsiv...@iki.fi> wrote:
>>
>> James Graham of Opera summed this up on #whatwg yesterday:
>> 'My point of view is that on the web there are two options: 1) "don't
>> ship" 2) "ship and make any future changes sensitive to the legacy
>> established by shipping". Acting like there is an option 3) "Ship and
>> then make backwards incompatible changes justified by the fiction that
>> the legacy doesn't exist because it has a funny name" is just
>> delusion.'
>> http://krijnhoetmer.nl/irc-logs/whatwg/20120207#l-853
>
>
> That sounds reasonable but there's a spectrum between 1 and 2. There are
> degrees of shipping.

Do you mean having the feature in a release build but behind an
off-by-default pref? (If you meant that, the spectrum is heavily
quantized.)

>> That's why I argued in
>> http://hsivonen.iki.fi/vendor-prefixes/#whattodo that we should first
>> experiment without letting features reach the release channel until we
>> are OK with stopping making changes that break sites too much, and when we
>> do let features reach the release channel and content that depends on
>> the feature has emerged, we should refrain from breaking changes (i.e.
>> say "No" to late bikeshedding at the W3C).
>
>
> For SVG OpenType fonts my current plan is to ship the feature preffed off by
> default. People who want to experiment with it can easily turn it on in any
> new-enough Firefox build. But I can't see sites creating a dependency on it
> by requiring all their users to flip about:config flags.

Cool. Thank you for doing it this way.

> I think that'll work great for some features. However, for some other
> features there's a competitiveness problem. If another browser is shipping a
> feature on-by-default, and we don't ship or ship off-by-default, that will
> negatively affect us.

In that case, we should ship on-by-default, too. And then refrain from
making changes that break existing content too much and block
bikeshedding that'd result in breaking changes at the W3C. If a
feature is at the point where not having it is a competitiveness
problem, I think it should be taken as an indication that it's part of
the platform and shouldn't be subjected to breaking changes. It sucks
terribly for a feature to reach that stage with prefixes, so to be
safe, features shouldn't have prefixes to begin with--just like
innerHTML wasn't prefixed and XHR wasn't prefixed from Mozilla
onwards. (The ActiveX prefixing of XHR is still hurting MS and IE7 has
been out for quite a while.)

(If we give the W3C prefixes as an excuse to bikeshed, the W3C *will*
bikeshed. The CSS WG is discussing commas vs. spaces as separators in
syntax that's already out there and the W3C flavor of the Fullscreen
API adjusts the case of the letter 'S' in "FullScreen"! Meanwhile, we
already have the paragon of feature consistency and design that is SVG
as part of the Web platform.)

Nicholas Nethercote

Feb 8, 2012, 4:10:00 AM
to Henri Sivonen, dev-pl...@lists.mozilla.org
On Wed, Feb 8, 2012 at 7:10 PM, Henri Sivonen <hsiv...@iki.fi> wrote:
>>
>> That sounds reasonable but there's a spectrum between 1 and 2. There are
>> degrees of shipping.
>
> Do you mean having the feature in a release build but behind an
> off-by-default pref? (If you meant that, the spectrum is heavily
> quantized.)

Another option is to enable it in pre-releases and then disable it
before release. That's being done for some experimental JS features
(ES6 Set and Map), see
https://bugzilla.mozilla.org/show_bug.cgi?id=723219.

Nick

Robert O'Callahan

Feb 8, 2012, 4:39:09 AM
to Henri Sivonen, dev-pl...@lists.mozilla.org
On Wed, Feb 8, 2012 at 9:10 PM, Henri Sivonen <hsiv...@iki.fi> wrote:

> Do you mean having the feature in a release build but behind an
> off-by-default pref? (If you meant that, the spectrum is heavily
> quantized.)
>

It is quantized, but there are quite a few options along the spectrum.

> > I think that'll work great for some features. However, for some other
> > features there's a competitiveness problem. If another browser is
> > shipping a feature on-by-default, and we don't ship or ship
> > off-by-default, that will negatively affect us.
>
> In that case, we should ship on-by-default, too. And then refrain from
> making changes that break existing content too much and block
> bikeshedding that'd result in breaking changes at the W3C. If a
> feature is at the point where not having it is a competitiveness
> problem, I think it should be taken as an indication that it's part of
> the platform and shouldn't be subjected to breaking changes.
>

That makes some sense if the design of the feature is fairly well settled.
Where there are competing visions for what the feature's API should be, it's
not so clear.

Henri Sivonen

Feb 8, 2012, 6:33:34 AM
to dev-pl...@lists.mozilla.org
On Wed, Feb 8, 2012 at 11:39 AM, Robert O'Callahan <rob...@ocallahan.org> wrote:
> On Wed, Feb 8, 2012 at 9:10 PM, Henri Sivonen <hsiv...@iki.fi> wrote:
>>
>> Do you mean having the feature in a release build but behind an
>> off-by-default pref? (If you meant that, the spectrum is heavily
>> quantized.)
>
> It is quantized, but there are quite a few options along the spectrum.

What options are there besides being in the release channel and
enabled, being in the release channel but disabled unless a pref is
flipped, and not being in the release channel? I guess in theory, a
feature that's in development
could be enabled for a handful of origins whose owners sign a contract
that they agree to maintain their stuff in a timely manner.

(Obviously there are various places where a feature can be when it is
"not in the release channel" including Nightly, Aurora, Beta, other
branch, try builds, etc.)

>> > I think that'll work great for some features. However, for some other
>> > features there's a competitiveness problem. If another browser is
>> > shipping a feature on-by-default, and we don't ship or ship
>> > off-by-default, that will negatively affect us.
>>
>> In that case, we should ship on-by-default, too. And then refrain from
>> making changes that break existing content too much and block
>> bikeshedding that'd result in breaking changes at the W3C. If a
>> feature is at the point where not having it is a competitiveness
>> problem, I think it should be taken as an indication that it's part of
>> the platform and shouldn't be subjected to breaking changes.
>
> That makes some sense if the design of the feature is fairly well settled.
> Where there are competing visions for what the feature's API should be, it's
> not so clear.

Do you have an example where
1) competitiveness requires having a feature on the conceptual level
AND
2) the competitiveness requirement is satisfied by shipping an API
for the concept that's different from what the competitors causing the
pressure are shipping for the same concept (as opposed to shipping the
same API as what the others are shipping)
AND
3) prefixes are helping the situation somehow
?

Did prefixes help or are they helping with database or audio APIs?

And more concretely on the topic of breaking sites: Is the Audio Data
API going to be removed if that vision doesn't carry? If so, with what
kind of breakage avoidance process?

Ian Hickson

Feb 8, 2012, 3:49:04 PM
to Henri Sivonen, dev-pl...@lists.mozilla.org
On Wed, 8 Feb 2012, Henri Sivonen wrote:
> On Tue, Feb 7, 2012 at 4:35 PM, Boris Zbarsky <bzba...@mit.edu> wrote:
> > Quite honestly, if they can't watch for changes they should not be using explicitly experimental features!
>
> If a site breaks, it doesn't make a difference to the end user
> perception of Firefox whether the site broke because we changed/removed
> an experimental or non-experimental feature. When we ship "experimental"
> features in the release channel and evangelize the features, we set up
> an attractive nuisance for Web developers. It's really our fault if Web
> developers use experimental features that we've shipped and evangelized
> on Hacks. (If we don't want authors to use the features, we shouldn't
> ship the features in release.)
>
> James Graham of Opera summed this up on #whatwg yesterday: 'My point of
> view is that on the web there are two options: 1) "don't ship" 2) "ship
> and make any future changes sensitive to the legacy established by
> shipping". Acting like there is an option 3) "Ship and then make
> backwards incompatible changes justified by the fiction that the legacy
> doesn't exist because it has a funny name" is just delusion.'
> http://krijnhoetmer.nl/irc-logs/whatwg/20120207#l-853
>
> That's why I argued in http://hsivonen.iki.fi/vendor-prefixes/#whattodo
> that we should first experiment without letting features reach the
> release channel until we are OK with stopping making changes that break
> sites too much, and when we do let features reach the release channel and
> content that depends on the feature has emerged, we should refrain from
> breaking changes (i.e. say "No" to late bikeshedding at the W3C).
>
> Are we really going to treat all the APIs that the WebAPI team is making
> as "experimental" and be OK with breaking them in the future? Even if
> Babelfish starts using some of them? I think we shouldn't ship those
> APIs before we are OK with supporting them "forever". If shipping B2G
> poses a schedule constraint that requires shipping, so be it. I think we
> should ship unprefixed and then live with what has been shipped and
> evolve the features within the constraints posed by content that will
> have emerged to depend on the API. (Just like we evolve various now-old
> APIs within the constraints posed by legacy.)

I agree with Henri here. FWIW, in the specs I edit, I make the assumption
that once something has shipped, I have to keep the spec compatible with
it. That's also why I try to discourage people from using prefixes for
things that are already somewhat specced (I just make the spec follow the
implementations if they fill in the gaps before I get around to it).

Sometimes it's not perfect -- pushState() is an example of where the
browsers didn't do the same thing. That was my fault (I wrote a bad spec
and didn't realise it before implementors shipped incompatible fixes to my
mess). I think we ended up ok though (the spec lined up with the better
implementations, and the sites weren't too badly broken, all things
considered). localStorage is another example where we ended up with a
problem due to early shipping. Generally though it goes much better. The
problems with the above approach are far fewer than the problems caused by
shipping incompatible prefixed implementations in multiple browsers and
then having the spec define yet another solution later.

--
Ian Hickson U+1047E )\._.,--....,'``. fL
http://ln.hixie.ch/ U+263A /, _.. \ _\ ;`._ ,.
Things that are impossible just take longer. `._.-(,_..'--(,_..'`-.;.'

Robert O'Callahan

Feb 8, 2012, 4:22:23 PM
to Henri Sivonen, dev-pl...@lists.mozilla.org
On Thu, Feb 9, 2012 at 12:33 AM, Henri Sivonen <hsiv...@iki.fi> wrote:

> What options besides in the release channel and enabled, in the
> release channel but disabled unless pref flipped and not in the
> release channel? I guess in theory, a feature that's in development
> could be enabled for a handful of origins whose owners sign a contract
> that they agree to maintain their stuff in a timely manner.
>
> (Obviously there are various places where a feature can be when it is
> "not in the release channel" including Nightly, Aurora, Beta, other
> branch, try builds, etc.)
>

Yes, those options.

This part of the discussion doesn't matter, let's drop it.

> Do you have an example where
> 1) competitiveness requires having a feature on the conceptual level
> AND
> 2) the competitiveness requirement is satisfied by shipping an API
> for the concept that's different from what the competitors causing the
> pressure are shipping for the same concept (as opposed to shipping the
> same API as what the others are shipping)
> AND
> 3) prefixes are helping the situation somehow
> ?
>
> Did prefixes help or are they helping with database or audio APIs?
>

I don't know yet. I'm not arguing that prefixes are a good thing here. I'm
saying that whether to ship an experimental feature off by default may be
hard to determine when it competes with something that's on by default, but
the competitor(s) hasn't already settled the issue by grabbing the
namespace.

> And more concretely on the topic of breaking sites: Is the Audio Data
> API going to be removed if that vision doesn't carry? If so, with what
> kind of breakage avoidance process?
>

The Audio Data API is definitely not our long-term plan. It will be
deprecated, but I haven't given any thought to whether or how it should be
removed.

Henri Sivonen

Feb 21, 2012, 2:23:41 AM
to dev-apps...@lists.mozilla.org, dev-platform
CCing dev-platform, since this is a Gecko change.

On Tue, Feb 21, 2012 at 9:02 AM, Asa Dotzler <a...@mozilla.org> wrote:
> On 2/20/2012 7:39 PM, Daniel Cater wrote:
>>
>> On Tuesday, 21 February 2012 01:27:14 UTC, Asa Dotzler  wrote:
>>>
>>> And we broke Google. Nice. And for what user benefit?
>>
>> - Moves inline with Fennec (minimises web-visible differences between
>> mobile and desktop)
>
> to what end? we're explicitly differentiating the fennec UA from desktop.

This goal could be achieved by having Fennec and desktop Aurora and
Nightly all say Gecko/20100101.

>> - Reduces fingerprint-ability by removing the minor version
>
> fingerprinting is the least effective and least used method for user
> tracking on the web. it's almost laughable how much attention this gets when
> it's simply not a problem today.

Minor version could have been hidden while keeping the frozen
Gecko/20100101. Obviously, removing the minor version can't break us
worse than shipping a normal non-chemspill release.

>> - Makes testing on trunk and Aurora more representative of beta and
>> release by removing the "a1" string
>
> Where has this bit us in the past? How many release users should we
> sacrifice to solve this problem in this way?

Removing the "a1" part is a pure win. Obviously, it can't break us
worse than shipping a release, since we don't say "a1" in release
builds. However, not having the "a1" or "a2" part there means that
we'd test what we are going to ship all the way from Nightly through
to Release.

>> - Saves bytes (can save RTTs due to packet boundaries)
>
> Got data? How many users will notice an increase in performance? How does
> that weigh against the number of users it will cost us? Without data, that's
> a tough selling point, IMO.

Gecko/13.0 is 4 bytes shorter than Gecko/20100101.

>> - Removes the potentially confusing string "20100101"
>
> Confusing to whom?

(Personally, I think this is a very unpersuasive argument.)

>> - Plans for the long-term by allowing future user-agents to be more
>> succinct
>
> What does succinctness buy anyone?

Fewer bytes per request. But if we are ever going to go for that kind
of succinctness, we'll need to have the guts to attack the
"Mozilla/5.0" part. That's the most dead weight bytes up for grabs.

>> - Gets sites to fix sniffing that is already broken (perhaps even by
>> removing it entirely thus providing benefits to more than just desktop
>> Firefox users)
>
> And there are no better ways to accomplish this?

We could make all Gecko builds of *any* kind say Gecko/20100101 and
keep that part frozen forever, make no kinds of builds say "a1" or
"a2" and make no builds report the chemspill version digit in the UA
string. (If this was my decision, that's what I'd do knowing what I
now know, because I believe we can never get rid of the rv: part
without causing breakage at "definitely not worth it" levels.)

Gijs Kruitbosch

Feb 21, 2012, 5:51:58 AM
to Asa Dotzler
On 21/02/2012 07:56 AM, Asa Dotzler wrote:
> <snip>
> This seems like an awful lot of effort for little to no actual benefit. We won't
> know the long tail of sites that break. Even our release audience won't tell us
> that. They'll simply walk away when a site that's important to them stops working.
>
> We're not in 2006 any more. Many users have multiple very capable browsers on
> their machines and if we make even one important-to-that-user site break, Chrome
> and IE are just a click away. Why on earth are we now deciding that breaking
> sites and driving users away is OK. If there was ever a time to back off from
> these efforts and instead focus on adding compatibility rather than killing it,
> now is that time.
>
> This whole effort is misguided IMO. There are far more important things to be
> working on like *increasing* web compatibility. We don't have the users to spare
> on wild goose chases like this.
>
> - A

All of these changes were discussed very, very extensively on these newsgroups,
including (as Dao pointed out) lots of data to motivate these changes (see the
archives for September-February (last post in relevant threads was on 2/2)).
Ironically, AIUI one of the reasons to change the UA string *was* compatibility,
especially for mobile.

I feel it would have been better if you gave feedback then. If you want to give
feedback now, it would be nice if you made more of an effort to understand the
reasons for the changes and how they were decided upon, rather than implying
that the people involved are actively trying to break web compatibility 'for
little to no actual benefit' when, reading the past threads, that just isn't the
case.

Gijs

Gervase Markham

Feb 21, 2012, 9:47:36 AM
to
On 21/02/12 07:02, Asa Dotzler wrote:
>> - Moves inline with Fennec (minimises web-visible differences between
>> mobile and desktop)
>
> to what end? we're explicitly differentiating the fennec UA from desktop.

In specific ways ("Mobile" vs "Tablet" vs another platform identifier).
In all other ways, we should be identical, because we are the same
browser. That's one of our big selling points vs. the multiple
fragmented WebKits.

As others have said, it would be nice if you didn't imply with every
message that anyone involved in this work is just tinkering randomly
like the Chaos Monkey.

>> - Reduces fingerprint-ability by removing the minor version
>
> fingerprinting is the least effective and least used method for user
> tracking on the web. it's almost laughable how much attention this gets
> when it's simply not a problem today.

The privacy team care about it, and so decisions about the UA are taken
with that in mind. If you think we are wrong-headed here, take it up
with them.

>> - Makes testing on trunk and Aurora more representative of beta and
>> release by removing the "a1" string
>
> Where has this bit us in the past? How many release users should we
> sacrifice to solve this problem in this way?

Now you are just ranting. The change he refers to here doesn't affect
release builds in the slightest.

>> - Saves bytes (can save RTTs due to packet boundaries)
>
> Got data?

We discussed the impact of this with the networking team, yes.

> I don't consider any of these items worth losing unknowable numbers of
> users.

By your argument, we would never make any changes _at_all_ to our web
platform, because any change might break something somewhere which would
cause an unknowable number of our users to hit their pain threshold and
depart. That seems unlikely to be a productive way forward.

Can we instead consider each change on its merits, based on the data
we've managed to gather?

Gerv

Gervase Markham

Feb 21, 2012, 9:48:47 AM
to Henri Sivonen
On 21/02/12 07:23, Henri Sivonen wrote:
> Fewer bytes per request. But if we are ever going to go for that kind
> of succinctness, we'll need to have the guts to attack the
> "Mozilla/5.0" part. That's the most dead weight bytes up for grabs.

We have data which suggests we might be able to do that. We probably
need more, but it's possible.

Gerv

Asa Dotzler

Feb 21, 2012, 12:11:37 PM
to
On 2/21/2012 6:47 AM, Gervase Markham wrote:
> On 21/02/12 07:02, Asa Dotzler wrote:

>> fingerprinting is the least effective and least used method for user
>> tracking on the web. it's almost laughable how much attention this gets
>> when it's simply not a problem today.
>
> The privacy team care about it, and so decisions about the UA are taken
> with that in mind. If you think we are wrong-headed here, take it up
> with them.

Yes. I think we are completely wrong-headed here. Why would anyone
anywhere go to the effort (outside of the EFF) to set up a massively
difficult and horribly inaccurate tracking system based on
fingerprinting when they can just drop a cookie on your system?

> By your argument, we would never make any changes _at_all_ to our web
> platform, because any change might break something somewhere which would
> cause an unknowable number of our users to hit their pain threshold and
> depart. That seems unlikely to be a productive way forward.

No, that's not my argument. There are plenty of changes where there are
obvious user/developer benefits that we don't make because they might
break parts of the Web. These changes don't have, IMO, obvious user and
developer benefits and we're breaking sites and losing users over them.

Shaving a few bits of entropy out of our headers and UA to defeat a
tracking system that isn't used anywhere for anything is not the same
as fixing a longstanding cross-browser bit of incompatible HTML or CSS
to delight developers everywhere. Tackling the latest EFF bogeyman at
the expense of users is something entirely different.

- A

Christopher Blizzard

Feb 21, 2012, 4:05:44 PM
to Henri Sivonen, dev-platform, dev-apps...@lists.mozilla.org
I have two very specific complaints about how the UA change is being
handled:

First - and most important - I don't understand why we're doing this
right now given the number of other more important things that we have
going on. It breaks web sites, creating a really bad experience for our
users. It's just borrowing trouble. If we want to invest in privacy
let's implement basically anything else on this list:
https://wiki.mozilla.org/Privacy/Features - basically anything in there
will help far more than shaving fingerprinting entropy off our UA
string.

Second, on approach: UA changes are acceptable, but they need to be done
in a very deliberate fashion. The last time we did this we gave lots of
warning, did a lot of early outreach and clearly documented what we were
doing. We also kept a lot of the string because it maintained
compatibility with a lot of sites and UA parsers. (Hence rv:<ver>
instead of Gecko/<ver>!) The final UA string document we did for
Firefox 4 was done 6 months before the release and our beta population
during that time was far larger than our Aurora or Beta population today
so the impact on real-world web sites was far greater and the dark
matter we can't see had time to adjust as well.

What I've seen so far doesn't include a coordinated outreach plan or a
time frame for people to adjust. "Surprise, here comes Firefox 13!"
isn't enough.

--Chris

Asa Dotzler

Feb 21, 2012, 4:21:04 PM
to
On 2/21/2012 6:47 AM, Gervase Markham wrote:
> On 21/02/12 07:02, Asa Dotzler wrote:

>>> - Saves bytes (can save RTTs due to packet boundaries)
>>
>> Got data?
>
> We discussed the impact of this with the networking team, yes.

I'd like to learn more about this. Where is this data? What are the
actual performance benefits we get from this change today?

- A

Matt Brubeck

Feb 21, 2012, 4:29:24 PM
to Christopher Blizzard
On 02/21/2012 01:05 PM, Christopher Blizzard wrote:
> The final UA string document we did for Firefox
> 4 was done 6 months before the release and our beta population during
> that time was far larger than our Aurora or Beta population today so the
> impact on real-world web sites was far greater and the dark matter we
> can't see had time to adjust as well.

Actually, the aurora+beta population now is over 2x larger than the beta
population in September 2010, when the Firefox 4 UA changes landed.
(But you are right that the beta period was much *longer* at that time
than it is today.)

Christopher Blizzard

Feb 21, 2012, 4:33:13 PM
to Matt Brubeck, dev-pl...@lists.mozilla.org
Apologies, yes, that's what I meant.

--Chris

Dao

Feb 21, 2012, 4:55:43 PM
to
On 21.02.2012 22:05, Christopher Blizzard wrote:
> We also kept a lot of the string because it maintained
> compatibility with a lot of sites and UA parsers. (Hence rv:<ver>
> instead of Gecko/<ver>!)

Back then the idea was that we could introduce Gecko/<ver> and remove
rv:<ver> concurrently. This is kind of crazy and not being considered
right now.

> What I've seen so far doesn't include a coordinated outreach plan or a
> time frame for people to adjust. "Surprise, here comes Firefox 13!"
> isn't enough.

The situation is different from one and a half years ago. We're not
committed to any particular release. We want to be able to back this out
from release branches as long as we think it's necessary. Once we're
settled on a release -- be it aurora, central or a future train at that
time -- we can blog about it.

Patrick McManus

Feb 21, 2012, 4:59:43 PM
to Asa Dotzler, dev-pl...@lists.mozilla.org
We talked about this in the last couple of weeks right here on
dev-platform in the context of Accept-Charset.

To summarize: fewer bytes are always better for performance (mostly
due to subtle window-management issues, which are extremely painful
because they are latency-scaled instead of bandwidth-scaled), but a
handful of header bytes (especially redundant header bytes, such as
fixed-value headers that can be compressed away in SPDY) probably
won't give you a performance improvement worth driving a major interop
decision around.

Something like removing Accept-Charset has more value (it's a bigger
change in terms of bytes, I believe) and probably a lower cost (some
folks argue it's unused dead weight - nobody says that about
User-Agent, though many wish it didn't exist :) ) and is therefore
more defensible on performance grounds.

The content of UA should not be driven in large part by networking
concerns as long as it stays within reasonable bounds. You'll note that
Jason, Biesi, myself, Honza, Nick, Steve, Brian, and Michal aren't
driving this discussion - and we care a lot about networking
performance! My advice is just understand that smaller is better, but
given the variability in sizes of things like URIs and cookies it really
is relatively small potatoes compared to the interop issues.

-P

Christopher Blizzard

Feb 21, 2012, 5:10:27 PM
to Dao, dev-pl...@lists.mozilla.org
On 2/21/2012 1:55 PM, Dao wrote:
> On 21.02.2012 22:05, Christopher Blizzard wrote:
>> We also kept a lot of the string because it maintained
>> compatibility with a lot of sites and UA parsers. (Hence rv:<ver>
>> instead of Gecko/<ver>!)
>
> Back then the idea was that we could introduce Gecko/<ver> and remove
> rv:<ver> concurrently. This is kind of crazy and not being considered
> right now.

But we didn't because replacing a date (yyyymmdd...) with a string that
includes version numbers (x.x) wasn't compatible with a lot of UA
sniffers. I doubt very much that people have changed their parsers to
support anything other than rv:<version>. That's why we wanted to drop
the Gecko/XX string over the long term. (I'll bet people still sniff on
Gecko/* to tell Firefox from other browsers, though!)

>
>> What I've seen so far doesn't include a coordinated outreach plan or a
>> time frame for people to adjust. "Surprise, here comes Firefox 13!"
>> isn't enough.
>
> The situation is different from one and a half years ago. We're not
> committed to any particular release. We want to be able to back this
> out from release branches as long as we think it's necessary. Once
> we're settled on a release -- be it aurora, central or a future train
> at that time -- we can blog about it.

That's not enough time. That's a 12-week cycle, not 6 months, and a big
UA change has huge impact. That's why we did it for Firefox 4. It was
a big release for us with a big runway, had a lot of people looking at it
and included many other things that would require explicit compatibility
testing.

And in the larger sense the train model doesn't mean we don't need to
plan large things out for more than just the train release cycle. It
just means we have graduated risk for users and don't have to worry (as
much) about supporting other releases. For a big UA change we need to
be planning for a time horizon longer than just one release cycle.

--Chris

Asa Dotzler

unread,
Feb 21, 2012, 5:12:38 PM2/21/12
to Sid Stamm, Alex Fowler, dev-pl...@lists.mozilla.org
On 2/21/2012 1:59 PM, Patrick McManus wrote:
> The content of UA should not be driven in large part by networking
> concerns as long as it stays within reasonable bounds. You'll note that
> Jason, Biesi, myself, Honza, Nick, Steve, Brian, and Michal aren't
> driving this discussion - and we care a lot about networking
> performance! My advice is to just understand that smaller is better, but
> given the variability in sizes of things like URIs and cookies it really
> is relatively small potatoes compared to the interop issues.

Thank you for the summary and the clear explanation of Networking's
performance priorities, Patrick. This was very helpful information.

So, what I'm learning is that Networking doesn't think this is a big win
or worth the interoperability pain.

I'd like to hear from our Privacy people, though I think it's safe to
assume they have a lot bigger fish to fry than defeating EFF's
fingerprinting demonstration. (see
https://wiki.mozilla.org/Privacy/Features as Chris Blizzard noted
upthread.)

Sid and/or Alex, can you comment on the importance of addressing
fingerprintability as traded against website compatibility?

That leaves unifying Mobile and Desktop User Agents as the only other
serious driving force for these changes. Who represents that concern? Am
I missing anything else?

- A

Dao

unread,
Feb 21, 2012, 5:26:15 PM2/21/12
to
On 21.02.2012 23:10, Christopher Blizzard wrote:
> But we didn't because replacing a date (yyyymmdd...) with a string that
> includes version numbers (x.x) wasn't compatible with a lot of UA
> sniffers.

How did you determine this at that time?

> I doubt very much that people have changed their parsers to
> support anything other than rv:<version>.

They don't need to, as we won't remove rv: anytime soon. Bug 729089 is
my midterm goal.

> That's why we wanted to drop
> the Gecko/XX string over the long term. (I'll bet people still sniff on
> Gecko/* to tell Firefox from other browsers, though!)

Indeed, you can bet on that, so that's not being considered either.

>> The situation is different from one and a half years ago. We're not
>> committed to any particular release. We want to be able to back this
>> out from release branches as long as we think it's necessary. Once
>> we're settled on a release -- be it aurora, central or a future train
>> at that time -- we can blog about it.
>
> That's not enough time. That's a 12-week cycle, not 6 months, and a big
> UA change has huge impact. That's why we did it for Firefox 4. It was a
> big release for us with a big runway, had a lot of people looking at it and
> included many other things that would require explicit compatibility
> testing.
>
> And in the larger sense the train model doesn't mean we don't need to
> plan large things out for more than just the train release cycle. It
> just means we have graduated risk for users and don't have to worry (as
> much) about supporting other releases. For a big UA change we need to be
> planning for a time horizon longer than just one release cycle.

I wrote "or a future train". This is completely in our hands. There's no
point in committing to a release _now_, though, as the patch didn't bake
long enough on central, let alone on a release branch.

Matt Brubeck

unread,
Feb 21, 2012, 5:27:55 PM2/21/12
to Asa Dotzler
On 02/21/2012 02:12 PM, Asa Dotzler wrote:
> I'd like to hear from our Privacy people, though I think it's safe to
> assume they have a lot bigger fish to fry than defeating EFF's
> fingerprinting demonstration.

Anyway, this change doesn't actually remove (or add) any bits of entropy
from the UA header, and so it has zero impact on fingerprinting.

(Neither the frozen "20100101" nor the major version adds any
information that's not present in other parts of the header, so neither
helps attackers tell one Firefox 13 user from another.)
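
(For concreteness, the strings at issue look approximately like this --
Windows shown; the platform part varies:

    Before: Mozilla/5.0 (Windows NT 6.1; rv:13.0) Gecko/20100101 Firefox/13.0
    After:  Mozilla/5.0 (Windows NT 6.1; rv:13.0) Gecko/13.0 Firefox/13.0

"13.0" in the Gecko token duplicates what rv: and Firefox/ already say,
and "20100101" is identical for every user, so neither variant helps
distinguish one user from another.)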

Matt Brubeck

unread,
Feb 21, 2012, 5:42:17 PM2/21/12
to Christopher Blizzard
On 02/21/2012 02:10 PM, Christopher Blizzard wrote:
> On 2/21/2012 1:55 PM, Dao wrote:
>> The situation is different from one and a half years ago. We're not
>> committed to any particular release. We want to be able to back this
>> out from release branches as long as we think it's necessary. Once
>> we're settled on a release -- be it aurora, central or a future train
>> at that time -- we can blog about it.
>
> That's not enough time. That's a 12-week cycle, not 6 months, and a big
> UA change has huge impact. That's why we did it for Firefox 4. It was a
> big release for us with a big runway, had a lot of people looking at it and
> included many other things that would require explicit compatibility
> testing.

Note that when we shipped the UA changes in Firefox 4.0b6 and blogged
about them (September 2010), we were scheduled to release Firefox 4 less
than 12 weeks later (November 2010).

> And in the larger sense the train model doesn't mean we don't need to
> plan large things out for more than just the train release cycle. It
> just means we have graduated risk for users and don't have to worry (as
> much) about supporting other releases. For a big UA change we need to be
> planning for a time horizon longer than just one release cycle.

Isn't this exactly what Dao just proposed? "Backing this out from
release branches" means we can restrict it only to Nightly users (or
Nightly + Aurora, or whatever we choose) for as long as we want -- six
months, if that's what it takes. And he said that we would blog about
it "once we're settled on a release" -- and explicitly said that release
might not be the one that's currently on central, but might instead be
one in the future (i.e., more than 18 weeks out).

Dao

unread,
Feb 21, 2012, 5:48:18 PM2/21/12
to
On 21.02.2012 23:27, Matt Brubeck wrote:
> On 02/21/2012 02:12 PM, Asa Dotzler wrote:
>> I'd like to hear from our Privacy people, though I think it's safe to
>> assume they have a lot bigger fish to fry than defeating EFF's
>> fingerprinting demonstration.
>
> Anyway, this change doesn't actually remove (or add) any bits of entropy
> from the UA header, and so it has zero impact on fingerprinting.

Accept-Charset did, I think, so maybe that's what Asa has in mind.

Sid Stamm

unread,
Feb 21, 2012, 6:03:12 PM2/21/12
to Asa Dotzler, Alex Fowler
On 02/21/2012 02:12 PM, Asa Dotzler wrote:
> Sid and/or Alex, can you comment on the importance of addressing
> fingerprintability as traded against website compatibility?

Strictly privacy-wise, less entropy is _always_ better, and I support
trimming anything we send out. Fewer bytes on the wire, smaller
fingerprint in general, etc.

But in general to justify massive web breakage, we need to do far more
than take one or two tokens from the UA string. If we quit sending
Referer and dropped the UA entirely (or randomized it in its entirety),
we might be getting somewhere... but that would probably be devastating
to the web. I support these incremental changes to reduce fingerprinting,
but not too enthusiastically when the minor increment causes major
web-wide breakage.

The reality is that fingerprinting via UA is pretty easy, so trimming it
is cool. However, there are many other ways to fingerprint users that
don't involve the UA, so we're in for a serious disappointment when we
break a bunch of sites and ten more proof-of-concept fingerprinting
systems immediately emerge that don't use the UA.

And as mbrubeck says, the proposal isn't a massive leap towards moar
privacy.

-Sid

Justin Dolske

unread,
Feb 21, 2012, 8:55:08 PM2/21/12
to
On 2/21/12 3:03 PM, Sid Stamm wrote:

> But in general to justify massive web breakage, we need to do far more
> than take one or two tokens from the UA string. If we quit sending
> Referer

Can we first rename it to "Referrer" (correcting the long-standing
misspelling in the spec)? I know it adds 1 more byte, but while we're
cleaning things up...

:-)

Justin

Justin Dolske

unread,
Feb 21, 2012, 8:59:11 PM2/21/12
to
On 2/21/12 1:55 PM, Dao wrote:
> On 21.02.2012 22:05, Christopher Blizzard wrote:
>> We also kept a lot of the string because it maintained
>> compatibility with a lot of sites and UA parsers. (Hence rv:<ver>
>> instead of Gecko/<ver>!)
>
> Back then the idea was that we could introduce Gecko/<ver> and remove
> rv:<ver> concurrently. This is kind of crazy and not being considered
> right now.

But it seems like there is consideration of adding "Gecko/<ver>" now,
and removing (or freezing?) "rv:<ver>" later. What's the benefit of
shuffling this info around?

Justin

Justin Dolske

unread,
Feb 21, 2012, 9:55:12 PM2/21/12
to
On 2/21/12 2:51 AM, Gijs Kruitbosch wrote:

> All of these changes were discussed very, very extensively on these
> newsgroups, including (as Dao pointed out) lots of data to motivate
> these changes (see the archives for September-February (last post in
> relevant threads was on 2/2)). Ironically, AIUI one of the reasons to
> change the UA string *was* compatibility, especially for mobile.

The context for these discussions, as I remember it, was _only_ for
mobile. Perhaps I missed it buried within -- it wasn't really relevant
to me, seemed like bikeshedding to avoid, and I'm basically happy to let
mobile solve the exciting new challenges they encounter.

What _was not_ clear to me was that the desktop UA was going to change.
Or that there was known breakage on major sites. Or that people were
assuming we were willing to live with the breakage as a way to force
change. These are all pretty serious issues that surprised quite a few
of the people intimately involved with owning and shipping desktop
Firefox, and indicate that some communication which should have
happened apparently didn't.

Now, for better or worse, people have noticed and here we are.

Justin

Henri Sivonen

unread,
Feb 22, 2012, 1:43:14 AM2/22/12
to dev-pl...@lists.mozilla.org
On Wed, Feb 22, 2012 at 3:59 AM, Justin Dolske <dol...@mozilla.com> wrote:
> But it seems like there is consideration of adding "Gecko/<ver>" now, and
> removing (or freezing?) "rv:<ver>" later. What's the benefit of shuffling
> this info around?

I, too, would like to understand the benefit of "rv:" removal plans.
Is it about bytes? Is it about elegance?

When it comes to the UA string, I think elegance should be at the
bottom of the priority list.

It's worth noting that at least since the 2001 Netscape evangelism team
activities, sniffing rv: has been evangelized as the correct way to
get a version number for sniffing Gecko-based browsers, so if we ever
remove rv:, we risk breaking stuff that was put in place in
response to previous Gecko evangelism efforts.

See http://archive.bclary.com/lib/js/geckoGetRv.js
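
The gist of that script is something like this (a simplified sketch, not
its exact code):

    // Extract the Gecko version from the rv: token; -1 if not found.
    function geckoGetRv() {
      var match = /rv:(\d+\.\d+)/.exec(navigator.userAgent);
      return match ? parseFloat(match[1]) : -1;
    }

Remove rv: and every consumer of a helper like this silently stops
identifying new Gecko versions.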

Gervase Markham

unread,
Feb 22, 2012, 6:20:37 AM2/22/12
to
On 22/02/12 02:55, Justin Dolske wrote:
> What _was not_ clear to me was that the desktop UA was going to change.
> Or that there was known breakage on major sites. Or that people were
> assuming we were willing to live with the breakage as a way to force
> change. These are all pretty serious issues that surprised quite a few
> of the people intimately involved with owning and shipping desktop
> Firefox, and indicate that some communication which should have
> happened apparently didn't.

Entirely fair point.

Gerv

Gervase Markham

unread,
Feb 22, 2012, 6:37:21 AM2/22/12
to Justin Dolske
On 22/02/12 01:55, Justin Dolske wrote:
> Can we first rename it to "Referrer" (correcting the long-standing
> misspelling in the spec)? I know it adds 1 more byte, but while we're
> cleaning things up...
>
> :-)

I suspect the compatibility trade-offs of _that_ change would come down
squarely on the side of not fixing it...

Hey, we could send both the old value and the new value until everyone
has transitioned! That wouldn't be long, right?

Gerv


Gervase Markham

unread,
Feb 22, 2012, 6:39:15 AM2/22/12
to Asa Dotzler, Sid Stamm, Alex Fowler
On 21/02/12 22:12, Asa Dotzler wrote:
> Thank you for the summary and the clear explanation of Networking's
> performance priorities, Patrick. This was very helpful information.

Indeed; it's not quite as I had understood it before (probably my fault).

> That leaves unifying Mobile and Desktop User Agents as the only other
> serious driving force for these changes. Who represents that concern? Am
> I missing anything else?

Not sure who represents it; but the general case for it is that given
that we are the same engine on Mobile and Desktop (and that's one of our
selling points vs. Webkit), we should make the UA as similar as possible.

This can also be done, of course, as mbrubeck has pointed out, by
reversing the Gecko date change on Mobile. However, having just gone
through a long process of getting that UA agreed, perhaps we could keep
the can of worms shut for at least a short time.
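
For reference, the current divergence is roughly this (from memory, so
approximate):

    Desktop: Mozilla/5.0 (Windows NT 6.1; rv:13.0) Gecko/20100101 Firefox/13.0
    Mobile:  Mozilla/5.0 (Android; Mobile; rv:13.0) Gecko/13.0 Firefox/13.0

i.e. identical apart from the platform part and the Gecko token.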

Gerv

Gervase Markham

unread,
Feb 22, 2012, 6:41:29 AM2/22/12
to Henri Sivonen
On 22/02/12 06:43, Henri Sivonen wrote:
> I, too, would like to understand the benefit of "rv:" removal plans.
> Is it about bytes? Is it about elegance?
>
> When it comes to the UA string, I think elegance should be at the
> bottom of the priority list.

FWIW, I'm not a supporter of the removal of rv:. It's basically only
about elegance, and unlike the Gecko date (which has been frozen for two
years and so, one would hope, is no longer much of a sniffing target),
it's what we've been telling people to check for a long time.

Gerv

Dao

unread,
Feb 22, 2012, 6:48:46 AM2/22/12
to
On 22.02.2012 02:59, Justin Dolske wrote:
> On 2/21/12 1:55 PM, Dao wrote:
>> On 21.02.2012 22:05, Christopher Blizzard wrote:
>>> We also kept a lot of the string because it maintained
>>> compatibility with a lot of sites and UA parsers. (Hence rv:<ver>
>>> instead of Gecko/<ver>!)
>>
>> Back then the idea was that we could introduce Gecko/<ver> and remove
>> rv:<ver> concurrently. This is kind of crazy and not being considered
>> right now.
>
> But it seems like there is consideration of adding "Gecko/<ver>" now,
> and removing (or freezing?) "rv:<ver>" later.

I'd like to freeze it at some point, which won't break scripts in the
wild. Only sites wanting to identify new versions would need to start
looking at the Gecko token. Removing it is a possible long-term option
if it doesn't break the web, e.g. once old scripts have faded away or it
turns out they commonly don't break miserably if they can't determine
the Gecko version... something like this seems to be the case with
Mozilla/5.0 today.
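
Hypothetically, new sniffers would then do something like this instead
of parsing rv: (assuming a Gecko/<version> token, which is exactly
what's being debated here):

    // Read the version from a Gecko/<version> token rather than rv:.
    // Requiring a dotted version means the frozen 20100101 date on
    // today's desktop UA deliberately doesn't match.
    var match = /Gecko\/(\d+\.\d+)/.exec(navigator.userAgent);
    var geckoVersion = match ? parseFloat(match[1]) : null;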

> What's the benefit of shuffling this info around?

The rv: token isn't self-explanatory; people don't understand it without
reading docs. This has led to never-ending evangelism efforts. I believe
that making it easier for people to comprehend the UA string can prevent
bad sniffing that can backfire on browsers.

Dao

unread,
Feb 22, 2012, 6:54:51 AM2/22/12
to
On 22.02.2012 03:55, Justin Dolske wrote:
> On 2/21/12 2:51 AM, Gijs Kruitbosch wrote:
>
>> All of these changes were discussed very, very extensively on these
>> newsgroups, including (as Dao pointed out) lots of data to motivate
>> these changes (see the archives for September-February (last post in
>> relevant threads was on 2/2)). Ironically, AIUI one of the reasons to
>> change the UA string *was* compatibility, especially for mobile.
>
> The context for these discussions, as I remember it, was _only_ for
> mobile. Perhaps I missed it buried within -- it wasn't really relevant
> to me, seemed like bikeshedding to avoid, and I'm basically happy to let
> mobile solve the exciting new challenges they encounter.

You didn't miss it; there was no immediate implication for non-mobile.
At /some/ point, though, the platform should converge.

Kyle Huey

unread,
Feb 22, 2012, 9:25:24 AM2/22/12
to Gervase Markham, dev-pl...@lists.mozilla.org
On Wed, Feb 22, 2012 at 3:39 AM, Gervase Markham <ge...@mozilla.org> wrote:

> Not sure who represents it; but the general case for it is that given that
> we are the same engine on Mobile and Desktop (and that's one of our selling
> points vs. Webkit), we should make the UA as similar as possible.
>
> This can also be done, of course, as mbrubeck has pointed out, by
> reversing the Gecko date change on Mobile. However, having just gone
> through a long process of getting that UA agreed, perhaps we could keep the
> can of worms shut for at least a short time.
>

If we want to align a product with 400 million users and a product with a
million users (I have no idea how many users mobile has, I'm just guessing
here) why are we changing the product with 399 million more users?

- Kyle

Matt Brubeck

unread,
Feb 22, 2012, 10:29:50 AM2/22/12
to
On 02/22/2012 03:39 AM, Gervase Markham wrote:
> On 21/02/12 22:12, Asa Dotzler wrote:
>> That leaves unifying Mobile and Desktop User Agents as the only other
>> serious driving force for these changes. Who represents that concern? Am
>> I missing anything else?
>
> This can also be done, of course, as mbrubeck has pointed out, by
> reversing the Gecko date change on Mobile. [...]

Just so everyone is aware, I filed this bug to reverse the change on
mobile *if* we decide to keep it backed out on desktop:
https://bugzilla.mozilla.org/show_bug.cgi?id=729348

(Note: Please read my comments in that bug before responding.)

Christopher Blizzard

unread,
Feb 22, 2012, 1:05:06 PM2/22/12
to Kyle Huey, dev-pl...@lists.mozilla.org, Gervase Markham
On 2/22/2012 6:25 AM, Kyle Huey wrote:

> If we want to align a product with 400 million users and a product with a
> million users (I have no idea how many users mobile has, I'm just guessing
> here) why are we changing the product with 399 million more users?
>

Once again, the goals for mobile and desktop are different. For desktop
we're interested in compatibility with ourselves; on mobile we want to
be compatible with what gives us the best content. So alignment across
mobile and desktop isn't as important as something that creates the best
user experience.

--Chris

Matt Brubeck

unread,
Feb 23, 2012, 11:55:51 AM2/23/12
to
On 02/21/2012 02:12 PM, Asa Dotzler wrote:
> That leaves unifying Mobile and Desktop User Agents as the only other
> serious driving force for these changes. Who represents that concern?

We don't have any solid evidence that changing from "Gecko/DATE" to
"Gecko/VERSION" improved compatibility or evangelism for mobile. We did
find that changing to "Gecko/0" caused an insignificant increase in
getting iPhone-like content (0.002 increase in mean difflib ratio, for
the sites we sampled). I don't think mobile compatibility can be
considered a driver for this change.

Instead, the primary rationale given for changing this on mobile was
that, if we made the change on desktop *and* mobile, it would reduce
developer confusion caused by the frozen date, and would enable
long-term freezing or phasing out of the "rv" token:

https://groups.google.com/d/msg/mozilla.dev.platform/j-Lzk2c7T6A/CxRCOLdF3msJ

Therefore, if we decide not to make this change on desktop, our reason
for making the change on mobile is moot (and in fact it would instead
*increase* confusion). If we keep this change backed out on desktop, I
think we should revert it on mobile too, for the reasons I gave in bug
729348.

Henri Sivonen

unread,
Mar 5, 2012, 2:21:49 AM3/5/12
to dev-pl...@lists.mozilla.org
On Tue, Feb 7, 2012 at 4:35 PM, Boris Zbarsky <bzba...@mit.edu> wrote:
> Quite honestly, if they can't watch for changes they should not be using
> explicitly experimental features!
>
> The flip side is we should not be prefixing stuff that's no longer
> experimental, of course.

Unsurprisingly, unprefixing mozSlice broke at least one site:
https://bugzilla.mozilla.org/show_bug.cgi?id=732709

What are we doing to avoid a massive number of breakage instances like
this once we unprefix all the Web API stuff? Why aren't we shipping
the Web API stuff unprefixed from the start?

Gijs Kruitbosch

unread,
Mar 5, 2012, 6:19:08 AM3/5/12
to
On 05/03/2012 08:21 AM, Henri Sivonen wrote:
> Why aren't we shipping
> the Web API stuff unprefixed from the start?

Because the spec might still change, and we don't want an (undetectable) change
in API behaviour that is presented as being ready to be relied upon?

At least, that'd be my guess.

Gijs


Henri Sivonen

unread,
Mar 5, 2012, 7:47:28 AM3/5/12
to dev-pl...@lists.mozilla.org
On Mon, Mar 5, 2012 at 1:19 PM, Gijs Kruitbosch
<gijskru...@gmail.com> wrote:
> On 05/03/2012 08:21 AM, Henri Sivonen wrote:
>>
>> Why aren't we shipping
>> the Web API stuff unprefixed from the start?
>
>
> Because the spec might still change,

That's a self-fulfilling prophecy, because prefixes make spec writers
feel that they can still change stuff.

> and we don't want an (undetectable) change in API behaviour that is presented as being ready to be relied upon?

So let's not make undetectable changes in behavior.

Gijs Kruitbosch

unread,
Mar 5, 2012, 8:07:33 AM3/5/12
to Henri Sivonen, dev-pl...@lists.mozilla.org
On 05/03/2012 13:47 PM, Henri Sivonen wrote:
> On Mon, Mar 5, 2012 at 1:19 PM, Gijs Kruitbosch
> <gijskru...@gmail.com> wrote:
>> On 05/03/2012 08:21 AM, Henri Sivonen wrote:
>>>
>>> Why aren't we shipping
>>> the Web API stuff unprefixed from the start?
>>
>>
>> Because the spec might still change,
>
> That's a self-fulfilling prophecy, because prefixes make spec writers
> feel that they can still change stuff.

I'm not sure spec writers are particularly afraid of unprefixed names in their
desire to change specs. Are they? On the other hand, I'm even less sure that the
first thing that ships without a(n agreed) spec is optimal (see e.g. WebSQL,
text-overflow: ellipsis support in IE, etc.).

>> and we don't want an (undetectable) change in API behaviour that is presented as being ready to be relied upon?
>
> So let's not make undetectable changes in behavior.

How would you do this? Overall feature detection is Hard, even now. Want to know
if a browser supports the 'input' event? No trivial way to do that. How about
DOM* events? Same problem. The list of features which are hard to detect is
pretty long, in fact. If you're actually going to expose an API, say,
navigator.fooBar(baz, quux), I don't really think that multiplying the API in
order to make the 'version' or signature of that function (how many arguments?
what do they mean? Are there 'fake enum/flag' string/boolean values, and if so,
which ones are available?) available through some extra property is the right
thing to do (generally).

Additionally and more generally, we just had the little uproar about -webkit-*
prefixes, and several people have said web developers don't take 'experimental'
seriously enough. That's *with* the prefix. What do you think will happen if we
take it off? Today I learned that some people are still upset with 'Mozilla'
for 'killing' webSQL, and are using it anyway, because "Chrome supports it",
and "it's better" than IndexedDB. (see the comments on the last hacks.m.o
article)
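
To illustrate the 'input' case, this is the usual trick -- and exactly
the kind of check that has given wrong answers for some events in some
engines:

    // Common event-support check: test for the corresponding handler
    // property on a relevant element. Not reliable for every event.
    function supportsEvent(name, tagName) {
      var el = document.createElement(tagName || "div");
      return ("on" + name) in el;
    }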

I wrote all of the above without even mentioning IE and the unprefixed stuff it
shipped that became de facto standards, and all the horribleness that came with
it (like fake document.all support, to give a random semi-benign example).

Let's not go down that road again.

Gijs

Kyle Huey

unread,
Mar 5, 2012, 8:22:42 AM3/5/12
to Henri Sivonen, dev-pl...@lists.mozilla.org
On Sun, Mar 4, 2012 at 11:21 PM, Henri Sivonen <hsiv...@iki.fi> wrote:

> On Tue, Feb 7, 2012 at 4:35 PM, Boris Zbarsky <bzba...@mit.edu> wrote:
> > Quite honestly, if they can't watch for changes they should not be using
> > explicitly experimental features!
> >
> > The flip side is we should not be prefixing stuff that's no longer
> > experimental, of course.
>
> Unsurprisingly, unprefixing mozSlice broke at least one site:
> https://bugzilla.mozilla.org/show_bug.cgi?id=732709
>
> What are we doing to avoid a massive number of breakage instances like
> this once we unprefix all the Web API stuff? Why aren't we shipping
> the Web API stuff unprefixed from the start?
>

mozSlice was kind of a bad case ... we had rough consensus and working code
there, and changed how it worked at the 13th hour. IIRC we prefixed the
new behavior to noticeably break anyone using it, rather than silently doing
something different from what they expected.
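
For context, consumers ended up writing shims along these lines (a
hypothetical sketch, with the (start, end) semantics of the final spec):

    // Pick whichever slice variant the browser exposes; the prefixed
    // variants carried the new (start, end) behavior.
    function sliceBlob(blob, start, end) {
      var slicer = blob.slice || blob.mozSlice || blob.webkitSlice;
      return slicer.call(blob, start, end);
    }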

In general I think we should avoid prefixing things that multiple vendors
mostly agree about. A lot of the WebAPI stuff is pretty unilateral at this
point though, so prefixing it seems like a reasonable thing to do.

- Kyle

Henri Sivonen

unread,
Mar 5, 2012, 8:23:23 AM3/5/12
to dev-pl...@lists.mozilla.org
On Mon, Mar 5, 2012 at 3:07 PM, Gijs Kruitbosch
<gijskru...@gmail.com> wrote:
> On 05/03/2012 13:47 PM, Henri Sivonen wrote:
>>
>> On Mon, Mar 5, 2012 at 1:19 PM, Gijs Kruitbosch
>> <gijskru...@gmail.com>  wrote:
>>>
>>> On 05/03/2012 08:21 AM, Henri Sivonen wrote:
>>>
>>>> Why aren't we shipping
>>>> the Web API stuff unprefixed from the start?
>>>
>>> Because the spec might still change,
>>
>> That's a self-fulfilling prophesy, because prefixes make spec writers
>> feel that they can still change stuff.
>
> I'm not sure spec writers are particularly afraid of unprefixed names in
> their desire to change specs. Are they?

Reasonable spec writers are reluctant to change the semantics of
unprefixed names that Web content already depends on. (Prefixing is a
mind trick that makes spec writers less reluctant even if Web content
depended on prefixed names to the same degree.)

> On the other side, I'm even less
> sure that the first thing that ships without a(n agreed) spec is optimal

The Web is a worse-is-better system. Having a working but suboptimal
solution soon is much better than having an optimal solution a lot
later. One of the biggest problems of the CSS WG in particular is that
they worry about getting things right for the next 50 years but then
take at least 5 years to make the feature just right before
unprefixing. Then they've already deferred the feature by a tenth of
the stated expected lifespan. We need to consider the net present
utility of Web features more (by analogy with the net present value of
money).

>>> and we don't want an (undetectable) change in API behaviour that is
>>> presented as being ready to be relied upon?
>>
>> So let's not make undetectable changes in behavior.
>
> How would you do this?

By not changing behavior in ways that are undetectable, i.e. by
sticking with what got shipped once Web content depends on it and
changes would be breaking.

> I wrote all of the above without even mentioning IE and the unprefixed stuff
> it shipped that became de facto standards, and all the horribleness that
> came with it (like fake document.all support, to give a random semi-benign
> example).
>
> Let's not go down that road again.

The innerHTML and document.all road seems much better to me than the
multiple prefixed roads that we have now.

Henri Sivonen

unread,
Mar 5, 2012, 8:34:45 AM3/5/12
to dev-pl...@lists.mozilla.org
On Mon, Mar 5, 2012 at 3:22 PM, Kyle Huey <m...@kylehuey.com> wrote:
> In general I think we should avoid prefixing things that multiple vendors
> mostly agree about.  A lot of the WebAPI stuff is pretty unilateral at this
> point though, so prefixing it seems like a reasonable thing to do.

Our stated intent is to get the WebAPI stuff standardized, so I'd
expect us to believe that what we are doing will be acceptable to
other browser vendors. If there was something we already believed not
to be acceptable, we probably shouldn't ship it even with a prefix if
the stated intent is to standardize.

So it seems to me that we should err on the side of not prefixing and,
if our expectation of acceptability is wrong for some feature, deal
with that trouble for that particular thing later instead of dealing with
the unprefixing trouble for every feature.

Gijs Kruitbosch

unread,
Mar 5, 2012, 8:57:59 AM3/5/12
to
On 05/03/2012 14:23 PM, Henri Sivonen wrote:
> Reasonable spec writers are reluctant to change the semantics of
> unprefixed names that Web content already depends on. (Prefixing is a
> mind trick that makes spec writers less reluctant even if Web content
> depended on prefixed names to the same degree.)

But the web doesn't depend on them to the same degree. I think the likelihood of
API change can be approximated by:

editor brazenness * improvement offered / breakage caused

(after units are normalized, yada yada).

So if we assume that breakage declines if you prefix (as fewer sites will use
the feature) it is easier to justify smaller changes. In other words, one could
imagine a situation where a half-broken API gets fixed in the prefixed case, and
not if it were unprefixed, as the ratio of improvement to existing-web-breakage
is too small to bother. That leaves us with suboptimal APIs. This can obviously
happen in both cases, but because of the above I'd argue that APIs that start
unprefixed are at higher risk of being suboptimal, as the burden of improvement
becomes higher.

>
>> On the other side, I'm even less
>> sure that the first thing that ships without a(n agreed) spec is optimal
>
> The Web is a worse-is-better system. Having a working but suboptimal
> solution soon is much better than having an optimal solution a lot
> later. One of the biggest problems of the CSS WG in particular is that
> they worry about getting things right for the next 50 years but then
> take at least 5 years to make the feature just right before
> unprefixing. Then they've already deferred the feature by a tenth of
> the stated expected lifespan. We need to consider the net present
> utility of Web features more (by analogy with the net present value of
> money).

You made a point about DOM/JS APIs (and specifically WebAPI, in which AIUI
there is no CSS involved). I agree that some CSS things are slow. I don't
think it's a valid argument when talking about DOM/JS APIs, which AFAICT have
progressed much faster.

As a sidenote, I think even such an argument should be used to encourage a
different debate: make the CSS WG process stuff faster. By unprefixing
non-standard CSS you'd fix the symptom (prefixes because of lacking standards)
but not the cause (slow standards development). Make it a hard-and-fast rule
that any proposal should progress within 2 months from draft to whatever the
next state is, and be finalized or ditched within 6 months (where the timelines
I chose are pretty much completely arbitrary; I think this is definitely a
different debate).

> By not changing behavior in ways that are undetectable. I.e. by
> sticking to what got shipped after Web content depends on it and
> changes would be breaking.

This is hard. Have you kept track of how many revisions of the File/Blob
APIs we went through? Frankly, I had to on the consumer side of that API
(don't anymore) and by now I've lost track. There were/are at least 3
different APIs at varying points in time. Good thing they were prefixed so I
could detect them and use the appropriate ones!

>> I wrote all of the above without even mentioning IE and the unprefixed stuff
>> it shipped that became de facto standards, and all the horribleness that
>> came with it (like fake document.all support, to give a random semi-benign
>> example).
>>
>> Let's not go down that road again.
>
> The innerHTML and document.all road seems much better to me than the
> multiple prefixed roads that we have now.
>

I've seen innerHTML (wrapped through jQuery) be a source of bugs because of
parsing differences between browsers; some HTML was grokked fine by Chrome,
Firefox, Safari, Opera... and broke in IE7 (id est, produced a different DOM
structure).

For a more extreme example, when was the last time you looked at what
(F)CKEditor and various other libraries wrapping contenteditable were doing in
order to cope with the lack of consistency, or worse, tried to write a minimal
component yourself to deal with it for just your usecase? Different browsers
can't even agree on what happens if you hit 'return' (separate <p> or <br>
insertion).

I think (some degree of) consensus is essential to keeping an open web. I don't
understand why you think that any one vendor, even Mozilla, foisting an
unprefixed implementation on the web without standardizing first wouldn't ever
lead to the same problems that that method has brought with it in the past.

Gijs

Gijs Kruitbosch

unread,
Mar 5, 2012, 9:10:23 AM3/5/12
to
But believing that something is acceptable is totally not the same as it
*actually being* acceptable (or otherwise). Moreover, it being acceptable to
others is not the same as it being a good way to solve the problem. I don't
think the goal of standardizing is to get something 'accepted' by the other
people in the debate, to convince them that you're right. It's to find,
together, a good way to solve a particular problem. Different perspectives help.

I am not naive enough to believe that reality will always work out like that
without effort, but I do think that that should be the goal, and that this is a
point that Mozilla has always stood for: to act for the good of the web, rather
than corporate interests. And therefore I quite firmly believe we should get
outside input through the standardization process before unilaterally
unprefixing these APIs. I also think that standardization shouldn't take 5
years, and believe that it won't. :-)

Gijs

Jesper Kristensen

unread,
Mar 6, 2012, 12:45:53 PM3/6/12
to
On 05-03-2012 14:57, Gijs Kruitbosch wrote:
> On 05/03/2012 14:23 PM, Henri Sivonen wrote:
>> Reasonable spec writers are reluctant to change the semantics of
>> unprefixed names that Web content already depends on. (Prefixing is a
>> mind trick that makes spec writers less reluctant even if Web content
>> depended on prefixed names to the same degree.)
>
> But the web doesn't depend on them to the same degree. I think the
> likelihood of API change can be approximated by:
>
> editor brazenness * improvement offered / breakage caused
>
> (after units are normalized, yada yada).
>
> So if we assume that breakage declines if you prefix (as fewer sites
> will use the feature)

But does that assumption hold? My best guess would be: not at all. I
believe most web developers don't care whether a feature is prefixed or
not. That is what seems to have happened on the mobile web.

A couple of weeks ago, in a conversation with some of my web developer
colleagues, we came to talk about CSS gradients. Somebody said something
strange about prefixes, so I asked them what these prefixes are for.
They all said something related to browsers that did not wish to
follow standards. I then asked them if it had anything to do with the
experimental status of the feature, and the consensus was pretty
strongly "no". And these are skilled web developers. I don't think we
can assume that a prefix tells web developers anything about when to
use a feature or not.