
The purpose of binary components


Benjamin Smedberg

unread,
Jan 17, 2012, 10:30:31 AM1/17/12
to dev-pl...@lists.mozilla.org, mozilla.dev.planning group
This post was prompted by the thread "Why do we keep revising interface
ids?" in mozilla.dev.platform, and Asa's reply, which suggests that we
reconsider our current policy of breaking binary components (requiring a
recompile) in each 6-week cycle. I thought that before we got into the
details of a solution we should define the problem space.

== Background ==

The Mozilla platform has supported loadable components since the advent
of XPCOM. Those components can primarily be written in two different
languages: C++ compiled components and JavaScript components.

== Purpose ==

Loadable XPCOM components allow developers of both applications and
extensions access to the following basic functions:

* Adding new functionality to the platform available via our XPCOM
object framework.
* Extending existing functionality of the platform, for example adding a
new protocol handler.
* Replacing core functionality of existing components/services.

== Binary Versus Script Components ==

There are tradeoffs in the capabilities of C++ and JavaScript
components. JavaScript components have the following properties:

* They do not depend on specific interface layout: changes to XPCOM
interfaces which don't directly impact the specific calls made by the
component don't require any additional work and are automatically
compatible.
* They are automatically protected by the memory safety guarantees of
our JavaScript engine. This is not a 100% guarantee against crashes, but
it means that JS components are much less fragile.
* They have access to ChromeWorkers (DOM workers), which allow
multithreading via message passing that is well tested and memory-safe.
* They have access to load arbitrary binary code via ctypes. Note that
ctypes introduces potential type-unsafety and requires very careful
programming.

Binary components (written in C++) have different properties:

* They allow deep access to our platform; they can call into low-level
content APIs and other nonscriptable and private interfaces.
* They have access to some threading facilities which are not available
to JavaScript (but note ChromeWorkers above).
* The source code does not need to be made available to users.

== Current Compatibility Rules ==

Currently, JavaScript components are assumed to be compatible across
releases.

Binary XPCOM components are version-checked at startup and must match
the current version of the platform exactly. This means that binary
components must be recompiled for each 6-week release cycle.

It is possible to load binary XPCOM code which bypasses the version
check. We know of at least one case (bug 680927) where Oracle software
uses some kind of hooking to load a DLL into the Firefox process which
then uses XPCOM without any version checking (this caused a startup
crash for users of that software because the Oracle DLL did not check
for interface changes).

== Possible Alternatives ==

There are a few possible alternatives for what we could do to make using
binary components easier or harder:

A. Change nothing, leave the status quo as-is. Extensions which use
binary components and are not recompiled every 6 weeks will "break", but
they at least have the opportunity to do cool stuff. Users will
continue to be vulnerable to Oracle-type crashes.
B. Make no technical changes, but disallow binary components in release
extensions on AMO. We may allow binary components in beta or
experimental addons, with the understanding that these would need to be
updated for each release.
C. (copied from Asa's post) Only change interfaces every 3rd release (or
something like that). This would mean that extensions which use C++
components would need to be recompiled less frequently. Users would
still be exposed to Oracle-type issues, but less frequently.
D. Stop loading binary components from extensions. Continue exporting
the XPCOM functions so that application authors can continue to use
them. Users might still be exposed to Oracle-type issues.
E. Stop loading binary components completely: compile Firefox as a
static binary and stop exporting the XPCOM functions completely.

== bsmedberg's Opinion ==

I see binary components as a useful tool in very specific circumstances.
Primarily, they are a good way to prototype and experiment with new
features which require mucking about with Mozilla internals. But I tend
to think that we should discourage their use in any production
environment, including perhaps disallowing them on AMO. I tend to think
we should consider option "B".

--BDS

Henri Sivonen

unread,
Jan 17, 2012, 10:51:36 AM1/17/12
to mozilla.dev.planning group
On Tue, Jan 17, 2012 at 5:30 PM, Benjamin Smedberg
<benj...@smedbergs.us> wrote:
> B. Make no technical changes, but disallow binary components in release
> extensions on AMO. We may allow binary components in beta or experimental
> addons, with the understanding that these would need to be updated for each
> release.

If binary components were for experimentation only, wouldn't it be
simpler to do experimentation on an experimental branch so that the
whole app is compiled with the experiment baked in?

> D. Stop loading binary components from extensions. Continue exporting the
> XPCOM functions so that application authors can continue to use them. Users
> might still be exposed to Oracle-type issues.

Who are "application authors" in this case? Authors of XULRunner apps?
Don't authors of XULRunner apps pretty much end up compiling XULRunner
themselves and, therefore, could bake their code into a static binary
of the type mentioned in option E?

> E. Stop loading binary components completely: compile Firefox as a static
> binary and stop exporting the XPCOM functions completely.

It seems to me that options other than A constrain what can be done in
a pragmatic way (either on the extension side in cases B and D or on
the Firefox side in case C) so much that we might as well choose
option E if we don't stick with option A.

--
Henri Sivonen
hsiv...@iki.fi
http://hsivonen.iki.fi/

Leander Bessa

unread,
Jan 17, 2012, 10:54:29 AM1/17/12
to mozilla.dev.planning group, dev-pl...@lists.mozilla.org
Hello,

Due to my recent interest in XULRunner and XPCOM components, I'm just
wondering: does option (B) forbid the use of XPCOM components in XULRunner
in general, or does it apply only to Firefox?

Regards,

Leander
> B. Make no technical changes, but disallow binary components in release
> extensions on AMO. We may allow binary components in beta or experimental
> addons, with the understanding that these would need to be updated for each
> release.
> C. (copied from Asa's post) Only change interfaces every 3rd release (or
> something like that). This would mean that extensions which use C++
> components would need to compile less frequently. Users would still be
> exposed to Oracle-type issues, but less frequently.
> D. Stop loading binary components from extensions. Continue exporting the
> XPCOM functions so that application authors can continue to use them. Users
> might still be exposed to Oracle-type issues.
> E. Stop loading binary components completely: compile Firefox as a static
> binary and stop exporting the XPCOM functions completely.
>
> == bsmedberg's Opinion ==
>
> I see binary components as a useful tool in very specific circumstances.
> Primarily, they are a good way to prototype and experiment with new
> features which require mucking about with Mozilla internals. But I tend to
> think that we should discourage their use in any production environment,
> including perhaps disallowing them on AMO. I tend to think we should
> consider option "B".
>
> --BDS
>
> _______________________________________________
> dev-platform mailing list
> dev-pl...@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>

Justin Wood (Callek)

unread,
Jan 17, 2012, 11:05:30 AM1/17/12
to mozilla.dev.planning group
Re: Option B.

It would relegate Lightning to a non-release addon.
It would relegate Enigmail to a non-release addon.

These are two addons that users of Thunderbird, for example, consider
necessary. Yes, doing Lightning/Enigmail with binary parts accessed via
js-ctypes should be theoretically possible, but I make no guess as to
the amount of work.

So *IFF* we do B we should have a way for "special review" to allow
binary-addons to still be a release product.

I am sure there are more. And we should explicitly allow binary pieces
on AMO, just not binary-linked-with-Firefox components in the general
sense, since people can create binary DLLs and load them via js-ctypes!

--
~Justin Wood (Callek)

Wes Garland

unread,
Jan 17, 2012, 11:18:53 AM1/17/12
to mozilla.dev.planning group, dev-pl...@lists.mozilla.org
On 17 January 2012 10:30, Benjamin Smedberg <benj...@smedbergs.us> wrote:

> * They have access to load arbitrary binary code via ctypes. Note that
> ctypes introduces potential type-unsafety and requires very careful
> programming.
>

The hazards of js-ctypes are far more widespread than simple type
unsafety.

It is extremely difficult for even a seasoned C programmer to write
js-ctypes code which is portable across a multitude of architectures and
operating systems, unless the APIs used from js-ctypes are very
narrowly defined. I cannot think of an API list other than "Mozilla XPCOM,
JSAPI, etc." which is sufficiently controlled by Mozilla that we can make
any kind of promises or guarantees about extensions which use js-ctypes.

In some regards, components that are written with js-ctypes are in fact
*less* portable than binary components, because the JS programmer has to do
some of the work of the C compiler.

In order to completely portably use js-ctypes in an extension, the JS
developer must know at least the following:
1 - Location on disk of any libraries used that are not on the library path
2 - Calling convention of any functions used
3 - Implementation details of any of the C APIs which are implemented as
macros
4 - Struct layout of any structs used
5 - Struct packing rules of any structs used

All five of these can vary based on operating system, operating system
version, third party lib versions, and even the compiler used to build said
libraries. None of these examples are theoretical, they are challenges
faced by any developer in any scripting language trying to write portable
FFI code. Every one of them has bitten me personally at some point in my
career.

Impact? If we are allowing the use of js-ctypes on AMO, we cannot
reasonably treat extensions which use them the same as plain JS
extensions. We can probably treat them like binary extensions, if we
"sniff" the code and make sure they are only using Mozilla's libraries. If
they use any system or third party libraries, we cannot actually make any
guarantees for the users at all. Maybe extensions like that should come
through vendor channels (e.g. Debian).

== Possible Alternatives ==
> C. (copied from Asa's post) Only change interfaces every 3rd release (or
> something like that). This would mean that extensions which use C++
> components would need to compile less frequently. Users would still be
> exposed to Oracle-type issues, but less frequently.
>

Observation: this effectively gives us a Major.Minor release scheme,
with the minor version counting in ternary instead of the usual decimal.

If you're trying to remove Oracle-type issues: what impact would it have
on Firefox to hide ALL XPCOM symbols -- except for one? That one returns
a struct full of function pointers (or an object full of methods, you
get the idea) -- but only if the caller supplies the correct
'magic cookie'.

Question: how does the binary extension issue interact with virus scanners
et al on Windows?

Wes

--
Wesley W. Garland
Director, Product Development
PageMail, Inc.
+1 613 542 2787 x 102

Jorge Villalobos

unread,
Jan 17, 2012, 11:20:11 AM1/17/12
to mozilla.dev.planning group, Benjamin Smedberg, dev-pl...@lists.mozilla.org
I don't think option B is something that can be taken lightly. Add-ons
with binary components on AMO are in no way experiments, and most are
maintained by development groups, not individuals. They have very
significant usage on AMO (and much more outside of AMO):

https://addons.mozilla.org/en-US/firefox/compatibility/11.0?appver=1-11.0&type=binary

Closing the binary XPCOM way for extensions seems like a very
destructive approach, and it only worsens the problem that Asa was
trying to address.

- Jorge


Asa Dotzler

unread,
Jan 17, 2012, 11:28:38 AM1/17/12
to
On 1/17/2012 7:30 AM, Benjamin Smedberg wrote:

> I see binary components as a useful tool in very specific circumstances.
> Primarily, they are a good way to prototype and experiment with new
> features which require mucking about with Mozilla internals. But I tend
> to think that we should discourage their use in any production
> environment, including perhaps disallowing them on AMO. I tend to think
> we should consider option "B".

Benjamin, thank you for the best description of binary and js components
I've seen to date. It's a big help to folks like me who don't live so
deeply in the code.

There is a class of extension authors who, whatever their reasons, have
expressed an unwillingness to move to js-ctypes. Perhaps they
technically cannot, or they're simply unwilling (it would be useful to
get to the bottom of that). It's been nearly a year now since we let
them know we'd be requiring a recompile every six weeks and began
encouraging them to move to JS solutions, and they've not done it.

Preliminary numbers say that binary add-on installs outnumber JS add-on
installs by about two to one. There are more than 30 add-ons with binary
components that each have more than a million Firefox users today. There
are only 13 JS add-ons that have more than a million Firefox users today.

I think we've lost this battle and we need to step back and revisit the
value we get from breaking that compatibility every 6 weeks and compare
that against regressing millions of users every 6 weeks.

- A

Benjamin Smedberg

unread,
Jan 17, 2012, 11:31:57 AM1/17/12
to Henri Sivonen, mozilla.dev.planning group
On 1/17/2012 10:51 AM, Henri Sivonen wrote:
> If binary components were for experimentation only, wouldn't it be
> simpler to do experimentation on an experimental branch so that the
> whole app is compiled with the experiment baked in?
I tend to think it's better to say to people "here is an experimental
addon, install it" than to say "here is an experimental build, install
it". This is because these experiments are often done not by Mozilla
itself but by third parties. We risk having users with non-updating
builds. But having custom builds is basically what we'd have to
recommend if we went with option E.

>
>> D. Stop loading binary components from extensions. Continue exporting the
>> XPCOM functions so that application authors can continue to use them. Users
>> might still be exposed to Oracle-type issues.
> Who are "application authors" in this case? Authors of XULRunner apps?
> Don't authors of XULRunner apps pretty much end up compiling XULRunner
> themselves and, therefore, could bake their code into a static binary
> of the type mentioned in option E?
Do you have data on this? I don't think that most app authors compile
their own XULRunner. In some cases they just try to use firefox -app.
> It seems to me that options other than A constrain what can be done in
> a pragmatic way (either on the extension side in cases B and D or on
> the Firefox side in case C) so much that we might as well choose
> option E if we don't stick with option A.
We're trying to balance the constraints here. Option E is really the
nuclear option, given our current history.

Leander Bessa wrote:

> Due to my recent interest in XULRunner and XPCOM components, I'm just
> wondering if option (B) forbids the use of XPCOM components in XULRunner in
> general or applies only to Firefox?
That is a question to be answered. If we go with option E) we probably
want XULRunner apps to use a similar build config for pragmatic/testing
reasons, so you'd have to compile your components into libxul itself.

Justin Wood (Callek) wrote:

> Re: Option B.
>
> It would relegate Lightning to a non-release addon.
> It would relegate Enigmail to a non-release addon.
>
> These are two addons that users of Thunderbird for example, consider
> necessary. Yes doing Lightning/Enigmail with simply binary-parts but
> using js-ctypes to access those parts should be theoretically possible,
> but I make no guess as to the amount of work.
If Firefox chooses option B, that doesn't necessarily mean that
Thunderbird must also choose it. I really don't know lightning at all,
but I believe that all enigmail needs is ipccode, which we should just
consider adding as part of core.

--BDS

Benjamin Smedberg

unread,
Jan 17, 2012, 11:39:24 AM1/17/12
to Wes Garland, mozilla.dev.planning group
On 1/17/2012 11:18 AM, Wes Garland wrote:
> It is extremely difficult for even a seasoned C programmer to write
> js-ctypes which is portable across a multitude of architectures and
> operating systems, unless the APIs which are used from js-ctypes are very
> narrowly defined.
In any non-trivial case, the recommended way to use ctypes is to compile
a DLL which exposes your well-known API, and have the C compiler do all
the dirty work for you. In this case none of the objections you have
mentioned really apply.

--BDS

Wes Garland

unread,
Jan 17, 2012, 11:44:44 AM1/17/12
to Benjamin Smedberg, mozilla.dev.planning group
On 17 January 2012 11:39, Benjamin Smedberg <benj...@smedbergs.us> wrote:

> In any non-trivial case, the recommended way to use ctypes is to compile a
> DLL which exposes your well-known API, and have the C compiler do all the
> dirty work for you. In this case none of the objections you have mentioned
> really apply.
>

This is absolutely true; however, by following this approach we now have
a binary component.

Asa's comment seems to indicate that he believes that js-ctypes are an
effective way to rid us of the need for binary components. I do not
believe that is the case.

Boris Zbarsky

unread,
Jan 17, 2012, 11:47:31 AM1/17/12
to
On 1/17/12 11:28 AM, Asa Dotzler wrote:
> I think we've lost this battle and we need to step back and revisit the
> value we get from breaking that compatibility every 6 weeks

The main value is being able to sanely develop Gecko.

The alternative is basically to only do Gecko work 6 weeks out of every 18.

-Boris

Asa Dotzler

unread,
Jan 17, 2012, 11:49:13 AM1/17/12
to
On 1/17/2012 8:44 AM, Wes Garland wrote:
> On 17 January 2012 11:39, Benjamin Smedberg<benj...@smedbergs.us> wrote:
>
>> In any non-trivial case, the recommended way to use ctypes is to compile a
>> DLL which exposes your well-known API, and have the C compiler do all the
>> dirty work for you. In this case none of the objections you have mentioned
>> really apply.
>>
>
> This is absolutely true, however, by following this approach, we now have a
> binary component.
>
> Asa's comment seems to indicate that he believes that js-ctypes are an
> effective way to rid us of the need for binary components. I do not
> believe that is the case.
>
> Wes
>

I don't know the tech well enough to reply in detail here, but my goal
is not to rid us of the need for binary components. My goal is that our
users don't lose functionality (often regaining it a few days later)
every 6 weeks because add-ons weren't compatible with our update.

- A

Benjamin Smedberg

unread,
Jan 17, 2012, 12:01:08 PM1/17/12
to Wes Garland, mozilla.dev.planning group
On 1/17/2012 11:44 AM, Wes Garland wrote:
> On 17 January 2012 11:39, Benjamin Smedberg<benj...@smedbergs.us> wrote:
>
>> In any non-trivial case, the recommended way to use ctypes is to compile a
>> DLL which exposes your well-known API, and have the C compiler do all the
>> dirty work for you. In this case none of the objections you have mentioned
>> really apply.
>>
> This is absolutely true, however, by following this approach, we now have a
> binary component.
No, you have a *DLL*. I am in no way proposing that we get rid of
extensions shipping DLLs. I in fact strongly encourage it for some use
cases. IETab, for example, should almost certainly be implemented as an
NPAPI plugin, not an XPCOM component.

I am merely talking about binary *XPCOM* components, which are the root
problem when we're talking about either freezing XPCOM interface
definitions for some of the rapid release cycle or disallowing those
components based on the user experience.

--BDS

Wes Garland

unread,
Jan 17, 2012, 12:04:24 PM1/17/12
to Benjamin Smedberg, mozilla.dev.planning group
> No, you have a *DLL*. I am in no way proposing that we get rid of
> extensions shipping DLLs.

AH - Thank you for the clarification. That is definitely a saner approach
than trying to go js-ctypes-only.

Benjamin Smedberg

unread,
Jan 17, 2012, 12:22:54 PM1/17/12
to Asa Dotzler, dev-pl...@lists.mozilla.org
On 1/17/2012 11:28 AM, Asa Dotzler wrote:
>
> There is a class of extension authors who, whatever their
> reasons, have expressed an unwillingness to move to JS ctypes. Perhaps
> it's that they technically cannot or they're simply unwilling to (it
> would be useful to get to the bottom of that). It's been nearly a year
> now since we let them know we'd be requiring a re-compile every six
> weeks and began encouraging them to move to JS solutions and they've
> not done it.
I'm looking at
https://addons.mozilla.org/en-US/firefox/compatibility/11.0?appver=1-11.0&type=binary

What portion of these addons are not compatible at release time?

I can make educated guesses about the *possibility* of these addons
using something other than XPCOM components to achieve their
functionality, or could if we added a few APIs.

Garmin communicator: probably just code to communicate with the device
over USB: could certainly be accomplished with an intermediate ctypes
DLL, or perhaps even just from script using WebUSB (bug 674718/bug 711613).

IE Tab: the core embedding-trident functionality should be easily
accomplishable and would be far better using an NPAPI plugin. I don't
know if they also try to hook into our docshell hierarchy for the IE
tabs, which would in fact be very difficult to do from script. We could
perhaps expose an API to make that possible, with cooperation from them.

FoxTab: I really don't know why this can't just use webgl or an NPAPI
plugin.

Cooliris: also appears like it could use webgl or an NPAPI plugin.

Ant Video Downloader: does not appear to contain any binaries

Colorzilla: appears that this could easily use ctypes

> I think we've lost this battle and we need to step back and revisit
> the value we get from breaking that compatibility every 6 weeks and
> compare that against regressing millions of users every 6 weeks.
If this is true, I think we should revisit rapid release in general, and
not just this aspect of it. As Boris has mentioned, this will
significantly affect our ability to make changes to the platform.

--BDS

Henri Sivonen

unread,
Jan 17, 2012, 12:31:47 PM1/17/12
to dev-pl...@lists.mozilla.org
On Tue, Jan 17, 2012 at 7:22 PM, Benjamin Smedberg
<benj...@smedbergs.us> wrote:
> Cooliris: also appears like it could use webgl or an NPAPI plugin.

Cooliris is available for Chrome, so clearly the functionality can be
accomplished with an NPAPI plug-in plus a JS-based extension. IIRC, the
Firefox extension already contains an NPAPI plug-in.

Asa Dotzler

unread,
Jan 17, 2012, 12:32:04 PM1/17/12
to
On 1/17/2012 9:22 AM, Benjamin Smedberg wrote:
> On 1/17/2012 11:28 AM, Asa Dotzler wrote:
>>
>> There is a class of extension authors who, whatever their
>> reasons, have expressed an unwillingness to move to JS ctypes. Perhaps
>> it's that they technically cannot or they're simply unwilling to (it
>> would be useful to get to the bottom of that). It's been nearly a year
>> now since we let them know we'd be requiring a re-compile every six
>> weeks and began encouraging them to move to JS solutions and they've
>> not done it.
> I'm looking at
> https://addons.mozilla.org/en-US/firefox/compatibility/11.0?appver=1-11.0&type=binary
>
>
> What portion of these addons are not compatible at release time?

I believe those are only the ones hosted at AMO. That's a minority of
add-ons.

> I can make educated guesses about the *possibility* of these addons
> using something other than XPCOM components to achieve their
> functionality, or could if we added a few APIs.

How do we convince them to change, even if we have a destination that's
better?

>> I think we've lost this battle and we need to step back and revisit
>> the value we get from breaking that compatibility every 6 weeks and
>> compare that against regressing millions of users every 6 weeks.
> If this is true, I think we should revisit rapid release in general, and
> not just this aspect of it. As Boris has mentioned, this will
> significantly affect our ability to make changes to the platform.

When we moved to rapid releases, I got the impression that we needed
time to clean up but that we wouldn't be in that state forever. I think
we should discuss ways to amend rapid releases to make it work better
for platform developers and users.

- A

Mark Banner

unread,
Jan 17, 2012, 12:59:59 PM1/17/12
to
On 17/01/2012 16:05, Justin Wood (Callek) wrote:
> Re: Option B.
>
> It would relegate Lightning to a non-release addon.
> It would relegate Enigmail to a non-release addon.
>
> These are two addons that users of Thunderbird for example, consider
> necessary. Yes doing Lightning/Enigmail with simply binary-parts but
> using js-ctypes to access those parts should be theoretically possible,
> but I make no guess as to the amount of work.

Work has already started on Lightning to make it non-binary. I believe
most of this is just translating the existing code to javascript.

I seem to remember from previous discussions I've had that Enigmail would
be difficult without a significant amount of work, and possibly some
items landing in core.

Mark.

Steve Wendt

unread,
Jan 17, 2012, 1:07:35 PM1/17/12
to Justin Wood (Callek)
On 1/17/2012 8:05 AM, Justin Wood (Callek) wrote:

> It would relegate Lightning to a non-release addon.
> It would relegate Enigmail to a non-release addon.

It would be nice if those two were just included with Thunderbird and
SeaMonkey (like ChatZilla, DOM Inspector, etc.).

Boris Zbarsky

unread,
Jan 17, 2012, 1:09:32 PM1/17/12
to
On 1/17/12 12:32 PM, Asa Dotzler wrote:
> When we moved to rapid releases, I got the impression that we needed
> time to clean up but that we wouldn't be in that state forever.

I should note that we're not done with the cleanup yet...

"cleanup" includes implementing the dom4 core spec, note.

> I think we should discuss ways to amend rapid releases to make it work better
> for platform developers and users.

OK, let's define "better"? It sounds like you would prefer to move to
an 18-week release cycle for platform (including web) features, yes?

-Boris

Kyle Huey

unread,
Jan 17, 2012, 1:38:59 PM1/17/12
to mozilla.dev.planning group
On Tue, Jan 17, 2012 at 4:30 PM, Benjamin Smedberg <benj...@smedbergs.us> wrote:

> I see binary components as a useful tool in very specific circumstances.
> Primarily, they are a good way to prototype and experiment with new
> features which require mucking about with Mozilla internals. But I tend to
> think that we should discourage their use in any production environment,
> including perhaps disallowing them on AMO. I tend to think we should
> consider option "B".
>

I did not read the entire thread, so forgive me if this has been raised
already, but I think driving extensions off of AMO is absolutely the wrong
way to go. I would much rather have as many addons as possible in the
brightly lit, police-patrolled plaza that is AMO than the dark shady
alleyways that are the rest of the internet.

- Kyle

Ehsan Akhgari

unread,
Jan 17, 2012, 2:54:29 PM1/17/12
to Asa Dotzler, dev-pl...@lists.mozilla.org
Do we have data on what percentage of the add-ons with binary components do
not have an updated version available when we ship a new release?

--
Ehsan
<http://ehsanakhgari.org/>


On Tue, Jan 17, 2012 at 12:32 PM, Asa Dotzler <a...@mozilla.org> wrote:

> On 1/17/2012 9:22 AM, Benjamin Smedberg wrote:
>
>> On 1/17/2012 11:28 AM, Asa Dotzler wrote:
>>
>>>
>>> There is a class of extension authors who, whatever their
>>> reasons, have expressed an unwillingness to move to JS ctypes. Perhaps
>>> it's that they technically cannot or they're simply unwilling to (it
>>> would be useful to get to the bottom of that). It's been nearly a year
>>> now since we let them know we'd be requiring a re-compile every six
>>> weeks and began encouraging them to move to JS solutions and they've
>>> not done it.
>>>
>> I'm looking at
>>
>> https://addons.mozilla.org/en-US/firefox/compatibility/11.0?appver=1-11.0&type=binary
>>
>>
>> What portion of these addons are not compatible at release time?
>>
>
> I believe those are only the ones hosted at AMO. That's a minority of
> add-ons.
>
>
> I can make educated guesses about the *possibility* of these addons
>> using something other than XPCOM components to achieve their
>> functionality, or could if we added a few APIs.
>>
>
> How do we convince them to change, even if we have a destination that's
> better?
>
> I think we've lost this battle and we need to step back and revisit
>>> the value we get from breaking that compatibility every 6 weeks and
>>> compare that against regressing millions of users every 6 weeks.
>>>
>> If this is true, I think we should revisit rapid release in general, and
>>
>> not just this aspect of it. As Boris has mentioned, this will
>> significantly affect our ability to make changes to the platform.
>>
>
> When we moved to rapid releases, I got the impression that we needed time
> to clean up but that we wouldn't be in that state forever. I think we
> should discuss ways to amend rapid releases to make it work better for
> platform developers and users.
>
>
> - A
> _______________________________________________
> dev-planning mailing list
> dev-pl...@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-planning
>

David Mandelin

unread,
Jan 17, 2012, 3:07:28 PM1/17/12
to
On 1/17/2012 7:30 AM, Benjamin Smedberg wrote:
> This post was prompted by the thread "Why do we keep revising interface
> ids?" in mozilla.dev.platform, and Asa's reply which suggests that we
> reconsider our current policy of breaking (requiring a recompile) of
> binary components in each 6-week cycle. I thought that before we got
> into the details of our solution we should define the problem space.
>
> == Purpose ==
>
> Loadable XPCOM components allow developers of both applications and
> extensions access to the following basic functions:
>
> * Adding new functionality to the platform available via our XPCOM
> object framework.
> * Extending existing functionality of the platform, for example adding a
> new protocol handler.
> * Replacing core functionality of existing components/services.

I have started thinking of our platform as being like an OS and looking
for inspiration on APIs and architectures in OSs. Along those lines,
#1/#2 above could be done as either libraries or device drivers,
depending on how deeply they are integrated with the platform. Either
way, an OS would try to provide a pretty stable API (evolution over the
years, not the months). For #3, the main example that comes to mind is
complete UI replacements (explorer.exe or window managers), but it seems
replacing core features in a stable way is not done much and may not be
very practical--just producing a custom version of the platform seems
more likely.

But before thinking too hard about that, what is the long-term future of
binary extensions? roc recently predicted that NPAPI plugins will go
away, meaning we should worry only about user experience issues for
existing plugins:

http://robert.ocallahan.org/2011/11/end-of-plugins.html

I'm not sure if the comment about Windows 8 applies only to NPAPI
plugins, or other binary extensions too. I think there are valid use
cases for binary extensions, but if we don't expect them to be allowed
by the underlying platform or to continue to be used, then it wouldn't
make much sense to work on them.

If we are going to continue with binary extensions, I'd like to see us
get well-designed, stable APIs. It would take a while to get there,
though, and there is still the question about whether add-on developers
would actually use the new APIs.

> == Possible Alternatives ==
>
> There are a few possible alternatives for what we could do to make using
> binary components easier or harder:
>
> A. Change nothing, leave the status quo as-is. Extensions which use
> binary components and are not recompiled every 6 weeks will "break", but
> they at least have the opportunity to do cool stuff. Users will
> continue to be vulnerable to Oracle-type crashes.

We know roughly how this looks. :-)

> B. Make no technical changes, but disallow binary components in release
> extensions on AMO. We may allow binary components in beta or
> experimental addons, with the understanding that these would need to be
> updated for each release.

This option is hard for me to understand. It seems to be essentially a
ban on binary components, except in prototyping. Is it really that much
easier to prototype as a binary extension? And what does the prototype
lead to, if it can't be shipped? A patch to the platform? A non-AMO addon?

> C. (copied from Asa's post) Only change interfaces every 3rd release (or
> something like that). This would mean that extensions which use C++
> components would need to compile less frequently. Users would still be
> exposed to Oracle-type issues, but less frequently.

This seems really disruptive to development, especially if it applies to
all interfaces. Is there a subset of the interfaces that is useful to
add-on developers that we could keep stable?

> D. Stop loading binary components from extensions. Continue exporting
> the XPCOM functions so that application authors can continue to use
> them. Users might still be exposed to Oracle-type issues.
> E. Stop loading binary components completely: compile Firefox as a
> static binary and stop exporting the XPCOM functions completely.

These two make sense if we do decide that binary components are going away.

> == bsmedberg's Opinion ==
>
> I see binary components as a useful tool in very specific circumstances.
> Primarily, they are a good way to prototype and experiment with new
> features which require mucking about with Mozilla internals. But I tend
> to think that we should discourage their use in any production
> environment, including perhaps disallowing them on AMO. I tend to think
> we should consider option "B".

Personally, I know way too little about the ecosystem of add-ons with
binary components, what they do, and what other kinds of implementations
they could use to be able to make a decision. It seems like a really
deep problem.

But the core issue that your options revolve around does seem to be: do
we want to keep binary components at all? I see 3 main options:

1. Deprecate and eventually remove binary components, on the grounds
that they are hard to support and won't be allowed on all platforms.
(Also, we could later revive them in some new form if we wanted.)
Besides the obvious "life gets easier", I'm not sure what the effects
are--especially, how do vendors of add-ons with binary components react?
(And is this the path MS is taking?)

2. Do binary components right: provide stable APIs and QA binary
components. It's expensive and doesn't solve anything in the immediate
future, but it does create a good, stable environment for binary
components (if it works). It seems like this is what Google is going for
with PPAPI and NaCl.

3. Bounce along as we are: unstable binary components. This is OK for
now but I don't see it as a sound long-term plan.

Dave

Matt Brubeck

Jan 17, 2012, 3:16:54 PM
On 01/17/2012 07:30 AM, Benjamin Smedberg wrote:
> B. Make no technical changes, but disallow binary components in release
> extensions on AMO. We may allow binary components in beta or
> experimental addons, with the understanding that these would need to be
> updated for each release.

For non-AMO add-ons, this has no effect.

For AMO-hosted add-ons, it gives authors a choice between changing their
code or leaving AMO. If many authors chose to leave AMO, this would not
improve the experience for users, and it would reduce our ability to use
AMO procedures to improve things in the future.

If I remember correctly, the binary add-ons with the largest number of
users are not on AMO anyway, so this option would have little or no
effect on user experience for a majority of our users.

Benjamin Smedberg

Jan 17, 2012, 3:39:39 PM
to David Mandelin, dev-pl...@lists.mozilla.org
On 1/17/2012 3:07 PM, David Mandelin wrote:
>
>
> But before thinking too hard about that, what is the long-term future
> of binary extensions? roc recently predicted that NPAPI plugins will
> go away, meaning we should worry only about user experience issues for
> existing plugins:
As noted to Wes, this discussion isn't really about binaries in general.
NPAPI plugins and using binaries via ctypes are the 'good alternatives'
to XPCOM components.

> If we are going to continue with binary extensions, I'd like to see us
> get well-designed, stable APIs. It would take a while to get there,
> though, and there is still the question about whether add-on
> developers would actually use the new APIs.
I don't think we should spend time making stable well-designed binary
APIs, if that is going to get in the way of making stable well-designed
JS APIs (and I don't think we have the resources to do it all). So I do
think that we should be working hard to get stable well-designed JS APIs
that most extensions can use, and in the cases where extensions can't
use the existing APIs, work with them to get the APIs they need.


>
>> == Possible Alternatives ==
>>
>> There are a few possible alternatives for what we could do to make using
>> binary components easier or harder:
>>
>> A. Change nothing, leave the status quo as-is. Extensions which use
>> binary components and are not recompiled every 6 weeks will "break", but
>> they at least have the opportunity to do cool stuff. Users will
>> continue to be vulnerable to Oracle-type crashes.
>
> We know roughly how this looks. :-)
I guess I don't know how bad this looks currently. Asa believes that
many users are annoyed because these addons-with-binaries aren't updated
at release-time (and I have no reason to doubt him). If this is the
case, we'd either need to help them get updates *before* we do each
6-week release, or change our own practices so that we don't break them
at each release.

As I wrote a while back
(http://benjamin.smedbergs.us/blog/2011-09-15/complementing-firefox-rapid-release/),
I think we'd be better off having those consumers use a LTS release.

>
>> B. Make no technical changes, but disallow binary components in release
>> extensions on AMO. We may allow binary components in beta or
>> experimental addons, with the understanding that these would need to be
>> updated for each release.
>
> This option is hard for me to understand. It seems to be essentially a
> ban on binary components, except in prototyping. Is it really that
> much easier to prototype as a binary extension? And what does the
> prototype lead to, if it can't be shipped? A patch to the platform? A
> non-AMO addon?
Oftentimes these things have started out as something that hooks into
the guts of nsIContent or the docshell hierarchy in order to do
interesting stuff. The goal would be to do whatever rapid prototyping
you need, and then convert that into a patch for the platform.


>
>> D. Stop loading binary components from extensions. Continue exporting
>> the XPCOM functions so that application authors can continue to use
>> them. Users might still be exposed to Oracle-type issues.
>> E. Stop loading binary components completely: compile Firefox as a
>> static binary and stop exporting the XPCOM functions completely.
>
> These two make sense if we do decide that binary components are going
> away.
When? Immediately (breaking our most popular addons, see Jorge's link),
or eventually? If eventually, how do we get there?
>
>
> 2. Do binary components right: provide stable APIs and QA binary
> components. It's expensive and doesn't solve anything in the immediate
> future, but it does create a good, stable environment for binary
> components (if it works). It seems like this is what Google is going
> for with PPAPI and NaCl.
I don't think this would help anyone. For the most part, people don't
seem to be using binary components because of speed benefits. It's
either because they need to make special calls into OS libraries (USB
stuff for Garmin, embedding Trident for IETab, etc) or because they want
to poke at the unstable gecko guts (roboform and perhaps some others).
In either case NaCl/Pepper doesn't help. Some of the stuff we're
doing for B2G will help (WebUSB).
>
> 3. Bounce along as we are: unstable binary components. This is OK for
> now but I don't see it as a sound long-term plan.
I *think* everyone agrees that it is not a stable plan for extensions.

--BDS


Zack Weinberg

Jan 17, 2012, 4:24:59 PM
On 2012-01-17 7:30 AM, Benjamin Smedberg wrote:

> A. Change nothing, leave the status quo as-is. Extensions which use
> binary components and are not recompiled every 6 weeks will "break", but
> they at least have the opportunity to do cool stuff. Users will
> continue to be vulnerable to Oracle-type crashes.
> B. Make no technical changes, but disallow binary components in release
> extensions on AMO. We may allow binary components in beta or
> experimental addons, with the understanding that these would need to be
> updated for each release.
> C. (copied from Asa's post) Only change interfaces every 3rd release (or
> something like that). This would mean that extensions which use C++
> components would need to compile less frequently. Users would still be
> exposed to Oracle-type issues, but less frequently.
> D. Stop loading binary components from extensions. Continue exporting
> the XPCOM functions so that application authors can continue to use
> them. Users might still be exposed to Oracle-type issues.
> E. Stop loading binary components completely: compile Firefox as a
> static binary and stop exporting the XPCOM functions completely.

I think option B is an excellent one for the near term, but that in the
long run we want option D or E, and that concurrent with the change to
AMO, we should announce that binary components are deprecated and will
stop working at a stated time in the future (one or two years seems
reasonable to me).

To inform the choice between D and E, could authors of applications
outside m-c please chime in and tell us what they still need binary
components for? It seems *highly* desirable to me to get to a point
where Thunderbird, Seamonkey, etc don't need any binary components
beyond what's already in libxul.

zw

David Mandelin

Jan 17, 2012, 4:29:38 PM
to Benjamin Smedberg
On 1/17/2012 12:39 PM, Benjamin Smedberg wrote:
> On 1/17/2012 3:07 PM, David Mandelin wrote:
>>
>>
>> But before thinking too hard about that, what is the long-term future
>> of binary extensions? roc recently predicted that NPAPI plugins will
>> go away, meaning we should worry only about user experience issues for
>> existing plugins:
>
> As noted to Wes, this discussion isn't really about binaries in general.
> NPAPI plugins and using binaries via ctypes are the 'good alternatives'
> to XPCOM components.

OK, but given that NPAPI may be on its way out (per roc's post) and
ctypes apparently is not being used much, it's not clear if the 'good
alternatives' are viable.

>> If we are going to continue with binary extensions, I'd like to see us
>> get well-designed, stable APIs. It would take a while to get there,
>> though, and there is still the question about whether add-on
>> developers would actually use the new APIs.
>
> I don't think we should spend time making stable well-designed binary
> APIs, if that is going to get in the way of making stable well-designed
> JS APIs (and I don't think we have the resources to do it all). So I do
> think that we should be working hard to get stable well-designed JS APIs
> that most extensions can use, and in the cases where extensions can't
> use the existing APIs, work with them to get the APIs they need.

That makes sense. Do you mean Jetpack, or something more?

>>> == Possible Alternatives ==
>>>
>>> There are a few possible alternatives for what we could do to make using
>>> binary components easier or harder:
>>>
>>> A. Change nothing, leave the status quo as-is. Extensions which use
>>> binary components and are not recompiled every 6 weeks will "break", but
>>> they at least have the opportunity to do cool stuff. Users will
>>> continue to be vulnerable to Oracle-type crashes.
>>
>> We know roughly how this looks. :-)
>
> I guess I don't know how bad this looks currently. Asa believes that
> many users are annoyed because these addons-with-binaries aren't updated
> at release-time (and I have no reason to doubt him). If this is the
> case, we'd either need to help them get updates *before* we do each
> 6-week release, or change our own practices so that we don't break them
> at each release.
>
> As I wrote a while back
> (http://benjamin.smedbergs.us/blog/2011-09-15/complementing-firefox-rapid-release/),
> I think we'd be better off having those consumers use a LTS release.

That sounds right to me.

>>> B. Make no technical changes, but disallow binary components in release
>>> extensions on AMO. We may allow binary components in beta or
>>> experimental addons, with the understanding that these would need to be
>>> updated for each release.
>>
>> This option is hard for me to understand. It seems to be essentially a
>> ban on binary components, except in prototyping. Is it really that
>> much easier to prototype as a binary extension? And what does the
>> prototype lead to, if it can't be shipped? A patch to the platform? A
>> non-AMO addon?
>
> Often times these things have started out as something which hooks into
> the guts of nsIContent or the docshell hierarchy in order to do
> interesting stuff. The goal would be to do whatever rapid prototyping
> you need, and then convert that into a patch for the platform.

OK, so for prototyping. I guess I don't know enough about what that kind
of development is like to know how important that ability is.

>>> D. Stop loading binary components from extensions. Continue exporting
>>> the XPCOM functions so that application authors can continue to use
>>> them. Users might still be exposed to Oracle-type issues.
>>> E. Stop loading binary components completely: compile Firefox as a
>>> static binary and stop exporting the XPCOM functions completely.
>>
>> These two make sense if we do decide that binary components are going
>> away.
>
> When? Immediately (breaking our most popular addons, see Jorge's link),
> or eventually? If eventually, how do we get there?

Eventually. The how would be negotiated with add-on developers, users,
and product managers over time.

>> 2. Do binary components right: provide stable APIs and QA binary
>> components. It's expensive and doesn't solve anything in the immediate
>> future, but it does create a good, stable environment for binary
>> components (if it works). It seems like this is what Google is going
>> for with PPAPI and NaCl.
>
> I don't think this would help anyone. For the most part, people don't
> seem to be using binary components because of speed benefits. It's
> either because they need to make special calls into OS libraries (USB
> stuff for Garmin, embedding Trident for IETab, etc) or because they want
> to poke at the unstable gecko guts (roboform and perhaps some others).
> In either case NaCl/Pepper doesn't help. Some of the stuff we're
> doing for B2G will help (WebUSB).

I am not proposing doing NaCl/Pepper, I was just observing that Google
seems to have chosen that path.

On the use cases you mention: Poking at unstable guts seems like
something that we can never make stable. Special calls into OS libraries
seems more important. Can ctypes cover it?

>> 3. Bounce along as we are: unstable binary components. This is OK for
>> now but I don't see it as a sound long-term plan.
>
> I *think* everyone agrees that it is not a stable plan for extensions.

On further reflection, your proposal "B" to me seems like it largely
amounts to announcing that binary components are unsupported: the code
stays in the tree and you can still ship binaries but not on AMO, and
don't expect them to work as Firefox goes forward.

And that seems like "now, but more explicit about what that actually
means", which seems like a mild improvement over "now". I think it would
be better yet to combine that with a strong intention to create the JS
APIs (e.g., WebUSBs, improved ctypes if necessary) to get all the
add-ons that now have to use binaries onto a supportable technology,
which you also advocated above.

Dave

Kyle Huey

Jan 17, 2012, 4:34:49 PM
to Zack Weinberg, dev-pl...@lists.mozilla.org
Do you mean the libxul that's in mozilla-central or the libxul mail is
shipping? If you mean the libxul that's in mozilla-central, that seems
pretty unrealistic.

- Kyle

David Anderson

Jan 17, 2012, 4:47:54 PM
to dev-pl...@lists.mozilla.org
My experience with binary components has been overwhelmingly negative. I ran/run two decent-sized open source projects that allow both scripted addons and binary extensions. We tried to provide a stable API, but it's not enough. The API will be misused, abused, and often subtly broken by other changes. These binary components are difficult to ship because you need to compile them on N platforms (as well as the lowest common Linux distro denominator, if even possible). They are hard for their authors to debug unless they have access to these platforms. Sometimes the authors are very new to C++.

The API is often incomplete anyway. Then the worst thing: your API has some fundamental flaw you discover later, and now some ancient, unmaintained binary is totally relying on it.

In our experience, these binary components had three use cases:
(1) Making an external implementation scriptable. For example, embedding MySQL support via libmysql.
(2) Exposing to script something that was not provided by the API.
(3) Having some insane, tightly-integrated extension that really needed to talk to the guts of the host app.

(1) is solved by having a really good interop layer, and we were able to eliminate many use cases with that. (2) is solved by letting users create scripted libraries that can be shared throughout the whole system. (3) is the hard one. These are usually the most buggy, because that level of integration is difficult and brittle, and it usually shows in the user experience as well. When you're totally hijacking a system from the outside it's a miracle if it all works, and a maintenance nightmare forever after. All of them are in part solved by being proactive about what (reasonable) functionality developers need, so the hard stuff gets upstreamed.

If I were to create such a project again, I would definitely ditch binary components. They're so much of a headache.

-David

> On 1/17/2012 11:44 AM, Wes Garland wrote:
> > On 17 January 2012 11:39, Benjamin Smedberg<benj...@smedbergs.us> wrote:
> >
> >> In any non-trivial case, the recommended way to use ctypes is to compile a
> >> DLL which exposes your well-known API, and have the C compiler do all the
> >> dirty work for you. In this case none of the objections you have mentioned
> >> really apply.
> >>
> > This is absolutely true, however, by following this approach, we now have a
> > binary component.
> No, you have a *DLL*. I am in no way proposing that we get rid of
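[The DLL-plus-ctypes split recommended here can be sketched roughly as follows. This is a hypothetical illustration: the `mylib_*` names are invented, and only flat C types cross the boundary, since js-ctypes cannot describe C++ classes or XPCOM interfaces.]

```cpp
// Hypothetical C ABI for a ctypes-loadable DLL. All mylib_* names are
// invented for illustration; a real add-on would define its own API.
#include <cassert>
#include <cstdint>
#include <cstring>

extern "C" {

// A version check lets the JS side detect an incompatible DLL up front.
int32_t mylib_version(void) { return 1; }

// Only plain C types cross the boundary: the caller supplies the buffer,
// so no C++ objects or XPCOM interfaces leak through the API.
int32_t mylib_greet(char* buf, int32_t len) {
    const char* msg = "hello from the DLL";
    if (buf == nullptr || len <= (int32_t)strlen(msg)) return -1;
    strcpy(buf, msg);
    return 0;
}

}  // extern "C"
```

[On the JS side the add-on would open the library with `ctypes.open()` and bind each function with `declare()`; because the DLL depends only on its own C ABI and not on Gecko's interface layout, it does not need a recompile when XPCOM interfaces change.]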

Zack Weinberg

Jan 17, 2012, 4:50:10 PM
I mean some future iteration of the libxul that's in m-c, which might
have things in it that aren't there right now.

Another way to describe the "highly desirable point" that I'm thinking
of would be: All the compiled code in the platform is in m-c. The
compiled executable built by m-c is xulrunner. All Mozilla-platform
applications *including Firefox* consist of xulrunner plus some
JavaScript (and XUL/HTML/CSS/etc).

> If you mean the libxul that's in mozilla-central, that seems
> pretty unrealistic.

I know that *presently* there's a lot of C++ code in the comm-central
version of libxul that isn't in the m-c version, but I don't know what
it's all for, and I could imagine a great deal of it being either
reimplementable in JS without much trouble, or reasonable to add to the
m-c codebase.

zw

Joshua Cranmer

Jan 17, 2012, 6:39:17 PM
On 1/17/2012 3:50 PM, Zack Weinberg wrote:
> I know that *presently* there's a lot of C++ code in the comm-central
> version of libxul that isn't in the m-c version, but I don't know what
> it's all for, and I could imagine a great deal of it being either
> reimplementable in JS without much trouble, or reasonable to add to
> the m-c codebase.

As a prelude, I should point out that the mailnews code is often old and
crufty enough to make reimplementation a major task, fraught with
complications, even without the burden of translating to a more limiting
API boundary. I can explain the major problems of rewriting most/all of
comm-central in JS:

1. Charsets. This means a lot of strings will not pass through xpconnect
happily; additionally, a fair amount of the m-c charset/MIME code we
need to use is either not available via XPIDL or it is marked as [noscript].
2. Threads. Some portions of our codebase--particularly IMAP and
import--need to access things from different threads. It is absolutely
impossible to use a JS-implemented XPCOM component from another thread,
which means that it is almost inevitable that the core objects would
have to be implemented in C++. Additionally, IMAP relies on using socket
communications on a different thread, which would mean the same
limitation would still apply even if the core objects were implemented
without using XPCOM at all [which is highly unlikely].
3. RDF. It is impossible to implement RDF in JavaScript (and I have
tried before). While we do have an ongoing project to remove RDF from
mailnews, it has effectively stalled (the main pusher of the effort
appears to have retired from coding Thunderbird), and the last features
are where code is most likely to produce pernicious regressions.
4. System integration. The WABI address books, at the very least, uses a
MS COM implementation, which makes access via jsctypes all but
impossible. Now, this could probably be implemented via a jsctypes
thunk, but I'm just pointing out that it is almost certainly infeasible
to have all of comm-central be in JS.
5. I/O. Some of the I/O layers we use are again unusable from JS (e.g.,
nsIAuthModule). Admittedly, this is mostly fallout from 1 and 2, but I
still want to point out that it is a limit on what is possible.
6. All-or-nothing migration. Much of the current codebase is effectively
impossible to partially migrate to JS, due to several factors. There are
probably some individual files in base/src that could be independently
migrated, but the MIME, filter, composition, import, and database
libraries probably all have to be migrated as full libraries. Address
book might be partially migratable with some work, but the rest of
mailnews would pretty much have to swap all at the same time, or at
least in two units (the protocol communication and the actual base
object instances), due to liberal use of inheritance.

In short: it is, IMHO, highly unlikely that comm-central could be
rewritten in JS without several man-months of concentrated effort and
significant expansion of available APIs (I, for one, would love to see
some sort of "binary string" feature in xpconnect/JS/ishy-things, as
that pretty much eliminates my major concern for rewriting libmime in
JS). Unless, of course, you want to add all/most of comm-central back
into mozilla-central :-).

Asa Dotzler

Jan 17, 2012, 6:58:18 PM
On 1/17/2012 1:24 PM, Zack Weinberg wrote:
> On 2012-01-17 7:30 AM, Benjamin Smedberg wrote:
>
>> A. Change nothing, leave the status quo as-is. Extensions which use
>> binary components and are not recompiled every 6 weeks will "break", but
>> they at least have the opportunity to do cool stuff. Users will
>> continue to be vulnerable to Oracle-type crashes.
>> B. Make no technical changes, but disallow binary components in release
>> extensions on AMO. We may allow binary components in beta or
>> experimental addons, with the understanding that these would need to be
>> updated for each release.
>> C. (copied from Asa's post) Only change interfaces every 3rd release (or
>> something like that). This would mean that extensions which use C++
>> components would need to compile less frequently. Users would still be
>> exposed to Oracle-type issues, but less frequently.
>> D. Stop loading binary components from extensions. Continue exporting
>> the XPCOM functions so that application authors can continue to use
>> them. Users might still be exposed to Oracle-type issues.
>> E. Stop loading binary components completely: compile Firefox as a
>> static binary and stop exporting the XPCOM functions completely.
>
> I think option B is an excellent one for the near term, but that in the
> long run we want option D or E, and that concurrent with the change to
> AMO, we should announce that binary components are deprecated and will
> stop working at a stated time in the future (one or two years seems
> reasonable to me).

If we do B, we do nothing about most add-ons with XPCOM components. Most
are hosted outside of AMO already. If we do B, the rest will probably
just move outside of AMO where we have even less visibility.

If we do D and E, we have to be prepared for the authors of all the
"security"-related add-ons that use XPCOM components to message to our
users that they will be unsafe and should not upgrade to newer versions
of Firefox that block them. We've already seen some of that from AV
vendors in the past.

- A

Asa Dotzler

Jan 17, 2012, 7:05:03 PM
I don't yet know what I prefer. I'd like to find out what's possible. Is
it possible, for example, to stabilize and freeze (for, say, 18 weeks
ore more) a few key interfaces to get a large number add-ons compatible
while letting the more exotic ones break more frequently?

- A

Boris Zbarsky

Jan 17, 2012, 7:10:50 PM
On 1/17/12 7:05 PM, Asa Dotzler wrote:
> I don't yet know what I prefer. I'd like to find out what's possible. Is
> it possible, for example, to stabilize and freeze (for, say, 18 weeks
> or more) a few key interfaces to get a large number of add-ons compatible
> while letting the more exotic ones break more frequently?

Possibly. But note that one of the things that's come up the most are
things like the DOM node and element interfaces, which we can only
stabilize/freeze by deferring web features, because people are adding
new APIs to those all the time.

Maybe that's no longer an issue, though. What would be interesting, if
we had a way of getting it, is some idea of the set of interfaces binary
XPCOM components are actually using...

-Boris

Asa Dotzler

Jan 17, 2012, 7:26:08 PM
If there's some way to do that, even manually, I'd be willing to help.
Is there some way to have an instrumented Firefox that spits out a list
of any APIs an add-on is poking?

- A

Boris Zbarsky

Jan 17, 2012, 7:50:38 PM
On 1/17/12 7:26 PM, Asa Dotzler wrote:
> If there's some way to do that, even manually, I'd be willing to help.
> Is there some way to have an instrumented Firefox that spits out a list
> of any APIs an add-on is poking?

Hmm.

That's an interesting question, actually. Maybe, yes. If we hack the
IDL and codegen for nsISupports such that the first 100 vtable entries
record some information about the concrete class the function is called
on (by examining the vtable pointer and then doing something with debug
symbols, if possible) and the index of the method, then forward on to
the "real" method (by adding 100 to the vtable index and calling the
result), we might be able to get something, esp. with some
postprocessing. Does that sound at all feasible?
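[The interposition idea above can be sketched with an explicit function-pointer table standing in for the compiler-generated vtable. This is only a toy model: real XPCOM vtables come from the compiler and the forwarders would come from hacked IDL codegen, but it shows the "log the slot, then forward to slot + 100" mechanism.]

```cpp
// Toy model of vtable interposition: slots 0..99 log and forward,
// slots 100..199 hold the "real" implementations. All names invented.
#include <cassert>
#include <vector>

using Method = int (*)(int);

static Method g_table[200];       // stand-in for a 200-entry vtable
static std::vector<int> g_calls;  // records which slot external code hit

// "Real" methods, reachable only through the forwarders.
static int realAddRef(int x) { return x + 1; }
static int realRelease(int x) { return x - 1; }

// Forwarder for slot `Slot`: record the call, then invoke slot + 100.
template <int Slot>
static int forwarder(int x) {
    g_calls.push_back(Slot);
    return g_table[Slot + 100](x);
}

static void initTable() {
    g_table[0] = forwarder<0>;    // what an add-on's calls would hit
    g_table[1] = forwarder<1>;
    g_table[100] = realAddRef;    // what internal code would call
    g_table[101] = realRelease;
}
```

[Postprocessing the logged slot indices against debug symbols would then yield the set of interfaces and methods binary add-ons actually exercise.]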

The key part is that to tell apart the addon's calls and our calls we
need to make sure that the addon is calling _different_ methods from our
internal code. We get that somewhat for free if the two are compiled
against different headers.

-Boris


Simon Kornblith

Jan 17, 2012, 7:50:53 PM
On Jan 17, 11:28 am, Asa Dotzler <a...@mozilla.org> wrote:
> On 1/17/2012 7:30 AM, Benjamin Smedberg wrote:
>
> > I see binary components as a useful tool in very specific circumstances.
> > Primarily, they are a good way to prototype and experiment with new
> > features which require mucking about with Mozilla internals. But I tend
> > to think that we should discourage their use in any production
> > environment, including perhaps disallowing them on AMO. I tend to think
> > we should consider option "B".
>
> Benjamin, thank you for the best description of binary and js components
> I've seen to date. It's a big help to folks like me who don't live so
> deeply in the code.
>
> There is a class of extension authors who, for whatever their
> reasons, have expressed an unwillingness to move to JS ctypes. Perhaps
> it's that they technically cannot or they're simply unwilling to (it
> would be useful to get to the bottom of that). It's been nearly a year
> now since we let them know we'd be requiring a re-compile every six
> weeks and began encouraging them to move to JS solutions and they've not
> done it.

As an add-on developer who falls into this category, I can provide
three reasons that we haven't moved to js-ctypes yet, and the
experience we've encountered in moving in that direction.

1) We have a non-trivial number of users still running Firefox 3.6 who
we still want to support. Moving to js-ctypes means dropping these
users or maintaining two separate code bases. For now, the plan is to
wait for Firefox 3.6 end-of-life before dropping support. Maintaining
two separate code bases is far more of a pain in the ass than updating
binary components for each release.

2) js-ctypes is ugly. The core of our extension is written in
JavaScript, but we have three different XPCOM components to
communicate with different word processors, all of which expose a
common object-oriented interface. With binary XPCOM, we wrote an idl
file and auto-generated the headers from it. Now we have a C API that
passes around structs for the library, and a JavaScript XPCOM
component that uses js-ctypes to interface with our native library but
exposes it in an object-oriented way. We could eventually move our
core code to expect this C API, but it's not anything resembling
idiomatic JavaScript. On top of this, memory management with js-ctypes
is a major pain; we are manually refcounting interfaces within
JavaScript. After writing the original code as an XPCOM component, js-
ctypes seems like a downgrade in just about every way. The ability to
call C++ from js-ctypes (bug 505907) would address this to some
degree, but it looks like it's been WONTFIXed.

3) It will take a lot of rapid release cycles before moving code to js-
ctypes will pay off. Moving code from XPCOM to js-ctypes requires a
non-trivial amount of effort and may introduce new bugs. Recompiling
against a new XULRunner SDK takes a couple of minutes and runs
comparatively little risk of regressions.

With that said, we are in the process of moving to js-ctypes in order
to decrease long-term maintenance requirements. However, it's not
something we are particularly enthusiastic about, and the only
benefits to us are indirect (through having a better Firefox). I don't
think it should be surprising that, when told to replace their current
paradigm with another paradigm that is worse in every way except for
forward compatibility, add-on developers are dragging their feet.

Simon

Justin Dolske

Jan 17, 2012, 9:56:19 PM1/17/12
to
On 1/17/12 7:30 AM, Benjamin Smedberg wrote:

> B. Make no technical changes, but disallow binary components in release
> extensions on AMO. We may allow binary components in beta or
> experimental addons, with the understanding that these would need to be
> updated for each release.

Non-AMO addons are unfortunately so common that I don't think this is
likely to be effective. It might be nice, someday, to more strongly
encourage an "app store" kind of model where we make it more difficult
to install addons without some kind of AMO review... But that's a thorny
issue and not likely to happen anytime soon.


> C. (copied from Asa's post) Only change interfaces every 3rd release (or
> something like that). This would mean that extensions which use C++
> components would need to compile less frequently. Users would still be
> exposed to Oracle-type issues, but less frequently.

I think this is likely to make the problem worse! Instead of code
breaking frequently (and hopefully caught during development), longer
windows of "it still happens to work" will elapse and end up impacting
more users.

Counter proposal (possibly crazy): essentially randomize
interfaces/vtables each release to reliably _ensure_ breakage. No binary
addon will work > 6 weeks (in a release channel), so there's less time
for broken code to spread to users.


> I see binary components as a useful tool in very specific circumstances.
> Primarily, they are a good way to prototype and experiment with new
> features which require mucking about with Mozilla internals.

Has anyone done some kind of survey to see what the common use-cases are
for binary addons? Would it be interesting to look at ways to expose
non-XPCOM binary hooks, with greater stability levels? [Essentially
Jetpack for native code, if you will. :)]

Another way to look at the problem is that there are still a lot of
developers out there who want to use the languages they know (C/C++) to
extend Firefox, and the only obvious interface points we provide are
these hazardous, unstable XPCOM APIs.

Justin

Philip Chee

Jan 17, 2012, 11:24:03 PM1/17/12
to
On Tue, 17 Jan 2012 11:31:57 -0500, Benjamin Smedberg wrote:
> Justin Wood (Callek) wrote:
>
>> Re: Option B.
>>
>> It would relegate Lightning to a non-release addon.
>> It would relegate Enigmail to a non-release addon.
>>
>> These are two addons that users of Thunderbird for example, consider
>> necessary. Yes, doing Lightning/Enigmail with binary parts accessed via
>> js-ctypes should be theoretically possible, but I make no guess as to
>> the amount of work.
> If Firefox chooses option B, that doesn't necessarily mean that
> Thunderbird must also choose it. I really don't know lightning at all,
> but I believe that all enigmail needs is ipccode, which we should just
> consider adding as part of core.

Well here's the problem. Enigmail tried multiple times to get their IPC
code into core but was rejected each time. Eventually they were
grudgingly allowed to put their code into a separate repository
somewhere in hg.mozilla.org where nobody can find it.

Phil

--
Philip Chee <phi...@aleytys.pc.my>, <phili...@gmail.com>
http://flashblock.mozdev.org/ http://xsidebar.mozdev.org
Guard us from the she-wolf and the wolf, and guard us from the thief,
oh Night, and so be good for us to pass.

Kent James

Jan 17, 2012, 11:55:46 PM1/17/12
to
On 1/17/2012 1:24 PM, Zack Weinberg wrote:
> To inform the choice between D and E, could authors of applications
> outside m-c please chime in and tell us what they still need binary
> components for? It seems *highly* desirable to me to get to a point
> where Thunderbird, Seamonkey, etc don't need any binary components
> beyond what's already in libxul.

I've been a proponent of adding capability to Thunderbird to allow
adding new account types beyond the current IMAP, POP3, News, and RSS
types. The addon TweeQuilla (written in JavaScript) is a demonstration
of adding a Twitter account type, and uses a binary addon "New Account
Types" as a glue layer to the base Thunderbird code.

The fundamental mailnews design model for accounts essentially has a
base set of C++ classes (example: nsMsgDBFolder.cpp) that are then
extended with C++ subclasses to implement account-specific functionality
on top of XPCOM interfaces. So, for example, nsIMsgDBFolder has specific
methods that need to be implemented for each account type (for IMAP, in
nsImapMailFolder.cpp), as well as account-specific extensions to the
XPCOM interface (for example nsIImapIncomingServer for the server).

So why do I need binary XPCOM extensions? Because the entire mailnews
architecture is based on a C++ inheritance model which does not map well
onto javascript. Although the "New Account Types" addon showed that, in
fact, it is possible to overcome those issues in javascript, everyone
seemed to hate the complexity and ugliness required to do so, so
attempts to propose adding some of that to core quickly got mired in
criticism. While I suppose it would be possible to rewrite all of
mailnews to be more javascript-friendly, realistically that is not
likely to happen. I also know from bitter experience that even fairly
small bug fixes in mailnews often lead to YEARS of chasing regressions
through fix after fix, so do not kid yourself that any of this would be easy.

As for the compatibility issue, for New Account Types I currently ship
DLLs supporting versions 8, 9, and 10 in the same addon. In the review
queue (for three weeks) is the next version, which will support 9, 10, and
11. With this overall scheme, someone can switch seamlessly between the
last release, current release, and beta versions. I as the developer run
aurora. This really works just fine. I just wish that Lightning would
move to a similar scheme!

So in case it is not clear, I'm quite opposed to any attempts to
remove binary compatibility for Thunderbird extensions.

rkent

PhillipJones

Jan 18, 2012, 12:01:50 AM1/18/12
to
The best thing would be to stop messing with the extension developers and
pulling the rug out from under them at every new update. Lock it down to
one system and leave it alone. Soon Mozilla will alienate the extension
developers and there will be no extensions for Mozilla products. Mozilla
and extension developers should sit down together, agree on a system, and
then stick to that system.

--
Phillip M. Jones, C.E.T. "If it's Fixed, Don't Break it"
http://www.phillipmjones.net mailto:pjo...@kimbanet.com

Mook

Jan 18, 2012, 12:05:40 AM1/18/12
to
On 1/17/2012 1:24 PM, Zack Weinberg wrote:
For things I've worked on: (Please remember these are personal opinions
and don't represent the projects involved.)

Songbird (application shipping its private XULRunner):
- gstreamer integration - media playback, transcoding. You really
don't want to be doing playback on the UI thread. Chances of landing
this on m-c are low; the existing gstreamer media backend bug (for
Linux) is in limbo (and is unrelated anyway), and this was used for
Linux/Windows/Mac anyway.
- Windows media integration (playback of wma, DRMed tracks, etc.) -
even less likely to land on m-c; basically, in a hypothetical world
where XPCOM is gone, js-ctypes is the only option. I believe MS-COM is
involved on background threads.
- Media tagging - gstreamer, wm, plus taglib. Possibly js-ctypes,
assuming that works on worker threads. (I've been told it does.)
- USB interaction (as mass-storage devices). Possibly whatever B2G
work, assuming it works cross-platform (Linux/Windows/Mac).
- USB/MTP interaction. This basically boils down to off-main-thread
MS-COM calls, I think. If js-ctypes started supported that it might
work? Not sure, but it might need to be called on threads it didn't
create too...
- Some database stuff (its own copy of sqlite and weird caching and...
stuff.)

Overall: if binary XPCOM is dead, the app is more likely to just stop
upgrading Gecko than it is to do the work to move to a new version.
Luckily, it seems like this discussion is, so far, limited to
extensions-for-Firefox and not non-Firefox-applications.

Komodo (non-XR application, XUL based):
- The main thing here is PyXPCOM. Komodo ships a private copy of
Python (yes!) and hg.m.o-hosted-but-not-m-c code implements a gateway
for that to talk XPCOM, both on the main thread and background threads.
This is basically unfeasible for js-ctypes.
- (This also uses a NPAPI plugin extensively; wrapped via a JS
component right now through npruntime. This will not change due to this
discussion, but might due to the aforementioned blog post about killing
NPAPI.)
- That's about it here - all the fun interaction goes via Python here.
- Note that we don't build anything special as part of libxul, here,
either.

And for ancient history:
- Minimizetotray was a binary-XPCOM-component extension. (Roughly,
Gecko 1.7 to 1.9.0.) No longer maintained, but because of the trilicense
there's a few forks floating about on AMO. Showed up on the top binary
components list Jorge linked to, even. If I were to start from scratch
and XPCOM is dead, js-ctypes is my best guess - assuming that can
implement a WndProc.
- From this experience, I'd say hoping binary XPCOM bits can land on
m-c is overly optimistic. This code did end up getting reviewed and
landing... on svn.mozilla.org/projects/webrunner/. Since Firefox itself
didn't need this code, people didn't want to land it in CVS (hey, it's
ancient history). This matches what happened to hg.m.o/ipccode - sure
it landed, but not in a repo that actually gets shipped.

Other stuff:
- My experience with js-ctypes has been... poor. The syntax is
confusing (looks nothing like idiomatic JS or C++), no useful samples
(the unit tests are opaque, and no users in tree I know of), and
debugging is impossible. It's a black box, much worse than XPCOM
components where C++ compilers usually spit out useful information, you
can go read the headers, and you can find useful debuggers.

--
Mook

Nicholas Nethercote

Jan 18, 2012, 12:31:25 AM1/18/12
to Asa Dotzler, dev-pl...@lists.mozilla.org
On Tue, Jan 17, 2012 at 8:28 AM, Asa Dotzler <a...@mozilla.org> wrote:
>
> Preliminary numbers say that binary add-on installs outnumber JS add-on
> installs by about two to one. There are more than 30 add-ons with binary
> components that each have more than a million Firefox users today. There are
> only 13 JS add-ons that have more than a million Firefox users today.

How many of those 30 add-ons with binary components are things like
.Net Assistant and Java Console that users don't know/care about? And
how many of them are anti-virus type things?

FF8 made the 3rd-party add-on situation better; I'm wondering how much
effect it had in practice.

Nick

Blair McBride

Jan 18, 2012, 12:57:21 AM1/18/12
to dev-pl...@lists.mozilla.org
On 18/01/2012 6:22 a.m., Benjamin Smedberg wrote:
> I'm looking at
> https://addons.mozilla.org/en-US/firefox/compatibility/11.0?appver=1-11.0&type=binary

Note that AMO's definition of "binary component" is NOT "binary XPCOM
component" - it's *any* binary code (including .sh files, oddly enough).
Bug 718694 will add support for specifically detecting binary XPCOM
components.

Back to the original topic... there are still numerous XPCOM
interfaces/functions that are marked [noscript], so there are still
things that can only be done via binary XPCOM. There aren't nearly as
many as there once were, and it's a fixable problem, but there is work to
be done to make them JS-friendly or provide alternatives.

- Blair

Henri Sivonen

Jan 18, 2012, 3:02:04 AM1/18/12
to mozilla.dev.planning group
On Tue, Jan 17, 2012 at 6:31 PM, Benjamin Smedberg
<benj...@smedbergs.us> wrote:
>>> D. Stop loading binary components from extensions. Continue exporting the
>>> XPCOM functions so that application authors can continue to use them.
>>> Users
>>> might still be exposed to Oracle-type issues.
>>
>> Who are "application authors" in this case? Authors of XULRunner apps?
>> Don't authors of XULRunner apps pretty much end up compiling XULRunner
>> themselves and, therefore, could bake their code into a static binary
>> of the type mentioned in option E?
>
> Do you have data on this? I don't think that most app authors compile their
> own XULRunner. In some cases they just try to use firefox -app.

I don't have proper data. Just the observation that big apps like
BlueGriffon, Thunderbird and Songbird seem to compile Gecko themselves
and, therefore, could add whatever C++ they'd like at Gecko compile
time. Maybe calling these "XULRunner apps" is technically wrong.

If there is a long tail of XULRunner apps that use Mozilla-supplied
XULRunner binaries, one option would be to lock down Firefox against
dynamically-added vtable-dependent code while leaving XULRunner open
to vtable-dependent extensions.

--
Henri Sivonen
hsiv...@iki.fi
http://hsivonen.iki.fi/

Mike Hommey

Jan 18, 2012, 3:06:31 AM1/18/12
to Justin Dolske, dev-pl...@lists.mozilla.org
On Tue, Jan 17, 2012 at 06:56:19PM -0800, Justin Dolske wrote:
> Has anyone done some kind of survey to see what the common use-cases
> are for binary addons? Would it be interesting to look at ways to
> expose non-XPCOM binary hooks, with greater stability levels?
> [Essentially Jetpack for native code, if you will. :)]

I think that's the core of the problem: we don't know what APIs they
need or use. Boris has been saying that stabilizing e.g. nsINode would
stall progress on new web features. But how many binary xpcom components
need to fiddle with nsINode?
We certainly don't need to stabilize all our APIs for binary xpcom
components to be sustainable.

Mike

Mike Hommey

Jan 18, 2012, 3:08:42 AM1/18/12
to Zack Weinberg, dev-pl...@lists.mozilla.org
Do you suggest that things like the ldap client implementation would
need to be in m-c's libxul?

Mike

Boris Zbarsky

Jan 18, 2012, 3:22:54 AM1/18/12
to
On 1/18/12 3:06 AM, Mike Hommey wrote:
> I think that's the core of the problem: we don't know what APIs they
> need or use. Boris has been saying that stabilizing e.g. nsINode would
> stall progress on new web features. But how many binary xpcom components
> need to fiddle with nsINode?

So... I probably have a biased sample, but about 2/3 of the ones I've
ended up having to deal with (because they were causing breakage of
various sorts) were touching things like nsINode/nsIContent/nsIDocument.

-Boris

Mike Hommey

Jan 18, 2012, 3:39:12 AM1/18/12
to Boris Zbarsky, dev-pl...@lists.mozilla.org
You probably have a biased sample, but then, that raises the corollary
question: what do they need to touch them for? More specifically, why do
they need to do it from C++ instead of javascript?

Mike

Boris Zbarsky

Jan 18, 2012, 3:42:25 AM1/18/12
to
For the most part, for absolutely no good reason I could see.

There was one sorta-exception: an addon that switched from DOM mutation
observers to nsIMutationObserver to not cause a large performance drag.
But imo that was the wrong fix for their problem; the right fix would
have been removing the mutation observer when it got triggered, then
readding after they had rescanned the entire page off a timeout (which
they continued to do). I sort of failed to convince them of that.

-Boris

Gervase Markham

Jan 18, 2012, 7:48:10 AM1/18/12
to Philip Chee
On 18/01/12 04:24, Philip Chee wrote:
> Well here's the problem. Enigmail tried multiple times to get their IPC
> code into core but was rejected each time. Eventually they were
> grudgingly allowed to put their code into a separate repository
> somewhere in hg.mozilla.org where nobody can find it.

I remember that; it was a long time ago. Without wanting to revisit the
rights or wrongs of the decision back then, perhaps it's worth making a
few brief enquiries of the relevant module owners as to whether the
answer would still be the same?

Gerv


Pavol Mišík

Jan 18, 2012, 8:18:34 AM1/18/12
to
The company I work for has written an extension for Thunderbird to provide
tight integration with our product. Joshua mentioned there are many
limitations, so we did it in XPCOM. We've successfully supported Tb2-Tb5
within *one* DLL.

We created a thin layer that wrapped all the interfaces we need. At
startup, our extension verified that it was compatible with all necessary
interfaces; we could get the necessary information from nsIInterfaceInfo.
We relied on xpcom\reflect not changing its interfaces. We did this
verification for two reasons: to get the information needed to dynamically
build wrappers, and to protect impatient users who tried to change
em:maxVersion in install.rdf.

In Tb7/FF7, nsIInterfaceInfo was changed. In order to avoid potential
user problems with our integration in the future, we stopped supporting
newer versions, because there is no guarantee that the xpcom\reflect
interfaces will not change again. :-(
We also support other products, e.g. Microsoft's, so our users have the
option to use them.

If these interfaces were frozen, or were changed more carefully (adding
new methods at the end of an interface, not in the middle), so that we
could safely get information about interfaces (xpcom\reflect) in
FF/TB/Gecko, we could reconsider our decision.
There are probably other companies that build their extensions this way.

pm-

Kent James

Jan 18, 2012, 10:19:45 AM1/18/12
to
I think that for the purposes of this discussion, the point here is that
it is not a trivial task to get a major binary interface added to the
core code, even for an extension like Enigmail that is viewed as
critical to the Thunderbird ecosystem. And that is the past. If you
remove binary support, then the future Enigmails can never even happen,
particularly since in their early experimental phase there may not yet
be a recognized need from core developers.

It sounds great to say that we will fix the core code to remove these
deficiencies - but we are also talking about core code here that still
uses RSS and Mork. The resources are just not there for the rewrite that
would be needed to remove all possible requirements for binary addons.

rkent

Benjamin Smedberg

Jan 18, 2012, 11:00:32 AM1/18/12
to Philip Chee, dev-pl...@lists.mozilla.org
On 1/17/2012 11:24 PM, Philip Chee wrote:
> Well here's the problem. Enigmail tried multiple times to get their IPC
> code into core but was rejected each time. Eventually they were
> grudgingly allowed to put their code into a separate repository
> somewhere in hg.mozilla.org where nobody can find it.
Patrick and I mutually decided to make the code separate "for now",
since the Mozilla core code didn't need it. We can revisit that decision
as necessary.

--BDS

Robert Kaiser

Jan 18, 2012, 11:40:37 AM1/18/12
to
Asa Dotzler schrieb:
> If we do B, we do nothing about most add-ons with XPCOM components. Most
> are hosted outside of AMO already. If we do B, the rest will probably
> just move outside of AMO where we have even less visibility.

I fear that as well. Would give us no win but a lot more pain. :(

> If we do D and E, we have to be prepared for the authors of all the
> "security"-related add-ons that use XPCOM components to message to our
> users that they will be unsafe and should not upgrade to newer versions
> of Firefox that block them. We've already seen some of that from AV
> vendors in the past.

Actually, a number of AV vendors (if not all of them) use Windows
facilities to hook binaries into our processes and don't use XPCOM at
all, from what I can tell. This makes things just crash because there
are no XPCOM version checks at all. Oh beauty.

I fear a lot that people now using XPCOM might then wander off to using
those hooking facilities as well, which makes breakage and pain even
more likely ([startup] crashes are worse than incompatibility warnings)
and also makes them drop any support for non-Windows if they previously
might have had that. Not sure that's what we want to propagate.

Robert Kaiser

Simon Paquet

Jan 18, 2012, 11:53:03 AM1/18/12
to
Benjamin Smedberg wrote on 17. Jan 2012:

> [...]
>
> There are few possible alternatives for what we could do to make
> using binary components easier or harder:
>
> [...]
>
> B. Make no technical changes, but disallow binary components in
> release extensions on AMO. We may allow binary components in beta
> or experimental addons, with the understanding that these would
> need to be updated for each release.
>
> [...]
>
> == bsmedberg's Opinion ==
>
> I see binary components as a useful tool in very specific
> circumstances. Primarily, they are a good way to prototype and
> experiment with new features which require mucking about with
> Mozilla internals. But I tend to think that we should discourage
> their use in any production environment, including perhaps
> disallowing them on AMO. I tend to think we should consider
> option "B".

Just as a FYI:
Lightning, Thunderbird's most popular addon (nearly 20% of all TB users
use Lightning) with over 1.4m daily users on weekdays, uses binary
components extensively.

So going with option "B" would hurt the Thunderbird ecosystem a lot.

Lightning uses binary components to embed an external library
(libical) that is used to work with .ics files, which allow Lightning
users to use non-local calendars hosted on web servers, ftp servers,
CalDAV servers, Google Calendar, etc.

It may be that libical is used in other places as well. I'll wait
for the lead developer, Philipp Kewisch (Fallen on IRC) to chime in
here.

From what I know Philipp is already working on moving the Lightning
codebase to js-ctypes. The main problem that he grapples with is
AFAIK the memory management, particularly how to free() ctypes
components attached to js objects (there are no destructors!).

--
Simon

Steve Wendt

Jan 18, 2012, 1:41:14 PM1/18/12
to
On 1/17/2012 8:24 PM, Philip Chee wrote:

> Well here's the problem. Enigmail tried multiple times to get their IPC
> code into core but was rejected each time. Eventually they were
> grudgingly allowed to put their code into a separate repository
> somewhere in hg.mozilla.org where nobody can find it.

Yes, too often there's the attitude that "Firefox doesn't need that."
The same attitude has been there somewhat even for Firefox, when it is
for non-tier 1 platforms. The argument tends to be "we don't want our
code polluted with stuff we don't care about." I can certainly
understand that position, but things like this are the bad side of it.

Wes Garland

Jan 18, 2012, 2:32:56 PM1/18/12
to Simon Paquet, dev-pl...@lists.mozilla.org
>
> The main problem that he grapples with is
> AFAIK the memory management, particularly how to free() ctypes
> components attached to js objects (there are no destructors!).
>

I wrote an FFI for GPSEE (a FOSS SpiderMonkey embedding) before js-ctypes
was around and bumped into this same issue. We solved it by creating a
sort of C function closure by adding a boxing object to the return value of
FFI calls and using it to hook the finalization stage of the garbage
collector. You can do this for pretty much any function where you can
compute the cleanup before it needs to happen -- the classic examples being
malloc/free - so, we can write real-world code like this:

function mmap(filename)
{
  var fd = _open(filename, ffi.std.O_RDONLY, parseInt('0666'));
  if (fd == -1)
    throw(new Error(perror("Cannot open file " + filename)));

  try
  {
    var sb = new ffi.MutableStruct("struct stat");
    if (_fstat(fd, sb) != 0)
      throw(new Error(perror("Cannot stat file " + filename)));

    var mem = _mmap.call(null, sb.st_size, ffi.std.PROT_READ,
                         ffi.std.MAP_PRIVATE, fd, 0);
    if (mem == ffi.Memory(-1))
      throw(new Error(perror("Cannot mmap file " + filename)));

    mem.length = sb.st_size;
    mem.finalizeWith(_munmap, mem, sb.st_size);
    return mem;
  }
  finally
  {
    _close(fd);
  }
}

So, what happens in this case is that when the return value (mem) is
collected, munmap is invoked on the pointer and mapping size boxed by the
return value of _mmap.call(). The "C function closure" is created during
the finalizeWith() method, because it's too late to do it when the garbage
collector is running -- the private slot of the mem object will be
annotated with something like 'call function 0x1234 with argv [0xc0ffee,
800]'.

This technique could probably easily be adapted for use by js-ctypes if the
demand were there -- although it should be understood that this technique
also represents a pretty serious footgun.

Wes

--
Wesley W. Garland
Director, Product Development
PageMail, Inc.
+1 613 542 2787 x 102

Bobby Holley

Jan 18, 2012, 2:49:59 PM1/18/12
to Wes Garland, dev-pl...@lists.mozilla.org, Simon Paquet
We've considered putting this kind of thing into js-ctypes. Unfortunately,
since the GC doesn't guarantee any kind of finalization ordering, the
prospect of bugs resulting from destructors being called in unexpected
orders seemed quite high. If anyone has any clever ideas of how to do this
more effectively please let me know.

bholley

On Wed, Jan 18, 2012 at 11:32 AM, Wes Garland <w...@page.ca> wrote:

> >
> > The main problem that he grapples with is
> > AFAIK the memory management, particularly how to free() ctypes
> > components attached to js objects (there are no destructors!).
> >
>
> _______________________________________________
> dev-planning mailing list
> dev-pl...@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-planning
>

Wes Garland

Jan 18, 2012, 3:14:24 PM1/18/12
to Bobby Holley, dev-pl...@lists.mozilla.org, Simon Paquet
On 18 January 2012 14:49, Bobby Holley <bobby...@gmail.com> wrote:

> We've considered putting this kind of thing into js-ctypes. Unfortunately,
> since the GC doesn't guarantee any kind of finalization ordering, the
> prospect of bugs resulting from destructors being called in unexpected
> orders seemed quite high. If anyone has any clever ideas of how to do this
> more effectively please let me know.
>

AFAICT, there's not a whole lot of "more cleverly" you can do. You need to
select basic interfaces like fopen/fclose, open/close, mmap/munmap,
malloc/free, nothing which interacts with other objects, nothing which can
call back into JS.

Here is the key to making this technique workable: the FFI marshalling must
happen when the finalizer is declared, not later. This means that all
arguments are immediately turned into their C equivalents, ffi_prep_cif()
is invoked, etc, while the arguments are still guaranteed to be reachable.

You cannot hold on to references to your C data in places that are not
visible to the JS GC, but that goes for any embedding scenario where the JS
GC is allowed to manage non-JS storage, not just this scenario.

Philipp Kewisch

Jan 18, 2012, 6:04:07 PM1/18/12
to
Hi Folks,

In the case of Lightning we are actually working on getting rid of
binary components already, but only because the fast release pace is
making users unhappy: we need a new version every release, older releases
are incompatible, and shipping multiple release packages brings its own
problems. This rewrite is taking a substantial amount of time, since I am
in the process of rewriting libical from C to Javascript.

I've attempted to go the js-ctypes route, but as Simon (Paquet) said I
hit the problem of sticking ctypes refs on js objects and not being
able to destruct them. The workarounds that were posted sound like a
great hack to me, and that's not a route I would like to take. I really
would need to call a custom libical function to free the memory
correctly, since I believe they keep hold of their components too.

Another thing that is really bugging me about this is the impact this
change would impose on addon developers. With Lightning we have
had many situations in the past where the Mozilla Platform was changed
and Lightning was forced to do time-consuming changes to adapt. Just to
name a few: component manifest changes, ability to create new threads in
js (before ChromeWorkers were introduced), removing Components access in
ChromeWorkers, the new release process.

Most of those changes required us to do something totally different and
restructure the code. This is a big problem, not only for us but also
for other addon developers.

Now again a large change is being planned that will impact us and
cripple the heart of Lightning. We are only lucky this time that we have
started converting before the discussion came up. I really would like to
spend more time improving the product instead of running behind platform
changes.

The situation Simon (Kornblith) mentioned sounds so very familiar: you
are planning on changing things, but this will break things bad for the
one or other, resulting in either lots of work or ugly code, or both.

I also fear that stopping it on AMO is only a start. As others have
said, it's only a small part of binary addons that are on AMO. You won't
be fixing the problem by banning them from AMO, so at some point I could
imagine you'd just no longer load binary components from extensions as a
whole. Maybe I'm looking too far into the future here, I just don't
think stopping them at AMO is the right thing to do.

Although for once Lightning might not take the greatest hit (unless you
do this before Firefox 13), I really think you should desist.

Thanks for considering
Philipp

Alex Vincent

Jan 19, 2012, 4:15:44 PM1/19/12
to
On 1/17/2012 8:24 PM, Philip Chee wrote:
> Well here's the problem. Enigmail tried multiple times to get their IPC
> code into core but was rejected each time. Eventually they were
> grudgingly allowed to put their code into a separate repository
> somewhere in hg.mozilla.org where nobody can find it.
>
> Phil
>

http://hg.mozilla.org/ipccode/

Zack Weinberg

Jan 19, 2012, 5:31:24 PM1/19/12
to
I suppose I do. It seems to me that B2G would ultimately want that anyway.

A lot of the pushback from non-Firefox application authors seems to boil
down to "we have a lot of old crufty C++ and there isn't the manpower to
get it up to the current quality standard required for m-c" and/or
"there isn't enough interest from the Firefox side of things in
including code in m-c that Firefox doesn't require itself". Again, B2G
implies (to me) that both of those are going to have to change going
forward. A smartphone that can't access corporate email accounts is not
going to be appealing to a large segment of the market for smartphones.

zw

Blake Winton

Jan 19, 2012, 6:12:19 PM1/19/12
to
That doesn't mean that we need to get our current LDAP code up to m-c
quality, or include it in Firefox. (Or, more accurately, Gecko?)

We could rewrite it in Javascript.

We could run similar functionality on a (possibly Mozilla-owned) server.

We could tell B2G users that they should be using webmail. (And help
push that arena forward, perhaps by offering an easy-to-install
web-based front end to IMAP/SMTP servers.)

I, for one, would love to hear from the B2G folks about what their
non-SMS messaging plans are, but I suspect they are kinda busy these
days, and might not have had time to formulate any yet. :)

Later,
Blake.

Steve Wendt

Jan 19, 2012, 6:21:58 PM
On 1/19/2012 3:12 PM, Blake Winton wrote:

> That doesn't mean that we need to get our current LDAP code up to m-c
> quality, or include it in Firefox. (Or, more accurately, Gecko?)
>
> We could rewrite it in Javascript.

Pushing everything to Javascript seems like a recipe for disaster; there
have been plenty of cons listed in this thread.

> We could run similar functionality on a (possibly Mozilla-owned) server.
>
> We could tell B2G users that they should be using webmail. (And help
> push that arena forward, perhaps by offering an easy-to-install
> web-based front end to IMAP/SMTP servers.)

Seriously? That's just inviting failure.

Henri Sivonen

Jan 20, 2012, 3:04:13 AM
to dev-pl...@lists.mozilla.org
On Fri, Jan 20, 2012 at 12:31 AM, Zack Weinberg <za...@panix.com> wrote:
> A lot of the pushback from non-Firefox application authors seems to boil
> down to "we have a lot of old crufty C++ and there isn't the manpower to get
> it up to the current quality standard required for m-c" and/or "there isn't
> enough interest from the Firefox side of things in including code in m-c
> that Firefox doesn't require itself".

I think mixing non-Firefox apps into this broadens the problem in a
way that doesn't help with solving the actual problem. The actual
problem is 3rd-party native code touching the vtables of native code
shipped in Firefox (not Gecko-based app but *Firefox* specifically).

Since we aren't talking about getting rid of XPCOM altogether, we
could lock down the symbol exports and external dll load paths in
Firefox so that we'd no longer support 3rd-party native code touching
vtables shipped in Firefox directly (i.e. 3rd-party native code would
have to talk to native code contained in Firefox through a C-to-C API
[we have NPAPI in this category already], through C and jsctypes or so
that there's always XPConnected JS between a Mozilla-supplied C++
XPCOM object and a 3rd-party C++ XPCOM object [as Bobby suggested in
the other thread]) and still let non-Firefox Gecko-based apps use
XPCOM C++-to-C++.

In particular, when the "old crufty C++" is part of the build of the
non-Firefox Gecko-based app, it could continue to touch Gecko vtables
as much as it needs to if the non-Firefox app ships its own Gecko and
compiles Gecko and the additional app-specific C++ code together. Even
in this scenario, we could still shorten IIDs and stop revising
them.

Brian Smith

Jan 20, 2012, 4:37:26 AM
to Zack Weinberg, dev-pl...@lists.mozilla.org
Zack Weinberg wrote:
> On 2012-01-18 12:08 AM, Mike Hommey wrote:
> > Do you suggest that things like the ldap client implementation
> > would need to be in m-c's libxul?
>
> I suppose I do. It seems to me that B2G would ultimately want that
> anyway.

> A lot of the pushback from non-Firefox application authors seems to
> boil down to "we have a lot of old crufty C++ and there isn't the
> manpower to get it up to the current quality standard required for
> m-c" and/or "there isn't enough interest from the Firefox side of
> things in including code in m-c that Firefox doesn't require itself".

I am strongly on that side of the fence. We can't afford to include and maintain all kinds of libraries that some extension might someday need, beyond what we currently do. (Like I mentioned in the NSS/NSPR thread, it doesn't even seem realistic to me to continue indefinitely supporting much of the unused-by-Firefox code we currently ship.)

> Again, B2G implies (to me) that both of those are going to have to
> change going forward. A smartphone that can't access corporate
> email accounts is not going to be appealing to a large segment of
> the market for smartphones.

Are we going to try to standardize web APIs for LDAP/IMAP/SMTP/POP? Are we going to write an irreplaceable native code mail client for B2G? AFAICT, the answer to both of those questions is "no." I think, for B2G, we're going to have to provide a way to allow HTML5 apps to create raw socket connections, so that a pure HTML5 email application can do SMTP, IMAP, POP, and/or LDAP. (Who exactly is going to write the first pure JS email application that can do SMTP, IMAP, POP, and LDAP is another question. I guess this might be a very practical use case for C -> JS compilers.)

- Brian

Brian Smith

Jan 20, 2012, 4:44:47 AM
to Steve Wendt, dev-pl...@lists.mozilla.org
Steve Wendt wrote:
> On 1/19/2012 3:12 PM, Blake Winton wrote:
>
> > That doesn't mean that we need to get our current LDAP code up to
> > m-c quality, or include it in Firefox. (Or, more accurately, Gecko?)
> >
> > We could rewrite it in Javascript.
>
> Pushing everything to Javascript seems like a recipe for disaster;
> there have been plenty of cons listed in this thread.

AFAICT, there is no such thing as an addon or extension API in B2G at all. Every app is to be a regular HTML5 web app, that you will be able to run in the desktop browser too. They will use new web APIs that haven't been standardized (or even created) yet, with some way of handling permissions so that the user can let web apps he/she trusts do things that currently our platform doesn't allow because it is too dangerous to let any web app do them by default.

Also, keep in mind that in B2G, the user is supposed to be able to replace any built-in app with his own app (except maybe safety-critical things like the dialer). And, ideally, the built-in apps wouldn't have any advantage over user-created apps--in particular, no special access to ANY kind of API (LDAP or otherwise). The best way to ensure that is to have all the built-in apps, including *especially* the default email client, be pure HTML5 apps.

(I do agree that SMTP/IMAP/POP/LDAP email is a must-have for any serious B2G phone.)

- Brian

Robert Kaiser

Jan 20, 2012, 10:11:11 AM
Henri Sivonen schrieb:
> Since we aren't talking about getting rid of XPCOM altogether, we
> could lock down the symbol exports and external dll load paths in
> Firefox so that we'd no longer support 3rd-party native code touching
> vtables shipped in Firefox directly (i.e. 3rd-party native code would
> have to talk to native code contained in Firefox through a C-to-C API
> [we have NPAPI in this category already]

They'll go and use Windows Dll hooking facilities and hook their
libraries directly into our process, which is much worse than XPCOM as
there are no version checks for any interfaces at all there. That said,
a lot of 3rd-party software is already doing that anyhow and causing us
in the crash analysis department a lot of grief.

Robert Kaiser

Robert Kaiser

Jan 20, 2012, 10:14:11 AM
Zack Weinberg schrieb:
> A smartphone that can't access corporate email accounts is not
> going to be appealing to a large segment of the market for smartphones.

Do you suppose we should define open web standards for accessing LDAP,
IMAP, maybe POP, for sure MS Exchange Server from web apps in a generic
way? If so, putting those things into Gecko makes sense. If not, then it
has to be reimplemented in JS to be available in B2G, as AFAIK
everything any app in B2G does must be available to any web app
following open standards or at least specs that are on their way to
becoming such standards (see WebAPI).
I could be mistaken, but that's how I take the mission of this whole
project.

Robert Kaiser

Henri Sivonen

Jan 20, 2012, 10:47:27 AM
to dev-pl...@lists.mozilla.org
On Fri, Jan 20, 2012 at 5:11 PM, Robert Kaiser <ka...@kairo.at> wrote:
> Henri Sivonen schrieb:
>
>> Since we aren't talking about getting rid of XPCOM altogether, we
>> could lock down the symbol exports and external dll load paths in
>> Firefox so that we'd no longer support 3rd-party native code touching
>> vtables shipped in Firefox directly (i.e. 3rd-party native code would
>> have to talk to native code contained in Firefox through a C-to-C API
>> [we have NPAPI in this category already]
>
>
> They'll go and use Windows Dll hooking facilities and hook their libraries
> directly into our process, which is much worse than XPCOM as there are no
> version checks for any interfaces at all there.

How does Chrome cope with this? Their source is also available, so
presumably 3rd parties might think they can use their header files,
etc.

> That said, a lot of
> 3rd-party software is already doing that anyhow and causing us in the crash
> analysis department a lot of grief.

Right. Version check didn't save us with the Oracle Single Sign-On
thing, for example.

Zack Weinberg

Jan 20, 2012, 10:49:35 AM
On 2012-01-20 12:04 AM, Henri Sivonen wrote:
> On Fri, Jan 20, 2012 at 12:31 AM, Zack Weinberg<za...@panix.com> wrote:
>> A lot of the pushback from non-Firefox application authors seems to boil
>> down to "we have a lot of old crufty C++ and there isn't the manpower to get
>> it up to the current quality standard required for m-c" and/or "there isn't
>> enough interest from the Firefox side of things in including code in m-c
>> that Firefox doesn't require itself".
>
> I think mixing non-Firefox apps into this broadens the problem in a
> way that doesn't help with solving the actual problem. The actual
> problem is 3rd-party native code touching the vtables of native code
> shipped in Firefox (not Gecko-based app but *Firefox* specifically).

It may not help with the technical problem all that much, but making
sure we take the needs of non-Firefox Gecko-based apps into account
helps with the *social* problem which is that we have this ridiculous
laser focus on The Web which is actually *hurting* efforts to make The
Web eat the desktop space.

> Since we aren't talking about getting rid of XPCOM altogether

Oh, but I most definitely am.

zw

Zack Weinberg

Jan 20, 2012, 10:52:56 AM
On 2012-01-20 7:14 AM, Robert Kaiser wrote:
> Zack Weinberg schrieb:
>> A smartphone that can't access corporate email accounts is not
>> going to be appealing to a large segment of the market for smartphones.
>
> Do you suppose we should define open web standards for accessing LDAP,
> IMAP, maybe POP, for sure MS Exchange Server from web apps in a generic
> way?

I haven't thought about this in great detail. Long term, I prefer the
idea of finding some way to give web JS safe access to raw sockets; that
makes the web platform much more powerful, means we have less C++
exposed to data from the network, allows experimentation with entirely
new wire protocols, etc.

That is a really hard security problem, though; open standard web APIs
for commonly used protocols now might be the path of least resistance.

zw

Robert Kaiser

Jan 20, 2012, 11:01:00 AM
Henri Sivonen schrieb:
> On Fri, Jan 20, 2012 at 5:11 PM, Robert Kaiser<ka...@kairo.at> wrote:
>> They'll go and use Windows Dll hooking facilities and hook their libraries
>> directly into our process, which is much worse than XPCOM as there are no
>> version checks for any interfaces at all there.
>
> How does Chrome cope with this? Their source is also available, so
> presumably 3rd parties might think they can use their header files,
> etc.

Hmm, I have no idea, have never talked to them.

>> That said, a lot of
>> 3rd-party software is already doing that anyhow and causing us in the crash
>> analysis department a lot of grief.
>
> Right. Version check didn't save us with the Oracle Single Sign-On
> thing, for example.

Exactly. And a ton of others, esp. with "Security Suites".

Robert Kaiser

Henri Sivonen

Jan 21, 2012, 4:40:19 AM
to dev-pl...@lists.mozilla.org
On Fri, Jan 20, 2012 at 6:01 PM, Robert Kaiser <ka...@kairo.at> wrote:
>>> That said, a lot of
>>> 3rd-party software is already doing that anyhow and causing us in the
>>> crash
>>> analysis department a lot of grief.
>>
>> Right. Version check didn't save us with the Oracle Single Sign-On
>> thing, for example.
>
> Exactly. And a ton of others, esp. with "Security Suites".

So intentionally exposing XPCOM doesn't really help, since 3rd parties
crash us anyway.

What *might* help with DLL injection, though, would be C APIs that the
3rd-party DLLs could use. They could still corrupt memory, but at
least they wouldn't crash due to vtable changes.

Have we ever asked the "security suite" vendors for a list of things
they want to accomplish when they inject code into Firefox and
evaluated whether those use case could be satisfied with reasonable
effort by a C API made specifically for "security suite" integration?
That is, what kind of stable API would it take to make the "security
suite" vendors agree not to poke at Firefox's vtables?

(Unfortunately, "security suites" probably want to get at user-private
data, so having an API that exposes the private data to "security
suites" would probably make it super easy for malware to tap into the
private data feed, too. But then, malware can already get to plenty of
private data, so an explicit API probably wouldn't make things worse.
Maybe we could have a C callback-based API that accepts only one
function pointer per callback, so if malware manages to register its
callbacks first, the "security suite" would fail to register its
callbacks and could whine about it.)
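The "one function pointer per callback" idea could look something like this in plain C (a hypothetical sketch; the function and type names are invented for illustration and are not an existing Mozilla API):

```c
#include <stddef.h>

/* Hypothetical first-registrant-wins callback slot, as suggested above:
 * each integration point holds at most one function pointer, so a second
 * registrant gets an error instead of silently displacing the first. */

typedef void (*content_scan_cb)(const char *url, const void *data, size_t len);

static content_scan_cb g_scan_cb = NULL;

/* Returns 0 on success, -1 if the slot is already taken (or cb is NULL). */
int moz_register_content_scan(content_scan_cb cb) {
    if (cb == NULL || g_scan_cb != NULL)
        return -1;
    g_scan_cb = cb;
    return 0;
}

/* Called by the browser at the integration point; a no-op until someone
 * has registered. */
void moz_notify_content(const char *url, const void *data, size_t len) {
    if (g_scan_cb != NULL)
        g_scan_cb(url, data, len);
}
```

A registrant that loses the race can detect it from the return value and complain to the user, which is the "whine about it" behavior described above.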

Boris Zbarsky

Jan 22, 2012, 6:48:08 AM
On 1/20/12 4:47 PM, Henri Sivonen wrote:
> How does Chrome cope with this? Their source is also available, so
> presumably 3rd parties might think they can use their header files,
> etc.

For their chrome (supervisor) process, I don't know.

For their renderer processes, specifically on Windows, they start the
process in a "stopped" state of some sort, go through the process memory
before it starts running and block some set of DLLs trying to hook into
it (I think they started with a whitelist but had to switch to a
blacklist because apparently hooking into other processes is _very_
common on Windows), then actually start the process.

Or at least they did something like this back about two years ago, if I
understood correctly at the time.

-Boris

Henri Sivonen

Jan 23, 2012, 4:13:18 AM
to dev-pl...@lists.mozilla.org
On Sun, Jan 22, 2012 at 1:48 PM, Boris Zbarsky <bzba...@mit.edu> wrote:
> On 1/20/12 4:47 PM, Henri Sivonen wrote:
>>
>> How does Chrome cope with this? Their source is also available, so
>> presumably 3rd parties might think they can use their header files,
>> etc.
>
>
> For their chrome (supervisor) process, I don't know.
>
> For their renderer processes, specifically on Windows, they start the
> process in a "stopped" state of some sort, go through the process memory
> before it starts running and block some set of DLLs trying to hook into it
> (I think they started with a whitelist but had to switch to a blacklist
> because apparently hooking into other processes is _very_ common on
> Windows), then actually start the process.

Is the sandboxing a prerequisite for this? Is this something that
could be applied to Firefox in the short term by making a small exe
bootstrap the real Firefox process like this?

Mike Hommey

Jan 23, 2012, 4:19:06 AM
to Henri Sivonen, dev-pl...@lists.mozilla.org
On Mon, Jan 23, 2012 at 11:13:18AM +0200, Henri Sivonen wrote:
> On Sun, Jan 22, 2012 at 1:48 PM, Boris Zbarsky <bzba...@mit.edu> wrote:
> > On 1/20/12 4:47 PM, Henri Sivonen wrote:
> >>
> >> How does Chrome cope with this? Their source is also available, so
> >> presumably 3rd parties might think they can use their header files,
> >> etc.
> >
> >
> > For their chrome (supervisor) process, I don't know.
> >
> > For their renderer processes, specifically on Windows, they start the
> > process in a "stopped" state of some sort, go through the process memory
> > before it starts running and block some set of DLLs trying to hook into it
> > (I think they started with a whitelist but had to switch to a blacklist
> > because apparently hooking into other processes is _very_ common on
> > Windows), then actually start the process.
>
> Is the sandboxing a prerequisite for this? Is this something that
> could be applied to Firefox in the short term by making a small exe
> bootstrap the real Firefox process like this?

We already do something like this, except we don't check what is already
loaded, we only block further loadings. And that doesn't really require
sandboxing.

Mike
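The name-matching part of such a blocklist can be sketched in a few lines of platform-neutral C (a simplified illustration, not the actual Firefox blocklist code, which intercepts the Windows loader; the entries below are invented):

```c
#include <ctype.h>
#include <stddef.h>

/* Simplified sketch of a DLL blocklist check: the requested module name is
 * compared case-insensitively against a table of known-bad DLLs and the
 * load is refused on a match. Entries are illustrative only. */

static const char *const kBlockedDlls[] = {
    "badhook.dll",
    "crashy_toolbar.dll",
    NULL,
};

/* Case-insensitive ASCII comparison, since Windows module names are
 * case-insensitive. */
static int names_equal(const char *a, const char *b) {
    while (*a && *b) {
        if (tolower((unsigned char)*a) != tolower((unsigned char)*b))
            return 0;
        ++a;
        ++b;
    }
    return *a == *b;
}

/* Returns 1 if the requested module should be refused, 0 otherwise. */
int is_dll_blocked(const char *module_name) {
    for (size_t i = 0; kBlockedDlls[i] != NULL; ++i) {
        if (names_equal(module_name, kBlockedDlls[i]))
            return 1;
    }
    return 0;
}
```

As Mike notes, this kind of check only catches loads that go through the intercepted path after startup; it does not scan what was injected before the hook was installed.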

Ted Mielczarek

Jan 23, 2012, 7:45:16 AM
to Henri Sivonen, dev-pl...@lists.mozilla.org
On Fri, Jan 20, 2012 at 10:47 AM, Henri Sivonen <hsiv...@iki.fi> wrote:
>> They'll go and use Windows Dll hooking facilities and hook their libraries
>> directly into our process, which is much worse than XPCOM as there are no
>> version checks for any interfaces at all there.
>
> How does Chrome cope with this? Their source is also available, so
> presumably 3rd parties might think they can use their header files,
> etc.

I would assume that Chrome has less of a problem with this because
AFAIK they've never exposed a C++ API (not counting NPAPI). With Gecko
exposing XPCOM, it's not that hard to get a hold of various internal
bits of our engine and cast your way to certain doom. In Chrome, I
don't know how you'd get a hold of their internal C++-implemented
classes without grovelling around in the processes' address space.

-Ted
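The fragility behind "cast your way to certain doom" can be shown with a toy model of a vtable in plain C (names invented; real XPCOM vtables are generated by the C++ compiler): a binary component compiled against an old interface layout addresses methods by slot index, so inserting a method shifts the later slots and the stale index silently calls the wrong function.

```c
/* Toy model of vtable fragility: function-pointer tables standing in for
 * compiler-generated C++ vtables. All names are invented for illustration. */

typedef int (*method_fn)(void);

static int do_open(void)  { return 1; }
static int do_close(void) { return 2; }
static int do_flush(void) { return 3; }  /* method added in a new release */

/* "Old" interface layout: slot 0 = Open, slot 1 = Close. */
static method_fn const old_vtbl[] = { do_open, do_close };

/* "New" layout after an interface revision inserts Flush at slot 1. */
static method_fn const new_vtbl[] = { do_open, do_flush, do_close };

/* A binary component compiled against the old layout hard-codes slot 1
 * when it means "Close" -- against the new layout it calls Flush instead,
 * without any error or version check. */
static int old_binary_calls_close(method_fn const *vtbl) {
    return vtbl[1]();
}
```

This is why recompiling binary components against each release (or revising IIDs) matters for C++ consumers, while a flat C API with fixed symbol names stays stable across such changes.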

Benjamin Smedberg

Jan 23, 2012, 10:06:27 AM
to Brian Smith, dev-pl...@lists.mozilla.org, Steve Wendt
On 1/20/12 4:44 AM, Brian Smith wrote:
>
> AFAICT, there is no such thing as an addon or extension API in B2G at all. Every app is to be a regular HTML5 web app, that you will be able to run in the desktop browser too. They will use new web APIs that haven't been standardized (or even created) yet, with some way of handling permissions so that the user can let web apps he/she trusts do things that currently our platform doesn't allow because it is too dangerous to let any web app do them by default.
This is not precisely true; there will almost certainly be an extension
API (either jetpack or something similar) for the B2G browser at least.
But it will not involve native code at all, just HTML/JS.

--BDS

Robert Kaiser

Jan 23, 2012, 2:01:31 PM
Mike Hommey schrieb:
> We already do something like this, except we don't check what is already
> loaded, we only block further loadings. And that doesn't really require
> sandboxing.

I thought I read recently that our DLL blocklist stuff is ineffective
when dealing with the Windows hooking mechanisms.

Robert Kaiser

Ehsan Akhgari

Jan 23, 2012, 2:10:03 PM
to Robert Kaiser, dev-pl...@lists.mozilla.org
It does not cover all of the possible hooking mechanisms, yes. But it does
cover the vast majority of them.

--
Ehsan
<http://ehsanakhgari.org/>

Philipp von Weitershausen

Jan 23, 2012, 5:03:12 PM
to Brian Smith, dev-pl...@lists.mozilla.org, Steve Wendt
On Fri, Jan 20, 2012 at 1:44 AM, Brian Smith <bsm...@mozilla.com> wrote:
> AFAICT, there is no such thing as an addon or extension API in B2G at all.

Correct. If it gets one, it will be a Web API that we will be
trying to standardize (cf. bsmedberg's reply). It's important to
understand that in B2G, Gecko is part of the OS. It's not a program
that the user has much control over.

> Every app is to be a regular HTML5 web app, that you will be able to run in the desktop browser too. They will use new web APIs that haven't been standardized (or even created) yet, with some way of handling permissions so that the user can let web apps he/she trusts do things that currently our platform doesn't allow because it is too dangerous to let any web app do them by default.
>
> Also, keep in mind that in B2G, the user is supposed to be able to replace any built-in app with his own app (except maybe safety-critical things like the dialer). And, ideally, the built-in apps wouldn't have any advantage over user-created apps--in particular, no special access to ANY kind of API (LDAP or otherwise). The best way to ensure that is to have all the built-in apps, including *especially* the default email client, be pure HTML5 apps.
>
> (I do agree that SMTP/IMAP/POP/LDAP email is a must-have for any serious B2G phone.)

I disagree. There's no reason email delivery can't use HTTP. In fact,
let me say that I will call it a success if B2G will contribute even a
tiny bit to HTTP displacing more of SMTP/IMAP/POP/...

That said, your portrayal of the B2G platform is correct. Apps that
get to run on B2G are essentially websites using web APIs. That means
if there's functionality that isn't covered by a web API yet then it's
our plan to create one and standardize it. I'm skeptical as to whether
that should include new ways of communicating with web servers, even
if it's to accommodate legacy protocols.

Robert Kaiser

Jan 24, 2012, 10:21:59 AM
Ehsan Akhgari schrieb:
> It does not cover all of the possible hooking mechanisms, yes. But it does
> cover the vast majority of them.

OK, good, that's somewhat better than what I thought it would be, after
all. ;-)

Robert Kaiser