
Binary components, embedding, XPCOM and Mozilla 2


Benjamin Smedberg

Feb 8, 2008, 5:13:31 PM
Following up on a bunch of discussions in email as well as some hanging
discussions in the "Thoughts on next release after Gecko 1.9" thread, I
wrote up where I think binary XPCOM ought to be going in Mozilla 2.

http://wiki.mozilla.org/Mozilla_2/XPCOM_and_Binary_Embedding

Please direct thoughts and comments to the mozilla.dev.platform list.

--BDS

Doug Turner

Feb 8, 2008, 5:30:50 PM
to Benjamin Smedberg, dev-pl...@lists.mozilla.org

My worry is that not exposing the "whole platform", frozen or not,
will restrict the "next great app" from doing what they need to do.

In FF3 and below, we have this idea of "frozen" and "non-frozen"
interfaces. If an interface was frozen, consumers could trust that
it wouldn't change. In this new world you are proposing, all unfrozen
interfaces would be hidden and impossible to use (maybe I am
misreading). If this is the case, I would like to point out that
there are no browsers (FF included) using only frozen interfaces.

> _______________________________________________
> dev-platform mailing list
> dev-pl...@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform

Benjamin Smedberg

Feb 8, 2008, 5:35:32 PM
Doug Turner wrote:
>
> My worry is that not exposing the "whole platform", frozen or not, will
> restrict the "next great app" from doing what they need to do.
>
> In FF3 and below, we have this idea of "frozen" and "non-frozen"
> interfaces. If an interface was frozen, consumers could trust that it
> wouldn't change. In this new world you are proposing, all unfrozen
> interfaces would be hidden and impossible to use (maybe I am
> misreading). If this is the case, I would like to point out that there
> are no browsers (FF included) using only frozen interfaces.

To be precise:

I'm proposing that no XPCOM interfaces would be exposed at all.

The embedding layer would expose APIs that are decoupled from XPCOM. Those
would not necessarily need to be "frozen", though they would hopefully be
more stable in general.

In general I think that binary compatibility across major versions hasn't
been all that important for embedders and we shouldn't design around it...
we should design around source compatibility where feasible.

--BDS

Richard Crowley

Feb 8, 2008, 5:51:03 PM
to Doug Turner, Benjamin Smedberg, dev-pl...@lists.mozilla.org
I like where js-ctypes is headed. Being able to call functions within
GraphicsMagick or FFmpeg from JavaScript will be beyond cool. That
seems to be a long way off (is it really?). In the meantime I (like
Doug) tend to favor more exposure (at the expense of
simplicity/brevity/shallowness-of-learning-curve) to avoid things like
concatenating a bunch of strings together with a delimiter to get around
the lack of nsIArray in the SDK. Ultimately there has to be a way to
get to native code in the event that js-ctypes can't handle it, you need
better performance, or some C API is just too awkward to lug around in
JavaScript.
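For illustration, here is roughly what such a direct foreign-function call looks like in Python's ctypes, the library the early js-ctypes prototype was modeled on (the js-ctypes API itself was not yet settled at this point, so this is a sketch of the idea, not of any shipped Mozilla API):

```python
# Minimal FFI sketch using Python's ctypes, the library the js-ctypes
# prototype was modeled on: load the C math library and call sqrt()
# directly, with no XPCOM interface or glue code in between.
import ctypes
import ctypes.util

libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")
libm.sqrt.restype = ctypes.c_double     # declare the C signature:
libm.sqrt.argtypes = [ctypes.c_double]  # double sqrt(double)

print(libm.sqrt(2.0))  # 1.4142135623730951
```

The same shape of code, in JS, is what would let script touch GraphicsMagick or FFmpeg without a compiled XPCOM component in between.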

Rich

Doug Turner wrote:
> My worry is that not exposing the "whole platform", frozen or not,
> will restrict the "next great app" from doing what they need to do.
>
> In FF3 and below, we have this idea of "frozen" and "non-frozen"
> interfaces. If an interface was frozen, consumers could trust that
> it wouldn't change. In this new world you are proposing, all unfrozen
> interfaces would be hidden and impossible to use (maybe I am
> misreading). If this is the case, I would like to point out that
> there are no browsers (FF included) using only frozen interfaces.

> On Feb 8, 2008, at 2:13 PM, Benjamin Smedberg wrote:
>

Doug Turner

Feb 8, 2008, 6:06:54 PM
to Benjamin Smedberg, dev-pl...@lists.mozilla.org

I do not think this is about binary compatibility so much as
extensibility. As an embedder, although the internal guts of Mozilla
are, well, not perfect..., I am very happy to know I can access the
various subsystems if and when required. This is something you just
can't really do with WebKit or Trident.

Will we always stay ahead of the game by providing the right stuff in
the form of our embeddable widget? I think not, and would advise
allowing embedders to gain access to the component manager so that
they can access everything if need be. We aren't guaranteeing that
stuff there works between builds, but we are ensuring that they will
not be left out because they do more than your basic browser.

I don't really want to call into question the reasons behind this
change. But I do want to assert -- do not break extensibility lightly.

Doug Turner

Sergey Yanovich

Feb 9, 2008, 11:52:53 AM
Benjamin Smedberg wrote:
> Following up on a bunch of discussions in email as well as some hanging
> discussions in the "Thoughts on next release after Gecko 1.9" thread, I
> wrote up where I think binary XPCOM ought to be going in Mozilla 2.
>
> http://wiki.mozilla.org/Mozilla_2/XPCOM_and_Binary_Embedding

The article doesn't answer certain questions, and the answers don't
look obvious:

> In Mozilla 2, the nature of XPCOM is changing significantly. It will no longer expose a stable ABI.

What prevents the new ABI from being frozen for the Firefox 4 release?

How will this type of instability be addressed inside Mozilla?

> Both of these use-cases can and should be solved with a FFI library which is exposed to JavaScript. See mfinkle's post about an early prototype based on python ctypes. Solutions such as SWIG may also be useful for scripting more complex C++ APIs.

It is currently possible to call other components from a component,
irrespective of how each is implemented (native or JS). The quoted
paragraph addresses JS-to-native calls, but not native-to-native or
native-to-JS.

> There are significant costs of usably exposing C++ APIs through a shared library interface, even if we don't promise any binary compatibility. Every function in libxul that calls an exported function pays a price in terms of PIC code and startup time.

How big is the performance gain?

Are you proposing to ditch nsComponentManager and friends?

What is going to replace them?

Do you plan to preserve 'Abstract Factory' design pattern?

> In addition, we may be able to make more dramatic changes to the Mozilla binary code if we don't have to expose any of it directly to the outside world.

Nothing at all prevents you from doing any change to source or binary
code, as long as you don't care about compatibility.

--
Sergey Yanovich

smaug

Feb 10, 2008, 12:52:02 PM


libxul already made it significantly harder to write binary extensions.
Some needed interfaces weren't exported in the beginning, and those had
to be added later so that functionality similar to 1.8 could be
achieved. I'm pretty sure there are still some interfaces that can be
used with 1.8 but aren't available to 1.9 binary extensions.

Extensions must be able to use native interfaces; otherwise pretty much
all the interfaces must be made scriptable. Access to just DOM
interfaces is far from being enough. XForms, X+V and DCCI wouldn't
be possible with just DOM.
And of those X+V and DCCI aren't something from AMO. People are
actually using Mozilla as a platform for research because of its
extensibility.

We're actually missing some interfaces for extensions: some way to add
new nsIFrame types or some other graphics stuff, for example. A
scriptable version of nsIMutationObserver/nsIDocumentObserver might be
useful for many extensions...

I don't care about binary compatibility between major versions, but
I really don't want us to limit the extensibility of the platform.


-Olli

Benjamin Smedberg

Feb 11, 2008, 9:47:33 AM
Sergey Yanovich wrote:

>> In Mozilla 2, the nature of XPCOM is changing significantly. It will no
>> longer expose a stable ABI.
>
> What prevents the new ABI from being frozen for the Firefox 4 release?

In short, we don't want to... mozilla 2 is just one step on a long path
towards refactoring the mozilla codebase. The costs of having a stable API
are much higher than the potential benefits:

* Garbage collection technology will continue to change. For Mozilla 2 we
will be using a conservative garbage collector. Eventually we would like to
use exact garbage collection, first on the heap and then on the stack as well.

* We will be gradually rewriting large parts of the Mozilla codebase (using
automated tools) into JS2 from C++. As this happens, many of the existing
XPCOM interfaces may no longer be relevant... the primary programming
interface for mozilla will be JS2, not C++.

* Exception handling will be changing. We've already decided not to use C++
exceptions for mozilla2 (firefox 4) but will pursue that goal for a later
release (mozilla 2.1).

* Most extension authors don't care much about binary compatibility. They
can re-compile a new version of their extension for Firefox 5 and use
maxVersion/minVersion checks to ensure that the extension stays compatible.
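For context, the minVersion/maxVersion checks mentioned above live in an extension's install.rdf manifest. A typical targetApplication fragment (the GUID is Firefox's real application ID; the version numbers are illustrative) looks roughly like:

```xml
<!-- install.rdf fragment: pins the Firefox versions an extension claims
     compatibility with; outside this range the extension manager
     disables it until the author retests/recompiles and bumps maxVersion. -->
<em:targetApplication>
  <Description>
    <em:id>{ec8030f7-c20a-464f-9b0e-13a3a9e97384}</em:id> <!-- Firefox -->
    <em:minVersion>3.0</em:minVersion>
    <em:maxVersion>4.0.*</em:maxVersion>
  </Description>
</em:targetApplication>
```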

> How will this type of instability be addressed inside Mozilla?

Note that you asked about ABI, not API. Within the Mozilla codebase itself
we are combining several approaches:

* source-level compatibility: the binary layout of classes is changing, but
the source-code for using the classes is still available.

* automatic rewriting: automatic rewriting tools based on elsa can convert
code patterns from one to another across the entire code base.

* static checking: a GCC plugin interface allows us to write static checking
rules for C++ code, so that bad patterns in our codebase can be noted at
compile-time as warnings or errors.

>> Both of these use-cases can and should be solved with a FFI library
>> which is exposed to JavaScript. See mfinkle's post about an early
>> prototype based on python ctypes. Solutions such as SWIG may also be
>> useful for scripting more complex C++ APIs.
>
> It is currently possible to call other components from a component,
> irrespective of how each is implemented (native or JS). The quoted
> paragraph addresses JS-to-native calls, but not native-to-native or
> native-to-JS.

ctypes allows script code to implement function pointers for JS->C calls as
well as C->JS callbacks.
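Both directions can be sketched with Python's ctypes, which the js-ctypes prototype was based on: the script calls into C (libc's qsort), and C calls back into a script-implemented function pointer (the comparator):

```python
# Sketch of both call directions via ctypes: script->C (calling qsort)
# and C->script (qsort invoking a comparator implemented in Python).
import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c") or "libc.so.6")

# C function-pointer type: int (*)(const int *, const int *)
CMPFUNC = ctypes.CFUNCTYPE(ctypes.c_int,
                           ctypes.POINTER(ctypes.c_int),
                           ctypes.POINTER(ctypes.c_int))

def compare(a, b):
    # Called *from C* for every comparison qsort makes.
    return a[0] - b[0]

values = (ctypes.c_int * 5)(5, 1, 4, 2, 3)
libc.qsort(values, len(values), ctypes.sizeof(ctypes.c_int), CMPFUNC(compare))
print(list(values))  # [1, 2, 3, 4, 5]
```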

We can also promote the JSAPI as a binary code interface mechanism. That's
how we would implement SWIG-like wrappers in any case, I'd think.

> How big is the performance gain?

Between 5 and 20%, depending on the surface of functions you're exposing.
There are four major factors:

* the number of functions you have to export from your DSO (startup cost)
* the number of times an exported function is referenced from a vtable
(startup cost)
* the number of functions which call an exported function (codesize)
* the number of times an exported function is called (runtime overhead,
pipeline stall)

> Are you proposing to ditch nsComponentManager and friends?
>
> What is going to replace them?
>
> Do you plan to preserve 'Abstract Factory' design pattern?

For Mozilla2 we are not "ditching" the component manager or abstract factory
design patterns. nsIComponentManager, nsIServiceManager, etc. will continue
to exist in a form that is almost unchanged, except for the fact that there
is no reference counting ;-) but the primary design goal will be changing
from binary code compatibility to JS code compatibility. Therefore we could
add or remove methods from interfaces as long as those changes don't affect
JS compatibility, though such changes would at least require a C++ recompile.

As we move forward, we may end up rewriting core services such as the
component manager in JS2, and have the C++ reflection of XPCOM be the
exception instead of the rule.
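The pattern being preserved here, contract-ID lookup through a factory plus QueryInterface, can be sketched in a few lines. Python is used as neutral pseudocode; every name below is illustrative, not real XPCOM API:

```python
# Toy sketch of XPCOM's component-manager / abstract-factory pattern:
# components register under a contract ID, callers create instances by
# contract ID and ask for an interface via QueryInterface. All names
# here are illustrative, not real XPCOM API.

class nsIObserver:
    """Stand-in for an XPCOM interface."""
    def observe(self, subject, topic, data): ...

class ConsoleObserver(nsIObserver):
    """Concrete component; callers never name this class directly."""
    def observe(self, subject, topic, data):
        return f"[{topic}] {data}"
    def query_interface(self, iface):
        if isinstance(self, iface):
            return self
        raise TypeError("NS_ERROR_NO_INTERFACE")

_registry = {}  # contract ID -> factory callable

def register_factory(contract_id, factory):
    _registry[contract_id] = factory

def create_instance(contract_id, iface):
    # The abstract factory: look up the factory, build, then QI.
    return _registry[contract_id]().query_interface(iface)

register_factory("@example.org/console-observer;1", ConsoleObserver)
obs = create_instance("@example.org/console-observer;1", nsIObserver)
print(obs.observe(None, "startup", "ok"))  # [startup] ok
```

The caller depends only on the contract ID and the interface, which is the decoupling Sergey's questions are about.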

>> In addition, we may be able to make more dramatic changes to the
>> Mozilla binary code if we don't have to expose any of it directly to
>> the outside world.
>
> Nothing at all prevents you from doing any change to source or binary
> code, as long as you don't care about compatibility.

This is a facile statement. Of course we care about *some* kinds of
compatibility:

* Compatibility with the web. Must-preserve
* Binary compatibility with core NPAPI. Must-preserve.
* Compatibility with extension JS code. Should preserve with few exceptions.
* Source compatibility with extension C++... I'm arguing we should provide
alternatives and move away from this model
* Binary compatibility with extension C++: will not preserve

--BDS

Sergey Yanovich

Feb 11, 2008, 12:37:28 PM
Thank you for the detailed and cooperative response. It has cleared up
most of my concerns.

Benjamin Smedberg wrote:


> Sergey Yanovich wrote:
>> What prevents the new ABI from being frozen for the Firefox 4 release?
>
> In short, we don't want to... mozilla 2 is just one step on a long path
> towards refactoring the mozilla codebase. The costs of having a stable API
> are much higher than the potential benefits:
>

> [...]


>
> * We will be gradually rewriting large parts of the Mozilla codebase (using
> automated tools) into JS2 from C++. As this happens, many of the existing
> XPCOM interfaces may no longer be relevant... the primary programming
> interface for mozilla will be JS2, not C++.
>

> [...]


>
>> How will this type of instability be addressed inside Mozilla?
>
> Note that you asked about ABI, not API. Within the Mozilla codebase itself
> we are combining several approaches:
>
> * source-level compatibility: the binary layout of classes is changing, but
> the source-code for using the classes is still available.
>
> * automatic rewriting: automatic rewriting tools based on elsa can convert
> code patterns from one to another across the entire code base.
>

> [...]
>

Right, I asked about the ABI, and specifically the Firefox 4 release
freeze. It is nice to see thinking for years ahead, but I was more
concerned about the immediate steps. The highlighted points are just
what I wished to hear.

>> It is currently possible to call other components from a component,
>> irrespective of how each is implemented (native or JS). The quoted
>> paragraph addresses JS-to-native calls, but not native-to-native or
>> native-to-JS.
>
> ctypes allows script code to implement function pointers for JS->C calls as
> well as C->JS callbacks.
>
> We can also promote the JSAPI as a binary code interface mechanism.
> That's how we would implement SWIG-like wrappers in any case, I'd think.
>

IIUC, js-ctypes uses libffi, which works with platform-specific DSOs.
This means it is necessary to create a native DSO in JS to use js-ctypes
(or libffi, to be more precise) on a JS object from native code.

I am not asking out of pure curiosity. The ability to freely mix native
and interpreted code makes Mozilla so attractive as a platform. And this
feature is widely used inside the Mozilla tree, starting from the default
command line handler. This works very transparently at the moment, and
it is highly desirable that it continue to do so. As long as this is
true, any changes are fine for me.

>> How big is the performance gain?
>
> Between 5 and 20%, depending on the surface of functions you're
> exposing. There are four major factors:
>
> * the number of functions you have to export from your DSO (startup cost)
> * the number of times an exported function is referenced from a vtable
> (startup cost)
> * the number of functions which call an exported function (codesize)
> * the number of times an exported function is called (runtime overhead,
> pipeline stall)
>
>> Are you proposing to ditch nsComponentManager and friends?
>>
>> What is going to replace them?
>>
>> Do you plan to preserve 'Abstract Factory' design pattern?
>
> For Mozilla2 we are not "ditching" the component manager or abstract
> factory design patterns. nsIComponentManager, nsIServiceManager, etc.
> will continue to exist in a form that is almost unchanged, except for
> the fact that there is no reference counting ;-) but the primary design
> goal will be changing from binary code compatibility to JS code
> compatibility. Therefore we could add or remove methods from interfaces
> as long as those changes don't affect JS compatibility, though such
> changes would at least require a C++ recompile.
>
> As we move forward, we may end up rewriting core services such as the
> component manager in JS2, and have the C++ reflection of XPCOM be the
> exception instead of the rule.
>

I intentionally grouped these questions. I suspect that interpreting JS2
is going to be several times (or even orders of magnitude) slower than
C++ vtable dereferencing. Bytecode compilation is also usually more
expensive than DSO export. Trading speed for ease of maintenance is
fine, but nsIComponentManager is the thing that makes Mozilla code easy
to use and maintain. It decouples everything from everything else.

IIUC, the Mozilla ABI equals nsISupports. After XPCOMGC removes
refcounting, it will contain only QueryInterface. It may cease to be an
ABI, but QI + CreateInstance* as APIs are very important for platform
extensibility.

>> Nothing at all prevents you from doing any change to source or binary
>> code, as long as you don't care about compatibility.
>
> This is a facile statement. Of course we care about *some* kinds of
> compatibility:

I neither mean any offense here, nor question the motivation of the
proposed changes. I just wanted to say that it is not necessary to hide
the code in order to change it.

John J. Barton

Feb 11, 2008, 1:37:29 PM
Benjamin Smedberg wrote:
> * We will be gradually rewriting large parts of the Mozilla codebase
> (using automated tools) into JS2 from C++. As this happens, many of the
> existing XPCOM interfaces may no longer be relevant... the primary
> programming interface for mozilla will be JS2, not C++.

I'm curious about this decision, specifically whether the performance
implications have been tested experimentally. I'm guessing that this
decision may be driven by the Tamarin JIT, along the lines of: "it
generates code comparable to a compiler". That was the idea behind Java
HotSpot as well, but the practical result did not match expectations,
perhaps because crossing from JIT code to the OS is more expensive than
C++ to the OS, and lots of such crossings are in the C++ layer. (To be
clear, moving as much code to JS as possible is great.)

John.

Jonas Sicking

Feb 15, 2008, 6:51:33 PM

I agree with smaug. I think it's perfectly ok that we're giving up on
ABI compatibility between versions, but I don't see that there is a big
win in actively preventing people from using our internal APIs.

Why not treat the currently frozen APIs the way we treat internal APIs
today? I.e., we freely modify them between major releases without caring
about external code. We encourage people not to use them, but we don't
actively prevent them from doing so. We do, however, keep them stable
for dot-releases as best we can.

/ Jonas

Benjamin Smedberg

Feb 22, 2008, 7:49:18 PM
Sergey Yanovich wrote:

>> We can also promote the JSAPI as an binary code interface mechanism.
>> That's how we would implement SWIG-like wrappers in any case, I'd think.
>>
> IIUC, js-ctypes uses libffi, which works with platform-specific DSOs.
> This means it is necessary to create a native DSO in JS to use js-ctypes
> (or libffi, to be more precise) on a JS object from native code.

I'm not sure what this paragraph means. libffi can do interactions in both
directions using function pointers, or at least the python version can.

> I am not asking out of pure curiosity. The ability to freely mix native
> and interpreted code makes Mozilla so attractive as a platform. And this
> feature is widely used inside the Mozilla tree, starting from the default
> command line handler. This works very transparently at the moment, and
> it is highly desirable that it continue to do so. As long as this is
> true, any changes are fine for me.

If the question is "will it be possible to write commandline handlers in
binary code"?... probably not without some JS glue. I think that we should
focus on making the interface good with pre-existing external libraries:
imagemagick, or VLC, or some proprietary DLL. The primary language for
interfacing with Mozilla will be JS.

> I intentionally grouped these questions. I suspect that interpreting JS2
> is going to be several times (or even orders of magnitude) slower than
> C++ vtable dereferencing. Bytecode compilation is also usually more
> expensive than DSO export. Trading speed for ease of maintenance is
> fine, but nsIComponentManager is the thing that makes Mozilla code easy
> to use and maintain. It decouples everything from everything else.

Well... JS performance is already pretty fast, partly because it is possible
to code at a higher level of abstraction. But the tracing work in Tamarin
should make it possible, at least in theory, for JS code to be faster than
the equivalent compiled code, because you can focus optimization energy on
the actual hot loops being taken.

This is a technique which has so far largely been explored in academia and
hasn't been proven in the wild yet; but we're putting a fairly big bet on it.

> IIUC, the Mozilla ABI equals nsISupports. After XPCOMGC removes
> refcounting, it will contain only QueryInterface. It may cease to be an
> ABI, but QI + CreateInstance* as APIs are very important for platform
> extensibility.

Yes, the concept of dynamic QI and creating objects/services by contract
will remain.

--BDS

Benjamin Smedberg

Feb 22, 2008, 10:36:49 PM
smaug wrote:

> Extensions must be able to use native interfaces; otherwise pretty much
> all the interfaces must be made scriptable. Access to just DOM
> interfaces is far from being enough. XForms, X+V and DCCI wouldn't
> be possible with just DOM.

Because the DOM APIs perform too slowly, or because the information simply
isn't available? What would it cost to make the information available to
script (just chrome script, even)? Can you come up with a reasonable plan
of attack to expose that information?

> And of those X+V and DCCI aren't something from AMO. People are
> actually using Mozilla as a platform for research because of its
> extensibility.
>
> We're actually missing some interfaces for extensions: some way to add
> new nsIFrame types or some other graphics stuff, for example. A
> scriptable version of nsIMutationObserver/nsIDocumentObserver might be
> useful for many extensions...
>
> I don't care about binary compatibility between major versions, but
> I really don't want us to limit the extensibility of the platform.

To address your comment and sicking's reply together:

Extensibility is valuable. It is also costly. We're not going to go out of
our way to make binary XPCOM impossible. But we need to change the current
model of an API being a virtual method on a class. If we can gain code
readability or performance by deCOMtaminating content, we should do that...
even if it means that XForms can't use it any more. For example, we can
probably reduce the size of most content objects by one or two words by
removing the nsIDOM* interfaces which exist only for scriptability and
providing direct script implementations instead (similar to how WebKit
implements scriptable helpers).

To take another of your examples in particular: a general mechanism for
extensible frame types would cause unnecessary complication and performance
penalties in our codebase. This is not the kind of extensibility we should
ever impose on our platform. Instead, we should design good generic ways for
authors to create new layout features: canvas and SVG are both good at
drawing arbitrary content, and I believe you could implement all of mathml
in script using XBL2+SVG, and xforms using only XBL2. (Of course, that
assumes that we can get XBL2 done for Mozilla2, which isn't clear.)

And, if we do end up with XPCOM interfaces being available in mozilla2 for
binary components, we should not make promises about their stability even
within dot-releases.

--BDS

Boris Zbarsky

Feb 22, 2008, 11:39:19 PM
to Benjamin Smedberg
Benjamin Smedberg wrote:
>> Extensions must be able to use native interfaces; otherwise pretty
>> much all the interfaces must be made scriptable. Access to just DOM
>> interfaces is far from being enough. XForms, X+V and DCCI wouldn't
>> be possible with just DOM.
>
> Because the DOM APIs perform too slowly, or because the information
> simply isn't available? What would it cost to make the information
> available to script (just chrome script, even)? Can you come up with a
> reasonable plan of attack to expose that information?

I'd like to second that. For a while now I've wanted a good list of the
non-scriptable content/layout APIs that are used by extensions. The way I see
it, any time someone has to use nsIContent we lose (if nothing else, because we
are _not_ freezing anything like that, no matter what we do with Mozilla 2,
XPCOM, embedding, binary components, or whatnot).

-Boris

Boris Zbarsky

Feb 22, 2008, 11:40:39 PM
Boris Zbarsky wrote:
> Benjamin Smedberg wrote:
>>> Extensions must be able to use native interfaces; otherwise pretty
>>> much all the interfaces must be made scriptable. Access to just DOM
>>> interfaces is far from being enough. XForms, X+V and DCCI wouldn't
>>> be possible with just DOM.
>>
>> Because the DOM APIs perform too slowly, or because the information
>> simply isn't available? What would it cost to make the information
>> available to script (just chrome script, even)? Can you come up with
>> a reasonable plan of attack to expose that information?
>
> I'd like to second that.

Just to make that clear, I'd want just a list of the things the DOM is
insufficient for. I'm pretty happy to help out with the other parts of what
Benjamin is asking for, as needed, but at the moment I don't think there's even
a clear picture of what's missing from the DOM API.

-Boris

Axel Hecht

Mar 4, 2008, 8:10:33 AM

From my experience back in my XSLT days, the DOM APIs worked but were
just dog slow. Just moving to nsIContent and doing atom compares instead
of string compares gave us performance improvements of an order of
magnitude, if not more.
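The speedup Axel describes comes from interning: an atom compare is a single pointer comparison, while a string compare walks characters. A small illustration, using Python's sys.intern as a stand-in for Gecko's atom table (the tag names are arbitrary examples):

```python
# Why atom compares beat string compares: interned strings ("atoms")
# can be compared by identity, a single pointer comparison, instead of
# character by character. sys.intern stands in for Gecko's atom table.
import sys

tag_a = sys.intern("blockquote")
tag_b = sys.intern("".join(["block", "quote"]))  # built at runtime, then atomized

# Atom compare: identity, one pointer comparison (like nsIAtom).
assert tag_a is tag_b

# A non-interned string with equal content is a different object, so
# comparing against it means walking the characters.
raw = "".join(["block", "quote"])
print(tag_a is raw, tag_a == raw)  # False True
```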

I'm personally not a big fan of the DOM API in general, either; IMHO it's
totally JS-unfriendly. I favour E4X from an API point of view, though
it's hard to get to an XML object.

Axel

Boris Zbarsky

Mar 4, 2008, 10:59:47 AM
Axel Hecht wrote:
> From my experience back in my XSLT days, the DOM APIs worked but were
> just dog slow.

For something like XSLT (or XPath, or anything else that has to do a lot of DOM
traversal), this is definitely the case. This also came up for accessibility
recently. If nothing else, the refcounting and QI overhead kill you. The
former is going away with XPCOMGC, hopefully. If that and outparamdel happen,
that will just leave QI as the performance overhead for a lot of DOM methods
(well, plus the security checks and safety-checking they do).

I'm not sure what we can do to make that part faster. :( I do think we should
re-compare performance on the moz2 side.

> I'm personally not a big fan of the DOM API in general, either; IMHO
> it's totally JS-unfriendly. I favour E4X from an API point of view,
> though it's hard to get to an XML object.

Hard to get to an xml object in what sense?

-Boris

Boris Zbarsky

Mar 4, 2008, 11:01:08 AM
Axel Hecht wrote:
> From my experience back in my XSLT days, the DOM APIs worked but were
> just dog slow.

One more thing here. This was in C++, right? In JS there is also the cost of
JS-wrapping all those DOM nodes for something like XSLT that traverses the whole
tree. Since our basic optimization assumption is that most nodes are NOT
accessed from JS, any time most are we lose.

-Boris

Axel Hecht

Mar 4, 2008, 11:27:23 AM
Boris Zbarsky wrote:
> Axel Hecht wrote:
>> From my experience back in my XSLT days, the DOM APIs worked but were
>> just dog slow.
>
> For something like XSLT (or XPath, or anything else that has to do a lot
> of DOM traversal), this is definitely the case. This also came up for
> accessibility recently. If nothing else, the refcounting and QI
> overhead kill you. The former is going away with XPCOMGC, hopefully.
> If that and outparamdel happen, that will just leave QI as the
> performance overhead for a lot of DOM methods (well, plus the security
> checks and safety-checking they do).
>
> I'm not sure what we can do to make that part faster. :( I do think we
> should re-compare performance on the moz2 side.

In our case, we had to do a lot of node-name checks, which ended up as
atom -> string conversions and then string comparisons; I think we mostly
blame our perf problems on that.

>> I'm personally not a big fan of the DOM API in general, either; IMHO
>> it's totally JS-unfriendly. I favour E4X from an API point of view,
>> though it's hard to get to an XML object.
>
> Hard to get to an xml object in what sense?

Hard to get an e4x xml object from something like document or XHR.

Axel

dbradley

Mar 7, 2008, 8:12:57 PM
What's the view on languages besides JS2 and C++ such as Java and
Python?

Is JS2 going to be the only way to get work done without a lot of
pain?

It would be nice to have a core object representation that would
support more than just one language and allow people to build their
own parsers that could interact with the objects. This would allow
people to support various languages yet work natively with the objects
in the system. No need for bridges like XPConnect and PYConnect.

I'm a little concerned that if no more than just mark-and-sweep is
used, then switching the entire object system to be GC-based is going
to bring some performance issues, mainly when the OS encounters
memory pressure. There have already been issues around using the app
after minimizing or inactivity. With more of the objects moved into
the collector this will only get worse. Adding a generational
algorithm might offset some of that, but I've not been all that
impressed with what I've seen out of Java and .NET.

David Bradley

Mike Shaver

Mar 8, 2008, 7:59:55 AM
to dbradley, dev-pl...@lists.mozilla.org
On Fri, Mar 7, 2008 at 8:12 PM, dbradley <dbra...@gmail.com> wrote:
> What's the view on languages besides JS2 and C++ such as Java and
> Python?
>
> Is JS2 going to be the only way to get work done without a lot of
> pain?

Bridges written between JS2 and Java/Python/etc. should be
substantially nicer to work with than those written against today's
XPCOM, my gut says. Whether anyone writes or maintains those bridges
is unknown to me, and probably reflects the demand for those specific
use cases.

Mike

Benjamin Smedberg

Mar 10, 2008, 10:30:12 AM
dbradley wrote:
> What's the view on languages besides JS2 and C++ such as Java and Python?
>
>
> Is JS2 going to be the only way to get work done without a lot of pain?
>
> It would be nice to have a core object representation that would support
> more than just one language and allow people to build their own parsers
> that could interact with the objects. This would allow people to support
> various languages yet work natively with the objects in the system. No
> need for bridges like XPConnect and PYConnect.

The direction I'd like to go in is that our official API is represented in
JS2 (or in the virtual machine used to implement JS2, which is really the
same thing).

So if somebody wanted to maintain a Java or Python bridge to Mozilla, they
should do it as a bridge from JS (or the JSAPI) to the other language.

Certainly support for other runtimes is nice in a perfect world, but I
really don't think the cost/benefit analysis stacks up that well. We should
focus on making our Mozilla+JS2 story world-class, and bridging a secondary
concern.

> I'm a little concerned that if no more than just mark-and-sweep is used,
> then switching the entire object system to be GC-based is going to bring
> some performance issues, mainly when the OS encounters memory pressure.
> There have already been issues around using the app after minimizing or
> inactivity. With more of the objects moved into the collector this will
> only get worse. Adding a generational algorithm might offset some of
> that, but I've not been all that impressed with what I've seen out of
> Java and .NET.

It is a concern, sure. I'm working on getting enough working that we can get
performance numbers. But MMgc already has support for incremental marking,
so, assuming we don't go paging to death, I think we can get pretty good
pause-time numbers, because the maximum pause time is then dependent on
sweeping, not marking. Right now we're going for "faster than cycle
collection takes currently" and can make improvements beyond that through
measurement and profiling. For more information you may want to watch the
tamarin-devel list.

--BDS
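The pause-time argument above can be illustrated with a toy tri-color collector (an editor's sketch in Python, not MMgc's actual implementation): marking proceeds in small bounded slices over a gray worklist, so the longest marking pause is set by the slice budget rather than by total heap size.

```python
class Obj:
    def __init__(self, name, refs=()):
        self.name = name
        self.refs = list(refs)

def incremental_mark(roots, budget):
    """Mark reachable objects, scanning at most `budget` objects per slice.
    Returns the set of marked objects and the number of slices (pauses)."""
    gray = list(roots)           # seen but not yet scanned
    black = set()                # fully scanned
    slices = 0
    while gray:
        slices += 1
        for _ in range(budget):  # one bounded pause
            if not gray:
                break
            obj = gray.pop()
            if obj in black:
                continue
            black.add(obj)
            gray.extend(r for r in obj.refs if r not in black)
    return black, slices

# A small object graph: a -> b -> c, plus one unreachable object.
c = Obj("c")
b = Obj("b", [c])
a = Obj("a", [b])
garbage = Obj("garbage")
marked, slices = incremental_mark([a], budget=2)
print(sorted(o.name for o in marked))  # ['a', 'b', 'c']
print(garbage in marked)               # False
```

Anything never added to `black` (like `garbage` here) would be reclaimed by the sweep phase, which is why the quoted message points at sweeping as the remaining pause-time cost.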

Michiel van Leeuwen

Mar 10, 2008, 1:17:12 PM
Benjamin Smedberg wrote:
> The direction I'd like to go in is that our official API is represented in
> JS2 (or in the virtual machine used to implement JS2, which is really the
> same thing).

Does that mean that you are no longer working with interfaces, but with
a language? I think the nice thing about the current XPCOM is that you
don't have to care about implementation languages; all that matters is
the interfaces. But if the API is expressed in a particular language,
that's no longer true.
Or am I misunderstanding things?

> Certainly support for other runtimes is nice in a perfect world, but I
> really don't think the cost/benefit analysis stacks up that well. We
> should focus on making our Mozilla+JS2 story world-class, and bridging a
> secondary concern.

Ouch. That feels like things you can currently do, like easy interfacing
with native libraries, are becoming second-class, and thus likely a lot
harder to do. (I'm thinking about the calendar's usage of libical now.
It's not that libical couldn't be ported to JS(2), but that would be a
waste of time when a library is already available.)


Michiel

Benjamin Smedberg

Mar 10, 2008, 1:37:48 PM
Michiel van Leeuwen wrote:

> Does that mean that you are no longer working with interfaces, but with
> a language? I think the nice thing about the current XPCOM is that you
> don't have to care about implementation languages; all that matters is
> the interfaces. But if the API is expressed in a particular language,
> that's no longer true.

Yes. XPCOM interfaces will continue to exist for a while yet, because we're
not going to throw away all of our existing code. But, especially for DOM
objects and other places that implement XPCOM mainly for scriptability, we
may have the object be implemented as a JS object directly, instead of
going through XPConnect. In addition, security checking will happen through
JS-implemented security wrappers.

>> Certainly support for other runtimes is nice in a perfect world, but I
>> really don't think the cost/benefit analysis stacks up that well. We
>> should focus on making our Mozilla+JS2 story world-class, and bridging
>> a secondary concern.
>
> Ouch. That feels like things you can currently do, like easy interfacing
> with native libraries, are becoming second-class, and thus likely a lot
> harder to do. (I'm thinking about the calendar's usage of libical now.
> It's not that libical couldn't be ported to JS(2), but that would be a
> waste of time when a library is already available.)

I came up with the original plan for native libraries (jsctypes) with
libical specifically in mind. Writing a JS wrapper around libical should be
at least as simple as the binary XPCOM wrapper. Is there some particular
concern with this approach that we need to address?

--BDS
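For a rough picture of what a JS-implemented security wrapper might look like, here is an editor's sketch using the JS Proxy API (a later addition to the language, standing in for whatever wrapper mechanism Mozilla 2 actually used): property access on the wrapped object is mediated by a check against an allow-list.

```javascript
// Sketch only: `makeSecurityWrapper` and the allow-list scheme are
// hypothetical, not the real Mozilla wrapper implementation.
function makeSecurityWrapper(target, allowed) {
  return new Proxy(target, {
    get(obj, prop) {
      // Deny access to any property not explicitly allowed.
      if (!allowed.has(prop)) {
        throw new Error("Permission denied to access " + String(prop));
      }
      return obj[prop];
    }
  });
}

// A toy "document" with one public and one sensitive property.
const doc = { title: "hello", cookie: "secret" };
const wrapped = makeSecurityWrapper(doc, new Set(["title"]));

console.log(wrapped.title); // "hello"
let denied = false;
try {
  wrapped.cookie;
} catch (e) {
  denied = true;
  console.log(e.message);
}
```

The point of the sketch is only that the policy lives in script, outside the wrapped object, which is what "security checking through JS-implemented security wrappers" implies.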

Michiel van Leeuwen

Mar 10, 2008, 2:05:35 PM
Benjamin Smedberg wrote:

> I came up with the original plan for native libraries (jsctypes) with
> libical specifically in mind. Writing a JS wrapper around libical should
> be at least as simple as the binary XPCOM wrapper. Is there some
> particular concern with this approach that we need to address?

The example given at http://wiki.mozilla.org/Jctypes requires many lines
of code for what is just one function call in C++ (plus one addition to
the makefile to link in the library). Writing so much code for each
function call would lead to huge source files. But I guess that all this
code could be generated automatically and hidden in helper files.

Michiel

Benjamin Smedberg

Mar 10, 2008, 2:48:05 PM

Yes, that's the goal. There are tools for Python that will parse a C header
file and create a Python library that hides the FFI under the covers. I
think we can do the same, or better, using dehydra.

--BDS
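As a point of reference, Python's ctypes (one FFI of the sort Benjamin mentions) shows the shape of the per-function boilerplate such a header-parsing generator would emit. This sketch calls plain libc's `strlen` rather than libical, since libical isn't assumed to be installed:

```python
import ctypes

# Load libc symbols from the running process (POSIX); on Windows one
# would load a named DLL such as msvcrt instead.
libc = ctypes.CDLL(None)

# Declaring the signature lets ctypes marshal arguments and results
# safely -- this is exactly the per-function boilerplate a header-parsing
# tool could generate automatically.
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

print(libc.strlen(b"libical"))  # 7
```

One declaration per function is still boilerplate, but it is mechanical, which is why generating it from C headers is attractive.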

Dan Mosedale

Mar 11, 2008, 3:23:10 PM
Benjamin Smedberg wrote:

> Yes. XPCOM interfaces will continue to exist for a while yet, because
> we're not going to throw away all of our existing code. But, especially
> for DOM objects and other places that implement XPCOM mainly for
> scriptability, we may have the object be implemented as a JS object
> directly, instead of going through XPConnect. In addition, security
> checking will happen through JS-implemented security wrappers.

If I've understood things correctly, for (e.g.) Thunderbird to use Mozilla
2, we should be able to make most things work using the same re-writing
tools that people will be using to convert core code. Existing
Thunderbird extensions should continue to work, assuming any binary code
gets similar treatment (auto-rewriting & recompile). Then at some point
after that (Mozilla 2.1? Mozilla 3?) XPCOM itself goes away. Assuming
this is the case, are there likely to be auto-rewriting tools to help
move from XPCOM to JS ctypes as well?

Dan

Benjamin Smedberg

Mar 11, 2008, 3:39:15 PM
Dan Mosedale wrote:

> If I've understood things correctly, for (eg) Thunderbird to use Mozilla
> 2, we should be able to make most things work using the same re-writing
> tools that people will be using to convert core code. Existing

We don't know exactly what Mozilla2 will look like at the "end". Whether
binary XPCOM components will work probably depends on whether they can be
made to work without seriously affecting performance. I think that if we
make MMgc a C library (basically merge MMgc and jemalloc), that will
probably work.

> Thunderbird extensions should continue to work, assuming any binary code
> gets similar treatment (auto-rewriting & recompile). Then at some point
> after that (Mozilla 2.1? Mozilla 3?) XPCOM itself goes away. Assuming
> this is the case, are there likely to be auto-rewriting tools to help
> move from XPCOM to JS ctypes as well?

By "goes away" I mean "you can't write a component DLL"... internally some
form of XPCOM is going to live on for a long time.

Yes, there are going to be tools to rewrite binary C++ code into JS. But
that's far enough in the future that we don't know much about what they'll
be like ;-)

--BDS

Dan Mosedale

Mar 11, 2008, 7:16:19 PM
Benjamin Smedberg wrote:
>> Thunderbird extensions should continue to work, assuming any binary
>> code gets similar treatment (auto-rewriting & recompile). Then at some
>> point after that (Mozilla 2.1? Mozilla 3?) XPCOM itself goes away.
>> Assuming this is the case, are there likely to be auto-rewriting tools
>> to help move from XPCOM to JS ctypes as well?
>
> By "goes away" I mean "you can't write a component DLL"... internally
> some form of XPCOM is going to live on for a long time.
>
> Yes, there are going to be tools to rewrite binary C++ code into JS. But
> that's far enough in the future that we don't know much about what
> they'll be like ;-)

So what I'm really trying to get a feel for is: what would the changes
you propose mean w.r.t. the existing mountains of C++ code in Thunderbird
and other apps? At this point, it's clearly pretty hard to tell. I'd
suggest that maintaining a section addressing this on the wiki page you
started would be worthwhile. In particular, that maintenance seems likely
to prove helpful to the various decisions that need to be made along the
way.

Dan
