There are a couple of contexts in which I'm currently working with
SecurableModules moving to the browser. Generally speaking,
SecurableModules in the browser is a very good thing. But, there are
two issues that I see and I wanted to know what people think...
1. asynchronous loading to not lock up the browser for dynamically
loaded modules (that's in a separate thread... consensus is that you
don't use require() if you're doing async loading)
2. debugging
I'm certainly interested in hearing any additional ideas people might
have about require and whatnot in the browser, but for the moment I
want to focus on #2.
Generally speaking, to dynamically load a module you will
grab the JS via XHR and then eval it. This provides a very crappy
debugging experience. Syntax error in the module? Good luck! All you
generally get to see in Firebug, for instance, is that there's a
missing ')' somewhere in the file or some other problem.
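As a minimal sketch of the eval step being discussed (the XHR fetch is elided, and the function names are illustrative, not from any spec), the pattern looks roughly like this:

```javascript
// Minimal sketch (names assumed) of the eval-based loading pattern.
// Wrapping in a function keeps the module's top-level vars out of
// the global scope.
function evalModule(source) {
  var exports = {};
  var factory = eval("(function (exports) {" + source + "\n})");
  factory(exports);
  return exports;
}

// A syntax error in `source` surfaces as a SyntaxError pointing at the
// eval() call site, not at the module's real file -- hence the poor
// debugging experience described above.
```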
Is there a better solution that people are aware of than just hacking
things up for use in a script tag?
https://wiki.mozilla.org/ServerJS/Modules/ScriptModules
Is there something more we should make the debuggers do to support
this? (Like allow the module loader to give the debugger information
about what it is loading...)
Has anyone had any particularly good experiences with dynamically
loaded modules?
Kevin
--
Kevin Dangoor
work: http://labs.mozilla.com/
email: k...@blazingthings.com
blog: http://www.BlueSkyOnMars.com
Thankfully, the two-connections-per-server limit is being eased up a
bit. All current browsers support more than that (6?)
Cool. More posters is good!
> @Wes, I'm guessing debugging those modules was quite a pain as well
>
> Perhaps something like jslint (http://jslint.com/webjslint.js) could be used
> to locate problem areas in the module before eval() which would provide
> context as to what is exploding.
Someone responded to this thread over on the bespin-core mailing list
with a similar idea. If the browser debugging tools make it hard
presently, we can use external tools to get what we want (kind of like
what we get when using jQuery, Dojo or Prototype to extend the browser
API).
> @Kevin, I definitely believe that the module loading process should be
> transparent
> in such a way that it can be debugged in a casual way (read: not alert()).
alert is not a good way to debug, indeed.
> Has there been any talk about debugging the js on the server side? That to
> me would be a huge benefit over other server sided languages (especially
> if done right).
We haven't really talked about it, but there should be a lot that's
possible. I've lately been working on debugging in Bespin (attaching
to v8's remote debugger API). For the moment, at least, debugging is
very interpreter-dependent... it would be cool if there were a somewhat
standardized protocol for talking to the interpreters.
To load a module you write:
Module('OurModule', {
    "use": "resource://RequiredModule",
    "body": {
        // Code relying on RequiredModule
    }
})
Where you specify a dependency with "use", "resource://" is the alias
for a given resource loader and "body" gets run once everything is loaded.
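A hypothetical minimal implementation of that wrapper (the registry and error handling here are illustrative, not taken from Joose itself) might defer the "body" until every "use" dependency is present:

```javascript
// Assumed-name sketch: register each module's body once all of its
// declared "use" dependencies are already in the registry.
var moduleRegistry = {};

function Module(name, spec) {
  // "use" may be a single string or an array of resource ids.
  var deps = [].concat(spec.use || []);
  var missing = deps.filter(function (d) { return !(d in moduleRegistry); });
  if (missing.length > 0) {
    // A real loader would fetch the missing resources and retry.
    throw new Error("Unresolved dependencies: " + missing.join(", "));
  }
  moduleRegistry[name] = spec.body;
  return spec.body;
}
```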
If we go this route for the browser platform we introduce two
requirements that may not be ideal:
1) The server librarian that prepares the modules for downloading (tusk
could possibly generate the appropriate archives ahead of time for
static serving). The librarian can also be optional where we simply make
more requests to load each file individually (only practical during
development - even then it's slow).
2) Wrapping code to allow delayed execution of dependent code. I am
looking at re-implementing "require" inside the "body" to still allow
fetching of objects (where the module code is already pre-loaded). If
this works then all we need is some way of pre-loading module code in an
intelligent manner.
> 2. debugging
>
> I'm certainly interested in hearing any additional ideas people might
> have about require and whatnot in the browser, but for the moment I
> want to focus on #2.
>
At this time I am betting on Firebug for debugging modules on the xpcom
platform. It should be no problem to extend this to the browser platform.
Firebug already has support for debugging evaled code on a browser page.
I tried it with Narwhal and the line numbers and call stack are
incorrect, but according to John (a Firebug core developer) it should be
no problem to fix.
I am going to see what I can do to get this working for xpcom this week.
Is there a browser platform yet with sample code I can use to test
debugging?
Christoph
> Generally speaking, to dynamically load a module you generally will
> grab the JS via XHR and then eval it. This provides a very crappy
> debugging experience. Syntax error in the module? Good luck! All you
> generally get to see in Firebug, for instance, is that there's a
> missing ')' somewhere in the file or some other problem.
> ...
> Is there something more we should make the debuggers do to support
> this? (Like allow the module loader to give the debugger information
> about what it is loading...)
Just a short note:
If you're not loading the modules from your own server, but are using
CDNs or similar, then you will be forced to use script tag loading due
to the cross-site restrictions of XHR. (e.g. dojo.require switches from
XHR to script tag in cross-domain mode)
Best regards
Mike Wilson
>
> ... I've lately been working on debugging in Bespin (attaching
> to v8's remote debugger API). For the moment, at least, debugging is
> very interpreter-dependent... it would be cool if there was a somewhat
> standardized protocol for talking to the interpreters.
I would be interested in this as well, but there seems to be little
interest from the JS implementers in doing this. I'd been thinking
perhaps the Open Web Advocacy GG (http://groups.google.com/group/openweb-group
) would be a place to talk about this? And other things like the //@
sourceURL annotation for eval(), WebKit's "displayName" override for
anonymous functions, etc.: "how to make debugging JS somewhat sane".
It doesn't seem like an ECMA mailing list would be the right place to
talk about this stuff right now, but it would be good to reach out to
all the JS
engine vendors, browser and standalone both.
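For reference, the //@ sourceURL trick mentioned above can be sketched like this (modern engines spell the directive //# sourceURL; the function name here is assumed):

```javascript
// Appending a sourceURL directive to eval'd source lets debuggers
// attribute the code to a named pseudo-file rather than an anonymous
// eval frame.
function evalWithSourceName(source, name) {
  return eval(source + "\n//@ sourceURL=" + name);
}
```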
Patrick Mueller - http://muellerware.org/
That looks pretty neat; protecting the body from executing
until you have loaded all dependencies. But is the body an
object literal and not a function? What if you want to
execute some code in the module at load time?
Best regards
Mike Wilson
On Tue, Jun 9, 2009 at 9:49 AM, Christoph
Dorn<christ...@christophdorn.com> wrote:
> To load a module you write:
>
> Module('OurModule', {
>     "use": "resource://RequiredModule",
>     "body": {
>         // Code relying on RequiredModule
>     }
> })
>
> Where you specify a dependency with "use", "resource://" is the alias
> for a given resource loader and "body" gets run once everything is loaded.
That's pretty similar to what I'm proposing on a separate thread.
> If we go this route for the browser platform we introduce two
> requirements that may not be ideal:
>
> 1) The server librarian that prepares the modules for downloading (tusk
> could possibly generate the appropriate archives ahead of time for
> static serving). The librarian can also be optional where we simply make
> more requests to load each file individually (only practical during
> development - even then it's slow).
I suspect we are painted into a corner of having to deal with this,
given the various requirements.
> 2) Wrapping code to allow delayed execution of dependent code. I am
> looking at re-implementing "require" inside the "body" to still allow
> fetching of objects (where the module code is already pre-loaded). If
> this works then all we need is some way of pre-loading module code in an
> intelligent manner.
I'm not sure I parsed the problem you are describing here: [[ I am
looking at re-implementing "require" inside the "body" to still allow
fetching of objects (where the module code is already pre-loaded). ]]
-- which objects? Modules?
Are you saying that, even with (1), developers still need the *option*
to dynamically load modules async-ly?
Ihab
--
Ihab A.B. Awad, Palo Alto, CA
> -- which objects? Modules?
(In Joose, "Module" (read: package) means one or more "modules"
(ServerJS). I need to be more consistent in my wording.)
> What if you want to execute some code in the module at load time?
Module('OurModule', {
    "use": "resource://RequiredModule",
    "BEGIN": function () {
        // code that will be executed after all dependencies are
        // loaded, but before "body"
    },
    "body": {
        // Code relying on RequiredModule
    }
})
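The execution order implied by "BEGIN" could be sketched as follows (the function name is assumed, not from Joose):

```javascript
// Once dependencies have resolved: run BEGIN first (load-time code),
// then hand out the body.
function instantiateModule(spec) {
  if (typeof spec.BEGIN === "function") {
    spec.BEGIN();
  }
  return spec.body;
}
```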
I think I'm coming to agree on this. The [serverjs] "module" is the
unit of "near and easy" composition. The "package" is the unit of "far
and more difficult" composition.
> The concept of packages as a container for one or more modules is critical I
> think. I am working on a system that uses domain namespacing
> (com.github.cadorn....) to identify packages. It will include versioning as
> well as dependency resolution and loading. It will allow easy sharing of
> modules.
Cool. So hopefully, a package will be resolvable using any of a number
of different mechanisms, including stuff that does crypto to ensure
you have the right stuff. As I noted before, I would advocate a
package being a first-class object that is passed as an argument to
require():
var aPackage = /* get from package manager thingey */;
require(aPackage, 'foo/bar/baz');
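A sketch of that two-argument form (the package object's shape here is an assumption for illustration):

```javascript
// require() variant taking a first-class package object: look the
// module id up inside that specific package only.
function requireFrom(pkg, id) {
  if (!(id in pkg.modules)) {
    throw new Error("No module '" + id + "' in package " + pkg.name);
  }
  return pkg.modules[id];
}
```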
> Modules yes. I am still getting a hang of the terminology. Calling require()
> inside "body" will work the same except for the module text/code being
> fetched from a "proxy" instead of XHR/filesystem.
>
> The proxy can be filled via individual/batched sync/async dependency loading
> when the program starts up or when a dependent package is declared.
>
> In this setup a module would always belong to a package and dependencies are
> declared on a package level.
Again, this point of view is one I'm coming to advocate, as a way of
simplifying the easy while making the difficult possible.
* * * * *
This does impose a burden on a JS engine to satisfy pre-declared
*module* dependencies synchronously. Given that, even if the package
is ready (downloaded or whatever), loading a module from it might
require a filesystem hit, I would personally prefer *all* module loads
to be async, so that the event loop would not block on even *that*
small interruption. However, sync require()-s combined with async
*dynamic* package instantiations are, I claim, a good compromise.
> package being a first-class object that is passed as an argument to
> require():
>
> var aPackage = /* get from package manager thingey */;
> require(aPackage, 'foo/bar/baz');
>
This works when modules from different packages are mixed sparingly. If
modules are mixed a lot something like the following would be nice
(assuming packages are already loaded):
require('#aPackage/foo/bar/baz');
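The '#package/path' notation could be parsed along these lines (semantics assumed from the description: a leading '#' names a package, while './', '../', and '/' keep their usual meaning):

```javascript
// Split a module id of the form '#pkg/path/to/module' into its package
// and path parts; other ids resolve against the current package.
function parseModuleId(id) {
  var m = /^#([^\/]+)\/(.+)$/.exec(id);
  if (m) {
    return { pkg: m[1], path: m[2] };
  }
  return { pkg: null, path: id }; // relative or absolute id
}
```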
Then you have a few edge cases:
- Scan for modules in myPackage first (this works with the current
relative path resolving), then dependent packages in the order declared
and finally the system paths and lib.
- Look for #myPackage/path/module.js first, then
#myPackage/foreach(dependentPackages)/path/module.js, then
foreach(dependentPackages)/path/module.js. This could be used to
override a module from a dependency in myPackage without changing
dependentPackage or introducing another package.
- Adding "overrides" in addition to "dependencies" in package.json we
could support looking for
foreach(#overridePackages)/myPackage/path/module.js, then
foreach(#overridePackages)/path/module.js, then
#myPackage/path/module.js. If we can add override packages at runtime we
can "overlay" new versions of modules while the program is running. This
would be very handy during development. It could alleviate some of the
issues vs "swapping" out modules. Both have their limitations.
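The search order in the second bullet above could be sketched like this (package names and the function name are illustrative):

```javascript
// Candidate paths, in lookup order: own package first, then the same
// path nested under each dependent package, then the dependent
// packages themselves.
function candidatePaths(myPackage, dependentPackages, path) {
  var candidates = ["#" + myPackage + "/" + path];
  dependentPackages.forEach(function (dep) {
    candidates.push("#" + myPackage + "/" + dep + "/" + path);
  });
  dependentPackages.forEach(function (dep) {
    candidates.push(dep + "/" + path);
  });
  return candidates;
}
```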
Not sure if these kinds of use-cases belong in the ServerJS spec or are
too application-specific.
>> The proxy can be filled via individual/batched sync/async dependency loading
>> when the program starts up or when a dependent package is declared.
>>
>> In this setup a module would always belong to a package and dependencies are
>> declared on a package level.
>>
> This does impose a burden on a JS engine to satisfy pre-declared
> *module* dependencies synchronously. Given that, even if the package
> is ready (downloaded or whatever), loading a module from it might
> require a filesystem hit, I would personally prefer *all* module loads
> to be async, so that the event loop would not block on even *that*
> small interruption. However, as a compromise, sync require()-s and
> async *dynamic* package instantiations is, I claim, a good compromise.
>
We can eliminate the extra filesystem hit if we cache module text/code
in memory (that is what I meant with the proxy).
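That in-memory proxy could be sketched as a simple text cache (all names here are assumed for illustration):

```javascript
// Cache module text the first time it is fetched so later lookups
// avoid the filesystem/XHR hit entirely.
var moduleTextCache = {};

function getModuleText(id, fetch) {
  if (!(id in moduleTextCache)) {
    moduleTextCache[id] = fetch(id); // hit the backing store only once
  }
  return moduleTextCache[id];
}
```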
Christoph
So 'aPackage' would be a "local name" for something in the manifest
JSON? Like maybe the JSON would look like:
requirements: {
    aPackage: {
        checksum: '...',
        sources: [ ... ]
    },
    anotherPackage: { ... },
    aThirdPackage: { ... }
}
? If so then we're on the same page. :)
I do like your notation for require() such that it does not require 2
arguments. On the other hand, I suspect require() should handle the
case of the package being a 1st class object since it may have been
dynamically loaded by the client.
> Then you have a few edge cases:
>
> - Scan for modules in myPackage first (this works with the current
> relative path resolving), then dependent packages in the order declared
> and finally the system paths and lib.
I remember designing just such a "resolution order" concept for UI
model-view binding and having Mark Miller critique it based on the
notion that what is good OO design often is difficult to secure,
because responsibility is devolved to disparate entities such that it
is not easy to assign accountability for the final outcome.
With this in mind, I think any idea of a "resolution order" should be
highly discouraged. I imagine that, in the case where two packages
each define module 'foo/bar/baz.js', the idiomatic pattern is one
where it is unambiguous in the require()-ing code which one is being
used.
> - Look for #myPackage/path/module.js first, then
> #myPackage/foreach(dependentPackages)/path/module.js, then
> foreach(dependentPackages)/path/module.js. This could be used to
> override a module from a dependency in myPackage without changing
> dependentPackage or introducing another package.
Yeah, this overriding is precisely what makes me worried. :)
> - Adding "overrides" in addition to "dependencies" in package.json we
> could support looking for
> foreach(#overridePackages)/myPackage/path/module.js, then
> foreach(#overridePackages)/path/module.js, then
> #myPackage/path/module.js. If we can add override packages at runtime we
> can "overlay" new versions of modules while the program is running. This
> would be very handy during development. It could alleviate some of the
> issues vs "swapping" out modules. Both have their limitations.
Interesting. Any already instantiated modules would remain in the old
version, though, so the client code would still have to get some
"upgrade" signal, or the upgraded modules would have to be
upgrade-aware via some mechanism.
That all said, I think this is a good direction in general. :)
That was what I was thinking too. So long as --
> So the "./"and "../" prefixes would locate a module using a relative path,
> "/" would be the absolute path and in all other cases it is assumed that the
> first path element is a package name/id and the following path is resolved
> relative to the package root?
Sure, that sounds like a good idea, and is compatible with the notion
that the source package of a module be explicitly specified. E.g., if
two packages I depend on define a module named "foo/bar/baz.js", I can
unambiguously use them by require()-ing:
require('#pkg1/foo/bar/baz.js');
require('#pkg2/foo/bar/baz.js');
Cheers,
On Wed, Jun 10, 2009 at 9:52 AM, <ihab...@gmail.com> wrote:
> Sure, that sounds like a good idea, and is compatible with the notion
> that the source package of a module be explicitly specified. E.g., if
> two packages I depend on define a module named "foo/bar/baz.js", I can
> unambiguously use them by require()-ing:
>
> require('#pkg1/foo/bar/baz.js');
> require('#pkg2/foo/bar/baz.js');