Another idea on unifying Transport/C and /D


Kris Zyp

Sep 1, 2010, 9:03:35 AM
to CommonJS, requ...@googlegroups.com
I wanted to throw out another idea for module transport to see if it
would be possible to bridge the two main divergent specs (I know we
haven't discussed this in a while, so the discussions aren't fresh).
Could we use the Transport/C argument list, but allow the argument list
to be repeating sets of id/dependencies/factory:

require.def(id, deps, factory, id, deps, factory, id, deps, factory, ...);

And then say the dependencies/injections argument is optional
(defaulting to ["require", "exports", "module"]), and one additional set
of dependencies can be suffixed as the last parameter. So it can also be
in the form:
require.def(id, factory, id, factory, deps);

This would at least satisfy my desire for a concise way to define
multiple modules (avoiding multiple calls and pause and resume). It
would also eliminate the concern Tobie had about IE bugs with property
enumeration. And of course normal Transport/C usage would still be
supported.

Another example:
require.def("foo", ["bar"], function(bar){
    bar.test();
},
"bar", function(require, exports){
    require("another");
    exports.test = function(){};
}, ["another"]);
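To make the argument convention concrete, here is a rough sketch (names and internals hypothetical, not from any spec) of how a loader might walk such a variadic argument list into (id, deps, factory) sets, defaulting the dependency array when it is omitted. The trailing-dependency-array variant is left out of this sketch for simplicity.

```javascript
// Hypothetical loader internals: collect repeating (id, deps?, factory)
// sets from a require.def-style argument list. When the dependency
// array is omitted, default to ["require", "exports", "module"].
function parseDefs(args) {
  var defs = [];
  var i = 0;
  while (i < args.length) {
    var id = args[i++];                           // module id (string)
    var deps = ["require", "exports", "module"];  // default injections
    if (Array.isArray(args[i])) {                 // optional dep array
      deps = args[i++];
    }
    var factory = args[i++];                      // factory function
    defs.push({ id: id, deps: deps, factory: factory });
  }
  return defs;
}
```

So `parseDefs(["foo", ["bar"], f1, "bar", f2])` would yield two definitions, the second picking up the default injection list.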

Also, one more thought/question: is it possible to make the very first
id optional? Could it be implied from the module/script that was
requested, associating the script with the require.def call via the
script element's onload/onreadystatechange event that the require.def
precedes (or do browsers sequence the script executions so you can
determine the id from the order in which the scripts were requested)?

--
Thanks,
Kris

James Burke

Sep 2, 2010, 12:59:09 AM
to comm...@googlegroups.com, requ...@googlegroups.com
On Wed, Sep 1, 2010 at 6:03 AM, Kris Zyp <kri...@gmail.com> wrote:
> This would at least satisfy my desire to have a concise way to define
> multiple modules (avoiding multiple calls and pause and resume). It
> would also eliminate the concern Tobie had about IE bugs with property
> enumeration. And of course normal transport/C usage would be still
> supported.

pause and resume are in there to allow legacy scripts to be included
which may define global variables via var. Wrapping them in a function
wrapper would break those kinds of scripts, but I expect that is not
of interest in the context of CommonJS. As far as RequireJS is
concerned, I still want to support legacy scripts for now, so I will
likely still support pause/resume.

That said, it does not mean I could not support the transport proposal
in this thread. Although I do have a question (after next quote):

> Another example:
> require.def("foo", ["bar"], function(bar){
>  bar.test();
> },
> "bar", function(require, exports){
>  require("another");
>  exports.test = function(){}
> }, ["another"]);

What if "bar" depended on "baz"? How would that work for the factory arguments?

require.def("bar", ["baz"], function(baz?, require, exports) {});

As I recall, there were objections from Kris Kowal about treating
require, exports, module as dependencies that could be listed in the
dependency array as strings, then have them listed in same order as
factory function arguments, and I do not like forcing the only
arguments to the factory function to be just require, exports, module.
So I am not sure how to resolve that for RequireJS: I do not mind
supporting a CommonJS transport format, but I would not want to give
up the matching order for dependency string array to factory function
argument names for things that are just coded in RequireJS module
format.

I am not sure if you are implying that; I think you are indicating both
would work, but I seem to be missing it.

> Also, one more thought/question, is it possible to make the very first
> id optional? Can that be implied from module/script that was requested,
> associating the script with the require.def call by the script element's
> onload/onreadystate event that require.def precedes (or do browsers
> sequence the script executions so you can determine the id by the order
> of requested by scripts)?

The order is not guaranteed. In particular, IE does not fire the
readystate change directly after executing the script. There is a test
in RequireJS at this location if you want to confirm:
http://github.com/jrburke/requirejs/tree/master/tests/browsertests/scriptload/

Other browsers seem to match them up. Perhaps IE 9 will work better
too. I am having trouble testing IE 9 Platform Preview 4 at the
moment. Looking at the HTML5 spec at this URL:
http://dev.w3.org/html5/spec/Overview.html#executing-a-script-block

Step 5 in the "If the load was successful" section seems to indicate
that onload should fire immediately after the script is executed, but
for an inline script the onload is queued in what I believe is the
normal event queue, which to me indicates it may not fire exactly after
the execution of the inline script.

It might be good to get clarification from the HTML5 folks to see if we
could better match scripts with their elements. Hmm, IIRC someone made
a proposal to the HTML5 group that would allow a script to get at its
related element? If so, that would work too. But that is all future
stuff. It depends on how important the now is to you. I still want to
work in the now, so for RequireJS, the module name always needs to be
specified.

James

Kris Zyp

Sep 2, 2010, 10:45:48 AM
to requ...@googlegroups.com, James Burke, comm...@googlegroups.com

On 9/1/2010 10:59 PM, James Burke wrote:
> On Wed, Sep 1, 2010 at 6:03 AM, Kris Zyp <kri...@gmail.com> wrote:
>> This would at least satisfy my desire to have a concise way to define
>> multiple modules (avoiding multiple calls and pause and resume). It
>> would also eliminate the concern Tobie had about IE bugs with property
>> enumeration. And of course normal transport/C usage would be still
>> supported.
> pause and resume are in there to allow legacy scripts to be included
> which may define global variables via var. Wrapping them in a function
> wrapper would break those kinds of scripts, but I expect that is not
> of interest in the context of CommonJS. As far as RequireJS is
> concerned, I still want to support legacy scripts for now, so I will
> likely still support pause/resume.

That makes sense. Although even in the RequireJS world, in situations
where pure RequireJS format is used (everything enclosed in require.def
calls, which could be signaled with a build flag or detected from the
code), the build could combine modules with sequential arguments instead
of sequential calls between pause() and resume() calls, saving some
bytes in the built files, right?

> That said, it does not mean I could not support the transport proposal
> in this thread. Although I do have a question (after next quote):
>
>> Another example:
>> require.def("foo", ["bar"], function(bar){
>> bar.test();
>> },
>> "bar", function(require, exports){
>> require("another");
>> exports.test = function(){}
>> }, ["another"]);
> What if "bar" depended on "baz"? How would that work for the factory arguments?
>
> require.def("bar", ["baz"], function(baz?, require, exports) {});

In the example, bar already depends on another module; I just spelled it
"another" instead of "baz". But if you want that dependency to be
declared directly on that module, like your example, that would be:
require.def("bar", ["baz", "require", "exports"], function(baz, require, exports) {});
(just like Transport/C has always been)


> As I recall, there were objections from Kris Kowal about treating
> require, exports, module as dependencies that could be listed in the
> dependency array as strings, then have them listed in same order as
> factory function arguments, and I do not like forcing the only
> arguments to the factory function to be just require, exports, module.
> So I am not sure how to resolve that for RequireJS: I do not mind
> supporting a CommonJS transport format, but I would not want to give
> up the matching order for dependency string array to factory function
> argument names for things that are just coded in RequireJS module
> format.

I agree; I am definitely suggesting maintaining the RequireJS/Transport/C
style of mixing injection variables with dependencies in the arguments.
I understand Kowal's concern with this; I just don't agree that it is
really a practical problem. The cost of reserving a few module ids is
negligible (one already reserves module ids to deal with existing
code/modules). If we really wanted to preserve the namespace of module
ids, we could spell require and exports with some reserved character,
like "!require" and "!exports".


> I am not sure you are implying that, I think you are indicating both
> would work, but I seem to be missing it.
>
>> Also, one more thought/question, is it possible to make the very first
>> id optional? Can that be implied from module/script that was requested,
>> associating the script with the require.def call by the script element's
>> onload/onreadystate event that require.def precedes (or do browsers
>> sequence the script executions so you can determine the id by the order
>> of requested by scripts)?
> The order is not guaranteed. In particular, IE does not fire the
> readystate change directly after executing the script. There is a test
> in RequireJS at this location if you want to confirm:
> http://github.com/jrburke/requirejs/tree/master/tests/browsertests/scriptload/

Awesome, great tests. And yes, I can definitely reproduce the scripts
executing out of order, and the onreadystatechange event definitely does
not fire directly after the execution, but... every time I run the test,
the order in which the scripts are executed exactly matches the order of
the firing of the onreadystatechange events. If script five executes
before script four, then the onreadystatechange for five will fire
before the onreadystatechange for four. While it is not as convenient as
having the event fire directly after the script executes, it does seem
to provide the information necessary to associate requests with script
executions (and thus anonymous require.def calls with module ids). For
example, annotating from your tests:
annotating from your tests:

one.js script
two.js script
-> four.js script # four gets executed before three
-> three.js script
five.js script
six.js script
seven.js script
eight.js script
one.js loaded
nine.js script
two.js loaded
-> four.js loaded # the onreadystatechange/loaded event preserves the order (four before three)
-> three.js loaded
five.js loaded
six.js loaded
seven.js loaded
eight.js loaded
nine.js loaded

Am I missing something?

--
Thanks,
Kris

jbrantly

Sep 2, 2010, 4:07:32 PM
to CommonJS
On Sep 1, 9:03 am, Kris Zyp <kris...@gmail.com> wrote:
> This would at least satisfy my desire to have a concise way to define
> multiple modules (avoiding multiple calls and pause and resume).

Not sure if you meant it this way, but it's already possible to have
multiple modules with Transport/C in the same file. It does use
multiple calls to require.def of course, but no pause/resume
necessary: http://github.com/jbrantly/yabble/blob/master/test/transportC/tests/multipleDefines/program.js

I'm not sure I see a huge benefit in reducing the calls to one
(perhaps you could explain that better?). On the other hand, I don't
see a problem with it either, so why not? :) I think the main thing is
being able to stick them all into one file/request which is already
possible.

I like how you've solved the explicit injections difference. This
would be a requirement for me to be on board. I prefer D's approach of
a sane default with overrides if desired. I think that your function
signatures should make a distinction between injects and deps, though.
For example:

require.def(id, injects, factory); // normal Transport/C
require.def(id, factory, deps); // the additional set of dependencies
which are *not* injected, with a default injects of ['require',
'exports', 'module']

Both of the above are possible with your format, but injects/deps don't
work quite the same way.

Also, I think you're right about associating script execution with
script onload by order, and thus it should be possible to name the
modules appropriately. Just push the first module into a holding queue
and pop when onload fires (or something like that).
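The holding-queue idea can be sketched roughly like this (all names hypothetical; this is not Yabble or RequireJS code, just an illustration of the technique). An anonymous definition is parked in a queue, and when the script element's load event fires, the loader assigns it the id under which that script was requested:

```javascript
// Hypothetical loader internals for naming anonymous modules by
// matching execution order to script onload/onreadystatechange order.
var anonQueue = [];
var registry = {};

// Called by an anonymous require.def(deps, factory) during script execution.
function defineAnonymous(deps, factory) {
  anonQueue.push({ deps: deps, factory: factory });
}

// Called from the script element's load event handler, with the module
// id the loader used when it requested that script.
function onScriptLoad(requestedId) {
  var def = anonQueue.shift();
  if (def) {
    registry[requestedId] = def;
  }
}
```

This relies on the observation discussed above: the load events fire in the same order as the script executions, even when scripts execute out of request order.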

Kris Zyp

Sep 2, 2010, 4:44:23 PM
to comm...@googlegroups.com, jbrantly

On 9/2/2010 2:07 PM, jbrantly wrote:
> On Sep 1, 9:03 am, Kris Zyp <kris...@gmail.com> wrote:
>> This would at least satisfy my desire to have a concise way to define
>> multiple modules (avoiding multiple calls and pause and resume).
> Not sure if you meant it this way, but it's already possible to have
> multiple modules with Transport/C in the same file. It does use
> multiple calls to require.def of course, but no pause/resume
> necessary: http://github.com/jbrantly/yabble/blob/master/test/transportC/tests/multipleDefines/program.js

Say I do a require.ensure(["A"], ...), which should trigger a request
for A.js. Let's say the response includes both modules A and B, and they
have a circular dependency:
require.def("A", ["B"], function(B){
});

require.def("B", ["A"], function(A){
});

The problem is that when the first require.def executes, it satisfies
the request for A but indicates that module B is still needed. A loader
might then request module B, because there is no way for the loader to
know that another require.def call is about to execute that will provide
module B. Perhaps this would occur in Yabble, although there are ways
around it: maybe you use setTimeout to wait until the current execution
is finished before deciding whether unsatisfied dependencies still need
to be requested. However, setTimeout(func) solutions are still not
optimally performant, because the minimum delay resolution for
setTimeout in the browser is something like 15ms, which can add up
quickly with lots of modules. Being able to explicitly define a set of
modules (with a clear finish) before requesting unsatisfied dependencies
is the fastest, most reliable solution.
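The benefit of a clear finish can be illustrated with a small sketch (hypothetical loader internals, not from any implementation): if the loader registers every definition in the batch before scanning for unsatisfied dependencies, the circular A/B pair arriving together triggers no spurious request.

```javascript
// Hypothetical batched define: register all modules in the payload
// first, then decide which dependencies still need to be fetched.
var registry = {};

function defineBatch(defs) {          // defs: [{id, deps, factory}, ...]
  defs.forEach(function (def) {       // 1. register everything first
    registry[def.id] = def;
  });
  var missing = [];                   // 2. only then look for gaps
  defs.forEach(function (def) {
    def.deps.forEach(function (dep) {
      if (!registry[dep] && missing.indexOf(dep) === -1) {
        missing.push(dep);
      }
    });
  });
  return missing;                     // what the loader would now fetch
}
```

With the A/B example, `defineBatch` returns an empty list; a loader that checked dependencies after each individual require.def call would instead have issued a request for B.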

The other motivation for putting multiple modules in a single call is
optimized compression of combined files. Multiple definitions in a
single call take fewer bytes than multiple calls. Most things we define
in CommonJS don't need to be very terse. But in this situation we are
dealing with code that must be used to wrap every module sent to the
browser. Brevity is extremely important for a call that is going to be
used in the output of build processes that combine multiple files and
minify JavaScript to squeeze out the best performance. This is an arena
where battles are fought over a few bytes' difference, with Google
Closure Compiler, YUI Compressor, ShrinkSafe, Packer, and UglifyJS
fighting to give the best user experience on bandwidth-constrained
mobile devices and less-than-optimal connections.


> I'm not sure I see a huge benefit in reducing the calls to one
> (perhaps you could explain that better?). On the other hand, I don't
> see a problem with it either, so why not? :) I think the main thing is
> being able to stick them all into one file/request which is already
> possible.
>
> I like how you've solved the explicit injections difference. This
> would be a requirement for me to be on board. I prefer D's approach of
> a sane default with overrides if desired. I think that your function
> signatures might should make a distinction between injects and deps
> though. For example:
>
> require.def(id, injects, factory); // normal transport C
> require.def(id, factory, deps); // the additional set of dependencies
> which are *not* injected, with a default injects of ['require',
> 'exports', 'module']
>
> Both of the above are possible with your format but injects/deps don't
> work quite the same way.

That makes sense; having injection variables in the trailing dependency
list would be incoherent anyway. And that is a good point: we should
make sure the trailing dependency list can always be used as a clean,
unencumbered namespace for module ids (in case you really need a module
named "require" or "exports").

--
Thanks,
Kris

jbrantly

Sep 2, 2010, 5:12:28 PM
to CommonJS
On Sep 2, 4:44 pm, Kris Zyp <kris...@gmail.com> wrote:
> Perhaps this would occur in Yabble, although there are
> ways around this, maybe you use setTimeout to wait until the current
> execution is finished before deciding if unsatisfied dependencies still
> need to be requested However, setTimeout(func) solutions are still not
> optimally performant because of the minimum delay resolution for
> setTimeout in the browser is something like 15ms or something, which can
> add up quickly with lots of modules. Being able to explicitly define a
> set of modules that should be defined (with a clear finish) before
> requesting unsatisfied dependencies is the fastest, most reliable solution.

You got it. I use a setTimeout to handle this situation. I'm aware of
the minimum delay, but (I think) it only happens once per file (so if
you have your whole application in one file, there's only one delay).
However, your statement about this being the fastest, most reliable
solution makes sense.

> The other motivation for adding multiple modules in a single call is for
> optimized compression of combined files. Doing multiple definitions in a
> single call takes less bytes than doing multiple calls. Most things we
> define in CommonJS don't need to be very terse. But, in this situation,
> we are dealing with code that must be used to wrap every module sent to
> browser. Brevity is extremely important for a call that is going be used
> for the output of build processes that combine multiple files and minify
> JavaScript to squeeze out the best performance. This is arena where
> battles occur with just a few byte differences between google closure
> compiler, yui compressor, shrinksafe, packer, and uglify fighting to
> give the best user experience onband-width constrained mobile devices
> and less-than optimal connections.

Understood, but if we want to get really technical, the "require.def"
part (which is the redundant part) would probably get compressed
fairly well using gzip. Everyone *is* using gzip, right? :)

In any case you've answered my question and like I said before I don't
see anything bad about it, so two thumbs up from me.

Kris Zyp

Sep 2, 2010, 6:20:10 PM
to comm...@googlegroups.com, jbrantly
On 9/2/2010 3:12 PM, jbrantly wrote:
> On Sep 2, 4:44 pm, Kris Zyp <kris...@gmail.com> wrote:
>> Perhaps this would occur in Yabble, although there are
>> ways around this, maybe you use setTimeout to wait until the current
>> execution is finished before deciding if unsatisfied dependencies still
>> need to be requested However, setTimeout(func) solutions are still not
>> optimally performant because of the minimum delay resolution for
>> setTimeout in the browser is something like 15ms or something, which can
>> add up quickly with lots of modules. Being able to explicitly define a
>> set of modules that should be defined (with a clear finish) before
>> requesting unsatisfied dependencies is the fastest, most reliable solution.
> You got it. I use a setTimeout to handle this situation. I'm aware of
> the minimum delay but (I think) it only happens once per file (so if
> you have your whole application in one file, there's only one delay).
> However, your statement about this the fastest, most reliable solution
> makes sense.
And now to argue against my own proposal...

On the other hand, you could use the onload/onreadystatechange event
to delineate the module definitions. I don't believe there are any extra
delays with this event, and in the case of anonymous modules you would
have to listen/wait for this event anyway. One could combine multiple
calls with this approach without changing Transport/C (in this regard).


>> The other motivation for adding multiple modules in a single call is for
>> optimized compression of combined files. Doing multiple definitions in a
>> single call takes less bytes than doing multiple calls. Most things we
>> define in CommonJS don't need to be very terse. But, in this situation,
>> we are dealing with code that must be used to wrap every module sent to
>> browser. Brevity is extremely important for a call that is going be used
>> for the output of build processes that combine multiple files and minify
>> JavaScript to squeeze out the best performance. This is arena where
>> battles occur with just a few byte differences between google closure
>> compiler, yui compressor, shrinksafe, packer, and uglify fighting to
>> give the best user experience onband-width constrained mobile devices
>> and less-than optimal connections.
> Understood, but if we want to get really technical, the "require.def"
> part (which is the redundant part) would probably get compressed
> fairly well using gzip. Everyone *is* using gzip, right? :)
>
> In any case you've answered my question and like I said before I don't
> see anything bad about it, so two thumbs up from me.

Good point.

So to summarize, I think there are four parts to what I have suggested
as changes to Transport/C:
1. Make the injection+dep array optional (defaulting to ["require",
"exports", "module"]) to make it easy to wrap CommonJS modules succinctly.
2. Make the module id optional, determining it from the requested
module and the order of the onload/onreadystatechange events. James
Burke's tests convinced me that this is feasible, and I think it would
be great to support anonymous modules and make it much easier to
hand-code modules without hard-coding them to a specific location.
3. Allow repeating sets of id/injections/factory in the arguments.
This is intended to improve the performance of combined modules.
After thinking about the ability of loaders to use the onload event and
gzip's elimination of redundancy, perhaps this doesn't buy us that much.
4. Allow a trailing dependency array in the arguments. This array
does not mix with injection variables, so it is an unencumbered
namespace, and it is also important for succinct wrapping of CommonJS
modules so the dependencies can be declared without having to write out
"require", "exports", "module".

Thanks,
Kris



James Burke

Sep 3, 2010, 6:34:35 PM
to Kris Zyp, requ...@googlegroups.com, comm...@googlegroups.com
On Thu, Sep 2, 2010 at 7:45 AM, Kris Zyp <kri...@gmail.com> wrote:
> That makes sense. Although even in the RequireJS world, it is reasonable
> that in situations where pure RequireJS format is used (everything
> enclosed in require.def calls, could be signaled with a build flag or
> detected from code), that the build could combine modules with
> sequential arguments instead of sequential calls between pause() and
> resume() calls, saving some bytes in the built files, right?

Yes, it could. In the context of CommonJS modules only, you can get away
with the format you suggest. I was just mentioning why pause/resume is
supported in the more general case, but for the purposes of a CommonJS
transport, what you suggest is fine. Your point about using the script
onload to know when to trace dependencies is a nice way to avoid
pause/resume. Hmm, although for other environments outside the browser
it means coding in special handling if they were to support files that
have more than one module in the file. That may be OK though.

> Awesome, great tests. And yes, I can definitely reproduce the scripts
> executing out of order, and the onreadystatechange definite does not
> fire directly after the execution, but... Everytime I run the test, the
> order that the scripts are executed exactly matches the order of the
> firing of the onreadystatechange events. If script five executes before
> script four, then the onreadystatechange for five will fire before the
> onreadystatechange for four.

Ah, great observation! There does seem to be a way to tie them
together. It does make implementation a bit more complicated, but
certainly doable. I am a bit wary of depending on the behavior -- it
would be good to confirm with browser vendors that this behavior
always matches up, but it is promising. I am happy to confirm with
browser vendors if we get general agreement.

However, I want to push it a little further:

To me the criticisms I heard for using a require.def type of syntax as
the CommonJS source file module syntax were:

1) Specifying the name of the module was seen as bad; it makes moving
files to different directories harder.
2) Explicitly reserving "require", "exports", and "module" as
dependency names in the array to map to the CommonJS module
definitions was seen as bad.
3) Perceived to be more typing.

We could get rid of #1, and only use names when combining more than
one module together in a file.

#2: Given that a function callback is used and dependencies are listed
in the array/function args, "require" is not normally needed inside the
factory function (you may want it for some circular dependency/generic
module referencing cases). The factory function can return the module
exports, so "exports" would not normally be needed (maybe only for some
circular dependency cases). And I believe "module" is not needed that
often either (though there are valid cases for needing it). So in
practice most modules would not need to specify any of those special
dependency names. Yes, they would still need to be reserved, but as far
as practical impact on developers or code weight, it seems negligible.

#3 is really just bikeshedding. I believe the code weight is about the
same: return {} is used instead of typing exports more than once,
"require" normally only needs to be typed once, and since return {} is
used for setting exports, constructor functions can be used as the
module export, meaning that you do not have to see extra typing like
"new require('foo').Foo()". So in the end the typing ends up being about
the same.

So the difference between "source module" and "transport" ends up
being the addition of a name for the module as the first arg. I like
that, and it means getting an easy-to-debug, fast-loading source
module format for the browser. How does that sound? :)

If that goes over well, then as jbrantly mentioned, having extra
require.def calls in the transported file should be negligible with
gzip. It loses some optimization in that a common dependency will be
listed many times in the dependency/injection arrays, but I think
gzip helps a bit there too.

If the inertia against changing the CommonJS source module format is
too great, then what you propose is fine, and I will focus on obsoleting
the old CommonJS source module format via RequireJS adoption, possibly
removing the need for the name in the require.def syntax in RequireJS.
I am curious, though: who else is interested in the transport format?
You, jbrantly, and me; I wonder who else actively tries to implement one
of the transport proposals. I am sure there are others, it has just been
a while since transports were brought up.

James

Kris Zyp

Sep 3, 2010, 6:41:44 PM
to James Burke, requ...@googlegroups.com, comm...@googlegroups.com

Sounds great! Maybe we can nail down exactly what should be changed in
the Transport/C spec (optional first module id, optional injection list,
trailing dependency list) and you can do any additional verification of
the script onload association.


> If that goes over well, then a jbrantly mentioned, having extra
> require.def calls in the transported file should be negligible with
> gzip. It loses some optimization in that a common dependency will be
> listed many times in the array of dependency/injections, but I think
> gzip also helps a bit there too.
>
> If the inertia for considering a change to the CommonJS source module
> format is too great, then what you propose is fine, and I will focus
> on obsoleting the old CommonJS source module format via RequireJS
> adoption, possibly removing the need for the name in require.def
> syntax in RequireJS. I am curious though, who else is interested in
> the transport format? You, jbrantly and me, I wonder who else actively
> tries to implement one of the transport proposals. I am sure there are
> others, it has just been a while since transports were brought up.

Well, potentially Dojo, I think that's an important one :).
--

Thanks,
Kris

James Burke

Sep 3, 2010, 6:55:15 PM
to comm...@googlegroups.com, jbrantly
On Thu, Sep 2, 2010 at 3:20 PM, Kris Zyp <kri...@gmail.com> wrote:
> So to summarize I think there are four parts to what I have suggested as
> changes to transport/C:
> 1. Make the injection+dep array optional (defaulting to ["require",
> "exports", "module"]) to make it easy to wrap commonjs modules succinctly.
> 2. Make the module id optional, determining this from the requested
> module and the order of the onload/onreadystatechange event. James
> Burke's tests convinced me that this is feasible, and I think it would
> be great to support anonymous modules and make it much easier to
> hand-code modules without hard-coding them to a specific location.
> 3. Allow for repeating sets of id/injections/factory in the arguments.
> This is in intended to improve the performance of combined modules.
> After thinking about the ability of loaders to use the onload event and
> gzip's elimination of redundancy, perhaps this doesn't buy us that much.
> 4. Allow for a trailing dependency array in the arguments. This array
> does not mix with injection variables, so it is an encumbered namespace,
> and it is also is important for succinct wrapping of commonjs modules so
> the dependencies can be declared without having to write out "require",
> "exports", "modules".

I think gzipping makes the need for #3 and #4 less necessary. If #4
really made your side of processing noticeably better, I could
probably live with it.

I do like #2, want to prototype it some more.

#1 is a bit trickier. If no explicit dependencies are specified in
RequireJS, then I do not bother creating an exports object, since I
allow return in the factory function to define the exports; similarly,
I do not bother manufacturing a "module" object. So I am concerned about
doing extra work when it is not needed. At first glance I prefer to just
be explicit with the dependencies, but I may be able to be convinced
otherwise. Here again, I think gzip helps collapse the size of those
things if it is a common pattern.

I think the bigger thing that is indicated by this approach is that
"require", "module" and "exports" do become reserved strings that map
to the free variables expected in traditional CommonJS modules. I want
to call that out since it was contentious before.

James

Kris Zyp

Sep 3, 2010, 10:29:43 PM
to comm...@googlegroups.com, James Burke, jbrantly

I had hoped the trailing dependency argument would mitigate this
problem. However, I think stating it as two optional arguments is too
confusing. Let me restate my proposal: require.def could take two forms:

require.def(id?, injections, factory); // existing Transport/C (with an
optional first arg, hopefully)
require.def(id?, factory, dependencies);

The second form is like the first, except that the arguments to the
factory are always "require", "exports", and "module", and the third
argument has no reserved module names ("require" would request the
module with that name). Basically, this form would be a super easy way
to wrap existing CommonJS modules, with minimal alteration to
require.def and without reserved strings in the dependency list.
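Since the two forms differ in argument types at each position, a loader could dispatch between them mechanically. A rough sketch (all names hypothetical): a string first argument is an id; an array before the function means injections (form 1); an array after the function means plain dependencies with the default injections (form 2).

```javascript
// Hypothetical dispatch between the two proposed require.def forms:
//   (id?, injections, factory)   and   (id?, factory, dependencies?)
function parseDef(args) {
  var i = 0;
  var id = (typeof args[i] === "string") ? args[i++] : null; // optional id
  if (Array.isArray(args[i])) {
    // form 1: explicit injection list before the factory
    return { id: id, injections: args[i], deps: [], factory: args[i + 1] };
  }
  // form 2: factory first; trailing array is a plain dependency list,
  // with the injections fixed to the CommonJS free variables
  return {
    id: id,
    injections: ["require", "exports", "module"],
    deps: args[i + 1] || [],
    factory: args[i]
  };
}
```

Note that in form 2 the trailing list is an unencumbered namespace: a dependency string "require" there would simply name a module called require.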

--
Thanks,
Kris

Kris Zyp

Sep 7, 2010, 8:47:53 AM
to CommonJS, requ...@googlegroups.com
A few other thoughts on possible improvements to Transport/C:

* require.def should not depend on |this|. You should be able to use
require.def in both forms:
require.def(...);
var def = require.def;
def(...);
This is already true for RequireJS and Yabble, but it seems like it
should be explicitly stated in the specification.

* If we are going to support anonymous modules, we should certainly also
allow relative module ids. This is an important element in keeping
modules portable as well. Relative ids should be supported in the
dependency list and I believe also in the module id argument. The
dependency list module ids should be resolved relative to the module
that is being defined.

Allowing relative ids for the module id (the first param) helps to
address a broader issue of how packages could be accessed from a client
side loader. How do we create "built" packages for browsers? If I
request a module "a" from a package at http://somesite.com/foo, and I
have built "a" to also include its dependency "b", what should
http://somesite.com/foo/lib/a.js return? One possibility is it could
assume that it is being mapped to the "foo" namespace:

require.def("foo/a",["./b"], function(b){...});
require.def("foo/b",[], function(){...});

but then the modules are hard-coded to a particular expected location. A
much more portable solution is anonymous modules plus relative modules:

require.def(["./b"], function(b){...});
require.def("./b",[], function(){...});

I believe we would need to specify that relative module ids are resolved
relative to the last require.def call.
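The resolution rule being proposed (a relative id resolves against the id of the module doing the requiring) can be sketched standalone; the function name here is mine, purely illustrative:

```javascript
// Resolve a relative module id against the defining module's id,
// e.g. "./b" inside module "foo/a" resolves to "foo/b".
function resolveId(relId, baseId) {
  if (relId.charAt(0) !== ".") return relId; // already absolute
  // Drop the base module's own name, then walk the relative segments.
  var parts = baseId.split("/").slice(0, -1).concat(relId.split("/"));
  var out = [];
  parts.forEach(function (p) {
    if (p === ".") return;        // "./" stays in the same directory
    else if (p === "..") out.pop(); // "../" steps up one level
    else out.push(p);
  });
  return out.join("/");
}
```

For an anonymous module the loader would first assign the id (from the requested URL or load order) and then apply this same resolution to the dependency list.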

* Is a "transport" even still the right name for require.def anymore? As
we add the ability to do anonymous modules, maybe relative ids, and with
the existing ability to conveniently map dependencies to arguments, it
seems like it is not strictly a transport (I think Kris Kowal has
pointed this out before), it is perhaps an async module format, module
registration API or module definition API.

Thanks,
Kris

James Burke

Sep 8, 2010, 7:08:49 PM
to comm...@googlegroups.com
On Tue, Sep 7, 2010 at 5:47 AM, Kris Zyp <kri...@gmail.com> wrote:
> but then the modules are hard-coded to a particular expected location. A
> much more portable solution is anonymous modules plus relative modules:
>
> require.def(["./b"], function(b){...});
> require.def("./b",[], function(){...});
>
> I believe we would need to specify that relative module ids are resolved
> relative to the last require.def call.

I do not understand, I would have thought not specifying the module
name at all was good enough. I suppose you may want to explain more by
what a "built" package means.

James

Kris Zyp

Sep 8, 2010, 7:15:58 PM
to comm...@googlegroups.com, James Burke

By built package, I mean a package that may have modules that have
multiple dependencies in a single file like we do with builds in Dojo.
Taking an example from Dojo, the DataGrid.js we distribute to CDNs
(http://o.aolcdn.com/dojo/1.5/dojox/grid/DataGrid.xd.js) includes
multiples modules. Since all these modules are a single file
(DataGrid.js), only the DataGrid module can be anonymous, but the other
ones could at least be defined relatively so that they aren't hard-coded to
a certain path:

require.def(["./_Grid","./DataSelection",...], function(){ /*DataGrid
module */});
require.def("./_Grid",[...], function(){ /*_Grid module*/});
require.def("./DataSelection",[...], function(){ /*DataSelection module*/});
...

Does that make sense?

--
Thanks,
Kris

James Burke

Sep 8, 2010, 8:03:44 PM
to Kris Zyp, comm...@googlegroups.com
On Wed, Sep 8, 2010 at 4:15 PM, Kris Zyp <kri...@gmail.com> wrote:
> By built package, I mean a package that may have modules that have
> multiple dependencies in a single file like we do with builds in Dojo.
> Taking an example from Dojo, the DataGrid.js we distribute to CDNs
> (http://o.aolcdn.com/dojo/1.5/dojox/grid/DataGrid.xd.js) includes
> multiples modules. Since all these modules are a single file
> (DataGrid.js), only the DataGrid module can be anonymous, but the other
> ones could at least be defined relatively so that aren't harded coded to
> a certain path:

That would only work if modules from just one package were included in
the built file. I am not sure how common that is, or if it is worth
supporting. Once there are modules from two packages in the file, a full
ID is needed.

For me, if we are talking about optimized, built code, it means the
module IDs have been set/burned in, so I do not see a great need to
have relative module IDs in that case.

James

Kris Zyp

Sep 8, 2010, 11:11:19 PM
to James Burke, comm...@googlegroups.com

Yes, that makes sense. But relative ids in the dependency list for
anonymous modules would certainly still be reasonable (since it
represents pre-built source code that may be built to any location),
right? And I would think it would be pretty easy to support as well.


--
Thanks,
Kris

James Burke

Sep 9, 2010, 1:02:50 AM
to Kris Zyp, comm...@googlegroups.com
On Wed, Sep 8, 2010 at 8:11 PM, Kris Zyp <kri...@gmail.com> wrote:
> Yes, that makes sense. But relative ids in the dependency list for
> anonymous modules would certainly still be reasonable (since it is
> represents pre-built source code that may be built to any location),
> right? And I would think it would be pretty easy to support as well.

Agreed.

James

Kris Zyp

Sep 9, 2010, 9:43:52 AM
to James Burke, comm...@googlegroups.com
OK, I'll update the proposal with the optional first argument and
relative ids (unless someone else wants to). And I still think that
Transport/C deserves a better name. Any objections to renaming it to
"Module Registration" or "Asynchronous Module Definition" specification?

Also, would relative ids in dependency lists *only* be supported for
anonymous modules or should it be for all modules? Is it more work to
disable this for non-anonymous modules if relative id is supported?

--
Thanks,
Kris

James Burke

Sep 9, 2010, 1:47:17 PM
to comm...@googlegroups.com
On Thu, Sep 9, 2010 at 6:43 AM, Kris Zyp <kri...@gmail.com> wrote:
> OK, I'll update the proposal with the optional first argument and
> relative ids (unless someone else wants to). And I still think that
> Transport/C deserves a better name. Any objections to renaming it to
> "Module Registration" or "Asynchronous Module Definition" specification?

For me, it is a module definition, but if it helps for the purposes of
this group to call it a Transport proposal, that is fine. I would be
OK if it was revved to a Transport/E or whatever the next letter is,
just in case support for the anonymous modules falls through, but
reusing /C also works if you already did changes.

> Also, would relative ids in dependency lists *only* be supported for
> anonymous modules or should it be for all modules? Is it more work to
> disable this for non-anonymous modules if relative id is supported?

Seems fine to allow relative ids in the dependency list when name is
specified for the module. That works today in RequireJS, and I do not
think it forces any new burdens on the loader, since relative ids need
to be supported, they are always resolved in relation to the module id
for the require.def call. In the anonymous require.def case, the
module id gets applied later by the system before dependencies are
dealt with anyway.

James

Kris Zyp

Sep 9, 2010, 9:55:11 PM
to comm...@googlegroups.com, James Burke, requ...@googlegroups.com
Here is the new asynchronous module definition proposal, based on the
ideas from Transport/C and recent changes discussed:

http://wiki.commonjs.org/wiki/Modules/AsynchronousDefinition

Feedback welcome, of course.

Thanks,
Kris

Tom Robinson

Sep 11, 2010, 1:45:22 AM
to comm...@googlegroups.com
So we now have two different ways of defining CommonJS modules? There was a reason the transport specs were named "transport". This significantly complicates the CommonJS module story.

If you perceived the issues with loading CommonJS modules in a browser to be a big deal then it should have been dealt with a year and a half ago when we were defining CommonJS modules. I thought we had concluded it wasn't a major problem.

I don't have a problem with CommonJS modules in the browser because I've been using a similar system (Cappuccino's load system) for several years without major issues.

</rant>

-tom


James Burke

Sep 11, 2010, 5:24:58 AM
to comm...@googlegroups.com
On Fri, Sep 10, 2010 at 10:45 PM, Tom Robinson <tlrob...@gmail.com> wrote:
> So we now have two different ways of defining CommonJS modules? There was a reason the transport specs were named "transport". This significantly complicates the CommonJS module story.
>
> If you perceived the issues with loading CommonJS modules in a browser to be a big deal then it should have been dealt with a year and a half ago when we were defining CommonJS modules. I thought we had concluded it wasn't a major problem.

I have just as much frustration on the other side. A ServerJS group
switching names to CommonJS, but then treating the browser as second
class has not sat well with me. I did try to engage, about a year ago,
but was informed that the browser was not the first target for this
group. It would be bad to assume that this group has had enough of a
cross section of JS developers to know if they got the format right,
particularly given the ServerJS origins.

I do not think I got it perfect with Transport/C-RequireJS, and I am
happy that Kris Zyp has noticed a pattern I missed before that would
allow not specifying the name in a require.def call if there is just
one module in a script.

I believe it brings the format closer to something that could be used
in the browser directly and meets the goals for CommonJS. As I
mentioned in the other thread[1], the issues of typing is a bikeshed,
and the reservation of "require", "module" and "exports" do not seem
that bad, particularly given that most modules will not need them as
much when using a format that has a function wrapper, and it reduces
the overall typing in the format.

> I don't have a problem with CommonJS modules in the browser because I've been using a similar system (Cappuccino's load system) for several years without major issues.

Cappuccino and Objective-J are not what I would consider mainstream
front end development though. Not coding in the language that the
browser already knows is not natural for many front end developers,
including me. Which is fine, the browsers are capable enough to allow
variations on that spectrum. You find it works for you. We in Dojo
have found xhr+eval to be workable for many years, but it does not
work as well as something that uses script tags. We know this from
experience. It makes adoption of the toolkit harder. There are
complications with xdomain loading, debugging and speed. They are
workable, but there are real costs. Using a function wrapper avoids
those costs. It may be tempting to think the tradeoff is the amount of
typing, but as mentioned[1], that can be a hard argument to make
definitively. YUI seems to operate well with a function wrapper too.

I do not want to start a flame war on this, we are likely not to get
anywhere on it. I will try not to respond more on this thread about
it. I just wanted to point out that there is a nontrivial number of
developers that feel differently, and not all want to try to engage
with this group because it has not directly impacted them yet, and
there has not been much in it for them to participate, since it has
been mentioned a couple times that CommonJS is mainly concerned with
non-browser environments. If you want to keep it that way, fine by me,
but know that other solutions may gain more ground, regardless of the
transport or module format labels.

[1] http://groups.google.com/group/requirejs/msg/6922316ab3b66bbb

James

Tom Robinson

Sep 11, 2010, 7:42:44 AM
to comm...@googlegroups.com

On Sep 11, 2010, at 2:24 AM, James Burke wrote:

> On Fri, Sep 10, 2010 at 10:45 PM, Tom Robinson <tlrob...@gmail.com> wrote:
>> So we now have two different ways of defining CommonJS modules? There was a reason the transport specs were named "transport". This significantly complicates the CommonJS module story.
>>
>> If you perceived the issues with loading CommonJS modules in a browser to be a big deal then it should have been dealt with a year and a half ago when we were defining CommonJS modules. I thought we had concluded it wasn't a major problem.
>
> I have just as much frustration on the other side. A ServerJS group
> switching names to CommonJS, but then treating the browser as second
> class has not sat well with me. I did try to engage, about a year ago,
> but was informed that the browser was not the first target for this
> group. It would be bad to assume that this group has had enough of a
> cross section of JS developers to know if they got the format right,
> particularly given the ServerJS origins.

Personally I always intended to use ServerJS for more than servers and never liked the ServerJS name. But I also never thought there would be significant opposition to using the proposed module system in the browser.

> I do not think I got it perfect with Transport/C-RequireJS, and I am
> happy that Kris Zyp has noticed a pattern I missed before that would
> allow not specifying the name in a require.def call if there is just
> one module in a script.
>
> I believe it brings the format closer to something that could be used
> in the browser directly and meets the goals for CommonJS. As I
> mentioned in the other thread[1], the issues of typing is a bikeshed,
> and the reservation of "require", "module" and "exports" do not seem
> that bad, particularly given that most modules will not need them as
> much when using a format that has a function wrapper, and it reduces
> the overall typing in the format.
>
>> I don't have a problem with CommonJS modules in the browser because I've been using a similar system (Cappuccino's load system) for several years without major issues.
>
> Cappuccino and Objective-J are not what I would consider mainstream
> front end development though. Not coding in the language that the
> browser already knows is not natural for many front end developers,
> including me. Which is fine, the browsers are capable enough to allow
> variations on that spectrum. You find it works for you.

The language is irrelevant to this discussion so I'm going to ignore your superficial criticism on that. The Objective-J loader can load standard JavaScript, and we've even modified it to support CommonJS modules (it was a couple dozen line change). It works with Firebug and WebKit debuggers (and presumably anything else that supports @sourceURL). It's also got something similar to the transport format for loading cross domain, but that format is automatically generated by a tool upon deployment, of course.

Does CommonJS even need to appease "mainstream front end" developers anyway?

> We in Dojo
> have found xhr+eval to be workable for many years, but it does not
> work as well as something that uses script tags. We know this from
> experience. It makes adoption of the toolkit harder.

I suspect confusing people with multiple module formats is going to hurt adoption more.

> There are
> complications with xdomain loading, debugging and speed. They are
> workable, but there are real costs. Using a function wrapper avoids
> those costs. It may be tempting to think the tradeoff is the amount of
> typing, but as mentioned[1], that can be a hard argument to make
> definitively. YUI seems to operate well with a function wrapper too.

xdomain and speed are solved by the *transport* format. Debugger support is solved by @sourceURL, or the transport format.

What other value does using a script tag provide? Certainly not familiarity to "mainstream" JavaScript developers, since using any kind of dependency management system is vastly different than manually including script tags.

> I do not want to start a flame war on this, we are likely not to get
> anywhere on it. I will try not to respond more on this thread about
> it. I just wanted to point out that there is a nontrivial number of
> developers that feel differently, and not all want to try to engage
> with this group because it has not directly impacted them yet, and
> there has not been much in it for them to participate, since it has
> been mentioned a couple times that CommonJS is mainly concerned with
> non-browser environments. If you want to keep it that way, fine by me,
> but know that other solutions may gain more ground, regardless of the
> transport or module format labels.

It certainly seems we're not going to convince each other. If the majority of CommonJS contributors agree with you then I'll back off, but so far it seems like it's mostly you and a couple others.

I'll just say my preference if you're going to invent a new module format is to not call it CommonJS modules. We absolutely should not have two incompatible ways of defining modules.

If we do end up having two formats then we need to specify that implementations should support both, otherwise what's the point?

Alternatively, you could make your boilerplate something backwards compatible with CommonJS modules, something like:

(require.def || function(f) { f(exports, require, module); })(function(exports, require, module) {
// ...
}, "foo", ["bar", "baz"]);

Obviously it's a bit verbose and error prone. Not a problem if it's auto-generated though.

-tom

Kris Zyp

Sep 11, 2010, 8:17:28 AM
to comm...@googlegroups.com, Tom Robinson

On 9/11/2010 5:42 AM, Tom Robinson wrote:
> On Sep 11, 2010, at 2:24 AM, James Burke wrote:
>
>> On Fri, Sep 10, 2010 at 10:45 PM, Tom Robinson <tlrob...@gmail.com> wrote:
>>> So we now have two different ways of defining CommonJS modules? There was a reason the transport specs were named "transport". This significantly complicates the CommonJS module story.

But we never did have consensus on the transport API. We have always had
multiple APIs for CommonJS module transport without a clear winning
proposal. The basic underlying difference of approach between
hand-coding an asynchronous module vs auto-generated wrappings of
CommonJS has constantly divided such discussions. The proposal doesn't
introduce anything new (well it altered the transport/C API slightly,
making the first param optional and removed require.pause and
require.resume), it simply calls Transport/C what it really is. If I got
the name wrong, suggest something different (I asked for suggestions
before creating the wiki page), but it ain't a "transport".

> [snip]


> xdomain and speed are solved by the *transport* format.

Absolutely, but the transport methodology is to wrap a CommonJS module.
Doing this by hand is a lot of extra work (AMD is vastly less effort),
and doing it with a tool creates an extra tool dependency. Within Dojo
(which wants to use CommonJS), requiring the use of a tool (for
wrapping) is a non-starter.

--
Thanks,
Kris

Wes Garland

Sep 11, 2010, 11:55:53 AM
to comm...@googlegroups.com, Tom Robinson
Tom:


> I'll just say my preference if you're going to invent a new module format is to not call it CommonJS
> modules. We absolutely should not have two incompatible ways of defining modules.

I agree.  Two module formats is a losing proposition in many ways.  I'm already having enough problems extracting the mozilla-isms which I have been using freely on the server-side from my code so that it will run on the web.

I think in order to resolve this issue, though, we need to look hard at the REAL issues with using CommonJS on the browser.  They all boil down to: how can we load modules asynchronously?

Async loading was never a design consideration for CommonJS modules.  This is not all that surprising, given its origins -- async is not really a requirement on the server, and those who argue that server-side developers should be forced into an async-only paradigm are wrong.

When I first looked at using CommonJS on the browser, it was my assumption that modules would simply be loaded ahead of time, without consideration for lazy-loading.  It is still my belief that this is a perfectly viable technique, and that browser authors would easily learn to load the modules they need in an efficient way: after all, we all learned about image preloading and CSS sprites as those techniques became necessary in our work.

With respect to transports, I have always seen a Transport + a Web-Client Framework to go hand-in-hand in presenting CommonJS modules to the user via the require() statement.  The mechanics of doing so strike me mostly as a tradeoffs/tuning exercise, and that modifying modules depending on which transport system you happen to favour is quite smelly.

Kris Zyp:

What's to stop Dojo from wrapping modules on the client side, when there is no server-side support?

Wes

--
Wesley W. Garland
Director, Product Development
PageMail, Inc.
+1 613 542 2787 x 102

James Burke

Sep 11, 2010, 11:56:05 AM
to comm...@googlegroups.com
On Sat, Sep 11, 2010 at 4:42 AM, Tom Robinson <tlrob...@gmail.com> wrote:
> The language is irrelevant to this discussion so I'm going to ignore your superficial criticism on that. The Objective-J loader can load standard JavaScript, and we've even modified it to support CommonJS modules (it was a couple dozen line change). It works with Firebug and WebKit debuggers (and presumably anything else that supports @sourceURL). It's also got something similar to the transport format for loading cross domain, but that format is automatically generated by a tool upon deployment, of course.

I pointed out the language to illustrate that since you got something
that works with your choice of language, it does not translate to a
more general statement about the module format being ideal for front
end development. I mentioned YUI to illustrate someone else's
perspective of getting something to work just fine for them, but it
happens to use a function wrapper format. I mentioned Dojo because it
does exactly all the things you just described you do for standard
JavaScript in the Objective-J loader. Those tradeoffs are real.
@sourceURL is not helpful if there is a syntax error in the file, and
does not work in IE. It makes development slower; speed is not just about
deployment speed. An extra tool step to get xdomain support adds more
steps, more things to know to deploy code.

The possibility to avoid those problems with a function wrapper format
seem worth it particularly since the amount of typing for it is not
that much different from the existing CommonJS module format, and it
allows setting the exported value in a natural way. While setting
exports may be contentious, the fact that it is implemented in more
than one implementation should indicate it is not a feature wanted by
a vocal minority that has no implementation to back up the talk.

> Does CommonJS even need to appease "mainstream front end" developers anyway?

You call it appeasement, I call it getting them involved in the
discussion, see what works best for them and if there is enough
overlap in goals to agree on something that is generally useful. I
think there is.

> I suspect confusing people with multiple module formats is going to hurt adoption more.

Right now there are three things someone needs to understand in
CommonJS to effectively use code in the browser:
- A module format
- A transport proposal (but which one?)
- An async require syntax proposal (but which one?)

It is confusing now. Compare that with one format proposal that would
define a module syntax, then say, "for optimizing transport/including
multiple modules in one file, you MAY place the ID of the module as
the first argument to require.def. If you do not want to define a
module, but just use some dependencies, use the same argument syntax
as require.def anonymous modules, but drop the .def property access".
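James's unified story can be put side by side with toy stand-ins (the stubs and recorded call kinds below are mine, purely illustrative of the three shapes):

```javascript
// Toy registry so the three call shapes can run without a real loader.
var calls = [];
function require(deps, cb) { calls.push({ kind: "use", deps: deps }); cb(); }
require.def = function () {
  var args = Array.prototype.slice.call(arguments);
  var named = typeof args[0] === "string";
  calls.push({ kind: named ? "named" : "anonymous" });
};

require.def("my/id", ["dep"], function () {}); // MAY carry an id (built files)
require.def(["dep"], function () {});          // anonymous source module
require(["dep"], function () {});              // use dependencies, define nothing
```

The only syntactic difference between the last two calls is the `.def` property access, which is the simplification James is pointing at.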

That said, I am completely fine just calling the proposal from Kris
Zyp Transport/E. Just expect it to be pushed as a source format for
modules for some systems, and we can let implementation and adoption
sort it out.

James

Tom Robinson

Sep 11, 2010, 12:31:46 PM
to comm...@googlegroups.com

On Sep 11, 2010, at 5:17 AM, Kris Zyp wrote:

>
>
> On 9/11/2010 5:42 AM, Tom Robinson wrote:
>> On Sep 11, 2010, at 2:24 AM, James Burke wrote:
>>
>>> On Fri, Sep 10, 2010 at 10:45 PM, Tom Robinson <tlrob...@gmail.com> wrote:
>>>> So we now have two different ways of defining CommonJS modules? There was a reason the transport specs were named "transport". This significantly complicates the CommonJS module story.
> But we never did have consensus on the transport API. We have always had
> multiple APIs of CommonJS module transport with out a clear winning
> proposal. The basic underlying difference of approach between
> hand-coding an asynchronous module vs auto-generated wrappings of
> CommonJS has constantly divided such discussions. The proposal doesn't
> introduce anything new (well it altered the transport/C API slightly,
> making the first param optional and removed require.pause and
> require.resume), it simply calls Transport/C what it really is. If I got
> the name wrong, suggest something different (I asked for suggestions
> before creating the wiki page), but it ain't a "transport".

Sorry, I hadn't been following the details of the transport proposals closely. They were still being called "transport", so I assumed they were still intended to be used only as a transport mechanism (though RequireJS's abuse of the transport proposals bugged me from the beginning). It has come to my attention that that is not the case, and I'm not happy about the change in direction.

>
>> [snip]
>> xdomain and speed are solved by the *transport* format.
> Absolutely, but the transport methodology is to wrap a CommonJS module.
> Doing this by hand is a lot of extra work (AMD is vastly less effort),
> and doing it with a tool creates an extra tool dependency. Within Dojo
> (which wants to use CommonJS), requiring the use of a tool (for
> wrapping) is a non-starter.

And introducing a new first-class module format intended to be written by hand and distributed as source is not acceptable to me. It's crazy to have two incompatible module formats, and if popular projects like Dojo start using the transport format as their module format it's going to confuse the hell out of people coming to CommonJS who see a totally different format elsewhere (as if we don't already have enough non-standard/incompatible features between implementations!)

How is this significantly different than Dojo's current system, or Cappuccino's, or SproutCore's, or YUI's, or basically any other large-ish JavaScript framework. Which frameworks actually use function wrapper boilerplate around every file (in their source)?

To be clear, I have no problem with the idea of a standard module transport format, only with the idea that the transport format should be a first-class module format intended to be written by hand, distributed as source.

-tom

Kris Zyp

Sep 11, 2010, 1:54:14 PM
to CommonJS

On 9/11/2010 7:11 AM, Tom Robinson wrote:
> On Sep 11, 2010, at 5:17 AM, Kris Zyp wrote:
>
>>

>> On 9/11/2010 5:42 AM, Tom Robinson wrote:
>>> On Sep 11, 2010, at 2:24 AM, James Burke wrote:
>>>
>>>> On Fri, Sep 10, 2010 at 10:45 PM, Tom Robinson <tlrob...@gmail.com> wrote:
>>>>> So we now have two different ways of defining CommonJS modules? There was a reason the transport specs were named "transport". This significantly complicates the CommonJS module story.
>> But we never did have consensus on the transport API. We have always had
>> multiple APIs of CommonJS module transport with out a clear winning
>> proposal. The basic underlying difference of approach between
>> hand-coding an asynchronous module vs auto-generated wrappings of
>> CommonJS has constantly divided such discussions. The proposal doesn't
>> introduce anything new (well it altered the transport/C API slightly,
>> making the first param optional and removed require.pause and
>> require.resume), it simply calls Transport/C what it really is. If I got
>> the name wrong, suggest something different (I asked for suggestions
>> before creating the wiki page), but it ain't a "transport".

> Sorry, I hadn't been following the details of the transport proposals closely. They were still being called "transport", so I assumed they were still intended to be used only as a transport mechanism (though RequireJS's abuse of the transport proposals bugged me from the beginning). It has come to my attention that that is not the case, and I'm not happy about the change in direction.
>

>>> [snip]
>>> xdomain and speed are solved by the *transport* format.
>> Absolutely, but the transport methodology is to wrap a CommonJS module.
>> Doing this by hand is a lot of extra work (AMD is vastly less effort),
>> and doing it with a tool creates an extra tool dependency. Within Dojo
>> (which wants to use CommonJS), requiring the use of a tool (for
>> wrapping) is a non-starter.

> And introducing a new first-class module format intended to be written by hand and distributed as source is not acceptable to me. It's crazy to have two incompatible module formats, and if popular projects like Dojo start using the transport format as their module format it's going to confuse the hell out of people coming to CommonJS who see a totally different format elsewhere (as if we don't already have enough non-standard/incompatible features between implementations!)

I don't see these as directly competing, they serve different roles.
There is one and only one CommonJS module format, which defines a set of
guaranteed free variables that will be available for synchronously
requiring modules and exporting functionality within a CommonJS
controlled evaluation context. Asynchronous module definition API, on
the other hand, defines a way of registering a module from outside a
CommonJS controlled evaluation context. The expected context is no
different than the transport API, but it differs in the purpose of being
optimized for hand-coding.
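The distinction Kris draws can be shown with the same module in both roles; the toy registry below is mine, just enough to make the registration form run outside any loader:

```javascript
// Toy registry standing in for a loader.
var mods = { bar: { name: "bar" } };
function def(id, deps, factory) {
  mods[id] = factory.apply(null, deps.map(function (d) { return mods[d]; }));
}

// CommonJS source form: relies on free variables supplied by a
// controlled evaluation context, so it only works inside one:
//
//   var bar = require("bar");
//   exports.test = function () { return bar.name; };
//
// Registration form of the same module: an ordinary function call,
// safe to evaluate in plain browser script scope.
def("foo", ["bar"], function (bar) {
  return { test: function () { return bar.name; } };
});
```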

> How is this significantly different than Dojo's current system, or Cappuccino's, or SproutCore's, or YUI's, or basically any other large-ish JavaScript framework. Which frameworks actually use function wrapper boilerplate around every file (in their source)?

Dojo has used synchronous dependency loading for years, and most of the
committers (who have used this for years) are pretty much in agreement
that it is wrong and don't want it anymore. Asynchronous dependency
loading is necessary, regardless of whether it is called a transport API or
async module definition API.

IIUC, YUI added dependency loading in version 3. And it appears that
they do indeed use a function wrapper around each module. If you look at
the source, you'll see a YUI.add(moduleId, factory, version,
options-with-dependency-list) around each module.
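That YUI.add call shape can be exercised with a minimal stand-in; the registry stub is mine, and the details of the real YUI signature should be treated as illustrative rather than authoritative:

```javascript
// Minimal stand-in for YUI's module registry, so the call shape runs alone.
var YUI = {
  _mods: {},
  add: function (name, factory, version, details) {
    this._mods[name] = { factory: factory, version: version,
                         requires: (details && details.requires) || [] };
  }
};

// Shape of a YUI 3 module registration: a function wrapper per module,
// with the dependency list carried in the trailing options object.
YUI.add("my-module", function (Y) {
  Y.greet = function () { return "hi"; };
}, "1.0.0", { requires: ["node"] });
```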


> To be clear, I have no problem with the idea of a standard module transport format, only with the idea that the transport format should be a first-class module format intended to be written by hand, distributed as source.

So your suggestion is that a client side library use synchronous loading
+ eval-based module loading? Or use CommonJS raw format plus a tool for
transport wrapping? I know in the context of Dojo, the former is what
years of experience have lead us away from. I suggested the latter on
the Dojo ML, and it was thoroughly shot down, Dojo won't limit its
availability to those who are willing to use a transport tool.

--
Thanks,
Kris

Kris Zyp

Sep 11, 2010, 2:22:47 PM
to comm...@googlegroups.com, James Burke

On 9/11/2010 9:56 AM, James Burke wrote:
> [snip]


> That said, I am completely fine just calling the proposal from Kris
> Zyp Transport/E. Just expect it to be pushed as a source format for
> modules for some systems, and we can let implementation and adoption
> sort it out.

I thought the letters were for creating alternate proposals (someone
correct me if I am wrong). This is clearly just an upgrade to
Transport/C (optional first arg, and removal of pause and resume due to
our realization that they weren't necessary) and not meant to compete
with Transport/C, and thus should be draft 2 if it keeps the name. It is
the name (and not the changes) that seems to be contentious. We could
revert back to calling it a "transport", but really? That is just a
lousy name for it (it is appropriate for Transport/D, but not
AMD/Transport/C). I don't see how we are helping the process by giving
it an inappropriate name, just so it has the same name as another API
within CommonJS. I'd love to hear other name suggestions (hopefully
better than "transport").

But perhaps the issue isn't so much with the name change, but just the
fact that AMD/transport/c exists, regardless of the name or minor updates...

--
Thanks,
Kris

Tom Robinson

Sep 11, 2010, 2:52:48 PM
to comm...@googlegroups.com
On Sep 11, 2010, at 10:54 AM, Kris Zyp wrote:

> On 9/11/2010 7:11 AM, Tom Robinson wrote:

>>> [snip]


>>>
>> And introducing a new first-class module format intended to be written by hand and distributed as source is not acceptable to me. It's crazy to have two incompatible module formats, and if popular projects like Dojo start using the transport format as their module format it's going to confuse the hell out of people coming to CommonJS who see a totally different format elsewhere (as if we don't already have enough non-standard/incompatible features between implementations!)
> I don't see these as directly competing, they serve different roles.
> There is one and only one CommonJS module format, which defines a set of
> guaranteed free variables that will be available for synchronously
> requiring modules and exporting functionality within a CommonJS
> controlled evaluation context. Asynchronous module definition API, on
> the other hand, defines a way of registering a module from outside a
> CommonJS controlled evaluation context. The expected context is no
> different than the transport API, but it differs in the purpose of being
> optimized for hand-coding.

Ok, but if I can't share modules between the client and server then using CommonJS on the client loses some of its appeal.

>> How is this significantly different than Dojo's current system, or Cappuccino's, or SproutCore's, or YUI's, or basically any other large-ish JavaScript framework. Which frameworks actually use function wrapper boilerplate around every file (in their source)?
> Dojo has used synchronous dependency loading for years, and most of the
> committers (who have used this for years) are pretty much in agreement
> that it is wrong and don't want it anymore. Asynchronous dependency
> loading is necessary, regardless of whether it is called a transport API or
> async module definition API.

We can (and already do) support async loading of regular CommonJS modules without wrappers using async XHR. See my second paragraph below.

> IIUC, YUI added dependency loading in version 3. And it appears that
> they do indeed use a function wrapper around each module. If you look at
> the source, you'll see a YUI.add(moduleId, factory, version,
> options-with-dependency-list) around each module.
>
>
>> To be clear, I have no problem with the idea of a standard module transport format, only with the idea that the transport format should be a first-class module format intended to be written by hand, distributed as source.
>
> So your suggestion is that a client side library use synchronous loading
> + eval-based module loading? Or use CommonJS raw format plus a tool for
> transport wrapping? I know in the context of Dojo, the former is what
> years of experience have led us away from. I suggested the latter on
> the Dojo ML, and it was thoroughly shot down, Dojo won't limit its
> availability to those who are willing to use a transport tool.

I would advocate both, except use async loading instead of sync. Use the eval-based loader during development for ease of use, then optimize by bundling the modules in the transport format using a tool during deployment (when you'll most likely be running other build processes like minification anyway). This has always been our approach in Cappuccino. If you want to load libraries from a CDN, just make sure they're deployed in the transport format. IMO the loader should support mixing of both module formats side-by-side, so I can point my jQuery (or whatever) package to a CDN and my own modules locally during development.

Regarding async vs. sync loading, I thought it was well understood that you can do asynchronous module loading with regular CommonJS modules and no wrappers or tool. You asynchronously *download* all the modules and their transitive dependencies up front, then synchronously *execute* them. It adds a small amount of complexity to the loader, but not much. This is the reason we mandate strings passed to require() are literals, so that they're statically analyzable.
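The static-analysis step this relies on can be sketched with a toy scanner; a real loader would be more careful, but because require() arguments are mandated to be string literals, even a regex can recover the dependency list:

```javascript
// Toy scanner: find literal require("...") calls in a module's source
// so a loader can download the transitive dependencies up front.
// (A regex is a simplification; real loaders may use a tokenizer.)
function findDependencies(source) {
  var re = /require\s*\(\s*["']([^"']+)["']\s*\)/g;
  var deps = [];
  var match;
  while ((match = re.exec(source)) !== null) {
    deps.push(match[1]);
  }
  return deps;
}

var src = 'var a = require("a"); if (x) { var b = require("b/c"); }';
var deps = findDependencies(src); // ["a", "b/c"]
```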

The transport formats get you a few distinct things:

1) The benefits from loading in a script tag:
a) cross-domain loading
b) better debugging in debuggers that don't support @sourceURL
c) maybe faster parsing/execution (?)
2) Combining multiple modules into a single file.
3) Dependency information that you don't have to parse out yourself.

(note that I don't include async loading because that can also be done with XHR+eval)

So 1a, 1c, 2, and 3 are probably only important during deployment. 1b is the only one that might matter during development, if you're using a debugger that doesn't support @sourceURL, but that's a tradeoff I'm willing to make since the most common debuggers (by far, Firebug and WebKit) support it.

There are already multiple implementations of CommonJS modules for browsers using this async loading / sync executing pattern. I think Yabble, Tiki, and my modified Objective-J loader all do this, off the top of my head.

-tom

Kris Zyp

Sep 11, 2010, 3:40:47 PM
to comm...@googlegroups.com, Tom Robinson

On 9/11/2010 12:52 PM, Tom Robinson wrote:
> On Sep 11, 2010, at 10:54 AM, Kris Zyp wrote:
>
>> On 9/11/2010 7:11 AM, Tom Robinson wrote:
>>>> [snip]
>>>>
>>> And introducing a new first-class module format intended to be written by hand and distributed as source is not acceptable to me. It's crazy to have two incompatible module formats, and if popular projects like Dojo start using the transport format as their module format it's going to confuse the hell out of people coming to CommonJS who see a totally different format elsewhere (as if we don't already have enough non-standard/incompatible features between implementations!)
>> I don't see these as directly competing, they serve different roles.
>> There is one and only one CommonJS module format, which defines a set of
>> guaranteed free variables that will be available for synchronously
>> requiring modules and exporting functionality within a CommonJS
>> controlled evaluation context. Asynchronous module definition API, on
> the other hand, defines a way of registering a module from outside a
>> CommonJS controlled evaluation context. The expected context is no
>> different than the transport API, but it differs in the purpose of being
>> optimized for hand-coding.
> Ok, but if I can't share modules between the client and server then using CommonJS on the client loses some of its appeal.

Using the standard CommonJS module format with the transport API is
totally the way to go if you are doing SSJS, I completely agree (that's
why I wrote transporter). For something like Dojo, probably less than
10% (maybe less than 1%) of users are doing SSJS. Forcing the 90% who aren't
using JS anywhere but in the browser to use a format that can't be
directly loaded in script tags without additional processing is
untenable in Dojo.


>>> How is this significantly different than Dojo's current system, or Cappuccino's, or SproutCore's, or YUI's, or basically any other large-ish JavaScript framework. Which frameworks actually use function wrapper boilerplate around every file (in their source)?
>> Dojo has used synchronous dependency loading for years, and most of the
>> committers (who have used this for years) are pretty much in agreement
>> that it is wrong and don't want it anymore. Asynchronous dependency
> loading is necessary, regardless of whether it is called a transport API or
>> async module definition API.
> We can (and already do) support async loading of regular CommonJS modules without wrappers using async XHR. See my second paragraph below.

Right, Nodules does this (static analysis for the purpose of async), but
you have to control the entire loading process, and Dojo does not. Once a
sync require is made (from any script, inline or in a file), all the
transitive dependencies have to be sync loaded as well.


>> IIUC, YUI added dependency loading in version 3. And it appears that
>> they do indeed use a function wrapper around each module. If you look at
>> the source, you'll see a YUI.add(moduleId, factory, version,
>> options-with-dependency-list) around each module.
>>
>>
>>> To be clear, I have no problem with the idea of a standard module transport format, only with the idea that the transport format should be a first-class module format intended to be written by hand, distributed as source.
>> So your suggestion is that a client side library use synchronous loading
>> + eval-based module loading? Or use CommonJS raw format plus a tool for
>> transport wrapping? I know in the context of Dojo, the former is what
> years of experience have led us away from. I suggested the latter on
>> the Dojo ML, and it was thoroughly shot down, Dojo won't limit its
>> availability to those who are willing to use a transport tool.
> I would advocate both, except use async loading instead of sync. Use the eval-based loader during development for ease of use, then optimize by bundling the modules in the transport format using a tool during deployment (when you'll most likely be running other build processes like minification anyway). This has always been our approach in Cappuccino. If you want to load libraries from a CDN, just make sure they're deployed in the transport format. IMO the loader should support mixing of both module formats side-by-side, so I can point my jQuery (or whatever) package to a CDN and my own modules locally during development.

That is the way it currently works in Dojo.


> Regarding async vs. sync loading, I thought it was well understood that you can do asynchronous module loading with regular CommonJS modules and no wrappers or tool. You asynchronously *download* all the modules and their transitive dependencies up front, then synchronously *execute* them. It adds a small amount of complexity to the loader, but not much. This is the reason we mandate strings passed to require() are literals, so that they're statically analyzable.
>
> The transport formats get you a few distinct things:
>
> 1) The benefits from loading in a script tag:
> a) cross-domain loading
> b) better debugging in debuggers that don't support @sourceURL
> c) maybe faster parsing/execution (?)
> 2) Combining multiple modules into a single file.
> 3) Dependency information that you don't have to parse out yourself.
>
> (note that I don't include async loading because that can also be done with XHR+eval)
>
> So 1a, 1c, 2, and 3 are probably only important during deployment. 1b is the only one that might matter during development, if you're using a debugger that doesn't support @sourceURL, but that's a tradeoff I'm willing to make since the most common debuggers (by far, Firebug and WebKit) support it.

From what I've seen, and from what I remember of the test results, the
performance difference is dramatic. And this is enormously important to
me for development. Faster loading equals faster development. Also, even
with @sourceURL, stack traces don't work in any browser, AFAICT.

> There are already multiple implementations of CommonJS modules for browsers using this async loading / sync executing pattern. I think Yabble, Tiki, and my modified Objective-J loader all do this, off the top of my head.

Yep, and that's exactly what Dojo does as well (just not with the
CommonJS API, but all the same patterns). And after years of supporting
multiple core loaders, and suffering through slow eval based loading, we
are ready to be done with it. Anyway, I think we will definitely have an
auxiliary loader for plain CommonJS modules in Dojo, but having Dojo
modules be written in and designed for CommonJS has been discussed and
totally rejected.

Also, regarding implementations, I know Yabble, RequireJS, and
Nodules (and soon Dojo) all implement Transport/C, so it is a reasonably
well-implemented spec; I don't think we could just make it go away even
if we wanted to. We can give it a less confusing name though (having two
transport specs differentiated by a letter is horrible).

--
Thanks,
Kris

Eugene Lazutkin

Sep 11, 2010, 4:22:39 PM
to CommonJS
What you described is exactly what is done in Dojo: a sync loader + a
build tool (Dojo has an async loader too). Unfortunately there are
problems with this approach.

An example close to my heart: right now I am working on an
application, which doesn't work with the sync loader as intended ---
it shows "script takes too much time to run, do you want to cancel?"
twice, breaking internal timers. So I am forced to do a build.

The build works great --- but now I have a major problem with
debugging: everything is in one (two, three) humongous file(s)
(obviously I do not use minification or any other source code
processing for debugging). If there is a problem on line 12,345 I
don't know where it is in my original source code. It is a real
problem to find the actual file and the correct offset in it. It takes
time and makes me less productive --- I hope everybody knows that
programmers frequently fall into a "groove", and it takes only small
kinks in the overall development process to break it. Probably I
forgot to mention that the build of that application takes about a
minute (Dojo's builder does more than JS concatenation --- it does the
same for CSS, and inlines internal resources).

That's about real-life working with transformation/build tools.

An automated loader, which does all these things on the server on
demand has the same problems and it brings more problems to the table.
The first and foremost problem is the much higher barrier of entry ---
you have to set up/run a web application to serve CommonJS modules,
which is more complex than running from a file system, or serving
static resources with a web server. The web app for that should be
written, or some stock one should be used. It always leads to "Why do
you use Flask? You should be using Ruby on Rails like me!", "IIS?
Windows? Start using PHP on Linux like all cool guys!", and so on.

To sum it up: a specialized loader is a non-starter. It is a hassle to
set up, especially for people who are not sysadmins but who need it for
development. At the moment, and in the near future, it is not available
on the majority of platforms. Legacy platforms are especially bad at that.

The more burden we put on programmers, the more barriers of entry we
create.

Another thing I sense (it was not said openly but assumed) is that
many people are afraid of ... asynchronous module loading in general. I
can understand arguments against doing it asynchronously in non-
browser environments. I just don't see anything in Kris' proposal
that forces such modules to be loaded asynchronously. You can do it
synchronously and "stop the world" while loading dependencies without
any problems. His proposal is about one thing only: *allowing* such
modules to be loaded asynchronously in a browser or anywhere else you
want. Nobody forces implementors to do it one way or another.

If we don't start sharing modules between servers and clients, we will
rob ourselves of this unique ability afforded by JS "all the way".
This is a huge advantage over Java, Python, Ruby, PHP, or any other
server-side languages. So far we are shooting ourselves in the foot by
incessant bike-shedding.

<rant>

We pooh-poohed it a year ago. We can pooh-pooh it now. The reason is
simple --- our majority either doesn't understand the idea of sharing
modules yet because the whole technique is in its infancy, or lives in
closed gardens, which they fiercely guard from "outsiders". Either way
we prevent the technique from going mainstream ('Does CommonJS even need
to appease "mainstream front end" developers anyway?'), and we reduce
the number of programmers who can benefit from it => the smaller the
pool of participants => the less sustainable the community, the fewer
new ideas we get, the less real-world experience we have as a community.
But mark my words: in 2-3 years even former Visual Basic programmers
will want to share modules, and the problem will be solved anyway,
and the very same people in this community who oppose it now will be
agitating for it, and they will say "that was the whole idea of
CommonJS, you know!".

Why the opposition against the inevitable? Is Kris' proposal that
difficult to implement? I don't think so --- all machinery is already
in place in existing loaders. Does it make everything we wrote
instantly obsolete? Last time I checked it is fully backward
compatible. So what is the problem? Wrong name?

</rant>

Cheers,

Eugene

PS: Obviously I agree that require.def() or whatever it is called
should be universally supported --- otherwise the whole idea of
platform-neutral migrating modules doesn't work.

Eugene Lazutkin

Sep 11, 2010, 4:28:36 PM
to CommonJS
BTW, platform-neutral modules/packages work both ways: they will
compel client-side developers to explore and use SSJS. Right now
browsers and SSJS are completely separate environments with different
libraries and everything. It's not much different from doing the server
in Python or Ruby. With truly shared code we instantly gain the benefit
of familiarity. Just think about it.

Cheers,

Eugene

Nathan Stott

Sep 11, 2010, 11:09:16 PM
to comm...@googlegroups.com
I'm going to add some comments without talking about Transport/C specifically.

Sharing code between client and server is the major reason that I use
CommonJS modules. Couchapp and Transporter are great tools. It is
hard to imagine Transporter becoming mainstream.

It would be nice to reduce the friction of sharing modules
between client and server. It would help our community greatly.

Wes Garland

Sep 12, 2010, 6:56:09 AM
to comm...@googlegroups.com
On Sat, Sep 11, 2010 at 11:09 PM, Nathan Stott <nrs...@gmail.com> wrote:
It would be nice to reduce the friction between sharing modules
between client and server.  It would help our community greatly.

Agreed. Let's talk requirements for a moment:

 - server-side process to deliver
 - no difference between server and browser modules
 - works in a plain script tag

So far, I'm seeing "pick two", and I think various members want a different two.  Frankly, I'm having a hard time figuring out how to make all three come true. Is XHR + eval really that slow?

Kevin Smith

Sep 12, 2010, 8:48:18 AM
to comm...@googlegroups.com
All 3:

http://github.com/khs4473/FlyScript

There are a couple of features I'm trying to clean up and the documentation needs a lot of work, but I've used it to develop three biggish client-side libraries and it works really well.  If anyone wants to try it out, I'd be more than happy to work one-on-one with you to get it set up.  It's not hard, but I just haven't had time to write the proper documentation yet.

As an aside, this is written in PHP.  I like PHP (even though it's a weird little language) so I'm a little biased, but seriously, any web programmer that can't get PHP running on IIS or Apache in a day needs some schooling.  The argument that server-side tools are a non-starter because different projects use different web servers is a little overstated (although valid to a certain extent).  Besides, even if a project uses something other than IIS or Apache (like  a  Java engine), there's absolutely nothing preventing them from creating a separate, dedicated script server.  That may sound crazy to you now...  : )


--

Kevin Smith

Sep 12, 2010, 9:12:10 AM
to comm...@googlegroups.com
Sorry - one more thing - for library development (as opposed to application development) you can have the best of both worlds by developing/debugging/testing with modules coming off a module server, and then pack it all up as a standalone javascript file for your users that don't utilize a module server.

Nathan Stott

Sep 12, 2010, 10:13:34 AM
to comm...@googlegroups.com
It sounds as though having to have a dedicated script server is
exactly what is a non-starter for Dojo.

Nathan Stott

Sep 12, 2010, 10:58:13 AM
to comm...@googlegroups.com
If I have to pick two of those requirements, I'd pick 'works in plain
script tag' and 'no difference between server and browser modules'

Wes Garland

Sep 12, 2010, 12:09:26 PM
to comm...@googlegroups.com
Heh - looks like both Kevin and Nathan have found holes in my original post (should stop posting before coffee, I guess).

Kevin - Your solution meets only "pick two" from the intent of my post; there is no difference between server & browser, it works in a plain script tag, but it needs a server-side process to deliver (PHP).

Nathan - you've picked what I also consider "the right two" - but I don't know how to implement that without extra boilerplate in the modules, or changing the definition of what constitutes a module. Do you?

Incidentally, I've been mulling a solution which injects SCRIPT tags into an IFRAME somehow and yanks the content out as a way to avoid XHR+eval, but there is something nagging me there about feasibility. 

I know jQuery's load function would lead to a workable solution. Does anybody know how that works? Does it XHR the HTML and then eval everything in SCRIPT tags?

Kevin Smith

Sep 12, 2010, 12:24:57 PM
to comm...@googlegroups.com
I've said this before and I'll say it again:  the "servers are evil" thing is a relic.  : )  But I don't expect anyone here to agree with me so I'm not going to argue it further.


--

Mikeal Rogers

Sep 12, 2010, 12:34:52 PM
to comm...@googlegroups.com
I find it kind of hilarious that every time we have spec work about making Modules work better in the client the same people freak out and say that CommonJS treats the browser as "second class".

We have a system in CouchDB that forces all your modules into a sandbox scoped to a particular document. The browser pulls down that document and scopes a require function to return modules from it. It works great.

YUI, Dojo, and I guess Cappuccino all have similar systems that compile all the modules you need down into a single GET. They didn't build them that way to be compatible with an existing standard that didn't exist yet; they did it because they wanted an easier way to separate units of functionality into different files and then merge them all together to reduce the performance overhead of so many file loads in the browser.

Using those systems is slightly more complicated than regular script includes, sure. You know what *is* just as easy as regular script includes? Fucking script includes. If you want things to be as "easy" as they are now then don't use Modules.

People seem to want some of the advances *on top of* Modules, like dependency resolution, without the easy separation of concerns that enables the rest of that system.

I guess what I'm getting at, and it's taking a while, is that it's a much better idea to try to standardize a few ways of making existing Modules work in the browser, like the recent adaptor spec http://wiki.ecmascript.org/doku.php?id=conventions:commonjs_adaptor, and to work on standardized tooling for merging the modules you need into one GET the way that Dojo and YUI have already done (but in a more Modules-specific way), than it is to try to create new specs that require divergences in the modules themselves.

-Mikeal

Dustin Machi

Sep 12, 2010, 12:55:21 PM
to comm...@googlegroups.com
From a performance perspective, aside from the eval, XHR dependency loading in the browser usually ends up being nested sync XHR calls, which is much worse than the equivalent number of script tags. With script tags, the browser in most cases can load the scripts in parallel (inline script issues aside) and simply ensure they execute in the proper order. Script tags are also required, for all practical purposes, for cross-domain loading. Finally, I have written several loaders supporting full on-the-fly building of modules for the browser. It can certainly be done and works well, but it is not the solution for everyone (perhaps not even most). In practice, CDNs distribute the application code (before commenting, I know that these could be built), sites share code/widgets, requiring a compile after every change sucks if you develop scripts, etc.

It seems to me we either are Common, which is to say we support JavaScript equally across all platforms, or we aren't. I have a feeling that most people's interest, at least from the browser side, is the prospect of not being locked into a particular project/vendor and of more compatibility between common functionality. Secondly, in much of the technical discussion around browser loading, it is often forgotten that there is little choice for loading in the browser; there are a few ways that have been fairly well tested and sorted for their pros/cons. On the back end, however, while it is perhaps a little inconvenient at times to conform to the browser, doing so usually has little or no cost in terms of performance. The converse is not true, as proven by experience at Dojo and elsewhere. Dozens of loaders exist and have been developed over years. This is not to say some of the browser solutions don't work, but none have proven to be more performant. Performance in the limited browser environment comes from every single place it can be grabbed, which I can attest to having spent the past couple of years mostly doing performance analysis of customers' browser apps.

A server platform running a set of CommonJS modules has full control over how its code gets loaded, and can do so by pre-processing (building), processing on the fly, or even processing on the fly and caching, without a lot of difficulty. This is impossible to do on the browser side without causing performance concerns. Someday that will hopefully change for the better; at the current time IE6/7 are still supported by a large chunk of the cash-paying world. Requiring a specific server solution is often difficult or impossible at large organizations. They could of course implement that solution in their ecosystem on their own to the extent that one didn't exist (and in the beginning that's all of them), however that can cost as much as implementing the project they planned to use it for in the first place.

In short I fully support using servers to provide optimizations for loading in the browser, but that should not be required for a common system. The servers have the ability to adapt without penalty, while the browsers don't without increasing the requirements for a project.

Dustin

Mikeal Rogers

Sep 12, 2010, 1:04:39 PM
to comm...@googlegroups.com
The only requirement on the browser to support this system is a single callback which will fire after the application/framework code is done loading. This gets us out of sync XHR (the devil!) and allows a framework to load its modules and provide a require function.

Maybe it's just me but requiring a single callback in the loader sounds a lot simpler than requiring a ton of callbacks in the crafting of all modules especially since almost all modern js frameworks already abstract the onload callback and require you to put your application code in it.

I'm just not seeing the benefit of putting this complexity on the authors of modules (many) instead of on the implementers of loaders (relatively few).
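The single-callback shape being described can be sketched as follows (a toy loader with a hypothetical API, not any real framework's):

```javascript
// Toy loader: modules register themselves, and application code
// supplies one callback that fires once loading is finished.
var loader = {
  _modules: {},
  _ready: false,
  _callbacks: [],
  define: function (id, exports) { this._modules[id] = exports; },
  // Application code registers the single callback it needs:
  ready: function (cb) {
    if (this._ready) { cb(this._require()); }
    else { this._callbacks.push(cb); }
  },
  // The framework calls this once, after all module files have loaded:
  finish: function () {
    this._ready = true;
    var req = this._require();
    this._callbacks.forEach(function (cb) { cb(req); });
  },
  _require: function () {
    var modules = this._modules;
    return function (id) { return modules[id]; };
  }
};

loader.define("math", { double: function (n) { return n * 2; } });

var result;
loader.ready(function (require) {
  result = require("math").double(21);
});
loader.finish(); // fires the callback; result is now 42
```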

-Mikeal

Eugene Lazutkin

Sep 12, 2010, 1:25:37 PM
to CommonJS
I am sorry to say but your whole premise is wrong. Using Dojo loader
is actually *easier* than *fucking script includes*. This is a typical
example:

<!-- loading Dojo -->
<script src="dojo/dojo.js"></script>
<script>
// let's load dependencies
dojo.require("dojo.io.script");
dojo.require("my.library");

// the actual code starts here

// yes, we can load a module any time we want
if(someCond){
  dojo.require("dojo.colors");
  // do something with colors
}

// yes, we can load modules at will
var myVar = "dijit.Dialog";
dojo.require(myVar);

// and so on
</script>

This is for a synchronous module loader. The asynchronous one adds a
callback after loading a group of modules. If you interested I can
whip up an example for it too. It'll be practically the same.

In any case these are the point to observe:

1) Loading a module with the existing Dojo loader(s) is stupidly
simple.

2) Dependency resolution just works. No need to do anything special.
Just do not forget to dojo.require() what you need before you need
that and the rest is going to be tracked.

3) The boilerplate to load a module is smaller than a script (I used
HTML5 style for <script>, many add even more gunk like type=""). The
boilerplate can be even smaller, if the loader used smaller words.

4) The loader affords a flexibility unmatched by scripts --- we can
load modules conditionally, or at will. Doable with scripts, but it
will require some non-trivial JS in place. The loader takes care of
it.

5) Both loaders in Dojo (sync and async) do not require any kind of
server components, and can work with raw manually written modules and/
or "built" applications.

6) What you see in the second script block (the inlined JS) is
basically a Dojo module. There is nothing special about them. The only
difference is that it will identify itself with dojo.provide("my.module")
--- that's literally the only difference. So don't expect to see any more
boilerplate than in my example.

Pardon me if I don't see the part about "Using those systems is
slightly more complicated than regular script includes, sure.". It is
actually easier than scripts. The rest of the arguments do not work for
me because of your flawed initial assumptions.

Cheers,

Eugene

On Sep 12, 11:34 am, Mikeal Rogers <mikeal.rog...@gmail.com> wrote:
> I find it kind of hilarious that every time we have spec work about making
> Modules work better in the client the same people freak out and say that
> CommonJS treats the browser as "second class".
>
> We have a system in CouchDB that forces all your modules in to a sandbox
> scoped to a particular document. The browser pulls down that document and
> scopes a require function to return modules from it. It works great.
>
> YUI, Dojo, and I guess Cappuccino all have similar systems that compile down
> all the modules you need in to a single GET and they didn't build them that
> way because they wanted to be compatible with an existing standard that
> didn't exist yet they did it because they wanted to have an easier way to
> separate units of functionality in to different files and then merge them
> all together to reduce the performance overhead of so many file loads in the
> browser.
>
> Using those systems is slightly more complicated than regular script
> includes, sure. You know what *is* just as easy as regular script includes,
> fucking script includes. If you want things to be as "easy" as they are now
> then don't use Modules.
>
> People seem to want some of the advantages *on top of* Modules like dependency
> resolution without the easy separation of concerns that enable the rest of
> that system.
>
> I guess what I'm getting at, and it's taking a while, is that it's a much
> better idea to try and standardize a few ways of making existing Modules
> work in the browser, like the recent adaptor spec
> http://wiki.ecmascript.org/doku.php?id=conventions:commonjs_adaptor, and to
> ...
>

James Burke

Sep 12, 2010, 1:27:14 PM9/12/10
to comm...@googlegroups.com
On Sun, Sep 12, 2010 at 9:34 AM, Mikeal Rogers <mikeal...@gmail.com> wrote:
> YUI, Dojo, and I guess Cappuccino all have similar systems that compile down
> all the modules you need in to a single GET and they didn't build them that
> way because they wanted to be compatible with an existing standard that
> didn't exist yet they did it because they wanted to have an easier way to
> separate units of functionality in to different files and then merge them
> all together to reduce the performance overhead of so many file loads in the
> browser.

YUI has gone with a module format that uses a function with
dependencies specified outside it, and Dojo wants to do the same,
precisely for the points that have been brought up in this list. Two
different groups who have been living with building large,
componentized systems in the browser for many years are moving to that
pattern.

Both use a build system to help optimize code delivery in production
over the network, but those tools are not required for development. I
believe both have developer-time versions of the optimization tools:
the developer can run a step that optimizes once, as a part of code
deployment, so a specialized server is not needed.

So, feel free to think it is not important because you have personally
not seen a need for it, but know that there are people with real world
experience and implementations that feel otherwise. It is fine if this
group does not want to target those groups, but it will likely mean
the module format pushed by this group will not be so common,
particularly in the browser.

I think that is unfortunate, particularly since Kris Zyp's latest
proposal makes it really close to the existing module format design
goals (in particular no module name in the file), and the difference
in typing is really not that great when you consider that exports is
not needed in the vast majority of cases, which allows setting of
exports (that leads to not needing "module" as much), and "require"
does not need to be typed for each dependency.

James

Eugene Lazutkin

Sep 12, 2010, 1:41:22 PM9/12/10
to CommonJS
Mikeal,

Could you explain what "putting this complexity on the authors of
modules" actually means? With examples, if possible. Looking at the
async module proposal I don't see any complexity for authors. In the
case you were talking about some performance penalty --- I don't see
it either, at least one extra function call per module do not look
like a big deal to me.

In any case it is less complex than an alternative proposal cited in
http://groups.google.com/group/commonjs/browse_thread/thread/72fc194f04a6822b

Cheers,

Eugene

Dean Landolt

Sep 12, 2010, 2:51:46 PM9/12/10
to comm...@googlegroups.com


On Thu, Sep 9, 2010 at 9:55 PM, Kris Zyp <kri...@gmail.com> wrote:
 Here is the new asynchronous module definition proposal, based on the
ideas from Transport/C and recent changes discussed:

http://wiki.commonjs.org/wiki/Modules/AsynchronousDefinition

Feedback welcome, of course.


Is there anything about the require.def line that can't be elicited from static analysis? If you were hand-coding, is there anything you can do that can't be decompiled to a set of modules on the filesystem? If not, then I don't see what all the fuss is about. Just as transporter packages up modules with require.def, we could just as easily have an untransporter for converting things back down into a SecurableModules format, right?

I believe that truly sharable modules are one goal of this group. I personally very much like things as they are, and I'm perfectly happy to use a transport mechanism to push my modules into the client. But I still can't help but think of the world of modules out there that I can't (easily) use and feel like we could do more. Just imagine dojo's libs being just a require away? I bet some of the other libs would quickly follow suit.

It's apparent that there are two distinct, divergent needs here. Why can't we have both, especially if they're perfectly compatible? It would throw the doors wide open on this whole sharable modules thing.

There's still a good argument that having two module formats is confusing. But if you think of require.def as metadata, useful in one context (the browser or other async-required contexts) and superfluous in others (the server), it strikes me as a lot more palatable -- it's not really two module formats at all -- just two different ways of maintaining module metadata.

So what about require.def can't be expressed in our current modules?

Nathan Stott

Sep 12, 2010, 3:19:59 PM9/12/10
to comm...@googlegroups.com
I use Couchapp. I like Couchapp. However, I can see the value of
modules being loadable via script tags.

Also, I am not one of the 'same people' saying that CommonJS treats
the browser as 2nd class. It just plain does. It's obvious. I've
been using Zyp's Transporter and I am happy with it; however, a
solution like that will never achieve the mass adoption that I would
like to see. If it requires a server, I doubt it will ever be used by
a large number of people.

Same goes for couch. It's great. I love it. I use it in almost
every project I work on now. However, couch as a gateway to commonjs
is going to pull in only a small number of 'elite' people. We're
trying to find a way to get the masses. Yes I know couch is deployed
on Ubuntu, but regular programmers are not going to be programming
couchapps anytime soon.

Daniel Friesen

Sep 12, 2010, 4:50:05 PM9/12/10
to comm...@googlegroups.com
I don't disagree...
Even Python can spawn up a local server to serve out wrapped commonjs
modules. Add in php and you've got the majority of web hosts.
It's not like simply wrapping a commonjs module is anything complex, so
I fail to see how implementing it in your own ecosystem is prohibitively
expensive; I believe Wes even demonstrated how simple it is.
If you're obsessed with not running any server while you're doing local
development you're on one domain anyways, so I also don't see how
running xhr while developing and dropping the wrapped code into an
inline script is a problem.
Even if you're trying to run without any server, it's not like there is
anything stopping you from using a quick command-line script to wrap and
concatenate your scripts into one file. If you've got a problem with
typing in a command, then set up a script to poll the filesystem
(better yet, use something like inotify if your system supports it) and
execute the command when you save a file.
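The wrap-and-concatenate step Daniel describes can be sketched in a few lines. This is a hedged illustration only -- the function names and the sample module are invented, and a real tool would also handle escaping, dependency lists, and file I/O:

```javascript
// Wrap one raw CommonJS module source in Transport/C-style
// boilerplate.
function wrapModule(id, source) {
  return 'require.def("' + id + '", function (require, exports, module) {\n'
       + source + '\n});\n';
}

// Concatenate several wrapped modules into a single file's worth of
// text. modules is a map of { id: sourceText }.
function bundle(modules) {
  return Object.keys(modules).map(function (id) {
    return wrapModule(id, modules[id]);
  }).join('\n');
}

var out = bundle({
  "greet": 'exports.hello = function () { return "hi"; };'
});
```

The output is a single script that any Transport/C-aware loader could consume, produced with no server component at all.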

~Daniel Friesen (Dantman, Nadir-Seen-Fire) [http://daniel.friesen.name]


Eugene Lazutkin

Sep 12, 2010, 6:01:03 PM9/12/10
to CommonJS
I am 100% with you if we target a bunch of hobbyists. Not if we work
with real world companies.

Case in point: I am a Linux guy myself. I love Linux. Nevertheless I
worked for Verizon at some point. This is a Microsoft shop. PHP and
Python would not fly with them, and the idea "Add in php and you've
got the majority of web hosts" is completely foreign --- they don't
care about 3rd-party web hosts. How big Verizon is? At my time it had
250,000 employees (running Windows XP on desktops) and 45,000,000
paying customers (not counting mobile) served by Microsoft Servers (my
apps had 120 servers allocated to them). While I could (purely
theoretically) mandate what they run on the client side (e.g., Verizon
supported Firefox for a long time), I cannot tell them "I think you
are idiots, you should switch to Linux and PHP, and forget about
client-side solutions from me until you comply with my ultimatum". :-)
I hope you understand that this is a major business decision, which
has nothing to do with technical reasons.

Regarding dev environments: working with different clients I have seen it all.
Some companies send me laptops. With Windows. I am not approved to use
anything else (short of hacking their security). Even BIOS is secured.
I cannot install anything new on this laptop (development tools come
preinstalled). And I cannot run the app on my own computer because all
data comes from VPN. Usually it is not a big deal for a client-side
development because the only thing you need is a text editor.

My other client gave me a desktop, which I should use when I visit
them. With Windows. Again, I cannot install unapproved software. To work
around it I decided to install Ubuntu in VM. Great, Microsoft Virtual
PC comes with Windows! The problem is Ubuntu doesn't work under
Virtual PC for some reason. I spent a couple of days fiddling with it, and
finally decided to do my development under Windows.

Being mostly a front-end web developer, in most cases I do not run any
server --- it is already provided to me and run by admins. More
servers look like a major complication to me. I don't argue that there
are situations when it is simple to add a loader or whatever you
want. My point is there are situations when this is not an option. I
hope you understand that being server-side developers you have a very
specific view of web development in general. But there are other
points of view, which have a right to exist too.

To add another wrinkle to this story --- I am the original author of
dojox.gfx (a cross-platform client-side vector graphics library). At
the moment it supports 4 different backends. The total size of all
backends is way too big to load together. So the client side makes a
choice which one to load dynamically. What does it mean? It means we
need a dynamic loader. Obviously there are only two solutions: make a
choice and load it from a client (that's how it is done at the
moment), or sniff a UA string and include right backend on the server
effectively assembling a collection of modules dynamically. The static
concatenation of files doesn't work as is => we need a small simple
active component, if we go with a server-side loader.
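The client-side backend choice Eugene describes might look like the following sketch. The detection flags and module ids here are illustrative only -- this is not the actual dojox.gfx selection code:

```javascript
// Pick exactly one graphics backend based on what the environment
// supports, so only that backend's module needs to be loaded.
function pickBackend(env) {
  if (env.hasSVG) return "gfx/svg";
  if (env.hasCanvas) return "gfx/canvas";
  if (env.hasVML) return "gfx/vml";
  return "gfx/silverlight"; // last-resort fallback
}

// Example: a browser with canvas and VML but no SVG.
var backendId = pickBackend({ hasSVG: false, hasCanvas: true, hasVML: true });
// With an async loader, only this one module would then be fetched,
// e.g. require([backendId], function (gfx) { /* draw */ });
```

The point is that the id passed to the loader is only known at run time, which is why static concatenation alone cannot cover this case.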

Implementing a pure client-side solution I am totally free to do
whatever I want. As soon as I have a server-side component I have a
headache --- almost all major companies require a security review +
extensive security tests for any server-side code. It may take months
and usually requires producing a bunch of documents spelling out all
the gory details. Nobody needs this hassle.

Cheers,

Eugene

PS: Just ask Dojo guys about Dojo and security audits --- some
harmless PHP files were found in test directories --- fun for the
whole family ensues --- new release is forced without those files. Now
when you deploy Dojo there is no way to run some tests.

Kris Zyp

Sep 12, 2010, 6:43:15 PM9/12/10
to comm...@googlegroups.com
Just to be clear, I have not proposed obsoleting *the* CommonJS module specification, and the environment that it defines (defined in terms of the free variables that it provides). There are numerous implementations, most notably Node, which guarantee its permanence. All I really did was make some relatively small (smaller than I originally proposed) updates to the existing Transport/C API, which included giving it a more appropriate and descriptive name. This proposal has existed for some time now, and has several independent implementations (so we couldn't eliminate it now even if we wanted to). I haven't really proposed anything new (I guess this discussion is just more about the merits of using the AMD API vs the direct module format).


On 9/12/2010 4:56 AM, Wes Garland wrote:
On Sat, Sep 11, 2010 at 11:09 PM, Nathan Stott <nrs...@gmail.com> wrote:
It would be nice to reduce the friction between sharing modules
between client and server.  It would help our community greatly.

FWIW, require.def/Transport/C is supported in Nodules, which runs on Narwhal and Node (and works with the standard module format as well), and RequireJS, which I believe runs on Node, so there is server side support for this (not that I am claiming widespread usage of Nodules or RequireJS, but impls are available on the server side).

Also, the boilerplate for writing modules that run on both AMD/Transport/C and native CommonJS is the lightest boilerplate of any, I think; you can pretty easily do one in about 100 characters.



Agreed. Let's talk requirements for a moment:

 - server-side process to deliver
 - no difference between server and browser modules
 - works in a plain script tag

So far, I'm seeing "pick two", and I think various members want a different two.  Frankly, I'm having a hard time figuring out how to make all three come true. Is XHR + eval really that slow?

Yes, from what I have seen of the script tag based prototype of Dojo vs the standard XHR + eval, the difference is very big. I can try to dig up some of the test results if you are interested.

-- 
Thanks,
Kris

Irakli Gozalishvili

Sep 12, 2010, 8:01:49 PM9/12/10
to comm...@googlegroups.com
Hi I'm late on this thread but let me still express my opinion:

1. Browser is important.
2. CommonJS modules were not designed for browsers.
3. Requiring tooling for writing browser-centric code is a no-go at large.
4. There is no solution that can address both scenarios, browser and server; in fact I don't think there is a solution that may work for all frameworks either.

That being said, I do think it would be very wrong if CommonJS modules were designed with browsers in mind, since it would put too much legacy on the servers.

I don't really understand the whole point of trying to extend require or modules in order to provide better support for transports in browsers. We already have modules, so why not just use them as a base?

require('dojo/transport').def(...) << and go wild
require('YUI').add(....)

It also would be pretty easy to implement transport modules on any commonjs platform in order to be able to use browser centric modules. Also one may implement 'sudo/transport' module to proxy between 'dojo/transport', 'YUI' or 'foo/bar'. I think that's a much better way to go with both server and client side commonjs.



Regards
--
Irakli Gozalishvili
Web: http://www.jeditoolkit.com/
Address: 29 Rue Saint-Georges, 75009 Paris, France



Eugene Lazutkin

Sep 12, 2010, 10:47:45 PM9/12/10
to CommonJS
I think we were reading different proposals. The one I read
facilitated an asynchronous loading of modules in any suitable
environment. More than that it could be implemented by totally
synchronous means. Nobody wants to bring "browser legacy" to any
platform. The simple one-liner allows different implementations of
loading techniques, that's all. As far as I can tell the proposal is
totally backward compatible, and does not force module writers to use
it. If you found something else, please share the stuff I missed with
direct links and explanations.

Cheers,

Eugene

nlloyds

Sep 13, 2010, 3:31:44 PM9/13/10
to CommonJS
On Sep 12, 2:19 pm, Nathan Stott <nrst...@gmail.com> wrote:
> Also, I am not one of the 'same people' saying that CommonJS treats
> the browser as 2nd class.  It just plain does.  It's obvious.  I've
> been using Zyps Transporter and I am happy with it; however, a
> solution like that will never achieve the mass adoption that I would
> like to see.  If it requires a server, I doubt it will ever be used by
> a large number of people.

If good solutions like Transporter or FlyScript were readily available
on the majority of platforms (Rails plugin? modulr is a start, Python/
django, .Net, Java, et al.), in addition to availability of things
like the conversion scripts included with RequireJS, humans would
rarely need to write modules in the AMD/Transport format.

Seems like the boilerplate needed is trivial enough that we could code
ourselves out of the confusion caused by the differences between the 2
formats. Those who have solutions for loading plain modules don't have
to use it, either.

Nathan

Tom Robinson

Sep 13, 2010, 5:10:40 PM9/13/10
to comm...@googlegroups.com
A few comments in case we move forward with this...

1) If we're making the module ID optional, why not make the dependencies optional too? The dependencies can be extracted from the module text (obtained by toString-ing the function). If the goal is for this to be hand-written, it doesn't get any simpler than require.def(function(exports, require, module) { ... }). It would be rather annoying to keep all that duplicated information in sync by hand.

This wasn't an issue with the original transport specs since they were intended to be generated programmatically. It's also not a problem in other systems like YUI since the dependencies are only specified at the "top", not also in calls to "require()".

Obviously there's a slight performance hit in the toString-ing and parsing of the module text, so it should be optimized during deployment, which brings me to my next point.

2) If the module ID is omitted it's impossible to bundle multiple modules in the same file, so at deployment we should be able to replace the hand written boilerplate with more complete generated boilerplate that includes the ID and dependencies. I'm not sure if this has any impact on the spec, just throwing that out there.

3) We should consider recommending all loaders implement both forms of module definition to maximize interoperability.

-tom

Eugene Lazutkin

Sep 13, 2010, 8:28:18 PM9/13/10
to CommonJS
Inline.

On Sep 13, 4:10 pm, Tom Robinson <tlrobin...@gmail.com> wrote:
> A few comments in case we move forward with this...
>
> 1) If we're making the module ID optional, why not make the dependencies
> optional too? The dependencies can be extracted from the module text
> (obtained by toString-ing the function). If the goal is for this to be hand
> written it doesn't get any simpler than require.def(function(exports,
> require, module) { ... }). It would be rather annoying to keep all that
> duplicated information in sync by hand.

There are two problems with this extraction:

a) Regexes are brittle and generally unreliable; they have problems
with corner cases. The only good way to do it is to run a proper
lexer. And still some dependencies would present difficulties (this
can be solved with more stringent guidelines).

b) Function.toString() does not always produce the source code for
the function. For example, many mobile platforms do not return
anything meaningful due to performance and size considerations.

Other than that I think you are raising valid questions.

Cheers,

Eugene

Kris Zyp

Sep 14, 2010, 11:19:28 PM9/14/10
to comm...@googlegroups.com

On 9/13/2010 6:28 PM, Eugene Lazutkin wrote:
> Inline.
>
> On Sep 13, 4:10 pm, Tom Robinson <tlrobin...@gmail.com> wrote:
> 1) If we're making the module ID optional, why not make the
> dependencies optional too? The dependencies can be extracted from the
> module text (obtained by toString-ing the function). If the goal is
> for this to be hand written it doesn't get any simpler than
> require.def(function(exports, require, module) { ... }). It would be
> rather annoying to keep all that duplicated information in sync by hand.

I like this idea. It would make wrapping ridiculously simple (no
server-side analysis needed, purely static wrapping). Actually I had
proposed that we make the dependencies optional earlier in the original
thread, but hadn't considered the possibility of using static analysis.
And it seems like you could want it either way. I think it would be
best if static analysis was done if only the factory was provided
(module id and dependencies omitted), which is more targeted at the
development stage, and static analysis was not done if the module id
was provided, since this usually corresponds with "built" files where
the dependencies may already be included in the file (and don't need to
be listed since they are known to be provided) and it is faster to
avoid the static analysis for production code, obviously.

I guess this really comes down to whether or not implementers are
willing to include static analysis code. For my implementation, Nodules,
this is already present, but for the browser loaders, RequireJS and
Yabble, this might be a hard sell. However, I looked at what it would
take to add this to RequireJS, and it looks like it is about a 75-78 byte
addition (after minification, unless I did something wrong below), which
seems like a pretty light addition.

callback.toString().replace(/require\("([^"]*)"\)/g,function(t,m){deps.push(m);});
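Expanded into a runnable form, that one-liner might look like the sketch below. It is only a sketch, and it inherits the limitations discussed in this thread: the regex matches only double-quoted require("...") calls, and it relies on Function.prototype.toString returning real source:

```javascript
// Extract dependency ids from a factory function by toString-ing it
// and scanning for require("...") calls.
function extractDeps(factory) {
  var deps = [];
  factory.toString().replace(/require\("([^"]*)"\)/g, function (t, m) {
    deps.push(m);
  });
  return deps;
}

var deps = extractDeps(function (require, exports, module) {
  var foo = require("foo");
  var bar = require("bar");
  exports.go = function () { return foo && bar; };
});
// deps is ["foo", "bar"]
```

A loader could run this only when the dependency array is omitted, exactly as proposed above.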

>> A few comments in case we move forward with this...
>>
>> 1) If we're making the module ID optional, why not make the dependencies
>> optional too? The dependencies can be extracted from the module text
>> (obtained by toString-ing the function). If the goal is for this to be hand
>> written it doesn't get any simpler than require.def(function(exports,
>> require, module) { ... }). It would be rather annoying to keep all that
>> duplicated information in sync by hand.
> There are two problems with this extraction:
>
> a) Regexes are brittle and generally unreliable, they have problems with
> corner cases. The only good way to do it is to run a proper lexer. And
> still some dependencies would present difficulties (can be solved with
> more stringent guidelines).
>
> b) Function.toString() does not always produce the source code for the
> function. For example, many mobile platforms do not return anything
> meaningful due to performance and size considerations.

So those are the limitations of omitting the dependencies. Certainly
doesn't seem like a problem to have this option as long as we indicate
the limitations. We have already pretty clearly stated on the list that
require statements must be in the form require("module-id") in order to
work on all module loaders, since it was expected that some would use
static analysis (and such loaders do indeed exist, Nodules and I believe
Yabble). Using toString'ed functions actually makes the regex even
simpler since the code is normalized. And if this feature is mainly
going to be used for development, mobile platforms could easily be a
non-issue for many projects. Obviously there are projects where this
would be an issue. And they wouldn't use this feature.

Anyway, implementation cost is probably the biggest barrier to this
being accepted.

--
Thanks,
Kris

Daniel Friesen

Sep 14, 2010, 11:46:38 PM9/14/10
to comm...@googlegroups.com
Trying to not consider browsers second-class with a feature that considers mobile second-class... doesn't that seem a little... well, maybe I shouldn't pull out the H- word...

Dean Landolt

Sep 14, 2010, 11:47:47 PM9/14/10
to comm...@googlegroups.com
Do you do your development on your phone? I didn't think so. 

Eugene Lazutkin

Sep 14, 2010, 11:53:19 PM9/14/10
to CommonJS
Hmm. If I (as a module author) want to support async loaders and incur
no penalty on identifying dependencies, I can specify the parameter.
On the other hand, if I don't worry about the penalty (we should
measure it --- it may be negligible), or don't want to constrain
myself during the development, I can skip it. I think it is a pretty
good idea. Thanks Tom and Kris!

The third option is the module format we already have --- no wrappers
of any kind, just plain exports/module/require --- perfectly suitable
for synchronous loading in any environment.

Probably the package description should include a flag indicating
whether it supports sync only or both options. In that case package
managers can make informed decisions when importing packages.
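Such a flag might look like this in a package descriptor. The "moduleLoading" field name is purely hypothetical -- nothing like it is specified anywhere:

```json
{
  "name": "example-package",
  "version": "0.1.0",
  "moduleLoading": "both"
}
```

where the value would be "sync" or "both".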

Cheers,

Eugene

Eugene Lazutkin

Sep 14, 2010, 11:59:05 PM9/14/10
to CommonJS
Even if you develop specifically for the mobile environment, or want
your modules to be ready for the mobile environment as is, just
specify the dependencies using an optional parameter, and you are done
--- your module is universally accepted. I think it is a reasonable
trade-off.

Cheers,

Eugene


Dean Landolt

Sep 15, 2010, 12:01:51 AM9/15/10
to comm...@googlegroups.com
On Tue, Sep 14, 2010 at 11:59 PM, Eugene Lazutkin <eugene....@gmail.com> wrote:
Even if you develop specifically for the mobile environment, or want
your modules to be ready for the mobile environment as is, just
specify the dependencies using an optional parameter, and you are done
--- your module is universally accepted. I think it is a reasonable
trade-off.


Sure. All I was trying to say is that an option optimizing for developer convenience can certainly treat mobile as second-class. 




James Burke

Sep 15, 2010, 12:44:47 AM9/15/10
to comm...@googlegroups.com
On Tue, Sep 14, 2010 at 8:19 PM, Kris Zyp <kri...@gmail.com> wrote:
> I guess this really comes down to whether or not implementers are
> willing to include static analysis code. For my implementation, Nodules,
> this is already present, but for the browser loaders, RequireJS and
> Yabble, this might be a hard sell. However, I looked at what it would
> take to add this to RequireJS, and looks like it is about a 75-78 byte
> addition (after minification, unless I did something wrong below), which
> seems like a pretty light addition.

I am concerned that it will encourage committing modules to source
control in this form, then those modules get distributed, then someone
tries to use them on a mobile browser, and things break.

If the main concern is typing cost, the typing is not that different
between this function stringify form and TransportC/AsyncModule form,
particularly since both use a function wrapper: In Transport/C if I
refer to the dependency in the array, then create a variable for it in
the function args, it works out well, particularly since normally
"require", "module" and "exports" are not needed inside the function:

require.def(function(require, module, exports){
  var foo = require('foo');
  var bar = require('bar');
  //use foo and bar
  module.exports = function(){};
});

vs

require.def(['foo', 'bar'], function (foo, bar){
  //use foo and bar
  return function(){};
});

since the second form also explicitly allows setting exports in a
natural form, it means reduced typing vs. systems that do not allow
setting the export:

require('MyConstructor').MyConstructor
vs
require('MyConstructor')

So in the end I do not think the function stringifying drastically
improves the typing costs over what is possible in Transport/C
(sometimes it is worse), and just adds more edge cases to explain.

James

Daniel Friesen

Sep 15, 2010, 12:54:08 AM9/15/10
to comm...@googlegroups.com
Do you believe that, given the re-emergence of setExports/module.exports
over and over, if we give the option to leave dependencies out and
let the implementations magically detect dependencies using toString,
the developers lazy enough to use that feature are all going to go and
add an explicit list of dependencies back in when they deploy, rather
than use it on their production sites?

Even if it's not coded explicitly to support mobile, something you
program on the web may still work on a mobile browser (otherwise what is
the point of them trying to implement the same standards used on the
desktop), but if you go and use a feature like that you explicitly break
it, whether it would have worked or not, for minimal gain.

In a case where the mobile platform gets treated second-class it's not
the developer who gets shot in the foot; it's the user who, randomly
using a platform of their choice, came onto a site where the developer
didn't bother considering the possibility that someone might actually
want to look at what they made without sitting at a computer.

Tom Robinson

Sep 15, 2010, 1:19:44 AM9/15/10
to comm...@googlegroups.com

On Sep 14, 2010, at 9:44 PM, James Burke wrote:

> On Tue, Sep 14, 2010 at 8:19 PM, Kris Zyp <kri...@gmail.com> wrote:
>> I guess this really comes down to whether or not implementers are
>> willing to include static analysis code. For my implementation, Nodules,
>> this is already present, but for the browser loaders, RequireJS and
>> Yabble, this might be a hard sell. However, I looked at what it would
>> take to add this to RequireJS, and looks like it is about a 75-78 byte
>> addition (after minification, unless I did something wrong below), which
>> seems like a pretty light addition.
>
> I am concerned that it will encourage committing modules to source
> control in this form, then those modules get distributed, then someone
> tries to use them on a mobile browser, and things break.

I am concerned that the transport format will encourage committing modules to source
control in the transport format, then those modules get distributed, then someone
tries to use them in an environment that doesn't support the transport format, and things break.

Which browsers don't support Function.prototype.toString anyway?

> If the main concern is typing cost, the typing is not that different
> between this function stringify form and TransportC/AsyncModule form,
> particularly since both use a function wrapper: In Transport/C if I
> refer to the dependency in the array, then create a variable for it in
> the function args, it works out well, particularly since normally
> "require", "module" and "exports" are not needed inside the function:
>
> require.def(function(require, module, exports){
> var foo = require('foo');
> var bar = require('bar');
> //use foo and bar
> module.exports = function(){};
> });
>
> vs
>
> require.def(['foo', 'bar'], function (foo, bar){
> //use foo and bar
> return function(){};
> });

I don't like this version anyway. It's not a natural extension of CommonJS modules. Lets stick with function(require, exports, module).

require.def(function(require, exports, module) { ... }) is just a normal CommonJS module wrapped in a function to facilitate asynchronous loading. It seems simple and elegant to me.

> since the second form also explicitly allows setting exports in a
> natural form, it means reduced typing vs. systems that do not allow
> setting the export:
>
> require('MyConstructor').MyConstructor
> vs
> require('MyConstructor')
>
> So in the end I do not think the function stringifying drastically
> improves the typing costs over what is possible in Transport/C
> (sometimes it is worse), and just adds more edge cases to explain.
>
> James
>

James Burke

Sep 15, 2010, 1:49:02 AM
to comm...@googlegroups.com
On Tue, Sep 14, 2010 at 10:19 PM, Tom Robinson <tlrob...@gmail.com> wrote:
> I am concerned that the transport format will encourage committing modules to source
> control in the transport format, then those modules get distributed, then someone
> tries to use them in an environment that doesn't support the transport format, and things break.

That happens today with regular CommonJS modules. Some platforms allow
setting exported values in different ways. Some have different ways to
get a path relative to the module. The hope is to try to work out
something that will be more portable in the future. I have adapters
that allow Transport/C use in Rhino and Node, and I am happy to work
with any env owner who wants to incorporate support directly in the
env. The hope is for interoperability.

However, an implementation that depends on a Function toString
implementation that is known to have problems is a more serious issue.
I have not come across one, so it would be good to have someone provide
a test and indicate which mobile browsers have the problem. It could be
that those mobile browsers cannot handle a script loader that depends on
a functioning script onload event handler either. But it makes me uneasy
enough that I am not enthusiastic about supporting it.

James

Tom Robinson

Sep 15, 2010, 2:07:23 AM
to comm...@googlegroups.com

So which implementations don't support Function.prototype.toString? It's in the spec:

15.3.4.2 Function.prototype.toString ( )
An implementation-dependent representation of the function is returned. This representation has the syntax of a FunctionDeclaration. Note in particular that the use and placement of white space, line terminators, and semicolons within the representation string is implementation-dependent.

Kris Zyp

Sep 15, 2010, 10:14:00 AM
to comm...@googlegroups.com
On 9/14/2010 10:44 PM, James Burke wrote:
> On Tue, Sep 14, 2010 at 8:19 PM, Kris Zyp <kri...@gmail.com> wrote:
>> I guess this really comes down to whether or not implementers are
>> willing to include static analysis code. For my implementation, Nodules,
>> this is already present, but for the browser loaders, RequireJS and
>> Yabble, this might be a hard sell. However, I looked at what it would
>> take to add this to RequireJS, and looks like it is about a 75-78 byte
>> addition (after minification, unless I did something wrong below), which
>> seems like a pretty light addition.
> I am concerned that it will encourage committing modules to source
> control in this form, then those modules get distributed, then someone
> tries to use them on a mobile browser, and things break.
>
> If the main concern is typing cost, the typing is not that different
> between this function stringify form and TransportC/AsyncModule form,
> particularly since both use a function wrapper: In Transport/C if I
> refer to the dependency in the array, then create a variable for it in
> the function args, it works out well, particularly since normally
> "require", "module" and "exports" are not needed inside the function:
>
I don't think the concern is typing a module from scratch; it is the ease
of wrapping an existing CommonJS module. Hand-coding vs wrapping has
been discussed endlessly here, and it seems clear that there will always
be those that want to wrap existing CommonJS modules and those that want
to hand-code (where taking advantage of dependency lists that match with
arguments is very pleasant). We can have two different specs, one for
transport/wrapping and one more designed for hand-coding, or we can try
to unify, which necessitates a design that will work for both groups.
The optional dependency list seems like it would make for a very
satisfactory transport wrapper. I guess it depends on how badly we want
unification of the specs.

On 9/15/2010 12:07 AM, Tom Robinson wrote:
> On Sep 14, 2010, at 10:49 PM, James Burke wrote:
>
>> On Tue, Sep 14, 2010 at 10:19 PM, Tom Robinson <tlrob...@gmail.com> wrote:
>>> I am concerned that the transport format will encourage committing modules to source
>>> control in the transport format, then those modules get distributed, then someone
>>> tries to use them in an environment that doesn't support the transport format, and things break.
>> That happens today with regular CommonJS modules. Some platforms allow
>> setting exported values in different ways. Some have different ways to
>> get a path relative to the module. The hope is to try to work out
>> something that will be more portable in the future. I have adapters
>> that allow Transport/C use in Rhino and Node, and I am happy to work
>> with any env owner who wants to incorporate support directly in the
>> env. The hope is for interoperability.
>>
>> However, an implementation that depends on a Function toString
>> implementation that is known to have problems is a more serious issue.
>> I have not come across one, so it is good to have someone provide a
>> test and indicate what mobile browsers have the problem. It could be
>> those mobile browsers could not handle a script loader that depends on
>> a functioning script onload event handler too. But it makes me uneasy
>> enough that I am not enthusiastic about supporting it.
> So which implementations don't support Function.prototype.toString?

I think it is Opera Mobile:
http://my.opera.com/hallvors/blog/show.dml/1665828
But of course there are already plenty of other potholes to be aware of
with this browser:
http://www.quirksmode.org/blog/archives/2010/07/operas_problems.html

This is pretty basic caveat emptor. If you want a module to work on a
browser that doesn't support function toString, then don't use the
feature that relies on it :P. Front end engineers are well acquainted with
making these types of decisions all the time.

-- Thanks, Kris

Eugene Lazutkin

Sep 15, 2010, 1:40:32 PM
to CommonJS
Inline.

On Sep 15, 1:07 am, Tom Robinson <tlrobin...@gmail.com> wrote:
>
> So which implementations don't support Function.prototype.toString? It's in the spec:
>
> 15.3.4.2 Function.prototype.toString ( )
> An implementation-dependent representation of the function is returned. This representation has the syntax of a FunctionDeclaration. Note in particular that the use and placement of white space, line terminators, and semicolons within the representation string is implementation-dependent.

Please observe that spec doesn't say anything about presenting a
decompiled body. For example, this is a correct return of
Function.prototype.toString() for function add(a, b){ return a + b; }:

function add(a, b){
  return 1234567;
}

And so is this:

function add(a, b){
  "[compiled code]";
}

And this:

function add(a, b){
  // nothing to see here
}

But coming back to your original question: I personally had problems
with all versions of Opera Mobile.

In any case the spec you cited makes me wary of relying on
decompilation, which is not mandated nor even mentioned in the spec.
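Eugene's caveat can be checked at runtime. Below is a hypothetical feature test (not from the thread) that a loader could use to decide whether source scanning is even viable on the current engine:

```javascript
// Hypothetical feature test: does this engine's Function.prototype.toString
// return real source text? Engines may legally return a stub such as
// "function probe() { [native code] }" instead of the decompiled body.
function canDecompile() {
  function probe(require) { require("__marker__"); }
  var src = Function.prototype.toString.call(probe);
  // If the marker survives stringification, regex-based scanning is viable.
  return src.indexOf("__marker__") !== -1;
}
```

A loader built on scanning would run this check once and fall back to requiring explicit dependency arrays when it returns false.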

Cheers,

Eugene

Kris Kowal

Sep 15, 2010, 1:43:44 PM
to comm...@googlegroups.com
Using Function.prototype.toString would also not compose well with minifiers.

Kris Kowal

Tom Robinson

Sep 15, 2010, 1:52:07 PM
to comm...@googlegroups.com
Presumably minification is the last thing you'd do, after extracting the id and dependencies.

On Wed, Sep 15, 2010 at 10:43 AM, Kris Kowal <kris....@cixar.com> wrote:
Using Function.prototype.toString would also not compose well with minifiers.

Kris Kowal

Mikeal Rogers

Sep 15, 2010, 2:01:30 PM
to comm...@googlegroups.com
I don't believe Opera mobile is or claims to be entirely spec compliant. I know of a slew of DOM level 1 issues that are out of spec and I wouldn't be surprised if there were a ton more.

-Mikeal

James Burke

Sep 15, 2010, 2:05:09 PM
to comm...@googlegroups.com
On Wed, Sep 15, 2010 at 7:14 AM, Kris Zyp <kri...@gmail.com> wrote:
> The optional dependency list seems like it would make for a very
> satisfactory transport wrapper. I guess it depends on how badly we want
> unification of the specs.

I do prefer a unification of specs, and something that is easy to hand
code for use directly in the browser. Your observation of supporting a
name-less format in the browser and Tom's toString function meets that
well enough for me. I am willing to pursue the function toString
approach some more. I put up a test page here:

http://requirejs.org/temp/fts.html

It seemed to be fine for me in FF 2, 3.6 and 4/nightly, Safari 2 and
5, Opera 10.61, Chrome 6, and Mobile Safari on iOS 4.1. Fine as in
generating a string that could be parsed fairly easily.

It would be nice to see what other mobile browser on Android, Windows
mobile/phone and Blackberry do.

It sounds like Opera Mini would not work. Let's get a full list before
making a decision.

It would also be good to bring it up with the ES list, anyone willing
to do that? It would be good to know if they see problems with it.

There are some other edges to work out, but hopefully just soft edges.
For instance, there is still a require.def form that has the name and
dependencies burned in (for optimized delivery). I think those files
are the only ones that should be minified, as Tom mentioned. There are
some other things too, but they can be worked out after proving the
basic feasibility.

James

Eugene Lazutkin

Sep 15, 2010, 5:52:15 PM
to CommonJS
Works fine in Android 2.2 and IE on Windows Mobile 6 Professional
(Windows CE OS 5.2.1235 Build 17745.0.2.3).

Cheers,

Eugene

On Sep 15, 1:05 pm, James Burke <jrbu...@gmail.com> wrote:

James Burke

Sep 15, 2010, 8:23:38 PM
to comm...@googlegroups.com
On Wed, Sep 15, 2010 at 11:05 AM, James Burke <jrb...@gmail.com> wrote:
> http://requirejs.org/temp/fts.html
>
> It seemed to be fine for me in FF 2, 3.6 and 4/nightly, Safari 2 and
> 5, Opera 10.61, Chrome 6, and Mobile Safari on iOS 4.1. Fine as in
> generating a string that could be parsed fairly easily.

Got some results from a few folks, in particular thanks to Fil Maj at Nitobi:

OK:
- Android 2.2
- WebOS 1.4.5
- BlackBerry 5.0
- Fennec 1.1
- Windows Mobile 6.5 (keeps comments and has funky end of line
encoding issue, probably good enough)
- Windows Mobile 7

Not OK:
- BlackBerry 4.6 (content of function says "source code unavailable")

It would be good to clarify what version of Opera Mobile was bad. Was
it 9.5 or earlier? What device?

Also would be good to get a test of the native browsers for these platforms:
- Symbian S60, v3.2, v5.0
- MeeGo 1.1
- bada

I am just stealing the platform matrix from the jquerymobile site:
http://jquerymobile.com/gbs/.

I will ask on es-discuss next about the approach.

James

Eugene Lazutkin

Sep 16, 2010, 12:31:47 PM
to CommonJS
Forgot to mention that both return the full function body ---
indentations, comments, and everything.

Kris Zyp

Sep 20, 2010, 6:18:57 PM
to comm...@googlegroups.com

On Sep 15, 1:05 pm, James Burke <jrbu...@gmail.com> wrote:
>> On Wed, Sep 15, 2010 at 7:14 AM, Kris Zyp <kris...@gmail.com> wrote:
>>> The optional dependency list seems like it would make for a very
>>> satisfactory transport wrapper. I guess it depends on how badly we want
>>> unification of the specs.
>> I do prefer a unification of specs, and something that is easy to hand
>> code for use directly in the browser. Your observation of supporting a
>> name-less format in the browser and Tom's toString function meets that
>> well enough for me.

Would this be a reasonable addition to the proposal:

The second argument, the dependencies, is optional. If omitted, it
should default to ["require", "exports", "module"]. However, if the
factory function's arity (length property) is less than 3, then the
loader may choose to only call the factory with the number of arguments
corresponding to the function's arity or length.

If both the first argument (module id) and the second argument
(dependencies) are omitted, the module loader MAY choose to scan the
factory function for dependencies in the form of require statements
(literally in the form of require("module-string")). In some situations
module loaders may choose not to scan for dependencies due to code size
limitations or lack of toString support on functions (Opera Mobile is
known to lack toString support for functions). If either the first or
second argument is present, the module loader SHOULD NOT scan for
dependencies within the factory function.
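As a rough illustration of the scan described above (a sketch, not the normative text; the regex is an assumption and will also match require() calls inside comments or strings):

```javascript
// Collect literal require("...") dependencies from a factory function's
// source, prepending the default "require", "exports", "module" list.
function scanDependencies(factory) {
  var deps = ["require", "exports", "module"];
  var re = /require\s*\(\s*["']([^"']+)["']\s*\)/g;
  var src = factory.toString();   // assumes a decompiling toString
  var match;
  while ((match = re.exec(src)) !== null) {
    if (deps.indexOf(match[1]) === -1) {
      deps.push(match[1]);
    }
  }
  return deps;
}
```

A loader would resolve and load the scanned ids before calling the factory, trimming the injected arguments to the factory's arity as the spec text above allows.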

Also, a few other suggested changes:
Currently there is an "optional" extension for allowing factory
functions to return a value to replace the exports. I think this should
be required functionality, it doesn't help interoperability much if it
might not be there (and we know it is possible to support since the
return is inside a function). This doesn't negate supporting
module.exports = value or module.setExports(value).

RequireJS allows for the last argument to be an object instead of a
function if you just want to return a known set of properties in an
object. This seems really convenient and easy to specify and support.
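For illustration, a toy registry (the `def` helper below is a stand-in for require.def, not a real loader) showing how the object form and the factory form could be treated uniformly:

```javascript
// Toy module registry: if the definition is a plain object, use it as the
// exports directly; if it is a function, call it and use the return value.
var registry = {};

function def(id, value) {
  registry[id] = (typeof value === "function") ? value() : value;
}

def("colors", { red: "#f00", blue: "#00f" });   // object form
def("answer", function () { return 42; });      // factory-return form
```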

Finally, I think we should include an informational note indicating that
require.def calls must be in the literal form of 'require.def(...)' in
order to work properly with static analysis tools (like build tools).

--
Thanks,
Kris

James Burke

Sep 21, 2010, 2:06:05 AM
to comm...@googlegroups.com
On Mon, Sep 20, 2010 at 3:18 PM, Kris Zyp <kri...@gmail.com> wrote:
> Would this be a reasonable addition to the proposal:
>
> The second argument, the dependencies, is optional. If omitted, it
> should default to ["require", "exports", "module"]. However, if the
> factory function's arity (length property) is less than 3, than the
> loader may choose to only call the factory with the number of arguments
> corresponding to the function's arity or length.

I would also mention that the order MUST always be "require",
"exports", "module", and the definition function should also order its
arguments accordingly: function(require, exports, module), using those
literal names for the function arguments.

Your other suggested changes are fine with me. I particularly like the
explicit support for returning from the definition function to allow
setting the module's exported value.

In a related note, I just finished a round of changes that support
anonymous modules and Tom's proposal in RequireJS:
http://tagneto.blogspot.com/2010/09/anonymous-module-support-in-requirejs.html

Some very simplified anonymous modules used for testing can be found in:
http://github.com/jrburke/requirejs/tree/master/tests/anon/

(not all modules in there are anonymous, part of the test is testing a
mix of anonymous and named modules)

James

Eugene Lazutkin

Sep 21, 2010, 3:32:32 PM
to CommonJS
+1 to Kris' and James' corrections/additions. They look fine to me.

Cheers,

Eugene

On Sep 21, 1:06 am, James Burke <jrbu...@gmail.com> wrote:
> On Mon, Sep 20, 2010 at 3:18 PM, Kris Zyp <kris...@gmail.com> wrote:
> > Would this be a reasonable addition to the proposal:
>
> > The second argument, the dependencies, is optional. If omitted, it
> > should default to ["require", "exports", "module"]. However, if the
> > factory function's arity (length property) is less than 3, than the
> > loader may choose to only call the factory with the number of arguments
> > corresponding to the function's arity or length.
>
> I would also mention that the order MUST always be "require",
> "exports", "module", and the definition function should also order its
> arguments accordingly: function(require, exports, module), using those
> literal names for the function arguments.
>
> Your other suggested changes are fine with me. I particularly like the
> explicit support for returning from the definition function to allow
> setting the module's exported value.
>
> In a related note, I just finished a round of changes that support
> anonymous modules and Tom's proposal in RequireJS:
> http://tagneto.blogspot.com/2010/09/anonymous-module-support-in-requi...

Kris Zyp

Sep 27, 2010, 9:30:48 PM
to comm...@googlegroups.com
I updated the proposal/spec with the changes.
http://wiki.commonjs.org/wiki/Modules/AsynchronousDefinition
Kris

--
Thanks,
Kris

Irakli Gozalishvili

Sep 30, 2010, 8:38:54 AM
to comm...@googlegroups.com
Hi,

I'm kind of late for this show but few notes I have regarding current proposal:

1. I believe we stuck to the naming convention of camelCased full words, so I do think we should be consistent and have `require.define` rather than `require.def`

2. I think it would be better to make id a mandatory argument rather than optional. It's not that much typing, and with scenarios of parallel module fetching it will avoid a lot of issues.

3. I do think that dependencies should be third optional argument, since I suppose it will be rarely used.

4. I think passing dependencies as arguments is also a bad idea, which makes modules look different from normal CommonJS modules. Why not just guarantee dependencies so that requiring them in the body of the module
would work synchronously?

5. Again, returning modules is a non-compliant change with respect to currently existing modules and will only lead to fragmentation, so I'm against it.

Other than that it all looks fine by me; actually, a module loader that I implemented a long time ago follows this spec pretty closely. Here it is in production, btw:

http://jeditoolkit.com/taskhub/

Regards
--
Irakli Gozalishvili
Web: http://www.jeditoolkit.com/
Address: 29 Rue Saint-Georges, 75009 Paris, France


Kris Zyp

Sep 30, 2010, 3:33:00 PM
to comm...@googlegroups.com, Irakli Gozalishvili

On 9/30/2010 6:38 AM, Irakli Gozalishvili wrote:
> Hi,
>
> I'm kind of late for this show but few notes I have regarding current
> proposal:
>
> 1. I believe we sicked to the naming convention of camelCased full
> words, so I do think we should be consistent and have `require.define`
> rather then `require.def`
>
> 2. I think it would be better to make id a mandatory argument rather
> then optional. It's not that much of a typing but with scenarios of
> parallel module fetching will avoid a lot of issues.

Optional id allows anonymous modules, which is a key principle in
CommonJS module design and is just awesome. Decoupling modules from
their namespace affords much greater portability.


>
> 3. I do think that dependencies should be third optional argument,
> since I suppose it will be rarely used.

The vast majority of modules I have seen in this format use it.


>
> 4. I think passing dependencies as an arguments is also a bad idea,
> which makes modules look different form normal commonjs modules. Why
> not just guarantee dependencies so that in the body if the module
> requiring them
> would work synchronously.

The guarantee of the dependencies is determined from the dependency
list. The alternate is to do a toString on the factory function and
statically analyze it. This is viable and supported by RequireJS, but I
believe function toString()'s are somewhat expensive and do not work on
all browsers (Opera mobile and playstation don't support it, from what I
understand). This can be a reasonable limitation for some apps or dev
environments, but not something we can lean on for everything.


>
> 5. Again returning modules is non-complaint change with a currently
> existing modules and will only lead to the fragmentation so I'm
> against it.

There was no AMD API before, so how can it be non-compliant? :) This is
compatible with wrapping existing CommonJS modules because there was no
previous specification for return, so plain CommonJS modules can't use
it. Either way we are in good shape.


>
> Other then that it all looks fine by me, actually module loader that I
> implemented long time ago follows this specs pretty closely. Here it
> is in production btw
>
> http://jeditoolkit.com/taskhub/
>

Cool, are you planning on updating this for the current spec? It would be
awesome to have another impl.

--
Thanks,
Kris

Eugene Lazutkin

Oct 1, 2010, 9:39:43 PM
to CommonJS
I need a clarification about "require" object. According to the spec
the minimal module can look like this:

require.def(function (require) {
  return {x: require("abc").x};
});

As you can see we have two "require" objects in this snippet:

1) (pseudo) global "require", which we use to invoke "def" on.
2) local "require" passed as a parameter.

Are these "require" objects the same object? Is it possible to bypass
the parameter completely? Like that:

require.def(function () {
  return {x: require("abc").x};
});

Is it possible that they are substantially different with different
interfaces?

Cheers,

Eugene

On Sep 27, 8:30 pm, Kris Zyp <kris...@gmail.com> wrote:
>  I updated the proposal/spec with the changes.
> http://wiki.commonjs.org/wiki/Modules/AsynchronousDefinition

Kris Zyp

Oct 2, 2010, 9:04:51 AM
to comm...@googlegroups.com, Eugene Lazutkin

On 10/1/2010 7:39 PM, Eugene Lazutkin wrote:
> I need a clarification about "require" object. According to the spec
> the minimal module can look like this:
>
> require.def(function (require) {
> return {x: require("abc").x};
> });
>
> As you can see we have two "require" objects in this snippet:
>
> 1) (pseudo) global "require", which we use to invoke "def" on.
> 2) local "require" passed as a parameter.
>
> Are these "require" objects the same object? Is it possible to bypass
> the parameter completely? Like that:
>
> require.def(function () {
> return {x: require("abc").x};
> });
>
> Is it possible that they are substantially different with different
> interfaces?

It depends on the environment. A correct require() function generally
must be module specific so that it can properly lookup relative ids
(which are resolved relative to the current module). In environments
where the loading of the module can be controlled such that
module-specific require() (and exports and module variables) can be
provided, these two require()s can be the same (this is the case in
Nodules). However, for modules that are loaded as browser scripts, the
loader can't sufficiently control the context to give scope-specific
variables. It is impossible (AFAICT) to implement conformant require()
for browser scripts and CommonJS doesn't give any direction on how
require() should behave in that case. Likewise, implementations vary. I
believe Yabble throws an error if the global require() is called and
RequireJS came up with their own API for require() that's kind of a cross
between require.def and require.ensure. For these browser module
loaders, the global require() differs quite significantly from the local
require().
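To make the distinction concrete, here is a minimal sketch (an assumed implementation, helper names hypothetical) of what makes a require module-specific: it closes over the id of the defining module so relative ids can be resolved against it:

```javascript
// Resolve "./x" and "../x" style ids against the id of the current module.
function resolveId(id, baseId) {
  if (id.charAt(0) !== ".") return id;   // top-level id, nothing to do
  var parts = baseId.split("/").slice(0, -1).concat(id.split("/"));
  var out = [];
  for (var i = 0; i < parts.length; i++) {
    var p = parts[i];
    if (p === "." || p === "") continue;
    if (p === "..") out.pop(); else out.push(p);
  }
  return out.join("/");
}

// Each module gets its own require, bound to that module's id.
function makeLocalRequire(baseId, modules) {
  return function require(id) {
    return modules[resolveId(id, baseId)];
  };
}
```

The global require cannot do this for a plain browser script, because the loader has no way of knowing which module's id to resolve against.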

--
Thanks,
Kris

Wes Garland

Oct 2, 2010, 9:28:24 AM
to comm...@googlegroups.com, Eugene Lazutkin
> It is impossible (AFAICT) to implement conformant require()
> for browser scripts

You mean the Modules/1.0 require, as opposed to the Transports/C version of CommonJS?

If so, I disagree with you, although I haven't actually tried.

The technique I would use would be to wrap the module in a function, and pass in a module-specific require variable, which closes over a variable indicating that module's load path.

Wes

--
Wesley W. Garland
Director, Product Development
PageMail, Inc.
+1 613 542 2787 x 102

Kris Zyp

Oct 2, 2010, 9:33:03 AM
to comm...@googlegroups.com, Wes Garland, Eugene Lazutkin

On 10/2/2010 7:28 AM, Wes Garland wrote:
> > It is impossible (AFAICT) to implement conformant require()
> > for browser scripts
>
> You mean the Modules/1.0 require, as opposed to the Transports/C
> version of CommonJS?
>
> If so, I disagree with you, although I haven't actually tried.
>
> The technique I would use would be to wrap the module in a function,
> and pass in a module-specific require variable, which closes over a
> variable indicating that module's load path.

Yes, Modules/1.0 require. I don't think you can wrap a browser loaded
script:

(function(require, exports, module){
<script src="my-module.js"></script>
})();

This doesn't seem to work in my browser ;).

--
Thanks,
Kris

Wes Garland

Oct 2, 2010, 2:36:48 PM
to Kris Zyp, comm...@googlegroups.com, Eugene Lazutkin
> Yes, Modules/1.0 require. I don't think you can wrap a browser loaded
> script:

Sure you can, just not like that. Modules/1.0 requires a server-side component to do the wrapping, or you have to load it with XHR, wrap it, and eval() it.

The former solution is my personal preference, as it is quite fast and efficient (I load multiple modules with a single script tag, and the results are HTTP 304 cacheable).

Wait - are you saying that Transports/C modules have a require which does not support relative modules?
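The wrap step Wes describes could look roughly like this (a sketch under assumptions: the helper name is hypothetical, and the module source arrives as a string, whether from a same-origin XHR or from a server-side wrapper):

```javascript
// Evaluate CommonJS module source with its own require/exports/module.
// Wrapping with the Function constructor keeps module-local variables
// out of the global scope, which is what Modules/1.0 needs.
function evalModule(source, id, localRequire) {
  var exports = {};
  var module = { id: id, exports: exports };
  var factory = new Function("require", "exports", "module", source);
  factory(localRequire, exports, module);
  // The body may have reassigned module.exports; honor that.
  return module.exports;
}
```

Doing the wrapping on the server instead, before the script is sent, is what allows several modules per script tag and normal HTTP 304 caching.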

Irakli Gozalishvili

Oct 9, 2010, 7:22:17 AM
to comm...@googlegroups.com
So I still don't have any reasonable answer on why we have to be inconsistent with naming conventions. Why is it
def and not define?
--

Kris Zyp

Oct 9, 2010, 2:39:49 PM
to comm...@googlegroups.com, Irakli Gozalishvili

On 10/9/2010 5:22 AM, Irakli Gozalishvili wrote:
> So I still don't have any reasonable answer on why do we have to be
> inconsistent with naming conventions. Why is it
> def and not define ?

The historical reason was that require.def was for transport/C (now AMD)
and require.defined was for transport/D. However, I think that AMD has
progressed to the point where we no longer need transport/D.

I actually think we should move from require.def to using "define" as
the global. Using "require" as the global variable doesn't make sense,
there is no CommonJS definition for a global named "require" (it is
completely different than the free variable defined by CommonJS
modules), and most applications that access a CommonJS global will be
defining modules much more than anything else. An ensure() call usually
just needs to be done once per app. So if we are going to rename
anything, I would suggest we rename require.def -> define and
require.ensure -> define.ensure.

--
Thanks,
Kris

Kris Kowal

Oct 9, 2010, 2:49:49 PM
to comm...@googlegroups.com
On Sat, Oct 9, 2010 at 11:39 AM, Kris Zyp <kri...@gmail.com> wrote:
> So if we are going to rename
> anything, I would suggest we rename require.def -> define and
> require.ensure -> define.ensure.

I agree that "define" could be moved up to a global in <script> context.

"require.ensure" was designed for use in [[Module]] context and we
should not be adding free-variables to CommonJS modules. Since there's
obviously use for "require.ensure's" behavior in <script> context, you
might consider forking the specification, or soul-searching about
whether "require" is or isn't an adequate namespace for CommonJS in
<script> context in general. At the cost of some confusion, when
there's good reason to have the same behavior in both places, those
behaviors should have the same names to minimize the cost of
refactoring. "ensure" is useful in both contexts. "define" should
not be available in Module context because it does not make sense to
have a cycle in the layers of the architecture.

Kris Kowal

Kris Zyp

Oct 9, 2010, 2:50:43 PM
to comm...@googlegroups.com, Irakli Gozalishvili

And require.package -> define.package (that would sure read a lot better).

--
Thanks,
Kris

Kris Zyp

Oct 9, 2010, 2:56:45 PM
to comm...@googlegroups.com, Kris Kowal

True, that's a good point. So could we have a define.ensure for scripts
that are outside a CommonJS free variable context, but have
require.ensure that can be accessed from the require free variable
inside a CommonJS context (inside a define factory or commonjs module),
both aliasing the same behavior? That would seem reasonable to me.
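The aliasing Kris suggests could be wired up like this (a sketch; the wiring is an assumption, and the ensure stub just invokes its callback):

```javascript
// One ensure implementation, exposed two ways: as define.ensure for
// <script> context and as require.ensure on each module-local require.
function ensureImpl(deps, callback) {
  // A real loader would fetch `deps` first; this stub calls back directly.
  callback();
}

var define = {};
define.ensure = ensureImpl;

function makeModuleRequire(moduleId) {
  function require(id) {
    throw new Error("stub: module lookup not implemented");
  }
  require.ensure = ensureImpl;   // same behavior, reachable inside modules
  return require;
}
```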

--
Thanks,
Kris

James Burke

Oct 10, 2010, 12:31:58 AM
to comm...@googlegroups.com

I am hesitant to make another global in <script> space for define.
<script> space is even more sensitive to globals than the usual places
CommonJS modules are run, since all the scripts loaded via <script>
share a global space. I prefer the global impact for <script> to be
very small. Right now, it is just "require".

I also do not think define.ensure makes sense -- it really means a
kind of "require" vs a "define" -- the script requires that these
other scripts/modules are available before running the function.

require.def vs require.define: what Kris Zyp said, and it's just less
typing, which is nice for something that will be hand-written. I
consider it like var or fs, although I understand the "use real words"
approach this group has tried to follow. Originally I was happy with
just:

require('optional module ID', [dependencies], function (){ definition
function }).

Which would also work for an ensure-like behavior as well as handling
async module definitions, with the require('string only') form being
the traditional require. I appreciate that it may be seen as too much
to do as part of one function signature.

If it made a *strong* difference to actual sync module proposal
implementors or people seriously considering implementation, then I
could put in an alias for require.define to require.def, but I will
likely still support require.def due to deployed code/existing users.

I did not follow "it does not make sense to have a cycle in the layers
of the architecture" from Kris Kowal's reply, so further explanation
of the cycle would help me.

James

Kris Zyp

Oct 10, 2010, 6:05:22 PM
to comm...@googlegroups.com, James Burke, dojo dev.
(Added dojo-contributors as cc since it affects current module
refactor, response inline)

On 10/9/2010 10:31 PM, James Burke wrote:
> On Sat, Oct 9, 2010 at 11:56 AM, Kris Zyp <kri...@gmail.com> wrote:
>> On 10/9/2010 12:49 PM, Kris Kowal wrote:
>>> "ensure" is useful in both contexts. "define" should
>>> not be available in Module context because it does not make sense to
>>> have a cycle in the layers of the architecture.
>> True, that's good point. So could we have a define.ensure for scripts
>> that are outside a CommonJS free variable context, but have
>> require.ensure that can be accessed from the require free variable
>> inside a CommonJS context (inside a define factory or commonjs module),
>> both aliasing the same behavior? That would seem reasonable to me.
> I am hesitant to make another global in <script> space for define.
> <script> space is even more sensitive to globals than the usual places
> CommonJS modules are run, since all the scripts loaded via <script>
> share a global space. I prefer the global impact for <script> to be
> very small. Right now, it is just "require".
>

I am not suggesting adding "define" to "require", I am suggesting
replacing/removing "require" with "define". There would still only be
one global defined. There's no reason we ever had "require" as a global,
and it is time to choose an appropriate global.

Of course, I understand that RequireJS can't eliminate "require", since
it is a core API for it. However, needing to have two globals is hardly
going to get much sympathy from me, especially considering that the
whole module system eliminates the need for developers to be fighting
for globals at all. Back in the ol' days when we used archaic namespaces
hung off globals this was a bigger issue. Now we could have dozens of
globals without impacting the module system. EcmaScript itself defines
dozens and the typical browser environment has hundreds of globals. Plus,
globals claimed without any community coordination (libraries grabbing
common names without namespacing) are the biggest source of possible
conflict; this is the complete opposite, a name chosen as a community,
which is exactly the *right* way to establish globals. One or two
community-ascribed globals should hardly be a concern.


> I also do not think define.ensure makes sense -- it really means a
> kind of "require" vs a "define" -- the script requires that these
> other scripts/modules are available before running the function.

It's no less logical than require.def, whose purpose is to define a
module. This is all about frequency. If every single module uses the
module definition API, but only a call or two performs the initial
require (require([]) or define.ensure()), it makes sense to give the
more frequently used form the most elegant, logical API. Or we could have
a "define" and have a global "require" like RequireJS's.

But RequireJS doesn't even implement require.ensure, does it? It doesn't
seem like this would affect RequireJS.

> require.def vs require.define: what Kris Zyp said, and it just less
> typing, which is nice for something that will be hand-written. I
> consider it like var or fs, although I understand the "use real words"
> approach this group has tried to follow. Originally I was happy with
> just:
>
> require('optional module ID', [dependencies], function (){ definition
> function }).
>
> Which would also work for an ensure-like behavior as well as handling
> async module definitions, with the require('string only') form being
> the traditional require. I appreciate that it may be seen as too much
> to do as part of one function signature.
>
> If it made a *strong* difference to actual sync module proposal
> implementors or people seriously considering implementation, then I
> could put in an alias for require.define to require.def, but I will
> likely still support require.def due to deployed code/existing users.

I think the main reason this is worth considering is that it affects
*every single* module (and the byte count thereof, which quickly adds up
for apps with many modules), so it is worth taking a hard look at what
is best here even if that means some API-change pain.

--
Thanks,
Kris

James Burke

Oct 12, 2010, 1:22:56 PM
to comm...@googlegroups.com, dojo dev.
On Sun, Oct 10, 2010 at 3:05 PM, Kris Zyp <kri...@gmail.com> wrote:
> Of course, I understand that RequireJS can't eliminate "require", since
> it is a core API for it. However, needing to have two globals is hardly
> going to get much sympathy from me, especially considering that the
> whole module system eliminates the need for developers to be fighting
> for globals at all. Back in the ol' days when we used archaic namespaces
> hung off globals this was a bigger issue. Now we could have dozens of
> globals without impacting the module system. EcmaScript itself defines
> dozens and the typical browser environment has hundreds of globals. Plus
> globals used without any community coordination (libraries grabbing
> common names as globals without namespacing) is the biggest source of
> possible conflict, but this is completely the opposite, it is totally
> being done as a community, and is exactly the *right* way to establish
> globals. One or two (community ascribed) globals should hardly be a concern.

My concern was more for browser code that gradually adopts the
require/define approach. I believe adoption will be quicker/broader
if existing code can take up this new functionality gradually, so
fewer globals are better. I also agree that if push came to shove,
two vs. one global is not that much of a change, but the proposed
renaming did not seem to me to give enough benefit to warrant
another global.

>> I also do not think define.ensure makes sense -- it really means a
>> kind of "require" vs a "define" -- the script requires that these
>> other scripts/modules are available before running the function.
>
> Its no less logical than require.def, whose purpose is to define a
> module. This is all about frequency. If every single module using the
> module definition API, but only a call or two that does the initial
> require (require([]) or define.ensure()), it makes sense to give the
> more frequently used form the most elegant logical API. Or we could have
> a "define" and have a global "require" like RequireJS's.

I believe require.def is more logical than define.ensure. require.def
implies you are defining something that obeys require's rules.
define.ensure does not define anything. But this is a bit of a
bikeshed.

>
> But RequireJS doesn't even implement require.ensure, does it? It doesn't
> seem like this would affect RequireJS.

I have not implemented it because no one has asked for it when using
RequireJS, and I think it is inferior to the require([]) syntax that
RequireJS provides. However, if it enabled widespread async module
adoption (vs RequireJS require([]), then I would implement it, since
it is a subset of what require([]) can do now.
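Since require.ensure is described here as a subset of require([]), a loader that already has the async array form could layer ensure() on top. This is a hedged sketch with mocked loader internals; addEnsure, asyncRequire, and lookup are made-up names, not any loader's real API:

```javascript
// Layer a CommonJS-style ensure() on top of an existing async
// require([deps], callback) form (the RequireJS-style API).
// "asyncRequire" and "lookup" stand in for loader internals.
function addEnsure(asyncRequire, lookup) {
  asyncRequire.ensure = function (deps, callback) {
    asyncRequire(deps, function () {
      // require.ensure hands its callback a require function,
      // not the resolved modules themselves.
      callback(lookup);
    });
  };
  return asyncRequire;
}
```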

> I think the main reason this is worth considering is that it affects
> *every single* module (and the byte count thereof, which quickly adds up
> for apps with many modules), so it is worth taking a hard look at what
> is best here even if there might mean some API change pain.

I do not believe byte size count matters for performance reasons (with
an optimized, minified delivery of modules with gzip, it will be
unnoticeable), but I appreciate wanting the cleanest API.

I am voting "no" though, I do not believe it buys that much for the
following reasons:
- inertia. Mostly my personal inertia. It does not feel broken to me.
- I also like the single global. It still makes sense to me that an
ensure stays on require, or even better, just uses require([]) as used
by RequireJS. That means the define name space just defines an async
module, and an async module implementation still needs to implement
something for "require".
- I like that require.def implies that it obeys require's rules.
define seems to float out in the ether.

If others feel strongly that require.def is just wrong, then I can
support a define that maps to require.def in RequireJS. But others
should speak up soon, as in the next couple of days. I want to move on
to implementations and adoption.

James

Kris Zyp

Oct 12, 2010, 2:31:42 PM
to comm...@googlegroups.com, James Burke

On 10/12/2010 11:22 AM, James Burke wrote:
> On Sun, Oct 10, 2010 at 3:05 PM, Kris Zyp <kri...@gmail.com> wrote:
>> Of course, I understand that RequireJS can't eliminate "require", since
>> it is a core API for it. However, needing to have two globals is hardly
>> going to get much sympathy from me, especially considering that the
>> whole module system eliminates the need for developers to be fighting
>> for globals at all. Back in the ol' days when we used archaic namespaces
>> hung off globals this was a bigger issue. Now we could have dozens of
>> globals without impacting the module system. EcmaScript itself defines
>> dozens and the typical browser environment has hundreds of globals. Plus
>> globals used without any community coordination (libraries grabbing
>> common names as globals without namespacing) is the biggest source of
>> possible conflict, but this is completely the opposite, it is totally
>> being done as a community, and is exactly the *right* way to establish
>> globals. One or two (community ascribed) globals should hardly be a concern.
> My concern was more for browser code that gradually adopts the
> require/define approach. I believe that will allow for quicker/broader
> adoption if existing code can use this new functionality gradually, so
> fewer globals are better. I also agree that if push came to shove,
> then two vs one global is not that much of a change, but the proposed
> renaming to me did not seem to give that much benefit to warrant
> another global.
>

Again, the suggestion is that CommonJS only define a single global not
two. In fact this should actually improve the conflict/pollution
potential of RequireJS. If RequireJS uses a community global (require or
define) and defines its own APIs on it, it is effectively polluting the
global/shared namespace with possible name conflicts just as much as if
it defines all its APIs directly on window. By putting modify(),
version, plugin(), isBrowser, baseUrl, etc. on a shared object
"require", you are injecting into a shared space. "require"'s can have
conflicts just like "window". By claiming that you are not using globals
because it is under require is a silly name game, it is still shared. If
CommonJS only defines a global "define" than RequireJS can use a single
namespace under require (since "require" would be free then) and keep
its own API separate from the shared namespace. This is the right way to
avoid conflicts (which is the point of avoiding global pollution,
whether it be the "window" global or any other shared namespace).
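As a mock illustration of the two layouts being contrasted here (the property names version, isBrowser, and baseUrl come from the message above; "g" is a stand-in for the page's global object, not real loader code):

```javascript
// "g" stands in for the browser's global object (window).
var g = {};

// Layout 1: loader-specific extensions ride on the shared "require"
// object, so two loaders both claiming require.version would collide,
// just as they would on window itself.
g.require = function () {};
g.require.version = '1.0';
g.require.isBrowser = true;

// Layout 2: the spec owns only "define"; the loader keeps its whole
// API under its own name, leaving the shared surface to the spec.
g.define = function () {};
g.requirejs = { version: '1.0', isBrowser: true, baseUrl: '/' };
```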

>>> I also do not think define.ensure makes sense -- it really means a
>>> kind of "require" vs a "define" -- the script requires that these
>>> other scripts/modules are available before running the function.
>> Its no less logical than require.def, whose purpose is to define a
>> module. This is all about frequency. If every single module using the
>> module definition API, but only a call or two that does the initial
>> require (require([]) or define.ensure()), it makes sense to give the
>> more frequently used form the most elegant logical API. Or we could have
>> a "define" and have a global "require" like RequireJS's.
>
> I believe require.def is more logical than define.ensure. require.def
> implies you are defining something that obeys require's rules.
> define.ensure does not define anything. But this is a bit of a
> bikeshed.

After thinking about this, there really is no reason CommonJS needs to
define a define.ensure, I recant that suggestion. The point of
require.ensure is to provide an interoperable way for modules to load
other modules on demand. The ensure() API is not needed for the initial
entry loading of modules; initiating the module loader is always
loader-specific anyway, so it is fine to use a loader-specific
API for the initial launch of modules. Since ensure() is intended for
modules, it can continue to exist as a property of the "require" free
variable, without any need for a "require" global. RequireJS's use of
the require() global to launch modules makes perfect sense, and is a
fantastic loader-specific API without any need for definition
from CommonJS.
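A minimal sketch of that arrangement, with the loader object mocked: ensure() rides on the module-local "require" free variable handed to each factory, so nothing named require has to exist in the global scope. runFactory and the loader shape are illustrative assumptions:

```javascript
// Hypothetical loader plumbing (names are illustrative): each factory
// gets its own local "require", carrying ensure(), so no global
// "require" is needed.
function runFactory(loader, factory) {
  function localRequire(id) { return loader.modules[id]; }
  localRequire.ensure = function (deps, callback) {
    // Load (or confirm) deps, then call back with the local require.
    loader.load(deps, function () { callback(localRequire); });
  };
  return factory(localRequire);
}
```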

>
>
>> But RequireJS doesn't even implement require.ensure, does it? It doesn't
>> seem like this would affect RequireJS.
> I have not implemented it because no one has asked for it when using
> RequireJS, and I think it is inferior to the require([]) syntax that
> RequireJS provides. However, if it enabled widespread async module
> adoption (vs RequireJS require([]), then I would implement it, since
> it is a subset of what require([]) can do now.

Yeah, I could see there not being much demand for require.ensure.


>> I think the main reason this is worth considering is that it affects
>> *every single* module (and the byte count thereof, which quickly adds up
>> for apps with many modules), so it is worth taking a hard look at what
>> is best here even if there might mean some API change pain.
> I do not believe byte size count matters for performance reasons (with
> an optimized, minified delivery of modules with gzip, it will be
> unnoticeable), but I appreciate wanting the cleanest API.
>
> I am voting "no" though, I do not believe it buys that much for the
> following reasons:
> - inertia. Mostly my personal inertia. It does not feel broken to me.
> - I also like the single global. It still makes sense to me that an
> ensure stays on require, or even better, just uses require([]) as used
> by RequireJS. That means the define name space just defines an async
> module, and an async module implementation still needs to implement
> something for "require".
> - I like that require.def implies that it obeys require's rules.
> define seems to float out in the ether.
>
> If others feel strongly, that require.def is just wrong, then I can
> support a define that maps to require.def in RequireJS. But others
> should speak up soon, as in the next couple of days. I want to move on
> to implementations and adoption.
>

Yeah, I'd like to hear from others too.

--
Thanks,
Kris

James Burke

Oct 12, 2010, 4:55:02 PM
to comm...@googlegroups.com, dojo-con...@mail.dojotoolkit.org
On Tue, Oct 12, 2010 at 11:31 AM, Kris Zyp <kri...@gmail.com> wrote:
> Again, the suggestion is that CommonJS only define a single global not
> two.

This specific suggestion might be for just one, but for the loader to
actually be useful, it will need a bootstrap call, require([]) or
require.ensure, so I still see it as needing two globals, since
hanging the bootstrap call off of define seems unlikely.

> In fact this should actually improve the conflict/pollution
> potential of RequireJS. If RequireJS uses a community global (require or
> define) and defines its own APIs on it, it is effectively polluting the
> global/shared namespace with possible name conflicts just as much as if
> it defines all its APIs directly on window. By putting modify(),
> version, plugin(), isBrowser, baseUrl, etc. on a shared object
> "require", you are injecting into a shared space. "require"'s can have
> conflicts just like "window". By claiming that you are not using globals
> because it is under require is a silly name game, it is still shared. If
> CommonJS only defines a global "define" than RequireJS can use a single
> namespace under require (since "require" would be free then) and keep
> its own API separate from the shared namespace. This is the right way to
> avoid conflicts (which is the point of avoiding global pollution,
> whether it be the "window" global or any other shared namespace).

Implementations of a spec usually provide some extensions. If you want
the code to be most portable, do not use the extensions (like
__dirname and __filename in Node).

My main concern was with existing code that might have non-compliant
globals and wants to gradually upgrade to the new API. Having two rather
than one possibly conflicting global makes conflict more likely, but it is
not a reason on its own to discount considering a new global.

James

James Burke

Oct 12, 2010, 9:24:55 PM
to dojo dev., comm...@googlegroups.com
On Tue, Oct 12, 2010 at 2:36 PM, Kris Zyp <kz...@dojotoolkit.org> wrote:
> If you care about reducing naming hazards (which is the whole purpose of
> minimizing globals), it seems like the best options would be for
> CommonJS to keep "require" and RequireJS to use "requirejs", or CommonJS
> could use "define" and RequireJS could keep "require". Sharing naming
> authority of "require" isn't really a tenable option for naming safety.

I care about reducing hazards to existing code in the wild. My
potentially poor choice of how I implemented require seems to be
orthogonal to that point. Existing code in the wild could have a
define or require that are most likely *not* CommonJS-compatible, and
having two globals that could potentially conflict with existing code
is more troublesome than one. As mentioned, that point is not a
complete reason for killing the proposal, but it is a tradeoff of the
proposal.

And to be clear, I think there will be two globals, because to allow
interop, the "ensure" or some bootstrap method (whatever the name)
should be specified, and it seems unlikely/unnatural for that to hang
off of define.

James

Kris Kowal

Oct 12, 2010, 9:30:17 PM
to comm...@googlegroups.com
On Tue, Oct 12, 2010 at 11:31 AM, Kris Zyp <kri...@gmail.com> wrote:
>> If others feel strongly, that require.def is just wrong, then I can
>> support a define that maps to require.def in RequireJS. But others
>> should speak up soon, as in the next couple of days. I want to move on
>> to implementations and adoption.
> Yeah, I'd like to hear from others too.

I think we are least likely to have problems if we use "require" as the
<script> context for all CommonJS stuff. I agree with Kris Zyp that
care should be taken not to drive over-consumption of the "require"
namespace in implementations. If you do choose to extend the require
namespace, do so carefully: it's delicate, and we're pretty harsh on
implementations here if the right way forward means leaving
cruft behind.

I'm also fine with ditching the old require.define proposal, marking
it with a big X, and usurping its name.

Kris Kowal
