require.def(id, deps, factory, id, deps, factory, id, deps, factory, ...);
And then say the dependencies/injections arguments are optional
(defaults to ["require", "exports", "module"]), and one additional set
of dependencies can be suffixed as the last parameter. So it can also be
in the form:
require.def(id, factory, id, factory, deps);
This would at least satisfy my desire to have a concise way to define
multiple modules (avoiding multiple calls and pause and resume). It
would also eliminate the concern Tobie had about IE bugs with property
enumeration. And of course normal transport/C usage would be still
supported.
Another example:
require.def("foo", ["bar"], function(bar){
bar.test();
},
"bar", function(require, exports){
require("another");
exports.test = function(){}
}, ["another"]);
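For what it's worth, the variadic form seems mechanically parseable. A loader might walk the argument list along these lines (a hypothetical sketch only; `parseDefs` stands in for loader internals and is not part of any proposal):

```javascript
// A sketch of how a loader might walk the repeating argument sets.
// "parseDefs" is a made-up internal helper, not part of any spec.
function parseDefs() {
  var args = Array.prototype.slice.call(arguments);
  var defs = [];
  while (args.length) {
    var id = args.shift();                        // module id (a string)
    var deps = ["require", "exports", "module"];  // default injections
    if (Array.isArray(args[0])) {
      deps = args.shift();                        // explicit injections
    }
    var factory = args.shift();                   // factory function
    var extraDeps = [];
    // An array directly after a factory cannot start a new set (sets
    // start with a string id), so it must be the suffixed dependency
    // list that is loaded but not injected.
    if (Array.isArray(args[0])) {
      extraDeps = args.shift();
    }
    defs.push({ id: id, deps: deps, factory: factory, extra: extraDeps });
  }
  return defs;
}
```

The key observation is that an array following a factory is unambiguous, because every new definition set begins with a string id.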
Also, one more thought/question: is it possible to make the very first
id optional? Can that be implied from the module/script that was requested,
associating the script with the require.def call by the script element's
onload/onreadystatechange event that require.def precedes (or do browsers
sequence the script executions so you can determine the id by the order
in which the scripts were requested)?
--
Thanks,
Kris
pause and resume are in there to allow legacy scripts to be included
which may define global variables via var. Wrapping them in a function
wrapper would break those kinds of scripts, but I expect that is not
of interest in the context of CommonJS. As far as RequireJS is
concerned, I still want to support legacy scripts for now, so I will
likely still support pause/resume.
That said, it does not mean I could not support the transport proposal
in this thread. Although I do have a question (after next quote):
> Another example:
> require.def("foo", ["bar"], function(bar){
> bar.test();
> },
> "bar", function(require, exports){
> require("another");
> exports.test = function(){}
> }, ["another"]);
What if "bar" depended on "baz"? How would that work for the factory arguments?
require.def("bar", ["baz"], function(baz?, require, exports) {});
As I recall, there were objections from Kris Kowal about treating
require, exports, module as dependencies that could be listed in the
dependency array as strings, then have them listed in same order as
factory function arguments, and I do not like forcing the only
arguments to the factory function to be just require, exports, module.
So I am not sure how to resolve that for RequireJS: I do not mind
supporting a CommonJS transport format, but I would not want to give
up the matching order for dependency string array to factory function
argument names for things that are just coded in RequireJS module
format.
I am not sure you are implying that, I think you are indicating both
would work, but I seem to be missing it.
> Also, one more thought/question: is it possible to make the very first
> id optional? Can that be implied from the module/script that was requested,
> associating the script with the require.def call by the script element's
> onload/onreadystatechange event that require.def precedes (or do browsers
> sequence the script executions so you can determine the id by the order
> in which the scripts were requested)?
The order is not guaranteed. In particular, IE does not fire the
readystate change directly after executing the script. There is a test
in RequireJS at this location if you want to confirm:
http://github.com/jrburke/requirejs/tree/master/tests/browsertests/scriptload/
Other browsers seem to match them up. Perhaps IE 9 will work better
too. I am having trouble testing IE 9 Platform Preview 4 at the
moment. Looking at the HTML5 spec at this URL:
http://dev.w3.org/html5/spec/Overview.html#executing-a-script-block
Step 5 in the "If the load was successful" section seems to indicate
that onload should fire immediately after the script is executed, but
if an inline script, then the onload is queued in what I believe is
the normal event queue, which to me indicates it may not fire exactly
after the execution of the inline script.
It might be good to get clarification from the HTML5 folks to see if
we could better match scripts with their elements. Hmm, IIRC someone
made a proposal to the HTML5 group that allowed a script to get at its
related element? If so, that would work too. But all future stuff.
Depends on how important the now is to you. I still want to work in
the now, so for RequireJS, the module name always needs to be
specified.
James
On 9/1/2010 10:59 PM, James Burke wrote:
> On Wed, Sep 1, 2010 at 6:03 AM, Kris Zyp <kri...@gmail.com> wrote:
>> This would at least satisfy my desire to have a concise way to define
>> multiple modules (avoiding multiple calls and pause and resume). It
>> would also eliminate the concern Tobie had about IE bugs with property
>> enumeration. And of course normal transport/C usage would be still
>> supported.
> pause and resume are in there to allow legacy scripts to be included
> which may define global variables via var. Wrapping them in a function
> wrapper would break those kinds of scripts, but I expect that is not
> of interest in the context of CommonJS. As far as RequireJS is
> concerned, I still want to support legacy scripts for now, so I will
> likely still support pause/resume.
That makes sense. Although even in the RequireJS world, it is reasonable
that in situations where pure RequireJS format is used (everything
enclosed in require.def calls, which could be signaled with a build flag
or detected from code), the build could combine modules with
sequential arguments instead of sequential calls between pause() and
resume() calls, saving some bytes in the built files, right?
> That said, it does not mean I could not support the transport proposal
> in this thread. Although I do have a question (after next quote):
>
>> Another example:
>> require.def("foo", ["bar"], function(bar){
>> bar.test();
>> },
>> "bar", function(require, exports){
>> require("another");
>> exports.test = function(){}
>> }, ["another"]);
> What if "bar" depended on "baz"? How would that work for the factory arguments?
>
> require.def("bar", ["baz"], function(baz?, require, exports) {});
In the example, bar already depends on another module, I just spelled it "another" instead of "baz". But if you want that dependency to be declared directly on that module, like your example, that would be:
require.def("bar", ["baz", "require", "exports"], function(baz, require, exports) {});
(just like Transport/C has always been)
> As I recall, there were objections from Kris Kowal about treating
> require, exports, module as dependencies that could be listed in the
> dependency array as strings, then have them listed in same order as
> factory function arguments, and I do not like forcing the only
> arguments to the factory function to be just require, exports, module.
> So I am not sure how to resolve that for RequireJS: I do not mind
> supporting a CommonJS transport format, but I would not want to give
> up the matching order for dependency string array to factory function
> argument names for things that are just coded in RequireJS module
> format.
I agree, I am definitely suggesting maintaining RequireJS/transport/C
style of mixing injection variables with dependencies for the arguments.
I understand Kowal's concern with this. I just don't agree that it is
really a practical problem. The cost of reserving a few module ids is
negligible (one already reserves module ids to deal with existing
code/modules). If we really wanted to preserve the namespace of module
ids, we could spell require, exports with some reserved character like
"!require", "!exports".
> I am not sure you are implying that, I think you are indicating both
> would work, but I seem to be missing it.
>
>> Also, one more thought/question: is it possible to make the very first
>> id optional? Can that be implied from the module/script that was requested,
>> associating the script with the require.def call by the script element's
>> onload/onreadystatechange event that require.def precedes (or do browsers
>> sequence the script executions so you can determine the id by the order
>> in which the scripts were requested)?
> The order is not guaranteed. In particular, IE does not fire the
> readystate change directly after executing the script. There is a test
> in RequireJS at this location if you want to confirm:
> http://github.com/jrburke/requirejs/tree/master/tests/browsertests/scriptload/
Awesome, great tests. And yes, I can definitely reproduce the scripts
executing out of order, and the onreadystatechange definitely does not
fire directly after the execution, but... Every time I run the test, the
order that the scripts are executed exactly matches the order of the
firing of the onreadystatechange events. If script five executes before
script four, then the onreadystatechange for five will fire before the
onreadystatechange for four. While it is not as convenient as having the
event fire directly after the script executes, it does seem to provide
the information necessary to associate requests with script executions
(and thus anonymous require.def calls with module ids). For example,
annotating from your tests:
one.js script
two.js script
-> four.js script # four gets executed before three
-> three.js script
five.js script
six.js script
seven.js script
eight.js script
one.js loaded
nine.js script
two.js loaded
-> four.js loaded # the onreadystatechange/loaded event preserves the
order (four before three)
-> three.js loaded
five.js loaded
six.js loaded
seven.js loaded
eight.js loaded
nine.js loaded
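If that ordering really is reliable, a loader could queue anonymous require.def calls and pair each load event with the next queued factory. A rough sketch (hypothetical names, and it assumes event order always matches execution order, which is exactly the behavior that would need confirmation):

```javascript
// Sketch: pair anonymous require.def calls with the scripts that
// produced them, assuming load events fire in execution order.
var registry = {};        // module id -> factory, once pairing is known
var anonymousQueue = [];  // factories executed but not yet named

function def(factory) {
  // Anonymous module: hold it until a script load event tells us
  // which requested module this execution belonged to.
  anonymousQueue.push(factory);
}

function onScriptLoaded(moduleId) {
  // The next queued factory came from the script whose event fired.
  registry[moduleId] = anonymousQueue.shift();
}
```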
Am I missing something?
--
Thanks,
Kris
On 9/2/2010 2:07 PM, jbrantly wrote:
> On Sep 1, 9:03 am, Kris Zyp <kris...@gmail.com> wrote:
>> This would at least satisfy my desire to have a concise way to define
>> multiple modules (avoiding multiple calls and pause and resume).
> Not sure if you meant it this way, but it's already possible to have
> multiple modules with Transport/C in the same file. It does use
> multiple calls to require.def of course, but no pause/resume
> necessary: http://github.com/jbrantly/yabble/blob/master/test/transportC/tests/multipleDefines/program.js
Say I do a require.ensure(["A"],...), which should trigger a request for
A.js. Let's say the response includes both modules A and B, and they
have a circular dependency:
require.def("A", ["B"], function(B){
});
require.def("B", ["A"], function(A){
});
The problem is that when the first require.def executes, it satisfies
the request for A, but indicates that module B is still needed. A loader
might then request module B, because there is no way for the loader to
know that another require.def call is going to be executed that will
provide module B. Perhaps this would occur in Yabble, although there are
ways around this; maybe you use setTimeout to wait until the current
execution is finished before deciding if unsatisfied dependencies still
need to be requested. However, setTimeout(func) solutions are still not
optimally performant, because the minimum delay resolution for
setTimeout in the browser is something like 15ms, which can
add up quickly with lots of modules. Being able to explicitly define a
set of modules that should be defined (with a clear finish) before
requesting unsatisfied dependencies is the fastest, most reliable solution.
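A minimal sketch of that "clear finish" idea (names are illustrative, not a spec): collect every definition and dependency while the combined file executes, and only compute what is still unsatisfied once the file's onload fires.

```javascript
// Sketch: defer fetching dependencies until the whole response has
// executed, so a later require.def in the same file can satisfy them.
var defined = {};  // module id -> factory, for modules provided so far
var wanted = [];   // every dependency id seen so far

function def(id, deps, factory) {
  defined[id] = factory;
  wanted = wanted.concat(deps);
}

// Called from the script's onload handler, i.e. after every
// require.def in the file has run; only now decide what to fetch.
function unsatisfied() {
  return wanted.filter(function (id) {
    return !(id in defined);
  });
}
```

With the circular A/B example above, nothing would be requested, because both modules are defined by the time onload fires.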
The other motivation for adding multiple modules in a single call is for
optimized compression of combined files. Doing multiple definitions in a
single call takes fewer bytes than doing multiple calls. Most things we
define in CommonJS don't need to be very terse. But, in this situation,
we are dealing with code that must be used to wrap every module sent to the
browser. Brevity is extremely important for a call that is going to be used
for the output of build processes that combine multiple files and minify
JavaScript to squeeze out the best performance. This is an arena where
battles occur over just a few bytes, with Google Closure Compiler,
YUI Compressor, ShrinkSafe, Packer, and UglifyJS fighting to
give the best user experience on bandwidth-constrained mobile devices
and less-than-optimal connections.
> I'm not sure I see a huge benefit in reducing the calls to one
> (perhaps you could explain that better?). On the other hand, I don't
> see a problem with it either, so why not? :) I think the main thing is
> being able to stick them all into one file/request which is already
> possible.
>
> I like how you've solved the explicit injections difference. This
> would be a requirement for me to be on board. I prefer D's approach of
> a sane default with overrides if desired. I think that your function
> signatures might should make a distinction between injects and deps
> though. For example:
>
> require.def(id, injects, factory); // normal transport C
> require.def(id, factory, deps); // the additional set of dependencies
> which are *not* injected, with a default injects of ['require',
> 'exports', 'module']
>
> Both of the above are possible with your format but injects/deps don't
> work quite the same way.
That makes sense, having injection variables in the trailing dependency
list would be incoherent anyway. And that is a good point, we should
make sure the trailing dependency list could always be used for a clean,
unencumbered namespace for module ids (in case you really need to have a
module named "require" or "exports").
--
Thanks,
Kris
But on the other hand, you could use the onload/onreadystatechange event
to delineate the module definitions. I don't believe there are any extra
delays with this event, and in this case of anonymous modules, you would
have to listen/wait for this event anyway. One could combine multiple
calls with this approach without changing transport/C (in this regard).
>> The other motivation for adding multiple modules in a single call is for
>> optimized compression of combined files. Doing multiple definitions in a
>> single call takes fewer bytes than doing multiple calls. Most things we
>> define in CommonJS don't need to be very terse. But, in this situation,
>> we are dealing with code that must be used to wrap every module sent to the
>> browser. Brevity is extremely important for a call that is going to be used
>> for the output of build processes that combine multiple files and minify
>> JavaScript to squeeze out the best performance. This is an arena where
>> battles occur over just a few bytes, with Google Closure Compiler,
>> YUI Compressor, ShrinkSafe, Packer, and UglifyJS fighting to
>> give the best user experience on bandwidth-constrained mobile devices
>> and less-than-optimal connections.
> Understood, but if we want to get really technical, the "require.def"
> part (which is the redundant part) would probably get compressed
> fairly well using gzip. Everyone *is* using gzip, right? :)
>
> In any case you've answered my question and like I said before I don't
> see anything bad about it, so two thumbs up from me.
Good point.
So to summarize I think there are four parts to what I have suggested as
changes to transport/C:
1. Make the injection+dep array optional (defaulting to ["require",
"exports", "module"]) to make it easy to wrap CommonJS modules succinctly.
2. Make the module id optional, determining this from the requested
module and the order of the onload/onreadystatechange event. James
Burke's tests convinced me that this is feasible, and I think it would
be great to support anonymous modules and make it much easier to
hand-code modules without hard-coding them to a specific location.
3. Allow for repeating sets of id/injections/factory in the arguments.
This is intended to improve the performance of combined modules.
After thinking about the ability of loaders to use the onload event and
gzip's elimination of redundancy, perhaps this doesn't buy us that much.
4. Allow for a trailing dependency array in the arguments. This array
does not mix with injection variables, so it is an unencumbered namespace,
and it is also important for succinct wrapping of CommonJS modules so
the dependencies can be declared without having to write out "require",
"exports", "module".
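Put together, the four call shapes might look roughly like this (a sketch with made-up module ids; `require.def` is stubbed out here only so the shapes are concrete):

```javascript
// Stub so the shapes below are concrete; a real loader would
// register and resolve the modules.
var calls = 0;
var require = { def: function () { calls++; } };

// 1. Optional injection array (defaults to require/exports/module):
require.def("a", function (require, exports, module) {
  exports.hello = function () { return "hi"; };
});

// 2. Optional id, implied from the requesting script (one module
//    per file):
require.def(["b"], function (b) { /* use b */ });

// 3. Repeating id/injections/factory sets in one call:
require.def("a", ["b"], function (b) { /* use b */ },
            "b", function (require, exports) { /* ... */ });

// 4. Trailing dependency array, loaded but not injected:
require.def("a", function (require, exports) {
  var c = require("c");
}, ["c"]);
```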
--
Thanks,
Kris
Yes it could. In the context of CommonJS modules only you can get away
with the format you suggest. Just mentioning why pause/resume is
supported in the more general case, but for the purposes of a CommonJS
transport, what you suggest is fine. Although your point about using
the script onload to know when to trace dependencies is a nice way
to avoid pause/resume. Hmm, although for other environments outside
the browser it means coding in special handling if it were to support
files that had more than one module in the file. That may be OK
though.
> Awesome, great tests. And yes, I can definitely reproduce the scripts
> executing out of order, and the onreadystatechange definitely does not
> fire directly after the execution, but... Every time I run the test, the
> order that the scripts are executed exactly matches the order of the
> firing of the onreadystatechange events. If script five executes before
> script four, then the onreadystatechange for five will fire before the
> onreadystatechange for four.
Ah, great observation! There does seem to be a way to tie them
together. It does make implementation a bit more complicated, but
certainly doable. I am a bit wary of depending on the behavior -- it
would be good to confirm with browser vendors that this behavior
always matches up, but it is promising. I am happy to confirm with
browser vendors if we get general agreement.
However, I want to push it a little further:
To me the criticisms I heard for using a require.def type of syntax as
the CommonJS source file module syntax were:
1) specifying the name of the module was seen as bad, makes moving
files to different directories harder.
2) Explicitly reserving "require", "exports", and "module" as
dependency names in the array to map to the CommonJS module
definitions was seen as bad.
3) Perceived to be more typing.
We could get rid of #1, and only use names when combining more than
one module together in a file.
#2: given that a function callback is used, and dependencies are
listed in an array/function args, "require" is not normally needed
inside the factory function (may want to for some circular
dependency/generic module referencing). The factory function can
return the module exports, so "exports" would not normally be needed
(maybe only for some circular dependency cases). And I believe
"module" is not needed that often either (but there are valid cases
for needing it). So in practice for most modules, they would not need
to specify any of those special dependency names. Yes, they would
still need to be reserved, but as far as practical impact on
developers or code weight, it seems negligible.
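The return-the-exports style described in #2 might look like this (module names are made up; `def` is a stand-in for a loader's registration call so the sketch is self-contained):

```javascript
// Stand-in registration so the example runs; a real loader would
// resolve "point" and invoke the factory itself.
var defs = {};
function def(id, deps, factory) {
  defs[id] = { deps: deps, factory: factory };
}

// The factory returns the export directly, so neither "exports" nor
// "module" has to appear in the dependency list, and the export can
// even be a constructor function.
def("shape", ["point"], function (point) {
  function Shape(x, y) {
    this.origin = point.create(x, y);
  }
  return Shape;
});
```

A consumer could then write `new Shape(0, 0)` rather than something like `new require('foo').Foo()`.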
#3: is really just bikeshedding. I believe the code weight is about
the same, since return {} is used instead of typing exports more than
once, "require" normally only needs to be typed once, and since return
{} is used for setting exports, constructor functions can be used as
the module export, meaning that you do not have to see extra typing
like "new require('foo').Foo()". So in the end, the typing ends up
being about the same.
So the differences between "source module" and "transport" end up
being the addition of a name for the module as the first arg. I like
that, and it means getting an easy to debug, fast loading source
module format for the browser. How does that sound? :)
If that goes over well, then as jbrantly mentioned, having extra
require.def calls in the transported file should be negligible with
gzip. It loses some optimization in that a common dependency will be
listed many times in the array of dependency/injections, but I think
gzip also helps a bit there too.
If the inertia for considering a change to the CommonJS source module
format is too great, then what you propose is fine, and I will focus
on obsoleting the old CommonJS source module format via RequireJS
adoption, possibly removing the need for the name in require.def
syntax in RequireJS. I am curious though, who else is interested in
the transport format? You, jbrantly and me, I wonder who else actively
tries to implement one of the transport proposals. I am sure there are
others, it has just been a while since transports were brought up.
James
Sounds great! Maybe we can nail down exactly what should be changed in
the transport/C spec (optional first module id, optional injection list,
trailing dependency list) and you can do any additional verification of
the script onload association.
> If that goes over well, then as jbrantly mentioned, having extra
> require.def calls in the transported file should be negligible with
> gzip. It loses some optimization in that a common dependency will be
> listed many times in the array of dependency/injections, but I think
> gzip also helps a bit there too.
>
> If the inertia for considering a change to the CommonJS source module
> format is too great, then what you propose is fine, and I will focus
> on obsoleting the old CommonJS source module format via RequireJS
> adoption, possibly removing the need for the name in require.def
> syntax in RequireJS. I am curious though, who else is interested in
> the transport format? You, jbrantly and me, I wonder who else actively
> tries to implement one of the transport proposals. I am sure there are
> others, it has just been a while since transports were brought up.
Well, potentially Dojo, I think that's an important one :).
--
Thanks,
Kris
I think gzipping makes the need for #3 and #4 less necessary. If #4
really made your side of processing noticeably better, I could
probably live with it.
I do like #2, want to prototype it some more.
#1 is a bit trickier. If no explicit dependencies are specified in
RequireJS, then I do not bother with creating an exports object, since
I allow return in the factory function to define the exports;
similarly, I do not bother with manufacturing a "module" object. So I
am concerned about doing extra work when it is not needed. At first
glance, I prefer just to be explicit with the dependencies, but may be
able to be convinced otherwise. Here again I think gzip helps to
collapse the size of those things if it is a common pattern.
I think the bigger thing that is indicated by this approach is that
"require", "module" and "exports" do become reserved strings that map
to the free variables expected in traditional CommonJS modules. I want
to call that out since it was contentious before.
James
I had hoped the trailing dependency argument would mitigate this
problem. However, I think stating it as two optional arguments is too
confusing. Let me restate my proposal. The require.def could take two forms:
require.def(id?, injections, factory); // existing transport/C (with
optional first arg, hopefully)
require.def(id?, factory, dependencies);
This second form is like the first, except that arguments to the factory
are always "require", "module" and "exports", and the third argument has
no reserved module names ("require" would request the module with that
name). Basically, this form would be a super easy way to wrap existing
CommonJS modules with minimal alteration to require.def and without
reserved strings in the dependency list.
--
Thanks,
Kris
* require.def should not depend on |this|. You should be able to use
require.def in both forms:
require.def(...);
var def = require.def;
def(...);
This is already true for RequireJS and Yabble, but seems like it should
be explicitly stated in the specification.
* If we are going to support anonymous modules, we should certainly also
allow relative module ids. This is an important element to keeping
modules portable as well. Relative ids should be supported in the
dependency list and I believe also in the module id argument. The
dependency list module ids should be resolved relative to the module
that is being defined.
Allowing relative ids for the module id (the first param) helps to
address a broader issue of how packages could be accessed from a client
side loader. How do we create "built" packages for browsers? If I
request a module "a" from a package at http://somesite.com/foo, and I
have built "a" to also include its dependency "b", what should
http://somesite.com/foo/lib/a.js return? One possibility is it could
assume that it is being mapped to the "foo" namespace:
require.def("foo/a",["./b"], function(b){...});
require.def("foo/b",[], function(){...});
but then the modules are hard-coded to a particular expected location. A
much more portable solution is anonymous modules plus relative module ids:
require.def(["./b"], function(b){...});
require.def("./b",[], function(){...});
I believe we would need to specify that relative module ids are resolved
relative to the last require.def call.
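The resolution rule itself is small; something along these lines (a sketch handling only "./" and "../" segments, not a spec excerpt):

```javascript
// Sketch of resolving a relative dependency id against the id of the
// module being defined (only "./" and "../" handled; illustrative).
function resolveId(relId, baseId) {
  if (relId.charAt(0) !== ".") {
    return relId; // already a top-level id
  }
  var parts = baseId.split("/");
  parts.pop(); // drop the module's own name, keep its "directory"
  relId.split("/").forEach(function (seg) {
    if (seg === "..") {
      parts.pop();
    } else if (seg !== ".") {
      parts.push(seg);
    }
  });
  return parts.join("/");
}
```

So "./b" declared inside a module the loader has named "foo/a" would resolve to "foo/b".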
* Is a "transport" even still the right name for require.def anymore? As
we add the ability to do anonymous modules, maybe relative ids, and with
the existing ability to conveniently map dependencies to arguments, it
seems like it is not strictly a transport (I think Kris Kowal has
pointed this out before); it is perhaps an async module format, module
registration API or module definition API.
Thanks,
Kris
I do not understand, I would have thought not specifying the module
name at all was good enough. I suppose you may want to explain more by
what a "built" package means.
James
By built package, I mean a package that may have modules that have
multiple dependencies in a single file like we do with builds in Dojo.
Taking an example from Dojo, the DataGrid.js we distribute to CDNs
(http://o.aolcdn.com/dojo/1.5/dojox/grid/DataGrid.xd.js) includes
multiple modules. Since all these modules are in a single file
(DataGrid.js), only the DataGrid module can be anonymous, but the other
ones could at least be defined relatively so they aren't hard-coded to
a certain path:
require.def(["./_Grid","./DataSelection",...], function(){ /*DataGrid
module */});
require.def("./_Grid",[...], function(){ /*_Grid module*/});
require.def("./DataSelection",[...], function(){ /*DataSelection module*/});
...
Does that make sense?
--
Thanks,
Kris
That would only work if modules from just one package were included in
the built file. I am not sure how common that is, or if it is worth
supporting. Once there are modules from two packages in the file, a full
ID is needed.
For me, if we are talking about optimized, built code, it means the
module IDs have been set/burned in, so I do not see a great need to
have relative module IDs in that case.
James
Yes, that makes sense. But relative ids in the dependency list for
anonymous modules would certainly still be reasonable (since it
represents pre-built source code that may be built to any location),
right? And I would think it would be pretty easy to support as well.
--
Thanks,
Kris
Agreed.
James
Also, would relative ids in dependency lists *only* be supported for
anonymous modules or should it be for all modules? Is it more work to
disable this for non-anonymous modules if relative id is supported?
--
Thanks,
Kris
For me, it is a module definition, but if it helps for the purposes of
this group to call it a Transport proposal, that is fine. I would be
OK if it was revved to a Transport/E or whatever the next letter is,
just in case support for the anonymous modules falls through, but
reusing /C also works if you already did changes.
> Also, would relative ids in dependency lists *only* be supported for
> anonymous modules or should it be for all modules? Is it more work to
> disable this for non-anonymous modules if relative id is supported?
Seems fine to allow relative ids in the dependency list when name is
specified for the module. That works today in RequireJS, and I do not
think it forces any new burdens on the loader, since relative ids need
to be supported anyway; they are always resolved in relation to the
module id for the require.def call. In the anonymous require.def case, the
module id gets applied later by the system before dependencies are
dealt with anyway.
James
http://wiki.commonjs.org/wiki/Modules/AsynchronousDefinition
Feedback welcome, of course.
Thanks,
Kris
If you perceived the issues with loading CommonJS modules in a browser to be a big deal then it should have been dealt with a year and a half ago when we were defining CommonJS modules. I thought we had concluded it wasn't a major problem.
I don't have a problem with CommonJS modules in the browser because I've been using a similar system (Cappuccino's load system) for several years without major issues.
</rant>
-tom
I have just as much frustration on the other side. A ServerJS group
switching names to CommonJS, but then treating the browser as second
class has not sat well with me. I did try to engage, about a year ago,
but was informed that the browser was not the first target for this
group. It would be bad to assume that this group has had enough of a
cross section of JS developers to know if they got the format right,
particularly given the ServerJS origins.
I do not think I got it perfect with Transport/C-RequireJS, and I am
happy that Kris Zyp has noticed a pattern I missed before that would
allow not specifying the name in a require.def call if there is just
one module in a script.
I believe it brings the format closer to something that could be used
in the browser directly and meets the goals for CommonJS. As I
mentioned in the other thread[1], the issue of typing is a bikeshed,
and the reservation of "require", "module" and "exports" does not seem
that bad, particularly given that most modules will not need them as
much when using a format that has a function wrapper, and it reduces
the overall typing in the format.
> I don't have a problem with CommonJS modules in the browser because I've been using a similar system (Cappuccino's load system) for several years without major issues.
Cappuccino and Objective-J are not what I would consider mainstream
front end development though. Not coding in the language that the
browser already knows is not natural for many front end developers,
including me. Which is fine, the browsers are capable enough to allow
variations on that spectrum. You find it works for you. We in Dojo
have found xhr+eval to be workable for many years, but it does not
work as well as something that uses script tags. We know this from
experience. It makes adoption of the toolkit harder. There are
complications with xdomain loading, debugging and speed. They are
workable, but there are real costs. Using a function wrapper avoids
those costs. It may be tempting to think the tradeoff is the amount of
typing, but as mentioned[1], that can be a hard argument to make
definitively. YUI seems to operate well with a function wrapper too.
I do not want to start a flame war on this, we are likely not to get
anywhere on it. I will try not to respond more on this thread about
it. I just wanted to point out that there is a nontrivial number of
developers that feel differently, and not all want to try to engage
with this group because it has not directly impacted them yet, and
there has not been much in it for them to participate, since it has
been mentioned a couple times that CommonJS is mainly concerned with
non-browser environments. If you want to keep it that way, fine by me,
but know that other solutions may gain more ground, regardless of the
transport or module format labels.
[1] http://groups.google.com/group/requirejs/msg/6922316ab3b66bbb
James
> On Fri, Sep 10, 2010 at 10:45 PM, Tom Robinson <tlrob...@gmail.com> wrote:
>> So we now have two different ways of defining CommonJS modules? There was a reason the transport specs were named "transport". This significantly complicates the CommonJS module story.
>>
>> If you perceived the issues with loading CommonJS modules in a browser to be a big deal then it should have been dealt with a year and a half ago when we were defining CommonJS modules. I thought we had concluded it wasn't a major problem.
>
> I have just as much frustration on the other side. A ServerJS group
> switching names to CommonJS, but then treating the browser as second
> class has not sat well with me. I did try to engage, about a year ago,
> but was informed that the browser was not the first target for this
> group. It would be bad to assume that this group has had enough of a
> cross section of JS developers to know if they got the format right,
> particularly given the ServerJS origins.
Personally I always intended to use ServerJS for more than servers and never liked the ServerJS name. But I also never thought there would be significant opposition to using the proposed module system in the browser.
> I do not think I got it perfect with Transport/C-RequireJS, and I am
> happy that Kris Zyp has noticed a pattern I missed before that would
> allow not specifying the name in a require.def call if there is just
> one module in a script.
>
> I believe it brings the format closer to something that could be used
> in the browser directly and meets the goals for CommonJS. As I
> mentioned in the other thread[1], the issues of typing is a bikeshed,
> and the reservation of "require", "module" and "exports" do not seem
> that bad, particularly given that most modules will not need them as
> much when using a format that has a function wrapper, and it reduces
> the overall typing in the format.
>
>> I don't have a problem with CommonJS modules in the browser because I've been using a similar system (Cappuccino's load system) for several years without major issues.
>
> Cappuccino and Objective-J are not what I would consider mainstream
> front end development though. Not coding in the language that the
> browser already knows is not natural for many front end developers,
> including me. Which is fine, the browsers are capable enough to allow
> variations on that spectrum. You find it works for you.
The language is irrelevant to this discussion so I'm going to ignore your superficial criticism on that. The Objective-J loader can load standard JavaScript, and we've even modified it to support CommonJS modules (it was a couple dozen line change). It works with Firebug and WebKit debuggers (and presumably anything else that supports @sourceURL). It's also got something similar to the transport format for loading cross domain, but that format is automatically generated by a tool upon deployment, of course.
Does CommonJS even need to appease "mainstream front end" developers anyway?
> We in Dojo
> have found xhr+eval to be workable for many years, but it does not
> work as well as something that uses script tags. We know this from
> experience. It makes adoption of the toolkit harder.
I suspect confusing people with multiple module formats is going to hurt adoption more.
> There are
> complications with xdomain loading, debugging and speed. They are
> workable, but there are real costs. Using a function wrapper avoids
> those costs. It may be tempting to think the tradeoff is the amount of
> typing, but as mentioned[1], that can be a hard argument to make
> definitively. YUI seems to operate well with a function wrapper too.
xdomain and speed are solved by the *transport* format. Debugger support is solved by @sourceURL, or the transport format.
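For concreteness, here is a sketch of the @sourceURL technique mentioned above: an eval-based loader appends a sourceURL comment so that supporting debuggers (Firebug, WebKit) list the evaluated code under a file name. The module source and the "modules/foo.js" name are invented for illustration; at the time the syntax was `//@ sourceURL`, later standardized as `//# sourceURL`.

```javascript
// Module source as it might arrive over XHR (made-up one-liner).
var src = 'exports.ok = true;';

// Tag it so debuggers attribute the evaluated code to a named file.
var tagged = src + "\n//# sourceURL=modules/foo.js";

// The Function constructor stands in for the loader's eval step.
var exportsObj = {};
new Function("exports", tagged)(exportsObj);

exportsObj.ok; // true
```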
What other value does using a script tag provide? Certainly not familiarity to "mainstream" JavaScript developers, since using any kind of dependency management system is vastly different than manually including script tags.
> I do not want to start a flame war on this, we are likely not to get
> anywhere on it. I will try not to respond more on this thread about
> it. I just wanted to point out that there is a nontrivial number of
> developers that feel differently, and not all want to try to engage
> with this group because it has not directly impacted them yet, and
> there has not been much in it for them to participate, since it has
> been mentioned a couple times that CommonJS is mainly concerned with
> non-browser environments. If you want to keep it that way, fine by me,
> but know that other solutions may gain more ground, regardless of the
> transport or module format labels.
It certainly seems we're not going to convince each other. If the majority of CommonJS contributors agree with you then I'll back off, but so far it seems like it's mostly you and a couple others.
I'll just say my preference if you're going to invent a new module format is to not call it CommonJS modules. We absolutely should not have two incompatible ways of defining modules.
If we do end up having two formats then we need to specify that implementations should support both, otherwise what's the point?
Alternatively, you could make your boilerplate something backwards compatible with CommonJS modules, something like:
(require.def || function (f) { f(exports, require, module); })(
  function (exports, require, module) {
    // ...
  }, "foo", ["bar", "baz"]);
Obviously it's a bit verbose and error prone. Not a problem if it's auto-generated though.
-tom
On 9/11/2010 5:42 AM, Tom Robinson wrote:
> On Sep 11, 2010, at 2:24 AM, James Burke wrote:
>
>> On Fri, Sep 10, 2010 at 10:45 PM, Tom Robinson <tlrob...@gmail.com> wrote:
>>> So we now have two different ways of defining CommonJS modules? There was a reason the transport specs were named "transport". This significantly complicates the CommonJS module story.
But we never did have consensus on the transport API. We have always had
multiple APIs for CommonJS module transport without a clear winning
proposal. The basic underlying difference of approach between
hand-coding an asynchronous module vs auto-generated wrappings of
CommonJS has constantly divided such discussions. The proposal doesn't
introduce anything new (well it altered the transport/C API slightly,
making the first param optional and removed require.pause and
require.resume), it simply calls Transport/C what it really is. If I got
the name wrong, suggest something different (I asked for suggestions
before creating the wiki page), but it ain't a "transport".
> [snip]
> xdomain and speed are solved by the *transport* format.
Absolutely, but the transport methodology is to wrap a CommonJS module.
Doing this by hand is a lot of extra work (AMD is vastly less effort),
and doing it with a tool creates an extra tool dependency. Within Dojo
(which wants to use CommonJS), requiring the use of a tool (for
wrapping) is a non-starter.
--
Thanks,
Kris
I pointed out the language to illustrate that getting something that
works with your choice of language does not translate into a more
general statement that the module format is ideal for front
end development. I mentioned YUI to illustrate someone else's
perspective of getting something to work just fine for them, but it
happens to use a function wrapper format. I mentioned Dojo because it
does exactly all the things you just described you do for standard
JavaScript in the Objective-J loader. Those tradeoffs are real.
@sourceURL is not helpful if there is a syntax error in the file, and it
does not work in IE. It also makes development slower; speed is not just
about deployment speed. An extra tool step to get xdomain support adds
more steps, more things to know to deploy code.
The possibility of avoiding those problems with a function wrapper format
seems worth it, particularly since the amount of typing is not
that much different from the existing CommonJS module format, and it
allows setting the exported value in a natural way. While setting
exports may be contentious, the fact that it is implemented in more
than one implementation should indicate it is not a feature wanted only
by a vocal minority with no implementation to back up the talk.
> Does CommonJS even need to appease "mainstream front end" developers anyway?
You call it appeasement, I call it getting them involved in the
discussion, see what works best for them and if there is enough
overlap in goals to agree on something that is generally useful. I
think there is.
> I suspect confusing people with multiple module formats is going to hurt adoption more.
Right now there are three things someone needs to understand in
CommonJS to effectively use code in the browser:
- A module format
- A transport proposal (but which one?)
- An async require syntax proposal (but which one?)
It is confusing now. Compare that with one format proposal that would
define a module syntax, then say, "for optimizing transport/including
multiple modules in one file, you MAY place the ID of the module as
the first argument to require.def. If you do not want to define a
module, but just use some dependencies, use the same argument syntax
as require.def anonymous modules, but drop the .def property access".
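The single-format story sketched in the paragraph above can be made concrete with a toy loader (`req` below is a stand-in for `require`, not RequireJS; all module names are invented):

```javascript
var defs = {};

function loadOne(id) {
  var d = defs[id];
  if (!d.exports) {
    // Resolve this module's dependencies, then run its factory once.
    d.exports = d.factory.apply(null, d.deps.map(loadOne)) || {};
  }
  return d.exports;
}

// "Just use some dependencies": same argument shape, no id, no .def.
function req(deps, callback) {
  callback.apply(null, deps.map(loadOne));
}

// Define a module; the id would be optional in a one-module-per-file script.
req.def = function (id, deps, factory) {
  defs[id] = { deps: deps, factory: factory };
};

req.def("math", [], function () {
  return { double: function (n) { return n * 2; } };
});

var result;
req(["math"], function (math) { result = math.double(21); });
result; // 42
```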
That said, I am completely fine just calling the proposal from Kris
Zyp Transport/E. Just expect it to be pushed as a source format for
modules for some systems, and we can let implementation and adoption
sort it out.
James
>
>
> On 9/11/2010 5:42 AM, Tom Robinson wrote:
>> On Sep 11, 2010, at 2:24 AM, James Burke wrote:
>>
>>> On Fri, Sep 10, 2010 at 10:45 PM, Tom Robinson <tlrob...@gmail.com> wrote:
>>>> So we now have two different ways of defining CommonJS modules? There was a reason the transport specs were named "transport". This significantly complicates the CommonJS module story.
> But we never did have consensus on the transport API. We have always had
> multiple APIs for CommonJS module transport without a clear winning
> proposal. The basic underlying difference of approach between
> hand-coding an asynchronous module vs auto-generated wrappings of
> CommonJS has constantly divided such discussions. The proposal doesn't
> introduce anything new (well it altered the transport/C API slightly,
> making the first param optional and removed require.pause and
> require.resume), it simply calls Transport/C what it really is. If I got
> the name wrong, suggest something different (I asked for suggestions
> before creating the wiki page), but it ain't a "transport".
Sorry, I hadn't been following the details of the transport proposals closely. They were still being called "transport", so I assumed they were still intended to be used only as a transport mechanism (though RequireJS's abuse of the transport proposals bugged me from the beginning). It has come to my attention that that is not the case, and I'm not happy about the change in direction.
>
>> [snip]
>> xdomain and speed are solved by the *transport* format.
> Absolutely, but the transport methodology is to wrap a CommonJS module.
> Doing this by hand is a lot of extra work (AMD is vastly less effort),
> and doing it with a tool creates an extra tool dependency. Within Dojo
> (which wants to use CommonJS), requiring the use of a tool (for
> wrapping) is a non-starter.
And introducing a new first-class module format intended to be written by hand and distributed as source is not acceptable to me. It's crazy to have two incompatible module formats, and if popular projects like Dojo start using the transport format as their module format it's going to confuse the hell out of people coming to CommonJS who see a totally different format elsewhere (as if we don't already have enough non-standard/incompatible features between implementations!)
How is this significantly different from Dojo's current system, or Cappuccino's, or SproutCore's, or YUI's, or basically any other large-ish JavaScript framework? Which frameworks actually use function wrapper boilerplate around every file (in their source)?
To be clear, I have no problem with the idea of a standard module transport format, only with the idea that the transport format should be a first-class module format intended to be written by hand, distributed as source.
-tom
On 9/11/2010 7:11 AM, Tom Robinson wrote:
> On Sep 11, 2010, at 5:17 AM, Kris Zyp wrote:
>
>>
>> On 9/11/2010 5:42 AM, Tom Robinson wrote:
>>> On Sep 11, 2010, at 2:24 AM, James Burke wrote:
>>>
>>>> On Fri, Sep 10, 2010 at 10:45 PM, Tom Robinson <tlrob...@gmail.com> wrote:
>>>>> So we now have two different ways of defining CommonJS modules? There was a reason the transport specs were named "transport". This significantly complicates the CommonJS module story.
>> But we never did have consensus on the transport API. We have always had
>> multiple APIs for CommonJS module transport without a clear winning
>> proposal. The basic underlying difference of approach between
>> hand-coding an asynchronous module vs auto-generated wrappings of
>> CommonJS has constantly divided such discussions. The proposal doesn't
>> introduce anything new (well it altered the transport/C API slightly,
>> making the first param optional and removed require.pause and
>> require.resume), it simply calls Transport/C what it really is. If I got
>> the name wrong, suggest something different (I asked for suggestions
>> before creating the wiki page), but it ain't a "transport".
> Sorry, I hadn't been following the details of the transport proposals closely. They were still being called "transport", so I assumed they were still intended to be used only as a transport mechanism (though RequireJS's abuse of the transport proposals bugged me from the beginning). It has come to my attention that that is not the case, and I'm not happy about the change in direction.
>
>>> [snip]
>>> xdomain and speed are solved by the *transport* format.
>> Absolutely, but the transport methodology is to wrap a CommonJS module.
>> Doing this by hand is a lot of extra work (AMD is vastly less effort),
>> and doing it with a tool creates an extra tool dependency. Within Dojo
>> (which wants to use CommonJS), requiring the use of a tool (for
>> wrapping) is a non-starter.
> And introducing a new first-class module format intended to be written by hand and distributed as source is not acceptable to me. It's crazy to have two incompatible module formats, and if popular projects like Dojo start using the transport format as their module format it's going to confuse the hell out of people coming to CommonJS who see a totally different format elsewhere (as if we don't already have enough non-standard/incompatible features between implementations!)
I don't see these as directly competing; they serve different roles.
There is one and only one CommonJS module format, which defines a set of
guaranteed free variables that will be available for synchronously
requiring modules and exporting functionality within a CommonJS
controlled evaluation context. Asynchronous module definition API, on
the other hand, defines a way of registering a module from outside a
CommonJS controlled evaluation context. The expected context is no
different than the transport API, but it differs in the purpose of being
optimized for hand-coding.
> How is this significantly different than Dojo's current system, or Cappuccino's, or SproutCore's, or YUI's, or basically any other large-ish JavaScript framework. Which frameworks actually use function wrapper boilerplate around every file (in their source)?
Dojo has used synchronous dependency loading for years, and most of the
committers (who have used this for years) are pretty much in agreement
that it is wrong and don't want it anymore. Asynchronous dependency
loading is necessary, regardless of whether it is called a transport API or
async module definition API.
IIUC, YUI added dependency loading in version 3. And it appears that
they do indeed use a function wrapper around each module. If you look at
the source, you'll see a YUI.add(moduleId, factory, version,
options-with-dependency-list) around each module.
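For reference, the YUI.add shape just described looks like this in use. The registry and `use()` below are a toy stand-in for YUI's actual loader, which resolves the requires list from its dependency metadata before invoking factories:

```javascript
var modules = {};
var YUI = {
  // YUI 3 wraps each module file in: YUI.add(name, factory, version, details)
  add: function (name, factory, version, details) {
    modules[name] = {
      factory: factory,
      requires: (details && details.requires) || []
    };
  }
};

// Toy resolver: attach each required module's API to a shared Y object,
// dependencies first, then the requested module itself.
function use(name, Y) {
  Y = Y || {};
  modules[name].requires.forEach(function (dep) { use(dep, Y); });
  modules[name].factory(Y);
  return Y;
}

YUI.add("greet", function (Y) {
  Y.greet = function (who) { return "hello " + who; };
}, "3.0.0", { requires: [] });

YUI.add("app", function (Y) {
  Y.run = function () { return Y.greet("world"); };
}, "3.0.0", { requires: ["greet"] });

use("app").run(); // "hello world"
```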
> To be clear, I have no problem with the idea of a standard module transport format, only with the idea that the transport format should be a first-class module format intended to be written by hand, distributed as source.
So your suggestion is that a client side library use synchronous loading
+ eval-based module loading? Or use CommonJS raw format plus a tool for
transport wrapping? I know in the context of Dojo, the former is what
years of experience have led us away from. I suggested the latter on
the Dojo ML, and it was thoroughly shot down; Dojo won't limit its
availability to those who are willing to use a transport tool.
-- Thanks, Kris
On 9/11/2010 9:56 AM, James Burke wrote:
> [snip]
> That said, I am completely fine just calling the proposal from Kris
> Zyp Transport/E. Just expect it to be pushed as a source format for
> modules for some systems, and we can let implementation and adoption
> sort it out.
I thought the letters were for creating alternate proposals (someone
correct me if I am wrong). This is clearly just an upgrade to
Transport/C (optional first arg, and removal of pause and resume due to
our realization that they weren't necessary) and not meant to compete
with Transport/C, and thus should be draft 2 if it keeps the name. It is
the name (and not the changes) that seems to be contentious. We could
revert to calling it a "transport", but really? That is just a
lousy name for it (it is appropriate for Transport/D, but not
AMD/Transport/C). I don't see how we are helping the process by giving
it an inappropriate name, just so it has the same name as another API
within CommonJS. I'd love to hear other name suggestions (hopefully
better than "transport").
But perhaps the issue isn't so much the name change as the fact that
AMD/Transport/C exists, regardless of the name or minor updates...
--
Thanks,
Kris
> On 9/11/2010 7:11 AM, Tom Robinson wrote:
>>> [snip]
>>>
>> And introducing a new first-class module format intended to be written by hand and distributed as source is not acceptable to me. It's crazy to have two incompatible module formats, and if popular projects like Dojo start using the transport format as their module format it's going to confuse the hell out of people coming to CommonJS who see a totally different format elsewhere (as if we don't already have enough non-standard/incompatible features between implementations!)
> I don't see these as directly competing, they serve different roles.
> There is one and only one CommonJS module format, which defines a set of
> guaranteed free variables that will be available for synchronously
> requiring modules and exporting functionality within a CommonJS
> controlled evaluation context. Asynchronous module definition API, on
> the other hand, defines a way of registering a module from outside a
> CommonJS controlled evaluation context. The expected context is no
> different than the transport API, but it differs in the purpose of being
> optimized for hand-coding.
Ok, but if I can't share modules between the client and server then using CommonJS on the client loses some of its appeal.
>> How is this significantly different than Dojo's current system, or Cappuccino's, or SproutCore's, or YUI's, or basically any other large-ish JavaScript framework. Which frameworks actually use function wrapper boilerplate around every file (in their source)?
> Dojo has used synchronous dependency loading for years, and most of the
> committers (who have used this for years) are pretty much in agreement
> that it is wrong and don't want it anymore. Asynchronous dependency
> loading is necessary, regardless of whether it is called a transport API or
> async module definition API.
We can (and already do) support async loading of regular CommonJS modules without wrappers using async XHR. See my second paragraph below.
> IIUC, YUI added dependency loading in version 3. And it appears that
> they do indeed use a function wrapper around each module. If you look at
> the source, you'll see a YUI.add(moduleId, factory, version,
> options-with-dependency-list) around each module.
>
>
>> To be clear, I have no problem with the idea of a standard module transport format, only with the idea that the transport format should be a first-class module format intended to be written by hand, distributed as source.
>
> So your suggestion is that a client side library use synchronous loading
> + eval-based module loading? Or use CommonJS raw format plus a tool for
> transport wrapping? I know in the context of Dojo, the former is what
> years of experience have led us away from. I suggested the latter on
> the Dojo ML, and it was thoroughly shot down, Dojo won't limit its
> availability to those who are willing to use a transport tool.
I would advocate both, except use async loading instead of sync. Use the eval-based loader during development for ease of use, then optimize by bundling the modules in the transport format using a tool during deployment (when you'll most likely be running other build processes like minification anyway). This has always been our approach in Cappuccino. If you want to load libraries from a CDN, just make sure they're deployed in the transport format. IMO the loader should support mixing of both module formats side-by-side, so I can point my jQuery (or whatever) package to a CDN and my own modules locally during development.
Regarding async vs. sync loading, I thought it was well understood that you can do asynchronous module loading with regular CommonJS modules and no wrappers or tool. You asynchronously *download* all the modules and their transitive dependencies up front, then synchronously *execute* them. It adds a small amount of complexity to the loader, but not much. This is the reason we mandate strings passed to require() are literals, so that they're statically analyzable.
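That download-then-execute pattern can be sketched as follows. The in-memory `sources` table stands in for a server and the fetch step is faked synchronously; real code would XHR each file asynchronously and only start the execute phase once the transitive closure has arrived:

```javascript
// Fake "server": module id -> plain CommonJS source, no wrappers.
var sources = {
  main: 'var m = require("math"); exports.answer = m.double(21);',
  math: 'exports.double = function (n) { return n * 2; };'
};

// Literal ids passed to require() are statically analyzable, per the spec.
function findDeps(src) {
  var deps = [], re = /require\(\s*["']([^"']+)["']\s*\)/g, m;
  while ((m = re.exec(src))) deps.push(m[1]);
  return deps;
}

// Phase 1: "download" the module and its transitive dependencies up front.
function fetchAll(id, store) {
  if (!store[id]) {
    store[id] = sources[id]; // real code: async XHR, then recurse in callback
    findDeps(store[id]).forEach(function (d) { fetchAll(d, store); });
  }
  return store;
}

// Phase 2: execute synchronously; require() now always hits the local store.
function execute(id, store, cache) {
  cache = cache || {};
  if (!cache[id]) {
    var exports = (cache[id] = {});
    var require = function (dep) { return execute(dep, store, cache); };
    new Function("require", "exports", store[id])(require, exports);
  }
  return cache[id];
}

var store = fetchAll("main", {});
execute("main", store).answer; // 42
```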
The transport formats get you a few distinct things:
1) The benefits from loading in a script tag:
a) cross-domain loading
b) better debugging in debuggers that don't support @sourceURL
c) maybe faster parsing/execution (?)
2) Combining multiple modules into a single file.
3) Dependency information that you don't have to parse out yourself.
(note that I don't include async loading because that can also be done with XHR+eval)
So 1a, 1c, 2, and 3 are probably only important during deployment. 1b is the only one that might matter during development, if you're using a debugger that doesn't support @sourceURL, but that's a tradeoff I'm willing to make since the most common debuggers (by far, Firebug and WebKit) support it.
There are already multiple implementations of CommonJS modules for browsers using this async loading / sync executing pattern. I think Yabble, Tiki, and my modified Objective-J loader all do this, off the top of my head.
-tom
On 9/11/2010 12:52 PM, Tom Robinson wrote:
> On Sep 11, 2010, at 10:54 AM, Kris Zyp wrote:
>
>> On 9/11/2010 7:11 AM, Tom Robinson wrote:
>>>> [snip]
>>>>
>>> And introducing a new first-class module format intended to be written by hand and distributed as source is not acceptable to me. It's crazy to have two incompatible module formats, and if popular projects like Dojo start using the transport format as their module format it's going to confuse the hell out of people coming to CommonJS who see a totally different format elsewhere (as if we don't already have enough non-standard/incompatible features between implementations!)
>> I don't see these as directly competing, they serve different roles.
>> There is one and only one CommonJS module format, which defines a set of
>> guaranteed free variables that will be available for synchronously
>> requiring modules and exporting functionality within a CommonJS
>> controlled evaluation context. Asynchronous module definition API, on
>> the other hand, defines a way of registering a module from outside a
>> CommonJS controlled evaluation context. The expected context is no
>> different than the transport API, but it differs in the purpose of being
>> optimized for hand-coding.
> Ok, but if I can't share modules between the client and server then using CommonJS on the client loses some of its appeal.
Using the standard CommonJS module format with the transport API is
totally the way to go if you are doing SSJS, I completely agree (that's
why I wrote transporter). For something like Dojo, probably less than
10% (maybe less than 1%) are using SSJS. Forcing the 90% who aren't
using JS anywhere but in the browser to use a format that can't be
directly loaded in script tags without additional processing is
untenable in Dojo.
>>> How is this significantly different than Dojo's current system, or Cappuccino's, or SproutCore's, or YUI's, or basically any other large-ish JavaScript framework. Which frameworks actually use function wrapper boilerplate around every file (in their source)?
>> Dojo has used synchronous dependency loading for years, and most of the
>> committers (who have used this for years) are pretty much in agreement
>> that it is wrong and don't want it anymore. Asynchronous dependency
>> loading is necessary, regardless of whether it is called a transport API or
>> async module definition API.
> We can (and already do) support async loading of regular CommonJS modules without wrappers using async XHR. See my second paragraph below.
Right, Nodules does this (static analysis for the purpose of async
loading), but you have to control the entire loading process, and Dojo
does not. Once a sync require is made (from any script, inline or in a
file), all the transitive dependencies have to be synchronously loaded
as well.
>> IIUC, YUI added dependency loading in version 3. And it appears that
>> they do indeed use a function wrapper around each module. If you look at
>> the source, you'll see a YUI.add(moduleId, factory, version,
>> options-with-dependency-list) around each module.
>>
>>
>>> To be clear, I have no problem with the idea of a standard module transport format, only with the idea that the transport format should be a first-class module format intended to be written by hand, distributed as source.
>> So your suggestion is that a client side library use synchronous loading
>> + eval-based module loading? Or use CommonJS raw format plus a tool for
>> transport wrapping? I know in the context of Dojo, the former is what
>> years of experience have led us away from. I suggested the latter on
>> the Dojo ML, and it was thoroughly shot down, Dojo won't limit its
>> availability to those who are willing to use a transport tool.
> I would advocate both, except use async loading instead of sync. Use the eval-based loader during development for ease of use, then optimize by bundling the modules in the transport format using a tool during deployment (when you'll most likely be running other build processes like minification anyway). This has always been our approach in Cappuccino. If you want to load libraries from a CDN, just make sure they're deployed in the transport format. IMO the loader should support mixing of both module formats side-by-side, so I can point my jQuery (or whatever) package to a CDN and my own modules locally during development.
That is the way it currently works in Dojo.
> Regarding async vs. sync loading, I thought it was well understood that you can do asynchronous module loading with regular CommonJS modules and no wrappers or tool. You asynchronously *download* all the modules and their transitive dependencies up front, then synchronously *execute* them. It adds a small amount of complexity to the loader, but not much. This is the reason we mandate strings passed to require() are literals, so that they're statically analyzable.
>
> The transport formats get you a few distinct things:
>
> 1) The benefits from loading in a script tag:
> a) cross-domain loading
> b) better debugging in debuggers that don't support @sourceURL
> c) maybe faster parsing/execution (?)
> 2) Combining multiple modules into a single file.
> 3) Dependency information that you don't have to parse out yourself.
>
> (note that I don't include async loading because that can also be done with XHR+eval)
>
> So 1a, 1c, 2, and 3 are probably only important during deployment. 1b is the only one that might matter during development, if you're using a debugger that doesn't support @sourceURL, but that's a tradeoff I'm willing to make since the most common debuggers (by far, Firebug and WebKit) support it.
From what I've seen, and from what I remember of the test results, the
performance difference is dramatic. And this is enormously important to
me for development. Faster loading equals faster development. Also, even
with @sourceURL, stack traces don't work in any browser, AFAICT.
> There are already multiple implementations of CommonJS modules for browsers using this async loading / sync executing pattern. I think Yabble, Tiki, and my modified Objective-J loader all do this, off the top of my head.
Yep, and that's exactly what Dojo does as well (just not with the
CommonJS API, but all the same patterns). And after years of supporting
multiple core loaders, and suffering through slow eval based loading, we
are ready to be done with it. Anyway, I think we will definitely have an
auxiliary loader for plain CommonJS modules in Dojo, but having Dojo
modules be written in and designed for CommonJS has been discussed and
totally rejected.
Also, regarding implementations, I know Yabble, RequireJS, and
Nodules (and soon Dojo) all implement Transport/C, so it is a reasonably
well-implemented spec; I don't think we could just make it go away even
if we wanted to. We can give it a less confusing name though (having two
transport specs differentiated by a letter is horrible).
--
Thanks,
Kris
Sharing code between client and server is the major reason that I use
CommonJS modules. Couchapp and Transporter are great tools, but it is
hard to imagine Transporter becoming mainstream.
It would be nice to reduce the friction in sharing modules
between client and server. It would help our community greatly.
--
It seems to me we either are Common, which is to say we support JavaScript equally across all platforms, or we aren't. I have a feeling that most people's interest, at least from the browser side, is the prospect of not being locked into a particular project/vendor, and of more compatibility between common functionality.

Secondly, in much of the technical discussion around browser loading, it is often forgotten that there is little choice for loading in the browser; there are a few ways that have been fairly well tested and sorted for their pros and cons. On the backend, however, while it is perhaps a little inconvenient at times to conform to the browser, doing so usually has little or no cost in terms of performance. The converse is not true, as proven by experience at Dojo and elsewhere. Dozens of loaders exist and have been developed over years. This is not to say some of the browser solutions don't work, but none have proven to be more performant. Performance in the limited browser environment comes from every single place it can be grabbed, which I can attest to having spent the past couple of years mostly doing performance analysis of customers' browser apps.
A server platform running a set of CommonJS modules has full control over how its code gets loaded, and can do so by pre-processing (building), processing on the fly, or even processing on the fly and caching, without a lot of difficulty. This is impossible to do on the browser side without causing performance concerns. Someday that will hopefully change for the better; at the current time IE6/7 are still being supported by a large chunk of the cash-paying world. Requiring a specific server solution is often difficult or impossible at large organizations. They could of course implement that solution in their ecosystem on their own to the extent that one didn't exist (and in the beginning that's all of them), but that can cost as much as implementing the project they planned to use it for in the first place.
In short I fully support using servers to provide optimizations for loading in the browser, but that should not be required for a common system. The servers have the ability to adapt without penalty, while the browsers don't without increasing the requirements for a project.
Dustin
YUI has gone with a module format that uses a function with
dependencies specified outside it, and Dojo wants to do the same,
precisely for the points that have been brought up in this list. Two
different groups who have been living with building large,
componentized systems in the browser for many years are moving to that
pattern.
Both use a build system to help optimize code delivery in production
over the network, but those tools are not required for development. I
believe both have developer-time versions of the optimization tools:
the developer can run a step that optimizes once, as a part of code
deployment, then a specialized server is not needed.
So, feel free to think it is not important because you have personally
not seen a need for it, but know that there are people with real world
experience and implementations that feel otherwise. It is fine if this
group does not want to target those groups, but it will likely mean
the module format pushed by this group will not be so common,
particularly in the browser.
I think that is unfortunate, particularly since Kris Zyp's latest
proposal makes it really close to the existing module format design
goals (in particular, no module name in the file), and the difference
in typing is really not that great when you consider that exports is
not needed in the vast majority of cases (since the format allows
setting the exported value, which also leads to not needing "module"
as much), and "require" does not need to be typed for each dependency.
James
Here is the new asynchronous module definition proposal, based on the
ideas from Transport/C and recent changes discussed:
http://wiki.commonjs.org/wiki/Modules/AsynchronousDefinition
Feedback welcome, of course.
Also, I am not one of the 'same people' saying that CommonJS treats
the browser as 2nd class. It just plain does. It's obvious. I've
been using Zyp's Transporter and I am happy with it; however, a
solution like that will never achieve the mass adoption that I would
like to see. If it requires a server, I doubt it will ever be used by
a large number of people.
Same goes for couch. It's great. I love it. I use it in almost
every project I work on now. However, couch as a gateway to commonjs
is going to pull in only a small number of 'elite' people. We're
trying to find a way to get the masses. Yes I know couch is deployed
on Ubuntu, but regular programmers are not going to be programming
couchapps anytime soon.
~Daniel Friesen (Dantman, Nadir-Seen-Fire) [http://daniel.friesen.name]
On Sat, Sep 11, 2010 at 11:09 PM, Nathan Stott <nrs...@gmail.com> wrote:
It would be nice to reduce the friction between sharing modules
between client and server. It would help our community greatly.
Agreed. Let's talk requirements for a moment:
- server-side process to deliver
- no difference between server and browser modules
- works in a plain script tag
So far, I'm seeing "pick two", and I think various members want a different two. Frankly, I'm having a hard time figuring out how to make all three come true. Is XHR + eval really that slow?
-- Thanks, Kris
--
On 9/13/2010 6:28 PM, Eugene Lazutkin wrote:
> Inline.
>
> On Sep 13, 4:10 pm, Tom Robinson <tlrobin...@gmail.com> wrote:
> 1) If we're making the module ID optional, why not make the
> dependencies optional too? The dependencies can be extracted from the
> module text (obtained by toString-ing the function). If the goal is
> for this to be hand written it doesn't get any simpler than
> require.def(function(exports, require, module) { ... }). It would be
> rather annoying to keep all that duplicated information in sync by hand.
I like this idea. It would make wrapping ridiculously simple (no server
side analysis needed, purely static wrapping). Actually I had proposed
that we make the dependencies optional earlier in the original thread,
but hadn't considered the possibility of using static analysis. And it
seems like you could want it either way. I think it would be best if
static analysis were done only when just the factory is provided
(module id and dependencies omitted), which is more targeted at the
development stage, and skipped when the module id is provided, since
that usually corresponds to "built" files where the dependencies may
already be included in the file (and don't need to be listed since they
are known to be provided), and it is obviously faster to avoid static
analysis for production code.
I guess this really comes down to whether or not implementers are
willing to include static analysis code. For my implementation, Nodules,
this is already present, but for the browser loaders, RequireJS and
Yabble, this might be a hard sell. However, I looked at what it would
take to add this to RequireJS, and looks like it is about a 75-78 byte
addition (after minification, unless I did something wrong below), which
seems like a pretty light addition.
callback.toString().replace(/require\("([^"]*)"\)/g,function(t,m){deps.push(m);});
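To make that one-liner concrete, here is a slightly expanded sketch of the same toString-based dependency scan; the `extractDeps` name and the exact regex are illustrative assumptions of mine, not part of any spec or loader:

```javascript
// Hypothetical helper: scan a factory function's source for literal
// require("module-id") calls, as discussed above. Only the literal
// string form is matched, per the list's existing guidance that makes
// static analysis possible at all.
function extractDeps(factory) {
  var deps = [];
  factory.toString().replace(/require\((["'])([^"']*)\1\)/g,
    function (match, quote, id) {
      deps.push(id);
    });
  return deps;
}

var deps = extractDeps(function (require, exports) {
  var another = require("another");
  exports.test = function () {};
});
// deps is now ["another"]
```

As noted below in the thread, this breaks on engines whose Function.prototype.toString does not return real source, and on minified code, so it is a development-time convenience at best.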
>> A few comments in case we move forward with this...
>>
>> 1) If we're making the module ID optional, why not make the dependencies
>> optional too? The dependencies can be extracted from the module text
>> (obtained by toString-ing the function). If the goal is for this to be hand
>> written it doesn't get any simpler than require.def(function(exports,
>> require, module) { ... }). It would be rather annoying to keep all that
>> duplicated information in sync by hand.
> There are two problems with this extraction:
>
> a) Regexes are brittle and generally unreliable; they have problems
> with corner cases. The only good way to do it is to run a proper
> lexer. And still some dependencies would present difficulties (this
> can be solved with more stringent guidelines).
>
> b) Function.toString() does not always produce the source code for
> the function. For example, many mobile platforms do not return
> anything meaningful due to performance and size considerations.
So those are the limitations of omitting the dependencies. Certainly
doesn't seem like a problem to have this option as long as we indicate
the limitations. We have already pretty clearly stated on the list that
require statements must be in the form require("module-id") in order to
work on all module loaders, since it was expected that some would use
static analysis (and such loaders do indeed exist, Nodules and I believe
Yabble). Using toString'ed functions actually makes the regex even
simpler since the code is normalized. And if this feature is mainly
going to be used for development, mobile platforms could easily be a
non-issue for many projects. Obviously there are projects where this
would be an issue. And they wouldn't use this feature.
Anyway, implementation cost is probably the biggest barrier to this
being accepted.
--
Thanks,
Kris
Even if you develop specifically for the mobile environment, or want
your modules to be ready for the mobile environment as is, just
specify the dependencies using an optional parameter, and you are done
--- your module is universally accepted. I think it is a reasonable
trade-off.
Cheers,
Eugene
On Sep 14, 10:47 pm, Dean Landolt <d...@deanlandolt.com> wrote:
> On Tue, Sep 14, 2010 at 11:46 PM, Daniel Friesen
> <nadir.seen.f...@gmail.com> wrote:
> > Trying to not consider browsers second-class with a feature that considers
> > mobile second-class... doesn't that seem a little... well, maybe I shouldn't
> > pull out the H- word...
>
> Do you do your development on your phone? I didn't think so.
I am concerned that it will encourage committing modules to source
control in this form, then those modules get distributed, then someone
tries to use them on a mobile browser, and things break.
If the main concern is typing cost, the typing is not that different
between this function stringify form and TransportC/AsyncModule form,
particularly since both use a function wrapper: In Transport/C if I
refer to the dependency in the array, then create a variable for it in
the function args, it works out well, particularly since normally
"require", "module" and "exports" are not needed inside the function:
require.def(function(require, module, exports){
var foo = require('foo');
var bar = require('bar');
//use foo and bar
module.exports = function(){};
});
vs
require.def(['foo', 'bar'], function (foo, bar){
//use foo and bar
return function(){};
});
since the second form also explicitly allows setting exports in a
natural form, it means reduced typing vs. systems that do not allow
setting the export:
require('MyConstructor').MyConstructor
vs
require('MyConstructor')
So in the end I do not think the function stringifying drastically
improves the typing costs over what is possible in Transport/C
(sometimes it is worse), and just adds more edge cases to explain.
James
Even if it's not coded explicitly to support mobile, something you
program for the web may still work in a mobile browser (otherwise what
is the point of them trying to implement the same standards used on the
desktop), but if you go and use a feature like that, you explicitly
break it, whether it would have worked or not, for minimal gain.
In a case where the mobile platform gets treated as second-class, it's
not the developer who gets shot in the foot; it's the user who,
browsing on a platform of their choice, came to a site where the
developer didn't bother considering the possibility that someone might
actually want to look at what they made without sitting at a computer.
> On Tue, Sep 14, 2010 at 8:19 PM, Kris Zyp <kri...@gmail.com> wrote:
>> I guess this really comes down to whether or not implementers are
>> willing to include static analysis code. For my implementation, Nodules,
>> this is already present, but for the browser loaders, RequireJS and
>> Yabble, this might be a hard sell. However, I looked at what it would
>> take to add this to RequireJS, and looks like it is about a 75-78 byte
>> addition (after minification, unless I did something wrong below), which
>> seems like a pretty light addition.
>
> I am concerned that it will encourage committing modules to source
> control in this form, then those modules get distributed, then someone
> tries to use them on a mobile browser, and things break.
I am concerned that the transport format will encourage committing modules to source
control in the transport format, then those modules get distributed, then someone
tries to use them in an environment that doesn't support the transport format, and things break.
Which browsers don't support Function.prototype.toString anyway?
> If the main concern is typing cost, the typing is not that different
> between this function stringify form and TransportC/AsyncModule form,
> particularly since both use a function wrapper: In Transport/C if I
> refer to the dependency in the array, then create a variable for it in
> the function args, it works out well, particularly since normally
> "require", "module" and "exports" are not needed inside the function:
>
> require.def(function(require, module, exports){
> var foo = require('foo');
> var bar = require('bar');
> //use foo and bar
> module.exports = function(){};
> });
>
> vs
>
> require.def(['foo', 'bar'], function (foo, bar){
> //use foo and bar
> return function(){};
> });
I don't like this version anyway. It's not a natural extension of CommonJS modules. Let's stick with function(require, exports, module).
require.def(function(require, exports, module) { ... }) is just a normal CommonJS module wrapped in a function to facilitate asynchronous loading. It seems simple and elegant to me.
> since the second form also explicitly allows setting exports in a
> natural form, it means reduced typing vs. systems that do not allow
> setting the export:
>
> require('MyConstructor').MyConstructor
> vs
> require('MyConstructor')
>
> So in the end I do not think the function stringifying drastically
> improves the typing costs over what is possible in Transport/C
> (sometimes it is worse), and just adds more edge cases to explain.
>
> James
>
That happens today with regular CommonJS modules. Some platforms allow
setting exported values in different ways. Some have different ways to
get a path relative to the module. The hope is to try to work out
something that will be more portable in the future. I have adapters
that allow Transport/C use in Rhino and Node, and I am happy to work
with any env owner who wants to incorporate support directly in the
env. The hope is for interoperability.
However, an implementation that depends on a Function toString
implementation that is known to have problems is a more serious issue.
I have not come across one, so it would be good to have someone provide
a test and indicate which mobile browsers have the problem. It could be
that those mobile browsers could not handle a script loader that
depends on a functioning script onload event handler either. But it
makes me uneasy enough that I am not enthusiastic about supporting it.
James
So which implementations don't support Function.prototype.toString? It's in the spec:
15.3.4.2 Function.prototype.toString ( )
An implementation-dependent representation of the function is returned. This representation has the syntax of a FunctionDeclaration. Note in particular that the use and placement of white space, line terminators, and semicolons within the representation string is implementation-dependent.
On 9/15/2010 12:07 AM, Tom Robinson wrote:
> On Sep 14, 2010, at 10:49 PM, James Burke wrote:
>
>> On Tue, Sep 14, 2010 at 10:19 PM, Tom Robinson <tlrob...@gmail.com> wrote:
>>> I am concerned that the transport format will encourage committing modules to source
>>> control in the transport format, then those modules get distributed, then someone
>>> tries to use them in an environment that doesn't support the transport format, and things break.
>> That happens today with regular CommonJS modules. Some platforms allow
>> setting exported values in different ways. Some have different ways to
>> get a path relative to the module. The hope is to try to work out
>> something that will be more portable in the future. I have adapters
>> that allow Transport/C use in Rhino and Node, and I am happy to work
>> with any env owner who wants to incorporate support directly in the
>> env. The hope is for interoperability.
>>
>> However, an implementation that depends on a Function toString
>> implementation that is known to have problems is a more serious issue.
>> I have not come across one, so it would be good to have someone provide
>> a test and indicate which mobile browsers have the problem. It could be
>> that those mobile browsers could not handle a script loader that
>> depends on a functioning script onload event handler either. But it
>> makes me uneasy enough that I am not enthusiastic about supporting it.
> So which implementations don't support Function.prototype.toString?
I think it is Opera Mobile:
http://my.opera.com/hallvors/blog/show.dml/1665828
But of course there are already plenty of other potholes to be aware of
with this browser:
http://www.quirksmode.org/blog/archives/2010/07/operas_problems.html
This is pretty basic caveat emptor. If you want a module to work in a
browser that doesn't support function toString, then don't use the
feature that relies on it :P. Front-end engineers are well acquainted
with making these types of decisions all the time.
-- Thanks, Kris
Kris Kowal
Using Function.prototype.toString would also not compose well with minifiers.
Kris Kowal
I do prefer a unification of specs, and something that is easy to hand
code for use directly in the browser. Your observation of supporting a
name-less format in the browser and Tom's toString function meets that
well enough for me. I am willing to pursue the function toString
approach some more. I put up a test page here:
http://requirejs.org/temp/fts.html
It seemed to be fine for me in FF 2, 3.6 and 4/nightly, Safari 2 and
5, Opera 10.61, Chrome 6, and Mobile Safari on iOS 4.1. Fine as in
generating a string that could be parsed fairly easily.
It would be nice to see what other mobile browser on Android, Windows
mobile/phone and Blackberry do.
It sounds like Opera Mini would not work. Let's get a full list before
making a decision.
It would also be good to bring it up with the ES list, anyone willing
to do that? It would be good to know if they see problems with it.
There are some other edges to work out, but hopefully just soft edges.
For instance, there is still a require.def form that has the name and
dependencies burned in (for optimized delivery). I think those files
are the only ones that should be minified, as Tom mentioned. There are
some other things too, but they can be worked out after proving the
basic feasibility.
James
Got some results from a few folks, in particular thanks to Fil Maj at Nitobi:
OK:
- Android 2.2
- WebOS 1.4.5
- BlackBerry 5.0
- Fennec 1.1
- Windows Mobile 6.5 (keeps comments and has funky end of line
encoding issue, probably good enough)
- Windows Mobile 7
Not OK:
- BlackBerry 4.6 (content of function says "source code unavailable")
It would be good to clarify what version of Opera Mobile was bad. Was
it 9.5 or earlier? What device?
Also would be good to get a test of the native browsers for these platforms:
- Symbian S60, v3.2, v5.0
- MeeGo 1.1
- bada
I am just stealing the platform matrix from the jquerymobile site:
http://jquerymobile.com/gbs/.
I will ask on es-discuss next about the approach.
James
Would this be a reasonable addition to the proposal:
The second argument, the dependencies, is optional. If omitted, it
should default to ["require", "exports", "module"]. However, if the
factory function's arity (length property) is less than 3, then the
loader may choose to call the factory with only the number of arguments
corresponding to the function's arity.
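The defaulting rule above can be sketched in a few lines; the `resolveDeps` name is mine, purely for illustration, and a real loader would of course do much more:

```javascript
// Illustrative sketch of the proposed rule: when the dependency list
// is omitted, default to ["require", "exports", "module"], trimmed to
// the factory's declared arity (its length property).
function resolveDeps(factory, deps) {
  if (deps) {
    return deps;
  }
  // factory.length is the number of declared parameters; a loader MAY
  // pass only that many of the default arguments.
  return ["require", "exports", "module"].slice(0, factory.length);
}

var twoArgs = resolveDeps(function (require, exports) {}, null);
// twoArgs is ["require", "exports"]
```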
If both the first argument (module id) and the second argument
(dependencies) are omitted, the module loader MAY choose to scan the
factory function for dependencies in the form of require statements
(literally in the form of require("module-string")). In some situations
module loaders may choose not to scan for dependencies due to code size
limitations or lack of toString support on functions (Opera Mobile is
known to lack toString support for functions). If either the first or
second argument is present, the module loader SHOULD NOT scan for
dependencies within the factory function.
Also, a few other suggested changes:
Currently there is an "optional" extension allowing factory functions
to return a value to replace the exports. I think this should be
required functionality; it doesn't help interoperability much if it
might not be there (and we know it is possible to support, since the
return is inside a function). This doesn't negate supporting
module.exports = value or module.setExports(value).
RequireJS allows the last argument to be an object instead of a
function if you just want to return a known set of properties as the
exports. This seems really convenient and easy to specify and support.
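Both conveniences can be shown against a toy registry; this `def` is a drastically simplified mock of my own, not RequireJS's or anyone's actual implementation (it only passes exports to the factory and ignores real dependency loading):

```javascript
// Minimal mock loader illustrating the two conveniences discussed:
// (1) a factory returning a value to replace the exports, and
// (2) a plain object supplied in place of the factory.
var registry = {};

function def(id, deps, factory) {
  if (factory === undefined) {   // dependencies omitted
    factory = deps;
    deps = [];
  }
  if (typeof factory === "function") {
    var exports = {};
    var result = factory(exports);  // simplified calling convention
    // A returned value replaces the exports object.
    registry[id] = result !== undefined ? result : exports;
  } else {
    // An object literal is used directly as the module's exports.
    registry[id] = factory;
  }
}

def("add", [], function () {
  return function (a, b) { return a + b; };
});

def("constants", { pi: 3.14159 });
```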
Finally, I think we should include an informational note indicating that
require.def calls must be in the literal form of 'require.def(...)' in
order to work properly with static analysis tools (like build tools).
--
Thanks,
Kris
I would also mention that the order MUST always be "require",
"exports", "module", and the definition function should also order its
arguments accordingly: function(require, exports, module), using those
literal names for the function arguments.
Your other suggested changes are fine with me. I particularly like the
explicit support for returning from the definition function to allow
setting the module's exported value.
In a related note, I just finished a round of changes that support
anonymous modules and Tom's proposal in RequireJS:
http://tagneto.blogspot.com/2010/09/anonymous-module-support-in-requirejs.html
Some very simplified anonymous modules used for testing can be found in:
http://github.com/jrburke/requirejs/tree/master/tests/anon/
(not all modules in there are anonymous, part of the test is testing a
mix of anonymous and named modules)
James
--
Thanks,
Kris
On 9/30/2010 6:38 AM, Irakli Gozalishvili wrote:
> Hi,
>
> I'm kind of late for this show but few notes I have regarding current
> proposal:
>
> 1. I believe we stuck to the naming convention of camelCased full
> words, so I do think we should be consistent and have `require.define`
> rather than `require.def`.
>
> 2. I think it would be better to make id a mandatory argument rather
> than optional. It's not that much typing, and in scenarios of
> parallel module fetching it will avoid a lot of issues.
An optional id allows anonymous modules, which is a key principle in
CommonJS module design and is just awesome. Decoupling modules from
their namespace affords much greater portability.
>
> 3. I do think that dependencies should be third optional argument,
> since I suppose it will be rarely used.
The vast majority of modules I have seen in this format use it.
>
> 4. I think passing dependencies in as arguments is also a bad idea,
> which makes modules look different from normal CommonJS modules. Why
> not just guarantee the dependencies, so that requiring them
> synchronously in the body of the module would work?
The guarantee of the dependencies is determined from the dependency
list. The alternative is to do a toString on the factory function and
statically analyze it. This is viable and supported by RequireJS, but I
believe function toString()s are somewhat expensive and do not work in
all browsers (Opera Mobile and the PlayStation don't support them, from
what I understand). This can be a reasonable limitation for some apps
or dev environments, but not something we can lean on for everything.
>
> 5. Again, returning modules is a non-compliant change to currently
> existing modules and will only lead to fragmentation, so I'm
> against it.
There was no AMD API before; how can it be non-compliant? :) This is
compatible with wrapping existing CommonJS modules because there was no
previous specification for return, so plain CommonJS modules can't use
it. Either way we are in good shape.
>
> Other than that it all looks fine by me; actually, a module loader
> that I implemented a long time ago follows this spec pretty closely.
> Here it is in production, btw:
>
> http://jeditoolkit.com/taskhub/
>
Cool, are you planning on updating this for the current spec? It would
be awesome to have another impl.
-- Thanks, Kris
On 10/1/2010 7:39 PM, Eugene Lazutkin wrote:
> I need a clarification about "require" object. According to the spec
> the minimal module can look like this:
>
> require.def(function (require) {
> return {x: require("abc").x};
> });
>
> As you can see we have two "require" objects in this snippet:
>
> 1) (pseudo) global "require", which we use to invoke "def" on.
> 2) local "require" passed as a parameter.
>
> Are these "require" objects the same object? Is it possible to bypass
> the parameter completely? Like that:
>
> require.def(function () {
> return {x: require("abc").x};
> });
>
> Is it possible that they are substantially different with different
> interfaces?
It depends on the environment. A correct require() function generally
must be module-specific so that it can properly look up relative ids
(which are resolved relative to the current module). In environments
where the loading of the module can be controlled such that a
module-specific require() (and exports and module variables) can be
provided, these two require()s can be the same (this is the case in
Nodules). However, for modules that are loaded as browser scripts, the
loader can't sufficiently control the context to give scope-specific
variables. It is impossible (AFAICT) to implement a conformant
require() for browser scripts, and CommonJS doesn't give any direction
on how require() should behave in that case. Likewise, implementations
vary. I believe Yabble throws an error if the global require() is
called, and RequireJS came up with its own API for require() that's
kind of a cross between require.def and require.ensure. For these
browser module loaders, the global require() differs quite
significantly from the local require().
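To illustrate why the local require() must be module-specific, here is a sketch of relative-id resolution closed over the current module's id; the helper names (`resolveId`, `makeRequire`, `loadById`) are hypothetical, and the resolution rule follows the usual CommonJS "./" and "../" semantics:

```javascript
// Resolve a possibly-relative module id against the requiring
// module's own id, e.g. "./b" inside "pkg/a" becomes "pkg/b".
function resolveId(id, baseId) {
  if (id.charAt(0) !== ".") return id;  // top-level id, unchanged
  var parts = baseId.split("/").slice(0, -1).concat(id.split("/"));
  var out = [];
  parts.forEach(function (part) {
    if (part === ".") return;       // current directory: drop
    else if (part === "..") out.pop();  // parent directory: back up
    else out.push(part);
  });
  return out.join("/");
}

// Each module gets its own require(), closed over that module's id.
function makeRequire(moduleId, loadById) {
  return function require(id) {
    return loadById(resolveId(id, moduleId));
  };
}
```

This closure is exactly what a loader cannot arrange for a plain browser script, since the script body runs directly in global scope.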
--
Thanks,
Kris
On 10/2/2010 7:28 AM, Wes Garland wrote:
> > It is impossible (AFAICT) to implement conformant require()
> > for browser scripts
>
> You mean the Modules/1.0 require, as opposed to the Transports/C
> version of CommonJS?
>
> If so, I disagree with you, although I haven't actually tried.
>
> The technique I would use would be to wrap the module in a function,
> and pass in a module-specific require variable, which closes over a
> variable indicating that module's load path.
Yes, Modules/1.0 require. I don't think you can wrap a browser-loaded
script:
(function(require, exports, module){
<script src="my-module.js"></script>
})();
This doesn't seem to work in my browser ;).
--
Thanks,
Kris
--
On 10/9/2010 5:22 AM, Irakli Gozalishvili wrote:
> So I still don't have any reasonable answer on why we have to be
> inconsistent with naming conventions. Why is it
> def and not define?
The historical reason was that require.def was for transport/C (now AMD)
and require.defined was for transport/D. However, I think that AMD has
progressed to the point where we no longer need transport/D.
I actually think we should move from require.def to using "define" as
the global. Using "require" as the global variable doesn't make sense:
there is no CommonJS definition for a global named "require" (it is
completely different from the free variable defined by CommonJS
modules), and most applications that access a CommonJS global will be
defining modules much more than anything else. An ensure() call usually
just needs to be done once per app. So if we are going to rename
anything, I would suggest we rename require.def -> define and
require.ensure -> define.ensure.
--
Thanks,
Kris
I agree that "define" could be moved up to a global in <script> context.
"require.ensure" was designed for use in [[Module]] context and we
should not be adding free-variables to CommonJS modules. Since there's
obviously use for "require.ensure's" behavior in <script> context, you
might consider forking the specification, or soul-searching about
whether "require" is or isn't an adequate namespace for CommonJS in
<script> context in general. At the cost of some confusion, when
there's good reason to have the same behavior in both places, those
behaviors should have the same names to minimize the cost of
refactoring. "ensure" is useful in both contexts. "define" should
not be available in Module context because it does not make sense to
have a cycle in the layers of the architecture.
Kris Kowal
And require.package -> define.package (that would sure read a lot better).
--
Thanks,
Kris
True, that's a good point. So could we have a define.ensure for scripts
that are outside a CommonJS free-variable context, but have a
require.ensure that can be accessed from the require free variable
inside a CommonJS context (inside a define factory or a CommonJS
module), both aliasing the same behavior? That would seem reasonable to
me.
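The aliasing being proposed could look something like this sketch; every name here (`ensureImpl`, `makeModuleRequire`) is illustrative, and the actual loading logic is elided:

```javascript
// One shared implementation backs both aliases: define.ensure for
// <script> context and require.ensure for module context.
function ensureImpl(ids, callback) {
  // A real loader would fetch the ids asynchronously, then call back.
  callback();
}

function define(/* [id], [deps], factory */) {
  // module definition logic elided
}
define.ensure = ensureImpl;

function makeModuleRequire(moduleId) {
  function require(id) {
    // resolve id relative to moduleId and return its exports (elided)
  }
  require.ensure = ensureImpl;  // same behavior, module-side name
  return require;
}
```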
--
Thanks,
Kris
I am hesitant to make another global in <script> space for define.
<script> space is even more sensitive to globals than the usual places
CommonJS modules are run, since all the scripts loaded via <script>
share a global space. I prefer the global impact for <script> to be
very small. Right now, it is just "require".
I also do not think define.ensure makes sense -- it really means a
kind of "require" vs a "define" -- the script requires that these
other scripts/modules are available before running the function.
require.def vs require.define: what Kris Zyp said, and it's just less
typing, which is nice for something that will be hand-written. I
consider it like var or fs, although I understand the "use real words"
approach this group has tried to follow. Originally I was happy with
just:
require('optional module ID', [dependencies], function () { definition function }).
Which would also work for an ensure-like behavior as well as handling
async module definitions, with the require('string only') form being
the traditional require. I appreciate that it may be seen as too much
to do as part of one function signature.
If it made a *strong* difference to actual sync module proposal
implementors or people seriously considering implementation, then I
could put in an alias for require.define to require.def, but I will
likely still support require.def due to deployed code/existing users.
I did not follow "it does not make sense to have a cycle in the layers
of the architecture" from Kris Kowal's reply, so further explanation
of the cycle would help me.
James
On 10/9/2010 10:31 PM, James Burke wrote:
> On Sat, Oct 9, 2010 at 11:56 AM, Kris Zyp <kri...@gmail.com> wrote:
>> On 10/9/2010 12:49 PM, Kris Kowal wrote:
>>> "ensure" is useful in both contexts. "define" should
>>> not be available in Module context because it does not make sense to
>>> have a cycle in the layers of the architecture.
>> True, that's a good point. So could we have a define.ensure for scripts
>> that are outside a CommonJS free-variable context, but have a
>> require.ensure that can be accessed from the require free variable
>> inside a CommonJS context (inside a define factory or a CommonJS
>> module), both aliasing the same behavior? That would seem reasonable to me.
> I am hesitant to make another global in <script> space for define.
> <script> space is even more sensitive to globals than the usual places
> CommonJS modules are run, since all the scripts loaded via <script>
> share a global space. I prefer the global impact for <script> to be
> very small. Right now, it is just "require".
>
I am not suggesting adding "define" to "require"; I am suggesting
replacing/removing "require" with "define". There would still be only
one global defined. There's no reason we ever had "require" as a
global, and it is time to choose an appropriate global.
Of course, I understand that RequireJS can't eliminate "require", since
it is a core API for it. However, needing two globals is hardly going
to get much sympathy from me, especially considering that the whole
module system eliminates the need for developers to fight over globals
at all. Back in the ol' days, when we hung archaic namespaces off
globals, this was a bigger issue. Now we could have dozens of globals
without impacting the module system. EcmaScript itself defines dozens,
and the typical browser environment has hundreds of globals. Globals
used without any community coordination (libraries grabbing common
names as globals without namespacing) are the biggest source of
possible conflict, but this is completely the opposite: it is being
done as a community, and is exactly the *right* way to establish
globals. One or two (community-ascribed) globals should hardly be a
concern.
> I also do not think define.ensure makes sense -- it really means a
> kind of "require" vs a "define" -- the script requires that these
> other scripts/modules are available before running the function.
It's no less logical than require.def, whose purpose is to define a
module. This is all about frequency. If every single module uses the
module definition API, but only a call or two does the initial
require (require([]) or define.ensure()), it makes sense to give the
more frequently used form the most elegant, logical API. Or we could
have a "define" and a global "require" like RequireJS's.
But RequireJS doesn't even implement require.ensure, does it? It doesn't
seem like this would affect RequireJS.
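To make the frequency argument above concrete, here is a minimal, hypothetical sketch (a toy synchronous registry, not RequireJS or any spec) of the two call shapes: the definition form that appears in every module file versus the occasional bootstrap require([]):

```javascript
// Toy loader sketch (NOT RequireJS): a synchronous registry, purely to
// illustrate the two call shapes discussed above.
function demo() {
  var registry = {};
  function require(deps, callback) {
    if (typeof deps === "string") return registry[deps]; // classic require("id")
    return callback.apply(null, deps.map(function (id) { return registry[id]; }));
  }
  // The frequently used form: every module file defines itself.
  require.def = function (id, deps, factory) {
    registry[id] = factory.apply(null, deps.map(function (d) { return require(d); }));
  };

  require.def("app/model", [], function () {
    return { load: function () { return "data"; } };
  });
  require.def("app/view", ["app/model"], function (model) {
    return { render: function () { return model.load(); } };
  });

  // The rarely used form: one top-level bootstrap call.
  return require(["app/view"], function (view) {
    return view.render();
  });
}
console.log(demo()); // "data"
```

The point of the sketch is the ratio: require.def appears once per module, while the array-style bootstrap call appears only once or twice per page.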
> require.def vs require.define: what Kris Zyp said, and it is just less
> typing, which is nice for something that will be hand-written. I
> consider it like var or fs, although I understand the "use real words"
> approach this group has tried to follow. Originally I was happy with
> just:
>
> require('optional module ID', [dependencies], function (){ definition
> function }).
>
> Which would also work for an ensure-like behavior as well as handling
> async module definitions, with the require('string only') form being
> the traditional require. I appreciate that it may be seen as too much
> to do as part of one function signature.
>
> If it made a *strong* difference to actual sync module proposal
> implementors or people seriously considering implementation, then I
> could put in an alias for require.define to require.def, but I will
> likely still support require.def due to deployed code/existing users.
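The overloaded signature quoted above can be sketched as a single function that dispatches on its arguments. This is hypothetical (it was never specified this way); the registry and module names are stand-ins:

```javascript
// Hypothetical sketch (not a spec) of the overloaded signature quoted
// above: one `require` that acts as a traditional require, an
// ensure-style loader, and an async module definer based on arguments.
var results = {};
(function () {
  var registry = {};
  function require(a, b, c) {
    if (typeof a === "string" && b === undefined) {
      return registry[a];                              // require("id")
    }
    if (Array.isArray(a)) {
      return b.apply(null, a.map(function (id) {       // require([deps], callback)
        return require(id);
      }));
    }
    var deps = Array.isArray(b) ? b : [];              // require("id", [deps]?, factory)
    var factory = Array.isArray(b) ? c : b;
    registry[a] = factory.apply(null, deps.map(function (id) {
      return require(id);
    }));
  }

  require("math", function () { return { two: 2 }; });               // define, no deps
  require("double", ["math"], function (math) {                      // define with deps
    return { value: math.two * 2 };
  });
  require(["double"], function (double) {                            // ensure-style call
    results.ensured = double.value;
  });
  results.direct = require("double").value;                          // traditional require
})();
console.log(results.ensured, results.direct); // 4 4
```

The "too much in one signature" objection in the quote is visible here: the dispatch logic has to inspect argument types to decide which of three behaviors was intended.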
I think the main reason this is worth considering is that it affects
*every single* module (and the byte count thereof, which quickly adds up
for apps with many modules), so it is worth taking a hard look at what
is best here even if that might mean some API change pain.
--
Thanks,
Kris
My concern was more for browser code that gradually adopts the
require/define approach. I believe that will allow for quicker/broader
adoption if existing code can use this new functionality gradually, so
fewer globals are better. I also agree that if push came to shove,
then two vs one global is not that much of a change, but the proposed
renaming to me did not seem to give that much benefit to warrant
another global.
>> I also do not think define.ensure makes sense -- it really means a
>> kind of "require" vs a "define" -- the script requires that these
>> other scripts/modules are available before running the function.
>
> It's no less logical than require.def, whose purpose is to define a
> module. This is all about frequency. If every single module uses the
> module definition API, but only a call or two does the initial
> require (require([]) or define.ensure()), it makes sense to give the
> more frequently used form the most elegant, logical API. Or we could have
> a "define" and a global "require" like RequireJS's.
I believe require.def is more logical than define.ensure. require.def
implies you are defining something that obeys require's rules.
define.ensure does not define anything. But this is a bit of a
bikeshed.
>
> But RequireJS doesn't even implement require.ensure, does it? It doesn't
> seem like this would affect RequireJS.
I have not implemented it because no one has asked for it when using
RequireJS, and I think it is inferior to the require([]) syntax that
RequireJS provides. However, if it enabled widespread async module
adoption (vs RequireJS's require([])), then I would implement it, since
it is a subset of what require([]) can do now.
> I think the main reason this is worth considering is that it affects
> *every single* module (and the byte count thereof, which quickly adds up
> for apps with many modules), so it is worth taking a hard look at what
> is best here even if that might mean some API change pain.
I do not believe byte count matters for performance reasons (with
an optimized, minified delivery of modules with gzip, it will be
unnoticeable), but I appreciate wanting the cleanest API.
I am voting "no" though, I do not believe it buys that much for the
following reasons:
- inertia. Mostly my personal inertia. It does not feel broken to me.
- I also like the single global. It still makes sense to me that an
ensure stays on require, or even better, just uses require([]) as used
by RequireJS. That means the define name space just defines an async
module, and an async module implementation still needs to implement
something for "require".
- I like that require.def implies that it obeys require's rules.
define seems to float out in the ether.
If others feel strongly that require.def is just wrong, then I can
support a define that maps to require.def in RequireJS. But others
should speak up soon, as in the next couple of days. I want to move on
to implementations and adoption.
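The mapping described here is tiny; a hypothetical sketch, with a stand-in require.def that just records its arguments (a real loader would register the module):

```javascript
// Hypothetical sketch of aliasing a global `define` to require.def.
// The stand-in require.def below only records its arguments so the
// example is self-contained; it is not RequireJS.
var aliased = (function () {
  var require = {
    def: function (id, deps, factory) {
      return { id: id, deps: deps, factory: factory };
    }
  };
  // The alias under discussion: `define` simply delegates to require.def.
  var define = function () {
    return require.def.apply(require, arguments);
  };
  return define("foo", ["bar"], function (bar) { return bar; });
})();
console.log(aliased.id); // "foo"
```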
James
On 10/12/2010 11:22 AM, James Burke wrote:
> On Sun, Oct 10, 2010 at 3:05 PM, Kris Zyp <kri...@gmail.com> wrote:
>> Of course, I understand that RequireJS can't eliminate "require", since
>> it is a core API for it. However, needing to have two globals is hardly
>> going to get much sympathy from me, especially considering that the
>> whole module system eliminates the need for developers to be fighting
>> for globals at all. Back in the ol' days when we used archaic namespaces
>> hung off globals this was a bigger issue. Now we could have dozens of
> globals without impacting the module system. ECMAScript itself defines
> dozens and the typical browser environment has hundreds of globals. Plus,
> globals used without any community coordination (libraries grabbing
> common names as globals without namespacing) are the biggest source of
> possible conflict, but this is completely the opposite: it is totally
>> being done as a community, and is exactly the *right* way to establish
>> globals. One or two (community ascribed) globals should hardly be a concern.
> My concern was more for browser code that gradually adopts the
> require/define approach. I believe that will allow for quicker/broader
> adoption if existing code can use this new functionality gradually, so
> fewer globals are better. I also agree that if push came to shove,
> then two vs one global is not that much of a change, but the proposed
> renaming to me did not seem to give that much benefit to warrant
> another global.
>
Again, the suggestion is that CommonJS only define a single global not
two. In fact this should actually improve the conflict/pollution
potential of RequireJS. If RequireJS uses a community global (require or
define) and defines its own APIs on it, it is effectively polluting the
global/shared namespace with possible name conflicts just as much as if
it defines all its APIs directly on window. By putting modify(),
version, plugin(), isBrowser, baseUrl, etc. on a shared object
"require", you are injecting into a shared space. "require" can have
conflicts just like "window". Claiming that you are not using globals
because everything is under require is a silly name game; it is still shared. If
CommonJS only defines a global "define" then RequireJS can use a single
namespace under require (since "require" would be free then) and keep
its own API separate from the shared namespace. This is the right way to
avoid conflicts (which is the point of avoiding global pollution,
whether it be the "window" global or any other shared namespace).
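The conflict concern above fits in a few lines; a toy illustration (the object and property names are hypothetical):

```javascript
// Toy illustration of the shared-namespace conflict described above:
// two libraries extending the same shared object clobber each other,
// exactly as they would on `window`.
var shared = {};                                       // stands in for a shared global "require"
shared.plugin = function () { return "library A"; };   // library A's extension
shared.plugin = function () { return "library B"; };   // library B silently replaces it
console.log(shared.plugin()); // "library B"
```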
>>> I also do not think define.ensure makes sense -- it really means a
>>> kind of "require" vs a "define" -- the script requires that these
>>> other scripts/modules are available before running the function.
>> It's no less logical than require.def, whose purpose is to define a
>> module. This is all about frequency. If every single module uses the
>> module definition API, but only a call or two does the initial
>> require (require([]) or define.ensure()), it makes sense to give the
>> more frequently used form the most elegant, logical API. Or we could have
>> a "define" and a global "require" like RequireJS's.
>
> I believe require.def is more logical than define.ensure. require.def
> implies you are defining something that obeys require's rules.
> define.ensure does not define anything. But this is a bit of a
> bikeshed.
After thinking about this, there really is no reason CommonJS needs to
define a define.ensure; I recant that suggestion. The point of
require.ensure is to provide an interoperable way for modules to load
other modules on demand. The ensure() API is not needed for the initial
entry loading of modules; initiating the module loader is always
loader-specific anyway, so it is fine to use a loader-specific
API to do the initial launch of modules. Since ensure() is intended for
modules, it can continue to exist as a property of the "require" free
variable, without any need for a "require" global. RequireJS's use of
the require() global to launch modules makes perfect sense, and is a
fantastic loader-specific API without any need for definition
from CommonJS.
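As a sketch of that shape, ensure() hangs off the module-local require free variable. The stand-in loader below is synchronous purely for illustration; a real implementation would fetch missing scripts asynchronously before invoking the callback:

```javascript
// Stand-in loader sketch: require.ensure as a property of the
// module-local `require` free variable (synchronous for illustration).
function makeRequire(registry) {
  function require(id) { return registry[id]; }
  require.ensure = function (deps, callback) {
    // A real loader would download any missing modules first, then
    // invoke the callback once they are all available.
    callback(require);
  };
  return require;
}

var opened;
var moduleRequire = makeRequire({
  "chat/widget": { open: function () { return "widget open"; } }
});
// Inside a module body, load "chat/widget" only when it is needed:
moduleRequire.ensure(["chat/widget"], function (require) {
  opened = require("chat/widget").open();
});
console.log(opened); // "widget open"
```

Because ensure() is reached through the free variable each module already has, no second global is required for on-demand loading.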
>
>
>> But RequireJS doesn't even implement require.ensure, does it? It doesn't
>> seem like this would affect RequireJS.
> I have not implemented it because no one has asked for it when using
> RequireJS, and I think it is inferior to the require([]) syntax that
> RequireJS provides. However, if it enabled widespread async module
> adoption (vs RequireJS's require([])), then I would implement it, since
> it is a subset of what require([]) can do now.
Yeah, I could see there not being much demand for require.ensure.
>> I think the main reason this is worth considering is that it affects
>> *every single* module (and the byte count thereof, which quickly adds up
>> for apps with many modules), so it is worth taking a hard look at what
>> is best here even if that might mean some API change pain.
> I do not believe byte count matters for performance reasons (with
> an optimized, minified delivery of modules with gzip, it will be
> unnoticeable), but I appreciate wanting the cleanest API.
>
> I am voting "no" though, I do not believe it buys that much for the
> following reasons:
> - inertia. Mostly my personal inertia. It does not feel broken to me.
> - I also like the single global. It still makes sense to me that an
> ensure stays on require, or even better, just uses require([]) as used
> by RequireJS. That means the define name space just defines an async
> module, and an async module implementation still needs to implement
> something for "require".
> - I like that require.def implies that it obeys require's rules.
> define seems to float out in the ether.
>
> If others feel strongly that require.def is just wrong, then I can
> support a define that maps to require.def in RequireJS. But others
> should speak up soon, as in the next couple of days. I want to move on
> to implementations and adoption.
>
Yeah, I'd like to hear from others too.
--
Thanks,
Kris
This specific suggestion might be for just one global, but for the loader to
actually be useful, it will need a bootstrap call, require([]) or
require.ensure, so I still see it as needing two globals, since
hanging the bootstrap call off of define seems unlikely.
> In fact this should actually improve the conflict/pollution
> potential of RequireJS. If RequireJS uses a community global (require or
> define) and defines its own APIs on it, it is effectively polluting the
> global/shared namespace with possible name conflicts just as much as if
> it defines all its APIs directly on window. By putting modify(),
> version, plugin(), isBrowser, baseUrl, etc. on a shared object
> "require", you are injecting into a shared space. "require" can have
> conflicts just like "window". Claiming that you are not using globals
> because everything is under require is a silly name game; it is still shared. If
> CommonJS only defines a global "define" then RequireJS can use a single
> namespace under require (since "require" would be free then) and keep
> its own API separate from the shared namespace. This is the right way to
> avoid conflicts (which is the point of avoiding global pollution,
> whether it be the "window" global or any other shared namespace).
Implementations of a spec usually provide some extensions. If you want
the code to be most portable, do not use the extensions, like using
__dirname and __filename in node.
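For instance, portable code can feature-test a Node-only extension instead of assuming it is present (a small sketch):

```javascript
// Sketch: feature-testing a Node-specific extension before using it.
// __dirname exists per-module in Node but is not part of the CommonJS
// Modules spec, so other implementations may not provide it.
var moduleDir = (typeof __dirname !== "undefined") ? __dirname : null;
console.log(typeof moduleDir); // "string" when run as a Node CommonJS module
```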
My main concern was with existing code that might have non-compliant
globals and wants to gradually upgrade to the new API. Having two vs
one possibly conflicting globals makes conflict more likely, but it is
not a reason on its own to discount considering a new global.
James
I care about reducing hazards to existing code in the wild. My
potentially poor choice of how I implemented require seems to be
orthogonal to that point. Existing code in the wild could have a
define or require that are most likely *not* CommonJS-compatible, and
having two globals that could potentially conflict with existing code
is more troublesome than one. As mentioned, that point is not a
complete reason for killing the proposal, but it is a tradeoff with the
proposal.
And to be clear, I think there will be two globals, because to allow
interop, the "ensure" or some bootstrap method (whatever the name)
should be specified, and that seems unlikely/unnatural to be off of
define.
James
I think we are least likely to have problems if we use "require" as the
<script> context for all CommonJS stuff. I agree with Kris Zyp that
care should be taken not to drive up consumption of the "require"
name space in implementations. If you do choose to extend the require
name space, do so carefully: it's delicate, and we're pretty harsh to
implementations here if the right way forward means leaving
cruft behind.
I'm also fine with ditching the old require.define proposal, marking
it with a big X, and usurping its name.
Kris Kowal