In response to Kris Kowal updating his Transport/B proposal, I pinged
him off-list about it, and how we might bridge the differences with
Transport/C. He suggested I move the conversation back on to the list,
so here is my summary of two bigger issues. I trust Kris will update
me where I got it wrong. This is a summary and also my response to
some of the points Kris raised, so my perspective is probably unfairly
weighted in this message.
While there are some surface similarities between the two proposals
(both have module IDs and function wrappers), there are two
fundamental differences that may be hard to bridge. There are other,
smaller differences as well, but they can probably be settled if the
bigger two are resolved in some way.
1) Free variables and module namespaces
----------------------------
Kris prefers one object to pass into the module factory function that
has properties that can be converted into the module spec variables
and allow the possibility of injecting other free variables:
require.def("ID", {
"requires": [MODULE_IDS],
factory: function() {
//Server transform injects this code section
var exports = arguments[0].exports,
require = arguments[0].require,
module = arguments[0].module;
//Any other types of free variables could be injected here, for
//example QUnit functions
var assert = arguments[0].assert;
//Rest of the regular module text goes here.
});
I prefer naming each dependency, even the normal free variables in the
module spec. While I do not get the desire for free variable injection
(at least not yet), it could be supported by a special "injected"
name/argument:
require.def("ID", ["require", "exports", "module", "injected",
...MODULE_IDS...], function(require, exports, module, injected) {
//Possible free injection here by server transform
var assert = injected.assert;
//Regular module text goes here
}
I believe Kris does not like my preferred syntax because things like
"require", "exports", "module" are not really IDs for real modules. He
believes the namespace for module IDs should be an orthogonal concern
to the naming used for module free variables (require, exports,
module).
However, by allowing require, exports and module as reserved module ID
names, this will allow easier hand-authoring of modules in the
transport format, mostly for the browser use case. Without the
reserved names, some other scheme needs to be invented for the
developer to define getting those objects for the hand-authored case.
Repeating the arguments[0] boilerplate seems excessive.
I believe it is OK to treat "require", "exports" and "module" as
reserved module names because:
- a small number of names are involved.
- they should not be allowed for developers to use for their own
modules, just to avoid confusion with the module free variable names.
It will be common for modules to have one-word names, and most
developers, when creating a local var for a module, will use the same
var name as the module ID. Not allowing those free variables as module
IDs will avoid confusion.
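To make the reservation concrete, a loader could enforce it with a trivial check. This is a sketch only; `RESERVED_IDS` and `assertValidModuleId` are hypothetical names, not part of any proposal:

```javascript
// Hypothetical sketch: reject the three reserved free-variable names
// when they are used as module IDs.
var RESERVED_IDS = { "require": true, "exports": true, "module": true };

function assertValidModuleId(id) {
    if (RESERVED_IDS.hasOwnProperty(id)) {
        throw new Error("'" + id + "' is a reserved name, not a module ID");
    }
    return id;
}
```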
On injected free variables:
It has been stated that this is desirable for cases where a module
uses code from something like QUnit or Jake, to allow for easier code
reuse?
I do not see the usefulness of this, but I fully appreciate I may be
missing something.
It seems like the desire for free variable injection is to allow
reusing existing JS files with stricter module code. This could work
in the browser because those existing modules could just create
globals. I believe that should be the path to supporting those types
of files in other CommonJS environments. Allow for some files to
create globals. If you want a purer, more secure system, then those
modules should be rewritten to use the module idiom where it attaches
properties to an exported module value.
Injecting free variables also does not seem to save anything on
dependency management. The module still needs to somehow indicate it
needs those free variables defined. I would assume that means doing
something like a require("qunit") to do that. If supporting globals in
the system is bad, allowing evaluation of the qunit.js and collecting
its globals and using them as the exported object seems like that
would work too -- return that generated export value as the result of
the require("qunit") call. But then I do not make those types of
loaders, so I could be wrong.
So ideally, we could remove the desire for injected variables from the
concerns of a transport format. But perhaps I am missing a case where
injecting free variables is really useful vs. just using globals or
doing an automatic global-to-export property conversion.
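To illustrate what that automatic global-to-export conversion might look like, here is a minimal sketch, assuming a sloppy-mode environment where indirect eval creates enumerable globals; `exportGlobals` is a hypothetical name, not an API from any of the proposals:

```javascript
// Hypothetical sketch: evaluate a legacy, global-creating script and
// collect whatever new globals it created as the module's exports.
function exportGlobals(src) {
    var before = {};
    Object.keys(globalThis).forEach(function (k) { before[k] = true; });
    (0, eval)(src);  // indirect eval: runs src in the global scope
    var collected = {};
    Object.keys(globalThis).forEach(function (k) {
        if (!before[k]) { collected[k] = globalThis[k]; }
    });
    return collected;
}
```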
2) Hand authoring
-------------------------
Kris believes that the transport format should not be hand-authored. I
want something that is easy to hand-author because I want to use that
as the default module format when writing modules for the browser,
without having to do XHR or server transform contortions. I believe it
is important to give browser developers today a way to get into the
CommonJS modules, and if that is by the transport format, then so be
it.
I believe Kris feels that encouraging people to hand-author a format
that includes module IDs is bad because they are not as
portable/reusable and may have security implications.
To me, any transport format will have IDs in it, and anything emitting
a transport format will have the same levels of reusability/security
concerns as compared to a human writing in the transport format.
There are more chances for errors when a human codes it, but given
that multiple implementations may output auto-generated transport
formatted files, the consumer of transport formatted files should be
doing conformance checking anyway. And if module IDs/remapping is a
concern, it is a concern whether the file was hand-authored or not.
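As a sketch of the kind of conformance checking a consumer might do (the name `checkDef` is hypothetical, and a real loader would validate more, such as module ID syntax):

```javascript
// Hypothetical conformance check on a Transport-style definition.
function checkDef(id, dependencies, factory) {
    if (typeof id !== "string") {
        throw new TypeError("module id must be a string");
    }
    if (!Array.isArray(dependencies)) {
        throw new TypeError("dependencies must be an array of module ids");
    }
    if (typeof factory !== "function") {
        throw new TypeError("factory must be a function");
    }
}
```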
Hopefully if we can resolve these first two issues we can go on to
resolving the other, not as big, differences in the transport specs.
If we cannot resolve these issues, that is fine too, at least we will
know the two proposals cannot be bridged.
James
I would think we could preserve a clean module id namespace and provide
module injection along with common free variable injection by including
separate mechanisms for defining dependencies and variable injection,
where both are optional. If dependencies are omitted, the injections
imply the dependencies (like Transport/C), and if injections are
omitted, the standard set of module free variables is included (like
Transport/B).
I believe these issues can be bridged. This latter concern has a less
technically precise definition. Clearly Transport B and C can both
be auto-generated as well as hand-coded, so this is more about how
easy/concise each approach is. IMO, we should first optimize for brevity
of auto-generated code that wraps CommonJS modules (to minimize machine
generated bytes), and second allow for readable hand-coding of async
modules that make use of convenient module injection, but I don't
believe these are mutually exclusive.
--
Thanks,
Kris
Specifically this builds on B and C by:
* Dependencies can be defined without the module id namespace being
conflated with the free variable namespace.
* Auto-generated wrapping of CommonJS modules can be achieved with even
greater efficiency (fewer bytes) than Transport/B.
* Hand-coded module definitions can be written with modules injected
as variables, and the format's usage of object properties is more
explicit (and consequently more readable, at least IMO) than Transport/C.
Thanks,
Kris
Would it be bad practice to inject variables into a factory by relying
on factory.toString?
var inject = function (/*String[]*/ vars, /*Function*/ factory) {
    // The vars array stringifies to a comma-separated parameter list, and
    // the factory's source is re-evaluated with those names as parameters.
    return new Function(vars, "(" + factory + ")()");
};
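For what it's worth, a usage sketch of that `inject` (repeated here so the example is self-contained) also shows its main caveat: because the factory's source is re-evaluated, it loses its original closure, so anything it references must be injected or global:

```javascript
var inject = function (/*String[]*/ vars, /*Function*/ factory) {
    return new Function(vars, "(" + factory + ")()");
};

// The re-evaluated factory cannot see its original closure, so shared
// state has to live on the global object.
globalThis.log = [];

var wrapped = inject(["assert"], function () {
    assert(2 + 2 === 4);  // `assert` is a free variable, bound by injection
    log.push("ran");      // resolves to globalThis.log, not a closure var
});

// Call the wrapper, supplying a value for the injected `assert` name.
wrapped(function (cond) {
    if (!cond) { throw new Error("assertion failed"); }
});
```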
If this works where modules transport is needed (as it does in a
browser), then it seems to me like require.def could be defined to
inject arbitrary names into a factory - without the factory signature
needing to contain those names.
If this is acceptable, the loader could always inject factories with
require, exports, and module, and could include optional injections even
if the factory took zero arguments (and in fact it should be defined
with no arguments).
require.def(id, factory, dependencies, injections);
Tim
Transport/D to me does not solve Kris Kowal's concern about conflating
the module format's free variable names with the IDs used for modules.
While Transport/D avoids the problem in the dependency array, it still
exists in the injection array. "require", "module" and "exports" end
up not being names of modules you can inject into the factory function
because they will be reserved for the module spec's names.
I think the conflation issue is solvable like so, a sort of Transport/B2:
require.def("ID", [MODULE_ID_DEPENDENCIES], function (require, exports, module) {
});
Where MODULE_ID_DEPENDENCIES *cannot* contain "require", "exports" and
"module", as those are not real modules, and every factory function
will assume that require, exports and module are *always* passed in as
the first three args.
But it just does not work for me for a hand-coded format. There is a
symmetry to having all the strings in the dependency array map
directly to the list of arguments to the factory function. A symmetry
that I think is required to make the hand-coded approach work well,
and work for viewing by humans who may not know the complete spec. It
is very clear on inspection what is happening when the two lists
match.
I still do not see the problem with reserving the "require", "exports"
and "module" names as names that cannot be used for module IDs. But if
the people on the list think that is a bad idea and hand-coding is not
a goal to achieve, then something like the B2 version above might work
well and be fairly concise. I will not be able to use it, but that is
fine in the grand scheme of things.
James
While Function() can work in the browser, it is effectively eval(),
and it makes debugging and acceptance in all environments that have
restrictions on eval/Function() hard. I prefer not to use it.
James
On 3/16/2010 11:28 PM, James Burke wrote:
> On Tue, Mar 16, 2010 at 8:47 PM, Kris Zyp <kri...@gmail.com> wrote:
>
>> I attempted to bridge the gap combining aspects (and text) from
>> Transport/B and Transport/C:
>> http://wiki.commonjs.org/wiki/Modules/Transport/D
>>
>> Specifically this builds on B and C by:
>> * Dependencies can be defined without module id namespace conflating
>> with the free variable namespace.
>> * Auto-generated wrapping of CommonJS modules can be achieved with even
>> greater efficiency (fewer bytes) than Transport/B
>> * Hand-coded module definitions can be written with modules as injected
>> as variables, and the format's usage of object properties is more
>> explicit (and consequently readability, at least IMO) than Transport/C.
>>
> Transport/D to me does not solve Kris Kowal's concern about conflating
> the module format's free variable names with the IDs used for modules.
> While Transport/D avoids the problem in the dependency array, it still
> exists in the injection array. "require", "module" and "exports" end
> up not being names of modules you can inject into the factory function
> because they will be reserved for the module spec's names.
>
Maybe I shouldn't speak for him, but I don't think Kris Kowal ever wants
to use the "injects" array. The point is that auto-generated module
wrappers can stick to the dependencies array and never need to worry
about the namespace conflation/conflicts. The "injects" array exists for
hand-coding, when you can manually avoid conflicts.
> I think conflation issue is solvable like so, a sort of Transport/B2:
>
> require.def("ID", [MODULE_ID_DEPENDENCIES], function(require, export, module) {
> });
>
> Where MODULE_ID_DEPENDENCIES *cannot* contain "require", "export" and
> "module", as those are not real modules, and every factory function
> will assume that require, export and module are *always* passed in as
> the first three args.
>
> But it just does not work for me for a hand-coded format. There is a
> symmetry to having all the strings in the dependency array map
> directly to the list of arguments to the factory function. A symmetry
> that I think is required to make the hand-coded approach work well,
> and work for viewing by humans who may not know the complete spec. It
> is very clear on inspection what is happening when the the two lists
> match.
>
How is defining a module using "injects" in the module descriptor in
Transport/D any harder than Transport/C's format? The "injects"
mechanism was basically copied directly from Transport/C; it is just a
little more verbose since it is a named property.
> I still do not see the problem with reserving the "require" "export"
> and "module" names as names that cannot be used for module IDs. But if
> the people on the list think that is a bad idea and hand-coding is not
> goal to achieve, then something like the B2 version above might work
> well and be fairly concise. I will not be able to use it, but that is
> fine in the grand scheme of things.
>
>
FWIW, I am not opposed to that, I don't see that as a big problem.
However, I do really prefer multiple modules per require.def with a
single dependency array in terms of efficiency. And if you combine
this with module injection, that seems to beg for a per-module
injection list.
--
Thanks,
Kris
> Maybe I shouldn't speak for him, but I don't think Kris Kowal ever wants
> to use the "injects" array. The point is that auto-generated module
> wrappers can stick to the dependencies array and never need to worry
> about the namespace conflation/conflicts. The "injects" array exists for
> hand-coding, when you can manually avoid conflicts.
Given that it is trivial to construct one from the other, I am not
actually against having an "injects" array that maps the names of the
injection object to the values of positional arguments. I am merely
against conflating the injection namespace with the required-modules
namespace. I wouldn't even mind for the injection array to be
["require", "exports", "module"] by default if not specified. I would
like it to be possible to specify.
> FWIW, I am not opposed to that, I don't see that as a big problem.
> However, I do really prefer the multiple module per require.def with a
> single dependency array in terms of efficiency though. And if you
> combine this with module injection, this seems to be beg for a per
> module injection list.
I removed the multiple-modules per require.def call in response to a
point Tobie Langel brought up on the Twitters: IE doesn't provide
hasOwnProperty, so reliably enumerating the key value pairs is
non-trivial.
So, given the notions above, I would be fine with this form:
require.def("alpha", ["beta", "gamma"], function (require, exports, module) {
var beta = require("beta");
var gamma = require("gamma");
});
Since it is straightforward to normalize this behaviorally to something like:
require.def({
"id": "alpha",
"requires": ["beta", "gamma"],
"injects": ["require", "exports", "module"],
"factory": function (require, exports, module) {
var beta = require("beta");
var gamma = require("gamma");
}
});
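That normalization could be sketched as follows (`normalizeDef` is a hypothetical helper name; the `injects` default is the one stated for Transport/D):

```javascript
// Hypothetical sketch: normalize the positional require.def form into
// the descriptor form, defaulting "injects" when unspecified.
function normalizeDef(id, dependencies, factory) {
    return {
        "id": id,
        "requires": dependencies,
        "injects": ["require", "exports", "module"],  // the default set
        "factory": factory
    };
}
```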
That leaves the last vestige of verbosity to be the ostensible
redundancy of explicating the requires list, and then calling require.
I believe Kris Zyp said that this could be further condensed by
having a collection of modules and treating them as implicitly and
mutually dependent, and providing an array of their combined external
requirements. I think that approach has potential, if a way to
enumerate the modules of a single declaration can be found.
Kris Kowal
On 3/17/2010 12:34 AM, Kris Kowal wrote:
> On Tue, Mar 16, 2010 at 11:02 PM, Kris Zyp <kri...@gmail.com> wrote:
>
>
>> Maybe I shouldn't speak for him, but I don't think Kris Kowal ever wants
>> to use the "injects" array. The point is that auto-generated module
>> wrappers can stick to the dependencies array and never need to worry
>> about the namespace conflation/conflicts. The "injects" array exists for
>> hand-coding, when you can manually avoid conflicts.
>>
> Given that it is trivial to construct one from the other, I am not
> actually against having an "injects" array that maps the names of the
> injection object to the values of positional arguments. I am merely
> against conflating the injection namespace with the required modules
> name space.
If one can so easily avoid using "injects" (and it wouldn't likely be
used in auto-generated module wrapping), why is this problematic?
> I wouldn't even mind for the injection array to be
> ["require", "exports", "module"] by default if not specified. I would
> like it to be possible to specify.
>
>
FWIW, I did specify defaulting to ["require", "exports", "module"] if
"injects" is not specified.
>> FWIW, I am not opposed to that, I don't see that as a big problem.
>> However, I do really prefer the multiple module per require.def with a
>> single dependency array in terms of efficiency though. And if you
>> combine this with module injection, this seems to be beg for a per
>> module injection list.
>>
> I removed the multiple-modules per require.def call in response to a
> point Tobie Langel brought up on the Twitters: IE doesn't provide
> hasOwnProperty, so reliably enumerating the key value pairs is
> non-trivial.
>
Object.prototype.hasOwnProperty has been in IE since 5.5. I don't think
anyone is seriously trying to run CommonJS stuff on anything earlier
than that, and if they are they are probably going to run into far more
serious problems than the lack of hasOwnProperty. I'd like to keep the
object name/value format for defining modules unless you have another
objection.
--
Thanks,
Kris
It seems like extra hoops to go through to save three reserved names.
It feels like trying to accommodate two different philosophies in one
spec. I appreciate the effort to bring them together, but I feel like
it adds more moving parts to the format vs just deciding what is more
valuable (either restrict "require", "module" and "exports" from being
module IDs or allow them), and go from there.
It is hard for me to see the problems with saying "require", "module"
and "exports" cannot be module IDs, so that is probably clouding my
vision a bit.
>> I think conflation issue is solvable like so, a sort of Transport/B2:
>>
>> require.def("ID", [MODULE_ID_DEPENDENCIES], function(require, export, module) {
>> });
>>
> How is module defined using "injects" in the module descriptor in
> Transport/D any harder to code than Transport/C's format? The "injects"
> mechanism was basically copied directly from Transport/C, it is just a
> little more verbose since it is a named property.
I keep getting confused because I see the dependencies property and
the injects property in Transport/D and think that I have to do both
to conform. However, you are saying you could do one or the other or
both. I feel like it is simpler if just one way is chosen.
Even just using injects as allowed in Transport/D, it is more verbose,
both in the need for an injects property and a factory property to be
spelled out, and in the extra level of braces and indentation. It is
definitely subjective, and I may just be too used to what I have
already, but that feels like too much to me for a hand-authored
format. I have been surprised that the extra indentation matters more
than I would have thought, particularly if the JSLint-style 4 spaces
is used. Less fits on a line.
>> I still do not see the problem with reserving the "require" "export"
>> and "module" names as names that cannot be used for module IDs. But if
>> the people on the list think that is a bad idea and hand-coding is not
>> goal to achieve, then something like the B2 version above might work
>> well and be fairly concise. I will not be able to use it, but that is
>> fine in the grand scheme of things.
>>
>>
> FWIW, I am not opposed to that, I don't see that as a big problem.
> However, I do really prefer the multiple module per require.def with a
> single dependency array in terms of efficiency though. And if you
> combine this with module injection, this seems to be beg for a per
> module injection list.
I read your use of "efficiency" as file size savings; maybe you mean
something else? If file size savings, for hand coding I'll be opting
to use the injection array, which is per module, so I do not see much
file savings for me in that route.
You could get multiple modules in one require.def call by allowing it
to take an array of Transport/B2-like argument lists.
James
It is for..in that causes trouble in some cases. In IE (through 8),
toString will not be enumerated. Older Safari had trouble of a
different kind.
javascript:for(var k in {foo: true, toString: true}) alert(k)
No module id of toString for IE.
What is the reason for making the dependent modules be free variables automatically anyway? This doesn't happen on the server, so why should it happen in this environment? Thinking about this further, why are the deps even listed at this stage? What need mandates this?
As far as I understand these proposals, either you are hand-writing it and know what modules are required, and thus also include them somehow, or it's server-generated and it generates them.
I'm not seeing the point of the [MODULE_ID_DEPENDENCIES] at the moment.
>
>>> I still do not see the problem with reserving the "require" "export"
>>> and "module" names as names that cannot be used for module IDs. But if
>>> the people on the list think that is a bad idea and hand-coding is not
>>> goal to achieve, then something like the B2 version above might work
>>> well and be fairly concise. I will not be able to use it, but that is
>>> fine in the grand scheme of things.
>>>
>>>
>> FWIW, I am not opposed to that, I don't see that as a big problem.
>> However, I do really prefer the multiple module per require.def with a
>> single dependency array in terms of efficiency though. And if you
>> combine this with module injection, this seems to be beg for a per
>> module injection list.
>
> I read your use of "efficiency" as file size savings, maybe you mean
> something else? If file size/savings, for hand coding, I'll be opting
> to use the injection array which is per module, so I do not see much
> files savings for me in that route.
>
> You could get multiple calls in one require.def call by allowing to
> take an array of Transport/B2-like argument lists.
>
> James
The .pause and .resume interface really quite bamboozled me when I first saw it - my first instinct was 'why can't I just pass an object with keys as the module ids?'
-ash
Right, so your loader manually tests the Object.prototype dont-enum
properties to see if they are there, that's not difficult. Or you note
that modules with names that match Object.prototype.* don't work in IE.
Having 7 module names that won't work in IE is pretty minor compared to
the countless other things we do in SSJS modules that will never work on IE.
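That manual test amounts to probing a fixed list of the seven Object.prototype names that IE (through 8) refuses to enumerate even when they are shadowed by own properties; a sketch:

```javascript
// The seven Object.prototype names hidden from for..in by IE's
// DontEnum bug, even when shadowed by own properties.
var DONT_ENUMS = ["toString", "toLocaleString", "valueOf", "hasOwnProperty",
                  "isPrototypeOf", "propertyIsEnumerable", "constructor"];

function ownKeys(obj) {
    var keys = [], k, i;
    for (k in obj) {
        if (Object.prototype.hasOwnProperty.call(obj, k)) { keys.push(k); }
    }
    // In IE the loop above misses these names; probe them explicitly.
    for (i = 0; i < DONT_ENUMS.length; i++) {
        if (Object.prototype.hasOwnProperty.call(obj, DONT_ENUMS[i]) &&
                keys.indexOf(DONT_ENUMS[i]) === -1) {
            keys.push(DONT_ENUMS[i]);
        }
    }
    return keys;
}
```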
--
Thanks,
Kris
The intent is that this makes it easier to hand-code modules. You can
therefore reference modules directly by name instead of using
require("module").
> Thinking about this further why are the deps even listed at this stage? What need mandates this?
>
Dependencies are listed so they can be asynchronously loaded if
necessary before executing the module factories (at which point you
could no longer asynchronously load dependencies).
> As far as I understand these proposals, either you are hand writing it and know what modules are required and thus also include them somehow, or its server generated and it generates them.
>
> I'm not seeing the point of the [MODULE_ID_DEPENDENCIES] at the moment.
>
Modules certainly shouldn't be required to include all of their
dependencies in the same response/file, even when server-generated.
Therefore the dependency list is necessary for the loader to ensure the
proper modules are loaded before starting module factories.
>
>>
>>>> I still do not see the problem with reserving the "require" "export"
>>>> and "module" names as names that cannot be used for module IDs. But if
>>>> the people on the list think that is a bad idea and hand-coding is not
>>>> goal to achieve, then something like the B2 version above might work
>>>> well and be fairly concise. I will not be able to use it, but that is
>>>> fine in the grand scheme of things.
>>>>
>>>>
>>>>
>>> FWIW, I am not opposed to that, I don't see that as a big problem.
>>> However, I do really prefer the multiple module per require.def with a
>>> single dependency array in terms of efficiency though. And if you
>>> combine this with module injection, this seems to be beg for a per
>>> module injection list.
>>>
>> I read your use of "efficiency" as file size savings, maybe you mean
>> something else? If file size/savings, for hand coding, I'll be opting
>> to use the injection array which is per module, so I do not see much
>> files savings for me in that route.
>>
>> You could get multiple calls in one require.def call by allowing to
>> take an array of Transport/B2-like argument lists.
>>
>> James
>>
> The .pause and .resume interface really quite bamboozled me when i first saw it - my first instinct was 'why can't I just pass an object with keys as the module ids'.
>
This was included in Transport/C so that multiple modules that have
circular references could be defined before beginning the factory
execution. These methods are not necessary when multiple modules can be
defined in a single require.def call (and thus do not exist in
Transport/D).
--
Thanks,
Kris
If it is not possible to bridge the gap between Transport/C and B/D, do
we need to move to create two different functions?
require.run -> Transport/C parameters
require.def -> Transport/D parameters
This would be kind of admitting defeat, I guess...
>>> I still do not see the problem with reserving the "require" "export"
>>> and "module" names as names that cannot be used for module IDs. But if
>>> the people on the list think that is a bad idea and hand-coding is not
>>> goal to achieve, then something like the B2 version above might work
>>> well and be fairly concise. I will not be able to use it, but that is
>>> fine in the grand scheme of things.
>>>
>>>
>>>
>> FWIW, I am not opposed to that, I don't see that as a big problem.
>> However, I do really prefer the multiple module per require.def with a
>> single dependency array in terms of efficiency though. And if you
>> combine this with module injection, this seems to be beg for a per
>> module injection list.
>>
> I read your use of "efficiency" as file size savings, maybe you mean
> something else? If file size/savings, for hand coding, I'll be opting
> to use the injection array which is per module, so I do not see much
> files savings for me in that route.
>
By efficiency I did mean file size. My goal was
efficiency/small-file-size for auto-generated module wrapping and
explicitness/readability for hand-coding (at the expense of efficiency).
--
Thanks,
Kris
> The intent is that this makes it easier to hand-code modules. You can
> therefore reference modules directly by name instead of using
> require("module").
On the server-side, we simply do
var module = require("module");
to achieve that result.
And, if you want to be able to use modules written for the server-side, that had better work.
Wes
As Kris Zyp mentioned, since some envs, most notably script tags in
the browser, load the dependencies asynchronously, it is necessary to
have them listed separately, outside the factory function.
Since the module IDs need to be listed out in an array outside the
module factory function, it is less typing to then just inject those
dependencies as named arguments to the module factory function. No
need to do yet another require('some/module') inside the factory
function. So if variables are being injected into the factory function
via named arguments, then do the same for "require", "module" and
"exports".
But I suppose that is the main problem: I want that injection
terseness because I want to hand-code in the transport format. I want
to code something that works natively in the browser. However, I
suspect most on this list see the transport format as a way to cajole
existing CommonJS code into something that might work in the browser,
but not something that would be hand-authored.
And as far as translating existing CommonJS code into a module format,
having the modules injected as named parameters is probably not so
meaningful since require("some/module") will still work inside the
factory function.
So I think it will be hard to bridge the underlying goals that drive
some of the Transport proposals, or at least the Transport/C one. I
was hoping we could get closer to hand-authored browser interop at
least via a Transport spec, but it is OK if it is not. My issues with
CommonJS modules really go beyond a transport format anyway.
If the group figures out a transport spec that works for its needs,
then I can just provide an optional adapter in RequireJS to accept
those calls and call RequireJS methods underneath. I will likely keep
my existing translator that can take CommonJS modules and convert them
to something looking like Transport/C, but I can provide another
optional adapter for any official Transport spec.
With that in mind, it would be ideal if the Transport spec did not use
"require.def" as the Transport function call since I am using that now
in RequireJS. Maybe the Transport spec could use "require.define".
> The .pause and .resume interface really quite bamboozled me when i first saw it - my first instinct was 'why can't I just pass an object with keys as the module ids'.
At least in the browser case, and what RequireJS supports, a module
could depend on a JS file that does not use strict module semantics
and declares globals, or even uses an anonymous function to do other
global work and then calls require.def to define a module. In this
case, I want those scripts to be able to live outside a function
wrapper, but still pause dependency resolution until all the modules
have been registered.
It is useful for bootstrapping code from today to work with code in
the future, but again, probably not a concern for this group and the
Transport spec.
James
Coding around this issue is possible. My concern was more about
whether or not implementors would.
Best,
Tobie
[1] http://www.dhtmlkitchen.com/learn/js/enumeration/dontenum.jsp
On Mar 17, 8:09 am, Tim Schaub <tsch...@opengeo.org> wrote:
> Kris Zyp wrote:
>
> > On 3/17/2010 12:34 AM, Kris Kowal wrote:
On 3/18/2010 3:04 AM, Tobie Langel wrote:
> As Tim correctly points out, I wasn't concerned about the missing
> hasOwnProperty, but about the DontEnum bug which plagues all IE
> browsers. See [1] about midway down the page for more details.
>
> Coding around this issue is possible. My concern was more about
> whether or not implementors would.
>
Yeah, it is easy enough to code around, and we could include a note
about that in the spec. Although frankly, given the option between an
implementation that did and one that did not code for IE's DontEnum
bug, I would prefer the one that did not waste several dozen bytes of
my bandwidth on
preserving a fixed set of names that I can easily avoid, and don't want
to use anyway. But that's just my preference, implementors can decide
who they want to cater to. The bottom line is that objects are
completely sufficient for holding sets of ids/modules as name/values.
--
Thanks,
Kris
I've read through the different proposals and some of the discussions
about the Module Transport and I'm still unclear about some of the
details of Transport B. I'd really appreciate if anyone could
enlighten me.
Would:
require.def({'foo': { requires: [], factory: "…print('foo');" }});
print the string 'foo', or would that require a specific call to
require('foo')?
If the above does print the string 'foo', in which order would the
following be called? Also, how do we load modules without getting
their factory method called?
require.def({
'foo': { requires: [], factory: "…print('foo');" },
'bar': { requires: [], factory: "…print('bar');" }
});
If, on the contrary, the factory function isn't called by require.def,
how do we know that all of its dependencies have already been loaded
and are available?
Thanks for your time.
Best,
Tobie
--
You received this message because you are subscribed to the Google Groups "CommonJS" group.
To post to this group, send email to comm...@googlegroups.com.
To unsubscribe from this group, send email to commonjs+u...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/commonjs?hl=en.
It looks like you're talking about Draft 1 of Transport B. See the
updated proposals.
http://wiki.commonjs.org/wiki/Modules/Transport/B
require.def("foo", {
requires: [/* no dependencies */],
factory: function(require, exports, module) {
print("this is the foo module");
}
});
http://wiki.commonjs.org/wiki/Modules/Transport/D
require.def({
foo: function(require, exports, module) {
print("this is the foo module");
}
}, [/* no dependencies */]);
>
> print the string 'foo', or would that require a specific call to
> require('foo')?
I imagine this could be implementation specific, but I'd bet that most
wouldn't call the factory function until the module was required the
first time.
>
> If the above does print the string 'foo', in which order would the
> following be called? Also, how do we load modules without getting
> their factory method called?
>
> require.def({
> 'foo': { requires: [], factory: "…print('foo');" },
> 'bar': { requires: [], factory: "…print('bar');" }
> });
>
My guess again is that this could be implementation specific. But the
"foo" module factory function shouldn't get called unless it is required
elsewhere.
> If on the contrary, the factory function isn't called by require.def,
> how do we know that all of its dependencies have already been loaded
> and are available?
In Transport B, the dependencies are listed in the "requires" property
of the second argument to require.def.
In Transport D, the dependencies are given in the second argument to
require.def.
In the cases you've given above, the "foo" and "bar" modules don't have
any dependencies.
Tim
I'm just trying to tie in the concept of asynchronicity with this
proposal.
If the factory function isn't called when the module, its shallow and
deep dependencies are available, how do we know that the module is
ready to be required?
It's probably useful to distinguish the phases as "loaded", "ready to
be required", and "should be required", the latter being distinct and
not specified by the transport proposal. In my prototype, I had a
"require.async(id):Promise" method that instructed the loader to
require a module when it was ready, and to forward the exports to the
returned promise. I think "require.async" is the right spelling, even
if it doesn't return a promise, since the "main" module doesn't
normally need to forward its exports; the promise is for dynamic
non-blocking requires.
Tobie, do you think this ought to be specified?
Kris Kowal
Yes. Definitely.
Yeah, that's a good first layer; you're welcome to revise the
proposal. If promises were to be supported, it would look like:
Kris Kowal
Eugene
On Mar 21, 6:27 pm, Kris Kowal <cowbertvon...@gmail.com> wrote:
Comments welcomed.
Best,
Tobie
+1
Looks good to me.
Kris Kowal
Instead of require.async(), if require() is given a function as second
arg, it means async, would that work?
James
On Mar 22, 8:56 pm, Kevin Smith <khs4...@gmail.com> wrote:
> I like this - although what happens when all specified modules are already
> loaded?
The callback is called immediately.
There are other proposals [1] with intentions on the second argument
of require. These proposals are not universally loved, but I think we
should leave open the possibility.
Kris Kowal
.. [1] http://wiki.commonjs.org/wiki/Packages/A require2
I think that this might be contentious; I've heard arguments from Mark
Miller and read one by Oliver Steele some time ago [1] that, if a
method can execute in a future turn, it should always execute in a
future turn. This has been the crux of a discussion with MarkM and
Tyler Close about promises, where there's a tension between having
"progressively enhanced" sync/async code that runs promised callbacks
in the same turn if possible to preserve the synchronous code-path,
and that of guaranteeing that internal state is consistent within a
turn.
We could mandate that the callback be deferred at the very least with
setTimeout 0. I consider this an open question.
Kris Kowal
.. [1] http://osteele.com/archives/2008/04/minimizing-code-paths-asychronous-code
OK, as long as the other thing that may want to claim the second arg
is something that is used more often by more people. It is not clear
to me that package targeting will be used more often than async
behavior. Async concerns seem to come up more frequently than package
targeting in require, but my memory could be filtered by my bias.
James
We may ultimately decide to do "require2" in a module. If nested
packaging in this form dies at the side of the road, I think we should
revisit the idea of require(id, cb). Having require.async has some
nice properties though…
1.) feature testing. !!require.async is a strong indicator that the
feature is supported.
2.) sync require(id) does not and should not support typeof id ===
"array". Separating require and require.async keeps the method
signatures simple.
Kris Kowal
It would unless promises are involved in which case a call to require
could return either the foreign module's exports or a promise. I'm not
very familiar with promises so I'm not aware how important those will
be in the future (pun not intended).
That said, given that require.async needs to accept an array of module
identifiers, combining both functions into one would create a
confusing API:
require(identifier); // sync
require(identifier, callback); // async
require([identifier, ...]); // NOT SUPPORTED
require([identifier, ...], callback); // async
I would argue against it.
Best,
Tobie
Tobie
On Mar 22, 10:29 pm, Kris Kowal <cowbertvon...@gmail.com> wrote:
> .. [1]http://osteele.com/archives/2008/04/minimizing-code-paths-asychronous...
I would be surprised if promises were going to be mandated for a
module loading spec, particularly since the callback arg is
sufficient.
> That said, given that require.async needs to accept an array of module
> identifiers, combining both functions into one would create a
> confusing API:
>
> require(identifier); // sync
> require(identifier, callback); // async
> require([identifier, ...]); // NOT SUPPORTED
> require([identifier, ...], callback); // async
>
> I would argue against it.
require([identifier, ...]); could be the sync case used for
destructuring assignment, similar to how the array-to-function-args
case works in the async case. I think the API is fairly clear: if a
second function arg, then it is async.
As for allowing a feature detect (like what .async allows): if async
support goes into the main module spec, given the relative newness of
the specs and implementations, it seems like over time the feature
detection will not be needed as implementations gain async API
support. In other words, in the short term any modules that use an
async API would just indicate what platforms it could run in, as what
happens today with some existing modules. But it depends on how
important async support is seen for a module spec and if it qualifies
for the main module spec.
James
There are already dozens of entrenched implementations, not all of
which even have event loops yet. It's a safe bet that feature
detection will be a necessary possibility for a migration period of at
least a year if not indefinitely.
Are you arguing for require(id, cb) or against require.async(id, cb)?
Do you object to having the latter initially with the possibility of
the former eventually?
Kris Kowal
+1
Sorry, this ended up being unclear.
I'm very much in favor of standardizing require.async initially then
moving on to require(id, cb), once/if the various issues are
clarified.
Tobie
I want to be sure more auxiliary things are not added to require, or
multiple pathways for the same thing introduced, for the sake of
still relatively new implementations, particularly if one pathway will
suffice. It will increase the burden on future implementors. It seems
like the cases that would want async use will not be usable in the
existing implementations anyway.
So, I am arguing for require(id, cb) and against require.async(id, cb).
James
Would this be tenable in browser implementations where you get a minimum
15ms delay, IIRC, with setTimeout and which can pretty easily add up to
unacceptable delays with a series of sequential callbacks? I certainly
wouldn't want the delay when my ultimate goal is user experience.
--
Thanks,
Kris
How are you notified of an error/failure in loading a module?
--
Thanks,
Kris
To convince me that require(id, cb) must be used in lieu of
require.async(id, cb), you will need to convince me that require(id,
pkg) would not be useful or not in conflict, which means joining that
discussion and providing a proposal for a viable alternative and
convincing a sizable group of people to pursue that specification,
which blocks the resolution of this issue. I would personally prefer
to make progress on this issue orthogonally. Avoiding duplicate
behavior would be equally well served by making this the only
interface for async require.
Kris Kowal
require.length > 1 could be another. Specs could mandate
require.length be equal to 2.
Good idea. The remaining issues are:
* API orthogonality between async and destructuring.
* Conflict with require2 proposal.
Kris Kowal
I like how require.async nicely declares its difference from standard
require.
require.async also has a significant difference in behavior to plain
require. Writing an API that returns two completely different things is
ugly API design, and can lead to confusion among the people using it.
Feature detection IS important.
It doesn't matter how long you wait: even once all the implementations
that predate async require have completely disappeared, you will NEVER
be guaranteed that any modern implementation you use will support
require.async. There will be plenty of situations where it doesn't fit,
and as such the implementation has decided to exclude it. So feature
detection will never go away. require.async fits very well with our
other proposals like if ( stream.write )
Besides that, why should any sort of detection be required to get a sane
abort error?
You remember one of the things ES5 strict mode does? It changes the this
in a function from global to undefined... why? Because it makes errors
throw much sooner and closer to the actual issue rather than later in
not so relevant code.
Here's what I mean. In these examples the implementation has not
implemented async require, for whatever reason. The sane thing we
want, of course, is for the code to quickly throw an error and die
because it can't actually be run in that implementation.
require.async("foo", function() {
...
});
// Throws something like "TypeError: require.async is undefined, not a
function" right away indicating that require.async is not defined, and
fairly quickly points to the fault that the impl does not have async
require support.
require("foo", function() {
...
});
// require is treated as if the code were written require("foo"); it
synchronously loads the module, because it ignores the argument it
doesn't understand (this IS JavaScript after all), returns it, discards
the exports, and the function never gets executed... In a more complex
environment it's possible that other parts of the code may run;
however, if they are waiting for something inside the function to run,
it never will, and they will simply sit there waiting. The program will
never throw a sane error telling them that the random perfectly fine
CommonJS script they ran uses async, which it does not support.
// If said module is not installed, the user will get an error telling
them it could not be loaded. The user will spend some time installing
the module into that implementation, and will then be thrown into the
previous situation, where they eventually (or never) realize that they
just installed a module into an implementation that can't run the
program that uses it. Since the impl told them it couldn't load the
module, they now have to switch implementations and go through more
work installing that same module into the other impl, when they could
have just switched impls in the first place without installing the
module into the impl that doesn't support async.
~Daniel Friesen (Dantman, Nadir-Seen-Fire) [http://daniel.friesen.name]
> James
>
I've written such a system as a prototype. The implication is clear:
if promises are fundamental, they have to either be loaded beforehand
(as part of the loader), or loaded using the more primitive callback
system on-demand. A full-featured promise implementation weighs about
5KB.
Kris Kowal
For reference:
YUI3 returns the foreign modules' exports as properties of
arguments[0] and returns a status object as second argument.[1]
requirejs throws.[2]
dojo.require also throws, although passing in true as second argument
prevents the error from being thrown.[3]
LABjs doesn't seem to handle this at all.[4]
With that issue also comes the problem of timeouts and whether or not
it should be part of the spec.
Best,
Tobie
[1] http://developer.yahoo.com/yui/3/examples/yui/yui-loader-ext.html#syntax3
[2] http://github.com/jrburke/requirejs/blob/master/require.js
[3] http://api.dojotoolkit.org/jsdoc/HEAD/dojo.require
[4] http://github.com/getify/LABjs
It's just that using require.async("promise").something(...; when you
read it semantically implies "I'm asynchronously loading the promise
module, and synchronously returning a promise which is supposed to be
part of the promise module which I'm asynchronously loading." When
really promises are just hardcoded into the async loader. It isn't a
problem, but it has some fun semantic implications.
# "require.async" accepts either a module identifier or an array of
module identifiers as first argument.
# Optionally, "require.async" accepts a function callback as second
argument.
# Optionally, "require.async" accepts another function callback as
third argument to handle errors.
## If "require.async" was passed a function callback, this callback
must be called with the exported API of the foreign modules as
arguments as soon as the foreign modules, and their shallow and deep
dependencies have been loaded.
## If there is an error loading any of the foreign modules or its
dependencies, the function callback must not be called. Instead, if an
error handler was specified it must be called with the error as first
argument. If none was specified, the error must be rethrown.
# "require.async" has an optional, settable "timeout" property which
represents the time, in milliseconds, within which the foreign modules
and their dependencies must be loaded.
## This property accepts an integer and defaults to -1 (no timeout).
## If "require.async.timeout" is greater than -1 and the required
module hasn't been loaded within the set time, an error must be
thrown.
## This error must be caught and passed to "require.async"'s error
handler if one was set.
Let me know if all or part of the above seems adequate and if I should
amend the proposal accordingly (I'm worried about the tamperability of
the timeout option and the security implications this has).
I don't want us to get carried away and would much rather we agreed
now on a spec which didn't include error handling or timeouts rather
than bike-shed over it and delay the specification.
Best,
Tobie
> Let me know if all or part of the above seems adequate and if I should
> amend the proposal accordingly (I'm worried about the tamperability of
> the timeout option and the security implications this has).
I am also not comfortable with the timeout property. Let's ditch
timeouts entirely since they can be trivially constructed with a
wrapper, or take it as an argument to require.async.
> I don't want us to get carried away and would much rather we agreed
> now on a spec which didn't include error handling or timeouts rather
> than bike-shed over it and delay the specification.
Agreed.
Kris Kowal
Now this certainly doesn't invalidate the need for require.async.
require.async seems like it would provide an anonymous module equivalent
of require.def, which is specifically aimed at programmatically
controlled lazy loading of modules (which would explicitly avoid any
server side dependency resolving for wrapping and sending modules to the
client). This is indeed very valuable. I just want to make sure I am
clear on the intended relationship between require.def and require.async.
Kris
On 3/22/2010 1:26 PM, Tobie Langel wrote:
> Please find a draft proposal here: http://wiki.commonjs.org/wiki/Modules/require/async/A
>
> Comments welcomed.
>
> Best,
>
> Tobie
>
>
--
Thanks,
Kris
I have been making the opposite assumption; that an entry point must
be explicitly selected in some fashion. I had require.async in mind,
but did not specify it in conjunction with the module transport
format.
The module transport format is intended to be a standard way to
publish CommonJS modules statically to interface with a variety of
loaders. I think loading and executing should be separate concerns.
I think that having modules implicitly execute because they are
available can cause subtle invariants to break. I didn't account for
non-deterministic entry points when I wrote the Modules/1.0 compliance
tests which is why they didn't play well with Persevere so long ago.
> Or is it undefined if and when the client side loader executes
> modules if they haven't been required?
I had not defined it. Tobie thinks that it ought to be defined. I
would be content for it to be undefined by this specification; there
are a variety of ways to invoke the entry into a module system that
are all orthogonal concerns to how they are transported to the
browser.
> Or is it explicitly prohibited that a module be loaded if it
> is not required?
I think it would be good for implementations to use explicit entry
points in some form or another.
> After playing with module wrapping, I have found it
> quite elegant that I can load a CommonJS module with simply:
> <script src="require.js"></script>
> <script src="my-app.js"></script>
> Where my-app returns a CommonJS module that is wrapped in a require.def
> and is executed once all its dependencies are loaded. It doesn't seem
> like I should have to explicitly load it later.
Back before CommonJS, I had a loader called modules.js that looked
like:
<script src="modules.js?main.js"></script>
The modules.js script would find the <script> in the DOM, remove it,
and read the query-string for the entry point. It then used sync XHR
to grab it and its transitive dependencies. While the sync XHR was
obviously flawed, there certainly are ways to provide an explicit
entry point.
Then there's the performance concern. Successive, blocking script
tags stall the browser's resource exploration in everything but IE8.
> Now this certainly doesn't invalidate the need for require.async.
> require.async seems like it would provide an anonymous module
> equivalent of require.def, which is specifically aimed at
> programmatically controlled lazy loading of modules(which would
> explicitly avoid any server side dependence resolving for wrapping
> and sending modules to the client). This is indeed very valuable. I
> just want to make sure I am clear on the intended relationship
> between require.def and require.async.
Yeah, I'm glad Tobie put it up as a separate proposal. I think that
the transport format proposal would be useful even if the entry
mechanism is not defined. In fact, because a call to require.async
has no place in the transport format, and the RequireJS format doesn't
call for any similar notation, I don't think that it's even possible
for the transport specification to depend on this issue.
Kris Kowal
I would not bother with error handling functions, the sync require
does not allow for it. I would also not bother with an explicit
timeout spec either.
James
It rather does, actually.
http://wiki.commonjs.org/wiki/Modules/1.1 - Module Context - 1.4
If the requested module cannot be returned, "require" must throw an error.
Graceful recovery of some kind seems like a good idea to me.
Kris Kowal
In a nutshell: I fully agree with you that module transport, loading
and execution should be separate concerns and treated as such (i.e.
completely different specs)[1].
That said, I'm under the impression that most of the discussion about
the proposed Transport format relies on a misunderstanding of the role
of the factory function, seen as either a module factory, an async
callback, or both (which imho leads to unhealthy coupling of loading
logic and regular code inside of modules).
Agreeing on a spec for an async require would clarify that issue. (And
also would help the spec get more traction among client side
developers.)
Best,
Tobie
[1] For the record, separating module loading from module execution is
a useful technique to decrease load times with slow interpreters. (The
mobile version of Gmail does it
(http://googlecode.blogspot.com/2009/09/gmail-for-mobile-html5-series-reducing.html),
so does SproutCore
(http://blog.sproutcore.com/post/225219087/faster-loading-through-eval),
and I've also recently built this into modulr
(http://github.com/codespeaks/modulr/commit/460eb96ee188f4e671fe52bc24f30b14f592d36c).)
Relying on module execution to fire "onload" callbacks would seriously
complicate the implementation of such performance improvements.
Small aside: I thought those techniques give a perf boost because they
avoid JS execution completely, and it is not a bulk win, it is just
moving the cost around to make initial perf better. The larger point
being that those types of optimizations end up being something not
impacted by how an async require is constructed in JS or how a
transport format in JS is defined -- those perf techniques avoid
evaluating JS completely. Once the JS is evaluated, a way to async
require and to define modules will then come into play.
On your larger point though, I agree that an async require spec can be
defined separately from the transport spec, because the requirements
are different: an async require is about obtaining references to
modules, whereas the transport format is concerned with defining
modules. Turns out defining a module means obtaining references to
other modules, but there are other additional concerns for module
definition.
James
Defining a module shouldn't need references to the modules. Saying "I depend on X" is different from "obtaining module X", to my mind anyway (to me, obtaining module X is either doing the actual require, or fetching it from the server to get the factory/module text, etc.)
Does a CommonJS spec for require.async (that accepts an array of module
identifiers) alleviate the need for the transport spec to support terse
module injection as arguments to the factories (like Transport/C for
reducing keystrokes in hand-coding)? Will Transport/B/D's require.def +
require.async be sufficient, or will we still need a Transport/C style
module-as-variable injection mechanism?
--
Thanks,
Kris
Or, just have a general require.capability object where
require.capability.async === true means async is supported.
> Good idea. The remaining issues are:
>
> * API orthogonality between async and destructuring.
If require.async can accept a string or an array as the first arg, it
should be OK for require() to also have that pattern? Returning an
array for when require([...]) is used seems intuitive too.
However, to reduce API permutations, perhaps restrict the array arg to
require always means async, and do not allow the string value in that
case. This reduces the require API to:
require("mod"); //sync, returns module definition for "mod"
require(["mod"], function(mod){}) //async, returns require (or undefined)
and anything else throws.
> * Conflict with require2 proposal.
Looking at the Packages/A [1] proposal: require('bez', 'main/curve'),
this could be accomplished by mapping 'bez' to the bezierUtils.zip
file in the manifest.json, then require('bez/main/curve') would fetch
the main/curve.js file inside the bezierUtils.zip file. In other
words, use path prefix mapping to avoid the need for a second
argument.
I am pushing a bit on using require() for async because I feel it is a
basic use case for obtaining module references, particularly if you
take the most widely entrenched JS implementations, browsers, into
consideration. Using require.async() looks like it is bolted on after
the fact. Async require support is a distinct advantage over other
module systems, and it will really help browser adoption. It should be
first class.
End of the evangelism. In the end if .async() is the way to get it
done, then so be it. I did not want to avoid the discussion, though:
just accepting .async() now would make it harder to change later.
To Daniel Friesen's concern about getting timely error values in
non-async implementations: I hope that implementations properly check
the inputs. Judging from the Packages/A, a multi-arg require() has
been discussed. It is in the implementors' best interest to strictly
check the require() inputs and throw if they do not like them.
[1] http://wiki.commonjs.org/wiki/Packages/A
James
To me, this is essentially require.async()? Or at least how I imagine
it would work. require.async or any sort of callback used for a
require should not mandate use of function arg names to obtain the
module references -- require() inside the function should also be
possible, and in fact necessary for some circular dependency cases.
So, I would express your need() above like so:
require(["moduleA", "moduleB"], function() {
require("moduleA").doSomething();
require("moduleB").doSomethingElse();
});
Replace that require call with require.async() if need be, but a
variation on the above is what I support in RequireJS, and what allows
me to easily convert CommonJS modules to a wrapped syntax.
Or are you suggesting this path to get around my desire not to call
require.async() directly? I can live with require.async (I will just
alias my require to require.async), but it looks cleaner/first class
to have a callback syntax inside the require() call itself. It will be
used heavily in the browser if it exists, might as well make it
compact.
James
Imagine we decide instead of async on require() or packages, we decide
to support something like require("foo", "2.5"); where the second arg is
a semantic version... (probably won't but it's an example) this case
works nicely with graceful degradation. In impls that just support
Modules/1.x it gets treated as require("foo"); and you only lose the
version restriction, the program still works fine even though the
implementation is old.
Also, while I dislike the confusing order, require(module, pkgName) has
a useful advantage: it degrades gracefully. For example require("md5",
"crypto"); would load a md5 module from a crypto package... However in
an implementation where packages have not been implemented
require("md5", "crypto"); is treated as if you wrote require("md5"); as
a result, even though your implementation does not support packages, all
you have to do is install the md5 module from the crypto package into
your normal module library path and the code will work.
So when require.async provides nice clean errors, I don't see a reason
why we should instead use require(, cb) and force implementations to go
throwing errors if they haven't implemented a spec, when simply using
require.async provides errors with no cost to implementation (no
implementation needs to change unless it implements the spec) and allows
for clean feature detection that doesn't require some unnecessary
compatibility object, or argument length test which could become
completely invalid if we are to standardize something like 2-arg package
require.
Fair enough, I was overly broad in my reply.
To get the module spec usable and performing in the browser, an async
require is needed, and if the right API choice is to have it as part
of require(), then I would rather take the cost of adjusting the spec
now vs. adding in extra cruft to avoid some bumpiness in the near
term. Ideally an async require would have been specified earlier, but
I can see in the ServerJS days where it was less of a concern.
An async require() may not make sense as part of the require() call
for other reasons, but I do not feel the backwards compatibility issue
should be a blocker. There is an issue with backwards compatibility,
and existing module containers could make it clear if they do not
support it by throwing. Or they could support it, but it would
basically be a sync call. But in reality, I expect mostly that people
will advertise what CommonJS containers their modules will work in,
since it is needed today (the setExports thing is another example of
this). The implementations are still relatively new.
James
On 3/23/2010 2:33 PM, Irakli Gozalishvili wrote:
> I would like to see something like
>
> require.async("foo", "bar", function ready() {
> require("foo").hello((today === "friday") ? require("bar") :
> "whatever");
> });
>
> Callback should not get any args passed, it should just make modules
> available for require-ing them sync mode. Cause I might very much have
> a case where my dependency "bar" is used only on Fridays. I'm ok with
> fetching dependency but I would like to avoid evaluating it if I
> don't explicitly need it.
The asynchronous version of require isn't really a require, it is just a
pre-require? That doesn't seem logical or consistent to me. If you have
a need for loading modules without execution, you can do that by loading
a script that has a require.def in it.
--
Thanks,
Kris
It has a beautiful side effect: large chunks of sync CommonJS code can
be copied, wrapped in a require.async call, and pasted into code that
can now run in the browser without blocking the ui.
~Daniel Friesen (Dantman, Nadir-Seen-Fire) [http://daniel.friesen.name]
So, I would express your need() above like so:
require(["moduleA", "moduleB"], function() {
require("moduleA").doSomething();
require("moduleB").doSomethingElse();
});
Replace that require call with require.async()
The asynchronous version of require isn't really a require, it is just a
pre-require? That doesn't seem logical or consistent to me.
Regarding require2 PLEASE leave require ALONE and SIMPLE!!!
Note that specifying the module names as params to the function is
optional, or should be. It is definitely convenient to not have to
replicate the require() string twice, once in the dependency section,
another inside the function for the require call.
James
I implemented my require.async() by returning a promise, and so far I
like it a lot better than passing a function as the last arg to
require.async().
require.async("stdio", "assert").then(
function onFulfill(STDIO, ASSERT) {
// my program goes in here like in
window.addEventListener("load", .., ..)
},
function onException(ex) {
// A specified set of errors can be handled here.
},
function onProgress(num) {
// num is the number of modules loaded or something
// maybe this callback is not even called by require.async() ???
});
This never throws an error on invocation, but instead passes errors to
the exception handler, allowing the programmer to handle them
gracefully.
James
On Tue, Mar 23, 2010 at 2:09 PM, Irakli Gozalishvili <rfo...@gmail.com> wrote:
> 1) Even if it's optional it means module should be created and then callback
> will be called. That's what I would like to avoid.
As Kris Zyp mentions, I cannot see a use case where an async require
is done but then you would not want to get a hold of the modules. I
believe it is trying to optimize on the wrong thing -- the hit to
download and parse the JS in the file has already been taken. If you
are looking for the best performance, do not ask for the module at all.
In the browser case, using nested require.async calls so that you can
avoid fetching things you do not need is the better overall
performance strategy -- it avoids the bytes and script evaluations,
and it also allows you to target those batches as optimized
downloadable units.
> 2) As was mentioned serverJS module can be wrapped in require.async / load
> and passed to browser simple no overhead.
This is still possible with the require.async as previously talked
about? How is it not?
James
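The "used only on Fridays" case can be sketched like this. The stub loader and names below are illustrative, not from any proposal; in a real loader, the module's script would not even be fetched until the branch that needs it runs:

```javascript
// Registry of module factories; a factory runs only when requested.
var factories = {
  fridays: function () {
    return { celebrate: function () { return "TGIF"; } };
  }
};
var evaluated = [];  // track which factories have actually run
function requireAsync(ids, callback) {
  // Stub: a real loader would fetch the scripts first, asynchronously.
  callback.apply(null, ids.map(function (id) {
    evaluated.push(id);
    return factories[id]();
  }));
}

function onSomeEvent(isFriday) {
  if (!isFriday) return;  // off the Friday path, "fridays" costs nothing
  // Nested on the branch that needs it: the bytes and the script
  // evaluation are only paid when this code path actually runs.
  requireAsync(["fridays"], function (fridays) {
    console.log(fridays.celebrate());
  });
}

onSomeEvent(false); // nothing fetched or evaluated
onSomeEvent(true);  // prints "TGIF"
```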
I didn't catch the reference to require.load. I use the term
require.load in Narwhal to request the original module factory
functions from the loader without instantiating them, which is handy
for Ihab Awad's "Maker" style modules and for DSLs.
http://github.com/280north/narwhal/blob/master/docs/modules-advanced.md
Kris Kowal
It would be interesting to contrast that method with using an HTML5
application cache[1]. It seems like an appcache would get the benefits
you are looking for, but still allow normal JS evaluation when you
actually need the files. On the plus side, no eval is needed and no
extra JS code is required to bootstrap the code-in-text in the appcache
model. If appcache is sufficient, then .async can evaluate modules
immediately.
>>
>> > 2) As was mentioned serverJS module can be wrapped in require.async /
>> > load
>> > and passed to browser simple no overhead.
>>
>> This is still possible with the require.async as previously talked
>> about? How is it not?
>>
>
> Yes, it's possible in the case that the passed params are ignored and if, in
> the callback, you synchronously require the module you required async. But the
> proposal doesn't mention that after you do require.async([`foo`], callback)
> you are able to require(`foo`) in the callback. Actually my impression was
> that Kris was against this as well.
require('foo') will need to be supported inside the callback to help
circular dependency cases. It should be part of the spec.
James
Lame, forgot the link:
It's a technique that is used in some scenarios I think in mobile gmail as well. you have a js source and you know you'll need it later on unless user will close browser, but you delay evaluation to save some memory till a point where you actually need this.
Clever. I'm unclear why the distinction is desirable. Did someone ask
for non-requiring async in this thread? I must have missed that. I
thought the issue was about whether require.def should implicitly
require when the module becomes available.
Kris Kowal
~Daniel Friesen (Dantman, Nadir-Seen-Fire) [http://daniel.friesen.name]
Kevin Smith wrote:
>
> It's a technique that is used in some scenarios I think in mobile
> gmail as well. you have a js source and you know you'll need it
> later on unless user will close browser, but you delay evaluation
> to save some memory till a point where you actually need this.
>
>
> In this case we can have it both ways. A loader could use the length
> property
> <https://developer.mozilla.org/en/Core_JavaScript_1.5_Reference/Global_Objects/Function/length>
Ah. Okay.
I think that at this point it would be good for everyone to summarize
their current position on the issues.
Mine:
I think require.async is good.
I'm no longer in touch with the details of the require.def proposals, but:
* I think one of them must be close to being acceptable.
* I think it's fine for the injection list to be implicitly "require",
"exports", "module", but I still insist that "require" must be used in
the body of the module function to get modules.
* I think "requires" is a great name for the module descriptor
property that lists the shallow dependencies of a factory, but I'm not
against constructing an abbreviated form where the "requires" and
"factory" properties of a module descriptor are implied by positional
arguments.
* I think it would be okay for a single .def call to register multiple
module descriptors.
* If that does not jibe with RequireJS, I think we need two standards,
and I think that CommonJS should make a home for both.
To move forward, I want to hear back from Kris Zyp, Tobie, and James
at the very least.
Kris Kowal
I've grown very fond of Wes Garland's suggestion. Combined with the
method signature suggested by James (array of module identifiers,
callback), I think it makes for a great API.
It also delegates the task of throwing errors for missing modules to
the sync version of require. That should be a lot easier to handle.
I've updated the proposal accordingly (I hope that's the process
you've been following so far):
http://wiki.commonjs.org/wiki/Modules/Async/A
Given the semantic shift this method has taken, the name
"require.async" no longer seems appropriate. Before we embark on a
bike-shedding session, I'd like to make sure that it's the direction
we want the API to evolve to.
I'm also wondering whether the spec should mandate that "require" be
made available as a free variable of the callback, or whether its
availability through the lexical closure is sufficient.
Thanks.
Tobie
That was precisely my point.
However, should the module factory itself be used as some sort of
callback (as was suggested at some point), then, what should have been
kept an implementation detail becomes a de facto standard and can no
longer be tweaked to accommodate varying and ever-changing performance
needs.
Hence the need for require.async.
Best,
Tobie
Two standards would be great. After proposals for two distinctly
different styles of require.def, require.async, and talk of
require.load, I would just like to avoid 3 or 4 standards!
I believe require.async and require.def can fulfill two distinct roles.
It feels like we are getting a lot of cross-over of perceived needs, but
if we intelligently design these, I believe they can complement each
other very nicely without having to overload each with excessive
requirements. I would propose the purpose and design of each:
require.def -
* Used for defining named modules
* Generally created through automated generation/wrapping
* Designed to wrap CommonJS modules without any internal modification to
the module
* Injects "require", "module", and "exports" to match expectation of
CommonJS modules
* Module factories are not guaranteed to execute until someone
require's or require.async's them.
* Efficient handling of multiple modules (one call with a module set)
require.async
* Used for loading and executing modules and then running anonymous code
(not a named module) to use these loaded modules
* Generally hand-coded
* Loaded modules are always executed before the callback
* Is not intended for wrapping CommonJS modules
* Injects the module objects as positional arguments to the callback for
ease of referencing the load/required modules
With this approach, each of these can be used in conjunction to fulfill
most of the important needs out there, including deferred execution,
module bundling, module injection, module/injection namespace
separation, and module transportation.
Also with this approach, I think that require.def can be simplified to
remove the "injects" property and it can simply take an object with a
set of module factories, and an array of dependencies.
--
Thanks,
Kris
I'm sorry I had gone ahead and updated my proposal before seeing your
recent post.
To answer your questions:
1. I think Transport B is a simple, solid and extendable option. I
would go with its second draft.
The reservations I mentioned regarding the IE DontEnum bug still hold.
Handling modules with identifiers 'constructor', 'toString', 'valueOf',
etc. in IE is doable but would require a bit more work.
Also, Draft 2 allows for specifying a consistent and deterministic
behaviour when the same identifier is declared multiple times.
Compare:
require.def('foo', {...});
require.def('foo', {...});
which could be specified to either throw, silently fail or reset the
module descriptor of 'foo' to the latest value, to
require.def({
'foo': {...},
'foo': {...}
});
the outcome of which is indeterminate under the ES3 spec (iteration
order isn't specified). And to:
require.def({ 'foo': {...}});
require.def({ 'foo': {...}});
whose behaviour would somehow need to be specified.
2. As mentioned earlier I would go with the second proposal of
require.async but name it differently (load, preload, fetch, get,
ensure, you name it).
3. Both specs should be separated.
Best,
Tobie
> I believe require.async and require.def can fulfill two distinct roles.
> It feels like we are getting a lot of cross-over of perceived needs, but
> if we intelligently design these, I believe they can complement each
> other very nicely without having to overload each with excessive
> requirements. I would propose the purpose and design of each:
> require.def -
I generally approve of these requirements for require.def.
> * Used for defining named modules
> * Generally created through automated generation/wrapping
> * Designed to wrap CommonJS modules without any internal modification to
> the module
> * Injects "require", "module", and "exports" to match expectation of
> CommonJS modules
Sure, I can give up the ability to override the injected names.
That's fine for now, as long as factories accept no other names. I
would prefer but do not insist that the proposal we converge on be
extensible in some way; keeping a migration path in mind would be nice
but not necessary.
> * Modules factories are not guaranteed to execute until someone
> require's or require.async's them.
> * Efficient handling of multiple modules (one call with a module set)
This last one is at odds with Tobie's expectations. You guys need to
come to some kind of understanding, or there need to be two
cooperative proposals. I think this would be fine, but I'll leave it
to you.
> require.async
> * Used for loading and executing modules and then running anonymous code
> (not a named module) to use these loaded modules
> * Generally hand-coded
> * Loaded modules are always executed before the callback
> * Is not intended for wrapping CommonJS modules
> * Injects the module objects as positional arguments to the callback for
> ease of referencing the load/required modules
> With this approach, each of these can be used in conjunction to fulfill
> most of the important needs out there, including deferred execution,
> module bundling, module injection, module/injection namespace
> separation, and module transportation.
To be clear, Tobie and Irakli want a feature that allows them to
asynchronously request the loading of a module but not execute a
module. It is your opinion that this goal can be adequately met by
issuing a request for a require.def transport, given that it does not
implicitly execute the module. Therefore, require.async is free to
implicitly require and pass exports to the callback.
require.async(["a", "b"], function (a, b) {
});
I think this is fine, and I hope Irakli and Tobie agree.
> Also with this approach, I think that require.def can be simplified to
> remove the "injects" property and it can simply take an object with a
> set of module factories, and an array of dependencies.
To be clear, the form of this would be:
require.def({
ID: function (require, exports, module) {
},
ID: function (require, exports, module) {
}
}, [IDS]);
Where the IDs specified as the (perhaps optional) second argument
would refer exclusively to external module dependencies, since the
internal modules can reliably depend on one another without
explicating their internal shallow dependencies. I think this is
fine. The only problem is that it is at odds with Tobie's aversion to
name space collisions with non-enumerable Object owned properties in
IE. I think that problem could be averted with a prefix, if
necessary.
require.def({
"_a": function (require, exports, module) {
}
});
Kris Kowal