IMO a CommonJS application's source should not change when porting
between environments. Configuration information that is environment-
or deployment-specific must therefore be loaded via a configuration
management module that the application requires.
Depending on the capabilities of the loader you are using, you should
have enough information available via `module` to get module- and
application-specific config info for the module:
var config = require("config").load(module, "XX");
When using loader plugins, one must keep in mind that they reverse the
responsibility for declaring the format of the 'module': it should be
declared by the providing module (e.g. a .coffee file), not by the
consuming module calling require(). The only case in which loader
plugins can be used reliably is when they serve only plain require()-style
modules or static text. Anything else makes portability and
interoperability a nightmare.
Christoph
I had a look at BravoJS and you are right. It looks like we need to
traverse all memoized modules and their dependencies to find anything
that is missing, and load it, before invoking the first module. This
would have to happen right before calling the first module, as modules
may not be memoized in the same order in which they would be declared
using module.declare(), so we can't do the accounting while memoizing.
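A minimal sketch of that accounting pass, assuming the loader keeps a map of memoized modules, each recording its declared dependency ids (the data shape is an assumption about loader internals):

```javascript
// Hypothetical loader internals: memoized maps module id -> record with a
// dependencies array. Walk every record and collect ids that appear as
// dependencies but were never memoized themselves.
function findMissing(memoized) {
    var missing = {};
    Object.keys(memoized).forEach(function (id) {
        memoized[id].dependencies.forEach(function (dep) {
            if (!memoized[dep]) missing[dep] = true;
        });
    });
    return Object.keys(missing);
}
```

The loader would fetch and memoize everything findMissing() returns, repeating until it comes back empty (newly loaded modules may declare further dependencies), and only then invoke the first module.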
I have not run into this, as my concatenator does indeed compile plugins
and memoize the final result. I do this by loading the application into
a loader on the server and then scraping all memoized modules. This has
been working really well and always produces a lean build for the client
by including only the modules that are actually needed.
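Roughly, such a scraping concatenator could emit one memoization call per scraped module. This is a hedged sketch: the record shape and the emitted `require.memoize(id, deps, factory)` form are assumptions about loader internals, not a spec'd interface.

```javascript
// Hypothetical: serialize each memoized module record back into a
// memoization call, so the client bundle contains only the compiled
// sources the application actually pulled in on the server.
function bundle(memoized) {
    return Object.keys(memoized).map(function (id) {
        var record = memoized[id];
        return "require.memoize(" +
               JSON.stringify(id) + ", " +
               JSON.stringify(record.dependencies) + ", " +
               record.factory.toString() + ");";
    }).join("\n");
}
```

Because the factory functions are serialized after any plugin (e.g. a CoffeeScript compiler) has already run on the server, the client never needs the plugin at all.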
> Perhaps this is the appropriate conclusion: provider plug-ins should
> only be used in conjunction with a NodeJS-like environment so that the
> build process can also tap into the plug-in's magic. This is pretty
> annoying in our particular case, since our build system is quite dumb
> and not JavaScript-aware, but that's not really the spec's fault.
> Would you say this is a fair conclusion?
This is a tricky one. I see a few options:
1) Provider plugins *must* be portable CommonJS modules that will run in
all environments by loading them as part of the application. This will
require accounting for what has been memoized and loading the rest
before the first module is invoked as mentioned above.
2) Concatenators must always memoize the sources of provided modules,
not the raw original sources.
3) No provider plugins as a general feature, as they are not portable.
Use provider plugins only for system-internal needs within a specific
environment. This is what Jetpack is going to be doing with `chrome:` [1].
4) No provider plugins.
I think all four options are valid and needed, and the user must choose
the correct one based on the portability requirements of their code. (1)
and (4) would produce portable code. (2) can fix code that uses
non-portable provider plugins as long as they adhere to the 'static'
requirement, and (3) produces non-portable code.
Christoph
[1] -
http://groups.google.com/group/mozilla-labs-jetpack/browse_thread/thread/2da6241694dc32a1
I don't think this should be an extension point. This seems pretty
fundamental to me. I think it must be baked into the loader which has
access to the list of memoized modules and their dependencies. We could
offer a flag to skip this final processing for environments that do not
need it.
If we are going to go this route, all Modules/2 loaders must support
this natively, I think.
Christoph
Right.
> I am curious what you meant by provider plug-ins being themselves
> CommonJS modules. The BravoJS examples are all raw script files, and
> if they were modules (perhaps with a standardized "plugin()" export)
> we would run into the spec hole discussed in
> http://groups.google.com/group/commonjs/browse_thread/thread/50d4565bd07e03cb
> , wherein calling global.module.provide before the main module is
> initialized breaks the contract that the main module is the first one
> executed. On the other hand, in writing my own plug-ins I definitely
> missed being able to partition functionality into modules (e.g. the
> config plugin could be mostly concerned with tapping into the Modules/
> 2.0 extension points, while delegating any actual config-parsing to a
> "configuration" module). Do you have examples of how this might work?
I am not sure the approach BravoJS takes, providing plugins via
module.constructor.prototype.*, is the right one. What I don't like is
that you can register only one plugin per method and it is used for all
modules globally. This limits the kinds of plugins you can write, but
maybe that was the intent: plugins are only meant to patch the
environment in order to load the same static application source
cross-platform, not to be used by applications to provide custom
functionality.
The former I can live with; for the latter we really need a different
plugin approach that can call plugin modules written as CommonJS
modules. Your main module would be invoked, at which point you could
register plugins that are themselves modules and are used from that
point forward. This could work well as long as plugins are scoped
per-package, since different packages may assign different plugins to
the same condition.
I have various provisions in PINF for loading a package and then acting
on its source while loading modules from it, by processing it via other
modules. All it takes is a declaration/mapping in the package
descriptor. This works because I have another informal plugin layer
above module.constructor.prototype.*. I don't have to worry about this
not working in the browser, as I am using the scraping concatenator. It
would be great if we could eventually spec this to cover all plugin
cases cross-environment.
Christoph