Request for feedback- Yet another client-side module loader (it's different, I promise:))


meelash

unread,
Mar 24, 2012, 8:04:52 PM3/24/12
to nodejs
tl;dr - Client-side require with a server-side component that caches
dependencies, bundles them, and caches the bundles. Need feedback on
the concept, syntax. Need suggestions/contributions on implementation.
Although this works for me, it is almost just a proof-of-concept and
needs work.


As part of a project I'm working on, I spent a few hours writing a
little client-side module loader with a server-side component enabling
what I think is a pretty neat meaning to CommonJS module syntax. This
morning I pulled it out of the rest of my project and attempted to
package it in a useful way for others to use.

The basic idea is this- in your client-side code, you can use require
in either a "synchronous" or asynchronous fashion-
module1 = require('some/path.js');
require('some/other/path.js', function(err, result){ module2 = result; });

An asynchronous require makes a call to the server component to get
the file in question, but before returning the file, the server parses
it, finds all the synchronous require calls, loads those files as
well, and returns the whole thing as a package. That way, when the
file that was originally loaded asynchronously is executed and comes
to one of those synchronous require calls, that file is already there,
and the require is actually synchronous.
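To make the server side concrete, here's a rough sketch of that idea in plain node. The names (findSyncDeps, buildBundle, readModule) are mine for illustration, not Mundlejs's actual API; readModule stands in for however you read a file's source:

```javascript
// Match only synchronous requires: require('x') with no callback argument,
// so require('x', cb) (async) is deliberately not matched.
const SYNC_REQUIRE = /require\('([^']+)'\)/g;

function findSyncDeps(source) {
  const deps = [];
  let m;
  while ((m = SYNC_REQUIRE.exec(source)) !== null) deps.push(m[1]);
  return deps;
}

// readModule maps a path to its source text (e.g. fs.readFileSync).
// The bundle collects the entry file plus the transitive closure of its
// synchronous dependencies, keyed by path.
function buildBundle(entryPath, readModule, bundle = {}) {
  if (bundle[entryPath]) return bundle;     // already collected
  const source = readModule(entryPath);
  bundle[entryPath] = source;
  for (const dep of findSyncDeps(source)) {
    buildBundle(dep, readModule, bundle);   // recurse into sync deps only
  }
  return bundle;
}
```

So for the fileA example below, the bundle would contain fileA, fileB, and fileC, while the async-required fileD would not match and would be fetched on its own later.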

At this point, maybe this screencast demo will help to clarify how it
works: http://screencast.com/t/nOU53BRYUAX

Put another way:
If I async require fileA, and fileA has synchronous dependencies on
fileB, and fileC, and an asynchronous dependency on fileD, the server-
side component will return (in a single "bundle") and keep in memory
fileA, fileB, and fileC, not fileD, and it will execute fileA.
The client-side also separates fetching the files and eval'ing them
(the method of getting files is xhr+eval). So, let's say fileA has
require('fileB'); that executes when the file is parsed and executed
on the client, but require('fileC') is inside a function somewhere.
Then fileA will first be eval'ed, then fileB when it comes across
that, and the text of fileC will just be in memory, not eval'ed until
that function is called or some other require to it is called by any
other part of the program.
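A minimal sketch of what that fetch/eval split might look like on the client (hypothetical names, not mundle's actual code; I use the Function constructor rather than a bare eval, but the idea is the same — sources sit as text until the first require touches them):

```javascript
const sources = {};   // path -> source text, filled in from a bundle
const cache = {};     // path -> { exports } for modules already eval'ed

// Called for every file in a received bundle: store the text, don't run it.
function define(path, source) { sources[path] = source; }

// Called on demand: eval the stored text the first time, then reuse.
function requireSync(path) {
  if (cache[path]) return cache[path].exports;
  const module = { exports: {} };
  cache[path] = module;   // register before eval so cycles see partial exports
  new Function('module', 'exports', 'require', sources[path])(
    module, module.exports, requireSync);
  return module.exports;
}
```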

Another example-
fileA has dependencies fileB, fileC, fileD, fileE, fileF
fileG has dependencies fileC, fileE, fileH

When I call require('fileA', function(err,result){return 'yay';});,
the module loader will load fileA, fileB, fileC, fileD, fileE, and
fileF all in a single bundle.
If I, after that, call require('fileG', function(err,result){return
'yay';});, the module loader will only load fileG and fileH!
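That second request only sending fileG and fileH comes down to a simple set difference on the server. A sketch, assuming the client reports the module paths it already holds (illustrative names):

```javascript
// fullBundle: path -> source for the requested module and all its deps.
// clientHas: list of paths the client says it already has in memory.
function diffBundle(fullBundle, clientHas) {
  const have = new Set(clientHas);
  const out = {};
  for (const [path, source] of Object.entries(fullBundle)) {
    if (!have.has(path)) out[path] = source;   // only send what's missing
  }
  return out;
}
```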

Hopefully, that's clear....

The advantages-
Being aware of the difference between synchronous and asynchronous
require in your client-side code makes it extremely natural to break
all your client-side code into small reusable chunks: there is no
penalty, and you don't have to "optimize" later by deciding what to
package together and what to package separately.
Handling dependencies becomes nothing. You don't have to think about
it.
The server can have a "deployment" mode, where it caches what the
dependencies of a file are and doesn't ever need to parse that file
again.
In "deployment" mode, the server can also cache bundles of multiple
files that are requested together, so when another client requests
that same bundle, it is already in memory.

To sum up:
xhr+eval-when-necessary client-side module loader
both synchronous-ish and asynchronous require in your client side-code
--the synchronous require is actually a command to the server-side
component to bundle
server-side component
--parses for dependencies and bundles them together
--can cache dependency parsing results and whole bundles


So- thoughts? Is this a horrible idea? Are there some gotchas that I'm
missing?

Specific advice needed-
• How to package this in a way that it can be easily used in other
projects? How can I make it integrate seamlessly with existing servers
and make it compatible with different transport mechanisms?
• How to handle path resolution?
• Suggestions for licensing?
• Suggestions for a name- (Mundlejs is a portmanteau of Module and
Bundle- didn't really think long about it)

Things that need to be (properly) implemented:
• server-side "parsing" is just a brittle regexp right now:
(line.match /require\('(.*)'\)/)
• neither type of server-side caching is implemented (pretty easy to
do)
• uniquely identify clients and keep the server aware of what modules
they already have, so we can just send the diff of cached modules-
currently, I'm sending the entire list of already cached modules with
every xhr call, so the server doesn't load a dependency twice.
• proper compatibility with module specifications (i.e. CommonJS)-
right now, it's just require and module.exports
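For what it's worth, that brittle regexp could be hardened a little even without a full parser, e.g. by stripping comments first and anchoring the match so an async require (with a callback argument) is not picked up as a sync dependency. A sketch, not the actual implementation:

```javascript
function extractSyncRequires(source) {
  const noComments = source
    .replace(/\/\*[\s\S]*?\*\//g, '')   // drop block comments
    .replace(/\/\/[^\n]*/g, '');        // drop line comments
  const out = [];
  // [^']+ instead of .* avoids greedily swallowing several requires on one
  // line; the closing ') right after the quote excludes require('x', cb).
  const re = /require\('([^']+)'\)/g;
  let m;
  while ((m = re.exec(noComments)) !== null) out.push(m[1]);
  return out;
}
```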


Code is available here: https://github.com/meelash/Mundlejs
To test it:
from Mundlejs/tests/, run
node server.js
visit http://127.0.0.1:1337/ and open your browser console.

coderzach

unread,
Mar 26, 2012, 2:40:46 AM3/26/12
to nod...@googlegroups.com
This seems like a really cool idea.  Here are my opinions on your questions:


How to package this in a way that it can be easily used in other
projects? How can I make it integrate seamlessly with existing servers
and make it compatible with different transport mechanisms?

If you want to maximize use by others, you could make it Connect
middleware for portability.


How to handle path resolution?

You probably don't want to divulge the entire directory structure of your server,
so you should probably have a root directory, which will be the "root" from
which files can be included on your server.

Suggestions for licensing?

MIT is probably the most common license in the node.js community.

uniquely identify clients and keep the server aware of what modules
they already have, so we can just send the diff of cached modules-
currently, I'm sending the entire list of already cached modules with
every xhr call, so the server doesn't load a dependency twice.

Maybe you could do the static analysis for dependencies on the client, so that you
don't need to maintain that state on the server?  Then you have the client request
"/modules/my-module.js?deps=/modules/a.js:/modules/b.js" based on what it already
has

Bruno Jouhier

unread,
Mar 26, 2012, 3:01:41 AM3/26/12
to nod...@googlegroups.com
I have a similar thing. Not fully packaged but I published it a while ago: https://github.com/Sage/streamline-require

It analyzes the dependencies server side and supports both synchronous and asynchronous requires. Like yours, it returns the entire dependency graph in one shot, so the client gets everything it needs in one roundtrip. It also monitors changes to the source tree and returns a 304 if the browser has an up-to-date version.

There is one further refinement: when you request an additional module asynchronously, the client sends the server the list of modules that it had requested before; the server computes the list of dependencies of the new module as well as the list of dependencies of the modules that had been requested before, and it sends back to the client a single response with the modules of the first list that are not in the second. So, if the client had gotten A, B, C, D, E, F in a first require and requests G, which requires B, C, and H, the server only returns G and H to the client. If the client then requests I, which requires C, F, H, and J, the server returns only I and J.
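Bruno's refinement boils down to: closure of the new module, minus the union of the closures of everything requested before. A sketch (not his actual code), assuming a depsOf helper that returns a module's full dependency list including the module itself:

```javascript
// previousModules: what the client says it requested before.
// depsOf: assumed helper returning the full dependency closure of a module,
// including the module itself (e.g. computed from cached parse results).
function modulesToSend(newModule, previousModules, depsOf) {
  const alreadySent = new Set();
  for (const m of previousModules) {
    for (const d of depsOf(m)) alreadySent.add(d);  // union of old closures
  }
  return depsOf(newModule).filter(d => !alreadySent.has(d));
}
```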

Overall, this is extremely fast.

I was doing the dependency analysis client side before and loading the modules one by one. Terrible in comparison.

Bruno

meelash

unread,
Mar 26, 2012, 3:33:38 PM3/26/12
to nod...@googlegroups.com
@Bruno- very cool... I especially like the watch ability.

Mine also handles that refinement, as you can see from the second example with fileA, fileB, etc. Although I send the entire list of existing modules from the client-side (vs. just those that were required previously), which means the server doesn't have to compute those dependencies again. As I mentioned, I was toying with the idea of having the server remember which client has which modules and just sending the most recent batch every time from the client for confirmation. But, given that the server can cache the list of dependencies of any module, I think your approach is more robust while still keeping the size of the request down and not requiring any extra processing on the server...

The thing I'm most excited about is how optimization can happen completely autonomously and precisely in deployment. You could put one function per file (if that was something you like to do for some reason) and after the first user visited your app and clicked around, you'd have a bunch of cached bundles exactly tuned to how users actually use your app.

saleem

meelash

unread,
Mar 26, 2012, 3:47:18 PM3/26/12
to nod...@googlegroups.com
Thanks, coderzach!

If you want to maximize use by others, you could make it connect
middleware for portability.


Are there any examples of what this might look like? Sorry, I'm a bit of a newbie...
 

How to handle path resolution?

You probably don't want to divulge the entire directory structure of your server,
so you should probably have a root directory, which will be the "root", of where
files can be included from on your server.

Right, this is a good point. I'm thinking that, once I can get complete compatibility with node's require, it should also check the node_modules folder so code can be loaded from there to the client.
 

Suggestions for licensing?

MIT is probably the most common license in the node.js community.

uniquely identify clients and keep the server aware of what modules
they already have, so we can just send the diff of cached modules-
currently, I'm sending the entire list of already cached modules with
every xhr call, so the server doesn't load a dependency twice.

Maybe you could do the static analysis for dependencies on the client, so that you
don't need to maintain that state on the server?  Then you have the client request
"/modules/my-module.js?deps=/modules/a.js:/modules/b.js" based on what it already
has

I think even just doing the dependency analysis might be too heavy for client-side, although that's just intuition, I don't have experience or benchmarks to back that up. Especially if we're encouraging modules to be as small and as numerous as makes sense.

meelash

unread,
Mar 26, 2012, 3:49:28 PM3/26/12
to nod...@googlegroups.com
btw, bruno, how do you do the dependency analysis? A full parser?



Bruno Jouhier

unread,
Mar 27, 2012, 4:26:32 AM3/27/12
to nod...@googlegroups.com
Same way you do it: regexp

Mariusz Nowak

unread,
Mar 27, 2012, 4:34:34 AM3/27/12
to nod...@googlegroups.com

deitch

unread,
Mar 27, 2012, 7:24:07 AM3/27/12
to nod...@googlegroups.com
@meelash,

I am pretty sure I get the basic concept, but not the details, so it's hard to comment.

Let me rephrase how I understand the problem, and then maybe you can help explain how it solves it?

On server-side, I just require(modulename) and get it. It is easy, and every module can require whatever it needs.

On client-side, I have two problems:
a) Within a file, I cannot do require, the module (i.e. .js file) needs to assume whatever it needed was loaded.
b) I actually need to load the file separately, either through <script src="foo.js"></script> or using a loader like labjs.

It sounds like you are trying to solve these in one fell swoop, by including one file, say <script src="specialloader.js"></script> or whatever, and having that then load all the others. However, rather than (a) fileA.js assuming the contents of fileB.js (which it needs) have already been loaded and (b) manually creating a script tag to include fileB.js, you are doing something inside fileA.js that is then parsed by middleware on the server side? If so, how do I do it, and how do I not break the traditional client-side?

Truth is, browsers desperately need a "require()" function, so that a script can just load other scripts...

meelash

unread,
Mar 27, 2012, 3:48:00 PM3/27/12
to nod...@googlegroups.com
Cool Mariusz, looks good... I'll switch to that this weekend, hopefully.

saleem

deitch

unread,
Mar 27, 2012, 7:22:26 PM3/27/12
to nod...@googlegroups.com
Actually, modules-webmake looks pretty cool. I am still trying to wrap my head around it, and how it is different than labjs or any of the other loaders, or browserify or other converters.

So, I would write code in the CommonJS style, but have it available to the browser? I still need to write for the browser (jQuery, DOM, non-V8 limitations, etc.), but rather than *assuming* I have stuff in, say, APP.utils.foo, I could just do 

var foo = require('./util/foo');

and this would do the right thing, and render it up for the browser to use? 

I could see how the programmatic approach might be useful. I add a line:

<script src="/myassets.js"></script>

then have expressjs intercept it, run Webmake, then minify and gzip the source. To prevent redoing, it could cache the result in memory or even on disk. If it exists, serve it out; if not, Webmake+minify+gzip and serve it out.
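The intercept-build-cache flow described above could be sketched as middleware like this (buildAsset is a hypothetical webmake+minify+gzip step; nothing here is an actual connect/express API beyond the standard (req, res, next) shape):

```javascript
const cache = {};   // in-memory cache of built assets, keyed by URL

function assetMiddleware(buildAsset) {
  return function (req, res, next) {
    if (req.url !== '/myassets.js') return next();  // not ours, pass through
    if (!cache[req.url]) cache[req.url] = buildAsset();  // build on first hit
    res.setHeader('Content-Type', 'application/javascript');
    res.end(cache[req.url]);   // later hits are served straight from memory
  };
}
```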

Q: does this have any impact on browser render times? This takes all 100+ files and makes them a single file, as opposed to loading multiple in parallel.

Cool.

Saleem Abdul Hamid

unread,
Apr 2, 2012, 3:04:54 PM4/2/12
to nod...@googlegroups.com
@deitch- 

I see the problem differently. I would say that the fundamental problem of a client-side module loader is exactly the same as that of a server-side one:
1) Allow me to write really modular reusable code, without artificial restrictions.

The client-side module loader has a lot of domain-specific implementation details that differ from the server-side. The answer to these should be to hide the ones that developers can't use for their advantage, as much as possible, and provide easy-to-use syntax for the ones that they can use for their advantage, or really need to know about and take into consideration.

Example of the first kind:
Dynamic optimization/caching are things that we want, but shouldn't have to think about

Example of the second kind:
In a large web app, a developer knows as he's writing it that something will not happen until a user clicks something, or may never happen. So never loading that code until it is needed is an advantage. In other cases, a certain portion of the app needs a bunch of dependencies immediately, so loading them all in separate server calls doesn't make sense.

To handle those two cases, we have a simple syntax of synchronous vs. async require statements.

The examples are probably the easiest way to understand how it works.

Saleem Abdul Hamid

unread,
Apr 2, 2012, 3:15:44 PM4/2/12
to nod...@googlegroups.com
As I understand it, webmake is the same thing as requirejs+optimizer: the only difference is webmake uses sync require syntax and requirejs+optimizer uses async syntax. In the end you have all 100+ files loading as a single file.

As opposed to this, consider just using requirejs without its optimizer: then you have 100+ files loading whenever they're needed, sometimes with 10-20 requests at once, which is also not optimal.

A third existing option is to use requirejs and selectively optimize, according to your knowledge of how the program works, into different "super-modules", each containing a bunch of your actual modules. The problem with this is that it is work, and given reused modules in different areas of an app which might be loaded at different times, you'd end up getting the same code in more than one super-module.

My module loader (and streamline-require) is a blend of the first two approaches to get an automatically optimized version of the third. As you're developing, you just sync or async require exactly as makes sense for your program. You deploy, and bundling and caching are all optimized automatically.

deitch

unread,
Apr 3, 2012, 2:57:03 AM4/3/12
to nod...@googlegroups.com
It does look like it. I think I like the sync style better only because it maps so cleanly to what we know and love in node. On the other hand, the browser is *not* node; modules in node are loaded directly from the local filesystem, which can be done much more efficiently than in the browser, where every require() has to come from the server. That, of course, leads to either lots of little calls (requirejs without optimizer) or one or a few mongo files (full or selective optimization), as you pointed out.

I can see how requirejs+optimizer with bundling intelligence - or for that matter webmake with the same - makes a lot of sense. You are saying that mundlejs is exactly that. I will need to take a look.

deitch

unread,
Apr 3, 2012, 3:05:19 AM4/3/12
to nod...@googlegroups.com
"Artificial restrictions." What qualifies as artificial?

I have not taken a good hard look at this stuff in quite some time; mundle, require, webmake, lot of interesting stuff.

Do any of these also do environment-specific optimization? E.g. in development and testing, I want all my client-side code loaded - however I do it: require, mundle, webmake, script tags, whatever - as is. In production (and maybe in beta, too, but that should be up to me), I want it all minified/gzipped. To my mind, the way to do it is programmatically: within my node app.js, have it make some call in "server.configure(env, fn)" wherein it will check for a cached minified/gzipped version and send it if it's there, else build it and send it. But I'm open to other ways.

Saleem Abdul Hamid

unread,
Apr 17, 2012, 5:10:15 PM4/17/12
to nod...@googlegroups.com
Update- I made some improvements like using Mariusz's parser, adding documentation, and improving the packaging (still not ideal). A big addition is path resolution, so you can require something relative to a module you already have on the client.

When called from a file at baseDir/testFolder1/testFile1.js,
require('./testFolder2/testFile2.js')
will load baseDir/testFolder1/testFolder2/testFile2.js.

This works for both asynchronous and synchronous requires.
Next up is probably caching.

mgutz

unread,
Apr 18, 2012, 12:46:52 PM4/18/12
to nod...@googlegroups.com
I'm not sure how the end result is different from the many libraries out there. It seems overly complicated. Have you looked at connect-assets, stitch, snockets...? Pre-package your modules and let the browser handle the caching it already does. I'm not convinced that fetching every dependency as needed is more efficient than loading one or two minified/compressed packages. I've tried many AMD loaders, and in the end our projects are simpler without them. Stitch was trivial to patch to support multiple packages, something like `require('package/some/path')`.


On Saturday, March 24, 2012 5:04:52 PM UTC-7, Saleem Abdul Hamid wrote:

Saleem Abdul Hamid

unread,
Apr 19, 2012, 3:16:33 PM4/19/12
to nod...@googlegroups.com
Hi mgutz,

Thanks for the feedback.
I'm not convinced that fetching every dependency as needed is more efficient than loading one or two minified/compressed packages.
Half of the whole point is actually to not fetch every dependency as needed. You want to fetch exactly optimized minified/compressed packages (whether it is one or two or ten depends on your project). The other half of the point is to not:

 Pre-package your modules and let the browser already handle caching.
What will you do if two modules have some shared dependencies and you don't know which one will be required first? Load two copies of the same code? How will you handle hand-packaging your dependencies into modules in an app with hundreds of modules? Why optimize something imperfectly and manually when it can be automated?

It seems overly complicated.
This may be because I'm explaining how things work. For the developer, it is extremely simple. Just:
module1 = require('module1');
for a dependency that your UX requires to be available immediately, and
require('module2', function(err, module2){});
for a module that your UX allows to have loading time.

Have you looked at connect-assets, snockets
Connect-assets uses snockets, so I'll group them together. Besides what I mentioned above about shared dependencies and loading the same code twice, it doesn't appear that different modules are name-spaced separately, which is a shortcoming vs. just using requirejs for example. Also, declaring dependencies in comments is not that nice... what happens if you're using automatic documentation generation? And you're forcing upon yourself an inherent compile step, even when writing in javascript. And there's no way to reuse that same module on the server. These are a few shortcomings of that approach that are solved by mundlejs.

Have you looked at stitch?
Again, the same problem with shared dependencies, which, in case you think it is not a serious problem, consider the case of a client-side framework: some base classes might be contained in modules that would be in the dependency trees of every module, leading to serious code duplication and maybe doubling, or more, the size of the code running in the browser.

Have you looked at ...?
I did extensively look for another module loader before starting mundlejs, because, as I said, I actually was using it for a project. I didn't find any. Since I posted here, I've discovered that there are at least two pre-existing libraries that approach the problem the way I think it should be approached and am trying to do with mundlejs-
1) Bruno's streamline library that he posted in this thread
2) YUI loader, which someone emailed to me in response to this thread
The reasons I don't drop mundlejs and use one of these two are more taste/domain related than any fundamental difference of approach. Every other client-side loader/pre-packager/bundler is, I think, of pretty limited use.


If you have a simple app with a few js files then mundlejs is not *necessary* for you. It's my goal to make it nice enough and have so little overhead that you still might use it just for the convenience. But if you have a single-page web app with >1MB of javascript, I think you really need something like mundlejs and that's actually the kind of project it was written to support.

Bruno Jouhier

unread,
Apr 19, 2012, 3:57:33 PM4/19/12
to nod...@googlegroups.com
I confirm that the approach works really well. It's very easy to use and it handles lots of dependencies very efficiently. Also, it works great if you share files between client and server because the same "require" code works on both sides.

I never took the time to polish my streamline-require library. It's good enough for our own usage, but it has some weaknesses, like the fact that it extracts the require dependencies with a regexp (I once hit a case where it did work, but the regexp used a lot of CPU because of heavy backtracking). Also, my library does not handle some features of require (if you give it a directory it handles the index.js default, but it does not go as far as looking up the main from the package.json file -- we never needed this for client-side modules).

So, it is good to see someone package a full-blown implementation. Go ahead Saleem.

Bruno

Saleem Abdul Hamid

unread,
Sep 22, 2012, 1:51:40 PM9/22/12
to nod...@googlegroups.com
Hi everyone,

This is now available as Connect middleware.



Saleem Abdul Hamid

unread,
Oct 5, 2012, 1:24:35 PM10/5/12
to nod...@googlegroups.com
More updates :)

I added a plugin API and wrote two example plugins. Now you can use those plugins to include coffee-script and jade files. So you can include jade template views in your bundles and then render the pre-compiled function client-side as much as you want. There's a demonstration screencast of both plugins in the readme.

I also forgot to mention that I added a lot of caching to the server and cleaned up some performance bottlenecks in the proof-of-concept version, so it now serves files faster than Connect (even without taking into account the optimizability advantages). The crude, but sufficient ;), benchmarks are available in the readme.




Martin Cooper

unread,
Oct 5, 2012, 8:39:39 PM10/5/12
to nod...@googlegroups.com
GPLv3? Really?

That seems like an odd choice in Node's predominantly MIT / BSD world,
especially if you're looking for adoption.

--
Martin Cooper

Saleem Abdul Hamid

unread,
Jan 3, 2013, 12:44:54 PM1/3/13
to nod...@googlegroups.com
Hi Martin,

Aren't there plenty of successful open-source projects with GPL? Is there any reason why it doesn't work in the node community, besides what everyone else is doing? Are there a majority of commercial projects using node?

Please don't mistake the questions for anything combative, just trying to get a full grasp before switching.

Thanks,

Saleem

Oliver Leics

unread,
Jan 3, 2013, 6:09:59 PM1/3/13
to nod...@googlegroups.com
On Thursday, January 3, 2013 6:44:54 PM UTC+1, Saleem Abdul Hamid wrote:
Aren't there plenty of successful open-source projects with GPL? Is there any reason why it doesn't work in the node community, besides what everyone else is doing? Are there a majority of commercial projects using node?


Tauren Mills

unread,
Jan 5, 2013, 5:39:33 AM1/5/13
to nod...@googlegroups.com
Saleem,

I had a little time to experiment with mundle tonight. It's interesting and shows promise, but I have some significant concerns. I'd be interested in your perspective on them:
  1. Modules are loaded using XMLHttpRequest, which immediately brings up cross-domain concerns. Only pages on the same protocol and exact hostname will be able to load these modules without adding JSONP or CORS support.
  2. When a module is requested, the payload returned is JSON containing strings of code which are eval'ed. If you didn't already know, eval is evil. But even more troubling is that code loaded this way doesn't show up in the WebKit inspector (without using @sourceURL), making it hard to debug, set breakpoints, trace, etc.
I see room for a tool like mundle, as it solves some problems other loaders do not. But either one of the concerns above is enough for me to move on. I suggest that you read this link carefully, as it explains these issues better than I can: 

I recommend you consider making the following improvements:
  1. To get around the cross-domain issues, load modules by injecting a script tag into the <head> instead of using an AJAX call. A simple example of doing this can be found in the $script loader (see https://github.com/ded/script.js/ ). More complex implementations, such as YepNope, allow scripts to be loaded concurrently in any order, but executed in the specific order you desire. This is done by loading scripts as an image and letting the browser cache them, then loading them again from cache as JS in the correct order to execute them.
  2. Have the server combine modules into a regular Javascript file, not a JSON file. You have an advantage that most loaders do not - there is a server-side component! So use it to build and wrap the raw modules with the correct "exports" context and so forth. 
  3. Word of advice: you will get more interest if mundle were written in Javascript, not Coffeescript. Most developers I know want critical components (such as their loader) to be pure JS. I have nothing against Coffeescript myself, and have used it on some projects. But I believe it is something better suited to building your custom application, not general-purpose tools such as a loader.
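For reference, suggestion 1 in the $script style is only a few lines. A sketch of the script-tag injection (not mundle's code; the hypothetical loadScript just appends a <script> element, which is exempt from same-origin restrictions):

```javascript
// Inject a <script> tag instead of using XHR, sidestepping same-origin
// limits; the browser fetches and executes the script, then fires onload.
function loadScript(src, onload) {
  var el = document.createElement('script');
  el.async = true;          // don't block parsing while the script downloads
  el.src = src;
  el.onload = onload;
  document.getElementsByTagName('head')[0].appendChild(el);
}
```

The trade-off versus XHR+eval is that a script tag executes as soon as it loads, so the fetch/eval separation described earlier in the thread is lost unless the server wraps each module in a registration function first.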
Lastly, I created a pull request to fix an issue when loading mundle modules from an HTTPS server. I found that all modules were loaded via HTTP only, even if the current page is HTTPS. Here's the fix:

I hope this helps!

Tauren
 


Saleem Abdul Hamid

unread,
May 7, 2013, 9:36:30 AM5/7/13
to nod...@googlegroups.com, tau...@tauren.com
Hi Tauren,

I know it's been a good while since you made these comments :). Unfortunately, a new full-time gig, plus a move, plus family busyness threw my schedule far enough out of whack that I couldn't make time for a while. For the past month, I've been regularly putting aside time for mundle again.

Anyway, your comments are really appreciated, and even more your time looking over and experimenting. I had to do a little bit of research to come up with the answers to your questions, in spite of having thought about these things early on when making these decisions. So that reminded me that I need to document the answers to a lot of the basic decisions I made for the future, as well.

3) I'll answer the easiest one first ;)- coffeescript. I appreciate the advice, but frankly, this is just for fun and I enjoy writing coffeescript, while writing javascript can be a chore. In any event, I think it's a silly objection, given that the javascript is in lib/ and is plenty readable.

1 and 2) As far as I can remember, the choice of XHR+eval had a lot to do with the notion of separating the asynchronous loading of scripts from their execution, which is blocking. At the time, I was working on a very large code base, and large code bases are, I think, one of the core would-be customers of mundle. Therefore, I felt it was an important performance boost to be able to load a lot of small code chunks asynchronously and have them available to be evaluated individually on need.


If there's a way to achieve this using other more versatile script loading methods, I'd definitely be open to it, unless there's something else that drove my decision that I'm not remembering right but will remember later ;)

As far as the cross-domain concern, I'm not sure this is a concern, since, by design, mundles have to be served by my server-side component which would be on my own domain. Is there a use-case where this would be a concern, given this peculiarity of the client-server interaction that mundle has?

Thanks again, and thanks for my first outside-pull request.

Saleem Abdul Hamid

unread,
May 7, 2013, 2:48:14 PM5/7/13
to nod...@googlegroups.com
BTW, I also rereleased this under the MIT license. Thanks Martin and Oliver for that advice.

Saleem Abdul Hamid

unread,
May 10, 2013, 10:11:24 AM5/10/13
to nod...@googlegroups.com, tau...@tauren.com
So, I've been thinking... and cross-domain requests are going to be a concern if I am to implement my ace-in-the-hole feature: bundle caching to a CDN. It seems like JSONP would be a great solution for this.

Another area where I'm interested in advice: I'm looking for a large open-source application that I can apply this to for public testing/demo. The more client-side js, the better (I think >1MB un-minified would be nice), and ideally something that's already using require.js or some other such solution, which would be very easy to port over. I have used this on a quite large project that I designed it for, which was using concatenation, so it took a bit more effort to really make a nice port. But, unfortunately, that source is proprietary and I don't think I could get permission to show it off.

Any ideas for that? 



Saleem Abdul Hamid

unread,
May 11, 2013, 10:18:35 AM5/11/13
to nod...@googlegroups.com
BTW, with respect to coffeescript vs javascript, let me just make it clear that the package is compiled to js before publishing to npm. Also, for the sake of one or two very good contributors with a strong anti-coffeescript preference, I'd even be willing to switch from coffeescript for the source. Just want to make clear that I'm not too fanatic about this.

Also, I'm going to start implementing the cache-to-CDN this weekend. I'm sure it's not a single-weekend job, but stay tuned... mundle will really be amazing with this feature, capable of easily handling even very heavily trafficked sites.