
Are callbacks good enough?


Bruno Jouhier

unread,
Jan 9, 2011, 6:02:32 PM1/9/11
to nodejs
Last Friday, I submitted a new proposal to ease the pain that many of
us are experiencing with callbacks (Asynchronous Javascript for
dummies). A bit to my surprise the main feedback I got so far has
been: "Sorry, not interested! Callbacks are just fine! Programmers
just have to learn how to write them!"

I'd like to challenge this.

First, even if programmers will eventually "get there", not all of
them will. So do we want node.js to remain a niche technology that
only an elite will be able to master? Or do we want it to appeal to
mainstream programmers? The former would be a terrible waste. If we
want the latter, we need an alternative.

Second, and this is where the path I followed taught me something
interesting: writing these callbacks is a real chore, and a somewhat
pointless one, because the machine could save us from it. So maybe we
would be an elite, but an elite of slaves who keep applying the same
set of silly patterns over and over, only to produce hard-to-read code
at the end of the day. Not a very appealing prospect as far as I am
concerned.

If you're puzzled at this point, there are more details in the little
tale that I published on my blog today:
http://bjouhier.wordpress.com/2011/01/09/asynchronous-javascript-the-tale-of-harry/

Bruno

Marak Squires

unread,
Jan 9, 2011, 6:47:30 PM1/9/11
to nod...@googlegroups.com
Control flow libraries belong in user-land for now. If you have a particular way you like handling this stuff, then write a library for it. 

If your library is superior, then people will adopt it. If people aren't receptive to your ideas, well, that's life. 

- Marak





Bradley Meck

unread,
Jan 9, 2011, 6:48:33 PM1/9/11
to nod...@googlegroups.com
I would like to point out that synchronous code is comparatively poor at expressing asynchronous behavior. The return values of functions that take callbacks are one example of this, as is the fact that callbacks show where control of the program is being handed off. Hiding the asynchrony loses some of the characteristics of how things actually work underneath. I feel that, yes, there is little in the way of tutorials and resources to help with complex patterns of asynchronous behavior such as tree walking. While tools such as this are amazing as a mitigating factor for complexity (and good god can it get complex), understanding the patterns of what is really going on should still be encouraged.

Max Bridgewater

unread,
Jan 9, 2011, 6:56:18 PM1/9/11
to nod...@googlegroups.com
Here are my two cents on this issue. Before coming to nodejs, I used
the event paradigm quite a lot. My latest experience prior to nodejs
was, however, a nightmare. The company I worked for had hired a
well-known company to build an asynchronous middleware for large-scale
distributed games. As a software engineer hired to build the
application logic on top of this middleware, my summary is that it was
a real pain in the ass.

My belief is that this pain came from the fact that the guys who
developed this middleware tried to make EVERYTHING asynchronous. They
rewrote most aspects of Java to make it asynchronous. Needless to say,
testing the application was a huge challenge.

The takeaway for me was this: a language can't be completely
asynchronous. You can't use asynchronous communication everywhere.
There needs to be a healthy balance between synchronous communication
and asynchronous communication. The designer of the application should
be responsible for identifying the parts of his software that should
use one or the other. This is why I'm in favor of keeping asynchronous
communication explicit, and instead educating developers and helping
them make informed decisions so they can use the right concept for the
right situation. In my view, hiding asynchronous communication
underneath synchronous communication doesn't help in making the right
decision.

Max.


todd hoff

unread,
Jan 9, 2011, 8:23:37 PM1/9/11
to nodejs


On Jan 9, 3:56 pm, Max Bridgewater <max.bridgewa...@gmail.com> wrote:


> The takeaway for me was this: a language can't be completely
> asynchronous. You can't use asynchronous communication everywhere.
> There needs to be a healthy balance between synchronous communication
> and asynchronous communication. The designer of the application should
> be responsible for identifying the parts of his software that should
> use one or the other. This is why I'm in favor of keeping asynchronous
> communication explicit, and instead educating developers and helping
> them make informed decisions so they can use the right concept for the
> right situation. In my view, hiding asynchronous communication
> underneath synchronous communication doesn't help in making the right
> decision.

An actor model can be a good compromise. Events are queued up for an
object, processed sequentially, and controlled by a state machine.
Everything can still be async, but the logic tends to be clearer
because you can look in one place and see what's happening.
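
For illustration, a minimal sketch of that idea (the Actor name and the
handle(event, done) signature are made up, not any particular library):

// Events queue up per object and are handled one at a time,
// so the handler logic reads sequentially.
function Actor(handle) {
  this.queue = [];          // pending events
  this.busy = false;        // true while an event is being handled
  this.handle = handle;     // handle(event, done) processes a single event
}

Actor.prototype.send = function (event) {
  this.queue.push(event);
  if (!this.busy) this.next();
};

Actor.prototype.next = function () {
  if (this.queue.length === 0) { this.busy = false; return; }
  this.busy = true;
  var self = this;
  var event = this.queue.shift();
  this.handle(event, function () { self.next(); });   // strictly one at a time
};

A state machine would live inside handle; the point is simply that only
one event is ever in flight per actor.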

Floby

unread,
Jan 10, 2011, 4:56:13 AM1/10/11
to nodejs
Am I the only one not to get it? ^^

I entered too. Node + MongoDB.

Floby

unread,
Jan 10, 2011, 5:11:18 AM1/10/11
to nodejs
sorry, that last post was supposed to be in
http://groups.google.com/group/nodejs/browse_thread/thread/24162f361cc23163

Floby

unread,
Jan 10, 2011, 5:21:47 AM1/10/11
to nodejs
I still have to say that every javascript programmer, and I mean each
one of them, doesn't know any other way of doing things than
asynchronously. That's how jQuery and XHR taught them to code, and I
believe they are quite happy with it.

Also, there was a point to the design of nodejs in making IO
asynchronous -> stop wasting time waiting for data. Ryan explains it
well in the talk linked on the nodejs homepage.
Callbacks are fine by me. Javascript, with its anonymous functions, is
one of the best suited languages for asynchronous operations.


pagameba

unread,
Jan 10, 2011, 6:42:43 AM1/10/11
to nodejs
Every language that I have coded in over the years has certain idioms
that are fundamental to being able to write good code in that
language. Some of these things come naturally from language itself
and some are best practices of the community. In the case of
javascript, closure is an example of a fundamental property of the
language that you pretty much have to 'get' in order to write decent
code while the use of callbacks in asynchronous code is more of a best
practice. I can't see that it would be possible to write what would
be considered good client-side code these days without understanding
how it works and why it is good.

On the other hand, I think it is possible to overuse it in code to the
point of ridiculousness. I saw a discussion once about someone who
wanted to know if the math functions in javascript could be turned
into asynchronous methods using callbacks ... add(1,2, function(sum)
{ ... }); ... I guess there is probably some esoteric reason why this
might be desirable but honestly ...

Anyway, the use of callbacks for naturally asynchronous code is part
of the essential nature of using javascript in web browsers and
node.js to accomplish valid tasks, and I don't think any amount of
sugar coating is going to help people. Better that they go through the
pain of learning and become better programmers in the long run than to
sugar coat it for them and have them forever writing crap code because
they don't understand something fundamental. Which is why I'm not a
good C++ programmer: I don't 'get' templates :)

Vyacheslav Egorov

unread,
Jan 10, 2011, 8:53:46 AM1/10/11
to nod...@googlegroups.com
> Last Friday, I submitted a new proposal to ease the pain that many of
> us are experiencing with callbacks (Asynchronous Javascript for
> dummies).

Why try just to _ease_ the pain? The pain will disappear completely if
you stop using the tool which causes it.

You are trying to fight a problem which is not a problem but rather a
_design_decision_.

If you have function foo which is in CPS and bar which is not, then you
cannot substitute foo for bar and bar for foo. They are not
interchangeable. You have to accept it.

One can pile wrappers on top of wrappers and source-to-source
transformations on top of source-to-source transformations, but that
would not fix the "problem". It will only hide it. It will make
debugging code more difficult: no nice stack traces and no debugger
support (in the case of transformations), or a stack polluted with
"async-framework" internals (in the case of wrappers). It will make
reasoning about code difficult because the transformations you are
applying are far from trivial. The "dummy" will have a hard time
figuring out what is going on behind the scenes and where he/she has
to add _ to make the magic work.

--
Vyacheslav Egorov

Jorge

unread,
Jan 10, 2011, 9:59:58 AM1/10/11
to nod...@googlegroups.com

Here's another async directory walker function https://gist.github.com/742540 that, compared to https://github.com/Sage/streamlinejs/blob/master/examples/diskUsage2.js , is, in my opinion, much easier to understand, because the program flow is obvious/immediately apparent instead of hidden behind an additional syntax layer, that of an async helper/abstraction framework/mechanism (noise?).

In my head, asynchronous program flow becomes a quite straightforward thing once you realize that, most of the time, for maximum clarity, you can use a plain, named callback function declaration (instead of an inlined anonymous lambda) as if it were the *label* of the code block to which the program flow is going to jmp or goto next.
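
For illustration, a small sketch of that style (loadJson is a made-up
example, not taken from the gist above):

var fs = require('fs');

// Named callbacks used like labels: the program flow "jumps" to onRead
// when the read completes, instead of nesting an anonymous function inline.
function loadJson(path, done) {
  fs.readFile(path, 'utf8', onRead);

  function onRead(err, text) {
    if (err) return done(err);
    var data;
    try { data = JSON.parse(text); } catch (e) { return done(e); }
    done(null, data);
  }
}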
--
Jorge.

cwolves

unread,
Jan 10, 2011, 12:22:04 PM1/10/11
to nodejs
Bruno -

I personally commend you for coming up with a rather clean solution to
a problem that does bother so many people. I'm not sure how I feel
about the solution being a pre-processor, but the way I see it, people
use pre-processors for CSS & HTML all the time, so why not for the
actual server code? I haven't looked at exactly how you're
implementing things, but assuming it's compiled once and then run
indefinitely, there's not much of a performance hit either, and the
results can be cached if desired.

I understand why everyone is bashing on this -- it introduces invalid
JS code, and personally I find it very unlikely that I'll ever use it
since I hate pre-processors... But those people, and myself, don't
have to use it.

As for the people that are saying "People were taught to code JS
asynchronously" -- that's not entirely true. The vast majority of "JS
Programmers" (put in quotes since I wouldn't call the vast majority of
them competent) know how to code async in response to events:

onclick: do this
onload: do that

That's not the same thing as breaking a single linear thought into
multiple pieces:

onclick: do A, B, C. done.
vs
onclick: do A & B, wait for the results of A to come back, do C. When
B & C are done, done.

I don't think anyone will argue that programmers in almost any other
language are used to the first version.
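
For illustration, a hedged sketch of that second flow with plain
callbacks (doA, doB and doC are hypothetical node-style async
functions; error handling is omitted for brevity):

// Start A and B together; C needs A's result; "done" fires once both
// B and C have finished.
function onClick(done) {
  var pending = 2;                                   // wait for B and C
  function finish() { if (--pending === 0) done(); }

  doA(function (err, resultA) {
    doC(resultA, finish);                            // C starts only after A
  });
  doB(finish);                                       // B runs in parallel
}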

So how does this compare to other libraries? I personally use step,
and syntactically this is nearly identical while being much cleaner:

step(function(cb){
  doSomething(this.parallel());
  doSomethingElse(this.parallel());
}, function(err, result1, result2){
  cb(result1, result2);
});

vs (forgive me if this is invalid, it should be close -- I haven't
actually used streamline):

var result = flows.spray([
  doSomething,
  doSomethingElse
]).collectAll_();
cb(result[0], result[1]);


Okay, now my syntactical objections:
- it's not at all obvious that the above code is asynchronous. I know
that's the point, but if someone else is debugging my code and
doSomethingElse throws an error and cb() never gets called, they don't
have a clue what just happened... Okay, not the best example since I
don't know what type of error handling you have, but you get my point.

- I'd strongly suggest adding the ability to pass in functions/args
instead of always having to wrap functions inline:

flows.spray([function(){ doSomething_(A, B, C); }])
vs
flows.spray([[doSomething_, A, B, C]]);


Okay, I'll stop now :)

-Mark

Tim Caswell

unread,
Jan 10, 2011, 12:38:31 PM1/10/11
to nod...@googlegroups.com
I don't think it's about being leet and not letting beginners in.  I think it's quite the opposite.  If beginners have to choose between the 15 different wrappers and transformations and "tools" that make it so you don't have to understand callbacks (at least for the first couple of hours you spend learning their library), and then when something goes wrong (as it ALWAYS does with programming), they have to understand both what your library does AND how the callbacks and closures under the hood work, well, you get the point.  That's not easier.

The best thing for beginners is to learn callbacks and closures properly, then write their own async utility or choose a third party one that's already written by someone else. Less abstraction is almost always better. Especially for beginners.

Mikeal Rogers

unread,
Jan 10, 2011, 12:50:18 PM1/10/11
to nod...@googlegroups.com
I'm in 100% agreement with Tim.

Not a single one of these abstractions works for longer than a few hours before it leaks and you end up not only learning how callbacks work but becoming an expert in the implementation of their abstraction as well.

Take Q. It's probably the best promises implementation ever written. You can do super crazy stuff with it that I don't even understand. But you can't go 5 minutes without needing to wrap some other node callback in Q in order to use it, and when you get an error you'll have to understand how Q works in order to understand enough about the error to ask any of us why it's happening.

cwolves

unread,
Jan 10, 2011, 1:46:00 PM1/10/11
to nodejs
I don't think that's the argument here. It's not whether or not you
need to understand abstraction libraries to use them; it's that if you
do understand them, what is the ideal design pattern?

As for Tim's comment "beginners have to choose between the 15
different wrappers and transformations and...", I think that's quickly
becoming a problem with the Node community itself. I don't have time
to go analyze, compare & contrast every type of every library I want
to use, and someone new to JS isn't going to have a clue which of the
18 "Flow Control" libraries on the modules page to use, or which of
the numerous SQL, middleware, templating, etc libraries to use...
perhaps this needs a new thread.

-Mark

Liam

unread,
Jan 10, 2011, 3:01:37 PM1/10/11
to nodejs
Bruno,

I read your blog post; it makes a lot of good points. The async style
does force a lot of extra tokens, and requires idioms that are less
clear than the sync equivalents.

The early adopters now using Node are probably not your audience. We
prefer to see the code that runs (I often look at the assembly V8
produces -- not!) and are tolerant of new verbose idioms. In fact we
pride ourselves on learning this "superior" and different model before
it's become trendy! (Tho honestly I'm slightly annoyed by the extra
tokens and looping idiom pretty often.)

As a new wave of developers starts arriving at the Node party, I
imagine you'll get traction with streamline if it really works. And
you'll figure out how to address the issues folks are raising now,
about debugging especially.

Good luck, and keep us posted even if you do catch a little flak!

On Jan 9, 3:02 pm, Bruno Jouhier <bjouh...@gmail.com> wrote:
> tale that I published on my blog today:http://bjouhier.wordpress.com/2011/01/09/asynchronous-javascript-the-...
>
> Bruno

Tim Caswell

unread,
Jan 10, 2011, 3:08:00 PM1/10/11
to nod...@googlegroups.com
I think this is the argument: that there is no ideal abstraction pattern.  They ALL have substantial flaws and fairly leaky abstractions.  No single one is capable of solving this difficult problem.

And what is the problem?  It's that people new to node and async javascript have a hard time understanding a language/style they don't know.  That doesn't mean that callbacks and event emitters are a bad API or that JavaScript is a bad language.  (Even if it is a bad language, it's what node uses, so there isn't much point in trying to make it something it's not within the context of node.)  It means that we need to better educate people about the language and style.

Utility libraries and abstractions are great for two use cases.  

 - The programmer already has a clear understanding of closures and callbacks and wants to remove some boilerplate from their code to tidy things up.
 - The abstraction has zero leaks and can be used in blissful ignorance without knowing what the underlying implementation does.

The first case is quite common and more so as we educate more people and JavaScript gains more traction.  The second, in my opinion, doesn't exist yet and may never exist due to limitations of the language.  It's fine to strive for it and attempt to make such libraries, but it's not fine to pretend that leaky abstractions aren't leaky.  

Some examples of solid abstractions include C -> assembly, assembly -> physical electrons, JavaScript -> v8 implementation.  I can write JavaScript code all day long and never have to break down to the JIT code to find exactly what bits it's emitting to the CPU.  If I do, it's considered a bug and hopefully fixed right away. 

This is not the case for things like coffeescript, step, and promises.  There are often cases where you have to look at the generated javascript or library internals to debug a problem.  In fact, that's all the stack trace will point to. Promises are actually close to being solid thanks to all the thinking that's gone into them, but when it breaks down, it breaks down.  Also there are many like me who personally just don't like the style of promises and would rather use plain callbacks.  So even if it was leak proof, getting consensus would be impossible.

I think we should continue to look for good abstractions, but at the same time, I think we should not depend on those abstractions to solve the difficulty newcomers face.  We should, instead, focus on better educating and understanding of the language.


Bruno Jouhier

unread,
Jan 10, 2011, 3:09:10 PM1/10/11
to nodejs
Mark,

A few answers to your questions:

One-off compiled: With node 0.2.6, I compile every time the server
starts. I put a TODO in the node-init.js where caching should be done
but I cannot easily implement it with the 0.2.6 API because the
registerExtension hook does not give me the file name, only its
contents. I know that this API has changed in 0.3 but I have not yet
had the time to play with the new version. If it is better, it will be
very easy to cache the result.

It would also be very easy to write a small compiler that processes
all the files in a tree and puts the output of the transformations
elsewhere (this would facilitate debugging too). Ideally, something
needs to be hacked so that the debugger can translate line numbers
between the source and the generated code. But until this hack is
available, you should be able to debug on the generated code. The
generated code is not awful at all; it is actually pretty close to
what a good developer would write (and maybe even more regular). So it
should not be much more difficult to debug by stepping through it than
through hand-written code. On the other hand, a debugger which would
understand the streamline.js source would be a killer.

If you want to get a feel of what the generated code looks like, you
can take a look at the transform-test.js unit test:
https://github.com/Sage/streamlinejs/blob/master/test/transform-test.js

Invalid JS code: actually, it does not introduce any. The source is
valid JS, and the output of the preprocessor too. I know that this may
sound strange but that's because I use a naming convention rather than
a language extension to identify the spots where the code needs to be
transformed.

Regarding the spray example, I suggest that you first take a look at
the non-parallelized version of the example. It is in
https://github.com/Sage/streamlinejs/blob/master/examples/diskUsage.js.
As you'll see, it would be difficult to write something simpler.

diskUsage2.js is a bit more involved because I wanted to demonstrate
some workflow capabilities: parallelizing in one place with spray and
controlling the level of concurrency in another place with funnel.

So, yes, your spray adaptation would work. But it can be simpler,
because you do not need to pass the result to a callback after the
collectAll_ call. If you just want to return the result to another
function, you can write it as:

function foobar_() {
  var result = flows.spray([
    doSomething,
    doSomethingElse
  ]).collectAll_();
  return result;
}

You could also get rid of the result variable and put the return in
the first statement.

streamline.js will generate the function as foobar(callback). This is
a function that you can call as foobar_() from another streamline.js
file, and as foobar(function(err, result) { ... }) from any of your
regular js files (you get two functions for the price of one :-)
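
For illustration (foobar here is just the hypothetical function above):

// From another streamline.js file, using the sync-looking form:
var r = foobar_();

// From a plain .js file, using the generated callback form:
foobar(function (err, result) {
  if (err) throw err;
  console.log(result);
});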

So, there is no pain point here. It just combines nicely.

Regarding error handling, you can just use the usual try/catch/finally
statement and it should work as expected (even if the exception is
thrown from a callback). I have not tested all the cases but basic
tests work (see the evaluation tests at the end of transform-test.js)
and, unless I made a mistake in my transformation patterns, it should
just work, however complex the source is.
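
For illustration, a sketch of what this looks like in streamline source
(foobar_ is still the hypothetical function above, and cleanUp_ is made
up):

function guarded_() {
  try {
    return foobar_();        // may "throw" an error that arrived via a callback
  } catch (err) {
    console.error(err);
    return null;
  } finally {
    cleanUp_();              // runs whether or not the async call failed
  }
}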

The idea is to follow the principle of least surprise: the code should
just work the way you would expect it to work. Instead of inventing
libraries that replace the control flow statements that got broken by
async, I've just fixed the standard control flow statements so that
they work "normally". So you don't have to learn new flow control
libraries (unless you want to parallelize code); you just use what has
always been in the language and just got broken when the language
landed on node. This is not "yet another DSL", this is just Javascript
that got temporarily broken and has been repaired.

Bruno

Bruno Jouhier

unread,
Jan 10, 2011, 3:12:38 PM1/10/11
to nodejs
var result = flows.spray([
  doSomething,
  doSomethingElse
]).collectAll_();

Mihai Călin Bazon

unread,
Jan 10, 2011, 3:47:09 PM1/10/11
to nod...@googlegroups.com
I agree and disagree with Tim. :-)

1. Agree: there is currently NO WAY to abstract this shit out. And
since there is no way to have a solid abstraction, it's better for
folks to understand what's going on, rather than working around
horrible bugs that pop up with halfway abstractions.

2. Disagree: Nobody (including heavy programmers) likes to use
continuation-passing style everywhere. And that's why everybody tries
to abstract it out.

And because of (1) and (2) combined, we'll still be talking about this
10 years from now if nothing changes deep down in the language.

CPS makes total sense for a variety of problems, UI development being
the most obvious. My favorite example — CodeMirror [1] — parses your
code in chunks of X lines, but is able to interrupt when you start
typing. It saves one closure for each line of text, so that if it
parsed 1000 lines and you start typing on line 700, instead of
re-parsing everything it restarts from that line. It's blazing fast.
You can't do this in a single thread without saving continuations.
But parsers are a pain to write.
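
For illustration, a rough sketch of that technique (nothing like
CodeMirror's real code; parseLine and the chunk size are made up):

// Keeps one closure per line so parsing can resume from any edited line
// without redoing the lines above it.
function IncrementalParser(lines, parseLine) {
  var resume = [];                          // resume[i] restarts parsing at line i

  function parseFrom(i, state) {
    for (var n = 0; i < lines.length && n < 50; i++, n++) {   // 50-line chunks
      resume[i] = (function (i, state) {
        return function () { parseFrom(i, state); };          // saved continuation
      })(i, state);
      state = parseLine(lines[i], state);
    }
    if (i < lines.length) setTimeout(function () { parseFrom(i, state); }, 0);
  }

  this.start = function () { parseFrom(0, null); };
  this.editedAt = function (i) { resume[i](); };     // re-parse from edited line
}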

I know it's not exactly up to the Node community, but given that
JavaScript is so utterly important here, there are 2 things you should
lobby for: macros and threads. My [2] cents...

Cheers,
-Mihai

[1] http://codemirror.net/ and then http://www.ymacs.org/
[2] http://mihai.bazon.net/blog/whats-missing-from-node


--
Mihai Bazon,
http://mihai.bazon.net/blog

Tim Caswell

unread,
Jan 10, 2011, 3:57:25 PM1/10/11
to nod...@googlegroups.com
I never said that experienced developers like to use raw callbacks everywhere.  I'm pretty sure that's why everyone has their own control-flow library.  Like I said before, those libraries are fine for the experienced people because they remove boilerplate and make code more concise.   They are quite safe and useful as long as you understand what they do.

I don't think node needs macros, the language is plenty flexible as it is and they bring plenty of headaches of their own.

As far as threads, that goes directly against the purpose of node and JavaScript.  Threads are an alternative way to do concurrency and completely separate from the event loop. Threads allow blocking IO at the expense of the OS having to manage concurrency for you.  It's proven very inefficient for programs that are mostly IO bound and have high concurrency (like a webserver)


Bruno Jouhier

unread,
Jan 10, 2011, 4:08:37 PM1/10/11
to nodejs
Hi Liam,

Makes perfect sense. Thanks.

Bruno

Mihai Călin Bazon

unread,
Jan 10, 2011, 4:20:40 PM1/10/11
to nod...@googlegroups.com
Only disagreement now... :-)

On Mon, Jan 10, 2011 at 10:57 PM, Tim Caswell <t...@creationix.com> wrote:
> I don't think node needs macros, the language is plenty flexible as it is
> and they bring plenty of headaches of their own.

I'll submit that this is a sensitive subject — it depends on who
you're talking to. If you talk to a PHP programmer, macros suck
("what r they and y should I care?"). Talk to a C programmer, macros
suck (because he was bitten). A Java programmer will say that macros
suck because he was taught that he will be bitten.

Talking to a C++ programmer, things start to change. C++ templates
are macros in disguise. A good C++ programmer appreciates templates,
so even if he doesn't know it yet, he will appreciate macros. ;-)

Talk to a Lisp or even a Perl programmer — "how can you live without 'em?".

> As far as threads, that goes directly against the purpose of node and
> JavaScript.  Threads are an alternative way to do concurrency and
> completely separate from the event loop. Threads allow blocking IO at the
> expense of the OS having to manage concurrency for you.  It's proven
> very inefficient for programs that are mostly IO bound and have high
> concurrency (like a webserver)

Not against the purpose of JavaScript, right? Nobody ever said that
JavaScript is supposed not to have threads, it just happens that it
doesn't.

The machine is definitely able to do more than one thing at a time,
and I can't see why you, the programmer, should account for all the
lost CPU ticks. Place more work on the hardware — that's why C won,
that's why Perl won, that's why PHP won. ... and that's why Lisp
lost. But things start to change. ;-) JavaScript is the new kid on
the block. If it is to survive, it must have threads at some point.
Real threads, not the pathetic "worker" substitute...

Cheers,
-Mihai

Tim Caswell

unread,
Jan 10, 2011, 4:37:50 PM1/10/11
to nod...@googlegroups.com
From the nodejs.org front-page:
"Node's goal is to provide an easy way to build scalable network programs...
This is in contrast to today's more common concurrency model where OS threads are employed. Thread-based networking is relatively inefficient and very difficult to use."

If JavaScript ever gets threads, I promise the push won't come from the node community.

Yes the machine is powerful and developer time is important.  Node was designed for a niche where the threaded model is too expensive even with modern servers. JavaScript was chosen because of the excellent choice of VMs available and the lack of blocking I/O in the language (hence less to unlearn).  Browser JavaScript is single threaded and event based.  Node is single threaded and event based.  JavaScript is a relatively easy to code language and gaining momentum.  I think it was a great choice.

Mihai Călin Bazon

unread,
Jan 10, 2011, 5:31:24 PM1/10/11
to nod...@googlegroups.com
On Mon, Jan 10, 2011 at 11:37 PM, Tim Caswell <t...@creationix.com> wrote:
> JavaScript was chosen because of the excellent choice of VMs
> available

Node only works with V8. If another VM becomes better at some point,
bad luck (or good, depending how you take it). "Porting" Node to
another JavaScript engine would practically mean rewriting it.

> and the lack of blocking I/O in the language (hence less to
> unlearn).

Lack of *any* I/O, we could say.

That's part of what makes JavaScript such a great choice — you have
total freedom to "define" things that weren't defined by any
specification, because the specification doesn't care about how you do
I/O — it only defines the language.

> Browser JavaScript is single threaded and event based.  Node is
> single threaded and event based.

That's just because browser JavaScript is single threaded and event
based, and that makes total sense for a UI-oriented language, but it's
insufficient for a server-side language. The set of requirements
for the two are so different that, really, I doubt that JavaScript can
survive as is on the server market.

> JavaScript is a relatively easy to code
> language and gaining momentum.  I think it was a great choice.

With this I totally agree. A beautiful language. The #1 reason why
people look into Node at all. JavaScript is cool. Too bad it misses
this and that. :-(

Cheers,

Isaac Schlueter

unread,
Jan 10, 2011, 5:32:31 PM1/10/11
to nod...@googlegroups.com
Tim represents my interests in this debate, as well. It's not all
that complicated. Learn how it works. Then use whatever leaky
abstraction you find most tasty. Or, design your own. It's not hard,
and you'll appreciate it better.

Q and promised-io are great implementations of promises. But, you
still have to understand the underlying mechanics to some degree to
use them properly. So they don't so much solve this problem as just
shove it over to somewhere else.

The perceived problems could be alleviated to some degree by creating
a language built around asynchrony. It *would* be neat to have a
language completely built around promises such that you could just
take two promises-that-resolve-to-a-number and add them together with
+ and get another promise-that-resolves-to-a-number. But that
language isn't JavaScript, and that is a major undertaking if done
properly.

--i

Jorge

unread,
Jan 10, 2011, 5:42:23 PM1/10/11
to nod...@googlegroups.com
On 10/01/2011, at 21:08, Tim Caswell wrote:
> No single one is capable of solving this difficult problem.

I don't think asynchronicity (or asynchronous program flow) is a difficult problem, not in JavaScript, and I don't think it requires any "helper" false-friend framework.

Of course, everything one does not understand becomes a "difficult problem" until one groks it : prototypes, closures, regexps, event-driven programming, async flows, etc., some things require some training, but none of these is "difficult", really.

It's not difficult (in JavaScript, thanks to closures) to have a number of simultaneously alive program contexts (that one could think of as individual, switchable in-process processes) that can be exited at any time and easily re-entered in the future as many times as needed, simply via the callbacks (which I prefer to name and use as code block labels), as if the code flow had never exited the context before.

People just need to learn to use functions as if they were ~ ordinary code blocks { }, and to learn how and where to set up the context (disappointingly simple 99% of the time: in an outer closure) they'll want to re-enter in the future (i.e. asynchronously).

These two are the important (yet simple) concepts to grasp, not any funky frameworks or other people's abstractions, just the proper use of the set of built-in facilities that JavaScript provides, which btw as a set are ~ unique to JS, which happens to be, imo, the reason why they're often so strange for programmers coming from other languages.
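
For illustration, a tiny sketch of that outer-closure context (countLines
is a made-up example):

var fs = require('fs');

// The outer closure *is* the context: total and remaining survive across
// the asynchronous exits and re-entries.
function countLines(paths, done) {
  var total = 0;
  var remaining = paths.length;

  paths.forEach(function (path) {
    fs.readFile(path, 'utf8', onRead);        // exit here, re-enter in onRead

    function onRead(err, text) {
      if (!err) total += text.split('\n').length;
      if (--remaining === 0) done(null, total);
    }
  });
}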
--
Jorge.

cwolves

unread,
Jan 10, 2011, 6:06:20 PM1/10/11
to nodejs
Server-side languages absolutely require threading or some parallel
idea. You can't deploy to an 8-core server and only be utilizing
12.5% of the CPU.

The only way to do this now is to launch 8 servers which, although it
may work, is a very clunky work-around, and you lose a good deal of
simplicity (you can't, for instance, now implement a session object as
a straight JS object).

-Mark

shaun etherton

unread,
Jan 10, 2011, 6:15:20 PM1/10/11
to nod...@googlegroups.com
On 11/01/11 1:59 AM, Jorge wrote:

> In my head, asynchronous program flow becomes a quite straightforward
> thing once you realize that, most of the time, for maximum clarity,
> you can use a plain, named callback function declaration (instead of
> an inlined anonymous lambda) as if it were the *label* of the code
> block to which the program flow is going to jmp or goto next.

Agree 100%. I'm surprised this is not seen more often. IMO named
functions help newbs a lot. Surprised more getting-started type
tutorials do not do it that way.

cheers
--
shaun

Dean Landolt

unread,
Jan 10, 2011, 6:18:20 PM1/10/11
to nod...@googlegroups.com
On Mon, Jan 10, 2011 at 6:06 PM, cwolves <cwo...@gmail.com> wrote:
Server-side languages absolutely require threading or some parallel
idea.  You can't deploy to an 8-core server and only be utilizing
12.5% of the CPU.

Sure, but some parallel idea is a far cry from shared-state concurrency. It's a safe bet that javascript will never get threads, no matter what Mihai seems to think is inevitable. Still, if you have thoughts on alternative concurrency approaches you should probably take them up with es-discuss -- you'll have a much greater chance of effecting change in the language there.
 
The only way to do this now is to launch 8 servers which, although it
may work, is a very clunky work-around, and you lose a good deal of
simplicity (you can't, for instance, now implement a session object as
a straight JS object).

That's a feature.

Bradley Meck

unread,
Jan 10, 2011, 6:25:28 PM1/10/11
to nod...@googlegroups.com
Well, if you want a single process doing that with shared memory, you probably won't actually be getting full use of your cores anyway. Many standard libraries and OSes do not support sharing threads across cores, and the ones that do have to resolve memory conflicts when combining state. I think the lack of shared memory, and the use of processes with sockets / fds / etc., is a simpler way to achieve multicore sandboxing without the semaphore / memory-corruption nightmare. It is also important to note that node does make some minor use of threads for things done in C++ and IO currently, and while these may not be using up all of your CPU when you have a CPU-bound app running in node, they can eat up a ton of an IO-bound app's time.

Miguel Coquet

unread,
Jan 10, 2011, 6:35:07 PM1/10/11
to nod...@googlegroups.com
++1. Makes organizing code much much easier

Miguel Coquet
Lead Software Engineer
Attam.co.uk

Jorge

unread,
Jan 10, 2011, 6:39:01 PM1/10/11
to nod...@googlegroups.com
On 10/01/2011, at 21:47, Mihai Călin Bazon wrote:
>
> I know it's not exactly up to the Node community, but given that
> JavaScript is so utterly important here, there are 2 things you should
> lobby for: macros and threads. My [2] cents...


Yep. Workers are half-baked threads. We'd need to be able to pass real objects back and forth, not only serialized-as-a-text-message objects, which is clumsy, limiting and very expensive (expensive twice: in both directions).

If real objects were passed by copy, it would be less but still quite expensive.

If passed by reference, then we'd have real threads with real shared data and real threads synchronization issues... but I'd love to have thread synchronization issues to deal with: it would mean that I have threads :-)

What would you need the macros for?
Aren't macros just a source preprocessor?
Can't you preprocess a source .js file in JS, and achieve the same without real built-in macros?
--
Jorge.

Isaac Schlueter

unread,
Jan 10, 2011, 6:50:27 PM1/10/11
to nod...@googlegroups.com
There are only 2 hard problems in computer science: shared state
concurrency, naming things, and off-by-one errors.

If you want your program to use all 8 cores, you have lots of options.

1. Use child processes. It's not like that's a new or radical idea.
Use require("child_process") to do this in node (a rough sketch follows
below).
2. Use threads. Write a binding. Not very hard.
3. Use a different platform/language. It's not like we don't have options here.
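
For what it's worth, a rough sketch of option 1 (worker.js and the
hard-coded core count are assumptions, not a recommended pattern):

var spawn = require('child_process').spawn;

// Start one worker process per core and pipe their output through;
// the OS schedules the processes across cores.
var cores = 8;
for (var i = 0; i < cores; i++) {
  var child = spawn(process.argv[0], ['worker.js']);
  child.stdout.on('data', function (chunk) { process.stdout.write(chunk); });
  child.stderr.on('data', function (chunk) { process.stderr.write(chunk); });
}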

Node is what it is. It's not ideal for every problem, but it is ideal
for many problems. It's not the simplest or the most powerful thing,
but it's pretty darn powerful for how simple it is, and that's a good
niche for a platform to occupy.

We are not going to add shared-state concurrency to node, ever.

--i

Bruno Jouhier

unread,
Jan 10, 2011, 7:39:49 PM1/10/11
to nodejs
BTW, the language that takes two async functions that resolve to a
number and adds them, giving another async function that resolves to a
number, is already there:

function f_() {
  return f1_() + f2_();
}

Here is the code that streamline.js generates for it:

function f(_) {
  var __ = _;
  return f1(__cb(_, function(__1) {
    return f2(__cb(_, function(__2) {
      return _(null, (__1 + __2));
    }));
  }));
}

By hand, I would have written:

function f(callback) {
  return f1(function(err, v1) {
    if (err) return callback(err);
    return f2(function(err, v2) {
      if (err) return callback(err);
      return callback(null, v1 + v2);
    });
  });
}

This is the barebones JS version. My guess is that the promise version
would come pretty close.

Bruno

Bruno Jouhier

unread,
Jan 10, 2011, 8:41:59 PM1/10/11
to nodejs
If I understand you well, a case like function zoo() { return foo() +
bar(); } where foo and bar are two async functions would cause a
serious difficulty. By hand, you would write something like:

function zoo(callback) {
  return foo(function(err, v1) {
    if (err) return callback(err);
    return bar(function(err, v2) {
      if (err) return callback(err);
      return callback(null, v1 + v2);
    });
  });
}

With streamline.js, you would write it as:

function zoo_() {
  return foo_() + bar_();
}

and streamline will transform it into:

function zoo(_) {
  var __ = _;
  return foo(__cb(_, function(__1) {
    return bar(__cb(_, function(__2) {
      return _(null, (__1 + __2));
    }));
  }));
}

streamline.js doesn't actually do any sort of crazy transformation. It
just applies the patterns that you would have applied yourself to
write the code. Once the patterns have been identified (there is one
for loops, one for branches, one for try/catch, one for try/finally
(not obvious), one for return, one for throw, and two for expressions
(non-lazy and lazy)), you have to write them in such a way that they
become combinable with each other (you need the right "interface",
which is why I have the extra __ variable) and you need to write the
algorithm that combines them. That's the gist of what streamline.js
does. It's just an algebraic process (maybe not completely today, but
once I have cleaned it up it should be completely algebraic).

You can see the patterns in the transform-test.js unit test file (here
also I need to do a bit of cleanup)

So, this is not that hard to implement (once you get the idea), and
there is a good chance that it will generate correct code, even if the
source code contains async calls nested at any depth inside complex
statements.

Moreover, the code is very systematic and just reproduces the
patterns that you would apply if you were to code the thing manually.
So, once you get visually accustomed to the basic patterns, it is not
much harder to debug by stepping into the generated code than to do
it by stepping into manually written code (which is a pain anyway
because you have to alternate constantly between step in and step
next).

But maybe you had something else in mind when you talked about
swapping foo and bar.

Bruno

On Jan 10, 2:53 pm, Vyacheslav Egorov <vego...@chromium.org> wrote:
> If you have function foo which is in CPS and bar which is not, then you
> cannot substitute foo for bar and bar for foo. They are not
> interchangeable. You have to accept it.

Mike Pilsbury (pekim)

unread,
Jan 11, 2011, 1:29:09 AM1/11/11
to nodejs
> We are not going to add shared-state concurrency to nod