I love async, but I can't code like this


Sam McCall

Jan 11, 2011, 11:47:21 AM
to nodejs
I love the event loop, and I understand the execution model, and I
don't want to change it.
I don't feel like Javascript is desperately missing macros.
But I need someone to talk me down, because I'm coming to the
conclusion that certain sorts of programs are unwritable without
language extensions.
Not technically unwritable, of course, they work really well. But the
code is so full of callback noise that you can't see the logic.

I have a server that speaks an HTTP/JSON based protocol, and client
libraries for Java + Node.JS. (It's for GUI automation, think Selenium
for windows apps. Open source soon, waiting on corp lawyers).
Client apps are highly sequential. They're easy to write in Java,
they're easy to write in synchronous JavaScript, and they'd be easy to
write using Bruno Jouhier's transformation library.
The consensus seems to be that threads are evil (I agree), source
transformation is evil (I agree), and that you can write anything you
want in vanilla JS today.
Well, show me how to write this in a way that's nearly as concise as
Java (surely not a high standard!).

I don't have an easy solution; using anything other than standard
JavaScript syntax + semantics is incredibly problematic.
But can we admit that there's a problem?

== Java: ==
mainWindow.menu("File").openMenu().item("Open").click();
Window dialog = mainWindow.getChild(type(Window.class));
...

== Evil synchronous JavaScript ==
mainWindow.menu('File').openMenu().item('Open').click();
var dialog = mainWindow.getChild(type('Window'));

== JavaScript: ==
mainWindow.menu("File", function(err, file) {
  if (err) throw err;
  file.openMenu(function(err, menu) {
    if (err) throw err;
    menu.item("Open", function(err, item) {
      if (err) throw err;
      item.click(function(err) {
        if (err) throw err;
        mainWindow.getChild(type('Window'), function(err, dialog) {
          if (err) throw err;
          ...
        });
      });
    });
  });
});

== JavaScript + Seq: ==
Seq()
  .seq(function() { mainWindow.menu("File", this); })
  .seq(function(file) { file.openMenu(this); })
  .seq(function(menu) { menu.item("Open", this); })
  .seq(function(open) { open.click(this); })
  .seq(function() { mainWindow.getChild(type('Window'), this); })
  .seq(function(dialog) { ... });

---
Cheers,
Sam McCall
node.js newbie

Nick Husher

Jan 11, 2011, 1:36:08 PM
to nod...@googlegroups.com
Is interacting with (what appears to be) a GUI menu actually a
blocking operation? The point of Node isn't to make everything
asynchronous, the point is to note clearly and loudly where you have
places in your code where you're communicating with a slow data
source. In other words, the file system or the network.

Let's say you're communicating with a GUI over the network: that's
cool, sometimes it's necessary. I recommend writing a wrapper that
abstracts all that asynchronous business into a set of commands that
can be queued up and run, since it's only the last callback you're
really interested in.
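Nick's queued-commands idea can be sketched in plain callback style. This is a hypothetical illustration (the `CommandQueue` name and shape are invented, not from any library): commands are queued synchronously and run in order, and only the final callback surfaces.

```javascript
// Hypothetical sketch: queue up async commands, run them sequentially,
// and report only to the final callback.
function CommandQueue() {
  this.steps = [];
}
CommandQueue.prototype.add = function (step) {
  // step is a function (done) { ... done(err) ... }
  this.steps.push(step);
  return this; // chainable
};
CommandQueue.prototype.run = function (finalCb) {
  var steps = this.steps, i = 0;
  (function next(err) {
    // stop on the first error, or when every step has run
    if (err || i === steps.length) return finalCb(err || null);
    steps[i++](next);
  })();
};
```

Each `add` call is cheap and synchronous; the async business only happens inside `run`, which walks the queue one callback at a time.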


Guillermo Rauch

Jan 11, 2011, 1:43:05 PM
to nod...@googlegroups.com
You're comparing apples to oranges.
You're not showing off an alternative concurrency model with your Java example. You're just using a blocking API. You can do the same thing with Node.JS.

--
Guillermo Rauch
http://devthought.com

Sam McCall

Jan 11, 2011, 2:02:05 PM
to nodejs
On Jan 11, 7:43 pm, Guillermo Rauch <rau...@gmail.com> wrote:
> You're just using a blocking API. You can do the same thing with Node.JS

Er, that would block the whole app, as node.js has no threads.

Guillermo Rauch

Jan 11, 2011, 2:18:20 PM
to nod...@googlegroups.com
If you're writing a desktop GUI app, you have two options:
 - Blocking API + web workers
 - Non-blocking and some API that simplifies the nesting and error handling. See how we do it for Soda


browser
  .chain
  .session()
  .open('/')
  .assertTitle('Something')
  .and(login('foo', 'bar'))
  .assertTitle('Foobar')
  .and(login('someone', 'else'))
  .assertTitle('Someone else')
  .end(function(err){
    if (err) throw err;
  });

Sam McCall

Jan 11, 2011, 3:00:41 PM
to nodejs
On Jan 11, 7:36 pm, Nick Husher <nhus...@gmail.com> wrote:
> Is interacting with (what appears to be) a GUI menu actually a
> blocking operation?
It's blocking per-session: suppose we're automating a notepad session
on machine A. If we call openMenu on a menu item, then we can't do
anything[1] until we get the HTTP response. But of course we can do
work on another session targeted at another machine.

[1] Actually we can, but whether the menu is opened/closed/midway
isn't defined, so we can't usually do anything useful.

> Let's say you're communicating with a GUI over the network: that's
> cool, sometimes it's necessary.
That's exactly it, sorry if that wasn't clear.

> I recommend writing a wrapper that
> abstracts all that asynchronous business into a set of commands that
> can be queued up and run, since it's only the last callback you're
> really interested in.
Sure, this example is common enough to justify a library function like
window.clickMenuItem("File", "Open", cb);
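Such a wrapper might look like the sketch below. The `menu`/`openMenu`/`item`/`click` API shape is taken from the examples earlier in the thread, but `clickMenuItem` itself is a hypothetical helper, not part of any released library:

```javascript
// Hypothetical library helper: open a named menu, click a named item,
// and report success or the first error to cb.
function clickMenuItem(win, menuName, itemName, cb) {
  win.menu(menuName, function (err, file) {
    if (err) return cb(err);
    file.openMenu(function (err, menu) {
      if (err) return cb(err);
      menu.item(itemName, function (err, item) {
        if (err) return cb(err);
        item.click(cb); // cb receives click's error (or null)
      });
    });
  });
}
```

The noise doesn't disappear; it just moves into the helper, which is exactly Sam's point about app-specific sequences below.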

Unfortunately, I chose it because it was universal/familiar, in
practice the code snippets tend to be application specific because the
UIAutomation tree (the Windows equivalent of the DOM) tends to vary in
unintuitive per-application ways.
Therefore the wrappers couldn't be part of the library, they'd be
written by whoever uses the library, for every app-specific task they
want to do. So you're writing the same noisy code, but in a function.

The JavaScript client is experimental, but we've used the Java client
to write automated tests for in-house software and in practice there's
little to no redundancy in the API - a higher level API would stop you
doing things you need to be able to do.

On Jan 11, 8:18 pm, Guillermo Rauch <rau...@gmail.com> wrote:
> If you're writing a desktop GUI app, you have two options:
>  - Blocking API + web workers
This is obviously an option, but it sucks having to have all the
overhead and sync problems of threads just to expose an API you can
write readable programs in. (Plus it doesn't work in Node yet,
right?)

> - Non-blocking and some API that simplifies the nesting and error handling.
That looks like a cool pattern, I'll check it out.
Our API is object-oriented (very similar to WebDriver) so we need to
deal with different object handles rather than just a single Selenium
connection, and there's sometimes complex flow-control mid-chain, but
I'll see if I can get something that works reasonably. (And ideally
that separates cleanly from both the app code and the library, so both
are readable). At this point I'm starting to disagree with people who
think everyone should write their own!

Cheers,
Sam

Bruno Jouhier

Jan 11, 2011, 3:29:59 PM
to nodejs
Well Sam,

I wrote streamline.js for people like you and me: people who like
Javascript, who like node's event model, and who are involved in
industrial projects, which means that they will, at one point or
another, need to bring "mainstream" programmers into them.

Mainstream programmers understand Javascript's flow control
statements, so I think that they'll understand streamline.js. I don't
think that they will ever be as productive or as comfortable with a
DSL, however smart the DSL might be. Why would you need a DSL anyway,
when all you're asking for is basic flow control statements that
actually exist in the language but got broken somehow? What you want
is for someone to fix the language so that you can use for loops, try/
catch, and dot notation again, not for someone to introduce new
abstractions that will never be as concise as the language's
primitives, that will have a learning curve, and that will be obsolete
the next month because a newer abstraction has taken the spotlight.

I released streamline.js under the MIT license, which is not our
standard corporate license, so that the community can feel relaxed
about it (I'm
probably more fortunate than you because I have a kind CTO who
expedited things with the lawyers). So it is available and free and we
can start using it.

I am not particularly fond of preprocessors and I went this route
simply because I did not see any way to fix the problem in user-land (I
tried to write my own DSL like everyone else, of course, but I felt
like I was losing my time). That's all. There is no ideological stand
in this, just pragmatism.

So even if code transformation is evil, we may have to sympathize with
the devil, and write rock'n roll code. At least that's what I'm doing
from this point on. And maybe some day, the fix will get into the
language for real. After all, all it needs is a small operator or
keyword (to replace the underscore hack) and a relatively simple
algebraic code transformation that can be pipelined into the other
stages of the compiler.

Bruno

Liam

Jan 11, 2011, 3:56:16 PM
to nodejs
Sam, it sounds to me like you ought to be sending "scripts" to the UI
entities, and returning only the result you need for a particular
sequence...

Isn't there a serialization format for UI automation sequences that
allows them to be saved and played back?

Isaac Schlueter

Jan 11, 2011, 4:35:41 PM
to nod...@googlegroups.com
Cognitive shenanigans. Strawman fallacy, more specifically, fallacy
of the excluded middle. No one actually writes code like this. The
options are not "73 layers of indentation, or threads."

The language includes named function declarations. Use them. That
alone would make your example way less terrible.

--i

Bradley Meck

Jan 11, 2011, 4:46:22 PM
to nod...@googlegroups.com
Name all your functions anyway for debugging purposes; lord knows, seeing a stack trace of anonymous functions is painful on the eyes.

Sam McCall

Jan 11, 2011, 5:09:21 PM
to nodejs
Bruno: I'm really tempted by streamline.js, and may well end up using
it.
I really don't like the idea of a preprocessor, with the associated
deployment hassles and ambiguous semantics. So if it can be
done with something like a proxy object that records a chain of method
calls synchronously, then plays them back asynchronously, I'd prefer
that. But if it isn't manageable, then pragmatically I think it'll be
the best option. (Your implementation looks really good, it's a purely
philosophical objection).
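A minimal sketch of that record-then-replay idea, assuming error-first, callback-last library functions. The `record`/`call`/`end` names are invented; real dot-syntax chaining (as Sam's Plate later does) would need pre-generated proxy methods, since this is plain ES5:

```javascript
// Sketch: record method calls synchronously, replay them asynchronously.
// Each recorded step runs on the result of the previous step.
function record(target) {
  var calls = [];
  var recorder = {
    call: function (method, args) {
      calls.push({ method: method, args: args || [] });
      return recorder; // keep chaining synchronously
    },
    end: function (cb) {
      var obj = target, i = 0;
      (function next(err, result) {
        if (err) return cb(err);
        if (i > 0) obj = result;                 // previous step's result
        if (i === calls.length) return cb(null, obj);
        var step = calls[i++];
        obj[step.method].apply(obj, step.args.concat(next));
      })();
    }
  };
  return recorder;
}
```

So `record(win).call('menu', ['File']).call('item', ['Open']).end(cb)` builds the chain instantly and then replays it one async hop at a time.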

Liam: we haven't defined such a format. The actions are functions with
complex object parameters and many different types of return values
that can raise errors at any step etc. This would basically mean
defining a new scripting language with synchronous semantics and
writing an asynchronous interpreter for it. It seems strange to have
to do this.

Isaac: there are 5 inline functions declared in the example.
'var x = function(){};' 5 times is a lot of code even before you do
anything in them; it can never be concise or really readable.
Besides, I'm not really sure what to name them as they don't really
have much independent behaviour, they're just glue.
But I'd be happy to be convinced, could you show me how you'd write it?

Liam

Jan 11, 2011, 6:25:17 PM
to nodejs
Hmm, so fine-grained remote control scripts are a pain in callback-
style programming...

But if you don't need tons of concurrent script "threads", perhaps
write sync JS scripts and use a pool of Node processes consuming a
queue of scripts?

Isaac Schlueter

Jan 11, 2011, 9:33:37 PM
to nod...@googlegroups.com
On Tue, Jan 11, 2011 at 14:09, Sam McCall <sam.m...@gmail.com> wrote:
> Isaac: there are 5 inline functions declared in the example.
> 'var x = function(){};' 5 times is a lot of code even before you do
> anything in them, it can never be concise or really readable.
> Besides, I'm not really sure what to name them as they don't really
> have much independent behaviour, they're just glue.
> But I'd be happy to be convinced, could you show me how you'd write it?

Cut out the "var" and the "=". But yeah, "function" is long and
klunky for how common it is. I'd love a shorter keyword for it.

Here's a very brief untested example of what I'm talking about:
https://gist.github.com/775591

I usually only care about success or failure, so I don't use any kind
of "passalong" util in my programs. But it's not hard to see how this
general technique could be applied in other ways.
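The "passalong" idea might look like the following; the name and signature are guesses, not the actual code from Isaac's gist. The point is a wrapper that forwards errors to the final callback and hands successful values on to the next step:

```javascript
// Hypothetical "passalong" util: short-circuit errors to cb,
// pass success values through to next.
function passalong(cb, next) {
  return function (err) {
    if (err) return cb(err);
    // forward everything after the error slot
    next.apply(null, [].slice.call(arguments, 1));
  };
}
```

With it, each nesting step becomes something like `file.openMenu(passalong(cb, function (menu) { ... }))`, removing the repeated `if (err) return cb(err)` lines.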

--i

shaun etherton

Jan 11, 2011, 10:08:49 PM
to nod...@googlegroups.com

IMO this is a great example, especially for newcomers. It makes the
strange-looking yet common function nesting look more familiar, then
progresses to show the awesomeness of JS in a simple, clear and
understandable way.
Sometimes I think those who are intimately familiar with the nested
callbacks/anonymous-function style may not realize the amount of angst
and confusion it can cause for the uninitiated.

- shaun

mgutz

Jan 11, 2011, 10:45:27 PM
to nodejs
And of course there's CoffeeScript (the higher signal-to-noise-ratio JavaScript):

# using named functions
doSomething = (done) ->
  mainWindow.menu "File", onFile

onFile = (er, file) ->
  return cb(er) if er
  file.openMenu onMenu

onMenu = (er, menu) ->
  return er if er
  menu.item "Open", onOpen

onOpen = (er, item) ->
  return er if er
  item.click onClick

onClick = (er) ->
  return er if er
  mainWindow.getChild type("Window"), onGetWindow

onGetWindow = (er, dialog) ->
  return er if er
  dialog.doSomething done

mgutz

Jan 11, 2011, 10:49:22 PM
to nodejs
oops, pasted wrong one

# using named functions
doSomething = (cb) ->
  mainWindow.menu "File", onFile

onFile = (er, file) ->
  return cb(er) if er
  file.openMenu onMenu

onMenu = (er, menu) ->
  return cb(er) if er
  menu.item "Open", onOpen

onOpen = (er, item) ->
  return cb(er) if er
  item.click onClick

onClick = (er) ->
  return cb(er) if er
  mainWindow.getChild type("Window"), onGetWindow

onGetWindow = (er, dialog) ->
  return cb(er) if er
  dialog.doSomething cb

aaronblohowiak

Jan 12, 2011, 4:40:15 AM
to nodejs
Everyone: repeat this at every occasion, please. I am going to go
back and edit some of my source to do this.

Floby

Jan 12, 2011, 5:49:42 AM
to nodejs
No, CoffeeScript has no named functions. It has functions assigned to
variables.
All your CoffeeScript functions will come up as anonymous in a stack
trace.

Preston Guillory

Jan 12, 2011, 8:54:04 PM
to nod...@googlegroups.com
Sam has a point.  Asynchronous code is much larger and has a lot of syntactic noise.  Look at his example: synchronous Java does in 2 lines what takes 7 lines of async code, even using Seq.  Named intermediate functions don't help -- see how Isaac's example stretched to 14 lines.

I run into the same situation frequently, so I feel his pain.  My code has lots of nesting and "if (err) return callback(err)".  Any time I touch the file system, or call exec(), or query a database, or interact with the network.  And hey, that describes basically everything the code does!

It's not a problem in the sense that the approach doesn't work -- the code functions and it's fast.  But for a lot of common programming tasks, Node.js is verbose, for sure.

John Taber

Jan 12, 2011, 10:46:27 PM
to nod...@googlegroups.com
Yes: more LOC, more chance of errors, and more testing required. Async
code w/ associated callbacks is great when apps need non-blocking, but
for many "normal" CRUD type apps where blocking isn't that much of an
issue, either a simpler approach to using Node or a synchronous JS
server-side package (w/o the huge JVM) would make programming simpler.

Sam McCall

Jan 13, 2011, 2:39:13 AM
to nodejs
This is by no means production ready, and so far only addresses this
very narrow case (calling methods on the 'return value' of an async
result), but I got this working:

var Plate = require('plate'); // [1]


Plate(mainWindow).menu('File').openMenu().item('Open').click().end(function(err) {
  if (err) throw err;
});

Where all the library functions are still written in callback-as-last-
parameter style.
Plate returns a proxy (hoxy?) object that records the method calls and
starts the chain when you call end().

I'll code using this for a while and see if there are other common
patterns I can't live without.

[1] https://github.com/sam-mccall/node-plate

Mihai Călin Bazon

Jan 13, 2011, 2:43:33 AM
to nod...@googlegroups.com
Also, may I add, the biggest issue with continuation-passing style is
that it turns your code inside-out, almost like disassembling a piece
of code from a high-level language into a lower-level one. Async:

get_val("foo", function(error, foo){
  if (error) print("there was an error");
  else get_val("bar", function(error, bar){
    if (error) print("there was an error");
    else get_val("baz", function(error, baz){
      if (error) print("there was an error");
      else print(foo + bar + baz);
    });
  });
});

Sync:

try {
  print(get_val("foo") + get_val("bar") + get_val("baz"));
} catch(error) {
  print("there was an error");
}

The sync version is cleaner: you can see at a glance what it does. It
prints something -- the sum of 3 values returned by get_val. In case
of an error, you get "there was an error". You get the luxury of
handling the error in a single place.

The async looks like the low-level version of the sync one. If we
could disassemble a JavaScript function, it would look pretty much
like the async version (in terms of program flow). What happens is
that get_val("foo") is executed first, then get_val("bar"), then
get_val("baz"), then their sum is computed and printed -- that's
exactly what both versions do. But which one do you prefer to write?

I don't want to hurt anyone's feelings, but to me at least, Node is
unusable for serious work because of this. I'm prepared to lobby
against do-everything-asynchronously for a long time. :-)

Cheers,
-Mihai

--
Mihai Bazon,
http://mihai.bazon.net/blog

Guillermo Rauch

Jan 13, 2011, 3:00:56 AM
to nod...@googlegroups.com
Mihai,

You can create an api that looks like

print(a,b,c)

where a b c are functions that take callbacks, or they're promises. And it's still async.
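One hedged sketch of that shape: a small helper that runs several callback-taking functions and hands all their results to a single continuation. The `parallel` name is invented here (libraries of the era offered equivalents), and it assumes node-style error-first callbacks:

```javascript
// Sketch: run callback-taking tasks concurrently, collect results
// in order, and fail fast on the first error.
function parallel(tasks, cb) {
  var results = [], pending = tasks.length, failed = false;
  if (pending === 0) return cb(null, results);
  tasks.forEach(function (task, i) {
    task(function (err, value) {
      if (failed) return;            // already reported an error
      if (err) { failed = true; return cb(err); }
      results[i] = value;            // keep positional order
      if (--pending === 0) cb(null, results);
    });
  });
}
```

So Mihai's example becomes `parallel([getFoo, getBar, getBaz], function (err, v) { err ? print("there was an error") : print(v[0] + v[1] + v[2]); })`: still async, one error handler, no pyramid.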

Sam McCall

Jan 13, 2011, 5:00:03 AM
to nodejs
Mihai: a single-threaded, event-loop based execution model using non-
blocking APIs is the essential premise behind node.js.

Sure, it has issues that could be addressed nicely by using threads
instead, and I agree that with the current conventions, code can be
hard to read and debug.

But fixing this by using threads would be a solution that had nothing
in common with node, so it's not really on-topic for the node mailing
list.

Isaac Schlueter

Jan 13, 2011, 5:04:26 AM
to nod...@googlegroups.com
On Wed, Jan 12, 2011 at 19:46, John Taber <john.tig...@gmail.com> wrote:
> a synchronous js server side package would make programming simpler.

There are many of those. Have fun. (Not being facetious.
Programmers write better programs while enjoying the process, so if
that means using Rhino or v8cgi or Ruby, more power to you.)

This conversation is a bit silly. I mean, conceptually, asynchronous
code *is* a bit more complicated.

For many problems, threads do not scale effectively. But for many
problems, they're perfectly adequate, and if you're using an
implementation that's well understood, you may even be able to work
around the Hard Problems that they introduce.

Node is cool and hot and all the tech blorgs in the blagodome are
weblooging about how awesome and fast and super insanely stupid sexy
it is. But that doesn't mean it's best for every problem, or that
JavaScript is everyone's favorite language, or that you have to love
callbacks, or that you should use it.

When you're writing network programs, your program is a proxy the vast
majority of the time. It is glue that wires together a few
different things that *by their very nature* are most efficiently
handled with asynchronous evented message-passing. XMPP, HTTP, AMQP,
child processes, etc. Those problems are well served by an event
loop.

You can run an event loop in C. You know what? After juggling
structs and function pointers for a bit, all this "verbose" JavaScript
starts to look pretty nice.


> for many "normal" CRUD type apps where blocking isn't that much an issue...

Are you joking? That's *exactly* the sort of app where blocking very
quickly becomes a *huge* issue.

Or do you mean the kind of app where only one user is ever going to be
creating, reading, updating, and deleting data at once? Because
that's what blocking IO means.

All websites are fast with one user on localhost.


> On 01/12/2011 06:54 PM, Preston Guillory wrote:
>> Sam has a point.  Asynchronous code is much larger and has a lot of
>> syntactic noise.  Look at his example: synchronous Java does in 2 lines what
>> takes 7 lines of asynch code

Except that the 7 lines of async code *can handle other events while
waiting*, without the overhead of coroutines or threads.

And, with the Java version, there's a *ton* of stuff going on below
the surface. Actually doing real work in a higher-level-than-C
language with an event loop is kind of a newish thing. Granted, those
of us who cut our teeth on web browsers have been doing it forever,
which is part of why JS is so right for this task. But the tooling is
not super mature, and node is a lowlevel library by design.


>> Named intermediate functions
>> don't help -- see how Isaac's example stretched to 14 lines.

They reduce indentation, label blocks, and improve stack traces. They
won't improve your golf game.


>> But for a lot of common programming tasks, Node.js
>> is verbose, for sure.

The advantages are that you can do a bunch of things in parallel, and
handle "local" stuff in the same pattern as "remote" stuff.

I write most of my shell scripts in Bash.


On Wed, Jan 12, 2011 at 23:39, Sam McCall <sam.m...@gmail.com> wrote:
> Plate(mainWindow).menu('File').openMenu().item('Open').click().end(function(err)
> {
>       if(err) throw err;
>    });
>
> Where all the library functions are still written in callback-as-last-
> parameter style.

See? I *told you* you could do better than my example ;D


If you can come up with a terser way to express callbacks, that
doesn't lose anything in translation, then by all means, implement it.
I think that this problem, such as it is, will be solved by either
writing a new language no one uses, or just sucking it up and using
the JavaScript we have instead of the JavaScript we wish we had.


--i

Bruno Jouhier

Jan 13, 2011, 5:22:18 AM
to nodejs
Exactly.

This is why I wrote the little streamline.js transformation engine. I
was getting sick of turning the code inside out, not being able to use
the language's native constructs, etc. I felt like I was programming in
a low-level language (see http://bjouhier.wordpress.com/2011/01/09/asynchronous-javascript-the-tale-of-harry/).

Maybe some people find it "fun" to write such disassembled code. I
don't. This is just a waste of energy.

There seems to be a golden rule out there saying: "You have to solve
this problem in user-land, through DSLs, libraries, whatever, ...".
But this is just not working, or rather it always only works "up to a
point". So maybe someone has to break the rule some day, abandon this
dogma that the solution must lie in user land, write a transformation
engine, design it so that it fits the node model (you should be able
to consume node functions, and the functions that you write should in
turn be regular node functions) and hook it up into node. See
https://github.com/Sage/streamlinejs

I used a "naming" hack because I want to be able to use this today,
with today's Javascript editors (I cannot write JS code if the
"reformat entire file" function is broken). But this is not how it
should be done ultimately. What is needed is a special operator to
distinguish async function definitions and calls from sync ones. If we
had such an operator in the language, we could happily use Javascript
to mix sync and async calls.

There are probably some other solutions, like introducing true
continuations in the language (my CPS transform is probably only half-
baked) but we have to also be careful to not lose sight of what makes
Javascript so successful: its simplicity and its "mainstream-ness". So
it is probably more important to "repair" the built-in keywords of the
language so that they work in async-land the same way they have always
worked in sync-land, than to introduce more abstract features into the
language.

Also, I have the weird feeling that continuations are somewhat like
gotos (they really allow you to jump anywhere) while the
"streamlining" that I am proposing is more "structured". It will not
put the full power of continuations in the hands of programmers (so
that they don't shoot themselves in the foot) but it will give them
enough to solve their main problem (writing async code).

Bruno

Sam McCall

Jan 13, 2011, 5:35:00 AM
to nodejs
On Jan 13, 11:04 am, Isaac Schlueter <i...@izs.me> wrote:
> See? I *told you* you could do better than my example ;D

:-) And a good learning exercise.
I'm not convinced that "write your own, it's easy" is the right
long-term answer though.
You learn a lot by implementing your own hashtable, but most of the
time you use a library.

> sucking it up and using the JavaScript we have instead of the JavaScript we wish we had

First, there's a difference between syntax and idiom.
'Functions take parameters, perform actions, and return values' is
syntax.
'Functions take a callback parameter, and call it on completion with
the error as the first argument' is idiom.
And I think it's entirely appropriate to try to find the most
expressive idioms, given the syntax we have. In public, both because
otherwise we might give up (see first post) and because having idioms
in common means we can read each other's code.

Secondly, node.js is 'cool and hot' and could well be influential in
JS's future development.
That doesn't mean wishing for a language feature will magically make
it appear, but if people are aware of the friction points then maybe
there'll be a proposal or two when ES6 comes around.

Sam

billywhizz

Jan 13, 2011, 5:56:14 AM
to nodejs
+100 for Isaac's excellent response.

I would also recommend that anyone who is advocating adding threads/
blocking calls to node.js have a look at this presentation by the
godfather of JavaScript:

http://www.yuiblog.com/blog/2010/08/30/yui-theater-douglas-crockford-crockford-on-javascript-scene-6-loopage-52-min/

If you're pressed for time, skip to approx. 20 mins in, where he starts
to get into the nitty-gritty of event loops vs. threading. Mr.
Crockford explains it a lot better than I would be able to...

Yes, async coding in an event loop is different and does have some pain
associated with it, but from where I am standing the pain is a lot less
than dealing with all the synchronization and locking issues that
threading and shared memory bring to the table...


Preston Guillory

Jan 13, 2011, 9:25:59 AM
to nod...@googlegroups.com
Generally every design pattern represents a missing language feature.  For instance in Java, in order to enforce constraints on an object's properties, you create setters and getters and then only access the properties through those methods.  It's extra code and not as clean, but a Java programmer only sees the necessity of it.  But C# has shown us that it was really just a missing language feature all along.

I haven't tried Bruno's solution yet, but I like the direction it's taking.  Programming languages trend toward compact, expressive syntaxes.  See Ruby and Python.

Bradley Meck

Jan 13, 2011, 9:27:17 AM
to nod...@googlegroups.com
I would be interested to know how the code Bruno generates prevents concurrent editing within the synchronous-looking chain, or does it require semaphores?

Bruno Jouhier

Jan 13, 2011, 10:03:12 AM
to nodejs
I don't actually generate the code myself, streamline.js does it for
me :-)

What do you mean by concurrent editting? Can you give an example?

The generated code corresponds to what you would have to write with
callbacks to reproduce the sequential chaining that is expressed in
the sequential version of the code (if this mumbo jumbo makes any
sense to you). If you want to see examples of what is generated, you
can take a look at the unit test: https://github.com/Sage/streamlinejs/blob/master/test/transform-test.js

I'm actually thinking of revisiting this a bit. The idea is that when
I write something like
var x = foo_().bar_().x + zoo_().y
it is very important to have a sequential chain for foo_().bar_().x on
one side and another one for zoo_().y, but it is rather
counterproductive to chain the two together to apply the plus. Instead,
we should let them execute in parallel and join the two chains before
the plus.

My current transform doesn't do that but it should be relatively easy
to modify it so that it does that. With this, the programmer would get
transparent parallelization of the evaluation of sub-expressions. I
find the idea pretty cool. Actually, if I apply the same principle to
array and object literals, I get a pretty neat way to parallelize
function calls and put their results into an array (or an object):

var results = [ f1_(), f2_() ]; // this is neater than the spray/collectAll API that I proposed.

But then, you'll need a mechanism to control the level of parallelism.
There are places where you really want to serialize execution. This is
where the little "funnel" utility comes into play. The diskUsage2.js
example shows how I use it to avoid exhaustion of file descriptors when
parallelizing a recursive traversal of directories
(https://github.com/Sage/streamlinejs/blob/master/examples/diskUsage2.js).

So maybe the funnel is what you're asking for. If you set its max to
1, you get some kind of monitor that queues incoming calls and only
lets them go through one at a time.
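A rough reimplementation of that idea (not streamline.js's actual code; the `funnel` shape here is invented for illustration): `funnel(max)` returns a gate that admits at most `max` operations at once and queues the rest, so `max = 1` serializes everything:

```javascript
// Sketch: a concurrency funnel. gate(fn, cb) runs fn(done) when a slot
// is free; with max = 1, queued calls go through one at a time.
function funnel(max) {
  var active = 0, queue = [];
  function run(task) {
    active++;
    task.fn(function (err, result) {
      active--;
      task.cb(err, result);
      next(); // a slot freed up: let the next queued call through
    });
  }
  function next() {
    while (active < max && queue.length) run(queue.shift());
  }
  return function (fn, cb) { // fn takes a node-style callback
    queue.push({ fn: fn, cb: cb });
    next();
  };
}
```

Setting `max` to 1 gives the monitor behavior Bruno describes; a larger `max` caps things like simultaneous open file descriptors.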

Bruno.

Isaac Schlueter

Jan 13, 2011, 1:35:02 PM
to nod...@googlegroups.com
On Thu, Jan 13, 2011 at 02:22, Bruno Jouhier <bjou...@gmail.com> wrote:
> There seems to be a golden rule out there saying: "You have to solve
> this problem in user-land, through DSLs, libraries, whatever, ...".
> But this is just not working, or rather it always only works "up to a
> point".

Really? Seems to me like it's working. Qv: this thread.

> What is needed is a special operator to
> distinguish async function definitions and calls from sync ones. If we
> had such an operator in the language, we could happily use Javascript
> to mix sync and async calls.

Sure. That's a fair point. So... go do that.

But it won't be JavaScript. It'll be some other thing. If you want
to change JavaScript-the-language, you're posting on the wrong mailing
list.

If you wanna design a new language that compiles to node-friendly JS,
you can clearly do that. Some people will love it, others will hate
it, sky's still blue, pope's still catholic, etc.

> It will not
> put the full power of continuations in the hands of programmers

You may find that programmers are more power-hungry than you can
possibly imagine. ;)


On Thu, Jan 13, 2011 at 02:35, Sam McCall <sam.m...@gmail.com> wrote:
> First, there's a difference between syntax and idiom.

Fair point.

Node's idiom is the most simple idiom possible to *enable* more
creative things in userland. It's the base. Think of the
continuation-callback style as the bytecode that your flow control
library interfaces with.

When you're interfacing with the low-level node libraries a lot,
callbacks are actually a very nice abstraction that gives you a lot of
control over what's going on. If your program is going to deal with
streams more often than single fs operations, then perhaps it makes
more sense to structure your program differently. No one's trying to
say that callbacks are best for everything.

In npm, I *have* to do a lot of lowlevel file system operations.
That's what npm is for. So, I need some abstraction around it so that
I can do a list of N things sequentially or in parallel, and compose
multiple callback-taking actors into one.

Since I have those abstractions, it makes sense to kind of shove the
stream-ish operations (writing or uploading a file, for instance) into
a function that takes a callback, so that I can say "Write this file,
create a symlink, make this directory, upload this thing, and then
signal success or failure" and wrap that up as one "operation" to be
composed in another area of my program.
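The "wrap several steps up as one operation" idea can be sketched with a tiny sequencing helper (my own minimal version, not npm's actual internals):

```javascript
// chain(steps): composes an array of callback-taking steps into one
// operation. Each step is function (cb); the first error short-circuits
// straight to the final callback.
function chain(steps) {
  return function (cb) {
    (function run(i) {
      if (i === steps.length) return cb(null);
      steps[i](function (err) {
        if (err) return cb(err);
        run(i + 1);
      });
    })(0);
  };
}
```

Because the composite is itself a callback-taking function, it composes further: `var install = chain([writeTarball, makeSymlink, notify]);` can appear as a single step in another chain.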

The thing about closures and continuations is that, while they look
obnoxious in these little examples, once embraced fully, they allow
for a huge degree of DRYness and consistency.

> And I think it's entirely appropriate to try to find the most
> expressive idioms, given the syntax we have. In public, both because
> otherwise we might give up (see first post) and because having idioms
> in common mean we can read each others' code.

Rewind the NodeJS clock about 10 months, to when everyone was debating
how to do Promises in the most powerful and correct way.

Shooting that horse was the only way to get everyone to stop beating
it. Now we have 3 promise libraries that are all better than the one
that was in node-core.

It is important for a platform to be very careful about what it lets
in, lest it prevent innovation by the community. Ryan's extreme
aesthetic in that regard is a big part of what has made node
successful.

> if people are aware of the friction points then maybe
> there'll be a proposal or two when ES6 comes around.

I don't disagree. But I've got programs to write in the thousand
years or so before ES6 will ever be a reality.

--i

Sam McCall

Jan 13, 2011, 2:01:49 PM
to nodejs
I've rewritten my lib so it's a bit more flexible. And I renamed it
Ship because Plate was taken (I really suck at names!)
It's not as powerful as Bruno's stuff, but it doesn't do source
parsing/transformation, it uses proxy objects instead which are
marginally less evil.

So in case it's of interest to anyone:

https://github.com/sam-mccall/ship || npm install ship

Examples:
(the APIs called are all async except where the names indicate)

function handler(err, result) { ... };

// Sequential execution:
var ps = plate(something);
ps.foo();
ps.bar();
// is the same as
something.foo(function(err) {
  if(err)
    handler(err);
  else
    something.bar(handler);
});

// Chaining:
plate(something).foo(1).bar(2).end(handler);
// is the same as
something.foo(1, function(err, data) {
  if(err)
    handler(err);
  else
    data.bar(2, handler);
});

// Inside-out evaluation:
var ps = plate(something);
ps.foo(2, ps.bar()).end(handler);
// is the same as
something.bar(function(err, data) {
  if(err)
    handler(err);
  else
    something.foo(2, data, handler);
});

// Call sync functions with $, and access attributes
plate(something).asyncOne().sync$().attrib().asyncTwo().end(handler);
// is the same as
something.asyncOne(function(err, data) {
  if(err)
    handler(err);
  else
    data.sync().attrib.asyncTwo(handler);
});

Sam McCall

Jan 13, 2011, 2:09:34 PM
to nodejs
Isaac: all true. My only quibble is with:

> The thing about closures and continuations is that, while they look
> obnoxious in these little examples, once embraced fully, they allow
> for a huge degree of DRYness and consistency.

For some tasks it will always look obnoxious, and you don't need the
power.
I really wish we had continuations in node, this stuff would be cake.

Thanks for the context w.r.t. promises, I wasn't around then but that
does explain a lot.

Isaac Schlueter

Jan 13, 2011, 2:37:47 PM
to nod...@googlegroups.com
On Thu, Jan 13, 2011 at 11:09, Sam McCall <sam.m...@gmail.com> wrote:
> For some tasks it will always look obnoxious, and you don't need the
> power.

Yep.

> I really wish we had continuations in node, this stuff would be cake.

The actor/callback pattern basically is CPS, though of course it's
much uglier in JS than in Scheme. There's no TCO, of course, but the
event loop's stack destruction has a similar effect to TCO.

The thing is that, like in Scheme, JS's continuations trap their state
in a closure, rather than a stack-saving Continuation object that
implements coroutines. It's very similar in practice to the Promise
pattern, except done in FP style rather than OOP.

--i

Sam McCall

Jan 13, 2011, 3:08:51 PM
to nodejs
Yeah, I guess with a continuation you capture the whole stack, so you
can easily end up with similar overhead to threads. Mostly I just
meant the syntax is better - pass in a callback function when
suitable, pass in the current continuation where it makes more sense
to write sequential (but asynchronous) code.

Hmm, I don't think you need full continuations for this case, as you
would always be jumping up the stack... just the equivalent of
setjmp/longjmp. But I'll save it for Ecma3000 :-)

Sam McCall

Jan 13, 2011, 3:12:09 PM
to nodejs
On Jan 13, 9:08 pm, Sam McCall <sam.mcc...@gmail.com> wrote:
> Hmm, I don't think you need full continuations for this case, as you
> would always be jumping up the stack... just the equivalent of setjmp/
> longjmp. But I'll save it for Ecma3000 :-)

Er, disregard that, only in the synchronous case!

Isaac

Jan 13, 2011, 9:55:29 PM
to nodejs
Continuations are the functional expression of the GOTO statement, and
I'm sure we're all well aware how GOTO turned out...

Anyway user land is exactly the right place for this sort of thing.
Why ? Because there are a whole bunch of people, including myself, who
like Node.js just the way it is in this regard.

Its great to be discussing this stuff and trying out solutions =)

Just please don't try to mess up the core JavaScript language by
adding an operator or whatever else because of a personal style
preference. Keep it in user land so other people at least have a
choice to continue happily passing around anonymous functions in async
styles.

Another reason to stay in user land is for code portability, especially
with browsers for client-server code sharing. Unless you're planning to
have an operator added to the ECMAScript standard for this, and even then
it would take years for browser vendors to implement it.

Cheers.

Mikeal Rogers

Jan 14, 2011, 2:58:15 AM
to nod...@googlegroups.com
FYI, for those interested. Brendan is proposing that ECMA Harmony's new function keyword, #, actually return a new implementation of a function object that, among other things, will do proper tail calls, which would allow for some more of these ideas.

The place for that discussion is the es-discuss list, the public facing part of TC-39 (the ECMAScript committee).

Bruno Jouhier

Jan 14, 2011, 4:00:38 AM
to nodejs
> Just please don't try to mess up the core JavaScript language by
> adding an operator or whatever else because of a personal style
> preference. Keep it in user land so other people at least have a
> choice to continue happily passing around anonymous functions in async
> styles.

With what I propose, people still have choice. They can continue to
pass around anonymous functions in async style, as you say, or they
can use the new way. They can actually mix and match between the old
and new way at will. The precompiler will *not* do anything to
their "old way" code. It only transforms functions that are defined
the new way and it will transform them to create a function that has
the classical (old way) signature (so it can be called seamlessly by
old style code).

The only problem is that you cannot do this if you stay in user land,
you have to transform the code. The key is to have a transformation
that works locally and stops at function boundary, not something that
is going to turn the entire code upside down.

Alen Mujezinovic

Jan 14, 2011, 5:45:33 AM
to nod...@googlegroups.com
A bit late to chip in on a way of doing it, but here's another solution I quite like. Saves one using a library and you can plug and play your code to your liking. Using CoffeeScript:

menu null, "File", mainWindow, (err, file)->
    openMenu err, file, (err, menu)->
        getItem err, "Open", menu, (err, item)->
            click err, item, (err, item)->
                getChild err, mainWindow, type("Window"), (err, item)->
                    if err? and err instanceof SomeError
                        #do stuff
                    if err? and err instanceof OtherError
                        #do stuff
                    if err?
                        throw err
                    #do other stuff

Can't use that way if you break your code down into classes though. I'm using modules to break functionality up and keep classes to a minimum size and functionality.

Taking the idiom of each callback receiving an error as its first argument a bit further: each function should take an error as its first argument too, and if it's not undefined, just pass it up the callback stack until dealt with:

menu = (err, type, window, callback)->
    return callback err if err?
    # do other stuff
    
You can wrap that also into a closure that unwraps your arguments, checks the first and then passes things along.
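Rendering that wrapper in plain JavaScript (the helper name `errFirst` is mine, purely illustrative):

```javascript
// errFirst(fn): lifts a normal async function into one that accepts an
// error as its first argument and, when that error is set, forwards it
// straight to the trailing callback instead of running fn at all.
function errFirst(fn) {
  return function (err) {
    var args = Array.prototype.slice.call(arguments, 1);
    var callback = args[args.length - 1];
    if (err != null) return callback(err);
    fn.apply(null, args);
  };
}
```

Each function in the chain then starts with the same one-line guard, and errors ride up the callback stack until something handles them.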

Havoc Pennington

Jan 14, 2011, 2:58:53 PM
to nod...@googlegroups.com
Here's what I think would be nice, I wrote down a couple months ago:
http://blog.ometer.com/2010/11/28/a-sequential-actor-like-api-for-server-side-javascript/

but it requires yield (generators) in the JS implementation.

Havoc

Nick

Jan 14, 2011, 4:47:26 PM
to nodejs


On Jan 14, 2:58 am, Mikeal Rogers <mikeal.rog...@gmail.com> wrote:
> FYI, for those interested. Brendan is proposing that ECMA Harmony's new function keyword, #, actually return a new implementation of a function object that, among other things, will do proper tail calls, which would allow for some more of these ideas.
> The place for that discussion is the es-discuss list, the public facing part of TC-39 (the ECMAScript committee).

There's some disagreement about what the lambda keyword will be, but I
think this qualifies as a very good thing. Personally I like the
"Ruby-style" of [].forEach({|x| x+1 }) more.

Although it's worth noting that there's also some noise advocating an
AST API, which would allow you to solve a lot of the ugly
christmas-tree callback stuff by writing a simple translator to do what you need.

Mikeal Rogers

Jan 14, 2011, 6:52:50 PM
to nod...@googlegroups.com
you should tell that the es-discuss list :)

Isaac Schlueter

Jan 14, 2011, 7:51:19 PM
to nod...@googlegroups.com
Node is not the future of JavaScript.

Node is the present of JavaScript.

--i

On Fri, Jan 14, 2011 at 15:52, Mikeal Rogers <mikeal...@gmail.com> wrote:
> you should tell that the es-discuss list :)
>

Felix Geisendörfer

Jan 15, 2011, 1:22:32 PM
to nod...@googlegroups.com
On Tuesday, January 11, 2011 5:47:21 PM UTC+1, Sam McCall wrote:
== Java: ==
mainWindow.menu("File").openMenu().item("Open").click();
Window dialog = mainWindow.getChild(type(Window.class));

I'm surprised nobody mentioned this so far, but your example is a really nasty violation of the Law of Demeter [1].

I can see why it's attractive for the kind of testing you do, but if you find yourself testing multiple interactions with the 'dialog' window from your example, you should really consider:

testSuite.getMyDialog(function(err, dialog) {
  // test goes here
});

In the long run, this kind of testing is doomed anyway due to the combinatoric explosion of test cases. So if you have any chance to test for basic correctness, please consider them : ).

Other than that, your proxy example looks promising if you really need lots of those "train wreck" calls : ).

--fg


PS: You can think of the law of demeter whatever you want. But in node.js you are really paying an extra price for violating it when doing I/O.

Sam McCall

Jan 15, 2011, 2:07:30 PM
to nodejs
Felix:
Re: combinatoric explosion - yeah, it's not a method that will ever
give you exhaustive testing of every codepath.
But it does let you smoke test the common paths through the app,
generate screenshots for 10 languages, on a two week release cycle,
without access to the source, automatically. (i.e. it may not be
perfect but it's a hell of a lot better than what we had before).

Re: law of demeter - I've heard of it, but its correlation with useful
code is zero, to be generous. It assumes the OO metaphor is the most
important part of your code, rather than just a language feature that
makes certain patterns clearer.

Especially in this case - the intention of a GUI automation library is
not to abstract the GUI away from the client code. A 'menu item'
object represents an OS MenuItem widget, nothing more or less, and the
client library should provide low-level, not high-level operations.
Even openMenu() is debatable, in Notepad++ the 'close document' button
is actually a MenuItem labeled 'X'.

Sure, the client code will probably want some wrappers for things that
are common in their particular target application, but they'll be
implemented as above.

In any case, if
a.b().c()
is a violation, then
a.b(function(x) {
  x.c();
});
certainly is also. (The rule claims to be about encapsulation and
coupling, not syntax, and the two are clearly the equivalent sync/
async code).

Granted, the short version of Demeter is 'two dots' and node makes it
hard to use 'two dots' for unrelated reasons.
However:
var Thing = require('thing');
Thing.whatever();
also violates the law of demeter.

Cheers,
Sam

Ryan Dahl

Jan 16, 2011, 4:07:49 AM
to nod...@googlegroups.com

Coroutines are possible with V8. That is, you can longjmp into another
stack without disrupting the VM - there are several implementations on
the internet. Node is not going to do this because multiple call
stacks dramatically complicate the mental model of how the system
works. Furthermore it's a hard deviation from JavaScript - browser
programmers have never had to consider multiple execution stacks - it
would confuse them to subtly yet significantly change the language.

Adding multiple stacks is not a seamless abstraction. Users must now
worry that at any function call they may be yielded out and end up in
an entirely different state. Locking is now required in many
situations. Knowing if a function is reentrant or not is required.
Documentation about the "coroutine safety" of methods is required.
Node offers a powerful guarantee: when you call a function the state
of the program will only be modified by that function and its
descendants.

Havoc Pennington

Jan 16, 2011, 9:08:23 PM
to nod...@googlegroups.com
Hi,

On Sun, Jan 16, 2011 at 4:07 AM, Ryan Dahl <r...@tinyclouds.org> wrote:
> Adding multiple stacks is not a seamless abstraction. Users must know
> worry that at any function call they may be yielded out and end up in
> an entirely different state. Locking is now required in many
> situations. Knowing if a function is reentrant or not is required.

I sympathize with what you're saying and with node's design. There are
real tradeoffs in simplicity and also in overhead. My first reaction
to this was also, "it's better to be explicit and type in your
callbacks."

That said:

- I'm not sure all the points you mentioned about coroutines apply.
JavaScript generators are more controlled.

- I'd love to try something built on node or like node that goes
slightly differently, even if node itself remains with the tradeoffs
it has. I think the way node works is good, but it's not the only
useful way.
It sort of depends on what kind of developer and what kind of task.

- Raw-callback code is truly rocky in many instances. Chains of
callbacks where async result C depends on async result B depends on
async result A, are the worst. And trying to ensure that the outermost
callback always gets called, exactly once, with only one of either an
error or a valid result. It's not writable or readable code.

Some details that I think matter:

1. Generators here are just an implementation detail of promises. They
don't "leak" upward ... i.e. I don't think a caller would ever care
that somewhere below it in the call stack, someone used "yield" - it
should not be distinguishable from someone creating a promise in
another way.

See fromGenerator() here, which is just a promise factory:
http://git.gnome.org/browse/gjs/tree/modules/promise.js#n317

you mentioned:


> Node offers a powerful guarantee: when you call a function the state
> of the program will only be modified by that function and its
> descendants.

fwiw, I don't think generators and yield modify this guarantee. From
"inside" the generator during a yield, yes program state can be
changed by the caller; but from "outside" the generator, the generator
is just a function. Anytime program state can change "under you,"
you're going to be seeing the yield keyword, and you're going to be
implementing a generator yourself.


2. Promises in this code are just callback-holders. That is, if you
have an API that takes a callback(result, error) then you can
trivially wrap it to return a promise:

function asyncThingWithPromise() {
  var p = new Promise();
  asyncThingWithCallback(function(result, error) {
    if (error) p.putError(error);
    else p.putReturn(result);
  });
  return p;
}

And vice versa:

function asyncThingWithCallback(cb) {
  asyncThingWithPromise().get(cb);
}
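A callback-holder Promise with the putReturn/putError/get surface used above might look like this (my reconstruction; the real gjs module linked above differs in details):

```javascript
// A "callback holder": get() registers a function (result, error) callback;
// putReturn/putError settle the promise and flush any waiting callbacks.
function Promise() {
  this.settled = false;
  this.waiting = [];
}
Promise.prototype.putReturn = function (result) {
  this.settled = true;
  this.result = result;
  this.flush();
};
Promise.prototype.putError = function (error) {
  this.settled = true;
  this.error = error;
  this.flush();
};
Promise.prototype.flush = function () {
  var self = this;
  this.waiting.splice(0).forEach(function (cb) {
    cb(self.result, self.error);
  });
};
Promise.prototype.get = function (cb) {
  if (this.settled) cb(this.result, this.error); // late get: fire immediately
  else this.waiting.push(cb);
};
```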


3. The way I'd see APIs and libraries in this world would be
promise-based, i.e. an async routine returns a promise. Given
asyncThing() that returns a promise, you can set up a callback:

asyncThing().get(function(result, error) { });

Or if you're a generator, you can kick the promise up to your caller,
save your continuation, and either resume or throw when the result
arrives:

result = yield asyncThing();

"yield" only goes up one frame though, it just makes an iterator from
the continuation and returns it, it doesn't unwind the stack or allow
arbitrary code to run. The caller (or some caller of that caller)
still has to set up a callback on the promise. The callback would
resume the continuation.
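Using the generator syntax that eventually landed in ES6 (well after this thread), the driver described here can be sketched on top of standard promises; `fromGenerator` below mirrors the shape of the gjs helper but is my own code:

```javascript
// fromGenerator(genFn): each yielded promise parks the generator; when the
// promise settles, the driver resumes it with the value or throws the error
// back in at the yield point, so errors flow through ordinary try/catch.
function fromGenerator(genFn) {
  return function () {
    var gen = genFn.apply(this, arguments);
    return new Promise(function (resolve, reject) {
      (function step(method, value) {
        var r;
        try { r = gen[method](value); } catch (e) { return reject(e); }
        if (r.done) return resolve(r.value);
        Promise.resolve(r.value).then(
          function (v) { step('next', v); },
          function (e) { step('throw', e); }
        );
      })('next', undefined);
    });
  };
}

// Sequential-looking async code:
var total = fromGenerator(function* () {
  var a = yield Promise.resolve(1);
  var b = yield Promise.resolve(a + 1);
  return a + b;
});
```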


4. I could be wrong but I don't think JS generators in SpiderMonkey
get longjmp() involved. The yield keyword saves a continuation, but
the generator function then returns "normally" returning an iterator
object. The calling function has to take that iterator object (which
is implemented with a saved continuation) and if it chooses, it can
further unwind the stack by itself yielding or returning, or it
doesn't have to. Anyway there's not really much magic. Continuations
are an implementation detail of iterators.

Another way to put it, is that at each re-entrancy point you're going
to see either the "yield" keyword (push the promise to the caller, by
returning a promise-generating iterator) or you're going to see a
callback (add a callback to the promise).

It isn't like exceptions or threads where you can be interrupted at
any time. Interruptions only happen when you unwind all the way back
to the event loop which then invokes a callback.


5. The flip side of this is that "yield" isn't magic; people have to
understand how it relates to promises, what a generator function is,
etc. Pasting yield-using code from one function to another might not
work if the destination function isn't a generator.


6. I piled several potentially separable but kinda-related things into
the blog post I linked to. One is the "yield" convention. Others
include:

- have "actors" which have their own isolated JS global object, so no
state shared with other actors.

- share the event loop among actors such that callbacks from
different actors can run concurrently but those from the same actor
can't
(i.e. from perspective of each distinct JS global object, there's
only one thread, from perspective of the overall runtime, there are
multiple - but not thread-per-request! the thread pool runs libev
callbacks)
(actors think they each have their own event loop, but it's
implemented with a shared one for efficiency)

- have a framework where each http request gets its own actor
(therefore global object) ... (something that's only efficient with
SpiderMonkey in their most bleeding edge stuff, and I have no idea
about any other JS implementation. and for sure it would always have
_some_ overhead. I haven't measured it.)

- I'd expect state shared among http requests to be handled as in
other actor frameworks, for example you might start an actor which
holds your shared state, and then http request handlers could use
async message passing to ask to set/get the state, just as if they
were talking to another process.


7. In addition to using all cores without shared state or locking, an
actors approach could really help with the re-entrancy dangers of both
callbacks AND a "yield promise" syntax. The reason is that each actor
is isolated, so you don't care that the event loop is running handlers
in *other* actors. The only dangerous re-entrancy (unexpected callback
mixing) comes from callbacks in the actor you're writing right now.
One way to think about actors is that each one works like its own
little server process, even though implementation-wise they are much
lighter-weight.


8. So the idea is Akka-style functionality, and ALSO libev, and ALSO
ability to write in JavaScript, and ALSO wire it up nicely to http
with actor-per-request ;-) and a pony of course


Anyhow, I realize there are lots of practical problems with this from
a node project perspective. And that it's harder to understand than
the browser-like model.

At the same time, perhaps it remains simpler than using actors with
Java, Scala, or Erlang and preserves other advantages of node. Once
you're doing complex things, sometimes it can be simpler to have a
little more power in the framework taking care of hard problems on
your behalf.

Just some food for thought is all since it was sort of an in-the-weeds
thread to begin with. I don't intend it as a highly-unrealistic
feature request ;-) don't worry.

I am hoping there are people out there who explore these kind of
directions sometime.

Havoc

Mikeal Rogers

Jan 16, 2011, 10:01:32 PM
to nod...@googlegroups.com
JavaScript doesn't have generators. Spidermonkey has an implementation of generators and there is a pending proposal in ECMA Harmony for generators, but none has been approved and they aren't part of the standard definition of the language (ES5).

Furthermore, those implementations introduce a new keyword and are breaking changes to the language so considering them for use in node is incredibly premature.

v8 has no publicly stated plans to support generators.

I've worked with generators when I worked in the Mozilla platform. They suck. They tend to make code more complicated and less obvious, all for a perceived performance improvement that doesn't hold with modern function optimizations. Python needs generators because function overhead is absurdly high; JavaScript doesn't.

There isn't a task you can't accomplish because you don't have generators except sugar.

Pau

Jan 17, 2011, 3:08:59 AM
to nod...@googlegroups.com
IMO, the limiting JavaScript syntax and the callback style are what make node.js modules so simple.
There are hundreds of modules already written, and when you try them, in 5 minutes you are requesting pulls on Github.

This discussion reminds me of all the people complaining about Lisp parentheses.

Bruno Jouhier

Jan 17, 2011, 5:20:57 AM
to nodejs
Yes, it is important to keep the "mental model" simple.

There are a lot of proposals to improve the situation but most of them
rely on the introduction of new mechanisms into the language
(generators, coroutines, continuations). While these mechanisms may
have their own virtues (and also their own pitfalls because they give
the programmer new ways to "shoot himself in the foot"), I think that
the problem can be solved without adding such extra power to the
language.

Programming with node.js feels a bit like programming with a
schizophrenic language. Your code has two territories: sync-land and
async-land.

In sync-land you can use all the basic features of the language
(nested function calls, subexpressions that call functions, if/else
followed by common code, while loops, try/catch/finally, etc.). The
code is aligned on the way you naturally express your algorithms (at
least if you are a mainstream programmer) and there is no extra
"noise".

In async-land, things are very different because you have to deal
explicitly with the callbacks. They impose a new structure on your
code. Often it feels like you have to turn the code inside out: you
have to move all the async calls that would naturally be nested into
subexpressions before the statement that contains them. You also have
to rethink your flow control: turn iterative constructs into recursive
ones, find tricks to re-package common code after if/then branches,
etc. The simple patterns that worked so well in sync-land don't work
any more; you have to learn new, much heavier, patterns. You feel (or
at least I felt) like you are not working with a high-level language
any more but instead with some kind of strange "intermediate language"
that forces you into a complex translation effort: you still think
about your algorithms in terms of nested function calls, if/else
branches, while loops, etc. but you cannot write them this way; you
have to transform the code to cope with what callbacks impose on you.

What I am proposing with streamline.js is to fix this so that you
don't have two languages but one, and so that you can use the same
"mental model" in sync-land and async-land. I had to introduce a new
operator for this (the underscore prefix - I actually hacked it with a
naming convention to avoid problems with today's tools) and I can
understand that the community is worried to see a new operator
introduced in the language. But this operator is only a catalyst. It
does not do much by itself. It does something much more interesting:
it re-enables all the existing language features (nested calls, if/
else branches, while loops, try/catch/finally) in async-land.

The idea is that the programmer should not be required to learn new
libraries or abstractions in order to write async code. He should be
able to use the simple mechanisms that he has been using all along in
sync-land and the compiler should take care of translating his code to
the new intermediate language. He won't feel like programming with a
schizophrenic language any more because async-land programming will
feel again like sync-land programming: simple algorithms expressed
with core language constructs that you can directly relate to the
problem you're trying to solve, not disassembled code with lots of
extra noise.

What I like in this approach is its simplicity: it should allow
mainstream Javascript developers to write code for node, without
having to learn anything radically new.

Bruno

On Jan 16, 10:07 am, Ryan Dahl <r...@tinyclouds.org> wrote:
> On Fri, Jan 14, 2011 at 11:58 AM, Havoc Pennington <h...@pobox.com> wrote:
> > Here's what I think would be nice, I wrote down a couple months ago:
> >http://blog.ometer.com/2010/11/28/a-sequential-actor-like-api-for-ser...

billywhizz

Jan 17, 2011, 7:36:50 AM
to nodejs
just what is wrong with learning something new? i don't understand the
argument. developers should be constantly learning new things, or by
"mainstream" developers do you mean "bad" developers? please stop
flogging this dead horse....

Liam

Jan 17, 2011, 8:32:36 AM
to nodejs
On Jan 17, 4:36 am, billywhizz <apjohn...@gmail.com> wrote:
> just what is wrong with learning something new? i don't understand the
> argument. developers should be constanstly learning new things, or by
> "mainstream" developers do you mean "bad" developers. please stop
> flogging this dead horse....

It's hardly a dead horse.

One of Ryan's stated goals is to replace PHP. When average PHP
developers start checking out Node, many are likely to be frustrated
by the issues Bruno's raising. If the Node community doesn't have an
answer for them, they won't hang around.

Havoc Pennington

Jan 17, 2011, 11:54:00 AM
to nod...@googlegroups.com
Hi,

On Sun, Jan 16, 2011 at 10:01 PM, Mikeal Rogers <mikeal...@gmail.com> wrote:
> Furthermore, those implementations introduce a new keyword and are breaking
> changes to the language so considering them for use in node is incredibly
> premature.

Sure. Still, in my opinion there's a pretty interesting problem about
nicer coding models, and I think "callback spaghetti" is a real
problem in code that contains chains of async calls, especially in
more complex cases (making a set of parallel calls then collecting
results, or trying to handle errors robustly, are examples of
complexity).

There are various third-party attempts to address this for node as
mentioned earlier in the thread, showing the demand.

It's sort of like raw threads vs. higher-level tools such as
java.util.concurrent or actors. Those higher-level tools are useful.

> There isn't a task you can't accomplish because you don't have generators
> except sugar.

I've also used them pretty heavily (exactly as described) doing
client-side async code with gjs, and I think they work well for this.

The thing you can uniquely do is write sequential code (not
"inverted"/"nested") that waits on async events [1], because
generators allow you to save a continuation. Error handling also
happens more naturally since the "yield" statement can throw.

I agree generators aren't that exciting just as a way to write an
iterator, and that it's a bit odd that continuation-saving appears as
kind of a side effect of an iterator mechanism.

The usage of "yield" here is almost exactly like C#'s new "await"
keyword, I think. The "await" language feature is done purely for this
write-my-callbacks-sequentially purpose instead of mixing in the
iterator stuff. A JS feature like that might be nicer.

Kilim for Java does a very similar thing as well.

This stuff really is just syntactic sugar for typing your callbacks in
a nicer way, it's true. "Make the rest of this function a callback"
rather than typing the callback as a function literal.

However, sequential code is a lot easier for human brains to work with.

I think that's why multiple frameworks/languages/whatever are
inventing this kind of thing.

Havoc

[1] another way to write sequential async code, that's been used some
e.g. in GTK+ in the past, is the "recursive main loop" which is
allowed by GLib. In this case you don't return to the calling function
but just do a mainloop poll() while inside a callback from the
original mainloop poll(), then continue with your original callback.
There are some worrisome side effects and complexities caused by this
though so the yield/await type thing has more appeal imo.

Havoc Pennington

Jan 17, 2011, 11:58:55 AM
to nod...@googlegroups.com

Kyle Simpson

Jan 17, 2011, 12:28:59 PM
to nod...@googlegroups.com
I agree that the verbosity of nested callbacks hurts the code-maintainability of async patterned JS code. This is not something that merely having a special character as shorthand for "function" will fix.
 
A(1, 2, 3, #(){  ...B("foo", #(){ ...C("bar"); ... } ); ... });
 
Is not really that much better than:
 
A(1, 2, 3, function(){  ...B("foo", function(){ ...C("bar"); ... } ); ... });
 
Moreover, it creates a negative side-effect: If I want B to happen after A finishes (asynchronously), I have to pass a reference to B into A, and then I have to rely on A to properly execute B (with proper parameters, if any) for me, and I have to trust it to not muck with it, or call it multiple times, or too early, or pass it invalid input, etc.
 
What if I don't control A? What if I don't really trust A in that sense, and only want to "subscribe" to be notified that A has finished, and then let *my* code that I control be responsible for executing (and parameter passing) of B? And that line of questioning obviously gets much murkier if there's a chain of A, B, C, and D, etc.
 
----------
I have informally proposed and toyed around with a language extension to JS (an operator) that would allow statement-local "continuations" for expressing a chain of sync/async steps, while not affecting any surrounding sync statement processing. It might look like this:
 
some_sync_statement;
//...
A(1,2,3) @ B("foo") @ C("bar");
//...
another_sync_statement;
 
In this case, A() would run immediately. But if inside A, it suspended itself for async completion later, the @ operator would see this and would suspend execution of the rest of its containing statement, and continue with the rest of the program. When A "completes", at the next available "turn", the statement will resume (aka, "continue") and `B` will then be called, etc. This allows a chain of sync and async steps to be set together into a single statement, and allows the calling program to control all the executions and parameter passing and expressively code the "dependency" chain.
 
Any expression operand in the @ statement which is not a function that suspends itself (including regular functions, and any non-function-call expressions), would be assumed to "complete" immediately and directly continue along to the next @ operator expression in the statement.
 
So:
 
A(1,2,3) @ blah = 12 @ B("foo") @ alert("bar");
 
Would be a legal chain of sync and async functions that would execute in order.
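Since @ is only proposed syntax, the closest one can get today is a library-level approximation. The `seq` helper below is invented for this sketch; its convention (a step that accepts a continuation argument "suspends itself", anything else completes immediately) is a stand-in for what the @ operator would detect at the language level:

```javascript
// Invented library-level approximation of the proposed @ chaining.
function seq() {
  const steps = Array.prototype.slice.call(arguments);
  (function next(i) {
    if (i >= steps.length) return;
    const step = steps[i];
    if (typeof step === "function" && step.length > 0) {
      // Step takes a continuation: it suspends itself, we wait.
      step(function () { next(i + 1); });
    } else {
      if (typeof step === "function") step(); // sync step
      next(i + 1); // non-suspending: continue immediately
    }
  })(0);
}

// Roughly the shape of: A(1,2,3) @ B("foo") @ C("bar");
seq(
  function (done) { setImmediate(function () { console.log("A"); done(); }); },
  function ()     { console.log("B"); },
  function (done) { setImmediate(function () { console.log("C"); done(); }); }
);
```

The surrounding synchronous statements are unaffected, which is the statement-local property the @ proposal is after; what the library version cannot do is make the chain look like ordinary expression syntax.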
 
There's lots more detail in this exploration:
 
 
And this blog post:
 
 
And also this es-discuss thread:
 
 
 
--Kyle

Chris Austin-Lane

unread,
Jan 17, 2011, 1:18:33 PM1/17/11
to nod...@googlegroups.com
On Monday, January 17, 2011, Kyle Simpson <get...@gmail.com> wrote:

> What if I don't control A? What if I don't really trust A in that sense, and only want to "subscribe" to be notified that A
> has finished, and then let *my* code that I control be responsible for executing (and parameter passing) of B?
> And that line of questioning obviously gets much murkier if there's a chain of A, B, C, and D, etc.
>

This question is very important when programming in a single-threaded
callback environment with libraries of varying degrees of quality.

The canonical answer is to put an intermediary between you and the bad
libs you call, keyed by a dict of pending-call strings. When you get
called back from the lib, you take the string it hands back and look
it up in the dict: if it's not found, log an error and return; if it
is found, delete it from the dict and proceed (using data from the
dict values if your caller is especially bad).

If there is some chance the cb will never be invoked, then you set a
timer in code you control, delete the pending call from your dict when
it fires, and do some timeout handling.

Then you can explicitly handle the suckiness of your API.
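A minimal sketch of that pending-call dict (the name `guardedCall` and the timeout value are assumptions for illustration; the pattern is one dict entry per outstanding call, deleted on first completion, so duplicate, late, or never-arriving callbacks are all handled explicitly):

```javascript
// One entry per outstanding call into an untrusted library.
const pending = {};
let nextId = 0;

function guardedCall(libFn, args, cb, timeoutMs) {
  const id = nextId++;
  pending[id] = {
    cb: cb,
    timer: setTimeout(function () {
      // The lib never called back: clean up and report a timeout.
      if (pending[id]) {
        delete pending[id];
        cb(new Error("timed out"));
      }
    }, timeoutMs)
  };
  libFn.apply(null, args.concat(function (err, result) {
    const entry = pending[id];
    if (!entry) return; // duplicate or too-late callback: drop it
    delete pending[id];
    clearTimeout(entry.timer);
    entry.cb(err, result);
  }));
}
```

Even a lib that calls its callback twice, or never, only ever reaches your handler exactly once.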

On a broader note, the stark reality of network calls is that any
given call has at least six possible outcomes:

Request received, succeeded, reply received.
Request received, succeeded, reply lost.
Request received, failed, reply received.
Request received, failed, reply lost.
Request fatal to receiver, no reply, endpoint killed. (retries do not
fix all problems).
Request lost.

People that think about these outcomes and see how different from
in-stack function calls it is will write more robust networking code,
that will be faster and have fewer weird failures than those that use
an RPC type model.

The amazing power that the one-thread, async framework gets you is an
ability to reason much more strongly about your code. This reasoning
power saves you more programmer time than the async syntax is costing
you, at least once you have any trouble or surprises.

--Chris

billywhizz

unread,
Jan 17, 2011, 2:10:53 PM1/17/11
to nodejs
the dead horse is the idea that keeps getting pushed to have this kind
of thing in core. ryan has stated many times that core will remain as
small and low level as possible. i have no problem with anyone
building and publicising a library to do this kind of thing, but
please stop pushing for it to be in core. if i'm misreading bruno i
apologise, but it seems to me that is what he keeps arguing for...

Kyle Simpson

unread,
Jan 17, 2011, 2:37:02 PM1/17/11
to nod...@googlegroups.com
> the dead horse is the idea that keeps getting pushed to have this kind
> of thing in core. ryan has stated many times that core will remain as
> small and low level as possible.

That's exactly what I'm proposing with the @ operator, a very low-level
basic building block *built into JavaScript* for negotiating async
continuations, localized to a single statement. Many of the common use-cases
would be composable by simple use of the @ operator, and the libraries would
then be more for the advanced use-cases and for providing additional
syntactic sugar as desired.

I'm thinking about implementing this idea as an extension to V8 to make it
available to node. Even if the idea never caught on for mainstream
JavaScript engines, that doesn't mean something low-level like this (or
similar) couldn't be useful to the V8 that powers node.


--Kyle

billywhizz

unread,
Jan 17, 2011, 2:37:22 PM1/17/11
to nodejs
i had a look at that c# await stuff and found it much more confusing
than just using callbacks. maybe it's just me, but i'd prefer to use
the low level stuff and know what it is actually doing than have tons
of layers of syntactic sugar to wade through when my program goes
wrong...

also, i'm very skeptical of the idea of "average php programmers".
what exactly does that mean? php is a horrendous mess of a language
and i am pretty sure that ryan has no intention of making node.js
anything like it. also, i think there are a lot of "bad/lazy"
programmers out there who currently get paid quite a lot of money to
hack together unmaintainable solutions in PHP without really having a
clue what they are doing (i have encountered a LOT of these down the
years). they will all be out of work or doing something else in the
next 10 years unless they go back to basics and start to understand
what is going on under the hood. seems to me the world of
multiprocessor/parallel programming is in the process of going
mainstream (because the market is demanding the efficiencies it brings
to the table) and there is no way of making that stuff "easy"....
maybe i'm completely off base, but i've been in this game a long time
and it sure seems to me that a lot of "programmers" are going to be
getting a pretty serious wake up call in the years ahead...

Adam Crabtree

unread,
Jan 17, 2011, 3:12:09 PM1/17/11
to nod...@googlegroups.com
All the proposed new operators really worry me. Though I'm sure there are some outstanding proposed language additions, I would recommend the community stay vigilantly cautious against language noise. 

I personally don't like your @ and prefer to leave this up to the libraries, but I may be in the minority, and I can certainly choose not to utilize it. ;)

Mikeal Rogers

unread,
Jan 17, 2011, 3:17:11 PM1/17/11
to nod...@googlegroups.com
Node will not be adding new syntax, node takes v8, if v8 implements new syntax it might be taken.

Node doesn't even allow additions to global prototypes in core so adding syntax that isn't in v8 is way out of the question.

Liam

unread,
Jan 17, 2011, 4:33:37 PM1/17/11
to nodejs
OK, sure, it's not going in the Node "kernel". But don't be surprised
when the most popular Node "distribution" prominently features a tool
like this!

That said, it does seem like a solution more native to the language
would be most effective, and as a community we probably ought to be
arguing for that in V8 and ECMA with whatever cachet we have, and not
just shrugging, "not my department."

Mikeal Rogers

unread,
Jan 17, 2011, 4:37:17 PM1/17/11
to nod...@googlegroups.com
One thing is sure, asking for language features here is pointless because this isn't where they are specified or implemented and they never will be.

Just because you're afraid of the es-discuss or v8 lists don't think that this is a productive way to get these features somewhere you can use them.

Kyle Simpson

unread,
Jan 17, 2011, 4:49:47 PM1/17/11
to nod...@googlegroups.com
> Node will not be adding new syntax, node takes v8, if v8 implements new
> syntax it might be taken.
>
> Node doesn't even allow additions to global prototypes in core so adding
> syntax that isn't in v8 is way out of the question.

I'm not suggesting Node add new syntax. I'm suggesting that I'm going to
fork and extend *V8*, to support the new syntax, and then I'm going to allow
that new "V8+" to easily run inside Node.

Frankly, I don't really care if "Node" (as in, the core Node team) ever
wants to accept or promote the V8 extension I propose. Whether you ever
officially endorsed the idea or not is irrelevant -- I'm simply going to
make it easy for someone to use an alternate fork of V8 in their own copy of
Node, to have a simpler way of dealing with sync/async. They should be just
as free to choose that fork of Node/V8 as they would to choose one of the
several promise/defer libs being kicked around.

BTW, extending V8 to experiment with new ideas for the language already has
strong precedent from the Mozilla team. They do new stuff all the time, most
of which has never been accepted into the official language. That doesn't
mean it's not useful for projects that choose to use their engine as its
core. The same should be true here.

In any case, that's a real shame if core Node is completely opposed to
looking at ways to improve server-side JavaScript by extending the language.

----------

I've written enough JavaScript to be convinced that the language is indeed
lacking in its ability for developers to cleanly, efficiently and securely
negotiate async and sync code in the same program. It's enough of an
important fundamental characteristic that I think *something* of a solution
belongs natively in the language, if even only a narrow helper.

If you like nested callbacks and you don't care about why this is important
to the rest of us, fine. Go on writing that god-awful looking code. Don't
expect me to use your code though. I'd rather rewrite an entire library from
scratch than use and try to maintain some of what I've seen out in the
Node.js community's projects around these async code patterns.

I didn't think I'd ever say something like this, but I prefer *JAVA* and
*PHP* over some of the terrible async JS callback code I've seen floating
around. Gah! Someone please just shoot me in the knee cap for saying that.

It's more palatable to me to take on the giant task of extending the JS
engine's language syntax than it is to keep writing (and reading) all this
crappy callback-laden code, or having some group of people arbitrarily
decide for me what *they* think is the best API for this is.

The @ operator's advantage is that it's a very simple building-block upon
which more complex and syntax-sugary promise/defer libs could be built to
help appease developer preferences. It may or may not prove to be a good
idea, but at least it's a step in *some* direction rather than the constant
circular arguing that never goes anywhere.

Can't we see it's utterly pointless to argue over natively included
APIs/libs because no single API will ever solve *all* the different
async/defer/promise use-cases properly. If Node core (or JavaScript itself)
ever chose one specific promise/defer API, they'd be alienating/ignoring all
the other developers for whom that API is either distasteful or inadequate.
Just shouldn't happen. And if it *did* happen, it'd be worse than what we
have now. So, get over it.

With respect to (very slim) chances to get native support/acceptance for @,
I think a single operator for statement-level continuations has an
attractive simplicity and smaller surface area to disagree about than any of
the other stuff being considered. Perhaps it's naïve and outlandish of me to
think @ has any chance of making it into JavaScript, but it's got to have a
microscopic sliver of a better chance than any complicated API/lib making it
in.

----------

I'm going to build the prototype for this idea so that it's not just theory
anymore. If nothing else, at least *I* will be able to start writing cleaner
and better async code. I just wish the community would stop arguing over and
over ad nauseam about the same dead horses. At some point, we just have to
take some step toward a solution. Almost any step in any direction would be
better than this dross.

Wash, rinse, repeat. Yawn.

--Kyle

Mikeal Rogers

unread,
Jan 17, 2011, 4:55:27 PM1/17/11
to nod...@googlegroups.com
Ryan has advocated, for some time, that people try alternate experiments with server side javascript.

But, whatever you build, and I wish you the best, it won't be node. Part of what defines node are the very points you take issue with and do not prefer. If people agree with you and like what you're doing that is great.

This is the node.js list. Let's get this back to being about node.js.

Good luck with your project, it sounds cool.

Mihai Călin Bazon

unread,
Jan 17, 2011, 5:00:10 PM1/17/11
to nod...@googlegroups.com
That's the spirit! Thumbs up man!
Good luck and do let us know when you have something usable.

Cheers,
-Mihai


--
Mihai Bazon,
http://mihai.bazon.net/blog

Kyle Simpson

unread,
Jan 17, 2011, 5:04:35 PM1/17/11
to nod...@googlegroups.com
> One thing is sure, asking for language features here is pointless because
> this isn't where they are specified or implemented and they never will be.

I am not "asking" for a language feature. I'm suggesting one, and announcing
that I'm going to build it so that perhaps others can benefit. I'm
announcing that effort *here* because I think some people in this community
may benefit from a different approach to this problem instead of the same
old arguing that never goes anywhere. This is a node.js list, and I'm
talking about a fork project of node.js.

> Just because you're afraid of the es-discuss or v8 lists don't think that
> this is a productive way to get these features somewhere you can use them.

I'm not afraid of either list. And I don't need your help or blessing to
"get these features somewhere" that I (or others) can use them.

> ...whatever you build... it won't be node... This is the node.js list.

> Let's get this back to being about node.js.

I fail to see how discussing an alternative idea which relies on an
extension to JavaScript doesn't qualify as being related to Node.js, if my
publicly expressed intent is to use that feature inside of Node wrapped
around the extended V8.

--Kyle

Liam

unread,
Jan 17, 2011, 5:06:29 PM1/17/11
to nodejs
An "average" programmer is a guy trying to get his current project
done with the minimum amount of thought and time. He may not plan
ahead because most of his projects won't need scale, and many of his
customers don't continually request features. He'll try out new things
that become trendy.

Javascript is pretty attractive to these folks. Callbacks aren't so
much. We're gonna need a good patterns FAQ or they'll inundate this
list with basic questions that most of us answered for ourselves. If
we can steer these guys to a library or language feature early on,
everyone will be a lot happier.

Liam

unread,
Jan 17, 2011, 5:11:23 PM1/17/11
to nodejs
On Jan 17, 1:37 pm, Mikeal Rogers <mikeal.rog...@gmail.com> wrote:
> One thing is sure, asking for language features here is pointless

We need to develop a consensus at some point on how to address this
issue for the next wave of Node users. If the consensus is a language
feature, proposing that as a community is more effective.

Ryan Gahl

unread,
Jan 17, 2011, 5:24:01 PM1/17/11
to nod...@googlegroups.com
The consensus is most assuredly _not_ a language feature. The problem is lack of understanding. You can't fix that by changing the language. This is like people who type "facebook" into the Google search bar, thinking that Google is their browser's address bar, and then when the first link on the search results one day is suddenly not facebook.com they freak out because "their browser is broken" or "facebook changed something"... and then everyone saying "OMFG, the Internet is broken!!!!!! We need to come to a consensus on how to fix the Internet so people that don't understand how to use it can use it easier, in that broken way they think it should work"

If you can't deal with deeply nested callback structures, try putting just a little thought towards workflow composition in your app. Try a lightweight IoC pattern, or finite state machines (works wonders!).... use custom EventEmitters to orchestrate messaging tasks via deferred queues. Try... abstraction *gasp*. 

This is a userland thing. That's definitely the consensus.

-rg

Ryan Dahl

unread,
Jan 17, 2011, 5:41:20 PM1/17/11
to nod...@googlegroups.com
On Sun, Jan 16, 2011 at 6:08 PM, Havoc Pennington <h...@pobox.com> wrote:
> Hi,

Hi

Aside: I'm a Gnome/GTK fan-boy. I eagerly read your book when I was in
high school - it was great.

> On Sun, Jan 16, 2011 at 4:07 AM, Ryan Dahl <r...@tinyclouds.org> wrote:
>> Adding multiple stacks is not a seamless abstraction. Users must know
>> worry that at any function call they may be yielded out and end up in
>> an entirely different state. Locking is now required in many
>> situations. Knowing if a function is reentrant or not is required.
>
> I sympathize with what you're saying and with node's design. There are
> real tradeoffs in simplicity and also in overhead. My first reaction
> to this was also, "it's better to be explicit and type in your
> callbacks."
>
> That said:
>
>  - I'm not sure all the points you mentioned about coroutines apply.
> JavaScript generators are more controlled.
>
>  - I'd love to try something built on node or like node that goes
> slightly differently, even if node itself remains with the tradeoffs
> it has. I think the way node works is good, but it's not the only
> useful way.
> It sort of depends on what kind of developer and what kind of task.

Yeah - it will be interesting to see how it works out. You should be
careful to note that it is not EcmaScript.

I think your ultimate goals - what you list at the end - are
achievable without modifying the language.


>  - Raw-callback code is truly rocky in many instances. Chains of
> callbacks where async result C depends on async result B depends on
> async result A, are the worst. And trying to ensure that the outermost
> callback always gets called, exactly once, with only one of either an
> error or a valid result.  It's not writable or readable code.
>
> Some details that I think matter:
>
> 1. Generators here are just an implementation detail of promises. They
> don't "leak" upward ... i.e. I don't think a caller would ever care
> that somewhere below it in the call stack, someone used "yield" - it
> should not be distinguishable from someone creating a promise in
> another way.
>
> See fromGenerator() here, which is just a promise factory:
> http://git.gnome.org/browse/gjs/tree/modules/promise.js#n317

Okay. So a generator will allow you to "block" a computation - to
save the continuation. I grant this is useful and does not leak upward
- but blocking on pure computation is not what people care about. The
callbacks in Node are never from computation - they are from I/O.
You're not addressing the complaints, actually.

People complain not being able to do this:

var result = database.query("select * from table");

People do not complain about being forced to structure their
interruptible parser as a state machine rather than recursive descent -
which generators would allow. Blocking computation vs blocking I/O.


> you mentioned:
>> Node offers a powerful guarantee: when you call a function the state
>> of the program will only be modified by that function and its
>> descendants.
>
> fwiw, I don't think generators and yield modify this guarantee. From
> "inside" the generator during a yield, yes program state can be
> changed by the caller; but from "outside" the generator, the generator
> is just a function. Anytime program state can change "under you,"
> you're going to be seeing the yield keyword, and you're going to be
> implementing a generator yourself.

Yes, it seems that's correct to me too. But again, I don't think this
is the addressing problems people have.


> 2. Promises in this code are just callback-holders. That is, if you
> have an API that takes a callback(result, error) then you can
> trivially wrap it to return a promise:
>
>  function asyncThingWithPromise() {
>     var p = new Promise();
>     asyncThingWithCallback(function(result, error) { if (error)
> p.putError(error); else p.putReturn(result); })
>     return p;
>  }
>
> And vice versa:
>
> function asyncThingWithCallback(cb) {
>   asyncThingWithPromise.get(cb);
> }

Promises are purely library sugar. I'm not sure if you're suggesting
Promises require generators - if so - you're wrong. Here is an
implementation that does not require modifying the language:
https://github.com/kriszyp/node-promise

If the promise has a 'wait()' function, then it becomes interesting.
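A "callback-holder" promise of the kind quoted above (putReturn/putError/get, with the callback(result, error) signature) really is just library sugar. A minimal sketch, named Promise2 here only to avoid clashing with the built-in Promise in modern Node:

```javascript
// Minimal callback-holder promise: stores callbacks until a result
// or error is "put", then fires each callback exactly once.
function Promise2() {
  this.callbacks = [];
  this.resolved = false;
}
Promise2.prototype.get = function (cb) {
  if (this.resolved) cb(this.result, this.error); // already settled
  else this.callbacks.push(cb);                   // hold until settled
};
Promise2.prototype.putReturn = function (result) { this._fire(result, null); };
Promise2.prototype.putError  = function (error)  { this._fire(null, error); };
Promise2.prototype._fire = function (result, error) {
  this.resolved = true;
  this.result = result;
  this.error = error;
  this.callbacks.forEach(function (cb) { cb(result, error); });
  this.callbacks = [];
};
```

Wrapping a callback-style API then works exactly as in the quoted asyncThingWithPromise example; no language changes involved.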


> 3. The way I'd see APIs and libraries in this world would be
> promise-based, i.e. an async routine returns a promise. Given
> asyncThing() that returns a promise, you can set up a callback:
>
>   asyncThing().get(function(result, error) { });
>
> Or if you're a generator, you can kick the promise up to your caller,
> save your continuation, and either resume or throw when the result
> arrives:
>
>   result = yield asyncThing();
>
> "yield" only goes up one frame though, it just makes an iterator from
> the continuation and returns it, it doesn't unwind the stack or allow
> arbitrary code to run. The caller (or some caller of that caller)
> still has to set up a callback on the promise. The callback would
> resume the continuation.
>
>
> 4. I could be wrong but I don't think JS generators in SpiderMonkey
> get longjmp() involved. The yield keyword saves a continuation, but
> the generator function then returns "normally" returning an iterator
> object. The calling function has to take that iterator object (which
> is implemented with a saved continuation) and if it chooses, it can
> further unwind the stack by itself yielding or returning, or it
> doesn't have to. Anyway there's not really much magic. Continuations
> are an implementation detail of iterators.

"saves a continuation" means that the call stack was saved - you
return to the previous. Once it's saved you now have two execution
stacks. It appears spidermonkey does this by memcpying the top frames.
Magic.


> Another way to put it, is that at each re-entrancy point you're going
> to see either the "yield" keyword (push the promise to the caller, by
> returning a promise-generating iterator) or you're going to see a
> callback (add a callback to the promise).
>
> It isn't like exceptions or threads where you can be interrupted at
> any time. Interruptions only happen when you unwind all the way back
> to the event loop which then invokes a callback.
>
>
> 5. The flip side of this is that "yield" isn't magic; people have to
> understand how it relates to promises, what a generator function is,
> etc. Pasting yield-using code from one function to another might not
> work if the destination function isn't a generator.

It's definitely not EcmaScript and it's not implementable in
EcmaScript. I would call this magic.


> 6. I piled several potentially separable but kinda-related things into
> the blog post I linked to. One is the "yield" convention.  Others
> include:
>
>  - have "actors" which have their own isolated JS global object, so no
> state shared with other actors.
>
>  - share the event loop among actors such that callbacks from
> different actors can run concurrently but those from the same actor
> can't
>   (i.e. from perspective of each distinct JS global object, there's
> only one thread, from perspective of the overall runtime, there are
> multiple - but not thread-per-request! the thread pool runs libev
> callbacks)
>   (actors think they each have their own event loop, but it's
> implemented with a shared one for efficiency)

This is definitely do-able. In V8 our context creation isn't very
cheap, so creating new "green processes" would not be as cheap as one
would like - but probably much cheaper than starting a whole new
process.

I like my model:
- Processes are thick, they handle many hundreds or thousands of
concurrent connections.
- Process creation is relatively expensive (compared to green
threads), it requires starting a whole OS process.
- However: all the OS features can be applied: killing them
independently, security, etc.
- Most apps live in a single process - which is simple and nice.

Note that this in no way bars one from having multiple event
loops/contexts that communicate. The only limitation is that you won't
have an event loop/context for each connection because the overhead is
too high. You need to pack a couple hundred connections together into
one process.


>  - have a framework where each http request gets its own actor
> (therefore global object) ...  (something that's only efficient with
> SpiderMonkey in their most bleeding edge stuff, and I have no idea
> about any other JS implementation. and for sure it would always have
> _some_ overhead. I haven't measured it.)

This is something that can't be done in Node without a lot of overhead.


>  - I'd expect state shared among http requests to be handled as in
> other actor frameworks, for example you might start an actor which
> holds your shared state, and then http request handlers could use
> async message passing to ask to set/get the state, just as if they
> were talking to another process.

This is how large (multi-process) Node programs tend to work as well. The
shared state actor might be redis.


> 7. In addition to using all cores without shared state or locking, an
> actors approach could really help with the re-entrancy dangers of both
> callbacks AND a "yield promise" syntax. The reason is that each actor
> is isolated, so you don't care that the event loop is running handlers
> in *other* actors. The only dangerous re-entrancy (unexpected callback
> mixing) comes from callbacks in the actor you're writing right now.
> One way to think about actors is that each one works like its own
> little server process, even though implementation-wise they are much
> lighter-weight.

Yes. In Node the processes are real processes - so they are somewhat
heavy. Giving a process to each request is too much overhead in Node -
so there will always be multiple connections on a process. There are a
lot of good things about this but a bad thing is that you cannot kill
a single connection and remove its memory and consequentially you
cannot upgrade a single connection to a new version of the software
without bringing down the entire OS process. Your thin processes will
be better in that area.


> 8. So the idea is Akka-style functionality, and ALSO libev, and ALSO
> ability to write in JavaScript, and ALSO wire it up nicely to http
> with actor-per-request ;-) and a pony of course

That's my idea too. I haven't yet released my IPC/Actor framework for
Node - but this is all possible with thick, simple processes and
without modifying the language. We're working on this outside of the
public repo right now, but we'll release it in a few months.

(By 'thick' I mean that they are OS processes and tend to handle many
hundreds of connections, as opposed to Erlang-style 'green' or 'thin'
processes.)


> Anyhow, I realize there are lots of practical problems with this from
> a node project perspective. And that it's harder to understand than
> the browser-like model.
>
> At the same time, perhaps it remains simpler than using actors with
> Java, Scala, or Erlang and preserves other advantages of node. Once
> you're doing complex things, sometimes it can be simpler to have a
> little more power in the framework taking care of hard problems on
> your behalf.

I do not think Node precludes one from 'Actor'-style systems.

> Just some food for thought is all since it was sort of an in-the-weeds
> thread to begin with. I don't intend it as a highly-unrealistic
> feature request ;-) don't worry.
>
> I am hoping there are people out there who explore these kind of
> directions sometime.

Me too, it's interesting. I just want people to understand that even
though Node takes a very "simplistic" approach to computing, it is not
fundamentally limited. I admit that Node has not done enough to make
building IPC/cross-machine programs obvious - but the foundations are
there.

Ryan

Ryan Dahl

unread,
Jan 17, 2011, 6:04:47 PM1/17/11
to nod...@googlegroups.com
On Tue, Jan 11, 2011 at 8:47 AM, Sam McCall <sam.m...@gmail.com> wrote:
> I love the event loop, and I understand the execution model, and I
> don't want to change it.
> I don't feel like Javascript is desperately missing macros.
> But I need someone to talk me down, because I'm coming to the
> conclusion that the certain sorts of programs are unwritable without
> language extensions.
> Not technically unwritable, of course, they work really well. But the
> code is so full of callback noise that you can't see the logic.

Sam,

I applaud your effort. You're trying to address your problem without
resorting to a more complex programming model. High-performance,
single-call stack attempts at "blocking" I/O are woefully absent from
the literature.

(That said, I personally do not think a pre-processor is needed.)

Ryan

Preston Guillory

unread,
Jan 17, 2011, 6:16:21 PM1/17/11
to nod...@googlegroups.com
Can you really imagine changing JavaScript?  We still have a lot of IE6 boxes out there.  It takes something like a decade before new language features are widely enough deployed to be able to assume that clients have it.  Or if you say, "never mind the browsers, we're talking about Node," then you're abandoning the ability to run the same code both client- and server-side, one of the big selling points of server-side JS.

On the other hand, new languages are popping up all the time that compile to existing languages.  Not merely pre-processors, but entirely new languages.

https://github.com/jashkenas/coffee-script/wiki/List-of-languages-that-compile-to-JS
http://www.igvita.com/2010/12/14/beyond-ruby-mirah-reia-rite/

Would it make sense for CoffeeScript to incorporate the stuff Bruno did?

Kyle Simpson

Jan 17, 2011, 6:56:26 PM
to nod...@googlegroups.com
> The consensus is most assuredly _not_ a language feature. The problem is
> lack of understanding. You can't fix that by changing the language... If you
> can't deal with deeply nested callback structures, try putting just a little
> thought towards workflow composition in your app. Try a lightweight IoC
> pattern, or finite state machines (works wonders!)... use custom
> EventEmitters to orchestrate messaging tasks via deferred queues. Try...
> abstraction *gasp*.

So, let's walk through the progression of code patterns as it relates to
this discussion, as I'm curious exactly where in these steps your superior
knowledge of patterns would have kicked in and just magically solved all my
problems (even the ones I didn't know about yet).

I start out a new project and write:

A(1,2);
B(3,4);
C(5,6);

When I first write that code, the implementations of A, B, and C are all
synchronous/immediate and under my direct control. So my code works fine.

Then, for some reason a few weeks later, I need to change the implementation
of A and B and make them both asynchronously completing (because of XHR, for
instance). So, I go back and try to re-pattern my code with as little a
change as possible, both to the calling code and ALSO to the implementation
code inside A and B.

My first stab at it is:

A(1, 2, function() {
  B(3, 4, function() {
    C(5, 6);
  });
});

Which works! And I shrug off the syntactic ugliness of it. After all, 3
levels of nesting is not terrible. And I'll probably never need more nesting
than that. I hope.

But then, later, I realize that I need to swap in another implementation of
A from a third-party, which I don't really know or trust. And as I start to
examine this pattern, I realize, really, all I care about in this case is to
be "notified" (think: event listening) of when A fully finishes (whether
that is immediately or later).

In fact, I read up on that third-party's documentation for A, and I find
that they actually have conditions inside the implementation where sometimes
the callback will be executed immediately, and sometimes it'll happen later.
This starts to make me nervous, because I'm not sure if I can really trust
that they'll properly call my callback at the right time. What if they call
it twice? Or too early? What if that uncertainty creates other race
conditions in my calling code?

So, at this point, I'm getting really frustrated. I don't know exactly what
pattern I can use to cleanly and efficiently ensure that A(..) will run
(however it wants to run) and then once it's complete, and only then, and
only once, will I be notified of it being complete so I can run B.
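One small defensive pattern for the "called twice" worry is to wrap the callback so it can fire at most once. This is only a sketch of that idea; the `once` helper and the misbehaving `untrustedA` stub are mine, not from any library mentioned in this thread:

```javascript
// Wrap a callback so it can fire at most once; later invocations are
// silently ignored.
function once(callback) {
  var called = false;
  return function() {
    if (called) return;       // swallow duplicate invocations
    called = true;
    return callback.apply(this, arguments);
  };
}

// A hypothetical misbehaving third-party A that calls its callback twice:
function untrustedA(x, y, cb) { cb(null); cb(null); }

var runs = 0;
untrustedA(1, 2, once(function(err) { runs++; }));
// runs is 1, not 2
```

It does nothing for the "too early" (synchronous invocation) case, of course; that one you still have to design around.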

I look around at promises and defers, and I see things like `when()`. Since
those third-party implementations don't support promise/defer, I reluctantly
switch back to re-implementing A and B myself. So I try that pattern:

when(function(){ return A(1,2); }, function(){
  when(function(){ return B(3,4); }, function(){
    C(5,6);
  });
});

Ouch, that's getting uglier quickly. It's starting to get a lot harder to
understand what the heck that code is supposed to do. Then I see the
`then`-style pattern, and I try that:

A(1,2)
  .then(function(){ return B(3,4); })
  .then(function(){ C(5,6); });

OK, we're making some progress. That style is not terrible. It's not great,
but I can probably live with it. Except for the fact that I have a heavy
"Promise" library implementation creating these return objects, and the
implementation inside A and B is ugly, and *forces* me to always return a
promise.

What if in another part of my code, I want A to only return an immediate
value and not need/observe its asynchronously completing behavior?

Oops, guess I have to refactor again, and create a wrapper to A like this:

function immediate_A(num1, num2) { ... }
function A(num1, num2) {
  immediate_A(num1, num2);
  return ...; /* some promise creation mechanism */
}
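For what it's worth, the sync/async split can also be negotiated inside a single entry point by branching on whether a callback was supplied. A sketch only (the addition here stands in for whatever A really computes), and the larger point stands that this kind of bookkeeping multiplies:

```javascript
// One entry point, two calling styles: return the value directly when
// no callback is given, invoke the callback otherwise. Illustrative
// pattern, not from any library mentioned in this thread.
function A(num1, num2, callback) {
  var result = num1 + num2;           // stand-in for the immediate work
  if (typeof callback === 'function') {
    callback(null, result);           // async-style consumer
    return;
  }
  return result;                      // sync-style consumer
}

var sync = A(1, 2);                   // immediate value
var asyncResult;
A(1, 2, function(err, value) { asyncResult = value; });
```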

Now, I decide that I want B to actually pass a "message" onto C (aka, the
results of what B accomplished). I'm forced to change the function signature
of C so that it accepts a third parameter, and then I have to hard-code that
B will call C with exactly that signature. I try to get clever and use
partial currying for C for its first two parameters, but then my code keeps
getting even uglier and uglier.

The further down the rabbit hole I go, the more I realize that what was
natural to me in synchronous coding is nowhere even in the same universe as
the code I need to write when things become asynchronous. I realize that
even with perfect future-thinking foreknowledge of all my possible needs of
my use of these functions, I probably wouldn't have used a pattern that
could solve all of them.

All this "abstraction" has become a hindrance rather than a useful tool. I'm
frustrated and about ready to give up. Why oh why can't I just have the
language help me out instead of forcing me to jump through all these
"functional" hoops that are inevitably brittle when I need to make a change
to my usage of the pattern a few weeks from now?

All of the questions (and others) I'm raising here are ones I've actually
run into in my own coding, especially when writing code that I share between
browser and server where I need good (and maintainable) solutions for
sync/async negotiation.

-----
Boy, it sure would be nice if this just worked:

var foo = A(10,20) * B(30,40);
A(1,2) @ B(3,4) @ C(5,6);

And it hid from me most of the pain of dealing with promises and smoothed
over the differences between A, B, and C being sync or async, and let me use
immediate return values from A, B and C if I want to, without having to care
at all about possible asynchronicity under the covers.


> This is a userland thing. That's definitely the consensus.

The word "consensus" indicates that if the majority opinion/decision is not
unanimous, minority objections must be mitigated or resolved. You're either
conflating consensus with unanimous, or suggesting that anyone who thinks
the language could/should solve this (or at least be *part* of the solution)
is wrong and ignorable.

I have, as you can see above, strong objections to the assertion that it's
purely something a "userland" library can solve. At *best*, I'm going to
need several entirely different and probably incompatible promise/defer
libraries/syntaxes that I mix into my project for my different needs. What a
nightmare.

Before you go and casually dismiss the minority opinion that the language
could/should be part of the solution, I'd love to see perfect answers to all
of the above complications (and the dozen others I didn't present).

--Kyle

Havoc Pennington

Jan 17, 2011, 7:03:08 PM
to nod...@googlegroups.com
Hi,

On Mon, Jan 17, 2011 at 5:41 PM, Ryan Dahl <r...@tinyclouds.org> wrote:
> Aside: I'm a Gnome/GTK fan-boy. I eagerly read your book when I was in
> high school - it was great.

I appreciate it! node.js is sweet as well, otherwise I wouldn't be on
the mailing list of course.

> Yeah - it will be interesting to see how it works out. You should be
> careful to note that it is not EcmaScript.
>
> I think your ultimate goals - what you list at the end - are
> achievable without modifying the language.

I'm glad to hear you're working on actor stuff.

To be clear, I spent a fair bit of time messing around with some of
this, but for now I put it down and just posted what I had as a source
of ideas for others. I figure it'll be cool to see how node and akka
and other projects end up.

> People complain not being able to do this:
>
>    var result = database.query("select * from table");

An example of where we would use the "yield" syntax in gjs code, is
syncing data via http requests to a local sqlite database, where both
the http requests and the sqlite calls are async.

So it would look like:

var result = yield database.query("select * from table");

here database.query returns a promise but the value of "result" is the
promised value, not the promise itself.

Without the "yield" sugar you'd write:

database.query("select * from table").get(function(result, exception)
{ ... whatever ... });

.get() could also be called .addCallback()

Anyway, I guess this syntactic sugar discussion ends up pretty
academic at least in the short term, since V8 doesn't have it.

It is true that it's just sugar.

> Promises are purely library sugar. I'm not sure if you're suggesting
> Promises require generators - if so - you're wrong. Here is an
> implementation that does not require modifying the language:
> https://github.com/kriszyp/node-promise

The gjs implementation of promises I linked to does not require
generators - it's just a dead-simple list of callbacks that look like:
function(result, exception) { }

Generators which yield promises are one way of creating a promise.

When you do promise.putReturn(value) or promise.putError(exception)
then all the callbacks are invoked, and some callbacks may happen to
continue a suspended generator.

> If the promise has a 'wait()' function, then it becomes interesting.

wait() presumably needs a thread, or recursive main loop, or
something, so the implementation in gjs doesn't have that. To "block"
on a promise (you can't truly block) you would use the yield stuff
which boils down to just using callbacks, rather than blocking.

I think promises are kind of useful even without wait() or
simulated-wait, i.e. just as callback holders.

Some reasons are:

- you can have more than one callback on the same event pretty easily
(the promise is like a mini signals-and-slots thing)

- you can write utility routines like "make a promise that is
fulfilled when all these other promises are fulfilled"

> stacks. It appears spidermonkey does this by memcpying the top frames.
> Magic.

yeah, it's a little scary I suppose. :-) but it's better than
longjmp()! (OK a "better than longjmp" argument can justify anything
at all ;-)

> Yes. In Node the processes are real processes - so they are somewhat
> heavy. Giving a process to each request is too much overhead in Node -
> so there will always be multiple connections on a process. There are a
> lot of good things about this but a bad thing is that you cannot kill
> a single connection and remove its memory and consequentially you
> cannot upgrade a single connection to a new version of the software
> without bringing down the entire OS process. Your thin processes will
> be better in that area.

It's kind of nebulous but what I like about actors / thin processes is
the separation of concerns; meaning you can decide separately if you
want to have multiple OS processes, a thread pool, or even a single
thread. That decision moves up into the container. Ideally, it's even
kind of magic; the container figures out how many cores you need to
use and uses them.

Even assuming the JS implementation keeps each new
global-object-plus-stack pretty light, it would definitely have to be
an optional application or web framework decision to have one per
request, since it's overhead for sure. You have http-parser so finely
tuned to be ultra-low-memory that it isn't hard to make it several
times larger I'd guess and that wouldn't be suitable in many cases.

> That's my idea too. I haven't yet released my IPC/Actor framework for
> Node - but this is all possible with thick, simple processes and
> without modifying the language. We're working on this outside of the
> public repo right now, but we'll release it in a few months.

Cool.

> Me too, it's interesting. I just want people to understand that even
> though Node takes a very "simplistic" approach to computing, it is not
> fundamentally limited.

Oh, hopefully that's very clear. I think the event loop is the right
foundational building block.

Havoc

James Carr

Jan 17, 2011, 7:15:36 PM
to nod...@googlegroups.com
Instead of complaining, why not implement it? I'm sure someone will
find it useful.

John Taber

Jan 17, 2011, 7:27:33 PM
to nod...@googlegroups.com, Liam
On 01/17/2011 03:06 PM, Liam wrote:
> An "average" programmer is a guy trying to get his current project
> done with the minimum amount of thought and time. He may not plan
> ahead because most of his projects won't need scale, and many of his
> customers don't continually request features. He'll try out new things
> that become trendy.
>
> Javascript is pretty attractive to these folks. Callbacks aren't so
> much. We're gonna need a good patterns FAQ or they'll inundate this
> list with basic questions that most of us answered for ourselves. If
> we can steer these guys to a library or language feature early on,
> everyone will be a lot happier.

I suggested a "cookbook" or best-practice wiki and gave two typical
examples. We'll contribute some code if there's a good place to put
it. Should we just start a GitHub repo for this?

Liam

Jan 17, 2011, 7:52:36 PM
to nodejs
Perhaps a wiki page in the node repo? There isn't an FAQ there at
present...

https://github.com/ry/node/wiki/_pages

For it to be successful, one or two node wizards will need to write a
bunch of entries to kick off...

Liam

Jan 17, 2011, 8:00:35 PM
to nodejs
On Jan 17, 3:56 pm, "Kyle Simpson" <get...@gmail.com> wrote:
> The word "consensus" indicates that if the majority opinion/decision is not
> unanimous, minority objections must be mitigated or resolved. You're either
> conflating consensus with unanimous, or suggesting that anyone who thinks
> the language could/should solve this (or at least be *part* of the solution)
> is wrong and ignorable.

Amen.

Paul Spencer

Jan 17, 2011, 8:21:25 PM
to nod...@googlegroups.com
I believe that is exactly what Kyle (or was it someone else?) intends to do, based on previous comments in this thread: work on a modified version of V8 that extends the language with the @ operator. This new V8 could be used with Node as a drop-in replacement - no change to Node required. Just try it, and if you like it, party on!

Jorge

Jan 17, 2011, 8:21:47 PM
to nod...@googlegroups.com
On 18/01/2011, at 00:56, Kyle Simpson wrote:

> (...)


> Then, for some reason a few weeks later, I need to change the implementation of A and B and make them both asynchronously completing (because of XHR, for instance). So, I go back and try to re-pattern my code with as little a change as possible, both to the calling code and ALSO to the implementation code inside A and B.
>
> My first stab at it is:
>
> A(1, 2, function() {
>   B(3, 4, function() {
>     C(5, 6);
>   });
> });
> Which works! And I shrug off the syntactic ugliness of it.

Perhaps, for maximum clarity, that should have been:

A(1,2,b);
function b () { B(3,4,c) }
function c () { C(5,6) }

> After all, 3 levels of nesting is not terrible. And I'll probably never need more nesting than that. I hope.

But it needed no nesting... (?)

> But then, later, I realize that I need to swap in another implementation of A from a third-party, which I don't really know or trust. And as I start to examine this pattern, I realize, really, all I care about in this case is to be "notified" (think: event listening) of when A fully finishes (whether that is immediately or later).
>
> In fact, I read up on that third-party's documentation for A, and I find that they actually have conditions inside the implementation where sometimes the callback will be executed immediately, and sometimes it'll happen later. This starts to make me nervous, because I'm not sure if I can really trust that they'll properly call my callback at the right time. What if they call it twice? Or too early? What if that uncertainty creates other race conditions in my calling code?

var FLAG= 1;

A(1,2,b);
function b () { FLAG && B(3,4,c), FLAG= 0 }
function c () { C(5,6) }

> So, at this point, I'm getting really frustrated. I don't know exactly what pattern I can use to cleanly and efficiently ensure that A(..) will run (however it wants to run) and then once it's complete, and only then, and only once, will I be notified of it being complete so I can run B.

Adding a flag gets you frustrated?

> (...)


>
> Before you go and casually dismiss the minority opinion that the language could/should be part of the solution, I'd love to see perfect answers to all of the above complications (and the dozen others I didn't present).

Honestly, I think there's nothing complicated above. Perhaps somewhere in the other twelve, you mean?
--
Jorge.

Bruno Jouhier

Jan 18, 2011, 4:07:45 AM
to nodejs
Kyle,

What I am proposing is the following:

A_(1, 2);
B_(3, 4);
C_(5, 6);

which will be transformed into

A(1, 2, function(err) {
  if (err) return callback(err);
  B(3, 4, function(err) {
    if (err) return callback(err);
    C(5, 6, function(err) {
      if (err) return callback(err);
      ...
    });
  });
});

Instead of introducing a new operator @ that somehow replaces
something that's already there in the language (the semicolon), I'm
adding a little marker on the async functions.

But semicolon and statements chaining is only the beginning of the
story. What's more interesting is that once this marker is in place,
you can start using *all* the flow control statements that already
exist in Javascript. For example, you can write:

if (A_(1, 2)) {
  B_(3, 4);
}
C_(5, 6);

or even:

try {
  A_(1, 2);
}
catch (err) {
  B_(3, 4);
}
finally {
  C_(5, 6);
}

And even tricky things like

if (A_(1, 2) && B_(3, 4)) // don't call B if A is false
C_(5, 6);

will work as you would expect them to work.
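For illustration, the short-circuit case would presumably expand along the same lines as the transformation shown above. This is my sketch of a plausible desugaring, not actual streamlinejs output; stub implementations of A, B, C, and callback are included so it runs:

```javascript
// Hypothetical desugaring of `if (A_(1,2) && B_(3,4)) C_(5,6);`
// following the expansion pattern above. Stubs record the call order;
// B returns false, so && is falsy and C must be skipped.
var log = [];
function A(x, y, cb) { log.push('A'); cb(null, true); }
function B(x, y, cb) { log.push('B'); cb(null, false); }
function C(x, y, cb) { log.push('C'); cb(null); }
function callback(err) { throw err; }

A(1, 2, function(err, a) {
  if (err) return callback(err);
  if (!a) return next(false);        // && short-circuits: skip B entirely
  B(3, 4, function(err, b) {
    if (err) return callback(err);
    next(b);
  });
});

function next(cond) {
  if (!cond) return;                 // condition false: skip C
  C(5, 6, function(err) {
    if (err) return callback(err);
  });
}
```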

And you can combine all these at your will. The transformation is very
algebraic and won't bark at you because you wrote something complex
(except if you start using labelled break or continue but that's just
because I did not have time to implement them yet).

So, if you are going to try to put something into V8, you should try
to bring *all* the flow control statements back to life in async-land,
not just the semicolon that chains statements. You may want to take a
look at https://github.com/Sage/streamlinejs/blob/master/test/transform-test.js
for examples of code transformations.

Bruno

Bruno Jouhier

Jan 18, 2011, 5:03:04 AM
to nodejs
I think it would be a good fit for CoffeeScript.

The advantage of a preprocessor is that it generates vanilla
Javascript. So you can target all the JS runtimes out there. Even IE6!

A benchmark would be really enlightening too:

* how much code do you write?
* how readable is that code?
* how robust/safe is that code?
* how easily can you call native node APIs?
* how easily can you expose your functions as "normal" node APIs?
* how fast does it run?
* how much additional baggage (libraries) do you carry?

Bruno

On Jan 18, 12:16 am, Preston Guillory <pguill...@gmail.com> wrote:
> Can you really imagine changing JavaScript?  We still have a lot of IE6
> boxes out there.  It takes something like a decade before new language
> features are widely enough deployed to be able to assume that clients have
> it.  Or if you say, "never mind the browsers, we're talking about Node,"
> then you're abandoning the ability to run the same code both client- and
> server-side, one of the big selling points of server-side JS.
>
> On the other hand, new languages are popping up all the time that compile to
> existing languages.  Not merely pre-processors, but entirely new languages.
>
> https://github.com/jashkenas/coffee-script/wiki/List-of-languages-tha...
> http://www.igvita.com/2010/12/14/beyond-ruby-mirah-reia-rite/

Ryan Dahl

Jan 18, 2011, 4:33:32 PM
to nod...@googlegroups.com

Ah, thanks for the explanation - that seems safe and useful. It seems
coffeescript should be adding some syntax macros like this.

jashkenas

Jan 18, 2011, 5:19:49 PM
to nodejs
On Jan 18, 4:33 pm, Ryan Dahl <r...@tinyclouds.org> wrote:
>
> Ah, thanks for the explanation - that seems safe and useful. It seems
> coffeescript should be adding some syntax macros like this.

Hey folks -- probably a bit too little, too late to be weighing in on
this here...

But, CoffeeScript already *has* tried to implement this feature
(Bruno's func_, Kyle's @), with a language feature called "defer". You
would tag a function with "defer", and at compile time, the rest of
the computation (the continuation) would be transformed into the
callback. We had an implementation that was fairly far along, worked
on by Tim Cuthbertson, with lots of edge cases taken care of ... and
yet, the effort never got merged into master. I'll explain why in a
moment, but first, here's all of the relevant back story:

Blog posts:

http://gfxmonk.net/2010/07/04/defer-taming-asynchronous-javascript-with-coffeescript.html
http://gfxmonk.net/2010/07/06/dealing-with-return-in-deferred-coffeescript.html

Github Issues:

https://github.com/jashkenas/coffee-script/issues/issue/241
https://github.com/jashkenas/coffee-script/issues/issue/287
https://github.com/jashkenas/coffee-script/issues/issue/350

If you have a moment, and you're interested in the ins and outs of
trying to build such a transformation -- I'd suggest reading through
at least the blog posts.

One issue that we weren't able to resolve is the fundamental tension
between running a series of async functions in serial, versus in
parallel, with a language-level defer keyword.

for item in list
  defer func(item)

In synchronous code, and in serial async code, "func" would be able to
rely on the fact that the previous item had already been processed. In
parallel async code, it wouldn't. Which is the default? Do you have
different keywords to specify which one you mean?

Another issue lies in the fact that async code "pollutes" everything
it touches, all the way up the call stack to the top. Absent static
analysis, there's no way to know if the function you're about to call
is going to "defer" -- if it does, you have to pass a callback, or
defer yourself. This means that any function that's going to call an
async function has to know it in advance, and defer the call
accordingly. At this point, it's already hardly better than just
passing the callback yourself.

Another issue lies in argument ordering. Node assumes that "err" is
always the first argument to the callback, and should be treated
specially -- but should this assumption be built-in to the language?
Surely not every async API will follow this pattern. Same goes for
callback-as-the-final-argument-to-the-function. Some APIs take a
callback and an errback, how do you deal with those?

At the end of the day, you end up with "defer" sugar that makes a lot
of assumptions, choosing either serial or parallel, and locking you in
to a fixed callback structure, for better or for worse. It was decided
that it wasn't worth adding such a limited sort of "defer" to
CoffeeScript -- especially because the generated code has to be quite
verbose, to take into account things like try, catch, break, and
continue, through a deferred call. There's also a lot of merit in
having an asynchronous callback *look like* an asynchronous callback.
When code that looks synchronous starts to behave in non-deterministic
ways, you're asking for trouble.

All that said, if either Bruno or Kyle (or Sam) wants to take a stab
at building such a thing for CoffeeScript, I'd love to see another
attempt, and would be glad to help them make their changes to the
compiler. The basic syntax is all already taken care of, so most of
the work would be in manipulating the AST and emitting the JavaScript
you'd like to see.

Cheers,
Jeremy

Alexander Fritze

Jan 18, 2011, 6:30:56 PM
to nod...@googlegroups.com
On Tue, Jan 18, 2011 at 11:19 PM, jashkenas <jash...@gmail.com> wrote:
> One issue that we weren't able to resolve is the fundamental tension
> between running a series of async functions in serial, versus in
> parallel, with a language-level defer keyword.
>
>    for item in list
>      defer func(item)
>
> In synchronous code, and in serial async code, "func" would be able to
> rely on the fact that the previous item had already been processed. In
> parallel async code, it wouldn't. Which is the default? Do you have
> different keywords to specify which one you mean?

Since you're raising this point... we've had some experience with this
tension of executing stuff in parallel or sequentially in
StratifiedJS.

When you write a async-to-sequential framework, there is a great
temptation to parallelize everything that's parallelizable. E.g. in an
initial version of StratifiedJS we made function calls evaluate their
parameters in parallel. I.e. in foo(a(),b(),c()), 'a()', 'b()', and
'c()' were automatically evaluated in parallel.

In practice we found this to be very dangerous - in non-trivial code
it can quickly introduce non-obvious race conditions. Our conclusion
was that it is better never to parallelize anything by default; the
programmer should have to do it explicitly ( in StratifiedJS by using
waitfor/and/or or a higher-level abstraction built on these, such as
waitforAll - http://onilabs.com/modules#cutil/waitforAll ).

We've got a working version of StratifiedJS for nodejs btw (see
https://github.com/afri/sjs-nodejs ). Some things in node are very
easy to sequentialize, e.g. this is a blocking version of
child_process.exec:

function exec(command, options) {
  waitfor(var err, stdout, stderr) {
    var child = require('child_process').exec(command, options, resume);
  }
  retract {
    child.kill();
  }
  if (err) throw err;
  return { stdout: stdout, stderr: stderr };
}

With this you can e.g. write a http server that serves shell commands
in a blocking style:

http.createServer(function(req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end(exec("uname -a").stdout);
}).listen(8124, "127.0.0.1");

Other things in node are harder to convert to sequential, in
particular streams. The fundamental problem here is that node's
streams *push* data, whereas for blocking logic you always want to
*pull*. I'd be interested if anyone has any good idea of how to wrap
streams with a blocking API while still keeping compatibility with
their normal async usage in node.

Cheers,
Alex

Ryan Dahl

Jan 18, 2011, 7:35:33 PM
to nod...@googlegroups.com

You can:

1. resume the stream,
2. wait for 'data' event,
3. pause the stream,
4. wait until the user calls a synchronous read() again,
5. go to 1.
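A sketch of that recipe as a wrapper object: buffer 'data' events while paused and hand out one chunk per read() call. The `PullStream` name and the `read(callback)` signature are illustrative, not a node API:

```javascript
// Pull-style wrapper over a push stream, following the pause/resume
// recipe above. The stream is kept paused except while a read() is
// pending for the next chunk.
function PullStream(stream) {
  var self = this;
  this.stream = stream;
  this.buffered = [];   // chunks that arrived with no reader waiting
  this.waiting = null;  // pending read() callback, if any
  stream.on('data', function(chunk) {
    stream.pause();     // step 3: pause again after each chunk
    if (self.waiting) {
      var cb = self.waiting;
      self.waiting = null;
      cb(chunk);        // step 4: deliver to the waiting read()
    } else {
      self.buffered.push(chunk);
    }
  });
  stream.pause();       // start paused
}

PullStream.prototype.read = function(cb) {
  if (this.buffered.length) return cb(this.buffered.shift());
  this.waiting = cb;    // step 2: wait for the next 'data' event
  this.stream.resume(); // step 1: resume the stream
};
```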

Havoc Pennington

Jan 18, 2011, 11:11:28 PM
to nod...@googlegroups.com
Hi,

Short version of this post, the gjs promises/generators stuff and C#
await both have a promise/task object, and I think that might help
some. Don't know.

Long version is to spell out how it works,

On Tue, Jan 18, 2011 at 5:19 PM, jashkenas <jash...@gmail.com> wrote:
>    for item in list
>      defer func(item)
>
> In synchronous code, and in serial async code, "func" would be able to
> rely on the fact that the previous item had already been processed. In
> parallel async code, it wouldn't. Which is the default? Do you have
> different keywords to specify which one you mean?

In the case of

for each (item in list)
  yield func(item)

Then func() would be returning a promise, which is a handle to either
a result value or an exception, where exactly one of the result or
exception will exist when a future async callback gets invoked.

This is not done in parallel. At each yield, the generator function
containing the yield is suspended until the callback is invoked by the
event loop. So it's done sequentially.

If you wanted to do this in parallel, one simple approach is something
like this:

// fire off the event loop handlers and gather promises of their completion
var promises = []
for each (item in list)
  promises.push(func(item))

// now wait for ALL the promises
yield promiseFromPromiseList(promises)


Here, promiseFromPromiseList creates a new promise which invokes its
callback when all the promises in the list have invoked theirs.
(Sample implementation: add a callback to each promise in the list
which increments a count, save in the closure the expected count and
the new promise, complete the new promise when the count is as
expected.)
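That parenthetical sketch might look like the following. The minimal settable promise (get/putReturn/putError, matching the convention described earlier in the thread) is inlined so it runs on its own; none of this is the gjs implementation.

```javascript
// Minimal callback-holder promise: callbacks of the form
// function(result, exception), fired once by putReturn or putError.
function Promise() { this._cbs = []; this._settled = false; }
Promise.prototype.get = function(cb) {
  if (this._settled) cb(this._result, this._exception); // late listener
  else this._cbs.push(cb);
};
Promise.prototype._settle = function(result, exception) {
  if (this._settled) return;          // settle at most once
  this._settled = true;
  this._result = result;
  this._exception = exception;
  this._cbs.forEach(function(cb) { cb(result, exception); });
};
Promise.prototype.putReturn = function(v) { this._settle(v, undefined); };
Promise.prototype.putError = function(e) { this._settle(undefined, e); };

// Composite promise: count completions, settle when all inputs have
// settled, propagating the first error seen (if any).
function promiseFromPromiseList(promises) {
  var composite = new Promise();
  var remaining = promises.length;
  var firstError;
  if (remaining === 0) {
    composite.putReturn();            // nothing to wait for
    return composite;
  }
  promises.forEach(function(p) {
    p.get(function(result, exception) {
      if (exception !== undefined && firstError === undefined)
        firstError = exception;       // remember the first failure
      if (--remaining === 0) {
        if (firstError !== undefined) composite.putError(firstError);
        else composite.putReturn();
      }
    });
  });
  return composite;
}
```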

This shows the value of having a promise abstraction rather than raw
callbacks. You could not implement callbackFromCallbackList afaik.
(Well I guess with a preprocessor you can do about anything)

The line:
yield promiseFromPromiseList(promises)

would also throw an exception if any of the promises in the list have
an error set on them. "yield" asks the calling function to unpack the
promise into either a throw or a value. In this case the value is
discarded but you'd still get a throw.

Instead of the promiseFromPromiseList you could also just do this:

for each (p in promises)
  yield p

but it would suspend/resume the function more times, if that makes a
difference, probably it doesn't. Also with promiseFromPromiseList you
could decide to do something like collect and log all exceptions
before completing the composite promise.

Say you actually needed the values to go with the individual items in
the list. You could get cute and implement a
reducePromises/foldPromises type thing, but leaving that aside, you
can do something like:

var promises = []
for each (item in list)
  promises.push(func(item))

var results = []
for each (p in promises)
  results.push(yield p)

Here, first you create all the promises (adding the corresponding task
to the event loop), and then the first "yield" returns to the event
loop. At that point the event loop could actually work on all of the
tasks. For example maybe all the http requests are written out. When
the http reply comes back, then the promises' callbacks are invoked.
The function here would resume at the "yield p" when the first promise
in the list got a reply back (or threw an exception). Then it would
proceed through the rest of the promises. It waits on them in order,
but that doesn't matter since they're all running in parallel.

It's a critical convention that when an async call returns a promise,
it's already added the needed hooks to the event loop (the promise has
to be "fired off" already, so work starts to happen as soon as the
event loop spins).

> Another issue lies in the fact that async code "pollutes" everything
> it touches, all the way up the call stack to the top. Absent static
> analysis, there's no way to know if the function you're about to call
> is going to "defer" -- if it does, you have to pass a callback, or
> defer yourself. This means that any function that's going to call an
> async function has to know it in advance, and defer the call
> accordingly. At this point, it's already hardly better than just
> passing the callback yourself.

The model we are using is that async calls return a promise object. If
the async call is implemented with a generator, then it looks like:

function waitTwoSeconds() {
  return Promise.fromGenerator(function() {
    // add a timeout, returning a promise which is completed when timeout fires
    yield addTimeout(2000);
    // do stuff 2000ms later
    // can have infinite "yield" or also manual promise usage in here
    // can just throw here and it makes the promise "throw"
  });
}

Now if the caller itself is a generator it can do:

yield waitTwoSeconds()

if it isn't, it can do the usual callback model:

waitTwoSeconds().get(function(result, error) {} )

"result" is just undefined for a timeout of course.

You only have to pollute upward if you also would have to in the
callback case, i.e. you _can_ fire-and-forget a callback/promise as a
side effect, if you don't need its result in order to return from your
function.
If you do need the async result, yeah you have to pollute upward. That
seems inherent and even desirable. A function either returns a value
or a promise, as part of its type signature.

Because this is all on top of generators, the fromGenerator() wrapper
thing is needed whenever you want to use the yield syntax, which is
pesky.

Another awkwardness with using generators is that just "return 10" at
the end of the generator doesn't work, instead there has to be a
special function to provide the value of the promise, which is
implemented by throwing a special exception. So that sucks.

Both of these would be fixable in CoffeeScript I'd think. I guess it's
all fixable in JS too if the ECMAScript guys got on it ;-)

> Another issue lies in argument ordering. Node assumes that "err" is
> always the first argument to the callback, and should be treated
> specially -- but should this assumption be built-in to the language?
> Surely not every async API will follow this pattern. Same goes for
> callback-as-the-final-argument-to-the-function. Some APIs take a
> callback and an errback, how do you deal with those?

Promises let you easily adapt different callback conventions. For example:

function myThing() {
  var p = new Promise();
  myThingThatTakesAWeirdCallback(function(foo, bar, result, baz, error) {
    if (error) p.putError(error); else p.putReturn(result);
  });
  return p;
}

Built-in to http://git.gnome.org/browse/gjs/tree/modules/promise.js
there's support for a "function(result,error)" flavor, but it's easy
to write the little adapters back and forth to whatever you have.
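For node's own (err, result) convention the adapter can be written once and reused. A minimal sketch, assuming the gjs-style putReturn/putError/get names used above; the little Promise stand-in here is just enough to make the example self-contained, it is not the real library:

```javascript
// Minimal stand-in for the gjs-style Promise discussed in this thread.
// Real code would use the library's Promise; this just settles once and
// delivers (result, error) to any registered callbacks.
function Promise() { this._pending = []; }
Promise.prototype.putReturn = function(result) { this._settle(result, null); };
Promise.prototype.putError  = function(error)  { this._settle(undefined, error); };
Promise.prototype._settle = function(result, error) {
  this._value = [result, error];
  var pending = this._pending;
  this._pending = [];
  pending.forEach(function(cb) { cb(result, error); });
};
Promise.prototype.get = function(cb) {
  if (this._value) cb(this._value[0], this._value[1]);
  else this._pending.push(cb);
};

// Generic adapter from the node callback convention (err-first callback,
// passed last) to this promise flavor. fromNodeStyle is an invented name.
function fromNodeStyle(fn /*, args... */) {
  var args = Array.prototype.slice.call(arguments, 1);
  var p = new Promise();
  args.push(function(err, result) {
    if (err) p.putError(err); else p.putReturn(result);
  });
  fn.apply(null, args);
  return p;
}
```

So something like `fromNodeStyle(fs.readFile, 'a.txt')` would hand back a promise without touching fs.readFile itself.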

Another advantage of promises is that they are always returned, so you
don't have this issue of which argument is the callback (first? last?
if last what if you want a variable arg list?)

Another example, convert standard setTimeout to promises:

function addTimeout(ms) {
  var p = new Promise();
  setTimeout(function() { p.putReturn(); }, ms);
  return p;
}

> At the end of the day, you end up with "defer" sugar that makes a lot
> of assumptions, choosing either serial or parallel,

I think as long as promises "fire off" when created, the sugar can
just be serial, because people can create a bunch of promises before
suspending their function to yield any of their values. Fire off all
your tasks before you block on any.
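Rendered with plain node-style callbacks instead of generators, the fire-off-then-wait idea Havoc describes looks roughly like this (both is an illustrative helper name; a_ and b_ stand for any two async functions):

```javascript
// Start both async operations immediately, then run the continuation once
// both have completed. The waiting is serial, but the work overlaps
// because both calls are fired off before either result is consumed.
function both(a_, b_, callback) {
  var results = [], pending = 2, failed = false;
  function done(i) {
    return function(err, result) {
      if (failed) return;
      if (err) { failed = true; return callback(err); }
      results[i] = result;
      if (--pending === 0) callback(null, results);
    };
  }
  a_(done(0));   // fired off right away...
  b_(done(1));   // ...so both run concurrently on the event loop
}
```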

C#'s name "await" for the keyword makes a lot more sense than "yield"
probably, yield is more based on using generators for iteration. Also
C# doesn't require the kludges like fromGenerator() and
putReturnFromGenerator(). CoffeeScript could presumably be more
cleaned up, like C#.

> and locking you in
> to a fixed callback structure, for better or for worse.

Since a promise gives you a way to deal with callbacks (and callback
lists) "generically" you can use them as the lingua franca and adapt
to various callback flavors, which is nice.

Havoc

Bruno Jouhier

unread,
Jan 19, 2011, 9:53:39 AM1/19/11
to nodejs
Hi Jeremy,

Thanks for taking the time. Some answers:

* Regarding the for loop, my approach is to stick to the sync
semantics so that the programmer can really reason about his code the
same way he would reason about the sync version. So the sequencing
*must* be preserved at the statement level. If we want a for loop that
executes in parallel, we need to introduce a new keyword in the
language, or do it via a library call.
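Hand-written with callbacks, statement-level sequencing of a loop body looks something like this (forEachSeries is an illustrative name, not part of streamline.js): each iteration starts only after the previous iteration's async call has completed.

```javascript
// Run fn_ over the items strictly in sequence, matching the semantics a
// synchronous for loop would have. fn_(item, cb) is any node-style async
// function; the final callback fires after the last item (or first error).
function forEachSeries(items, fn_, callback) {
  var i = 0;
  (function next(err) {
    if (err) return callback(err);
    if (i >= items.length) return callback(null);
    fn_(items[i++], next);
  })();
}
```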

There is a sweet spot, though: what do we do with subexpressions. For
example if I write a_() + b_(), do we impose that b be called after a,
or do we allow both to execute in parallel? My current implementation
does it in sequence but I think that it would make sense to enable
parallel execution here (we are in "functional-land" here, not in
"imperative-land"). To summarize, I would preserve sequencing in
"imperative-land" (or "statement-land") but I would parallelize things
in "functional-land" (or "expression-land")

There is also a tricky issue with lazy operators (a_() && b_()) but my
implementation handles them correctly (skips b completely if a returns
false).
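For illustration, here is roughly what a hand-expanded CPS version of that lazy expression looks like (andAsync is an invented name, and a_/b_ stand for hypothetical async functions with node-style callbacks; this is not streamline's actual output):

```javascript
// CPS expansion of a_() && b_(): b_ is invoked only when a_'s result is
// truthy, preserving JavaScript's short-circuit semantics.
function andAsync(a_, b_, callback) {
  a_(function(err, aResult) {
    if (err) return callback(err);
    if (!aResult) return callback(null, aResult); // short-circuit: skip b_
    b_(callback);
  });
}
```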

* Regarding stack pollution, streamline.js does not have the problem.
Functions are transformed one by one and only the functions that
contain async calls are transformed (the other ones are left
unmodified). The trick is to put the marker on the async calls and
also on the definitions of async functions, and to have an algorithm
that applies "combinable" patterns to the parse nodes encountered
inside a function definition and that always falls back on its feet
when it reaches the function definition node.

* Regarding the callback signature, my assumption is that we
standardize on the node.js signatures: the callback is always
callback(err[, result]) and the callback is always passed as last
parameter to async calls (which also implies that async calls have a
fixed number of parameters).

To use an API that does not conform to these rules, you would have to
create a small wrapper that turns your function into one that complies
with these rules.
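Such a wrapper is typically only a few lines. A sketch, with fetchThing standing in for a hypothetical API that takes separate success and error callbacks, plus setTimeout adapted to the node convention:

```javascript
// Stand-in for a hypothetical nonconforming API with separate callback
// and errback: fetchThing(id, onSuccess, onError).
function fetchThing(id, onSuccess, onError) {
  if (id > 0) onSuccess({ id: id }); else onError(new Error('bad id'));
}

// Adapted to the node convention: callback(err, result), passed last.
function fetchThingNode(id, callback) {
  fetchThing(id,
    function(result) { callback(null, result); },
    function(err)    { callback(err); });
}

// setTimeout takes its callback first and reports no error; adapted:
function sleep(ms, callback) {
  setTimeout(function() { callback(null); }, ms);
}
```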

I see the node callback signatures as somewhat "canonical". If we want
to "streamline" async programming, we need to work on stable ground.
So, we need to enforce a few things.

The next step is probably that I dive into the coffeescript compiler
to see how this can be hooked in.

Bruno

On Jan 18, 11:19 pm, jashkenas <jashke...@gmail.com> wrote:
> On Jan 18, 4:33 pm, Ryan Dahl <r...@tinyclouds.org> wrote:
>
>
>
> > Ah, thanks for the explanation - that seems safe and useful. It seems
> > coffeescript should be adding some syntax macros like this.
>
> Hey folks -- probably a bit too little, too late to be weighing in on
> this here...
>
> But, CoffeeScript already *has* tried to implement this feature
> (Bruno's func_, Kyle's @), with a language feature called "defer". You
> would tag a function with "defer", and at compile time, the rest of
> the computation (the continuation) would be transformed into the
> callback. We had an implementation that was fairly far along, worked
> on by Tim Cuthbertson, with lots of edge cases taken care of ... and
> yet, the effort never got merged in to master. I'll explain why in a
> moment, but first, here's all of the relevant back story:
>
> Blog posts:
>
> http://gfxmonk.net/2010/07/04/defer-taming-asynchronous-javascript-wi...
> http://gfxmonk.net/2010/07/06/dealing-with-return-in-deferred-coffees...
>
> Github Issues:
>
> https://github.com/jashkenas/coffee-script/issues/issue/241
> https://github.com/jashkenas/coffee-script/issues/issue/287
> https://github.com/jashkenas/coffee-script/issues/issue/350

billywhizz

unread,
Jan 19, 2011, 5:59:07 PM1/19/11
to nodejs
"There's also a lot of merit in having an asynchronous callback *look
like* an asynchronous callback. When code that looks synchronous
starts to behave in non-deterministic ways, you're asking for trouble.
"

This is the most sensible thing i have heard in this thread... i still
don't buy the argument that shoe-horning asynchronous behaviour into a
synchronous syntax makes things "easier" for anyone. learning how the
language/platform actually works under the hood however, always makes
things easier... have fun with your experiments though - nothing wrong
with trying! ;)


Kyle Simpson

unread,
Jan 19, 2011, 6:17:41 PM1/19/11
to nod...@googlegroups.com
>> "There's also a lot of merit in having an asynchronous callback *look
>> like* an asynchronous callback. When code that looks synchronous
>> starts to behave in non-deterministic ways, you're asking for trouble."
>
> This is the most sensible thing i have heard in this thread... i still
> don't buy the argument that shoe-horning asynchronous behaviour into a
> synchronous syntax makes things "easier" for anyone.

Just out of curiosity, do you think an operator like @ "looks" synchronous
or asynchronous? In other words, does:

A(1,2,3) @ B(4,5,6);

"look" more synchronous (or rather, less asynchronous) than:

A(1,2,3,function(){ B(4,5,6); });

If so, can you please explain how?

I have several places in my current code where I pass around function
references and the code is entirely synchronous. Should I be worried then
that my actually-synchronous code in fact "looks" asynchronous and is thus
harder to understand? That's a pretty slippery slope to stand on.

I'm actually kinda confused by the whole concept of something looking
synchronous or asynchronous. If you're asserting that merely passing around
callbacks (function references) inherently/semantically looks asynchronous,
I take issue with that assertion. The only reason that it looks that way (to
you, maybe not to someone else) is not anything semantic about the usage,
but just because *almost* all uses do in fact turn out to be asynchronous,
so a lot (but not all) of us have been conditioned that way, and it becomes
obvious (but not semantic).

In my mind, there's a big difference between making a decision to pattern
something some way because "that's how everyone does it" versus "that's how
it semantically makes sense".

In fact, I would argue (albeit a little tenuously) that I think the @
operator as shown above "looks" more asynchronous than nested callbacks
because there is actually a semantic reason I chose to pattern it that way,
and why I used the @ operator specifically:

A() @ B() is to be interpreted as "execute A, and if A suspends itself,
then pause the rest of the statement. If A does not pause itself, or once A
completes (or errors), then immediately 'continue' execution of the
statement *AT* B."

So, please explain how the A(... ,B) pattern is more semantically
asynchronous (than the alternatives)?

--Kyle


Alexander Fritze

unread,
Jan 19, 2011, 6:31:55 PM1/19/11
to nod...@googlegroups.com
On Wed, Jan 19, 2011 at 1:35 AM, Ryan Dahl <r...@tinyclouds.org> wrote:
> On Tue, Jan 18, 2011 at 3:30 PM, Alexander Fritze <al...@onilabs.com> wrote:
>> [...]

>> Other things in node are harder to convert to sequential, in
>> particular streams. The fundamental problem here is that node's
>> streams *push* data, whereas for blocking logic you always want to
>> *pull*. I'd be interested if anyone has any good idea of how to wrap
>> streams with a blocking API while still keeping compatibility with
>> their normal async usage in node.
>
> You can:
>
> 1. resume the stream,
> 2. wait for 'data' event,
> 3. pause the stream,
> 4. wait until the user calls a synchronous read() again,
> 5. go to 1.

You're right, this works quite well, although there is quite a lot of
adding/removing of listeners (because for each read we need to listen
to 'data', 'end', and 'error' - see
https://github.com/afri/sjs-nodejs/blob/master/lib/stream.sjs ).
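A sketch of that resume/wait/pause loop as a callback-based pull wrapper over the classic node stream events ('data', 'end', 'error'). makeReader is an invented name, and a real implementation would also need to buffer if 'data' can fire more than once per turn:

```javascript
// Wrap a push-style stream in a pull-style read(). Each read() resumes
// the stream, waits for a single event, detaches its listeners, and
// pauses again -- steps 1-4 of the loop Ryan describes.
function makeReader(stream) {
  stream.pause();                       // start paused; reads pull on demand
  return function read(callback) {      // callback(err, chunk); chunk === null at EOF
    function cleanup() {
      stream.removeListener('data', onData);
      stream.removeListener('end', onEnd);
      stream.removeListener('error', onError);
      stream.pause();                   // pause until the next read()
    }
    function onData(chunk) { cleanup(); callback(null, chunk); }
    function onEnd()       { cleanup(); callback(null, null); }
    function onError(err)  { cleanup(); callback(err); }
    stream.on('data', onData);          // wait for exactly one outcome
    stream.on('end', onEnd);
    stream.on('error', onError);
    stream.resume();                    // let the next chunk through
  };
}
```

This is exactly the listener churn Alexander mentions: three listeners added and removed per read.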

Also, from the perspective of sync APIs, it would be desirable to have
all streams paused by default on creation. But I can see how this
clashes with keeping the async APIs simple.

Cheers,
Alex

Mikeal Rogers

unread,
Jan 19, 2011, 6:34:54 PM1/19/11
to nod...@googlegroups.com
Stream objects are created in a "closed" state for the most part.

When you add your callbacks it's usually before the "connect" event has even happened except in certain cases like request objects for HTTP clients that might be using keep-alive.

Requiring you to call resume() on a stream before receiving data would be pretty annoying, since you can add any listeners you want in the closed state before nextTick.

Alexander Fritze

unread,
Jan 19, 2011, 6:47:26 PM1/19/11
to nod...@googlegroups.com
Exactly. All I'm saying is that there is a complementary annoyance
working the other way: When you manipulate async streams through a
sync layer, you have to remember to pause the stream (or at least not
do anything that returns to the event loop) before you call read().

Clifton

unread,
Jan 20, 2011, 2:11:49 AM1/20/11
to nodejs
----
"There's also a lot of merit in having an asynchronous callback *look
like* an asynchronous callback. When code that looks synchronous
starts to behave in non-deterministic ways, you're asking for
trouble."
----

I also completely agree with this quote.

The basic structure of async code in node is great, in my opinion. It
does a good job of keeping the coder cognizant of his own async
structure and what functions must run before others get executed. The
general callback structure works great and is readable for me, and
proper use of EventEmitters will clean up a lot of asynchronous code
and can really keep your visual callback function level from being too
deep so that it's really manageable.

However, some of the quirks and syntax of async code in JavaScript
look messy. It's a language level issue, but I really don't think the
solution is going to be disguising async JS as synchronous with the
introduction of new operators. The solution will likely lie in a new
programming language built from the ground up with async in mind --
where we still have a good visual callback structure but we have
access to a more modern language/syntax.

billywhizz

unread,
Jan 20, 2011, 8:54:59 AM1/20/11
to nodejs
kyle, i can't understand anything you're talking about. maybe i'm just
stupid and that's why i like to keep things simple...

Marco Rogers

unread,
Jan 20, 2011, 10:06:33 AM1/20/11
to nod...@googlegroups.com
Kyle:

I think it's cool that you're experimenting with new ways of writing sync code.  I actually think it's a good thing.  I'm also wary of trying to make async code look sync.  But I fully believe that we can come up with a way to make async code feel a little less unwieldy in places.

That being said, I think your @ operator doesn't come across the way you think.  There is nothing in your example statement A() @ B() that suggests asynchronous behavior.  The '@' symbol just looks like a weird operator.  Replace it with + and the average programmer wouldn't even begin to suspect asynchronous behavior.  Just because you put some language behind why you chose @ doesn't make it more intuitive.

I also agree that passing a callback "looks" more asynchronous.  By nature when you pass a function it means "this function could be called at a later time".  It could also be called immediately of course.  But you are still forced to consider whether that is the case or not. But just as important, you have to deal with the fact that the function will be called *in a different context*.  The state of the program has potentially changed.  It brings the possibly asynchronous nature of your code front and center.  Even promises don't hide this.  You can delay the task of handling the asynchronous behavior.  But at some point you have to call when(), or wait(), or whatever and pass it a callback.  The system isn't going to do it for you.  There's no magic.  You have to understand what's going on and work within the language constructs.  I think that's a good thing.

The problem I have with synchronous-looking code being asynchronous is that many developers will use it as an excuse to avoid learning to be smart about async control flow. Maybe that's what some people want. And it'll seem really cool at first as node gets an influx of new interested developers. But in the long run it'll just end up generating a lot of crappy systems.

I'm sure this ground has already been covered (this is a long thread, didn't read all of it).  But I'm not sure if the pro-callback folks are making it clear why there's so much push back against these various syntax proposals. It's not just an aesthetic sense in my mind.  Sync and Async control flows are very different, and you shouldn't be able to easily forget that.

:Marco

Kyle Simpson

unread,
Jan 20, 2011, 11:38:32 AM1/20/11
to nod...@googlegroups.com
> That being said, I think your @ operator doesn't come across the way you
> think. There is nothing in your example statement A() @ B() that suggests
> asynchronous behavior. The '@' symbol just looks like a weird operator.
> Replace it with + and the average programmer wouldn't even begin to suspect
> asynchronous behavior. Just because you put some language behind why you
> chose @ doesn't make it more intuitive.

I think the only reason any programmer knows what any operator does is
because there's clear documentation and plenty of example use-cases. And
this would especially be true if the language introduced a new operator,
there'd be lots and lots written about the new operator and how it works,
what it's for and not for, etc.

I don't think that just because you look at an operator and don't know
immediately what it does, that this means the operator isn't useful or
semantic. Consider ~. That operator is quite strange looking. Most people
never even type that character on their keyboards. But developers (when they
learn it) understand exactly what it's doing, that it's a unary operator
that inverts the bits of the operand. You have to admit, that's not
particularly semantic at first glance. But many developers have learned that
operator, and put it to good use all the time.

*I* happen to think that synchronous and asynchronous code *shouldn't* be so
starkly different. I guess that's where I differ from a lot of people on the
list. But I write both synchronous and asynchronous code mixed in my
programs all the time, and I long for the ability to write the same style of
code in both contexts. In fact, I have a number of prominent use-cases in my
code where the same function call is either synchronous or asynchronous
depending on run-time conditions. I even have functions that are both
synchronous and asynchronous at the same time -- that is, the same function
will return an immediate value AND it will fire off asynchronous behavior
later, too.

The intended definition I have for the @ operator, as I alluded to in my
previous message, is that it can be either synchronous OR asynchronous,
depending on what the function call chooses to do. *I* think the function
itself should get to decide if it's sync or async, and the calling code
should just respond to that decision appropriately. I've never liked the
reverse idea, that the calling code is the one who is in control of if a
function is immediate or not.

Contrary to what you suggest about this making developers lazy, I think
usage of that operator will *force* the developer to write code that is
"async safe" because the function in question may or may not defer itself.

setTimeout(function(){ ... }, 5000);

vs.

setTimeout(1000) @ function() { ... };

I don't think that type of syntax would make developers any lazier about
knowing how to safely write async code. And JavaScript could easily be
changed to where a function like setTimeout() could be called in either way,
depending on what the developer prefers.

I absolutely love the idea of, once I understand sync/async issues (which I
do), being able to write code that blurs the lines (while still functioning
the way I want it to) between sync and async. IMHO, if a language like
JavaScript supports async functionality, that async behavior should be a
first class citizen and just as obvious and face-forward as his sync
brother. Right now, with clunky nested callback syntax, async "feels" more
like the awkward step-child of JavaScript. And that's what I want to change.

I'm not suggesting that JavaScript would change to where there'd be no
callbacks for accessing async behavior. I'm simply suggesting that @
operator could be *added* to the equation to let those of us (maybe only
me!?) that want to take advantage of it, do so. Perhaps callbacks would stay
the default/preferred method, but perhaps given long enough exposure, @
might end up being a preferred pattern.

---------
When a developer designs an API (even just a function signature), they've
allowed the calling code to rely on a stable and consistent usage of that
API, regardless of the underlying implementation details changing. This is a
fundamental tenet of software development. I'm basically just extending
that to say that sync vs. async should be able to be "hidden" behind the
scenes as an implementation detail, and the calling code "API" should look
consistent.

In fact, in several of my projects, I currently use a "Promise"
implementation to abstract away the differences between async and sync. I
have code that I run both in the browser and on the server, and in the
browser it's asynchronous, but in the server it's synchronous. I like the
fact that by using promises, my code operates just fine regardless of sync
or async context. The "Promise" hides that detail from my main code base,
and I think that's a great thing. It certainly makes my job of maintaining
my code better.

Bottom line: many/most of you seem to want sync and async code to "look"
very different, but I want the opposite. I want to be able to write
consistent code that responds appropriately to either context as
appropriate.

Perhaps that weird desire is just what makes me polar opposite to the
mainstream node.js crowd. Sorry for clogging up this thread with my own
opinions on the topic. I guess I'll just go do my own thing.


--Kyle
