fibers 0.5; say "no" callbacks (when it makes sense in your application)

Marcel Laverdet

Jul 31, 2011, 9:15:45 PM
to nodejs
Hey guys, I finished up some new work on Fibers over the past week or so and wanted to get some feedback. For those of you who have missed the ongoing holy wars, fibers are a simplified implementation of coroutines for NodeJS. They work similarly to threads, except that only one is running at a time and the programmer explicitly chooses when to switch to another coroutine (so there are no locking or race condition issues).
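
To make the explicit-switching idea concrete, here is a minimal sketch. It uses the Fiber.yield()/run() calls of later node-fibers releases; the exact spelling of yield has varied across versions, so treat it as illustrative rather than as the 0.5 API.

var Fiber = require('fibers');

var fiber = Fiber(function() {
  console.log('step 1');
  Fiber.yield();          // suspend this fiber and hand control back to the caller
  console.log('step 2');  // runs only when the caller switches back in via run()
});

fiber.run(); // prints "step 1", then returns at the yield
fiber.run(); // resumes the fiber, prints "step 2"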

Anyway, the biggest change is that I decided to include an implementation of futures along with the package. I've been saying since day 1 that using fibers without some kind of abstraction is foolish, but I didn't actually provide any example abstraction out of the box. As a result it was really hard to get started with fibers or to see their advantages. I was reluctant to include this library because it's possible to build all kinds of interesting abstractions on fibers and I didn't want to stifle anyone's creativity. But I do want to make it easier for people to try out fibers, so I'm including Futures alongside Fibers (without changing the core Fiber API).

Futures is the library I've been using in my project, with support from Fibers. It is very similar to existing "promises" implementations, with the addition of a "wait" method which will yield the current fiber until the future has resolved. What I like about Futures is that it's really easy to mix and match callback-based code with "blocking" fiber code, whichever style makes more sense in a particular spot.

Quick example (some bootstrapping abbreviated):

// Future.wrap takes an existing callback-based function and makes a function

// which returns a Future

var readdir = Future.wrap(require('fs').readdir);

var dir = readdir('.');


// Once you have a Future you can either resolve it asynchronously

dir.resolve(function(err, val) {

  if (err) throw err;

  console.log('Files in this directory: '+ val.join(', '));

});


// Or you can yield your current fiber synchronously (the wait() call is the

// important part):

console.log('Files in this directory: '+ dir.wait().join(', '));


Using fibers here gives you some cool advantages:


- Reasonable stack traces: Sometimes it can be difficult to find bugs in callback-based code because you lose your stack with every tick. But since each fiber is its own stack, traces are maintained even across asynchronous calls. Additionally, line numbers are not mangled the way they are with the existing code-rewriting solutions.


- Exceptions are usable again: With fibers you no longer have to manually propagate exceptions; that is all taken care of for you. In the example above, wait() will throw if readdir() resolved with an error.


- Avoid excessive callbacks without sacrificing parallelism: Even though wait() blocks execution of JavaScript, it's important to note that Node is never actually blocked. Each Fiber you create is a whole new stack which can be switched into and out of as needed. So while many JS stacks are blocked waiting, another one is running. You can even use wait() to wait on many Futures at the same time via Future.wait(future1, future2, future3, ...) or Future.wait(arrayOfFutures); see the sketch after this list.


- Choose when to use callbacks and when not to: To be honest, I use callbacks about as often as I use blocking fibers in my applications. Sometimes callbacks just make more sense (especially with streams of data), but when I do use Fibers I am really, really glad I have them available.
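
Here is the sketch referred to above. It is a hedged illustration rather than something lifted from the docs: it assumes the code runs inside a fiber (bootstrapping abbreviated as in the earlier example), uses only the Future.wrap / .wait() / Future.wait() calls described in this post, and the require path and file names are illustrative.

var Future = require('fibers/future');
var fs = require('fs');

var readdir = Future.wrap(fs.readdir);
var stat = Future.wrap(fs.stat);

// Start all three operations; nothing blocks yet.
var futures = [readdir('.'), stat('package.json'), stat('README.md')];

// Yield this fiber until every future has resolved (other fibers keep running).
Future.wait(futures);

// Each future is resolved now, so wait() returns immediately with the value.
var files = futures[0].wait();
var size = futures[1].wait().size;
console.log(files.length + ' entries; package.json is ' + size + ' bytes');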


Fibers work in both 32-bit and 64-bit modes on Linux & OS X (Lion is supported as of fibers 0.5). Windows is unfortunately not supported, but if there is strong interest it could be added without too much work.


Check out the documentation on github:

https://github.com/laverdet/node-fibers

(see "FUTURES" for new stuff)


Thanks for reading,

Marcel

Shimon Doodkin

Aug 3, 2011, 6:42:13 PM
to nodejs
As it is now it is fine, but maybe you can convince the right people
to include it as a built-in lib in node.
I don't like the idea of patching memory.

Mikeal Rogers

Aug 3, 2011, 6:43:35 PM
to nod...@googlegroups.com
never. going. to. happen.


Jorge

Aug 3, 2011, 7:02:04 PM
to nod...@googlegroups.com
On 04/08/2011, at 00:43, Mikeal Rogers wrote:

> never. going. to. happen.

Never say never.
--
Jorge.

Ben Noordhuis

Aug 3, 2011, 7:30:31 PM
to nod...@googlegroups.com
On Thu, Aug 4, 2011 at 01:02, Jorge <jo...@jorgechamorro.com> wrote:
> On 04/08/2011, at 00:43, Mikeal Rogers wrote:
>
>> never. going. to. happen.
>
> Never say never.

Never.

Isaac Schlueter

Aug 3, 2011, 8:00:17 PM
to nod...@googlegroups.com

Never.

Jorge

Aug 3, 2011, 8:29:02 PM
to nod...@googlegroups.com
On 01/08/2011, at 03:15, Marcel Laverdet wrote:
>
> (...)

> // Or you can yield your current fiber synchronously (the wait() call is the
> // important part):
> console.log('Files in this directory: '+ dir.wait().join(', '));
>
> Using fibers here gives you some cool advantages:
>
> - Reasonable stack traces: Sometimes it can be difficult to find bugs that occur in callback-based code because you lose your stack with every tick. But since each fiber is a stack, traces are maintained even past asynchronous calls. Additionally line numbers are not mangled like they are with the existing rewriting solutions.
>
> - Exceptions are usable again: Using fibers you no longer have to manually propagate exceptions, this is all taken care for you. In the example above, wait() will throw if readdir() resolved an error.
>
> - Avoid excessive callbacks without sacrificing parallelism: Even though wait() blocks execution of Javascript it's important to note that Node is never actually blocked. Each Fiber you create is a whole new stack which can be switched into and out of appropriately. So while many stacks of JS are blocking, another is running. You can even use wait() to wait on many Futures at the same time via Future.wait(future1, future2, future3, ...) or Future.wait(arrayOfFutures).
> (...)

This is simply amazing. Thanks. Keep up the good work!

ISTM that many people here don't realize *yet* how much better their lives would be if they could simply yield() and resume() instead of having to return (and end) on every function invocation. They think they don't need yield() just because they've got closures to preserve the contexts. What a mistake.

But they'll realize it, sooner or later, as they progress. You'll see.
--
Jorge.

Jorge

Aug 3, 2011, 8:33:01 PM
to nod...@googlegroups.com

...say never.

mscdex

Aug 3, 2011, 11:33:13 PM
to nodejs
On Aug 3, 8:33 pm, Jorge <jo...@jorgechamorro.com> wrote:
> On 04/08/2011, at 02:00, Isaac Schlueter wrote:
>
> > On Wed, Aug 3, 2011 at 16:30, Ben Noordhuis <i...@bnoordhuis.nl> wrote:
> >> On Thu, Aug 4, 2011 at 01:02, Jorge <jo...@jorgechamorro.com> wrote:
> >>> On 04/08/2011, at 00:43, Mikeal Rogers wrote:
>
> >>>> never. going. to. happen.
>
> >>> Never say never.
>
> >> Never.
>
> > Never.
>
> ...say never.

Never.

brez

Aug 3, 2011, 11:47:00 PM
to nodejs
I don't get it - what's the idea behind using fibers (concurrency?) in
evented i/o?

Marcel Laverdet

Aug 4, 2011, 1:48:39 AM
to nod...@googlegroups.com
never. going. to. happen.

Yeah, I wish people would quit suggesting this, as it always spawns a massive flame war that is truly undeserved. I've personally never even hinted that Fiber should be a part of node core, but it always comes up and becomes a straw man for why fibers are the worst thing to happen to JavaScript since with(){}. Drop the hate, man :(

> I don't get it - what's the idea behind using fibers (concurrency?) in
> evented i/o?

Fibers don't get you any concurrency; you are mistaking them for threads. Fibers are merely another way to express asynchronous I/O*. Instead of providing a callback every time you need to wait for something to happen, you can just yield your fiber. Imagine being able to use all the fs.*Sync methods without incurring the process block.

* Simplified description; you can use them for many other tasks, see also using Fibers for generators: https://gist.github.com/1124572
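
A hedged sketch of that last point (it assumes the code runs inside a fiber; the require path and file path are illustrative, not taken from the 0.5 docs):

var Future = require('fibers/future');
var fs = require('fs');

// Wrap the callback-based fs.readFile so it returns a Future.
var readFile = Future.wrap(function(path, cb) {
  fs.readFile(path, 'utf8', cb);
});

// Reads like fs.readFileSync, but only this fiber yields; node keeps
// serving other events while the read is in flight.
var contents = readFile('/etc/hosts').wait();
console.log(contents.length + ' characters');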

Chris

Aug 4, 2011, 2:00:38 AM
to nodejs
I wrote a blog post on one possible use of fibers - creating
asynchronous setters/getters: http://chris6f.com/synchronous-nodejs

It's never going to get into the core but interesting nonetheless!

Mikeal Rogers

Aug 4, 2011, 2:07:52 AM
to nod...@googlegroups.com
just to clarify my comment a bit, i think we're past the flame war on whether or not this is "evil" or something.

this might lead at some point to an alternate platform, which could be very interesting.

it's just never going to be part of node core.

-Mikeal

octave

Aug 4, 2011, 3:21:47 AM
to nodejs
Great work, Marcel!

I'm going to improve node-sync soon, to let it work as middleware
for express/connect.
The most interesting point is that with fibers we can pass the req/res
context down the fiber tree, without passing it each time as a function
argument. It just creates a new fiber for each request and isolates all
subsequent asynchronous calls within the main "request fiber".

If you are interested, check out the draft implementation in dev
branch:
https://github.com/0ctave/node-sync/blob/dev/lib/sync.js#L340

Any improvements on memory/performance?
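
To give the flavor of the approach without digging into node-sync itself, here is a hedged sketch of the idea (not octave's implementation): a connect-style middleware that runs the rest of the request handling inside its own fiber, so downstream code can wait on futures while other requests keep being served.

var Fiber = require('fibers');

function fiberMiddleware(req, res, next) {
  Fiber(function() {
    // Everything invoked from next() runs inside this per-request fiber,
    // so it can yield on async work without callbacks.
    next();
  }).run();
}

// app.use(fiberMiddleware); // hypothetical express/connect app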

On Aug 1, 4:15 am, Marcel Laverdet <mar...@laverdet.com> wrote:

octave

Aug 4, 2011, 3:26:39 AM
to nodejs
We can pass variables through subsequent fibers:
https://github.com/0ctave/node-sync/blob/dev/examples/vars.js

Implementation:
https://github.com/0ctave/node-sync/blob/dev/lib/sync.js#L111

On Aug 1, 4:15 am, Marcel Laverdet <mar...@laverdet.com> wrote:

Bruno Jouhier

Aug 4, 2011, 3:30:30 AM
to nodejs
Nice try Marcel :-). I'm in a slightly better position as my tag line
is a bit less disruptive: say "yes" callbacks; but don't bother
writing them!

Bruno

On Aug 4, 7:48 am, Marcel Laverdet <mar...@laverdet.com> wrote:
> > never. going. to. happen.
>

brez

Aug 4, 2011, 10:29:42 AM
to nodejs

> > I don't get it - what's the idea behind using fibers (concurrency?) in
> > evented i/o?
>
> Fibers don't get you any concurrency, you are mistaking them with threads.

Yea I guess I was thinking of fibers in Ruby 1.9

> Instead of having
> a callback every time you need to wait for something to happen, you can just
> yield your fiber.

Other than syntax, what's the difference?

> Imagine being able to use all the fs.*Sync methods but
> without incurring the process block.

Why not just use the async methods in the first place?

Thanks

Bruno Jouhier

Aug 5, 2011, 2:53:11 AM
to nodejs

> > Imagine being able to use all the fs.*Sync methods but
> > without incurring the process block.
>
> Why not just use the async methods in the first place?

My experience is with streamline.js rather than with fibers but the
benefits are similar:

* You have less code: I converted around 10,000 lines of code from
async style to sync style. New code is 30% smaller.
* Code is easier to write.
* You can write more elegant code (chaining is not disrupted by
callbacks).
* Code is MUCH easier to read and maintain!
* You can do robust exception handling with try/catch/finally (even
around blocks that contain async calls).
* You have meaningful stack traces in the debugger (fibers, not
streamline).

And I'm probably forgetting some other benefits.
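
For instance, a small sketch of the try/catch point in streamline.js style (the trailing _ is streamline's callback placeholder; the function and file name are just illustrative):

function loadConfig(path, _) {
  try {
    // Asynchronous read, written as if it were synchronous.
    var txt = require('fs').readFile(path, 'utf8', _);
    return JSON.parse(txt);
  } catch (err) {
    // The catch covers the async call as well as JSON.parse.
    return {};
  }
}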

Bruno

Joshua Holbrook

Aug 5, 2011, 3:03:52 AM
to nod...@googlegroups.com
> * Code is easier to write.

> * Code is MUCH easier to read and maintain!

Just here to point out the obvious: this is a pretty subjective thing,
and heavily influenced by past experiences with async code management
strategies. It'd be interesting, though, if there was a way to measure
ease-of-writing and ease-of-reading, and then to do a survey amongst a
wide range of developers. Quick, somebody write a grant proposal!

Captain Obvious out.

--Josh

Bruno Jouhier

Aug 5, 2011, 5:28:01 AM
to nodejs


On Aug 5, 9:03 am, Joshua Holbrook <josh.holbr...@gmail.com> wrote:
> > * Code is easier to write.
> > * Code is MUCH easier to read and maintain!
>
> Just here to point out the obvious; This is a pretty subjective thing,
> and heavily influenced by past experiences with async code management
> strategies. It'd be interesting, though, if there was a way to measure
> ease-of-writing and ease-of-reading, and then to do a survey amongst a
> wide range of developers. Quick, somebody write a grant proposal!

I think that this debate is tricky because people approach this with
different use cases and different backgrounds. I'm going to use an
analogy with the old world (pre-node):

* When you write a device driver, you do it in C or assembly.
* When you write business logic, you do it in a higher level language
(although some still do it in C!).

Sometimes, the guys who write device drivers get into sterile
religious debates with the guys who write applications.

Same thing with node.js
* When you write a technical I/O layer, you go to the metal and
callbacks are probably your best choice.
* When you write higher level app code with business logic, you need
something higher level.

That's all! There is simply no "one size fits all": The node core team
is into technical layers, and is doing a great job going to the metal
(and they make my day every day :-)). But people who are building
certain types of applications need something higher level. Fibers and
CPS tools can help them, and this is not the end of it. I'm sure that
there are going to be other interesting innovations along the way.

But the idea that business applications, for example, should be
written in callback/event style just because they now sit on top of an
async db layer is complete nonsense. I do not mean that these apps
should not try to take advantage of the async nature of the APIs, but
they need higher level mechanisms for this. Futures are an interesting
starting point.

A benchmark with realistic web app code (non-trivial business logic
making database or web service calls) would surely be enlightening! And
if you want to measure readability/maintainability, don't go into
theories that will be challenged; just put people in front of
realistic pieces of code written in both styles (callback and
pseudo-sync) and measure the time it takes them to understand the code
and make a change. Also, measure the number of errors they make. I've
not done the experiment but I'm ready to bet big money on the outcome!

Bruno

Bruno Jouhier

Aug 5, 2011, 5:43:08 AM
to nodejs
A quick illustration from something I posted to another thread. How
would you write 'pipe' on top of async read/write calls?

The callback version:

function pipe(inStream, outStream, callback) {
  var loop = function(err) {
    if (err) callback(err);
    else inStream.read(function(err, data) {
      if (err) callback(err);
      else data != null ? outStream.write(data, loop) : callback();
    });
  };
  loop();
}

The streamline.js equivalent:

function pipe(inStream, outStream, _) {
  var buf;
  while (buf = inStream.read(_))
    outStream.write(buf, _);
}

(the fiber version would be even simpler: no underscore parameters).
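
For illustration, one shape the fiber version could take, assuming it runs inside a fiber and that inStream.read / outStream.write have been wrapped (hypothetically, e.g. with Future.wrap) so they return Futures:

function pipe(inStream, outStream) {
  var buf;
  while ((buf = inStream.read().wait()) != null) {
    outStream.write(buf).wait();
  }
}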

I don't think that any reasonable and honest person would find the
first one more readable!

If I were a node.js core developer, I would still write it the
callback way. It takes a few extra brain cycles but this code is going
to be used over and over and I want maximum efficiency.

On the other hand, if I'm writing business logic and I need to apply
a complex transformation between the read and the write (the
transformation itself will make async calls of course), I'm probably
better off with the pseudo-sync version.

Bruno

Jeffrey Zhao

Aug 5, 2011, 7:53:29 AM
to nod...@googlegroups.com
Cannot agree more.

Jeffrey Zhao
Blog: http://blog.zhaojie.me/
Twitter: @jeffz_cn (Chinese) | @jeffz_en (English)

Scott González

Aug 5, 2011, 7:56:09 AM
to nod...@googlegroups.com
On Fri, Aug 5, 2011 at 5:43 AM, Bruno Jouhier <bjou...@gmail.com> wrote:
On the other hand, if I'm writing business logic and I need to apply
a complex transformation between the read and the write (the
transformation itself will make async calls of course), I'm probably
better off with the pseudo-sync version.

Or maybe you're better off using some kind of flow-control helper that doesn't hide the async nature of the code. Why do people keep comparing raw async code to things like streamline and fibers? If you're going to use a helper layer, then actually compare helper layers.

And just because this is all subjective, having underscores littered throughout your code as a meaningful piece of information is not more readable to me.

Branko Vukelić

Aug 5, 2011, 8:07:53 AM
to nod...@googlegroups.com
On 2011-08-05 07:56 -0400, Scott González wrote:
> And just because this is all subjective, having underscores littered
> throughout your code as a meaningful piece of information is not more
> readable to me.

So true.

--
Branko Vukelic
bra...@herdhound.com
bg.b...@gmail.com

Lead Developer
Herd Hound (tm) - Travel that doesn't bite
www.herdhound.com

Love coffee? You might love Loveffee, too.
loveffee.appspot.com

Joshua Holbrook

Aug 5, 2011, 11:47:01 AM
to nod...@googlegroups.com
> I don't think that any reasonable and honest person would find the
> first one more readable!

Hah! That's where you'd be wrong. For me, the streamline.js version is
way too much magic. I had to read the callbacks version first in order
to understand what the second one was trying to get at. Keep in mind,
of course, that my experience with async is almost exclusively with
node.

True story.

--Josh

Jorge

Aug 5, 2011, 11:52:57 AM
to nod...@googlegroups.com
On 05/08/2011, at 13:56, Scott González wrote:
On Fri, Aug 5, 2011 at 5:43 AM, Bruno Jouhier <bjou...@gmail.com> wrote:
On the other hand, if I'm writing business logic and I need to apply
a complex transformation between the read and the write (the
transformation itself will make async calls of course), I'm probably
better off with the pseudo-sync version.

Or maybe you're better off using some kind of flow control helper that doesn't hide the async nature of the code. Why do people keep comparing raw async code to things like streamline and fibers? If you're going to use an helper layer, then actually compare helper layers.

And just because this is all subjective,

It's not subjective. There are at least 3 ways to handle asynchronous jobs:

1.- Block execution on every async call. Requires multiple threads of execution to work. E.g. ruby.

2.- Call a function to launch the job, and give it a callback that the async job will call when done. It's currently the only way to do it in ES3/ES5, because (1) ES is single-threaded and (2) ES can't *suspend* a function in the middle of execution: it must run to completion. But it's not the most convenient way to do it, because the context in which the call to the async job happens is necessarily destroyed (it *must* run to completion, *must* end via a return), which is bad for many reasons. The most obvious and annoying one is that you're forced to exit when the job you wanted to do is not finished yet; you're simply not done, you're just awaiting a result (the one coming from the async call). And also (very important!) it makes it impossible to try/catch errors in the current call stack.

3.- Call a function to launch the job, and let it be suspended/resumed as many times as needed (fibers). The advantage here is that the call stack is preserved, try/catch blocks work as expected, and there's no need to nest a new callback per async call inside the initial context to preserve it (in closures). Not to mention that the code flow is much more human (rather than nerd) readable than in 2.

So yes, there are good, objective and valid reasons to say that 3 is better than 2.

For example, this could (and should!) be perfectly valid, non-blocking, node.js source code:

...
try {
  result= doSomethingAsynchronously();
} catch (error) {
  // handle the error.
}

// code that uses result.
...

(Now try to write that (in a manner as readable) using callbacks)
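
For comparison, the same flow with a callback (doSomethingAsynchronously is the same hypothetical function, here taking a node-style callback):

doSomethingAsynchronously(function(error, result) {
  if (error) {
    // handle the error.
    return;
  }
  // code that uses result.
});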

There's an opportunity for node to innovate. Node could serve to improve JS. The browsers can't do it, because they can't "break the web". But node can do (almost) anything it wants (say, for example, invent buffers), because it's got no legacy baggage.
-- 
Jorge.


Ben Noordhuis

Aug 5, 2011, 11:54:09 AM
to nod...@googlegroups.com

I just did.

For the record, I think Marcel's fibers add-on is a pretty clever
hack. I can even see it being useful in some cases. But it's not
something you'll see in core until V8 supports coroutines natively.

Jorge

Aug 5, 2011, 12:03:01 PM
to nod...@googlegroups.com
On 05/08/2011, at 17:47, Joshua Holbrook wrote:

>> I don't think that any reasonable and honest person would find the
>> first one more readable!
>
> Hah! That's where you'd be wrong. For me, the streamline.js version is
> way too much magic. I had to read the callbacks version first in order
> to understand what the second one was trying to get at. Keep in mind,
> of course, that my experience with async is almost exclusively with
> node.
>
> True story.

Ok, which one is better?

#1

function pipe(inStream, outStream, callback){
  (function loop (err){
    if (err) callback(err);
    else inStream.read(function(err, data){
      if (err) callback(err);
      else data != null ? outStream.write(data, loop) : callback();
    });
  })();
}

#2:

function pipe (inStream, outStream, _) {
  var buf;
  while (buf = inStream.read(_)) outStream.write(buf, _);
}

#3:

function pipe (inStream, outStream) {
  var buf;
  while (buf = inStream.read()) outStream.write(buf);
}

All of them are possible, in node.
--
Jorge.

Jorge

Aug 5, 2011, 12:10:56 PM
to nod...@googlegroups.com

Marcel's fibers are Pure Win.
--
Jorge.

Joshua Holbrook

Aug 5, 2011, 12:15:06 PM
to nod...@googlegroups.com
On Fri, Aug 5, 2011 at 9:03 AM, Jorge <jo...@jorgechamorro.com> wrote:
> On 05/08/2011, at 17:47, Joshua Holbrook wrote:
>
>>> I don't think that any reasonable and honest person would find the
>>> first one more readable!
>>
>> Hah! That's where you'd be wrong. For me, the streamline.js version is
>> way too much magic. I had to read the callbacks version first in order
>> to understand what the second one was trying to get at. Keep in mind,
>> of course, that my experience with async is almost exclusively with
>> node.
>>
>> True story.
>
> Ok, which one is better ?
>
> #1
>
>  function pipe(inStream, outStream, callback){
>   (function loop (err){
>     if (err) callback(err);
>     else inStream.read(function(err, data){
>       if (err) callback(err);
>       else data != null ? outStream.write(data, loop) : callback();
>     });
>   })();
>  }

This one is long, but it's obvious what it's doing because the
callbacks are explicit.

>
> #2:
>
>  function pipe (inStream, outStream, _) {
>   var buf;
>   while (buf = inStream.read(_)) outStream.write(buf, _);
>  }

Shorter, yes, but I don't really understand what these underscores are
about, and knowing that they're async has me scratching my head a bit
("how does *that* work?"). Having figured out the end goal (by reading
the first one) it's pretty clear, sure. If I spent enough time with
this I could probably get to the point where I can read it easy.

>
> #3:
>
> function pipe (inStream, outStream) {
>  var buf;
>  while (buf = inStream.read()) outStream.write(buf);
> }

Except for the underscores, this is pretty much the same as #2. The
use of `while` still strikes me as a bit magical.

I'm open to the suggestion that alternate approaches to callbacks are
"better" in some way(s), but I definitely thing that "more readable"
is the sort of thing that's heavily influenced by what you've done
before. None of these things are truly intuitive; we have to *learn*
them.

--Josh

>
> All of them are possible, in node.
> --
> Jorge.
>

Dean Landolt

Aug 5, 2011, 12:23:17 PM
to nod...@googlegroups.com


V8 with harmony generators will be pure win...and I suspect this thread will continue until they land :)

Jorge

Aug 5, 2011, 12:39:37 PM
to nod...@googlegroups.com
On 05/08/2011, at 18:15, Joshua Holbrook wrote:
>
> I'm open to the suggestion that alternate approaches to callbacks are
> "better" in some way(s), but I definitely thing that "more readable"
> is the sort of thing that's heavily influenced by what you've done
> before. None of these things are truly intuitive; we have to *learn*
> them.

The only reason you need callbacks for async jobs is that (in JS) function calls can't be suspended/resumed; they must, always, run to completion. But async does not necessarily mean 'must provide a callback'; that's not the only way to do it.
--
Jorge.

Joshua Holbrook

Aug 5, 2011, 3:19:37 PM
to nod...@googlegroups.com

Yeah, that's definitely true. On the other hand, I don't really have
any exposure to this alternate approach, so the callback approach
happens to make more sense to me.

--Josh

Mikeal Rogers

Aug 5, 2011, 3:36:27 PM
to nod...@googlegroups.com
Can we just point this thread at the last time fibers was mentioned and all this was brought up and argued about?

Let's all calm down and get back to writing some code now.

Bruno Jouhier

Aug 5, 2011, 6:16:18 PM
to nodejs
> >  function pipe (inStream, outStream, _) {
> >   var buf;
> >   while (buf = inStream.read(_)) outStream.write(buf, _);
> >  }
>
> Shorter, yes, but I don't really understand what these underscores are
> about, and knowing that they're async has me scratching my head a bit
> ("how does *that* work?"). Having figured out the end goal (by reading
> the first one) it's pretty clear, sure. If I spent enough time with
> this I could probably get to the point where I can read it easy.

You are mixing 2 questions:

WHAT does this code do?
HOW does it work?

Let's take a look at the alternatives again:

#1

function pipe(inStream, outStream, callback){
  (function loop (err){
    if (err) callback(err);
    else inStream.read(function(err, data){
      if (err) callback(err);
      else data != null ? outStream.write(data, loop) : callback();
    });
  })();
}

Q1: WHAT does this code do?
A: Not so obvious. It takes a bit of time to see that it reads data from
inStream, writes it to outStream, and recurses through loop to process
the next chunk. But you also need to analyze the error handling. A few
more brain cycles, and YES, I've checked it completely: errors are
correctly reported through the callback.

Q2: HOW does it do it?
A: Pretty obvious, just follow the callbacks.

#2

function pipe (inStream, outStream, _) {
  var buf;
  while (buf = inStream.read(_)) outStream.write(buf, _);
}

Q1: WHAT does this code do?
A: Obvious: it loops until read returns null, passes the chunks to the
write call.

Q2: HOW does it do it?
A: It yields and resumes in every call that has the little underscore.
It looks a bit magical of course, but that's a pretty accurate
description of how it works.

#3:

function pipe (inStream, outStream) {
  var buf;
  while (buf = inStream.read()) outStream.write(buf);
}

Same answers, basically. It looks even more magical because you don't
see where it yields and resumes.

But what's the most important question when you have to read and
maintain large pieces of code? I think that it is the first one: "WHAT
does it do?". And I think that we can objectively say that this
question is easier to answer in #2 and #3 than in #1.

You are asking the second question because you do not TRUST the magic.
So you also need to understand the HOW, to be reassured that the
magic actually works. That is the problem. Once you get confident
that the magic works, you won't need to worry about the HOW any more;
you will just let the magic operate.

This is the same thing as when I'm writing

function tan(x) { return sin(x) / cos(x) }

The sin and cos functions are MAGIC. I don't know how they work
internally but I TRUST that they do what the doc says (and what my math
teacher told me).

So, I immediately understand WHAT the function does: it computes the
tangent.

If I did not trust sin and cos, I would start to investigate HOW they
work, to be sure that my tan function does what I want it to do.

So, it really boils down to a question of trust. What are you ready to
trust? If you are not ready to trust CPS tools like streamline, or
fibers (performance, robustness, ... reasons may vary), then fine, go
to the metal and write all the callbacks.

But if you are ready to trust these tools, just relax and let the
magic operate.

Bruno Jouhier

Aug 5, 2011, 6:35:43 PM
to nodejs
Hi Mikeal,

I think that the debate is not just a rehash of things that have been
said in previous threads. Some of the responses shed a slightly
different light on the issue. For example, Jorge's comment on the 3
ways async can be dealt with can help people see things from a
different perspective. My own comments on the "technical I/O layer" vs
the "applicative layer" tries to put things into perspective too and
may help people understand where we (the CPS and fiber guys) are
coming from and why the approaches may be more complementary than
antagonistic. Of course, there is a bit of futile arguing, but also
some progress in the debate.

Why should we kill it? If people are not interested, they can just
skip this thread.

Bruno

Branko Vukelić

Aug 5, 2011, 6:56:45 PM
to nod...@googlegroups.com
On 2011-08-05 12:36 -0700, Mikeal Rogers wrote:
> Can we just point this thread at the last time fibers was mentioned and
> all this was brought up and argued about?

So what is it? Threads of fibers now? ;)

Branko Vukelić

Aug 5, 2011, 7:04:37 PM8/5/11