fibers 0.5; say "no" to callbacks (when it makes sense in your application)


Marcel Laverdet

Jul 31, 2011, 9:15:45 PM
to nodejs
Hey guys, I finished up some new work on Fibers over the past week or so and wanted to get some feedback. For those of you who have missed the ongoing holy wars, fibers are a simplified implementation of coroutines for NodeJS. These work similarly to threads, except only one is running at a time and the programmer explicitly chooses when to switch to another coroutine (thus, there are no locking & race condition issues).

Anyway, the biggest change is that I decided to include an implementation of futures along with the package. I've been saying since day 1 that using fibers without some kind of abstraction is foolish, but I didn't actually provide any example abstraction out of the box. As a result it was really hard to get started with fibers or see the advantages of them. I was reluctant to include this library because it's possible to build all kinds of interesting abstractions on fibers and I didn't want to stifle anyone's creativity. But alas, I want to make it easier for people to try out fibers, so I'm including Futures alongside Fibers (without changing the core Fiber API).

Futures is the library I've been using in my project with support from Fibers. They are very similar to existing "promises" implementations with the addition of a "wait" method which will yield the current fiber until a future has resolved. What I like about Futures is that it's really easy to mix and match callback-based code with "blocking" fiber code whenever a particular style makes more sense.

Quick example (some bootstrapping abbreviated):

// Future.wrap takes an existing callback-based function and makes a function
// which returns a Future
var readdir = Future.wrap(require('fs').readdir);
var dir = readdir('.');


// Once you have a Future you can either resolve it asynchronously
dir.resolve(function(err, val) {
  if (err) throw err;
  console.log('Files in this directory: ' + val.join(', '));
});


// Or you can yield your current fiber synchronously (the wait() call is the
// important part):
console.log('Files in this directory: ' + dir.wait().join(', '));


Using fibers here gives you some cool advantages:


- Reasonable stack traces: Sometimes it can be difficult to find bugs that occur in callback-based code because you lose your stack with every tick. But since each fiber is a stack, traces are maintained even past asynchronous calls. Additionally line numbers are not mangled like they are with the existing rewriting solutions.


- Exceptions are usable again: Using fibers you no longer have to manually propagate exceptions; this is all taken care of for you. In the example above, wait() will throw if readdir() resolved an error.


- Avoid excessive callbacks without sacrificing parallelism: Even though wait() blocks execution of JavaScript, it's important to note that Node is never actually blocked. Each Fiber you create is a whole new stack which can be switched into and out of appropriately. So while many stacks of JS are blocked, another is running. You can even use wait() to wait on many Futures at the same time via Future.wait(future1, future2, future3, ...) or Future.wait(arrayOfFutures).


- Choose when to use callbacks and when not to: To be honest I use callbacks about as often as I use blocking fibers in my applications. Sometimes callbacks just make more sense (especially with streams of data), but when I do use Fibers I am really really glad I have them available.
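The Future.wait(arrayOfFutures) form mentioned above can be approximated even without the add-on installed; here's a throwaway join helper in plain callback JavaScript (waitAll and delayed are made-up names, not part of the Futures API) that shows the semantics — every result becomes available at a single point:

```javascript
// waitAll: invoke the callback once every task has finished.
// Each task is a function(cb) using the node (err, value) convention.
// This only sketches what Future.wait(arrayOfFutures) achieves; the
// real thing suspends a fiber instead of taking a callback.
function waitAll(tasks, callback) {
  var results = new Array(tasks.length);
  var pending = tasks.length;
  var failed = false;
  tasks.forEach(function (task, i) {
    task(function (err, value) {
      if (failed) return;            // a task already errored; ignore the rest
      if (err) { failed = true; return callback(err); }
      results[i] = value;
      if (--pending === 0) callback(null, results);
    });
  });
}

// Two fake async operations standing in for wrapped I/O calls.
function delayed(value, ms) {
  return function (cb) { setTimeout(function () { cb(null, value); }, ms); };
}

waitAll([delayed('a', 20), delayed('b', 10)], function (err, values) {
  if (err) throw err;
  console.log(values.join(','));  // order follows the task array, not finish order
});
```

The fiber version reads top-to-bottom instead: the results come back as the return value of a single wait() call rather than inside yet another callback.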


Fibers are compatible with 32-bit and 64-bit modes on Linux & OS X (Lion supported as of fibers 0.5). Windows is unfortunately not supported, but if there is strong interest it could be added without too much work.


Check out the documentation on github:

https://github.com/laverdet/node-fibers

(see "FUTURES" for new stuff)


Thanks for reading,

Marcel

Shimon Doodkin

Aug 3, 2011, 6:42:13 PM
to nodejs
As it is now it is fine, but maybe you can convince the right people
to include it as a built-in lib in node.
I don't like the idea of patching memory.

Mikeal Rogers

Aug 3, 2011, 6:43:35 PM
to nod...@googlegroups.com
never. going. to. happen.


Jorge

Aug 3, 2011, 7:02:04 PM
to nod...@googlegroups.com
On 04/08/2011, at 00:43, Mikeal Rogers wrote:

> never. going. to. happen.

Never say never.
--
Jorge.

Ben Noordhuis

Aug 3, 2011, 7:30:31 PM
to nod...@googlegroups.com
On Thu, Aug 4, 2011 at 01:02, Jorge <jo...@jorgechamorro.com> wrote:
> On 04/08/2011, at 00:43, Mikeal Rogers wrote:
>
>> never. going. to. happen.
>
> Never say never.

Never.

Isaac Schlueter

Aug 3, 2011, 8:00:17 PM
to nod...@googlegroups.com

Never.

Jorge

Aug 3, 2011, 8:29:02 PM
to nod...@googlegroups.com
On 01/08/2011, at 03:15, Marcel Laverdet wrote:
>
> (...)


This is simply amazing. Thanks. Keep up the good work !

ISTM that many people here don't realize *yet* how much better their lives would be if they could simply yield() and resume() instead of having to return (and end) on every function invocation. They think they don't need yield() just because they've got closures to preserve the contexts. What a mistake.

But they'll realize it, sooner or later, as they progress. You'll see.
--
Jorge.

Jorge

Aug 3, 2011, 8:33:01 PM
to nod...@googlegroups.com

...say never.

mscdex

Aug 3, 2011, 11:33:13 PM
to nodejs
On Aug 3, 8:33 pm, Jorge <jo...@jorgechamorro.com> wrote:
> On 04/08/2011, at 02:00, Isaac Schlueter wrote:
>
> > On Wed, Aug 3, 2011 at 16:30, Ben Noordhuis <i...@bnoordhuis.nl> wrote:
> >> On Thu, Aug 4, 2011 at 01:02, Jorge <jo...@jorgechamorro.com> wrote:
> >>> On 04/08/2011, at 00:43, Mikeal Rogers wrote:
>
> >>>> never. going. to. happen.
>
> >>> Never say never.
>
> >> Never.
>
> > Never.
>
> ...say never.

Never.

brez

Aug 3, 2011, 11:47:00 PM
to nodejs
I don't get it - what's the idea behind using fibers (concurrency?) in
evented i/o?

Marcel Laverdet

Aug 4, 2011, 1:48:39 AM
to nod...@googlegroups.com
never. going. to. happen.

Yeah I wish people would quit suggesting this as it always spawns a massive flame war that is truly undeserved. I've personally never even hinted that Fiber should be a part of node core but it always comes up and becomes a straw man for why fibers are the worst thing to happen to JavaScript since with(){}. Drop the hate man :(

> I don't get it - what's the idea behind using fibers (concurrency?) in
> evented i/o?

Fibers don't get you any concurrency; you are mistaking them for threads. Fibers are merely another way to express asynchronous I/O*. Instead of having a callback every time you need to wait for something to happen, you can just yield your fiber. Imagine being able to use all the fs.*Sync methods but without incurring the process block.

* Simplified description; you can use them for many other tasks, see also using Fibers for generators: https://gist.github.com/1124572
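The "programmer explicitly chooses when to switch" behaviour is easy to demo without the C++ add-on; here is the same idea built on ES generators (the harmony feature, not the Fiber API — worker and runRoundRobin are made-up names for this sketch):

```javascript
// Two coroutines that suspend themselves with `yield`; a tiny scheduler
// resumes them in turn. Only one runs at a time, and each one decides
// exactly where it gives up control -- so no locks, no race conditions.
function* worker(name, log) {
  for (var i = 1; i <= 3; i++) {
    log.push(name + i);
    yield;               // explicit switch point; no preemption
  }
}

// Resume every live generator once per round until all have returned.
function runRoundRobin(gens) {
  var live = gens.slice();
  while (live.length) {
    live = live.filter(function (g) { return !g.next().done; });
  }
}

var log = [];
runRoundRobin([worker('a', log), worker('b', log)]);
console.log(log.join(' '));  // a1 b1 a2 b2 a3 b3
```

A real fiber adds what generators alone can't: the suspend can happen deep inside nested calls, not just at a literal yield in the top-level function body.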

Chris

Aug 4, 2011, 2:00:38 AM
to nodejs
I wrote a blog post on one possible use of fibers - creating
asynchronous setters/getters: http://chris6f.com/synchronous-nodejs

It's never going to get into the core but interesting nonetheless!

Mikeal Rogers

Aug 4, 2011, 2:07:52 AM
to nod...@googlegroups.com
just to clarify my comment a bit, i think we're past the flame war on whether or not this is "evil" or something.

this might lead at some point to an alternate platform, which could be very interesting.

it's just never going to be part of node core.

-Mikeal

octave

Aug 4, 2011, 3:21:47 AM
to nodejs
Great work, Marcel!

I'm going to improve node-sync soon to let it work as middleware
for express/connect.
The most interesting point is that with fibers we can pass req/res
context through the fiber tree, without passing it each time as function
arguments.
It just creates a new fiber for each request and isolates all
subsequent asynchronous calls within the main "request fiber".

If you are interested, check out the draft implementation in dev
branch:
https://github.com/0ctave/node-sync/blob/dev/lib/sync.js#L340

Any improvements on memory/performance?


octave

Aug 4, 2011, 3:26:39 AM
to nodejs
We can pass variables through subsequent fibers:
https://github.com/0ctave/node-sync/blob/dev/examples/vars.js

Implementation:
https://github.com/0ctave/node-sync/blob/dev/lib/sync.js#L111


Bruno Jouhier

Aug 4, 2011, 3:30:30 AM
to nodejs
Nice try Marcel :-). I'm in a slightly better position as my tag line
is a bit less disruptive: say "yes" to callbacks; but don't bother
writing them!

Bruno

On Aug 4, 7:48 am, Marcel Laverdet <mar...@laverdet.com> wrote:
> > never. going. to. happen.
>

brez

Aug 4, 2011, 10:29:42 AM
to nodejs

> > I don't get it - what's the idea behind using fibers (concurrency?) in
> > evented i/o?
>
> Fibers don't get you any concurrency, you are mistaking them with threads.

Yea I guess I was thinking of fibers in Ruby 1.9

> Instead of having
> a callback every time you need to wait for something to happen, you can just
> yield your fiber.

Other than syntax, what's the difference?

> Imagine being able to use all the fs.*Sync methods but
> without incurring the process block.

Why not just use the async methods in the first place?

Thanks

Bruno Jouhier

Aug 5, 2011, 2:53:11 AM
to nodejs

> > Imagine being able to use all the fs.*Sync methods but
> > without incurring the process block.
>
> Why not just use the async methods in the first place?

My experience is with streamline.js rather than with fibers but the
benefits are similar:

* You have less code: I converted around 10,000 lines of code from
async style to sync style. New code is 30% smaller.
* Code is easier to write.
* You can write more elegant code (chaining is not disrupted by
callbacks).
* Code is MUCH easier to read and maintain!
* You can do robust exception handling with try/catch/finally (even
around blocks that contain async calls).
* You have meaningful stack traces in the debugger (fibers, not
streamline).

And I'm probably forgetting some other benefits.

Bruno

Joshua Holbrook

Aug 5, 2011, 3:03:52 AM
to nod...@googlegroups.com
> * Code is easier to write.

> * Code is MUCH easier to read and maintain!

Just here to point out the obvious: this is a pretty subjective thing,
and heavily influenced by past experiences with async code management
strategies. It'd be interesting, though, if there was a way to measure
ease-of-writing and ease-of-reading, and then to do a survey amongst a
wide range of developers. Quick, somebody write a grant proposal!

Captain Obvious out.

--Josh

Bruno Jouhier

Aug 5, 2011, 5:28:01 AM
to nodejs


On Aug 5, 9:03 am, Joshua Holbrook <josh.holbr...@gmail.com> wrote:
> > * Code is easier to write.
> > * Code is MUCH easier to read and maintain!
>
> Just here to point out the obvious; This is a pretty subjective thing,
> and heavily influenced by past experiences with async code management
> strategies. It'd be interesting, though, if there was a way to measure
> ease-of-writing and ease-of-reading, and then to do a survey amongst a
> wide range of developers. Quick, somebody write a grant proposal!

I think that this debate is tricky because people approach this with
different use cases and different backgrounds. I'm going to use an
analogy with the old world (pre-node):

* When you write a device driver, you do it in C or assembly.
* When you write business logic, you do it in a higher level language
(although some still do it in C!).

Sometimes, the guys who write device drivers get into sterile
religious debates with the guys who write applications.

Same thing with node.js
* When you write a technical I/O layer, you go to the metal and
callbacks are probably your best choice.
* When you write higher level app code with business logic, you need
something higher level.

That's all! There is simply no "one size fits all": The node core team
is into technical layers, and is doing a great job going to the metal
(and they make my day every day :-)). But people who are building
certain types of applications need something higher level. Fibers and
CPS tools can help them, and this is not the end of it. I'm sure that
there are going to be other interesting innovations along the way.

But the idea that business applications, for example, should be
written in callback/event style because they are now on top of an
async db layer is complete nonsense. I do not mean that these apps
should not try to take advantage of the async nature of the APIs but
they need higher level mechanisms for this. Futures are an interesting
starting point.

A benchmark with realistic web app code (non trivial business logic
calling database or web service calls) would sure be enlightening! And
if you want to measure readability/maintainability, don't go into
theories that will be challenged, just put people in front of
realistic pieces of code written in both styles (callback and pseudo-
sync) and measure the time it takes them to understand the code and
make a change. Also, measure the number of errors they will make. I've
not done the experiment but I'm ready to bet big money on the outcome!

Bruno

Bruno Jouhier

Aug 5, 2011, 5:43:08 AM
to nodejs
A quick illustration from something I posted to another thread. How
would you write 'pipe' on top of async read/write calls?

The callback version:

function pipe(inStream, outStream, callback) {
  var loop = function(err) {
    if (err) callback(err);
    else inStream.read(function(err, data) {
      if (err) callback(err);
      else data != null ? outStream.write(data, loop) : callback();
    });
  };
  loop();
}

The streamline.js equivalent:

function pipe(inStream, outStream, _) {
  var buf;
  while (buf = inStream.read(_))
    outStream.write(buf, _);
}

(the fiber version would be even simpler: no underscore parameters).
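The callback version above is easy to exercise with mock streams; note the read(cb)/write(data, cb) signatures here are the thread's hypothetical ones (and makeInStream/makeOutStream are invented for this harness), not Node's real Stream API:

```javascript
// Mock streams matching the read(cb)/write(data, cb) convention used
// in the examples above (not Node's actual stream interface).
function makeInStream(chunks) {
  return { read: function (cb) {
    process.nextTick(function () {
      cb(null, chunks.length ? chunks.shift() : null);  // null signals EOF
    });
  }};
}
function makeOutStream(sink) {
  return { write: function (data, cb) {
    sink.push(data);
    process.nextTick(cb);
  }};
}

// The callback-style pipe from the post, unchanged in structure.
function pipe(inStream, outStream, callback) {
  var loop = function (err) {
    if (err) callback(err);
    else inStream.read(function (err, data) {
      if (err) callback(err);
      else data != null ? outStream.write(data, loop) : callback();
    });
  };
  loop();
}

var sink = [];
pipe(makeInStream(['a', 'b', 'c']), makeOutStream(sink), function (err) {
  if (err) throw err;
  console.log(sink.join(''));  // abc
});
```

Running both versions against the same mocks is a fair way to check they really are equivalent before arguing about which one reads better.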

I don't think that any reasonable and honest person would find the
first one more readable!

If I were a node.js core developer, I would still write it the
callback way. It takes a few extra brain cycles but this code is going
to be used over and over and I want maximum efficiency.

On the other hand, if I'm writing business logic and I need to apply
a complex transformation between the read and the write (the
transformation itself will make async calls of course), I'm probably
better off with the pseudo-sync version.

Bruno

Jeffrey Zhao

Aug 5, 2011, 7:53:29 AM
to nod...@googlegroups.com
Cannot agree more.

Jeffrey Zhao
Blog: http://blog.zhaojie.me/
Twitter: @jeffz_cn (Chinese) | @jeffz_en (English)

Scott González

Aug 5, 2011, 7:56:09 AM
to nod...@googlegroups.com
On Fri, Aug 5, 2011 at 5:43 AM, Bruno Jouhier <bjou...@gmail.com> wrote:
> On the other hand, if I'm writing business logic and I need to apply
> a complex transformation between the read and the write (the
> transformation itself will make async calls of course), I'm probably
> better off with the pseudo-sync version.

Or maybe you're better off using some kind of flow control helper that doesn't hide the async nature of the code. Why do people keep comparing raw async code to things like streamline and fibers? If you're going to use a helper layer, then actually compare helper layers.

And just because this is all subjective, having underscores littered throughout your code as a meaningful piece of information is not more readable to me.

Branko Vukelić

Aug 5, 2011, 8:07:53 AM
to nod...@googlegroups.com
On 2011-08-05 07:56 -0400, Scott González wrote:
> And just because this is all subjective, having underscores littered
> throughout your code as a meaningful piece of information is not more
> readable to me.

So true.

--
Branko Vukelic
bra...@herdhound.com
bg.b...@gmail.com

Lead Developer
Herd Hound (tm) - Travel that doesn't bite
www.herdhound.com

Love coffee? You might love Loveffee, too.
loveffee.appspot.com

Joshua Holbrook

Aug 5, 2011, 11:47:01 AM
to nod...@googlegroups.com
> I don't think that any reasonable and honest person would find the
> first one more readable!

Hah! That's where you'd be wrong. For me, the streamline.js version is
way too much magic. I had to read the callbacks version first in order
to understand what the second one was trying to get at. Keep in mind,
of course, that my experience with async is almost exclusively with
node.

True story.

--Josh

Jorge

Aug 5, 2011, 11:52:57 AM
to nod...@googlegroups.com
On 05/08/2011, at 13:56, Scott González wrote:
> On Fri, Aug 5, 2011 at 5:43 AM, Bruno Jouhier <bjou...@gmail.com> wrote:
>> On the other hand, if I'm writing business logic and I need to apply
>> a complex transformation between the read and the write (the
>> transformation itself will make async calls of course), I'm probably
>> better off with the pseudo-sync version.
>
> Or maybe you're better off using some kind of flow control helper that doesn't hide the async nature of the code. Why do people keep comparing raw async code to things like streamline and fibers? If you're going to use a helper layer, then actually compare helper layers.
>
> And just because this is all subjective,

It's not subjective. There's at least 3 ways to handle asynchronous jobs:

1.- Block execution on every async call. Requires multiple threads of execution to work. E.g. ruby.

2.- Call a function to launch the job, and give it a callback that the async job will call when done. It's currently the only way to do it in ES3/ES5, because (1) ES is single-threaded and (2) ES can't *suspend* a function in the middle of execution: it must run to completion. But it's not the most convenient way to do it, because the context where the call to the async job happens is necessarily destroyed (it *must* run to completion, *must* end via a return), which is bad for many reasons. The most obvious and annoying is that you're forced to exit when the job you wanted to do is not finished yet: you're not done, you're just awaiting a result (the one coming from the async call). And (very important!) it makes it impossible to try/catch errors in the current call stack.

3.- Call a function to launch the job, and let it be suspended/resumed as many times as needed (fibers). The advantage here is that the call stack is preserved, try/catch blocks work as expected, and there's no need to nest a new callback per async call inside the initial context to preserve it (in closures). Not to mention that the code flow is much more human (rather than nerd) readable than in 2.

So yes, there are good, objective and valid reasons to say that 3 is better than 2.

For example, this could (and should!) be perfectly valid, non-blocking, node.js source code:

...
try {
  result= doSomethingAsynchronously();
} catch (error) {
  // handle the error.
}

// code that uses result.
...

(Now try to write that (in a manner as readable) using callbacks)
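Approach 3 can actually be sketched today with generators plus a small trampoline: gen.throw routes the async error back *into* the suspended stack, so an ordinary try/catch works across the async call. (run and mayFail are made-up names for this sketch; real fibers need no yield keyword at the call sites.)

```javascript
// A minimal trampoline: the generator yields "thunks" (functions taking
// a node-style callback); run() resumes it with the result, or throws
// the error back into the generator so plain try/catch works.
function run(genFn, done) {
  var gen = genFn();
  function step(err, value) {
    var next;
    try {
      next = err ? gen.throw(err) : gen.next(value);
    } catch (e) {
      return done(e);              // error not caught inside the generator
    }
    if (next.done) return done(null, next.value);
    next.value(step);              // start the async job; resume via step
  }
  step(null);
}

// A fake async job that can fail, standing in for doSomethingAsynchronously().
function mayFail(shouldFail) {
  return function (cb) {
    setTimeout(function () {
      shouldFail ? cb(new Error('boom')) : cb(null, 42);
    }, 5);
  };
}

run(function* () {
  var result;
  try {
    result = yield mayFail(true);  // async, yet catchable in this stack
  } catch (error) {
    result = 'caught ' + error.message;
  }
  return result;
}, function (err, value) {
  if (err) throw err;
  console.log(value);              // caught boom
});
```

The call stack is suspended at the yield and resumed with either a value or a throw, which is exactly the property the callback style loses.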

There's an opportunity for node to innovate. Node could serve to improve JS. The browsers can't do it, because they can't "break the web". But node can do (almost) anything it wants (say, for example, invent buffers), because it's got no legacy baggage.
-- 
Jorge.

> having underscores littered throughout your code as a meaningful piece of information is not more readable to me.

Ben Noordhuis

Aug 5, 2011, 11:54:09 AM
to nod...@googlegroups.com

I just did.

For the record, I think Marcel's fibers add-on is a pretty clever
hack. I can even see it being useful in some cases. But it's not
something you'll see in core until V8 supports coroutines natively.

Jorge

Aug 5, 2011, 12:03:01 PM
to nod...@googlegroups.com
On 05/08/2011, at 17:47, Joshua Holbrook wrote:

>> I don't think that any reasonable and honest person would find the
>> first one more readable!
>
> Hah! That's where you'd be wrong. For me, the streamline.js version is
> way too much magic. I had to read the callbacks version first in order
> to understand what the second one was trying to get at. Keep in mind,
> of course, that my experience with async is almost exclusively with
> node.
>
> True story.

Ok, which one is better ?

#1

function pipe(inStream, outStream, callback){
  (function loop (err){
    if (err) callback(err);
    else inStream.read(function(err, data){
      if (err) callback(err);
      else data != null ? outStream.write(data, loop) : callback();
    });
  })();
}

#2:

function pipe (inStream, outStream, _) {
  var buf;
  while (buf = inStream.read(_)) outStream.write(buf, _);
}

#3:

function pipe (inStream, outStream) {
  var buf;
  while (buf = inStream.read()) outStream.write(buf);
}

All of them are possible, in node.
--
Jorge.

Jorge

Aug 5, 2011, 12:10:56 PM
to nod...@googlegroups.com

Marcel's fibers are Pure Win.
--
Jorge.

Joshua Holbrook

Aug 5, 2011, 12:15:06 PM
to nod...@googlegroups.com
On Fri, Aug 5, 2011 at 9:03 AM, Jorge <jo...@jorgechamorro.com> wrote:
> On 05/08/2011, at 17:47, Joshua Holbrook wrote:
>
>>> I don't think that any reasonable and honest person would find the
>>> first one more readable!
>>
>> Hah! That's where you'd be wrong. For me, the streamline.js version is
>> way too much magic. I had to read the callbacks version first in order
>> to understand what the second one was trying to get at. Keep in mind,
>> of course, that my experience with async is almost exclusively with
>> node.
>>
>> True story.
>
> Ok, which one is better ?
>
> #1
>
>  function pipe(inStream, outStream, callback){
>   (function loop (err){
>     if (err) callback(err);
>     else inStream.read(function(err, data){
>       if (err) callback(err);
>       else data != null ? outStream.write(data, loop) : callback();
>     });
>   })();
>  }

This one is long, but it's obvious what it's doing because the
callbacks are explicit.

>
> #2:
>
>  function pipe (inStream, outStream, _) {
>   var buf;
>   while (buf = inStream.read(_)) outStream.write(buf, _);
>  }

Shorter, yes, but I don't really understand what these underscores are
about, and knowing that they're async has me scratching my head a bit
("how does *that* work?"). Having figured out the end goal (by reading
the first one) it's pretty clear, sure. If I spent enough time with
this I could probably get to the point where I can read it easy.

>
> #3:
>
> function pipe (inStream, outStream) {
>  var buf;
>  while (buf = inStream.read()) outStream.write(buf);
> }

Except for the underscores, this is pretty much the same as #2. The
use of `while` still strikes me as a bit magical.

I'm open to the suggestion that alternate approaches to callbacks are
"better" in some way(s), but I definitely think that "more readable"
is the sort of thing that's heavily influenced by what you've done
before. None of these things are truly intuitive; we have to *learn*
them.

--Josh

>
> All of them are possible, in node.
> --
> Jorge.
>

Dean Landolt

Aug 5, 2011, 12:23:17 PM
to nod...@googlegroups.com


V8 with harmony generators will be pure win...and I suspect this thread will continue until they land :)

Jorge

Aug 5, 2011, 12:39:37 PM
to nod...@googlegroups.com
On 05/08/2011, at 18:15, Joshua Holbrook wrote:
>
> I'm open to the suggestion that alternate approaches to callbacks are
> "better" in some way(s), but I definitely thing that "more readable"
> is the sort of thing that's heavily influenced by what you've done
> before. None of these things are truly intuitive; we have to *learn*
> them.

The only reason you need callbacks for async jobs is because (in JS) function calls can't be suspended/resumed, they must, always, run to completion. But async does not necessarily mean 'must provide a callback', that's not the only way to do it.
--
Jorge.

Joshua Holbrook

Aug 5, 2011, 3:19:37 PM
to nod...@googlegroups.com

Yeah, that's definitely true. On the other hand, I don't really have
any exposure to this alternate approach, so the callback approach
happens to make more sense to me.

--Josh

Mikeal Rogers

Aug 5, 2011, 3:36:27 PM
to nod...@googlegroups.com
Can we just point this thread at the last time fibers was mentioned and all this was brought up and argued about?

Let's all calm down and get back to writing some code now.

Bruno Jouhier

Aug 5, 2011, 6:16:18 PM
to nodejs
> >  function pipe (inStream, outStream, _) {
> >   var buf;
> >   while (buf = inStream.read(_)) outStream.write(buf, _);
> >  }
>
> Shorter, yes, but I don't really understand what these underscores are
> about, and knowing that they're async has me scratching my head a bit
> ("how does *that* work?"). Having figured out the end goal (by reading
> the first one) it's pretty clear, sure. If I spent enough time with
> this I could probably get to the point where I can read it easy.

You are mixing 2 questions:

WHAT does this code do?
HOW does it work?

Let's take a look again at the alternatives again:

#1

function pipe(inStream, outStream, callback){
  (function loop (err){
    if (err) callback(err);
    else inStream.read(function(err, data){
      if (err) callback(err);
      else data != null ? outStream.write(data, loop) : callback();
    });
  })();
}

Q1: WHAT does this code do?
A: Not so obvious. It takes a bit of time to see that it reads data from
inStream, writes it to outStream and recurses through loop to process the
next chunk. But you also need to analyze the error handling. A few
more brain cycles, and YES, I've checked it completely and errors are
correctly reported through the callback.

Q2: HOW does it do it?
A: Pretty obvious, just follow the callbacks.

#2

function pipe (inStream, outStream, _) {
  var buf;
  while (buf = inStream.read(_)) outStream.write(buf, _);
}

Q1: WHAT does this code do?
A: Obvious: it loops until read returns null, passing the chunks to the
write call.

Q2: HOW does it do it?
A: It yields and resumes in every call that has a little underscore.
Looks a bit magical of course, but that's a pretty accurate description
of how it works.

#3:

function pipe (inStream, outStream) {
  var buf;
  while (buf = inStream.read()) outStream.write(buf);
}

Same answers basically. Looks even more magical because you don't see
where it yields and resumes.

But what's the most important question when you have to read and
maintain large pieces of code? I think that it is the first one: "WHAT
does it do?". And I think that we can objectively say that this
question is easier to answer in #2 and #3 than in #1.

You are asking the second question because you do not TRUST the magic.
So, you need to also understand the HOW, to be reassured that the
magic actually works. The problem is here. Once you get confident
that the magic works, you won't need to worry about the HOW any more;
you will just let the magic operate.

This is the same thing as when I'm writing

function tan(x) { return sin(x) / cos(x) }

The sin and cos functions are MAGIC. I don't know how they work
internally but I TRUST that they do what the doc says (and what my math
teacher told me).

So, I immediately understand WHAT the function does: it computes the
tangent.

If I did not trust sin and cos, I would start to investigate HOW they
work, to be sure that my tan function does what I want it to do.

So, it really boils down to a question of trust. What are you ready to
trust? If you are not ready to trust CPS tools like streamline, or
fibers (performance, robustness, ... reasons may vary), then fine, go
to the metal and write all the callbacks.

But if you are ready to trust these tools, just relax and let the
magic operate.

Bruno Jouhier

Aug 5, 2011, 6:35:43 PM
to nodejs
Hi Mikeal,

I think that the debate is not just a rehash of things that have been
said in previous threads. Some of the responses shed a slightly
different light on the issue. For example, Jorge's comment on the 3
ways async can be dealt with can help people see things from a
different perspective. My own comments on the "technical I/O layer" vs
the "applicative layer" try to put things into perspective too and
may help people understand where we (the CPS and fiber guys) are
coming from and why the approaches may be more complementary than
antagonistic. Of course, there is a bit of futile arguing, but also
some progress in the debate.

Why should we kill it? If people are not interested, they can just
skip this thread.

Bruno

Branko Vukelić

Aug 5, 2011, 6:56:45 PM
to nod...@googlegroups.com
On 2011-08-05 12:36 -0700, Mikeal Rogers wrote:
> Can we just point this thread at the last time fibers was mentioned and
> all this was brought up and argued about?

So what is it? Threads of fibers now? ;)

Branko Vukelić

Aug 5, 2011, 7:04:37 PM
to nod...@googlegroups.com
On 2011-08-05 15:16 -0700, Bruno Jouhier wrote:
> Same answers basically. Looks even more magic because you don't see
> where it yields and resumes.

Of course, there are idiots like me who dropped their jaws to the
floor the moment this discussion began and were like "What?! I
barely managed to figure out the whole callback thing, and now this?!"

Seriously, though. I feel incredibly stupid trying to follow what you
guys are arguing about, but you say "you don't see where it yields" in
reference to the third example, whereas I didn't see where it yielded in
the second as well. Callbacks make more sense to me because that's what
I'm used to seeing and the semantics of it is clear to me.

> But what's the most important question when you have to read and
> maintain large pieces of code? I think that it is the first one: "WHAT
> does it do?". And I think that we can objectively say that this
> question is easier to answer in #2 and #3 than in #1.

So, along the lines of what I wrote above, no, #2 and #3 didn't make
things clearer. It seemed clear at first glance, but then I tried to
figure out what it did, and I had to look up #1 to find out.

> You are asking the second question because you do not TRUST the magic.
> So, you need to also understand the HOW, to be reassured that the
> magic actually works . The problem is here. Once you get confident
> that the magic works, you won't need to worry about the HOW any more,
> you will just let the magic operate.

And thus you prove someone else's point, and that is: reading code (and
the ease of reading that comes with that skill) is an acquired skill. It's
not internal to the code itself. Once people decide to learn the new
pattern, it will become more readable to _them_. Sort of like a
self-fulfilling prophecy.

Marco Rogers

unread,
Aug 5, 2011, 8:44:27 PM8/5/11
to nod...@googlegroups.com


I think that the debate is not just a rehash of things that have been
said in previous threads.

Actually yeah it is. Just different ways of saying the same thing. Trust me there has been a LOT of discussion on this.
 
Of course, there is a bit of futile arguing, but also
some progress in the debate.

Why should we kill it? If people are not interested, they can just
skip this thread.


It's not a futile argument. But every time it comes up, it goes the same way.

1. The advocates show up and start trying to convince people
2. The detractors who are not worn out by the debate show up and jump into the fray
3. Some new people are interested and learn some new things. This is helpful. But they ask questions which just exacerbates the debate
4. Some few of those new people are convinced
5. Those detractors who ARE worn out by this debate get really annoyed and try to shut down the debate

I'm sure you're chasing #4. It's always nice to have people use your software. But keep in mind that you cannot escape #5. Those people know they can't really stop you. But they're welcome to try. Just like you're welcome to keep hawking your wares at every opportunity.

Here's where this went wrong. Not when someone asked if this could go into core, but when the several firm No's from people in positions of authority weren't taken as the end of it. If you keep arguing, it sounds like you're still trying to convince people to put it in core, or that it's the "right" way, instead of trying to convince people to give your module a shot. That's a recipe for disaster. It's not the "right" way, nor the "wrong" one, because those are opinions. Instead, here are the facts.

1. It's not supported natively by v8
2. The people who are responsible for maintaining node have decided that it's not something they will support unless it is natively supported by the language

As long as you accept those things, you can debate all you want, as long as you're prepared to ignore the haters.

:Marco

Shimon Doodkin

unread,
Aug 5, 2011, 9:05:37 PM8/5/11
to nodejs
For me this one is better for cloning streams and doing other back-end
stuff:
#1
function pipe(inStream, outStream, callback) {
  (function loop(err) {
    if (err) callback(err);
    else inStream.read(function(err, data) {
      if (err) callback(err);
      else data != null ? outStream.write(data, loop) : callback();
    });
  })();
}


But for end usages, like accessing a database or values from templates, I
prefer not to use callbacks.
They are too low-level for the task.

C M

unread,
Aug 5, 2011, 9:57:56 PM8/5/11
to nod...@googlegroups.com
Can we please stop using this example?  It's one of the ugliest nested callbacks I've ever seen, and I'm pretty sure it was specifically written to look as ugly as possible.

First and foremost, why in God's name are you writing your own .pipe()? Node has that built-in.

inStream.pipe(outStream);

Furthermore, code is only as ugly as you make it.  Don't want to nest your callbacks?  Give them names!  Functions are first-class objects in JS.  You can name them and pass them around like anything else.

// OMG PIPE NOT INVENTED HERE MUST MAKE MY OWN

function NIHpipe(inStream, outStream, cb) {
  inStream.read(readFn);

  function readFn(err, data) {
    if (err) { return cb(err) }
    data ? writeFn(null, data) : cb(null);
  }

  function writeFn(err, data) {
    if (err) { return cb(err) }
    outStream.write(data, function (err) {
      if (err) { return cb(err) }
      inStream.read(readFn); // loop back for the next chunk
    });
  }
}

function cb(err) {
  if (err) {
    console.error('OMG FLAGRANT ERROR.');
  }
}

Greatly increased coder sanity; greatly decreased diagonal march toward the right-hand margin.

Isaac Schlueter

unread,
Aug 5, 2011, 10:02:07 PM8/5/11
to nod...@googlegroups.com
You all sure do like talking about code.

C M

unread,
Aug 5, 2011, 10:08:02 PM8/5/11
to nod...@googlegroups.com
What else am I supposed to do on a Friday night!?

tjholowaychuk

unread,
Aug 5, 2011, 11:33:06 PM8/5/11
to nodejs
I like 'em, but yeah, I doubt they will ever be in core. As far as raw
networking stuff goes the slight extra overhead probably isn't worth it
anyway, but with userland/web apps it's kinda cool. It's just tough to
adopt something like that since it ends up sharding the community like
CoffeeScript.

Isaac Schlueter

unread,
Aug 6, 2011, 1:00:06 AM8/6/11
to nod...@googlegroups.com
Sharding is what makes our community web-scale.

Mark Hahn

unread,
Aug 6, 2011, 2:16:05 AM8/6/11
to nod...@googlegroups.com
You can't have evolution without sharding.

Marcel Laverdet

unread,
Aug 6, 2011, 3:17:32 AM8/6/11
to nod...@googlegroups.com

Responses to a bunch of stuff inline

On Aug 3, 2011, at 11:07 PM, Mikeal Rogers wrote:
this might lead at some point to an alternate platform, which could be very interesting.
I honestly don't think it needs to be a separate platform.. node-fibers and streamline are doing just fine as Node modules.


On Aug 4, 2011, at 11:53 PM, Bruno Jouhier wrote:
* You can do robust exception handling with try/catch/finally (even
around blocks that contain async calls).
This is probably the biggest "pro" for me. The way the Node community has totally dismissed the usefulness of exceptions seems clumsy to me. Exceptions are really really powerful.


On Aug 5, 2011, at 4:56 AM, Scott González wrote:
Or maybe you're better off using some kind of flow control helper that doesn't hide the async nature of the code. Why do people keep comparing raw async code to things like streamline and fibers? If you're going to use a helper layer, then actually compare helper layers.

And just because this is all subjective, having underscores littered throughout your code as a meaningful piece of information is not more readable to me.
Even with flow control helpers I feel the style becomes quite obtuse (subjective) and more verbose (objective). Here is the pipe() example written as best I can with async. You're forced to make each operation its own function which adds needless code bloat. You don't have to juggle `err` as much but you still deal with the added function overhead.

function pipe(inStream, outStream, callback) {
  ~function loop() {
    async.waterfall([function(next) {
      inStream.read(next);
    }, function(data, next) {
      data !== null ? outStream.write(data, next) : callback();
    }], function(err) {
      err ? callback(err) : loop();
    });
  }();
}
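For anyone who wants to poke at these variants, they can be exercised with throwaway in-memory stubs. Here's a self-contained sketch run against the raw callback loop; the stub streams are hypothetical simplifications of real Node streams, just enough to make the loop go:

```javascript
// Hypothetical err-first stub streams, just enough to exercise pipe():
// read(cb) hands back one chunk per call, then null; write(data, cb)
// appends to a sink array. Real Node streams are much richer.
function makeInStream(chunks) {
  return { read: function (cb) { cb(null, chunks.length ? chunks.shift() : null); } };
}
function makeOutStream(sink) {
  return { write: function (data, cb) { sink.push(data); cb(); } };
}

// The raw callback loop from earlier in the thread
function pipe(inStream, outStream, callback) {
  (function loop(err) {
    if (err) return callback(err);
    inStream.read(function (err, data) {
      if (err) return callback(err);
      data != null ? outStream.write(data, loop) : callback();
    });
  })();
}

var sink = [];
var result;
pipe(makeInStream(['a', 'b', 'c']), makeOutStream(sink), function (err) {
  result = err || sink.join('');
});
console.log(result); // → 'abc'
```

The stubs call back synchronously to keep the sketch short; with real streams every step is a tick later, but the control flow is identical.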


On Aug 5, 2011, at 9:10 AM, Jorge wrote:
Marcel's fibers are Pure Win.
=D thank you


On Aug 5, 2011, at 12:19 PM, Joshua Holbrook wrote:
Yeah, that's definitely true. On the other hand, I don't really have
any exposure to this alternate approach, so the callback approach
happens to make more sense to me.
At the risk of sounding mean, if everyone had this way of thinking we'd all still be programming with punchcards. Taking 30 minutes to understand a new paradigm saves you countless hours down the road. This isn't true of just fibers but any programming concept.. subroutines, classes, closures, for loops, objects, and so on. Refusing to explore something because it's unfamiliar and scary is foolish as far as I'm concerned.


On Aug 5, 2011, at 3:16 PM, Bruno Jouhier wrote:
You are asking the second question because you do not TRUST the magic.
So, you need to also understand the HOW, to be reassured that the
magic actually works . The problem is here. Once you get confident
that the magic works, you won't need to worry about the HOW any more,
you will just let the magic operate.
This is really insightful actually. I guess I've kind of ignored this because I ended up with a somewhat intimate understanding of coroutines. I think it would be helpful if I (or anyone else) were to diagram the differences between traditional worker-based applications, NodeJS applications, and coroutines.


On Aug 5, 2011, at 5:44 PM, Marco Rogers wrote:
Here's where this went wrong. Not when someone asked if this could go into core. But that when they received several firm No's from people in positions of authority, that should really be it. If you keep arguing, then it sounds like you're still trying to convince people to put it in core or that it's the "right" way, instead of trying to convince people to give your module a shot. That's a recipe for disaster. It's not the "right" way, nor the "wrong" one, because those are opinions.
The Node core argument is a straw man. I don't think fibers should be in Node core any more than I think MySQL should be in Node core. Node core consists of nothing but bare metal; I understand, appreciate, and agree with that. The "does this belong in node core" argument is an argument that shouldn't be happening and serves only as a vehicle to flame these threads.


On Aug 5, 2011, at 6:57 PM, C M wrote:
First and foremost, why in God's name are you writing your own .pipe()? Node has that built-in.
I think it's a reasonable case study.. It's a common function that everyone understands the constraints of, and demonstrates basic callback patterns. It's the same reason there's so many examples of printing "hello world".


On Aug 5, 2011, at 6:57 PM, C M wrote:
// OMG PIPE NOT INVENTED HERE MUST MAKE MY OWN
<<16 lines of code>>
This is 4x more code than the streamline & fibers version.

Bruno Jouhier

unread,
Aug 6, 2011, 5:03:56 AM8/6/11
to nodejs
Marcel,

You wrapped it up brilliantly. Full agreement on all points.

Also, this thing about us wanting to push things into core or
evangelizing at all costs makes me smile. I see streamline as a
stopgap solution for async programming which is completely independent
from node (generated code is vanilla JS which does not have any
strings attached). Once harmony takes over, streamline will just die
and I'll feel better because I'd rather base my stuff on a standard,
highly optimized runtime than on a hack that I wrote.

Bruno

Jann Horn

unread,
Aug 6, 2011, 5:35:33 AM8/6/11
to nod...@googlegroups.com
Time to heat up the coffee debate, too! :D

2011/8/6 C M <cha...@charlieistheman.com>:


> // OMG PIPE NOT INVENTED HERE MUST MAKE MY OWN
> function NIHpipe(inStream, outStream, cb) {
>   inStream.read(readFn);
>   function readFn(err, data) {
>     if (err) { return cb(err) }
>     data ? writeFn(null, data) : cb(null);
>   }
>   function writeFn(err, data) {
>     if (err) { return cb(err) }
>     outStream.write(data);
>   }
> }
> function cb(err) {
>   if (err) {
>     console.error('OMG FLAGRANT ERROR.');
>   }
> }

# OMG JS is so verbose must write it shorter
pipe = (in, out, cb) ->
  do rwChunk = -> in.read (err, data) ->
    return cb err if err or not data
    out.write data, rwChunk

Isaac Schlueter

unread,
Aug 6, 2011, 5:38:03 AM8/6/11
to nod...@googlegroups.com
> On Aug 4, 2011, at 11:53 PM, Bruno Jouhier wrote:
> This is probably the biggest "pro" for me. The way the Node community has
> totally dismissed the usefulness of exceptions seems clumsy to me.

How have we dismissed the "usefulness of exceptions"?

I don't think anyone close to node honestly believes that it's a great
thing that "throw" loses state. We all hate that.

The solution is long stack traces and small processes that can crash
completely on failure without taking down your application. Even with
fibers, and wrapping everything in a try/catch, I would bet on there
being ways to tank the entire program in some terrible way, so you end
up needing this anyway. Also, the lack of post-hoc debugging
facilities makes it really hard to get a handle on bugs that only show
up in production.

If half the energy spent on flow control were spent on process
management, and IPC systems, and state capturing, it would be a more
interesting world. The number of people involved in the conversation
should be an indicator that there's no real puzzle here, just
something to talk around.

>> And just because this is all subjective, having underscores sprinkled
>> throughout your code as a meaningful piece of information is much more
>> readable
>
> With flow control helpers I feel the style becomes less obtuse
> (subjective) and verbose (objective). Here is the pipe() example


> written as best I can

The most annoying thing about this discussion is the "let me show you
how you write your programs, and then tell you how I don't like it."
It has never changed anyone's mind ever.


> I think it's a reasonable case study.. It's a common function that everyone
> understands the constraints of, and demonstrates basic callback patterns.
> It's the same reason there's so many examples of printing "hello world".

Node's Stream.pipe is hardly a "hello world". It handles a lot of
issues; back-pressure, multiple inputs, leaving the destination open
or shut based on a config, completely cleaning up after itself, and so
on. Have a look:
https://github.com/joyent/node/blob/master/lib/stream.js#L31-163

It turns out you actually have to support all this stuff in the real
world. The pipe(src, dest, cb) pattern is a stepping stone that
doesn't meet the real need, which is why we have the much more elegant
Stream.pipe().

The second most annoying thing about this (apparently immortal) thread
is the constant rehashing of a function that is woefully incorrect and
lacking in the first place.


Perhaps we can address that one with a real challenge?

Write an async `rm -rf` using your favorite fiber/promise/flow-control/coro lib.

Requirements:

1. Handle EBUSY by trying that file again in 100ms, then 200ms later
if encountered again, then 300 after that, then failing.
2. Handle EMFILE by backing off in a gradually increasing timeout
amount, and then set the EMFILE-timeout value to 0 when it is
successful.

Here's a reference implementation, where the entire flow control
utility, and callback-based implementation, with comments, is 81 lines
of code: https://github.com/isaacs/rimraf/blob/master/rimraf.js
(including a sync version).
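Even just requirement 1 on its own is a decent warm-up. A sketch of the EBUSY schedule in plain callback style (`op` stands in for the fs call; `delays` is parameterized so the sketch is easy to test; names are made up for illustration):

```javascript
// Retry an err-first operation on EBUSY: wait delays[0] ms before the
// second attempt, delays[1] before the third, and so on; give up (and
// report the error) once the schedule is exhausted. For the challenge
// the schedule would be [100, 200, 300].
function retryOnBusy(op, delays, cb) {
  var attempt = 0;
  (function tryOnce() {
    op(function (err) {
      if (!err) return cb(null);
      if (err.code !== 'EBUSY' || attempt >= delays.length) return cb(err);
      setTimeout(tryOnce, delays[attempt++]);
    });
  })();
}
```

You'd call it like `retryOnBusy(function (cb) { fs.unlink(file, cb); }, [100, 200, 300], done)`. The EMFILE requirement would wrap the same idea around a shared timeout value that ratchets up on failure and resets to 0 on success.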


For the record, I don't care how you write your programs. If they do
something useful, and you're going to keep maintaining them, then
that's fantastic.

mscdex

unread,
Aug 6, 2011, 5:38:17 AM8/6/11
to nodejs
On Aug 6, 5:35 am, Jann Horn <jannh...@googlemail.com> wrote:
> Time to heat up the coffee debate, too! :D

s/debate//

Jann Horn

unread,
Aug 6, 2011, 5:47:30 AM8/6/11
to nod...@googlegroups.com
2011/8/6 Jann Horn <jann...@googlemail.com>:

> Time to heat up the coffee debate, too! :D

Aww, damn, I should think before trolling.

> pipe = (in, out, cb) ->
>  do rwChunk = -> in.read (err, data) ->
>    return cb err if err or not data
>    out.write data, rwChunk

Actually, two more lines (or one if you don't count the helper):

captureError = (cb, func) -> (err, args...) ->
  if err then cb(err) else func.apply @, args

pipe = (in, out, cb) ->
  do rwChunk = captureError cb, ->
    in.read captureError cb, (data) ->
      return cb null if not data
      out.write data, rwChunk

Jorge

unread,
Aug 6, 2011, 7:57:33 AM8/6/11
to nod...@googlegroups.com

Aaaarghhh. What's that, brainfuck deluxe ?
--
Jorge.

Jann Horn

unread,
Aug 6, 2011, 8:32:40 AM8/6/11
to nod...@googlegroups.com
2011/8/6 Jorge <jo...@jorgechamorro.com>:

> Aaaarghhh. What's that, brainfuck deluxe ?

I can explain it in detail. :)

> On 06/08/2011, at 11:47, Jann Horn wrote:
>> captureError = (cb, func) -> (err, args...) -> if err then cb(err) else func.apply @, args

captureError is a helper function that wraps callbacks. If the wrapped
callback gets called with an error, it will call the `cb` function, else it
will call the original callback, but without the "err" argument (it's falsy
anyway).


>> pipe = (input, out, cb) ->

Definition of "pipe", "pipe" takes three arguments.

>>   do rwChunk = captureError cb, ->

Define rwChunk and call it afterwards ("do"). The function is wrapped
with "captureError", so we don't have to worry about write errors
anymore.

>>    input.read captureError cb, (data) ->

Read a chunk from input with the following callback (again protected
by "captureError"):

>>       return cb null if not data

if (!data) return cb(null);

>>       out.write data, rwChunk

out.write(data, rwChunk);
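Or, putting the whole thing together as plain JavaScript for the CoffeeScript-averse (a sketch; the stub streams at the bottom are hypothetical, just to show it runs):

```javascript
// Wrap an err-first callback: route any error to cb, otherwise call
// func with the remaining arguments (the falsy err is dropped).
function captureError(cb, func) {
  return function (err) {
    if (err) return cb(err);
    func.apply(this, Array.prototype.slice.call(arguments, 1));
  };
}

function pipe(input, out, cb) {
  (function rwChunk() {
    input.read(captureError(cb, function (data) {
      if (data == null) return cb(null);
      out.write(data, captureError(cb, rwChunk));
    }));
  })();
}

// Hypothetical synchronous stub streams, just to show it runs:
var chunks = ['x', 'y'];
var sink = [];
var input = { read: function (cb) { cb(null, chunks.length ? chunks.shift() : null); } };
var out = { write: function (data, cb) { sink.push(data); cb(null); } };
var done = 'pending';
pipe(input, out, function (err) { done = err; });
console.log(sink.join(''), done); // → xy null
```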

Marcel Laverdet

unread,
Aug 6, 2011, 8:46:47 AM8/6/11
to nod...@googlegroups.com

On Aug 6, 2011, at 2:03 AM, Bruno Jouhier wrote:
Also, this thing about us wanting to push things into core or
evangelizing at all costs makes me smile. I see streamline as a
stopgap solution for async programming which is completely independent
from node (generated code is vanilla JS which does not have any
strings attached). Once harmony takes over, streamline will just die
and I'll feel better because I'd rather base my stuff on a standard,
highly optimized runtime than on a hack that I wrote.
I see fibers in a similar fashion. I've been thinking ahead about how I will restructure my code after generators are ready in v8 and am pretty confident it won't be a huge leap from fibers. On the other hand fibers could still be useful in a post-generator world, for instance Ruby has both.


On Aug 6, 2011, at 2:47 AM, Jann Horn wrote:
Actually, two more lines (or one if you don't count the helper):
Do I get to use the fact that it took you two iterations to get the function right as evidence that it was non-trivial to build this? Also as far as I can tell your code doesn't parse because `in` is a reserved word. Correct me if I'm wrong; I don't do a lot of CoffeeScript.

Anyway no doubt CoffeeScript is powerful but all you're doing is replacing the same logic with a more compact version. You're packing just as many metaphors and expressions into a smaller space; you're saving **bits** but not **human processing time**. Let's rewrite it using fibers in CoffeeScript:

pipe = (from, to) -> out.write buf while buf = from.read()

One line!


On Aug 6, 2011, at 2:38 AM, Isaac Schlueter wrote:
How have we dismissed the "usefulness of exceptions"?

I don't think anyone close to node honestly believes that it's a great
thing that "throw" loses state.  We all hate that.
Er, maybe I misunderstood, but you personally called try/catch "ugly" in this thread a few months ago:

Node has gone to great lengths to create a convention which preserves robust error handling.. particularly the fact that `err` is the first parameter to every callback is nothing short of a brilliant decision made at an early stage in Node's lifetime. But the fact that we can't use the built-in error-handling facilities of the language is saddening.


On Aug 6, 2011, at 2:38 AM, Isaac Schlueter wrote:
The solution is long stack traces and small processes that can crash
completely on failure without taking down your application.  Even with
fibers, and wrapping everything in a try/catch, I would bet on there
being ways to tank the entire program in some terrible way, so you end
up needing this anyway.  Also, the lack of post-hoc debugging
facilities makes it really hard to get a handle on bugs that only show
up in production.
I've had good luck with a top level try block in my request handler that sends down a 500 error, and then some strategically-placed try blocks around common spontaneous errors with graceful recoveries.
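Roughly, a synchronous sketch of that top-level pattern (with fibers the same try would also span the yields, which is the point; `res` here is a hypothetical stubbed response object, not Node's real one):

```javascript
// Top-level try in a request dispatcher: an uncaught error in the
// handler becomes a 500 response instead of killing the process.
function dispatch(handler, req, res) {
  try {
    handler(req, res);
  } catch (e) {
    res.statusCode = 500;
    res.end('Internal Server Error');
  }
}

// A handler with a bug in it:
function buggyHandler() { throw new Error('boom'); }

// Hypothetical stubbed response object:
var res = { statusCode: 200, body: null, end: function (s) { this.body = s; } };
dispatch(buggyHandler, {}, res);
console.log(res.statusCode); // → 500
```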


On Aug 6, 2011, at 2:38 AM, Isaac Schlueter wrote:
The most annoying thing about this discussion is the "let me show you
how you write your programs, and then tell you how I don't like it."
It has never changed anyone's mind ever.
Hello developers,

Look at your program, now look at mine, now look at yours. Now back to mine. Sadly, your program isn't mine. But if you started using Fibers it could look like mine. Look again, your program is now diamonds. Etc.


On Aug 6, 2011, at 2:38 AM, Isaac Schlueter wrote:
Node's Stream.pipe is hardly a "hello world".  It handles a lot of
issues; back-pressure, multiple inputs, leaving the destination open
or shut based on a config, completely cleaning up after itself, and so
on.  Have a look:
https://github.com/joyent/node/blob/master/lib/stream.js#L31-163
Straw man! Straw man, straw man, straw man!!

We understand that pipe is a complex and robust function. I respect the amount of thought and consideration that you and Ryan have put into it, and I'm not trying to defile the integrity of your work.

However, when boiled down to a **naive solution** it is an excellent case study in design.


On Aug 6, 2011, at 2:38 AM, Isaac Schlueter wrote:
Perhaps we can address that one with a real challenge?

Write an async `rm -rf` using your favorite fiber/promise/flow-control/coro lib.

Requirements:

1. Handle EBUSY by trying that file again in 100ms, then 200ms later
if encountered again, then 300 after that, then failing.
2. Handle EMFILE by backing off in a gradually increasing timeout
amount, and then set the EMFILE-timeout value to 0 when it is
successful.
Why stop there!? We should make the reference implementation a distributed rtree index service which can operate on both euclidean and spherical coordinate systems.

I just don't see what this has to do with what we're talking about, or what you're trying to prove. Are you asserting that the expressiveness of streamline and fibers somehow falls apart once any amount of complexity is added?

I don't get where all this hostility is coming from dude. I seriously have nothing but respect for you and the Joyent team. Nearly everything in Node is just the way I would have done it myself. I'm not saying that the Node way is wrong; I don't even see the two design patterns as competing, I see them complementing each other. I think there's a time and a place for each. This thread started out positive but has taken a turn for the worse. I just wanted to talk about some software that I'm really excited about without starting a hate party.

Jorge

unread,
Aug 6, 2011, 9:11:02 AM8/6/11
to nod...@googlegroups.com

Thanks, but still... Aaaarghhh :-P (I ♥ JS)
--
Jorge.

Branko Vukelić

unread,
Aug 6, 2011, 9:12:53 AM8/6/11
to nod...@googlegroups.com
On 2011-08-06 05:46 -0700, Marcel Laverdet wrote:
> pipe = (from, to) -> out.write buf while buf = from.read()

Was it supposed to be s/out/to/ or am I being an idiot again?

> and a place for each. This thread started out positive but has taken a
> turn for the worse. I just wanted to talk about some software that I'm
> really excited about without starting a hate party.

Marcel, don't take it personally. Guys are arguing about fibers, not
you. And there is very little you can do to stop it. I'm sure everybody
caught your initial statement that you weren't looking to push this into
Node core. But there are people out there that would still like to see
it happen, and hence the argument. It's not all that bad, too. I've
learned a thing or two just reading other people's opinions.

So you should just relax and let them fight it out. :)


Message has been deleted

Bruno Jouhier

unread,
Aug 6, 2011, 10:18:46 AM8/6/11
to nodejs


On Aug 6, 11:38 am, Isaac Schlueter <i...@izs.me> wrote:
> > On Aug 4, 2011, at 11:53 PM, Bruno Jouhier wrote:
> > This is probably the biggest "pro" for me. The way the Node community has
> > totally dismissed the usefulness of exceptions seems clumsy to me.
>
> How have we dismissed the "usefulness of exceptions"?
>
> I don't think anyone close to node honestly believes that it's a great
> thing that "throw" loses state.  We all hate that.
>
> The solution is long stack traces and small processes that can crash
> completely on failure without taking down your application.  Even with
> fibers, and wrapping everything in a try/catch, I would bet on there
> being ways to tank the entire program in some terrible way, so you end
> up needing this anyway.  Also, the lack of post-hoc debugging
> facilities makes it really hard to get a handle on bugs that only show
> up in production.
>
> If half the energy spent on flow control were spent on process
> management, and IPC systems, and state capturing, it would be a more
> interesting world.  The number of people involved in the conversation
> should be an indicator that there's no real puzzle here, just
> something to talk around.

You lost me Isaac.

The big difference between try/catch and pure API schemes
(callback(err)) is that try/catch deals with expected exceptions (the
ones you throw explicitly) as well as UNEXPECTED ones (the ones you
get because someone left a bug somewhere deep inside an obscure module
that gets invoked only every other month with the special data
combination that makes it fail).

And, if you have a good discipline (Bertrand Meyer's design by
contract is now old literature but probably still a good place to
start), you can do very robust EH with try/catch, with all the
contextual information that you need. I've been using this for more
than 20 years in big projects and it just works great! It is a lot less
fragile than API-based solutions because it deals with all the
unexpected cases (including someone misusing the APIs).

I'm using my good old EH recipes in my current node projects
(streamline, like fibers, gives you a robust try/catch mechanism) and
it makes a real difference (compared to all the API alternatives).

I see a trend here, that I would summarize as follows:
async is new (not really true) => we need to throw away everything
we have been doing before and start from scratch: reinvent flow
control, reinvent error handling, etc.

If you start from scratch, it is going to take a number of iterations
before you reach something really good. So maybe you should not throw
away all the old wisdom at once. IMO, classical exception handling
(used in a disciplined way) is a MUST HAVE (until something better
comes along).
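The expected-vs-unexpected distinction fits in a few lines (a contrived sketch; `brokenParse` stands in for that buggy deep module, and it only works here because everything is synchronous; restoring this across async boundaries is exactly what streamline and fibers are for):

```javascript
// An err-first API only reports the errors its author anticipated.
// A bug that throws bypasses the err channel entirely; only an
// enclosing try/catch sees it.
function brokenParse(s, cb) {
  if (s === '') return cb(new Error('empty input')); // expected error
  return cb(null, s.definitely.not.there);           // unexpected bug
}

var caught = null;
try {
  brokenParse('hello', function (err, val) {
    if (err) return; // the err channel never fires for the bug above
  });
} catch (e) {
  caught = e.name;
}
console.log(caught); // → 'TypeError'
```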

Bruno

Isaac Schlueter

unread,
Aug 6, 2011, 1:08:50 PM8/6/11
to nod...@googlegroups.com
Please stop taking things so personally.

> Er, maybe I misunderstood but you personally called try \ catch "ugly" in
> this thread a few months ago:
> http://groups.google.com/group/nodejs/browse_thread/thread/5bd51524229c27b4

Yes, I think that try/catch is an ugly pattern. It is a concealed
goto, and I had problems with it long before ever coming to node.

That doesn't mean that I think our current setup is sufficient. When
an exception is thrown, the process should crash completely and
cleanly, and it should not bring down the app, and the resulting data
should be sufficient to trace through and debug the problem.

This is one thing that Erlang and C do right, and JavaScript does not.
I think that 2012 will produce some interesting solutions to this
issue.


> Straw man! Straw man, straw man, straw man!!

I don't get it...?

> We understand that pipe is a complex and robust function. I respect the
> amount of thought and consideration that you and Ryan have put into it, and
> I'm not trying to defile the integrity of your work.
> However, when boiled down to a **naive solution** it is an excellent case
> study in design.

I wasn't claiming that you don't appreciate the work or whatever.
(And it was mostly Mikeal and Felix who wrote that, I believe.)

I'm saying that the thing you're using is not a good case study in
design, because it's a bad program. I'm not interested in tools that
make bad programs arbitrarily pretty. If your tool can do more, I'd
love to see it.


> Why stop there!? We should make the reference implementation a distributed
> rtree index service which can operate on both euclidean and spherical
> coordinate systems.

That would also be interesting, though I don't have a use for it. Is
that something that can be done in a single node module with 80 or so
lines of commented callback-based JavaScript and no external
libraries? Is that something that's used in production, and can be
written from scratch in about an hour?

If you're saying that rimraf is an unreasonable challenge, then I'm
kind of confused, because I thought the point of fibers is to make
code easier to write, and it was pretty easy already. I'm not going
to write it badly as an amateur in a pattern I don't fully
understand, and then conclude that it's harder, or that the clumsy
result is necessarily uglier. I'd like to see what a fibers expert
can do to this problem to make it beautiful, less verbose, etc.


> I just don't see what this has to do with what we're talking about, or what
> you're trying to prove. Are you asserting that the expressiveness of
> streamline and fibers somehow falls apart once any amount of complexity is
> added?

I'm not *asserting* anything. I'm asking for a better example,
because I don't really see the point of any of this. I've seen a
naive broken pipe(src, dest, cb) written 4 different ways in this
thread. They *ALL* suck even worse than util.pump, which is terrible.
Show me a side-by-side of a program that does some relatively
interesting asynchronous IO juggling, in a well-understood problem
domain, in less than 100 lines.


> I don't get where all this hostility is coming from dude.

> wanted to talk about some software that I'm really excited about without
> starting a hate party.

You have me all wrong! There's no hostility here. I'm not disparaging
your respect or anything like that. I'd like to understand the value
in this thing that some people seem to see value in, but the way it's
being presented and debated, any value that *might* be there is
actively obscured.

It would be a lot more interesting for everyone to talk about the
shortcomings of the system they have chosen, rather than the
shortcomings of the system that they don't actually use in any
meaningful way.


On Sat, Aug 6, 2011 at 07:18, Bruno Jouhier <bjou...@gmail.com> wrote:
> The big difference between try/catch and pure API schemes
> (callback(err)) is that try/catch deals with expected exceptions (the
> ones you throw explicitly) as well as UNEXPECTED ones (the ones you
> get because someone left a bug somewhere deep into an obscure module
> that gets invoked only every other month with the special data
> combination that makes it fail).

I'm suggesting that it'd be better for those failures to tank the
whole process, leaving you with enough information to debug the
problem after the fact and easily figure it out. Right now, that's
*really* hard in node. If a bizarre error happens deep in the stack,
it's not *that* much better to be able to catch it with a try/catch
than with a process.on("unhandledException") handler. You're saving,
what, one request that gets a 500 error instead of a dangling
connection?

> If you start from scratch, it is going to take a number of iterations
> before you reach something really good. So maybe you should not throw
> away all the old wisdom at once. IMO, classical exception handling
> (used in a disciplined way) is a MUST HAVE (until something better
> comes along).

Actually, what I was really suggesting was something that would fill
the usecase of a core dump, which is as old as dirt. Even if you use
fibers, you're still going to find yourself in situations where
something that *can't* happen *does* happen, and it's usually not in
your JavaScript.

From what I've seen fibers solve the easy problem, not the hard one.
That's fine, easy problems need to be solved too, but since it's a
roughness that I am well calloused against, it doesn't really inspire
me. Prove me wrong, please!


On Sat, Aug 6, 2011 at 06:15, Jorge <jo...@jorgechamorro.com> wrote:
> On the one hand you've got a bunch a crazy kamikazes that dare to comment on what they think is a Good Thing™, on the other you've got a bunch of Joyent employees annoyed by the mere existence of the conversation, inviting them to shut up (e.g. that "never" thread, you know).

Please don't bring Joyent into this. So far, I'm the only Joyent
employee to comment in this thread, and I don't speak for my employer.
Thanks.

I'm not annoyed by the existence of fibers, but yes, I am annoyed by
the pointlessness of this thread.


> If I were an ordinary node user I would prefer to stand by the side of the ones that rule (which, in the case of node, is *not* the community, but Joyent) and I would just shut up.

Actually, it's Ryan.

And I'm not asking you to shut up. I'm asking you to communicate more
effectively.


> It's not a matter of tastes. Fibers provide a bunch of real benefits, none of which is subjective.

Those have not been sufficiently demonstrated to be worth the
trade-offs. Take the rimraf challenge, let's see it.


>> The second most annoying thing about this (apparently immortal) thread
>> is the constant rehashing of a function that is woefully incorrect and
>> lacking in the first place.
>

> Is that an argument against fibers ?

No, it's an argument against this discussion thread. All the fiber
gives people verbal diarrhea ;)


Bottom line, you don't actually have to convince anyone of anything.
We can write programs in different ways, and that's great. I'm only
asking because I'm curious, and can't seem to find the answers in this
thread.

Marco Rogers

Aug 6, 2011, 1:23:34 PM
to nod...@googlegroups.com

This is probably the biggest "pro" for me. The way the Node community has totally dismissed the usefulness of exceptions seems clumsy to me. Exceptions are really really powerful.


The way Bruno and Marcel have totally dismissed the usefulness of callbacks seems clumsy to me. Callbacks are really really powerful.
 

At the risk of sounding mean, if everyone had this way of thinking we'd all still be programming with punchcards.

What of those of us who totally understand it and still choose not to use it? Would you like to insult us as well? I'm sure it'll work out well for you.
 
The Node core argument is a straw man. I don't think fibers should be in Node core any more than I think MySQL should be in Node core. Node core consists of nothing but bare metal; I understand, appreciate, and agree with that. The "does this belong in node core" argument is an argument that shouldn't be happening and serves only as a vehicle to flame these threads.

That's exactly what I said. What you seem to miss is that continuing to "debate" after that has been thrown out serves to escalate the animosity because it is conflated with the original debate. Yes this is silly and shouldn't be the case. But it's very difficult to avoid. You're seeing the effects right now as even you have started to get a little defensive and belligerent. And you're usually unflappable. That's why some people are trying to stop this debate. Because it turns people into trolls.
 

On Aug 5, 2011, at 6:57 PM, C M wrote:
// OMG PIPE NOT INVENTED HERE MUST MAKE MY OWN
<<16 lines of code>>
This is 4x more code than the streamline & fibers version.

Ooooooohhhh, now I get it. That was the final argument I needed to realize that I'm an idiot for using callbacks. I didn't get it when you said it the 100 other times.

Seriously though, I think this is the issue that you and Bruno could be a little more cognizant of. Most people here are professional developers. We're smart and we get good work done doing it our way every single day. You're welcome to try to change our minds. But the longer you do so, the more it becomes clear that you have some contempt for the decision we've made that differs from yours.

You can see this by examining the posts here that actually try to further the debate (rather than just trolling). They all look the same and Isaac hit it on the head. "Here's your code <example>. Clearly that's (stupid|unreadable|too long|confusing). Here's mine <example>. If you used mine, you'd be making a smarter decision as a programmer." Go ahead and try to convince us that you're not saying that. But you are. There's no other way to look at it. Yes that's our problem, but if you don't respect that fact, then it'll become yours.

That's my best explanation of why you're seeing so much animosity. It won't go away. You can't reason your way through it. You can accept it and continue to push forward in the face of opposition, or you can avoid the debate. Those are your only options.

On a different note, if I were to take the whole of your arguments to a logical conclusion, I'd wonder why you're not using something like Erlang. And I've asked others this. It has all the features you seem to want built in. It's a little difficult to read but according to you we should all spend the time to learn it because it's better and it would be "foolish" not to. You could counter that it's not as readable. But that goes against our "just learn it" mantra as well. And the Erlang folks would certainly disagree. Your only recourse then is to say you chose node because you liked javascript better, because of the community, because you're more comfortable with it. And you're not going to choose Erlang even in the face of overwhelming evidence. And before you know it, you've developed something called empathy, where you can see where we're coming from. Welcome. We're glad to have you.

:Marco

Bruno Jouhier

Aug 6, 2011, 2:45:59 PM
to nodejs
> > The big difference between try/catch and pure API schemes
> > (callback(err)) is that try/catch deals with expected exceptions (the
> > ones you throw explicitly) as well as UNEXPECTED ones (the ones you
> > get because someone left a bug somewhere deep into an obscure module
> > that gets invoked only every other month with the special data
> > combination that makes it fail).
>
> I'm suggesting that it'd be better for those failures to tank the
> whole process, leaving you with enough information to debug the
> problem after the fact and easily figure it out.  Right now, that's
> *really* hard in node.  If a bizarre error happens deep in the stack,
> it's not *that* much better to be able to catch it with a try/catch
> than with a process.on("unhandledException") handler.  You're saving,
> what, one request that gets a 500 error instead of a dangling
> connection?

Depends on the size of your software and the rate at which such
exceptions will occur, and the expectations of your users. Typical LoB
applications tend to run in the millions of lines of code which are
written by people with different skill levels (some under our direct
control, some not). So the likelihood of having an error in some
obscure place is not that low. It is important to have at least some
basic safeguard and a system that will recover smoothly from the most
stupid mistakes (generating a meaningful stack trace so that the
problem can be identified and debugged offline and returning a 500
code). Try/catch gives us that.

Maybe the coredump / reboot is really the way for the future. But
you'll need really good state management then. Until this is in place,
try/catch is a simple, proven mechanism that allows us to recover
smoothly most of the time (of course shit still happens).

>
> > If you start from scratch, it is going to take a number of iterations
> > before you reach something really good. So maybe you should not throw
> > away all the old wisdom at once. IMO, classical exception handling
> > (used in a disciplined way) is a MUST HAVE (until something better
> > comes along).
>
> Actually, what I was really suggesting was something that would fill
> the usecase of a core dump, which is as old as dirt.  Even if you use
> fibers, you're still going to find yourself in situations where
> something that *can't* happen *does* happen, and it's usually not in
> your JavaScript.
>
> From what I've seen fibers solve the easy problem, not the hard one.
> That's fine, easy problems need to be solved to, but since it's a
> roughness that I am well calloused against, it doesn't really inspire
> me.  Prove me wrong, please!

Yes, there will also be crashes in the low level C/C++ libraries.
These are going to be really problematic (security exploits?). But the
frequency will be much lower. I can live with the system that just
crashes when this happens (from a security standpoint this might be
better than continuing in an unknown state), but having the service go
down every time a stupid line of JS gets hit is just not acceptable
for us.

This all comes back to my previous post about people doing different
things with node and having different requirements. Again, I am not
saying that everyone should stay away from callbacks and use CPS tools
or fibers (I also write some low level things with callbacks once in a
while). What I am saying is that there are some people who are using
node in scenarios where callbacks are just too low level, and where
classical try/catch EH is a MUST HAVE (given the current state of
things on the EH side).

I'm not telling you what approach YOU should use. On the contrary, I'm
more than happy that the core team goes the extra mile of writing
everything close to the metal because this gives all of us the most
efficient core possible. What I am just saying is that there are some
people (like us) who are building higher level stuff on top of the
core and who need higher level abstractions to be in their comfort
zone.

What surprises me is the level of aggressiveness against us (people
who are proposing alternatives to callbacks). This is as if there was
a golden rule saying "thou shalt program node with callbacks" and that
we are being heretics by proposing something else. If I transpose this
to the traditional OS / application world, it would be as if the OS
guys would get into a fight with the application guys because they
don't write their stuff in C and assembly. This does not make sense
and I just hope that we'll get over it.

Bruno

Mikeal Rogers

Aug 6, 2011, 3:03:38 PM
to nod...@googlegroups.com
Core's pipe does more than *just* push the data, there are other events propagated that you *need* to have propagate. 

Mikeal Rogers

Aug 6, 2011, 3:06:08 PM
to nod...@googlegroups.com

On Aug 6, 2011, at 12:17 AM, Marcel Laverdet wrote:


Responses to a bunch of stuff inline

On Aug 3, 2011, at 11:07 PM, Mikeal Rogers wrote:
this might lead at some point to an alternate platform, which could be very interesting.
I honestly don't think it needs to be a separate platform.. node-fibers and streamline are doing just fine as Node modules.


That just isn't true. The assumptions I make about state mutation in a context actually change with fibers and can be mutated. That means it's not interoperable.

It's like saying Twisted isn't its own platform. It is, because everything in Twisted has to stay in Twisted and can't effectively interop with anything else in Python where IO is involved.

-Mikeal

Mikeal Rogers

Aug 6, 2011, 3:07:48 PM
to nod...@googlegroups.com

On Aug 6, 2011, at 2:03 AM, Bruno Jouhier wrote:

> Marcel,
>
> You wrapped it up brilliantly. Full agreement on all points.
>
> Also, this thing about us wanting to push things into core or
> evangelizing at all costs makes me smile.

Marcel never advocated this going into core. YOU DID!

You also brushed aside core people saying that it would "never happen".

-Mikeal

Mikeal Rogers

Aug 6, 2011, 3:17:43 PM
to nod...@googlegroups.com
This is bullshit dude.

You took something we already did in core, which is built in to the subjects of the things you're sending to this function, and implemented it again in a terrible way that actually misses the whole point of Stream.pipe() which is to handle back pressure as well as moving data forward.

Streams and pipe() are fundamental to node, they are a core level API that is being built to help deal with IO in a performant and simple way.

You don't get to say "we could implement that API easier" because that API doesn't even make sense with your alternate IO system. Also, it's already built, everyone writing node programs can use it today.

pipe() will be the way you move all data in node. In node 0.5.3 you can actually do this with my request library.

function (req, resp) {
req.pipe(request.get('http://www.blah.com/image.png')).pipe(resp)
}

That's a one line proxy that handles all the headers the way you want.

Significantly simpler than your fibers example, and it all uses callbacks behind the scenes.

We're figuring out the places where abstractions make sense and what we can do with them.

fibers is not an abstraction, it's an alternative to the way node does IO; it will need its own abstractions. If you want to build those, go ahead, but don't shoot down the API implementations we're giving people in node.js with these foolish examples.

-Mikeal

Mikeal Rogers

Aug 6, 2011, 3:27:01 PM
to nod...@googlegroups.com
On Aug 6, 2011, at 12:17 AM, Marcel Laverdet wrote:

function pipe(inStream, outStream, callback) {
  ~function loop() {
    async.waterfall([function(next) {
      inStream.read(next);              // read one chunk
    }, function(data, next) {
      data !== null ? outStream.write(data, next) : callback(); // null => EOF
    }], function(err) {
      err ? callback(err) : loop();     // recurse for the next chunk
    });
  }();
}


On Aug 5, 2011, at 6:57 PM, C M wrote:
// OMG PIPE NOT INVENTED HERE MUST MAKE MY OWN
<<16 lines of code>>
This is 4x more code than the streamline & fibers version.

Umn..... where are you handling back pressure from the client? That's what "all that code" does in Stream.pipe().

You probably aren't, and if you are you're doing it at the coro layer in fibers, which is in C, which is WAY more lines than Stream.pipe() I'm sure.

If you want to target a core API as being verbose you should try to understand everything that it does first.

-Mikeal



Bruno Jouhier

Aug 6, 2011, 3:56:37 PM
to nodejs
How come? I did not participate in the "never happen" subthread.

Mikeal Rogers

Aug 6, 2011, 4:16:42 PM
to nod...@googlegroups.com
mistaken identity, my bad :)

Jorge

Aug 6, 2011, 5:38:51 PM
to nod...@googlegroups.com
I did because I think fibers are pure win and should (and will, eventually) be in node core. Once they are, you'll start to love them. Mark my words. Cheers,
--
Jorge.

Bruno Jouhier

Aug 6, 2011, 6:05:46 PM
to nodejs
The treeview in Google groups is confusing. Some of the replies look
like replies to my posts, when they are replies to someone else's
post.

My main mistake may have been to bring the pipe example in (and to
call it pipe), out of context. This example makes different
assumptions than the core APIs: one assumption is that the read method
takes care of pausing/resuming under the hood and that the write
method takes care of draining before calling back. So it is not
totally naive and it does take care of the back pressure issue
(probably still in a naive way compared to the core pipe
implementation).

But my point was not to discuss stream APIs here. It was to
illustrate the callback issue with a very simple example: combining a
read operation with a write operation. I should probably have inserted
an asynchronous transform in the middle. Something like:

while (inData = inStream.read(_)) {
  var outData = transform(inData, _);
  outStream.write(outData, _);
}

This kind of pattern is not too far from what typical applications
would do. Maybe the read would not be from a stream but from a
database API or a web service, same thing for the write. So don't take
it too literally. What's interesting here is the pattern, not the
actual APIs used. My point was to compare the callback version of this
loop with the pseudo-sync version.
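That callback version, written by hand against the same kind of APIs, might come out like this (the read/transform/write signatures and the in-memory stand-ins are hypothetical, chosen only to keep the sketch self-contained):

```javascript
// Callback version of: while (inData = inStream.read(_)) { ... }
function copyLoop(inStream, outStream, transform, callback) {
  (function loop() {
    inStream.read(function (err, inData) {
      if (err) return callback(err);
      if (inData === null) return callback(null); // EOF
      transform(inData, function (err, outData) {
        if (err) return callback(err);
        outStream.write(outData, function (err) {
          if (err) return callback(err);
          loop(); // next iteration
        });
      });
    });
  })();
}

// Hypothetical in-memory stand-ins for the stream and transform APIs.
var input = ['a', 'b', 'c'];
var output = [];
var inStream = {
  read: function (cb) { cb(null, input.length ? input.shift() : null); }
};
var outStream = {
  write: function (data, cb) { output.push(data); cb(null); }
};
function upperCase(s, cb) { cb(null, s.toUpperCase()); }

var finalErr;
copyLoop(inStream, outStream, upperCase, function (err) { finalErr = err; });
```

The while/assignment of the pseudo-sync version becomes a named recursive function, and every step has to thread err by hand.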

Another good comparison is just

var resource = allocateResource(_);
try {
  doSomething(resource, _);
} finally {
  resource.release(_);
}

gotcha: doSomething may be buggy (dereferencing null before initiating
the async operation, or after, from a callback) and you want the
finally block to be executed in all cases.

The streamline/fiber source for this is just trivial and will be
familiar to many programmers. The callback version is challenging.
Here is what streamline generates:

var resource;
return allocateResource(__cb(_, function(__0, __1) {
  resource = __1;
  (function(__then) {
    (function(_) {
      __tryCatch(_, function() {
        return doSomething(resource, __cb(_, function() {
          _(null, null, true);
        }));
      });
    }(function(__e, __r, __cont) {
      (function(__then) {
        __tryCatch(_, function() {
          return resource.release(__cb(_, __then));
        });
      }(function() {
        __tryCatch(_, function() {
          if (__cont) {
            __then();
          }
          else {
            _(__e, __r);
          };
        });
      }));
    }));
  }(function() {
    __tryCatch(_, _);
  }));
}));

where __tryCatch and __cb are two small helper functions.

You can probably end up with slightly better code by writing the
callbacks "by hand" but I doubt that you can reach a level that will
be "immediately" understandable.
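Hand-written, one reasonable rendering of the same try/finally looks like this, including the gotcha above (a doSomething that throws synchronously before ever going async). All APIs and mocks here are hypothetical:

```javascript
// Callback version of:
//   try { doSomething(resource, _); } finally { resource.release(_); }
function withResource(allocateResource, doSomething, callback) {
  allocateResource(function (err, resource) {
    if (err) return callback(err);
    function done(opErr, result) {
      // The "finally" part: release runs whether the operation failed or not.
      resource.release(function (relErr) {
        callback(opErr || relErr, result);
      });
    }
    try {
      doSomething(resource, function (err, result) { done(err, result); });
    } catch (e) {
      done(e); // doSomething threw before initiating the async operation
    }
  });
}

// Hypothetical mocks: doSomething dereferences null before going async.
var released = false;
function allocate(cb) {
  cb(null, { release: function (cb2) { released = true; cb2(null); } });
}
function buggyOperation(resource, cb) {
  throw new Error('null dereference');
}
var reported = null;
withResource(allocate, buggyOperation, function (err) { reported = err; });
```

Even this sketch ignores the case where the operation's own callback throws, which is exactly the kind of corner the generated code has to cover.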

Of course, try/finally is probably a worst case for hand-written
callbacks. The pull/push loop is more of an average case, which is why
I chose it (choosing try/catch would have been a bit unfair to start
with). In the previous discussion threads, the comparisons were always
centered around "blocks of async statements" which are just the easy
cases. The pull/transform/push or try/finally patterns are patterns
that application developers have today in their code (not contrived
things that I would have built on purpose to prove a point), so I
think that we need to go beyond the trivial cases to see what the
pseudo-sync tools bring to the table.

And behind all this, there is an important question: should people
learn a new way of programming (callbacks) because of node's async
nature, or are there ways to use node today with less disruption?

My feeling is that a new programming style is going to emerge but that
it is too immature today. I see the callback/event artifacts as an
"assembly language" for node programming. Something higher level will
emerge (don't know what). So, while this is maturing, wouldn't it be
good to have solutions that allow people to get quickly up-to-speed on
node without having to change their habits much. The pragmatic guys
won't be looking for a new experience in programming, they will just
want to use node.js because of its great efficient model. And if they
can do it with minimal disruption, this will be a great win for them.
Isn't this an interesting opportunity for node?

This does not imply that the core team has to do things differently as
all this can be built completely outside of core. If you look at
streamline, you'll see that it does not require anything special from
core, it does not even require any special runtime, it is just a
preprocessor. And it does not change anything with regards to
concurrency semantics and the fact that you can reason about the code
as if it were single-threaded (the only thing is that you have to stop
at the first _ rather than at the end of the function).

Bruno

Isaac Schlueter

Aug 6, 2011, 7:01:05 PM
to nod...@googlegroups.com
On Sat, Aug 6, 2011 at 15:05, Bruno Jouhier <bjou...@gmail.com> wrote:
> while (inData = inStream.read(_)) {
>  var outData = transform(inData, _);
>  outStream.write(outData, _);
> }

This stream API you're writing examples against doesn't exist. There
are many reasons why. It doesn't handle errors, and is
extraordinarily inefficient in the way it delivers backpressure.
There's no way to distinguish between a 0-byte read, an EOF, and an
error. It would fall down immediately in any use case more strenuous
than an example.

I therefore cannot evaluate any of these examples, because they only
exist in magic imagination land.

I look forward to the day when someone posts an example that subjects
their pet coroutine lib to a real use case. Until then, I find it
difficult to take these things any more seriously than the tooth
fairy.

Will Conant

Aug 6, 2011, 10:37:29 PM
to nodejs
Here's the reason I'm using node-fibers for my current project:

I'm using it to hoodwink my team and my clients into building software
that has the potential to really scale.

Among the programmers that I work with, I'm the maladjusted weirdo
that spends all his time thinking about software. I spend the whole
work-week writing software, and then I spend all weekend writing
software too. When I drive my car, I think about software. When I go
for walks, I think about software. Most of the programmers I work
with, on the other hand, are pretty well adjusted people. They put in
40-45 hours of programming per week, and they spend the rest of their
time hanging out with their kids or riding their mountain bikes.

It doesn't matter what I think about the ease and clarity of using a
callback-style API to talk to the database. They won't like it. They
won't do it. They'll make fun of me. They'll convince everyone I'm
crazy, and we'll have to go back to Perl.

With node-fibers, I have tricked them, and here's how: I wrote every
bit of code that spawns fibers or yields control. I hid that stuff in
libraries with nice traditional synchronous APIs. I fibbed a little
bit and told them that each web request, api request, and triggered
process ran in its own thread-like thing that may be suspended at any
moment. I told them to call me the moment the word "mutex" popped into
their heads.

I am well aware that there will be complications. I'm sure they'll
occasionally find ways to make code that behaves very strangely. I'm
sure they'll call me with baffling problems, but, honestly, it's
always been that way. I'm always the guy who understands the whole
framework. Everyone else is usually content to write request handlers
and business logic. They leave the weird stuff to me.

To be fair, I'm dramatizing this story a little bit. Of course, I had
to sell everyone on a modern concurrency solution in the first place,
and, of course, I explained how the whole fibers thing was going to
work. But the crux of the sale was the assurance that they wouldn't
have to think about it--that I would take care of the tricky stuff,
and they'd be able to write nice normal code.

I came up with this plan a long time ago, and I considered many
alternatives to Node/node-fibers. Erlang is too weird. Go's static
types are mediocre for manipulating JSON data structures. Perl/Coro is
still going to leak memory, and it won't work with all the synchronous
CPAN modules. Ruby 1.9 is almost there, but you still have to avoid
all the synchronous modules. The actual runner up was Node/streamline,
but streamline has its underscores and confusing stack traces.

Regardless of the intent of its authors, Node was made for fibers.
They don't need to add fibers to the core. Marcel has done an
excellent job of making fibers work as an npm module, so they're off
the hook.

Without fibers (or something like it), Node wouldn't even be on my
radar. It simply wouldn't solve my problem, which, to summarize, is
marrying modern concurrency with traditional server-side programming.
I doubt the authors even care. From their perspective, I'm using their
software to do something they didn't build it for. I might even be
using it to do something they think is despicable.

However, I'm going to make a prediction: In a couple of years, I'll
have a lot of company. Lots of development teams are composed of nerds
and bigger nerds. The bigger nerds will want to use Node for its
amazing power and flexibility, and they'll want to use fibers to hide
the tricky details from their teammates behind a facade of clean,
straightforward JavaScript.

Cheers!
Will Conant

Jorge

Aug 7, 2011, 4:14:15 AM
to nod...@googlegroups.com
On 06/08/2011, at 21:06, Mikeal Rogers wrote:
>
> The assumptions I make about state mutation in a context actually change with fibers and can be mutated. That means it's not interoperable.


State mutation in a context can happen with callbacks too. A callback has full access and can mutate everything in any of its outer contexts.

When using fibers execution is halted in the middle of a function call, its context is mutable for that reason.

When using callbacks, the contexts persist in the closures created by the callback, and are mutable too.
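A tiny example of that point (the later() function is a hypothetical stand-in for any callback-taking API; it calls back immediately only to keep the sketch self-contained):

```javascript
// A callback has full access to its outer contexts and can mutate them.
var state = { n: 0 }; // outer context, kept alive by the closure

function later(cb) {
  // Hypothetical stand-in for an async API.
  cb();
}

later(function () {
  state.n = 42; // the callback mutates state that outlives the call site
});
```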

So I must be missing something, can you elaborate, please ?
--
Jorge.

Jorge

Aug 7, 2011, 4:35:37 AM
to nod...@googlegroups.com


Oh, please. He's pulling rabbits out of his hat, and all you say is you don't like rabbits. I'm afraid you miss the point.
--
Jorge.

Bruno Jouhier

Aug 7, 2011, 4:51:46 AM
to nodejs
On Aug 7, 1:01 am, Isaac Schlueter <i...@izs.me> wrote:
> On Sat, Aug 6, 2011 at 15:05, Bruno Jouhier <bjouh...@gmail.com> wrote:
> > while (inData = inStream.read(_)) {
> >  var outData = transform(inData, _);
> >  outStream.write(outData, _);
> > }
>
> This stream API you're writing examples against doesn't exist.  There
> are many reasons why.  It doesn't handle errors, and is
> extraordinarily inefficient in the way it delivers backpressure.
> There's no way to distinguish between a 0-byte read, an EOF, and an
> error.  It would fall down immediately in any use case more strenuous
> than an example.

It DOES EXIST (https://github.com/Sage/streamlinejs/blob/master/lib/streams/server/streams_.js).

It handles ALL errors (including unexpected ones).

If the underlying node stream delivers data events with 0-byte
buffers (does it?) it will deliver them as inData = 0-byte-buffer so
you can distinguish this from EOF (inData = null).

There is an option to configure the input buffering with lowMark/highMark
values. So it is not completely stupid either. Now, I cannot
comment on how efficient it is overall because I have not done any
benchmarks.

I think that people have the right to come up with new ideas (like a
pair of async read/write methods to deal with streams) and publish
them. Don't they? I'm not asking for this to be in core. I just
submitted it and I wrote a blog article about it a few months ago
because I think that this is an interesting idea. Before commenting
about it, maybe you should take 5 minutes to look at it and verify
that your assertions hold.

Sorry for the somewhat angry tone here. I have tried to remain polite
and constructive in this debate so far by bringing new ideas to the
table. For example I have been very patient to explain that different
people have different requirements and that this explains why we are
looking for alternatives to callbacks, and why this may even be an
opportunity for node (does this make sense?). Instead of getting
feedback on this important point (much more important than API A vs.
API B), I just get a flame full of incorrect assertions. This is
frustrating.

Bruno

Jorge

Aug 7, 2011, 5:01:20 AM
to nod...@googlegroups.com
On 06/08/2011, at 19:23, Marco Rogers wrote:

>
> This is probably the biggest "pro" for me. The way the Node community has totally dismissed the usefulness of exceptions seems clumsy to me. Exceptions are really really powerful.
>
>
> The way Bruno and Marcel have totally dismissed the usefulness of callbacks seems clumsy to me. Callbacks are really really powerful.


And fibers and continuation passing style are not mutually exclusive.

But while with fibers you can catch any exception perfectly, with CPS you simply CAN'T. It's not possible, because by the time the exception happens, you're already out of the function where you wanted to catch it.

It's simply a true limitation of CPS, it's not Bruno and Marcel dismissing callbacks.
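A minimal sketch of that limitation (setTimeout standing in for any async API):

```javascript
// try/catch cannot catch an error delivered after the try block has exited.
function mightFailLater(cb) {
  setTimeout(function () {
    cb(new Error('boom')); // delivered via the callback, not thrown into the try
  }, 0);
}

var caught = false;
try {
  mightFailLater(function (err) {
    // The surrounding try/catch has already returned by the time this runs;
    // rethrowing err here would reach the event loop, not the catch below.
  });
} catch (e) {
  caught = true; // never reached for the asynchronous error
}
```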


> (snip)


Oh, please.
--
Jorge.

Bruno Jouhier

Aug 7, 2011, 5:13:12 AM
to nodejs
Sounds so familiar! And very well written!

Bruno, another maladjusted weirdo who wakes up in the middle of the
night thinking about software while the others carry on their dreams.

Branko Vukelić

Aug 7, 2011, 5:22:14 AM
to nod...@googlegroups.com

Let me try. And please don't try to turn it into a [f]lame war. I really
only want to give a concerete use case where fibers simplify things:

// PROBLEM: We want to use a validator function that has to make an
// async request to mongo:
var Validator = require('node-validator').Validator;

Validator.prototype.userExists = function (callback) {
  User.findOne({username: this.str}, function (err, user) {
    callback(err, !!user);
  });
};

// #1
Validator.check('f...@bar.com').userExists(function (err, exists) {
  // We get an err object if user doesn't exist + exists == false
});

// #2
require('synchronous')
Validator.prototype.syncGetter('userExists');

var exists = Validator.check('f...@bar.com').userExists();

Before you comment, there's a practical application of the code further
below. Also note that code using syncrhonous is not mine. It was
suggested by node-validator's author as one of the possible solutions
when I asked about async validation.

First, for those of you who are not familiar with how node-validator
works, it basically has a bunch of methods like `notNull`, `isEmail`,
etc, which all check a string stored in `this.str`, and throw an error
if there is one, or return `this` to allow chaining. You can then trap
the thrown error and do whatever you want. Either respond with an error
message, or push the error message into an array and return `this` from
the error handler attached to the `Validator` object to resume the chain.

I've written an Express middleware which does the latter. I attach the
Validator#check() to req.v, and make the error handler update the
req.validation.isValid flag when an error is flung (there's more stuff
in req.validation, that's why it has that long name). So the concrete
real-world example would look like this:

req.v('email', 'Required').notNull().notEmpty().isEmail();


if (!req.validation.isValid) {
  // Send error response
}

// Or do something

Now, suppose one of the conditions was that email belongs to an existing
user:

//----- in separate module -------
require('synchronous');

Validator.prototype.userExists = function (callback) {
  User.findOne({email: this.str}, function (err, user) {
    callback(err, !!user);
  });
};

Validator.prototype.syncGetter('userExists');

//----- in request handler -------
req.v('email', 'Required').notNull().notEmpty().
  isEmail().userExists(); // Doesn't change the syntax

I think this is pretty good.

Isaac Schlueter

Aug 7, 2011, 5:26:57 AM
to nod...@googlegroups.com
On Sun, Aug 7, 2011 at 01:51, Bruno Jouhier <bjou...@gmail.com> wrote:
> It DOES EXIST (https://github.com/Sage/streamlinejs/blob/master/lib/
> streams/server/streams_.js).

I was imprecise, I apologize.

What I mean is, the *non-fibers callback version* of the pipe function
that has been shown in this thread does not exist. No one actually
writes callback code that way, so the comparisons are all absurd
strawmen.

Pipe is a solved problem. It's solved with event emitters. It's not
even a "callback hell" type of problem, so I'm really not sure what
the point is of that. Replacing the whole Stream API pattern doesn't
make it easier to write node programs, so I'm not seeing the value
there.


Show me a real program that uses callbacks in real life, and then show
the comparable fibers counterpart. It should be shorter than a bunch
of the messages in this thread. I want a side-by-side that isn't
contrived.

This is an actual attempt to get new information that I don't already
have. Please help me understand how fibers can make my programs
better, or at least, different.

Bruno Jouhier

Aug 7, 2011, 7:46:14 AM
to nodejs
> But while with fibers you can catch any exception perfectly, with CPS you simply CAN'T. It's not possible, because by the time the exception happens, you're already out of the function where you wanted to catch it.
>
> It's simply a true limitation of CPS, it's not Bruno and Marcel dismissing callbacks.

Actually you CAN catch them with CPS and callbacks. Streamline
generates the callback pattern that does it. The problem is that this
pattern is so complex that nobody would reasonably write it "by hand".
You need a preprocessor's help if you want to do it.
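To make the point above concrete, here is a self-contained illustration (no fibers or streamline involved; the small generator driver only emulates the suspend/resume that fibers provide natively, and all names are hypothetical). A try/catch at the call site cannot see an asynchronously delivered error in plain callback style, but in coroutine style an ordinary try/catch works.

```javascript
// An async operation that always fails after the current stack unwinds.
function failLater(cb) {
  setImmediate(() => cb(new Error('boom')));
}

// 1. Plain callbacks: the catch block below can never see the error,
// because the stack that contained the try has long since returned.
let callbackSaw = null;
try {
  failLater((err) => { callbackSaw = err.message; });
} catch (e) {
  console.log('unreachable for the async error');
}

// 2. Coroutine style: the driver throws the error back *into* the
// generator, so a try/catch around the "call" behaves as expected.
function run(genFn) {
  const gen = genFn();
  function step(err, val) {
    const r = err ? gen.throw(err) : gen.next(val);
    if (!r.done) r.value(step);
  }
  step(null, undefined);
}

let caught = null;
run(function* () {
  try {
    yield cb => failLater(cb);
  } catch (e) {
    caught = e.message; // reached: the error arrives as an exception
  }
});
```

The streamline preprocessor generates the (much hairier) callback plumbing that achieves the same effect; fibers get it directly from the runtime.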

I don't care that much about people using or not using streamline (as
I said earlier, it's just a stopgap solution). On the other hand,
there is an interesting academic side to it. Streamline (even if it is
not used much and if people will prefer fibers instead) somehow shows
how far you can go with callbacks, and without them. This is why I
took the time to write an "equivalence post" on my blog. What this
post demonstrates is that all the flow control keywords of the
Javascript language can be mechanically converted from pseudo-sync
style to callback style, and that, conversely, anything that you would
write in callback style can be converted to pseudo-sync style (with
the "futures" feature).

The only thing that streamline cannot handle (and that fibers can) are
async calls behind parameterless APIs (property accessors, toString()
overrides, etc.). But this is a syntax sugaring issue, not a
fundamental one.

There is also a little gotcha on the EH side: streamline does a
perfect job as long as the EH code itself does not fail unexpectedly
(and stacktraces are not great but there may be ways to fix it).
Fibers don't have these problems. In the real world, this is not a
problem (at least for me) because a well engineered system should have
little EH code in a few strategic places. So it is relatively easy to
ensure that the EH code itself won't throw unexpectedly.

Maybe I'm getting into too many details here. But I'm interested
by the technical and academic underpinnings, and by the usability
aspects of all this, not by the promotion of my own solution (fibers
and streamline are competing and I see good reasons why people would
go with fibers rather than with streamline). What I'm not interested
in is the religion of the status quo.

Bruno

Bruno Jouhier

unread,
Aug 7, 2011, 9:28:18 AM8/7/11
to nodejs
Again, you speak about things without looking at them.

The *non-fibers callback version* of this DOES EXIST.

Streamline.js has nothing to do with fibers. It is a preprocessor that
generates vanilla Javascript with callbacks. If you look at the
transformed file (https://github.com/Sage/streamlinejs/blob/master/lib/streams/server/streams.js),
you will see that this is a PURE CALLBACK
implementation of this API, which does not have any strings attached
to it: no node C++ extension, not even a JS helper module -- it only
'requires' modules from node core!

Replacing the whole stream API may not help much on usability if you
program to the metal (with callback and events -- although this is
probably debatable). But if you use a pseudo-sync tool, it does help a
lot.

And besides that, I think that the concept of wrapping an event-based
API with a callback-based one is an interesting one "intellectually".
There is a general pattern behind this (very similar to the "future"
pattern) which I find rather compelling and which may help solve other
problems. If you don't want to use it, fine.

I can provide hundreds of code fragments that demonstrate the
difference in usability. I just need a bit of time to format them
properly.

Bruno

On Aug 7, 11:26 am, Isaac Schlueter <i...@izs.me> wrote:

Jeff Fifield

unread,
Aug 7, 2011, 10:02:38 AM8/7/11
to nod...@googlegroups.com
On Sun, Aug 7, 2011 at 5:46 AM, Bruno Jouhier <bjou...@gmail.com> wrote:
> Maybe I'm getting into too many details here. But I'm interested
> by the technical and academic underpinnings, and by the usability
> aspects of all this, not by the promotion of my own solution (fibers
> and streamline are competing and I see good reasons why people would
> go with fibers rather than with streamline). What I'm not interested
> in is the religion of the status quo.
>

By the way, this debate (however you frame it: events vs. threads;
continuation passing vs. fibers; etc) has been discussed extensively
in the academic literature, and is much older than node. One thing
that this research has shown is that at the end of the day everything
reduces to personal preference and quality of implementation. Both
approaches are appropriate.

Jeff

Branko Vukelić

unread,
Aug 7, 2011, 10:09:03 AM8/7/11
to nod...@googlegroups.com
On 2011-08-07 08:02 -0600, Jeff Fifield wrote:
> By the way, this debate (however you frame it: events vs. threads;
> continuation passing vs. fibers; etc) has been discussed extensively
> in the academic literature, and is much older than node. One thing
> that this research has shown is that at the end of the day everything
> reduces to personal preference and quality of implementation. Both
> approaches are appropriate.

I seriously hope everybody got this already. :)))

Isaac Schlueter

unread,
Aug 7, 2011, 5:30:30 PM8/7/11
to nod...@googlegroups.com
On Sun, Aug 7, 2011 at 06:28, Bruno Jouhier <bjou...@gmail.com> wrote:
> Again, you speak about things without looking at them.
>
> The *non-fibers callback version* of this DOES EXIST.
>
> Streamline.js has nothing to do with fibers. It is a preprocessor that
> generates vanilla Javascript with callbacks. If you look at the
> transformed file (https://github.com/Sage/streamlinejs/blob/master/lib/streams/server/streams.js),
> you will see that this is a PURE CALLBACK
> implementation of this API, which does not have any strings attached
> to it: no node C++ extension, not even a JS helper module -- it only
> 'requires' modules from node core!

So, your claim is that the streamline code is better than the callback
version generated by streamline? That's not terribly impressive.
There are already lots of tools that can turn JavaScript into less
readable JavaScript.


> I can provide hundreds of code fragments that demonstrate the
> difference in usability. I just need a bit of time to format them
> properly.

Not looking for fragments. Looking for the effect on a small,
finished, well-understood program. Otherwise, this is all just make
believe.

Is recursive directory removal too hard to do with fibers or
streamline or coroutines?

Bruno Jouhier

unread,
Aug 7, 2011, 6:08:26 PM8/7/11
to nodejs
Replying to Jeff and going a bit off-topic (too much into streamline-land). Apologies in advance...

Yes Jeff, there are nevertheless some subtleties when it comes to
analyzing real designs and implementations.

In my case, the issue was the following:

* initially, streamline did not have its "future" feature. Then it was
rather obvious that "pure streamline.js" (that is, streamline without
any async callbacks at all) was not as expressive as JS with async
callbacks. The situation was actually much worse: "pure streamline"
would be completely unsuitable for node; it needed an escape to
callbacks!

* Then I added the future feature. So the question became: Is "pure
streamline with futures" equivalent to "JS with async callbacks" (in
other terms, will the "futures" feature let you do everything you can
do with async callbacks, or only a limited subset thereof). And it was
easy to prove that: "yes, they both provide the same power".

I could probably have saved several hours of writing and learned
interesting things by going into the academic papers and finding the
general theorem of which this is just a corollary. But trying to prove
it from a "constructivist" angle was an interesting exercise. Also, it
explains the magic and can help reassure people who wonder about the
HOW.

It also brought an interesting insight on the relationship between
callbacks and futures: a callback can be viewed as a future that gets
resolved by the event loop rather than by an explicit resolve call in
the code.
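That relationship can be sketched in a few lines. The following is an illustrative toy, not the actual Future from the fibers package or from streamline; the `wrap`/`resolve` method names only mirror the usage shown earlier in the thread.

```javascript
// A toy future: a one-shot container that remembers either its
// settled value or the callbacks still waiting for it.
function Future() {
  this.settled = false;
  this.waiters = [];
}

// Register interest, callback-style; fires immediately if already settled.
Future.prototype.resolve = function (cb) {
  if (this.settled) cb(this.err, this.val);
  else this.waiters.push(cb);
};

// Settle the future. For a plain callback, this is conceptually what
// the event loop does when the I/O completes.
Future.prototype.settle = function (err, val) {
  this.settled = true;
  this.err = err;
  this.val = val;
  this.waiters.forEach(cb => cb(err, val));
};

// Turn fn(args..., cb) into fn(args...) -> Future.
Future.wrap = function (fn) {
  return function (...args) {
    const f = new Future();
    fn(...args, (err, val) => f.settle(err, val));
    return f;
  };
};
```

Under this reading, `Future.wrap(require('fs').readdir)('.')` hands back a future that the event loop settles when the readdir completes, which is exactly the "callback as an event-loop-resolved future" view.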

Maybe this too is a well-known thing and I'm just restating the obvious
here. But I found it "interesting".

Anyway, I'm probably sidetracking this debate completely at this
stage. Time to quit.

Bruno

On Aug 7, 4:02 pm, Jeff Fifield <fifi...@mtnhigh.net> wrote:

Bruno Jouhier

unread,
Aug 7, 2011, 6:36:11 PM8/7/11
to nodejs
On Aug 7, 11:30 pm, Isaac Schlueter <i...@izs.me> wrote:
> On Sun, Aug 7, 2011 at 06:28, Bruno Jouhier <bjouh...@gmail.com> wrote:
> > Again, you speak about things without looking at them.
>
> > The *non-fibers callback version* of this DOES EXIST.
>
> > Streamline.js has nothing to do with fibers. It is a preprocessor that
> > generates vanilla Javascript with callbacks. If you look at the
> > transformed file (https://github.com/Sage/streamlinejs/blob/master/lib/streams/server/streams.js),
> > you will see that this is a PURE CALLBACK
> > implementation of this API, which does not have any strings attached
> > to it: no node C++ extension, not even a JS helper module -- it only
> > 'requires' modules from node core!
>
> So, your claim is that the streamline code is better than the callback
> version generated by streamline?  That's not terribly impressive.
> There are already lots of tools that can turn JavaScript into less
> readable JavaScript.

In this particular instance, 90% of the source is actually written in
callback-style and is not transformed at all. This module is not at
all a typical streamline module. It does not deal with business logic.
It deals with a low level technical problem: transforming event-style
code into callback-style. I'm just using streamline in one or two
methods because this was handy. I could have as well written all of it
in callback-style. Maybe I'll do it some day to make it a bit more
efficient.

>
> > I can provide hundreds of code fragments that demonstrate the
> > difference in usability. I just need a bit of time to format them
> > properly.
>
> Not looking for fragments.  Looking for the effect on a small,
> finished, well-understood program.  Otherwise, this is all just make
> believe.
>
> Is recursive directory removal too hard to do with fibers or
> streamline or coroutines?

Well, this is a technical problem (dealing with retries and all that
stuff). Typically the kind of thing that I might still write in
callback-style (or rather let you write for me!).

I'd rather go with small typical samples of business logic. Because
then I can explain what I mean by "easy to read and easy to
maintain" (as you'll see, the maintenance issue takes a very different
form than in the technical layers). As I said earlier, there is no
"one size fits all" and I did not pretend that streamline or fibers
were the right tools for what you are writing (I said the opposite).

The best format to explain this is probably as a blog post and it is
going to take me some time to do so (especially as my week-ends are
currently very loaded with lots of down to earth duties). So please be
a bit patient.

Time to quit, if you don't mind.

Bruno

Nicolas Chambrier

unread,
Aug 7, 2011, 6:43:19 PM8/7/11
to nod...@googlegroups.com
I see lots of bad faith on either side... I don't think this discussion can go any further anyway;
you're both wasting your time, end of story ;)

--
Job Board: http://jobs.nodejs.org/
Posting guidelines: https://github.com/joyent/node/wiki/Mailing-List-Posting-Guidelines
You received this message because you are subscribed to the Google
Groups "nodejs" group.
To post to this group, send email to nod...@googlegroups.com
To unsubscribe from this group, send email to
nodejs+un...@googlegroups.com
For more options, visit this group at
http://groups.google.com/group/nodejs?hl=en?hl=en




Isaac Schlueter

unread,
Aug 7, 2011, 7:51:02 PM8/7/11
to nod...@googlegroups.com
On Sun, Aug 7, 2011 at 15:36, Bruno Jouhier <bjou...@gmail.com> wrote:
> In this particular instance, 90% of the source is actually written in
> callback-style and is not transformed at all. This module is not at
> all a typical streamline module. It does not deal with business logic.
> It deals with a low level technical problem:  transforming event-style
> code into callback-style. I'm just using streamline in one or two
> methods because this was handy. I could have as well written all of it
> in callback-style. Maybe I'll do it some day to make it a bit more
> efficient.

I'm so lost.

It sounds like you're saying that Streamline isn't useful for anything
beyond trivial examples, and you're sort of convincing me.


> Well, this is a technical problem (dealing with retries and all that
> stuff). Typically the kind of thing that I might still write in
> callback-style (or rather let you write for me!).

Ok.

What about if you don't have to handle edge cases? Then could you do
it with a coro pseudo-sync try/catch-able style? What would that look
like?


> I'd rather go with small typical samples of business logic.

In my business, I need logic to remove directories, because my
programs run on computers.


> The best format to explain this is probably as a blog post and it is
> going to take me some time to do so (especially as my week-ends are
> currently very loaded with lots of down to earth duties). So please be
> a bit patient.

If it would take more than an hour to write rimraf in streamline, it's
not any kind of solution to "callback hell".


> Time to quit, if you don't mind.

Not at all. Thanks for playing :)


On Sun, Aug 7, 2011 at 15:43, Nicolas Chambrier <nah...@gmail.com> wrote:
> I see lots of bad faith on either side... I don't think this discussion can
> go any further anyway;
> you're both wasting your time, end of story ;)

I actually would like to be able to have this discussion, but everyone
keeps running away or getting mad. It's very confusing.

I just want an apples-to-apples comparison that isn't fake, that's all.

Marcel Laverdet

unread,
Aug 7, 2011, 10:19:36 PM8/7/11
to nod...@googlegroups.com
Isaac here is rimraf with EBUSY and EMFILE handling. The function itself is 26 lines plus about 20 lines for includes and a quick timer future.


I was hesitant to build this because I'm pretty positive that the behavior is not precisely the same as your version, but it should have the same end result. I could be missing edge cases here.

Worth noting is that it's still just as parallel as the original version.


--

Isaac Schlueter

unread,
Aug 7, 2011, 10:26:00 PM8/7/11
to nod...@googlegroups.com
OMG THANK YOU.

Isaac Schlueter

unread,
Aug 8, 2011, 12:03:21 AM8/8/11
to nod...@googlegroups.com
This is really close to being a true 1:1 comparison. There are a few
issues, but I'm going to fork your gist and try to fix them. Thanks
again.

It's stupid for (callback) rimraf to keep the counters for EBUSY
waiting in a module-global object when it can just be a local var.
Derp. I'm dumb :)

The (coro) version doesn't handle one lstat race, where the lstat
throws, meaning that the file is already gone. Easy to fix.

Both versions have a race condition where the file might go away
between the lstat and the unlink/rmdir. They need to swallow ENOENTs.
I'll fix that. I think that may be the source of a really rare
sporadic failure in npm 0.x that I never tracked down, actually.

The coro version doesn't handle the "gently" stuff that I just added
to use it with npm, but that's fine since that's actually kind of an
oddly specific use-case anyway. Now that I see the pattern of what
you've done here, though, I'm going to try to see if it's possible to
bolt that onto the coro version. I'll leave a comment when I've got
it done and you can tell me if I'm doing it wrong :)

Bottom line, it's looking like about a 20% LOC reduction, with the
trade-off being that the async bits are less explicit, none of which
should be a surprise to anyone. But I think having a good
side-by-side comparison of two programs doing the same thing will be
helpful for making this subject a bit less absurd.

On Sun, Aug 7, 2011 at 19:19, Marcel Laverdet <mar...@laverdet.com> wrote:

Bruno Jouhier

unread,
Aug 8, 2011, 3:31:21 AM8/8/11
to nodejs
I was going to quit but Marcel did all the dirty work (thanks :-). I
forked a streamline version.

The function itself is still 26 lines (same ones). And the extra stuff
is now down to 4 lines. The generated code is bigger (97 lines vs 71
for the original) but who cares.

I also tried to be nice to people who are allergic to underscores, for
once.

My intent with the blog was more to go into the human factors (what
does it mean to develop business logic, what's the typical programmer
profile, what does their typical code look like, what does code
maintenance mean for them, etc.). A bit along the lines of Will
Conant's post, but with real code examples.

Bruno


On Aug 8, 4:19 am, Marcel Laverdet <mar...@laverdet.com> wrote:
> Isaac here is rimraf with EBUSY and EMFILE handling. The function itself is
> 26 lines plus about 20 lines for includes and a quick timer future.
>
> https://gist.github.com/1131093
>
> I was hesitant to build this because I'm pretty positive that the behavior
> is not precisely the same as your version, but it should have the same end
> result. I could be missing edge cases here.
>
> Worth noting is that it's still just as parallel as the original version.
>
>
>
>
>
>
>
> On Sun, Aug 7, 2011 at 4:51 PM, Isaac Schlueter <i...@izs.me> wrote:

Marco Rogers

unread,
Aug 8, 2011, 12:37:54 PM8/8/11
to nod...@googlegroups.com
Thanks Bruno and Marcel. Looking at real code helps to solidify people's arguments and bring the conversation back up from the muck. Can I suggest that we kill this thread and start fresh discussing the rimraf stuff? Also let me apologize if any of my comments came across as offensive or personal. I was in heavy sarcasm mode, but not trolling. I was trying to at least bring some perspective to the proceedings. Poorly executed I'm sure.

:Marco

Bruno Jouhier

unread,
Aug 8, 2011, 4:48:52 PM8/8/11
to nodejs
We are all passionate about what we are doing, and node is really a
great playground.

Don't worry, I'm an old guy and I've been a few times in situations
where I advocated loudly for something that I considered really stupid
a few years later (you should see my posts when I was arguing for
statically typed languages and against dynamic ones; they are just
laughable!!!). Everyone can get overboard once in a while.

Frankly, node is a great technology, and I'm enjoying it every day.
This is probably one of the most interesting programming experiences in
my professional life. I'm also using it for a serious industrial
project. So, my interest is to build a respectful and trustful
relationship with the people who are behind this project, not to get
into sterile arguments with them.

The most frustrating part of this discussion was probably the fact
that I could not get through the message that "different people have
different requirements". I did not develop streamline.js because I
wanted to challenge node's standard way of doing things, I developed
it because I felt a lot of potential behind node but also a serious
impedance mismatch with what we are doing. Will's post is very
eloquent on this and I'm in a similar boat: there is simply no way we
can ask our developers to change their habits so radically. They write
"business rules", they have unique skills that we don't have (like
deciphering obscure legislative material) and it would be completely
counter productive to try to turn them into systems engineers.

So, yes, let's move on and work together constructively on improving
things. Moving rimraf to a different thread is a good idea because
this one is getting a bit crowded.

Bruno

Alexey Petrushin

unread,
Nov 9, 2012, 7:12:18 PM11/9/12
to nod...@googlegroups.com
Why are there so many negative responses about fibers?

I use fibers and they seem very helpful because:

- they remove callbacks and make the code simple and linear.
- they allow a global try/catch block that catches all errors (both sync and async).
- the performance impact seems to be small.
- they are compatible with streams.

A short sample of what the code looks like (a simple function that loads and parses JSON content):

    var load = function(indexPath){
      // `fs2.readFile` is the asynchronous `fs.readFile`
      // wrapped with fibers.
      var data = fs2.readFile(indexPath, 'utf8')
      return JSON.parse(data)
    }
    
    // All exceptions will be caught, both sync and async.
    try {      
      var obj = load('/data.json')
    } catch(err) {
      console.log(err)
    }
    
There's no sign that the code is actually asynchronous, i.e. the abstraction isn't leaky (unlike
some other techniques where you still have to handle errors or other details in a special way).

So, what's wrong with it?

Mark Hahn

unread,
Nov 9, 2012, 7:25:56 PM11/9/12
to nod...@googlegroups.com
This subject has been beaten to death on this forum and I fear you might be starting yet another 100-message thread.

--

Charlie McConnell

unread,
Nov 9, 2012, 7:27:56 PM11/9/12
to nod...@googlegroups.com
OH MY GOD IT HAS RISEN FROM THE DEAD KILL IT KILL IT
--
Charlie McConnell
Head of DevOps
Nodejitsu, Inc.

Alexey Petrushin

unread,
Nov 9, 2012, 7:33:38 PM11/9/12
to nod...@googlegroups.com
I probably missed it, but I haven't seen any solid facts, only personal opinions on this topic.

Mark Hahn

unread,
Nov 9, 2012, 10:14:02 PM11/9/12
to nod...@googlegroups.com
but I haven't seen any solid facts,

How can there be solid facts for something so opinionated?  Many prefer the async coding style or the promise style or blah blah blah ...

Most people who follow this forum know all the sync options available.  No one is saying there is anything wrong with fibers other than personal preferences.

On Fri, Nov 9, 2012 at 4:33 PM, Alexey Petrushin <alexey.p...@gmail.com> wrote:
I probably missed it, but I haven't seen any solid facts, only personal opinions on this topic.

--