// Future.wrap takes an existing callback-based function and makes a function
// which returns a Future
var readdir = Future.wrap(require('fs').readdir);
var dir = readdir('.');
// Once you have a Future you can either resolve it asynchronously
dir.resolve(function(err, val) {
  if (err) throw err;
  console.log('Files in this directory: ' + val.join(', '));
});
// Or you can yield your current fiber synchronously (the wait() call is the
// important part):
console.log('Files in this directory: '+ dir.wait().join(', '));
Using fibers here gives you some cool advantages:
- Reasonable stack traces: Sometimes it can be difficult to find bugs that occur in callback-based code because you lose your stack with every tick. But since each fiber is a stack, traces are maintained even past asynchronous calls. Additionally, line numbers are not mangled like they are with the existing rewriting solutions.
- Exceptions are usable again: Using fibers you no longer have to manually propagate exceptions; this is all taken care of for you. In the example above, wait() will throw if readdir() resolved with an error.
- Avoid excessive callbacks without sacrificing parallelism: Even though wait() blocks execution of JavaScript, it's important to note that Node is never actually blocked. Each Fiber you create is a whole new stack which can be switched into and out of as needed. So while many stacks of JS are blocked, another is running. You can even use wait() to wait on many Futures at the same time via Future.wait(future1, future2, future3, ...) or Future.wait(arrayOfFutures) (see the sketch after this list).
- Choose when to use callbacks and when not to: To be honest I use callbacks about as often as I use blocking fibers in my applications. Sometimes callbacks just make more sense (especially with streams of data), but when I do use Fibers I am really really glad I have them available.
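A minimal sketch of that parallel wait (mine, not from the original post; it assumes Fiber and Future are loaded from the fibers package, and the exact require paths may differ by version):

var Fiber = require('fibers');
var Future = require('fibers/future');
var readdir = Future.wrap(require('fs').readdir);

Fiber(function() {
  // Both reads are started before either is waited on, so they run in parallel.
  var here = readdir('.');
  var parent = readdir('..');
  Future.wait(here, parent); // yields this fiber until both have resolved
  // Each future is already resolved, so these wait() calls return immediately.
  console.log('here: ' + here.wait().join(', '));
  console.log('parent: ' + parent.wait().join(', '));
}).run();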
Fibers work in both 32-bit and 64-bit modes on Linux & OS X (Lion is supported as of fibers 0.5). Windows is unfortunately not supported, but if there is strong interest it could be added without too much work.
Check out the documentation on github:
https://github.com/laverdet/node-fibers
(see "FUTURES" for new stuff)
Thanks for reading,
Marcel
> never. going. to. happen.
Never say never.
--
Jorge.
Never.
Never.
This is simply amazing. Thanks. Keep up the good work!
ISTM that many people here don't realize *yet* how much better their lives would be if they could simply yield() and resume() instead of having to return (and end) on every function invocation. They think they don't need yield() just because they've got closures to preserve the contexts. What a mistake.
But they'll realize it, sooner or later, as they progress. You'll see.
--
Jorge.
...say never.
> * Code is MUCH easier to read and maintain!
Just here to point out the obvious; This is a pretty subjective thing,
and heavily influenced by past experiences with async code management
strategies. It'd be interesting, though, if there was a way to measure
ease-of-writing and ease-of-reading, and then to do a survey amongst a
wide range of developers. Quick, somebody write a grant proposal!
Captain Obvious out.
--Josh
Jeffrey Zhao
Blog: http://blog.zhaojie.me/
Twitter: @jeffz_cn (Chinese) | @jeffz_en (English)
On the other hand, if I'm writing business logic and I need to apply
a complex transformation between the read and the write (the
transformation itself will make async calls of course), I'm probably
better off with the pseudo-sync version.
So true.
--
Branko Vukelic
bra...@herdhound.com
bg.b...@gmail.com
Lead Developer
Herd Hound (tm) - Travel that doesn't bite
www.herdhound.com
Love coffee? You might love Loveffee, too.
loveffee.appspot.com
Hah! That's where you'd be wrong. For me, the streamline.js version is
way too much magic. I had to read the callbacks version first in order
to understand what the second one was trying to get at. Keep in mind,
of course, that my experience with async is almost exclusively with
node.
True story.
--Josh
On Fri, Aug 5, 2011 at 5:43 AM, Bruno Jouhier <bjou...@gmail.com> wrote:
> On the other hand, if I'm writing business logic and I need to apply
> a complex transformation between the read and the write (the
> transformation itself will make async calls of course), I'm probably
> better off with the pseudo-sync version.
Or maybe you're better off using some kind of flow control helper that doesn't hide the async nature of the code. Why do people keep comparing raw async code to things like streamline and fibers? If you're going to use a helper layer, then actually compare helper layers.
And just because this is all subjective,
having underscores littered throughout your code as a meaningful piece of information is not more readable to me.
I just did.
For the record, I think Marcel's fibers add-on is a pretty clever
hack. I can even see it being useful in some cases. But it's not
something you'll see in core until V8 supports coroutines natively.
>> I don't think that any reasonable and honest person would find the
>> first one more readable!
>
> Hah! That's where you'd be wrong. For me, the streamline.js version is
> way too much magic. I had to read the callbacks version first in order
> to understand what the second one was trying to get at. Keep in mind,
> of course, that my experience with async is almost exclusively with
> node.
>
> True story.
Ok, which one is better?
#1
function pipe(inStream, outStream, callback) {
  (function loop(err) {
    if (err) callback(err);
    else inStream.read(function(err, data) {
      if (err) callback(err);
      else data != null ? outStream.write(data, loop) : callback();
    });
  })();
}
#2:
function pipe (inStream, outStream, _) {
  var buf;
  while (buf = inStream.read(_)) outStream.write(buf, _);
}
#3:
function pipe (inStream, outStream) {
  var buf;
  while (buf = inStream.read()) outStream.write(buf);
}
All of them are possible, in node.
--
Jorge.
Marcel's fibers are Pure Win.
--
Jorge.
This one is long, but it's obvious what it's doing because the
callbacks are explicit.
>
> #2:
>
> function pipe (inStream, outStream, _) {
> var buf;
> while (buf = inStream.read(_)) outStream.write(buf, _);
> }
Shorter, yes, but I don't really understand what these underscores are
about, and knowing that they're async has me scratching my head a bit
("how does *that* work?"). Having figured out the end goal (by reading
the first one) it's pretty clear, sure. If I spent enough time with
this I could probably get to the point where I can read it easy.
>
> #3:
>
> function pipe (inStream, outStream) {
> var buf;
> while (buf = inStream.read()) outStream.write(buf);
> }
Except for the underscores, this is pretty much the same as #2. The
use of `while` still strikes me as a bit magical.
I'm open to the suggestion that alternate approaches to callbacks are
"better" in some way(s), but I definitely thing that "more readable"
is the sort of thing that's heavily influenced by what you've done
before. None of these things are truly intuitive; we have to *learn*
them.
--Josh
>
> All of them are possible, in node.
> --
> Jorge.
>
The only reason you need callbacks for async jobs is that (in JS) function calls can't be suspended and resumed; they must always run to completion. But async does not necessarily mean "must provide a callback"; that's not the only way to do it.
--
Jorge.
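A minimal sketch of that point (mine, not Jorge's), using the documented node-fibers primitives Fiber.current, Fiber.yield() and fiber.run(): a plain function call that suspends itself and resumes later, no callback in sight.

var Fiber = require('fibers');

function sleep(ms) {
  var fiber = Fiber.current;
  setTimeout(function () { fiber.run(); }, ms); // resume the fiber later
  Fiber.yield();                                // suspend it right here
}

Fiber(function () {
  console.log('before');
  sleep(1000);           // this call pauses for a second...
  console.log('after');  // ...and then simply continues
}).run();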
Yeah, that's definitely true. On the other hand, I don't really have
any exposure to this alternate approach, so the callback approach
happens to make more sense to me.
--Josh
Let's all calm down and get back to writing some code now.
So what is it? Threads of fibers now? ;)
Of course, there are idiots like me who have dropped their jaws to the
floor the moment this discussion began and went "What?! I
barely managed to figure out the whole callback thing, and now this?!"
Seriously, though. I feel incredibly stupid trying to follow what you
guys are arguing about, but you say "you don't see where it yields" in
reference to the third example, whereas I didn't see where it yielded in
the second either. Callbacks make more sense to me because that's what
I'm used to seeing, and their semantics are clear to me.
> But what's the most important question when you have to read and
> maintain large pieces of code? I think that it is the first one: "WHAT
> does it do?". And I think that we can objectively say that this
> question is easier to answer in #2 and #3 than in #1.
So, along the lines of what I wrote above, no, #2 and #3 didn't make
things clearer. It seemed clear at first glance, but then I tried to
figure out what it did, and I had to look up #1 to find out.
> You are asking the second question because you do not TRUST the magic.
> So, you need to also understand the HOW, to be reassured that the
> magic actually works . The problem is here. Once you get confident
> that the magic works, you won't need to worry about the HOW any more,
> you will just let the magic operate.
And thus you prove someone else's point, and that's: reading code (and
the ease of reading that comes with that skill) is an acquired skill. It's
not internal to the code itself. Once people decide to learn the new
pattern, it will become more readable to _them_. Sort of like a
self-fulfilling prophecy.
I think that the debate is not just a rehash of things that have been
said in previous threads.
Of course, there is a bit of futile arguing, but also
some progress in the debate.
Why should we kill it? If people are not interested, they can just
skip this thread.
this might lead at some point to an alternate platform, which could be very interesting.
* You can do robust exception handling with try/catch/finally (even
around blocks that contain async calls).
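A minimal sketch of that bullet (mine, not from the thread), using the fibers Future API from the first message: inside a fiber, wait() re-throws the error a callback would have received, so plain try/catch/finally works across the async call.

var Fiber = require('fibers');
var Future = require('fibers/future');
var readdir = Future.wrap(require('fs').readdir);

Fiber(function () {
  try {
    var files = readdir('/no/such/dir').wait(); // wait() throws the ENOENT here
    console.log(files.join(', '));
  } catch (err) {
    console.error('could not list directory: ' + err.message); // caught like sync code
  } finally {
    console.log('cleanup runs either way');
  }
}).run();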
Or maybe you're better off using some kind of flow control helper that doesn't hide the async nature of the code. Why do people keep comparing raw async code to things like streamline and fibers? If you're going to use a helper layer, then actually compare helper layers.
Even with flow control helpers I feel the style becomes quite obtuse (subjective) and more verbose (objective). Here is the pipe() example written as best I can with async. You're forced to make each operation its own function, which adds needless code bloat. You don't have to juggle `err` as much, but you still deal with the added function overhead. And just because this is all subjective: having underscores littered throughout your code as a meaningful piece of information is not more readable to me.
Here's where this went wrong. Not when someone asked if this could go into core, but when the question received several firm No's from people in positions of authority; that should really have been it. If you keep arguing, then it sounds like you're still trying to convince people to put it in core or that it's the "right" way, instead of trying to convince people to give your module a shot. That's a recipe for disaster. It's not the "right" way, nor the "wrong" one, because those are opinions.
First and foremost, why in God's name are you writing your own .pipe()? Node has that built-in.
// OMG PIPE NOT INVENTED HERE MUST MAKE MY OWN
<<16 lines of code>>
2011/8/6 C M <cha...@charlieistheman.com>:
> // OMG PIPE NOT INVENTED HERE MUST MAKE MY OWN
> function NIHpipe(inStream, outStream, cb) {
> inStream.read(readFn);
> function readFn(err, data) {
> if (err) { return cb(err) }
> data ? writeFn(null, data) : cb(null);
> }
> function writeFn(err, data) {
> if (err) { return cb(err) }
> outStream.write(data);
> }
> }
> function cb(err) {
> if (err) {
> console.error('OMG FLAGRANT ERROR.');
> }
> }
# OMG JS is so verbose must write it shorter
pipe = (in, out, cb) ->
  do rwChunk = -> in.read (err, data) ->
    return cb err if err or not data
    out.write data, rwChunk
How have we dismissed the "usefulness of exceptions"?
I don't think anyone close to node honestly believes that it's a great
thing that "throw" loses state. We all hate that.
The solution is long stack traces and small processes that can crash
completely on failure without taking down your application. Even with
fibers, and wrapping everything in a try/catch, I would bet on there
being ways to tank the entire program in some terrible way, so you end
up needing this anyway. Also, the lack of post-hoc debugging
facilities makes it really hard to get a handle on bugs that only show
up in production.
If half the energy spent on flow control were spent on process
management, and IPC systems, and state capturing, it would be a more
interesting world. The number of people involved in the conversation
should be an indicator that there's no real puzzle here, just
something to talk around.
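A minimal sketch of that crash-and-restart idea (mine, not isaacs'), using today's cluster module; the restart policy is purely illustrative:

var cluster = require('cluster');

if (cluster.isMaster) {
  cluster.fork();
  cluster.on('exit', function (worker) {
    console.error('worker ' + worker.process.pid + ' died; forking a new one');
    cluster.fork(); // crash-only style: replace the process, don't patch it
  });
} else {
  process.on('uncaughtException', function (err) {
    console.error(err.stack); // capture state for post-hoc debugging
    process.exit(1);          // then let the process die cleanly
  });
  // ...actual server work goes here...
}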
>> And just because this is all subjective, having underscores sprinkled
>> throughout your code as a meaningful piece of information is much more
>> readable
>
> With flow control helpers I feel the style becomes less obtuse
> (subjective) and verbose (objective). Here is the pipe() example
> written as best I can
The most annoying thing about this discussion is the "let me show you
how you write your programs, and then tell you how I don't like it."
It has never changed anyone's mind ever.
> I think it's a reasonable case study.. It's a common function that everyone
> understands the constraints of, and demonstrates basic callback patterns.
> It's the same reason there's so many examples of printing "hello world".
Node's Stream.pipe is hardly a "hello world". It handles a lot of
issues; back-pressure, multiple inputs, leaving the destination open
or shut based on a config, completely cleaning up after itself, and so
on. Have a look:
https://github.com/joyent/node/blob/master/lib/stream.js#L31-163
It turns out you actually have to support all this stuff in the real
world. The pipe(src, dest, cb) pattern is a stepping stone that
doesn't meet the real need, which is why we have the much more elegant
Stream.pipe().
The second most annoying thing about this (apparently immortal) thread
is the constant rehashing of a function that is woefully incorrect and
lacking in the first place.
Perhaps we can address that one with a real challenge?
Write an async `rm -rf` using your favorite fiber/promise/flow-control/coro lib.
Requirements:
1. Handle EBUSY by trying that file again in 100ms, then 200ms later
if encountered again, then 300 after that, then failing.
2. Handle EMFILE by backing off in a gradually increasing timeout
amount, and then set the EMFILE-timeout value to 0 when it is
successful.
Here's a reference implementation, where the entire flow control
utility, and callback-based implementation, with comments, is 81 lines
of code: https://github.com/isaacs/rimraf/blob/master/rimraf.js
(including a sync version).
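A tiny sketch of requirement 1 only (mine, not the rimraf reference implementation), in plain callback style; the retry delays come straight from the challenge text:

var fs = require('fs');

function unlinkWithRetry(path, cb, attempt) {
  attempt = attempt || 0;
  fs.unlink(path, function (err) {
    if (err && err.code === 'EBUSY' && attempt < 3) {
      // Wait 100ms, then 200ms, then 300ms before the next try, then give up.
      return setTimeout(function () {
        unlinkWithRetry(path, cb, attempt + 1);
      }, 100 * (attempt + 1));
    }
    cb(err);
  });
}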
For the record, I don't care how you write your programs. If they do
something useful, and you're going to keep maintaining them, then
that's fantastic.
Aww, damn, I should think before trolling.
> pipe = (in, out, cb) ->
> do rwChunk = -> in.read (err, data) ->
> return cb err if err or not data
> out.write data, rwChunk
Actually, two more lines (or one if you don't count the helper):
captureError = (cb, func) -> (err, args...) -> if err then cb(err) else func.apply @, args
pipe = (input, out, cb) ->
  do rwChunk = captureError cb, ->
    input.read captureError cb, (data) ->
      return cb null if not data
      out.write data, rwChunk
Aaaarghhh. What's that, brainfuck deluxe ?
--
Jorge.
I can explain it in detail. :)
> On 06/08/2011, at 11:47, Jann Horn wrote:
>> captureError = (cb, func) -> (err, args...) -> if err then cb(err) else func.apply @, args
captureError is a helper function that wraps callbacks. If the wrapped
callback gets called with an error, it will call the `cb` function, else it
will call the original callback, but without the "err" argument (it's falsy
anyway).
>> pipe = (input, out, cb) ->
Definition of "pipe", "pipe" takes three arguments.
>> do rwChunk = captureError cb, ->
Define rwChunk and call it afterwards ("do"). The function is wrapped
with "captureError", so we don't have to worry about write errors
anymore.
>> input.read captureError cb, (data) ->
Read a chunk from input with the following callback (again protected
by "captureError"):
>> return cb null if not data
if (!data) return cb(null);
>> out.write data, rwChunk
out.write(data, rwChunk);
Also, this thing about us wanting to push things into core or
evangelizing at all costs makes me smile. I see streamline as a
stopgap solution for async programming which is completely independent
from node (generated code is vanilla JS which does not have any
strings attached). Once harmony takes over, streamline will just die
and I'll feel better because I'd rather base my stuff on a standard,
highly optimized runtime than on a hack that I wrote.
Thanks, but still... Aaaarghhh :-P (I ♥ JS)
--
Jorge.
Was it supposed to be s/out/to/ or am I being an idiot again?
> and a place for each. This thread started out positive but has taken a
> turn for the worse. I just wanted to talk about some software that I'm
> really excited about without starting a hate party.
Marcel, don't take it personally. Guys are arguing about fibers, not
you. And there is very little you can do to stop it. I'm sure everybody
caught your initial statement that you weren't looking to push this into
Node core. But there are people out there that would still like to see
it happen, and hence the argument. It's not all that bad, too. I've
learned a thing or two just reading other people's opinions.
So you should just relax and let them fight it out. :)
--
Branko Vukelic
bra...@herdhound.com
bg.b...@gmail.com
Lead Developer
> Er, maybe I misunderstood but you personally called try \ catch "ugly" in
> this thread a few months ago:
> http://groups.google.com/group/nodejs/browse_thread/thread/5bd51524229c27b4
Yes, I think that try/catch is an ugly pattern. It is a concealed
goto, and I had problems with it long before ever coming to node.
That doesn't mean that I think our current setup is sufficient. When
an exception is thrown, the process should crash completely and
cleanly, and it should not bring down the app, and the resulting data
should be sufficient to trace through and debug the problem.
This is one thing that Erlang and C do right, and JavaScript does not.
I think that 2012 will produce some interesting solutions to this
issue.
> Straw man! Straw man, straw man, straw man!!
I don't get it...?
> We understand that pipe is a complex and robust function. I respect the
> amount of thought and consideration that you and Ryan have put into it, and
> I'm not trying to defile the integrity of your work.
> However, when boiled down to a **naive solution** it is an excellent case
> study in design.
I wasn't claiming that you don't appreciate the work or whatever.
(And it was mostly Mikeal and Felix who wrote that, I believe.)
I'm saying that the thing you're using is not a good case study in
design, because it's a bad program. I'm not interested in tools that
make bad programs arbitrarily pretty. If your tool can do more, I'd
love to see it.
> Why stop there!? We should make the reference implementation a distributed
> rtree index service which can operate on both euclidean and spherical
> coordinate systems.
That would also be interesting, though I don't have a use for it. Is
that something that can be done in a single node module with 80 or so
lines of commented callback-based JavaScript and no external
libraries? Is that something that's used in production, and can be
written from scratch in about an hour?
If you're saying that rimraf is an unreasonable challenge, then I'm
kind of confused, because I thought the point of fibers is to make
code easier to write, and it was pretty easy already. I'm not going
to write it badly as an amateur in a pattern I don't fully
understand, and then conclude that it's harder, or that the clumsy
result is necessarily uglier. I'd like to see what a fibers expert
can do to this problem to make it beautiful, less verbose, etc.
> I just don't see what this has to do with what we're talking about, or what
> you're trying to prove. Are you asserting that the expressiveness of
> streamline and fibers somehow falls apart once any amount of complexity is
> added?
I'm not *asserting* anything. I'm asking for a better example,
because I don't really see the point of any of this. I've seen a
naive broken pipe(src, dest, cb) written 4 different ways in this
thread. They *ALL* suck even worse than util.pump, which is terrible.
Show me a side-by-side of a program that does some relatively
interesting asynchronous IO juggling, in a well-understood problem
domain, in less than 100 lines.
> I don't get where all this hostility is coming from dude.
> wanted to talk about some software that I'm really excited about without
> starting a hate party.
You have me all wrong! There's no hostility here. I'm not disparaging
your respect or anything like that. I'd like to understand the value
in this thing that some people seem to see value in, but the way it's
being presented and debated, any value that *might* be there is
actively obscured.
It would be a lot more interesting for everyone to talk about the
shortcomings of the system they have chosen, rather than the
shortcomings of the system that they don't actually use in any
meaningful way.
On Sat, Aug 6, 2011 at 07:18, Bruno Jouhier <bjou...@gmail.com> wrote:
> The big difference between try/catch and pure API schemes
> (callback(err)) is that try/catch deals with expected exceptions (the
> ones you throw explicitly) as well as UNEXPECTED ones (the ones you
> get because someone left a bug somewhere deep into an obscure module
> that gets invoked only every other month with the special data
> combination that makes it fail).
I'm suggesting that it'd be better for those failures to tank the
whole process, leaving you with enough information to debug the
problem after the fact and easily figure it out. Right now, that's
*really* hard in node. If a bizarre error happens deep in the stack,
it's not *that* much better to be able to catch it with a try/catch
than with a process.on("uncaughtException") handler. You're saving,
what, one request that gets a 500 error instead of a dangling
connection?
> If you start from scratch, it is going to take a number of iterations
> before you reach something really good. So maybe you should not throw
> away all the old wisdom at once. IMO, classical exception handling
> (used in a disciplined way) is a MUST HAVE (until something better
> comes along).
Actually, what I was really suggesting was something that would fill
the usecase of a core dump, which is as old as dirt. Even if you use
fibers, you're still going to find yourself in situations where
something that *can't* happen *does* happen, and it's usually not in
your JavaScript.
From what I've seen fibers solve the easy problem, not the hard one.
That's fine, easy problems need to be solved too, but since it's a
roughness that I am well calloused against, it doesn't really inspire
me. Prove me wrong, please!
On Sat, Aug 6, 2011 at 06:15, Jorge <jo...@jorgechamorro.com> wrote:
> On the one hand you've got a bunch of crazy kamikazes that dare to comment on what they think is a Good Thing™, on the other you've got a bunch of Joyent employees annoyed by the mere existence of the conversation, inviting them to shut up (e.g. that "never" thread, you know).
Please don't bring Joyent into this. So far, I'm the only Joyent
employee to comment in this thread, and I don't speak for my employer.
Thanks.
I'm not annoyed by the existence of fibers, but yes, I am annoyed by
the pointlessness of this thread.
> If I were an ordinary node user I would prefer to stand by the side of the ones that rule (which, in the case of node, is *not* the community, but Joyent) and I would just shut up.
Actually, it's Ryan.
And I'm not asking you to shut up. I'm asking you to communicate more
effectively.
> It's not a matter of tastes. Fibers provide a bunch of real benefits, none of which is subjective.
Those have not been sufficiently demonstrated to be worth the
trade-offs. Take the rimraf challenge, let's see it.
>> The second most annoying thing about this (apparently immortal) thread
>> is the constant rehashing of a function that is woefully incorrect and
>> lacking in the first place.
>
> Is that an argument against fibers ?
No, it's an argument against this discussion thread. All this fiber
talk gives people verbal diarrhea ;)
Bottom line, you don't actually have to convince anyone of anything.
We can write programs in different ways, and that's great. I'm only
asking because I'm curious, and can't seem to find the answers in this
thread.
This is probably the biggest "pro" for me. The way the Node community has totally dismissed the usefulness of exceptions seems clumsy to me. Exceptions are really really powerful.
At the risk of sounding mean, if everyone had this way of thinking we'd all be still be programming with punchcards.
The Node core argument is a straw man. I don't think fibers should be in Node core any more than I think MySQL should be in Node core. Node core consists of nothing but bare metal; I understand, appreciate, and agree with that. The "does this belong in node core" argument is an argument that shouldn't be happening and serves only as a vehicle to flame these threads.
On Aug 5, 2011, at 6:57 PM, C M wrote:
> // OMG PIPE NOT INVENTED HERE MUST MAKE MY OWN
> <<16 lines of code>>
This is 4x more code than the streamline & fibers version.
Responses to a bunch of stuff inline.
On Aug 3, 2011, at 11:07 PM, Mikeal Rogers wrote:
> this might lead at some point to an alternate platform, which could be very interesting.
I honestly don't think it needs to be a separate platform. node-fibers and streamline are doing just fine as Node modules.
> Marcel,
>
> You wrapped it up brilliantly. Full agreement on all points.
>
> Also, this thing about us wanting to push things into core or
> evangelizing at all costs makes me smile.
Marcel never advocated this going in to core. YOU DID!
You also brushed aside core people saying that it would "never happen".
-Mikeal
You took something we already did in core, which is built in to the subjects of the things you're sending to this function, and implemented it again in a terrible way that actually misses the whole point of Stream.pipe() which is to handle back pressure as well as moving data forward.
Streams and pipe() are fundamental to node, they are a core level API that is being built to help deal with IO in a performant and simple way.
You don't get to say "we could implement that API easier" because that API doesn't even make sense with your alternate IO system. Also, it's already built, everyone writing node programs can use it today.
pipe() will be the way you move all data in node. In node 0.5.3 you can actually do this with my request library.
function (req, resp) {
  req.pipe(request.get('http://www.blah.com/image.png')).pipe(resp)
}
That's a one line proxy that handles all the headers the way you want.
Significantly simpler than your fibers example, and it all uses callbacks behind the scenes.
We're figuring out the places where abstractions make sense and what we can do with them.
fibers is not an abstraction, it's an alternative to the way node does IO, and it will need its own abstractions. If you want to build those, go ahead, but don't shoot down the API implementations we're giving people in node.js with these foolish examples.
-Mikeal
function pipe(inStream, outStream, callback) {
  ~function loop() {
    async.waterfall([
      function(next) { inStream.read(next); },
      function(data, next) { data !== null ? outStream.write(data, next) : callback(); }
    ], function(err) {
      err ? callback(err) : loop();
    });
  }();
}
On Aug 5, 2011, at 6:57 PM, C M wrote:
> // OMG PIPE NOT INVENTED HERE MUST MAKE MY OWN
> <<16 lines of code>>
This is 4x more code than the streamline & fibers version.
This stream API you're writing examples against doesn't exist. There
are many reasons why. It doesn't handle errors, and is
extraordinarily inefficient in the way it delivers backpressure.
There's no way to distinguish between a 0-byte read, an EOF, and an
error. It would fall down immediately in any use case more strenuous
than an example.
I therefore cannot evaluate any of these examples, because they only
exist in magic imagination land.
I look forward to the day when someone posts an example that subjects
their pet coroutine lib to a real use case. Until then, I find it
difficult to take these things any more seriously than the tooth
fairy.
State mutation in a context can happen with callbacks too. A callback has full access and can mutate everything in any of its outer contexts.
When using fibers, execution is halted in the middle of a function call, so its context is mutable for that reason.
When using callbacks, the contexts persist in the closures created by the callback, and are mutable too.
So I must be missing something; can you elaborate, please?
--
Jorge.
Oh, please. He's pulling rabbits out of his hat, and all you say is you don't like rabbits. I'm afraid you miss the point.
--
Jorge.
>
> This is probably the biggest "pro" for me. The way the Node community has totally dismissed the usefulness of exceptions seems clumsy to me. Exceptions are really really powerful.
>
>
> The way Bruno and Marcel have totally dismissed the usefulness of callbacks seems clumsy to me. Callbacks are really really powerful.
And fibers and continuation passing style are not mutually exclusive.
But while with fibers you can catch any exception perfectly, with CPS you simply CAN'T. It's not possible, because by the time the exception happens, you're already out of the function where you wanted to catch it.
It's simply a true limitation of CPS; it's not Bruno and Marcel dismissing callbacks.
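A minimal sketch of that limitation (mine, not Jorge's): by the time the callback runs, the try/catch frame is long gone, so the catch below is never reached for the async error.

var fs = require('fs');

function readConfig() {
  try {
    fs.readFile('/no/such/file', 'utf8', function (err, text) {
      // This runs on a later tick; throwing here does NOT hit the catch below,
      // it just becomes an uncaught exception.
      if (err) throw err;
      console.log(text);
    });
  } catch (e) {
    // Never reached for the async error: readConfig() has already returned.
    console.error('caught: ' + e.message);
  }
}

readConfig();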
> (snip)
Oh, please.
--
Jorge.
Let me try. And please don't try to turn it into a [f]lame war. I really
only want to give a concrete use case where fibers simplify things:
// PROBLEM: We want to use a validator function that has to make an
// async request to mongo:
var Validator = require('node-validator').Validator;
Validator.prototype.userExists = function (callback) {
  User.findOne({username: this.str}, function (err, user) {
    callback(err, !!user);
  });
};
// #1
Validator.check('f...@bar.com').userExists(function (err, exists) {
  // We get an err object if the user doesn't exist + exists == false
});
// #2
require('synchronous')
Validator.prototype.syncGetter('userExists');
var exists = Validator.check('f...@bar.com').userExists();
Before you comment, there's a practical application of the code further
below. Also note that the code using synchronous is not mine. It was
suggested by node-validator's author as one of the possible solutions
when I asked about async validation.
First, for those of you who are not familiar with how node-validator
works, it basically has a bunch of methods like `notNull`, `isEmail`,
etc, which all check a string stored in `this.str`, and throw an error
if there is one, or return `this` to allow chaining. You can then trap
the thrown error and do whatever you want: either respond with an error
message, or push the error message into an array and return `this` from
the error handler attached to the `Validator` object to resume the chain.
I've written an Express middleware which does the latter. I attach
Validator#check() to req.v, and make the error handler update the
req.validation.isValid flag when an error is flung (there's more stuff
in req.validation, that's why it has that long name). So the concrete
real-world example would look like this:
req.v('email', 'Required').notNull().notEmpty().isEmail();
if (!req.validation.isValid) {
  // Send error response
}
// Or do something
Now, suppose one of the conditions was that the email belongs to an existing
user:
//----- in a separate module -------
require('synchronous');
Validator.prototype.userExists = function (callback) {
  User.findOne({email: this.str}, function (err, user) {
    callback(err, !!user);
  });
};
Validator.prototype.syncGetter('userExists');
//----- in the request handler -------
req.v('email', 'Required').notNull().notEmpty().
  isEmail().userExists(); // Doesn't change the syntax
I think this is pretty good.
I was imprecise, I apologize.
What I mean is, the *non-fibers callback version* of the pipe function
that has been shown in this thread does not exist. No one actually
writes callback code that way, so the comparisons are all absurd
strawmen.
Pipe is a solved problem. It's solved with event emitters. It's not
even a "callback hell" type of problem, so I'm really not sure what
the point is of that. Replacing the whole Stream API pattern doesn't
make it easier to write node programs, so I'm not seeing the value
there.
Show me a real program that uses callbacks in real life, and then show
the comparable fibers counterpart. It should be shorter than a bunch
of the messages in this thread. I want a side-by-side that isn't fake
or contrived.
This is an actual attempt to get new information that I don't already
have. Please help me understand how fibers can make my programs
better, or at least, different.
By the way, this debate (however you frame it: events vs. threads;
continuation passing vs. fibers; etc) has been discussed extensively
in the academic literature, and is much older than node. One thing
that this research has shown is that at the end of the day everything
reduces to personal preference and quality of implementation. Both
approaches are appropriate.
Jeff
I seriously hope everybody got this already. :)))
So, your claim is that the streamline code is better than the callback
version generated by streamline? That's not terribly impressive.
There are already lots of tools that can turn JavaScript into less
readable JavaScript.
> I can provide hundreds of code fragments that demonstrate the
> difference in usability. I just need a bit of time to format them
> properly.
Not looking for fragments. Looking for the effect on a small,
finished, well-understood program. Otherwise, this is all just make
believe.
Is recursive directory removal too hard to do with fibers or
streamline or coroutines?
I'm so lost.
It sounds like you're saying that Streamline isn't useful for anything
beyond trivial examples, and you're sort of convincing me.
> Well, this is a technical problem (dealing with retries and all that
> stuff). Typically the kind of thing that I might still write in
> callback-style (or rather let you write for me!).
Ok.
What about if you don't have to handle edge cases? Then could you do
it with a coro pseudo-sync try/catch-able style? What would that look
like?
> I'd rather go with small typical samples of business logic.
In my business, I need logic to remove directories, because my
programs run on computers.
> The best format to explain this is probably as a blog post and it is
> going to take me some time to do so (especially as my week-ends are
> currently very loaded with lots of down to earth duties). So please be
> a bit patient.
If it would take more than an hour to write rimraf in streamline, it's
not any kind of solution to "callback hell".
> Time to quit, if you don't mind.
Not at all. Thanks for playing :)
On Sun, Aug 7, 2011 at 15:43, Nicolas Chambrier <nah...@gmail.com> wrote:
> I see lots of bad faith on either side... I don't think this discussion could
> go further anyway; you're both wasting your time, end of story ;)
I actually would like to be able to have this discussion, but everyone
keeps running away or getting mad. It's very confusing.
I just want an apples-to-apples comparison that isn't fake, that's all.
--
It's stupid for (callback) rimraf to keep the counters for EBUSY
waiting in a module-global object when it can just be a local var.
Derp. I'm dumb :)
The (coro) version doesn't handle one lstat race, where the lstat
throws, meaning that the file is already gone. Easy to fix.
Both versions have a race condition where the file might go away
between the lstat and the unlink/rmdir. They need to swallow ENOENTs.
I'll fix that. I think that may be the source of a really rare
sporadic failure in npm 0.x that I never tracked down, actually.
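A tiny sketch of that ENOENT-swallowing fix (mine, and the helper name is made up): if the file vanished between the lstat and the unlink, treat it as already removed.

var fs = require('fs');

function unlinkIgnoringENOENT(path, cb) {
  fs.unlink(path, function (err) {
    if (err && err.code !== 'ENOENT') return cb(err);
    cb(null); // the race described above: someone else removed it first, which is fine
  });
}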
The coro version doesn't handle the "gently" stuff that I just added
to use it with npm, but that's fine since that's actually kind of an
oddly specific use-case anyway. Now that I see the pattern of what
you've done here, though, I'm going to try to see if it's possible to
bolt that onto the coro version. I'll leave a comment when I've got
it done and you can tell me if I'm doing it wrong :)
Bottom line, it's looking like about a 20% LOC reduction, with the
trade-off being that the async bits are less explicit, none of which
should be a surprise to anyone. But I think having a good
side-by-side comparison of two programs doing the same thing will be
helpful for making this subject a bit less absurd.
I probably missed it, but I didn't see any solid facts, only personal opinions on this topic.
--