Some ideas that came up:
Promises can be replaced by a function taking a callback:
// promise style
posix.stat("file").addCallback(function(stat_info){...})
// callback style
posix.stat("file",function(stat_info){...})
The callback style has the benefit of being a little lighter, both
syntactically and in terms of resources. The Promise style has the
benefit that the returned Promise is an object that can then be
manipulated, combined with other promises, etc.
The callback style can be further improved by allowing the callback
function to be left out and provided later. This allows things
similar to what promises allow now. In my own fileIO API I call
this pattern a Continuable.
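In sketch form, a Continuable is just the function returned by the async call, which accepts its callback later. (The names below are hypothetical, and the I/O completes synchronously purely for illustration; real code would defer to the event loop.)

```javascript
// Hypothetical continuable-style stat; the result is fabricated.
function statC(path) {
  return function (callback) {
    callback({ path: path, size: 42 }); // stand-in for real stat data
  };
}

// Callback supplied immediately:
var immediate;
statC("file")(function (info) { immediate = info.path; });

// Or the continuable is stored and the callback provided later:
var pending = statC("file");
var later;
pending(function (info) { later = info.size; });
```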
There are some various syntax proposals along these lines from
Ryan, Jed, Felix, and myself:
http://inimino.org/~inimino/blog/fileio_first_release
http://boshi.inimino.org/3box/asof/1263250371783/fileIO/README.html
Promises could be moved out of the low-level node APIs altogether,
and put in a higher-level module, which can be built on these
simpler components.
What are people's thoughts about these various approaches? What
do we want for node, and what are your thoughts about promises as
they exist in node today?
I like that this is light-weight and compatible with other javascript environments, including browsers.
I don't use the rest of events all that much, so I don't have an opinion there.
> --
> You received this message because you are subscribed to the Google Groups "nodejs" group.
> To post to this group, send email to nod...@googlegroups.com.
> To unsubscribe from this group, send email to nodejs+un...@googlegroups.com.
> For more options, visit this group at http://groups.google.com/group/nodejs?hl=en.
>
If an async function takes three arguments:
fn( arg1, arg2, callback );
then a call with two uses a closure to return a partially applied
function that takes the callback:
fn( arg1, arg2 )( callback );
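A sketch of that convention (asyncAdd is an illustrative name, and the "async" work completes synchronously here to keep the example small):

```javascript
function asyncAdd(a, b, callback) {
  if (callback === undefined) {
    // Called with two arguments: return the partially applied function.
    return function (cb) { return asyncAdd(a, b, cb); };
  }
  callback(a + b); // a real API would defer this to the event loop
}

var direct, curried;
asyncAdd(1, 2, function (sum) { direct = sum; });   // three-argument form
asyncAdd(1, 2)(function (sum) { curried = sum; });  // partially applied form
```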
I'm not a huge fan of a separate Promise class, and would rather we
establish a simple convention, like the way jQuery plugin developers
know to have their methods always return "this" to enable chaining.
I'd also like to keep it easy to split callbacks into success
callbacks and error callbacks:
(my second choice would be to adopt Felix's approach of having the
error as the first argument in the callback.)
Another limiting thing about having promises is that they're only
called once. I'd rather think of callbacks as being able to be called
many times (like a stream) depending on purpose, and called with null
to indicate completion, as I'm doing in (fab).
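A sketch of that many-shot style, with null marking completion (emitLines and its synchronous delivery are illustrative, not part of any real API):

```javascript
function emitLines(lines, callback) {
  // The same callback fires once per item, stream-style...
  for (var i = 0; i < lines.length; i++) callback(lines[i]);
  callback(null); // ...and null signals end-of-stream
}

var received = [];
emitLines(["first", "second"], function (line) {
  received.push(line === null ? "<done>" : line);
});
```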
All these proliferating Promise/Continuation/Stream classes seem like
overkill when simple convention gets us just as much without worrying
about browser/node.js/Narwhal platform parity.
Jed
If the reasoning to avoid promises is performance, perhaps a good
trade-off could be that a callback may be included as the last
argument to the function: if it's not supplied, a promise is
returned; if it is present, null is returned and the callback is
called once the result is ready.
fn( arg1, arg2 ) -> returns a Promise
fn( arg1, arg2, callback ) -> returns null
This way programmers can choose to skip the promise creation overhead
where they need the extra performance but the default will offer a
common unified API.
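A minimal sketch of that hybrid convention (all names are hypothetical; the promise object is deliberately tiny and the result arrives synchronously for illustration):

```javascript
function statHybrid(path, callback) {
  var result = { path: path }; // stands in for real stat data
  if (callback) {
    callback(result); // fast path: no promise allocated
    return null;
  }
  return {
    addCallback: function (handler) { handler(result); return this; }
  };
}

var fast, ret, slow;
ret = statHybrid("file", function (r) { fast = r.path; }); // returns null
statHybrid("file").addCallback(function (r) { slow = r.path; });
```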
Regarding the possibility of executing a promise more than once,
perhaps the initial promise constructor could expose a clone() method
which would return a new, independent instance of the promise. I've
usually solved this by having functions act as promise "factories",
but perhaps a native cloning mechanism could be faster.
regards,
/imv
Jed Schmidt voiced on 1/31/10 12:05 AM:
The FileIO proposal was interesting to read. I had good fun wrapping
my head around the closures.
Here are some raw reflections I had while studying the code:
* addCallback, errback is more expressive and thus probably more
approachable for newbies?
Given this code:
node> s=file.streamLines('11.txt','ascii')
node> s(function(message_type){puts(message_type)})
node> s('next')
a) Imagine s('next') throws an exception randomly. Since s('next') is
sometimes synchronous, sometimes not, will I have two different
exception paths ?
b) Why not use a second variable for the stepper? streamLines ::
Path, Encoding -> Callback -> Stepper (not sure about the Haskell
notation). With each callback spawning a new reading stepper.
node> s=file.streamLines('11.txt','ascii')
node> var s2 = s(function(message_type){puts(message_type)})
node> s2('next');
c) By the way, why not let the stepper be an object. s2->next() and s2->close()
* stat, open and read implementations don't have return values ?
--
Jonas Pfenniger (zimbatm) <jo...@pfenniger.name>
On IRC, Ryan mentioned the possibility of making node use
callbacks in all the low-level APIs, which I think is a great
idea. Continuables, Promises (in all their various flavors),
and anything else people prefer can be built on top of this.
Then you can continue to use promises everywhere by using
a module that provides Promise-based I/O, or you can use
them only where you need them by creating them as needed.
Adding promises on top of callbacks is simple enough:
Say you have an implementation of stat that takes a callback:
function stat_callback_version(file, callback, errback){
  /* ... eventually calls callback(stat_data)
     or errback(error_info) ... */}
The promise-returning version creates a promise and
returns it:
function stat_promise_version(file){
  var promise = new Promise()
  stat_callback_version(file, function(stat_data){promise.keep(stat_data)}
    , function(error){promise.break(error)})
  return promise}
Even a simple Promise implementation doesn't take much code:
function Promise(){
  this.callbacks=[]
  this.errbacks=[]}
Promise.prototype.addCallback=function(handler){
  if('value' in this){
    var value=this.value
    process.nextTick(function(){handler(value)})}
  this.callbacks.push(handler)}
Promise.prototype.addErrback=function(handler){
  if('error' in this){
    var error=this.error
    process.nextTick(function(){handler(error)})}
  this.errbacks.push(handler)}
Promise.prototype.keep=function(value){
  this.value=value
  for(var i=0,l=this.callbacks.length;i<l;i++){
    this.callbacks[i](value)}}
Promise.prototype.break=function(error){
  this.error=error
  if(this.errbacks.length === 0) throw error
  for(var i=0,l=this.errbacks.length;i<l;i++){
    this.errbacks[i](error)}}
I like this too, as you know. If Ryan wants to add simple callback
based low-level APIs everywhere in node, we can build this on top
easily enough, as I did in fileIO on top of the (undocumented)
process.fs functions:
// stat :: path → Continuable Either Error StatResult
function stat(path){return function(cont){
  fs.stat(path, function(x){cont(x instanceof posix.Stats ? Right(x) : Left(x))})}}
The ugliest thing in this is having to test the result to determine
whether it is an error or not.
If node does move to an entirely callback-based API, I would prefer
a "double-barreled" style with separate callbacks for success and
failure.
On 2010-01-30 17:33, Jonas Pfenniger wrote:
> Hi inimino,
>
> The FileIO proposal was interesting to read. I had good fun wrapping
> my head around the closures.
>
> Here are some raw reflections I had while studying the code:
Thanks.
> * addCallback, errback is more expressive and thus probably more
> approachable for newbies?
Yes, probably. Isaacs mentioned this on IRC as well. I'm not
sure if the gained clarity warrants the verbosity. Personally,
in my experience with Promises so far, I have found them just a
little beyond my tolerance for verbosity, so I'm leaning towards
brevity, even at the cost of an extra minute or two of explanation
for newbies. (Async APIs tend to need explanation anyway.)
> Given this code:
>
> node> s=file.streamLines('11.txt','ascii')
> node> s(function(message_type){puts(message_type)})
> node> s('next')
>
> a) Imagine s('next') throws an exception randomly. Since s('next') is
> sometimes synchronous, sometimes not, will I have two different
> exception paths ?
Yes.
> b) Why not use a second variable for the stepper? streamLines ::
> Path, Encoding -> Callback -> Stepper (not sure about the Haskell
> notation). With each callback spawning a new reading stepper.
> node> s=file.streamLines('11.txt','ascii')
> node> var s2 = s(function(message_type){puts(message_type)})
> node> s2('next');
I like this API, but it would require buffering the stream
contents since the steppers are independent and can read the
stream at different rates.
As for whether a stream should return a separate stepper, or
the caller should continue to call the stream itself with
'next' messages... I wasn't quite sure which was the best.
I went with what I did mainly so that things like
node> s(function(){ ... s('next') ...})
don't have to be written using an extra assignment:
node> var s2=s(function(){ ... s2('next') ...})
> c) By the way, why not let the stepper be an object. s2->next() and s2->close()
I wanted to build an API with functions in place of objects.
After all, if you have closures, you don't really /need/
anything else... fileIO is partly an experiment in building
APIs on that principle, and then seeing how nice we can make
them.
> * stat, open and read implementations don't have return values ?
Right, the low-level (undocumented) ones in process.fs that I'm
using don't have (meaningful) return values, they just call their
callback with a value when they have one.
Thanks for your thoughts! It's really helpful to get others'
responses to these ideas.
Now I know what was missing: when you introduced the choice of either
streamLines(path, encoding, callback) or streamLines(path, encoding)(callback),
the second form can be seen as a partial application of streamLines.
You can also build it from the outside like this:
var pSlice = Array.prototype.slice;
function partial(fn) {
  var args = pSlice.call(arguments, 1);
  return function () {
    return fn.apply(null, args.concat(pSlice.call(arguments)));
  };
}
And then:
var p = partial(streamLines, path, encoding)
p(callback) //-> Steppable
If the partial application is called twice, it means we get two
different streams, since it is equivalent
to calling streamLines two times.
I forgot to mention: the reason why this solution is more "clean" is
that there are fewer side effects. But as a Haskeller you probably
already know that :-)
> I wanted to build an API with functions in place of objects.
> After all, if you have closures, you don't really /need/
> anything else... fileIO is partly an experiment in building
> APIs on that principle, and then seeing how nice we can make
> them.
This is one of the things that has always bothered me about JavaScript. It's a neat language, but it can't decide whether it wants to be a pragmatic Self or a pragmatic Lisp. I think we're going to keep running into these issues as we build an ecosystem around node. Mixing and matching between Selfish and Lispy code is going to get pretty ugly at times.
Colin
One of the advantages of removing promises entirely would be
simplifying the docs. I'd prefer node to be 'only i/o' - except in
situations where it makes it very difficult to use without support -
e.g. having 'require'.
(most importantly, not having to support wait() anymore)
* Having a single callback for both error/success
* Having only a single "value" a promise can take on at any time
Otherwise we are just bike-shedding syntax here, but we won't achieve
any simplification for creating more complex constructs such as
chains, groups, etc..
So, if we were to go with a single callback, we'd need a dedicated
error parameter. My initial proposal was for that to always be the
first parameter, but as people have pointed out, this is annoying for
callbacks that have no error conditions.
However, what about this (I'll refer to the API as continuable):
* Continuable callbacks always have exactly two parameters: value,
error
* The error parameter has to be 'instanceof Error'
* When nothing is returned in a callback, the current error or value
(depending on the state) keeps propagating
* Another continuable can be returned from any callback
* If a continuable remains in an error state and no continuable callback
is attached by the next tick, an unhandled error exception is thrown
Here is an example how this could look in practice:
https://gist.github.com/85c313e588a3fc9d8636
In every callback, either the 'value' or the 'error' parameter could
have a value, never both.
Let me know what you think,
-- fg
On Feb 1, 8:16 am, "Isaac Z. Schlueter" <i...@foohack.com> wrote:
> +1. Removing promises would rock.
>
> I'm quite a fan of the double-barreled callback/"continuable" style.
>
> posix.cat("file")(success, error);
> posix.cat("file")(success); // default error just throws
>
> On Jan 31, 11:44 am, Ryan Dahl <coldredle...@gmail.com> wrote:
> > On Sun, Jan 31, 2010 at 11:23 AM, Ryan Dahl <coldredle...@gmail.com> wrote:
+1
For both these reasons, I think removing promises would be great.
It will also help spark a little more experimentation around the
various approaches to convenient asynchronous programming.
I think more advanced events belong in extra libraries, but not built in and used by all core node apis.
Yes, it is partial application.
(Actually, there is no streamLines(path,encoding,callback) and the
second form is required, so it's a bit like Haskell in which all
functions take exactly one argument, and some of the arguments are
tuples.)
The idiom I used is:
function foo(x,y){return function(z){
...
}}
giving foo :: (a, b) -> c -> ...
> If the partial application is called twice, it means we get two
> different streams, since it is equivalent
> to calling streamLines two times.
That's an interesting point, and I immediately really like the
idea... the functions that return continuables already work
this way (though I don't think I've mentioned it anywhere) so
readFile :: (path::String, encoding::String) → Continuable Either Error String
implies that you can do
myFile = readFile("path/to/file","binary")
myInterleavedFiles = interleaveLines(myFile, myFile)
where interleaveLines is suitably defined, and it will just
open the same file twice and interleave lines from each,
exactly as if the files were different. This is one of the
nice things about Continuables. Since they don't do any I/O
until they have a continuation, they can be combined, chained,
etc. freely and no I/O happens until something actually needs
the resulting value.
I'm not sure how well this carries over to streams. What if
we are wrapping a stream that can't be duplicated?
In the case of streamLines (with an underlying disk file)
this is no problem as I can just open the file again, but
what if the stream represents a socket, or an incoming HTTP
request, or something similar? It's always possible to make
the second call throw, I suppose, but then you don't have
referential transparency anymore anyway.
> I forgot to mention : the reason why this solution is more "clean" is
> that there are less side-effects. But as a Haskeller you probably
> already know that :-)
In Haskell terms, the question is where the IO goes in the type
signature. (I didn't try putting anything like this in my type
annotations, partly trying to avoid scaring people off with the
M-word and partly because I'm still figuring all this out.)
Essentially the stream encapsulates some I/O but the question is
whether that I/O begins when you connect the stream to a consumer,
or whether attaching a consumer is pure partial application, in
which case you can do it multiple times and duplicate a stream.
At some point, instead of partial application, you're stepping
down into the level of what would be IO implementation, where
you have some value of type IO () and the only thing you can do
with that is run it and see what happens.
I think your idea of having s return s2 means maintaining
referential transparency and purity for one extra function
call, whereas the current API dives into IO implementation
at that point.
In a JavaScript API, we have to use function calls to mean
both application of pure functions, and something like runIO,
since function calls are all we have.
Interestingly, though, with a suitable type system and some
annotations where necessary, you could get a lot of compiler
assistance in writing this kind of JavaScript, that would
statically catch about as many errors as a Haskell compiler
can, up to the point at which you're actually at the IO
implementation level and all bets are off.
> I'm quite a fan of the double-barreled callback/"continuable" style.
>
> posix.cat("file")(success, error);
> posix.cat("file")(success); // default error just throws
This is my favorite suggestion so far, as long as we provide a way of
opting out of the overhead of creating a promise closure at all:
posix.cat( "file" )( success, error );
posix.cat( "file" )( success ); // default error just throws
posix.cat( "file", success ); // default error just throws
Jed
Interestingly, OO in the Self style seems best accomplished by
throwing away the features in JavaScript that look a little like
Self, in that you can actually have true encapsulation, and you
can easily implement such things as method_missing, which are
impossible with standard JavaScript objects.
So instead of:
var x = new Foo()
x.foo()
x.bar(42)
You can get much closer to Self or Smalltalk semantics with:
var x = Foo()
x('foo')
x('bar',42)
Including the flexibility for the full range of crazy
inheritance tricks. For those who like the original
message-passing-only conception of OO, this is the way to
get it.
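A small sketch of that message-passing style (Counter and its messages are illustrative). Note that the state is genuinely private, and unhandled messages give you method_missing for free:

```javascript
function Counter() {
  var count = 0; // not reachable as a property from outside
  return function (message, arg) {
    if (message === 'increment') { count += (arg || 1); return count; }
    if (message === 'value') { return count; }
    throw new Error("method_missing: " + message); // trivially customizable
  };
}

var c = Counter();
c('increment');      // count is now 1
c('increment', 41);  // count is now 42
```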
This has the advantage that you don't have to write the logic for
chained callbacks twice, and it's not up to the library developer
to decide what an "error"/"success" is.
Kind regards,
Jan
Example:
var File = {
  open: function (filename, mode) {
    // Set value for the optional parameter
    mode = mode || "w";
    return function (callback) {
      // If there is an error then
      // throw new Error("Cannot open " + filename + " because ...");
      // Do something and when done...
      callback(fd);
    };
  },
  read: function (fd, bytes) {
    bytes = bytes || 1024;
    return function (callback) {
      // Read bytes and then when done...
      callback(data);
    };
  },
  close: function (fd) {
    return function (callback) {
      // Close the file and then...
      callback();
    };
  }
};
And then to use it:
try {
  File.open("test.txt")(function (fd) {
    File.read(fd)(function (data) {
      File.close(fd)(function () {
        // DONE!
      });
    });
  });
} catch (e) {
  // There was a problem somewhere.
  // Handle it please.
}
Also a benefit from the "double barreled" approach vs traditional CPS is that the base function may have variable arguments and/or optional arguments. The callback is cleanly separated out into a second invocation.
Unfortunately try..catch does not work this way. By that I mean that a
try..catch block can never be re-entered. Once you leave the block,
that is, the first time the imaginary execution pointer moves outside of
it, there is no way back in. Any callbacks that seem to lead back into
the block will execute as if it wasn't there.
Essentially this is what makes this whole async function metaphor more
difficult to model than one would think.
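A small synchronous model makes this concrete. The queue below is a stand-in for the event loop, so we can see exactly where the exception surfaces:

```javascript
var queue = [];
var log = [];

try {
  // "Schedule" a callback that will throw later.
  queue.push(function () { throw new Error("async failure"); });
  log.push("left the try block");
} catch (e) {
  log.push("caught by our try/catch"); // never happens
}

// "Later", the event loop runs the callback; our try/catch is long gone.
queue.forEach(function (fn) {
  try { fn(); } catch (e) { log.push("surfaced at the event loop instead"); }
});
```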
-- fg
On Mon, Feb 1, 2010 at 11:29 PM, Tim Caswell <t...@creationix.com> wrote:
> And then to use it:
>
> try {
> File.open("test.txt")(function (fd) {
> File.read(fd)(function (data) {
> File.close(fd)(function () {
> // DONE!
> });
> sys.debug("3");
> });
> sys.debug("2");
> });
> } catch(e) {
> // There was a problem somewhere.
> // Handle it please.
> }
> sys.debug("1");
Wouldn't the output be:
1
2
3
Which means the catch wouldn't catch anything?
Kind regards,
Jan
One issue with most of the suggestions I've seen for this
so far is that they require type testing to distinguish
success or failure.
If you're going to have a single callback, make it
unambiguous, so that the same pattern can be used regardless
of the range of values that might be returned.
In Felix's[1] example:
function loadConfig() {
  return posix.cat('config.json')
    (function(config, error) {
      return config || posix.cat('configuration.json')
    })
    (function(config, error) {
      return error || JSON.parse(config);
    })
}
[1]: https://gist.github.com/85c313e588a3fc9d8636
This works only until you need to return undefined (or 0,
"", null, or Boolean false...) as a successful asynchronous
result.
I like to use values which are either [0,<error>] or [1,<data>],
which gives[2]:
function loadConfig() {
  return posix.cat('config.json')
    (function(either) {
      return either[0] ? either[1] : posix.cat('configuration.json')
    })
    (function(either) {
      return either[0] ? JSON.parse(either[1]) : "Could not read conf file: " + either[1]
    })
}
[2]: https://gist.github.com/67cbf71c79fd5b6bf3a1
Or if you prefer separate arguments, make the first indicate
success and the second be either the successful result or the
error:
function loadConfig() {
  return posix.cat('config.json')
    (function(succeeded, value) {
      return succeeded ? value : posix.cat('configuration.json')
    })
    (function(succeeded, value) {
      return succeeded ? JSON.parse(value) : "Could not read conf file: " + value
    })
}
-Ray
On Feb 1, 2010, at 5:01 PM, Felix Geisendörfer wrote:
You're not actually performing I/O. Your callbacks are being called
from the functions themselves instead of from the event loop.
Same gist, but updated: http://gist.github.com/292192
+1 for separate callbacks.
-0.5 for continuable style:
posix.cat("file")(function() {}, function() {});
One more closure than needed, callbacks not labelled.
Propose:
posix.cat("file", { onSuccess: function() {}, onError: function()
{} });
Avoids closure, callbacks labelled. Extensible.
Propose:
Asynchronous calls without an onError callback fail silently rather
than throw an exception (given the difficulties of catching an
exception thrown asynchronously, and the lack of interest in the
exception signaled by the developer not providing an onError
callback).
(Sometimes) it is necessary to do synchronous IO. How could this be
incorporated into the interface (so that we can remove .wait())?
Re: mentions of "newbies". Node would be better optimizing for
experienced developers, so as firstly to attract them and secondly to
avoid creating an API for "dummies" (see the recent standardization
fiasco around window.localStorage in this regard).
On Tue, Feb 2, 2010 at 2:30 AM, Joran Greef <joran...@gmail.com> wrote:
[SNIP]
> posix.cat("file")(function() {}, function() {});
> One more closure than needed, callbacks not labelled.
...the ES3 grammar allows:
posix.cat("file")(function callBack() {}, function errBack() {});
-Louis
Actually, continuable style takes only one callback. I'm not
sure this style has a name, but it's not a continuable.
Actual continuable style:
posix.cat("file")(function(x){ /* determine whether x is error or success, and take appropriate action */ })
This means that continuable convenience libraries (chains,
groups, etc) don't have to care how many different types of
results (0,1,2, or more) the underlying asynchronous action
can produce.
I'm not proposing the continuable style be exposed by core
node APIs. It can easily be built on top, as can promises,
and if node exposes separate callbacks, there is effectively
no overhead in doing so.
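A sketch of building such a single-callback continuable on top of a double-barreled core function. Here cat2 is a hypothetical stand-in for a core API taking separate success and error callbacks, and it "completes" synchronously for illustration:

```javascript
function cat2(path, success, error) {
  if (path === "missing") error(new Error("ENOENT: " + path));
  else success("contents of " + path);
}

function catContinuable(path) {
  return function (cont) {
    cat2(path, cont, cont); // the consumer tests `x instanceof Error`
  };
}

var ok, failed;
catContinuable("file")(function (x) {
  if (x instanceof Error) failed = x; else ok = x;
});
catContinuable("missing")(function (x) {
  if (x instanceof Error) failed = x; else ok = x;
});
```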
> Propose:
> posix.cat("file", { onSuccess: function() {}, onError: function() {} });
> Avoids closure, callbacks labelled. Extensible.
-1 for a low-level API. Creates unneeded object where just
two functions will do:
posix.cat("file", function success(){}, function error(){});
The function names are optional.
> Propose:
> Asynchronous calls without an onError callback fail silently rather
> than throw an exception (given the difficulties of catching an
> exception thrown asynchronously, and the lack of interest in the
> exception signaled by the developer not providing an onError
> callback).
-1
I don't find the difficulties of catching an exception
asynchronously compelling... if the developer doesn't want
an exception to be thrown, they can pass an error handler.
I'd actually say the low-level API should throw an exception
if called without an error handler argument.
The alternative problem, silent and invisible failure, is a
much greater hazard in my opinion.
> (Sometimes) it is necessary to do synchronous IO. How could this be
> incorporated into the interface (so that we can remove .wait())?
Perhaps a separate function for each:
var fileContents = posix.cat_sync("file")
Or better, a separate module:
var sync = require("posix_sync")
var fileContents = sync.cat("file")
I think it would be easier to document the asynchronous
versions if they are completely separate from the
synchronous versions, which can be given reduced visibility
in the API and come with some performance warnings.
> Re: mentions of "newbies". Node would be better optimizing for
> experienced developers, so as firstly to attract them and secondly to
> avoid creating an API for "dummies" [...]
Agreed.
However in the low-level APIs I think we ought to worry
first about what is performant and what is flexible enough
for other things to be built on top of it.
Then higher-level APIs can compete on features like newbie-
friendliness, compatibility with existing client-side
libraries, composability, and so on.
When possible, it should be difficult to accidentally write code
that silently and invisibly fails
cat('file',success_callback) // whoops, forgot the error callback, this silently fails
vs.
cat('file',success_callback,function(){} /*throw away errors*/) // here the programmer had to explicitly choose to not care about failure.
I prefer the latter.
The cool thing about functions is that you can apply multiple
arguments at once.
I'll move from the simple posix.cat example to an example with
multiple values and which can fail with an error message:
sql queries. :)
function a(b, c, callback) {
  mysql_query("insert into magic_table pseudocode query (...)",
    function (status, data) {
      if (status) {
        callback(false, "Cannot add item!");
        return;
      }
      // do custom code
      callback(true);
    });
}
[1] http://gist.github.com/293086
In this case the custom callback in mysql_query wants to expose the
error with a first argument when it calls callback(true/false). Since
exceptions don't work here (no one knows when the async call comes back
...), using functions with a flag as the first argument works pretty
well. In contrast to always having to define an
ErrBack/CompleteCallback, you don't need to pass the flag if it's not
necessary.
Disadvantages: If I want to add new arguments to the function 'a', I
have to move the callback. The proposal from Joran (using an options
argument with callback) would avoid this.
We actually just need this for an asynchronous path through the
application flow. In case of events, we still have the event system.
:)
Kind regards,
Jan
--
> Javascript functions are variadic. Let Javascript be Javascript.
Hi Joran,
Would you mind providing a little more context when you reply to a message here? I can't make heads or tails of what you write, because you don't quote (or down quote extensively enough) the original message.
Thanks,
Colin