Re: Stream


r...@tinyclouds.org

Jan 26, 2010, 12:28:37 PM
to Isaac Schlueter, nodejs
On Tue, Jan 26, 2010 at 1:06 AM, Isaac Schlueter <i...@izs.me> wrote:
> Hey, I saw that you pastied this: http://pastie.org/794198
>
> I didn't really know the context, if you'd done something like that in
> the C level already, or what.  But it's a good fit for ejsgi, so I
> implemented it here: http://github.com/isaacs/node-stream
>
> Thanks!  It's a good API.  This wasn't specified in your pastie, but I
> went a step further and guarantee that write/close won't trigger
> data/drain/eof events synchronously in any case, since that means i
> can do stuff like: out.write(); out.close(); return out; and have the
> calling function be notified of the events without losing chunks of
> the body.  Also, I added "pause" and "resume" events, since I need to
> be able to pause the http body if an app pauses the stream.

So, after I merge net2 I'll be reworking all the streaming interfaces
to look like that, or at least something similar. HTTP request and
response objects will follow it too - so instead of getting "body"
events and doing "sendBody", they will act as normal data streams.
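
To make that concrete, a handler might end up looking something like
this rough sketch (sendHeader and the exact event names are assumptions
here, just to show the shape of it):

http.createServer(function (req, res) {
  res.sendHeader(200, {"Content-Type": "text/plain"});
  // req is just a readable stream now - no "body" event
  req.addListener("data", function (chunk) { res.write(chunk) });
  // res is just a writable stream - no sendBody/finish
  req.addListener("eof", function () { res.close() });
});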

What's cool is we'll be able to write abstract functions to pump data
from a readable stream into a writable one. Maybe we could have
something where you open a new file write stream (which under the
hood uses the posix module), and if you wanted to do an HTTP server
with uploads you'd do

createFile("/tmp/x", function (fStream) {
  pump(req, fStream);
});

The pump would do automatic throttling. Similarly, if you want to
stream a file through a socket you might do

openFileStream("/data/movie.avi", function (fStream) {
  pump(fStream, socket);
});

The "pump" function would be very simple

function pump (readStream, writeStream, cb) {
  readStream.addListener("data", function (d) {
    // pause the source whenever the destination's buffer fills up
    if (!writeStream.write(d)) readStream.pause();
  });
  readStream.addListener("eof", function () {
    writeStream.close();
    cb();
  });
  writeStream.addListener("drain", function () {
    // destination has flushed, start reading again
    readStream.resume();
  });
}

Or at least something like that.
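
As a usage sketch, the upload example above could then finish the
response once the pump completes (createFile is the hypothetical helper
from above; sendHeader and the response bits are just illustrative):

http.createServer(function (req, res) {
  createFile("/tmp/upload", function (fStream) {
    // pump throttles automatically: it pauses req when fStream.write()
    // returns false and resumes it on fStream's "drain" event
    pump(req, fStream, function () {
      res.sendHeader(200, {"Content-Type": "text/plain"});
      res.write("upload complete\n");
      res.close();
    });
  });
});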

r...@tinyclouds.org

Jan 26, 2010, 12:51:02 PM
to Isaac Schlueter, nodejs
On Tue, Jan 26, 2010 at 1:06 AM, Isaac Schlueter <i...@izs.me> wrote:
> Hey, I saw that you pastied this: http://pastie.org/794198

CommonJS should adopt this as their I/O API, and it should be the basis
of the socket and HTTP server APIs. It needs to be broken in before
proposing it there. For example, it's not clear to me what the best way
is to "filter" data through the callbacks.

Ray Morgan

Jan 26, 2010, 12:53:53 PM
to nod...@googlegroups.com, Isaac Schlueter
I really like this! It will be amazing to have a common interface.

-Ray


Isaac Schlueter

Jan 26, 2010, 12:59:10 PM
to nod...@googlegroups.com
I've got an example of doing that now in ejsgi. Check out
http://github.com/isaacs/ejsgi/blob/master/examples/rot13.js

The key is that write() and close() need to be guaranteed async. They
can push data onto the buffer (or close it to new writes)
automatically, but they must not fire "data" or "eof" events until at
least process.nextTick.
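
A minimal sketch of how a stream could provide that guarantee
(hypothetical internals, assuming a _buffer array and an
EventEmitter-style emit):

Stream.prototype.write = function (chunk) {
  var self = this;
  this._buffer.push(chunk);
  // defer emission so that out.write(x); out.close(); return out;
  // still delivers x to listeners the caller attaches afterwards
  process.nextTick(function () {
    while (self._buffer.length) self.emit("data", self._buffer.shift());
  });
  return true; // a real stream would report back-pressure here
};

Stream.prototype.close = function () {
  var self = this;
  // queued after any pending writes, so "eof" always follows the data
  process.nextTick(function () { self.emit("eof") });
};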

Then, you can have something like this:

function filter (input, output, cb) {
  input.addListener("data", function (chunk) { output.write(cb(chunk)) });
  input.addListener("eof", function () { output.close() });
}

Now everything that input reads will be written to output after being
filtered through cb. The callback could of course buffer it, look for
EOL tokens, whatever.
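
For example, a callback that buffers partial lines and quotes each
complete one might look like this (a sketch assuming string chunks):

var partial = "";
filter(input, output, function (chunk) {
  var lines = (partial + chunk).split("\n");
  partial = lines.pop();  // keep the trailing fragment for the next chunk
  return lines.map(function (line) { return "> " + line + "\n" }).join("");
});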

I'm going to write a sed-style middleware using this principle today.

--i

Daniel N

Jan 26, 2010, 4:26:11 PM
to nod...@googlegroups.com
Using this principle and attaching listeners to the "data" event, is it
possible to modify each chunk as it passes through the listeners on
its way out of the stream?
