Proposal: Add "pause" and "resume" events to readable streams

Mikeal Rogers

Jul 7, 2010, 1:19:14 AM
to nod...@googlegroups.com
This came up earlier on the list regarding pump().


When you're pumping a readable stream to a writable stream and write() returns false, you call .pause() on the readable stream. This works fine as long as the readable stream isn't itself a writable that is being pumped to. For instance:

File Read Stream -pump> Gzip Stream -pump> HTTP response.

Ideally, when the HTTP response's write() returns false you want the File Read stream to pause, but with the current Stream API and pump methods you can't reliably do this.

If, for some reason, you can't pause the initial read stream, the best thing to do is buffer at the HTTP response stream, which is closest to the client, rather than buffering in intermediate streams like the gzip stream above.

The best way I can think of to handle propagating this through the pump is to have the readable stream emit "pause" and "resume" when it isn't directly responsible for a file handler.

We can then improve pump by adding handlers for "pause" and "resume", and third-party readable/writable streams like gzippers can start emitting them.
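
Roughly, the improved pump would look something like this. This is a sketch only, not the node-utils implementation; it assumes the usual conventions that readables emit "data"/"end" and have pause()/resume(), and that writables return a boolean from write() and emit "drain".

function pump(readable, writable) {
  readable.addListener('data', function (chunk) {
    // Writable can't keep up: pause the readable if we can.
    if (writable.write(chunk) === false) readable.pause();
  });
  writable.addListener('drain', function () {
    readable.resume();
  });

  // The proposed addition: a readable/writable in the middle of a chain
  // (like the gzip stream above) has no file handler of its own to stop,
  // so it emits "pause"/"resume" and the pump propagates them upstream.
  writable.addListener('pause', function () { readable.pause(); });
  writable.addListener('resume', function () { readable.resume(); });

  readable.addListener('end', function () {
    writable.end(); // or close(), depending on the writable
  });
}

And wiring up the chain from the example above (GzipStream here is a stand-in for whatever third-party gzip stream you're using, not something in core):

var fs = require('fs');
var http = require('http');

http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Encoding': 'gzip'});
  var file = fs.createReadStream('/path/to/some/big/file');
  var gzip = new GzipStream(); // hypothetical third-party readable/writable
  pump(file, gzip); // gzip emits "pause"/"resume", pump pauses/resumes file
  pump(gzip, res);  // res.write() returning false pauses gzip
}).listen(8000);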

The only alternative is to expect authors of readable/writable streams to maintain some internal "paused" state and return false on the subsequent writes. The issue with this is that it's easy to screw up and there still isn't a good way to propagate the "resume" event.

-Mikeal

Matt Ranney

Jul 7, 2010, 3:25:31 AM
to nod...@googlegroups.com
On Tue, Jul 6, 2010 at 10:19 PM, Mikeal Rogers <mikeal...@gmail.com> wrote:
File Read Stream -pump> Gzip Stream -pump> HTTP response.

Ideally, when the HTTP response's write() returns false you want the File Read stream to pause, but with the current Stream API and pump methods you can't reliably do this.

When you pause the gzip stream, won't that cause the next write into the gzip stream to return false?

It does mean that for every step in the chain you'll have a little bit buffered, but maybe this is a feature.

Marco Rogers

Jul 7, 2010, 9:55:25 AM
to nodejs
Yep, sounds like you need "pause" and "resume" events.

I'm still confused about what the actual implementation of pause is though. It just does stopRead(), which just lets the input hang? There's no buffering of the input until resume is called? So for instance, say this happens on a form post from the browser. When request.pause() is called, the browser will just sit and spin until resume() is called? Is that ideal?

:Marco

Mikeal Rogers

Jul 7, 2010, 2:19:45 PM
to nod...@googlegroups.com
Hopefully not.

I don't think we should encourage buffering in intermediate streams or require them to keep a "paused" state where they return false from their write(). Readable/writable streams should be free to just proxy data through: take write calls and emit a data event with their changes (when applicable). The pump should be responsible for catching a "pause" event on the readable/writable and pausing the read stream.
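
Something like this is all an intermediate stream would need to be (a sketch; ProxyStream and transform are made-up names, not an existing module):

var sys = require('sys'); // 'util' in later versions
var EventEmitter = require('events').EventEmitter;

// A readable/writable that keeps no buffer and no "paused" flag: it just
// transforms chunks as they arrive and signals back-pressure with the
// proposed "pause"/"resume" events.
function ProxyStream(transform) {
  EventEmitter.call(this);
  this.transform = transform;
}
sys.inherits(ProxyStream, EventEmitter);

ProxyStream.prototype.write = function (chunk) {
  this.emit('data', this.transform(chunk));
  return true; // never refuses a write; throttling is the pump's job
};

// No file handler to stop, so just tell whoever is pumping into us.
ProxyStream.prototype.pause = function () { this.emit('pause'); };
ProxyStream.prototype.resume = function () { this.emit('resume'); };

ProxyStream.prototype.end = function () { this.emit('end'); };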

-Mikeal


Mikeal Rogers

Jul 7, 2010, 2:23:33 PM
to nod...@googlegroups.com
Since streams are a generic interface, stopRead() is really just a suggestion. If the input can be paused, it is; if it can't, it will continue to emit data events. pause() is not guaranteed to stop data events until resume().

You are correct about request.pause() being called, but in that case it would be because the disk, or whatever is actually writing or persisting the data in the request (maybe a database), is backed up. It's much better to pause the client and spin for a second than to fill up your available memory and crash :) If you have so many upload requests that your disk or database is backed up, it's probably enough concurrent uploads to fill up the available memory if you don't pause them.

-Mikeal

Marco Rogers

Jul 7, 2010, 2:55:49 PM
to nodejs
If pause is not guaranteed to do anything, it probably shouldn't be part of the official API for streams. Or, failing that, it should be explained and qualified. The way you describe it, calling pause on a stream where it doesn't actually do anything useful is a way to lose data. Or, if data events continue to be emitted, it's essentially muddling up code without providing any real throttling capability.

But setting that aside, it feels to me like it should work like this.
For a readable stream:

##
pause() = (1) the stream will stop emitting data events. (2) The
"pause" event will be emitted.

This doesn't say anything about implementation. Buffering on pause is
optional. But then it's up to the provider of the stream to make it
clear what happens when pause is called (connection hangs, data is
buffered in memory or to a file or to a db, data is lost (!))

This is also a little misleading because any data that has already
been received and put on the event loop will still be fired right? It
would be nice to be able to say that all current data events have been
fired _before_ "pause" is emitted. Then at least you can be sure you
won't receive any more events from that point until you call resume.
But I realize that might be impossible because the stream doesn't
track data events once they've been fired.

resume() = (1) The "resume" event will be emitted. (2) Data events
will resume firing. (notice the difference in order here)

Any buffered data MUST be emitted _in the order it was received_ before resuming real-time streaming. Buffering data is still optional here, but if it's used it needs to be implemented in a sensible manner and preserve the consistency of the stream. I don't know how this fits with dgram or UDP stuff; not that knowledgeable about it.
##
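
As a sketch, a readable that opts into buffering could satisfy that ordering like this (illustrative only; feed() stands in for wherever the data actually comes from):

var sys = require('sys');
var EventEmitter = require('events').EventEmitter;

function BufferingReadable() {
  EventEmitter.call(this);
  this.paused = false;
  this.buffered = [];
}
sys.inherits(BufferingReadable, EventEmitter);

// Internal: called with each chunk the underlying source produces.
BufferingReadable.prototype.feed = function (chunk) {
  if (this.paused) this.buffered.push(chunk); // buffering is optional
  else this.emit('data', chunk);
};

BufferingReadable.prototype.pause = function () {
  this.paused = true;  // (1) stop emitting data events
  this.emit('pause');  // (2) then emit "pause"
};

BufferingReadable.prototype.resume = function () {
  this.emit('resume'); // (1) emit "resume" first
  this.paused = false; // (2) then data events fire again,
  while (this.buffered.length) {
    this.emit('data', this.buffered.shift()); // flushing the buffer in order
  }
};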

For a node i/o stream (fs, net, http), it follows this loose outline, but things are better specified and more consistent.

##
pause() = stream will stop emitting data events. Data is not buffered. If it's a file system descriptor, it will stop reading from the descriptor and no cursor state will change. If it's a unix or network socket, it will stop reading from the connection, but the connection will remain open. (Leaving a net connection open in a paused state risks a timeout or disconnect from the other end; what happens then?)

resume() = data events will be resumed. Stream will start reading from the file descriptor or socket again. (If the socket has timed out or been disconnected, what happens? The "timeout" or "end" event has already been fired, correct?)
##

That's enough for one post. Writable stream is much simpler anyway.
The only open question in my mind is, is an implementer required to
return "false" from write() and under what circumstances is this
required?

:Marco

Mikeal Rogers

Jul 7, 2010, 3:08:39 PM
to nod...@googlegroups.com
pause works when it can, and when it can't you *do not* want to buffer data at the read stream. You want to buffer it at the last writer in a series of pumps, so that the data is closest to the client when the client resumes, or closest to the writer to disk.

When a readable/writable stream isn't in charge of a file handler it should emit a "pause" event. That's the purpose of this proposal: to define what a readable stream does when it isn't in charge of the actual file handler.

Requiring the stream to buffer (stop emitting data events) is dangerous. Let the stream that is talking to the final writer be responsible for buffering, instead of making every readable/writable stream anyone implements responsible for it. That way we can often optimize the next write after resume (a larger fsync when writing to disk), and we only need to write the buffering logic in one place.

-Mikeal


Marco Rogers

Jul 7, 2010, 3:20:40 PM
to nod...@googlegroups.com
I agree with this in general (I don't think what I wrote up contradicts this, let me know if I'm wrong).

But sending "pause" events all the way back to the original reader is contrary to what you're saying right?  So in the earlier example if the write to http response returns false, the gzip is paused.  The pause event gets fired and caught by the file reader, which also pauses.  But instead you're saying the file reader doesn't care if there's a block down stream, it just keeps sending data events?  The gzip should buffer until the response is ready to write again?

I can buy that.  But then when is it appropriate to listen for the "pause" event at all?

I think it would be very useful to set up some of these scenarios and do some testing.  I'm very interested in the streams api solidifying and working in a consistent manner.  But it sounds like there are some kinks to work out.

:Marco
--
Marco Rogers
marco....@gmail.com

Life is ten percent what happens to you and ninety percent how you respond to it.
- Lou Holtz

Mikeal Rogers

Jul 7, 2010, 3:35:38 PM
to nod...@googlegroups.com
A pump should listen for "pause" and "resume" events on its "writable" argument to handle the case where that stream is a readable/writable stream. Upon receiving the event it would call .pause() or .resume() on its "readable" argument.

What I don't agree with in your proposal is the requirement that after .pause() is called on the readable stream it stop emitting data events. In some cases there will be pending data events, or there may still be some out-of-process work happening on previous data (if, for instance, the gzipper calls out to a subprocess), and that data should *still* be emitted and written through the pump until it is buffered at the final writer.

-Mikeal

Marco Rogers

Jul 7, 2010, 4:12:17 PM
to nod...@googlegroups.com
I'm okay with that scenario except for the fact that you've called pause().  What's the point of calling it if you can't expect it to actually do anything?  I think it's misleading and will have people making assumptions that aren't true.  The OP in that original thread already spent a lot of time working with the API assuming that pause() actually means... pause.  As soon as it doesn't, we're going to have lots more people having trouble getting consistent behavior.  But let's come back to that.

Okay, here are a few questions to see if I'm following you. Assume the setup mentioned previously:

FileReader -pump1-> GZipper -pump2-> HttpResponse
/* GZipper is a read/write stream. pump1 and pump2 are the pipes connecting these streams. In Mikeal's node-utils implementation they are simple event emitters */

Now assume the HttpResponse has returned false on write.

- GZipper.pause() is called, right?  Who calls it?  pump2, I'm assuming.
- Does the "pause" event make it all the way back to the file reader?  And does that actually pause, as in stop reading from the file?
- Which agent actually does buffering when it's necessary?  Is it pump2 or the gzipper?
- When drain is emitted GZipper.resume() is called right? and the resume event makes it all the way back to the file reader and restarts it?

This makes sense to me I guess.  The important distinction is that pause/resume is for _throttling_ streams when there is a block at a writer. But it doesn't necessarily _stop_ them.  Each link in the chain should still expect to get some data events.  And the last link before the write block needs to buffer.  I'm fine with that.  But where does the buffering code live?  If we're not expecting userland modules to write their own every time (e.g. in the GZipper), it needs to be inherited from a node agent right?  That's why I asked if the pump mechanism encapsulates the buffering feature. If so then it has to live in pump2 and pump has to be a first class api method in node.

I guess I'm saying the semantics for read streams and write streams should be consistent irrespective of each other.  If a stream happens to be both, you can expect the read behavior on one side and the write behavior on the other, nothing fancy.  When you have to do special things or make special assumptions based on if a particular object has both properties, it becomes harder to reason about.  And unnecessarily so IMO.

With that being said, I still have a lot to learn in order to recognize when that kind of straightforward semantics just isn't possible.  Mikeal, you and ry have been very helpful in that respect.  The questions I'm raising are meant to help clarify rather than cause contention.

:Marco

Mikeal Rogers

Jul 7, 2010, 4:59:35 PM
to nod...@googlegroups.com
On Wed, Jul 7, 2010 at 1:12 PM, Marco Rogers <marco....@gmail.com> wrote:
I'm okay with that scenario except for the fact that you've called pause().  What's the point of calling it if you can't expect it to actually do anything?

What I'm actually suggesting is that pause() attempt to stopRead() on its file handler if it has one, and emit "pause" if it cannot.

Streams are a generic interface description so they don't *require* stopRead() because it might not be applicable. The "pause" event gives Stream implementors something to do when they cannot stopRead().
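
In code the dispatch would be roughly this (a sketch; fd, stopRead() and startRead() just stand in for however a particular stream owns and controls its handler):

var sys = require('sys');
var EventEmitter = require('events').EventEmitter;

function GenericReadable(fd) {
  EventEmitter.call(this);
  this.fd = fd || null; // null when this stream doesn't own a file handler
}
sys.inherits(GenericReadable, EventEmitter);

// Placeholders for the real handler control.
GenericReadable.prototype.stopRead = function () { /* stop the fd watcher */ };
GenericReadable.prototype.startRead = function () { /* restart the fd watcher */ };

GenericReadable.prototype.pause = function () {
  if (this.fd) this.stopRead(); // we own a handler: actually stop reading
  else this.emit('pause');      // nothing to stop: ask the pump to pause our source
};

GenericReadable.prototype.resume = function () {
  if (this.fd) this.startRead();
  else this.emit('resume');
};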
 
I think it's misleading and will have people making assumptions that aren't true.  

Which assumption? That data events won't be fired after pause() is called?
 
The OP in that original thread already spent a lot of time working with the api assuming that pause() actually means... pause.  As soon as it doesn't, we're going to have lots more people having trouble getting consistent behavior.  But let's come back to that.

Okay, here's a few questions to see if I'm following you. Assume the setup mentioned previously

FileReader -pump1-> GZipper -pump2-> HttpResponse
/* GZipper is a read/write stream. pump1 and pump2 are the pipes connecting these streams. In Mikeal's node-utils implementation they are simple event emitters */

Now assume the HttpResponse has returned false on write.

- GZipper.pause() is called, right?  Who calls it?  pump2, I'm assuming.

Yes, one of the things a pump is responsible for is pausing the readable stream when the write() returns false.

FYI, in my stream utils event emitter a "pause" event is emitted *on the pump EventEmitter*, and my thinking was that people who set up the pumps could use this to propagate the pause up to the parent stream, but this proved difficult and kind of annoying.
 
- Does the "pause" event make it all the way back to the file reader?  And does that actually pause, as in stop reading from the file?

I'm proposing that Gzipper just emit "pause" and not attempt to buffer.

Whoever is sending data to the Gzipper from a file handler is responsible for handling that event and pausing the file reader stream; this will most likely be a pump.
 
- Which agent actually does buffering when it's necessary?  Is it pump2 or the gzipper?

Neither. The HTTP Response handles the buffering. It, conveniently, already implements buffering and when the client is available to accept writes again it's in the best position to optimize that write.
 
- When drain is emitted GZipper.resume() is called right? and the resume event makes it all the way back to the file reader and restarts it?

Correct, since Gzipper isn't responsible for a file handler it should just emit a "resume" event and the pump should be listening to this and resume the file reader.

We *could* simply reuse the "drain" event in this case. I'm just a little worried about some case where you might need to differentiate the two, but it could be a better idea to just use "drain".
 

This makes sense to me I guess.  The important distinction is that pause/resume is for _throttling_ streams when there is a block at a writer. But it doesn't necessarily _stop_ them.  Each link in the chain should still expect to get some data events.  And the last link before the write block needs to buffer.  I'm fine with that.  But where does the buffering code live?

Buffering already *must* be implemented by any stream that is responsible for writing a file handler except in special cases like synchronous filesystem operations (for obvious reasons).

Also, it's not implemented yet, but there are great optimizations you can make to fsync() when you're writing sequential data: if you have 8 chunks in your buffer you can batch the write and the fsync().
 
 If we're not expecting userland modules to write their own every time (e.g. in the GZipper), it needs to be inherited from a node agent right?  That's why I asked if the pump mechanism encapsulates the buffering feature. If so then it has to live in pump2 and pump has to be a first class api method in node.

pumps, for the most part, aren't responsible for any buffering, only writable Streams that have a file handler.

The exception is pumps that pump to multiple destinations, like my multiPump: they only pause the input streams if *all* the writable streams pause; if not, the data is buffered in the pump. But this is a rare case.
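
(Very roughly, and not the actual multiPump code, the pause side of a one-to-many pump could look like this; a real version would also re-check write()'s return value while flushing:)

function multiPump(readable, writables) {
  var waiting = []; // per-writable buffer, only used while that writable is full
  var blocked = 0;  // how many writables are currently pushing back

  writables.forEach(function (writable, i) {
    waiting[i] = null;
    writable.addListener('drain', function () {
      var buf = waiting[i] || [];
      waiting[i] = null;
      blocked--;
      buf.forEach(function (chunk) { writable.write(chunk); });
      readable.resume();
    });
  });

  readable.addListener('data', function (chunk) {
    writables.forEach(function (writable, i) {
      if (waiting[i]) { waiting[i].push(chunk); return; }
      if (writable.write(chunk) === false) {
        waiting[i] = []; // this writable is full; buffer for it in the pump
        blocked++;
      }
    });
    // only pause the input once *every* writable has pushed back
    if (blocked === writables.length) readable.pause();
  });
}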
 

I guess I'm saying the semantics for read streams and write streams should be consistent irrespective of each other.  If a stream happens to be both, you can expect the read behavior on one side and the write behavior on the other, nothing fancy.  When you have to do special things or make special assumptions based on if a particular object has both properties, it becomes harder to reason about.  And unnecessarily so IMO.

the problem we're discussing *only* happens when a stream is both readable and writable. if a stream is only readable or only writable it can't be in the middle of two pumps like this.
 

With that being said, I still have a lot to learn in order to recognize when that kind of straightforward semantics just isn't possible.  Mikeal, you and ry have been very helpful in that respect.  The questions I'm raising are meant to help clarify rather than cause contention.

I hope these latest comments clarify things a bit more.
 
-Mikeal

Marco Rogers

Jul 7, 2010, 5:44:37 PM
to nod...@googlegroups.com
Ah, having the buffering in the httpresponse makes this a lot easier to swallow.  A few things.

What I'm actually suggesting is that pause() attempt to stopRead() on its file handler if it has one, and emit "pause" if it cannot.


I would expect that if there is a "pause" event it gets fired every time pause() is called, even if stopRead is also called.  If you don't need it, just don't listen for it.

Streams are a generic interface description so they don't *require* stopRead() because it might not be applicable. The "pause" event gives Stream implementors something to do when they cannot stopRead().
 

Agreed, except you should always emit pause. There's no harm and it keeps things consistent.  But this leaves the question of ordering.  Is "pause" emitted before or after stopRead is called?  Is stopRead async? And if so, should the "pause" event wait until it's successful?  What if it's not?

Which assumption? That data events won't be fired after pause() is called?

Yeah that's the assumption that I had and the OP had. It's easy to read pause() as "call this and the stream is stopped. you're safe from events until you resume".  I understand now why that's a false assumption, but it's not clear from the spec.  I don't think that problem is going to go away.  It's just a teaching point I suppose.  I was trying to think of better names that would put people more in the mind of throttling rather than "stop this stream".  No luck yet.
 

Yes, one of the things a pump is responsible for is pausing the readable stream when the write() returns false.

FYI, in my stream utils event emitter a "pause" event is emitted *on the pump EventEmitter*, and my thinking was that people who set up the pumps could use this to propagate the pause up to the parent stream, but this proved difficult and kind of annoying.

Right, that's what I meant by "pause makes it back to the file reader".  Through successive pause events down the chain.  That's sensible and keeps things decoupled.
 
I'm proposing that Gzipper just emit "pause" and not attempt to buffer.

Whoever is sending data to the Gzipper from a file handler is responsible for handling that event and pausing the file reader stream; this will most likely be a pump.

Yeah I think we're on the same page with the pause/resume events.
 
Neither. The HTTP Response handles the buffering. It, conveniently, already implements buffering and when the client is available to accept writes again it's in the best position to optimize that write.

This was the critical piece I was missing in your explanation.  Essentially write() returning false should really only happen on core i/o streams like socket output, file writing, etc.  And these should also include code to buffer any data if they can't write, because data events can still come in.  These data events will be written in the same order when the write stream is flushed.  I like this.

So the things that weren't clear in terms of implementation are these:

1) If you are an upstream data source and stream.write returns false, you should start the pause chain that will eventually make it back to the core read stream.

2) But you may still get data events from further upstream.  You should still accept those and still call stream.write.  The write stream should still be able to accept writes, and if it can't actually write to a file descriptor or socket, it should buffer them.  Essentially returning false from write() is a signal to throttle the data if possible, but it shouldn't stop the stream.
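
As a sketch of that contract on the writable side (flushSome() and onWritable() are placeholders for however the real stream talks to its socket or descriptor):

var sys = require('sys');
var EventEmitter = require('events').EventEmitter;

function BufferedWritable() {
  EventEmitter.call(this);
  this.buffered = [];
}
sys.inherits(BufferedWritable, EventEmitter);

// write() never refuses data; returning false is only a hint to throttle.
BufferedWritable.prototype.write = function (chunk) {
  this.buffered.push(chunk);
  this.flushSome();
  return this.buffered.length === 0; // false means "throttle me if you can"
};

// Placeholder: push as much of this.buffered to the fd/socket as it will
// accept right now, leaving the rest queued in order.
BufferedWritable.prototype.flushSome = function () {};

// Placeholder hook, called when the underlying fd/socket is writable again.
BufferedWritable.prototype.onWritable = function () {
  this.flushSome();
  if (this.buffered.length === 0) this.emit('drain');
};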
 
- When drain is emitted GZipper.resume() is called right? and the resume event makes it all the way back to the file reader and restarts it?

Correct, since Gzipper isn't responsible for a file handler it should just emit a "resume" event and the pump should be listening to this and resume the file reader.

We *could* simply reuse the "drain" event in this case. I'm just a little worried about some case where you might need to differentiate the two, but it could be a better idea to just use "drain".

Nah, I like sticking with pause/resume.  drain is specific to the "blocked" write stream.  It's paired with write() == false which only happens sometimes.
 
Buffering already *must* be implemented by any stream that is responsible for writing a file handler except in special cases like synchronous filesystem operations (for obvious reasons).

Also, it's not implemented yet, but there are great optimizations you can make to fsync() when you're writing sequential data: if you have 8 chunks in your buffer you can batch the write and the fsync().
 
 If we're not expecting userland modules to write their own every time (e.g. in the GZipper), it needs to be inherited from a node agent right?  That's why I asked if the pump mechanism encapsulates the buffering feature. If so then it has to live in pump2 and pump has to be a first class api method in node.

pumps, for the most part, aren't responsible for any buffering, only writable Streams that have a file handler.

The exception is pumps that pump to multiple destinations, like my multiPump: they only pause the input streams if *all* the writable streams pause; if not, the data is buffered in the pump. But this is a rare case.
 

I guess I'm saying the semantics for read streams and write streams should be consistent irrespective of each other.  If a stream happens to be both, you can expect the read behavior on one side and the write behavior on the other, nothing fancy.  When you have to do special things or make special assumptions based on if a particular object has both properties, it becomes harder to reason about.  And unnecessarily so IMO.

the problem we're discussing *only* happens when a stream is both readable and writable. if a stream is only readable or only writable it can't be in the middle of two pumps like this.

Yeah I understand all of this now.  I think the disconnect was I didn't understand where the buffering functionality lived. The thing is I don't think these read/write streams will be that uncommon.  Processing raw streams before writing them out to wherever will probably happen a lot.  That's why I'm concerned about getting the semantics right.  Thanks for the clarification.

Ry, do you agree with the consensus that is forming here?  I want to take another shot at updating the api to reflect this thinking.

:Marco

Mikeal Rogers

Jul 7, 2010, 6:01:57 PM
to nod...@googlegroups.com
On Wed, Jul 7, 2010 at 2:44 PM, Marco Rogers <marco....@gmail.com> wrote:
Ah, having the buffering in the httpresponse makes this a lot easier to swallow.  A few things.

What I'm actually suggesting is that pause() attempt to stopRead() on its file handler if it has one, and emit "pause" if it cannot.


I would expect that if there is a "pause" event it gets fired every time pause() is called, even if stopRead is also called.  If you don't need it, just don't listen for it.

Sure, I guess the focus of the effort is for the case where you can't stopRead() but there isn't any reason we shouldn't emit in the other case as well.
 

Streams are a generic interface description so they don't *require* stopRead() because it might not be applicable. The "pause" event gives Stream implementors something to do when they cannot stopRead().
 

Agreed, except you should always emit pause. There's no harm and it keeps things consistent.  But this leaves the question of ordering.  Is "pause" emitted before or after stopRead is called?  Is stopRead async? And if so, should the "pause" event wait until it's successful?  What if it's not?

This is a good question. Basically, should a pause event handler be allowed to do something that might prevent stopRead from being called?
 

Which assumption? That data events won't be fired after pause() is called?

Yeah that's the assumption that I had and the OP had. It's easy to read pause() as "call this and the stream is stopped. you're safe from events until you resume".  I understand now why that's a false assumption, but it's not clear from the spec.  I don't think that problem is going to go away.  It's just a teaching point I suppose.  I was trying to think of better names that would put people more in the mind of throttling rather than "stop this stream".  No luck yet.

I think the spec should be clearer that there are no assurances and that if you truly can't handle getting data events anymore you'll need to implement buffering.
 
 

Yes, one of the things a pump is responsible for is pausing the readable stream when the write() returns false.

FYI, in my stream utils event emitter a "pause" event is emitted *on the pump EventEmitter*, and my thinking was that people who set up the pumps could use this to propagate the pause up to the parent stream, but this proved difficult and kind of annoying.

Right, that's what I meant by "pause makes it back to the file reader".  Through successive pause events down the chain.  That's sensible and keeps things decoupled.
 
I'm proposing that Gzipper just emit "pause" and not attempt to buffer.

Whoever is sending data to the Gzipper from a file handler is responsible for handling that event and pausing the file reader stream; this will most likely be a pump.

Yeah I think we're on the same page with the pause/resume events.
 
Neither. The HTTP Response handles the buffering. It, conveniently, already implements buffering and when the client is available to accept writes again it's in the best position to optimize that write.

This was the critical piece I was missing in your explanation.  Essentially write() returning false should really only happen on core i/o streams like socket output, file writing, etc.  And these should also include code to buffer any data if they can't write, because data events can still come in.  These data events will be written in the same order when the write stream is flushed.  I like this.

So the things that weren't clear in terms of implementation are these:

1) If you are an upstream data source and stream.write returns false, you should start the pause chain that will eventually make it back to the core read stream.

Right, separating the concerns a bit more clearly: the pump "actor" is responsible for calling .pause() on a readable when its writable stream either returns false from a write() or emits a pause event.
 

2) But you may still get data events from further upstream.  You should still accept those and still call stream.write.  The write stream should still be able to accept writes, and if it can't actually write to a file descriptor or socket, it should buffer them.  Essentially returning false from write() is a signal to throttle the data if possible, but it shouldn't stop the stream.

Yeah, pump is fairly new so we should specify some good generic "proxying" semantics.
 
 
- When drain is emitted GZipper.resume() is called right? and the resume event makes it all the way back to the file reader and restarts it?

Correct, since Gzipper isn't responsible for a file handler it should just emit a "resume" event and the pump should be listening to this and resume the file reader.

We *could* simply reuse the "drain" event in this case. I'm just a little worried about some case where you might need to differentiate the two, but it could be a better idea to just use "drain".

Nah, I like sticking with pause/resume.  drain is specific to the "blocked" write stream.  It's paired with write() == false which only happens sometimes.
 
Buffering already *must* be implemented by any stream that is responsible for writing a file handler except in special cases like synchronous filesystem operations (for obvious reasons).

Also, it's not implemented yet, but there are great optimizations you can make to fsync() when you're writing sequential data: if you have 8 chunks in your buffer you can batch the write and the fsync().
 
 If we're not expecting userland modules to write their own every time (e.g. in the GZipper), it needs to be inherited from a node agent right?  That's why I asked if the pump mechanism encapsulates the buffering feature. If so then it has to live in pump2 and pump has to be a first class api method in node.

pumps, for the most part, aren't responsible for any buffering, only writable Streams that have a file handler.

The exception is pumps that pump to multiple destinations, like my multiPump: they only pause the input streams if *all* the writable streams pause; if not, the data is buffered in the pump. But this is a rare case.
 

I guess I'm saying the semantics for read streams and write streams should be consistent irrespective of each other.  If a stream happens to be both, you can expect the read behavior on one side and the write behavior on the other, nothing fancy.  When you have to do special things or make special assumptions based on if a particular object has both properties, it becomes harder to reason about.  And unnecessarily so IMO.

the problem we're discussing *only* happens when a stream is both readable and writable. if a stream is only readable or only writable it can't be in the middle of two pumps like this.

Yeah I understand all of this now.  I think the disconnect was I didn't understand where the buffering functionality lived. The thing is I don't think these read/write streams will be that uncommon.  Processing raw streams before writing them out to wherever will probably happen a lot.  That's why I'm concerned about getting the semantics right.  Thanks for the clarification.

Ry, do you agree with the consensus that is forming here?  I want to take another shot at updating the api to reflect this thinking.

I'm pretty sure Ry is still on vacation.
 

:Marco

