Unfortunately, request.pause() currently does not pause the request right away. A few more 'data' events may still be emitted. That's a problem in node, and Ryan wants to address it at the event loop level, but hasn't done so yet.
server.listen(3000);
--
You received this message because you are subscribed to the Google Groups "nodejs" group.
To post to this group, send email to nod...@googlegroups.com.
To unsubscribe from this group, send email to nodejs+un...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/nodejs?hl=en.
Wrt this, I think you're mistaken, Marco.
--
Jorge.
var fs = require('fs');
var ONE = new Buffer("ONE");
var TWO = new Buffer("TWO");
function cb (err) { if (err) throw err; }
fs.write(process.stdout.fd, ONE, 0, 3, -1, cb);
fs.write(process.stdout.fd, TWO, 0, 3, -1, cb);
might eventually produce the surprising (but correct!) output TWO ONE. However, one can be confident that
console.log("ONE");
console.log("TWO");
is going to produce the output ONE TWO in the expected order, always.
--
Jorge.
So, just to be clear: req.pause() currently does work - kind of.
It will /eventually/ stop the req from emitting 'data' events,
just not immediately. It's arguable that this is the correct API - the
packet has already arrived and is being parsed - there's not much to
"pause".
That said, many people have found this to be ugly and I'm 70% through
fixing it. The todo item is
https://github.com/ry/node/blob/bb7bf58cc77c07b6421c44872926460479aee496/TODO#L31
https://github.com/ry/node/tree/http_parser_refactor
It will make it so that no 'data' events will happen after a call to pause().
This file is probably best for looking at:
https://github.com/ry/node/blob/d79122575745ca5dcebd75162fac86a250caa963/http_parser.js
no
--i
how is it inconsistent right now between fs and net streams?
> Essentially this issue with pause/resume doesn't come up as often with fs streams because they generally exhibit the "immediate pause" behavior that everyone seems to expect.

fs.ReadStream pause() is *always* immediate. It buffers. However, it is guaranteed that the maximum buffer size is identical to the buffer size used for chunking the reads, so it seems like a no-brainer there.

> Could you comment on why you prefer having the buffering handled in core instead of in userland or at least in a js module in stream?
I don't want anything to be in core, I merely want the core to be consistent and follow the stated goals of the project:
Goal: Because nothing blocks, less-than-expert programmers are able to develop fast systems.
Consistency with this goal means: pause() behaves the same way on all streams.
If I have to know about the semantics of the underlying stream, and that there is a latency problem when calling pause(), it gets really complex. Now, if that's the route we decide to go, fine. But let's be consistent about it.
--fg

On Tue, Mar 1, 2011 at 3:50 PM, Mikeal Rogers <mikeal...@gmail.com> wrote:

> how is it inconsistent right now between fs and net streams?
--
Marco Rogers
marco....@gmail.com
Life is ten percent what happens to you and ninety percent how you respond to it.
- Lou Holtz
Marco Rogers
March 1, 2011 1:09 PM
@Mikeal
I'm assuming Felix is referring to the behavior you can expect when you call pause() on an fs stream vs. a net stream. With fs there's little to no delay between when you call pause() and when it stops emitting 'data' events, because it just stops reading from the fd. But with a net stream it's much more likely that you'll get several more 'data' events before it's actually paused, because there is a pipeline between when the data is actually read off the socket and when it actually gets emitted. This is even more pronounced with http, because it's going through the parser.
Essentially this issue with pause/resume doesn't come up as often with fs streams because they generally exhibit the "immediate pause" behavior that everyone seems to expect.
If I have the wrong idea about this please let me know.
@Felix
Could you comment on why you prefer having the buffering handled in core instead of in userland, or at least in a js module in stream? The pause(buffer = true) idea would throw a wrench into the simplicity of stream.pipe. My idea is: if you know you want it buffered, set it up that way.
socket.pipe( new BufferedStream(4096) ).pipe( process.stdout );
or even
( new BufferedStream( socket, 4096 ) ).pipe( process.stdout );
:Marco
Then we should emit that last data chunk on fs streams to make it consistent, not add additional abstractions to net streams.
It's quite simple. pause() and resume() are methods (and events in a pump chain) that correspond to fd signals to stop and resume pulling data. If you want to buffer data, run it through a buffering stream, which we can easily provide.
why clone and hack rather than just pumping to a buffered stream?
> It's quite simple. pause() and resume() are methods (and events in a pump chain) that correspond to fd signals to stop and resume pulling data. If you want to buffer data, run it through a buffering stream, which we can easily provide.
Forwarded message:
From: Mikeal Rogers <mikeal...@gmail.com>
Reply To: nod...@googlegroups.com
To: nod...@googlegroups.com
Date: Friday, March 4, 2011 10:19:46 PM
Subject: Re: [nodejs] Re: Is Request.pause useless?
--
I think the main use case we came up with - doing some IO in order to figure out where to route a request - is a low latency operation: you're talking to a local redis or memcached, hopefully.
For that use case, you shouldn't pause the input stream, and once it's piped, you should never do any more buffering.
You have a short amount of time during which you need to do some IO. You don't want to pause the input stream, because that's going to cause at least two roundtrips to the client, which will probably take longer than talking to redis.
Once the streams are piped together, you want the fd signals to go back and forth uninterrupted, because the amount of time you would need to buffer for is unknown, and the last stream you write to will buffer any leftover data anyway. You need to immediately pause the input stream if the output stream can't handle data.
Just in case redis or memcached takes longer than expected, or the client upload is absurdly fast, you have the limit parameter, which will pause the input stream once a buffering threshold is hit.
I'm not saying there isn't a use case for buffering in between pause/resume but I don't know what it is and I'd like it laid out before we talk about the best way to solve it. If the use case is "you have crazy application code that is causing pause/resume after you pipe to it" that's not a real use case, that's just bad and/or lazy code.