The short answer is: if you use yield and set "response.stream = True"
for that URL, that tells CherryPy "don't buffer the output", in
which case it behaves very much like write(). See
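A minimal WSGI-level sketch of that yield-based shape (the handler body here is mine, purely illustrative):

```python
def app(environ, start_response):
    # Yielding chunks instead of calling write() lets an unbuffered
    # server (e.g. CherryPy with response.stream = True) pass each
    # chunk to the client as soon as it is produced.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    for i in range(3):
        yield ('step %d done\n' % i).encode('utf-8')
```

With buffering left on, CherryPy would collect all the chunks before sending; streaming changes when the bytes hit the wire, not the final body.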
From: Yash Parghi
Sent: Tuesday, August 14, 2012 8:18 AM
Subject: [cherrypy-users] Re: Dumping the request body on write in
Thanks for the explanation, Daniel. Though I don't think it justifies
this specific behavior of CherryPy's, it seems in practice that we
shouldn't put too much stock in write()'s behavior in general. I'll look
into yielding instead of write() -- more than anything else, I depend on
write for its implicit immediacy in sending data to the client, so if
yielding doesn't send the data across the wire "soon enough" for the
client's user experience, I'll have to stick with write().
On Friday, August 10, 2012 1:55:32 PM UTC-5, Daniel Dotsenko wrote:
See (especially the section on .write()):
Don't use write()
The alternatives are, unfortunately, complex. I recommend going for a
full-blown generator, as any in-between (inline yields) solution will
quickly become too limiting for you.
To simulate a generator "in-line", try:
yield "some more"
for chunk in ['rest', 'of', 'response']:
    yield chunk
(a generator function can't also return a list; the remaining chunks
have to be yielded too)
But the real solution is to make a looping generator class and just
return it (after start_response(...)).
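A minimal sketch of that shape (the class and names are mine, not from any particular project):

```python
class ChunkRelay(object):
    """Iterable response body: the WSGI server pulls chunks one at a
    time, so nothing is buffered ahead of the client."""

    def __init__(self, source):
        self.source = source  # any iterable producing byte strings

    def __iter__(self):
        for chunk in self.source:
            if chunk:        # skip empty chunks; yielding b'' is
                yield chunk  # legal but pointless

def app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return ChunkRelay([b'rest', b'', b'of', b'response'])
```

The point of a class rather than a bare generator function is that you can hang state and cleanup logic (e.g. a close() method) on it.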
I wish I could give you a simple example of a smart generator, but I
don't know of one. The closest write-up I know of (link truncated:
...n-be-very-harmful.html) does not show parallel stream processing.
I use one such parallel stream-processing generator in one of my
projects, and it works well against CherryPy's WSGI server. (The
generator itself is in subprocessio.py - it wraps a call to git in a
subprocess and relays the command output back in real time without
caching the data to disk.)
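A much-simplified stand-in for that idea (the function name is mine, and the real subprocessio.py is considerably more robust):

```python
import subprocess

def stream_command(argv, chunk_size=4096):
    # Run a command and yield its stdout in chunks as they arrive,
    # never writing anything to disk. Each chunk can be relayed
    # straight to the WSGI client.
    proc = subprocess.Popen(argv, stdout=subprocess.PIPE)
    try:
        while True:
            chunk = proc.stdout.read(chunk_size)
            if not chunk:
                break
            yield chunk
    finally:
        proc.stdout.close()
        proc.wait()
```

A WSGI app can simply `return stream_command(['git', ...])` after calling start_response, and the server pulls chunks as the subprocess produces them.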
On Tuesday, August 7, 2012 12:52:12 PM UTC-7, Yash Parghi wrote:
I'm new to WSGI and CherryPy, and I'm trying to understand why
CherryPyWSGIServer reads and discards the request body when my app first
calls write(). The code snippet that does this is quoted below, from
CherryPyWSGIServer (we're using 3.1.1, but the behavior looks the
same in later versions).
This is what's happening in our app, as best I can tell:
1. an unchunked request comes in with a large body and a Content-Length
2. my app starts reading the body from wsgi.input
3. at some arbitrary point mid-read, my app sends back a progress update
to the client via write() (which is, as far as cherrypy is concerned,
just some plain old bytes)
4. cherrypy consumes the rest of the request body as part of sending the
response headers before sending the data passed to write().
I see the rationale in the CherryPyWSGIServer code, from the HTTP 1.1
spec -- "the server SHOULD NOT close the transport connection until it
has read the entire request" -- but I don't see how that entails reading
and discarding the request on the first write, since the connection
isn't being closed at that point. Can anyone clarify/expand?
If that rationale stands for whatever reason, how _should_ my app use
write() without discarding the remainder of the request? Do I need to
make sure the entire request body has been read before I ever call
write()? Is that a WSGI expectation?
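If draining the body up front turns out to be the answer, here's roughly what I have in mind, assuming an unchunked request with a Content-Length header (a sketch, names mine):

```python
def read_full_body(environ, chunk_size=65536):
    # Read the entire request body from wsgi.input before the first
    # write(), so the server has nothing left to discard. Assumes an
    # unchunked request with a Content-Length header.
    length = int(environ.get('CONTENT_LENGTH') or 0)
    stream = environ['wsgi.input']
    parts = []
    while length > 0:
        chunk = stream.read(min(chunk_size, length))
        if not chunk:
            break  # client closed early; return what we have
        parts.append(chunk)
        length -= len(chunk)
    return b''.join(parts)
```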
Thanks a lot -
if (not self.close_connection) and (not self.chunked_read):
# Read any remaining request body data on the socket.