I'm working on a patch to add the ability to recv_body in chunks, e.g. for
writing to disk while maintaining a small buffer when you don't
know what the final attachment size will be.
Here's a diff:
http://friendpaste.com/6UQ7fJPnp0wajZzh6ozbUc
The patch is a first pass; when it's done we're planning to use it in
CouchDB, but I'd like to make sure it makes it upstream.
I added a function, stream_chunked_body, which I was able to use
internally in the implementation of read_chunked_body.
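Since the friendpaste diff isn't reproduced here, here's a minimal sketch of
the idea (not the actual patch; module, helper, and callback names are
illustrative only): decode each Transfer-Encoding: chunked chunk off a
passive-mode binary gen_tcp socket and hand it to a caller-supplied fun.

-module(chunk_stream_sketch).
-export([stream_chunked_body/3]).

%% Decode chunked transfer coding and feed each chunk to ChunkFun,
%% threading an accumulator State through the calls.
stream_chunked_body(Socket, ChunkFun, State) ->
    case read_chunk_size(Socket) of
        0 ->
            %% a zero-size chunk marks the end of the body; trailers ignored here
            ChunkFun(done, State);
        Size ->
            Chunk = read_chunk_data(Socket, Size),
            stream_chunked_body(Socket, ChunkFun, ChunkFun({Size, Chunk}, State))
    end.

read_chunk_size(Socket) ->
    ok = inet:setopts(Socket, [{packet, line}]),
    {ok, Line} = gen_tcp:recv(Socket, 0),
    %% e.g. <<"1A\r\n">> or <<"1A;ext=val\r\n">> -> 26
    [Hex | _] = binary:split(Line, [<<";">>, <<"\r\n">>]),
    binary_to_integer(Hex, 16).

read_chunk_data(Socket, Size) ->
    ok = inet:setopts(Socket, [{packet, raw}]),
    %% read the chunk data plus its trailing CRLF
    {ok, <<Data:Size/binary, "\r\n">>} = gen_tcp:recv(Socket, Size + 2),
    Data.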
Hoping for feedback!
Chris
--
Chris Anderson
http://jchris.mfdz.com
CouchDB has 127 nils. ;)
We mostly use null to represent JSON null. It's not particularly
crucial to me which we use here. ;)
>
> I would take a different approach entirely -- offer a streaming
> version of recv_body that takes an argument for callback and an
> argument for the maximum size for framing. For regular bodies you
> would read in chunks of <= MaxSize and send it to the callback, for
> chunked bodies you'd read in chunks of <= min(MaxSize, ChunkSize) for
> convenience. With your API proposal, the client controls the framing,
> which seems weird to me given the use case. I would probably call it
> stream_body instead of recv_body since the implementation is going to
> be almost entirely divergent (recv_body might use stream_body though).
>
> I'll see if I can find some time to look at this a bit more closely
> later today (no promises).
Great feedback. That hits the nail on the head regarding my doubts
about it. I might not pick it up again today, but I'm happy to apply
your suggestions.
Weird, why not use 'null' to represent null? :)
-bob
I think this new version of the patch accomplishes what you are looking for.
http://friendpaste.com/4Qb4gAsRHXJ6gf5TCfU549
I switched the nil to undefined.
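As a hypothetical caller (again using the sketch modules above, not the real
patch), the original use case looks like this: stream an attachment of
unknown size to disk while holding only one chunk in memory, with the
accumulator counting bytes written.

-module(attachment_sketch).
-export([write_attachment_to_disk/3]).

%% Write the streamed body to Path, one chunk at a time, using a
%% 64 KiB framing limit; returns the number of bytes written.
write_attachment_to_disk(Socket, Framing, Path) ->
    {ok, Fd} = file:open(Path, [write, raw, binary]),
    WriteChunk =
        fun(done, BytesWritten) ->
                ok = file:close(Fd),
                BytesWritten;
           ({Size, Bin}, BytesWritten) ->
                ok = file:write(Fd, Bin),
                BytesWritten + Size
        end,
    stream_body_sketch:stream_body(Socket, Framing, 64 * 1024, WriteChunk, 0).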
Cheers,
OK, this looks good to me. Applied in r90.
-bob