Yeah, of course there are some problems in my application caused by
processing such huge pieces of data; at the very least, any synchronous
operation on that much data blocks the main thread for several tenths
of a second. But the output is actually a large JSON block containing
some statistics used for maintenance, and the problem here is not that
a large amount of data is dumped to JSON, but that tornado fails to
write it to the socket in an acceptable amount of time (without
splitting the output in my application).
Splitting the buffer in IOStream.write looks reasonable for now and
works well. Should I make a pull request, or will you commit it
yourself?
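
For reference, the splitting I have locally is essentially this (the
helper name and the 128 KiB chunk size are my own, not tuned):

    WRITE_CHUNK_SIZE = 128 * 1024  # guessed size, not measured

    def split_for_write_buffer(data, chunk_size=WRITE_CHUNK_SIZE):
        # Yield slices no larger than chunk_size so IOStream.write can
        # queue many small pieces instead of one multi-megabyte block.
        for i in range(0, len(data), chunk_size):
            yield data[i:i + chunk_size]
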
About the resizes/copies - we have them anyway when using
_merge_prefix. Also, I don't understand: is there any benefit in
joining a group of little chunks into one large chunk? I would guess
it is right to delegate such work to the kernel; am I wrong? Maybe we
could do better by passing the little chunks to socket.send unmodified
and using a memoryview on the large chunks, to avoid memcpy operations
in Python entirely? Would that be acceptable, or should I not even try
to implement it? Are there any benchmarks I should use to test it?
This might be an ideal solution.
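
To make it concrete, here is a rough sketch of what I mean (the helper
name and the threshold are made up, and it assumes a blocking socket,
ignoring the EWOULDBLOCK handling a real IOStream needs):

    def send_chunks(sock, chunks, big_threshold=64 * 1024):
        for data in chunks:
            if len(data) < big_threshold:
                # little chunks: hand them to the kernel unmodified
                sock.sendall(data)
            else:
                # big chunks: advance a zero-copy memoryview slice so
                # a short send never re-copies the tail in Python
                view = memoryview(data)
                while len(view):
                    view = view[sock.send(view):]

The point is that slicing a memoryview only moves an offset, so Python
never copies the remainder of a large buffer after a partial send.
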
--
Andrew
24.05.2012 09:06, Ben Darnell wrote: