tornado streaming big files


fvisconte

Dec 23, 2009, 9:01:59 AM
to Tornado Web Server
Hi,

I'm trying to use tornado to stream an on-the-fly converted video
using chunked HTTP transfer, with the following
method (the code sample just illustrates the situation):

class MyHandler(tornado.web.RequestHandler):
    def get(self):
        fd = open("video.flv", "rb")
        while True:
            buffer = fd.read(1024)
            if not buffer:
                break
            self.write(buffer)
            self.flush()
        self.finish()


The problem I have is that no bytes are transferred until the finish()
call. That's a problem for big files, as Tornado will use a lot
of memory and the HTTP client has to wait until all the data is
buffered on the server. I think the problem is much the same for
uploads.

Is there any way to send one chunk per iteration of the while loop?
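For illustration, the chunk-per-iteration read can be factored into a plain generator (a sketch independent of Tornado; the handler would write and flush one yielded chunk at a time):

```python
import io

def iter_chunks(fileobj, chunk_size=1024):
    """Yield successive chunks from a file object until EOF,
    so the whole file is never held in memory at once."""
    while True:
        chunk = fileobj.read(chunk_size)
        if not chunk:
            return
        yield chunk

# Demo on an in-memory "file" of 2500 bytes.
sizes = [len(c) for c in iter_chunks(io.BytesIO(b"x" * 2500))]
print(sizes)  # [1024, 1024, 452]
```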

Regards,
F.

Creotiv

Dec 23, 2009, 9:14:45 AM
to Tornado Web Server
There is a ChunkedTransferEncoding transform class for this. It works
together with the flush function.
This transform class is added to your application automatically if you
don't set the transforms parameter manually. If you do set it manually,
just add ChunkedTransferEncoding to the transforms.
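As a side note, what the chunked transform emits on each flush is standard HTTP/1.1 chunk framing. A minimal sketch of that framing (an illustration of the wire format, not Tornado's actual code):

```python
def frame_chunk(data):
    """Wrap a payload in HTTP/1.1 chunked framing:
    hex length, CRLF, payload, CRLF. A zero-length
    chunk (b'0\r\n\r\n') terminates the stream."""
    return b"%x\r\n" % len(data) + data + b"\r\n"

print(frame_chunk(b"hello"))  # b'5\r\nhello\r\n'
```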

Francois Visconte

Dec 23, 2009, 10:02:54 AM
to python-...@googlegroups.com
Hi,

ChunkedTransferEncoding is already in Application.transforms. However,
it appears that the data (in IOStream._write_buffer) is not sent to the
socket until the loop is finished.
Is there a way to release a chunk on the socket each time flush() is called?

F.

On 23/12/09 15:14, Creotiv wrote:

Creotiv

Dec 23, 2009, 3:16:05 PM
to Tornado Web Server
I've tested this now, and you are right, but at the same time you are
wrong)

Check it with this:

    for i in xrange(10):
        self.write("TEsting" + str(i))
        self.flush()

So a chunk is finished if, after flush(), you write something else into
the buffer, or you use the finish() method. So your code should work. But I
don't recommend working with the file the way you do, because it will block
the Tornado process; better to create a subprocess with a callback.

Francois Visconte

Dec 24, 2009, 4:51:34 AM
to python-...@googlegroups.com
Hi,

I have the same problem in the following scenario:

[amazon S3] <---> [tornado storage WS] <---> [tornado indexing WS] <--- client

When I upload a big file, the data is buffered on the
indexing server before being sent to the storage server, and so on.
This makes the data transfer take twice as long.

It's in fact the same problem I experienced with the streaming server,
but I don't know
exactly how to make the _write_buffer be consumed regularly.
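The general idea for keeping _write_buffer small is to relay data in bounded chunks rather than accumulating the whole body. A Tornado-independent sketch of that relay (essentially what shutil.copyfileobj does):

```python
import io

def relay(src, dst, chunk_size=4096):
    """Copy src to dst one chunk at a time, so at most
    chunk_size bytes sit in our buffer between reads."""
    total = 0
    while True:
        chunk = src.read(chunk_size)
        if not chunk:
            return total
        dst.write(chunk)
        total += len(chunk)

src = io.BytesIO(b"a" * 10000)
dst = io.BytesIO()
print(relay(src, dst))  # 10000
```

In the proxy scenario, the "write" side would be the downstream connection, with the next read scheduled only after the previous chunk has been flushed.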


Regards,
F.

On 23/12/09 21:16, Creotiv wrote:

fvisconte

Dec 24, 2009, 2:09:32 PM
to Tornado Web Server
Hi,

I finally managed to run my handler quite smoothly using the code
snippet provided above. The only problem I have now is that
when I launch the same handler twice, the second one is blocked until the
first ffmpeg command has terminated. Do you know a good way to handle that?

    class FlvFormater(tornado.web.RequestHandler):
        @tornado.web.asynchronous
        def get(self, video_name):
            self.set_header("Content-Type", "video/x-flv")
            cmd = "/usr/local/bin/ffmpeg -i %s -f flv -acodec aac -s vga -" % video_name
            self.ioloop = tornado.ioloop.IOLoop.instance()
            self.pipe = cmdfd = os.popen(cmd)
            #self.pipe = cmdfd = open("a.avi", "r")
            self.ioloop.add_handler(cmdfd.fileno(),
                                    self.async_callback(self.on_read),
                                    self.ioloop.READ)

        def on_read(self, fd, events):
            buffer = self.pipe.read(1024)
            try:
                assert buffer
                self.write(buffer)
                self.flush()
            except:
                self.pipe.close()
                self.ioloop.remove_handler(fd)
                self.finish()
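One likely cause of the blocking is that os.popen hands back a blocking pipe, so pipe.read() can stall the whole process. A sketch of starting the command with subprocess and putting its stdout into non-blocking mode instead (POSIX only; the echo command here is just a stand-in for the ffmpeg invocation):

```python
import fcntl
import os
import subprocess

def spawn_nonblocking(argv):
    """Start argv with stdout piped, and mark the pipe
    non-blocking so a read never stalls the IOLoop."""
    proc = subprocess.Popen(argv, stdout=subprocess.PIPE)
    fd = proc.stdout.fileno()
    flags = fcntl.fcntl(fd, fcntl.F_GETFL)
    fcntl.fcntl(fd, fcntl.F_SETFL, flags | os.O_NONBLOCK)
    return proc

proc = spawn_nonblocking(["echo", "chunk"])
proc.wait()  # in a handler you would register fd with add_handler instead
print(os.read(proc.stdout.fileno(), 1024))  # b'chunk\n'
```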

mattd

Dec 24, 2009, 9:30:39 PM
to Tornado Web Server
I would suggest processing with ffmpeg outside of the request. Something
like celeryd would be perfect.

Francois Visconte

Dec 25, 2009, 7:11:40 AM
to python-...@googlegroups.com
Hi,

Thanks for the tip. I will try this solution. My primary goal was to
build a tiny web
server to watch my videos from anywhere, and celery seems a bit
complex.

Regards,
F.


On 25/12/09 03:30, mattd wrote:
