What's the right way of using a blocking library in Tornado?


mrtn

Jan 8, 2013, 10:31:58 PM
to python-...@googlegroups.com

I am aware that an async library is ideal in most cases, but in many other cases we have no choice but to use blocking libraries (which may block for longer than just a few ms). Hence I wonder: what is the correct pattern/recipe for using such libraries in Tornado 2.4+? Start a separate thread?

A related question: if I am using a blocking redis library (e.g. redis-py) but would like to use it for both get/set-style commands and pub/sub functionality inside the same app, in different handlers, what would be the right way to do this? Note that the SUBSCRIBE command will block the redis client from executing other commands. Create two redis connections?


A. Jesse Jiryu Davis

Jan 9, 2013, 8:46:01 AM
to python-...@googlegroups.com
So why use Tornado instead of a multithreaded framework like Flask?

Ben Darnell

Jan 9, 2013, 9:23:57 AM
to Tornado Mailing List
On Tue, Jan 8, 2013 at 10:31 PM, mrtn <mrtn...@gmail.com> wrote:

I am aware that an async library is ideal in most cases, but in many other cases we have no choice but to use blocking libraries (which may block for longer than just a few ms). Hence I wonder: what is the correct pattern/recipe for using such libraries in Tornado 2.4+? Start a separate thread?

Yes, when you need to do something that will block for too long, a thread is generally the simplest way to do it.  I recommend creating a threadpool from concurrent.futures, submitting a function to it, and using future.add_done_callback and io_loop.add_callback to return control to tornado when it's done:

   thread_pool.submit(func).add_done_callback(lambda future: io_loop.add_callback(functools.partial(callback, future)))

There will be built-in support for futures in tornado 3.0.  Other strategies include running the blocking code in a separate service and talking to it over http.
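A self-contained sketch of that pattern on Tornado 2.4 might look roughly like this (the handler, pool size, and blocking function are only illustrative; on Python 2, concurrent.futures comes from the "futures" backport):

import functools
import time

from concurrent.futures import ThreadPoolExecutor

import tornado.ioloop
import tornado.web

thread_pool = ThreadPoolExecutor(max_workers=4)


def blocking_call():
    # Stand-in for a call into a blocking library (e.g. redis-py).
    time.sleep(1)
    return "done"


class MainHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    def get(self):
        io_loop = tornado.ioloop.IOLoop.instance()
        future = thread_pool.submit(blocking_call)
        # The done-callback fires on the worker thread, so hop back onto
        # the IOLoop thread before touching the handler.
        future.add_done_callback(
            lambda f: io_loop.add_callback(functools.partial(self._on_done, f)))

    def _on_done(self, future):
        self.write(future.result())
        self.finish()


application = tornado.web.Application([(r"/", MainHandler)])

if __name__ == "__main__":
    application.listen(8888)
    tornado.ioloop.IOLoop.instance().start()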

-Ben

mrtn

Jan 10, 2013, 3:55:05 AM
to python-...@googlegroups.com, b...@bendarnell.com

Thanks, Ben. Would the concurrent.futures approach also work for long-running stuff like redis pub/sub?

Serge S. Koval

Jan 10, 2013, 4:02:18 AM
to python-...@googlegroups.com
Is there a particular reason why you use redis-py instead of an
asynchronous driver for Tornado?

Serge.

Ben Darnell

Jan 10, 2013, 8:53:16 AM
to mrtn, Tornado Mailing List
On Thu, Jan 10, 2013 at 3:55 AM, mrtn <mrtn...@gmail.com> wrote:

Thanks, Ben. Would the concurrent.futures approach also work for long-running stuff like redis pub/sub?

It'll work, but long-running pub/sub stuff is exactly what the asynchronous event-driven approach is good for, so doing this in a thread seems like an odd choice.

-Ben

tiadobatima

Jan 10, 2013, 1:27:25 PM
to python-...@googlegroups.com, b...@bendarnell.com
Off topic, but since you mentioned it: is there an estimate for when 3.0 is going to be out? :)

Cheers,
g.

mrtn

Jan 11, 2013, 1:26:20 AM
to python-...@googlegroups.com

Which async redis driver for Tornado is reliable and suitable for production use?

Aleksey Silk

Jan 11, 2013, 1:36:33 AM
to python-...@googlegroups.com
Hello!

Is there some way to run a non-HTTP async function with Tornado?

Serge S. Koval

Jan 11, 2013, 3:31:01 AM
to python-...@googlegroups.com
We're using my own fork of toredis: https://github.com/mrjoes/toredis

Dead simple, tiny, supports pub/sub, supported.

Serge.

A. Jesse Jiryu Davis

Jan 11, 2013, 8:55:23 AM
to python-...@googlegroups.com
Aleksey - use IOStream to do any socket operation asynchronously: http://www.tornadoweb.org/documentation/iostream.html
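For example, a minimal sketch of a non-HTTP async operation with IOStream (Tornado 2.x API; the host/port and the PING protocol here just assume a local redis, purely for illustration):

import socket

import tornado.ioloop
from tornado.iostream import IOStream


def ping_redis():
    # Open a raw TCP connection and speak a line-based protocol on it.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, 0)
    stream = IOStream(s)

    def on_connect():
        stream.write(b"PING\r\n")
        stream.read_until(b"\r\n", on_response)

    def on_response(data):
        # data is whatever the server sent, up to and including "\r\n".
        stream.close()
        tornado.ioloop.IOLoop.instance().stop()

    stream.connect(("127.0.0.1", 6379), on_connect)


ping_redis()
tornado.ioloop.IOLoop.instance().start()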

Ben Darnell

Jan 12, 2013, 11:30:46 AM
to Tornado Mailing List
On Thu, Jan 10, 2013 at 1:27 PM, tiadobatima <gbar...@gmail.com> wrote:
Off topic, but since you mentioned it: is there an estimate for when 3.0 is going to be out? :)

No estimate right now; I haven't had a lot of time to spend on it lately.

-Ben

mrtn

Jan 17, 2013, 6:42:39 AM
to python-...@googlegroups.com

Just checked out the project; it looks very minimalistic indeed, but is there an example for pub/sub? And when you said 'supported', do you mean you're actively maintaining this lib? Thanks.

Serge S. Koval

Jan 17, 2013, 7:00:57 AM
to python-...@googlegroups.com
There's a small pub/sub example in the unit tests:
https://github.com/mrjoes/toredis/blob/master/tests/test_handler.py#L10

Yes, I'm using it in a production environment and maintaining it.

Serge.

mrtn

Jan 17, 2013, 2:43:21 PM
to python-...@googlegroups.com

Trying it out now. A quick question: how are connections managed in toredis? Given there is no connection pool yet, can I have two connection instances in the same Tornado process? For example, I intend to have one connection for all regular Tornado handlers, and another for all the SockJSConnection instances in the same Tornado app. Is this a sensible design?

Serge S. Koval

Jan 17, 2013, 4:36:12 PM
to python-...@googlegroups.com
Each instance of the Client class is a separate redis connection. You can
create as many connections as you want.

You can use a single redis connection for most tasks as long as you
don't hit redis limitations. For example, you can't issue "normal"
commands (GET, etc.) on a connection that is in SUB mode. And if you
start a MULTI, make sure you don't do any asynchronous operations in
the middle, as something else might issue more commands on that
connection before you run EXEC.
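A rough sketch of that two-connection split (assuming toredis's Client exposes a connect() method and the usual redis commands, subscribe() included, as methods taking optional callbacks; check the toredis source for the exact signatures):

from toredis import Client

import tornado.ioloop


def handle_pubsub(message):
    # Called for traffic pushed on the subscribed channel.
    # (The callback payload shape is an assumption; see the toredis tests.)
    pass


def handle_value(value):
    pass


# One connection dedicated to SUBSCRIBE: once it is in SUB mode it must
# not be used for GET/SET-style commands anymore.
sub_conn = Client()
sub_conn.connect()
sub_conn.subscribe('updates', callback=handle_pubsub)

# A second, independent connection for regular commands used by handlers.
cmd_conn = Client()
cmd_conn.connect()
cmd_conn.set('foo', 'bar')
cmd_conn.get('foo', callback=handle_value)

tornado.ioloop.IOLoop.instance().start()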

Serge.

mrtn

Jan 18, 2013, 7:38:32 AM
to python-...@googlegroups.com

Thanks for confirming. BTW, is pipelining supported by toredis? I cannot find any trace of it on GitHub.

Serge S. Koval

Jan 18, 2013, 8:08:16 AM
to python-...@googlegroups.com
By pipelining, do you mean sending a new command before receiving the
response to the previous one?

It is here: https://github.com/mrjoes/toredis/blob/master/toredis/client.py#L98
and here https://github.com/mrjoes/toredis/blob/master/toredis/client.py#L179

Serge.

mrtn

Jan 18, 2013, 9:49:40 AM
to python-...@googlegroups.com

I mean sending a series of commands (e.g. 100 of them) in a row (yes, without having to wait for individual responses), executing them together in a transaction, and getting the results back in the same order I sent them. Much like the behavior of this: https://github.com/andymccurdy/redis-py#pipelines

Is this supported? If so, could you provide an example of how to do this in toredis? Thanks.

Serge S. Koval

Jan 18, 2013, 9:59:46 AM
to python-...@googlegroups.com
Ah.

You don't need explicit pipelining in toredis, as toredis is
non-blocking by design.

For example, if you use redis-py and want to execute two commands
without waiting for intermediate response(s), you have to use their
pipelining mechanics.

So this redis-py code:

pipe = r.pipeline()
pipe.set('foo', 'bar')
pipe.get('bing')
results = pipe.execute()

in toredis would look like:

conn.set('foo', 'bar')
conn.get('bing', callback=handle)

If you want to make lots of GET requests and handle the results in
one place, use the MULTI command:
conn.multi()
conn.get('foo')
conn.get('bar')
conn.execute(callback=handle)

If you don't want to use MULTI and you're using tornado.gen, you can
wait for more than one response:

response1, response2 = yield [gen.Task(conn.get, 'foo'),
                              gen.Task(conn.get, 'bar')]
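For completeness, the gen.Task variant inside a Tornado 2.4 handler might look roughly like this (the handler and the way the connection is shared are illustrative, not part of toredis):

import tornado.gen
import tornado.web


class ValuesHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    @tornado.gen.engine
    def get(self):
        # conn is assumed to be a toredis Client connected at startup and
        # shared through the application settings.
        conn = self.application.settings['redis']
        foo, bar = yield [tornado.gen.Task(conn.get, 'foo'),
                          tornado.gen.Task(conn.get, 'bar')]
        self.write({'foo': foo, 'bar': bar})
        self.finish()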

Or you can come up with your own implementation :-)

Serge.

mrtn

Jan 18, 2013, 1:05:18 PM
to python-...@googlegroups.com

To confirm: using the gen.Task approach for a series of SETs, can I do:

pipe = []
for i in xrange(100):
    pipe.append(gen.Task(conn.set, 'foo' + str(i), 'bar' + str(i)))

responses = yield pipe
   
and expect the above SETs to be executed in a transaction, which is the behavior of a pipeline execution in redis-py? What happens when one of the SETs fails?

In addition, all those SETs above are sent to redis in one request, right? That is how pipelining is implemented in other libraries.

Serge S. Koval

Jan 18, 2013, 1:20:54 PM
to python-...@googlegroups.com
A pipeline in redis-py is translated into MULTI/EXEC redis statements.

Check redis docs: http://redis.io/topics/transactions

Serge.

mrtn

Jan 18, 2013, 2:13:14 PM
to python-...@googlegroups.com

So:

1. The gen.Task() approach is NOT equivalent to a MULTI/EXEC transaction, which provides the two guarantees described in the documentation.

2. With gen.Task(), commands are not buffered into a single request to redis, but sent as separate requests sequentially.

Am I right to conclude that there shouldn't be much difference in speed between this gen.Task() approach and redis-py pipelining, but that the gen.Task() approach lacks the transaction features that redis-py pipelining offers?

Serge S. Koval

Jan 18, 2013, 2:33:25 PM
to python-...@googlegroups.com
Yes, that's correct.

Serge.

Lorenzo Bolla

Jan 22, 2013, 9:36:57 AM
to python-...@googlegroups.com
As a demonstration of this technique, I've written a small script:
http://lbolla.info/blog/2013/01/22/blocking-tornado

Cheers,
L.


--
Lorenzo Bolla
http://lbolla.info