I've asked a question and didn't get an answer:
http://stackoverflow.com/questions/13219805/why-do-people-talk-about-threading-when-using-tornado
There is the GIL, so why does everyone talk about threads?
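As far as I understand it, the GIL only serializes the execution of Python bytecode; CPython releases it during blocking calls such as time.sleep() and socket I/O, so threads still overlap on I/O-bound work. A quick standalone sketch of that (the 0.2s sleep just stands in for any blocking call):

```python
import threading
import time

def blocking_call():
    # time.sleep releases the GIL while waiting, just like most
    # blocking socket/file I/O does in CPython.
    time.sleep(0.2)

start = time.time()
threads = [threading.Thread(target=blocking_call) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Four 0.2s sleeps overlap instead of running back-to-back (~0.8s).
elapsed = time.time() - start
print(elapsed < 0.6)
```

This is why threading helps Tornado handlers that must call blocking code, even though only one thread runs Python bytecode at a time.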
On 10 nov, 02:04, Russ Weeks <rwe...@newbrightidea.com> wrote:
> The way you have it set up right now, you'll create a new thread to service
> each request. Probably better to use a thread pool to cap the number of
> active threads.
>
> I've found the stdlib's multiprocessing.pool.ThreadPool to be very easy to
> use. Set it up like this:
>
> tp = ThreadPool(num_threads)
>
> Then instead of creating and starting a new thread, do:
>
> tp.apply_async(lambda: self.perform(self.on_callback))
>
> The thread pool uses a task queue, so even if all threads are busy your
> request handler won't block waiting for a thread to become available.
>
> -Russ
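A minimal standalone sketch of the pool pattern described above, without Tornado (the names perform/on_callback mirror the handler code further down; the task count and sleep are illustrative):

```python
from multiprocessing.pool import ThreadPool
import threading
import time

# A fixed-size pool with a built-in task queue, as suggested above.
num_threads = 4
tp = ThreadPool(num_threads)

results = []
lock = threading.Lock()

def perform(callback):
    time.sleep(0.05)          # stand-in for blocking work
    callback('foo')

def on_callback(output):
    with lock:                # callbacks run on worker threads
        results.append(output)

# apply_async queues each task and returns immediately, so the caller
# never waits for a free worker; excess tasks sit in the pool's queue.
pending = [tp.apply_async(perform, (on_callback,)) for _ in range(8)]
for p in pending:
    p.wait()

print(len(results))  # 8
```

Note that apply_async is the non-blocking variant; plain apply waits for the result before returning.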
>
> On Fri, Nov 9, 2012 at 4:44 PM, CODEY <ed.pat...@gmail.com> wrote:
> > Thank you. I found this online; I think it is fixed now:
>
> > import functools
> > import time
> > import threading
> > import logging
> > import tornado.web
> > import tornado.websocket
> > import tornado.locale
> > import tornado.ioloop
>
> > class Handler(tornado.web.RequestHandler):
> >     def perform(self, callback):
> >         # do something cuz hey, we're in a thread!
> >         time.sleep(5)
> >         output = 'foo'
> >         tornado.ioloop.IOLoop.instance().add_callback(functools.partial(callback, output))
>
> >     def initialize(self):
> >         self.thread = None
>
> >     @tornado.web.asynchronous
> >     def get(self):
> >         self.thread = threading.Thread(target=self.perform, args=(self.on_callback,))
> >         self.thread.start()
> >         self.write('In the request')
> >         self.flush()
>
> >     def on_callback(self, output):
> >         logging.info('In on_callback()')
> >         self.write("Thread output: %s" % output)
> >         self.finish()
>
> > application = tornado.web.Application([
> >     (r"/", Handler),
> > ])
>
> > if __name__ == "__main__":
> >     application.listen(8888)
> >     tornado.ioloop.IOLoop.instance().start()