I have a problem that I encountered two weeks ago - and still no
success :(
I've read all the messages in the Google group and found several
solutions, but they are not applicable to me.
So, the problem: say I would like to create a host-monitoring
application, using Python and Tornado, certainly.
Let it be just a simple ping function:
def ping(host):
    ...
    # some long-executing/scanning mechanism
    ...
So, how could I run it asynchronously via a "get" request? I've read
that Tornado is a single-threaded application and I shouldn't use
threads (maybe I'm wrong and misunderstood something). I saw the chat
example, but I don't understand it clearly :(
I found a solution for executing a shell command asynchronously
(http://brianglass.wordpress.com/2009/11/29/asynchronous-shell-commands-with-tornado/).
So I can put my function in a file ping.py and make the call:
....
self.pipe = p = os.popen('python ping.py')
self.ioloop.add_handler(p.fileno(),
                        self.async_callback(self.on_response),
                        self.ioloop.READ)
# save results to a file and read them back
....
But it looks like a hack :(
I looked at the PeriodicCallback example, but it also doesn't work
for me:
class RandHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    def get(self):
        self.scheduler = tornado.ioloop.PeriodicCallback(self._check, 1000)
        self.scheduler.start()

    def _check(self):
        # here bad things happen - sleep blocks the whole application;
        # the same goes for any long-running function :(
        sleep(10)
        self.scheduler.stop()
        self.write("Hello from sleep request!")
        self.finish()
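One way to keep a handler like this from stalling the loop is to push the blocking call onto a worker thread and hand the result back through a callback. Here is a minimal plain-threading sketch (the function names are invented for illustration; in real Tornado code the callback would be re-scheduled onto the IOLoop via add_callback and then call self.finish()):

```python
import threading
import time

def run_async(blocking_func, callback, *args):
    """Run blocking_func(*args) on a worker thread and pass the
    result to callback, so the calling thread is never blocked."""
    def worker():
        callback(blocking_func(*args))
    thread = threading.Thread(target=worker)
    thread.daemon = True
    thread.start()
    return thread

def slow_ping(host):
    time.sleep(0.1)  # stands in for the long-running scan
    return host + " is alive"

results = []
worker = run_async(slow_ping, results.append, "localhost")
# the main thread stays free here while the worker sleeps
worker.join()
```

Note that a callback fired from a worker thread must not touch the request directly; it should hop back to the IOLoop thread first.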
So, all I want to do is call some long-running function from a "get"
RequestHandler using the @tornado.web.asynchronous decorator, and then
be notified when it's finished - to return the data back.
For example, when I navigate my browser to http://localhost:8888/ it
says "Hello!" - it's simple.
And when I go to http://localhost:8888/ping it executes my function,
waits, and after some loading time gives me the result. While it's
loading, the root page should not be blocked, of course.
Thanks in advance, and sorry if there are some mistakes...
Feel free to ask any questions if my explanation is not very clear.
I posted something to the list a few weeks ago.
Basically, you can use the process pool from the multiprocessing module to perform asynchronous actions in a separate process.
I made a modification to the hello world example and posted it as a gist on github. I'm not at a computer now, or I'd post the link again. If you search the archive for multiprocessing, you'll probably find it. Or if you're interested, I can repost the link when I get back.
Doug
It's a little confusing what exactly you are trying to accomplish. If you could make a list of the things you'd like to accomplish using Tornado, we'll have an easier time boiling down the issue and steering you in the proper direction.
Working with what you've given us so far:
When you add the @tornado.web.asynchronous decorator to a method, it doesn't block while your socket is idle; it simply goes into the IOLoop and is handled when ready. However, when you explicitly call time.sleep(), or any other "blocking" function, it blocks your application. Tornado applications run in a single thread, and because of the GIL (Global Interpreter Lock - read up on it if you're not familiar with it), "true" concurrency within a Python application is a bit harder. Kqueue, epoll, poll, and select enable Tornado's magic.
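The readiness-based loop described above can be sketched with the standard library's selectors module: the loop sleeps until a registered descriptor is ready, which is the same epoll/kqueue/select mechanism Tornado's IOLoop wraps (a minimal illustration, not Tornado's actual API):

```python
import selectors
import socket

# A tiny readiness loop in the spirit of Tornado's IOLoop: it only
# wakes when a registered fd is ready, so idle connections cost
# nothing - but any blocking call inside a handler stalls every
# connection at once.
sel = selectors.DefaultSelector()
r, w = socket.socketpair()
received = []

def on_readable(conn):
    # the "handler": called only when data is actually waiting
    received.append(conn.recv(1024))
    sel.unregister(conn)

sel.register(r, selectors.EVENT_READ, on_readable)
w.send(b"ping")
for key, _events in sel.select(timeout=1):
    key.data(key.fileobj)

r.close()
w.close()
sel.close()
```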
How exactly is your ping function going to work? If all you're trying to do is resolve a URL, Tornado includes an asynchronous HTTPClient which takes care of the dirty work for you. See: http://github.com/facebook/tornado/blob/master/tornado/httpclient.py#L78.
Matthew, after your answer I began to think: what do I actually
want? :)
So, the problem: I have several servers which I would like to
monitor, and some processes running on them.
For now, I've got a multi-threaded application which checks that a
host is alive (got a response from the ping command), connects to it
via SSH, checks that a process is running, and does some other stuff.
Then I generate a plain HTML page using the results from my scanning.
The page is saved and I can view it. My script runs via cron.
Now I would like to upgrade my monitoring system - to start using
Tornado and AJAX with long-polling.
So, the client navigates to my URL, and JavaScript (jQuery or
Prototype lib) connects to my server and waits for results. Meanwhile
my program runs a long scan of all my servers, so two or more clients
can wait until the scan is complete. After that, all clients get a
response from my scanner that the job is complete - with the results
of that job (via XML or JSON).
So, to start, I would like just to implement a simple host pinger.
Can you give me advice on what the better solution is?
I suppose that there is some repeatedly executed method that pings
hosts from a list. But how do I connect this method with Tornado's
"get", and how do I push the result back to the client?
Sorry, my explanation is very confusing, but I hope I've let you
know what I want :)
Any help is appreciated!
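The "several clients wait for one scan" part can be modelled with a condition variable: every waiter blocks until the scanner publishes a result, then all of them wake with the same data. A plain-threading sketch (class and method names are made up; in Tornado each waiter would instead be an asynchronous handler finished from the publish step):

```python
import threading

class ScanBroadcaster:
    """One scan runs; any number of clients wait for its result."""
    def __init__(self):
        self._cond = threading.Condition()
        self._result = None

    def wait_for_result(self):
        with self._cond:
            while self._result is None:
                self._cond.wait()
            return self._result

    def publish(self, result):
        with self._cond:
            self._result = result
            self._cond.notify_all()  # wake every waiting client

broadcaster = ScanBroadcaster()
got = []

def client():
    got.append(broadcaster.wait_for_result())

threads = [threading.Thread(target=client) for _ in range(3)]
for t in threads:
    t.start()
broadcaster.publish({"host1": "up", "host2": "down"})
for t in threads:
    t.join()
```

In the long-polling version, publish() would walk a list of pending request handlers and finish() each one (via IOLoop.add_callback) instead of notifying threads.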
> > For example, when i navigate my browser to http://localhost:8888/ - it
> > says "Hello!" - it's simple.
> > And when I go to http://localhost:8888/ping - it executes my function,
So you create all of your scanning logic in a separate module and
expose it via a function, or encapsulate it in a class. Then in your
Tornado code, you simply create an async handler (like the one in my
example) and have that async request handler call the apply_async
method on your process pool (also like I did in my example), passing
your scanning function/method as the first argument to apply_async.
Then have apply_async run whatever callback finishes your Tornado
request and sends the JSON result to the client.
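Outside of Tornado, the apply_async-with-callback flow described above looks roughly like this (function names are invented; in a real handler the callback would JSON-encode the result and finish() the request, hopping back onto the IOLoop first):

```python
import multiprocessing

def scan_host(host):
    # placeholder for the real scanning logic; it runs in a pool
    # worker process, so it may block without stalling the parent
    return {"host": host, "alive": True}

def on_scan_done(result):
    # in Tornado this is where you'd write the JSON response and
    # finish() the request (scheduled via IOLoop.add_callback)
    finished.append(result)

finished = []
pool = multiprocessing.Pool(processes=2)
async_result = pool.apply_async(scan_host, ("example-host",),
                                callback=on_scan_done)
async_result.wait()  # a web server would not wait; shown for the demo
pool.close()
pool.join()
```

Note that apply_async runs the callback in a helper thread of the parent process, which is exactly why the real Tornado callback should not call finish() directly.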
I honestly think that's the cleanest method. I still haven't seen
anyone on the list talk about the multiprocessing module and whether
or not it's a good fit for Tornado. I'd really like to hear some of
the developers give their 2 cents.
So maybe there's some nasty thing where the multiprocessing module is
a bad idea, but as far as I can tell, it's great.
Can anyone else comment on it? I'm also thinking it'd be a decent way
to call blocking DB servers... but again, I could be wrong, and
perhaps it's horribly inefficient, or maybe the universe will implode
if you do it...
Doug
--
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html
I've completed my task - it works now as I wanted.
Everyone can have a look at the result here: http://github.com/grundic/thePinger
Thanks a lot everybody for help =)
import sys
from threading import Thread

class AsyncRequestHelperThread(Thread):
    def __init__(self, callback, action, *args):
        self.callback = callback
        self.action = action
        self.args = args
        super(AsyncRequestHelperThread, self).__init__()

    def run(self):
        try:
            result = self.action(*self.args)
        except Exception:
            result = sys.exc_info()
        self.callback(result)
Just spawn a new thread and finish() the result in the callback method
(or handle the exception). The callback needs to check the type of the
result to handle it differently in case of an exception (using some
expression such as: "result and isinstance(result, tuple) and
len(result) == 3 and isinstance(result[0], type)").
-- Hari
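The exc_info check described above can be wrapped in a small helper. A self-contained sketch (all names invented) of a callback telling ordinary results and exception triples apart:

```python
import sys

def run_with_callback(action, callback):
    # mirrors the helper-thread pattern: catch everything and pass
    # either the result or the sys.exc_info() triple to the callback
    try:
        result = action()
    except Exception:
        result = sys.exc_info()
    callback(result)

def is_exc_info(result):
    # an exc_info triple is (exception class, instance, traceback)
    return (isinstance(result, tuple) and len(result) == 3
            and isinstance(result[0], type)
            and issubclass(result[0], BaseException))

def risky_action():
    raise ValueError("host unreachable")

seen = []
run_with_callback(risky_action, seen.append)   # delivers exc_info
run_with_callback(lambda: "pong", seen.append)  # delivers the result
```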
On Mar 1, 10:36 am, grundic <grun...@gmail.com> wrote:
> Thanks a lot, Douglas!
> Yep, I've found your example code - here it is, if someone is
> interested - http://gist.github.com/312676.
>
[snip]
At least that's what I've read...
Doug
--
No, you're correct. Normally threading is the better option. However,
due to how the GIL works, if one of your threads blocks, it could
block incoming connections, sort of defeating the purpose of making
your project async.
At least that's what I've read...
On Apr 14, 1:32 pm, Claudio Freire <klaussfre...@gmail.com> wrote:
> On Wed, Apr 14, 2010 at 5:29 PM, Claudio Freire <klaussfre...@gmail.com> wrote:
[snip]