Starting multiple instances from one server.py

Stuart King

May 5, 2010, 9:34:16 PM
to Tornado Web Server
Does anyone see any issues with this? My question is mainly about
starting the IOLoop instance once while running three Tornado HTTP
servers on different ports.


from logger import info
from settings import URL_MAPPINGS
import threading
import tornado.httpserver
import tornado.ioloop
import tornado.web

application = tornado.web.Application(URL_MAPPINGS)

class ServerThread(threading.Thread):
    def __init__(self, port):
        threading.Thread.__init__(self)
        self.port = port

    def run(self):
        info('Tornado started on port %s...' % str(self.port))
        http_server = tornado.httpserver.HTTPServer(application)
        http_server.listen(self.port)

def start_server(port):
    ServerThread(port).start()

if __name__ == "__main__":
    start_server(8886)
    start_server(8887)
    start_server(8888)
    tornado.ioloop.IOLoop.instance().start()

Cheers

Stu

David P. Novakovic

May 5, 2010, 9:36:51 PM
to python-tornado
As far as I know the IOLoop isn't thread safe. This looks... not good.

Why not start three tornado processes?

Thomas Rampelberg

May 6, 2010, 12:33:20 AM
to python-...@googlegroups.com
Why not just use the built-in pre-forking?

Ben Darnell

May 6, 2010, 1:07:53 AM
to python-...@googlegroups.com
David's right that the IOLoop is generally not thread-safe[1], so it
wouldn't work to start up your HTTP servers in multiple threads.
Fortunately, you don't need to - HTTPServer.listen() returns
immediately, so you can just do
HTTPServer(app1).listen(port1)
HTTPServer(app2).listen(port2)
IOLoop.instance().start()

from your main function. Of course, this really only makes sense when
you're running multiple apps - I can't think of a reason to start
multiple copies of the same app on different ports in the same
process. If you're wanting to run multiple instances of the same app
for performance reasons (i.e. to take advantage of multiple CPUs),
you need to have multiple processes because of the python GIL. You
can either run the processes separately or use tornado's preforking
support (note that preforking will only work with a single HTTPServer
per master process).
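
For reference, a runnable version of the two-apps-on-one-IOLoop pattern sketched above (the handler classes and ports are illustrative, not from this thread):

import tornado.httpserver
import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("main app")

class AdminHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("admin app")

main_app = tornado.web.Application([(r"/", MainHandler)])
admin_app = tornado.web.Application([(r"/", AdminHandler)])

if __name__ == "__main__":
    # listen() only registers the sockets and returns immediately;
    # nothing is served until the single IOLoop is started.
    tornado.httpserver.HTTPServer(main_app).listen(8888)
    tornado.httpserver.HTTPServer(admin_app).listen(8889)
    tornado.ioloop.IOLoop.instance().start()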

[1] I've been meaning to write up some docs for thread safety since
we're using tornado with multiple threads in Brizzly, but haven't
gotten around to it. The short version is that IOLoop.add_callback()
is safe to call from any thread at any time, but all other methods
must only be called from the thread that calls IOLoop.start() (so if
another thread wants to do something to the IOLoop, it can use
add_callback to schedule that operation in the IOLoop's thread).
Objects that have an IOLoop member variable (e.g. HTTPServers and
AsyncHTTPClients) should also only be used from their IOLoop's thread.
It is safe to have multiple threads each with its own IOLoop - we
have two IOLoop threads so that we can use AsyncHTTPClient from legacy
Django code (running under tornado.wsgi) without deadlocks.
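
A minimal sketch of the add_callback rule described above - a worker thread hands its result back to the IOLoop thread (the worker and handler functions are made up for illustration):

import threading
import time
import tornado.ioloop

io_loop = tornado.ioloop.IOLoop.instance()

def handle_result(result):
    # Runs on the IOLoop's own thread, where it's safe to touch
    # IOLoop-bound objects (HTTPServer, AsyncHTTPClient, ...).
    print("got result on the IOLoop thread: %r" % result)

def worker():
    # Runs in a separate thread; it must not touch the IOLoop directly.
    time.sleep(2)  # stand-in for some blocking work
    io_loop.add_callback(lambda: handle_result(42))

if __name__ == "__main__":
    threading.Thread(target=worker).start()
    io_loop.start()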

-Ben

Stuart King

May 6, 2010, 9:52:37 PM
to Tornado Web Server
Thanks guys, that clears it up for me.

Stu

Алексей Силк

May 25, 2013, 2:34:43 AM
to python-...@googlegroups.com
In case anyone needs it, here's how I start my Tornado:


#!/usr/bin/env python
# -*- coding: utf-8 -*-
__author__ = 'rootiks - ale...@silk.bz (Python 2.7.6)'

import os
import logging
import torndb.torndb
import tornado.httpserver
import tornado.ioloop
import tornado.web
from tornado.options import define, options
from tornado import locale

# REST support
from classes import support as ppSupportHandler
from classes import support_page as ppSupportPageHandler
from classes import support_ticket as ppSupportUserHandler
from classes import support_ticket_action as ppSupportTicketActionHandler
# End REST support
# (ppRegisterGoHandler and ppIndexAllHandler are further project handlers;
#  their imports were not included in the original post.)


define("port", default=8111, help="run on the given port", type=int)

define("mysql_host", default="", help="database host")
define("mysql_database", default="", help="database name")
define("mysql_user", default="", help="database user")
define("mysql_password", default="", help="database password")

define("memcache_hosts", default="127.0.0.1:11011", multiple=True)


ports_run_on = ["8111", "8112", "8113", "8114", "8115", "8116", "8117", "8118", "8119", "8120"]


class Application(tornado.web.Application):
    def __init__(self):
        logging.getLogger().setLevel(logging.DEBUG)

        static_dir = os.path.join(os.path.dirname(__file__), "web")
        static_dir_dict = dict(path=static_dir)

        tornado.locale.load_translations(os.path.join(os.path.dirname(__file__), "web/lang"))

        settings = dict(
            debug=True,  # TODO Change this!!!
            autoescape=None,
            gzip=True,
            template_path=os.path.join(os.path.dirname(__file__), "web/templates"),
            static_path=os.path.join(os.path.dirname(__file__), "web"),
            xsrf_cookies=True,
            secret_key="",
            cookie_secret="",
            login_url="/",
            quotagb=None,  # value was stripped from the original post
            encookie_secret='',
        )

        ppMyHandlers = [
            (r"/register/go", ppRegisterGoHandler.RegisterGoHandler),

            # REST support user
            (r"/support/ticket/([0-9]+)", ppSupportUserHandler.SupportUserHandler),
            (r"/support/ticket/(?P<ticket>[0-9]+)/?(?P<action>[^\/]+)?", ppSupportTicketActionHandler.SupportTicketActionHandler),
            (r"/support/page/([0-9]+)", ppSupportPageHandler.SupportPageHandler),

            (r"/support", ppSupportHandler.SupportHandler),
            (r"/support/(.*)", ppSupportHandler.SupportHandler),
            # End REST support user

            (r"/(robots\.txt)", tornado.web.StaticFileHandler, static_dir_dict),
            (r"/(.*)", ppIndexAllHandler.IndexAllHandler),
        ]

        # Create the Tornado application
        tornado.web.Application.__init__(self, ppMyHandlers, **settings)

        # Have one global connection to the DB across all handlers
        self.db = torndb.torndb.Connection(
            host=options.mysql_host, database=options.mysql_database,
            user=options.mysql_user, password=options.mysql_password)


def startServerProcess():
    tornado.options.parse_command_line()

    # one HTTPServer per port in ports_run_on, all driven by a single IOLoop
    server = {}

    try:
        for port in ports_run_on:
            server[port] = tornado.httpserver.HTTPServer(Application(), xheaders=True)
            server[port].listen(int(port))
            logging.info("HTTP server listening on port %s", port)

        # if all ports bound successfully, start the (single) IOLoop
        tornado.ioloop.IOLoop.instance().start()
    except Exception as inst:
        logging.error("Failed to start server (a port may be busy): %s", inst)
        tornado.ioloop.IOLoop.instance().stop()
        return


if __name__ == '__main__':
    startServerProcess()

And the init.d script:


#!/bin/bash

DAEMON_DIR=/var/www/support
DAEMON=$DAEMON_DIR/support.py
NAME="support"
DESC="support daemon"

test -f $DAEMON || exit 0

set -e

case "$1" in
  start)
        echo -n "Starting $DESC: "
        start-stop-daemon --start --pidfile /var/run/$NAME.pid \
            --chdir $DAEMON_DIR \
            --make-pidfile --background -c nobody --startas $DAEMON
        echo "$NAME."
        ;;
  stop)
        echo -n "Stopping $DESC: "
        start-stop-daemon --stop --oknodo \
            --pidfile /var/run/$NAME.pid
        rm -f /var/run/$NAME.pid
        echo "$NAME."
        ;;
  restart)
        echo -n "Restarting $DESC: "
        start-stop-daemon --stop --oknodo \
            --pidfile /var/run/$NAME.pid
        rm -f /var/run/$NAME.pid
        start-stop-daemon --start --pidfile /var/run/$NAME.pid \
            --chdir $DAEMON_DIR \
            --make-pidfile --background -c nobody --startas $DAEMON
        echo "$NAME."
esac

exit 0

Ben Darnell

May 25, 2013, 11:47:52 AM
to Tornado Mailing List
It doesn't really do any good to run multiple copies of the same Application on one IOLoop.  You need to use multiple processes (or at least multiple threads), each with its own IOLoop, to get any benefit from multiple copies of the same Application.  Tornado can start multiple processes for you if you pass a number to HTTPServer.start().

-Ben




Laurier Rochon

May 25, 2013, 12:00:58 PM
to python-...@googlegroups.com
I was wondering about this - are there any examples of this with HTTPServer.start()? Also, what's the advantage of running multiple processes this way vs. having them run by supervisor or another monitoring program (which often provides auto-restart on crash, process grouping, etc.)?

Ben Darnell

May 25, 2013, 12:07:28 PM
to Tornado Mailing List
On Sat, May 25, 2013 at 12:00 PM, Laurier Rochon <lau...@human.co> wrote:
I was wondering about this - are there any examples of this with HTTPServer.start()? Also, what's the advantage of running multiple processes this way VS having them run by supervisor or another monitoring program (which often have auto-reload on crash, process grouping, etc.)?

The HTTPServer docs show how to use it; I don't know of any complete end-to-end examples.

Tornado's multi-process mode has auto-restart on crash too, but I would still say that supervisord (or another similar process manager) is overall the best option.  A good process manager will let you address individual processes and do zero-downtime rolling restarts, while tornado's multi-process mode is easier to get started with and lets all the processes share the same listening socket without proxies or other operational complexity.
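
For reference, a minimal end-to-end sketch of the multi-process mode being discussed, using HTTPServer.bind()/start() (the handler and port are illustrative, not from this thread):

import os
import tornado.httpserver
import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("served by pid %d" % os.getpid())

if __name__ == "__main__":
    app = tornado.web.Application([(r"/", MainHandler)])
    server = tornado.httpserver.HTTPServer(app)
    server.bind(8888)   # bind in the parent, before forking
    server.start(0)     # 0 means fork one process per CPU core
    tornado.ioloop.IOLoop.instance().start()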

Laurier Rochon

May 25, 2013, 12:30:36 PM
to python-...@googlegroups.com
Great, thanks. This is interesting:

| lets all the processes share the same listening socket without proxies or other operational complexity.

I'm guessing this would be the equivalent of having multiple tornado servers listening on a process manager's socket (e.g. unix:/tmp/supervisor.sock) and having requests proxied to them?

Ben Darnell

May 25, 2013, 12:53:05 PM
to Tornado Mailing List
On Sat, May 25, 2013 at 12:30 PM, Laurier Rochon <lau...@human.co> wrote:
Great thanks. This is interesting :

| lets all the processes share the same listening socket without proxies or other operational complexity.

I'm guessing this would be the equivalent to having multiple tornado servers listening to a process manager's socket (e.g. unix:/tmp/supervisor.sock) and having requests proxied to it?

There is no proxying in this scenario; each process has a copy of the same listening socket. When a connection comes in, all idle processes wake up and call accept(); the first one gets the connection and processes it, and the others go back to their IOLoop. There is no master process relaying requests that could add overhead and become a bottleneck.

To get the same effect with separately-managed processes, you can use a technique like https://gist.github.com/bdarnell/1073945 . The combination of rolling restarts and shared listening sockets is helpful for reaching the highest levels of scalability (1M+ connections per machine), since it spreads the load of clients resuming their connections when the server they were talking to goes away.
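
Written out with the lower-level helpers, the shared-listening-socket setup described above looks roughly like this (a sketch assuming the Tornado 3.x tornado.netutil/tornado.process API; the application itself is illustrative):

import tornado.httpserver
import tornado.ioloop
import tornado.netutil
import tornado.process
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("hello")

if __name__ == "__main__":
    # Bind once in the parent, then fork; every child inherits the same
    # listening sockets and calls accept() on them directly.
    sockets = tornado.netutil.bind_sockets(8888)
    tornado.process.fork_processes(4)  # or 0 for one process per CPU
    app = tornado.web.Application([(r"/", MainHandler)])
    server = tornado.httpserver.HTTPServer(app)
    server.add_sockets(sockets)
    tornado.ioloop.IOLoop.instance().start()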

Aleksey Silk

May 26, 2013, 11:45:50 PM
to python-...@googlegroups.com
OK, but how do I manage all the processes started by Tornado? An init.d script can only handle the first one if, for example, I start 2 or more.
Any examples?

With best regards, Aleksey Silk
skype - rootiks



Ben Darnell

May 27, 2013, 12:12:48 PM
to Tornado Mailing List
On Sun, May 26, 2013 at 11:45 PM, Aleksey Silk <ale...@silk.bz> wrote:
Ok. but how to manage all proceses started by tornado. init.d script can handle only first one, if for example I start 2 or more.
Any examples?

init.d scripts can do whatever you want; they're just shell scripts.  The simplest way to manage multiple processes in the init.d style is to use setproctitle (https://pypi.python.org/pypi/setproctitle) in the python process so you can use killall.  But I would really recommend using supervisord instead; it's not that complicated (https://github.com/bdarnell/tornado-production-skeleton/tree/master/production)
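
A minimal sketch of the setproctitle approach (the process name here is made up for illustration):

import setproctitle

# Give this Tornado process a distinctive name so it can be targeted
# with "killall support-tornado" instead of matching every "python".
setproctitle.setproctitle("support-tornado")
# ... then build the Application and start HTTPServer/IOLoop as usual.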

-Ben

Aleksey Silk

May 27, 2013, 1:36:21 PM
to python-...@googlegroups.com
Hm... the problem is that I have other Python processes, so I can't just killall python...

With best regards, Aleksey Silk
skype - rootiks



Laurier Rochon

May 27, 2013, 1:48:34 PM
to python-...@googlegroups.com
You can add a [program:program_name] section for every Python process you have. If you need multiple processes for the same application, use the numprocs=X setting.
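
A supervisord section along those lines might look like this (the program name, paths, and port scheme are assumptions for illustration, not from this thread):

[program:support]
; numprocs starts N copies of the same command; process_name must then
; include %(process_num) so each copy gets a unique name
command=python /var/www/support/support.py --port=81%(process_num)02d
process_name=%(program_name)s-%(process_num)02d
numprocs=4
directory=/var/www/support
user=nobody
autorestart=true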

Ben Darnell

May 27, 2013, 2:09:37 PM
to Tornado Mailing List
On Mon, May 27, 2013 at 1:36 PM, Aleksey Silk <ale...@silk.bz> wrote:
hm ... the problem is hat I do have other python processes ... and I can't killall python ... hm ... 

That's what setproctitle is for - it gives each process a different name so they're not all just "python".

Aleksey Silk

May 27, 2013, 2:42:44 PM
to python-...@googlegroups.com
I've ended up doing that.
I run multiple processes with different PIDs, and I kill them with ps aux | grep BLA BLA | awk '{print $2}' | xargs kill -9
So I'm killing specific PIDs, not python as a whole.