Running cherrypy3 as a daemon


Herb...@googlemail.com

Jan 22, 2008, 10:26:42 PM
to cherrypy-users
How do I run cherrypy as a daemon?

Doing:

python app.py &, then exiting the remote SSH session, throws an Input/Output
error when I browse to my cp3 site:

<pre>
Traceback (most recent call last):
  File "/var/lib/python-support/python2.5/cherrypy/wsgiserver/__init__.py", line 624, in communicate
    req.respond()
  File "/var/lib/python-support/python2.5/cherrypy/wsgiserver/__init__.py", line 357, in respond
    response = self.wsgi_app(self.environ, self.start_response)
  File "/var/lib/python-support/python2.5/cherrypy/_cptree.py", line 74, in __call__
    return self.wsgiapp(environ, start_response)
  File "/var/lib/python-support/python2.5/cherrypy/_cpwsgi.py", line 290, in __call__
    return head(environ, start_response)
  File "/var/lib/python-support/python2.5/cherrypy/_cpwsgi.py", line 42, in __call__
    return IRResponse(self.nextapp, environ, start_response, self.recursive)
  File "/var/lib/python-support/python2.5/cherrypy/_cpwsgi.py", line 55, in __init__
    self.setapp()
  File "/var/lib/python-support/python2.5/cherrypy/_cpwsgi.py", line 60, in setapp
    self.response = self.nextapp(self.environ, self.start_response)
  File "/var/lib/python-support/python2.5/cherrypy/_cpwsgi.py", line 278, in tail
    return self.response_class(environ, start_response, self.cpapp)
  File "/var/lib/python-support/python2.5/cherrypy/_cpwsgi.py", line 138, in __init__
    _cherrypy.log(tb)
  File "/var/lib/python-support/python2.5/cherrypy/__init__.py", line 311, in __call__
    return log.error(*args, **kwargs)
  File "/var/lib/python-support/python2.5/cherrypy/_cplogging.py", line 40, in error
    self.error_log.log(severity, ' '.join((self.time(), context, msg)))
  File "/usr/lib/python2.5/logging/__init__.py", line 1056, in log
    apply(self._log, (level, msg, args), kwargs)
  File "/usr/lib/python2.5/logging/__init__.py", line 1101, in _log
    self.handle(record)
  File "/usr/lib/python2.5/logging/__init__.py", line 1111, in handle
    self.callHandlers(record)
  File "/usr/lib/python2.5/logging/__init__.py", line 1148, in callHandlers
    hdlr.handle(record)
  File "/usr/lib/python2.5/logging/__init__.py", line 655, in handle
    self.emit(record)
  File "/usr/lib/python2.5/logging/__init__.py", line 757, in emit
    self.handleError(record)
  File "/usr/lib/python2.5/logging/__init__.py", line 706, in handleError
    traceback.print_exception(ei[0], ei[1], ei[2], None, sys.stderr)
  File "/usr/lib/python2.5/traceback.py", line 124, in print_exception
    _print(file, 'Traceback (most recent call last):')
  File "/usr/lib/python2.5/traceback.py", line 13, in _print
    file.write(str+terminator)
IOError: [Errno 5] Input/output error
</pre>


Any ideas?



Thanks,
Herb

hctv19

Jan 23, 2008, 2:49:35 AM
to cherryp...@googlegroups.com

My web application "Filelocker" uses a daemonizing script that includes
start, stop, and restart functionality via command-line arguments. Here it
is; let me know if you have any questions:
-------------

#!/usr/bin/python

import os # Miscellaneous OS interfaces.
import sys # System-specific parameters and functions.
import cherrypy
import ConfigParser
import time
import signal, errno


CONFIG_PATH = "/home/wbdavis/source/filelocker/trunk/conf/server.conf"

# Default daemon parameters.
# File mode creation mask of the daemon.
UMASK = 0

# Default working directory for the daemon.
WORKDIR = "/"

# Default maximum for the number of available file descriptors.
MAXFD = 1024

# The standard I/O file descriptors are redirected to /dev/null by default.
##REDIRECT_TO = "/dev/null"
if (hasattr(os, "devnull")):
    REDIRECT_TO = os.devnull
else:
    REDIRECT_TO = "/dev/null"

def createDaemon():
    """Detach a process from the controlling terminal and run it in the
    background as a daemon.
    """

    try:
        # Fork a child process so the parent can exit.  This returns control to
        # the command-line or shell.  It also guarantees that the child will not
        # be a process group leader, since the child receives a new process ID
        # and inherits the parent's process group ID.  This step is required
        # to insure that the next call to os.setsid is successful.
        pid = os.fork()
    except OSError, e:
        raise Exception, "%s [%d]" % (e.strerror, e.errno)

    if (pid == 0):       # The first child.
        # To become the session leader of this new session and the process group
        # leader of the new process group, we call os.setsid().  The process is
        # also guaranteed not to have a controlling terminal.
        os.setsid()

        # Is ignoring SIGHUP necessary?
        #
        # It's often suggested that the SIGHUP signal should be ignored before
        # the second fork to avoid premature termination of the process.  The
        # reason is that when the first child terminates, all processes, e.g.
        # the second child, in the orphaned group will be sent a SIGHUP.
        #
        # "However, as part of the session management system, there are exactly
        # two cases where SIGHUP is sent on the death of a process:
        #
        #   1) When the process that dies is the session leader of a session that
        #      is attached to a terminal device, SIGHUP is sent to all processes
        #      in the foreground process group of that terminal device.
        #   2) When the death of a process causes a process group to become
        #      orphaned, and one or more processes in the orphaned group are
        #      stopped, then SIGHUP and SIGCONT are sent to all members of the
        #      orphaned group." [2]
        #
        # The first case can be ignored since the child is guaranteed not to have
        # a controlling terminal.  The second case isn't so easy to dismiss.
        # The process group is orphaned when the first child terminates and
        # POSIX.1 requires that every STOPPED process in an orphaned process
        # group be sent a SIGHUP signal followed by a SIGCONT signal.  Since the
        # second child is not STOPPED though, we can safely forego ignoring the
        # SIGHUP signal.  In any case, there are no ill-effects if it is ignored.
        #
        # import signal           # Set handlers for asynchronous events.
        # signal.signal(signal.SIGHUP, signal.SIG_IGN)

        try:
            # Fork a second child and exit immediately to prevent zombies.  This
            # causes the second child process to be orphaned, making the init
            # process responsible for its cleanup.  And, since the first child is
            # a session leader without a controlling terminal, it's possible for
            # it to acquire one by opening a terminal in the future (System V-
            # based systems).  This second fork guarantees that the child is no
            # longer a session leader, preventing the daemon from ever acquiring
            # a controlling terminal.
            pid = os.fork()       # Fork a second child.
        except OSError, e:
            raise Exception, "%s [%d]" % (e.strerror, e.errno)

        if (pid == 0):    # The second child.
            # Since the current working directory may be a mounted filesystem, we
            # avoid the issue of not being able to unmount the filesystem at
            # shutdown time by changing it to the root directory.
            os.chdir(WORKDIR)
            # We probably don't want the file mode creation mask inherited from
            # the parent, so we give the child complete control over permissions.
            os.umask(UMASK)
        else:
            # exit() or _exit()?  See below.
            os._exit(0)   # Exit parent (the first child) of the second child.
    else:
        # exit() or _exit()?
        # _exit is like exit(), but it doesn't call any functions registered
        # with atexit (and on_exit) or any registered signal handlers.  It also
        # closes any open file descriptors.  Using exit() may cause all stdio
        # streams to be flushed twice and any temporary files may be unexpectedly
        # removed.  It's therefore recommended that child branches of a fork()
        # and the parent branch(es) of a daemon use _exit().
        os._exit(0)       # Exit parent of the first child.

    # Close all open file descriptors.  This prevents the child from keeping
    # open any file descriptors inherited from the parent.  There is a variety
    # of methods to accomplish this task.  Three are listed below.
    #
    # Try the system configuration variable, SC_OPEN_MAX, to obtain the maximum
    # number of open file descriptors to close.  If it doesn't exist, use
    # the default value (configurable).
    #
    #    try:
    #        maxfd = os.sysconf("SC_OPEN_MAX")
    #    except (AttributeError, ValueError):
    #        maxfd = MAXFD
    #
    #    OR
    #
    #    if (os.sysconf_names.has_key("SC_OPEN_MAX")):
    #        maxfd = os.sysconf("SC_OPEN_MAX")
    #    else:
    #        maxfd = MAXFD
    #
    #    OR
    #
    # Use the getrlimit method to retrieve the maximum file descriptor number
    # that can be opened by this process.  If there is no limit on the
    # resource, use the default value.
    #
    import resource       # Resource usage information.
    maxfd = resource.getrlimit(resource.RLIMIT_NOFILE)[1]
    if (maxfd == resource.RLIM_INFINITY):
        maxfd = MAXFD

    # Iterate through and close all file descriptors.
    for fd in range(0, maxfd):
        try:
            os.close(fd)
        except OSError:   # ERROR, fd wasn't open to begin with (ignored)
            pass

    # Redirect the standard I/O file descriptors to the specified file.  Since
    # the daemon has no controlling terminal, most daemons redirect stdin,
    # stdout, and stderr to /dev/null.  This is done to prevent side-effects
    # from reads and writes to the standard I/O file descriptors.

    # This call to open is guaranteed to return the lowest file descriptor,
    # which will be 0 (stdin), since it was closed above.
    os.open(REDIRECT_TO, os.O_RDWR)    # standard input (0)

    # Duplicate standard input to standard output and standard error.
    os.dup2(0, 1)                      # standard output (1)
    os.dup2(0, 2)                      # standard error (2)

    return(0)


def minifyAndUnite(files, output_file, staticRoot):
    full_text = []
    for f in files:
        full_text.extend([l.lstrip() for l in open(f, "r").readlines()])
    open(output_file, "w").write("\n".join(full_text).replace("STATIC_ROOT", staticRoot))

def _maintenance():
    files.deleteQueuedFiles()
    files.clearExpiredShares()
    files.clearExpiredFiles()
    files.clearOrphanedFiles()
    users.clearInactiveUsers()

def start():
    config_parse = ConfigParser.ConfigParser()
    fh = open(CONFIG_PATH, 'r')
    config_parse.readfp(fh)
    fh.close()

    config_dict = {}
    config_dict['pidFile'] = config_parse.get("/filelocker", "PIDFN")
    config_dict['loggingConfigPath'] = config_parse.get("/filelocker", "LOGGING_CONFIG_PATH")
    config_dict['maintenanceFrequency'] = config_parse.get("/filelocker", "MAINT_FREQ_HOURS")
    config_dict['adminUsers'] = config_parse.get("/filelocker", "ADMIN_USERS")
    config_dict['organization'] = config_parse.get("/filelocker", "ORGANIZATION")
    config_dict['version'] = config_parse.get("/filelocker", "VERSION")
    config_dict['filelockerRoot'] = config_parse.get("/filelocker", "FILELOCKER_ROOT")
    config_dict['filelockerPath'] = config_parse.get("/filelocker", "FILELOCKER_PATH")
    config_dict['staticRoot'] = config_parse.get("/filelocker", "STATIC_ROOT")
    config_dict['staticPath'] = config_parse.get("/filelocker", "STATIC_PATH")
    config_dict['filePath'] = config_parse.get("/filelocker", "FILE_PATH")
    config_dict['maxFileLifeDays'] = int(config_parse.get("/filelocker", "MAX_FILE_LIFE_DAYS"))
    config_dict['maxAnonShareDays'] = int(config_parse.get("/filelocker", "MAX_ANON_SHARE_DAYS"))
    config_dict['maxUserInactiveDays'] = int(config_parse.get("/filelocker", "MAX_USER_INACTIVE_DAYS"))
    config_dict['maxFileUploads'] = int(config_parse.get("/filelocker", "MAX_FILE_UPLOADS"))
    config_dict['maintFreqHours'] = int(config_parse.get("/filelocker", "MAINT_FREQ_HOURS"))
    config_dict['avscanner'] = config_parse.get("/filelocker", "AVSCANNER")
    config_dict['deleteCommand'] = config_parse.get("/filelocker", "DELETE_COMMAND")

    config_dict['authType'] = config_parse.get("/auth", "AUTH_TYPE")

    config_dict['smtpSender'] = config_parse.get("/mail", "SMTP_SENDER")
    config_dict['smtpServer'] = config_parse.get("/mail", "SMTP_SERVER")
    config_dict['smtpStartTLS'] = config_parse.get("/mail", "SMTP_STARTTLS")
    config_dict['smtpPort'] = config_parse.get("/mail", "SMTP_PORT")
    config_dict['smtpAuthRequired'] = config_parse.get("/mail", "SMTP_AUTH_REQUIRED")
    config_dict['smtpUser'] = config_parse.get("/mail", "SMTP_USER")
    config_dict['smtpPass'] = config_parse.get("/mail", "SMTP_PASS")

    config_dict['DB_Host'] = config_parse.get("/mysql", "DB_Host")
    config_dict['DB_User'] = config_parse.get("/mysql", "DB_User")
    config_dict['DB_Password'] = config_parse.get("/mysql", "DB_Password")
    config_dict['DB_Name'] = config_parse.get("/mysql", "DB_Name")

    sys.stdout.write("Starting Filelocker daemon\n")
    if os.path.isfile(config_dict['pidFile']):
        FILE = open(config_dict['pidFile'], 'r')
        pid = int(FILE.read().strip())
        FILE.close()
        try:
            os.kill(pid, 0)
        except os.error, args:
            if args[0] == errno.ESRCH:  # NO SUCH PROCESS
                sys.stdout.write("Stale PID file detected, removing...\n")
        else:
            sys.stdout.write("Filelocker daemon is already running, or stale PID file exists!\n")
            sys.exit(0)

    #retCode = createDaemon()
    #Attach the config values to the cherrypy object
    cherrypy.filelocker_config = config_dict
    #load local libs
    sys.path.append("./lib")
    from Root import Root
    import fldb
    import users
    import files
    import globalvars

    Logger = globalvars.getLogger()
    pid = os.getpid()
    FILE = open(config_dict['pidFile'], "w")
    FILE.write(str(pid))
    FILE.close()

    #Add any css or js files that should be conglomerated into a single file
    css = [config_dict['staticPath'] + '/style/filelocker.css',
           config_dict['staticPath'] + '/style/calendar.css',
           config_dict['staticPath'] + '/style/hovertip.css']
    js = [config_dict['staticPath'] + '/javascript/main.js',
          config_dict['staticPath'] + '/javascript/shareManager.js',
          config_dict['staticPath'] + '/javascript/fileManager.js']

    #Minify and store
    minifyAndUnite(css, "%s/style/compiled.css" % config_dict['staticPath'], config_dict['staticRoot'])
    minifyAndUnite(js, "%s/javascript/compiled.js" % config_dict['staticPath'], config_dict['staticRoot'])

    cherrypy.engine.on_start_thread_list.append(fldb.connect)
    cherrypy.config.update(CONFIG_PATH)
    cherrypy.tree.mount(Root(), '/', config=CONFIG_PATH)
    cherrypy.server.quickstart()
    cherrypy.engine.start(blocking=False)

    Logger.info("Cherrypy started, filelocker application mounted, going into maintenance mode...")
    while 1:
        _maintenance()
        time.sleep(config_dict['maintFreqHours'] * 60 * 60)

def stop():
    sys.path.append("./lib")
    import globalvars
    config_parse = ConfigParser.ConfigParser()
    fh = open(CONFIG_PATH, 'r')
    config_parse.readfp(fh)
    fh.close()
    pidFile = config_parse.get("/filelocker", "PIDFN")
    if os.path.isfile(pidFile):
        FILE = open(pidFile, 'r')
        pid = int(FILE.read().strip())
        FILE.close()
        try:
            os.kill(pid, signal.SIGTERM)
        except os.error, args:
            globalvars.getLogger().critical("OS Error: %s" % args)
            if args[0] != errno.ESRCH:  # NO SUCH PROCESS
                sys.stdout.write("Error stopping: %s\n" % str(args[0]))
            else:
                sys.stdout.write("Stale PID file, removing...\n")
        except Exception, e:
            sys.stdout.write("Error stopping: %s\n" % str(e))
        else:
            os.kill(pid, 9)
            os.remove(pidFile)
            sys.stdout.write("Filelocker daemon stopped\n")
    else:
        sys.stdout.write("Filelocker daemon is not running\n")

def printUsage():
    sys.stdout.write("Available options:\nstart - starts Filelocker daemon\n"
                     "stop - perform graceful stop\nrestart - perform graceful restart")

if __name__ == "__main__":
    argv = sys.argv
    if (len(argv) < 2):
        start()
    else:
        if argv[1] == "start":
            start()
        elif argv[1] == "stop":
            stop()
            sys.exit(0)
        elif argv[1] == "restart":
            stop()
            start()
        else:
            sys.stdout.write("Unknown command '%s'\n" % argv[1])
            printUsage()



Herb...@googlemail.com

Jan 23, 2008, 3:15:13 AM
to cherrypy-users
That's really nice of you hctv19, but that's also a little bit
surprising to me. Is so much custom code required to deploy a
CherryPy application?

What's an easy way to deploy it?

On my local machine I just do python app.py to get it going. That's
all I need, but I also want to run it in the background. python
app.py & doesn't work, as I described in my first post.

Does anyone know of an easier way?

If not, can anyone recommend a Python framework that makes this
easier?


Thanks,
Herb

Graham Dumpleton

Jan 23, 2008, 3:20:07 AM
to cherrypy-users
You can try:

nohup python app.py > app.log 2>&1 &

Any output from the program will go to the app.log file.

Graham


Christian Wyglendowski

Jan 23, 2008, 9:59:00 AM
to cherryp...@googlegroups.com
On 1/23/08, Herb...@googlemail.com <Herb...@googlemail.com> wrote:
>
> That's really nice of you hctv19, but that's also a little bit
> surprising to me. Is so much custom code required to deploy a
> cherrypy application?
>
> What's an easy way to deploy it?

I just got into using supervisor2.

http://www.plope.com/software/supervisor2/

It handles the daemonization for you, and lots of other stuff too.

Christian
http://www.dowski.com

Tim Roberts

Jan 23, 2008, 1:12:56 PM
to cherryp...@googlegroups.com
Herb...@googlemail.com wrote:
> That's really nice of you hctv19, but that's also a little bit
> surprising to me. Is so much custom code required to deploy a
> cherrypy application?
>
> What's an easy way to deploy it?
>
> On my local machine I just do python app.py, to get it going. That's
> all I need, but I also want to send it to the background. python
> app.py & doesn't work, as I described in my first post.
>
> Does anyone know of an easier way?
>

You are logging to stdout and stderr. Change the logging to go to a
local file, and this problem should go away.

The "nohup" suggestion from Graham accomplishes the same thing via the
shell.
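
For example, something along these lines in the app's startup code (a sketch
only; the log file paths are placeholders, and the entries used are CherryPy
3's log.screen / log.error_file / log.access_file):

import cherrypy

cherrypy.config.update({
    'log.screen': False,                             # stop writing log output to stdout/stderr
    'log.error_file': '/var/log/myapp/error.log',    # send the error/engine log to a file instead
    'log.access_file': '/var/log/myapp/access.log',  # send the access log to a file instead
})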

--
Tim Roberts, ti...@probo.com
Providenza & Boekelheide, Inc.

Robert Brewer

Jan 23, 2008, 1:24:22 PM
to cherryp...@googlegroups.com
HerbAsher wrote:
> Is so much custom code required to deploy a cherrypy application?
> What's an easy way to deploy it?

The errors you saw were due to leaving the log set to stdout, which
disappeared when you shut down the terminal. You can turn that off
directly via the config entry:

log.screen: False

Setting the config entry environment: 'production' will do that for you,
too (among other things).

> hctv19 wrote (heavily snipped):
> > def createDaemon():
> > pid = os.fork()
> > os.setsid()
> > os.umask(UMASK)
> > os._exit(0)


> > os.dup2(0, 1) # standard output (1)
> > os.dup2(0, 2) # standard error (2)

> > sys.stdout.write("Starting Filelocker daemon\n")

In CherryPy 3.1, cherrypy.engine can do all of the above via the
Daemonizer plugin:

from cherrypy.restsrv.plugins import Daemonizer, PIDFile
Daemonizer(cherrypy.engine).subscribe()

> > FILE = open(config_dict['pidFile'],"w")

...and manage pid files via:

PIDFile(cherrypy.engine, filename).subscribe()
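
Putting those two together, a minimal start script could look roughly like
this (a sketch only: the Root class and pid-file path are placeholders, and
the import path is the one shown above):

import cherrypy
from cherrypy.restsrv.plugins import Daemonizer, PIDFile

class Root(object):
    def index(self):
        return "Hello, world!"
    index.exposed = True

# 'production' turns off log.screen (and the autoreloader), which is what
# you want when running detached from a terminal.
cherrypy.config.update({'environment': 'production'})

Daemonizer(cherrypy.engine).subscribe()                     # detach from the terminal
PIDFile(cherrypy.engine, '/var/run/myapp.pid').subscribe()  # write a pid file on start

cherrypy.quickstart(Root(), '/')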

> > Logger.info("Cherrypy started, filelocker application mounted,

...and act as a site-wide log:

python cherrypy\tutorial\tut01_helloworld.py
[23/Jan/2008:10:00:47] ENGINE Listening for SIGTERM.
[23/Jan/2008:10:00:47] ENGINE Bus STARTING
[23/Jan/2008:10:00:47] ENGINE Started thread 'restsrv _TimeoutMonitor'.
[23/Jan/2008:10:00:47] ENGINE Started thread 'restsrv Autoreloader'.
[23/Jan/2008:10:00:48] ENGINE Serving on 127.0.0.1:8080
[23/Jan/2008:10:00:48] ENGINE Bus STARTED
[23/Jan/2008:10:00:51] ENGINE Console event 0: shutting down bus
[23/Jan/2008:10:00:51] ENGINE Bus STOPPING
[23/Jan/2008:10:00:52] ENGINE HTTP Server cherrypy._cpwsgi.CPWSGIServer(('127.0.0.1', 8080)) shut down
[23/Jan/2008:10:00:52] ENGINE Stopped thread 'restsrv _TimeoutMonitor'.
[23/Jan/2008:10:00:52] ENGINE Stopped thread 'restsrv Autoreloader'.
[23/Jan/2008:10:00:52] ENGINE Bus STOPPED
[23/Jan/2008:10:00:52] ENGINE Bus EXITING
[23/Jan/2008:10:00:52] ENGINE Waiting for child threads to terminate...

> > os.chdir(WORKDIR)


> > # maxfd = os.sysconf("SC_OPEN_MAX")

> > # Iterate through and close all file descriptors.
> > for fd in range(0, maxfd):

> > os.open(REDIRECT_TO, os.O_RDWR) # standard input (0)

> > os.kill(pid, signal.SIGTERM)

All of *those* behaviors should be implemented as plugins for CherryPy
3.1, and contributed back to the project so we can distribute proven
implementations of them. See
http://www.cherrypy.org/wiki/WebSiteProcessBus#Writingdeploymentscripts.
Please, please, please do this to make CherryPy better.
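
For anyone who wants to take that up, a plugin skeleton is pretty small:
subclass the plugin base class, give it start/stop methods, and subscribe it
to the engine. Roughly like this (a sketch: MaintenancePlugin is a made-up
name, and the SimplePlugin base class / module path are assumed from the
restsrv plugins module imported above):

import cherrypy
from cherrypy.restsrv.plugins import SimplePlugin

class MaintenancePlugin(SimplePlugin):
    # Hypothetical skeleton: the bus calls start() when the engine starts
    # and stop() when it shuts down; real work would go in these methods.
    def start(self):
        self.bus.log("Maintenance plugin starting")

    def stop(self):
        self.bus.log("Maintenance plugin stopping")

MaintenancePlugin(cherrypy.engine).subscribe()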

> > sys.stdout.write("Available options:\nstart - starts Filelocker
> > daemon\nstop - perform graceful stop\nrestart - perform graceful
> restart")

I'd love to see a script in the restsrv folder which called
engine.start/stop/graceful.

We're so close to having a really powerful deployment solution. Any and
all of you who have big hairy startup scripts like hctv19 (that are
keeping up with CP), please take some time to switch to/improve/add to
the Plugins so CP 3.1 final can be a huge success.


Robert Brewer
fuma...@aminus.org

Herb...@googlemail.com

Jan 23, 2008, 1:26:27 PM
to cherrypy-users
Thanks everyONE for the incredible help! Appreciate every single
answer. Great community.

Herb