nohup nice python cposerver.py > /home/rshea/cposerver_print_out.txt &
... so that it would remain active once I had logged off.
I now find that when there has been no activity for a while I get an
error from within CP, originating in MySQLdb: 'MySQL server has
gone away'.
Now, if I log on and run mysql from the prompt I can access the database
as normal, so can anyone explain why I get this message?
By the way, the message goes away if I stop and restart the server.
I know the message is associated with MySQL being idle for some
time, but what I don't understand is (A) how I can stop it happening
(basically, how to tell MySQL "I don't care how long you have to wait
until you hear from me, keep listening!") and (B) why CP requires a
restart before it acknowledges that MySQL is available again.
Any information gratefully received.
thanks
richard.
ps - environment is MySQL 5.x, Gentoo is the OS, CP 2.1.1
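One way to sidestep (B) without restarting anything is to catch the failure and reconnect on the spot. A minimal sketch of that idea; the connection and error classes here are stand-ins so the example is self-contained, not MySQLdb's actual API (MySQLdb signals this condition with an OperationalError):

```python
class StaleConnectionError(Exception):
    """Stand-in for MySQLdb's OperationalError (2006, 'MySQL server has gone away')."""

def run_query(get_connection, reconnect, query):
    """Try the query once; on a stale connection, reconnect and retry once."""
    conn = get_connection()
    try:
        return conn.execute(query)
    except StaleConnectionError:
        # The server dropped the idle connection; build a fresh one and retry.
        conn = reconnect()
        return conn.execute(query)
```

With real MySQLdb objects, `get_connection` would return the cached connection and `reconnect` would call `MySQLdb.connect(...)` again.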
Well, it does look like the MySQL connection needs refreshing from time
to time, because I get something similar. If I try to use my web app
after it has not been accessed for a day or two, I get (last two lines
of the traceback):
  File "D:\Python24\Lib\site-packages\MySQLdb\connections.py", line 33, in defaulterrorhandler
    raise errorclass, errorvalue
OperationalError: (2013, 'Lost connection to MySQL server during query')
If I then retry it works (without me having to restart anything).
I'd say this isn't MySQLdb's or CherryPy's fault, because something
similar happens if I leave the MySQL command-line client running and
try to reuse it after a day or two.
So I intend to follow the recent thread "Running Scheduled
functions in Cherrypy" and write some kind of database connection
keepalive function.
The scheduled-functions thread is at:
http://groups.google.com/group/cherrypy-users/browse_thread/thread/8b89e9af50254247?hl=en
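The keepalive idea could be sketched as a small background thread that pings at a fixed interval, kept well below the server timeout. Here `ping` is any callable; with MySQLdb the connection's own ping() method would be the natural choice:

```python
import threading

def start_keepalive(ping, interval, stop_event):
    """Call ping() every `interval` seconds until stop_event is set."""
    def loop():
        while not stop_event.wait(interval):
            try:
                ping()
            except Exception:
                # A failed ping just means the next real query must reconnect.
                pass
    t = threading.Thread(target=loop)
    t.daemon = True  # don't keep the process alive for the sake of pings
    t.start()
    return t
```

A scheduled function inside CP (as in the linked thread) would do the same job without a hand-rolled thread.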
An example pool can be found here:
http://subpar.se/pool.py
This is not how a 'pool' works.
That code merely instantiates a bunch of objects and then uses them up
one by one (the get method removes from the queue); after that it
always creates a new instance.
Have you read:
http://groups.google.com/group/turbogears/browse_thread/thread/19d9952b48c0e858/5f8132b2049b49ce?lnk=st&q=mysqldb+gone+away&rnum=2#5f8132b2049b49ce
?
Given the large number of people who use CP just fine alongside (I
cannot say "with" since they only connect because you ask them to ;)
various databases, I think you'd be better off asking on the SQLObject
or MySQL lists.
Robert Brewer
System Architect
Amor Ministries
fuma...@amor.org
I think you make a reasonable point - I am the OP, and I believe (the
problem is not easy to recreate at will) I was getting it in some code
which uses PyDo, so that would make it look like a MySQLdb issue (which
I presume all the ORMs sit on top of).
Maybe it would be better to test the actual connection before returning
it... just a thought that crossed my mind. Hmm, I have just looked at
the connection object from MySQLdb and it doesn't have a check
function... one would be pretty useful - something like an
isConnectionOpen method or similar.
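For what it's worth, MySQLdb connections do expose a ping() method that raises an OperationalError when the server is unreachable, so an isConnectionOpen-style check could be a thin wrapper over it. A sketch; the OperationalError class below is a stand-in so the example is self-contained:

```python
class OperationalError(Exception):
    """Stand-in for MySQLdb's OperationalError."""

def is_connection_open(conn):
    """Return True if the server answers a ping, False otherwise."""
    try:
        conn.ping()
        return True
    except OperationalError:
        return False
```

Note that a check like this is racy by nature: the server can still drop the connection between the ping and the real query, so the retry-on-failure approach is the more robust companion to it.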
To get the timeout one can do this:
mysqld --verbose --help
Find interactive_timeout.
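Rather than hard-coding the value, the timeout could also be read from a live connection at startup; for a non-interactive client the variable that usually applies is wait_timeout rather than interactive_timeout. A sketch using the standard DB-API cursor interface (the margin parameter is my own addition, mirroring the one-hour safety margin below):

```python
def read_wait_timeout(cursor, margin=3600):
    """Return the server's wait_timeout minus a safety margin, in seconds."""
    cursor.execute("SHOW VARIABLES LIKE 'wait_timeout'")
    name, value = cursor.fetchone()  # rows come back as (name, value)
    return int(value) - margin
```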
My code
========================================================
import Queue
from time import time

# MySQL interactive timeout; set one hour below the real value so no
# chances are taken.
timeout = 28800 - (60 * 60)

class ConnectionPool:
    def __init__(self, constructor=NiceShoeWrapper, pool_count=5):
        self.q = Queue.Queue(pool_count)
        self.constructor = constructor
        for i in range(pool_count):
            self.releaseConnection(constructor())

    def getSize(self):
        return self.q.qsize()

    def getConnection(self):
        try:
            put_stamp, obj = self.q.get(False)
            if time() - put_stamp < timeout:
                return obj
            else:
                # Too old; the server may already have dropped it.
                obj.close()
                return self.constructor()
        except Queue.Empty:
            return self.constructor()

    def releaseConnection(self, obj):
        try:
            return self.q.put((time(), obj), False)
        except Queue.Full:
            # The queue is full; close the connection and ignore it.
            obj.close()
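A usage sketch for a pool like the one above: pairing getConnection with releaseConnection in a try/finally, so connections find their way back into the queue even when the handler raises. Here `pool` is any object with those two methods; the helper name is mine, not part of the code above:

```python
def with_connection(pool, work):
    """Run work(conn) with a pooled connection, always releasing it."""
    conn = pool.getConnection()
    try:
        return work(conn)
    finally:
        pool.releaseConnection(conn)
```

Without this, any exception in a request handler leaks a connection out of the pool, and the queue slowly drains to empty.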
My unit test
========================================================
def test_pool():
    assert amiweb.db_pool.getSize() == 5
    # Test that the pool gets smaller
    connections = []
    for i in range(0, 5):
        connections.append(amiweb.db_pool.getConnection())
    assert amiweb.db_pool.getSize() == 0
    # Test that connections can get back into the pool
    map(amiweb.db_pool.releaseConnection, connections)
    assert amiweb.db_pool.getSize() == 5
    # Test that the connections in the pool are equal to the first ones,
    # i.e. connections should be reused
    new_connections = []
    for i in range(0, 5):
        new_connections.append(amiweb.db_pool.getConnection())
    assert new_connections == connections
    # The pool is empty, so a new connection should be returned
    assert amiweb.db_pool.getSize() == 0
    con = amiweb.db_pool.getConnection()
    assert con != None
    assert con not in new_connections
    # Put this connection back in
    amiweb.db_pool.releaseConnection(con)
    assert amiweb.db_pool.getSize() == 1
    # Put the new_connections back in and expect the pool size to be 5
    map(amiweb.db_pool.releaseConnection, new_connections[:4])
    assert amiweb.db_pool.getSize() == 5
    # The last connection should be closed when released into a full pool
    con_last = new_connections[-1]
    assert con_last.is_closed == False
    amiweb.db_pool.releaseConnection(con_last)
    assert con_last.is_closed == True
    assert amiweb.db_pool.getSize() == 5
Yeah, that would be pretty convenient. I had an idea to extend the pool
to give me a cursor object directly if I wanted; that way the pool
could have a try-except where it tries to get a cursor from the pooled
connection and, if that fails, just makes a new connection first.
Something like that anyway... maybe there are practical reasons why
this is a bad idea, but I can't think of any right now :)
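That try-except idea might look something like this; the names are illustrative stand-ins, not MySQLdb's API, and `pool` is anything with a getConnection method:

```python
def get_cursor(pool, make_connection):
    """Return (connection, cursor), falling back to a fresh connection
    when the pooled one can no longer produce a cursor."""
    conn = pool.getConnection()
    try:
        return conn, conn.cursor()
    except Exception:
        # The pooled connection has gone stale; build a new one.
        conn = make_connection()
        return conn, conn.cursor()
```

The connection still has to be returned alongside the cursor, since the caller needs it to release (or commit on) later.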