I know this problem shows up on the group from time to time, but the
suggested solutions (pool_recycle) do not work for me. Here is my situation:
1. I am using SQLAlchemy 0.4.7p1-2 from Pylons 0.9.6.2-2 on Debian
Lenny (xen-amd64)
2. From time to time I see "MySQL has gone away" in the logs, and this
is the reason why some HTTP requests made to my Pylons app fail.
3. My settings in the configuration file are as follows:
>>>
...
sqlalchemy.url = mysql://USER:PASS@HOST:3306/DBNAME?charset=utf8
sqlalchemy.pool_recycle = 60
# (wait_timeout on the MySQL side is the default, which is 8 hours AFAIK)
sqlalchemy.convert_unicode = True
...
<<<
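For context, pool_recycle works by timestamping each pooled connection and
discarding it at checkout once it is older than the configured number of
seconds. A rough pure-Python sketch of that bookkeeping (the names here are
mine, not SQLAlchemy's internal ones):

```python
import time

class RecordedConnection(object):
    """Minimal sketch of per-connection bookkeeping, assuming a
    recycle window in seconds like sqlalchemy.pool_recycle."""

    def __init__(self, recycle):
        self.starttime = time.time()
        self.recycle = recycle  # -1 means "never recycle"

    def needs_recycle(self, now=None):
        # True once the connection has outlived its recycle window;
        # the pool would then close it and open a fresh one at checkout.
        if now is None:
            now = time.time()
        return self.recycle > -1 and now - self.starttime > self.recycle

conn = RecordedConnection(recycle=60)
print(conn.needs_recycle(now=conn.starttime + 61))  # True
print(conn.needs_recycle(now=conn.starttime + 30))  # False
```

Note that the recycle check only happens when a connection is checked out,
which matches the log below: the stale connection sits in the pool until a
request asks for it.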
4. I have logging enabled, and here is what I see when a request fails.
Please look at the connection identified as "at 2238cd0": its lifetime is
one minute, but it seems that at the very last moment (in the same second
in which it should be destroyed) this connection is returned from the
pool, possibly because of some HTTP request. Unfortunately, the error
occurs right after that. Another surprise for me is why the whole pool is
recreated after this.
P.S. The third column is the thread id.
>>>
16:19:07,621 INFO 1115703632 [sqlalchemy.pool.QueuePool.0x..10]
Closing connection <_mysql.connection open to 'DB_HOST' at 2262480>
16:19:07,624 INFO 1115703632 [sqlalchemy.pool.QueuePool.0x..10]
Created new connection <_mysql.connection open to 'DB_HOST' at
2238cd0>
16:19:07,624 INFO 1115703632 [sqlalchemy.pool.QueuePool.0x..10]
Connection <_mysql.connection open to 'DB_HOST' at 2238cd0> checked
out from pool
16:19:07,740 INFO 1115703632 [sqlalchemy.pool.QueuePool.0x..10]
Connection <_mysql.connection open to 'DB_HOST' at 2238cd0> being
returned to pool
16:20:02,291 INFO 1107310928 [sqlalchemy.pool.QueuePool.0x..10]
Connection <_mysql.connection open to 'DB_HOST' at 2364540> checked
out from pool
16:20:02,413 INFO 1107310928 [sqlalchemy.pool.QueuePool.0x..10]
Connection <_mysql.connection open to 'DB_HOST' at 2364540> being
returned to pool
16:20:03,477 INFO 1132489040 [sqlalchemy.pool.QueuePool.0x..10]
Connection <_mysql.connection open to 'DB_HOST' at 222afb0> checked
out from pool
16:20:03,516 INFO 1132489040 [sqlalchemy.pool.QueuePool.0x..10]
Connection <_mysql.connection open to 'DB_HOST' at 222afb0> being
returned to pool
16:20:07,573 INFO 1124096336 [sqlalchemy.pool.QueuePool.0x..10]
Connection <_mysql.connection open to 'DB_HOST' at 2238cd0> checked
out from pool
16:20:07,574 INFO 1124096336 [sqlalchemy.pool.QueuePool.0x..10]
Invalidate connection <_mysql.connection open to 'DB_HOST' at 2238cd0>
(reason: OperationalError:(2006, 'MySQL server has gone away'))
16:20:07,574 INFO 1124096336 [sqlalchemy.pool.QueuePool.0x..10]
Closing connection <_mysql.connection open to 'DB_HOST' at 2238cd0>
16:20:07,574 INFO 1124096336 [sqlalchemy.pool.QueuePool.0x..10]
Connection None being returned to pool
16:20:07,575 INFO 1124096336 [sqlalchemy.pool.QueuePool.0x..10]
Closing connection <_mysql.connection open to 'DB_HOST' at 2364540>
16:20:07,575 INFO 1124096336 [sqlalchemy.pool.QueuePool.0x..10]
Closing connection <_mysql.connection open to 'DB_HOST' at 222afb0>
16:20:07,575 INFO 1124096336 [sqlalchemy.pool.QueuePool.0x..10] Pool
disposed. Pool size: 5 Connections in pool: 0 Current Overflow: -5
Current Checked out connections: 0
16:20:07,575 INFO 1124096336 [sqlalchemy.pool.QueuePool.0x..10] Pool recreating
<<<
Any ideas?
Thanks for your help!
Cheers,
--
Kamil Gorlo
Another cause of "gone away" messages is a query that exceeds the
configured memory resources on the server. Taking a look at MySQL's logs
may shed more light and give hints about which buffers need tuning, if
that is the problem.
> But, assuming this is external problem - is there any way to tell
> SQLAlchemy to try another connection for the same request (instead of
> returning HTTP 500 for user), or maybe other pooling strategy or even
> something else?
Yes, with a simple pool listener you can ensure the liveness of
connections before the pool hands them out for use. A usage example is
attached.
Cheers,
Jason
from sqlalchemy import exc

class LookLively(object):
    """Ensures that MySQL connections checked out of the
    pool are alive."""

    def checkout(self, dbapi_con, con_record, con_proxy):
        try:
            try:
                # ping(False) tells MySQLdb not to auto-reconnect,
                # so a dead server raises OperationalError here
                dbapi_con.ping(False)
            except TypeError:
                # older MySQLdb versions take no argument
                dbapi_con.ping()
        except dbapi_con.OperationalError as ex:
            if ex.args[0] in (2006, 2013, 2014, 2045, 2055):
                # caught by pool, which will retry with a new connection
                raise exc.DisconnectionError()
            else:
                raise
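To see the checkout hook in action without a live MySQL server, here is a
self-contained sketch that simulates a dead connection. FakeConnection and
the DisconnectionError stand-in are made-up names for illustration; in real
code you would use the DBAPI connection and sqlalchemy.exc:

```python
class DisconnectionError(Exception):
    """Stand-in for sqlalchemy.exc.DisconnectionError."""

class OperationalError(Exception):
    pass

class FakeConnection(object):
    """Pretends to be a MySQLdb connection whose server went away."""
    OperationalError = OperationalError

    def ping(self, reconnect=False):
        # error 2006 is 'MySQL server has gone away'
        raise self.OperationalError(2006, 'MySQL server has gone away')

def checkout(dbapi_con):
    """Same ping-on-checkout logic as the LookLively hook above."""
    try:
        try:
            dbapi_con.ping(False)
        except TypeError:
            dbapi_con.ping()
    except dbapi_con.OperationalError as ex:
        if ex.args[0] in (2006, 2013, 2014, 2045, 2055):
            raise DisconnectionError()
        else:
            raise

try:
    checkout(FakeConnection())
    result = 'checked out'
except DisconnectionError:
    result = 'disconnected, pool should retry'

print(result)  # disconnected, pool should retry
```

With the SQLAlchemy of that era the listener would be attached at engine
creation, something along the lines of
create_engine(url, listeners=[LookLively()]); check the pool listener
docs for your exact version.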