"MySQL server has gone away"
comes up, meaning the database connection has timed out.
Is there any way to inject a simple test query (and possibly a
reconnect) function into SQLAlchemy so it runs this procedure to ensure
a valid connection?
# pool_recycle takes a number of seconds, not a boolean
db = create_engine('mysql://root@localhost/test', pool_recycle=3600)
meta = BoundMetaData(db)
Check out the FAQ entry on this:
Note that the recycle only occurs on the connect() operation.
Also turn on "echo_pool=True" so that you can see the connections being
recycled in the standard output.
But as a stop-gap measure (and if you aren't expecting data rates in
the thousands per second), you can set wait_timeout (in seconds) to
something hideously high.
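For reference, that stop-gap might look like the fragment below; the value shown is just an illustration (MySQL's default wait_timeout is 28800 seconds, i.e. 8 hours):

```ini
# my.cnf: raise the idle-connection timeout (in seconds) well above
# your longest expected idle period. Equivalent at runtime:
#   SET GLOBAL wait_timeout = 604800;
[mysqld]
wait_timeout = 604800
```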
Monty
I've gone over the TG connection code a number of times and I can't
see anything wrong with it. This isn't after a long idle time either.
Sometimes I can start the app and have the "MySQL server has gone
away" exception after 2 or 3 requests.
To reproduce this is simple. Quickstart a new turbogears project with
SA and identity:
tg-admin quickstart -i --sqlalchemy
Change the dburi in dev.cfg to a mysql db. Then use ab to pound on
the server. The magic numbers that *always* cause the app to fail at
some point during the process are as follows:
ab -n 500 -c 10 http://localhost:8080/
I'm pretty sure that this isn't a TG problem (although I don't have an
issue with being proven wrong :) so is this an SA issue or a MySQLdb
issue?
Thanks,
Lee
--
Lee McFadden
blog: http://www.splee.co.uk
work: http://fireflisystems.com
skype: fireflisystems
Almost the same for me (MySQL 5.0.22, SA 0.2.8 and 0.3.1, using CherryPy).
Using pool_recycle works nicely in non-high-load environments. But while I
benchmark my app (using siege or ab), it always produces "Lost connection
during MySQL query...", and sometimes "MySQL server has gone away...".
Note that the version of MySQLdb is important: using 1.2.2b1 (I haven't
tested the latest 1.2.2b2) reduces those errors, but not completely. Also,
MySQLdb is not thread-safe, so using it in a multithreaded environment can be
painful... (but I'm sure that's transparent when using TG).
Since SA 0.2.4 (pool_recycle did not exist at that time), I have used a
specific pool, derived from the QueuePool class, which *always* checks that
the currently checked-out connection is valid, using connection.ping()
(MySQL-specific; I think not all DBs support this). I'm still using my pool
as of 0.3.1, to prevent those kinds of errors.
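A minimal sketch of that ping-on-checkout idea, using a stub connection class so it runs without MySQL or SQLAlchemy; the names (PingingPool, StubConnection) are hypothetical illustrations, not the actual pool subclass described here:

```python
class StubConnection:
    """Stand-in for a MySQLdb connection."""
    def __init__(self):
        self.alive = True

    def ping(self):
        # MySQL-specific liveness check; raises if the server dropped us
        if not self.alive:
            raise RuntimeError("MySQL server has gone away")


class PingingPool:
    """Hands out connections, discarding any that fail ping() on checkout."""
    def __init__(self, creator):
        self._creator = creator
        self._idle = []

    def checkout(self):
        while self._idle:
            conn = self._idle.pop()
            try:
                conn.ping()        # validate before handing it out
                return conn
            except RuntimeError:
                pass               # stale connection: drop it, try the next
        return self._creator()     # nothing usable pooled: open a fresh one

    def checkin(self, conn):
        self._idle.append(conn)


pool = PingingPool(StubConnection)
c1 = pool.checkout()
pool.checkin(c1)
c1.alive = False                   # simulate a server-side timeout
c2 = pool.checkout()               # the stale connection is silently replaced
```

The cost is one extra round-trip per checkout, which is why it helps under benchmark load where idle-timeout races show up.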
I didn't find out why this occurs: building a test case is very difficult. I've
also investigated the way the app accesses MySQL. IIRC, while using a socket,
if a connection has timed out, reaccessing the server
automatically "regenerates" the connection (well, actually, I've not observed
this behavior for all the mysql servers I use), without producing those
errors. I see lines like this in the MySQL logs when it occurs:
Connect slelong@localhost on dev
blablabla
Well, that's a lot of obscure observations... without a clean solution! But
maybe someone will have some ideas...
Cheers,
--
Sébastien LELONG
sebastien.lelong[at]sirloon.net
So, for non-threadsafe DBAPIs we use the SingletonThreadPool, like
this:
from sqlalchemy import pool
meta = BoundMetaData('mysql://scott:tiger@localhost/test',
    poolclass=pool.SingletonThreadPool)
It appears that I will have to make SingletonThreadPool the default
pool for the mysql module the same way it is for sqlite... although I'm
going to check MySQLdb now to see if thread-safety has been worked into
newer versions.
So I have checked in a fix in 2132 that does not prevent the error but
*does* keep it from being catastrophic: the mysql dialect
catches error codes 2006 and 2014 and invalidates the connection before
re-throwing the error, so that at least the pool doesn't get filled up
with busted connections.
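In rough outline, the invalidate-on-fatal-error behavior described above works like this sketch; the classes here are stubs standing in for the dialect and DBAPI, not the actual SQLAlchemy code:

```python
class OperationalError(Exception):
    """Stub for MySQLdb.OperationalError; args[0] carries the error code."""

class StubConnection:
    def __init__(self):
        self.invalidated = False

    def execute(self, sql):
        # simulate the server having dropped this connection
        raise OperationalError(2006, "MySQL server has gone away")

    def invalidate(self):
        # mark the connection so the pool discards it instead of reusing it
        self.invalidated = True

FATAL_CODES = (2006, 2014)  # "server has gone away", "commands out of sync"

def safe_execute(conn, sql):
    try:
        return conn.execute(sql)
    except OperationalError as e:
        if e.args[0] in FATAL_CODES:
            conn.invalidate()  # never return a busted connection to the pool
        raise                  # the caller still sees the original error

conn = StubConnection()
try:
    safe_execute(conn, "SELECT 1")
except OperationalError:
    pass
```

The request that hit the dead connection still fails, but the connection is thrown away rather than checked back in.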
As far as SingletonThreadPool goes, it's not a terrific default setting,
since for an application that keeps opening new threads (as some web
containers do), you're going to be opening new connections all the time.
I also think this issue is platform-specific to some degree, as
some colleagues on IRC are unable to reproduce the problem on their
linux machines.
I've been able to recreate the problem on (OSX 10.4.8, MySQL 4.1,
MySQLdb 1.2.1p2), (Fedora Core 6, MySQL 5.0, MySQLdb 1.2.2b2) and also
on FreeBSD (although I have no version numbers for that system as it
was someone on IM yesterday who tested for me). Not sure if that
helps at all.
In the meantime I'm going to create a patch for TG to use
SingletonThreadPool as lower performance is better than no
performance.
Thanks again for your quick responses Michael.
On 12/8/06, Michael Bayer <zzz...@gmail.com> wrote:
> So I have checked in a fix in 2132 that does not prevent the error but
> *does* keep it from being catastrophic: the mysql dialect
> catches error codes 2006 and 2014 and invalidates the connection before
> re-throwing the error, so that at least the pool doesn't get filled up
> with busted connections.
So in a webserver environment this makes just the request where the
error happens fail, instead of making all subsequent requests on the
same thread fail, correct? If so, half the battle is won in my case.
Arnar
Well, I've made the changes already to work with the current release
of SQLAlchemy and I'll revert them once the fix that you've added here
has been tested and added to the latest SA release.
I'll run some tests with SA's trunk and see if this fixes the problem
without using the SingletonThreadPool.
Thanks again for getting this fixed so quickly.
This works perfectly for me! *Many* thanks for this bugfix.
Cheers
Seb
This issue seems very similar to one I found using the MySQL Zope
database adapter. The symptom was the same ("MySQL server has gone away"),
and I think the underlying issue might be the same. Both Alchemy and the
Zope DA started having this problem only after upgrading to MySQL 5.
It seems that the Alchemy patch resolves the issue by closing the
connection, but the real problem might be that the database closed the
connection to the client, and the client was never made aware. The
Zope DA uses an isConnected method to determine the validity of the
client connection. The fix for it involved making this method a
little more robust using the MySQL connection's ping, which (usually)
automatically reconnects to the DB as necessary. If a
MySQLdb.OperationalError is thrown while pinging, it closes the
connection. Perhaps the issue is similar for Alchemy; it's just not
manifesting itself until the connection is being closed. (Perhaps it
is being closed *because* the connection is invalid?)
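A Zope DA-style check along those lines might look roughly like this; the connection and error classes are stubs standing in for MySQLdb, so this illustrates the pattern rather than the adapter's actual code:

```python
class OperationalError(Exception):
    """Stub for MySQLdb.OperationalError."""

class StubConnection:
    def __init__(self):
        self.open = True
        self.server_alive = True

    def ping(self):
        if not self.open:
            raise OperationalError("connection closed")
        if not self.server_alive:
            # real MySQLdb can transparently reconnect here when its
            # reconnect flag is set; this stub simply fails instead
            raise OperationalError(2006, "MySQL server has gone away")

    def close(self):
        self.open = False

def is_connected(conn):
    """Return True if conn is usable; close it if the ping fails."""
    try:
        conn.ping()
        return True
    except OperationalError:
        conn.close()
        return False

conn = StubConnection()
first = is_connected(conn)    # healthy: ping succeeds
conn.server_alive = False     # simulate the server timing the client out
second = is_connected(conn)   # ping fails; the connection gets closed
```

The key point is that liveness is checked *before* the connection is used, so a server-side disconnect surfaces as a replaceable dead connection rather than a failed query.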
I'm not sure if these issues are related, but they seem to be very
similar. Just thought this information might give more ideas/another
angle with which to view the problem.