URGENT: 2014, "Commands out of sync; you can't run this command now"


Arun Kumar PG

Jul 13, 2007, 1:47:33 AM
to sqlal...@googlegroups.com
Hi Guys,

I am getting this error when multiple requests hit the application, and my server crashes as a result!

FYI:
I am using the MySQLdb library and connections are managed by sqlalchemy.pool.

ProgrammingError: (2014, "Commands out of sync; you can't run this command now")

I know it's a MySQL error, but has anyone encountered this while using SQLAlchemy? Quick help would be appreciated.

--
Cheers,

- A

Michael Bayer

Jul 13, 2007, 2:12:56 AM
to sqlal...@googlegroups.com


dont share connections or sessions between threads.

Arun Kumar PG

Jul 13, 2007, 2:27:48 AM
to sqlal...@googlegroups.com
I create an ORM session as soon as the request comes in and store a reference to it in the current thread, i.e. threading.currentThread().session = new_session. The session is stored in the current thread so that I can get the same session across all DAO objects; basically, every DAO in the request chain can simply get the session via threading.currentThread().session and use it. Finally, once the request is over, the session object is removed from the current thread, i.e. del <session>.
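A rough sketch of the per-thread session storage described above, in plain Python. Here new_session is a stand-in factory, not SQLAlchemy's actual API, and the helper names are illustrative:

```python
import threading

def new_session():
    # stand-in for creating a real SQLAlchemy session; any fresh
    # per-request object illustrates the pattern
    return object()

def attach_session():
    # on request entry: hang a fresh session off the handling thread
    # (the post spells this threading.currentThread(); current_thread()
    # is the modern name for the same call)
    threading.current_thread().session = new_session()

def get_session():
    # what a DAO's GetSession() would do: read the session back off
    # the current thread
    return threading.current_thread().session

def detach_session():
    # on request exit: remove the session from the thread again
    del threading.current_thread().session
```

Because the attribute lives on the Thread object itself, every DAO running on that thread sees the same session, and two threads never see each other's.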

I can see that during multiple requests the thread ids are different, so I believe each of them has its own copy of the session. Further, I am using the pool module of sqlalchemy, from which a connection is returned to the ORM session.

My guess is that somewhere in that connection management things are getting messed up -

any thoughts? Possible troubleshooting areas? I really need to fix this ASAP, as this is a critical application.

thx for the help!
 
Cheers,

- A

Michael Bayer

Jul 13, 2007, 3:47:00 AM
to sqlal...@googlegroups.com

On Jul 13, 2007, at 2:27 AM, Arun Kumar PG wrote:

> I create an ORM session as soon as the request comes in and store
> a reference to it in the current thread, i.e.
> threading.currentThread().session = new_session. The session is
> stored in the current thread so that I can get the same session
> across all DAO objects; basically, every DAO in the request chain
> can simply get the session via threading.currentThread().session
> and use it. Finally, once the request is over, the session object
> is removed from the current thread, i.e. del <session>.
>
> I can see that during multiple requests the thread ids are
> different, so I believe each of them has its own copy of the
> session. Further, I am using the pool module of sqlalchemy, from
> which a connection is returned to the ORM session.
>
> My guess is that somewhere in that connection management things
> are getting messed up -

there was a bug like this at one time in the pool, but it was fixed
probably a year ago, and nobody has reported this issue since. are
the objects which you load from the session being shared between
threads? i.e., does a second thread issue a lazy-load operation on
an object's attribute? that counts as multi-threaded session access.


Arun Kumar PG

Jul 13, 2007, 4:42:06 AM
to sqlal...@googlegroups.com
Thx Michael.

Well, I don't think I am doing that. To give you a picture of the object model, this is the hierarchy:

                                        BaseOrmDao (+GetSession(): returns the
                                         session attached to the current thread)
                                                                  ^
                                                                  |
Request ->   PreProcessor  <-> Controller <-> Manager <-> DaoFactory <-> DAOs
                  |
                  v
         (orm.session attached
          to the thread here)

To clarify, the DaoFactory always returns a new DAO object to the Manager. Also, a new instance of Controller and Manager is created per request.

To answer your question: there is no point in the flow where two threads share the same object. (Not sure if SQLAlchemy does so internally when mapped objects are used in the DAO layer?)

Any thoughts?

On 7/13/07, Michael Bayer <mik...@zzzcomputing.com> wrote:


On Jul 13, 2007, at 2:27 AM, Arun Kumar PG wrote:

> I create an ORM session as soon as the request comes in and store
> a reference to it in the current thread, i.e.
> threading.currentThread().session = new_session. The session is
> stored in the current thread so that I can get the same session
> across all DAO objects; basically, every DAO in the request chain
> can simply get the session via threading.currentThread().session
> and use it. Finally, once the request is over, the session object
> is removed from the current thread, i.e. del <session>.
>
> I can see that during multiple requests the thread ids are
> different, so I believe each of them has its own copy of the
> session. Further, I am using the pool module of sqlalchemy, from
> which a connection is returned to the ORM session.
>
> My guess is that somewhere in that connection management things
> are getting messed up -

there was a bug like this at one time in the pool, but it was fixed
probably a year ago, and nobody has reported this issue since. are
the objects which you load from the session being shared between
threads? i.e., does a second thread issue a lazy-load operation on
an object's attribute? that counts as multi-threaded session access.

--
Cheers,

- A

Michael Bayer

Jul 13, 2007, 10:07:29 AM
to sqlal...@googlegroups.com

On Jul 13, 2007, at 4:42 AM, Arun Kumar PG wrote:

>
> Any thoughts?
>

my thoughts are: either the bug still remains in SA's connection pool
and everyone is magically avoiding the problem somehow (when it was
fixed, people who observed the problem all the time noted that it
went away completely, and we also have some "threading" tests to
double check it), or your site is receiving tremendously huge impact
so that a rarer condition comes out more frequently, or there's
something subtle in your app making this happen, or you need to
ensure that you're on SA 0.3.8 and the latest version of MySQLdb.

so far I can only look into the first point above and will confer
with some developers today on this.

Michael Bayer

Jul 13, 2007, 10:17:40 AM
to sqlal...@googlegroups.com
also send along a full stack trace so at least it can be seen where
this is occurring.

Arun Kumar PG

Jul 13, 2007, 10:24:33 AM
to sqlal...@googlegroups.com
Sure Michael, I will get back on this in a while as I am researching it. Thanks for your support. I hope this gets resolved soon, as I am very much dependent on this and the application is critical and should be up in the next couple of days!

Thanks, and I will get back on this in the next couple of hours.

On 7/13/07, Michael Bayer <mik...@zzzcomputing.com > wrote:

also send along a full stack trace so at least it can be seen where
this is occurring.

--
Cheers,

- A

Arun Kumar PG

Jul 15, 2007, 11:24:59 PM
to sqlal...@googlegroups.com
Hi Michael,

I figured out the problem. It was a connection sharing issue. It looks like different connection objects were getting returned from the pool (which was created using the creator approach in create_engine()) when relations were being loaded as part of processing. Due to this, connection swapping was sometimes happening among the different request threads.

To resolve this, I created a threadsafe QueuePool and passed a class wrapping it while creating the engine. This ensures the same connection is returned for the same thread.

Programming error!

Hopefully I have tested everything and this does not crop up again :)

Thanks for the support!

- A
--
Cheers,

- A

Michael Bayer

Jul 16, 2007, 9:46:16 AM
to sqlal...@googlegroups.com

On Jul 15, 2007, at 11:24 PM, Arun Kumar PG wrote:

> Hi Michael,
>
> I figured out the problem. It was a connection sharing issue. It
> looks like different connection objects were getting returned from
> the pool (which was created using the creator approach in
> create_engine()) when relations were being loaded as part of
> processing. Due to this, connection swapping was sometimes
> happening among the different request threads.

do you mean multiple create_engine() calls were occurring? or are
you talking about the issue I mentioned earlier, that lazy loaders
were firing off against a session in a different thread? does that
mean your mapped objects *are* in fact being used in threads other
than where they were created?

>
> To resolve this, I created a threadsafe QueuePool and passed a
> class wrapping it while creating the engine. This ensures the same
> connection is returned for the same thread.

can you please describe specifically what you mean here? QueuePool,
I would hope, is threadsafe already. Or do you just mean you passed
the "threadlocal" flag to QueuePool? that doesn't seem like it would
fix the session-related problem, since that issue occurs when the
session holds onto a single connection while flushing.

i just need to understand what you did, since if there's any way i
can defensively prevent or at least document the situation, it's
pretty important.


Arun Kumar PG

Jul 19, 2007, 2:08:57 PM
to sqlal...@googlegroups.com
Apologies for not responding for a while; I was stuck in the project.

Ok. So this is what is happening. The mapped objects are created during the first request to the application. So create_engine() is called only once, passing in the creator as lambda: db_obj, where db_obj is a reference to a method returning a MySQLdb connection.

Starting over again: a request comes in, is handled by a thread in the threadpool, which checks whether the mapped objects are already created. If yes it returns; else it creates the mapped objects (create_engine()... as mentioned above), and the thread is returned to the pool. (FYI: this is an HTTP server with a threadpool for request handling.)

Subsequent requests do not create mapped objects or the engine. They simply use the existing mapped objects and perform ORM actions.
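The creator arrangement just described can be sketched generically: a pool built around a creator callable that manufactures connections, checked out and in per request. This is a simplified stand-in for sqlalchemy.pool's behavior, not its real implementation; make_connection stands in for the method returning a MySQLdb connection:

```python
import queue

class SimplePool:
    """Minimal checkout/checkin pool around a creator callable."""
    def __init__(self, creator, size=2):
        self._available = queue.Queue()
        for _ in range(size):
            self._available.put(creator())

    def connect(self):
        # hand out whichever connection is next in line; note there is
        # no thread affinity -- any thread may receive any connection
        return self._available.get()

    def checkin(self, conn):
        self._available.put(conn)

_counter = [0]
def make_connection():
    # stand-in for the creator (the real one returns a MySQLdb connection)
    _counter[0] += 1
    return ('conn', _counter[0])

pool = SimplePool(make_connection, size=2)
```

The lack of thread affinity in this plain checkout scheme is exactly what the rest of the thread turns on: any connection can end up in any request-handling thread.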

The problem was coming when lazy loading happens during multiple requests. I guess the underlying connection pool (when using the creator approach) is not using the threadlocal approach, as different connections are checked in/out when I look at the pool log, and exchanged as well among the different request handling threads.

Can that be the problem ?

Also, does the underlying connection pool use the threadlocal strategy when using the creator approach while creating the engine? I don't know if the strategy flag is for that?

However, when I passed in a pool.QueuePool instance with use_threadlocal=True, everything worked just fine.
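What use_threadlocal=True changes, roughly: the pool remembers which connection a given thread has already checked out and hands the same one back on repeat checkouts. Below is a behavioral sketch in plain Python, not QueuePool's actual code:

```python
import threading

class ThreadAffinePool:
    """Each thread keeps getting the connection it already holds,
    mimicking the behavior of QueuePool(use_threadlocal=True)."""
    def __init__(self, creator):
        self._creator = creator
        self._local = threading.local()

    def connect(self):
        conn = getattr(self._local, 'conn', None)
        if conn is None:
            conn = self._creator()   # first checkout in this thread
            self._local.conn = conn
        return conn                  # repeat checkouts reuse it

    def release(self):
        self._local.conn = None
```

Note this only guarantees affinity within a thread; it does nothing to stop code from handing an already checked-out connection to another thread, which is the point Michael makes below.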

Any thoughts where the problem could be ?
--
Cheers,

- A

Michael Bayer

Jul 19, 2007, 2:49:17 PM
to sqlal...@googlegroups.com

On Jul 19, 2007, at 2:08 PM, Arun Kumar PG wrote:

> Apologies for not responding for a while; I was stuck in the project.
>
> Ok. So this is what is happening. The mapped objects are created
> during the first request to the application. So create_engine() is
> called only once, passing in the creator as lambda: db_obj, where
> db_obj is a reference to a method returning a MySQLdb connection.
>
> Starting over again: a request comes in, is handled by a thread in
> the threadpool, which checks whether the mapped objects are already
> created. If yes it returns; else it creates the mapped objects
> (create_engine()... as mentioned above), and the thread is returned
> to the pool. (FYI: this is an HTTP server with a threadpool for
> request handling.)
>
> Subsequent requests do not create mapped objects or the engine.
> They simply use the existing mapped objects and perform ORM actions.
>
> The problem was coming when lazy loading happens during multiple
> requests. I guess the underlying connection pool (when using the
> creator approach) is not using the threadlocal approach, as
> different connections are checked in/out when I look at the pool
> log, and exchanged as well among the different request handling
> threads.
>

OK, this is exactly the issue; you're caching mapped objects, which
have unfired lazy loaders, and then sharing those mapped objects
among threads. The lazy loader needs to consult a session in order
to load its contents, since that's where the ORM locates information
about how to get a connection (for example, if your sessions are
bound to engines, and not your tables, this would be essential). The
session, when it's inside of a SessionTransaction as well as within a
flush() process, holds onto a single pooled connection to do its
work. If another thread accesses the session during this time, you'll
get a conflict.
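The cross-thread access described above can be made concrete with a guard that remembers which thread created the session. The class below is purely illustrative, not SQLAlchemy API; a lazy load fired from another thread performs exactly this kind of hidden cross-thread call:

```python
import threading

class GuardedSession:
    """Raises when touched from any thread other than its creator --
    the kind of access a cross-thread lazy load performs implicitly."""
    def __init__(self):
        self._owner = threading.get_ident()

    def execute(self, statement):
        # a real session would run the statement on its pooled
        # connection; here we only enforce the ownership rule
        if threading.get_ident() != self._owner:
            raise RuntimeError("multi-threaded session access")
        return "result for: " + statement
```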

> Also, does the underlying connection pool use the threadlocal
> strategy when using the creator approach while creating the
> engine? I don't know if the strategy flag is for that?
>
> However, when I passed in a pool.QueuePool instance with
> use_threadlocal=True, everything worked just fine.

when you do that, the QueuePool will return the same connection for a
particular thread which was already in use. this is part of what
happens when you use create_engine('...', strategy='threadlocal').
However it doesn't have any ability to stop you from sharing one of
those checked-out connections with another thread. It shouldn't
change anything here, actually; the session still checks out a
connection, and holds onto it during a transaction or flush(), and
that's still the same connection it will hand out to any other thread
during that time.

Arun Kumar PG

Jul 19, 2007, 4:39:40 PM
to sqlal...@googlegroups.com

Will this be a problem even if I attach a new session per incoming request, i.e. per thread handling a request? So basically it's because of having the same copy of mapped objects? How can I solve the problem in the existing setup without using a QueuePool? By creating mapped objects per request?

> Also, does the underlying connection pool use the threadlocal
> strategy when using the creator approach while creating the
> engine? I don't know if the strategy flag is for that?
>
> However, when I passed in a pool.QueuePool instance with
> use_threadlocal=True, everything worked just fine.

when you do that, the QueuePool will return the same connection for a
particular thread which was already in use. this is part of what
happens when you use create_engine('...', strategy='threadlocal').
However it doesn't have any ability to stop you from sharing one of
those checked-out connections with another thread. It shouldn't
change anything here, actually; the session still checks out a
connection, and holds onto it during a transaction or flush(), and
that's still the same connection it will hand out to any other thread
during that time.

--
Cheers,

- A

Michael Bayer

Jul 19, 2007, 5:02:25 PM
to sqlal...@googlegroups.com

On Jul 19, 2007, at 4:39 PM, Arun Kumar PG wrote:

>
> Will this be a problem even if I attach a new session per incoming
> request, i.e. per thread handling a request? So basically it's
> because of having the same copy of mapped objects? How can I solve
> the problem in the existing setup without using a QueuePool? By
> creating mapped objects per request?

the objects that were loaded within a particular session stay there
until you remove them. therefore, whatever session you are using to
load the objects, you should dispose of it before putting the objects
into a thread-global scope (you can call clear() on it to empty it
out). Also, you probably want to load all of their related items
either explicitly or through eager loading - since when the objects
are detached, the lazy loaders will raise errors when called.

Or, you can create your mapped objects per request, yes, or perhaps
per thread.
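The detachment hazard described above, sketched with a stand-in lazy attribute (not SQLAlchemy's real instrumentation): fire the loads while the session is still live, because once the object is detached an unfired lazy loader has nothing to consult.

```python
class LazyChildren:
    """Descriptor standing in for an unfired lazy-loaded relation."""
    def __get__(self, obj, objtype=None):
        if obj.session is None:
            raise RuntimeError("detached; lazy load cannot proceed")
        return obj.session.load_children(obj)

class FakeSession:
    """Stand-in session that can still service loads."""
    def load_children(self, obj):
        return ['child1', 'child2']

class Parent:
    children = LazyChildren()
    def __init__(self, session):
        self.session = session

p = Parent(FakeSession())
loaded = p.children   # load the relation while still attached
p.session = None      # clearing away the session detaches the object
```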


Arun Kumar PG

Jul 19, 2007, 5:04:27 PM
to sqlal...@googlegroups.com
> Or, you can create your mapped objects per request, yes, or perhaps
> per thread.

how much can this cost in terms of performance?

Cheers,

- A

Arun Kumar PG

Jul 19, 2007, 5:06:33 PM
to sqlal...@googlegroups.com
or maybe just keep using the QueuePool approach, as it will always make sure to return the same connection to the current thread?


--
Cheers,

- A

Michael Bayer

Jul 19, 2007, 5:58:33 PM
to sqlal...@googlegroups.com

On Jul 19, 2007, at 5:06 PM, Arun Kumar PG wrote:

> or maybe just keep using the QueuePool approach, as it will always
> make sure to return the same connection to the current thread?
>

like i said, i don't see how that helps any. a single Session that's
in flush() holds onto a single connection and returns it regardless
of what thread accesses it. the threadlocal pool setting doesn't have
any effect on threadsafety.


Arun Kumar PG

Jul 19, 2007, 6:06:35 PM
to sqlal...@googlegroups.com
That is what I am trying to figure out. It works perfectly when I do this.

--
Cheers,

- A

Michael Bayer

Jul 19, 2007, 10:28:28 PM
to sqlalchemy
perhaps the nature of the conflict is different, then. are you able
to observe what stack traces, or at least approximately what
operations, are taking place when the conflict occurs? does the
conflict occur frequently and easily with just a little bit of
concurrency, or is it something that only happens under very high
load conditions?


Arun Kumar PG

Jul 19, 2007, 11:16:30 PM
to sqlal...@googlegroups.com
The stack trace points to pool.py (I will get the exact trace, as I am away from my box currently).


>> does the conflict occur frequently and easily with just a little
>> bit of concurrency, or is it something that only happens under very
>> high load conditions?

this is happening primarily in a use case wherein the logic does some processing (this includes accessing many relations - I believe many lazy loaders fire here). since this use case generates some CSV data, it takes about 6-7 seconds depending on the data set, so when other requests come in while one is in progress we encounter the 2014 error.

however, as mentioned earlier, when I use the threadlocal queue pool it just vanishes, and no matter how many requests I send after that, it works fine.
--
Cheers,

- A

Michael Bayer

Jul 19, 2007, 11:35:01 PM
to sqlal...@googlegroups.com
and how is your session connected to the database? are you using create_session(bind_to=<something>)? or are you binding your MetaData to the engine? are you using BoundMetaData?

Michael Bayer

Jul 19, 2007, 11:39:12 PM
to sqlal...@googlegroups.com
"passing in the creator as a lambda: db_obj where db_obj is the ref to method returning MySQldb connection."

...can we see the exact code for this please ?


Arun Kumar PG

Jul 19, 2007, 11:41:37 PM
to sqlal...@googlegroups.com
BoundMetaData is what I am using.

--
Cheers,

- A