ODBC Connection is busy error


BruceC

Apr 23, 2008, 11:43:07 PM
to sqlalchemy
Hi all,

Hoping someone can help me out here. We have a web app that has been
live for a while, using Pylons, SA & PYODBC. Our DB server is SQL
Server 2005, so we're using mssql.py for our connection there. It all
runs on Apache 2.2.6...

Since our system went live we have been getting more & more errors
like this: "DBAPIError: (Error) ('HY000', '[HY000] [Microsoft][SQL
Native Client]Connection is busy with results for another command
(0)') u'SELECT" ...snip valid SQL string...endsnip

In each case, the sql is fine, & works quite happily when run directly
on the db. I know that SQL Server 2005 has a new feature called MARS
(multiple active result sets). We are using a MARS enabled driver (the
most recent 2005 native client ODBC driver), and so I am a bit
confused as to why this is happening.

If anyone can help me out here, that would be appreciated :)

Paul Johnston

Apr 24, 2008, 1:04:32 PM
to sqlal...@googlegroups.com
Hi,

>Since our system went live we have been getting more & more errors
>like this: "DBAPIError: (Error) ('HY000', '[HY000] [Microsoft][SQL
>Native Client]Connection is busy with results for another command
>(0)') u'SELECT" ...snip valid SQL string...endsnip

I've seen this error too; in fact, some of the unit tests trigger it. The
cause is that a single ODBC connection can only have one query active at
a time, even if you have multiple cursors. I believe this is different
from most other DBAPI drivers.

I have no idea how to fix this in SQLAlchemy; I have thought about it a
bit without success. A workaround is to recode your app so that it fetches
results right after each query.
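
For illustration, a minimal sketch of that workaround (hypothetical table
names and DSN, plain engine-level execution assumed):

from sqlalchemy import create_engine

engine = create_engine("mssql://user:pass@mydsn")   # hypothetical DSN

result = engine.execute("SELECT id, name FROM users")
rows = result.fetchall()    # drain the result set right away...
result.close()              # ...so the connection is free for the next query

result = engine.execute("SELECT id FROM orders")
orders = result.fetchall()
result.close()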

Paul

Rick Morrison

Apr 24, 2008, 1:20:44 PM
to sqlal...@googlegroups.com
....perhaps we could simply reset the pyodbc cursor before issuing a new SQL operation?


class MSSQLExecutionContext(default.DefaultExecutionContext):
    def pre_exec(self):
        if self.dialect.clear_previous_results:
            self.cursor.clear_previous_results_somehow_idunnohow()
        ....


Michael Bayer

Apr 24, 2008, 2:15:08 PM
to sqlal...@googlegroups.com

We used to have a feature on the Pool called "auto_close_cursors" -
this would track all open cursors at the pool level and, when a
connection was checked back in, close them all. In reality, this
feature was in most cases just hiding open cursors that should have
been closed at higher levels, and it added a lot of latency at a crucial
point in execution. So in modern SQLAlchemy, cursors are closed
automatically when:

1. the statement is a non-row-returning statement like INSERT, UPDATE, etc.
2. all result rows are exhausted from a result. (When the ORM is used,
result rows are always fully exhausted unless the "yield_per" Query
option is used.)

So if you are using straight connection or engine result sets and are
not explicitly exhausting all rows, call result.close() to explicitly
release the cursor. That will solve the problem.
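
A short sketch of that advice (hypothetical engine and table); a partially
fetched result is exactly the case that needs the explicit close():

from sqlalchemy import create_engine

engine = create_engine("mssql://user:pass@mydsn")   # hypothetical DSN
conn = engine.connect()

result = conn.execute("SELECT id, name FROM users")
first_row = result.fetchone()   # rows are still pending on the cursor here...
result.close()                  # ...so release the cursor explicitly

conn.close()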

BruceC

Apr 24, 2008, 11:54:50 PM
to sqlalchemy
Thank you to everybody for your comments on this problem...

Michael, re: your suggestion about result.close(), is this something
that I could add to mssql.py, or do you think it's something that I
would need to add throughout my application everytime I access the db?
(It's a big application...)

Cheers!

Michael Bayer

Apr 25, 2008, 12:21:42 AM
to sqlal...@googlegroups.com
Only if you're leaving dangling ResultProxys open with pending
results, which should be a fairly rare occurrence in a normal
application. If you simply call fetchall() on results instead of
fetchone(), that alone would eliminate any hanging cursors.

Esceo

Apr 27, 2008, 2:55:45 AM
to sqlalchemy
Just a side note, is the pool code thread safe?
i.e., no two threads could possibly check out the same connection at any
point in time?

BruceC

Apr 27, 2008, 8:10:35 AM
to sqlalchemy
Hi Michael,

Thanks for your persistence :)

I've searched through my app, & we don't seem to have any calls to
fetchone(), so I'm not sure what else to look for, but I'll try to do
some debugging with ResultProxys, & see if that leads to any possible
answers.

On a side note, in our Pylons development.ini, we have these settings:
sqlalchemy.default.max_overflow = -1
sqlalchemy.default.pool_size = 32

Is there anything there that could be having an effect on this?

Michael Bayer

Apr 27, 2008, 7:13:48 PM
to sqlal...@googlegroups.com

On Apr 27, 2008, at 8:10 AM, BruceC wrote:

>
> Hi Michael,
>
> Thanks for your persistence :)
>
> I've searched through my app, & we don't seem to have any calls to
> fetchone(), so I'm not sure what else to look for, but I'll try to do
> some debugging with ResultProxys, & see if that leads to any possible
> answers.
>
> On a side note, in our Pylons development.ini, we have these settings:
> sqlalchemy.default.max_overflow = -1
> sqlalchemy.default.pool_size = 32
>
> Is there anything there that could be having an effect on this?

Well, try sending along a stack trace when you catch it
happening... what conditions seem to correlate with its occurrence?
Does it appear to be related to a high degree of concurrent
activity? A max_overflow of -1 means the pool has no limit on how many
connections it can open in response to requests.
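
Those two Pylons settings correspond roughly to these create_engine()
pool arguments (DSN hypothetical):

from sqlalchemy import create_engine

engine = create_engine(
    "mssql://user:pass@mydsn",   # hypothetical DSN
    pool_size=32,                # keep up to 32 connections pooled
    max_overflow=-1,             # -1 = no cap on extra connections under load
)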

Michael Bayer

Apr 27, 2008, 7:22:19 PM
to sqlal...@googlegroups.com

On Apr 27, 2008, at 2:55 AM, Esceo wrote:

>
> Just a side note, is the pool code thread safe?
> i.e. no two threads would possibly checkout the same connection at any
> point in time?

We're pretty confident it's completely threadsafe as of 0.3.11; we have
several different kinds of tests for its thread-safety now, and we
haven't heard of anyone having these kinds of issues on the "thread-
sensitive" DBAPIs like MySQLdb in a long time.

My suspicion here is that the connection is being returned to
the pool and properly being checked out elsewhere (even in the same
thread), but there's a cursor still open somewhere. As I've
mentioned, we removed the "check for any remaining open cursors" logic
a while back since it's not needed in most cases: the ResultProxy
automatically closes cursors in all cases except when SELECT results
are pending (and in that case the end user should be closing the
result set or fetching all results). Even then, the cursor still
gets closed when the ResultProxy is garbage collected, but garbage
collection is not always immediate, even with an object like the
ResultProxy, which intentionally has no circular references for this reason.

What I'm not sure of at this point is whether there's some cursor usage
specific to the MS-SQL dialect that might be external to the
ResultProxy... if Rick could comb through that for me, that would be
helpful.

Rick Morrison

Apr 27, 2008, 8:55:18 PM
to sqlal...@googlegroups.com
> What I'm not sure of at this point is whether there's some cursor usage
> specific to the MS-SQL dialect that might be external to the
> ResultProxy... if Rick could comb through that for me, that would be
> helpful.

The cursor is used in pre_exec() if an INSERT statement tries to set a literal PK on a sequence-like column (called IDENTITY in MSSQL). MSSQL needs to have a special mode turned on to do those.

It's used in do_exec() to turn that "literal insert" mode back off.

Finally, it's also used in post_exec() after INSERT statements on tables with IDENTITY columns, to fetch the newly inserted PK value.


That's about it for cursor usage -- nothing on SELECT, only inserts.
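
A rough sketch of the T-SQL involved, issued directly through pyodbc
(table name and DSN hypothetical; illustrative only, not the dialect's
actual code):

import pyodbc

conn = pyodbc.connect("DSN=mydsn;UID=user;PWD=secret")   # hypothetical DSN
cursor = conn.cursor()

# pre_exec(): allow a literal value in the IDENTITY column
cursor.execute("SET IDENTITY_INSERT mytable ON")
cursor.execute("INSERT INTO mytable (id, name) VALUES (?, ?)", 5, "explicit pk")

# do_exec(): turn the "literal insert" mode back off
cursor.execute("SET IDENTITY_INSERT mytable OFF")

# post_exec(): fetch the newly generated key after an ordinary IDENTITY insert
# (@@IDENTITY is session-scoped, so it survives the separate batch)
cursor.execute("INSERT INTO mytable (name) VALUES (?)", "generated pk")
cursor.execute("SELECT @@IDENTITY")
new_pk = cursor.fetchone()[0]

conn.commit()
cursor.close()
conn.close()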

Michael Bayer

Apr 27, 2008, 9:06:50 PM
to sqlal...@googlegroups.com

Yeah, I see that it's used. But no new cursor is opened, so this
doesn't seem like it could be involved.

It is possible we could re-introduce "check for open cursors" as a
pool events extension. It would raise an error if any connection is
returned with associated cursors still open, and could track down
issues like these.

Rick Morrison

Apr 27, 2008, 9:27:03 PM
to sqlal...@googlegroups.com

> It is possible we could re-introduce "check for open cursors" as a
> pool events extension. It would raise an error if any connection is
> returned with associated cursors still open, and could track down
> issues like these.


That would be a great diagnostic tool for this: It's hard to track down the issue now, as the problem doesn't show itself until the cursor is re-used on some later, unrelated query.



BruceC

Apr 27, 2008, 9:58:33 PM
to sqlalchemy
Our Pylons setup emails every traceback error to our developers, &
with about 80 users, we've been getting a couple of hundred error
emails a day, almost all with this same error (Tomorrow we will have
about 1200 users on our system). The following is an excerpt of the
traceback message. Every error has at least these final lines of the
message in common:

Module sqlalchemy.orm.attributes:44 in __get__
    return self.impl.get(instance._state)

Module sqlalchemy.orm.attributes:279 in get
    value = callable_()

Module sqlalchemy.orm.strategies:483 in __call__
    result = q.all()

Module sqlalchemy.orm.query:860 in all
    return list(self)

Module sqlalchemy.orm.query:921 in __iter__
    return self._execute_and_instances(context)

Module sqlalchemy.orm.query:924 in _execute_and_instances
    result = self.session.execute(querycontext.statement, params=self._params,
        mapper=self.mapper, instance=self._refresh_instance)

Module sqlalchemy.orm.session:605 in execute
    return self.__connection(engine, close_with_result=True).execute(clause, params or {})

Module sqlalchemy.engine.base:846 in execute
    return Connection.executors[c](self, object, multiparams, params)

Module sqlalchemy.engine.base:897 in execute_clauseelement
    return self._execute_compiled(elem.compile(dialect=self.dialect,
        column_keys=keys, inline=len(params) > 1), distilled_params=params)

Module sqlalchemy.engine.base:909 in _execute_compiled
    self.__execute_raw(context)

Module sqlalchemy.engine.base:918 in __execute_raw
    self._cursor_execute(context.cursor, context.statement,
        context.parameters[0], context=context)

Module sqlalchemy.engine.base:962 in _cursor_execute
    self._handle_dbapi_exception(e, statement, parameters, cursor)

Module sqlalchemy.engine.base:944 in _handle_dbapi_exception
    raise exceptions.DBAPIError.instance(statement, parameters, e,
        connection_invalidated=is_disconnect)

DBAPIError: (Error) ('HY000', '[HY000] [Microsoft][SQL Native
Client]Connection is busy with results for another command (0)')
u'SELECT ...snip...'

I don't know whether this helps, but many thanks for looking at the
issue :)

Michael Bayer

Apr 28, 2008, 12:02:54 AM
to sqlal...@googlegroups.com

On Apr 27, 2008, at 9:58 PM, BruceC wrote:

>
> I don't know whether this helps, but many thanks for looking at the
> issue :)

Unfortunately it doesn't say much at all. You're not sharing a Session
between multiple threads, right?

Michael Bayer

Apr 28, 2008, 12:04:23 AM
to sqlal...@googlegroups.com

and by "sharing", I mean the Session, as well as any objects that are
attached to it, since that stack trace illustrated specifically an
attribute lazy-loading its contents from the database.

BruceC

Apr 28, 2008, 4:04:05 AM
to sqlalchemy
We're not intentionally sharing a session between multiple
threads... :) I'll check out the code to see if there is any sharing
going on, but I'm pretty sure that's not the case.

You mention lazy-loading as if it might be playing a part in this
issue - is that right? We are using lazy & eager loading in various
parts of the app...

Michael Bayer

Apr 28, 2008, 9:58:42 AM
to sqlal...@googlegroups.com

On Apr 28, 2008, at 4:04 AM, BruceC wrote:

>
> We're not intentionally sharing a session between multiple
> threads... :) I'll check out the code to see if there is any sharing
> going on, but I'm pretty sure that's not the case.
>
> You mention lazy-loading as if it might be playing a part in this
> issue - is that right? We are using lazy & eager loading in various
> parts of the app...

The lazy loading can present an issue only to the degree that the
object still maintains a link to its originating Session. So if the
lazy loader fires off in a thread other than the one where the Session
normally resides, you can have concurrent access to it, which, in the
case of a Session already within a transaction, could create
concurrent access on a single connection.

Basically, if you don't share anything between threads, in theory this
kind of issue should not occur.
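
One common way to keep Sessions strictly per-thread (a sketch with a
hypothetical engine URL, not code from this app): scoped_session hands
each thread its own Session, so lazy loads always run on that thread's
own connection.

from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

engine = create_engine("mssql://user:pass@mydsn")   # hypothetical DSN
Session = scoped_session(sessionmaker(bind=engine))

def handle_request():
    session = Session()      # this thread's own Session
    try:
        pass                 # queries and lazy loads happen here
    finally:
        Session.remove()     # discard the thread-local Session when done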
