QueuePool limit size reached, using expression API only


Mitchell Hashimoto

Apr 9, 2012, 2:43:44 PM
to sqlal...@googlegroups.com
Hi,

I am continually getting this sort of error after some amount of time: QueuePool limit of size 30 overflow 10 reached, connection timed out, timeout 30

We're using only the SQLAlchemy Core expressions API, so we're not wrapping anything in sessions, so I'm not sure how this is happening.

Any pointers?

Thanks,
Mitchell

Michael Bayer

Apr 9, 2012, 3:03:29 PM
to sqlal...@googlegroups.com
close your connections after you are finished with them.


--
You received this message because you are subscribed to the Google Groups "sqlalchemy" group.
To view this discussion on the web visit https://groups.google.com/d/msg/sqlalchemy/-/QlU8RHFQOwQJ.
To post to this group, send email to sqlal...@googlegroups.com.
To unsubscribe from this group, send email to sqlalchemy+...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en.

Claudio Freire

Apr 9, 2012, 3:25:11 PM
to sqlal...@googlegroups.com
On Mon, Apr 9, 2012 at 4:03 PM, Michael Bayer <mik...@zzzcomputing.com> wrote:
> close your connections after you are finished with them.

They should be automatically returned to the pool when unreferenced.

The OP may be storing stray references somewhere, or attaching them
somehow to a reference cycle that takes time to be freed.

In any case, explicit closing may not be the greatest idea (that
connection won't go back to the pool, I think; not sure, please
confirm, SA gurus); rather, they should be dereferenced thoroughly.

Michael Bayer

Apr 9, 2012, 3:36:11 PM
to sqlal...@googlegroups.com

Code that deals with Connection explicitly should definitely have an explicit plan in place to close them (where "close" on Connection will return the DBAPI connection to the pool). Relying on dereferencing is not very clean and also doesn't work deterministically in environments like PyPy and Jython that don't use reference counting.

Using context managers, i.e. "with engine.connect() as conn", is the most straightforward.
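A minimal runnable sketch of that pattern (modern SQLAlchemy syntax with an in-memory SQLite engine as a stand-in for a real database; the 0.7-era API in this thread differs slightly):

```python
from sqlalchemy import create_engine, text

# In-memory SQLite stands in for a real database here.
engine = create_engine("sqlite://")

# The connection is returned to the pool when the block exits,
# even if an exception is raised inside it.
with engine.connect() as conn:
    value = conn.execute(text("SELECT 1")).scalar()
```

Because the checkout lives only as long as the `with` block, the pool can never leak connections through forgotten references.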


Claudio Freire

Apr 9, 2012, 5:39:09 PM
to sqlal...@googlegroups.com
On Mon, Apr 9, 2012 at 4:36 PM, Michael Bayer <mik...@zzzcomputing.com> wrote:
> Using context managers, i.e. "with engine.connect() as conn", is the most straightforward.

IIRC, context managers are new in SA, aren't they?

Michael Bayer

Apr 9, 2012, 5:49:16 PM
to sqlal...@googlegroups.com
there are some newer ones, including that one; we've had a few for some years


Michael Bayer

Apr 9, 2012, 5:50:33 PM
to sqlal...@googlegroups.com
a better one to use from the engine is begin() (also new in 0.7.6):

with engine.begin() as conn:
    ...

that way everything you do with conn is on the same transaction.
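Sketched out as a runnable example (modern syntax, in-memory SQLite for illustration; the table `t` is hypothetical):

```python
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")

# engine.begin() checks out a connection, opens a transaction,
# commits on success (or rolls back on error), and returns the
# connection to the pool when the block exits.
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE t (x INTEGER)"))
    conn.execute(text("INSERT INTO t (x) VALUES (1)"))

# A fresh checkout sees the committed row.
with engine.connect() as conn:
    count = conn.execute(text("SELECT COUNT(*) FROM t")).scalar()
```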


Claudio Freire

Apr 9, 2012, 6:01:48 PM
to sqlal...@googlegroups.com
On Mon, Apr 9, 2012 at 6:50 PM, Michael Bayer <mik...@zzzcomputing.com> wrote:
> better one to use from engine is begin() (also new in 0.7.6):
>
> with engine.begin() as conn:
>    ...
>
> that way everything you do with conn is on the same transaction.

Yeah; I ask because I'm using 0.5.8 (and couldn't switch to 0.6.x
yet, since the app breaks with it).

Mitchell Hashimoto

Apr 9, 2012, 8:24:53 PM
to sqlal...@googlegroups.com


On Monday, April 9, 2012 12:03:29 PM UTC-7, Michael Bayer wrote:

> close your connections after you are finished with them.

So I suppose my confusion is where is the "connection" being made. I have a singleton engine instance running around, and when I query, I basically do something like this:

query = select([fields])
result = engine.execute(query).fetchall()

Therefore I'm using implicit connections. The reference to the ResultProxy is quickly gone, which I thought would implicitly close the connection as well.

What exactly am I supposed to do here?

Mitchell
 



Michael Bayer

Apr 9, 2012, 8:32:44 PM
to sqlal...@googlegroups.com
On Apr 9, 2012, at 8:24 PM, Mitchell Hashimoto wrote:

> query = select([fields])
> result = engine.execute(query).fetchall()
>
> Therefore I'm using implicit connections. The reference to the ResultProxy is quickly gone, which I thought would implicitly close the connection as well.
>
> What exactly am I supposed to do here?

That pattern will return the connection to the pool immediately after use - so for that to be the only pattern at play, you'd have to have some execute() or fetchall() calls that are taking so long to complete that concurrent executions are timing out.   You'd want to look at any processes/threads hanging or taking very long and causing other concurrent connections to time out.

If you aren't using threads or concurrency, and expect that only one connection should be in use at a time for a given engine, then I'd give the AssertionPool a quick try which will ensure you're only checking out one connection at a time, illustrating a stack trace where a second concurrent checkout would be occurring.
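A quick sketch of that diagnostic (in-memory SQLite for illustration):

```python
from sqlalchemy import create_engine
from sqlalchemy.pool import AssertionPool

# AssertionPool allows at most one checked-out connection at a time;
# a second concurrent checkout raises AssertionError, pointing at
# where the first checkout occurred.
engine = create_engine("sqlite://", poolclass=AssertionPool)

c1 = engine.connect()
caught = None
try:
    c2 = engine.connect()  # second concurrent checkout
except AssertionError as exc:
    caught = exc
finally:
    c1.close()
```

In a program that should only ever hold one connection at a time, running under AssertionPool immediately flags the code path that is holding a second one.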


Mitchell Hashimoto

Apr 9, 2012, 8:34:10 PM
to sqlal...@googlegroups.com

In addition to fetchall() there are some fetchone() calls as well. I'm
assuming in these cases I need to explicitly close the ResultProxy?

Mitchell


Michael Bayer

Apr 9, 2012, 8:37:07 PM
to sqlal...@googlegroups.com

On Apr 9, 2012, at 8:32 PM, Michael Bayer wrote:

> You'd want to look at any processes/threads hanging

Correction: "threads". Other processes wouldn't have any impact here.


Michael Bayer

Apr 9, 2012, 8:39:13 PM
to sqlal...@googlegroups.com

On Apr 9, 2012, at 8:34 PM, Mitchell Hashimoto wrote:

>
> In addition to fetchall() there are some fetchone() calls as well. I'm
> assuming in these cases I need to explicitly close the ResultProxy?
>

Well yes: if you only called fetchone(), there's no indication yet that there aren't more rows available, and the ResultProxy keeps both the cursor and the underlying DBAPI connection checked out, only releasing these resources when the cursor is exhausted or the result is closed explicitly.

It's for this reason that, if you have a lot of one-row results, you can use result.first() or result.scalar(), which will close things out after returning the single result.
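Both patterns sketched out (modern syntax, in-memory SQLite for illustration):

```python
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")

with engine.connect() as conn:
    # fetchone() leaves the cursor open in case more rows follow;
    # close the result explicitly to release it.
    result = conn.execute(text("SELECT 1"))
    row = result.fetchone()
    result.close()

    # scalar() (like first()) consumes one row and closes the
    # result for you, releasing the cursor immediately.
    value = conn.execute(text("SELECT 2")).scalar()
```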
