Database connection closed after each request?

Glenn Maynard

Jul 23, 2009, 5:24:05 PM
to django...@googlegroups.com
Why is each thread's database connection closed after each request?

Opening a database connection is expensive; it's taking 150ms. Even
10ms would be far too much, for a request that otherwise takes only
30ms. Django threads are reused in FastCGI, and database connections
are reusable; why do threads not reuse their database connections?

Disabling django.db.close_connection fixes this, bringing a trivial
request from 170ms to 35ms after the first.
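
For illustration, a minimal sketch of that change, assuming Django
~1.1's wiring where django.db.close_connection is attached to the
request_finished signal (untested, not a submitted patch):

# Stop Django from closing each thread's DB connection at the end
# of every request (reverses the connect() done in django.db).
from django.core import signals
from django.db import close_connection

signals.request_finished.disconnect(close_connection)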

150ms to localhost also seems grossly expensive, and I'm still
investigating that. However, even if I bring it down to 10ms, that's
still spending 25% of the time for the request on something that
shouldn't be necessary at all.

--
Glenn Maynard

Carlos A. Carnero Delgado

Jul 23, 2009, 10:02:24 PM
to django...@googlegroups.com
Hi,

On Thu, Jul 23, 2009 at 5:24 PM, Glenn Maynard<gl...@zewt.org> wrote:
> Why is each thread's database connection closed after each request?

I believe that this is related to Django's shared-nothing-by-default approach.

HTH,
Carlos.

Glenn Maynard

Jul 23, 2009, 10:50:16 PM
to django...@googlegroups.com

In this case, that's a terrible-performance-by-default approach.
(It's also not a default, but the only behavior, but I'll probably
submit a patch to add a setting for this if I don't hit any major
problems.)

I can think of obscure cases where this would matter--if something
changes a per-connection setting, like changing the schema search
path, and doesn't put it back--but for most people this is just an
unnecessary chunk of overhead. (I wonder if Postgres has a way to
quickly reset the connection state, without having to tear it down
completely.)

The only other change I've had to make so far is making
TransactionMiddleware always commit or rollback, not just on is_dirty,
so it doesn't leave read locks held after a request. (Of course, that
sometimes adds an SQL COMMIT where there wasn't one before, but I'll
take a 500us per request overhead over 100ms in a heartbeat.)
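
For concreteness, a sketch of that middleware tweak, based on Django
~1.1's TransactionMiddleware (an untested illustration, not the actual
change described above):

# Like TransactionMiddleware, but always commit/rollback at the end of
# the request--even when is_dirty() is False--so read locks from plain
# SELECTs aren't held after the response is sent.
from django.db import transaction

class AlwaysCommitTransactionMiddleware(object):
    def process_request(self, request):
        transaction.enter_transaction_management()
        transaction.managed(True)

    def process_exception(self, request, exception):
        transaction.rollback()  # unconditional, unlike the stock version
        transaction.leave_transaction_management()

    def process_response(self, request, response):
        if transaction.is_managed():
            transaction.commit()  # unconditional, unlike the stock version
            transaction.leave_transaction_management()
        return response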

--
Glenn Maynard

James Bennett

Jul 23, 2009, 11:36:26 PM
to django...@googlegroups.com
On Thu, Jul 23, 2009 at 9:50 PM, Glenn Maynard<gl...@zewt.org> wrote:
> In this case, that's a terrible-performance-by-default approach.
> (It's also not a default, but the only behavior, but I'll probably
> submit a patch to add a setting for this if I don't hit any major
> problems.)

Please do a bit more research and reflection on the topic before you
start submitting patches, because this isn't quite what you're making
it out to be.

In the case of a fairly low-traffic site, you're not going to notice
any real performance difference (since you're not doing enough traffic
for connection overhead to add up). In the case of a high-traffic
site, you almost certainly want some sort of connection-management
utility (like pgpool) regardless of what Django does, in which case it
becomes rather moot (since what you're doing is getting connection
handles from pgpool or something similar).

Meanwhile, the codebase stays much simpler and avoids some pitfalls
with potential resource and state leaks.

(and, in general, I don't believe that connection-management utilities
belong in an ORM; keeping them in a different part of the stack
drastically increases flexibility in precisely the cases where you
need it most)


--
"Bureaucrat Conrad, you are technically correct -- the best kind of correct."

Glenn Maynard

Jul 24, 2009, 2:31:40 AM
to django...@googlegroups.com
On Thu, Jul 23, 2009 at 11:36 PM, James Bennett <ubern...@gmail.com> wrote:
> Meanwhile, the codebase stays much simpler and avoids some pitfalls
> with potential resource and state leaks.

All pgpool2 does to reset the session to avoid all of these pitfalls
is issue "ABORT; RESET ALL; SET SESSION AUTHORIZATION DEFAULT".
There's nothing complex about that. (With that added, the
TransactionMiddleware change I mentioned earlier isn't needed,
either.)

I see no need for a complex connection pooling service. You're making
this sound much more complicated than it is, resulting in people
needing to use configurations much more complicated than necessary.

--
Glenn Maynard

James Bennett

Jul 24, 2009, 4:39:24 PM
to django...@googlegroups.com
On Fri, Jul 24, 2009 at 1:31 AM, Glenn Maynard<gl...@zewt.org> wrote:
> I see no need for a complex connection pooling service.  You're making
> this sound much more complicated than it is, resulting in people
> needing to use configurations much more complicated than necessary.

Except this is what it turns into. So suppose a patch is added which
does nothing except keep the database connection open; well, that's
problematic because it means a server process/thread that's not
handling a request at the moment is still tying up a handle to a DB
connection. So somebody will say it'd be much better if Django
maintained a pool of connections independent of request/response
cycles, and just doled them out as needed.

And then you need configuration to manage the size of the pool, when
connections get recycled, how to reset connections portably...

And then somebody will notice that it's got all the infrastructure for
pgpool's "poor man's replication" and helpfully submit a patch for
that...

And then we end up in the unenviable state of having wasted a bunch of
time reimplementing features already available in the standard tool
people should've been using from the start, but without getting any of
that tool's useful flexibility (e.g., changing the stuff behind the
pool without changing the application layer).

Or we could just accept that there are tools right now which will do
all this and more for you, and that if you've got enough traffic
through your app that the overhead of DB connections is problematic,
you should be using one of those tools.

Glenn Maynard

Jul 24, 2009, 8:03:30 PM
to django...@googlegroups.com
On Fri, Jul 24, 2009 at 4:39 PM, James Bennett<ubern...@gmail.com> wrote:
> Except this is what it turns into. So suppose a patch is added which
> does nothing except keep the database connection open; well, that's
> problematic because it means a server process/thread that's not
> handling a request at the moment is still tying up a handle to a DB
> connection. So somebody will say it'd be much better if Django
> maintained a pool of connections independent of request/response
> cycles, and just doled them out as needed.

What I need is sensible, simple and faster, but since someone else
might want to turn it into something complex and unnecessary, it
shouldn't be done? Sorry, this argument applies to every change
anyone might possibly make. There's no "slippery slope" here. I
don't want a connection pooler (and all the issues that brings, like
not being able to use "local" authentication in Postgres); just to
eliminate needless database reconnections.

--
Glenn Maynard

James Bennett

Jul 24, 2009, 8:17:11 PM
to django...@googlegroups.com
On Fri, Jul 24, 2009 at 7:03 PM, Glenn Maynard<gl...@zewt.org> wrote:
> What I need is sensible, simple and faster, but since someone else
> might want to turn it into something complex and unnecessary, it
> shouldn't be done?

While the specific thing you personally are asking for might not be
that much, we can't really commit it to Django and then say "Hey, we
only did this for Glenn and nobody else gets to ask for features
building on it". And since there have already been multiple requests
for Django to grow full-blown connection-pooling utilities, I feel
pretty confident that's where it would end up.

If you don't like it, simply detach the signal handler that closes the
connection (and register your own handler that resets the connection).
But Django itself shouldn't add that because it really throws open a
Pandora's box of things which shouldn't be in Django but which will be
demanded anyway.
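
For reference, a rough sketch of that approach (assuming Django ~1.1's
signal wiring, and borrowing the PostgreSQL-specific reset statement
Glenn quoted from pgpool2 earlier in the thread; untested):

from django.core import signals
from django.db import close_connection, connection

# Keep the connection open across requests, but reset its session
# state the way pgpool2 does between clients.
signals.request_finished.disconnect(close_connection)

def reset_connection(**kwargs):
    if connection.connection is not None:  # raw DB-API connection
        connection.connection.cursor().execute(
            "ABORT; RESET ALL; SET SESSION AUTHORIZATION DEFAULT")

signals.request_finished.connect(reset_connection)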

Andy

Jul 24, 2009, 8:36:16 PM
to Django users
On Jul 24, 4:39 pm, James Bennett <ubernost...@gmail.com> wrote:
> Except this is what it turns into. So suppose a patch is added which
> does nothing except keep the database connection open; well, that's
> problematic because it means a server process/thread that's not
> handling a request at the moment is still tying up a handle to a DB
> connection. So somebody will say it'd be much better if Django
> maintained a pool of connections independent of request/response
> cycles, and just doled them out as needed.

I think you're mixing 2 different things together: persistent
connections & connection pooling.

There's no reason that you can't have persistent connections without
connection pooling.

For connection pooling, you can make an argument whether it belongs in
an ORM or not (although SQLAlchemy supports connection pooling just
fine).

But on the other hand, if all a user wants is a simple persistent
connection, then it would seem logical for the ORM to provide that.

Ideally Django should offer choices to users: if a user doesn't mind
tearing down and building up a DB connection every time a request is
processed, he could use non-persistent connections, but if he wants to
save that time & reuse the DB connection, he should be able to choose
persistent connections.


> if you've got enough traffic
> through your app that the overhead of DB connections is problematic,
> you should be using one of those tools.

But this isn't about traffic at all. It's about latency, which has
nothing to do with traffic.

Amazon has done studies on the effects of response time. They varied
the response times of their website and observed the resultant changes
in user behavior. What they found is that for every 50ms increase in
response time, the rate at which users order an item drops by 10%.

Seen in this light, the additional 150ms of latency resulting from non-
persistent DB connections is huge - it implies almost 30% fewer
customer orders. And it has nothing to do with traffic.

I could be running a tiny e-commerce site. My traffic could be
minimal. And I probably wouldn't have the expertise/time/money to run
a pooling system for my tiny site. But I still wouldn't want to lose
30% of my orders just because I can't have persistent DB connections.




Andy

Jul 24, 2009, 8:38:13 PM
to Django users
On Jul 23, 10:50 pm, Glenn Maynard <gl...@zewt.org> wrote:
> In this case, that's a terrible-performance-by-default approach.
> (It's also not a default, but the only behavior, but I'll probably
> submit a patch to add a setting for this if I don't hit any major
> problems.)

Agreed.

Please share any patches you have.

Alex Gaynor

Jul 24, 2009, 8:54:18 PM
to django...@googlegroups.com
I'd just like to take a moment to point out that that simply *cannot*
be, else the only logical conclusion would be that .5s of latency
results in 0 sales, which plainly makes no sense. It's also worth
noting that, as James has pointed out, if you merely want to persist
connections it's a matter of unregistering the signal handler and
registering your own to reset the connection.

Alex

--
"I disapprove of what you say, but I will defend to the death your
right to say it." -- Voltaire
"The people's good is the highest law." -- Cicero
"Code can always be simpler than you think, but never as simple as you
want" -- Me

Glenn Maynard

Jul 24, 2009, 8:55:29 PM
to django...@googlegroups.com
On Fri, Jul 24, 2009 at 8:17 PM, James Bennett<ubern...@gmail.com> wrote:
> While the specific thing you personally are asking for might not be
> that much, we can't really commit it to Django and then say "Hey, we
> only did this for Glenn and nobody else gets to ask for features
> building on it". And since there have already been multiple requests
> for Django to grow full-blown connection-pooling utilities, I feel
> pretty confident that's where it would end up.

There are lots of requests for ways to build specific types of
queries, too. You can't really commit a QuerySet method for one thing
and then...

Anyway, even if I were to put together a patch for this, it wouldn't be
for quite a while--not until after I've tested and used it in
production for quite a while and done more extensive benchmarking--and
in any case I wouldn't submit anything but bug reports right now, with
the tracker so (understandably) backlogged from the release freeze. So
relax, I'm not jumping head-first into this.

--
Glenn Maynard

Glenn Maynard

Jul 24, 2009, 9:02:37 PM
to django...@googlegroups.com
On Fri, Jul 24, 2009 at 8:36 PM, Andy<selfor...@gmail.com> wrote:
> Seen under this light, the additional 150ms latency resulting from non-
> persistent DB connection is huge - it implies almost 30% fewer
> customer orders. And it has nothing to do with traffic.

By the way, I switched my connections from TCP to a Unix socket with
local authentication and it dropped to 5-10ms. (I suspect it was
setting up SSL to localhost--it seems like Postgres should know better,
but I haven't investigated.) I still consider 5-10ms significant (all
delays are cumulative and the framework should have a small baseline
"latency footprint"), but I haven't done enough benchmarking and timing
yet, and obviously that's an order of magnitude lower.
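
(For reference, the Unix-socket setup is just a settings change: in
the 1.x-era settings style, an empty HOST makes psycopg2 connect over
the local domain socket instead of TCP. The names below are
placeholders, not from the original post:)

DATABASE_ENGINE = 'postgresql_psycopg2'
DATABASE_NAME = 'mydb'    # placeholder
DATABASE_USER = 'myuser'  # placeholder
DATABASE_HOST = ''        # empty = Unix domain socket, "local" auth
DATABASE_PORT = ''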

Also, you may be connecting to a database on another server, or local
authentication may not be available; not reconnecting for each request
just seems like sensible behavior unless you really are using a pooler
and explicitly don't want it. It's easy to be stuck with a set
configuration on a shared server, and if you're stuck with a 150ms
configuration, that's a perceptible delay that's easily avoided.

I've attached the change as I'm running it now. I could isolate it
outside of a patch (disconnect the signal, as James suggested, and
connect my own), but it does need to know a little about the backend
internals (on Postgres, to set the isolation level; and resetting the
connection will be different with other databases).

(Obviously, this isn't anything like a submittable patch; I'm just
making it available since he asked.)

> I'd just like to take a moment to point out that that simply *cannot*
> be, else the only logical conclusion would be that .5s of latency
> results in 0 sales, which plainly makes no sense.

I suspect the actual formula is something like pow(0.9,
(seconds_latency / 0.050)); in other words, 500ms latency would imply
34.8% as many sales. That's pretty believable.

--
Glenn Maynard

django-postgres-persistent-db-connection.diff

James Bennett

Jul 24, 2009, 9:20:00 PM
to django...@googlegroups.com
On Fri, Jul 24, 2009 at 7:55 PM, Glenn Maynard<gl...@zewt.org> wrote:
> There are lots of requests for ways to build specific types of
> queries, too.  You can't really commit a QuerySet method for one thing
> and then...

So, let's walk through this logically.

Suppose we start with a patch which does the sort of thing you
apparently want: hold on to the connection, and reset it after each
request/response cycle. That patch would never get committed, because
it would have even worse side effects (see: "spinning up a server
process ties up a DB handle for as long as the process lives?") and
could quite realistically cause resource starvation.

How do we avoid that? Well, we'd need some way to say "I'm done with
this connection for now but I'll be needing a connection again soon,
so hang on to it to avoid the overhead of re-establishing it". Which
is pretty much the definition of connection pooling.

So we see that this isn't a pointless slippery-slope argument:
connection pooling is an inevitable outcome of your approach, just one
you've yet to accept.

But it gets worse: you'd want this mechanism to be independent of the
web server processes/threads in which the connections are used,
because you don't want to have to have one connection pool per server
process (since that just reintroduces the problem of latency every
time you spin one up and increases overhead and code complexity) and
you don't want the connection pool to be killed if the process which
hosted it gets recycled by the server. Which points to... an external
connection-pooling utility.

Such utilities already exist and are usable right now. They don't
belong in Django (for many reasons, "don't reinvent the wheel" being
only one of them).

Andy

Jul 24, 2009, 9:26:03 PM
to Django users
On Jul 24, 8:54 pm, Alex Gaynor <alex.gay...@gmail.com> wrote:
> "Seen under this light, the additional 150ms latency resulting from non-
> persistent DB connection is huge - it implies almost 30% fewer
> customer orders. And it has nothing to do with traffic."
>
> I'd just like to take a moment to point out that that simply *cannot*
> be, else the only logical conclusion would be that .5s of latency
> results in 0 sales, which plainly makes no sense.

Actually it doesn't mean that at all.

First of all, the 10% decrease in orders for each additional 50ms
increase in latency is *multiplicative*, NOT additive. For the 150ms
increase in latency that non-persistent connections cost you, the
decrease in orders is 1 - 0.9^3 = 27.1%, hence I wrote "almost 30%".

Hence for a 0.5s increase in latency, *if the relationship still holds
over a range that big*, the corresponding drop in orders would be
1 - 0.9^10 = 65%. Which isn't exactly impossible. But my guess is that
for an increase in latency that large, the original "50ms results in a
10% drop" relationship no longer holds.

Regardless, the point is that latency is a big deal whether your site
is large or small. Even a small latency increase can have a big impact
on user behavior. Telling every mom & pop site owner out there "Go set
up pgpool or hack the Django signal handler if you don't want high
latency" is not very user friendly. Especially when the solution
(providing optional persistent DB connection) is relatively simple.

Glenn Maynard

Jul 25, 2009, 2:40:53 PM
to django...@googlegroups.com
On Fri, Jul 24, 2009 at 9:20 PM, James Bennett<ubern...@gmail.com> wrote:
> Suppose we start with a patch which does the sort of thing you
> apparently want: hold on to the connection, and reset it after each
> request/response cycle. That patch would never get committed, because
> it would have even worse side effects (see: "spinning up a server
> process ties up a DB handle for as long as the process lives?") and
> could quite realistically cause resource starvation.

8-16 connections from the 8-16 backend threads I'm starting are going
to swamp PostgreSQL? Sorry, no.

--
Glenn Maynard

Glenn Maynard

Jul 25, 2009, 5:26:36 PM
to django...@googlegroups.com
On Fri, Jul 24, 2009 at 9:02 PM, Glenn Maynard<gl...@zewt.org> wrote:
> By the way, I switched my connections from TCP to a Unix socket with
> local authentication and it dropped to 5-10ms.  (I suspect it was

If you're using a connection pooler, you don't want to disconnect and
reconnect from it all the time--you want to stay connected and let it
worry about not keeping idle sessions open. That's its job! Django
should keep the connection open, and if people want to run so many
Django backends that this creates too many idle connections, then
*that* is the exact scenario for using a connection pooler.

100 connections, without connection pooling, with full
disconnect/reconnect: 3.385s
100 connections, without connection pooling, with connection reset: 2.681s
100 connections, with pybouncer, with full disconnect/reconnect: 2.908s
100 connections, with pybouncer, with connection reset: 2.754s

You're saying that Django should reconnect constantly, and if that's
too slow, people should use a pooler to fix it. This is precisely
backwards. Django should stay connected, and if people don't want
idle connections to the database, *that* is when you use a pooler.
The current behavior--avoiding idle connections to the database--is
exactly what you're against: the behavior that should be handled by a
pooler. That's what they're for.

--
Glenn Maynard

Amitay Dobo

Jul 26, 2009, 10:17:58 PM
to Django users
I was quite surprised to find out there isn't any connection pooling
built in to Django or (at least some of) its DB back-ends when I first
researched it.
Other people have correctly pointed out that this is a latency issue
that affects both low- and high-traffic sites. Since there is already
an abstraction layer over the various DB backends, I find it quite
logical to implement the connection pooling on top of it, instead of
relying on an external tool. I believe an optional simple pool with
min/max connection settings would do the trick for most people without
any side effects, and the line before the slippery slope can be drawn
there. It might be a good idea to leave an extension point for
manipulating a connection object taken out of the pool, in order to
reset its state.
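
To make that concrete, here is the kind of knob such a pool might
expose. Note that these settings are purely hypothetical--nothing like
them exists in Django:

# Hypothetical settings for an optional built-in pool (illustration only):
DATABASE_POOL_MIN = 1    # connections kept open per process
DATABASE_POOL_MAX = 10   # hard cap before requests block
DATABASE_POOL_RESET = 'myapp.db.reset_connection'  # hypothetical reset hook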

When I researched connection pooling solutions, I found various hacks
for cloning and modifying the backends to use SQLAlchemy or psycopg
connection pools. This seems like a pretty crude approach, which
perhaps highlights the need for a better solution (or at least a
better extension point) within the framework.

I have found little information regarding any "external connection-
pooling utilities", except the mention of pgpool in the Django book. I
believe there might be a few situations where installing such an
external utility is problematic (shared hosting), and more cases where
developers would prefer to simply switch on a setting. Nonetheless, I
would be happy to be educated about external pooling utilities that
have been proven to work well with Django and various DB backends
(namely PostgreSQL and MySQL).

So to sum up: I vote up connection pooling. Where do I sign up?


James Bennett

Jul 26, 2009, 10:24:38 PM
to django...@googlegroups.com
On Sun, Jul 26, 2009 at 9:17 PM, Amitay Dobo<ami...@gmail.com> wrote:
> So to sum up: I vote up connection pooling. Where do I sign up?

On some other project's mailing list?

Connection pooling doesn't belong in Django. I've outlined one reason
for that above. Google a bit for things like "django connection pool"
and you'll likely find others (including the fact that it hurts your
flexibility and mingles layers of the stack which don't and shouldn't
need to know about each other's bits).

Glenn Maynard

Jul 27, 2009, 4:43:28 PM
to django...@googlegroups.com
On Sun, Jul 26, 2009 at 10:17 PM, Amitay Dobo<ami...@gmail.com> wrote:
> So to sum up: I vote up connection pooling. Where do I sign up?

Thread hijacking. Thanks, always appreciated.

--
Glenn Maynard

Mike

Aug 29, 2009, 8:08:58 AM
to Django users
Hi,

I made a small custom psycopg2 backend that implements a persistent
connection using a global variable. With this I was able to improve the
number of requests per second from 350 to 1600 (on a very simple page
with a few selects). Just save it in a file called base.py in any
directory (e.g. postgresql_psycopg2_persistent) and set in settings:

DATABASE_ENGINE = 'projectname.postgresql_psycopg2_persistent'

Here is the source: http://dpaste.com/hold/86948/

# Custom DB backend, postgresql_psycopg2 based;
# implements persistent database connection using a global variable.

from django.db.backends.postgresql_psycopg2.base import DatabaseError, \
    DatabaseWrapper as BaseDatabaseWrapper, IntegrityError
from psycopg2 import OperationalError

connection = None

class DatabaseWrapper(BaseDatabaseWrapper):
    def _cursor(self, *args, **kwargs):
        global connection
        if connection is not None and self.connection is None:
            try:  # Check if connection is alive
                connection.cursor().execute('SELECT 1')
            except OperationalError:  # The connection is not working, need reconnect
                connection = None
            else:
                self.connection = connection
        cursor = super(DatabaseWrapper, self)._cursor(*args, **kwargs)
        if connection is None and self.connection is not None:
            connection = self.connection
        return cursor

    def close(self):
        if self.connection is not None:
            self.connection.commit()
            self.connection = None

Mike

Aug 31, 2009, 6:47:34 AM
to Django users
Hi,

I would also like to note that this code is not threadsafe - you can't
use it with Python threads, or you'll get unpredictable results. In the
case of mod_wsgi, please use prefork daemon mode with threads=1.

Graham Dumpleton

Aug 31, 2009, 7:56:22 AM
to Django users
Provided you set WSGIApplicationGroup to %{GLOBAL} in mod_wsgi 2.X, or
instead use the newer mod_wsgi 3.0, you could always use thread locals.
That way each thread would have its own connection.

You need to ensure the main interpreter is used in mod_wsgi 2.X, as
thread locals weren't preserved beyond the lifetime of a request in
sub-interpreters. This issue has been addressed in mod_wsgi 3.0.
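
A rough sketch of that thread-local variant, starting from Mike's
backend above (the imports mirror his post; this is an untested
illustration, not working code from the thread):

import threading

from django.db.backends.postgresql_psycopg2.base import \
    DatabaseWrapper as BaseDatabaseWrapper
from psycopg2 import OperationalError

_local = threading.local()  # one saved connection per thread, not one global

class DatabaseWrapper(BaseDatabaseWrapper):
    def _cursor(self, *args, **kwargs):
        saved = getattr(_local, 'connection', None)
        if saved is not None and self.connection is None:
            try:  # check that this thread's saved connection is still alive
                saved.cursor().execute('SELECT 1')
            except OperationalError:
                _local.connection = None
            else:
                self.connection = saved
        cursor = super(DatabaseWrapper, self)._cursor(*args, **kwargs)
        if getattr(_local, 'connection', None) is None \
                and self.connection is not None:
            _local.connection = self.connection
        return cursor

    def close(self):
        if self.connection is not None:
            self.connection.commit()
            self.connection = None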

Graham

Ed Menendez

Sep 1, 2009, 3:23:41 PM
to Django users
Just to throw my two cents in: my background is high-traffic fantasy
sports websites (a different sort of geekness) and I've dealt with
connection pooling and connection persistence separately on other,
ancient platforms. I will +1 Alex's observation that connection
pooling and persistent connections are two different things. This
problem also has little to do with high-traffic sites and everything
to do with normal use cases. I will also +1 that defaulting to
requiring reconnections is bad enough to be considered a bug. For
reasons already stated, this should never be the default longevity of
connections.

I implemented the connection pool hack (to get persistent connections)
with SQLAlchemy using MySQL for an RBS Futbol Soccer game. And I think
it's a hack because, basically, I had to buy the happy meal (pooling)
just to get the collectible toy (persistent connections).

Once we decouple connection pooling and persistent connections in our
heads... I think this becomes a simpler issue. If you had 50 web
threads going and each one had a connection to the database that stays
open forever, even at 1am when you have no users, that's not a
problem. Even those 50 threads sitting there is not a problem. As a
matter of fact, I much prefer to keep those 50 web threads open so
Python+Django+My Bloated Code isn't reinstantiated all the time.

If you have 1000 connections open, now you might want to have some
connection pooling. But that's a separate problem to be solved by a
separate package and discussed in a different forum.

I do understand this complicates the edge cases where you don't want
the connections to stay open, and for those who do want connection
pooling, but since those are edge cases in my opinion, it's fine for
them to be a little more difficult.

I believe the argument is being made that it's not Django's problem if
connections are kept alive - that it's not a battery that needs to be
included. I would argue that this battery does need to be included,
because it's a problem for moderate- and low-traffic sites. Including
high-traffic sites in this discussion only confuses the issue, since
for those cases we're really talking about connection pooling.




Jonathan

Nov 20, 2011, 8:45:13 AM
to django...@googlegroups.com
Does anyone know if this progressed anywhere since '09?

ydjango

Jan 14, 2012, 9:36:17 PM
to Django users
Any updates on a MySQL connection pool for Django? Has anyone
implemented one and is willing to share?
Graham Dumpleton also raised it on G+ today.

yati sagade

Jan 15, 2012, 2:56:36 AM
to django...@googlegroups.com
I haven't used it, but this might help: http://node.to/wordpress/2010/02/11/database-connection-pool-solution-for-django-mysql/
Also, have you tried using SQLAlchemy?


--
Yati Sagade

(@yati_itay)


Matteius

Jan 15, 2012, 12:02:55 PM
to Django users
I've heard/read that MySQL Proxy supports a connection pool; see
http://forge.mysql.com/wiki/MySQL_Proxy_FAQ. Basically, this was a good
original observation about the overhead of opening/closing DB
connections for every request, but I think tools exist out there, such
as MySQL Proxy, that can practically negate such concerns.

-Matteius

Daniel Gerzo

Jan 15, 2012, 3:06:47 PM
to django...@googlegroups.com

You can use django-orm[1] for that.

[1] https://github.com/niwibe/django-orm

Kind regards
Daniel Gerzo

ydjango

Jan 16, 2012, 1:30:10 PM
to Django users
django-orm looks very useful. Are any high-DB-traffic sites using it
in production? How mature is it?

deb.dasit2013

Oct 1, 2019, 2:21:02 AM
to Django users
Hi Mike,
I tried the custom persistent connection, but it results in the same
error. My environment is Django + postgres + nginx + gunicorn.