Threads and db connection handling question


Cristiano Coelho

Jun 1, 2016, 6:34:05 PM
to Django developers (Contributions to Django itself)
Let me start by saying sorry if this actually belongs on django-users rather than django-developers.

I'm curious about how exactly Django handles database connections, in particular when persistent connections are used (and if someone can point me to the code, even better). As far as I can tell from the docs, they are kept per thread and opened and closed (or returned to whatever pool is used, if persistent connections are enabled) per request.

Now what happens when you use either a new thread or something like Python's thread pool (either through the new Python 3 API or the old Python 2 multiprocessing.pool.ThreadPool class)? It seems like connections are correctly opened, and committed if any data-modifying query is executed, but it also seems like they are never returned/closed. That's not bad in the case of a thread pool, as you know that thread will want to keep that connection up for as long as it lives.
What happens exactly if the thread / thread pool dies? On Postgres at least (with Django 1.9.5), it seems like the connection is returned/closed when the whole app server is restarted, but it might be left open if the thread dies unexpectedly.

With Postgres I have been experiencing some issues with connections leaking; my app uses some thread pools that are basically started with Django. I can't really find the source of the leak: the connections are correctly closed if I restart the machine (I'm using Amazon cloud services), and they also seem to be correctly closed on app updates, which basically means restarting Apache. But in some very specific cases, those thread pools end up leaking the connection.

Does Django have any code that listens for thread exit and gracefully closes the connection held by it? Also, is there any chance that a connection may leak if the server is restarted before a request is finished? It seems like Django returns the connection only after a request is over in those cases.

Also, if the connection gets corrupted/closed by the server, does Django retry to open it, or is that thread's connection dead forever and the thread basically unusable?

There's really not a lot of documentation on what happens when you use Django's ORM on threads that are not part of the current request. Hopefully I can get pointed to some code or docs about this.

There's a good response here http://stackoverflow.com/questions/1303654/threaded-django-task-doesnt-automatically-handle-transactions-or-db-connections about some issues with threads and Django connections, but it seems old.

Tim Graham

Jun 1, 2016, 9:41:42 PM
to Django developers (Contributions to Django itself)
Here's a ticket requesting the documentation you seek (as far as I understand):
https://code.djangoproject.com/ticket/20562

Cristiano Coelho

Jun 1, 2016, 10:14:02 PM
to Django developers (Contributions to Django itself)
That's pretty close, but on a much more difficult level, since it is about multiprocessing? Things start to get very odd with multiprocessing and Django. Compared to that, with threads you can easily launch a new thread or pool without any special work; it is just the database connection handling that has some kind of possible connection leak that I'm trying to figure out. Also, the Stack Overflow post I linked has a good explanation of Django's DB connection handling, but it is quite outdated; things have probably changed a bit, and it isn't clear whether there's any kind of connection handling on thread exit or whether you need to manually close every connection.

Stephen J. Butler

Jun 1, 2016, 10:19:29 PM
to django-d...@googlegroups.com
Is there a good reason to do this with your own custom thread pool management inside Django and (I'm assuming) WSGI? Celery is a well understood solution to the problem of background tasks and has a really nice API.

--
You received this message because you are subscribed to the Google Groups "Django developers (Contributions to Django itself)" group.
To unsubscribe from this group and stop receiving emails from it, send an email to django-develop...@googlegroups.com.
To post to this group, send email to django-d...@googlegroups.com.
Visit this group at https://groups.google.com/group/django-developers.
To view this discussion on the web visit https://groups.google.com/d/msgid/django-developers/9c330b4b-ebd7-4abb-b03d-dffa21d245af%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Cristiano Coelho

Jun 1, 2016, 10:27:23 PM
to Django developers (Contributions to Django itself)
Yes, mainly some things that need to be done asynchronously, where you don't want to hold up the request, such as email sending or logging. So this work is just offloaded to a thread pool and the request returns instantly. Also, when you are using auto-scaling services from Amazon, since each thread pool is started per server process, your pools can also automatically scale; you don't really want to get into all the trouble of configuring Celery, message queues and such just for this.

Florian Apolloner

Jun 2, 2016, 3:12:52 AM
to Django developers (Contributions to Django itself)
Hi,

Django does not really use a pool from which you can check out connections and return them. Every thread has its own connection which is initialized on first use -- there is nothing that closes connections (or checks their lifetime with persistent connections) automatically if you are outside of a standard request/response cycle -- that is on you to ensure.

Cheers,
Florian

Cristiano Coelho

Jun 2, 2016, 7:39:51 AM
to Django developers (Contributions to Django itself)
So is what was stated in the Stack Overflow post -- that connections are somehow closed only at the end of a request, through the request-finished signal -- still the actual behavior?

Are there any best/suggested practices for handling connections on threads that are not part of the request cycle? Considering the connections are automatically opened for you, it would be great for them to also be automatically closed/disposed on the thread's death, which right now seems to happen sometimes and sometimes not, leaking connections (something I'm still trying to figure out).

Florian Apolloner

Jun 2, 2016, 10:48:33 AM
to Django developers (Contributions to Django itself)
On Thursday, June 2, 2016 at 1:39:51 PM UTC+2, Cristiano Coelho wrote:
So is what was stated in the Stack Overflow post -- that connections are somehow closed only at the end of a request, through the request-finished signal -- still the actual behavior?

Dunno, I do not read SO. Connections are closed at the start and end of a request, depending on the configuration.
 
Any best / suggested practices on how to handle connections on threads that are not part of the request cycle?

Same as in any other (non-Django) program: Manually call connection.connect/close() at the relevant points.
 
Considering the connections are automatically opened for you, it would be great for them to be automatically closed/disposed for you on thread's death

No, it would not be great at all; connections could theoretically be shared between threads etc… In general Django has no way of knowing when you want to close it. In the end, a "dying" thread which does not properly close its connection is a bug in your code anyway.

Cheers,
Florian
 

Aymeric Augustin

Jun 2, 2016, 11:15:55 AM
to django-d...@googlegroups.com
Hi Cristiano,

> On 02 Jun 2016, at 13:39, Cristiano Coelho <cristia...@gmail.com> wrote:
>
> it would be great for them to be automatically closed/disposed for you on thread's death, which right now seems to happen some times, and some times not, leaking connections (something I'm trying to figure out what's going on).

In the scenario you’re discussing here, connections are closed when Python garbage collects them, which can cause seemingly random behavior.

--
Aymeric.

Florian Apolloner

Jun 2, 2016, 12:05:09 PM
to Django developers (Contributions to Django itself)

Assuming a new enough Python (2.7.1 IIRC), that should reliably happen once a thread is destroyed -- earlier versions would keep the connection until the thread local actually went out of scope.

Cristiano Coelho

Jun 2, 2016, 5:55:41 PM
to Django developers (Contributions to Django itself)

El jueves, 2 de junio de 2016, 11:48:33 (UTC-3), Florian Apolloner escribió:

No, it would not be great at all; connections could theoretically be shared between threads etc… In general Django has no way of knowing when you want to close it. In the end, a "dying" thread which does not properly close its connection is a bug in your code anyway.

Cheers,
Florian
 

Not always. For example, on Amazon Elastic Beanstalk, when you either restart the app server or upload a new version, it basically kills Apache and all WSGI processes through a SIGTERM, so those thread pools are probably killed in a bad way and you don't really have control over that. Also, you don't really have control over the life of a thread-pool thread, so a given thread could be gracefully stopped by the pool implementation, but you can't really run any cleanup code before that happens for that thread (at least not that I'm aware of for multiprocessing.pool.ThreadPool).

As Aymeric pointed out, it seems like those connections are correctly closed most of the time when a thread dies, but for some reason Postgres would keep some connections open. Are there any rare cases where, even if the thread is stopped, the connection won't be closed? The only thing I can think of is that those threads are never garbage collected or something.

Florian Apolloner

Jun 2, 2016, 6:32:15 PM
to Django developers (Contributions to Django itself)
On Thursday, June 2, 2016 at 11:55:41 PM UTC+2, Cristiano Coelho wrote:
Not always, for example, on Amazon Elastic Beanstalk when you either restart the app server or upload a new version, it basically kills Apache and all WSGI processes through a SIGTERM

A SIGTERM is a normal signal and should cause a proper shutdown.
 
so those thread pools are probably killed in a bad way and you don't really have control over that.

Absolutely not, you are mixing up SIGTERM and SIGKILL.
 
Also you don't really have control on the life of a thread pool thread, so a given thread could be gracefully stopped by the pool implementation

Once again: there is no pool.

As Aymeric pointed out, it seems like those connections are correctly closed most of the time when a thread dies, but for some reason, Postgres would keep some connections open.

If a connection is closed properly, Postgres will close it accordingly. The only possible way for a connection to stay open while the app is gone is that you are running into TCP timeouts while getting killed with SIGTERM (dunno if the Postgres protocol has keep-alive support on the protocol level, most likely not). As long as you are not sending a SIGTERM, Python should clean up properly, which should run garbage collection, which in turn should delete all connection objects and therefore close the connections. Any other behavior seems like a bug in Python (or maybe Django, but as long as Python shuts down properly I think we are fine).

Are there any rare cases where even if the thread is stopped the connection won't be closed? The only thing I can think of are that those threads are never garbage collected or something.

Depends on the python version you are using, especially thread local behavior changed a lot…

Cheers,
Florian

Florian Apolloner

Jun 2, 2016, 6:34:21 PM
to Django developers (Contributions to Django itself)
On Friday, June 3, 2016 at 12:32:15 AM UTC+2, Florian Apolloner wrote:
while getting killed with SIGTERM (dunno if the postgres protocol has keep alive support on the protocol level, most likely not). As long as you are not sending a SIGTERM

Oops, now I am mixing up SIGKILL and SIGTERM myself -- I meant SIGKILL here, which cannot be caught by the program itself.

Cristiano Coelho

Jun 2, 2016, 6:48:26 PM
to Django developers (Contributions to Django itself)
Florian,

Sorry about the SIGTERM and SIGKILL confusion; I think I read somewhere some time ago that SIGTERM would instantly finish any pending request, so I assumed it would also kill any thread in a not-so-nice way. However, now that you mention it, there is one SIGKILL in the Apache logs (compared to the thousands of SIGTERMs due to restarts). However, the connections that were somehow stuck and never closed dated from about two weeks ago. Yes, there were connections that were opened two weeks ago and never closed, even though Apache was restarted many times every day!

About the thread pool: I'm talking about Python's thread pool that I'm using to offload work, not any Django pool, and these pools are the ones whose threads I have no control over, as they are completely managed by the thread-pool library.

Three days have passed since I noticed those hung connections and the issue hasn't repeated yet; maybe it was some really odd condition that caused them. The thread pools' connections are indeed being correctly closed on server restarts, so a very odd case created those hung connections.

So just to be sure, is SIGTERM actually propagated to Python code so it can gracefully kill all threads, garbage collect and close connections? Would a SIGKILL actually prevent any kind of cleanup, leaving a chance for Python/Django to leave some connections open?

Maybe this is a postgres issue instead that happened for some very odd reason.


Finally, would it be possible, through some kind of callback on the thread-local object, to fire a connection close before a thread dies? This would certainly help rather than waiting for the connection to be garbage collected. You mentioned that connections could end up being shared by threads, but I don't see that being done in Django at all.

Stephen J. Butler

Jun 2, 2016, 7:22:21 PM
to django-d...@googlegroups.com
I'm still a bit confused. What is the advantage of having connections closed automatically when the thread exits? It seems to me that you can quickly solve your problem by modifying your thread start routines:

from contextlib import closing

from django.db import connection

def my_thread_start():
    # closing() calls connection.close() when the block exits,
    # even if the work raises.
    with closing(connection):
        ...  # do normal work
You can even create a quick decorator if that's too much modification.


Cristiano Coelho

Jun 2, 2016, 7:33:28 PM
to Django developers (Contributions to Django itself)
I'm not starting threads by hand but rather using a pool which handles the threads for me; I basically just send a function to the pool and let it run.
You are right, I could wrap every function sent to the pool with the code you proposed, but I also don't want to open and close a connection on every function call -- only when the thread from the pool is no longer required and disposed, pretty much on application exit, although the pool handler can do whatever it wants with the threads.

Stephen J. Butler

Jun 2, 2016, 7:50:13 PM
to django-d...@googlegroups.com
Do you expect your background threads to be equal to or greater in number than the requests you're normally servicing? Usually background tasks are much less frequent than web requests, so a little overhead w/r/t database connections isn't even going to be noticed.

Looking at what Django does: at the start and end of each request it calls connection.close_if_unusable_or_obsolete(). That function does careful checks to see if the connection is even worth using. Unless you do something similar in your thread start (adding more complication than I've suggested), having a long-lived thread-local connection will cause more problems than it saves you. To make this work in general you'd at least need a hook at the point the thread is removed from and added back to the pool, not when the thread exits.

Also, the connection won't be opened unless you actually do something that needs it.

Personally, I think this sounds like something you're trying to optimize before you've profiled whether the benefit is worth it.



Cristiano Coelho

Jun 2, 2016, 8:17:01 PM
to Django developers (Contributions to Django itself)
Some of the pools might have quite a high load (such as the one that handles logging to the database and other sources), so opening and closing a connection on each call might end up being expensive.

Now that you mention Django's code: does connection.close_if_unusable_or_obsolete() always close the connection, or does it also handle the case where persistent connections are used (so the connection is not closed if it is alive and in a good state)? If so, would it be possible to simply replicate what Django does on every request start and end, as a wrapper around every function that's sent to the pool? That way connections are reused if possible and recycled/closed if they go bad.

Looking at the code, this seems to be called on every request start and end:

def close_old_connections(**kwargs):
    for conn in connections.all():
        conn.close_if_unusable_or_obsolete()

signals.request_started.connect(close_old_connections)
signals.request_finished.connect(close_old_connections)

I guess something similar could be done, but just with that thread's connection; then all functions sent to the pool would need to go through a wrapper that does this before and after every call.

Florian Apolloner

Jun 3, 2016, 3:41:16 AM
to Django developers (Contributions to Django itself)


On Friday, June 3, 2016 at 12:48:26 AM UTC+2, Cristiano Coelho wrote:
So just to be sure, is SIGTERM actually propagated to python code so it can gracefully kill all threads, garbage collect and close connections? Would a SIGKILL actually prevent any kind of cleanup leaving a chance for python/django leave some connections opened?

Yes, SIGTERM is a signal Python should handle nicely; SIGKILL will just nuke the process from earth and prevent any cleanup, I think. Note that the OS will clean up, though (sooner or later). As for the open connections: check on Postgres where they are coming from, then check on that machine which process they belong to, and maybe use gdb to find out more.
 
Maybe this is a postgres issue instead that happened for some very odd reason.

The only issue I see here on the postgres side would be that you have very long timeouts configured.
 
Finally, would it be possible through any kind of callbacks of the thread local object to fire a connection close before a thread dies? This would certainly help rather than waiting for the connection to get garbage collected.

Killing a thread and garbage collection should happen at roughly the same time due to refcounting, IMO.

Florian Apolloner

Jun 3, 2016, 3:42:52 AM
to Django developers (Contributions to Django itself)


On Friday, June 3, 2016 at 2:17:01 AM UTC+2, Cristiano Coelho wrote:
Now that you mention Django's code: does connection.close_if_unusable_or_obsolete() always close the connection, or does it also handle the case where persistent connections are used (so the connection is not closed if it is alive and in a good state)?

Yes, it does handle persistent connections properly, otherwise Django would not work ;)

Cristiano Coelho

Jun 3, 2016, 7:48:05 AM
to Django developers (Contributions to Django itself)

El viernes, 3 de junio de 2016, 4:41:16 (UTC-3), Florian Apolloner escribió:

Yes, SIGTERM is a signal Python should handle nicely, SIGKILL will just nuke the process from earth and prevent any cleanup I think. Note that the OS will clean up though (sooner or later). As for the open connections: Check on postgres where from they are and then check on that machine to which process they belong and maybe use gdb to find out more.
 

When I saw the issue, the first thing I checked was the source IP, and it was the Amazon machine as expected. I'm almost sure those connections were used by the thread pools, because the last statement was a 'commit': normally most of the statements listed through the list-connections query are selects, and those pools always end with an insert, so the commit makes sense.


The only issue I see here on the postgres side would be that you have very long timeouts configured.

Looks like Postgres doesn't have any kind of connection timeout and you need to use PgBouncer for that.
 

Well, I guess that with the connection.close_if_unusable_or_obsolete() change, the connection handling of the thread pools should improve.


Aymeric Augustin

Jun 3, 2016, 9:01:38 AM
to django-d...@googlegroups.com
Hello,

I have to say that, as the author of the “persistent connections” feature, I am confused by repeated references to a “connection pool” in this discussion. I chose *not* to implement a connection pool because these aren’t easy to get right. Users who need a connection pool are better off with a third-party option such as pgpool.

When persistent connections are enabled, each thread uses a persistent connection to the database — as opposed to one connection per HTTP request. That said, connections aren’t shared or pooled between threads. This guarantees that each connection dies when the thread that opened it dies.

In practice, Django opens a database connection per thread and keeps it open after each request. When the next request comes in, if the connection still works (this is tested with something like “SELECT 1”) and its maximum age isn’t reached, Django re-uses it; otherwise it closes it and opens a new one. This is what the “close_if_unusable_or_obsolete” function does — as the name says :-)
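For reference, the behavior described here is driven by the CONN_MAX_AGE key of each database's settings; a sketch with illustrative values (the database name "mydb" is a placeholder):

```python
# settings.py (illustrative values; "mydb" is a placeholder)
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql_psycopg2",
        "NAME": "mydb",
        # 0 (the default): close the connection at the end of each request.
        # A positive number: reuse the connection for up to that many seconds.
        # None: unlimited persistent connections.
        "CONN_MAX_AGE": 600,
    }
}
```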

I hope this helps,

-- 
Aymeric.


Florian Apolloner

Jun 3, 2016, 10:22:16 AM
to Django developers (Contributions to Django itself)


On Friday, June 3, 2016 at 1:48:05 PM UTC+2, Cristiano Coelho wrote:
El viernes, 3 de junio de 2016, 4:41:16 (UTC-3), Florian Apolloner escribió:

Yes, SIGTERM is a signal Python should handle nicely, SIGKILL will just nuke the process from earth and prevent any cleanup I think. Note that the OS will clean up though (sooner or later). As for the open connections: Check on postgres where from they are and then check on that machine to which process they belong and maybe use gdb to find out more.
 

When I saw the issue the first thing I checked was the source IP and was the amazon machine as expected, then I'm almost sure those connections were used by the thread pools

Don't be almost sure; investigate where that connection is coming from…
 
The only issue I see here on the postgres side would be that you have very long timeouts configured.

Looks like postgres doesn't have any kind of connection timeout and you need to use pg bouncer for that.

If the app did indeed die, TCP keep-alive should help.

Cristiano Coelho

Jun 3, 2016, 6:12:42 PM
to Django developers (Contributions to Django itself)
Aymeric, I never said anything about a connection pool. I'm talking about thread pooling to perform async work (without needing to spawn a new thread every time, and with control over the amount of work that is offloaded), and about the behaviour of Django connections when used on a separate thread that's not part of the request/response cycle. If you don't take care, everything will appear to work as expected, because Django is nice enough to open DB connections for you even from a user-spawned thread, but it won't close them for you; so when working with user-spawned threads, special care needs to be taken with the DB connections.

I have added changes to the thread pool's code so it always wraps functions between calls to connection.close_if_unusable_or_obsolete(), so the thread-pool threads can always have a healthy connection and hopefully close invalid ones. I don't know if this is related to the connection leak I got, which I can't seem to reproduce now, but I will be keeping an eye on it.

It would be good to have a big warning in the Django docs about doing queries on threads that are spawned by the user and are not part of a request/response cycle, letting users know they have to explicitly close the connections.

Marcin Nowak

Jun 21, 2016, 6:53:21 AM
to Django developers (Contributions to Django itself)


On Saturday, June 4, 2016 at 12:12:42 AM UTC+2, Cristiano Coelho wrote:
Aymeric, I have never said anything about connection pool, I'm talking about thread pooling to perform async work

I have similar requirements and issues with Django. It looks like it is completely unreliable in an async environment, and the worst thing is that I can't force Django to create a new connection within the thread/process. I'm fighting with this almost day by day, ensuring that no DB operation is performed in async callbacks, and I'm using RAM to return results to the main thread/process and then write them to the DB.

Well, they'll say that Django is not designed for those cases. Maybe. My advice: don't use Django for medium and big projects. Using Django for that job was my biggest mistake. From version to version it turns more into a handy tool for blog creators/web designers, forcing everyone else to (for example) do nasty things with your database ("I'm Django and I own your database, ha ha!").

Marcin

Marcin Nowak

Jun 21, 2016, 7:03:30 AM
to Django developers (Contributions to Django itself)
 
From version to version it turns more into handy tool for blog creators/webdesigners

Don't get me wrong. For small/short-term projects (up to 1-2 years of operation) I'm still using Django and I would recommend it to everyone for that kind of job.
For long-term projects, the first thing that will kill you is the DB migration system (due to unnecessary Python dependencies inside each migration file).

Marcin 
  

Aymeric Augustin

Jun 21, 2016, 8:34:11 AM
to django-d...@googlegroups.com
On 21 Jun 2016, at 12:53, Marcin Nowak <marcin....@gmail.com> wrote:
It looks like it is completely unreliable in async environment, and the worst thing is that I can't force Django to create new connection within the thread/process.

Well, they'll say that Django is not designed for that cases. Maybe. My advise - don't use Django for medium and big projects.

Hello Marcin,

I fail to see how you get from “I’m having trouble using an async framework with Django” to “Django only works for small projects”. Indeed, using Django in an “async” context requires special care, until channels land — quotes around “async” because I don’t know whether you’re talking about gevent or asyncio or something else.

However, I know multiple Django-based projects containing millions of lines of code that took hundreds of man-years to build, maintained by dozens of developers. That works fine. There’s no fundamental reason making Django unsuitable for such projects. It tends to scale better than other technologies thanks to strong conventions and limited use of magic.

Regarding database connections, if you weren’t using Django, you’d have to create a database connection with something like `psycopg2.connect()` and dispose of it with `connection.close()` when you no longer need it. When you’re using the Django ORM, database connections are opened automatically when needed. You still have to close them manually if you aren’t working within the traditional request-response cycle.

If the implicit connection establishment is causing trouble, it shouldn’t be hard to write a custom database backend that doesn’t have this behavior. You would add a method to establish a connection and raise an exception in connection.get_new_connection() if the connection is already established.

You’re writing that you can’t force Django to create a new connection. That’s as simple as connection.ensure_connection(), possibly after connection.close() if you want to close a pre-existing connection. ensure_connection() is a private API, but you’re already in unsupported territory if you’re running Django within an async framework, so that isn’t making your situation significantly worse.

Spreading FUD about Django’s supposed unsuitability for non-trivial projects isn’t going to help with your issues and isn’t going to create a context where people will look forward to helping you, so please consider avoiding gratuitous attacks if you need further help.

Best regards,

-- 
Aymeric.

Marcin Nowak

Jun 21, 2016, 9:03:50 AM
to Django developers (Contributions to Django itself)


On Tuesday, June 21, 2016 at 2:34:11 PM UTC+2, Aymeric Augustin wrote:
 
Regarding database connections, if you weren’t using Django, you’d have to create a database connection with something like `psycopg2.connect()` and dispose of it with `connection.close()` when you no longer need it. When you’re using the Django ORM, database connections are opened automatically when needed. You still have to close them manually if you aren’t working within the traditional request-response cycle.

Yes, I'm using the ORM. The connections are opened silently in the main process/thread.
I've tried to close them manually, but I had some kind of problems (I don't remember what kind), probably with the wrapped `close()`.


If the implicit connection establishment is causing trouble, it shouldn’t be hard to write a custom database backend that doesn’t have this behavior.

Maybe this will help. It's worth trying someday.

 
Spreading FUD about Django’s supposed unsuitability for non-trivial projects isn’t going to help with your issues and isn’t going to create a context where people will look forward to helping you, so please consider avoiding gratuitous attacks if you need further help.

This is not FUD but honest advice. And not an attack but a statement of fact. It's nothing personal.

There are many more problems with Django when used in certain kinds of projects, especially long-term ones. I wrote about those many times and almost always got two answers: 1) do not upgrade, 2) this will not be changed. Oh, and a third: "write it yourself", and so I'm writing and rewriting old good parts of Django and not talking about it (that is the only thing I can do in that case).

But I believe this is only a matter of time. I remember how big the resistance to implementing `model.reload()` was. Seven years of talking about it, right? And here we go, it is available from v1.8 as "refresh_from_db()" :) So let's hope for the best :)
 
Marcin

Kartik Sharma

Mar 23, 2020, 5:30:11 PM
to Django developers (Contributions to Django itself)
So in the end, I have to manually close the database connection after the threads in my thread pool are done executing, even though Django created them automatically when the threads first used the ORM? This is just plain bad design.
