--
You received this message because you are subscribed to the Google Groups "Django developers (Contributions to Django itself)" group.
To unsubscribe from this group and stop receiving emails from it, send an email to django-develop...@googlegroups.com.
To post to this group, send email to django-d...@googlegroups.com.
Visit this group at https://groups.google.com/group/django-developers.
To view this discussion on the web visit https://groups.google.com/d/msgid/django-developers/9c330b4b-ebd7-4abb-b03d-dffa21d245af%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
So is what was stated in the Stack Overflow post, that connections are somehow only closed at the end of a request through the request-finished signal, still the actual behavior?
Are there any best/suggested practices for handling connections on threads that are not part of the request cycle?
Considering that connections are opened automatically for you, it would be great if they were automatically closed/disposed of for you on a thread's death.
No, it would not be great at all; connections could theoretically be shared between threads, etc. In general, Django has no way of knowing when you want to close them. In the end, a "dying" thread which is not properly cleaned up is a bug in your code anyway.
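Since Django can't know when a non-request thread is done with its connection, the thread has to do its own cleanup. A minimal sketch of that pattern, using the stdlib sqlite3 driver as a stand-in so the example is runnable without Django:

```python
import sqlite3
import threading

def worker(results):
    # A connection opened inside the thread (Django opens one implicitly on
    # the first ORM query; sqlite3 stands in here so the sketch runs anywhere).
    conn = sqlite3.connect(":memory:")
    try:
        results.append(conn.execute("SELECT 1").fetchone()[0])
    finally:
        # Explicit cleanup before the thread exits; in a Django worker you
        # would call django.db.connection.close() (or close_old_connections())
        # at this point instead.
        conn.close()

results = []
t = threading.Thread(target=worker, args=(results,))
t.start()
t.join()
```

The try/finally guarantees the close runs even if the task raises, which is exactly the guarantee the request cycle normally provides for you.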
Cheers,
Florian
Not always. For example, on Amazon Elastic Beanstalk, when you either restart the app server or upload a new version, it basically kills Apache and all WSGI processes with a SIGTERM,
so those thread pools are probably killed in a bad way and you don't really have control over that.
You also don't really have control over the life of a thread-pool thread, so a given thread could be gracefully stopped by the pool implementation.
As Aymeric pointed out, it seems those connections are correctly closed most of the time when a thread dies, but for some reason Postgres keeps some connections open.
Are there any rare cases where, even if the thread is stopped, the connection won't be closed? The only thing I can think of is that those threads are never garbage collected, or something like that.
while getting killed with SIGTERM (I don't know if the Postgres protocol has keep-alive support at the protocol level; most likely not). As long as you are not sending a SIGTERM
    from django.core import signals
    from django.db import connections

    def close_old_connections(**kwargs):
        for conn in connections.all():
            conn.close_if_unusable_or_obsolete()

    signals.request_started.connect(close_old_connections)
    signals.request_finished.connect(close_old_connections)
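The snippet above hooks cleanup to request boundaries. Outside the request cycle there are no such signals, so a worker has to invoke the same cleanup at its own task boundaries. A runnable stand-in (with the real Django helper replaced by a recording stub, since Django isn't imported here):

```python
calls = []

def close_old_connections(**kwargs):
    # Stub standing in for Django's real helper, so the pattern is testable.
    calls.append("cleanup")

def run_task(task):
    close_old_connections()      # mirrors request_started
    try:
        return task()
    finally:
        close_old_connections()  # mirrors request_finished

result = run_task(lambda: 2 + 2)
```

Wrapping each unit of work this way gives a background thread the same connection hygiene a request gets for free.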
So just to be sure, is SIGTERM actually propagated to Python code so it can gracefully kill all threads, garbage collect, and close connections? Would a SIGKILL actually prevent any kind of cleanup, leaving a chance for Python/Django to leave some connections open?
Maybe this is a Postgres issue instead that happened for some very odd reason.
Finally, would it be possible, through some kind of callback on the thread-local object, to close the connection before a thread dies? This would certainly help, rather than waiting for the connection to be garbage collected.
Now that you mention the Django code, does connection.close_if_unusable_or_obsolete() always close the connection, or does it also handle the case where persistent connections are used (so that the connection is not closed if it is alive and in a good state)?
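Per the documented behavior, it does handle persistent connections: the connection is closed only when it has errored or has outlived CONN_MAX_AGE, and a healthy persistent connection is kept open. A simplified, runnable stand-in (FakeConnection is hypothetical; it only mirrors that documented decision logic, not Django's implementation):

```python
import time

class FakeConnection:
    """Stand-in mirroring the documented behavior of Django's
    close_if_unusable_or_obsolete(): close only when the connection has
    errored or has outlived CONN_MAX_AGE."""

    def __init__(self, max_age):
        # Expiry deadline derived from CONN_MAX_AGE (None = live forever).
        self.close_at = None if max_age is None else time.monotonic() + max_age
        self.errors_occurred = False
        self.closed = False

    def close_if_unusable_or_obsolete(self):
        obsolete = self.close_at is not None and time.monotonic() >= self.close_at
        if self.errors_occurred or obsolete:
            self.closed = True

healthy = FakeConnection(max_age=300)    # e.g. CONN_MAX_AGE = 300
healthy.close_if_unusable_or_obsolete()  # alive and not obsolete: kept open

broken = FakeConnection(max_age=300)
broken.errors_occurred = True
broken.close_if_unusable_or_obsolete()   # errored: closed
```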
Yes, SIGTERM is a signal Python should handle nicely; SIGKILL will just nuke the process from the earth and prevent any cleanup, I think. Note that the OS will clean up, though (sooner or later). As for the open connections: check on Postgres where they are coming from, then check on that machine which process they belong to, and maybe use gdb to find out more.
The only issue I see here on the postgres side would be that you have very long timeouts configured.
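The SIGTERM-versus-SIGKILL difference can be demonstrated in a few lines: a SIGTERM handler runs in Python and gets a chance to clean up, whereas SIGKILL never reaches user code at all. A sketch (the Django call named in the comment is what you would put there in a real process; it isn't imported here):

```python
import os
import signal

events = []

def on_sigterm(signum, frame):
    # Cleanup hook: in a Django process you could call
    # django.db.connections.close_all() here before exiting.
    events.append("connections closed")

signal.signal(signal.SIGTERM, on_sigterm)

# Deliver SIGTERM to ourselves; unlike SIGKILL, it reaches Python code.
os.kill(os.getpid(), signal.SIGTERM)
```

After the `os.kill` call returns, CPython runs the pending handler before executing further bytecode, so the cleanup records before the process continues.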
On Friday, June 3, 2016 at 4:41:16 (UTC-3), Florian Apolloner wrote:
Yes, SIGTERM is a signal Python should handle nicely, SIGKILL will just nuke the process from earth and prevent any cleanup I think. Note that the OS will clean up though (sooner or later). As for the open connections: Check on postgres where from they are and then check on that machine to which process they belong and maybe use gdb to find out more.
When I saw the issue, the first thing I checked was the source IP, and it was the Amazon machine as expected; I'm almost sure those connections were being used by the thread pools.
The only issue I see here on the postgres side would be that you have very long timeouts configured.
It looks like Postgres doesn't have any kind of idle-connection timeout, and you need to use PgBouncer for that.
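For reference, the relevant PgBouncer knob is `server_idle_timeout`. A hypothetical pgbouncer.ini fragment (the values are illustrative, not recommendations):

```ini
; Close server-side connections that have sat idle longer than 10 minutes.
[pgbouncer]
server_idle_timeout = 600
; Reap clients that vanished without a clean disconnect (0 disables this).
client_idle_timeout = 0
```

This reaps exactly the kind of orphaned connection described above: one whose owning thread died without closing it.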
Aymeric, I never said anything about connection pooling; I'm talking about thread pooling to perform async work.
From version to version it turns more and more into a handy tool for blog creators/web designers.
It looks like it is completely unreliable in an async environment, and the worst thing is that I can't force Django to create a new connection within the thread/process.
Well, they'll say that Django is not designed for such cases. Maybe. My advice: don't use Django for medium and big projects.
Regarding database connections, if you weren’t using Django, you’d have to create a database connection with something like `psycopg2.connect()` and dispose of it with `connection.close()` when you no longer need it. When you’re using the Django ORM, database connections are opened automatically when needed. You still have to close them manually if you aren’t working within the traditional request-response cycle.
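The ownership rule Aymeric describes, that whoever opens a connection must close it, can be sketched with `contextlib.closing` (the stdlib sqlite3 driver stands in for `psycopg2.connect()` so the example runs without a database server):

```python
import sqlite3
from contextlib import closing

# Stand-in for psycopg2.connect(...): the same ownership rule applies --
# whoever opens the connection is responsible for closing it.
with closing(sqlite3.connect(":memory:")) as conn:
    value = conn.execute("SELECT 42").fetchone()[0]
# The connection is closed here, whether or not the query raised.
```

Outside the request-response cycle, a Django worker needs an equivalent explicit close, since nothing else will do it on its behalf.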
If the implicit connection establishment is causing trouble, it shouldn’t be hard to write a custom database backend that doesn’t have this behavior.
Spreading FUD about Django’s supposed unsuitability for non-trivial projects isn’t going to help with your issues and isn’t going to create a context where people will look forward to helping you, so please consider avoiding gratuitous attacks if you need further help.