Can anyone offer a brief account of how Channels uses threads to service multiple requests?
My understanding is that there are multiple workers, and I imagine each is a thread which can handle one request (a Django view-style request) at a time. Is this correct? I also don't quite understand how this is possible within one Python process, given the Global Interpreter Lock.
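To make my assumption concrete, here is a toy sketch of the model I have in mind (plain `threading`/`queue` from the standard library, not actual Channels code): several worker threads block on one shared queue, and each handles one request at a time.

```python
import queue
import threading

def handle(message):
    # Stand-in for a Django-view-style consumer: processes one
    # request from start to finish before taking the next.
    return f"handled {message}"

def worker(inbox, results):
    # Each worker thread blocks on the shared queue. A None
    # sentinel tells the worker to shut down.
    while True:
        message = inbox.get()
        if message is None:
            inbox.task_done()
            break
        results.append(handle(message))  # list.append is thread-safe
        inbox.task_done()

inbox = queue.Queue()
results = []
threads = [threading.Thread(target=worker, args=(inbox, results))
           for _ in range(4)]
for t in threads:
    t.start()

for i in range(10):
    inbox.put(f"request-{i}")

inbox.join()            # wait until every request is handled
for _ in threads:
    inbox.put(None)     # one sentinel per worker
for t in threads:
    t.join()

print(len(results))     # prints 10
```

Is this roughly what Channels does, with the GIL released while workers wait on I/O, or is the real mechanism different?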
Basically I am trying to reason about race conditions and scaling, and I don't have enough understanding to do it.
Apologies if this is documented elsewhere, but I've so far not been able to find it.
- Jonathan