Thoughts on ASGI or Why I don't see myself ever wanting to use ASGI


Donald Stufft

May 6, 2016, 12:11:57 PM
To: django-d...@googlegroups.com
Let me just start out by saying that I think ASGI is reasonably designed for
the pattern that it is attempting to enable. That being said, I am of the
belief that the fundamental way that ASGI is designed to work misses the mark
for the kind of feature that people should be using in the general case.

First, here are the general assumptions I have from my reading of the ASGI
spec and the code that I've looked at. It's entirely possible that I've
missed something or am viewing some component of it incorrectly, since I
have not steeped myself in ASGI, so I figure it'd be useful to give a
rundown of my mental model of ASGI.

Some sort of connection comes in on an edge server (Daphne in this case),
which is written in such a way as to be highly concurrent (likely to be
async in some fashion). From there it takes the incoming connection, parses
it, and turns it into a message (or many messages, for chunked encoding or
websockets). Once it has turned it into a message, it pushes it onto some
sort of queue, where you have a number of readers pulling messages off that
queue, processing them, and then putting some kind of response back on a
different queue, where the original edge server will be listening and can
pull that off the queue, turn it back into whatever format the original
connection expected on the wire, and send it off.
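
A minimal sketch of that flow as I understand it (the ``channel_layer``
calls here are illustrative shorthand, not the exact channels API):

    def worker_loop(channel_layer, handle_request):
        # Hypothetical worker: pull request messages that the edge server
        # (Daphne) pushed onto the queue, run ordinary synchronous Django
        # code, then push the response back on the reply channel named in
        # the message, where the edge server is listening.
        while True:
            channel, message = channel_layer.receive(["http.request"],
                                                     block=True)
            if channel is None:
                continue
            response = handle_request(message)  # sync view code
            channel_layer.send(message["reply_channel"], {
                "status": response["status"],
                "content": response["content"],
            })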

This has a number of purported benefits such as:

* Providing a mechanism for websockets in Django (this is the big one).
* Allowing background tasks to be written and run in the same process as
Django.
* Making it easier for people to do graceful restarts of their code base.
* Support for long polling (since the HTTP connection only stays open in the
async thread).
* Doing all of the above, while still being able to write sync code in Django.

Ok, so now let's break down why I don't personally like the fundamentals of
what ASGI is and why I don't see myself ever using it or wanting to use it.

In short, I think that the message bus adds an additional layer of complexity
that makes everything a bit more complicated for very little actual gain over
other possible, but less complex, solutions. This message bus also removes a
key part of the control that the server which is *actually* receiving the
connection has over the lifetime and processing of the eventual request.

For example, in traditional HTTP servers, where you have an open connection
associated with whatever view code you're running, whenever the client
disconnects you're given a few options for what you can do, but the most
common option in my experience is that once the connection has been lost, the
HTTP server cancels the execution of whatever view code it had been running
[1]. This allows a single process to serve more by shedding the load of
connections that have since disconnected for some reason. In ASGI, however,
since there's no way to remove an item from the queue or cancel it once it
has begun to be processed by a worker process, you lose out on this ability
to shed the load of processing a request once it has already been scheduled.
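
For contrast, a plain-WSGI streaming sketch of that load shedding (the
close() guarantee here is PEP 3333's; how the dead socket surfaces is
server-specific, see [1]):

    import time

    def app(environ, start_response):
        # Each chunk is produced lazily. If the client disconnects, the
        # server's write to the dead socket fails, it stops iterating, and
        # close() is called on the iterator (per PEP 3333), so the
        # remaining per-chunk work is simply never done.
        start_response("200 OK", [("Content-Type", "text/plain")])

        def body():
            for i in range(1000):
                time.sleep(0.1)  # stand-in for real per-chunk work
                yield ("chunk %d\n" % i).encode()

        return body()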

This additional complexity incurred by the message bus also ends up requiring
additional complexity layered onto ASGI to try and re-invent some of the
"natural" features of TCP and/or HTTP (or whatever the underlying protocol
is). An example of this would be the ``order`` keyword in the WebSocket spec,
something that isn't required and just naturally happens whenever you're
directly connected to a websocket, because the ``order`` is just whatever
bytes come in off the wire. This also gets exposed in other features, like
backpressure: ASGI didn't originally have a concept of allowing the queue to
apply backpressure to the web connection, but now Andrew has come around to
the idea of adding a bound to the queue (which is good!). If the indirection
of the message bus hadn't been added, though, backpressure would have
occurred naturally: once enough things were processing to block new
connections from being ``accept()``-ed, the backlog would eventually fill up
and new connections would block waiting to connect. It's good that Andrew is
adding the ability to bound the queue, but that is something that is going to
require care to tune in each individual deployment (and will need to be
regularly re-evaluated), rather than something that just occurs naturally as
a consequence of the design of the system.
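
For what it's worth, the "natural" backpressure described here is just the
kernel's bounded accept backlog; a bare-bones sketch:

    import socket

    def serve_forever(handle):
        # While the process is busy inside handle() and not calling
        # accept(), the kernel queues roughly `backlog` un-accepted
        # connections; beyond that, new clients stall or fail to connect.
        # That *is* the backpressure -- nothing to tune at the app layer.
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.bind(("0.0.0.0", 8000))
        sock.listen(128)  # the bounded backlog
        while True:
            conn, addr = sock.accept()
            handle(conn)  # any synchronous handler taking a socket
            conn.close()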

Anytime you add a message bus you need to make a few trade-offs; the
particular trade-off that ASGI made is to prefer "at most once" delivery of
messages and low latency over guaranteed delivery. This choice is likely one
of the sanest ones you can make for the design of ASGI, but with that
trade-off you end up with new problems that don't exist otherwise. For
example, HTTP/1 has the concept of pipelining, which allows you to make
several HTTP requests on a single HTTP connection without waiting for the
responses before sending each one. Given the nature of ASGI it would be very
difficult to actually support this feature without either violating the RFC
or forcing either Daphne or the queue to buffer potentially huge responses
while waiting for an earlier request to finish, whereas you get this for free
using either async IO (you just don't await the result of the second request
until the first request has been processed) or WSGI with generators (you just
don't iterate over the result until you're ready for it).
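
A sketch of the async IO version of that "for free" behavior
(``read_one_request`` and ``handle_request`` are hypothetical stand-ins for
a parser and application code):

    import asyncio

    async def handle_connection(reader, writer,
                                read_one_request, handle_request):
        # Requests the client pipelined behind this one simply sit unread
        # in the socket buffer until we're ready; awaiting each response
        # before reading the next request keeps ordering without the app
        # buffering responses anywhere.
        while True:
            request = await read_one_request(reader)
            if request is None:  # connection closed
                break
            response = await handle_request(request)
            writer.write(response)
            await writer.drain()  # flow control toward a slow client
        writer.close()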

I believe the introduction of a message bus here makes things inherently more
fragile. In order to reasonably serve websockets you're now talking about a
total of three different processes that need to be run (Daphne, Redis, and
Django), each of which will exhibit its own failure conditions and introduce
additional points of failure. Now, this in itself isn't the worst thing,
because that's often unavoidable anytime you scale beyond a single process,
but ASGI adds that complication much sooner than more traditional solutions
do.

ASGI purports to make it easier to gracefully restart your servers, by making
it possible to restart the worker servers (since there are no long-lived open
connections to them) and simply spin up new ones. However, that's not really
the whole story, because while that is true, it only holds as long as your
code changes don't touch something that Daphne needs to be aware of in order
to process incoming requests. As soon as Daphne needs to be restarted you're
back in the same boat of needing another solution for graceful restarts, and
since Daphne depends on project-specific code, it's going to need to be
restarted much more frequently than other solutions that don't. It appears to
me that it would be difficult to automatically determine whether or not
Daphne needs a restart on any particular deployment, so it will be common for
people to just restart the whole stack anyway.

So what sort of solution would I personally advocate had I the time or energy
to do so? I would look towards what sort of pure Python API (like WSGI itself)
could be added to allow a web server to pass websockets down into Django. I
admit that in some cases people would then need to layer on their own message
buses (since that's just about the only reasonable way to implement something
like Group().send()) but even here they'd be able to get added gains and "for
free" features by utilizing something that specializes in this sort of
multicast type of message (a pub/sub message bus more or less). Of course
Of course, currently no web servers would support whatever this new "WSGI but
for WebSockets" would be, so you'd need to implement something like Daphne
that could handle it in the interim (or possibly forever if nobody
implemented it), but that's the same case as with ASGI now.
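
To be concrete about the shape such an API might take, here is a purely
hypothetical sketch (none of these names come from any existing spec):

    def websocket_app(environ, receive, send):
        # Hypothetical per-connection callable: the server owns the socket
        # and drives this code (e.g. in a thread); receive() blocks for the
        # next frame and returns None on disconnect, send() writes a frame
        # back. Ordering and backpressure come straight from the socket.
        send("welcome")
        while True:
            frame = receive()
            if frame is None:  # client went away
                return
            send("echo: " + frame)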

Scaling out to multiple processes and graceful restarts would be handled the
way they are today: you'd have some master process that isn't specific to
the Django code (like Daphne is) that would spin up new processes, start
sending traffic to them, and then close out the old processes. This
generalizes out past a single machine too, where you'd have something like
HAProxy load balancing between machines, able to gracefully stop sending
requests to one instance and start sending them to a new instance. For
WebSockets, anytime you have a persistent connection to your worker you'll
need some way to trigger your clients to disconnect and reconnect (so they
get scheduled onto the new server/process), but that's something you'll need
with ASGI anyways anytime you need to restart Daphne (and since the thing
initiating the restart there is tied to your application code, a hook can be
provided that gets called on shutdown that lets the application do some
application-specific thing to tell people to reconnect).

In this solution, since everything is just HTTP (or WebSockets, or whatever)
all the way down, you end up getting to reuse all of the battle-tested pieces
that already exist, like HAProxy. It's also easier to simply drop in another
piece, possibly written in another language or another technology, since
everything in the stack speaks HTTP/WebSocket and you don't have to go and
teach, say, Erlang how to speak ASGI.

[1] This gets exposed in a variety of ways in different servers. In gunicorn
it shows up as a SystemExit exception; in uWSGI I believe it shows up as an
IOError. In something like Twisted or asyncio it would likely show up as a
CancelledError.


-----------------
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA


Aymeric Augustin

May 6, 2016, 1:09:48 PM
To: django-d...@googlegroups.com
Hello Donald, all,

Some thoughts inline below.

> On 06 May 2016, at 18:11, Donald Stufft <don...@stufft.io> wrote:
>
> For example, in traditional HTTP servers, where you have an open connection
> associated with whatever view code you're running, whenever the client
> disconnects you're given a few options for what you can do, but the most
> common option in my experience is that once the connection has been lost,
> the HTTP server cancels the execution of whatever view code it had been
> running [1]. This allows a single process to serve more by shedding the
> load of connections that have since disconnected for some reason. In ASGI,
> however, since there's no way to remove an item from the queue or cancel it
> once it has begun to be processed by a worker process, you lose out on this
> ability to shed the load of processing a request once it has already been
> scheduled.

In theory this effect is possible. However, I don't think it will make a
measurable difference in practice. A Python server will usually process
requests quickly and push the response to a reverse-proxy. It should have
finished processing the request by the time it's reasonable to assume the
client has timed out.

This would only be a problem when serving extremely large responses in Python,
which is widely documented as a performance anti-pattern that must be avoided
at all costs. So if this effect happens, you have far worse problems :-)


> This additional complexity incurred by the message bus also ends up requiring
> additional complexity layered onto ASGI to try and re-invent some of the
> "natural" features of TCP and/or HTTP (or whatever the underlying protocol is).
> An example of this would be the ``order`` keyword in the WebSocket spec,
> something that isn't required and just naturally happens whenever you're
> directly connected to a websocket because the ``order`` is just whatever bytes
> come in off the wire.

I'm somewhat concerned by this risk. Out-of-order processing of messages
coming from a single connection could cause surprising bugs. This is likely
one of the big tradeoffs of the async-to-sync conversion that channels
performs. I assume it will have to be documented.

Could someone confirm that this doesn't happen for regular HTTP/1.1 requests?
I suppose channels encodes each HTTP/1.1 request as a single message.

Note that out of order processing is already possible without channels e.g.
due to network latency or high load on a worker.

The design of channels seems similar to HTTP/2 — a bunch of messages sent in
either direction with no pretense to synchronize communications. This is a
scary model but I guess we'll have to live with it anyway...


> Anytime you add a message bus you need to make a few trade-offs; the
> particular trade-off that ASGI made is to prefer "at most once" delivery of
> messages and low latency over guaranteed delivery.

That’s already what happens today, especially on mobile connections. Many
requests or responses don’t get delivered. And it isn’t even a trade-off
against speed.


> This choice is likely one of the sanest ones you can make for the design of
> ASGI, but with that trade-off you end up with new problems that don't exist
> otherwise. For example, HTTP/1 has the concept of pipelining, which allows
> you to make several HTTP requests on a single HTTP connection without
> waiting for the responses before sending each one. Given the nature of ASGI
> it would be very difficult to actually support this feature without either
> violating the RFC or forcing either Daphne or the queue to buffer
> potentially huge responses while waiting for an earlier request to finish,
> whereas you get this for free using either async IO (you just don't await
> the result of the second request until the first request has been
> processed) or WSGI with generators (you just don't iterate over the result
> until you're ready for it).

In this case, daphne forwarding to channels seems to be in exactly the same
position as, say, nginx forwarding to gunicorn. At worst, daphne can just
wait until a response is sent before passing the next request in the pipeline
to channels. At best, it can be smarter.

Besides I think pipelining is primarily targeted at static content which
shouldn't be served through Django in general.

Does anyone know if HTTP/2 allows sending responses out of order? This would
make sub-optimal handling of HTTP/1.1 pipelining less of a concern going
forwards. We could live with a less efficient implementation.

Virtually nothing done with Django returns a generator, except pathological
cases that should really be implemented differently (says the guy who wrote
StreamingHttpResponse and never actually used it). So I’m not exceedingly
concerned about this use case. It should work, though, even if it’s slow.


> I believe the introduction of a message bus here makes things inherently
> more fragile. In order to reasonably serve websockets you're now talking
> about a total of three different processes that need to be run (Daphne,
> Redis, and Django), each of which will exhibit its own failure conditions
> and introduce additional points of failure. Now, this in itself isn't the
> worst thing, because that's often unavoidable anytime you scale beyond a
> single process, but ASGI adds that complication much sooner than more
> traditional solutions do.

Yes, that’s my biggest concern with channels. However I haven’t seen anyone
suggesting fewer than three systems:

- frontend + queue + worker (e.g. channels)
- regular HTTP + websockets + pub/sub (e.g. what Mark Lavin described)

I share Mark’s concerns about handling short- and long-lived connections in
the same process. Channels solves this elegantly by converting long-lived
connections to a series of events to handle.


> So what sort of solution would I personally advocate had I the time or energy
> to do so? I would look towards what sort of pure Python API (like WSGI itself)
> could be added to allow a web server to pass websockets down into Django.

This sounds a lot like the proof-of-concept I demonstrated at DjangoCon US
2013, eventually reaching the conclusion that this wasn't a workable model,
mainly due to:

- the impossibility of mixing async and sync code in Python, because of the
explicit nature of async code written on top of asyncio (which I still
believe is the right choice even though it's a problem for Django).

- the great difficulty of implementing the ORM's APIs on top of an async
solution (although I came up with new ideas since then; also Amber Brown
showed an interesting proof-of-concept on top of Twisted at Django under
the Hood 2015).

I think it's important to keep a straightforward WSGI backend in case we crack
this problem and build an async story that depends on asyncio after dropping
support for Python 2.

I don't think merging channels as it currently stands hinders this possibility
in any way, on the contrary. The more Django is used for serving HTTP/2 and
websockets, the more we can learn.


Sorry Andrew, that was yet another novel to read… I hope it helps anyway…

--
Aymeric.

Andrew Godwin

May 6, 2016, 1:29:40 PM
To: django-d...@googlegroups.com
On Fri, May 6, 2016 at 10:09 AM, Aymeric Augustin <aymeric....@polytechnique.org> wrote:
Hello Donald, all,

Some thoughts inline below.

> On 06 May 2016, at 18:11, Donald Stufft <don...@stufft.io> wrote:
>
> For example, in traditional HTTP servers, where you have an open connection
> associated with whatever view code you're running, whenever the client
> disconnects you're given a few options for what you can do, but the most
> common option in my experience is that once the connection has been lost,
> the HTTP server cancels the execution of whatever view code it had been
> running [1]. This allows a single process to serve more by shedding the
> load of connections that have since disconnected for some reason. In ASGI,
> however, since there's no way to remove an item from the queue or cancel it
> once it has begun to be processed by a worker process, you lose out on this
> ability to shed the load of processing a request once it has already been
> scheduled.

In theory this effect is possible. However, I don't think it will make a
measurable difference in practice. A Python server will usually process
requests quickly and push the response to a reverse-proxy. It should have
finished processing the request by the time it's reasonable to assume the
client has timed out.

This would only be a problem when serving extremely large responses in Python,
which is widely documented as a performance anti-pattern that must be avoided
at all costs. So if this effect happens, you have far worse problems :-)

I will also point out that I've introduced channel capacity and backpressure into the ASGI spec now (it's in three of the four backends, and soon to be in the fourth) to help combat some of this problem, specifically relating to an overload of requests or very slow response readers.
 


> This additional complexity incurred by the message bus also ends up requiring
> additional complexity layered onto ASGI to try and re-invent some of the
> "natural" features of TCP and/or HTTP (or whatever the underlying protocol is).
> An example of this would be the ``order`` keyword in the WebSocket spec,
> something that isn't required and just naturally happens whenever you're
> directly connected to a websocket because the ``order`` is just whatever bytes
> come in off the wire.

I'm somewhat concerned by this risk. Out-of-order processing of messages
coming from a single connection could cause surprising bugs. This is likely
one of the big tradeoffs of the async-to-sync conversion that channels
performs. I assume it will have to be documented.

Could someone confirm that this doesn't happen for regular HTTP/1.1 requests?
I suppose channels encodes each HTTP/1.1 request as a single message.

Yes, it encodes each request as a single main message, and the request body (if large enough) is chunked onto a separate "body" channel for that specific request; since only one reader touches that channel, it will get the messages in order.
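
For reference, an ``http.request`` message has roughly this shape
(illustrative and trimmed; the spec of the time has the full key set):

    # Roughly the shape of a channels-era `http.request` message.
    {
        "reply_channel": "http.response!c134x7y73bsg",  # response goes here
        "http_version": "1.1",
        "method": "GET",
        "path": "/chat/",
        "query_string": b"",
        "headers": [(b"host", b"example.com")],
        "body": b"",           # first chunk of the body
        "body_channel": None,  # set to a channel name when chunks follow
    }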

It is unfortunate that in-order processing requires a bit more work, but the alternative is having to pin WebSocket connections to a single worker server, which is not great and kind of defeats the point of having a system like this.

I'd also like to point out that if a site has a very complex WebSocket protocol I would likely encourage them to write their own interface server to move some of the more order-sensitive logic closer to the client, and then just have that code generate higher-level events back into Django; Channels is very much a multi-protocol system, not just for WebSockets and HTTP.
 

Note that out of order processing is already possible without channels e.g.
due to network latency or high load on a worker.

The design of channels seems similar to HTTP/2 — a bunch of messages sent in
either direction with no pretense to synchronize communications. This is a
scary model but I guess we'll have to live with it anyway...

Yes, it's pretty similar to HTTP/2, which is not entirely a mistake. If you're going to take the step and separate the processes out, I think this model is the most reasonable one to take.
 

Does anyone know if HTTP/2 allows sending responses out of order? This would
make sub-optimal handling of HTTP/1.1 pipelining less of a concern going
forwards. We could live with a less efficient implementation.

It does; you can send responses in any order you like, as long as you have already received the matching request. You can also push other requests _to the client_ with their own premade responses before you send a main response (Server Push).
 


> I believe the introduction of a message bus here makes things inherently
> more fragile. In order to reasonably serve websockets you're now talking
> about a total of three different processes that need to be run (Daphne,
> Redis, and Django), each of which will exhibit its own failure conditions
> and introduce additional points of failure. Now, this in itself isn't the
> worst thing, because that's often unavoidable anytime you scale beyond a
> single process, but ASGI adds that complication much sooner than more
> traditional solutions do.

Yes, that’s my biggest concern with channels. However I haven’t seen anyone
suggesting fewer than three systems:

- frontend + queue + worker (e.g. channels)
- regular HTTP + websockets + pub/sub (e.g. what Mark Lavin described)

I share Mark’s concerns about handling short- and long-lived connections in
the same process. Channels solves this elegantly by converting long-lived
connections to a series of events to handle.

I'm also working on an IPC-based channel layer (currently at https://github.com/andrewgodwin/asgi_ipc) that will let you drop one moving part, provided you can host everything on one machine. This might be more suited to those who want to run small clusters of interface and worker servers with HAProxy balancing over them, which I think is how I might want to run things.
 

This sounds a lot like the proof-of-concept I demonstrated at DjangoCon US
2013, eventually reaching the conclusion that this wasn't a workable model,
mainly due to:

- the impossibility of mixing async and sync code in Python, because of the
  explicit nature of async code written on top of asyncio (which I still
  believe is the right choice even though it's a problem for Django).

- the great difficulty of implementing the ORM's APIs on top of an async
  solution (although I came up with new ideas since then; also Amber Brown
  showed an interesting proof-of-concept on top of Twisted at Django under
  the Hood 2015).

I think it's important to keep a straightforward WSGI backend in case we crack
this problem and build an async story that depends on asyncio after dropping
support for Python 2.

I agree, and I would love to see more native async code as we go forward that interoperates with both async webservers and async channel worker classes.

There is not yet a standard for "async WSGI", and I suspect ASGI can at least partially fulfill this role with its message formats (which are an improvement on the WSGI environ for HTTP, as things like encoding are fully defined, and which also provide a WebSocket format that doesn't otherwise exist), if not by using something very close to the current API; after all, it's basically just an eventing API.


I don't think merging channels as it currently stands hinders this possibility
in any way, on the contrary. The more Django is used for serving HTTP/2 and
websockets, the more we can learn.


Sorry Andrew, that was yet another novel to read… I hope it helps anyway…

No, thanks Aymeric, this was great to read. The more feedback the better.

Andrew 

Carl Meyer

May 6, 2016, 1:30:53 PM
To: django-d...@googlegroups.com
On 05/06/2016 11:09 AM, Aymeric Augustin wrote:
> I think it's important to keep a straightforward WSGI backend in case we crack
> this problem and build an async story that depends on asyncio after dropping
> support for Python 2.
>
> I don't think merging channels as it currently stands hinders this possibility
> in any way, on the contrary. The more Django is used for serving HTTP/2 and
> websockets, the more we can learn.

This summarizes my feelings about merging channels. It feels a bit
experimental to me, and I'm not yet convinced that I'd choose to use it
myself (but I'd be willing to try it out). As long as it's marked as
provisional for now and we maintain straight WSGI as an option, so
nobody's forced into it, we can maybe afford to experiment and learn
from it.

ISTM that the strongest argument in favor is that I think it _is_
significantly easier for a casual user to build and deploy their first
websockets app using Channels than using any other currently-available
approach with Django. Both channels and Django+whatever-async-server
require managing multiple servers, but channels makes a lot of decisions
for you and makes it really easy to keep all your code together. And (as
long as we still support plain WSGI) it doesn't remove the flexibility
for more advanced users who prefer different tradeoffs to still choose
other approaches. There's a lot to be said for that combination of
"accessible for the new user, still flexible for the advanced user", IMO.

Carl


Marc Tamlyn

May 6, 2016, 1:45:34 PM
To: django-d...@googlegroups.com
ISTM that the strongest argument in favor is that I think it _is_
significantly easier for a casual user to build and deploy their first
websockets app using Channels than using any other currently-available
approach with Django. Both channels and Django+whatever-async-server
require managing multiple servers, but channels makes a lot of decisions
for you and makes it really easy to keep all your code together. And (as
long as we still support plain WSGI) it doesn't remove the flexibility
for more advanced users who prefer different tradeoffs to still choose
other approaches. There's a lot to be said for that combination of
"accessible for the new user, still flexible for the advanced user", IMO.

This is exactly my opinion, and I think it will be shared by the vast majority of Django users. Obviously there's an issue if, over a certain size, you have to rearchitect your system because ASGI/Channels isn't good enough, but that's something we can't learn about until this has been used by enough people for long enough. The most important things for me are:
- Does this avoid breaking existing Django deployments, with whatever esoteric systems people have (yes)
- Is it easy to get started with an exciting new feature (I believe so)

I'm going to try and build a prototype with Channels and verify point two. If that goes well, I feel we should merge the feature, with a strong provisional status and a call to the community to try and break it.

Andrew Godwin

May 6, 2016, 1:46:06 PM
To: django-d...@googlegroups.com
I just want to cover a few more things I didn't get to in my reply to Aymeric.

On Fri, May 6, 2016 at 9:11 AM, Donald Stufft <don...@stufft.io> wrote:

In short, I think that the message bus adds an additional layer of complexity
that makes everything a bit more complicated for very little actual gain over
other possible, but less complex, solutions. This message bus also removes a
key part of the control that the server which is *actually* receiving the
connection has over the lifetime and processing of the eventual request.

True; however, having a message bus/channel abstraction also removes a layer of complexity: caring about socket handling, and about sinking your performance by doing even a slightly blocking operation.

In an ideal world we'd have some magical language that let us all write amazing async code and that detected all possible deadlocks or livelocks before they happened, but that's not yet the case, and I think the worker model has been a good substitute for it in software design generally.
 

For example, in traditional HTTP servers, where you have an open connection
associated with whatever view code you're running, whenever the client
disconnects you're given a few options for what you can do, but the most
common option in my experience is that once the connection has been lost, the
HTTP server cancels the execution of whatever view code it had been running
[1]. This allows a single process to serve more by shedding the load of
connections that have since disconnected for some reason. In ASGI, however,
since there's no way to remove an item from the queue or cancel it once it
has begun to be processed by a worker process, you lose out on this ability
to shed the load of processing a request once it has already been scheduled.

But as soon as you introduce a layer like Varnish into the equation, you've lost this anyway, as you're no longer seeing the true client socket. Abandoned requests are an existing problem with HTTP and WSGI; I see them in our logs all the time.
 

This additional complexity incurred by the message bus also ends up requiring
additional complexity layered onto ASGI to try and re-invent some of the
"natural" features of TCP and/or HTTP (or whatever the underlying protocol
is). An example of this would be the ``order`` keyword in the WebSocket spec,
something that isn't required and just naturally happens whenever you're
directly connected to a websocket, because the ``order`` is just whatever
bytes come in off the wire. This also gets exposed in other features, like
backpressure: ASGI didn't originally have a concept of allowing the queue to
apply backpressure to the web connection, but now Andrew has come around to
the idea of adding a bound to the queue (which is good!). If the indirection
of the message bus hadn't been added, though, backpressure would have
occurred naturally: once enough things were processing to block new
connections from being ``accept()``-ed, the backlog would eventually fill up
and new connections would block waiting to connect. It's good that Andrew is
adding the ability to bound the queue, but that is something that is going to
require care to tune in each individual deployment (and will need to be
regularly re-evaluated), rather than something that just occurs naturally as
a consequence of the design of the system.

Client buffers in OSs were also manually tuned to begin with; I suspect we can home in on how to make this work best over time, once we have more experience with how it runs in the wild. I don't disagree that I'm reinventing existing features of TCP sockets, but it's also a mix of UDP features too; there's a reason a lot of modern protocols are built on UDP instead of TCP, and I'm trying to strike the right balance.
 

Anytime you add a message bus you need to make a few trade-offs; the
particular trade-off that ASGI made is to prefer "at most once" delivery of
messages and low latency over guaranteed delivery. This choice is likely one
of the sanest ones you can make for the design of ASGI, but with that
trade-off you end up with new problems that don't exist otherwise. For
example, HTTP/1 has the concept of pipelining, which allows you to make
several HTTP requests on a single HTTP connection without waiting for the
responses before sending each one. Given the nature of ASGI it would be very
difficult to actually support this feature without either violating the RFC
or forcing either Daphne or the queue to buffer potentially huge responses
while waiting for an earlier request to finish, whereas you get this for free
using either async IO (you just don't await the result of the second request
until the first request has been processed) or WSGI with generators (you just
don't iterate over the result until you're ready for it).

Even with asyncio that data has to be buffered somewhere, whether it's in the client transmit buffer, the receiving OS buffer, or Python memory. If Daphne refuses to read() more from a socket it got an HTTP/1.1 pipelined request on before the response to the first one comes back, that would achieve the same effect as asyncio, no? (This may in fact be what it does already; I need to check the twisted.web pipeline handling.)
 

ASGI purports to make it easier to gracefully restart your servers, by making
it possible to restart the worker servers (since there are no long-lived open
connections to them) and simply spin up new ones. However, that's not really
the whole story, because while that is true, it only holds as long as your
code changes don't touch something that Daphne needs to be aware of in order
to process incoming requests. As soon as Daphne needs to be restarted you're
back in the same boat of needing another solution for graceful restarts, and
since Daphne depends on project-specific code, it's going to need to be
restarted much more frequently than other solutions that don't. It appears to
me that it would be difficult to automatically determine whether or not
Daphne needs a restart on any particular deployment, so it will be common for
people to just restart the whole stack anyway.

Daphne only depends on one tiny piece of project code, the channel layer configuration. I don't imagine that changing nearly as often as actual business logic. You're right that once there's a new Daphne version or that config changes, it needs a restart too, but that's not going to be very common.
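
For concreteness, that shared configuration is essentially a single settings
block, something like this (illustrative):

    # Channels-1.x-style settings: roughly the only project code Daphne
    # needs to read. Swapping BACKEND for "asgi_ipc.IPCChannelLayer" is
    # what drops the Redis process on a single-machine deployment.
    CHANNEL_LAYERS = {
        "default": {
            "BACKEND": "asgi_redis.RedisChannelLayer",
            "CONFIG": {"hosts": [("localhost", 6379)]},
            "ROUTING": "myproject.routing.channel_routing",
        },
    }
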
I agree with the desire to use things like HAProxy in the stack, but I think your idea of handling WebSockets natively in Django is far more difficult and fragile than Channels is, mostly due to our ten-year history of synchronous code. We would have to audit a large amount of the codebase to ensure it was all async-compatible, not to mention drop Python 2 support, before we'd even get close.

I'm not saying my solution is perfect; I'm saying it's pragmatic given our current position and likely future position. Channels adds a spectrum to Django where you can run it on anything from a single process, to a single machine (with the IPC channel layer), to a cluster of machines.

I look forward to Python async being in a better place in five to ten years so we can revisit this and improve things (but hopefully keep a similar end-developer API, which I think is quite nice to use and reflects URL routing and view writing in a nice way), but I believe we need something that works well now, which means taking a few tradeoffs along the way; after all, it's not going to be forced on anyone, WSGI will still be there for a long time to come*.

(*At least until I get around to working out what an in-process asyncio WSGI replacement with WebSocket support might look like)

Andrew

Donald Stufft

May 6, 2016, 2:00:11 PM
To: django-d...@googlegroups.com, Andrew Godwin
On May 6, 2016, at 1:45 PM, Andrew Godwin <and...@aeracode.org> wrote:

I just want to cover a few more things I didn't get to in my reply to Aymeric.

On Fri, May 6, 2016 at 9:11 AM, Donald Stufft <don...@stufft.io> wrote:

In short, I think that the message bus adds an additional layer of complexity
that makes everything a bit more complicated for very little actual gain over
other possible, but less complex, solutions. This message bus also removes a
key part of the control that the server which is *actually* receiving the
connection has over the lifetime and processing of the eventual request.

True; however, having a message bus/channel abstraction also removes a layer of complexity: caring about socket handling, and about sinking your performance by doing even a slightly blocking operation.

In an ideal world we'd have some magical language that let us all write amazing async code and that detected all possible deadlocks or livelocks before they happened, but that's not yet the case, and I think the worker model has been a good substitute for it in software design generally.
 

For example, in traditional HTTP servers, where you have an open connection
associated with whatever view code you're running, whenever the client
disconnects you're given a few options for what you can do, but the most
common option in my experience is that once the connection has been lost, the
HTTP server cancels the execution of whatever view code it had been running
[1]. This allows a single process to serve more by shedding the load of
connections that have since disconnected for some reason. In ASGI, however,
since there's no way to remove an item from the queue or cancel it once it
has begun to be processed by a worker process, you lose out on this ability
to shed the load of processing a request once it has already been scheduled.

But as soon as you introduce a layer like Varnish into the equation, you've lost this anyway, as you're no longer seeing the true client socket. Abandoned requests are an existing problem with HTTP and WSGI; I see them in our logs all the time.


I don’t believe that to be true. For example: the client connects to Varnish, Varnish connects to h2o, h2o connects to gunicorn, which is running WSGI. The client closes the connection to Varnish, so Varnish closes the connection to h2o, so h2o closes the connection to gunicorn, which can then raise a SystemExit exception and halt execution of the code.
The point, though, is that it doesn’t have to be (entirely) buffered anywhere. You stop producing data when your buffer fills up, until the consumer of that buffer drains it and it’s available for more data again. You’re not just growing a buffer unbounded.

 

ASGI purports to make it easier to gracefully restart your servers, by making
it possible to restart the worker servers (since there are no long-lived open
connections to them) and simply spin up new ones. However, that's not really
the whole story, because while that is true, it only holds as long as your
code changes don't touch something that Daphne needs to be aware of in order
to process incoming requests. As soon as Daphne needs to be restarted you're
back in the same boat of needing another solution for graceful restarts, and
since Daphne depends on project-specific code, it's going to need to be
restarted much more frequently than other solutions that don't. It appears to
me that it would be difficult to automatically determine whether or not
Daphne needs a restart on any particular deployment, so it will be common for
people to just restart the whole stack anyway.

Daphne only depends on one tiny piece of project code, the channel layer configuration. I don't imagine that changing nearly as often as actual business logic. You're right that once there's a new Daphne version or that config changes, it needs a restart too, but that's not going to be very common.

Right, but in an automated system it’ll be difficult to determine whether Daphne or the worker processes need to be restarted. A human could figure it out, but a machine would need to trace Python code to figure out if Daphne is affected or not.
You don’t need to write it asynchronously. You need an async server, but that async server can execute synchronous code just fine using something like deferToThread. That’s how twistd -n web --wsgi works today: it gets a request and it deferToThread’s it to synchronous WSGI code.
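
Sketched with Twisted (``sync_view``, ``on_frame``, and ``write`` are
hypothetical stand-ins for user code and the hooks an async server would
provide; deferToThread itself is the real Twisted API):

    from twisted.internet.threads import deferToThread

    def sync_view(frame):
        # Ordinary blocking user code -- nothing async required of the
        # developer.
        return "echo: " + frame

    def on_frame(frame, write):
        # Called by the async server on the event loop for each incoming
        # frame; the sync code runs in the reactor's thread pool, and the
        # result is written back on the event loop via write().
        d = deferToThread(sync_view, frame)
        d.addCallback(write)
        return d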


I'm not saying my solution is perfect; I'm saying it's pragmatic given our current position and likely future position. Channels adds a spectrum to Django where you can run it on anything from a single process, to a single machine (with the IPC channel layer), to a cluster of machines.

I look forward to Python async being in a better place in five to ten years so we can revisit this and improve things (but hopefully keep a similar end-developer API, which I think is quite nice to use and reflects URL routing and view writing in a nice way), but I believe we need something that works well now, which means taking a few tradeoffs along the way; after all, it's not going to be forced on anyone, WSGI will still be there for a long time to come*.

(*At least until I get around to working out what an in-process asyncio WSGI replacement with WebSocket support might look like)

Andrew


Aymeric Augustin

May 6, 2016, 3:49:49 PM
To: django-d...@googlegroups.com
On 06 May 2016, at 19:59, Donald Stufft <don...@stufft.io> wrote:

On May 6, 2016, at 1:45 PM, Andrew Godwin <and...@aeracode.org> wrote:

On Fri, May 6, 2016 at 9:11 AM, Donald Stufft <don...@stufft.io> wrote:

So what sort of solution would I personally advocate had I the time or energy
to do so? I would look towards what sort of pure Python API (like WSGI itself)
could be added to allow a web server to pass websockets down into Django.

I agree with the desire to use things like HAProxy in the stack, but I think your idea of handling WebSockets natively in Django is far more difficult and fragile than Channels is, mostly due to our ten-year history of synchronous code. We would have to audit a large amount of the codebase to ensure it was all async-compatible, not to mention drop Python 2 support, before we'd even get close.

You don’t need to write it asynchronously. You need an async server, but that async server can execute synchronous code just fine using something like deferToThread. That’s how twistd -n web --wsgi works today: it gets a request and it deferToThread’s it to synchronous WSGI code.

Sure, this works for WSGI, but barring significant changes to Django, it doesn’t make it convenient to handle WSGI synchronously and WebSockets asynchronously with the same code base, let alone in the same process.

Problems begin when you want a synchronous function and an asynchronous one to call the same function that does I/O, for example `get_session(session_id)` or `get_current_user(user_id)`. Every useful service serving authenticated users starts with these.

If you’re very careful to never mix sync and async code, sure, it will work. It will be unforgiving, in the sense that it will be too easy to accidentally block the event loop handling the async bits. In the end, essentially, you end up writing two separate apps… and it’s harder than actually writing them separately.

That’s why I’m pessimistic about running everything on an event loop as long as we don’t have a way to guarantee that Django never blocks.

-- 
Aymeric.

Donald Stufft

May 6, 2016, 3:56:39 PM
To: django-d...@googlegroups.com

On May 6, 2016, at 3:49 PM, Aymeric Augustin <aymeric....@polytechnique.org> wrote:

Sure, this works for WSGI, but barring significant changes to Django, it doesn’t make it convenient to handle WSGI synchronously and WebSockets asynchronously with the same code base, let alone in the same process.

User-level code would not be handling WebSockets asynchronously; that would be left up to the web server (which would call the user-level code using deferToThread each time a websocket frame comes in). Basically similar to what’s happening now, except instead of using the network and a queue to allow calling sync user code from an async process, you just use the primitives provided by the async framework.

Aymeric Augustin

May 6, 2016, 4:03:42 PM
To: django-d...@googlegroups.com
On 06 May 2016, at 21:56, Donald Stufft <don...@stufft.io> wrote:

On May 6, 2016, at 3:49 PM, Aymeric Augustin <aymeric....@polytechnique.org> wrote:

Sure, this works for WSGI, but barring significant changes to Django, it doesn’t make it convenient to handle WSGI synchronously and WebSockets asynchronously with the same code base, let alone in the same process.

User-level code would not be handling WebSockets asynchronously; that would be left up to the web server (which would call the user-level code using deferToThread each time a websocket frame comes in). Basically similar to what’s happening now, except instead of using the network and a queue to allow calling sync user code from an async process, you just use the primitives provided by the async framework.

Ah, right! I think this would be quite similar to a synchronous, in-memory channels backend.

-- 
Aymeric.

Carl Meyer

May 6, 2016, 4:20:09 PM
To: django-d...@googlegroups.com
On 05/06/2016 01:56 PM, Donald Stufft wrote:
> User-level code would not be handling WebSockets asynchronously; that
> would be left up to the web server (which would call the user-level code
> using deferToThread each time a websocket frame comes in). Basically
> similar to what’s happening now, except instead of using the network and
> a queue to allow calling sync user code from an async process, you just
> use the primitives provided by the async framework.

I think (although I haven't looked at it carefully yet) you're basically
describing the approach taken by hendrix [1]. I'd be curious, Andrew, if
you considered a thread-based approach as an option and rejected it? It
does seem like, purely on the accessibility front, it is perhaps even
simpler than Channels (just in terms of how many services you need to
deploy).

Carl

[1] http://hendrix.readthedocs.io/en/latest


Andrew Godwin

May 6, 2016, 4:31:55 PM
To: django-d...@googlegroups.com
Well, the thread-based approach is in channels; it's exactly how manage.py runserver works (it starts daphne and 4 workers in their own threads, and ties them together with the in-memory backend).

So, yes, I considered it, and implemented it! I just didn't think it was enough to have just that solution, which means some of the things a local-memory-only backend could have done (like more detailed operations on channels) didn't go in the API.

Andrew 

Carl Meyer

May 6, 2016, 5:12:08 PM
To: django-d...@googlegroups.com
Ha! Clearly I need to go have a play with channels. It does seem to me
that this is a strong mark in favor of channels on the accessibility
front that deserves more attention than it's gotten here: that the
in-memory backend with threads could be a reasonable way to set up even
a production deployment of many small sites that want websockets and
delayed tasks without requiring separate management of interface
servers, Redis, and workers (or separate WSGI and async servers). Of
course it has the downside that thread-safety becomes an issue, but
people have been deploying Django under mod_wsgi with threaded workers
for years, so that's not exactly new.

Of course, there's still internally a message bus between the server and
the workers, so this isn't exactly the approach Donald was preferring;
it still comes with some of the tradeoffs of using a message queue at
all, rather than having the async server just making its own decisions
about allocating requests to threads.

Carl


Andrew Godwin

May 6, 2016, 6:29:22 PM
To: django-d...@googlegroups.com
Yup, that's definitely the tradeoff of this approach; it's not quite as intelligent as a more direct solution could be. With an in-memory backend, however, you can take the channel capacity down pretty low to provide quicker backpressure to at least get _some_ of that back.

(Another thing I should mention - with the IPC backend, you could run an asyncio interface server on Python 3 and keep running your legacy business logic on a Python 2 worker, all on the same machine using speedy shared memory to communicate)

Andrew 