On ASGI...


Tom Christie

Jun 1, 2017, 6:18:44 AM
to Django developers (Contributions to Django itself)
I've been doing some initial work on a Gunicorn worker process that interfaces with an ASGI consumer callable, rather than a WSGI callable.


In the standard channels setup, the application is run behind a message bus...

    Protocol Server -> Channels <- Worker Process -> ASGI Consumer

In the Gunicorn worker implementation above, we're instead calling the consumer interface directly...

    Protocol Server -> ASGI Consumer

There are a few things that look promising here...

1. The ASGI consumer interface is suitable for async application frameworks, where WSGI necessarily can't be.

In WSGI the response is complete when the callable returns; you can't queue an asynchronous task to perform some work later.
With an ASGI consumer, the messaging interface style means that you can push tasks onto the event loop and return immediately.
In short, you can use async...await under ASGI.
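To make that concrete, here's a minimal sketch of the difference (the `ReplyChannel` class and message keys are my own illustrative stand-ins, not anything from the spec): the consumer sends its response straight away, then pushes follow-up work onto the event loop rather than blocking until it completes.

```python
import asyncio

class ReplyChannel:
    """Illustrative stand-in for a reply channel with a send() method."""
    def __init__(self):
        self.sent = []

    def send(self, message):
        self.sent.append(message)

background = []

async def follow_up(path):
    # Stand-in for deferred work that runs after the response went out.
    await asyncio.sleep(0)
    background.append(path)

def consumer(message):
    # Reply immediately...
    message['reply_channel'].send({'status': 200, 'content': b'Hello, world'})
    # ...and queue further work onto the running loop without blocking.
    asyncio.ensure_future(follow_up(message['path']))

async def main():
    reply = ReplyChannel()
    consumer({'path': '/', 'reply_channel': reply})
    await asyncio.sleep(0.01)  # give the queued task time to run
    return reply.sent

sent = asyncio.run(main())
```

Under WSGI the equivalent follow-up work would have to finish before the callable could return.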

2. The uvloop and httptools implementations are seriously speedy.

For comparative purposes, here are plaintext hello-world benchmarks against a few different implementations on my MacBook Air:

wrk -d20s -t10 -c200 http://127.0.0.1:8080/

                        Throughput     Latency  (stddev)
Go                      44,000 req/s     6ms      92%
uvloop+httptools, ASGI  33,000 req/s     6ms      67%
meinheld, WSGI          16,000 req/s    12ms      91%
Node                     9,000 req/s    22ms      91%

As application developers those baselines aren't typically a priority, but if we want Python web frameworks to be able to nail the same kinds of services that Node and Go currently excel at, then having both async support and a low framework overhead *is* important.

It's not immediately clear to me if any of this is interesting to Django land directly or not. The synchronous nature of the framework means that having the separation of async application servers and synchronous workers behind a channel layer makes a lot of sense. Though you could perfectly well run a regular HTTP Django application on top of this implementation (replacing wsgi.py with an asgi.py that uses ASGIHandler) and be no worse off for it. (Sure, you're running blocking operations in the context of an event loop, but that's no worse than running blocking operations in a standard WSGI configuration.)

However it is valuable if you want to be able to write HTTP frameworks that support async...await, or if you want to support websockets and don't require the kinds of broadcast functionality that adding a channel layer provides for.

---

At the moment I'm working against the ASGI consumer interface as it's currently specified. There's a few things that I'm interested in next:

1. If there'd be any sense in mandating that the ASGI callable *may* be a coroutine. (requiring an asyncio worker or server implementation)
2. If there'd be any sense in including `.loop` as either a mandatory or an optional attribute on a channel layer that supports the asyncio extension.
3. Andrew's mentioned that he's been considering an alternative that maps more simply onto WSGI, I'd really like to see what he's thinking there.
4. Response streaming isn't a problem - you can send multiple messages back to the channel, and run that off the event loop. However, I've not quite got my head around how you handle streaming request bodies, or how you'd invert the interface so that from the application perspective there's an interface available along the lines of `chunk = await body.read()`.
5. One other avenue of interest here might be whether it's worth considering bringing ASGIHandler out of channels and into Django core, so that we can expose either an ASGI consumer callable or a WSGI callable interface to Django, with `runworker` being only one of a number of possible ASGI deployments.
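On point 4, one possible way to invert the interface is to have the server feed incoming body messages into a stream object that the application awaits. This is only a sketch under assumed names (`BodyStream`, `feed()`, and the `more_body` flag are all hypothetical, not part of the spec):

```python
import asyncio

class BodyStream:
    """Hypothetical bridge: the server feeds chunks in, the app awaits read()."""
    def __init__(self):
        self._queue = asyncio.Queue()
        self._done = False

    def feed(self, chunk, more_body):
        # Called by the server as request-body messages arrive.
        self._queue.put_nowait((chunk, more_body))

    async def read(self):
        # Returns b'' once the final chunk has been consumed.
        if self._done:
            return b''
        chunk, more_body = await self._queue.get()
        self._done = not more_body
        return chunk

async def handler(body):
    # Application-side view: `chunk = await body.read()` until exhausted.
    chunks = []
    while True:
        chunk = await body.read()
        if not chunk:
            break
        chunks.append(chunk)
    return b''.join(chunks)

async def main():
    body = BodyStream()
    body.feed(b'Hello, ', True)
    body.feed(b'world', False)
    return await handler(body)

result = asyncio.run(main())
```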

Plenty to unpack here, feedback on any aspects most welcome!

Cheers,

  T :)

Andrew Godwin

Jun 1, 2017, 1:42:50 PM
to Django developers (Contributions to Django itself)
Thanks for the continued speedy research, Tom!

Weighing in on the design of an ASGI-direct protocol, the main issue I've had at this point is not HTTP (as there's a single request message and the body stuff could be massaged in somehow), but WebSocket, where you have separate "connect", "receive" and "disconnect" events.

Because of the separate events, covering them all in one consumer/function, even in an async style, is not possible under something that works in the same overall way as ASGI does at the base level; instead, it would have to be modified substantially so that the server bundled all those things together into a single asyncio function, and then we'd have to stick a key on there that made it clear what each event was.

The thing I wanted to investigate, and which I started making progress towards at PyCon, was keeping the same basic architecture as ASGI (that is, server -> channel layer <- consumer), but stripping it way back to the in-memory layer with a very small capacity, so it basically just acts as an async-thread-to-async-thread channel, in the same way things would in, for example, Go.

This resulted in me adding receive_async() to the memory layer so that works, but it's a relatively slow implementation as it has to coexist with the rest of the layer, which works synchronously. I suspect there is potential for a very fast async-only layer that can trigger the await that's hanging in a receive_async() directly from a send() to a related channel, rather than sticking the message in a memory location and waiting. This could be done with generic channel names as in the spec ("websocket.connect"), ending up with a server that has fewer worker contexts/threads than waiting sockets, or with specific channel names per connection ("websocket.connect?j13DkW2"), spinning up one handling function per connection per message type, which seems sub-optimal.
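The "trigger the hanging await directly from send()" idea could be sketched roughly as below - the class and method names are illustrative only, not the real asgiref memory layer. send() resolves a pending receive_async() future directly instead of parking the message in storage:

```python
import asyncio

class AsyncChannelLayer:
    """Illustrative async-only layer: send() wakes a waiting receive_async()."""
    def __init__(self):
        self._waiters = {}   # channel name -> pending futures
        self._buffer = {}    # channel name -> messages with no waiter yet

    def send(self, channel, message):
        waiters = self._waiters.get(channel)
        if waiters:
            # Wake the hanging receive_async() directly.
            waiters.pop(0).set_result(message)
        else:
            self._buffer.setdefault(channel, []).append(message)

    async def receive_async(self, channel):
        buffered = self._buffer.get(channel)
        if buffered:
            return buffered.pop(0)
        future = asyncio.get_running_loop().create_future()
        self._waiters.setdefault(channel, []).append(future)
        return await future

async def main():
    layer = AsyncChannelLayer()
    receiver = asyncio.ensure_future(layer.receive_async('websocket.connect'))
    await asyncio.sleep(0)  # let the receiver start waiting
    layer.send('websocket.connect', {'path': '/chat/'})
    return await receiver

message = asyncio.run(main())
```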

Otherwise, it seems the only way to re-engineer it is to shove all the message types for a protocol into a single async handling function, which seems less than ideal to me, or start down the path of having these things represented as classes with callables on them - so you'd call "consumer.connect()", "consumer.receive()" etc., which is probably my preferred design for keeping the event separation nice and clean.

Andrew

--
You received this message because you are subscribed to the Google Groups "Django developers (Contributions to Django itself)" group.
To unsubscribe from this group and stop receiving emails from it, send an email to django-developers+unsubscribe@googlegroups.com.
To post to this group, send email to django-developers@googlegroups.com.
Visit this group at https://groups.google.com/group/django-developers.
To view this discussion on the web visit https://groups.google.com/d/msgid/django-developers/db158350-60a9-4950-b11c-83f2f7a9221c%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Tom Christie

Jun 2, 2017, 11:21:26 AM
to Django developers (Contributions to Django itself)
> I suspect there is potential for a very fast async-only layer that can trigger the await that's hanging in a receive_async() directly from a send() to a related channel, rather than sticking it onto a memory location and waiting

Yup. Something that the gunicorn worker doesn't currently provide is `receive()/receive_async()` hooks; instead it currently just relies on `send()`ing every incoming message. For e.g. HTTP, the one thing that doesn't give you so easily is request body buffering or streaming, but you *could* handle that on the application side if you really wanted.

I think that the sensible next step for me will be to work towards adding that interface in, at which point we ought to have an implementation capable of running a Django service using ASGIHandler. (And yeah, I see that you'd be adding channel queues at that point.) asyncio.Queue is looking like a mightily useful bit of tooling right now.

> the design of an ASGI-direct protocol

I think the important part of this is "how does asyncio code look in the context of an ASGI consumer interface" rather than the specifics of server-direct-to-consumer configurations. It's not unreasonable that "an asyncio ASGI app running behind a full channel layer" could be a thing too, in the future.

Perhaps a good way of narrowing this down would be to consider an asyncio runworker: is the application responsible for pushing coroutines onto the event loop (advantage: more narrowly scoped), or do you extend the ASGI consumer callable definition to also allow it to be a coroutine (advantages: more natural interface, and the server can manage task cancellations)?
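The two options might be sketched like so (everything named here is illustrative, not spec): in option (a) the consumer stays a plain callable and schedules its own coroutines; in option (b) the consumer *is* a coroutine, so the server owns the task and could cancel it.

```python
import asyncio

results = []

async def do_work(message):
    # Stand-in for the actual request handling.
    await asyncio.sleep(0)
    results.append(('handled', message['path']))

# (a) Plain callable: the server just calls it, and the application
# is responsible for pushing coroutines onto the event loop.
def callable_consumer(message):
    asyncio.ensure_future(do_work(message))

# (b) Coroutine consumer: the server awaits it as a task, and could
# call task.cancel() to manage cancellation itself.
async def coroutine_consumer(message):
    await do_work(message)

async def server():
    callable_consumer({'path': '/a/'})
    task = asyncio.ensure_future(coroutine_consumer({'path': '/b/'}))
    await asyncio.sleep(0.01)  # give both a chance to finish

asyncio.run(server())
```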

Andrew Godwin

Jun 2, 2017, 2:01:56 PM
to Django developers (Contributions to Django itself)
Right. I'll try and get a full async example up in channels-examples soon to show off how this might work; I did introduce a Worker class into the asgiref package last week as well, and the two things that you need to override on that are "the list of channels to listen to" and "handle this message".

While the consumer interface as it stands is tied into Django (where we _could_ allow coroutines, with some difficulty if I am to continue supporting Python 2.7 for Django 1.11), there's also room for asgiref to have either:

 - Something substantially similar to Django's consumer system, or
 - The ability to just plug in a callable/coroutine that gets called with messages as they arrive, and which provides you with channel names to listen on as an attribute

This second one is perhaps my preferred approach, as it's getting very close to the design of WSGI, and it would not be hard to re-layer Channels to exist behind this pattern. I think at that point the main difference to WSGI is the need to persist state somewhere outside the locals() of the function (as you'll still have separate coroutines for connect/receive/disconnect).
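A rough sketch of that second option, with an assumed `channel_names` attribute and a toy worker loop (neither is the real asgiref API):

```python
received = []

def application(channel, message):
    # Called once per message as it arrives. Any per-connection state
    # has to live outside this function's locals(), e.g. keyed on the
    # reply channel name.
    received.append((channel, message))

# The plugged-in callable advertises what to listen on as an attribute.
application.channel_names = ['http.request', 'websocket.receive']

def worker(app, incoming):
    """Toy stand-in for a worker dispatching to the plugged-in callable."""
    for channel, message in incoming:
        if channel in app.channel_names:
            app(channel, message)

worker(application, [
    ('http.request', {'path': '/'}),
    ('websocket.receive', {'text': 'hi'}),
    ('ignored.channel', {}),
])
```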

Andrew

--
You received this message because you are subscribed to the Google Groups "Django developers (Contributions to Django itself)" group.
To unsubscribe from this group and stop receiving emails from it, send an email to django-developers+unsubscribe@googlegroups.com.
To post to this group, send email to django-developers@googlegroups.com.
Visit this group at https://groups.google.com/group/django-developers.

Tom Christie

Jun 7, 2017, 7:05:02 AM
to Django developers (Contributions to Django itself)
Making some more progress - https://github.com/tomchristie/uvicorn
I'll look into adding streaming HTTP request bodies next, and then into adding a websocket protocol.

I see that the consumer interface is part of the channels API reference, rather than part of the ASGI spec.
Is the plan to eventually include the consumer interface as part of the ASGI spec, and make it more clearly separate from channels?


> The ability to just plug in a callable/coroutine that gets called with messages as they arrive, and which provides you with channel names to listen on as an attribute

This sort of interface is exactly what I'm looking for, yes. From my POV the consumer callable is the primary bit of interface, rather than the channel layers.

The consumer interface as it currently stands *is* sufficient for writing alternative frameworks against, but it's possible that there could be a more limited, refined interface to be had here. (Eg. limiting the message argument to being something that contains only serializable data and channel interfaces.)

What would be great would be to have *just* the ASGI consumer callable interface pulled into Django core, and have channels be one of the possible ways of deploying against that.

> persist state somewhere outside the locals() of the function (as you'll still have separate coroutines for connect/receive/disconnect).

I assume you'd use the name of the reply_channel as the key to the state there, right?

Incidentally, asyncio implementations have less of a requirement here, as you could write a single coroutine handler that's called on `connect`,
which can then do non-blocking reads on a channel for incoming data, and could support broadcast via an asyncio interface onto Redis pub/sub.
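As a sketch of that single-coroutine-per-connection style, using a plain asyncio.Queue as a stand-in for the incoming channel (the message shapes here are illustrative):

```python
import asyncio

async def connection_handler(incoming, replies):
    # One coroutine owning the whole socket lifetime: started on
    # `connect`, then reading the incoming channel itself rather than
    # being re-invoked per message.
    while True:
        message = await incoming.get()
        if message.get('disconnect'):
            break
        replies.append({'text': 'echo: ' + message['text']})

async def main():
    incoming = asyncio.Queue()
    replies = []
    handler = asyncio.ensure_future(connection_handler(incoming, replies))
    await incoming.put({'text': 'hello'})
    await incoming.put({'disconnect': True})
    await handler
    return replies

replies = asyncio.run(main())
```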

Cheers,

  Tom :)

Andrew Godwin

Jun 8, 2017, 3:15:58 AM
to Django developers (Contributions to Django itself)
On Wed, Jun 7, 2017 at 7:05 PM, Tom Christie <christ...@gmail.com> wrote:
> Making some more progress - https://github.com/tomchristie/uvicorn
> I'll look into adding streaming HTTP request bodies next, and then into adding a websocket protocol.
>
> I see that the consumer interface is part of the channels API reference, rather than part of the ASGI spec.
> Is the plan to eventually include the consumer interface as part of the ASGI spec, and make it more clearly separate from channels?

No, consumers as they are in Channels right now I see as a particularly Django thing (as they depend on Django-like things such as routing and the decorator suite).
 


> > The ability to just plug in a callable/coroutine that gets called with messages as they arrive, and which provides you with channel names to listen on as an attribute
>
> This sort of interface is exactly what I'm looking for, yes. From my POV the consumer callable is the primary bit of interface, rather than the channel layers.
>
> The consumer interface as it currently stands *is* sufficient for writing alternative frameworks against, but it's possible that there could be a more limited, refined interface to be had here. (Eg. limiting the message argument to being something that contains only serializable data and channel interfaces.)
>
> What would be great would be to have *just* the ASGI consumer callable interface pulled into Django core, and have channels be one of the possible ways of deploying against that.

Well, messages are already massively limited, but I agree it's potentially a better interface to have everyone develop against as a base.
 

> > persist state somewhere outside the locals() of the function (as you'll still have separate coroutines for connect/receive/disconnect).
>
> I assume you'd use the name of the reply_channel as the key to the state there, right?
>
> Incidentally, asyncio implementations have less of a requirement here, as you could write a single coroutine handler that's called on `connect`,
> which can then do non-blocking reads on a channel for incoming data, and could support broadcast via an asyncio interface onto Redis pub/sub.


The problem here is that there isn't exactly one incoming channel for each WebSocket, so you can't listen within a consumer-like thing - consumers always have to follow the pattern "take a message, send zero or more messages, exit". This is semi-deliberate: having sticky routing makes things way harder to scale, and the restriction against blocking-listening within a consumer makes deadlock way, way harder.

Any interface like this would literally just be "this function gets called with every event, but you can't listen for events on your own". HTTP request bodies are not an event but a stream, and they're fine as they are, deliberately on a special channel per request.

Andrew

Tom Christie

Jun 8, 2017, 8:55:14 AM
to Django developers (Contributions to Django itself)
> Any interface like this would literally just be "this function gets called with every event, but you can't listen for events on your own"

Gotcha, yes. Although that wouldn't be the case with asyncio frameworks, since the channel reader would be a coroutine.
Which makes for interesting design thinking if you want to provide a consumer interface that's suitable for both framework styles.

> HTTP request bodies are not an event but a stream, and they're fine as they are, deliberately on a special channel per request.

Which it turns out actually makes cooperative tasks in a single-process implementation slightly awkward. (Doable, but rather more complex.)
For the purposes of my implementation it makes sense that if you've got a synchronous HTTP callable, the server should buffer the incoming data
and only dispatch a single message. (Not a problem wrt. Django, since ASGIHandler ends up effectively doing that in any case.)

Anyways, thanks for talking through all this - it's been immensely helpful!

  - T :)

Andrew Godwin

Jun 8, 2017, 8:58:34 AM
to Django developers (Contributions to Django itself)
On Thu, Jun 8, 2017 at 8:55 PM, Tom Christie <christ...@gmail.com> wrote:
> > Any interface like this would literally just be "this function gets called with every event, but you can't listen for events on your own"
>
> Gotcha, yes. Although that wouldn't be the case with asyncio frameworks, since the channel reader would be a coroutine.
> Which makes for interesting design thinking if you want to provide a consumer interface that's suitable for both framework styles.

Well, it would be the case - it's just that it would launch a new coroutine for each message, the way I'm thinking. It's either that or just one coroutine context that processes messages linearly, and then you launch lots of those in parallel.
 

> > HTTP request bodies are not an event but a stream, and they're fine as they are, deliberately on a special channel per request.
>
> Which it turns out actually makes cooperative tasks in a single-process implementation slightly awkward. (Doable, but rather more complex.)
> For the purposes of my implementation it makes sense that if you've got a synchronous HTTP callable, the server should buffer the incoming data
> and only dispatch a single message. (Not a problem wrt. Django, since ASGIHandler ends up effectively doing that in any case.)


Right, and this is the ultimate tradeoff I had to make in ASGI - the WebSocket design problem means you simply cannot design around having things all work in a single coroutine unless you go all the way and guarantee that things are _always_ in the same thread so you don't need to do sticky routing.

Andrew 

Tom Christie

Jun 9, 2017, 8:22:43 AM
to Django developers (Contributions to Django itself)
Figure I may as well show the sort of thing I'm thinking wrt. a more constrained consumer callable interface...

* A callable, taking two arguments, 'message' & 'channels'
* Message being JSON-serializable python primitives.
* Channels being a dictionary of str:channel
* Channel instances expose `.send()`, `.receive()` and `.name` interfaces.

Extensions such as groups/statistics/flush would get expressed instead as channel interfaces,
eg. a chat example...

    def ws_connected(message, channels):
        channels['reply'].send({'accept': True})
        channels['groups'].send({
            'group': 'chat',
            'add': channels['reply'].name
        })

    def ws_receive(message, channels):
        channels['groups'].send({
            'group': 'chat',
            'send': message['text']
        })

    def ws_disconnect(message, channels):
        channels['groups'].send({
            'group': 'chat',
            'discard': channels['reply'].name
        })

My thinking at the moment is that there isn't any great way of supporting both asyncio and sync implementations under the same interface.
If you're in asyncio land, it makes sense to *only* expose awaitable channel operations as you don't ever want to be able to block the task pool.

I think the best you can really do is express two distinct modes of interface.

* sync: callable interface, blocking send/receive interface
* asyncio: coroutine interface, coroutine send/receive interface

Presumably the equivalent would be true of eg. twisted.

(There are a couple of different things you can do to bridge from the asyncio interface to the sync interface, if that's useful.)
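One such bridging trick, for the record, is to run the event loop in a background thread and wrap the awaitable channel operations in blocking ones via run_coroutine_threadsafe. The `SyncChannel` wrapper here is hypothetical, with a plain asyncio.Queue standing in for the channel backend:

```python
import asyncio
import threading

class SyncChannel:
    """Hypothetical blocking facade over an awaitable channel operation."""
    def __init__(self, loop, queue):
        self._loop = loop
        self._queue = queue

    def send(self, message):
        # Blocking send() built on top of the awaitable queue.put():
        # schedule the coroutine onto the loop thread and wait for it.
        future = asyncio.run_coroutine_threadsafe(
            self._queue.put(message), self._loop)
        future.result()

# Run the event loop in a background thread.
loop = asyncio.new_event_loop()
threading.Thread(target=loop.run_forever, daemon=True).start()

queue = asyncio.Queue()
channel = SyncChannel(loop, queue)
channel.send({'text': 'hi'})  # plain blocking call from sync code

# Read it back from the loop side to show the message arrived.
got = asyncio.run_coroutine_threadsafe(queue.get(), loop).result()
loop.call_soon_threadsafe(loop.stop)
```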

Andrew Godwin

Jun 10, 2017, 10:21:49 AM
to Django developers (Contributions to Django itself)
On Fri, Jun 9, 2017 at 8:22 PM, Tom Christie <christ...@gmail.com> wrote:
> Figure I may as well show the sort of thing I'm thinking wrt. a more constrained consumer callable interface...
>
> * A callable, taking two arguments, 'message' & 'channels'
> * Message being JSON-serializable python primitives.
> * Channels being a dictionary of str:channel
> * Channel instances expose `.send()`, `.receive()` and `.name` interfaces.
>
> Extensions such as groups/statistics/flush would get expressed instead as channel interfaces,
> eg. a chat example...
>
>     def ws_connected(message, channels):
>         channels['reply'].send({'accept': True})
>         channels['groups'].send({
>             'group': 'chat',
>             'add': channels['reply'].name
>         })
>
>     def ws_receive(message, channels):
>         channels['groups'].send({
>             'group': 'chat',
>             'send': message['text']
>         })
>
>     def ws_disconnect(message, channels):
>         channels['groups'].send({
>             'group': 'chat',
>             'discard': channels['reply'].name
>         })

So is the channels object just a place to stuff different function handlers? Why not just pass the channel layer there and use the API on that directly? E.g.:

    channel_layer.group_send("chat", message["text"])

I was thinking more like:

    def handler(channel_layer, channel_name, message):
        ...

And then frameworks can do per-channel-name dispatch if they like, and use the channel layer for the send/group methods.
 

> My thinking at the moment is that there isn't any great way of supporting both asyncio and sync implementations under the same interface.
> If you're in asyncio land, it makes sense to *only* expose awaitable channel operations as you don't ever want to be able to block the task pool.
>
> I think the best you can really do is express two distinct modes of interface.
>
> sync: callable interface, blocking send/receive interface
> asyncio: coroutine interface, coroutine send/receive interface
>
> Presumably the equivalent would be true of eg. twisted.
>
> (There are a couple of different things you can do to bridge from the asyncio interface to the sync interface, if that's useful.)

Yup, you can't make cross-compatible ones. This is why ASGI right now has a receive() method (sync), receive_twisted(), and receive_async(), because all three have different signatures. I'm hopeful the Twisted and asyncio ones could be merged, though.

Andrew 

Tom Christie

Jun 12, 2017, 10:53:08 AM
to Django developers (Contributions to Django itself)
> def handler(channel_layer, channel_name, message):

Oh great! That's not a million miles away from what I'm working towards on my side.
Are you planning to eventually introduce something like that as part of the ASGI spec?

> So is the channels object just a place to stuff different function handlers?

No, it's just a different interface style onto exactly the same set of channel send/receive functionality.
It's the difference between this:

    def hello_world(channel_layer, channel_name, message):
        ...
        channel_layer.send(message['reply_channel'], response)

And this:

    def hello_world(message, channels):
        ...
        channels['reply'].send(response)

> Why not just pass the channel layer there and use the API on that directly? e.g.: channel_layer.group_send("chat", message["text"])

With the groups example I wanted to demonstrate that we don't need an extra "extensions" API introduced onto the channel layer in order to support a broadcast interface.

> then frameworks can do per-channel-name dispatch if they like

Yup, the channel name is certainly necessary.
A possible alternative is including that in the message itself, which I also quite like as an option because you more naturally end up with a nice symmetry of having the signature of the child routes match the signature of the parent.

    def app(message, channels):
        channel = message['channel']
        if channel == 'http.request':
            http_request(message, channels)
        elif channel == 'websocket.connect':
            websocket_connect(message, channels)
        elif ...

That's what I'm rolling with for the moment, but it's not something I'm necessarily wedded to.

I've done a bunch more work towards all this, so it'd be worth checking out https://github.com/tomchristie/uvicorn in its current state. That should make the interface style I'm considering clearer. (Focusing on asyncio there, but it could equally present an equivalent-but-synchronous interface.)

There are also now initial implementations for both WSGI-to-ASGI and ASGI-to-WSGI adapters.

Thanks!

  - Tom :)

Andrew Godwin

Jun 13, 2017, 9:15:39 PM
to Django developers (Contributions to Django itself)
On Mon, Jun 12, 2017 at 10:53 PM, Tom Christie <christ...@gmail.com> wrote:
> > def handler(channel_layer, channel_name, message):
>
> Oh great! That's not a million miles away from what I'm working towards on my side.
> Are you planning to eventually introduce something like that as part of the ASGI spec?

I had been pondering doing so for a while, and I think this thread has catalysed me to actually do it (and incorporate support for it into the Channels/asgiref stuff too, so everything can use a common Worker class if it's running in a separate process).
 

> > So is the channels object just a place to stuff different function handlers?
>
> No, it's just a different interface style onto exactly the same set of channel send/receive functionality.
> It's the difference between this:
>
>     def hello_world(channel_layer, channel_name, message):
>         ...
>         channel_layer.send(message['reply_channel'], response)
>
> And this:
>
>     def hello_world(message, channels):
>         ...
>         channels['reply'].send(response)

This is the sort of thing I would not put in a spec, but leave the framework to do if it likes (the same way Django adds things like message.reply_channel to the original message).
 

> > Why not just pass the channel layer there and use the API on that directly? e.g.: channel_layer.group_send("chat", message["text"])
>
> With the groups example I wanted to demonstrate that we don't need an extra "extensions" API introduced onto the channel layer in order to support a broadcast interface.

Well the extensions are there for a reason :) But again, I think my goal in ASGI is to provide a solid but not necessarily pretty base, and let things build on top of it as they wish.
 

> > then frameworks can do per-channel-name dispatch if they like
>
> Yup, the channel name is certainly necessary.
> A possible alternative is including that in the message itself, which I also quite like as an option because you more naturally end up with a nice symmetry of having the signature of the child routes match the signature of the parent.
>
>     def app(message, channels):
>         channel = message['channel']
>         if channel == 'http.request':
>             http_request(message, channels)
>         elif channel == 'websocket.connect':
>             websocket_connect(message, channels)
>         elif ...
>
> That's what I'm rolling with for the moment, but it's not something I'm necessarily wedded to.

I actually pondered this during the spec development (I also considered having a separate channel-names dict inside, with the current name and multiple reply names, e.g. "send" and "close"). Right now I like the base-level interface as it is, because the ideological separation makes me happier - you get out exactly what you put in, and there's no special keyword to reserve - but the truth is that several of the backends actually do this to support process-local channels efficiently, so maybe it is worth revisiting.
 

> I've done a bunch more work towards all this, so it'd be worth checking out https://github.com/tomchristie/uvicorn in its current state. That should make the interface style I'm considering clearer. (Focusing on asyncio there, but it could equally present an equivalent-but-synchronous interface.)


I note that your examples do not include "receiving messages from a WebSocket and sending replies" - I would love to see how you propose to tackle this given your current API, and I think it's the missing piece of my understanding.
 
> There are also now initial implementations for both WSGI-to-ASGI and ASGI-to-WSGI adapters.


There's a mostly-complete one in asgiref over here for WSGI-to-ASGI: https://github.com/django/asgiref/blob/master/asgiref/wsgi.py

If you are feeling like it, I would love to have just one pair of canonical adapters in the asgiref repo that everyone can use. The ASGI-to-WSGI adapter can probably be written based on the callable pattern I described at the top.
 
Andrew

Tom Christie

Jun 14, 2017, 9:53:34 AM
to Django developers (Contributions to Django itself)
> I note that your examples do not include "receiving messages from a WebSocket and sending replies" - I would love to see how you propose to tackle this given your current API, and I think it's the missing piece of what I understand.

I've just added an `echo` WebSocket example.

I've also now added support for broadcast, currently implemented using Redis Pub/Sub.
There's a full example for a chat server that can be properly distributed. (*)

Aside: Right now that *happens* to be implemented as middleware, but there's no reason it couldn't equally well be integrated at the server level, so not a detail to get sidetracked by. More important is how the application interface for it looks.

(*) Yes, you'd have sticky WebSockets.

Andrew Godwin

Jun 15, 2017, 11:03:14 PM
to Django developers (Contributions to Django itself)
Ah, I see, you are assuming sticky sockets. That makes things a lot easier to architect, but a whole lot harder to load-balance (you have to get your load-balancer to deliberately close sockets when a server is overloaded, as many will not go away by themselves).

Still, it makes scaling down a lot easier, so it's nicer to program against in that sense. Goes against the way most Python web code is written though (imagine if you had to use sticky sessions for HTTP clients so they always hit the same server!). I wonder if there is a way of doing something like this well, so that it's easy to write but also lets you scale later.

Andrew


Tom Christie

Jun 16, 2017, 5:49:10 AM
to Django developers (Contributions to Django itself)
> I wonder if there is a way of doing something like this well, so that it's easy to write but also lets you scale later.

It's not obvious that sticky websockets are *necessarily* problematic for typical use cases. A couple of things you'd want:

* Have clients be responsible for graceful reconnects. (This seems like a reasonable policy in any case.)
* Have instances enforce a maximum number of concurrent websocket requests that they'll accept. (Optionally combined with load-balancing websocket and HTTP to different sets of instances.)

A totally different tack would be a websocket load-balancer proxy that transparently handles reconnects across the pool of servers, as required. It doesn't look like Nginx supports that, but it might not be a ridiculous proposal given that it does support acting as a websocket proxy to multiple servers.

However what I'd *like* to do would be to write a consumer that routes messages to a channel layer. At that point uvicorn would be a fully fledged alternative implementation to daphne.

One question there: The channel layer `.send(...)` method is currently a regular method. Should there be twisted/asyncio equivalents in the spec? Given that I'm writing an asyncio server I'd ideally like to be able to use `await channel_layer.send(...)`. (Granted it's only eg. a quick redis hop, and I do also have the option of running those operations within a separate thread.)

Andrew Godwin

unread,
Jun 16, 2017, 11:01:46 AM6/16/17
to Django developers (Contributions to Django itself)
Right - as long as clients deal with reconnection (which they obviously should), and your load-balancing has a way to shed connections from a sticky server by closing them, all should be fine.

Honestly, I have always been annoyed at the no-local-context thing in channels; having local context makes writing code a lot easier, and while the current design may technically be more scalable I'm not sure that holds true given that sockets already have to be sticky thanks to TCP, and my own personal views on "scaling down". My ideal solution is one that allows both approaches, and I'd like to investigate that further. I think you're getting closer to the sort of thing I'm imagining with the uvicorn designs, but I feel like there's still something a little extra that could be done so it's possible to offload over a network easily (as you mention, letting consumers go to channel layers).

This is one of the original reasons the message specification is separate from the channel layer stuff; I always knew that my particular arrangement of send-and-receive-in-processes wouldn't work for everyone and everything, so I wanted a common message format we could definitely all agree on too. If we end up with one format for channel names and messages that is spread across two consumption forms (in-process async and cross-process channel layers), I think that would still be a useful enough standard and make a lot more people happy.

As for making send() async - I resisted this so far on the grounds that it is "non-blocking", but it involves a network call, so it's not actually non-blocking, obviously. It also technically returns a result, in that you have to be prepared to catch the ChannelFull exception, so it's not like you can have a synchronous API that then ends up being fire-and-forget on an async backend.

I wish there was a nicer way to achieve this than having send_async() and send_group_async() methods (etc.), but the only other alternative I see is having the methods all mirrored on an .async object, as in "channel_layer.async.send()". I'm not sure how I feel about that - thoughts?
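The mirrored-object idea can be sketched generically: wrap a sync layer so every method is exposed as a coroutine that runs on the default thread pool. (One wrinkle worth noting: `async` became a reserved word in Python 3.7, so the attribute would need another name - `aio` is used here. The `SyncChannelLayer` is a hypothetical stand-in, not the real channels API.)

```python
import asyncio
import functools

class SyncChannelLayer:
    """Hypothetical stand-in for a synchronous channel layer."""

    def __init__(self):
        self.sent = []

    def send(self, channel, message):
        self.sent.append((channel, message))

class AsyncMirror:
    """Expose every method of a sync layer as a coroutine that runs
    the underlying call on the default thread pool executor."""

    def __init__(self, layer):
        self._layer = layer

    def __getattr__(self, name):
        method = getattr(self._layer, name)

        @functools.wraps(method)
        async def wrapper(*args, **kwargs):
            loop = asyncio.get_running_loop()
            return await loop.run_in_executor(
                None, functools.partial(method, *args, **kwargs))

        return wrapper

layer = SyncChannelLayer()
aio = AsyncMirror(layer)  # would have been layer.async in the original idea
asyncio.run(aio.send("http.request", {"path": "/"}))
```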

Andrew


Josh Smeaton

unread,
Jun 17, 2017, 6:53:54 AM6/17/17
to Django developers (Contributions to Django itself)
FuncName() and FuncNameAsync() are common patterns in .NET land with async/await. The snake case translation would be func_name_async. From a quick scan, the JS world hasn't settled on a convention yet, though there is a bit of discussion about how to differentiate the names. Personally I don't mind the _async suffix, and I've been using it where there are both sync and async versions.

Tom Christie

unread,
Jun 21, 2017, 8:33:11 AM6/21/17
to Django developers (Contributions to Django itself)
> My ideal solution is one that allows both approaches, and I'd like to investigate that further. I think you're getting closer to the sort of thing I'm imagining with the uvicorn designs, but I feel like there's still something a little extra that could be done so it's possible to offload over a network easily (as you mention, letting consumers go to channel layers).

Indeed. Centering on the consumer interface doesn't mean that using a channel layer isn't an option, just that it's not the base case. It also composes really nicely, in the same way as WSGI middleware eg...

app = router({
    'http.request': wsgi_adapter(wsgi_app),
    'websocket.*': redis_channel_layer(...)
})
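A minimal sketch of how such a router might dispatch, assuming a consumer signature of `(message, channel_name)` - the interface here is hypothetical, following the example above rather than any published spec:

```python
from fnmatch import fnmatch

def router(table):
    """Return a consumer that dispatches on channel-name glob patterns.

    `table` maps patterns like 'websocket.*' to consumer callables;
    the first matching pattern wins.
    """
    def consumer(message, channel_name):
        for pattern, target in table.items():
            if fnmatch(channel_name, pattern):
                return target(message, channel_name)
        raise ValueError("No route for channel %r" % channel_name)
    return consumer

# Toy consumers standing in for wsgi_adapter(...) / redis_channel_layer(...)
app = router({
    'http.request': lambda msg, ch: 'http',
    'websocket.*': lambda msg, ch: 'ws',
})
```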


>  If we end up with one format for channel names and messages that is spread across two consumption forms (in-process async and cross-process channel layers), I think that would still be a useful enough standard and make a lot more people happy.

Yes. There may also be reasonable ways to have standard non-asyncio frameworks deployed directly behind a server, too. Personally I'm looking at using your work to fill the gap of having no WSGI equivalent in the asyncio framework space.

> I wish there was a nicer way to achieve this than having send_async() and send_group_async() methods (etc.), but the only other alternative I see is having the methods all mirrored on an .async object, as in "channel_layer.async.send()". I'm not sure how I feel about that - thoughts?

First thought is that the same issue likely applies to a bunch of the extension methods. That's a good argument in favour of keeping the API surface area as minimal as possible. I'm still keen on an API that only exposes data and channel send/receive primitives (and associated error handling), and simply doesn't allow for anything beyond that.

The naming aspect has plenty of bikeshedding potential. My current preference probably sounds a little counter-intuitive, in that I'd be happy to see the synchronous and asyncio versions of the interface be two incompatible takes on the same underlying interface, i.e. name them send() and receive() in *both* cases. You don't ever want to expose the sync version to an asyncio framework, or vice versa.

Asyncio essentially introduces a language within a language - e.g. it wouldn't be unreasonable to see an `apython` interpreter in the future that fully replaced all the incompatible parts of the standard library with coroutine equivalents - so I wouldn't have a problem with treating it almost as two separate language implementations against the same spec. I don't think it's worth getting hung up on resolving this aspect just yet, though. Less contentious would be at least asking the question: should we treat the interface as asyncio-first (e.g. send/send_blocking) or sync-first (send/send_async)?
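As a concrete sketch of that "same names, two incompatible implementations" idea - an in-memory toy, not a real channel layer - the two flavours would live side by side, and a given process would only ever import one of them:

```python
import asyncio
from collections import deque

class ChannelLayer:
    """Synchronous flavour: plain methods, for threaded/WSGI-style code."""

    def __init__(self):
        self.queues = {}

    def send(self, channel, message):
        self.queues.setdefault(channel, deque()).append(message)

    def receive(self, channel):
        return self.queues[channel].popleft()

class AsyncChannelLayer:
    """Asyncio flavour: the same spec and the same names, but every
    method is a coroutine. Never expose this to sync code, or vice versa."""

    def __init__(self):
        self.queues = {}

    async def send(self, channel, message):
        self.queues.setdefault(channel, asyncio.Queue())
        await self.queues[channel].put(message)

    async def receive(self, channel):
        return await self.queues[channel].get()
```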

Cheers,

  Tom

Andrew Godwin

unread,
Jun 23, 2017, 3:06:35 AM6/23/17
to Django developers (Contributions to Django itself)

>> If we end up with one format for channel names and messages that is spread across two consumption forms (in-process async and cross-process channel layers), I think that would still be a useful enough standard and make a lot more people happy.

> Yes. There may also be reasonable ways to have standard non-asyncio frameworks deployed directly behind a server, too. Personally I'm looking at using your work to fill the gap of having no WSGI equivalent in the asyncio framework space.

Well, non-asyncio frameworks are presumably WSGI, so the job is just to translate them across.

>> I wish there was a nicer way to achieve this than having send_async() and send_group_async() methods (etc.), but the only other alternative I see is having the methods all mirrored on an .async object, as in "channel_layer.async.send()". I'm not sure how I feel about that - thoughts?

> First thought is that the same issue likely applies to a bunch of the extension methods. That's a good argument in favour of keeping the API surface area as minimal as possible. I'm still keen on an API that only exposes data and channel send/receive primitives (and associated error handling), and simply doesn't allow for anything beyond that.

> The naming aspect has plenty of bikeshedding potential. My current preference probably sounds a little counter-intuitive, in that I'd be happy to see the synchronous and asyncio versions of the interface be two incompatible takes on the same underlying interface, i.e. name them send() and receive() in *both* cases. You don't ever want to expose the sync version to an asyncio framework, or vice versa.

> Asyncio essentially introduces a language within a language - e.g. it wouldn't be unreasonable to see an `apython` interpreter in the future that fully replaced all the incompatible parts of the standard library with coroutine equivalents - so I wouldn't have a problem with treating it almost as two separate language implementations against the same spec. I don't think it's worth getting hung up on resolving this aspect just yet, though. Less contentious would be at least asking the question: should we treat the interface as asyncio-first (e.g. send/send_blocking) or sync-first (send/send_async)?


This is an interesting take, and one I have also considered. I'm not sure where I sit on it - on one hand, it's nice to have a common interface that sync and async code can both write to; on the other hand, it's often cleaner to keep these separate (they could still be in the same package, just two different top-level classes you import).

What I do still want is the ability for sync and async code to work with each other across processes, but that's why there's a cross-network abstraction in the first place.

Andrew

Tom Christie

unread,
Jun 23, 2017, 3:57:02 AM6/23/17
to Django developers (Contributions to Django itself)
Raising this over in aio-libs for a bit of exploratory discussion... https://groups.google.com/forum/#!topic/aio-libs/7rJ8Pb1y7aA

> I'm not sure where I sit on it - on one hand <...> on the other hand <...>

Same really, yes. :)