Feature: Support Server-Sent Events


Emil Stenström

May 30, 2015, 12:52:36 PM
to django-d...@googlegroups.com
Hi,

This is the third feature proposal as part of my general drive for
getting Django to work better for javascript heavy sites.

Support Server-Sent Events
--------------------------

If you want a snappy user experience, AJAX polling isn't enough. There
are two major ways to get out of the request-response cycle. Namely:
websockets and server-sent events.

Websockets are complicated beasts, and getting cross-browser support for
them requires implementing several protocol versions. They are also
bidirectional, meaning you can send stuff to the server with them.

Server-Sent Events use normal HTTP, and are natively supported
everywhere except Internet Explorer. Since they don't use a custom
protocol there are several polyfills that enable IE support too,
bringing browser compliance close to 100%. SSE only supports sending
data from server to client, not the other way around.

The ease of implementation of server-sent events makes them a much better
fit for Django, in my opinion. Also, I don't think the need for client ->
server requests is that big; you can easily solve that with AJAX calls
instead.

More reading on how it works:
http://www.html5rocks.com/en/tutorials/eventsource/basics/
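To give a concrete picture of how simple the protocol is, here is a tiny formatter for the event-stream wire format (the function name is mine; the field names come from the EventSource spec): each message is a few `field: value` lines terminated by a blank line.

```python
def format_sse(data, event=None, event_id=None):
    """Serialize one message in the Server-Sent Events wire format:
    optional "event:" and "id:" fields, one "data:" line per line of
    payload, and a blank line terminating the message."""
    lines = []
    if event is not None:
        lines.append("event: %s" % event)
    if event_id is not None:
        lines.append("id: %s" % event_id)
    for chunk in str(data).splitlines() or [""]:
        lines.append("data: %s" % chunk)
    return "\n".join(lines) + "\n\n"
```

A browser-side EventSource object fires one message event for each such block as it arrives over the open HTTP response.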

---

Just like websockets, server-sent events rely on persistent connections
to each connected client. This means that we need a separate process to
make this work properly. I understand this is a controversial
suggestion, but it allows the rest of Django to continue working like
before, and still offer async support to users. It's the best of both worlds.

So how would you implement such a process in a performant way? With
Python 3's asyncio package (for Python 2 support there's trollius:
https://pypi.python.org/pypi/trollius). This means no external
dependencies (except for old pythons).

You would start the process separately, add a script tag to your page,
and all clients that connected to the page would be connected to the
process. Now Django could push messages through that process as part of
the request-response cycle, and all clients would be notified. Other
processes (a cron job?) could do the same.

Simply stated: we would finally get async support in Django, without
having to rewrite large parts of Django. Also, we need no persistent
message storage: connected clients get the messages, and then the
messages are discarded.

There is existing work on this that could be a starting point for a more
Django-like API: https://github.com/brutasse/asyncio-sse

---

I think server-sent events would be a great reason to choose Django over
other frameworks. It would finally answer the request for "async"
features in Django, with a system that's as easy to set up (just start
the process) as it is to implement.

Would anyone be willing to work with me on this? Do you think it makes
sense to put this in Django? I don't see any need for this code to
change often, and it fits well with the "building apps quickly and with
less code" mantra.

Thoughts? Ideas?

(And yes, this idea would work great together with the JS template
rendering: pushing database updates from the server to the client, and
having the client instantly update the template accordingly.)

Regards,

Emil Stenström
Twitter: @EmilStenstrom

Emil Stenström

May 30, 2015, 1:17:36 PM
to django-d...@googlegroups.com, Collin Anderson
Hi Collin,

I'll answer in this thread to keep things tidy here.

Since Server-Sent Events are just HTTP you can scale them the same way you scale your normal web app. Split users so some users hit one server and some users hit another. Then let Django communicate with all server processes instead of just one.

But I wouldn't say this would have to be the responsibility of Django, just like scaling your database isn't Django's concern. As long as you can decide who connects to what machine it should be able to scale horizontally.

/Emil

On Saturday, 30 May 2015 19:04:26 UTC+2, Collin Anderson wrote:
Hi Emil,

I also like "server sent events" (EventSource). They get through proxies much more reliably than WebSockets. :)

"You would start the process separately, add a script tag to your page, and all clients that connected to the page would be connected to the process. Now Django could push messages through that process as part of the request-response cycle, and all clients would be notified. Other processes (a cron job?) could do the same."

Have you thought about how to scale this across multiple machines?

Thanks,
Collin


On Saturday, May 30, 2015 at 12:51:33 PM UTC-4, Emil Stenström wrote:
Hi!

A couple of weeks ago I held a presentation at PyCon Sweden 2015 with
the title "Why Django Sucks". The idea was to follow in the footsteps of
great conference talks like these:
https://www.youtube.com/playlist?list=PLGYrvlVoAdf9j3v_teol3s7hl8gkUd8E2

These talks are great because they encourage people to take an outside,
critical, perspective of Django and where it's going.

My talk was well received, with many interested and skeptical
questions afterwards. Unfortunately, the final video was missing the
sound, so the camera crew is working on fixing that now. I'll post it to
this thread as soon as I get it. Meanwhile, here's a text summary:

---

The theme of my talk was Django's bad support for JavaScript-heavy
sites. Everyone is using JavaScript on their sites nowadays. The problem
is that Django (and large parts of the Django community) has long had an
approach to JavaScript that can be summed up with Simon Willison's reply
to someone asking for AJAX support in 2005:

"For me "Ajax support" really is pure marketing fluff - as far as I'm
concerned EVERY web framework supports Ajax unless it does something
truly moronic like refuse to let you output documents that don't have
the standard header/footer template."
Source: http://bit.ly/django2005

The problem with this mindset (I'm not picking on Simon from 10 years
ago) is that the web at large is moving towards MORE javascript, with
live notifications, live updating templates, and reusable template
components. And these are hard to implement in Django as things work
today. I see this as the biggest competitive disadvantage Django has
going forward.

So, what specific features am I proposing? I will get to a specific set
of features soon. But first I want to make clear that a completely
different set of features could get us to the same goal. That is: it's
possible to agree on the broader goal for Django, and disagree on my
specific set of features. If you don't agree on the features, I would
love to see your proposed feature list.

Just to give one alternate path: In that old thread from 2005, Jacob
Kaplan-Moss suggested exposing the ORM through Javascript with an RPC API:
https://groups.google.com/d/msg/django-developers/XmKfVxyuyAU/lkp6N1HTzG4J

Jacob's suggestion is interesting, but I have three other features that I
would like to discuss. I think they would greatly ease building
javascript heavy sites with Django.

*I will split the specific suggestions into three different e-mail
threads, so we can discuss them separately*.

Here's a short intro to them:

1. Template Components
React.js popularized the notion that in front-end development, code
organization should be based on interface components, not split up into
HTML, Javascript and CSS. It's simply a different way to organize your
front-end code, one that I strongly think Django should make easier.
(read more in separate thread)

2. Support a client side template language
The multiple template engine work has made it possible to switch out
Django Templates with another engine. One of the most powerful things
this enables is to use a template language that is executable both on
the server and client. This means you can do the same kind of live
updates to your page that the Meteor.js people are doing, and just
re-render parts of the DOM as a direct result of a database update.
(read more in separate thread)

3. Support Server-Sent Events
If you want a snappy user experience, polling isn't enough. There are
two major ways to get out of the request-response cycle. Namely:
websockets and server-sent events. Server-Sent Events have a much
simpler protocol, and could be implemented with asyncio (no external
dependencies except for backwards compatibility) in a performant way. It
would be a great reason to choose Django over other frameworks. (read
more in separate thread)

---

This is not a "request for features". I am in no way saying that I think
you should build things for me. Instead, I'm willing to work on them
together with anyone interested, if these are features that the core
team would be interested in.

But first I would like to see if you:
1. Agree on the main point that Django should do more for javascript
heavy sites.
2. Agree that one or more of the specific features that I'm proposing
would be a good fit for Django (see separate threads).

No matter what, I would love to hear your thoughts and ideas here.

Florian Apolloner

May 30, 2015, 5:19:25 PM
to django-d...@googlegroups.com
Hi Emil,

while supporting server-sent events (or even websockets for that matter) would be great, this is basically a problem which has to be tackled in WSGI first, in my opinion. That said, when you talk about a separate process, what would it look like (aside from using asyncio)? I.e., how would it use Django's current feature set, which is basically blocking everywhere…

Cheers,
Florian

Emil Stenström

May 30, 2015, 5:40:26 PM
to django-d...@googlegroups.com
Hi,

The separate process would have none of Django's features; it would just be a way to send messages to connected clients. Here's an example of how it could work:
  • Client A and Client B connect to my site. Django serves the start page as normal.
  • The start page serves up a javascript file that makes Client A and Client B open a connection to the SSE process. The clients listen for messages on that connection.
  • Client A clicks a button on the site, which sends a normal AJAX request to Django. In the view, a message is passed from Django to the SSE process.
  • The SSE process passes the message through the open connection to both Client A and Client B, who are connected.
  • In our own custom code we can decide what we want to do with the message. Maybe just show it in a list somewhere.

So the SSE process is VERY simple. It just connects to clients and passes messages on to all connected clients.

Does this make sense?

/E

Javier Guerra Giraldez

May 30, 2015, 11:24:50 PM
to django-d...@googlegroups.com
On Sat, May 30, 2015 at 4:19 PM, Florian Apolloner
<f.apo...@gmail.com> wrote:
> ie how would it use Django's current featureset which is basically blocking
> everywhere…

On Sat, May 30, 2015 at 4:40 PM, Emil Stenström <e...@kth.se> wrote:
> The separate process would have none of Django's features, it would just be
> a way to send messages to connected clients.


Take a look at how it's done with uWSGI: [1]. Basically, the HTTP
request arrives at a Django view in the normal way; the view then
generates a response with a custom header, which the uWSGI system
recognizes (much like X-Sendfile) and uses to route the request to a
gevent-based WSGI app that uses an SSE package (probably [2]) to keep
the connection open.

In the given example, there's no further communication between Django
and the SSE process, but the author then comments that it could be done
either with the uWSGI caching framework, or via Redis.

If I were to implement this today (and in fact, I have an application
that might need this in the near future), I would use basically this
scheme (as I'm using uWSGI for all my deployments), and Redis. The
SSE process would keep the current set of connected users in Redis,
and the Django process would send messages to the SSE process via
Redis. Of course, not only 'broadcast' messages to every connected
user, but user-specific messages too.

I don't see how I would do it as a reusable app, since it looks like
most of the code would run 'outside' Django. Doing it for Django core
seems even more problematic, since it would also have to be much more
flexible in its requirements, both for the WSGI container (does
mod_wsgi support async Python code? I guess gunicorn does), and also
for the communication between the 'main' Django views and the SSE part
(the cache API would probably be appropriate).



[1]: https://uwsgi-docs.readthedocs.org/en/latest/articles/OffloadingWebsocketsAndSSE.html
[2]: https://github.com/niwinz/sse

--
Javier

Roberto De Ioris

May 31, 2015, 2:25:41 AM
to django-d...@googlegroups.com
I obviously agree, but take into account that this uWSGI plugin
simplifies the steps a lot:

https://github.com/unbit/uwsgi-sse-offload

--
Roberto De Ioris
http://unbit.com

Javier Guerra Giraldez

May 31, 2015, 3:08:02 AM
to django-d...@googlegroups.com
On Sun, May 31, 2015 at 1:23 AM, Roberto De Ioris <rob...@unbit.it> wrote:
> I obviously agree, but take into account that this uWSGI plugin simplifies
> the steps a lot:
>
> https://github.com/unbit/uwsgi-sse-offload


Nice. It certainly looks cleaner than having an external gevent
process. Does it support sending messages to a specific subscriber
(or subset of subscribers)?

Maybe the easiest way would be to set some variable, either in the
router or in the app view function, and use it as part of the Redis
pub/sub channel name.

--
Javier

Roberto De Ioris

May 31, 2015, 3:40:43 AM
to django-d...@googlegroups.com
The channel name can be specified via variables, so it should be pretty
versatile from this point of view. The plugin is pretty tiny, so if you
have ideas to improve it, feel free to open a GitHub issue.

Emil Stenström

May 31, 2015, 4:52:26 AM
to django-d...@googlegroups.com
Could you help me understand why this has to be done inside a web server container?

When I've previously read about reasons for that, they tend to be things like "handling slow clients", something an event loop is excellent at handling automatically. To me, this means that this process could run outside of all this, and simply be started with "python manage.py runsse 0.0.0.0:9000".
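As a rough sketch of what such a standalone process's skeleton could look like (everything here is illustrative, not an actual management command; only the standard library's asyncio is used):

```python
import asyncio

clients = set()  # writers for the currently connected browsers

async def handle_client(reader, writer):
    """Serve one EventSource connection: read (and here ignore) the
    request, send SSE response headers, and register the client. The
    socket stays open after this handler returns; it only goes away
    when the client disconnects or the process exits."""
    await reader.readline()  # request line; headers ignored in this sketch
    writer.write(
        b"HTTP/1.1 200 OK\r\n"
        b"Content-Type: text/event-stream\r\n"
        b"Cache-Control: no-cache\r\n"
        b"\r\n"
    )
    clients.add(writer)

def serve(host, port):
    """Run the SSE process until interrupted, e.g. serve("0.0.0.0", 9000).
    This call blocks forever; it would be the script's entry point."""
    loop = asyncio.new_event_loop()
    loop.run_until_complete(asyncio.start_server(handle_client, host, port))
    loop.run_forever()
```

A real version would also need to parse the request path and handle client disconnects, but the core really is this small.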

Also, I don't think you would need to mix Redis (or any other persistent storage) into this. The connected clients could simply be stored in an in-memory array that is discarded if the server crashes. When the server is started again, the clients will automatically reconnect, and the list of clients will be built up again.

Federico Capoano

May 31, 2015, 5:15:28 AM
to django-d...@googlegroups.com
Hey Emil,

On Sunday, May 31, 2015 at 10:52:26 AM UTC+2, Emil Stenström wrote:
... 
Also, I don't think you would need to mix redis (or any other persistent storage) into this. The connected clients could simply be stored in an in-memory array, that is discarded if the server crashes. When the server is started again the clients will automatically connect again, and the list of clients would be built up again.

How would a distributed setup be handled with this approach?
I mean: multiple instances of the SSE process behind a load balancer.

Federico 

Javier Guerra Giraldez

May 31, 2015, 5:16:17 AM
to django-d...@googlegroups.com
On Sun, May 31, 2015 at 3:52 AM, Emil Stenström <e...@kth.se> wrote:
> Could you help me understand why this have to be done inside a web server
> container?

AFAICT, it doesn't have to be done in the container, but currently it
must be 'outside' of Django. Having help from the container
allows a single request to be passed from one handler (a Django view)
to another (the SSE process). It might be easier if you use a URL
that goes directly to the SSE process and isn't touched by Django, but
then you need some kind of router, or 'outer URL dispatcher'. Unless
the SSE requests can be on a different port from the start? In that
case, the SSE process will need its own URL dispatcher.


> Also, I don't think you would need to mix redis (or any other persistent
> storage) into this. The connected clients could simply be stored in an
> in-memory array, that is discarded if the server crashes. When the server is
> started again the clients will automatically connect again, and the list of
> clients would be built up again.

In-memory as a Python object? But I think the SSE handler must be a
different interpreter instance... unless it's a thread... not sure if
the GIL would be an issue... or an "external" in-memory table? Well,
that's what Redis is.


--
Javier

Emil Stenström

May 31, 2015, 6:05:05 AM
to django-d...@googlegroups.com

I think the simplest way would be to just set up two different processes, and not let all clients connect to the same one. Django could then maintain a list of all processes and send the same message to each process. Each process would then be responsible for forwarding it to its own set of clients.
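Assuming each SSE process accepts messages over a plain HTTP POST endpoint (the addresses and function name below are made up for illustration), that fan-out from Django is just a loop:

```python
import json
import urllib.request

# Made-up addresses: one entry per running SSE process
SSE_PROCESSES = [
    "http://127.0.0.1:9000/",
    "http://127.0.0.1:9001/",
]

def publish_everywhere(message, processes=SSE_PROCESSES):
    """Send the same message to every SSE process; each process then
    forwards it to its own set of connected clients. Returns the number
    of processes that accepted the message."""
    body = json.dumps(message).encode("utf-8")
    delivered = 0
    for url in processes:
        req = urllib.request.Request(
            url, data=body, headers={"Content-Type": "application/json"}
        )
        try:
            urllib.request.urlopen(req, timeout=2)
            delivered += 1
        except OSError:
            pass  # a down process should not break the request cycle
    return delivered
```

Which clients connect to which process is then a load-balancing decision, outside both Django and this helper.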

Emil Stenström

May 31, 2015, 6:12:41 AM
to django-d...@googlegroups.com
On Sunday, 31 May 2015 11:16:17 UTC+2, Javier Guerra wrote:
On Sun, May 31, 2015 at 3:52 AM, Emil Stenström <e...@kth.se> wrote:
> Could you help me understand why this have to be done inside a web server
> container?

AFAICT, it doesn't have to be done in the container, but currently it
must be 'outside' of Django. Having help from the container
allows a single request to be passed from one handler (a Django view)
to another (the SSE process). It might be easier if you use a URL
that goes directly to the SSE process and isn't touched by Django, but
then you need some kind of router, or 'outer URL dispatcher'. Unless
the SSE requests can be on a different port from the start? In that
case, the SSE process will need its own URL dispatcher.

Since the only communication needed between the processes is sending messages, I don't think you need integration at the Python level. Simply starting each process on a different port, and having a way for Django to post a message to that port, would get us a long way. The SSE process would then need only one connection point, "/" if you will. Clients connect to it via a <script> tag (that's a GET), and Django posts to it via POST.
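To make the Django side of that concrete — assuming the SSE process listens on port 9000 and accepts JSON over POST (both are assumptions for illustration, not an existing API) — the view-side helper could be as small as:

```python
import json
import urllib.request

# Hypothetical address of the SSE process; Django only needs to know
# where to POST, nothing more.
SSE_PROCESS_URL = "http://127.0.0.1:9000/"

def push_to_sse(message, url=SSE_PROCESS_URL):
    """POST one message to the SSE process, which then forwards it to
    every connected client. Called from a normal Django view."""
    body = json.dumps(message).encode("utf-8")
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.status
```

A view would call something like push_to_sse({"text": "button clicked"}) and return its normal HttpResponse; the browsers find out through their open EventSource connections.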
 
> Also, I don't think you would need to mix redis (or any other persistent
> storage) into this. The connected clients could simply be stored in an
> in-memory array, that is discarded if the server crashes. When the server is
> started again the clients will automatically connect again, and the list of
> clients would be built up again.

in-memory as a Python object? but i think the SSE handler must be a
different interpreter instance... unless it's a thread... not sure if
the GIL would be an issue... or an "external" in-memory table? well,
that's what Redis is.

The idea here is to keep dependencies to a minimum. That simplifies both deployment and setup considerably. I'm thinking of a simple Python array with the currently connected clients. This array would live entirely inside the new process and wouldn't need to be shared with anyone. When it's time to send a message to clients, you simply loop over the array and send to each client.
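That in-memory loop is only a few lines of code. A sketch, where `connected` holds writer-like objects for the open client connections (the names are mine, not an existing API):

```python
connected = []  # open client connections, held only in this process's memory

def broadcast(text):
    """Frame `text` as a Server-Sent Event and write it to every
    connected client, forgetting clients whose connection has died."""
    frame = ("data: %s\n\n" % text).encode("utf-8")
    still_alive = []
    for client in connected:
        try:
            client.write(frame)
            still_alive.append(client)
        except ConnectionError:
            pass  # client went away; drop it from the list
    connected[:] = still_alive
```

If the process crashes, the list is simply gone; reconnecting browsers rebuild it.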

Florian Apolloner

May 31, 2015, 9:56:09 AM
to django-d...@googlegroups.com


On Saturday, May 30, 2015 at 10:40:26 PM UTC+1, Emil Stenström wrote:
Client A clicks a button on the site, which sends a normal ajax request to Django. In the view, a message is passed from Django to the SSE process.

How? You still need some kind of interprocess communication.

So the SSE process is VERY simple. It just connects to clients and passes messages on to all connected clients.


VERY simple is an oversimplification, in my opinion. I also do not see any reason for supporting it inside Django currently when things like autobahn.ws exist; the only thing missing there is the communication between the processes. I am not sure what people are expecting here from Django (and from your explanations I am still not really convinced, nor do I see a use case at all). Since the message passing between the server processes should be language/framework agnostic anyway, this would be better suited to a third-party project. Reimplementing one of the existing SSE/WebSocket implementations does not really seem like a win to me either.

Cheers,
Florian

Joe Tennies

May 31, 2015, 10:52:50 AM
to django-d...@googlegroups.com
I'm going to kind of reiterate what Florian said.

The fact that you keep describing your idea as another process/thread that has back-and-forth communication with the actual Django instance indicates to me that it's another program. I think people here tend to follow the UNIX philosophy of a collection of smaller, simple programs that can easily interact (the monolithic stack being ignored, as that's a much older decision). If you want tighter integration with Django, I think it would best be done via your program instead of Django itself.

Now, that stated, you may work on your project and discover that there are some additional things that would be easier if done in Django. I'm thinking of some kind of way to register for Django signals from a separate process, or another feature that is useful outside just your project. That would be a good time to discuss such a feature.

Keep up the good work.

- Joe

--
You received this message because you are subscribed to the Google Groups "Django developers (Contributions to Django itself)" group.
To unsubscribe from this group and stop receiving emails from it, send an email to django-develop...@googlegroups.com.
To post to this group, send email to django-d...@googlegroups.com.
Visit this group at http://groups.google.com/group/django-developers.
To view this discussion on the web visit https://groups.google.com/d/msgid/django-developers/86b8ff28-b35f-49cd-8e95-30ace07c9d51%40googlegroups.com.

For more options, visit https://groups.google.com/d/optout.



--
Joe Tennies
ten...@gmail.com

Emil Stenström

May 31, 2015, 5:51:59 PM
to django-d...@googlegroups.com
On Sunday, 31 May 2015 16:52:50 UTC+2, Rotund wrote:
The fact that you keep describing your idea as another process/thread that has back and forth communication with the actual Django instance seems to indicate to me that it's another program.

Yes, I was not clear about the distinction between a separate process and a separate program. I see this as a separate UNIX process.
 
I think people here tend to follow more of the UNIX philosophy of a collection of smaller simple programs that can easily interact (monolithic stack being ignored as that's a much older decision).

Isn't this exactly what I'm proposing? A small simple program that gives Django users access to server sent events in their Django project?
 
If you want tighter integration with Django, I think it would best be done via your program instead of Django itself.

I don't think whether this program lives inside Django or not changes the technical implementation. I fully agree that this could be developed as a separate project, but I also think that Django as a project would benefit from including it and giving people a way to do async without needing external dependencies. The whole "batteries included" idea, applied to modern sites. I believe the problem is that if Django keeps pushing away the things you need for JS-heavy sites, it will slowly become irrelevant.

Keep up the good work.
 
Thanks!

Emil Stenström

May 31, 2015, 6:03:24 PM
to django-d...@googlegroups.com
On Sunday, 31 May 2015 15:56:09 UTC+2, Florian Apolloner wrote:
On Saturday, May 30, 2015 at 10:40:26 PM UTC+1, Emil Stenström wrote:
Client A clicks a button on the site, which sends a normal ajax request to Django. In the view, a message is passed from Django to the SSE process.

How, you still need some kind of interprocess communication

Yes, interprocess communication is needed. The simplest way to get the two programs to talk would be to use an HTTP POST to the other program (which would run on another port). This would also make it possible for other programs to send messages through the same connection.
 

So the SSE process is VERY simple. It just connects to clients and passes messages on to all connected clients.

VERY simple is an oversimplification in my opinion. I also do not see any reason for supporting it inside Django currently when things like autobahn.ws exist; the only thing missing there is the communication between the processes.

Is the argument here "since there are other ways of doing it, I see no reason to do it in Django"? Autobahn is a huge dependency for your program, when all you need for most use cases is a small event loop and a way to pass messages. Setting up websockets and Redis pub/sub is also a huge hassle. I see the need for something simple, something you could get up and running quickly.
 
I am not sure what people are expecting here from Django (and from your explanations I am still not really convinced or see a usecase at all).

A simple use case is Facebook-style notifications. When something happens to a user, you want to send that user a notification right away, not 10 seconds later because that's how often you were polling the server. Another use case is a user chat. When a user sends a message, you want that message to show up right away. Or maybe you keep track of server uptime on a status page. When a server goes down, you want your users to be notified immediately, not later.
 
Since the message passing between the server processes should be language/framework agnostic anyway, this would be better suited to a third-party project. Reimplementing one of the existing SSE/WebSocket implementations does not really seem like a win to me either.

I agree that it should be language/framework agnostic; that's why I think an HTTP POST would work great for sending messages. And I agree this could be developed as a separate project to start with. I don't agree that this wouldn't be worthwhile to include in Django.

Curtis Maloney

May 31, 2015, 7:42:38 PM
to django-d...@googlegroups.com
I think the real questions are:

1. What is stopping a 3rd party product from providing the features you want?

If there is something Django can do to make it easier, point it out, and I'll gladly champion a good feature.

2. Why should your solution be the "blessed" solution?

The discussion clearly shows there are several ways to skin this cat... why is your way better than any other?  Until there is a clear winner [see migrations and South] it should live as a 3rd party app, with Django providing whatever mechanisms/support it can.

Personally, I've used DOM-SSE to implement a simple chat service in Django, using Redis for the pub-sub server. It was only really feasible because (a) I used gevent, (b) no ORM means no need for an async DB adapter, and (c) the py-redis module is pure Python, so it can be monkey-patched.

[FYI, I've also translated that same code to a raw WSGI app, and to a version using asyncio; both are available on GitHub if you're interested </shameless-plug>]

--
Curtis



Andrew Godwin

Jun 1, 2015, 7:05:34 AM
to django-d...@googlegroups.com
Just to chime in here - I've long been in favour of some kind of support for event-driven stuff inside Django, but as Curtis is saying, there's nothing here that couldn't be done in a third party app first and then proven there before any possible merge into core.

I also don't think this proposal goes far enough, in a way - any push for a system that allows asynchronous calling like this should be available to the request-response cycle as well as to websocket/push clients (I want the ability to make my database calls in parallel with my external API calls, perhaps). I have some ideas down this path, but even those are the sort of thing that needs a couple of changes to Django to make things work smoothly, though the bulk can be implemented outside.

If there are specific things Django needs changed to support this properly, I'm all ears, but I'm not sure we should just lump a certain pattern of socket worker into core straight away.

Andrew

Emil Stenström

Jun 1, 2015, 5:56:17 PM
to django-d...@googlegroups.com

On Monday, 1 June 2015 01:42:38 UTC+2, Curtis Maloney wrote:
I think the real questions are:

1. What is stopping a 3rd party product from providing the features you want?

Nothing, as far as I see it. I will develop it as a third-party app first.
 
2. Why should your solution be the "blessed" solution?

The discussion clearly shows there are several ways to skin this cat... why is your way better than any other?  Until there is a clear winner [see migrations and South] it should live as a 3rd party app, with Django providing whatever mechanisms/support it can.

Personally, I've used DOM-SSE to implement a simple chat service in Django, using Redis for the pub-sub server. It was only really feasible because (a) I used gevent, (b) no ORM means no need for an async DB adapter, and (c) the py-redis module is pure Python, so it can be monkey-patched.

[FYI, I've also translated that same code to a raw WSGI app, and to a version using asyncio; both are available on GitHub if you're interested </shameless-plug>]

There are basically two ways of getting out of the request-response cycle: websockets and SSE. Websockets are complicated, and SSE is easy, with a protocol similar to what Django already speaks. So SSE is clearly a better fit for Django. Given that we want to give people a way to do async in Django (open for debate, but I think so), it makes sense to talk about the technical implementation.

I would love to see your code, especially if I can compare the two versions, and maybe even write one using the model I'm proposing.

Emil Stenström

Jun 1, 2015, 6:11:18 PM
to django-d...@googlegroups.com
Thanks for your reply, Andrew.


On Monday, 1 June 2015 13:05:34 UTC+2, Andrew Godwin wrote:
Just to chime in here - I've long been in favour of some kind of support for event-driven stuff inside Django, but as Curtis is saying, there's nothing here that couldn't be done in a third party app first and then proven there before any possible merge into core.

That seems to be the argument against all three of my suggestions: build it as a third-party app, and if everyone ends up using your three apps we might consider adding them to Django. I was hoping for a more "product driven" approach, where we look at the world around us, see that Django is lagging behind in a major area for modern apps, and start playing around with different ways of tackling the problem. Given that there are people who agree this is a problem for Django, I was hoping for more of a "pull" from the community: "We want something that solves this problem, this way, without doing this".

I also don't think that this proposal goes far enough, in a way - any push for a system that allows asychronous calling like this should be available to the request-response cycle as well as websocket/push clients (I want the ability to make my database calls in parallel with my external API calls, perhaps). I have some ideas down this path, but even those are the sort of thing that need a couple of changes to Django to make things work smoothly but the bulk can be implemented outside.

I agree that this would be great, but given that Django won't be rewritten to support async everything any time soon, I think this would be far too big a first step. In my experience, starting small and iterating is a much better way to get great things going. And my subset of server->client message passing is just that.

If there's specific things Django needs changed to support this properly, I'm all ears, but I'm not sure we should just lump a certain pattern of socket worker in core straight away.

"Lumping something in" is not something I've suggested. I'm fully prepared that this will take time and effort.

Andrew Godwin

Jun 2, 2015, 5:25:14 AM
to django-d...@googlegroups.com
Hi Emil,

I agree that there perhaps needs to be more "pull" here than just making a third party app, but I feel I can speak from a good place when I say that third party apps can absolutely prove core Django features: they get much faster release cycles and initial freedom from things like LTS commitments, so they can be rapidly improved before being deemed worth merging. The alternative is either merging something into core that's not ready, dooming it to a slow release cycle and perhaps community pushback, or developing it as a parallel branch, which is not long-term sustainable.

This problem has been something I've been wanting to tackle for a while now - I've always felt that the next step for Django is to start moving away from being so married to the request-response cycle of traditional HTTP - and I've been formulating a plan over the last few months about how to tackle it. It's relatively ambitious, but I think entirely achievable and would be almost completely backwards-compatible. Unfortunately, it's taken a while to get over the 1.7 release cycle and migrations work being slightly overwhelming, or it would have been sooner.

I'm sounding out some of the ideas here at DjangoCon Europe, and hopefully come to the list with an initial proposal or implementation soon - I think the best way to approach this is to sit down and design the API and core code architecture, start proving it's possible, and then present that, because in my experience having code examples can really make a proposal a lot easier to understand. Of course, I'm also fully prepared for people to shoot down my ideas - being a core team member doesn't give me some magical ability to push through changes.

Andrew


Emil Stenström

Jun 2, 2015, 1:03:55 PM
to django-d...@googlegroups.com
Hi,


On Tuesday, 2 June 2015 11:25:14 UTC+2, Andrew Godwin wrote:
Hi Emil,

I agree that there perhaps needs to be more "pull" here than just making a third party app, but I feel I can speak from a good place when I say that third party apps can absolutely prove core Django features: they get much faster release cycles and initial freedom from things like LTS commitments, so they can be rapidly improved before being deemed worth merging. The alternative is either merging something into core that's not ready, dooming it to a slow release cycle and perhaps community pushback, or developing it as a parallel branch, which is not long-term sustainable.

With "pull" I mean someone saying "I think this is something that would fit well with Django, given that it's implemented in a way that satisfies X, Y and Z". I'm of course not saying "merge my code now" when there is no code :) So this should definitely be a third party app first.

I also see how you must have great trust in the model that South was developed under. You built something great, *everyone* started using it, you put in months of work rewriting it for Django, and it's now part of the handful (?) of third party projects that made it into core. So it is possible, just not very likely.
 
This problem has been something I've been wanting to tackle for a while now - I've always felt that the next step for Django is to start moving away from being so married to the request-response cycle of traditional HTTP - and I've been formulating a plan over the last few months about how to tackle it. It's relatively ambitious, but I think entirely achievable and would be almost completely backwards-compatible. Unfortunately, it's taken a while to get over the 1.7 release cycle and migrations work being slightly overwhelming, or it would have been sooner.

Hearing that you are interested in working on the "async problem" makes me very happy. And solving async "generally" would of course address the same problem I'm hoping to solve with my parallel process approach. Really looking forward to this.
 
I'm sounding out some of the ideas here at DjangoCon Europe, and hopefully come to the list with an initial proposal or implementation soon - I think the best way to approach this is to sit down and design the API and core code architecture, start proving it's possible, and then present that, because in my experience having code examples can really make a proposal a lot easier to understand. Of course, I'm also fully prepared for people to shoot down my ideas - being a core team member doesn't give me some magical ability to push through changes.

I'll take the code example lesson with me for next time :) Looking forward to reading your proposal.

Tobias Oberstein

Jun 6, 2015, 9:08:46 AM
to django-d...@googlegroups.com
Hi,

FWIW, here is how you can add real-time push to Django apps without reinventing the world in Django (read: using blocking code):

http://crossbar.io/docs/Adding-Real-Time-to-Django-Applications/

This is using the HTTP/REST bridge services of Crossbar.io:

http://crossbar.io/docs/HTTP-Bridge-Services/

Essentially, you can push real-time events from within Django by doing a plain old (outgoing) HTTP POST to the Crossbar.io bridge service, which will forward the PubSub event to all authorized subscribers via WebSocket (or any of the WAMP fallback or alternative transports, like long-poll or RawSocket).
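A hedged sketch of what that POST could look like from blocking Django code. The bridge URL here is a hypothetical example, and the exact payload shape should be checked against the Crossbar.io docs linked above; this assumes a JSON body carrying a topic and positional arguments:

```python
import json
try:
    from urllib.request import Request, urlopen  # Python 3
except ImportError:
    from urllib2 import Request, urlopen  # Python 2

# Hypothetical endpoint; the real path comes from your Crossbar.io
# HTTP bridge configuration.
BRIDGE_URL = "http://localhost:8080/publish"

def build_publish_payload(topic, *args):
    # JSON body carrying the WAMP topic and positional event arguments.
    return json.dumps({"topic": topic, "args": list(args)})

def push_event(topic, *args):
    # Plain old blocking HTTP POST from inside a Django view or signal
    # handler; the bridge forwards the event to subscribers.
    req = Request(BRIDGE_URL,
                  build_publish_payload(topic, *args).encode("utf-8"),
                  {"Content-Type": "application/json"})
    return urlopen(req)
```

For example, calling push_event("com.example.onchange", {"id": 42}) from a signal handler would fan the event out to every connected subscriber of that topic, without any async code on the Django side.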

Cheers,
/Tobias