Multiple AJAX sends within a views.py-function


Christoph

Jun 11, 2010, 11:15:55 AM
to Django users
Hi,

normally in views.py I have a function that takes a request and
returns a render_to_response or the like. I, however, would like it to
take a request and have Django reply with multiple responses. Is there
a way to do this, or is this not possible with HTTP anyway?

What I am trying to achieve: I have a view that gets called, and in it
a JS/AJAX script asks for some data. However, calculating the data
takes a while, and it becomes available piece by piece. So I thought
I could send each piece as soon as it becomes available.

In my example I have a graph (using flot) and it would also look
natural to have the data points show up one by one.

A different approach: have JS keep asking for more data (using GET)
until the view's response sets a flag (NO_MORE_DATA = True). I don't
like this, since it seems to defeat the A in AJAX, and the view would
lose all state (i.e. which points it has already sent and which it
hasn't). However, I don't know much JS or AJAX, nor do I understand
the HTTP protocol well enough.
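For reference, that flag-based polling approach can be sketched in plain Python (no Django imports; the `NO_MORE_DATA` flag name comes from the description above, while the page size and the stand-in data are made up for illustration):

```python
import json

POINTS = [[x, x * x] for x in range(100)]  # stand-in for the real data
PAGE_SIZE = 25  # illustrative chunk size

def data_page(offset):
    """Return one page of points plus a NO_MORE_DATA flag as JSON.

    The client passes back the offset it has reached (via GET), so
    the server stays stateless between requests: the 'which points
    were already sent' bookkeeping lives in the request parameters.
    """
    chunk = POINTS[offset:offset + PAGE_SIZE]
    return json.dumps({
        "points": chunk,
        "NO_MORE_DATA": offset + PAGE_SIZE >= len(POINTS),
    })
```

The client would keep issuing GET requests, each passing the last offset it received, until `NO_MORE_DATA` comes back true.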

Maybe this has been done before? Is there a way of having server-side
generated AJAX-actions? Is there a way of having Django send something
within a views-function (as opposed to returning it at the end)?

Some possible code:

def my_view(request):
    data = MyModel.objects.filter('something').order_by('somethingelse')
    for item in data:  # note: I don't actually do this, it just shows what I want
        send_json(do_something(item))  # send_json() is the crucial (non-existing) function I am looking for
    return None  # or maybe return Done or something like it

Best regards,
Christoph

Euan Goddard

Jun 11, 2010, 11:36:20 AM
to Django users
If you're worried about the data getting out of order use a counter in
JS and always ensure that you only update the page when you get the
correct (i.e. current) counter back.

I think what you're talking about isn't possible in normal HTTP. I
think you're in a one-request, one-response situation.

Euan

Rafael Nunes

Jun 11, 2010, 11:59:44 AM
to django...@googlegroups.com
Can't you use XMPP?


Dmitry Dulepov

Jun 12, 2010, 2:40:02 AM
to django...@googlegroups.com
Hi!

Christoph wrote:
> normally in views.py I have a function that takes a request and
> returns a render_to_response or the like. I, however, would like it to
> take a request and have Django reply with multiple responses. Is there
> a way to do this, or is this not possible with HTTP anyway?

HTTP works with request->response model. You can't have many responses to a
single request.

> What I am trying to achieve: Have a view that gets called and in it
> I have a JS/AJAX script that asks for some data. However, calculating
> the data takes a while and it comes one after the other. So I thought
> that I could send the data as soon as it becomes available.

How much time does it take? If it is really long, you will have to do
some magic: spawn a Python subprocess to create the data, and check
from your Django app whether it is ready.
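That subprocess idea could be sketched with the standard library alone; the result-file naming, the trivial child "calculation", and both function names below are illustrative assumptions, not an actual implementation:

```python
import os
import subprocess
import sys
import tempfile

def start_job(job_id):
    """Spawn a separate Python process for the long calculation.

    The child writes its result to a well-known file; a later
    request can simply check whether that file exists yet.
    """
    out_path = os.path.join(tempfile.gettempdir(), "job-%s.json" % job_id)
    proc = subprocess.Popen([
        sys.executable, "-c",
        # stand-in for the real, slow calculation
        "import json, sys; json.dump({'done': True}, open(sys.argv[1], 'w'))",
        out_path,
    ])
    return proc, out_path

def is_ready(out_path):
    # a Django view would call this on each poll request
    return os.path.exists(out_path)
```

The first request kicks off `start_job()`; subsequent AJAX requests poll `is_ready()` and fetch the file once it appears.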

Maybe there is an easier way to achieve your real goal? What is your
original task? Why does data preparation take so long? Maybe you can
optimize it with indexes?

--
Dmitry Dulepov
Twitter: http://twitter.com/dmitryd/
Web: http://dmitry-dulepov.com/

Ian McDowall

Jun 14, 2010, 7:13:56 AM
to Django users
As the other posters say, this is not possible with standard HTTP and
certainly not with Django.

By default, HTTP is one request / one response and Django (and other
web frameworks) are built on that. Take a look at the HTTP spec and
try looking at the packet contents for HTTP requests - it is quite
educational.

Paying attention to the innards of HTTP, it is possible to have a
response delivered in more than one packet, so the client can receive
part of the response, then more of it later, and so on. I think this
is normally used as a technique for long-polling. I am doubtful that
the browser will handle it the way you want, and even more doubtful
that Django (or a similar framework) will. If you want to experiment
with HTTP and server internals it could be quite interesting, but as
a way to actually get the job done it is likely to be a red herring.

Other techniques beyond straight HTTP include web sockets and server-
sent events - both part of HTML5. These would let your client receive
updates from the server as and when ready. Unfortunately, these are
not well supported by current browsers (AFAIK Chrome supports web
sockets and Opera has a version of server-sent events) and you will
need a custom server. I am using similar techniques to implement real-
time event handling, but I have had to build my own server and accept
that I cannot support certain browsers.

To go back to your original question, assuming you want a simple way
of getting the job done: I think you either have to accept the delay
and do the calculation in one go, or handle multiple requests.
Performing the calculation in one go may take some time, but AJAX is
asynchronous, so the user can see whatever progress indicator you
want to show (e.g. an hourglass). If the work really will take too
long, then multiple requests look necessary, but remember that you
will need to cache the calculation between requests, or do it in a
separate thread and make the result available. Remember that each
Django request normally runs in isolation and nothing is persisted
when it finishes unless you write it to a database. If you can
structure the calculation so that nothing needs to persist (or each
request includes the result from the previous one), then that may be
OK.
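The "cache the calculation between requests" idea above can be sketched with a plain in-process dict standing in for a real cache (in a deployment with multiple worker processes you would want `django.core.cache` or the database instead; the job-id key and the squaring "calculation" are illustrative):

```python
_cache = {}  # stand-in for django.core.cache; per-process only!

def expensive_step(n):
    # placeholder for one unit of the slow calculation
    return n * n

def get_points(job_id, upto):
    """Return all points computed so far, reusing earlier work.

    Each request only computes the points that previous requests
    have not already cached under this job_id, so repeated polls
    get progressively cheaper.
    """
    done = _cache.setdefault(job_id, [])
    for n in range(len(done), upto):
        done.append(expensive_step(n))
    return list(done)
```

A polling client would call the view backed by `get_points()` with an increasing `upto`, paying only for the new points on each request.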

Is there any option to pre-calculate, so the results are easily
available?
Regards, Ian McDowall


Christoph Siedentop

Jun 15, 2010, 3:45:30 PM
to django...@googlegroups.com
Hi Dmitry, hi Ian,

thanks for the help.

I got it to work. Here is what I am doing.

Django gets a request for "/data.json". A view function is called.
This is my function:

def data(request):
    return HttpResponse(subsampling(), mimetype='application/javascript')

subsampling() is an iterator, i.e. it yields data every once in a while.
Look at: http://docs.djangoproject.com/en/dev/ref/request-response/#passing-iterators

I am yielding simplejson.dumps(my_dict) + '\n', which is then received
by the client. The original request for data.json came from an AJAX
call (using jQuery). In 'beforeSend' I start a function that checks
XMLHttpRequest.responseText to see how much data has arrived and
appends the new data to the already existing data. I do this every
10 ms and stop once xhr.readyState indicates that the connection was
closed.
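The server side of this can be sketched without Django at all: the key piece is just a generator that yields one newline-delimited JSON object per chunk. The body below (squaring five numbers) is a stand-in for the real subsampling calculation:

```python
import json

def subsampling():
    """Yield one newline-delimited JSON object per data point.

    When an iterator like this is passed to HttpResponse, Django
    consumes it lazily and streams each chunk to the client as it
    is produced, e.g.:

        def data(request):
            return HttpResponse(subsampling(),
                                mimetype='application/javascript')
    """
    for x in range(5):  # stand-in for the slow, point-by-point work
        yield json.dumps({"x": x, "y": x * x}) + "\n"
```

On the client, each complete line in responseText can be handed to JSON.parse as it arrives, which is what the 10 ms polling function above does.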

I am quite satisfied with this result. It does not use much memory and
is much faster (and feels much faster) than before.

I tested it under Chromium, Firefox and Konqueror. I will add some
additional functionality and make it a nice graphics module in the
late summer. Probably GPL'ed. It would be for people like me, who have
lots of data and want to make them available interactively.

Regards,
Christoph

Ian McDowall

Jun 16, 2010, 11:11:54 AM
to Django users
Cool. That is a clever way to subvert (I don't mean this negatively)
the response generation. I do have a couple of comments:

1) It relies on the response being sent to the client as it is
generated and not buffered by the server. That is clearly working for
you, and I don't know the internals of the different web servers well
enough to say whether any would break this. I suspect it will work
with most servers, so: nice trick.

2) I would be worried by resources on the web server if you expect
many connections of this type. In most servers that I have seen, each
request is assigned to a thread from a pool and the thread is not
freed up until the request is completed. Each of these requests will
tie up a thread until it is completed (I think). This is likely to
work well for a small number of simultaneous connections but if you
had more simultaneous clients than threads in your pool, I would
expect new requests to be blocked / delayed.

If you only expect one or a small number of clients to use this
request at one time then you are fine. If you want to scale this then
I think that you may have a problem. I suggest testing this by
setting up more simultaneous clients than your server has threads set
in the pool. The test might be fiddly to set up and you could
reconfigure the server to have fewer threads and add delays into the
calculations to make it easier to test.

This is the reason why I chose to build my own custom server for long-
running requests but that causes a lot of extra work and possible bugs
so I don't recommend it if there is any alternative.

Cheers
Ian

On Jun 15, 8:45 pm, Christoph Siedentop <christophsieden...@gmail.com>
wrote:
<<History snipped for brevity.>>

Christoph Siedentop

Jun 16, 2010, 6:05:46 PM
to django...@googlegroups.com
Hi Ian,

On Wed, Jun 16, 2010 at 4:11 PM, Ian McDowall <i.d.mc...@gmail.com> wrote:
> Cool. That is a clever way to subvert (I don't mean this negatively)
> the response generation.  I do have a couple of comments:

:-) Thanks

> 1) It relies on the response being sent to the client as it is
> generated and not buffered by the server.

True, although that would only affect the server, and if a server
wants to be slow, who am I to tell it to be quicker.^^ The data has
to be sent one way or another, and the client does not care whether
it all arrives at once or slowly drips in. This shows nicely when I
reload the page: the data comes in much more quickly. (It is also
nice that one sees data arriving, which makes the wait less bad.)
The request generates a "data.json" response, which can simply be
cached and subsequently sent to the client in one go.

> That is clearly working for
> you and I don't know the internals of the different web servers to
> know if any would break this.  I suspect this will work with all
> servers so nice trick.

I have run it under the Django development server and Apache 2. It works for both.

>
> 2) I would be worried by resources on the web server if you expect
> many connections of this type.  In most servers that I have seen, each
> request is assigned to a thread from a pool and the thread is not
> freed up until the request is completed.  Each of these requests will
> tie up a thread until it is completed (I think).  This is likely to
> work well for a small number of simultaneous connections but if you
> had more simultaneous clients than threads in your pool, I would
> expect new requests to be blocked / delayed.
>
> If you only expect one or a small number of clients to use this
> request at one time then you are fine. If you want to scale this then
> I think that you may have a problem.  I suggest testing this by
> setting up more simultaneous clients than your server has threads set
> in the pool.  The test might be fiddly to set up and you could
> reconfigure the server to have fewer threads and add delays into the
> calculations to make it easier to test.
>

I am writing exams right now but I should test this sometime. Thanks
for pointing it out.

Cheers,
Christoph
