Christoph wrote:
> normally in views.py I have a function that takes a request and
> returns a render_to_response or the like. I, however, would like it to
> take a request and have Django reply with multiple responses. Is there
> a way to do this, or is this not possible with HTTP anyway?
HTTP works on a request->response model. You can't send multiple responses
to a single request.
> What I am trying to achieve: have a view that gets called, and in it
> I have a JS/AJAX script that asks for some data. However, calculating
> the data takes a while and it arrives piece by piece. So I thought
> that I could send the data as soon as it becomes available.
How much time does it take? If it is really long, you will have to do some
magic: spawn a Python subprocess to create the data and check from your
Django app whether it is ready.
Maybe there is an easier way to achieve your real goal? What is your
original task? Why does the data preparation take so long? Maybe you can
optimize it with indexes?
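Very roughly, the subprocess idea could look like this (an untested
sketch, not working code; make_data.py, RESULT_PATH and the view names
are all made-up placeholders):

    import os
    import subprocess
    from django.http import HttpResponse

    RESULT_PATH = '/tmp/result.json'

    def start_job(request):
        # Kick off the slow calculation in a separate process so the
        # request itself can return immediately.
        # 'make_data.py' is a stand-in for whatever script creates the data.
        subprocess.Popen(['python', 'make_data.py', RESULT_PATH])
        return HttpResponse('started')

    def poll_job(request):
        # The client polls this view until the result file has appeared.
        if os.path.exists(RESULT_PATH):
            return HttpResponse(open(RESULT_PATH).read(),
                                mimetype='application/json')
        return HttpResponse('not ready')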
--
Dmitry Dulepov
Twitter: http://twitter.com/dmitryd/
Web: http://dmitry-dulepov.com/
Thanks for the help.
I got it to work. Here is what I am doing.
Django gets a request for "/data.json". A view function is called.
This is my function:
from django.http import HttpResponse

def data(request):
    return HttpResponse(subsampling(), mimetype='application/javascript')
subsampling() is an iterator, i.e. it yields data every once in a while.
Look at: http://docs.djangoproject.com/en/dev/ref/request-response/#passing-iterators
I am yielding simplejson.dumps(my_dict) + '\n', which is then received
by the client. The original request for data.json comes from an AJAX
call (using jQuery). In 'beforeSend' I start a function that reads the
XMLHttpRequest.responseText, sees how much data has arrived, and
appends the new data to the already existing data. I do this every
10ms and stop once xhr.readyState indicates that the connection has
been closed.
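In outline, the generator looks something like this (a simplified
sketch; compute_pieces() is a placeholder for the actual calculation,
which produces one dict at a time):

    import simplejson  # or: from django.utils import simplejson

    def subsampling():
        # Each finished piece of the slow calculation is serialized and
        # yielded immediately, so Django can stream it to the client.
        for my_dict in compute_pieces():  # placeholder for the real work
            yield simplejson.dumps(my_dict) + '\n'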
I am quite satisfied with this result. It does not use much memory and
is much faster (and feels much faster) than before.
I tested it under Chromium, Firefox and Konqueror. I will add some
additional functionality and make it a nice graphics module in the
late summer. Probably GPL'ed. It would be for people like me, who have
lots of data and want to make them available interactively.
Regards,
Christoph
On Wed, Jun 16, 2010 at 4:11 PM, Ian McDowall <i.d.mc...@gmail.com> wrote:
> Cool. That is a clever way to subvert (I don't mean this negatively)
> the response generation. I do have a couple of comments:
:-) Thanks
> 1) It relies on the response being sent to the client as it is
> generated and not buffered by the server.
True, although that only affects the server side, and if a server wants
to buffer, who am I to tell it to be quicker. ^^ The data has to be sent
one way or another, and the client does not care whether it all arrives
at once or slowly drips in. This is nicely visible when I reload the
page: the data then comes in much more quickly. (That is also pleasant
in itself; one sees that data is arriving, which makes the wait feel
shorter.) The request produces a "data.json" response which can simply
be cached and subsequently sent to the client in one go.
> That is clearly working for
> you and I don't know the internals of the different web servers to
> know if any would break this. I suspect this will work with all
> servers so nice trick.
I have run it under the Django development server and Apache2. It works with both.
>
> 2) I would be worried by resources on the web server if you expect
> many connections of this type. In most servers that I have seen, each
> request is assigned to a thread from a pool and the thread is not
> freed up until the request is completed. Each of these requests will
> tie up a thread until it is completed (I think). This is likely to
> work well for a small number of simultaneous connections but if you
> had more simultaneous clients than threads in your pool, I would
> expect new requests to be blocked / delayed.
>
> If you only expect one or a small number of clients to use this
> request at one time then you are fine. If you want to scale this then
> I think that you may have a problem. I suggest testing this by
> setting up more simultaneous clients than your server has threads set
> in the pool. The test might be fiddly to set up and you could
> reconfigure the server to have fewer threads and add delays into the
> calculations to make it easier to test.
>
I am writing exams right now but I should test this sometime. Thanks
for pointing it out.
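A quick way to test it later might be something like this (untested
sketch; it assumes the development server is running on localhost:8000
and N_CLIENTS is set higher than the server's thread pool):

    import threading
    import time
    import urllib2

    URL = 'http://localhost:8000/data.json'  # adjust to your setup
    N_CLIENTS = 20  # more simultaneous clients than worker threads

    def fetch(i):
        # Read the streamed response to the end and report the timing.
        start = time.time()
        body = urllib2.urlopen(URL).read()
        print 'client %d: %d bytes in %.1fs' % (i, len(body), time.time() - start)

    threads = [threading.Thread(target=fetch, args=(i,)) for i in range(N_CLIENTS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

If new requests only start returning after earlier ones finish, the
thread pool is indeed the bottleneck.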
Cheers,
Christoph