How does Django handle outbound requests?


Glenn Rutkowski

Jun 25, 2021, 5:43:35 PM
to Django users
Can anyone point me in the right direction?  We are using Django as an API endpoint that accepts a request, repackages it, sends it to an external service, and waits for a response.  I'm interested in the external call and how it is handled.  If it times out or takes forever, what is the effect on the Django server?  Am I blocking any other execution from happening while waiting for the response?

From what I have read it seems as if the best idea is to pass the job off to a queue and then check for job status alerts or updates.  If I have to rethink this - I'd rather do it now. 

Thanks for reading, looking forward to responses!

-g



Michael Ross

Jun 27, 2021, 11:51:40 AM
to django...@googlegroups.com, Glenn Rutkowski
In my setup(s) I often have:

web -> nginx -> uwsgi -> django -> backend api

Any of these can time out.
I use the requests library for calls to the backend API;
if the call times out it throws an exception, which, if unhandled, gets passed up the chain as a 500 error.

While the backend request is running, Django blocks,
but uwsgi has multiple Django instances started, so other requests can still be handled.
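
A stripped-down sketch of that pattern (the view name, URL and timeout values are made up for illustration, not taken from my actual code):

    import requests
    from django.http import HttpResponse, JsonResponse

    BACKEND_URL = "https://backend.example.com/api/transactions"  # made-up URL

    def forward_transaction(request):
        try:
            # (connect timeout, read timeout) in seconds; without an explicit
            # timeout, requests can wait on the socket indefinitely and this
            # worker stays blocked the whole time.
            resp = requests.post(BACKEND_URL, data=request.body, timeout=(5, 30))
        except requests.Timeout:
            return JsonResponse({"error": "backend timed out"}, status=504)
        except requests.RequestException as exc:
            return JsonResponse({"error": str(exc)}, status=502)
        # Hand the backend's answer straight back to the caller.
        return HttpResponse(resp.content, status=resp.status_code,
                            content_type=resp.headers.get("Content-Type", "text/plain"))

With the exceptions caught in the view, the client sees a 502/504 instead of the unhandled-exception 500.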

Whether I'd use a queue depends strongly on the specific purpose of the request,
... and on how long the requests will take. Will it likely time out, i.e. are you doing something long-running?
And on whether the client just needs a "got it, working on it" response
or actual response data.
E.g. for handling financial data -- where the request has to be processed, and processed exactly once -- I'd use a queue,
with checks for duplicate requests and failed requests, the ability to retry, and so on.
At the other extreme, if it's something like "make me a PDF of this", I'll just pass exceptions right back to the browser
and inform the human to please press the button again.
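
For the queue case, the task itself can stay small. A rough sketch with Celery, where the Transaction model, the backend URL and the retry settings are all invented for the example:

    import requests
    from celery import shared_task

    @shared_task(bind=True, max_retries=3, default_retry_delay=60)
    def post_transaction(self, transaction_id):
        from myapp.models import Transaction  # hypothetical model
        txn = Transaction.objects.get(pk=transaction_id)
        if txn.status == "posted":
            return  # duplicate-request guard: never post the same thing twice
        try:
            resp = requests.post("https://backend.example.com/api/post",  # made up
                                 data=txn.payload, timeout=(5, 60))
            resp.raise_for_status()
        except requests.RequestException as exc:
            # Failed or timed-out calls go back on the queue, up to max_retries.
            raise self.retry(exc=exc)
        txn.status = "posted"
        txn.save(update_fields=["status"])

The view then only has to save the Transaction row, call post_transaction.delay(txn.pk) and return right away.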




--
Michael Ross <g...@ross.cx>

Thomas Lockhart

Jun 27, 2021, 1:37:42 PM
to django...@googlegroups.com
It is quite often recommended to keep it simple and then optimize if you need to. The optimization would be to decouple the external request using a queue, then provide a response to the client separately. Just as you suggest.

You can also design your API to allow this optimization without actually implementing the queue feature yet. But adding a work queue using celery is pretty darn easy (at least after you have done the first one), so that is probably the easiest part of your future work.
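
As an illustration of what I mean by designing for it (every name below is a placeholder, and the in-memory dict is only standing in for a real job table or queue backend), the endpoint can hand back a job id immediately and let the client poll, whether the work is done synchronously today or via celery later:

    import uuid
    from django.http import Http404, JsonResponse

    JOBS = {}  # placeholder store; a real setup would use a model or queue backend

    def submit(request):
        job_id = uuid.uuid4().hex
        # Today the work could be done right here and the result stored;
        # later this line just enqueues a celery task instead.
        JOBS[job_id] = {"status": "pending", "payload": request.body}
        return JsonResponse({"job_id": job_id}, status=202)

    def job_status(request, job_id):
        job = JOBS.get(job_id)
        if job is None:
            raise Http404("unknown job")
        return JsonResponse({"job_id": job_id, "status": job["status"]})

The client-facing contract (a 202 plus a status URL to poll) stays the same either way, so adding the queue later does not break callers.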

If you already have a sense of your likely range of load, you can also use a test rig to load your system and see whether it keeps up without bogging down Django or your other components. Then optimize if needed before you go into production...

hth

- Tom


Glenn Rutkowski

Jun 27, 2021, 5:58:06 PM
to Django users
Tom & Michael,

Thanks for taking the time to reply.  The thing that has me stuck is this (I really want to get a good understanding of it before I proceed with my project):

    While the backend request is running, Django blocks,
    but uwsgi has multiple Django instances started, so other requests can still be handled.

Because this project is financial in nature, I have some issues that I must deal with.  First, the outbound API call posts a transaction into a very archaic marketplace.  Sometimes the market responds successfully within 30 seconds, sometimes I'll get a timeout after 3 minutes, sometimes it'll just hang for as long as I'm willing to wait, and then IF the external API decides to error out I'll get either a 500 or an XML fault document..... it's always fun working with 20-year-old technologies :)

So, as for the number of concurrent requests that we translate: about 500 in a 5-minute window.  Which means my server takes 500 requests, converts each into an outbound XML/SOAP API call (we use the requests library for this), and starts waiting on responses.

Bottom line is that I just don't know if/how/when the number of Django instances will start slowing me down, short of adding more hardware/processors/containers/servers/whatever to solve the issue.

Someone also pointed me towards this:  https://docs.djangoproject.com/en/3.2/topics/async/  
which will give me additional reading for this afternoon :)
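
From a first skim of those docs, an async view for this might look roughly like the following; I haven't tried it against the real marketplace, and the URL, timeout and use of httpx are just assumptions on my part (it also needs an ASGI server like uvicorn or daphne underneath):

    import httpx
    from django.http import HttpResponse, JsonResponse

    async def forward_transaction(request):
        try:
            async with httpx.AsyncClient(timeout=30.0) as client:
                resp = await client.post(
                    "https://backend.example.com/api/post",  # made-up URL
                    content=request.body,
                )
        except httpx.TimeoutException:
            return JsonResponse({"error": "marketplace timed out"}, status=504)
        # While this coroutine is awaiting the slow marketplace, the event loop
        # can serve other requests instead of tying up a whole worker process.
        return HttpResponse(resp.content, status=resp.status_code,
                            content_type=resp.headers.get("Content-Type", "text/plain"))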

I'm not sure there is a follow-up question in here as much as just some rambling discussion points.....

-g