Suggested way to issue requests from inside a view callable


jens.t...@gmail.com

unread,
May 1, 2018, 1:23:30 AM5/1/18
to pylons-discuss
Hello,

I guess following up on the thread on Pyramid’s sub-requests, what is the recommended way to issue a (synchronous) request to an external server from within a view callable? Using the requests package, or are there Pyramid plugins available (didn’t find any at first glance).

Thank you for recommendations…
Jens

Steve Piercy

unread,
May 1, 2018, 3:31:39 AM5/1/18
to pylons-...@googlegroups.com
On 4/30/18 at 10:23 PM, jens.t...@gmail.com pronounced:

> I guess following up on the thread on Pyramid’s sub-requests, what is
> the recommended way to issue a (synchronous) request to an external
> server from within a view callable? Using the requests
> <http://docs.python-requests.org/en/master/> package, or are there
> Pyramid plugins available (didn’t find any at first glance).

Requests, even though it is for humans.

--steve

------------------------
Steve Piercy, Eugene, OR
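A minimal sketch of what that looks like inside a view callable (the endpoint URL and the injectable `upstream` argument are hypothetical, chosen so the view is easy to test without a live network):

```python
# Hypothetical sketch: calling an external API with requests from a
# Pyramid view callable. The URL and the injectable `upstream` session
# are illustrative, not part of any real API.
import requests

UPSTREAM_URL = "https://httpbin.org/get"  # placeholder endpoint

def upstream_view(request, upstream=requests):
    # An explicit timeout keeps a slow upstream from tying up the worker.
    resp = upstream.get(UPSTREAM_URL, timeout=5)
    resp.raise_for_status()
    return {"upstream": resp.json()}
```

Injecting the session as a default argument means tests can pass a stub instead of hitting the real endpoint.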

Michael Merickel

unread,
May 1, 2018, 10:20:41 AM5/1/18
to Pylons
requests is great and is what I use. There is a little-known feature in webob that probably deserves a mention though. You can build a webob.Request object and call send() or get_response() on it to get back a webob.Response object from the third party. It uses the stdlib http.client under the hood.

https://docs.pylonsproject.org/projects/webob/en/stable/api/client.html

❯ env/bin/ipython
Python 3.6.4 (default, Feb 25 2018, 17:42:04)
Type 'copyright', 'credits' or 'license' for more information
IPython 6.2.1 -- An enhanced Interactive Python. Type '?' for help.

In [1]: from webob import Request

In [2]: req = Request.blank('https://httpbin.org/get')

In [3]: resp = req.send()

In [4]: resp.json_body
Out[4]:
{'args': {},
 'headers': {'Accept-Encoding': 'identity',
  'Connection': 'close',
  'Host': 'httpbin.org'},
 'origin': '173.20.144.105',
 ...}

In [5]: list(resp.headers.items())
Out[5]:
[('Connection', 'keep-alive'),
 ('Server', 'gunicorn/19.7.1'),
 ('Date', 'Tue, 01 May 2018 14:14:29 GMT'),
 ('Content-Type', 'application/json'),
 ('Access-Control-Allow-Origin', '*'),
 ('Access-Control-Allow-Credentials', 'true'),
 ('X-Powered-By', 'Flask'),
 ('X-Processed-Time', '0'),
 ('Content-Length', '196'),
 ('Via', '1.1 vegur')]



Jonathan Vanasco

unread,
May 1, 2018, 12:40:29 PM5/1/18
to pylons-discuss
It depends on what you're communicating with, and why.

If your synchronous tasks are done "quickly" or must be "blocking" -- like doing some OAuth or hitting an external API that is guaranteed to return a response within a second or two -- I would just use `requests` from within Pyramid.

If you're concerned about extended processing on your end, or you're calling systems that don't guarantee a response within a given amount of time... I would use Pyramid to trigger a Celery task, and then have the page reload every 5 seconds to poll the Celery backend for status.

The reason for the latter pattern is that otherwise you end up with a mix of browser timeouts and dropped connections, and you tie up Pyramid workers while waiting on the upstream data. Blocking while waiting on `requests` to process the response will end up creating a bottleneck.
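The shape of that trigger-then-poll pattern can be sketched with a stdlib `ThreadPoolExecutor` standing in for the Celery worker and result backend (all names here are hypothetical; with Celery you would call `task.delay()` and poll an `AsyncResult` instead):

```python
# Sketch of the trigger-then-poll pattern: one view kicks off the slow
# work and returns a task id immediately; a second view reports status.
# The executor and jobs dict are stand-ins for Celery's broker/backend.
import uuid
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=4)
jobs = {}  # task_id -> Future (Celery's result backend plays this role)

def slow_upstream_call(data):
    # Placeholder for the long-running upstream request.
    return {"processed": data}

def start_view(data):
    # Kick off the work and return immediately with a task id;
    # the page then polls the status endpoint every few seconds.
    task_id = str(uuid.uuid4())
    jobs[task_id] = executor.submit(slow_upstream_call, data)
    return {"task_id": task_id}

def status_view(task_id):
    future = jobs[task_id]
    if not future.done():
        return {"state": "PENDING"}
    return {"state": "SUCCESS", "result": future.result()}
```

The browser never blocks on the upstream call; it only ever waits on the fast status endpoint.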

jens.t...@gmail.com

unread,
May 1, 2018, 5:34:15 PM5/1/18
to pylons-discuss
Thank you everybody, requests it is then! 👌


On Wednesday, May 2, 2018 at 2:40:29 AM UTC+10, Jonathan Vanasco wrote:
If your synchronous tasks are done "quickly" or must be "blocking" -- like doing some OAuth or hitting an external API that is guaranteed to return a response within a second or two -- I would just use `requests` from within Pyramid.

Yup, quick API calls that can be blocking.

 
On Wednesday, May 2, 2018 at 2:40:29 AM UTC+10, Jonathan Vanasco wrote:
If you're concerned about extended processing on your end, or you're calling systems that don't guarantee a response within a given amount of time... I would use Pyramid to trigger a Celery task, and then have the page reload every 5 seconds to poll the Celery backend for status.
 
Jonathan, funny you mention Celery. I have used it for a while but the experience has been horrible—the thing is riddled with bugs and problems, barely maintained, and the list of issues on GitHub grows daily. Which is why I raised this discussion: https://stackoverflow.com/questions/46517613/python-task-queue-alternatives-and-frameworks

Curious though, I rolled the Celery task integration myself because I didn’t find any specific module for Pyramid. Is there some explicit support module out there?

Cheers,
Jens

Jonathan Vanasco

unread,
May 1, 2018, 6:48:46 PM5/1/18
to pylons-discuss


On Tuesday, May 1, 2018 at 5:34:15 PM UTC-4, jens.t...@gmail.com wrote:
 
Jonathan, funny you mention Celery. I have used it for a while but the experience has been horrible—the thing is riddled with bugs and problems, barely maintained, and the list of issues on GitHub grows daily. Which is why I raised this discussion: https://stackoverflow.com/questions/46517613/python-task-queue-alternatives-and-frameworks

Curious though, I rolled the Celery task integration myself because I didn’t find any specific module for Pyramid. Is there some explicit support module out there?

There is a pyramid_celery integration package https://github.com/sontek/pyramid_celery
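For reference, the wiring pyramid_celery expects looks roughly like this — a sketch only, details may differ by version, and the broker URL is a placeholder:

```python
# Rough sketch of pyramid_celery wiring (not verified against a
# specific release). Celery settings live in the app's .ini file:
#
#   [celery]
#   broker_url = redis://localhost:6379/0
#
from pyramid.config import Configurator

def main(global_config, **settings):
    config = Configurator(settings=settings)
    config.include('pyramid_celery')
    # Point pyramid_celery at the ini file holding the [celery] section.
    config.configure_celery(global_config['__file__'])
    config.scan()
    return config.make_wsgi_app()
```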

I ended up rolling my own as well though, because I didn't know about it at the time.

We also have multiple Pyramid and Twisted systems that communicate with Celery over Redis and Postgres, so needed to build some glue anyways.

I'm not particularly happy with Celery, but it works for our needs. The biggest problem with it is the documentation—it's almost always out of date or wrong. Once you accept that and just ignore it, defaulting to reading the source code, it's much easier to use. That being said, I'm much more likely to deploy a replacement service in Erlang or similar than switch to another Python system.