>>>>> Richard Kettlewell <inv...@invalid.invalid> writes:
>>>>> Ivan Shmakov <iv...@siamics.net> writes:
>>>>> Wouter Verhelst <w...@uter.be> writes:
[Cross-posting to news:comp.infosystems.www.misc.]
>>> libcurl is great for its purpose: to provide downloads of URLs for
>>> applications whose primary purpose is not that. It abstracts away
>>> much of the internals of getting things from the network, and just
>>> gives the application that uses it a simple interface, but one
>>> without features, allowing simple one-way data transfers.
>> All HTTP/1.1 transfers are "one-way," are they not?
> No. For instance a POST request with a body may receive a response
> with a body - a two-way transfer.
A matter of definitions. It can just as well be argued that all
HTTP/1.1 transfers are two-way: even a GET request without a body is
still transferred client-to-server, and the response goes the opposite
way. The point is that, whatever it is called, libcurl appears to
support it. (Frankly, by "two-way" I had assumed "data flowing in
both directions within a single request-response exchange" was meant.)
>>> Finally, the libcurl API does not really work well for pipelining
>>> multiple transfers over multiple connections to the same host,
>>> which browsers tend to do.
>> Then that would remain the responsibility of the calling
>> application. Or Libcurl may be improved in this regard. Or
>> (perhaps the best solution of these) there may be a separate library
>> on top of Libcurl to multiplex requests across connections.
> It does have a pipelining option, I’ve not tried using it though.
Neither have I. But I imagine it won't necessarily be able to
see that the request to /bigsearch.cgi has stalled, and thus the
request to /static/about.html on the same server should go over
a new connection.
> In 2018 the obvious objection to adopting curl is that it’s written
> in C, a poor choice for security-critical code.
Yeah. The same kind of utterly insecure software that no one uses
anymore, like GnuTLS, OpenSSL, Linux, GnuPG, Apache httpd, etc.