Fellow C++, Boost, and WG21 Enthusiasts, lend me your ear!
I write to inform you about exciting developments in C++ networking.
First, a bit of background. Networking comes in three flavors:
* Networking TS <https://cplusplus.github.io/networking-ts/draft.pdf>
* Boost.Asio <https://www.boost.org/doc/libs/1_69_0/doc/html/boost_asio.html>
* Standalone Asio <https://github.com/chriskohlhoff/asio>
These three are for the most part identical, except that the Asio
flavors have additional features, such as ssl::stream and signal_set,
which are not in the TS but will very likely appear in a future update
or version.
We've had Asio for over a decade now, but there is a shortage of
experts. Some people believe this shortage is because Asio in
particular (and thus the Networking TS, since they have identical
interfaces) is "difficult to use." I believe it is wrong to blame Asio
for this. Concurrent programs in general are hard to write. This
observation still applies:
"Unfortunately, today's reality is that only thoughtful experts can
write explicitly concurrent programs that are correct and efficient.
This is because today's programming models for concurrency are subtle,
intricate, and fraught with pitfalls that easily (and frequently)
result in unforeseen races (i.e., program corruption) deadlocks (i.e.,
program lockup) and performance cliffs (e.g., priority inversion,
convoying, and sometimes complete loss of parallelism and/or even
worse performance than a single-threaded program). And even when a
correct and efficient concurrent program is written, it takes great
care to maintain — it's usually brittle and difficult to maintain
correctly because current programming models set a very high bar of
expertise required to reason reliably about the operation of
concurrent programs, so that apparently innocuous changes to a working
concurrent program can (and commonly do, in practice) render it
entirely or intermittently nonworking in unintended and unexpected
ways. Because getting it right and keeping it right is so difficult,
at many major software companies there is a veritable priesthood of
gurus who write and maintain the core concurrent code."
- Herb Sutter, "The Trouble with Locks",
<http://www.drdobbs.com/cpp/the-trouble-with-locks/184401930>
Although this was written in 2005, it is still relevant today. It is
understandable that Asio becomes the first target of anger and
frustration when writing concurrent programs, since it is on the
"front line," so to speak. There has also been a distinct shortage of
*good* tutorials and examples for Asio: articles or blog posts that
teach you step by step, explain everything, and give example code
demonstrating best practices.
Boost.Beast is my low-level HTTP/WebSocket library which builds on Boost.Asio:
<https://github.com/boostorg/beast>
In the original release of Beast, the documentation stated "prior
understanding of Boost.Asio is required." However, field experience
has shown that users ignore that requirement and attempt to write
complex, concurrent programs as their first-time introduction to both
Beast and Asio. Based on feedback from committee members, and to serve
users better, the scope of Beast has been enlarged to include
first-time users of networking. The upcoming Boost 1.70 release
reflects this new scope and I am excited to announce some very nice
things which you can access today.
First of all, the Beast documentation and examples no longer use the
"boost::asio" namespace; they use the namespace alias "net::". While
this is cosmetic, it reinforces the notion, when inspecting code, that
it is equally applicable to Boost.Asio, Asio, and the Networking TS
(identifiers which are not in the TS, such as signal_set, are still
qualified with boost::asio).
A new documentation page explains the three flavors of networking:
<https://www.boost.org/doc/libs/master/libs/beast/doc/html/beast/using_io.html>
This is also explained in my 2018 CppCon presentation:
<https://youtu.be/7FQwAjELMek?t=444>
I have added a "Networking Refresher," a complete overview of
networking from soup to nuts. No prior knowledge or understanding of
networking is required; everything is explained in detail, so if you
want to learn, this is the place to start. I also kept it short, but
it is loaded with hyperlinks for further learning:
<https://www.boost.org/doc/libs/master/libs/beast/doc/html/beast/using_io/asio_refresher.html>
There was a recent paper in Kona, P1269R0 ("Three Years with the
Networking TS"), about the difficulty of implementing timeouts. To
address this, Beast now has a stream class which implements
configurable timeouts for you, so callers no longer have to fiddle
with timers manually. Everything "Just Works." It achieves the P1269R0
author's goal of having timeouts "built-in to asynchronous
operations," but in a way that fits the design of the TS:
<https://www.boost.org/doc/libs/master/libs/beast/doc/html/beast/using_io/timeouts.html>
I feel that this beast::basic_stream serves as an existence proof
that the current design of the Networking TS is sound: the TS offers a
flexible toolbox which lets you build your own framework the way you
want it, without making odd choices for you. We are still discovering
ways of putting it to full use. The beast::websocket::stream also has
built-in timeouts, enhanced to support "idle pings" (keeping client
connections alive), and everything is fully configurable:
<https://www.boost.org/doc/libs/master/libs/beast/doc/html/beast/using_websocket/timeouts.html>
All you need to do to get sensible, suggested websocket timeouts is
add one line of code after creating your stream:
    ws.set_option(websocket::stream_base::timeout::suggested(
        beast::role_type::server));
To address the cumbersome boilerplate of writing composed operations
(specifically, the need to propagate the associated allocator and
associated executor, and to avoid invoking the completion handler from
within the initiating function when the operation would complete
immediately), Beast adds two new utility base classes, async_base and
stable_async_base, with plentiful documentation and examples
throughout. There are also two well-rounded examples which show you,
step by step, how to write these operations safely.
I have a big new open source project which implements a server using
the system_context, taking full advantage of native Windows and
Mac OS system-level execution context features. To support this use
case and industry feedback, the examples in Beast now default to being
thread-safe: all examples use a "strand" and leverage P1322R0
("Networking TS enhancement to enable custom I/O executors"). Yes,
this paper, which was approved in Kona, is now implemented in both
Boost.Beast and Boost.Asio, including all of the Beast examples, so
if you pick up Boost 1.70 (or the master branches from GitHub) you can
start playing with this as soon as you're done reading this message!
<http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p1322r0.html>
We have an active #beast channel in the C++ Slack
(https://slack.cpp.al) where experienced users can help; no question
is too small! I hope you will join me and the rest of the Beast and
Asio community in exploring what the very powerful Networking TS and
Asio libraries have to offer, and build something great together!
Regards
P.S. Don't forget to star the repository! <https://github.com/boostorg/beast>
_______________________________________________
Boost-users mailing list
Boost...@lists.boost.org
https://lists.boost.org/mailman/listinfo.cgi/boost-users
I have followed your stuff since you were on the JUCE forum (audio
plugin developer here)
and I was wondering if you ever took a look at
It's a networking library that uses coroutines and Boost.Asio.
I was quite impressed when I saw a demo of it at a French C++ user group.
Just my 2 cents.
--
Olivier Tristan
Research & Development
www.uvi.net
Remember, though, I'm stuck in C++11, so no lambda init-captures.
On Thu, 14 Mar 2019 at 16:49, Vinnie Falco via Boost-users
<boost...@lists.boost.org> wrote:
> There was a recent paper in Kona, P1269R0 ("Three Years with the
> Networking TS") about difficulty of implementing timeouts. To address
> this, Beast now has a stream class which implements configurable
> timeouts for you, and callers no longer have to fiddle with timers
> manually anymore. Everything "Just Works." It achieves the P1269R0
> author's goal of having timeouts "built-in to asynchronous
> operations", but in a way that fits in with the design of the TS:
>
> <https://www.boost.org/doc/libs/master/libs/beast/doc/html/beast/using_io/timeouts.html>
I have not used the synchronous API at all, but section 3.1 of P1269R0
seems to make a good point about the lack of timeouts in synchronous
calls, doesn't it?
beast::basic_stream makes using timeouts in the async API far simpler,
but is the sync API really all but unusable because it lacks timeouts?
Sigh. A few comments from real-world experience.
> We've had Asio for over a decade now, but there is a shortage of experts. Some people believe this shortage is because Asio in particular is "difficult to use."
-- Stian
As a followup and a concrete example, here's what C++ is competing against: a short piece of C# code that starts an asynchronous read of a file and waits for it to complete, up to some timeout value, using only platform facilities (i.e., no external libraries).
How does ReadAsync complete? Executor? Thread? OS callback? Don't know, don't care; it works. I have an idea of how to accomplish the same in C++, and it's not pleasant: a worker thread, promise/future, a blocking queue, and CancelSynchronousIO. You cannot even use std::async, because CancelSynchronousIO needs a target thread ID.
> I believe it is wrong to blame Asio for this.
I disagree. Recently, I've been coding parallel, including networked & asynchronous, programs in C# and Java, and the experience has been a JOY. You get threads, executors, synchronization primitives, tasks, cancellation, monadic futures and concurrent (blocking and non-blocking) data structures out of the box with the platform, plus a myriad of extensions as libraries. As for executors, you don't need to be bothered with them unless you really want to for some reason.
Compare the documentation for Vertx (vertx.io) or netty (both Java toolkits for writing asynchronous reactive programs) with that of asio.
I once attempted to write an asio service, tried to understand the simple example from the documentation, and gave up. I used a thread, a blocking call, and CancelSynchronousIO instead (and I consider myself fortunate to develop only for Windows, which has it).
Asio _is_ a relatively nice wrapper around socket and serial port APIs, but that's about it, IMO. On the other hand, I could have written the same wrappers around native APIs in two days and not hauled along what I consider the baggage of technical debt that is asio in the codebase.
Conclusion, if any: many people just want or need thread-per-client and synchronous I/O for simplicity, but until asio/Networking TS provide timeouts and cancellation for synchronous requests, it is the wrong tool for those people, me included.
> I see now...your ideas on how to implement it in C++
> Remember, C++ is often used where no other high-level language has gone before.
I'm well aware of that. However, how is that an argument against also providing user-friendly, "instant productivity" high-level wrappers? Without them, developers with simple needs and reasonable [1] defaults are doomed to chase down the rabbit hole of complex specifications and reinvent the wheel again and again. Countless hours of programmer productivity are wasted.
[1] Yes, what is reasonable? Look towards C# or Java. The wheel has already been invented.
Also, when it is easier (at least for me) to grasp raw Win32/Linux APIs than to study asio concepts and how they fit together, the whole purpose of the standard library is defeated. Given option A (spend time learning concepts valid only in the C++ world) and option B (spend less time learning general OS mechanisms and concepts), I know which route I'm going to choose, and which gives more long-term, reusable knowledge. It is probably also a faster way to working code, which is what matters in the end.
-- Stian
> Nothing stops you or anyone else from supplying that missing code.
?! As I explained in the previous mail, (lack of) TIME stops me. I'm an "ordinary programmer" who *needs* networking/serial/async, but I don't need ridiculously high performance, am not a networking expert, and have no desire to become one. I have even less desire to spend 5X or more time developing software with unnecessarily high performance (!) compared to 1X development effort for "satisfactory performance" (i.e., "it works for the use case"). And I believe I'm not alone there.
> That's not how I read the OP's point. He seemed to me to be pointing out that .NET APIs provide a stupid-simple API on top of lots of complexity which itself sits on top of the Win32 winsock API.
That's the correct interpretation.
> Networking ought to be some stupid-simple coroutinised Ranges i/o API on the top. […]
This.
-- Stian
We *may* get this, if everything works out.
Eric Niebler has a fairly complete concept of what a high level Ranges
i/o ought to look like.
Dalton Woodard is working on a generic mechanism for Ranges i/o to
discover any low level scatter-gather i/o implementation, such that
Ranges can "just work" with whatever you feed it.
Elias Kosunen is working on an iostreams >> replacement to match the
<< replacement that is fmt, just standardised into C++20.
Zach Laine and others are working on Unicode string support.
I'm working on a generic low level i/o library API, and have just
internally distributed the first draft of an enhanced C++ memory and
object model which has first class support for exchanging
representations of objects between multiple C++ programs. I'll be
releasing draft 2 to SG12 next week, I'll be speaking on the topic at
ACCU in April followed by attending WG14 in May, all building towards
generating a head of steam for progress within WG21 over
Cologne-Belfast-Prague-Bulgaria.
We'll see how things go.
Niall
+1
Lots of people are saying this. But no one is listening.
Robert Ramey
On Sat, Mar 16, 2019 at 4:55 PM Niall Douglas via Boost-users <boost...@lists.boost.org> wrote:
> But it's not like they lack implementation and userbase experience,
> either. Most of these are multi-year old codebases with real world users
> using them in production, and giving ample feedback rounding off the
> rough edges over time.
There are many libraries that pass this rather low bar.
On Mar 16, 2019, at 7:42 PM, Emil Dotchevski via Boost-users <boost...@lists.boost.org> wrote:
On Sat, Mar 16, 2019 at 6:57 PM Jon Kalb via Boost-users <boost...@lists.boost.org> wrote:
> Of any individual C++ library author today, my guess is that Eric would have the best chance of attracting a large user base for his personal GitHub, but I would still consider that exposure as below the bar of what I'd like to see for a library that we are going to put into the standard (given that, once in, we'll never be able to drop support or modify the ABI).
It appears that, depending on whom you know, it might be easier to get a library into the C++ standard than into Boost.
This speaks volumes, if not about the quality of the libraries being standardized, at least about the different mindset of the people involved.
> I know that getting a library into Boost is no small task (Niall may know that better than anyone on the planet), but if it is worth putting in std then it is worth putting in Boost first.
It appears that he has learned his lesson and won't repeat that mistake. :)
I'd say Chris right now would probably disagree. Beman probably would
disagree too; Filesystem took far longer at WG21 than anyone
reasonable would have expected, and that's after it had gone through
the Boost process multiple times. Beman told me it took him between
twenty and twenty-five years of effort, depending on how you counted
it. He wasn't sure he would do it again, if he could have travelled
back in time to tell his younger self.
I've heard Eric in the past say it is equally hard to get a library into
Boost as into WG21, and they are both quite different processes with
different needs and requirements. And he's said on more than one
occasion that he only has enough in him to get Ranges past either Boost,
or WG21. He doesn't have enough in him to do both.
I'd echo Eric's sentiments on this completely. I don't have it in me to
ever get a fundamentals library into Boost again. Besides, I'd likely
end up getting divorced and my children no longer speaking to me. It's
not worth it, personally speaking.
>> > I know that getting a library into Boost is no small task (Niall may
>> know that better than anyone on the planet), but if it is worth
>> putting in std then it is worth putting in Boost first.
>>
>> It appears that he has learned his lesson and won't repeat that
>> mistake. :)
>
> I hope not. This is a “mistake” that bears repeating.
I can see me doing a small niche library one day. But never again for a
fundamentals library.
I would also say that the Boost review process is a poor fit for niche
libraries. There isn't a sufficient critical mass of domain experts
for a good review, so it ends up becoming a popularity vote by the
inexperienced-in-that-very-specific-topic. In past reviews I've been
dismayed to see the review of someone I know to be a domain expert
weighted equally with reviews by those who were not. But there is
little you can do about it: when a mere five people are reviewing,
there aren't the numbers to weigh one much heavier than the others. So
a library ends up passing despite the domain expert having found
severe flaws in the proposed design.
I'm all for more libraries heading to Boost first, where Boost is a good
fit for them. Boost was great for Outcome, precisely because Outcome had
so much review feedback available. Fundamental libraries have that. But
niche libraries don't fit Boost well. For example, I don't personally
think that the Graphics proposal would fit a Boost review well. The
bikeshedding would be enormous, the topic is quite niche, and the true
correct design is probably three separate Graphics libraries, the same
way as XML processing has three entirely separate ideal ways of library
support. There are also several big, well-established, dominant API
libraries, none of which have a snowball's chance at WG21 due to using
ancient C++, or C.