This library focuses on a common and popular use case for JSON. It provides
a container to hold parsed and serialised JSON types, and offers more
flexibility and better benchmark performance than its competitors.
JSON highlights the following features in the documentation:
- Fast compilation
- Requires only C++11
- Fast streaming parser and serializer
- Easy and safe API with allocator support
- Constant-time key lookup for objects
- Options to allow non-standard JSON
- Compile without Boost, define BOOST_JSON_STANDALONE
- Optional header-only, without linking to a library
(a point I would like to add as a highlight: it has a cool Jason logo 😝)
To quickly understand and get the flavour of the library, take a look at
"Quick Look":
<http://master.json.cpp.al/json/usage/quick_look.html>
You can find the source code to be reviewed here:
<https://github.com/CPPAlliance/json/tree/master>
You can find the latest documentation here:
<http://master.json.cpp.al/>
Benchmarks are also given in the documentation, which can be found here:
<http://master.json.cpp.al/json/benchmarks.html>
Some early reviews have already been posted; the thread can be found here:
<https://lists.boost.org/Archives/boost/2020/09/249745.php>
Please provide in your review information you think is valuable to
understand your choice to ACCEPT or REJECT including JSON as a
Boost library. Please be explicit about your decision (ACCEPT or REJECT).
Some other questions you might want to consider answering:
- What is your evaluation of the design?
- What is your evaluation of the implementation?
- What is your evaluation of the documentation?
- What is your evaluation of the potential usefulness of the library?
- Did you try to use the library? With which compiler(s)? Did you have
any problems?
- How much effort did you put into your evaluation? A glance? A quick
reading? In-depth study?
- Are you knowledgeable about the problem domain?
More information about the Boost Formal Review Process can be found
here: <http://www.boost.org/community/reviews.html>
Thank you for your effort in the Boost community.
--
Thank you,
Pranam Lashkari, https://lpranam.github.io/
_______________________________________________
Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
However, I have some experience integrating this library into a private
project, and I felt it might be valuable to share my experiences.
* - What is your evaluation of the design?*
My personal opinion is that the design is sane and well-reasoned. Any areas
with which I have previously taken issue have been raised with the authors
and concerns covered. Some effort was made to explore the effect of ideas I
presented and outcomes were measured. My opinion is that the final design
is largely data-driven.
* - What is your evaluation of the implementation?*
I have found no faults in the implementation during use. There is the
slightly off-putting fact that the default text representation of parsed
integers that are exact powers of 10 is scientific notation. Unusual as it
seems, however, this is strictly conformant with the JSON standard.
* - What is your evaluation of the documentation?*
The documentation is clear and succinct; the fact that it takes steps to
elucidate the rationale behind design decisions ought to head off a number
of "Wait! Why?" questions.
* - What is your evaluation of the potential usefulness of the library?*
The library has already proven useful to me.
For me personally, the ability to map the parser directly to C++ objects
without going through the intermediate json::value data structure would
offer a minor improvement in performance.
I have started exploring the building of such a parse handler which I
intend to offer as something to go into the examples section at some future
date assuming I have the time to finish it. Notwithstanding, the fact that
I can supply a custom arena-style memory resource to the parser/value
largely offsets this concern in practice. Essentially, by avoiding
building the DOM I can save one memory allocation and some redundant
copies. In practice, neither the allocation nor the copies have proven
measurably expensive in my uses of the library.
Whether this ultimately belongs in the JSON library or should be a
dependent library is not for me to say.
It is worth noting that the separation of concerns between parser and
handler is helpful in that it makes this work possible without having to
rewrite any parsing code.
* - Did you try to use the library? With which compiler(s)? Did you have
any problems?*
I have used the library with GCC 9 & 10 and Clang 9 & 10, with the C++17
and C++20 standards selected. I chose the Boost-dependent (default) option
rather than standalone because I was also using the Boost libraries Asio,
Beast, Program Options and System.
* - How much effort did you put into your evaluation? A glance? A quick
reading? In-depth study?*
I have written an application that uses the library: a cryptocurrency
market-making bot that talks to the Deribit websocket/JSON API.
* - Are you knowledgeable about the problem domain?*
Yes. In a previous market data distribution engine I used Nlohmann JSON
(high level but slow), RapidJSON (low level but fast) and JSMN (super low
level and blindingly fast but no DOM representation, only provides indexes
into data).
Regards,
R
--
Richard Hodges
hodg...@gmail.com
office: +442032898513
home: +376841522
mobile: +376380212
If the proposed library were called Boost.Serialisation2 or something, I
would see your point.
But it's called Boost.JSON. It implements JSON. It does not implement
CBOR. I don't think it's reasonable to recommend a rejection for a
library not doing something completely different to what it does.
Speaking more widely than this, if the format here were not specifically JSON,
I'd also be more sympathetic - binary as well as text
serialisation/deserialisation is important. But JSON is unique: most
users would not choose JSON except that they are forced to do so by
needing to talk to other stuff which mandates JSON.
At work we have this lovely very high performance custom DB based on
LLFIO. It has rocking stats. But it's exposed to clients via a REST API,
and that means everything goes via JSON. So the DB spends most of its
time fairly idle compared to what it is capable of, because JSON is so
very very slow in comparison.
If we could choose anything but JSON, we would, but the customer spec
requires an even nastier and slower text format than JSON. We expect to
win the argument to get them to "upgrade" to JSON, but anything better
than that is years away. Change is hard for big governmental orgs.
In any case, CBOR is actually a fairly lousy binary protocol. Very
inefficient compared to alternatives. But the alternatives all would
require you to design your software differently to what JSON's very
reference-count-centric design demands.
Niall
I see CBOR not as a separate format, but as an encoding for JSON (with
some additional features that can safely be ignored). I use it to store
and transmit JSON data, and would not use it for anything else.
JSON data exists independently of the JSON serialization format. This
is in fact a core principle of Boost.JSON: the data representation
exists independently of the serialization functions.
> Speaking wider that this, if the format here were not specifically JSON,
> I'd also be more sympathetic - binary as well as text
> serialisation/deserialisation is important. But JSON is unique, most
> users would not choose JSON except that they are forced to do so by
> needing to talk to other stuff which mandates JSON.
>
> At work we have this lovely very high performance custom DB based on
> LLFIO. It has rocking stats. But it's exposed to clients via a REST API,
> and that means everything goes via JSON. So the DB spends most of its
> time fairly idle compared to what it is capable of, because JSON is so
> very very slow in comparison.
This is exactly the sort of problem that CBOR excels at. The server
produces JSON. The client consumes JSON. Flip a switch, and the server
produces CBOR instead. Ideally the client doesn't have to be changed at
all. One line of code changed in the server, and suddenly you have
twice the data throughput.
> In any case, CBOR is actually a fairly lousy binary protocol. Very
> inefficient compared to alternatives. But the alternatives all would
> require you to design your software differently to what JSON's very
> reference count centric design demands.
It may be lousy as a general-purpose binary protocol, but it's a fairly
good binary JSON representation. Which is why it belongs in a JSON
library if it belongs anywhere.
--
Rainer Deyke (rai...@eldwood.com)
> -----Original Message-----
> From: Boost <boost-...@lists.boost.org> On Behalf Of Pranam Lashkari via Boost
> Sent: 14 September 2020 08:30
> To: boost <bo...@lists.boost.org>
> Cc: Pranam Lashkari <plashk...@gmail.com>
> Subject: [boost] [review][JSON] Review of JSON starts today: Sept 14 - Sept 23
>
> Boost formal review of Vinnie Falco and Krystian Stasiowski's library JSON starts today and will run for 10
> days ending on 23 Sept 2020. Both of these authors have already developed a couple of libraries which
> are accepted in Boost(boost beast and Static String)
What I knew about JSON could have been written on a postage stamp, but I have at least read the documentation.
It is an example of how it should be done, with good examples and good reference info. I could quickly see how to use it; I didn't have a need to use it myself, but others already have, and there are benchmarks too.
On that basis alone, my view is ACCEPT, FWIW.
> (a point I would like to add in highlight: it has cool Jason logo 😝)
(😝 indeed - my only recommendation is to replace this with a Boost logo ASAP! No - more than that - I make it a condition for acceptance.)
Paul
PS That there are other libraries doing similar (but fairly different) things is no reason to reject this library.
Thank you very much for your time in advance.
On Mon, Sep 14, 2020 at 1:00 PM Pranam Lashkari <plashk...@gmail.com>
wrote:
--
Thank you,
Pranam Lashkari, https://lpranam.github.io/
My gopher friend actually makes fun of us because we don't have
json.Unmarshal(): https://blog.golang.org/json
It's him making fun of us/me that gave me the idea to integrate
Boost.Serialization and Boost.Hana's Struct.
```go
type Bid struct {
    Price     string
    Size      string
    NumOrders int
}

type OrderBook struct {
    Sequence int64 `json:"sequence"`
    Bids     []Bid `json:"bids"`
    Asks     []Bid `json:"asks"`
}

...

var book OrderBook
json.Unmarshal(buffer, &book)
```
But that's a subject that I'll want to discuss with Robert Ramey after
Boost.JSON's review.
In my job, we have our own in-house serialization framework. I guess
that's what many others do as well in C++, but the requirements on the
JSON library to be usable in serialization frameworks would be quite
similar. Compare Boost.Serialization and QDataStream from Qt.
--
Vinícius dos Santos Oliveira
https://vinipsmaker.github.io/
> In my job, we have our own in-house serialization framework. I guess
> that's what many others do as well in C++, but the requirements on the
> JSON library to be usable in serialization frameworks would be quite
> similar.
The easiest way to make a JSON input archive for Boost.Serialization is to
use Boost.JSON and go through json::value. Boost.Serialization typically
wants fields to appear in the same order they were written, but JSON allows
arbitrary reordering. Reading a json::value first and then deserializing
from that is much easier than deserializing directly from JSON.
For output, it's easier to bypass json::value and write JSON directly.
It may be easier, but it's also the wrong way. There are libraries
whose usage patterns leak some structure into user code itself: Lua
and its virtual stack, for instance. Boost.Serialization is one such
library. That's really a topic whose explanation I'd rather delay.
Boost.Serialization's un-capturable structure makes it impossible to
untangle the serialization format from ordered trees. For a JSON
archive, this means arrays everywhere, and json::value doesn't really
help here.
But there's a catch. The user can have valid reasons to restrict
serialization to just one archive or another. That's where he'll
overload his types directly for a single archive, and where he can
use archive extensions from one model. For the JSON iarchive, the
only extension we need to expose is the pull parser object. This,
plus some accompanying algorithms (json::partial::skip(),
json::partial::scanf(), ...), is a much better answer than what you
propose.
It's not really hard if you understand the pull parser model. But I'm
more excited about Boost.Hana's Struct integration, actually. We can
discuss it all in detail after Boost.JSON's review. Please don't
misdirect discussions with comments such as "it'd be easier with
json::value"; that's hardly a comment from somebody who has explored
the subject.
--
Vinícius dos Santos Oliveira
https://vinipsmaker.github.io/
> But I'm more excited about Boost.Hana's Struct integration actually.
Is this like
https://pdimov.github.io/describe/doc/html/describe.html#example_serialization
or do you have something else in mind?
Yes and no. If you rely on universal serialization, you fall back to
Boost.Serialization's implied ordered trees.
The TMP code could look like the example you linked (e.g. one
mp11::mp_for_each() here and there). Are you willing to submit a new
reflection library to Boost? I'd be glad to hear all about it after
Boost.JSON's review. One of such questions would be: how does it
compare to Boost.Hana's Struct?
The reason I'm trying to delay this debate is that I know we'll have
lots of noise from non-interested parties, and I'd rather not deal
with that. A couple of extra days and all the unwanted noise will
vanish.
There are a few questions to sort out:
- Do you want to enable Boost.Hana by default or use an opt-in mechanism?
- Overload rules to choose the most specific serialization.
- Integration to Boost.Serialization's extra features. This will
require a larger time investment, but has nothing to do with
Boost.Hana or integration to any other reflection library.
- And of course, concerns raised by interested stakeholders.
The end game would be:
```json
{
"foo": 42,
"bar": "hello world"
}
```
(de)serializes effortlessly to:
```cpp
struct Foobar
{
int foo;
std::string bar;
};
```
just like Go's json.Unmarshal()
--
Vinícius dos Santos Oliveira
https://vinipsmaker.github.io/
Switch "foo" and "bar" here for generality.
> (de)serializes effortlessly to:
>
> ```cpp
> struct Foobar
> {
> int foo;
> std::string bar;
> };
> ```
It's already possible to make this work using Boost.JSON.
https://pdimov.github.io/describe/doc/html/describe.html#example_from_json
Just add the `parse` call.
> Are you willing to submit a new reflection library to Boost?
Yes, I'm waiting for the review to end to not detract from the Boost.JSON
discussions.
You're missing the point. I don't want the DOM intermediate
representation. It's not needed. The right choice for the archive
concept is to use a pull parser and read directly from the buffer.
The DOM layer adds a high cost here.
> Yes, I'm waiting for the review to end to not detract from the Boost.JSON
> discussions.
Looking forward to it.
--
Vinícius dos Santos Oliveira
https://vinipsmaker.github.io/
> You're missing the point. I don't want the DOM intermediate
> representation. It's not needed.
I get that. I also get the appeal and the utility of pull parsers. But my
point is that I can make that work today, quite easily, using Boost.JSON.
It's 2020. Boost has zero pull JSON parsers. (I counted them, twice.) The
"boost" implementation on https://github.com/kostya/benchmarks#json uses
PropertyTree and is leading from behind. Maybe Boost.JSON can be refactored
and a pull parser can be inserted at the lowest level. But in the meantime,
there are people who have actual need for a nlohmann/json (because of speed)
or RapidJSON (because of interface) replacement, and we don't have it. It
doesn't make much sense to me to wait until 2032 to have that in Boost.
My rough count of accept votes indicates that Boost.JSON is going to be accepted, so you get what you want. But I feel we gave up on trying to achieve the best possible technical solution to this problem out of a wrong sense of urgency (also, considering the emails by Bjørn and Vinícius, it does not seem like we need to wait until 2032 for a different approach).
This is C++; we strive for "zero overhead" and "maximum efficiency for all reasonable use cases", and achieving that requires careful interface design. I only worry about interfaces, because implementations can be improved at any time. However, if an interface is designed such that it requires more work than absolutely necessary, this cannot be fixed afterwards.
You said yourself that Boost.JSON is not as efficient as it could be during the conversion of "my data type" to JSON, because the existing data has to be copied into the json::value first. I am a young member of the Boost family, but my feeling is that this would have been a reason to reject the design in the past. Designing abstractions that enable users to get maximum performance if they want it is a core value of C++.
As my previous examples of pybind11 and cereal have shown, the lasting legacies of Boost are excellent interfaces. Making good interfaces is very difficult, and that's where the review process really shines. We have not achieved that here, since valid concerns are pushed aside by the argument that we have to offer a solution right now.
Best regards,
Hans
For the record, I've had offlist email discussions about proposed
Boost.JSON with a number of people where the general feeling was that
there was no point in submitting a review, as negative review feedback
would be ignored, possibly with personal retribution thereafter, and the
library was always going to be accepted in any case. So basically it
would be wasted effort, and they haven't bothered.
I haven't looked at the library myself, so I cannot say if the concerns
those people raised with it are true, but what you just stated above
about lack of trying for a best possible technical solution is bang on
the nail if one were to attempt summarising the feeling of all those
correspondences.
Me personally, if I were designing something like Boost.JSON, I'd
implement it using a generator-emitting design. I'd make the supply of
input destructive and gather-buffer based, so basically you feed the parser
arbitrarily sized chunks of input, and the array of pointers to those
discontiguous input blocks is the input document. As the generator
resumes, emits and suspends during parse, it would destructively modify
in-place those input blocks in order to avoid as much dynamic memory
allocation and memory copying as possible. I'd avoid all variant
storage and all type erasure by separating the input syntax lex from the
value parse (which would be on-demand, lazy); that also lets one say "go
get me the next five key-values in this dictionary", which would
utilise superscalar CPU concurrency to execute those in parallel.
I would also attempt to make the whole JSON parser constexpr, not
necessarily because we need to parse JSON at compile time, but because
it would force the right kind of design decisions (e.g. all literal
types) which generate significant added value to the C++ ecosystem. I
mean, what's the point of yet another N+1th JSON parser when we could
have a Hana Dusíková all-constexpr-regex-style JSON parser?
Consider this: a Hana Dusíková type all-constexpr JSON parser could let
you specify to the compiler at compile time "this is the exact structure
of the JSON that shall be parsed". The compiler then bangs out optimum
parse code for that specific JSON structure input. At runtime, the
parser tries the pregenerated canned parsers first, if none match, then
it falls back to runtime parsing. Given that much JSON is just a long
sequence of identical structure records, this would be a very compelling
new C++ JSON parsing library, a whole new better way of doing parsing.
*That* I would get excited about.
Niall
> For the record, I've had offlist email discussions about proposed
> Boost.JSON with a number of people where the general feeling was that
> there was no point in submitting a review, as negative review feedback
> would be ignored, possibly with personal retribution thereafter, and the
> library was always going to be accepted in any case.
Personal retribution, really?
> Consider this: a Hana Dusíková type all-constexpr JSON parser could let
> you specify to the compiler at compile time "this is the exact structure
> of the JSON that shall be parsed". The compiler then bangs out optimum
> parse code for that specific JSON structure input. At runtime, the parser
> tries the pregenerated canned parsers first, if none match, then it falls
> back to runtime parsing.
That's definitely an interesting research project, but our ability to
imagine it does not mean that people have no need for what's being
submitted - a library with the speed of RapidJSON and the usability of JSON
for Modern C++, with some additional and unique features such as incremental
parsing.
To go on your tangent, I, personally, think that compile-time parsing is
overrated because it's cool. Yes, CTRE is a remarkable accomplishment, and
yes, Tim Shen, the author of libstdc++'s <regex> also thinks that
compile-time regex parsing is the cure for <regex>'s ills. But I don't think
so. In my unsubstantiated opinion, runtime parsing can match CTRE's
performance, and the only reason current engines don't is because they are
severely underoptimized.
Similarly, I doubt that a constexpr JSON parser will even match simdjson,
let alone beat it.
Some have interpreted it as such, yes. I have a raft of private email
that arrived just after my post here, recounting their stories
about being on the receiving end of Vinnie's behaviour and/or thanking
me for writing that post.
Richard, I appreciate your "they're being snowflakes" response and
standing up for your friend, that was good of you. You should be aware
that I've known Vinnie longer than you, possibly as long as anyone here.
I think you'll find you're in the "get it done" philosophical camp (at
least that's my judgement of you from studying the code you write), so
Vinnie's fine with you. I have noticed, from watching him on the
internet, he tends to find most issue with those in the "aim for
perfection" philosophical camp. Vinnie particularly dislikes other
people's visions of abstract perfection if it makes no sense to him, if
it's obtuse, or he doesn't understand it. If you're in that camp, then
you might have a very different experience than what you've had.
Nevertheless, I believe Vinnie's opinion is important as representative
of a significant minority of C++ users, and I think it ought to continue
to be heard. I might add that the said "snowflakes" that I've spoken to
have all to date agreed with that opinion; we're perfectly capable of
withstanding severe technical criticism. Indeed, some of us serve on
WG21, where every meeting is usually a battering of oneself.
Anyway, I have no wish to discuss this further, all I want to say has
been said.
>> Consider this: a Hana Dusíková type all-constexpr JSON parser could
>> let you specify to the compiler at compile time "this is the exact
>> structure of the JSON that shall be parsed". The compiler then bangs
>> out optimum parse code for that specific JSON structure input. At
>> runtime, the parser tries the pregenerated canned parsers first, if
>> none match, then it falls back to runtime parsing.
>
> To go on your tangent, I, personally, think that compile-time parsing is
> overrated because it's cool. Yes, CTRE is a remarkable accomplishment,
> and yes, Tim Shen, the author of libstdc++'s <regex> also thinks that
> compile-time regex parsing is the cure for <regex>'s ills. But I don't
> think so. In my unsubstantiated opinion, runtime parsing can match
> CTRE's performance, and the only reason current engines don't is because
> they are severely underoptimized.
Hana's runtime benchmarks showed her regex implementation far outpacing
any of those in the standard libraries. Like, an order of magnitude in
absolute terms, linear scaling to load instead of exponential for common
regex patterns. A whole new world of performance.
Part of why her approach is so fast is because she didn't implement all
of regex. But another part is because she encodes the parse into
relationships between literal types which the compiler can far more
aggressively optimise than complex types. So basically the codegen is
way better, because the compiler can eliminate a lot more code.
> Similarly, I doubt that a constexpr JSON parser will even match
> simdjson, let alone beat it.
simdjson doesn't have class-leading performance anymore. There are
faster alternatives depending on your use case.
Niall
On 9/23/20 1:55 PM, Niall Douglas via Boost wrote:
> Consider this: a Hana Dusíková type all-constexpr JSON parser could let
> you specify to the compiler at compile time "this is the exact structure
> of the JSON that shall be parsed". The compiler then bangs out optimum
> parse code for that specific JSON structure input. At runtime, the
> parser tries the pregenerated canned parsers first, if none match, then
> it falls back to runtime parsing. Given that much JSON is just a long
> sequence of identical structure records, this would be a very compelling
> new C++ JSON parsing library, a whole new better way of doing parsing.
> *That* I would get excited about.
Great. There's really quite a lot of things to imagine about future C++
and libraries to be written in future C++ that get me excited.
There are also things about C++ that really don't excite me and probably
most other people as well. To name a few examples: std::vector, std::string,
etc. They are not perfect, they are not fancy, they are not even pretty.
But they are useful. Almost every day. To many, if not most C++ developers.
And they perform well. In many ordinary use-cases.
That's where I could see Boost.JSON: it's not perfect and probably also not
pretty in parser-aesthetic terms (judging from some of the review comments).
But for me it combines a simple and widely used interface (similar to
nlohmann's) with decent performance (similar to RapidJSON). That gives me
90% of both worlds, and I get it now / soon. As a user of JSON libraries,
I find this a worthwhile trade-off.
Max
wow, quite intimidating words. calm down, fella.
I know you're under big pressure. Boost's review is no small
undertaking. And pressure does blind the best of our judgments.
However... you're being a little paranoid here. Just like you were
paranoid when you demanded a second review manager, just like you were
being paranoid when you opened this whole thread on the premise of "I
find it interesting that people are coming out of the woodwork" (and
that was your response to a **single** reject vote). That's a pattern
here.
Niall never said the process is being rigged. You're imagining it.
Niall just said people were discouraged from sending any review thanks
to your past behaviour. I find that quite easy to believe, actually.
I've spent 50 hours on a review for which your answer summed up to
cheap baits. It was so depressing that I didn't even reply. And you
had my feedback... a long time ago. Your answer? "Let's put it up for
review." You never actually considered my feedback. So yeah, you don't
exactly welcome feedback. I wouldn't be surprised to hear a
declaration such as Niall's.
There's no need for an apology here. Just move on (you all -- Niall
included), and try to learn something out of this event.
--
Vinícius dos Santos Oliveira
https://vinipsmaker.github.io/
We switched almost completely from Boost.Serialization over to cereal,
for similar reasons. Pre-C++11 libs get phased out of projects in
active maintenance (wherever possible) and are discouraged for new
projects where C++17 is the targeted standard.
Ciao
Dani
--
PGP/GPG: 2CCB 3ECB 0954 5CD3 B0DB 6AA0 BA03 56A1 2C4638C5
Thank you very much to everyone who has invested time to review this
library.
> For the record, I've had offlist email discussions about proposed
> Boost.JSON with a number of people where the general feeling was that
> there was no point in submitting a review, as negative review feedback
> would be ignored, possibly with personal retribution thereafter, and the
> library was always going to be accepted in any case. So basically it
> would be wasted effort, and they haven't bothered.
Unless an impassioned on-list reply counts as "personal retribution", I
think it is safe to say that the aforementioned retribution never took
place. However, the false claim that "the library was always going to
be accepted in any case" is really harmful to the reputation of the
Boost Formal Review process.
As I believe that the review process is a vital piece of social
technology that has made the Boost C++ Library Collection the best of
breed, I'd like to avoid having the review of the upcoming proposed
Boost.URL submission tainted with similar aspersions.
Therefore let me state unequivocally, I have no interest in
persecuting individuals for criticizing my library submissions. In
fact I welcome negative feedback as it affords the opportunity to make
the library better - regardless of who is providing the feedback. I am
very happy to hear criticisms of my libraries even from those
individuals who are actively hostile.
However I do have an interest in vigorously opposing bad ideas, such
as this one which was tacked on to the end of the message quoted
above:
> Consider this: a Hana Dusíková type all-constexpr JSON parser could
> let you specify to the compiler at compile time "this is the exact
> structure of the JSON that shall be parsed". The compiler then
> bangs out optimum parse code for that specific JSON structure
> input. At runtime, the parser tries the pregenerated canned parsers
> first, if none match, then it falls back to runtime parsing
The totality of the experience gained in developing Boost.JSON
suggests that this proposed design is deeply flawed. The bulk of the
work in achieving performance comparable to RapidJSON went not into
the parsing but into the allocation and construction of the DOM
objects during the parse. This necessitated a profound coupling
between parsing and creation of json::value objects.
I realize of course that this will invite contradictory replies ("all
you need to do is...") but as my conclusion was achieved only after
months of experimentation culminating in the production of a complete,
working prototype, I would just say: show a working prototype then
let's talk.
Regards
I'm concerned that folks feel that way: Boost has always had a robust
and frankly sometimes bruising review process, but IMO we have ended up
with better libraries as a result. So I hope everyone will feel free to
submit reviews as they see fit.
> I realize of course that this will invite contradictory replies ("all
> you need to do is...") but as my conclusion was achieved only after
> months of experimentation culminating in the production of a complete,
> working prototype, I would just say: show a working prototype then
> let's talk.
We are at heart empiricists. Working code always triumphs!
Best, John.