The tuple solution works because you are using std::tuple as a boost::mpl::vector substitute and implementing catamorphisms over packs through "tuple processing", naming them differently from what they substantially are.
Are you certain that making constexpr overloads of algorithms like all_of etc. is better than addressing the real issue, namely the lack of first-class support for the fundamentals that allow such constructs? Shouldn't these things eventually be considered by expanding <type_traits> into boost::mpl territory?
This seems more like another popular substitute born of hasty inspiration than a solution by design.
On 2 September 2014 12:59, David Krauss <pot...@gmail.com> wrote:
> The initializer_list overloads of std::min and std::max are constexpr. Why
> not add similar overloads to std::all_of, any_of, and none_of?
Sounds like a decent idea. Any particular reason why we wouldn't make
the existing overloads constexpr?
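For concreteness, such an overload is tiny under C++14 rules. A hedged sketch; the signature follows the suggestion above, but the body is my own guess, not any adopted wording:

```cpp
#include <initializer_list>

// Proposed-style constexpr overload: true iff every element is true.
// Usable in constant expressions because initializer_list's begin()/end()
// are constexpr as of C++14.
constexpr bool all_of(std::initializer_list<bool> bs) {
    for (bool b : bs)
        if (!b) return false;
    return true;
}

static_assert(all_of({true, true, true}), "evaluates at compile time");
static_assert(!all_of({true, false, true}), "");
```

any_of and none_of would follow the same pattern with the test inverted.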
On September 2, 2014 8:35:07 PM EEST, rhalb...@gmail.com wrote:
>And why not lambdas and all of std::array, std::bitset, std::tuple,
>std::complex as well?
>
>Or ultimately: full compile-time function evaluation as is possible in
>D
>(anything not involving I/O, so including virtuals and dynamic
>allocation).
A simple explanation for this is political pride over technical substance. You will get many non-reasons supporting this stance. It is more convenient to proceed this way, because hasty inspirations are more popular with expert beginners.
On 2 September 2014 20:35, <rhalb...@gmail.com> wrote:
> And since std::move is also constexpr as of C++14, why not make std::swap
> constexpr as well?
>
> And why not lambdas and all of std::array, std::bitset, std::tuple,
> std::complex as well?
>
> Or ultimately: full compile-time function evaluation as is possible in D
> (anything not involving I/O, so including virtuals and dynamic allocation).

Chances are that most of the things in that set of examples are good candidates for additional constexpr. It's, however, likely that at some point there's going to be increasing resistance towards imposing such compile-time evaluation of everything on every implementation.
Compile-time evaluation implies implementing constructs that have naively been considered "abominations" for the language. See the work done by Alexandrescu on static if. Some of it could get resurrected by Ville. Still not enough.
You essentially caught the gist of it. Concepts are nothing more than glorified shorthands over sfinae hacks. Their political justification is the overwhelming non-reason for not going through with full blown compile time evaluation in a friendly way. Their actual intent would have been met if they behaved more like typeclasses instead of sfinae shorthands.
In another thread, Sutton says that he would consider concepts a failed experiment if they required any kind of template metaprogramming in order to work. The problem is that template metaprogramming is the quintessential tool for compile-time evaluation and, properly done, annihilates concepts as they are right now. The more constexpr metaprogramming advances to become as broad as template metaprogramming, the less need you have for concepts. That is one big issue for constexpr.
--
---
You received this message because you are subscribed to the Google Groups "ISO C++ Standard - Future Proposals" group.
To unsubscribe from this group and stop receiving emails from it, send an email to std-proposal...@isocpp.org.
To post to this group, send email to std-pr...@isocpp.org.
Visit this group at http://groups.google.com/a/isocpp.org/group/std-proposals/.
The issue is not static if alone. The committee does not have a real argument against compile time evaluation. Just politics.
(*) Feel free to count the number of implementations that support
C++14 constexpr.
The result of that count is one. The expectation is that there should eventually be half a dozen more.
With all due respect, people incapable of fixing their compilers are, in their overwhelming majority, either lazy or ignorant. They have the burden of proof to explain why they can't do it when there are no conflicts with what is already in the language. "I said so" does not count. I see people working on clang++ keeping up with language feature implementations as fast as these come about. I see experimental gcc branches working their way through features.
Just because some sponsoring companies may have problems putting together a decent compiler team does not mean that C++ has to remain in the dark ages, especially when it comes to compile time evaluation.
Though, as you know, I have respect for your position as secretary, lack of *eventual* compiler support is not a valid reason for not solving a problem the right way. Your argument expresses that of certain people, but I think you are misguided. By analogy, one should not refuse to make an omelette merely because some eggs would have to be broken. As such, the argument of compiler complexity either safeguards incompetence or is used as a deflector shield against "external" contributions, piggybacking on political correctness in order to avoid liability for irresponsible decisions.
Properly solved problems anticipate eventual compiler "inconsistencies" later on by forcing well-defined semantics. I do not believe that such "difficulty" arguments have any real merit in the realm of compile-time evaluation.
It is all basically politics as I have said before.
Having a C++ language feature extension that is not in the standard is comparable to a language fork when it comes to compile-time evaluation. If successful, it could be repeated to the point of people wondering why the responsible committee subgroup was against it, placing them in an awkward position. Given the interests intertwined in such an old player in the industry as C++, that is engineered not to happen. Cause and effect.
Respectfully, the committee members involved in crucial decisions like this have an immense burden of proof on their shoulders. I respect them for that, but I do not have to respect unprovable non-arguments even if they come out of critically acclaimed people. Even monkeys fall from trees.
You have no valid technical reason against compile time evaluation.
Implementers should do as fine a job as the committee does, since some of them are members of the committee. Their refuge behind unprovable complexity claims is unfounded on technical terms when it harms the language by forcing programmers into complex library constructs that strain compilers more than language features would.
Again, no valid reason against compile time evaluation is given, especially since far too many people have uses for it to the point of implementing such behaviour through template meta-programming. Just like David did with std::tuple for example on this thread.
Addressing others by proxy does not validate indefensible claims. The committee is proven wrong on all your points when decisions based on social non-arguments are put forward by sponsored lobbying, especially when it handles budding drafts that are not proposals yet and haven't been put to any vote, delegating their essence to its sponsored members using arguments as bogus as the ones for the lack of compile-time evaluation. Worse if it is misguiding people.
So don't ask people for proposals if in the end you are the only ones who want to do them. And I am not addressing you directly as Ville, but as a responsible secretary. If you think that people should have respect for such an argument, the answer is "No".
On Tue, Sep 2, 2014 at 9:15 PM, Ville Voutilainen <ville.vo...@gmail.com> wrote:
> (*) Feel free to count the number of implementations that support
> C++14 constexpr.
> The result of that count is one. The expectation is that there should
> eventually be half a dozen more.

And what is holding the non-Clang compilers back exactly? IIRC, Clang moved from C++11-style to C++14-style constexpr in a matter of a few months last year.

It's sad that the slowest adopters determine the pace of progress. Surely the committee is not in the business of protecting half a dozen feet-dragging compilers from competitive pressures?

It would be interesting to see what would happen if someone hacked Clang to introduce full D-style CTFE as an extension (or TS, or whatever the politically accepted phrase is).
On Tue, Sep 2, 2014 at 12:38 PM, Rein Halbersma <rhalb...@gmail.com> wrote:
> On Tue, Sep 2, 2014 at 9:15 PM, Ville Voutilainen <ville.vo...@gmail.com> wrote:
>> (*) Feel free to count the number of implementations that support
>> C++14 constexpr.
>> The result of that count is one. The expectation is that there should
>> eventually be half a dozen more.
>
> And what is holding the non-Clang compilers back exactly? IIRC, Clang moved from C++11-style to C++14-style constexpr in a matter of a few months last year.
The implementation of C++14 constexpr in Clang took, in total, about a week. But the starting point wasn't the same as it is for most compilers. There are basically two different ways that people have historically implemented constant expression evaluation in C and C++ compilers:
1) "fold": the AST is rewritten as constant expressions are simplified. In some implementations this happens as you parse, in others it happens as a separate step. So when you build a + operation whose operands are 1 and 1, you end up with an expression "2" and no evidence that you ever had a '+'. This also means you can use essentially the same code to perform various kinds of optimization.

2) Real evaluation: the AST represents the code as written, and a separate process walks it and produces a symbolic value from it.
Most implementations seem to do (1) in some way or another. This is fairly well-suited for C++11 constexpr (where you can in-place-rewrite calls to constexpr functions to the corresponding returned expression, substitute in values for function parameters, and just keep on folding), but not well-suited to C++14 constexpr. You can, in principle, use a similar technique in C++14, but mutability and non-trivial control flow means you need to keep a lot more things symbolic, retain an evaluation environment, and perform rewrites in the appropriate sequencing order. [Even in C++11, this approach is somewhat tricky, because (for instance) you need to track whether the fold invoked undefined behavior (which would render an expression non-constant) or otherwise did something that's not allowed in a constant expression.]
Clang has always done (2). This made the C++11 constexpr implementation more complex (because Clang couldn't rely on the existing code generation codepaths to perform constant expression evaluation; all sorts of new forms of evaluation needed to be implemented) but made implementing the C++14 constexpr much more straightforward, since most of the necessary infrastructure was built as part of the C++11 constexpr implementation.
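The gap between the two constexpr dialects that a "fold"-style evaluator has to bridge can be seen in a minimal pair of functions (fact11/fact14 are illustrative names of my own, not anything from the thread):

```cpp
// C++11-style constexpr: a single return statement; iteration only via
// recursion, so in-place rewriting of calls suffices.
constexpr int fact11(int n) { return n <= 1 ? 1 : n * fact11(n - 1); }

// C++14-style constexpr: local variables, mutation, loops. Evaluating this
// by AST rewriting would require keeping an evaluation environment and
// sequencing the rewrites, which is the migration cost described above.
constexpr int fact14(int n) {
    int r = 1;
    for (int i = 2; i <= n; ++i)
        r *= i;
    return r;
}

static_assert(fact11(5) == 120, "");
static_assert(fact14(5) == 120, "");
```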
> It's sad that the slowest adopters determine the pace of progress. Surely the committee is not in the business of protecting half a dozen feet-dragging compilers from competitive pressures?
>
> It would be interesting to see what would happen if someone hacked Clang to introduce full D-style CTFE as an extension (or TS, or whatever the politically accepted phrase is).
I think the remaining holes in C++14 constexpr are:
* lambdas
* destructors
* polymorphism (virtual functions, dynamic_cast, typeid, exceptions [for polymorphic catch])
* dynamic memory allocation
The other restrictions in 5.19 exist to prevent "bad things" (reading the value representation of an object, undefined behavior, uninitialized stuff, ...) that would be non-portable, unimplementable (inline asm, depending on values that don't exist until runtime, ...), or -- occasionally -- politically undesirable (goto considered harmful).
I'm confident we'll have a good answer for lambdas + constexpr in C++17.
Constexpr destructors add some implementation cost (implementations would need to track whether and when to run destructors).
The committee had some fears about implementation issues with polymorphism (requiring the compiler to track the dynamic type of an object), but we already actually require such tracking, to support things like checking the validity of a base-to-derived cast. I don't think there are pressing technical issues here.
Dynamic memory allocation is a challenge, due to three factors:

1) Values will be represented differently during translation and during execution, so it's not reasonable to expose the value representation of objects during constexpr evaluation. This is mostly a problem for placement allocation functions, since these allow a constexpr evaluation to get a view of the same storage as both a T* and as (say) an array of unsigned char.

2) Even if restricted to just the normal allocation functions, there is still the issue that the normal allocation functions are replaceable. We have (in C++14) adopted a rule that the compiler is not actually required to call these functions if it can satisfy the allocation in some other way, so this problem is mitigated; in constexpr evaluations, we would (out of necessity) simply not call the allocation function at all.

3) If dynamic storage were able to "leak" from compile time evaluation to runtime evaluation, we would need a mechanism to set up an initial heap state of some form. This is both a desirable feature and a very technically challenging one.
The simplest case for dynamic allocation -- support non-placement new and delete only, do not allow the end result of a constant expression evaluation to refer to dynamically-allocated memory, and do not call a replacement global allocation/deallocation function -- is completely straightforward in Clang's implementation. I can't speak for the complexity that would be required in other implementations.
> Sounds like a decent idea. Any particular reason why we wouldn't make
> the existing overloads constexpr?

> The tuple solution works because you are using std::tuple as a boost::mpl::vector substitute and implementing catamorphisms over packs through "tuple processing" naming them differently from what they substantially are.
>
> Are you certain that making constexpr overloads of such algorithms like all_of etc is better than addressing the real issue, meaning the lack of first class support for the fundamentals allowing such constructs? Shouldn't these things be considered eventually by expanding <type_traits> to boost::mpl territory?
>
> Seems more like another popular substitute by hasty inspiration than a solution by design.

> It's, however, likely that at some point there's going to be
> increasing resistance
> towards imposing such compile-time evaluation of everything on every
> implementation.
Just a naive and honest question: what are the main technical obstacles to compile-time evaluation of all non-I/O expressions? (And preferably without the constexpr keyword; implicit would do nicely.) It works for D, so it seems it can be done, but maybe D's compilation model is too different from C++'s?
On 2014-09-03, at 12:50 AM, George Makrydakis <irreq...@gmail.com> wrote:
> The tuple solution works because you are using std::tuple as a boost::mpl::vector substitute and implementing catamorphisms over packs through "tuple processing" naming them differently from what they substantially are.
Packs are neither tuples (fixed size, unlike elements) nor vectors (fixed size, like elements); they are lists (variable size, like elements), or perhaps queues, since insertion and removal occur at either end.
Anyway, I don’t think taxonomy is a good guiding force.
Template-ids work perfectly well as metadata elements. Naming a class template specialization such as tuple<void> does not cause an instantiation, so there are no semantic requirements on the types in the list. Type-list operations can invariably be implemented without instantiating the type-list class.
Metacomputation often generates runtime types, and using tuple as the basis of computation saves a rebind operation. For a program which generates many types, that may halve the metadata the compiler needs to deal with.
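A minimal sketch of the tuple-as-type-list idea described above; push_front and the element types are illustrative assumptions of mine, not anything from the thread:

```cpp
#include <tuple>
#include <type_traits>

// std::tuple used purely as a type-list: no tuple object is ever constructed,
// and naming tuple<...> does not instantiate the class template, so even void
// or incomplete element types are fine.
template <class List, class T> struct push_front;
template <class... Ts, class T>
struct push_front<std::tuple<Ts...>, T> {
    using type = std::tuple<T, Ts...>;
};

struct Incomplete;  // never defined; still a valid list element

using L0 = std::tuple<void, Incomplete>;
using L1 = push_front<L0, int>::type;

static_assert(std::is_same<L1, std::tuple<int, void, Incomplete>>::value,
              "type-list operation without instantiating the list class");
```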
> Are you certain that making constexpr overloads of such algorithms like all_of etc is better than addressing the real issue, meaning the lack of first class support for the fundamentals allowing such constructs? Shouldn't these things be considered eventually by expanding <type_traits> to boost::mpl territory?
We already have tuple_element and tuple_size.
> Seems more like another popular substitute by hasty inspiration than a solution by design.
bool all_of( std::initializer_list< bool > ) is simple and obvious; does that make it “hasty”? I happen to do a lot of metaprogramming, but the function isn’t specific to metaprogramming at all.
Adding a new core language construct for type lists when we already have all the right machinery, and plenty of legacy code already works without, sounds like over-specific optimization for niche uses at the expense of current practice. Impeding progress on unrelated issues for the sake of increasing pressure to adopt a specific direction on one issue is dirty politics.
On 2014-09-03, at 2:29 AM, George Makrydakis <irreq...@gmail.com> wrote:
> The issue is not static if alone. The committee does not have a real argument against compile time evaluation. Just politics.
In practice the approaches to metaprogramming are more divergent than to usual programming, for many reasons:
1. It feels different and unfamiliar, so beginners are tempted to throw knowledge out the window and “just hack it.”

2. Metaprogramming is more tractable without side effects, so imperative programming tends to get excluded. This was true even in macro preprocessor days. This is the real reason static_if will never happen.

3. Implementations have various quirks, leading to numerous and divergent superstitious workarounds. I still see new code using typedef char yes[2]; sizeof sfinae_test( blah ) == sizeof (yes). This is just an illustrative example, but distraction and circumlocution have hindered the evolutionary development of C++ best practices.
“Politics” are inevitable. Good politics involves not only consensus, but the modesty to limit eternal commitments to judgments of absolute certainty. High-level metaprogramming in general, not only in C++, is too new to expect consensus or certainty about what is necessary, or about what is so harmful as to be useless.
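For reference, the old-style sizeof detector mentioned above, spelled out. The member name `size` and the helper names are illustrative assumptions; modern code would use expression SFINAE or a void_t-style trait instead:

```cpp
// The classic "sizeof yes" detection idiom: overload resolution picks the
// first overload when &T::size is well-formed, the ellipsis fallback
// otherwise, and the result is read back through sizeof.
typedef char yes[2];
typedef char no[1];

template <class T>
static yes& has_size_test(decltype(&T::size));  // viable only if T::size exists
template <class T>
static no& has_size_test(...);                  // fallback

struct WithSize    { int size() const; };
struct WithoutSize {};

static_assert(sizeof(has_size_test<WithSize>(nullptr)) == sizeof(yes),
              "WithSize::size detected");
static_assert(sizeof(has_size_test<WithoutSize>(nullptr)) == sizeof(no),
              "no WithoutSize::size");
```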
According to the wishes of the committee in the currently active EWG#30, you cannot defend such an argument. You also validated my own argument in your previous paragraph.
There are lots of very basic manipulations that are either really hard or impossible to do with argument packs unless you use something that causes a big recursive template instantiation, which is expensive at compile-time and can cause bad error messages. I want to be able to index argument packs with integral constant expressions, "take" or "drop" the first N elements of the pack, etc.
On September 2, 2014 11:37:47 PM EEST, Ville Voutilainen <ville.vo...@gmail.com> wrote:
> I think what's misguiding is all these unsubstantiated claims about
> "social non-arguments", "sponsored lobbying" and "bogus arguments".

C++ needs volunteers, but it seems that certain people will have problems with this. I am just informing a new potential author on what to expect, given that you tried to motivate another person.

George
I may be wrong, but I get the impression that your experience with the committee is dominated by one large, unfortunate experience or "data point". Before extrapolating that into conclusions about the committee, I would suggest many additional data points are required.
Tony
What argument? I see a lot of discussion about taxonomy. You left a paragraph incomplete.
EWG30 says:
> There are lots of very basic manipulations that are either really hard or impossible to do with argument packs unless you use something that causes a big recursive template instantiation, which is expensive at compile-time and can cause bad error messages. I want to be able to index argument packs with integral constant expressions, "take" or "drop" the first N elements of the pack, etc.
This does not express a prejudice against using std::tuple as a type-list nor commit to a new core language facility. The immediately linked proposals don’t describe any new core facilities either.
My main argument is simply that std::tuple is a better choice for EWG30-style operations than a dedicated std::packer class, as mentioned in N4115, or something heavier like mpl::vector. This subject perhaps deserves a paper.
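One of the EWG30-style operations (indexing by an integral constant expression) already falls out of existing machinery once the pack is carried as a std::tuple, which is the point of favoring tuple over a dedicated packer class. A sketch; pack_element_t is my own illustrative alias:

```cpp
#include <cstddef>
#include <tuple>
#include <type_traits>

// Indexing an argument pack with an integral constant expression, using
// nothing but the existing std::tuple_element machinery. No tuple object
// is ever constructed.
template <std::size_t I, class... Ts>
using pack_element_t =
    typename std::tuple_element<I, std::tuple<Ts...>>::type;

static_assert(std::is_same<pack_element_t<0, char, int, long>, char>::value, "");
static_assert(std::is_same<pack_element_t<2, char, int, long>, long>::value, "");
```

"take"/"drop" of the first N elements can be built the same way from std::tuple_cat plus an index sequence.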
Also, I believe that the gain of avoiding wrapping pack expansions as tuple<T...> isn’t worth the cost of a new core language feature. However, this isn’t as concrete and I suspect such a feature will eventually happen anyway. It could be well and good, as long as there are other concomitant gains.
However, this really has no bearing on the proposal in this thread, since as mentioned it’s not really a metaprogramming facility but just a generic facility with as much application in metaprogramming as anywhere else a braced-init-list of Booleans might arise.
On September 5, 2014 1:10:40 AM EEST, Gabriel Dos Reis <g...@axiomatics.org> wrote:
>Rein Halbersma <rhalb...@gmail.com> writes:
>
>[...]
>
>| The simplest case for dynamic allocation -- support non-placement
>| new and delete only, do not allow the end result of a constant
>| expression evaluation to refer to dynamically-allocated memory,
>| and do not call a replacement global allocation/deallocation
>| function -- is completely straightforward in Clang's
>| implementation. I can't speak for the complexity that would be
>| required in other implementations.
>|
>|
>| Could you give an example of what type of code would be possible /
>| hard?
>
...
>If you add dynamic allocation, you need to describe what that means
>(e.g. the form of what is a value) through that chain. As far I know,
>nobody has presented a thoughtful coherent analysis of the issue and a
>solution.
>
That "value", in that case, has to be immutable, by the fact that, as you said, the actors of the previous phases cannot be assumed to be present in the next one.
Your main problem with the constexpr/runtime barrier is that people reason in terms of variables and their "values", instead of (recursive) constexpr expressions dealing with immutable values consumed during the process.
Thinking about this declarative behavior of constexpr with imperative runtime semantics such as "dynamic allocation" is just wrong, and is a reason for confusion. It is not about form, but about immutability.