> namespace A { template<typename T> concept bool S() { return true; } }
> namespace B { using A::S; }
> namespace C { template<typename T> concept bool S() { return A::S<T>(); } }
> Here I used a different technique to make a function alias. The concepts
> A::S, B::S, and C::S have the same semantics when they are invoked or used
> in a requires clause, so I think the fact that they are not equivalent in
> the terse notation, meaning that:
> void foo(A::S, B::S);
> void foo(A::S, C::S);
> will declare different functions, will be surprising.
Why is this surprising? If these were types and not concepts, you
would have this:
namespace A { struct S { }; }
namespace B { using A::S; }
namespace C { struct S { }; }
void foo(A::S, A::S); // #1
void foo(A::S, B::S); // redeclares #1
void foo(A::S, C::S); // overloads #1
This is the same reason why the E's from the previous email should
probably be equivalent.
If you write a name in the position of a type, and it does not act
like a type (e.g., it means different things each time it is written), then I
would consider that very surprising. And I suspect I would have a very
hard time teaching that inconsistency.
Many people have suggested making concepts work like auto, but it's
the wrong choice. The notation was designed to best serve common cases
and simple uses. If you find yourself writing algorithms that take
many different but related types, then write it as a template.
Andrew
int distance(Input_iterator i, Input_iterator j) {
    while (i != j) // ...
}
> But you will still have the auto case, which is a name written in the place
> of a type, so you will need to teach that inconsistency anyway. You will
> also need to teach that a placeholder other than auto behaves differently.
Auto is a keyword with distinct semantics. Any name that is not auto
and is used as a type should not behave like auto.
> But you could start teaching this feature from the fact that we have the
> concept of a placeholder in the language, which, when used as a
> parameter, can be replaced by a type (something like a wildcard). We have
> two kinds of placeholder in the language: one that can be replaced by any
> type (auto, like * in bash) and one that restricts the acceptable set of
> types (a concept-name, like abc* in bash). I do not have any intuition that
> different instantiations of the same pattern should be replaced by the same
> content, and I even have the opposite one, derived from regular expressions,
> where repetitions of the same pattern (placeholder) can match different content:
> .*:.* f(auto, auto)
> [a-z]*:[a-z]* f(Concept, Concept)
> \([a-z]*\):\1 f(Concept{C}, C)
The analogy from regular expressions to C++ is more than a bit
tenuous. For one thing, it overlooks the fact that you have an
algorithm written in terms of those concepts.
For example:
int distance(Input_iterator i, Input_iterator j) {
    while (i != j) // ...
}
If i and j have different types, then these constraints are
essentially useless when I call the function with arguments that can't
be compared for equality. You're back to the good ol' (current) days
where every error is caught during instantiation and not at the point
of use.
Also, there is an expectation that, at some point in the future, we
will be able to check a template definition separately from its
instantiation. If we adopt the idea that concepts are like auto, then
every algorithm written like distance() above would be
under-constrained. If separate checking is required by the language,
then all of those functions become ill-formed.
On Fri, Oct 3, 2014 at 3:57 PM, Andrew Sutton <andrew....@gmail.com> wrote:
int distance(Input_iterator i, Input_iterator j) {
    while (i != j) // ...
}
If you look at the work Eric and others are doing on Ranges, the above
should maybe be:
int distance(Input_iterator i, Sentinel j) {
Or, maybe something like
int distance(Input_iterator i, SentinelFor<Input_iterator> j) {
where SentinelFor<T> means it has the proper == and can be used as a
Sentinel for the iterator. Not sure if that changes the discussion though.
...
One thing that I'd like to be kept in mind is this:
void foo()
{
    ...
    Iterator it = c.begin();
    ...
    Iterator it2 = c2.begin();
    ...
}
(ie "Iterator" is not coming from the function signature; it is a concept
used here to mean "like auto, but constrained to be at least an Iterator".
The idea being that "auto everywhere" throws away too much type info (in my
opinion) - I don't care what type 'it' is, but I do need it to be an
Iterator, else the code following it might still compile, but not work
right - same problem we have today with templates sans concepts.)
If/When we get that syntax (and I sure hope we do) we will again have the
same problem - are it and it2 the same type? It would probably be easiest
if the answer here was the same as the answer for the distance() function.
Also this case:
void bar(Container& c)
{
    Container ctmp = ...; // is this the same type as c? or just a "constrained auto"?
    Iterator it = c.begin(); // constrained auto
    ...
}
ie inside a normal template, I can reuse T and it is not a distinct type.
If Input_iterator i and Input_iterator j are not the same type, what is
the type of k in the following:
int distance(Input_iterator i, Input_iterator j) {
    Input_iterator k; // is that the same type as i or j?
    while (i != j) { // ...
        k = ...;
    }
}
I would love for people who feel
passionately differently to conduct a similar detailed analysis of
real-world algorithms as comprehensively as the Palo Alto TR did so that
we can have material for comprehensive technical discussion.
-- Gaby
> I would like to ask you to explain what the benefits are of the same
> concept = same type approach versus the placeholder = type approach combined
> with the placeholder{id} syntax proposed in N3878 (I want to emphasize that
> this addition is necessary). For every case of an algorithm in the form you
> have analyzed in the Palo Alto TR, that takes, for example, the form:
n3878 wasn't around when the concepts proposal was designed, discussed
in SG8 or in EWG (May, 2013). And n3878 does not propose changes to
the semantics of the original concepts proposal; it is purely an
extension.
> Output_iterator algo(Input_iterator, Input_iterator, Output_iterator); //1
> With the alternative proposed here it can be written as:
> Output_iterator{O} algo(Input_iterator{I}, I, O); //2
> Which is equally simple and readable, as the alternative proposed above.
I do not think that this is simple and readable. It requires
understanding and applying more language rules to read the
declaration.
> The message given with the terse syntax notation was that it is designed to
> provide a way for novices to write a generic function as simply as a
> normal function, but when you receive feedback about this syntax from your
> target group (I consider myself a novice: I do not take part in the
> development of concepts nor have any real-world experience with writing
> them), the input is rejected with the message that you should first become
> an expert in order to propose changes to the syntax for novices. I see
> these two points as being contradictory.
No, I don't think so. As Gaby said, the current rules are based on a
careful analysis of existing code, and I'll add to that a lot of
discussion about how to simplify the authoring of those algorithms.
Feedback that's backed by studies is generally taken more seriously
than feedback without it.
> void f(Sequence&& f);
> Will not have perfect forwarding semantics like:
> void f(auto&&);
> But will only accept rvalues like:
> void f(std::vector<T>&& t);
>
> template<typename T>
> void f(T&&) requires Sequence<T>;
> This candidate will accept the reference as type T (no change there), but
> the references will be rejected by the check of Sequence<T>, because they
> are not default constructible.
Forwarding and concepts do not always interact well. We've known about
this problem since before proposing concepts lite, at least since 2010
or so. That problem shows up when you emulate concepts using library
features too.
The general feeling seems to be that it's easy enough to work around
with library features, so there haven't been any serious language
proposals to address the problem.
Adding yet another meaning for && is not the right choice.
You can't use a concept name to declare a variable, so it can't be
inconsistent.
The reason that is disallowed is that we don't have an analysis of
placeholders used in the body of functions.
Andrew
> Returning to the previous point, concepts should not only address writing
> the STL algorithms, and having a terse syntax that is based only on that
> analysis may reduce its applicability. Just looking at the number of people
> who are opting for another solution gives a hint that this may actually be
> the case. Also, the preference for same concept = same type in the STL
> algorithms may be a result of the fact that we use two parameters to
> represent a range.
The study is a starting point. If we believed that the results
wouldn't generalize, then we would not have proposed the rule. Thus
far, we have yet to be proven wrong.
Remember, the terse notation is designed for simple cases. It is not a
replacement for the full template syntax. If you need more degrees of
freedom in a declaration, it may not be one of those simple cases that
the terse notation is designed for.
> But again, I think the examples I have pointed out show that if concepts
> are allowed to be used to declare variables (and from other posts it seems
> that you want to have such functionality), they cannot be consistent with
> the approach taken for function parameters.
I'm not sure that it needs to be.
> If the analysis of other parts of the library (algorithms/functional
> components) or its introduction shows that the current approach to the
> terse syntax was specific to one use case, would it be possible to change
> it after publication of the Concepts TS, or will we be stuck with it
> forever? Maybe the decision about the terse syntax should be postponed
> until more of the library is conceptualized, for example by making the
> repetition of the same concept name in function parameters ill-formed.
Adding concepts to the library is independent of terse notation. I see
no reason for this aspect of the design to be reconsidered.
Andrew
>> I think you are missing the point. The terse notation is designed to
>> simplify the writing of simple templates. It is not a general
>> mechanism to be used in every situation. Tweaking the notation to make
>> it work for more declarations was never a goal.
>
>
> That depends on the definition of simple templates.
If you are writing a function that takes arguments of the same type,
constrained by some concept C, you can write it like this:
void f(C x, C y);
If your template is not like this, you should consider declaring it as
a template. This is what I mean by simple.
On the other side, maybe we should take a lesson from the first concepts
design (for C++11) and not rush the Concepts TS at the cost of discarding
any input that changes its design.
Given that such situations are relatively common and we want to create a
terse and novice-friendly syntax, I have no doubts about why the same concept
= same type approach was selected in the light of the above example.
I have been following this debate. It is 10 times as much work to
counter an objection or
a vague new idea as it is to generate such, so I have stayed out of the
way for lack of time
and hope that people would come to their senses.
"Concepts" is not a new idea. Some of us have been working in this field
on and off for 20+
years and we are not happy about suggestions to wait until "perfection"
comes from ideas
that have been aired (and considered) many times before and/or are
unsupported claims of
improvements.
Perfection never comes.
Most of the debates have been focused on the terse notation. Consider:
keep simple things simple, and remember that most programmers don't give
a tinker's cuss about type theory.
There are two ways of looking at simplicity: factoring for maximum
orthogonality, and Huffman coding. The former gives you exactly one way
to do anything, and that way requires you to explicitly specify every
aspect of your solution. The latter introduces a series of shorthands so
that each level of complexity can be expressed minimally.
With concepts (lite), we take the latter approach and it (naturally)
requires more rules
than the orthogonal design. Another example is range-for. Without it,
C++ would be simpler,
but many C++ programs more complicated.
I think that the best solution to many of the complicated
use-of-concepts problems is to
explicitly use "requires" or (better still) introducing well-defined
sets of types.
For example:
Mergeable{For, For2, Out}
Out merge(For, For, For2, For2, Out out);
This is shorter and more explicit than the previous version. It also
deals with the
requirements on combinations of argument types.
There was a suggestion that Nevin Liber's work on ranges would replace
pairs of iterators
of the same type with pairs of different iterator types. Yes, but such
pairs are not of
unrelated iterators. [Iterator:Iterator) sequences will never go away
and the new style
will be something like [Iterator:Sentinel) pairs. I suspect, we'll see
something like
Range{Iter, Sentinel}
void sort(Iter,Sentinel);
where Sentinel will have a well-defined relation to Iterator defined in
Range.
We might even see
void sort(Iter,Iter_Sentinel);
The lambda syntax, by itself, has no way of expressing relations among
"auto" arguments.
That's (probably) fine because we can deduce such constraints from the
use of the lambda.
However, we have no such luxury with templates in general, and concepts
allow us to express relationships (or the lack thereof) for lambda
arguments.
Some of the comments seem to consider it a failing of concepts that it
does not directly
map onto existing library notions. However, we design libraries to
approximate ideals.
When a better solution is found we should not restrict ourselves to the
equivalent of
a library solution.
In fact, we need language extensions exactly where
library solutions
do not approximate our ideals sufficiently closely, e.g., because they
require too much
cleverness.
So, my opinion is that we should finish the concepts specification and
ship it so that
the users (many of whom are anxiously waiting) can benefit. Unless I
hear of some
fundamental problem, we should ship now - and I have not seen any
objection that I
consider fundamental (in this discussion thread or elsewhere).
Of course, I wasn't suggesting to wait for perfection; instead I was
suggesting that we wait for more uses of and experience with concept
predicates. It is foolish to ignore the experience of people who have used
real concept predicates either in C++ or in other languages that have
something similar (such as D with static if).
> There was a suggestion that Nevin Liber's work on ranges would replace
> pairs of iterators
I believe you mean Eric Niebler.
Yes, when there is a better solution. Currently, concepts lite is not a better solution than the library solution. With the library solution I have specialization, higher-order predicates, interoperability with other libraries, and conditional overloading. Features I use on a regular basis.
Plus, it will either need to be deprecated like auto_ptr or it will continue to hinder the development of newer features based on concepts in the future.
On 05/10/2014 at 17:59, Paul Fultz II wrote:
>
>
> Yes, when there is a better solution. Currently, concepts lite is not
> a better solution than the library solution. With the library solution
> I have specialization, higher-order predicates, interoperability with
> other libraries, and conditional overloading. Features I use on a
> regular basis.
Could you please expand on each of those points? What do you mean by them,
how do you do it with a library, and where does concept-lite fail to allow
them? For instance, I fail to see where concept-lite hinders
interoperability with other libraries.
From my current point of view (which may be outdated), the main
advantages of concept-lite over a library solution are the following:
- Syntax that is usable even by non-experts
- Especially when considering overloading on concepts
- The concept specification is part of the interface, not of the
implementation
---
Loïc Joly
Paul Fultz II <pful...@gmail.com> writes:
[...]
| So, my opinion is that we should finish the concepts specification
| and
| ship it so that
| the users (many of whom are anxiously waiting) can benefit.
| Unless, I
| hear of some
| fundamental problem, we should ship now - and I have not seen any
| objection that I
| consider fundamental (in this discussion thread or elsewhere).
|
|
|
| Rushing a solution seems foolish, considering it is not a better
| solution than the current library solutions.
I don't know exactly what you meant that to be. So, I have a question:
do you believe that after nearly 20+ years working continuously on
this, having a Technical Specification that consolidates the essence of
what we believe concepts should be based on practical experience,
analysis of real-world algorithms, talking to programmers in the field,
including learning from past failures is foolish?
On 5 October 2014 10:59, Paul Fultz II <pful...@gmail.com> wrote:
> Of course, I wasn't suggesting to wait for perfection; instead I was
> suggesting that we wait for more uses of and experience with concept
> predicates. It is foolish to ignore the experience of people who have used
> real concept predicates either in C++ or in other languages that have
> something similar (such as D with static if).
How long should we wait? There is always something new we could be
considering, because no one else is standing still.
> There was a suggestion that Nevin Liber's work on ranges would replace
> pairs of iterators
> I believe you mean Eric Niebler.
No, I'm pretty sure he means me. :-) While Eric is doing great work on
ranges, earlier I brought up some of the same observations that you did
with the terse syntax.
However, my observations are not objections. I think we should ship the TS and see where this goes, even if I personally might have made different choices. None of this is even close to being a show stopper, at least for me.
Consistency is really hard to argue in this case. The syntax:
void DoIt(InputIterator i, InputIterator j)
is both consistent as a placeholder for a type as well as inconsistent with
auto. The authors of concepts have good reasons for picking the former.
Will some people get confused? Yes. If we pick the latter, will some people
get confused? Yes. We cannot please everyone. I don't see how waiting
indefinitely on this makes any progress in making a better (as opposed to a
different) choice.
> Yes, when there is a better solution. Currently, concepts lite is not a
> better solution than the library solution. With the library solution I
> have specialization, higher-order predicates, interoperability with other
> libraries, and conditional overloading. Features I use on a regular basis.
That's you. In my experience, all but the most dedicated of experts find
both SFINAE and template error messages painful.
> Plus, it will either need to be deprecated like auto_ptr or it will
> continue to hinder the development of newer features based on concepts in
> the future.
Hinder the development? That's a bold statement. How is concepts as
specified boxing us into a corner?
Regards,
Botond
Tony V E <tvan...@gmail.com> writes:
[...]
| One thing that I'd like to be kept in mind is this:
|
| void foo()
| {
| ....
| Iterator it = c.begin();
| ....
| ....
| Iterator it2 = c2.begin();
| ....
| }
|
| (ie "Iterator" is not coming from the function signature; it is a
| concept used here to mean "like auto, but constrained to be at least
| an Iterator". The idea being that "auto everywhere" throws away too
| much type info (in my opinion) - I don't care what type 'it' is, but I
| do need it to be an Iterator, else the code following it might still
| compile, but not work right - same problem we have today with
| templates sans concepts)
Please, remember that the short-notation is just that: a short hand for
a specific configuration that shows up frequently. It isn't meant to
cover every situation. For that indeed, we already have the general
notation. Let's not lose sight of that in the heat of the arguments.
Covering adequately function parameter types is, from my perspective,
far more important than covering declarations of local variables,
because local variables are cases that are -- for the vast majority of
cases -- well covered today (and will be covered tomorrow) by an
existing language feature.
The current choice for function parameter
types is based on careful analysis of real-world algorithms (e.g. the
standard library algorithms) and how to simplify and support their
authoring and consumption. I would love for people who feel
passionately differently to conduct a similar detailed analysis of
real-world algorithms as comprehensively as the Palo Alto TR did so that
we can have material for comprehensive technical discussion.
The most important aspect of the terse notation is that it allows us to think about generic
programming in the same way as we think of "ordinary" programming. In particular, the terse
syntax serves very well in the (simple) cases where we can think of a concept as a "type of
a type."
> - Higher-order predicates
>
> For example, say I want to find all the drawable type in a tuple, I could
> write something like this:
>
> auto drawables = filter<is_drawable<_>>(t);
>
> This can't be done with the current proposal without first wrapping it in a
> class.
True. You cannot do this without providing a class that checks
the constraint. But it's not hard:
template<typename T>
struct is_drawable : false_type { };
template<Drawable T> // Drawable is a concept
struct is_drawable<T> : true_type { };
This is hardly a show stopper.
> - Interoperability
>
> This goes along with the higher order part as well, but additionally,
> concept predicate classes can't be used for the terse notation:
>
> void draw(is_drawable& x);
I don't know what this means.
>> - Especially when considering overloading on concepts
>
>
> Yes, conditional overloading can help with that but its not the same.
In what way is it not?
>> - The concept specification is part of the interface, not of the
>> implementation
>
>
> That of course is the same for the library solution.
Not really. The library solution embeds the constraint in some place
other than where it is obvious: the return type, a default argument of
an unused parameter, the default argument of a template parameter.
I feel like I've just reiterated most of the points made in the
concepts proposals (not the TS wording). Have you read the proposals?
Andrew
> Except these can't be used directly in a integral constant class, such as:
>
> template<class T>
> struct is_drawable
> : bool_<requires (T&& x) { x.draw(); } >
> {};
No, they can't. A constraint is not guaranteed to return true or
false: a substitution failure in a non-SFINAE context is still an
error.
You have to do exactly the same thing with constraints that you do with
type traits: wrap the check to make it SFINAE-friendly.
> Really? I use this technique a lot. Furthermore, there are several generic
> libraries that use specializations as well, such as Boost.Geometry,
> Boost.Hana, etc. Plus, more will hit the fan as generic programming and
> concept-based overloading become more widespread.
Really. A more conventional approach is to associate a type with a tag
class and dispatch on that. You could for example, ask for a function
to return some kind of range_tag in order to find definite evidence
that a type satisfies a concept. You actually need to do this when two
concepts have the same syntax but different semantics (e.g., input
iterator vs. forward iterator).
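A sketch of the tag-dispatch technique described here: the iterator's category tag is the "definite evidence" a type supplies, and overload resolution on the tag selects the implementation (steps is a made-up stand-in for a std::distance-style algorithm):

```cpp
#include <forward_list>
#include <iterator>
#include <vector>

// Fallback: any input iterator -- walk linearly.
template<typename It>
int steps_impl(It first, It last, std::input_iterator_tag) {
    int n = 0;
    while (first != last) { ++first; ++n; }
    return n;
}

// Better match for random-access iterators -- constant time.
template<typename It>
int steps_impl(It first, It last, std::random_access_iterator_tag) {
    return static_cast<int>(last - first);
}

// Dispatcher: construct the tag and let overload resolution choose.
template<typename It>
int steps(It first, It last) {
    typename std::iterator_traits<It>::iterator_category tag{};
    return steps_impl(first, last, tag);
}
```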
A quick search through Boost.Geometry didn't turn up any instances of
this. Can you point to them? I don't know anything about Hana.
Concept-based overload has been around for a long time. See
"Concept-controlled polymorphism" by Jaako Jarvi et al, published in
2003. And the original STL documentation before that.
>> True. You cannot do this without providing a class that checks
>> the constraint. But it's not hard:
>>
>> template<typename T>
>> struct is_drawable : false_type { };
>>
>> template<Drawable T> // Drawable is a concept
>> struct is_drawable<T> : true_type { };
>>
>> This is hardly a show stopper.
>
>
> Yes, and now you lose concept-based overloading. And like I said, it may
> require refactoring to accommodate this, or even worse, this could be in a
> third-party library which can't be changed. Perhaps the original authors
> could be convinced to make the concept predicate an integral constant
> instead. However, from a library perspective, at what point should we
> decide whether our predicates should be integral constants or `concept
> bool`s? Why is it that we now have two very different ways of defining
> concept predicates? How should one choose between the two? It seems that
> defining the predicates as integral constants is more future-proof for a
> library, so let's hope that's what libraries choose. And if that's the
> case, why do we have `concept bool` at all?
I'm not sure I understand your claim about losing concept-based
overloading. There are a lot of "may"s and "could"s that make it hard
to understand whatever point it is that you're trying to make.
But yes, you will still be able to write type traits when you have
concepts. Sometimes, they may even be useful -- for example, in
associating tag classes with types. I don't know where that boundary
lies, and I feel pretty confident saying that nobody else does either.
Only experience and experimentation with the language feature will
help you find that line.
Standing with your fingers in your ears yelling, "but concepts doesn't
do X", when you've (very clearly) never tried using them, is a poor
substitute for experimentation.
We have "concept bool" for two reasons. First, you can't lookup
concept names as type-specifiers without a keyword to differentiate
regular function and variable templates from concepts. Second, the
language handles constraints very differently than normal expressions
in order to support overloading.
> Also, one shouldn't have to trade in concept-overloading for specialization.
> The problem is the current design makes the relationships implicit. If the
> relationships were explicit then the compiler could always enforce the
> relationship in spite of specializations. Perhaps something like this:
This was tried in C++0x concepts, and it turned out to be a fairly
contentious decision. You can see a reference to that in Bjarne's
article on the "The C++0x "Remove Concepts" Decision" in Dr. Dobbs.
I'm sure that others on this list can give you more insight into that debate.