Types introduced by constrained type specifier


Tomasz

Oct 2, 2014, 3:58:02 PM10/2/14
to conc...@isocpp.org
In section 8.4.1/17 of the newest draft of the Concepts TS the following example is presented:
void f4(E a, E<> b, E<0> c);
The types of a, b, and c are distinct invented template type parameters even though the constraints
associated by each of the constrained-type-specifiers (7.1.6.5) are equivalent.
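(For reference, the example assumes a concept with a defaulted trailing template parameter; a minimal sketch of such a declaration, which is my assumption and not shown in the quoted text, might be:
template<typename T, int N = 0> concept bool E = true;
so that E, E<>, and E<0> all denote the constraint E<T, 0> for the invented parameter T.)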

In the example above we refer to the same entity E<0> using different syntax, and this causes separate types to be introduced.

I would like to see a clarification of the behavior of the following example that uses concepts with using-declarations. Let's assume that I have the following definitions:
namespace A { template<typename T> concept bool S = true; }
namespace B { using A::S; }
namespace C { using A::S; }

What would the following declarations mean (in the comments I present my attempt to resolve each declaration)?
void foo(A::S, A::S, A::S); //template<typename T> void foo(T, T, T);
void foo(A::S, B::S, C::S); //template<typename T1, typename T2, typename T3> void foo(T1, T2, T3);
void foo(A::S, A::S, C::S); //template<typename T1, typename T2> void foo(T1, T1, T2);

And if we use using-directives:
using namespace A;
using namespace B;
using namespace C;

void foo(S, A::S, B::S, C::S); //template<typename T1, typename T2, typename T3, typename T4> void foo(T1, T2, T3, T4)

Andrew Sutton

Oct 3, 2014, 7:13:49 AM10/3/14
to conc...@isocpp.org
> I would like to see a qualification, what is behavior of following example
> that uses concepts with using declarttion.

Hmm... this probably is missing from that section.

> namespace A { template<typename T> concept bool S = true; }
> namespace B { using A::S; }
> namespace C { using A::S; }
>
> What would following declaration means (in the comment I am presenting my
> attempt to resolve declaration):
> void foo(A::S, A::S, A::S); //template<typename T> void foo(T, T, T);
> void foo(A::S, B::S, C::S); //template<typename T1, typename T2, typename
> T3> void foo(T1, T2, T3);
> void foo(A::S, A::S, C::S) //template<typename T1, typename T2> void foo(T1,
> T1, T3)


Since they name the same concept, all the parameters would have the same type.
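That is, each of those declarations would be equivalent to roughly the following (a sketch of the expansion):

template<typename T> requires A::S<T>
void foo(T, T, T);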

> And if we use using directive:
> using namespace A;
> using namespace B;
> using namespace C:
>
> void foo(S, A::S, B::S, C::S); //template<typename T1, typename T2, typename
> T3, typename T4> void foo(T1, T2, T3, T4)


Same as before. They name the same concept, so all the parameters
would have the same type.

Andrew

Tomasz Kamiński

Oct 3, 2014, 7:33:32 AM10/3/14
to conc...@isocpp.org
And as I understand it, in your example E, E<>, and E<0> all name the same partially specialized concept, equivalent to:
template<typename T> concept bool E0 = E<T, 0>;
I am unable to point to any actual difference between my examples, which use different names to refer to the same concept, and the example presented in the paper. I would be grateful if you could provide an explanation of where the boundary lies between different ways of naming the same concept.


Andrew Sutton

Oct 3, 2014, 7:49:18 AM10/3/14
to conc...@isocpp.org
> And as I understand your example both E, E<>, E<0> are naming the same
> partialy specialized concept, equivalent to:
> template<typename T> concept bool E0 = E<T, 0>.
> I am unable to point any actual difference between my examples that uses
> different name to reffer the same concept and the example presented in the
> paper, I would be greatfull if you could provide me an explanation, where
> lies the boundary between the naming of the same concept.

There is a difference between the lookup of a name introduced through
'using' or 'using namespace' and the lookup of a template
specialization. In particular, from your previous examples, A::S,
B::S, etc. don't require the instantiation of default template
arguments in order to determine which declaration is referred to by
the name.

From an implementation perspective, it is easier to consider those
names to be different.

But this does seem a bit inconsistent. Maybe E, E<>, and E<0> should
be equivalent.

Andrew

Tomasz Kamiński

Oct 3, 2014, 1:25:29 PM10/3/14
to conc...@isocpp.org
On 03.10.2014 13:48, Andrew Sutton wrote:
I think that the fact that the rules used for constraint
equivalence here are different from the ones used for checking which function
template is more specialized (subsumption vs. referring to the same entity)
will cause a lot of surprises for the programmer. Let's discuss a slightly
modified example:
namespace A { template<typename T> concept bool S() { return true; } }
namespace B { using A::S; }
namespace C { template<typename T> concept bool S() { return A::S<T>(); } }
Now I have used a different technique to create a function alias, and the
concepts A::S, B::S, and C::S have the same semantics when they are invoked
or used in a requires clause. I think that the fact that they are not
equivalent in the terse notation, meaning that:
void foo(A::S, B::S);
void foo(A::S, C::S);
would declare different functions, will be surprising. I am also sure that
you could show a lot of examples where the approach that the same type is
introduced if the concepts are equivalent, rather than if they refer to the
same entity, would lead to counterintuitive behavior. My point is that no
approach will avoid causing strange semantics in some situations, much like
the case of operator. described in The Design and Evolution of C++.

Furthermore, I perceive that having different semantics for the same
language construct (the placeholder) is bad for the language, especially
from the learning perspective. I think that the rule that we have
placeholders (auto and concept names) that can be used to create a generic
function, and that every placeholder introduces a new independent type, is a
lot easier to internalize than understanding when exactly a placeholder
refers to the same type (my first point). The current status quo may be
compared to an imaginary situation where the at method of map caused
undefined behavior if the key does not exist, while the at method of
vector/array checked whether the index is out of range and threw an
exception.

Let us also discuss the alternative solution, the rule that each
placeholder introduces a new type. The problem with this approach is
that creating a function signature that accepts two parameters of the
same type requires the use of decltype(name_of_parameter). That means that
if we want to write a simple copy function:
//1. same concept = same type approach
Output_iterator copy(Input_iterator first, Input_iterator last,
Output_iterator dest);

//2. placeholder = new type
auto copy(Input_iterator first, decltype(first) last, Output_iterator
dest) -> decltype(dest);
Given that such situations are relatively common and we want to create a
terse and novice-friendly syntax, I have no doubt why the same concept
= same type approach was selected in light of the above example. However,
today we have a third alternative, the syntax proposed in
http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2014/n3878.pdf:
//3. placeholder = new type and N3878 syntax
Output_iterator{O} copy(Input_iterator{I} first, I last, O dest);
which provides a solution with readability similar to the first option.
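
For comparison, the fully explicit form that all three of the above abbreviate would be roughly the following (a sketch, assuming Input_iterator and Output_iterator are variable concepts):
//0. explicit template syntax
template<typename I, typename O>
  requires Input_iterator<I> && Output_iterator<O>
O copy(I first, I last, O dest);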

I think that, in light of the arguments presented above, it is worth
reconsidering the same concept = same type approach, especially since the
alternative seems to provide a more intuitive solution while preserving the
readability of the declarations.

Also, I want to point out that I am opting for the every placeholder = new
type approach combined with the placeholder{type_id} syntax in function
parameters and variable declarations, and I have no strong preference
regarding the changes to the template introduction syntax (plain C{T}
versus template<C{T}>), because I perceive them as functionally
equivalent. Maybe it is worth discussing those changes separately.

Andrew Sutton

Oct 3, 2014, 2:08:19 PM10/3/14
to conc...@isocpp.org
> namespace A { template<typename T> concept bool S() { return true; } }
> namespace B { using A::S; }
> namespace C { template<typename T> concept bool S() { return A::S(); }
> Now I used the different technique to make a function alias, and the
> concepts A::S, B::S, C::S has the same semantics if they are invoked or used
> in requires clause and I think that the fact that the are not equivalent in
> terse notation, meaning that:
> void foo(A::S, B::S);
> void foo(A::S, C::S);
> will declare different function will be surprising.


Why is this surprising? If these were types and not concepts, you
would have this:

namespace A { struct S { }; }
namespace B { using A::S; }
namespace C { struct S { }; }

void foo(A::S, A::S); // #1
void foo(A::S, B::S); // redeclares #1
void foo(A::S, C::S); // overloads #1

This is the same reason why the E's from the previous email should
probably be equivalent.

If you write a name in the position of a type, and it does not act
like a type (e.g., it means different things each time it is written), then I
would consider that very surprising. And I suspect I would have a very
hard time teaching that inconsistency.

Many people have suggested making concepts work like auto, but it's
the wrong choice. The notation was designed to best serve common cases
and simple uses. If you find yourself writing algorithms that take
many different but related types, then write it as a template.
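For example, such an algorithm could be declared with the full syntax along these lines (a sketch using a hypothetical Comparable_with concept):

template<typename T, typename U>
  requires Comparable_with<T, U>
bool equal(T a, U b);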

Andrew

Tomasz

Oct 3, 2014, 2:48:57 PM10/3/14
to conc...@isocpp.org


On Friday, October 3, 2014 at 8:08:19 PM UTC+2, Andrew Sutton wrote:
> namespace A { template<typename T> concept bool S() { return true; } }
> namespace B { using A::S; }
> namespace C { template<typename T> concept bool S() { return A::S(); }
> Now I used the different technique to make a function alias, and the
> concepts A::S, B::S, C::S has the same semantics if they are invoked or used
> in requires clause and I think that the fact that the are not equivalent in
> terse notation, meaning that:
> void foo(A::S, B::S);
> void foo(A::S, C::S);
> will declare different function will be surprising.


Why is this surprising? If these were types and not concepts, you
would have this:

Because concepts are also type predicates. And if the functions have the same semantics, then they should behave the same. Why should functions with exactly the same behavior have different semantics? The important point of my example is that C::S has the same behavior as A::S, yet they behave differently for type introduction. It is a matter of whether you compare concepts to types or to functions, but both views seem equally acceptable.

namespace A { struct S { };  }
namespace B { using A::S; }
namespace C { struct S { }; }

void foo(A::S, A::S); // #1
void foo(A::S, B::S); // redeclares #1
void foo(A::S, C::S); // overloads #1

This is the same reason why the E's from the previous email should
probably be equivalent.

If you write a name in the position of a type, and it does not act
like a type (e.g., it means different things each time it is written), then I
would consider that very surprising. And I suspect I would have a very
hard time teaching that inconsistency.
But you will still have the auto case, which is a name written in the place of a type, so you will need to teach that inconsistency anyway. You will also need to teach that a placeholder other than auto behaves differently.

But you could start teaching this feature from the fact that we have the notion of a placeholder in the language which, when used as a parameter, can be replaced by a type (something like a wildcard). We have two kinds of placeholders in the language: one that can be replaced by any type (auto, like * in bash) and one that restricts the acceptable set of types (a concept-name, like abc* in bash). I do not have any intuition that different occurrences of the same pattern should be replaced by the same content; I even have the opposite intuition, derived from regular expressions, where repetitions of the same pattern (placeholder) can match different content:
.*:.*          f(auto, auto)
[a-z]*:[a-z]*  f(Concept, Concept)
\([a-z]*\):\1  f(Concept{C}, C)

Many people have suggested making concepts work like auto, but it's
the wrong choice. The notation was designed to best serve common cases
and simple uses. If you find yourself writing algorithms that take
many different but related types, then write it as a template.

My understanding of concepts is rather focused on their predicate/restriction side, and I have a direct mental correlation with pattern matching (especially regexps); from that perspective, the same pattern (concept) = same matched content (type) approach is counterintuitive. Maybe my approach is utterly flawed, but I think that other programmers may have a similar understanding, and the approach you have selected will be counterintuitive for them.

Also, as I tried to point out, with the every placeholder = new type rule but without the syntax proposed in N3878 we would fail to provide a terse and simple syntax; still, I find the following examples to be equally readable:
  Output_iterator copy(Input_iterator first, Input_iterator last, Output_iterator dest);
  Output_iterator{O} copy(Input_iterator{I} first, I last, O dest);


Andrew

Botond Ballo

Oct 3, 2014, 2:57:35 PM10/3/14
to conc...@isocpp.org
FWIW, I completely agree.

Unfortunately, I'm a bit late to join this design discussion, and
my attempts to argue for this have largely been met with "we've
already discussed this, re-opening that discussion would delay
the Concepts Lite TS", which is perhaps not an unreasonable
perspective given that Concepts is a language feature we've been
wanting to have for a very long time.


I'm hopeful that design considerations like this will be re-opened
for discussion during the next stage of the Concepts proposal
(possibly C++17), and I intend to make a case for this then.


Regards,
Botond

Tomasz

Oct 3, 2014, 3:21:53 PM10/3/14
to conc...@isocpp.org, botond...@yahoo.ca

On the other side, maybe we should take a lesson from the first concepts design (for C++11) and not rush the Concepts TS at the cost of discarding any input that changes its design. I am not saying that this matter should be discussed just because of the posts of a single person (like me), but the placeholder = type approach seems to have a lot of support, judging from the issues raised on this forum, or even from the last post from Andrew: "Many people have suggested making concepts work like auto, but it's the wrong choice".

Also, I am not aware whether your proposal was already discussed in the committee and I am merely reopening the topic. The sentence "we've already discussed this, re-opening that discussion would delay the Concepts Lite TS" may equally refer to a discussion of your proposal as well as to the discussion that happened when the shorthand syntax was first proposed.

Andrew Sutton

Oct 3, 2014, 3:57:42 PM10/3/14
to conc...@isocpp.org
> But you will still have a auto case, which is the name written in the place
> of a type, so you will need to teach that consistency anyway. You will need
> also teach that the different placeholder than auto behaves differentlu

Auto is a keyword with distinct semantics. Any name that is not auto
and is used as a type should not behave like auto.

> But if you start teaching this feature, starting from the fact that we have
> a concept of the placeholder in the language, that when used as the
> parameter, can be replaced by a type (something like a wildcard). We have
> two type of placeholder of the language, one that can be replaced by any
> type (auto like * in bash) and one that restricts acceptable set of types
> (concept-name like abc* in bash). I do not have any intuition that different
> instantiation of same pattern should be replaced by the same content, and
> even has a oposite one derived from regular expression when the repetition
> of same patterns (placeholders) can match to different content:
> .*:.* f(auto, auto)
> [a-z]*:[a-z]* f(Concept, Concept)
> \([a-z]*\):\1 f(Concept{C}, C)

The analogy from regular expressions to C++ is more than a bit
tenuous. For one thing, it overlooks the fact that you have an
algorithm written in terms of those concepts. For example:

int distance(Input_iterator i, Input_iterator j) {
  while (i != j) // ...
}

If i and j have different types, then these constraints are
essentially useless: I could call the function with arguments that can't
be compared for equality. You're back to the good ol' (current) days
where every error is caught during instantiation and not at the point
of use.
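To make that concrete, here is a sketch of how the declaration would read if each placeholder introduced its own type:

template<typename I1, typename I2>
  requires Input_iterator<I1> && Input_iterator<I2>
int distance(I1 i, I2 j) {
  while (i != j) // nothing in the constraints relates I1 to I2
    ;            // ...
}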

Also, there is an expectation that, at some point in the future, we
will be able to check a template definition separately from its
instantiation. If we adopt the idea that concepts are like auto, then
every algorithm written like distance() above would be
under-constrained. If separate checking is required by the language,
then all of those functions would become ill-formed.

At that point you a) give up on separate checking or b) break a ton of
code. I would prefer not to do either.

This rule is very well motivated, even if the reasons for it are not
well published.

Andrew

Tony V E

Oct 3, 2014, 4:27:34 PM10/3/14
to conc...@isocpp.org
On Fri, Oct 3, 2014 at 3:57 PM, Andrew Sutton <andrew....@gmail.com> wrote:


int distance(Input_iterator i, Input_iterator j) {
   while (i != j) // ...
}


If you look at the work Eric and others are doing on Ranges, the above should maybe be:

    int distance(Input_iterator i, Sentinel j) {
 
Or, maybe something like

    int distance(Input_iterator i, SentinelFor<Input_iterator> j) {

where SentinelFor<T> means it has the proper == and can be used as a Sentinel for the iterator.
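A sketch of what such a concept could look like with the TS's requires-expression syntax (the name and exact requirements here are only illustrative):

    template<typename S, typename I>
    concept bool SentinelFor() {
      return requires(S s, I i) {
        { s == i } -> bool;
        { i == s } -> bool;
      };
    }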

Not sure if that changes the discussion though.

...

One thing that I'd like to be kept in mind is this:

void foo()
{
    ....
    Iterator it = c.begin();
    ....
    ....
    Iterator it2 = c2.begin();
    ....
}

(ie "Iterator" is not coming from the function signature;  it is a concept used here to mean "like auto, but constrained to be at least an Iterator".  The idea being that "auto everywhere" throws away too much type info (in my opinion) - I don't care what type 'it' is, but I do need it to be an Iterator, else the code following it might still compile, but not work right - same problem we have today with templates sans concepts)

If/When we get that syntax (and I sure hope we do) we will again have the same problem - are it and it2 the same type?  It would probably be easiest if the answer here was the same as the answer for the distance() function.  Also this case:

void bar(Container & c)
{
     Container ctmp = ...; // is this the same type as c? or is this just a "constrained auto"

     Iterator it = c.begin(); // constrained auto
     ...
}

ie inside a normal template, I can reuse T and it is now a distinct type.  If Input_iterator i and Input_iterator j are not the same type, what is the type of k in the following:


int distance(Input_iterator i, Input_iterator j) {
   Input_iterator k;   // is that same type as i or j?
   while (i != j) // ...
       k = ...
}

If 'k' is allowed above, then i and j need to be the same type, I think.

For the "constrained auto" case, maybe "auto Iterator" could be used:

void foo()
{
    ....
    auto Iterator it = c.begin();
    ....
    ....
    auto Iterator it2 = c2.begin();
    ....

    // it and it2 might not be the same type
}

And then maybe also use that for Concepts TS:

    int distance(Input_iterator i, auto Input_iterator j) // not the same type

?
Tony



Tomasz

Oct 3, 2014, 4:28:21 PM10/3/14
to conc...@isocpp.org


On Friday, October 3, 2014 at 9:57:42 PM UTC+2, Andrew Sutton wrote:
> But you will still have a auto case, which is the name written in the place
> of a type, so you will need to teach that consistency anyway. You will need
> also teach that the different placeholder than auto behaves differentlu

Auto is a keyword with distinct semantics. Any name that is not auto
and is used as a type should not behave like auto.  
Why? Let's assume that we have the following concept:
template<typename T> concept bool Any = true;
What is the reason to make the following declarations different:
void foo(auto, auto);
void foo(Any, Any);

From my perspective, the fact that both auto and a concept name act as a placeholder for a type is more important than the fact that one is a keyword and the other is a user-defined entity.
 

> But if you start teaching this feature, starting from the fact that we have
> a concept of the placeholder in the language, that when used as the
> parameter, can be replaced by a type (something like a wildcard). We have
> two type of placeholder of the language, one that can be replaced by any
> type (auto like * in bash) and one that restricts acceptable set of types
> (concept-name like abc* in bash). I do not have any intuition that different
> instantiation of same pattern should be replaced by the same content, and
> even has a oposite one derived from regular expression when the repetition
> of same patterns (placeholders) can match to different content:
> .*:.*          f(auto, auto)
> [a-z]*:[a-z]*  f(Concept, Concept)
> \([a-z]*\):\1  f(Concept{C}, C)

The analogy from regular expressions to C++ is more than a bit
tenuous. For one thing, it overlooks the fact that you have an
algorithm written in terms of those concepts.
But the terse notation for concepts (and concepts themselves) should not only cover the task of writing algorithms. Let's say I want to write a simple utility function like make_pair. With the syntax from N3878, I can write:
auto make_pair(MoveConstructible{A} a, MoveConstructible{B} b)
{ return std::pair<A,B>(std::move(a), std::move(b)); }

While with your approach, I would need to fall back to the full syntax. Are you sure that the syntax you have selected is not too much influenced by the scope you have selected (algorithms)? I am not sure whether repeated use of the same type would be so common if we used a range to represent a pair of iterators.
 
For example:

int distance(Input_iterator i, Input_iterator j) {
   while (i != j) // ...
}

If i and j have different types, then these constraints are
essentially useless when I call the function with arguments that can't
be compared for equality. You're back to the good ol' (current) days
where every error is caught during instantiation and not at the point
of use.
Why do you assume that, in the situation where every concept name introduces a separate type, the people
who attempt to write a constrained template will write an under-constrained one? I mean, why would we still end up with
the definition of the distance function that you have presented, instead of the correct one:
int distance(Input_iterator{I} i, I j) {
   while (i != j) // ...
}
Also, there is an expectation that, at some point in the future, we
will be able to check a template definition separately from its
instantiation. If we adopt the idea that concepts are like auto, then
every algorithm written like distance() above would be
under-constrained. If separate checking is required by the language,
then all of those functions become be ill-formed.

Both of the presented approaches allow writing equally well-constrained functions with syntax of similar simplicity, so the above point only holds if you assume that your syntax will somehow cause every function to be well-constrained, while the other one will cause more functions to be under-constrained. I do not get your point.

Tomasz

Oct 3, 2014, 5:01:56 PM10/3/14
to conc...@isocpp.org


On Friday, October 3, 2014 at 10:27:34 PM UTC+2, Tony Van Eerd wrote:


On Fri, Oct 3, 2014 at 3:57 PM, Andrew Sutton <andrew....@gmail.com> wrote:


int distance(Input_iterator i, Input_iterator j) {
   while (i != j) // ...
}


If you look at the work Eric and others are doing on Ranges, the above should maybe be:

    int distance(Input_iterator i, Sentinel j) {
 
Or, maybe something like

    int distance(Input_iterator i, SentinelFor<Input_iterator> j) {

where SentinelFor<T> means it has the proper == and can be used as a Sentinel for the iterator.

Not sure if that changes the discussion though.

...


One thing that I'd like to be kept in mind is this:

void foo()
{
    ....
    Iterator it = c.begin();
    ....
    ....
    Iterator it2 = c2.begin();
    ....
}
I think that you have pointed out a very important question: how far does the rule that the same concept names the same type extend? To the signature? To the signature and the function body? To the whole scope? The decision taken for this situation will also be non-trivial.
On the other side, you have provided a very good argument for the placeholder = new type rule, and placeholder{T} allows you to easily specify your intention in all of your examples.
void foo()
{
    //Types of it and it2 may be different  
    Iterator it = c.begin();
    Iterator it2 = c2.begin();
}

void foo()
{
    //Types of it and it2 must be the same
    Iterator{I} it = c.begin();
    I it2 = c2.begin();
}

(ie "Iterator" is not coming from the function signature;  it is a concept used here to mean "like auto, but constrained to be at least an Iterator".  The idea being that "auto everywhere" throws away too much type info (in my opinion) - I don't care what type 'it' is, but I do need it to be an Iterator, else the code following it might still compile, but not work right - same problem we have today with templates sans concepts)

If/When we get that syntax (and I sure hope we do) we will again have the same problem - are it and it2 the same type?  It would probably be easiest if the answer here was the same as the answer for the distance() function.  Also this case:

void bar(Container & c)
{
     Container ctmp = ...; // is this the same type as c? or is this just a "constrained auto"

     Iterator it = c.begin(); // constrained auto
     ...
}
Again:
void bar(Container & c)
{
     Container ctmp = ...; // ctmp may have different type than c
     Iterator it = c.begin(); // some iterator
     ...
}

Again:
void bar(Container{C} & c)
{
     C ctmp = ...; // ctmp have same type as c
     C::const_iterator it = c.begin(); // we are specific about iterator type
}

ie inside a normal template, I can reuse T and it is now a distinct type.  If Input_iterator i and Input_iterator j are not the same type, what is the type of k in the following:

int distance(Input_iterator i, Input_iterator j) {
   Input_iterator k;   // is that same type as i or j?
   while (i != j) // ...
       k = ...
}
Again:

int distance(Input_iterator i, Input_iterator j) {
   Input_iterator k; 

int distance(Input_iterator{I} i, I j) {
   I k;   // same type as i and j
}

int distance(Input_iterator{I} i, Input_iterator{J} j) {
   I k;   // same type as i
   J k2; // same type as j
}

Gabriel Dos Reis

Oct 4, 2014, 11:56:09 AM10/4/14
to conc...@isocpp.org
Tony V E <tvan...@gmail.com> writes:

[...]

| One thing that I'd like to be kept in mind is this:
|
| void foo()
| {
|     ....
|     Iterator it = c.begin();
|     ....
|     ....
|     Iterator it2 = c2.begin();
|     ....
| }
|
| (ie "Iterator" is not coming from the function signature;  it is a
| concept used here to mean "like auto, but constrained to be at least
| an Iterator".  The idea being that "auto everywhere" throws away too
| much type info (in my opinion) - I don't care what type 'it' is, but I
| do need it to be an Iterator, else the code following it might still
| compile, but not work right - same problem we have today with
| templates sans concepts)

Please, remember that the short-notation is just that: a short hand for
a specific configuration that shows up frequently. It isn't meant to
cover every situation. For that indeed, we already have the general
notation. Let's not lose sight of that in the heat of the arguments.

Covering adequately function parameter types is, from my perspective,
far more important than covering declarations of local variables,
because local variables are -- in the vast majority of
cases -- well covered today (and will be covered tomorrow) by an
existing language feature. The current choice for function parameter
types is based on careful analysis of real-world algorithms (e.g. the
standard library algorithms) and how to simplify and support their
authoring and consumption. I would love for people who feel
passionately differently to conduct a similar detailed analysis of
real-world algorithms as comprehensively as the Palo Alto TR did so that
we can have material for comprehensive technical discussion.

-- Gaby

Tomasz

Oct 4, 2014, 1:21:54 PM10/4/14
to conc...@isocpp.org, g...@axiomatics.org
 
I would like to ask you to explain to me what the benefits are of the same concept = same type approach versus the placeholder = type approach combined with the placeholder{id} syntax proposed in N3878 (I want to emphasize that this addition is necessary). Every algorithm of the form you analyzed in the Palo Alto TR, for example:
Output_iterator algo(Input_iterator, Input_iterator, Output_iterator); //1
can, with the alternative proposed here, be written as:
Output_iterator{O} algo(Input_iterator{I}, I, O); //2
which is as simple and readable as the alternative proposed above. I want to point out that I totally agreed with your decision when the only alternative presented in the discussion was to use decltype(), so that instead of //1 we would have to write:
auto algo(Input_iterator first, decltype(first) last, Output_iterator out) -> decltype(out); //3
No argument could convince me that a concise syntax for writing algorithms should look like //3, but with N3878 we have an acceptable alternative (//2), so I think the other arguments deserve reconsideration.

I would love for people who feel
passionately differently to conduct a similar detailed analysis of
real-world algorithms as comprehensively as the Palo Alto TR did so that
we can have material for comprehensive technical discussion.

The message given with the terse syntax was that it is designed to provide a way for novices to write a generic function as simply as a normal function, but when you receive feedback about this syntax from your target group (I consider myself a novice: I do not take part in developing concepts, nor do I have any real-world experience with writing them), the input is rejected with the message that you should become an expert first in order to propose changes to the syntax for novice people. I see these two points as
contradictory. Please take into consideration that nearly nobody on this forum is questioning the set of concepts chosen for the STL (from the Palo Alto TR) or the mechanics for selecting the better overload. It does not require being an expert to have an opinion on the syntax you prefer, and if the message that comes from the authors is that we have a new short syntax for you (syntax for novices? I am a novice!), you should expect that people will put forward their suggestions.

Personally, I am astonished at how small the set of concepts for the STL is (I read the Palo Alto TR) and how consistent and relatively simple the design you have come up with is. Sometimes I post various suggestions on this forum, but usually after several posts from Andrew I realize how silly they were. My last discovery was that if we have, for example, a concept Sequence (one of whose requirements is default constructibility), the signature:
void f(Sequence&& f);
will not have perfect forwarding semantics like:
void f(auto&&);
but will only accept rvalues, like:
void f(std::vector<T>&& t);
So the generic function is as simple to write as a normal one. And if you want perfect forwarding, then you can use the full template syntax.

// Why I think that

void f(Sequence&& f)
will not have forwarding semantics. The function is equivalent to:
template<typename T>
void f(T&&) requires Sequence<T>;
This candidate will deduce a reference type for T when passed an lvalue (no change there), but reference types will be rejected by the check of Sequence<T>, because they are not default constructible.
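A short sketch of that reasoning, assuming a Sequence concept that requires default construction:

std::vector<int> v;
f(v);                   // lvalue: T deduced as std::vector<int>&; Sequence<std::vector<int>&> is false
f(std::vector<int>{});  // rvalue: T deduced as std::vector<int>;  Sequence<std::vector<int>> is true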



Tomasz Kamiński

Oct 4, 2014, 3:30:00 PM10/4/14
to conc...@isocpp.org
2014-10-03 20:07 GMT+02:00 Andrew Sutton <andrew....@gmail.com>:

If you write a name in the position of a type, and it does not act
like a type (e.g., it means different things each time it is written), then I
would consider that very surprising. And I suspect I would have a very
hard time teaching that inconsistency.

As Tony Van Eerd has pointed out, the rule is not consistently applied in your current wording: repetition of the same concept name resolves to the same type only in the function parameter declarations; if the same concept is mentioned again in the function body, it evaluates to an unrelated type, and references to the same concept name in one scope (or in enclosing and nested scopes) deduce separate types, like auto. As we are discussing an alternative that is consistent across the whole program (placeholder = type), let's discuss what the consequences of applying the same concept = same type rule in various contexts would be:

1) Function parameter declarations and function body: This would make the semantics of the program depend on the way the function is defined. I mean, if we declare the function as:
void foo(Iterator it1, Iterator it2)
{
  Iterator it3 = ...;
}
with the same concept = same type rule extended to the function body, the type of it3 would be required to be the same as that of it1, but if we rewrite the above function using the full syntax:
template<typename I>
  requires Iterator<I>
void foo(I it1, I it2)
{
  Iterator it3  = ...;
}
then the type of it3 is independent of the types of it1 and it2 (I).

2) Inside one scope: The introduction of a new variable during code refactoring would change the meaning of unrelated declarations. Example:
{
   std::set<std::string> names_set = ...;
   std::vector<std::string> chosen(names_set.lower_bound("ala"), names_set.end());
   Iterator vb = chosen.begin();
   /* ... do something with vb */
}
Let's assume that someone wants to split the vector construction, as it is perceived as being too long:
{
   std::set<std::string> names_set = ...;
   Iterator it = names_set.lower_bound("ala");
   std::vector<std::string> chosen(it, names_set.end());
   Iterator vb = chosen.begin();
   /* ... do something with vb */
}
Again, if we apply same concept = same type to declarations found in the same scope, then the types of it and vb would be required to be the same, so the program would be invalid.

3) Inside a scope and its enclosing scopes: Same as above, but the inserted declaration can be really distant from the one that becomes affected. Also, if a variable is declared using a concept as a placeholder at global or namespace scope, then the meaning of every declaration in the program that uses the same concept would change, but only in the translation units where the newly declared variable is visible. That means that when an include of a file with non-conflicting declarations is added to a .cpp file, the meaning of some variable declarations can still change.

Currently the same concept = same type rule is applied only to function parameters, and given the problems listed above, it cannot be applied consistently in every context; furthermore, outside of function parameter declarations every placeholder introduces a new type (a concept-name placeholder works like auto). This makes learning the language harder (as does every exception introduced to otherwise general rules), and in that light I would again ask to reconsider this decision, especially since there seem to be no real advantages of this approach over the combination of the placeholder = type approach and the placeholder{id} syntax.

Andrew Sutton

Oct 4, 2014, 3:44:46 PM10/4/14
to conc...@isocpp.org, Gabriel Dos Reis
> I would like to ask you to explain me what are benefits of having the same
> concept = same type approach versus the placeholder = type approach combined
> with placeholder{id} syntax proposed in N3878 (I want to emphasise that this
> addition is necessary. For the every case of the algorithm in the form you
> have analyzed in the Palo Alto TR, that takes form for example:

n3878 wasn't around when the concepts proposal was designed and discussed
in SG8 or in EWG (May, 2013). And n3878 does not propose changes to
the semantics of the original concepts proposal; it is purely an
extension.


> Output_iterator algo(Input_iterator, Input_iterator, Output_iterator); //1
> Wit the alternative proposed here in can be written as:
> Output_iterator{O} algo(Input_iterator{I}, I, O); //2
> Which is equally simple and readable, as the alternative proposed above.


I do not think that this is simple and readable. It requires
understanding and applying more language rules to read the
declaration.


> The message given with the terse syntax notation was that is designed to
> provide a way for the novices to write a generic function equally simple to
> normal functions, but when you receive a feedback about this syntax for your
> target group (I am considering myself as a novice: I do not take part in
> developing of concepts nor have any real world experience wit writing them),
> the input is rejected with the message, that you should become an expert
> first, to propose changes in the syntax for novice people. I see this to
> points as being
> contradictory.


No, I don't think so. As Gaby said, the current rules are based on a
careful analysis of existing code, and I'll add to that a lot of
discussion about how to simplify the authoring of those algorithms.

Feedback that's backed by studies is generally taken more seriously
than feedback without it.


> void f(Sequence&& f);
> Will not have perfect forwarding semantics like:
> void f(auto&&);
> But will only accept rvalues like:
> void f(std::vector<T>&& t);
>
> template<typename T>
> void f(T&&) requires Sequence<T>;
> This candidate will accept the reference as type T (no changes in that), but
> the references will be rejected in check of Sequence<T>, because their are
> not default construtible.

Forwarding and concepts do not always interact well. We've known about
this problem since before proposing concepts lite, at least since 2010
or so. That problem shows up when you emulate concepts using library
features too.

The general feeling seems to be that it's easy enough to work around
with library features, so there haven't been any serious language
proposals to address the problem.
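One such library-style workaround might look like this (a sketch, assuming the constraint is applied to the decayed type via std::decay_t from <type_traits>):

template<typename T>
  requires Sequence<std::decay_t<T>>
void f(T&& t); // still forwards; the constraint checks the underlying sequence type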

Adding yet another meaning for && is not the right choice.

Andrew

Andrew Sutton

Oct 4, 2014, 3:58:17 PM10/4/14
to conc...@isocpp.org
> As Tony Van Eeerd has pointed out the rule is not consistency applied in
> your current wording: repetition of the same concept name resolves to the
> same type only in the function parameters declaration, and if the same
> concept is mentioned in the function body again, it evaluates to unrelated
> type, also references to same concept name in one scope (or enclosing and
> nested scopes) deduces separate types, like auto.

You can't use a concept name to declare a variable, so it can't be
inconsistent.

The reason that it is disallowed is that we don't have an analysis of
placeholders used in the body of functions.

Andrew

Tomasz Kamiński

Oct 4, 2014, 4:18:35 PM10/4/14
to conc...@isocpp.org
2014-10-04 21:44 GMT+02:00 Andrew Sutton <andrew....@gmail.com>:
> I would like to ask you to explain me what are benefits of having the same
> concept = same type approach versus the placeholder = type approach combined
> with placeholder{id} syntax proposed in N3878 (I want to emphasise that this
> addition is necessary. For the every case of the algorithm in the form you
> have analyzed in the  Palo Alto TR, that takes form for example:

n3878 wasn't around when the concepts proposal was design, discussed
in SG8 or in EWG (May, 2013). And n3878 does not propose changes to
the semantics of the original concepts proposal; it is purely an
extension.


> Output_iterator algo(Input_iterator, Input_iterator, Output_iterator); //1
> Wit the alternative proposed here  in can be written as:
> Output_iterator{O} algo(Input_iterator{I}, I, O); //2
> Which is equally simple and readable, as the alternative proposed above.


I do not think that this is simple and readable. It requires
understanding and applying more language rules to read the
declaration.
For me it is equally simple. I find the syntax of a pattern plus an id that captures what was matched against this instance of the pattern intuitive. But this is a matter of personal opinion, so if anyone has the right to make a decision based on it, it is the people making the proposal. Anyway, besides the opinion on readability, which is subjective, are you aware of other problems with this syntax?


> The message given with the terse syntax notation was that is designed to
> provide a way for the novices to write a generic function equally simple to
> normal functions, but when you receive a feedback about this syntax for your
> target group (I am considering myself as a novice: I do not take part in
> developing of concepts nor have any real world experience wit writing them),
> the input is rejected with the message, that you should become an expert
> first, to propose changes in the syntax for novice people. I see this to
> points as being
> contradictory.


No, I don't think so. As Gaby said, the current rules are based on a
careful analysis of existing code, and I'll add to that a lot of
discussion about how to simplify the authoring of those algorithms.

Feedback that's backed by studies is generally taken more seriously
than feedback without it.


Returning to the previous point, concepts should not only address writing the STL algorithms, and having a terse syntax that is based only on that analysis may reduce its applicability. Looking only at the number of people who are opting for another solution gives a hint that this may actually be the case. Also, the preference for same concept = same type in the STL algorithms may be a result of the fact that we use two parameters to represent a range.

> void f(Sequence&& f);
> Will not have perfect forwarding semantics like:
> void f(auto&&);
> But will only accept rvalues like:
> void f(std::vector<T>&& t);
>
> template<typename T>
> void f(T&&) requires Sequence<T>;
> This candidate will accept the reference as type T (no changes in that), but
> the references will be rejected in check of Sequence<T>,  because their are
> not default construtible.

Forwarding and concepts do not always interact well. We've known about
this problem since before proposing concepts lite, at least since 2010
or so. That problem shows up when you emulate concepts using library
features too.

The general feeling seems to be that it's easy enough to work around
with library features, so there haven't been any serious language
proposals to address the problem.

Adding yet another meaning for && is not the right choice.

You misunderstood what I wanted to say. I was astonished at how concepts allow the programmer to write a generic function that takes an rvalue reference to a Sequence just like a function that takes a specific type (std::vector). Otherwise, if such a function needs to be written, we have to check that T is not a reference. This is a good thing, and I thought it was a conscious decision.



2014-10-04 21:57 GMT+02:00 Andrew Sutton <andrew....@gmail.com>:

You can't use a concept name to declare a variable, so it can't be
inconsistent.

The reason that is disallowed is that we don't have an analysis of
placeholders used in the body of functions.
But again, I think the examples I have pointed to show that if concepts are allowed to be used to declare variables (and from other posts it seems that you want such functionality), they cannot be consistent with the approach taken for function parameters.

If the analysis of other parts of the library (algorithms/functional components) or of its introduction shows that the current approach to the terse notation was specific to one use case, would it be possible to change it after publication of the Concepts TS, or will we stay with it forever? Maybe the decision about the terse syntax should be postponed until more of the library is conceptualized? For example, by making the repetition of the same concept name in function parameters ill-formed.
 

Andrew



Andrew Sutton

Oct 4, 2014, 4:58:58 PM10/4/14
to conc...@isocpp.org
> Returning to previous point, the concepts should not only address writing
> the STL algorithms, and having a terse syntax that is based only on analysis
> may reduce it applicability. Looking only on the amount of people that are
> opting for another solution, give a hint that this may be actually a case.
> Also the prefference of same concept = same type for STL algoritms may be a
> result of fact that we use two parameters to resent a range.

The study is a starting point. If we believed that the results
wouldn't generalize, then we would not have proposed the rule. Thus
far, we have yet to be proven wrong.

Remember, the terse notation is designed for simple cases. It is not a
replacement for the full template syntax. If you need more degrees of
freedom in a declaration, it may not be one of those simple cases that
the terse notation is designed for.


> But again, I think that examples that I have pointed shows, that if we allow
> the concepts are allowed to be used to declare variable (and from other post
> it seems that you want to have such functionality), they cannot be
> consistent with approach taken for function parameter.

I'm not sure that it needs to be.

> If the analysis of other part of the library (algoritms/functional
> components) or the introduction will show up that current approach to terse
> was specific to use case, would it be possible to change that after
> publication of Concept TS, or we will stay with it forever? Maybe the
> decision about terse syntax should be prolonged until more part of library
> is conceptualized? For example by making the repetition of same concept name
> in functions parameter ill formed.

Adding concepts to the library is independent of terse notation. I see
no reason for this aspect of the design to be reconsidered.

Andrew

Tomasz

Oct 4, 2014, 5:29:24 PM10/4/14
to conc...@isocpp.org
On Saturday, October 4, 2014 at 10:58:58 PM UTC+2, Andrew Sutton wrote:
> Returning to previous point, the concepts should not only address writing
> the STL algorithms, and having a terse syntax that is based only on analysis
> may reduce it applicability. Looking only on the amount of people that are
> opting for another solution, give a hint that this may be actually a case.
> Also the prefference of same concept = same type for STL algoritms may be a
> result of fact that we use two parameters to resent a range.

The study is a starting point. If we believed that the results
wouldn't generalize, then we would not have proposed the rule. Thus
far, we have yet to be proven wrong.

Remember, the terse notation is designed for simple cases. It is not a
replacement for the full template syntax. If you need more degrees of
freedom in a declaration, it may not be one of those simple cases that
the terse notation is designed for.


> But again, I think that examples that I have pointed shows, that if we allow
> the concepts are allowed to be used to declare variable (and from other post
> it seems that you want to have such functionality), they cannot be
> consistent with approach taken for function parameter.

I'm not sure that it needs to be.

My first point was about this issue. I think that consistency is a real value for complex systems or entities (like the C++ language), because consistency means that to understand the language you only need to learn a set of generic rules. It is easy to forget, and to mistakenly assume, that in function parameters concept placeholders behave differently than auto, while in variable declarations they behave like auto. Also, as you have said several times on these forums, you want to make the language simpler to learn, and exceptions to general rules make it harder: for example, every placeholder introduces its own type, except when you use the same concept twice in function parameters, and by the same concept I do not mean concepts implying the same requirements (as checked during overload resolution) but the actual same name. Similar things even apply to spoken languages: the ones considered hardest to learn are the ones that have more exceptions than general rules.

My disagreement with the current approach to the terse notation is mostly based on the fact that it adds new exceptions to the language, which every C++ programmer must remember:
  - the various placeholders are treated differently (auto vs. concept name), and even a concept name will probably be treated differently in different contexts (introduction of a variable versus a parameter)
  - we have two forms of concept equivalence: one that discards names and is based on the requirements, and one, based on the name, for checking whether they should introduce the same type in a parameter
And I would accept this if the terse notation were otherwise unusable (decltype), but with the placeholder{type} syntax that is no longer the case.


> If the analysis of other part of the library (algoritms/functional
> components) or the introduction will show up that current approach to terse
> was specific to use case, would it be possible to change that after
> publication of Concept TS, or we will stay with it forever? Maybe the
> decision about terse syntax should be prolonged until more part of library
> is conceptualized? For example by making the repetition of same concept name
> in functions parameter ill formed.

Adding concepts to the library is independent of terse notation. I see
no reason for this aspect of the design to be reconsidered.

 I mean that if, during the process of adding concepts to the library, it were found that the current approach leads to fewer uses of the terse notation than the placeholder = type approach, then the terse notation should be reconsidered.

Tomasz

Oct 4, 2014, 5:51:19 PM10/4/14
to conc...@isocpp.org
And the question that you need to answer (for yourself) is: is the same concept = same type rule for the terse notation worth enough to justify adding such exceptions/inconsistencies to the language, in light of the alternative use of the placeholder{type} syntax?

As you probably already know, my answer is no, because I think that we should avoid complicating the language. But I perceive complexity as the number of rules. And that is the reason I really think that your concepts design is excellent: it is based on a small set of features, the subsumption relation for predicates, and predicates based on the validity of expressions. Compared to the C++11 design:
  - no concept inheritance
  - no concept maps
  - no function signatures
  - no impact on function body semantics
  - and probably more.
I would like to see you keep this less-is-more approach for the terse notation.

Andrew Sutton

Oct 4, 2014, 5:52:29 PM10/4/14
to conc...@isocpp.org
> I mean that, if during the process of adding the concepts to the library,
> it would be found that the current approach leads to less uses of terse
> notation than the placeholder = type approach, then terse notation should be
> reconsidered.

I think you are missing the point. The terse notation is designed to
simplify the writing of simple templates. It is not a general
mechanism to be used in every situation. Tweaking the notation to make
it work for more declarations was never a goal.

Andrew

Tomasz

Oct 4, 2014, 6:02:52 PM10/4/14
to conc...@isocpp.org

That depends on the definition of simple templates. During the process of adding concepts to the rest of the library, the set of "simple" template usages (which now contains examples derived from the STL algorithms) will be extended with examples coming from users or directly from specific libraries. With that new set of simple examples, we may still find that placeholder = type covers a larger subset of it. And this would be valid input for changing (not tweaking) the notation.

But for me the more important point is that we need to add two more features (to avoid the word "concept") to the language to support the current notation.

Andrew Sutton

Oct 4, 2014, 6:25:40 PM10/4/14
to conc...@isocpp.org
>> I think you are missing the point. The terse notation is designed to
>> simplify the writing of simple templates. It is not a general
>> mechanism to be used in every situation. Tweaking the notation to make
>> it work for more declarations was never a goal.
>
>
> That depends on the definition of simple templates.

If you are writing a function that takes arguments of the same type,
constrained by some concept C, you can write it like this:

void f(C x, C y);

If your template is not like this, you should consider declaring it as
a template. This is what I mean by simple.
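For reference, that simple form is shorthand for roughly the following (per the same-concept = same-type rule):

template<typename T>
  requires C<T>
void f(T x, T y);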

> During the process of
> adding concepts to the rest of library, the set of "simple" template usage
> (that now contains examples derived from STL algoritms) will be extended
> with the example comming from use or directly from specific libraries. With
> the new set of simple examples, we may still found that the placeholder =
> type will cover larger subset of it. And this would be a valid input to
> change (not tweak) a notation.

If the template cannot be written in the form above, I advise writing
it as a template. Changing the programming language is a bit harder.

Andrew

Tomasz

Oct 4, 2014, 7:09:22 PM10/4/14
to conc...@isocpp.org
On Sunday, October 5, 2014 at 12:25:40 AM UTC+2, Andrew Sutton wrote:
>> I think you are missing the point. The terse notation is designed to
>> simplify the writing of simple templates. It is not a general
>> mechanism to be used in every situation. Tweaking the notation to make
>> it work for more declarations was never a goal.
>
>
> That depends on the definition of simple templates.

If you are writing a function that takes arguments of the same type,
constrained by some concept C, you can write it like this:

void f(C x, C y);

If your template is not like this, you should consider declaring it as
a template. This is what I mean by simple.


I think that the aim of the terse notation should be to simplify the most common use cases, because otherwise there is no point in introducing such a shorthand. And I believe that, in the case of the examples you have investigated, the most common cases where such notation could actually be applied (accepting two parameters matching the same constraint, with no inter-type constraint) had exactly the same types, and that was the reason for selecting such a design.

I am writing this because your post gave me the impression that the definition of a "simple" template was chosen to match your solution, instead of the solution being developed based on the set of (most common) simple templates. And I really (want to) believe that if the study of template usage in the area you have chosen had shown that, in the most common cases, functions whose arguments satisfy the same constraint actually accept different types, then a different design decision would have been made regarding the terse syntax. Otherwise, backing up your argument with a study of examples would be invalid.

Paul Fultz II

Oct 4, 2014, 7:26:15 PM10/4/14
to conc...@isocpp.org, botond...@yahoo.ca

For the other side, maybe we should take a lesson for a first concept design (for C++11) and do not rush the Concepts TS at the cost of discarding any inputs that changes it design.

There really is no need for urgency in getting concepts into the language, considering that, with library support, concept predicates can already be written in a simple manner and produce readable error messages in C++11.
In fact, I would rather see more work on getting reflection or modules into the language than concepts. This is not to say that we shouldn't have native features for concept predicates, but it can be wise to be patient and see how concept predicates are used in C++11, so that better design decisions can be made to make concepts useful for both library writers and novices.

- Paul

Paul Fultz II

unread,
Oct 4, 2014, 7:40:17 PM10/4/14
to conc...@isocpp.org

Given that such situations are relatively common and we want to create a
terse and novice-friendly syntax, I have no doubt why the same concept
= same type approach was selected in light of the above example.

This syntax is not novice-friendly. I would not recommend that beginners use the terse syntax, because its confusing semantics can hinder their understanding as they move from novice level to expert level.

- Paul

Andrew Sutton

unread,
Oct 4, 2014, 9:37:43 PM10/4/14
to conc...@isocpp.org
> This syntax is not novice friendly. I would not recommend beginners to use
> the terse syntax because its confusing semantics can hinder their
> understanding as they move from novice level to expert level.

I taught this in my generic programming class last spring. The
students had absolutely no problem understanding the concept. And, for
the record, few students had any significant C++ experience at the
beginning of the semester.

From what experience do you derive your observations?

Andrew

Bjarne Stroustrup

unread,
Oct 5, 2014, 9:50:50 AM10/5/14
to conc...@isocpp.org
I have been following this debate. It is 10 times as much work to
counter an objection or
a vague new idea as it is to generate such, so I have stayed out of the
way for lack of time
and hope that people would come to their senses.

"Concepts" is not a new idea. Some of us have been working in this field
on and off for 20+
years and we are not happy about suggestions to wait until "perfection"
comes from ideas
that have been aired (and considered) many times before and/or are
unsupported claims of
improvements.

Perfection never comes.

Most of the debates have been focused on the terse notation. Consider

void sort(Sortable& c); // or "Range" or "Container"

I have never met a novice or "average programmer" who did not like it.
Many that I talk to
cannot wait to use it and cannot understand why the committee doesn't
just get on with it.
Their estimate of the collective IQ of the committee is not flattering.

The most important aspect of the terse notation is that it allows us to
think about generic
programming in the same way as we think of "ordinary" programming. In
particular, the terse
syntax serves very well in the (simple) cases where we can think of a
concept as a "type of
a type."

When I show,

void sort(Forward_iterator, Forward_iterator);

most ask for the first version (void sort(Sortable&);), many nod sagely
(recognizing the
standard), and a few language experts/lawyers asks whether the two
Forward_iterators
required must be of the same type. The answer is yes, and they are happy.

The suggestion that the terse syntax is hard to teach and hard to use is
in my experience
bogus (mere conjecture). I have explained it to many hundreds of
programmers (of widely
varying levels of experience) and at several levels of detail. To do
that well, it is
important to emphasize that the syntax exists to express the simplest
cases and whenever
you want to express something complicated, fall back to the shorthand
notation or even
to explicit "requires." The requires clauses will never go away - they
are logically
necessary. Nor will the "good old" template<class T> go away for the
next decade or two
- they are in massive use.

Keep simple things simple, and remember that most programmers don't give
a tinker's cuss
about type theory. There are two ways of looking at simplicity: Factor
for maximum
orthogonality and Huffman coding. The former gives you exactly one way
to do anything
and that way requires you to explicitly specify every aspect of your
solution. The latter,
introduces a series of shorthands so that each level of complexity can
be expressed minimally.
With concepts (lite), we take the latter approach and it (naturally)
requires more rules
than the orthogonal design. Another example is range-for. Without it,
C++ would be simpler,
but many C++ programs more complicated.

Now, the fact that there are algorithms that take arguments of types
that must match the
same concept but can be different has been known "forever." I think I
discuss the problem
in my 2002 paper that first introduced templates using "auto" and
without the "template"
keyword. At the time, I think I considered indices. Andrew Sutton
considered this problem
in detail about 3 years ago and suggested a notation equivalent to the
one presented in
the paper he wrote with Botond Ballo:

void merge(Forward_iterator{For}, For, Forward_iterator{For2},
For2, Output_iterator);

I don't dislike this solution. It's a compatible extension to what we
have and I may even
support it in the future (no guarantees), but I do not want to require
it. I am not (yet)
convinced that it would be sufficiently useful.

I think that the best solution to many of the complicated
use-of-concepts problems is to
explicitly use "requires" or (better still) introducing well-defined
sets of types.
For example:

Mergeable{For, For2, Out}
Out merge(For, For, For2, For2, Out out);

This is shorter and more explicit than the previous version. It also
deals with the
requirements on combinations of argument types.
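
For concreteness, a rough sketch of what such an introducer might expand to (the Mergeable definition below is illustrative only; the Forward_iterator and Output_iterator concepts and the cross-type requirements are assumed):

    template<typename For, typename For2, typename Out>
    concept bool Mergeable =
        Forward_iterator<For> &&
        Forward_iterator<For2> &&
        Output_iterator<Out>;   // plus, in practice, requirements relating the three types

    // The introducer form above is then roughly equivalent to:
    template<typename For, typename For2, typename Out>
        requires Mergeable<For, For2, Out>
    Out merge(For, For, For2, For2, Out out);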

There was a suggestion that Nevin Liber's work on ranges would replace
pairs of iterators
of the same type with pairs of different iterator types. Yes, but such
pairs are not of
unrelated iterators. [Iterator:Iterator) sequences will never go away
and the new style
will be something like [Iterator:Sentinel) pairs. I suspect, we'll see
something like

Range{Iter, Sentinel}
void sort(Iter,Sentinel);

where Sentinel will have a well-defined relation to Iterator defined in
Range.

We might even see

void sort(Iter,Iter_Sentinel);

The lambda syntax, by itself, has no way of expressing relations among
"auto" arguments.
That's (probably) fine because we can deduce such constraints from the
use of the lambda.
However, we have no such luxury with templates in general and concepts
allows us to express
relationships (or the lack thereof) for lambda arguments.
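
For example (a small sketch, assuming an Ordered concept is in scope and that constrained parameters are accepted in lambdas):

    auto same = [](Ordered x, Ordered y) { return x < y; };  // x and y must have the same type
    auto any  = [](auto x, auto y) { return x < y; };        // x and y are deduced independently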

Some of the comments seem to consider it a failing of concepts that it
does not directly
map onto existing library notions. However, we design libraries to
approximate ideals.
When a better solution is found we should not restrict ourselves to the
equivalent of
a library solution. In fact, we need language extensions exactly where
library solutions
do not approximate our ideals sufficiently closely, e.g., because they
require too much
cleverness.

So, my opinion is that we should finish the concepts specification and
ship it so that
the users (many of whom are anxiously waiting) can benefit. Unless, I
hear of some
fundamental problem, we should ship now - and I have not seen any
objection that I
consider fundamental (in this discussion thread or elsewhere).

Paul Fultz II

unread,
Oct 5, 2014, 11:59:30 AM10/5/14
to conc...@isocpp.org, b...@cs.tamu.edu


On Sunday, October 5, 2014 9:50:50 AM UTC-4, Bjarne Stroustrup wrote:
I have been following this debate. It is 10 times as much work to
counter an objection or
a vague new idea as it is to generate such, so I have stayed out of the
way for lack of time
and hope that people would come to their senses.

"Concepts" is not a new idea. Some of us have been working in this field
on and off for 20+
years and we are not happy about suggestions to wait until "perfection"
comes from ideas
that have been aired (and considered) many times before and/or are
unsupported claims of
improvements.

Perfection never comes.

Of course, I wasn't suggesting that we wait for perfection; instead I was suggesting that we wait for more uses of and experience with concept predicates. It is foolish to ignore the experience of people who have used real concept predicates, either in C++ or in other languages that have something similar (such as D with static_if).
 

Most of the debates have been focused on the terse notation. Consider

Yes, the terse notation is a minor problem, whereas concepts lite has bigger problems that make it a no-go, such as the lack of specialization.
 

It's not that novices can't understand the terse notation. In fact, it maps exactly onto how they understand the language, but it has some problems.

1) It's easy for a novice to specify the function incorrectly, which leads to unnecessary errors:

    Number max(Number x, Number y)
    {
        return x > y ? x : y;
    }

If the terse syntax were auto-based, this wouldn't be a problem. Furthermore, even if they were to define an algorithm like this:

    Iterator find_if(Iterator start, Iterator last, Predicate p);

the function would just be "underspecified" rather than incorrectly specified. An error would still occur if they mismatched the iterators; it just wouldn't be caught by the initial predicates.

2) It leads to a novice having an incorrect model for understanding concept predicates, and could leave them having difficulty understanding why certain things don't work, or why there is a difference between the auto-based version and the concept-based version.
 

Keep simple things simple, and remember that most programmers don't give
a tinker's cuss
about type theory.

Which is even more reason to prefer the auto-based terse syntax, since a novice really wouldn't care about how the types are defined, whereas anyone beyond a novice would find the syntax surprising.

There are two ways of looking at simplicity: Factor
for maximum
orthogonality and Huffman coding. The former gives you exactly one way
to do anything
and that way requires you to explicitly specify every aspect of your
solution. The latter,
introduces a series of shorthands so that each level of complexity can
be expressed minimally.
With concepts (lite), we take the latter approach and it (naturally)
requires more rules
than the orthogonal design. Another example is range-for. Without it,
C++ would be simpler,
but many C++ programs more complicated.

However, range-for was based on a library-based design. Plus, most features added to the language interoperate with the libraries that previously used library-based solutions.


I think that the best solution to many of the complicated
use-of-concepts problems is to
explicitly use "requires" or (better still) introducing well-defined
sets of types.
For example:

     Mergeable{For, For2, Out}
     Out merge(For, For, For2, For2, Out out);

This is shorter and more explicit than the previous version. It also
deals with the
requirements on combinations of argument types.

What? Duplicating predicates just so you can express relationships is an awful idea.
 

There was a suggestion that Nevin Liber's work on ranges would replace
pairs of iterators  

I believe you mean Eric Niebler.
 
of the same type with pairs of different iterator types. Yes, but such
pairs are not of
unrelated iterators. [Iterator:Iterator) sequences will never go away
and the new style
will be something like [Iterator:Sentinel) pairs. I suspect, we'll see
something like

     Range{Iter, Sentinel}
     void sort(Iter,Sentinel);

where Sentinel will have a well-defined relation to Iterator defined in
Range.

We might even see

     void sort(Iter,Iter_Sentinel);

The lambda syntax, by itself, has no way of expressing relations among
"auto" arguments.
That's (probably) fine because we can deduce such constraints from the
use of the lambda.
However, we have no such luxury with templates in general and concepts
allows us to express
relationships (or the lack thereof) for lambda arguments.

I don't understand what you mean about the lambda syntax.
 

Some of the comments seem to consider it a failing of concepts that it
does not directly
map onto existing library notions. However, we design libraries to
approximate ideals.
When a better solution is found we should not restrict ourselves to the
equivalent of
a library solution.

Yes, when there is a better solution. Currently, concepts lite is not a better solution than the library solution. With the library solution I have specialization, higher-order predicates, interoperability with other libraries, and conditional overloading. Features I use on a regular basis.
 
In fact, we need language extensions exactly where
library solutions
do not approximate our ideals sufficiently closely, e.g., because they
require too much
cleverness.

Well, native language features can have cleaner syntax (reducing the use of boilerplate code or macros), and the compiler can provide much deeper diagnostics and analysis. I am not arguing that we shouldn't have native features for concept predicates in the language. Rather, since C++11 we have nice library-based solutions, so we can let concepts lite mature more before introducing it into the language.
 

So, my opinion is that we should finish the concepts specification and
ship it so that
the users (many of whom are anxiously waiting) can benefit. Unless, I
hear of some
fundamental problem, we should ship now - and I have not seen any
objection that I
consider fundamental (in this discussion thread or elsewhere).


Rushing a solution seems foolish, considering it is not a better solution than the current library solutions. Plus, it will either need to be deprecated like auto_ptr or it will continue to hinder the development of newer features based on concepts in the future.

- Paul

Gabriel Dos Reis

unread,
Oct 5, 2014, 1:37:03 PM10/5/14
to Tomasz, conc...@isocpp.org
I can see the contradiction between what I said and the strawman you
just made and proceeded to ridicule. That is fine, but it is also
competing with the most fundamental issues for the limited time
resources we have.

| Please take into consideration that nearly nobody on
| this forum is questioning the set of concepts chosen for the STL (from
| the Palo Alto TR) or the mechanics for selecting the better overload. One does
| not have to be an expert to have an opinion on the syntax one
| prefers, and if the message that comes from the authors is that we have a
| new short syntax for you (Syntax for novices? I am a novice!), you
| should expect that people will put forward their suggestions.

Understand that nobody is requiring you to be an expert to have an opinion.
We are all entitled to our opinions.

Gabriel Dos Reis

unread,
Oct 5, 2014, 1:57:10 PM10/5/14
to conc...@isocpp.org, b...@cs.tamu.edu
Paul Fultz II <pful...@gmail.com> writes:

[...]

| So, my opinion is that we should finish the concepts specification
| and
| ship it so that
| the users (many of whom are anxiously waiting) can benefit.
| Unless, I
| hear of some
| fundamental problem, we should ship now - and I have not seen any
| objection that I
| consider fundamental (in this discussion thread or elsewhere).
|
|
|
| Rushing a solution seems foolish, considering it is not a better
| solution than the current library solutions.

I don't know exactly what you meant that to be. So, I have a question:
do you believe that after nearly 20+ years working continuously on
this, having a Technical Specification that consolidates the essence of
what we believe concepts should be based on practical experience,
analysis of real-world algorithms, talking to programmers in the field,
including learning from past failures is foolish?

| Plus, it will either need
| to be deprecated like auto_ptr or it will continue to hinder the
| development of newer features based on concepts in the future.

I don't know the exact basis for that assertion, but a Technical
Specification allows the community to reconsider a design or a technical
solution when the in-the-field experience shows the need for something
else. I see no parallel to "auto_ptr" nor any material hindrance to
development of new features in the future.

We are not helping ourselves if we have to spend so much scarce
resources on strawman arguments.

-- Gaby

Loïc Joly

unread,
Oct 5, 2014, 2:10:53 PM10/5/14
to conc...@isocpp.org
On 05/10/2014 17:59, Paul Fultz II wrote:
>
>
> Yes, when there is a better solution. Currently, concepts lite is not
> a better solution than the library solution. With the library solution
> I have specialization, higher-order predicates, interoperability with
> other libraries, and conditional overloading. Features I use on a
> regular basis.

Could you please expand on each of those points? What do you mean by them,
how do you do it with a library, and where does concept-lite fail to allow them?
For instance, I fail to see where concept-lite hinders interoperability
with other libraries.

From my current point of view (which may be outdated), the main
advantages of concept-lite over a library solution are the following:
- Syntax that is usable even by non-experts
- Especially when considering overloading on concepts
- The concept specification is part of the interface, not of the
implementation

---
Loïc Joly



Tomasz

unread,
Oct 5, 2014, 4:01:08 PM10/5/14
to conc...@isocpp.org, b...@cs.tamu.edu
For me the most important point that I have presented, but which got lost because of other things that have been posted (especially one straw man that I created and which, as a consequence, derailed the discussion, which I now regret), is:

The syntax proposed in N3878 (which I refer to as placeholder{id}), combined with treating a placeholder introduced by a concept name the same way as auto (placeholder = type), allows us to preserve a terse notation without adding the new rules to the language that are required to support the approach where the same concept name means the same type of function parameter. So if, and only if, you do not see real disadvantages in requiring the syntax from N3878, then it is better to have a simpler language.

I was not questioning the use of the terse syntax, or any other part of concepts, except the one single rule that makes parameters introduced by the same concept have the same type, because I personally perceive the placeholder{id} syntax as equally simple as the one currently proposed, and I even think that it would be more intuitive and useful if concept names were allowed to introduce variables (in light of the examples presented in this thread).
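
For reference, a small sketch of the two readings being compared, assuming a concept C (the braced form is the N3878-style placeholder{id} syntax, not the TS wording):

    // N3878-style: the braces introduce a name, so sameness is stated explicitly.
    void foo(C{T} x, T y);       // x and y have the same type T
    void foo(C{T} x, C{U} y);    // x and y may have different types

    // Current TS terse notation: repeating the concept name implies the same type.
    void foo(C x, C y);          // x and y have the same type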

Andrew Sutton

unread,
Oct 5, 2014, 6:05:49 PM10/5/14
to conc...@isocpp.org
> The syntax proposed in N3878 (which I refer to as placeholder{id}) with
> combination of same treatment of auto and placeholder introduced by concept
> name (placeholder = type), allow us to preserve a terse notation, without
> adding new rules to the language, that are required to support approach
> where same concept name means same type of function parameters. So if, and
> only if, you do not see real disadvantages of forcing syntax from N3838,
> then it is better to have simpler language.

N3878 is not being considered for the concepts TS. That ship has
sailed. This doesn't mean that it can't be reconsidered when the TS is
evaluated for incorporation into the C++ standard.

I see fewer advantages to that proposal than I once thought I did. I
am not sure that I would support it today.

I think it is important to emphasize the fact that a TS provides an
opportunity for the community to evaluate a new language or library
feature before it is incorporated into the standard. It is not written
in stone.

Andrew

Bjarne Stroustrup

unread,
Oct 6, 2014, 9:33:37 AM10/6/14
to Tomasz, conc...@isocpp.org
Aesthetics is an important issue in language design, but personal preferences are at best a weak guide. I tend to try mine out by writing and teaching to see their effects on other people. "Intuitive" is to me a "red flag" - all too often meaning "I don't really know why" or "similar to what I'm used to." I do not want to require the explicit naming via concept_name{name}, even though I might accept it as a compatible extension later on.

Nevin Liber

unread,
Oct 6, 2014, 11:13:31 AM10/6/14
to conc...@isocpp.org
On 5 October 2014 10:59, Paul Fultz II <pful...@gmail.com> wrote:

Of course, I wasn't suggesting that we wait for perfection; instead I was suggesting that we wait for more uses of and experience with concept predicates. It is foolish to ignore the experience of people who have used real concept predicates, either in C++ or in other languages that have something similar (such as D with static_if).

How long should we wait?  There is always something new we could be considering, because no one else is standing still.
 
There was a suggestion that Nevin Liber's work on ranges would replace
pairs of iterators  

I believe you mean Eric Niebler.

No, I'm pretty sure he means me. :-)  While Eric is doing great work on ranges, earlier I brought up some of the same observations that you did with the terse syntax.

However, my observations are not objections.  I think we should ship the TS and see where this goes, even if I personally might have made different choices.  None of this is even close to being a show stopper, at least for me.

Consistency is really hard to argue in this case.  The syntax:

void DoIt(InputIterator i, InputIterator j)

is both consistent as a placeholder for a type and inconsistent with auto.  The authors of concepts have good reasons for picking the former.  Will some people get confused?  Yes.  If we pick the latter, will some people get confused?  Yes.  We cannot please everyone.  I don't see how waiting indefinitely on this makes any progress in making a better (as opposed to a different) choice.
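
To make the two readings concrete (a sketch; InputIterator is assumed to be a concept):

    void DoIt(InputIterator i, InputIterator j);   // TS reading: i and j have the same type
    void DoIt(auto i, auto j);                      // auto reading: i and j deduced independently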

Yes, when there is a better solution. Currently, concepts lite is not a better solution than the library solution. With the library solution I have specialization, higher-order predicates, interoperability with other libraries, and conditional overloading. Features I use on a regular basis.

That's you.  In my experience, all but the most dedicated of experts find both SFINAE and template error messages painful.
 
Plus, it will either need to be deprecated like auto_ptr or it will continue to hinder the development of newer features based on concepts in the future.

Hinder the development?  That's a bold statement.  How is concepts as specified boxing us into a corner? 
--
 Nevin ":-)" Liber  <mailto:ne...@eviloverlord.com>  (847) 691-1404

Botond Ballo

unread,
Oct 6, 2014, 11:17:44 AM10/6/14
to conc...@isocpp.org
> Currently, concepts lite is not a
> better solution than the library solution. With the library solution I
> have specialization, higher-order predicates, interoperability with
> other libraries, and conditional overloading. Features I use on a
> regular basis.

Out of curiosity, what library do you use to provide concepts support?

Regards,
Botond

Paul Fultz II

unread,
Oct 6, 2014, 12:05:34 PM10/6/14
to conc...@isocpp.org


On Sunday, October 5, 2014 1:10:53 PM UTC-5, loic.act...@numericable.fr wrote:
On 05/10/2014 17:59, Paul Fultz II wrote:
>
>
> Yes, when there is a better solution. Currently, concepts lite is not
> a better solution than the library solution. With the library solution
> I have specialization, higher-order predicates, interoperability with
> other libraries, and conditional overloading. Features I use on a
> regular basis.

Could you please expand on each of those points? What do you mean by them,
how do you do it with a library, and where does concept-lite fail to allow them?
For instance, I fail to see where concept-lite hinders interoperability
with other libraries.

Sure.

- Specializations

Because concept predicates are traditionally defined as integral constant classes, they always allow specialization. So, if I define a concept predicate to check if something is drawable:

    TICK_TRAIT(is_drawable)
    {
        template<class T>
        auto requires(T&& x) -> decltype(
            x.draw()
        );
    };

Since this is just a class, I can specialize it if this predicate doesn't work like this:

    template<>
    struct is_drawable<foo>
    : std::false_type
    {};

There are two reasons why the original predicate will fail:

1) The `draw` method has a static_assert in it that causes a hard error for certain types, which can cause the checking to fail.

2) The predicate produces a false positive because, perhaps, the `draw` method does not maintain the intended semantics. We want to check for objects that can draw on a screen, but the draw method for `foo` actually draws money from a bank account.

Concept predicates rely on a "duck" interface, that is, objects that implicitly quack like a duck; but when you run across a quacking goose, you will need specializations to explicitly say that it isn't a duck.  This feature is **extremely** important, because dealing with a quacking goose without specialization requires huge refactorings (perhaps in a third-party library, which may not always be possible). Having multiple semantics for functions is not bad design; that is why we have namespaces. However, the current design of concepts lite, which has implicit relationships (and I mean relationships, not interfaces), leads to disallowing specializations. This point is also more important than the other points I mentioned, since the language can be extended to support the other points, but the current design of concepts lite cannot be extended to support specializations as easily, since that requires a core redesign.

- Higher-order predicates

For example, say I want to find all the drawable types in a tuple. I could write something like this:

    auto drawables = filter<is_drawable<_>>(t);

This can't be done with the current proposal without first wrapping it in a class.

- Interoperability

This goes along with the higher order part as well, but additionally, concept predicate classes can't be used for the terse notation:

    void draw(is_drawable& x);

- Conditional overloading

As another example, say we want to print types in C++. We want it to work recursively over ranges as well as fusion sequences. Using a library solution it would look like this:

    TICK_TRAIT(is_range)
    {
        template<class T>
        auto requires(T&& x) -> decltype(
            begin(x),
            end(x)
        );
    };

    constexpr auto print = fit::fix(fit::conditional(
        FIT_STATIC_LAMBDA(auto, const std::string& s)
        {
            std::cout << s << std::endl;
        },
        FIT_STATIC_LAMBDA(auto self, const auto& sequence, TICK_PARAM_REQUIRES(boost::fusion::traits::is_sequence<decltype(sequence)>()))
        {
            boost::fusion::for_each(sequence, [&](auto x)
            {
                self(x);
            });
        },
        FIT_STATIC_LAMBDA(auto self, const auto& range, TICK_PARAM_REQUIRES(is_range<decltype(range)>()))
        {
            for(const auto& x:range) self(x);
        },
        FIT_STATIC_LAMBDA(auto self, const auto& x)
        {
            std::cout << x << std::endl;
        }

    ));

Now granted, native syntax for this would be ideal, something like this (it's just a rough sketch to get the idea across):

    void print(auto, const std::string& s)
    {
        std::cout << s << std::endl;
    }

    template<class T>
    void print(auto self, const T& x) if (boost::fusion::traits::is_sequence<T>())
    {
        boost::fusion::for_each(x, [&](auto elem)
        {
            print(elem);
        });
    }
    else if (is_range<T>())
    {
        for(const auto& elem:x) print(elem);
    }
    else
    {
        std::cout << x << std::endl;
    }

Now, conditional overloading is useful here because the predicates are unrelated but can overlap. Say, for example, an `std::array` is both a fusion sequence and a range. However, even with all the macro stuff, the library solution is much simpler than the concepts lite solution.

 

 From my current point of view (which may be outdated), the main
advantages of concept-lite over a library solution are the following:
- Syntax that is usable even by non-experts

Yes, of course, syntactic sugar can help a lot for novices as well as experts. I am not arguing against that point. However, if behind the scenes it limits its usefulness, do we really want to add syntax that we have to try to replace later?
 
- Especially when considering overloading on concepts

Yes, conditional overloading can help with that, but it's not the same.
 
- The concept specification is part of the interface, not of the
implementation

That of course is the same for the library solution.
 

---
Loïc Joly



Paul Fultz II

unread,
Oct 6, 2014, 12:27:07 PM10/6/14
to conc...@isocpp.org, b...@cs.tamu.edu, g...@axiomatics.org


On Sunday, October 5, 2014 12:57:10 PM UTC-5, Gabriel Dos Reis wrote:
Paul Fultz II <pful...@gmail.com> writes:

[...]

|     So, my opinion is that we should finish the concepts specification
|     and
|     ship it so that
|     the users (many of whom are anxiously waiting) can benefit.
|     Unless, I
|     hear of some
|     fundamental problem, we should ship now - and I have not seen any
|     objection that I
|     consider fundamental (in this discussion thread or elsewhere).
|    
|
|
| Rushing a solution seems foolish, considering it is not a better
| solution than the current library solutions.

I don't know exactly what you meant that to be.  So, I have a question:
do you believe that after nearly 20+ years working continuously on
this, having a Technical Specification that consolidates the essence of
what we believe concepts should be based on practical experience,
analysis of real-world algorithms, talking to programmers in the field,
including learning from past failures is foolish?

Well, it doesn't seem like the proposal has been based on any practical experience. Just starting with how `concept bool` is defined as a function, it seems like either the authors were unaware of how concept predicates are defined in C++ today, or it was incorrectly assumed that previous concept predicates were a complete failure. Furthermore, real-world usage of real solutions to generic programming is treated as an "abomination". And now features are being designed around novices' misperceptions about how generic programming works.


Paul Fultz II

unread,
Oct 6, 2014, 12:36:30 PM10/6/14
to conc...@isocpp.org


On Monday, October 6, 2014 10:13:31 AM UTC-5, Nevin Liber wrote:
On 5 October 2014 10:59, Paul Fultz II <pful...@gmail.com> wrote:

Of course, I wasn't suggesting that we wait for perfection; instead I was suggesting that we wait for more uses of and experience with concept predicates. It is foolish to ignore the experience of people who have used real concept predicates, either in C++ or in other languages that have something similar (such as D with static_if).

How long should we wait?  There is always something new we could be considering, because no one else is standing still.

That could be the case if there weren't any other solution. Also, a simpler, more orthogonal design should come first; Huffman coding could come afterwards, when there is more experience using the features.
 
 
There was a suggestion that Nevin Liber's work on ranges would replace
pairs of iterators  

I believe you mean Eric Niebler.

No, I'm pretty sure he means me. :-)  While Eric is doing great work on ranges, earlier I brought up some of the same observations that you did with the terse syntax.

OK, sorry about that.
 

However, my observations are not objections.  I think we should ship the TS and see where this goes, even if I personally might have made different choices.  None of this is even close to being a show stopper, at least for me.

Well, I don't think the terse syntax is a showstopper either. I think the lack of specialization is a show stopper.
 

Consistency is really hard to argue in this case.  The syntax:

void DoIt(InputIterator i, InputIterator j)

is both consistent as a placeholder for a type and inconsistent with auto.  The authors of concepts have good reasons for picking the former.  Will some people get confused?  Yes.  If we pick the latter, will some people get confused?  Yes.  We cannot please everyone.  I don't see how waiting indefinitely on this makes any progress in making a better (as opposed to a different) choice.

Yes, when there is a better solution. Currently, concepts lite is not a better solution than the library solution. With the library solution I have specialization, higher-order predicates, interoperability with other libraries, and conditional overloading. Features I use on a regular basis.

That's you.  In my experience, all but the most dedicated of experts finds both SFINAE and template error messages painful.

If they find the error messages from `enable_if` painful, then they will find the error messages from concepts lite painful as well.
 
 
Plus, it will either need to be deprecated like auto_ptr or it will continue to hinder the development of newer features based on concepts in the future.

Hinder the development?  That's a bold statement.  How is concepts as specified boxing us into a corner? 

By having implicit relationships, which prevent specializations, and which can prevent conversions between relationships if the same syntax were used for type erasure.
 

Paul Fultz II

unread,
Oct 6, 2014, 12:46:03 PM10/6/14
to conc...@isocpp.org, botond...@yahoo.ca

I use the Tick library:

https://github.com/pfultz2/Tick

But before that I used a bunch of ugly macros to define concept predicates.
 

Regards,
Botond

Andrew Sutton

unread,
Oct 6, 2014, 1:44:11 PM10/6/14
to conc...@isocpp.org
>> Could you please expand on each of those points? What you mean by them,
>> how you do it with library, and where concept-lite fails to allow them?
>> For instance, I fail to see where concept-lite hinder interoperability
>> with other libraries.
>
>
> Sure.

I'm not even sure where to start with this. But I'll try...


> Because concept predicates are traditionally defined as integral constant
> classes, they always allow specialization. So, if I define a concept
> predicate to check if something is drawable:
>
> TICK_TRAIT(is_drawable)
> {
> template<class T>
> auto requires(T&& x) -> decltype(
> x.draw()
> );
> };

Yes, I know. I wrote a library in 2009 that did much the same thing.
It was based on ideas in the Boost.ConceptCheck library, which had
similar aims (it was published in '03, IIRC). The library was the
basis for a lot of experimentation that went into the concept
definitions in n3351.

Concepts Lite has explicit language support for stating requirements
like this as an expression:

requires (T&& x) { x.draw(); }

It also includes the ability to write checks for the result types of
those operations, names of associated types, and some other nice
features.
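
For instance (a rough sketch; the Rectangle concept and the associated type name are made up for illustration):

    template<typename T>
    concept bool Drawable =
        requires (T&& x) {
            x.draw();                        // the expression must be valid
            { x.bounds() } -> Rectangle;     // and its result must satisfy Rectangle
            typename T::canvas_type;         // an associated type must exist
        };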

> Since this is just a class, I can specialize it if this predicate doesn't
> work like this:
>
> template<>
> struct is_drawable<foo>
> : std::false_type
> {};

Concepts Lite does not require these kinds of specializations since
failure is determined by lookup.

> Concept predicates rely on "duck" interface, that is objects that implicity
> quack like duck, but when you run across a quacking goose, you will need
> specizalizations to explicitly say that it isn't a duck. This feature is
> **extremely** important because dealing with a quacking goose without
> specialization requires huge refactorings(perhaps in a third-party library,
> which may not always be possible). Having multiple semantics for functions
> is not bad design, that is why we have namespaces. However, with the current
> design of concepts lite that have implicit relationships(and I mean
> relationships and not interfaces) leads to disallowing specializations. This
> point also is more important than the other points I metioned since the
> lanuage can be extended to support the other points, but the current design
> of concepts lite cannot be extend to support specializations as easily since
> it requires a core redesign.

I think that this argument is being dramatically oversold. If it were
as important as you claim, then all of our generic libraries would
have been using this technique for the past 20 years to remove the
likelihood of occurrence. And, in the past 8 years of looking at many
large generic libraries (around 2-3 MLoC), I have found 0 instances of
implementors using this technique to ensure correct instantiation.

I also wrote a paper in 2010 about my concept emulation library. I
specifically discussed this issue there too.


> - Higher-order predicates
>
> For example, say I want to find all the drawable type in a tuple, I could
> write something like this:
>
> auto drawables = filter<is_drawable<_>>(t);
>
> This can't be done with the current proposal without first wrapping it in a
> class first.

True. You cannot do this without providing a class that checks
the constraint. But it's not hard:

template<typename T>
struct is_drawable : false_type { };

template<Drawable T> // Drawable is a concept
  struct is_drawable<T> : true_type { };   // partial specialization, selected when T satisfies Drawable

This is hardly a show stopper.
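
And with that wrapper in place, the earlier higher-order use should go through unchanged (reusing the hypothetical filter and placeholder from the example above):

    auto drawables = filter<is_drawable<_>>(t);   // is_drawable is now an ordinary trait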


> - Interoperability
>
> This goes along with the higher order part as well, but additionally,
> concept predicate classes can't be used for the terse notation:
>
> void draw(is_drawable& x);

I don't know what this means.


> - Conditional overloading
>
> As another example, say we want to print types in C++. We want it to work
> recursively over ranges as well as fusion sequences. Using a library
> solution it would look like this:
>

> Now granted native syntax for this would be ideal, something like this(its
> just a rough sketch to get an idea):
>
> void print(auto, const std::string& s)
> {
> std::cout << s << std::endl;
> }
>
> template<class T>
> void print(auto self, const T& x) if
> (boost::fusion::traits::is_sequence<T>())
> {
> boost::fusion::for_each(x, [&](auto elem)
> {
> print(elem);
> });
> }
> else if (is_range<T>())
> {
> for(const auto& elem:x) print(elem);
> }
> else
> {
> std::cout << x << std::endl;
> }
>
> Now, conditonal overloading is useful here because the predicates are
> unrelated but can overlap. So, say an `std::array` is both a fusion sequence
> as well as a range. However, even with all the macros stuff the library
> solution is much simpler than the concepts lite solution.

You can do this with Concepts Lite by overloading the algorithms based on
their constraints. The compiler chooses the most specialized template,
which includes ordering them by their constraints.

void print(const std::string& s) {
std::cout << s << std::endl;
}

void print(const fusion::Sequence& seq) {
// print fusion stuff.
}

void print(const Range& range) {
// print range stuff.
}

void print(const auto& x) {
std::cout << x << std::endl;
}

That didn't seem particularly difficult. And feel free to define the
concept fusion::Sequence like this:

template<typename T>
concept bool Sequence = fusion::traits::is_sequence<T>::value;



>> - Especially when considering overloading on concepts
>
>
> Yes, conditional overloading can help with that but its not the same.

In what way is it not?


>> - The concept specification is part of the interface, not of the
>> implementation
>
>
> That of course is the same for the library solution.

Not really. The library solution embeds the constraint in some place
other than where it is obvious: the return type, a default argument of
an unused parameter, the default argument of a template parameter.

I feel like I've just reiterated most of the points made in the
concepts proposals (not the TS wording). Have you read the proposals?

Andrew

Tony V E

unread,
Oct 6, 2014, 2:34:13 PM10/6/14
to conc...@isocpp.org
The difference between the above 2 forms of print() is that one has an obvious/explicit ordering of the overloads, whereas the other is not as obvious nor as easily 'tweaked' per function (instead you tweak the Concept definitions - i.e., a Container is defined to be Not an Iterator, so nothing can be both).

For better or worse.

I'm currently writing a bunch of concept-like code, using pre-C++11, and

- it "can" be done, for certain definitions of 'can', but I find it a huge pain
- I'd really like something better, I'm hoping that is ConceptsLite.

Tony

Tony V E

unread,
Oct 6, 2014, 2:42:21 PM10/6/14
to conc...@isocpp.org
On Sat, Oct 4, 2014 at 11:56 AM, Gabriel Dos Reis <g...@axiomatics.org> wrote:
Tony V E <tvan...@gmail.com> writes:

[...]

| One thing that I'd like to be kept in mind is this:
|
| void foo()
| {
|     ....
|     Iterator it = c.begin();
|     ....
|     ....
|     Iterator it2 = c2.begin();
|     ....
| }
|
| (ie "Iterator" is not coming from the function signature;  it is a
| concept used here to mean "like auto, but constrained to be at least
| an Iterator".  The idea being that "auto everywhere" throws away too
| much type info (in my opinion) - I don't care what type 'it' is, but I
| do need it to be an Iterator, else the code following it might still
| compile, but not work right - same problem we have today with
| templates sans concepts)

Please, remember that the short-notation is just that: a shorthand for
a specific configuration that shows up frequently.  It isn't meant to
cover every situation.  For that, indeed, we already have the general
notation.  Let's not lose sight of that in the heat of the arguments.


Hopefully I'm not coming across as heated.  I'd just wanted to take a moment to consider some potential future implications. I think they might even argue towards the "same type" side, ie agreeing with the Concepts Lite current proposal.
 
Covering function parameter types adequately is, from my perspective,
far more important than covering declarations of local variables,
because local variables are cases that are -- for the vast majority of
cases -- well covered today (and will be covered tomorrow) by an
existing language feature.

I find the local variable case lacking.  I think there is a hole between "auto" and "exactly this type".  What I really want to say is "a type that requires...."
But I don't want that to get in the way of the current proposal.  Just to have a quick check that there are no big conflicts that would prevent getting it later.
 
  The current choice for function parameter
types is based on careful analysis of real-word algorithms (e.g. the
standard library algorithms) and how to simplify and support their
authoring and consumption.  I would love for people who feel

passionately differently to conduct a similar detailed analysis of
real-world algorithms as comprehensively as the Palo Alto TR did so that
we can have material for comprehensive technical discussion.


Most of the current STL is built from Iterator,Iterator, even though the second Iterator is typically not iterated, and need not be an Iterator.
But I don't think that invalidates the work of the Palo Alto TR.

Tony

Tony V E

unread,
Oct 6, 2014, 3:07:23 PM10/6/14
to conc...@isocpp.org


On Sun, Oct 5, 2014 at 9:50 AM, Bjarne Stroustrup <b...@cs.tamu.edu> wrote:

The most important aspect of the terse notation is that it allows us to think about generic
programming in the same way as we think of "ordinary" programming. In particular, the terse
syntax serves very well in the (simple) cases where we can think of a concepts as a "type of
a type."


I am fascinated that we are going down the road started by Gottlob Frege (not his coincidentally titled "Concept Notation" http://en.wikipedia.org/wiki/Begriffsschrift, but more his "Foundations of Arithmetic" http://en.wikipedia.org/wiki/The_Foundations_of_Arithmetic) where types and test-for-attributes were argued to be interchangeable (The concept "red" and the set of all things with the property "is red" are isomorphic).  Which was later found problematic due to Russell's Paradox[*].

At the same time, we are going down Russell's path (and answer to Frege) of "system of types" (layers of types of types of types...) from Principia Mathematica (http://en.wikipedia.org/wiki/Principia_Mathematica), in that a Concept is not a type - it is one level higher (I can check that int is a Number but can't check that Number is a Number).  (Can I check for properties of a Concept, using meta-Concepts? In a future version maybe?)

I say "fascinating" in that it all scares me a bit - because both Russell and Frege failed (and we ended with Godel's incompleteness theorem), yet from all I can tell, we are (so far) walking a fine line that avoids these problems.  Should we be learning from the mathematicians, or should they be learning from us?

[*] As hard as I try, I have yet to reproduce Russell's paradox in Concepts lite. Basically, we avoid Russell's paradox because we can't pass Concepts to Concepts, and we avoid many other recursive-like problems because either the compiler says "I don't know that symbol yet (or it is an incomplete type) or the compiler recurses and crashes. :-)

Tony

None of the above is an opinion against the Concepts Lite proposal.  More of a word of caution and/or a request for someone with more time and/or brains than I to allay my slight trepidation.  (I should probably just rest easy knowing that Alex knows all of math history and all about Concepts, and is much smarter than me.)

I also bring this up because it is, I think, at the heart of some differences of opinion - ie whether you see a Concept as a pass-fail test or 'set of types', or both.  Great power that it can be both, as long as it doesn't get muddled in the middle.

Andrew Sutton

unread,
Oct 6, 2014, 3:26:50 PM10/6/14
to conc...@isocpp.org
> [*] As hard as I try, I have yet to reproduce Russell's paradox in Concepts
> lite. Basically, we avoid Russell's paradox because we can't pass Concepts
> to Concepts, and we avoid many other recursive-like problems because either
> the compiler says "I don't know that symbol yet (or it is an incomplete
> type) or the compiler recurses and crashes. :-)

It shouldn't be possible. The constraint language is a very simple
propositional logic. It's not Turing complete either :)

Concepts are not allowed to be recursive. I'm just missing a check in
the implementation, which is why it currently slips through. I suspect that rule could be
weakened, but it would have to be weakened carefully (e.g., recursive
checks become atomic?).

Andrew

Botond Ballo

unread,
Oct 6, 2014, 6:52:00 PM10/6/14
to conc...@isocpp.org
> On Monday, October 6, 2014 1:43 PM, Andrew Sutton <andrew....@gmail.com> wrote:

>> Since this is just a class, I can specialize it if this predicate
> doesn't
>> work like this:
>>
>> template<>
>> struct is_drawable<foo>
>> : std::false_type
>> {};
>
> Concepts Lite does not require these kinds of specializations since
> failure is determined by lookup.


I think what Paul was going for is a case where 'foo' does have
a method named draw(), but it has the wrong semantics for our
concept, so we want it to fail the concept check in spite of
passing the 'has a draw() method' check in the concept definition.

Regards,
Botond

Botond Ballo

unread,
Oct 6, 2014, 7:01:03 PM10/6/14
to conc...@isocpp.org
The problem is, neither Sequence nor Range subsume the other, so
to disambiguate, you need to do something like:

template <Sequence S> requires not Range<S>

// oops, can't use the shorthand anymore unless I define
// a SequenceNotRange concept

void print(const S& seq) {
// print fusion stuff
}

void print(const Range& r) {
// print range stuff

}

(assuming you want to use the Range implementation for types
which are both). As the set of mutually non-subsuming concepts
grows in size, this gets uglier and uglier to write compared to
a hypothetical:

if (T is Range) {
// range stuff
} else if (T is Sequence) {
// sequence stuff
} else {
// other stuff
}

(I'm not saying this is a significant objection to Concepts Lite.
I'm just helping to explain what I understand to be Paul's point.)


Regards,
Botond

Andrew Sutton

unread,
Oct 6, 2014, 7:20:54 PM10/6/14
to conc...@isocpp.org
> I think what Paul was going for is a case where 'foo' does have
> a method named draw(), but it has the wrong semantics for our
> concept, so we want it to fail the concept check in spite of
> passing the 'has a draw() method' check in the concept definition.

I'm fully aware of that. I mentioned this in the paper that I wrote 4
years ago. And let me emphasize that in the intervening years, I've
never seen anybody use this technique to avoid that potential
conflict.

Andrew

Andrew Sutton

unread,
Oct 6, 2014, 7:34:44 PM10/6/14
to conc...@isocpp.org
> The problem is, neither Sequence nor Range subsume the other, so
> to disambiguate, you need to do something like:
>
> template <Sequence S> requires not Range<S>
>
> // oops, can't use the shorthand anymore unless I define
> // a SequenceNotRange concept

It's fine if they aren't related. A subsumption requirement is not
required for overload resolution.

I have trouble believing that a Range (along the lines of what Eric
Niebler is working on) and a compile-time sequence of types will
satisfy the same sets of requirements. In other words, they'll be
disjoint and overload resolution will select among candidates whose
constraints are satisfied.

And if they do share the same syntax, then you can simplify further,
assuming you plan to use that shared syntax.

template<typename T>
requires fusion::Sequence<T> or Range<T>
void print(const T& t) {
// do the right thing
}

And it's not a serious tragedy if you can't use terse notation to
properly express constraints. I'm very happy with the declaration
above -- assuming that it's valid, of course.

Andrew

Botond Ballo

unread,
Oct 6, 2014, 7:38:16 PM10/6/14
to conc...@isocpp.org
> I have trouble believing that a Range (along the lines of what Eric

> Niebler is working on) and a compile-time sequence of types will
> satisfy the same sets of requirements. In other words, they'll be
> disjoint and overload resolution will select among candidates whose
> constraints are satisfied.


Paul already mentioned an example which is both a Fusion sequence
and a range: std::array.

Regards,
Botond

Andrew Sutton

unread,
Oct 6, 2014, 7:48:29 PM10/6/14
to conc...@isocpp.org
>> Niebler is working on) and a compile-time sequence of types will
>> satisfy the same sets of requirements. In other words, they'll be
>> disjoint and overload resolution will select among candidates whose
>> constraints are satisfied.
>
>
> Paul already mentioned an example which is both a Fusion sequence
> and a range: std::array.

But that's a hypothetical argument, and moreover it appears to be
wrong. Minimally, it appears that fusion iterators are not
incrementable. Range iterators are. So the concepts are syntactically
differentiable and so my original example should work as written.

Andrew

Botond Ballo

unread,
Oct 6, 2014, 8:02:49 PM10/6/14
to conc...@isocpp.org
The fusion iterators for an std::array would be different objects
(and of different types) than the range iterators. The range iterators
would be incrementable, and the fusion iterators wouldn't. Both
concept checks would pass.

Regards,
Botond

Andrew Sutton

unread,
Oct 6, 2014, 8:33:16 PM10/6/14
to conc...@isocpp.org
> The fusion iterators for an std::array would be different objects
> (and of different types) than the range iterators. The range iterators
> would be incrementable, and the fusion iterators wouldn't. Both
> concept checks would pass.

Ah, you're right. Fusion adapts array as a model, so it should pass both.

Well, in that case, caveat implementor. When you overload a function
in a way that multiple candidates are viable for a specific set of
arguments, you're going to get ambiguous lookups. It's an old problem.

There are easy enough workarounds. For example, you could overload the
ambiguous cases (there can't be that many).

template<typename T, int N>
void print(const array<T, N>& a);

That's more specialized than any of the previously mentioned
overloads. It gets picked for arrays.

Or even better? Catch them all.

template<typename T>
requires Range<T> and fusion::Sequence<T>
void print(const T& t) {
// pick an iteration method.
}

This is also more constrained than any of the other overloads, and it
happens to match all of the examples that satisfy both Range and
fusion::Sequence.

Or make a call to some adaptor at the call site that guarantees only
one constraint is satisfied.

array<int, 3> a { ... };
print(as_range(a)); // Constructs a Range-only type

There are lots of interesting solutions to interesting problems. Some
better than others. It might be worth exploring the limits of the
proposed features before trying to argue about weaknesses.

Andrew

Paul Fultz II

unread,
Oct 6, 2014, 11:14:14 PM10/6/14
to conc...@isocpp.org

Except these can't be used directly in an integral constant class, such as:

template<class T>
struct is_drawable
: bool_<requires (T&& x) { x.draw(); } >
{};
 

Really? I use this technique a lot. Furthermore, there are several generic libraries that use specializations as well, such as Boost.Geometry, Boost.Hana, etc. Plus, more will hit the fan as generic programming and concept-based overloading become more widespread.
 


> - Higher-order predicates
>
> For example, say I want to find all the drawable type in a tuple, I could
> write something like this:
>
>     auto drawables = filter<is_drawable<_>>(t);
>
> This can't be done with the current proposal without first wrapping it in a
> class first.

True. You cannot do this without making providing a class that checks
the constraint. But it's not hard;

template<typename T>
  struct is_drawable : false_type { };

template<Drawable T> // Drawable is a concept
  struct is_drawable<T> : true_type { };

This is hardly a show stopper.

Yes, and now you lose concept-based overloading. And like I said, it may require refactoring to accommodate this, or even worse, it could be in a third-party library which can't be changed. Perhaps the original authors could be convinced to make the concept predicate an integral constant instead. However, from a library perspective, at what point should we decide whether our predicates should be integral constants or `concept bool`s? Why is it that we now have two very different ways of defining concept predicates? How should one choose between the two? It seems that defining the predicates as integral constants is more future-proof for a library, so let's hope that's what libraries choose. And if that's the case, why do we have `concept bool` at all?

Also, one shouldn't have to trade concept-based overloading for specialization. The problem is that the current design makes the relationships implicit. If the relationships were explicit, then the compiler could always enforce the relationship in spite of specializations. Perhaps something like this:

    template<class T>
    concept bool ForwardIterator = requires(T&& x)
    {
        // Requirements for forward iterator
    };

    template<class T>
    concept bool BidirectionalIterator : ForwardIterator<T> = requires(T&& x)
    {
        // Requirements for bidirectional iterator
    };

    template<class T>
    concept bool RandomAccessIterator : BidirectionalIterator<T> = requires(T&& x)
    {
        // Requirements for random access iterator
    };
Then if the `RandomAccessIterator` needed to be specialized:

    template<class T>
    concept bool RandomAccessIterator<boost::filter_iterator<T>> = false;

Now the specializations only apply to `RandomAccessIterator`, but do not affect the relationships.
 


> - Interoperability
>
> This goes along with the higher order part as well, but additionally,
> concept predicate classes can't be used for the terse notation:
>
>     void draw(is_drawable& x);

I don't know what this means.

An integral constant class cannot be used as a parameter for the terse notation, unless I am mistaken. If it can be used, then ignore this point.
 

Your solution doesn't work for two reasons. First, it is ambiguous since the compiler can't decide whether `std::array` is more specialized for a `Range` or a `FusionSequence`.

Secondly, even if the compiler could magically decide which is more specialized, it would still fail because of name lookup. Each overload has to have been declared before the point where it is called. Not only is this difficult for a novice to understand, it is difficult for most experts. This is where conditional overloading (or even `static if`) is a much simpler and more straightforward solution.
 


>> - Especially when considering overloading on concepts
>
>
> Yes, conditional overloading can help with that but its not the same.

In what way is it not?

I am referring to the library solution, which works well for free functions, but requires a little more work for member functions.
 


>> - The concept specification is part of the interface, not of the
>> implementation
>
>
> That of course is the same for the library solution.

Not really. The library solution embeds the constraint in some place
other than where it is obvious: the return type, a default argument of
an unused parameter, the default argument of a template parameter.

It is obvious (maybe not pretty) because they are using `enable_if`, regardless of where they put the constraint. Furthermore, the compiler knows where the constraint is, and will point the user to it when it fails.
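For reference, the three placements in question look roughly like this (a sketch using the hypothetical `is_drawable` trait and `<type_traits>`; only one of them would be used at a time):

    // 1. In the return type:
    template<class T>
    typename std::enable_if<is_drawable<T>::value>::type draw(T& x);

    // 2. In a defaulted, unused function parameter:
    template<class T>
    void draw(T& x, typename std::enable_if<is_drawable<T>::value>::type* = nullptr);

    // 3. In a defaulted template parameter:
    template<class T, typename std::enable_if<is_drawable<T>::value, int>::type = 0>
    void draw(T& x);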
 

I feel like I've just reiterated most of the points made in the
concepts proposals (not the TS wording). Have you read the proposals?

Yes, although I haven't read the latest. I do recall you discussing this in the proposal.
 

Andrew

Paul Fultz II

unread,
Oct 7, 2014, 9:23:08 AM10/7/14
to conc...@isocpp.org

That won't work recursively. It won't work for `std::vector<std::array<int, 3>>` or `std::tuple<std::array<int, 3>>`.
 

Andrew Sutton

unread,
Oct 7, 2014, 9:31:49 AM10/7/14
to conc...@isocpp.org
> Except these can't be used directly in a integral constant class, such as:
>
> template<class T>
> struct is_drawable
> : bool_<requires (T&& x) { x.draw(); } >
> {};

No, they can't. A constraint is not guaranteed to return true or
false: a substitution failure in a non-SFINAE context is still an
error.

You have to do exactly the same with constraints that you do with
type traits: wrap the check to make it SFINAE friendly.
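For comparison, the pre-concepts version of that wrapping is the usual
detection-style trait (a sketch; the trait name is made up):

#include <type_traits>
#include <utility>

// Primary template: not drawable unless proven otherwise.
template<class T, class = void>
struct is_drawable_trait : std::false_type { };

// Chosen only when the expression x.draw() is well-formed.
template<class T>
struct is_drawable_trait<T, decltype(void(std::declval<T&>().draw()))>
  : std::true_type { };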


> Really? I use this technique a lot. Futhermore, there are several generic
> libraries that use specializations as well, such as Boost.Geometry,
> Boost.Hana, etc. Plus, more will hit the fan as generic programming and
> concept-based overloading become more widespread.

Really. A more conventional approach is to associate a type with a tag
class and dispatch on that. You could, for example, ask for a function
to return some kind of range_tag in order to find definite evidence
that a type satisfies a concept. You actually need to do this when two
concepts have the same syntax but different semantics (e.g., input
iterator vs. forward iterator).
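The iterator case is the canonical example. Roughly (a sketch using the
standard iterator tags):

#include <iterator>

template<class I>
void advance_impl(I& it, int n, std::input_iterator_tag) {
  while (n-- > 0) ++it; // one step at a time
}

template<class I>
void advance_impl(I& it, int n, std::random_access_iterator_tag) {
  it += n; // constant time
}

template<class I>
void advance(I& it, int n) {
  // The nested tag is the definite evidence of which concept I models.
  advance_impl(it, n, typename std::iterator_traits<I>::iterator_category{});
}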

A quick search through Boost.Geometry didn't turn up any instances of
this. Can you point to them? I don't know anything about Hana.

Concept-based overloading has been around for a long time. See
"Concept-controlled polymorphism" by Jaakko Järvi et al., published in
2003. And the original STL documentation before that.


>> True. You cannot do this without making providing a class that checks
>> the constraint. But it's not hard;
>>
>> template<typename T>
>> struct is_drawable : false_type { };
>>
>> template<Drawable T> // Drawable is a concept
>> struct is_drawable : true_type { };
>>
>> This is hardly a show stopper.
>
>
> Yes, and now you lose concept-based overloading. And like I said may require
> refactoring to accomodate this or even worse this could be in a third-party
> library which can't be changed. Perhaps the original authors could be
> convinced to make the concept predicate an integral constant instead.
> However, from a library perspective at what point should we decide if our
> predicates should be integral constants or `concept bool`s? Why is that we
> have two very different ways now of defining concept predicates? How should
> one choose between the two. It seems that defining the predicates as
> integral constant is more future proof for a library, so lets hope thats
> what libraries chooses. And if thats case why do we have `concept bool` at
> all?

I'm not sure I understand your claim about losing concept-based
overloading. There are a lot of "may"s and "could"s that make it hard
to understand whatever point it is that you're trying to make.

But yes, you will still be able to write type traits when you have
concepts. Sometimes, they may even be useful -- for example, in
associating tag classes with types. I don't know where that boundary
lies, and I feel pretty confident saying that nobody else does either.
Only experience and experimentation with the language feature will
help you find that line.

Standing with your fingers in your ears yelling, "but concepts don't
do X", when you've (very clearly) never tried using them, is a poor
substitute for experimentation.

We have "concept bool" for two reasons. First, you can't lookup
concept names as type-specifiers without a keyword to differentiate
regular function and variable templates from concepts. Second, the
language handles constraints very differently than normal expressions
in order to support overloading.

> Also, one shouldn't have to trade in concept-overloading for specialization.
> The problem is the current design makes the relationships implicit. If the
> relationships were explicit then the compiler can always enforce the
> realtionship in spite of specializations. Perhaps something like this:

This was tried in C++0x concepts, and it turned out to be a fairly
contentious decision. You can see a reference to that in Bjarne's
article on "The C++0x "Remove Concepts" Decision" in Dr. Dobbs.

I'm sure that others on this list can give you more insight into that debate.


>> > This goes along with the higher order part as well, but additionally,
>> > concept predicate classes can't be used for the terse notation:
>> >
>> > void draw(is_drawable& x);
>>
>> I don't know what this means.
>
>
> An integral constant class cannot be used as parameter for the terse
> notation. Unless I am mistaken. It it can be used, then ignore this point.


It cannot. An early design actually allowed it, but that interacts
badly with the overloading mechanism, which is based on the comparison of
constraints. Because class templates can be (partially) specialized, it
provides an opportunity for a programmer to subvert the underlying
constraint logic for overloading. So it doesn't happen.


> Your solution doesn't work for two reasons. First, it is ambiguous since the
> compiler can't decide whether `std::array` is more specialized for a `Range`
> or a `FusionSequence`.

Determining which is more specialized compares only templates and
constraints. It does not include the arguments. And as I said, the
compiler doesn't always need to determine which is more specialized.
Range and fusion::Sequence are great examples of this.

In most cases, a type will satisfy one and not the other. The
candidate with the unsatisfied constraints is not viable.

In some cases, a type can satisfy both (say, std::array). In those
ambiguous cases, you will need to disambiguate using some method:
overload specifically on array, overload on types that are both Ranges
and fusion::Sequences, or explicitly convert or adapt the argument at
the call site to a type that satisfies only one.

I mentioned this in a previous email with Botond.


> Secondly, even if the compiler could magically decide which is more
> specialized, it still would fail because of name lookup. Each name has to be
> seen before, for each function that is called. Not only is this difficult
> for a novice to understand, this is difficult for most experts. This is
> where conditional overloading(or even `static if`) is much simpler and more
> straightforward solution.

That you need to declare a function before you use it is not exactly
new or surprising. That's how the language works. I don't know of any
experts that are confused by this, and our CS1 students seem to have
little trouble grasping the concept either.

What experts and novices are you polling to reach this conclusion?


>> I feel like I've just reiterated most of the points made in the
>> concepts proposals (not the TS wording). Have you read the proposals?
>
>
> Yes, although I haven't read the latest. I do recall you discussing this in
> the proposal.

Perhaps you should re-read them. Or better yet, get the compiler and
actually try using the language features. Having some experience with
them will lend some credibility to your arguments.

Andrew

Andrew Sutton

unread,
Oct 7, 2014, 9:32:36 AM10/7/14
to conc...@isocpp.org
>>
>> array<int, 3> a { ... };
>> print(as_range(a)); // Constructs a Range-only type
>
>
> That won't work recursively. It won't work for `std::vector<std::array<int,
> 3>>` or `std::tuple<std::array<int, 3>>`.

But this will.

template<typename T>
  requires Range<T> and fusion::Sequence<T>
void print(const T& t) {
  // pick an iteration method.
}
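Assuming the Range-only and fusion::Sequence-only overloads from earlier
are still in scope, the call is then unambiguous:

std::array<int, 3> a{ {1, 2, 3} };
print(a); // picks this overload: its constraint subsumes each of the
          // single-concept constraints, so it is the most specialized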

Andrew

Paul Fultz II

unread,
Oct 9, 2014, 5:13:12 PM10/9/14
to conc...@isocpp.org


On Tuesday, October 7, 2014 9:31:49 AM UTC-4, Andrew Sutton wrote:
> Except these can't be used directly in a integral constant class, such as:
>
> template<class T>
> struct is_drawable
> : bool_<requires (T&& x) { x.draw(); } >
> {};

No, they can't. A constraint is not guaranteed to return true or
false: a substitution failure in a non-SFINAE context is still an
error.

You have to do the exactly the same with constraints that you do with
type traits: wrap the check to make it SFINAE friendly.

Is it possible to also do this:


template<class T>
struct is_drawable
: std::false_type
{};

template<class T> requires requires(T&& x) { x.draw(); }
struct is_drawable<T>
: std::true_type
{};
 


> Really? I use this technique a lot. Futhermore, there are several generic
> libraries that use specializations as well, such as Boost.Geometry,
> Boost.Hana, etc. Plus, more will hit the fan as generic programming and
> concept-based overloading become more widespread.

Really. A more conventional approach is to associate a type with a tag
class and dispatch on that. You could for example, ask for a function
to return some kind of range_tag in order to find definite evidence
that a type satisfies a concept. You actually need to do this when two
concepts have the same syntax but different semantics (e.g., input
iterator vs. forward iterator).

A quick search through Boost.Geometry didn't turn up any instances of
this. Can you point to them? I don't know anything about Hana.

In Boost.Geometry, you specialize each type for a tag, and each tag then specializes the traits needed for the concept. Most libraries don't have an implicit trait by default. Boost.Range does, but it unfortunately uses Boost.ConceptCheck to check its parameters, which leads to other problems.

The thing is, specialization isn't needed for something like Boost.ConceptCheck, because there it isn't used to opt types in to a concept, but rather to opt them out when there is a false positive. When there is a false positive with Boost.ConceptCheck alone, it isn't a big deal; you just avoid calling the function. However, when combined with overloading it becomes a bigger problem, because it will prevent us from calling the correct overload. As an example, say we have a function foo that is overloaded for concepts A and B:

void foo(A x);
void foo(B x);

Now if we try to call it with a class C, we want the concept A overload, but perhaps class C inadvertently appears to implement concept B. We could get an ambiguous overload, or it could appear that concept B is more refined, so the wrong function would be called. Specialization allows us to clarify which interfaces a type implements; this is important since the interfaces are implicitly opted in. Specialization gives us a mechanism to explicitly opt out.
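As a sketch of that opt-out mechanism with traits (the names here are made up):

#include <type_traits>

struct C { /* looks like it implements B, but does not really */ };

// Suppose the library's syntactic check for concept B gives a false positive for C.
template<class T>
struct models_B : std::true_type { }; // placeholder for the real check

// Explicit opt-out: C does not model B, despite appearances.
template<>
struct models_B<C> : std::false_type { };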

 

Concept-based overload has been around for a long time. See
"Concept-controlled polymorphism" by Jaako Jarvi et al, published in
2003. And the original STL documentation before that.


>> True. You cannot do this without making providing a class that checks
>> the constraint. But it's not hard;
>>
>> template<typename T>
>>   struct is_drawable : false_type { };
>>
>> template<Drawable T> // Drawable is a concept
>>   struct is_drawable : true_type { };
>>
>> This is hardly a show stopper.
>
>
> Yes, and now you lose concept-based overloading. And like I said may require
> refactoring to accomodate this or even worse this could be in a third-party
> library which can't be changed. Perhaps the original authors could be
> convinced to make the concept predicate an integral constant instead.
> However, from a library perspective at what point should we decide if our
> predicates should be integral constants or `concept bool`s? Why is that we
> have two very different ways now of defining concept predicates? How should
> one choose between the two. It seems that defining the predicates as
> integral constant is more future proof for a library, so lets hope thats
> what libraries chooses. And if thats case why do we have `concept bool` at
> all?

I'm not sure I understand your claim about losing concept-based
overloading. There are a lot of "may"s and "could"s that make it hard
to understand whatever point it is that you're trying to make.

Because if these were, for example, `ForwardIterator`, `BidirectionalIterator`, and `RandomAccessIterator`, changing them to integral constant classes as you suggested would not let me do overloading:

template<class T>
struct is_forward_iterator
: std::false_type
{};

template<ForwardIterator T>
struct is_forward_iterator<T>
: std::true_type
{};

template<class T>
struct is_bidirectional_iterator
: std::false_type
{};

template<BidirectionalIterator T>
struct is_bidirectional_iterator<T>
: std::true_type
{};

template<class T>
struct is_random_access_iterator
: std::false_type
{};

template<RandomAccessIterator T>
struct is_random_access_iterator<T>
: std::true_type
{};

And now if I try to write `advance`:

template<class Iterator>
void advance(Iterator& it, int n) requires is_forward_iterator<Iterator>::value;

template<class Iterator>
void advance(Iterator& it, int n) requires is_bidirectional_iterator<Iterator>::value;

template<class Iterator>
void advance(Iterator& it, int n) requires is_random_access_iterator<Iterator>::value;

std::vector<int> v;
auto it = v.begin();
advance(it, 1); // Ambiguous overload

The compiler can no longer tell which overload is the most specialized, because the relationships between the traits are opaque to it.
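Whereas if the overloads are written against the concepts directly (roughly, under the TS; the requirements below are just placeholders), the subsumption relationship is preserved and the same call is unambiguous:

template<class T>
concept bool ForwardIterator = requires(T it) { ++it; *it; };

template<class T>
concept bool BidirectionalIterator = ForwardIterator<T> && requires(T it) { --it; };

template<class T>
concept bool RandomAccessIterator = BidirectionalIterator<T> && requires(T it, int n) { it += n; it[n]; };

template<ForwardIterator I>       void advance(I& it, int n);
template<BidirectionalIterator I> void advance(I& it, int n);
template<RandomAccessIterator I>  void advance(I& it, int n);

std::vector<int> v;
auto it = v.begin();
advance(it, 1); // OK: the RandomAccessIterator constraint subsumes the other two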

 

But yes, you will still be able to write type traits when you have
concepts. Sometimes, they may even be useful -- for example, in
associating tag classes with types. I don't know where that boundary
does, and I feel pretty confident saying that nobody else does either.
Only experience and experimentation with the language feature will
help you find that line.

Standing with your fingers in your ears yelling, "but concepts doesn't
do X", when you've (very clearly) never tried using them, is not a
poor substitute for experimentation.

I have used concepts in my code, and specialization was very useful. It's understandable that the first iteration of concepts lite may not have specializations, but the design of concepts lite should not rule out that possibility.


We have "concept bool" for two reasons. First, you can't lookup
concept names as type-specifiers without a keyword to differentiate
regular function and variable templates from concepts. Second, the
language handles constraints very differently than normal expressions
in order to support overloading.


I understand why we have `concept bool`: so the compiler can store special metadata that it can use to improve error messages and overloading. However, I don't understand why we would limit its usefulness. Furthermore, as a library writer, I would prefer writing my concept predicates as integral constants, since that will make my library more future-proof. Using `concept bool` won't allow me to do that.

 
> Also, one shouldn't have to trade in concept-overloading for specialization.
> The problem is the current design makes the relationships implicit. If the
> relationships were explicit then the compiler can always enforce the
> realtionship in spite of specializations. Perhaps something like this:

This was tried in C++0x concepts, and it turned out the be a fairly
contentious decision. You can see a reference to that in Bjarne's
article on the "The C++0x "Remove Concepts" Decision" in Dr. Dobbs.

I'm sure that others on this list can give you more insight into that debate.

That is not what I'm referring to. The contention was over explicit/implicit concept maps.

I am referring to making the relationships between concepts explicit. That is, when the compiler decides which concept predicate is the most specialized, it uses some form of subsumption, which relies on the relationships between the concepts. That is, take concept A:

template<class T>
concept bool A = requires(T&& x) { ... };

And another concept B that builds on top of concept A:

template<class T>
concept bool B = A<T> && requires(T&& x) { ... };
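For instance (a sketch), overload resolution already takes advantage of that relationship:

template<class T> requires A<T>
void f(T x); // #1

template<class T> requires B<T>
void f(T x); // #2

// For a type satisfying both, #2 is chosen: B's definition includes A<T>,
// so B<T> subsumes A<T>.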

So the compiler knows that concept B is the more specialized concept, but it learns this implicitly (i.e., it has to instantiate the concept bool). Whereas if the relationship were explicit, perhaps like this:

template<class T>
concept bool B: A<T> = requires(T&& x) { ... };

The compiler could encode the `concept bool` in a way that it knows that B is more specialized than A without having to instantiate the concept bool.

Furthermore, since it is encoded this way, it allows the compiler to enforce that the relationship always stays the same, even when specializing, so specializing B would look like this:

template<class T>
concept bool B<T> = ...;

We don't need to specify `A<T>` since it is always part of the relationship.


It is surprising to most C++ programmers I know, including me. Plus, in your example, you wrote it wrong as well.
 