Implicit constraints?


Andrzej Krzemieński

Feb 22, 2017, 7:37:53 AM
to SG8 - Concepts
Looking at C++ Extensions for Ranges (N4622), I see that it was necessary to devise a notion of "implicit constraints". In 4.1.1/6, we read:

Where a requires-expression declares an expression that is non-modifying for some constant lvalue operand, additional variants of that expression that accept a non-constant lvalue or (possibly constant) rvalue for the given operand are also required except where such an expression variant is explicitly required with differing semantics. Such implicit expression variants must meet the semantic requirements of the declared expression. The extent to which an implementation validates the syntax of these implicit expression variants is unspecified.

[ Example:
template <class T>
concept bool C() {
  return requires(T a, T b, const T c, const T d) {
    c == d;            // #1
    a = std::move(b);  // #2
    a = c;             // #3
  };
}

Expression #1 does not modify either of its operands, #2 modifies both of its operands, and #3 modifies
only its first operand a.
Expression #1 implicitly requires additional expression variants that meet the requirements for c == d
(including non-modification), as if the expressions
a == d;       a == b;             a == move(b);       a == move(d);
c == a;       c == move(a);       c == move(d);
move(a) == d; move(a) == b;       move(a) == move(b); move(a) == move(d);
move(c) == b; move(c) == move(b); move(c) == d;       move(c) == move(d);
had been declared as well.
Expression #3 implicitly requires additional expression variants that meet the requirements for a = c (including non-modification of the second operand), as if the expressions a = b and a = move(c) had been declared. Expression #3 does not implicitly require an expression variant with a non-constant rvalue second
operand, since expression #2 already specifies exactly such an expression explicitly. —end example ]

[ Example: The following type T meets the explicitly stated syntactic requirements of concept C above but
does not meet the additional implicit requirements:
struct T {
  bool operator==(const T&) const { return true; }
  bool operator==(T&) = delete;
};

T fails to meet the implicit requirements of C, so C<T>() is not satisfied. Since implementations are not
required to validate the syntax of implicit requirements, it is unspecified whether or not an implementation
diagnoses as ill-formed a program which requires C<T>(). —end example ]

It looks like Concepts Lite, as currently specified, did not provide a way to clearly and concisely express the syntactic (only syntactic) requirements of the Ranges TS, and the paper had to resort to English prose. Should we draw a conclusion from this that something is lacking in Concepts Lite?

Regards,
&rzej;

Matt Calabrese

Feb 22, 2017, 9:13:24 AM
to conc...@isocpp.org
On Feb 22, 2017 07:37, "Andrzej Krzemieński" <akrz...@gmail.com> wrote:

It looks like Concepts Lite, as currently specified, did not provide a way to clearly and concisely express the syntactic (only syntactic) requirements of the Ranges TS, and the paper had to resort to English prose. Should we draw a conclusion from this that something is lacking in Concepts Lite?

I personally would say yes, but that's controversial. Notably, even outside of the Concepts TS, most people *already* write their own code with these assumptions anyway. In other words, these are assumptions most people have had when writing generic code since the beginning, well before language-level concepts. So the Concepts TS is not making things worse; it's just not making things any better for a situation that is much more common than people realize. That said, we do know how to actually fix this, but the fix is very different from the Concepts TS.

If you're allergic to 0x concepts, you can stop reading; I'm only describing them because they directly relate to the question. 0x concepts were able to fix this (at least initially -- I believe the design diverged towards the end, but I wasn't participating in the committee at the time and don't know all of the details; I'm sure Ville will gladly clarify here ;)). The way it was originally handled in 0x concepts: calls to associated functions from within a constrained template had a similar effect to implicitly calling through a traits class in which *only* associated functions with the *precisely* specified signatures existed (the invocations implicitly went through what was called a concept_map, which was usually implicitly generated but could also be manually specified). If the user had multiple overloads of a single, unoverloaded function named in a concept, it didn't matter, because that function was effectively only called from within the *concept_map's* function, which had precise qualification for its parameter types and a precise return type (though both were loosely matched). Users in the constrained code could rely on, for instance, a non-const lvalue being a valid argument to an associated function that was specified to take a const T&, which they can't actually do today. Everything would also properly definition-check.

Similarly, associated function constraints that returned something like "bool" loosely matched functions that could return a type convertible to bool, and from within the constrained definition, the exact bool result is all that was directly accessible, so users didn't have to cast function results. There are several more subtle cases described in the thread I referenced in P0240 (the archived thread from 2012 is here: http://boost.2283326.n4.nabble.com/contract-concepts-pseudo-signatures-vs-usage-patterns-td4636900.html ). Dave Abrahams has some interesting examples of subtleties of the Concepts TS approach in general.

This all comes down to the fact that constrained function template definitions in 0x were basically different kinds of entities from unconstrained function templates, such that people could actually write their generic code in an intuitive way and have it actually be correct, without having to cast function arguments and return values (this isn't true today, nor is it true with the Concepts TS). They didn't need to specify such subtle implicit constraints due to the nature of the 0x concepts design.

Going back to Ranges -- in Concepts TS you don't strictly *need* implicit constraints if in your generic code you *always* explicitly cast your arguments to the exact expected parameter type, including cv qualification and value category, and also cast your results. Alternatively, all calls could be made through traits (i.e. manually do what 0x concepts implicitly did via concept_maps). That Ranges calls out such implicit constraints hints to me that generic library developers really do want to be able to write their code in the intuitive way, even if Concepts TS can't provide them the necessary guarantees. I believe the Ranges authors probably wouldn't describe the situation as I have, since they seem to be in favor of the Concepts TS overall as-is. That's just my personal, biased assessment.

What it comes down to is a fundamental design decision for concepts -- do you want constrained definitions to behave exactly like unconstrained ones, or do you want their meaning to depend on the constraints, so that code is easier to correctly write/read and more amenable to sensible definition checking? The Concepts TS opted for the former, while 0x concepts opted for the latter. This is one of the reasons I've voiced very strong skepticism that the Concepts TS will be able to deliver any reasonable definition checking without some fundamental changes. Without a more 0x-style interpretation of constrained definitions, what would actually be allowed and what would be checked would be much more subtle and specific than what users would expect. I don't see it as ever being practical. If you don't care about definition checking, this is less important, but IIRC, polling has suggested that people want language-level concepts to eventually provide it (don't quote me on that, as I haven't double-checked).

Andrew Sutton

Feb 22, 2017, 9:39:46 AM
to conc...@isocpp.org, Casey Carter, Eric Niebler
It looks like Concepts Lite, as currently specified, did not provide a way to clearly and concisely express the syntactic (only syntactic) requirements of the Ranges TS, and the paper had to resort to English prose. Should we draw a conclusion from this that something is lacking in Concepts Lite?

I would conclude that people writing pathological classes make my life harder. BTW, it's just plain Concepts now.

Maybe there's something we can do about this. The mechanism for checking these requirements is the usual lookup and overload resolution rules. Resolution guarantees that a single best candidate can be found for the arguments. These implicit constraints really want to evaluate all the viable candidates, not just the one selected by overload resolution.

So... as a hastily drawn up strawman:

template<typename T>
concept bool C = requires (T x) {
  try { f(x) } -> bool; // try check the overload set
};

struct S { };
bool f(S);
bool f(S&);
bool f(S&&);
bool f(const S&);
bool f(const S&&);

static_assert(C<S>);

When we check for f(x), the "try" means that all viable candidates matching f(x) must: 
- have a return convertible to bool, 
- be accessible (for member functions),
- and be non-deleted (because the candidate might be selected during instantiation).

So, since x is an lvalue, every overload is viable, all return bool, and none are deleted. The assertion holds. Change one of those declarations to, say, bool f(S&) = delete, and the assertion fails.

Some thoughts about this approach.

- This avoids generating all possible constraints based on different combinations of parameter types (that's an n! algorithm). This approach is linear in the size of the viable candidate set.
- I think the granularity is right. This should be applied to the entire requirement, and not to individual operands. Doing otherwise would necessitate a new kind of type (universal references anyone?), new deduction rules, and a change to overload resolution that allows ambiguous overloads to be "okay" in this context.
- Maybe we want this to be the default rule for checking. Needs more thought. Experimentation would be good.

As a historical footnote, C++0x concepts had this problem too. It was easy to specify requirements for what was generally expected, but to fully cover all cases, you would still need to enumerate combinations of operand types. The problem there was worse because it potentially affected runtime performance. A good description of the problem and its solution is here: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2576.pdf.


--
Andrew Sutton

Matt Calabrese

Feb 22, 2017, 11:38:13 AM
to conc...@isocpp.org, Casey Carter, Eric Niebler
On Wed, Feb 22, 2017 at 9:39 AM, Andrew Sutton <andrew....@gmail.com> wrote: 
So... as a hastily drawn up strawman:

template<typename T>
concept bool C = requires (T x) {
  try { f(x) } -> bool; // try check the overload set
};

struct S { };
bool f(S);
bool f(S&);
bool f(S&&);
bool f(const S&);
bool f(const S&&);

I know it's a strawman, but IIUC, checking all of these for every combination of N arguments of each associated function would probably not be desirable, as it can imply doing substitution for each of those combinations, which can explode pretty quickly (i.e. imagine that f is one or more function templates and the number of instantiations that might be involved). That said, I actually do agree that we would probably *want* (talking ideals) these to be checked, but only if we were to expect people to actually be able to rely on calling the functions in that manner from within constrained definitions. In other words, this assumes that we allow those individual calls to be done with varying levels of cv-qualification and value category, and also that those calls are allowed to resolve to *different* functions. My personal stance is that the original 0x semantics were the "best" option (obviously controversial), or at the very least the most sound option, and that if people wanted to take advantage of other overloads, they could specify those in the constraints for that concept. The downside is that, yes, those specific concepts would potentially be more verbose, but it keeps the simple definitions easy to grasp as well as correct.

On Wed, Feb 22, 2017 at 9:39 AM, Andrew Sutton <andrew....@gmail.com> wrote:
- Maybe we want this to be the default rule for checking. Needs more thought. Experimentation would be good.

At the very least, I think that most people would intuitively want to be able to rely on the associated functions being callable from within a constrained function using looser cv-qualification/category than is directly represented (which, by my interpretation, is why we see the notion of things like "implicit constraints" -- people want to write code in the intuitive way, as if their constrained type were an archetype without things like questionable overloads of their associated functions).

Conclusions drawn *beyond* that seem to be what ends up being more controversial, and I know that some people disagree with my personal views; I understand and can live with that. In other words, there is common ground in that I haven't heard anyone argue that a usage pattern describing a function "f" being passed a const lvalue should *not* be callable with a non-const lvalue or with an rvalue, even though *technically* the const lvalue is all that we can safely rely on. If people do have that view, it would be interesting to hear it.

While that stricter view is sound, you only tend to get there practically by being verbose and always either casting or always going through traits that have a very specific signature (in either case you get both the benefits and the drawbacks of original 0x concepts, but you have to be more manual about it, so I would be surprised if people advocate it as an ideal). Going through traits is what I do in my own generic code because I tend to be pedantic, but as I view it, these semantics would (ideally) be automatic in language-level concepts. Disciplined use of traits is "expert-only", verbose, and often not very practical, even if those are the desired semantics for the developer. I don't want to have to be pedantic, verbose, and a language expert to be sound. I'd rather the sound semantics be the default, without explicitly creating and going through traits.

On Wed, Feb 22, 2017 at 9:39 AM, Andrew Sutton <andrew....@gmail.com> wrote: 
As a historical footnote, C++0x concepts had this problem too. It was easy to specify requirements for what was generally expected, but to fully cover all cases, you would still need to enumerate combinations of operand types. The problem there was worse because it potentially affected runtime performance. A good description of the problem and its solution is here: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2576.pdf.

Right. 0x concepts didn't initially suffer from the specific problem we're talking about, since they more strictly tried to interpret the concept author's intent based on the types in the pseudo-signature (for better or for worse), always going through the forwarding function. The notion of "implicit constraints" was [effectively] implied by the resolution and was unnecessary prior to that.


Casey Carter

Feb 22, 2017, 11:40:10 AM
to SG8 - Concepts
On Wednesday, February 22, 2017 at 4:37:53 AM UTC-8, Andrzej Krzemieński wrote:
Looking at C++ Extensions for Ranges (N4622), it was necessary to devise a notion of "implicit constraints". In 4.1.1/6, we read:

Where a requires-expression declares an expression that is non-modifying for some constant lvalue operand, additional variants of that expression that accept a non-constant lvalue or (possibly constant) rvalue for the given operand are also required except where such an expression variant is explicitly required with differing semantics. Such implicit expression variants must meet the semantic requirements of the declared expression. The extent to which an implementation validates the syntax of these implicit expression variants is unspecified.

[snip]
It looks like Concepts Lite, as currently specified, did not provide a way to clearly and concisely express the syntactic (only syntactic) requirements of the Ranges TS, and the paper had to resort to English prose. Should we draw a conclusion from this that something is lacking in Concepts Lite?


The issue here isn't really the inability to express the constraints - we could easily list all sixteen variants of |{const, non-const} × {lvalue, rvalue}| ^ |{LHS, RHS}| for each such binary expression, just as in the example - it's the compile-time cost of checking all of the expression variants when evaluating the concept for satisfaction. Using StrictTotallyOrdered<T, U> as the most egregious example: sixteen variants per binary expression, times six operators, times five pairings of LHS and RHS type [(T, T), (U, U), (T, U), (U, T), (common_type_t<T, U>, common_type_t<T, U>)] means the compiler would have to perform overload resolution 480 times to verify concept satisfaction. We considered that kind of overhead unacceptable, but we did make validation of implicit expression variants unspecified to leave the door open for "debug modes" in libraries that do validate them, or for future improvements to Concepts / compiler technology that would make that validation viable.

Casey Carter

Feb 22, 2017, 12:36:55 PM
to SG8 - Concepts, Ca...@carter.net, eric.n...@gmail.com
On Wednesday, February 22, 2017 at 6:39:46 AM UTC-8, Andrew Sutton wrote:
It looks like Concepts Lite, as currently specified, did not provide a way to clearly and concisely express the syntactic (only syntactic) requirements of the Ranges TS, and the paper had to resort to English prose. Should we draw a conclusion from this that something is lacking in Concepts Lite?

I would conclude that people writing pathological classes make my life harder. BTW, it's just plain Concepts now.

Maybe there's something we can do about this. The mechanism for checking these requirements is the usual lookup and overload resolution rules. Resolution guarantees that a single best candidate can be found for the arguments. These implicit constraints really want to evaluate all the viable candidates, not just the one selected by overload resolution.

So... as a hastily drawn up strawman:

template<typename T>
concept bool C = requires (T x) {
  try { f(x) } -> bool; // try check the overload set
};

struct S { };
bool f(S);
bool f(S&);
bool f(S&&);
bool f(const S&);
bool f(const S&&);

static_assert(C<S>);

When we check for f(x), the "try" means that all viable candidates matching f(x) must: 
- have a return convertible to bool, 
- be accessible (for member functions),
- and be non-deleted (because the candidate might be selected during instantiation).

So, since x is an lvalue, every overload is viable, all return bool, and none are deleted. The assertion holds. Change one of those declarations to, say, bool f(S&) = delete, and the assertion fails.

Some thoughts about this approach.

- This avoids generating all possible constraints based on different combinations of parameter types (that's an n! algorithm). This approach is linear in the size of the viable candidate set.
- I think the granularity is right. This should be applied to the entire requirement, and not to individual operands. Doing otherwise would necessitate a new kind of type (universal references anyone?), new deduction rules, and a change to overload resolution that allows ambiguous overloads to be "okay" in this context.
- Maybe we want this to be the default rule for checking. Needs more thought. Experimentation would be good.


If we make this rule the default, then many useful notions like:

template<class T>
concept bool C1 = requires(T t, T const ct) {
  { &t } -> T*;
  { &ct } -> T const*;
};

template<class T, class U>
concept bool C2 = requires(T t, T const ct) {
  { get(t) } -> Same<U>&;
  { get(ct) } -> Same<U const>&;
};

become self-contradictory and therefore unimplementable.

Andrew Sutton

Feb 22, 2017, 12:47:16 PM
to conc...@isocpp.org, Ca...@carter.net, eric.n...@gmail.com

So... as a hastily drawn up strawman:

template<typename T>
concept bool C = requires (T x) {
  try { f(x) } -> bool; // try check the overload set
};

If we make this rule the default, then many useful notions like:

template<class T>
concept bool C1 = requires(T t, T const ct) {
  { &t } -> T*;
  { &ct } -> T const*;
};

template<class T, class U>
concept bool C2 = requires(T t, T const ct) {
  { get(t) } -> Same<U>&;
  { get(ct) } -> Same<U const>&;
};

become self-contradictory and therefore unimplementable.

Ah. Thanks!
 
--
Andrew Sutton

Andrew Sutton

Feb 22, 2017, 12:53:10 PM
to conc...@isocpp.org, Casey Carter, Eric Niebler

struct S { };
bool f(S);
bool f(S&);
bool f(S&&);
bool f(const S&);
bool f(const S&&);

I know it's a strawman, but IIUC, checking all of these for every combination of N arguments of each associated function would probably not be desirable, as it can imply doing substitution for each of those combinations, which can explode pretty quickly (i.e. imagine that f is one or more function templates and the number of instantiations that might be involved).

You don't need to check combinations (that would be a non-starter for me). This approach just builds the overload set once, filters non-viable candidates, and then checks each to determine accessibility and non-deletedness.

--
Andrew Sutton

Matt Calabrese

Feb 22, 2017, 1:05:10 PM
to conc...@isocpp.org, Eric Niebler, Casey Carter
Ah, I see what you're describing now; sorry for misinterpreting. This would be pretty different from a normal SFINAE-like check, but the result might be reasonable. It's hard for me to grasp all of the implications immediately, but it seems interesting to experiment with, even if it's not the default.

Andrew Sutton

Feb 22, 2017, 1:12:33 PM
to conc...@isocpp.org, Eric Niebler, Casey Carter
You don't need to check combinations (that would be a non-starter for me). This approach just builds the overload set once, filters non-viable candidates, and then checks each to determine accessibility and non-deletedness.

Ah, I see what you're describing now; sorry for misinterpreting. This would be pretty different from a normal SFINAE-like check, but the result might be reasonable. It's hard for me to grasp all of the implications immediately, but it seems interesting to experiment with, even if it's not the default.

It is a little different... I'm not sure what the implications are either :)

--
Andrew Sutton

Andrzej Krzemieński

Feb 23, 2017, 3:50:52 AM
to SG8 - Concepts, Ca...@carter.net, eric.n...@gmail.com

I think the above would work with "Andrew's extension":

 
template<class T>
concept bool C1 = requires(T t, T const ct) {
  try { &t } -> T*;        // address of T& or weaker converts to T*
  try { &ct } -> T const*; // address of const T& or weaker converts to T const*
};


template<class T, class U>
concept bool C2 = requires(T t, T const ct) {
  { get(t) } -> Same<U>&;
  { get(ct) } -> Same<U const>&;
};

Ok, this one would not, but is this Same<> also not a workaround for a missing language construct?

Regards,
&rzej;

Andrzej Krzemieński

Feb 23, 2017, 4:07:12 AM
to SG8 - Concepts, Ca...@carter.net, eric.n...@gmail.com


On Wednesday, February 22, 2017 at 6:36:55 PM UTC+1, Casey Carter wrote:

It seems that this inclusion of additional overloads only applies to inputs that are used for observing the object's value. It looks like what we should be annotating is not expressions but inputs:

template<class T, class U>
concept bool C2 = requires(T t, T const ct, __Treat_as_value const int i) {
  { get_at_index(t, i) } -> Same<U>&;
  { get_at_index(ct, i) } -> Same<U const>&;
};

This treats t and ct literally, but considers different cv/ref variants for i.

Or, a bit more intrusive change: treat reference arguments literally, and treat non-reference arguments with all cv/ref combinations:

template<class T, class U>
concept bool C2 = requires(T& t, T const& ct, int i) {
  { get_at_index(t, i) } -> Same<U>&;
  { get_at_index(ct, i) } -> Same<U const>&;
};

Now, t and ct are references (treated literally), and i is a value (all cv/ref variants considered).

I am not sure how it interacts with the mandatory copy elision, though.

Regards,
&rzej;

Casey Carter

Feb 23, 2017, 11:54:54 AM
to SG8 - Concepts, Ca...@carter.net, eric.n...@gmail.com
On Thursday, February 23, 2017 at 12:50:52 AM UTC-8, Andrzej Krzemieński wrote:


On Wednesday, February 22, 2017 at 6:36:55 PM UTC+1, Casey Carter wrote:
On Wednesday, February 22, 2017 at 6:39:46 AM UTC-8, Andrew Sutton wrote:

- Maybe we want this to be the default rule for checking. Needs more thought. Experimentation would be good.


If we make this rule the default, then many useful notions like:

template<class T>
concept bool C1 = requires(T t, T const ct) {
  { &t } -> T*;
  { &ct } -> T const*;
};

I think the above would work with "Andrew's extension":

 
template<class T>
concept bool C1 = requires(T t, T const ct) {
  try { &t } -> T*;        // address of T& or weaker converts to T*
  try { &ct } -> T const*; // address of const T& or weaker converts to T const*
};


Given:

T* operator&(T&);
const T* operator&(const T&);

Both overloads are viable for an argument expression that is an lvalue T, but one of them (the second) does not return a type convertible to T*. try { &ct } -> const T* would be satisfied, but try { &t } -> T* would not be.

 

template<class T, class U>
concept bool C2 = requires(T t, T const ct) {
  { get(t) } -> Same<U>&;
  { get(ct) } -> Same<U const>&;
};

Ok, this one would not, but is this Same<> also not a workaround for a missing language construct?


Yes, we've been using { E } -> Same<T>; in the Ranges TS as a type-and-value-category constraint. The intent is to require both that E is a valid expression and that decltype((E)) is T. It doesn't achieve that goal, however. The compound-requirement emits both an expression constraint for E, as desired, and a deduction constraint from the type of E to Same<T> which requires that f(E) completes overload resolution given the invented abbreviated function template declaration void f(Same<T>);. All of which is roughly equivalent to:

template <class U>
  requires Same<U, T>()
void f(U);

using W = decltype(f(E));

Obviously U will never deduce a reference type here, so Same<U, T>() can never be satisfied if T is a reference type. We're currently debating whether to replace our "type-and-value-category constraint" syntax with:

E; requires Same<decltype((E)), T>();

or

{ E } -> requires Same<T>&&;

The first syntax is hideous, but precise. The second is a bit less hideous, but really only discriminates lvalues from rvalues: prvalues and xvalues of type remove_reference_t<T> will equivalently satisfy Same<T>&& and Same<T&&>&&.

Given that concepts fundamentally constrain expressions, and that expressions have types *and* value categories, it would be nice if there were a more convenient language mechanism to express this sort of "type-and-value-category" constraint.

Casey Carter

Feb 23, 2017, 12:10:04 PM
to SG8 - Concepts, Ca...@carter.net, eric.n...@gmail.com
On Thursday, February 23, 2017 at 1:07:12 AM UTC-8, Andrzej Krzemieński wrote:
It seems that this inclusion of additional overloads only applies to inputs that are used for observing the object's value. It looks like what we should be annotating is not expressions but inputs:

template<class T, class U>
concept bool C2 = requires(T t, T const ct, __Treat_as_value const int i) {
  { get_at_index(t, i) } -> Same<U>&;
  { get_at_index(ct, i) } -> Same<U const>&;
};

This treats t and ct literally, but considers different cv/ref variants for i.

Aside: feel free to refer to this feature as "implicit expression variants." The Ranges TS does so in a few places ("the expression E does not generate implicit expression variants"), and should probably italicize the definition in [concepts.lib.general.equality]/6 as a term of art.

My original intent was to choose a good implicit syntax that would enable implicit expression variants without requiring extensive changes to the syntax of requires expressions. "Argument is declared as a const T&" was a good choice for this. If I were doing it today, I would probably choose to explicitly tag parameters as you have here, possibly with a pretend-macro-like library keyword in the style of EXPLICIT. Having the uses documented explicitly seems more valuable than not disturbing the "natural" declaration syntax.
 

Or, a bit more intrusive change: treat reference arguments literally, and treat non-reference arguments with all cv/ref combinations:

template<class T, class U>
concept bool C2 = requires(T& t, T const& ct, int i) {
  { get_at_index(t, i) } -> Same<U>&;
  { get_at_index(ct, i) } -> Same<U const>&;
};

Now, t and ct are references (treated literally), and i is a value (all cv/ref variants considered).

I'm wary of attaching semantics to the reference/non-reference distinction: most requires-expression parameters are dependent, and need to be references to defend against array- and function-to-pointer decay. (There are still some requires expressions in the TS that haven't been armored this way.)