Mar 21, 2019, 8:19:27 PM3/21/19

to ISO C++ Standard - Discussion

Hi all,

I posted a variation of this question on SO [1], but the answers and discussions therein made me even more confused. For the sake of self-containedness, the important points are repeated here.

It's well known that < does not induce a strict weak ordering on the set of (IEEE-754) double values, and the blame lies on NaNs. (Does it? More on that later.) This poses a practical problem for associative containers, and much has been said about it, but let's consider something much simpler: std::min. (Quotes refer to N4800 [2] and are abbreviated for brevity.)

[alg.min.max]

template<class T>
constexpr const T& min(const T& a, const T& b);

1 Requires: [...] type T shall be Cpp17LessThanComparable (Table 24).

2 Returns: The smaller value.

3 Remarks: Returns the first argument when the arguments are equivalent. [...]

Cpp17LessThanComparable is defined in Table 24, which states that < is a strict weak ordering relation. The four properties that define a strict weak ordering (e.g. in Wikipedia [3]) go along the lines of "for all x in S, ...". Now the confusion starts.

Some people (me included) follow a, say, more theoretical approach and understand S as the set of all possible double values. We deduce that < is not a strict weak ordering on this set and thus that double is not Cpp17LessThanComparable. For us, instantiating std::min with T = double is UB; in particular, std::min(0.0, 1.0) is undefined. Please do not get me wrong! One thing everybody agrees on is the intention that std::min(0.0, 1.0) must return 0.0. I'm simply arguing that, perhaps, the wording in the Standard is not precise enough to translate that intention into an indisputable specification.

Other people follow a, say, more pragmatic approach and read the Cpp17LessThanComparable requirement on T as applying only to the "values which are passed to the algorithm". What I grasp from the SO posts is that, for them, std::min(0.0, 1.0) is well defined (it's 0.0) but std::min(0.0, NaN) is not. The underlying idea seems to be that "everything is fine when NaNs are not involved".

We get to two questions:

1) Is the theoretical group right: is instantiating std::min for T = double UB?

2) Is the pragmatic group right: is std::min(0.0, 1.0) well defined and std::min(0.0, NaN) not?

IMHO, the pragmatic approach (although, I insist, I agree with its intention) is too vague and leaves too many open questions. The mathematical definition of strict weak ordering doesn't mention any algorithm. Would it make sense for double to be Cpp17LessThanComparable for algorithm A but not for algorithm B? Does "values passed to the algorithm" mean that double being Cpp17LessThanComparable depends on the run-time values of a and b (the two arguments in the declaration above)? If so, then what's the point of requiring, for instance, transitivity of < when only two values matter and there's no third one to be compared against? Also, if only a and b matter, why would std::min(0.0, NaN) be undefined, given that < is a strict weak ordering on {0.0, NaN}?

The Standard does give indications of the intention to say "everything is fine when NaNs are not involved". For instance, in the specification of std::clamp:

template<class T>
constexpr const T& clamp(const T& v, const T& lo, const T& hi);

1 Requires: The value of lo shall be no greater than hi. [...] type T shall be Cpp17LessThanComparable (Table 24).

2 Returns: lo if v is less than lo, hi if hi is less than v, otherwise v.

3 [Note: If NaN is avoided, T can be a floating-point type.— end note]

The Note above (recall that notes are not normative) conveys the idea, but I'm still not convinced this is the right way of allowing floating-point types.

Another clue comes from [structure.requirements]/8 which came into the WD from P0898R0 [4] during the "conceptification" of the standard library:

8 Required operations of any concept defined in this document need not be total functions; that is, some arguments to a required operation may result in the required semantics failing to be satisfied. [Example: The required < operator of the StrictTotallyOrdered concept (17.5.4) does not meet the semantic requirements of that concept when operating on NaNs.— end example] This does not affect whether a type satisfies the concept.

Again, it's only a note, and it applies to concepts, not to other named requirements like Cpp17LessThanComparable.

In any case, excluding NaN doesn't seem to be the right approach (at least as far as std::min is concerned).

First, as pointed out by Barry in [1], the same issue might occur with other types whose operator < does not satisfy the Cpp17LessThanComparable requirement. How could one precisely define the rogue values that algorithms need to avoid (like NaNs for double)?

Second, as I pointed out in [1], one can argue that NaNs are not the issue but other double values are. Specifically, recall that there's more than one NaN (exactly 2^52 - 1 values carrying different payloads). If S is the set of all NaN double values, then it can be shown that < is a strict weak ordering on S. Therefore, from the mathematical point of view, there would be no problem if Note 3 in std::clamp's specification said "If non-NaNs are avoided, T can be a floating-point type".

FWIW, the resolution of Library Issue 2239 [5] removed the Cpp17LessThanComparable requirement for the version of std::min that takes a Compare but left it for the one using < because the Remarks element mentions equivalence.

The simplest way to specify std::min would be through code (I'm sure my Standardese is wrong but you get the idea):

template<class T>
constexpr const T& min(const T& a, const T& b);

1 Returns: a < b ? a : b provided this expression is well-formed.

There's no need for requiring T to be Cpp17LessThanComparable and there's no need to mention equivalence.

If I understand it correctly, as a matter of principle, the Standard avoids defining semantics through code so as not to give the impression of enforcing a particular implementation. However, [structure.requirements]/7 already states that when C++ code is used to present semantic requirements, it doesn't mean that's the way it must be implemented. That would be my favorite specification, honoring std::min as a replacement for the macro of the same name. If sticking to the principle is really desirable, then a textual approach similar to std::clamp's could be used:

1 Returns: a if a is less than b, otherwise b.

(I would also suggest to remove Note 3 from std::clamp's specification.)

I also have comments regarding keys in associative containers, but this post is getting too long and I want to hear your thoughts for now.

Thanks and regards,

Cassio.

[1] https://stackoverflow.com/questions/55153210/do-stdmin0-0-1-0-and-stdmax0-0-1-0-yield-undefined-behavior

[2] http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2019/n4800.pdf

[3] https://en.wikipedia.org/wiki/Weak_ordering#Strict_weak_orderings

[4] http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p0898r0.pdf

[5] http://cplusplus.github.io/LWG/lwg-defects.html#2239

Mar 22, 2019, 11:26:12 AM3/22/19

to std-dis...@isocpp.org, Cassio Neri

On 21/03/2019 20.19, Cassio Neri wrote:

> The simplest way to specify std::min would be through code (I'm sure my

> Standardese is wrong but you get the idea):

>

> template<class T>

> constexpr const T& min(const T& a, const T& b);

>

> 1 Returns: a < b ? a : b provided this expression is well-formed.

Since it is supposed to return "the first argument when the arguments are equivalent", shouldn't that be `b < a ? b : a`? (Which is the actual definition in at least libstdc++...)

--

Matthew

Mar 22, 2019, 12:27:48 PM3/22/19

to ISO C++ Standard - Discussion

Good point.

Regards,

Cassio.

Mar 25, 2019, 5:52:33 AM3/25/19

to ISO C++ Standard - Discussion

On Friday, March 22, 2019 at 01:19:27 UTC+1, Cassio Neri wrote:

> First, as pointed out by Barry in [1] the same issue might occur with other types/operator < which do not verify the Cpp17LessThanComparable requirement. How could one precisely define the rogue values that algorithms need to avoid (like NaNs for double)?

Just introduce explicitly the notion of "domain" of a strict weak order, as the subset of the set of all values of T where all the guarantees hold. Amend Cpp17LessThanComparable saying that it does not require the domain to be the set of all values. Then just add the following clause to std::min (and similarly to every function that uses the Cpp17LessThanComparable requirement):

Expects: For the first form, type T is Cpp17LessThanComparable (Table 24). a and b are in the domain of the induced strict weak order.

(notice that the second part applies to all forms, not just the first one)

This would make it explicitly UB to use NaNs, which is correct, IMHO, since it allows the implementation to either propagate a or b or even trap, without externally imposed overhead. This approach would work for all floating-point types, but also for custom types that share similar issues.

Just my two eurocent,


Mar 25, 2019, 4:10:11 PM3/25/19

to std-dis...@isocpp.org

(Trying again after Google deleted my message as spam.)

On Mon, Mar 25, 2019 at 9:52 AM Alberto Barbati <alberto...@gmail.com> wrote:

On Friday, March 22, 2019 at 01:19:27 UTC+1, Cassio Neri wrote:

> First, as pointed out by Barry in [1] the same issue might occur with other types/operator < which do not verify the Cpp17LessThanComparable requirement. How could one precisely define the rogue values that algorithms need to avoid (like NaNs for double)?

On Mon, Mar 25, 2019 at 9:52 AM Alberto Barbati <alberto...@gmail.com> wrote:

> Just introduce explicitly the notion of "domain" of a strict weak order, as the subset of the set of all values of T where all the guarantees hold. Amend Cpp17LessThanComparable saying that it does not require the domain to be the set of all values. Then just add the following clause to std::min (and similarly to every function that uses the Cpp17LessThanComparable requirement):

The problem with this definition of "domain" is that there might be more than one "subset of the set of all values of T where all the guarantees hold". In my original post, for instance, I've mentioned that < is a strict weak ordering on S = { NaN double values } as much as it is on R = { double values that are not NaN }.

Here is another example:

struct X {
    unsigned value;

    friend bool operator<(const X& a, const X& b) {
        return a.value % 2 == b.value % 2 && a.value < b.value;
    }
};

In words, if a.value and b.value have the same parity, then a < b is equivalent to a.value < b.value. Otherwise, a and b are not comparable. Although < is not a strict weak ordering on the set of all X values (which I shall also denote by X), < is a strict weak ordering on E = { x in X ; x.value % 2 == 0 } and also on O = { x in X ; x.value % 2 == 1 }. Hence, we can't refer to "**the** subset of the set of all values of X where all the guarantees hold", because there are (at least) two of them and there is no obvious reason to prefer one over the other.

I still think that std::min (the same holds for std::max and std::clamp) requiring T to be Cpp17LessThanComparable creates a lot of unnecessary issues. Again, IMHO, the best option would be defining std::min(a, b) (after Matthew Woehlke's correction) as

Returns 'b < a ? b : a' provided this expression is well formed (using the right Standardese for that.)

or its more wordy variant:

Returns b if b is less than a, otherwise a.

Regards,

Cassio.

Mar 25, 2019, 4:47:12 PM3/25/19

to std-dis...@isocpp.org

On Mon, Mar 25, 2019 at 4:10 PM Cassio Neri <cassi...@gmail.com> wrote:

> Again, IMHO, the best option would be defining std::min(a, b) (after Matthew Woehlke's correction) as
>
> Returns 'b < a ? b : a' provided this expression is well formed (using the right Standardese for that.)
>
> or its more wordy variant:
>
> Returns b if b is less than a or a otherwise.

You don't think it's unfortunate for min(a,b) and min(b,a) to be different?

Mar 25, 2019, 5:22:59 PM3/25/19

to std-dis...@isocpp.org, Cassio Neri

Just follow established practice.

min(a, b) is a < b ? a : b

max(a, b) is a > b ? a : b

Yes it means that min(a, b) != min(b, a)

That's just the way min/max is with floating-point.

This is the same behaviour as the x86 instructions as well as most numerical computing libraries.

Message has been deleted

Mar 26, 2019, 9:19:53 AM3/26/19

to ISO C++ Standard - Discussion, cassi...@gmail.com

(Repost after Google rejection)

On Mon, Mar 25, 2019 at 4:10 PM Cassio Neri <cassi...@gmail.com> wrote:

> The problem with this definition of "domain" is that there might be more than one "subset of the set of all values of T where all the guarantees hold". In my original post, for instance, I've mentioned that < is a strict weak ordering on S = { NaN double values } as much as it is on R = { double values that are not NaN }.

It's not that the domain gets implicitly defined by the property "the subset of the set of all values of T where all the guarantees hold". As your examples show, that direction is a no-go. The idea is that the domain has to be specified together with the comparison operation, and it's the pair (Type, Domain) that satisfies Cpp17LessThanComparable, not just the Type.

Alternatively, you can just allow multiple domains to coexist and use wording like this:

Expects: For the first form, type T is Cpp17LessThanComparable (Table 24). a and b are in the *same* domain among those that induce a strict weak order.


Mar 26, 2019, 11:16:35 AM3/26/19

to std-dis...@isocpp.org, Hyman Rosen

And, no, I consider it a feature that `&min(a, b)` is deterministic for

`a == b`. I'm pretty sure that was intentional.

--

Matthew
