On Thursday, March 14, 2013 7:24:23 PM UTC+4, Bjarne Stroustrup wrote:
> Concepts and concepts lite are meant to make good generic
> programming simpler.
I doubt that such expectations of simplicity are reasonable.
Specifying correct constraints is more difficult than it may seem at
first glance. While templates with _properly_ specified constraints
may become easier to use, it's likely that only language lawyers will
be able to write templates correctly in accordance with the new
fashion, and we will end up with lots and lots of poorly constrained
templates (which are even worse than formally unconstrained
templates). I can suggest a simple test: ask people how they would
specify constraints for the following templates:
1)

template <class T, class U>
typename std::common_type<T, U>::type
min(T &&t, U &&u)
{ return t < u ? FORWARD(t) : FORWARD(u); }

2)

template <class T, class... Params>
typename std::enable_if
<
    !std::is_array<T>{},
    std::unique_ptr<T>
>::type make_unique(Params &&... params)
{ return std::unique_ptr<T>(new T(FORWARD(params)...)); }

3)

template <class F, class... Params>
typename std::result_of<F &&(Params &&...)>::type
invoke(F &&f, Params &&... params)
{ return FORWARD(f)(FORWARD(params)...); }

where FORWARD is defined as

#define FORWARD(x) static_cast<decltype(x) &&>(x)
I think the vast majority of _advanced_ C++ programmers will be
unable to provide proper formal constraints even for such simple
cases.
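For what it's worth, here is one plausible C++11 answer for case 2 (a
sketch of mine, not an authoritative solution; the name
make_unique_constrained is made up): reject arrays and require that T
be constructible from the forwarded arguments, mirroring the
new T(...) in the body.

```cpp
#include <cassert>
#include <memory>
#include <type_traits>
#include <utility>

// One candidate constraint for case 2 (my sketch, name hypothetical):
// !is_array plus is_constructible, which mirrors the direct-
// initialization new T(...) performed in the body.
template <class T, class... Params>
typename std::enable_if<
    !std::is_array<T>::value &&
    std::is_constructible<T, Params &&...>::value,
    std::unique_ptr<T>
>::type
make_unique_constrained(Params &&... params)
{
    return std::unique_ptr<T>(new T(std::forward<Params>(params)...));
}
```

Whether this is the "proper" constraint is exactly the kind of
question raised above: is_constructible tests direct-initialization,
which happens to match this particular body, but a body using
brace-initialization would already need a different check.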
How about something more difficult, e.g., variadic min?
http://liveworkspace.org/code/2GxNZd$0

It's not obvious how to reasonably define and use such primitive
constraints as Less_than_comparable, Weakly_ordered, and
Totally_ordered. The header cons_type_traits.h provides the following
definitions:
template<typename _Tp>
constexpr bool Weakly_ordered()
{
    __declval _Tp __a, __b;
    return __is_valid_expr(bool = {__a < __b})
        && __is_valid_expr(bool = {__a > __b})
        && __is_valid_expr(bool = {__a <= __b})
        && __is_valid_expr(bool = {__a >= __b});
}

template<typename _Tp, typename _Up>
constexpr bool Weakly_ordered()
{
    __declval _Tp __t;
    __declval _Up __u;
    return Weakly_ordered<_Tp>() && Weakly_ordered<_Up>()
        && __is_valid_expr(bool = {__t < __u})
        && __is_valid_expr(bool = {__u < __t})
        && __is_valid_expr(bool = {__t > __u})
        && __is_valid_expr(bool = {__u > __t})
        && __is_valid_expr(bool = {__t <= __u})
        && __is_valid_expr(bool = {__u <= __t})
        && __is_valid_expr(bool = {__t >= __u})
        && __is_valid_expr(bool = {__u >= __t});
}

template<typename _Tp, typename _Up>
constexpr bool Totally_ordered()
{
    return Equality_comparable<_Tp, _Up>()
        && Weakly_ordered<_Tp, _Up>();
}
If I constrain my min with such a Totally_ordered, as was suggested
by Andrew, I won't be able to write something like this:
class type_index
{
public:
    type_index(std::type_info const &x) noexcept :
        m_typeid(&x) {}

    size_t hash_code() const noexcept
    { return m_typeid->hash_code(); }

    const char *name() const noexcept
    { return m_typeid->name(); }

    std::type_info const &get() const noexcept
    { return *m_typeid; }

private:
    std::type_info const *m_typeid;
};

inline bool operator ==(
    type_index const &x, type_index const &y) noexcept
{ return x.get() == y.get(); }

inline bool operator !=(
    type_index const &x, type_index const &y) noexcept
{ return !(x == y); }

inline bool operator <(
    type_index const &x, type_index const &y) noexcept
{ return x.get().before(y.get()); }

inline bool operator >(
    type_index const &x, type_index const &y) noexcept
{ return y < x; }

inline bool operator <=(
    type_index const &x, type_index const &y) noexcept
{ return !(y < x); }

inline bool operator >=(
    type_index const &x, type_index const &y) noexcept
{ return !(x < y); }
type_index type_idx = get_type_index();
type_index min_type_idx = min(type_idx, typeid(T)); // ill-formed

The call is ill-formed because 'std::type_info const &' isn't
Weakly_ordered (there is no suitable operator < for operands of that
type), in spite of the fact that there are no problems with the
heterogeneous comparison between 'type_index' and
'std::type_info const &'. The presence of

Weakly_ordered<_Tp>() && Weakly_ordered<_Up>()

doesn't look reasonable. What can we say about the symmetry "T is
LessThanComparable/Weakly_ordered/Totally_ordered with U iff U is
LessThanComparable/Weakly_ordered/Totally_ordered with T"?
Sometimes classes define operator < as a member function. Then
implicit conversions can be applied only to the right operand. I
consider such classes poorly designed, but people frequently do such
things. In particular, std::type_index is defined this way. We can
compare an expression of type std::type_index (taken as the left
operand) with an expression of type const std::type_info (taken as
the right operand), but the symmetric comparison between
const std::type_info and std::type_index is not valid. Do we really
need such symmetry in templates like heterogeneous min? I don't
think so.
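The std::type_index case can be observed directly (a small
illustration of mine): because its operator < is a member, the
conversion from std::type_info is available only on the right-hand
side.

```cpp
#include <cassert>
#include <typeindex>
#include <typeinfo>

// std::type_index declares operator< as a member function, so the
// implicit conversion from std::type_info const & can apply only to
// the right-hand operand:
std::type_index idx(typeid(int));
bool const left_ok = idx < typeid(double);  // well-formed
// bool const right_ok = typeid(double) < idx;
//     ^ ill-formed: no operator< accepts std::type_info on the left
```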
The need for the presence of ==, !=, >, <=, >= in min is also
controversial.
On the other hand, Weakly_ordered doesn't take the constness and
value category of the operands into account. For example,
template <class T, class U>
    requires
        std::Totally_ordered<T, U>() &&
        Has_common_type<T, U>()
typename std::common_type<T, U>::type
min(T &&t, U &&u)
{
    return t < u ? FORWARD(t) : FORWARD(u);
}

can be changed to

template <class T, class U>
    requires
        std::Totally_ordered<T, U>() &&
        Has_common_type<T, U>()
typename std::common_type<T, U>::type
min(T &&t, U &&u)
{
    // '<' is replaced with '>'
    return t > u ? FORWARD(u) : FORWARD(t);
}

without introducing any problems, but if we change this to

template <class T, class U>
    requires
        std::Totally_ordered<T, U>() &&
        Has_common_type<T, U>()
typename std::common_type<T, U>::type
min(T &&t, U &&u)
{
    auto const &x = t;
    auto const &y = u;
    return x < y ? FORWARD(t) : FORWARD(u);
}
the constraint check std::Totally_ordered<T, U>() becomes
insufficient: the ability to compare non-const lvalues doesn't imply
the ability to compare const lvalues. C++11 Table 18 describes the
LessThanComparable requirement in terms of expressions with arbitrary
value category and optionally added constness. A similar
comprehensive heterogeneous version of LessThanComparable could be
formulated as follows:
A type T is LessThanComparable with a type U iff for every
well-formed lvalue or rvalue t of possibly const type
std::remove_reference<T>::type and for every well-formed lvalue or
rvalue u of possibly const type std::remove_reference<U>::type the
following declarations are well-formed:
bool implicit_conversion_result = t < u;
bool explicit_conversion_result(t < u);
[Note that theoretically a type may be implicitly convertible to bool,
but not contextually convertible to bool. For example, the code
template <class T, class U>
    requires
        std::Totally_ordered<T, U>() &&
        Has_common_type<T, U>()
typename std::common_type<T, U>::type
min(T &&t, U &&u)
{
    return t < u ? FORWARD(t) : FORWARD(u);
}

struct B
{
    operator int() const;
    explicit operator bool() const = delete;
};

struct X
{
    friend B operator <(X const &, X const &);
    friend B operator >(X const &, X const &);
    friend B operator <=(X const &, X const &);
    friend B operator >=(X const &, X const &);
    friend B operator ==(X const &, X const &);
    friend B operator !=(X const &, X const &);
};

auto &&res = min(X(), X());
auto &&res = min(X(), X());
is ill-formed, because for the given application of min the result of
t < u is not contextually convertible to bool, but for the deduced T
and U the constraint check std::Totally_ordered<T, U>() will be
successful, because t < u is implicitly convertible to bool.]
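The implicit-vs-contextual gap from the note can be verified with a
standard trait plus a small detection helper (my sketch;
contextually_bool is not a standard trait):

```cpp
#include <cassert>
#include <type_traits>
#include <utility>

struct B
{
    operator int() const { return 1; }
    explicit operator bool() const = delete;
};

// Implicit conversion to bool exists, via operator int:
static_assert(std::is_convertible<B, bool>::value,
              "B -> int -> bool is an implicit conversion sequence");

// Detection helper for contextual conversion to bool: the condition
// of ?: contextually converts its operand, which for B selects the
// deleted explicit operator bool, so the specialization SFINAEs away.
template <class T, class = void>
struct contextually_bool : std::false_type {};

template <class T>
struct contextually_bool<T,
    decltype(void(std::declval<T>() ? 0 : 0))> : std::true_type {};

static_assert(!contextually_bool<B>::value,
              "contextual conversion picks the deleted operator bool");
static_assert(contextually_bool<int>::value, "sanity check");
```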
A comprehensive concept check for the aforementioned definition of
heterogeneous LessThanComparable has to include tests for 32
declarations (left const/non-const lvalue/rvalue operand *
right const/non-const lvalue/rvalue operand * implicit/explicit
conversion to bool):
#define CONVERTIBLE(source_expr, destination_type) \
    destination_type = { source_expr }; \
    destination_type(source_expr)

template <class T, class U>
constexpr bool Less_than_comparable_impl_lvalueness() noexcept
{
    return requires
    {
        CONVERTIBLE(declval<T &>() < declval<U &>(), bool);
        CONVERTIBLE(declval<T>() < declval<U &>(), bool);
        CONVERTIBLE(declval<T &>() < declval<U>(), bool);
        CONVERTIBLE(declval<T>() < declval<U>(), bool);
    };
}

template <class T, class U>
constexpr bool Less_than_comparable_impl_constness() noexcept
{
    return Less_than_comparable_impl_lvalueness<T, U>()
        && Less_than_comparable_impl_lvalueness<T const, U>()
        && Less_than_comparable_impl_lvalueness<T, U const>()
        && Less_than_comparable_impl_lvalueness<T const, U const>();
}

template <class T, class U>
constexpr bool Less_than_comparable() noexcept
{
    using ET = typename std::remove_reference<T>::type;
    using EU = typename std::remove_reference<U>::type;
    return Less_than_comparable_impl_constness<ET, EU>();
}
If we add checks for >, <=, >=, we will have 128 checks. If we also
add the symmetry check (x @ y <-> y @ x), we will have 256 checks.
Such a theoretically perfect concept check leads to complicated code
and potentially slow compilation. You could say: "OK, let's ignore
rare errors and handle only the most common ones. Then we can
significantly reduce the number of checks, although sometimes we will
get long error messages, as before." But where do we draw the line
between "rare" and "common"? Is the following definition good enough?
template <class T, class U>
constexpr bool Less_than_comparable() noexcept
{
    using AT = typename std::remove_reference<T>::type;
    using AU = typename std::remove_reference<U>::type;
    return requires
    {
        bool = { declval<AT const &>() < declval<AU const &>() };
        bool = { declval<AT>() < declval<AU>() };
    };
}
How much skill and how much time do we need in order to find an
optimal solution?
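For comparison, the reduced check can be emulated in plain C++11
today (my sketch; the "bool = { e }" implicit conversion is
approximated with std::is_convertible, which likewise admits only
implicit conversions):

```cpp
#include <cassert>
#include <type_traits>
#include <utility>

// Shorthands for the two operand shapes being tested: a const lvalue
// and a non-const rvalue of the type with references stripped.
template <class T>
using const_lvalue = typename std::remove_reference<T>::type const &;
template <class T>
using rvalue = typename std::remove_reference<T>::type;

template <class T, class U, class = void>
struct less_than_comparable : std::false_type {};

// Chosen only when both comparisons compile and their results are
// implicitly convertible to bool; otherwise SFINAE falls back to the
// primary (false) template.
template <class T, class U>
struct less_than_comparable<T, U, typename std::enable_if<
    std::is_convertible<decltype(std::declval<const_lvalue<T>>()
                                 < std::declval<const_lvalue<U>>()),
                        bool>::value &&
    std::is_convertible<decltype(std::declval<rvalue<T>>()
                                 < std::declval<rvalue<U>>()),
                        bool>::value
>::type> : std::true_type {};
```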
Consider another example:
// Value type
template<typename _Tp>
struct __value_type
{ using type = __subst_fail; };

template<typename _Tp>
    requires __is_valid_type(typename _Tp::value_type)
struct __value_type<_Tp>
{ using type = typename _Tp::value_type; };

// Any type claiming that their value type is void lies.
template<typename _Tp>
    requires __is_valid_type(typename _Tp::value_type)
          && __is_same(typename _Tp::value_type, void)
struct __value_type<_Tp>
{ using type = __subst_fail; };

template<typename _Tp>
struct __value_type<_Tp*>
{ using type = _Tp; };

template<typename _Tp>
struct __value_type<const _Tp*>
{ using type = _Tp; };

// Value_type
template<typename _Tp>
using Value_type = typename __value_type<_Tp>::type;

template<typename _Iter, typename _Tp>
constexpr bool Writable()
{
    __declval _Iter __i;
    __declval _Tp __x;
    return __is_valid_expr(*__i = std::forward<_Tp>(__x));
}

template<typename _Iter, typename _Tp>
constexpr bool Output_iterator()
{
    return Writable<_Iter, _Tp>() && Advanceable<_Iter>();
}

template<typename _In, typename _Out>
constexpr bool Indirectly_movable()
{
    return Input_iterator<_In>()
        && Output_iterator<_Out, Value_type<_In>&&>();
}

// True when values can be copied from In to Out.
template<typename _In, typename _Out>
constexpr bool Indirectly_copyable()
{
    return Indirectly_movable<_In, _Out>()
        && Output_iterator<_Out, Value_type<_In>>();
}

template<Input_iterator _In, Advanceable _Out>
    requires Indirectly_copyable<_In, _Out>()
inline _Out
copy(_In __first, _In __last, _Out __result)
{
    __glibcxx_requires_valid_range(__first, __last);
    return (std::__copy_move_a2<__is_move_iterator<_In>::__value>
            (std::__miter_base(__first), std::__miter_base(__last),
             __result));
}
Do you see any problems in Indirectly_movable and Indirectly_copyable
(which are part of the presented GCC implementation)? If not, I'll
give you a hint: try to figure out what will happen if _In is
'const std::unique_ptr<int> *' and _Out is 'std::unique_ptr<int> *'.
In spite of the fact that std::unique_ptr<int> is not assignable from
an lvalue or rvalue of type const std::unique_ptr<int>,
Indirectly_movable
<
const std::unique_ptr<int> *,
std::unique_ptr<int> *
>()
and
Indirectly_copyable
<
const std::unique_ptr<int> *,
std::unique_ptr<int> *
>()
will return true, because in
__is_valid_expr(*__i = std::forward<_Tp>(__x))
the expression std::forward<_Tp>(__x) becomes an rvalue of type
std::unique_ptr<int> in all cases.
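The mismatch can be stated directly with std::is_assignable (my
illustration, not part of the quoted GCC code):

```cpp
#include <cassert>
#include <memory>
#include <type_traits>

// What the Writable check effectively tests: Value_type strips the
// const (via the __value_type<const _Tp*> specialization), so the
// checked assignment is from an rvalue std::unique_ptr<int>...
static_assert(std::is_assignable<
                  std::unique_ptr<int> &,
                  std::unique_ptr<int> &&>::value,
              "move assignment: what the constraint sees");

// ...while what copy() actually performs is assignment from *__first,
// a const lvalue, and that is not valid:
static_assert(!std::is_assignable<
                  std::unique_ptr<int> &,
                  std::unique_ptr<int> const &>::value,
              "copy assignment from a const lvalue: what really happens");
```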
If you try to compile
std::unique_ptr<int> const p1;
std::unique_ptr<int> p2;
std::copy(&p1, &p1 + 1, &p2);
you will see that the constraint check doesn't work properly: the
error is diagnosed inside the definition of a specialization of the
auxiliary class template std::__copy_move.
On the other hand, the above std::copy is overconstrained (again, the
root of the problem is inside the definition of Indirectly_copyable).
Under the C++11 rules we can write:
std::vector<int> v1 = { 11, 22, 33 };
std::vector<std::reference_wrapper<int>> v2;
std::copy(v1.begin(), v1.end(), std::back_inserter(v2));
For the given example, the "modernized" version of std::copy
eventually requires (for no visible reason) that an lvalue of type
std::back_insert_iterator<std::vector<std::reference_wrapper<int>>>
be assignable from an rvalue of type int. This condition is not
satisfied, because an rvalue of type int cannot be implicitly
converted to std::reference_wrapper<int>, so the call
std::copy(v1.begin(), v1.end(), std::back_inserter(v2))
will be ill-formed. From my point of view, such an application of
"concepts lite" is worse than useless.
When we write a constraint definition, we should keep in mind all
potentially useful applications of the templates for which the
constraint is designed, not focus solely on purely abstract
considerations. In particular, the relation "CopyAssignable subsumes
MoveAssignable" is reasonable, but the relation "Indirectly_copyable
subsumes Indirectly_movable" is not (we should not rely on vague and
contrived analogies here).
The similarity between std::iterator_traits<Iter>::value_type and
decltype(*std::declval<Iter>()) is deceptive: the former and the
latter aren't interchangeable (and the former is useless in most
cases).
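One concrete illustration (mine) is std::vector<bool>: the trait
reports bool, while dereferencing yields a proxy reference class, so
a constraint written in terms of one says little about the other.

```cpp
#include <iterator>
#include <cassert>
#include <type_traits>
#include <utility>
#include <vector>

using It = std::vector<bool>::iterator;

// iterator_traits names the element type...
static_assert(std::is_same<std::iterator_traits<It>::value_type,
                           bool>::value, "");

// ...but *it is a proxy reference class, neither bool nor bool &:
static_assert(!std::is_same<decltype(*std::declval<It>()),
                            bool>::value, "");
static_assert(!std::is_same<decltype(*std::declval<It>()),
                            bool &>::value, "");
```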
Let's go further.
N3580 "Concepts Lite: Constraining Templates with Predicates" - 5.1
contains the following text:
The _is_convertible_to trait determines whether a user-defined
conversion sequence can be found from one type T to another U.
Currently, this is implemented as a library feature with a complex
implementation.
These traits, together with the __is_base_of intrinsic, have
special logical rules in our implementation of the subsumes
algorithm. In particular, the following statements are always
valid:
__is_same(T, U) |- __is_convertible_to(T, U)
__is_same(T, U) |- __is_base_of(U, T)
__is_base_of(U, T) |- __is_convertible_to(T, U)
That is, whenever the left side of |- is true, the right side is
also true. This has the nice property of ordering these
relationships by their “strength”. These properties are used in
some STL implementations to select optimized algorithms when
comparing values of the same type vs. those that are simply
convertible.
If __is_same, __is_base_of, and __is_convertible_to are supposed to
have the same semantics as std::is_same, std::is_base_of, and
std::is_convertible respectively, then none of these three
implications is correct in general. For example, if we have

class A { A(A const &); };

then the following conditions hold:

std::is_same<A, A>::value == true
std::is_base_of<A, A>::value == true
std::is_convertible<A, A>::value == false

std::is_same<int, int>::value == true
std::is_base_of<int, int>::value == false
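These conditions can be checked against the standard traits directly
(the class A is defined here again so the snippet is self-contained):

```cpp
#include <cassert>
#include <type_traits>

// The counterexample from the text: the copy constructor is private
// (class members default to private), so A is not convertible to A.
class A { A(A const &); };

static_assert(std::is_same<A, A>::value, "");
static_assert(std::is_base_of<A, A>::value, "");
static_assert(!std::is_convertible<A, A>::value,
              "inaccessible copy constructor breaks the conversion");

static_assert(std::is_same<int, int>::value, "");
static_assert(!std::is_base_of<int, int>::value,
              "is_base_of is false for non-class types");
```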
I can imagine that there may be some useful applications of such
"subsumes" relations (at least because they actually hold for some
subsets of type pairs), but once we begin to operate with categories
like "sometimes works" or "almost works", it looks like we are
dealing with hacks.
Conclusion:
As long as we don't consider the details, concepts may seem simple;
their actual complexity can be understood only when we examine them
in depth.
> Some of you may find this hard to believe, but "back then" there was
> quite serious opposition to function declarations because "they
> restricted the way functions could be used and the way separate
> compilation could be used" and also serious opposition to virtual
> functions "because pointers to functions are so much more flexible."
> I see concepts lite (and concepts) in the same light as goto/for,
> unchecked-function-arguments/function-declarations,
> pointers-to-functions/abstract-classes.
Before we start to speculate on vague analogies, could you remind us
why we don't have compile-time-checked exception specifications?