Thanks for that link.
Unfortunately, to me, that proposal mostly misses the point. It is
great that functions such as "trunc" and "fmod" can be made constexpr.
It is /long/ overdue that functions like "abs" and "fmin" are made
constexpr - these can be implemented with a one-line template function
which every mainstream compiler will optimise smoothly.
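Something like this is all it takes (a quick sketch of my own - the
names are mine, and I am ignoring fmin's NaN handling for brevity):

// One-line constexpr versions - trivial to write today:
template <typename T>
constexpr T my_abs(T x) { return (x < T{}) ? -x : x; }

template <typename T>
constexpr T my_fmin(T x, T y) { return (y < x) ? y : x; }

static_assert(my_abs(-2.5) == 2.5, "");
static_assert(my_fmin(1.0, 2.0) == 1.0, "");
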
However, the proposal does not go far enough to suit me.
The restriction on "the function being closed on the set of extended
rational numbers" is not necessary, and not useful. Clearly it is not
necessary - gcc manages happily to have functions like "sin" defined as
constexpr. Clearly it is not useful, as it excludes important and
popular functions. It is not even sensibly defined - there is no such
thing as a "set of extended rational numbers" and no definition of it in
the proposal. Either you consider floating point as an approximation
for real numbers (in which case these functions /are/ closed, at least
for a useful subdomain), or you consider them as functions on the set of
numbers representable as computer floating point numbers of various
types, in which case the functions are /still/ closed (on a useful
subdomain). Meanwhile, functions like "division" are constexpr without
being closed on the field (you can't divide by 0), and "abs" is to be
constexpr without being well-defined over the whole domain (you can't
take the absolute value of the lowest possible integer on most systems).
Basic arithmetic can overflow. The whole thing is inconsistent, and too
limited to be useful in many cases where you really want compile-time
generation of results (such as for trig and power operations). All it
gives you is constexpr for functions that are almost never used outside
very niche cases (who uses "islessequal(x, y)" rather than "x < y"?),
and functions that are trivial to implement as constexpr template
functions.
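On the gcc point above, for example, this is accepted today - as a
non-conforming extension, by constant-folding the builtin - even though
the standard does not require it:

#include <cmath>

// Compiles with gcc as an extension; other compilers may reject it.
constexpr double s = std::sin(1.0);
static_assert(s > 0.84 && s < 0.85, "compile-time sine");
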
In my opinion, the dependency of floating point operations on a diffuse
"floating point environment" is something that should have been
deprecated eons ago. The function "sin" should return an approximation
to the sine of its argument. It is never going to be mathematically
correct (except for occasional values, like 0). The IEEE standards give
reasonable criteria for accuracy, supported by lots of hardware and
software implementations. But functions like this should be state-free
- they are /functions/, in the mathematical sense. They should not
depend on the state. Sure, there will be /some/ people who want to
specify rounding mode FE_DOWNWARD at some points in their code, and
FE_UPWARD at other points. IMHO, they should be using types
double_round_downward and double_round_upward, not type double with a
set state. That would give them better control of exactly what they are
doing - while the rest of us, almost every user of floating point maths,
can use double in FE_DONT_KNOW_AND_DONT_CARE_AT_ALL rounding mode.
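Roughly what I have in mind - purely hypothetical, and a crude
emulation on top of <cfenv> at that (a real version would want compiler
support, and "#pragma STDC FENV_ACCESS ON" to stop the compiler folding
the arithmetic at the default rounding mode):

#include <cfenv>

// Hypothetical: a distinct type that carries its rounding mode, so the
// mode is visible in the code rather than being hidden global state.
class double_round_downward {
    double value;
public:
    explicit double_round_downward(double v) : value(v) {}
    double get() const { return value; }

    friend double_round_downward operator+(double_round_downward a,
                                           double_round_downward b) {
        const int old_mode = std::fegetround();
        std::fesetround(FE_DOWNWARD);
        double r = a.value + b.value;   // rounded towards -infinity
        std::fesetround(old_mode);
        return double_round_downward(r);
    }
    // ... other operators in the same style
};
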
Compatibility with C, and compatibility with existing programs, is of
course paramount. A better compromise, I would suggest, would be to
introduce a nested namespace std::ce:: for constexpr-compatible
functions like std::ce::sin, std::ce::pow, std::ce::abs. Rounding modes
would be fixed - a compiler would either support a single rounding mode,
or make it configurable with compiler switches, pragmas, etc. Ideally,
these functions would give compile-time errors on failure (like
std::ce::sqrt(-1) ), but I would be happy enough with a "garbage in,
garbage out" solution. And when used outside constexpr context, the
nice C++ solution would be exceptions - again, I would be happy with
undefined behaviour, and I think adoption would be easier if this is
implementation defined. This namespace would include a variety of
useful functions from <cmath>, but would certainly avoid the unnecessary
name variants that are left-overs from C before <tgmath.h>.
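And to show that this is not asking for anything exotic, here is a
rough, purely illustrative constexpr sine - no argument reduction, no
accuracy guarantees, nothing like production quality - just to
demonstrate that a state-free, compile-time sin() is entirely feasible
(the "ce" namespace here is my own stand-in, not anything that exists):

namespace ce {   // illustrative stand-in for the proposed std::ce::
    // Crude Taylor-series sine, compile-time evaluable (C++14 or later).
    constexpr double sin(double x) {
        double term = x;
        double sum = x;
        for (int n = 1; n < 10; ++n) {
            term *= -(x * x) / ((2.0 * n) * (2.0 * n + 1.0));
            sum += term;
        }
        return sum;
    }
}

static_assert(ce::sin(0.0) == 0.0, "");
static_assert(ce::sin(1.0) > 0.841 && ce::sin(1.0) < 0.842, "");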