Yes, about a quarter of all possible signed addition or subtraction
operations have undefined behavior according to the standard.
There are non-standard compiler intrinsics that can help us deal
with overflow more simply, or we can use a result type in which
every possible result fits.
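For instance, GCC and Clang offer checked-arithmetic builtins; here is a
minimal sketch using __builtin_add_overflow (non-standard, so availability
depends on your compiler):

#include <iostream>
#include <climits>

int main()
{
    long a = LONG_MAX, b = 1, sum;
    // Returns true if the addition overflowed; the wrapped
    // result is still written to sum either way.
    if (__builtin_add_overflow(a, b, &sum))
        std::cout << "overflow detected" << std::endl;
    else
        std::cout << "sum is " << sum << std::endl;
}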
For example, we can verify with a compile-time check that the
distance between any two signed longs fits into an unsigned
long:
static_assert(ULONG_MAX/2 >= LONG_MAX);
So, fortunately, we can write a distance function that can never
fail if it compiles:
#include <iostream>
#include <climits>

unsigned long distance(long x, long y)
{
    // Guarantee that any difference of two longs fits into
    // an unsigned long, so the result can never overflow.
    static_assert(ULONG_MAX/2 >= LONG_MAX);
    // Conversion from long to unsigned long is well defined
    // (modulo 2^N), and unsigned subtraction never overflows.
    return (x > y) ? (unsigned long)x - (unsigned long)y
                   : (unsigned long)y - (unsigned long)x;
}

int main()
{
    std::cout << "Distance between 3 and -42 is "
              << distance(3, -42) << std::endl;
}
I prefer to use int64_t and uint64_t, which guarantee that
UINT64_MAX/2 >= INT64_MAX, so I don't need that
static_assert.
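A sketch of the same function with fixed-width types (the name
distance64 is made up for illustration):

#include <cstdint>

// The result always fits: UINT64_MAX/2 >= INT64_MAX is guaranteed
// for these exact-width two's complement types.
std::uint64_t distance64(std::int64_t x, std::int64_t y)
{
    return (x > y) ? (std::uint64_t)x - (std::uint64_t)y
                   : (std::uint64_t)y - (std::uint64_t)x;
}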
However, it is quite common that the meaningful limits for a
quantity are rather different from those of the processor's
integer types.
For example, suppose we need a variable that represents
realistically reachable temperatures with an accuracy of one
centidegree (one hundredth of a degree Celsius).
The int32_t that we might want to use has a range of
-2147483648 to 2147483647, so most of that range would be
erroneous values.
Absolute zero is −273.15°C; nothing can go below that.
Temperatures within stars can reach millions of degrees
(thanks to pressure), but at the surface of the Sun we have "only"
5505°C. Therefore we should check against limits like
-27315 to 600000 centidegrees (a range far narrower than INT32_MIN to INT32_MAX)
to figure out whether our values are within sane limits or not.
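A minimal sketch of such a sanity check (the names Centidegrees and
is_sane_temperature, and the exact limits, are illustrative choices,
not a fixed convention):

#include <cstdint>

using Centidegrees = std::int32_t;  // hundredths of a degree Celsius

// Sanity limits: absolute zero up to 6000.00 °C.
constexpr Centidegrees MIN_TEMP = -27315;   // -273.15 °C
constexpr Centidegrees MAX_TEMP = 600000;   // 6000.00 °C

constexpr bool is_sane_temperature(Centidegrees t)
{
    return t >= MIN_TEMP && t <= MAX_TEMP;
}

static_assert(is_sane_temperature(550500));   // surface of the Sun, 5505.00 °C
static_assert(!is_sane_temperature(-30000));  // below absolute zero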