...
>> "When two pointers are subtracted, both shall point to elements of the
>> same array object, or one past the last element of the array object; the
>> result is the difference of the subscripts of the two array elements.
>> The size of the result is implementation-defined, and its type (a signed
>> integer type) is ptrdiff_t defined in the <stddef.h> header.
>> If the result is not representable in an object of that type, the
>> behavior is undefined." (C2011 6.5.6p9).
>>
>> If the result of a pointer subtraction were always guaranteed to be
>> representable in ptrdiff_t, then that last sentence would be vacuous.
>
> Then don't use ptrdiff_t. I suspect like most people I wasn't even aware
> it existed.
I hope you're wrong about that - it's a fairly basic aspect of C, like
[u]intptr_t or size_t. But my degree was in Physics, not CS, so I don't
have any idea how bad the average CS major's education might have been.
If you never need to store the result of a pointer subtraction, there's
no need to use ptrdiff_t. If you're calculating the difference between
pointers and know enough about the calculation to at least roughly bound
the minimum and maximum possible results, and know that both bounds fit
in an integer of a given type, you can use an integer of that type to
store the result. ptrdiff_t is needed only when you have no other
information to go on about the size of a pointer difference that you
need to store - and if ptrdiff_t cannot represent such a value, it's
likely that no integer type supported by that implementation can. If
there were such a type, it would have been used as ptrdiff_t.
>> In the C standard, 7.20.1.4p1 describes intptr_t and uintptr_t, and says
>> "These types are optional". A fully conforming implementation may have
>> pointers that are too big to be representable using any supported
>> integer type, and the same is true of pointer differences.
>
> So long as mathematical operations can be done on the pointer types (which
> is a given or they'd be no use as pointers) then they are de facto integer
> types and that statement is wrong.
Yes, but when these issues come into play, the relevant mathematical
operations cannot be done on pointer types. The result of a pointer
subtraction has the type ptrdiff_t, and if the actual difference cannot
be represented in that type, the behavior is undefined. On real-life
implementations where this can happen, the likely result is the same as
computing, by any other means, a value too large to be represented in
that type. That wording is deliberately vague, because the "likely
result" I'm referring to can be (and probably is) different on different
implementations. Possible results include large positive differences
wrapping around to become large negative ones and vice versa, or
saturation arithmetic, or the raising of a signal. How many cases do you
know of where any one of those three results would be acceptable?
You're used to systems where ptrdiff_t and [u]intptr_t can be defined by
the implementation to be types that are big enough to store any pointer
difference, and any suitably-converted pointer value, respectively. So
am I. But the standard was written by a committee that, collectively,
had far broader experience than either you or I, and that wording was
added because the committee was well aware of systems for which that was
not the case. Unless you know which systems the committee thought had
that issue, and can authoritatively tell them that they were wrong about
those systems, your beliefs to the contrary don't count for much.