On 16/12/2019 22:03, Bart wrote:
> On 16/12/2019 15:21, David Brown wrote:
>> On 16/12/2019 13:10, Bart wrote:
>
>>> You mean that even when int128 types are built-in, there is no support
>>> for 128-bit constants? And presumably not for output either, according
>>> to my tests with gcc and g++:
>>>
>>> __int128_t a;
>>> a=170141183460469231731687303715884105727;
>>> std::cout << a;
>>>
>>> The constant overflows, and if I skip that part, I get pages of errors
>>> when trying to print it. On gcc, I have no idea what printf format to
>>> use.
>>>
>>
>> There is no printf support for 128-bit integers, except for systems
>> where "long long" is 128-bit. (Or, hypothetically, 128-bit "long",
>> "int", "short" or "char".) As you know full well, gcc does not have
>> printf, since gcc is a compiler and printf is from the standard library.
>
> So? printf is supposed to reflect the available primitive types. If a
> compiler is extended to 128 bits, then printf should support that too.
> How it does that is not the concern of the programmer.
The C standards form the contract. A C compiler supports a given C
standard, and a C library supports the given C standard. (A compiler
and library can also be designed together and made to support each
other.) If you have a C compiler that conforms to a standard and a C
library that conforms to a standard, then together they form a C
implementation for that standard. The same applies to C++.
It is entirely possible for one part of this to support features that
are not supported by the other. If these are extensions, not required
by the standards, then that's fine.
And printf - according to its definition in the standard - does not have
any support for integer sizes bigger than "intmax_t" as defined by the
implementation (generally, by the ABI for the platform). It doesn't
matter if the compiler has support for other types - the standard printf
does not support them.
>
> (However printf currently doesn't even directly support int64_t, so
> don't hold your breath for int128.)
Any conforming C99 printf will support "long long int", which is at
least 64 bits (and generally exactly 64 bits). You can't blame the
compiler just because /you/ happen to use it with an outdated and
non-conforming library. Anyone who uses gcc as part of a conforming
implementation has a printf that supports int64_t.
(This discussion was somewhat interesting the first couple of times it
came up - it must have been explained to you a dozen times or more.)
>
>>> This is frankly astonishing with such a big and important language, and
>>> with comprehensive compilers such as gcc and g++. Even my own 'toy'
>>> language can do this:
>>
>> It is only astonishing if you don't understand what it means for a
>> language to have a standard. It's easy to do this with a toy language
>> (and that is an advantage of toy languages). It would not be
>> particularly difficult for the gcc or clang developers to support it
>> either. But the ecosystem around C is huge - you can't change the
>> standard in a way that breaks compatibility, which is what would have to
>> happen to make these types full integer types.
>
> What exactly would be the problem in supporting an integer constant
> bigger than 2**64-1? This part is inside the compiler not the library,
> so if a type bigger than 2**64-1 exists, then the constant can be of
> that type.
I don't think the size of the constants itself is an issue. The problem
is that you can't have a fully supported integer type bigger than
intmax_t in C, and intmax_t is constrained by the ABI for the platform.
>
>> I am not convinced that there is any great reason for having full
>> support for 128-bit types here - how often do you really need them?
>
> A few weeks ago there was a thread on clc that made use of 128-bit
> numbers to create perfect hashes of words in a dictionary. And the
> longest word in the dictionary, with the correct ordering of prime
> numbers, would just fit into 128 bits.
You don't mean "perfect hash" here - you mean "hash". "Perfect hash
function" has a specific meaning.
128-bit hashes are, in general, pointless. They are far bigger than
necessary to avoid a realistic possibility of an accidental clash, and
far too small to avoid intentional clashes.
Of course it is possible to find uses of 128-bit integers - especially
for obscure and artificial problems like that thread. I did not suggest
they were never useful - I am suggesting they are /rarely/ useful. And
I am suggesting that it is even rarer that there would be need for
/full/ support for such types. I didn't follow the thread in question,
but I doubt if it needed constants of 128 bit length, or printf support
for them, or support for abs, div, or other integer-related functions in
the C standard library. (Since it was a C discussion, I assume it
didn't need any integer functions from the C++ library.)
Of course arbitrary precision arithmetic involves more work. But that's
what you need for things like public key cryptography. I am not
suggesting that you should use arbitrary precision arithmetic to handle
128-bit numbers - I am saying that it is rare that you need something
bigger than 64 bits but will be satisfied with 128 bits.