#ifdef __HP__
#define KUL_DM_HP __HP__
#define KUL_CHAR_BIT 8
/* We use these to define u/intptr_t if
__intptr_t_defined is not defined */
#define KUL_IP_TYPE huge
#define KUL_UP_TYPE unsigned huge
#define KUL_SIZEOF_INTPTR __HP__
#define KUL_SIZEOF_SHORT 2
#define KUL_SIZEOF_INT 4
#define KUL_SIZEOF_LONG 4
#define KUL_SIZEOF_LLONG 8
#define KUL_SIZEOF_HUGE __HP__
/* Used the GNUC variants as the base, since more compilers follow that
convention than the Visual C compiler suite; __INT8_MAX__ etc. should also
be defined along with __SCHAR_MAX__ etc. */
#define KUL__HUGE_TYPE huge
#define KUL__PRIhugePFX "p"
#define KUL__SCNhugePFX "p"
#define KUL__INT8_TYPE __int8
#define KUL__INT16_TYPE __int16
#define KUL__INT32_TYPE __int32
#define KUL__INT64_TYPE __int64
#define KUL__INT128_TYPE __int128
#define KUL__PRIint8PFX "hh"
#define KUL__PRIint16PFX "h"
#define KUL__PRIint32PFX ""
#define KUL__PRIint64PFX "ll"
#define KUL__PRIint128PFX "I128"
/* An example of what to define for systems supporting wider integers.
This is primarily for developers of specialised software needing such
large sizes, such as games. __huge_defined should also be defined to
prevent libraries like this one from trying to set huge to __int128 and
PRIdHUGE etc. to "I128d" etc. */
#ifdef __int256_defined
#define KUL_I256_TYPE signed __int256
#define KUL_U256_TYPE unsigned __int256
#define KUL__PRIint256PFX "I256"
#endif
#ifdef __int512_defined
#define KUL_I512_TYPE signed __int512
#define KUL_U512_TYPE unsigned __int512
#define KUL__PRIint512PFX "I512"
#endif
#endif
#ifdef __int256_defined
#define KUL_I256_TYPE signed __int256
#define KUL_U256_TYPE unsigned __int256
#define KUL__PRIint256PFX "I256"
#define KUL__SCNint256PFX "I256"
#endif
#ifdef __int512_defined
#define KUL_I512_TYPE signed __int512
#define KUL_U512_TYPE unsigned __int512
#define KUL__PRIint512PFX "I512"
#define KUL__SCNint512PFX "I512"
#endif
#define HUGE_C(VAL) VAL##P
#define UHUGE_C(VAL) VAL##UP
#define INT256_C(VAL) VAL##I256
#define UINT256_C(VAL) VAL##UI256
--
You received this message because you are subscribed to the Google Groups "ISO C++ Standard - Future Proposals" group.
To unsubscribe from this group and stop receiving emails from it, send an email to std-proposal...@isocpp.org.
To post to this group, send email to std-pr...@isocpp.org.
To view this discussion on the web visit https://groups.google.com/a/isocpp.org/d/msgid/std-proposals/662618b7-20a4-4706-bb52-170428f98620%40isocpp.org.
Personally, I find short, long, long long, and continuing the trend,
long long long, etc. to be useless. Typically, if you're using something
other than the default (unsigned) int then you want to know how small or
large the integer is, either to conserve memory, increase data density
or widen the range of values. I would say, future extensions should
follow the intN_t pattern instead of inventing new modifiers for int.
On Monday, 4 March 2019 12:30:44 PST Myriachan wrote:
> Another problem out there is that 128-bit integers are larger than intmax_t
> on existing implementations. intmax_t cannot change due to ABI issues on
> current platforms. Should the Standard allow the existence of int128_t
> when intmax_t is the same as int64_t?
>
> One solution to the intmax_t issue is to say that intmax_t is considered
> the size of preprocessor integers, and is >= long long, but may not be the
> largest integer type.
A third option is to deprecate intmax_t.
On Monday, March 4, 2019 at 2:09:06 AM UTC-8, Andrey Semashev wrote:
> Personally, I find short, long, long long, and continuing the trend,
> long long long, etc. to be useless. Typically, if you're using something
> other than the default (unsigned) int then you want to know how small or
> large the integer is, either to conserve memory, increase data density
> or widen the range of values. I would say, future extensions should
> follow the intN_t pattern instead of inventing new modifiers for int.

The Standard already supports implementations having integer types that are not char/short/int/long/long long, so that doesn't really need to change. So yeah, I agree that it would be better to just have an int128_t and such.

One problem with the Standard as it exists now is that implementations are not allowed to define the intN_t, int_leastN_t, int_fastN_t series for anything other than N in { 8, 16, 32, 64 }. This is divergent from the C standard's definition of stdint.h, which does allow values outside this range. I want to file a proposal to fix this divergence in C++.

Another problem out there is that 128-bit integers are larger than intmax_t on existing implementations. intmax_t cannot change due to ABI issues on current platforms. Should the Standard allow the existence of int128_t when intmax_t is the same as int64_t?

One solution to the intmax_t issue is to say that intmax_t is considered the size of preprocessor integers, and is >= long long, but may not be the largest integer type.

GCC and Clang support 128-bit integers on some 64-bit platforms already (AArch64, PowerPC 64 and x86-64 included). However, it's incomplete support: they don't support integer literals of that size.
On Monday, March 4, 2019 at 3:30:44 PM UTC-5, Myriachan wrote:
> On Monday, March 4, 2019 at 2:09:06 AM UTC-8, Andrey Semashev wrote:
>> Personally, I find short, long, long long, and continuing the trend,
>> long long long, etc. to be useless. Typically, if you're using something
>> other than the default (unsigned) int then you want to know how small or
>> large the integer is, either to conserve memory, increase data density
>> or widen the range of values. I would say, future extensions should
>> follow the intN_t pattern instead of inventing new modifiers for int.
>
> The Standard already supports implementations having integer types that are not char/short/int/long/long long, so that doesn't really need to change. So yeah, I agree that it would be better to just have an int128_t and such.

+1.

> One problem with the Standard as it exists now is that implementations are not allowed to define the intN_t, int_leastN_t, int_fastN_t series for anything other than N in { 8, 16, 32, 64 }. This is divergent from the C standard's definition of stdint.h, which does allow values outside this range. I want to file a proposal to fix this divergence in C++.

+1, although I could have sworn such a proposal was already in existence. (That is, a proposal to permit but not require vendors to define `int24_t`, `int128_t`, and so on.)

> Another problem out there is that 128-bit integers are larger than intmax_t on existing implementations. intmax_t cannot change due to ABI issues on current platforms. Should the Standard allow the existence of int128_t when intmax_t is the same as int64_t?
>
> One solution to the intmax_t issue is to say that intmax_t is considered the size of preprocessor integers, and is >= long long, but may not be the largest integer type.

Yes, that's already what's happening in practice (e.g. on libc++, and on libstdc++ in -std=gnu++17 mode). There exist integral types (types for which is_integral_v<T> is true) which are larger and wider than intmax_t.

> GCC and Clang support 128-bit integers on some 64-bit platforms already (AArch64, PowerPC 64 and x86-64 included). However, it's incomplete support: they don't support integer literals of that size.

Notice that as long as the vendor supports constexpr computations on __int128 (which both Clang and GCC do), then it's pretty easy to write a user-defined literal (UDL) that would allow for example `36893488147419103232_int128` as an integer literal. So integer literals can be added at the C++ library level; they don't need to be built into the compiler.
To view this discussion on the web visit https://groups.google.com/a/isocpp.org/d/msgid/std-proposals/1dc12c59-7feb-4ecb-a842-0178cc5c36c8%40isocpp.org.