New integer type for 128bit+ Data Models


Lee Shallis

Mar 4, 2019, 3:37:26 AM
to ISO C++ Standard - Future Proposals
Here's an extract from a library I'm currently writing that illustrates the name & nature of this integer type
#ifdef __HP__
#define KUL_DM_HP __HP__
#define KUL_CHAR_BIT 8
/* We use these to define u/intptr_t if
__intptr_t_defined is not defined */

#define KUL_IP_TYPE huge
#define KUL_UP_TYPE unsigned huge
#define KUL_SIZEOF_INTPTR __HP__
#define KUL_SIZEOF_SHORT 2
#define KUL_SIZEOF_INT 4
#define KUL_SIZEOF_LONG 4
#define KUL_SIZEOF_LLONG 8
#define KUL_SIZEOF_HUGE __HP__
/* Used the GNU C variants as a base, since more compilers follow that
convention than the Visual C compiler suite; __INT8_MAX__ etc. should
also be defined along with __SCHAR_MAX__ etc. */

#define KUL__HUGE_TYPE huge
#define KUL__PRIhugePFX "p"
#define KUL__SCNhugePFX "p"
#define KUL__INT8_TYPE __int8
#define KUL__INT16_TYPE __int16
#define KUL__INT32_TYPE __int32
#define KUL__INT64_TYPE __int64
#define KUL__INT128_TYPE __int128
#define KUL__PRIint8PFX "hh"
#define KUL__PRIint16PFX "h"
#define KUL__PRIint32PFX ""
#define KUL__PRIint64PFX "ll"
#define KUL__PRIint128PFX "I128"
/* An example of what to define for systems supporting other integer
sizes. This is primarily for developers of specialised software needing
such large sizes, such as games. __huge_defined should also be defined
to prevent libraries like this one from trying to set huge to __int128
and PRIdHUGE etc. to "I128d" etc. */

#ifdef __int256_defined
#define KUL_I256_TYPE signed __int256
#define KUL_U256_TYPE unsigned __int256
#define KUL__PRIint128PFX "I256"
#endif
#ifdef __int512_defined
#define KUL_I512_TYPE signed __int512
#define KUL_U512_TYPE unsigned __int512
#define KUL__PRIint128PFX "I512"
#endif
#endif

As you can see, this data model has a fixed nature; no more of this LLP64 etc. nonsense. huge itself can already be implemented in some cases by GCC (via __int128), hence the mentioned __huge_defined. An alternative define for the data model is __PP__, but I think that is unclear in code, so __HP__ was my initial idea (I have had no others thus far). Since this involves pointer sizes that have yet to be made, I believe this to be without conflict (except perhaps with Hewlett-Packard computers).


Lee Shallis

Mar 4, 2019, 3:47:28 AM
to ISO C++ Standard - Future Proposals
Before anyone points it out: I noticed my mistake at the end of the code after posting it. It should have been:
#ifdef __int256_defined
#define KUL_I256_TYPE signed __int256
#define KUL_U256_TYPE unsigned __int256
#define KUL__PRIint256PFX "I256"
#define KUL__SCNint256PFX "I256"

#endif
#ifdef __int512_defined
#define KUL_I512_TYPE signed __int512
#define KUL_U512_TYPE unsigned __int512
#define KUL__PRIint512PFX "I512"
#define KUL__SCNint512PFX "I512"
#endif


Lee Shallis

Mar 4, 2019, 4:01:56 AM
to ISO C++ Standard - Future Proposals
I also noticed just now that I didn't mention a constant-macro style:
#define HUGE_C(VAL) VAL##P
#define UHUGE_C(VAL) VAL##UP
#define INT256_C(VAL) VAL##I256
#define UINT256_C(VAL) VAL##UI256


Jake Arkinstall

Mar 4, 2019, 4:32:20 AM
to std-pr...@isocpp.org
Huge is a bit of a step up from long, linguistically speaking. I'd (jokingly) prefer "really big" so that we can call the 256-bit integer "mindbogglingly big" in reference to Douglas Adams, given that it would be enough to describe 50 billion diameters of the known universe in units of micro-Planck lengths.

Personally I view any context in which a 128 bit *integer* is required as one which is rather specialised and, as in all specialised settings, resorting to non-standard compiler functionality is so common that the concept of doing so might as well be written in the standard. For most purposes that I can think of for such things (cryptography being a big one), they're usually broken down into smaller chunks, stored contiguously. I certainly can't think of a use case for gaming - I know that GPUs have interfaces with more than 64 bits but I was not aware that this was in the context of single values. As gaming is an example you provide, can you elaborate on what purpose it would have? (only out of casual interest) 

--
You received this message because you are subscribed to the Google Groups "ISO C++ Standard - Future Proposals" group.
To unsubscribe from this group and stop receiving emails from it, send an email to std-proposal...@isocpp.org.
To post to this group, send email to std-pr...@isocpp.org.
To view this discussion on the web visit https://groups.google.com/a/isocpp.org/d/msgid/std-proposals/662618b7-20a4-4706-bb52-170428f98620%40isocpp.org.

Lee Shallis

Mar 4, 2019, 4:57:14 AM
to ISO C++ Standard - Future Proposals
@Jake
I don't actually develop games myself, but as far as use cases within them go, I would think it would be keeping graphics in memory. There's also the position of things such as the land/building blocks in Dragon Quest Builders (that particular case is probably fine with 64-bit, since it already exists). Games are usually designed to use more memory if they have the option of doing so (e.g. FFXV), but fall back otherwise. There are also emulators like PCSX2 which would benefit from having native integers as large as 128-bit. It would also give compiler makers a reasonable way to develop portable binaries, e.g. gcc128, gcc256, gcc512, plus a simple gcc driver that loads the appropriate executable depending on the system pointer size; a portable function would be needed for that, e.g. dlopen("sys_ptr.so"), findaddr("sys_ptr_width"), system("gcc" sys_ptr_width()) (yeah, I know that was bad code, but it's pseudocode, so who cares).

Andrey Semashev

Mar 4, 2019, 5:09:06 AM
to std-pr...@isocpp.org
On 3/4/19 12:32 PM, Jake Arkinstall wrote:
> Huge is a bit of a step up from long, linguistically speaking. I'd
> (jokingly) prefer "really big" so that we can call the 256 bit integer
> "mindbogglingly big" in reference to Douglas Adams, given that it would
> be enough to describe 50 billion diameters of the known universe in
> units of micro planck lengths.

Personally, I find short, long, long long, and continuing the trend,
long long long, etc. to be useless. Typically, if you're using something
other than the default (unsigned) int then you want to know how small or
large the integer is, either to conserve memory, increase data density
or widen the range of values. I would say, future extensions should
follow the intN_t pattern instead of inventing new modifiers for int.

Myriachan

Mar 4, 2019, 3:30:44 PM
to ISO C++ Standard - Future Proposals
On Monday, March 4, 2019 at 2:09:06 AM UTC-8, Andrey Semashev wrote:
Personally, I find short, long, long long, and continuing the trend,
long long long, etc. to be useless. Typically, if you're using something
other than the default (unsigned) int then you want to know how small or
large the integer is, either to conserve memory, increase data density
or widen the range of values. I would say, future extensions should
follow the intN_t pattern instead of inventing new modifiers for int.

The Standard already supports implementations having integer types that are not char/short/int/long/long long, so that doesn't really need to change.  So yeah, I agree that it would be better to just have an int128_t and such.

One problem with the Standard as it exists now is that implementations are not allowed to define the intN_t, int_leastN_t, int_fastN_t series for anything other than N in { 8, 16, 32, 64 }.  This is divergent from the C standard's definition of stdint.h, which does allow values outside this range.  I want to file a proposal to fix this divergence in C++.

Another problem out there is that 128-bit integers are larger than intmax_t on existing implementations.  intmax_t cannot change due to ABI issues on current platforms.  Should the Standard allow the existence of int128_t when intmax_t is the same as int64_t?

One solution to the intmax_t issue is to say that intmax_t is considered the size of preprocessor integers, and is >= long long, but may not be the largest integer type.

GCC and Clang support 128-bit integers on some 64-bit platforms already (AArch64, PowerPC 64 and x86-64 included).  However, it's incomplete support: they don't support integer literals of that size.

Melissa

Thiago Macieira

Mar 4, 2019, 3:34:36 PM
to std-pr...@isocpp.org
On Monday, 4 March 2019 12:30:44 PST Myriachan wrote:
> Another problem out there is that 128-bit integers are larger than intmax_t
> on existing implementations. intmax_t cannot change due to ABI issues on
> current platforms. Should the Standard allow the existence of int128_t
> when intmax_t is the same as int64_t?
>
> One solution to the intmax_t issue is to say that intmax_t is considered
> the size of preprocessor integers, and is >= long long, but may not be the
> largest integer type.

Third option is to deprecate intmax_t.

--
Thiago Macieira - thiago (AT) macieira.info - thiago (AT) kde.org
Software Architect - Intel System Software Products



Myriachan

Mar 4, 2019, 3:51:13 PM
to ISO C++ Standard - Future Proposals
On Monday, March 4, 2019 at 12:34:36 PM UTC-8, Thiago Macieira wrote:
On Monday, 4 March 2019 12:30:44 PST Myriachan wrote:
> Another problem out there is that 128-bit integers are larger than intmax_t
> on existing implementations.  intmax_t cannot change due to ABI issues on
> current platforms.  Should the Standard allow the existence of int128_t
> when intmax_t is the same as int64_t?
>
> One solution to the intmax_t issue is to say that intmax_t is considered
> the size of preprocessor integers, and is >= long long, but may not be the
> largest integer type.

Third option is to deprecate intmax_t.


I wouldn't mind that, either.  But it would require coordination with WG14.

I just noticed that WG14 had proposal N2303 at its most recent meeting about this exact intmax_t issue. The proposal suggested essentially the solution of having intmax_t be the largest type used for anything standard, such as the preprocessor. I'll have to look up what happened to it.


Melissa

Thiago Macieira

Mar 4, 2019, 4:05:35 PM
to std-pr...@isocpp.org
On Monday, 4 March 2019 12:51:10 PST Myriachan wrote:
> I just noticed that WG14 recently had proposal N2303 in WG14's most-recent
> meeting about this exact intmax_t issue. The proposal suggested
> essentially the solution of having intmax_t being a largest type for
> anything standard, such as the preprocessor. I'll have to look up what
> happened to it.
>
> http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2303.pdf

Thanks, appreciated. Such a definition would be convenient (the type in which
the preprocessor calculates stuff), with a legacy naming.

Arthur O'Dwyer

Mar 4, 2019, 6:20:15 PM
to ISO C++ Standard - Future Proposals
On Monday, March 4, 2019 at 3:30:44 PM UTC-5, Myriachan wrote:
On Monday, March 4, 2019 at 2:09:06 AM UTC-8, Andrey Semashev wrote:
Personally, I find short, long, long long, and continuing the trend,
long long long, etc. to be useless. Typically, if you're using something
other than the default (unsigned) int then you want to know how small or
large the integer is, either to conserve memory, increase data density
or widen the range of values. I would say, future extensions should
follow the intN_t pattern instead of inventing new modifiers for int.

The Standard already supports implementations having integer types that are not char/short/int/long/long long, so that doesn't really need to change.  So yeah, I agree that it would be better to just have an int128_t and such.

+1.
 
One problem with the Standard as it exists now is that implementations are not allowed to define the intN_t, int_leastN_t, int_fastN_t series for anything other than N in { 8, 16, 32, 64 }.  This is divergent from the C standard's definition of stdint.h, which does allow values outside this range.  I want to file a proposal to fix this divergence in C++.

+1, although I could have sworn such a proposal was already in existence. (That is, a proposal to permit but not require vendors to define `int24_t`, `int128_t`, and so on.)
 
Another problem out there is that 128-bit integers are larger than intmax_t on existing implementations.  intmax_t cannot change due to ABI issues on current platforms.  Should the Standard allow the existence of int128_t when intmax_t is the same as int64_t?
One solution to the intmax_t issue is to say that intmax_t is considered the size of preprocessor integers, and is >= long long, but may not be the largest integer type.

Yes, that's already what's happening in practice (e.g. on libc++, and on libstdc++ in -std=gnu++17 mode). There exist integral types (types for which is_integral_v<T> is true) which are larger and wider than intmax_t.

GCC and Clang support 128-bit integers on some 64-bit platforms already (AArch64, PowerPC 64 and x86-64 included).  However, it's incomplete support: they don't support integer literals of that size.

Notice that as long as the vendor supports constexpr computations on __int128 (which both Clang and GCC do), then it's pretty easy to write a user-defined literal (UDL) that would allow for example `36893488147419103232_int128` as an integer literal.  So integer literals can be added at the C++ library level; they don't need to be built into the compiler.

Relevant reading:

@Lee: I can't quite figure out the point of the code you posted. It seems to be assuming that __HP__ will be #defined to "16"? Have you talked to anyone at HP about making that happen? ;P

–Arthur

Myriachan

Mar 4, 2019, 7:01:11 PM
to ISO C++ Standard - Future Proposals
On Monday, March 4, 2019 at 3:20:15 PM UTC-8, Arthur O'Dwyer wrote:
On Monday, March 4, 2019 at 3:30:44 PM UTC-5, Myriachan wrote:
On Monday, March 4, 2019 at 2:09:06 AM UTC-8, Andrey Semashev wrote:
Personally, I find short, long, long long, and continuing the trend,
long long long, etc. to be useless. Typically, if you're using something
other than the default (unsigned) int then you want to know how small or
large the integer is, either to conserve memory, increase data density
or widen the range of values. I would say, future extensions should
follow the intN_t pattern instead of inventing new modifiers for int.

The Standard already supports implementations having integer types that are not char/short/int/long/long long, so that doesn't really need to change.  So yeah, I agree that it would be better to just have an int128_t and such.

+1.
 
One problem with the Standard as it exists now is that implementations are not allowed to define the intN_t, int_leastN_t, int_fastN_t series for anything other than N in { 8, 16, 32, 64 }.  This is divergent from the C standard's definition of stdint.h, which does allow values outside this range.  I want to file a proposal to fix this divergence in C++.

+1, although I could have sworn such a proposal was already in existence. (That is, a proposal to permit but not require vendors to define `int24_t`, `int128_t`, and so on.)

Hmm, yeah, that'd be good to know about.  I do remember a proposal that would allow big integers as uintN_t types.  In my opinion, though, uintN_t should not be a class.
 
 
Another problem out there is that 128-bit integers are larger than intmax_t on existing implementations.  intmax_t cannot change due to ABI issues on current platforms.  Should the Standard allow the existence of int128_t when intmax_t is the same as int64_t?
One solution to the intmax_t issue is to say that intmax_t is considered the size of preprocessor integers, and is >= long long, but may not be the largest integer type.

Yes, that's already what's happening in practice (e.g. on libc++, and on libstdc++ in -std=gnu++17 mode). There exist integral types (types for which is_integral_v<T> is true) which are larger and wider than intmax_t.

GCC and Clang support 128-bit integers on some 64-bit platforms already (AArch64, PowerPC 64 and x86-64 included).  However, it's incomplete support: they don't support integer literals of that size.

Notice that as long as the vendor supports constexpr computations on __int128 (which both Clang and GCC do), then it's pretty easy to write a user-defined literal (UDL) that would allow for example `36893488147419103232_int128` as an integer literal.  So integer literals can be added at the C++ library level; they don't need to be built into the compiler.


This is only 99% true.  For example, the following ought to be legal by the definition of the _C macros, but would not be in the case of an implementation based on a hypothetical operator ""i128:

void *meow = UINT128_C(0);

I brought this up regarding Isabella Muerte's P1280R1, where I said that Microsoft couldn't make a 100%-compatible operator ""i32, because i32 is a Visual C++ native size suffix, and the ability to assign a zero literal to a pointer type without a cast could allow it to be detected. So Microsoft could only get 99% of the way there, but it's not likely that this matters to anyone.

Melissa

Jake Arkinstall

Mar 4, 2019, 7:13:52 PM
to std-pr...@isocpp.org
I'm all for the (u)intN_t types (I use them almost exclusively), but I think they should also come with a (u)int_t<N> mapping (and a rounding up to a power of 2), with some limits that are implementation defined, and a specified "minimal" range in the standard - then implementations are free to expand the range to describe as many billions of observable universes as they want, as long as they define at least the minimal range.

As an extension, I'd like to see this put into action in the STL where e.g. the size is known. For example, std::size_t as the size type of a std::array is ludicrous in most practical situations, whereas

std::array<T, N>::size_type = uint_t<static_cast<uint16_t>(ceil(log2(N)))>

just makes intuitive sense to me. C++ programmers hardcoding binary sizes of integers that the compiler can decide from template arguments seems like a strange concept in 2019.


Michael Hava

Mar 20, 2019, 10:36:10 AM
to ISO C++ Standard - Future Proposals
Honestly, the very last thing C++ needs is even more confusion/options on the actual type of size_type…