Stabilize the numeric types on all platforms


HarD Gamer

Aug 22, 2016, 9:17:55 AM
to ISO C++ Standard - Future Proposals
It would be a good idea to make the numeric types have the same size on all platforms. Right now we have the ugly int32_t, uint16_t, and so on; why don't the pretty short, int, etc. have a fixed size?
My proposal:

|    Type     |  Fixed bits  | typedef for |
|-------------|--------------|-------------|
|    short    |      16      |   int16_t   |
|     int     |      32      |   int32_t   |
|  unsigned   | 32 (n >= 0)  |  uint32_t   |
|    long     |      64      |   int64_t   |
|  long long  | is it needed any more?     |
|    float    |      32      |             |
|   double    |      64      |             |
| long double |     128      |             |

D. B.

Aug 22, 2016, 9:20:49 AM
to std-pr...@isocpp.org
I must ask whether you actually think before posting proposals, for instance about whether there's a fundamental reason these things haven't been done already.

In this case it's because the basic types are meant to map efficiently to the most appropriate types on the native platform. Just because you, I, and many other people use x64 does not mean that everyone else, on CPUs with different types and strengths, must be bent to our whim.

And those who need them have the exact-width typedefs. Don't like their "ugly" names? Then make your own typedefs that wrap around them.
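
For instance, a minimal sketch of such wrapper typedefs (the short names i16/i32/u32 below are made up for illustration, not anything the standard provides):

#include <cstdint>

using i16 = std::int16_t;   // exactly 16 bits, where the platform provides it
using i32 = std::int32_t;   // exactly 32 bits
using u32 = std::uint32_t;  // exactly 32 bits, unsigned

i32 add(i32 a, i32 b) { return a + b; } // the same width on every platform that has int32_t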

The fundamental types will not be constrained to fixed widths just to fit your aesthetic preferences, nor will the naming in the stdlib.

Please think before proposing.

Ren Industries

Aug 22, 2016, 9:35:14 AM
to std-pr...@isocpp.org
How in the world would we support long double as 128-bit? I don't know of literally any platform with 128-bit floating point.
Is this meant for POWER9? Because SPARC dropped quad precision, I think.

To answer your question: no, it isn't a good idea.
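
For reference, a small probe of what long double actually is on a given platform (on x86 it is typically the 80-bit extended format; LDBL_MANT_DIG is 113 only on the few ABIs with true IEEE quad precision):

#include <cfloat>
#include <cstdio>

int main() {
    std::printf("sizeof(long double) = %zu bytes, LDBL_MANT_DIG = %d\n",
                sizeof(long double), LDBL_MANT_DIG);
}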


Ville Voutilainen

Aug 22, 2016, 9:47:20 AM
to ISO C++ Standard - Future Proposals
On 22 August 2016 at 16:35, Ren Industries <renind...@gmail.com> wrote:
> How in the world would we support long double as 128-bit? I don't know of
> literally any platform with 128-bit floating point.
> Is this meant for POWER9? Because SPARC dropped quad precision, I think.
>
> To answer your question: no, it isn't a good idea.


Let's see the author of the original post implement the idea and run
the changed semantics on existing codebases.
The results are most likely going to be interesting.

Bo Persson

Aug 22, 2016, 10:32:50 AM
to std-pr...@isocpp.org
If you only want to write code for systems where the sizes are
"correct", you can verify this by adding some checks. For example:

#include <cstdint>
#include <type_traits>

// std::is_same_v is C++17; in C++11/14 use std::is_same<...>::value instead
static_assert(std::is_same_v<int, std::int32_t>,
              "Oops, strange computer!");

and then just use int instead of the ugly name.


One problem with trying to fix the sizes is that the Windows API uses a
32-bit long in both 32-bit and 64-bit mode. There is not much the C++
standard can change now.
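
A quick probe of that difference (illustrative only):

#include <cstdio>

int main() {
    // prints 4 on LLP64 ABIs (64-bit Windows), 8 on LP64 ABIs (64-bit Linux/macOS)
    std::printf("sizeof(long) = %zu\n", sizeof(long));
}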


You might also want to consider this:

http://stackoverflow.com/questions/6971886/exotic-architectures-the-standards-committees-care-about



Bo Persson


Thiago Macieira

Aug 22, 2016, 11:05:05 AM
to std-pr...@isocpp.org
On Monday, 22 August 2016 06:17:54 PDT, HarD Gamer wrote:
> It would be a good idea to make the numeric types have the same size on all platforms.

No. There's a good reason why they have different sizes.

And changing the sizes is a huge ABI break.

--
Thiago Macieira - thiago (AT) macieira.info - thiago (AT) kde.org
Software Architect - Intel Open Source Technology Center

D. B.

Aug 22, 2016, 11:40:00 AM
to std-pr...@isocpp.org
It's quite perplexing how some people underestimate the breadth of uses and platforms on which the language runs, or simply ignore that they are not its only users, to such a degree that they think it can just abandon fundamental concepts to suit their aesthetic whims. Or do they think all other users' cases and platforms are silly, and that the language should be dumbed down to reflect the en vogue platform of the day? When the issues amount to trivialities that can, if one is bothered enough, be solved with typedefs and other mechanisms, it's all the more bizarre that some people think their preferences are (a) important and (b) so important as to justify breaking the language for everyone else.

But that's just how I see it.

HarD Gamer

Aug 22, 2016, 6:06:49 PM
to ISO C++ Standard - Future Proposals
My proposal is to fix the sizes (the table is only an example).
It's also because the standard library never uses the ugly types in functions, classes, and so on.
My proposal is for the cross-platform programmer who needs, for example:

uniform_int_distribution<int> dist(10, numeric_limits<int>::max()); // assumes 32-bit int

use_new_number(dist(eng));

to return a 32-bit integer. What if on another platform int is 16-bit? What if a std function that requires an int must be used? The value will be truncated on that platform.
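
A hedged sketch of how the least-width typedefs already address this; use_new_number and eng stand in for the poster's hypothetical function and random engine:

#include <cstdint>
#include <limits>
#include <random>

void example(std::mt19937& eng) {
    // int_least32_t is at least 32 bits on every conforming implementation,
    // so the upper bound below cannot be truncated on a 16-bit platform
    std::uniform_int_distribution<std::int_least32_t> dist(
        10, std::numeric_limits<std::int_least32_t>::max());
    std::int_least32_t value = dist(eng);
    (void)value; // use_new_number(value);
}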


Ren Industries

Aug 22, 2016, 10:56:48 PM
to std-pr...@isocpp.org
Yes, what if another platform is 16-bit? How in the world are we to change that platform so it supports 32-bit ints?
All the currently working code would break, and for a lot of legacy systems the actual code might be gone.

Even if everyone had the source code to every program, many programs may depend on int being 16 bits.
The code might not make sense using 32-bit integers. For example, some DSPs are still 16-bit; emulating 32 bits on a 16-bit processor will more than halve the speed of the code. And much code on these sorts of units is time-critical; they might not have that many cycles to spare.
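
To illustrate the cost: a 32-bit add on a 16-bit-only ALU becomes two 16-bit adds plus carry propagation, roughly like this sketch:

#include <cstdint>

struct u32emu { std::uint16_t lo, hi; };

u32emu add32(u32emu a, u32emu b) {
    u32emu r;
    r.lo = static_cast<std::uint16_t>(a.lo + b.lo);
    std::uint16_t carry = (r.lo < a.lo) ? 1 : 0;  // unsigned wraparound signals a carry
    r.hi = static_cast<std::uint16_t>(a.hi + b.hi + carry);
    return r;
}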

Finally, the fundamental meaning of "int" would change; int would no longer be the natural size for the platform. So why the hell are we calling it int anymore? It's a 32-bit integer, not a platform-neutral integer. Either we need to name it more strictly (such as int32_t or something), or we need to give a new name to the "platform-neutral" integer, or the "most natural" integer, or the "fastest" integer. Maybe intnatural? natural? fixed?

Do any of those really strike you as a good idea? No?
That's why this is a non-starter.


FrankHB1989

Aug 22, 2016, 11:32:28 PM
to ISO C++ Standard - Future Proposals


On Monday, August 22, 2016 at 9:20:49 PM UTC+8, D. B. wrote:
I must ask whether you actually think before posting proposals, for instance about whether there's a fundamental reason these things haven't been done already.

In this case it's because the basic types are meant to map efficiently to the most appropriate types on the native platform.
This is not quite true now.

The problem is how to keep them "most appropriate". There is no guarantee, literal or normative. This leaves them useless for application programmers in general. Worse, they mislead programmers into wrong assumptions.

In fact there can be multiple interpretations of "native", and programs need no such assumptions. For example, Win32 vs. Cygwin: both can be considered not "native" enough compared to the underlying NT. Whether such implementations use the same integer widths has nothing to do with conformance. (Even though different implementations on such platforms will probably choose types close to those specified by the ISA spec, this is an implementation strategy rather than a general guarantee.) The sense of "appropriate" is entirely up to the particular implementation being used.

If an appropriate ABI is needed, it should always define exact integer types elsewhere, rather than depending on the language specification.

Thus, except for the type `int` (which is needed by `main` and operator overloading), keeping them is merely a matter of compatibility.

 

FrankHB1989

Aug 22, 2016, 11:40:34 PM
to ISO C++ Standard - Future Proposals


On Tuesday, August 23, 2016 at 10:56:48 AM UTC+8, Ren Industries wrote:
Yes, what if another platform is 16-bit? How in the world are we to change that platform so it supports 32-bit ints?
All the currently working code would break, and for a lot of legacy systems the actual code might be gone.

Even if everyone had the source code to every program, many programs may depend on int being 16 bits.
The code might not make sense using 32-bit integers. For example, some DSPs are still 16-bit; emulating 32 bits on a 16-bit processor will more than halve the speed of the code. And much code on these sorts of units is time-critical; they might not have that many cycles to spare.

Finally, the fundamental meaning of "int" would change; int would no longer be the natural size for the platform. So why the hell are we calling it int anymore? It's a 32-bit integer, not a platform-neutral integer. Either we need to name it more strictly (such as int32_t or something), or we need to give a new name to the "platform-neutral" integer, or the "most natural" integer, or the "fastest" integer. Maybe intnatural? natural? fixed?

There is no guarantee about "natural size".

There are already names for the "fastest" ones: int_fastN_t/uint_fastN_t.

It is reasonable that things like intN_t cannot be mandated for every platform, for practical reasons. But why not int_leastN_t/int_fastN_t? Keeping so many integer types without guarantees in the language specification is the true ugliness. Their coexistence also frustrates the normative type system (e.g. unsigned long vs. unsigned, when they are the same thing in the ABI). I would prefer most of them to eventually become conditionally-supported.

BTW, I also don't think we have good reasons for the language specification to provide more wrapper types. They too easily become unusable placeholders, causing compatibility and maintenance nightmares. Why should programmers suffer from an 8-bit `wchar_t` (yeah, damned Bionic)?

 
Do any of those really strike you as a good idea? No?
That's why this is a non-starter.
On Mon, Aug 22, 2016 at 6:06 PM, HarD Gamer <rodrigo...@gmail.com> wrote:
My proposal is to fix the sizes (the table is only an example).
It's also because the standard library never uses the ugly types in functions, classes, and so on.
My proposal is for the cross-platform programmer who needs, for example:

uniform_int_distribution<int> dist(10, numeric_limits<int>::max()); // assumes 32-bit int

use_new_number(dist(eng));

to return a 32-bit integer. What if on another platform int is 16-bit? What if a std function that requires an int must be used? The value will be truncated on that platform.



Nicol Bolas

Aug 23, 2016, 12:18:35 AM
to ISO C++ Standard - Future Proposals
On Monday, August 22, 2016 at 11:40:34 PM UTC-4, FrankHB1989 wrote:
On Tuesday, August 23, 2016 at 10:56:48 AM UTC+8, Ren Industries wrote:
Yes, what if another platform is 16-bit? How in the world are we to change that platform so it supports 32-bit ints?
All the currently working code would break, and for a lot of legacy systems the actual code might be gone.

Even if everyone had the source code to every program, many programs may depend on int being 16 bits.
The code might not make sense using 32-bit integers. For example, some DSPs are still 16-bit; emulating 32 bits on a 16-bit processor will more than halve the speed of the code. And much code on these sorts of units is time-critical; they might not have that many cycles to spare.

Finally, the fundamental meaning of "int" would change; int would no longer be the natural size for the platform. So why the hell are we calling it int anymore? It's a 32-bit integer, not a platform-neutral integer. Either we need to name it more strictly (such as int32_t or something), or we need to give a new name to the "platform-neutral" integer, or the "most natural" integer, or the "fastest" integer. Maybe intnatural? natural? fixed?

There is no guarantee about "natural size".

Yes, there is no guarantee about what that means. But compiler writers would encounter problems if they started arbitrarily violating the expectations of users of that platform. As such, `int` will generally be the "natural size" for the processor in question.

There are already names for the "fastest" ones: int_fastN_t/uint_fastN_t.

It is reasonable that things like intN_t cannot be mandated for every platform, for practical reasons. But why not int_leastN_t/int_fastN_t?

They are mandated for all platforms. The only optional ones are those that require a specific bit depth, and the `intptr_t`s.

Every valid C++11 implementation must provide those types. Even freestanding implementations, which are permitted to provide only a subset of the full standard library, are required to provide these types.
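
A small check of that guarantee (only the minimum widths are promised; the actual sizes vary by platform):

#include <cstdint>
#include <cstdio>

int main() {
    std::int_least32_t least = 0; // the smallest integer type with at least 32 bits
    std::int_fast32_t fast = 0;   // a "fast" integer type with at least 32 bits
    std::printf("least32: %zu bytes, fast32: %zu bytes\n", sizeof(least), sizeof(fast));
}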
 

FrankHB1989

Aug 23, 2016, 12:36:13 AM
to ISO C++ Standard - Future Proposals


On Tuesday, August 23, 2016 at 12:18:35 PM UTC+8, Nicol Bolas wrote:
On Monday, August 22, 2016 at 11:40:34 PM UTC-4, FrankHB1989 wrote:
On Tuesday, August 23, 2016 at 10:56:48 AM UTC+8, Ren Industries wrote:
Yes, what if another platform is 16-bit? How in the world are we to change that platform so it supports 32-bit ints?
All the currently working code would break, and for a lot of legacy systems the actual code might be gone.

Even if everyone had the source code to every program, many programs may depend on int being 16 bits.
The code might not make sense using 32-bit integers. For example, some DSPs are still 16-bit; emulating 32 bits on a 16-bit processor will more than halve the speed of the code. And much code on these sorts of units is time-critical; they might not have that many cycles to spare.

Finally, the fundamental meaning of "int" would change; int would no longer be the natural size for the platform. So why the hell are we calling it int anymore? It's a 32-bit integer, not a platform-neutral integer. Either we need to name it more strictly (such as int32_t or something), or we need to give a new name to the "platform-neutral" integer, or the "most natural" integer, or the "fastest" integer. Maybe intnatural? natural? fixed?

There is no guarantee about "natural size".

Yes, there is no guarantee about what that means. But compiler writers would encounter problems if they started arbitrarily violating the expectations of users of that platform. As such, `int` will generally be the "natural size" for the processor in question.

This still depends on what "natural" means, and to what degree. If I want my code to be correct across the whole language (especially when dealing with exotic platforms; I actually do assert on CHAR_BIT, etc.), I can rely on no such assumptions.

To me `int` only means:
  • the return type of the global `main`
  • the required parameter type for overloading postfix `++`/`--`
  • the prvalue type decayed from `errno` (`errno` itself can be volatile-qualified)
  • binary interop required by the underlying ABI
  • some other things required or introduced by platform-specific APIs
I don't use `int` elsewhere, because it does nothing I want. Even the cases above are still problematic in essence, but in practice I can avoid their type-safety issues (yeah, printf formats are always a pain for wrapper types, too).

The last two bullets also apply to the other fundamental integer types, and they are the only cases where "natural" (rather than merely historical) reasons make sense. I think the standard can do little about them either. So the type `int` is not essentially special, beyond having been hard-coded into some language rules a long time ago.
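
A small illustration of the first two bullets, where `int` is hard-coded by the core language:

struct Counter {
    int value = 0;
    Counter& operator++() { ++value; return *this; }                   // prefix
    Counter operator++(int) { Counter t = *this; ++value; return t; }  // the dummy parameter must be int
};

int main() { // the return type of the global main must be int
    Counter c;
    ++c;
    c++;
}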
 
There are already names for the "fastest" ones: int_fastN_t/uint_fastN_t.

It is reasonable that things like intN_t cannot be mandated for every platform, for practical reasons. But why not int_leastN_t/int_fastN_t?

They are mandated for all platforms. The only optional ones are those that require a specific bit depth, and the `intptr_t`s.

Every valid C++11 implementation must provide those types. Even freestanding implementations, which are permitted to provide only a subset of the full standard library, are required to provide these types.
OK, I should have said: they are not mandated by the core language; that is, they are not guaranteed to be both built-in and portable.

Thiago Macieira

Aug 23, 2016, 12:49:54 AM
to std-pr...@isocpp.org
On Monday, 22 August 2016 21:36:12 PDT, FrankHB1989 wrote:
> OK, I should have said: they are not mandated *by the core language*; that
> is, they are not guaranteed to be both built-in and portable.

They are mandated in a header that complements the core language. Just like
<initializer_list>, <type_traits>, and <limits>, those types cannot be
implemented by anyone except the compiler vendor.

Therefore, for all intents and purposes, the minimum-width and fast integer
types are guaranteed to be portable.

The exact-width ones aren't guaranteed to be portable because their concept
isn't portable. Some machines don't have 8-, 16-, 32-, and 64-bit types (their
word sizes could be multiples of 9 bits). Also note how int32_t is required to
use two's complement representation, which again excludes some machines.
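
A hedged sketch of writing around that: the INT32_MAX macro from <cstdint> is defined exactly when int32_t exists, so portable code can detect the type and fall back to a least-width type otherwise.

#include <cstdint>

#ifdef INT32_MAX
using exact32 = std::int32_t;       // exactly 32 bits, two's complement, no padding
#else
using exact32 = std::int_least32_t; // fallback: at least 32 bits
#endif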

D. B.

Aug 23, 2016, 1:24:12 AM
to std-pr...@isocpp.org
On Tue, Aug 23, 2016 at 5:49 AM, Thiago Macieira <thi...@macieira.org> wrote:
Also note how int32_t is required to use two's
complement representation, which again excludes some machines.

Good point, but it applies to all the exact-width signed types in <cstdint>, not just the 32-bit one.

FrankHB1989

Aug 23, 2016, 7:31:20 AM
to ISO C++ Standard - Future Proposals


On Tuesday, August 23, 2016 at 12:49:54 PM UTC+8, Thiago Macieira wrote:
On Monday, 22 August 2016 21:36:12 PDT, FrankHB1989 wrote:
> OK, I should have said: they are not mandated *by the core language*; that
> is, they are not guaranteed to be both built-in and portable.

They are mandated in a header that complements the core language. Just like
<initializer_list>, <type_traits>, and <limits>, those types cannot be
implemented by anyone except the compiler vendor.

Therefore, for all intents and purposes, the minimum-width and fast integer
types are guaranteed to be portable.

The exact-width ones aren't guaranteed to be portable because their concept
isn't portable. Some machines don't have 8-, 16-, 32-, and 64-bit types (their
word sizes could be multiples of 9 bits). Also note how int32_t is required to
use two's complement representation, which again excludes some machines.

Good point about two's complement. However, all of this is what I already knew. I still wonder why these (at least partially) width-aware types are not in the core language while the traditional fundamental integer types still are. From a systemic view of the design, this is an inappropriate inversion of abstraction layers: every implementation in reality builds these fundamental types on integer types of known width, provided by an underlying specification or simply chosen artificially. Since we will always have some exact-width integers somewhere in the implementation stack, why not lift the uncertainty to the higher level? I don't think user code declaring `short` or `long` is very useful, but removing them would give us an opportunity to make the core language cleaner, with less confusion, better matching the principle of zero-cost abstraction, and a little easier to implement.
Or perhaps it would be better to keep them all out of the core language?


Nicol Bolas

Aug 23, 2016, 9:33:09 AM
to ISO C++ Standard - Future Proposals
On Tuesday, August 23, 2016 at 7:31:20 AM UTC-4, FrankHB1989 wrote:
On Tuesday, August 23, 2016 at 12:49:54 PM UTC+8, Thiago Macieira wrote:
On Monday, 22 August 2016 21:36:12 PDT, FrankHB1989 wrote:
> OK, I should have said: they are not mandated *by the core language*; that
> is, they are not guaranteed to be both built-in and portable.

They are mandated in a header that complements the core language. Just like
<initializer_list>, <type_traits>, and <limits>, those types cannot be
implemented by anyone except the compiler vendor.

Therefore, for all intents and purposes, the minimum-width and fast integer
types are guaranteed to be portable.

The exact-width ones aren't guaranteed to be portable because their concept
isn't portable. Some machines don't have 8-, 16-, 32-, and 64-bit types (their
word sizes could be multiples of 9 bits). Also note how int32_t is required to
use two's complement representation, which again excludes some machines.

Good point about two's complement. However, all of this is what I already knew. I still wonder why these (at least partially) width-aware types are not in the core language while the traditional fundamental integer types still are.

... why does it matter whether it's a reserved keyword or not? It's there; it's available on every platform that provides C++11 or better. You can use it to solve your problem with types not having a relatively fixed size.

The placement of something in the standard library versus the core language is not based on some evaluation of the merit of the feature.

Thiago Macieira

Aug 23, 2016, 10:45:52 AM
to std-pr...@isocpp.org
On Tuesday, 23 August 2016 04:31:20 PDT, FrankHB1989 wrote:
> Good point about two's complement. However, all of this is what I already
> knew. I still wonder why these (at least partially) width-aware types are
> not in the core language while the traditional fundamental integer types
> still are. From a systemic view of the design, this is an inappropriate
> inversion of abstraction layers: every implementation in reality builds
> these fundamental types on integer types of known width, provided by an
> underlying specification or simply chosen artificially. Since we will
> always have some exact-width integers somewhere in the implementation
> stack, why not lift the uncertainty to the higher level? I don't think
> user code declaring `short` or `long` is very useful, but removing them
> would give us an opportunity to make the core language cleaner, with less
> confusion, better matching the principle of zero-cost abstraction, and a
> little easier to implement.
> Or perhaps it would be better to keep them all out of the core language?

Like Nicol said, I don't see what the issue is with having to #include
something before you can use the types. I find that having the exact-width
types *not* be reserved keywords is the better option, because it doesn't break
code that has myNS::int32_t, nor code that was ABI-dependent but fully valid
and defined (and defines) int32_t in the global namespace.
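
A minimal illustration of that breakage (myNS is hypothetical): code like the
following is valid today precisely because int32_t is a typedef, not a keyword.

namespace myNS {
    using int32_t = int; // legal now; ill-formed if int32_t became a reserved word
}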

Moreover, as we've said time and again, the exact-width integer types aren't
required to exist in all implementations. Your statement that "we will always
have some exact width integers" is incorrect. So what's the point of having a
reserved word that can't be used?

You may not think that using short or long is very useful, but others may
disagree with you. I personally agree in part and disagree in part. More to
the point, changing the fundamental integer types now is a simple non-starter.
Leave them alone.

Farid Mehrabi

Aug 25, 2016, 2:13:07 PM
to std-proposals
Going O.T., but it seems essential: please do not blame anybody (the OP) for needing more time to become like you. A technical answer with enough details means everything.

thanks everyone,
FM.





--
how am I supposed to end the twisted road of  your hair in such a dark night??
unless the candle of your face does shed some light upon my way!!!

Tony V E

Aug 25, 2016, 5:25:31 PM
to Standard Proposals
On Thu, Aug 25, 2016 at 2:12 PM, Farid Mehrabi <farid....@gmail.com> wrote:
Going O.T., but it seems essential: please do not blame anybody (the OP) for needing more time to become like you. A technical answer with enough details means everything.

thanks everyone,
FM.


+1

--
Be seeing you,
Tony

FrankHB1989

Aug 30, 2016, 7:48:42 AM
to ISO C++ Standard - Future Proposals
 
On Tuesday, August 23, 2016 at 9:33:09 PM UTC+8, Nicol Bolas wrote:
I don't think "solving the problem you have with types not having a relatively fixed size" is useful in reality. Overflow of signed integers leads to UB. Wrapping behavior on unsigned integers causes unexpected results too easily. Without a concrete limit on the range, they are dangerous (if not outright useless). So we need to know the size sooner or later, and once the size is needed, these types are almost always suboptimal choices. With such types we actually have some implicit weak guarantees, e.g. INT_MAX must be at least 32767, but that's too subtle for readability. I find nothing that reasonably encourages such use, so keeping them will probably cause teachability problems.
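
A minimal illustration of the overflow asymmetry described above:

#include <climits>
#include <cstdio>

int main() {
    unsigned u = UINT_MAX;
    ++u;                    // well-defined: wraps around to 0
    std::printf("%u\n", u); // prints 0

    int i = INT_MAX;
    // ++i;                 // would be undefined behavior: signed overflow
    (void)i;
}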

Well, I admit I am not motivated enough to push the change forward at the moment, since it does not bring much immediate benefit. But reaching consensus on some points of the design should be good.

The placement of something in the standard library versus the core language is not based on some evaluation of the merit of the feature.

Please also note that "it works" should not be a normal argument in a standardization process. There are great risks in simply piling up features without carefully considering enough alternatives, and this will probably waste a lot of time in the future (e.g. `export` and dynamic exception specifications). Doing the "right thing" pedantically is less likely to be premature optimization in such cases than it is in ordinary projects.

There is at least one topic of concern here: the return type of `std::uncaught_exceptions`.



FrankHB1989

Aug 30, 2016, 8:15:01 AM
to ISO C++ Standard - Future Proposals


On Tuesday, August 23, 2016 at 10:45:52 PM UTC+8, Thiago Macieira wrote:
On Tuesday, 23 August 2016 04:31:20 PDT, FrankHB1989 wrote:
> Good point about two's complement. However, all of this is what I already
> knew. I still wonder why these (at least partially) width-aware types are
> not in the core language while the traditional fundamental integer types
> still are. From a systemic view of the design, this is an inappropriate
> inversion of abstraction layers: every implementation in reality builds
> these fundamental types on integer types of known width, provided by an
> underlying specification or simply chosen artificially. Since we will
> always have some exact-width integers somewhere in the implementation
> stack, why not lift the uncertainty to the higher level? I don't think
> user code declaring `short` or `long` is very useful, but removing them
> would give us an opportunity to make the core language cleaner, with less
> confusion, better matching the principle of zero-cost abstraction, and a
> little easier to implement.
> Or perhaps it would be better to keep them all out of the core language?

Like Nicol said, I don't see what the issue is with having to #include
something before you can use the types. I find that having the exact-width
types *not* be reserved keywords is the better option, because it doesn't break
code that has myNS::int32_t, nor code that was ABI-dependent but fully valid
and defined (and defines) int32_t in the global namespace.

Actually I don't much care about the #include part. Keeping the fixed-width types out of the hard-coded keywords also sounds better to me, though the ABI issues here exist only for historical reasons; e.g. if we did not have them in the current language, we would probably not reserve mangling for troublesome things like `unsigned long`, so many problems would not exist at all.

However, I do care why the so-called "fundamental" types are there even though there are no good reasons to use them. And why not change that, besides the compatibility issues?

Moreover, as we've said time and again, the exact-width integer types aren't
required to exist in all implementations. Your statement that "we will always
have some exact width integers" is incorrect. So what's the point of having a
reserved word that can't be used?

No. I did not mean types of the C++ language, but the fixed-width integer notions in the whole implementation stack, including ABI references like the ISA spec, internal specifications of the hardware design, and so on. Since you can't physically implement an "unspecified" width effectively, you have to know at least one exact width. That width is then erased by a high-level language like C or C++, before being recovered again at an even higher level of abstraction. That is effort wasted on a leaking abstraction.

BTW, I wish fixed-width integer types with specified strategies (two's complement, wrapping behavior, UB or not on overflow, etc.) would be mandated some day, at least for hosted implementations, much as BS put arbitrary-width integer types on the standardization wishlist (IIRC).

You may not think that using short or long is very useful, but others may
disagree with you. I personally agree in part and disagree in part. More to
the point, changing the fundamental integer types now is a simple non-starter.
Leave them alone.

It is still arguable what deserves to be truly "fundamental", before any change to them is proposed.
 

Tony V E

Aug 30, 2016, 5:50:24 PM
to Standard Proposals
On Tue, Aug 30, 2016 at 7:48 AM, FrankHB1989 <frank...@gmail.com> wrote:
 

There is at least one topic of concern here: the return type of `std::uncaught_exceptions`.




IIRC (I don't recall which meeting or which subgroup, however) that was discussed in committee. We intentionally picked int. It wasn't just "go with whatever was in the proposal"; it was discussed and debated. Consensus was int, and I don't think it was contentious.


For the general topic of signed vs unsigned, see Stroustrup et al at

https://www.youtube.com/watch?v=Puio5dly9N8

First mentioned at 9:50,
41:08 for the long answer
1:02:50 for where they just give the short answer: “sorry”

