| Type        | Fixed bits        | typedef for |
|-------------|-------------------|-------------|
| short       | 16                | int16_t     |
| int         | 32                | int32_t     |
| unsigned    | 32 (n >= 0)       | uint32_t    |
| long        | 64                | int64_t     |
| long long   | is it needed now? |             |
| float       | 32                |             |
| double      | 64                |             |
| long double | 128               |             |
--
You received this message because you are subscribed to the Google Groups "ISO C++ Standard - Future Proposals" group.
To unsubscribe from this group and stop receiving emails from it, send an email to std-proposals+unsubscribe@isocpp.org.
To post to this group, send email to std-pr...@isocpp.org.
To view this discussion on the web visit https://groups.google.com/a/isocpp.org/d/msgid/std-proposals/CACGiwhGMecEx6gOSyOYUXP8S051QCOQ3Om2jg_Ahj_w7b%2BNL1g%40mail.gmail.com.
I must ask whether you ever actually think these things through before posting proposals, for example whether there is a fundamental reason they haven't been done already.
In this case it's because the basic types are meant to map efficiently to the most appropriate types on the native platform.
Yes, what if another platform is 16-bit? How in the world are we to change that platform so it supports 32-bit ints? All the currently working code would break, and for a lot of legacy systems the actual code might be gone. Even if everyone had the source code to every program, many programs may depend on int being 16 bits; the code might not make sense using 32-bit integers. For example, some DSPs are still 16-bit; emulating 32 bits on a 16-bit processor will more than halve the speed of the code. And much code on these sorts of units is time-critical; they might not have many cycles to spare.
Finally, the fundamental meaning of "int" would change; int would no longer be the natural size for the platform. So why the hell are we calling it int anymore? It's a 32 bit integer, not a platform neutral integer. Either we need to name it more strictly (such as int32_t or something) or we need to give a new name to "platform neutral" integer or "most natural" integer or "fastest" integer. Maybe intnatural? natural? fixed?
Do any of those really strike you as a good idea? No? That's why this is a non-starter.
On Mon, Aug 22, 2016 at 6:06 PM, HarD Gamer <rodrigo...@gmail.com> wrote:
My proposal is to fix the sizes (the table is only an example). It is only feasible because the standard library never uses the ugly fixed-width names in functions, classes, and so on. My proposal is for cross-platform programmers who need, for example:
uniform_int_distribution<int> dist(10, numeric_limits<int>::max()); // assume int is 32-bit
use_new_number(dist(eng));
returning a 32-bit integer. What if on another platform int is 16-bit? What if it is necessary to use a std function that requires an int? The value will be truncated relative to the 32-bit platform.
On Tuesday, August 23, 2016 at 10:56:48 AM UTC+8, Ren Industries wrote:
> [...] Either we need to name it more strictly (such as int32_t or something) or we need to give a new name to the "platform neutral" integer or "most natural" integer or "fastest" integer. Maybe intnatural? natural? fixed?
There is no guarantee about "natural size".
There are already names for the "fastest" ones: int_fastN_t/uint_fastN_t.
It is reasonable that things like intN_t cannot be mandated for every platform for practical reasons. But why not int_leastN_t/int_fastN_t?
On Monday, August 22, 2016 at 11:40:34 PM UTC-4, FrankHB1989 wrote:
> There is no guarantee about "natural size".
Yes, there is no guarantee about what that means. But compiler writers would encounter problems if they start arbitrarily violating the expectations of users of that platform. As such, `int` will generally be the "natural size" for the processor in question.
> There are already names for the "fastest" ones: int_fastN_t/uint_fastN_t.
> It is reasonable that things like intN_t cannot be mandated for every platform for practical reasons. But why not int_leastN_t/int_fastN_t?
They are mandated for all platforms. The only optional ones are those with a specific bit depth and the `intptr_t` types.
Every valid C++11 implementation must provide those types. Even freestanding implementations, which are permitted to provide only a subset of the full standard library, are required to provide these types.
On Monday, August 22, 2016 21:36:12 PDT FrankHB1989 wrote:
> OK, I should have said, they are not mandated *by the core language*, or,
> not guaranteed both built-in and portable.
They are mandated in a header that complements the core language. Just like
<initializer_list>, <type_traits> and <limits>, those types cannot be
implemented by anyone except the compiler vendor.
Therefore, for all intents and purposes, the minimum-width and fast integer
types are guaranteed to be portable.
The exact-width ones aren't guaranteed to be portable because their concept
isn't portable. Some machines don't have 8-, 16-, 32- and 64-bit types (they
could be multiples of 9 bits). Also note how int32_t is required to use two's
complement representation, which again excludes some machines.
On Tuesday, August 23, 2016 at 12:49:54 PM UTC+8, Thiago Macieira wrote:
> The exact-width ones aren't guaranteed to be portable because their concept
> isn't portable. Some machines don't have 8-, 16-, 32- and 64-bit types (they
> could be multiples of 9 bits). Also note how int32_t is required to use two's
> complement representation, which again excludes some machines.
Good point about two's complement. However, all of this is what I already knew. I still wonder why these (at least partially) width-aware types are not in the core language while the traditional fundamental integer types are.
Going off-topic, but this seems essential: please do not blame anybody (the OP) for needing more time to become like you. A technical answer with enough details means everything. Thanks, everyone. FM.
The placement of something in the standard library or core language is not based on some evaluation of the merit of the feature.
On Tuesday, August 23, 2016 04:31:20 PDT FrankHB1989 wrote:
> Good point about 2's complements. However, all of these are what I have
> already known. I still wonder why these (at least partially) width-aware
> types are not in the core language while traditional fundamental integer
> types are still in it. In a systemic view of design, this is an
> inappropriate inversion of abstraction layers -- all implementations in
> reality build these fundamental types on some integer types with known
> width provided by underlying specification, or just with the width
> specified artificially. Since we will always have some exact-width integers
> somewhere in the implementation stack, why not lift the uncertainty to the
> higher level? Though I don't think users declaring `short` or `long` is very
> useful, it gives us an opportunity to make the core language cleaner,
> with less confusion, better meeting the principle of zero-cost
> abstraction mentally, and a little easier to implement.
> Or perhaps better keeping them all out of the core language?
Like Nicol said, I don't see what the issue is about having to #include
something before you can use the types. I find that having the exact-width
types *not* be reserved keywords is a better option, because it won't break
code that has myNS::int32_t, nor code that was ABI-dependent but fully valid
that defined (and defines) int32_t in the global namespace.
Moreover, as we've said time and again, the exact-width integer types aren't
required to exist in all implementations. Your statement saying "we will
always have some exact width integers" is incorrect. So what's the point of
having a reserved word that can't be used?
You may not think that using short or long is very useful, but others may
disagree with you. I personally agree partially and disagree on other parts.
More to the point, changing the fundamental integer types now is a simple non-
starter. Leave them alone.
There is at least one related topic worth mentioning here: the return type of `std::uncaught_exceptions`.
https://www.youtube.com/watch?v=Puio5dly9N8
First mentioned at 9:50,
41:08 for the long answer
1:02:50 for where they just give the short answer: “sorry”