There is a /huge/ difference between a flag that /adds/ semantics to the
C++ standard language, and one that /removes/ semantics.
Currently, gcc with an appropriate -std flag is quite close to standards
compliant for a variety of C++ (and C) standards. It is not perfect,
but AFAIK it's as close as any other mainstream compiler. This is the
case regardless of -fwrapv or -fno-wrapv (the default). A user can
choose to add extra semantics to the language with -fwrapv if they find
it useful and are willing to pay the price in lost optimisations and
weaker bug-catching and safety tools.
But if C++20 /requires/ wrapping semantics, then there is no longer a
choice. If the compiler allows "-fno-wrapv", then (in that mode) it
will no longer be a compliant C++20 compiler. Code written correctly to
C++20 could fail to work correctly - and the failures could easily be
silent. (If you use a special mode like "-fno-exceptions", code that
relies on exceptions at least fails loudly at compile time.) Such
silent non-compliance would not be acceptable.
>
>> Turning broken code with arbitrary bad behaviour into
>> broken code with predictable bad behaviour is not particularly useful.
>
> That is where I have different experience. At least 60% of effort put
> into development seems to be about fixing defects and part of that cost
> is caused by unreliability or instability of misbehavior that makes it
> harder to figure what actually is wrong. Lot of debugging tools are
> basically turning bad behavior that does who knows what into reliable
> bad behavior that raises signals, throws exceptions, breaks or
> terminates.
You misunderstand me. For debugging purposes, consistent bad behaviour
is useful. If the program crashes or does something weird in the same
place each test run, it is a lot easier to find and fix the problem.
But when you are testing your 16-bit MS-DOS apple cart program, and you
add your 32,768th apple, it doesn't matter if the program prints out the
balance as -32,768 apples or fails to print anything at all - either
way, you can find the problem.
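For concreteness, a minimal sketch of that wrap-around using a 16-bit
type (the apple cart program itself is imaginary; this just shows the
arithmetic):

#include <cstdint>
#include <cstdio>

int main() {
    std::int16_t apples = 32767;   // the cart is full
    // On a real 16-bit int platform the addition itself overflows -
    // undefined behaviour in standard C++, wrapping with -fwrapv.
    // Here the operands promote to a wider int first, so it is the
    // narrowing conversion back that wraps (implementation-defined
    // before C++20, modular since C++20).
    apples = static_cast<std::int16_t>(apples + 1);
    std::printf("balance: %d apples\n", apples);  // -32768 on two's complement
}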
With undefined behaviour on overflow, rather than wrapping behaviour,
there is a solid chance you will get consistent effects from any
particular compilation - but the details may change with other changes
to the code or other compilation options. What you don't get is
/predictable/ effects - but that doesn't matter for debugging. If you
had been predicting effects properly, you wouldn't have the bug in the
first place!
And with overflow being undefined behaviour, and therefore always an
error, you can use tools like gcc's sanitizer to give you a clear,
consistent and helpful debug message when the problem occurs.
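To illustrate (the exact wording of the message is the sanitizer's own
and may differ between versions), here is a deliberately overflowing
snippet built with -fsanitize=signed-integer-overflow:

#include <climits>

int main() {
    int x = INT_MAX;
    // Compile with: g++ -fsanitize=signed-integer-overflow overflow.cc
    // At run time the sanitizer reports something like:
    //   runtime error: signed integer overflow: 2147483647 + 1 cannot
    //   be represented in type 'int'
    // at this exact line, every run, instead of silently wrapping.
    return x + 1;
}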
Any tool or feature that makes it easier to find mistakes as early as
possible is a good tool in my book. Having integer overflow as
undefined behaviour makes this easier - that is the most important
reason I have for wanting it. Wrapping overflow makes bugs /harder/ to
find. It increases the chance that the errors will go unnoticed, by
quietly continuing with invalid results.
>
>> So you think this feature - wrapping overflows - is so useful and
>> important that it should be added to the language and forced upon all
>> compilers, and yet it also is so unlikely to be correct code that
>> compilers should warn about it whenever possible and require specific
>> settings to disable the warning? Isn't that a little inconsistent?
>
> Yes, wrapping feature makes logical defects to behave more predictably
> and Yes, I consider it good.
That makes no sense. You prefer the behaviour of your defects to be
predictable? To be useful, that would mean you would have to know your
code has a defect - and in that case, surely you would fix the code
rather than wanting to run it with predictable errors?
The nearest use would be for debugging, where you want to work backwards
from the bad effect you get to figure out what caused it.
Predictability is sometimes useful then, but two's complement wrapping
is not nearly as helpful as trap-on-overflow behaviour - which would be
impossible when you have wrapping as part of the language definition.
> Yes, wrapping feature is sometimes useful
> also on its own.
Occasionally, yes. It happens often enough that it is important the
language has this capability. It happens rarely enough that it is not a
problem to have somewhat ugly code to do it. On any platform with
two's complement representation of signed integers, you can handle
this with conversions to and from unsigned types. It's not hard
to wrap it all in a simple class if you want neater code.
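As a rough sketch of what I mean (the class name is mine, and it
assumes 32-bit int32_t is the type you care about):

#include <cstdint>

// Do the arithmetic on the corresponding unsigned type, where
// wrap-around is fully defined, and convert back.  The conversion
// back to int32_t is implementation-defined before C++20 and modular
// (two's complement) from C++20 on - no undefined behaviour anywhere.
class wrap_int32 {
public:
    constexpr explicit wrap_int32(std::int32_t v) : value_(v) {}

    friend constexpr wrap_int32 operator+(wrap_int32 a, wrap_int32 b) {
        return wrap_int32(static_cast<std::int32_t>(
            static_cast<std::uint32_t>(a.value_) +
            static_cast<std::uint32_t>(b.value_)));
    }

    constexpr std::int32_t get() const { return value_; }

private:
    std::int32_t value_;
};

// Usage: wrap_int32(INT32_MAX) + wrap_int32(1) gives INT32_MIN.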
> Yes, there are compiler intrinsic functions so I can
> live without the feature.
Unsigned arithmetic is not a compiler intrinsic. You don't need to
detect overflow to get wrapping behaviour.
(Having said that, I would like to see some standard library classes
that cover the features of many common intrinsics, such as gcc and
clang's overflow builtins.)
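For reference, the sort of thing I mean - GCC and Clang both provide
__builtin_add_overflow, and a standard library wrapper could present
the same shape:

#include <cstdio>

// Returns true if the mathematical sum of a and b does not fit in an
// int; the wrapped result is stored in out either way.
bool checked_add(int a, int b, int& out) {
    return __builtin_add_overflow(a, b, &out);
}

int main() {
    int result;
    if (checked_add(2'000'000'000, 2'000'000'000, result)) {
        std::puts("overflow detected");
    } else {
        std::printf("sum = %d\n", result);
    }
}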
> Yes, I would still like warnings. Yes, I can
> live without warnings. Yes, way to disable warnings can be good. Yes,
> way to enable non-wrapping optimizations can be good.
If wrapping behaviour is required by the language standard, you won't
get these.
> Yes, I can live
> without non-wrapping optimizations in 95% of code and do those manually
> in rest. I am not sure how it all is inconsistent. There just are
> priorities what I favor more and these priorities are likely bit
> different for all people.
>
What is inconsistent is wanting the language to mandate a feature
whose use is almost always an indication of incorrect code.
>> I'd be much happier to see some standardised pragmas like:
>>
>> #pragma STDC_OVERFLOW_WRAP
>> #pragma STDC_OVERFLOW_TRAP
>> #pragma STDC_OVERFLOW_UNDEFINED
>>
>> (or whatever variant is preferred)
>
> That would be even better indeed, but what the proposal suggested
> was simpler to implement and to add to standard leaving that possible
> but beyond scope of it.
>
The proposal suggests something that I am convinced is a huge step
backwards (making wrapping behaviour required), while failing to provide
a simple, standardised way to let programmers make choices.
(I note the proposal suggests, as you do, that compilers could still
have flags like "-fno-wrapv". To me, this shows that the proposal
authors are making the same mistake you are about the consequences of
changing the semantics of the language.)
>>
>> Again, what do you think does not work with -fwrapv?
>
> I have used it rarely and experimentally. It did sometimes optimize
> int i loops when it should not.
That tells me nothing, I am afraid.
> Yes, loop optimization might give up
> to 10% performance difference on extreme case but that is again about
> requiring some "-fno-wrapv" to allow compiler to do that optimization
> not other way around. When currently "-fwrapv" is defective and
Without anything more to go on than a vague memory of something that
wasn't quite what you expected, I will continue to assume that
"-fwrapv" is
/not/ defective and works exactly as it says it will. Show me examples,
bug reports, or something more definite and I will change that
assumption. As I have said before, no one claims gcc is bug-free.
> std::numeric_limits<int>::is_modulo is false then it is valid
> to weasel out of each such case by saying that it is not a bug.
In gcc, is_modulo is false because ints are not guaranteed to be
wrapping. They /might/ have wrapping behaviour, depending on the flags
and the code in question, but won't necessarily have it.
Should the value of is_modulo be dependent on whether -fwrapv is in
force or not? I don't think so - I think making these features
dependent on compiler settings would open a large can of worms.
Remember, you are free to change the -fwrapv setting in the middle of a
file using pragmas, or with function attributes - I would not like to
see is_modulo changing to fit. It is better to keep it at the
pessimistic "false" setting. (That is, at least, my opinion on this
particular matter - but I can see why some people could think differently.)
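For anyone unfamiliar with those mechanisms, a sketch of what I mean -
I believe GCC's optimize attribute and pragma accept -f options by
name, though I have not checked every version:

// Wrapping semantics for one function, whatever the command-line flags:
__attribute__((optimize("wrapv")))
int wrapping_sum(int a, int b) {
    return a + b;   // compiled as if with -fwrapv
}

// ...or for a region of a file:
#pragma GCC push_options
#pragma GCC optimize("wrapv")
int also_wrapping(int a, int b) { return a + b; }
#pragma GCC pop_options

// std::numeric_limits<int>::is_modulo stays false throughout - a
// single trait cannot track per-function settings like these.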
>
>>> The well-defined behavior can not make wrong code to give
>>> correct answers. The wrong answers will just come more consistently
>>> and portably with wrap and so I can trust unit tests for my
>>> math of embedded system ran on PC bit more.
>>>
>>
>> If your unit tests rely on wrapping for overflow, then those unit tests
>> are broken. "More consistent wrong answers" is /not/ a phrase you want
>> to hear regarding test code! You want your embedded code to run quickly
>> and efficiently - but there is no reason not to have
>> -fsanitize=signed-integer-overflow for your PC-based unit tests and
>> simulations.
>
> I did not say that tests rely on undefined behavior or code relies on
> undefined behavior. I mean that most actual code (including unit tests)
> written by humans (and gods we can't hire) contains defects.
> It reduces effort of finding and fixing when these defects behave more
> uniformly. Usage of various debugging tools is good idea that helps
> to reduce that effort too but is orthogonal to it and not in conflict.
I understand what you are saying, but I think you are wrong. I disagree
that wrapping overflow is useful for debugging, as I explained earlier.
However, compilers like gcc give you the choice. Add "-fwrapv" when you
find it helps with debugging - just as you might add sanitizer options
and warning options. It does not have to be part of the language -
making it part of the language would reduce choice and limit your
debugging tools.
A key problem with relying on wrapping behaviour is that it is very
subtle - it is typically invisible unless you study the code in depth.
> I suspect that people will use it only on limited but important cases
> (like for self-diagnosing or for cryptography).
Cryptography would typically use unsigned types.
For those who want to check for problems after doing arithmetic that
might overflow, library features for detecting overflow would be
clearer, safer, and more portable than relying on wrapping behaviour.
I really dislike the concept of running into the traffic and
then checking if you have been run over, instead of looking first and
crossing when it is safe. It would be better to have types similar to
std::optional which track the validity of operations - letting you
happily perform arithmetic and at the end check for any failures.
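A rough sketch of the kind of type I have in mind (entirely my own
invention, not an existing library facility, reusing the GCC/Clang
overflow builtin mentioned above):

#include <climits>
#include <cstdio>
#include <optional>

// Arithmetic proceeds happily; validity is checked once at the end,
// much the way std::optional tracks whether a value is present.
class checked_int {
public:
    checked_int(int v) : value_(v) {}

    friend checked_int operator+(checked_int a, checked_int b) {
        int result;
        if (!a.value_ || !b.value_ ||
            __builtin_add_overflow(*a.value_, *b.value_, &result)) {
            return checked_int(std::nullopt);   // failure propagates
        }
        return checked_int(result);
    }

    bool valid() const { return value_.has_value(); }
    int get() const { return *value_; }   // only meaningful when valid()

private:
    explicit checked_int(std::optional<int> v) : value_(v) {}
    std::optional<int> value_;
};

int main() {
    checked_int total = checked_int(INT_MAX) + 1 + (-5);
    if (!total.valid()) {
        std::puts("overflow somewhere in the calculation");
    } else {
        std::printf("total = %d\n", total.get());
    }
}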
A big issue I have with the whole concept is that currently we have
/types/ for which overflow is defined (unsigned types) and types for
which it is not defined. But overflow behaviour should be part of the
operations, not the types.
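A hedged sketch of the contrast (the function name is invented for
illustration, not taken from any proposal):

#include <cstdint>

// Here wrapping is a property of the *operation*: plain int32_t goes
// in, and the caller asks for modular arithmetic for just this
// addition.  The conversion back to int32_t is implementation-defined
// before C++20, modular after.
inline std::int32_t add_wrap(std::int32_t a, std::int32_t b) {
    return static_cast<std::int32_t>(static_cast<std::uint32_t>(a) +
                                     static_cast<std::uint32_t>(b));
}

// A sibling operation could trap, saturate, or report overflow instead
// - the choice is made at each call site, while the type stays plain
// int32_t.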
> Other possible option
> would to standardize compiler intrinsic functions for those cases.
I would rather see a standard library facility with the required
behaviour - compilers can implement it using intrinsics or builtins
if they like.
> That means the (sometimes surprising) optimizations will stay valid
> by default. I haven't seen people objecting much when they then need
> to mark or to rewrite questionable places to suppress false positive
> diagnostics about well-defined code. I likely miss some depth of it
> or am too naive about something else and it is hard to predict future.
>
Predictions are hard, especially about the future!