I am not at all keen on the idea of the standards defining certain
behaviour and then compilers having flags to disregard those definitions
- that is, I think, a terrible idea. If trapping were to be added to
the standards, it would be much better if the standards offered a
user-selectable choice (I suppose via pragmas).
Remember, features like gcc's "-ffast-math" do not instruct the compiler
to ignore parts of the /C/ standard - they ignore parts of the IEEE
floating point standards. This is a very different matter. As far as I
know, "gcc -ffast-math" is no more non-compliant than "gcc".
>
>> I do not want that situation in integer code.
>
> Then there won't be.
I am entirely confident that trapping on overflow will never be required
by the C or C++ standards anyway, so this is hypothetical. But it is
feasible that there could be optional support defined (such as via a new
library section in C++), and there I would want the consistency.
>
>> I am much happier with the function of
>> "-fsanitize=signed-integer-overflow" here - it generates more checks,
>> less efficient code, but the functionality is clear and consistent.
>> I'll use that when I am happy to take the performance hit in the name of
>> debugging code.
>
> It is not trapping math there but debugging tool that caught "undefined
> behavior". It does not help me on cases when I need trapping maths.
>
Trapping maths can be handled in many ways - killing the program with
an error message is often useful in debugging, but as you say it is not
useful for handling the situation within the program. Throwing an
exception, calling a trap handler, or returning a signalling NaN are
other alternatives. My point was that I wanted the semantics of
-fsanitize=signed-integer-overflow for detecting the overflow - how
that overflow is handled is another matter.
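For illustration, here is a minimal sketch of my own (the file name and
code are made up). Built with
"g++ -fsanitize=signed-integer-overflow demo.cpp", the sanitizer
reports the overflow at run time instead of letting it silently become
undefined behaviour:

  #include <climits>
  #include <cstdio>

  int main(int argc, char **argv)
  {
      (void)argv;
      int x = INT_MAX;
      int y = x + argc;   // signed overflow on any normal invocation
      std::printf("%d\n", y);
      return 0;
  }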
>>
>>> I feel dealing with such issue" to be *lot* easier than dealing
>>> with utter undefined behavior (that all such examples are by
>>> current scripture). For me every solution is better than
>>> undefined behavior.
>>
>> The whole point is that you /don't/ hit undefined behaviour.
>
> The whole issue that you seemingly don't understand is that there are
> cases (and those seem to be majority) where I don't need to have undefined
> behaviors nor wrapping behaviors nor arbitrarily growing precisions.
> I want it to trap by default and sometimes to have snapping behavior.
>
> But I have either undefined behavior with signed or wrapping
> behavior with unsigned and non-portable intrinsic functions.
> So I have to choose what hack is uglier to write trapping or
> snapping myself manually and everyone else in same situation
> like me has to do same.
I am all in favour of choice, but there are a few things on which I
disagree with you. First, you seem to want trapping to be the default
in the standards - that is a very costly idea to impose on everyone
else, as well as being rather difficult to specify and implement.
Secondly, you seem to want this for all arithmetic. I'm fine with
using trapping explicitly in code where it is particularly useful -
that would work naturally with exceptions in C++. But I would want the
lower-level code that is not going to overflow to be free from the cost
of this checking. (Compilers might be able to eliminate some of the
overhead, but they won't get everything.)
Thirdly, you seem to think trapping is useful semantics in general - I
disagree. There are times when it could be convenient in code where
overflow is a realistic but unlikely scenario, but mostly it sounds
like you are happy with releasing code that is buggy as long as the
consequences of the bugs are somewhat limited. I don't like that
attitude, and I don't think you do either - so there is maybe still
something I don't understand here.
>
>> To me, undefined behaviour is /better/ than the alternatives for most
>> purposes. It helps me write better and more correct code, and as a
>> bonus it is more efficient.
>
> It is better only when I know that it can no way overflow because the
> values make sense in thousands while the variables can count in billions.
To me, that should /always/ be the case - though I am quite happy with
values that only make sense up to a thousand with a variable that can
count to a thousand. I always want to know the limits of my values, and
there is no need to work with bigger sizes than needed.
>
>> Maybe this is something that stems from my programming education - I was
>> taught in terms of specifications. Any function has a specification
>> that gives its pre-conditions and its post-conditions. The function
>> will assume the pre-conditions are valid, and establish the
>> post-conditions. If you don't ensure you satisfy the pre-conditions
>> before calling the function, you have no right to expect any known
>> behaviour at all from the function. This principle is known popularly
>> as "garbage in, garbage out". It has been understood since the birth of
>> the programmable computer:
>>
>> """
>> On two occasions I have been asked, 'Pray, Mr. Babbage, if you put into
>> the machine wrong figures, will the right answers come out?' I am not
>> able rightly to apprehend the kind of confusion of ideas that could
>> provoke such a question.
>> """
>
> Yes and I want to turn the language full of specified undefined
> behaviors and contradictions and useless features and defects into
> programs where there are zero.
You can't. It won't work. (Well, you can probably deal with
contradictions in the language, though I don't know what you are
referring to here.) Languages /always/ have undefined behaviours. And
there is rarely a benefit in turning undefined behaviour into defined
behaviour unless that behaviour is correct.
If a program tries to calculate "foo(x)", and the calculation overflows
and gives the wrong answer, the program is broken. Giving a definition
to the overflow - whether it is wrapping, trapping, throwing, saturating
- does not make the program correct. It is still broken.
Some types of defined behaviour can make it easier to find the bug
during testing - such as error messages on overflows. Some make it
harder to find the bugs, such as wrapping. Some make it easier to limit
the damage from the bugs, such as throwing an exception. None change
the fact that the program has a bug.
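A tiny made-up example of what I mean - the classic midpoint
calculation:

  // Buggy: a + b overflows when both values are large.  Wrapping gives
  // a negative "midpoint", trapping aborts, saturating gives roughly
  // INT_MAX / 2 - none of them is the answer the caller wanted.
  int midpoint_buggy(int a, int b)
  {
      return (a + b) / 2;
  }

  // Correct, for 0 <= a <= b: rewrite the expression so that it cannot
  // overflow in the first place.
  int midpoint_fixed(int a, int b)
  {
      return a + (b - a) / 2;
  }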
And if you look at the language, there are /loads/ of undefined
behaviours. Integer overflow is just one small example, and one that
has become a good deal less common since 32-bit int became the norm. C++
will still let programmers shoot themselves in the foot in all sorts of
ways - worrying about the splinters in their fingers will not fix the
hole in their feet.
> When wrong figures
> are put into my programs then I want the program to give
> answers that follow from those and when contradicting figures
> are put in then I want my programs to refuse in civil manner.
That is fair enough - that is why you should check the figures that are
put into the program, not the calculations in the good code that you
have written.
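In code terms (a made-up sketch, with a limit I invented), the check
sits where the figure enters the program, and everything after that
point relies on the documented pre-condition:

  #include <optional>

  // Hypothetical input routine: validate the figure once, at the
  // boundary.  The calculations further in can assume the documented
  // range and need no overflow checks of their own.
  std::optional<int> read_count(long long raw)
  {
      if (raw < 0 || raw > 1'000'000)
          return std::nullopt;   // refuse bad input in a civil manner
      return static_cast<int>(raw);
  }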
> For me quarter of good cases working is not good
> enough quality. And I'm frustrated when I can't find a way to
> use it and have to use some kind of nonconforming hacks by
> compiler vendor.
>
>> If you want to say that some programmers don't know that integer
>> overflow is undefined behaviour, and think it wraps, then blame the way
>> C and C++ is taught and what the books say, and do something about that
>> - don't try to dumb down and water out the language.
>
> A language that does not have simple features nor convenient ways to
> implement such is dumbed down by its makers, not by me.
>
No language covers everything that all programmers want - there are
always compromises.
Yes, you did. At least, that's what the definitions of the languages
say. The words used in a programming language don't always correspond
to their normal meanings in English. Nor do they always remain the same
as they move from compiler extensions through to standards. It is
unfortunate, but true - and it is the programmer's job to write in a way
that is clear to the reader /and/ precise in the language.
(Incidentally, you write better English than many people I know who
have it as their first language. And it is not as if Estonian is close
to English!)
>
>> And once you start saying "C++ programmers can't be trusted to write
>> expressions that don't overflow", where do you stop? Can you trust them
>> to use pointers, or do you want run-time checks on all uses of pointers?
>> Can you trust them to use arrays? Function pointers? Memory
>> allocation? Multi-threading? There are many, many causes of faults and
>> bugs in C++ programs that are going to turn up a lot more often than
>> integer overflows (at least in the 32-bit world - most calculations
>> don't come close to that limit).
>
> Sometimes I want to write code where overflow is not my programming
> error but allowable contradiction in input data. There I want to throw or
> to result with INF on overflow.
That should be handled by C++ classes that have such semantics on
overflow - which you use explicitly when you want that behaviour. I'd
be happy to see a standard library with this behaviour.
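Roughly along these lines - a bare-bones sketch of my own, not an
existing library:

  #include <stdexcept>

  // A "throw on overflow" integer.  Use it explicitly in the code that
  // wants those semantics, and plain int everywhere else.
  // (__builtin_add_overflow is a gcc/clang intrinsic.)
  class checked_int {
  public:
      explicit checked_int(int v = 0) : value_(v) {}

      checked_int operator+(checked_int other) const
      {
          int result;
          if (__builtin_add_overflow(value_, other.value_, &result))
              throw std::overflow_error("checked_int: addition overflowed");
          return checked_int(result);
      }

      int get() const { return value_; }

  private:
      int value_;
  };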
> In similar manner like I want it to
> throw or to result with NaN when kilograms were requested
> to add to meters or to subtract from degrees of angle or diameter
> of 11-th apple from 10 is requested.
That is a completely different matter. Personally, I would not want an
exception or a NaN here - I would want a compile time error. Again,
this can be handled perfectly well in C++ using strong class types. And
again, it would be nice to see such libraries standardised.
In both these cases, I think concepts will make the definition and use
of such class template libraries a good deal neater.
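For example, a bare-bones version (the tag names are my own) where
mixing the units simply fails to compile:

  // Strong types: adding kilograms to metres is a compile-time error,
  // so the mistake never reaches run time.
  template <typename Tag>
  struct quantity {
      double value;
      quantity operator+(quantity other) const
      {
          return {value + other.value};
      }
  };

  struct metre_tag {};
  struct kilogram_tag {};

  using metres    = quantity<metre_tag>;
  using kilograms = quantity<kilogram_tag>;

  int main()
  {
      metres    distance{3.0};
      kilograms mass{2.0};

      metres ok = distance + distance;   // fine
      // auto bad = distance + mass;     // does not compile, as intended
      (void)ok; (void)mass;
      return 0;
  }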
> Choice between if I want silent
> or trapping failures depends on if I want to process the whole data
> regardless of inconsistencies or constraint violations in it or to
> interrupt on first such problem in data. Don't such use-cases make sense?
Yes, which is why it is far better to deal with it using explicit
choices of the types used rather than making it part of the language.
Which is fine for many people. And since hardware assistance would give
you very little benefit for what you want here, it is not surprising
that it does not exist. (Hardware assistance could be a big help in a
wide variety of other undefined behaviour, bugs, and low-level features
in C and C++, but that is a different discussion.)
I don't think anyone will tell you __builtin_add_overflow leads to
pretty code! But you write these sorts of things once, and hide them
away in a library so that the use of the functions is simple and clear.
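For instance, a thin wrapper of my own devising (names invented) that
hides the intrinsic behind something readable:

  #include <optional>

  // Keep the intrinsic in one place; callers just see "add, or tell me
  // it did not fit".
  inline std::optional<int> add_checked(int a, int b)
  {
      int result;
      if (__builtin_add_overflow(a, b, &result))
          return std::nullopt;
      return result;
  }

  // Usage sketch:
  //   if (auto sum = add_checked(x, y)) use(*sum);
  //   else handle_overflow();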