On 23/08/16 23:30,
supe...@casperkitty.com wrote:
> On Tuesday, August 23, 2016 at 2:07:36 PM UTC-5, gwowen wrote:
>> supercat writes:
>>> Compiler writers don't sit down and say "Hey, let's write a useless
>>> compiler". They do, however, fail to recognize what has traditionally
>>> made C useful for many applications.
>>
>> Of the uncountable stupid things you've said, this may be the stupidest.
>
> There are many applications (including 99.99999% of embedded programs)
> for which C's usefulness is a result of widespread support for features
> beyond what the Standard mandates. Do you disagree with that?
Most of the code in most embedded programs that are written in C can be
written using only standard-mandated features, and most of the rest can
be done using implementation-defined behaviour, as the standards allow.
That is not to say that most embedded programs avoid depending on
undefined or unspecified behaviour acting in a particular way. And
sometimes code can be made smaller or faster by relying on what one
might call "undocumented implementation-defined behaviour" from
compilers.
People who take embedded programming seriously - like other serious
programmers - strive to avoid undefined behaviour in their code. But
they are often much happier to rely on implementation-defined behaviour
than "mainstream" C programmers are, because they often know exactly
which compiler and target processor their code will run on.
But since serious embedded programmers know that they are fallible, and
they know that compiler writers are fallible, they will write code that
/should/ be independent of details of the compiler, flags, etc., and
then lock down exact versions of the tools and flags just in case they
have accidentally relied on "undocumented implementation-defined behaviour".
>
> At least some compiler writers have taken the attitude that code which
> relies upon such features has no real meaning, and works only by
> "happenstance", no matter how many earlier compilers have interpreted it
> consistently. Do you disagree with that?
I disagree with that. Certainly compiler writers will not bother unduly
with source code that has no real meaning - but if many compilers have
treated code in one way before, then they will take that into account.
But they will not artificially limit the efficiency of code written by
people who understand C programming, in order to placate those who don't
understand.
Let us take an example, and see how gcc handles it. In C, signed
integer overflow is undefined behaviour. But in the underlying
processor, in most cases, signed arithmetic overflow wraps as two's
complement arithmetic in a defined manner. Some people may have written
code that relies on this hardware behaviour - their code perhaps worked
on some compilers during testing. Other people may have written code
which can result in more efficient object code if the compiler assumes
integer overflow does not occur.
Should the compiler limit the efficiency of the good programmer's code,
just because the bad programmer does not understand the way C works
here? On the other hand (the one you favour), should the compiler break
code that worked fine previously, simply in order to slightly speed up
someone else's code?
gcc lets you answer "no" to both points - it gives /you/ the choice.
The compiler can optimise on the assumption that signed overflow will
not occur - but /only/ if the optimisation option is enabled (with
"-fstrict-overflow", or -O2). You can keep the old code working by
limiting optimisation to -O1 or -O0 (with -O0 being the default), or by
explicitly disabling that optimisation with "-fno-strict-overflow".
You can also tell gcc that you want integer overflow to work as two's
complement wraparound in all circumstances, and that the compiler can
optimise using that knowledge, with the "-fwrapv" option.
What if you are worried that you might accidentally have signed
overflow? gcc supports "-ftrapv" on some targets to catch overflows at
run-time. The newer "-fsanitize=signed-integer-overflow" also makes
checks, and is probably better for future use.
Then there are compile-time warnings: the "-Wstrict-overflow" flag warns
when strict-overflow optimisations affect code, and it is configurable
to different levels. Warnings about overflow in constant expressions
are on by default, and need to be explicitly disabled if you don't want
them.
gcc is the compiler that you apparently consider most "evil" regarding
optimisations, with clang as a close competitor (clang supports the same
options here, I believe). Yet this shows just how much effort the gcc
folks put into helping people work with legacy, incorrect code that
happened to work before - or people who would rather program in a
language slightly different from C in which integer overflow always
wraps. Other cases such as type-based alias analysis are similar, with
a similar set of flags and options.
And note that in the embedded world, far and away the biggest single
compiler used is gcc. (I don't mean most embedded programs are compiled
with gcc - but more are compiled with gcc than with any other compiler.)
It turns out that in real life, people seem quite capable of using the
"evil" gcc compiler to make working embedded systems. And the gcc
developers are perfectly aware that this is a major market for their
compiler, especially for C (as distinct from C++, Go, or other languages
supported by gcc).
>
> Such dismissal may reasonably be interpreted as a refusal to recognize the
> existence and usefulness of such features. Do you disagree with that?
>
See above.
> With what, then, are you disagreeing?
>
In a nutshell, I disagree with your ignorant paranoia and conspiracy
theories.
You have made good points in the past regarding surprising or hidden
undefined behaviour in innocent-looking code. But your ideas about
compiler optimisations, and the motivations and priorities of compiler
developers, are completely and utterly wrong.