
Does -O3 enable more warnings than just -Wall -Wextra with gcc?


luser- -droog

Dec 1, 2013, 12:48:47 AM
I thought I remembered reading in this group that enabling maximum
optimizations with -O3 enabled extra code-path analysis which could
turn up warnings that otherwise would not be noticed. Not that it
"enables" warnings per se, but allows more warnings to be issued,
since more were detected.

Searching the group yielded no results. And nothing under
http://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html#Warning-Options
or
http://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html#Optimize-Options

Am I full of crap?

glen herrmannsfeldt

Dec 1, 2013, 2:30:38 AM
Could be, or it might find fewer. At higher optimization levels the
compiler knows the flow structure better, and might generate fewer false
positives.

-- glen

Stephen Sprunk

Dec 1, 2013, 2:47:10 AM
On 30-Nov-13 23:48, luser- -droog wrote:
> I thought I remembered reading in this group that enabling maximum
> optimizations with -O3 enabled extra code-path analysis which could
> turn up warnings that otherwise would not be noticed. Not that it
> "enables" warnings per se, but allows more warnings to be issued,
> since more were detected.

Right. In a sense, many warnings are a side effect; without the
relevant optimization being enabled, GCC isn't doing the analysis
necessary to generate the warning.

Another way of looking at it is that, by default, GCC generates a near
literal translation of your code; code that invokes undefined behavior
will do what most programmers would expect, so there is no need to warn
them. When you enable optimizations, though, GCC will do clever things
that are legal (in most cases) but likely to cause unexpected results in
the case of undefined behavior, hence the warnings.

A diagnostic is required by the Standard in certain cases (constraint
violations?), but those tend to be fatal errors rather than warnings.

S

--
Stephen Sprunk "God does not play dice." --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSS dice at every possible opportunity." --Stephen Hawking

Ben Bacarisse

Dec 1, 2013, 7:02:43 AM
luser- -droog <mij...@yahoo.com> writes:

> I thought I remembered reading in this group that enabling maximum
> optimizations with -O3 enabled extra code-path analysis which could
> turn up warnings that otherwise would not be noticed. Not that it
> "enables" warnings per se, but allows more warnings to be issued,
> since more were detected.

Yes. A case in point (though not requiring -O3) is the recent post
about longjmp clobbering locals. The warning disappears when
optimisation is turned off. In fact the warning changes when using -Os
rather than, say, -O1. When optimising for space, the warning is more
accurate in that it names the actual object whose value might become
indeterminate.

<snip>
--
Ben.

Keith Thompson

Dec 2, 2013, 5:12:08 AM
Stephen Sprunk <ste...@sprunk.org> writes:
> On 30-Nov-13 23:48, luser- -droog wrote:
>> I thought I remembered reading in this group that enabling maximum
>> optimizations with -O3 enabled extra code-path analysis which could
>> turn up warnings that otherwise would not be noticed. Not that it
>> "enables" warnings per se, but allows more warnings to be issued,
>> since more were detected.
>
> Right. In a sense, many warnings are a side effect; without the
> relevant optimization being enabled, GCC isn't doing the analysis
> necessary to generate the warning.

I believe that's correct.

> Another way of looking at it is that, by default, GCC generates a near
> literal translation of your code; code that invokes undefined behavior
> will do what most programmers would expect, so there is no need to warn
> them. When you enable optimizations, though, GCC will do clever things
> that are legal (in most cases) but likely to cause unexpected results in
> the case of undefined behavior, hence the warnings.

I don't think gcc invokes that kind of reasoning when deciding
whether to issue a warning (and IMHO it shouldn't). If a given
construct has undefined behavior, I'd expect gcc or any reasonable
compiler to warn about it if it's able to do so, even if the behavior
is well defined *by the compiler* at the current optimization
level. If it fails to warn about something at -O0 (little or
no optimization), I expect it's because it doesn't have enough
information, not because it chooses not to warn about it. And the
definition of "what most programmers would expect" is slippery.

> A diagnostic is required by the Standard in certain cases (constraint
> violations?), but those tend to be fatal errors rather than warnings.

gcc issues non-fatal warnings by default for a lot of constraint
violations. (IMHO this is unfortunate.) You can override this with
"-pedantic-errors".

--
Keith Thompson (The_Other_Keith) ks...@mib.org <http://www.ghoti.net/~kst>
Working, but not speaking, for JetHead Development, Inc.
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"

Stephen Sprunk

Dec 2, 2013, 11:53:13 AM
On 02-Dec-13 04:12, Keith Thompson wrote:
> Stephen Sprunk <ste...@sprunk.org> writes:
>> Another way of looking at it is that, by default, GCC generates a
>> near literal translation of your code; code that invokes undefined
>> behavior will do what most programmers would expect, so there is no
>> need to warn them. When you enable optimizations, though, GCC will
>> do clever things that are legal (in most cases) but likely to cause
>> unexpected results in the case of undefined behavior, hence the
>> warnings.
>
> I don't think gcc invokes that kind of reasoning when deciding
> whether to issue a warning (and IMHO it shouldn't).

I didn't mean to imply that GCC has such reasoning encoded in it, but
rather that this appears to be the view that GCC's _programmers_ take.

> If a given construct has undefined behavior, I'd expect gcc or any
> reasonable compiler to warn about it if it's able to do so, even if
> the behavior is well defined *by the compiler* at the current
> optimization level.

That is clearly not the case with any compiler I've ever used, as much
as I might wish it were so. Undefined behavior, aside from constraint
violations, rarely seems to generate a warning. It's also a slippery
slope; should a compiler be required to warn about unspecified or
implementation defined behavior, too?

Also, one of C's strengths is the ability to write non-portable code in
the same language as portable code; some desirable things simply can't
be done without invoking undefined (according to the Standard) behavior,
so one could make a decent argument that they shouldn't require warnings
when done deliberately.

> the definition of "what most programmers would expect" is slippery.

It may also be circular, since many programmers seem to learn what to
expect by experimenting with a particular compiler rather than by
reading the Standard (or even that compiler's documentation). It might
be better to say that GCC generates a warning when optimization may
deliver _different_ results, without reference to expectations.

>> A diagnostic is required by the Standard in certain cases
>> (constraint violations?), but those tend to be fatal errors rather
>> than warnings.
>
> gcc issues non-fatal warnings by default for a lot of constraint
> violations. (IMHO this is unfortunate.) You can override this with
> "-pedantic-errors".

Or "-pedantic" combined with "-Werror". I nearly always use the latter,
since all of the environments I work in require that code compile with
no errors at "-W -Wall", and I use the former unless dealing with code
that requires certain GCC-specific extensions, e.g. the Linux kernel.

Keith Thompson

Dec 2, 2013, 2:55:41 PM
Stephen Sprunk <ste...@sprunk.org> writes:
> On 02-Dec-13 04:12, Keith Thompson wrote:
>> Stephen Sprunk <ste...@sprunk.org> writes:
>>> Another way of looking at it is that, by default, GCC generates a
>>> near literal translation of your code; code that invokes undefined
>>> behavior will do what most programmers would expect, so there is no
>>> need to warn them. When you enable optimizations, though, GCC will
>>> do clever things that are legal (in most cases) but likely to cause
>>> unexpected results in the case of undefined behavior, hence the
>>> warnings.
>>
>> I don't think gcc invokes that kind of reasoning when deciding
>> whether to issue a warning (and IMHO it shouldn't).
>
> I didn't mean to imply that GCC has such reasoning encoded in it, but
> rather that appears to be the view that GCC's _programmers_ take.

I didn't phrase that particularly well. I was referring to the
reasoning of the gcc developers; I don't claim that gcc itself
"reasons".

Do you have any examples of this? What I think we're talking about is
cases where a given construct has undefined behavior, and gcc warns
about it only at a higher optimization level *specifically because the
behavior without optimization is considered acceptable*. I'm not aware
of any such cases. There are cases (I'm fairly sure) where gcc warns
about something only at higher optimization levels because it otherwise
doesn't have the information.

An example:

#include <stdio.h>
#include <limits.h>

int main(void) {
    const int n = INT_MAX;
    printf("%d\n", n + 1);
}

gcc warns about this with "-O1 -std=c99 -Wall", but not with
"-O0 -std=c99 -Wall".

>> If a given construct has undefined behavior, I'd expect gcc or any
>> reasonable compiler to warn about it if it's able to do so, even if
>> the behavior is well defined *by the compiler* at the current
>> optimization level.
>
> That is clearly not the case with any compiler I've ever used, as much
> as I might wish it were so. Undefined behavior, aside from constraint
> violations, rarely seems to generate a warning. It's also a slippery
> slope; should a compiler be required to warn about unspecified or
> implementation defined behavior, too?

Compilers aren't *required* to warn about anything. Decent ones warn
about undefined behavior when they can, but it's impossible to do so in
all cases.

> Also, one of C's strengths is the ability to write non-portable code in
> the same language as portable code; some desirable things simply can't
> be done without invoking undefined (according to the Standard) behavior,
> so one could make a decent argument that shouldn't require warnings when
> done deliberately.

Which is why casts typically inhibit warnings. But it's impossible in
general to determine whether a given violation is deliberate.

>> the definition of "what most programmers would expect" is slippery.
>
> It may also be circular, since many programmers seem to learn what to
> expect by experimenting with a particular compiler rather than by
> reading the Standard (or even that compiler's documentation). It might
> be better to say that GCC generates a warnings when optimization may
> deliver _different_ results, without reference to expectations.
>
>>> A diagnostic is required by the Standard in certain cases
>>> (constraint violations?), but those tend to be fatal errors rather
>>> than warnings.
>>
>> gcc issues non-fatal warnings by default for a lot of constraint
>> violations. (IMHO this is unfortunate.) You can override this with
>> "-pedantic-errors".
>
> Or "-pedantic" combined with "-Werror". I nearly always use the latter,
> since all of the environments I work in require that code compile with
> no errors at "-W -Wall", and I use the former unless dealing with code
> that requires certain GCC-specific extensions, e.g. the Linux kernel.

"-Werror" is often a good idea; on the other hand, it makes gcc
non-conforming, since it causes it to reject some conforming code.

David Brown

Dec 2, 2013, 3:02:16 PM
On 02/12/13 17:53, Stephen Sprunk wrote:
> On 02-Dec-13 04:12, Keith Thompson wrote:
>> Stephen Sprunk <ste...@sprunk.org> writes:
>>> Another way of looking at it is that, by default, GCC generates a
>>> near literal translation of your code; code that invokes undefined
>>> behavior will do what most programmers would expect, so there is no
>>> need to warn them. When you enable optimizations, though, GCC will
>>> do clever things that are legal (in most cases) but likely to cause
>>> unexpected results in the case of undefined behavior, hence the
>>> warnings.
>>
>> I don't think gcc invokes that kind of reasoning when deciding
>> whether to issue a warning (and IMHO it shouldn't).
>
> I didn't mean to imply that GCC has such reasoning encoded in it, but
> rather that appears to be the view that GCC's _programmers_ take.
>
>> If a given construct has undefined behavior, I'd expect gcc or any
>> reasonable compiler to warn about it if it's able to do so, even if
>> the behavior is well defined *by the compiler* at the current
>> optimization level.
>
> That is clearly not the case with any compiler I've ever used, as much
> as I might wish it were so. Undefined behavior, aside from constraint
> violations, rarely seems to generate a warning. It's also a slippery
> slope; should a compiler be required to warn about unspecified or
> implementation defined behavior, too?
>

It is good to warn about code that might not perform as the programmer
expects - whether the behaviour is well-defined in the specs,
implementation defined, or undefined. (It's also good that such
warnings can be controlled by flags to the users' likings.)

For example, gcc will warn about "if (a = 3) ..." with the right flags.
The behaviour is clearly defined in the specs - there is no ambiguity
or undefined behaviour - but it is probably a typo by the programmer.

So there is no slippery slope that I can see here.

For the most part, undefined behaviour is flagged by gcc warnings if the
compiler can easily see that the behaviour is definitely undefined (say,
using a stack variable before it is initialised), left unflagged if the
behaviour is probably fine even if it is sometimes undefined (such as
shifts with signed data), and not yet flagged if it is clearly
undefined, but hard to implement a warning (such as using a pointer outside
the range of valid data). Each new generation of gcc gets better at this.

> Also, one of C's strengths is the ability to write non-portable code in
> the same language as portable code; some desirable things simply can't
> be done without invoking undefined (according to the Standard) behavior,
> so one could make a decent argument that shouldn't require warnings when
> done deliberately.
>

What desirable things can only be achieved by invoking undefined
behaviour? There is a lot that requires implementation-defined
behaviour, and there are times when you need something that has no
spec'ed behaviour at all (using compiler extensions, inline assembly,
etc.), but I don't see off-hand when you would /need/ undefined behaviour.

James Kuyper

Dec 2, 2013, 3:27:13 PM
On 12/02/2013 03:02 PM, David Brown wrote:
...
> What desirable things can only be achieved by invoking undefined
> behaviour? There is a lot that requires implementation-defined
> behaviour, and there are times when you need something that has no
> spec'ed behaviour at all (using compiler extensions, inline assembly,
> etc.), but I don't see off-hand when you would /need/ undefined behaviour.

Talking about these issues can get confusing unless you realize that the
relevant terms are specialized jargon with meanings defined by the C
standard, rather than having the meaning that would normally apply under
the rules of ordinary English. For instance, in the C standard,
"undefined behavior" doesn't mean "behavior that it not defined". It
means "behavior that is not defined by the C standard". Similarly, in
the C standard, "implementation-defined behavior" does not mean
"behavior that is defined by the implementation"; it means "behavior
that the implementation is required by the C standard to document".

Thus, it is not possible to achieve anything by writing code that has
behavior that is not defined by anything; but it is possible to achieve
something useful by writing code that has "undefined behavior" - because
something other than the C standard defines the behavior, such as the
compiler's documentation, or a platform-specific API, or the OS. Such
behavior is still "undefined", as far as the C standard is concerned.

Similarly, it is not necessary that the behavior be "implementation
defined", it's sufficient that it be defined by the implementation, even
if the standard imposes no requirement that the implementation document
the definition that it provides. Such behavior is not
"implementation-defined", as far as the C standard is concerned.


Stephen Sprunk

Dec 2, 2013, 4:21:21 PM
On 02-Dec-13 14:02, David Brown wrote:
> On 02/12/13 17:53, Stephen Sprunk wrote:
>> On 02-Dec-13 04:12, Keith Thompson wrote:
>>> If a given construct has undefined behavior, I'd expect gcc or
>>> any reasonable compiler to warn about it if it's able to do so,
>>> even if the behavior is well defined *by the compiler* at the
>>> current optimization level.
>>
>> That is clearly not the case with any compiler I've ever used, as
>> much as I might wish it were so. Undefined behavior, aside from
>> constraint violations, rarely seems to generate a warning. It's
>> also a slippery slope; should a compiler be required to warn about
>> unspecified or implementation defined behavior, too?
>
> It is good to warn about code that might not perform as the
> programmer expects - whether the behaviour is well-defined in the
> specs, implementation defined, or undefined. (It's also good that
> such warnings can be controlled by flags to the users' likings.)
>
> For example, gcc will warn about "if (a = 3) ..." with the right
> flags. The behaviour is clearly defined in the specs - there is no
> ambiguity or undefined behaviour - but it is probably a typo by the
> programmer.

Keith's expectation above is that the compiler will warn about any
undefined behavior (that the compiler can detect) in the code presented,
apparently without regard to whether it was deliberate.

> So there is no slippery slop that I can see here.

The main motivation I see for warning about undefined behavior is that
the programmer be made aware that the code may not do what he expects it
to do (whatever that may be). That logic holds for
implementation-defined and unspecified behavior as well; the results are
constrained, but it's still possible that the code may not do what he
expects it to do, either on the current implementation or when moved to
another.

>> Also, one of C's strengths is the ability to write non-portable
>> code in the same language as portable code; some desirable things
>> simply can't be done without invoking undefined (according to the
>> Standard) behavior, so one could make a decent argument that
>> shouldn't require warnings when done deliberately.
>
> What desirable things can only be achieved by invoking undefined
> behaviour? There is a lot that requires implementation-defined
> behaviour, and there are times when you need something that has no
> spec'ed behaviour at all (using compiler extensions, inline
> assembly, etc.), but I don't see off-hand when you would /need/
> undefined behaviour.

The most obvious case would be an OS kernel, in particular device
drivers. You must do things that the C Standard leaves undefined, e.g.
writing to particular memory locations that are memory-mapped to device
registers or altering page tables, but it works because the OS (or
hardware, or ABI, or whatever) _does_ define the behavior in those
cases, even though the C Standard does not require it to do so or even
acknowledge such cases.

Most "interesting" OS APIs probably fall into the same category, and
that's fine because nobody _expects_ such code to be portable, but it is
of great value that one can do such things in C--and mix it with code
that _is_ expected to be portable, written in the same language.

Stephen Sprunk

Dec 2, 2013, 5:12:37 PM
On 02-Dec-13 13:55, Keith Thompson wrote:
> Stephen Sprunk <ste...@sprunk.org> writes:
>> On 02-Dec-13 04:12, Keith Thompson wrote:
>>> Stephen Sprunk <ste...@sprunk.org> writes:
>>>> Another way of looking at it is that, by default, GCC generates
>>>> a near literal translation of your code; code that invokes
>>>> undefined behavior will do what most programmers would expect,
>>>> so there is no need to warn them. When you enable
>>>> optimizations, though, GCC will do clever things that are legal
>>>> (in most cases) but likely to cause unexpected results in the
>>>> case of undefined behavior, hence the warnings.
>
> Do you have any examples of this? What I think we're talking about
> is cases where a given construct has undefined behavior, and gcc
> warns about it only at a higher optimization level *specifically
> because the behavior without optimization is considered acceptable*.
> I'm not aware of any such cases. There are cases (I'm fairly sure)
> where gcc warns about something only at higher optimization levels
> because it otherwise doesn't have the information.

No specific examples come to mind, but generally speaking, GCC is
obviously capable of generating such warnings since it does so when
optimization is enabled, yet at some point someone chose not to do the
same analysis when optimization is disabled even though the warnings
could obviously be given without affecting code generation.

When I was first learning C, I had no trouble making my programs compile
quietly and work as expected with -O0. However, with -O3, I would get
dozens of new warnings--and my code no longer worked as expected. So,
correctly or not, I learned that GCC only warns me when it thinks it's
doing something that I don't expect.

>> Also, one of C's strengths is the ability to write non-portable
>> code in the same language as portable code; some desirable things
>> simply can't be done without invoking undefined (according to the
>> Standard) behavior, so one could make a decent argument that
>> shouldn't require warnings when done deliberately.
>
> Which is why casts typically inhibit warnings.

For the specific case where the problem is a disallowed implicit
conversion, sure, but useful undefined behavior is larger than that.

> But it's impossible in general to determine whether a given
> violation is deliberate.

Hence the problem with a proposal to _require_ warnings.

IIRC, MSVC has #pragmas to disable individual warnings, but in my
experience that is used more often to protect bad code from discovery
than to suppress incorrect/spurious warnings about valid code. I
couldn't in good conscience suggest that GCC add that misfeature.

James Kuyper

Dec 2, 2013, 5:25:33 PM
On 12/02/2013 05:12 PM, Stephen Sprunk wrote:
> On 02-Dec-13 13:55, Keith Thompson wrote:
...
>> Do you have any examples of this? What I think we're talking about
>> is cases where a given construct has undefined behavior, and gcc
>> warns about it only at a higher optimization level *specifically
>> because the behavior without optimization is considered acceptable*.
>> I'm not aware of any such cases. There are cases (I'm fairly sure)
>> where gcc warns about something only at higher optimization levels
>> because it otherwise doesn't have the information.
>
> No specific examples come to mind, but generally speaking, GCC is
> obviously capable of generating such warnings since it does so when
> optimization is enabled, yet at some point someone chose not to do the
> same analysis when optimization is disabled even though the warnings
> could obviously be given without affecting code generation.

The analysis affects code generation by delaying completion by the
amount of time that is required to perform the analysis. gcc (rightly or
wrongly) chooses not to spend that time unless explicitly asked to
perform the optimizations enabled by that analysis. That doesn't
necessarily mean:

> ... that GCC only warns me when it thinks it's

Keith Thompson

Dec 2, 2013, 6:41:39 PM
Stephen Sprunk <ste...@sprunk.org> writes:
> On 02-Dec-13 13:55, Keith Thompson wrote:
>> Stephen Sprunk <ste...@sprunk.org> writes:
>>> On 02-Dec-13 04:12, Keith Thompson wrote:
>>>> Stephen Sprunk <ste...@sprunk.org> writes:
>>>>> Another way of looking at it is that, by default, GCC generates
>>>>> a near literal translation of your code; code that invokes
>>>>> undefined behavior will do what most programmers would expect,
>>>>> so there is no need to warn them. When you enable
>>>>> optimizations, though, GCC will do clever things that are legal
>>>>> (in most cases) but likely to cause unexpected results in the
>>>>> case of undefined behavior, hence the warnings.
>>
>> Do you have any examples of this? What I think we're talking about
>> is cases where a given construct has undefined behavior, and gcc
>> warns about it only at a higher optimization level *specifically
>> because the behavior without optimization is considered acceptable*.
>> I'm not aware of any such cases. There are cases (I'm fairly sure)
>> where gcc warns about something only at higher optimization levels
>> because it otherwise doesn't have the information.
>
> No specific examples come to mind, but generally speaking, GCC is
> obviously capable of generating such warnings since it does so when
> optimization is enabled, yet at some point someone chose not to do the
> same analysis when optimization is disabled even though the warnings
> could obviously be given without affecting code generation.

I think your understanding is reversed (or, conceivably, mine is).

When you request optimization (via "-O3" or whatever),
gcc performs additional analysis of your code so that it can
determine what optimizations can be performed without breaking the
required behavior. A side effect of that analysis is that it can
detect things that might cause undefined behavior. For example,
tracking the value that an object holds during execution both (a)
enables optimization (such as replacing a reference to the object
with a constant if the compiler can prove that it must hold some
value in particular), and (b) enables some warnings (such as an
overflow because it's been able to figure out that the value you're
incrementing at run time happens to be INT_MAX).

It refrains from performing that analysis at -O0 simply because
it takes additional time and memory, and "gcc -O0" means roughly
"Compiler: Please generate correct code quickly".

It might have made some sense not to tie these two things together, so
that the compiler could perform the analysis needed to diagnose (some
instances of) undefined behavior without generating optimized code, but
there probably isn't enough demand for that.

> When I was first learning C, I had no trouble making my programs compile
> quietly and work as expected with -O0. However, with -O3, I would get
> dozens of new warnings--and my code no longer worked as expected. So,
> correctly or not, I learned that GCC only warns me when it thinks it's
> doing something that I don't expect.

It's likely that your code has undefined behavior regardless of the
optimization level (in fact, UB is always independent of the
optimization level), and it just happened to "work" at "-O0" and not at
"-O3". Which is why compiling with "-O3", even if you don't intend to
run the optimized code, can be a good way to flush out bugs.

>>> Also, one of C's strengths is the ability to write non-portable
>>> code in the same language as portable code; some desirable things
>>> simply can't be done without invoking undefined (according to the
>>> Standard) behavior, so one could make a decent argument that
>>> shouldn't require warnings when done deliberately.
>>
>> Which is why casts typically inhibit warnings.
>
> For the specific case where the problem is a disallowed implicit
> conversion, sure, but useful undefined behavior is larger than that.
>
>> But it's impossible in general to determine whether a given
>> violation is deliberate.
>
> Hence the problem with a proposal to _require_ warnings.

I don't recall any such proposal.

> IIRC, MSVC has #pragmas to disable individual warnings, but in my
> experience that is used more often to protect bad code from discovery
> than to suppress incorrect/spurious warnings about valid code. I
> couldn't in good conscience suggest that GCC add that misfeature.

Philip Lantz

Dec 3, 2013, 12:32:08 AM
Stephen Sprunk wrote:

> IIRC, MSVC has #pragmas to disable individual warnings, but in my
> experience that is used more often to protect bad code from discovery
> than to suppress incorrect/spurious warnings about valid code.

My experience is the opposite: I have frequently needed to use the MSVC
pragma to disable bogus warnings about perfectly good code. (One
example: it generated a warning about the declaration of a flexible
array member, which was defined and used according to the standard, and
correctly implemented by the compiler; I couldn't figure out why they
felt a need to generate a warning.)

> I couldn't in good conscience suggest that GCC add that misfeature.

Gcc has essentially the same feature, except that it puts the control on
the command line, rather than in the source file. Gcc has somewhat less
fine-grained control than MSVC. It also seems to have less problem with
bogus warnings, in my experience.

ais523

Dec 3, 2013, 12:43:38 AM
Philip Lantz wrote:
> Stephen Sprunk wrote:
>
>> IIRC, MSVC has #pragmas to disable individual warnings, but in my
>> experience that is used more often to protect bad code from discovery
>> than to suppress incorrect/spurious warnings about valid code.
[snip]
>> I couldn't in good conscience suggest that GCC add that misfeature.
>
> Gcc has essentially the same feature, except that it puts the control on
> the command line, rather than in the source file. Gcc has somewhat less
> fine-grained control than MSVC. It also seems to have less problem with
> bogus warnings, in my experience.

Just thought that you should know that modern gcc does allow warning
control from the source file.

I mostly use this in case of false positives (or occasionally to
increase the warning level for a section of code to include a warning
that I want to make use of but which is disabled by default even with
-Wextra due to too many false positives).

Here's an example of the syntax:

== cut here ==
int main(void)
{
    int x = 1/0;
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wdiv-by-zero"
    int y = 1/0;
#pragma GCC diagnostic pop
    int z = 1/0;
    return x+y+z;
}
== cut here ==

I get warnings for the assignments to x and z, but not for the
assignment to y, because the pragma specifically requests no warning.

--
ais523

David Brown

Dec 3, 2013, 1:41:02 PM
<<http://gcc.gnu.org/onlinedocs/gcc/Diagnostic-Pragmas.html>>

I don't remember what version of gcc got this feature, but it seems to
be roughly equivalent to what you describe in MSVC.


David Brown

Dec 3, 2013, 2:23:42 PM
I fully appreciate the need and use of such code - I just did not
realise that this is technically "undefined behaviour".

(I know that the terms "undefined behaviour" and "implementation defined
behaviour" have special meanings in C, and I believe I have a fair
understanding of them - partly due to the good folks in c.l.c - but I am
always happy to have my understandings refined, improved and corrected.)

Suppose the compiler encounters code like this:

extern pid_t waitpid(pid_t pid, int *status, int options);
pid_t x = waitpid(y, &status, 0);


What you are saying here is that the call to "waitpid" is undefined
behaviour, because the C standards don't say anything about this
function (it being an OS kernel call). To my understanding, it /is/
defined behaviour - the compiler will generate code to put the
parameters "y", "&status" and "0" onto the stack (or whatever is
required by the calling conventions), call the externally linked
function, and return the value to "x". The standards don't define the
behaviour of the function itself (unlike for standard functions such as
strcpy), but they define how the function is to be called.

The same applies to other things you mentioned, such as writing to
memory-mapped registers in device drivers. The action of these
registers is not defined, but the addresses and values written /is/
defined behaviour.


The common image of "undefined behaviour" is that the compiler can
generate code to make daemons fly out your nose when you write something
like "*(int*)0 = 0;" - it certainly cannot invoke nasal daemons when you
call "waitpid" !


I realise (as James pointed out) that "undefined behaviour" means
"behaviour undefined by the C standard", rather than generally unknown
behaviour (the behaviour of "waitpid" is hopefully well defined in the
kernel's documentation). But I feel that the act of invoking such
functions /is/ well defined in the C standards.



Presumably, given your and James' posts, something is wrong with my
reasoning above. I just don't see exactly where.

Keith Thompson

Dec 3, 2013, 3:05:19 PM
David Brown <da...@westcontrol.removethisbit.com> writes:
[...]
waitpid(y, &status, 0) is a function call, and the standard discusses
how function calls work. It doesn't define the behavior of waitpid()
(POSIX does), so the behavior of that particular function is
undefined *by the C standard*. And if waitpid() happens to execute
`*(int*)0 = 0;`, then the call *could* in principle result in
nasal demons.

To determine whether the behavior of the call is defined by the C
standard, you'd have to look at the code that implements waitpid().

James Kuyper

Dec 3, 2013, 3:33:39 PM
On 12/03/2013 03:05 PM, Keith Thompson wrote:
...
> waitpid(y, &status, 0) is a function call, and the standard discusses
> how function calls work. It doesn't define the behavior of waitpid()
> (POSIX does), so the behavior of that particular function is
> undefined *by the C standard*. And if waitpid() happens to execute
> `*(int*)0 = 0;`, then the call *could* in principle result in
> nasal demons.
>
> To determine whether the behavior of the call is defined by the C
> standard, you'd have to look at the code that implements waitpid().

I'm sure you're aware of the issue I'm about to raise, but for the sake
of other readers I want to point out that such functions are often
written in some other language (such as assembler). Even if they are
written in C, they are often written in code that is not strictly
conforming C. In some cases the use of another language or of
not-strictly conforming C will be indirect, through a subroutine call.
Regardless of how such code is reached, the behavior of that code will
not be defined by C.

Jorgen Grahn

Dec 3, 2013, 5:40:34 PM
On Sun, 2013-12-01, luser- -droog wrote:
> I thought I remembered reading in this group that enabling maximum
> optimizations with -O3 enabled extra code-path analysis which could
> turn up warnings that otherwise would not be noticed. Not that it
> "enables" warnings per se, but allows more warnings to be issued,
> since more were detected.
>
> Searching the group yielded no results. And nothing under

From <slrnia9b1n.d...@frailea.sa.invalid>:

> #include <stdio.h>
> #include <math.h>
> #define M_PI 3.14159
>
> int main()
> {
>     double theta, phi, sinth;
>     double count;
>     double incr;
>     double s;
>
>     s = ((double) 180)/M_PI; /* converting to radiens */
>     incr = 0.5;
>     theta = (double) 0;
>
>     for(theta = incr; theta < (double) 180; theta += incr)
>         sinth = sin(s *theta);
>     for(phi = 0; phi < (double) 360 ; phi += incr/ sinth)
>         count ++;
>     printf("%f", count);
>     return 0;
> }

% gcc -std=c99 -Wall -Wextra -pedantic -c foo.c
(no warnings)
% gcc -std=c99 -Wall -Wextra -pedantic -O1 -c foo.c
foo.c: In function 'main':
foo.c:19:28: warning: 'count' may be used uninitialized in this
function [-Wmaybe-uninitialized]

In this case enabling optimization /at all/ made a difference, but -O3
is what I used in that posting.

Perhaps there is more information (or disinformation) around the
postings

<slrnklo0ij.a...@frailea.sa.invalid>
<slrnjnlkla.1...@frailea.sa.invalid>
<slrnhvt1gh.s...@frailea.sa.invalid>

/Jorgen

--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .

Stephen Sprunk

Dec 3, 2013, 11:23:07 PM
On 03-Dec-13 14:33, James Kuyper wrote:
> On 12/03/2013 03:05 PM, Keith Thompson wrote: ....
The internals of nearly every system call will invoke undefined
behavior, e.g. inline assembly for the syscall interface, but I was
thinking of cases where the desired effect either creates undefined
behavior in the caller's environment or essentially requires the caller
to invoke such itself to be useful, e.g. fork(), mmap(), aio_read().

An OS kernel will have lots of UB itself in areas that interact directly
with the hardware, e.g. device drivers and memory managers, though other
areas can be (but still often aren't) written portably. Being able to
do both in the same language is one of the key strengths of C.

David Brown

Dec 4, 2013, 2:44:04 AM
I know how things /work/ here (the function call behaviour is defined,
but neither the C standards nor the C compiler can know or define
anything about the external function other than the
implementation-defined calling conventions). I guess I am just
surprised that the function is viewed as C "undefined behaviour".

I had been under the impression that "undefined behaviour" was
explicitly marked as such in the C standards - it is certainly often
marked. I had viewed the actions of an external function as being
outside the scope of the C standards, rather than being specifically
"undefined behaviour". But quoting from a near-final C11 draft (since
I've got N1570 handy):


If a “shall” or “shall not” requirement that appears outside of a
constraint or runtime-constraint is violated, the behavior is undefined.
Undefined behavior is otherwise indicated in this International Standard
by the words “undefined behavior” or by the omission of any explicit
definition of behavior. There is no difference in emphasis among these
three; they all describe “behavior that is undefined”.


The function's behaviour is not defined by the C standard, ergo it is
"undefined behaviour".



But this means that there are two different types of "undefined
behaviour" in C - behaviour that the compiler /knows/ is bad, and
behaviour that the compiler does not know to be good. For example, the
compiler can use the undefined nature of signed overflow to simplify "(x
+ 1) > x" to "1" (for signed x). But it can't use the undefined nature
of "waitpid" to do something unexpected.

Malcolm McLean

Dec 4, 2013, 6:48:20 AM
On Tuesday, December 3, 2013 8:33:39 PM UTC, James Kuyper wrote:
>
>
> I'm sure you're aware of the issue I'm about to raise, but for the sake
> of other readers I want to point out that such functions are often
> written in some other language (such as assembler). Even if they are
> written in C, they are often written in code that is not strictly
> conforming C. In some cases the use of another language or of
> not-strictly conforming C will be indirect, through a subroutine call.
> Regardless of how such code is reached, the behavior of that code will
> not be defined by C.
>
There are two common situations. One is that you need to read from or write to
a memory-mapped address. So typically that's written in C. The result of
*123 = 42 is probably not just to set memory location 123 to 42, there's
usually a side effect, like turning on an LED. Otherwise you'd just use
regular memory on the heap or the stack.
The other common case is that you need to issue some sort of special command
to the processor, like generate or return from an interrupt, or execute
a parallel multiply instruction, or start a background block transfer,
something that doesn't fit in the standard C read / write from memory,
execute arithmetical operations / branch on condition / make subroutine calls
programming model. Nowadays you normally see just one or two assembler
instructions embedded in what's essentially still a C subroutine, because
there will be conditional jump / read/write / arithmetical logic around the
special operation, and it's easier to keep this in C.

C can't define what the behaviour of the subroutine will be, because you've
gone beyond the scope of the language. However the use of C is predicated
on the assumption that most of the C constructs will have an effect which
can be predicted by someone familiar with C. Otherwise you wouldn't try to
use C at all.

James Kuyper

Dec 4, 2013, 7:08:14 AM
On 12/04/2013 02:44 AM, David Brown wrote:
...
> But this means that there are two different types of "undefined
> behaviour" in C - behaviour that the compiler /knows/ is bad, and
> behaviour that the compiler does not know to be good. For example, the
> compiler can use the undefined nature of signed overflow to simplify "(x
> + 1) > x" to "1" (for signed x). But it can't use the undefined nature
> of "waitpid" to do something unexpected.

That's not quite the right way to think about it. In general, the reason
why the committee chose to leave the behavior of a C construct undefined
was that doing so gives the implementation the freedom needed to do
whatever it considers is most appropriate. In the particular, if the
behavior of a call to waitpid() were well-defined, that would force the
implementor to make sure that the call had precisely the behavior
defined for it by the C standard (whatever that might be), whether or
not the OS provided a function of the same name, and it would have to do
so even if the behavior specified by the C standard were different from
the behavior provided by the OS function of the same name.

Because the behavior is undefined, an implementation of C is free to
implement the call as simply conforming to the calling conventions of
the corresponding platform, and leaving the details of what happens when
it is called up to the library containing the corresponding function.
--
James Kuyper

Keith Thompson

Dec 4, 2013, 11:31:42 AM
Malcolm McLean <malcolm...@btinternet.com> writes:
[...]
> There are two common situations. One is that you need to read from or write to
> a memory-mapped address. So typically that's written in C. The result of
> *123 = 42 is probably not just to set memory location 123 to 42, there's
> usually a side effect, like turning on an LED. Otherwise you'd just use
> regular memory on the heap or the stack.

I presume you mean something like

*(int*)123 = 42;

since `*123` violates a constraint and will probably be rejected at
compile time.

Malcolm McLean

Dec 4, 2013, 12:37:29 PM
On Wednesday, December 4, 2013 4:31:42 PM UTC, Keith Thompson wrote:
> Malcolm McLean <malcolm...@btinternet.com> writes:
>
>
> I presume you mean something like
>
> *(int*)123 = 42;
>
> since `*123` violates a constraint and will probably be rejected at
> compile time.
>
Yes, C needs the type. You might need to write a single byte or a word to
manipulate the memory mapped device. Probably a byte since 123 is obviously
not going to be word aligned.
In actual use the whole thing is usually wrapped up in macros, so the code will
read something like

led_on();

led_on will be defined as

SETLOWMAP(LED1, LED_ON)

SETLOWMAP as

SETLOWMAP(loc, val) (BASE[loc] = ((RAW_BYTE) val))

and so on, until you eventually get to the bits.


Les Cargill

Dec 4, 2013, 1:28:01 PM
Keith Thompson wrote:
> Malcolm McLean <malcolm...@btinternet.com> writes:
> [...]
>> There are two common situations. One is that you need to read from or write to
>> a memory-mapped address. So typically that's written in C. The result of
>> *123 = 42 is probably not just to set memory location 123 to 42, there's
>> usually a side effect, like turning on an LED. Otherwise you'd just use
>> regular memory on the heap or the stack.
>
> I presume you mean something like
>
> *(int*)123 = 42;
>

Why would you presume that?

> since `*123` violates a constraint and will probably be rejected at
> compile time.
>

It could also be something much more convoluted but compiler-chain-
politically-correct - like declaring a struct-or-array
memory-mapped to 123, then writing to that.

--
Les Cargill

Keith Thompson

Dec 4, 2013, 1:57:25 PM
Les Cargill <lcarg...@comcast.com> writes:
> Keith Thompson wrote:
>> Malcolm McLean <malcolm...@btinternet.com> writes:
>> [...]
>>> There are two common situations. One is that you need to read from
>>> or write to a memory-mapped address. So typically that's written in
>>> C. The result of *123 = 42 is probably not just to set memory
>>> location 123 to 42, there's usually a side effect, like turning on
>>> an LED. Otherwise you'd just use regular memory on the heap or the
>>> stack.
>>
>> I presume you mean something like
>>
>> *(int*)123 = 42;
>>
>
> Why would you presume that?
>
>> since `*123` violates a constraint and will probably be rejected at
>> compile time.

The answer to your question is immediately above this line.

> It could also be something much more convoluted but compiler-chain-
> politically-correct - like declaring a struct-or-array
> memory-mapped to 123, then writing to that.

Which would still be "something like" what I wrote -- but then I'd
be rather surprised if you could assign 42 to it (that would also
be a constraint violation which would *probably* be rejected at
compile time).

If your point is that a compiler could permit `*123` to be treated
as an lvalue, then you're right -- but I don't know of any compilers
that do so. In particular, such a compiler would have to make an
arbitrary decision about the type of the object. (Probably B or
early C compilers would have allowed it.)

If you're making a different point, could you elaborate?

David Brown

Dec 4, 2013, 6:46:11 PM
This means that a compiler implementer could theoretically produce nasal
daemons when the programmer calls waitpid(), just because the writer
thought that is "most appropriate" ? He would not even have to document
it (as would be the case for "implementation-defined behaviour")?
Obviously no sane compiler implementer /would/ do such a thing, but
there seems to me to be a loophole there. Maybe the final bit of
"legalese" is just too subtle for me here - I'll ignore it for now, and
simply assume compiler writers write sensible tools!

Anyway, I think you have made things a step clearer for me on this point
- and I thank you and the others here for your patience.

David

Keith Thompson

Dec 4, 2013, 7:10:49 PM
A *compiler* implementer presumably would not implement waitpid();
that's part of the runtime library.

Assuming that nasal demons are a physical possibility, there's certainly
nothing *in the C standard* that says a call to waitpid() cannot emit
them.

POSIX does define waitpid() with the following declaration:

pid_t waitpid(pid_t pid, int *stat_loc, int options);

(the type pid_t is defined elsewhere). Passing a misaligned pointer as
the second argument does have undefined behavior -- though you're likely
to run into more explicit undefined behavior before such a call.

I'm sure there are other POSIX functions that, for example, take a
pointer argument that must be non-null; passing a null pointer to such a
function would have behavior that is not defined by POSIX. (I'm too
lazy to search for an example.) waitpid(), on the other hand, has well
defined (by POSIX) behavior if you pass it a null pointer. The C
standard says nothing about which POSIX functions require non-null
pointers and which don't.

The "nasal demons" joke may be leading you a bit astray.

"Undefined behavior", as the term is used by the C standard, means
simply behavior that is not defined by the C standard. Some things are
explicitly stated to have undefined behavior; others have behavior that
is undefined due to the Standard's omission of any definition of the
behavior. Both cases are simply *undefined behavior*.

If your program's behavior is undefined, it means that if your program
produces nasal demons (or, more realistically, crashes or produces
unexpected output), then you can't complain that it's because the C
implementation has failed to conform to the C standard. If waitpid()
produces nasal demons, it's a bug, but it's not a C conformance failure.

If waitpid() doesn't work properly, then you *can* complain about the
implementation's non-conformance to the POSIX standard.

> Anyway, I think you have made things a step clearer for me on this point
> - and I thank you and the others here for your patience.

James Kuyper

Dec 4, 2013, 9:31:00 PM
On 12/04/2013 06:46 PM, David Brown wrote:
> On 04/12/13 13:08, James Kuyper wrote:
...
>> That's not quite the right way to think about it. In general, the reason
>> why the committee chose to leave the behavior of a C construct undefined
>> was that doing so gives the implementation the freedom needed to do
>> whatever it considers is most appropriate. In the particular, if the
>> behavior of a call to waitpid() were well-defined, that would force the
>> implementor to make sure that the call had precisely the behavior
>> defined for it by the C standard (whatever that might be), whether or
>> not the OS provided a function of the same name, and it would have to do
>> so even if the behavior specified by the C standard were different from
>> the behavior provided by the OS function of the same name.
>>
>> Because the behavior is undefined, an implementation of C is free to
>> implement the call as simply conforming to the calling conventions of
>> the corresponding platform, and leaving the details of what happens when
>> it is called up to the library containing the corresponding function.
>>
>
> This means that a compiler implementer could theoretically produce nasal
> daemons when the programmer calls waitpid(), just because the writer
> thought that is "most appropriate" ?

Well, yes. In fact, it's quite likely to fail to link (and probably
won't even compile), on operating systems for which waitpid() is
meaningless.

An implementor is constrained not only by the C standard, but also by
whatever other standards the implementor chooses to support. Even that
choice is constrained, to an extent, by the demands of the implementor's
customers. An implementation supposedly targeting a POSIX environment
which implemented waitpid() in a way inconsistent with the requirements
of the POSIX standard would have, first, very angry customers, and
later, very few customers - but would not be violating any requirement
of the C standard in the process of losing those customers.

> He would not even have to document
> it (as would be the case for "implementation-defined behaviour")?
> Obviously no sane compiler implementer /would/ do such a thing, but
> there seems to me to be a loophole there. Maybe the final bit of
> "legalese" is just too subtle for me here - I'll ignore it for now, and
> simply assume compiler writers write sensible tools!

You're putting too much responsibility on the C standard, and not enough
responsibility on other standards.
--
James Kuyper

David Brown

Dec 5, 2013, 3:14:51 AM
Yes - I have been thinking that the compiler implementer only has to be
concerned with the C standards. Now that you've said it, it is obvious
that he also has to follow other standards and conventions relevant to
the target, such as the ABI, calling conventions, and compatibility with
the linker, link-loader, OS, etc., and he can also assume that any
external functions similarly obey such conventions and standards. It is
these that make a call to an external function defined behaviour, even
though it is undefined /C/ behaviour. (The fact that the particular
function I used as an example is covered by POSIX standards is just
coincidence - it could have been any external function.)

Thanks,

David

Malcolm McLean

Dec 5, 2013, 8:53:32 AM
On Wednesday, December 4, 2013 11:46:11 PM UTC, David Brown wrote:
> On 04/12/13 13:08, James Kuyper wrote:
>
>
> This means that a compiler implementer could theoretically produce nasal
> daemons when the programmer calls waitpid(), just because the writer
> thought that is "most appropriate" ? He would not even have to document
> it (as would be the case for "implementation-defined behaviour")?
>
Yes. He provides a library with a C-callable interface binding to the symbol
waitpid. But he doesn't document what the function does. So if you call it,
anything could happen, constrained only by the physical capabilities of
the machine and, maybe, protections offered by the operating system.

Stephen Sprunk

Dec 8, 2013, 7:03:52 PM
On 02-Dec-13 17:41, Keith Thompson wrote:
> Stephen Sprunk <ste...@sprunk.org> writes:
>> When I was first learning C, I had no trouble making my programs
>> compile quietly and work as expected with -O0. However, with -O3,
>> I would get dozens of new warnings--and my code no longer worked as
>> expected. So, correctly or not, I learned that GCC only warns me
>> when it thinks it's doing something that I don't expect.
>
> It's likely that your code has undefined behavior regardless of the
> optimization level (in fact, UB is always independent of the
> optimization level), and it just happened to "work" at "-O0" and not
> at "-O3". Which is why compiling with "-O3", even if you don't
> intend to run the optimized code, can be a good way to flush out
> bugs.

I'm sure that was the case, but I didn't know that at the time. What I
did know is that GCC did what I _expected_ my (UB-ridden) code to do at
-O0 but not at -O3, and that it only warned me of the latter.

IMHO, if an implementation is going to warn about questionable code, it
should do so regardless of the optimization level selected.

>>> But it's impossible in general to determine whether a given
>>> violation is deliberate.
>>
>> Hence the problem with a proposal to _require_ warnings.
>
> I don't recall any such proposal.

You're the one that made it!

Keith Thompson

Dec 8, 2013, 8:37:03 PM
Stephen Sprunk <ste...@sprunk.org> writes:
> On 02-Dec-13 17:41, Keith Thompson wrote:
>> Stephen Sprunk <ste...@sprunk.org> writes:
>>> When I was first learning C, I had no trouble making my programs
>>> compile quietly and work as expected with -O0. However, with -O3,
>>> I would get dozens of new warnings--and my code no longer worked as
>>> expected. So, correctly or not, I learned that GCC only warns me
>>> when it thinks it's doing something that I don't expect.
>>
>> It's likely that your code has undefined behavior regardless of the
>> optimization level (in fact, UB is always independent of the
>> optimization level), and it just happened to "work" at "-O0" and not
>> at "-O3". Which is why compiling with "-O3", even if you don't
>> intend to run the optimized code, can be a good way to flush out
>> bugs.
>
> I'm sure that was the case, but I didn't know that at the time. What I
> did know is that GCC did what I _expected_ my (UB-ridden) code to do at
> -O0 but not at -O3, and that it only warned me of the latter.
>
> IMHO, if an implementation is going to warn about questionable code, it
> should do so regardless of the optimization level selected.

The problem is that doing so would require the compiler to do all
the additional analysis at -O0 that it does at -O3.

If you want additional warnings, compile with optimization to enable
the extra analysis that's required before the warnings can be issued.

If you want the compiler doing that analysis unconditionally (thus
slowing down -O0 compilations), you can certainly take it up with the
gcc maintainers. But they'd likely respond that typing "-O3" isn't
enough of a burden to justify changing the compiler's design.

>>>> But it's impossible in general to determine whether a given
>>>> violation is deliberate.
>>>
>>> Hence the problem with a proposal to _require_ warnings.
>>
>> I don't recall any such proposal.
>
> You're the one that made it!

Not that I recall. Can you cite the article where I said that?

Keith Thompson

Dec 8, 2013, 8:37:12 PM
Stephen Sprunk <ste...@sprunk.org> writes:
> On 02-Dec-13 17:41, Keith Thompson wrote:
>> Stephen Sprunk <ste...@sprunk.org> writes:
>>> When I was first learning C, I had no trouble making my programs
>>> compile quietly and work as expected with -O0. However, with -O3,
>>> I would get dozens of new warnings--and my code no longer worked as
>>> expected. So, correctly or not, I learned that GCC only warns me
>>> when it thinks it's doing something that I don't expect.
>>
>> It's likely that your code has undefined behavior regardless of the
>> optimization level (in fact, UB is always independent of the
>> optimization level), and it just happened to "work" at "-O0" and not
>> at "-O3". Which is why compiling with "-O3", even if you don't
>> intend to run the optimized code, can be a good way to flush out
>> bugs.
>
> I'm sure that was the case, but I didn't know that at the time. What I
> did know is that GCC did what I _expected_ my (UB-ridden) code to do at
> -O0 but not at -O3, and that it only warned me of the latter.
>
> IMHO, if an implementation is going to warn about questionable code, it
> should do so regardless of the optimization level selected.

The problem is that doing so would require the compiler to do all
the additional analysis at -O0 that it does at -O3.

If you want additional warnings, compile with optimization to enable
the extra analysis that's required before the warnings can be issued.

If you want the compiler doing that analysis unconditionally (thus
slowing down -O0 compilations), you can certainly take it up with the
gcc maintainers. But they'd likely respond that typing "-O3" isn't
enough of a burden to justify changing the compiler's design.

>>>> But it's impossible in general to determine whether a given
>>>> violation is deliberate.
>>>
>>> Hence the problem with a proposal to _require_ warnings.
>>
>> I don't recall any such proposal.
>
> You're the one that made it!

Not that I recall. Can you cite the article where I said that?

Keith Thompson

Dec 8, 2013, 11:14:16 PM
Keith Thompson <ks...@mib.org> writes:
[SNIP]

Sorry about the double post. I'm not sure how that happened.

Tim Rentsch

Dec 19, 2013, 4:10:12 PM
Keith Thompson <ks...@mib.org> writes:

>> [discussing choice of gcc compiler options]
>
> "-Werror" is often a good idea; on the other hand, it makes gcc
> non-conforming, since it causes it to reject some conforming code.

(Presumably you meant "some strictly conforming code" there.)

I believe what you say is true, but only barely. In most cases
the culprit is not -Werror but not giving the right settings for
other warning options. In particular, the combination of

-Wno-unused-result
-Wno-div-by-zero
-Wno-deprecated
-Wno-deprecated-declarations
-Wno-overflow
-Wno-int-to-pointer-cast
-Wno-pointer-to-int-cast

gets gcc pretty close to being conforming even with -Werror.

There are some warnings (which ones?) that gcc gives that
cannot be separately turned off, and these interfere with
the behavior of -Werror being conforming. It would be nice
to file bug reports for gcc, to get compiler options for those
offending conditions put in.

On the practical side, do you know of any examples of "normal
code" (eg, as opposed to something written purposely to show
gcc's erroneous behavior) that gives a warning message under gcc
in one of its nominally conforming mode, with the above compiler
options in force?

Tim Rentsch

Dec 19, 2013, 8:28:30 PM
> a thing, but there seems to me to be a loophole there. [snip
> elaboration]

The previous responses here haven't been very illuminating, and I
believe have steered the conclusions in a bad direction. First,
when something is "undefined behavior" (as with the "unspecified"
or "implementation-defined" labels) that is not a description of
behavior but rather a specification of what behaviors are
permissible. Also, something people often forget, what's being
addressed is not the behavior of /programs/ but the behavior of
/implementations/. Of course it is common to talk about how a
certain program construct must behave, but that's only a shorthand
for saying the implementation must produce an executable which
when run will act according to the semantic descriptions given in
the Standard. That may be a subtle difference, but here it is an
important one.

Upon encountering a call to waitpid(), what is an implementation
obliged to do? The answer is mostly found in 5.1.1.1 p1 and
5.1.1.2 p1. Each translation unit must be processed, linked with
library functions and also other TU's, and ultimately an executable
produced. During translation a call to waitpid() must be
processed just like any other function call. Similarly during
link time the linkage must be done just the way it would for
any other link-time connection. What does this mean for how
waitpid() behaves? There are two possibilities.

One is that there is a "magic" object file (magic in the sense
that its origins are unknown, but accepted by the linker even
though the compiler didn't produce it) that defines an external
reference for waitpid(), so the hookup is complete. A call to
waitpid() happens in the same way that other function calls do
(ie, the implementation is obliged to do that), but what
waitpid() does is outside the domain of the implementation.
Whatever happens is not defined behavior, unspecified behavior,
implementation-defined behavior, or undefined behavior, because
these terms make sense only in the context of a C implementation
processing (all or part of) a C program. Here that hasn't
happened. Probably the best term for this situation is
"extra-linguistic behavior" - it is simply outside the realm of
what the C standard considers. The call must be done normally;
after that some other set of rules is in charge.

The other possibility is that waitpid() is supplied by one of
the implementation's library components, ie, a component the
implementation knows about. In this case such a function would
count as an extension in the sense of 4 p6. This in turn says
something about the behavior - in section 4 p8, the Standard
requires implementations to defined (among other things) all
extensions. So what waitpid() does must be defined explicitly
(under this scenario) by the implementation, even though it does
not fall under the heading of 'implementation-defined'.

So perhaps you can rest easier now, knowing that in neither case
is the implementation off the hook for what happens when making
a call to waitpid() - after the call is made, maybe, but not
before. :)

Richard

Dec 24, 2013, 8:23:04 AM
Keith Thompson <ks...@mib.org> writes:

> Keith Thompson <ks...@mib.org> writes:
> [SNIP]
>
> Sorry about the double post. I'm not sure how that happened.

So you compound it by adding an Off Topic followup? Please don't. Some
of us are still using 33.6 Winmodems.

--
"Avoid hyperbole at all costs, its the most destructive argument on
the planet" - Mark McIntyre in comp.lang.c