
Infinite loop == undefined behaviour?


Fuz

Mar 23, 2006, 6:20:42 AM
I have some code along the lines of this (simplified for clarity;
ignoring lifetime-of-T issues):

template <typename T>
struct node
{
    node* next;
    T value;
};

template <typename T>
void destruct_list(node<T>* begin, node<T>* end)
{
    while (begin != end)
    {
        begin->value.T::~T();
        begin = begin->next;
    }
}

When T has a trivial destructor, the compilers I've tested it with (VC8
and GCC 2.95) will optimise this function away to nothing. Which is
great, because it's exactly what I want.

However, what gives the compiler the right to optimise it away? If the
destructor call gets optimised away, we're just left with an empty loop
over a list. But let's say I were to call it like this:

void f()
{
    node<int> n;
    n.value = 5;
    n.next = &n;

    node<int>* end = 0;

    destruct_list(&n, end);
}

... the correct behaviour here would be to go into an infinite loop, so
this optimisation changes the behaviour of the program. The compiler
is basically assuming that the loop must terminate at some point.

This suggests to me that infinite loops are not defined behaviour in
C++. I'm sure I've read something to that effect in the Standard
before, but I can't find it for the life of me. Can anyone please
point it out, or at least tell me what I'm missing to explain why a
compiler is allowed to make this optimisation.
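Incidentally, the intent can be made explicit rather than left to dead-code elimination. A minimal sketch of my own, assuming a C++11 compiler with std::is_trivially_destructible (a trait that postdates the compilers named above):

```cpp
#include <type_traits>

template <typename T>
struct node
{
    node* next;
    T value;
};

template <typename T>
void destruct_list(node<T>* begin, node<T>* end)
{
    // When ~T is trivial there is nothing observable to do, so skip
    // the traversal entirely instead of hoping the optimizer removes it.
    if (std::is_trivially_destructible<T>::value)
        return;

    while (begin != end)
    {
        begin->value.T::~T();
        begin = begin->next;
    }
}
```

With T = int the call returns immediately even for a circular list, since the walk is skipped; for a class type with a non-trivial destructor the original loop runs unchanged.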


[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

Kai-Uwe Bux

Mar 23, 2006, 11:09:10 AM
Fuz wrote:

> I have some code along the lines of this (simplified for clarity;
> ignoring lifetime-of-T issues):
>
> template <typename T>
> struct node
> {
> node* next;
> T value;
> };
>
> template <typename T>
> void destruct_list(node<T>* begin, node<T>* end)
> {
> while (begin != end)
> {
> begin->value.T::~T();
> begin = begin->next;
> }
> }
>
> When T has a trivial destructor, the compilers I've tested it with
> (VC8
> and GCC 2.95) will optimise this function away to nothing. Which is
> great, because it's exactly what I want.

Actually, it should be optimized to begin = end, so that the postcondition
of the loop is met. Only when further analysis establishes that this
assignment is redundant can it be optimized away in a second step.

>
> However, what gives the compiler the right to optimise it away?

My guess would be [1.4/6]:

The observable behavior of the abstract machine is its sequence of
reads and writes to volatile data and calls to library I/O functions.

The compiler has established that nothing observable happens.
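By the same token, [1.4/6] suggests how to keep such a loop from being eliminated: make the traversal observable. A sketch of my own (not from the original post), reusing the node layout from upthread and walking the list through a pointer to volatile:

```cpp
template <typename T>
struct node
{
    node* next;
    T value;
};

// Each read of 'next' is now a volatile access, which [1.4/6] counts
// as observable behavior, so a conforming compiler may not discard
// the traversal even though it computes nothing.
template <typename T>
void walk_list(volatile node<T>* begin, volatile node<T>* end)
{
    while (begin != end)
        begin = begin->next;
}
```

If the list really is circular, this version loops forever by design; the point is only that the loads can no longer be optimized away.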

[snip]


Best

Kai-Uwe Bux

Greg Herlihy

Mar 23, 2006, 11:12:17 AM
Fuz wrote:
> I have some code along the lines of this (simplified for clarity;
> ignoring lifetime-of-T issues):
>
> template <typename T>
> struct node
> {
> node* next;
> T value;
> };
>
> template <typename T>
> void destruct_list(node<T>* begin, node<T>* end)
> {
> while (begin != end)
> {
> begin->value.T::~T();
> begin = begin->next;
> }
> }
>
> When T has a trivial destructor, the compilers I've tested it with
> (VC8
> and GCC 2.95) will optimise this function away to nothing. Which is
> great, because it's exactly what I want.

The compiler does not optimize away the trivial destructor - the
trivial destructor has no code to start with. Or to put it another way:
what would the "missing" destructor code have done exactly?

>
> However, what gives the compiler the right to optimise it away? If
> the
> destructor call gets optimised away, we're just left with an empty
> loop
> over a list.

No, the loop is not empty. There remains the begin = begin->next
statement.

> But let's say I were to call it like this:
>
> void f()
> {
> node<int> n;
> n.value = 5;
> n.next = &n;
>
> node<int>* end = 0;
>
> destruct_list(&n, end);
> }
>
> ... the correct behaviour here would be to go into an infinite
> loop, so
> this optimisation changes the behaviour of the program. The compiler
> is basically assuming that the loop must terminate at some point.

This program does execute an infinite loop, so its behavior is indeed
correct. Nor is the program's behavior undefined for doing so. In fact,
many C++ programs, especially those running on embedded devices,
execute within an infinite loop.

Greg

kanze

Mar 24, 2006, 11:18:57 AM
Greg Herlihy wrote:
> Fuz wrote:
> > I have some code along the lines of this (simplified for
> > clarity; ignoring lifetime-of-T issues):

> > template <typename T>
> > struct node
> > {
> > node* next;
> > T value;
> > };

> > template <typename T>
> > void destruct_list(node<T>* begin, node<T>* end)
> > {
> > while (begin != end)
> > {
> > begin->value.T::~T();
> > begin = begin->next;
> > }
> > }

> > When T has a trivial destructor, the compilers I've tested
> > it with (VC8 and GCC 2.95) will optimise this function away
> > to nothing. Which is great, because it's exactly what I
> > want.

> The compiler does not optimize away the trivial destructor -
> the trivial destructor has no code to start with. Or to put it
> another way: what would the "missing" destructor code have
> done exactly?

Arguably, it's an empty function. But that's really irrelevant
to his point. Whether the loop is executed or not has no
impact on the observable behavior of the program.

> > However, what gives the compiler the right to optimise it
> > away? If the destructor call gets optimised away, we're
> > just left with an empty loop over a list.

> No, the loop is not empty. There remains the begin =
> begin->next statement.

And if begin is never accessed after the loop (which the
compiler can trivially ascertain here), the loop has no effect
on the observable behavior of the program.

> > But let's say I were to call it like this:

> > void f()
> > {
> > node<int> n;
> > n.value = 5;
> > n.next = &n;

> > node<int>* end = 0;

> > destruct_list(&n, end);
> > }

> > ... the correct behaviour here would be to go into an
> > infinite loop, so this optimisation changes the behaviour
> > of the program. The compiler is basically assuming that the
> > loop must terminate at some point.

> This program does execute an infinite loop, so its behavior is
> indeed correct. Nor is the program's behavior undefined for
> doing so.

Yes and no. In practice, infinite loops don't exist, because
the hardware doesn't last that long. They exceed the resource
limits of the implementation, and so result in undefined
behavior.

> In fact, many C++ programs, especially those running on
> embedded devices, execute within an infinite loop.

I seem to recall seeing the question raised in comp.std.c, many,
many years back: can "undefined behavior" violate causation?
At some point in time, the infinite loop in an embedded device
will exceed the resource limits of the system; the processor
will fail. This results in undefined behavior -- and I'm sure
that none of us would claim it an error in the compiler because
the infinite loop ceased when the processor failed. Does this
undefined behavior mean that nothing which the program did
before was defined?

I forget what the final decision was, or even if there was one.
It doesn't matter in practice, because in practice, undefined
behavior won't make time go backwards, even if the C++ standard
allows it. If a loop contains observable behavior, the compiler
will NOT remove it, even if it is infinite. The question here
is when the loop doesn't affect the observable behavior of the
program.

--
James Kanze GABI Software
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

Fuz

Mar 24, 2006, 11:23:48 AM
Kai-Uwe Bux wrote:
> Actually, it should be optimized to begin = end so that the post-
> condition of the loop is met.

Is that accurate? I'd no more expect the compiler to make that step
than I would expect it to optimise this:

int n = 0;
while (n != 1)
    n *= 2;

... as 'int n = 1'. Which it might actually do; I haven't checked.


> My guess would be [1.4/6]:
>
> The observable behavior of the abstract machine is its sequence of
> reads and writes to volatile data and calls to library I/O functions.
>
> The compiler has established that nothing observable happens.

This is a fair point. However, any code with observable behaviour
*after* an infinite loop should never be called. The loop itself may
not exhibit observable behaviour, but by eliminating it and allowing
the following code to be executed, it's changing the behaviour of the
program.


kanze

Mar 24, 2006, 11:19:52 AM

What if the calling code outputs something after calling this
function? Before the optimization, the output never took place;
after, it does. (Even if the calling code never outputs
anything, I think that exit() is considered a library I/O
function. If not, it should be.)

Execution time is not considered part of the observable
behavior. Changing something which would take a million years
into something which takes a microsecond is a perfectly legal
optimization. Changing something which would take an infinite
amount of time into something which would take a finite amount,
however? I'm not sure myself -- as Kai-Uwe points out, it means
that something that would never take place will take place.

I think that this is covered by §1.4/2: "If a program contains
no violations of the rules in this International Standard, a
conforming implementation shall, WITHIN ITS RESOURCE LIMITS,
accept and correctly execute that program." Unless the machine
the program will run on can be guaranteed to last an infinite
amount of time, a program with an infinite loop will sooner or
later exceed the program's resource limits. Since the standard
doesn't say what happens when the resource limits are exceeded
(except in specific cases like operator new), the behavior
is undefined.

--
James Kanze GABI Software
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

Sean Kelly

Mar 25, 2006, 6:04:27 AM
kanze wrote:

> Greg Herlihy wrote:
>
> > This program does execute an infinite loop, so its behavior is
> > indeed correct. Nor is the program's behavior undefined for
> > doing so.
>
> Yes and no. In practice, infinite loops don't exist, because
> the hardware doesn't last that long. They exceed the resource
> limits of the implementation, and so result in undefined
> behavior.

I think it's incorrect to take this perspective with theoretical
problems, as they all necessarily ignore these issues. For example,
one could argue that the Halting Problem is invalid for the same
reason, but to do so seems kind of pointless.

> > In fact, many C++ programs, especially those running on
> > embedded devices, execute within an infinite loop.
>
> I seem to recall seeing the question raised in comp.std.c, many,
> many years back. Can "undefined behavior" violate causation.
> At some point in time, the infinite loop in an embedded device
> will exceed the resource limits of the system; the processor
> will fail.

As above, that doesn't render the loop finite for the purposes of
discussion.

> This results in undefined behavior -- and I'm sure
> that none of us would claim it an error in the compiler because
> the infinite loop ceased when the processor failed. Does this
> undefined behavior mean that nothing which the program did
> before was defined?

I would argue that behavior should be considered within the theoretical
context of the problem, which in this case is the virtual machine
described by the C++ standard. Thus, a typical infinite loop should
have well-defined behavior. Though I admit that this leaves resolving
the differences between theoretical and practical behavior as an
occasionally necessary problem. For example, I believe an infinitely
recursive function has well-defined behavior as well, even though this
is quite obviously limited by the finite stack space provided by actual
machines.

> If a loop contains observable behavior, the compiler
> will NOT remove it, even if it is infinite. The question here
> is when the loop doesn't affect the observable behavior of the
> program.

Perhaps that's a problem best left up to compiler implementors?


Sean

Kai-Uwe Bux

Mar 25, 2006, 7:06:26 PM
Fuz wrote:

> Kai-Uwe Bux wrote:
>> Actually, it should be optimized to begin = end so that the post-
>> condition of the loop is met.
>
> Is that accurate? I'd no more expect the compiler to make that step
> that I would expect it to optimise this:
>
> int n = 0;
> while (n != 1)
> n *= 2;
>
> ... as 'int n = 1'. Which it might actually do; I haven't checked.

Well, the compiler can abstain from optimizing the loop. But if it does
something, it had better make sure that the postcondition of the loop is
satisfied. We may not know whether control flow reaches the point after the
loop statement, but we do know that *if* control flow reaches that point, n
will be 1.

>
>> My guess would be [1.4/6]:
>>
>> The observable behavior of the abstract machine is its sequence of
>> reads and writes to volatile data and calls to library I/O functions.
>>
>> The compiler has established that nothing observable happens.
>
> This is a fair point. However, any code with observable behaviour
> *after* an infinite loop should never be called.

Why? I admit that this is intuitive, but is it guaranteed by the standard?

> The loop itself may
> not exhibit observable behaviour, but by eliminating it and allowing
> the following code to be executed, it's changing the behaviour of the
> program.

I guess, the question is in part about the semantics of a while loop. The
standard says:

while ( condition ) {
    substatement;
}

the substatement is executed until condition becomes false [6.5.1/1]. One
way of reading this is: at the sequence point after the while statement, all
side effects of the execution of as many runs through the substatement as
required have taken place, and the condition is false. In particular, if the
side effects of an apparently infinite loop stabilize after finite time, I
do not see a reason that control flow should be trapped within the loop.

At the heart of the matter is not an infinite loop. The bottom line is a
simple two line snippet:

statement_one;
statement_two;

I think that the standard does not say that statement_two is executed only
after statement_one has been completed. All it says is that before
statement_two is executed, all side effects of statement_one have taken
place. That is, the standard has a very weak notion of flow control that
would, for instance, allow for parallelization on certain architectures. If
statement_one happens to be an infinite loop without side effects, I think
the abstract machine is free to execute statement_two as soon as it can see
that statement_one has no further side effects (although, I admit, this
is counter-intuitive).

Now, one might think: but statement_one (in the case of an infinite loop) is
not that simple a statement; it boils down to an infinite sequence of
statements itself:

statement_one_1;
statement_one_2;
statement_one_3;
statement_one_4;
...

With respect to this sequence, I would argue (again) that all these
statements could be done in parallel if their side effects do not interfere
with one another. In particular, they could be dealt with altogether in one
clock tick.


I should also add that I am not sure about this as an interpretation of the
standard. I am just offering this reading as a starting point to show that
it might not be impossible to make heads or tails of the compiler's doings.


Best

Kai-Uwe Bux

Greg Herlihy

Mar 26, 2006, 7:45:06 AM

Sure it does. A program without an infinite loop terminates - one
with an infinite loop does not. Such a difference is clearly
observable. Or consider this variation:

int main()
{
    node<int> n;
    n.value = 5;
    n.next = &n;

    node<int>* end = 0;

    destruct_list(&n, end);
    std::cout << "good-bye\n";
}

Now the program without the infinite loop outputs "good-bye" while the
one with the infinite loop does not - even should the hardware fail at
some point - the program will not ever have output "good-bye". And the
Standard states that a program's calls to library I/O routines are part
of its observable behavior.

What is the resource whose limit is being exceeded by executing an
infinite loop? It cannot be "time", because time is not finite. A
resource by definition is finite (and exhaustible) - and time is
neither. And even if hardware has a limited lifespan, its lifespan is
not being exhausted by the programs it executes. In other words,
executing a really, really long - but finite - loop would also be
undefined for the same reason.

As a practical matter any physical device, such as a computer, will
fail eventually - but an abstract machine such as the one the C++
standard specifies has no finite lifespan - so executing an infinite
loop is well-defined - because no resource is being exhausted.

Any C++ compiler that removes an infinite loop is simply in error. And
it's an error that appears to have been fixed in gcc, since gcc 4.0
(compiling with the -O3 flag for full optimization) does not optimize
the loop away. The Microsoft C++ compiler should be fixed to do the
same.

Greg

Greg Herlihy

Mar 26, 2006, 12:01:53 PM

An "infinite" loop is defined as one in which its exit condition will
never be met, due to some inherent characteristic of the loop
[wikipedia]. And since there is no goto, break, throw or alternate
means of exiting from this loop, the program will not exit the loop.
And certainly the onus of proving that this loop is not infinite would
be on the one making such a claim - by producing a counter example.

>> The loop itself may
>> not exhibit observable behaviour, but by eliminating it and allowing
>> the following code to be executed, it's changing the behaviour of the
>> program.
>
> I guess, the question is in part about the semantics of a while
> loop. The
> standard says:
>
> while ( condition ) {
> substatement;
> }
>
> the substatement is executed until condition becomes false
> [6.5.1/1]. One
> way of reading this is: at the sequence point after the while
> statement all
> side effects of the execution of as many runs through substatment as
> required have taken place and the condition is false. In
> particular, if the
> side effects of an apparently infinite loop stabilize after finite
> time, I
> do not see a reason that control flow should be trapped within the
> loop.

Let's test this hypothesis heuristically: we'll start with the same
compiler that optimized away the loop in the original example
(Microsoft Visual C++ 2005 Express) and leave all of its settings
completely unchanged. Instead we will just edit the source code, so it
now looks like this:

#include <iostream>

template <typename T>
struct node
{
    node* next;
    T value;
};

template <typename T>
void destruct_list(node<T>* begin, node<T>* end)
{
    while (1) {}
}

int main()
{
    node<int> n;
    n.value = 5;
    n.next = &n;

    node<int>* end = 0;

    destruct_list(&n, end);
}

Now clearly the infinite loop in this program is even more apparent
than in the original - so any compiler performing the exact same set of
optimizations that eliminated the loop in the original, would also
eliminate the infinite loop in this program. What reason could there be
for the compiler to do otherwise?

And yet, I discovered that the Microsoft compiler does not "optimize"
away this infinite loop, as it does in the original program. Instead I
found that the infinite loop is compiled into the program, and the
program when run, never exits. So if it is the case that the compiler
intends to eliminate infinite loops, how does it manage to perform the
sophisticated analysis needed to detect the infinite loop in the first
program, but at the same time be utterly unable to recognize the
obvious infinite loop in the second?

Wouldn't a much more likely explanation be simply that the compiler had
the reverse intention: to eliminate a loop only if it is both empty
and finite? Certainly, if that were the case, it would explain why the
compiler kept the obviously infinite loop but removed the non-obviously
infinite loop - because it simply failed to recognize that the loop was
not finite.

In fact, we can answer that question on behalf of the gcc 2.95
compiler. In gcc's case, the documentation describes the correct,
intended behavior. The section "Certain Changes We Don't Want to Make"
has this to say about empty loops:

Deleting "empty" loops.

Historically, GCC has not deleted "empty" loops under the
assumption that the most likely reason you would put one in a
program is to have a delay, so deleting them will not make
real programs run any faster.

However, the rationale here is that optimization of a nonempty
loop cannot produce an empty one. This held for carefully
written C compiled with less powerful optimizers but is not
always the case for carefully written C++ or with more powerful
optimizers. Thus GCC will remove operations from loops whenever
it can determine those operations are not externally visible
(apart from the time taken to execute them, of course). In case
the loop can be proved to be finite, GCC will also remove the
loop itself.


http://gcc.gnu.org/onlinedocs/gcc-4.1.0/gcc/Non_002dbugs.html#Non_002dbugs

The last sentence is unequivocal - a loop is eliminated once it can be
proved to be finite. A loop that cannot be proved to be finite - even
an empty one - will not be eliminated. And as gcc has gotten better at
proving when a loop is finite, the fewer the number of infinite loops
it has been deleting by mistake. And indeed recent versions of gcc no
longer compile away the infinite loop in the original program - because
the compiler never intended to delete an infinite loop in the first
place.

Greg

Kai-Uwe Bux

Mar 26, 2006, 1:42:57 PM
Greg Herlihy wrote:

> Kai-Uwe Bux wrote:
>> Fuz wrote:
>>

[snip]


>>> However, any code with observable behaviour
>>> *after* an infinite loop should never be called.
>>
>> Why? I admit that this is intuitive, but is it guaranteed by the
>> standard?
>
> An "infinite" loop is defined as one in which its exit condition will
> never be met, due to some inherent characteristic of the loop
> [wikipedia]. And since there is no goto, break, throw or alternate
> means of exiting from this loop, the program will not exit the loop.
> And certainly the onus of proving that this loop is not infinite would
> be on the one making such a claim - by producing a counter example.

Is the following loop infinite:

n = 1;
while ( n != 0 ) {
    n = some_function( n );
}

What about

n = 1;
while ( n != 0 ) {
    n = 1;
}

I contend that a compiler might be well within its rights to replace both by

n = 0;

I do not contend that a compiler will do that or should do that.

Also, as I already said, I am not sure about this reading of the standard.


>>> The loop itself may
>>> not exhibit observable behaviour, but by eliminating it and allowing
>>> the following code to be executed, it's changing the behaviour of the
>>> program.
>>
>> I guess, the question is in part about the semantics of a while
>> loop. The
>> standard says:
>>
>> while ( condition ) {
>> substatement;
>> }
>>
>> the substatement is executed until condition becomes false
>> [6.5.1/1]. One
>> way of reading this is: at the sequence point after the while
>> statement all
>> side effects of the execution of as many runs through substatment as
>> required have taken place and the condition is false. In
>> particular, if the
>> side effects of an apparently infinite loop stabilize after finite
>> time, I
>> do not see a reason that control flow should be trapped within the
>> loop.
>
> Let's test this hypothesis heuristically:

Since my hypothesis is about the C++ standard, I would prefer if your test
started from there. Note that I *never* disputed that a compiler is allowed
to not optimize the loop at all.

> we'll start with the same
> compiler that optimized away the loop in the original example
> (Microsoft Visual C++ 2005 Express) and leave all of its settings
> completely unchanged. Instead we will just edit the source code, so it
> now looks like this:
>
> #include <iostream>
>
> template <typename T>
> struct node
> {
> node* next;
> T value;
> };
>
> template <typename T>
> void destruct_list(node<T>* begin, node<T>* end)
> {
> while (1) {}
> }
>
> int main()
> {
> node<int> n;
> n.value = 5;
> n.next = &n;
>
> node<int>* end = 0;
>
> destruct_list(&n, end);
> }
>
> Now clearly the infinite loop in this program is even more apparent
> than in the original - so any compiler performing the exact same set of
> optimizations that eliminated the loop in the original, would also
> eliminate the infinite loop in this program. What reason could there be
> for the compiler to do otherwise?

The postcondition of while ( 1 ) {} can never be true. One can therefore
prove that flow control cannot reach the point after the loop.

Let me tell you where I was coming from when I posted my reading of the
standard. There is this business about defining the semantics of
programming language constructs axiomatically using pre- and
postconditions. The conventional way of describing the while-loop is:

                             P and C { S } P
    while ( C ) { S } :  -------------------------------
                         P { while (C){S} } P and not C

This means: for any condition P such that P and C at the beginning of S
imply P at the end of S, the construct while(C){S} implies P and not C at
the end, given that P holds at its beginning. Now, if C is "true", then not
true would have to hold after the loop, proving that the while statement
cannot complete.

On the other hand, with this definition of the while-loop,

n = 1;
while ( n != 0 ) {
    n = 1;
}

and

n = 0;

are actually equivalent.
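As a worked instance of the rule (my own illustration, in the notation above): take P to be "true". The premise "true and (n != 0) { n = 1 } true" holds trivially, since every state satisfies "true", so the rule yields

                    true and (n != 0) { n = 1 } true
    ---------------------------------------------------------
    true { while (n != 0) { n = 1; } } true and not (n != 0)

and "true and not (n != 0)" simplifies to "n == 0" - exactly the postcondition that the single assignment n = 0; establishes from any precondition.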

These axiomatic characterizations are used to prove that program
transformations do not change the behavior of correct programs. The
question therefore is: is the C++ standard compatible with such a
definition of the while-loop? My reasons to suspect that the answer might
be "yes" are twofold:

a) The standard does not closely tie the abstract machine to the textual
sequence of statements in a program. Usually, all it does is to require
that certain effects have taken place before others -- the synchronization
being required at sequence points.

b) Generally, the standard tries to give a lot of freedom to compiler
writers to optimize code. The approach to optimization via transformations
that preserve pre-postcondition semantics is not that recent and was most
likely known to the authors of the standard. I would be surprised if they
knowingly worded the standard in such a way that it rules out this kind of
optimization (unless the compiler can prove a loop finite -- which in
general it cannot without solving the halting problem).

> And yet, I discovered that the Microsoft compiler does not "optimize"
> away this infinite loop, as it does in the original program. Instead I
> found that the infinite loop is compiled into the program, and the
> program when run, never exits. So if it is the case that the compiler
> intends to eliminate infinite loops, how does it manage to perform the
> sophisticated analysis needed to detect the infinite loop in the first
> program, but at the same time be utterly unable to recognize the
> obvious infinite loop in the second?

The compiler has no intentions. It just applies code transformations
increasing efficiency, for which it was established that said
transformations preserve the semantics of the program. If the compiler used
transformations based on the pre- and postcondition characterization of the
while-loop, then it does not come as a surprise that it eliminates a loop
where it can insert a simple statement to make its loop condition false, and
does not do so if the loop condition cannot be false.


> Wouldn't a much more likely explanation be simply that the compiler had
> the reverse intention: to eliminate a loop only if it is both empty
> and finite? Certainly if that were the case, it would explain why the
> compiler kept the obviously infinite loop but removed the non-obviously
> infinite loop - because it simply failed to recognize that the loop was
> not finite.

Actually, I don't think the original question is about what the compiler
intended to do. It is entirely about what the compiler is *allowed* to do.
No experiment will ever decide that.

[snip]


Best

Kai-Uwe Bux

kanze...@neuf.fr

Mar 26, 2006, 3:14:36 PM
Sean Kelly wrote:
> kanze wrote:
>> Greg Herlihy wrote:

>>> This program does execute an infinite loop, so its
>>> behavior is indeed correct. Nor is the program's behavior
>>> undefined for doing so.

>> Yes and no. In practice, infinite loops don't exist,
>> because the hardware doesn't last that long. They exceed
>> the resource limits of the implementation, and so result in
>> undefined behavior.

> I think it's incorrect to take this perspective with
> theoretical problems, as they all necessarily ignore these
> issues. For example, one could argue that the Halting Problem
> is invalid for the same reason, but to do so seems kind of
> pointless.

Is a C++ compiler a theoretical problem? What happens in the
case of infinite recursion? Why should this case obey
different rules?

>>> In fact, many C++ programs, especially those running on
>>> embedded devices, execute within an infinite loop.

>> I seem to recall seeing the question raised in comp.std.c,
>> many, many years back: can "undefined behavior" violate
>> causation? At some point in time, the infinite loop in an
>> embedded device will exceed the resource limits of the
>> system; the processor will fail.

> As above, that doesn't render the loop finite for the purposes
> of discussion.

I didn't say it did. I said it lets the compiler off the hook.

Consider another simple case:

while ( true ) {
    std::cout << "hello" << std::endl ;
}

In this case, of course, there is observable behavior, and the
compiler should not optimize the loop away. However, if your
code actually does output "hello" an infinite number of times,
your hardware is more reliable than mine. And perhaps as much
to the point: how do you test it?

Would you argue that the compiler is not conforming because, in
fact, the generated program does not output "hello" an infinite
number of times? And if not, why not?

I think that there's actually a very interesting question
involved here, and to tell the truth, I haven't the vaguest idea
how the standard should treat it. Last night, g++ failed to
compile a perfectly legal program -- of course, I happened to
accidentally pull the plug on the machine while it was
compiling. Certainly, no reasonable person would consider this
an error or a lack of conformity in g++. But why? The only
text I can find in the standard that would cover it is the
clause about insufficient resources.

Alternatively, one could view it as a lack of conformity, and
that g++ just forgot to document that, in order to be conforming,
not only do you need the option -std=c++98, but the
hardware must be in good working order and turned on through the
entire time the program is compiling and running. Pulling the
plug renders g++ non-conforming just as surely as placing the
program in a file with the extension .f does.

Realistically, of course, I don't think it matters that much --
we all know that you cannot pull the plug and expect anything to
continue working correctly. But philosophically, I find it an
interesting question. Does the standard let g++ off the hook,
or only common sense?

>> This results in undefined behavior -- and I'm sure that none
>> of us would claim it an error in the compiler because the
>> infinite loop ceased when the processor failed. Does this
>> undefined behavior mean that nothing which the program did
>> before was defined?

> I would argue that behavior should be considered within the
> theoretical context of the problem, which in this case is the
> virtual machine described by the C++ standard. Thus, a
> typical infinite loop should have well-defined behavior.
> Though I admit that this leaves resolving the differences
> between theoretical and practical behavior as an occasionally
> necessary problem. For example, I believe an infinitely
> recursive function has well-defined behavior as well, even
> though this is quite obviously limited by the finite stack
> space provided by actual machines.

I totally agree. Isn't this the real reason why exceeding
resource limits is undefined behavior? In French, we'd say "À
l'impossible nul n'est tenu" -- no one is required to perform
the impossible. That goes for compilers as well.

So how do the infinite loop and the infinite recursion differ?
Except that typically, it will require a lot longer time before
the infinite loop (e.g. as my simple example above) fails. (Or
will it? I've got a 64 bit machine, and could, presumably,
configure 2^64 KBytes of virtual memory. If I did that, a
program with infinite recursion will very quickly start paging
like crazy, which will slow things down considerably. And
without doing any calculations to verify my hunch, I rather
suspect that the machine won't stay up long enough to use up
2^64 KB of stack.)

>> If a loop contains observable behavior, the compiler will
>> NOT remove it, even if it is infinite. The question here is
>> when the loop doesn't affect the observable behavior of the
>> program.

> Perhaps that's a problem best left up to compiler implementors?

Isn't that what undefined behavior does?

--
James Kanze kanze...@neuf.fr


Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung

9 place Sémard, 78210 St.-Cyr-l'École, France +33 (0)1 30 23 00 34

kanze...@neuf.fr

Mar 26, 2006, 3:11:24 PM

Are you sure? In theory, you may be right, but practically,
I've terminated programs with infinite loops more than once.
(In one extreme case, the infinite loop was in the machine
microcode -- the "reset" button actually generated an NMI, and
I had to pull the power plug to stop the loop. But stop it
did.)

In theory, infinite recursion doesn't terminate either. So the
problem is exactly the same. If you don't think so, explain
why.

> Such a difference is clearly observable.

> Or consider
> this variation:

> int main()
> {
> node<int> n;
> n.value = 5;
> n.next = &n;

> node<int>* end = 0;

> destruct_list(&n, end);
> std::cout << "good-bye\n";
> }

> Now the program without the infinite loop outputs "good-bye"
> while the one with the infinite loop does not - even should
> the hardware fail at some point - the program will not ever
> have output "good-bye".

Have you tried pulling the plug immediately after starting the
program without the infinite loop? Before it manages to do its
output. (Pulling the plug is closest thing I can think of to a
hardware failure. Supposing, of course, the machine is not
running on batteries.)

The only real difference between the program is that the one
with the infinite loop waits a little longer (OK, a lot longer)
for the hardware to fail.

> And the Standard states that a program's calls to library I/O
> routines are part of its observable behavior.

It also says that exceeding resource limits is undefined
behavior, which lets the compiler off the hook.

>>>> destruct_list(&n, end);
>>>> }

The lifetime of the processor.

> It cannot be "time" because time is not finite. A resource by
> definition is finite (and exhaustible)

The computer itself is definitely a resource. And unless there
has been some serious improvements in reliability since I last
looked, its lifetime is finite.

A computer (and thus a running program) uses electricity.
Sooner or later, a break-down in the power supply will deprive
the computer of that.

> - and time is neither. And even if hardware has a limited
> lifespan - its lifespan is not being exhausted by the programs
> it executes. In other words executing a really, really long -
> but finite loop - would also be undefined for the same reason.

> As a practical matter any physical device, such as a computer,
> will fail eventually - but an abstract machine such as the one
> the C++ standard specifies has no finite lifespan - so
> executing an infinite loop is well-defined - because no
> resource is being exhausted.

And how is this logic any different than that for infinite
recursion. The abstract machine defined in the standard uses no
resources for much of anything, including a function call. Or
at least, the standard doesn't say anything about it.

> Any C++ compiler that removes an infinite loop is simply in
> error.

From a quality of implementation point of view, I think that you
can argue the question both ways. From a standard's point of
view... it's long been accepted that the shell script:

echo "Resource limit exceeded"

is a fully conformant C++ compiler. Not a very useful one, and
not one which would be considered acceptable from a quality of
implementation point of view. But the standard doesn't (and
probably cannot) guarantee that a conformant compiler will be
useful.

> And it's an error that appears to have been fixed in gcc,
> since gcc 4.0 (compiling with the -O3 flag for full
> optimization) does not optimize the loop away. The Microsoft
> C++ compiler should be fixed to do the same.

As the original poster said, the compiler did what he wanted.
IMHO, it's a quality of implementation issue, and as I said,
there are arguments both ways with regards to which solution is
"better". (Historically, removing such empty loops, even when
they might have been infinite, has been considered "good".)


Kai-Uwe Bux

Mar 27, 2006, 9:43:51 AM
Oops, I screwed up:

Actually, they are not: If one considers the condition

P : n == 1

then:

n == 1 and n != 0 at the beginning of { n=1 } imply n == 1
at the end.

thus:

n == 1 at the beginning of { while(n!=0){n=1} } implies
n == 1 and not ( n!= 0 ) after the while loop.

Thus: control flow cannot reach the point after this loop.


Now, I begin to wonder whether I can actually construct an example
of a "visibly infinite" loop (or at least a potentially infinite
loop) that can be elided under the pre/postcondition criterion. It
is harder than I thought!

mar...@gmail.com

Mar 27, 2006, 1:08:21 PM

Fuz wrote:
> I have some code along the lines of this (simplified for clarity;
> ignoring lifetime-of-T issues):
>
> template <typename T>
> struct node
> {
> node* next;
> T value;
> };
>
> template <typename T>
> void destruct_list(node<T>* begin, node<T>* end)
> {
> while (begin != end)
> {
> begin->value.T::~T();
> begin = begin->next;
> }
> }
>

>


> void f()
> {
> node<int> n;
> n.value = 5;
> n.next = &n;
>
> node<int>* end = 0;
>
> destruct_list(&n, end);
> }
>
> ... the correct behaviour here would be to go into an infinite loop, so
> this optimisation changes the behaviour of the program. The compiler
> is basically assuming that the loop must terminate at some point.

There is no correct behavior here. The behavior is undefined, because
you call the destructor for n.value more than once.

The compiler assumes that undefined behavior is not invoked, and that
therefore the loop *must* be finite (memory is a finite resource), and
that therefore it can optimize the loop away.

Mark Williams

mar...@gmail.com

Mar 27, 2006, 1:23:21 PM

kanze wrote:
>
> What if the calling code outputs something after calling this
> function? Before the optimization, the output never took place;
> after, it does.

Why can't an infinite loop complete in a finite time? E.g. what if the nth
iteration of the loop runs in 2^-n seconds? Since there is no
observable behavior, that's clearly doable.

Mark Williams

Brian McKeever

Mar 29, 2006, 5:58:43 AM
> Why can't an infinite loop complete in a finite time? E.g. what if the nth
> iteration of the loop runs in 2^-n seconds?

I don't think that's relevant here. C++ while loops have a condition
expression, and they loop until it evaluates false. "Infinite loop"
means (perhaps imprecisely) a loop whose exit condition is never met,
not one that exits after executing a countably infinite number of
times.

Brian

Greg Herlihy

Mar 29, 2006, 5:51:16 AM

mar...@gmail.com wrote:
> kanze wrote:
> >
> > What if the calling code outputs something after calling this
> > function? Before the optimization, the output never took place;
> > after, it does.
>
> Why can't an infinite loop complete in a finite time? E.g. what if the nth
> iteration of the loop runs in 2^-n seconds? Since there is no
> observable behavior, that's clearly doable.

...because an infinite loop, by definition, is one that never
completes. And neither the total number of iterations to date nor the
time it took to complete each one makes any sort of difference; since
the exit condition is not met at the end of an iteration, the program
never decides to exit the loop but instead always begins another
iteration.

Greg

kanze

Mar 29, 2006, 5:53:02 AM
mar...@gmail.com wrote:
> kanze wrote:

> > What if the calling code outputs something after calling
> > this function? Before the optimization, the output never
> > took place; after, it does.

> Why can't an infinite loop complete in a finite time? E.g. what
> if the nth iteration of the loop runs in 2^-n seconds? Since
> there is no observable behavior, that's clearly doable.

Now that's an interesting observation. A compiler can suppress
an infinite loop on the grounds that each pass executes in 0
seconds, 0 is less than 2^-n, \Sigma 2^-n is bounded, so the
execution takes some finite time, and how much is not part of
the observable behavior.
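Written out, the bound in question is just the geometric series:

```latex
\sum_{n=1}^{\infty} 2^{-n} \;=\; \tfrac{1}{2} + \tfrac{1}{4} + \tfrac{1}{8} + \cdots \;=\; 1
```

so under that (physically impossible) schedule, all infinitely many iterations would complete within one second.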

Another take would be that we have the product of 0 and
infinity. Which is mathematically undefined, and when we are
dealing with limits, is often a finite number.

I like it.


Fuz

Mar 29, 2006, 5:52:00 AM
mar...@gmail.com wrote:
> There is no correct behavior here. The behavior is undefined, because
> you call the destructor for n.value more than once.

Which is why I said: 'simplified for clarity; ignoring lifetime-of-T
issues'. I am very aware of how to correctly manage the lifetime of an
object, but I didn't want buffer allocation, placement new-ing,
explicit destructor invocation and deallocation to cloud the issue at
hand.

Believe me, the compiler does exactly the same thing regardless of my
bending of the rules for the purpose of this example.


> The compiler assumes that undefined behavior is not invoked, and that
> therefore the loop *must* be finite (memory is a finite resource), and
> that therefore it can optimize the loop away.

I don't follow your reasoning. No memory allocation is being done
here. There is no recursion, so there is no chance of the stack
overflowing. There are no standard library calls. Where is the memory
problem?

Fuz

Mar 29, 2006, 5:53:24 AM
Kai-Uwe Bux wrote:
> Why? I admit that this is intuitive, but is it guaranteed by the standard?

I think the whole issue boils down to this simple statement. The
problem seems to be that the Standard doesn't guarantee anything about
infinite loops. We're left with a quandary. The Standard doesn't say
it has defined behaviour but it doesn't say it's undefined either.
Does that mean it's undefined, unspecified, or something else? I
suspect that the best you can do is deal with infinite loops at a
per-implementation level, and at worst, assume they're undefined and
avoid them entirely.

As for what I originally thought I saw in the Standard on the issue, I
could've mistaken it for the clause about infinite template recursion
being undefined (I can't look up the paragraph in the Standard right
now but it's 14.7.1/12 in the Draft Standard).

Greg Herlihy

Mar 29, 2006, 6:04:59 AM

Unless your actions were somehow part of the program's logical
execution sequence - then it cannot be said that the program exited the
infinite loop. Rather your intervention moved the machine's program
counter outside of the loop - but without this assistance, the program
left to run on its own would still be stuck in that loop to this day.

> In theory, infinite recursion doesn't terminate either. So the
> problem is exactly the same. If you don't think so, explain
> why.

A program with infinite recursion (assuming each recursive call
requires memory allocation) will fail as soon as the amount of
available memory is exhausted. And storage, being finite, will never be
able to meet the requirements of a program whose memory usage is
constantly increasing.

In contrast, a program executing in an infinite loop has constant
memory requirements. So unlike the program with the infinite recursion,
it can run indefinitely. And unlike the recursive program - whose
behavior becomes undefined as soon as memory is exhausted - the program
executing in the loop remains well-defined.

The lifetime of the processor presents no upper bound on how long our
program can run. After all, we can routinely replace an aging processor
(after suspending and saving the program's execution state) with a brand new
one - and thereby keep the program running while ensuring that its
hardware never wears out. And as long as the new processor is not any
larger (or more expensive) than the one replaced - we can continue
to keep the program running indefinitely. And the requirement here is
not that the program runs "forever" but to run indefinitely, that is,
with no provable upper bound on the duration of its execution.

Consider, in contrast, the maintenance required for a recursive program
that is just about to run out of memory. Swapping in new memory chips
won't fix the problem here. The memory has not worn out, rather the
problem is that it is almost all in use. As a stopgap solution we could
try adding more memory - but no matter how much memory we added, we
would eventually run out, because the program's memory needs are ever
increasing.

So we can safely conclude that a program executing in an infinite loop
is defined behavior, whereas a program executing infinite recursion is
not.

> > It cannot be "time" because time is not finite. A resource by
> > definition is finite (and exhaustible)
>
> The computer itself is definitly a resource. And unless there
> has been some serious improvements in reliability since I last
> looked, its lifetime is finite.

>


> > And it's an error that appears to have been fixed in gcc,
> > since gcc 4.0 (compiling with the -O3 flag for full
> > optimization) does not optimize the loop away. The Microsoft
> > C++ compiler should be fixed to do the same.
>
> As the original poster said, the compiler did what he wanted.
> IMHO, it's a quality of implementation issue, and as I said,
> there are arguments both ways with regards to which solution is
> "better". (Historically, removing such empty loops, even when
> they might have been infinite, has been considered "good".)

The original poster was glad that the compiler generated no code for an
int's destructor. But an int - being a fundamental type - has no
destructor to start with. So there is little reason to be grateful to a
compiler that generates no code in places where no code could exist. On
the other hand, a compiler that deletes a loop that should never have
completed - and thereby changes the entire behavior of the program - is
something to complain about.

So I think the more helpful answer to the original poster's complaint
would have been to acknowledge the obvious: the problem observed is a
bug with each of the compilers - and not with his expectation that a
infinite loop should not in fact end. No explanation - no matter how
exotic or tortured its logic - is ever going to persuade anyone that
the presence of an infinite loop in a program simply does not matter. Nor is
such an argument likely to make anyone believe that an infinite loop is
not anything that a programmer should ever be concerned about - because
no program is ever affected by its presence.

Unfortunately the reality is that a program that terminates is not even
comparable (much less identical) to one that does not. And certainly a
program that exhibits undefined behavior (were one to accept the
explanation being offered) could never be identical to a program that
has none. And yet such a contradiction would nonetheless have to be
valid, in order to make the case that an infinite loop in a C++ program
is purely "optional" - so whether the loop ends up in the resulting
program - or not - is a matter decided entirely by the whim of the
compiler.

Greg

ThosRTanner

Mar 30, 2006, 2:15:10 AM

Fuz wrote:
> I have some code along the lines of this (simplified for clarity;
> ignoring lifetime-of-T issues):
>
> template <typename T>
> struct node
> {
> node* next;
> T value;
> };
>
> template <typename T>
> void destruct_list(node<T>* begin, node<T>* end)
> {
> while (begin != end)
> {
> begin->value.T::~T();
> begin = begin->next;
> }
> }
>
> When T has a trivial destructor, the compilers I've tested it with (VC8
> and GCC 2.95) will optimise this function away to nothing. Which is
> great, because it's exactly what I want.
Why is it what you want?

The loop you have is pretty much equivalent to

while (begin != end) { begin = begin->next; }

Unless the compiler knows a lot about the list you are calling (which
it might, but it'd be a very clever compiler!), it cannot safely assume
that following the chain of next pointers will terminate.

> However, what gives the compiler the right to optimise it away? If the
> destructor call gets optimised away, we're just left with an empty loop
> over a list. But let's say I were to call it like this:
>
> void f()
> {
> node<int> n;
> n.value = 5;
> n.next = &n;
>
> node<int>* end = 0;
>
> destruct_list(&n, end);
> }
>
> ... the correct behaviour here would be to go into an infinite loop, so
> this optimisation changes the behaviour of the program. The compiler
> is basically assuming that the loop must terminate at some point.

Quite. And it isn't allowed to do this.

> This suggests to me that infinite loops are not defined behaviour in
> C++. I'm sure I've read something to that effect in the Standard
> before, but I can't find it for the life of me. Can anyone please
> point it out, or at least tell me what I'm missing to explain why a
> compiler is allowed to make this optimisation.

Infinite loops are, um, infinite. I don't think the compiler is allowed
to make optimisations based on what it thinks ought to be happening.
There is a slight difference between
for (i = 0; i <= 12000; ++i) {}, which will clearly terminate after
doing nothing, and while (begin != end) { begin = begin->next; }, which
will either terminate, invoke undefined behaviour, or loop forever.

mar...@gmail.com

Mar 30, 2006, 2:39:23 AM

Greg Herlihy wrote:
> mar...@gmail.com wrote:
> > kanze wrote:
> > >
> > > What if the calling code outputs something after calling this
> > > function? Before the optimization, the output never took place;
> > > after, it does.
> >
> > > Why can't an infinite loop complete in a finite time? E.g. what if the nth
> > > iteration of the loop runs in 2^-n seconds? Since there is no
> > > observable behavior, that's clearly doable.
>
> ...because an infinite loop, by definition, is one that never
> completes.

A loop which enumerates the natural numbers, and does so on the
schedule above is both infinite, and terminates in a finite time.

> And neither the total number of iterations to date nor the
> time it took to complete each one makes any sort of difference; since
> the exit condition is not met at the end of an iteration, the program
> never decides to exit the loop but instead always begins another
> iteration.

(Didn't Zeno use that exact argument to prove that nothing could move?)

Given the above schedule, the nth iteration has completed by time
t=1-(2^-n), and so, in particular *every* iteration has completed by
time t=1.

What exactly do you suppose is happening at time t=2, for example?

It can't be executing iterations of the loop - they all finished more
than a second ago :-)

Mark Williams

mar...@gmail.com

Mar 30, 2006, 2:40:11 AM

Fuz wrote:
> mar...@gmail.com wrote:
> > There is no correct behavior here. The behavior is undefined, because
> > you call the destructor for n.value more than once.
>
> Which is why I said: 'simplified for clarity; ignoring lifetime-of-T
> issues'. I am very aware of how to correctly manage the lifetime of an
> object, but I didn't want buffer allocation, placement new-ing,
> explicit destructor invokation and deallocation to cloud the issue at
> hand.

Unfortunately, it *does* affect the outcome.

>
> Believe me, the compiler does exactly the same thing regardless of my
> bending of the rules for the purpose of this example.
>

Then for *those* cases, the compiler *may* be incorrect (depending on
who's right in this thread).

> > The compiler assumes that undefined behavior is not invoked, and that
> > therefore the loop *must* be finite (memory is a finite resource), and
> > that therefore it can optimize the loop away.
>
> I don't follow your reasoning. No memory allocation is being done
> here. There is no recursion so there is no chance of the stack being
> overflown. There are no standard library calls. Where is the memory
> problem?

Memory is limited, so the set of valid objects is finite on entry to
the loop. No new objects are created during the loop. Therefore the
compiler can see that *either* an object has its destructor called more
than once *or* the loop executes a finite number of times. It's free to
assume the latter.

Mark Williams

Niklas Matthies

Mar 30, 2006, 6:49:30 AM
On 2006-03-27 18:23, mar...@gmail.com wrote:
> kanze wrote:
>>
>> What if the calling code outputs something after calling this
>> function? Before the optimization, the output never took place;
>> after, it does.
>
> Why can't an infinite loop complete in a finite time? E.g. what if the
> nth iteration of the loop runs in 2^-n seconds? Since there is no
> observable behavior, that's clearly doable.

The crucial point is that there is no execution path leaving the loop.
So while the loop may complete in finite time, there's nothing left to
be executed afterwards.

-- Niklas Matthies

Sean Kelly

Mar 30, 2006, 7:18:28 AM
kanze...@neuf.fr wrote:
> Sean Kelly wrote:
> > kanze wrote:
>
> >> Yes and no. In practice, infinitly loops don't exist,
> >> because the hardware doesn't last that long. They exceed
> >> the resource limits of the implementation, and so result in
> >> undefined behavior.
>
> > I think it's incorrect to take this perspective with
> > theoretical problems, as they all necessarily ignore these
> > issues. For example, one could argue that the Halting Problem
> > is invalid for the same reason, but to do so seems kind of
> > pointless.
>
> Is a C++ compiler a theoretical problem? What happens in the
> case of infinite recursion? Explain why this case must obey
> different rules?

A C++ compiler isn't theoretical, but the virtual machine described by
the C++ spec is; and necessarily so, since the spec must be meaningful
when applied to a variety of concrete machines. One could argue that
infinite recursion is a separate issue, since resources are commonly
required for each recursive call, and machines have a finite amount of
such resources available. However, the spec must be vague on this
issue because resource availability and resource management is a
concrete issue. About the most the spec could provide is a comment
regarding the potential limitations of self-recursion. And it's
possible that a compiler could still modify such a function to recurse
indefinitely. For example:

    void fn() {
        std::cout << "hello\n";
        fn();
    }

It's clear that the above function could be converted to this:

    while ( true ) {
        std::cout << "hello\n";
    }

and observable behavior would be consistent. Could a compiler then do
the same thing? I would assume so.

> >> I seem to recall seeing the question raised in comp.std.c,
> >> many, many years back. Can "undefined behavior" violate
> >> causation. At some point in time, the infinite loop in an
> >> embedded device will exceed the resource limits of the
> >> system; the processor will fail.
>
> > As above, that doesn't render the loop finite for the purposes
> > of discussion.
>
> I didn't say it did. I said it lets the compiler off the hook.

How can the compiler possibly be held responsible for external factors?

> Consider another simple case:
>
> while ( true ) {
> std::cout << "hello" << std::endl ;
> }
>
> In this case, of course, there is observable behavior, and the
> compiler should not optimize the loop away. However, if your
> code actually does output "hello" an infinit number of times,
> your hardware is more reliable than mine. And perhaps as much
> to the point: how do you test it?

It's not necessary to run the generated program to verify that a
compiler has generated correct code. In this case, the easiest way to
verify the output is correct is simply to examine it, though a
sufficiently diligent person could verify the code of the compiler
itself.

> Would you argue that the compiler is not conform because in
> fact, the generated program does not output "hello" an infinit
> number of times? And if not, why not?

No. I would argue the compiler is not conformant if it generates
incorrect output.

> I think that there's actually a very interesting question
> involved here, and to tell the truth, I haven't the vaguest idea
> how the standard should treat it. Last night, g++ failed to
> compile a perfectly legal program -- of course, I happened to
> accidentally pull the plug on the machine while it was
> compiling. Certainly, no reasonable person would consider this
> an error or a lack of conformity in g++. But why? The only
> text I can find in the standard that would cover it is the
> clause about insufficient resources.

This has actually become quite a problem in the US legal system,
largely due to a rule called "failure to warn." Basically, an
individual is within their rights to sue the manufacturer of a product
if something bad happened as a result of using the product and the
customer was not warned in advance that such an event may occur. This
is why products sold in the US are accompanied by an extensive list of
warnings about risks that should be obvious: climbing a metal ladder
outdoors during a lightning storm is a bad idea, etc. But is this
reasonable? The sturdiness of a ladder has no relation to the weather,
and one could argue that the ladder is performing according to spec
despite the potential death of its user to an errant lightning strike.
By the same token, I believe that the correctness of a program has no
relation to external factors such as power outages or the extinction of
humanity.

> Alternatively, one could view it as a lack of conformity, and
> that g++ just forgot to document, that in order to be conform,
> not only do you need the option -std=c++98, but that the
> hardware must be in good working order and turned on through the
> entire time the program is compiling and running. Pulling the
> plug renders g++ non-conform just as surely as placing the
> program in a file with the extension .f does.

See above. I see no reason to include such obvious qualifications in
the standard.

> Realistically, of course, I don't think it matters that much --
> we all know that you cannot pull the plug and expect anything to
> continue working correctly. But philosophically, I find it an
> interesting question. Does the standard let g++ off the hook,
> or only common sense.

I think it's sufficient for a text to omit discussion of the obvious,
provided doing so doesn't make the text any less meaningful.
Philosophic texts regularly omit such things, for example, and it's rare
for someone to take issue with this, as it rarely affects the strength
of the argument.

> > I would argue that behavior should be considered within the
> > theoretical context of the problem, which in this case is the
> > virtual machine described by the C++ standard. Thus, a
> > typical infinite loop should have well-defined behavior.
> > Though I admit that this leaves resolving the differences
> > between theoretical and practical behavior as an occasionally
> > necessary problem. For example, I believe an infinitely
> > recursive function has well-defined behavior as well, even
> > though this is quite obviously limited by the finite stack
> > space provided by actual machines.
>
> I totally agree. Isn't this the real reason why exceeding
> resource limits is undefined behavior. In French, we'd say "À
> l'impossible nul n'est tenu" -- no one is required to perform
> the impossible. That goes for compilers as well.

Agreed.

> So how does the infinite loop and the infinite recursion differ.
> Except that typically, it will require a lot longer time before
> the infinite loop (e.g. as my simple example above) fails. (Or
> will it? I've got a 64 bit machine, and could, presumably,
> configure 2^64 KBytes of virtual memory. If I did that, a
> program with infinite recursion will very quickly start paging
> like crazy, which will slow things down considerably. And
> without doing any calculations to verify my hunch, I rather
> suspect that the machine won't stay up long enough to use up
> 2^64 KB of stack.)

See above. And Greg has some other examples as well. But in general,
I would say that recursion is different because of certain
commonalities in hardware and software design. But I might not go so
far as to say that these limitations are inherent to the technique in
any theoretical sense.


Sean

Greg Herlihy

Mar 30, 2006, 8:56:13 AM
mar...@gmail.com wrote:
> Greg Herlihy wrote:
> > mar...@gmail.com wrote:
> > > kanze wrote:
> > > >
> > > > What if the calling code outputs something after calling this
> > > > function? Before the optimization, the output never took place;
> > > > after, it does.
> > >
> > > Why cant an infinite loop complete in a finite time? eg what if the nth
> > > iteration of the loop runs in 2^-n seconds? Since there is no
> > > observable behavior, thats clearly doable.
> >
> > ...because an infinite loop, by definition, is one that never
> > completes.
>
> A loop which enumerates the natural numbers, and does so on the
> schedule above is both infinite, and terminates in a finite time.

Granted, in fact the loop could even finish immediately - since the
iterations are not performing any real work in the first place.

> > And neither the total number of iterations to date nor the
> > time it took to complete each one makes any sort of difference; since
> > the exit condition is not met at the end of an iteration, the program
> > never decides to exit the loop but instead always begins another
> > iteration.
>
> (Didn't Zeno use that exact argument to prove that nothing could move?)
>
> Given the above schedule, the nth iteration has completed by time
> t=1-(2^-n), and so, in particular *every* iteration has completed by
> time t=1.
>
> What exactly do you suppose is happening at time t=2, for example?
>
> It can't be executing iterations of the loop - they all finished more
> than a second ago :-)

Whether the loop finishes in finite time or not seems undecidable. And
in either case, the program behaves just the same. What is certain is
that the program will resume execution at the next statement after the
loop if - and only if - the loop's exit condition is true. And the exit
condition for an infinite loop is never true.

So whether the program is executing a loop or whether it has simply
stopped running - the upshot is that the program is stuck. And it will
remain so indefinitely.

Greg

kanze

Mar 30, 2006, 9:23:34 AM

My actions, or a hardware failure, or... Since the machine is
part of the implementation, a hardware failure means that the
implementation has caused the loop to terminate.

But on further thought, from a practical point of view, the
important point here is that although the loop has terminated,
the code following the loop isn't executed.

>> In theory, infinite recursion doesn't terminate either. So the
>> problem is exactly the same. If you don't think so, explain
>> why.

> A program with infinite recursion (assuming each recursive
> call requires memory allocation) will fail as soon as the
> amount of available memory is exhausted.

If your hardware can run for an infinite period of time without
failing, my hardware can have infinite memory:-).

In both cases, in real life, it doesn't happen. Memory and
hardware lifetime are finite resources.

> And storage, being finite, will never be able to meet the
> requirements of a program whose memory usage is constantly
> increasing.

> In contrast, a program executing in an infinite loop has
> constant memory requirements. So unlike the program with the
> infinite recursion, it can run indefinitely.

It still needs CPU time. And there's only so much of that,
before the CPU fails.

> And unlike the recursive program - whose behavior becomes
> undefined as soon as memory is exhausted - the program
> executing in the loop remains well-defined.

Until it ceases to be well-defined, because it no longer has
adequate resources.

(An interesting side-light: what happens if the compiler
converts the infinite recursion into an infinite loop, by means
of tail recursion optimization? From the standard's point of
view, of course, there is no problem: undefined behavior can
result in a program working. But I think it pretty much
indicates that the two infinities are in some way similar.)

[...]

> So we can safely conclude that a program executing in an
> infinite loop is defined behavior, whereas a program executing
> infinite recursion is not.

Not according to the standard. A resource is a resource. A
program needs a CPU to operate, just as it needs sufficient
memory. If one or the other is missing, or not available in
sufficient quantity, a resource limitation has occurred.

>>> It cannot be "time" because time is not finite. A resource
>>> by definition is finite (and exhaustible)

>> The computer itself is definitly a resource. And unless
>> there has been some serious improvements in reliability
>> since I last looked, its lifetime is finite.

>>> And it's an error that appears to have been fixed in gcc,
>>> since gcc 4.0 (compiling with the -O3 flag for full
>>> optimization) does not optimize the loop away. The
>>> Microsoft C++ compiler should be fixed to do the same.

>> As the original poster said, the compiler did what he
>> wanted. IMHO, it's a quality of implementation issue, and as
>> I said, there are arguments both ways with regards to which
>> solution is "better". (Historically, removing such empty
>> loops, even when they might have been infinite, has been
>> considered "good".)

> The original poster was glad that the compiler generated no
> code for an int's destructor. But an int - being a
> fundamental type - has no destructor to start with. So there
> is little reason to be grateful to a compiler that generates
> no code in places where no code could exist.

If a destructor is trivial, no code could exist either. (Not
that that's relevant to much here.)

> On the other hand, a compiler that deletes a loop that should
> never have completed - and thereby changes the entire behavior
> of the program - is something to complain about.

Or not, depending on what you are doing. IMHO, it's a quality
of implementation issue; in practice, in the usual programming
environment, infinite loops that don't do anything arise only
as an artifact of optimization. Or rather, they simply don't
occur in real programs. So allowing the compiler to assume
that an empty loop does terminate, and to suppress it, is a
valid, and maybe in some cases important, optimization.

> So I think the more helpful answer to the original poster's
> complaint would have been to acknowledge the obvious: the
> problem observed is a bug with each of the compilers - and not
> with his expectation that an infinite loop should not in fact
> end.

In the original code, the loop wasn't infinite. The issue was
simply that the compiler couldn't prove that it wasn't, and that
one could construct cases where the loop would be infinite if
the function was called. From a practical point of view,
requiring the compiler to be able to prove that the loop isn't
infinite places too great a burden on the compiler, and
unnecessarily restricts useful and important optimizations.

> No explanation - no matter how exotic or tortured its logic -
> is ever going to persuade anyone that the presence of an
> infinite loop in a program simply does not matter.

Obviously, the real issue is quality of implementation, and what
is practically useful. Refusing to remove an empty loop, simply
because the compiler cannot prove it is not infinite, is only
acceptable if the compiler is capable of doing full program
analysis to be able to do such proofs.

The standard explicitly gives the compiler writers an out, in
order to make such choices based on the needs of their
customers. That out is that exceeding resource limitations is
undefined behavior. It's an enormous out -- you can use it to
get away with murder. But the idea is that where it applies is
to be a quality of implementation issue; it's up to the compiler
vendor and his customers to decide what's best in their case.

> Nor is such an argument likely to make anyone believe that an
> infinite loop is not anything that a programmer should ever be
> concerned about - because no program is ever affected by its
> presence.

> Unfortunately the reality is that a program that terminates is
> not even comparable (much less identical) to one that does
> not. And certainly a program that exhibits undefined behavior
> (were one to accept the explanation being offered) could never
> be identical to a program that has none. And yet such a
> contradiction would nonetheless have to be valid, in order to
> make the case that an infinite loop in a C++ program is purely
> "optional" - so whether the loop ends up in the resulting
> program - or not - is a matter decided entirely by the whim of
> the compiler.

The fact remains that a program with an endless loop has,
formally, undefined behavior. For that matter, every program
potentially has undefined behavior -- you don't need an endless
loop for your program to run out of "CPU" or electricity, or
whatever. Where and how the line is drawn is up to the
implementor, and is a quality of implementation issue.

--
James Kanze GABI Software

Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung

9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

kanze

Mar 30, 2006, 9:22:29 AM
Niklas Matthies wrote:
> On 2006-03-27 18:23, mar...@gmail.com wrote:
>> kanze wrote:

>>> What if the calling code outputs something after calling
>>> this function? Before the optimization, the output never
>>> took place; after, it does.

>> Why can't an infinite loop complete in a finite time? E.g. what
>> if the nth iteration of the loop runs in 2^-n seconds? Since
>> there is no observable behavior, that's clearly doable.

> The crucial point is that there is no execution path leaving
> the loop. So while the loop may complete in finite time,
> there's nothing left to be executed afterwards.

That's an interesting point. In fact, of course, the language
doesn't define infinite loops -- being infinite is a result of
the end condition never being met. So the resulting program
flow is well defined and obvious. Except that...

The language does guarantee that the conditions which terminated
the loop were met. E.g. given something like the following:

bool b = true ;
while ( b ) {}
assert( ! b ) ;

the language guarantees that the assert will never fail.

It's also mathematically provable that if the i-th execution of
the loop takes 2^-i seconds, then the loop cannot run for more
than a second.

I'm not sure what to make of this. Perhaps the only conclusion
we can make is that if a processor does not have a finite lower
bound on the execution time of a loop, you cannot implement C++
on it. Even if the standard seems to say that execution time is
never part of the observable behavior.

--
James Kanze GABI Software
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

Fuz

Mar 30, 2006, 1:55:26 PM
mar...@gmail.com wrote:
> Unfortunately, it *does* affect the outcome.

How? destruct_list<int>() exhibits no undefined behaviour of its own;
it's f() that makes the second destructor call on the int. No matter
how I define the calling code, destruct_list<int>() will always be
compiled in the same way, with the same optimisations.

Also, if I were to be pedantic:

12.4/14 says that 'the behavior is undefined if the destructor is
invoked for an object whose lifetime has ended (3.8)'. 3.8/1 says 'The
lifetime of an object of type T ends when ... the storage which the
object occupies is reused or released' (the bullet that applies when
T has a trivial destructor). Calling an int's 'destructor' does not
end the life of that int, therefore calling the destructor multiple
times on an object of a trivially destructible type is not undefined
behaviour.

Anyway, this is precisely what I didn't want the discussion to focus on.

mar...@gmail.com

Mar 30, 2006, 1:59:09 PM

Niklas Matthies wrote:
> On 2006-03-27 18:23, mar...@gmail.com wrote:
>> kanze wrote:
>>>
>>> What if the calling code outputs something after calling this
>>> function? Before the optimization, the output never took place;
>>> after, it does.
>>
>> Why can't an infinite loop complete in a finite time? E.g. what if the
>> nth iteration of the loop runs in 2^-n seconds? Since there is no
>> observable behavior, that's clearly doable.
>
> The crucial point is that there is no execution path leaving the loop.
> So while the loop may complete in finite time, there's nothing left to
> be executed afterwards.

But that was also Zeno's crucial point; and it really doesn't hold much
water:

In order to get from here to there, you have to first go halfway. Then
half the remaining distance. And again. And again...

After each step you ask "Are we there yet?". And the answer is *always*
no. So you *always* have to take another step.

There is no execution path leaving the loop :-)

Therefore, it's impossible to get from here to there. Since "there"
could be anywhere, it's impossible to move. QED.

Mark Williams

Allan W

Mar 30, 2006, 7:21:52 PM
> > mar...@gmail.com wrote:
> > > Why cant an infinite loop complete in a finite time? eg what if the nth
> > > iteration of the loop runs in 2^-n seconds? Since there is no
> > > observable behavior, thats clearly doable.

I think you're discussing philosophy, not programming. But if I'm
wrong -- and this really is "clearly doable," then please post
example code with an infinite loop where each iteration N runs
in 2^-N seconds.

> Greg Herlihy wrote:
> > ...because an infinite loop, by definition, is one that never
> > completes.

mar...@gmail.com wrote:
> A loop which enumerates the natural numbers, and does so on the
> schedule above is both infinite, and terminates in a finite time.

Ah! This is the problem. No program (in C++ or any other real language)
can possibly enumerate every natural number. Computer programs
can't even handle "integers" in the mathematical sense. C++ programs
get to use data type "int" which is a useful approximation, but has
limited range (typically -2147483648 to +2147483647, but the details
depend on the specific compiler you're using).

Also, on every processor I've ever heard of or can imagine,
incrementing or decrementing an int, or adding or subtracting two
ints, takes a finite amount of time.

> Given the above schedule, the nth iteration has completed by time
> t=1-(2^-n), and so, in particular *every* iteration has completed by
> time t=1.

Every computer system has a finite number of processors.

Every processor implements something called a "clock cycle." As far
as I know, all clock cycles are fixed-duration -- but even if there is
some strange processor with variable-rate clock cycles, there is
surely a finite minimum and maximum rate. Therefore, there exists
some N where 2^-N seconds is less than a clock cycle.

Some processors today can execute more than one instruction
in a single clock cycle -- but there's always a finite maximum.

Therefore, it is impossible to continue to iterate infinitely as you
suggest. Sooner or later, a finite limit has been reached. QED

We can imagine a computer, faster than any real computer in
the world, with 2^20=1,048,576 processors, each with a clock
speed of 2^40=1,099,511,627,776 cycles per second
(1 Terahertz = 1000 Gigahertz), and capable of executing
2^10=1,024 instructions per clock cycle -- such a computer
still couldn't increment an int more than 2^70 times in one
second, and so your loop would fail to execute on the
schedule you have laid out.

> What exactly do you suppose is happening at time t=2, for example?
> It can't be executing iterations of the loop - they all finished more
> than a second ago :-)

Sometime before t=1.0, the computer gave up and said, "I can't do it!"
And then it imploded (or whatever Star-Trek type special effects you
want to imagine.) Unlikely? Maybe -- but not as unlikely as your
scenario.

Have we wasted enough time on this distraction yet?

Allan W

Mar 30, 2006, 7:24:17 PM
kanze wrote:
> It's also mathematically provable that if the i-th execution of
> the loop takes 2^-i seconds, then the loop cannot run for more
> than a second.
>
> I'm not sure what to make of this. Perhaps the only conclusion
> we can make is that if a processor does not have a finite lower
> bound on the execution time of a loop, you cannot implement C++
> on it. Even if the standard seems to say that execution time is
> never part of the observable behavior.

Explain how ANY processor can THEORETICALLY be built
without a lower bound on the execution time of its
instructions, and I'll start to take this a little bit more seriously.

Even if it were possible to build a computer without a processor
clock (and this would be a HUGE change), there are other
limiting factors. Electrons can't travel faster than the speed of
light. If your computer forces electrons to flow through a finite
path, no matter how short, then the speed of your processor
would be finite, which means that for some N, the i-th
execution of the loop could not possibly execute in 2^-N seconds.

I don't know why this is bothering me so much. I've participated
in silly pointless discussions before... why does this one annoy
me so much? Maybe because it's got a pseudo-scientific bias,
but the so-called science is so obviously flawed. I'm not a
scientific genius, but I can CLEARLY see how this is gibberish.
And yet some really smart people, who usually don't descend
to this level, continue the conversation. I don't understand this.

Niklas Matthies

Mar 30, 2006, 7:35:57 PM
On 2006-03-30 14:22, kanze wrote:
> Niklas Matthies wrote:
:

>> The crucial point is that there is no execution path leaving
>> the loop. So while the loop may complete in finite time,
>> there's nothing left to be executed afterwards.
>
> That's an interesting point. In fact, of course, the language
> doesn't define infinite loops -- being infinite is a result of
> the end condition never being met. So the resulting program
> flow is well defined and obvious. Except that...
>
> The language does guarantee that the conditions which terminated
> the loop were met. E.g. given something like the following:
>
> bool b = true ;
> while ( b ) {}
> assert( ! b ) ;
>
> the language guarantees that the assert will never fail.
>
> It's also mathematically provable that if the i-th execution of
> the loop takes 2^-i seconds, then the loop cannot run for more
> than a second.
>
> I'm not sure what to make of this.

I'm not sure what problem you see. The fact that b is always true
means that the loop condition is always true which means that the
control flow edge from the loop header (which is taken when the loop
condition is false) to the assert statement will never be taken.
So this control flow edge is effectively non-existent and the assert
statement is unreachable.

The language guarantees that the assert will never fail *when it is
executed*, i.e. when it is reached by the control flow. But the
language also guarantees that the loop exit condition is never met,
hence the assert statement is never reached, so the first guarantee
becomes trivially true (false implies true).

In still other words:

bool b = true ;
while ( b ) {}

// (A)
assert( ! b ) ;

1) You can prove that b is always true (because it is initialized with
true and never modified). From this you can prove that the loop is
never exited, hence (A) is never reached.

2) You can prove that when (A) is reached then b is false (because the
loop condition must be false whenever (A) is reached). From this you
can prove that the assert will never fail.

1) and 2) don't contradict each other, because they are consistent
with each other for all control flows where (A) is not reached, which
is the case for all control flows, period.

-- Niklas Matthies

mark

Mar 30, 2006, 7:35:14 PM

Fuz wrote:
> mar...@gmail.com wrote:
> > Unfortunately, it *does* affect the outcome.
>
> How? destruct_list<int>() exhibits no undefined behaviour of its own;

No, but (I maintain) the behavior is undefined if you call it on a
circular list.

Therefore, the compiler is entitled to assume that it's *not* called on
a circular list.

Therefore the list must be finite.

Therefore the loop can be eliminated without worrying about infinite
loops.

>
> Also, if I were to be pedantic:
>
> 12.4/14 says that 'the behavior is undefined if the destructor is
> invoked for an object whose lifetime has ended (3.8)'. 3.8/1 says 'The
> lifetime of an object of type T ends when ... the storage which the
> object occupies is reused or released' (the bullet that applies when
> T has a trivial destructor). Calling an int's 'destructor' does not
> end the life of that int, therefore calling the destructor multiple
> times on an object of a trivially destructible type is not undefined
> behaviour.

12.4/14 actually says:
"Once a destructor is invoked for an object, the object no longer
exists; the behavior is undefined if the destructor is invoked for an
object whose lifetime has ended (3.8)."

I claim that the lifetime of an object that no longer exists has ended.

> Anyway, this is precisely what I didn't want the discussion to focus on.

Then post an example of an infinite loop that is eliminated when no
(other) undefined behavior is invoked.

The simplest case would be to remove the destructor call from your
function. If it *still* removes the loop, then the rest of this
discussion becomes relevant.

Mark Williams

kanze

Mar 31, 2006, 8:35:57 AM
Allan W wrote:
> kanze wrote:
>> It's also mathematically provable that if the i-th execution
>> of the loop takes 2^-i seconds, then the loop cannot run
>> for more than a second.

>> I'm not sure what to make of this. Perhaps the only
>> conclusion we can make is that if a processor does not have
>> a finite lower bound on the execution time of a loop, you
>> cannot implement C++ on it. Even if the standard seems to
>> say that execution time is never part of the observable
>> behavior.

> Explain how ANY processor can THEORETICALLY be built without a
> lower bound on the execution time of it's instructions, and
> I'll start to take this a little bit more seriously.

You're not supposed to be taking it seriously.

The only serious comment one can make on the issue is that
compiler implementors should do what is best for their
customers. On general purpose machines, e.g. Windows and
mainstream Unix, an endless loop which doesn't do anything can
only be a programming error -- it seems reasonable for a
compiler to suppose that any empty loop is finite, and suppress
it, regardless of what it can prove. (Or for that matter,
regardless of what the standard says. Although in this case,
the standard definitely gives the implementation an out.)

> Even if it were possible to build a computer without a
> processor clock (and this would be a HUGE change), there are
> other limiting factors. Electrons can't travel faster than the
> speed of light. If your computer forces electrons to flow
> through a finite path, no matter how short, then the speed of
> your processor would be finite, which means that for some N,
> the i-th execution of the loop could not possibly execute in
> 2^-N seconds.

For all we know, time is quantized, and there is some minimum
unit.

Of course, this whole discussion got started because one
compiler did generate code in which the time for each pass in
the loop was strictly less than 2^-n seconds :-). Namely 0: it
suppressed the loop.

> I don't know why this is bothering me so much. I've
> participated in silly pointless discussions before... why does
> this one annoy me so much? Maybe because it's got a
> pseudo-scientific bias, but the so-called science is so
> obviously flawed.

There's no "science" involved, pseudo- or not. It's pure
mathematics. Its lack of any possible connection with reality
is what makes it so amusing.

> I'm not a scientific genius, but I can CLEARLY see how this is
> gibberish. And yet some really smart people, who usually
> don't descend to this level, continue the conversation. I
> don't understand this.

I also read detective novels, Tolkien, and a lot of other things
that have no relationship to "real life". Why not? It's fun.

--
James Kanze GABI Software
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

mark

Apr 1, 2006, 9:11:38 PM

Allan W wrote:

> kanze wrote:
>> It's also mathematically provable that if the i-th execution of
>> the loop takes 2^-i seconds, then the loop cannot run for more
>> than a second.
>>
>> I'm not sure what to make of this. Perhaps the only conclusion
>> we can make is that if a processor does not have a finite lower
>> bound on the execution time of a loop, you cannot implement C++
>> on it. Even if the standard seems to say that execution time is
>> never part of the observable behavior.
>
> Explain how ANY processor can THEORETICALLY be built
> without a lower bound on the execution time of its
> instructions, and I'll start to take this a little bit more seriously.


Because the case we're talking about is when there is no observable
behavior, so the compiler doesn't actually have to do *anything* in
any iteration.

It's the same way that

void foo(long long a, long long b)
{
    while (a--) {
        long long c = b;
        while (c--) {
        }
    }
}

can be optimized away, even though for some inputs, no real-world
computer could ever actually finish executing it before the end of
the universe.

So the problem becomes entirely theoretical: "Given that an iteration
can take arbitrarily little time (including zero), justify removing a
potentially non-terminating loop".

It's a kind of game.

[insults removed]

Mark Williams
