
halting is weak?


Ben Abraham

Feb 12, 1996

In article <311F65...@ontos.com> to...@cybercom.net writes:
>Ben Abraham wrote:
>> True, but if what your saying is "when writing real programs we
>> can make assumptions about the particular implementation we
>> may be running on" you're asking for trouble, this is what leads
>> to non-portable code,etc...
>> ...
>
>And how do the esthetics of design and programming style affect
>reasoning about theoretical issues? To reason about Halting you
>don't even need to know a real programming language, never mind
>any worries about portability, maintenance, and other pure engineering
>issues.
>--
>to...@cybercom.net
>http://www.cybercom.net/~tonyk/

You must have missed the discussion - the discussion was trying
to say that the concrete nature of programming in the real world,
i.e. engineering issues, was at odds with the theoretical model; it
was NOT a discussion about halting that I was responding to.
What I was saying was that the theoretical model was not
necessarily at odds with the engineering issues. Of course
what you said is true, but it has no relevance to what I
was talking about :)
(I think you read the Subject: without following the thread! The
thread had turned into "real programming has nothing to do with
abstract machines"; I was saying that keeping the abstract model
in mind might have benefits when considering such things as
portability, etc. (programming in Algol is more portable than
Z80 assembler, Algol being a theoretic (RAM machine) language.))
Email me offline if you don't understand!

William Naylor

Feb 12, 1996

In <4fdjbl$m...@monet.op.net> m...@op.net (Mark-Jason Dominus) writes:

>In article <4er73h$n...@tools.bbnplanet.com>,
>Barry Margolin <bar...@tools.bbnplanet.com> wrote:
>>In fact, I think it's pretty well recognized that a halting program
>>that worked for most real-world computer programs could be written.

>I'm surprised you think that. Who recognizes this? How would they
>(or you) write this putative halting program? How could it work on
>`most real-world programs' and yet fail on this simple program:
>
><Scheme and C version of `hail' deleted>
>

I repost some comments I made about this in an earlier thread:

Here is a simple algorithm which "solves" the halting problem.

This algorithm can solve your 'hail' problem for finite-sized integers.

What follows is a detailed discussion of the algorithm, including
discussions of:
1) How the algorithm works
2) Complexity
3) What sort of machine & programs the algorithm works for
4) Relationship of the algorithm to Turing's undecidability proof
for the halting problem.


HOW THE ALGORITHM WORKS
=======================

A program running on a FINITE memory machine can exhibit one of only two
possible behaviors:
1) It can run for some time and then enter a HALT state.
2) It can enter an infinite loop - meaning that the program enters
a sequence of states over and over again forever.

A program running on an INFINITE memory machine (such as an ideal Turing
machine) can exhibit a 3rd behavior in addition to the above two:
3) It can run forever, never re-entering the same state twice (for
example, an infinite recursion).

It is this 3rd behavior which gives algorithms for solving the halting
problem trouble. Turing's proof of the undecidability of the halting
problem relies crucially on the assumption of infinite memory, although
this is not clearly stated in textbook versions of the proof.

My algorithm works for FINITE memory machines, but not for INFINITE memory
machines. The algorithm can be used in practice because real machines
are FINITE memory machines.

My algorithm works for FINITE memory machines by detecting
outcome #1 or #2. It simulates the execution
of the program in question on the machine in question. Because
the machine memory is FINITE, the number of possible states is FINITE,
and the simulated program must either halt or repeat a state after a finite
amount of time. The tricky part of the algorithm is detecting a repeated state
without using a huge amount of memory.

It turns out that this can be done by storing only a single copy of the
simulated machine's state.

The pseudo-code for the algorithm follows:


halting_problem(machine, program)
{
    initialize_simulated_machine(machine, program);

    for (len = 1; ; len *= 2)       /* len is length of a run before saving state */
    {
        save_simulated_machine_state = get_simulated_machine_state(machine);

        for (i = 0; i < len; ++i)   /* i counts length of run */
        {
            execute_simulated_machine_instruction(machine);

            simulated_machine_state = get_simulated_machine_state(machine);

            if (simulated_machine_is_in_halt_state(simulated_machine_state))
            {
                return (MACHINE_HALTS);
            }
            else if (simulated_machine_state == save_simulated_machine_state)
            {
                return (MACHINE_LOOPS_FOREVER);
            }
        }
    }
}


The algorithm simulates the program in question running on the machine
in question for 1 instruction.
Then for 2 instructions.
Then for 4 instructions.
Then for 8 instructions.
Etc.

At the start of each pass, it saves the state of the simulated machine. During
the pass it compares the saved state against the current simulated state
after each instruction in order to detect looping.
This algorithm always halts and gives the correct
answer for any FINITE machine, for any possible program.
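
Here is one minimal, self-contained way the same doubling-run loop detection
might look in C. The toy "machine", its step() transition function and its
halt condition are invented purely for illustration (the entire state is a
single byte); only the control structure mirrors the pseudo-code above.

#include <stdio.h>

/* Toy "machine": the entire state is one 8-bit value.  step() is an
 * arbitrary made-up transition function, and the machine counts as
 * "halted" when its state reaches 0. */
typedef unsigned char state_t;

static state_t step(state_t s)
{
    return (state_t)(5 * s + 17);    /* wraps mod 256: a finite state space */
}

static int is_halt(state_t s)
{
    return s == 0;
}

/* Returns 1 if the toy machine halts when started from `start`,
 * 0 if it loops forever.  Runs of length 1, 2, 4, 8, ... are simulated;
 * one state is saved per run and compared against after every step. */
static int halts(state_t start)
{
    state_t cur = start;

    for (unsigned long len = 1; ; len *= 2) {
        state_t saved = cur;                 /* state at the start of this run */

        for (unsigned long i = 0; i < len; ++i) {
            cur = step(cur);
            if (is_halt(cur))
                return 1;                    /* MACHINE_HALTS */
            if (cur == saved)
                return 0;                    /* MACHINE_LOOPS_FOREVER */
        }
    }
}

int main(void)
{
    printf("start=1:  %s\n", halts(1)  ? "halts" : "loops forever");
    printf("start=42: %s\n", halts(42) ? "halts" : "loops forever");
    return 0;
}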


COMPLEXITY
==========

For a simulated machine with M bits of memory,
the algorithm must simulate
the machine, and store one copy of the state, and keep a loop counter, and
keep a few registers. The counter can require as much as M bits of memory.
Thus, the memory required by the algorithm can be no worse than

memory = 3*M + k

where k is a small constant.

In the worst case, the simulated machine enumerates the possible bit
combinations in its memory before looping or halting. The
algorithm must simulate the loop no more than twice to detect that it
is looping. Therefore, run time can be no worse than

time = C*M*2^M

where C is a small constant.

The algorithm can be made substantially faster by keeping a "hash" of
the machine states as well as the states themselves.
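
One possible, purely illustrative reading of that remark: keep a cheap hash
next to each stored state, so that the common case of a mismatch is rejected
without a full M-bit comparison. The hash function and buffer sizes below
are arbitrary choices made for this sketch.

#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <stddef.h>

static uint64_t state_hash(const unsigned char *state, size_t nbytes)
{
    uint64_t h = 0xcbf29ce484222325ULL;          /* FNV-1a offset basis */
    for (size_t i = 0; i < nbytes; ++i) {
        h ^= state[i];
        h *= 0x100000001b3ULL;                   /* FNV-1a prime */
    }
    return h;
}

static int same_state(const unsigned char *a, uint64_t hash_a,
                      const unsigned char *b, uint64_t hash_b, size_t nbytes)
{
    if (hash_a != hash_b)
        return 0;                                /* cheap reject, no full compare */
    return memcmp(a, b, nbytes) == 0;            /* full compare only on hash match */
}

int main(void)
{
    unsigned char saved[16] = {0}, current[16] = {0};
    current[15] = 1;                             /* the two states differ in one byte */

    uint64_t hs = state_hash(saved, sizeof saved);
    uint64_t hc = state_hash(current, sizeof current);

    printf("same state? %s\n",
           same_state(saved, hs, current, hc, sizeof saved) ? "yes" : "no");
    return 0;
}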


WHAT SORT OF MACHINES & PROGRAMS THE ALGORITHM WORKS FOR
========================================================

The algorithm always works for finite memory machines. Since real
physical machines are finite memory, the algorithm can be used in
practice, although the run time could be excessive for long, complex
loops. The machine running the algorithm must have 3 times as much
memory as the machine being simulated.

The algorithm can also be used on an infinite memory machine.
The state copy and state comparison operations are finite if only
the INITIALIZED portion of memory is copied or compared. The
algorithm terminates if the simulated program either halts (condition
#1 above) or infinite loops (condition #2 above). The algorithm
cannot detect condition #3 above, and will run forever if this
happens.


RELATIONSHIP OF THE ALGORITHM TO TURING'S UNDECIDABILITY PROOF
===============================================================

FOR THE HALTING PROBLEM
=======================

Turing's proof is an argument about a program running on a machine
which attempts to analyze an identical program running on an IDENTICAL
MACHINE.

Turing's argument is simply irrelevant to the algorithm I give here for FINITE
machines:
It is impossible to construct Turing's paradoxical recursion on a FINITE
machine because the machine being analyzed must be smaller than the machine
running the halting program; it cannot fit into its memory otherwise.

Attempting to feed Turing's paradoxical program to my algorithm leads to
an infinite recursion. On a FINITE machine, the algorithm runs out of stack
memory and halts with an out-of-stack error. On an INFINITE machine,
the algorithm recurses forever.

--

------------------------------
e-mail: nay...@mti.sgi.com
or

Mark A Biggar

Feb 12, 1996

In article <4fo6i1$h...@chronicle.mti.sgi.com> nay...@mti.sgi.com (William Naylor) writes:
>WHAT SORT OF MACHINES & PROGRAMS THE ALGORITHM WORKS FOR
>========================================================
>The algorithm always works for finite memory machines. Since real
>physical machines are finite memory, the algorithm can be used in
>practice, although the run time could be excessive for long, complex
>loops. The machine running the algorithm must have 3 times as much
>memory as the machine being simulated.

Great, until I hang a mag-tape drive on my machine. Now your machine
must have 3 times the memory of every mag-tape to ever exist now and
into the future. You can't even tell me how big that is. Sounds
pretty unbounded to me. Looks a lot like a real TM, doesn't it?

Once you add mountable external memory, the difference between a real machine
and an ideal TM becomes effectively zero.

--
Mark Biggar
m...@wdl.loral.com


Matthias Blume

Feb 12, 1996

In article <4fo6i1$h...@chronicle.mti.sgi.com> nay...@mti.sgi.com (William Naylor) writes:

> Turing's proof of the undecidability of the halting
> problem relies crucially on the assumption of infinite memory, although
> this is not clearly stated in textbook versions of the proof.

I don't know what textbooks you are using. Anyone who took his or her
theory of computation class seriously knows all that.

> My algorithm works for FINITE memory machines, but not for INFINITE memory
> machines. The algorithm can be used in practice because real machines
> are FINITE memory machines.

Your algorithm does not work in practice, because in practice your
memory has on the order of 2^N bits with N >> 20. As you say
yourself, the worst-case running time is on the order of M*2^M, which
is 2^N * 2^2^N, which is NOT practical to wait for on any practical
computer, because none of those computers will live long enough,
neither will you, and perhaps neither will the universe.

> The algorithm always works for finite memory machines. Since real
> physical machines are finite memory, the algorithm can be used in
> practice, although the run time could be excessive for long, complex
> loops.

...to put it mildly.

So what's your point? This is all well-known, and rather irrelevant.

--
-Matthias

Bill Dubuque

Feb 13, 1996

From: to...@cybercom.net (Antoun Kanawati)
Newsgroups: comp.theory,comp.lang.scheme
Date: Wed, 07 Feb 1996 07:34:46 -0400

In article <DONHAM.96...@linex.linex.com>, don...@linex.com (Jake
Donham) wrote:

: "Barry" == Barry Margolin <bar...@tools.bbnplanet.com> scrawls:
:
: Barry> think it's pretty well recognized that a halting program
: Barry> that worked for most real-world computer programs could be
: Barry> written.
:
: You really think so? I can think of whole classes of programs for
: which this is clearly not the case. Take for example a program
: modelling a chaotic system, which halts if and only if some modelled
: variable crosses some threshold. Inferring this kind of emergent
: behavior from a relatively simple piece of code would be pretty difficult.

In theory, any real computer can be modeled as an FSM. So, in theory,
you can always analyze such a beastly FSM and figure out exactly what
it will do. ...

Those who think they have halting intuition for small Turing machines
are fooling themselves. While the general halting problem is solvable
for machines with less than 4 states, the 4 state case is open and
the 5 state case is almost certainly unsolvable due to the fact that
it includes machines iterating Collatz-like congruential functions
(similar to the famous open 3x+1 problem), and such specific problems
are currently open (but expected unsolvable, as is the general case).

I have appended my related prior comp.theory posts below. See the
cited papers for further details. The papers of Brady and Machlin
are concerned with computing Busy Beavers and will quickly convince
you of the complexity of the halting problem for small machines.
In fact the discovery of the open Collatz-like functions in 5 state
machines happened accidentally while analyzing the behavior of the
current record-holding candidate for the 5 state Busy Beaver [Mic]
1993. A look at the analysis of the Busy Beaver candidate in [Mic]
should convince you of the power of these small machines. It is
very unlikely that a human programmer would have discovered the
Collatz algorithm encoded in this machine. Such results go to show
just how bad human intuition is on the upper reach of such "small"
machines.

As for "real-world" problems, it is known that Collatz-like
iterations represent various problems in number theory and
discrete dynamical systems (cf. Lagarias 1992) and such problems
may be encoded in surprisingly small Turing machines--just as
we have seen in the 5 state Busy beaver candidate. And lest
you think such problems have no application in the "real world"
simply have a look at the other papers in the volume in which
Lagarias 1992 appears, namely "The Unreasonable Effectiveness
of Number Theory". I would not be surprised to discover that
nature has already discovered some of these marvelously concise
algorithms and has exploited them in the design of basic
biological organisms--you can't get more "real world" than that.

-Bill

Date: Fri, 12 Jan 96 21:41:15 -0500
From: Bill Dubuque <w...@martigny.ai.mit.edu>
To: l...@cs.bu.edu
Cc: W.Ta...@math.canterbury.ac.nz, min...@media.mit.edu, wim...@sci.kun.nl,
w...@martigny.ai.mit.edu
In-Reply-To: Leonid Levin's message of Fri, 12 Jan 1996 13:51:16 -0500 <1996011218...@csb.bu.edu>
Subject: universal Turing machines

[this is a CC of a post to comp.theory]

...My recent interest in small Turing machines stems not from
universality but rather from the recent discovery of (Busy Beaver)
representations of open problems such as the Collatz (3x+1) like
congruential iteration discovered by R. Robinson and Michel [Mic].
I think it is interesting to investigate just what open problems
lie in these small machines and thus to discover the "boundary" of
undecidability in such models of computation. I find it rather
fascinating that if the early Busy Beaver researchers had pushed a
little further on the five state problem then they would have
discovered open Collatz-like congruential problems perhaps before such
problems were independently discovered by Collatz, Erdos, Ulam, etc.
with independent motivation. It is now known that such congruential
iterations arise in many important problems (e.g. see Lagarias' very
interesting survey [Lag] for the connections with discrete dynamical
systems, diophantine approximations, continued fractions (metric
theory), quasicrystals, tiling, etc.). If other known open problems
arise in a similar manner in these small Turing machines this would
prove very interesting; perhaps yet another vindication of Wigner's
"the unreasonable effectiveness of mathematics".

Does anyone else know of other open or undecidable problems
that determine undecidability boundary points in the
(symbol, state) Turing machine space?

-Bill

[Lag] J.C. Lagarias, Number Theory and Dynamical Systems,
pp. 35-72 in "The Unreasonable Effectiveness of Number Theory",
Proc. Symposia in Applied Math vol 66 (1992), AMS.

[Mic] Michel, Pascal, Busy beaver competition and Collatz-like problems,
Arch. Math. Logic (1993) 32:351-367.


Date: Wed, 3 Jan 96 21:53:28 -0500
From: Bill Dubuque <w...@martigny.ai.mit.edu>
Return-Path: <w...@martigny.ai.mit.edu>
To: l...@csb.bu.edu, min...@media.mit.edu, wim...@sci.kun.nl
Cc: w...@martigny.ai.mit.edu
Subject: Turing Machines: Universality & Halting Problem [was: Minimum states+alphabet for Univ. Turing Machine]

[This is a CC of a post to comp.theory]

From: l...@csb.bu.edu (Levin)
Date: 2 Jan 1996 20:18:29 GMT

In <4cbgcj$p...@zeus.cs.kun.nl>, Wim van Dam <wim...@sci.kun.nl> wrote: [...]

>>> Anyway, he concluded with a statement that the problem of determining a
>>> minimal state/alphabet product was an unsolved problem (in 1956). [...]

>> but his 4 symbol/7 state UTM is certainly worth looking at. [...]

>Marvin Minsky emailed me about this, saying:
>I have not heard of anything smaller, even now. [...]

There are universal Turing machines known with smaller
(symbol,state) product than the value of 28 for Minsky's
(4,7) machine. For example, Rogozhin found a (6,4) machine
(see the abstract of [Rob] below).

Note, however, that the (symbol,state) product may be dubious
as a rule for conservation of computing power in light of
recent results on Busy Beaver computations (see pp. 264-266
of [Bra]).

Since universal machines must have unsolvable halting problems,
lower bounds may be gleaned from work on the halting problem
for small machines. Here are some results along these lines:

The restricted halting problem (i.e. the halting problem on
blank input tape) is decidable for (2,3) and (2,4) Turing
machines. This was achieved during the solution of the Busy
Beaver Problem for 3 and 4 state machines [Bra] [Br2] [Mac].
These machines were enumerated by lazily defining their
instruction trees. Looping behavior was algorithmically
classified into a handful of classes, and the remaining
unclassified machines were analyzed by hand (ad hoc).

However, (2,5) machines probably reach intractability since
these machines can compute Collatz-like congruential
functions [Mic], and in general Conway [Con] [Bur] [Gu2] has
shown that deciding the termination of such iterations is
undecidable. Indeed, the famous 3x+1 problem is a notorious
example of such an open problem [Lag] [Guy]. Surprisingly,
the fact that (2,5) machines can compute Collatz-like
congruential functions was discovered by accident during
the analysis of the conjectured (2,5) beaver [Mic].

The general halting problem (i.e. with arbitrary vs. blank
input) is known decidable for (2,3) and (3,2) machines [Pav].
It is currently an open problem if the same is true for (2,4).
If so, then Collatz type congruential iterations may be
the 'simplest' example of undecidable behavior in Kleene
Turing machines.

You can find discussions of these matters and much more in
the papers referenced below.

-Bill

[Bra] Brady, A. H., The busy beaver game and the meaning of life,
in Herkin, R. (Ed) The Universal Turing Machine, pp. 259-277,
Oxford Univ Press 1988

[Br2] Brady, A.H. The determination of Rado's noncomputable
function Sigma(k) for four-state Turing machines,
Math. Comp. 40 #62 (1983) 647-665.

[Bur] Burckel, S. Functional equations associated with congruential
functions, Theor. Comp. Sci. 123 (1994) 397-406.

[Con] Conway, J. H. Unpredictable Iterations, Proc. 1972 Number
Theory Conference, Univ. of Colorado, Boulder, (1972) 49-52.

[Guy] Guy, Richard, Unsolved Problems in Number Theory, 2nd Edition,
Springer-Verlag, 1994, Section E16: Collatz's Sequence.

[Gu2] Guy, Richard, Conway's prime producing machine,
Math. Mag. 56 (1983) 26-33.

[Lag] Lagarias, J. The 3x+1 problem and its generalizations,
Amer. Math. Monthly, Jan 1985, 3-23.

[Mic] Michel, Pascal, Busy beaver competition and Collatz-like
problems, Arch. Math. Logic (1993) 32:351-367.

[Mac] Machlin, R (nee Kopp), and Stout, Q, The Complex Behavior
of Simple Machines, Physica D 42 (1990) 85-98

[Obe] Oberschelp, A. et. al. Castor Quadroplorum, Arch. Math.
Logic, (1988) 27:35-44.

[Pav] Pavlotskaya, L.M. Solvability of the halting problem
for certain classes of Turing machines, Math. Notes
USSR. 13 (1973) 537-541.

[Rob] Robinson, R.M. Minsky's small universal Turing machine,
Int'l Jnl. Math, 2 #5 (1991) 551-562.

Abstract of [Rob]:

Marvin L. Minsky constructed a 4-symbol 7-state universal Turing
machine in 1962. It was first announced in a postscript to [2]
and is also described in [3, Sec. 14.8]. This paper contains
everything that is needed for an understanding of his machine,
including a complete description of its operation.

Minsky's remains one of the minimal known universal Turing
machines. That is, there is no known such machine which decreases
one parameter without increasing the other. However, Rogozhin [6],
[7] has constructed seven universal machines with the following
parameters (symbols, states):

(2,24) (3,11) (4,7) (5,5) (6,4) (10,3) (21,2).

His (4,7) machine is somewhat different than Minsky's, but all of
his machines use a construction similar to that used by Minsky.
The following corrections should be noted...

A generalized (4,7) Turing machine closely related to Minsky's
was constructed in [5].

[1] John Cocke and Marvin Minsky, Universality of tag systems
with P = 2. Assoc. Comput. Mach. 11 (1964), 15-20

[2] M.L. Minsky, Size and structure of universal Turing machines
using tag systems, Recursive Function Theory, Proc. Sympos. Pure
Math. vol. V, AMS, Providence, R.I., 1962, pp. 229-238.

[3] Marvin L. Minsky, Computation: Finite and Infinite Machines,
Prentice-Hall, Englewood Cliffs, N.J., 1967

[4] Raphael M. Robinson, Primitive recursive functions, Bull.
Amer. Math. Soc. 53 (1947), 925-942.

[5] Raphael M. Robinson, Undecidability and nonperiodicity for
tilings of the plane, Invent. Math. 12 (1971), 177-209.

[6] Yu. V. Rogozhin, Seven universal Turing machines (Russian),
abstract, Fifth All-Union Conference on Math. Logic, Akad. Nauk.
SSSR Sibirsk. Otdel., Inst. Mat., Novosibirsk, 1979, p. 127.

[7] Yu. V. Rogozhin, Seven universal Turing machines (Russian),
Systems and Theoretical Programming, Mat. Issled. no. 69,
Akademiya Nauk Moldavskoi SSSR, Kishinev, 1982, pp. 76-90.

[8] Claude E. Shannon, A universal Turing machine with two
internal states, Automata Studies, Ann. of Math. Stud. 34 (1956)
157-165.

jsch...@alink.net

Feb 15, 1996

In <BLUME.96F...@zayin.cs.princeton.edu>, bl...@zayin.cs.princeton.edu (Matthias Blume) writes:
>In article <4fo6i1$h...@chronicle.mti.sgi.com> nay...@mti.sgi.com (William Naylor) writes:
>
> Turing's proof of the undecidability of the halting
> problem relies crucially on the assumption of infinite memory, although
> this is not clearly stated in textbook versions of the proof.
>
>I don't know what textbooks you are using. Anyone who took his or her
>theory of computation class seriously knows all that.
>

WHEN will people get it through their pointy heads? The finite/infinite
memory aspect of Turing machines boils down to a very simple definition.
When they say "INFINITE" memory, they are simply stating that you have
enough memory to do the job. That's it. Nothing more. If you think they
are stating anything else, you have already entirely missed the point of
the halting problem.

But then people who don't even begin to understand the problem
make STUPID remarks like:

>
> My algorithm works for FINITE memory machines, but not for INFINITE memory
> machines. The algorithm can be used in practice because real machines
> are FINITE memory machines.
>

This person wants a routine to check if a program will run on HIS
machine. But he THINKS this has something to do with the halting
problem.

And then remarks like this next one are made by people who THINK they
understand the problem:

>
>Your algorithm does not work in practice, because in practice your
>memory has on the order of 2^N bits with N >> 20. As you say
>yourself, the worst-case running time is on the order of M*2^M, which
>is 2^N * 2^2^N, which is NOT practical to wait for on any practical
>computer, because none of those computers will live long enough,
>neither will you, and perhaps neither will the universe.
>

This person DOES have a real point, but it has nothing to do
with why the halting problem cannot be solved.

>
> The algorithm always works for finite memory machines. Since real
> physical machines are finite memory, the algorithm can be used in
> practice, although the run time could be excessive for long, complex
> loops.
>

>....to put it mildly.
>
>
Now together they make even bigger asses of themselves. If
we were talking about a world where the halting problem was solved, it
might be an issue how long it took for the halting routine to finish. If
the runtime of the halting routine was not SIGNIFICANTLY faster than
just executing the program, WHY would we want to even try?????

>
>So what's your point? This is all well-known, and rather irrelevant.
>

The point is simple. The halting problem is a REPRESENTATION
problem. Every program HAS to be written in SOME language. But,
by virtue of Turing's proof, we KNOW that the halting routine for a
given language CANNOT be written in the same language. A more
powerful language is necessary.

However, once you make it to partial recursive functions, you are
screwed. There is NO more powerful language. And if you can come
up with one, they will invent a Nobel prize in Mathematics just for you.
It would be that big a deal. Why??? Because if you could do it,
you would be defining a new machine that is more powerful than
Turing's.

As any idiot knows, "ALL COMPUTERS ARE CREATED EQUAL";
some just run faster. Not a single machine has EVER been created
that could calculate something Turing's machine couldn't.

Simply put: If you think you have solved the halting problem, you
have only proven that your grasp of computer science is extremely
weak, and I suggest you get some professional help. Like maybe
from a teacher.

Joe Schafer
jsch...@alink.net
jsch...@iquest.net

Matthias Blume

Feb 15, 1996


Nice trolling, but I'll answer as if it were for real...

From: jsch...@alink.net
Subject: PEOPLE who solve halting are weak!
Date: 15 Feb 1996 04:07:15 GMT

> In <BLUME.96F...@zayin.cs.princeton.edu>, bl...@zayin.cs.princeton.edu (Matthias Blume) writes:
> >In article <4fo6i1$h...@chronicle.mti.sgi.com> nay...@mti.sgi.com (William Naylor) writes:
> >
> > Turing's proof of the undecidability of the halting
> > problem relies crucially on the assumption of infinite memory, although
> > this is not clearly stated in textbook versions of the proof.
> >
> >I don't know what textbooks you are using. Anyone who took his or her
> >theory of computation class seriously knows all that.
> >
>
> WHEN will people get it through their pointy heads?

Hey buddy, if you want to start a flame war and insult people, you
should first read what they really wrote, and then you should get your
own facts right. See below.

> An then remarks like this next one are made by people who THINK they
> understand the problem:
>
> >
> >Your algorithm does not work in practice, because in practice your
> >memory has on the order of 2^N bits with N >> 20. As you say
> >yourself, the worst-case running time is on the order of M*2^M, which
> >is 2^N * 2^2^N, which is NOT practical to wait for on any practical
> >computer, because none of those computers will live long enough,
> >neither will you, and perhaps neither will the universe.
> >
>
> This person DOES have a real point, but it has nothing to do
> with why the halting problem cannot be solved.

That's right, and "the person" (me) has never said such a thing. I
know the halting problem is undecidable (in general). I also know the
problem is decidable for FSMs (that's all the other fellow tried to
say in so many words). My point was that this isn't RELEVANT in
PRACTICE, because real computers, even though they are FSMs, are way
too big to be sensibly modelled by the FSM model.

> > The algorithm always works for finite memory machines. Since real
> > physical machines are finite memory, the algorithm can be used in
> > practice, although the run time could be excessive for long, complex
> > loops.
> >
> >....to put it mildly.
> >
> >
> Now together they make even bigger asses of themselves.

I can only see one who makes a real ass of himself...

> If we were talking about a world where the halting problem was solved, it
> might be an issue how long it took for the halting routine to finish. If
> the runtime of the halting routine was not SIGNIFICANTLY faster than
> just executing the program, WHY would we want to even try?????

Because we don't know in advance if the program will halt at all?
Isn't that the point of it all? Just wondering...

> >So what's your point? This is all well-known, and rather irrelevant.
> >
>
> The point is simple. The halting problem is a REPRESENTATION
> problem.

Ahh, here we go. But, sorry, you are WRONG. The halting problem for
general TMs (or partial recursive functions) is NOT a representation
problem.

> Every program HAS to be written in SOME language. But,
> by virtue of Turing's proof, we KNOW that the halting routine for a
> given language CANNOT be written in the same language. A more
> powerfull language is nessisary.

This is not necessarily true. If Turing's diagonalization
construction cannot be expressed in that language, then there is no
contradiction. For example, the primitive recursive functions all
halt, so the decision procedure can always say "yes", and that's
obviously also expressible as a primitive recursive function.
Since there is no numbering of the primitive recursive functions that
is primitive recursive itself, the Turing construction doesn't work.

(I.e., in this case Turing's proof shows the non-existence of a
primitive recursive numbering, while for partial recursive functions
we already know such a numbering exists, and therefore the halting
procedure doesn't.)

> Simply put: If you think you have solved the halting problem, you
> have only proven that your grasp of computer science is extremely
> weak, and I suggest you get some professional help. Like maybe
> from a teacher.

If you come out insulting people, accusing them of saying something
untrue without even understanding what they really said, then your
grasp of proper scientific conduct is extremely weak. You haven't
contradicted anything I said. In addition, you made some false
statements yourself that show a lack of knowledge in the area of
basic computation theory. I suggest you seek professional advice
yourself. Or keep quiet.

-Matthias
--
-Matthias

James Foster

Feb 16, 1996

jsch...@alink.net wrote:

: Simply put: If you think you have solved the halting problem, you
: have only proven that your grasp of computer science is extremely
: weak, and I suggest you get some professional help. Like maybe
: from a teacher.

Nicely put.

But constructions to square the circle, implement perpetual motion, and
others continue to roll in.

Hint to anyone who thinks they have a counterexample to a theorem: if
something is a theorem it is TRUE. There ARE NO COUNTEREXAMPLES. What
you have is a counterexample to something else...usually a
misunderstanding.
--
James A. Foster email: fos...@cs.uidaho.edu
Laboratory for Applied Logic Dept. of Computer Science
University of Idaho www: http://www.cs.uidaho.edu/~foster

-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: 2.6.2

mQCNAzDtvLEAAAEEAKAC21G2Be0K0DMgjLpxrwLmsYfCz8rWcfgyABjr3Ryfk1dO
nV7fFFpUF3xohR7die+/B2V9oqRQzTLeSF2ECKlsTY/yUyw2kn+P2ju1umh4Fwzd
cVTvc+H69q1+Ft3kmw/PE0Pan+g0PUGGJ43stw3q4OgBHdixbRd/f9giJFDxAAUR
tCZKYW1lcyBBLiBGb3N0ZXIgPGZvc3RlckBjcy51aWRhaG8uZWR1PokAlQMFEDD8
ReEXf3/YIiRQ8QEBFrAD/2AFuRWcD/3MENC3qJMC/Or1qxknjkK7Uv+TDf2LHPOY
GHBbG9PyWuXQ8of0Dd+JYwf/tzlO9Yk1s1zTdikfriak21FW0bCokxDIhA3myppZ
IZDWVA9CyvDYHuP5Ii1NkBvocab813JzDLZA+0iVN5sebGb9zSXR4Za47hlriHeP
=RDHK
-----END PGP PUBLIC KEY BLOCK-----

Ilias Kastanas

Feb 17, 1996

In article <4fpgii$o...@peippo.cs.tut.fi>, Valmari Antti <a...@cs.tut.fi> wrote:
>
>I believe strongly that undecidability **does** affect the
>possibility of a halting tester for real-world programs
>(assuming, of course, that we forget about the finiteness of
>memory). A scan through any algorithm book reveals that
>not all loops are for-loops; there are lots of while-loops.
>It is true that (usually) there are known upper bounds for
>their numbers of executions. So, in principle, they could be
>replaced by for-loops. But that they *could* be replaced does
>not necessarily imply that they indeed *are* replaced. Quite
>the contrary: reading the books we see that they are not.


Besides, assuming we somehow limit programs to be primitive recursive,
or even Kalmar-elementary, a tester might still be intractable. If it
simulates/runs programs to any significant extent, it will surely be. Running
times for prim. recursive (Kalmar-elementary) functions are themselves prim.
recursive (Kalmar-elementary).

>In manual program verification, loop invariants are used to
>prove properties of loops. Therefore, the halting tester would
>have to be able to invent and verify loop invariants (unless
>someone invents another way of solving the problem). Verification
>requires the ability to do reasoning in first-order predicate
>logic with the theory of integers -- known to be undecidable in
>general -- and the ability to invent -- not known to be easy for
>machines!


Very true; there is no chance unless the program already contains
invariants or pre/post conditions or something. Not an adversarial tester,
a cooperating one!


>And now comes an important point regarding real-world programs:
>they contain errors. A programming error may convert a legal
>algorithm with intricate termination proof to a non-terminating
>program, perhaps into something like the "hail" program mentioned
>in earlier posts.


Testers can have bugs too (they cannot find them when self-applied!?).
Good point again. They must be robust and tolerant, and not digitally
pedantic and jittery. The halting _function_ is all-or-nothing; a 'user'
layer is needed.


>Of course, the above reasoning is by no means a proof of the
>undecidability of halting of real-world programs. But I hope that
>it shows that the claim "real-world programs do not contain the
>kind of code utilised in undecidability proofs" is not necessarily
>valid. May be that real-world programs have been designed to


Right. Undecidability is not _caused_ by Turing's proof's program...
or _limited_ to it and its padded equivalents. The recursive (Delta-0-1)
sets simply aren't fine-grained enough to ape a complete Sigma-0-1 set.
Carpentry nails, nuts and screws do their job, but cannot make a wristwatch
mechanism.

>terminate, but the "reason" for termination is not always visible in
>the code. And when it is not, how do we know that finding the (or a)
>reason is decidable? And the reason is not even valid if the
>programmer made an error.


Establishing 'reasons' or similar properties, when precisely defined,
invariably turns out to be equivalent to Halting.
-
>One more comment (did someone make this already): computers
>are finite, but programming languages are not. The finiteness
>argument does not apply to real-world *programs*, it applies
>only to their executions in *real-world computers*.


In particular, TM finite controls _are_ real-world programs.


Ilias

Ilias Kastanas

Feb 17, 1996

In article <4fqoc3$q...@muir.math.niu.edu>,
Xcott Craver <c...@baker.math.niu.edu> wrote:
>In article <4foiih$l...@wdl1.wdl.loral.com>,
>Mark A Biggar <m...@dst17.wdl.loral.com> wrote:

( some [ . . . ]'s )

>>Once you add mountable external memory, the difference between a real machine
>>and a ideal TM becomes efectively zero.


Not true.


> Precisely! Everyone keeps on posting that a Turing machine has an
>"infinite tape". It's better to say that the tape is "unbounded", since at
>no time during the machine's computation does it need to use more than a finite
>number of squares.


True.

Turing, in his 1936 paper, wrote "if the machine writes down finitely
many symbols, then ---; otherwise, ---". Apparently he did not exclude
an infinite tape. But never mind.

> Thus, a computer with external memory is effectively a turing machine.
>Sure, it never has an infinite amount of memory at any one point in time, but
>it doesn't _have_ to. As long as you are able to go out and buy a new disk/
>cartridge/whatever whenever the machine needs one, the memory size is unbounded.


Vacuously true; the condition fails.

> This is why the occasional comp.theory poster makes a big deal about the
>semantic difference between "infinite" and "unbounded" - because it IS a big
>deal. A real-world computer can never have infinite memory, but unbounded
>memory is both possible and fairly common.


Many other postings say the same. Sorry, it is not so.


This amounts to saying, I can write down any integer! Yes, any one.
I'll just go buy more paper if need be.

A TM might start writing to tape 10^10^10^10^10 symbols, one by one.
For any n, there is a TM (a universal one, say) and a computation of it
needing time and storage > n.

It probably won't happen to you or me on the next program we run, but
so what? With TMs it is a certainty.

TMs, LBAs, PDAs are idealizations. So are programs, the real numbers,
or circles. They are quite useful and effective, of course, and deserve
the attention they receive.

No, nobody has unboundedly much storage -- even though Windows
this-or-that seems to require it. And the question "Are computers FAs?"
is not urgent, full of consequences, or deep. But if it is to be answered,
yes, FAs is what they are.

Ilias


Antoun Kanawati

Feb 17, 1996

In article <4fo7fo$3...@sol.ctr.columbia.edu>,
b...@valinor.math.columbia.edu (Ben Abraham) wrote:
: ...
: (I think you read the Subject: without following the thread! The
: thread had turned into "real programming has nothing to do with
: abstract machines"; I was saying that keeping the abstract model
: in mind might have benefits when considering such things as
: portability, etc. (programming in Algol is more portable than
: Z80 assembler, Algol being a theoretic (RAM machine) language.))
: Email me offline if you don't understand!

Well, you know, the RAM abstractions are closely related to the
usual CPU; it would make much more sense to use Z80 assembler if
one were to maintain a higher fidelity to the RAM abstraction.

I think you're confusing the issue of "sticking to design" (an
engineering thing) with theoretical issues. It is quite reasonable,
and we do it all the time, to model software solutions by borrowing
from theory as a great number of scanners and parsers (and their
generators) demonstrate. But to "program using the RAM abstraction"
is pushing the analogy a bit too far.
--
Antoun (Tony) Kanawati
to...@cybercom.net
http://www.cybercom.net/~tonyk/

Xcott Craver

Feb 17, 1996

In article <4g3o05$h...@gap.cco.caltech.edu>,
Ilias Kastanas <ika...@alumnae.caltech.edu> wrote:

> No, nobody has unboundedly much storage -- even though Windows
> this-or-that seem to require it. And the question "Are computers FAs ?"
> is not urgent, full of consequences, or deep. But if it is to be answered,
> yes, FAs is what they are,
>
> Ilias

I recently received a private e-mail to the same effect - namely, that
the real world always puts upper limits on how much of anything you can
have, and thus real-world computers are finite. I must concede.

However, even with a global upper bound on available memory, halting
may still be undecidable, since the computer running the halting algorithm
exists in the same universe, under the same upper bound, and therefore
cannot have more memory than the programs it is asked to analyze.

-Caj

Dara Gallagher

Feb 19, 1996

fos...@cs.uidaho.edu (James Foster) writes:
> Hint to anyone who thinks they have a counterexample to a theorem: if
> something is a theorem it is TRUE. There ARE NO COUNTEREXAMPLES. What
> you have is a counterexample to something else...usually a
> misunderstanding.

I agree; but since we're being precise, shouldn't that be
a counterexample to an understanding or an example of a
misunderstanding?


--
_______________________________________________________________
Dara Gallagher. http://www.cs.tcd.ie/www/jgllgher/jgllgher.html
---------------------------------------------------------------

Stefan Kahrs

Feb 19, 1996

James Foster writes:
[...]
foster> But constructions to square the circle, implement perpetual
foster> motion, and others continue to roll in.

Constructions to "implement perpetual motion" are of a different kind.
Their realisation requires rejecting the "law" of the conservation of
mass and energy; this law may well be challenged --- we only have
empirical evidence for its truth. [We also had empirical evidence for
the truth of Newton's laws.]

The undecidability of the halting problem is not an empirical theorem,
it is provably true. However, one could argue here that the
everyday-reading of the word "undecidable" simply means "you cannot
decide it, by whatever means" while one usually identifies that with
"you cannot decide it using a Turing machine".
This identification is Church's thesis, which again is only asserted
by empirical evidence (and, of course, we can't do better than that).
So one might try to solve the halting problem of Turing machines with
something more powerful than Turing machines; the tricky bit is to
find that something and it's at least as difficult as building a motor
that produces more energy than it consumes.

foster> Hint to anyone who thinks they have a counterexample to a
foster> theorem: if something is a theorem it is TRUE. There ARE NO
foster> COUNTEREXAMPLES. What you have is a counterexample to
foster> something else...usually a misunderstanding.

I wouldn't say that so categorically.

If you have a counterexample to a theorem then you have a proof that
a certain logic is inconsistent: it's the logic (any logic) powerful
enough to prove that the counterexample really is a counterexample
and also to prove the theorem itself.
In more than 99.9% of all the cases this inconsistent logic is
called "informal mathematics as applied by a human being" :).

If that logic is a formal logic which is widely believed to be
consistent (say, Peano arithmetic), it might still be possible
that there is such a scenario [because: wide belief isn't good enough].
Such logics did exist in the history of formal logic (the last 100
years), at least if we relax the "widely" claim a bit.
However, one should realise that any argument that rejects
a widely held belief has to withstand a thorough examination,
and if it fails to do so the person who put the argument forward
might have to endure ridicule. (Proofs of NP=P spring to mind.)
Proving the inconsistency of Peano arithmetic by such means
would be a rather significant event in logic/mathematics.
Not the sort of thing you prove every other day.

Going back to the halting problem:
its undecidability with the reading "undecidable=not solvable by Turing machines"
is a theorem of Peano arithmetic. So, if you can find a TM that
solves halting of TMs anyway, then you have a proof that Peano
arithmetic is inconsistent.
--
Stefan Kahrs

Antoun Kanawati

Feb 20, 1996
to to...@ontos.com
Michael Lauer wrote:
> There *is* a fundamental difference between `hail' and `most real-world
> programs.' I should think that most loops in typical
> spreadsheet/word-processor/etc programs are bounded in obvious ways:
>
> for(i=0; i<n; i++) { ... }
>
> Yours is not. ....
> ...

I guess the "real world" varies from place to place; 'cause my
little corner of it is recursive, self-referential, and all that.
And, that's only the metadata part of our product. Besides, the
"real world" includes people who make compilers, interpreters,
and Microsoft (who embeds BASIC in almost all of its apps). Or,
to make the statement clear: there are sufficiently many unbounded
"real world" applications out there; just count the Word's and
the Excel's.
--
to...@cybercom.net
http://www.cybercom.net/~tonyk/

Amy Caplan

Jan 31, 1996

A nice one line proof of halting(?):
H(P) is the proposed program which returns true if program P halts. Then
program A = ``L: if H(A) goto L''
which results in a contradiction.

My questions:
This contradiction relies on the program A calling the halting program
as a subroutine. To me, this is a weak result -- it does not seem
to forbid the possibility of there being a halting test program H() that
works for all programs that do not call H() as a subroutine.

- The set of possible programs that do not call H() is large and significant;
- the set of programs that call H() seems to me to be (relatively) small
(albeit infinite, i.e. set of measure zero maybe), and perhaps not
significant (is there any use for these programs other than as an
element in the halting proof?)

Thus the halting theorem seems to me to be weak and insignificant.

Please explain the flaw in my understanding.
thanks

Mark A Biggar

Feb 1, 1996

In article <4er73h$n...@tools.bbnplanet.com> bar...@tools.bbnplanet.com (Barry Margolin) writes:
>Furthermore, many of the computability theorems are based on the idealized
>Turing Machine, which has an infinite tape. When applied to real computer
>systems they don't apply, since all real systems have finite memory
>capacities.

I wish people would quit spouting this untruth. Turing Machines DO NOT
use an infinite tape, only an unbounded one. A Turing Machine that halts
after N steps can have used at most N units of tape. A real computer is
unbounded in exactly the same way, as long as you are willing to manufacture
and mount yet one more disk or magtape whenever it needs one.

--
Mark Biggar
m...@wdl.loral.com


Mark Meiss

Feb 1, 1996


Take the following with a grain of salt, as my understanding of the matter
is hardly perfect.

A problem lies in your division of programs between those that call H()
and those that don't. You appear to have eliminated the self-reference
caused when H() processes an arbitrary program, but in fact you really
haven't. This is where Goedel's Theorem enters. If your language is
sufficiently powerful (I'm regarding a programming language as a formal
system here--please forgive me), then there exist alternate ways of
causing self-reference in the processing of your program by H(). The
only way to prevent this is to restrict the generality of your language.

This is an encapsulated summary and I have likely made an error; for an
entertaining and educational look at these issues, check out Douglas
Hofstadter's book _Goedel, Escher, Bach: an Eternal Golden Braid_.

--Mark

--
Mark Meiss (mme...@indiana.edu) | No eternal reward will
Indiana University CS Dept. | forgive us now for - Jim Morrison
Bloomington, Indiana, USA | wasting the dawn.

Barry Margolin

Feb 1, 1996

In article <4eorcu$n...@nkosi.well.com>, Amy Caplan <am...@well.sf.ca.us> wrote:
>This contradiction relies on the program A calling the halting program
>as a subroutine. To me, this is a weak result -- it does not seem
>to forbid the possibility of there being a halting test program H() that
>works for all programs that do not call H() as a subroutine.

This dependence on A calling H() is necessary for this particular proof of
the halting theorem. I suspect that other proofs can be constructed that
don't rely on it, but they wouldn't be nearly as simple.

>- The set of possible programs that do not call H() is large and significant;

But even if there isn't a proof that doesn't involve this self-reference, I
think your division of programs into "significant" and "insignificant" is
rather arbitrary. The halting theorem isn't intended to be about actual
computer programs, but about *all* computer programs. In fact, I think
it's pretty well recognized that a halting program that worked for most
real-world computer programs could be written.

Furthermore, many of the computability theorems are based on the idealized
Turing Machine, which has an infinite tape. When applied to real computer
systems they don't apply, since all real systems have finite memory
capacities.

--
Barry Margolin
BBN PlaNET Corporation, Cambridge, MA
bar...@bbnplanet.com
Phone (617) 873-3126 - Fax (617) 873-6351

Matthias Blume

Feb 2, 1996

In article <4er73h$n...@tools.bbnplanet.com> bar...@tools.bbnplanet.com (Barry Margolin) writes:

> In article <4eorcu$n...@nkosi.well.com>, Amy Caplan <am...@well.sf.ca.us> wrote:
> >This contradiction relies on the program A calling the halting program
> >as a subroutine. To me, this is a weak result -- it does not seem
> >to forbid the possibility of there being a halting test program H() that
> >works for all programs that do not call H() as a subroutine.
>
> This dependence on A calling H() is necessary for this particular proof of
> the halting theorem. I suspect that other proofs can be constructed that
> don't rely on it, but they wouldn't be nearly as simple.

Well, this proof isn't actually that simple, because it avoided
discussing the only hard part: being able to define the
self-inspecting program A. (Well, H inspects A, but H is called as a
subroutine of A.) A serves as code and as data, and one has to show
that such a construction is possible in the underlying framework of
your choice (e.g., Turing Machines, mu-recursive functions, ...).

The crucial insight is the possibility of a Universal Turing Machine;
if this already is a given, then proving undecidability of the halting
problem is rather trivial indeed.
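
To make the construction concrete, here is a sketch of the diagonal program
in C. Every name is illustrative, and the body of halts() is a mere
placeholder so the sketch compiles; the whole point of the argument is that
no correct body can exist.

#include <stdio.h>

/* Assumed decider: returns 1 if the program whose source text is `p` halts
 * on input `x`, 0 otherwise.  Placeholder body only. */
static int halts(const char *p, const char *x)
{
    (void)p; (void)x;
    return 1;                        /* placeholder guess */
}

/* The diagonal program: do the opposite of whatever halts() predicts about
 * this program run on its own source text. */
static void paradox(const char *own_source)
{
    if (halts(own_source, own_source))
        for (;;) { }                 /* predicted to halt -> loop forever */
    /* predicted to loop forever     -> fall through and halt immediately */
}

int main(void)
{
    (void)paradox;                   /* referenced only to avoid an unused-function warning */

    /* With the placeholder, halts() predicts that paradox halts on itself,
     * so paradox() would in fact loop forever: the prediction is wrong.
     * Any other choice of halts() fails in the symmetric way. */
    printf("prediction for paradox on itself: %d\n", halts("paradox", "paradox"));
    return 0;
}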

> Furthermore, many of the computability theorems are based on the idealized
> Turing Machine, which has an infinite tape. When applied to real computer
> systems they don't apply, since all real systems have finite memory
> capacities.

This comes up every once in a while, but it is irrelevant. Even
though "real" machines have only finite memory (and could therefore be
considered FSMs), their state space is so mind-bogglingly huge that
the theoretically possible decision procedure would run far longer
than any real computer could keep running.

--
-Matthias

Ilias Kastanas

Feb 4, 1996

In article <4eorcu$n...@nkosi.well.com>, Amy Caplan <am...@well.sf.ca.us> wrote:
>A nice one line proof of halting(?):
>H(P) is the proposed program which returns true if program P halts. Then
>program A = ``L: if H(A) goto L''
>which is results in the contradiction.
>
>My questions:
>This contradiction relies on the program A calling the halting program
>as a subroutine. To me, this is a weak result -- it does not seem
>to forbid the possibility of there being a halting test program H() that
>works for all programs that do not call H() as a subroutine.
>
>- The set of possible programs that do not call H() is large and significant;
>- the set of programs that call H() seems to me to be (relatively) small
> (albeit infinite, i.e. set of measure zero maybe), and perhaps not
> significant (is there any use for these programs other than as an
> element in the halting proof?)
>
>Thus the halting theorem seems to me to be weak and insignificant.
>
>Please explain the flaw in my understanding.
>thanks

No, we cannot say that the unsolvable Halting Problem is "solvable but
for epsilon". The short proof is nice but could be misleading. Here is
an elementary account of pertinent facts.


The set H of halting computations, encoded in the integers, is a
Sigma-0-1 (rec. enumerable) set. The set computed by any (terminating!)
algorithm is a Delta-0-1 (recursive) set, by definition.

H happens to be a "complete" Sigma-0-1 set (it "encodes" all Sigma-0-1
sets) and from this one shows H is not Delta-0-1. That is the
unsolvability of the Halting Problem.

One way to understand "complete": _any_ Sigma-0-1 set can be obtained
as the set of n's for which a certain Turing machine halts. In fact, this
means they are 'semi-decidable': there is an algorithm that given k, halts
and answers 'yes' if k is in the set. But in general it fails to terminate
if k is not in the set... and we don't know what to conclude; it might be
that on any given input the TM will terminate 'if we wait a little longer'
... and then again, it might never do so.
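
A sketch of such a one-sided ("semi-decision") procedure, with the machine
type and step function left as stand-ins for whatever simulator one has:

/* Sketch only: `machine` and `step` are stand-ins; step() advances the
 * machine by one instruction and returns nonzero once it has halted. */
typedef struct machine machine;

int semi_decide_halts(machine *m, int (*step)(machine *))
{
    for (;;) {
        if (step(m))
            return 1;   /* the machine halted: answer "yes, it is in H" */
        /* If the machine never halts, this loop never exits either:
         * no point is ever reached where "no" could safely be answered. */
    }
}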

It also follows that a set is computable (Delta-0-1) if and only if
both the set and its complement are Sigma-0-1. Again, this is much less
innocent than it sounds!


There is a lot of 'difference' between H and any given computable (i.e.
Delta-0-1) set R. This can be seen from the closure properties of Delta-0-1.
E.g. adjoining or removing a finite set to/from R yields another computable
set and hence not H. Adjoining infinitely many elements to R, that form a
computable set R', does the same. Any finite union (or intersection) of R
sets still yields a computable set; none of these manages to "approximate"
H any better; what is left over, H - R, is of the "same" complexity as H.

What about using infinitely many R's... a countable family R_1, R_2 ...
of computable sets? We could consider a _computable family_ and take its
union, say. This means: there is an algorithm that for any n specifies a
computable set R_n -- perhaps by generating an algorithm P_n that computes
R_n... P_n answers yes or no to any question "is m in R_ n?". Too bad...
such a union is still computable.. a Delta-0-1 set R. In fact what I just
described is an algorithm for R.

It just cannot be done... a gulf separates H from any R. H can in fact
be expressed as _some_ countable union of R's, an uncomputable family...
but this has 0 value and is really a triviality; after all, H is the
countable union of singletons {m}, for the m that belong to H...


No useful 'reduction' is known. It is a strange world... Remember,
for any TM that computes a function, there are infinitely many other TM's
that compute the same function. Knowing the first k values f(0), ... f(k-1)
of a computable function f for some k gives _no_ information about the
other values of f... or even just about the next value, f(k).


Limited halting problems are solvable -- by restricting attention to
specifiable subsets of the set of computable functions. But it is useless
to think of them as "approximations" to H... after all, they are Delta-0-1,
good old R sets themselves!


Ilias

Ilias Kastanas

Feb 4, 1996

In article <4erdr1$q...@wdl1.wdl.loral.com>,

Mark A Biggar <m...@dst1.wdl.loral.com> wrote:
>In article <4er73h$n...@tools.bbnplanet.com> bar...@tools.bbnplanet.com (Barry Margolin) writes:
>>Furthermore, many of the computability theorems are based on the idealized
>>Turing Machine, which has an infinite tape. When applied to real computer
>>systems they don't apply, since all real systems have finite memory
>>capacities.
>
>I wish people would quit spouting this untruth. Turing Machines DO NOT
>use an infinite tape, only an unbounded one. A Turing Machine that halts
>after N steps can have used at most N units of tape. A real computer is
>unbounded in exactly the same way, as long as you are willing to manufacture
>and mount yet one one disk or magtape when ever it needs it.


Unfortunately, a _new_ disk or magtape is needed; you cannot re-use
any. For any N, there are terminating TM computations that need more than
N units of tape. N could exceed the number of elementary particles in the
Universe.

So 'unbounded' is physically unattainable. Strictly speaking, any
computer, or even all existing ones networked together, would be no more
than a Finite Automaton. The number of states would be astronomical, but
so what.

On the other hand, 'idealizations' like Pushdown Automata and Turing
machines are very useful... an analogous situation with R (the reals),
continuous functions on R, and so on.


Ilias

Jake Donham

Feb 5, 1996

"Barry" == Barry Margolin <bar...@tools.bbnplanet.com> scrawls:

Barry> think it's pretty well recognized that a halting program
Barry> that worked for most real-world computer programs could be
Barry> written.

You really think so? I can think of whole classes of programs for
which this is clearly not the case. Take for example a program
modelling a chaotic system, which halts if and only if some modelled
variable crosses some threshold. Inferring this kind of emergent
behavior from a relatively simple piece of code would be pretty difficult.
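
For concreteness, a minimal C sketch of the kind of program Jake describes:
the logistic map in its chaotic regime, halting only if the modelled variable
ever crosses a threshold. The seed, parameter, and threshold are arbitrary
choices for illustration; short of running the model, it is hard to say
whether the loop condition ever fails.

#include <stdio.h>

/* Halts if and only if the modelled variable ever crosses the threshold. */
int main(void)
{
    double x = 0.123;                 /* arbitrary initial condition        */
    const double r = 4.0;             /* chaotic regime of the logistic map */
    unsigned long long steps = 0;

    while (x <= 0.9999999) {          /* the "threshold" */
        x = r * x * (1.0 - x);        /* x  <-  r * x * (1 - x) */
        steps++;
    }
    printf("threshold crossed after %llu steps\n", steps);
    return 0;
}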

Jake

Ben Abraham

unread,
Feb 6, 1996, 3:00:00 AM2/6/96
to


i think this "machines are always bounded" stuff is kind of silly.
one way to see this is to consider software. a software program
could be written (using arbitrary-precision arithmetic, etc.) which could
run on any machine no matter how big or small, in one of many
currently available languages.
are we interested in the properties of this software program?
yes.
does "machines are always bounded" affect our discussion of this
program?
perhaps, but not really (i.e. the program might crash due to
limits not inherent in the program).

Antoun Kanawati

unread,
Feb 7, 1996, 3:00:00 AM2/7/96
to
In article <4f8cgk$o...@sol.ctr.columbia.edu>,
b...@valinor.math.columbia.edu (Ben Abraham) wrote:
: i think this machines are always bounded stuff is kind of silly.

: one way to see this is to consider software. a software program
: could be written (using arbitrary arithmetic,etc.) which could
: run on any machine no matter how big or small, in one of many
: currently available languages.
: are we interested in the properties of this software program?
: yes.
: does machines are always bounded affect our discussion of this
: program?
: perhaps, but not really (ie. the program might crash due to
: limits not inherent in the program).

The analysis of real software is a software engineering problem,
whereas the decidability of program characteristics (halting, kind
of language accepted, etc...) are theoretical issues. The theory
helps us establish the limits of the practice, but it does not
automatically translate. Real programs on real computers are very
different from universal Turing machines and input tapes. Sorta
like infinitely thin beams and point loads vs. real concrete and
steel beams and real loads. The mathematically tractable model of
beams and loads helps, but it takes a lot more to get a real beam
designed and constructed, and a lot of that depends on purely
empirical experience about the placement of steel bars, the drying
of concrete, various material properties, the weather, and all kinds
of things.

So, if you know that in the simple world of Turing machines and
unbounded resources, it is not possible to decide much about
programs (a universal Turing machine needs only 7 states), you can rest assured
that the problem is quite difficult on a real computer; a real computer
has an astronomically high number of distinct states, and very large
resources, and that's close enough to the ideal model. Clearly, the
approximation is not precise, since even astronomically large integers
are ultimately finite, but it's good enough for all practical purposes,
for the same reasons that an infinitely thin beam and various assumptions
about nice linear elastic behavior are acceptable for structural
design and analysis: they're good enough approximations of reality
(modulo some safety factors, of course).

Antoun Kanawati

unread,
Feb 7, 1996, 3:00:00 AM2/7/96
to
In article <DONHAM.96...@linex.linex.com>, don...@linex.com (Jake
Donham) wrote:

: "Barry" == Barry Margolin <bar...@tools.bbnplanet.com> scrawls:

In theory, any real computer can be modeled as an FSM. So, in theory,
you can always analyze such a beastly FSM and figure out exactly what
it will do. There are only so many bits that can change on a computer:
all of memory, and all of the external storage. So, by "simply" keeping
track of configurations, one can "easily" detect infinite loops. Also,
since it is "only an FSM", we can also predict many more characteristics and
all that. There are some glitches relating to time/space requirements as
they relate to certain metrics of the universe and that kind of stuff, but
I am sure they'll be addressed in the next release of the universe :-)
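
To make the "keeping track of configurations" idea concrete, here is a minimal
C sketch using Floyd's cycle detection. The "machine" is only a toy: its
configuration is a single fixed-width unsigned integer and its transition
function is the 3n+1 step (with wraparound), so the configuration space really
is finite. For a real computer the configuration would be the entire memory
image, which is where the astronomical state counts come in.

#include <stdio.h>

typedef unsigned long config;
#define HALT ((config)0)        /* distinguished halting configuration */

/* the toy machine's transition function */
static config step(config c)
{
    if (c <= 1)
        return HALT;
    return (c % 2) ? 3 * c + 1 : c / 2;   /* wraps on overflow; still finite-state */
}

/* 1 if the machine started in c0 halts, 0 if it loops forever */
static int halts(config c0)
{
    config slow = c0, fast = c0;

    for (;;) {
        if (fast == HALT) return 1;
        fast = step(fast);
        if (fast == HALT) return 1;
        fast = step(fast);
        slow = step(slow);
        if (slow == fast) return 0;   /* a configuration repeated: infinite loop */
    }
}

int main(void)
{
    printf("starting from 27: %s\n", halts(27) ? "halts" : "loops forever");
    return 0;
}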

Ben Abraham

unread,
Feb 7, 1996, 3:00:00 AM2/7/96
to

True, but if what you're saying is "when writing real programs we
can make assumptions about the particular implementation we
may be running on", you're asking for trouble; this is what leads
to non-portable code, etc. if it is possible (i.e. you're not writing
a device driver or something) to write your program
in terms of some abstract machine (a RAM machine for example) you
will never have to worry about the particulars of implementation
and will probably wind up with a nice piece of code. I always
try to write my code in terms of abstract machines (unless there
is a compelling reason not to (performance, etc.)) and feel that
there are some very compelling reasons to do so.
when faced with the decision whether or not to put some concrete
limitation on your program, you should give it a long hard look
before doing so. i write tons of software and find that in
many practical problems one can write "abstract" code, and to
not write abstract code could be considered a blunder; to say
that abstract machines really are not practical is just not
true! (RAM machines are really quite flexible (and so are the languages
written for them)!)
ben

nwein...@macalstr.edu

unread,
Feb 7, 1996, 3:00:00 AM2/7/96
to

Does the "finiteness" of a real computer change in any way if you
program it in one of the newer functional languages that use lazy evaluation?
IIRC one of the applications of this technique is that it allows you to
reasonably represent certain kinds of infinite data structures.

--
Nicholas Weininger

"If all mankind minus one were of one opinion, and only one person were of the
contrary opinion, mankind would be no more justified in silencing that one
person, than he, if he had the power, would be justified in silencing mankind."
-John Stuart Mill

Richard A. O'Keefe

unread,
Feb 8, 1996, 3:00:00 AM2/8/96
to
Could I point out that there is an interesting class of programs for which
it _is_ possible to solve the halting problem?

Second-order polymorphic typed lambda calculus.

The algorithm always says "yes".

I've wondered whether it might make an interesting "glue" language.
--
"conventional orthography is ... a near optimal system for the
lexical representation of English words." Chomsky & Halle, S.P.E.
Richard A. O'Keefe; http://www.cs.rmit.edu.au/~ok; RMIT Comp.Sci.

Mark-Jason Dominus

unread,
Feb 8, 1996, 3:00:00 AM2/8/96
to
In article <4er73h$n...@tools.bbnplanet.com>,
Barry Margolin <bar...@tools.bbnplanet.com> wrote:
>In fact, I think it's pretty well recognized that a halting program
>that worked for most real-world computer programs could be written.

I'm surprised you think that. Who recognizes this? How would they
(or you) write this putative halting program? How could it work on
`most real-world programs' and yet fail on this simple program:

/* C version */
void hail(int n) {
    while (n > 1)
        if (n % 2 == 1) n = n*3 + 1;
        else n /= 2;
    return;
}

; Scheme version
(define (hail n)
  (if (> n 1)
      (if (odd? n)
          (hail (+ 1 (* 3 n)))
          (hail (/ n 2)))
      #t))

??? If it did work on `most real-world programs' and yet not work on
the above program, then what is the fundamental difference between
`hail' and `most real-world programs'? If it does work on the above
program, then why is the halting-ness of the above function still a
well-known unsolved research problem?

Michael Lauer

unread,
Feb 9, 1996, 3:00:00 AM2/9/96
to
Amy Caplan (am...@well.sf.ca.us) wrote:
> A nice one line proof of halting(?):
> H(P) is the proposed program which returns true if program P halts. Then
> program A = ``L: if H(A) goto L''
> which results in a contradiction.

> My questions:
> This contradiction relies on the program A calling the halting program
> as a subroutine. To me, this is a weak result -- it does not seem
> to forbid the possibility of there being a halting test program H() that
> works for all programs that do not call H() as a subroutine.

> - The set of possible programs that do not call H() is large and significant;
> - the set of programs that call H() seems to me to be (relatively) small
> (albeit infinite, i.e. set of measure zero maybe), and perhaps not
> significant (is there any use for these programs other than as an
> element in the halting proof?)

> Thus the halting theorem seems to me to be weak and insignificant.

> Please explain the flaw in my understanding.
> thanks

Well, consider a universal Turing machine. If you feed it the
string corresponding to applying your H to some other TM, does
that count as calling H as a subroutine?

It's an easy consequence of the halting theorem that there's
no guaranteed-to-terminate procedure for determining whether two
computations are equivalent, in the sense of always producing the
same outputs (or failing to halt) given the same inputs. That means
that the halting problem could show up "in disguise," as it were.
You can't simply exclude programs that have a certain subroutine,
because you can't tell which ones those are.

I imagine there's an easy proof from the halting theorem that
there's no H of the form you suggest, which I'm too lazy to
try to find (or even try to formulate precisely). Such an H
certainly doesn't exist, though. Consider this program:

#include <bignum.h>
extern int isprime(bignum a);

int main()
{
    bignum i, j;

    for(i=4; 1; i += 2) {
        int sumoftwoprimes = 0;
        for(j=2; j<=i/2; j++) {
            if(isprime(j) && isprime(i-j)) {
                sumoftwoprimes = 1;
            }
        }
        if(!sumoftwoprimes) {
            break;
        }
    }

    return 0;
}

Ignoring issues of finite storage space, this halts iff Goldbach's
conjecture is false. Since (unless I've missed some big news) Goldbach's
conjecture is unproven, you would need one heck of a halting checker to
check this.

I'll also point out that I only know of one sensible definition of measure
on the space of programs; with that definition the set of programs that
call any given routine has nonzero measure.

HOWEVER, I doubt that many Windows apps depend on Goldbach's
conjecture. Generally we software engineers only code things that we
think will halt, and we only use the most simplistic methods to verify
that they will. You could certainly write a program that will tell you
that a useful class of programs halt, and I'd be surprised if there
weren't actually one on the market. Such verifiers might check that
all loops are bounded and that all recursive function-call loops are
grounded in obvious ways. The definition of "obvious way" could be
arbitrarily sophisticated, of course.
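
For what it's worth, here is a minimal C sketch of the shape such a verifier
might take; the loop_facts record stands in for whatever a real tool would
extract from source code, and all the names are invented for illustration.
The checker is deliberately conservative: it answers "terminates" only when
the loop is bounded in an obvious way, and "unknown" otherwise, so it can be
useful without contradicting the halting theorem.

#include <stdio.h>

struct loop_facts {
    int counter_has_constant_bound;    /* e.g.  i < n  with n fixed at entry */
    int counter_strictly_increases;    /* e.g.  i++  on every iteration      */
    int counter_modified_elsewhere;    /* other writes to the counter        */
    int body_contains_unbounded_loop;  /* e.g. an inner while with no bound  */
};

enum verdict { TERMINATES, UNKNOWN };

enum verdict check_loop(const struct loop_facts *f)
{
    if (f->counter_has_constant_bound &&
        f->counter_strictly_increases &&
        !f->counter_modified_elsewhere &&
        !f->body_contains_unbounded_loop)
        return TERMINATES;              /* termination is "obvious" */
    return UNKNOWN;                     /* never a wrong "yes"      */
}

int main(void)
{
    struct loop_facts for_loop  = {1, 1, 0, 0};   /* for (i = 0; i < n; i++) */
    struct loop_facts hail_loop = {0, 0, 0, 0};   /* while (n > 1) { ... }   */

    printf("for-loop : %s\n", check_loop(&for_loop)  == TERMINATES ? "terminates" : "unknown");
    printf("hail-loop: %s\n", check_loop(&hail_loop) == TERMINATES ? "terminates" : "unknown");
    return 0;
}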

There are languages, such as the polymorphic typed lambda calculus, in
which all programs halt. There are certain things you can't do in such
languages---write a BASIC interpreter, for example---but they can
nonetheless express a wide variety of computations.

Followups to comp.theory.

Michael Lauer
mla...@cvbnet.cv.com

Mikael Tillenius

unread,
Feb 11, 1996, 3:00:00 AM2/11/96
to
In <4fdjbl$m...@monet.op.net> m...@op.net (Mark-Jason Dominus) writes:

>In article <4er73h$n...@tools.bbnplanet.com>,
>Barry Margolin <bar...@tools.bbnplanet.com> wrote:
>>In fact, I think it's pretty well recognized that a halting program
>>that worked for most real-world computer programs could be written.

>I'm surprised you think that. Who recognizes this? How would they
>(or you) write this putative halting program? How could it work on
>`most real-world programs' and yet fail on this simple program:

<Scheme and C version of `hail' deleted>

>??? If it did work on `most real-world programs' and yet not work on
>the above program, than what is the fundamental difference between
>`hail' and `most real-world programs'?

I hope you don't include calls to `hail' in your programs. Most
real-world computer programs don't contain code that makes it hard to
prove whether they terminate or not. After a quick look at the programs
I have written I think I can prove that they all terminate. To write
a *useful* (=fast, reasonable memory requirements etc.) program that
would do the same is probably a bit harder.

/Mikael

Ilias Kastanas

unread,
Feb 11, 1996, 3:00:00 AM2/11/96
to
In article <1996Feb7.080543@apollo>, <nwein...@macalstr.edu> wrote:
>
> Does the "finiteness" of a real computer change in any way if you
>program it in one of the newer functional languages that use lazy evaluation?
>IIRC one of the applications of this technique is that it allows you to
>reasonably represent certain kinds of infinite data structures.


I'm afraid not. The issue is not language-dependent. Functional
programming, lazy evaluation etc. may allow you to be faster or leaner,
but they do not alter the fundamental facts about what is representable
and what isn't. All you can represent/compute "is" (via encoding) within
the set of recursive functions... of one variable. Any such function is
an infinite object (map N -> N) with a finite specification (TM or
whatever). The finiteness of real computers limits us to some subset of the
set of TM computations. These facts, or the Halting problem or the like
are not affected by choice of programming language.
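
To put the "infinite object with a finite specification" point in concrete
terms, here is a minimal C sketch (the types and names are invented for
illustration): a lazy "infinite" stream is just a head plus a thunk that
produces the rest on demand. Only the cells you actually force ever exist,
so laziness changes how the program is written, not what a finite machine
can represent.

#include <stdio.h>
#include <stdlib.h>

struct stream {
    int head;
    struct stream *(*force_tail)(int);    /* thunk producing the rest          */
    int state;                            /* the thunk's captured "environment" */
};

static struct stream *from(int n);

static struct stream *force_from(int state) { return from(state); }

/* the (conceptually) infinite stream n, n+1, n+2, ... */
static struct stream *from(int n)
{
    struct stream *s = malloc(sizeof *s);
    s->head = n;
    s->force_tail = force_from;
    s->state = n + 1;
    return s;
}

int main(void)
{
    struct stream *s = from(0);
    int i;

    for (i = 0; i < 5; i++) {             /* force only five cells */
        struct stream *t;
        printf("%d ", s->head);
        t = s->force_tail(s->state);
        free(s);
        s = t;
    }
    printf("\n");                         /* prints: 0 1 2 3 4 */
    free(s);
    return 0;
}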


Ilias

Michael Lauer

unread,
Feb 12, 1996, 3:00:00 AM2/12/96
to
Mark-Jason Dominus (m...@op.net) wrote:
> In article <4er73h$n...@tools.bbnplanet.com>,
> Barry Margolin <bar...@tools.bbnplanet.com> wrote:
> >In fact, I think it's pretty well recognized that a halting program
> >that worked for most real-world computer programs could be written.

> I'm surprised you think that. Who recognizes this? How would they
> (or you) write this putative halting program? How could it work on
> `most real-world programs' and yet fail on this simple program:

> /* C version */
> void hail(int n) {
>     while (n > 1)
>         if (n % 2 == 1) n = n*3 + 1;
>         else n /= 2;
>     return;
> }

> ; Scheme version
> (define (hail n)
>   (if (> n 1)
>       (if (odd? n)
>           (hail (+ 1 (* 3 n)))
>           (hail (/ n 2)))
>       #t))

> ??? If it did work on `most real-world programs' and yet not work on
> the above program, then what is the fundamental difference between
> `hail' and `most real-world programs'? If it does work on the above
> program, then why is the halting-ness of the above function still a
> well-known unsolved research problem?

There *is* a fundamental difference between `hail' and `most real-world
programs.' I should think that most loops in typical
spreadsheet/word-processor/etc programs are bounded in obvious ways:

for(i=0; i<n; i++) { ... }

Yours is not. Making somewhat loose and imprecise use of fancy
terminology, typical software is \Delta_0 (all quantifiers are
bounded); `hail' is \Sigma_1 (an unbounded existential quantifier).
That's detectable syntactically.

Which is not to say that it wouldn't be a major pain in a language
like C. I don't know how practical it is to make sure that all
strings remain null-terminated, for example. It might be much
easier in Scheme, and maybe even easier in strongly-typed functional
languages.

Michael Lauer
mla...@cvbnet.cv.com

Antoun Kanawati

unread,
Feb 12, 1996, 3:00:00 AM2/12/96
to to...@cybercom.net
Ben Abraham wrote:
> True, but if what your saying is "when writing real programs we
> can make assumptions about the particular implementation we
> may be running on" you're asking for trouble, this is what leads
> to non-portable code,etc...
> ...

And how do the esthetics of design and programming style affect
reasoning about theoretical issues? To reason about Halting you
don't even need to know a real programming language, never mind
any worries about portability, maintenance, and other pure engineering
issues.
--
to...@cybercom.net
http://www.cybercom.net/~tonyk/
