Must be the position of Jupiter. Or the price of corn. Shall I go on, or are
you going to post your code?
<< anyone got some suggestions? >>
Tarot cards? Replace the lubricating oil in your classic Triumph?
Why doesn't it occur to anyone that prose descriptions can sometimes tell
the story, but code *is* the story?
<< i am about to redo the code, and use structs, but that would cause a
couple of other problems. >>
Like, you would have to look at it? By the way, while you are looking at it,
how about letting us look at it, too, so we can offer meaningful
suggestions?
PPYC. Thank you.
--
Paul Lutus
www.arachnoid.com
someone wrote in message <376730B1...@osu.edu>...
>probably a simple question, but i am about to give up, and try another
>approach!
>
>i have text file containing "records" with the format LASTNAME,
>FIRSTNAME (INSTRUCTOR, COURSE). What i am trying to do is read in a
>record, place it in an array (using calloc at the moment), then sort it
>on LASTNAME.
>
>problem is, I cannot seem to get the string (record) into the array.
>
>anyone got some suggestions?
>
>i am about to redo the code, and use structs, but that would cause a
>couple of other problems.
>
>thanks,
>jim
>
>
>
Since you haven't posted any code, and my crystal ball is
malfunctioning today, suggestions are inevitably scarce.
General advice for line-based input: use fgets(). If your
file format is not line-based, but consists of fixed-sized text
fields, you may choose between an fscanf() approach and direct fread().
--
Regards,
Alex Krol
Disclaimer: I'm not speaking for Scitex Corporation Ltd
Sent via Deja.com http://www.deja.com/
Share what you know. Learn what you don't.
Paul Lutus wrote:
> << problem is, I cannot seem to get the string (record) into the array. >>
>
> Must be the position of Jupiter. Or the price of corn. Shall I go on, or are
> you going to post your code?
>
> << anyone got some suggestions? >>
>
>
well, the reasons i didn't post the code:
A) it's wrong and now butchered (was wrong to begin with and butchered to begin
with, so no harm, no foul),
B) i don't want people doing it for me, i can do it, i just needed a shove in
the right direction.
C) i personally don't like posting a crapload of code to newsgroups, when a
pseudo-description provides enough info for the push mentioned in B).
thanks again,
jim
--
Paul Lutus
www.arachnoid.com
someone wrote in message <37673821...@osu.edu>...
If you do not want to post the code, you must be very precise and detailed
in your description. Posting relevant pieces of code is often the easier
alternative. I will try to make it easier for you to provide the essential
information. Please provide us with:
a) an exact description of the format of your input data file
b) an exact description of the data structure that you want to
use for storing one record from the data file
c) an exact description of how you want to combine multiple
data structures, i.e. array or linked list or ....
d) a detailed description of what exactly it is that you do not
understand
Stephan
(initiator of the campaign against grumpiness in c.l.c)
We all know what perfect code looks like, and there's no point in posting
it. But if code has structural or syntax problems, that is a reason to post
it, not to hide it. If we can't see it, we can't fix it.
This situation exemplifies the rule. We have not been able to offer any
useful advice about your code, because we can't see it.
--
Paul Lutus
www.arachnoid.com
someone wrote in message <37673821...@osu.edu>...
>A) it's wrong and now butchered (was wrong to begin with and butchered to
>begin with, so no harm, no foul),
>B) i don't want people doing it for me, i can do it, i just needed a shove
>in the right direction.
>C) i personally don't like posting a crapload of code to newsgroups, when a
>pseudo-description provides enough info for the push mentioned in B).
>
>thanks again,
>jim
>
>
>
>Gaia dhuit,
>
>Paul Lutus dropped 2 cents in the slot of <comp.lang.c>
>in article <r4Q93.13156$_m4.2...@news2.giganews.com>, and wrote :
>
>>We all know what perfect code looks like,
>
>No we don't, cos there's no such thing.
>
>"Good" code, even "excellent" code - I do know what these look like.
>
>But, when there are so many different attributes of code that conflict
>(eg efficiency vs readability) code can never be perfect.
>
>>and there's no point in posting
>>it.
>
>If you have any perfect code I'd love to see it.
Here's a stab at a /bin/true type program:
int main(void)
{
return 0;
}
We could argue about how best to comment it but ignore that for now.
--
-----------------------------------------
Lawrence Kirby | fr...@genesis.demon.co.uk
Wilts, England | 7073...@compuserve.com
-----------------------------------------
>In article <377672e3...@news.iol.ie> j...@iol.ie "janus" writes:
>
>>Gaia dhuit,
>>
>>Paul Lutus dropped 2 cents in the slot of <comp.lang.c>
>>in article <r4Q93.13156$_m4.2...@news2.giganews.com>, and wrote :
>>
>>>We all know what perfect code looks like,
>>
>>No we don't, cos there's no such thing.
>>
>>"Good" code, even "excellent" code - I do know what these look like.
>>
>>But, when there are so many different attributes of code that conflict
>>(eg efficiency vs readability) code can never be perfect.
>>
>>>and there's no point in posting
>>>it.
>>
>>If you have any perfect code I'd love to see it.
>
>Here's a stab at a /bin/true type program:
>
>int main(void)
>{
> return 0;
>}
>
>We could argue about how best to comment it but ignore that for now.
Hmmm
main(void)
{
exit(EXIT_SUCCESS);
}
Oops. You incorrectly corrected a correct posting. Your program does not
include <stdlib.h>, so it won't compile. And although leaving int off main
is okay for now, it's not forward-compatible with what C9X is expected to
mandate (ie explicit return types).
--
Richard Heathfield
The bug stops here.
...
>>>If you have any perfect code I'd love to see it.
>>
>>Here's a stab at a /bin/true type program:
>>
>>int main(void)
>>{
>> return 0;
>>}
>>
>>We could argue about how best to comment it but ignore that for now.
>
>Hmmm
>
>main(void)
>{
> exit(EXIT_SUCCESS);
>}
In what way is that better than the original? I can think of a number of
reasons why it isn't as good:
1. It omits an explicit return type for main() which is poor style and
will cease to be correct C when C9X comes into force.
2. You make the code unnecessarily complex by using a library function
where none is required. As an assembly programmer I'm really quite
surprised you would make this sort of error. :-)
3. You aren't returning a value from a function that returns int. Of course
this isn't wrong but it is counter-intuitive (to the extent that some
compilers will warn about it). Therefore it isn't a good thing
to do in "perfect" code.
4. You call exit() with no declaration for exit in scope. Since exit()
does not return int this is an error. This can be fixed by adding
#include <stdlib.h> but of course that is increasing the code
complexity even further.
5. You are using the identifier EXIT_SUCCESS without defining it. Same
solution and issue as 4.
Did I miss anything? :-)
>Paul Mesken <usu...@euronet.nl> wrote in article
>> main(void)
>> {
>> exit(EXIT_SUCCESS);
>> }
>>
>>
>
>Oops. You incorrectly corrected a correct posting. Your program does not
>include <stdlib.h>, so it won't compile. And although leaving int off main
>is okay for now, it's not forward-compatible with what C9X is expected to
>mandate (ie explicit return types).
Gee, I didn't know that. So main() will no longer be assumed int. I'd
better do some "intserting" in my programs :-) (perhaps I'd better do
some C9X draft reading, who knows what else will change)
>In article <376b1b22...@news.euronet.nl>
> usu...@euronet.nl "Paul Mesken" writes:
>
>...
>
>>>>If you have any perfect code I'd love to see it.
>>>
>>>Here's a stab at a /bin/true type program:
>>>
>>>int main(void)
>>>{
>>> return 0;
>>>}
>>>
>>>We could argue about how best to comment it but ignore that for now.
>>
>>Hmmm
>>
>>main(void)
>>{
>> exit(EXIT_SUCCESS);
>>}
>
>In what way is that better than the original? I can think of a number of
>reasons why it isn't as good:
>
>1. It omits an explicit return type for main() which is poor style and
> will cease to be correct C when C9X comes into force.
>
Okay, okay! I didn't know of the explicit stuff. I'll read the draft.
>2. You make the code unnecessarily complex by using a library function
> where none is required. As an assembly programmer I'm really quite
> surprised you would make this sort of error. :-)
>
My modus operandi is just different when doing C programming. In
Assembly we aim at _performance_. In C we aim at readability and
maintainability :-)
>3. You aren't returning a value from a function that returns int. Of course
> this isn't wrong but it is counter-intuitive (to the extent that some
> compilers will warn about it). Therefore it isn't a good thing
> to do in "perfect" code.
>
Come on! Would there really be compilers who don't recognize a
function of the standard library that makes a return redundant?
>4. You call exit() with no declaration for exit in scope. Since exit()
> does not return int this is an error. This can be fixed by adding
> #include <stdlib.h> but of course that is increasing the code
> complexity even further.
>
Well, I assumed that people here would realize the inclusion of
<stdlib.h> was implicit and that the main function was not the whole
program but a fragment of it. Guess I overestimated again ;-)
>5. You are using the identifier EXIT_SUCCESS without defining it. Same
> solution and issue as 4.
>
Yeah, yeah
>Did I miss anything? :-)
Well, since I omitted the return type of main I could also have
omitted the "void" in its parameter list. That really is the only flaw
but I'm so used to putting it there I forgot to omit it.
>Gaia dhuit,
>
>Lawrence Kirby dropped 2 cents in the slot of <comp.lang.c>
>in article <929578...@genesis.demon.co.uk>, and wrote :
>
>>In article <377672e3...@news.iol.ie> j...@iol.ie "janus" writes:
>>
>>>But, when there are so many different attributes of code that conflict
>>>(eg efficiency vs readability) code can never be perfect.
>>>
>>>If you have any perfect code I'd love to see it.
>>
>>Here's a stab at a /bin/true type program:
>>
>>int main(void)
>>{
>> return 0;
>>}
>>
>>We could argue about how best to comment it but ignore that for now.
>
>That is excellent code.
>
No, it is not. It doesn't do nothing. It's useless. Who wants to run a
program that doesn't do nothing except returning with the message that
it terminated without error. It's a waste of bytes.
I say it has no right of existence!
[ says TechnoCrate with burning crosses in the background ;-) ]
In fact, this is a C implementation of an IBM mainframe utility,
called IEFBR14 (IBM's version is written in assembly language).
I'd guess that it is one of the most commonly run programs on IBM
mainframe systems. Hardly useless.
In the early 80s IBM released a version of the operating system
that had a bug in this program. I'm told (by people who are in a
position to know) that they actually have released it with bugs
three times, but I only have first hand knowledge of the one.
Lawrence obviously has done better, perhaps demonstrating the
advantage of C over assembly language :-)
--
Michael M Rubenstein
Interesting logic. Then a program that *does* "do nothing" (as opposed to
"doesn't do nothing") is, by contrast, useful (read: meritorious)? Possibly
a closet Luddite here?
--
Paul Lutus
www.arachnoid.com
Paul Mesken wrote in message <376da3a3...@news.euronet.nl>...
>On Sat, 19 Jun 1999 10:13:25 GMT, j...@iol.ie (janus) wrote:
>
>>Gaia dhuit,
>>
>>Lawrence Kirby dropped 2 cents in the slot of <comp.lang.c>
>>in article <929578...@genesis.demon.co.uk>, and wrote :
>>
>>>In article <377672e3...@news.iol.ie> j...@iol.ie "janus" writes:
>>>
>>>>But, when there are so many different attributes of code that conflict
>>>>(eg efficiency vs readability) code can never be perfect.
>>>>
>>>>If you have any perfect code I'd love to see it.
>>>
>>>Here's a stab at a /bin/true type program:
>>>
>>>int main(void)
>>>{
>>> return 0;
>>>}
>>>
>>>We could argue about how best to comment it but ignore that for now.
>>
>>That is excellent code.
>>
>No, it is not. It doesn't do nothing. It's useless. Who wants to run a
>program that doesn't do nothing except returning with the message that
>it terminated without error. It's a waste of bytes.
>
>I say it has no right of existence!
>
...
>>>Here's a stab at a /bin/true type program:
>>>
>>>int main(void)
>>>{
>>> return 0;
>>>}
>>>
>>>We could argue about how best to comment it but ignore that for now.
>>
>>That is excellent code.
>>
>No, it is not. It doesn't do nothing. It's useless.
I already mentioned /bin/true. true is a Unix program whose purpose
is to generate a successful termination status. It is used quite
commonly in scripts. Far from being useless it is the code for quite an
important program.
><< It doesn't do nothing. It's useless. >>
>
>Interesting logic. Then a program that *does* "do nothing" (as opposed to
>"doesn't do nothing") is, by contrast, useful (read: meritorious)? Possibly
>a closet Luddite here?
Poetic Freedom ;-)
...
>>2. You make the code unnecessarily complex by using a library function
>> where none is required. As an assembly programmer I'm really quite
>> surprised you would make this sort of error. :-)
>>
>My modus operandi is just different when doing C programming. In
>Assembly we aim at _performance_. In C we aim at readability and
>maintainability :-)
Very commendable. However that's my point - you're making things unnecessarily
complex, which hinders readability and maintainability.
>>3. You aren't returning a value from a function that returns int. Of course
>> this isn't wrong but it is counter-intuitive (to the extent that some
>> compilers will warn about it). Therefore it isn't a good thing
>> to do in "perfect" code.
>>
>Come on! Would there really be compilers who don't recognize a
>function of the standard library that makes a return redundant?
Example: MSVC 5.0 with -W3. A couple of lints I tried complained
about this too.
>>4. You call exit() with no declaration for exit in scope. Since exit()
>> does not return int this is an error. This can be fixed by adding
>> #include <stdlib.h> but of course that is increasing the code
>> complexity even further.
>>
>Well, I assumed that people here would realize the inclusion of
><stdlib.h> was implicit and that the main function was not the whole
>program but a fragment of it. Guess I overestimated again ;-)
We're talking about perfect programs, you can't assume anything.
>>5. You are using the identifier EXIT_SUCCESS without defining it. Same
>> solution and issue as 4.
>>
>Yeah, yeah
>
>>Did I miss anything? :-)
>Well, since I omitted the return type of main I could also have
>omitted the "void" in its parameter list. That really is the only flaw
>but I'm so used to putting it there I forgot to omit it.
Using prototype form is generally better than using non-prototype form.
I think that tips the scales when talking about a perfect program.
Inquiring minds want to know: what was/were the bugs?!?
[Crossposted to alt.folklore.computers, and followups redirected there.]
Steve Summit
s...@eskimo.com
...
>So, in all my years of assessing, analysing & marking code,
> I have to say it's the best code I've ever seen.
>
>But I might find your brace-style icky.
>
>So it's not perfect.
Different preferences for coding styles are obviously an unavoidable
fact of life. They are also highly subjective. Can we allow subjectivity
to be a factor when discussing perfection? I don't think so. We must try
to create a framework for judging perfection that is totally objective.
Being subjective about perfection is what art and IOCCC competitions
are all about. :-)
To paraphrase Dan Quayle, "If Gödel were alive today, he'd be turning over
in his grave."
Ahh -- so nice to have an easy out :)
--
Paul Lutus
www.arachnoid.com
Lawrence Kirby wrote in message <929910...@genesis.demon.co.uk>...
>In article <376da3a3...@news.euronet.nl>
> usu...@euronet.nl "Paul Mesken" writes:
>
>...
>
>>>>Here's a stab at a /bin/true type program:
>>>>
>>>>int main(void)
>>>>{
>>>> return 0;
>>>>}
>>>>
>>>>We could argue about how best to comment it but ignore that for now.
>>>
>>>That is excellent code.
>>>
>>No, it is not. It doesn't do nothing. It's useless.
>
>I already mentioned /bin/true. true is a Unix program whose purpose
>is to generate a successful termination status. It is used quite
>commonly in scripts. Far from being useless it is the code for quite an
>important program.
Mmm, there has always been an excessive number of small, doubtful
utilities on unix machines, I wonder when /bin/maybe comes up ;-)
<snip int main(void) { return 0; } and statement of its 'uselessness' >
>
> In fact, this is a C implementation of an IBM mainframe utility,
> called IEFBR14 (IBM's version is written in assembly language).
> I'd guess that it is one of the most commonly run programs on IBM
> mainframe systems. Hardly useless.
>
> In the early 80s IBM released a version of the operating system
> that had a bug in this program. I'm told (by people who are in a
> position to know) that they actually have released it with bugs
> three times, but I only have first hand knowledge of the one.
> Lawrence obviously has done better, perhaps demonstrating the
> advantage of C over assembly language :-)
Er, yes. IEFBR14. I got an 'email' (sort of thing) last week at my client
site, explaining that as part of the Y2K changes, those of us using IEFBR14
at location <foo> had to stop using it and use another version at location
<bar> instead.
I can't help trying (and failing) to imagine some circumstance in which int
main(void){return 0;} might prove to be non-Y2K-compliant.
>We might come up with an abstract notion of perfect, and even code it, but
>then we'd collide with that specter of 20th century logic, Gödel's
>Incompleteness Theorem, the limitation that (among other things) prevents
>Turing's Halting problem from ever being solved in a satisfactory way.
There are subsets of the set of all possible programs where you can determine
whether the program terminates or not. My example is such a case. You
can simply say that a program is a candidate for being "perfect" only if
it can be shown to meet its specification which includes the conditions
for when it terminates.
>Mmm, there has always been an excessive number of small, doubtful
>utilities on unix machines, I wonder when /bin/maybe comes up ;-)
ln /usr/bin/random /bin/maybe
(if you have random anyway).
In one sense I agree, from the standpoint of common sense. But if the Turing
Halting Problem is insoluble for one program, there is a narrow sense in
which it is insoluble for all programs.
It goes like this. You design a program to analyze syntactically correct
computer programs for evidence that they halt, or don't. Then you submit a
very simple program, one that seems to have a certain outcome.
Both Turing and Gödel turn over in their respective graves, you scratch the
back of your neck, and suddenly it comes to you that the test program must
also meet the Turing criterion, not just the tested program. If not, the
result cannot be trusted.
Also, the simplicity of the program as represented in source code hides the
complexity of the (1) compiler and (2) practical running embodiment of the
program, either of which might prevent the "program" (source listing) from
halting or behaving as you would expect. Once you establish that the
compiler must also meet the Turing criterion, all certainty evaporates -- it
might as well be a discussion of the "Turing test," an even less
well-defined idea.
--
Paul Lutus
www.arachnoid.com
Lawrence Kirby wrote in message <929974...@genesis.demon.co.uk>...
>In article <ircb3.6823$En5.2...@news3.giganews.com>
> nos...@nosite.com "Paul Lutus" writes:
>
>>We might come up with an abstract notion of perfect, and even code it, but
>>then we'd collide with that specter of 20th century logic, Gödel's
>>Incompleteness Theorem, the limitation that (among other things) prevents
>>Turing's Halting problem from ever being solved in a satisfactory way.
>
>There are subsets of the set of all possible programs where you can
>determine whether the program terminates or not. My example is such a case. You
>can simply say that a program is a candidate for being "perfect" only if
>it can be shown to meet its specification which includes the conditions
>for when it terminates.
>
><< There are subsets of the set of all possible programs where you can
>determine whether the program terminates or not. >>
>
>In one sense I agree, from the standpoint of common sense. But if the Turing
>Halting Problem is insoluble for one program, there is a narrow sense in
>which it is insoluble for all programs.
>
>It goes like this. You design a program to analyze syntactically correct
>computer programs for evidence that they halt, or don't. Then you submit a
>very simple program, one that seems to have a certain outcome.
>
>Both Turing and Gödel turn over in their respective graves, you scratch the
>back of your neck, and suddenly it comes to you that the test program must
>also meet the Turing criterion, not just the tested program. If not, the
>result cannot be trusted.
>
>Also, the simplicity of the program as represented in source code hides the
>complexity of the (1) compiler and (2) practical running embodiment of the
>program, either of which might prevent the "program" (source listing) from
>halting or behaving as you would expect. Once you establish that the
>compiler must also meet the Turing criterion, all certainty evaporates -- it
>might as well be a discussion of the "Turing test," an even less
>well-defined idea.
This reminds me of waaay back when I started programming Assembly for
the Z80 (actually I poked the machine code in). The code looked
perfect to me but it didn't work, I boiled it down to a single
instruction which was a simple increment of register A. The program
could in my rookie opinion only fail if the increment wasn't carried
out for some mystical reason. So I included a second one (just to make
sure). Of course it didn't work. The program only looked perfect to me
because I didn't see my own mistake and in my arrogance I started to
doubt the CPU itself.
Since then, and reinforced again and again, I have great faith in the
compiler/ assembler and an even greater faith in the computer.
So, apparently humans are not perfect (because I've been proven to
make mistakes ;-) Therefore: not even logic can be trusted and Gödel
might be wrong.
But as you mentioned: we've also common sense and from that (way more
usable) point of view I say Kirby's program is perfect (although I
still think it's useless whatever value it has for Unix environments).
BTW: in "The Restaurant at the End of the Universe" (I think, it's one
of Douglas Adams's volumes of "The Hitchhiker's Guide to the Galaxy")
we meet the real ruler of the universe (who wields the power but
doesn't know it as opposed to the president of the universe whose
function is not to wield power but to draw attention away from it).
This person questions everything and it gets him nowhere fast which is
quite hilarious to read about ("Maybe they're not talking to me but
are singing to my cat" ;-)
><< There are subsets of the set of all possible programs where you can
>determine whether the program terminates or not. >>
>
>In one sense I agree, from the standpoint of common sense. But if the Turing
>Halting Problem is insoluble for one program, there is a narrow sense in
>which it is insoluble for all programs.
>
>It goes like this. You design a program to analyze syntactically correct
>computer programs for evidence that they halt, or don't.
No, I design a program to analyse the particular subset of syntactically
correct programs I am interested in. I do not attempt to design a program
to solve the halting problem in general. I might even be able to
include the program itself in the set of programs it can test.
IIRC the proof of the halting problem is to provide an algorithm to
construct a program based on the code of the checker program that the
checker program will not analyse correctly. All I have to do is
ensure that such a constructed program is not contained in the subset of
programs I want to check.
>Then you submit a
>very simple program, one that seems to have a certain outcome.
>
>Both Turing and Gödel turn over in their respective graves, you scratch the
>back of your neck, and suddenly it comes to you that the test program must
>also meet the Turing criterion, not just the tested program. If not, the
>result cannot be trusted.
The test program does not have to meet the Turing criterion since it
isn't trying to solve the halting problem for a Turing-equivalent
system.
>Also, the simplicity of the program as represented in source code hides the
>complexity of the (1) compiler and (2) practical running embodiment of the
>program, either of which might prevent the "program" (source listing) from
>halting or behaving as you would expect. Once you establish that the
>compiler must also meet the Turing criterion, all certainty evaporates -- it
>might as well be a discussion of the "Turing test," an even less
>well-defined idea.
Even though a compiler is a complex piece of software I believe it is
not impossible in the Turing/Gödel sense to prove that it terminates as
long as suitable restrictions are imposed on the way it is written.
Anyway the fact that a real compiler can only address a finite amount
of memory is enough to show it isn't running on a Turing-equivalent machine.
That qualifier is broad enough that I would have to agree. But as to the
general case, I think it would be difficult to draw the line between those
programs that are suitably trivial and those to which the original criterion
must apply. Even this determination may not be decidable.
--
Paul Lutus
www.arachnoid.com
Lawrence Kirby wrote in message <930008...@genesis.demon.co.uk>...
><< Even though a compiler is a complex piece of software I believe it is not
>impossible in the Turing/Gödel sense to prove that it terminates as long as
>suitable restrictions are imposed on the way it is written. >>
>
>That qualifier is broad enough that I would have to agree.
However it is a principle that should be implicit in the development
of any correct program.
I guess the important thing is to establish that "suitable restrictions"
can, in fact, exist.
>But as to the
>general case, I think it would be difficult to draw the line between those
>programs that are suitably trivial and those to which the original criterion
>must apply. Even this determination may not be decidable.
As a basic principle if you build a program up from a number of independent
small elements each of which can be shown to terminate then you should
be able to prove that the program as a whole terminates. If you can't
then the whole theory of formal proofs is suspect. As regards a compiler
I'm fairly sure you could prove that say, a syntax analyser terminates.
If you turn off optimisation for the moment, since it isn't necessary to
generate working object code, probably most if not all of the components
of a compiler can be validated.
It is a basic principle of programming: if something is too complicated
break it down into simpler parts. Apply the principle recursively on
those parts.
Because of what this implies, I must say again that Gödel's Incompleteness
Theorem leads to the determination that the Turing Halting Problem is
insoluble in principle. My point was that even drawing a line
in the sand between apparently trivial programs, and those to which this
condition clearly applies, may itself be insoluble for the same reason.
To the best of my knowledge, the judgment that a particular program halts is
therefore a personal judgment. It cannot be proven rigorously.
<< As a basic principle if you build a program up from a number of
independent small elements each of which can be shown to terminate then you
should be able to prove that the program as a whole terminates. >>
Now we are addressing the crux of the matter.
<< If you can't then the whole theory of formal proofs is suspect. >>
Even more the crux. The Theorem to which I refer establishes exactly this
point -- formal proofs are suspect.
<< It is a basic principle of programming: if something is too complicated
break it down into simpler parts. Apply the principle recursively on those
parts. >>
It is precisely on these grounds that Russell & Whitehead's magnum opus
"Principia Mathematica" was built, the volume that was meant to demonstrate
the connection between mathematics and the most basic principles of logic.
This was the most famous early casualty of the Theorem.
I believe there is a lower threshold, a program so trivial that the Theorem
does not apply. But on the other end, the general class of programs cannot
be determined to halt, and I strongly suspect that one cannot definitively
draw a line between programs that can, and cannot, be proven to halt.
--
Paul Lutus
www.arachnoid.com
Lawrence Kirby wrote in message <930058...@genesis.demon.co.uk>...
Interesting simile, as fractal generator programs by definition regress
indefinitely.
<< The problem is trying to create something too general. The trick is
limiting yourself to something that is useful without making the boundaries
excessively complex. >>
Actually, the problem is not asserting something, the problem is proving
anything. In the Theorem, for any axiomatic system that uses symbols, there
are insoluble problems, problems that require recourse to an external
meta-system, ad infinitum.
<< Building a specific, limited, framework can avoid the problem but is
ultimately limited in power. That is the real problem with formal proofs. >>
I think if you construct a simple system, the limits to determinacy are
approached more quickly. The problem is not avoided, but the system is
easier to interpret.
<< The fact that I can present a program that is classifiable demonstrates
that there are subsets (*) that are classifiable. >>
Such an example would fit into Gödel's scheme as an example of one of the
cases determinable within the system. Asserting it is easy -- but proving it
is another story, or defining the boundaries of validity.
IMHO you should have left it as "sunsets" and waited for interesting
responses :)
--
Paul Lutus
www.arachnoid.com
Lawrence Kirby wrote in message <930097...@genesis.demon.co.uk>...
<snip>
><< However it is a principle that should be implicit in the development of
>any correct program. >>
>
>Because of what this implies, I must say again that Gödel's Incompleteness
>Theorem leads to the determination that the Turing Halting Problem is
>insoluble in principle. My point was that even drawing a line
>in the sand between apparently trivial programs, and those to which this
>condition clearly applies, may itself be insoluble for the same reason.
I can't draw a line that separates all soluble programs from all
non-soluble programs. I can draw a circle around a specific subset of
soluble programs. I'm sharpening my pencil now! :-)
>To the best of my knowledge, the judgment that a particular program halts is
>therefore a personal judgment. It cannot be proven rigorously.
My example program can be shown to halt in the abstract C machine. I can
create a set of rules to build more complex programs that can be shown
will halt. For example I can say that an if statement will halt
if its control expression and the statement(s) within it all halt
individually. I can say that a while() statement halts if the control
expression and loop body halt and the control expression can be
proved to evaluate to zero within a finite number of iterations of the
loop.
My construction rules will define a subset of the C language which is
not Turing-equivalent but may nevertheless be powerful enough to solve
useful problems. I imagine it is the sort of thing you can extend
like a fractal. You can always refine the rules just that little bit
further going into finer and finer detail.
><< As a basic principle if you build a program up from a number of
>independent small elements each of which can be shown to terminate then you
>should be able to prove that the program as a whole terminates. >>
>
>Now we are addressing the crux of the matter.
>
><< If you can't then the whole theory of formal proofs is suspect. >>
>
>Even more the crux. The Theorem to which I refer establishes exactly this
>point -- formal proofs are suspect.
>
><< It is a basic principle of programming: if something is too complicated
>break it down into simpler parts. Apply the principle recursively on those
>parts. >>
>
>It is precisely on these grounds that Russell & Whitehead's magnum opus
>"Principia Mathematica" was built, the volume that was meant to demonstrate
>the connection between mathematics and the most basic principles of logic.
>This was the most famous early casualty of the Theorem.
The problem is trying to create something too general. The trick is
limiting yourself to something that is useful without making the
boundaries excessively complex.
>I believe there is a lower threshold, a program so trivial that the Theorem
>does not apply. But on the other end, the general class of programs cannot
>be determined to halt, and I strongly suspect that one cannot definitively
>draw a line between programs that can, and cannot, be proven to halt.
The fact that I can present a program that is classifiable demonstrates
that there are subsets (*) that are classifiable. The problem is creating a
general framework. Building a specific, limited, framework can
avoid the problem but is ultimately limited in power. That is the
real problem with formal proofs.
* - I originally mistyped this as "sunsets".
...
>When discussing "perfection" in code yes.
>
>Code is a form of communication between
> a) machine & me
> b) you & me.
>
>In the former brace-style and other subjectivities are irrelevancies
> whittled away by the compiler,
> and thither can (indeed should) be sought perfection.
>
>In the latter brace-style, et cetera, are the defining qualities
> and therein should be sought excellence.
I agree but excellence in communicability needs to be objectively based
or else two arbitrary people will never be able to communicate optimally.
That may be an acceptable state of affairs for a natural language,
but not for a technical language when perfection is the goal.
>Another way of looking at it -
> coding may be a form of engineering,
> but it is also a form of writing.
>
>And although the objective qualities are more primal
> a judgement of code which excludes the subjective ones
> is necessarily degenerate.
Are you talking about aesthetics or something more?
>> I don't think so.
>
>And so we respectfully differ.
For the purposes of finding a perfect program, yes.
>> We must try
>>to create a framework for judging perfection that is totally objective.
>
>Didja ever see some code and say "That's nice" ?
Surely, but that's an entirely different set of criteria to saying
"That's perfect".
Yes -- but the halting problem is a *general* problem.
> My point was that even drawing a line in the sand between apparently
> trivial programs, and those to which this condition clearly applies,
> may itself be insoluble for the same reason.
>
> To the best of my knowledge, the judgment that a particular program
> halts is therefore a personal judgment. It cannot be proven rigorously.
For *particular* programs, there can be proofs of termination which are
as rigorous as the language permits, and are not "personal judgements"
in the sense I take you to mean.
What the halting problem says (or demonstrates) is that there is no
algorithmic way to produce such a proof that works for all programs,
and (I think) no algorithmic way to distinguish the hard cases from
the easy ones.
It *doesn't* say that "for all programs x, you cannot prove x halts";
it says, "there is no universal method M for determining if an arbitrary
program halts".
> << If you can't then the whole theory of formal proofs is suspect. >>
>
> Even more the crux. The Theorem to which I refer establishes exactly
> this point -- formal proofs are suspect.
Goedel's theorem says something much more specific. It says that any
sufficiently powerful formal method (at least powerful enough to do
general arithmetic) has at least one of two "defects": either
(a) it is inconsistent, or
(b) there are true theorems in that system that cannot be proved in
that system.
I think it is unnecessarily misleading to characterise this as
"formal proofs are suspect". In particular, if the Usual Logic we
deploy in program proving is consistent (and if it isn't, we're
in deep trouble -- and why does it seem to work?), then this means
that there are some things we just won't be able to prove.
So then we try and prove something different that's as good for practical
purposes. (Note that this is not a formal algorithmic process and is
not guaranteed to give an answer, so you can't deploy Goedel against
it.)
> It is precisely on these grounds that Russell & Whitehead's magnum opus
> "Principia Mathematica" was built, the volume that was meant to
> demonstrate the connection between mathematics and the most basic
> principles of logic.
> This was the most famous early casualty of the Theorem.
I fear you are mistaken. _PM_ fell over an inconsistency noted by
Frege, which Russell characterised with the set-of-all-sets problem:
consider the set S of all sets that are not members of themselves.
Is S a member of S?
The intuitive notion of set then explodes and the logicians attempt
to reassemble the resulting mess. I think "ramified type theory"
was (one of) Russell's solutions -- you prohibit sets from being
members of themselves *at all*.
> I believe there is a lower threshold, a program so trivial that the
> Theorem does not apply. But on the other end, the general class of
> programs cannot be determined to halt, and I strongly suspect that one
> cannot definitively draw a line between programs that can, and cannot,
> be proven to halt.
You can't, so you don't.
You establish techniques which will work on a sub-set of halting
programs, and when they don't work on some particular program, you
change the *program* until (a) the techniques work, or (b) your resources
run out; or (c) invent a new technique.
For example, the class of primitive recursive functions is guaranteed
to terminate (roughly speaking, because they count down to zero from
some positive integer). There have been proposals for programming
languages with no unbounded loops and no recursion; programs in those
languages are pretty much certain to terminate.
This process is not guaranteed to terminate, but then, in real life
termination isn't enough anyway.
--
Non-terminating Hedgehog
Epimenides claimed to have a formal proof that no formal proof
is correct, but you know what they say about Cretans.
> Non-terminating Hedgehog
Just the sort of Hedgehog needed for an argument with Paul Lutus.
--
MJSR
<< It *doesn't* say that "for all programs x, you cannot prove x halts"; it
says, "there is no universal method M for determining if an arbitrary
program halts". >>
It appears that the first quoted line contradicts the second one.
If (1) there is no algorithmic way to prove a case of the halting problem,
and if (2) there is no rigorous way to distinguish a trivial case from a
non-trivial one, then ipso facto there are no soluble cases at all. In
principle.
My point is, a person can look at a program and say "Clearly this one is so
trivial that I can say it certainly halts." The problem is in *proving* that
assertion.
<<
Goedel's theorem says something much more specific. It says that any
sufficiently powerful formal method (at least powerful enough to do general
arithmetic) has at least one of two "defects": either
(a) it is inconsistent, or
(b) there are true theorems in that system that cannot be proved in
that system.
I think it is unnecessarily misleading to characterise this as "formal
proofs are suspect" ...
>>
It is not misleading, it is necessary.
<< In particular, if the Usual Logic we deploy in program proving is
consistent (and if it isn't, we're in deep trouble -- and why does it seem
to work?), then this means that there are some things we just won't be able
to prove. >>
All formal logical systems are generally consistent, and all of them have
boundaries to their consistency and applicability. Therefore this means, as
you say, there are some things we just won't be able to prove.
In all non-trivial formal logical systems that use symbols, your condition
(b) is always true, and (a) is usually true, depending on the complexity of
the system. Then the problem (again) is distinguishing one set from the
other.
<< I think "ramified type theory" was (one of) Russell's solutions -- you
prohibit sets from being members of themselves *at all*. >>
This may have been proposed, but it contains an obvious logical flaw (a set
of prohibited statements about sets), and PM really has been discredited, it
was not salvaged. Had the criterion you mention not been brought up,
the Incompleteness Theorem would have been brought up as an objection
instead, as it has since.
<< [Paul] I strongly suspect that one cannot definitively draw a line
between programs that can, and cannot, be proven to halt.
[Chris] You can't, so you don't.
You establish techniques which will work on a sub-set of halting programs,
and when they don't work on some particular program, you change the
*program* until (a) the techniques work, or (b) your resources run out; or
(c) invent a new technique. >>
This tries to avoid the problem, it doesn't escape it. It doesn't answer the
question whether an algorithmic assertion about a program can be trusted,
and how one determines that a program change will solve the problem, or the
case is insoluble.
<< There have been proposals for programming languages with no unbounded
loops and no recursion; programs in those languages are pretty much certain
to terminate. >>
"Pretty much certain to terminate"? This is a new subject :)
Your prose suggests a pragmatic, practical way to minimize the consequences
of the Theorem, it doesn't negate it. And it doesn't show a way to establish
a distinction between a program that can be definitively shown to halt, and
one that can't.
--
Paul Lutus
www.arachnoid.com
Chris Dollin wrote in message <377101E8...@hpjavaux.cup.hp.com>...
<snip>
<snip>
> Your prose suggests a pragmatic, practical way to minimize the
> consequences of the Theorem, it doesn't negate it. And it doesn't show
> a way to establish a distinction between a program that can be
> definitively shown to halt, and one that can't.
But one such distinction is easily made. All Windows programs are
guaranteed to halt within fairly short order. It might be minutes, hours or
even days, but they will halt well short of infinity.
Not a complete solution to the Halting Problem, of course, but we should
give Microsoft credit for making a start, anyway.
Yep :)
--
Paul Lutus
www.arachnoid.com
raw...@my-deja.com wrote in message <7kr5uf$d75$1...@nnrp1.deja.com>...
>In article <377101E8...@hpjavaux.cup.hp.com>,
> Chris Dollin <ke...@hpjavaux.cup.hp.com> wrote:
>>
>> I think it is unnecessarily misleading to characterise this as
>> "formal proofs are suspect". In particular, if the Usual Logic we
>> deploy in program proving is consistent (and if it isn't, we're
>> in deep trouble -- and why does it seem to work?), then this means
>> that there are some things we just won't be able to prove.
>
No. Statement (2) was "there exists a problem such that it is not possible
to tell whether it is easy or hard" but you are using it as "for all
problems, it is not possible to tell whether it is easy or hard". [Anyone
else remember the ACL dodgy-set-proof?]
>My point is, a person can look at a program and say "Clearly this one is
>so trivial that I can say it certainly halts." The problem is in
>*proving* that assertion.
For certain programs (e.g. the register machine program which contains only
the single instruction HALT) you can prove it. All the halting problem says
is that there are some problems for which you can't. You seemed to accept
this a while ago when you wrote
>I believe there is a lower threshold, a program so trivial that the Theorem
>does not apply. But on the other end, the general class of programs cannot
>be determined to halt, and I strongly suspect that one cannot definitively
>draw a line between programs that can, and cannot, be proven to halt.
About existing techniques you wrote:
>This tries to avoid the problem, it doesn't escape it. It doesn't answer
>the question whether an algorithmic assertion about a program can be
>trusted,
>and how one determines that a program change will solve the problem, or the
>case is insoluble.
There is no reason here why an algorithmic assertion cannot be trusted. The
halting problem simply says that the algorithm must sometimes produce the
"don't know" result. And even that's only true when the system in question
is Turing powerful, which anything that actually exists isn't.
[Attempted summary: Paul seems to think that Goedel means that
almost any proof of program termination is suspect; I disagree, claiming
that Goedel merely says that there's no *algorithm* for constructing
such proofs, but proofs may be had in a variety of situations. Along
the way, straw men are slain and we talk past each other. Help!].
> << What the halting problem says (or demonstrates) is that there is no
> algorithmic way to produce such a proof that works for all programs,
> and (I think) no algorithmic way to distinguish the hard cases from the
> easy ones.
> >>
>
> << It *doesn't* say that "for all programs x, you cannot prove x
> halts"; it says, "there is no universal method M for determining if an
> arbitrary program halts". >>
>
> It appears that the first quoted line contradicts the second one.
I don't see why you think so.
The first quoted paragraph and the second quote in the second paragraph
are supposed to mean the same thing: there is no *universal* method
for discovering if an arbitrary program halts.
The first quote in the second paragraph points out that this doesn't
prohibit there being halting proofs for *particular* programs.
Can you explain why you think there's a contradiction?
> If (1) there is no algorithmic way to prove a case of the halting
> problem,
This premise is false. For a *particular* case, there can be an
algorithmic proof. Consider the function over the natural numbers
fun f(n) = if n == 0 then 42 else f( n - 1 ) endif
Proof by induction will do the job.
> and if (2) there is no rigorous way to distinguish a trivial case from
> a non-trivial one, then ipso facto there are no soluble cases at all.
This premise is false as well. There are certainly ways to spot *some*
"trivial" cases -- as I said before, for example, all primitive recursive
functions terminate, so if you can show a program encodes a primitive
recursive function, it will terminate -- the question is whether there
are enough "trivial" cases to be useful.
I quote "trivial" because, in practice, the problem is hard.
> My point is, a person can look at a program and say "Clearly this one
> is so trivial that I can say it certainly halts." The problem is in
> *proving* that assertion.
Well, yes, but that's nothing to do with Goedel. Going from "it is
obvious that X" to a full proof of X is hard work. Consider, for example,
the four-colour conjecture. Sorry, theorem.
> <<
> Goedel's theorem says something much more specific. It says that any
> sufficently powerful formal method (at least powerful enough to do
> general arithmetic) has at least one of two "defects": either
>
> (a) it is inconsistent, or
> (b) there are true theorems in that system that cannot be proved in
> that system.
>
> I think it is unnecessarily misleading to characterise this as "formal
> proofs are suspect" ...
>
> >>
>
> It is not misleading, it is necessary.
I'm sorry, I simply don't agree. You'll have to explain further.
> << In particular, if the Usual Logic we deploy in program proving is
> consistent (and if it isn't, we're in deep trouble -- and why does it
> weem to work?), then this means that there are some things we just
> won't be able to prove. >>
>
> All formal logical systems are generally consistent,
You would like to prove this, perhaps? The Usual Logic we employ in
program proving has general arithmetic in it. (That's why I described
it like that.) Hence, by Goedel, it's either inconsistent or incomplete.
We certainly *hope* they're consistent, and I (for one) will act as
though they are -- but if you're going to make this confident claim,
I think you should explain why.
> and all of them have boundaries to their consistency and applicability.
> Therefore this means, as you say, there are some things we just won't
> be able to prove.
Yes, that's right; some things we can't prove even in principle
(as well as some things we simply haven't the resources to prove,
which I think is a more immediate problem!).
> In all non-trivial formal logical systems that use symbols, your
> condition (b) is always true, and (a) is usually true, depending on the
> complexity of the system. Then the problem (again) is distinguishing
> one set from the other.
> << I think "ramified type theory" was (one of) Russell's solutions --
> you prohibit sets from being members of themselves *at all*. >>
>
> This may have been proposed,
There's no "may" about it; my reticence is for the details, not the
brute fact.
> but it contains an obvious logical flaw (a set of prohibited statements
> about sets),
This is not an "obvious logical flaw". It is a decision about ways to
recover from the deep inconsistency of the set theory of the time.
power, you may think that you lose a lot of expressive
power, and that the loss isn't worth the gain; that doesn't make it
a logical flaw, any more than (for example) C's lack of closures and
garbage collection is a logical flaw.
> and PM really has been discredited, it was not salvaged.
I neither said nor implied otherwise; I was correcting your mis-statement
about the killer and giving some context about recovery attempts.
> Had the criterion you mention not have been brought up,
> the Incompleteness Theorem would have been brought up as an objection
> instead, as it has since.
I'm miles from my books, I'd like to check the chronology ... the
incompleteness theorem was a Hilbert-killer (because it blocked
utterly Hilbert's programme for the complete formalisation of
mathematics), but I can't recall how Hilbert & PM overlapped.
> << [Paul] I strongly suspect that one cannot definitively draw a line
> between programs that can, and cannot, be proven to halt.
>
> [Chris] You can't, so you don't.
>
> You establish techniques which will work on a sub-set of halting
> programs, and when they don't work on some particular program, you
> change the *program* until (a) the techniques work, or (b) your
> resources run out; or (c) invent a new technique. >>
>
> This tries to avoid the problem, it doesn't escape it.
Of course. You can't find a universal solution, so you look for
non-universal ones.
> It doesn't answer the question whether an algorithmic assertion about a
> program can be trusted,
Of course not. It's an approach to solving a "practical" problem,
namely "I have a program that purports to satisfy specification X,
does it do so?". It says "you can make useful progress even if you
can't draw the line".
And (assuming that the algorithm was *correct*) algorithmic assertions
about a program are no less trustworthy than non-algorithmic ones.
Goedel's theorem doesn't say otherwise. Do you think it does?
> and how one determines that a program change will solve the problem,
Of course not. How could it? There's no algorithmic method to do so.
I can attempt to give heuristics -- indeed, did it not take us
even further off-topic (if that's possible ... where should we go
with this?), I would do so. How about "try increasingly large
perturbations to the program/proof, using ideas that have worked
in the past, until the problem is solved or allocated resources are
exhausted".
> or the case is insoluble.
> << There have been proposals for programming languages with no
> unbounded loops and no recursion; programs in those languages are
> pretty much certain to terminate. >>
>
> "Pretty much certain to terminate"? This is a new subject :)
It's the subject of "the compiler might be buggy or the machine
might suffer an alpha attack; the odds are better that this causes
non-termination".
> Your prose suggests a pragmatic, practical way to minimize the
> consequences of the Theorem,
Why, thank you. That's exactly what I was aiming for.
> it doesn't negate it.
I made no claim that I would do so.
> And it doesn't show a way to establish a distinction between a program
> that can be definitively shown to halt, and one that can't.
It doesn't *show* one, it *uses* one, the obvious one; if a program
can be shown to halt by using one of a variety of techniques applicable
to programs of that type, the program halts; if it can be shown to
*not* halt ditto, it doesn't; otherwise you don't know.
In other words, if a proof exists, it is not suspect [at least not
for Goedel reasons]. However, a proof may not exist. In practice this
is no different from "a proof exists, but you haven't the resources
to find it".
--
wHOLe Hedgehog
In article <37725E98...@hpjavaux.cup.hp.com>,
Chris Dollin <ke...@hpjavaux.cup.hp.com> wrote:
>... There are certainly ways to spot *some* "trivial" cases -- as
>I said before, for example, all primitive recursive functions
>terminate, so if you can show a program encodes a primitive recursive
>function, it will terminate -- the question is whether there
>are enough "trivial" cases to be useful.
As a practical matter, consider the Berkeley Packet Filter (BPF).
BPF is something we stick into an operating system kernel, at or
near the device driver level. This allows us to inspect network
packets, so that we can build "network sniffers" to watch for
attempted breakins, or diagnose faulty software, or any number of
other interesting problems. These are most often applied to
Ethernets (they actually work on any arbitrary interface, including
frame relay and T1 and so on, but Ethernets are a nice concrete
example).
If you are familiar with Ethernet-level protocols, you know that
Ethernet packets are sent from some MAC address, to some MAC address.
Normally your hardware only picks up packets addressed directly to
it, and of course packets sent to the Ethernet broadcast address
(ff:ff:ff:ff:ff:ff). (More precisely, you get multicast addresses,
usually with some hardware filtering that passes all broadcasts
and some superset of the desired multicast addresses.) So, if you
want to "tap into" packets that are not actually for *you*, you
have to put your hardware into "promiscuous" mode, so that you pick
up *every* incoming packet.
You could code your kernel/drivers to pass *all* packets to some
user-level code, which would then pick out the ones it wants and
discard the rest. This is how some Unix-like systems do it. It
is not very efficient and often loses packets (because the user-level
code cannot keep up with a full network load).
BPF attempts to be more efficient. It works by putting incoming
packets through "filters". Only those that pass the filters are
sent on to the user-level code. (This is directly analogous to
the multicast filtering the hardware performs: you can write a
filter that tosses *most* of the unwanted stuff cheaply, then do
a "perfect" job elsewhere.)
The actual design of BPF involves loading filtering code into the
kernel. The code is run on a simple simulated machine that has
instructions like "load word from packet", "mask" and "compare with
accumulator". A program in BPF can load the Ethernet type, see if
it is an IP packet, load some of the IP headers, see if it is for
some set of ports, and take the packet only if it is going to one
of the "interesting" ports. Since this simulated machine's language
has branches, however, it would be trivial to write a BPF program
that did something like:
L1: jump L1
and if you did that, you would lock up the machine entirely. Thus,
we would like to reject programs that will not halt.
There are various ways to do this. One method (not used in BPF)
is to give programs a "maximum cycle count": after running however
many instructions, the program "dies". All programs written in
this system will, by definition, halt: they will either reach the
"halt with value" instruction, or they will die of old age.
The method actually used in BPF is to force all jumps to be forward
jumps. If there are no backwards branches, there are no loops.
There are no function calls at all in BPF, so "no loops" immediately
implies "will halt". All BPF programs trivially halt.
Nonetheless, BPF programs are not always trivial: they often make
the difference between a network analyser that can keep up with
your traffic, and one that cannot.
So, when Paul Lutus says:
>> My point is, a person can look at a program and say "Clearly this one
>> is so trivial that I can say it certainly halts." The problem is in
>> *proving* that assertion.
this is not necessarily true: there are some programs (such as
those that make no recursive calls and have no loops) that must
trivially halt, and there is no problem proving it.
As soon as we introduce either loops or recursion, the problem
gets substantially harder, but as Chris Dollin notes:
>Well, yes, but that's nothing to do with Goedel. Going from "it is
>obvious that X" to a full proof of X is hard work.
Certainly we can take any specific program in any specific language,
attempt to determine whether and when it halts, and then try to
prove that. What the Halting Problem says is that there is no way
to generalize this to the point where a Turing machine could *always*
tell whether other Turing-machine programs halt. As long as we
restrict ourselves to specific cases, we can do better; and as long
as those restrictions do not pinch overmuch, we can be happy with
that as well. :-)
--
In-Real-Life: Chris Torek, Berkeley Software Design Inc
El Cerrito, CA Domain: to...@bsdi.com +1 510 234 3167
http://claw.bsdi.com/torek/ (not always up) I report spam to abuse@.
Um, that would have *everything* to do with Gödel. Proving some statements
using a nontrivial formal system of logic may look like hard work, Gödel
asserts that it may be impossible.
<<
[Chris] I think it is unnecessarily misleading to characterise this as
"formal proofs are suspect" ...
[Paul] It is not misleading, it is necessary.
[Chris] I'm sorry, I simply don't agree. You'll have to explain further.
>>
Okay. Gödel's Theorem basically says for a nontrivial formal system using
symbols, there are statements not provable within that system. In the
absence of detailed information about a particular case, and prior to making
a technical assessment, formal proofs that are contained entirely within any
formal system are suspect -- it must be demonstrated that this condition
doesn't apply to them. This cannot be assumed a priori.
Russell and Whitehead's tome was a formal proof. It was not recognized to be
suspect until later.
Saying "formal proofs are suspect" is like saying e=mc^2. It is an
approximation, but a reasonable one. More information is available, but in
the absence of that information, the short form is likely to be accurate.
After re-examining the supposed conflict between two assertions about the
various classes of programs, I no longer think the statements are in
conflict. There are programs that cannot be proven to halt, there are
programs that can, and the gray zone between them is indeterminate. You
asserted (using a double negative) that *some* programs clearly halt. I
agree.
<< [Paul] and if (2) there is no rigorous way to distinguish a trivial case
from a non-trivial one, then ipso facto there are no soluble cases at all.
[Chris] This premise is false as well. >>
I agree. I went too far. This logical sequence is valid for some sets, but
in this case (I think) there is a set of trivial programs about which there
is no doubt.
My problem with "ramified type theory" was that you described it as a
solution. It was an attempted repair. The basic premise of PM -- that
mathematics could be shown to have a firm foundation in logic -- cannot be
restored, repairs notwithstanding.
<<
[Paul] This tries to avoid the problem, it doesn't escape it.
[Chris] Of course. You can't find a universal solution, so you look for
non-universal ones.
>>
I was not objecting to practical searches for solutions, I was objecting to
the suggestion that these programs in the "gray zone" might be thought
accessible to a rigorous halting proof. I think this assumption is risky.
<< It's an approach to solving a "practical" problem, namely "I have a
program that purports to satisfy specification X, does it do so?". It says
"you can make useful progress even if you can't draw the line". >>
I agree, but the basic subject -- provably halting -- is not addressed. Some
people might incorrectly think a particular program is outside the domain of
unprovability, proceed to "prove" that it halts, and assert things about the
"proof" that are unsupportable.
<< It's the subject of "the compiler might be buggy or the machine might
suffer an alpha attack; the odds are better that this causes
non-termination". >>
These and many other repeatable elements (therefore excluding the alpha
attack) are part of the abstract Turing Machine that influences the outcome
for a particular program. They add to the complexity of an apparently simple
program.
<< It doesn't *show* one, it *uses* one, the obvious one; if a program can
be shown to halt by using one of a variety of techniques applicable to
programs of that type, the program halts; if it can be shown to *not* halt
ditto, it doesn't; otherwise you don't know. >>
"Shown to halt" is a long way from "proven to halt" in the scenario you are
describing. The first may be a practical solution, but my concern is someone
may think it is a proof, and load a bunch of children onto a ski lift, or
expose some "clients" to a radiation therapy machine, on the strength of a
mistaken assumption. I am not talking about miracles here, I am talking
about the issue of informed consent. People should know, in a general way,
what limitations exist in software. Programmers should certainly know.
<< In other words, if a proof exists, it is not suspect [at least not for
Goedel reasons]. However, a proof may not exist. In practice this is no
different from "a proof exists, but you haven't the resources to find it".
>>
No. If you don't have the resources to find a proof, the proof may not
exist. It cannot be assumed to exist as you imply here. Nothing at all may
be assumed about it.
--
Paul Lutus
www.arachnoid.com
Chris Dollin wrote in message <37725E98...@hpjavaux.cup.hp.com>...
<snip>
<<
So, when Paul Lutus says:
>> My point is, a person can look at a program and say "Clearly this one
>> is so trivial that I can say it certainly halts." The problem is in
>> *proving* that assertion.
this is not necessarily true: there are some programs (such as
those that make no recursive calls and have no loops) that must
trivially halt, and there is no problem proving it.
>>
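For concreteness, the kind of code meant here -- straight-line, no loops, no
recursion -- might be sketched like this (an invented toy, not anyone's real
program):

```c
/* Straight-line code: no loops, no recursion, no backward branches.
   Every call executes the same fixed, finite sequence of steps, so it
   must halt -- and "proving" that requires nothing beyond inspection. */
static int celsius_to_fahrenheit(int c)
{
    return c * 9 / 5 + 32;
}
```

The halting "proof" here is nothing more than observing that every execution
path is a fixed, finite sequence of steps.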
I agree with the logic, but, just for the sake of completeness, the abstract
Turing Machine in the example you gave must include the compiler. The
compiler may have a flaw that prevents it from faithfully meeting the
criteria the programmer set out -- no loops, only forward branches, etc.
In this very hypothetical scenario, the rather complex compiler (not itself
provably halting) might produce an unfaithful rendering of the provably
halting source code.
I think in discussions of this kind, we must limit ourselves to a discussion
of the source code, *or* if we make assertions about the executable, we must
include the behavior of the compiler.
--
Paul Lutus
www.arachnoid.com
Chris Torek wrote in message <7ktqlh$3r0$1...@elf.bsdi.com>...
Um, that would have *everything* to do with Gödel. Proving some statements
using a nontrivial formal system of logic may look like hard work; Gödel
asserts that it may be impossible.
<<
[Chris] I think it is unnecessarily misleading to characterise this as
"formal
proofs are suspect" ...
[Paul] It is not misleading, it is necessary.
[Chris] I'm sorry, I simply don't agree. You'll have to explain further.
>>
Okay. Gödel's Theorem basically says that for a nontrivial formal system using
symbols, there are statements not provable within that system. In the
absence of detailed information about a particular case, and prior to making
a technical assessment, formal proofs that are contained entirely within any
formal system are suspect -- it must be demonstrated that this condition
doesn't apply to them. This cannot be assumed a priori.
Russell and Whitehead's tome was a formal proof. It was not recognized to be
suspect until later.
Saying "formal proofs are suspect" is like saying e=mc^2. It is an
approximation, but a reasonable one. More information is available, but in
the absence of that information, the short form is likely to be accurate.
After re-examining the supposed conflict between two assertions about the
various classes of programs, I no longer think the statements are in
conflict. There are programs that cannot be proven to halt, there are
programs that can, and the gray zone between them is indeterminate. You
asserted (using a double negative) that *some* programs clearly halt. I
agree.
<< [Paul] and if (2) there is no rigorous way to distinguish a trivial case
from a non-trivial one, then ipso facto there are no soluble cases at all.
[Chris] This premise is false as well. >>
I agree. I went too far. This logical sequence is valid for some sets, but
in this case (I think) there is a set of trivial programs about which there
is no doubt.
My problem with "ramified type theory" was that you described it as a
solution. It was an attempted repair. The basic premise of PM -- that
mathematics could be shown to have a firm foundation in logic -- cannot be
restored, repairs notwithstanding.
<<
[Paul] This tries to avoid the problem, it doesn't escape it.
[Chris] Of course. You can't find a universal solution, so you look for
non-universal ones.
>>
I was not objecting to practical searches for solutions, I was objecting to
the suggestion that these programs in the "gray zone" might be thought
accessible to a rigorous halting proof. I think this assumption is risky.
<< It's an approach to solving a "practical" problem, namely "I have a
program that purports to satisfy specification X, does it do so?". It says
"you can make useful progress even if you can't draw the line". >>
I agree, but the basic subject -- provably halting -- is not addressed. Some
people might incorrectly think a particular program is outside the domain of
unprovability, proceed to "prove" that it halts, and assert things about the
"proof" that are unsupportable.
<< It's the subject of "the compiler might be buggy or the machine might
suffer an alpha attack; the odds are better that this causes
non-termination". >>
These and many other repeatable elements (therefore excluding the alpha
attack) are part of the abstract Turing Machine that influences the outcome
for a particular program. They add to the complexity of an apparently simple
program.
<< It doesn't *show* one, it *uses* one, the obvious one; if a program can
be shown to halt by using one of a variety of techniques applicable to
programs of that type, the program halts; if it can be shown to *not* halt
ditto, it doesn't; otherwise you don't know. >>
"Shown to halt" is a long way from "proven to halt" in the scenario you are
describing. The first may be a practical solution, but my concern is someone
may think it is a proof, and load a bunch of children onto a ski lift, or
expose some "clients" to a radiation therapy machine, on the strength of a
mistaken assumption. I am not talking about miracles here, I am talking
about the issue of informed consent. People should know, in a general way,
what limitations exist in software. Programmers should certainly know.
<< In other words, if a proof exists, it is not suspect [at least not for
Goedel reasons]. However, a proof may not exist. In practice this is no
different from "a proof exists, but you haven't the resources to find it".
>>
No. If you don't have the resources to find a proof, the proof may not
exist. It cannot be assumed to exist as you imply here. Nothing at all may
be assumed about it.
--
Paul Lutus
www.arachnoid.com
Chris Dollin wrote in message <37725E98...@hpjavaux.cup.hp.com>...
<snip>
Hilbert's program must date to at least his 1900 list of significant math
problems for the 20th century. Russell's Paradox (1901) undercut
Frege's work, and did make formal proofs suspect. Russell's theory
of types was in part an attempt to resolve such paradoxes. Principia
Mathematica was published in volumes, in 1910, 1912 and 1913,
well before the Incompleteness Theorem (1931), and was attacked
before then for other reasons.
I thought your points were very well explained.
No, it doesn't have "everything" to do with Goedel. *Even if Goedel's
theorem were not true*, my quoted statement above holds. Proofs are
hard to find *even when they exist*.
If you can't find a proof for X, X may not have a proof. Or it may
have a proof too difficult for *you* to find. If you can't prove that
X has no proof (!) then you have two choices: keep trying, or give up.
>
> <<
>
> [Chris] I think it is unnecessarily misleading to characterise this as
> "formal proofs are suspect" ...
>
> [Paul] It is not misleading, it is necessary.
>
> [Chris] I'm sorry, I simply don't agree. You'll have to explain
> further.
>
> >>
>
> Okay. Gödel's Theorem basically says for a nontrivial formal system
> using symbols,
(aside: are there formal systems *not* using symbols?)
> there are statements not provable within that system. In the
> absence of detailed information about a particular case, and prior to
> making a technical assessment, formal proofs that are contained
> entirely within any formal system are suspect -- it must be
> demonstrated that this condition doesn't apply to them. This cannot be
> assumed a priori.
I'm sorry, I'm still no better off. Suppose P is some proof of statement
X in formal system S. Since S *is* a formal system, it has proof rules
and axioms. Since P purports to be a proof, it has (alleged) axioms and a
sequence of (alleged) inference steps, ending in an assertion of X.
If the alleged axioms of P are not axioms of S, P is not a proof of X.
If any alleged inference step of P is not an instance of a proof rule
of S, P is not a proof of X. Otherwise, P is a proof of X.
If X is in fact not provable within S, nobody will be able to exhibit
a proof P (in S) of X.
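That checking procedure can be sketched mechanically. Everything below is a
toy of my own devising -- integer "formulas", two axioms, one made-up
inference rule -- meant only to show the shape of the check, not any real
logic:

```c
#include <stddef.h>

/* A schematic proof checker: a proof is a sequence of steps, each
   either an axiom of S or an application of a rule of S to strictly
   earlier steps. Toy encoding: formulas are bare integers, the only
   axioms are 1 and 2, and the single (invented) rule says "from f and
   g one may infer f + g". */

enum just { AXIOM, RULE_SUM };

struct step {
    int formula;         /* the formula asserted at this step        */
    enum just how;       /* its claimed justification                */
    size_t from, with;   /* for RULE_SUM: indices of earlier steps   */
};

static int is_axiom(int f)
{
    return f == 1 || f == 2;   /* toy axiom set */
}

/* Returns 1 if every step checks out, 0 at the first flawed step. */
static int check_proof(const struct step *p, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (p[i].how == AXIOM) {
            if (!is_axiom(p[i].formula))
                return 0;              /* alleged axiom is not an axiom */
        } else {
            if (p[i].from >= i || p[i].with >= i)
                return 0;              /* must cite strictly earlier steps */
            if (p[i].formula != p[p[i].from].formula + p[p[i].with].formula)
                return 0;              /* not an instance of the rule */
        }
    }
    return 1;
}
```

If X is not provable in S, no sequence of steps will pass this check;
whatever is offered will fail at some identifiable axiom or step.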
Could you phrase your paragraph above in similar terms to mine so that
we may compare and contrast? Because this discussion has all the signs
of two people talking about different things but using the same words :-)
> Russell and Whitehead's tome was a formal proof. It was not recognized
> to be suspect until later.
I thought that Frege's death-blow came during the construction of PM,
not later (certainly not *much* later).
> Saying "formal proofs are suspect" is like saying e=mc^2.
I would disagree violently; one is an assertion about mathematics,
and the other is supposed to be a piece of physics. But I don't
think this disagreement matters to our argument.
> It is an approximation, but a reasonable one.
I don't think it's reasonable, and, to the extent that I might agree,
I don't think that the statement is a consequence of Goedel's theorem.
> More information is available, but in the absence of that information,
> the short form is likely to be accurate.
A proof may have a mistake in it, yes. I don't think anyone would dispute
this. If that's all you mean, we have no quarrel.
A proof may not prove what you thought it did (because your axioms
do not reflect your intuitions). If that's all you mean, we have no
quarrel.
If you mean that a formal proof is automatically suspect *because of
the incompleteness theorem*, I think you have yet to make your case.
> After re-examining the supposed conflict between two assertions about
> the various classes of programs, I no longer think the statements are
> in conflict.
Thank you.
> There are programs that cannot be proven to halt, there are
> programs that can, and the gray zone between them is indeterminate. You
> asserted (using a double negative) that *some* programs clearly halt. I
> agree.
>
> << [Paul] and if (2) there is no rigorous way to distinguish a trivial
> case from a non-trivial one, then ipso facto there are no soluble cases
> at all.
>
> [Chris] This premise is false as well. >>
>
> I agree. I went too far. This logical sequence is valid for some sets,
> but in this case (I think) there is a set of trivial programs about
> which there is no doubt.
There's a large set of such programs, and an even larger one of "not
trivial, but doable". (Of course both sets are infinite, but I don't
think that matters.)
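For concreteness, one inhabitant of the "not trivial, but doable" set might
be subtraction-form gcd (a toy sketch): the loop is real, but a one-line
variant argument proves termination:

```c
/* Not straight-line, yet still provably terminating. The variant
   a + b is a non-negative integer that strictly decreases on every
   iteration (the smaller positive value is subtracted from the
   larger), so the loop cannot run forever. */
static unsigned gcd(unsigned a, unsigned b)
{
    while (a != 0 && b != 0) {
        if (a >= b)
            a -= b;
        else
            b -= a;
    }
    return a + b;   /* one of them is now zero */
}
```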
> My problem with "ramified type theory" was that you described it as a
> solution. It was an attempted repair.
I'm sorry; if I gave that impression I miswrote. I agree with your
description.
> The basic premise of PM -- that mathematics could be shown to have a
> firm foundation in logic -- cannot be restored, repairs
> notwithstanding.
*That* isn't a description I would use, however. But it depends on
what you mean by "mathematics", "firm", "foundation", and "logic",
and we've spent enough time already.
> <<
>
> [Paul] This tries to avoid the problem, it doesn't escape it.
>
> [Chris] Of course. You can't find a universal solution, so you look for
> non-universal ones.
>
> >>
>
> I was not objecting to practical searches for solutions, I was
> objecting to the suggestion that these programs in the "gray zone"
> might be thought accessible to a rigorous halting proof. I think this
> assumption is risky.
Oh, yes, it is. I don't recall making it, though.
> << It's an approach to solving a "practical" problem, namely "I have a
> program that purports to satisfy specification X, does it do so?". It
> says "you can make useful progress even if you can't draw the line". >>
>
> I agree, but the basic subject -- provably halting -- is not addressed.
> Some people might incorrectly think a particular program is outside the
> domain of unprovability, proceed to "prove" that it halts, and assert
> things about the "proof" that are unsupportable.
Suppose P is a program for which there is in fact no proof that it
halts and no proof that it doesn't. "Some people" don't realise this,
and "prove" that it halts.
There isn't a magic contamination that means that their "proof" will
*look* like a proof, but mysteriously fail to be correct just because
P "halts undecidably". Either they have a real proof (which, given
my premises, they don't) or a proof-with-a-flaw. The flaw is findable:
some axiom or step fails.
Of course *in practice* finding flaws is hard work. Actually, writing
almost any formal proof is hard work; details matter everywhere. But
this is just another variation of "programs have bugs".
> << It's the subject of "the compiler might be buggy or the machine
> might suffer an alpha attack; the odds are better that this causes
> non-termination". >>
>
> These and many other repeatable elements (therefore excluding the alpha
> attack) are part of the abstract Turing Machine that influences the
> outcome for a particular program. They add to the complexity of an
> apparently simple program.
If you like.
> << It doesn't *show* one, it *uses* one, the obvious one; if a program
> can be shown to halt by using one of a variety of techniques applicable
> to programs of that type, the program halts; if it can be shown to
> *not* halt ditto, it doesn't; otherwise you don't know. >>
>
> "Shown to halt" is a long way from "proven to halt" in the scenario you
> are describing.
They are intended to be synonymous in my paragraph.
> The first may be a practical solution, but my concern is someone
> may think it is a proof, and load a bunch of children onto a ski lift,
> or expose some "clients" to a radiation therapy machine, on the
> strength of a mistaken assumption.
This is a real concern: the abstract properties we prove with may not
reflect the actual properties of the systems we build.
This, however, has nothing to do with incompleteness or Turing-
computability.
> I am not talking about miracles here, I am talking about the issue of
> informed consent. People should know, in a general way, what
> limitations exist in software. Programmers should certainly know.
That's true. And the limitations we are routinely concerned with,
by and large, are not those of decidability, because other, much less
subtle, problems arise.
(Will the input always be in the range 0..41? Does every client name
fit into 32 bytes? Can we use 7 bytes for a Zip code? Is this code
dependent on alignment or endian considerations? Will this buffer
overrun?)
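Sketched in C, with the figures above (the 0..41 range, the 32-byte name)
kept purely as the invented examples they are:

```c
#include <string.h>

#define NAME_MAX_LEN 32   /* the hypothetical 32-byte client-name limit */

/* Is the input in the range the rest of the code assumes? */
static int check_index(int i)
{
    return (i >= 0 && i <= 41) ? 0 : -1;
}

/* Copy a client name only if it fits; reject rather than overrun
   (or silently truncate) the destination buffer. */
static int copy_name(char dst[NAME_MAX_LEN], const char *src)
{
    if (strlen(src) >= NAME_MAX_LEN)
        return -1;
    strcpy(dst, src);
    return 0;
}
```

These are the checks that routine reliability actually hinges on, long before
any decidability question arises.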
> << In other words, if a proof exists, it is not suspect [at least not
> for Goedel reasons]. However, a proof may not exist. In practice this
> is no different from "a proof exists, but you haven't the resources to
> find it". >>
>
> No. If you don't have the resources to find a proof, the proof may not
> exist.
"However, a proof may not exist". That's what I said.
> It cannot be assumed to exist as you imply here.
I made and intended no such implication.
> Nothing at all may be assumed about it.
Yes. Where did I say any differently?
[The only thing I can see that you may be getting at is that we mustn't
make deductions based on "a proof exists, and so ...". That's true.
My "in practice" rather took for granted that one wouldn't *assume*
that proofs existed, one would *produce* such a proof when required.]
--
Control-Meta-Hedgehog
So if I understand this correctly, you're commenting that "proofs" might
turn out to be unreliable because we misunderstood the system we are
applying them to. In particular, if we prove something based on the
semantics of a particular language, we might come unstuck because a compiler
bug means that we're actually working with a subtly different language.
This is a good point and highlights a limitation of formal correctness
proofs. Indeed, it highlights a problem in software engineering in general.
However, it has nothing to do with the halting problem.
><< In other words, if a proof exists, it is not suspect [at least not for
>Goedel reasons]. However, a proof may not exist. In practice this is no
>different from "a proof exists, but you haven't the resources to find it".
>>>
>
>No. If you don't have the resources to find a proof, the proof may not
>exist. It cannot be assumed to exist as you imply here. Nothing at all may
>be assumed about it.
Chris implied no such thing. He explicitly said 'A proof may not exist'.
I think you may have misinterpreted the last-quoted sentence: 'In practice
this is no different from "a proof exists, but you haven't the resources to
find it".'
There are four possible things we can say about a statement X:
1. We can prove X
2. We can disprove X
3. We know that X is undecidable
4. We don't know which of 1, 2 or 3 applies.
Chris is commenting that, in all of the cases (2), (3) and (4), we can't
rely on the truth of X. In practical terms they are not all that different,
because in all three cases you shouldn't use X if you're trying to build a
reliable system.
This is the crux of the discussion: if we can prove X, then -- subject to
assumptions about the correctness of the proof, including those you
highlight above -- we can rely on X. If you can't prove X, then we can't
rely on X irrespective of whether a proof actually does exist. I have a
feeling that you're both really saying the same thing, but in slightly
different ways.
Cheers,
Richard
Or it may be unprovable in principle, thus having something to do with
Gödel. My original point.
<< aside: are there formal systems *not* using symbols? >>
Good question. Maybe those that rely only on slippery things like words? :)
<< [lengthy snip] Otherwise, P is a proof of X. >>
Unless P can be shown to belong to the subset of proofs Gödel showed to be
indeterminate for any formal system that uses symbols. My original point --
formal systems are suspect, in a way that wasn't recognized before this
theorem.
<< Could you phrase your paragraph above in similar terms to mine so that we
may compare and contrast? Because this discussion has all the signs of two
people talking about different things but using the same words >>
I agree, and I certainly have been in my share of those.
Okay, here goes. Within formal system S, we assert proof P about assertion
X. P appears to meet all the criteria of a formal proof of assertion X, all
this resting on S's foundation of logical postulates and symbols. Then it is
discovered that P refers to one or more properties of system S in such a way
that the Theorem applies. This invalidates P. It doesn't disprove P; it
simply asserts that P has no meaning, that it is indeterminate.
<< Russell and Whitehead's tome was a formal proof. It was not recognized to
be suspect until later.
I thought that Frege's death-blow came during the construction of PM, not
later (certainly not *much* later). >>
I should have said the thesis was constructed before being shown to possess
one or more flaws. I don't know the dates of publication versus other
events. Certainly Gödel's work was later, and, as you have pointed out, the
relationship between PM and Gödel may be no more than a retrospective
relationship, not one that stirred anyone up at the time. I have often
thought the Incompleteness Theorem invalidated PM, but this may be simply
true without any specific historical connection.
<< [Paul] Saying "formal proofs are suspect" is like saying e=mc^2.
[Chris] I would disagree violently; one is an assertion about mathematics,
and the other is supposed to be a piece of physics. >>
I offer it because both are a kind of shorthand. They both compress reality
to the straining point, but are essentially valid in their respective ways.
As to formal proofs, you can take a careful look and then declare the
original remark an oversimplification. Same with e=mc^2.
<< If you mean that a formal proof is automatically suspect *because of the
incompleteness theorem*, I think you have yet to make your case. >>
All formal proofs (that meet the conditions set out above) that predate
Gödel must be examined in light of this new finding. As I understand this
issue, many will be (have been) shown to be unsupportable, to the degree
that they simultaneously rely on, and refer to, the postulates of the formal
system of which they are a part.
All formal proofs and theorems after Gödel, one presumes, have been examined
for legitimacy in light of the Incompleteness Theorem. The Halting Problem
belongs in this class. It is now known to be unprovable in principle, and
this determination is a result of the Incompleteness Theorem.
<< There isn't a magic contamination that means that their "proof" will
*look* like a proof, but mysteriously fail to be correct just because P
"halts undecidably". Either they have a real proof (which, given my
premises, they don't) or a proof-with-a-flaw. The flaw is findable: some
axiom or step fails. >>
I *think* you are saying that a proof about a program is still valid, even
if the program itself is proven to be subject to the insoluble halting
problem.
I think the halting criterion issue taints other proofs about the behavior
of an algorithm. If I say that an algorithm always provides the square root
of its argument, but the algorithm cannot be proven to halt, that calls the
primary proof into question as well.
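To make that concrete with a toy: here is an integer square root where the
two obligations visibly come apart. The loop invariant gives correctness
only *if* the loop exits; termination is a separate argument (easy here, not
in general):

```c
/* Partial versus total correctness, made concrete (a toy example).
   The invariant r*r <= n guarantees that IF the loop exits, r is the
   integer square root: r*r <= n < (r+1)*(r+1). Termination is a
   separate obligation -- here r increases each pass and is bounded,
   so it halts, but for subtler algorithms that second proof may be
   the part that is out of reach. Assumes n is small enough that
   (r+1)*(r+1) does not overflow. */
static unsigned isqrt(unsigned n)
{
    unsigned r = 0;
    while ((r + 1) * (r + 1) <= n)
        r++;
    return r;
}
```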
<< [Chris] However, a proof may not exist. In practice this is no different
from "a proof exists, but you haven't the resources to find it".
[Paul] No. If you don't have the resources to find a proof, the proof may
not exist.
[Chris] "However, a proof may not exist". That's what I said. >>
I am objecting to the specific line "In practice this is no different from
'a proof exists, but you haven't the resources to find it'." I think
asserting that a proof exists but is too difficult to uncover is quite
different than acknowledging that no proof exists in principle, as in the
subset of logical statements invalidated by the Theorem.
But, in reading more of your message, I see we are actually probably in
agreement on this point.
It may appear that the halting problem, Gödel et al., and the like are
comparatively unimportant when held up against other, seemingly more concrete issues like human
error in code writing. But this issue certainly does come up dramatically in
efforts to formalize computer language design and programming practice. It
is generally recognized to pose an upper limit on complexity in compilers
and programs beyond which one cannot assert anything with complete
assurance.
I know how bleak that sounds, but I think it reflects the current attitude
in computer science.
Off our subject, but I think you construct your arguments well. I think it
may show your preference for light over heat, a preference I share.
--
Paul Lutus
www.arachnoid.com
Chris Dollin wrote in message <3773B1B2...@hpjavaux.cup.hp.com>...
<snip>
If the halting problem places an outer bound on analytical certainty as it
applies to algorithms and programs, it influences the entire field. It is
not an exaggeration to say it prevents the rigorous formalization of
algorithm design above a certain complexity level.
It may do to computer science what the fate of Russell & Whitehead's project
did to the formalization of mathematics, and for precisely the same reason
(disputes aside about the "real" cause of PM's downfall).
<< I think you may have misinterpreted the last-quoted sentence: 'In
practice this is no different from "a proof exists, but you haven't the
resources to find it".' >>
I think you are right -- I just wanted to make sure no one thought the two
phrases I placed in contrast were equivalent. It appears the two may not
have reflected Chris' meaning at all.
<<
There are four possible things we can say about a statement X:
1. We can prove X
2. We can disprove X
3. We know that X is undecidable
4. We don't know which of 1, 2 or 3 applies.
Chris is commenting that, in all of the cases (2), (3) and (4), we can't
rely on the truth of X. In practical terms they are not all that different,
because in all three cases you shouldn't use X if you're trying to build a
reliable system.
>>
But there is an important difference here. Item (3) requires a prudent
granting agency to stop funding work on a proof. None of the others sends
this clear signal. In "the big picture," Item (3) is a show-stopper.
--
Paul Lutus
www.arachnoid.com
Richard Stamp wrote in message <7l0fau$rpe$1...@soap.pipex.net>...
>Paul Lutus wrote...
>>"Shown to halt" is a long way from "proven to halt" in the scenario you are
>>describing. The first may be a practical solution, but my concern is someone
>>may think it is a proof, and load a bunch of children onto a ski lift, or
>>expose some "clients" to a radiation therapy machine, on the strength of a
>>mistaken assumption. I am not talking about miracles here, I am talking
>>about the issue of informed consent. People should know, in a general way,
>>what limitations exist in software. Programmers should certainly know.
>
>So if I understand this correctly, you're commenting that "proofs" might
>turn out to be unreliable because we misunderstood the system we are
>applying them to. In particular, if we prove something based on the
>semantics of a particular language, we might come unstuck because a compiler
>bug means that we're actually working with a subtly different language.
>
>This is a good point and highlights a limitation of formal correctness
>proofs. Indeed, it highlights a problem in software engineering in general.
>However, it has nothing to do with the halting problem.
>
>><< In other words, if a proof exists, it is not suspect [at least not for
>>Goedel reasons]. However, a proof may not exist. In practice this is no
>>different from "a proof exists, but you haven't the resources to find it".
>>>>
>>
>>No. If you don't have the resources to find a proof, the proof may not
>>exist. It cannot be assumed to exist as you imply here. Nothing at all may
>>be assumed about it.
>
>Chris implied no such thing. He explicitly said 'A proof may not exist'.
>
>I think you may have misinterpreted the last-quoted sentence: 'In practice
>this is no different from "a proof exists, but you haven't the resources to
>Gaia dhuit,
>
>(Sorry for the delay in getting back to this one)
>
>Lawrence Kirby dropped 2 cents in the slot of <comp.lang.c>
...
>>I agree but excellence in communicability needs to be objectively based
>>or else two arbitrary people will never be able to communicate optimally.
>
>In general your use of "never" is obviously wrong,
> and we need look no further than comp.lang.c for proof.
>The more two arbitrary posters agree on the (objective) Standard
> as the basis for a discussion about c code,
> the better their communication will be,
> but it is still possible for them
> to communicate excellently without it.
But not perfectly. Remember, we are talking about perfection here.
>In particular (where code is the form of communication)
> I feel that it is also wrong :
>What is being communicated (between people) by code
> is an idea
> which is always a subjective thing.
It is a means of solving a problem. Certainly code can be used to
communicate ideas, but for perfection we need to show that it is
something more than that.
>The communication of that idea need not always be hampered by
> the lack of an objective basis,
> eg with a variable that is to be used as an index
> I would use declaration of "int i;"
> The majority of c programmers will understand the intent
> and as such this will count as good communication.
>There is no objective basis for the "i == index" connection.
Except possibly from similar use in mathematics, e.g. in sigma notation.
>Note - my only problem to that sentence is the "never",
> because otherwise you seem to be agreeing with me :
> the communication should be "optimal",
> which is a form of excellence, not perfection.
I suppose the question here is whether showing that there is nothing
better is enough to show perfection.
>>That may be an acceptable state of affairs for a natural language,
>>but not for a technical language when perfection is the goal.
>>
>Perfection is only the goal when you beg the question of
> whether it is possible.
Certainly perfection may not be possible. However, there's no need to
assume that without foundation.
>>
>>Are you talking about aesthetics or something more?
>>
>Yes - aesthetics is part of a complete judgement,
> but the issue to me is more down to earth :
>
>The search for "perfect code" is futile, since there is no such thing.
> There will always be some way to find some fault with code,
> because it (subjectively) expresses (subjective) ideas.
If that is true it may be indicative of a failing in the language.
C is probably not a good example since it has never striven for
perfection in this sense.
>It is worse than futile, it is dangerous,
> because it discourages the notion that
> "My code will always be wrong in some way".
> And that has always seemed to me the best attitude for a coder
You can look at it another way: perfection is a goal. While a programmer
may never achieve it, it is a good thing to aim for. It is
certainly foolish to presume you have reached it. I posted a program
to be considered by the newsgroup for "perfection". I've been here far too
long to believe for an instant that the newsgroup would agree with
that. :-) I'd be very disappointed if it did.
...
>>>> We must try
>>>>to create a framework for judging perfection that is totally objective.
>>>
>And what is "judging" if not "forming an opinion about" ?
> And that opinion is subjective.
>
>Even if the framework/criteria you use are objective,
> the closeness of the code to the criteria is subjective.
That depends on how the criteria are stated.
>And, what the hell, may as well come down off the fence on this one :
> If your framework truly is objective,
> then you have no way of appreciating that fact,
> since your very appreciation is subjective.
If the framework is objective it should be possible to verify it objectively.
>>>Didja ever see some code and say "That's nice" ?
>>
>>Surely, but that's an entirely different set of criteria to saying
>>"That's perfect".
>Exactly -
> if you *narrow* your criteria sufficiently
> you can judge some code as being perfect.
>
>But if you use the full set of criteria,
> which must include all the subjective criteria,
> then excellence is the best judgement you can make.
Which full set of criteria?