
A qn on primitive recursive functions


Peter Smith

Aug 31, 2004, 4:59:30 AM
The following is true:

There is no effective way of deciding, of some arbitrary primitive
recursive function f(x), whether there are values x such that
f(x) = 0.

You can prove that readily if you already know about Turing machines and
halting problems, or already know that the total recursive functions are
not effectively enumerable [for if we could effectively tell which p.r.
functions are regular, we could effectively enumerate the total
recursive functions, because we could tell which recipes for functions
involved regular minimization].

But how about a more direct proof that doesn't go via results about
Turing machines or recursive functions?

Ideally, I'd like a proof that could be presented to students early on
(just after the diagonal proof that there are computable but not p.r.
functions and before minimization, recursive functions or Turing
machines have entered the scene). Any offers? (And I stand prepared to
kick myself for being really dim about this ....).
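[For concreteness, the "recipes" for p.r. functions mentioned above can be
made tangible with a tiny evaluator. The following is only an illustrative
sketch in Python; the tag names (Z, S, PROJ, COMP, PREC) are my own
convention, not standard notation.]

```python
# A sketch of p.r. "recipes" as nested tuples, evaluated by a tiny
# interpreter. Tag names are my own convention, not standard.

def ev(recipe, args):
    tag = recipe[0]
    if tag == "Z":                      # constant zero function
        return 0
    if tag == "S":                      # successor
        return args[0] + 1
    if tag == "PROJ":                   # projection onto argument i
        return args[recipe[1]]
    if tag == "COMP":                   # composition: f(g1(args), ..., gk(args))
        f, gs = recipe[1], recipe[2]
        return ev(f, [ev(g, args) for g in gs])
    if tag == "PREC":                   # primitive recursion on the last argument
        base, step = recipe[1], recipe[2]
        *xs, n = args
        acc = ev(base, xs)
        for i in range(n):              # bounded loop: always terminates
            acc = ev(step, xs + [i, acc])
        return acc
    raise ValueError(tag)

# Example: predecessor, with pred(0) = 0
pred = ("PREC", ("Z",), ("PROJ", 0))
assert ev(pred, [7]) == 6
```

The question is then: given such a recipe for f, can we decide whether
some input evaluates to 0?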

Cheers

Peter S.

peter_douglass

Aug 31, 2004, 9:08:13 AM
"Peter Smith" <ps...@cam.ac.uk> wrote in message
news:ps218-5624F5....@pegasus.csx.cam.ac.uk...

> The following is true:

> There is no effective way of deciding, of some arbitrary primitive
> recursive function f(x), whether there are values x such that
> f(x) = 0.

> You can prove that readily if you already know about Turing machines and
> halting problems, or already know that the total recursive functions are
> not effectively enumerable [for if we can effectively tell which p.r.
> functions are regular, we could effectively enumerate the total
> recursive functions, because we could tell which recipes for functions
> involved regular minimization].
>
> But how about a more direct proof that doesn't go via results about
> Turing machines or recursive functions?

This request seems to come up fairly regularly. I have appended
such a proof, which does not depend upon Turing machines, at the end of
this post. The proof is also "direct" in another sense, in that it does
not proceed by reductio ad absurdum.

The definitions that I give before the proof are somewhat "long".
IMHO, this helps to clarify exactly what is needed to make the
proof go through. If you are not concerned with water-tightness,
you may get away with some unexpressed assumptions. Suggestions
for making the proof shorter, without sacrificing formality,
are welcome.

> Ideally, I'd like a proof that could be presented to students early on
> (just after the diagonal proof that there are computable but not p.r.
> functions and before minimization, recursive functions or Turing
> machines have entered the scene). Any offers? (And I stand prepared to
> kick myself for being really dim about this ....).

I would definitely like some feedback on whether you think this
proof is understandable to beginning students. I would also
welcome any critiques from other angles.

** An abstract proof of the halting theorem **

Assume as given the following:

A Countable Set of Machines M;
A Countable Set of Values V (not containing _|_);

A Predicate which we shall call Test
Test :: V -> Boolean
satisfying the requirement that
Exists v in V such that Test(v) is True, and
Exists v' in V such that Test(v') is False.

An injection from Machines to Values written #:
# :: M -> V ;

Pairing:
<_,_> :: V x V -> V ;

A Function Eval:
Eval :: M x V -> V + _|_
Eval gives the "result" of running machine m in M on the input x.
If the machine halts, the result is in V. If the machine does not
halt, the result is _|_ ;

(Rule of Composition)
For every pair of Machines m1 and m2 in M we can construct
a machine (m1;m2) which satisfies the following:
If m1 halts on input x, then
Eval( (m1;m2),x) = Eval( m2, Eval(m1,x) )
Otherwise
Eval( (m1;m2),x) = _|_

(Rule of Conditional)
For every pair of Machines m1 and m2 in M we can construct
a machine (m1 | m2) which satisfies the following:
If Test(x) then
Eval( (m1 | m2),x) = Eval( m1, x)
else
Eval( (m1 | m2),x) = Eval( m2, x) ;

(Existence of a non-terminating Machine)
There exists a Machine m in M such that
forall x, Eval(m,x) = _|_
We refer to a machine satisfying
this property as LoopForever ;

(Existence of Double Machine)
There is a machine m in M
such that
forall x, Eval(m,x) = <x,x>
We refer to such a machine as Double ;

(Existence of a Machine invariantly returning False values)
There is a machine m in M
such that
forall x, Eval(m,x) != _|_
and
forall x, Test(Eval(m,x)) = False ;
We refer to such a machine as False ;
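[Editorial aside: the assumed structure above is easily realizable. Here
is one minimal model sketched in Python, my own encoding and not part of
the proof: values are strings, and a sentinel BOTTOM stands in for _|_,
so "non-termination" is merely symbolic.]

```python
# A toy model of the abstract machine axioms: machines are Python
# functions V -> V-or-BOTTOM, with BOTTOM a sentinel for divergence.

BOTTOM = object()

def Test(v):                 # any predicate taking both truth values will do
    return v == "T"

def compose(m1, m2):         # Rule of Composition: (m1 ; m2)
    def m(x):
        y = m1(x)
        return BOTTOM if y is BOTTOM else m2(y)
    return m

def cond(m1, m2):            # Rule of Conditional: (m1 | m2), tests the input
    def m(x):
        return m1(x) if Test(x) else m2(x)
    return m

def loop_forever(x):         # the non-terminating machine
    return BOTTOM

def double(x):               # Double: x |-> <x, x>, pairing via tuples
    return (x, x)

def false_machine(x):        # always halts, always with a Test-false value
    return "F"
```

A quick check: `cond(false_machine, loop_forever)` halts exactly on
inputs whose Test is True, as the Rule of Conditional requires.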


Definitions:

(Totality) Machine m in M is total
iff forall x, Eval(m,x) != _|_

(Partiality) Machine m in M is partial
iff There exists an x such that Eval(m,x) = _|_

(Halts)
Halts :: M x V -> Bool .
Halts(m,x) = ( Eval(m,x) != _|_ )
(Note that Halts(m,x) is a well defined function.
It is not the hypothetical Turing machine "Halts" that
is proven not to exist by reductio ad absurdum in most
popular proofs of the Halting Theorem.)

(Implements)
Machine m in M partially implements Halts
iff forall n in M and y in V,
Eval(m,<#n,y>) != _|_ => Test(Eval(m,<#n,y>)) = Halts(n,y)
(That is, if m halts on input <#n,y>, then the value returned
encodes, via Test, the Boolean Halts(n,y).)

CLAIM:
Our claim is that for any Machine m in M,
either m does not partially implement Halts, or m is not total.
Thus, there is no Machine in M which totally implements
Halts. (This is a restatement of the Halting Theorem.)

PROOF:

Construct the Machine m' given by

m' = (Double ; m ; (LoopForever | False) )

(Note the order of the branches: when Test of m's output is True, the
conditional runs LoopForever; otherwise it runs False, which halts.
Thus m' does the opposite of whatever m predicts about its input.)

Lemma 1:
forall x,
!Halts(m',x) =>
!Halts((Double ; m), x) \/
(Halts((Double ; m), x) /\ Test(Eval((Double ; m), x)))
(proof omitted: m' diverges exactly when (Double ; m) diverges, or
when it halts and the conditional takes the LoopForever branch)

Let z = Eval(m',#m')

By cases,
z is an element of V or
z = _|_

Case 1. z is an element of V

Eval(m',#m') != _|_ =>
Halts(m',#m')

Halts(m',x) =>
!Test(Eval((Double ; m), x))
(m' can only halt through the False branch of the conditional,
which is taken just when Test fails)

Halts(m',#m') =>
!Test(Eval((Double ; m), #m')) =>
!Test(Eval(m,<#m',#m'>))

Halts(m',#m') /\ !Test(Eval(m,<#m',#m'>)) =>
m does not partially implement Halts
(a partial implementation would return a value v with
Test(v) = Halts(m',#m') = True)

Case 2. z = _|_

By Lemma 1
!Halts((Double ; m), #m') \/
(Halts((Double ; m), #m') /\ Test(Eval((Double ; m), #m')))

Case 2a. z = _|_ and !Halts((Double ; m), #m')
!Halts((Double ; m), #m') =>
Eval(m,<#m',#m'>) = _|_ =>
m is not total

Case 2b. z = _|_ and Halts((Double ; m), #m') and
Test(Eval((Double ; m), #m'))
z = _|_ =>
!Halts(m',#m')

!Halts(m',#m') /\ Test(Eval(m,<#m',#m'>)) =>
m does not partially implement Halts
(a partial implementation would return a value v with
Test(v) = Halts(m',#m') = False)

QED
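[Editorial aside: the diagonal construction can be run. Below is a
self-contained Python sketch in my own encoding (BOTTOM for divergence,
a registry playing the role of #): given ANY total function h claiming
to decide halting, build m' so that it does the opposite of h's
prediction, and exhibit an input on which h is wrong.]

```python
# Sketch of the diagonal machine m'. Names and encoding are mine;
# "divergence" is only symbolic, via the BOTTOM sentinel.

BOTTOM = object()
REGISTRY = {}                          # code (a value) -> machine, our #

def Eval(m, x):
    return m(x)

def halts(code, x):                    # the true, well-defined Halts
    return Eval(REGISTRY[code], x) is not BOTTOM

def diagonal(h):
    """Given any total claimed decider h(code, x) -> bool, build m'
    and return an input on which h is wrong about halting."""
    code = "m'"
    def m_prime(x):
        # run h on <x, x>, then do the opposite of its prediction
        if h(x, x):
            return BOTTOM              # h says "halts"    -> diverge
        return "F"                     # h says "diverges" -> halt
    REGISTRY[code] = m_prime
    return code

# Example: a (necessarily wrong) decider that always answers "halts".
code = diagonal(lambda c, x: True)
assert halts(code, code) is False      # h said True; m' in fact diverges
```

The same check succeeds for any total h you substitute: h's answer on
<#m', #m'> always disagrees with what m' actually does.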


peter_douglass

Aug 31, 2004, 9:17:30 AM
"peter_douglass" <bai...@gis.net> wrote in message
news:1L_Yc.99200$mD.71079@attbi_s02...

> "Peter Smith" <ps...@cam.ac.uk> wrote in message
> news:ps218-5624F5....@pegasus.csx.cam.ac.uk...
>
> > The following is true:
>
> > There is no effective way of deciding, of some arbitrary primitive
> > recursive function f(x), whether there are values x such that
> > f(x) = 0.
>
> > You can prove that readily if you already know about Turing machines and
> > halting problems, or already know that the total recursive functions are
> > not effectively enumerable [for if we can effectively tell which p.r.
> > functions are regular, we could effectively enumerate the total
> > recursive functions, because we could tell which recipes for functions
> > involved regular minimization].

> > But how about a more direct proof that doesn't go via results about
> > Turing machines or recursive functions?

Addendum: If you aren't interested in machines per se, but just
primitive recursive functions, replace "machine" everywhere in the
proof by "partial function"; i.e., M is a set of partial functions,
m is an element of that set, etc.

--PeterD


peter_douglass

Aug 31, 2004, 7:08:53 PM
Responding to an email request for clarification:

I am responding to the newsgroups, as this discussion
may be of interest to others.

> > > The following is true:
> >
> > > There is no effective way of deciding, of some arbitrary primitive
> > > recursive function f(x), whether there are values x such that
> > > f(x) = 0.
> >
> > > You can prove that readily if you already know about Turing machines and
> > > halting problems, or already know that the total recursive functions are
> > > not effectively enumerable [for if we can effectively tell which p.r.
> > > functions are regular, we could effectively enumerate the total
> > > recursive functions, because we could tell which recipes for functions
> > > involved regular minimization].
>
> > > But how about a more direct proof that doesn't go via results about
> > > Turing machines or recursive functions?

The Church-Turing Hypothesis says that every function that can be
computed by a *practical* algorithm is recursive (in a technical sense
described below). Another way to say that it is recursive is to say that
it may be computed by a Turing Machine. At this point, the
Church-Turing Hypothesis is just that, a hypothesis, although all of
the empirical evidence we have seen so far supports this hypothesis.
Thus, to "prove" that there is no "effective" way of deciding something,
we must take the position that a procedure provides an "effective"
way of deciding a question if and only if that procedure defines
a recursive function. I don't think there is any way around making
this assumption.

So, if you are looking for a proof that avoids recursive functions
(or Turing Machines) even in the definition of what it means to be
"an effective way of deciding", then I'm not convinced that it can
be done, and I apologize that my previous emails didn't address this
point.

Given the above, is the following a reasonable translation of the
claim for which we seek a proof?

Let # be an injective function from primitive recursive functions
to naturals. When we write #g, we assert that g is a primitive
recursive function, and we say that #g is a representation of that
function.

Claim: There exists no total recursive function F such that

F(#g) = True iff there exists an x such that g(x) = 0
and F(#g) = False iff forall x, g(x) != 0

(if there does not exist a g such that n = #g, then F(n) may, or
may not, be left unconstrained, depending upon how we wish to set
up the problem)
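[Editorial aside: the easy half of this claim is that "there exists an x
such that g(x) = 0" is semi-decidable by unbounded search, i.e. the
minimization operator; the hard half is that no *total* F decides it.
The following Python sketch of the search is my own illustration; the
`bound` parameter is an artificial cap so the sketch itself terminates.]

```python
# Unbounded search (mu-operator) for a zero of g over the naturals.
# The search halts exactly when a zero exists; with no zero it would
# run forever, hence the optional cap for demonstration purposes.

def mu_zero(g, bound=None):
    x = 0
    while bound is None or x < bound:
        if g(x) == 0:
            return x          # least witness found: search halts
        x += 1
    return None               # cap reached: inconclusive, NOT a "no"

assert mu_zero(lambda x: abs(x - 3)) == 3          # a zero exists
assert mu_zero(lambda x: x + 1, bound=1000) is None  # inconclusive
```

Note the asymmetry: a "yes" is eventually certified by a witness, but no
finite amount of searching certifies a "no", which is exactly what a
total F would have to do.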

So, please clarify if this is an acceptable translation, or whether
you see the problem differently.

--PeterD


H. Enderton

Aug 31, 2004, 9:11:47 PM
Peter Smith <ps...@cam.ac.uk> wrote:
> There is no effective way of deciding, of some arbitrary primitive
> recursive function f(x), whether there are values x such that
> f(x) = 0.
> But how about a more direct proof that doesn't go via results about
> Turing machines or recursive functions?

The statement you seek to prove immediately implies that some r.e.
sets are not recursive, so you will need to get your hands dirty
to some extent.

And why avoid recursive functions?

--Herb Enderton

P.S. I dropped sci.math; bandwidth doesn't grow on trees.


Peter Smith

Sep 1, 2004, 5:28:14 AM
HE "... so you will need to get your hands dirty to some extent."

PD "if you are looking for a proof that avoids recursive functions
(or Turing Machines) even in the definition of what it means to be
"an effective way of deciding", then I'm not convinced that it can
be done"

Yes, my intentionally rather open-ended question was prompted by
wondering in effect exactly HOW dirty we have to get our hands WHEN.

So consider the following class-room situation in a course where we are
discussing recursive functions BEFORE we meet Turing machines, register
machines, etc.

1) We prove by diagonalizing out that there are computable but not
primitive recursive (p.r.) functions. This proof doesn't require any
close analysis of the idea of computability -- it just requires us to
recognize the diagonal function as intuitively computable.

2) We explain why the p.r. functions aren't all the computable functions
by pointing out that we allow intuitive computations to involve
unbounded searches. We motivate expanding the class of p.r. functions by
allowing use of the minimization operator, to get recursive functions.
We note that recursive functions won't be total unless minimization is
applied to "regular" functions. We also [at this early point in the
course, remember, before we've heard of Turing machines, etc.] speculate
that the total recursive functions are ALL the intuitively computable
total functions.

3) Bright and Persistent Student now asks: "Why can't we run a parallel
argument to (1) and diagonalize out of the class of total recursive
functions and get computable but not recursive functions?" Because we
can't effectively enumerate the total recursive functions. BPS: "Why
not?" Because, for a start, we'd have to be able to effectively tell in
which recipes -- from an effectively enumerable list of potential
recipes for functions -- the minimization operator is applied to a
"regular" p.r. function, and we can't do that. BPS: "Why not?" Because
["The Claim"] we can't effectively tell if an arbitrary p.r. function is
regular. BPS: "Why not ...?"

4) Given that the speculation that total recursive functions are ALL the
intuitively computable total functions is, at this point in the classroom
dialectic, just that, a speculation, we would LIKE now to be able to
deliver an argument for The Claim that -- like the argument at step (1)
-- doesn't depend (or at least, doesn't depend too overtly!) on the
Church-Turing Hypothesis. Otherwise our Bright and Persistent Student
may think she smells a rat! :-)) For she'll say "I wanted an argument
for The Claim so I can block one potential line of argument against the
Church-Turing Hypothesis that the total recursive functions are all the
total computable functions. You don't want me to go round in circles do
you?".

OK, at this point Harassed Lecturer (or at least THIS Harassed
Lecturer in the past) says in effect "Bear with me, we are going to be
looking at lots more stuff on ideas of computation, and how they are all
equivalent, and then you too, BPS, will get to accept the Church-Turing
Hypothesis which you can then use to get neat informal proofs of all
kinds of things, including The Claim". And BPS looks sceptical and
mutters about rigour not being what it was ....

How does HL do a bit better? :-))))

peter_douglass

Sep 2, 2004, 10:42:54 AM
"Peter Smith" <ps...@cam.ac.uk> wrote in message
news:ps218-F7E4EF....@pegasus.csx.cam.ac.uk...

> So consider the following class-room situation in a course where we are
> discussing recursive functions BEFORE we meet Turing machines, register
> machines, etc.

> 1) We prove by diagonalizing out that there are computable but not primitive
> recursive (p.r.) functions. This proof doesn't require any close
> analysis of the idea of computability -- it just requires us to
> recognize the diagonal function as intuitively computable.

You could generalize this as follows. For any set of functions which
can be "diagonalized", the diagonalization (and modification of each
position in the diagonal) constructs a new function which is not in the
original set. For example, an anti-diagonal of the set of primitive
recursive functions is not p.r.
At this point there is no need to say that these functions are necessarily
computable, because you are about to achieve that end in your step 2.

> 2) We explain why the p.r. functions aren't all the computable functions
> by pointing out that we allow intuitive computations to involve
> unbounded searches. We motivate expanding the class of p.r. functions by
> allowing use of the minimization operator, to get recursive functions.
> We note that recursive functions won't be total unless minimization is
> applied to "regular" functions. We also [at this early point in the
> course, remember, before we've heard of Turing machines, etc.] speculate
> that the total recursive functions are ALL the intuitively computable
> total functions.

OK

> 3) Bright and Persistent Student now asks: "Why can't we run a parallel
> argument to (1) and diagonalize out of the class of total recursive
> functions and get computable but not recursive functions". Because we
> can't effectively enumerate the total recursive functions. BPS: "Why
> not?" Because, for a start, we'd have to be able to effectively tell in
> which recipes -- from an effectively enumerable list of potential
> recipes for functions -- the minimization operator is applied to a
> "regular" p.r function, and we can't do that. BPS: "Why not?" Because
> ["The Claim"] we can't effectively tell if an arbitrary p.r. function is
> regular. BPS "Why not ...?".

If we have a set of all "computable" functions, for some definition of
"computable", then the diagonalization trick will necessarily generate
a function that is *not* computable. In your step one, we really didn't
know that the diagonal was *computable* until we added unbounded
search and partiality to p.r. functions to get general recursive functions.

If someone could add a new "realizable" method of constructing
functions to the set of methods used to construct general recursive
functions, and if one further could show that this method allowed the
construction of an anti-diagonal of the total recursive functions,
then one would have validly constructed computable functions that
are not recursive, thus disproving the Church-Turing Hypothesis.

You could allow for that possibility at this stage.

> 4) Given the speculation that total recursive functions are ALL the
> intuitively computable total functions is at this point in the classroom
> dialectic just that, a speculation, we would LIKE now to be able to
> deliver an argument for The Claim that -- like the argument at step (1)
> -- doesn't depend (or at least, doesn't depend too overtly!) on the
> Church-Turing Hypothesis. Otherwise our Bright and Persistent Student
> may think she smells a rat! :-)) For she'll say "I wanted an argument
> for The Claim so I can block one potential line of argument against the
> Church-Turing Hypothesis that the total recursive functions are all the
> total computable functions. You don't want me to go round in circles do
> you?".

I don't think you need to claim that the constructed diagonal of the
p.r. functions is "intuitively" computable in step 1. You *prove* that
there are non p.r. functions that are computable in step 2.
I *think* the problem with your student "smelling a rat" would go
away.

> OK, at this point Harrassed Lecturer (or at least THIS Harrassed
> Lecturer in the past) says in effect "Bear with me, we are going to be
> looking at lots more stuff on ideas of computation, and how they are all
> equivalent, and then you too, BPS, will get to accept the Church-Turing
> Hypothesis which you can then use to get neat informal proofs of all
> kinds of things, including The Claim". And BPS looks sceptical and
> mutters about rigour not being what it was ....

> How does HL do a bit better? :-))))

In my wild imagination, the thoughts above should allow your
class to progress step by step without rodent odor. I certainly
won't claim that I'm right about this, or that it is the best way
to teach the subject. But since you asked... ;-)

--PeterD


Peter Smith

Sep 3, 2004, 1:21:25 PM
In article <OjGZc.276893$eM2.54086@attbi_s51>,
"peter_douglass" <bai...@gis.net> wrote:

> In your step one, we really didn't
> know that the diagonal was *computable* until we added unbounded
> search and partiality to p.r. functions to get general recursive functions.

That seems odd to me. I thought the dialectical situation was this. We
have an informal conception of algorithmic computability and we are in
the business of regimenting/sharpening/explicating this notion. (The
Church-Turing thesis is the (perhaps surprising) claim that the good
ways of doing this all come to the same: the thesis presumes we do have
some intuitive conception to work with!)

BEFORE getting our hands on a sharp regimentation of the notion of
computability, relying only on a rough-and-ready intuitive explanation
of the idea of step-by-step calculation, we are prepared to agree that
the p.r. functions are computable AND their recipes are effectively
enumerable. And THAT is all we need to see to get the diagonal argument
going and get to see that there must be intuitively computable but non
p.r. functions.

So I guess I'm saying we know that, BEFORE we continue our exercise in
regimentation and argue that unbounded search is what we need to add
into the mix in order to be able to define a larger class of computable
functions than the primitive recursive ones.

Another case: would you want to say that we don't know that the
Ackermann/Péter function is computable "until we added unbounded search
and partiality to p.r. functions to get general recursive functions."??
That again just seems wrong to me. You can easily see that the
definition by double-recursion well defines a function which has a value
for all arguments, and you know the recipe for computing it. So you know
it is computable before you do e.g. the tricksy proof depending on
coding that the function is recursive.

Peter R. Douglass

Sep 3, 2004, 4:10:14 PM
"Peter Smith" <ps...@cam.ac.uk> wrote in message
news:ps218-762ECA....@pegasus.csx.cam.ac.uk...

> In article <OjGZc.276893$eM2.54086@attbi_s51>,
> "peter_douglass" <bai...@gis.net> wrote:

> > In your step one, we really didn't
> > know that the diagonal was *computable* until we added unbounded
> > search and partiality to p.r. functions to get general recursive
> > functions.

> That seems odd to me. I thought the dialectical situation was this. We
> have an informal conception of algorithmic computability and we are in
> the business of regimenting/sharpening/explicating this notion. (The
> Church-Turing thesis is the (perhaps surprising) claim that the good
> ways of doing this all come to the same: the thesis presumes we do have
> some intuitive conception to work with!)

Of course everyone is different, but I think the development of intuition
generally goes in the following steps.

1. Naively, one might believe that all functions N->N are computable.
2. Under assumption 1, one may wonder whether every function
may be expressed as a composition of a few basic functions.
3. We learn that some functions N->N are not computable
4. Knowing 3, we wonder whether every computable function
may be expressed as a composition of a few basic functions.
5. We try primitive recursive functions, and find that there are
computable functions that are not p.r.
6. We try general recursive functions, and find (empirically) that
these seem to include all computable functions.

> BEFORE getting our hands on a sharp regimentation of the notion of
> computability, relying only on a rough-and-ready intuitive explanation
> of the idea of step-by-step calculation, we are prepared to agree that
> the p.r. functions are computable AND their recipes are effectively
> enumerable. And THAT is all we need to see to get the diagonal argument
> going and get to see that there must be intuitively computable but non
> p.r. functions.

Again, I would argue that the naive intuition is *not* that p.r.
functions include all of the computable functions. Rather, I would
expect the naive intuition to be that adding new operations to p.r.
functions would allow a larger set of functions to be computed.

Thus, I wonder about the efficacy of arguing that the p.r. functions
don't encompass the computable functions until after it is shown that
there are some functions that are uncomputable.

> So I guess I'm saying we know that, BEFORE we continue our exercise in
> regimentation and argue that unbounded search is what we need to add
> into the mix in order to be able to define a larger class of computable
> functions than the primitive recursive ones.

> Another case: would you want to say that we don't know that the
> Ackermann/Péter function is computable "until we added unbounded search
> and partiality to p.r. functions to get general recursive functions."??

No. Once again, I think the development of intuition tends to go
in the opposite direction. Naively, one might think that all functions are
computable. The interesting result is that some functions
are not computable.

> That again just seems wrong to me. You can easily see that the
> definition by double-recursion well defines a function which has a value
> for all arguments, and you know the recipe for computing it. So you know
> it is computable before you do e.g. the tricksy proof depending on
> coding that the function is recursive.

Yup.

--PeterD


Peter Smith

Sep 3, 2004, 6:18:25 PM
In article <chajbt$a0t$1...@camelot.ccs.neu.edu>,
"Peter R. Douglass" <pet...@ccs.neu.edu> wrote:
> Of course everyone is different, but I think the development of intuition
> generally goes in the following steps.
>
> 1. Naively, one might believe that all functions N->N are computable.

Really??? -- surely not if we are working with the standard modern
conception of function. And then the naive belief won't survive
elementary cardinality considerations, before we ever get round to
defining recursive functions!


> Again, I would argue that the naive intuition is *not* that p.r. functions
> include all of the computable functions.

Sure: but nothing I said implied otherwise. What I was arguing is that
naive intuition will accept that p.r. functions are computable, and
accept that they are effectively enumerable, and hence naive intuition
can be readily tutored to accept the diagonal function as computable but
not p.r. -- and all that WITHOUT having yet encountered the notion of a
(general) recursive function.

peter_douglass

Sep 3, 2004, 8:45:39 PM

"Peter Smith" <ps...@cam.ac.uk> wrote in message
news:ps218-1ADDC0....@pegasus.csx.cam.ac.uk...

> In article <chajbt$a0t$1...@camelot.ccs.neu.edu>,
> "Peter R. Douglass" <pet...@ccs.neu.edu> wrote:

> > Of course everyone is different, but I think the development of
> > intuition generally goes in the following steps.

> > 1. Naively, one might believe that all functions N->N are computable.

> Really??? -- surely not if we are working with the standard modern
> conception of function. And then the naive belief won't survive
> elementary cardinality considerations, before we ever get round to
> defining recursive functions!

You are correct that naive belief wouldn't survive cardinality
considerations. I see that you are in the UK, and I don't know
how much maths background your students get before entering
a CS program. In the US, it is often the case that CS students
have limited math background. Undergraduates are quite likely
to be ignorant of Cantor's diagonalization, or the distinction
between countable and uncountable sets. Without exposure
to these, they are likely to have the intuition that
all infinite sets have the same cardinality. If all infinite sets
have the same cardinality, then there is no problem of the
infinite function space being larger than the infinite space of
finite expressions.

If your students already understand that the space of N->N
functions is uncountable, and hence that there exist N->N
functions which cannot be represented by finite expressions,
then more power to them! Also, if that is the case, then my
conjectures about their naive beliefs obviously don't hold.

> > Again, I would argue that the naive intuition is *not* that p.r.
> > functions include all of the computable functions.

> Sure: but nothing I said implied otherwise. What I was arguing is that
> naive intuition will accept that p.r. functions are computable,

yup. But I'm still not convinced that their intuition is significantly
sharpened by this. The same argument could be applied to any subset
of the p.r. functions. No-one would doubt that these are computable,
and diagonalization would give that there are computable functions
outside this (sub)set. Why pick p.r. functions?

> and accept that they are effectively enumerable, and hence naive intuition
> can be readily tutored to accept the diagonal function as computable but
> not p.r.

yup. Now I think I may understand the problem you are posing. An
anti-diagonal of the p.r. functions is computable. An anti-diagonal of
the general recursive functions is not. Why is that? (Forgive me if
I'm going way off course.)

We can effectively put the total p.r. functions in a list, because they
are all total, and we can sort them in lexicographic order of their
expressions. Because we can *effectively* place these functions in a
list, we can compute f(x) = M(x,x)+1, where M(x,x) is the value of the
x'th function applied to the value x.
The total general recursive functions cannot *effectively* be put into
a list. The expressions for general recursive functions include partial
functions, and we can't syntactically distinguish all of the partial
functions from the total functions. Thus, although cardinality says
we can put the total general recursive functions in a list, we don't
know how. Thus, we don't know how to compute f(x) = M(x,x)+1
as in the above case. This doesn't prove that f(x) is not computable,
but it casts doubt on the naive assumption that all diagonals are
computable -- thus giving motivation for a more in-depth analysis.
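[Editorial aside: the f(x) = M(x,x)+1 construction can be sketched
concretely. The family enumerated below (x -> x + n) is only my toy
stand-in for "p.r. expressions sorted lexicographically"; the point is
that f is computable whenever M is, yet f escapes the enumeration.]

```python
# Anti-diagonal f(x) = M(x,x) + 1 over an effectively enumerated
# family of total functions. The family here is a toy stand-in.

def M(n, x):                  # the n-th function in the enumeration, at x
    return x + n

def f(x):                     # anti-diagonal: differs from M(n, .) at x = n
    return M(x, x) + 1

# f is total and computable, yet not in the enumeration:
for n in range(100):
    assert f(n) != M(n, n)
```

With total general recursive functions no such *effective* M is
available, which is precisely where the parallel argument breaks down.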

I hope I got this right, and that this has helped.

--PeterD
