Julio Di Egidio wrote:
> The diagonalisation of a standard list of omega
> sequences is not an omega sequence, it is a
> standard sequence. (No-QED. E.g., as you know,
> "2+2=4 hence the moon is made of cheese" is an
> invalid argument, not an inconsistency in arithmetic.)
I don't use the terms "standard list" or "omega sequence".
I use the term "omega word", which denotes an element of
a set; call this set Omega.
------------------------ Math ----------------------
And I showed different ways of enumerating this
set Omega, leading to the diagonalization argument.
The set Omega is:
Omega = { alfa beta^w | alfa is a finite word,
beta is a finite word }
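One way such an enumeration could be sketched in Prolog (a minimal sketch over the binary alphabet only; the predicate names bit/1 and omega_enum/1 are mine, not from the earlier posts):

```prolog
% Fair enumeration of omega words alfa beta^w over the
% binary alphabet: enumerate the total digit budget N,
% then all splits of N digits into alfa and a nonempty beta.
bit('0').
bit('1').

omega_enum(omega(0, Alfa, Beta)) :-
    length(Cs, _),              % N = 0, 1, 2, ...
    append(As, [B|Bs], Cs),     % split; beta gets at least one digit
    maplist(bit, As),
    maplist(bit, [B|Bs]),
    atom_chars(Alfa, As),
    atom_chars(Beta, [B|Bs]).
```

On backtracking this yields omega(0,'','0'), omega(0,'','1'), then the pairs with a total of two digits, and so on, so every alfa/beta pair is reached after finitely many steps.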
This enumeration is already redundant, even if we don't
care about arithmetic: there is not only arithmetic
redundancy but also syntactic redundancy. The enumeration
contains unnormalized omega words, i.e. words that are
syntactically unnormalized. These include cases where
alfa can be shortened when beta is rolled, and cases
where beta can be shortened when beta is unduplicated.
For example:
0.1 11^w
The alfa word is '1', and the beta word is '11'.
The omega word is '1' '11'^w.
Its normalized form is:
0.1^w
In the normalized form the alfa word is '', and the
beta word is '1'. The omega word is '' '1'^w. The
above normalization from 0.1 11^w to 0.1^w is
syntactic. Arithmetic normalization is when we, for
example, go from:
0.9^w
To:
1.0^w
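The syntactic part of such a normalization might be sketched in Prolog like this (an illustration only, assuming the omega(Integer,Alfa,Beta) encoding with digit atoms; the helper names beta_unduplicate/2, repeat_atom/3, alfa_roll/4 and omega_syn_norm/2 are mine):

```prolog
% Syntactic normalization sketch for omega(Integer, Alfa, Beta).

% If beta is a repetition of a shorter core, use the core:
% '11' -> '1', '123123' -> '123'.
beta_unduplicate(Beta, Core) :-
    atom_length(Beta, N),
    between(1, N, M),
    0 is N mod M,
    sub_atom(Beta, 0, M, _, Core),
    K is N // M,
    repeat_atom(Core, K, Beta), !.

repeat_atom(_, 0, '').
repeat_atom(A, K, R) :-
    K > 0, K1 is K-1,
    repeat_atom(A, K1, R1),
    atom_concat(A, R1, R).

% While alfa's last digit equals beta's last digit,
% drop it from alfa and roll beta right: '1','11' -> '','11'.
alfa_roll(Alfa, Beta, Alfa2, Beta2) :-
    Alfa \== '',
    sub_atom(Alfa, _, 1, 0, D),
    sub_atom(Beta, _, 1, 0, D), !,
    sub_atom(Alfa, 0, _, 1, Alfa1),
    sub_atom(Beta, 0, _, 1, Init),
    atom_concat(D, Init, Beta1),
    alfa_roll(Alfa1, Beta1, Alfa2, Beta2).
alfa_roll(Alfa, Beta, Alfa, Beta).

omega_syn_norm(omega(I, A, B), omega(I, A2, B2)) :-
    alfa_roll(A, B, A2, B1),
    beta_unduplicate(B1, B2).
```

For instance ?- omega_syn_norm(omega(0,'1','11'), X). gives X = omega(0,'','1'), which is the normalization of 0.1 11^w to 0.1^w from above.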
We will need both in a library providing arithmetic.
In diagonalization we show that there is an infinite
sequence, namely a function f : N -> R, which is not
an element of Omega. That's all. Not very exciting.
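The diagonal step itself could be illustrated in Prolog over a finite prefix of an enumeration (a sketch only; actual diagonalization runs over the whole infinite enumeration, and the names omega_digit/3 and diagonal/2 are mine):

```prolog
% I-th fractional digit (0-based) of an omega word
% in the omega(Integer, Alfa, Beta) encoding.
omega_digit(omega(_, Alfa, Beta), I, D) :-
    atom_length(Alfa, LA),
    (   I < LA
    ->  sub_atom(Alfa, I, 1, _, D)
    ;   atom_length(Beta, LB),
        J is (I - LA) mod LB,
        sub_atom(Beta, J, 1, _, D)
    ).

% Flip the I-th digit of the I-th word: the resulting
% digit sequence differs from every listed word.
diagonal(Words, Digits) :-
    findall(D2,
            (   nth0(I, Words, W),
                omega_digit(W, I, D),
                (   D == '5' -> D2 = '4' ; D2 = '5' )
            ),
            Digits).
```

For example, ?- diagonal([omega(0,'','1'), omega(0,'5','5')], Ds). gives Ds = ['5','4'], a sequence that differs from the first word in digit 0 and from the second in digit 1.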
------------------------ Prolog ----------------------
BTW: The Prolog program could include a normalization,
as follows:
omega_norm(X,Y) :- omega_add(X,0,Y).
But I guess a better idea is to implement omega_norm/2
directly, and change the signature of the predicates:
% Omega = omega(Integer,Atom,Atom)
% NormOmega is the subset of Omega consisting of
% those omega words that are syntactically normalized
% and arithmetically normalized.
Then go on and implement the predicates:
% omega_norm(+Omega, -NormOmega)
% omega_add(+NormOmega, +NormOmega, -NormOmega)
% omega_sub(+NormOmega, +NormOmega, -NormOmega)
% omega_mul(+NormOmega, +NormOmega, -NormOmega)
% omega_div(+NormOmega, +NormOmega, -NormOmega)
% omega_compare(-Result, +NormOmega, +NormOmega)
But it could also be that the predicates are
somewhat agnostic about whether the input is really
normalized. So probably the following would also work:
% omega_add(+Omega, +Omega, -NormOmega)
% omega_sub(+Omega, +Omega, -NormOmega)
% omega_mul(+Omega, +Omega, -NormOmega)
% omega_div(+Omega, +Omega, -NormOmega)
% omega_compare(-Result, +Omega, +Omega)
But with the latter interface, you are not allowed to
take the following shortcut:
omega_add(0, O, O).
since this would break the protocol that omega_add
returns a normalized omega word.
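To see why, consider what such a shortcut clause would do with an unnormalized input (a hypothetical clause; writing 0 as omega(0,'','0') in the omega(Integer,Alfa,Beta) encoding is my assumption):

```prolog
% The tempting shortcut clause, with 0 written as an omega word:
omega_add(omega(0, '', '0'), O, O).

% With an unnormalized second argument it just passes
% the problem through:
% ?- omega_add(omega(0,'','0'), omega(0,'1','11'), R).
% R = omega(0, '1', '11').     % 0.1 11^w, not in NormOmega
```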
So there are a lot of pros and cons when designing
a library for this, to show that we can compute
with omega words. But the algorithm I gave for addition
in a previous post should work, as far as I know.
Maybe one should produce an Isabelle/HOL verification or
some such. Well, this is also a funny idea.
Bye