http://www.cs.brown.edu/people/pw/papers/bcj1.pdf
This paper brings up a possibly significant issue, which is the fact
that UTM's as formally defined don't contemplate the concept of
'interactive' input and output.
A couple of interesting observations about this paper: First, even if
UTM's can't express a computing agent in interaction with its
environment, they may still be able to simulate the entire *system* of
agents interacting with each other. The key is to realize that once a
UTM starts computing, it is a closed system informationally.
Real-world computer systems, on the other hand, almost always interact
with the outside world, receiving continuous input (mouse movements,
keyboard strokes), and providing output (screen, speakers).
Secondly, it seems to me that a Cellular Automaton *does* have the
power to capture the computational concepts the authors have in mind.
If we think of a computationally universal CA with boundaries acting
as inputs and outputs, it's pretty clear we can (given enough 'cycle
time' between input and output) emulate any possible series of wires
and gates.
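To make that concrete, here's a little Python sketch of the idea (entirely my own illustration, not taken from the paper): a 1-D CA running Rule 110, which is known to be computationally universal, with the leftmost cell clamped to an external input bit and the rightmost cell read as output after a fixed 'cycle time'.

```python
# Sketch: a 1-D cellular automaton (Rule 110) whose boundary cells
# act as input and output channels. Illustrative only.

RULE = 110  # Rule 110 is known to be computationally universal

def step(cells):
    """One synchronous CA update, with fixed 0 boundaries."""
    padded = [0] + cells + [0]
    return [(RULE >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
            for i in range(1, len(padded) - 1)]

def run(cells, inputs, cycle_time):
    """Clamp the left boundary to each input bit, give the CA
    `cycle_time` steps to react, then read the right boundary."""
    outputs = []
    for bit in inputs:
        for _ in range(cycle_time):
            cells[0] = bit         # external input enters at one boundary
            cells = step(cells)
        outputs.append(cells[-1])  # output leaves at the other boundary
    return outputs

out = run([0] * 32, inputs=[1, 0, 1, 1], cycle_time=8)
print(out)  # one output bit per input bit
```

The point is only structural: input and output live at the boundaries, and the 'cycle time' between them is an explicit resource.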
There are concepts even more fundamental than any particular CA. I
suspect that if we think of input and output as being connected by a
series of so-called 'Random Boolean Networks', or simply logic gates
with a built-in delay connected arbitrarily, we can simulate any
possible kind of input/output computation, again given that cycle time
is a resource that must be taken into account (ie reaction time of the
machine).
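A minimal sketch of that picture (all wiring and names here are my own assumptions): a random Boolean network updated synchronously, so every gate carries a built-in one-cycle delay, with one gate clamped as input and another read as output.

```python
# Sketch: a random Boolean network -- arbitrarily connected 2-input
# gates with a built-in unit delay (synchronous update). Illustrative.
import random

def make_network(n_gates, seed=0):
    """Each gate reads two random predecessors and applies a random
    2-input Boolean function, encoded as a 4-bit truth table."""
    rng = random.Random(seed)
    return [(rng.randrange(n_gates), rng.randrange(n_gates),
             rng.randrange(16)) for _ in range(n_gates)]

def tick(net, state, input_bit):
    """One cycle: every gate computes from the *previous* cycle's
    state (that's the delay), then gate 0 is clamped to the input."""
    new = [(table >> (state[a] * 2 + state[b])) & 1
           for a, b, table in net]
    new[0] = input_bit
    return new

net = make_network(64)
state = [0] * 64
for bit in [1, 1, 0, 1]:     # feed an input stream, one bit per cycle
    state = tick(net, state, bit)
output = state[-1]           # read gate 63 as the machine's output
print(output)
```

Reaction time shows up naturally: an input bit needs several ticks to propagate from gate 0 to whatever gate you designate as output.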
I think this may actually have some impact on discussions of AI. If
in fact UTM's are insufficiently powerful to model an interacting
agent, then any claimed evidence of their inability to do human-level
things becomes somewhat weaker. I doubt anyone who thinks seriously
about AI ever thought that such a machine wouldn't be able to interact
with its environment!
-dbm
Hi Dan,
That discussion on DP that you refer to above relates, in part,
to a very specific situation that I'll describe briefly below.
We have documented observations from bacterial colonies, whereby certain
sophisticated computations (not in a metaphoric sense) take place.
Careful analysis indicates that such computations cannot be
accounted for by the conventional UTM model. Prudence
requires that some other computational theory should be
proposed that would account for that kind of computation.
The crucial thing to keep in mind is that "natural bacterial
computation" is now reproducibly observable, and that (unlike
conventional computing machines and their (formal) languages)
bacteria are not obligated to follow Turing's conceptions,
however ingenious in their time...
QUESTION: Is there any reason to believe that Turing's
original ideas have been framed so as to encompass natural
interactive computation, such as performed by bacteria?
[Note that bacteria are not symbol-manipulators in any sense
that has been used from Hilbert thru Gödel to Turing -- very crucial
here!]
If the answer is negative, then Turing's models of computations
are ipso facto rendered irrelevant to this particular *natural*
computational situation.
This is an attempt to frame a paradoxical situation, where
attractive abstract ideas about "universal" computation that
held sway for some 70 years, are pitted against a reality
of actual natural computation in microorganisms that have
populated this planet for about 3.7 billion years.
Let the chips fall where they may.
-- jdi
That is not significant at all. If the entire system can be simulated
by a TM, then so can all of its components since all physical
interaction and "intensional" computation can be simulated by
subroutines. QED
I suspect the authors don't know much about algorithmic information
theory.
They badly want to say something of course.
Regards,
--
Eray Ozkural
Joel Isaacson wrote:
> danb...@yahoo.com (dan) wrote in message news:<fbf8d8f2.04032...@posting.google.com>...
...
>>-dbm
>
> Hi Dan,
>
> That discussion on DP that you refer to above relates, in part,
> to a very specific situation that I'll describe briefly below.
>
> We have documented observations from bacterial colonies, whereby certain
> sophisticated computations (not in a metaphoric sense) take place.
>
> Careful analysis indicates that such computations can not be
> accounted for by the conventional UTM model. Prudence
> requires that some other computational theory should be
> proposed that would account for that kind of computation.
Fine, I assume that the paper you reference proposes just such a
computational theory. Frankly I don't care whether we call it a UTM, an
SIM, or whatever. What matters to me is: do you or do you not believe
that what people do, or what these amazing bacteria do, can be emulated
by a digital machine? Because nothing in the article you quote (the one
about TM's) proposes that there is something fundamentally uncomputable,
or mysterious, or not understandable, about such behavior -- they just
make the rather abstruse and narrow claim that TM's as originally
conceived do not quite cover the range of computational paradigms
necessary to be considered truly universal. Their replacement,
interacting finite state machines, appear to me to be every bit as
finite, deterministic, and scrutable as Turing's original formulation,
and in particular are more than subsumed by most of the CA-type
architectures promulgated on this and other sites.
So let's get off the semantics and talk about the meat of the debate: Do
you honestly believe that there is something about consciousness,
intelligence, humanity, etc. and so forth, that is fundamentally not
possible to capture with *any* type of digital, deterministic system
based on finite logic? (the implication being, if what we observe in
this Universe cannot be captured with such a system, then such a system
cannot be the basis of our TOE).
- dbm
Eray Ozkural exa wrote:
> danb...@yahoo.com (dan) wrote in message news:<fbf8d8f2.04032...@posting.google.com>...
...
>>http://www.cs.brown.edu/people/pw/papers/bcj1.pdf
>>
>>This paper brings up a possibly significant issue, which is the fact
>>that UTM's as formally defined don't contemplate the concept of
>>'interactive' input and output.
...
> That is not significant at all. If the entire system can be simulated
> by a TM, then so can all of its components since all physical
> interaction and "intensional" computation can be simulated by
> subroutines. QED
Well, not so fast. What the authors of this paper are concerned with is
a very specific issue regarding the formal definition of a UTM. It
appears to me that the basic addition being suggested is a concept of
state and iteration. A UTM as originally defined is like a stateless
subroutine; what you are suggesting, and what the authors of this paper
are trying to formally define, is a matrix of 'subroutines' (as you call
them) with static variables that can retain information from one
invocation to the next. IOW, finite state machines with (potentially)
infinite memory *and* UTM-level logical capabilities.
This is not just a minor, geeky detail, because various strong arguments
supporting different positions have been based on careful analysis of
the formal definition of a UTM.
My one quibble with the paper is the focus on multiple interacting
units, because clearly any parallel architecture can be emulated by a
well-architected serial implementation. What is critical, IMHO, is not
the parallel nature of multiple machines, but the simple fact that a UTM
doesn't naturally incorporate a concept of retained state and iterations
of its program.
>
> I suspect the authors don't know much about algorithmic information
> theory.
I suspect you're wrong about that.
>
> They badly want to say something of course.
>
Well that's true of all academics :)
> Regards,
>
> --
> Eray Ozkural
>
> http://www.cs.brown.edu/people/pw/papers/bcj1.pdf
>
> This paper brings up a possibly significant issue, which is the fact
> that UTM's as formally defined don't contemplate the concept of
> 'interactive' input and output.
I/O in a Turing machine would be equivalent to either:
* letting someone scribble on the tape while it is running;
* letting someone scribble on the tape between runs;
The TM would be responsible for designating areas as possible to scribble on.
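As a toy illustration of the second option (the 'program' and region layout are invented for the example): the environment may scribble only on a designated input region, and only between runs.

```python
# Sketch: I/O as scribbling on designated tape regions between runs.
# Cells 0-3 are the input region, cells 4-7 the output region.

def run_machine(tape):
    """Toy 'program': copy the input region to the output region,
    inverted bit by bit."""
    for i in range(4):
        tape[4 + i] = 1 - tape[i]
    return tape

tape = [0] * 8
for scribble in ([1, 0, 1, 1], [0, 0, 1, 0]):
    tape[0:4] = scribble      # scribbling happens only between runs,
    tape = run_machine(tape)  # and only in the designated region
    print(tape[4:8])          # output region: [0, 1, 0, 0] then [1, 1, 0, 1]
```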
--
__________
|im |yler http://timtyler.org/ t...@tt1lock.org Remove lock to reply.
daniel B miller <dan...@cmu.edu> wrote in message news:<105sf5q...@news.supernews.com>...
> Hi --
>
> Eray Ozkural exa wrote:
> > danb...@yahoo.com (dan) wrote in message news:<fbf8d8f2.04032...@posting.google.com>...
> ...
> >>http://www.cs.brown.edu/people/pw/papers/bcj1.pdf
> >>
> >>This paper brings up a possibly significant issue, which is the fact
> >>that UTM's as formally defined don't contemplate the concept of
> >>'interactive' input and output.
> ...
> > That is not significant at all. If the entire system can be simulated
> > by a TM, then so can all of its components since all physical
> > interaction and "intensional" computation can be simulated by
> > subroutines. QED
>
> Well, not so fast. What the authors of this paper are concerned with is
> a very specific issue regarding the formal definition of a UTM. It
> appears to me that the basic addition being suggested is a concept of
> state and iteration. A UTM as originally defined is like a stateless
> subroutine; what you are suggesting, and what the authors of this paper
> are trying to formally define, is a matrix of 'subroutines' (as you call
> them) with static variables that can retain information from one
> invocation to the next. IOW, finite state machines with (potentially)
> infinite memory *and* UTM-level logical capabilities.
There is no such thing as infinite in nature. I can repeat that
sentence an arbitrary number of times if you would like.
Can you tell me where the "nonenumerable" in their paper comes from
please?
> This is not just a minor, geeky detail, because various strong arguments
> supporting different positions have been based on careful analysis of
> the formal definition of a UTM.
I didn't find anything mathematical in the paper; perhaps I should
read it again. They may have hidden their "super-Turing discrete
formulation of computation" proofs in the footnotes. My opinion is
that there is no alternative formulation of _discrete_ computation
more powerful than what is defined by lambda calculus.
> My one quibble with the paper is the focus on multiple interacting
> units, because clearly any parallel architecture can be emulated by a
> well-architected serial implementation. What is critical, IMHO, is not
> the parallel nature of multiple machines, but the simple fact that a UTM
> doesn't naturally incorporate a concept of retained state and iterations
> of its program.
UTM is only one description of a universal discrete computer. There
can be other descriptions. However, they are all equivalent in power,
ie. they define the same set of integer functions. That is
undergraduate level theory of computation.
> >
> > I suspect the authors don't know much about algorithmic information
> > theory.
>
> I suspect you're wrong about that.
Maybe. There is a difference between understanding concepts and
reproducing formal definition from a textbook.
Regards,
--
Eray
These two operations can be easily simulated by a TM. It can be
readily seen that adding I/O operations does not surpass the model of
a TM. It is the same thing as adding non-determinism and parallelism:
we get the same power of computation (ie. the same set of functions).
What's so surprising about that? Why do people want to make us believe
that silly OO programs are actually more powerful than ordinary
algorithms? Have not those CS professors read a single theory of
computation textbook? People have gone crazy!
Amazed,
--
Eray
> > > http://www.cs.brown.edu/people/pw/papers/bcj1.pdf
> > >
> > > This paper brings up a possibly significant issue, which is the fact
> > > that UTM's as formally defined don't contemplate the concept of
> > > 'interactive' input and output.
> >
> > I/O in a Turing machine would be equivalent to either:
> >
> > * letting someone scribble on the tape while it is running;
> > * letting someone scribble on the tape between runs;
> >
> > The TM would be responsible for designating areas as possible to
> > scribble on.
>
> These two operations can be easily simulated by a TM.
...assuming the agent doing the writing is itself finite - and does not
access some sort of "genuine" random number generator.
> It can be readily seen that adding I/O operations do not surpass the
> model of TM. It is the same thing as adding non-determinism and
> parallelism, we get the same power of computation (ie. the same set of
> functions)
Parallelism really /does/ surpass the model of computation of a TM.
A parallel machine can do more - since it can read an infinite number
of inputs at once.
AFAIK, non-determinism doesn't really add anything very much of
consequence: practically anything that randomness can do, a strong
cryptographic PRNG algorithm can do just as well - for all practical
intents and purposes.
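A quick sketch of that point (the construction is my own assumption for illustration): run a cryptographic hash in counter mode and treat its output as a bit stream; it is fully deterministic, yet statistically indistinguishable from random for practical purposes.

```python
# Sketch: a deterministic 'random' bit stream from SHA-256 in
# counter mode. Illustrative construction only.
import hashlib

def prng_bits(seed: bytes, n_bits: int):
    """Return n_bits pseudo-random bits from SHA-256(seed||counter)."""
    out, counter = [], 0
    while len(out) < n_bits:
        block = hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
        for byte in block:
            for i in range(8):
                out.append((byte >> i) & 1)
    return out[:n_bits]

bits = prng_bits(b"my seed", 1000)
print(sum(bits) / len(bits))  # fraction of ones: close to 0.5
```

Same seed, same stream, every time -- which is exactly why it suffices "for all practical intents and purposes" but cannot produce anything like the digits of Omega.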
Not exactly. That paper, by Wegner-Goldin, was selected merely
as an example of an important trend, originating from well within the
CS establishment. Eshel's work is entirely independent of that "new
paradigm" group; in fact, the two groups are not even aware of each
other, at the moment...
>Frankly I don't care whether we call it a UTM, an
> SIM, or whatever. What matters to me is: do you or do you not believe
> that what people do, or what these amazing bacteria do, can be emulated
> by a digital machine?
One hypothesis behind Eshel's work is that bacterial intelligence
is a precursor of human intelligence -- about 3.7 billion years
older... both Eshel and I believe that bacterial computation can
be understood in terms of "digital means"... but not necessarily via
UTM models. For us, it is not a question of "emulation" --
it is rather what it actually IS!
I am aware that the bulk of interest in this and similar forums
is to wallow in physics, QM, EPR, Bell, etc., etc. Let me shock
you by diverting your focus to microbiology, rather than physics,
by reproducing here an example I gave yesterday on DP:
* * *
I'll give you an example. A typical bacterial colony under
study has between 10^9 and 10^13 bacteria. On the high end,
that is orders of magnitude more than the number of neurons in
a human brain. So, these colonies can generate massively-interacting
computations.
Such a colony self-assembles into a super-organism (a "society of
bacteria", in Minsky's terms). A typical problem-solving task is how
to evade antibiotics by developing new strains.
antibiotics by developing new strains.
It turns out that there are very specific mechanisms, from sensing and
transducing the biochemical signals involved, to all kinds of
communications/interactions, and ultimately re-computing the genome
into a more resistant genome. Elaborating all of these things,
step-by-step, event-by-event in the actual bacterial lab, is central
to these studies.
So, this is not some kind of arm-chair speculation. It gets as real as
could be... Aren't we all realists at heart?
We take an informational/semantical approach and study this as
computation.
But the crucial point here is that we want to understand the
computation from the *point of view of the bacteria*. In other words,
not as some kind of digital simulation, on our own terms and our own
computing machines, but as the *natural* computational process
unfolding thru the means available ONLY to the bacteria. We allow
that those means are finite, discrete, etc. So, we basically try to
discover the "digital means" deployed by bacteria to solve that
particular survival problem.
Now, bacteria are not symbol-manipulators, and ipso facto cannot
follow symbol-manipulation rules in the sense stipulated by formal
systems, or Turing machines.
Still, there is no question that they do this kind of computation,
but how?
Above is of course sketchy, but should give you some idea.
* * *
> Because nothing in the article you quote (the one
> about TM's) proposes that there is something fundamentally uncomputable,
> or mysterious, or not understandable, about such behavior -- they just
> make the rather abstruse and narrow claim that TM's as originally
> conceived do not quite cover the range of computational paradigms
> necessary to be considered truly universal. Their replacement,
> interacting finite state machines, appear to me to be every bit as
> finite, deterministic, and scrutable as Turing's original formulation,
> and in particular are more than subsumed by most of the CA-type
> architectures promulgated on this and other sites.
As I said above, Eshel's work is independent from that article,
although coincidentally fits along that general trend. It is
important to realize that we have before us a *real* world
puzzle, relating to bacterial computation. Resolving that
puzzle relates to understanding how bacteria manage to
outsmart our own immune system, itself an intelligent
system, not expected to be fully understood by the UTM model.
We don't rule out that both bacterial systems and the immune
system are of a kind; that both ultimately deploy "digital means"
of computations, involving finite resources, discrete elements,
and quite possibly zillions of CA-type processes at their base.
However, we don't see any reason to suppose that these kinds
of digital systems have been anticipated by the now 70-year old
conceptions of Turing & Co... we keep an open mind
that bacterial computation is, possibly, smarter than that...
> So let's get off the semantics and talk about the meat of the debate: Do
> you honestly believe that there is something about consciousness,
> intelligence, humanity, etc. and so forth, that is fundamentally not
> possible to capture with *any* type of digital, deterministic system
> based on finite logic? (the implication being, if what we observe in
> this Universe cannot be captured with such a system, then such a system
> cannot be the basis of our TOE).
I don't believe in vitalism, or that biological systems are
mysterious in some sense, or immune from understanding in terms
of some species of digital computation. The open question,
in my view, is what sort of digital computation...
Hope this helps some. -- jdi
And since you don't believe in infinites, note that the original
definition of a Turing machine postulates an infinite tape. It is
*precisely* because I believe our world is finite, that I am unsure
about the standing of UTM's as the final word on what machines can do.
> Can you tell me where the "nonenumerable" in their paper comes from
> please?
>
I'm working on it.
>
...
> UTM is only one description of a universal discrete computer. There
> can be other descriptions. However, they are all equivalent in power,
> ie. they define the same set of integer functions. That is
> undergraduate level theory of computation.
>
>
>>>I suspect the authors don't know much about algorithmic information
>>>theory.
>>
>>I suspect you're wrong about that.
>
>
> Maybe. There is a difference between understanding concepts and
> reproducing formal definition from a textbook.
>
That's hardly what this paper is doing. While I'm not particularly
impressed by JDI's list of sheepskins & awards that various authors may
have, I don't agree with you that they're all just idiots whistling in
the breeze.
I have long thought that there may be some important characteristics of
digital machines that are not captured correctly by UTM's. I'm not sure
this particular paper gets it right, but there is something that has
been bothering me about UTM's and time, and resources, and input/output.
A UTM can emulate any other machine, but at what cost in terms of
time/memory tradeoffs? That seems to me to be an important distinction
in the real world. After all, certain resource tradeoffs could easily
make P look like NP, and so on. So it seems important to get this part
of the equation right.
I don't just want to know that there's an algorithm that can compute
function f() over some domain; I want to know which algorithm best
approximates function f() given time T, energy E, mass M, volume V,
number of gates G, with latency L, specific interconnection topologies,
and so on.
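To make the question concrete in miniature (the series and budgets are my own toy assumptions): here are two 'anytime' approximations of the same target, pi, compared under a fixed budget of arithmetic steps. The achievable accuracy, and hence which algorithm is 'best', depends entirely on the budget.

```python
# Sketch: which algorithm best approximates a target under a fixed
# step budget? Two series for pi, as a toy stand-in for f().
import math

def leibniz(steps):
    """pi via the Leibniz series (slow convergence)."""
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(steps))

def nilakantha(steps):
    """pi via the Nilakantha series (much faster convergence)."""
    s, sign = 3.0, 1
    for k in range(1, steps + 1):
        n = 2 * k
        s += sign * 4 / (n * (n + 1) * (n + 2))
        sign = -sign
    return s

for budget in (10, 1000):
    errs = {f.__name__: abs(f(budget) - math.pi)
            for f in (leibniz, nilakantha)}
    best = min(errs, key=errs.get)
    print(budget, best, errs)
```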
It is precisely this intersection between computation and real-world
(or fake-world!) constraints that may lead to new insights in both
physics and CS. How do we capture this issue of performance under
constrained resources in a formal and rigorous way?
-dbm
> Now, bacteria are not symbol-manipulators, and ipso facto cannot
> follow symbol-manipulation rules in the sense stipulated by formal
> systems, or Turing machines.
Could you explain this conclusion a bit more? It sounds like "this right
angle triangle, made with toothpicks, is not a mathematical entity, so you
can't use Pythagoras' theorem to calculate the lengths".
--
Frank Buß, f...@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
OK: I think I know where we are not in resonance.
You say:
> but as the *natural* computational process unfolding thru the means
> available ONLY to the bacteria.
What do you mean by ONLY ? How can you know whether this computational
process is or isn't available to other, non-bacterial systems?
and...
> Now, bacteria are not symbol-manipulators, and ipso facto cannot
> follow symbol-manipulation rules in the sense stipulated by formal
> systems, or Turing machines.
This is clearly a level issue. If you believe, as many who call
themselves DP'ers do, that the Universe itself is a CA, or some similar,
deterministic, finite state machine, then *at*the*deepest*level, the
bacteria are in fact part of a symbol-manipulating machine. Because, in
spite of various discussions on this and other threads, the fact remains
that a UTM can emulate a CA. If a CA is the Universe, then a UTM can
emulate the entire Universe, including the bacteria with their magical
abilities. None of this is to imply that a UTM is a useful, elegant, or
efficient way to describe this state of affairs; only that it is, in
principle, sufficient to produce the behavior observed. It may very
well be that some network of interacting machines is a description that
has much more expressive power; but the fact remains that it is really
just another way to express the same algorithms as far as the
manipulation of bits is concerned.
This 'bacteria are sooo smart' stuff smacks of the old analog computing
claims. Elsewhere the fallacy of this red herring has been pointed out.
So-called 'analog' processes are only as useful as the underlying
precision, or information-processing capability, of the reductionist
process that produces the (macro-observed) analog behavior. That is why
most DP folks believe that Quantum Computing is the next Cold Fusion.
You can't get something for nothing, IMHO.
-dbm
And is this genuine random number generator emitting digits of Omega
or something like that? Where is this god machine? When I asked on
digitalphilosophy, people had said that such a component was
unnecessary!
> > It can be readily seen that adding I/O operations do not surpass the
> > model of TM. It is the same thing as adding non-determinism and
> > parallelism, we get the same power of computation (ie. the same set of
> > functions)
>
> Parallelism really /does/ surpass the model of computation of a TM.
>
> A parallel machine can do more - since it can read an infinite number
> of inputs at once.
I think you have a misunderstanding here. A parallel machine is not an
infinite machine. A TM with multiple tapes and heads can be simulated
by the serial variant easily, as can be verified from any theory of
comp. textbook... You can even add message passing capability among
tapes and still get the same power of computation.
There are supercomputing centers all around the world and housed in
them are parallel computers. That's what a parallel computer is in
practice *and* theory, a finite computer with parallel computing units
(cpu, memory, disk, etc.) communicating over an interconnection
network (like a switched ethernet network).
You seem to have gotten your definition of parallel machine wrong.
FWIW *every* machine in nature is parallel, by parallel machine we
usually refer to the degree of asynchrony, or lack of dependence,
between these parallel computing units!
> AFAIK, non-determinism doesn't really add anything very much of
> consequence: practically anything that randomness can do, a strong
> cryptographic PRNG algorithm can do just as well - for all practical
> intents and purposes.
Obviously, the digits of Omega can't be produced by a PRNG algorithm
(like a LFSR). It might be my practical intent and purpose to
calculate the first n digits of Omega, which requires a specific
procedure.
Non-determinism and parallelism can be simulated by a serial machine
as I said before. For the latter, that's how I sometimes test my
parallel programs.
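For what it's worth, here is a toy version of that testing trick (my own illustrative sketch, not anyone's actual code): model each 'processor' as a Python generator, and let a deterministic round-robin scheduler interleave their steps on a single serial machine.

```python
# Sketch: testing a 'parallel' program on a serial machine by
# deterministic round-robin interleaving. Illustrative names only.

def worker(name, shared, n):
    """One 'processor': each yield marks the end of one step."""
    for i in range(n):
        shared.append((name, i))
        yield

def run_round_robin(procs):
    """Serial scheduler: give each live processor one step per round."""
    procs = list(procs)
    while procs:
        for p in list(procs):
            try:
                next(p)
            except StopIteration:
                procs.remove(p)

shared = []
run_round_robin([worker("A", shared, 3), worker("B", shared, 3)])
print(shared)  # A's and B's steps, deterministically interleaved
```

Nothing super-Turing appears: the serial machine computes exactly what the parallel one would, just one step at a time.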
And then, we see that in your response the only open avenue left is the
possibility of some super-Turing agent intervening in the process of
computation by overwriting the poor Turing Machine's tape. Now we are
chasing the homunculus, I suppose. How is this other super-Turing agent
constructed, then!? That's what that brilliant paper failed to address.
(The one with the blurb about sequential interactive machines and
object oriented programming blah blah)
Thanks,
--
Eray Ozkural
> > > These two operations can be easily simulated by a TM.
> >
> > ...assuming the agent doing the writing is itself finite - and does not
> > access some sort of "genuine" random number generator.
>
> And is this genuine random number generator emitting digits of Omega
> or something like that? Where is this god machine?
I never said it existed - whether genuine random numbers exist is not
yet known. I was pointing out that you were assuming their non-existence.
> > > It can be readily seen that adding I/O operations do not surpass the
> > > model of TM. It is the same thing as adding non-determinism and
> > > parallelism, we get the same power of computation (ie. the same set of
> > > functions)
> >
> > Parallelism really /does/ surpass the model of computation of a TM.
> >
> > A parallel machine can do more - since it can read an infinite number
> > of inputs at once.
>
> I think you have a misunderstanding here. A parallel machine is not an
> infinite machine. A TM with multiple tapes and heads can be simulated
> by the serial variant easily, as can be verified from any theory of
> comp. textbook... You can even add message passing capability among
> tapes and still get the same power of computation.
>
> There are supercomputing centers all around the world and housed in
> them are parallel computers. That's what a parallel computer is in
> practice *and* theory, a finite computer with parallel computing units
> (cpu, memory, disk, etc.) communicating over an interconnection
> network (like a switched ethernet network).
>
> You seem to have gotten your definition of parallel machine wrong.
A TM is a model of a serial machine.
An infinite CA is a model of a parallel one.
Both have infinite memory resources - and /need/ to, or they are
weak models: if TMs with finite memory were permitted, for example,
it would no longer be true that any TM could simulate any other one.
The CA is theoretically more powerful than the TM. It can do things
the TM will never manage. For example, it can process an infinite
number of inputs in a finite time.
Oh, the wording "ONLY" is not meant to exclude bacteria; rather, it
is meant to limit their capabilities. To clarify, read it as: the
*natural* computational process unfolding thru the restrictive means
available to bacteria. Those means are extremely primitive, well below
conventional computing machines, and therefore present certain
problems special to this bacterial scenario.
To respond to the second part of your question above, we hope,
indeed, that bacterial-type computation is available to non-bacterial
systems, such as the immune system and the nervous system, both
crucial intelligent systems of higher organisms. This is, exactly,
the prime motivation to study bacteria in the first place. They
evolutionarily predate the other biotic systems; and, as I mentioned
elsewhere, one hypothesis is that bacterial intelligence is a
precursor to other types of *natural* intelligences.
So, I think you need to sharpen the dissonance with your thinking
on this point... or concede resonance...
> and...
>
> > Now, bacteria are not symbol-manipulators, and ipso facto cannot
> > follow symbol-manipulation rules in the sense stipulated by formal
> > systems, or Turing machines.
>
> This is clearly a level issue. If you believe, as many who call
> themselves DP'ers do, that the Universe itself is a CA, or some similar,
> deterministic, finite state machine, then *at*the*deepest*level, the
> bacteria are in fact part of a symbol-manipulating machine.
Well, here you escape into the safety of communal mantra, Dan,
rather than thinking for yourself... anew...
>[1] Because, in
> spite of various discussions on this and other threads, the fact remains
> that a UTM can emulate a CA.
You are back to the fixation on "emulation". In contrast, we are
trying to study a *natural* system, on its own terms, without
"emulation", still acting as digital computation. What's the flaw in
that approach?
>[2] If a CA is the Universe, then a UTM can
> emulate the entire Universe, including the bacteria with their magical
> abilities.
[1] & [2] are mantric renditions of Plamenic doctrine. [2] is
especially amazing. Where does "if a CA is the Universe" pop up from?
Does the assumption ("if") make it so? Do you really believe
that your DP-er's intuition now obligates *real* organisms (that, by
the way, populate your guts and without which you won't survive) to
become party to your fantasy?
> None of this is to imply that a UTM is a useful, elegant, or
> efficient way to describe this state of affairs; only that it is, in
> principle, sufficient to produce the behavior observed. It may very
> well be that some network of interacting machines is a description that
> has much more expressive power; but the fact remains that it is really
> just another way to express the same algorithms as far as the
> manipulation of bits is concerned.
You are doing well until the last sentence above. Interjecting
"algorithms" into the bacterial scenario is not warranted. [Just to
put it in a bit of perspective: David Harel, who has done all that
work on algorithmic complexity, and coined the term "algorithmics",
is now in a "remorseful" phase... to say the least... btw, he also
works on the sense of smell as computation... something we had fun
with just a few days ago...]
>
> This 'bacteria are sooo smart' stuff smacks of the old analog computing
> claims. Elsewhere the fallacy of this red herring has been pointed out.
> So-called 'analog' processes are only as useful as the underlying
> precision, or information-processing capability, of the reductionist
> process that produces the (macro-observed) analog behavior.
It is funny you mention analog computing... both my master's (1961)
and PhD (1963) involved computational techniques (that today would be
called CA) to solve some engineering problems. In 1964 I published
one of my first papers on digital simulation of a certain biological
system. It was a sophisticated CA (although we didn't know to call
it by that name at that time...) It was published in a journal called
SIMULATION. Without exception, all articles in that journal had been
on analog simulation, for digital simulation was deemed impossible by
most people in the field. The hostility and anxiety expressed by some
vocal "analogists" was extreme... but the cool-headed ones assured the
others that digital simulation may be a fluke... only applicable to
biological stuff... So, please, Dan, to hear you -- 40 years later --
tell me about analog computing in those terms above...
you know what I mean...
> That is why
> most DP folks believe that Quantum Computing is the next Cold Fusion.
> You can't get something for nothing, IMHO.
I have no interest in quantum computing. I fail to see the connection
to this particular discussion. -- jdi
>
> -dbm
PS If you accept my posts, please don't edit them by snipping
and responding simultaneously. First post in its entirety. If and
when you care to respond, then you may snip all you want, of course!
In other words, don't conflate your dual function as moderator and
discussant. Thanks, -- jdi
By arbitrary I meant some K < infinity ;) I might run out of my life
cycles, but I'd rather die than break your heart.
> And since you don't believe in infinites, note that the original
> definition of a Turing machine postulates an infinite tape. It is
> *precisely* because I believe our world is finite, that I am unsure
> about the standing of UTM's as the final word on what machines can do.
>
So are there actually memory devices with infinite memory in nature
that can operate on these infinite strings in finite time? I
didn't know that. Then you have continuous computation, which is known
to be super-Turing. Could you please show me a physical example?
> > Can you tell me where the "nonenumerable" in their paper comes from
> > please?
> >
> I'm working on it.
OK. I really didn't understand how and where nonenumerability arises.
In fact, I'm not even sure if they really mean nonenumerable, which
might mean uncountably infinite: continuum. What else might they mean?
> >>I suspect you're wrong about that.
> >
> >
> > Maybe. There is a difference between understanding concepts and
> > reproducing formal definition from a textbook.
> >
> That's hardly what this paper is doing. While I'm not particularly
> impressed by JDI's list of sheepskins & awards that various authors may
> have, I don't agree with you that they're all just idiots whistling in
> the breeze.
>
I didn't say the authors are idiots, but I certainly think they are
looking under the wrong carpet. What they should be working on is,
given their knowledge of complex systems, physics and biology, a
plausible measure of the complexity of biological systems and how the
measures over biological structures compare to human technology. Then,
I would like to see a mathematical model of these numbers. That is,
quantitative arguments and proofs, not qualitative/persuasive
arguments. That kind of argumentation is philosophy, art of thought
which I am so fond of myself, but it is not the kind of science we
must endorse. It is not entirely reliable either. Look how some of the
most popular philosophers have wasted our brain cycles with
unrealistic theories of mind for decades, even centuries.
> I have long thought that there may be some important characteristics of
> digital machines that are not captured correctly by UTM's. I'm not sure
> this particular paper gets it right, but there is something that has
> been bothering me about UTM's and time, and resources, and input/output.
The fact is that the "Turing Machine" with its tape, head and finite
control is just one of a "countless" number of computer architecture
specifications. I like the way Chaitin builds intuition for it: a TM
is just a very low-level machine code, nothing else. So, in theory,
it is no different from any other kind of extremely low-level machine
programming language. Do you remember the assembly languages of 8-bit
machines? That was pretty low level. Do you seek any elegance in an
assembly language? I don't expect too much.
However, if you work on high level specifications such as LISP, you
can observe the generality of the computer architecture more directly.
That is why, working on that level _enables_ us to come up with more
general algorithms than could be imagined if we restricted our
imagination to the lowest level of the specification hierarchy.
That is, if you tried to program a universal computer in Minsky's 2
counters, a minimalist version of the TM, then you would get into a
lot of trouble. Our brains are not that advanced.
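To make the "TM as low-level machine code" point concrete, here is a
minimal sketch of a single-tape machine interpreter in Python. The
encoding of the transition table is my own illustrative choice, not
any particular textbook's: the whole "program" is just a lookup table.

```python
# A minimal Turing machine interpreter: the "machine code" is nothing
# more than a state-transition table. Illustrative sketch only.

def run_tm(delta, start, accept, tape_input, blank="_", max_steps=1000):
    """delta maps (state, symbol) -> (new_state, write_symbol, move)."""
    tape = dict(enumerate(tape_input))  # sparse tape: only visited cells
    state, head = start, 0
    for _ in range(max_steps):
        if state == accept:
            break
        sym = tape.get(head, blank)
        state, tape[head], move = delta[(state, sym)]
        head += 1 if move == "R" else -1
    cells = sorted(tape)
    out = "".join(tape[i] for i in range(cells[0], cells[-1] + 1))
    return out.strip(blank)

# Example "program": flip every bit, halt on the first blank.
delta = {
    ("q0", "0"): ("q0", "1", "R"),
    ("q0", "1"): ("q0", "0", "R"),
    ("q0", "_"): ("halt", "_", "R"),
}
print(run_tm(delta, "q0", "halt", "1011"))  # -> 0100
```

Even this toy version shows how unforgiving the level is: every
operation is one cell read, one cell write, one head move.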
> A UTM can emulate any other machine, but at what cost in terms of
> time/memory tradeoffs? That seems to me to be an important distinction
> in the real world. After all, certain resource tradeoffs could easily
> make P look like NP, and so on. So it seems important to get this part
> of the equation right.
The simulation tradeoff is specified by a polynomial slowdown in
running time which depends only on the architectural differences as
you would imagine. So it doesn't make P look like NP in practice.
> I don't just want to know that there's an algorithm that can compute
> function f() over some domain; I want to know which algorithm best
> approximates function f() given time T, energy E, mass M, volume V,
> number of gates G, with latency L, specific interconnection topologies,
> and so on.
I think these have all been analyzed quite well in the literature. For
instance, there has been a lot of work in circuit complexity.
People have written a lot of algorithms for different interconnection
topologies, and you can derive general results about time/space
complexity growth for at least regular interconnection networks. (such
as going from a 2d mesh -> 3d mesh -> hypercube -> tree of hypercubes
etc.) For general descriptions over weighted graphs or hypergraphs,
that is easier said than done :)
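For the regular networks mentioned above, the effect of topology on
the communication term is easy to make concrete. A small sketch of
network diameters (worst-case hop counts), assuming n is a perfect
square, perfect cube, or power of two as each shape requires:

```python
# Diameter (worst-case hop count) of some regular interconnection
# networks with n nodes. Illustrative sketch; assumes n fits the shape.
import math

def mesh2d_diameter(n):      # side s = sqrt(n); diameter 2*(s - 1)
    s = round(n ** 0.5)
    return 2 * (s - 1)

def mesh3d_diameter(n):      # side s = n^(1/3); diameter 3*(s - 1)
    s = round(n ** (1 / 3))
    return 3 * (s - 1)

def hypercube_diameter(n):   # d-cube with n = 2^d nodes; diameter d
    return int(math.log2(n))

n = 4096  # = 64^2 = 16^3 = 2^12
print(mesh2d_diameter(n))    # -> 126
print(mesh3d_diameter(n))    # -> 45
print(hypercube_diameter(n)) # -> 12
```

Same node count, an order of magnitude difference in worst-case
distance, which is exactly why the same algorithm gets different
time bounds on different topologies.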
The particular problem you described is a hard one. It is basically a
specification of the ultimate compiler with the ultimate optimizer.
Some of the stuff there may be uncomputable, for instance if you want
the absolutely minimum program size and so forth... (as you well know)
In fact, if you solve that as well as it can be solved, you might get
strong AI. Don't underestimate the power of such an optimizer. You
might want to have a look at Juergen's new work on AI to get a nice
glimpse of general inductive inference procedures which try to
optimize time (and program size at the same time IIRC):
http://www.idsia.ch/~juergen/newai/newai.html
While I don't think Juergen has (yet) solved the problem of strong AI,
it's interesting to see the advances in general algorithmic methods.
> It is precisely this intersection between computation and real-world (or
> fake-world!) constraints that may lead to new insights in both physics
> and CS. How do we capture this issue of performance under constrained
> resources in a formal and rigorous way?
I remember some work on resource bounded computation, especially for
real-time systems and such in programming languages community but I
have no idea how far they have come :)
Regards,
--
Eray
Joel Isaacson wrote:
[...snip...]
>
> PS If you accept my posts, please don't edit them by snipping
> and responding simultaneously. First post in its entirety. If and
> when you care to respond, then you may snip all you want, of course!
> In other words, don't conflate your dual function as moderator and
> discussant. Thanks, -- jdi
>
This is off base. I respond to posts in a newsreader or in Google like
anyone else, and I don't perform any moderation functions in doing so,
or do anything that a non-moderator couldn't do. I also believe it is
accepted policy on Usenet to edit posts with an indication such as
[snip] or [...], especially when quotes and double quotes are making the
thread unmanageably long. Since each original post is available, there
is no need to quote each one in its entirety when you begin a new
article. The snipping is done to point out the specific issues you are
responding to.
At least that's how I've always done it, and seen it done - dbm
ps if a moderator is doing something relating to moderation, it is
always preceded by a "MODERATOR'S COMMENT" note; that's actually
enforced in the moderation software. Anything else you see me say in
this group is just me offering my opinion. I take the separation of
hats seriously (ie, if I'm wearing my MODERATOR hat, I'll make that very
clear. Otherwise, it's just my everyday hat.)
> > > Can you tell me where the "nonenumerable" in their paper comes from
> > > please?
> > >
> > I'm working on it.
>
> OK. I really didn't understand how and where nonenumerability arises.
> In fact, I'm not even sure if they really mean nonenumerable, which
> might mean uncountably infinite: continuum. What else might they mean?
They might mean: that - or any bigger infinity.
Hi Dan,
This is to amend my previous posts in this thread. I am mindful of
the limited scope of this forum, insofar as it is limited to models
of discrete, finite physical systems. So, I shall not persist with
this biotic computation vista, unless specifically rendered on-topic
by the moderators. To help you form an opinion on the matter, I refer
to Ed Fredkin's wise comments on biology.
In the Tasmanian Revelation (see Finite Nature -- Tasmania, or
Physics out of Biology) Fredkin observes, rather poetically, that
growth and development in biology may serve as a model for digital
physics. In particular, he identifies and articulates two laws of
information, common to both.
A decade or so later, Ed Fredkin wrote the following in his work
on DP:
"Today, a follower of Digital Philosophy could still predict that we
will discover that digital processes govern the growth and development
of living things. Perhaps life is based on digital informational
processes involving the digital information encoded in DNA. It is
obvious to a few (including the author and Stephen Wolfram) that such
digital processes as seen in cellular automata are possible
explanations and models for many of the informational processes in
biology."
[ DP, Chapter 4: DP and Biology -- Last Revised 22 October 2001]
Some of us have reviewed the statement above and consider it
remarkably sane and responsible, as opposed to some other statements
heard on this and similar forums. Those other positions postulate a
single, all-inclusive CA, that not only "computes" the physical
Universe, but also subsumes each and every living thing together with
all life processes.
I fail to see this kind of sweeping fantasy in Fredkin's own position,
and wonder what it is that lets some adherents of DP hold these
kinds of views.
My own view (and I am not alone in this) is that many life processes
can be understood (also modeled) in terms of CAs, or similar
processes; that we now have wonderful opportunities to test these
kinds of models in actual lab experiments; and that progress along
these lines is bound to aid in developing discrete, CA-based physical
models of the Universe, just as Ed Fredkin predicted in the Tasmanian
Testament.
Respectfully submitted,
Joel D. Isaacson
Where and how have they observed such infinity? So far, all I have
seen is talk of large bacteria colonies which are hardly infinite!
Regards,
--
Eray Ozkural
> > > > > Can you tell me where the "nonenumerable" in their paper comes from
> > > > > please?
> > > > >
> > > > I'm working on it.
> > >
> > > OK. I really didn't understand how and where nonenumerability arises.
> > > In fact, I'm not even sure if they really mean nonenumerable, which
> > > might mean uncountably infinite: continuum. What else might they mean?
> >
> > They might mean: that - or any bigger infinity.
>
> Where and how have they observed such infinity? So far, all I have
> seen is talk of large bacteria colonies which are hardly infinite!
Their argument follows Turing.
Turing had never seen an infinite tape - yet he found it convenient
to insert one into his infamous machine.
He was also careful enough to insist on making the ID of the machine
finite at any point in execution.
To regard the abstract Turing machine as an infinite construct is a
conceptual error.
Regards,
--
Eray Ozkural
> > Their argument follows Turing.
> >
> > Turing had never seen an infinite tape - yet he found it convenient
> > to insert one into his infamous machine.
>
> He was also careful enough to insist on making the ID of the machine
> finite at any point in execution.
>
> To regard the abstract Turing machine as an infinite construct is a
> conceptual error.
There's no such thing as a finite Turing Machine.
If it's finite, it isn't a Turing Machine.
Turing Machines /must/ have access to an infinite tape - by definition.
What is the ID of the Turing Machine? The ID is expressed by a finite
string at all points of derivation. From "Introduction to Automata
Theory, Languages, and Computation" 2nd edition, p. 320:
QUOTE
8.2.3 Instantaneous Descriptions for Turing Machines
In order to describe formally what a Turing machine does, we need to
develop a notation for configurations or instantaneous descriptions
(ID's), like the notation we developed for PDA's. Since a TM, in
principle, has an infinitely long tape, we might imagine that it is
impossible to describe the configurations of a TM succinctly. However,
after any finite number of moves, the TM can have visited only a
finite number of cells, even though the number of cells visited can
eventually grow beyond any finite limit. [ Note that this is possible
only if we let the machine run forever - Eray ] Thus, in every ID,
there is an infinite prefix and an infinite suffix of cells that have
never been visited. These cells must all hold either blanks or one of
the finite number of input symbols. We thus show in an ID only the
cells between the leftmost and the rightmost non-blanks. Under special
conditions, when the head is scanning one of the leading or trailing
blanks, a finite number of blanks to the left or right of the nonblank
portion of the tape must also be included in the ID.
In addition to representing the tape, we must represent the finite
control and the tape-head position...
ENDQUOTE
The "tape" of the TM is infinite by definition, however the *physical*
state of a TM (its ID) will be finite at every moment, and *never*
infinite. Note also that the input and output are required to be
finite as well. (Or given finite time, deal with only finite strings,
in general. The procedure may be emitting parts of an infinite string
like the decimal expansion of Pi. That's a trivial point.)
This is the mechanism via which many Turing Machines find *exact*
realization in the real world. Anyway, there is also an argument, in
the 1st edition I think, that the tape can be grown arbitrarily: one
can design a TM that will ask for more tape when it runs out of it.
This way, you can easily use up all the available space and time in
the universe. Would that satisfy your observation of unbounded
operation?
Regards,
--
Eray Ozkural
> Eray Ozkural exa <er...@bilkent.edu.tr> wrote or quoted:
[..]
>>To regard the abstract Turing machine as an infinite construct is a
>>conceptual error.
>>
>
> There's no such thing as a finite Turing Machine.
>
> If it's finite, it isn't a Turing Machine.
>
> Turing Machines /must/ have access to an infinite tape - by definition.
Maybe a practical definition of infinity should be used. The size of
the tape should be bigger than needed for the computation. In other
words, all is OK until a computation reaches the "edge" of the tape.
In that case, you should have made the tape bigger.
Gerard
Dear Joel,
Hope you don't mind my being so personal, even self-referential,
with you... Don't hold your breath, the moderators are not
going to respond to your proposal to include biotic computation
in this forum... (anyway, your post in this regard was
lost...) so, if I were you I'd simply drop out
of this forum -- for, between you and me, you are simply not relevant
to the more sophisticated aspects of this forum!
All the best,
-- jdi
I agree, let us not get lost in this minor quibble. Computation is a
finitary method under any sane interpretation I believe.
On the other hand, it is easy to imagine infinite machines which work
in Aleph_n. We can surely imagine impossible machines which operate in
impossible realms.
Regards,
--
Eray Ozkural