
lambda calculus -> algorithmic optimization


Ankit Jain

Jul 2, 2003, 7:16:20 AM
Hi there
Sub: lambda calculus application to program optimization (algorithmic
optimization)
I have only a very brief knowledge of lambda calculus. Can somebody please
tell me whether lambda calculus can be used for the optimization of a
program?

Has some work already been done in this regard?

Specifically, I am interested in an artificial intelligence project in
which I intend to build programming assistants (yes!!!). I intend to
relate natural language with lambda calculus and then finally build
programs by just describing the problem.

Thanks in advance.
If this idea seems workable, please contact
me. (mail2...@rediffmail.com)
Ankit

Joachim Durchholz

Jul 2, 2003, 9:55:57 AM
Ankit Jain wrote:
> I intend to
> relate natural language with lambda calculus and then finally build
> programs by just describing the problem.

This has been tried many times, and a direct approach has always failed.
The usual chain of realizations is:
* Try to make sense of natural language
* Realize that natural language is too varied and (worse) too ambiguous
* Use a formally defined subset of natural language
* Realize that the subset you're using is a programming language
(in disguise)

The direct route (make the computer understand human-language
descriptions) is indeed infeasible. Understanding natural language
involves encoding heaps of everyday knowledge just to make it understand
all the possible meanings that I may attribute to the word "drive".

Designing computer languages is not just a technical exercise, it
involves programmer psychology and work organization (and, in the time
of Open Source and Third-Party Libraries, intellectual property
protection, trust in unknown parties, and commerce).

Regards,
Jo

Ankit Jain

Jul 2, 2003, 2:39:29 PM
Joachim Durchholz <joachim....@web.de> wrote in message news:<bduocv$11bbja$1...@ID-9852.news.dfncis.de>...

Thanks for reviewing,
I agree with you. Actually I think that natural language processing is
a field of AI that needs some more groundwork on the AI basics
themselves.

As you said, it would require heaps of knowledge, which I thought
could be extracted from a large amount of texts, stories (indeed!!!),
etc. But perhaps it would also require some form of memory
organisation. So at first it might be required to code in common
sense. And then I guess things should move along more easily.

Would natural language processing still be a suitable field of AI to
pick for working on memory organisation? Or is there some relatively
easier domain over which the basics of memory organisation can be
developed? Any ideas?

Thanks again,
Ankit

Joachim Durchholz

Jul 2, 2003, 5:31:14 PM
Ankit Jain wrote:
> I agree with you. Actually I think that natural language processing is
> a field of AI that needs some more groundwork on the AI basics
> themselves.

This depends entirely on what you want to do. Some fields of AI have
progressed beyond AI (and aren't considered AI anymore: neural networks,
speech recognition, expert systems...)

> As you said, it would require heaps of knowledge, which I thought
> could be extracted from a large amount of texts, stories (indeed!!!),
> etc.

Automatic extraction of knowledge is difficult. The problem is: how do
you deal with incomplete and inconsistent information? At a formal
level, every natural-language text bristles with these.

There was a ten-year project for representing everyday knowledge. Most
of that time was spent encoding trivial facts of life: that fathers are
male, that dying is an irreversible process, that the sky is commonly
considered to be blue but may be covered by clouds (in which case people
say "they sky is gray" and don't mean the sky as an abstract concept
which is still blue, but the sky as seen from the earth surface which is
grey). Etc. pp. ad infinitum ad nauseam.

The project was terminated. They wanted to sell it, but it wasn't up to
the task (they wanted to use it for applying "common sense" to database
contents).
AFAIK a follow-up project was set up, supposedly to apply the lessons
learned. It's scheduled to be done in another ten years (well, seven or
eight years from now).

(Sorry, I have forgotten the name of either project. I dimly remember it
was similar to "cycle" or something.)

> But perhaps it would also require some form of memory
> organisation.

AFAIK the common sense representation project didn't use any
particularly clever data organisation. It was essentially a
precondition->conclusion network, easily representable as a relational
table (for example).
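
(To make that concrete, here is a minimal sketch in Haskell of such a
precondition->conclusion table, with naive forward chaining on top of it.
The rule and fact names below are invented for illustration, not taken
from the actual project.)

import Data.List (nub)

-- A rule is one row of the table: if all preconditions hold,
-- the conclusion holds.
data Rule = Rule { pre :: [String], concl :: String }

-- Naive forward chaining: apply rules until no new facts appear.
saturate :: [Rule] -> [String] -> [String]
saturate rules facts
  | null new  = facts
  | otherwise = saturate rules (facts ++ new)
  where
    new = nub [ concl r | r <- rules
                        , all (`elem` facts) (pre r)
                        , concl r `notElem` facts ]

main :: IO ()
main = print (saturate rules ["isFather Tom"])
  where rules = [ Rule ["isFather Tom"] "isMale Tom"
                , Rule ["isMale Tom"]   "isPerson Tom" ]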

> So as first it might be required to code in common
> sense.

From the way the magazine article talked about the project, I'd rate
this at an order of magnitude of 100 person-years.

HTH
Jo

Hal Daume III

Jul 2, 2003, 7:00:01 PM
Hi,

> > I agree with you. Actually I think that natural language processing is
> > a field of AI that needs some more groundwork on the AI basics
> > themselves.
>
> This depends entirely on what you want to do. Some fields of AI have
> progressed beyond AI (and aren't considered AI anymore: neural networks,
> speech recognition, expert systems...)

...one might equally say 'some fields of NLP have progressed beyond AI.'
That is, "traditional AI" has been found not to be terribly useful at
tackling NLP problems. NLP is often considered more in line with machine
learning today than with AI (though of course whether machine learning is
AI or not, ...). I think "AI" is a sticky name because it tends to mean
"anything which we can't do exactly." Speech recognition was considered
AI until (a) it began to be made up almost exclusively of HMMs and
(b) it started working. Of course, these two things happened
simultaneously, so it's hard to say which caused it to drop out from under
the "AI" umbrella.

[that's not to say SR really works, but...]

> There was a ten-year project for representing everyday knowledge. Most

> <SNIP>


> (Sorry, I have forgotten the name of either project. I dimly remember it
> was similar to "cycle" or something.)

Close. It was "CYC". You can find it at:

www.opencyc.org

--

As Joachim has said, it's very difficult to 'learn' from unannotated data
(your large collection of stories). In general, what will happen is that
it will learn *something*, but this something is nothing at all like what
you intended it to learn. There's also huge computational demand and blah
blah blah. Perhaps 182 years from now someone will try and maybe be able
to get halfway to somewhere, but I'd not wager money on that.

If you're interested in structured knowledge, etc., there's a subfield of
AI called KR (knowledge representation). They deal with those sorts of
issues. People in NLP tend not to use KR (ever) because it's always too
specific and doesn't handle even the most common cases.

If you're interested in NLP, there is a lot to be gained from traditional
AI stuff (search is integral to many tasks, for instance), but much more
from the fields of machine learning, statistics and linear algebra.

Of course there are other "AI" like fields which aren't at all like KR and
NLP, but that's another discussion.

- Hal

p.s., my point of view is obviously biased. There certainly are people
"out there" who do very formal, semantically based theories of NLP. This
stuff is theoretically nice, and can tell you the difference between
"Alice is in the house" and "Alice is near the house" but little
else. The most statistical approaches (the world I live in) don't really
"know" the difference between those two, but depending on the task can
probably generate the correct output given those two as inputs. This may
bother some. Not me.


Ankit Jain

Jul 3, 2003, 3:48:53 PM
Joachim Durchholz <joachim....@web.de> wrote in message news:<bdvj2f$11l035$1...@ID-9852.news.dfncis.de>...

Thanks Joachim & Hal for your information.Iam coming to the conclusion
that NLP as such would not be able to serve the purpose for AI and
that it definitely requires a solid backend which is capable of
intelligent behaviour.Then perhaps NLP could be fruitful.

I think that true intelligence is something that is being missed in
most AI projects. As you have also noted, AFAIK that project did not use
any particularly clever data organisation.

Hal has pointed me to the field of knowledge representation, but I
could not find good material just to know whether it would really be
suited. I mean some sort of overview.

Currently the problem that I am facing is that I want to develop a model
so that I can verify whether a system that maintains a knowledge base,
tries to verify facts, draws conclusions, acts upon these conclusions,
and again verifies these conclusions to arrive at successful results,
even has some instincts, can display some sort of intelligent
behaviour. This may perhaps be an answer to what Joachim has asked about
how to deal with inconsistent information.

I was wondering which language or tool may be useful for building such
a theory. Would Perl be suited (I have only a faint idea of Perl), or
some other tool? Or is there some similar work?
Some suggestions on how to go about building it further would be welcome.
Thanks in advance!
Ankit

Alan Connor

Jul 3, 2003, 4:50:04 PM

I know a guy in Seattle who is on the team that writes the software that
designs the Boeing airplanes.

He has a sign over his desk that says:


ARTIFICIAL INTELLIGENCE

When the Real Thing Just Won't Do

This fellow knows more about computers than anyone I have ever even heard of,
and reads about 20 languages like you and I read the funnies.


He doesn't believe that AI is even possible, that it is just hype and smoke
and mirrors to sell stocks and make jobs.

It is as easy as pie to design a program that will make a computer SEEM to
be intelligent, but the fact is that no one understands what HUMAN intelligence
is or how it works, and that makes it very difficult indeed to create a
technological equivalent, doesn't it?


Just spotted the "AI" in the subject and thought I'd share the thoughts of
a real guru I know.


Alan


Joachim Durchholz

Jul 3, 2003, 6:15:41 PM
Ankit Jain wrote:
>
> I think that true intelligence is something that is being missed in
> most AI projects.

Er... there isn't even a generally accepted definition of intelligence.
(Other than "what a computer can do", or "what a human can do". Which
both are fairly uninteresting for AI.)

> I was wondering which language or tool may be useful for building such
> a theory. Would Perl be suited (I have only a faint idea of Perl), or
> some other tool?

Perl is most definitely not suited. For starters, it doesn't deal with
persistency at all.

From your description, I think you're heading for an expert system
(which is, in essence, a system of rules that's capable of producing all
the conclusions which can be drawn from them, including the nonobvious
ones).
As an alternative, you may look into languages that go under the heading
of "logic programming". Prolog was the first in that area, though
research most certainly has progressed beyond Prolog. (Prolog has some
serious deficits that make it less than ideal for general programming,
even of rule systems.) Logic programming is about efficiently proving
(or disproving) theorems, the price being that you have to recast your
theorems in a simplified language. (Prolog uses Horn clauses, a
particularly primitive form of theorems. This is one of the reasons why
I don't recommend Prolog except if learning how logic programming
started is of interest.)
Another approach might be proof systems. You give the system a set of
axioms and inference rules and a theorem, and it will tell you whether
the theorem is in line with the axioms (you may have to help the
inference engine by giving hints). These systems aren't tailored towards
producing proofs/disproofs for a huge number of theorems, since getting
a theorem through may require manual intervention - but they provide the
full power of higher-order predicate logic. In other words, if you can
formalize it, you can use a proof system to prove it (or to prove that
there's an error somewhere).
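
(For readers who haven't seen Horn clauses: here is a toy illustration in
Haskell -- not Prolog itself, and restricted to propositional clauses
only -- of how primitive they are, and how little machinery naive
backward chaining over them needs.)

-- A propositional Horn clause: a head plus a (possibly empty) body.
type Clause = (String, [String])

-- Naive backward chaining: a goal is provable if some clause has it as
-- head and all of the body goals are provable in turn.
-- (No cycle detection -- a cyclic program would loop forever.)
prove :: [Clause] -> String -> Bool
prove clauses goal =
  or [ all (prove clauses) body | (hd, body) <- clauses, hd == goal ]

main :: IO ()
main = print (prove program "mortal")   -- True
  where
    program = [ ("mortal", ["man"])     -- mortal :- man.
              , ("man",    []) ]        -- man.  (a fact)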

Regards,
Joachim

Marshall Spight

Jul 4, 2003, 1:19:59 AM
"Joachim Durchholz" <joachim....@web.de> wrote in message news:be2a1q$hi1t$1...@ID-9852.news.dfncis.de...
> ... "logic programming". Prolog was the first in that area, though

> research most certainly has progressed beyond Prolog. (Prolog has some
> serious deficits that make it less than ideal for general programming,
> even of rule systems.) ...
> ... (Prolog uses Horn clauses, a

> particularly primitive form of theorems. This is one of the reasons why
> I don't recommend Prolog except if learning how logic programming
> started is of interest.)

The obvious question is ...

What DO you recommend for logic programming?


Marshall

Thomas Lindgren

Jul 4, 2003, 4:19:19 AM

"Marshall Spight" <msp...@dnai.com> writes:

"Logic programming" is a compromise between logic and programming, and
in its defense and in contrast with many other attempts, Prolog does
provide a meaningful, efficient programming model (where you can
reason about efficiency, and so on). I've written largish amounts of
serious (non-AI) Prolog code, and I found it pleasant enough once I
figured out how it works.

If you are more of a static typing guy, Mercury is probably the
default choice.

For the purpose of theorem proving, where the "logic" part is much
more important than the "programming" part, you're probably better off
using one of the special-purpose theorem provers, though.

I also recommend asking in comp.lang.prolog or perhaps comp.constraints.

Best,
Thomas
--
Thomas Lindgren
"It's becoming popular? It must be in decline." -- Isaiah Berlin

Fergus Henderson

Jul 4, 2003, 4:33:59 AM
Alan Connor <xxx...@xxxx.xxx> writes:

>I know a guy in Seattle who is on the team that writes the software that
>designs the Boeing airplanes.
>
>He has a sign over his desk that says:
>
> ARTIFICIAL INTELLIGENCE
>
> When the Real Thing Just Won't Do

Cute.

>He doesn't believe that AI is even possible, that it is just hype and smoke
>and mirrors to sell stocks and make jobs.

AI is possible. But, that said, full AI won't be achieved any time soon,
and it's certainly true that there has been plenty of hype and such like
in the AI field.

>It is as easy as pie to design a program that will make a computer SEEM to
>be intelligent, but the fact is that no one understands what HUMAN intelligence
>is or how it works,

No-one understands it completely. But we do have a reasonable understanding
of the general principles -- enough to be confident that it will eventually
be possible to simulate the process.

>and that makes it very difficult indeed to create a
>technological equivalent, doesn't it?

Sure. But very difficult is not the same as impossible. It probably
won't happen in our lifetimes, but in the long run it seems pretty
inevitable, barring some civilization-destroying catastrophe.

--
Fergus Henderson <f...@cs.mu.oz.au> | "I have always known that the pursuit
The University of Melbourne | of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh> | -- the last words of T. S. Garp.

Joachim Durchholz

Jul 4, 2003, 6:24:29 AM
Marshall Spight wrote:
>
> The obvious question is ...
>
> What DO you recommend for logic programming?

Actually I can't recommend anything, since my only personal experience
with logic programming were some very early Prologs.

Languages that I have seen recommended are:
1. Mercury.
2. Clean (actually I'm not sure whether it's logic at all).
3. Mozart/Oz. It is multi-paradigm, i.e. you can do linear, logic,
functional, imperative, and constraint programming, each paradigm
combinable with concurrency as needed. The interesting thing about
Mozart/Oz is that it does a good job at establishing a hierarchy of
programming paradigms, i.e. you see quite clearly which parts of logic
programming are just linear programming, which parts of imperative
programming are from functional programming, etc.

AFAIK, Mercury and Clean are industrial-strength, while Mozart/Oz is
more a research project that's headed roughly towards industry.

HTH
Jo

Torbjorn Lager

Jul 6, 2003, 4:12:41 PM
Joachim Durchholz <joachim....@web.de> wrote in message news:<be3ko5

> AFAIK, Mercury and Clean are industrial-strength, while Mozart/Oz is
> more a research project that's headed roughly towards industry.

What do you mean "industrial-strength"? I don't know about Clean, but
in what sense is Mercury more industrial-strength than Mozart/Oz?
AFAIK, Mercury is a research project too. Isn't "industrial-strength"
just a business buzzword, totally devoid of any real meaning?

Cheers,
Torbjörn

Joachim Durchholz

Jul 6, 2003, 6:43:11 PM
Torbjorn Lager wrote:
> Joachim Durchholz <joachim....@web.de> wrote in message news:<be3ko5
>
>>AFAIK, Mercury and Clean are industrial-strength, while Mozart/Oz is
>>more a research project that's headed roughly towards industry.
>
> What do you mean "industrial-strength"? I don't know about Clean, but
> in what sense is Mercury more industrial-strength than Mozart/Oz?
> AFAIK, Mercury is a research project too.

Ah, I didn't know that.

> Isn't "industrial-strength"
> just a business buzzword, totally devoid of any real meaning?

No, it has a very real meaning: it's the quality that makes people
recommend the language if you ask for advice on writing
business-critical software.
That's of course a rather fuzzy definition, but that doesn't make it
useless.

For example, I asked for just this kind of advice before.
Personally, I would have selected Haskell. It has the cleanest and most
concise syntax and semantics I have ever seen.
Yet I chose Erlang. The language itself isn't half as nice, both
syntactically and conceptually. But it comes with a complete soft
real-time framework (that I need). It comes with a distributed,
replicating, transactional, structured-data-per-field database that has
been in field use for years. It has a track record of field reliability
(it contributes a six-digit number of lines of code to Ericsson products
that must not fail for more than half a minute per year).

These are all fuzzy arguments, but they were so strong that I was not
willing to bet the future of the company (and myself) on Haskell.
And THAT is quite a hard definition of "industrial-strength". Well, to
be more precise: "perceived industrial-strength" - it may well be that
all these things would have been available in Haskell had I known
enough, researched enough, written enough helper libraries. Alas, my
time is limited, and I have to rely on hearsay and newsgroup echo - if
we all had enough time to do everything "right", we would already have
done all the work that needs to be done in information technology.
And we'd still be doing it in assembly since all improvements in
language designs were driven by the need to make programmer time more
productive...

Regards,
Jo

Ralph Becket

Jul 6, 2003, 10:23:44 PM
la...@ling.gu.se (Torbjorn Lager) wrote in message news:<f4b38c7b.03070...@posting.google.com>...

>
> What do you mean "industrial-strength"? I don't know about Clean, but
> in what sense is Mercury more industrial-strength than Mozart/Oz?
> AFAIK, Mercury is a research project too. Isn't "industrial-strength"
> just a business buzzword, totally devoid of any real meaning?

Mercury was intended from the outset to be a declarative language one
could recommend to a software engineer with a straight face. The design
is conservative, has a very efficient and easy-to-predict execution model,
lacks distributed fat (i.e. you only pay for a feature when and where it
is used), compiler error messages are extremely clear, the foreign language
interface is second to none, and the tool set is capable, mature and very
robust.

Now, some more esoteric parts of the language specification remain to be
fully implemented and there *are* some experimental branches of the source
tree. But the main branch is rock solid and is undoubtedly "industrial
strength".

I'm afraid I don't know enough about Mozart/Oz to make a sensible
comparison.

-- Ralph

Joachim Durchholz

Jul 7, 2003, 7:40:08 AM
Ralph Becket wrote:
> I'm afraid I don't know enough about Mozart/Oz to make a sensible
> comparison.

I don't know whether Mercury is industrial-strength by my personal
standards, but Mozart/Oz quite definitely isn't (alas).
For example, the documentation, while extensive, isn't well-indexed, so
it's difficult to find the relevant information - which means that
suboptimal solutions are implemented.
Mozart/Oz is close though. There isn't much that's missing (apart from a
field track record - *somebody* has to use it in the field for the first
time).

I do have my reservations about Oz's syntax. It requires a lot of getting
used to...

I'd like to emphasize that these issues are not the fault of the
Mozart/Oz team. My impression is that Mozart/Oz has lots of excellent
groundwork; what's missing is more the secondary work of documentation,
regression testing, sponsoring the thing into an industrial application,
and integrating the observed results during a year of solidifying it into
an industrial product.
Unfortunately, this is the type of activity that doesn't usually attract
either governmental or industrial funds.

Well, back to work :-)

Regards,
Jo

Torbjörn Lager

Jul 7, 2003, 8:32:52 AM
Joachim Durchholz wrote:

> I don't know whether Mercury is industrial-strength by my personal
> standards, but Mozart/Oz quite definitely isn't (alas).
> For example, the documentation, while extensive, isn't well-indexed, so
> it's difficult to find the relevant information - which means that
> suboptimal solutions are implemented.

Well, here's a link to the docs, so that people can judge for
themselves: <http://www.mozart-oz.org/documentation/>

> Mozart/Oz is close though. There isn't much that's missing (apart from a
> field track record - *somebody* has to use it in the field for the first
> time).

People *are* using it in the field:

http://www.friartuck.net/
http://exploration.vanderbilt.edu/news/news_ANT.htm
http://www.mozart-oz.org/lists/oz-users/4575.html

They aren't telling the world about it... probably wise, since that
would take away their competitive edge :-)

-- Torbjörn

Joachim Durchholz

Jul 7, 2003, 10:58:30 AM

Ah, good to know.
Somehow nobody spoke up when I asked for advice in the area.
Maybe my reliability requirements were beyond the scope of
Mozart/Oz; they are relatively high. (I also did some experiments with
the system, and I wasn't impressed by its integration with external
services - all worked, but error reporting didn't help me pinpoint the
real problems. Too little fine polish - as I said, not enough funding
for that. Or not enough guinea pigs willing to explore these problems
and report them back to the Mozart team. Or whatever.)

> They aren't telling the world about it... probably wise, since that
> would take away their competitive edge :-)

Hmm... that's the typical whining of people who want to support a
language that doesn't have the market share that they think it should have.
Sorry for being so harsh, but I have heard that tune in various other
camps, and I'm tired of it.
Besides, it's not even conclusive. If a language is a key element of
success, this will be proudly presented, if only in investor information
(which is enough to leak this kind of information to the general public).

Besides, if a language isn't advertised, it isn't going to get into
widespread use, so it's going to die a slow death - not an option that
I'd like to bet industrial success on.
People who use a new language know this risk. On one hand, this will
make them shy to take the risk in the first place - but /if/ they take
the risk, they will be as loud-mouthed about it as possible, just to see
as many other development houses jump on the bandwagon and build a viable
user community.

It's OK if some companies try to keep their use of advanced technology
secret. After all, some companies are more paranoid about competition
than about building a viable user community.
But if /all/ companies keep it a secret - then /that/ is a reason for
concern.

Regards,
Jo

Damien Sullivan

Jul 7, 2003, 4:49:44 PM
alanc...@earthlink.net wrote:
>
>I know a guy in Seattle who is on the team that writes the software that
>designs the Boeing airplanes.

Which doesn't make him an expert on AI. Me, I'm a grad student with Douglas
Hofstadter.

>He doesn't believe that AI is even possible, that it is just hype and smoke
>and mirrors to sell stocks and make jobs.

Stocks? Dang, we academic researchers are really missing out. And I was
making way more as a fairly good programmer with a real job than I do as a
grad student.

So what does he believe which makes AI impossible? Souls? Quantum effects
which somehow manifest in cellular soup at room temperature? Mysterious
consciousness properties of carbon?

>It is as easy as pie to design a program that will make a computer SEEM to
>be intelligent, but the fact is that no one understands what HUMAN

No, it's not as easy as pie to do that. Getting a program which can seem
intelligent is the Holy Grail of a lot of AI researchers, cf. the Turing Test.
(Not all think it's valid, which is why I just say "a lot" of researchers.)

>intelligence is or how it works, and that makes it very difficult indeed to
>create a technological equivalent, doesn't it?

So we work from both ends in cognitive science. Neurologists and
psychologists work on how the brain really works, and CS types think about the
problems humans try to solve, and how one might engineer a system to attack
the problem, and hopefully both ends listen to each other. And the
philosophers tell us all what they think we're doing wrong.

-xx- Damien X-)

Jerzy Karczmarczuk

Jul 8, 2003, 6:02:36 AM
"Damien Sullivan" fights against fighters against AI

> alanc...@earthlink.net wrote:
> >
> >I know a guy in Seattle ...


>
> So what does he believe which makes AI impossible? Souls?
> Quantum effects which somehow manifest in cellular soup at room
> temperature?

...

Now, I won't engage in this discussion, which like *all* discussions on
AI leads nowhere, but just one comment. If you think that quantum
effects within a highly incoherent, macroscopic bulk of grey matter
are to be dismissed lightly because "as we know", QM operates on
small, microscopic, preferably cold and coherent systems, you might
be DEAD WRONG.

Personally I am almost certain that life and intelligence (true one)
have a lot to do with quantum physics. I am not a sectarian. I have
just seen that the collective behaviour may be conditioned by quantic
phenomena in many objects much bigger than a school example. Anyway
the Planck spectrum is quantic, and it conditions the whole of our
world...

* Q.Ph. determines the energetic spectra of quite hot and messy
semiconductors.
Your pocket laser is as trivial as a screwdriver, yet it works
on quantic phenomena.
* In hot plasma soups you find a lot of quasi-particles which
wouldn't be there without quanta. Polarons, and God knows what.
* There are suspicions that during the replication of DNA there
are phenomena of coherent amplification of electromagnetic
interactions, which protect the system against errors. The DNA
chains behave -- in a sense -- like superconductors.

I could continue... resonances, quasi-particles, symmetry-breaking,
etc. ..., but after all this is comp.lang.functional.


Jerzy Karczmarczuk


--
Posted via Mailgate.ORG Server - http://www.Mailgate.ORG

Frank Buss

Jul 8, 2003, 7:48:03 AM
"Jerzy Karczmarczuk" <kar...@info.unicaen.fr> wrote:

> Personally I am almost certain that life and intelligence (true one)
> have a lot to do with quantum physics. I am not a sectarian. I have
> just seen that the collective behaviour may be conditioned by quantic
> phenomena in many objects much bigger than a school example. Anyway
> the Planck spectrum is quantic, and it conditions the whole of our
> world...

I'm sure you know Occam's Razor and I think collective behaviour can be
explained with local behaviour. A nice example is my cellular automaton:

http://www.frank-buss.de/automaton/totalistic.html

(don't forget to enter the "islands" parameters)
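
(For the idea in general, here is a minimal one-dimensional analogue in
Haskell -- it has nothing to do with the particular rule in the applet,
it just shows a totalistic update, where each cell's next state depends
only on the sum over its local neighbourhood:)

-- One-dimensional totalistic cellular automaton: the next state of a
-- cell depends only on the sum over its three-cell neighbourhood.
step :: (Int -> Int) -> [Int] -> [Int]
step rule cells =
  zipWith3 (\l c r -> rule (l + c + r)) (0 : cells) cells (drop 1 cells ++ [0])

-- An arbitrary example rule: alive iff the neighbourhood sum is 1 or 2.
rule12 :: Int -> Int
rule12 s = if s == 1 || s == 2 then 1 else 0

main :: IO ()
main = mapM_ print (take 6 (iterate (step rule12) start))
  where start = replicate 7 0 ++ [1] ++ replicate 7 0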

--
Frank Buß, f...@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de

Jerzy Karczmarczuk

Jul 8, 2003, 8:33:20 AM
Frank Buss comments on my credo:

>
> > Personally I am almost certain that life and intelligence (true one)
> > have a lot to do with quantum physics. I am not a sectarian. I have
> > just seen that the collective behaviour may be conditioned by quantic
> > phenomena in many objects much bigger than a school example.

> I'm sure you know Occam's Razor and I think collective behaviour can be
> explained with local behaviour. A nice example is my cellular automaton:
>
> http://www.frank-buss.de/automaton/totalistic.html

We might annoy a lot of people interested just in FP, but, well,
anyway...

I don't see at all what you are trying to suggest.
For all physicists *obviously* the collective behaviour is a result of
the propagation of the local dynamics. Your applet resembles an XY or
a (possibly frustrated) Ising model showing a slightly atypical phase
transition, that's alright, nice. Nobody claims (I would be the last)
that you need something totalistico-magic to have collective phenomena.
Phase transitions don't need holistic philosophy nor even quantum
physics.

I suggested only that delicate quantic phenomena in several BIG bodies
get amplified, and you end up with a quantum object as brutal and cheap
as a Hong-Kong made pocket laser, so without a thorough quantitative
analysis we cannot exclude that some neuron properties are due to
some hidden coherence. I have both feet on the ground. May Ockham
repose in peace. We don't need him to protect us from the speculation
on the behaviour of a Josephson junction (another macroscopic quantum
object) or on the dynamics of vortices in a huuuge bucket of liquid
helium.

BTW, the science called chemistry could live and evolve for centuries
without quantum mechanics, but nowadays, and especially in organic
chemistry, everybody knows that classical, even very elaborate, models of
molecules are worthless; all aromatic compounds are the result of a
delicate quantum balance of forces. Kekule could imagine the cyclic
form of benzene, but its chemical properties, the stability of the bonds,
*must* be computed using a quantum approach.

So, knowing that quantum systems in general are infinitely more complex
than all classical stuff, knowing that entanglement may be a powerful
resource of information, knowing that quantum system can compute
the Fourier transform in constant time and accelerate enormously the
research/info retrieval algorithms, -
- I wouldn't be surprised at all, if the essence of what we call the
intelligence of biologic tissues would turn out to be par excellence
quantic. This is no science fiction. The un-science fiction is the
speculation on the relation between a brain and the Turing machine...

Sleep well.

Richard Bos

Jul 8, 2003, 10:36:23 AM
In article
<b140ffb96d3e4aab358...@mygate.mailgate.org>,
"Jerzy Karczmarczuk" <kar...@info.unicaen.fr> wrote:

> "Damien Sullivan" fights against fighters against AI
>
> > alanc...@earthlink.net wrote:
> >
> > >I know a guy in Seattle ...
> >
> > So what does he believe which makes AI impossible? Souls?
> > Quantum effects which somehow manifest in cellular soup at room
> > temperature?
>

> Now, I won't engage in this discussion, which like *all* discussions on
> AI leads nowhere, but just one comment. If you think that quantum
> effects within a highly incoherent, macroscopic bulk of grey matter
> are to be dismissed lightly because "as we know", QM operates on
> small, microscopic, preferably cold and coherent systems, you might
> be DEAD WRONG.

OTOH, what he might be saying instead is that quantum effects generally
don't give a flying fsck whether they occur in dead transistors or in
living cells.

Hi, btw. I'm new here. I intend to ask some very basic questions about
Clean in the foreseeable future, because I want to find out whether this
functional programming thing works for me... well, all right, actually
because Clean is one of the few free compilers I could find for the Mac
and I might as well give it a shot <g>. Hope you don't mind...

Richard

Mark Carroll

Jul 8, 2003, 12:23:38 PM
In article <rlb-22B391.1...@news.nl.uu.net>,
Richard Bos <r...@hoekstra-uitgeverij.nl> wrote:
(snip)

>Hi, btw. I'm new here. I intend to ask some very basic questions about

Welcome.

>Clean in the foreseeable future, because I want to find out whether this
>functional programming thing works for me... well, all right, actually
>because Clean is one of the few free compilers I could find for the Mac
>and I might as well give it a shot <g>. Hope you don't mind...

You should find that Haskell compilers work with recent Mac OS X too -
look at ghc, nhc, hugs. (Not that there's anything wrong with Clean!)

-- Mark

Aaron Denney

Jul 8, 2003, 2:25:36 PM
Jerzy Karczmarczuk wrote:
> So, knowing that quantum systems in general are infinitely more complex
> than all classical stuff,

Exponentially, not infinitely.

> knowing that a quantum system can compute the Fourier transform in
> constant time

Not constant, and it needs classical pre- and postprocessing to be
useful.

> and accelerate enormously the research/info retrieval algorithms,

Is O(sqrt(N)) better than O(log(N))? (Yes, better than O(N), and
better than O(N log (N)).)

> I wouldn't be surprised at all, if the essence of what we call the
> intelligence of biologic tissues would turn out to be par excellence
> quantic.

I would be horribly surprised. The nerves communicate via chemical
signals at temperatures far above 0K. The decoherence times are tiny.
Yes, the individual chemicals need to be treated quantum-mechanically,
but that's not where the intelligence is. Unlike liquid helium, human
brains are nowhere near 0K. Unlike lasers, the entropy is very high.
Unlike Josephson junctions, brains are hugely chaotic, and highly coupled
to the environment.

--
Aaron Denney
-><-

Fergus Henderson

Jul 9, 2003, 2:19:54 AM
"Jerzy Karczmarczuk" <kar...@info.unicaen.fr> writes:

>- I wouldn't be surprised at all, if the essence of what we call the
>intelligence of biologic tissues would turn out to be par excellence
>quantic.

Intelligence, and more generally the externally observable effects of
consciousness, seem to be a very complicated combination of perception,
memory, deduction, analogy-making, reflective thinking, emotion, etc.
We can already build very crude models of each of these individual
aspects using digital computers... obviously there's a long
way to go in terms of scaling up from these very crude models of
individual aspects to something that could simulate human-like
behaviour, but I don't see anything to indicate that it would require
any fundamentally different approach than our already-implemented crude
models.

Our current efforts may only be sufficient to effectively simulate the
behaviour of relatively simple organisms such as grasshoppers, and
obviously human beings are a lot more complicated than grasshoppers,
but I don't see any reason to suppose that human brains depend on
quantum phenomena in a way substantially different than insect brains.

Richard Bos

Jul 9, 2003, 3:02:16 AM
In article <Y7b*cW...@news.chiark.greenend.org.uk>,
Mark Carroll <ma...@chiark.greenend.org.uk> wrote:

> In article <rlb-22B391.1...@news.nl.uu.net>,
> Richard Bos <r...@hoekstra-uitgeverij.nl> wrote:
>
> >Clean in the foreseeable future, because I want to find out whether this
> >functional programming thing works for me... well, all right, actually
> >because Clean is one of the few free compilers I could find for the Mac
> >and I might as well give it a shot <g>. Hope you don't mind...
>
> You should find that Haskell compilers work with recent Mac OS X too -

Ah. Well. There's the problem. I don't _officially_ have a Mac. Being
one of the sysadmins as well as the local programmer, I managed to
kidnap one of the very old Macs that were being replaced by G4s, just so
I could get a bit more experience on Macs. It's a Power Macintosh, but
the Power i.c. is fractional, I think. In any case, I don't think it
will even run OS X, and if it did, it'd run like frozen treacle
through a capillary vein. I couldn't even find a C compiler for it,
except ones which cost more than my employer would be willing to pay for
what is, after all, just an experiment.
But it runs Clean just fine. Hey, maybe I'll use Clean to write a free C
compiler for MacOS!

Richard

Jerzy Karczmarczuk

Jul 9, 2003, 4:27:14 AM
"Aaron Denney" corrects my imprecisions:

> Jerzy Karczmarczuk wrote:
> > So, knowing that quantum systems in general are infinitely more complex
> > than all classical stuff,
>
> Exponentially, not infinitely.

Well, 2 to the power aleph0 is infinitely bigger than aleph0. For
discrete, finite systems you are right, I am wrong. For the 'real
stuff' I see no difference, and my formulation is more dramatic, so
I keep it.

> > knowing that quantum system can compute the Fourier transform in
> > constant time
>
> Not constant, and it needs classical pre- and postprocesing to be
> useful.

Useful for whom? It can be further processed by quantum units. Sure,
the interfacing *will* consume some time, but I am not sure (are you?)
what the penalties are in the general case.


> > and accelerate enormously the research/info retrieval algorithms,
>
> Is O(sqrt(N)) better than O(log(N))? (Yes, better than O(N), and
> better than O(N log (N)).)

Hm. Here I shall shut my mouth. But the analysis of Grover's algorithm
does not preclude other solutions. Somehow I remain optimistic
concerning the power of quantum search algorithms... You just weit,
Professor Higgins, you just weit, as sang Eliza Doolittle.

==

Then you point out that the "hot cellular soup" has much bigger
entropy than lasers and other exemplar systems I mentioned. That
nervous signals are chemical, that the decoherence should be awful,
and interaction with the environment, dominating. Chaos everywhere.
So no quanta...

Perhaps no, perhaps yes. Mind you, the entropy of a laser is very
big as well. The point is that the system is very far from equilibrium
and the generation of coherent wave due to the stimulated emission
becomes effective. Laser action is a kind of morphogenesis; all those
obscure analogies are well covered by the articles on Synergetics,
see Haken et al.

(This is not another sectarian church, but an attempt to find some
mathematical universalities in the behaviour of different dynamic
systems; a philosophy a little bit similar to that of René Thom and
his theory of catastrophes. In a very speculative sense, it has
something to do with functional programming, where one looks often
for universal properties... Alright, I agree, it *is* a kind of
church, but not very dangerous for the society.)

Now, I am not crazy enough to suppose that a neuron or a synaptic
cluster is a laser or something equivalent... But somehow -- I cannot
explain why, and I will not apologise for it -- I think that reducing the
brain to a classical computational system, a Turing machine, or whatever,
is even more maniacal than supposing that quanta might be relevant.
It is very easy to become sectarian here. A *very* intelligent fellow,
Roger Penrose, wrote things which are hardly or simply not acceptable.

=====
Fergus Henderson declares:

> Our current efforts may only be sufficient to effectively simulate
> the behaviour of relatively simple organisms such as grasshoppers,
> and obviously human beings are a lot more complicated than
> grasshoppers, but I don't see any reason to suppose that human
> brains depend on quantum phenomena in a way substantially different
> than insect brains.

Yes, I think you are absolutely right. Quantum phenomena do not add
souls, do not produce extrasensorial perception through interaction
with Everett parallel worlds, or other rubbish of that kind.
They simply help -- because of induced coherence, because of resonances,
because of gaps in the energy spectra -- to *stabilize* otherwise
chaotic systems. They help the self-structuring of complex systems
far from thermal equilibrium. Insects are no different from Einsteins
from this perspective.
But I disagree that we *can* realistically model ants and other
simple 'cognitive' structures. This is just a phenomenological
simulation; its relation to reality might be much more distant than
the relation between Ptolemaic deferents and epicycles, and
the planet orbits as deduced from the General Theory of Relativity.

Thank you all.

Joachim Durchholz

Jul 9, 2003, 4:55:47 AM
Jerzy Karczmarczuk wrote:
> Then you point out that the "hot cellular soup" has much bigger
> entropy than lasers and other exemplar systems I mentioned. That
> nervous signals are chemical, that the decoherence should be awful,
> and interaction with the environment, dominating. Chaos everywhere.
> So no quanta...
>
> Perhaps no, perhaps yes.

Which quite nicely sums it up.
We Don't Know.
Having quantum theory explain the mind is an intriguing concept. It's
definitely a theory worth testing.
But it's nothing that I'd base my mental health on right now. On the
surface, it seems damn unlikely, and when viewed from the
history-of-philosophy angle, it's just another case of pushing the
beloved mysterious farther into the corner of the unknown to keep it
mysterious. In other words, like it or not, intend it or not, hoping for
consciousness (or the mind) to live in the quantum effects of our grey
matter is a seamless continuation of millennia of wishful thinking.

Let me look at it from another angle.
Even if the mind is some extradimensional force that influences quantum
phenomena in the brain: as soon as science finds out what the mind or
consciousness *are*, this will open the road to understanding its inner
workings, to healing, and to manipulation.

These are horrible prospects, but humanity has been adapting to all
kinds of things that were previously considered horrible.
Those societies that allow minds to be manipulated into eternal bliss
will die because nobody will be able to work. Those that cruelly
suppress their citizens will die due to rebellion (sooner or later -
even socialism died due to rebellion at the top, if you will). Those
that make their citizens happy and keep them working will survive - you
and I wouldn't like this kind of state, it's more like an anthill than
a society of informed citizens - but who are you and I to judge the
value of a far-in-the-future society that we aren't part of?

> But I disagree that we *can* realistically model ants and other
> simple 'cognitive' structures. This is just a phenomenological
> simulation; its relation to reality might be much more distant than
> the relation between Ptolemaic deferents and epicycles, and
> the planet orbits as deduced from the General Theory of Relativity.

That's an interesting line of thought. Unfortunately, this kind of
theory isn't falsifiable.
We cannot hope to gain any further insight into the workings of the
universe if we rely on unfalsifiable theories.

Let me illustrate this with an example.
Scientists have mapped the entire nervous system of a sea slug. (That
particular slug was ideal for this kind of work because it has just a
few dozen neurons, and its neurons are large enough to be microscoped.)
Now there exists a simulation of the complete nervous system of a
slug. If presented with virtual stimuli, it reacts just like the real
slug reacts to equivalent real stimuli.
We may not be able to simulate the consciousness of the slug, or
whatever lives in these quantum phenomena... but we don't need this
"consciousness thing" to explain why the slug behaves like it does, so
why bother about it? Assuming a consciousness is just as speculative as
assuming that none exists, and both assumptions are equally worthless in
understanding the world around us.

Jo

Nick Name

Jul 9, 2003, 7:38:45 AM
On Wed, 09 Jul 2003 09:02:16 +0200
Richard Bos <r...@hoekstra-uitgeverij.nl> wrote:

> I couldn't even find a C compiler for it

I guess you will find all the compilers you need (except VBA <g>) if you
install linux/ppc (try yellowdog or debian-ppc) :)

Vincenzo

Borcis

Jul 9, 2003, 9:14:10 AM
Alan Connor wrote:
>
> It is as easy as pie to design a program that will make a computer SEEM to
> be intelligent, but the fact is that no one understands what HUMAN intelligence
> is or how it works, and that makes it very difficult indeed to create a
> technological equivalent, doesn't it?

To me this statement sounds self-contradictory. If no one understands
the essence, appearance is the only testable form of equivalence.

Peter "Firefly" Lund

Jul 9, 2003, 9:12:10 AM

Moscow ML should work with Mac OS 9 and Mac OS X:
(I haven't tried it myself, though)

http://www.dina.dk/~sestoft/mosml.html

-Peter

"It took a long time to figure out that people talking to one another, instead of simply
uploading badly-scanned photos of their cats, would be a useful pattern."
-- Shirky

Richard Bos

Jul 9, 2003, 9:29:05 AM
In article
<30f5b050565c74b20fe...@mygate.mailgate.org>,
"Jerzy Karczmarczuk" <kar...@info.unicaen.fr> wrote:

> "Aaron Denney" corrects my imprecisions:
>
> > Jerzy Karczmarczuk wrote:
> > > So, knowing that quantum systems in general are infinitely more complex
> > > than all classical stuff,
> >
> > Exponentially, not infinitely.
>
> Well, 2 to the power aleph0 is infinitely bigger than aleph0. For
> discrete, finite systems you are right, I am wrong. For the 'real
> stuff' I see no difference, and my formulation is more dramatic, so
> I keep it.

The "real stuff" is also finite. It's finite with some literally
astronomical constants thrown into the O()-computations, but it _is_ (to
the best of current scientific current knowledge) finite :-).

> Hm. Here I shall shut my mouth. But the analysis of Grover algorithm
> does not preclude other solutions. Somehow I remain optimistic
> concerning the power of quantum search algorithms... You just weit,
> Professor Higgins, you just weit, as sang Eliza Doolittle.

That'd be 'Enry 'Iggins, IIRC...

Richard

Richard Bos

Jul 9, 2003, 9:32:37 AM
In article <20030709134118.3...@ANTI.SPAM.inwind.it>,
Nick Name <nick...@ANTI.SPAM.inwind.it> wrote:

> On Wed, 09 Jul 2003 09:02:16 +0200
> Richard Bos <r...@hoekstra-uitgeverij.nl> wrote:
>
> > I couldn't even find a C compiler for it
>
> I guess you will find all the compilers you need (except VBA <g>)

Who needs VBA? ;->

> if you install linux/ppc (try yellowdog or debian-ppc) :)

On a Power Mac 7100/66? I suspect not. It's having enough problems
running the OS it was designed for at a reasonable speed. Besides, that
would negate the point of the exercise, which is for me to learn a bit
about MacOS...

Richard

Richard Bos

Jul 9, 2003, 9:34:22 AM
In article <Pine.LNX.4.55.03...@ask.diku.dk>,
"Peter \"Firefly\" Lund" <fir...@diku.dk> wrote:

> http://www.dina.dk/~sestoft/mosml.html

Thanks... noted for later reference. I think I'll confuse myself with
one functional language at a time...

Richard

David Basil Wildgoose

Jul 9, 2003, 10:31:03 AM
I thought that the application of Gödel's Theory did a good job of
demolishing deterministic approaches to Artificial Intelligence, (at
least that's what I remember from Hofstadter's "Gödel, Escher and
Bach" anyway).

But who is to say that both sides of the argument are not correct?
That is, that local effects rely on quantum processes, but that these
local effects then combine into the emergent behaviour that we can see
in cellular automata?

George Russell

Jul 9, 2003, 10:45:29 AM
David Basil Wildgoose wrote:
> I thought that the application of Gödel's Theory did a good job of

> demolishing deterministic approaches to Artificial Intelligence, (at
> least that's what I remember from Hofstadter's "Gödel, Escher and
> Bach" anyway).

No, I don't think so. But Dreyfus & Dreyfus's book "Mind over Machine"
destroyed any hope I had of artificial intelligence of similar power to
human intelligence achievable through simple symbol manipulation. I may
be wrong (it not being anything like my field), but I get the impression
most AI researchers would agree.

Borcis

Jul 9, 2003, 3:02:44 PM
Jerzy Karczmarczuk wrote:
>
> Well, 2 to the power aleph0 is infinitely bigger than aleph0.

OTOH, there's the downward Loewenheim-Skolem theorem, which says that
anything that has a model of size >= aleph0 must have a model of size <= aleph0.

Damien Sullivan

Jul 9, 2003, 4:18:10 PM
George Russell <g...@tzi.de> wrote:
>David Basil Wildgoose wrote:
>> I thought that the application of Gödel's Theory did a good job of
>> demolishing deterministic approaches to Artificial Intelligence, (at
>> least that's what I remember from Hofstadter's "Gödel, Escher and
>> Bach" anyway).

I think you badly misremember GEB; Hofstadter quotes Lucas as arguing that
Gödel demolishes AI (and Gödel himself may have leaned that way too) but
Hofstadter and Dennett and others disagree.

>No I don't think so. But Dreyfus & Dreyfus's book "Mind over Machine"
>destroyed any hope I had of artificial intelligence of similar power to
>human intelligence achievable through simple symbol manipulation. I may
>be wrong (it not being anything like my field), but I get the impression
>most AI researchers would agree.

I only know Dreyfus secondhand, so I don't know exactly what he was attacking.
Hofstadter argues against symbols being shunted around from above as a route
to AI, against AI arising out of a single level of formalism, or top level
formalism. Instead preferring "active symbols" arising out of subcognitive
events. It's still all computational at the bottom level, but the top level
ends up being more fluid and complication and more intelligent-like. I have
no idea if the Dreyfus's would agree or if they don't believe in computers
supporting intelligence at all. Not that I really care what they think,
either...

-xx- Damien X-)

Fergus Henderson

Jul 9, 2003, 9:18:55 PM
wild...@operamail.com (David Basil Wildgoose) writes:

>I thought that the application of Goedel's Theory did a good job of
>demolishing deterministic approaches to Artificial Intelligence,

Not at all.

>(at least that's what I remember from Hofstadter's "Goedel, Escher and
>Bach" anyway).

You misunderstood it.

Some people who ought to know better, such as Penrose, have argued that,
but this argument is very easily demolished. Goedel's work only
implies that deterministic AI can't be infallible (except in limited
domains). But we humans aren't infallible either. AIs could certainly
be a lot less fallible than us humans.

Ralph Becket

Jul 9, 2003, 10:43:30 PM
wild...@operamail.com (David Basil Wildgoose) wrote in message news:<265d96ac.03070...@posting.google.com>...

> I thought that the application of Gödel's Theory did a good job of
> demolishing deterministic approaches to Artificial Intelligence, (at
> least that's what I remember from Hofstadter's "Gödel, Escher and
> Bach" anyway).

No, Gödel's incompleteness theorems only apply to consistent formal
systems. Many people who bring this up as an "AI can't work" argument
seem to be labouring under the impression that an AI would have to be
some kind of theorem prover. There is no reason to suspect that this
should be the case and plenty of evidence (i.e. people) to suggest that
an AI almost certainly would not be an overgrown theorem prover.

> But who is to say that both sides of the argument are not correct?
> That is, that local effects rely on quantum processes, but that these
> local effects then combine into the emergent behaviour that we can see
> in cellular automata?

One point is that it seems simpler models than quantum mechanics are
sufficient to explain the workings of the brain.

But, even if one really does require the full precision of QM,
quantum computations can still be simulated on a Turing machine and
identical results obtained. Quantum computers just happen to do some
things a little faster. They do not allow us to answer questions that
cannot be answered with a Turing machine.

-- Ralph

David Basil Wildgoose

Jul 10, 2003, 3:54:48 AM
f...@cs.mu.oz.au (Fergus Henderson) wrote in message news:<beietv$nmf$1...@mulga.cs.mu.OZ.AU>...

> wild...@operamail.com (David Basil Wildgoose) writes:
>
> >I thought that the application of Goedel's Theory did a good job of
> >demolishing deterministic approaches to Artificial Intelligence,
>
> Not at all.
>
> >(at least that's what I remember from Hofstadter's "Goedel, Escher and
> >Bach" anyway).
>
> You misunderstood it.
>
> Some people who ought to know better, such as Penrose, have argued that,
> but this argument is very easily demolished. Goedel's work only
> implies that deterministic AI can't be infallible (except in limited
> domains). But we humans aren't infallible either. AIs could certainly
> be a lot less fallible than us humans.

I know it's nearly 20 years since I read it, but I don't think I got
it that badly wrong. In fact, I respectfully suggest that it is you
that has missed an important implication. Gödel's work shows every
formal system has theorems (ideas if you will) that cannot be
expressed within it.

So the question then becomes: do you accept that there are limitations
as to what a human being is capable of comprehending, or not?

If our intelligence can ultimately be modelled by a formal system,
then this must be the case (and we would be unaware of the fact).

So basically this is turning into a religious argument between your
point of view that our intelligence can be modelled formally, and thus
"Only God can be all-knowing", and my point of view (as an atheist
humanist) that refuses to accept limitations on what human beings can
do or understand. My point of view implies that our intelligence
cannot be formally modelled, and thus I am afraid that I am in
fundamental opposition to your opinions on this matter.

Jerzy Karczmarczuk

Jul 10, 2003, 4:59:41 AM
Ralph Becket wrote:

> One point is that it seems simpler models than quantum mechanics are
> sufficient to explain the workings of the brain.
>
> But, even if one really does require the full precision of QM,
> quantum computations can still be simulated on a Turing machine and
> identical results obtained. Quantum computers just happen to do some
> things a little faster. They do not allow us to answer questions that
> cannot be answered with a Turing machine.

I knew that this QM business would explode in all directions, and I promised
myself not to take part in Turingalilas and other Exotic Symphonies...
Oh well.

First, we have really NO MODELS for the cognitive and "thinking" aspects of
the brain. Surely, it is a macroscopic structure, and we are condemned to
apply classical models to its physiology, to the modelling of the generation
of alpha waves, or whatever. But *this is not the point*! Here, in Caen
at Cyceron (PET camera) there is a research project showing that when people
think hard about some specific problems, there are zones inside the brain which
consume more oxygen. Nice. But what are the conclusions?


Second, thinking that quantum computations can be *really* simulated on a
Turing machine is - with my full respect - simply preposterous. Now, I know,
I know, don't tell me all that which concerns discrete, finite structures.
But in the real world the "quantum computations" are machines which would
work on an infinite, *non-enumerable* collection of "tapes". You may make
any models thereof you wish. Do you think that the fact that neurons 'fire'
makes it possible to forget the continuum aspects of their dynamics?

Perhaps for some of you it makes sense to imagine an artificial intelligent
being working as a Turing machine and needing 10^10^765764 years to perform
one elementary logic step. I respect and I appreciate the 'ultimate' questions
(better than answers; read Douglas Adams; put some smileys here if you wish).
But for other people it makes sense to analyze what makes Intelligence
**practical**. What are the mechanisms that permit one to deduce in a couple of
seconds that, say, Iraq has 200 atomic bombs and a swimming pool with sarin?

David Basil Wildgoose sets himself against Fergus Henderson, and declares his
disbelief in the formal modelling of intelligence. We got there from
a discussion about Gödel. You may see now why I dislike too much
formal thinking. Finally everything ends with a declaration of faith. And the
'real stuff' lies in front of our eyes, and asks: "find out *how* I work".


So, I repeat once more. I think that Quantum Physics is *necessary* for the
harmonious, fast and stable behaviour of biological structures which are
substrates for cognitive processes. And that computer scientists should get
beyond the modelling of qubits on simple data structures, and enter one
day the Dark World of systems possessing infinite-dimensional bases, of
entangled continuum structures, etc. The only thing I want to say to
Monsieur XXX who tells me: "I don't need quantum mechanics, I can model
all that stuff classically" is: "let's have another beer".

Bizarre as it may seem, here I believe that functional programming has a lot
to do. The best - in my almost-not-very-humble opinion - representation of
quantum states is *functional objects*. Quantum theory needs a lot of
abstraction, in the mathematical sense. FP paradigms provide some of these
abstractions + the ways of composing and transforming them.
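
To give a flavour of what I mean - a toy Haskell sketch only, with names of
my own invention, and nothing that would impress a physicist: a
(finite-dimensional!) state can be a function from basis labels to complex
amplitudes, and operators become higher-order functions over such states.

import Data.Complex

type Amp        = Complex Double
type State b    = b -> Amp            -- a ket, as a functional object
type Operator b = State b -> State b

-- Superposition and scaling are just pointwise operations on functions.
add :: State b -> State b -> State b
add psi phi = \x -> psi x + phi x

scale :: Amp -> State b -> State b
scale c psi = \x -> c * psi x

-- A one-qubit example: a Hadamard-like operator.
data Qubit = Zero | One

hadamard :: Operator Qubit
hadamard psi = \x ->
  let h = (1 / sqrt 2) :+ 0
  in case x of
       Zero -> h * (psi Zero + psi One)
       One  -> h * (psi Zero - psi One)

Of course the interesting cases - infinite-dimensional, entangled, continuum -
demand far more abstraction than this; the sketch only shows why functions
compose so naturally here.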


Jerzy Karczmarczuk

Frank Buss

Jul 10, 2003, 5:40:06 AM
ra...@cs.mu.oz.au (Ralph Becket) wrote:

> But, even if one really does require the full precision of QM,
> quantum computations can still be simulated on a Turing machine and
> identical results obtained. Quantum computers just happen to do some
> things a little faster. They do not allow us to answer questions that
> cannot be answered with a Turing machine.

"A little faster" is a nice understatement. With Shor's algorithmen for
number factorisation you get the result in polynomial time instead in
exponential, so if you have to wait on a Turing machine 10^50 years, the
quantum computer would be finished in some minutes:

http://www.wikipedia.org/wiki/Shor's_algorithm
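
To show where the speed-up sits, here is a rough classical sketch in Haskell
(my own illustration, obviously not a quantum program): the whole quantum
contribution hides in the order-finding step, which is brute-forced below and
therefore still exponential.

import Data.List (find)

-- Modular exponentiation: a^e mod n.
powMod :: Integer -> Integer -> Integer -> Integer
powMod _ 0 _ = 1
powMod a e n
  | even e    = (h * h) `mod` n
  | otherwise = (a * h * h) `mod` n
  where h = powMod a (e `div` 2) n

-- Smallest r > 0 with a^r = 1 (mod n).  This is the step a quantum
-- computer does in polynomial time; here it is plain brute force.
findOrder :: Integer -> Integer -> Maybe Integer
findOrder a n = find (\r -> powMod a r n == 1) [1 .. n]

-- Given a base a, try to extract a nontrivial factor of n from the order.
shorStep :: Integer -> Integer -> Maybe Integer
shorStep n a = do
  r <- findOrder a n
  if odd r
    then Nothing
    else let x = powMod a (r `div` 2) n
             g = gcd (x + 1) n
         in if g > 1 && g < n then Just g else Nothing

For example, shorStep 15 7 finds the order 4 of 7 modulo 15 and returns
Just 5.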

If you would like to write quantum programs, there is an emulator:

http://tph.tuwien.ac.at/~oemer/qcl.html

But it is not certain that a quantum computer with enough qubits can be
built:

http://arxiv.org/abs/quant-ph/0306103

George Russell

Jul 10, 2003, 5:49:21 AM
David Basil Wildgoose wrote (snipped):

> So the question then becomes, Do you accept that there are limitations
> as to what a human being is capable of comprehending, or not?

Surely this is self-evident? A human can comprehend Wiles's proof of
FLT, with difficulty. Multiply it by 10, and maybe it's still possible,
multiply it by 1000000 and I don't think you've got a hope. If the
human brain is representable by a complex formal system then a Gödel
statement for it might well be 1000000 times as complex as Wiles' proof.

> So basically this is turning into a religious argument between your
> point of view that our intelligence can be modelled formally, and thus
> "Only God can be all-knowing", and my point of view (as an atheist
> humanist) that refuses to accept limitations on what human beings can
> do or understand.

Which leads to the question of course whether God *can* be all-knowing.
I was trying to work out the implications of Gödel's theorem for an
omniscient God, but gave up. One needs to find a Thomist with a PhD
in mathematical logic ...

Richard Bos

Jul 10, 2003, 7:45:23 AM
In article <265d96ac.03070...@posting.google.com>,

wild...@operamail.com (David Basil Wildgoose) wrote:

> f...@cs.mu.oz.au (Fergus Henderson) wrote in message
> news:<beietv$nmf$1...@mulga.cs.mu.OZ.AU>...
> > wild...@operamail.com (David Basil Wildgoose) writes:
> >
> > >I thought that the application of Goedel's Theory did a good job of
> > >demolishing deterministic approaches to Artificial Intelligence,
> >
> > Not at all.
> >
> > >(at least that's what I remember from Hofstadter's "Goedel, Escher and
> > >Bach" anyway).
> >
> > You misunderstood it.
>

> I know it's nearly 20 years since I read it, but I don't think I got
> it that badly wrong.

It's only two years or so since I read it, and you _did_ get it that
badly wrong.

> In fact, I respectfully suggest that it is you
> that has missed an important implication. Gödel's work shows every
> formal system has theorems (ideas if you will) that cannot be
> expressed within it.

"David Basil Wildgoose cannot believe in the truth of this statement."
Discuss.

Richard

David Basil Wildgoose

Jul 10, 2003, 8:56:34 AM
pho...@ugcs.caltech.edu (Damien Sullivan) wrote in message news:<behta2$hv7$1...@naig.caltech.edu>...

> George Russell <g...@tzi.de> wrote:
> >David Basil Wildgoose wrote:
> >> I thought that the application of Gödel's Theory did a good job of
> >> demolishing deterministic approaches to Artificial Intelligence, (at
> >> least that's what I remember from Hofstadter's "Gödel, Escher and
> >> Bach" anyway).
>
> I think you badly misremember GEB; Hofstadter quotes Lucas as arguing that
> Gödel demolishes AI (and Gödel himself may have leaned that way too) but
> Hofstadter and Dennett and others disagree.

You misunderstood me. I wasn't arguing against AI, on the contrary,
it was the prospect of AI that first interested me in computers.
Rather I was arguing against AI being based upon a formal set of
rules.

From memory, there were 2 sketches in GEB that illustrated these 2
points.

The "formal rules" sketch used the concept that for every record
player there was a record that could not be played without destroying
the player.

But there was also an interesting story about an Ant Hill and her
friend the Anteater, which neatly illustrated the emergent behaviour
visible when looking at individual ants at a different level, namely
that of the colony itself.

Joachim Durchholz

Jul 10, 2003, 9:29:13 AM
Ralph Becket wrote:
>
> No, Gödel's incompleteness theorems only apply to consistent formal
> systems.

Agreed.

> Many people who bring this up as an "AI can't work" argument
> seem to be labouring under the impression that an AI would have to be
> some kind of theorem prover.

In this sense, every computer is a theorem prover as well, so that
parallel is rather superficial.
However, a formal system that's inconsistent is useless. Inconsistency
means that every proposition can be derived; when we translate this into
an AI system, this means that the system is free to do anything
regardless of its inputs.

Note that we're talking about inconsistencies in the reasoning itself.
It's entirely possible to have systems that aren't consistent with
reality (and the human mind, as far as it is a formal system, most
definitely isn't fully consistent with reality, else it would be
impossible to fool it). These inconsistencies are on a much less
fundamental level: reality follows one set of axioms and laws, the
"inconsistent" system another one. Each is consistent within its bounds,
you just can't blindly join the two systems without getting inconsistencies.

> There is no reason to suspect that this
> should be the case and plenty of evidence (i.e. people) to suggest that
> an AI almost certainly would not be an overgrown theorem prover.

Every program is a proof of existence.
Trivially so: it proves that, for every input, there exists an output
according to the specifications that the program was written for.
(Modulo programming errors, but you can word the thing so that even that
effect doesn't count.)
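
A trivial illustration (mine, and nothing deeper than the Curry-Howard
reading of types): a total sorting function can be read as a constructive
proof that for every list there exists an ordered rearrangement of it - the
program *is* the witness.

-- Read the type as: "for every list of ordered elements there exists a
-- sorted list" -- the function body is the (constructive) existence proof.
-- (Totality is assumed; Haskell itself does not check it.)
insertionSort :: Ord a => [a] -> [a]
insertionSort = foldr insert []
  where
    insert x []         = [x]
    insert x (y:ys)
      | x <= y          = x : y : ys
      | otherwise       = y : insert x ys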

Of course, this is not what you mean... but it's difficult to tell that
a program is *not* a theorem prover.
For example, I'd expect any serious AI to have a planning component. A
plan is a proof of the theorem that the goals of the plan are
achievable, so the planner is a theorem prover...

So, what do you mean if you say that "an AI [is not] an overgrown
theorem prover"?

Regards,
Jo

Joachim Durchholz

Jul 10, 2003, 9:33:25 AM
Frank Buss wrote:
>
>
> "A little faster" is a nice understatement. With Shor's algorithmen for
> number factorisation you get the result in polynomial time instead in
> exponential, so if you have to wait on a Turing machine 10^50 years, the
> quantum computer would be finished in some minutes:

Right - but it doesn't change a bit about the question whether the human
mind is a formal system or transcends it.

(Personally, I don't see why the human mind should transcend formal
systems - formal systems offer enough complexity to explain all
behaviour. I don't even understand why limitations in thinking are
considered so bad - let people transcend the limitations of their senses
and their prejudices, that's enough work before trying to transcend the
far looser limits of formal systems.)

Regards,
Jo

Jerzy Karczmarczuk

Jul 10, 2003, 9:43:28 AM
Joachim Durchholz wrote:


> Every program is a proof of existence.

Coffee break. Time for even more silly questions.

What is "existence"?

> ... it proves that, for every input, there exists an output according
> to the specifications that the program was written for ...

What are "specifications"?

Jerzy Karczmarczuk

PS. Don't get me wrong. I don't want to destroy your nice mental
gymnastics. You (all) are doing quite well even without my silly
questions. But if one day you can tell me what the specifications
of an intelligent being are, let me know.

Neelakantan Krishnaswami

Jul 10, 2003, 10:55:56 AM
David Basil Wildgoose <wild...@operamail.com> wrote:
>
> I know it's nearly 20 years since I read it, but I don't think I got
> it that badly wrong. In fact, I respectfully suggest that it is you
> that has missed an important implication. Gödel's work shows every
> formal system has theorems (ideas if you will) that cannot be
> expressed within it.

Goedel's theorem restricts *consistent* systems powerful enough to
express arithmetic. Now think of the set of all sets that are not
members of themselves. You had no special problem with that, did you?
Congratulations, you have a proof by construction that your reasoning
processes aren't perfectly consistent.

This isn't meant to be a snide little joke, either. I think that it's
absolutely critical that human beings have reasoning processes that
can gracefully handle inconsistencies. Our perceptions are fallible,
and if our reasoning were required to be something like classical or
intuitionistic logic, then a single false belief could introduce an
inconsistency that would make us believe everything. So our mental
processes must have evolved as some kind of paraconsistent logic in
order to prevent contradictions from proving any proposition.

I wonder if that's why people have so much trouble with functional
programming. The type "a -> b" has, thanks to the Curry-Howard
isomorphism, an equivalent logical proposition "NOT(a) OR b", and
negation and self-reference are the keys to constructing Russell's
paradox. Maybe people intuitively avoid thinking about such things....
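
To make the negation-plus-self-reference point concrete, here is a toy
Haskell sketch (illustrative only, with made-up names, and not a claim about
Haskell-as-a-logic): an unrestricted negative recursive type lets a value
"refute itself", and self-application then yields a "proof" of absolutely
anything - the Russell/Curry collapse. The only saving grace is that the
proof never terminates.

-- A value of Self a is "a refutation of itself": it maps itself to an a.
newtype Self a = Self (Self a -> a)

selfApply :: Self a -> a
selfApply s@(Self f) = f s

-- An inhabitant of every type, i.e. a "proof" of any proposition at all.
-- Evaluating it loops forever, which is exactly the escape hatch that a
-- consistent logic has to close off.
proofOfAnything :: a
proofOfAnything = selfApply (Self selfApply)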

--
Neel Krishnaswami
ne...@alum.mit.edu

George Russell

Jul 10, 2003, 11:03:04 AM
Neelakantan Krishnaswami wrote (snipped):

> Goedel's theorem restricts *consistent* systems powerful enough to
> express arithmetic. Now think of the set of all sets that are not
> members of themselves. You had no special problem with that, did you?
> Congratulations, you have a proof by construction that your reasoning
> processes aren't perfectly consistent.

I don't think it proves anything at all. I can conceive of weapons
of mass destruction without reasoning that weapons of mass destruction
exist; I can conceive of sets that are not members of themselves
without reasoning that sets that are not members of themselves exist.

Costin Cozianu

Jul 10, 2003, 2:07:32 PM
Ralph Becket wrote:
> wild...@operamail.com (David Basil Wildgoose) wrote in message news:<265d96ac.03070...@posting.google.com>...
>
>>I thought that the application of Gödel's Theory did a good job of
>>demolishing deterministic approaches to Artificial Intelligence, (at
>>least that's what I remember from Hofstadter's "Gödel, Escher and
>>Bach" anyway).
>
>
> No, Gödel's incompleteness theorems only apply to consistent formal
> systems. Many people who bring this up as an "AI can't work" argument
> seem to be labouring under the impression that an AI would have to be
> some kind of theorem prover. There is no reason to suspect that this
> should be the case and plenty of evidence (i.e. people) to suggest that
> an AI almost certainly would not be an overgrown theorem prover.
>
>

I'll throw in my 2c. I'm not trying to make a point here, but looking for
some clarification.

I recently discovered the very opinionated and probably very well
informed thoughts of one Jean Yves Girard who has some well deserved
fame and notoriety in computer science and mathematics.

He basically affirms without too much argumentation that automatic proof
doesn't work and will never work (basically because of the halting
problem); to quote him, the computer is but a cyber-cretin who spends its
time verifying the opening and closing brackets. As a sidebar, he also claims
that GEB is a "monument of vulgarity" (readers be advised).

For those who can read French, his articles are extremely funny and
enjoyable:
http://iml.univ-mrs.fr/~girard/Articles.html
"Les fondements des mathématiques",
"Scientisme et obscurantisme",
"Du pourquoi au comment : la théorie de la démonstration de 1950 à nos
jours"

Now I can understand, and think very justified indeed, his claim that
automatic theorem proving will never work. But how do other people with
more background in AI justify the claim that even if they can't prove
theorems, they can still have AI?

In other words, how does theorem proving relate to problem solving? Humans
solve problems, and in order to solve some more complex problems, they
build some formal systems and prove some theorems. Of course, people
can't prove any and all theorems, but it looks like they manage to find
the relevant ones and the provable ones. As far as I know, current
automatic theorem proving relies heavily on human oracles to direct the
computer in a certain direction. It looks like we may approximate such a
theorem prover (problem solver) as a computing machine equipped with
some probabilistic oracle that will help the cyber-cretin avoid the
halting problem. The oracle can model human intuition, maybe? ("Ces
fichues idées")

Constructing a good computing machine (cyber-cretin) for this purpose
looks more and more like a technical problem, but as for constructing a good
oracle - do we even know if it is possible? What if it proves to be
unfeasible in space or in time? Do we have some scientific basis to
believe that AI is possible, or is it just a "positivist" belief?

Any clarifications, input, references will be greatly appreciated.

Thanks,
Costin

Joachim Durchholz

Jul 10, 2003, 2:35:06 PM
Costin Cozianu wrote:
> Constructing a good computing machine (cyber-cretin) for this purpose
> looks more and more like a technical problem

I think provers are at least as old as Lisp - the Logic Theorist dates from
1956, Lisp itself from 1958.

> but constructing a good
> oracle do we know even if it is possible ?

Constructing oracles is easy. An oracle that always says "I don't know"
would be trivial, for example.
When it comes to constructing a *good* oracle: how do you define the
quality of an oracle? Only then can you answer the question whether it's
possible.

E.g. if you say a good oracle should be able to provide enough
information that the prover can prove all true statements, then the
oracle must solve the halting problem. That's obviously too much to ask.
An oracle that always fails to give useful answers is too little.
Where, on the near-continuum between these endpoints, would you say the
oracle is "good"?

> What if it proves to be
> unfeasible in in space or in time ?

Personally, I think it's quite feasible.
It's just that present-day oracles are obviously not good enough. And
the fact that roughly a kilogram of grey matter is able to produce the most
stunning mathematical proofs is grounds for justified hope that this level
of expertise can be reproduced with reasonable effort. (Of course, if
the human mind is somehow connected to "Akasha" or otherwise linked
into an external source of inspiration, this belief may be too
optimistic. Frankly, nobody knows for sure until a serious attempt at
such an oracle has been made.)

> Do we have some scientific basis to
> believe that AI is possible, or is it just a "positivist" belief ?

Artificial intelligence is most definitely possible. It's just that AI
tends to be defined as "those parts of the human intellect that we do
not know how to emulate in software yet".
Playing chess used to be firmly in the domain of AI, until it was clear
that the problem could be solved using massively parallel computing, in
ways that are very different from the way that the human mind works.
Image recognition: likewise.
Learning: was considered AI until neural networks were invented. Today,
neural networks are boilerplate technology.
The Turing test used to be the litmus test of AI - until people began to
write programs that could simulate some nearly-psychotic personalities
that hide their intelligence behind certain interaction patterns.

I'm pretty sure that a useful oracle for theorem provers will be
considered non-AI once a working example is demonstrated.
It's still a Good Thing to have AI researchers. AI has been one of the
more productive areas of research - not in its primary goal, but in
terms of by-products.

Regards,
Jo

George Russell

Jul 10, 2003, 3:20:28 PM
Joachim Durchholz wrote (snipped):

> Artificial intelligence is most definitely possible. It's just that AI
> tends to be defined as "those parts of the human intellect that we do
> not know how to emulate in software yet".
> Playing chess used to be firmly in the domain of AI, until it was clear
> that the problem could be solved using massively parallel computing, in
> ways that are very different from the way that the human mind works.

What is instructive about chess is that it took /very/ much longer to
get chess computers that could beat the best humans than anyone expected.
The very first chess program was I think written in Manchester in about
1950, to solve mate-in-two-move sort of problems. I don't think anyone
would have expected it to take 50 years to actually produce a computer
program that could beat the chess world champion, and I imagine they
would have been disappointed by the fact that these programs work by
considering unimaginably more combinations of potential moves than any
human could do.

> Image recognition: likewise.

Yes, likewise. I don't think anyone in the early days of computing could
have imagined the amount of time it would take to solve problems like
recognising human faces in a variety of conditions, and to this day the
problem of image recognition is still very imperfectly solved, as anyone
who has read a text produced by OCR (surely you'd think OCRing a printed
text could be done reliably pretty easily?) will testify.

> Learning: was considered AI until neural networks were invented. Today,
> neural networks are boilerplate technology.

The very first (artificial) neural networks were called "perceptrons" and were
invented in 1957.

http://ei.cs.vt.edu/~history/Perceptrons.Estebon.html

Useful though neural networks are, they are still no match for humans at
learning. If you disagree, then build one which can learn to prove
theorems like humans can ...
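
(For what such a network amounts to at the very bottom, here is a minimal
perceptron-style learning rule in Haskell - a sketch of my own, names made
up, just to show how little machinery "learning" needs at this level; it
separates linearly separable data and does nothing remotely like proving
theorems.)

type Weights = [Double]

-- Classify an input as +1 or -1; the leading 1 supplies a bias term.
predict :: Weights -> [Double] -> Double
predict w x = if sum (zipWith (*) w (1 : x)) > 0 then 1 else -1

-- Nudge the weights towards a misclassified example (the 1957-style rule).
trainStep :: Double -> Weights -> ([Double], Double) -> Weights
trainStep rate w (x, target) =
  let err = target - predict w x
  in zipWith (\wi xi -> wi + rate * err * xi) w (1 : x)

-- Sweep over the training set a fixed number of times.
train :: Int -> Double -> [([Double], Double)] -> Weights -> Weights
train 0 _    _       w = w
train n rate samples w =
  train (n - 1) rate samples (foldl (trainStep rate) w samples)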

> The Turing test used to be the litmus test of AI - until people began to
> write programs that could simulate some nearly-psychotic personalities
> that hide their intelligence behind certain interaction patterns.

We can see how far this falls short of Turing's original idea by
reading Turing's original paper:

http://www.abelard.org/turpap/turpap.htm

(a very good read). In particular consider Turing's sample dialogue:

> Q: Please write me a sonnet on the subject of the Forth Bridge.
> A: Count me out on this one. I never could write poetry.
> Q: Add 34957 to 70764
> A: (Pause about 30 seconds and then give as answer) 105621.
> Q: Do you play chess?
> A: Yes.
> Q: I have K at my K1, and no other pieces. You have only K at K6 and R at R1. It is your move. What do you play?
> A: (After a pause of 15 seconds) R-R8 mate.
(note the deliberate mistake in the answer to the second question). Much later in the
paper he provides another dialogue, addressed to a "sonnet-writing machine".

> Interrogator: In the first line of your sonnet which reads 'Shall I compare
> thee to a summer's day', would not 'a spring day' do as well or better?
> Witness: It wouldn't scan.
> Interrogator: How about 'a winter's day' That would scan all right.
> Witness: Yes, but nobody wants to be compared to a winter's day.
> Interrogator: Would you say Mr. Pickwick reminded you of Christmas?
> Witness: In a way.
> Interrogator: Yet Christmas is a winter's day, and I do not think Mr.
> Pickwick would mind the comparison.
> Witness: I don't think you're serious. By a winter's day one means a
> typical winter's day, rather than a special one like Christmas.

We can also see how far adrift we are of what Turing expected by
considering the following prediction he made in the paper (written in 1950):

> I believe that in about fifty years time it will be possible to programme
> computers with a storage capacity of about 10^9 to make them play the imitation
> game so well that an average interrogator will not have more than 70 per cent
> chance of making the right identification after five minutes of questioning. The
> original question, 'Can machines think?' I believe to be too meaningless to
> deserve discussion. Nevertheless I believe that at the end of the century the
> use of words and general educated opinion will have altered so much that one
> will be able to speak of machines thinking without expecting to be contradicted.

10^9 is I think the number of bits. Well, at least we do have computers with
a storage capacity of 128MB now. Turing elaborates on the hardware requirements
later:

> As I have explained, the problem is mainly one of programming. Advances in
> engineering will have to be made too, but it seems unlikely that these will not
> be adequate for the requirements. Estimates of the storage capacity of the brain
> vary from 10^10 to 10^15 binary digits. I incline to the lower values and believe
> that only a very small fraction is used for the higher types of thinking. Most of
> it is probably used for the retention of visual impressions. I should be
> surprised if more than 10^9 was required for satisfactory playing of the
> imitation game, at any rate against a blind man. (Note--The capacity of the
> Encyclopaedia Britannica, 11th edition, is 2 x 10^9.) A storage capacity of 10^7
> would be a very practicable possibility even by present techniques. It is
> probably not necessary to increase the speed of operations of the machines at
> all. Parts of modern machines which can be regarded as analogues of nerve cells
> work about a thousand times faster than the latter. This should provide a 'margin
> of safety' which could cover losses of speed arising in many ways. Our problem
> then is to find out how to programme these machines to play the game. At my
> present rate of working I produce about a thousand digits of programme a day, so
> that about sixty workers, working steadily through the fifty years might
> accomplish the job, if nothing went into the waste-paper basket. Some more
> expeditious method seems desirable.

But alas, it is clear that Turing misunderstood the nature of the problem:
who now would even dream of attempting a solution to the Turing test with a
machine of speed comparable to the machines of 1950? And consider the state of
the solutions, where much more than 50*60 person-years have been invested in
programming machines for artificial intelligence, and where the best solutions
are not really in principle far advanced beyond the pattern-matching of ELIZA,
though much more sophisticated. (See http://www.alicebot.org)

Damien Sullivan

Jul 10, 2003, 3:50:49 PM
wild...@operamail.com (David Basil Wildgoose) wrote:

>So the question then becomes, Do you accept that there are limitations
>as to what a human being is capable of comprehending, or not?

Well, yeah. Theory: we're finite systems. Practice: individual humans
obviously run into lots of comprehension problems. Comprehension can be
boosted with external memory and clever chunking tricks, but there's no
reason to think the process can be continued infinitely.

>So basically this is turning into a religious argument between your
>point of view that our intelligence can be modelled formally, and thus
>"Only God can be all-knowing", and my point of view (as an atheist
>humanist) that refuses to accept limitations on what human beings can
>do or understand. My point of view implies that our intelligence

False dichotomy. I'm atheist, materialist, soulless-believing, but I accept
limitations on what we can do or understand. For example, we can't solve the
halting problem.

-xx- Damien X-)

Galen Menzel

Jul 10, 2003, 4:43:34 PM
In article <265d96ac.03070...@posting.google.com>, David Basil Wildgoose wrote:
> f...@cs.mu.oz.au (Fergus Henderson) wrote in message news:<beietv$nmf$1...@mulga.cs.mu.OZ.AU>...
>> wild...@operamail.com (David Basil Wildgoose) writes:
>>
>> >I thought that the application of Goedel's Theory did a good job of
>> >demolishing deterministic approaches to Artificial Intelligence,
>>
>> Not at all.
>>
>> >(at least that's what I remember from Hofstadter's "Goedel, Escher and
>> >Bach" anyway).
>>
>> You misunderstood it.
>>
>> Some people who ought to know better, such as Penrose, have argued that,
>> but this argument is very easily demolished. Goedel's work only
>> implies that deterministic AI can't be infallible (except in limited
>> domains). But we humans aren't infallible either. AIs could certainly
>> be a lot less fallible than us humans.
>
> I know it's nearly 20 years since I read it, but I don't think I got
> it that badly wrong. In fact, I respectfully suggest that it is you
> that has missed an important implication. Gödel's work shows every
> formal system has theorems (ideas if you will) that cannot be
> expressed within it.
>
> So the question then becomes, Do you accept that there are limitations
> as to what a human being is capable of comprehending, or not?

Any possible sufficiently powerful formal system we can ever come up
with is incomplete. This is a hard limit to what our conception of
mathematics can do. This seems like quite a limitation -- not only to
what we can comprehend, but to what we can possibly conceive of.
Maybe the limitations of our formal system can be seen in our concept
of mathematics, mirroring itself in the smaller systems we create.

The greatest thing about this idea is that it can't ever be proved or
disproved. Sounds familiar....

galen

Peter G. Hancock

Jul 10, 2003, 7:04:38 PM

>>>>> George Russell wrote (on Thu, 10 Jul 2003 at 20:20):

> http://www.abelard.org/turpap/turpap.htm

Thanks very much for this link, and for your quotes from it.

What an amazingly clear writer Turing was! (Just compare the
simplicity of his sentences [mostly a line per sentence] with some of
your own! [mostly about 7] :-)

Cheap shots aside:

(Alan.)


>> As I have explained, the problem is mainly one of programming.

(George.)


> But alas, it is clear Turing clearly misunderstood the nature of

> the problem, for ...

Um, really??

Peter

David Basil Wildgoose

Jul 11, 2003, 7:13:52 AM
Richard Bos <r...@hoekstra-uitgeverij.nl> wrote in message news:<rlb-AFFECC.1...@news.nl.uu.net>...

> In article <265d96ac.03070...@posting.google.com>,
> wild...@operamail.com (David Basil Wildgoose) wrote:
>
> > f...@cs.mu.oz.au (Fergus Henderson) wrote in message
> > news:<beietv$nmf$1...@mulga.cs.mu.OZ.AU>...
> > > wild...@operamail.com (David Basil Wildgoose) writes:
> > >
> > > >I thought that the application of Goedel's Theory did a good job of
> > > >demolishing deterministic approaches to Artificial Intelligence,
> > >
> > > Not at all.
> > >
> > > >(at least that's what I remember from Hofstadter's "Goedel, Escher and
> > > >Bach" anyway).
> > >
> > > You misunderstood it.
> >
> > I know it's nearly 20 years since I read it, but I don't think I got
> > it that badly wrong.
>
> It's only two years or so since I read it, and you _did_ get it that
> badly wrong.

How?

You are making a blanket statement with no reasoning to back it up.
On the contrary, I suggest that your failure to back up your
assertions implies that you are unable to do so.

My assertion is that Intelligence cannot be directly modelled, and
that Gödel's work backs that up.

I maintain that Intelligence actually arises "through the cracks" as
emergent behaviour that is not directly modelled by the system.
Hofstadter talks about "strange loops" between different "levels" as
being the heart of the matter. I suggest you go back and re-read it.

> > In fact, I respectfully suggest that it is you
> > that has missed an important implication. Gödel's work shows every
> > formal system has theorems (ideas if you will) that cannot be
> > expressed within it.
>
> "David Basil Wildgoose cannot believe in the truth of this statement."
> Discuss.

On the contrary, your statement actually does a good job of
illustrating why a formal symbol-shuffling approach is never going to
work.

David Basil Wildgoose

Jul 11, 2003, 8:04:12 AM
Neelakantan Krishnaswami <ne...@alum.mit.edu> wrote in message news:<slrnbgr04l...@h00045a4799d6.ne.client2.attbi.com>...

> David Basil Wildgoose <wild...@operamail.com> wrote:
> >
> > I know it's nearly 20 years since I read it, but I don't think I got
> > it that badly wrong. In fact, I respectfully suggest that it is you
> > that has missed an important implication. Gödel's work shows every
> > formal system has theorems (ideas if you will) that cannot be
> > expressed within it.
>
> Goedel's theorem restricts *consistent* systems powerful enough to
> express arithmetic. Now think of the set of all sets that are not
> members of themselves. You had no special problem with that, did you?
> Congratulations, you have a proof by construction that your reasoning
> processes aren't perfectly consistent.

I know. I'm in perfect agreement with you. I don't understand why my
modest suggestion regarding emergent behaviour when looking at things
at a different level should have attracted so much venom. I can only
theorise that there are still plenty of people around who think
that AI can be modelled with a sufficiently big Expert System, despite
Searle's "Chinese Room" thought experiment.

They are perfectly entitled to disagree with Searle, Penrose, etc.,
but I think it needlessly offensive to suggest that just because *I*
don't agree with them either, that I am incapable of understanding (and
presumably too stupid to understand) what they are saying.

Peter "Firefly" Lund

Jul 11, 2003, 9:16:47 AM
On Fri, 11 Jul 2003, David Basil Wildgoose wrote:

> that AI can be modelled with a sufficiently big Expert System, despite
> Searle's "Chinese Room" thought experiment.

Searle misunderstood his own thought experiment.

That discredits him as a relevant philosopher in my eyes.

-Peter

The problem isn't really "what will we use to replace C?", it's "what will
we use to replace Fortran and the other languages that were replaced by C?".
-- Peter da Silva, comp.arch

Hal Daume III

Jul 11, 2003, 11:03:56 AM
On Fri, 11 Jul 2003, Peter "Firefly" Lund wrote:

> On Fri, 11 Jul 2003, David Basil Wildgoose wrote:
>
> > that AI can be modelled with a sufficiently big Expert System, despite
> > Searle's "Chinese Room" thought experiment.

IMO, The "Chinese Room" does not at all show the impossibility of
modelling AI with a sufficiently large system. I could say more, but the
real reason I'm replying is...

> Searle misunderstood his own thought experiment.
>
> That discredits him as a relevant philosopher in my eyes.

I'm curious how he misunderstood it (real question, not a jab)...could you
explain a bit?

- Hal


Marshall Spight

Jul 11, 2003, 11:09:26 AM
"George Russell" <g...@tzi.de> wrote in message news:beke9t$g46$1...@kohl.informatik.uni-bremen.de...

>
> Useful though neural networks are, they are still no match at learning
> to humans. If you disagree, then build one which can learn to prove
> theorems like humans can ...

This conversation has been entirely about qualitative considerations.
But there are also some interesting points to be made from a
quantitative point of view. The above poster noted the difficulties
that machines have had catching up to humans in some problem areas,
and the surprise this difficulty occasioned. This doesn't seem at all
surprising when one brings in quantitative considerations.

In fact, implicit in many of the arguments in this thread has been the
question: if computers could be smart, why aren't they smart now?
If for no other reason, it's at least because they aren't anywhere near
fast enough yet.

Consider: Pentium 4 vs. human brain. What is the relative computing
power?

There are many ways to make the argument: heat dissipation, neural
connections/sec, extrapolations from the retina, etc. But let's just
count junctions for the moment; that's really easy.

Neurons aren't the same as transistors, sure. The neuron has approximately
10^3 more interconnects than the transistor, but the cycle time on the transistor
is much shorter. Let's say they're roughly at parity. This may not be all that
exact, but let's do it for back-of-the-envelope purposes.

Human: 100 billion neurons
P4: 55 million transistors

So the human has over three orders of magnitude more junctions
(10^11 / 5.5x10^7 is roughly 1.8x10^3). Clearly,
the computer has some significant catching up to do.

Here's an interesting paper with a lot more estimates:
http://www.transhumanist.com/volume1/moravec.htm


Marshall

Galen Menzel

Jul 11, 2003, 12:13:06 PM
In article <265d96ac.03071...@posting.google.com>, David Basil Wildgoose wrote:
> Neelakantan Krishnaswami <ne...@alum.mit.edu> wrote in message news:<slrnbgr04l...@h00045a4799d6.ne.client2.attbi.com>...
>> David Basil Wildgoose <wild...@operamail.com> wrote:
>> >
>> > I know it's nearly 20 years since I read it, but I don't think I got
>> > it that badly wrong. In fact, I respectfully suggest that it is you
>> > that has missed an important implication. Gödel's work shows every
>> > formal system has theorems (ideas if you will) that cannot be
>> > expressed within it.
>>
>> Goedel's theorem restricts *consistent* systems powerful enough to
>> express arithmetic. Now think of the set of all sets that are not
>> members of themselves. You had no special problem with that, did you?
>> Congratulations, you have a proof by construction that your reasoning
>> processes aren't perfectly consistent.
>
> I know. I'm in perfect agreement with you. I don't understand why my
> modest suggestion regarding emergent behaviour when looking at things
> at a different level should have attracted so much venom. I can only
> theorise that there are still plenty of people around who still think
> that AI can be modelled with a sufficiently big Expert System, despite
> Searle's "Chinese Room" thought experiment.

The Chinese Room has always kind of puzzled me. It seems to imply
that I can't be intelligent if my neurons aren't as well. I don't
think it really proves anything one way or the other, except that
Searle was confused.

> They are perfectly entitled to disagree with Searle, Penrose, etc.,
> but I think it needlessly offensive to suggest that just because *I*
> don't agree with them either ,that I am incapable (and presumably too
> stupid) to understand what they are saying.

Well, most people *really* think that a lot of what Penrose wrote
about in the Emperor's New Mind is a bunch of hooey. You'd probably
get a similar reaction if you started talking about behaviorism in a
modern linguistics forum.

galen

David Basil Wildgoose

Jul 11, 2003, 7:53:21 PM
Galen Menzel <ga...@alumni.utexas.net> wrote in message:

> Well, most people *really* think that a lot of what Penrose wrote
> about in the Emperor's New Mind is a bunch of hooey. You'd probably
> get a similar reaction if you started talking about behaviorism in a
> modern linguistics forum.

I'm afraid I'm not qualified to talk about "The Emperor's New Mind"
because I haven't read it - I only mentioned Penrose in order to make
the point that it is possible for an intelligent person to hold a
different point of view.

I genuinely don't understand why what I said was so controversial.
Perhaps on "comp.ai", but this is a programming languages forum! And
I take exception to the attitude that just because I don't agree with
someone, this must mean I don't understand the problem - that is
an attitude on a par with "the historic inevitability of communism"
and other such garbage.

I believe that AI is possible, but I do not believe that our
intelligence is nothing more than a big expert system full of "IF a
THEN b" rules.

Fergus Henderson et al may hold the view that their intelligence can
be modeled in that fashion, but I don't believe that at heart I'm a
finite state machine, and I'm pretty confident that the majority of
humanity would side with me.

I have just dug out my copy of GEB. This is what Hofstadter has to
say:

'My belief is that the explanations of emergent phenomena in our
brains - for instance , ideas, hopes, images, analogies and finally
consciousness and free will - are based on a kind of Strange Loop , an
interaction between levels in which the top level reaches back down
towards the bottom level and influences it, while at the same time
being itself determined by the bottom level. In other words, a
self-reinforcing "resonance" between different levels...' (p709)

That sounds almost exactly like my stated position, and despite
the nearly two decades since my late teens I believe I have
both remembered and, more importantly, despite the assertions to the
contrary, understood what Hofstadter was saying.

In all fairness to the opposing point of view, Hofstadter does go on
to say "This should not be taken as an antireductionist position" and
explains that he just thinks that the model should take these strange
loops into account.

But this is just layers upon layers! Leading ultimately to our being
forced to model the Universe itself. At this level I am willing to
concede the point, but at this level the entire Universe is nothing
but a program.

Albert Lai

Jul 12, 2003, 1:18:35 AM
Costin Cozianu <c_co...@hotmail.com> writes:

> I recently discovered the very opinionated and probably very well
> informed thoughts of one Jean Yves Girard who has some well deserved
> fame and notoriety in computer science and mathematics.
>
> He basically affirms without too much argumentation that automatic
> proof doesn't work and will never work (basically because of the
> halting problem),

1. Automated reasoning suffers from the Turing halting problem.
0. Mathematics suffers from the Goedel incompleteness and inconsistency
problem. (I know this is arguable, depending on what you consider
as the purpose of mathematics.)
2. Code optimization suffers from the Kolmogorov complexity problem.

They are all the same problem in different incarnations.

The days when people worried about mathematics because of #0 have long
gone. No one has ever used #2 to dismiss code optimization. In fact
both are now viewed optimistically and jokingly:

Theorem 0: Permanent employment of mathematicians. You will never run
out of new axioms and new models to study.

Theorem 2: Permanent employment of researchers in code optimization.
You will always be able to publish new, better optimizations.

So I don't see why we have to view #1 as a limitation rather than a
gift:

Theorem 1: Permanent employment of researchers in automated reasoning.
You can always find a way to beat your rival's automatic theorem
prover.

Ken Moore

Jul 11, 2003, 5:22:48 PM
Basil Wildgoose <wild...@operamail.com> writes

>I know. I'm in perfect agreement with you. I don't understand why my
>modest suggestion regarding emergent behaviour when looking at things
>at a different level should have attracted so much venom. I can only
>theorise that there are still plenty of people around who still think
>that AI can be modelled with a sufficiently big Expert System, despite
>Searle's "Chinese Room" thought experiment.

Not an expert system, but a brain model on the lines of the one
described by Patricia Smith-Churchland in "Neuro-Philosophy", based on a
bottom-up understanding of neurons and sub-systems of ever-increasing
size. She estimated that at mid-1970s rates of progress there were only
another 700 years to go before the whole brain could be modelled, and
there ought to be big enough computers to hold the model by then, if
civilisation survives.

I thought the Chinese Room was a counter argument to machine
consciousness, not to AI.

--
Ken Moore
k...@mooremusic.org.uk
Web site: http://www.mooremusic.org.uk/
I reject emails > 300k automatically: warn me beforehand if you want to send one

Fergus Henderson

Jul 12, 2003, 12:28:32 PM
wild...@operamail.com (David Basil Wildgoose) writes:

>I know it's nearly 20 years since I read it, but I don't think I got
>it that badly wrong.

I'm afraid you did.

>In fact, I respectfully suggest that it is you
>that has missed an important implication. Goedel's work shows every
>formal system has theorems (ideas if you will) that cannot be
>expressed within it.

Here you are confusing theorems with ideas. That is a bad start...

>So the question then becomes, Do you accept that there are limitations
>as to what a human being is capable of comprehending, or not?
>

>If our intelligence can ultimately be modelled by a formal system,
>then this must be the case, (and we would be unaware of the fact).

A fundamental confusion that often underlies arguments like yours is a failure
to distinguish between the theorem proven by the execution of the AI
program, and the statements made or beliefs held by the AI.
The theorem proven by the execution of the AI will be a *very* *boring*
theorem along the lines of "if you execute such-and-such a program for
such-and-such number of steps starting from this state [00110011...]
then you will get this new state [1111000101...]".
The AI's statements or beliefs, on the other hand, could be much more
interesting, for example "pi is an irrational number" or "I want an
icecream".

Goedel's results apply only to the boring theorem, not to the
AI's statements or beliefs. The statements or beliefs aren't subject
to Goedel's results because, just like human statements and beliefs,
AI statements and beliefs need not be consistent. Trying to apply
Goedel's theorem to inconsistent human-style beliefs is a fundamentally
wrong confusion of different levels.

--
Fergus Henderson <f...@cs.mu.oz.au> | "I have always known that the pursuit
The University of Melbourne | of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh> | -- the last words of T. S. Garp.

Fergus Henderson

Jul 12, 2003, 12:36:10 PM
George Russell <g...@tzi.de> writes:

I agree that Neelakantan Krishnaswami's argument is not proof of his point.
However, I think his point that humans are not perfectly consistent is
obviously correct. All humans who have never made a mistake in their lives
are welcome to disagree with me ;-)

Fergus Henderson

Jul 12, 2003, 12:33:46 PM
wild...@operamail.com (David Basil Wildgoose) writes:

>I wasn't arguing against AI [...]
>I was arguing against AI being based upon a formal set of rules.

What do you mean by "based on"?

AI will have a formal set of rules at the bottom level.
But trying to understand it at that level will be pretty hopeless.
To understand AI, we'd need to look at it at higher levels of abstraction,
and at those levels, thinking of it as a formal set of rules would not
be helpful.

>in GEB [...] there was also an interesting story about an Ant Hill and her
>friend the Anteater, which neatly illustrated the emergent behaviour
>visible when looking at individual ants at a different level, namely
>that of the colony itself.

And what's to say that the individual ants weren't obeying a formal set of
rules?

Fergus Henderson

Jul 12, 2003, 12:54:20 PM
wild...@operamail.com (David Basil Wildgoose) writes:

>Richard Bos <r...@hoekstra-uitgeverij.nl> wrote:
>> It's only two years or so since I read it, and you _did_ get it that
>> badly wrong.
>
>How?

...


>My assertion is that Intelligence cannot be directly modelled, and
>that Goedel's work backs that up.

I'm not sure exactly what you mean by "directly modelled", but if you
are suggesting that intelligence can't be simulated by a formal system,
you're wrong, and if you're suggesting that Goedel's work backs you
up on that, you're badly wrong.

I think your belief here is based on the fundamental confusion between
different levels that I referred to in my earlier post.

>I maintain that Intelligence actually arises "through the cracks" as
>emergent behaviour that is not directly modelled by the system.

Intelligence is definitely emergent behaviour. But it doesn't arise by
magic. It's just that "intelligence" is a very high-level concept
which needs to be viewed at a different level of abstraction than the
underlying formal system which is executing the AI. Nevertheless,
every individual part of the AI is being directly modelled by the
formal system.

Fergus Henderson

Jul 12, 2003, 1:11:43 PM
wild...@operamail.com (David Basil Wildgoose) writes:

>I believe that AI is possible, but I do not believe that our
>intelligence is nothing more than a big expert system full of "IF a
>THEN b" rules.

This statement which you do not believe is misleading.
At *one level*, our intelligence is [probably] exactly that.
But considered at a different level, it is something far more interesting.
Saying that it is "nothing more than" the lowest-level view implies
that the higher-level views are wrong, but they're not.
But you can have your cake and eat it too:
we're complex beings full of complex emotions,
*and* we're [probably] just a big formal system. The two are not
contradictory.

>Fergus Henderson et al may hold the view that their intelligence can
>be modeled in that fashion, but I don't believe that at heart I'm a
>finite state machine, and I'm pretty confident that the majority of
>humanity would side with me.

The majority of humanity also at one time believed that the world was flat.

>In all fairness to the opposing point of view, Hofstadter does go on
>to say "This should not be taken as an antireductionist position"

Right. Exactly.

Your position, on the other hand, seems to be quite antireductionist.
Either that, or you have been miscommunicating (and we have been
misunderstanding). But if you are not antireductionist, I don't see
why you brought up the argument about Goedel's theorem which originated
this discussion.

Joachim Durchholz

Jul 12, 2003, 5:27:11 PM
David Basil Wildgoose wrote:
> I believe that AI is possible, but I do not believe that our
> intelligence is nothing more than a big expert system full of "IF a
> THEN b" rules.

As Fergus said, this narrows your view to the lowest possible level of
formal systems.
The view you're presenting would be equivalent to saying that Goedel's
proof (or, for that matter, any other mathematical proof) is "nothing
more than a big inference full of 'IF a then b' reasoning". That's
correct but irrelevant.

The key is abstraction. You can combine many IF A THEN b items into
something that has interesting properties at a more abstract level. It
might be a program to generate a parser from a description of the
language (which is not a mean feat). It might be a chess program that
can beat Kasparov.
On the other hand, it might be just an incoherent jumble that does
nothing useful - at least when viewed at the abstraction level of "is a
program that does something useful". (Of course, there are many more
useless programs than useful ones.)

"Emergent behaviour" is not something magic that "rises through the
cracks". It's just that humans find something interesting, and are
willing to abstract to be able to reason about these higher-level
properties.

One question is whether intelligence, self-awareness, consciousness, and
intuition are emergent behaviour or something different.
If they are emergent, then the human mind is indeed a finite state
machine (though still unpredictable: chaos effects prevent any hope/fear
of predictability).
If they are different, then the human mind lies outside the scope of
current science. If that "different" source is amenable to scientific
research, I'm sure that the above discussion will reappear (including
all dead ends), but with that source in place of transistors. It's just
another case of pushing the dear concept of a soul further into the
scientifically unknown. (If I understood previous posts, David doesn't
adhere to this belief, I just wanted to outline the consequences of this
concept.)

> I have just dug out my copy of GEB. This is what Hofstadter has to
> say:
>
> 'My belief is that the explanations of emergent phenomena in our
> brains - for instance , ideas, hopes, images, analogies and finally
> consciousness and free will - are based on a kind of Strange Loop , an
> interaction between levels in which the top level reaches back down
> towards the bottom level and influences it, while at the same time
> being itself determined by the bottom level. In other words, a
> self-reinforcing "resonance" between different levels...' (p709)

Hofstadter's assumption is that a Strange Loop (i.e. a self-referring
system) can escape the consequences of Goedel's theorem.
I find this a bit troublesome: he has gone to such lengths to show that
adding theorems and self-referring stuff isn't going to help escape
Goedel, yet he still - somehow - states a "belief" that contradicts what
he's saying elsewhere.
The desire to "be more" than a simple finite state machine must indeed
be strong.

> In all fairness to the opposing point of view, Hofstadter does go on
> to say "This should not be taken as an antireductionist position" and
> explains that he just thinks that the model should take these strange
> loops into account.

I don't know what a "reductionist position" is, much less an
"antireductionist position".
Could anybody explain?

> But this is just layers upon layers! Leading ultimately to our being
> forced to model the Universe itself. At this level I am willing to
> concede the point, but at this level the entire Universe is nothing
> but a program.

No it isn't - the nondeterminism in quantum theory prevents this.
The current state of research is that the human brain is uninfluenced by
quantum theory. All known interactions and state transitions of neurons
can be described in classical terms, which are mechanical and
deterministic. (With the allowance that - extremely rarely - a quantum
phenomenon may reach macroscopic size. Which is probably just as likely
as a spontaneous decomposition of your keyboard into photons - well, give
or take a few orders of magnitude of improbability, but I hope the
analogy is clear *g*.)

Regards,
Jo

Dylan Thurston

Jul 13, 2003, 2:16:33 AM
In article <begle1$4sif4$1...@ID-9852.news.dfncis.de>, Joachim Durchholz wrote:
> Let me illustrate this with an example.
> Scientists have mapped the entire nervous system of a sea slug. (That
> particular slug was ideal for this kind of work because it has just a
> few dozen neurons, and its neurons are large enough to be microscoped.)
> Now there exists a simulation of the complete nervous system of a
> slug. If presented with virtual stimuli, it reacts just like the real
> slug reacts to equivalent real stimuli.
> We may not be able to simulate the consciousness of the slug, or
> whatever lives in these quantum phenomena... but we don't need this
> "consciousness thing" to explain why the sludge behaves like it does, so
> why bother about it? Assuming a consciousness is just as speculative as
> assuming that none exists, and both assumption are equally worthless in
> understanding the world around us.

I've heard about similar experiments before, but have never read a
proper exposition. Do you have a reference? I'd love to see an actual
chart of the few dozen neurons... Has this been done with more
complicated animals?

Peace,
Dylan

Joachim Durchholz

Jul 13, 2003, 7:46:10 AM
Dylan Thurston wrote:
>
> I've heard about similar experiments before, but have never read a
> proper exposition. Do you have a reference?

I think it was in Spektrum der Wissenschaft (the German version of
Scientific American). It was years ago, I don't have the issue
anymore, and it's all IIRC anyway... so, no, I can't really help.

> I'd love to see an actual
> chart of the few dozen neurons...

It wasn't in the article anyway :-/

> Has this been done with more
> complicated animals?

Not that I know of - which means that any experiments that may be
underway didn't make it into the popular science area, not that such
experiments were/are not done.

Regards,
Jo

Neelakantan Krishnaswami

Jul 13, 2003, 9:26:12 AM

I don't particularly care whether or not Russell sets "exist" in
some ontological sense -- the question at hand is whether the
formal rules of inference that people "naturally" use permit
inconsistent constructions. I think it's the case that they do; consider
the circumstances under which Russell's paradox was discovered. Frege,
who was better at rigorous, logical thought than almost everyone who
has ever lived, was trying to formalize mathematics using naive set
theory, when Russell discovered the paradox in the system.

I think that a just-so story for this runs along the following lines:
people evolved the ability to reason with negation and self-reference,
since these are radically useful modes of thought. Then, since these
form an inconsistent formal system, we evolved/invented habits of
thought that "keep us away" from such the proof trees that create such
inconsistencies.

It's not pretty, but it is pretty effective. Coming back to PL, if we
look at Lisp, Smalltalk or Java programmers, we find they are able to
make frequent and effective use of reflection, even though it's a
profoundly dubious operation from the logical viewpoint. That's why
I'm fascinated by things like paraconsistent logic and inconsistency-
tolerant mathematics; it seems like just what we need to formalize
this kind of stuff.

(Note: I won't be able to follow up for the next two weeks or so,
since I'll be out of town.)

--
Neel Krishnaswami
ne...@alum.mit.edu

David Basil Wildgoose

Jul 13, 2003, 2:36:55 PM
Joachim Durchholz <joachim....@web.de> wrote in message news:<bepuj4$7r215$1...@ID-9852.news.uni-berlin.de>...

> David Basil Wildgoose wrote:
> > I believe that AI is possible, but I do not believe that our
> > intelligence is nothing more than a big expert system full of "IF a
> > THEN b" rules.
>
> As Fergus said, this narrows your view to the lowest possible level of
> formal systems.
> The view you're presenting would be equivalent to saying that Goedel's
> proof (or, for that matter, any other mathematical proof) is "nothing
> more than a big inference full of 'IF a then b' reasoning". That's
> correct but irrelevant.
>
> The key is abstraction. You can combine many IF A THEN b items into
> something that has interesting properties at a more abstract level. It
> might be a program to generate a parser from a description of the
> language (which is not a mean feat). It might be a chess program that
> can beat Kasparov.
> On the other hand, it might be just an incoherent jumble that does
> nothing useful - at least when viewed at the abstraction level of "is a
> program that does something useful". (Of course, there are many more
> useless programs than useful ones.)

The point that I am obviously failing to communicate to people is that
you can't only look at the lowest level in the system, but rather you
*also* have to consider the system as a whole - synergy, the whole is
greater than the sum of its parts. This may not be visible from
*within* the system, but it is visible when looking from an external
point.

> "Emergent behaviour" is not something magic that "rises through the
> cracks". It's just that humans find something interesting, and are
> willing to abstract to be able to reason about these higher-level
> properties.

I think that the reason why we find this interesting is because we are
looking at the behaviour from an external point, and are then, as you
say, able to reason about these higher-level properties. These
properties may be a consequence of the lower level rules, but they are
*not* directly encoded within the lower level rules.

> One question is whether intelligence, self-awareness, consciousness, and
> intuition are emergent behaviour or something different.
> If they are emergent, then the human mind is indeed a finite state
> machine (though still unpredictable: chaos effects prevent any hope/fear
> of predictability).
> If they are different, then the human mind lies outside the scope of
> current science. If that "different" source is amenable to scientific
> research, I'm sure that the above discussion will reappear (including
> all dead ends), but with that source in place of transistors. It's just
> another case of pushing the dear concept of a soul further into the
> scientifically unknown. (If I understood previous posts, Daniel doesn't
> adhere to this belief, I just wanted to outline the consequences of this
> concept.)

Oh dear. Back to religion again. :-)

> > I have just dug out my copy of GEB. This is what Hofstadter has to
> > say:
> >
> > 'My belief is that the explanations of emergent phenomena in our
> > brains - for instance , ideas, hopes, images, analogies and finally
> > consciousness and free will - are based on a kind of Strange Loop , an
> > interaction between levels in which the top level reaches back down
> > towards the bottom level and influences it, while at the same time
> > being itself determined by the bottom level. In other words, a
> > self-reinforcing "resonance" between different levels...' (p709)
>
> Hofstadter's assumption is that a Strange Loop (i.e. a self-referring
> system) can escape the consequences of Goedel's theorem.
> I find this a bit troublesome: he has gone to such lengths to show that
> adding theorems and self-referring stuff isn't going to help escape
> Goedel, yet he still - somehow - states a "belief" that contradicts what
> he's saying elsewhere.
> The desire to "be more" than a simple finite state machine must indeed
> be strong.


I admit that it is a "gut" feeling that we are not just FSMs, but
philosophically I have to feel that way - it underpins our notions of
morality. If we are just "machines", then much of our moral
underpinnings about the "sacredness" of human life disappears. Oops.
Religion (or at least ethics) again. :-)


> > In all fairness to the opposing point of view, Hofstadter does go on
> > to say "This should not be taken as an antireductionist position" and
> > explains that he just thinks that the model should take these strange
> > loops into account.
>
> I don't know what a "reductionist position" is, much less an
> "antireductionist position".
> Could anybody explain?

I'm bound to do a bad job of this, so I shall qualify this by saying
that this is my understanding, and that it may well be superficial:

A reductionist position is one in which the mind can be formally
modelled by a basic set of rules. This set of rules may however take
into account the strange self-referential loops between different
levels.

An "antireductionist" position is one which holds the mind to be
effectively indivisible and unknowable. (This may be too simplistic).

My own position tends to the former camp, but with an important
caveat. Although I believe that we can come up with a sufficiently
complex self-referential system that attains the "critical mass" of
complexity required for intelligence, I am less convinced that we can
do so whilst simultaneously encapsulating that intelligence in one
formal set of rules.

Hofstadter says 'a reductionist explanation of a mind, in order to be
comprehensible, must bring in "soft" concepts such as levels,
mappings, and meanings. In principle, I have no doubt that a totally
reductionist but incomprehensible explanation of the brain exists; the
problem is how to translate it into a language we ourselves can
fathom.'

To that position I would like to add that each "translation" is likely
to involve further "strange loops" in need of mapping, thus perhaps
necessitating further translations...

David Basil Wildgoose

unread,
Jul 13, 2003, 3:21:35 PM7/13/03
to
f...@cs.mu.oz.au (Fergus Henderson) wrote in message news:<bepfgf$dns$1...@mulga.cs.mu.OZ.AU>...

> wild...@operamail.com (David Basil Wildgoose) writes:
>
> >I believe that AI is possible, but I do not believe that our
> >intelligence is nothing more than a big expert system full of "IF a
> >THEN b" rules.
>
> This statement which you do not believe is misleading.
> At *one level*, our intelligence is [probably] exactly that.
> But considered at a different level, it is something far more interesting.
> Saying that it is "nothing more than" the lowest-level view implies
> that the higher-level views are wrong, but they're not.
> But you can have your cake and eat it too:
> we're complex beings full of complex emotions,
> *and* we're [probably] just a big formal system. The two are not
> contradictory.
>
> >Fergus Henderson et al may hold the view that their intelligence can
> >be modeled in that fashion, but I don't believe that at heart I'm a
> >finite state machine, and I'm pretty confident that the majority of
> >humanity would side with me.
>
> The majority of humanity also at one time believed that the world was flat.

Zoroastrians still do. Just because you are in a minority doesn't make
you right either.

> >In all fairness to the opposing point of view, Hofstadter does go on
> >to say "This should not be taken as an antireductionist position"
>
> Right. Exactly.
>
> Your position, on the other hand, seems to be quite antireductionist.
> Either that, or you have been miscommunicating (and we have been
> misunderstanding). But if you are not antireductionist, I don't see
> why you brought up the argument about Goedel's theorem which originated
> this discussion.

Having seen what you have written above, I think that we have
misunderstood each other's positions - but that we were *both*
miscommunicating. I made a remark that could be (and was)
misinterpreted because I did not explain fully what I meant:

"I thought that the application of Gödel's Theory did a good job of
demolishing deterministic approaches to Artificial Intelligence, (at
least that's what I remember from Hofstadter's "Gödel, Escher and
Bach" anyway)."

by which I meant that simple "IF a THEN b" systems were inadequate,
but that we had to look more holistically at the whole system and the
interactions across levels.

I then said:

"But who is to say that both sides of the argument are not correct?
That is, that local effects rely on quantum processes, but that these
local effects then combine into the emergent behaviour that we can see
in cellular automata?"

which I hoped would make what I was saying clearer - namely that
complex behaviour can arise from simple rules when looked at from a
different viewpoint.

You then flatly stated that I didn't understand, which, to me at least,
was not only an insult to my intelligence, but also implied that you
were opposed to this viewpoint (and hence implied that only a
deterministic set of rules was necessary, with no requirement for
"emergent behaviour" and self-referential "strange" loops).

Hence the misunderstandings.

You say that I seem to be quite antireductionist. I don't personally
think it is as clear-cut as that, in that I feel the truth is
somewhere in between the two positions. Basically, I am sure that we
can come up with a set of self-referential rules that are capable of
embodying intelligence, but I am less sure that we would then be able
to adequately explain those rules.

Joachim Durchholz

unread,
Jul 13, 2003, 4:17:20 PM7/13/03
to
David Basil Wildgoose wrote:
> The point that I am obviously failing to communicate to people is that
> you can't only look at the lowest level in the system, but rather you
> *also* have to consider the system as a whole

Actually that's what I said.

> - synergy, the whole is
> greater than the sum of its parts.

This is a common saying, but it's misleading.
Whether something is "greater" than something else is entirely a
question of perspective. What's "greater" for an IT person may be
irrelevant to a lawyer, say. Or, even more generally: what's "greater"
for a human would probably be irrelevant to a fox.

> This may not be visible from
> *within* the system, but it is visible when looking from an external
> point.

I don't copy.
From "within the system", there is no observer to whom anything might
be visible. We're always observing from the outside; what's shifting is
the level of abstraction.
At a low level, the human brain is just an inextricable mass of neurons,
firing according to some well-understood rules, but that's all.
At a higher, more abstract level, the human brain is capable of
providing all sorts of interesting stuff. We just don't know (yet) how
the low-level stuff interacts to provide the high-level abstractions.
(Or, if one believes in the mind not being based on matter alone: how
the low-level interaction *fails* to provide the high-level abstractions.)

>>"Emergent behaviour" is not something magic that "rises through the
>>cracks". It's just that humans find something interesting, and are
>>willing to abstract to be able to reason about these higher-level
>>properties.
>
> I think that the reason why we find this interesting is because we are
> looking at the behaviour from an external point, and are then, as you
> say, able to reason about these higher-level properties.

We'd be unable to reason about this regardless of whether it's high or
low level if we weren't looking at it from the outside.
I don't think there's any "inside perspective" at all. If that were the
case, we'd have to be able to observe the firing of individual neurons
within our brain - which, obviously, is not the case (and would not
provide us with any new insights if it were).

> These
> properties may be a consequence of the lower level rules, but they are
> *not* directly encoded within the lower level rules.

How would you know?
The materialists believe that the mind is indeed "encoded directly", via
the pattern of neuronal interconnections and the state of the neurons.
Just because we cannot fully trace or understand that pattern doesn't
mean it's not a "direct encoding".

>>>I have just dug out my copy of GEB. This is what Hofstadter has to
>>>say:
>>>
>>>'My belief is that the explanations of emergent phenomena in our
>>>brains - for instance , ideas, hopes, images, analogies and finally
>>>consciousness and free will - are based on a kind of Strange Loop , an
>>>interaction between levels in which the top level reaches back down
>>>towards the bottom level and influences it, while at the same time
>>>being itself determined by the bottom level. In other words, a
>>>self-reinforcing "resonance" between different levels...' (p709)
>>
>>Hofstadter's assumption is that a Strange Loop (i.e. a self-referring
>>system) can escape the consequences of Goedel's theorem.
>>I find this a bit troublesome: he has gone to such lengths to show that
>>adding theorems and self-referring stuff isn't going to help escape
>>Goedel, yet he still - somehow - states a "belief" that contradicts what
>>he's saying elsewhere.
>>The desire to "be more" than a simple finite state machine must indeed
>>be strong.
>
> I admit that it is a "gut" feeling that we are not just FSMs, but
> philosophically I have to feel that way - it underpins our notions of
> morality.

Another imprecision.
Morality is independent of whether we are finite state machines.
Either we are something more than an FSM: then morals are there to ensure
proper behaviour.
Or we are an FSM: then news of convicted criminals will cause all
the other human FSMs to adjust their behaviours accordingly (either by
trying to be smarter in covering up their tracks, or by selecting a more
legal profession).
Either way, morals do make sense and are helpful.

> If we are just "machines", then much of our moral
> underpinnings about the "sacredness" of human life disappears. Oops.
> Religion (or at least ethics) again. :-)

The sacredness of human life disappears as soon as you drop religion;
that's orthogonal to whether humans are FSMs or not.
Actually, some religions do indeed see human life as predetermined (the
Muslim faith is the most prominent example), and some atheists strongly
object to humans being FSMs. So it's not just orthogonal in theory,
religion already has explored all four quadrants of this orthogonality.

>>>In all fairness to the opposing point of view, Hofstadter does go on
>>>to say "This should not be taken as an antireductionist position" and
>>>explains that he just thinks that the model should take these strange
>>>loops into account.
>>
>>I don't know what a "reductionist position" is, much less an
>>"antireductionist position".
>>Could anybody explain?
>
> I'm bound to do a bad job of this, so I shall qualify this by saying
> that this is my understanding, and that it may well be superficial:
>
> A reductionist position is one in which the mind can be formally
> modelled by a basic set of rules. This set of rules may however take
> into account the strange self-referential loops between different
> levels.
>
> An "antireductionist" position is one which holds the mind to be
> effectively indivisible and unknowable. (This may be too simplistic).
>
> My own position tends to the former camp, but with an important
> caveat. Although I believe that we can come up with a sufficiently
> complex self-referential system that attains the "critical mass" of
> complexity required for intelligence, I am less convinced that we can
> do so whilst simultaneously encapsulating that intelligence in one
> formal set of rules.

Well, this idea is simply inconsistent.
Gödel's theorem makes this just as impossible as the idea that an
infinite lifespan would allow you to count to the last integer, even
though the idea would sound plausible at first glance (unless, of
course, you have done your homework on countable infinities).

> Hofstadter says 'a reductionist explanation of a mind, in order to be
> comprehensible, must bring in "soft" concepts such as levels,
> mappings, and meanings. In principle, I have no doubt that a totally
> reductionist but incomprehensible explanation of the brain exists; the
> problem is how to translate it into a language we ourselves can
> fathom.'

This is a totally different matter: how to make the human mind
comprehensible to a human.
This still doesn't make the workings of the brain nondeterministic. Or
emergent. Or whatever.

Let me repeat:
Gödel's proof makes it totally impossible for an FSM (and even a
countably-infinite-state machine) to transcend the incompleteness
theorem. "Strange loops" (which aren't very strange actually) are either
outside the domain of the FSM, in which case they themselves constitute
the limit of what the FSM can do, or they are accepted by the FSM, in
which case they aren't strange loops anymore (they are just axioms), and
you just need a higher level of theorems that are incomplete.

Hofstadter's book is a great explanation of what the incompleteness
theorem is about, but it fails miserably on questions of consciousness
and mind. The "explanations" given are just acts of faith.
In effect, Hofstadter says that he doesn't quite understand strange
loops, and he doesn't understand the mind, so the strange loops must be
responsible for the mind. Not quite a compelling line of reasoning if
you ask me...

> To that position I would like to add that each "translation" is likely
> to involve further "strange loops" in need of mapping, thus perhaps
> necessitating further translations...

I think strange loops have been vastly overhyped by Hofstadter. Nothing
in the working of the human brain indicates it's doing something
strange-loop-like. And, frankly, we don't need strange loops to create
lots of behaviour that's very human-like in some respects; there is not
a single serious argument that it's impossible to program a computer
that can mimic human behaviour. And once that computer is built,
there's the question whether it has a soul or not... and, personally, I
think the question would be beside the point: if it looks like a human,
behaves like a human, it /is/ a human. It's a quite simplistic view, but
in the end it's the only view that can be falsified.

Regards,
Jo

David Basil Wildgoose

unread,
Jul 13, 2003, 6:15:08 PM7/13/03
to
f...@cs.mu.oz.au (Fergus Henderson) wrote in message news:<bepd9a$bm7$1...@mulga.cs.mu.OZ.AU>...

> wild...@operamail.com (David Basil Wildgoose) writes:
>
> >I wasn't arguing against AI [...]
> >I was arguing against AI being based upon a formal set of rules.
>
> What do you mean by "based on"?
>
> AI will have a formal set of rules at the bottom level.
> But trying to understand it at that level will be pretty hopeless.
> To understand AI, we'd need to look at it at higher levels of abstraction,
> and at those levels, thinking of it as a formal set of rules would not
> be helpful.
>
> >in GEB [...] there was also an interesting story about an Ant Hill and her
> >friend the Anteater, which neatly illustrated the emergent behaviour
> >visible when looking at individual ants at a different level, namely
> >that of the colony itself.
>
> And what's to say that the individual ants weren't obeying a formal set of
> rules?

Pheromones. In other words, the ant colony communicates with itself,
thereby modifying the behaviour of individual ants.

For some reason, there was some confusion when I referred to
"deterministic" rules, i.e. rules whereby for a given input we know a
given output. The doctrine of determinism is the antithesis of Free
Will. For what it's worth, I believe that intelligence requires Free
Will, and that its absence means the "intelligence" can only be
simulated, and not actual.

For example, it has been argued that we are just the product of our
genes. (This ignores the fact that identical twins aren't truly
identical, despite having identical genetic material, because the
manner in which their genes are expressed can be different.)

If I was merely following the programming of my genes, then my
"programming" would be telling me to repeatedly procreate. And yet I
wear condoms! Thereby illustrating why the low-level rules are
ignored/overridden by the high-level behaviour in a complex
self-referential system like a human being.

Costin Cozianu

unread,
Jul 13, 2003, 8:23:50 PM7/13/03
to
Joachim Durchholz wrote:
> Costin Cozianu wrote:
>
>> Constructing a good computing machine (cyber-cretin) for this purpose
>> looks more and more like a technical problem
>
>
> I think provers are as old as Lisp, which is 1956.
>

Provers, proof checkers whatever :)

Still, the vast majority of mathematics is done with pen and
paper and not in front of Coq, Nuprl and the like.

So the technicalities are still a long way from being acceptably solved.

>> but constructing a good oracle, do we even know if it is possible?
>
> Constructing oracles is easy. An oracle that always says "I don't know"
> would be trivial, for example.
> When it comes to constructing a *good* oracle: how do you define the
> quality of an oracle? Only then can you answer the question whether it's
> possible.
>

To be of decent quality, an oracle should be at least as insightful
as your average math graduate.

> E.g. if you say a good oracle should be able to provide enough
> information that the prover can prove all true statements, then the
> oracle must solve the halting problem. That's obviously too much to ask.
> An oracle that always fails to give useful answers is too little.
> Where, on the near-continuum between these endpoints, would you say the
> oracle is "good"?
>

Let's say it's able to come up with Galois theory (or something as good)
on its own.

>> What if it proves to be unfeasible in space or in time?
>
> Personally, I think it's quite feasible.

You "think" or you "believe" ? Folks used to belief much more obvious
things and were proven wrong.

> It's just that present-day oracles are obviously not good enough. And
> given that roughly a kilogram of grey matter is able to produce the most
> stunning mathematical proofs is basis for justified hope that this level
> of expertise can be reproduced with reasonable effort. (Of course, if
> the human mind is somehow connected to "Akasha" or otherwise linked into
> an external source of inspiration, this belief may be too optimistic.
> Frankly, nobody knows for sure until a serious attempt at such an oracle
> has been made.)
>

That's why I am puzzled how many people argue over their beliefs.

>> Do we have some scientific basis to believe that AI is possible, or is
>> it just a "positivist" belief?
>
> Artificial intelligence is most definitely possible. It's just that AI
> tends to be defined as "those parts of the human intellect that we do
> not know how to emulate in software yet".
> Playing chess used to be firmly in the domain of AI, until it was clear
> that the problem could be solved using massively parallel computing, in
> ways that are very different from the way that the human mind works.

So they solved it using a non-intelligent approach. You're not going to
argue that a brute force solution is a feature of AI ?

> Image recognition: likewise.

Ahem, I recently fed Dijkstra's manuscripts (typographical handwriting)
to a few commercial OCR packages (otherwise pretty good). They're beyond
hopeless for anything other than printed text.

A 10-year-old child with no previous exposure to Dijkstra's writing
whatsoever would read those without any problem. I might be able to
teach the packages Dijkstra's writing if only I had the patience and
time, but then they would fall flat on their faces at the next
handwriting. Again, the prototypical 10-year-old doesn't need to be
trained specifically for any such writing.

The bottom line is that we have no indication (quite the contrary) that
our text recognition capability is just glorified OCR software.

> Learning: was considered AI until neural networks were invented. Today,
> neural networks are boilerplate technology.

Oh, boy. They're glorified a little too much given their performance,
as the above experiment shows. We have pretty strong indications that
the simple math behind neural networks is unlikely to be the explanation
of our intelligence. Nor is it likely to be how we learn.
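
For concreteness, here is what the "simple math" behind the simplest
neural models amounts to: a single perceptron with the classic
error-correction update, sketched in OCaml. This is purely an
illustration with made-up names and a made-up AND example, not code
from any OCR package.

(* A single perceptron: weighted sum, threshold, and the classic
   error-correction update rule. *)
let predict weights bias inputs =
  let s = List.fold_left2 (fun acc w x -> acc +. w *. x) bias weights inputs in
  if s > 0.0 then 1.0 else 0.0

(* One learning step on a single (inputs, target) example: nudge each
   weight in the direction that reduces the error. *)
let learn_step rate (weights, bias) (inputs, target) =
  let error = target -. predict weights bias inputs in
  (List.map2 (fun w x -> w +. rate *. error *. x) weights inputs,
   bias +. rate *. error)

(* A few passes over the AND truth table are enough for the weights to
   classify it correctly. *)
let examples = [ ([0.; 0.], 0.); ([0.; 1.], 0.); ([1.; 0.], 0.); ([1.; 1.], 1.) ]

let trained =
  let pass state = List.fold_left (learn_step 0.1) state examples in
  let rec iterate n state = if n = 0 then state else iterate (n - 1) (pass state) in
  iterate 50 ([0.; 0.], 0.)

That the arithmetic really is this simple is exactly the point; whether
it has anything to do with how we learn is another matter.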

> The Turing test used to be the litmus test of AI - until people began to
> write programs that could simulate some nearly-psychotic personalities
> that hide their intelligence behind certain interaction patterns.
>
> I'm pretty sure that a useful oracle for theorem provers will be
> considered non-AI once a working example is demonstrated.

I can't assign any realistic probability to whether any such oracle will
ever be constructed. So I'm puzzled how you can be pretty sure.

> It's still a Good Thing to have AI researchers. AI has been one of the
> more productive areas of research - not in their primary goal, but in
> terms of by-products.
>

Nobody argued against that.

However, a more restrained and circumspect attitude with regard to AI
achievements and prospects, and to the inferences we draw from them, is
certainly warranted.

> Regards,
> Jo
>

Best,
Costin.

Ralph Becket

unread,
Jul 13, 2003, 9:51:33 PM7/13/03
to
wild...@operamail.com (David Basil Wildgoose) wrote in message news:<265d96ac.03071...@posting.google.com>...

> f...@cs.mu.oz.au (Fergus Henderson) wrote in message news:<bepd9a$bm7$1...@mulga.cs.mu.OZ.AU>...
> > wild...@operamail.com (David Basil Wildgoose) writes:
> >
> > >I wasn't arguing against AI [...]
> > >I was arguing against AI being based upon a formal set of rules.
> >
> > What do you mean by "based on"?
> >
> > AI will have a formal set of rules at the bottom level.
> > But trying to understand it at that level will be pretty hopeless.
> > To understand AI, we'd need to look at it at higher levels of abstraction,
> > and at those levels, thinking of it as a formal set of rules would not
> > be helpful.
> >
> > >in GEB [...] there was also an interesting story about an Ant Hill and her
> > >friend the Anteater, which neatly illustrated the emergent behaviour
> > >visible when looking at individual ants at a different level, namely
> > >that of the colony itself.
> >
> > And what's to say that the individual ants weren't obeying a formal set of
> > rules?
>
> Pheromones. In other words, the ant colony communicates with itself,
> thereby modifying the behaviour of individual ants.

That's a non sequitur.

If by "pheromones" you mean there's a statistical component to the ants'
behaviour, then I don't think anybody would disagree with you. However,
there is no problem at all in describing non-deterministic formal systems.

> The doctrine of determinism is the antithesis of Free
> Will. For what it's worth, I believe that intelligence requires Free
> Will, and that its absence means the "intelligence" can only be
> simulated, and not actual.

Well, that's up to you. The fact remains there is no evidence one way or
the other to suggest that free will (in the not-a-product-of-a-formal-
system sense) actually exists.

> If I was merely following the programming of my genes, then my
> "programming" would be telling me to repeatedly procreate. And yet I
> wear condoms! Thereby illustrating why the low-level rules are
> ignored/overridden by the high-level behaviour in a complex
> self-referential system like a human being.

Your genes are as they are through natural selection. They only encode
a human being with a sex drive, not one with a desire for parenthood,
because until recently the latter was a largely inevitable consequence
of the former.

But it's not clear what you're arguing for. Are you suggesting that
(a) there can never be a mathematical (simulatable) model that accurately
describes you or
(b) that there may be such a thing, but simulation thereof may require
something computationally more powerful than a Turing machine or
(c) something else again?

I would point out that there is nothing we know of that we can
measure that would support position (a).

Similarly, nobody has observed anything with more computing power than
a Turing machine, so taking position (b) is risky.

If you're still hung up on Goedel's incompleteness theorem, I refer you
to Fergus' post on 2003-07-12 09:40:02 PST.

-- Ralph

George Russell

unread,
Jul 14, 2003, 6:56:21 AM7/14/03
to
Marshall Spight wrote (snipped):

> This conversation has been entirely about qualitative considerations.
> But there are also some interesting points to be made from a
> quantitative point of view. The above poster noted the difficulties
> that machines have had catching up to humans in some problem areas,
> and the surprise this difficulty occasioned. This doesn't seem at all
> surprising when one brings in quantitative considerations.
>
> In fact, implicit in many of the arguments in this thread has been the
> question: if computers could be smart, why aren't they smart now?
> If for no other reason, it's at least because they aren't anywhere near
> fast enough yet.
>
> Consider: Pentium 4 vs. human brain. What is the relative computing
> power?

Turing's estimates in the article I cited would mean a Pentium 4 could pass
the Turing Test, with appropriate programming and 128 MB of memory. So
it would seem as if Turing's estimates were too optimistic.

>
> There are many ways to make the argument: heat dissipation, neural
> connections/sec, extrapolations from the retina, etc. But let's just
> count junctions for the moment; that's really easy.
>
> Neurons aren't the same as transistors, sure. The neuron has approximately
> 10^3 more interconnects than the transistor, but the cycle time on the transistor
> is much shorter. Let's say they're roughly at parity. This may not be all that
> exact, but let's do it for back-of-the-envelope purposes.
>
> Human: 100 billion neurons
> P4: 55 million transistors
>
> So the human has about 4 orders of magnitude more junctions. Clearly,
> the computer has some significant catching up to do.
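
(Taking the quoted figures at face value, the junction count alone gives
10^11 / (5.5 * 10^7) ≈ 1.8 * 10^3, i.e. a bit over three orders of
magnitude; the fourth order only appears if part of the ~10^3
interconnect factor is kept rather than folded into the parity
assumption.)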

Indeed so. But progress since 1950 does not convince me that just throwing
more computer power at the problem is going to solve it. I suppose in theory
you could get human-like intelligence by accurately simulating every neuron in
the human brain, but I am not sure that would actually prove very much.

George Russell

unread,
Jul 14, 2003, 6:59:31 AM7/14/03
to
Peter G. Hancock wrote:
> George Russell wrote (on Thu, 10 Jul 2003 at 20:20):
>
>
> > http://www.abelard.org/turpap/turpap.htm
>
> Thanks very much for this link, and for your quotes from it.
>
> What an amazingly clear writer Turing was! (Just compare the
> simplicity of his sentences [mostly a line per sentence] with some of
> your own! [mostly about 7] :-)

Yes, I've just been reading a German author who seems to have abandoned paragraphs
and regards more than one full-stop a page as excessive, so maybe this is a bad
(or possibly good) influence.
>
> Cheap shots aside:
>
> (Alan.)
> >> As I have explained, the problem is mainly one of programming.
>
> (George.)
> > But alas, it is clear Turing clearly misunderstood the nature of
> > the problem, for ...
>
> Um, really??

If you could manage to read more than the first 7 words of my sentence you
might get to my explanation for why Turing misunderstood the nature of the
problem ...

George Russell

unread,
Jul 14, 2003, 7:04:42 AM7/14/03
to
Neelakantan Krishnaswami wrote (snipped):

> I don't particularly care about whether or not Russell sets "exist" in
> some ontological sense or not -- the question at hand is whether the
> formal rules of inference that people "naturally" use permit
> inconsistent constructions. I think it's the case that they do; consider
> the circumstances under which Russell's paradox was discovered. Frege,
> who was better at rigorous, logical thought than almost everyone who
> has ever lived, was trying to formalize mathematics using naive set
> theory, when Russell discovered the paradox in the system.

I certainly think it's possible for people to mislead themselves into
accepting an argument as consistent or valid when it isn't. Now what
was it we were saying about weapons of mass destruction again?

However I would dispute that there are such things as "formal rules of
inference that people "naturally" use". There are informal rules, but
they tend to allow exceptions, and they are not easy to describe.

Richard Bos

unread,
Jul 14, 2003, 9:57:19 AM7/14/03
to
In article <265d96ac.03071...@posting.google.com>,
wild...@operamail.com (David Basil Wildgoose) wrote:

> Richard Bos <r...@hoekstra-uitgeverij.nl> wrote in message
> news:<rlb-AFFECC.1...@news.nl.uu.net>...
> > In article <265d96ac.03070...@posting.google.com>,
> > wild...@operamail.com (David Basil Wildgoose) wrote:
> >
> > > f...@cs.mu.oz.au (Fergus Henderson) wrote in message
> > > news:<beietv$nmf$1...@mulga.cs.mu.OZ.AU>...
> > > > wild...@operamail.com (David Basil Wildgoose) writes:
> > > >
> > > > >I thought that the application of Goedel's Theory did a good job of
> > > > >demolishing deterministic approaches to Artificial Intelligence,
> > > >
> > > > Not at all.
> > > >
> > > > >(at least that's what I remember from Hofstadter's "Goedel, Escher and
> > > > >Bach" anyway).
> > > >
> > > > You misunderstood it.
> > >
> > > I know it's nearly 20 years since I read it, but I don't think I got
> > > it that badly wrong.
> >
> > It's only two years or so since I read it, and you _did_ get it that
> > badly wrong.
>
> How?
>
> You are making a blanket statement with no reasoning to back it up.
> On the contrary, I suggest that your failure to back up your
> assertions implies that you are unable to do so.
>
> My assertion is that Intelligence cannot be directly modelled, and
> that Gödel's work backs that up.

Ok; in that case, you mean "deterministic" to mean the same thing as
"directly modeled" (since that is the word you use above); and I
disagree with that.

> I maintain that Intelligence actually arises "through the cracks" as
> emergent behaviour that is not directly modelled by the system.

I agree that this is probable. However, I think this does not
necessarily mean that AI needs anything but simple, already available
programming techniques at the bottom. That layer would certainly be
completely deterministic. I understood you as meaning that this level
would not be deterministic, either; if that isn't what you meant, I
retract my statement.

Richard

David Basil Wildgoose

unread,
Jul 14, 2003, 12:41:21 PM7/14/03
to
Joachim Durchholz <joachim....@web.de> wrote in message news:<besesc$8h1t8$1...@ID-9852.news.uni-berlin.de>...

> David Basil Wildgoose wrote:
> > - synergy, the whole is
> > greater than the sum of its parts.
>
> This is a common saying, but it's misleading.
> Whether something is "greater" than something else is entirely a
> question of perspective. What's "greater" for an IT person may be
> irrelevant to a lawyer, say. Or, even more generally: what's "greater"
> for a human would probably be irrelevant to a fox.

Yes.

> > This may not be visible from
> > *within* the system, but it is visible when looking from an external
> > point.
>
> I don't copy.
> From "within the system", there is no observer to whom anything might
> be visible. We're always observing from the outside; what's shifting is
> the level of abstraction.
> At a low level, the human brain is just an inextricable mass of neurons,
> firing according to some well-understood rules, but that's all.
> At a higher, more abstract level, the human brain is capable of
> providing all sorts of interesting stuff. We just don't know (yet) how
> the low-level stuff interacts to provide the high-level abstractions.
> (Or, if one believes in the mind not being based on matter alone: how
> the low-level interaction *fails* to provide the high-level abstractions.)

Agreed.



> >>"Emergent behaviour" is not something magic that "rises through the
> >>cracks". It's just that humans find something interesting, and are
> >>willing to abstract to be able to reason about these higher-level
> >>properties.
> >
> > I think that the reason why we find this interesting is because we are
> > looking at the behaviour from an external point, and are then, as you
> > say, able to reason about these higher-level properties.
>
> We'd be unable to reason about this regardless of whether it's high or
> low level if we weren't looking at it from the outside.
> I don't think there's any "inside perspective" at all. If that were the
> case, we'd have to be able to observe the firing of individual neurons
> within our brain - which, obviously, is not the case (and would not
> provide us with any new insights if it were).

Yes.

> > These
> > properties may be a consequence of the lower level rules, but they are
> > *not* directly encoded within the lower level rules.
>
> How would you know?
> The materialists believe that the mind is indeed "encoded directly", via
> the pattern of neuronal interconnections and the state of the neurons.
> Just because we cannot fully trace or understand that pattern doesn't
> mean it's not a "direct encoding".

To give a (probably bad) example. I have recently started to learn Go
(Igo to the Japanese). For a collection of stones to "live" they have
to have 2 "eyes". This isn't actually a stated rule, rather it is a
consequence of the rules. So whether or not it is a "direct encoding"
is a matter of perspective, (again).

Hmmm. Not sure about that. The idea of "trying to be smarter in
covering up their tracks" sounds like an "evolutionary arms race" to
me. Humans tend to be more co-operative in their outlook I'm glad to
say.


> > If we are just "machines", then much of our moral
> > underpinnings about the "sacredness" of human life disappears. Oops.
> > Religion (or at least ethics) again. :-)
>
> The sacredness of human life disappears as soon as you drop religion;
> that's orthogonal to whether humans are FSMs or not.
> Actually, some religions so indeed see human life as predetermined (the
> Muslim faith is the most prominent example), and some atheists strongly
> object to humans being FSMs. So it's not just orthogonal in theory,
> religion already has explored all four quadrants of this orthogonality.

Are you sure about that? I wasn't aware that Muslims believed in
predeterminism, in fact, as one of the three "peoples of the book"
they share a version of the Old Testament with both Christians and
Jews, and so I would have thought they also had a version of Genesis,
including the Garden of Eden, etc.

In fact, the more I think about it, the more I doubt it. What is the
point of Salvation if you can't make conscious choices about your actions?

But that isn't something I'm qualified to make comment on...

Perhaps. You are right to point out the inconsistencies, but what I
had in mind was "growing" an intelligence. I found the following to
be an interesting read:

An Evolutionary Approach to Synthetic Biology: Zen and the Art of
Creating Life.

http://www.isd.atr.co.jp/~ray/pubs/zen/

I'm not sure about the concept of a soul, that's too vague an idea
with too many religious overtones. And there is no reason why
something must be like a human in order to be intelligent. After all,
if we ever encounter extra-terrestrial intelligence then that would
fail the "like a human" test but still be demonstrably intelligent.

David Basil Wildgoose

unread,
Jul 14, 2003, 1:36:56 PM7/14/03
to
Richard Bos <rlb...@hoekstra-uitgeverij.nl> wrote in message news:

> > I maintain that Intelligence actually arises "through the cracks" as
> > emergent behaviour that is not directly modelled by the system.
>
> I agree that this is probable. However, I think this does not
> necessarily mean that AI needs anything but simple, already available
> programming techniques at the bottom. That layer would certainly be
> completely deterministic. I understood you as meaning that this level
> would not be deterministic, either; if that isn't what you meant, I
> retract my statement.

No, that isn't what I meant. I never expected the word
"deterministic" would cause so much trouble. An alternative might
have been "non-chaotic", (using "chaotic" in the sense of difficult to
predict), but that would probably lead to even more confusion.

All I was trying to suggest was that Intelligence is not just a big
Expert System in which for a given Input there is a given
pre-determined Output.

David Basil Wildgoose

unread,
Jul 14, 2003, 3:08:38 PM7/14/03
to
ra...@cs.mu.oz.au (Ralph Becket) wrote in message news:<3638acfd.03071...@posting.google.com>...

> > > >in GEB [...] there was also an interesting story about an Ant Hill and her
> > > >friend the Anteater, which neatly illustrated the emergent behaviour
> > > >visible when looking at individual ants at a different level, namely
> > > >that of the colony itself.
> > >
> > > And what's to say that the individual ants weren't obeying a formal set of
> > > rules?
> >
> > Pheromones. In other words, the ant colony communicates with itself,
> > thereby modifying the behaviour of individual ants.
>
> That's a non sequitur.

No it isn't.

> If by "pheromones" you mean there's a statistical component to the ants'
> behaviour, then I don't think anybody would disagree with you. However,
> there is no problem at all in describing non-deterministic formal systems.

"Pheromones" has a clear dictionary definition of "A chemical secreted
by an animal, especially an insect, that influences the behavior or
development of others of the same species". That is most definitely
not *just* a "statistical component to the ants' behaviour".

> > The doctrine of determinism is the antithesis of Free
> > Will. For what it's worth, I believe that intelligence requires Free
> > Will, and that its absence means the "intelligence" can only be
> > simulated, and not actual.
>
> Well, that's up to you. The fact remains there is no evidence one way or
> the other to suggest that free will (in the not-a-product-of-a-formal-
> system sense) actually exists.

Which just returns us to my earlier contention that ultimately this
boils down to a "religious" argument between opposing viewpoints.

> > If I was merely following the programming of my genes, then my
> > "programming" would be telling me to repeatedly procreate. And yet I
> > wear condoms! Thereby illustrating why the low-level rules are
> > ignored/overridden by the high-level behaviour in a complex
> > self-referential system like a human being.
>
> Your genes are as they are through natural selection. They only encode
> a human being with a sex drive, not one with a desire for parenthood,
> because until recently the latter was a largely inevitable consequence
> of the former.
>
> But it's not clear what you're arguing for. Are you suggesting that
> (a) there can never be a mathematical (simulatable) model that accurately
> describes you or
> (b) that there may be such a thing, but simulation thereof may require
> something computationally more powerful than a Turing machine or
> (c) something else again?
>
> I would point out that for there is nothing we know of that we can
> measure that would support position (a).
>
> Similarly, nobody has observed anything with more computing power than
> a Turing machine, so taking position (b) is risky.
>
> If you're still hung up on Goedel's incompleteness theorem, I refer you
> to Fergus' post on 2003-07-12 09:40:02 PST.

Actually, I don't think it's clear what you are arguing for either.

Joachim Durchholz

unread,
Jul 14, 2003, 4:01:46 PM7/14/03
to
David Basil Wildgoose wrote:
> All I was trying to suggest was that Intelligence is not just a big
> Expert System in which for a given Input there is a given
> pre-determined Output.

I agree if you mean that intelligence must take experience into account,
i.e. if you require that intelligence needs a memory (and, of course,
the ability to recall past experiences and draw conclusions from them).

From a low-level point of view, this means you have a finite state
machine. At a somewhat higher level, you have a sort-of expert system
that modifies its rules while in use, in some not-well-understood way.
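
As a toy illustration of that middle level (an OCaml sketch with made-up
names and rules, nothing more): the rule set is ordinary data, past
inputs are remembered, and the memory can add new rules - yet the whole
thing is still a perfectly ordinary state machine whose state is
(rules, history).

type rule = { matches : string -> bool; reply : string }

type brain = { rules : rule list; history : string list }

let respond brain input =
  let answer =
    match List.find_opt (fun r -> r.matches input) brain.rules with
    | Some r -> r.reply
    | None -> "I don't know."
  in
  (* "Learning": once an unknown input has been seen before, add a rule
     that recognises it from now on.  The rule set is part of the state. *)
  let rules =
    if answer = "I don't know." && List.mem input brain.history
    then { matches = String.equal input; reply = "I've seen this before." }
         :: brain.rules
    else brain.rules
  in
  (answer, { rules; history = input :: brain.history })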

I'm not aware of any higher-level view of human intelligence that's more
than speculation (but then I'm not an expert in the field).

At the expert-system-and-below level, everything can easily be modelled
using a finite state machine (FSM). This doesn't mean that the human
mind is indeed an FSM, it just means that an FSM is enough to describe
everything that we know for sure.
Complexity theory also proves that whatever models we discover on top of
that expert-system-with-a-memory view, they can be modelled using an FSM.
Gödel's incompleteness theorem also asserts that there will always be
things that the FSM will never be able to reason about - though it's
extremely unlikely that any human brain will ever actually hit that
limit. Even within the limits of incompleteness, there are more things
to reason about than the collective lifespan of all humanity would be
able to handle, even if every single human were a mathematical genius.
The number of provable theorems is countably infinite, after all. (The
number of "interesting" theorems is an entirely different matter, and
Gödel says nothing about that subject.)

The above paragraph was written under the assumption that there is no
extra-classical influence on the human mind, such as a soul, quantum
phenomena, or other speculation.

My personal stance on anything "above" the expert system level is that
We Don't Know. There may be a soul, or there may be not - we don't know
how human intelligence works, so there isn't even room for intelligent
speculation on whether some non-material component is necessary for the
functioning of the human mind.

Well, enough of this.

Regards,
Jo

Joachim Durchholz

unread,
Jul 14, 2003, 4:36:36 PM7/14/03
to
David Basil Wildgoose wrote:

> Joachim Durchholz <joachim....@web.de> wrote:
>> How would you know? The materialists believe that the mind is
>> indeed "encoded directly", via the pattern of neuronal
>> interconnections and the state of the neurons. Just because we
>> cannot fully trace or understand that pattern doesn't mean it's not
>> a "direct encoding".
>
> To give a (probably bad) example. I have recently started to learn
> Go (Igo to the Japanese). For a collection of stones to "live" they
> have to have 2 "eyes". This isn't actually a stated rule, rather it
> is a consequence of the rules. So whether or not it is a "direct
> encoding" is a matter of perspective, (again).

It's a good example.

Actually, on second thought, I found that my reasoning wasn't quite
correct. Nature does indeed employ an encoding: DNA.
DNA is an inherently indirect way of encoding. It just provides plans
for proteins, which in turn switch parts of the DNA on and off as needed
to form a human body. The DNA has no direct encoding of a liver, it just
encodes the proteins that will turn some of the human cells into a liver.
It's much less likely that there will ever be a direct encoding of
anything that we observe. It's not really a surprise that memories are
not encoded in specific neurons but somehow "smeared" across an entire
network of them - nature evolves so that anything that's there will be
used for a purpose that helps in reproduction, that's essentially all.

So, in fact, it's true that there's probably no direct representation of
anything - but since direct representation is not an evolutionarily
relevant concept, this observation doesn't help us gain much insight.

And, of course, it's essentially a matter of perspective if an encoding
is "direct". From the point of view of evolution, the DNA encoding is
cruelly direct: it describes a set of proteins that will reproduce with
a high probability. That these proteins also create intelligence is just
a superficial side effect.

>>> I admit that it is a "gut" feeling that we are not just FSMs, but
>>> philosophically I have to feel that way - it underpins our
>>> notions of morality.
>>
>> Another imprecision. Morality is independent of whether we are
>> finite state machines. Either we are something more than a FSM:
>> then morals are there to ensure proper behaviour. Or we are an FSM:
>> then the news of criminals convicted will cause all the other human
>> FSMs to adjust their behaviours accordingly (either by trying to be
>> smarter in covering up their tracks, or by selecting a more legal
>> profession). Either way, morals do make sense and are helpful.
>
> Hmmm. Not sure about that. The idea of "trying to be smarter in
> covering up their tracks" sounds like an "evolutionary arms race" to
> me.

Right - but that's actually how those criminals with a career operate.

> Humans tend to be more co-operative in their outlook I'm glad to say.
>

Well, there's also cooperation between criminals.
Legislation and morality are distinct. (Of course, criminal activities
are still something that should be fought, even though criminals have a
social life just like all other humans.)

>>> If we are just "machines", then much of our moral underpinnings
>>> about the "sacredness" of human life disappears. Oops. Religion
>>> (or at least ethics) again. :-)
>>
>> The sacredness of human life disappears as soon as you drop
>> religion; that's orthogonal to whether humans are FSMs or not.
>> Actually, some religions do indeed see human life as predetermined
>> (the Muslim faith is the most prominent example), and some atheists
>> strongly object to humans being FSMs. So it's not just orthogonal
>> in theory, religion already has explored all four quadrants of this
>> orthogonality.
>
> Are you sure about that?

Quite.

> I wasn't aware that Muslims believed in predeterminism, in fact,

Quite seriously in fact.
If somebody dies in a holy war (i.e. the martial form of "jihad"), this
was predetermined. There's a concept of "kismet", a book in which all of
the deeds of a human are written at the time of his birth.

I don't know how Muslims operate under that belief.

Calvinism is a variant of this. I don't know whether Calvinists consider
that their entire life is predetermined, but they believe that their
ultimate fate (heaven or hell) is predetermined and cannot be changed.
Seems fatalist, but: Calvinists also believe that their fate in
afterlife is reflected in this world. God sends riches to those who are
chosen, and keeps those in poverty who aren't. Which means that all
Calvinists are in a frenzy to show that they are among the chosen,
spurring all sorts of commercial activity. (Which is why Calvinistic
communities usually had a better economy than their non-Calvinistic
neighbours.)

> as one of the three "peoples of the book" they share a version of the
> Old Testament with both Christians and Jews, and so I would have
> thought they also had a version of Genesis, including the Garden of
> Eden, etc.

Right - but Genesis doesn't say much about the afterlife, or about
predetermination. Most of it is just an account of things that happened,
without going into much detail on how bad or good actions reflected on
the acting persons. (Remember Saul, who is quite a tormented soul and
deserves more pity than derision, yet he must die; or David, the great
hero, who still sends one of his captains into sure death so that he
gets the poor captain's wife... and not a single word of moral praise or
damnation of these actions, just a protocol of what happened. I found
that quite remarkable.)

> In fact, the more I think about it, the more I doubt it. What point
> Salvation if you can't make conscious choices about your actions?
>
> But that isn't something I'm qualified to make comment on...

Salvation is a Christian concept. I know of no other religion that has it.
The Muslim is judged by his actions. After death, his deeds are judged
and the sentence is executed.
The Norse religion said that warriors would be judged according to their
prowess in battle; moral considerations were irrelevant.
The Roman/Greek idea of an afterlife was one of eternal unhappiness.
People who somehow aroused the interest of the gods got exceptional
treatment. The gods' judgements were sometimes fair, sometimes they
weren't - this depended heavily on whether one had friends among them or
not.

Etc. etc.
Different cultures, different gods, different ethics, different morals,
and entirely different ideas of what's central and what isn't.

> I'm not sure about the concept of a soul, that's too vague an idea
> with too many religious overtones.

Neither am I.

> And there is no reason why something must be like a human in order to
> be intelligent. After all, if we ever encounter extra-terrestrial
> intelligence then that would fail the "like a human" test but still
> be demonstrably intelligent.

Agreed.

Regards,
Jo

Amr Sabry

unread,
Jul 14, 2003, 6:17:26 PM7/14/03
to
Joachim Durchholz <joachim....@web.de> writes:

> > I wasn't aware that Muslims believed in predeterminism, in fact,
>
> Quite seriously in fact.
> If somebody dies in a holy war (i.e. the martial form of "jihad"), this
> was predetermined. There's a concept of "kismet", a book in which all of
> the deeds of a human are written at the time of his birth.
>
> I don't know how muslims operate under that belief.

No.

This is incorrect, and to the extent that it is correct it is a gross
simplification of centuries of debate among Muslim philosophers. --Amr

Damien Sullivan

unread,
Jul 14, 2003, 7:31:00 PM7/14/03
to
wild...@operamail.com (David Basil Wildgoose) wrote:
>ra...@cs.mu.oz.au (Ralph Becket) wrote in message news:<3638acfd.03071...@posting.google.com>...
>
>> > > >in GEB [...] there was also an interesting story about an Ant Hill and her
>> > > >friend the Anteater, which neatly illustrated the emergent behaviour
>> > > >visible when looking at individual ants at a different level, namely
>> > > >that of the colony itself.
>> > >
>> > > And what's to say that the individual ants weren't obeying a formal set of
>> > > rules?
>> >
>> > Pheromones. In other words, the ant colony communicates with itself,
>> > thereby modifying the behaviour of individual ants.
>>
>"Pheromones" has a clear dictionary definition of "A chemical secreted
>by an animal, especially an insect, that influences the behavior or
>development of others of the same species". That is most definitely
>not *just* a "statistical component to the ants' behaviour".

So how does this break formal rules at the ant level? Input: state of ant
and immediate senses, including current local pheromone levels. Output: new
state, new behavior, and possibly new pheromone emissions. Classic state
machine stuff.
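
(A minimal sketch of that view in OCaml - all names, states and numbers
invented for illustration: one tick maps the ant's state and its local
senses, pheromone level included, to a new state plus whatever pheromone
it emits in turn.)

type ant_state =
  | Foraging
  | Returning of int              (* steps left to the nest *)

type senses = { pheromone_level : float; food_here : bool }

(* One tick of one ant: a pure function of state and input, yet the
   pheromone it returns feeds into other ants' inputs on later ticks. *)
let step (state : ant_state) (s : senses) : ant_state * float =
  match state with
  | Foraging when s.food_here -> (Returning 10, 1.0)          (* grab food, mark the spot *)
  | Foraging when s.pheromone_level > 0.5 -> (Foraging, 0.1)  (* follow an existing trail *)
  | Foraging -> (Foraging, 0.0)                                (* keep wandering *)
  | Returning 0 -> (Foraging, 0.0)                             (* back at the nest, go out again *)
  | Returning n -> (Returning (n - 1), 0.5)                    (* keep reinforcing the trail *)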

-xx- Damien X-)

Daniel C. Wang

unread,
Jul 14, 2003, 10:42:00 PM7/14/03
to
Joachim Durchholz <joachim....@web.de> wrote in message news:<bev2as$9dga0$1...@ID-9852.news.uni-berlin.de>...

> David Basil Wildgoose wrote:
> > All I was trying to suggest was that Intelligence is not just a big
> > Expert System in which for a given Input there is a given
> > pre-determined Output.
>
{stuff deleted}

> At the expert-system-and-below level, everything can easily be modelled
> using a finite state machine (FSM). This doesn't mean that the human
> mind is indeed an FSM, it just means that an FSM is enough to describe
> everything that we know for sure.
> Complexity theory also proves that whatever models we discover on top of
> that expert-system-with-a-memory view, it can be modelled using an FSM.
{stuff deleted}

After following the thread, it's now a bit clearer why there seems
to be such confusion. David seems to be confusing *models* of
intelligence with Intelligence itself. Of course, those people who are
actively trying to encode human intelligence as expert systems
(www.cyc.com) seem to miss the point that not just any model will do.

As has often been pointed out on this newsgroup, laziness can be
"encoded" in any eager language with references and thunks. Many people
have also pointed out that this is not the "ideal" way of getting lazy
semantics when you want them. Even though you can model laziness, people
may argue "YOU DON'T WANT TO DO IT THAT WAY, EVEN IF YOU CAN!" I
personally disagree about that .. but that's another inflammatory
topic...
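
For concreteness, here is a minimal sketch of that references-and-thunks
encoding in OCaml (my own illustration; the names susp, delay and force
are invented here, not any particular library's API):

(* A suspended computation, memoized the first time it is forced. *)
type 'a susp = Delayed of (unit -> 'a) | Forced of 'a
type 'a lazy_cell = 'a susp ref

let delay (f : unit -> 'a) : 'a lazy_cell = ref (Delayed f)

let force (cell : 'a lazy_cell) : 'a =
  match !cell with
  | Forced v -> v
  | Delayed f ->
      let v = f () in          (* run the thunk exactly once *)
      cell := Forced v;        (* memoize the result *)
      v

(* The suspended body runs only when forced, and only the first time: *)
let x = delay (fun () -> print_endline "computing"; 6 * 7)
let _ = force x   (* prints "computing", evaluates to 42 *)
let _ = force x   (* evaluates to 42 without recomputing *)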

Personally, I believe that all of human intelligence can be modelled
with a Turing Machine/Higher-Order Logic. However, I also believe "YOU
DON'T WANT TO DO IT THAT WAY, EVEN IF YOU CAN!" I'm more inclined to
model the human mind as some sort of dynamic stochastic process. So
maybe intelligence is best modelled as some sort of probabilistic
Pi-calculus or Petri nets.

Many things are doable in a theoretical sense. Someone could rewrite
all 14 million lines of Microsoft Word as a Turing Machine. It could
probably even be done before the heat death of the universe. Nobody in
their right mind would do it. Encoding human intelligence as some sort
of deterministic logical expert system is, I think, a similarly doomed
project. It is not impossible, but simply intractable, until someone
comes along with a better model of human intelligence.

David Basil Wildgoose

unread,
Jul 15, 2003, 2:47:16 AM7/15/03
to
pho...@ugcs.caltech.edu (Damien Sullivan) wrote in message news:<bevefk$n9s$1...@naig.caltech.edu>...

I think Daniel Wang's comment is pretty insightful. In any event, we
seem to be arguing in circles on a subject that is decidedly off-topic
for this newsgroup.

Jerzy Karczmarczuk

unread,
Jul 15, 2003, 4:52:50 AM7/15/03
to
Joachim Durchholz wrote:

> The current state of research is that the human brain is uninfluenced by
> quantum theory. All known interactions and state transitions of neurons
> can be described in classical terms, which are mechanical and
> deterministic. (With the allowance that - extremely rarely - a quantum
> phenomenon may reach macroscopic size. Which is probably just as likely
> as a spontaneous decomposition of your keyboard into photons - well, give
> or take a few orders of magnitude of improbability, but I hope the
> analogy is clear *g*.)

As I stated already, all that sad speculation about "quantum minds" etc.
doesn't bother me at all. But - please - avoid talking rubbish at the level
of physics, chemistry, etc. Neurons are *NOT* classical, mechanical,
deterministic devices.

Even if you disregard all the quantum effects //which *are* there through
chemical resonances, etc.//, you still have a massive thermodynamic system,
very far from equilibrium, which implies a lot of "non-determinism" such as
you have in statistical physics. If there are strong currents in a system,
the notion of its 'state' becomes fuzzy. That is one reason not to buy the
statement from another posting of JoD:

> The materialists believe that the mind is indeed "encoded directly", via the
> pattern of neuronal interconnections and the state of the neurons. Just
> because we cannot fully trace or understand that pattern doesn't mean it's
> not a "direct encoding".

We don't even know how to describe dead bulk matter which is far from
equilibrium, and people here speculate about treating the brain as a finite
state machine. This is not even funny, although I appreciate the ambitious
optimism of some of you. You might claim - as Ralph Becket does - that

> there is no problem at all in describing non-deterministic formal systems.

but nobody really knows what kind of non-determinism rules in the brain, and
besides, "describing a formal system" is not enough. We are talking about
constructing AI. *With* chaos, and resonances, and whatever at the substrate
level.

You have the conflict between the firing of neurons and the "brain waves",
which are non-local, which are conditioned by elementary neuron activity but
which - similarly to the GEB vision of a hierarchy of levels in a cyclic
Strange Loop - in turn condition and stabilize local neuron patterns. It is
plausible that the excitations propagate as solitons, which needs such
mathematical machinery that all 'em algebrogicians are helpless; they will
need competent analyticians.

So, please, decide. Either you speak about *our, concrete* world and the
true mechanisms which seem to work in cells and tissues - in which case
please be more modest and modern, and don't reduce things to mechanics -

or you continue to Turingify and FSMagorize the human mind. In this case I
won't interrupt you, since interrupting vacuum is senseless. We will diverge
from science to the sacredness of life versus Boolean logic, etc., and it
gets nowhere. Or even worse. JoD says:

> The sacredness of human life disappears as soon as you drop religion;
> that's orthogonal to whether humans are FSMs or not.

No. There are religions where human life is not sacred, where it is almost
meaningless, just a turn, a small spark within an eternal machine. There
are humanist, non-religious social philosophies - represented e.g. by some
branches of (European) Free-Masonry - where the notion of <<sacrum>> is
postulated on a dogmatic basis in order to avoid the trap of ethical
relativism; we *have* to have something inviolable *by us*, so why not
human life, for the sake of social stability and some intuitive justice
accepted by almost everybody...
Besides, I don't know of any religion appropriate to finite-state machines.
I have to reread the "Robot fairy-tales" and "Cyberiad" of Stanislaw Lem...

Anyway, we got as far off-topic as seems possible. Let's have a break...

Jerzy Karczmarczuk

Stefan Axelsson

unread,
Jul 15, 2003, 7:55:08 AM7/15/03
to
In article <lp9znjg...@dogfish.cs.indiana.edu>, Amr Sabry wrote:
> No.
>
> This is incorrect and to the extent that it is correct it is a gross
> simplification of centuries of debate among Muslim philosophers. --Amr

This is getting awfully off topic, but it's the middle of summer and
not much else is going on, so perhaps we could indulge ourselves by
going a bit further (and my curiosity was piqued).

Would you care to elaborate a bit further and give a less simplified
version of the debate on the issue? A paragraph or two (I'm not
asking for an essay or a dissertation), or a link, would be much
appreciated.

Regards,
--
Stefan Axelsson (email at http://www.cs.chalmers.se/staff/sax)

Richard Bos

unread,
Jul 15, 2003, 8:00:06 AM7/15/03
to
In article <bev4cf$9bidq$1...@ID-9852.news.uni-berlin.de>,
Joachim Durchholz <joachim....@web.de> wrote:

> Calvinism is a variant of this. I don't know whether Calvinists consider
> that their entire life is predetermined, but they believe that their
> ultimate fate (heaven or hell) is predetermined and cannot be changed.

Sorry, but that's nonsense. Calvin himself may have believed something
like this, but by no means all modern Calvinists believe something even
remotely like it. I should know; I am (at least nominally) Calvinist. A
very relaxed species of Calvinist, perhaps, but a member of a Calvinist
church, nevertheless. And I believe nothing of the stuff you write here.

> Seems fatalist, but: Calvinists also believe that their fate in
> afterlife is reflected in this world. God sends riches to those who are
> chosen, and keeps those in poverty who aren't.

That's not just nonsense, it is decidedly unchristian nonsense.

> Which means that all
> Calvinists are in a frenzy to show that they are among the chosen,

Ah, yes. One of the main driving forces of my life, showing that I am
worthy of Heaven to my fellow mortals. This explains my current lack of
funds and lack of care about same.

Next time, it might be wiser to choose an example you actually know
something about.

Richard
