
Can artificial intelligence really exist?


Synosemyne

Nov 29, 1998, 3:00:00 AM
Hey all,

Do any of you happen to know some sensible arguments against the existence of artificial intelligence in computers?  It's true that computers are problem-solvers, which some call intelligence, which is fine and almost universally accepted.  But computers still lack understanding.  How does one prove that computers lack understanding?  It is easy to say so but there are always counter-arguments and counter-examples (although most are not very sturdy).  What philosophical or concrete arguments do you know of, to show that computers do not understand?

Thank you,
Synosemyne

Ronald Michaels

Nov 29, 1998, 3:00:00 AM

When Computers understand, they will so inform us.

Ron
--
Ronald Michaels mic...@planetc.com
714 Burnett Station Rd. 423 573 4049
Seymour, TN 37865 USA

Gary H. Merrill

Nov 29, 1998, 3:00:00 AM

Ronald Michaels wrote:

> When Computers understand, they will so inform us.

This is a very anthropocentric view. Why should they?

-- Gary Merrill

kreeg

Nov 30, 1998, 3:00:00 AM
Ok, I have a good question for you:

Say we made a really small chip that could do the exact same thing as a
brain cell. Now say we took a person, and every day we replaced 100
brain cells, one by one, with a chip. After we replaced all the cells
(don't count the several years it would take to actually do this)
with chips, the brain would now be a computer.... Would you then
consider that artificial intelligence?

"Fools Follow"
-ZOAD
http://www.northernnet.com/kreeg

kreeg

Nov 30, 1998, 3:00:00 AM
Ronald Michaels wrote:

> Synosemyne wrote:
> >
> > Hey all,
> >
> > Do any of you happen to know some sensible arguments against the
> > existence of artificial intelligence in computers? It's true that

> <<<< SNIP >>>>


> >
> > Thank you,
> > Synosemyne
>
> When Computers understand, they will so inform us.
>
> Ron
> --
> Ronald Michaels mic...@planetc.com
> 714 Burnett Station Rd. 423 573 4049
> Seymour, TN 37865 USA

Hahahaha, let's leave it at that.


kreeg

Nov 30, 1998, 3:00:00 AM
"Gary H. Merrill" wrote:

> Ronald Michaels wrote:
>
> > Synosemyne wrote:
> > >
> > > Hey all,
> > >
> > > Do any of you happen to know some sensible arguments against the
> > > existence of artificial intelligence in computers? It's true that

> > > computers are problem-solvers, which some call intelligence, which is
> > > fine and almost universally accepted. But computers still lack
> > > understanding. How does one prove that computers lack understanding?
> > > It is easy to say so but there are always counter-arguments and
> > > counter-examples (although most are not very sturdy). What
> > > philosophical or concrete arguments do you know of, to show that
> > > computers do not understand?
> > >

> > > Thank you,
> > > Synosemyne
> >
> > When Computers understand, they will so inform us.
> >
> > Ron
> > --
> > Ronald Michaels mic...@planetc.com
> > 714 Burnett Station Rd. 423 573 4049
> > Seymour, TN 37865 USA
>

> This is a very anthropocentric view. Why should they?
>
> -- Gary Merrill

Are you always this negative, Gary?


Erik Westlin

Nov 30, 1998, 3:00:00 AM
kreeg wrote:
>
> Ok, I have a good question for you:
>
> Say we made a really small chip that could do the exact same thing as a
> brain cell. Now say we took a person, and everyday we repaced 100
> braincells, one by one, with a chip. After we replaced all the cells
> (don't count in the several years it would take to actually do this)
> with chips, the brain would now be a computer.... Would you then
> consider that artificial intelligence?
>
> "Fools Follow"

That is a good question, and it would be just as good if we replaced cells with other cells.
When or where does *real* intelligence start?
Or am I talking about self-consciousness?
Does AI need to be intelligent?
Or is intelligence independent of its carrier?

-------------------------------------------------------------------------

Erik Westlin
email: wes...@msi.se

David Kastrup

Nov 30, 1998, 3:00:00 AM
Ronald Michaels <mic...@planetc.com> writes:

> Synosemyne wrote:
> >
> > Do any of you happen to know some sensible arguments against the
> > existence of artificial intelligence in computers? It's true that
> > computers are problem-solvers, which some call intelligence, which is
> > fine and almost universally accepted. But computers still lack
> > understanding. How does one prove that computers lack understanding?
> > It is easy to say so but there are always counter-arguments and
> > counter-examples (although most are not very sturdy). What
> > philosophical or concrete arguments do you know of, to show that
> > computers do not understand?
> >

> When Computers understand, they will so inform us.

Big deal. The computer program "Eliza" often said "I understand.",
but I seriously doubt it really did.
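For reference, ELIZA's trick was shallow keyword matching with canned
response templates; here is a minimal sketch in that spirit, in Python (the
keyword table is invented for illustration, not Weizenbaum's actual DOCTOR
script):

# Toy ELIZA-style responder: keyword spotting plus canned reply templates.
import re

RULES = [
    (r"\bI am (.+)",                "How long have you been {0}?"),
    (r"\b(mother|father|family)\b", "Tell me more about your family."),
    (r".*",                         "I understand. Please go on."),  # catch-all
]

def respond(sentence):
    for pattern, template in RULES:
        match = re.search(pattern, sentence, re.IGNORECASE)
        if match:
            return template.format(*match.groups())

print(respond("I am worried about computers"))
print(respond("The weather is nice"))          # -> "I understand. Please go on."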


--
David Kastrup Phone: +49-234-700-5570
Email: d...@neuroinformatik.ruhr-uni-bochum.de Fax: +49-234-709-4209
Institut für Neuroinformatik, Universitätsstr. 150, 44780 Bochum, Germany

James Marshall

Nov 30, 1998, 3:00:00 AM
Hi,
one of the best known arguments against computer understanding in AI
has to be Searle's Chinese Room Argument, which is based on the Physical
Symbol Systems Hypothesis of AI (that intelligence can be created
solely through a formal symbol set and rules to act on that symbol set).
Briefly, the argument runs as follows:
Imagine I am in a room full of books containing rules for manipulating
symbols. It just so happens that these symbols are Chinese characters
but I am unaware of that. The rules are written in a language I
understand (English). There are also pencils, paper, etc. to help me in
my task, which is the following: every so often a piece of paper is
pushed under the door to the room I'm in, filled with symbols (Chinese
characters but remember I don't know that, they're just meaningless
symbols to me). My job is to take these input symbols, apply the rules
to them to produce output symbols, and push this output back under the
door. Now, it so happens that the input symbols are actually questions
written in Chinese by a Chinese speaking human outside. The output
produced by the rules I apply are perfectly formed reposnes to these
sentences, also in Chinese. In fact, the human outside has the
impression he's communicating with a native Chinese speaker.
The question is: do I know how to speak Chinese, or even realise what
I'm actually doing? The answer is generally agreed to be no.
As you can see, the Chinese Room is a metaphor for a computer executing
an AI program. I'm the CPU and to some extent the memory. The pencils
and paper are also the memory, and the books containing the rules are
the program code.
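To make the purely mechanical character of my job in the room explicit,
here is a minimal sketch in Python (my own illustration of the metaphor;
the "rule book" entries are invented placeholder tokens standing in for
Chinese sentences):

# Toy Chinese Room: the operator looks up the incoming symbol string in a
# rule book and copies out whatever response the book lists. No meaning is
# attached to either side; it is pure pattern matching.
RULE_BOOK = {
    "SYMBOLS-17-42-08": "SYMBOLS-91-03-55",
    "SYMBOLS-22-10-64": "SYMBOLS-18-77-31",
}

def operator(slip_pushed_under_door):
    # The person in the room: find the matching pattern, copy out the answer.
    return RULE_BOOK.get(slip_pushed_under_door, "SYMBOLS-00-00-00")

print(operator("SYMBOLS-17-42-08"))   # emits SYMBOLS-91-03-55, understanding nothing

Whether that lookup table is held in books or held in memory changes
nothing about what the operator actually does.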
Searle predicted several arguments against his theory and prepared
responses to all of them (the Many Mansions Reply, the Robot Reply, etc.). I
can't remember them all offhand. Anyway, as far as I know, the Chinese
Room Argument still stands. I hope someone will let me know if there
have been new developments.
James

Synosemyne wrote:
>
> Hey all,


>
> Do any of you happen to know some sensible arguments against the
> existence of artificial intelligence in computers? It's true that
> computers are problem-solvers, which some call intelligence, which is
> fine and almost universally accepted. But computers still lack
> understanding. How does one prove that computers lack understanding?
> It is easy to say so but there are always counter-arguments and
> counter-examples (although most are not very sturdy). What
> philosophical or concrete arguments do you know of, to show that
> computers do not understand?
>

> Thank you,
> Synosemyne

Charles D. Chen

Nov 30, 1998, 3:00:00 AM
kreeg wrote in message <36625102...@northernnet.com>...

>Ok, I have a good question for you:
>
>Say we made a really small chip that could do the exact same thing as a
>brain cell. Now say we took a person, and everyday we repaced 100
>braincells, one by one, with a chip. After we replaced all the cells
>(don't count in the several years it would take to actually do this)
>with chips, the brain would now be a computer.... Would you then
>consider that artificial intelligence?


It is an interesting question. The connection is not difficult to implement
following the method you described. And if you can make this kind of chip,
you have already produced a certain kind of intelligence. Can we make this
kind of chip?

Charles

Paul Victor Birke

Nov 30, 1998, 3:00:00 AM
Erik Westlin wrote:

>
> kreeg wrote:
> >
> > Ok, I have a good question for you:
> >
> > Say we made a really small chip that could do the exact same thing as a
> > brain cell. Now say we took a person, and everyday we repaced 100
> > braincells, one by one, with a chip. After we replaced all the cells
> > (don't count in the several years it would take to actually do this)
> > with chips, the brain would now be a computer.... Would you then
> > consider that artificial intelligence?
> >
> > "Fools Follow"
>
> That is a good question also if we replaced cells with other cells.
> When or where does *real* intelligence start?
> Or am i talking about selfconciousness?
> Does AI need to be intelligent?
> Or is intelligence independent of its carrier?
*********************************************************

Dear Erik Westlin

There seems little doubt in my mind that consciousness is a key element
and must be part of the equation in the definition of << intelligence >>.
It is coupled with the intelligence engine, so to say, in a human. There
is a lot of work done by Roger Penrose and others in the last decade on
the question of << what is consciousness >>. But for a fully operational
intelligence, such as us humans (maybe not the best guide at times, but
all we have so far?), I think the two are ultimately linked and cannot be
separated.

I am a bit out of my league here but this is what I believe. BTW, the
incremental chip argument is rather clever.

I now sit back and listen to you all on this rather good thread.

Paul Birke NN Researcher in Guelph ON CANADA

Kenneth Roback

Nov 30, 1998, 3:00:00 AM
Synosemyne wrote:
>
> How does one prove that computers lack understanding?

Well.
I guess we have to trust our own intelligence in this area, since man
created the computer in the first place.

Some questions that might help the reasoning (if you are having trouble with it):

Would your computer produce anything without you starting any program on it?

If you have a program loaded into a computer and run it, does it produce
something you never expected, or does it start to "behave" quite or drastically
differently from what it is supposed to (i.e. in ways not already programmed or
controlled by any rules)?

Does the program by any chance suddenly look different from the original one
since you started it (ruling out memory problems, i.e. parity errors)?

If you answer yes to one of the above questions, then you can really begin to
wonder, and it might be worth looking into seriously.

How do you prove that computers do not lack understanding?

/Kenneth

Jerry Hull

Nov 30, 1998, 3:00:00 AM
On Mon, 30 Nov 1998 00:02:10 -0800, kreeg <kr...@northernnet.com> wrote:

>Ok, I have a good question for you:
>
>Say we made a really small chip that could do the exact same thing as a
>brain cell. Now say we took a person, and everyday we repaced 100
>braincells, one by one, with a chip. After we replaced all the cells
>(don't count in the several years it would take to actually do this)
>with chips, the brain would now be a computer.... Would you then
>consider that artificial intelligence?

Suppose we replaced each cell in the brain with a piece of macaroni that could
do the exact same thing as a brain cell. What, you say macaroni can't do the
same thing as brain cells? Then what makes you think a computer chip can do
the same thing? Because, frankly, we don't really know WHAT it is that brain
cells do that support consciousness, &c.

--
Jer
"Our Father which art in heaven / Stay there
And we will stay on earth / Which is sometimes so pretty."
-- Jacques Prévert

H.J. Gould

Nov 30, 1998, 3:00:00 AM

Charles D. Chen wrote in message <73u2k2$dgo$1...@news.rz.uni-karlsruhe.de>...

>kreeg wrote in message <36625102...@northernnet.com>...
>>Ok, I have a good question for you:
>>
>>Say we made a really small chip that could do the exact same thing as a
>>brain cell. Now say we took a person, and everyday we repaced 100
>>braincells, one by one, with a chip. After we replaced all the cells
>>(don't count in the several years it would take to actually do this)
>>with chips, the brain would now be a computer.... Would you then
>>consider that artificial intelligence?
>
>
>It is a interesting question. The connection is not difficult to implement
>following the method you described. And if you can make this kind of chip,
>you have already produced a certain kind of intelligence. Can we made this
>kind of chip?
>
>Charles
>
>>
>>"Fools Follow"
>>-ZOAD
>>http://www.northernnet.com/kreeg
>>
>
>

This and other arguments pro and con artificial intelligence are well
presented and explained in a book by Douglas R. Hofstadter. I believe the
title was 'The mirror of the soul'. It covers especially the different forms
of Searle's argument, i.e. the Chinese Box argument, and responses to them.
He also presents several other arguments for and against artificial
intelligence.

My own personal belief is that artificial intelligence is entirely possible.
The Chinese Box argument is flawed in that the focus is on the man in the
box mechanically performing certain rules which just happen to map Chinese
questions to Chinese answers, claiming that the man does not know this is
happening and that therefore the system is not intelligent.

If we follow this through to its logical conclusion, man has no intelligence,
because the individual neurons of his/her brain are performing thoughtless
electro-mechanical-chemical actions in accordance with the laws of nature
without being aware of it (the person might be aware of it, but I doubt
his individual neurons are). I would argue that the Chinese Box as a whole
does form an intelligent system.


Jiri Donat

Nov 30, 1998, 3:00:00 AM

Erik Westlin wrote:

> kreeg wrote:
> >
> > Ok, I have a good question for you:
> >
> > Say we made a really small chip that could do the exact same thing as a
> > brain cell. Now say we took a person, and everyday we repaced 100
> > braincells, one by one, with a chip. After we replaced all the cells
> > (don't count in the several years it would take to actually do this)
> > with chips, the brain would now be a computer.... Would you then
> > consider that artificial intelligence?
> >

> > "Fools Follow"
>
> That is a good question also if we replaced cells with other cells.
> When or where does *real* intelligence start?
> Or am i talking about selfconciousness?
> Does AI need to be intelligent?
> Or is intelligence independent of its carrier?
>

> -------------------------------------------------------------------------
>
> Erik Westlin
> email: wes...@msi.se

Dear Erik,
A statement like this is what we (mathematicians) call an "implication". If
the premise is not met, the statement is true. But we cannot build our
understanding of the world on statements which are merely formally true.
Simply speaking, although this statement is true, it is not relevant to our
problem.

(Why is the premise not met? We still do not know what a neuron is => we
cannot produce a really small chip that could do the EXACT SAME thing as a
brain cell.)
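
For completeness, the truth-table fact being appealed to here is just the
standard definition of material implication:

\[
P \Rightarrow Q \;\equiv\; \lnot P \lor Q ,
\]

so whenever the premise \(P\) is false, \(P \Rightarrow Q\) comes out true
no matter what \(Q\) says. Here \(P\) is "we can build a chip that does
exactly what a brain cell does" and \(Q\) is "replacing every cell yields
artificial intelligence"; with \(P\) not established, the implication is
only vacuously true and settles nothing.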

However, you have mentioned a very interesting area, consciousness. I am
working on a theory that our consciousness is just an emergent property of
a neural network. In such a theory, 'intelligence' would indeed be
independent of its carrier.

And I personally believe that this is one of the main things Alife
endeavors to prove.

Best regards, Jiri


Jiri Donat

Nov 30, 1998, 3:00:00 AM
James,
I have the impression that Searle's Chinese Room is just an attempt to
extend our normal computer reality to a more general situation. We all know
computers interpreting code written in a computer language - and these
machines can be very simple and, from our point of view, 'stupid'. Still,
the results can be impressive.

However, the two situations differ:
The rules in Searle's Chinese Room are written in a language I understand
(English). English is not a formal, precise, descriptive language. The
difference between a computer language (and thus any AI program) and any
natural language is hard to overestimate. In simple words, you have to be
intelligent just to understand English.
This is maybe a formal fault of the model, but it prevents the model from
working.

Regards,

Jiri


Jiri Donat

Nov 30, 1998, 3:00:00 AM
kreeg wrote:

> Ok, I have a good question for you:
>

> Say we made a really small chip that could do the exact same thing as a
> brain cell. Now say we took a person, and everyday we repaced 100


> braincells, one by one, with a chip. After we replaced all the cells
> (don't count in the several years it would take to actually do this)
> with chips, the brain would now be a computer.... Would you then
> consider that artificial intelligence?
>
>

Sorry, a statement like this is what we (mathematicians) call an
"implication". If the premise is not met, the statement is true. So what?

Why is the premise not met? We still do not know what a neuron is => we
cannot produce a really small chip that could do the EXACT SAME thing as a
brain cell.

Do you have any better proof, Kreeg?

Best regards from Jiri


> "Fools Follow"
> -ZOAD
> http://www.northernnet.com/kreeg
>

Kenneth Roback

Nov 30, 1998, 3:00:00 AM
Jiri Donat wrote:
>
> I am working on a theory that our consciousness is just an emergent property
> of neural network.

This is my belief as well (however, I'm not working on a theory).

I believe that "consciousness" or "thinking" is experienced as a result of
the freedom in generating results from different inputs and creating new
inputs from those results, and so on.
By freedom I mean the huge number of possible connections available for
"solving" or creating a "result" to act upon.

> In such a theory, 'intelligence' would indeed be independent of its carrier.

Sure, why not.
If there can be intelligent animals and human beings, why couldn't there be
other creatures as well (consisting of chips or whatever)?
As far as I know, today we only refer to living creatures of flesh and blood
(more or less) as intelligent. There is no proof of anything else yet.

There are even people who don't dare to acknowledge intelligence in animals!
(Maybe they feel less "superior" if they do, who knows.)

/Kenneth

Jerry Hull

Nov 30, 1998, 3:00:00 AM
On Mon, 30 Nov 1998 15:16:10 +0100, Kenneth Roback <kenneth...@enator.se>
wrote:

>Jiri Donat wrote:
>>
>> I am working on a theory that our consciousness is just an emergent property
>> of neural network.

And I'm working on the theory that sweetness is an emergent property of
tinkertoy constructs. Just as likely.

Charles D. Chen

Nov 30, 1998, 3:00:00 AM

James Marshall wrote in message <36627D06...@eclipse.co.uk>...


Frankly, I do not like the Chinese Room Argument, so I want to destroy it.
There are at least two assumptions in this argument:
1. The rules in the room cover all the possible situations that will occur
in the conversation with the speaker outside the door.
2. The person in the room can understand all these rules, or at least knows
how to use these rules in exactly the right way.

When certain sentences come from under the door, the person must choose
some rules to deal with them. If the person in the room knows how to choose
the right rules, he must also know the "rules" about how to choose the rules
in the room. Then, if these "rules" have nothing to do with Chinese, he
cannot choose different rules according to the particular input; but if
these "rules" are related to Chinese, then he knows something about Chinese.
How much he knows about Chinese depends on how many of the Chinese-related
"rules", and of the rules in the room, belong to himself. Moreover, it
depends on how much of the memory is regarded as part of "I" along with the
CPU, and how much as something other than "I". Therefore, the tricky part of
this argument is the definition of "I". See the question: do I know how to
speak Chinese? If "I" refers to only the bare CPU, the answer should be no.
If "I" refers to the whole room, which means CPU + memory (all the rules
about Chinese), the answer is ??

thanks,

Charles

The Walrus

Nov 30, 1998, 3:00:00 AM
Jerry Hull wrote:

> On Mon, 30 Nov 1998 15:16:10 +0100, Kenneth Roback <kenneth...@enator.se>
> wrote:
>
> >Jiri Donat wrote:
> >>
> >> I am working on a theory that our consciousness is just an emergent property
> >> of neural network.
>
> And I'm working on the theory that sweetness is an emergent property of
> tinkertoy constructs. Just as likely.

S'funny ... I was working on a theory that bullshit is an emergent property of
usenet.

Think they could be related?

We should be told ...

d.

Jerry Hull

Nov 30, 1998, 3:00:00 AM
On Mon, 30 Nov 1998 15:17:31 +0000, The Walrus <the_w...@BLOCK.bigfoot.com>
wrote:

In my own case, bullshit has been around long before the internet was woven,
though the latter does appear to be an excellent accelerant. <spew> <spew>

Mike Yukish

Nov 30, 1998, 3:00:00 AM
On Mon, 30 Nov 1998 15:16:10 +0100, Kenneth Roback
<kenneth...@enator.se> wrote:

>Jiri Donat wrote:
>>
>> I am working on a theory that our consciousness is just an emergent property
>> of neural network.
>

>This is my belief aswell (however I'm not working on a theory though).
>

[stuff snipped]

I'll throw something out for sniping at... this comes from my background
as a pilot and a systems & controls guy.

In flying, at its most basic the pilot is part of the control system.
He translates inputs to outputs. He is part of the low level control
loop. He can be replaced by an autopilot, which usually does a better
job, at what it is designed to do. But when things go bad (lose an
engine) the pilot can step outside the immediate problem (step outside
of the loop) and re-evaluate his control algorithm to adjust. We now
have two loops, a low level loop and an upper level loop that modifies
the low level loop. After a few of those, the pilot can step outside
the problem of how to adjust, and look at how to train to adjust. With
time, the pilot can step out of that loop and modify it, and so on ad
infinitum. Typically, time scales of interest change with increasing
abstraction.

To my mind, we have the ability to abstract without bound, continually
stepping outside of the immediate loop and modifying our behavior to
optimize. Every artificial system, in contrast, has a well-defined
upper limit of abstraction, near as I can tell. To me, that is a
critical separation between us and them.
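
As a rough sketch of that layered-loop picture in code (my own toy
illustration in Python; the gains, signals, and re-tuning rule are all
made up):

# Inner loop: low-level control toward a setpoint.
# Outer loop: every so often, step outside and re-tune the inner loop.
def inner_step(state, setpoint, gain):
    return state + gain * (setpoint - state)

def retune(gain, recent_errors):
    avg_error = sum(abs(e) for e in recent_errors) / len(recent_errors)
    return gain * 1.5 if avg_error > 1.0 else gain   # crude adjustment rule

state, gain, setpoint = 0.0, 0.05, 10.0
errors = []
for step in range(100):
    errors.append(setpoint - state)
    state = inner_step(state, setpoint, gain)
    if step % 20 == 19:                  # periodically step outside the loop
        gain = retune(gain, errors[-20:])

print(round(state, 2), round(gain, 3))

The open question raised here is whether that tower of loops-modifying-loops
can be continued without bound by people, but only to a fixed depth by
artificial systems.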
*****************************

Mike Yukish
may...@psu.edu
Applied Research Lab/Penn State U.

Lars Kroll Kristensen

Nov 30, 1998, 3:00:00 AM
In <73u6f5$hue$1...@reader2.wxs.nl> "H.J. Gould" <jo...@liemar.nl> writes:

I agree completely with your argument regarding the Chinese Box.

The best argument I have heard regarding the impossibility of machine
intelligence is that any intelligence is by necessity connected to a
body. The counter-counter-argument is of course to simulate the body
as well...

Any opinions on that? Is it possible to generate machine intelligence
without simulating at least a rudimentary "body" for the intelligence?

>Charles D. Chen wrote in message <73u2k2$dgo$1...@news.rz.uni-karlsruhe.de>...
>>kreeg wrote in message <36625102...@northernnet.com>...

>>>Ok, I have a good question for you:
>>>

>>>Say we made a really small chip that could do the exact same thing as a
>>>brain cell. Now say we took a person, and everyday we repaced 100


>>>braincells, one by one, with a chip. After we replaced all the cells
>>>(don't count in the several years it would take to actually do this)
>>>with chips, the brain would now be a computer.... Would you then
>>>consider that artificial intelligence?
>>
>>

>>It is a interesting question. The connection is not difficult to implement
>>following the method you described. And if you can make this kind of chip,
>>you have already produced a certain kind of intelligence. Can we made this
>>kind of chip?
>>
>>Charles
>>
>>>

>>>"Fools Follow"
>>>-ZOAD
>>>http://www.northernnet.com/kreeg
>>>
>>
>>

>This and other arguments pro and con artificial intelligence are well


>presented and explained in a book by Douglas R. Hofstaedter. I believe the
>title was 'The mirror of the soul'. Especially all the different forms of
>Serles Argument i.e. the Chinese Box argument and responses to them. He also
>presents several other arguments for and against artificial intelligence.

>My own personal belief is that artificial intelligence is entirely possible.
>The Chinese Box argument is flawed in that the focus is on the man in the
>box mechanically performing certain rules which just happen to map chinese
>questions to chinese answers, claiming that the man does not know this is
>happening and that therefore the system is not intelligent.

>If we follow this through to its logical conclusion man has no intelligence
>because the individual neurons of his/her brain are performing thoughtless
>electro-mechanical-chemical actions in accordance with the laws of nature
>without being aware of this (the person might be aware of this but i doubt
>his indiviual neuroins are). I would argue that the chinese box as a whole
>does form an intelligent system.

--
Lars Kroll Kristensen Last century :"The Pen is mightier than the
email: kr...@daimi.aau.dk sword."
SnailMail: Sandoegade 13 This century :"The keyboard is mightier than
8200 Aarhus N.DK the M-16"

The Walrus

Nov 30, 1998, 3:00:00 AM

Jerry Hull wrote:

> On Mon, 30 Nov 1998 15:17:31 +0000, The Walrus <the_w...@BLOCK.bigfoot.com>
> wrote:
>
> >S'funny ... I was working on a property that bullshit is an emergent property of
> >usenet.
> >
> >Think they could be related?
>
> In my own case, bullshit has been around long before the internet was woven,
> though the latter does appear to be an excellent accelerant. <spew> <spew>

Hee-hee ... I wasn't talking about you personally, but you could have been forgiven
for applying my comment to this group ;)

Of course, with an excellent accelerant, all one needs to get a really good blaze
going is the right spark ...

d.

James Marshall

Nov 30, 1998, 3:00:00 AM
Hi,

H.J. Gould wrote:
>
> My own personal belief is that artificial intelligence is entirely possible.
> The Chinese Box argument is flawed in that the focus is on the man in the
> box mechanically performing certain rules which just happen to map chinese
> questions to chinese answers, claiming that the man does not know this is
> happening and that therefore the system is not intelligent.
>
> If we follow this through to its logical conclusion man has no intelligence
> because the individual neurons of his/her brain are performing thoughtless
> electro-mechanical-chemical actions in accordance with the laws of nature
> without being aware of this (the person might be aware of this but i doubt
> his indiviual neuroins are). I would argue that the chinese box as a whole
> does form an intelligent system.

Yes, this is the reductionist viewpoint as I understand it. That
intelligence is an emergent property of the behaviour of fundamental
components (e.g. neurons in a brain). This is quite a rich subject which
I will not cover here.
Your last point is paralleled in the Brain Simulator Reply, in which it
was proposed that rules be given to simulate the neuron firings in the
brain of a native Chinese speaker as he hears/reads and responds to
Chinese sentences. Searle's response was to propose that the room
contain water pipes with valves to represent the neurons, and that rules
be provided to dictate which valves to turn on and off and in what
order. Now the room's inhabitant still cannot be claimed to understand
Chinese, or the water pipes to understand it. Not even a conjunction of
the man and the pipes could be said to understand Chinese (through the
Systems Reply below).
More importantly, Searle addressed your argument about the Chinese Room
as a whole understanding Chinese. Actually this wasn't what you argued,
as you said the Chinese Room formed an intelligent system. I think that
might be subtly different to what is at issue in the CRA, and the
question that started this thread, of whether computers can understand.
So, here's the reply. Suppose the man in the room internalises
everything in that room, all the rules books, the pieces of paper and
pencils, and performs the entire process entirely in his head, apart
from reading the symbols from the input card and writing the output
symbols on another card. Does he now understand Chinese? No!
Searle's original paper is more eloquent than I could hope to be... I'm
afraid the reference is incomplete, I'm reading a reprint in "The Mind's
I", Hofstadter, R. & D. Dennet (eds.) Penguin Books 1981:

Searle, J.R (1980) Minds, Brains and Programs. The Behavioral and Brain
Sciences, vol. 3.

James Marshall

Nov 30, 1998, 3:00:00 AM
Hello,

Jiri Donat wrote:
>
> James,
> I have an impression that the Searle's Chinese Room is just an attempt to
> extend our normal computer reality to more general situation. We all know
> computers interpreting a code written in a computer language - and these
> machines can be very simple and from our point of view 'stupid'. Still the
> results can be impressive.
>
> However, these two situations differ:
> The rules in Searle's Chinese Room are written in a language I understand
> (English). English is not a formal, precise, descriptive language. The
> difference between computer language (and thus any AI program) and any

Maybe not, but a subset of English certainly could be formal, precise
and descriptive. There is no reason to suppose that the CRA rules could
not be kept this simple, possibly as some kind of pseudo-code which
would be completely unambiguous, i.e:
IF you receive this sequence of characters
THEN output this sequence of characters
ELSE output another sequence of characters

> native language is hard to overestimate. In simple words, you have to be
> inteligent just to understand English.

Well, in the case I outlined above you would need no more intelligence
than a computer CPU to execute the "English" instructions.

> This is maybe a formal fault of the model, but it prevents the model to
> work.

I strongly disagree. The question addressed by the CRA is whether any
part of the Chinese Room understands Chinese, not whether any part of
the system is intelligent. That was also the original question that
provoked this thread, "are there any arguments against computer
understanding?". I also disagree, as I outlined above, that the Room's
inhabitant need be intelligent.
If you seriously believe that you have identified a fatal weakness in
the CRA, and it is an original thought on your part, please write a
paper on the subject and share it with the scientific community at
large. The CRA has withstood criticism for 18 years as far as I am
aware. If this is not the case I would greatly appreciate references to
the relevant papers.
James

>
> Regards,
>
> Jiri

James Marshall

Nov 30, 1998, 3:00:00 AM
Hello,

Charles D. Chen wrote:
>
>
> Frankly, I do not like the Chinese Room Argument, so I want to destroy it.
> There must be at least two assumptions in this argument.
> 1. the rules in the room cover all the possible situations that will occur
> in the conversation of the out door speaker.

In answer to point 1, it is conceivable that you could have some
generalised rule for dealing with unhandled situations, such as
introducing a different randomly selected topic of conversation (my
invention, don't blame Searle!)
Actually, I don't think this point is important. It is obvious that the
Chinese Room would be impractical (or even impossible?) to implement,
due to the complexity of the problem. However the CRA is a theoretical
argument against the possibility of computer understanding. If
necessary, I'm sure a different scenario which would be more practical
to implement complete rules for could be devised, such as calculating
the trajectory of a missile, or some such thing. The CRA would still
hold, in that the system or any part of it would not understand anything
about missiles, gravity, Newtonian forces etc.

> 2. the people in the room can understand all these rules. At least he know
> how to use these rules in exactly right way.
>
> When certain sentences come from the under of the door, people should choose
> some rules to deal with them. While
> If the people in the room know how to choose the right rules, he must have
> know the "rules" about how to choose the rules in the room. Then, if these
> "rules" have nothing related with Chinese, he can not choose different
> rules according the certain input. else if these "rules" related with
> Chinese,
> then he knows something about Chinese. How much he knows about Chinese
> depend on how much Chinese related "rules" and rules in the room belong to
> himself. Moreover, it depends on how much useful memory which be regarded as
> part of "I" with CPU, other part of "I". Therefore, the tricky of this
> argument
> is the defination of "I". See the quession:do I know how to speak Chinese?
> If "I" refers to only the common CPU, the answer should be no. If "I" refers
> to
> the whole room, which means cpu + memory(all the rules about Chinese), the
> answer
> is ??.
>

The answer is still no, I'm afraid. Searle predicted this argument in
the Systems Reply, which I will now cut and paste from my previous
posting on the subject. Sorry, but I'm short of time!


So, here's the reply. Suppose the man in the room internalises
everything in that room, all the rules books, the pieces of paper and
pencils, and performs the entire process entirely in his head, apart
from reading the symbols from the input card and writing the output
symbols on another card. Does he now understand Chinese? No!

I'm interested to know why you think he has to have rules to choose
which rules to use! Why could these rules not be also stored in the rule
books? In the end it doesn't matter where the rules are stored though
(in his memory or in the books), as shown in the Systems Reply above.
There's also no need to assume some knowledge about Chinese in choosing
which rules to use. The fundamental rule to start the system could
simply be, search the rule books until you find the symbol pattern that
you have received through the door. Simple pattern matching and
requiring no knowledge of Chinese!

If you want to destroy the Chinese Room Argument, try reading it in full
first, including the predicted replies! That's essential for any theory,
even if you don't like it!
Searle, J.R (1980) Minds, Brains and Programs, The Behavioral and Brain
Sciences, vol. 3
also in:
"The Mind's I", Hofstadter, D. & D. Dennet (eds.) Penguin Books 1981

James

> thanks,
>
> Charles

Mr S A Penny

Nov 30, 1998, 3:00:00 AM
In article <36625102...@northernnet.com>,

kreeg <kr...@northernnet.com> writes:
>Ok, I have a good question for you:
>
>Say we made a really small chip that could do the exact same thing as a
>brain cell. Now say we took a person, and everyday we repaced 100
>braincells, one by one, with a chip. After we replaced all the cells
>(don't count in the several years it would take to actually do this)
>with chips, the brain would now be a computer.... Would you then
>consider that artificial intelligence?

several years?
several hundred million by conservative estimates!
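
For scale (taking the commonly cited figure of roughly \(10^{11}\) neurons
in a human brain, which is my assumption, not a number from the thread):

\[
\frac{10^{11}\ \text{neurons}}{100\ \text{neurons/day}}
= 10^{9}\ \text{days} \approx 2.7 \times 10^{6}\ \text{years},
\]

i.e. a timescale of millions of years of daily swaps rather than "several
years".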

SammyTheSnake
--
SammyT...@SammyServ.Tollon.co.uk SammyT...@Hotmail.com
PHUAE S.A....@Warwick.ac.uk (E)TLA page http://www.warwick.ac.uk/~phuae/
http://www.warwick.ac.uk/~phuae/StSim/index.html last update 2/11/98
--==<< StSim is a project to produce a neural network based quake bot >>==--

Charles D. Chen

Nov 30, 1998, 3:00:00 AM

James Marshall wrote in message <3662D4F1...@eclipse.co.uk>...

>The answer is still no, I'm afraid. Searle predicted this argument in
>the Systems Reply, which I will now cut and paste from my previuos
>posting on the subject. Sorry, but I'm short of time!
>So, here's the reply. Suppose the man in the room internalises
>everything in that room, all the rules books, the pieces of paper and
>pencils, and performs the entire process entirely in his head, apart
>from reading the symbols from the input card and writing the output
>symbols on another card. Does he now understand Chinese? No!


>I'm interested to know why you think he has to have rules to choose
>which rules to use! Why could these rules not be also stored in the rule
>books? In the end it doesn't matter where the rules are stored though
>(in his memory or in the books), as shown in the Systems Reply above.
>There's also no need to assume some knowledge about Chinese in choosing
>which rules to use. The fundamental rule to start the system could
>simply be, search the rule books until you find the symbol pattern that
>you have received through the door. Simple pattern matching and
>requiring no knowledge of Chinese!


The reason why I use "rules" here is that there is no clear definition
of "understand". Take the fundamental rule you described: if you can "find
the symbol pattern that you have received through the door" in the rule
book, you must have the knowledge to distinguish the Chinese character
patterns from other patterns, and even to distinguish the different Chinese
character patterns from one another. Yes, this pattern matching may be
simple, but it is still based on knowledge about the target object -
Chinese. If there is some knowledge about Chinese in his head, no matter
how limited it is, it is difficult to say that he absolutely cannot
understand Chinese.

Now, we cannot avoid the definition of "understand". Only when we have a
clear definition of "understand" can we answer this argument. It rests on
the assumption that if something can "understand", it is intelligent.

>
>If you want to destroy the Chinese Room Argument, try reading it in full
>first, including the predicted replies! That's essential for any theory,
>even if you don't like it!
>Searle, J.R (1980) Minds, Brains and Programs, The Behavioral and Brain
>Sciences, vol. 3
>also in:
>"The Mind's I", Hofstadter, D. & D. Dennet (eds.) Penguin Books 1981
>
> James
>

You are right. I should read it and "understand" it first.

Charles


Jim Balter

Nov 30, 1998, 3:00:00 AM
H.J. Gould wrote:

> This and other arguments pro and con artificial intelligence are well
> presented and explained in a book by Douglas R. Hofstaedter. I believe the
> title was 'The mirror of the soul'.

_The Mind's I:Fantasies and Reflections on Self and Soul_,
arranged and composed by Douglas R. Hofstadter and Daniel C. Dennett,
Basic Books, Inc., 1981, ISBN 0-553-01412-9.

> Especially all the different forms of
> Serles Argument i.e. the Chinese Box argument and responses to them. He also
> presents several other arguments for and against artificial intelligence.
>

> My own personal belief is that artificial intelligence is entirely possible.
> The Chinese Box argument is flawed in that the focus is on the man in the
> box mechanically performing certain rules which just happen to map chinese
> questions to chinese answers, claiming that the man does not know this is
> happening and that therefore the system is not intelligent.
>
> If we follow this through to its logical conclusion man has no intelligence
> because the individual neurons of his/her brain are performing thoughtless
> electro-mechanical-chemical actions in accordance with the laws of nature
> without being aware of this (the person might be aware of this but i doubt
> his indiviual neuroins are). I would argue that the chinese box as a whole
> does form an intelligent system.

This is the most common response, the "Systems Response", which is
indeed valid, and Searle has never given a logical rebuttal of it.
He does throw a verbal tantrum, reviling people for imagining that a
person who doesn't understand Chinese together with "little bits of
paper" could possibly understand Chinese. Of course, since the claim he
is disputing is that a computer can understand Chinese, and functionally
computer memory can be "little bits of paper", Searle's complaint that
people who can imagine this must be "in the grips of an ideology" is the
worst sort of question begging.

What is truly distressing is that the Chinese Room Argument, which
is quite arguably the worst presented and argued thought experiment
in the recent history of philosophy, and has been refuted in many
different ways, is taken seriously. Aside from Searle's question
begging and the emotion-laden ad hominems that he employs, the whole
exercise is malformed, a giant irrelevancy. Searle motivates the
experiment with "One way to test any theory of the mind is to ask
oneself what it would be like if my mind actually worked on the
principles that the theory says all minds work on". But in the
experiment itself, Searle's mind works the way it always has!
That is, nothing at all is said or demonstrated about how the mind
of the Searle homunculus works; it's just plain old John Searle, as we
know him and he knows himself -- including that he doesn't currently
understand Chinese. Rather, it is the *Chinese Room* which is
set up to work "on the principles that the theory says all minds
work on". But does Searle identify his mind with the Chinese Room at
any point? No, quite the opposite; it is this identification that
he rejects, a priori. In fact, if he were to make such an
identification, he would be led to conclude that the Chinese Room
*does* understand Chinese, since human minds that exhibit the behavior
of the Chinese Room are taken by all of us to understand Chinese.

It's just obvious to Searle, a matter of "common sense", that the
Chinese Room cannot understand Chinese, and in order to justify his
belief, he grasps for an "argument" to support it. That he offers up
such an incredible piece of garbage, violating every principle of
rational argumentation, and that so many people accept it as valid,
suggests who really is "in the grips of an ideology".

--
<J Q B>

Jim Balter

Nov 30, 1998, 3:00:00 AM
Jerry Hull wrote:
>
> On Mon, 30 Nov 1998 15:17:31 +0000, The Walrus <the_w...@BLOCK.bigfoot.com>
> wrote:
>
> >Jerry Hull wrote:
> >
> >> On Mon, 30 Nov 1998 15:16:10 +0100, Kenneth Roback <kenneth...@enator.se>
> >> wrote:
> >>
> >> >Jiri Donat wrote:
> >> >>
> >> >> I am working on a theory that our consciousness is just an emergent property
> >> >> of neural network.
> >>
> >> And I'm working on the theory that sweetness is an emergent property of
> >> tinkertoy constructs. Just as likely.

Given that we have no examples of complex tinkertoy constructs producing
sweetness, and that in fact that would be inconsistent with our current
body of physical theory, and that we do have models and theories that
show how various particular observable aspects of conscious beings
can arise from certain neural nets (namely the ones in human brains),
and we even have theories, however tenuous, that provide a framework for
explaining the accompaniment of "felt experience" with those nets,
then this claim of equal likelihood is prima facie false.

> >S'funny ... I was working on a property that bullshit is an emergent property of
> >usenet.
> >
> >Think they could be related?
>
> In my own case, bullshit has been around long before the internet was woven,
> though the latter does appear to be an excellent accelerant. <spew> <spew>

The usenet is a broadcast and feedback system. Attach to it several
neural nets with a tendency to generate bullshit, and we can expect that
tendency to grow, often to the point where little else
is left.

> "Our Father which art in heaven / Stay there
> And we will stay on earth / Which is sometimes so pretty."

It's all that fertilizer.

--
<J Q B>

Jim Balter

Nov 30, 1998, 3:00:00 AM
James Marshall wrote:

> Charles D. Chen wrote:

> See the quession:do I know how to speak Chinese?
> > If "I" refers to only the common CPU, the answer should be no. If "I" refers
> > to
> > the whole room, which means cpu + memory(all the rules about Chinese), the
> > answer
> > is ??.
> >
>

> The answer is still no, I'm afraid.

There is no basis for your fear or your conclusion. That you and Searle
are able to give a vague description of a scenario without exploring any
of the details or implications, and without entertaining any
counterexample or reading any of the literature that explains the flaws
in your conceptions, and then claim, with no justification whatsoever,
that the mere presentation of this ill-formed scenario is tantamount
to the deep conclusion "no understanding", has no bearing on the actual
answer.

> Searle predicted this argument in
> the Systems Reply, which I will now cut and paste from my previuos
> posting on the subject. Sorry, but I'm short of time!
> So, here's the reply. Suppose the man in the room internalises
> everything in that room, all the rules books, the pieces of paper and
> pencils, and performs the entire process entirely in his head, apart
> from reading the symbols from the input card and writing the output
> symbols on another card. Does he now understand Chinese? No!

Suppose I internalize a Chinese person's brain, all the neurons,
neural topology, neurotransmitter levels, and so on, and perform the
entire process that the Chinese brain performs entirely in my head. Do
I now understand Chinese?

Saying no would be hard to reconcile with the fact that we take native
Chinese speakers to understand Chinese. But this desire to say no seems
to be predicated on our actually comprehending what it would mean to
internalize all this, and that we could internalize all this without
undergoing some radical change, such as, say, coming to understand
Chinese. But clearly neither of those hold without argument, so
there is a great deal of question-begging going on here.

> If you want to destroy the Chinese Room Argument, try reading it in full
> first, including the predicted replies!

If you read Searle's paper, you should know that he never claimed to
have predicted the Systems Reply; this seems to be your attempt to
raise Searle to some status he doesn't deserve. The SR is and was
implicit in Schank's work and throughout the systems literature,
so it hardly needed prediction, and Searle's counterarguments
are to six different responses that include the schools primarily
associated with those replies. Perhaps you think that Searle also
predicted who would give what reply? Quite a feat! Perhaps you are
under the impression that Searle published his paper without ever
talking to anyone about it beforehand? If so, your mental image of the
process Searle went through is as underdeveloped as your mental image
of the internalization of a program that understands Chinese.

Imagine that you are anencephalic -- that you have no brain.
This is quite consistent with the fact that you do not understand
Chinese. Now, transplant a Chinese person's brain into your skull.
Despite the fact that you can now speak fluently in Chinese and give
every appearance of understanding Chinese, do you understand Chinese?
No! The mere internalization of the workings of a brain into a person
without a brain doesn't change the fact that the brainless person
doesn't understand Chinese. .... Or does it? If it does, then there is
reason to think that internalization of "everything in the room"
involves a similar change.

> That's essential for any theory,
> even if you don't like it!

It is also essential to read the prevailing counterarguments.
In fact, Hofstadter provides an extensive response to Searle's
internalization scenario in his reflections on Searle's paper
but, as is usually the case, you only reference Searle's paper but not
Hofstadter's response a page later. This may paint the false picture
that Searle's is the last word on the subject, which is
hardly the case. I suggest that you (re-)read Hofstadter's piece,
and see if you can understand it and rebut it, before painting yourself
into an intellectual corner with your emphatic "No!" You should also
read David Chalmers' straightforward rebuttal of Searle on pp 322-326
of _The Conscious Mind_ (in fact, read the whole chapter, where Chalmers
elegantly defends Strong AI).

> Searle, J.R (1980) Minds, Brains and Programs, The Behavioral and Brain
> Sciences, vol. 3
> also in:
> "The Mind's I", Hofstadter, D. & D. Dennet (eds.) Penguin Books 1981

"Dennett", please. And they are not merely editors; each article
comes with a substantial "reflection".

--
<J Q B>

Keith Wiley

Nov 30, 1998, 3:00:00 AM
> > > Say we made a really small chip that could do the exact same thing as a
> > > brain cell. Now say we took a person, and everyday we repaced 100
> > > braincells, one by one, with a chip. After we replaced all the cells
> > > (don't count in the several years it would take to actually do this)
> > > with chips, the brain would now be a computer.... Would you then
> > > consider that artificial intelligence?
> > >
> I am a bit out of my league here but this is what I believe. BTW, the
> incremental chip argument is rather clever.

This is a common argument in favor of the notion of mind uploading, a slightly
different phenomenon than artificial intelligence. Frankly, I find it quite compelling.

Keith Wiley
Email: kwi...@tigr.org
WWW: http://www.tigr.org/~kwiley/

Keith Wiley

Nov 30, 1998, 3:00:00 AM
Jerry Hull wrote:
>
> On Mon, 30 Nov 1998 15:16:10 +0100, Kenneth Roback <kenneth...@enator.se>
> wrote:
>
> >Jiri Donat wrote:
> >>
> >> I am working on a theory that our consciousness is just an emergent property
> >> of neural network.
>
> And I'm working on the theory that sweetness is an emergent property of
> tinkertoy constructs. Just as likely.

Is this remark meant to imply that you don't believe consciousness emerges
from an interacting neural network? There are only two other popular
theories. One is the existence of a religiously based soul, and the other is
Penrose's quantum consciousness. Since these two theories are far more flaky
than the emergent consciousness theory, perhaps you would like to enlighten
the rest of us as to your theory on consciousness. You sure sound like you
think you know.

Keith Wiley

Nov 30, 1998, 3:00:00 AM
> >Say we made a really small chip that could do the exact same thing as a
> >brain cell. Now say we took a person, and everyday we repaced 100
> >braincells, one by one, with a chip. After we replaced all the cells
> >(don't count in the several years it would take to actually do this)
> >with chips, the brain would now be a computer.... Would you then
> >consider that artificial intelligence?
>
> It is a interesting question. The connection is not difficult to implement
> following the method you described. And if you can make this kind of chip,
> you have already produced a certain kind of intelligence. Can we made this
> kind of chip?

Not today, but it won't be more than a couple decades at most. Keep an eye out.

Jerry Hull

Nov 30, 1998, 3:00:00 AM
On Mon, 30 Nov 1998 12:03:10 -0800, Jim Balter <j...@sandpiper.net> wrote:

>Jerry Hull wrote:
>>
>> On Mon, 30 Nov 1998 15:17:31 +0000, The Walrus <the_w...@BLOCK.bigfoot.com>
>> wrote:
>>

>> >Jerry Hull wrote:
>> >
>> >> On Mon, 30 Nov 1998 15:16:10 +0100, Kenneth Roback <kenneth...@enator.se>
>> >> wrote:
>> >>
>> >> >Jiri Donat wrote:
>> >> >>
>> >> >> I am working on a theory that our consciousness is just an emergent property
>> >> >> of neural network.
>> >>
>> >> And I'm working on the theory that sweetness is an emergent property of
>> >> tinkertoy constructs. Just as likely.
>

>Given that we have no examples of complex tinkertoy constructs producing
>sweetness, and that in fact that would be inconsistent with our current
>body of physical theory, and that we do have models and theories that
>show how various particular observable aspects of conscious beings
>can arise from certain neural nets (namely the ones in human brains),
>and we even have theories, however tenuous, that provide a framework for
>explaining the accompaniment of "felt experience" with those nets,
>then this claim of equal likeihood is prima facie false.

We have no examples of computer neural nets producing consciousness, and we
have models and theories according to which that is strictly impossible.

No doubt there is a some level a similarity between neural nets in computers
and the neuron assemblages in the human brain. But at the same time, the
structures in the brain that have those similarities also have many features
that are NOT shared by neural nets. And there are many structures & levels in
the brain that have no neural net analog. On the other hand, nothing prevents
our tinkertoy constructs from exhibiting the molecular forms of glucose,
sucrose, &c. Arguably, the tinkertoy constructs are as like unto things that
are sweet as neural nets are like unto things that think.

I note that the "observable aspects of conscious beings" do NOT include
consciousness. And in what sense can the claim that a property is emergent
from X be thought to be an EXPLANATION of that property? The very claim that
it is emergent entails that it is NOT explained by X.

--
Jer


"Our Father which art in heaven / Stay there
And we will stay on earth / Which is sometimes so pretty."

-- Jacques Prévert

Jerry Hull

Nov 30, 1998, 3:00:00 AM
On Mon, 30 Nov 1998 16:22:15 -0500, Keith Wiley <kwi...@tigr.org> wrote:

>Jerry Hull wrote:
>>
>> On Mon, 30 Nov 1998 15:16:10 +0100, Kenneth Roback <kenneth...@enator.se>
>> wrote:
>>
>> >Jiri Donat wrote:
>> >>
>> >> I am working on a theory that our consciousness is just an emergent property
>> >> of neural network.
>>
>> And I'm working on the theory that sweetness is an emergent property of
>> tinkertoy constructs. Just as likely.
>

>Is this remark meant to imply that you don't believe consciousness emerges
>from an interacting neural network? There are only two other popular
>theories. One is the existence of a religiously based soul, and the other is
>Penrose's quantum consciousness. Since these two theories are far more flaky
>than the emergent consciousness theory, perhaps you would like to enlighten
>the rest of us as to your theory on consciousness. You sure sound like you
>think you know.

I think we DON'T KNOW what it is about brains that brings about consciousness.
I believe that there are strong arguments that computer operations do not
constitute the basic building blocks of conscious thought, & the idea that you
can overcome this by piling on A LOT of operations of a particular kind seems
wishful thinking at best.

Or are you suggesting that I have no right to criticize a prima facie
preposterous theory if I have no positive theory of my own to replace it?

Keith Wiley

Nov 30, 1998, 3:00:00 AM
> >Say we made a really small chip that could do the exact same thing as a
> >brain cell. Now say we took a person, and everyday we repaced 100
> >braincells, one by one, with a chip. After we replaced all the cells
> >(don't count in the several years it would take to actually do this)
> >with chips, the brain would now be a computer.... Would you then
> >consider that artificial intelligence?
>
> Suppose we replaced each cell in the brain with a piece of macaroni that could
> do the exact same thing as a brain cell. What, you say macaroni can't do the
> same thing as brain cells? Then what makes you think a computer chip can do
> the same thing? Because, frankly, we don't really know WHAT it is that brain
> cells do that support consciousness, &c.

A couple of thoughts:

No one has ever insisted that macaroni couldn't make a good neural
replacement, but our *present* knowledge of macaroni suggests that it's a bad
place to start. What makes me think a computer can replicate the action of a
neuron? A vast number of things. For one, we have a fairly good idea of how
neurons work at the chemical level. At the atomic level, the universe is
still rather hazy, but at the molecular and certainly at the chemical level,
we've got neurons pretty much figured out...and, in terms of replicating the
type of signal analysis and transmission that neurons perform, we can make
computer chips that do the same thing; it's already been done, so don't bother
questioning our ability to do it. We can model it in software and we can
build hardware that does it. We haven't built trillions of them and wired
them together because it's prohibitively expensive and the technology is still
not viable in vast quantities, but it's perfectly doable from a manufacturing
standpoint.
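
To make "model it in software" concrete, here is a minimal sketch of one
standard textbook abstraction, a leaky integrate-and-fire unit, written in
Python. Every name and number in it (the function, the parameters, the
constant drive) is an illustrative assumption, not a description of any
actual chip or of the specific models referred to above.

# Hypothetical sketch: a leaky integrate-and-fire neuron, one common software
# abstraction of the "signal analysis and transmission" discussed above.
def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
                 v_reset=-70.0, v_threshold=-50.0):
    """Return spike times (ms) for a list of input currents sampled every dt ms."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # membrane potential decays toward rest while integrating the input
        v += (-(v - v_rest) + i_in) * (dt / tau)
        if v >= v_threshold:        # threshold crossed: emit a spike and reset
            spikes.append(step * dt)
            v = v_reset
    return spikes

if __name__ == "__main__":
    constant_drive = [20.0] * 200   # 200 ms of constant input current
    print(simulate_lif(constant_drive))

Feeding it a constant drive produces a regular spike train, which is the kind
of input/output behaviour the hardware versions reproduce.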

Now, while we may fundamentally understand what is going on inside a neuron to
the point that we can make superficial models of it, we don't know for certain
yet that we can manufacture a copy of an existing neuron. Logical speculation
based on further research involving finer levels of observation suggests that
we will be able to soon, but that issue is open to debate. That question is
not relevant to building artificial intelligence though; it is only relevant
to mind uploading, in which one tries to make an AI that is a replica of an
existing brain.

What is not really arguable anymore is the basic question of building a
machine that does what a neuron does. We can do it. I don't think we can do
it at 1x scale, but I think we can at 10x or 100x scale. The components
etched into present computer chips are quite a bit larger than the internal
structures of a neuron, but are vastly smaller than the neuron itself, and our
fabrication techniques have been producing smaller features without a sign of
abatement for decades now. You lack the patience that computer and
neurological research has most definitely earned.

One last point: You say we can't make a computer do what a neuron does
because we don't know what a neuron does that supports consciousness. These
are two unrelated issues. Our ability to make a neuron and our ability to
understand it are independent problems. Scientists and industrial factories
manufacture artificial magnets by the ton every day, but no one has a real
clue what magnetism is yet. People were wearing spectacles long before we had
a modern understanding of light, much less the 20th century merger of wave and
particle theories. It is extremely likely that we will create artificial
intelligence before we have a solid scientific explanation of consciousness.
I am personally quite certain AI will come first as a matter of fact, if for
no other reason, because we seem to be much closer to that goal.

There, I'm done now.

Sergio Navega

unread,
Nov 30, 1998, 3:00:00 AM11/30/98
to
Lars Kroll Kristensen wrote in message <73uhe4$247$1...@xinwen.daimi.au.dk>...

>In <73u6f5$hue$1...@reader2.wxs.nl> "H.J. Gould" <jo...@liemar.nl> writes:
>
>I agree completely with your argument regarding the chinese box.
>
>The best arguments I have heard regarding the impossibility of machine
>intelligence is that any intelligence is by necessity connected to a
>body. The counter counter argument is of course to simulate the body
>as well...
>
>Any opinions on that ? Is it possible to generate machine intelligence
>without simulating at least a rudimentary "body" for the intelligence ?
>


Here we go again with Searle! I promised myself not to enter this
kind of discussion anymore. Let's make one exception.

Searle's argument is strong and fragile at the same time. I hope to be
clear enough with my arguments.

It is strong because it showed clearly that a computer fed with a bunch
of symbols from any human language *will not* develop a *human-equivalent*
level of understanding of the world. Inside the Chinese Room you may
put anything: a Cray, a Sun workstation, an "idiot savant" or a
group of 10 "Einsteins". Their performance in the task of understanding
Chinese (or the world outside the room, for that matter), will be
miserably poor.

It is fragile because our brain does, in a way, something *very* similar
to the Chinese Room and all understanding that we have about the universe
around us is obtained through an analogous process. This argument needs
more space to be clarified.

Think about our brain as the entity we're trying to leave inside the
room. Everything this brain captures from the outside world comes from
the sensory perceptions, things that are "outside" the room. This brain
is fed only with signals (pulses) in which all that is relevant is the
timing aspect between the spikes. The brain in that room does not have
only one "door" through which it receives these pulses, but a large
quantity of them, coming from the primary sensory inputs (vision,
audition, etc) and also from others (exteroceptive, proprioceptive,
interoceptive), responsible for our "feeling" of internal events in
our body.

The train of pulses received is comparable to the chinese inputs because
it is a codification of external signals (light, for example, will be
translated from photons to a sequence of pulses). What this brain "sees"
of the world is a careful transformation, made by our sensory equipment.
It is *not* the external signal! To communicate what it senses, our
vision uses a "syntax" of its own to codify what is received into
corresponding pulses.

This brain must find meaning in the incoming pulses. It has only
one way of starting this process: looking for patterns and correlations
among the received pulses, temporal "coincidences", things that happen
one after the other, and so on.
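
As one crude illustration of what "looking for temporal coincidences" could
mean at the level of pulse trains, here is a small Python sketch. The spike
times, the window size and the function name are all invented for the
example; nothing here is meant as a claim about how the brain actually does it.

# Hypothetical sketch: count how often a pulse on channel A is followed,
# within a short window, by a pulse on channel B.
def coincidences(spikes_a, spikes_b, window=5.0):
    """Count pairs (a, b) with 0 <= b - a <= window (times in ms)."""
    return sum(1 for a in spikes_a for b in spikes_b if 0 <= b - a <= window)

channel_a = [10, 30, 55, 80]
channel_b = [12, 33, 90]          # tends to fire shortly after channel A
channel_c = [5, 47, 71]           # unrelated timing
print(coincidences(channel_a, channel_b))   # higher count -> correlated channels
print(coincidences(channel_a, channel_c))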

Here is where Searle's vision of the problem needed more development:
our brain is looking for meaning in the "syntax" of the pulses, much
as a human in the original room would start looking for meaning
in the syntax of the incoming chinese symbols. This is enough for us
to see that a human being fed with chinese symbols would be able, after
some time, to perceive some *regularities* in the chinese phrases.

This will be enough for that human to start conjecturing the remainder
of a phrase from its initial words. Obviously, this is a far cry from
understanding chinese, but it will be, I claim, the kind of
"meaning" that can be extracted from the syntax of the chinese phrases.

Now guess what: if this human in the chinese room is allowed to look
at a photograph linked to the text he receives (say a photo of a sun for
the phrase "the sun is rising"), after some time he will be able to
ascribe *meaning* to the symbols he receives (he will identify the
word "sun" after some experiences). This photograph will
enter his eyes and will be converted into spikes, will resonate
in his visual cortex and will inform him of the "meaning": what
he knows about "suns", which every (sighted) human knows. In this
case, it is a "chinese room" inside another.

Searle failed to perceive that his brain is inside a "room" being
fed with data from the world and deriving meaning from
what it receives. All of us who work with Artificial Intelligence
must be aware of what this "means".

Regards,
Sergio Navega.

Jim Balter

unread,
Nov 30, 1998, 3:00:00 AM11/30/98
to
Sergio Navega wrote:

> Here we go again with Searle! I promised myself not entering this
> kind of discussion anymore. Let's make one exception.
>
> Searle's argument is strong and fragile at the same time. I hope to be
> clear enough with my arguments.
>
> It is strong because it showed clearly that a computer fed with a bunch
> of symbols from any human language *will not* develop a *human-equivalent*
> level of understanding of the world.

It most certainly did not do any such thing! Please read the literature
that *rebuts* Searle's argument. It is widely believed among computer
scientists that Searle's argument is flawed. It cannot therefore have
"clearly" shown what it is purported to have shown, even if it *did*
show it.

> Inside the Chinese Room you may
> put anything: a Cray, a Sun workstation, an "idiot savant" or a
> group of 10 "Einsteins". Their performance in the task of understanding
> Chinese (or the world outside the room, for that matter), will be
> miserably poor.

The *premise* of the CR is that the Chinese Room itself is
*competent* in all tasks which require an understanding of Chinese,
even if it doesn't "actually" understand Chinese. Searle grants
that the *behavior* of the CR is equivalent to that of one who
understands Chinese. His argument is strictly *metaphysical*.

Let's at least get the fundamentals of this thought experiment right
before pursuing its implications.

--
<J Q B>

Jerry Hull

unread,
Nov 30, 1998, 3:00:00 AM11/30/98
to
On Mon, 30 Nov 1998 16:43:14 -0500, Keith Wiley <kwi...@tigr.org> wrote:

>> >Say we made a really small chip that could do the exact same thing as a
>> >brain cell. Now say we took a person, and everyday we repaced 100
>> >braincells, one by one, with a chip. After we replaced all the cells
>> >(don't count in the several years it would take to actually do this)
>> >with chips, the brain would now be a computer.... Would you then
>> >consider that artificial intelligence?
>>
>> Suppose we replaced each cell in the brain with a piece of macaroni that could
>> do the exact same thing as a brain cell. What, you say macaroni can't do the
>> same thing as brain cells? Then what makes you think a computer chip can do
>> the same thing? Because, frankly, we don't really know WHAT it is that brain
>> cells do that support consciousness, &c.
>
>A couple of thoughts:
>
>No one has ever insistently claimed that macaroni wouldn't make a good neural
>replacement, but our *present* knowledge of macaroni suggests that it's a bad
>place to start. What makes me think a computer can replicate the action of a
>neuron? A vast number of things. For one, we have a fairly good idea of how
>neurons work at the chemical level. At the atomic level, the universe is
>still rather hazy, but at the molecular and certainly at the chemical level,
>we've got neurons pretty much figured out...and, in terms of replicating the
>type of signal analysis and transmission that neurons perform, we can make
>computer chips that do the same thing, it's already been done, so don't bother
>questioning our ability to do it. We can model it in software and we can
>build hardware that does it.

Why are you so sure that it is ONLY the signal processing of neurons that
makes them relevant to consciousness? Prima facie this is absurd;
consciousness is not signal processing. At the chemical and biological levels
neurons are doing a lot more than that.

> We haven't built trillions of them and wired
>them together because it's prohibitively expensive and the technology is still
>not viable in vast quantities, but it's perfectly doable from a manufacturing standpoint.

Yeah, but so what?

>Now, while we may fundamentally understand what is going on inside a neuron to
>the point that we can make superficial models of it, we don't know for certain
>yet that we can manufacture a copy of an existing neuron. Logical speculation
>based on further research involving finer levels of observation suggests that
>we will be able to soon, but that issue is open to debate. That question is
>not relevant to building artificial intelligence though, it is only relevant
>to mind uploading, in which one tries to make an AI that is a replica of an
>existing brain.

Agreed, but if you are going to try to construct a conscious being, the
obvious guide would be the characteristics of beings that are conscious.

>What is not really arguable anymore is the basic question of building a
>machine that does what a neuron does. We can do it. I don't think we can do
>it at 1x scale, but I think we can at 10x or 100x scale. The components
>etched into present computer chips are quite a bit larger than the internal
>structures of a neuron, but are vastly smaller than the neuron itself, and our
>techniques have been getting smaller without a sign of abatement for decades
>now. You lack the patience that computer and neurological research has most
>definitely earned.

My patience is beside the point. My basic claim, remember, is that there is a
lot more going on in the brain than can be replicated via basic signal
processing. For example, it has been claimed that in every brain cell there
is a little magnetic node, similar to those in one celled animals which
(appear to) use them for directionality. What if these magnetic nodes turned
out to be important for consciousness?

>One last point: You say we can't make a computer do what a neuron does
>because we don't know what a neuron does that supports consiousness. These
>are two nonrelated issues. Our ability to make a neuron and our ability to
>understand it are independent problems. Scientists and industrial factories
>manufacture artificial magnets by the ton every day, but no one has a real
>clue what magnetism is yet.

Not sure where you posit this cluelessness. Hasn't electromagnetism been
pretty well understood since Maxwell?


> People were wearing spectacles long before we had
>a modern understanding of light, much less the 20th century merger of wave and
>particle theories.

To understand light enough to make spectacles you don't need QM.

> It is extremely likely that we will create articificial
>intelligence before we have a solid scientific explanation of consciousness.

Depends on what you mean by "artificial intelligence". Anyway, I've been
talking about CONSCIOUSNESS, which to a good degree appears independent of
intelligence (not really a zinger even though it sounds like one).

Neil Rickert

unread,
Nov 30, 1998, 3:00:00 AM11/30/98
to
ZZZg...@stny.lrun.com (Jerry Hull) writes:

>Or are you suggesting that I have no right to criticize a prima facie
>preposterous theory if I have no positive theory of my own to replace it?

Hull is prima facie preposterous.

There are many very smart people who disagree with you on computer
intelligence. It is quite possible that these many very smart people
are mistaken. It is not possible that they are doing something that
is prima facie preposterous.


Chris Mesterharm

unread,
Nov 30, 1998, 3:00:00 AM11/30/98
to
James Marshall <home...@eclipse.co.uk> writes:

>Your last point is paralleled in the Brain Simulator Reply, in which it
>was proposed that rules be given to simulate the neuron firings in the
>brain of a native Chinese speaker as he hears/reads and responds to
>Chinese sentences. Searle's response was to propose that the room
>contain water pipes with valves to represent the neurons, and that
>rules be provided to dictate which valves to turn on and off and in
>what order. Now the room's inhabitant still cannot be claimed to
>understand Chinese, or the water pipes to understand it. Not even a
>conjunction of the man and the pipes could be said to understand
>Chinese (through the System Reply below).

I don't think this is the issue. The point is that many people
believe his argument also applies to neurons in the brain. This is a
contradiction because most people (including Searle) believe that
brains have the ability to understand.

All the Chinese room reveals is that we don't yet understand what it
means to understand. Searle is using a folk psychology notion of
understanding and, not surprisingly, comes up with problems. After we
discover what it means for a person to understand Chinese, we can
worry about teaching rooms Chinese.

Chris Mesterharm

Jim Balter

unread,
Nov 30, 1998, 3:00:00 AM11/30/98
to
Chris Mesterharm wrote:
>
> James Marshall <home...@eclipse.co.uk> writes:
>
> >Your last point is paralleled in the Brain Simulator Reply, in which it
> >was proposed that rules be given to simulate the neuron firings in the
> >brain of a native Chinese speaker as he hears/reads and responds to
> >Chinese sentences. Searle's response was to propose that the room
> >contain water pipes with valves to represent the neurons, and that
> >rules be provided to dictate which valves to turn on and off and in
> >what order. Now the room's inhabitant still cannot be claimed to
> >understand Chinese, or the water pipes to understand it. Not even a
> >conjunction of the man and the pipes could be said to understand
> >Chinese (through the System Reply below).
>
> I don't think this is the issue. The point is that many people
> believe his argument also applies to neurons in the brain. This is a
> contradiction because most people (including Searle) believe that
> brains have the ability to understand.

Indeed, a conjunction of the man and the pipes *could* be said
to understand Chinese, and in fact Strong AI proponents *do* say so,
Searle's and Marshall's repeated bleating to the contrary
notwithstanding.

> All the Chinese room reveals is that we don't yet understand what it
> means to understand. Searle is using a folk psychology notion of
> understanding and, not surprisingly, comes up with problems. After we
> discover what it means for a person to understand Chinese, we can
> worry about teaching rooms Chinese.

Many of us already know what it means to understand Chinese --
it's a matter of competence. By such a criterion, the Chinese Room
ex hypothesi understands Chinese. It thus follows necessarily that
Searle's argument must be flawed, and the trick, which is no longer
of any relevance to AI, is simply to find the flaw. And it really
isn't all that hard once one drops a prior commitment to Searle's
conclusion.

--
<J Q B>
------------------------------------------------------------------------

Patrick Juola

unread,
Nov 30, 1998, 3:00:00 AM11/30/98
to
In article <73uhe4$247$1...@xinwen.daimi.au.dk>,

Lars Kroll Kristensen <kr...@daimi.au.dk> wrote:
>In <73u6f5$hue$1...@reader2.wxs.nl> "H.J. Gould" <jo...@liemar.nl> writes:
>
>I agree completely with your argument regarding the chinese box.
>
>The best arguments I have heard regarding the impossibility of machine
>intelligence is that any intelligence is by necessity connected to a
>body. The counter counter argument is of course to simulate the body
>as well...
>
>Any opinions on that ? Is it possible to generate machine intelligence
>without simulating at least a rudimentary "body" for the intelligence ?

What part of the body do you consider to be necessary?

For instance, lots of people deaf since birth (or blind since birth)
are "intelligent." For any given body part or function, I can
conceive of an "intelligent" person who lacks that part/function.
By induction, I can "prove" that a body is not necessary.

Of course, I can also "prove" by a similar induction that baldness
is impossible. But it does show that you need to very carefully define
your terms.

-kitten

Jerry Hull

unread,
Dec 1, 1998, 3:00:00 AM12/1/98
to
On 30 Nov 1998 17:10:18 -0600, ric...@cs.niu.edu (Neil Rickert) wrote:

>ZZZg...@stny.lrun.com (Jerry Hull) writes:
>
>>Or are you suggesting that I have no right to criticize a prima facie
>>preposterous theory if I have no positive theory of my own to replace it?
>
>Hull is prima facie preposterous.

Thanks!

>There are many very smart people who disagree with you on computer
>intelligence. It is quite possible that these many very smart people
>are mistaken. It is not possible that they are doing something that
>is prima facie preposterous.

A view that is prima facie preposterous need not be in fact preposterous. And
people that can be very smart about A can be denser than stench about B.
Finally, the idea that thought is computation is LITERALLY prima facie
preposterous, because people all the time do plenty of thinking without (on
the surface, at least) doing any kind of calculation.

But I suspect that all you were really looking for was an excuse to insult me.
Ah, you must be soo lonely.

Bloxy's

unread,
Dec 1, 1998, 3:00:00 AM12/1/98
to
Again, and again, and again, and again.

Artificial "intelligence" is a contradiction of terms.

No such thing is possible, unless you redefine the very
notion of intelligence and reduce it to the purely mechanical
activity of fulfilling some simple mechanical task.

The only intelligence known is biological.

The very driving force behind ANY intelligence is emotion,
and not reason.
Emotion supersedes ANY "reason", or logic.

Emotional urge MAY include logic or reason,
but is not obliged to do so.
The very sense of fulfillment is emotional.

The intelligence has a PURPOSE and INTENT.
The primary guiding aspect of intelligence is intuition,
which is a leap into the future.

The machine, however sophisticated its program,
can never possess the attributes of intelligence,
as it knows no emotional contentment,
has no notion of fulfillment,
no purpose, but purely mechanical task oriented activity,
and no intuition, leaping forward, and sensing the future.

Your idiotic principles of "reason" or logic will never
be sufficient to classify something as intelligence.

The artificial "intelligence" is one of the biggest lies,
just on a par with psychology, which is nothing but
psyche-ology, the science of the psyche, while that so called
"science" of the psyche denies the existence of
the very subject of its study - the psyche.

Your so called artificial "intelligence" is a "science"
of the cunt, refusing to see ANYTHING, but the most
mechanical aspects of life force as such, and reducing
life to the level of a machine,
converting humans into bio-robots.

Bio-robot:
Biological entity,
programmed to behave according to a limited set of instruction,
based in morality ["good" and "bad" definitions],
created by the priest,
to manipulate your fear and guilt,
in order to collect a sin tax.

The priest created the moral foundation,
that is at the very core of your so called sciences,
as at the roots of all your assumptions are based on
"good" and "bad",
and so are your judgement sticks.

And beyond that "good" and "bad",
which can not be proven,
your so called sciences have nothing but dust in their hands,
as the results of their "progress" is global destruction,
which you are beginning to feel the results of.

It is pronounced yet again:

You now entered into the age of corruption,
as corruption is total, perverse and complete.

The common delusion that we live in the age of
information is just that - utter delusion,
as what you have is the age of DIS-information,
on ALL the channels of your media and communications.

Your only rule and ultimate "law" is:

God = money, and
money = god.

You converted one of the noblest ideas ever created by man,
democracy, into suckocracy and lickassocracy,
in the name of the game of survival "of the fittest".

You allow the fat cat to drink the blood of all others
by the oceans.

Now, what kind of intelligence do you have even on the biological level?

You think that with all the manipulations of reality
in the name of sucking the blood of one another
you can come up with a purely mechanical intelligence gadget?

What kind of a delusion is this?

You are standing on the brink of self destruction,
as a result of utterly separating the heart from the head,
and exploiting everything, that moves, and does not,
for that matter, to the point of utter oblivion,
and you want some mechanical gadget,
which you will program with all the violence you know,
to solve ANY of YOUR problems?

How?

Mike Burrage

unread,
Dec 1, 1998, 3:00:00 AM12/1/98
to
James Marshall wrote:

> > is the definition of "I". See the question: do I know how to speak Chinese?


> > If "I" refers to only the common CPU, the answer should be no. If "I" refers
> > to
> > the whole room, which means cpu + memory(all the rules about Chinese), the
> > answer
> > is ??.
> >
>

> The answer is still no, I'm afraid. Searle predicted this argument in


> the Systems Reply, which I will now cut and paste from my previuos
> posting on the subject. Sorry, but I'm short of time!
> So, here's the reply. Suppose the man in the room internalises
> everything in that room, all the rules books, the pieces of paper and
> pencils, and performs the entire process entirely in his head, apart
> from reading the symbols from the input card and writing the output
> symbols on another card. Does he now understand Chinese? No!

> I'm interested to know why you think he has to have rules to choose
> which rules to use! Why could these rules not be also stored in the rule
> books? In the end it doesn't matter where the rules are stored though
> (in his memory or in the books), as shown in the Systems Reply above.
> There's also no need to assume some knowledge about Chinese in choosing
> which rules to use. The fundamental rule to start the system could
> simply be, search the rule books until you find the symbol pattern that
> you have received through the door. Simple pattern matching and
> requiring no knowledge of Chinese!
>

> If you want to destroy the Chinese Room Argument, try reading it in full

> first, including the predicted replies! That's essential for any theory,


> even if you don't like it!

> Searle, J.R (1980) Minds, Brains and Programs, The Behavioral and Brain
> Sciences, vol. 3
> also in:
> "The Mind's I", Hofstadter, D. & D. Dennet (eds.) Penguin Books 1981
>

> James
>
> > thanks,
> >
> > Charles

Hmm, I still disagree in part with Searle's CR as a means of saying that
computer "understanding" is impossible. I actually haven't read the entire
theory, only excerpts and interpretations, but I have a few comments...

It appears to me that his argument logically targets the methods of
_determining understanding_ rather than the possible existence thereof.
Simply because the man in the room responds correctly obviously does not
show that he understands Chinese. Understanding is when a machine A can
communicate a "concept" AC via a language L to another machine B, which can
generate a "concept" BC in machine B's ontology. If machine B's ontology and
machine A's ontology are equivalent, the idea has been understood in as much
as BC and AC overlap. "Equivalent" is a squishy term there.
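
As a minimal sketch of that "overlap" idea (and nothing more), suppose each
concept is reduced to a set of features and the degree of understanding is
read off as the overlap of the two sets. The feature sets and the Jaccard
measure below are my own illustrative assumptions, not anything Searle or the
poster commits to.

def overlap(concept_a, concept_b):
    """Jaccard similarity: 1.0 means identical concepts, 0.0 means disjoint."""
    union = concept_a | concept_b
    if not union:
        return 0.0
    return len(concept_a & concept_b) / len(union)

ac = {"emotion", "positive", "state-of-mind", "desirable"}   # machine A's "concept"
bc = {"emotion", "positive", "state-of-mind", "reward"}      # machine B's "concept"
print(overlap(ac, bc))   # 0.6 -- partial understanding on this crude measure
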
Searle pointed out that we cannot determine understanding based on input and
output measurements [alone]. That is all.

We say that two people understand each other often when speaking the same
language, but who's to say my concept of "What is happiness?" is the same as
the next person's. Studies have shown that the areas of the brain excited by
many types of questions are _similar_ among most [or all] people, but they
are not the _same_ neurons. Everyone's brain, and thus their mental
representation of information, is [slightly] different. We generally say
that someone is well understood (it's not a yes/no option) when the response
they give to "What is happiness?" is consistent with the set of possible
answers the questioner would give. Searle's argument showed the flaw in this
reasoning.

Searle's CR argument merely pointed out that ONE SPECIFIC METHOD of
determining understanding was flawed. In as much as two people are said to
understand each other, I say a machine can understand a human. The problem
lies in that determination, or that ontological equivalence relationship.

I still say that even though Searle pointed out this flaw, checking the CR
man's responses is the _best_ way to determine if he understood (until a
better method arises).

---Mike

Chris Mesterharm

unread,
Dec 1, 1998, 3:00:00 AM12/1/98
to
Jim Balter <j...@sandpiper.net> writes:

<snip>

>Chris Mesterharm wrote:
>
>> All the Chinese room reveals is that we don't yet understand what it
>> means to understand. Searle is using a folk psychology notion of
>> understanding and, not surprisingly, comes up with problems. After we
>> discover what it means for a person to understand Chinese, we can
>> worry about teaching rooms Chinese.

>Many of us already know what it means to understand Chinese --

>it's a matter of competence. <snip>

This is begging the question, but perhaps it can't be helped.

Trying to give a rigorous argument using ill-defined terms is
unproductive. Yet if someone comes along and tries to define the
terms, they are begging the question. I guess the correct approach is
to first show the definitions are ill-defined, and then come up with
new definitions. The new definitions should keep many useful
properties, but be well-defined. (Easier said than done.)

How does Searle define understanding?

Chris Mesterharm

Kenneth Roback

unread,
Dec 1, 1998, 3:00:00 AM12/1/98
to
Is the above message worth a comment at all...
I think it says it all about the person who wrote it!
Is this type of message usual in these newsgroups ?

/Kenneth

Lars Kroll Kristensen

unread,
Dec 1, 1998, 3:00:00 AM12/1/98
to
In <36631166.4097611@news-server> ZZZg...@stny.lrun.com (Jerry Hull) writes:

<SNIP>

>I think we DON'T KNOW what it is about brains that brings about consciousness.
>I believe that there are strong arguments that computer operations do not
>constitute the basic building blocks of conscious thought, & the idea that you
>can overcome this by piling on A LOT of operations of a particular kind seems
>wishful thinking at best.

I for one, would love to hear those arguments. The best ones I have
heard so far concern the existence of the human soul.
Now, I'm not saying that we don't have a soul, I'm saying that we
don't KNOW if we have a soul. IMHO if we do, I can't see why a
theoretical sentient computer shouldn't be able to have one.

It is also true that we don't know exactly what that thing
consciousness is. Maybe all those concepts (consciousness, emotions,
intelligence etc.) are connected. Maybe the distinction is linguistic
and not 'functional'.

Opinions ?

>Or are you suggesting that I have no right to criticize a prima facie
>preposterous theory if I have no positive theory of my own to replace it?

Why is it preposterous to suggest that intelligence and consciousness
aren't necessarily connected to a human being ? You don't need a
positive theory, just some good arguments as to why the alleged
preposterous theory is invalid.

>--
>Jer
>"Our Father which art in heaven / Stay there
>And we will stay on earth / Which is sometimes so pretty."
> -- Jacques Prévert

--
Lars Kroll Kristensen Last century :"The Pen is mightier than the
email: kr...@daimi.aau.dk sword."
SnailMail: Sandoegade 13 This century :"The keyboard is mightier than
8200 Aarhus N.DK the M-16"

Jim Balter

unread,
Dec 1, 1998, 3:00:00 AM12/1/98
to
Chris Mesterharm wrote:
>
> Jim Balter <j...@sandpiper.net> writes:
>
> <snip>
>
> >Chris Mesterharm wrote:
> >
> >> All the Chinese room reveals is that we don't yet understand what it
> >> means to understand. Searle is using a folk psychology notion of
> >> understanding and, not surprisingly, comes up with problems. After we
> >> discover what it means for a person to understand Chinese, we can
> >> worry about teaching rooms Chinese.
>
> >Many of us already know what it means to understand Chinese --
> >it's a matter of competence. <snip>
>
> This is begging the question, but perhaps it can't be helped.

What question does it beg? Rather than "begging the question",
it answers the question.



> Trying to give a rigorous argument using ill-defined terms is
> unproductive. Yet if someone comes along and tries to define the
> terms, they are begging the question.

Well, that's the claim you just made, but it's nonsense.

> I guess the correct approach is
> to first show the definitions are ill-defined, and then come up with
> new definitions. The new definitions should keep many useful
> properties, but be well-defined. (Easier said then done.)
>
> How does Searle define understanding?

Read the paper; he begs the question.

--
<J Q B>

The Walrus

unread,
Dec 1, 1998, 3:00:00 AM12/1/98
to
Lars Kroll Kristensen wrote:

> I for one, would love to hear those arguments. The best ones I have
> heard so far, concerns the existence of the human soul.
> Now, I'm not saying that we don't have a soul, I'm saying that we
> don't KNOW if we have a soul. IMHO if we do, I can't see why a
> theoretical sentient computer shouldn't be able to have one.

I just thought: perhaps the soul is an emergent property of consciousness? ;)

d.

James Marshall

unread,
Dec 1, 1998, 3:00:00 AM12/1/98
to
Hi,
I've never seen anything like it here! I think it's excellent! What an
enormous rant! I can almost imagine him foaming at the mouth as he
typed!
Or he might just be making a joke.
James

Bart Zonneveld

unread,
Dec 1, 1998, 3:00:00 AM12/1/98
to
In article <36627D06...@eclipse.co.uk>, James Marshall
<home...@eclipse.co.uk> wrote:

> Hi,
> one of the best known arguments against computer understanding in AI
> has to be Searle's Chinese Room Argument, which is based on the Physical
> Symbol Systems Hypothesis of AI (that intelligence can be created
> solely through a formal symbol set and rules to act on that symbol set).

> Synosemyne wrote:
> >
> > Hey all,
> >
> > Do any of you happen to know some sensible arguments against the
> > existence of artificial intelligence in computers? It's true that
> > computers are problem-solvers, which some call intelligence, which is
> > fine and almost universally accepted. But computers still lack
> > understanding. How does one prove that computers lack understanding?
> > It is easy to say so but there are always counter-arguments and
> > counter-examples (although most are not very sturdy). What
> > philosophical or concrete arguments do you know of, to show that
> > computers do not understand?
> >
> > Thank you,
> > Synosemyne


For all I know, the entire system (the room, the little
notes, the persons) is considered intelligent; the parts of the system
aren't...

--
Bart Zonneveld.

Email: Bart.Zo...@phil.uu.nl
Cra...@yahoo.com

Bart Zonneveld

unread,
Dec 1, 1998, 3:00:00 AM12/1/98
to
>
> Sorry, statement like this we (mathematician) call "implication". If the
> presumption is not met, the statement is true. So what?
>
> Why the presumption is not met? We still do not know what a neuron is => we
> cannot produce a really small chip that could do the EXACT SAME thing as a
> brain cell.
>
> Do you have any better proof, Kreeg?
>
> Best regards from Jiri


Just follow the "fish-hook" implication. Since I don't know the English
word, I just translated the Dutch word into English.

James Marshall

unread,
Dec 1, 1998, 3:00:00 AM12/1/98
to
Hi,

Mike Burrage wrote:
>
> Hmm, I still disagree in part with Searle's CR as a means of saying that
> computer
> "understanding" is impossible. I actually haven't read the entire

Actually, the purpose of the CRA is not to prove that computer
understanding is impossible, but to disprove the claims of proponents of
Strong AI who say that an appropriately programmed computer IS the
mind, has understanding and other cognitive states, and is therefore an
explanation of human cognition.

> theory, only
> excerpts and interpretations, but I have a comments...

Read it! Surely one of the most important papers on AI ever written, whether
you agree with it or not.

> It appears to me that his argument logically targets the methods of
> _determining
> understanding_ rather than the possible existence thereof. Simply
> because the man
> in the room responds correctly, obviously does not show that he
> understands
> Chinese. Understanding is when a machine A can communicate a "concept"
> AC via a
> language L to another machine B, which can generate a "concept" BC in
> machine B's
> ontology. If machine B's ontology and machine A's ontology are
> equivalent, the
> idea has been understood in as much as BC and AC overlap. "Equivalent"
> is a
> squishy term there.
> Searle pointed out that we can not determine understanding based on
> input and
> output measurements [alone]. That is all.

I'm not sure how you can make such categorical assertions when by your
own admission you haven't read the paper in full, but... I don't think
Searle did point out that we cannot determine understanding based on
input and output measurements alone. I consider that to be a
misinterpretation, and can't see where you got it from, maybe you could
help me out? Rather, Searle pointed out (or at least argued, to avoid
controversy) that you can't ascribe understanding and other cognitive
states (such as intentionality) to what amounts to a physical symbol
system (the Chinese Room).

> We say that two people understand each other often when speaking the
> same
> language, but who's to say my concept of "What is happiness?" is the
> same as the
> next persons. Studies have shown that the area of the brain excited by
> many types
> of questions are the _similar_ among most [or all] people, but they are
> not the
> _same_ neurons. Everyone's brain, and thus their mental representation
> of
> information, is different [slightly]. We generally say that someone is
> well
> understood (it's not a yes/no option) when the response they give to
> "What is
> happiness?" is consistent to the set of possible actions the questioner
> would
> give. Searle's argument showed the flaw in this reasoning.
> Searle's CR argument merely pointed out that ONE SPECIFIC METHOD of
> determining
> understanding was flawed. In as much as 2 people are said to

Ah, OK... now I see where you are coming from. It's a good point: given
a native Chinese speaker and a Chinese Room which both responded
appropriately to Chinese questions, how would you decide which of them,
if any, understood what you were talking about?

> understand each
> other, I say a machine can understand a human. The problem lies in that

Surely the machine "understanding" the human is purely a syntactic and
semantic issue? Humans generally instruct machines through well defined
interfaces and formal languages (command line interface, programming
languages, etc.) This kind of understanding is different from the
cognitive state of understanding that Strong AI supporters want to
ascribe to a computer running an appropriate AI program.

> determination, or that ontological equivalence relationship.
> I still say that even though Searle pointed out this flaw, checking the
> CR man's
> responses is the _best_ way to determine if he understood (until a
> better method
> arises).
>

Quite right, not only is it the best way to check his understanding, it
could be considered the only way, as the Chinese Room is supposed to be
sealed! But even if you think he understood, you may be mistaken, as you
pointed out before. It depends whether he's a native Chinese speaker, or
a non-Chinese speaker applying formal rules to uninterpreted symbols.
James

> ---Mike

Kenneth Roback

unread,
Dec 1, 1998, 3:00:00 AM12/1/98
to
James Marshall wrote:
>
> Hi,
> I've never seen anything like it here! I think it's excellent! What an
> enormous rant! I can almost imagine him foaming at the mouth as he
> typed!

:-) Yeah, and rather frenetically, I would say!
He even managed to get some holy associations in there too,
maybe a slip on the keyboard, or who knows!

> Or he might just be making a joke.

Strange sense of humor in that case...
But then again I'm Swedish, so maybe we have a different sense of humor here
in Sweden.

/Kenneth

Sergio Navega

unread,
Dec 1, 1998, 3:00:00 AM12/1/98
to
Jim Balter wrote in message <36631FCB...@sandpiper.net>...

>Sergio Navega wrote:
>
>> Here we go again with Searle! I promised myself not entering this
>> kind of discussion anymore. Let's make one exception.
>>
>> Searle's argument is strong and fragile at the same time. I hope to be
>> clear enough with my arguments.
>>
>> It is strong because it showed clearly that a computer fed with a bunch
>> of symbols from any human language *will not* develop a
*human-equivalent*
>> level of understanding of the world.
>
>It most certainly did not do any such thing! Please read the literature
>that *rebuts* Searle's argument. It is widely believed among computer
>scientists that Searle's argument is flawed. It cannot therefore have
>"clearly" shown what it is purported to have shown, even if it *did*
>show it.
>


What is clear for some is not so for others. Those who rebut Searle's
argument in all aspects (as you seem to be doing) forget that we are
naive when it comes to thinking about "human-equivalent intelligence",
which was the main point of my previous paragraph.

For me, understanding Chinese implies that precondition: human-likeness.
Nothing will get human-equivalent understanding of the world unless it
has at least one human-equivalent sensory equipment, human-equivalent
"computational power" and human-equivalent emotional drives. All this
is essential to human-equivalent performance. A man inside that room
fails in the first prerequisite (he does not have access to the external
world). A rational robot, with vision, audition, etc, will fail on the
third (no emotions and/or drives). A dog with "emotions" and drives,
will fail on the second (lack of computationally equivalent brain).

>> Inside the Chinese Room you may
>> put anything: a Cray, a Sun workstation, an "idiot savant" or a
>> group of 10 "Einsteins". Their performance in the task of understanding
>> Chinese (or the world outside the room, for that matter), will be
>> miserably poor.
>
>The *premise* of the CR is that the Chinese Room itself is
>*competent* in all tasks which require an understanding Chinese,
>even if it doesn't "actually" understand Chinese. Searle grants
>that the *behavior* of the CR is equivalent to that of one who
>understands Chinese. His argument is strictly *metaphysical*.
>
>Let's at least get the fundamentals of this thought experiment right
>before pursuing its implications.
>

The task of believing that the room is competent in "all" tasks is
typical of philosophers: it may take more than the universe to store
all possible interpretations of phrases (this is much worse than the
frame problem). From this starting point, Searle's argument is just
a joke. But I guess that Searle wasn't concerned with this impossibility.
This is equivalent to doing an exploratory trip to the center of the
sun and coming back alive: impossible in any current account, but
useful to mentally experiment with the idea. In this regard, playing
with imagination as if it was possible, Searle's argument is strong.
This does not mean, however, that I agree with Searle's conclusions.

What I would like to read from Searle is the "back to earth" lesson
that this argument suggests: language alone is incompetent to provide
understanding; it demands a "listener" with world-grounded concepts
(meaning) to make use of it. This is what Stevan Harnad preaches in
his "symbol grounding problem", a much more useful exploration of the
argument (although a little bit modified by the chinese dictionary
idea).

If the CR argument is flawed, so are the usual rebuttals to it.
Searle's lesson should be a different one than he may have
originally envisioned. Harnad's argument is closer to a better
exploitation of this intriguing thought experiment.

Regards,
Sergio Navega.


James Marshall

unread,
Dec 1, 1998, 3:00:00 AM12/1/98
to
Hello,

Jim Balter wrote:

>
> Indeed, a conjunction of the man and the pipes *could* be said
> to understand Chinese, and in fact Strong AI proponents *do* say so,
> Searle's and Marshall's repeated bleating to the contrary
> notwithstanding,
>

Could be said to do so according to what criteria? You need to spell out
on what grounds you dismiss the CRA and the System Reply. That's
absolutely necessary if you wish to express any scientific opinion.
Otherwise we would be unable to challenge people who expressed the view
that the world is flat and not round, etc. etc.
Of course Strong AI proponents do say that the Chinese Room understands
Chinese. That is what prompted Searle to come up with the CRA in the
first place. It should be no great surprise that there are still Strong
AI supporters who still adhere to their beliefs, maybe they have good
reason. If they have, I wish someone would tell me!
Don't get me wrong, I don't see the CRA as a holy grail, it is a
scientific theory that is open to objective criticism like any other. I
will be neither happy or unhappy if convincing arguments against it are
made. I will be very interested though. To label my postings as
"bleating" is unfair when all I have done is point out the existence of
this theory in response to a question, and highlight the responses
already present in the paper to those who seem unaware of them and
attack the theory on grounds that have already been anticipated and
dealt with. However to label Searle's contributions as "bleating" is
rather sad. Research and science are driven by discussion, hypothesis
and counter-hypothesis, exactly like that fostered by the claims of
Strong AI, then the CRA, then subsequent commentary on it. Whether the
CRA stands or falls, Searle has still made a valuable contribution to
our theories of intelligence.

>
> Many of us already know what it means to understand Chinese --

> it's a matter of competence. By such a criterion, The Chinese Room
> ex hypothesi understands Chinese. It thus follows necessarily that
> Searle's argument must be flawed, and the trick, which is no longer
> of any relevance to AI, is simply to find the flaw. And it really
> isn't all that hard once one drops a prior commitment to Searle's
> conclusion.

Understanding is a matter of competence? As an extreme example, what
about a tin opener? The tin opener is very competent at opening a tin
(more so than an unequipped human). Does the tin opener understand more,
or even the same amount as a human, about opening tins, or anything to
do with tins, their contents, or the action of opening something?
At a slightly higher level of complexity, my computer is fairly
competent (but not as much as I'd like!) at running Windows NT. Still,
it does the job better than I would do with pencil and paper going
through the program code. Does the computer understand more than me
about Windows NT, or operating systems in general, or von Neumann
architectures?
Given your subjective definition of understanding as competence, let's
consider some of the other cognitive states other than understanding
that Strong AI want us to ascribe to the Chinese Room, such as
intentionality. Can we ascribe intentionality to the Chinese Room?
James

James Marshall

unread,
Dec 1, 1998, 3:00:00 AM12/1/98
to
Hello,

Chris Mesterharm wrote:

> I don't think this is the issue. The point is that many people
> believe his argument also applies to neurons in the brain. This is a
> contradiction because most people (including Searle) believe that
> brains have the ability to understand.

I think Searle's subsequent arguments focussed on the causal power of
the brain. He claimed that in order to have a mind it is necessary to
have a machine with causal power at least equivalent to the brain.

> Trying to give a rigorous argument using ill-defined terms is
> unproductive. Yet if someone comes along and tries to define the

> terms, they are begging the question. I guess the correct approach is


> to first show the definitions are ill-defined, and then come up with
> new definitions. The new definitions should keep many useful
> properties, but be well-defined. (Easier said then done.)
>
> How does Searle define understanding?
>

Maybe it would be worth considering the other cognitive states we could
attribute to the Chinese Room, such as intentionality?
James

> Chris Mesterharm

kreeg

unread,
Dec 1, 1998, 3:00:00 AM12/1/98
to
on the contrary, we can produce a chip that can do the exact same as a brain
cell, but have we tried?!?! not that i know of

we have the technology to run all our cars off of super efficient electrical
systems, but do we?!?! no...

Jiri Donat wrote:

> kreeg wrote:
>
> > Ok, I have a good question for you:
> >
> > Say we made . Now say we took a person, and everyday we repaced 100


> > braincells, one by one, with a chip. After we replaced all the cells
> > (don't count in the several years it would take to actually do this)
> > with chips, the brain would now be a computer.... Would you then
> > consider that artificial intelligence?
> >
> >
>

> Sorry, statement like this we (mathematician) call "implication". If the
> presumption is not met, the statement is true. So what?
>
> Why the presumption is not met? We still do not know what a neuron is => we
> cannot produce a really small chip that could do the EXACT SAME thing as a
> brain cell.
>
> Do you have any better proof, Kreeg?
>
> Best regards from Jiri
>

> > "Fools Follow"
> > -ZOAD
> > http://www.northernnet.com/kreeg

Kenneth Roback

unread,
Dec 1, 1998, 3:00:00 AM12/1/98
to
kreeg wrote:
>
> on the contrary, we can produce a chip that can do the exact same as a brain
> cell, but have we tried?!?! not that i know of

As far as I know, there are experiments in progress in the USA trying to get a
silicon chip interconnected to living brain cells.

The goal is to let the chip listen to the brain signals and register them
(and try to understand them, I suppose) in order to later on be able to
reverse the signalling (i.e. input artificially generated signals to the brain).

Got this information a couple of years ago, so I don't know how far the
project and experiments have gone yet. Maybe there already is a human being
hooked up somewhere in the States to a computer by now :-)

/Kenneth

Keith Wiley

unread,
Dec 1, 1998, 3:00:00 AM12/1/98
to
This one's getting pretty long:

Yes, but if you look at what is happening, it is the signal processing that is
directly affected. All the chemical gradients flowing around the brain seem
to have a one-to-one direct causal effect on the firing patterns of the neurons.
It is the synaptic activity that is altered by such chemical events. All we
need to do is replicate such a widespread effect on the firing. Besides, even
if chemicals are the root of the conscious phenomenon and not simply the
cause, then we can simply mix chemicals into our
"neural-brain-computer-circuits". We have considerable experience making wet
computers and putting live neurons onto silicon chips. Perhaps you're right,
and the necessary next step is to start mixing various chemicals onto the
chips, but the important thing is that there is no good reason to believe
this can't be artificially manufactured.
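
To illustrate the "replicate the effect on the firing" point in the simplest
possible terms: if a chemical gradient acts by scaling synaptic efficacy, a
signal-level model can fold it in as a gain term instead of simulating the
chemistry itself. The function, rates, weights and gain below are invented
for illustration only.

def weighted_drive(inputs, weights, modulator_gain=1.0):
    """Net synaptic drive with a global neuromodulatory gain applied."""
    return modulator_gain * sum(x * w for x, w in zip(inputs, weights))

presynaptic_rates = [5.0, 12.0, 3.0]    # spikes/s arriving on three synapses
synaptic_weights = [0.4, 0.1, 0.8]
print(weighted_drive(presynaptic_rates, synaptic_weights))                      # baseline
print(weighted_drive(presynaptic_rates, synaptic_weights, modulator_gain=1.5))  # "chemical" boost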

> > We haven't built trillions of them and wired
> >them together because it's prohibitvely expensive and the technology is still
> >not viable in vast quantities, but it's perfectly doable from a manufacturing standpoint.
>
> Yeah, but so what?

So what? Basically, "you don't know it till you try it" is so what. We don't
know for certain that consciousness comes from the neural net, but it is a
very popular theory amongst people who understand the brain, so until we try
it, it should not be discarded as an incorrect theory. It actually has a
pretty good chance, and until we build one, we won't know for certain that it
works, but we also won't know for certain that it doesn't work.

> >Now, while me may fundamentally understand what is going on inside a neuron to
> >the point that we can make superficial models of it, we don't know for certain
> >yet that we can manufacture a copy of an existing neuron. Logical speculation
> >based on further research involving finer levels of observation suggests that
> >we will be able to soon, but that issue is open to debate. That question is
> >not relevant to building artificial intelligence though, it is only relevant
> >to mind uploading, in whiche one tries to make an AI that is a replica of an
> >existing brain.
>
> Agreed, but if you are going to try to construct a conscious being, the
> obvious guide would be the characteristics of beings that are conscious.

Right, the characteristics, but not an atom for atom copy. I totally think
that AI should be modelled on a human brain. But the argument that we can't
model it for some unknown reason is unfair, *because* it is unknown that we
can't model it. Until you can prove we can't model it, why are you so sure?

> >What is not really arguable anymore is the basic question of building a
> >machine that does what a neuron does. We can do it. I don't think we can do
> >it at 1x scale, but I think we can at 10x or 100x scale. The components
> >etched into present computer chips are quite a bit larger than the internal
> >structures of a neuron, but are vastly smaller than the neuron itself, and our
> >techniques have been getting smaller without a sign of abatement for decades
> >now. You lack the patience than computer and neurological research has most
> >definitely earned.
>
> My patience is beside the point. My basic claim, remember, is that there is a
> lot more going on in the brain than can be replicated via basic signal
> processing. For example, it has been claimed that in every brain cell there
> is a little magnetic node, similar to those in one celled animals which
> (appear to) use them for directionality. What if these magnetic nodes turned
> out to be important for consciousness?

Then we build them into our AI machines. Sheesh. I keep saying the same
thing over and over again. I'm not saying that a bidigital neural network
tossing nothing but data around will necessarily become conscious (it does
seem a little too simple, doesn't it). I'm saying that whatever it is, we can
make it.

> >One last point: You say we can't make a computer do what a neuron does
> >because we don't know what a neuron does that supports consiousness. These
> >are two nonrelated issues. Our ability to make a neuron and our ability to
> >understand it are independent problems. Scientists and industrial factories
> >manufacture artificial magnets by the ton every day, but no one has a real
> >clue what magnetism is yet.
>
> Not sure where you posit this cluelessness. Hasn't electromagnetism been
> pretty well understood since Maxwell?

fine.

> > People were wearing spectacles long before we had
> >a modern understanding of light, much less the 20th century merger of wave and
> >particle theories.
>
> To understand light enough to make spectacles you don't need QM.

What, that's *not* the point. The point is you don't have to fully
understand something in order to build it. We don't necessarily have to have
a complete working knowledge of the brain in order to build it. We might be
able to build it long before we totally understand it.

> > It is extremely likely that we will create articificial
> >intelligence before we have a solid scientific explanation of consciousness.
>
> Depends on what you mean by "artificial intelligence". Anyway, I've been
> talking about CONSCIOUSNESS, which to a good degree appears independent of
> intelligence (not really a zinger even though it sounds like one).

And my point stands. We still might manufacture artificial *consciousness*
before we understand the phenomenon.

. . .. ... ..... ........ ............. .....................
.. ... ..... ....... ........... ............. .................
. .. .... ........ ................ ................................
*
Keith Wiley * * * * * *
Email: kwi...@tigr.org *** ** * * ** *
WWW: http://www.tigr.org/~kwiley/ * ** ** ***

Keith Wiley

unread,
Dec 1, 1998, 3:00:00 AM12/1/98
to
Looks like someone's trying to stoke a flame, ey?

Haavard N. Jakobsen

unread,
Dec 1, 1998, 3:00:00 AM12/1/98
to

Kenneth Roback wrote in message <36629323...@enator.se>...

>Synosemyne wrote:
>>
>> How does one prove that computers lack understanding?
>
>Well.
>I guess we have to trust our own intelligence in this area, since man has
>created the computer from the beginning.

Well, God created man in the first place and look what happened to
mankind... :-)

>Some questions that might help the reasoning (if having problem with that):
>
>Would your computer produce anything without you starting any program in it ?
No, but this doesn't matter much. Maybe it's the program that is
intelligent.

>If you have a program tucked into a computer and run it, would it produce
>something you never expected or does it start to "behave" quite or drastically
>different from what it is supposed to (i.e. not already programmed or
>controlled by any rules) ?
No, but this may change soon. Why shouldn't it be able to analyse and
diagnose code, and then fix it if it found a bug? This may also start as
strict rules; add some learning behaviour on top of that and in 100 years'
time you get a self-conscious machine (maybe), provided you have the
computing power and storage capacity needed. It's still controlled by the
same rules and the same program, but it has evolved and added or removed
some rules. As long as the technology is available, there is no reason that
this shouldn't happen (except that mankind might not want a self-controlling
machine out of fear of the consequences).
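
A minimal sketch of that "strict rules plus a little self-repair" idea, in
Python purely for illustration (the rules, tests, and fixes below are
hypothetical toys, not a description of any real system):

# Toy sketch: a program checks its own rules against test cases and
# patches a rule when it fails.  The "learning" here is just a lookup,
# the simplest possible stand-in for the behaviour described above.
rules = {"double": lambda x: x + x,
         "square": lambda x: x * 2}        # deliberately buggy: should be x * x

test_cases = {"double": [(3, 6), (5, 10)],
              "square": [(3, 9), (5, 25)]}

known_fixes = {"square": lambda x: x * x}

for name, cases in test_cases.items():
    if any(rules[name](arg) != expected for arg, expected in cases):
        print("rule", name, "failed its tests; patching it")
        rules[name] = known_fixes[name]

print(rules["square"](5))                  # 25 after the patch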

>Does the program by any chance suddenly look different from the original one
>since you started it (ruling out eventual memory problems, i.e. parity error)?
No, not yet. And I would probably rule out the "suddenly" in this. However, I
expect computers to be able to program themselves and analyse code in less
than 10 years. Not advanced, like knowing what code they need, but sorting
out small problems given to them by humans. It's just a matter of time before
it happens. I must admit I cannot imagine myself working as a software
engineer 10 years from now. From that point on to a machine that knows what
it needs is a bit more vague, but I do believe that computers will reach
conclusions independently (from humans, that is) at some point in time. Maybe
a hundred years, maybe a thousand from now, but I do believe it will happen.

Whether this should be considered an intelligent machine is a different
thing, a bit more philosophical. If it can reason like a human, is it
intelligent? Humans also have creative sides that I doubt a machine can come
up with.

IMHO of course...

Haavard

PS: Sorry for my bad English.

Richard Anggono

unread,
Dec 1, 1998, 3:00:00 AM12/1/98
to
On Sun, 29 Nov 1998 18:46:32 -0500, Synosemyne <mer...@hotmail.com>
wrote:

>Do any of you happen to know some sensible arguments against the
>existence of artificial intelligence in computers? It's true that
>computers are problem-solvers, which some call intelligence, which is
>fine and almost universally accepted. But computers still lack
>understanding. How does one prove that computers lack understanding?
>It is easy to say so but there are always counter-arguments and
>counter-examples (although most are not very sturdy). What
>philosophical or concrete arguments do you know of, to show that
>computers do not understand?


Hi,

I believe that artificial intelligence exists.
The term "artificial intelligence" itself means "fake" intelligence.
Not real; a bluff.
The system might act or appear to be intelligent but in fact does not
even know what it is doing, let alone why it is doing it.

For example, look at computer strategy games. You attack, they defend.
They counter-attack, they create diversions, etc.: almost anything you'd
expect from a human player. But it's all artificial; the moves are
probably pre-programmed. But it does appear intelligent, so it does
have AI.

As for understanding: systems that use AI definitely can't understand.
Understanding is an emergent behaviour of intelligence. And since AI
is not really intelligence in the true sense, how can it understand?

Jerry Hull

unread,
Dec 1, 1998, 3:00:00 AM12/1/98
to
On 1 Dec 1998 09:21:01 GMT, kr...@daimi.au.dk (Lars Kroll Kristensen) wrote:

>In <36631166.4097611@news-server> ZZZg...@stny.lrun.com (Jerry Hull) writes:
>
><SNIP>
>
>>I think we DON'T KNOW what it is about brains that brings about consciousness.
>>I believe that there are strong arguments that computer operations do not
>>constitute the basic building blocks of conscious thought, & the idea that you
>>can overcome this by piling on A LOT of operations of a particular kind seems
>>wishful thinking at best.
>

>I for one, would love to hear those arguments. The best ones I have
>heard so far concern the existence of the human soul.
>Now, I'm not saying that we don't have a soul, I'm saying that we
>don't KNOW if we have a soul. IMHO if we do, I can't see why a
>theoretical sentient computer shouldn't be able to have one.

Soul is a red-herring, in my view. The term should best be regarded as
referring to the preferential component of a person, which -- like other
mind-stuff -- is not intrinsically material but must presumably have a
material substrate to exist.

I can adumbrate an argument why computational operations do not constitute
consciousness, tho I recognize that like all such arguments it will be
unpersuasive to those disinclined to accept its conclusion.

The basic elements of computation can be reduced to increment and
if-zero-branch-else decrement (Boolos & Jeffrey's abaci). It is clear that
these do not, in themselves, provide any definition of consciousness. Nor, it
should be clear, could the addition of either of these operations to some
scheme of computation somehow cross the line to consciousness. It should be
obvious, therefore, that consciousness cannot be DEFINED in terms of
computation. Alternatively, I KNEW I was conscious long before I KNEW
anything about numbers, so clearly consciousness cannot be defined in terms of
computation.
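
To make those primitives concrete, here is a minimal sketch, in Python and
purely as an illustration, of an abacus machine in the Boolos & Jeffrey
style: the only operations are "increment a register" and "decrement a
register, or branch if it is zero", yet multiplication can be programmed
from nothing else. That is the sense in which arithmetic, unlike
consciousness, is straightforwardly definable from these parts.

# A minimal "abacus machine": the only primitives are increment and
# decrement-or-branch-if-zero.  Programs map an instruction label to one
# of those two operations; label 0 means halt.
def run(program, registers):
    pc = 1                              # start at instruction 1
    while pc != 0:                      # label 0 = halt
        op = program[pc]
        if op[0] == "inc":              # ("inc", register, next_label)
            _, reg, nxt = op
            registers[reg] += 1
            pc = nxt
        else:                           # ("decb", register, if_nonzero, if_zero)
            _, reg, nonzero, zero = op
            if registers[reg] > 0:
                registers[reg] -= 1
                pc = nonzero
            else:
                pc = zero
    return registers

# Multiply register 0 by register 1 into register 2, using register 3 as
# scratch space -- nothing but increment and decrement-or-branch.
MULTIPLY = {
    1: ("decb", 1, 2, 0),   # outer loop: one pass per unit of register 1
    2: ("decb", 0, 3, 5),   # inner loop 1: move register 0 into 2 and 3
    3: ("inc", 2, 4),
    4: ("inc", 3, 2),
    5: ("decb", 3, 6, 1),   # inner loop 2: restore register 0 from 3
    6: ("inc", 0, 5),
}

print(run(MULTIPLY, {0: 6, 1: 7, 2: 0, 3: 0}))   # register 2 ends up at 42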

The other possibility is that some computational scheme CAUSES consciousness,
leaving it open exactly how the latter might be defined. But this is absurd
since the computational operations are FORMAL, i.e., they can be realized in
many different ways -- rooms full of Chinese, tinkertoys & streams, &c. &c. --
such that the specific causal powers of a given realization may differ in any
particular respect from any other realization.

Thus, one may conclude that computation neither defines nor causes
consciousness.

>>Or are you suggesting that I have no right to criticize a prima facie
>>preposterous theory if I have no positive theory of my own to replace it?
>
>Why is it preposterous to suggest that intelligence and consciousness
>aren't necessarily connected to a human being? You don't need a
>positive theory, just some good arguments as to why the alleged
>preposterous theory is invalid.

Well, my cats are intelligent & conscious, so clearly those characteristics
aren't necessarily connected to human beings. I concede the possibility that
alien creatures of a different biological/compositional nature from earthlings
may be conscious, but would insist that, whatever these aliens may be, they
aren't MERELY computational.

James Marshall

unread,
Dec 1, 1998, 3:00:00 AM12/1/98
to
Aargh! The Systems Reply yet again! If you are familiar with the Systems
Reply then presumably you have some reason for rejecting it. If you are
not you should read the original paper in full, and see how Searle
responded to your argument 18 years ago!
James

Bart Zonneveld wrote:
>
>
> For all that I know, is that the entire system (the room, the little
> notes, the persons) is considered as intelligent, the parts of the system
> aren't...
>

James Marshall

unread,
Dec 1, 1998, 3:00:00 AM12/1/98
to
Hi,

Kenneth Roback wrote:
>
> As far as I know, there are experiments in progress in the USA trying to
> get a silicon chip interconnected to living brain cells.
>
> The goal is to let the chip listen to the brain signals and register them
> (and trying to understand them, I suppose) in order to, later on, be able to
> reverse the signalling (i.e. input artificially generated signals to the brain).
>
> Got this information a couple of years ago, so I don't know how far the
> project and experiments have gone yet. Maybe there already is a human being
> hooked up somewhere in the States to a computer by now :-)
>
> /Kenneth

My university (De Montfort University, UK) has already succeeded in
growing neurons on silicon. The next step is to find some way of
connecting them together...
James

Mike Burrage

unread,
Dec 1, 1998, 3:00:00 AM12/1/98
to

James Marshall wrote:

>
> Ah, OK... now I see where you are coming from. It's a good point, that
> given a native Chinese speaker and a Chinese Room which both responded
> appropriately to Chinese questions, how would you decide which of them,
> if any, understood what you were talking about.
>

If the method of determining understanding is to gauge responses to the input
(based on what the responses of the questioner were), and the confidence in
that determination approaches 1 as the set of all possible inputs has been
exhausted, then the guy in the room does _understand_ Chinese when all
Chinese questions have been posed to him (and his responses are valid).
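
A toy sketch of that way of gauging understanding, in Python just for
concreteness (the questions and functions are hypothetical): score a system
by the fraction of questions on which its answers agree with a reference
respondent's; once the whole question set has been posed, that score is the
only evidence the questioner has.

def agreement(system, reference, questions):
    """Fraction of questions on which the system's answer matches the
    reference answer -- the only evidence an outside questioner has."""
    matches = sum(1 for q in questions if system(q) == reference(q))
    return matches / len(questions)

def native_speaker(question):
    if question == "What colour is the sky?":
        return "Blue"
    if question == "What is two plus two?":
        return "Four"
    return "I don't know"

# A rule book compiled from the speaker's own answers...
RULE_BOOK = {q: native_speaker(q)
             for q in ["What colour is the sky?", "What is two plus two?"]}

def room(question):
    # ...and a "room" that merely looks the symbols up in it.
    return RULE_BOOK.get(question, "I don't know")

# Once every question in the set has been posed, the measured agreement is
# 1.0, and by this criterion alone the room and the speaker are identical.
print(agreement(room, native_speaker, list(RULE_BOOK)))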

Searle's Chinese people were gauging the understanding of the room as if
it were a weak AI. Ask it a question, gauge the response. Great, he pointed
out that the method we use for gauging the understanding of weak AI needed to
be clarified.

The basis for his argument that strong AI is impossible lies in his assertion
that brain cells have "causal powers" that computer chips don't have, and
in Axiom 3. If you disagree with those arguments, then his argument is useless
to you. I hope the scientists attempting to develop strong AI systems disagree
with these arguments, so that we can get somewhere... like generating
semantic "representations" based on a given syntax, and using those semantic
representations to generate actions similar to humans, so that we can call it
"thinking".

> Surely the machine "understanding" the human is purely a syntactic and
> semantic issue? Humans generally instruct machines through well defined
> interfaces and formal languages (command line interface, programming
> languages, etc.) This kind of understanding is different from the
> cognitive state of understanding that Strong AI supporters want to
> ascribe to a computer running an appropriate AI program.

...summary of Axiom 3, which I disagree with...

---Mike


Charles D. Chen

unread,
Dec 1, 1998, 3:00:00 AM12/1/98
to

Keith Wiley wrote in message <36630CD0...@tigr.org>...
>Not today, but it won't be more than a couple decades at most. Keep an eye out.


Will it take that much time? I think one year may be enough.

Charles


Kenneth Roback

unread,
Dec 1, 1998, 3:00:00 AM12/1/98
to
Haavard N. Jakobsen wrote:
>
> Kenneth Roback wrote in message <36629323...@enator.se>...
> >Synosemyne wrote:
> >>
> >> How does one prove that computers lack understanding?
> >
> >Well.
> >I guess we have to trust our own intelligence in this area, since man has
> >created the computer from the beginning.
>
> Well, God created man in the first place and look what happened to
> mankind... :-)

Mankind invented the computer (else we wouldn't have this dialogue).

> >Some questions that might help the reasoning (if having problem with that):
> >
> >Would your computer produce anything without you starting any program in it?
> No, but this doesn't matter much. Maybe it's the program that is intelligent.

At least I would be interested if the computer did!



> >If you have a program tucked into a computer and run it, would it produce
> >something you never expected or does it start to "behave" quite or
> >drastically different from what it is supposed to (i.e. not already
> >programmed or controlled by any rules) ?
> No, but this may change soon. Why shouldn't it be able to analyse and
> diagnose code, and then fix it if it found a bug?

It most probably will, in time!

> This may also start as strict rules; add some learning behaviour on top of
> that and in 100 years' time you get a self-conscious machine (maybe),
> provided you have the computing power and storage capacity needed.
> It's still controlled by the same rules and the same program, but it has
> evolved and added or removed some rules.
> As long as the technology is available, there is no reason that this
> shouldn't happen (except that mankind might not want a self-controlling
> machine out of fear of the consequences).

I agree!



> Whether this should be considered an intelligent machine is a different thing,
> a bit more philosophical. If it can reason like a human, is it intelligent?

It depends on whether you consider humans to be intelligent or not,
and whether you consider human reasoning to be intelligent or not.

> Humans also have creative sides that I doubt a machine can come up with.

Creativity doesn't necessarily equal intelligence, according to me.
There have been a whole bunch of people in human history using their mind
and creativity to construct destructive weapons and much more.
I wouldn't consider that particularly intelligent, just misuse of hardware :-)

However, I do understand what you're referring to.

/Kenneth
Swedish neural network inside (not Intel)

Mike Burrage

unread,
Dec 1, 1998, 3:00:00 AM12/1/98
to

Jerry Hull wrote:

> I can adumbrate an argument why computational operations do not constitute
> consciousness, tho I recognize that like all such arguments it will be
> unpersuasive to those disinclined to accept its conclusion.
>
> The basic elements of computation can be reduced to increment and
> if-zero-branch-else decrement (Boolos & Jeffrey's abaci). It is clear that
> these do not, in themselves, provide any definition of consciousness. Nor, it
>

Do individual neurons define consciousness? No.

> should be clear, could the addition of either of these operations to some
> scheme of computation somehow cross the line to consciousness. It should be

Wow, that's a big step. Let me apply that reasoning to neurons...
1. Individual neurons do not define consciousness.
2. It is clear that the addition of these neurons into groups does not cross the line
of consciousness.
C. Brains don't contain consciousness.


>
> obvious, therefore, that consciousness cannot be DEFINED in terms of
> computation. Alternatively, I KNEW I was conscious long before I KNEW
> anything about numbers, so clearly consciousness cannot be defined in terms of
> computation.

Hmm, I knew about consciousness before I knew about neurons and a brain, but I still
believe they exhibit consciousness.

>
> The other possibility is that some computational scheme CAUSES consciousness,
> leaving it open exactly how the latter might be defined. But this is absurd
> since the computational operations are FORMAL, i.e., they can be realized in
> many different ways -- rooms full of Chinese, tinkertoys & streams, &c. &c. --
> such that the specific causal powers of a given realization may differ in any
> particular respect from any other realization.
>
> Thus, one may conclude that computation neither defines nor causes
> consciousness.

What if we could represent the action of a neuron using computation (such
formulas as F = G*m1*m2/(r*r)). This is what science attempts.
If that were possible, and we could represent the interaction
between neurons using computation, then logically we could represent
consciousness computationally. Right?
Your line of arguments holds true if science can NEVER predict the future state of a
neuron computationally. Do you believe this?
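
For what "predicting the future state of a neuron computationally" could
look like, here is a minimal sketch of the standard leaky integrate-and-fire
simplification (Python, illustrative parameters only, not a claim about what
real neurons do):

# Leaky integrate-and-fire neuron: integrate the membrane equation with
# Euler steps, emit a spike and reset whenever the threshold is crossed.
def simulate(input_current, dt=0.001, tau=0.02, v_rest=-0.065,
             v_threshold=-0.050, v_reset=-0.065, resistance=1e7):
    """Return the membrane-potential trace and the spike times."""
    v = v_rest
    trace, spikes = [], []
    for step, current in enumerate(input_current):
        # tau * dV/dt = -(V - V_rest) + R * I   (Euler integration)
        v += dt * (-(v - v_rest) + resistance * current) / tau
        if v >= v_threshold:        # threshold crossed: spike and reset
            spikes.append(step * dt)
            v = v_reset
        trace.append(v)
    return trace, spikes

# One second of constant 2 nA input produces a regular, predictable spike train.
trace, spikes = simulate([2e-9] * 1000)
print(len(spikes), "spikes; final membrane potential", round(trace[-1], 4), "V")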

> --
> Jer
> "Our Father which art in heaven / Stay there
> And we will stay on earth / Which is sometimes so pretty."
> -- Jacques Prévert

---Mike


Kenneth Roback

unread,
Dec 1, 1998, 3:00:00 AM12/1/98
to
James Marshall wrote:
>
> My university (De Montfort University, UK), has already succeeded in
> growing neurons on silicon. The next step is to find some way of
> connecting them together...
> James

Hi!
The neurons or the silicon chips ? No, just kidding :-)

Do you have the same goal as I mentioned the Americans do ?
Or what is your goal and how far has it come ?

Just curious.

/Kenneth
Swedish neural network inside (not Intel).

Jerry Hull

unread,
Dec 1, 1998, 3:00:00 AM12/1/98
to
On Tue, 01 Dec 1998 09:06:39 -0500, Keith Wiley <kwi...@tigr.org> wrote:

>> Why are you so sure that it is ONLY the signal processing of neurons that
>> makes them relevant to consciousness? Prima facie this is absurd;
>> consciousness is not signal processing. At the chemical and biological levels
>> neurons are doing a lot more than that.
>
>Yes, but if you look at what is happening, it is the signal processing that is
>directly affected. All the chemical gradients flowing around the brain seem
>to have a one to one direct causal effect on firing patterns of the neurons.
>It is the synaptic activity that is altered by such chemical events. All we
>need to do is replicate such a wide-spread effect on the firing. Besides, even
>if chemicals are the root of the conscious phenomenon and not simply the
>cause, then we can simply mix chemicals into our
>"neural-brain-computer-circuits". We have considerable experience making wet
>computers and putting live neurons onto silicon chips. Perhaps you're right, the
>necessary next step is to start mixing various chemicals onto the chips, but
>the important thing is that there is no good reason to believe this can't be
>artificially manufactured.

Clearly if we exactly duplicate a human brain AND ITS NEUROPHYSIOLOGICAL
ENVIRONMENT we will have consciousness. "Artificial manufacture" is not the
issue. The problem is determining what characteristics of that
brain+environment are essential and which are superfluous. Your assumption is
that it is the signal processing (=? synaptic activity) which is essential;
but that is precisely what I am questioning.

>> > We haven't built trillions of them and wired
>> >them together because it's prohibitively expensive and the technology is still
>> >not viable in vast quantities, but it's perfectly doable from a manufacturing standpoint.
>>
>> Yeah, but so what?
>
>So what? Basically, don't know it till you try it is so what. We don't know
>for certain that consciousness comes from the neural net, but it is a very
>popular theory amongst people who understand the brain, so until we try it, it
>should not be discarded as an incorrect theory. It actually has a pretty good
>chance, and until we build one, we won't know for certain that is works, but
>we also won't know for certain that it doesn't work.

I have elsewhere given reasons why I think that simply isolating the
computational aspects of brain activity will be inadequate to produce
consciousness. Go ahead with your experiments. But surely we can tell ahead
of time when an avenue of research is less than promising. It's possible you
could cook up something that tastes just like pate de foie gras from
Kennelration, but I ain't gonna subsidize your kitchen.

>> >Now, while me may fundamentally understand what is going on inside a neuron to
>> >the point that we can make superficial models of it, we don't know for certain
>> >yet that we can manufacture a copy of an existing neuron. Logical speculation
>> >based on further research involving finer levels of observation suggests that
>> >we will be able to soon, but that issue is open to debate. That question is
>> >not relevant to building artificial intelligence though, it is only relevant
>> >to mind uploading, in whiche one tries to make an AI that is a replica of an
>> >existing brain.
>>
>> Agreed, but if you are going to try to construct a conscious being, the
>> obvious guide would be the characteristics of beings that are conscious.
>
>Right, the characteristics, but not an atom for atom copy. I totally think
>that AI should be modelled on a human brain. But the argument that we can't
>model it for some unknown reason is unfair, *because* it is unknown that we
>can't model it. Until you can prove we can't model it, why are you so sure?

I am sure that we will, some day, lord willing, understand what it is about
the biological brain (+ environment) that enables it to sustain consciousness.
I am only arguing that COMPUTATIONAL models -- as such -- clearly appear to be
insufficient.

>> >What is not really arguable anymore is the basic question of building a
>> >machine that does what a neuron does. We can do it. I don't think we can do
>> >it at 1x scale, but I think we can at 10x or 100x scale. The components
>> >etched into present computer chips are quite a bit larger than the internal
>> >structures of a neuron, but are vastly smaller than the neuron itself, and our
>> >techniques have been getting smaller without a sign of abatement for decades
>> >now. You lack the patience than computer and neurological research has most
>> >definitely earned.

1st, it is possible that neurons will not turn out to be -- by themselves --
the relevant, or only relevant, structure. 2nd, even assuming that they are,
it need not be the case that their signal processing component is the ONLY
relevant component, vis-à-vis consciousness.

>> My patience is beside the point. My basic claim, remember, is that there is a
>> lot more going on in the brain than can be replicated via basic signal
>> processing. For example, it has been claimed that in every brain cell there
>> is a little magnetic node, similar to those in one celled animals which
>> (appear to) use them for directionality. What if these magnetic nodes turned
>> out to be important for consciousness?
>
>Then we build them into our AI machines. Sheesh. I keep saying the same
>thing over and over again. I'm not saying that a bidigital neural network
>tossing nothing but data around will necessarily become conscious (it does
>seem a little too simple, doesn't it). I'm saying that whatever it is, we can
>make it.

Oh no, not another Sheesh guy! If all you are arguing is that, WHATEVER IT IS
about the brain (+ environment) that enables consciousness, it is something we
can IN PRINCIPLE build in our laboratories, then you have no dispute. Give me
a pretty lab assistant & some time on the couch, & I'll build you one myself.
I was arguing against replacing neurons with computer chips.

>> > People were wearing spectacles long before we had
>> >a modern understanding of light, much less the 20th century merger of wave and
>> >particle theories.
>>
>> To understand light enough to make spectacles you don't need QM.
>
>What, that's *not* the point. The point is you don't have to implicitly
>understand something in order to build it. We don't necessarily have to have
>a complete working knowledge of the brain in order to build it. We might be
>able to build it long before we totally understand it.

What is "implicit" understanding? Sure, it's also possible a Trobriand
Islander could construct a Ferrari from driftwood & sea shells. But I ain't
gonna subsidize his efforts either.

>And my point stands. We still might manufacture artificial *consciousness*
>before we understand the phenomenon.

And I "might" win the Nobel Prize for literature.

David Kastrup

unread,
Dec 1, 1998, 3:00:00 AM12/1/98
to
ZZZg...@stny.lrun.com (Jerry Hull) writes:

> Clearly if we exactly duplicate a human brain AND ITS
> NEUROPHYSIOLOGICAL ENVIRONMENT we will have consciousness.

Oh, sure. Care to explain why the amount of consciousness (if any)
exhibited by a new-born baby is almost useless to anybody? Care to
explain how to substitute for the years of massive impact on its
sensors needed for generating even halfway intelligible reactions?
And even if you duplicate an adult brain including its complete
configuration, what would the brain be *conscious* of if deprived of
any input from an existing world?

--
David Kastrup Phone: +49-234-700-5570
Email: d...@neuroinformatik.ruhr-uni-bochum.de Fax: +49-234-709-4209
Institut für Neuroinformatik, Universitätsstr. 150, 44780 Bochum, Germany

Michael B. Wolfe

unread,
Dec 1, 1998, 3:00:00 AM12/1/98
to
In article <36640ce6.4232584@news-server>, ZZZg...@stny.lrun.com (Jerry
Hull) wrote:

> kr...@daimi.au.dk (Lars Kroll Kristensen) wrote:
>

> >ZZZg...@stny.lrun.com (Jerry Hull) writes:
>
> Soul is a red-herring, in my view. The term should best be regarded as
> referring to the preferential component of a person, which -- like other
> mind-stuff -- is not intrinsically material but must presumably have a
> material substrate to exist.

I'm not much of a philosopher, but what's the stuff that's not
intrinsically material? "Preferential component?" What's that, and why
isn't it intrinsically material?

> The basic elements of computation can be reduced to increment and
> if-zero-branch-else decrement (Boolos & Jeffrey's abaci). It is clear that
> these do not, in themselves, provide any definition of consciousness. Nor, it

> should be clear, could the addition of either of these operations to some
> scheme of computation somehow cross the line to consciousness. It should be

> obvious, therefore, that consciousness cannot be DEFINED in terms of
> computation.

If the basic elements of computation don't define consciousness in and of
themselves, then how can you conclude from this alone that computation
can't "cross the line" to consciousness?

> Alternatively, I KNEW I was conscious long before I KNEW
> anything about numbers, so clearly consciousness cannot be defined in terms of
> computation.

Huh? You didn't know about numbers, so they aren't useful in providing a
definition of consciousness? Numbers are a theoretical construct.
Consciousness is a theoretical construct. Why in the heck can't one be
defined in terms of the other, if it happens to be a *useful* definition?
I'm actually not arguing that it *is* useful, just that you can't assume
that it isn't.

> Well, my cats are intelligent & conscious, so clearly those characteristics
> aren't necessarily connected to human beings. I concede the possibility that
> alien creatures of a different biological/compositional nature from earthlings
> may be conscious, but would insist that, whatever these aliens may be, they
> aren't MERELY computational.

Maybe I missed it, but have you provided an example of this
non-computational aspect? And have you provided an example that can't be
described computationally? That's key, because just because humans do it
one way doesn't mean that some other life form couldn't do it another way
and still be considered intelligent or conscious.

--
--Michael Wolfe
--The Institute for the Learning Sciences
--Northwestern University
--wo...@ils.anti_spam_nwu.edu

James Marshall

unread,
Dec 1, 1998, 3:00:00 AM12/1/98
to
Mike Burrage wrote:
>
> If the method of determining understanding is to gauge responses to the input
> (based on what the responses of the questioner were), and the confidence in
> that determination approaches 1 as the set of all possible inputs has been
> exhausted, then the guy in the room does _understand_ Chinese when all
> Chinese questions have been posed to him (and his responses are valid).
>

I strongly disagree with that, because of your focus on the man INSIDE
the room. I think you would have a stronger case for saying the whole
system understood Chinese.

> Searle's Chinese people were gauging the understanding of the room as if
> it were a weak AI. Ask it a question, gauge the response. Great, he pointed
> out that the method we use for gauging the understanding of weak AI needed to
> be clarified.
>

I'm rather confused by your point here. Weak AI is not claimed to
understand at all. Weak AI is intended to produce tools for studying the
mind, not to produce minds (which is Strong AI). Maybe you could clarify
your point here?

> The basis for his argument that strong AI is impossible lies in his assertion
> that brain cells have "causal powers" that computer chips don't have, and

Searle actually said that brains have causal powers that computers don't.
He doesn't contest the idea that you could build a machine with causal
powers similar to a brain's, and that it could have a mind.

> in Axiom 3. If you disagree with those arguments, then his argument is useless
> to you. I hope the scientists attempting to develop strong AI systems disagree
> with these arguments, so that we can get somewhere... like generating
> semantic "representations" based on a given syntax, and using those semantic
> representations to generate actions similar to humans, so that we can call it
> "thinking".
>
> > Surely the machine "understanding" the human is purely a syntactic and
> > semantic issue? Humans generally instruct machines through well defined
> > interfaces and formal languages (command line interface, programming
> > languages, etc.) This kind of understanding is different from the
> > cognitive state of understanding that Strong AI supporters want to
> > ascribe to a computer running an appropriate AI program.
>
> ...summary of Axiom 3, which I disagree with...
>
> ---Mike

As for Searle's axioms, I don't have a copy of that paper, and I can't
even remember the title of the paper. I can't remember offhand what the
axioms were. When I've re-educated myself, I'll reply again,
James

Mike Burrage

unread,
Dec 1, 1998, 3:00:00 AM12/1/98
to
James Marshall wrote:

> I strongly disagree with that, because of your focus on the man INSIDE
> the room. I think you would have a stronger case for saying the whole
> system understood Chinese.

Right. My mistake.

> > Searle's Chinese people were gauging the understanding of the room as if
> > it were a weak AI. Ask it a question, gauge the response. Great, he pointed
> > out that the method we use for gauging the understanding of weak AI needed to
> > be clarified.
> I'm rather confused by your point here. Weak AI is not claimed to
> understand at all. Weak AI is intended to produce tools for studying the
> mind, not to produce minds (which is Strong AI). Maybe you could clarify
> your point here?

My point is that Searle was gauging the _understanding_ of the room based
on 1) the fact that the output of the room is consistent with the questioner's
own possible responses. This is how the questioner gauges the room's
_understanding_. And most importantly 2) Searle's knowledge that the
man inside the room is not exhibiting (in the sense of understanding Chinese)
the "causal powers" of the mind.

When has the room understood? (Forget that he says "we KNOW" the man
does not understand).

From a scientific standpoint, this is his definition for the measurement of the
room's understanding: #1 + #2. Mine is only #1.

His definition makes it impossible to create a strong AI, because this "causal
power" (the value measured in #2), which he has defined, has been defined not
to exist in machines.

If we could consistently use this measurement to decide whether a system
understands, then it would be useful. I don't see how it can be a "standard"
measurement of a system's understanding, so it is useless.

His argument cannot stand without his measurement of understanding and his
definition of "causal power".

---Mike


Andrew Murray

unread,
Dec 1, 1998, 3:00:00 AM12/1/98
to
If you took all the information that came under the door, and put _it_ into
the rule books, what then?

Andrew.

Andrew Murray

unread,
Dec 1, 1998, 3:00:00 AM12/1/98
to
Oh, and the person's mood decides which shelf to look at....

Andrew.

Ben Zen

unread,
Dec 1, 1998, 3:00:00 AM12/1/98
to
Jerry Hull wrote in message <36629819.567597@news-server>...

>
>Suppose we replaced each cell in the brain with a piece of macaroni that could
>do the exact same thing as a brain cell. What, you say macaroni
>
Hey....Thanks...Good idea... MACARONI!...I should have thought of it.
Yum yum... (I wanted an easy dinner...Cheese macaroni is the easiest I know) ;-)

>can't do the
>same thing as brain cells? Then what makes you think a computer chip can do
>the same thing? Because, frankly, we don't really know WHAT it is that brain
>cells do that support consciousness, &c.
>

IMHO you nailed the problem (or question) exactly:
"What it is that brain cells do that support consciousness"

Five years ago I spent countless hours looking for an answer to that.
I have only recently begun to identify the phenomenon. Without spoiling my advance in
this technology, I can tell you one simple thing. Don't expect to be able to see that under
a microscope. It ain't a material/chemical mechanism. It is in the order of a physical
law that deals with a-causal phenomena.

As for Artificial intelligence that supports relatively advanced forms of Consciousness,
the answer is NO. Or at least not in this lifetime. The phenomenon CANNOT be reproduced
using common electronics. But more commonly speaking, Artificial intelligence will
emerge everywhere in simplistic but useful computer applications. Without "lifelike"
consciousness, this Artificial Intelligence will work at a completely different order than
the animal-mind. Yet, the difference is subtle.

I spoke too much already.
Gotta go. Macaroni is cooking ;-)

-Ben


Ed Rudyk

unread,
Dec 1, 1998, 3:00:00 AM12/1/98
to
I really don't know what all this thread was about...

The main question is not about the CRA or the "macaroni" brain. To give the answer to
the question in the subject line, we should first settle the basic definitions,
such as:
1. what is "consciousness"?
2. what is "understanding"?
etc.
Once we have defined all these things, we can give the answer to
our main question (the answer will reflect the definitions of the "basics",
so if anyone defines the basics in some other way, he will probably give us
a different answer).

Somewhere I once read a very impressive example: in the early days of
Eliza, one of the technical personnel, who didn't clearly know what the
project was about, asked where the other person he was talking to was
sitting...
This is just an example that shows that some people think of an
intelligent system as a system that asks smart questions. But the main
issue remains the same. Without exact definitions of intelligence we can't
say that this or any other given system is INTELLIGENT.

One could say that the Chinese Box system is intelligent, but it is not a
big problem to build such a system using advanced pattern recognition and
association algorithms -- and would the same person still say that this
system is intelligent too?

One could say that the human brain is intelligent, but as yet no one knows
how it works. Maybe it's a very simple learning algorithm we haven't
discovered yet, because of our wrong model of the neuron?
- Thanks, Ed.

P.S. My own opinion is that not all humans are intelligent ... yet (I
hope).
P.P.S. Sorry about my English.


Jerry Hull

unread,
Dec 1, 1998, 3:00:00 AM12/1/98
to
On Tue, 01 Dec 1998 11:16:21 -0500, Mike Burrage <mbur...@ne.mediaone.net>
wrote:

>Jerry Hull wrote:
>
>> I can adumbrate an argument why computational operations do not constitute
>> consciousness, tho I recognize that like all such arguments it will be
>> unpersuasive to those disinclined to accept its conclusion.
>>

>> The basic elements of computation can be reduced to increment and
>> if-zero-branch-else decrement (Boolos & Jeffrey's abaci). It is clear that
>> these do not, in themselves, provide any definition of consciousness. Nor, it
>

>Do individual neurons define consciousness? No.

Correct.

>> should be clear, could the addition of either of these operations to some
>> scheme of computation somehow cross the line to consciousness. It should be
>

>Wow, that's a big step. Let me apply that reasoning to neurons...
>1. Individual neurons do not define consciousness.
>2. It is clear that the addition of these neurons into groups do not cross the line
>of consciousness.
>C. Brains don't contain consciousness.

Notice that your attempt at a parallel argument has gone from 'define' to
'contain'. These are very different things. My plastic jug does not define
milk, but it does contain milk. I would agree that neurons, neither singly
nor in assemblage, DEFINE consciousness. The word 'conscious' existed in
dictionaries long before anyone had any idea what a "neuron" was. However,
empirical evidence strongly indicates that neurons do UNDERLIE consciousness.

>> obvious, therefore, that consciousness cannot be DEFINED in terms of

>> computation. Alternatively, I KNEW I was conscious long before I KNEW


>> anything about numbers, so clearly consciousness cannot be defined in terms of
>> computation.
>

>Hmm, I knew about consciousness before I knew about neurons and a brain, but I still
>believe they exhibit consciousness.

Again, 'exhibit', like 'contain', is different from 'define'.

>> The other possibility is that some computational scheme CAUSES consciousness,
>> leaving it open exactly how the latter might be defined. But this is absurd
>> since the computational operations are FORMAL, i.e., they can be realized in
>> many different ways -- rooms full of Chinese, tinkertoys & streams, &c. &c. --
>> such that the specific causal powers of a given realization may differ in any
>> particular respect from any other realization.
>>
>> Thus, one may conclude that computation neither defines nor causes
>> consciousness.
>
>What if we could represent the action of a neuron using computation (such
>formulas as F = G*m1*m2/(r*r)). This is what science attempts.
>If that were possible, and we could represent the interaction
>between neurons using computation, then logically we could represent
>consciousness computationally. Right?
>Your line of arguments holds true if science can NEVER predict the future state of a
>neuron computationally. Do you believe this?

We can describe the path of a baseball by using similar formulae. That hardly
implies that the baseball is doing computation. (Actually, there is an analog
sense in which WE may be said to use the path of the baseball, e.g., to
"compute" how far it would go when hit with such-and-such force. I am arguing
rather against the more familiar sense in which computation represents
operations upon sequences of a finite alphabet of symbols.)

Keith Wiley

unread,
Dec 1, 1998, 3:00:00 AM12/1/98
to
> >> My patience is beside the point. My basic claim, remember, is that there is a
> >> lot more going on in the brain than can be replicated via basic signal
> >> processing. For example, it has been claimed that in every brain cell there
> >> is a little magnetic node, similar to those in one celled animals which
> >> (appear to) use them for directionality. What if these magnetic nodes turned
> >> out to be important for consciousness?
> >
> >Then we build them into our AI machines. Sheesh. I keep saying the same
> >thing over and over again. I'm not saying that a bidigital neural network
> >tossing nothing but data around will necessarily become conscious (it does
> >seem a little too simple, doesn't it). I'm saying that whatever it is, we can
> >make it.
>
> Oh no, not another Sheesh guy! If all you are arguing is that, WHATEVER IT IS
> about the brain (+ environment) that enables consciousness, it is something we
> can IN PRINCIPLE build in our laboratories, then you have no dispute. Give me
> a pretty lab assistant & some time on the couch, & I'll build you one myself.
> I was arguing against replacing neurons with computer chips.

Despite your petty dig, I'll reply:
Okay, we seem to have found a vague common ground. We both agree it may be
possible to build AI (something I wasn't sure you believed earlier). You
don't think we can do it using "computer chips". I think the problem is that
you have a very 70s/80s/90s view of computer chips. 21st-century computer chips are
going to be radically different. In terms of AI, we will have neural chips by
the truckload, these being hybrid "cyborg" chips with real neurons attached in
neural nets on them. That may be the first step towards building AI. Then,
figuring out what (I make no claim as to what "what" refers to) neurons are
doing, and building machines (computers/automatons/chips/whatever you want to
call it) that replicate the neuron's function. You see how the differentiation
between neuron and computer chip isn't nearly as cut and dried as you insist it
must be. They may or may not work on bidigital electronic chips, but they
will work on "computer chips", meaning chips built by people in Motorola
factories for the express and wholly artificial intention of designing and
building an artificial brain. If this is simply an argument about the
definition of a computer chip, then let's just drop it.

> What is "implicit" understanding? Sure, it's also possible a Trobriand
> Islander could construct a Ferrari from driftwood & sea shells. But I ain't
> gonna subsidize his efforts either.

You really go off the deep end with the analogies, just pointing out...

> >And my point stands. We still might manufacture artificial *consciousness*
> >before we understand the phenomenon.
>
> And I "might" win the Nobel Prize for literature.

doubtful, way too many dramatic metaphors.

Keith Wiley

unread,
Dec 1, 1998, 3:00:00 AM12/1/98
to
> Five years ago I spent countless hours looking for an answer to that.
> I have only recently begun to identify the phenomenon. Without spoiling my advance in
> this technology, I can tell you one simple thing. Don't expect to be able to see that under
> a microscope. It ain't a material/chemical mechanism. It is in the order of a physical
> law that deals with a-causal phenomena.
>
> As for Artificial intelligence that supports relatively advanced forms of Consciousness,
> the answer is NO. Or at least not in this lifetime. The phenomenon CANNOT be reproduced
> using common electronics. But more commonly speaking, Artificial intelligence will
> emerge everywhere in simplistic but useful computer applications. Without "lifelike"
> consciousness, this Artificial Intelligence will work at a completely different order than
> the animal-mind. Yet, the difference is subtle.

Don't you think that's a rather fat worm to dangle in front of everybody?
Personally, I take a strictly reductionist view of the world. How can it be
logically impossible to manufacture something that already exists? Its
existence IS the proof that it can be constructed, for it HAS been
constructed. Don't you think that "common" electronics in the 21st century
will be coming pretty close to technology that can replicate many or most
biologically equivalent functions?

Your statement that it can't be seen under a microscope would hold true for my
connectionist theory, but that doesn't mean it can't be made. Perhaps you
mean something magical that can't be made by people, but that comes
dangerously close to implying God.

Toss us a bone why don't you. Enlighten us.

Jerry Hull

unread,
Dec 1, 1998, 3:00:00 AM12/1/98
to
On Tue, 01 Dec 1998 10:49:35 -0600, wo...@ils.anti_spam_nwu.edu (Michael B.
Wolfe) wrote:

>In article <36640ce6.4232584@news-server>, ZZZg...@stny.lrun.com (Jerry
>Hull) wrote:
>
>> kr...@daimi.au.dk (Lars Kroll Kristensen) wrote:
>>
>> >ZZZg...@stny.lrun.com (Jerry Hull) writes:
>>
>> Soul is a red-herring, in my view. The term should best be regarded as
>> referring to the preferential component of a person, which -- like other
>> mind-stuff -- is not intrinsically material but must presumably have a
>> material substrate to exist.
>
>I'm not much of a philosopher, but what's the stuff that's not
>intrinsically material? "Preferential component?" What's that, and why
>isn't it intrinsically material?

Some things, like "wood" or "gold", are identified in terms of what they are
made up of. Some people might agree that consciousness is "made up" of
thoughts, but thoughts traditionally have been regarded as their own stuff
(Cartesian dualism & all that). I am not espousing dualism, but simply
pointing out that identifying thought & consciousness as such does not involve
identifying the material processes responsible for their existence; indeed, we
still don't know exactly what those processes are.

>> The basic elements of computation can be reduced to increment and
>> if-zero-branch-else decrement (Boolos & Jeffrey's abaci). It is clear that
>> these do not, in themselves, provide any definition of consciousness. Nor, it

>> should be clear, could the addition of either of these operations to some
>> scheme of computation somehow cross the line to consciousness. It should be

>> obvious, therefore, that consciousness cannot be DEFINED in terms of
>> computation.
>

>If the basic elements of computation don't define consciousness in and of
>themselves, then how can you conclude from this alone that computation
>can't "cross the line" to consciousness?

I am not inferring the latter. This argument is loosely modeled on
mathematical induction. Thus, I am claiming that (1) taken by themselves,
these operations do not define consciousness, and (2) added to some previous
scheme of computation, they cannot somehow complete what we MEAN by
consciousness. Indeed, you can look in your dictionary under 'consciousness'
and find nary a reference to increment nor if-zero-branch-else-decrement.
Contrast this with something like 'multiplication', which obviously CAN be
defined in terms of these operations.

>> Alternatively, I KNEW I was conscious long before I KNEW
>> anything about numbers, so clearly consciousness cannot be defined in terms of
>> computation.
>

>Huh? You didn't know about numbers, so they aren't useful in providing a
>definition of consciousness? Numbers are a theoretical construct.
>Consciousness is a theoretical construct. Why in the heck can't one be
>defined in terms of the other, if it happens to be a *useful* definition?
>I'm actually not arguing that it *is* useful, just that you can't assume
>that it isn't.

I ain't doin' no assumin'; I'se ARGUIN'. Malaise is a theoretical construct;
electron is a theoretical construct; but that is hardly a compelling reason to
define either in terms of the other.

>> Well, my cats are intelligent & conscious, so clearly those characteristics
>> aren't necessarily connected to human beings. I concede the possibility that
>> alien creatures of a different biological/compositional nature from earthlings
>> may be conscious, but would insist that, whatever these aliens may be, they
>> aren't MERELY computational.
>
>Maybe I missed it, but have you provided an example of this
>non-computational aspect? And have you provided an example that can't be
>described computationally? That's key, because just because humans do it
>one way doesn't mean that some other life form couldn't do it another way
>and still be considered intelligent or conscious.

Well, I know what consciousness is from my own first-person instance; & I
reasonably infer that other beings whose nature & behavior is sufficiently
similar to my own also have this first-person experience. This "sufficiently
similar" contains all the dirty laundry, of course, & for an alien creature
the judgement would be obviously difficult. However, I have read enough scifi
to be convinced that there are circumstances under which the ascription of
consciousness to aliens would be warranted.

Jerry Hull

unread,
Dec 1, 1998, 3:00:00 AM12/1/98
to
On 01 Dec 1998 17:39:21 +0100, David Kastrup
<d...@mailhost.neuroinformatik.ruhr-uni-bochum.de> wrote:

>ZZZg...@stny.lrun.com (Jerry Hull) writes:
>
>> Clearly if we exactly duplicate a human brain AND ITS
>> NEUROPHYSIOLOGICAL ENVIRONMENT we will have consciousness.
>

>Oh, sure. Care to explain why the amount of consciousness (if any)
>exhibited by a new-born baby is almost useless to anybody? Care to
>explain how to substitute for the years of massive impact on its
>sensors needed for generating even halfway intelligible reactions?
>And even if you duplicate an adult brain including its complete
>configuration, what would the brain be *conscious* of if deprived of
>any input from an existing world?

Well, I would say that babies as soon as they become active in the womb are
conscious. But you are correct to suggest that the QUALITY of human
consciousness is clearly dependent upon years of experience. BTW, with the
emphasized phrase "neurophysiological environment" I intended to INCLUDE
sensors & their input, & indeed much of the human body, which is replete with
sensors.

Jerry Hull

unread,
Dec 1, 1998, 3:00:00 AM12/1/98
to
On Tue, 1 Dec 1998 13:23:42 -0500, "Ben Zen"
<NOSPAM_loknar@videotron._NOSPAMca> wrote:

>As for Artificial intelligence that supports relatively advanced forms of Consciousness,
>the answer is NO. Or at least not in this lifetime. The phenomenon CANNOT be reproduced
>using common electronics. But more commonly speaking, Artificial intelligence will
>emerge everywhere in simplistic but useful computer applications. Without "lifelike"
>consciousness, this Artificial Intelligence will work at a completely different order than
>the animal-mind. Yet, the difference is subtle.

John, the toaster is refusing to work unless we let it watch Jerry Springer
again.

Jerry Hull

unread,
Dec 1, 1998, 3:00:00 AM12/1/98
to
On Tue, 01 Dec 1998 16:02:39 -0500, Keith Wiley <kwi...@tigr.org> wrote:

>> >Then we build them into our AI machines. Sheesh. I keep saying the same
>> >thing over and over again. I'm not saying that a bidigital neural network
>> >tossing nothing but data around will necessarily become conscious (it does
>> >seem a little too simple, doesn't it). I'm saying that whatever it is, we can
>> >make it.
>>
>> Oh no, not another Sheesh guy! If all you are arguing is that, WHATEVER IT IS
>> about the brain (+ environment) that enables consciousness, it is something we
>> can IN PRINCIPLE build in our laboratories, then you have no dispute. Give me
>> a pretty lab assistant & some time on the couch, & I'll build you one myself.
>> I was arguing against replacing neurons with computer chips.
>

>Despite your petty dig, I'll reply:

Was not intended as any kind of dig.

>Okay, we seem to have found a vague common ground. We both agree it may be
>possible to build AI (something I wasn't sure you believed earlier). You
>don't think we can do it using "computer chips". I think the problem is that
>you have a very 70s/80s/90s view of computer chips. 21st-century computer chips are
>going to be radically different. In terms of AI, we will have neural chips by
>the truckload, these being hybrid "cyborg" chips with real neurons attached in
>neural nets on them. That may be the first step towards building AI. Then,
>figuring out what (I make no claim as to what "what" refers to) neurons are
>doing, and building machines (computers/automatons/chips/whatever you want to
>call it) that replicate the neuron's function. You see how the differentiation
>between neuron and computer chip isn't nearly as cut and dried as you insist it
>must be. They may or may not work on bidigital electronic chips, but they
>will work on "computer chips", meaning chips built by people in Motorola
>factories for the express and wholly artificial intention of designing and
>building an artificial brain. If this is simply an argument about the
>definition of a computer chip, then let's just drop it.

Then your new-falutin' chips must be doing more than mere computation. They compute
+ X, where X is whatever it is that is required for consciousness. This is
like the block diagram in the cartoon wherein all paths lead through a box
labelled "A miracle happens".

>> What is "implicit" understanding? Sure, it's also possible a Trobriand
>> Islander could construct a Ferrari from driftwood & sea shells. But I ain't
>> gonna subsidize his efforts either.
>

>You really go off the deep end with the analogies, just pointing out...

Well, perhaps. But the physical/material nature of consciousness is surely
one of the largest unsolved scientific & metaphysical problems, & to blithely
o'erleap it in your rush to manufacture cyborgs seems a magical kind of
thinking akin to cargo cult mentality.

>> >And my point stands. We still might manufacture artificial *consciousness*
>> >before we understand the phenomenon.
>>
>> And I "might" win the Nobel Prize for literature.
>

>doubtful, way too many dramatic metaphors.

Surely my point, tho I am saddened nonetheless.

Mike Burrage

unread,
Dec 1, 1998, 3:00:00 AM12/1/98
to
Jerry Hull wrote:

> However,
> empirical evidence strongly indicates that neurons do UNDERLIE consciousness.

By UNDERLIE, I assume you mean "to form the foundation of" or "to be
at the basis of", which suggests that you feel neurons are ONE OF the
elements that DEFINE consciousness. (That is another way of saying it)
If that is the case, then obviously silicon is not a neuron, and can thus not
UNDERLIE consciousness.
I'm curious to see what empirical evidence suggests that silicon CANNOT
UNDERLIE consciousness (point me at the paper).
Another important question would be "What other aspects UNDERLIE
consciousness?". And since computer chips can obviously not be neurons,
shouldn't the goal of AI be to achieve those other aspects?

> >Hmm, I knew about consciousness before I knew about neurons and a brain, but I still
> >believe they exhibit consciousness.
>
> Again, 'exhibit', like 'contain', is different from 'define'.

Pardon me, but your original argument said "I knew A long before I knew B,
so clearly B cannot be defined in terms of A". That is not logic. Perhaps, as
I often do, you just left out some arguments...

> We can describe the path of a baseball by using similar formulae. That hardly
> implies that the baseball is doing computation. (Actually, there is an analog
> sense in which WE may be said to use the path of the baseball, e.g., to
> "compute" how far it would go when hit with such-and-such force. I am arguing
> rather against the more familiar sense in which computation represents
> operations upon sequences of a finite alphabet of symbols.)

Hmm, I think I was trying to point out that if we can develop a computational
model (based on various formulae) to produce the exact same response
as a neuron to ALL stimuli (input), that computational model, along with the appropriate
machinery to administer the response (electric charge, chemicals,
whatever), could replace the neuron.
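
A toy illustration of that replacement claim, under the assumption that the
stand-in reproduces the unit's input/output behaviour exactly (the "neuron"
below is just a thresholded weighted sum, a placeholder rather than a model
of a real cell):

# If a computational stand-in computes the same function as the unit it
# replaces, nothing downstream in the circuit can tell the difference.
def biological_neuron(inputs, weights=(0.5, -0.3, 0.8), threshold=0.2):
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > threshold else 0

def silicon_stand_in(inputs, weights=(0.5, -0.3, 0.8), threshold=0.2):
    # Implemented differently, but computes the same input/output mapping.
    total = 0.0
    for w, x in zip(weights, inputs):
        total += w * x
    return int(total > threshold)

def network(unit, pattern):
    """A tiny two-stage circuit that uses `unit` twice; its output is all
    the rest of the 'brain' ever sees of that unit."""
    hidden = unit(pattern)
    return unit((hidden, pattern[1], pattern[2]))

patterns = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
assert all(network(biological_neuron, p) == network(silicon_stand_in, p)
           for p in patterns)
print("indistinguishable on all", len(patterns), "input patterns")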

---Mike


Jerry Hull

unread,
Dec 1, 1998, 3:00:00 AM12/1/98
to
On Tue, 01 Dec 1998 16:58:01 -0500, Mike Burrage <mbur...@ne.mediaone.net>
wrote:

>Jerry Hull wrote:


>
>> However,
>> empirical evidence strongly indicates that neurons do UNDERLIE consciousness.
>
>By UNDERLIE, I assume you mean "to form the foundation of" or "to be
>at the basis of", which suggests that you feel neurons are ONE OF the
>elements that DEFINE consciousness. (That is another way of saying it)
>If that is the case, then obviously silicon is not a neuron, and can thus not
>UNDERLIE consciousness.

No, not at all. At one point (the era of Ben Franklin) it was discovered that
electricity underlay lightning. This was an EMPIRICAL discovery, not a
definition. (Nowadays, of course, we have incorporated electricity into our
definition of lightning.) I do not at all believe that consciousness can be
defined in terms of neurons.

>I'm curious to see what empirical evidence suggests that silicon CANNOT
>UNDERLIE consciousness (point me at the paper).

It COULD. Similarly, tapioca pudding COULD underlie consciousness. But there
are reasons (which I have gone into elsewhere) to think that at least the
computational capabilities of silicon are insufficient to underlie
consciousness.

>Another important question would be "What other aspects UNDERLIE
>consciousness?". And since computer chips can obviously not be neurons,
>shouldn't the goal of AI be to achieve those other aspects?

Don't know what this means. I have not been talking about "aspects".

>> >Hmm, I knew about consciousness before I knew about neurons and a brain, but I still
>> >beleive they exhibit consciousness.
>>
>> Again, 'exhibit', like 'contain', is different from 'define'.
>
>Pardon me, but your original argument said "I knew A long before I knew B,
>so clearly B cannot be defined in terms of A". That is not logic. Perhaps, as
>I often do, you just left out some arguments...

What is logic? asked jesting Pilate, but would not stay to reason (an
absolutely irrelevant aside). Here I am only saying that 'exhibit' is NOT THE
SAME as 'define'. My face may exhibit sadness, but it surely does not DEFINE
sadness.

>> We can describe the path of a baseball by using similar formulae. That hardly
>> implies that the baseball is doing computation. (Actually, there is an analog
>> sense in which WE may be said to use the path of the baseball, e.g., to
>> "compute" how far it would go when hit with such-and-such force. I am arguing
>> rather against the more familiar sense in which computation represents
>> operations upon sequences of a finite alphabet of symbols.)
>
>Hmm, I think I was trying to point out that if we can develop a computational
>model (based on various formulae) to produce the exact same response
>as a neuron to ALL stimuli (input), that computational model, along with the appropriate
>machinery to administer the response (electric charge, chemicals,
>whatever), could replace the neuron.

Only on the assumption that a purely computational model is sufficient to
explicate everything neurons are doing vis-à-vis supporting consciousness. But this
is the very point I have disputed from the start.

Jerry Hull

unread,
Dec 1, 1998, 3:00:00 AM12/1/98
to
On Tue, 01 Dec 1998 16:58:01 -0500, Mike Burrage <mbur...@ne.mediaone.net>
wrote:

>Jerry Hull wrote:

>Pardon me, but your original argument said "I knew A long before I knew B,
>so clearly B cannot be defined in terms of A". That is not logic. Perhaps, as
>I often do, you just left out some arguments...

If A is defined in terms of B, then you cannot know that something is an A
without knowing that it is a B. If I don't know what 3 is, or what a straight
line on a plane is, then I can't know what a triangle is. Conversely, if I
can know that something is an A without knowing what C is, then A is not
defined in terms of C.

If indeed I knew what it was to be conscious before I knew anything about
numbers & operations thereupon, then clearly recognition of numerical
operations was not essential to my recognition of consciousness.

Seems logical to me.
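Laid out schematically (my notation, with K for 'knows that' and Def(A,B) for
'A is defined in terms of B'), the inference is a straightforward modus tollens:

    \[
    \textbf{(P1)}\quad \mathrm{Def}(A,B)\;\rightarrow\;\bigl(K(x\ \text{is an}\ A)\rightarrow K(x\ \text{is a}\ B)\bigr)
    \]
    \[
    \textbf{(P2)}\quad K(x\ \text{is an}\ A)\;\wedge\;\neg K(x\ \text{is a}\ B)
    \]
    \[
    \textbf{(C)}\quad \therefore\;\neg\,\mathrm{Def}(A,B)
    \]

(P2) denies the consequent of (P1), so the antecedent falls. The contentious
premise, of course, is whether (P1) is the right reading of 'defined in terms of'.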

Jim Balter

unread,
Dec 1, 1998, 3:00:00 AM12/1/98
to
Jerry Hull wrote:
>
> On Tue, 01 Dec 1998 16:58:01 -0500, Mike Burrage <mbur...@ne.mediaone.net>
> wrote:
>
> >Jerry Hull wrote:
>
> >Pardon me, but your original argument said "I knew A long before I knew B,
> >so clearly B cannot be defined in terms of A". That is not logic. Perhaps, as
> >I often do, you just left out some arguments...
>
> If A is defined in terms of B, then you cannot know that something is an A
> without knowing that it is a B. If I don't know what 3 is, or what a straight
> line on a plane is, then I can't know what a triangle is. Conversely, if I
> can know that something is an A without knowing what C is, then A is not
> defined in terms of C.
>
> If indeed I knew what it was to be conscious before I knew anything about
> numbers & operations thereupon, then clearly recognition of numerical
> operations was not essential to my recognition of consciousness.
>
> Seems logical to me.

Water can be defined as a liquid oxide of hydrogen, yet you claimed
that it cannot be because we knew water before we knew hydrogen and
so on. If there is "logic" to your claims, it is in terms of
a calculus of informal language with many unstated assumptions
concerning such things as the nature and role of definitions,
assumptions that would be seen to be inconsistent with the actual
use of such language were these assumptions explicitly stated.

As to the *recognition* of consciousness, the term "consciousness"
is currently so ill-defined that most people cannot decide whether
they themselves recognize consciousness in various objects and
behaviors, let alone reach agreement with others. Much of the debate
about consciousness is based upon the fundamental fallacy of thinking
that there is some fact of the matter whether or not an ill-defined
attribute applies in each situation. But in order to establish whether,
say, a thermometer is conscious of the temperature or a dog is conscious
of the fact that its tail is wagging or a computer is conscious of
the meaning of Chinese sentences, you have to define consciousness
crisply enough to be able to say whether its sufficient conditions apply
in each of those cases. There are definitions of consciousness for
which they do, and there are definitions of consciousness for which
they don't. There can and will be a debate about whether the various
definitions match traditional use of the terms (e.g., ice is H2O,
is it water?) and whether they capture parsimonious physical theory
(e.g., H2O vs. "the liquid that descends from clouds"). Formal and
informal definitions will reside side by side, with various refinements
and nuances used in various circumstances. Definitions are a matter of
linguistic history (which is, of course, strongly constrained by
empirical observation), and matters of fact are dependent upon the set
of definitions that are assumed, and can only be resolved to the degree
that those definitions establish the criteria for resolution. Trying to
decide unambiguously whether something is conscious without having first
settled on a definition of consciousness that establishes unambiguous
criteria for that thing being conscious is like trying to solve
an equation in two unknowns. Yet that is precisely what a great deal
of philosophical debate comes down to, which is one reason why these
debates go on for centuries.

--
<J Q B>

Patrick Juola

unread,
Dec 1, 1998, 3:00:00 AM12/1/98
to
In article <36640AB2...@northernnet.com>,
kreeg <kr...@northernnet.com> wrote:
>on the contrary, we can produce a chip that can do the exact same as a brain
>cell, but have we tried?!?! not that i know of

Um, no, we can't.

We don't even know what a brain cell does, exactly, so of course we
can't duplicate it.

-kitten


Patrick Juola

unread,
Dec 1, 1998, 3:00:00 AM12/1/98
to
In article <36641655...@ne.mediaone.net>,

Mike Burrage <mbur...@ne.mediaone.net> wrote:
>> The basic elements of computation can be reduced to increment and
>> if-zero-branch-else decrement (Boolos & Jeffrey's abaci). It is clear that
>> these do not, in themselves, provide any definition of consciousness. Nor, it
>>
>
>Do individual neurons define consciousness? No.
>
>> should be clear, could the addition of either of these operations to some
>> scheme of computation somehow cross the line to consciousness. It should be
>
>Wow, that's a big step. Let me apply that reasoning to neurons...
>1. Individual neurons do not define consciousness.
>2. It is clear that the addition of these neurons into groups does not cross the line
>of consciousness.
>C. Brains don't contain consciousness.

Doesn't follow. Just as a trivial counterexample, if "consciousness"
could be found to reside in the glial cells, then brains would contain
consciousness.

If an empty glass doesn't make me drunk, and if adding water to a glass
doesn't make me drunk, then how come I still get drunk on a scotch-and-
water?

More generally, if neurons don't contain consciousness, and the mere
addition of neurons into groups doesn't contain consciousness, then
there's still the possibility that the brain, which contains neurons
*plus some other, unspecified, and as yet unknown property* could
still contain consciousness.

-kitten
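An aside on the Boolos & Jeffrey "abaci" quoted above: the entire instruction
set really is just increment and if-zero-branch-else-decrement over registers
holding non-negative integers. A minimal sketch in Python (the labels, register
names and the addition example are mine, purely for illustration):

    # Minimal sketch of a Boolos & Jeffrey style "abacus" machine: registers
    # hold non-negative integers, and the only instructions are increment and
    # if-zero-branch-else-decrement.  Illustrative only.

    def run_abacus(program, registers, start=0):
        """program: dict label -> instruction tuple
           ('inc', r, next)            : registers[r] += 1, go to next
           ('deczb', r, next, if_zero) : if registers[r] == 0 go to if_zero,
                                         else registers[r] -= 1, go to next
           A label of None halts."""
        label = start
        while label is not None:
            op = program[label]
            if op[0] == 'inc':
                _, r, nxt = op
                registers[r] = registers.get(r, 0) + 1
                label = nxt
            else:  # 'deczb'
                _, r, nxt, if_zero = op
                if registers.get(r, 0) == 0:
                    label = if_zero
                else:
                    registers[r] -= 1
                    label = nxt
        return registers

    # Example: add register 1 into register 0 (empties register 1).
    add = {0: ('deczb', 1, 1, None),   # if R1 == 0 halt, else R1 -= 1
           1: ('inc', 0, 0)}           # R0 += 1, loop back
    print(run_abacus(add, {0: 2, 1: 3}))   # -> {0: 5, 1: 0}

Everything Turing-computable can in principle be built out of these two
instructions, which is why the quoted argument starts from them; whether that
has anything to do with consciousness is the question at issue.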

Stewart Dean

unread,
Dec 2, 1998, 3:00:00 AM12/2/98
to
may...@psu.edu (Mike Yukish) wrote:

>. After a few of those, the pilot can step outside
>the problem of how to adjust, and look at how to train to adjust. With
>time, the pilot can step out of that loop and modify it, and so on ad
>infinitum. Typically, time scales of interest change with increasing
>abstraction.
>
>To my mind, we have the ability to abstract without bound, continually
>stepping outside of the immediate loop and modifying our behavior to
>optimize. Every artificial system, in contrast, has a well-defined
>upper limit of abstraction, near as I can tell. To me, that is a
>critical separation between us and them.

But that needn't be the case. Only in a hardwired system are the
abstraction levels set.
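To make that concrete, here is a toy sketch (the numbers are invented, nothing
more) of a system whose adjustment rule is not hardwired: an inner loop does
the adjusting, and an outer loop steps outside it and modifies how the
adjusting is done:

    # Toy two-level loop: the inner loop adjusts an estimate toward a target;
    # the outer loop "steps outside" and adjusts the inner loop's own rate.
    # All numbers are invented purely to illustrate the shape of the idea.

    def inner_loop(estimate, target, rate, steps=20):
        """Return final estimate and accumulated error for a fixed rate."""
        total_error = 0.0
        for _ in range(steps):
            error = target - estimate
            estimate += rate * error      # first-level adjustment
            total_error += abs(error)
        return estimate, total_error

    def outer_loop(target=10.0, rate=0.05, trials=5):
        """Second-level loop: modify the adjustment rule (the rate) itself."""
        for _ in range(trials):
            _, err = inner_loop(0.0, target, rate)
            _, err_up = inner_loop(0.0, target, rate * 1.5)
            if err_up < err:              # stepping outside the inner loop:
                rate *= 1.5               # change how adjusting is done
        return rate

    print(outer_loop())   # with these toy numbers the rate climbs to about 0.38

Nothing stops you adding a third loop that modifies the second, and so on; the
interesting question is whether brains really do this without bound, as the
quoted post suggests.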

Your model is good in that I feel 'consciousness' is a very small
part of the brain's processes, a manager for the overall aims. Most of
the brain deals with processing stimuli, trying to ferret through
related concepts, and basically spends its time coping with the
demands of input and output - even then it needs some time
just to sort out the mess all this work churns up (sleep).

I believe humans aren't incredibly aware of what goes on around us
("What?" I hear you say). You may have a sock drawer - what colour is the
sock nearest the front on the left, without looking? Unless all your
socks are the same colour you probably won't know - this is a simple
demonstration of how unaware we are of the world around us. To be
more aware we'd have to be an awful lot more intelligent and be able
to hold more than the usual six or so chunks in short-term memory!

Cheers


Stewart Dean - ste...@webslave.dircon.co.uk
alife guide - http://www.webslave.dircon.co.uk/alife

JAVIER MOLINA VILAPLANA

unread,
Dec 2, 1998, 3:00:00 AM12/2/98
to

Ronald Michaels wrote:

> Synosemyne wrote:
> >
> > Hey all,
> >
> > Do any of you happen to know some sensible arguments against the
> > existence of artificial intelligence in computers? It's true that
> > computers are problem-solvers, which some call intelligence, which is
> > fine and almost universally accepted. But computers still lack
> > understanding. How does one prove that computers lack understanding?
> > It is easy to say so but there are always counter-arguments and
> > counter-examples (although most are not very sturdy). What
> > philosophical or concrete arguments do you know of, to show that
> > computers do not understand?
> >
> > Thank you,
> > Synosemyne
>
> When Computers understand, they will so inform us.
>
> Ron
> --
> Ronald Michaels mic...@planetc.com
> 714 Burnett Station Rd. 423 573 4049
> Seymour, TN 37865 USA

I don't believe in the word "artificial", but I do believe in INTELLIGENCE; it really
exists. My question is, following Penrose: if the human mind (or some aspects of
it) has a non-algorithmic nature, can we create elements able to reproduce
them? I think the answer is no. The necessary physics (relevant
quantum events) is not present in our current computers based on Si. These quantum events
have a non-local nature, and I think that's a direction to follow in the comprehension
of intelligence.
