The Chinese Box


David Reich

May 26, 2010, 5:54:26 PM
to the-...@googlegroups.com
Can a box know how to speak Chinese?  Suppose you find a box into which you can put written sentences in Chinese, and get output that's indistinguishable from the responses you might get from a human. Does the box know Chinese?  The conversation is just the same as one you'd have with someone from China - the language is semantically and syntactically flawless, it fits the situation perfectly, and you can have a normal, sensible conversation. 

Now, imagine that (case 1) the reason this box is able to use Chinese properly is because it has someone in it who knows Chinese, who writes out replies as you put in input.  In this case, one would certainly be able to say that the box 'knows' Chinese, correct?

(case 2) The box contains someone who knows absolutely no Chinese - but he has a little (or rather, incredibly large) book that he's able to look through instantaneously.  This book tells him exactly what to write down and output in every case, context-sensitively and so on.  This produces the exact same output as case 1, but does the box still 'know' Chinese?

(case 3) The box contains a computer, running a program to 'speak' Chinese.  The program, though, uses the same instructions as were given to the person in the box.  Does this count as 'knowing' Chinese, or is it to be considered a trick, since the computer doesn't understand what it's doing?  In this case, the computer is only working syntactically, not semantically.
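
(If it helps to picture case 3, here's a trivial sketch in Python of a purely syntactic responder. The entries are made-up examples of mine, and a real 'book' would need context-sensitivity far beyond a flat lookup table - but notice that nothing in it involves meaning:)

# Toy sketch of case 3: a purely syntactic responder. The entries are
# invented examples; the real 'book' would be context-sensitive in a
# way a flat table can't be.
RULE_BOOK = {
    "你好": "你好!",                    # "Hello" -> "Hello!"
    "你会说汉语吗?": "是,我会说汉语。",  # "Do you speak Chinese?" -> "Yes, I do."
}

def respond(chinese_input):
    # Symbols in, symbols out - no meaning is consulted anywhere.
    return RULE_BOOK.get(chinese_input, "请再说一遍。")  # "Please say that again."

print(respond("你好"))  # 你好!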


Paul Gully

May 26, 2010, 6:13:44 PM
to the-...@googlegroups.com
well, what is knowledge? (token response)
it hinges on that question, i think. 

anyway, that point aside, this is how i interpret it. is the meaning behind the language present in the system? are you operating above the system, writing on the paper, or in it? if you're just following rules from a book you're operating in the system - you're like a driver without a map following road signs. if you actually 'know' the language, you're operating above it: you can navigate without the road signs and can get to the same place doing something completely different from what they say. that's the difference between computers with the general architecture and people (although i do not doubt that with SOME strange computer architecture one could mimic the human brain precisely) 

now another question is whether or not it is relevant.

Andrew Towle

May 26, 2010, 6:57:11 PM
to the-...@googlegroups.com
Knowledge is having the information, having it stored in your memory. But knowledge is not only this; it is also the consciousness of what you are doing. Say, for example, that you asked the computer speaking Chinese what it was doing. It responds, 'I'm communicating with you in Chinese.' OK, but the only reason it was able to 'understand' you was because it can recognize the words you are speaking. It didn't actually think to itself, 'I am communicating with this human in Chinese.'
--
Andy T.

kari...@verizon.net

May 26, 2010, 7:49:26 PM
to the-...@googlegroups.com

I agree. If it can't actually think about what it's doing (doesn't consciously know that it's spewing out Chinese), then it doesn't "know" Chinese; it's just imitating.


Paul Gully

May 26, 2010, 8:41:03 PM
to the-...@googlegroups.com
so what implies consciousness? what makes something conscious? what do we have that the computer doesn't?

David Reich

May 26, 2010, 8:49:24 PM
to the-...@googlegroups.com
I'll see if I can play devil's advocate here, arguing that the box does 'know' Chinese.  This won't be that hard - I actually pretty much think that myself: 

I'll probably be using a lot of computer analogies here, so you should know what RAM is and how it differs from a hard drive.

Andrew, you claim that 'knowledge' requires two parts:  1: having the information in a permanent memory; and 2: being conscious of what it's doing.  Karina also claims point 2. I'll argue that the box fulfills the first requirement, and that the second is meaningless.

1: If we take the box as a whole, the book or the hard drive containing the program is an integral component of it.  Without it, the box wouldn't be the same.  It's a permanent part - even more permanent, in fact, and more effective at keeping data, than the memory of the Chinese man in case 1.  The box certainly does have the information stored in memory. 

2: What is consciousness?  I would assume you'd say that it's being able to know what the self is doing.  By such a definition, a computer is more conscious than a human!  It knows exactly what it's doing - down to which bits are in which state.  A human doesn't know what state his individual neurons are in - does this make us not conscious?

Andrew Towle

May 26, 2010, 8:53:53 PM
to the-...@googlegroups.com
Consciousness is the knowledge of what you are doing, having emotions, and just overall having a brain. Yeah, a computer has a 'brain', but it's not a neurological organ like what we have.
--
Andy T.

Paul Gully

May 26, 2010, 8:54:43 PM
to the-...@googlegroups.com
why are emotions an intrinsic part of consciousness? and why is the fact that ours is an 'organ' important here at all?

Paul Gully

May 26, 2010, 8:56:02 PM
to the-...@googlegroups.com
(also, for the record, it appears that i am quaestor-ing and david is devil's-advocating, just to establish roles)

Quoc-Thuy Vuong

May 26, 2010, 10:35:02 PM
to the-...@googlegroups.com
Whew. This topic has progressed a lot already. I need to step in.
As for me, I think the box knows Chinese. By definition, to know is "To possess knowledge, understanding, or information." Technically, the box possesses the knowledge of Chinese in order to give a human answer. How does it do this? Ancient Chinese Secret-aru.

Doubt

May 27, 2010, 5:05:10 AM
to SSPC
Obviously, your question is designed to get us thinking about the
difference between a human and a computer that produce the same
results in the same situations. Assuming you came up with this thought
experiment yourself, I think it's quite clever. And I think that by
the end of this discussion, all of us will at least admit that, because
the differences are so difficult to point out, intelligence will
appear to be a universal phenomenon rather than a uniquely human
enterprise.

I tend to agree with Paul's first response, which I believe addresses
some of the follow-up questions you posed, David. First of all, I
don't think consciousness is merely tied to understanding what one is
doing. I would propose that some "thing" is conscious if and only if
it is aware of its own existence. Whether or not the system of working
parts that allows for the output of functional Chinese is considered a
"whole" from the outside, the system does not understand or know
itself to be a whole. It may understand the role of each individual
part to produce the desired result, but the entire box will never be
aware of itself as a box. It will never contemplate the task it is
given or, assuming it is perfectly programmed, deviate from that task.

This feeds directly into Paul's insightful response, which explains
the difference between operating within a system and actually
understanding and utilizing that system. I think the ability to use a
system is a big part of intelligence, human or otherwise. But the
difference is that computers are not aware of their own agency. They
perform a series of calculations that resolve in "yes" or "no"
answers, which lead them down a particular path of actions. In other
words, they perfectly understand each individual yes/no calculation
(usually represented with 0's and 1's) but are not aware of themselves
as a "computer" making those calculations. Humans invent these
general, all-inclusive descriptions of systems like computers, or in
this case, a box. But that is because we have consciousness, and
understand ourselves to be an enclosed system, a "person," so we
naturally apply that nomenclature to other things as well. Take away a
part of the Chinese Box, and it simply ceases to function, perhaps
adding to its memory that its task can no longer be performed. Take
away a part of a human, and it not only understands that it can no
longer perform a particular function, but it will probably want to
know what happened to it. In other words, humans will seek knowledge
from outside the system they are working in. Sort of like Paul's metaphor
about following road signs, or utilizing knowledge of those signs to
perform an unrelated task.

Clearly, I do not know much about computer programming or
neuroscience, so I would love it if an expert could weigh in from
either or both sides. But even with my limited knowledge, it seems
like a distinct possibility that the human brain is merely an
incredibly complex machine that performs an incomprehensible series of
yes/no calculations that give the illusion of randomness and (as we
call it) emotion. Somewhere along this spectrum from a simple computer
chip to a human brain, consciousness develops, and the being ceases to
have to worry about each calculation (in our case, our heartbeat or
even the replication of cells) as smaller tasks become delegated to
systems within the system. At some point a larger system develops
which, while unaware of its own inner workings, can understand the
collection of these systems as producing a "thing." I won't pretend to
theorize on the how or why of this phenomenon, but I think that there
is a clear relationship between complexity and self-awareness.

Andrew Towle

May 27, 2010, 7:53:30 AM
to the-...@googlegroups.com
Ok, Doubt, I'm not gonna read all of what you just posted, so this may be repetitive. Thuy, you just said that knowledge is the possession of information and the understanding of it. Yes, the box possesses the knowledge of Chinese, but it doesn't understand that it is speaking Chinese.
--
Andy T.

Quoc-Thuy Vuong

May 27, 2010, 8:59:48 AM
to the-...@googlegroups.com
How about a little Chinese baby who just spoke his first word? Would the baby know he's speaking Chinese? Does he understand that he did-aru?

Andrew Towle

May 27, 2010, 5:01:50 PM
to the-...@googlegroups.com
Well, suppose that he heard you say the word 'konichiwa' and then said the word 'konichiwa'. He doesn't know that he's speaking Chinese, because you'd have to say to him, 'You are speaking Chinese.' But he wouldn't know any of what you were saying or what he was saying, so no, the baby would not know he was speaking Chinese.
--
Andy T.

Paul Gully

May 27, 2010, 6:22:27 PM
to the-...@googlegroups.com
so at which point does he begin to 'know' chinese?

David Reich

May 27, 2010, 7:20:59 PM
to the-...@googlegroups.com
First, Doubt, I didn't come up with this idea myself.  It's a fairly well-known thought experiment in AI, which I was reminded of through my usual method of finding a topic for the SSPC: going through the wiki page on thought experiments until something pops up.

As for your claim that 'self-awareness' (again, very poorly defined) is consciousness - would you say that case 2, the human following the book, is conscious?  The human inside knows that he exists, is self-aware, and knows that he's writing responses in Chinese.  He has no way of communicating this to the outside world, though.  Is such communication at all relevant to knowledge or consciousness? 

Anyway, I'm not a neuroscientist, but I do know some things about computers and programming.  One big thing that seems relevant to this is the idea of low-level and high-level understanding or programming.  First, a definition, for those who don't know these terms:  Very low-level programming uses only the 1s and 0s; it is very hard for a human to write or understand, but very easy for a computer to execute.  High-level programming uses languages like PHP.  It's easier for humans to write, but requires a 'compiler', itself written at a lower level, to turn the PHP into 1s and 0s.  High-level understanding would be the sort a human has - I know I'm writing this, but I'm not worrying about moving the muscles in my fingers to hit the keys.  Low-level understanding has much less 'meaning' - it's like a computer that knows the bits 10001111011010010110101.  If it's using a high-level programming language, it might know that those bits encode the string 'C,h,i,n,e,s,e', but it certainly doesn't know anything about Chinese; the idea doesn't have any links to ideas of what Chinese writing looks like, or to China, or anything.  Of course, a high-enough-level program could theoretically be made in which such links existed. 
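
(To make the bit example concrete - the bit string below is just the ASCII encoding of the word 'Chinese':)

# The same data seen at two levels: raw bits, and the string they
# happen to encode (the bits are ASCII for 'Chinese').
bits = "01000011011010000110100101101110011001010111001101100101"
chars = [chr(int(bits[i:i+8], 2)) for i in range(0, len(bits), 8)]
print("".join(chars))  # Chinese
# At the low level there are only 1s and 0s. The idea of 'Chinese' -
# with links to China, or to what the writing looks like - exists, if
# at all, only in a higher-level program.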

As for the idea that taking out a part of the box in cases 2 and 3 would be fatal, whereas it wouldn't in case 1, I entirely disagree.  In case 1, you could easily cut off the Chinese man's head - this would render the box completely useless.  It would be similar to removing the computer's CPU in case 3.  However, in case 1 or 2, you could simply take away the guy's pen - he still wouldn't be able to give output, but the system would be just as conscious - or not so - as it was before.  In case 3 (assuming the program running was sophisticated enough), you could conceivably remove a single stick of RAM, or one CPU core.  This would cause the current process to go awry - the next character output would be wrong - but the program could, in theory, recognise that part of the computer had been removed, using a form of error-checking, and run without it (although a bit slower, missing part of its processing power), possibly asking with the output what had happened - just as well as either other case could ask. 

As for the last thing you said - I write computer programs in Python (a high-level programming language) without having to worry at all about where my memory actually is, or what bits are flipped.  I can find this out if I want, but I don't need to.

Andrew Towle

May 27, 2010, 8:22:46 PM
to the-...@googlegroups.com
The baby already 'knows' a Chinese word, but doesn't have an understanding of what it's saying.
--
Andy T.

Mvpeh

May 27, 2010, 9:17:23 PM
to the-...@googlegroups.com
I don't even understand the question, roflcopters! How canz0rs a box0rz speakz Chinese-is?
1n73rn3t t3xt ftw


--
-Marton

Quoc-Thuy Vuong

May 27, 2010, 9:29:21 PM
to the-...@googlegroups.com
First of all, that baby is speaking Japanese. Its first word would be "Ni Hao". Secondly, the baby would technically know Chinese without the thought of it "knowing" Chinese. What if the box responded to the question "Do you know Chinese?" with "是,我知道汉语" (Yes, I know Chinese)? Would that mean that the box knows Chinese? It did say so itself. What if you asked, "How do you know?" and it responded, "那是一个古老中国秘密" (Ancient Chinese secret)?

Cheers-aru

Andrew Towle

May 27, 2010, 9:38:43 PM
to the-...@googlegroups.com
It can speak Chinese, and when asked what it's doing, it can say that it is speaking Chinese, but it doesn't actually understand that it is speaking Chinese. That's kind of obvious. I don't know why we're arguing about this. This isn't even philosophy.


--
Andy T.

David Reich

May 27, 2010, 9:46:30 PM
to the-...@googlegroups.com
Andrew:

You're completely useless to the SSPC here. It's getting fairly obvious by now that you're not taking this seriously enough.  In this and other threads, you go on irrelevant tangents and claim that people are wrong without, it seems, so much as reading their posts.  This is definitely a valid philosophical issue.  I won't try to prove that to you, but the size of this discussion should be proof enough.  So:  STOP going on tangents, and START reading posts.  If you don't understand a post, look up the bits you don't understand on Wikipedia, and if you still don't, ask the person who posted.  If you don't like a particular topic, start a new one.

The same goes for you, Marton.  Your last comment was completely useless and a waste of space.  I doubt you read any of this topic - and you should have read all of it.


Andrew Towle

May 27, 2010, 9:48:05 PM
to the-...@googlegroups.com
Sorry. You're right. I'll try to read all of what people say and stay on topic.

--
Andy T.

Mvpeh

May 27, 2010, 9:48:36 PM
to the-...@googlegroups.com


I was asking what this question was about. I googled it and nothing came up, so I asked you guys directly - and I get somebody trashing me for asking?


--
-Marton

Quoc-Thuy Vuong

May 27, 2010, 9:49:42 PM
to the-...@googlegroups.com
It's actually logical evidence that it replies that it knows and is speaking Chinese. Even if I asked if it understood it was speaking in Chinese, it would reply "是" (yes). What if the box said it was possessed by a ghost (just play along and don't bring in religion), and the ghost is bound into the box? Would that box know Chinese, then?

Now for David: being the king and tyrant of this organization, you need to be more understanding. You can be firm, but you must also be fair.

Cheers


Andrew Towle

May 27, 2010, 9:49:42 PM
to the-...@googlegroups.com
I smell......UPRISING.
--
Andy T.

Quoc-Thuy Vuong

May 27, 2010, 9:50:30 PM
to the-...@googlegroups.com
Andrew, what did David just type?

Andrew Towle

May 27, 2010, 9:50:52 PM
to the-...@googlegroups.com
No, the ghost knows Chinese, not the box. And just because the box understands what you say and responds to it correctly doesn't mean it actually understands it.

--
Andy T.

Andrew Towle

May 27, 2010, 9:51:31 PM
to the-...@googlegroups.com
What? Oh...sorry. 'Chinese Box...must think about chinese box...'
--
Andy T.

David Reich

May 27, 2010, 9:52:17 PM
to the-...@googlegroups.com
Andrew, I'm glad we have you here; you're good at debate when you post a good argument (same for you, Marton).

Marton, your comment was bad both because it wasn't in any sort of civilized, philosophy-worthy language, and because had you read my first post and intro, you would have understood the topic.

Thuy, at least I'm not banning them, you should see some forums...

Andrew Towle

May 27, 2010, 9:53:53 PM
to the-...@googlegroups.com
Yeah, I know, it's fine. You're just being firm. But you were kind of acting like my social studies teacher (you know what I mean, Marton). Sorry. Must stay on topic...
--
Andy T.

David Reich

May 27, 2010, 9:54:33 PM
to the-...@googlegroups.com
Andrew, again, it's that you make claims ("And just because the box understands what you say and responds to it correctly doesn't mean it actually understands it.") that don't exactly support themselves (the box understands Chinese, but doesn't _actually_ understand it? What?), and you give no support.  If you look at some of the wall-of-text arguments that have been posted, we give pretty coherent and defended arguments.

Andrew Towle

May 27, 2010, 9:57:09 PM
to the-...@googlegroups.com
Ok, yeah, but you know the box doesn't have a brain. Even if it does understand the nuances of the language, it doesn't have a real understanding of what the language is, or of things like sarcasm.
--
Andy T.

Paul Gully

May 27, 2010, 10:00:43 PM
to the-...@googlegroups.com
well, as for what consciousness is, I have a few notes on that. First of all (David will probably get this more than most others; i encourage you to elaborate or rephrase, as i will probably botch it up horribly), establish that our brains work on fundamentally logical principles (or at least, we couldn't even possibly think of understanding them if they didn't) - it may not be the ones and zeroes that david mentioned, and probably isn't in any system that is easily translatable to anything we can easily understand with 'numbers'. by way of reductionism, this is how the brain works. include the different chemical messengers that cause emotion and whatnot, and you have a ridiculously complex system, even at the most basic level. this makes it versatile. like most logical systems of such complexity, it can describe itself, or is self-referential, at least somewhere up the ladder. this is probably among the first things that is required in a 'conscious system'. now, this thing is able to take in information from some thing; this is what it is operating on. so it also needs Input and Output potential. more specifically, output must be able to affect input in a clear-cut way. we can see our hands move when we 'tell them to', for example. 
Now, related to what i was saying before with the 'in the system/over the system' thing, this thing needs to have a huge number of 'levels', and the 'POV' has to be able to jump around in them (or make Tasks jump around in them, as with the language and stuff). tasks like breathing are automatic, being run by a 'sub-process', but can be 'elevated' to a higher status where we control them. walking, blinking, sometimes even talking are all tasks like this. 

so to summarize, a conscious system needs to have the following:
Turing-Completeness (Another way to say 'complex logical system')
Ability to be Self-Referential
"Reality" to interact with
Linked Input/Output methods
Many Levels of operation, and the ability to jump around them

now, one more thing i need to elaborate on from the text, and one new thing. the POV thing is basically where the system is changing and operating - probably where responses to stimuli happen. it's the big unknown in this, what freud called the id.

second, this model doesn't address Developmental stuff. that is also important, and probably much, much more subtle than we have much chance of explaining. 

Paul Gully

May 27, 2010, 10:04:38 PM
to the-...@googlegroups.com
also, I think my opinion on the whole debate that occurred about 'on topic-ness' is apparent. guys, really. perhaps we need to make an off-topic thread or something. that would clear it up.

also, andrew, about the box not having a brain: that's the whole point of this discussion. 'having a brain' is not a necessary part of consciousness, it just happens to be the only place we know of that 'consciousness' occurs, barring microorganisms, etc. 

Andrew Towle

May 27, 2010, 10:06:26 PM
to the-...@googlegroups.com
The box is like a computer. It is speaking Chinese, and is conscious of itself speaking Chinese, but it can only 'think' as far as a computer can 'think'.
--
Andy T.

Quoc-Thuy Vuong

May 27, 2010, 10:08:02 PM
to the-...@googlegroups.com
Well, with a complex enough box, it may be "complete", "able to refer to itself in its answers", "interact with others", "give many outputs", and would in sum have many levels of operation. If it were really well developed, it would learn as the program progresses. This would be the start of artificial intelligence. If the box had such artificial intelligence, would it "know" Chinese?

Thinking back to David's first post, it could just be a person inside the box who's writing all of these answers on a piece of paper-aru.

Paul Gully

May 27, 2010, 10:14:51 PM
to the-...@googlegroups.com
no, read what i said. computers as we make them now, with the RAM-HD-processor architecture, are versatile, but can't do a lot of the things i described in my 'wall of text'. that doesn't mean that we may not be able to explore other architectures that would have some of those capabilities. i mean, really, if you outfitted a standard CPU with an arm and two cameras, you may be able to program an output-input loop like i described. some multi-processor architecture with a lot of ram and almost no hard storage would (probably) be capable of that 'levels' thing. it would be hard to program, but possible (i omit hard storage because no animal we know of can store info like that; memory is extremely fallible)
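
(here's a toy simulation of what i mean by an output-input loop - the numbers are arbitrary, and a real robot would be sensing through cameras rather than reading back a variable:)

# toy output-input loop: the output (a motor command) changes the
# world, which changes the next input the system senses. the numbers
# here are arbitrary placeholders.
position, target = 0.0, 10.0
for step in range(20):
    sensed = position                   # input: the 'camera' reads the world
    command = 0.5 * (target - sensed)   # compute an output from the input
    position += command                 # the output acts on the world...
    # ...and is exactly what the next input will sense: a closed loop.
print(round(position, 2))               # 10.0 - it has converged on the target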



Michael Oppenheimer

May 27, 2010, 10:18:11 PM
to the-...@googlegroups.com
First of all, sorry for not posting in a while.

Paul, I feel that a computer fulfils all of these requirements.  It can be a complex logical system (kind of self-explanatory - if you disagree, please say so).  It can operate on its own systems or programs; if it is told to work on itself, it can.  [read the paragraph below for the third point].  It obviously has linked Input/Output methods.  It (as was already stated) can isolate and increase the priority of some functions if it is told to, or if it 'thinks' that doing so is faster or more efficient, so I think that it can work on an (almost) infinite number of operations.

[thanks for reading here]  We can't decide whether it can interact with "reality" until we settle the "brain" issue.

Quoc-Thuy Vuong

May 27, 2010, 10:20:42 PM
to the-...@googlegroups.com
This talk of input/output and computers brings to mind C-3PO from Star Wars. I would like to use him as an example (even though it is a horrible example). Would you say he knows English even though he's only made up of computers? Would you say that he is living to some extent?

Cheers

David Reich

May 27, 2010, 10:02:50 PM
to the-...@googlegroups.com
Ah, but if it were programmed with enough sophistication and development, why not?  If a statement seems to go against the contextual, larger point of the input, and other conditions are met (I wouldn't know which, myself, but I'm confident that they exist), a program would be able to react to input that it had calculated was 'sarcasm'. 
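
(As a toy sketch of the kind of condition I mean - the word lists below are invented placeholders, nothing like a real detector:)

# Toy 'sarcasm' check: flag input whose surface sentiment contradicts
# the situation it describes. The word lists and the rule are invented
# placeholders, far from a real method.
POSITIVE = {"great", "wonderful", "love", "perfect"}
NEGATIVE = {"crashed", "broken", "failed", "late"}

def maybe_sarcastic(sentence):
    words = set(sentence.lower().replace(",", " ").replace(".", " ").split())
    # positive wording about a negative situation -> possible sarcasm
    return bool(words & POSITIVE) and bool(words & NEGATIVE)

print(maybe_sarcastic("Great, the program crashed again."))  # True
print(maybe_sarcastic("The program crashed again."))         # False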
 
(That's a response to Andrew; Paul just posted his thing as I finished that, and I'm off to read it now)

David Reich

May 27, 2010, 10:23:18 PM
to the-...@googlegroups.com
I would agree with Thuy here - either your definition is missing something, Paul, or such a box is conscious. 
You said:
>>so to summarize, a conscious system needs to have the following:
>>Turing-Completeness (Another way to say 'complex logical system')
Yes, computers have this.  Also, I don't think you know what Turing-complete means - read http://en.wikipedia.org/wiki/Turing_completeness
>>Ability to be Self-Referential
LISP! In fact, any program can modify itself both on the hard drive and in memory; LISP just makes it easy.
>>"Reality" to interact with
Yep
>>Linked Input/Output methods
Yep - the output, read by whoever's outside, affects what they'll put in next.  By the way, why is this necessary for consciousness?  In a sensory-deprivation chamber, I would still be conscious.
>>Many Levels of operation, and the ability to jump around them
This is the hardest, but I would say computers have this too.

You mainly talked about this in the post that arrived in the middle of my writing this, so here's a reply:
I could, running on a standard IBM motherboard and x86 CPU, program something with multiple levels.  It's very possible to make, say, a Python script (high-level) that takes input from a user by means of voice recognition and converts it to text, then calls an embedded low-level function in C++ that processes the text in a different way based on how much memory is available, and returns it to a search-engine-like function (written in shell, a mid-level language) that searches the web for other examples of what the user said.
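
(As a runnable stand-in for that layering idea - assuming a Unix-ish system with a standard C library - here's high-level Python handing work down to a low-level C routine:)

# High-level Python calling a low-level C routine (strlen from the
# standard C library) via ctypes - a toy stand-in for the Python ->
# C++ pipeline described above. Assumes a Unix-ish system.
import ctypes, ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c"))
# strlen sees only raw bytes; that these bytes spell an English word
# is an interpretation that lives at the Python level and above.
print(libc.strlen(b"Chinese"))  # 7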

Michael gives a reasonably good argument here.

Of course, the hard part of this is the high levels.  I've got the basics of a big argument about that in my head, but I think I'll post it tomorrow when I've thought it out better.

Paul Gully

May 27, 2010, 10:24:52 PM
to the-...@googlegroups.com
well, the linked output-input isn't really present, and there definitely isn't a system to recognize those connections. (they exist in some cases, but are not used to create 'AI'; some RC planes and stuff use camera tracking to judge their relative speed.) 
also, C-3PO is assumed to have some sort of complex AI working inside. it's never elaborated how it works, but he certainly appears 'conscious'. (working with fiction, of course, is hard though...)

Paul Gully

May 27, 2010, 10:39:36 PM
to the-...@googlegroups.com
Ok, i think most of you are misunderstanding what i mean by levels. i don't mean levels of complexity (from low-level programs to high-level ones). perhaps a better way to explain it is that each process outputs information that is used by the higher-level processes, until you get to the top. a fairly low-level process is one which, when you look around your room, say, composites all of the information from your eyes into a 3D-ish thing. that information is then taken by, say, the walking process to auto-correct your steps while you're having a conversation. or maybe you've developed a reflex that allows you to catch balls really well mid-air. that has been internalized as a process that is receiving info from your eyes and your sense of position (which is a process that takes the bent-ness of your muscles to determine approximately where your limbs are). and so on.  

also, it occurs to me that the big thing i am really missing is consciousness itself. these are requirements for the system that supports consciousness, but the thing itself is a 'programming issue': i've been talking hardware, not software. (i think i have explained some of my theories about this to david, and will discuss them with him in person to make them more coherent and less inane before posting them, perhaps)

David Reich

May 27, 2010, 10:32:37 PM
to the-...@googlegroups.com
There is a very rough form of output -> input: information displayed on the screen will prompt the user to enter different commands.  Here's an example:

A program to learn the names of different colors: 10 different colors are displayed onscreen, and the user is prompted to name each one.  The same program is run 5 times, with 5 different users.  For each color, the most common name is saved as the name of that color.  This information could then be used somewhere else.
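
(In code, the saving step might look something like this - the users' answers are simulated stand-ins here, where the real program would display the colors and prompt:)

# Sketch of the color-naming program: keep the most common name each
# color received across users. The answers are simulated stand-ins
# for the 5 prompted users described above.
from collections import Counter

answers = {
    "#FF0000": ["red", "red", "crimson", "red", "scarlet"],
    "#0000FF": ["blue", "navy", "blue", "blue", "azure"],
}

names = {color: Counter(replies).most_common(1)[0][0]
         for color, replies in answers.items()}
print(names)  # {'#FF0000': 'red', '#0000FF': 'blue'}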

And yes, these aren't really used to create AI, but that's not their fault.  They could be.  Remember, though, the only input the system we're discussing here gets is Chinese writing, and that's the only output it's allowed.

Also, you haven't explained why such a cycle is necessary for consciousness.

Andrew Towle

May 28, 2010, 7:38:09 AM
to the-...@googlegroups.com
Ok Thuy, going back to your reference to C-3PO: I don't think he's living to any extent. He's not an animal or a plant. He doesn't perform chemosynthesis, photosynthesis, or cellular respiration. He doesn't eat, either. He may be conscious, but he's definitely not living.
--
Andy T.

Quoc-Thuy Vuong

May 28, 2010, 7:45:37 AM
to the-...@googlegroups.com
Does he know English-aru?

David Reich

May 28, 2010, 4:36:57 PM
to the-...@googlegroups.com
He knows 'Galactic Common' or something, which is Star Wars for English.

And living and conscious aren't necessarily causally linked.