May 26, 2010 05:57:18 PM, the-...@googlegroups.com wrote:
===========================================
Knowledge is having the information, having it stored in your memory. But knowledge is not only that; it is also the consciousness of what you are doing. Say, for example, that you asked the computer speaking Chinese what it was doing. It responds, 'I'm communicating with you in Chinese.' OK, but the only reason it was able to 'understand' you was that it could recognize the words you were speaking. It didn't actually think to itself, 'I am communicating with this human in Chinese.'
On Wed, May 26, 2010 at 5:13 PM, Paul Gully wrote:
well, what is knowledge? (token response) it hinges on that question, i think.
anyway, that point aside, this is how i interpret it. is the meaning behind the language present in the system? are you operating above the system writing on the paper, or in it? if you're just following rules from a book, you're operating in the system; you're like a driver without a map following road signs. if you actually 'know' the language, you're operating above it: you can navigate without the road signs and can get to the same place doing something completely different from what they say. that's the difference between computers with the general architecture and people (although i do not doubt that with SOME strange computer architecture one could mimic the human brain precisely)
now another question is whether or not it is relevant.
I tend to agree with Paul's first response, which I believe addresses
some of the follow-up questions you posed, David. First of all, I
don't think consciousness is merely tied to understanding what one is
doing. I would propose that some "thing" is conscious if and only if
it is aware of its own existence. Whether or not the system of working
parts that allows for the output of functional Chinese is considered a
"whole" from the outside, the system does not understand or know
itself to be a whole. It may understand the role of each individual
part to produce the desired result, but the entire box will never be
aware of itself as a box. It will never contemplate the task it is
given or, assuming it is perfectly programmed, deviate from that task.
This feeds directly into Paul's insightful response, which explains
the difference between operating within a system and actually
understanding and utilizing that system. I think the ability to use a
system is a big part of intelligence, human or otherwise. But the
difference is that computers are not aware of their own agency. They
perform a series of calculations that resolve to "yes" or "no"
answers, which lead them down a particular path of actions. In other
words, they perfectly understand each individual yes/no calculation
(usually represented with 0s and 1s) but are not aware of themselves
as a "computer" making those calculations; the toy sketch after this
paragraph tries to make that blind rule-following concrete. Humans
invent these general, all-inclusive descriptions of systems like
computers, or in
this case, a box. But that is because we have consciousness, and
understand ourselves to be an enclosed system, a "person," so we
naturally apply that nomenclature to other things as well. Take away a
part of the Chinese Box, and it simply ceases to function, perhaps
adding to its memory that its task can no longer be performed. Take
away a part of a human, and it not only understands that it can no
longer perform a particular function, but it will probably want to
know what happened to it. In other words, humans will seek knowledge
from outside the system they are working in. Sort of like Paul's metaphor
about following road signs, or utilizing knowledge of those signs to
perform an unrelated task.
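
To make that rule-following point concrete, here is a minimal sketch in
Python. Everything in it is invented for illustration (the two-entry
rulebook, the function name, the romanized phrases); it is not meant as
a faithful model of Searle's setup, just the flavor of pure symbol
lookup with no meaning anywhere in the system:

# All names and the two-entry rulebook below are made up for
# illustration; a real "box" would have a vastly larger rulebook.
RULEBOOK = {
    "ni hao": "ni hao!",  # "hello" -> "hello!"
    "ni zai zuo shenme": "wo zai yong zhongwen he ni jiaoliu",
    # "what are you doing?" -> "I am communicating with you in Chinese"
}

def chinese_box(symbols):
    # Pure lookup: the box pairs input symbols with output symbols.
    # Nothing here represents meaning; an input with no entry in the
    # rulebook simply gets no reply.
    return RULEBOOK.get(symbols, "")

print(chinese_box("ni zai zuo shenme"))
# -> "wo zai yong zhongwen he ni jiaoliu", produced without any
#    awareness that a conversation is happening

The canned answer about "communicating in Chinese" comes out, but, as
David said above, nothing in the system ever thinks it to itself; delete
that entry and the box just falls silent, without wondering why.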
Clearly, I do not know much about computer programming or
neuroscience, so I would love it if an expert could weigh in from
either or both sides. But even with my limited knowledge, it seems
like a distinct possibility that the human brain is merely an
incredibly complex machine performing an incomprehensibly long series
of yes/no calculations that gives the illusion of randomness and (as we
call it) emotion. Somewhere along this spectrum from a simple computer
chip to a human brain, consciousness develops, and the being ceases to
have to worry about each calculation (in our case, our heartbeat or
even the replication of cells) as smaller tasks become delegated to
systems within the system; the small sketch at the end of this message
tries to make that delegation concrete. At some point a larger system
develops that, while actually unaware of its own inner workings, can
understand the collection of these subsystems as producing a single
"thing." I won't pretend to
theorize on the how or why of this phenomenon, but I think that there
is a clear relationship between complexity and self-awareness.
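
And for the delegation idea above, a similarly hypothetical Python
sketch (again, every name is invented): the top level runs its
subsystems without any access to, or awareness of, how they work:

# Hypothetical sketch of "systems within the system": the top level
# composes subsystems it never inspects.
def heartbeat():
    return "lub-dub"  # a low-level task, handled automatically

def replicate_cells():
    return "one cell became two"  # another delegated task

SUBSYSTEMS = [heartbeat, replicate_cells]

def organism_step():
    # The "whole" just runs each delegated subsystem in turn; nothing
    # at this level knows how any of them works, only that they ran.
    return [task() for task in SUBSYSTEMS]

print(organism_step())
# -> ['lub-dub', 'one cell became two']

Remove heartbeat from the list and organism_step carries on without it,
never asking what happened to the missing part, which is roughly the
contrast with humans drawn above.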