On Thu, 02 Jul 2015 21:40:08 -0400, RSNorman <r_s_n...@comcast.net> wrote:
>On Thu, 2 Jul 2015 15:34:15 -0700 (PDT), Bill Rogers <broger...@gmail.com> wrote:
>
>>On Thursday, July 2, 2015 at 5:41:22 PM UTC-4, Sneaky O. Possum wrote:
>>> Bill Rogers <broger...@gmail.com> wrote in
>>> news:857dae25-bd1d-41a1...@googlegroups.com:
>>>
>>> > On Thursday, July 2, 2015 at 10:51:24 AM UTC-4, someone wrote:
>>> >
>>> >> In this harder-to-miss-the-point scenario, imagine you are in a box
>>> >> with an LED display which initially shows no symbol, then displays
>>> >> either a "1" or a "0", goes off again, then displays either a "1" or
>>> >> a "0", and then goes off again. Imagine this on-off cycle happens
>>> >> several times. Imagine also that you are instructed to shout out
>>> >> "I'm in a blue box" if, when the LED comes on, it displays a "1",
>>> >> and to shout out "I'm in a green box" if the LED comes on and it
>>> >> displays a "0". Imagine you don't know the colour of the box you are
>>> >> in, and that you are able to and do follow the instructions.
>>> >>
>>> >> Now imagine that, unknown to you, what caused the LED to display a
>>> >> "0" or a "1" changed each time. Given that you wouldn't know what
>>> >> caused the LED to display a "0" or a "1", would you agree that you
>>> >> weren't basing what you shouted out on what caused the LED to
>>> >> display a "0" or a "1" each time? ---
>>> >>
>>> > It doesn't matter why the LED displays 0 or 1. My instructions are
>>> > only based on the particular number that I see. I'm certainly not
>>> > instructed to look at the color of the box and say what that color is.
>>> > So of course, when I see a 0 I'll say what I was instructed to say
>>> > when I see a zero, and when I see a 1 I'll say whatever I was
>>> > instructed to say when I see a 1.
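>>> >
>>> > In code terms (a minimal Python sketch; the table and function name
>>> > are my own invented illustration, not anything from the scenario),
>>> > the shout is a function of the displayed symbol alone, and whatever
>>> > caused the display never enters into it:
>>> >
>>> >     # Hypothetical mapping from displayed symbol to instructed shout.
>>> >     INSTRUCTIONS = {"1": "I'm in a blue box", "0": "I'm in a green box"}
>>> >
>>> >     def respond(displayed_symbol, cause_of_display=None):
>>> >         # 'cause_of_display' is accepted but deliberately ignored:
>>> >         # the response depends only on the symbol.
>>> >         return INSTRUCTIONS[displayed_symbol]
>>> >
>>> >     # Whatever produced the "1", the shout is the same.
>>> >     print(respond("1", cause_of_display="mechanism A"))
>>> >     print(respond("1", cause_of_display="mechanism B"))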
>>> >
>>> > So what?
>>> >
>>> > Burkhard is right, this is a weak and poorly fleshed out version of
>>> > Searle's Chinese Room (which is itself an advertisement for a point of
>>> > view rather than an argument).
>>>
>>> It ain't *that* bad. Searle has argued that artificial intelligence is
>>> feasible, with the proviso that an artificial thinking machine will have
>>> to mimic the architecture of an organic one in ways that a binary
>>> processor doesn't. By his own account, he formulated the Chinese Room
>>> thought-experiment to demonstrate that a binary processor functions in an
>>> essentially different way from an organic brain, and will never become a
>>> brain regardless of how fast it gets or how much power it has.
>>>
>>> At the time he formulated the thought-experiment, people were arguing
>>> that continual increases in the processing speed of computers would
>>> eventually result in a conscious computer: there may still be some people
>>> who hold that view.
>>>
>>> Searle may be wrong, but I've yet to read a convincing rebuttal of his
>>> actual claims. (A number of people have convincingly rebutted claims he
>>> never actually made, but the utility of such rebuttals is questionable.)
>>> --
>>> S.O.P.
>>
>>Searle's claim was that no computer running a program based on formal manipulation of symbols can have either understanding or intentionality, because no formal manipulation of symbols (syntactics) can give the symbols meaning (semantics). His argument just doesn't even get to that conclusion, and it certainly does not get to the broader conclusions that people sometimes attribute to him.
>>
>>Here are a few problems with his paper
>>
>>1. The person in the box is a distraction, put there for rhetorical purposes to force the reader's conclusion in the desired direction without an actual argument. We expect the person to be conscious. But the person in the box is acting as a small part of the circuitry in a computer. We would not expect either a small collection of semiconductors or a handful of neurons to be conscious. It would have been a more honest argument if Searle had simply left out the man in the box and just talked about a computer.
>>
>>2. The "algorithmic program" that Searle describes is simply a look-up table. Lots of programs are far more complex than that, and I doubt anyone thinks that a look-up table would emulate consciousness. Searle is really describing an over-simplified program designed to mimic conversation rather than a program designed to act conscious. And yet he wants the reader to treat that look-up table as the paradigm for all algorithmic programs.
>>
>>3. There is no ongoing interaction between the man/box system and the world, and no self-reference, no process by which the man/box system monitors and models its own internal states (and that is certainly something that could be done algorithmically).
>>
>>4. Searle's key claim is that no manipulation of symbols (syntactics) can produce meaning (semantics), but he does not make much of an argument to support this claim, which probably seems self-evident to him. Specifically, he does not ask where meaning comes from in humans. He merely attributes it to unspecified characteristics of the brain.
>>
>>5. Finally, he does not explain what there is in the brain that is non-algorithmic or non-binary. What exactly is there in the timing of the firing of neurons that cannot be broken down into binary operations? Of course the description of them as binary operations would be very complex, considering the summation of lots of inputs, the global levels of different neurotransmitters and neuroactive drugs, etc. But he doesn't offer a good explanation of what is specifically non-algorithmic in the brain. (I sketch the kind of reduction I have in mind at the end of this post.)
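>>
>>To illustrate point 2, here is a minimal Python sketch of the kind of
>>look-up-table "conversation" program Searle describes (the table
>>entries are my own invented examples, not Searle's): every input is
>>matched against a fixed table, and nothing else happens at all.
>>
>>    # Hypothetical look-up-table responder: input string -> canned reply.
>>    RULE_BOOK = {
>>        "How are you?": "I am fine, thank you.",
>>        "What colour is the sky?": "The sky is blue.",
>>    }
>>
>>    def reply(sentence):
>>        # Pure symbol matching; no model of meaning, the world, or itself.
>>        return RULE_BOOK.get(sentence, "I do not understand.")
>>
>>    print(reply("How are you?"))   # -> I am fine, thank you.
>>
>>A program that monitored and modelled its own internal states (point 3)
>>would need far more than this, though it could still be algorithmic.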
>>
>>The Chinese box is basically a trick to force the reader to a conclusion by tweaking his intuitions in the right way. It's not a real argument.
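>>
>>Returning to point 5 for a moment, here is a toy sketch (grossly
>>oversimplified, with invented numbers, purely my own illustration) of
>>the sort of reduction I have in mind: a neuron's summation of weighted
>>inputs against a threshold is itself an ordinary algorithmic
>>computation, ultimately expressible in binary operations.
>>
>>    # Hypothetical toy "neuron": weighted summation plus firing threshold.
>>    def fires(inputs, weights, threshold):
>>        total = sum(x * w for x, w in zip(inputs, weights))
>>        return total >= threshold   # fires (True) or not (False)
>>
>>    # Global neuromodulator level folded in as a crude scaling factor.
>>    def fires_modulated(inputs, weights, threshold, modulation=1.0):
>>        total = modulation * sum(x * w for x, w in zip(inputs, weights))
>>        return total >= threshold
>>
>>    print(fires([1, 0, 1], [0.6, 0.9, 0.5], threshold=1.0))   # True
>>
>>A real neuron is vastly more complicated, but nothing in the extra
>>complication obviously escapes description of this kind.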
>
>What you say makes someone's argument hew even closer to the Chinese
>Box story: they are both basically tricks to force the reader to a
>conclusion by tweaking the situation in the proper way.
>
>There is an enormous difference between a person with a brain doing
>"computations" and a computer doing supposedly the same computations.
>The person has effectors -- we can act in the world and make changes
>to the physical environment. The person has sensors -- we can detect
>changes in the physical environment. And we can determine that the
>changes we see are often direct consequences of the actions we take.
>Even more, the person has internal machinery and the actions and even
>the computations (thoughts, if you want) cause changes in the internal
>machinery (metabolic changes, for example). And the person has
>internal sensors and we can detect the changes in our own bodies that
>are produced from our mental activity. All of these things act in a
>smoothly coherent and coordinated way (usually). These are consistent
>with us being agents in the world acting on it and being acted on. I
>would argue that all these notions are essential parts of what we call
>"being aware of ourselves".
>
>It should not be impossible to build a robot to do a lot of this
>although the incredible quantity of physical changes that occur within
>our body and the incredible quantity of sensors we have to detect
>those changes would be virtually impossible to duplicate in practice.
>Would a robot be designed so that it finds itself in pain and distress
>because of an overactive immune and hormonal system, simply because it
>was given problems to solve that were incredibly difficult, with
>insufficient information and conflicting requirements, forcing it to
>compute behavior that it can easily calculate would be worthless in
>coping with the problem at hand, yet that it must still perform for
>many other reasons? We humans very often react extremely poorly to being
>subjected to stress, physical, emotional, and mental, for long periods
>of time. Would the computer say "I really feel shitty -- I need a
>vacation!" representing self knowledge? Or would it say "my
>diagnostic programs indicate something is amiss -- please summon a
>repair technician"? What we would now build is the latter. I do not
>know why a robot capable of learning from experience, sharing its
>experiences and what it has learned with others, and even discussing
>alternative courses of action and details of its internal states would
>not act as the former and express self-awareness.
>
>To return to the Chinese Room specifically, the person locked in the
>room would not demonstrate any understanding. However, if the person
>actually got out and interacted with Chinese speakers, saw the results
>of the translations, saw how they produced changes both in the behavior
>and the emotional state of the listeners, and further engaged
>in back and forth dialog, then I would argue that true understanding
>could easily result.
>
>To return to someone's silly premise: NAND gates alone may be able to
>produce a fully functional general purpose computer with massive memory
>stores capable of machine learning and all, but it would be like a
>"brain in a vat" or the person locked inside the Chinese Room. To
>have consciousness requires actual behavior in the world, actual
>experiences to detect, analyze, and interpret and, finally, to
>internalize. So the robot has an awful lot of machinery far beyond
>simple NAND gates.
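>
>(The "NAND gates alone" part is the uncontroversial bit, by the way:
>NAND is functionally complete, so NOT, AND, and OR, and hence in
>principle any digital computer, can be built from NAND alone. A toy
>Python sketch, purely my own illustration; the point is that the gates
>are the easy part, while the effectors, sensors, and experience are
>what is missing.)
>
>    # NAND as the sole primitive; the other gates built from it.
>    def nand(a, b):
>        return 0 if (a and b) else 1
>
>    def not_(a):
>        return nand(a, a)
>
>    def and_(a, b):
>        return not_(nand(a, b))
>
>    def or_(a, b):
>        return nand(not_(a), not_(b))
>
>    # Quick check of the derived truth tables.
>    for a in (0, 1):
>        for b in (0, 1):
>            print(a, b, not_(a), and_(a, b), or_(a, b))
>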
No response, someone?