I showed in the argument I gave you why believing in both existential phenomenalism (EP) and scientific realism (SR) entails a contradiction, but you snipped it and made no attempt to refute it. I think you realise you were unable to refute the argument, and so cannot argue that your position is not nonsense. I say nonsense rather than false because, as I showed, it contains a contradiction. You cannot hold that you take an existential phenomenological approach to any enquiry about reality while also holding that what you consciously experience (which any existential phenomenological approach is based on) makes no difference to your behaviour. And the argument (which you snipped and ran from rather than honestly facing up to) shows that the mainstream scientific interpretation (which scientific realism suggests you believe in) implies that the evidence the phenomenological approach is based on cannot be evidence. So EP claims it is evidence, SR claims it is not, and if you believe they are both right, you are simply being illogical. I can understand you not having realised this, but to just snip and run when shown it in an argument is, in my opinion, a pitiful response (assuming you did not have some agenda for doing so).
> > Furthermore the following argument shows that phenomenology is logically incompatible with the ontology suggested by the mainstream interpretation of science.
>
> >...snip...
>
> I am going with this contrary view instead:
>
> "In my lectures on mirror neurons I often conclude by saying that our research should be called existential neuroscience. I say this because the themes raised by mirror neuron research map well onto the themes recurrent in existential phenomenology." P.266
>
> "Mirroring People: The New Science of How We Connect with Others", 2009, by Marco Iacoboni. There is also the idea of using mirror neuron science to make empathic robots.
>
> In summary, the main idea I would like you to think about is the limitation of logic in arguing about unobservable things. Take consciously experiencing, for example. The more you argue for a "logic" of consciously experiencing, the more you lose touch with that "you" yourself. You are negating it. Rather than negate it, interocept it as forces of biophysical vitals. Here there is a knowledge of self by sensing it as a physical thing in the world; and then, in combination with science, like a medical physician's test for example, a gradually better knowledge builds; so this "act" that starts as a translogic is so much better for you than metaphysics.
>
> In closing, I wish you good health, with emphasis on getting to the good part of it.
>
> SC RED
So in the end you just snip and run, and do not offer any answer to the original question. I do not know what you mean by a "logic" of consciously experiencing. The argument just uses logic based on the fact that I am consciously experiencing.
Also the idea of "empathic" robots entails a misuse of the word empathy. The robot would not be empathising, as its behaviour would not be based on its feelings: whether it was consciously experiencing or not would make no difference to its behaviour. And while a robot could know how another robot would behave, for example, and people like Dennett might claim that there is no distinction between that and knowing what it would be like for the other robot, they would be wrong. There is a distinction, and here is another little argument that illustrates it:
In footnote 3 of Daniel Dennett's paper "What RoboMary Knows" (https://ase.tufts.edu/cogstud/dennett/papers/RoboMaryfinal.htm), Dennett notes:
---
Robinson (1993) also claims that I beg the question by not honouring a distinction he declares to exist between knowing "what one would say and how one would react" and knowing "what it is like." If there is such a distinction, it has not yet been articulated and defended, by Robinson or anybody else, so far as I know. If Mary knows everything about what she would say and how she would react, it is far from clear that she wouldn't know what it would be like.
---
In the paper Dennett imagines RoboMary as follows:
"1. RoboMary is a standard Mark 19 robot, except that she was brought on line without colour vision; her video cameras are black and white, but everything else in her hardware is equipped for colour vision, which is standard in the Mark 19."
Dennett then, it seems to me, considers that RoboMary would consciously experience red when in a situation similar to ours when we experience red. At the very least, from his response to Robinson, it is clear that he is claiming it has not been shown that one could know what something would say and how it would react without knowing what it was like for it. Dennett considers the following objection to his thought experiment:
"Robots don't have colour experiences! Robots don't have qualia. This scenario isn't remotely on the same topic as the story of Mary the colour scientist."
And gives the following response:
"I suspect that many will want to endorse this objection, but they really must restrain themselves, on pain of begging the question most blatantly. Contemporary materialism-at least in my version of it-cheerfully endorses the assertion that we are robots of a sort-made of robots made of robots. Thinking in terms of robots is a useful exercise, since it removes the excuse that we don't yet know enough about brains to say just what is going on that might be relevant, permitting a sort of woolly romanticism about the mysterious powers of brains to cloud our judgement. If materialism is true, it should be possible ("in principle!") to build a material thing-call it a robot brain-that does what a brain does, and hence instantiates the same theory of experience that we do. Those who rule out my scenario as irrelevant from the outset are not arguing for the falsity of materialism; they are assuming it, and just illustrating that assumption in their version of the Mary story. That might be interesting as social anthropology, but is unlikely to shed any light on the science of consciousness."
Here one might straight away point out that there is a distinction between knowing how a robot will behave and knowing which theory about robots is correct. Two people could know exactly how the robot would behave yet disagree about the correct theory of consciousness. One could think the job done there and not bother continuing. But one can go further.
Let us imagine that, for each camera pixel, the Mark 19's eye sockets carry three 8-bit channels, A, B and C, which are used to encode light intensities. For the grey-scale camera the A, B and C channel values will all be the same, but with the colour cameras the values depend on the camera version. With RGB cameras channel A transmits the encoded red intensity, channel B the green intensity, and channel C the blue intensity, whereas with BRG cameras channel A transmits the blue intensity, channel B the red intensity, and channel C the green intensity.
Now consider three Mark 19 robots, each in a different brightly lit room, sitting in a chair with all of its motors disabled, so it is unable to move any body parts, including its cameras.
The first is in a white room with a red cube which its RGB cameras are looking at. These cameras are slightly unusual as they also wirelessly broadcast their signal.
The second is in a white room with a blue cube which its BRG cameras are looking at. These cameras are also slightly unusual as they also wirelessly broadcast their signal.
The third is in a room with no cube; plugged into its camera sockets is a receiver that switches between picking up the signals broadcast by the cameras in the first two rooms.
The processing would be the same in each case, as in each case the channel values for the cube pixels (assuming no shading) would be channel A = 255, channel B = 0, channel C = 0. There seems to me to be no way for Dennett (or any other physicalist philosopher, for that matter) to establish whether the Mark 19 in the third room's experience of a cube was closer to how they (the philosopher) consciously experience a red cube or closer to how they would consciously experience a blue cube. If any philosopher disagrees, then I for one would be interested in how they think they could tell. If not, then there is another example of a distinction between knowing how something will behave and knowing what it would be like (if it was like anything at all) for a robot.
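The crux of the scenario can be checked with a small sketch (the encoder functions and values are my own illustration, assuming the channel layout described above): the channel values the third robot receives are identical whichever room's cameras are broadcasting, so nothing downstream of the camera sockets can distinguish the two sources.

```python
# Hypothetical Mark 19 pixel encodings: three 8-bit channels A, B, C per pixel.

def rgb_channels(r, g, b):
    """RGB camera: channel A = red, B = green, C = blue."""
    return (r, g, b)

def brg_channels(r, g, b):
    """BRG camera: channel A = blue, B = red, C = green."""
    return (b, r, g)

room1 = rgb_channels(255, 0, 0)   # red cube seen by the RGB cameras
room2 = brg_channels(0, 0, 255)   # blue cube seen by the BRG cameras

# The receiver in the third room gets the same channel values either way,
# so the third robot's processing is identical whichever signal it picks up.
assert room1 == room2 == (255, 0, 0)
```

The point of the sketch is only that the transmitted bytes, and hence all subsequent behaviour, underdetermine which colour (if any) the third robot is experiencing.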
"Knock-down refutations are rare in philosophy, and unambiguous self-refutations are even rarer, for obvious reasons, but sometimes we get lucky. Sometimes philosophers clutch an insupportable hypothesis to their bosoms and run headlong over the cliff edge. Then, like cartoon characters, they hang there in mid-air, until they notice what they have done and gravity takes over."
-Daniel Dennett
(I do not expect you to be able to refute it, but perhaps consider why you cannot).