On Sun, Apr 25, 2021 at 4:29 PM Jason Resch <jason...@gmail.com> wrote:

> It is quite easy, I think, to define a program that "remembers" (stores and later retrieves) information.

I agree. And for an emotion like pain, write a program such that the closer the number in the X register comes to the integer P, the more computational resources are devoted to changing that number; and if it ever actually equals P, the program should stop doing everything else and do nothing but try to change that number until it is far enough away from P that it's no longer an urgent matter and the program can again do things that have nothing to do with P.
Artificial Intelligence is hard, but Artificial Consciousness is easy.
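The "pain program" described above can be sketched in a few lines (a minimal toy illustration; the constants, the escape rule, and the way effort is shared with unrelated work are all invented assumptions, not anyone's actual proposal):

```python
P = 100             # the integer the program "fears"
SAFE_DISTANCE = 20  # how far from P counts as "no longer urgent"

def pain(x):
    """Pain rises as x approaches P; zero once x is SAFE_DISTANCE away."""
    return max(0.0, 1.0 - abs(x - P) / SAFE_DISTANCE)

def tick(x, do_other_work):
    """One scheduling step: the higher the pain, the more of the step
    is spent pushing x away from P instead of doing unrelated work."""
    if x == P:
        # Maximal pain: stop everything and escape until it's not urgent.
        while pain(x) > 0.0:
            x += 1
        return x
    if pain(x) > 0.0:
        x += 1 if x > P else -1   # devote effort to moving away from P
    if pain(x) < 0.5:
        do_other_work()           # enough slack left for unrelated work
    return x
```

Whether running this loop constitutes an experience of pain is, of course, exactly the question under dispute in the thread.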
--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to everything-li...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/everything-list/CAJPayv2dufQQNA2B6WGp5_LHPYry%3DoZDKLuwxWfg%3DeQuGT%2Be1g%40mail.gmail.com.
>> And for an emotion like pain write a program such that the closer the number in the X register comes to the integer P the more computational resources will be devoted to changing that number, and if it ever actually equals P then the program should stop doing everything else and do nothing but try to change that number to something far enough away from P until it's no longer an urgent matter and the program can again do things that have nothing to do with P.

> If you truly believe this is the case, then it follows that anyone writing such a program and subjecting it to X=P should be considered guilty of torture. Do you agree?
> It's impossible to refute solipsism
> It's true that the only thing we know for sure is our own consciousness,
> but there's nothing about what I said that makes it impossible for there to be a reality outside of ourselves populated by other people. It just requires belief.
> What's the difference between / how do we know, the program is experiencing pain when P is high, versus the program is experiencing bliss when P is low?
> Or is bliss merely the complete absence of pain and the distinction in my prior question is meaningless?
> I have an analogue computer that implements this: Two magnets. If I push two equal poles toward each other, does this cause the system of the two magnets to feel pain?
On Mon, Apr 26, 2021, 5:29 AM John Clark <johnk...@gmail.com> wrote:
On Mon, Apr 26, 2021 at 6:06 AM Telmo Menezes <te...@telmomenezes.net> wrote:
>> And for an emotion like pain write a program such that the closer the number in the X register comes to the integer P the more computational resources will be devoted to changing that number, and if it ever actually equals P then the program should stop doing everything else and do nothing but try to change that number to something far enough away from P until it's no longer an urgent matter and the program can again do things that have nothing to do with P.
> If you truly believe this is the case, then it follows that anyone writing such a program and subjecting it to X=P should be considered guilty of torture. Do you agree?
Yes. If I'm right, and I think I am, then anyone writing such a program not only should be but logically MUST be considered to have been engaging in torture. What lesson can be drawn from that bizarre conclusion? That ascribing a level of consciousness to something while ignoring all information about its intelligent behavior is not a useful tool for assessing the morality of an action.
John K Clark
It certainly seems likely that any brain or AI that can perceive sensory events and form an inner narrative and memory of that is conscious in a sense, even if it is unable to act. This is commonly the situation during a dream. One is aware of dreamt events but doesn't actually move in response to them.
And I think JKC is wrong when he says "few if any believe other people are conscious all the time, only during those times that corresponds to the times they behave intelligently." I generally assume people are conscious if their eyes are open and they respond to stimuli, even if they are doing something dumb.
But I agree with his general point that consciousness is easy and intelligence is hard.
I think human consciousness, having an inner narrative, is just an evolutionary trick the brain developed for learning and accessing learned information to inform decisions. Julian Jaynes wrote a book about how this may have come about, "The Origin of Consciousness in the Breakdown of the Bicameral Mind". I don't know that he got it exactly right, but I think he was on to the right idea.
--
Brent
On 4/26/2021 4:07 PM, Terren Suydam wrote:
So do you have nothing to say about coma patients who've later woken up and said they were conscious? Or people under general anaesthetic who later report being gruesomely aware of the surgery they were getting? Should we ignore those reports? Or admit that consciousness is worth considering independently from its effects on outward behavior?
On Mon, Apr 26, 2021 at 11:16 AM John Clark <johnk...@gmail.com> wrote:
On Mon, Apr 26, 2021 at 10:45 AM Terren Suydam <terren...@gmail.com> wrote:
> It's impossible to refute solipsism

True, but it's equally impossible to refute the idea that everything including rocks is conscious. And if both a theory and its exact opposite can neither be proven nor disproven then neither speculation is of any value in trying to figure out how the world works.
> It's true that the only thing we know for sure is our own consciousness,

And I know that even I am not conscious all the time, and there is no reason for me to believe other people can do better.

> but there's nothing about what I said that makes it impossible for there to be a reality outside of ourselves populated by other people. It just requires belief.
And few if any believe other people are conscious all the time, only during those times that correspond to the times they behave intelligently.
On Mon, Apr 26, 2021 at 10:08 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
> It certainly seems likely that any brain or AI that can perceive sensory events and form an inner narrative and memory of that is conscious in a sense even if they are unable to act. This is commonly the situation during a dream. One is aware of dreamt events but doesn't actually move in response to them.

> And I think JKC is wrong when he says "few if any believe other people are conscious all the time, only during those times that corresponds to the times they behave intelligently." I generally assume people are conscious if their eyes are open and they respond to stimuli, even if they are doing something dumb.
Or rather, even if they're doing nothing at all. Someone meditating for hours on end, or someone lying on a couch with eyeshades and headphones on tripping on psilocybin, may be having extraordinary internal experiences and display absolutely no outward behavior.
> But I agree with his general point that consciousness is easy and intelligence is hard.
It depends how you look at it. JC's point is that it's impossible to prove much of anything about consciousness, so you can imagine many ways to explain consciousness without ever suffering the pain of your theory being slain by a fact.
However, in a certain sense, intelligence is easier because it's constrained. Intelligence can be tested. It's certainly more practical, which makes intelligence easier to study as well. You're much more likely to be able to profit from advances in understanding of intelligence. In that sense, consciousness is harder to work with than intelligence, because it's harder to make progress. Facts that might slay your theory are much harder to come by.
> However, in a certain sense, intelligence is easier because it's constrained. Intelligence can be tested. It's certainly more practical, which makes intelligence easier to study as well. You're much more likely to be able to profit from advances in understanding of intelligence. In that sense, consciousness is harder to work with than intelligence, because it's harder to make progress. Facts that might slay your theory are much harder to come by.
What I mean by it is that if you can engineer intelligence at a high level it will necessarily entail consciousness. An entity cannot be human-level intelligent without being able to prospectively consider scenarios in which they are actors in which the scenario is informed by past experience...and I think that is what constitutes the core of consciousness.
Brent
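Brent's criterion above, prospectively considering scenarios informed by past experience, can be caricatured in code (a hypothetical sketch; the class, the averaging rule, and the action names are all invented for illustration):

```python
class Agent:
    """Toy agent that 'imagines' each candidate action by replaying
    remembered outcomes before committing to one."""

    def __init__(self):
        self.memory = {}  # action -> list of outcomes actually experienced

    def record(self, action, outcome):
        self.memory.setdefault(action, []).append(outcome)

    def imagine(self, action):
        """Internal simulation: expected outcome based on past experience."""
        outcomes = self.memory.get(action)
        if not outcomes:
            return 0.0    # no experience: neutral expectation
        return sum(outcomes) / len(outcomes)

    def choose(self, actions):
        """Pick the action whose imagined outcome is best."""
        return max(actions, key=self.imagine)

agent = Agent()
agent.record("touch_stove", -10.0)
agent.record("eat_apple", 5.0)
agent.record("eat_apple", 7.0)
```

Whether running such a loop at human scale would necessarily entail consciousness is, of course, exactly what the thread goes on to debate.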
On Tue, Apr 27, 2021 at 1:27 AM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
>> However, in a certain sense, intelligence is easier because it's constrained. Intelligence can be tested. It's certainly more practical, which makes intelligence easier to study as well. You're much more likely to be able to profit from advances in understanding of intelligence. In that sense, consciousness is harder to work with than intelligence, because it's harder to make progress. Facts that might slay your theory are much harder to come by.

> What I mean by it is that if you can engineer intelligence at a high level it will necessarily entail consciousness. An entity cannot be human-level intelligent without being able to prospectively consider scenarios in which they are actors in which the scenario is informed by past experience...and I think that is what constitutes the core of consciousness.
Sure - although it seems possible that there could be intelligences that are not conscious. We're pretty biased to think of intelligence as we have it - situated in a meat body, and driven by evolutionary programming in a social context. There may be forms of intelligence so alien we could never conceive of them, and there's no guarantee about consciousness.
Take corporations. A corporation is its own entity and it acts intelligently in the service of its own interests. They can certainly be said to "prospectively consider scenarios in which they are actors in which the scenario is informed by past experience". Is a corporation conscious?
> So do you have nothing to say about coma patients who've later woken up and said they were conscious? Or people under general anaesthetic who later report being gruesomely aware of the surgery they were getting? Should we ignore those reports? Or admit that consciousness is worth considering independently from its effects on outward behavior?
> I think JKC is wrong when he says "few if any believe other people are conscious all the time, only during those times that corresponds to the times they behave intelligently." I generally assume people are conscious if their eyes are open and they respond to stimuli, even if they are doing something dumb.
> consciousness is harder to work with than intelligence, because it's harder to make progress.
> Facts that might slay your theory are much harder to come by.
On 4/26/2021 11:11 PM, Terren Suydam wrote:
> Sure - although it seems possible that there could be intelligences that are not conscious. We're pretty biased to think of intelligence as we have it - situated in a meat body, and driven by evolutionary programming in a social context. There may be forms of intelligence so alien we could never conceive of them, and there's no guarantee about consciousness.
I don't see how an entity could be really intelligent without being able to consider its actions by a kind of internal simulation.
> Take corporations. A corporation is its own entity and it acts intelligently in the service of its own interests. They can certainly be said to "prospectively consider scenarios in which they are actors in which the scenario is informed by past experience". Is a corporation conscious?
I think so. And the Supreme Court agrees. :-)
Brent
On Tue, Apr 27, 2021 at 1:08 AM Terren Suydam <terren...@gmail.com> wrote:

> consciousness is harder to work with than intelligence, because it's harder to make progress.

It's not hard to make progress in consciousness research, it's impossible.
> Facts that might slay your theory are much harder to come by.

Such facts are not hard to come by, they're impossible to come by. So for a consciousness scientist being lazy works just as well as being industrious, and consciousness research couldn't be any easier: just face a wall, sit on your hands, and contemplate your navel.
> John - do you have any response?
>> It's not hard to make progress in consciousness research, it's impossible.

> So we should ignore experiments where you stimulate the brain and the subject reports experiencing some kind of qualia,
>Why doesn't that represent progress?
> Is it because you don't trust people's reports?
> in an fMRI has led to some interesting facts.
> You seem to think progress can only mean being able to prove conclusively how consciousness works.
> But this doesn't mean we can't develop theories of consciousness
> and gather empirical evidence for them.
> If we simulate brains in computers or develop functional brain scanners that measure individual neurons, we can answer questions about what makes philosophers of mind talk about qualia, or pose or answer questions about consciousness.
On Wed, Apr 28, 2021 at 8:32 AM Terren Suydam <terren...@gmail.com> wrote:

> John - do you have any response?

If you insist.

>> It's not hard to make progress in consciousness research, it's impossible.

> So we should ignore experiments where you stimulate the brain and the subject reports experiencing some kind of qualia,

We should always pay attention to all relevant BEHAVIOR, including BEHAVIOR such as noises produced by the mouths of other people.
> Why doesn't that represent progress?

It may represent progress, but not progress towards understanding consciousness.
> Is it because you don't trust people's reports?

Trust but verify. When you and I talk about consciousness I don't even know if we're talking about the same thing; perhaps by your meaning of the word I am not conscious, maybe I'm conscious by my meaning of the word but not by yours, and maybe my consciousness is just a pale pitiful thing compared to the grand glorious awareness that you have and mean by the word "consciousness". Maybe comparing your consciousness to mine is like comparing a firefly to a supernova. Or maybe it's the other way around. Neither of us will ever know.
>> We should always pay attention to all relevant BEHAVIOR, including BEHAVIOR such as noises produced by the mouths of other people.

> Got it. Accounts of subjective experience are not the salient facts in these experiments, it's the way they move their lips and tongue and pass air through their vocal cords that matters. The rest of the world has moved on from BF Skinner, but not you, apparently.
>>> Why doesn't that represent progress?

>> It may represent progress but not progress towards understanding consciousness.

> Why not? Understanding how the brain maps or encodes different subjective experiences
> If we can explain why, for example, you see stars if you bash the back of your head,
> You make it sound as though there's nothing to be gleaned from systematic investigation,
> the thing I understand the least is how incurious you are about it.
On Wed, Apr 28, 2021 at 11:17 AM Terren Suydam <terren...@gmail.com> wrote:

>> We should always pay attention to all relevant BEHAVIOR, including BEHAVIOR such as noises produced by the mouths of other people.

> Got it. Accounts of subjective experience are not the salient facts in these experiments, it's the way they move their lips and tongue and pass air through their vocal cords that matters. The rest of the world has moved on from BF Skinner, but not you, apparently.

Forget BF Skinner, this is more general than consciousness or behavior. If you want to explain Y at the most fundamental level from first principles you can't start with "X produces Y" and then use X as part of your explanation of Y.
>>> Why doesn't that represent progress?

>> It may represent progress but not progress towards understanding consciousness.

> Why not? Understanding how the brain maps or encodes different subjective experiences

Because understanding how the brain maps and encodes information will tell you lots about behavior and intelligence but absolutely nothing about consciousness.

> If we can explain why, for example, you see stars if you bash the back of your head,

It might be able to explain why I say "I see green stars", but that's not what you're interested in; you want to know why I subjectively experience the green qualia and whether it's the same as your green qualia, but no theory can even prove to you that I see any qualia at all.
> You make it sound as though there's nothing to be gleaned from systematic investigation,

It's impossible to systematically investigate everything, therefore a scientist needs to use judgment to determine what is worth his time and what is not. Every minute you spend on consciousness research is a minute you could've spent researching something far, far more productive, which would be pretty much anything. Consciousness research has made ZERO progress over the last thousand years and I have every reason to believe it will make twice as much during the next thousand.
> the thing I understand the least is how incurious you are about it.

The thing I find puzzling is how incurious you and virtually all internet consciousness mavens are about how intelligence works. Figuring out intelligence is a solvable problem, but figuring out consciousness is not, probably because it's just a brute fact that consciousness is the way data feels when it is being processed. If so then there's nothing more that can be said about consciousness, however I am well aware that after all is said and done more is always said and done.
>> Forget BF Skinner, this is more general than consciousness or behavior. If you want to explain Y at the most fundamental level from first principles you can't start with "X produces Y" and then use X as part of your explanation of Y.

> OK, I want to explain consciousness from first principles, so Y = consciousness. What is X?
> I'm interested in a theory of consciousness that can tell me, among other things, how it is that we have conscious experiences when we dream. Don't you wonder about that?
> I'm very curious about how intelligence works too.
On Wed, Apr 28, 2021 at 2:41 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
> one can discover whether your reports of green qualia correspond to something consistent in our shared world,
Consistency is not the same as identity. If what you and I mean by the words "red" and "green" were inverted then both of us would still say tomatoes are red and leaves are green, but those things would not look subjectively the same to us.
--
John K Clark
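Clark's inverted-spectrum point can be made concrete with a toy model (a hypothetical sketch; the internal states "A" and "B" stand in for private qualia, and everything here is invented for illustration):

```python
WORLD = {"tomato": "long-wavelength", "leaf": "medium-wavelength"}

def make_observer(encoding):
    """encoding: physical stimulus -> private internal state.
    Public colour words are learned per stimulus, so they attach to
    whatever internal state that stimulus happens to produce."""
    labels = {encoding["long-wavelength"]: "red",
              encoding["medium-wavelength"]: "green"}
    def report(thing):
        internal = encoding[WORLD[thing]]  # the private "quale"
        return labels[internal]            # the public report
    return report

alice = make_observer({"long-wavelength": "A", "medium-wavelength": "B"})
bob = make_observer({"long-wavelength": "B", "medium-wavelength": "A"})  # inverted

# Despite opposite internal states, their verbal reports always agree,
# so no behavioral test distinguishes them.
```

This is why consistency of reports across observers cannot, by itself, establish identity of the underlying experiences.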
On 4/28/2021 9:06 AM, John Clark wrote:
>> If we can explain why, for example, you see stars if you bash the back of your head,

> It might be able to explain why I say "I see green stars" but that's not what you're interested in, you want to know why I subjectively experience the green qualia and if it's the same as your green qualia, but no theory can even prove to you that I see any qualia at all.
No, but one can discover whether your reports of green qualia correspond to something consistent in our shared world, as opposed, say, to your reports of little green men when drunk. That's why we have a word for qualia which is different from "illusion".
Brent
>> Consistency is not the same as identity. If what you and I mean by the words "red" and "green" were inverted then both of us would still say tomatoes are red and leaves are green, but those things would not look subjectively the same to us.

> How do you know that?

If you can't know they're the same, you can't know whether they are different either.
Notice that I referred to "reports". You're worrying whether the qualia are the same...contrary to your own avowal that there's no there there.
Brent
--
John K Clark
On Wed, Apr 28, 2021 at 2:39 PM Terren Suydam <terren...@gmail.com> wrote:

>> Forget BF Skinner, this is more general than consciousness or behavior. If you want to explain Y at the most fundamental level from first principles you can't start with "X produces Y" and then use X as part of your explanation of Y.

> OK, I want to explain consciousness from first principles, so Y = consciousness. What is X?

Something that shows up on a brain scan machine, according to you.
> I'm interested in a theory of consciousness that can tell me, among other things, how it is that we have conscious experiences when we dream. Don't you wonder about that?

I am far more interested in understanding the mental activity of a person when he's awake than when he's asleep.
> I'm very curious about how intelligence works too.

Glad to hear it, but there's 10 times or 20 times more verbiage about consciousness than intelligence on this list.
> testimony of experience constitutes facts about consciousness.
>> I am far more interested in understanding the mental activity of a person when he's awake than when he's asleep.

> We're talking about consciousness, not merely "mental activity".
On Wed, Apr 28, 2021 at 2:02 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 4/28/2021 11:39 AM, Terren Suydam wrote:
>
> I'm interested in a theory of consciousness that can tell me, among
> other things, how it is that we have conscious experiences when we
> dream. Don't you wonder about that?
Not especially. It's certainly consistent with consciousness being a brain process. And it's consistent with Jeff Hawkins' theory that the brain is continually trying to predict sensation, and that it is the predictions endorsed by the most neurons that constitute conscious thoughts. In sleep, with little or no sensory input, the predictions wander, depending mainly on memory for input.
There was a neurologist (I forget who) who said, "Waking life is a dream modulated by the senses."
In other words, the brain's main function is effectively that of a dreaming machine (to generate a picture of reality centered on a subject). Normally, when we are awake, this dream is synced up to mostly follow along with an external world, given data input from the senses. But when we sleep, the brain is free to make things up in ways not synced up to the external world through the senses.
I don't know how true this idea is, but it makes sense and sounds plausible. If it's true, we can expect any creature that dreams likely also experiences a picture of a reality centered on a subject.
Jason
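The "dreaming machine" picture above can be caricatured in a few lines of code. Everything here is invented for illustration (the update rule, the mixing weight `alpha`, and the sinusoidal stand-in for memory-driven wandering); it is a toy sketch of the idea, not a model of the brain:

```python
import math

def step(belief, sense, awake, t, alpha=0.8):
    """One update of a toy 'dreaming machine'.

    The machine always generates its own next picture ('belief' plus
    internally generated drift). Awake, that picture is pulled toward
    sensory input; asleep, nothing pulls it, so it wanders on internal
    input alone.
    """
    drift = belief + math.sin(t)  # deterministic stand-in for memory-driven wandering
    if awake:
        return (1 - alpha) * drift + alpha * sense  # dream modulated by the senses
    return drift  # dream unmodulated

world = 10.0  # the external state the senses report
awake_belief = asleep_belief = 0.0
for t in range(200):
    awake_belief = step(awake_belief, world, awake=True, t=t)
    asleep_belief = step(asleep_belief, world, awake=False, t=t)

print(awake_belief)   # stays pinned near the external world
print(asleep_belief)  # drifts on internal input, never tracking the world
```

The point of the toy is only that the same generative loop produces both behaviors; the sole difference between "waking" and "dreaming" is whether the senses are allowed to correct the picture.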
On Wed, Apr 28, 2021 at 3:50 PM Terren Suydam <terren...@gmail.com> wrote:

> testimony of experience constitutes facts about consciousness.

Sure I agree, provided you first accept that consciousness is the inevitable byproduct of intelligence.
>> I am far more interested in understanding the mental activity of a person when he's awake than when he's asleep.

> We're talking about consciousness, not merely "mental activity".

And as I mentioned in a previous post, if consciousness is NOT the inevitable byproduct of intelligence then when we're talking about consciousness we don't even know if we're talking about the same thing.
On Wed, Apr 28, 2021 at 5:51 PM John Clark <johnk...@gmail.com> wrote:
On Wed, Apr 28, 2021 at 4:48 PM Terren Suydam <terren...@gmail.com> wrote:
>>> testimony of experience constitutes facts about consciousness.
>> Sure I agree, provided you first accept that consciousness is the inevitable byproduct of intelligence
> I hope the irony is not lost on anyone that you're insisting on your theory of consciousness to make your case that theories of consciousness are a waste of time.

If you believe in Darwinian evolution and if you believe you are conscious, then given that evolution can't select for what it can't see, and natural selection can see intelligent behavior but it can't see consciousness, can you give me an explanation of how evolution managed to produce a conscious being such as yourself if consciousness is not the inevitable byproduct of intelligence?
It's not an inevitable byproduct of intelligence if consciousness is an epiphenomenon. As you like to say, consciousness may just be how data feels as it's being processed. If so, that doesn't imply anything about intelligence per se, beyond the minimum intelligence required to process data at all... the simplest example being a thermostat.
That said, do you agree that testimony of experience constitutes facts about consciousness?
--
Terren
On 4/28/2021 3:17 PM, Terren Suydam wrote:
On Wed, Apr 28, 2021 at 5:51 PM John Clark <johnk...@gmail.com> wrote:
On Wed, Apr 28, 2021 at 4:48 PM Terren Suydam <terren...@gmail.com> wrote:
>>> testimony of experience constitutes facts about consciousness.
>> Sure I agree, provided you first accept that consciousness is the inevitable byproduct of intelligence
> I hope the irony is not lost on anyone that you're insisting on your theory of consciousness to make your case that theories of consciousness are a waste of time.

If you believe in Darwinian evolution and if you believe you are conscious, then given that evolution can't select for what it can't see, and natural selection can see intelligent behavior but it can't see consciousness, can you give me an explanation of how evolution managed to produce a conscious being such as yourself if consciousness is not the inevitable byproduct of intelligence?
It's not an inevitable byproduct of intelligence if consciousness is an epiphenomenon. As you like to say, consciousness may just be how data feels as it's being processed. If so, that doesn't imply anything about intelligence per se, beyond the minimum intelligence required to process data at all... the simplest example being a thermostat.
That said, do you agree that testimony of experience constitutes facts about consciousness?
It wouldn't if it were just random, like plucking passages out of novels. We only take it as evidence of consciousness because there are consistent patterns of correlation with what each of us experiences. If every time you pointed to a flower you said "red", regardless of the flower's color, a child would learn that "red" meant a flower and his reporting when he saw red wouldn't be testimony to the experience of red. So the usefulness of reports already depends on physical patterns in the world. Something I've been telling Bruno...physics is necessary to consciousness.
Brent
On Wed, Apr 28, 2021 at 8:15 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 4/28/2021 4:40 PM, Terren Suydam wrote:
On Wed, Apr 28, 2021 at 7:25 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 4/28/2021 3:17 PM, Terren Suydam wrote:
On Wed, Apr 28, 2021 at 5:51 PM John Clark <johnk...@gmail.com> wrote:
On Wed, Apr 28, 2021 at 4:48 PM Terren Suydam <terren...@gmail.com> wrote:
>>> testimony of experience constitutes facts about consciousness.
>> Sure I agree, provided you first accept that consciousness is the inevitable byproduct of intelligence
> I hope the irony is not lost on anyone that you're insisting on your theory of consciousness to make your case that theories of consciousness are a waste of time.

If you believe in Darwinian evolution and if you believe you are conscious, then given that evolution can't select for what it can't see, and natural selection can see intelligent behavior but it can't see consciousness, can you give me an explanation of how evolution managed to produce a conscious being such as yourself if consciousness is not the inevitable byproduct of intelligence?
It's not an inevitable byproduct of intelligence if consciousness is an epiphenomenon. As you like to say, consciousness may just be how data feels as it's being processed. If so, that doesn't imply anything about intelligence per se, beyond the minimum intelligence required to process data at all... the simplest example being a thermostat.
That said, do you agree that testimony of experience constitutes facts about consciousness?
It wouldn't if it were just random, like plucking passages out of novels. We only take it as evidence of consciousness because there are consistent patterns of correlation with what each of us experiences. If every time you pointed to a flower you said "red", regardless of the flower's color, a child would learn that "red" meant a flower and his reporting when he saw red wouldn't be testimony to the experience of red. So the usefulness of reports already depends on physical patterns in the world. Something I've been telling Bruno...physics is necessary to consciousness.
Brent
I agree with everything you said there, but all you're saying is that intersubjective reality must be consistent to make sense of other peoples' utterances. OK, but if it weren't, we wouldn't be here talking about anything. None of this would be possible.
Which is why it's a fool's errand to say we need to explain qualia. If we can make an AI that responds to the world the way we do, that's all there is to saying it has the same qualia.
I don't think either of those claims follows. We need to explain suffering if we hope to make sense of how to treat AIs. If it were only about redness I'd agree. But creating entities whose existence is akin to being in hell is immoral. And we should know if we're doing that.
To your second point, I think you're too quick to make an equivalence between an AI's responses and their subjective experience. You sound like John Clark - the only thing that matters is behavior.
On 4/28/2021 9:54 AM, Telmo Menezes wrote:

On Tue, Apr 27, 2021, at 04:07, 'Brent Meeker' via Everything List wrote:

It certainly seems likely that any brain or AI that can perceive sensory events and form an inner narrative and memory of that is conscious in a sense even if they are unable to act. This is commonly the situation during a dream. One is aware of dreamt events but doesn't actually move in response to them.
And I think JKC is wrong when he says "few if any believe other people are conscious all the time, only during those times that corresponds to the times they behave intelligently." I generally assume people are conscious if their eyes are open and they respond to stimuli, even if they are doing something dumb.
But I agree with his general point that consciousness is easy and intelligence is hard.

JFK insists on this point a lot, but I really do not understand how it matters. Maybe so; maybe, if idealism or panpsychism are correct, consciousness is the easiest thing there is, from an engineering perspective. But what does the technical challenge have to do with searching for truth and understanding reality?

Reminds me of something I heard a meditation teacher say once. He said that for eastern people he has to say that "meditation is very hard, it takes a lifetime to master!". Generalizing a lot, eastern culture values the idea of mastering something that is very hard; it is thus a worthy goal. For westerners he says: "meditation is the easiest thing in the world". And thus it satisfies the (generalizing a lot) westerner taste for a magic pill that immediately solves all problems.

I think you are falling for similar traps.
Which is what?
I think you are falling into the trap of searching for the ding an sich. Engineering is the measure of understanding. That's JKC's point (JFK is dead): if your theory doesn't lead to engineering it's just philosophizing, and that's easy.
Brent
On Wed, Apr 28, 2021, at 20:51, Brent Meeker wrote:

On 4/28/2021 9:54 AM, Telmo Menezes wrote:

On Tue, Apr 27, 2021, at 04:07, 'Brent Meeker' via Everything List wrote:

It certainly seems likely that any brain or AI that can perceive sensory events and form an inner narrative and memory of that is conscious in a sense even if they are unable to act. This is commonly the situation during a dream. One is aware of dreamt events but doesn't actually move in response to them.
And I think JKC is wrong when he says "few if any believe other people are conscious all the time, only during those times that corresponds to the times they behave intelligently." I generally assume people are conscious if their eyes are open and they respond to stimuli, even if they are doing something dumb.
But I agree with his general point that consciousness is easy and intelligence is hard.

JFK insists on this point a lot, but I really do not understand how it matters. Maybe so; maybe, if idealism or panpsychism are correct, consciousness is the easiest thing there is, from an engineering perspective. But what does the technical challenge have to do with searching for truth and understanding reality?

Reminds me of something I heard a meditation teacher say once. He said that for eastern people he has to say that "meditation is very hard, it takes a lifetime to master!". Generalizing a lot, eastern culture values the idea of mastering something that is very hard; it is thus a worthy goal. For westerners he says: "meditation is the easiest thing in the world". And thus it satisfies the (generalizing a lot) westerner taste for a magic pill that immediately solves all problems.

I think you are falling for similar traps.

Which is what?

The trap of equating the perceived difficulty of a task with its merit. Are we after the truth, or are we after bragging rights?
I think you are falling into the trap of searching for the ding an sich. Engineering is the measure of understanding. That's JKC's point (JFK is dead),

My apologies to JKC for my dyslexia, it was not on purpose.

if your theory doesn't lead to engineering it's just philosophizing and that's easy.

Well, that is you philosophizing, isn't it? Saying that "engineering is the measure of understanding" is a philosophical position that you are not bothering to justify.
If you propose a hypothesis, we can follow this hypothesis to its logical conclusions. So let us say that brain activity generates consciousness. The brain is a finite thing, so its state can be fully described by some finite configuration. Furthermore, this configuration can be replicated in time and space. So a consequence of claiming that the brain generates consciousness is that a conscious state cannot be constrained by time or space. If the exact configuration we are experiencing now is replicated 1 million years from now or in another galaxy, then it leads to the same exact first person experience and the instantiations cannot be distinguished. If you want pure physicalism then you have to add something more to your hypothesis.
To your second point, I think you're too quick to make an equivalence between an AI's responses and their subjective experience. You sound like John Clark - the only thing that matters is behavior.
Behavior includes reports. What else would you suggest we go on?
Brent
On Wed, Apr 28, 2021 at 6:18 PM Terren Suydam <terren...@gmail.com> wrote:

>> If you believe in Darwinian evolution and if you believe you are conscious, then given that evolution can't select for what it can't see, and natural selection can see intelligent behavior but it can't see consciousness, can you give me an explanation of how evolution managed to produce a conscious being such as yourself if consciousness is not the inevitable byproduct of intelligence?

> It's not an inevitable byproduct of intelligence if consciousness is an epiphenomenon.

That remark makes no sense, and you never answered my question. If consciousness is an epiphenomenon, and from Evolution's point of view it certainly is, then the only way natural selection could've produced consciousness is if it's the inevitable byproduct of something else that is not an epiphenomenon, something like intelligence. And you know for a fact that Evolution has produced consciousness at least once and probably many billions of times.
> As you like to say, consciousness may just be how data feels as it's being processed. If so, that doesn't imply anything about intelligence per se, beyond the minimum intelligence required to process data at all.

For the purposes of this argument it's irrelevant whether any sort of data processing can produce consciousness or only the type that leads to intelligence can, because evolution doesn't select for data processing, it selects for intelligence; but you can't have intelligence without data processing.
> do you agree that testimony of experience constitutes facts about consciousness?

Only if I first assume that intelligence implies consciousness, otherwise I'd have no way of knowing if the being giving the testimony about consciousness was itself conscious. And only if I am convinced that the being giving the testimony was as honest as he can be. And only if I feel confident we agree about the meaning of certain words, like "green" and "red" and "hot" and "cold" and, you guessed it, "consciousness".
On Thu, Apr 29, 2021 at 9:34 AM Terren Suydam <terren...@gmail.com> wrote:

> A theory would give you a way to predict what kinds of beings are capable of feeling pain

Finding a theory is not a problem; theories are a dime a dozen, consciousness theories doubly so. But how could you ever figure out if your consciousness theory was correct?
> we'd say "given theory X,

And if the given X which we take as being true is "Hogwarts exists" then we must logically conclude we could find Harry Potter at that magical school of witchcraft and wizardry.

> we know that if we create an AI with these characteristics,

If you're talking about observable characteristics then yes, but then you're just talking about behavior, not consciousness.
> a theory of consciousness that explains how qualia come to be within a system,

Explains? Just what sort of theory would satisfy you and make you say the problem of consciousness has been solved? If I said the chemical Rednosium Oxide produced qualia, would all your questions be answered, or would you be curious to know how this chemical managed to do that?
> you could make claims about their experience that go beyond observing behavior.

Claims are even easier to come by than theories are, but true claims not so much.
On Thu, Apr 29, 2021 at 9:48 AM Terren Suydam <terren...@gmail.com> wrote:

> I think it's possible there was consciousness before there was intelligence,

I very much doubt it, but of course nobody will ever be able to prove or disprove it, so the proposition fits in very nicely with all existing consciousness literature.
> you're implicitly working with a theory of consciousness. Then, you're demanding that I use your theory of consciousness when you insist that I answer questions about consciousness through the framing of evolution.

I proposed a question, "How is it possible that evolution managed to produce consciousness?", and I gave the only answer to that question I could think of. And three times I've asked you if you can think of another answer. And three times I received nothing back but evasion. I now ask the same question for a fourth time: given that evolution can't select for what it can't see, and natural selection can see intelligent behavior but it can't see consciousness, can you give me an explanation different from my own of how evolution managed to produce a conscious being such as yourself?
>>> do you agree that testimony of experience constitutes facts about consciousness?

>> Only if I first assume that intelligence implies consciousness, otherwise I'd have no way of knowing if the being giving the testimony about consciousness was itself conscious. And only if I am convinced that the being giving the testimony was as honest as he can be. And only if I feel confident we agree about the meaning of certain words, like "green" and "red" and "hot" and "cold" and, you guessed it, "consciousness".

> OK, fine, let's say intelligence implies consciousness,

If you grant me that then what are we arguing about?
> the account given was honest (as in, nobody witnessing the account would have a credible reason to doubt it),

The most successful lies are those in which the reason for the lying is not immediately obvious.
> and we can agree on all those terms.

Do we really agree on all those terms? How can we know words that refer to qualia mean the same thing to both of us? There is no objective test for it; if there were, then qualia wouldn't be subjective, they would be objective.
> Finding a theory is not a problem; theories are a dime a dozen, consciousness theories doubly so. But how could you ever figure out if your consciousness theory was correct?

The same way we figure out any theory is correct.
>> If you're talking about observable characteristics then yes, but then you're just talking about behavior, not consciousness.

> Sure, but we might be talking about the behavior of neurons, or their equivalent in an AI.
> All of our disagreements come down to whether there are facts about consciousness. You don't think there are,
On Wed, Apr 28, 2021, at 20:51, Brent Meeker wrote:
On 4/28/2021 9:54 AM, Telmo Menezes wrote:
On Tue, Apr 27, 2021, at 04:07, 'Brent Meeker' via Everything List wrote:
It certainly seems likely that any brain or AI that can perceive sensory events and form an inner narrative and memory of that is conscious in a sense even if they are unable to act. This is commonly the situation during a dream. One is aware of dreamt events but doesn't actually move in response to them.
And I think JKC is wrong when he says "few if any believe other people are conscious all the time, only during those times that corresponds to the times they behave intelligently." I generally assume people are conscious if their eyes are open and they respond to stimuli, even if they are doing something dumb.
But I agree with his general point that consciousness is easy and intelligence is hard.
JFK insists on this point a lot, but I really do not understand how it matters. Maybe so; maybe, if idealism or panpsychism are correct, consciousness is the easiest thing there is, from an engineering perspective. But what does the technical challenge have to do with searching for truth and understanding reality?
Reminds me of something I heard a meditation teacher say once. He said that for eastern people he has to say that "meditation is very hard, it takes a lifetime to master!". Generalizing a lot, eastern culture values the idea of mastering something that is very hard; it is thus a worthy goal. For westerners he says: "meditation is the easiest thing in the world". And thus it satisfies the (generalizing a lot) westerner taste for a magic pill that immediately solves all problems.
I think you are falling for similar traps.
Which is what?
The trap of equating the perceived difficulty of a task with its merit. Are we after the truth, or are we after bragging rights?
I think you are falling into the trap of searching for the ding an sich. Engineering is the measure of understanding.
That's JKC's point (JFK is dead),
My apologies to JKC for my dyslexia, it was not on purpose.
if your theory doesn't lead to engineering it's just philosophizing and that's easy.
Well, that is you philosophizing, isn't it? Saying that "engineering is the measure of understanding" is a philosophical position that you are not bothering to justify.
If you propose a hypothesis, we can follow this hypothesis to its logical conclusions. So let us say that brain activity generates consciousness. The brain is a finite thing, so its state can be fully described by some finite configuration. Furthermore, this configuration can be replicated in time and space. So a consequence of claiming that the brain generates consciousness is that a conscious state cannot be constrained by time or space. If the exact configuration we are experiencing now is replicated 1 million years from now or in another galaxy, then it leads to the same exact first person experience and the instantiations cannot be distinguished. If you want pure physicalism then you have to add something more to your hypothesis.
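The replication step in this argument can be made concrete with a toy. The "brain" below is an arbitrary deterministic update rule over a finite configuration (the rule and the numbers are invented purely for illustration); two instantiations of the same configuration, whenever and wherever they are run, produce byte-identical histories, so no fact internal to either state distinguishes the instantiations:

```python
import copy
import hashlib
import json

def tick(state):
    """One deterministic step of a toy finite 'brain'.

    Any fixed rule over a finite configuration serves the argument;
    this one is a linear congruential update, chosen arbitrarily."""
    x = (1103515245 * state["x"] + 12345) % (2**31)
    return {"x": x, "history": state["history"] + [x % 10]}

# A finite configuration, fully describable -- and therefore copyable.
snapshot = {"x": 42, "history": []}

# "Replicate" the configuration: here, or in another galaxy, or a
# million years from now. Run both copies forward.
here, elsewhere = copy.deepcopy(snapshot), copy.deepcopy(snapshot)
for _ in range(5):
    here, elsewhere = tick(here), tick(elsewhere)

def fingerprint(s):
    """Canonical hash of a state, to compare the two runs byte for byte."""
    return hashlib.sha256(json.dumps(s, sort_keys=True).encode()).hexdigest()

print(fingerprint(here) == fingerprint(elsewhere))  # True
```

Time and place of execution appear nowhere inside the state, which is the sense in which the resulting first-person history "cannot be constrained by time or space" on this hypothesis.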
>> I proposed a question, "How is it possible that evolution managed to produce consciousness?", and I gave the only answer to that question I could think of. And three times I've asked you if you can think of another answer. And three times I received nothing back but evasion. I now ask the same question for a fourth time: given that evolution can't select for what it can't see and natural selection can see intelligent behavior but it can't see consciousness, can you give me an explanation different from my own of how evolution managed to produce a conscious being such as yourself?

> No, I can't.
> If you're saying evolution didn't select for consciousness, it selected for intelligence, I agree with that. But so what?
>>> OK, fine, let's say intelligence implies consciousness,

>> If you grant me that then what are we arguing about?

> Over whether there are facts about consciousness, without having to link it to intelligence.
>> Do we really agree on all those terms? How can we know words that refer to qualia mean the same thing to both of us? There is no objective test for it, if there was then qualia wouldn't be subjective, it would be objective.

> We don't need infinite precision to uncover useful facts.
> If someone says "that hurts", or "that looks red", we know what they mean.
> We take it as an assumption, and we make it explicit, that when someone says "I see red" they are having the same kind of, or similar enough,
On Thu, Apr 29, 2021 at 1:57 AM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 4/28/2021 9:42 PM, Terren Suydam wrote:
On Wed, Apr 28, 2021 at 8:15 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 4/28/2021 4:40 PM, Terren Suydam wrote:
I agree with everything you said there, but all you're saying is that intersubjective reality must be consistent to make sense of other peoples' utterances. OK, but if it weren't, we wouldn't be here talking about anything. None of this would be possible.
Which is why it's a fool's errand to say we need to explain qualia. If we can make an AI that responds to the world the way we do, that's all there is to saying it has the same qualia.
I don't think either of those claims follows. We need to explain suffering if we hope to make sense of how to treat AIs. If it were only about redness I'd agree. But creating entities whose existence is akin to being in hell is immoral. And we should know if we're doing that.
John McCarthy wrote a paper in the '50s warning about the possibility of accidentally making a conscious AI and unknowingly treating it unethically. But I don't see the difference from any other qualia: we can only judge by behavior. In fact this whole thread started with JKC considering AI pain, which he defined in terms of behavior.
A theory would give you a way to predict what kinds of beings are capable of feeling pain. We wouldn't have to wait to observe their behavior, we'd say "given theory X, we know that if we create an AI with these characteristics, it will be the kind of entity that is capable of suffering".
To your second point, I think you're too quick to make an equivalence between an AI's responses and their subjective experience. You sound like John Clark - the only thing that matters is behavior.
Behavior includes reports. What else would you suggest we go on?
Again, in a theory of consciousness that explains how qualia come to be within a system, you could make claims about their experience that go beyond observing behavior. I know John Clark's head just exploded, but it's the point of having a theory of consciousness.
On Thu, Apr 29, 2021 at 9:34 AM Terren Suydam <terren...@gmail.com> wrote:
> A theory would give you a way to predict what kinds of beings are capable of feeling pain
Finding a theory is not a problem, theories are a dime a dozen, and consciousness theories doubly so. But how could you ever figure out if your consciousness theory was correct?
> we'd say "given theory X,
And if the given X which we take as being true is "Hogwarts exist" then we must logically conclude we could find Harry Potter at that magical school of witchcraft and wizardry.
> we know that if we create an AI with these characteristics,
If you're talking about observable characteristics then yes, but then you're just talking about behavior not consciousness.
> a theory of consciousness that explains how qualia come to be within a system,
Explains? Just what sort of theory would satisfy you and make you say the problem of consciousness has been solved? If I said the chemical Rednosium Oxide produced qualia would all your questions be answered or would you be curious to know how this chemical managed to do that?
> you could make claims about their experience that go beyond observing behavior.
Claims are even easier to come by than theories are, but true claims not so much.
--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to everything-li...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/everything-list/CAJPayv1kQhdVYf5O9eLv2%3D16k%3Dm%2BE8mMhGd6CfwL_fGaB-SyHw%40mail.gmail.com.
On Thu, Apr 29, 2021 at 12:24 PM Terren Suydam <terren...@gmail.com> wrote:

>> I proposed a question, "How is it possible that evolution managed to produce consciousness?" and I gave the only answer to that question I could think of. And 3 times I've asked you if you can think of another answer. And three times I received nothing back but evasion. I now ask the same question for a fourth time: given that evolution can't select for what it can't see, and natural selection can see intelligent behavior but it can't see consciousness, can you give me an explanation different from my own of how evolution managed to produce a conscious being such as yourself?

> No, I can't.

So I can explain something that you cannot. So which of our ideas is superior?
> If you're saying evolution didn't select for consciousness, it selected for intelligence, I agree with that. But so what?

So what?!! If evolution selects for intelligence, and you can't have intelligence without data processing, and consciousness is the way data feels when it is being processed, then it's no great mystery as to how evolution managed to produce consciousness by way of natural selection.
>>> OK, fine, let's say intelligence implies consciousness,

>> If you grant me that then what are we arguing about?

> Over whether there are facts about consciousness, without having to link it to intelligence.

If there is no link between consciousness and intelligence then there is absolutely positively no way Darwinian Evolution could have produced consciousness. But I don't think Darwin was wrong, I think you are.
>> Do we really agree on all those terms? How can we know words that refer to qualia mean the same thing to both of us? There is no objective test for it; if there were, qualia wouldn't be subjective, they would be objective.

> We don't need infinite precision to uncover useful facts.

I'm not talking about infinite precision; when it comes to qualia there is no assurance that we even approximately agree on meanings.
> If someone says "that hurts", or "that looks red", we know what they mean.

Do you? When they say "that looks red" the red qualia they refer to may be your green qualia, and your green qualia could be their red qualia, but both of you still use the English word "red" to describe the qualia color of blood and the English word "green" to describe the qualia color of a leaf.
> We take it as an assumption, and we make it explicit, that when someone says "I see red" they are having the same kind of, or similar enough, experience.

That is one hell of an assumption! If you're willing to do that, why not be done with it and just take it as an assumption that your consciousness theory, whatever it may be, is correct?
On 25 Apr 2021, at 22:29, Jason Resch <jason...@gmail.com> wrote:

It is quite easy, I think, to define a program that "remembers" (stores and later retrieves) information.

It is slightly harder, but not altogether difficult, to write a program that "learns" (alters its behavior based on prior inputs).

What, though, is required to write a program that "knows" (has awareness of or access to information or knowledge)?

Does, for instance, the following program "know" anything about the data it is processing?

if (pixel.red > 128) then {
    // knows pixel.red is greater than 128
} else {
    // knows pixel.red <= 128
}

If not, what else is required for knowledge?
Does the program behavior have to change based on the state of some information? For example:

if (pixel.red > 128) then {
    // knows pixel.red is greater than 128
    doX();
} else {
    // knows pixel.red <= 128
    doY();
}

Or does the program have to possess some memory and enter a different state based on the state of the information it processed?

if (pixel.red > 128) then {
    // knows pixel.red is greater than 128
    enterStateX();
} else {
    // knows pixel.red <= 128
    enterStateY();
}

Or is something else altogether needed to say the program knows?
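[Editor's note: a minimal runnable sketch of the three variants above, in Python. The Pixel class and the do_x/do_y/state names are hypothetical stand-ins, not part of the original message.]

```python
# Illustrative sketch of the three "knowing" variants described above.

class Pixel:
    def __init__(self, red):
        self.red = red  # 0..255 channel value

def variant_branch_only(pixel):
    # Variant 1: branches on the data, but no behavior or state depends on it.
    if pixel.red > 128:
        pass  # "knows" pixel.red > 128?
    else:
        pass  # "knows" pixel.red <= 128?

def do_x():
    return "X"

def do_y():
    return "Y"

def variant_behavior(pixel):
    # Variant 2: observable behavior changes with the information.
    return do_x() if pixel.red > 128 else do_y()

class VariantState:
    # Variant 3: the program remembers, entering a different internal state.
    def __init__(self):
        self.state = None

    def process(self, pixel):
        self.state = "X" if pixel.red > 128 else "Y"
        return self.state

machine = VariantState()
print(variant_behavior(Pixel(200)))  # X
print(machine.process(Pixel(64)))    # Y
```

All three variants branch on the same condition; they differ only in whether the branch leaves any behavioral or internal trace, which is exactly the distinction Jason's question turns on.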
If a program can be said to "know" something then can we also say it is conscious of that thing?
Jason
On Thu, Apr 29, 2021 at 3:10 PM Terren Suydam <terren...@gmail.com> wrote:

> All you've succeeded in doing is showing your preference for a particular theory

Correct. If idea X can explain something better than idea Y then I prefer idea X.
>> If there is no link between consciousness and intelligence then there is absolutely positively no way Darwinian Evolution could have produced consciousness. But I don't think Darwin was wrong, I think you are.

> I'm neither claiming that evolution produced consciousness or that Darwin was wrong.

You're going to have to clarify that remark, it can't possibly be as nuts as it seems to be.
>> I'm not talking about infinite precision, when it comes to qualia there is no assurance that we even approximately agree on meanings.

> If that were true, language would be useless.

Nonsense. If somebody says "pick up that red object" we both know what is expected of us, even though we may have very very different mental conceptions of the qualia "red", because we both agree that the dictionary says red is the color formed in the mind when light of a wavelength of 700 nanometers enters the eye, and that object is reflecting light that is doing precisely that to both of us.
>> When they say "that looks red" the red qualia they refer to may be your green qualia, and your green qualia could be their red qualia, but both of you still use the English word "red" to describe the qualia color of blood and the English word "green" to describe the qualia color of a leaf.

> I don't care about that. What matters is that you know you are seeing red and I know I am seeing red.

In other words you care more about behavior than consciousness, because the use of the word "red" is consistent between both of us, as is our behavior, regardless of what our subjective impression of "red" is. So I guess you're starting to agree with me.
>>> All you've succeeded in doing is showing your preference for a particular theory

>> Correct. If idea X can explain something better than idea Y then I prefer idea X.

> What intention did you have that caused you to change "... a particular theory of consciousness" to "a particular theory"?
> You are one of the least generous people I've ever argued with.
> You intentionally obfuscate, attack straw men, selectively clip responses, don't address challenging points, don't budge an inch and [blah blah]
> just generally take a disrespectful tone.
> I have arguments against your arguments,
> and anyone can see that.
> But it doesn't go anywhere because you often remove my rebuttals from your response
> these aren't personal insults,
> I'm backing out,
Hi Jason,

On 25 Apr 2021, at 22:29, Jason Resch <jason...@gmail.com> wrote:

> It is quite easy, I think, to define a program that "remembers" (stores and later retrieves) information. It is slightly harder, but not altogether difficult, to write a program that "learns" (alters its behavior based on prior inputs). What, though, is required to write a program that "knows" (has awareness of or access to information or knowledge)? Does, for instance, the following program "know" anything about the data it is processing?
>
> if (pixel.red > 128) then {
>     // knows pixel.red is greater than 128
> } else {
>     // knows pixel.red <= 128
> }
>
> If not, what else is required for knowledge?

Do you agree that knowledgeability obeys

knowledgeability(A) -> A
knowledgeability(A) -> knowledgeability(knowledgeability(A))
And also, to limit ourselves to rational knowledge:

knowledgeability(A -> B) -> (knowledgeability(A) -> knowledgeability(B))

From this, it can be proved that “knowledgeability” of any “rich” machine (one proving enough theorems of arithmetic) is not definable in the language of that machine, or in any language available to that machine.
So the best we can do is to define a notion of belief, which abandons the reflexion axiom: we abandon belief(A) -> A. That makes belief definable (in the language of the machine), and then we can apply the idea of Theaetetus, and define knowledge (or knowledgeability, when we add the transitivity []p -> [][]p) by true belief. The machine knows A when she believes A and A is true.
> Does the program behavior have to change based on the state of some information? Or does the program have to possess some memory and enter a different state based on the state of the information it processed? Or is something else altogether needed to say the program knows?

You need self-reference ability for the notion of belief, together with a notion of reality or truth, which the machine cannot define.
To get immediate knowledgeability you need to add consistency ([]p & <>t), to get ([]p & <>t & p), which prevents transitivity and gives the machine a feeling of immediacy.
> If a program can be said to "know" something then can we also say it is conscious of that thing?

1) That’s *not* the case for []p & p, unless you accept a notion of unconscious knowledge, like knowing that Perseverance and Ingenuity are on Mars, but not being currently thinking about it, so that you are not right now consciously aware of the fact---well, you are, but just because I have just reminded you of it :)
2) But that *is* the case for []p & <>t & p. If the machine knows something in that sense, then the machine can be said to be conscious of p. Then to be “simply” conscious becomes []t & <>t (& t).

Note that “p” always refers to a partially computable arithmetical (or combinatorial) proposition. That’s the way of translating “Digital Mechanism” into the language of the machine.

To sum up, to get a conscious machine, you need a computer (aka universal number/machine) with some notion of belief, and knowledge/consciousness arise from the actuation of truth, which the machine cannot define (by the theorem of Tarski and some variants by Montague, Thomason, and myself...).

That theory can be said to be a posteriori well tested, because it implies the quantum reality, at least the one described by the Schroedinger equation or Heisenberg matrix (or even better, the Feynman integral), WITHOUT any collapse postulate.
Bruno
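[Editor's note: for reference, Bruno's modal definitions above collected in one place, in standard notation with [] read as belief and <> as consistency; this summary is not from the original message.]

```latex
% Knowledgeability axioms (not definable in the machine's own language):
%   T: \Box p \to p
%   4: \Box p \to \Box\Box p
%   K: \Box(p \to q) \to (\Box p \to \Box q)
\begin{align*}
\text{belief:} \quad & \Box p \\
\text{knowledge (Theaetetus: true belief):} \quad & \Box p \land p \\
\text{immediate knowledge / consciousness of } p\text{:} \quad & \Box p \land \Diamond\top \land p \\
\text{``simply'' conscious:} \quad & \Box\top \land \Diamond\top \ (\land\ \top)
\end{align*}
```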
On Fri, Apr 30, 2021 at 12:56 PM Terren Suydam <terren...@gmail.com> wrote:
> I have arguments against your arguments,

They say persistence is a virtue, so I'll ask the same question for the fourth time: given that evolution can't select for what it can't see, and natural selection can see intelligent behavior but it can't see consciousness, can you give me an explanation of how evolution managed to produce a conscious being such as yourself, if consciousness is not the inevitable byproduct of intelligence?
> electronics are so much faster than neurons, it might be possible to implement intelligent behavior just by creating giant hash tables of experience and using them as look-ups for responses. I don't know that this is possible, but it's not obviously impossible and then it would hard to say whether this form of AI had qualia or not.
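[Editor's note: Terren's look-up-table idea can be made concrete in miniature. This is an illustration only, not a claim that it is feasible at scale; the table entries are hypothetical.]

```python
# A toy version of "intelligence as a giant hash table of experience":
# responses are pure retrieval, with no processing of meaning.

experience_table = {
    "hello": "hi there",
    "what is 2+2?": "4",
    "pick up that red object": "picking it up",
}

def respond(stimulus):
    # Look up a canned response; fall back when experience is missing.
    return experience_table.get(stimulus, "I don't understand")

print(respond("what is 2+2?"))          # 4
print(respond("describe your qualia"))  # I don't understand
```

The question in the message is precisely whether a system like this, scaled up until its behavior looked intelligent, would have qualia at all.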
On Fri, Apr 30, 2021 at 2:22 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
>> If somebody says "pick up that red object" we both know what is expected of us even though we may have very very different mental conceptions of the qualia "red" because we both agree that the dictionary says red is the color formed in the mind when light of a wavelength of 700 nanometers enters the eye, and that object is reflecting light that is doing precisely that to both of us.

> But if the qualia of experiencing red is nothing more than the neuronal structure and process that consistently associates 700nm signals from the retina with the same actions as everyone else, e.g. saying "red", stopping at the light, eating the fruit... then it seems to me it is perfectly justified to say people share the same qualia. That's the engineering stance. What the qualia really is, is a pseudo-problem.

I pretty much agree. You make a strong argument that you and I are experiencing the same qualia, but I can make an equally strong argument that they can't be the same qualia, because if they were then you and I would be the same person.
And that I think is a good indication that you're right, it is a pseudo-problem, meaning a question that will never have an answer or lead to anything productive.
> It [consciousness] could be the inevitable byproduct of the only path open to evolution. Evolution has to always build on what has already been evolved. So what was inevitable starting with ATP->ADP or RNA or DNA, might not be inevitable starting with silicon or gallium.
On Sun, Apr 25, 2021 at 4:29 PM Jason Resch <jason...@gmail.com> wrote:

> It is quite easy, I think, to define a program that "remembers" (stores and later retrieves) information.

I agree. And for an emotion like pain, write a program such that the closer the number in the X register comes to the integer P, the more computational resources will be devoted to changing that number; and if it ever actually equals P then the program should stop doing everything else and do nothing but try to change that number to something far enough away from P, until it's no longer an urgent matter and the program can again do things that have nothing to do with P.

Artificial Intelligence is hard, but Artificial Consciousness is easy.
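[Editor's note: the verbal description of the pain program can be sketched as code. This is a reconstruction, not John Clark's code; the values of P and the distance threshold are arbitrary.]

```python
# Sketch of the "pain register" program described above: the closer
# register X gets to the integer P, the more urgently the program works
# to push it away; at X == P everything else stops.

P = 100             # the "pain" value
SAFE_DISTANCE = 20  # beyond this, X is "no longer an urgent matter"

def pain_level(x):
    # 0.0 when far from P, rising to 1.0 at X == P; a proxy for how much
    # of the program's resources are devoted to changing X.
    distance = abs(x - P)
    if distance >= SAFE_DISTANCE:
        return 0.0
    return 1.0 - distance / SAFE_DISTANCE

def step(x):
    # While in pain, devote the cycle to moving X away from P;
    # otherwise the program is free to do unrelated work.
    if pain_level(x) > 0:
        return x + SAFE_DISTANCE if x >= P else x - SAFE_DISTANCE
    return x  # no pain: do things that have nothing to do with P

x = step(P)           # worst case: X equals P
print(pain_level(x))  # 0.0 -- urgency resolved
```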
On Mon, Apr 26, 2021 at 10:45 AM Terren Suydam <terren...@gmail.com> wrote:

> It's impossible to refute solipsism

True, but it's equally impossible to refute the idea that everything including rocks is conscious. And if both a theory and its exact opposite can neither be proven nor disproven, then neither speculation is of any value in trying to figure out how the world works.
> It's true that the only thing we know for sure is our own consciousness,

And I know that even I am not conscious all the time, and there is no reason for me to believe other people can do better.

> but there's nothing about what I said that makes it impossible for there to be a reality outside of ourselves populated by other people. It just requires belief.

And few if any believe other people are conscious all the time, only during those times that correspond to the times they behave intelligently.
>> Artificial Intelligence is hard but Artificial Consciousness Is easy.> This strikes me as totally wrong.
> We may be near a time where the frontiers of physics will be pursued by AI systems, and we human physicists will do little but sit with slack jaw, maybe get high and wait for the mighty AI oracle to make a pronouncement.
> Yet I question whether such a deep learning AI system has any cognitive awareness of a physical world or anything else.
On Monday, April 26, 2021 at 10:16:47 AM UTC-5 johnk...@gmail.com wrote:
On Mon, Apr 26, 2021 at 10:45 AM Terren Suydam <terren...@gmail.com> wrote:
> It's impossible to refute solipsism
True, but it's equally impossible to refute the idea that everything including rocks is conscious. And if both a theory and its exact opposite can neither be proven nor disproven then neither speculation is of any value in trying to figure out how the world works.
If everything is conscious, then how do we know? We have no unconscious objects to compare them with. The panpsychist argument becomes an ouroboros that consumes itself into a vacuous condition of either true or false. Nothing can be demonstrated from it, so it is a scientifically worthless conjecture.
The best definition of consciousness is that it defines those annoying episodes between sleep.
LC
A bit long, but this interview of the very lucid Hedda Mørch (pronounced "Mark") is very good (for consciousness "realists"):
via https://twitter.com/onemorebrown/status/1386970910230523906
On Tuesday, April 27, 2021 at 8:38:32 AM UTC-5 Terren Suydam wrote:
On Tue, Apr 27, 2021 at 7:22 AM John Clark <johnk...@gmail.com> wrote:
On Tue, Apr 27, 2021 at 1:08 AM Terren Suydam <terren...@gmail.com> wrote:
> consciousness is harder to work with than intelligence, because it's harder to make progress.
It's not hard to make progress in consciousness research, it's impossible.
So we should ignore experiments where you stimulate the brain and the subject reports experiencing some kind of qualia, in a repeatable way. Why doesn't that represent progress? Is it because you don't trust people's reports?
> Facts that might slay your theory are much harder to come by.
Such facts are not hard to come by, they're impossible to come by. So for a consciousness scientist being lazy works just as well as being industrious, and consciousness research couldn't be any easier: just face a wall, sit on your hands, and contemplate your navel.
There are fruitful lines of research happening. Research on patients undergoing meditation, and psychedelic experiences, while in an fMRI has led to some interesting facts. You seem to think progress can only mean being able to prove conclusively how consciousness works. Progress can mean deepening our understanding of the relationship between the brain and the mind.
Terren
On 30 Apr 2021, at 20:47, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 4/30/2021 4:19 AM, Bruno Marchal wrote:
If a program can be said to "know" something then can we also say it is conscious of that thing?
That's not even common parlance. Conscious thoughts are fleeting. Knowledge is in memory. I know how to ride a bicycle because I do it unconsciously. I don't think consciousness can be understood except as a surface or boundary of the subconscious and the unconscious (physics).
Brent
> 1) That’s *not* the case for []p & p, unless you accept a notion of unconscious knowledge, like knowing that Perseverance and Ingenuity are on Mars, but not being currently thinking about it, so that you are not right now consciously aware of the fact---well, you are, but just because I have just reminded you of it :)

> 2) But that *is* the case for []p & <>t & p. If the machine knows something in that sense, then the machine can be said to be conscious of p. Then to be “simply” conscious becomes []t & <>t (& t).

> Note that “p” always refers to a partially computable arithmetical (or combinatorial) proposition. That’s the way of translating “Digital Mechanism” into the language of the machine.

> To sum up, to get a conscious machine, you need a computer (aka universal number/machine) with some notion of belief, and knowledge/consciousness arise from the actuation of truth, which the machine cannot define (by the theorem of Tarski and some variants by Montague, Thomason, and myself...).

> That theory can be said to be a posteriori well tested, because it implies the quantum reality, at least the one described by the Schroedinger equation or Heisenberg matrix (or even better, the Feynman integral), WITHOUT any collapse postulate.
Bruno
On 30 Apr 2021, at 20:52, Jason Resch <jason...@gmail.com> wrote:

On Fri, Apr 30, 2021, 6:19 AM Bruno Marchal <mar...@ulb.ac.be> wrote:

> Do you agree that knowledgeability obeys
>
> knowledgeability(A) -> A
> knowledgeability(A) -> knowledgeability(knowledgeability(A))

Using the definition of knowledge as "true belief" I agree with this.
> And also, to limit ourselves to rational knowledge:
>
> knowledgeability(A -> B) -> (knowledgeability(A) -> knowledgeability(B))
>
> From this, it can be proved that “knowledgeability” of any “rich” machine (one proving enough theorems of arithmetic) is not definable in the language of that machine, or in any language available to that machine.

Is this because the definition of knowledge includes truth, and truth is not definable?
> So the best we can do is to define a notion of belief, which abandons the reflexion axiom belief(A) -> A. That makes belief definable (in the language of the machine), and then we can apply the idea of Theaetetus, and define knowledge (or knowledgeability, when we add the transitivity []p -> [][]p) by true belief. The machine knows A when she believes A and A is true.

So is it more appropriate to equate consciousness with belief, rather than with knowledge?
It might be a true fact that "Machine X believes Y" without Y being true. Is it simply the truth of "Machine X believes Y" that makes X conscious of Y?
>> Does the program behavior have to change based on the state of some information? For example:
>>
>> if (pixel.red > 128) then {
>>   // knows pixel.red is greater than 128
>>   doX();
>> } else {
>>   // knows pixel.red <= 128
>>   doY();
>> }
>>
>> Or does the program have to possess some memory and enter a different state based on the state of the information it processed?
>>
>> if (pixel.red > 128) then {
>>   // knows pixel.red is greater than 128
>>   enterStateX();
>> } else {
>>   // knows pixel.red <= 128
>>   enterStateY();
>> }
>>
>> Or is something else altogether needed to say the program knows?
>
> You need self-reference ability for the notion of belief, together with a notion of reality or truth, which the machine cannot define.

Can a machine believe "2+2=4" without having a reference to itself?
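[Editorial note: the two variants above can be made concrete in a short, hypothetical Python sketch. The names `Pixel`, `react`, and `StatefulKnower` are invented for illustration; this is only a minimal rendering of the distinction between a program that merely branches and one that enters a persistent internal state.]

```python
# Minimal sketch of the two program variants discussed above:
# (1) branch-and-act, where the "knowledge" is only transient, and
# (2) branch-into-state, where the "knowledge" persists as internal state.

class Pixel:
    def __init__(self, red):
        self.red = red

def react(pixel):
    """Variant 1: behavior changes with the input, but nothing persists."""
    if pixel.red > 128:
        return "doX"   # "knows" pixel.red > 128 only while executing
    else:
        return "doY"   # "knows" pixel.red <= 128 only while executing

class StatefulKnower:
    """Variant 2: the program enters a distinct state based on the input."""
    def __init__(self):
        self.state = None

    def observe(self, pixel):
        self.state = "X" if pixel.red > 128 else "Y"
        return self.state

knower = StatefulKnower()
print(react(Pixel(200)))          # doX
print(knower.observe(Pixel(50)))  # Y
print(knower.state)               # Y -- the distinction now persists
```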
What, programmatically, would you say is needed to program a machine that believes "2+2=4" or to implement self-reference?
Does a Turing machine evaluating "if (2+2 == 4) then" believe it?
Or does it require theorem proving software that reduces a statement to Peano axioms or similar?
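[Editorial note: one way to make "reducing a statement to Peano axioms" concrete is a toy checker that represents numbers as successor terms and computes 2+2 using only the two recursion equations for addition. This is an illustration invented for this note, not Bruno's formalism.]

```python
# Toy Peano-style verification: numbers as successor terms,
# addition defined by the recursion  a + 0 = a,  a + S(b) = S(a + b).

def num(n):
    """Build the successor-term representation of n, e.g. 2 -> ('S', ('S', '0'))."""
    return '0' if n == 0 else ('S', num(n - 1))

def add(a, b):
    """Rewrite a + b using only the two Peano addition equations."""
    if b == '0':
        return a                 # a + 0 = a
    return ('S', add(a, b[1]))   # a + S(b') = S(a + b')

# "Believing" 2 + 2 = 4 is then a finite symbolic computation:
print(add(num(2), num(2)) == num(4))  # True
```

In this toy sense, the machine's "belief" in 2+2=4 is just the terminating rewrite; whether that counts as belief, rather than mere evaluation, is exactly the question under discussion.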
> To get immediate knowledgeability you need to add consistency ([]p & <>t), to get ([]p & <>t & p), which prevents transitivity and gives the machine a feeling of immediacy.

By consistency here do you mean the machine must never come to believe something false, or that the machine itself must behave in a manner consistent with its design/definition?
I still have a conceptual difficulty trying to marry these mathematical notions of truth, provability, and consistency with a program/Machine that manifests them.
>> If a program can be said to "know" something then can we also say it is conscious of that thing?
>
> 1) That’s *not* the case for []p & p, unless you accept a notion of unconscious knowledge, like knowing that Perseverance and Ingenuity are on Mars, but not currently thinking about it, so that you are not right now consciously aware of the fact---well, you are, but just because I have reminded you of it :)

In a way, I might view these long-term memories as environmental signals that encroach upon one's mind state, a state which is otherwise not immediately aware of all the contents of this memory (like opening a sealed box to discover its contents).
> 2) But that *is* the case for []p & <>t & p. If the machine knows something in that sense, then the machine can be said to be conscious of p. Then to be “simply” conscious becomes []t & <>t (& t). Note that “p” always refers to a partially computable arithmetical (or combinatorial) proposition. That’s the way of translating “Digital Mechanism” into the language of the machine.
>
> To sum up, to get a conscious machine, you need a computer (aka universal number/machine) with some notion of belief, and knowledge/consciousness arise from the actuation of truth, which the machine cannot define (by the theorem of Tarski and some variants by Montague, Thomason, and myself...). That theory can be said to be a posteriori well tested, because it implied the quantum reality, at least the one described by the Schroedinger equation or Heisenberg matrix (or even better, the Feynman integral), WITHOUT any collapse postulate.

Can it be said that Deep Blue is conscious of the state of the chess board it evaluates?
Is a Tesla car conscious of whether the traffic signal is showing red, yellow, or green?
Or is a more particular class of software necessary for belief/consciousness? This is what I'm struggling to understand. I greatly appreciate all the answers you have provided.
To view this discussion on the web visit https://groups.google.com/d/msgid/everything-list/020e743e-6617-44ad-bf90-0ec46e956d93n%40googlegroups.com.
On 30 Apr 2021, at 20:47, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 4/30/2021 4:19 AM, Bruno Marchal wrote:
>> If a program can be said to "know" something then can we also say it is conscious of that thing?
>
> That's not even common parlance. Conscious thoughts are fleeting. Knowledge is in memory. I know how to ride a bicycle because I do it unconsciously. I don't think consciousness can be understood except as a surface or boundary of the subconscious and the unconscious (physics).
If you use physics, you have to explain what it is, and how it selects the computations in arithmetic, or you need to abandon mechanism.
With mechanism, to claim that a machine's consciousness is not attributable to some universal machinery, despite the fact that it executes a computation in the only mathematical sense discovered by Church and Turing (and some others), seems a bit magical.
Note that you don’t quote me above. You should have quoted my answer. The beauty of Mechanism is that the oldest definition of (rational) knowledge (Theaetetus' true (justified) opinion) already explains why no machine can define its own knowledge, why consciousness seems necessarily mysterious, and why we get that persistent feeling that we belong to a physical reality, when in fact we are just infinitely many numbers involved in complex relations.
On 30 Apr 2021, at 20:52, Jason Resch <jason...@gmail.com> wrote:

> It might be a true fact that "Machine X believes Y" without Y being true. Is it simply the truth of "Machine X believes Y" that makes X conscious of Y?

It is more the belief that the machine has a belief which remains true, even if the initial belief is false.
> Can a machine believe "2+2=4" without having a reference to itself?

Not really, unless you accept the idea of unconscious belief, which makes sense in some psychological theories. My method consists in defining “the machine M believes P” by “the machine M asserts P”, and then I limit myself to machines which are correct by definition. This is of no use in psychology, but it is enough to derive physics.
> What, programmatically, would you say is needed to program a machine that believes "2+2=4" or to implement self-reference?

That it has enough induction axioms, like PA and ZF, but unlike RA (R and Q) or CL (combinatory logic without induction). The universal machines without induction axioms are conscious, but are very limited in introspective power. They don’t have the rich theology of the machines having induction. I recall that the induction axioms are all axioms having the shape [P(0) & (for all x, P(x) -> P(x+1))] -> (for all x, P(x)). It is an ability to build universals.
> Does a Turing machine evaluating "if (2+2 == 4) then" believe it?

If the machine can prove Beweisbar(x) -> Beweisbar(Beweisbar(x)), she can be said to be self-conscious. PA can; RA cannot.

> Or does it require theorem proving software that reduces a statement to Peano axioms or similar?

That is required for rational belief, but not for the experienceable one.
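[Editorial note: the formula Bruno cites is the third of the standard Hilbert–Bernays–Löb derivability conditions for a provability predicate Bew of a theory T. For context, all three conditions are:]

```latex
\begin{align*}
\text{D1:}&\quad \text{if } T \vdash \varphi \text{ then } T \vdash \mathrm{Bew}(\ulcorner\varphi\urcorner)\\
\text{D2:}&\quad T \vdash \mathrm{Bew}(\ulcorner\varphi \to \psi\urcorner) \to \big(\mathrm{Bew}(\ulcorner\varphi\urcorner) \to \mathrm{Bew}(\ulcorner\psi\urcorner)\big)\\
\text{D3:}&\quad T \vdash \mathrm{Bew}(\ulcorner\varphi\urcorner) \to \mathrm{Bew}(\ulcorner\mathrm{Bew}(\ulcorner\varphi\urcorner)\urcorner)
\end{align*}
```

PA satisfies all three; Robinson arithmetic, lacking induction, fails to prove D3, which is the sense in which "PA can; RA cannot."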
>> To get immediate knowledgeability you need to add consistency ([]p & <>t), to get ([]p & <>t & p), which prevents transitivity and gives the machine a feeling of immediacy.
>
> By consistency here do you mean the machine must never come to believe something false, or that the machine itself must behave in a manner consistent with its design/definition?

That the machine will not believe something false. I agree this works only because I can limit myself to correct machines. The psychology and theology of the lying machine remain to be done, but they have no use in deriving physics from arithmetic.

> I still have a conceptual difficulty trying to marry these mathematical notions of truth, provability, and consistency with a program/machine that manifests them.

It *is* subtle; that is why we need to use the mathematics of self-reference. It is highly counter-intuitive. All errors in philosophy/theology come from confusing one self-referential mode with another, I would say.

>>> If a program can be said to "know" something then can we also say it is conscious of that thing?
>>
>> 1) That’s *not* the case for []p & p, unless you accept a notion of unconscious knowledge, like knowing that Perseverance and Ingenuity are on Mars, but not currently thinking about it, so that you are not right now consciously aware of the fact---well, you are, but just because I have reminded you of it :)
>
> In a way, I might view these long-term memories as environmental signals that encroach upon one's mind state, a state which is otherwise not immediately aware of all the contents of this memory (like opening a sealed box to discover its contents).

OK.

>> 2) But that *is* the case for []p & <>t & p. If the machine knows something in that sense, then the machine can be said to be conscious of p. Then to be “simply” conscious becomes []t & <>t (& t). Note that “p” always refers to a partially computable arithmetical (or combinatorial) proposition.
>> That’s the way of translating “Digital Mechanism” into the language of the machine.
>>
>> To sum up, to get a conscious machine, you need a computer (aka universal number/machine) with some notion of belief, and knowledge/consciousness arise from the actuation of truth, which the machine cannot define (by the theorem of Tarski and some variants by Montague, Thomason, and myself...).
>> That theory can be said to be a posteriori well tested, because it implied the quantum reality, at least the one described by the Schroedinger equation or Heisenberg matrix (or even better, the Feynman integral), WITHOUT any collapse postulate.
>
> Can it be said that Deep Blue is conscious of the state of the chess board it evaluates?

Deep Blue? I guess not. But for AlphaGo, or some of its descendants, it looks like they have circular neural pathways allowing the machine to learn its own behaviour, and to attach some identity in this way. So deep learning might converge on a conscious machine.
But that is not verified, and they are still just playing “simple games”. We don’t ask them to build a theory of themselves. It even looks like we try to avoid this, and that is normal. Like nature with insects, we don’t want a terrible child, and we try to make “mature machines” right from the start.
> Is a Tesla car conscious of whether the traffic signal is showing red, yellow, or green?

I doubt this, but I have not studied them. I doubt it has full self-reference ability, like PA and ZF, or any human baby.
> Or is a more particular class of software necessary for belief/consciousness? This is what I'm struggling to understand. I greatly appreciate all the answers you have provided.

All you need is enough induction power. RA + induction on recursive formulas is not enough, unless you add the exponentiation axiom. But RA + induction on recursively enumerable formulas is enough.
On Mon, 26 Apr 2021, at 17:16, John Clark wrote:

> On Mon, Apr 26, 2021 at 10:45 AM Terren Suydam <terren...@gmail.com> wrote:
>
>> It's impossible to refute solipsism
>
> True, but it's equally impossible to refute the idea that everything, including rocks, is conscious. And if both a theory and its exact opposite can neither be proven nor disproven, then neither speculation is of any value in trying to figure out how the world works.
When I was a little kid I would ask adults if rocks were conscious. They tried to train me to stop asking such questions, because they were worried about what other people would think. To this day, I never stopped asking these questions. I see three options here:

(1) They were correct to worry and I have a mental issue.
(2) I am really dumb and don't see something obvious.
(3) Beliefs surrounding consciousness are socially normative, and asking questions outside of such boundaries is taboo.