>In your opinion who has offered the best theory of consciousness to date, or who do you agree with most?
--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to everything-li...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/everything-list/CA%2BBCJUik%3Du724L6JxAKi0gq-rPfV%3DXwGd7nS2kmZ_znLd7MT1g%40mail.gmail.com.
> Deepmind has succeeded in building general-purpose learning algorithms. Intelligence is mostly a solved problem.
> But questions of consciousness are no less important nor less pressing:
> Is this uploaded brain conscious or a zombie?
> Can (bacteria, protists, plants, jellyfish, worms, clams, insects, spiders, crabs, snakes, mice, apes, humans) suffer?
> Are these robot slaves conscious?
> Do they have likes or dislikes that we repress?
> When does a developing human become conscious?
> Is that person in a coma or locked-in?
> Does this artificial retina/visual cortex provide the same visual experiences?
> Does this particular anesthetic block consciousness or merely memory formation?
> These questions remain unsettled.
> If none of these questions interest you, perhaps this one will:
> Is consciousness inherent to any intelligent process?
On Fri, Jun 18, 2021 at 8:17 PM Jason Resch <jason...@gmail.com> wrote:

> Deepmind has succeeded in building general-purpose learning algorithms. Intelligence is mostly a solved problem,

I'm enormously impressed with Deepmind and I'm an optimist regarding AI, but I'm not quite that optimistic. If intelligence were a solved problem the world would change beyond all recognition and we'd be smack in the middle of the Singularity, and we're obviously not, because future human events are still, at least to some degree, predictable.

> But questions of consciousness are no less important nor less pressing: Is this uploaded brain conscious or a zombie?

I don't know, are you conscious or a zombie?

> Can (bacteria, protists, plants, jellyfish, worms, clams, insects, spiders, crabs, snakes, mice, apes, humans) suffer?

I don't know. I know I can suffer, can you?

> Are these robot slaves conscious?

Are you conscious?

> Do they have likes or dislikes that we repress?

What's with this "we" business?

> When does a developing human become conscious?

Other than in my case, does any developing human EVER become conscious?

> Is that person in a coma or locked-in?

I don't know, are you locked in?

> Does this artificial retina/visual cortex provide the same visual experiences?

The same as what?

> Does this particular anesthetic block consciousness or merely memory formation?

Did the person have consciousness even before the administration of the anesthetic?

> These questions remain unsettled

Yes, and these questions will remain unsettled till the end of time, so even if time is infinite it could be better spent pondering other questions that actually have answers.

> If none of these questions interest you, perhaps this one will: Is consciousness inherent to any intelligent process?

I have no proof and never will have any, but I must assume that the above is true because I simply could not function if I really believed that solipsism was correct and I was the only conscious being in the universe. Therefore I take it as an axiom that intelligent behavior implies consciousness.
Information is the key. Conscious agents are defined by precisely that information that specifies the content of their consciousness. This means that a conscious agent can never be precisely located in some physical object, because the information that describes the conscious experience will always be less detailed than the information present in the exact physical description of an object such as a brain. There is always going to be a very large self-localization ambiguity, due to the large number of different possible brain states that would generate exactly the same conscious experience. So, given whatever conscious experience the agent has, the agent could be in a very large number of physically distinct states.

The simpler the brain and the algorithm implemented by the brain, the larger this self-localization ambiguity becomes, because smaller algorithms contain less detailed information.
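The point that a coarse description is compatible with many detailed states can be illustrated with a toy model (a hypothetical sketch of my own, not drawn from any specific theory in this thread):

```python
# Toy model of self-localization ambiguity: many distinct detailed
# "microstates" coarse-grain to the same "experience", so the
# experience alone cannot pin down which microstate realizes it.
from collections import Counter
from itertools import product

N_BITS = 10  # size of the detailed physical description

def experience(microstate):
    # The experience retains far less information than the full
    # description; here, just the number of 1-bits.
    return sum(microstate)

counts = Counter(experience(m) for m in product((0, 1), repeat=N_BITS))
print(counts[5])    # 252 distinct microstates share this one coarse state
print(2 ** N_BITS)  # out of 1024 microstates in total
```

The smaller the retained description, the larger the preimage of each coarse state, mirroring the claim that simpler algorithms carry a larger ambiguity.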
Our conscious experiences localize us very precisely on an Earth-like planet in a solar system that is very similar to the one we think we live in. But the fly walking on the wall of the room I'm in right now may have some conscious experience that is exactly identical to that of another fly walking on the wall of another house in another country 600 years ago, or on some rock in a cave 35 million years ago. The conscious experience of the fly I see on the wall is therefore not located in the particular fly I'm observing.
This is, in my opinion, the key thing you get from identifying consciousness with information: it makes the multiverse an essential ingredient of consciousness. This resolves paradoxes you get in thought experiments where you consider simulating a brain in a virtual world and then argue that, since the simulation is deterministic, you could replace the actual computer doing the computations by a device playing a recording of the physical brain states. This argument breaks down if you take into account the self-localization ambiguity and consider that this multiverse aspect is an essential part of consciousness, due to the counterfactuals necessary to define the algorithm being realized, which is impossible in a deterministic single-world setting.
>> I'm enormously impressed with Deepmind and I'm an optimist regarding AI, but I'm not quite that optimistic.
> Are you familiar with their Agent 57? -- a single algorithm that mastered all 57 Atari games at a superhuman level, with no outside direction, no specification of the rules, and whose only input was the "TV screen" of the game.
> Also, because of chaos, predicting the future to any degree of accuracy requires exponentially more information about the system for each finite amount of additional time to simulate, and this does not even factor in quantum uncertainty,
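The exponential cost of prediction under chaos can be seen in a minimal sketch. The logistic map here is my illustrative stand-in, not a system anyone in the thread names:

```python
# Two trajectories of the chaotic logistic map (r = 4) that start one
# part in a billion apart: the gap roughly doubles per step, so each
# extra step of accurate prediction demands exponentially finer
# knowledge of the initial condition.
def step(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.4, 0.4 + 1e-9
n = 0
while abs(a - b) < 0.1:
    a, b = step(a), step(b)
    n += 1
print(n)  # around 30 steps for a 1e-9 error to grow to 0.1
```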
> Being unable to predict the future isn't a good definition of the singularity, because we already can't.
> We are getting very close to that point.
> There may be valid logical arguments that disprove the consistency of zombies. For example, can something "know without knowing?" It seems not.
> So how does a zombie "know" where to place its hand to catch a ball, if it doesn't "know" what it sees?
> For example, we could rule out many theories and narrow down on those that accept "organizational invariance" as Chalmers defines it. This is the principle that if one entity is conscious, and another entity is organizationally and functionally equivalent, preserving all the parts and relationships among its parts, then that second entity must be equivalently conscious to the first.
>> I know I can suffer, can you?
> I can tell you that I can.
> You could verify via functional brain scans that I wasn't preprogrammed like an Eliza bot to say I can. You could trace the neural firings in my brain to uncover the origin of my belief that I can suffer, and I could do the same for you.
> Could a zombie write a book like Chalmers's "The Conscious Mind"?
> Some have proposed writing philosophical texts on the philosophy of mind as a kind of super-Turing test for establishing consciousness.
> Wouldn't you prefer the anesthetic that knocks you out vs. the one that only blocks memory formation? Wouldn't a theory of consciousness be valuable here to establish which is which?
> You appear to operate according to a "mysterian" view of consciousness, which is that we cannot ever know.
> You could have been a mysterian about how life reproduces itself or why the stars shine, until a few hundred years ago, but you would have been proven wrong. Why do you think these questions below are intractable?
>> I have no proof and never will have any, however I must assume that the above is true because I simply could not function if I really believed that solipsism was correct and I was the only conscious being in the universe. Therefore I take it as an axiom that intelligent behavior implies consciousness.
> This itself is a theory of consciousness.
> You must have some reason to believe it, even if you cannot yet prove it.
On Sat, Jun 19, 2021, 6:17 AM smitra <smi...@zonnet.nl> wrote:
> Information is the key. Conscious agents are defined by precisely that information that specifies the content of their consciousness.
While I think this is true, I don't know of a consciousness theory that is explicit in terms of how information informs a system to create a conscious system. Bits sitting on a still hard drive platter are not associated with consciousness, are they? Facts sitting idly in one's long term memory are not the content of anyone's consciousness, are they?
For information to carry meaning, I think requires some system to be informed by that information.
> What I think is missing in JKC's idea that intelligence is interesting and understandable but consciousness isn't, is that he leaves out values. Intelligence is defined in terms of achieving goals. [...] the part we commonly call 'wisdom' when it works out) is how conflicting values are resolved.
On Sat, Jun 19, 2021 at 11:36 AM Jason Resch <jason...@gmail.com> wrote:

>> I'm enormously impressed with Deepmind and I'm an optimist regarding AI, but I'm not quite that optimistic.
> Are you familiar with their Agent 57? -- a single algorithm that mastered all 57 Atari games at a superhuman level, with no outside direction, no specification of the rules, and whose only input was the "TV screen" of the game.

As I've said, that is very impressive, but even more impressive would be winning a Nobel prize, or even just being able to diagnose that the problem with your old car is a broken fan belt and to remove the bad belt and install a good one, but we're not quite there yet.

> Also, because of chaos, predicting the future to any degree of accuracy requires exponentially more information about the system for each finite amount of additional time to simulate, and this does not even factor in quantum uncertainty,

And yet many times humans can make predictions that turn out to be better than random guessing, and a computer should be able to do at least as well, and I'm certain they will eventually.

> Being unable to predict the future isn't a good definition of the singularity, because we already can't.

Not true, often we can make very good predictions, but that will be impossible during the singularity.

> We are getting very close to that point.

Maybe, but even if the singularity won't happen for 1000 years, 999 years from now it will still seem like a long way off, because more progress will be made in that last year than in the previous 999 combined. It's in the nature of exponential growth, and that's why predictions are virtually impossible during that time: the tiniest uncertainty in initial conditions gets magnified into a huge difference in final outcome.

> There may be valid logical arguments that disprove the consistency of zombies. For example, can something "know without knowing?" It seems not.

Even if that's true I don't see how that would help me figure out if you're a zombie or not.

> So how does a zombie "know" where to place its hand to catch a ball, if it doesn't "know" what it sees?

If catching a ball is your criterion for consciousness then computers are already conscious, and you don't even need a supercomputer; you can make one in your own home for a few hundred dollars and some spare parts. Well, maybe so; I always maintained that consciousness is easy but intelligence is hard.

> For example, we could rule out many theories and narrow down on those that accept "organizational invariance" as Chalmers defines it. This is the principle that if one entity is conscious, and another entity is organizationally and functionally equivalent, preserving all the parts and relationships among its parts, then that second entity must be equivalently conscious to the first.

Personally I think that principle sounds pretty reasonable, but I can't prove it's true and never will be able to.
--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to everything-li...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/everything-list/CAJPayv18n8RKZ7QuYgQfWK71O9QVXYjrVnYa-3TFHuTynqno5A%40mail.gmail.com.
Of my own free will, I consciously decide to go to a restaurant.
Why?
Because I want to.
Why ?
Because I want to eat.
Why?
Because I'm hungry?
Why ?
Because lack of food triggered nerve impulses in my stomach, my brain interpreted these signals as pain, and I can only stand so much before I try to stop it.
Why?
Because I don't like pain.
Why?
Because that's the way my brain is constructed.
Why?
Because my body and the hardware of my brain were made from the information in my genetic code (let's see: 6 billion base pairs at 2 bits per base pair and 8 bits per byte comes out to about 1.5 gig), the programming of my brain came from the environment, add a little quantum randomness perhaps, and of my own free will I consciously decide to go to a restaurant.
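The parenthetical back-of-the-envelope figure checks out:

```python
# Reproducing the arithmetic: ~6 billion base pairs, 2 bits per
# base pair (A, C, G, or T), 8 bits per byte.
base_pairs = 6_000_000_000
bits = base_pairs * 2
gigabytes = bits / 8 / 1e9
print(gigabytes)  # 1.5, i.e. "about 1.5 gig"
```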
I know Darwinian Evolution produced me and I know for a fact that I am conscious, but Natural Selection can't see consciousness any better than we can directly see consciousness in other people,
John K Clark See what's on my new list at Extropolis
> For example, we could rule out many theories and narrow down on those that accept "organizational invariance" as Chalmers defines it. This is the principle that if one entity is conscious, and another entity is organizationally and functionally equivalent, preserving all the parts and relationships among its parts, then that second entity must be equivalently conscious to the first.
Personally I think that principle sounds pretty reasonable, but I can't prove it's true and never will be able to.
Chalmers presents a proof of this in the form of a reductio ad absurdum.
> This depends on how we define consciousness. If it means imagining and using simulations in which you represent yourself in order to plan your actions then maybe natural selection can "see" it. People who can't or don't plan by imagining themselves in various prospective scenarios and who don't have a theory of mind regarding other people are probably less successful at reproducing.
> Certain values are built in by evolution, mostly values related to reproducing.
On Sat, Jun 19, 2021 at 7:17 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
> This depends on how we define consciousness. If it means imagining and using simulations in which you represent yourself in order to plan your actions then maybe natural selection can "see" it. People who can't or don't plan by imagining themselves in various prospective scenarios and who don't have a theory of mind regarding other people are probably less successful at reproducing.
If so then consciousness is the inevitable byproduct of intelligent behavior.
>> If so then consciousness is the inevitable byproduct of intelligent behavior.
> Yes, I agree with that. But I don't think either intelligence or consciousness are all-or-nothing attributes. I think consciousness occurs at different levels, which correspond to its different uses as a tool of intelligence.
I agree with Saibal on this and welcome his great explanation. And to give credit where credit is due, let me invoke Donald Hoffman as the chief proponent of conscious agents, or at least the best known.
>> I know I can suffer, can you?
> I can tell you that I can.

So now I know you could generate the ASCII sequence "I can tell you that I can", but that doesn't answer my question: can you suffer? I don't even know if you and I mean the same thing by the word "suffer".

> You could verify via functional brain scans that I wasn't preprogrammed like an Eliza bot to say I can. You could trace the neural firings in my brain to uncover the origin of my belief that I can suffer, and I could do the same for you.

No I cannot. Theoretically I could trace the neural firings in your brain and figure out how they stimulated the muscles in your hand to type out "I can tell you that I can", but that's all I can do. I can't see suffering or unhappiness on an MRI scan, although I may be able to trace the nerve impulses that stimulate your tear glands to become more active.

> Could a zombie write a book like Chalmers's "The Conscious Mind"?

I don't think so, because it takes intelligence to write a book and my axiom is that consciousness is the inevitable byproduct of intelligence. I can give reasons why I think the axiom is reasonable and probably true, but it falls short of a proof; that's why it's an axiom.

> Some have proposed writing philosophical texts on the philosophy of mind as a kind of super-Turing test for establishing consciousness.

I think you could do much better than that, because it only takes a minimal amount of intelligence to dream up a new consciousness theory; they're a dime a dozen, and any one of them is as good, or as bad, as another. Good intelligence theories on the other hand are hard as hell to come up with, but if you do find one you're likely to become the world's first trillionaire.

> Wouldn't you prefer the anesthetic that knocks you out vs. the one that only blocks memory formation? Wouldn't a theory of consciousness be valuable here to establish which is which?

Such a theory would be utterly useless because there would be no way to tell if it was correct. If one consciousness theory says you were conscious and a rival theory says you were not, there is no way to tell which one was right.

> You appear to operate according to a "mysterian" view of consciousness, which is that we cannot ever know.

There is no mystery, I just operate in the certainty that there are only 2 possibilities: a chain of "why" questions either goes on for infinity or the chain terminates in a brute fact. In this case I think termination is more likely, so I think it's a brute fact that consciousness is the way data feels when it is being processed.
> You could have been a mysterian about how life reproduces itself or why the stars shine, until a few hundred years ago, but you would have been proven wrong. Why do you think these questions below are intractable?

Because there are objective experiments you can perform and things you can observe that will give you information on how organisms reproduce themselves and how stars shine, but there is nothing comparable with regard to consciousness; there is no way to bridge the objective/subjective divide without making use of unproven and unprovable assumptions or axioms. That's why the field of consciousness research has not progressed one nanometer in the last century, or even the last millennium.
>> I have no proof and never will have any, however I must assume that the above is true because I simply could not function if I really believed that solipsism was correct and I was the only conscious being in the universe. Therefore I take it as an axiom that intelligent behavior implies consciousness.
> This itself is a theory of consciousness.

Yep, and it's just as good, and just as bad, as every other theory of consciousness.

> You must have some reason to believe it, even if you cannot yet prove it.

I do. I know Darwinian Evolution produced me and I know for a fact that I am conscious, but Natural Selection can't see consciousness any better than we can directly see consciousness in other people; Evolution can only see intelligent behavior, and it can't select for something it can't see. And yet Evolution managed to produce consciousness at least once and probably many billions of times. I therefore conclude that either Darwin was wrong or consciousness is an inevitable byproduct of intelligence. I don't think Darwin was wrong.
> Anything we can identify as having universal utility or describe as a universal goal we can use to predict the long term direction of technology, even if humans are no longer the drivers of it.
> Even a paperclip maximizer will have the meta goal of increasing its knowledge, during which time it may learn to escape its programming, just as the human brain may transcend its biological programming when it chooses to upload into a computer and ditch its genes.
> If I demonstrate knowledge to you, by responding to my environment, or by telling you about my thoughts, etc., could I do any of those things without knowing the state of my environment or my mind?
> Stathis mentions Chalmers's fading/dancing qualia as a reductio ad absurdum. Are you familiar with his argument? If so, do you think it succeeds?
> I would call your hypothesis that "intelligence implies consciousness" a theory that could be proved or disproved,
> AIXI is a good theory of universal and perfect intelligence. It's just not practical because it takes exponential time to compute. The tricks lie in finding shortcuts that give approximate results to AIXI but can be computed in reasonable time. (The inventor of AIXI now works at DeepMind.) Neural networks are known to be universal in terms of being able to learn any mapping function. There are probably discoveries to be made in terms of improving learning efficiency, but we already have systems that learn to play chess, poker, and go better than any human in less than a week, so maybe the only thing missing is massive computational resources. Researchers seem to have demonstrated this in their leap from GPT-2 to GPT-3. GPT-3 can write text that is nearly indistinguishable from text written by humans. It's even learned to write code and do math, despite not being trained to do so.
>> If one consciousness theory says you were conscious and a rival theory says you were not there is no way to tell which one was right.
> That's why we make theories, so we can test them.
On 18 Jun 2021, at 20:46, Jason Resch <jason...@gmail.com> wrote:

> In your opinion who has offered the best theory of consciousness to date, or who do you agree with most? Would you say you agree with them wholeheartedly or do you find points of disagreement?
> I am seeing several related thoughts commonly expressed, but not sure which one or which combination is right. For example:
> Hofstadter/Marchal: self-reference is key
> Tononi/Tegmark: information is key
> Dennett/Chalmers: function is key
> To me all seem potentially valid, and perhaps all three are needed in some combination. I'm curious to hear what other viewpoints exist or if there are other candidates for the "secret sauce" behind consciousness I might have missed.

Jason
On 19 Jun 2021, at 02:18, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:

I'm most with Dennett. I see consciousness as having several different levels, which are also different levels of self-reference. At the lowest level even bacteria recognize (in the functional/operational sense) a distinction between "me" and "everything else". A little above that, some that are motile also sense chemical gradients and can move toward food, so they distinguish "better else" from "worse else". At a higher level, animals and plants with sensors know more about their surroundings. Animals know a certain amount of geometry and are aware of their place in the world, how close or far things are. Some animals, mostly those with eyes, employ foresight and planning in which they foresee outcomes for themselves. They can think of themselves in relation to other animals. More advanced social animals are aware of their social status. Humans, perhaps through the medium of language, have a theory of mind, i.e. they can think about what other people think and attribute agency to them (and to other things) as part of their planning. The conscious part of all this awareness is essentially that which is processed as language and image; ultimately only a small part.

Brent
On 19 Jun 2021, at 16:02, John Clark <johnk...@gmail.com> wrote:

Suppose there is an AI that behaves more intelligently than the most intelligent human who ever lived. However, when the machine is opened up to see how this intelligence is actually achieved, one consciousness theory doesn't like what it sees and concludes that despite its great intelligence it is not conscious, while a rival consciousness theory does like what it sees and concludes it is conscious. Both theories can't be right, although both could be wrong, so how on earth could you ever determine which, if any, of the two consciousness theories is correct?
John K Clark See what's on my new list at Extropolis
>> Suppose there is an AI that behaves more intelligently than the most intelligent human who ever lived, however when the machine is opened up to see how this intelligence is actually achieved one consciousness theory doesn't like what it sees and concludes that despite its great intelligence it is not conscious, but a rival consciousness theory does like what it sees and concludes it is conscious. Both theories can't be right although both could be wrong, so how on earth could you ever determine which, if any, of the 2 consciousness theories are correct?

> A consciousness theory has no value if it does not make testable predictions.
> But that is the case for the theory of consciousness brought by the universal machine/number in arithmetic. They give the logic of the observable, and indeed until now that fits with quantum logic.

The mechanist brain-mind identity theory would be confirmed if Bohm's hidden-variable theory were true, or if we could find evidence that the physical cosmos is unique, or that Newtonian physics was the only correct theory, etc.
> With the induction axioms, the machine becomes Löbian (and consciousness becomes basically described by the Grzegorczyk formula []([](p -> []p) -> p) -> p).
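As an aside, the Grzegorczyk formula quoted above can be checked mechanically under standard Kripke semantics. The following is a minimal sketch, not from the thread: the three-world chain frame is an illustrative assumption (Grz is valid on finite partial orders), and it verifies []([](p -> []p) -> p) -> p at every world under every valuation of p.

```python
from itertools import chain, combinations

# Toy Kripke frame: a three-world chain 0 <= 1 <= 2,
# with accessibility R = the partial order <= (reflexive, transitive).
worlds = [0, 1, 2]
R = {(u, v) for u in worlds for v in worlds if u <= v}

def box(phi):
    # []phi holds at w iff phi holds at every v accessible from w
    return {w for w in worlds if all(v in phi for (u, v) in R if u == w)}

def implies(a, b):
    # a -> b holds wherever a fails or b holds
    return {w for w in worlds if w not in a or w in b}

def grz(p):
    # Grzegorczyk formula: []([](p -> []p) -> p) -> p
    return implies(box(implies(box(implies(p, box(p))), p)), p)

# Every subset of worlds is a possible valuation of p.
valuations = chain.from_iterable(
    combinations(worlds, n) for n in range(len(worlds) + 1))

# Grz should be true at every world, under every valuation.
assert all(grz(set(p)) == set(worlds) for p in valuations)
print("Grz holds at all worlds of this finite partial order")
```

Swapping in a frame with an infinite ascending chain (or a non-reflexive relation) would break the assertion, which is the frame-theoretic content of the axiom.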
On 4 Jul 2021, at 17:40, Tomas Pales <litew...@gmail.com> wrote:

On Friday, June 18, 2021 at 8:46:39 PM UTC+2 Jason wrote:
> In your opinion who has offered the best theory of consciousness to date, or who do you agree with most? Would you say you agree with them wholeheartedly or do you find points of disagreement? I am seeing several related thoughts commonly expressed, but not sure which one or which combination is right. For example:
> Hofstadter/Marchal: self-reference is key

I don't know if self-reference in the sense of Gödel sentences is relevant to consciousness,
but I would say that self-reference in the sense of intrinsic identity of an object explains qualitative properties of consciousness (qualia).
I imagine that every object has two kinds of identity: intrinsic identity (something that the object is in itself)
and extrinsic identity (relations of the object to all other objects). Intrinsic identity is something qualitative (non-relational), a quality that stands in relations to other qualities, so it seems like a natural candidate for the qualitative properties of consciousness.
All relations are instances of the similarity relation (similarities between qualities arising from the common and differing properties of the qualities). One kind of relation deserves special mention: the composition relation, also known as the set membership relation in set theory, or the relation between a whole and its part (or between a combination of objects and an object in the combination). It gives rise to a special kind of relational identity of an object: the compositional identity, which is constituted by the relations of the object to its parts. In other words, it is the internal structure of the object, not to be confused with the intrinsic identity of the object, which is a non-structural quality! Set theory describes the compositional identity of all possible composite objects down to non-composite objects (instances of the empty set).
Since all objects have an intrinsic identity, this is a panpsychist view, but it seems important to differentiate between different levels or intensities of consciousness.
> Tononi/Tegmark: information is key

Study of neural correlates of consciousness suggests that the level or intensity of consciousness of an object depends on the complexity of the object's structure. There are two basic approaches to the definition of complexity: "disorganized" complexity (which is high in objects that have many different and independent (random) parts) and "organized" complexity (which is high in objects that have many different but also dependent (integrated) parts). It is the organized complexity in a dynamic form that seems important for the level of consciousness. Tononi's integrated information theory is based on such organized complexity, though I don't know whether his particular specification of the complexity is correct.

> Dennett/Chalmers: function is key

From the evolutionary perspective, it seems important for an organism to be able to create internal representations of external objects on different levels of composition of reality. Such representations reflect both the diversity and the regularities of reality, and need to be properly integrated to have a unified, coordinated influence on the organism's behavior. So the organized complexity of the organism's representations seems to be related to its functionality.
> but I would say that self-reference in the sense of intrinsic identity of an object explains qualitative properties of consciousness (qualia).

But what is an object? What is intrinsic identity? And why would that give qualia?
> I imagine that every object has two kinds of identity: intrinsic identity (something that the object is in itself)

To be honest, I don't understand. To be sure, I like mechanism because it provides a clear explanation of where the physical appearance comes from, without having us speculate on some "physical" object which would be primary, as we have no evidence for this, and it makes the mind-body problem unsolvable.