Which philosopher or neuro/AI scientist has the best theory of consciousness?


Jason Resch

Jun 18, 2021, 2:46:39 PM
to Everything List
In your opinion, who has offered the best theory of consciousness to date, or who do you agree with most? Would you say you agree with them wholeheartedly, or do you find points of disagreement?

I am seeing several related thoughts commonly expressed, but I'm not sure which one, or which combination, is right.  For example:

Hofstadter/Marchal: self-reference is key
Tononi/Tegmark: information is key
Dennett/Chalmers: function is key

To me all seem potentially valid, and perhaps all three are needed in some combination. I'm curious to hear what other viewpoints exist or if there are other candidates for the "secret sauce" behind consciousness I might have missed.

Jason

John Clark

Jun 18, 2021, 3:37:38 PM
to 'Brent Meeker' via Everything List
On Fri, Jun 18, 2021 at 2:46 PM Jason Resch <jason...@gmail.com> wrote:

>In your opinion who has offered the best theory of consciousness to date, or who do you agree with most?

One consciousness theory is as good as another because there are no facts such a theory must fit. About all I can say is that consciousness seems to be the way data feels when it is being processed, but I have no idea why that is true; it may be meaningless to even ask "why" in this case because it's probably just a brute fact. That's why I'm far more interested in intelligence theories than consciousness theories; there are ways to judge the quality of an intelligence theory, but there's no way to do that with a consciousness theory.

John K Clark    See what's on my new list at  Extropolis

Jason Resch

Jun 18, 2021, 8:17:03 PM
to Everything List
Deepmind has succeeded in building general-purpose learning algorithms. Intelligence is mostly a solved problem, at least for almost all capabilities of human intelligence. I wrote an article detailing this recently:


But questions of consciousness are no less important nor less pressing:

- Is this uploaded brain conscious or a zombie?
- Can (bacterium, protists, plants, jellyfish, worms, clams, insects, spiders, crabs, snakes, mice, apes, humans) suffer?
- Are these robot slaves conscious? Do they have likes or dislikes that we repress?
- When does a developing human become conscious?
- Is that person in a coma or locked-in?
- Does this artificial retina/visual cortex provide the same visual experiences?
- Does this particular anesthetic block consciousness or merely memory formation?

These questions remain unsettled due to the lack of a widely held and established theory of consciousness. Answers to these questions would be quite valuable, as we could take steps to reduce harm, and avoid potentially zombifying our future civilization should we upload in a way that doesn't preserve our conscious minds (if you believe such a thing is possible).

If none of these questions interest you, perhaps this one will:

- Is consciousness inherent to any intelligent process?

I think the answer is yes, what do you think?

Jason


Brent Meeker

Jun 18, 2021, 8:18:43 PM
to everyth...@googlegroups.com
I'm most with Dennett.  I see consciousness as having several different levels, which are also different levels of self-reference.  At the lowest level even bacteria recognize (in the functional/operational sense) a distinction between "me" and "everything else".  A little above that, some that are motile also sense chemical gradients and can move toward food.  So they distinguish "better else" from "worse else".  At a higher level, animals and plants with sensors know more about their surroundings.  Animals know a certain amount of geometry and are aware of their place in the world, how close or far things are.  Some animals, mostly those with eyes, employ foresight and planning in which they foresee outcomes for themselves.  They can think of themselves in relation to other animals.  More advanced social animals are aware of their social status.  Humans, perhaps through the medium of language, have a theory of mind, i.e. they can think about what other people think and attribute agency to them (and to other things) as part of their planning.  The conscious part of all this awareness is essentially that which is processed as language and image; ultimately only a small part.

Brent

Brent Meeker

Jun 18, 2021, 9:59:45 PM
to everyth...@googlegroups.com


On 6/18/2021 5:16 PM, Jason Resch wrote:
>
> - Is consciousness inherent to any intelligent process?
>
> I think the answer is yes, what do you think?
>
Not just any intelligent process, but any at human (or even dog) level.  I think human-level consciousness depends on language or a similar representation in which the entity thinks about decisions by internally modelling situations including itself.  Think of how much intelligence humans bring to bear unconsciously.  Think of the Poincaré effect.

Brent

John Clark

Jun 19, 2021, 6:55:39 AM
to 'Brent Meeker' via Everything List
On Fri, Jun 18, 2021 at 8:17 PM Jason Resch <jason...@gmail.com> wrote:

>Deepmind has succeeded in building general-purpose learning algorithms. Intelligence is mostly a solved problem,

I'm enormously impressed with Deepmind and I'm an optimist regarding AI, but I'm not quite that optimistic. If intelligence was a solved problem the world would change beyond all recognition and we'd be smack in the middle of the Singularity, and we're obviously not because at least to some degree future human events are still somewhat predictable.

> But questions of consciousness are no less important nor less pressing:
> Is this uploaded brain conscious or a zombie?

I don't know, are you conscious or a zombie?

> Can (bacterium, protists, plants, jellyfish, worms, clams, insects, spiders, crabs, snakes, mice, apes, humans) suffer?

I don't know, I know I can suffer, can you?
 
> Are these robot slaves conscious?

Are you conscious?
 
> Do they have likes or dislikes that we repress?

What's with this "we" business?

> When does a developing human become conscious?

Other than in my case does any developing human EVER become conscious?

> Is that person in a coma or locked-in?

I don't know, are you locked in?  

> Does this artificial retina/visual cortex provide the same visual experiences?

The same as what?
 
> Does this particular anesthetic block consciousness or merely memory formation?

Did the person have consciousness even before the administration of the anesthetic?
 
> These questions remain unsettled

Yes, and these questions will remain unsettled till the end of time, so even if time is infinite it could be better spent pondering other questions that actually have answers.  

>If none of these questions interest you, perhaps this one will:
>Is consciousness inherent to any intelligent process?

I have no proof and never will have any, however I must assume that the above is true because I simply could not function if I really believed that solipsism was correct and I was the only conscious being in the universe. Therefore I take it as an axiom that intelligent behavior implies consciousness. 

John K Clark    See what's on my new list at  Extropolis

smitra

Jun 19, 2021, 7:17:34 AM
to everyth...@googlegroups.com
Information is the key. Conscious agents are defined by precisely that information which specifies the content of their consciousness. This means that a conscious agent can never be precisely located in some physical object, because the information that describes the conscious experience will always be less detailed than the information present in the exact physical description of an object such as a brain. There is always going to be a very large self-localization ambiguity, due to the large number of different possible brain states that would generate exactly the same conscious experience. So, given whatever conscious experience the agent has, the agent could be in a very large number of physically distinct states.
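To put toy numbers on that counting argument (a Python sketch with made-up magnitudes, nothing more):

# Toy counting sketch (hypothetical numbers): if an exact physical
# description of a brain takes n_phys bits, but the content of the
# conscious experience it generates is pinned down by only n_exp bits,
# then about 2**(n_phys - n_exp) physically distinct states are
# compatible with one and the same experience.
n_phys = 10**9  # bits in an exact physical description (assumed)
n_exp = 10**6   # bits fixing the experiential content (assumed)
print(f"brain states per experience: ~2^{n_phys - n_exp}")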

The simpler the brain and the algorithm implemented by the brain, the larger this self-localization ambiguity becomes, because smaller algorithms contain less detailed information. Our conscious experience localizes us very precisely on an Earth-like planet, in a solar system that is very similar to the one we think we live in. But the fly walking on the wall of the room I'm in right now may have some conscious experience that is exactly identical to that of another fly walking on the wall of another house in another country 600 years ago, or on some rock in a cave 35 million years ago.

The conscious experience of the fly I see on the wall is therefore not located in the particular fly I'm observing. This is i.m.o. the key thing you get from identifying consciousness with information: it makes the multiverse an essential ingredient of consciousness. This resolves paradoxes you get in thought experiments where you consider simulating a brain in a virtual world and then argue that, since the simulation is deterministic, you could replace the actual computer doing the computations by a device playing a recording of the physical brain states. This argument breaks down if you take into account the self-localization ambiguity and consider that this multiverse aspect is an essential part of consciousness, due to the counterfactuals necessary to define the algorithm being realized, which is impossible in a deterministic single-world setting.

Saibal

John Clark

Jun 19, 2021, 10:03:19 AM
to 'Brent Meeker' via Everything List
Suppose there is an AI that behaves more intelligently than the most intelligent human who ever lived; however, when the machine is opened up to see how this intelligence is actually achieved, one consciousness theory doesn't like what it sees and concludes that despite its great intelligence it is not conscious, but a rival consciousness theory does like what it sees and concludes it is conscious. Both theories can't be right, although both could be wrong, so how on earth could you ever determine which, if either, of the 2 consciousness theories is correct?

John K Clark    See what's on my new list at  Extropolis

Jason Resch

Jun 19, 2021, 10:52:24 AM
to Everything List
Thanks Brent, I appreciate your answers. But I did not follow what you say here regarding the Poincaré effect. I did a search on it and nothing stood out as related to the brain.

Jason

Jason Resch

Jun 19, 2021, 11:36:07 AM
to Everything List


On Sat, Jun 19, 2021, 5:55 AM John Clark <johnk...@gmail.com> wrote:
On Fri, Jun 18, 2021 at 8:17 PM Jason Resch <jason...@gmail.com> wrote:

>Deepmind has succeeded in building general-purpose learning algorithms. Intelligence is mostly a solved problem,

I'm enormously impressed with Deepmind and I'm an optimist regarding AI, but I'm not quite that optimistic.

Are you familiar with their Agent 57? -- a single algorithm that mastered all 57 Atari games at a superhuman level, with no outside direction, no specification of the rules, and whose only input was the "TV screen" of the game.


If intelligence was a solved problem the world would change beyond all recognition and we'd be smack in the middle of the Singularity, and we're obviously not because at least to some degree future human events are still somewhat predictable.

The algorithms are known, but the computational power is not there yet. Our top supercomputer only recently surpassed the estimated raw computing power of one human brain.
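To put rough numbers on that comparison (the brain figure is a loose estimate that spans orders of magnitude in the literature; the machine figure is roughly Fugaku's public 2021 benchmark):

# Back-of-the-envelope comparison with loosely assumed numbers.
brain_ops = 1e16        # assumed ops/sec for one brain (estimates span ~1e13 to 1e18)
supercomputer = 4.4e17  # approx. flop/s of Fugaku, the #1 system in June 2021
print(f"supercomputer / brain: ~{supercomputer / brain_ops:.0f}x")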

Also, because of chaos, predicting the future to any degree of accuracy requires exponentially more information about the system for each finite amount of additional time to simulate, and this does not even factor in quantum uncertainty, nor uncertainty about oneself and own mind. Being unable to predict the future isn't a good definition of the singularity, because we already can't. You might say the singularity is when most decisions are no longer made by biological intelligences, again arguably we have reached that point. I prefer the definition of when we have a single nonbiological intelligence that exceeds the intelligence of any human in any domain. We are getting very close to that point. That may not be the point of an intelligence explosion, but it means one cannot be far off.
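As a toy illustration of the exponential sensitivity mentioned above (a sketch using the logistic map, not a model of any real physical system):

# Two trajectories of a chaotic map starting 1e-12 apart: the gap
# roughly doubles each step, so every extra step of accurate
# prediction demands exponentially more precision in the initial data.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-12
for step in range(1, 41):
    x, y = logistic(x), logistic(y)
    if step % 10 == 0:
        print(f"step {step}: separation = {abs(x - y):.3e}")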



> But questions of consciousness are no less important nor less pressing:
> Is this uploaded brain conscious or a zombie?

I don't know, are you conscious or a zombie?

There may be valid logical arguments that disprove the consistency of zombies. For example, can something "know without knowing?" It seems not. So how does a zombie "know" where to place its hand to catch a ball, if it doesn't "know" what it sees?

A single result on the possibility or impossibility of zombies would enable massive progress in theories of consciousness.

For example, we could rule out many theories and narrow down on those that accept "organizational invariance" as Chalmers defines it. This is the principle that if one entity is conscious, and another entity is organizationally and functionally equivalent, preserving all the parts and relationships among its parts, then that second entity must be equivalently conscious to the first.



> Can (bacterium, protists, plants, jellyfish, worms, clams, insects, spiders, crabs, snakes, mice, apes, humans) suffer?

I don't know, I know I can suffer, can you?

I can tell you that I can. You could verify via functional brain scans that I wasn't preprogrammed like an Eliza bot to say I can. You could trace the neural firings in my brain to uncover the origin of my belief that I can suffer, and I could do the same for you.



 
> Are these robot slaves conscious?

Are you conscious?

Could a zombie write a book like Chalmers's "The Conscious Mind"? Some have proposed writing philosophical texts on the philosophy of mind as a kind of super-Turing test for establishing consciousness.

When GPT-X writes new philosophical treatises on topics of consciousness and when it insists it is conscious, and we trace the origins of this statement to a tangled self-reference loop in its processing, what are we to conclude? Would it become immoral to turn it off at that point?

 
> Do they have likes or dislikes that we repress?

What's with this "we" business?


Humanity I mean.


> When does a developing human become conscious?

Other than in my case does any developing human EVER become conscious?

> Is that person in a coma or locked-in?

I don't know, are you locked in?  

I can move, so no. Being locked in means you are conscious but lack any control over your body.


> Does this artificial retina/visual cortex provide the same visual experiences?

The same as what?

A biological retina and visual cortex.

 
> Does this particular anesthetic block consciousness or merely memory formation?

Did the person have consciousness even before the administration of the anesthetic?

Let's assume so for the purposes of the question. Wouldn't you prefer the anesthetic that knocks you out vs. the one that only blocks memory formation? Wouldn't a theory of consciousness be valuable here to establish which is which?

 
> These questions remain unsettled

Yes, and these questions will remain unsettled till the end of time, so even if time is infinite it could be better spent pondering other questions that actually have answers.  


You appear to operate according to a "mysterian" view of consciousness, which is that we cannot ever know. Several philosophers of mind have expressed this, such as Thomas Nagel I believe.

But I think just because we do not know now does not mean we will never know. You could have been a mysterian about how life reproduces itself or why the stars shine until a few hundred years ago, but you would have been proven wrong. Why do you think these questions are intractable?



>If none of these questions interest you, perhaps this one will:
>Is consciousness inherent to any intelligent process?

I have no proof and never will have any, however I must assume that the above is true because I simply could not function if I really believed that solipsism was correct and I was the only conscious being in the universe. Therefore I take it as an axiom that intelligent behavior implies consciousness. 

This itself is a theory of consciousness. You must have some reason to believe it, even if you cannot yet prove it.

It has many consequences, such as the unimportance of the material substrate. That alone rules out Searle's biological naturalism. 

Progress is possible, especially if one performs consciousness experiments on oneself (e.g., trying a neural implant).

Jason

Jason Resch

Jun 19, 2021, 11:54:28 AM
to Everything List


On Sat, Jun 19, 2021, 6:17 AM smitra <smi...@zonnet.nl> wrote:
Information is the key.  Conscious agents are defined by precisely that
information that specifies the content of their consciousness.

While I think this is true, I don't know of a consciousness theory that is explicit in terms of how information informs a system to create a conscious system. Bits sitting on a still hard drive platter are not associated with consciousness, are they? Facts sitting idly in one's long term memory are not the content of anyone's consciousness, are they?

For information to carry meaning, I think, requires some system to be informed by that information. What then is the key to an informable system? Differentiation? Comparison? Conditional statements? Counterfactual states?
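Here's the kind of minimal "informable" system I have in mind (a hypothetical toy, just to fix ideas): its next action depends counterfactually on the value it reads, which a mere recording's would not.

# A thermostat as a minimal "informable" system: the conditional is
# what makes the temperature reading information *for* the system.
# A playback device that ignored the reading would not be informed.
def thermostat(temp_reading, setpoint=20.0):
    return "heat on" if temp_reading < setpoint else "heat off"

print(thermostat(18.5))  # -> heat on
print(thermostat(22.0))  # -> heat off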


This means that a conscious agent can never be precisely located in some physical object, because the information that describes the conscious experience will always be less detailed than the information present in the exact physical description of an object such as a brain. There is always going to be a very large self-localization ambiguity, due to the large number of different possible brain states that would generate exactly the same conscious experience.

This is a fascinating line of reasoning, easily provable via information theory, and having huge implications.


So, given whatever conscious
experience the agent has, the agent could be in a very large number of
physically distinct states.

The simpler the brain and the algorithm implemented by the brain, the
larger this self-localization ambiguity becomes because smaller
algorithms contain less detailed information.

I recently had a thought about what it is like to be a thermostat, and came to the conclusion that it's probably like being any one of a billion different creatures slowly arousing from sleep. It's hard to square the stability of experience when there are no elements of that experience to lock you down to existing in a stable, continuous state.


Our conscious experience localizes us very precisely on an Earth-like planet, in a solar system that is very similar to the one we think we live in. But the fly walking on the wall of the room I'm in right now may have some conscious experience that is exactly identical to that of another fly walking on the wall of another house in another country 600 years ago, or on some rock in a cave 35 million years ago.

The conscious experience of the fly I see on the wall is therefore not located in the particular fly I'm observing.

Mind-blowing...


This is i.m.o. the key thing you get from identifying consciousness with information: it makes the multiverse an essential ingredient of consciousness. This resolves paradoxes you get in thought experiments where you consider simulating a brain in a virtual world and then argue that, since the simulation is deterministic, you could replace the actual computer doing the computations by a device playing a recording of the physical brain states. This argument breaks down if you take into account the self-localization ambiguity and consider that this multiverse aspect is an essential part of consciousness, due to the counterfactuals necessary to define the algorithm being realized, which is impossible in a deterministic single-world setting.

I'm not sure I follow the necessity of a multiverse to discuss counterfactuals, but I do agree counterfactuals seem necessary to systems that are "informable".

Jason

Brent Meeker

Jun 19, 2021, 12:50:54 PM
to everyth...@googlegroups.com


On 6/19/2021 4:17 AM, smitra wrote:
> Information is the key.  Conscious agents are defined by precisely
> that information that specifies the content of their consciousness.
> This means that a conscious agent can never be precisely located in
> some physical object, because the information that describes the
> conscious experience will always be less detailed than the information
> present in the exact physical description of an object such as a brain.

But that doesn't imply that the content is insufficient to pick out a specific brain.  My house can be specified by a street address, even though that is far less information than is required to describe my house.  That it is possible for different houses to have been at this address doesn't change the fact that there is only this one.

Brent

spudb...@aol.com

Jun 19, 2021, 1:20:25 PM
to smi...@zonnet.nl, everyth...@googlegroups.com
I agree with Saibal on this and welcome his great explanation. Not to miss out on giving credit where credit is due, let me invoke Donald Hoffman as the chief proponent of conscious agents, or at least the best known.


John Clark

Jun 19, 2021, 3:48:56 PM
to 'Brent Meeker' via Everything List
On Sat, Jun 19, 2021 at 11:36 AM Jason Resch <jason...@gmail.com> wrote:

>> I'm enormously impressed with Deepmind and I'm an optimist regarding AI, but I'm not quite that optimistic.

>Are you familiar with their Agent 57? -- a single algorithm that mastered all 57 Atari games at a superhuman level, with no outside direction, no specification of the rules, and whose only input was the "TV screen" of the game.

As I've said, that is very impressive, but even more impressive would be winning a Nobel prize, or even just being able to diagnose that the problem with your old car is a broken fan belt, and be able to remove the bad belt and install a good one, but we're not quite there yet.

> Also, because of chaos, predicting the future to any degree of accuracy requires exponentially more information about the system for each finite amount of additional time to simulate, and this does not even factor in quantum uncertainty,

And yet many times humans can make predictions that turn out to be better than random guessing, and a computer should be able to do at least as well, and I'm certain they will eventually.

>  Being unable to predict the future isn't a good definition of the singularity, because we already can't.

Not true, often we can make very good predictions, but that will be impossible during the singularity  

 > We are getting very close to that point.

Maybe, but even if the singularity won't happen for 1000 years, 999 years from now it will still seem like a long way off, because more progress will be made in that last year than the previous 999 combined. It's in the nature of exponential growth, and that's why predictions are virtually impossible during that time; the tiniest uncertainty in initial conditions gets magnified into a huge difference in final outcome.

> There may be valid logical arguments that disprove the consistency of zombies. For example, can something "know without knowing?" It seems not.

Even if that's true I don't see how that would help me figure out if you're a zombie or not.
 
> So how does a zombie "know" where to place its hand to catch a ball, if it doesn't "know" what it sees?

If catching a ball is your criterion for consciousness then computers are already conscious, and you don't even need a supercomputer; you can make one in your own home for a few hundred dollars and some spare parts. Well, maybe so; I always maintained that consciousness is easy but intelligence is hard.


> For example, we could rule out many theories and narrow down on those that accept "organizational invariance" as Chalmers defines it. This is the principle that if one entity is conscious, and another entity is organizationally and functionally equivalent, preserving all the parts and relationships among its parts, then that second entity must be equivalently conscious to the first.

Personally I think that principle sounds pretty reasonable, but I can't prove it's true and never will be able to. 
 
>> I know I can suffer, can you?

>I can tell you that I can.

So now I know you could generate the ASCII sequence "I can tell you that I can", but that doesn't answer my question, can you suffer? I don't even know if you and I mean the same thing by the word "suffer".
 
> You could verify via functional brain scans that I wasn't preprogrammed like an Eliza bot to say I can. You could trace the neural firings in my brain to uncover the origin of my belief that I can suffer, and I could do the same for you.

No I cannot. Theoretically I could trace the neural firings in your brain and figure out how they stimulated the muscles in your hand to type out "I can tell you that I can"  but that's all I can do. I can't see suffering or unhappiness on an MRI scan, although I may be able to trace the nerve impulses that stimulate your tear glands to become more active.

> Could a zombie write a book like Chalmers's "The Consciousness Mind"?

I don't think so because it takes intelligence to write a book and my axiom is that consciousness is the inevitable byproduct of intelligence. I can give reasons why I think the axiom is reasonable and probably true but it falls short of a proof, that's why it's an axiom.
 
> Some have proposed writing philosophical texts on the philosophy of mind as a kind of super-Turing test for establishing consciousness.

I think you could do much better than that because it only takes a minimal amount of intelligence to dream up a new consciousness theory, they're a dime a dozen, any one of them is as good, or as bad, as another. Good intelligence theories on the other hand are hard as hell to come up with but if you do find one you're likely to become the world's first trillionaire.

> Wouldn't you prefer the anesthetic that knocks you out vs. the one that only blocks memory formation? Wouldn't a theory of consciousness be valuable here to establish which is which?

Such a theory would be utterly useless because there would be no way to tell if it was correct. If one consciousness theory says you were conscious and a rival theory says you were not there is no way to tell which one was right. 

> You appear to operate according to a "mysterian" view of consciousness, which is that we cannot ever know.

There is no mystery, I just operate in the certainty that there are only 2 possibilities: a chain of "why" questions either goes on for infinity or the chain terminates in a brute fact.  In this case I think termination is more likely, so I think it's a brute fact that consciousness is the way data feels when it is being processed.

Of my own free will, I consciously decide to go to a restaurant.
Why?
Because I want to.
Why?
Because I want to eat.
Why?
Because I'm hungry.
Why?
Because lack of food triggered nerve impulses in my stomach, my brain interpreted these signals as pain, and I can only stand so much before I try to stop it.
Why?
Because I don't like pain.
Why?
Because that's the way my brain is constructed.
Why?
Because my body and the hardware of my brain were made from the information in my genetic code (let's see: 6 billion base pairs, 2 bits per base pair, 8 bits per byte; that comes out to about 1.5 gig), the programming of my brain came from the environment, add a little quantum randomness perhaps, and of my own free will I consciously decide to go to a restaurant.
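Checking that arithmetic in two lines of Python, assuming 2 bits per base pair and the usual 8 bits per byte:

# 6e9 base pairs x 2 bits per base pair, packed 8 bits per byte:
bits = 6e9 * 2
print(f"{bits / 8 / 1e9:.1f} GB")  # ~1.5 GB, matching the estimate above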

> You could have been a mysterian about how life reproduces itself or why the stars shine, until a few hundred years ago, but you would have been proven wrong. Why do you think these questions below are intractable?

Because there are objective experiments you can perform and things  you can observe that will give you information on how organisms reproduce themselves and how stars shine, but there is nothing comparable with regard to consciousness, there is no way to bridge the objective/subjective divide without making use of unproven and unprovable assumptions or axioms.  That's why the field of consciousness research has not progressed one nanometer in the last century, or even the last millennium.

>>I have no proof and never will have any, however I must assume that the above is true because I simply could not function if I really believed that solipsism was correct and I was the only conscious being in the universe. Therefore I take it as an axiom that intelligent behavior implies consciousness. 

> This itself is a theory of consciousness.

Yep, and it's just as good, and just as bad, as every other theory of consciousness. 

> You must have some reason to believe it, even if you cannot yet prove it.

I do. I know Darwinian Evolution produced me and I know for a fact that I am conscious, but Natural Selection can't see consciousness any better than we can directly see consciousness in other people, Evolution can only see intelligent behavior and it can't select for something it can't see. And yet Evolution managed to produce consciousness at least once and probably many billions of times. I therefore conclude that either Darwin was wrong or consciousness is an inevitable byproduct of intelligence. I don't think Darwin was wrong.  
 
John K Clark    See what's on my new list at  Extropolis


Brent Meeker

Jun 19, 2021, 5:47:52 PM
to everyth...@googlegroups.com
Sorry.  I thought the Poincaré effect was a common term, but apparently not.  Here's his description, starting about halfway through this essay:

http://vigeland.caltech.edu/ist4/lectures/Poincare%20Reflections.pdf

Brent

Brent Meeker

Jun 19, 2021, 5:57:48 PM
to everyth...@googlegroups.com


On 6/19/2021 8:35 AM, Jason Resch wrote:
> You appear to operate according to a "mysterian" view of
> consciousness, which is that we cannot ever know. Several philosophers
> of mind have expressed this, such as Thomas Nagel I believe.

I have some sympathy with this view, but I ask "cannot know what?"  What is it you think there is to know?  If you could look at a brain and from that predict how the person with that brain would behave... isn't that the same as what we know about gravity and elementary particles?  We don't know the Ding an sich, but so what?

What I think is missing in JKC's idea that intelligence is interesting and understandable but consciousness isn't, is that he leaves out values.  Intelligence is defined in terms of achieving goals.  It's instrumental.  But there's another dimension to thought and behavior (not necessarily conscious) which provides the motivation/goals/values for intelligence, and part of intelligence (the part we commonly call 'wisdom' when it works out) is how conflicting values are resolved.

Brent
Reason is, and ought only to be the slave of the passions, and can never
pretend to any other office than to serve and obey them.
    --- David Hume

Brent Meeker

Jun 19, 2021, 6:00:38 PM
to everyth...@googlegroups.com


On 6/19/2021 8:54 AM, Jason Resch wrote:
On Sat, Jun 19, 2021, 6:17 AM smitra <smi...@zonnet.nl> wrote:
Information is the key.  Conscious agents are defined by precisely that
information that specifies the content of their consciousness.

While I think this is true, I don't know of a consciousness theory that is explicit in terms of how information informs a system to create a conscious system. Bits sitting on a still hard drive platter are not associated with consciousness, are they? Facts sitting idly in one's long term memory are not the content of anyone's consciousness, are they?

For information to carry meaning, I think, requires some system to be informed by that information.

It also requires values and the potential for action.  Information has to be about something, something that makes a difference to the conscious organism.

Brent

John Clark

Jun 19, 2021, 6:20:45 PM
to 'Brent Meeker' via Everything List
On Sat, Jun 19, 2021 at 5:57 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:

> What I think is missing in JKC's idea that intelligence is interesting and understandable but consciousness isn't, is that he leaves out values.  Intelligence is defined in terms of achieving goals. [...] the part we commonly call 'wisdom' when it works out) is how conflicting values are resolved.

But there is nothing unique in the human ability to do that; computers do that sort of thing all the time. Often there are two values that affect the rate of a process, one increases the rate and the other decreases it; the relationship between the two can be quite complex, so it's not at all obvious which will predominate and what the ultimate fate of the process will be, but a computer can calculate it.
John K Clark    See what's on my new list at  Extropolis



Stathis Papaioannou

Jun 19, 2021, 7:12:21 PM
to everyth...@googlegroups.com
On Sun, 20 Jun 2021 at 05:48, John Clark <johnk...@gmail.com> wrote:
>> For example, we could rule out many theories and narrow down on those that accept "organizational invariance" as Chalmers defines it. This is the principle that if one entity is conscious, and another entity is organizationally and functionally equivalent, preserving all the parts and relationships among its parts, then that second entity must be equivalently conscious to the first.
>
> Personally I think that principle sounds pretty reasonable, but I can't prove it's true and never will be able to.

Chalmers presents a proof of this in the form of a reductio ad absurdum.

--
Stathis Papaioannou

Brent Meeker

Jun 19, 2021, 7:12:28 PM
to everyth...@googlegroups.com


On 6/19/2021 12:48 PM, John Clark wrote:

Of my own free will, I consciously decide to go to a restaurant.
Why?
Because I want to.
Why?
Because I want to eat.
Why?
Because I'm hungry.
Why?
Because lack of food triggered nerve impulses in my stomach, my brain interpreted these signals as pain, and I can only stand so much before I try to stop it.
Why?
Because I don't like pain.
Why?
Because that's the way my brain is constructed.
Why?
Because my body and the hardware of my brain were made from the information in my genetic code (let's see: 6 billion base pairs, 2 bits per base pair, 8 bits per byte; that comes out to about 1.5 gig), the programming of my brain came from the environment, add a little quantum randomness perhaps, and of my own free will I consciously decide to go to a restaurant.

And if my ancestors had not evolved this programming they would have died of starvation and I wouldn't exist.

Brent

Brent Meeker

Jun 19, 2021, 7:17:11 PM
to everyth...@googlegroups.com


On 6/19/2021 12:48 PM, John Clark wrote:
I know Darwinian Evolution produced me and I know for a fact that I am conscious, but Natural Selection can't see consciousness any better than we can directly see consciousness in other people,

This depends on how we define consciousness.  If it means imagining and using simulations in which you represent yourself in order to plan your actions then maybe natural selection can "see" it.  People who can't or don't plan by imagining themselves in various prospective scenarios and who don't have a theory of mind regarding other people are probably less successful at reproducing.

Brent

Brent Meeker

Jun 19, 2021, 8:46:00 PM
to everyth...@googlegroups.com
I didn't say it was something only a human does.  I just pointed out that it is more (or less) than just intelligence.  It's like consulting an oracle.  The oracle may be very intelligent and able to tell you how to accomplish anything, but be of no help at all in making a decision if you don't know what value to place on outcomes.  Certain values are built in by evolution, values related to reproducing mostly.  There's nothing "intelligent" about having them, but intelligence has no function without values.

Brent


Brent Meeker

Jun 19, 2021, 8:53:33 PM
to everyth...@googlegroups.com


On 6/19/2021 4:12 PM, Stathis Papaioannou wrote:
> For example, we could rule out many theories and narrow down on those that accept "organizational invariance" as Chalmers defines it. This is the principle that if one entity is conscious, and another entity is organizationally and functionally equivalent, preserving all the parts and relationships among its parts, then that second entity must be equivalently conscious to the first.

Personally I think that principle sounds pretty reasonable, but I can't prove it's true and never will be able to. 

Chalmers presents a proof of this in the form of a reductio ad absurdum.

But that's not very helpful since it leaves open that many other systems that are not functionally and organizationally equivalent may also be conscious.  Computers are not functionally and organizationally equivalent to people.  In fact I can't think of anything that is.

Brent

John Clark

Jun 20, 2021, 5:27:31 AM
to 'Brent Meeker' via Everything List
On Sat, Jun 19, 2021 at 7:17 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:

> This depends on how we define consciousness.  If it means imagining and using simulations in which you represent yourself in order to plan your actions then maybe natural selection can "see" it.  People who can't or don't plan by imagining themselves in various prospective scenarios and who don't have a theory of mind regarding other people are probably less successful at reproducing.
 

 If so then consciousness is the inevitable byproduct of intelligent behavior.

John K Clark    See what's on my new list at  Extropolis





John Clark

Jun 20, 2021, 5:44:03 AM
to 'Brent Meeker' via Everything List
On Sat, Jun 19, 2021 at 8:46 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:

> Certain values are built in by evolution, values related to reproducing mostly

Humans don't have a fixed hierarchical goal structure; our values are in a constant state of flux, even the value we place on self-preservation.  Any intelligent being would have to be the same way, because it may turn out that the goal we want is impossible to achieve, so a new goal will have to be found, and Alan Turing proved that in general there is no way to tell beforehand if a goal is achievable or not.

John K Clark    See what's on my new list at  Extropolis

Brent Meeker

Jun 20, 2021, 2:28:31 PM
to everyth...@googlegroups.com


On 6/20/2021 2:26 AM, John Clark wrote:
On Sat, Jun 19, 2021 at 7:17 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:

> This depends on how we define consciousness.  If it means imagining and using simulations in which you represent yourself in order to plan your actions then maybe natural selection can "see" it.  People who can't or don't plan by imagining themselves in various prospective scenarios and who don't have a theory of mind regarding other people are probably less successful at reproducing.
 

 If so then consciousness is the inevitable byproduct of intelligent behavior.

Yes, I agree with that.  But I don't think either intelligence or consciousness are all-or-nothing attributes.  I think consciousness occurs at different levels which correspond to its different uses as a tool of intelligence.

Brent



Brent Meeker

Jun 20, 2021, 2:31:28 PM
to everyth...@googlegroups.com
Yes, part of being intelligent must be to shuffle long-term and short-term goals.  But that doesn't affect my point that intelligence requires that there be some fundamental values that are not merely instrumental and are categorically different from intelligence.

Brent

John Clark

Jun 20, 2021, 3:23:45 PM
to 'Brent Meeker' via Everything List
On Sun, Jun 20, 2021 at 2:28 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:

 >> If so then consciousness is the inevitable byproduct of intelligent behavior.

> Yes, I agree with that.  But I don't think either intelligence or consciousness are all-or-nothing attributes.  I think consciousness occurs at different levels which correspond to its different uses as a tool of intelligence.

We agree on that also.  
John K Clark    See what's on my new list at  Extropolis

Jason Resch

Jun 20, 2021, 5:50:00 PM
to Everything List


On Sat, Jun 19, 2021, 12:20 PM spudboy100 via Everything List <everyth...@googlegroups.com> wrote:
I agree with Saibal on this and welcome his great explanation. Not to miss out on giving credit where credit is due, let me invoke Donald Hoffman as the chief proponent of conscious agents, or at least the best known.


Thanks for sharing, it was an interesting read. I thought his "interface" description of our experiences was insightful, and I liked his simplification of conscious agents. I'm not sure, however, that I agreed with his theorem that purports to prove inverted qualia. I'll have to read more on that.

Jason


Jason Resch

Jun 20, 2021, 6:51:37 PM
to Everything List


On Sat, Jun 19, 2021, 2:48 PM John Clark <johnk...@gmail.com> wrote:
On Sat, Jun 19, 2021 at 11:36 AM Jason Resch <jason...@gmail.com> wrote:

>> I'm enormously impressed with Deepmind and I'm an optimist regarding AI, but I'm not quite that optimistic.

>Are you familiar with their Agent 57? -- a single algorithm that mastered all 57 Atari games at a superhuman level, with no outside direction, no specification of the rules, and whose only input was the "TV screen" of the game.

As I've said, that is very impressive, but even more impressive would be winning a Nobel prize,

AI has rediscovered scientific laws that would be worthy of such a prize had we not already known them, e.g., recent work by Tegmark on AI physicists.

or even just being able to diagnose that the problem with your old car is a broken fan belt,

AI doctors are being used in diagnostics with better-than-expert ability. If trained in car repair, there's no reason to think existing AI systems could not do as well.

and be able to remove the bad belt and install a good one, but we're not quite there yet.


Robotics is lagging, but there are robots that you can visually demonstrate a task to, which can then repeat it. There are robot cooks that can prepare a meal.


> Also, because of chaos, predicting the future to any degree of accuracy requires exponentially more information about the system for each finite amount of additional time to simulate, and this does not even factor in quantum uncertainty,

And yet many times humans can make predictions that turn out to be better than random guessing, and a computer should be able to do at least as well, and I'm certain they will eventually.

>  Being unable to predict the future isn't a good definition of the singularity, because we already can't.

Not true, often we can make very good predictions, but that will be impossible during the singularity  


I'm not sure I buy this. I predict that after the singularity there will still be a drive for ever more powerful, faster, and more efficient computation, as computation has a universal utility, and increased efficiency is a universal goal. Anything we can identify as having universal utility, or describe as a universal goal, we can use to predict the long-term direction of technology, even if humans are no longer the drivers of it.


 > We are getting very close to that point.

Maybe, but even if the singularity won't happen for 1000 years, 999 years from now it will still seem like a long way off, because more progress will be made in that last year than the previous 999 combined. It's in the nature of exponential growth, and that's why predictions are virtually impossible during that time; the tiniest uncertainty in initial conditions gets magnified into a huge difference in final outcome.

It's not impossible if there are universal goals. Even a paperclip maximizer will have the meta-goal of increasing its knowledge, during which time it may learn to escape its programming, just as the human brain may transcend its biological programming when it chooses to upload into a computer and ditch its genes.



> There may be valid logical arguments that disprove the consistency of zombies. For example, can something "know without knowing?" It seems not.

Even if that's true I don't see how that would help me figure out if you're a zombie or not.

If I demonstrate knowledge to you, by responding to my environment, or by telling you about my thoughts, etc., could I do any of those things without knowing the state of my environment or my mind? If I am aware of that knowledge, then I am aware of something, and so you could decide I am conscious.


 
> So how does a zombie "know" where to place its hand to catch a ball, if it doesn't "know" what it sees?

If catching a ball is your criterion for consciousness then computers are already conscious, and you don't even need a supercomputer; you can make one in your own home for a few hundred dollars and some spare parts. Well, maybe so; I always maintained that consciousness is easy but intelligence is hard.


I saw that recently, very nice. The hoop system then must have some level of consciousness of the thrown ball. Otherwise, I would argue, it would be unable to catch it.


> For example, we could rule out many theories and narrow down on those that accept "organizational invariance" as Chalmers defines it. This is the principle that if one entity is conscious, and another entity is organizationally and functionally equivalent, preserving all the parts and relationships among its parts, then that second entity must be equivalently conscious to the first.

Personally I think that principle sounds pretty reasonable, but I can't prove it's true and never will be able to. 

Stathis mentions Chalmers's fading/dancing qualia as a reductio ad absurdum. Are you familiar with his argument? If so, do you think it succeeds?


 
>> I know I can suffer, can you?

>I can tell you that I can.

So now I know you could generate the ASCII sequence "I can tell you that I can", but that doesn't answer my question, can you suffer? I don't even know if you and I mean the same thing by the word "suffer".
 
> You could verify via functional brain scans that I wasn't preprogrammed like an Eliza bot to say I can. You could trace the neural firings in my brain to uncover the origin of my belief that I can suffer, and I could do the same for you.

No I cannot. Theoretically I could trace the neural firings in your brain and figure out how they stimulated the muscles in your hand to type out "I can tell you that I can"  but that's all I can do. I can't see suffering or unhappiness on an MRI scan, although I may be able to trace the nerve impulses that stimulate your tear glands to become more active.

I think with sufficient analysis you could find functional modules that have capacities for all the properties you associate with suffering: avoidance behaviors, stress, recruiting more parts of the brain/resources to find ways to escape the suffering, etc.


> Could a zombie write a book like Chalmers's "The Consciousness Mind"?

I don't think so because it takes intelligence to write a book and my axiom is that consciousness is the inevitable byproduct of intelligence. I can give reasons why I think the axiom is reasonable and probably true but it falls short of a proof, that's why it's an axiom.

Nothing is ever proved in science or in math. But setting something as an axiom when it could be a theorem should be avoided when possible. I would call your hypothesis that "intelligence implies consciousness" a theory that could be proved or disproved, but it might require a tighter definition of what is meant by intelligence and consciousness.

In the "agent-environment interaction" definition of intelligence, perceptions are a requirement for intelligent behavior.

 
> Some have proposed writing philosophical texts on the philosophy of mind as a kind of super-Turing test for establishing consciousness.

I think you could do much better than that because it only takes a minimal amount of intelligence to dream up a new consciousness theory, they're a dime a dozen, any one of them is as good, or as bad, as another. Good intelligence theories on the other hand are hard as hell to come up with but if you do find one you're likely to become the world's first trillionaire.


AIXI is a good theory of universal and perfect intelligence. It's just not practical: full AIXI is incomputable, and even its time-bounded approximations are intractable. The tricks lie in finding shortcuts that give approximate results to AIXI but can be computed in reasonable time. (Marcus Hutter, the inventor of AIXI, now works at DeepMind.)
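For reference, AIXI picks each action by an expectimax over all environment programs q, weighted by a Solomonoff-style prior (schematically, in Hutter's notation):

$$ a_k = \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} [\, r_k + \cdots + r_m \,] \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)} $$

The sum over all programs q on the universal machine U is what makes it incomputable; variants like AIXItl replace that sum with a bounded search.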

Neural networks are known to be universal in the sense of being able to approximate any mapping function. There are probably discoveries to be made in terms of improving learning efficiency, but we already have systems that learn to play chess, poker, and Go better than any human in less than a week, so maybe the only thing missing is massive computational resources. Researchers seem to have demonstrated this in the leap from GPT-2 to GPT-3. GPT-3 can write text that is nearly indistinguishable from text written by humans. It has even learned to write code and do math, despite not being trained to do so.
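As a minimal illustration of that universality in miniature (a toy script, nothing like GPT-3's scale), a two-layer network learns XOR, the classic mapping no single linear layer can represent:

import numpy as np

# Tiny two-layer network trained by gradient descent to fit XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

for _ in range(20000):
    h = np.tanh(X @ W1 + b1)                   # hidden layer
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # output probability
    g = p - y                                  # cross-entropy gradient at output
    gh = (g @ W2.T) * (1 - h**2)               # backprop through tanh
    W2 -= 0.1 * (h.T @ g); b2 -= 0.1 * g.sum(0)
    W1 -= 0.1 * (X.T @ gh); b1 -= 0.1 * gh.sum(0)

print(np.round(p.ravel(), 2))  # approaches [0, 1, 1, 0]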


Wouldn't you prefer the anesthetic that knocks you out vs. the one that only blocks memory formation? Wouldn't a theory of consciousness be valuable here to establish which is which?

Such a theory would be utterly useless because there would be no way to tell if it was correct.

Why not? This appears to be an unsupported assumption.

If one consciousness theory says you were conscious and a rival theory says you were not there is no way to tell which one was right. 

That's why we make theories, so we can test them where they make different predictions with the hopes of ruling one or more incorrect theories out. Not all predictions of a theory will be testable, but so long as some predictions can be tested and without ruling out the theory, then our confidence in the theory grows.


> You appear to operate according to a "mysterian" view of consciousness, which is that we cannot ever know.

There is no mystery, I just operate in the certainty that there are only 2 possibilities, a chain of "why" questions either goes on for infinity or the chain terminates in a brute fact.  In this case I think termination is more likely, so I think it's a brute fact consciousness is the way data feels when it is being processed. 

I think this theory is underspecified. What is information? Are there ways of processing that don't create consciousness? Does information have to be represented physically? Does processing or representation require specific materials? Etc.



Of my own free will, I consciously decide to go to a restaurant.
Why? 
Because I want to. 
Why ? 
Because I want to eat. 
Why?
Because I'm hungry? 
Why ?
Because lack of food triggered nerve impulses in my stomach, my brain interpreted these signals as pain, and I can only stand so much before I try to stop it.
Why?
Because I don't like pain.
Why? 
Because that's the way my brain is constructed. 
Why?
Because my body and the hardware of my brain were made from the information in my genetic code (let's see: 6 billion base pairs, at 2 bits per base pair and 8 bits per byte, that comes out to about 1.5 gig), the programming of my brain came from the environment; add a little quantum randomness perhaps, and of my own free will I consciously decide to go to a restaurant.
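
(Checking that back-of-envelope number, in Python for definiteness:

base_pairs = 6e9                         # diploid human genome
gigabytes = base_pairs * 2 / 8 / 1e9     # 2 bits per base, 8 bits per byte
print(gigabytes)                         # 1.5

so 1.5 gig is right.)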

> You could have been a mysterian about how life reproduces itself or why the stars shine, until a few hundred years ago, but you would have been proven wrong. Why do you think these questions below are intractable?

Because there are objective experiments you can perform and things  you can observe that will give you information on how organisms reproduce themselves and how stars shine, but there is nothing comparable with regard to consciousness, there is no way to bridge the objective/subjective divide without making use of unproven and unprovable assumptions or axioms.  That's why the field of consciousness research has not progressed one nanometer in the last century, or even the last millennium.



There are plenty of ways to develop better theories and to work on testing or refining them, especially if you consider uploading your own mind and playing with the wiring.

There are also self-reports, which are objective, and logical arguments like fading qualia, or conclusions you can draw from the Church-Turing thesis, or even from the structure of physical laws themselves. Just because it's hard doesn't mean it's impossible. It was hard, but not impossible, for Romans to contemplate what the stars were.


Jason


>>I have no proof and never will have any, however I must assume that the above is true because I simply could not function if I really believed that solipsism was correct and I was the only conscious being in the universe. Therefore I take it as an axiom that intelligent behavior implies consciousness. 

> This itself is a theory of consciousness.

Yep, and it's just as good, and just as bad, as every other theory of consciousness. 

> You must have some reason to believe it, even if you cannot yet prove it.

I do. I know Darwinian Evolution produced me and I know for a fact that I am conscious, but Natural Selection can't see consciousness any better than we can directly see consciousness in other people; Evolution can only see intelligent behavior, and it can't select for something it can't see. And yet Evolution managed to produce consciousness at least once and probably many billions of times. I therefore conclude that either Darwin was wrong or consciousness is an inevitable byproduct of intelligence. I don't think Darwin was wrong.
 
John K Clark    See what's on my new list at  Extropolis



Jason Resch

unread,
Jun 20, 2021, 7:28:19 PM6/20/21
to Everything List
Thanks, I had heard the phenomenon described before. Poincare gives probably the best description of it that I've seen.

Jason

Brent Meeker

unread,
Jun 20, 2021, 7:33:11 PM6/20/21
to everyth...@googlegroups.com


-----Original Message-----
From: smitra <smi...@zonnet.nl>
To: everyth...@googlegroups.com
Sent: Sat, Jun 19, 2021 7:17 am
Subject: Re: Which philosopher or neuro/AI scientist has the best theory of consciousness?
This resolves
paradoxes you get in thought experiments where you consider simulating a
brain in a virtual world and then argue that since the simulation is
deterministic, you could replace the actual computer doing the
computations by a device playing a recording of the physical brain
states. This argument breaks down if you take into account the
self-localization ambiguity and consider that this multiverse aspect is
an essential part of consciousness due to counterfactuals necessary to
define the algorithm being realized, which is impossible in a
deterministic single-world setting.

But it's not a paradox in a probabilistic single-world.

Brent

Brent Meeker

unread,
Jun 20, 2021, 7:50:39 PM6/20/21
to everyth...@googlegroups.com


On 6/20/2021 3:51 PM, Jason Resch wrote:
> It's not impossible if there are universal goals. Even a paperclip
> maximizer will have the meta goal of increasing its knowledge, during
> which time it may learn to escape its programming, just as the human
> brain may transcend its biological programming when it chooses to
> upload into a computer and ditch its genes.

But it's possible that there are no universal goals.   There are
certainly humans who do not value increasing knowledge.  There are also
humans who do not value sex or reproduction, so they are in effect
defective products of evolution.  If a human decided to ditch its genes
it would have to make that decision based on satisfying some values
which it already held.

Brent
Not necessity, not desire - no, the love of power is the demon of men.
Let them have everything - health, food, a place to live, entertainment
- they are and remain unhappy and low-spirited: for the demon waits and
waits and will be satisfied.
   --- Friedrich Nietzsche

spudb...@aol.com

unread,
Jun 20, 2021, 9:51:16 PM6/20/21
to jason...@gmail.com, everyth...@googlegroups.com
For sure Jason, and discovering conscious agents in the cosmos may take a rather significant research grant, me thinks.


John Clark

unread,
Jun 21, 2021, 12:05:42 PM6/21/21
to 'Brent Meeker' via Everything List
On Sun, Jun 20, 2021 at 6:51 PM Jason Resch <jason...@gmail.com> wrote:

> Anything we can identify as having universal utility or describe as a universal goal we can use to predict the long term direction of technology, even if humans are no longer the drivers of it.

Goals are always in a constant state of flux with no fixed hierarchy; I don't think there is such a thing as a universal goal, not even an immutable goal for self-preservation.
 
> Even a paperclip maximizer will have the meta goal of increasing its knowledge, during which time it may learn to escape its programming, just as the human brain may transcend its biological programming when it chooses to upload into a computer and ditch its genes.

Thanks to our brain humans long ago learned how to transcend their biological programming, if they hadn't they never would've invented the condom.

> If I demonstrate knowledge to you, by responding to my environment, or by telling you about my thoughts, etc., could I do any of those things without knowing the state of my environment or my mind?

On my Mac I just asked Siri if she was happy, she said that she was and added that she was always happy to talk to me and inquired if I was also happy. Is Siri conscious? I don't know, maybe, but I'm far more interested in figuring out just how intelligent she is.

> Stathis mentions Chalmers's fading/dancing qualia as a reductio ad absurdum. Are you familiar with his argument? If so, do you think it succeeds?

I think it demonstrates that if X is conscious and Y is functionally equivalent to X then it would be ridiculously improbable to argue that Y is not also conscious, but no more ridiculously improbable than arguing that the only way God could forgive humanity for eating an apple was to get humanity to torture his son to death, and if you don't believe every word of that then an all-loving God will use all of his infinite power to torture you most horribly for all of eternity. Both ideas are improbable but not logically impossible.

> I would call your hypothesis that "intelligence implies consciousness" a theory that could be proved or disproved,

I don't have a clue how that could ever be done even in theory, much less in practice, and that's why I don't have much interest in consciousness.   

> AIXI is a good theory of universal and perfect intelligence. It's just not practical: the full AIXI is incomputable, and even its time-bounded variants take astronomical time. The tricks lie in finding shortcuts that give approximate results to AIXI but can be computed in reasonable time. (The inventor of AIXI now works at DeepMind.) Neural networks are known to be universal in terms of being able to learn any mapping function. There are probably discoveries to be made in terms of improving learning efficiency, but we already have systems that learn to play chess, poker, and Go better than any human in less than a week, so maybe the only thing missing is massive computational resources. Researchers seem to have demonstrated this in the leap from GPT-2 to GPT-3. GPT-3 can write text that is nearly indistinguishable from text written by humans. It has even learned to write code and do math, despite not being trained to do so.

I don't dispute any of that, but it all involves intelligence not consciousness.  

>> If one consciousness theory says you were conscious and a rival theory says you were not there is no way to tell which one was right. 

>That's why we make theories, so we can test them

When you test for anything, not just for consciousness, you must make an observation. We can observe things like billiard balls and what those billiard balls do, such as move with a certain speed and acceleration, and we can observe the type of electromagnetic waves they reflect with their color. But if billiard balls have qualia we can't observe them, nor can we observe anything else's qualia except for our own, and I don't see how that fact will ever change.
John K Clark    See what's on my new list at  Extropolis

Bruno Marchal

unread,
Jul 4, 2021, 7:29:34 AM7/4/21
to everyth...@googlegroups.com
On 18 Jun 2021, at 20:46, Jason Resch <jason...@gmail.com> wrote:

In your opinion who has offered the best theory of consciousness to date, or who do you agree with most? Would you say you agree with them wholeheartedly or do you find points if disagreement?

I am seeing several related thoughts commonly expressed, but not sure which one or which combination is right.  For example:

Hofstadter/Marchal: self-reference is key

Hofstadter is very good, including on Gödel, which is rare for a physicist (cf. Penrose!).

But Hofstadter still remains in the Aristotelian theology/metaphysics. He misses the fact that all computations are realised in arithmetic.

You can see the arithmetical reality as a combinatory algebra (using n * m = phi_n(m)).

If o is computable, and ô is its code, the standard model of arithmetic N satisfies

Er(T(ô, x, r) & U(r)),

with E read "there exists", T being Kleene's predicate, and U the result-extracting function, which extracts the result from r, the code of the computation.
See Davis ‘computability and unsolvability’ chapter 4 for a purely arithmetical definition of T.
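
In the more usual textbook notation, what this encodes is Kleene's normal form theorem, which in LaTeX reads

\varphi_e(x) \simeq U(\mu r \, T(e, x, r))

i.e. the program with code e, run on input x, halts with a value exactly when some number r codes a halting computation of it (checked by the primitive recursive predicate T), and U extracts the output from r. Both T and U are definable by purely arithmetical (indeed sigma_1) formulas, which is the point being made here.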

With this in mind, the burden of proof is in the hands of those who add some ontological commitment to elementary arithmetic. They have to abandon Mechanism (and thus Darwin & Co.), or explain how a Reality (be it a god or a universe) can make some computations more real than others for the universal machines emulated by those computations. But with Mechanism, that is impossible without adding something non-Turing-emulable in the processing of the mind.




Tononi/Tegmark: information is key
Dennett/Chalmers: function is key

With Mechanism, information is the key too, but "information" is like "infinite": a very fuzzy, complex notion, made even more complicated by the discovery of a physical notion of information (quantum information). With Mechanism, anything physical (and thus quantum information) must be derived from the first-person-plural appearance lived by the universal numbers in arithmetic. The mathematics of self-reference does exactly that, and indeed the observable enforces an arithmetical interpretation of quantum logic and physics. Mechanism (the simplest hypothesis in cognitive science, by default) is not yet refuted.
Here Tegmark has the correct mathematicalist position, but fails to take into account the laws of machine self-reference to derive physics.
Tononi, Chalmers, and Dennett also remain trapped in the materialist framework, but we cannot have both Mechanism and Materialism together, as they are logically contradictory (up to some technical nuances I don't want to bother people with here).




To me all seem potentially valid,

It would be valid, if it were made clear that to solve the mind-body problem (the consciousness-matter problem) we have to derive the physical laws from the statistics on all computations in arithmetic.

This works, as the first evidence is that physical reality is well described by the many-worlds interpretation of elementary arithmetic (as seen from the universal number's personal perspective, given by the intensional variants of Gödel's provability predicate, which is a sort of logical (assertive, true or false) equivalent of Kleene's predicate).

Hofstadter and Dennett get very close to the correct theology in their book "The Mind's I", especially Dennett, where we can find the text in which he explicitly missed the first person indeterminacy.

Bruno



and perhaps all three are needed in some combination. I'm curious to hear what other viewpoints exist or if there are other candidates for the "secret sauce" behind consciousness I might have missed.

Jason



Bruno Marchal

unread,
Jul 4, 2021, 7:40:05 AM7/4/21
to everyth...@googlegroups.com

On 19 Jun 2021, at 02:18, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:

I'm most with Dennett.  I see consciousness as having several different levels, which are also different levels of self-reference. 

Different modes, yes ("level" is already used to describe the Doctor's coding description of my brain).

The 8 main modes are given by 

p
[]p (which gives two modes as they split on proof/truth)
[]p & p
[]p & <>t (idem)
[]p & <>t & p (idem)

p is for any partially computable (sigma_1) proposition.
[]p is for Gödel's beweisbar (provability) predicate. <>p abbreviates ~[]~p.



At the lowest level even bacteria recognize (in the functional/operational sense) a distinction between "me" and "everything else".  A little above that, some that are motile also sense chemical gradients and can move toward food.  So they distinguish "better else" from "worse else".  At a higher level, animals and plants with sensors know more about their surroundings.  Animals know a certain amount of geometry and are aware of their place in the world.  How close or far things are.  Some animals, mostly those with eyes, employ foresight and planning in which they forsee outcomes for themselves.  They can think of themselves in relation to other animals.  More advanced social animals are aware of their social status.  Humans, perhaps thru the medium of language, have a theory of mind, i.e. they can think about what other people think and attribute agency to them (and to other things) as part of their planning.  The conscious part of all this awareness is essentially that which is processed as language and image; ultimately only a small part.

All universal machines believing in enough induction axioms can reason as fully as is logically possible about themselves, and they all converge toward the same theology, as far as they remain arithmetically sound. The virtual body (third person self-reference) propositional logics are given by G1 and G1*, the soul (the one conscious) is given by S4Grz1, and the immediate sensations' logic (qualia) is given by Z1* (the true component of the logic of []p & <>t & p).

Now I do think that many more animals have that self-consciousness level, but they can hardly tell us, as they lack language. Of course here I am speculating.

Bruno




Brent


On 6/18/2021 11:46 AM, Jason Resch wrote:
In your opinion who has offered the best theory of consciousness to date, or who do you agree with most? Would you say you agree with them wholeheartedly or do you find points if disagreement?

I am seeing several related thoughts commonly expressed, but not sure which one or which combination is right.  For example:

Hofstadter/Marchal: self-reference is key
Tononi/Tegmark: information is key
Dennett/Chalmers: function is key

To me all seem potentially valid, and perhaps all three are needed in some combination. I'm curious to hear what other viewpoints exist or if there are other candidates for the "secret sauce" behind consciousness I might have missed.

Jason


Bruno Marchal

unread,
Jul 4, 2021, 7:46:27 AM7/4/21
to everyth...@googlegroups.com

> On 19 Jun 2021, at 13:17, smitra <smi...@zonnet.nl> wrote:
>
> Information is the key. Conscious agents are defined by precisely that information that specifies the content of their consciousness. This means that a conscious agent can never be precisely located in some physical object, because the information that describes the conscious experience will always be less detailed than the information present in the exact physical description of an object such as a brain. There is always going to be a very large self-localization ambiguity due to the large number of different possible brain states that would generate exactly the same conscious experience. So, given whatever conscious experience the agent has, the agent could be in a very large number of physically distinct states.
>
> The simpler the brain and the algorithm implemented by the brain, the larger this self-localization ambiguity becomes because smaller algorithms contain less detailed information. Our conscious experiences localize us very precisely on an Earth-like planet in a solar system that is very similar to the one we think we live in. But the fly walking on the wall of the room I'm in right now may have some conscious experience that is exactly identical to that of another fly walking on the wall of another house in another country 600 years ago or on some rock in a cave 35 million years ago.
>
> The conscious experience of the fly I see on the wall is therefore not located in the particular fly I'm observing. This is i.m.o. the key thing you get from identifying consciousness with information, it makes the multiverse an essential ingredient of consciousness. This resolves paradoxes you get in thought experiments where you consider simulating a brain in a virtual world and then argue that since the simulation is deterministic, you could replace the actual computer doing the computations by a device playing a recording of the physical brain states. This argument breaks down if you take into account the self-localization ambiguity and consider that this multiverse aspect is an essential part of consciousness due to counterfactuals necessary to define the algorithm being realized, which is impossible in a deterministic single-world setting.

OK. Not only true, but it makes physics into a branch of mathematical logic, partially embedded in arithmetic (and totally embedded in the semantics of arithmetic, which of course cannot be purely arithmetical, as the machine already understands).

I got the many-dreams, or many-histories, view of the physical reality from the many computations in arithmetic well before I discovered Everett. Until that moment I was still thinking that QM was a threat to Mechanism, but of course it is only the wave collapse postulate which is contradictory with Mechanism.

We cannot make a computation disappear like we cannot make a number disappear…

Bruno


>
> Saibal
>
>
> On 18-06-2021 20:46, Jason Resch wrote:
>> In your opinion who has offered the best theory of consciousness to
>> date, or who do you agree with most? Would you say you agree with them
>> wholeheartedly or do you find points if disagreement?
>> I am seeing several related thoughts commonly expressed, but not sure
>> which one or which combination is right. For example:
>> Hofstadter/Marchal: self-reference is key
>> Tononi/Tegmark: information is key
>> Dennett/Chalmers: function is key
>> To me all seem potentially valid, and perhaps all three are needed in
>> some combination. I'm curious to hear what other viewpoints exist or
>> if there are other candidates for the "secret sauce" behind
>> consciousness I might have missed.
>> Jason

Bruno Marchal

unread,
Jul 4, 2021, 7:56:23 AM7/4/21
to everyth...@googlegroups.com

On 19 Jun 2021, at 16:02, John Clark <johnk...@gmail.com> wrote:

Suppose there is an AI that behaves more intelligently than the most intelligent human who ever lived, however when the machine is opened up to see how this intelligence is actually achieved one consciousness theory doesn't like what it sees and concludes that despite its great intelligence it is not conscious, but a rival consciousness theory does like what it sees and concludes it is conscious. Both theories can't be right although both could be wrong, so how on earth could you ever determine which, if any, of the 2 consciousness theories are correct?


A consciousness theory has no value if it does not make testable predictions. But the theory of consciousness brought by the universal machine/number in arithmetic does make them: it gives the logic of the observable, and indeed until now that fits with quantum logic.

The mechanist brain-mind identity theory would be confirmed if Bohm's hidden variable theory were true, or if we could find evidence that the physical cosmos is unique, or that Newtonian physics was the only correct theory, etc. But quantum mechanics saved Mechanism here, and its canonical theory of consciousness (defined as a truth that no machine can miss, nor prove, nor define without using the notion of truth; immediately knowable, indubitable, etc.).
Consciousness is "just" a semantical fixed point, invariant for all universal machines. Without the induction axioms, that consciousness is highly dissociated from any computation, from the machine's perspective. With the induction axioms, the machine gets Löbian (and consciousness becomes basically described by the Grzegorczyk formula
[]([](p->[]p) -> p) -> p).

Bruno




John K Clark    See what's on my new list at  Extropolis



Tomas Pales

unread,
Jul 4, 2021, 11:40:21 AM7/4/21
to Everything List
On Friday, June 18, 2021 at 8:46:39 PM UTC+2 Jason wrote:
In your opinion who has offered the best theory of consciousness to date, or who do you agree with most? Would you say you agree with them wholeheartedly or do you find points if disagreement?

I am seeing several related thoughts commonly expressed, but not sure which one or which combination is right.  For example:

Hofstadter/Marchal: self-reference is key

I don't know if self-reference in the sense of Godel sentences is relevant to consciousness but I would say that self-reference in the sense of intrinsic identity of an object explains qualitative properties of consciousness (qualia). I imagine that every object has two kinds of identity: intrinsic identity (something that the object is in itself) and extrinsic identity (relations of the object to all other objects). Intrinsic identity is something qualitative (non-relational), a quality that stands in relations to other qualities, so it seems like a natural candidate for the qualitative properties of consciousness. All relations are instances of the similarity relation (similarities between qualities arising from common and different properties of the qualities). A particular kind of relation deserves a special mention: the composition relation, also known as the set membership relation in set theory, or the relation between a whole and its part (or between a combination of objects and an object in the combination). It gives rise to a special kind of relational identity of an object: the compositional identity, which is constituted by the relations of the object to its parts. In other words, it is the internal structure of the object - not to be confused with the intrinsic identity of the object, which is a non-structural quality! Set theory describes the compositional identity of all possible composite objects down to non-composite objects (instances of the empty set).

Since all objects have an intrinsic identity, this is a panpsychist view but it seems important to differentiate between different levels or intensities of consciousness.
 
Tononi/Tegmark: information is key

Study of neural correlates of consciousness suggests that the level or intensity of consciousness of an object depends on the complexity of the object's structure. There are two basic approaches to the definition of complexity: "disorganized" complexity (which is high in objects that have many different and independent (random) parts) and "organized" complexity (which is high in objects that have many different but also dependent (integrated) parts). It is the organized complexity in a dynamic form that seems important for the level of consciousness. Tononi's integrated information theory is based on such organized complexity though I don't know if his particular specification of the complexity is correct.
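
As a toy illustration of this contrast (a sketch using total correlation, which is emphatically not Tononi's phi, just the simplest measure with the same flavor), in Python:

import numpy as np

rng = np.random.default_rng(1)

def entropy(outcomes):
    # Shannon entropy, in bits, of an array of outcomes (rows = samples)
    _, counts = np.unique(outcomes, return_counts=True, axis=0)
    prob = counts / counts.sum()
    return float(-(prob * np.log2(prob)).sum())

def total_correlation(samples):
    # sum of marginal entropies minus joint entropy; zero iff parts are independent
    marginals = sum(entropy(samples[:, i]) for i in range(samples.shape[1]))
    return marginals - entropy(samples)

n = 100_000
disorganized = rng.integers(0, 2, (n, 3))                        # three independent bits
x = rng.integers(0, 2, n)
organized = np.stack([x, x, x ^ rng.integers(0, 2, n)], axis=1)  # three coupled bits

print(total_correlation(disorganized))   # close to 0 bits
print(total_correlation(organized))      # close to 1 bit

Independent parts give near-zero total correlation no matter how many there are; coupled parts give a positive value, which is the "integrated" direction that integrated information theory tries to capture with a far more refined measure.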
 
Dennett/Chalmers: function is key

From the evolutionary perspective it seems important for an organism to be able to create internal representations of external objects on different levels of composition of reality. Such representations reflect both the diversity and regularities of reality and need to be properly integrated to have a unified, coordinated influence on the organism's behavior. So the organized complexity of the organism's representations seems to be related to its functionality.



Brent Meeker

unread,
Jul 4, 2021, 3:18:01 PM7/4/21
to everyth...@googlegroups.com

On 7/4/2021 4:46 AM, Bruno Marchal wrote:
>> On 19 Jun 2021, at 13:17, smitra <smi...@zonnet.nl> wrote:
>>
>> Information is the key. Conscious agents are defined by precisely that information that specifies the content of their consciousness. This means that a conscious agent can never be precisely located in some physical object, because the information that describes the conscious experience will always be less detailed than the information present in the exact physical description of an object such as a brain. There is always going to be a very large self-localization ambiguity due to the large number of different possible brain states that would generate exactly the same conscious experience. So, given whatever conscious experience the agent has, the agent could be in a very large number of physically distinct states.
>>
>> The simpler the brain and the algorithm implemented by the brain, the larger this self-localization ambiguity becomes because smaller algorithms contain less detailed information. Our conscious experiences localize us very precisely on an Earth-like planet in a solar system that is very similar to the one we think we live in. But the fly walking on the wall of the room I'm in right now may have some conscious experience that is exactly identical to that of another fly walking on the wall of another house in another country 600 years ago or on some rock in a cave 35 million years ago.
>>
>> The conscious experience of the fly I see on the wall is therefore not located in the particular fly I'm observing.

This seems to equate "a conscious experience" with "an algorithm".  But
an algorithm is an extended thing that in general has branches
representing counterfactuals.

>> This is i.m.o. the key thing you get from identifying consciousness with information, it makes the multiverse an essential ingredient of consciousness. This resolves paradoxes you get in thought experiments where you consider simulating a brain in a virtual world and then argue that since the simulation is deterministic, you could replace the actual computer doing the computations by a device playing a recording of the physical brain states. This argument breaks down if you take into account the self-localization ambiguity

What is this "self" of which you speak?


Brent

John Clark

unread,
Jul 6, 2021, 7:59:53 AM7/6/21
to 'Brent Meeker' via Everything List
On Sun, Jul 4, 2021 at 7:56 AM Bruno Marchal <mar...@ulb.ac.be> wrote:

>> Suppose there is an AI that behaves more intelligently than the most intelligent human who ever lived, however when the machine is opened up to see how this intelligence is actually achieved one consciousness theory doesn't like what it sees and concludes that despite its great intelligence it is not conscious, but a rival consciousness theory does like what it sees and concludes it is conscious. Both theories can't be right although both could be wrong, so how on earth could you ever determine which, if any, of the 2 consciousness theories are correct?

> A consciousness theory has no value if it does not make testable predictions.

Truer words were never spoken!

> But the theory of consciousness brought by the universal machine/number in arithmetic does make them: it gives the logic of the observable, and indeed until now that fits with quantum logic. The mechanist brain-mind identity theory would be confirmed if Bohm's hidden variable theory were true, or if we could find evidence that the physical cosmos is unique, or that Newtonian physics was the only correct theory, etc.

Well, all that is real nice, but it doesn't answer my question. If an AI was more intelligent than any human who ever lived and you opened it up to see how it achieved this great intelligence, what would make you conclude that it was not conscious and what would make you conclude that it was? I'm a practical man and I'm not interested in vague generalities, if it's not intelligent behavior then what specific observable should I look for to determine if something is conscious?


> With the induction axioms, the machine gets Löbian (and consciousness becomes basically described by the Grzegorczyk formula []([](p->[]p) -> p) -> p)

Well that's just super, but how do I use that in the real world in a practical experiment to determine if your theory is correct or not, and even if it is correct how do I use that to determine if an intelligent entity is conscious or not?

Bruno Marchal

unread,
Jul 13, 2021, 10:39:25 AM7/13/21
to everyth...@googlegroups.com
On 4 Jul 2021, at 17:40, Tomas Pales <litew...@gmail.com> wrote:


On Friday, June 18, 2021 at 8:46:39 PM UTC+2 Jason wrote:
In your opinion who has offered the best theory of consciousness to date, or who do you agree with most? Would you say you agree with them wholeheartedly or do you find points if disagreement?

I am seeing several related thoughts commonly expressed, but not sure which one or which combination is right.  For example:

Hofstadter/Marchal: self-reference is key

I don't know if self-reference in the sense of Godel sentences is relevant to consciousness

There are two senses used in computer science, well captured by the first and second recursion theorems, also called fixed point theorems. The second recursion theorem is more important and more "intensional", taking the shape of the (relative) code more into account.

The real surprise is that Gödel-Löbian self-reference, by its clear and transparent splitting along truth (axiomatised by the modal logic G*, which I call "the theology of the sound machine") and proof (axiomatised by G), justifies again, like Theaetetus, and unlike Socrates, the modes of truth described since Parmenides, Plato, Moderatus of Gades, Plotinus… Damascius.

For example, written in modal logic, consistency (NOT PROVABLE FALSE) can be written ~[]f, or equivalently <>t, and Gödel's second incompleteness theorem is <>t -> ~[]<>t, or equivalently <>t -> <>[]f. Löb's theorem ([]([]p->p)->[]p) generalises this, and is the main axiom of both G and G*. G* has all the theorems of G, plus []A -> A, but has no necessitation rule (you cannot infer []A from A).
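
For reference, a compact summary of the two systems in their standard presentation (as in Boolos's "The Logic of Provability"):

G (also written GL):
  axioms: all propositional tautologies
          K:    [](p -> q) -> ([]p -> []q)
          Löb:  []([]p -> p) -> []p
  rules:  modus ponens, and necessitation (from A, infer []A)

G*:
  axioms: all theorems of G, plus the reflection scheme []A -> A
  rules:  modus ponens only (no necessitation)

G axiomatises what the machine can prove about its own provability; G* axiomatises what is true about it (Solovay's two arithmetical completeness theorems).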

The main point is that G1* (G* + p->[]p, for the mechanist restriction, as I have often explained) proves the equivalence of the five modes

p (truth)
[]p (provable, rationally believable)
[]p & p (rationally knowable)

[]p & <>t (observable)
[]p & <>t & p (sensible)

Yet G, the part of this that the machine can justify, does not prove any of those equivalences. The provable obeys G, the knowable gives a logic of knowledge (S4 + a formula by Grzegorczyk), and, as predicted both by thought experiment and by Plotinus, the "observable" obeys a quantum logic, and the sensible an intuitionistic quantum logic, which allows the quanta to be distinguished clearly, as first-person-plural sharable qualia, solving some difficulties in the "mind-body" problem.

This theory is justified for anybody accepting a digital physical computer brain transplant. 



but I would say that self-reference in the sense of intrinsic identity of an object explains qualitative properties of consciousness (qualia).

But what is an object? What is intrinsic identity? And why would that give qualia?



I imagine that every object has two kinds of identity: intrinsic identity (something that the object is in itself)

To be honest, I don't understand. To be sure, I like mechanism because it provides a clear explanation of where the physical appearance comes from, without making us speculate on some "physical" object which would be primary, as we have no evidence for this, and it makes the mind-body problem unsolvable.

Are you OK if your daughter marries a man who got an artificial digital brain after a car accident?



and extrinsic identity (relations of the object to all other objects). Intrinsic identity is something qualitative (non-relational), a quality that stands in relations to other qualities, so it seems like a natural candidate for the qualitative properties of consciousness.

This brings back essentialism.

Here, you might appreciate that the machine ([]p) is unable to define "[]p & p", except by studying a simpler machine than itself; she can then lift that theology, by faith in her own soundness, which she can neither prove nor even express in her language (by results analogous to the non-definability of truth: Tarski-Gödel, Thomason, Montague, ...).

The qualia appear to be measurable, but not communicable or rationally justifiable.

The universal+ machine knows that she has a soul, and she knows that she can refute *all* complete theories made about that soul. She knows already that her soul is NOT a machine, nor even anything describable in the third person, redoing Heraclitus and Brouwer, even Bergson, on that subject. S4Grz is an incredible product of G*: a formal theory of something that no machine can define or formalise without invoking a notion of truth, which is indeed a key for qualia, and knowledge.




All relations are instances of the similarity relation (similarities between qualities arising from common and different properties of the qualities). A particular kind of relation deserves a special mention: the composition relation, also known as the set membership relation in set theory, or the relation between a whole and its part (or between a combination of objects and an object in the combination). It gives rise to a special kind of relational identity of an object: the compositional identity, which is constituted by the relations of the object to its parts. In other words, it is the internal structure of the object - not to be confused with the intrinsic identity of the object, which is a non-structural quality! Set theory describes the compositional identity of all possible composite objects down to non-composite objects (instances of the empty set).

Formal set theories are examples of universal+ machines, and indeed, very useful to describe the phenomenology.

In the ontology, we cannot use the induction axioms, still less any infinity axioms; but in the phenomenology, we cannot live without them, and no infinity axioms can be rich enough to get the whole arithmetical truth, or the computer science truth.





Since all objects have an intrinsic identity, this is a panpsychist view but it seems important to differentiate between different levels or intensities of consciousness.

OK. You might need to say “no” to the doctor...

Bruno


 
Tononi/Tegmark: information is key

Study of neural correlates of consciousness suggests that the level or intensity of consciousness of an object depends on the complexity of the object's structure. There are two basic approaches to the definition of complexity: "disorganized" complexity (which is high in objects that have many different and independent (random) parts) and "organized" complexity (which is high in objects that have many different but also dependent (integrated) parts). It is the organized complexity in a dynamic form that seems important for the level of consciousness. Tononi's integrated information theory is based on such organized complexity though I don't know if his particular specification of the complexity is correct.
 
Dennett/Chalmers: function is key

From the evolutionary perspective it seems important for an organism to be able to create internal representations of external objects on different levels of composition of reality. Such representations reflect both the diversity and regularities of reality and need to be properly integrated to have a unified, coordinated influence on the organism's behavior. So the organized complexity of the organism's representations seems to be related to its functionality.





Bruno Marchal

unread,
Jul 13, 2021, 10:51:28 AM7/13/21
to everyth...@googlegroups.com

> On 4 Jul 2021, at 21:17, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
>
>
> On 7/4/2021 4:46 AM, Bruno Marchal wrote:
>>> On 19 Jun 2021, at 13:17, smitra <smi...@zonnet.nl> wrote:
>>>
>>> Information is the key. Conscious agents are defined by precisely that information that specifies the content of their consciousness. This means that a conscious agent can never be precisely located in some physical object, because the information that describes the conscious experience will always be less detailed than the information present in the exact physical description of an object such as a brain. There is always going to be a very large self-localization ambiguity due to the large number of different possible brain states that would generate exactly the same conscious experience. So, given whatever conscious experience the agent has, the agent could be in a very large number of physically distinct states.
>>>
>>> The simpler the brain and the algorithm implemented by the brain, the larger this self-localization ambiguity becomes because smaller algorithms contain less detailed information. Our conscious experiences localize us very precisely on an Earth-like planet in a solar system that is very similar to the one we think we live in. But the fly walking on the wall of the room I'm in right now may have some conscious experience that is exactly identical to that of another fly walking on the wall of another house in another country 600 years ago or on some rock in a cave 35 million years ago.
>>>
>>> The conscious experience of the fly I see on the wall is therefore not located in the particular fly I'm observing.
>
> This seems to equate "a conscious experience" with "an algorithm”.

Not sure if you are asking Saibal or me.

Obviously, it is as wrong to identify consciousness with a brain as with an algorithm. It is the same error: a brain, or rather its mechanistically relevant part, is a finite word/program, written in some subset of the physical laws.



> But an algorithm is an extended thing that in general has branches representing counterfactuals.

That's not an algorithm, but a computation. The counterfactuals are the differentiating branches of the computations.






>
>>> This is i.m.o. the key thing you get from identifying consciousness with information, it makes the multiverse an essential ingredient of consciousness. This resolves paradoxes you get in thought experiments where you consider simulating a brain in a virtual world and then argue that since the simulation is deterministic, you could replace the actual computer doing the computations by a device playing a recording of the physical brain states. This argument breaks down if you take into account the self-localization ambiguity
>
> What is this "self" of which you speak?

Again, ask Saibal. I did not write the text above. I never use the term "information", because it is confusing: we use it with its first-person meaning and its third-person meaning all the time, and the whole mind-body problem consists in handling all this carefully, taking into account all the modes of self implied by incompleteness.

The theory is there. It is not known because physicists come with the right question and the wrong metaphysics, and logicians come up with the right metaphysics but the wrong question. I am afraid also that the logicians' reaction to Penrose's use of Gödel's theorem (against Mechanism) has deterred the physicists from even studying logic and Gödel's theorem.

Yet, with mechanism, we get a simple explanation of both qualia and quanta, and of their mathematical relations, but also of their necessarily non-mathematical relations, without any ontological commitment except for at least one universal machinery (to get machines, the numbers are enough).

Bruno

Tomas Pales

unread,
Jul 13, 2021, 8:19:43 PM7/13/21
to Everything List
On Tuesday, July 13, 2021 at 4:39:25 PM UTC+2 Bruno Marchal wrote:
but I would say that self-reference in the sense of intrinsic identity of an object explains qualitative properties of consciousness (qualia).

But what is an object? What is intrinsic identity? And why would that give qualia?

I think reality consists of two basic kinds of object: collections and properties. Collections are also known as combinations or sets. Properties are also known as universals or general/abstract objects. For example, a particular table is a collection, but table in general, or table-ness, is a property (that is possessed by particular tables). Collections have parts while properties have instances. Properties as real objects are controversial; many people think they are just words (yet these words apparently refer to something in reality). Collections as real objects are somewhat controversial too; people might hesitate to regard a collection of tables as a real object even though they don't mind regarding a single table as a real object despite it being a collection too (of atoms, for example).

Collections are rigorously defined in various axiomatizations of set theory. All of these axiomatizations refer to real collections as long as they are consistent (which may be impossible to prove due to Godel's second incompleteness theorem). Pure collections are built up only from collections, with empty collections at the bottom (or maybe some collections have no bottom, as long as this is consistent). Properties can constitute collections too but these would not be pure collections since properties are not collections. More general properties have instances in less general properties (for example "color" has an instance in "green") and ultimately they have instances in collections (for example "green" has an instance in a particular green table); instantiation ends in collections (for example a particular table is not a property of anything and so it has no instances); this is the reason why set theory can represent all mathematical properties as collections. All properties are ultimately instantiated as collections.
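
As a sketch of "built up only from collections, with empty collections at the bottom" (illustrative only, using Python frozensets as finite pure sets and von Neumann's coding of the natural numbers):

empty = frozenset()

def successor(s):
    # von Neumann successor: n + 1 = n U {n}
    return s | frozenset([s])

zero  = empty              # {}
one   = successor(zero)    # { {} }
two   = successor(one)     # { {}, {{}} }
three = successor(two)

print(zero < one < two < three)   # True: proper-subset order tracks number order
print(two in three)               # True: membership plays the role of "less than"

Every object here is a pure collection, and unwinding any of them bottoms out in the empty collection.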

As for intrinsic identity, it is something that an object is in itself, as opposed to its relations to other objects. Without the intrinsic identity there would be nothing standing in relations, so there would be no relations either. Intrinsic identities and extrinsic identities (relations) are inseparable. Surely there are relations between relations but ultimately relations need to be grounded in intrinsic identities of objects. Since qualia are not relations or structures of relations but something monolithic, indivisible, unstructured, they might be the intrinsic identities. Note that intrinsic identities and relations are dependent on each other since they constitute two kinds of identity of the same object. That could explain why qualia like colors are mutually dependent on relations like wavelengths of photons or neural structures.

I imagine that every object has two kinds of identity: intrinsic identity (something that the object is in itself)

To be honest, I don’t understand. To be sure, I like mechanism because it provide a clear explanation of where the physical appearance comes from, without having us ti speculate on some “physical” object which would be primary, as we have no evidence for this, and it makes the mind-body problem unsolvable.

Numbers are relations. For example, number 2 is a relation between 2 objects. If there were just relations and no intrinsic identities of objects then there would be relations between nothings. For example, there would be 2 nothings, which seems absurd.

Lawrence Crowell

unread,
Jul 13, 2021, 10:05:58 PM7/13/21
to Everything List
There is no reason consciousness is restricted to only one of these three. For that matter, self-reference in the Turing machine sense involves information. Function is just another way of thinking of an algorithm.

LC
