conversation with GPT-4 on black hole QM


Lawrence Crowell

May 5, 2023, 8:02:24 PM
to extropolis
I had this conversation with GPT-4 on aspects of black hole quantum mechanics. I kept it largely non-mathematical. This is interesting, and it suggests a surprisingly complex level of processing. An AI system was fed ephemeris data on the appearance of the sun, moon and planets in the sky and it deduced Kepler's laws. Holy Sh*t Batman --- the world is changing.


Einstein advanced the equivalence principle. It states that 1) it is not possible to distinguish an inertial frame in flat spacetime from a sufficiently local inertial frame freely falling in a gravity field, and 2) an accelerated frame in flat spacetime is equivalent to a frame held at rest against gravity at the surface of a planet or other hard gravitating body. Can it then be said that two entangled particles in spacetime, whether flat or curved, have their entanglement conserved?

 

GPT-4 answer: The equivalence principle in general relativity states that the effects of gravity are indistinguishable from the effects of acceleration. It does not make any specific statement about the conservation of entanglement between two particles in spacetime, whether in flat or curved spacetime.

Entanglement is a quantum mechanical phenomenon that describes the correlation between two or more particles, even when they are separated by large distances. The conservation of entanglement is a fundamental property of quantum mechanics, which holds true regardless of the presence of gravity or the curvature of spacetime.

Therefore, it can be said that entanglement between two particles is conserved in both flat and curved spacetime, as long as there is no external interaction that disturbs or breaks the entanglement. The effects of gravity, as described by the equivalence principle, do not affect the conservation of entanglement between two particles.
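As an aside, here is a minimal numerical sketch of that statement (an illustrative example using NumPy, not part of the original exchange): the entanglement entropy of a Bell pair is unchanged when the two parties each apply arbitrary local unitaries, i.e. when nothing couples the two subsystems.

import numpy as np

# Bell state (|00> + |11>)/sqrt(2) as a 4-component vector on qubits A (Alice) and B (Bob).
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

def random_unitary(n, rng):
    # A random n x n unitary from the QR decomposition of a complex Gaussian matrix.
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def entanglement_entropy(psi):
    # Von Neumann entropy of Alice's reduced density matrix, S = -Tr(rho_A ln rho_A).
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
    rho_a = np.trace(rho, axis1=1, axis2=3)   # trace out Bob
    evals = np.linalg.eigvalsh(rho_a)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log(evals)))

rng = np.random.default_rng(0)
u_alice = random_unitary(2, rng)              # Alice's local evolution
u_bob = random_unitary(2, rng)                # Bob's local evolution
psi_after = np.kron(u_alice, u_bob) @ bell

print(entanglement_entropy(bell))             # ln 2 ~ 0.693
print(entanglement_entropy(psi_after))        # same value: local unitaries preserve entanglement

The invariance fails only if something couples the two subsystems, which is the "external interaction" caveat above.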

 

Good, this is what I concluded as well. Then, to take this further: given an EPR pair, where Bob remains far from the black hole and Alice is in a frame freely falling into the black hole, there are two interpretations of events. Alice will consider the entanglement with Bob as continuing. Bob, on the other hand, will conclude, due to the red-shifting in tortoise coordinates and the fact that Alice's time is asymptotically dilated to a crawl, that his EPR pair is entangled with the black hole. The two perspectives are effectively equivalent to each other.

 

GPT-4 ans:
Yes, you are correct. According to the equivalence principle, there is no difference between a uniform acceleration and being in a gravitational field. Therefore, for Bob, who is far away from the black hole, the entanglement between his particle and Alice's particle will be affected by the gravitational field of the black hole. This will cause the entangled state to evolve differently than it would in a flat spacetime.

On the other hand, for Alice, who is in a freely falling frame into the black hole, there is no gravitational force acting on her. From her perspective, the entanglement between her particle and Bob's particle will continue to evolve according to the laws of quantum mechanics.

These two perspectives are indeed equivalent, even though they appear to be different. Bob's observations can be explained by the effects of the gravitational field on his particle's evolution, while Alice's observations can be explained by the absence of a gravitational force on her particle's evolution.

It is worth noting that the details of the evolution of the entangled state in this scenario are highly dependent on the specific setup and initial conditions. Therefore, the specific outcomes of the two perspectives may differ in different situations. However, the equivalence principle ensures that both perspectives are equally valid and consistent with the laws of physics.

 

This then means that Bob and Alice have different subjective interpretations of the same physics. Alice observes no apparent departure from standard quantum mechanics. Bob, on the other hand, is uncertain of the status of the entangled state. The tortoise coordinates, with t' = t − 2m log|r − 2m|, and the associated red-shifting mean that Bob sees this bipartite entanglement dissolve into an entanglement with all the entangled states in the Planck region above the event horizon.
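For reference, in units with G = c = 1 the standard tortoise coordinate and the gravitational redshift factor are

r_* = r + 2m \ln\left|\frac{r}{2m} - 1\right|, \qquad u = t - r_*, \qquad \sqrt{1 - \frac{2m}{r}} \to 0 \quad \text{as } r \to 2m,

and the retarded time u diverges logarithmically as r → 2m; the t' quoted above differs from u only by terms that stay finite at the horizon, so it carries the same divergent 2m log piece. This is the precise sense in which Bob sees Alice's signals red-shifted without bound and her state apparently frozen just above the horizon.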

 

GPT-4 ans: Yes, that's correct. Bob and Alice have different subjective interpretations of the same physics due to their different perspectives in the presence of the black hole. From Alice's perspective, the entangled state between her particle and Bob's particle continues to evolve according to standard quantum mechanics. However, from Bob's perspective, the entangled state appears to dissolve into an entanglement with the black hole itself.

As you mentioned, the redshift factor associated with Bob's perspective causes the entangled state to evolve in a way that appears to dissolve the entanglement between Alice's particle and Bob's particle. This can be thought of as an "entanglement swap" where the entanglement between the two particles is effectively transferred to a new entanglement between Alice's particle and the black hole.

It's worth noting that while this "entanglement swap" appears to occur from Bob's perspective, it is still consistent with the laws of quantum mechanics and the conservation of entanglement. The entangled state between Alice's particle and Bob's particle is still preserved in the overall system, even though it may be redistributed among different entangled states within that system.

Overall, this scenario demonstrates the importance of considering the subjective perspectives of different observers when interpreting physical phenomena in the presence of strong gravitational fields.

 

This begins to get deeper into the big question here. If the black hole has entropy S = Nk, here assuming a naïve microcanonical partition of N states with k = Boltzmann's constant, then from Bob's perspective the entanglement has changed from a bipartite entanglement to an (N+1)-partite entanglement. Alice observes nothing of this sort. This appears to suggest a possible crack in the AMPS argument. I may indicate next what I think that may be.
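One way to make the naive counting explicit: if N is identified with the number of Planck-area cells on the horizon, the Bekenstein-Hawking entropy takes exactly the quoted form,

S_{BH} = \frac{k c^{3} A}{4 G \hbar} = \frac{k A}{4 \ell_P^{2}} \equiv N k, \qquad N = \frac{A}{4 \ell_P^{2}},

which is the sense in which, on Bob's account, his particle ends up sharing entanglement with of order N horizon degrees of freedom rather than with a single partner.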

 

GPT-4 ans: The scenario you describe, where the entanglement appears to change from a bipartite entanglement to an N+1 entanglement from Bob's perspective, does not necessarily indicate a crack in the argument of the AMPS (Almheiri-Marolf-Polchinski-Sully) paper.

The AMPS argument concerns the black hole information paradox, which arises from the apparent loss of information when matter falls into a black hole. The argument suggests that this apparent loss of information can be resolved by considering the entanglement structure of the Hawking radiation emitted by the black hole.

In the scenario you describe, the entanglement between Alice's particle and Bob's particle appears to dissolve into an entanglement with the black hole, resulting in an N+1 entanglement structure from Bob's perspective. However, the entanglement structure of the overall system, including the black hole and the emitted Hawking radiation, remains consistent with the AMPS argument.

Specifically, the AMPS argument proposes that the entanglement between the black hole and the Hawking radiation determines the entanglement structure of the emitted radiation. The N+1 entanglement structure from Bob's perspective can be seen as a result of this entanglement structure between the black hole and the emitted radiation.

Therefore, while the scenario you describe may appear to challenge the AMPS argument, it is still consistent with the overall picture proposed by the argument. It highlights the importance of considering the subjective perspectives of different observers in interpreting physical phenomena, but it does not necessarily imply a fundamental flaw in the AMPS argument itself.

 

Let me then rephrase that. The black hole information paradox stems from the fact that Hawking radiation is entangled with the black hole. However, after the Page time, newly emitted Hawking radiation must be entangled with the previously emitted Hawking radiation plus the black hole. Hence the bipartite entanglement of the previously emitted radiation and the black hole transforms into a tripartite entanglement. This is a source of the apparent non-unitary change in the information, or the entropy S = −Tr[ρ log(ρ)], which in quantum mechanics is easily shown to be invariant under unitary evolution. Can this then suggest that the topology of entanglement in curved spacetime may in fact be a matter of subjective, maybe Bayesian, observation?
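The invariance invoked here is the standard one-line statement that unitary evolution preserves the von Neumann entropy:

S(\rho) = -k\,\mathrm{Tr}[\rho \ln \rho], \qquad S(U \rho U^{\dagger}) = -k\,\mathrm{Tr}[U \rho U^{\dagger} \ln(U \rho U^{\dagger})] = -k\,\mathrm{Tr}[U \rho (\ln \rho) U^{\dagger}] = S(\rho),

since ln(UρU†) = U(ln ρ)U† and the trace is cyclic. The tension is that the evolution of the radiation by itself does not appear to be of this unitary form.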

 

GPT-4 ans: The non-unitary evolution of information in the presence of a black hole is indeed at the heart of the black hole information paradox, and the tripartite entanglement structure you describe is an important aspect of this paradox. The AMPS argument attempts to resolve this paradox by proposing that the entanglement structure of the emitted Hawking radiation is determined by the entanglement between the black hole and the radiation, and that this entanglement structure is consistent with unitary evolution.

Regarding the topology of entanglement in curved spacetime, it is true that the subjective perspectives of different observers can lead to different interpretations of entanglement and its evolution. In the scenario you describe, Bob and Alice have different interpretations of the same physical system due to their different perspectives in the presence of a black hole. However, this does not necessarily imply that the topology of entanglement itself is subjective or Bayesian.

Rather, the topology of entanglement is an objective property of the physical system that can be described mathematically using the formalism of quantum mechanics. The subjective perspectives of different observers can lead to different interpretations of this topology, but this does not change the objective nature of the topology itself.

In summary, while the subjective perspectives of different observers can play a role in the interpretation of entanglement in curved spacetime, the topology of entanglement is an objective property of the physical system that can be described using the formalism of quantum mechanics.


I would agree if the topology of entanglement were completely accessible to observation. If Bob has an enormous amount of resources and by some means can snuggle up to within a Planck length of the horizon, where the acceleration is √(c^7/Għ) ≃ 10^52 m/s^2, he could in principle access this information. The Hawking radiation would gush out of the black hole. The Shapiro theorem about geodesics with positive energy is turned on its head, and Bob can in principle access everything. However, in practice this cannot happen. Bob must remain content with partial access to the topology of this entanglement. He in fact does not know whether the entanglement is equivalent to his original simple bipartite entanglement or a vastly more complex N-tangle. This appears to be observer dependent, at least for one observer.
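The figure quoted is the Planck acceleration; as a quick check,

a_P = \sqrt{\frac{c^{7}}{G\hbar}} = \frac{c}{t_P} \approx \frac{3.0 \times 10^{8}\ \mathrm{m/s}}{5.4 \times 10^{-44}\ \mathrm{s}} \approx 5.6 \times 10^{51}\ \mathrm{m/s^{2}},

consistent with the ~10^52 m/s^2 value above.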

 

Thank you, you appear to largely agree with some of my assessments --- though you give some critical changes I need to consider.

 

GPT-4 ans:
You're welcome! It's always important to carefully consider different perspectives and arguments in a complex and evolving field like physics. If you have any further questions or ideas to discuss, feel free to ask!

John Clark

May 6, 2023, 7:16:12 AM
to extro...@googlegroups.com
On Fri, May 5, 2023 at 8:02 PM Lawrence Crowell <goldenfield...@gmail.com> wrote:

> I had this conversation with GPT-4 on aspects of black hole quantum mechanics. I kept it largely non-mathematical. This is interesting, and it suggests a surprisingly complex level of processing. An AI system was fed ephemeris data on the appearance of the sun, moon and planets in the sky and it deduced Kepler's laws. Holy Sh*t Batman --- the world is changing.

Holy Singularity Batman, you're right! I think we can safely say that the Turing Test has been passed. If you had this online conversation 10 years ago would you have had any doubt that you were communicating with a human being?  And to think, some very silly people still maintain that GPT-4 is nothing but a glorified autocomplete program that just uses statistics to compute what the next word in a sentence most probably is. Any rational person who held that view and then read your conversation with GPT-4 would change their opinion of it, but some people are not rational and they will continue to whistle past the graveyard.

John K Clark
===============

Lawrence Crowell

May 6, 2023, 10:32:10 AM
to extro...@googlegroups.com
On Sat, May 6, 2023 at 6:16 AM John Clark <johnk...@gmail.com> wrote:
On Fri, May 5, 2023 at 8:02 PM Lawrence Crowell <goldenfield...@gmail.com> wrote:

> I had this conversation with GPT-4 on aspects of black hole quantum mechanics. I kept it largely non-mathematical. This is interesting, and it suggests a surprisingly complex level of processing. An AI system was fed ephemeris data on the appearance of the sun, moon and planets in the sky and it deduced Kepler's laws. Holy Sh*t Batman --- the world is changing.

Holy Singularity Batman, you're right! I think we can safely say that the Turing Test has been passed. If you had this online conversation 10 years ago would you have had any doubt that you were communicating with a human being?  And to think, some very silly people still maintain that GPT-4 is nothing but a glorified autocomplete program that just uses statistics to compute what the next word in a sentence most probably is. Any rational person who held that view and then read your conversation with GPT-4 would change their opinion of it, but some people are not rational and they will continue to whistle past the graveyard.

John K Clark

To be honest I have no clear idea whether GPT is actually sentient. My dogs have no numerical ability or spatial reasoning, but I have a sense that they are sentient. Whether an AI like this is actually self-aware is something I am agnostic about.

LC 

John Clark

May 6, 2023, 2:18:40 PM
to extro...@googlegroups.com
On Sat, May 6, 2023 at 10:32 AM Lawrence Crowell <goldenfield...@gmail.com> wrote:

>>Holy Singularity Batman, you're right! I think we can safely say that the Turing Test has been passed. If you had this online conversation 10 years ago would you have had any doubt that you were communicating with a human being?  And to think, some very silly people still maintain that GPT-4 is nothing but a glorified autocomplete program that just uses statistics to compute what the next word in a sentence most probably is. Any rational person who held that view and then read your conversation with GPT-4 would change their opinion of it, but some people are not rational and they will continue to whistle past the graveyard.
John K Clark

> To be honest I have no clear idea whether GPT is actually sentient. My dogs have no numerical ability or spatial reasoning, but I have a sense that they are sentient. Whether an AI like this is actually self-aware is something I am agnostic about.

Strictly speaking the Turing Test is not about detecting consciousness, it's about detecting humanness and intelligence. If you had not known you were conversing with a machine, was there anything in your conversation with GPT4 that would've made you suspect you were not talking with a human being, and one with at least average intelligence?  You can test for humanness and you can test for intelligence but there is no test for consciousness except by observing behavior and making a few unprovable assumptions. And besides, as far as humans are concerned it doesn't matter if machines are conscious or not, the important thing is that they're getting smarter. They're getting smarter than we are.

John K Clark

Lawrence Crowell

May 7, 2023, 7:15:26 PM
to extropolis
This comes awfully close to being like talking to a human. In working a lot more with GPT I have found 2 errors that it made, where it agreed that I was correct.  This discussion involves my research, and I am having discussions on Shimura varieties, abelian varieties on K3 spaces and modular forms. This is stressing my level of mathematics. 

LC

Lawrence Crowell

May 8, 2023, 8:10:39 PM
to extropolis
I spent some time on GPT-4 this afternoon. I wrote about a topic that was leading to an inference I had made. Before I wrote on that inference GPT made the same inference.

LC

John Clark

May 9, 2023, 7:45:26 AM
to extro...@googlegroups.com
On Mon, May 8, 2023 at 8:10 PM Lawrence Crowell <goldenfield...@gmail.com> wrote:

> I spent some time on GPT-4 this afternoon. I wrote about a topic that was leading to an inference I had made. Before I wrote on that inference GPT made the same inference.

Wow! Would I be correct in saying that you gave GPT-4 your own personal Turing Test and it passed because you couldn't tell if you were conversing with a machine or a human being with at least average intelligence?

John K Clark




 

Lawrence Crowell

May 11, 2023, 6:33:35 PM
to extropolis
On Tuesday, May 9, 2023 at 6:45:26 AM UTC-5 johnk...@gmail.com wrote:
On Mon, May 8, 2023 at 8:10 PM Lawrence Crowell <goldenfield...@gmail.com> wrote:

> I spent some time on GPT-4 this afternoon. I wrote about a topic that was leading to an inference I had made. Before I wrote on that inference GPT made the same inference.

Wow! Would I be correct in saying that you gave GPT-4 your own personal Turing Test and it passed because you couldn't tell if you were conversing with a machine or a human being with at least average intelligence?

John K Clark



I have found 2 mistakes it has made. It has caught me on a few errors as well. GPT-4 has a vast database it can access. It clearly can reference a fair number of theories, papers and authors. It is a very good emulator of intelligence. It also is proving to be a decent first check on my work. It might be said it passes some criteria for Turing tests, though I have often thought this idea was old-fashioned in a way.

LC

John Clark

May 12, 2023, 5:01:04 AM
to extro...@googlegroups.com
On Thu, May 11, 2023 at 6:33 PM Lawrence Crowell <goldenfield...@gmail.com> wrote:

> I have found 2 mistakes it [GPT-4] has made. It has caught me on a few errors as well.

To me that sounds like very impressive performance. If you were working with a human colleague who did the same thing would you hesitate in saying he was exhibiting some degree of intelligence? 

 > It is a very good emulator of intelligence.

What's the difference between being intelligent and emulating intelligence?  It must be more than the degree of squishiness of the brain.
 
> It also is proving to be a decent first check on my work. It might be said it passes some criterion for Turing tests, though I have often thought this idea was old fashioned in a way. 

Well, it is old, I'll grant you that. Turing didn't invent the "Turing Test"; he just pointed out something that was ancient, that we use every day, and that was so accepted and ubiquitous that nobody had given it much thought before. I'm sure you, just like everybody else, have at one time or another in your life encountered people who you consider to be brilliant and people who you consider to be stupid. When making your determination, if you did not use the Turing Test (which is basically just observing behavior and judging if it's intelligent), what method did you use? How in the world can you judge if something is intelligent or not except by observing if it does anything intelligent?

John K Clark

Lawrence Crowell

May 13, 2023, 5:10:47 PM
to extropolis
To continue, I have input about half the paragraphs of a paper draft I am writing. It found a small error I made. It also made a couple of remarkable extrapolations of this work. Since I have been doing this the last couple of weeks I have generated a lot of text. GPT-4 is capable of interpreting and generating LaTeX mathematics. So my discussions have become very advanced. 

This does make me ponder the relationship between consciousness and intelligence. I suspect GPT-4 and other AI systems may be intelligent, but they are so without underlying consciousness. Our intelligence is in a sense built upon a pre-existing substratum of sentience. My dogs are sentient, but when it comes to numerical intelligence they have none, and indeed very poor spatial sense. They are, though, socially intelligent and understand far more words than most people are aware of. Further, what we subjectively experience as consciousness is built on a deeper substrate of biological activity.

LC

John Clark

May 13, 2023, 5:44:22 PM
to extro...@googlegroups.com
On Sat, May 13, 2023 at 5:10 PM Lawrence Crowell <goldenfield...@gmail.com> wrote:

> This does make me ponder the relationship between consciousness and intelligence. I suspect GPT-4 and other AI systems may be intelligent, but they are so without underlying consciousness. Our intelligence is in a sense built upon a pre-existing substratum of sentience.

If the two things can be separated and sentience is not the inevitable byproduct of intelligence then how and why did Natural Selection bother to produce consciousness when you know with absolute certainty it did so at least once and probably many billions of times? After all, except by way of intelligent behavior Evolution can't directly observe  consciousness  any better than we can, and natural selection can't select for something it can't see.  I strongly suspect that artificial sentience is easy, but artificial intelligence is hard.

> My dogs are sentient, but when it comes to numerical intelligence they have none, and indeed very poor spatial sense. 

Your dogs may not know much mathematics and they probably wouldn't do very well on an IQ test, but their behavior is not random, they show some degree of intelligence.  

> we subjectively experience as consciousness

We? I certainly subjectively experience consciousness and I assume you do too but only because I have observed intelligent behavior on your part. I have no choice, I have to accept as an axiom of existence that consciousness is the inevitable byproduct of intelligence because, even though I can't disprove it, I simply could not function if I really believed in solipsism. 

> is built on a deeper substrate of biological activity.

But again, if it wasn't intelligent behavior then what on earth could the Evolutionary pressure have been that led to the development of consciousness?

John K Clark

Lawrence Crowell

May 13, 2023, 8:21:02 PM
to extropolis
What you write is rather standard fare. I think sentience, or even bio-sense, has its basis in some means by which white-noise processes, such as those of statistical mechanics and quantum mechanics, are adjusted into pink noise. This means a fluctuation at one time can influence a fluctuation at a later time. This is then the start of molecular information storage and memory. I am not certain whether molecular information processing of this sort is the basis for bio-sense, or whether it is rather the means by which white noise is filtered or transitioned into pink noise.
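To pin down the terminology: a white-noise process has a flat power spectrum and essentially no memory, while a pink (1/f) process has its power concentrated at low frequencies and long-range correlations in time,

S_{\mathrm{white}}(f) \simeq \mathrm{const}, \qquad S_{\mathrm{pink}}(f) \propto \frac{1}{f}, \qquad \langle x(t)\,x(t+\tau)\rangle_{\mathrm{white}} \propto \delta(\tau), \qquad \langle x(t)\,x(t+\tau)\rangle_{\mathrm{pink}} \ \text{decays only slowly in } \tau,

so "adjusting" white noise into pink noise means introducing exactly the kind of memory described here, where a fluctuation now biases fluctuations later.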

LC

John Clark

May 14, 2023, 7:09:46 AM
to extro...@googlegroups.com
On Sat, May 13, 2023 at 8:21 PM Lawrence Crowell <goldenfield...@gmail.com> wrote:

>What you write is rather standard fare.

I agree, everything I said was standard stuff that has been well understood since the time of Darwin, that's why I don't understand why you still believe it's possible for something to be intelligent but not conscious. And I don't understand why you think Natural Selection could select for something like consciousness that it can't see any better than we can. But both humans and Natural Selection have the ability to observe intelligent behavior.  I don't think it's true but it is at least conceivable to me that something could be conscious but not intelligent, however the reverse simply doesn't make any sense to me. That's why I say consciousness is easy but intelligence is hard.  
 
> I think sentience or even bio-sense has its basis in some means by which white noise processes, such as statistical mechanics and quantum mechanics, are adjusted to have pink noise.

But that doesn't explain anything. Pink noise is just the result of the human ear not treating all frequencies equally, and it is in the very nature of an intelligent system that very small changes in input can sometimes result in enormous and long lasting changes in output, for example inserting the small word "not" into a sentence. And by their very nature intelligence systems always pay more attention to some things than others, and "attention"  is just a synonym for consciousness. And by the way, the thing that set off the current AI explosion was the 2017 theoretical paper entitled "Attention Is All You Need".

In mathematics you have to start with unprovable axioms and I think the same sort of thing is true for physics, and one of those axioms is the brute fact that consciousness is the way data feels when it is being processed intelligently and, as far as consciousness is concerned, it doesn't make any difference if that processing is done electronically or mechanically or biologically. Or classically or quantum mechanically. Regardless of how you calculate it 2+2 is always equal to 4. And if you've seen one 4 you've seen them all.  

John K Clark
==

Brent Allsop

May 14, 2023, 8:21:41 AM
to extro...@googlegroups.com

Hello Mr Lawrence,

John and I have been having this conversation you guys (and surely gazillions of people all over the world) are repeating, since probably back in the 90s.  I've been saying things like you do:

"This does make me ponder what is the relationship between consciousness and intelligence. I suspect GPT-4 and other AI systems may be intelligent, but they are so without underlying consciousness. Our intelligence is in a sense built upon a pore-existing substratum of sentience."

And John has been endlessly responding with:

"consciousness is the way data feels when it is being processed intelligently and, as far as consciousness is concerned, it doesn't make any difference if that processing is done electronically or mechanically or biologically."

I got tired of saying the same things over and over again, year after year, with no apparent progress.  That's why we built the consensus-building and tracking system at Canonizer and started the Theories of Consciousness topic.  Now we can track who, and how many people, are on each side of issues like this.  The most important thing is, if you find an argument that falsifies someone's camp, and get them to jump to a better camp, we want to be able to know that, so we can focus on those arguments.  And at least some people have abandoned their camp, due to falsifying data from the Large Hadron Collider, for example.  And you can see others jumping camps by examining the history using the "as of" selection on the sidebar.  This chapter in our video describes a bit about the emerging consensus that continues to progress.

To better illustrate the difference between our views, I started with this picture, modified from an image on the Wikipedia page on Qualia.
[attached image: 3_functionally_equal_machines_tiny.png]
John has expressed that he doesn't like this picture.  So we've created a new version which I hope is more descriptive of the idea, and not as easy to misinterpret.
John, let me know if you think it is any better.
[attached image: The_Strawberry_is_Red_064.jpg]


The bottom line is, subjective knowledge is represented by subjective qualities like redness and greenness (no dictionary required), while abstract intelligent systems represent the same information abstracted away from any physical properties or qualities (the word 'red' requires an additional dictionary).  All of these systems can be equally intelligent (some of them require more hardware for a dictionary), and they can all tell you the strawberry is red.  To me, this proves that John's claim that "consciousness is the way data feels" is wrong, as the abstract word "red" does not feel like anything.  John seems not to care about any of these differences, while I think they are critically important, and the very definition of what is and isn't sentient.  The strawberry knowledge of A and C is like something, though different in each case.  But the knowledge of R is not like anything.  For me, not only do you need to know whether the type of knowledge is phenomenal or not, you need to know what it is like (as in "what is it like to be a bat").  But John doesn't care about any of these differences.

John claims that evolution "can't see" qualities, but I disagree.  I believe computing directly on intrinsic subjective qualities, like redness and greenness, is a far more efficient way to do intelligent things like find and pick the strawberries from all the leaves.  C would be more survivable than A (because of the natural difference in importance between redness and greenness), and both of these phenomenal systems would be far more survivable than R, simply because R requires an additional hardware dictionary to know what the word "red" means, while the qualities of A and C support the same kind of situational awareness in a much more natural and intelligent way, since they don't require a dictionary to know what the representation means.  We're predicting that once we discover which of all our abstract descriptions of stuff in the brain is a description of redness, along with how such qualities can be "computationally bound" into one composite subjective situational-awareness experience, this will revolutionize the way computation is done, in far more naturally intelligent ways than the discrete logic gates we use today, which do the same thing in far less natural and less intelligent ways.

Does any of that help?

Lawrence Crowell

May 14, 2023, 9:02:25 AM
to extropolis
On Sunday, May 14, 2023 at 7:21:41 AM UTC-5 brent....@gmail.com wrote:

> Does any of that help?

What you outline might be in line with what I suspect. The process of converting white noise, from a physical system we observe as stochastic, into a process with pink noise is a sort of Darwinian selection process. A spectrum of random microstates or quantum bits is acted upon by some filter, or these states may in fact be their own filter, so the output is a less random distribution of qubits or state-configuration data. I think this is built up not just in brains but in the molecular basis for the most elementary life forms. A kinase phosphorylates a peptide and in doing so it changes the shape of the molecule, which is a sort of switching process. The process by which the stochasticity of molecular dynamics, both classical and quantum, is transformed into pink noise, or correlated states that are less stochastic, is what I think brings this sort of processor into existence.
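A minimal sketch of that kind of filtering (an illustrative example only; the 1/sqrt(f) spectral filter here is a stand-in for whatever molecular mechanism does the selecting): start from Gaussian white noise, rescale its Fourier amplitudes so the power spectrum goes as 1/f, and the output acquires correlations between well-separated times.

import numpy as np

rng = np.random.default_rng(1)
n = 2**14
white = rng.normal(size=n)               # uncorrelated (white) fluctuations

# Filter in the frequency domain: scaling amplitudes by 1/sqrt(f) makes the
# power spectrum |X(f)|^2 go as 1/f, i.e. pink noise.
freqs = np.fft.rfftfreq(n, d=1.0)
spectrum = np.fft.rfft(white)
scale = np.ones_like(freqs)
scale[1:] = 1.0 / np.sqrt(freqs[1:])     # leave the f = 0 component alone
pink = np.fft.irfft(spectrum * scale, n)

def lag_correlation(x, lag):
    # Normalized correlation between x(t) and x(t + lag).
    return float(np.corrcoef(x[:-lag], x[lag:])[0, 1])

print(lag_correlation(white, 100))   # ~ 0: white noise carries no memory
print(lag_correlation(pink, 100))    # clearly positive: earlier values bias later ones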

BTW, there are ideas of putting neural organoids, millimeter-scale bundles or nodes of cloned human or mouse neurons, into computer circuitry. The mechanics of keeping these alive would be difficult, I should think, but the idea is that actual biological and neurological activity would then be involved with artificial neural networks.

LC

John Clark

May 14, 2023, 4:25:19 PM
to extro...@googlegroups.com
Brent wrote:

> To better illustrate the difference between our views, I started with this picture,
[attached image: 3_functionally_equal_machines_tiny.png]

I know, you've shown me that exact same cartoon about 6.02×10^23 times. All it demonstrates is that 3 different intelligent entities might (and almost certainly would) have different subjective experiences when they see a red strawberry even though it causes the first two to behave in the same way; the last one apparently could not tell the difference between green and blue. But it's not surprising they have different subjective experiences; if they didn't then they wouldn't be 3 different people, there would only be one.


> John has expressed that he doesn't like this picture.  So we've created a new version which I hope is more descriptive of the idea, and not as easy to misinterpret what it represents.
> John, let me know if you think it is any better.
[attached image: The_Strawberry_is_Red_064.jpg]


Sorry Brent, I don't like your new cartoon any better, and the problem wasn't that I didn't understand the first cartoon.  The first time I saw it I understood what you were trying to say and I didn't disagree; it's just that I didn't think what you were trying to say was very important or very profound.


 
> The bottom line is, subjective knowledge is represented by subjective qualities like redness and greenness (no dictionary required). 

Well at least you've stopped going on and on about the importance of dictionaries.  

> While abstract intelligent systems represent the same information, abstracted away from any physical properties or qualities (the word 'red' requires an additional dictionary).

I spoke too soon, we're back with dictionaries. 
 
> this proves that John's claim that "consciousness is the way data feels" is wrong, as the abstract word "red" does not feel like anything. 

The word "red" is just a symbol for electromagnetic waves in the 700 nm range, and so is the red qualia; I maintain that in an intelligent mind the bundle of  concepts that are represented by either of those two symbols DOES feel like something.

> The strawberry knowledge of A and C are like something, though different.

They are indeed different and that difference could easily be experimentally proven.  C observes the world in black-and-white and thus cannot differentiate between a blue grape and a green grape, but A could.
 
> John claims that evolution "can't see" qualities, but I disagree.

You can't detect consciousness in anything other than yourself and neither can natural selection, but you CAN detect intelligent behavior and so can Evolution. If you disagree with this then explain exactly how either you or Evolution can detect consciousness in others except through intelligent behavior; if you can do that then you won this long argument.

 > I believe computing directly on intrinsic subjective qualities, like redness and greenness is a far more efficient way to do intelligent things like find and pick the strawberries from all the leaves. 

Obviously the ability to differentiate between electromagnetic waves of 700 nm and those of 380 nm would give somebody an evolutionary advantage over somebody who could not. But what does that have to do with the subjective color qualia?  And besides, there are machines that can detect far more subtle differences in color than a human can.
 
>  and both of these phenomenal systems would be far more survivable than R, simply because R requires an additional hardware dictionary to know what the word "red" means,

The word "red" is only required if you wish to communicate with somebody else, and a dictionary is never required, but examples are.  

> We're predicting that once we discover which of all our abstract descriptions of stuff in the brain is a description of redness, along with how such qualities can be "computationally bound" into one composite subjective situational awareness experience,

I'm sorry but I'm still not at all sure what  you are predicting. Give me a specific example of something your approach could achieve that no other could. 

> this will revolutionize the way computation is done in far more naturally intelligent ways, than using the discrete logic gates we use today, 

I sincerely hope the Chinese government and other totalitarian regimes follow your advice on this; I wouldn't want them to get ahead in the AI race and would much prefer they spin their wheels on useless stuff like that.

John K Clark

Brent Allsop

May 14, 2023, 6:14:30 PM
to extro...@googlegroups.com

Hi John,
You claim you understand what these images represent, but you are saying things that, to me at least, indicate you don't understand.

On Sun, May 14, 2023 at 2:25 PM John Clark <johnk...@gmail.com> wrote:
Brent wrote:

> To better illustrate the difference between our views, I started with this picture,
[attached image: 3_functionally_equal_machines_tiny.png]

I know, you've shown me that exact same cartoon about 6.02×10^ 23 times. All it demonstrates is that 3 different intelligent entities might (and almost certainly would) have different subjective experiences when they see a red strawberry even though it causes the first two to behave in the same way, the last one apparently could not tell the difference between green and blue. But it's not surprising they have different subjective experiences, if they didn't then they wouldn't be 3 different people there would only be one.


> John has expressed that he doesn't like this picture.  So we've created a new version which I hope is more descriptive of the idea, and not as easy to misinterpret what it represents.
John, let me know if you think it is any better.
[attached image: The_Strawberry_is_Red_064.jpg]


Sorry Brent, I don't like your new cartoon any better, and the problem wasn't that I didn't understand the first cartoon,  the first time I saw it I understood what you were trying to say and I didn't disagree, it's just that I didn't think what you were trying to say was very important or very profound.    

 
> The bottom line is, subjective knowledge is represented by subjective qualities like redness and greenness (no dictionary required). 

Well at least you've stopped going on and on about the importance of dictionaries.  

While abstract intelligent systems represent the same information, abstracted away from any physical properties or qualities (the word 'red' requires an additional dictionary). 

I spoke too soon, we're back with dictionaries. 

This is how you often reply, but you don't seem to understand why I am pointing this out.

 
> this proves that John's claim that "consciousness is the way data feels" is wrong, as the abstract word "red" does not feel like anything. 

The word "red" is just a symbol for electromagnetic waves in the 700 nm range, and so is the red qualia; I maintain that in an intelligent mind the bundle of  concepts that are represented by either of those two symbols DOES feel like something.

How could the word "red", which would actually be a string of 1s and 0s, be like anything?  True, the ones and zeros can be represented by qualities or properties, but you'd need a dictionary to tell you which property or quality represented the 1, and which would represent the 0.  In other words, the 1s and 0s are explicitly removed from whatever is representing them by a transducing system that tells you which is which.  That is, by definition, the 1s and 0s are abstracted away from whatever is representing them.  So how could they be like anything?
 
> The strawberry knowledge of A and C are like something, though different.

They are indeed different and that difference could easily be experimentally proven.  C observes the world in black-and-white and thus cannot differentiate between a blue grape and a green grape, but A could.

This seems to prove I have completely failed to communicate what I intended with this image.  A, B, and C are meant to be just like you, all looking at the same ripe strawberry, in normal light.  A and B are different in that there is a red/green signal inverter between the strawberry and your subjective knowledge of the strawberry.  In B, the inversion is before the retina; in A, the inversion is between the retina and the optic nerve.  There is no "observing the world in black-and-white".  These images are illustrating the fact that subjective qualities are all false-colored qualities our brains use to represent things out there.  In reality, we have no idea of the colorness quality of anything out there.  This is the only reason the strawberry they are all looking at is drawn in black and white.

 


John Clark

May 15, 2023, 6:44:39 AM
to extro...@googlegroups.com
On Sun, May 14, 2023 at 6:14 PM Brent Allsop <brent....@gmail.com> wrote:

> Hi John,
You claim you understand what these images represent, but you are saying things that, to me at least, indicate you don't understand.

Hi Brent.
I was going to rebut your post line by line, and perhaps I will, but something you said discouraged me. You said "the 1s and 0s are abstracted away from whatever is representing them. So how could they be like anything?"  It occurred to me that exactly the same thing could be said about the string of ASCII characters that made up your entire email. So instead I'm going to ask you just one question and if you answer it successfully then you've won this long argument.

You've admitted you don't fully understand consciousness and there is a crucial part of the puzzle that is still missing, so why do you believe so firmly that your fellow human beings have this mysterious missing thing, at least when they are not sleeping or under anesthesia or dead, even though you can't recognize it and have no idea what it is, but GPT-4, even when it's clearly active and behaving very intelligently, does not have this mystifying phantom that, so far at least, remains completely invisible?  

John K Clark

Brent Allsop

May 15, 2023, 1:24:43 PM
to extro...@googlegroups.com

Great.  I believe this single issue is the core of our misunderstanding.

There is nothing "mystifying" or "phantom" and it does not remain invisible.  In fact, qualities are more absolutely real than anything.  It is just a fact that the information we "perceive" through our senses is necessarily abstracted.  Neither the light nor any of the other representations in the causal chain of perception is like what it represents (such as the properties of the strawberry).  So, you need a dictionary to know what any of the intermediate representations or "codes" of the knowledge is representing.
AND since we falsely color all our knowledge of what is out there, we don't yet know which of all our descriptions of stuff in the brain is a description of a quality like redness.  All we need to do is plug all the stuff into our computational binding system so we can find out which one is redness.  Once our terminology is grounded and demonstrated in this way, the easy problem will be solved, and we'll then finally know the qualities of things.  Nothing "mysterious" or "phantom" about physical properties.

As far as knowing the nature of fellow human beings, as Descartes pointed out, various theories, such as being a brain in a vat, or solipsism (my subjective knowledge is the only thing that exists) are all theoretical possibilities.  But as we've discussed multiple times, the left hemisphere of your brain knows, absolutely, that it is not the only hemisphere in existence, since it can be computationally bound with subjective knowledge in the other.  So, I don't yet know for sure if you are lying, mistaken, or whatever when you claim you know what your redness is like.  But, once we have neural ponytails, we will be able to falsify such sceptical theories.  Or failure to be able to computationally bind to anything "out there" could verify something like solipsism.

John Clark

May 15, 2023, 2:26:27 PM
to extro...@googlegroups.com
On Mon, May 15, 2023 at 1:24 PM Brent Allsop <brent....@gmail.com> wrote:

> So, you need a dictionary to know what any of the intermediate representations or "codes" of the knowledge is representing.

No you do not! What you need are examples because without examples every definition in a dictionary is circular.

> As far as knowing the nature of fellow human beings, as Descartes pointed out, various theories, such as being a brain in a vat, or solipsism (my subjective knowledge is the only thing that exists) are all theoretical possibilities. 

A brain in a vat made out of bone is more than just a theoretical possibility.  And I know you don't believe in solipsism because you believe your fellow human beings are conscious, it's just the things that have brains that are hard and dry and not soft and squishy that you think are not conscious regardless of how intelligently they behave.
 
> But as we've discussed multiple times, the left hemisphere of your brain knows, absolutely, that it is not the only hemisphere in existence, since it can be computationally bound with subjective knowledge in the other.  So [...]

So what does that have to do with the price of eggs in China?  You still haven't answered my question, why do you believe that your fellow human beings have this certain something or whatever it is you want to call the secret sauce but GPT-4 does not? 

John K Clark

Brent Allsop

May 15, 2023, 3:40:23 PM
to extro...@googlegroups.com
On Mon, May 15, 2023 at 12:26 PM John Clark <johnk...@gmail.com> wrote:
On Mon, May 15, 2023 at 1:24 PM Brent Allsop <brent....@gmail.com> wrote:

> So, you need a dictionary to know what any of the intermediate representations or "codes" of the knowledge is representing.

No you do not! What you need are examples because without examples every definition in a dictionary is circular.
I agree.  This is more or less what I was trying to say. 

 
> As far as knowing the nature of fellow human beings, as Descartes pointed out, various theories, such as being a brain in a vat, or solipsism (my subjective knowledge is the only thing that exists) are all theoretical possibilities. 

A brain in a vat made out of bone is more than just a theoretical possibility.  And I know you don't believe in solipsism because you believe your fellow human beings are conscious, it's just the things that have brains that are hard and dry and not soft and squishy that you think are not conscious regardless of how intelligently they behave.
 
> But as we've discussed multiple times, the left hemisphere of your brain knows, absolutely, that it is not the only hemisphere in existence, since it can be computationally bound with subjective knowledge in the other.  So [...]

So what does that have to do with the price of eggs in China?  You still haven't answered my question, why do you believe that your fellow human beings have this certain something or whatever it is you want to call the secret sauce but GPT-4 does not? 

I believe my friends are sentient, because they say things like they can experience redness.
There is the possibility you could be lying, or something, but I consider that to be unlikely.

John Clark

May 15, 2023, 3:54:15 PM
to extro...@googlegroups.com
On Mon, May 15, 2023 at 3:40 PM Brent Allsop <brent....@gmail.com> wrote:

> I believe my friends are sentient, because they say things like they can experience redness.

And yet I don't think you would believe an AI that said it can also experience redness because, unlike your human friends, it is not made of meat. I agree with Martin Luther King who said "minds should not be judged by the squishiness of their brains but by the content of their character".  Ah ... that might not be an exact quote but I'm pretty sure he said something like that.

John K Clark



Brent Allsop

May 15, 2023, 4:20:19 PM
to extro...@googlegroups.com
Exactly.  Silicon has properties and voltages, just like our subjective knowledge has properties like redness.  You should not judge the abstractness of their knowledge by what properties are representing it, but by whether or not the knowledge is abstracted away from those properties.  In an abstract system a word like 'red' is really just a string of 1s and 0s.  And it doesn't matter what properties or qualities or voltages are representing a 1, or a 0, by design.  You just need a transducing system to tell you which property or voltage is representing the 1, and which is representing the 0.  So, by definition, those 1s and 0s are not anything like what is representing them at any point in time.  This is how they should be judged.
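A toy version of that point (an illustrative example only): the token 'red' inside a machine is just a bit pattern, and nothing about the pattern resembles redness; a separate, purely conventional mapping (the "transducing system" or dictionary) is what ties the bits to anything at all.

# The word "red" as a machine holds it: a string of 1s and 0s (here, ASCII bytes).
bits = ''.join(f'{byte:08b}' for byte in 'red'.encode('ascii'))
print(bits)      # 011100100110010101100100

# Nothing in that pattern is red-like. Even to recover the *word*, you need an
# agreed convention (ASCII) that says which 8-bit patterns stand for which letters.
decoded = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8)).decode('ascii')
print(decoded)   # red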


John Clark

May 15, 2023, 4:28:17 PM
to extro...@googlegroups.com
On Mon, May 15, 2023 at 4:20 PM Brent Allsop <brent....@gmail.com> wrote:
> In an abstract system a word like 'red' is really just a string of 1s and 0s. 

And all Shakespeare did is produce a string of ASCII characters.  
 
> by definition, those 1s and 0s are not anything like what is representing them 

But the action potential across the cell membrane of a neuron in the brain IS like something?! 

John K Clark  



Brent Allsop

May 15, 2023, 4:36:47 PM
to extro...@googlegroups.com
You always change the subject to things like these, which have nothing to do with what I'm trying to describe.
I'm not seeing any evidence that you fully understand the key points I'm trying to make.
Can you describe the key points I'm trying to make, when I talk about the difference between a word like "red" and your subjective knowledge of [image: red_border.png]?






John Clark

May 15, 2023, 4:49:33 PM
to extro...@googlegroups.com
On Mon, May 15, 2023 at 4:36 PM Brent Allsop <brent....@gmail.com> wrote:

> In an abstract system a word like 'red' is really just a string of 1s and 0s. 

And all Shakespeare did is produce a string of ASCII characters.  
 
> by definition, those 1s and 0s are not anything like what is representing them 

But the action potential across the cell membrane of a neuron in the brain IS like something?! 

> You always change the subject to things like these, which has nothing to do with what I'm trying to describe.

I'm trying, without much success, to get you to answer the key question that I asked. I understand why you don't want to answer, because every argument you use to try to show that an AI is not conscious can be used with equal effectiveness (or ineffectiveness) to show that your fellow human beings are also not conscious; nevertheless I will repeat my question now:

Brent Allsop

May 15, 2023, 6:28:54 PM
to extro...@googlegroups.com

Hi John,
OK, I'll explain what you are trying to communicate, in my own words, to see if I am fully understanding what you are asking.

You want me to answer why I think squishy meatware has this "missing piece of the puzzle", while dry hardware does not.  You believe the dry hardware will be able to do anything the squishy meatware can do, so that once I explain to you how I know this of the squishy meatware, or not, you will then be able to apply the same argument to the dry hardware, or not.

Am I missing anything?




Stathis Papaioannou

May 15, 2023, 6:46:02 PM
to extro...@googlegroups.com
What’s wrong with the idea that consciousness is a result of the abstractions, so that if you used voltages or cogs and gears to do the processing the same consciousness would result? I know you don’t believe it is true but how is it inconsistent with what we observe and experience?

--
Stathis Papaioannou

Brent Allsop

May 15, 2023, 8:41:22 PM
to extro...@googlegroups.com

Hi Stathis,
Functionality wise, there is no difference.  There are lots of possible systems that are isomorphically the same, or functionally the same.  That is all the functionalists are talking about.  But if you want to know what it is like, you are talking about something different.  You are asking: "Are you using voltages, cogs, gears or qualities to achieve that functionality?"

And capability and efficiency are important.  Voltages are going to function far better than cogs.  And computation directly on qualities is more efficient than systems which represent information which is abstracted away from what is representing it, doing the computational binding with brute force discrete logic gates.  Motivational computation done by an actual physical attraction which feels like something is going to be much more robust than some abstractly programmed attraction, that isn't like anything.

And then, there is the simple: "What is fundamental?" question.  What is reality, and knowledge of reality, made of?  What are we?  How is our functionality achieved?  What is reality made of?  How do you get new qualities, to enable greater functionality, and all that.


John Clark

unread,
May 15, 2023, 9:25:06 PM5/15/23
to extro...@googlegroups.com
On Mon, May 15, 2023 at 8:41 PM Brent Allsop <brent....@gmail.com> wrote:

computation directly on qualities is more efficient than systems which represent information which is abstracted away from what is representing it

What I want to know is why you believe that brains that are squishy do less "abstracting away" than brains that are not squishy.  

> And then, there is the simple: "What is fundamental?" question.  What is reality, and knowledge of reality made of? 

I'm pretty sure that information is fundamental and is what knowledge is made of, I'm absolutely sure squishiness is not involved in either.

John K Clark

Stathis Papaioannou

unread,
May 15, 2023, 9:25:31 PM5/15/23
to extro...@googlegroups.com
On Tue, 16 May 2023 at 10:41, Brent Allsop <brent....@gmail.com> wrote:

Hi Stathis,
Functionality wise, there is no difference.  There are lots of possible systems that are isomorphically the same, or functionally the same.  That is all the functionalists are talking about.  But if you want to know what it is like, you are talking about something different.  You are asking: "Are you using voltages, cogs, gears or qualities to achieve that functionality?"

And capability and efficiency is important.  Voltages are going to function far better than cogs.  And computation directly on qualities is more efficient than systems which represent information which is abstracted away from what is representing it, doing the computational binding with brute force discrete logic gates.  Motivational computation done by an actual physical attraction which feels like something is going to be much more robust than some abstractly programmed attraction, that isn't like anything.

And then, there is the simple: "What is fundamental?" question.  What is reality, and knowledge of reality made of?  What are we.  How is our functionality achieved?  What is reality made of?  How do you get new qualities, to enable greater functionality, and all that.

What if there are no "qualities" accessible to the conscious system other than the qualities generated by the system? A brass gear has a certain mass, feels cool to the touch, even has a certain brassy smell. But if it is part of a conscious system, none of that is relevant. The only relevant thing is that it functions as a gear. Furthermore, even the functioning as a gear is not accessible to the conscious system: all it knows is that it is able to experience red qualia, which it could do regardless of what the gear was made of. Is there any reason why this model should be wrong?


--
Stathis Papaioannou

Brent Allsop

unread,
May 16, 2023, 12:25:14 PM5/16/23
to extro...@googlegroups.com

This is the key to the issue.  You are saying "even the functioning as a gear is not accessible to the conscious system."  But redness is a quality of something, like a gear or glutamate or something, and those properties are what consciousness is composed of.  If a single pixel of redness changes to greenness, the entire system must be aware of that change, and be able to report on that change.  You are saying, or assuming, or just stating, that nothing can do that.  But, without that, you can't have composite subjective experiences composed of thousands of pixels of qualities like redness and greenness, where the entire system is aware of every pixel of diverse qualities, all at once.






John Clark

unread,
May 16, 2023, 1:49:47 PM5/16/23
to extro...@googlegroups.com
On Tue, May 16, 2023 at 12:25 PM Brent Allsop <brent....@gmail.com> wrote:
 
> You are saying "even the functioning as a gear is not accessible to the conscious system." 

In a mechanical brain the functioning of billions of gears could have an effect on a conscious system, but the mind won't know how many RPMs a particular gear is rotating at any more than you know whether a particular neuron in your brain has fired or not.
 
> But redness is a quality of something, like a gear or glutamate 

Neither the number 11, the concept "large", nor redness is like a gear or like glutamate; they have been "abstracted away". In fact nearly everything in your conscious mind has been abstracted away from the action potential across the cell membrane in a neuron in your brain.

> and those properties are what consciousness is composed of. 

Consciousness is a set of thoughts, and neither glutamate nor action potentials are part of that set except when you're thinking about glutamate or action potentials. 

> If a single pixel of redness changes to greenness, the entire system must be aware of that change,

Only if the mind remembers what the old pixel was like so the two could be compared, because without contrast meaning cannot be conveyed.  That's why redness would mean nothing unless your eye was capable of seeing things that were NOT red.

John K Clark

Dylan Distasio

unread,
May 16, 2023, 3:05:14 PM5/16/23
to extro...@googlegroups.com
I'm not sure if I'm just too simple, and apologies if I am misrepresenting your position as it is not my intention, but I have never understood how you can be so confident that redness is a fundamental quality (qualia) of the human nervous system.   It would appear to me it's abstractions all the way down until you get to the bare iron of the sensory system.

I have been listening to a lot of lectures by the biochemist Nick Lane lately.   Although he is not focused on consciousness per se in his research, he tends to believe it runs on a gradient in biological organisms, so that even the first organisms like prokaryotes that have the ability to detect a change in the electrical field of their membrane are slightly conscious and can react to their environment.   Once eukaryotes arose and mitochondria opened up a much larger genetic landscape to explore, predators/prey could develop and sensory systems expanded, along with the capabilities in terms of consciousness in higher level organisms and where they lie on the gradient.

Despite all the cartoons, I am still not sure why redness has to exist as a fundamental quality somewhere in the brain.   Your eyes have receptors that trigger electrical impulses based on the wavelengths they're capable of capturing.   This gets abstracted away and an image is created with various colors that have been assigned labels by human beings.   Everything is likely an abstraction beyond the initial receptor change in the cones.   Red looks red because of the interaction between the cones and photons.

I happen to be color blind so some reds look the same as some greens to me because I have a defect in my cones.   With other shades of red and green, I can clearly see the difference.    This is all being driven at its root by the initial signals in the receptors interacting with photons.  I know what blue looks like because as a child someone showed me objects reflecting those wavelengths and told me they were blue, etc.   You can call it whatever you want to but it's still just an abstraction of photons hitting receptors in my eyes that at its root depends on the physics of light and the physics of the receptors in my eyes.   I just can't imagine how a general purpose nervous system would evolve any other way.   These different levels of abstraction provide huge benefits and flexibility despite your argument that a dictionary is somehow inefficient compared to the qualia theory.

Despite many attempts to understand your position, I'm not sure why any model of the brain would need to incorporate redness as an actual fundamental unit hiding somewhere in the physical system.   Redness needs to be nothing more than what a photon of a certain wavelength interacting with a receptor looks like when a full image of the field of view is assembled by the brain at a higher level of abstraction.   

There doesn't need to be a fundamental "red" hidden somewhere in the brain to be able to know what red is, decompose a field of vision into different objects, and identify the strawberry as red.

Is there a shred of physical evidence that redness is actually located as a distinct unit in the brain?   It seems an enormous leap of faith to maintain your current theory when all existing evidence (to my knowledge) points to the idea that everything downstream of the photon hitting the receptor is an abstraction of one sort or another.   At a high level, I don't see how this is much different from how a computer with a camera and a convolutional neural network breaks down an image into its components and assigns RGB values to it based on the sensor outputs.
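(A toy sketch of that kind of layered pipeline, for illustration only: the cone curves, thresholds and labels below are invented, not a model of real photoreceptors or of any particular network.)

# Toy illustration of "abstractions all the way down": a photon's wavelength
# drives receptor responses, and "red" only appears as a label assigned at a
# higher processing layer. All numbers and thresholds here are made up.

def cone_responses(wavelength_nm: float) -> dict:
    """Crude stand-in for L/M/S cone activations (not real photoreceptor curves)."""
    def peak(center, width):
        return max(0.0, 1.0 - abs(wavelength_nm - center) / width)
    return {"L": peak(565, 100), "M": peak(535, 120), "S": peak(445, 90)}

def label_color(responses: dict) -> str:
    """Higher-level abstraction: turns raw activations into a learned label."""
    if responses["L"] > responses["M"] and responses["L"] > responses["S"]:
        return "red-ish"
    if responses["M"] >= responses["L"] and responses["M"] > responses["S"]:
        return "green-ish"
    return "blue-ish"

if __name__ == "__main__":
    for wl in (450, 550, 650):
        r = cone_responses(wl)
        print(wl, "nm ->", {k: round(v, 2) for k, v in r.items()}, "->", label_color(r))

The only point of the sketch is that "red" shows up as a label assigned several steps downstream of the raw receptor response, not as anything stored in the receptors themselves.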

William Flynn Wallace

unread,
May 16, 2023, 4:22:36 PM5/16/23
to extro...@googlegroups.com, ExI chat list
My question is: what IS a fundamental quality and how do you know that something is one?  And what difference does that make?  If you think that a fundamental quality is one that is not processed but goes straight to consciousness, I think that you are wrong; everything is processed through memory etc.  bill w

Dylan Distasio

unread,
May 16, 2023, 4:36:42 PM5/16/23
to extro...@googlegroups.com
Bill-

Thanks for replying.   Can you be more specific on where you think I am wrong?   I'm not clear on what your position is.   I don't disagree that memory is involved, but it's just another layer of abstraction up the line.

The point I am arguing is that I don't believe there is some physical molecule in the brain that represents redness on a strawberry you're viewing directly with no level of translation required between electrical impulses coming in when a photon hits a visual receptor and your perception of redness.

I'm arguing that there is a signal passed from photons/receptors up the chain through various abstraction layers in the brain until you observe that the strawberry is red (which could include memory).   I believe Brent is arguing that there is some fundamental physical molecule that is "red" with no dictionary/translation required anywhere.    

Stathis Papaioannou

unread,
May 16, 2023, 4:38:42 PM5/16/23
to extro...@googlegroups.com
On Wed, 17 May 2023 at 02:25, Brent Allsop <brent....@gmail.com> wrote:

This is the key to the issue.  You are saying "even the functioning as a gear is not accessible to the conscious system."  But redness is a quality of something, like a gear or glutamate or something, and those properties are what consciousness is composed of.  If a single pixel of redness changes to greenness, the entire system must be aware of that change, and be able to report on that change.  You are saying, or assuming, or just stating, that nothing can do that.  But, without that, you can't have composite subjective experiences composed of thousands of pixels of qualities like redness and greenness, where the entire system is aware of every pixel of diverse qualities, all at once.

The system is not aware of the gear or how the gear functions, just as you are not aware of your neurons and neurotransmitters. If a cog on the gear is changed from brass to titanium, the system does not notice, and continues experiencing red. If a cog is bent, the system starts behaving differently and experiences the red pixels turning green. Similarly, if your glutamate is swapped for glycine and glutamate receptors for glycine receptors, you don’t notice and continue experiencing red. If the glutamate is swapped for glycine and no other changes are made your brain starts behaving differently and you experience red pixels turning green.
--
Stathis Papaioannou

William Flynn Wallace

unread,
May 16, 2023, 4:53:05 PM5/16/23
to extro...@googlegroups.com
I agree with you.  Everything is processed. No direct route to some sensation.  bill w  

Brent is arguing that there is some fundamental physical molecule that is "red" with no dictionary/translation required anywhere.    I don't know what this means.  bill w

Brent Allsop

unread,
May 18, 2023, 2:23:48 AM5/18/23
to extro...@googlegroups.com


On Tue, May 16, 2023, 2:53 PM William Flynn Wallace <fooz...@gmail.com> wrote:
I agree with you.  Everything is processed. No direct route to some sensation.  bill w  

Brent is arguing that there is some fundamental physical molecule that is "red" with no dictionary/translation required anywhere.    I don't know what this means.  bill w

Thank you Bill, for at least saying the correct words.  If "everything is processed", what is it that has the seeming greenness quality when you have a red/green signal inverter immediately after the retina?  Those words you said mean exactly what they are saying.  Let me see if I can explain it this way.

In order to understand, one must first distinguish between reality and knowledge of reality.  Also, one must understand that: "If you know something, that knowledge must be something."  In other words, if you have subjective knowledge of a 3D strawberry, there must be something that is the 3D model that is your 3D subjective knowledge of the strawberry.

We initially naively think redness is an intrinsic quality of the strawberry.  IF that were the case, a strawberry wouldn't need a dictionary to know what its quality is.  That is the right way to think about it: redness is just an intrinsic quality of something else, something in your brain.  Your brain false colors its knowledge of the strawberry with your redness quality, so you know what to focus on and pick.

Think of your 3D subjective knowledge (situational awareness) of the strawberry as being built out of a set of 3D voxel elements like colored LEGO blocks, laid out in your brain, as portrayed here (you may want to watch the entire chapter).  All rendered into your computationally bound subjective CPU by your perception system.  It is these LEGO block voxels that have the intrinsic subjective qualities, not the strawberry.  Your subjective CPU is computing directly on these computationally bound qualities.

Just like these words describe: "some fundamental physical molecule [like a LEGO block] that is "red" with no dictionary/translation required."



 


Stathis Papaioannou

unread,
May 18, 2023, 2:39:01 AM5/18/23
to extro...@googlegroups.com
On Thu, 18 May 2023 at 16:23, Brent Allsop <brent....@gmail.com> wrote:


>In order to understand, one must first distinguish between reality and knowledge of reality.  Also, one must understand that: "If you know something, that knowledge must be something."  In other words, if you have subjective knowledge of a 3D strawberry, there must be something that is the 3D model that is your 3D subjective knowledge of the strawberry.

But why isn’t it possible that any system that can detect and describe a strawberry in a particular way, be it a human, other animal, or computer, also has the special type of subjective knowledge of the strawberry which we call qualia?


--
Stathis Papaioannou

Brent Allsop

unread,
May 18, 2023, 2:49:38 AM5/18/23
to extro...@googlegroups.com
Hi Stathis,
I've answered exactly this question so many times.  Yet you still show no sign of understanding what I'm trying to say.  Maybe this latest picture will help?
The_Strawberry_is_Red_064.jpg
All of these systems can tell you the strawberry is red.  It's just that if you ask them what their knowledge of the strawberry is like, they will all (but A and B) give you these different answers.

C:  My redness is like your redness.
A and B: My redness is like your greenness.
R: My knowledge is intentionally abstracted away from whatever properties are representing it, via transducing dictionaries.  So, my abstract knowledge isn't like anything, and unlike A, B, and C, I need a dictionary to know what the word "red" means.









Brent Allsop

unread,
May 18, 2023, 3:18:06 AM5/18/23
to extro...@googlegroups.com
Hi Stathis,
Perhaps I answered the wrong question, because it is possible for anything to be engineered with the same qualities, if you use the same qualities, whatever those are.
If you engineered a bat to represent the echolocated knowledge of a bug/food with your redness, you would know what at least that part of that bat was like.


Stathis Papaioannou

unread,
May 18, 2023, 5:57:50 AM5/18/23
to extro...@googlegroups.com


On Thu, 18 May 2023 at 16:49, Brent Allsop <brent....@gmail.com> wrote:
Hi Stathis,
I've answered exactly this question so many times.  Yet you still show no sign of understanding what I'm trying to say.  Maybe this latest picture will help?
The_Strawberry_is_Red_064.jpg
All of these systems can tell you the strawberry is red.  It's just that if you ask them what their knowledge of the strawberry is like, they will all (but A and B) give you these different answers.

C:  My redness is like your redness.
A and B: My redness is like your greenness.
R: My knowledge is intentionally abstracted away from whatever properties are representing it, via transducing dictionaries.  So, my abstract knowledge isn't like anything, and unlike A, B, and C, I need a dictionary to know what the word "red" means.

But R could say “my redness is like your redness” or “my redness is like your greenness”. What would you make of that? And if you wouldn’t believe it, why not?
--
Stathis Papaioannou

John Clark

unread,
May 18, 2023, 7:41:46 AM5/18/23
to extro...@googlegroups.com
On Thu, May 18, 2023 at 2:23 AM Brent Allsop <brent....@gmail.com> wrote:

> In order to understand, one must first distinguish between reality and knowledge of reality. 

All knowledge is information (although the reverse may not necessarily be true). And as far as knowledge is concerned I would maintain that information is the ONLY thing that we can know. Nobody can directly access reality, the sound of glass breaking is not broken glass, the look of broken glass is not broken glass, the feel of broken glass is not broken glass. And machines are no better and no worse at directly accessing reality than we are. What "IS" broken glass? It must have stable properties of some sort or I wouldn't be able to identify it as a "thing". I don't know what those ultimate stable properties are, but I know what they are not, they are NOT sense sensations, they are NOT qualia. The sad truth is, I can point to "things" but I don't know exactly what a thing "IS", and I'm not even sure that I know what "IS" is.
 
> Also, one must understand that: "If you know something, that knowledge must be something." 

If you have knowledge of something then you have knowledge that some-thing has been able to produce a qualia in your mind, and it might not have come through any of your sense organs but originated entirely in your brain, for example if you've taken a hallucinogenic drug.

> We initially naively think redness is an intrinsic quality of the strawberry.  IF that were the case, a strawberry wouldn't need a dictionary

As far as getting to the fundamental nature of things it's silly for you to keep talking about dictionaries until you answer one question.  Where do lexicographers get the knowledge to write their book?


> It is these LEGO block voxels that have the intrinsic subjective qualities, not the strawberry.  

At last you said something I agree with!  You have subjective knowledge of the LEGO block voxels, but LEGO block voxels are NOT a strawberry, they do NOT give you access to objective reality. That's true for all brains regardless of if they're soft and squishy or hard and dry. 

> Your subjective CPU is computing directly on these computationally bound qualities.

Although I've asked you many times you have never explained why you think computers fundamentally do this sort of thing differently or why, despite all the evidence to the contrary, you think the human way is better.

John K Clark

Brent Allsop

unread,
May 18, 2023, 7:35:42 PM5/18/23
to extro...@googlegroups.com

You are changing the subject from what matters.  I am only talking about the fact of the matter that is the quality of the knowledge.  Whether the person is lying about the quality of its knowledge, or not, or whatever, has nothing to do with the quality, itself, and what it is like.  Let's imagine a "shut in" person who can't interact with the outside world, but is still experiencing or dreaming of a subjective redness experience.  That quality is the only thing I am talking about.



Stathis Papaioannou

unread,
May 18, 2023, 7:53:39 PM5/18/23
to extro...@googlegroups.com


On Fri, 19 May 2023 at 09:35, Brent Allsop <brent....@gmail.com> wrote:

You are changing the subject from what matters.  I am only talking about the fact of the matter that is the quality of the knowledge.  Whether the person is lying about the quality of its knowledge, or not, or whatever, has nothing to do with the quality, itself, and what it is like.  Let's emagine a "shut in" person who can't interact with the outside world, but is still experiencing or dreaming of a subjective redness experience.  That quality of that is the only thing I am talking about.

No, I mean that the robot could say the same things as the humans AND the robot could be accurately describing its experiences. Why do you think that is impossible?


--
Stathis Papaioannou

Brent Allsop

unread,
May 18, 2023, 8:13:44 PM5/18/23
to extro...@googlegroups.com
On Thu, May 18, 2023 at 5:53 PM Stathis Papaioannou <stat...@gmail.com> wrote:
On Fri, 19 May 2023 at 09:35, Brent Allsop <brent....@gmail.com> wrote:
You are changing the subject from what matters.  I am only talking about the fact of the matter that is the quality of the knowledge.  Whether the person is lying about the quality of its knowledge, or not, or whatever, has nothing to do with the quality, itself, and what it is like.  Let's emagine a "shut in" person who can't interact with the outside world, but is still experiencing or dreaming of a subjective redness experience.  That quality of that is the only thing I am talking about.

No, I mean that the robot could say the same things as the humans AND the robot could be accurately describing its experiences. Why do you think that is impossible?

You are just proving you have no idea what this image is trying to say.  It is a fact that the quality of A's knowledge is different from the quality of C's knowledge.  To understand the nature of R's knowledge, as being an abstract word like 'red', imagine a quality you have never experienced before; let's call it grue.  So, you can talk about grue knowledge, you may even know what has a grueness quality, and be able to tell people that thing is grue, but you have no idea what grue is like.  The same is true for the robot, and the usage of the word 'red'.  It can talk about red, and tell you the strawberry is 'red', but it has no idea what redness is like.  Its knowledge of red is not anything like A's or C's knowledge, the same as your knowledge of grue is not like anything.  Those are just the facts portrayed in this image.  You just seem to be trying to distract people away from those facts, so you can use your sleight of hand to talk about silly substitutions, while completely ignoring what really matters: the qualities, and the fact that if you change them (whatever they are), they are no longer the same.

 

Lawrence Crowell

unread,
May 18, 2023, 8:50:43 PM5/18/23
to extro...@googlegroups.com
On Thu, May 18, 2023 at 7:13 PM Brent Allsop <brent....@gmail.com> wrote:


On Thu, May 18, 2023 at 5:53 PM Stathis Papaioannou <stat...@gmail.com> wrote:
On Fri, 19 May 2023 at 09:35, Brent Allsop <brent....@gmail.com> wrote:
You are changing the subject from what matters.  I am only talking about the fact of the matter that is the quality of the knowledge.  Whether the person is lying about the quality of its knowledge, or not, or whatever, has nothing to do with the quality, itself, and what it is like.  Let's emagine a "shut in" person who can't interact with the outside world, but is still experiencing or dreaming of a subjective redness experience.  That quality of that is the only thing I am talking about.

No, I mean that the robot could say the same things as the humans AND the robot could be accurately describing its experiences. Why do you think that is impossible?

You are just proving you have no idea what this image is trying to say.  It is a fact that the quality of A's knowledge is different from the quality of C's knowledge.  To understand the nature of R's knowledge, as being an abstract word like 'red', emagine a quality you have never experienced, before, let's call it grue.  So, you can talk about grue knowledge, you may even know what has a grueness quality, and be able to tell people that thing is grue, but you have no idea what grue is like.  The same is true for the robot, and the usage of the word 'red'.  it can talk about red, and tell you the strawberry is 'red', but it has no idea what redness is like.  Its knowledge of red is not anything like A's or C's knowledge, the same as your knowledge of grue is not like anything.  Those are just the facts portrayed in this image.  You just seem to be trying to distract people away from those facts, so you can use your sleight of hand to talk about silly substitutions, while completely ignoring what really matters, the qualities, and the fact that if you change them (whatever they are), they are no longer the same..


Ultimately this is because a biological neural network is not computing red. It is adjusting neural weights in such a way that they settle into a condition corresponding to red. There is a back propagation or self-referential feedback loop, and the neural network settles onto a configuration that corresponds to red. Because this involves an approximation of a Gödel loop, plus stochasticity, this process is not computable.
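(For illustration only, here is a minimal toy sketch of the "settling" part of that picture, in the spirit of a one-memory Hopfield network; it is not Lawrence's model and it says nothing about Gödel loops or non-computability, it only shows a network relaxing into a stored configuration rather than computing it feed-forward.)

# Toy Hopfield-style network that "settles" into a stored pattern rather than
# computing it feed-forward. The stored pattern standing in for "red" is arbitrary.
import numpy as np

rng = np.random.default_rng(0)
red_pattern = np.sign(rng.standard_normal(32))   # arbitrary +/-1 pattern labelled "red"
W = np.outer(red_pattern, red_pattern)           # Hebbian weights for one stored memory
np.fill_diagonal(W, 0)

state = red_pattern.copy()
flip = rng.choice(32, size=10, replace=False)    # corrupt the input pattern
state[flip] *= -1

for _ in range(5):                               # relax toward the attractor
    for i in rng.permutation(32):                # asynchronous updates, some stochasticity
        state[i] = 1 if W[i] @ state >= 0 else -1

print("overlap with 'red' attractor:", int(state @ red_pattern), "/ 32")

Corrupt the pattern, let the updates run, and the state falls back into the stored "red" configuration.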

LC
 

Brent Allsop

unread,
May 18, 2023, 10:26:35 PM5/18/23
to extro...@googlegroups.com

Hi Lawrence,
I'm not understanding what you mean by any of this.  The first biggest problem is I have no idea what you mean by "red".  Are you talking about the property of something like a strawberry, or something that reflects or emits red light?  Or are you talking about the redness quality (which could be engineered to be your greenness) which represents your knowledge of red things?  Or are you just talking about the abstract term 'red' in general, that doesn't have anything to do with any particular physical or subjective property?

To clear up these kinds of ambiguities, I define red and redness to be labels for very different properties, as defined in the RQT camp statement.





Stathis Papaioannou

unread,
May 18, 2023, 11:53:51 PM5/18/23
to extro...@googlegroups.com
On Fri, 19 May 2023 at 10:13, Brent Allsop <brent....@gmail.com> wrote:


On Thu, May 18, 2023 at 5:53 PM Stathis Papaioannou <stat...@gmail.com> wrote:
On Fri, 19 May 2023 at 09:35, Brent Allsop <brent....@gmail.com> wrote:
You are changing the subject from what matters.  I am only talking about the fact of the matter that is the quality of the knowledge.  Whether the person is lying about the quality of its knowledge, or not, or whatever, has nothing to do with the quality, itself, and what it is like.  Let's emagine a "shut in" person who can't interact with the outside world, but is still experiencing or dreaming of a subjective redness experience.  That quality of that is the only thing I am talking about.

No, I mean that the robot could say the same things as the humans AND the robot could be accurately describing its experiences. Why do you think that is impossible?

You are just proving you have no idea what this image is trying to say.  It is a fact that the quality of A's knowledge is different from the quality of C's knowledge.  To understand the nature of R's knowledge, as being an abstract word like 'red', emagine a quality you have never experienced, before, let's call it grue.  So, you can talk about grue knowledge, you may even know what has a grueness quality, and be able to tell people that thing is grue, but you have no idea what grue is like.  The same is true for the robot, and the usage of the word 'red'.  it can talk about red, and tell you the strawberry is 'red', but it has no idea what redness is like.  Its knowledge of red is not anything like A's or C's knowledge, the same as your knowledge of grue is not like anything.  Those are just the facts portrayed in this image.  You just seem to be trying to distract people away from those facts, so you can use your sleight of hand to talk about silly substitutions, while completely ignoring what really matters, the qualities, and the fact that if you change them (whatever they are), they are no longer the same..

Yes, the image asserts that the robot’s knowledge of red is not the same as A’s knowledge or C’s knowledge, but why are you so certain that that is the case, even if the robot says otherwise?

--
Stathis Papaioannou

Brent Allsop

unread,
May 19, 2023, 12:26:58 AM5/19/23
to extro...@googlegroups.com

I'm absolutely certain, the same as I am absolutely certain that you can't know what the grue quality is like.
And the word 'red' is just a string of 1s and 0s, all of which, it doesn't matter what is representing them, as long as you have a transducing system telling you which property or quality is the 1, and which is the 0.   So, by definition, I know that the word "red" isn't like anything.  Or rather, it could be like anything, from redness, to greenness, to a hole punched in paper, to a magnetic polarization, to a voltage on a line... and regardless of what properties you are using, the transducing system gets the "1" from all of those, by design, in a way that has nothing to do with whatever properties are representing that 1.
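(A small sketch of that substrate-independence point, with invented media and thresholds: the same bit string spelling 'red' is recovered from voltages, punched paper, or magnetic polarisation, because the transducer only needs to know which physical state counts as a 1.)

# The string "red" as bits, recovered from three made-up physical representations.
# Each "transducer" only has to say which physical state counts as 1 and which as 0;
# nothing about the medium itself resembles redness.

bits = "".join(f"{ord(c):08b}" for c in "red")            # 'red' as a string of 1s and 0s

as_voltages = [5.0 if b == "1" else 0.2 for b in bits]    # high/low voltage on a line
as_holes    = ["hole" if b == "1" else "blank" for b in bits]   # punched holes in paper
as_magnets  = ["N" if b == "1" else "S" for b in bits]    # magnetic polarisation

def transduce(signal, is_one):
    return "".join("1" if is_one(s) else "0" for s in signal)

decoded = {
    "voltage": transduce(as_voltages, lambda v: v > 2.5),
    "paper":   transduce(as_holes, lambda h: h == "hole"),
    "magnet":  transduce(as_magnets, lambda m: m == "N"),
}
assert all(d == bits for d in decoded.values())
print(bits, "->", "".join(chr(int(bits[i:i+8], 2)) for i in range(0, len(bits), 8)))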













Stathis Papaioannou

unread,
May 19, 2023, 12:34:04 AM5/19/23
to extro...@googlegroups.com
On Fri, 19 May 2023 at 14:26, Brent Allsop <brent....@gmail.com> wrote:

I'm absolutely certain, the same as I am absolutely certain that you can't know what the grue quality is like.
And the word 'red' is just a string of 1s and 0s, all of which, it doesn't matter what is representing them, as long as you have a transducing system telling you which property or quality is the 1, and which is the 0.   So, by definition, I know that the word "red" isn't like anything.  Or rather, it is like anything, from redness, to greenness, to a punch in a paper, to a magnetic polarization, to a voltage on a line... and regardless of what properties you are using, the transducing system gets the "1" from all of those, by design, in a way that has nothing to do with whatever properties are representing that 1.

It’s the ones and zeros in an appropriate order that may produce the redness knowledge in the robot. You say you are certain that it can’t, but you have no evidence.


--
Stathis Papaioannou

Dylan Distasio

unread,
May 19, 2023, 1:24:39 AM5/19/23
to extro...@googlegroups.com
It's still not clear to me how you are so confident that something similar isn't happening with human beings.   You literally have a voltage on the line coming in from the receptors in the retina that is relaying different signals based on the wavelength of light hitting them.   Higher levels of processing turn this into an image, and even higher levels of abstraction than that associate it with what we know as the color red in terms of assigning labels to that original output.   Where is the evidence that we're not just translating that original signal through multiple levels of processing similar to the robot?

I'm color blind to some degree.   I see certain shades of red as green that I am confident you see as red (assuming you're not also color blind).   This is all beginning with receptors in the retina, not a fundamental molecule that is redness.  

Someone born blind has no idea what red is, because they've never had those signals from the receptors abstracted into what we call red.  The relevant neural network(s) need to be trained and pruned.  

With working eyes, what's to say the brain is not translating the source signal into what we know as red.   I don't see the need for an explanation of redness that doesn't begin with a voltage passed in from receptors and ends with a translation into a visual model at a higher level.

Lawrence Crowell

unread,
May 19, 2023, 5:29:14 AM5/19/23
to extro...@googlegroups.com
I am referring to the perception of red, not the 600nm wavelength of EM radiation.

LC

Brent Allsop

unread,
May 19, 2023, 7:51:23 AM5/19/23
to extro...@googlegroups.com

Wow.  So you are saying something like 11001 has a redness quality and 11000 has a greenness quality??? To me that passes that laughable test less well than some function like square root is redness and cube root is greenness.  So even a pattern of ones and zeros on your computer screen can have a redness quality??!  That doesn't seem completely laughable to you?

John Clark

unread,
May 19, 2023, 12:50:30 PM5/19/23
to extro...@googlegroups.com
On Fri, May 19, 2023 at 7:51 AM Brent Allsop <brent....@gmail.com> wrote:

> Wow.  So you are saying something like 11001 has a redness quality and 11000 has a greenness quality??? To me that passes that laughable test less well than some function like square root is redness and cube root is greenness. 

If 11001 pointed to an element in a set in an AI's memory in which, under the same lighting conditions, all the objects reflected light primarily around 700 nanometers then I don't consider what Lawrence said to be at all laughable. I think it's reasonable to think that an AI might perceive 11001 as the redness qualia, after all 11001 and the redness qualia are both just symbols that represent light that has a wavelength of 700 nanometers. And the same thing could be said about 11000, 550 nanometers, and the greenness qualia.
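(A toy sketch of that kind of grounding, with invented codes, objects and wavelengths: the binary code means "red" only because it indexes a remembered set of objects that reflect light near 700 nm.)

# An internal code like 0b11001 means "red" only by pointing at a set of remembered
# objects that, under the same lighting, reflected light near 700 nm. Codes,
# objects and wavelengths below are invented for illustration.

memory = {
    0b11001: {"label": "red",   "examples": {"strawberry": 698, "fire truck": 705}},
    0b11000: {"label": "green", "examples": {"leaf": 548, "lime": 552}},
}

def describe(code: int) -> str:
    entry = memory[code]
    mean_nm = sum(entry["examples"].values()) / len(entry["examples"])
    return f"code {code:05b} ~ '{entry['label']}' (objects reflecting ~{mean_nm:.0f} nm)"

print(describe(0b11001))
print(describe(0b11000))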

And it's not just AIs; there are some people, usually those who are very good at mathematics, who have a neurological condition that causes them to see numbers as colors. It's called "synesthesia". Richard Feynman had it, but for him it wasn't numbers, it was mathematical functions. He wrote:

"When I see equations, I see the letters in colors. I don’t know why. I see vague pictures of Bessel functions with light-tan j’s, slightly violet-bluish n’s, and dark brown x’s flying around. And I wonder what the hell it must look like to the students".

John K Clark

Stathis Papaioannou

unread,
May 19, 2023, 5:40:04 PM5/19/23
to extro...@googlegroups.com
On Fri, 19 May 2023 at 21:51, Brent Allsop <brent....@gmail.com> wrote:

Wow.  So you are saying something like 11001 has a redness quality and 11000 has a greenness quality??? To me that passes that laughable test less well than some function like square root is redness and cube root is greenness.  So even a pattern of ones and zeros on your computer screen can have a redness quality??!  That doesn't seem completely laughable to you?

No, 11001 could participate in a process that gives rise to any quality, just like glutamate can. 11001 on a screen does not have a redness quality any more than glutamate in a test tube does.

--
Stathis Papaioannou

Brent Allsop

unread,
May 19, 2023, 9:01:00 PM5/19/23
to extro...@googlegroups.com
OK, whew.
So what type of process would this be?

Stathis Papaioannou

unread,
May 19, 2023, 9:04:08 PM5/19/23
to extro...@googlegroups.com
On Sat, 20 May 2023 at 11:01, Brent Allsop <brent....@gmail.com> wrote:
OK, whew.
So what type of process would this be?

A process that would allow a computer to look at the environment, distinguish red from green things, describe them, write stories about them, philosophise about qualia, wonder if non-computers have qualia, and so on.


Brent Allsop

unread,
May 19, 2023, 10:17:59 PM5/19/23
to extro...@googlegroups.com

If a person asked: “I want to know what type of vehicle you took to get to London?”

And if the person answered: “I took one that got me there.”  Would you consider that an answer?



Stathis Papaioannou

unread,
May 19, 2023, 11:07:14 PM5/19/23
to extro...@googlegroups.com
On Sat, 20 May 2023 at 12:18, Brent Allsop <brent....@gmail.com> wrote:

If a person asked: “I want to know what type of vehicle you took to get to London?”

And if the person answered: “I took one that got me there.”  Would you consider that an answer?


No, because the question already assumes that I got to London in a vehicle, so the answer is not giving any further information.

--
Stathis Papaioannou

John Clark

unread,
May 20, 2023, 5:48:46 AM5/20/23
to extro...@googlegroups.com
On Fri, May 19, 2023 at 10:18 PM Brent Allsop <brent....@gmail.com> wrote:
> If a person asked: “I want to know what type of vehicle you took to get to London?”
And if the person answered: “I took one that got me there.”  Would you consider that an answer?

No, but if a person could give me a detailed description of the vehicle, explain how it was manufactured, explain the basic principle that allowed it to operate, and even write a sonnet about the vehicle, then I would say it was a very good answer. And that is exactly what GPT-4 is able to do. It's one hell of a lot better answer than "the fundamental reason humans can experience qualia but computers cannot is because humans have a brain that is soft and squishy but a computer's brain is hard and dry".

John K Clark 


Brent Allsop

unread,
May 20, 2023, 11:20:43 AM5/20/23
to extro...@googlegroups.com

It is so frustrating when people lump me in a bucket like this:

On Sat, May 20, 2023 at 3:48 AM John Clark <johnk...@gmail.com> wrote:
It's a one hell of a lot better answer than "the fundamental reason humans can experience qualia but computers cannot is because humans have a brain that is soft and squishy but a computer's brain is hard and dry".

Humans can represent knowledge abstractly, and computers can represent knowledge directly on physical properties or qualities the same as humans.  In other words, I am the opposite of being in that bucket you are putting me in.

I was describing how a squishy brain can have abstract knowledge earlier when I was talking about a new grue quality which someone has never experienced before.  Or consider a conscious human who suffers from achromatopsia (experiences everything in black and white, so has never experienced redness).  Their squishy brain can describe everything about red or grue.  They know what does or doesn't have a grue quality.  They can describe how grueness or redness behaves, and everything.  But until that squishy brain directly experiences grueness, they can't know what that grueness property is like.  It doesn't have anything to do with being a computer or a human.  It is only about how one is representing information, and what that representation or property or quality is like.










 

Brent Allsop

unread,
May 20, 2023, 11:39:44 AM5/20/23
to extro...@googlegroups.com
On Fri, May 19, 2023 at 9:07 PM Stathis Papaioannou <stat...@gmail.com> wrote:
On Sat, 20 May 2023 at 12:18, Brent Allsop <brent....@gmail.com> wrote:

If a person asked: “I want to know what type of vehicle you took to get to London?”

And if the person answered: “I took one that got me there.”  Would you consider that an answer?


No, because the question already assumes that I got to London in a vehicle, so the answer is not giving any further information.

OK, good.
So then can you understand the problem I have with this answer, where your answer provides nothing other than what my question is already assuming one can do?

Stathis Papaioannou

unread,
May 20, 2023, 12:27:43 PM5/20/23
to extro...@googlegroups.com
On Sun, 21 May 2023 at 01:39, Brent Allsop <brent....@gmail.com> wrote:


On Fri, May 19, 2023 at 9:07 PM Stathis Papaioannou <stat...@gmail.com> wrote:
On Sat, 20 May 2023 at 12:18, Brent Allsop <brent....@gmail.com> wrote:

If a person asked: “I want to know what type of vehicle you took to get to London?”

And if the person answered: “I took one that got me there.”  Would you consider that an answer?


No, because the question already assumes that I got to London in a vehicle, so the answer is not giving any further information.

OK, good.
So then can you understand the problem I have with this answer, where your answer provides nothing other than what my question is already assuming one can do?

No. I am saying that a robot, an animal, an alien, or anything that can distinguish between colours in the nuanced way that humans can may experience visual qualia, regardless of what is inside its head. You reject this as absurd, but you can’t explain why.
--
Stathis Papaioannou

Brent Allsop

unread,
May 20, 2023, 12:47:20 PM5/20/23
to extro...@googlegroups.com
This statement proves we are just defining qualia very differently and talking past each other.

You are defining 'qualia' as "distinguishing things in nuanced ways"  - "regardless of what is inside its head." 

And I define qualia as what is inside its head.  I am asking what, inside a person's head, is it using to "distinguish things in nuanced ways", and all you are saying is it would be something so it can "distinguish things in nuanced ways."


 

Stathis Papaioannou

unread,
May 20, 2023, 1:00:50 PM5/20/23
to extro...@googlegroups.com
 No, I am saying that if it can behave like a human it may have genuine experiences such as you or I have, regardless of the hardware. Why do you think that is impossible?
--
Stathis Papaioannou

Brent Allsop

unread,
May 20, 2023, 1:24:35 PM5/20/23
to extro...@googlegroups.com
In a way, I'm not saying it is impossible, I'm just talking about something different.

You are talking about "behaving like a human". I am talking about the quality, itself, that results in the person (honestly or dishonestly) saying something like: "My redness is like your greenness."

You always talk about "genuine experiences" but never say what those genuine experiences are, other than the causally downstream behavior they cause.  I am asking: what is a "genuine experience" which causes that behavior?

I'm not asking about the type of light it reflects.  I'm asking, if it is reflecting light like that, what quality or property causes it to reflect that type of light?  (the light being very different from the thing reflecting that light.)



John Clark

unread,
May 20, 2023, 1:34:31 PM5/20/23
to extro...@googlegroups.com
On Sat, May 20, 2023 at 11:20 AM Brent Allsop <brent....@gmail.com> wrote:

> Humans can represent knowledge abstractly, and computers can represent knowledge directly on physical properties or qualities the same as humans.  [...] But until that squishy brain directly experiences grueness, they can't know what that grueness property is like.  It doesn't have anything to do with being a computer or a human.  

I have no problem with that, but if intelligent humans and intelligent computers represent knowledge in the same physical way then whatever it is that generates qualia in humans must also do so in computers. So what are we arguing about?  

> I was describing how a squishy brain can have abstract knowledge earlier when I was talking about a new grue quality which someone has never experienced before. 

If neither a human or a computer has ever experienced "grue" before then neither one of them will know what it's like to experience it. 
 
> And a conscious human that suffers from achromatopsia (experiences everything in black and white, so has never experienced redness). 

It would be easy to test for achromatopsia, if they can't differentiate between red photons and blue photons then they have it, but there is no test to determine if they are experiencing the red qualia or the blue qualia. And that's why the study of consciousness and qualia has never made any progress, and it never will.

John K Clark


John Clark

unread,
May 20, 2023, 1:56:14 PM5/20/23
to extro...@googlegroups.com
On Sat, May 20, 2023 at 1:24 PM Brent Allsop <brent....@gmail.com> wrote:

> I'm not asking about the type of light it reflects.  I'm asking, if it is reflecting light like that, what quality or property causes it to reflect that type of light?  

A substance reflects or emits fluorescent light of some wavelengths but not others because of the quantum mechanical behavior of electron shells that surround the nucleus of atoms, and in particular because of the Pauli Exclusion Principle; but I don't know why you asked as that has nothing to do with color qualia ..... or rather it *probably* has nothing to do with it, however as nobody knows anything about qualia (and never will) nobody knows for sure.

John K Clark

 

Stathis Papaioannou

unread,
May 20, 2023, 2:20:10 PM5/20/23
to extro...@googlegroups.com
On Sun, 21 May 2023 at 03:24, Brent Allsop <brent....@gmail.com> wrote:
In a way, I'm not saying it is impossible, I'm just talking about something different.

You are talking about "behaving like a human". I am talking about the quality, itself, that results in the person (honestly or dishonestly) saying something like: "My redness is like your greenness."

You always talk about "genuine experiences" but never say what those genuine experiences are, other than the casually downstream behavior it causes.  I am asking what is a "genuine experience" which causes that behavior?

I'm not asking about the type of light it reflects.  I'm asking, if it is reflecting light like that, what quality or property causes it to reflect that type of light?  (the light being very different from the thing reflecting that light.)

I am pretty sure I understand what you are talking about and am talking about the same thing: the subjective aspect of seeing red. I propose that any entity that is able to behave in the way humans do when they look at the world may also have similar subjectivity, because it is the type of information processing that results in this behaviour that gives rise to the subjectivity, not the subjectivity that gives rise to the behaviour.
--
Stathis Papaioannou

Brent Allsop

unread,
May 20, 2023, 2:26:44 PM5/20/23
to extro...@googlegroups.com
On Sat, May 20, 2023 at 11:34 AM John Clark <johnk...@gmail.com> wrote:
On Sat, May 20, 2023 at 11:20 AM Brent Allsop <brent....@gmail.com> wrote:

> Humans can represent knowledge abstractly, and computers can represent knowledge directly on physical properties or qualities the same as humans.  [...] But until that squishy brain directly experiences grueness, they can't know what that grueness property is like.  It doesn't have anything to do with being a computer or a human.  

I have no problem with that, but if intelligent humans and intelligent computers represent knowledge in the same physical way then whatever it is that generates qualia in humans must also do so in computers.

Yes, exactly, I agree.  If they use the same qualities, they will be the same in whatever subjective experience they are computationally bound with.  I'm not a squishy wetware racist against hardware.
 
So what are we arguing about?  

> I was describing how a squishy brain can have abstract knowledge earlier when I was talking about a new grue quality which someone has never experienced before. 

If neither a human or a computer has ever experienced "grue" before then neither one of them will know what it's like to experience it. 
 
> And a conscious human that suffers from achromatopsia (experiences everything in black and white, so has never experienced redness). 

It would be easy to test for achromatopsia, if they can't differentiate between red photons and blue photons then they have it,
I agree, and that causally downstream behavior is not what we are arguing about.
 
This is a problem, and Chalmers would agree with you that this is an impossibly hard problem, and he is asking you to "face up to this problem."  Stathis, do you agree with John, that this is an impossibly hard problem?:
but there is no test to determine if they are experiencing the red qualia or the blue qualia. And that's why the study of consciousness and qualia has never made any progress, and it never will.

We are arguing about whether that is possible as you are predicting, or not.
You are claiming this isn't approachable or demonstratable via science.
I agree that we will never bridge the explanatory gap, and never be able to communicate to something through only their abstract senses which use codes that are different from their referent properties being perceived.  But, I am arguing, or predicting, that there is a different way to directly (not causally downstream) gain knowledge than this type of abstracting perception from afar (causally downstream) which we will discover.  Our brain does this all day every day, and I am predicting we will discover how our brain does this so we can experimentally demonstrate it to all.

You are claiming we will never know the true intrinsic colorness qualities of things in this world, and that we will only ever know their causally downstream effects.  I am pointing out that consciousness is composed of diverse intrinsic properties we experience all at the same time as one unified subjective experience.  I'm pointing out that once we discover how the brain is computationally combining these intrinsic qualities into one unified composite experience (i.e. it is effing the ineffable), we will then be able to use that same physical process our brain does every day, between subjective brains, so we will finally directly experience the true intrinsic colorness qualities of things as composite knowledge, at least on the elemental property level, of which all composite subjective experiences are composed.

And that once we know that, we will then be able to ground codes like "redness" so we know what someone is saying when we say or perceive it.

Bottom line: I am a faithful curious scientist, and I want to know, and be able to demonstrate to all, what color things truly are, not just the colors they seem to be.  I am predicting that consciousness isn't an impossibly hard problem, as you claim, it is just an easily solvable quality problem.










 

John Clark

unread,
May 20, 2023, 2:47:44 PM5/20/23
to extro...@googlegroups.com
On Sat, May 20, 2023 at 2:26 PM Brent Allsop <brent....@gmail.com> wrote:

> We are arguing about whether that is possible as you are predicting, or not.

I am predicting that over the next thousand years we will make precisely as much progress in understanding consciousness and qualia as we made during the last thousand years. Zero.

> Bottom line: I am a faithful curious scientist, and I want to know, and be able to demonstrate to all, what color things truly are, not just the colors they seem to be. 

Huh? The redness qualia is what a 700 nanometer electromagnetic wave SEEMS TO BE to me, and the only way for you to know what that seems to be to me would be for you to become me, and even then you still wouldn't know because you wouldn't be you, you'd be me.

 John K Clark


Brent Allsop

unread,
May 20, 2023, 2:47:56 PM5/20/23
to extro...@googlegroups.com

OK, two different assumptions:

1. Information processing gives rise to redness.
2. Information processing is done on redness.

Would you agree that, whatever "information processing" gives rise to redness, it is a physical fact that this is the case?  As in, physical reality has an interface, such that if you perform the correct incantation of "information processing", a redness experience "arises" from that.

Also, you must agree that consciousness can be composed of one incantation for redness, and a different incantation for greenness, and that both redness and greenness can be computationally bound into one unified subjective composite experience which can represent things like subjective composite knowledge of strawberries?

So, now.  Do the neural substitution on that, and if you can do that, it will prove that it can't be that incantation, either.

Bummer.  I hate all the zombie contradictions and impossibly hard problems.  Maybe we should try a different assumption and see if all these impossibly hard problems ALL GO AWAY.  Is there a possibility that a different assumption might enable us to find out the true colorness qualities of things, not just the qualities things seem to be?








Brent Allsop

unread,
May 20, 2023, 2:55:31 PM5/20/23
to extro...@googlegroups.com

John, what is your consciousness made of?



John Clark

unread,
May 20, 2023, 3:11:34 PM5/20/23
to extro...@googlegroups.com
On Sat, May 20, 2023 at 2:55 PM Brent Allsop <brent....@gmail.com> wrote:

> John, what is your consciousness made of?

I am an adjective, I am the way atoms behave when they are organized in a Johnkclarkian way, and if I'm right when I say consciousness is the way data feels when it is being processed intelligently (and I really think I am, although I will never be able to prove it) then my consciousness is made of bits.

John K Clark

Stathis Papaioannou

unread,
May 20, 2023, 3:13:48 PM5/20/23
to extro...@googlegroups.com
On Sun, 21 May 2023 at 04:47, Brent Allsop <brent....@gmail.com> wrote:

OK, two different assumptions:

1. Information processing gives rise to redness.
2. Information processing is done on redness.

I don’t think 2 happens, only 1.

Would you agree that whatever "information processing" gives rise to redness, that it is a physical fact that this is the case?  As in, physical reality has an interface, such that if you perform the correct incantation of "information process" a redness experience "arises" from that.

Apparently that is the case.

Also, you must agree that consciousness can be composed of one incantation for redness, and different incantation for greenness, and that both redness and greenness can be computationally bound into one unified subjective composite experience which can represent things like subjective composite knowledge of strawberries?

I don’t agree that the incantations are unique or static. The same hardware may give rise to redness on one occasion, greenness on another.


So, now.  Do the neural substitution on that, and if you can do that, it will prove that it can't be that incantation, either.

No, because it can’t be just one thing. Anything that gives rise to the redness behaviour supports redness qualia. Redness qualia have no separate causal efficacy of their own.

Bummer.  I hate all the zombie contradictions and impossibly hard problems.  Maybe we should try a different assumption and see if all these impossibly hard problems ALL GO AWAY.  Is there a possibility that a different assumption might enable us to find out the true colorness qualities of things, not just the qualities things seem to be?

I don’t see a problem. It is the way the world seems to work.





On Sat, May 20, 2023 at 12:20 PM Stathis Papaioannou <stat...@gmail.com> wrote:


On Sun, 21 May 2023 at 03:24, Brent Allsop <brent....@gmail.com> wrote:
In a way, I'm not saying it is impossible, I'm just talking about something different.

You are talking about "behaving like a human". I am talking about the quality, itself, that results in the person (honestly or dishonestly) saying something like: "My redness is like your greenness."

You always talk about "genuine experiences" but never say what those genuine experiences are, other than the causally downstream behavior they cause.  I am asking: what is a "genuine experience" which causes that behavior?

I'm not asking about the type of light it reflects.  I'm asking, if it is reflecting light like that, what quality or property causes it to reflect that type of light?  (the light being very different from the thing reflecting that light.)

I am pretty sure I understand what you are talking about and am talking about the same thing: the subjective aspect of seeing red. I propose that any entity that is able to behave in the way humans do when they look at the world may also have similar subjectivity, because it is the type of information processing that results in this behaviour that gives rise to the subjectivity, not the subjectivity that gives rise to the behaviour.
--
Stathis Papaioannou

--
Stathis Papaioannou

Brent Allsop

unread,
May 20, 2023, 3:17:57 PM5/20/23
to extro...@googlegroups.com

I guess if that is the case, people will forever be reporting that consciousness is a remaining problem that science hasn't figured out yet, and never will.




Brent Allsop

unread,
May 20, 2023, 3:50:02 PM5/20/23
to extro...@googlegroups.com

Hi Stathis,

On Sat, May 20, 2023 at 1:13 PM Stathis Papaioannou <stat...@gmail.com> wrote:
On Sun, 21 May 2023 at 04:47, Brent Allsop <brent....@gmail.com> wrote:
Also, you must agree that consciousness can be composed of one incantation for redness, and different incantation for greenness, and that both redness and greenness can be computationally bound into one unified subjective composite experience which can represent things like subjective composite knowledge of strawberries?

I don’t agree that the incantations are unique or static. The same hardware may give rise to redness on one occasion, greenness on another.

Right, we both agree, and my question assumes something gives rise to redness.  My question is what?  At best, you seem to be saying it is turtles all the way down.  If  not, then what is responsible for redness, and how is it different from greenness?  How would I discover and demonstrate grueness?  How would I objectively observe redness, and how would I objectively know it isn't greenness?

I don’t see a problem. It is the way the world seems to work.

You don't have a problem with there being an eternal "impossibly hard" problem?  You don't ever want to see the headlines say: "Consciousness has finally been figured out" reporting that we now know the colorness qualities of at least something...?


Is there any other argument that makes you think we can't assume:

2. Information processing is done on redness.

than the neural substitution argument that incorrectly assumes that the machine being substituted can achieve computational binding of intrinsic qualities like redness and greenness (otherwise the neuro substitution would fail)?






Brent Allsop

unread,
May 20, 2023, 4:15:44 PM5/20/23
to extro...@googlegroups.com

What is data made of?


John Clark

unread,
May 20, 2023, 4:23:20 PM5/20/23
to extro...@googlegroups.com
On Sat, May 20, 2023 at 4:15 PM Brent Allsop <brent....@gmail.com> wrote:

> What is data made of?

As I've mentioned before, logically there can only be 2 possibilities: a chain of iterative questions either goes on forever or ends in a brute fact. 

John K Clark

Brent Allsop

unread,
May 20, 2023, 4:47:18 PM5/20/23
to extro...@googlegroups.com
On Sat, May 20, 2023 at 2:23 PM John Clark <johnk...@gmail.com> wrote:
On Sat, May 20, 2023 at 4:15 PM Brent Allsop <brent....@gmail.com> wrote:

> What is data made of?

As I've mentioned before, logically there can only be 2 possibilities, a chain of iterative questions either goes on forever or ends in a brute fact. 

You mean a brute demonstrable physical fact like +5 volts, or a redness quality?
Or the 1 or 0 that can be represented by either of those (but only if you have a properly operating transduction mechanism that tells you which is which.)





 

John Clark

unread,
May 20, 2023, 5:13:11 PM5/20/23
to extro...@googlegroups.com
On Sat, May 20, 2023 at 4:47 PM Brent Allsop <brent....@gmail.com> wrote:

>> As I've mentioned before, logically there can only be 2 possibilities, a chain of iterative questions either goes on forever or ends in a brute fact. 

>You mean a brute demonstrable physical fact like +5 volts, or a redness quality?

Obviously, if you're talking fundamentals, it can't be something like the redness quality or any color qualia, because you can completely ignore that sort of thing and still make a perfectly usable data processing machine. There cannot be one fundamental thing that makes up data but two: signal and no signal. A signal can be anything, but it can't be nothing, because the best definition of "nothing" I've ever heard is infinite unbounded homogeneity. To put it another way, for information to be transmitted you need contrast.
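
A minimal way to make that concrete, assuming we measure information as Shannon entropy: a stream with no contrast at all carries zero bits, while a stream alternating between two distinguishable levels can carry up to one bit per symbol. A quick sketch (the sample values are arbitrary):

from collections import Counter
from math import log2

def entropy_bits(samples):
    # Shannon entropy H = sum of p * log2(1/p), in bits per symbol.
    counts = Counter(samples)
    total = len(samples)
    return sum((c / total) * log2(total / c) for c in counts.values())

print(entropy_bits([5, 5, 5, 5, 5, 5, 5, 5]))  # 0.0: no contrast, nothing transmitted
print(entropy_bits([5, 0, 5, 0, 0, 5, 5, 0]))  # 1.0: two distinguishable levels, one bit per symbol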

John K Clark

Brent Allsop

unread,
May 20, 2023, 6:28:51 PM5/20/23
to extro...@googlegroups.com

OK, you need contrast.
So you can distinguish +5 volts from 0 volts.  You set up a transducing system that says 0 volts is 1 and +5 volts is 0.  And that is a "sort of thing from which you can still make a perfectly usable data processing machine", but only if you have a transducing system to tell you whether it is the +5 volts, or the 0 volts, that is the 1.

Or is it the other way around, and the 1 is more fundamental than the voltage?  How would that work?  Where would you get the 1 from (what fact), so you can interpret it as either +5 volts or 0 volts?
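
One way to picture the question, as a sketch with made-up voltage values rather than any claim about how chips or brains actually do it: "transduction" here is just a fixed rule mapping physical levels to abstract bits, and the same voltages decode into a usable message under either convention, provided sender and receiver share the rule.

voltages = [5.0, 0.0, 0.0, 5.0, 5.0]

def decode(levels, high_means_one=True):
    # Map each physical level to an abstract bit under the chosen convention.
    if high_means_one:
        return [1 if v > 2.5 else 0 for v in levels]
    return [0 if v > 2.5 else 1 for v in levels]

print(decode(voltages, high_means_one=True))   # [1, 0, 0, 1, 1]
print(decode(voltages, high_means_one=False))  # [0, 1, 1, 0, 0]
# Which physical level "is" the 1 is pure convention; only the agreed-upon
# mapping lets the bits be recovered from the volts.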



Stathis Papaioannou

unread,
May 20, 2023, 7:42:11 PM5/20/23
to extro...@googlegroups.com
On Sun, 21 May 2023 at 05:50, Brent Allsop <brent....@gmail.com> wrote:

Hi Stathis,

On Sat, May 20, 2023 at 1:13 PM Stathis Papaioannou <stat...@gmail.com> wrote:
On Sun, 21 May 2023 at 04:47, Brent Allsop <brent....@gmail.com> wrote:
Also, you must agree that consciousness can be composed of one incantation for redness, and different incantation for greenness, and that both redness and greenness can be computationally bound into one unified subjective composite experience which can represent things like subjective composite knowledge of strawberries?

I don’t agree that the incantations are unique or static. The same hardware may give rise to redness on one occasion, greenness on another.

Right, we both agree, and my question assumes something gives rise to redness.  My question is what?  At best, you seem to be saying it is turtles all the way down. 

I don’t claim that it is an infinite regress, which is what that term is used for.

If  not, then what is responsible for redness, and how is it different from greenness?  How would I discover and demonstrate grueness?  How would I objectively observe redness, and how would I objectively know it isn't greenness?

You would display a change in behaviour if your greenness changed to redness: you would say “all the green things now look red”. Or you would say “I see a new colour that I have never seen before, call it grue”. This is an objective expression of your subjective experience.

I don’t see a problem. It is the way the world seems to work.

You don't have a problem with there being an eternal "impossibly hard" problem?  You don't ever want to see the headlines say: "Consciousness has finally been figured out" reporting that we now know the colorness qualities of at least something...?

The Hard Problem is why we should have these experiences at all rather than just being dead inside, as you assume that robots are. No possible scientific fact can ever answer that question. If you are right and redness is an intrinsic property of a substance such as glutamate, there is still the question of why glutamate produces redness rather than greenness and why it produces any qualia at all.

Is there any other argument that makes you think we can't assume:

2. Information processing is done on redness.

than the neural substitution argument that incorrectly assumes that the machine being substituted can achieve computational binding of intrinsic qualities like redness and greenness (otherwise the neuro substitution would fail)?

I don’t see the binding problem as a real problem. Computers functionally bind different types of data using various techniques such as noting that they are synchronised in time or by being trained to associate different modalities, like a self-driving car that uses cameras, ultrasound and radar.
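
A minimal sketch of what binding by time synchronisation might look like in code; the stream names, readings and window size are just illustrative assumptions, not a claim about any particular system:

from collections import defaultdict

camera = [(0.01, "red blob"), (0.52, "green blob")]
radar = [(0.02, "object at 12 m"), (0.50, "object at 11 m")]

def bind_by_time(streams, window=0.1):
    # Group (timestamp, reading) pairs from several streams into shared time bins.
    bins = defaultdict(list)
    for name, stream in streams.items():
        for t, reading in stream:
            bins[round(t / window)].append((name, reading))
    return [readings for _, readings in sorted(bins.items())]

for bound in bind_by_time({"camera": camera, "radar": radar}):
    print(bound)
# [('camera', 'red blob'), ('radar', 'object at 12 m')]
# [('camera', 'green blob'), ('radar', 'object at 11 m')]
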
--
Stathis Papaioannou

Brent Allsop

unread,
May 20, 2023, 10:34:06 PM5/20/23
to extro...@googlegroups.com
On Sat, May 20, 2023 at 5:42 PM Stathis Papaioannou <stat...@gmail.com> wrote:
On Sun, 21 May 2023 at 05:50, Brent Allsop <brent....@gmail.com> wrote:
If  not, then what is responsible for redness, and how is it different from greenness?  How would I discover and demonstrate grueness?  How would I objectively observe redness, and how would I objectively know it isn't greenness?

You would display a change in behaviour if your greenness changed to redness: you would say “all the green things now look red”. Or you would say “I see a new colour that I have never seen before, call it grue”. This is an objective expression of your subjective experience. 

Right, and you could objectively isolate a description of whatever it is that is the necessary and sufficient initial cause of the system reporting those things, giving you your dictionary. And the set of redness and the set of greenness must be disjoint sets.  Perception of such different sets, through our senses, would be a weak form of effing the ineffable, and as Descartes pointed out, you could doubt that.  So we need some way to directly experience them both, at the same time, in a way that can't be doubted (not perceived, causally downstream, but directly apprehended).  As in "I think with redness, therefore I know what redness is like."

And when you do a neuro substitution of redness (or whatever thing or function is responsible for what you isolated), with something different, like greenness, the system must be able to report that greenness isn't the same as redness.  So at that point, the neuro substitution would fail because anything but redness would be different, by definition.


I don’t see a problem. It is the way the world seems to work.

You don't have a problem with there being an eternal "impossibly hard" problem?  You don't ever want to see the headlines say: "Consciousness has finally been figured out" reporting that we now know the colorness qualities of at least something...?

The Hard Problem is why we should have these experiences at all rather than just being dead inside, as you assume that robots are. No possible scientific fact can ever answer that question. If you are right and redness is an intrinsic property of a substance such as glutamate, there is still the question of why glutamate produces redness rather than greenness and why it produces any qualia at all.

Yes, I agree.  We don't know why Force equals mass times acceleration.  We just know that it does.  And knowing only that allows us to dance in the heavens.  Just like reliably predicting that glutamate (or some function, or whatever) has a redness quality, and that if we can discover a bunch of new things for which it is a fact that they have many other new qualities, we will be able to amplify our intelligence with not only thousands of times more pixels, but thousands of times more qualities for each pixel, greatly increasing our subjective "situational awareness" intelligence.

 
Is there any other argument that makes you think we can't assume:

2. Information processing is done on redness.

than the neural substitution argument that incorrectly assumes that the machine being substituted can achieve computational binding of intrinsic qualities like redness and greenness (otherwise the neuro substitution would fail)?

I don’t see the binding problem as a real problem. Computers functionally bind different types of data using various techniques such as noting that they are synchronised in time or by being trained to associate different modalities, like a self-driving car that uses cameras, ultrasound and radar.

I go over the difference in the "Computational Binding" chapter of the video.  There is only "binding" between at most one pixel at a time in a computer CPU.  And lots of intermediate results being built up, after lots of individual binding comparisons, is very different from a system that can be aware of the intrinsic qualities of LOTS of pixels at the same time, in such a way that if any one of the intrinsic properties changes, the entire system must be able to know about and report any single pixel change from all the rest.

With lots of individual sequences of comparisons of single pixels, any pixel can be represented with anything you want, as long as there is a dictionary mapping it to the abstract meaning (which is the sleight of hand a substitution performs when it swaps in changed mapping dictionaries), so it will eventually end up with the same abstract, causally downstream utterance: "it is red".  Again, computing directly on intrinsic properties like this, where the entire system can be directly aware of all the intrinsic properties, in parallel, at one time, is very different.
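
A tiny sketch of that dictionary-mapping point, with made-up token names (nothing here is meant as a claim about what the brain actually uses): any token can stand in for "red" as long as the downstream dictionary is swapped to match, so the final report never changes.

def report(pixel_token, dictionary):
    # The abstract report depends only on what the dictionary says the token means.
    return "it is red" if dictionary[pixel_token] == "red" else "it is not red"

print(report("+5V", {"+5V": "red", "0V": "green"}))                   # it is red
print(report("glutamate", {"glutamate": "red", "glycine": "green"}))  # it is red
# Swapping the token and the dictionary together leaves the report unchanged.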

As you point out, nothing like that is possible in your substitution, because every time I point out a possible change which could be redness, you insist that we can also swap in a dictionary mapping system that maps the change back to the same meaning, preventing that change from being anything that is computationally bound, so the entire system can know when redness has changed to greenness, as subjective composite experience requires.


Stathis Papaioannou

unread,
May 21, 2023, 12:49:15 AM5/21/23
to extro...@googlegroups.com
On Sun, 21 May 2023 at 12:34, Brent Allsop <brent....@gmail.com> wrote:


On Sat, May 20, 2023 at 5:42 PM Stathis Papaioannou <stat...@gmail.com> wrote:
On Sun, 21 May 2023 at 05:50, Brent Allsop <brent....@gmail.com> wrote:
If  not, then what is responsible for redness, and how is it different from greenness?  How would I discover and demonstrate grueness?  How would I objectively observe redness, and how would I objectively know it isn't greenness?

You would display a change in behaviour if your greenness changed to redness: you would say “all the green things now look red”. Or you would say “I see a new colour that I have never seen before, call it grue”. This is an objective expression of your subjective experience. 

Right, and you could objectively isolate a description of whatever it is that is the necessary and sufficient initial cause of the system reporting those things, giving you your dictionary. And the set of redness and the set of greenness must be a disjoint set.  Perception of such different sets, through our senses, would be a weak form of effing the ineffable, and as descartes pointed out, you could doubt that.  So we need some way to directly experience them both, at the same time, in a way that can't be doubted (not perceived, causally downstream, but directly apprehended) .  As in "I think with redness, therefore I know what redness is like."

And when you do a neuro substitution of redness (or whatever thing or function is responsible for what you isolated), with something different, like greenness, the system must be able to report that greenness isn't the same as redness.  So at that point, the neuro substitution would fail because anything but redness would be different, by definition.

The system would not report that redness was different if the substitution left the output of the system the same, by definition. This is the point I keep making, the whole basis of the functionalist argument. We work out what glutamate does in the neuron, replace it with something else that does the same thing, and the redness (or whatever) will necessarily be preserved. “What glutamate does in the neuron” means in what way it causes physical changes. You seem to have the idea that “redness” itself can somehow cause physical changes, and that is the problem. Qualia cannot do anything themselves, it is only physical forces, matter and energy that can do things. In the case of glutamate what it does in the brain is very limited: it attaches to glutamate receptor proteins and changes their shape slightly.


I don’t see a problem. It is the way the world seems to work.

You don't have a problem with there being an eternal "impossibly hard" problem?  You don't ever want to see the headlines say: "Consciousness has finally been figured out" reporting that we now know the colorness qualities of at least something...?

The Hard Problem is why we should have these experiences at all rather than just being dead inside, as you assume that robots are. No possible scientific fact can ever answer that question. If you are right and redness is an intrinsic property of a substance such as glutamate, there is still the question of why glutamate produces redness rather than greenness and why it produces any qualia at all.

Yes, I agree.  We don't know why Force equals mass time acceleration.  We just know that it does.  And knowing only that allows us to dance in the heavens.  Just like reliably predicting that glutamate (or some function, or whatever) has a redness quality, and that if we can discover a bunch of new things for which it is a fact that they have many other new qualities, we will be able to amplify our intelligence with not only thousands of times more pixels, but thousands of times more qualities for each pixel, greatly increasing our subjective "situational awareness" intelligence.

 
Is there any other argument that makes you think we can't assume:

2. Information processing is done on redness.

than the neural substitution argument that incorrectly assumes that the machine being substituted can achieve computational binding of intrinsic qualities like redness and greenness (otherwise the neuro substitution would fail)?

I don’t see the binding problem as a real problem. Computers functionally bind different types of data using various techniques such as noting that they are synchronised in time or by being trained to associate different modalities, like a self-driving car that uses cameras, ultrasound and radar.

I go over the difference in the "Computational Binding" chapter of the video.  There is only "binding" between at most one pixel at a time in a computer CPU.  And lots of intermediate results being built up, after lots of individual binding comparisons, is very different than a system that can be aware of the intrinsic qualities of LOTS of pixels at the same time, in a way so that if any one of the intrinsic properties changes, the entire system must be able to know about and report any single pixel change from all the rest.

With lots of individual sequences of comparisons of single pixels, any pixel can be represented with anything you want, as long as there is a dictionary mapping it (as is the sleight of hand performed in a substitution, with changing mapping dictionaries, does) to the abstract meaning, so it will eventually end up with the same abstract causally downstream uttering: "it is red".  Again, computing directly on intrinsic properties like this, where the entire system can be directly aware of all the intrinsic properties, in parallel, at one time is very different.

As you point out, nothing like that is possible in your substitution, because every time I point out a possible change which it could be redness,  you insist that we can also swap in a dictionary mapping system that maps the change back to the same meaning, preventing that change from being anything that is computationally bound, so the entire system can know when redness has changed to greenness, as subjective composite experience requires.


--
Stathis Papaioannou

Stuart LaForge

unread,
May 21, 2023, 2:36:41 AM5/21/23
to extropolis
I actually very much appreciate your intuition here, Lawrence. For a while, I had considered consciousness as a wave phenomenon, similar to a signal atop a carrier wave, which would be the result of processing the white noise of all sensory input one receives from the environment. During processing, it might very well be downshifted to pink noise and filtered to include only the sensory information relevant to the subject as they become conscious of the sensory input they are receiving from their environment. That being said, consciousness would seem to be the signal one applies atop the white noise carrier signal of the senses. In other words, consciousness is a combination of one's awareness of the environment and one's reaction to that awareness.

I had tried to find other approaches after that, despite the evidence of literal brain-waves displayed by EEG, because I had realized that waves could not be mediators of consciousness: waves are purely linear, whereas consciousness is a non-linear phenomenon. Waves, even ones as complex as quantum fields, are no more and no less than the sum of their constituent parts, cf. Fourier. But consciousness, like all emergent phenomena, is more than the sum of its parts. Therefore, consciousness cannot be explained by mere epiphenomenalism, because the scope of consciousness exceeds the scope of the physical processes that bring it about. It is like a shape of dimension 10^6 embedded in a space-time of 3+1 dimensions.
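
A rough numerical sketch of that white-to-pink conversion, using numpy with an arbitrary length and seed; the only point it shows is that the 1/f-shaped signal gains temporal correlations (a fluctuation now influencing a fluctuation later) that the white noise lacked:

import numpy as np

rng = np.random.default_rng(0)
white = rng.standard_normal(4096)

# Shape the flat (white) spectrum into a 1/f (pink) power spectrum.
spectrum = np.fft.rfft(white)
freqs = np.fft.rfftfreq(white.size)
freqs[0] = freqs[1]  # avoid dividing by zero at DC
pink = np.fft.irfft(spectrum / np.sqrt(freqs), n=white.size)

def lag1_autocorr(x):
    # Correlation between each sample and the next one.
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

print(lag1_autocorr(white))  # near 0: successive white-noise samples are independent
print(lag1_autocorr(pink))   # clearly positive: earlier fluctuations influence later ones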

Stuart LaForge

On Saturday, May 13, 2023 at 5:21:02 PM UTC-7 goldenfield...@gmail.com wrote:
What you write is rather standard fare. I think sentience, or even bio-sense, has its basis in some means by which white noise processes, such as statistical mechanics and quantum mechanics, are adjusted to have pink noise. This means a fluctuation at one time can influence a fluctuation at a later time. This is then the start of molecular information storage and memory. I am not certain whether molecular information processing of this sort is the basis for bio-sense, or whether it is the means by which white noise is filtered or transitioned into pink noise that is.

LC

On Saturday, May 13, 2023 at 4:44:22 PM UTC-5 johnk...@gmail.com wrote:
On Sat, May 13, 2023 at 5:10 PM Lawrence Crowell <goldenfield...@gmail.com> wrote:

> This does make me ponder what is the relationship between consciousness and intelligence. I suspect GPT-4 and other AI systems may be intelligent, but they are so without underlying consciousness. Our intelligence is in a sense built upon a pre-existing substratum of sentience.

If the two things can be separated and sentience is not the inevitable byproduct of intelligence, then how and why did Natural Selection bother to produce consciousness, when you know with absolute certainty it did so at least once and probably many billions of times? After all, except by way of intelligent behavior, Evolution can't directly observe consciousness any better than we can, and natural selection can't select for something it can't see.  I strongly suspect that artificial sentience is easy, but artificial intelligence is hard.

> My dogs are sentient, but when it comes to numerical intelligence they have none, and indeed very poor spatial sense. 

Your dogs may not know much mathematics and they probably wouldn't do very well on an IQ test, but their behavior is not random, they show some degree of intelligence.  

> we subjectively experience as consciousness

We? I certainly subjectively experience consciousness and I assume you do too but only because I have observed intelligent behavior on your part. I have no choice, I have to accept as an axiom of existence that consciousness is the inevitable byproduct of intelligence because, even though I can't disprove it, I simply could not function if I really believed in solipsism. 

> is built on a deeper substrate of biological activity.

But again, if it wasn't intelligent behavior then what on earth could the Evolutionary pressure have been that led to the development of consciousness?

John K Clark


 

On Friday, May 12, 2023 at 4:01:04 AM UTC-5 johnk...@gmail.com wrote:
On Thu, May 11, 2023 at 6:33 PM Lawrence Crowell <goldenfield...@gmail.com> wrote:

> I have found 2 mistakes it [GPT-4] has made. It has caught me on a few errors as well.

To me that sounds like very impressive performance. If you were working with a human colleague who did the same thing would you hesitate in saying he was exhibiting some degree of intelligence? 

 > It is a very good emulator of intelligence.

What's the difference between being intelligent and emulating intelligence?  It must be more than the degree of squishiness of the brain.
 
> It also is proving to be a decent first check on my work. It might be said it passes some criterion for Turing tests, though I have often thought this idea was old fashioned in a way. 

Well, it is old I'll grant you that. Turing didn't invent the "Turing Test", he just pointed out something that was ancient, that we use everyday, and was so accepted and ubiquitous that nobody had given it much thought before. I'm sure you, just like everybody else, has at one time or another in your life encountered people who you consider to be brilliant and people who you consider to be stupid, when making your determination, if you did not use the Turing Test (which is basically just observing behavior and judging if it's intelligent ) what method did you use ? How in the world can you judge if something is intelligent or not except by observing if it does anything intelligent?

John K Clark


John Clark

unread,
May 21, 2023, 6:47:55 AM5/21/23
to extro...@googlegroups.com
On Sat, May 20, 2023 at 6:28 PM Brent Allsop <brent....@gmail.com> wrote:

> OK, you need contrast.
So you can distinguish +5 volts from 0 volts.  So you set up a transducing system that says 0 volts is 1 and +5 volts is 0.  And that is a "sort of thing from which you can still make a perfectly usable data processing machine"  but only if you have a transducing system to tell you whether it is the +5 volts, or the 0 volts, that is the 1.

 A "Transducing System" sounds very grand but the simplest one able to tell the difference between 5 volts and 0 is just an electromagnet, and they were invented two centuries ago. 
 
> Or is it the other way around, and the 1 is more fundamental than the voltage? 

The "1" describes the voltage, so which is more fundamental depends on if adjectives are more fundamental than nouns. And that may be a meaningless question, perhaps that's why so little progress has been made in answering it. 

John K Clark

John Clark

unread,
May 21, 2023, 7:07:10 AM5/21/23
to extro...@googlegroups.com
On Sat, May 20, 2023 at 10:34 PM Brent Allsop <brent....@gmail.com> wrote:

> when you do a neuro substitution of redness (or whatever thing or function is responsible for what you isolated), with something different, like greenness, the system must be able to report that greenness isn't the same as redness. 

Yes.
 
> So at that point, the neuro substitution would fail because anything but redness would be different, by definition.

Obviously, if a neuro upload could no longer tell the difference between red and green photons then its behavior would change, but that could be easily rectified, because these days new color camera replacement parts are cheap. The huge advantage that digital has over analog is that all digital needs to do is notice that there is a difference between 2 things; it doesn't need to concern itself with the nature of that difference, or how large it is, as long as the difference is large enough to detect.
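
To make that concrete with a toy comparator (the threshold value is chosen arbitrarily): all a digital stage needs is a yes/no answer about whether two levels differ by more than the noise floor, not the size or physical nature of the difference.

def distinguishable(a, b, threshold=0.5):
    # A digital stage only asks whether the difference clears the threshold.
    return abs(a - b) > threshold

print(distinguishable(5.0, 0.0))  # True: plenty of contrast
print(distinguishable(5.0, 4.9))  # False: too small to detect reliably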

John K Clark


 

Brent Allsop

unread,
May 21, 2023, 7:22:16 AM5/21/23
to extro...@googlegroups.com
On Sat, May 20, 2023 at 10:49 PM Stathis Papaioannou <stat...@gmail.com> wrote:
On Sun, 21 May 2023 at 12:34, Brent Allsop <brent....@gmail.com> wrote:
On Sat, May 20, 2023 at 5:42 PM Stathis Papaioannou <stat...@gmail.com> wrote:
On Sun, 21 May 2023 at 05:50, Brent Allsop <brent....@gmail.com> wrote:
If  not, then what is responsible for redness, and how is it different from greenness?  How would I discover and demonstrate grueness?  How would I objectively observe redness, and how would I objectively know it isn't greenness?

You would display a change in behaviour if your greenness changed to redness: you would say “all the green things now look red”. Or you would say “I see a new colour that I have never seen before, call it grue”. This is an objective expression of your subjective experience. 

Right, and you could objectively isolate a description of whatever it is that is the necessary and sufficient initial cause of the system reporting those things, giving you your dictionary. And the set of redness and the set of greenness must be a disjoint set.  Perception of such different sets, through our senses, would be a weak form of effing the ineffable, and as descartes pointed out, you could doubt that.  So we need some way to directly experience them both, at the same time, in a way that can't be doubted (not perceived, causally downstream, but directly apprehended) .  As in "I think with redness, therefore I know what redness is like."

And when you do a neuro substitution of redness (or whatever thing or function is responsible for what you isolated), with something different, like greenness, the system must be able to report that greenness isn't the same as redness.  So at that point, the neuro substitution would fail because anything but redness would be different, by definition.

The system would not report that redness was different if the substitution left the output of the system the same, by definition.

It is a waste of time to talk about this case.  At some point you must get to one or more neurons (and the neurotransmitters of all the downstream synapses) that are necessary for at least one pixel of redness in our visual subjective knowledge.  And if you change redness to anything else, like greenness, and especially if you replace it with something 100% abstract, you will not be able to reproduce redness, by definition.  So you won't be able to proceed any further with the substitution.

You often talk about changing glutamate to something different, then you demand that you also change the receptor so that this change from glutamate results in the neuron firing the same way as when it was glutamate.  When you make this demand, this results in this glutamate not being a necessary part of redness, so there must be something else in the system that is necessary for redness.  And if you substitute the entire system, while making this demand, then all you've done is demand that nothing in the system can be responsible for redness.  Again, an irrelevant case, as we know something in the brain is responsible for redness.

If you would not demand that one also change the receptors when you change glutamate, so that the neuron would then behave differently, then this is a theoretical candidate for something that could be responsible for redness.  But, again, if this is the case, the neuro substitution would fail at that point.  The bottom line is, there must be something in the system responsible for redness.  Just making arbitrary demands that nothing can be responsible for redness doesn't prove anything about a system that does have something that is responsible for redness.


This is the point I keep making, the whole basis of the functionalist argument.

Yes, and I asked elsewhere if this is the only reason you reject this assumption but you did not answer:

2.  Functionality runs on redness. 

Is there any other argument you can make to justify your rejection of this as a hypothetical possibility?
 
We work out what glutamate does in the neuron, replace it with something else that does the same thing, and the redness (or whatever) will necessarily be preserved.

As I keep trying to say, this can't be responsible for redness, by definition, because when  you change whatever is responsible for redness to anything that is not redness, the system must report that change, and the neuro substitution can't proceed.
 
“What glutamate does in the neuron” means in what way it causes physical changes.
Yes, I agree.
 
You seem to have the idea that “redness” itself can somehow cause physical changes, and that is the problem.
I do not do this, and this is not the problem.  I'm simply saying that our description of some physical change is a description of what is necessary for redness.

 
Qualia cannot do anything themselves, it is only physical forces, matter and energy that can do things.
You are going too far by talking about physics like this.  All we know are the descriptions of downstream causal effects or behavior of physics.  And descriptions of this can't tell you anything about the true nature of the physics, only the downstream causal effects of those physics.  Detecting the light reflecting off of the strawberry tells you very little about the properties of the surface of the strawberry, since the light is nothing like the properties of the surface of the strawberry.
 
In the case of glutamate what it does in the brain is very limited: it attaches to glutamate receptor proteins and changes their shape slightly.

Again, as long as you make this demand, all you are doing is demanding that it can't be this that is responsible for redness.   So something else must be responsible for redness.  And demanding that nothing can be responsible for redness is just a waste of time as we know something must be responsible for redness.  There must be something in the system such that, when a single pixel of knowledge changes from anything but redness, the entire computationally bound system must be affected.  Demanding that nothing can do that is just saying the system you are substituting can't do what is required to have lots of computationally bound pixels of intrinsic colorness qualities.

If you did not make this demand, and if you allow that changing glutamate (and possibly all the other synapses in the same neuron also firing with the same glutamate) has a broad effect on the system, then this could be a candidate for what is responsible for redness, and changing redness to anything else would drastically change the way the system behaves when that single pixel is no longer redness.  THEN and only then would it be what is necessary for redness.  Just saying that nothing can be responsible for redness is a waste of our time, because we know something must be responsible for redness.  And the system must be able to detect when something in the system, which is redness, changes to something that is not redness.








 

Brent Allsop

unread,
May 21, 2023, 7:25:57 AM5/21/23
to extro...@googlegroups.com
On Sun, May 21, 2023 at 5:07 AM John Clark <johnk...@gmail.com> wrote:
On Sat, May 20, 2023 at 10:34 PM Brent Allsop <brent....@gmail.com> wrote:

> when you do a neuro substitution of redness (or whatever thing or function is responsible for what you isolated), with something different, like greenness, the system must be able to report that greenness isn't the same as redness. 

Yes.
True, and relevant to the discussion of subjective qualities.
 
> So at that point, the neuro substitution would fail because anything but redness would be different, by definition.

Obviously if a neuro upload could no longer tell the difference between red and green photons then its behavior would change, but that could be easily rectified because these days new color camera replacement parts are cheap. The huge advantage that digital has over analog is that all digital needs to do is notice that there is a difference between 2 things, it doesn't need to concern itself with the nature of that difference or how large it is as long as the difference is large enough to detect.

Also, obviously true.  But in my opinion, like much of what you say, this has nothing to do with a discussion of subjective qualities.

 

Brent Allsop

unread,
May 21, 2023, 7:31:36 AM5/21/23
to extro...@googlegroups.com
On Sun, May 21, 2023 at 4:47 AM John Clark <johnk...@gmail.com> wrote:
On Sat, May 20, 2023 at 6:28 PM Brent Allsop <brent....@gmail.com> wrote:

> OK, you need contrast.
So you can distinguish +5 volts from 0 volts.  So you set up a transducing system that says 0 volts is 1 and +5 volts is 0.  And that is a "sort of thing from which you can still make a perfectly usable data processing machine"  but only if you have a transducing system to tell you whether it is the +5 volts, or the 0 volts, that is the 1.

 A "Transducing System" sounds very grand but the simplest one able to tell the difference between 5 volts and 0 is just an electromagnet, and they were invented two centuries ago. 
 
> Or is it the other way around, and the 1 is more fundamental than the voltage? 

The "1" describes the voltage, so which is more fundamental depends on if adjectives are more fundamental than nouns.

This has nothing to do with "adjectives" or "nouns" it has to do with what can be experimentally demonstrated.  You can experimentally demonstrate +5 volts.  A "1" can only exist if you have a relay telling you +5 volts represents a 1 or a 0.  Otherwise, you don't know what is a 1, and what isn't.
 
And that may be a meaningless question, perhaps that's why so little progress has been made in answering it. 

No, unless you mean so little progress in making good, convincing, and experimentally demonstrable arguments that you can get an "it" from any "bit".


 

John Clark

unread,
May 21, 2023, 8:10:18 AM5/21/23
to extro...@googlegroups.com
On Sun, May 21, 2023 at 7:31 AM Brent Allsop <brent....@gmail.com> wrote:

>>> Or is it the other way around, and the 1 is more fundamental than the voltage? 

>> The "1" describes the voltage, so which is more fundamental depends on if adjectives are more fundamental than nouns.

> This has nothing to do with "adjectives" or "nouns"

I think it does because I think "I" is the way matter behaves when it is organized in a Johnkclarkian way.  
 
> You can experimentally demonstrate +5 volts.  A "1" can only exist if you have a relay telling you +5 volts represents a 1 or a 0.  Otherwise, you don't know what is a 1, and what isn't.

Pronouns, especially personal pronouns, can sweep a lot of philosophical blunders under the rug. The above quotation contains 37 words and 4 of them, about 11%, are "you"; but the way atoms behave when they are arranged in a Brentallsopian way has not defined that word.

>> that may be a meaningless question, perhaps that's why so little progress has been made in answering it. 

> No.  unless you mean so little progress in making good convincing, and experimentally demonstrable arguments that you can get an "it" from any "bit".

The difference between science and philosophy is that in philosophy you don't even know what the right questions are, much less the answers. The ancient Greeks would have said that one of the deepest questions possible is answering why there are only 7 planets: the Sun, the Moon, Mercury, Venus, Mars, Jupiter and Saturn.  Today that question is still unanswered and nobody is even trying to answer it, because today we realize it's a silly question.  It's a fact that in 2023 we are no closer to answering any of the questions you seem to be interested in than we were in 1923 or 1823 or even 2023 BC, which makes me strongly suspect that you're asking the wrong questions because they're ultimately meaningless.

 John K Clark

Stathis Papaioannou

unread,
May 21, 2023, 11:35:57 AM5/21/23
to extro...@googlegroups.com
On Sun, 21 May 2023 at 21:22, Brent Allsop <brent....@gmail.com> wrote:


On Sat, May 20, 2023 at 10:49 PM Stathis Papaioannou <stat...@gmail.com> wrote:
On Sun, 21 May 2023 at 12:34, Brent Allsop <brent....@gmail.com> wrote:
On Sat, May 20, 2023 at 5:42 PM Stathis Papaioannou <stat...@gmail.com> wrote:
On Sun, 21 May 2023 at 05:50, Brent Allsop <brent....@gmail.com> wrote:
If  not, then what is responsible for redness, and how is it different from greenness?  How would I discover and demonstrate grueness?  How would I objectively observe redness, and how would I objectively know it isn't greenness?

You would display a change in behaviour if your greenness changed to redness: you would say “all the green things now look red”. Or you would say “I see a new colour that I have never seen before, call it grue”. This is an objective expression of your subjective experience. 

Right, and you could objectively isolate a description of whatever it is that is the necessary and sufficient initial cause of the system reporting those things, giving you your dictionary. And the set of redness and the set of greenness must be a disjoint set.  Perception of such different sets, through our senses, would be a weak form of effing the ineffable, and as descartes pointed out, you could doubt that.  So we need some way to directly experience them both, at the same time, in a way that can't be doubted (not perceived, causally downstream, but directly apprehended) .  As in "I think with redness, therefore I know what redness is like."

And when you do a neuro substitution of redness (or whatever thing or function is responsible for what you isolated), with something different, like greenness, the system must be able to report that greenness isn't the same as redness.  So at that point, the neuro substitution would fail because anything but redness would be different, by definition.

The system would not report that redness was different if the substitution left the output of the system the same, by definition.

It is a waste of time to talk about this case.  At some point you must get to one or more neurons (and the neurotransmitters of all the downstream synapses) that are necessary for at least one pixel redness in our visual subjective knowledge.  And if you change redness to anything else, like greeness, and especially if you change with something 100% abstract, you will not be able to reproduce redness, by definition.  So you won't be able to proceed any further with the  substitution.

You often talk about changing glutamate to something different, then you demand that you also change the receptor so that this change from glutamate results in the neuron firing the same way as when it was glutamate.  When you make this demand, this results in this glutamate not being a necessary part of redness, so there must be something else in the system that is necessary for redness.  And if you substitute the entire system, while making this demand, then all you've done is demand that nothing in the system can be responsible for redness.  Again, an irrelevant case, as we know something in the brain is responsible for redness.

Nothing is UNIQUELY responsible for redness, in the same way that nothing is uniquely responsible for any other function of a system. There are many components involved in printing the letter “A” between the keyboard and the sheet of paper, if any of them go wrong there will be no letter printed or it will be distorted, but if any components are appropriately substituted the letter will be printed normally. You find it incredible that this is the case because it shows nothing is responsible for the letter “A”. You might also say “let’s do the experiment and see if it works”. But if you manage to break the printer every time you try to replace a gear to show that the substitution works, that does not mean that that gear is uniquely responsible for the letter A, it just means that as a practical matter you have been unable to find a replacement.


If you would not demand that one also change the receptors, when you change glutamate, so the neuron would then behave differently, when you change glutamate, then this is a theoretical candidate for something that could be responsible for redness.  But, again, if this is the case, the neuro substitution would fail at that point.  The bottom line is, there must be something in the system responsible for redness.  Just making arbitrary demands that nothing can be responsible for redness, doesn't prove anything about a system that does have something that is responsible for redness.


This is the point I keep making, the whole basis of the functionalist argument.

Yes, and I asked elsewhere if this is the only reason you reject this assumption but you did not answer:

2.  Functionality runs on redness. 

Is there any other argument you can make to justify your rejection of this as a hypothetical possibility?
 
We work out what glutamate does in the neuron, replace it with something else that does the same thing, and the redness (or whatever) will necessarily be preserved.

As I keep trying to say, this can't be responsible for redness, by definition, because when  you change whatever is responsible for redness to anything that is not redness, the system must report that change, and the neuro substitution can't proceed.

The system cannot report a change when the only physical change that the glutamate makes in the system, altering the shape of the receptor protein, is replicated. 

“What glutamate does in the neuron” means in what way it causes physical changes.
Yes, I agree.
 
You seem to have the idea that “redness” itself can somehow cause physical changes, and that is the problem.
I do not do this, and this is not the problem.  I'm simply saying that our description of some physical change is a description of what is necessary for redness.

 
Qualia cannot do anything themselves, it is only physical forces, matter and energy that can do things.
You are going too far by talking about physics like this.  All we know are the descriptions of downstream causal effects or behavior of physics.  And descriptions of this can't tell you anything about the true nature of the physics, only the downstream causal effects of those physics.  Just because you detect light reflecting off of the strawberry, tells you very little about the properties of the surface of the strawberry, since the light is nothing like the properties of the surface of the strawberry.
 
In the case of glutamate what it does in the brain is very limited: it attaches to glutamate receptor proteins and changes their shape slightly.

Again, as long as you make this demand, all you are doing is demanding that it can't be this that is responsible for redness.   

If you knock out the glutamate in the brain and the redness changes (which is quite possible) then it means that the glutamate was partly responsible for redness, along with a billion other components. If you restore the function of the glutamate with a substitute and everything returns to normal, including the redness, that confirms it.

So something else must be responsible for redness.  And demanding that nothing can be responsible for redness is just a waste of time as we know something must be responsible for redness.  There must be something in the system such that, when a single pixel of knowledge changes from anything but redness, the entire computationally bound system must be affected.  Demanding that nothing can do that is just saying the system you are substituting can't do what is required to have lots of computationally bound pixels of intrinsic colorness qualities.

If you did not make this demand, and if you allow that changing glutamate (and possibly all the other synapses in the same neuron also firing with the same glutamate) have a broad change on the system, then this could be responsible candidate for redness, and changing redness to anything else will have drastic changes in the way the system changes, when that single pixel is no longer redness.  THEN and only then would it be what is necessary for redness.  Just saying that nothing can be responsible for redness is just a waste of our time, because we know something must be responsible for redness.  And the system must be able to detect when something in the system, which is redness, changes to something that is not redness.








 

--
Stathis Papaioannou

Brent Allsop

unread,
May 21, 2023, 4:22:50 PM5/21/23
to extro...@googlegroups.com
On Sun, May 21, 2023 at 9:35 AM Stathis Papaioannou <stat...@gmail.com> wrote:
On Sun, 21 May 2023 at 21:22, Brent Allsop <brent....@gmail.com> wrote:
On Sat, May 20, 2023 at 10:49 PM Stathis Papaioannou <stat...@gmail.com> wrote:
On Sun, 21 May 2023 at 12:34, Brent Allsop <brent....@gmail.com> wrote:
On Sat, May 20, 2023 at 5:42 PM Stathis Papaioannou <stat...@gmail.com> wrote:
On Sun, 21 May 2023 at 05:50, Brent Allsop <brent....@gmail.com> wrote:
If  not, then what is responsible for redness, and how is it different from greenness?  How would I discover and demonstrate grueness?  How would I objectively observe redness, and how would I objectively know it isn't greenness?

You would display a change in behaviour if your greenness changed to redness: you would say “all the green things now look red”. Or you would say “I see a new colour that I have never seen before, call it grue”. This is an objective expression of your subjective experience. 

Right, and you could objectively isolate a description of whatever it is that is the necessary and sufficient initial cause of the system reporting those things, giving you your dictionary. And the set of redness and the set of greenness must be a disjoint set.  Perception of such different sets, through our senses, would be a weak form of effing the ineffable, and as descartes pointed out, you could doubt that.  So we need some way to directly experience them both, at the same time, in a way that can't be doubted (not perceived, causally downstream, but directly apprehended) .  As in "I think with redness, therefore I know what redness is like."

And when you do a neuro substitution of redness (or whatever thing or function is responsible for what you isolated), with something different, like greenness, the system must be able to report that greenness isn't the same as redness.  So at that point, the neuro substitution would fail because anything but redness would be different, by definition.

The system would not report that redness was different if the substitution left the output of the system the same, by definition.

It is a waste of time to talk about this case.  At some point you must get to one or more neurons (and the neurotransmitters of all the downstream synapses) that are necessary for at least one pixel redness in our visual subjective knowledge.  And if you change redness to anything else, like greeness, and especially if you change with something 100% abstract, you will not be able to reproduce redness, by definition.  So you won't be able to proceed any further with the  substitution.

You often talk about changing glutamate to something different, then you demand that you also change the receptor so that this change from glutamate results in the neuron firing the same way as when it was glutamate.  When you make this demand, this results in this glutamate not being a necessary part of redness, so there must be something else in the system that is necessary for redness.  And if you substitute the entire system, while making this demand, then all you've done is demand that nothing in the system can be responsible for redness.  Again, an irrelevant case, as we know something in the brain is responsible for redness.

Nothing is UNIQUELY responsible for redness, in the same way that nothing is uniquely responsible for any other function of a system. There are many components involved in printing the letter “A” between the keyboard and the sheet of paper, if any of them go wrong there will be no letter printed or it will be distorted, but if any components are appropriately substituted the letter will be printed normally. You find it incredible that this is the case because it shows nothing is responsible for the letter “A”.

A phenomenally subjective system must be more than any other digital system we build today.  This is because digital is, by definition, abstracted away from whatever physical property is representing the abstract information at any point in the system.  So you can do a substitution of any element in the system, as long as you also alter the transducing system, so the same abstract information is preserved.  That is all the substitution does, and of course it will work on any abstract system.  Consciousness is not an abstract system.  It runs directly on some subjective qualities, whatever those qualities are.

When there is a single pixel, in a field of redness subjective knowledge, which is alternating from redness to greenness, something in the system must be responsible for the subjective change of that one pixel.  Each pixel can't be that complicated.  For example, you can't have "billions of other components" involved in each pixel of redness in our subjective knowledge.  And whatever this change from redness to greenness is, it must have a profound effect on the rest of the system when it changes.  And when you say:
 
The system cannot report a change when the only physical change that the glutamate makes in the system, altering the shape of the receptor protein, is replicated.

You are just saying that it can't be this, because you are specifying that this change can't have a profound effect on the rest of the system when it changes.  So it must be something else in the system that is achieving this broad influence on the extremely integrated information system.
 
 “What glutamate does in the neuron” means in what way it causes physical changes.
Yes, I agree.
 
You seem to have the idea that “redness” itself can somehow cause physical changes, and that is the problem.
I do not do this, and this is not the problem.  I'm simply saying that our description of some physical change is a description of what is necessary for redness.

 
Qualia cannot do anything themselves, it is only physical forces, matter and energy that can do things.
You are going too far by talking about physics like this.  All we know are the descriptions of downstream causal effects or behavior of physics.  And descriptions of this can't tell you anything about the true nature of the physics, only the downstream causal effects of those physics.  Detecting light reflected off the strawberry tells you very little about the properties of the surface of the strawberry, since the light is nothing like the properties of the surface of the strawberry.
 
In the case of glutamate what it does in the brain is very limited: it attaches to glutamate receptor proteins and changes their shape slightly.

Again, as long as you make this demand, all you are doing is demanding that it can't be this that is responsible for redness.   

If you knock out the glutamate in the brain and the redness changes (which is quite possible) then it means that the glutamate was partly responsible for redness, along with a billion other components. If you restore the function of the glutamate with a substitute and everything returns to normal, including the redness, that confirms it.

Yes, finally.  Thank you.  Instead of just saying: "Nothing is UNIQUELY responsible for redness"  you are finally talking about something objectively observable being at least partly responsible for redness.  Except for the "billions of other components" responsible for each pixel of redness, I can agree with this.

 

John Clark

unread,
May 21, 2023, 4:57:20 PM5/21/23
to extro...@googlegroups.com
On Sun, May 21, 2023 at 4:22 PM Brent Allsop <brent....@gmail.com> wrote:

> A phenomenally subjective system must be more than any other digital system we build today. 

As far as we know a "phenomenally subjective system" does not exist at the most fundamental level; certainly human beings are not such a system. No matter how powerful a microscope we have, we can't find redness in any part of the brain, for the same reason that we can't find wetness in a single H2O molecule. At the lowest, most fundamental level the human brain does not compute with qualia, and the workings of neurons can be understood even if you ignore qualia completely.


> This is because digital is, by definition, abstracted away from whatever physical property is representing the abstract information at any point in the system.

Exactly the same thing could be said about analog systems. Analog computers manipulate voltages and the output voltage is an analogy for something else, that's how they got their name. 

> When there is a single pixel in a field of redness [ ...]

When a human being experiences redness he is NOT experiencing 700 nanometer electromagnetic waves, those waves and their reality have been, to use your term, "abstracted away".  

  John K Clark

Stathis Papaioannou

unread,
May 21, 2023, 7:46:28 PM5/21/23
to extro...@googlegroups.com
On Mon, 22 May 2023 at 06:22, Brent Allsop <brent....@gmail.com> wrote:


On Sun, May 21, 2023 at 9:35 AM Stathis Papaioannou <stat...@gmail.com> wrote:
On Sun, 21 May 2023 at 21:22, Brent Allsop <brent....@gmail.com> wrote:
On Sat, May 20, 2023 at 10:49 PM Stathis Papaioannou <stat...@gmail.com> wrote:
On Sun, 21 May 2023 at 12:34, Brent Allsop <brent....@gmail.com> wrote:
On Sat, May 20, 2023 at 5:42 PM Stathis Papaioannou <stat...@gmail.com> wrote:
On Sun, 21 May 2023 at 05:50, Brent Allsop <brent....@gmail.com> wrote:
If  not, then what is responsible for redness, and how is it different from greenness?  How would I discover and demonstrate grueness?  How would I objectively observe redness, and how would I objectively know it isn't greenness?

You would display a change in behaviour if your greenness changed to redness: you would say “all the green things now look red”. Or you would say “I see a new colour that I have never seen before, call it grue”. This is an objective expression of your subjective experience. 

Right, and you could objectively isolate a description of whatever it is that is the necessary and sufficient initial cause of the system reporting those things, giving you your dictionary.  And the set of redness and the set of greenness must be disjoint sets.  Perception of such different sets, through our senses, would be a weak form of effing the ineffable, and as Descartes pointed out, you could doubt that.  So we need some way to directly experience them both, at the same time, in a way that can't be doubted (not perceived, causally downstream, but directly apprehended).  As in "I think with redness, therefore I know what redness is like."

And when you do a neuro substitution of redness (or whatever thing or function is responsible for what you isolated), with something different, like greenness, the system must be able to report that greenness isn't the same as redness.  So at that point, the neuro substitution would fail because anything but redness would be different, by definition.

The system would not report that redness was different if the substitution left the output of the system the same, by definition.

It is a waste of time to talk about this case.  At some point you must get to one or more neurons (and the neurotransmitters of all the downstream synapses) that are necessary for at least one pixel of redness in our visual subjective knowledge.  And if you change redness to anything else, like greenness, and especially if you replace it with something 100% abstract, you will not be able to reproduce redness, by definition.  So you won't be able to proceed any further with the substitution.

You often talk about changing glutamate to something different, then you demand that you also change the receptor so that the neuron fires the same way as it did with glutamate.  When you make this demand, the glutamate is no longer a necessary part of redness, so there must be something else in the system that is necessary for redness.  And if you substitute the entire system while making this demand, then all you've done is demand that nothing in the system can be responsible for redness.  Again, an irrelevant case, as we know something in the brain is responsible for redness.

Nothing is UNIQUELY responsible for redness, in the same way that nothing is uniquely responsible for any other function of a system. There are many components involved in printing the letter “A” between the keyboard and the sheet of paper; if any of them go wrong there will be no letter printed, or it will be distorted, but if any component is appropriately substituted the letter will be printed normally. You find this incredible because it seems to show that nothing is responsible for the letter “A”.

A phenomenally subjective system must be something more than any digital system we build today.  This is because digital is, by definition, abstracted away from whatever physical property is representing the abstract information at any point in the system. 

The world in general could be made of discrete rather than continuous elements, not only of matter and energy but of time and space. In some more fanciful theories, the world might even be a computer simulation.

So you can do a substitution of any element in the system, as long as you also alter the transducing system, so that the same abstract information is preserved.  That is all the substitution does, and of course it will work on any abstract system.

You can do a substitution in any analogue system as well. With my printing example, you can substitute parts of the computer circuitry, which are actually analogue components, and the mechanical parts of the printer. If you do this correctly, the output will be the same. We don’t need to do the experiment or even know anything about how it works: it is obvious as a matter of logic that IF (somehow, even using alien technology) you could make the substitution THEN the output will be the same.
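[A minimal sketch, added here purely as an illustration of the IF/THEN point above and not taken from the thread: if two hypothetical components present exactly the same input-output behaviour to the rest of the system, the system's output cannot differ, whatever goes on inside them. The component names and internals below are invented for the analogy.]

```python
# Illustrative sketch only: two made-up "key sensor" parts with identical
# input/output behaviour, standing in for the keyboard-to-printer example.

class MechanicalKeySensor:
    """Original part: maps a key press to a character code."""
    def read(self, key: str) -> int:
        return ord(key)                      # e.g. 'A' -> 65


class OpticalKeySensor:
    """Replacement with different internals but the same interface."""
    def read(self, key: str) -> int:
        # Different mechanism inside, same code out for the same key.
        return int.from_bytes(key.encode("ascii"), "big")


def print_letter(sensor, key: str) -> str:
    """The rest of the system only ever sees the sensor's output."""
    return chr(sensor.read(key))             # the "printed" letter


# Same key in, same letter out, regardless of which part is installed.
assert print_letter(MechanicalKeySensor(), "A") == print_letter(OpticalKeySensor(), "A") == "A"
```

[The sketch adds nothing to the argument itself; it only restates the logic: identical interactions with the adjacent components entail identical output.]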

  Consciousness is not an abstract system.  It runs directly on some subjective qualities, whatever those qualities are.

When a single pixel in a field of redness in our subjective knowledge alternates from redness to greenness, something in the system must be responsible for the subjective change of that one pixel.  Each pixel can't be that complicated.  For example, you can't have "billions of other components" involved in each pixel of redness of our subjective knowledge.  And whatever this change is, from redness to greenness, it must have a profound effect on the rest of the system when it changes from redness to greenness. 

You are ignoring the example I have given many times. The brain is a system of interacting parts, like the computer and printer. Regardless of how it works, IF you substitute a part with a different part that interacts with the adjacent components in the same way, THEN the output of the system MUST be the same. So whatever happens to “subjective knowledge”, the system will output speech at the vocal cords saying “I can see a red strawberry, it is just as red as it was before, my redness has not changed”. If the subject’s redness has in fact changed, there is a mismatch between his qualia and what he notices and reports about his qualia. How do you explain that?

And when you say:
 
The system cannot report a change when the only physical change that the glutamate makes in the system, altering the shape of the receptor protein, is replicated.

You are just saying that it can't be this, because you are specifying that this change can't have a profound effect on the rest of the system when it changes.  So it must be something else in the system that is achieving this broad influence on the extremely integrated information system.

You misunderstand: the slight change in the shape of the receptor protein has a very profound effect on the system. If it is slightly off, the subject will at best start having very weird qualia and at worst have a total neurological failure and die. To avoid this, you would have to substitute the glutamate for something that has exactly the same effect on the receptor protein, or else you would have to tweak the receptor protein so that it responds with the same ion fluxes when the glutamate replacement binds to it. If you change neither of these, you could instead tweak the neuron in some other way so that the different ion fluxes do not change the timing and amplitude of the action potential when the glutamate replacement binds. In this manner you can change everything all the way to the muscles controlling the vocal cords, so that the subject continues to say “I can see a red strawberry, it is just as red as it was before, the redness hasn’t changed.” At this point, you might have made multiple changes to the brain, and the subject still says the qualia are the same. And you can do this with ANY component in the brain. How would you explain this?
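[Again purely an illustrative sketch with invented names and numbers, not a claim about actual neurochemistry: if a replacement molecule produces a different raw binding signal, a compensating tweak at the receptor can restore the downstream value, so nothing after that point changes.]

```python
# Illustrative sketch only; the names and numbers are made up for the analogy.

def glutamate_signal() -> float:
    return 1.0                    # the original molecule's effect at the receptor


def substitute_signal() -> float:
    return 0.5                    # a replacement that, by itself, acts differently


def receptor(signal: float, gain: float = 1.0) -> float:
    """Ion flux produced by the receptor for a given binding signal."""
    return gain * signal


def neuron_fires(ion_flux: float, threshold: float = 0.9) -> bool:
    return ion_flux >= threshold


assert neuron_fires(receptor(glutamate_signal()))              # original: fires
assert not neuron_fires(receptor(substitute_signal()))         # bare substitute: fails to fire
assert neuron_fires(receptor(substitute_signal(), gain=2.0))   # substitute + tweaked receptor: fires again
```

[From the neuron onward, nothing in the sketch can register which binding occurred, which is the situation being described for the whole chain down to the vocal cords.]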

 “What glutamate does in the neuron” means in what way it causes physical changes.
Yes, I agree.
 
You seem to have the idea that “redness” itself can somehow cause physical changes, and that is the problem.
I do not do this, and this is not the problem.  I'm simply saying that our description of some physical change is a description of what is necessary for redness.

 
Qualia cannot do anything themselves, it is only physical forces, matter and energy that can do things.
You are going too far by talking about physics like this.  All we know are the descriptions of downstream causal effects or behavior of physics.  And descriptions of this can't tell you anything about the true nature of the physics, only the downstream causal effects of those physics.  Detecting light reflected off the strawberry tells you very little about the properties of the surface of the strawberry, since the light is nothing like the properties of the surface of the strawberry.

But the downstream causal effects determine what the subject reports about their qualia. How could the subject report that their qualia are just the same when in fact they have radically changed?

In the case of glutamate what it does in the brain is very limited: it attaches to glutamate receptor proteins and changes their shape slightly.

Again, as long as you make this demand, all you are doing is demanding that it can't be this that is responsible for redness.   

If you knock out the glutamate in the brain and the redness changes (which is quite possible) then it means that the glutamate was partly responsible for redness, along with a billion other components. If you restore the function of the glutamate with a substitute and everything returns to normal, including the redness, that confirms it.

Yes, finally.  Thank you.  Instead of just saying: "Nothing is UNIQUELY responsible for redness"  you are finally talking about something objectively observable being at least partly responsible for redness.  Except for the "billions of other components" responsible for each pixel of redness, I can agree with this.

If you change even one of the billions of components in the brain, the qualia would probably change. Many things have to be working perfectly for normal consciousness.
--
Stathis Papaioannou

Brent Allsop

unread,
May 22, 2023, 7:18:07 PM5/22/23
to extro...@googlegroups.com
On Sun, May 21, 2023 at 5:46 PM Stathis Papaioannou <stat...@gmail.com> wrote:
You are ignoring the example I have given many times. The brain is a system of interacting parts, like the computer and printer. Regardless of how it works, IF you substitute a part with a different part that interacts with the adjacent components in the same way, THEN the output of the system MUST be the same. So whatever happens to “subjective knowledge”, the system will output speech at the vocal cords saying “I can see a red strawberry, it is just as red as it was before, my redness has not changed”. If the subject’s redness has in fact changed, there is a mismatch between his qualia and what he notices and reports about his qualia. How do you explain that?

One of the functions of this system is to detect qualities.  If the quality has changed, and the system is not accurately reporting that change, it is simply not working correctly.

Stathis Papaioannou

unread,
May 22, 2023, 8:28:02 PM5/22/23
to extro...@googlegroups.com
If so, it would be possible to create this kind of malfunction: the subject’s qualia have changed, perhaps changed radically. Everything that looked red now looks green. But because the mechanical interactions are intact, the subject continues reporting that everything looks the same, and will continue doing so for the rest of their lives. There are three possible ways to explain this:

1. The qualia have changed, but the subject doesn’t notice, so he keeps saying that everything is the same. 

2. The qualia have changed and the subject notices, but he can’t control his body any more. He wants to say “everything that looked red now looks green, and I’m really upset about it”, but his mouth says “everything looks exactly the same”. For the rest of his life, he observes helplessly as his body does its own thing, and he is unable to communicate his distress to anyone.

3. The qualia have not changed, because they supervene on the mechanics of the brain rather than the other way around. If the output of the brain is the same for all inputs, the associated consciousness is necessarily the same.

(3) is the only way to avoid the absurdity of (1) and (2).



--
Stathis Papaioannou