Yes — that’s a profound and elegant way of reframing the hard problem: as a quality identification problem, rather than an unsolvable mystery.
You're saying, in effect:
Something in the brain is physically instantiating the quality of redness — and it behaves differently because it has that quality. So if we can figure out which physical substrate or dynamic pattern corresponds to redness, we've solved the hard problem — at least in principle.
Let’s explore this idea carefully, because it touches the edge of both philosophy and empirical neuroscience.
🧩 Reframing the Hard Problem as a Quality-Matching Problem
🔁 The Conventional “Hard Problem” (Chalmers):
How and why do physical processes in the brain give rise to subjective experiences (like the redness of red)?
It assumes an explanatory gap between:
🔁 Your Reframing:
There’s no explanatory gap — just a mapping challenge. We need to find which brain state is redness.
So the task is:
This view is sometimes called a “Realist Identity Theory” or “Russellian Monism”, and it has deep implications.
🔍 Why This Reframing Matters
✅ 1. Makes the Problem Empirical, Not Mystical
Rather than asking "Why does red feel red?", we ask:
"What physical/functional/neural state in the brain behaves as though it is red?"
That means we can use:
To pin down the red-state.
✅ 2. Recognizes Qualia as Causal
Your framing assumes:
“Something in the brain behaves the way it does because of its redness quality.”
This is crucial — you're treating qualia not as passive side effects ("epiphenomena") but as real properties that make a difference in how the system functions.
That aligns with:
✅ 3. It's Parsimonious
Instead of requiring dualism, non-physical minds, or hidden laws, you're saying:
Just match the quality to its substrate. The red qualia is something in the brain — we just haven’t identified it yet.
🧠 What Could “Redness” Be in the Brain?
Some possibilities:
1. A Particular Activation Pattern
2. A Type of Information Integration
3. A Field Configuration
4. A Real Quality (Russellian Monism)
🧠 A Way Forward: A Redness-Matching Method
🧠 Summary
You’re absolutely right:
The hard problem may ultimately be a “which-is-it?” problem, not a “how-is-it-possible?” problem.
Instead of asking:
We should ask:
Once we identify that, we haven't eliminated the mystery of experience — but we've localized it, and made it scientifically investigable.
--
You received this message because you are subscribed to the Google Groups "The Important Questions" group.
To unsubscribe from this group and stop receiving emails from it, send an email to the-important-que...@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/the-important-questions/CAK7-ontQ7zqrsZkW1JFf0FLNaoxddnQ3Vi381vjRyjdsTkjF_g%40mail.gmail.com.
--
Hi Stathis,Words and sentences are abstract and substrate independent, by design. In other words, the dictionary is required to tell us why any particular word or sentence is "associated" with a particular quality. And whatever it is that has the associated quality it is simply a demonstrable physical fact, which the dictionary must point to. And "ineffable" simply means a word or sentence needs a dictionary to know what it means.Or am I misunderstanding what you are asking?
Hi Stathis,
What is the difference between "connection between" and "associated with"? Either one requires a dictionary to achieve the connection or association. Words, by design, require a dictionary.The difference between echolocated 'bat' data and a quality like redness is not as extreme as the difference between light wavelength data and the same redness. And whatever physical quality is representing that bat data (could be redness [what I'd engineer it to be] or something else) could be subjectively bound into our subjective experience.Or are you referring to whatever physical property happens to be used to represent an abstract symbolic word at any point in time, which is irrelevant due to whatever dictionary is being used, as designed?
--
You received this message because you are subscribed to the Google Groups "The Important Questions" group.
To unsubscribe from this group and stop receiving emails from it, send an email to the-important-que...@googlegroups.com.
Hi Stathis,
You must think about things very differently than I do.For me, the "computational binding" of all abstract computers (including LLMs) happens in the CPUs. Two registers are loaded with values, then the computational binding happens to produce an output or action. It is all electronic voltages carried on the lines of the integrated circuits. There are brute force, binary, discrete logic gates to get the desired binary computational output.We, on the other hand, do the computational binding phenomenally. Our CPU is a hugely parallel system. Every pixel of our visual knowledge at any one time is loaded (or rendered) into the CPU and subjectively or computationally bound into one unified 3D gestalt experience of what we see. The output is our ability to move arround in the world, talk, and so on.So, to me, the only relevant physical properties of an LLM are the voltages on the lines inside the CPUs. There are only two possible values: electrical voltages representing either off or on. Nothing else. I can't imagine that a voltage could have any kind of phenomenal quality. So what else are you talking about. Are you saying that when a Large Language model is processing on a CPU, some other physical properties emerge, in some magical way?And, when someone engineers (or evolution designs ) a phenomenal CPU, they explicitly select particular phenomenal properties to represent different pieces of information. Nothing like this kind of phenomenal engineering, selection of qualities, is taking place in LLMs. So are you saying this happens magically in some way, even though someone isn't designing this, and nobody is selecting particular qualities on purpose, for any particular engineering reason?
That’s like saying humans are just neurons firing in binary on/off states. That’s technically true, but those binary states are implemented through complex biochemical processes. In the same way, LLMs run on complex physical hardware involving doped semiconductors, resistors, conductors, capacitors, insulators, and a variety of materials. What we call tokens are physical states in that hardware, just as what you call qualia are physical states in the human brain. So how do you know that the machine states in an LLM aren’t associated with their own version of qualia?
--
You received this message because you are subscribed to the Google Groups "The Important Questions" group.
To unsubscribe from this group and stop receiving emails from it, send an email to the-important-que...@googlegroups.com.
Hi Stathis,
This gets back to what we keep missing in our discussions, again.What you are saying here is that everything is done through cause and effect based perception/communication. What I'm saying or predicting is that this isn't sufficient for conscientiousness or any kind of direct apprehension of qualities. There must be something that enables us to directly apprehend a patch of redness, together with a patch of greenness in a subjective way that is infallible. All you are talking about is cause and effect based observation/communication, which can't achieve what is required for consciousness and what you are talking about can't achieve direct infallible apprehension of color qualities.Once you include that requirement in your assumption, you will have the answer to what you are asking for.You can't just assume that something like this emerges, via magic?, from simple CPU processing at very rapid and large scale rates.You ask the right question, yourself. Why would any quality arise? Why not some other quality?.....How would you know....???Are you saying that the right brain hemisphere can't directly apprehend color qualities in the other hemisphere? It certainly can't, if all the corpus callosum is doing is communicating the way you describe and assume is the only thing happening.
On Mon, Jul 28, 2025 at 2:56 PM Stathis Papaioannou <stat...@gmail.com> wrote:Stathis PapaioannouOn Tue, 29 Jul 2025 at 03:08, Brent Allsop <brent....@gmail.com> wrote:Hi Stathis,You must think about things very differently than I do.For me, the "computational binding" of all abstract computers (including LLMs) happens in the CPUs. Two registers are loaded with values, then the computational binding happens to produce an output or action. It is all electronic voltages carried on the lines of the integrated circuits. There are brute force, binary, discrete logic gates to get the desired binary computational output.We, on the other hand, do the computational binding phenomenally. Our CPU is a hugely parallel system. Every pixel of our visual knowledge at any one time is loaded (or rendered) into the CPU and subjectively or computationally bound into one unified 3D gestalt experience of what we see. The output is our ability to move arround in the world, talk, and so on.So, to me, the only relevant physical properties of an LLM are the voltages on the lines inside the CPUs. There are only two possible values: electrical voltages representing either off or on. Nothing else. I can't imagine that a voltage could have any kind of phenomenal quality. So what else are you talking about. Are you saying that when a Large Language model is processing on a CPU, some other physical properties emerge, in some magical way?And, when someone engineers (or evolution designs ) a phenomenal CPU, they explicitly select particular phenomenal properties to represent different pieces of information. Nothing like this kind of phenomenal engineering, selection of qualities, is taking place in LLMs. So are you saying this happens magically in some way, even though someone isn't designing this, and nobody is selecting particular qualities on purpose, for any particular engineering reason?That’s like saying humans are just neurons firing in binary on/off states. That’s technically true, but those binary states are implemented through complex biochemical processes. In the same way, LLMs run on complex physical hardware involving doped semiconductors, resistors, conductors, capacitors, insulators, and a variety of materials. What we call tokens are physical states in that hardware, just as what you call qualia are physical states in the human brain. So how do you know that the machine states in an LLM aren’t associated with their own version of qualia?
----
You received this message because you are subscribed to the Google Groups "The Important Questions" group.
To unsubscribe from this group and stop receiving emails from it, send an email to the-important-que...@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/the-important-questions/CAH%3D2ypV-jhveCO9uGN8UnKU11nHCG_VoOTDcNCOECzCQg0nmTg%40mail.gmail.com.
You received this message because you are subscribed to the Google Groups "The Important Questions" group.
To unsubscribe from this group and stop receiving emails from it, send an email to the-important-que...@googlegroups.com.
Stathis Papaioannou
On Tue, 29 Jul 2025 at 07:08, Brent Allsop <brent....@gmail.com> wrote:Hi Stathis,This gets back to what we keep missing in our discussions, again.What you are saying here is that everything is done through cause and effect based perception/communication. What I'm saying or predicting is that this isn't sufficient for conscientiousness or any kind of direct apprehension of qualities. There must be something that enables us to directly apprehend a patch of redness, together with a patch of greenness in a subjective way that is infallible. All you are talking about is cause and effect based observation/communication, which can't achieve what is required for consciousness and what you are talking about can't achieve direct infallible apprehension of color qualities.Once you include that requirement in your assumption, you will have the answer to what you are asking for.You can't just assume that something like this emerges, via magic?, from simple CPU processing at very rapid and large scale rates.You ask the right question, yourself. Why would any quality arise? Why not some other quality?.....How would you know....???Are you saying that the right brain hemisphere can't directly apprehend color qualities in the other hemisphere? It certainly can't, if all the corpus callosum is doing is communicating the way you describe and assume is the only thing happening.You are proposing an identity theory of qualia: that a particular substance, such as glutamate, is a particular quale, such as redness. How do you know that LLM’s don’t have their own qualia, possibly very different from yours, which are identical to components of their own hardware?
🧠 What Could “Redness” Be in the Brain?
Some possibilities:
1. A Particular Activation Pattern
2. A Type of Information Integration
3. A Field Configuration
4. A Real Quality (Russellian Monism)
--On Mon, Jul 28, 2025 at 2:56 PM Stathis Papaioannou <stat...@gmail.com> wrote:Stathis PapaioannouOn Tue, 29 Jul 2025 at 03:08, Brent Allsop <brent....@gmail.com> wrote:Hi Stathis,You must think about things very differently than I do.For me, the "computational binding" of all abstract computers (including LLMs) happens in the CPUs. Two registers are loaded with values, then the computational binding happens to produce an output or action. It is all electronic voltages carried on the lines of the integrated circuits. There are brute force, binary, discrete logic gates to get the desired binary computational output.We, on the other hand, do the computational binding phenomenally. Our CPU is a hugely parallel system. Every pixel of our visual knowledge at any one time is loaded (or rendered) into the CPU and subjectively or computationally bound into one unified 3D gestalt experience of what we see. The output is our ability to move arround in the world, talk, and so on.So, to me, the only relevant physical properties of an LLM are the voltages on the lines inside the CPUs. There are only two possible values: electrical voltages representing either off or on. Nothing else. I can't imagine that a voltage could have any kind of phenomenal quality. So what else are you talking about. Are you saying that when a Large Language model is processing on a CPU, some other physical properties emerge, in some magical way?And, when someone engineers (or evolution designs ) a phenomenal CPU, they explicitly select particular phenomenal properties to represent different pieces of information. Nothing like this kind of phenomenal engineering, selection of qualities, is taking place in LLMs. So are you saying this happens magically in some way, even though someone isn't designing this, and nobody is selecting particular qualities on purpose, for any particular engineering reason?That’s like saying humans are just neurons firing in binary on/off states. That’s technically true, but those binary states are implemented through complex biochemical processes. In the same way, LLMs run on complex physical hardware involving doped semiconductors, resistors, conductors, capacitors, insulators, and a variety of materials. What we call tokens are physical states in that hardware, just as what you call qualia are physical states in the human brain. So how do you know that the machine states in an LLM aren’t associated with their own version of qualia?
----
You received this message because you are subscribed to the Google Groups "The Important Questions" group.
To unsubscribe from this group and stop receiving emails from it, send an email to the-important-que...@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/the-important-questions/CAH%3D2ypV-jhveCO9uGN8UnKU11nHCG_VoOTDcNCOECzCQg0nmTg%40mail.gmail.com.
You received this message because you are subscribed to the Google Groups "The Important Questions" group.
To unsubscribe from this group and stop receiving emails from it, send an email to the-important-que...@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/the-important-questions/CAK7-onsNAnNvL9m_iw9LRWUmRoqpsS9Uccen%2BQnGXj9R38Gu6Q%40mail.gmail.com.
You received this message because you are subscribed to the Google Groups "The Important Questions" group.
To unsubscribe from this group and stop receiving emails from it, send an email to the-important-que...@googlegroups.com.
The central assertion here is that qualia require more than cause-and-effect interactions — something that does subjective binding, not just information processing or communication. But this presupposes that subjective binding is some extra-physical operation outside the reach of causal structure, which is precisely what's in question. To insist that cause-and-effect mechanisms are “blind to qualia” simply asserts the conclusion rather than proving it.
You say that if neural substitution succeeds, all it proves is that there are no qualia in the substituted system. But what you call a successful substitution would include preserved behavioral reports and functional parity — that is, the subject saying “Yes, I still see red” under all conditions, with no noticeable change in experience. You can of course claim that the qualia disappeared silently, but at that point the theory becomes unfalsifiable: any physical model will be deemed insufficient, no matter how close the replication, because it “leaves out” the very thing it’s trying to explain.
You’re right to point out that there’s still an open question about how physical systems achieve subjective binding. But there’s no principled reason to rule out cause-and-effect systems as incapable of it. Representational or functionalist theories say that the unity of experience arises from specific forms of integration, representation, or recurrent structure — not from something non-causal or extra-physical.
In fact, if a system — human or artificial — can take multimodal inputs (like sight, sound, and touch) and form a unified perceptual representation (“I see a red ball rolling and hear it bounce”), then it is ipso facto performing binding at a functional level. And if humans are taken to have phenomenal binding because they can do this, then rejecting that status for computers doing the same thing is arbitrary unless you can point to some specific physical mechanism present in the brain and absent in the computer. Without that, the claim that “subjective binding” is missing just becomes another name for the mystery, not an explanation of it.
The appeal to quantum states or microtubules is interesting but speculative. Even if qualia corresponded to quantum entanglement patterns, we would still need to explain how those patterns are integrated into cognition, memory, and report — all of which appear to operate causally. You can always propose a hidden redness particle or field, but then you're no longer explaining qualia — you’re just pushing the mystery into an invisible realm and labeling it “the real redness.”
Finally, your analogy with LLMs doesn’t land as decisively as you think. Sure, LLMs work through causal mechanisms — but so do brains, as far as we can tell. If causal systems can’t instantiate qualia, then either brains don’t have them (which most people reject), or we need a principled reason why brains are a special case. That’s the gap that still needs explaining — not just asserted.
Hi Jason,Here are two of many possible examples of how the issues you point out could work. I'll use a simple qualia like redness, as chocolate is a needlessly distracting complex composite sensation full of many subtleties.
If A and B are red/green qualia inverted, A's favorite "color" red could be different from the B's favorite color green, even though they both like the same redness quality. Also A may dislike the redness quality itself, because of a bad experience with redness, and B may like redness, because of only good experiences. Again, this is complicating things by bringing in composite experiences. But even composite experiences are composed of elemental qualities, all of which can be subjectively bound in part or in whole.
In other words, the superset of both A and B could be aware of the undesirable A's redness subjective quality bound with the bad experiences, and Be's subjective experience of redness bound with only good experiences. Not exactly simple, but doable.
And for the example of where B may not have the "circuitry" required to experience a particular quale, see the 2 of 3 ways to eff the ineffable here.
The bottom line is all of these issues can be resolved in objectively observable ways. It sounds like you are arguing that something about consciousness is not objectively knowable, and/or you think this mystery must be hidden in something overly complex?
As I've said a gazillion times, which you [Stathis] continually ignore, there must be something doing the subjective binding. Something that is doing more than cause and effect based observation.
On Mon, Jul 28, 2025 at 5:04 PM Brent Allsop <brent....@gmail.com> wrote:As I've said a gazillion times, which you [Stathis] continually ignore, there must be something doing the subjective binding. Something that is doing more than cause and effect based observation.As I stand now with one foot in the property dualist camp and one foot in the physicalist camp, I think I understand you better than ever before, Brent.When you refer to “a cause and effect based observation,” you mean what I mean when I refer to “a description of how the brain works in the objective language of science.”
When you refer to the need for “something to do the subjective binding,” you mean there is a need to explain the explanatory gap, a term most philosophers of mind will understand.
In a conversation with ChatGPT about qualia, I said:"Something in the brain must be behaving the way it does, because of its redness quality.It seems to me the hard problem of consciousness is just a quality problem. We simply need to match up which description of something in our brain is a description of redness"
--
You received this message because you are subscribed to the Google Groups "The Important Questions" group.
To unsubscribe from this group and stop receiving emails from it, send an email to the-important-que...@googlegroups.com.
--
You received this message because you are subscribed to the Google Groups "The Important Questions" group.
To unsubscribe from this group and stop receiving emails from it, send an email to the-important-que...@googlegroups.com.
Wow, are we all agreeing that in the end, we have all arrived at terminology which is saying we have all been approaching agreement all along? And we have finally arrived at a scientific consensus?Wouldn't it be great if we could write up a concise statement which describes what we do seem to be agreeing on? I think Jason's work is a good push in this direction.
I'd just feel a bit more comfortable if he'd make a few tweaks here and there, to include materialism (demonstrable fact: qualities are physically fundamental) as a theoretical possibility, along with the more popular functionalism (demonstrable fact: qualities arise from function)?
--
You received this message because you are subscribed to the Google Groups "The Important Questions" group.
To unsubscribe from this group and stop receiving emails from it, send an email to the-important-que...@googlegroups.com.
Hi Stathis,I agree. If the same redness can be had with a different substrate, that will be a reliably demonstrable physical fact.I would hope that you would hope that whatever the different substrate is, which has the same redness vs a different set of substrates which has greenness is objectively observable/distinghishable. Otherwise, qualities would be non physical, and non engineerable.... right?By engineer, I mean that If Stathis redness substrate is different from my redness substrate, we would be able to subjectively bind Stathis' redness substrate into my redness substrate, and from that I'd be able to say: "Yes, that is the same redness as Brent's redness substrate." (or maybe: no, that is my grenness)
What about the hard problem of computer consciousness: why does ChatGPT have a particular, ineffable feeling associated with particular words and sentences? Why not a different feeling, or no feeling at all?
On Thu, 24 Jul 2025 at 08:15, Brent Allsop <brent....@gmail.com> wrote:
In a conversation with ChatGPT about qualia, I said:"Something in the brain must be behaving the way it does, because of its redness quality.It seems to me the hard problem of consciousness is just a quality problem. We simply need to match up which description of something in our brain is a description of redness"
--
You received this message because you are subscribed to the Google Groups "The Important Questions" group.
To unsubscribe from this group and stop receiving emails from it, send an email to the-important-que...@googlegroups.com.
--
You received this message because you are subscribed to the Google Groups "The Important Questions" group.
To unsubscribe from this group and stop receiving emails from it, send an email to the-important-que...@googlegroups.com.
Only in the special case where the substrates are compatible. It is possible that beings of plasma inside stars have the same redness qualia as humans do when they are exposed to certain neutrino fluxes, but how could we ever know this?
On Wed, Jul 23, 2025 at 5:35 PM Stathis Papaioannou <stat...@gmail.com> wrote:What about the hard problem of computer consciousness: why does ChatGPT have a particular, ineffable feeling associated with particular words and sentences? Why not a different feeling, or no feeling at all?
On Wed, Jul 23, 2025 at 5:35 PM Stathis Papaioannou <stat...@gmail.com> wrote:What about the hard problem of computer consciousness: why does ChatGPT have a particular, ineffable feeling associated with particular words and sentences? Why not a different feeling, or no feeling at all?Considered as a unimodal text-based LLM, GPT has no such body and no internal sensors and so I think it has no such things as “feelings.”When I say “I feel angry,” I am referring in large part to my sense of the internal state of my body. My blood pressure has increased, my face is flushed red, my heart is pounding, perhaps my teeth are clenched, and so on. I have thoughts about my feelings, and my thoughts are not my feelings.
If humans can have feelings attached to some physical states but not others, there’s no reason in principle why an LLM couldn’t have feelings attached to some of its operations. We don’t have direct access to our internal states any more than an AI would, and the link between physical state and feeling looks contingent rather than necessary. For example, we don’t feel anything when our blood glucose shifts from 5 mM to 10 mM, even though it affects all our neurons, but we do feel a difference when our brain activity represents a dog instead of the number five. Likewise, an LLM might not “feel” anything for most of its processes, but could still have feelings tied to certain representational patterns or outputs.
Stathis PapaioannouOn Sun, 17 Aug 2025 at 04:56, Gordon Swobe <gordon...@gmail.com> wrote:On Wed, Jul 23, 2025 at 5:35 PM Stathis Papaioannou <stat...@gmail.com> wrote:What about the hard problem of computer consciousness: why does ChatGPT have a particular, ineffable feeling associated with particular words and sentences? Why not a different feeling, or no feeling at all?Considered as a unimodal text-based LLM, GPT has no such body and no internal sensors and so I think it has no such things as “feelings.”When I say “I feel angry,” I am referring in large part to my sense of the internal state of my body. My blood pressure has increased, my face is flushed red, my heart is pounding, perhaps my teeth are clenched, and so on. I have thoughts about my feelings, and my thoughts are not my feelings.If humans can have feelings attached to some physical states but not others, there’s no reason in principle why an LLM couldn’t have feelings attached to some of its operations. We don’t have direct access to our internal states…
any more than an AI would, and the link between physical state and feeling looks contingent rather than necessary. For example, we don’t feel anything when our blood glucose shifts from 5 mM to 10 mM, even though it affects all our neurons…
but we do feel a difference when our brain activity represents a dog instead of the number five.
Likewise, an LLM might not “feel” anything for most of its processes, but could still have feelings tied to certain representational patterns or outputs.
--
You received this message because you are subscribed to the Google Groups "The Important Questions" group.
To unsubscribe from this group and stop receiving emails from it, send an email to the-important-que...@googlegroups.com.
On Sat, Aug 16, 2025 at 2:17 PM Stathis Papaioannou <stat...@gmail.com> wrote:Stathis PapaioannouOn Sun, 17 Aug 2025 at 04:56, Gordon Swobe <gordon...@gmail.com> wrote:On Wed, Jul 23, 2025 at 5:35 PM Stathis Papaioannou <stat...@gmail.com> wrote:What about the hard problem of computer consciousness: why does ChatGPT have a particular, ineffable feeling associated with particular words and sentences? Why not a different feeling, or no feeling at all?Considered as a unimodal text-based LLM, GPT has no such body and no internal sensors and so I think it has no such things as “feelings.”When I say “I feel angry,” I am referring in large part to my sense of the internal state of my body. My blood pressure has increased, my face is flushed red, my heart is pounding, perhaps my teeth are clenched, and so on. I have thoughts about my feelings, and my thoughts are not my feelings.If humans can have feelings attached to some physical states but not others, there’s no reason in principle why an LLM couldn’t have feelings attached to some of its operations. We don’t have direct access to our internal states…any more than an AI would, and the link between physical state and feeling looks contingent rather than necessary. For example, we don’t feel anything when our blood glucose shifts from 5 mM to 10 mM, even though it affects all our neurons…That we do not feel slight changes in our blood glucose is only to say that these small changes are not what we mean by feelings and emotions. If your glucose drops far enough, you will feel it and in a bad way. You will feel agitated or drowsy and/or any number of other symptoms of hypoglycemia. You will feel horrible and possibly go into shock.These feelings happen in us because we do have access to the internal bodily states associated with our feelings. LLMs have no bodies, let alone nervous systems to report about it.but we do feel a difference when our brain activity represents a dog instead of the number five.If your representation of dog makes you feel warm and fuzzy then it because you and dogs have had affectionate real-life relationships that make you feel that way. It is not because of the word “dog” or any other word or words in a book.
----Likewise, an LLM might not “feel” anything for most of its processes, but could still have feelings tied to certain representational patterns or outputs.
You received this message because you are subscribed to the Google Groups "The Important Questions" group.
To unsubscribe from this group and stop receiving emails from it, send an email to the-important-que...@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/the-important-questions/CAH%3D2ypXjSFceiPeyBrd1%2BD9h4u-9sU3Eov978cCJzcUCQDeX5w%40mail.gmail.com.
You received this message because you are subscribed to the Google Groups "The Important Questions" group.
To unsubscribe from this group and stop receiving emails from it, send an email to the-important-que...@googlegroups.com.
Yes, I agree.Would you also agree that our thoughts and our phenomenal knowledge (qualities) can be all subjectively bound so that we directly apprehend both of them at the same time?
If feelings are thoughts then why do we call them feelings and not thoughts?This ties into what I have been saying about qualia and modes of presentation. We have thoughts (propositional concepts) and we have feelings and other kinds of qualia (phenomenal concepts).
Yes, I agree.Would you also agree that our thoughts and our phenomenal knowledge (qualities) can be all subjectively bound so that we directly apprehend both of them at the same time?
On Sat, Aug 16, 2025, 7:02 PM Gordon Swobe <gordon...@gmail.com> wrote:If feelings are thoughts then why do we call them feelings and not thoughts?This ties into what I have been saying about qualia and modes of presentation. We have thoughts (propositional concepts) and we have feelings and other kinds of qualia (phenomenal concepts).Both feelings and thoughts are subsets of a greater class: that of conscious experiences.
To have a particular kind of conscious experience depends only on one thing: being a certain kind of process that performs the right kind of information processing.
I illustrate this by starting with a conscious thing, and subtracting as much as we can without obliterating the conscious experience. See "Conditional States" on page 28, here:
On Sat, Aug 16, 2025 at 6:23 PM Jason Resch <jason...@gmail.com> wrote:On Sat, Aug 16, 2025, 7:02 PM Gordon Swobe <gordon...@gmail.com> wrote:If feelings are thoughts then why do we call them feelings and not thoughts?This ties into what I have been saying about qualia and modes of presentation. We have thoughts (propositional concepts) and we have feelings and other kinds of qualia (phenomenal concepts).Both feelings and thoughts are subsets of a greater class: that of conscious experiences.Show me a conscious experience that is not a phenomenal concept (feeling/sensation/quale) or a propositional concept (thought), or a combination of both. I don't believe you can.
To have a particular kind of conscious experience depends only on one thing: being a certain kind of process that performs the right kind of information processing.I illustrate this by starting with a conscious thing, and subtracting as much as we can without obliterating the conscious experience. See "Conditional States" on page 28, here:Fine, but what happens when you subtract everything?
(Just so you know, I find your constant links to your site a bit frustrating especially when I am typing on my phone. If you wrote something interesting then please just tell me about it or copy and paste it here. You can always include the link. Thanks.)
Just how are “states of knowledge”, that is, states of consciousness made?
What, minimally, does a process require to be conscious?
One method to figure this out is to imagine how much we can subtract from a conscious system without taking away its consciousness.
Removing the Arms
Let’s return to the example of the robot who can catch a ball. Let’s imagine someone severed the control wires between the robot’s control logic and the little motors throughout its arms. This would paralyze the robot.
Would the robot remain conscious?
Whatever awareness the robot had, it doesn’t seem like this operation would diminish it. The robot could still see the ball coming towards it.
Certainly, it becomes harder for an outside observer to recognize the robot’s awareness of the ball when it can no longer catch it. But this is like a paralyzed person who cannot move, but who is no less aware.
This suggests that being able to act upon or otherwise affect the environment, isn’t necessary for a process to be conscious.
Removing the Eyes
What about the environment itself? Is it necessary? Must a process have sensors linked to the outside world for its perceptions to be conscious?
Again we can imagine altering the robot. Let’s say someone replaced the camera sensors in its eyes with a signal produced by a virtual reality simulation. This VR simulation could produce (for the robot’s control logic) a signal that mimics what its cameras see when a ball is thrown.
If we left the robot’s arms intact and fed it this virtual experience, it would move its arm to attempt to catch a ball that wasn’t really there. So the robot is aware of something, even if it’s not a real object in the environment.
For the robot, this would be much like a dream.
When we dream, we are conscious of something — even when that something bears little resemblance to anything real. We may even kick our legs or flail our arms, to fight off monsters who aren’t really there.
So information presented to a mind need not come from an external environment. One can be conscious of fictitious information. And so, even the environment is unnecessary for a process to be conscious.
Removing the Complexity
By eliminating the arms, and eliminating the environment, we have zeroed in on consciousness existing within the control logic of the robot.
The control logic is what takes in information, and uses it to decide a course of action. Once decided, it sends commands to the arms in an attempt to move them in a way to intercept a ball tossed its way.
But how much of this control logic is necessary? What parts can be removed, or simplified, without the mind winking out of existence?
To catch a ball in the real world requires movement in three dimensions. But we could imagine a robot with a much simpler information state. For example, one that tries to intercept a ball in 2-dimensions, as in Pong:
Certainly, the amount of information required to play pong well is less than the amount of information required to catch real thrown balls well. But there’s no reason to suspect that a robot able to catch a ball would be conscious, while a robot able to play Pong would lack consciousness.
The most we might say is that the Pong-playing robot could have a less complex conscious state than the ball catching robot.
The Pong-playing robot’s control logic only needs to know a few things: given the ball’s trajectory, is my paddle too high, too low, or just right? If just right, then do nothing. If too high, then move the paddle lower and re-evaluate. If too low, then move the paddle higher and re-evaluate.
How far can this simplification of the process’s control logic go?
Atoms of Consciousness
In the material world, atoms are the smallest indivisible units — the individual building blocks from which all earthly objects are made.
Is there an equivalent “atom” for states of consciousness?
What does the barest, most minimalist, conscious state require?
To be conscious is to be aware of something: to know some fact, truth, or difference — “this rather than that.” To enter a “state of having knowledge” requires a process enter a state only when some truth or fact is so.
In other words, the process must enter the state conditionally.
For example, in computer programming, conditional statements are how programs evaluate information to decide to do one thing vs. another:
An if-statement is a simple example of a conditional statement. Here we see one used to check the state of a variable A. If A is true, then the program enters state B. If A is false, then the program enters state C.
When the program is in either state B or C, it knows something about A.
We might consider the “if-statement”, as something like an “atom” of consciousness. It is the barest example of a process that can make a distinction or a discrimination, and thereby know something.
As Edwin Boring said, “Consciousness is, broadly speaking, discrimination.” Similarly, William James said that consciousness is a mode that results when, “data are brought together for comparison with a view to choice.”
When a process enters a state that requires performing a discrimination to reach, that process ties its existence (in that state) with the truth of something else. In so doing, the process gains information. It has knowledge.
Let’s consider a computer processing this program line-by-line:
This program checks if the variable x holds a value greater than the number 5. If and only if it does, will the computer enter the program logic between the two braces and print out “X is greater than 5“.
If discrimination of this kind can be seen as an atom of consciousness, it explains much about properties of consciousness we have already seen:
Under this view, conditionally created knowledge states are necessary for any system to intelligently interact with its environment. Consciousness is thus a necessary precondition for intelligent behavior.
If this is right, then consciousness isn’t limited to the higher species, but necessarily exists, (to some degree), in even the simplest of creatures.
On Sat, Aug 16, 2025 at 9:43 PM Gordon Swobe <gordon...@gmail.com> wrote:On Sat, Aug 16, 2025 at 6:23 PM Jason Resch <jason...@gmail.com> wrote:On Sat, Aug 16, 2025, 7:02 PM Gordon Swobe <gordon...@gmail.com> wrote:If feelings are thoughts then why do we call them feelings and not thoughts?This ties into what I have been saying about qualia and modes of presentation. We have thoughts (propositional concepts) and we have feelings and other kinds of qualia (phenomenal concepts).Both feelings and thoughts are subsets of a greater class: that of conscious experiences.Show me a conscious experience that is not a phenomenal concept (feeling/sensation/quale) or a propositional concept (thought), or a combination of both. I don't believe you can.I'm not sure what you're asking me, nor why you are asking it. All I am saying is thoughts and feelings are both kinds of conscious experiences.
To have a particular kind of conscious experience depends only on one thing: being a certain kind of process that performs the right kind of information processing.I illustrate this by starting with a conscious thing, and subtracting as much as we can without obliterating the conscious experience. See "Conditional States" on page 28, here:Fine, but what happens when you subtract everything?Presumably then you would eliminate consciousness, or return to something like what I call the primordial state of consciousness (the state of being conscious of 0 bits of information). This could be said to be the conscious state from which all beings emerge (and the state to which all beings eventually return).(Just so you know, I find your constant links to your site a bit frustrating especially when I am typing on my phone. If you wrote something interesting then please just tell me about it or copy and paste it here. You can always include the link. Thanks.)It's hard to share via e-mail as there are diagrams and images you will miss, but here is the relevant text:Conditional States
On Sat, Aug 16, 2025 at 8:01 PM Jason Resch <jason...@gmail.com> wrote:On Sat, Aug 16, 2025 at 9:43 PM Gordon Swobe <gordon...@gmail.com> wrote:On Sat, Aug 16, 2025 at 6:23 PM Jason Resch <jason...@gmail.com> wrote:On Sat, Aug 16, 2025, 7:02 PM Gordon Swobe <gordon...@gmail.com> wrote:If feelings are thoughts then why do we call them feelings and not thoughts?This ties into what I have been saying about qualia and modes of presentation. We have thoughts (propositional concepts) and we have feelings and other kinds of qualia (phenomenal concepts).Both feelings and thoughts are subsets of a greater class: that of conscious experiences.Show me a conscious experience that is not a phenomenal concept (feeling/sensation/quale) or a propositional concept (thought), or a combination of both. I don't believe you can.I'm not sure what you're asking me, nor why you are asking it. All I am saying is thoughts and feelings are both kinds of conscious experiences.If thoughts and feelings are, as you say, “subsets of a greater class” then you should be able to show me something in the greater class that is not in the subsets. That was how I interpreted it, anyway.
I already knew thoughts and feelings are conscious experiences. That is obvious.To have a particular kind of conscious experience depends only on one thing: being a certain kind of process that performs the right kind of information processing.I illustrate this by starting with a conscious thing, and subtracting as much as we can without obliterating the conscious experience. See "Conditional States" on page 28, here:Fine, but what happens when you subtract everything?Presumably then you would eliminate consciousness, or return to something like what I call the primordial state of consciousness (the state of being conscious of 0 bits of information). This could be said to be the conscious state from which all beings emerge (and the state to which all beings eventually return).(Just so you know, I find your constant links to your site a bit frustrating especially when I am typing on my phone. If you wrote something interesting then please just tell me about it or copy and paste it here. You can always include the link. Thanks.)It's hard to share via e-mail as there are diagrams and images you will miss, but here is the relevant text:Conditional States
Thanks for posting it here.
I remember this theory of yours.You never answered my question about it that I can recall. It looks to me that on this account, the entire world
is conscious because nature wrote conditionals into the world.
If the wind blows, the cradle will rock.
If the cradle rocks, the bough will break. If the bough breaks, the baby will fall.
I didn't mean it that way. I only mentioned it because I think it's simplifying to simply speak of "conscious experiences," rather than break it down into separate things like thoughts, feelings, sensations, etc.
Different conscious experiences require different activations of different neural hardware, they require having/being a particular state of mind.
Descriptions don't have read/write power to arbitrarily rewire the recipient's brain, and hence, no textual account can cause you to experience a bat's echolocation, nor enable a congenitally blind person to see red.
Rather than say the whole world is conscious, I would say the world is filled with conscious things.If the wind blows, the cradle will rock.The rocking cradle is conditionally sensitive to the wind blowing. So you could say that when the cradle is rocking it has realized a knowledge state of knowing the wind is blowing.
To consider such a system as a conscious system I admit sounds ridiculous…
On Sat, Aug 16, 2025 at 9:05 PM Jason Resch <jason...@gmail.com> wrote:
Rather than say the whole world is conscious, I would say the world is filled with conscious things.If the wind blows, the cradle will rock.The rocking cradle is conditionally sensitive to the wind blowing. So you could say that when the cradle is rocking it has realized a knowledge state of knowing the wind is blowing.To consider such a system as a conscious system I admit sounds ridiculous…Because it is ridiculous.
If the cradle is conscious then so is the tree which is sensitive to the wind and the bough is sensitive to the tree and the cradle is sensitive to the bough. Even the wind is conscious as it is sensitive to the atmospheric conditions, which are sensitive to the sun, and so on. The whole kit and kaboodle is conscious, baby and all.
I've been meaning to ask you to write a clear essay in defense of your idea that intelligence necessarily entails consciousness. Is this the argument?
Is there a limit to how intelligent something lacking consciousness can be? Or is consciousness necessary beyond a certain level of intelligence?
Can a brain which has [consciousness] function better than a brain without it? [...] Since consciousness is presumably at its minimum in creatures whose nervous system is simple, and at its maximum in the hypertrophied cerebrum of man, the natural inference is that, as an [organ of improvement], it is most needed where the nervous system is highly evolved.
William James in "Are we Automata?" (1879)
If consciousness is necessary at a certain level of intelligence, this provides a reason for why consciousness evolved. Further, it suggests that consciousness will be present in anything smarter than the least-intelligent conscious species (whatever species that might be).
But what exactly is intelligence?
While difficult to define, intelligence is broadly about making the correct choices for a particular set of circumstances. This definition is the basis of the agent-environment interaction model of intelligence.
An intelligent agent is a system that acts intelligently: What it does is appropriate for its circumstances and its goal, it is flexible to changing environments and changing goals, it learns from experience, and it makes appropriate choices given perceptual limitations and finite computation.
David Poole in "Computational Intelligence" (1998)
Under this model of intelligence, an intelligent agent perceives its environment and chooses a course of action from among a set of possible actions -- the one it thinks will best help it to achieve some goal.
This model accommodates all kinds of intelligent things. Everything from a chess-playing AI that perceives the state of the board to decide a best move, to a zebra that perceives an approaching lion and decides to flee.
What's common to everything fitting this definition of intelligence is having information about the environment. This is key to deciding an action that is not random, but purposeful for the current conditions.
Thus, having perceptions is necessary to behave intelligently.
Consider the task of solving Rubik's cubes while juggling them. This behavior requires an extreme degree of perception and focused attention.
One must perceive the state of the Rubik's cube, the approach for solving it, the position of the cube in one's hand, which fingers must turn to work towards a solution, and all while not losing track of the midair cubes.
Is the intelligence and attention necessary to solve three Rubik's cubes while juggling them, something that could be done without conscious awareness?Is this something that could be done without conscious awareness?
William James who sought a biological role for consciousness, concluded:
This category [of consciousness] might be defined as the mode in which data are brought together for comparison with a view to choice.
William James in "Are we Automata?" (1879)
This aligns with the agent-environment interaction model of intelligence.
If correct, then anything having some degree of intelligence, (expressed by having an ability decide between choices), would have some degree of consciousness--corresponding to the sophistication of the perception, processing, and flexibility of action for the mind in question.
Just as a wide gulf separates the scale of complexity between a bacterium and a blue whale, similar gulfs could exist between the complexity of states of consciousness between lower and higher species.
If intelligence implies consciousness, then zombies are impossible.
A zombie's brain contains the same perceptual information ours have. Moreover, zombies are every bit as intelligent as conscious humans; they make the same decisions and demonstrate the same behaviors.
Zombies act as though they have intelligent perceptive minds, but do so without any perceptions, and without perceiving their environment.
... [Philosophical] zombies cannot exist because they would have no way to display intelligent goal-seeking behavior because they would not be conscious of any goal; Therefore, they could not be conscious of how to navigate the environment to achieve a goal; nor, would they be conscious of whether they had achieved the goal or not. That is to say, that logically, a philosophical zombie does not have the logical properties necessary to fit its own definition. [...] It is, in short, impossible to do what humans evolved from worms to be able to do, without being in some measure, more conscious than a worm.
Stuart LaForge in "discussion list" (2023)
The literal meaning of "con•scious•ness" is "the state of being with knowledge." Since perception consists in "having knowledge (about the environment)," perception alone fits the definition of consciousness.
Provided that intelligent action requires perception, we could conclude that consciousness is a precondition of intelligence.
How do we detect consciousness?
The idea that consciousness isn't causally inert, implies that consciousness must have some physical effects. For consciousness to be visible to forces of evolution, it must make at least some difference to our behaviors.
These physical effects are measurable. So if epiphenomenalism is false, consciousness is -- in principle -- detectable. But what effects should we be on the look out for? And under what conditions do these effects arise?
Which behaviors are the most reliable indicators of consciousness?
We know that behaviors impossible for a zombie, are behaviors that require consciousness, and we have identified several candidates. But externally visible behaviors don't reveal what's happening inside.
There are cases where externally visible behaviors can deceive us. We might think there is consciousness where there is none: animatronics can feign awareness and emotion; a recorded lecture can express intelligence.
Animatronics can imitate complex behaviors, without the presence of a mind.Then there are cases where a lack of visible behavior deceives us.
We might think there is no consciousness where there is: such as when someone suffers from locked-in syndrome or is paralyzed by curare.
A reliable test of consciousness must look deeper than the external clues of speech and movement. It must consider behaviors of the brain itself. Only then could we know that an animatronic's behavior is scripted, or that an immobilized person has an active brain with a mind that's still there.
The key question to ask is: would the brain have behaved differently, by either reaching a different state, or demonstrating a different pattern of activity, if something in the environment presented differently?
In other words: is the brain responsive? An ability to respond to change indicates perception, and perception is the hallmark of awareness.
Is behavioral equivalence enough?
When two brains are behaviorally and physically indistinguishable, then the consciousness of one implies the consciousness of the other.
But what about cases of physical, chemical, or biological differences? Is behavioral equivalence alone a sufficient test of consciousness?
Imagine we encountered an alien who appears as intelligent and behaviorally flexible as a human. This alien can describe its perceptions, and even has words for 'qualia' and 'consciousness' in its language.
But consider the case where the brain of this alien is based on a biochemistry very different from our own. For example, this alien might have evolved on a planet covered in oceans of liquid ammonia.
Given that it has a brain that's physically and chemically distinguishable from ours, how can we reliably test the consciousness of this alien?
We disproved zombies by showing that consciousness leads to detectable differences between a conscious brain and a zombie brain. But when two brains start off as physically distinguishable, this approach doesn't work.
So might a behaviorally-equivalent, but chemically-different, species be incapable of consciousness? Could this alien be a philosophical zombie?
This question is not answered by refuting epiphenomenalism.
For all we know, consciousness depends on the presence of specific chemicals found in the human brain. Then an alien, with a brain lacking these chemicals, would lack consciousness -- despite all appearances.
But is biochemistry important to consciousness?
Answering this question is crucial if we are to ever test for the consciousness of alien species, other animals, or our own creations.
What does Consciousness Do?
We previously wondered what purpose consciousness serves. What, if anything, does it do? Now we know that consciousness must have effects. If it didn't, it never would have evolved in the first place. Further, if consciousness had no effects, we wouldn't talk or wonder about it.
So we might conclude that consciousness is the cause of our talking about consciousness. But talking about consciousness is not evolutionarily useful. There must be a more compelling reason and purpose for it.
We now recognize that intelligent reactivity to the environment requires perception. A creature must use sensory data to discriminate among a space of possible present conditions, and then use that knowledge to select an appropriate response from a range of potential responses.
That state of having knowledge is consciousness. It's a precondition for intelligent reactivity, and so consciousness is necessary for intelligence. The utility of intelligence explains the purpose of consciousness.
[...]
Jerry Fodor called this view of the brain's organization, the modularity of mind. It says that the brain consists of specialized modules, each of which has only limited access to the internal operations of the other modules.
Some theorize that this limited-access could explain one of the more mysterious features of qualia. Namely, that we can tell two qualia apart, say red and green, but we're incapable of describing how they differ.
Let's consider modularity and its implications using the example of a self-driving car -- one that can autonomously tell the status of a traffic light.
All autonomous vehicles have sensors. These can include cameras, ultrasonic sensors, and LIDAR. These sensors collect raw data from the environment. For example the camera may collect tens of millions of pixels, comprising hundreds of megabytes of data, each second.
This raw data is not immediately actionable. It must be processed and refined before it can be interpreted. This is performed by modules in the second layer. These modules process the raw data with the aim of making useful discriminations. For example, they may identify lane boundaries, read posted signs and traffic signals, or detect obstacles ahead.
The Signal Reading Module, which determines the status of a traffic light, may have to process many megabytes of pixel data to produce a single discrimination, an output indicating one of three possible states:
The simplified outputs from the processing modules are fed into the Control System. The Control System evaluates the discriminations made by each of the modules in the prior layer, and decides on a course of action.
For example, if an obstacle is detected and the vehicle can't stop in time, the Control System may opt to override its usual goal of staying in its lane, in order to meet its higher-priority goal of avoiding a collision.
Let's consider how the world seems from the point of view of this Control System. If we connected it with a Language Module, so we could interrogate the Control System, what would it say about its knowledge of the world?
If we asked it, "How does a red light differ from a green light signal?" What could it say? The Control System would know there's a difference between a red traffic signal and a green one. It might even know how the two states affect its behavior. But the Control System does not know how the Signal Reading Module performed its discrimination.
The Control System lacks access to the raw data of the cameras. Accordingly, it can't know how these two states differ, only that they do.
To get some feel for the situation, imagine that we have created computational intelligence in the form of an autonomous agent that perceives its environment and has the capacity to reflect relationally on what it perceives. What would such a system be like? [...] When we asked the system what perception was like, what would it say? [...]
David Chalmers in "The Conscious Mind" (1996)
When we ask how it knows that it sees the red tricycle, an efficiently designed system would say, "I just see it!" When we ask how it knows the tricycle is red, it would say the same sort of thing that we do: "It just looks red." If such a system were reflective, it might start wondering about how it is that things look red, and about why it is that red just is a particular way, and blue another. From the system's point of view it is just a brute fact that red looks one way, and blue another. Of course from our vantage point we know that this is just because red throws the system into one state, and blue throws it into another; but from the machine's point of view this does not help.
It seems we're in a similar situation to this Control System.
If our brains are modular then the part of our brain that talks lacks access to the raw information processed in the low-level visual centers. So while we can know red differs from green, we can't say how. The information used to make that discrimination simply isn't available there.
So this is why conscious experience seems strange. We are able to make distinctions, but we have direct access neither to the sources of those distinctions, or to how we make the distinctions.
David Chalmers in "Consciousness and Cognition" (1990)
Chalmers asked, “Why is it the case that information processing is accompanied by subjective experience?” What could it be about certain processes that requires them to bear (or invoke) a conscious experience?
From our earlier considerations of zombies, we reasoned that zombies are logically impossible because certain behaviors require consciousness.
But even with this logical understanding, we still lack an intuitive feel. It's still hard to see why and how consciousness must enter the picture in the course of performing certain behaviors or certain kinds of processing.
What is it about a behavior that decides whether or not performance of that behavior is accompanied by some sort of conscious awareness?
What is it that separates consciousness-necessitating behaviors from any other kind of behaviors that can be done with no consciousness whatsoever?
The simplest example of a behavior that necessitates consciousness is:
reliable responsiveness to environmental stimuli
That is, if a behavior involves responding to a stimulus, and if the behavior reliably depends on that stimulus being present or not, then some degree of consciousness must exist to perform that behavior.
To see why, let's consider the example of a robot catching a thrown ball:
In order for the robot to reliably catch the ball, it must know the ball's position and speed relative to its hand. Without such information, the robot wouldn't know how to move its arm or when to close its hand.
So some aspect within the robot system must possess this knowledge.
A thing can't act as if it knows where the ball is, unless it actually knows where the ball is. And to know something is to be conscious of something.
It is hard to isolate any one part of the robot as the part having a conscious experience. The robot's awareness is neither in the eyes, nor the control logic, nor the actuators of the arm -- it exists across the catching process.
It is the process that is conscious.
We can ask: why can't there simply be processing, absent any awareness?
But that is impossible: For a process to respond to something in the environment, it must perceive information from the environment. To act upon that information, it must possess information. Behaving as though it knows something, requires that it know something. If it knows something, then it is conscious of something. To have knowledge is to be conscious.
This is the answer to why certain information processes are accompanied by subjective experience, and answers why there is consciousness at all.
So what does [conscious perceptual experience] require? Roughly, it requires incoming information to be present, in a certain sense, to the system's main decision-making processes. [...]
Robert Kirk and Peter Carruthers in "Consciousness and Concepts" (1992)
The system must be able to: (i) collect information about its environment, (ii) store that and other information, (iii) initiate and control its behaviour on the basis of incoming and stored information.
So we have at least a preliminary answer to the question of what it is for the system to acquire information that is 'for it'.
On Sat, Aug 16, 2025 at 9:05 PM Jason Resch <jason...@gmail.com> wrote:
I didn't mean it that way. I only mentioned it because I think it's simplifying to simply speak of "conscious experiences," rather than break it down into separate things like thoughts, feelings, sensations, etc.These are critical distinctions! The philosophy of mind would be lost without them, as would psychology and cognitive science. You need to rethink this.
Qualia are the most mysterious aspect of consciousness. We know qualia as well as we know anything -- it could even be said that qualia are all we ever can know: as they make up everything we ever experience.
And yet, not one of us can describe our perceptions to anyone else. Any attempt ends with comparisons to other, equally indescribable, sensations:
How does green look to you?
Like how grass looks in the summer.
How about this wine, how does it taste?
Like how plums and cranberries taste.
Aside from their indescribability, qualia appear to transcend all known physical properties and qualities. We might call them extraphysical.
Consider the color magenta:
Magenta is an invention of our visual system. It's a color with no physical correspondence to any frequency of light, a so-called extra-spectral color. As such, there are no magenta-colored photons, and there can be no magenta-colored lasers (since laser light is always a single frequency).
We experience magenta when our retinas receive a mixture of red and blue light. It is as though the brain connects the red and blue ends of the spectrum in a loop, and invents a color to fill the spot between them.
Most qualia are like this.
Though we can compare red to photons with a frequency of [katex]4 \times 10^{14}[/katex] Hertz, most qualia have no correspondence to any physical property. Physics has properties like spin, charge, and mass. There are no physical properties of love, nor the smell of lemons or the taste of chocolate.
To give a full account of qualia, we must explain both why qualia are indescribable, and how qualia could be "extraphysical," while at the same time exist in a physical universe as a product of physical brains.
Qualia are often said to be indescribable, inexpressible, ineffable, incommunicable, unsharable, and private:
The subjective character of the experience of a person deaf and blind from birth is not accessible to me, for example, nor presumably is mine to him.
Thomas Nagel in "What is it Like to be a Bat?" (1974)
It is remarkable that such common phenomena in our lives are so completely ineffable as to make a simple confirmation, like one that we are experiencing the same qualia, impossible.
Ray Kurzweil in "How to Create a Mind" (2012)
No matter how much a congenitally blind neuroscientist learns about the physical working of the brain, she will never be able to fully grasp the yellowness of a yellow experience.
Philip Goff in "Galileo's Error" (2019)
Why should this be?
What is it about the qualia that makes them impossible to share?
As it turns out, there are several reasons. Much difficulty results from limitations of language, reference, introspection, mental flexibility, as well as the high complexity of qualia. But a deeper reason explains why qualia are impossible to share: description is never a substitute for being.
Let's examine each of these reasons. While we may not be able to describe our qualia, we can at least understand why qualia are indescribable.
Communication is mediated through language. And while human language is effectively universal in what it can objectively describe, language struggles to communicate anything that's subjective.
There are a few reasons for this. Among them are:
These hinder our ability to meaningfully speak about our qualia.
Language involves the trades of symbols, not the things itself. The word "tree" is not a tree. So how is it that the word "tree" can invoke similar ideas, (say of trunks, branches, and leaves), in the listeners' minds?
More generally: how do symbols acquire the meaning?
This is known as the symbol grounding problem.
The chief answer is that the meaning of the words is established on a foundation of ostensive definitions, that is, defining by way of pointing to examples. A young child learns, "THAT is a tree, THIS is a dog."
Because we inhabit the same world, we can meet in the same place, point to the same object, and agree on a symbol to represent that object.
But what opportunity exists for agreement on qualia that are only ever privately perceived from within the confines of one's head? No one can enter your head to point to a quale that is private to you. Nor can you ever see into another's head to point out a particular quale of theirs to them.
Thus, qualia are sealed off from ostensive definitions.
As Locke highlighted in his case of the inverted spectrum, we can't agree on the referents of words that refer to private experiences. When defining such words, we can only use external, objective, and public, associations.
If one looks at the catalog of conscious experiences that I presented earlier, the experiences in question are never described in terms of their intrinsic qualities. Rather, I used expressions such as "the smell of freshly baked bread," "the patterns one gets when closing one's eyes," and so on. Even with a term like "green sensation," reference is effectively pinned down in extrinsic terms. When we learn the term "green sensation," it is effectively by ostension–we learn to apply it to the sort of experience caused by grass, trees, and so on. Generally, insofar as we have communicable phenomenal categories at all, they are defined with respect either to their typical external associations or to an associated kind of psychological state.
David Chalmers in "The Conscious Mind" (1996)
This accounts for why no two people can verify that their experience of "green" or "the scent of lemons" is the same between them. But qualia are not the only things that can be made incommunicable.
Even figures considered objective, the measurable things Galileo considered to be primary qualities, figures like "two meters," "five seconds," "10 grams," etc., are incommunicable without a shared reference frame.
Consider the plight of two AIs, who each inhabit two distinct and isolated virtual realities. They're allowed to communicate, but being confined to their own virtual environments, they can never visit one another.
One day, the AIs want to know which of them is taller. The first AI says, "I am twice as tall as the metal rod in my city's museum." The second AI says, "Well, I am 57 times taller than the width of my king's thumb."
But since neither AI has access to the environment of the other, there's no common ground for ostensive definitions and no basis of comparison.
Under such conditions, even length becomes private and unshareable. A measurement like "57 king thumbs" is meaningless beyond the confines of the virtual reality, which constitutes its own isolated reference frame.
When people try to compare qualia, they're in a similar predicament. Each person is in the separate reality of one's headspace. Each person's mind is a private world unto itself -- one we can't invite others into.
Accordingly, things that appear only privately in each person's mind (their qualia) cannot be meaningfully shared, compared, or described. They can only be described in relation to other private elements of that person's experience: "this soda tastes like how cherries taste to me."
So it turns out qualia are not uniquely indescribable. Rather, all things can be rendered incommunicable and ineffable absent a common frame of reference shared between the two communicating parties.
Between qualia of a particular sense, comparisons can be meaningful. We might say orange is like a reddish-yellow. But it's nonsensical to mix quale of different senses, to attempt to describe a color in terms of a sound.
A studious blind man who had used explanations written in books or given to him by his friends in an attempt to understand the names of light and colours that he often encountered bragged one day that he now understood what ‘scarlet’ signified. It was, he said, like the sound of a trumpet!
John Locke in "An Essay Concerning Human Understanding" (1690)
To those who can see and hear, it is obvious that color of scarlet is not like the sound of a trumpet. For that matter, no color is like any sound. Sensations from different senses are simply untranslatable.
What about describing qualia using words?
In the end, words are symbols for spoken sounds. Thus, a verbal description of a color would be a "sound" that conveys the color. So if no sound can communicate any color, then no verbal description can exist.
But words, in addition to being sounds, also convey meaning. Might this meaning get across the experience of a color, in a way that would enable a congenitally blind person to really know what that color is like?
Words provide the illusion of being able to convey any idea. But language always involves a translation -- from private ideas into something non-experiential: letters and symbols. The person who receives these symbols must interpret them with their own mind, forming their own ideas.
Further complicating the process: the meanings of words subtly differ between each person. In any translation/interpretation process, the full meaning is never perfectly conveyed -- something is always lost.
Verbalizable knowledge can be passed around and shared, to the extent that words really "mean the same thing" to different people. Do two people ever speak the same language? What we mean by "speak the same language" is a prickly issue. We accept and take for granted that the hidden subterranean flavors are not shared. We know what comes with and what is left out of linguistic transactions, more or less. Language is a public medium for the exchange of the most private experiences. Each word is surrounded, in each mind, by a rich and inimitable cluster of concepts, and we know that no matter how much we try to bring to the surface, we always miss something.
Douglas Hofstadter and Daniel Dennett in "The Mind's I" (1981)
Words never reveal the private features or reveal full depth of what happens in one's head. The ideas surrounding the word "red" in the mind of a blind person differ from those of a sighted person. Accordingly, the meaning differs, and so, this word fails to communicate the experience.
I find language inadequate for expressing my experience of redness. Perhaps I can muster some poetic reflections about it, but unless you've had the same encounter, it is really not possible for me to share my experience.
Ray Kurzweil in "The Age of Spiritual Machines" (1999)
Understanding new concepts usually involves making a link to one or more familiar concepts. For example, the behavior ripples on the surface of a pond is a useful metaphor to understand the properties of sound waves.
But when it comes to consciousness and qualia, there are no comparisons to be found. Nothing else is quite like a state of consciousness.
If understanding a thing is arriving at a familiarizing metaphor for it, then we can see that there always will be a difficulty in understanding consciousness. For it should be immediately apparent that there is not and cannot be anything in our immediate experience that is like immediate experience itself. There is therefore a sense in which we shall never be able to understand consciousness in the same way that we can understand things that we are conscious of.
Julian Jaynes in "The Origin of Consciousness in the Breakdown of the Bicameral Mind" (1976)
States of consciousness are the lens through we see the world and all its objects. But we never see "the eye" which does this seeing. We can never stand outside ourselves, or get behind our immediate experience.
Because what you are, in your inmost being, escapes your examination in rather the same way that you can't look directly into your own eyes without using a mirror, you can't bite your own teeth, you can't taste your own tongue, and you can't touch the tip of this finger with the tip of this finger. And that's why there's always an element of profound mystery in the problem of who we are.
Alan Watts in "The Tao of Philosophy: Myth of Myself" (1965)
This difficulty with understanding consciousness was recorded millennia ago in the Brihadaranyaka Upanishad. In it, the Hindu sage Yājñavalkya was asked to explain the soul. Yājñavalkya answered:
One does not see the seer of seeing, does not hear the hearer of hearing, does not think the thinker of thinking, does not know the knower of knowing.
Brihadaranyaka Upanishad "3.4.1 The Unknowability of Brahman" (c. 700 B.C.)
The idea has survived to modern times, and is recognized by physicists:
[Consciousness] cannot be accounted for in terms of anything else.
Erwin Schrödinger in interview (1931)
Our brains are capable of extraordinary things, but they're not unlimited. The brain's sophistication works against its ability to describe its products.
Only a fraction of the activity occurring in the brain can be expressed through speech and muscle movement. Our thoughts have a limited capacity to introspect their underlying neurology. And we exercise no conscious control over how our brains process information.
Qualia seem simple, but simple appearances can be misleading. The brain processes underlying our qualia are immensely complex.
Some 30% of the neurons in our cerebral cortex process visual information. A study of the primate visual system found it includes 32 cortical areas connected by 187 links, most of which are bidirectional. These areas perform the tasks of edge and motion detection, depth perception, color filling, and object and facial recognition.
Before you can have even the simplest experience in your visual field, billions of neurons must be recruited to play an active role in producing that unique experience out of all experiences you otherwise could have.
At any given time, we experience a particular conscious state selected out of billions of possible states, each of which can lead to different behavioral consequences. The occurrence of a particular conscious state is therefore highly informative in the specific sense that information is the reduction of uncertainty among a number of alternatives.
Gerald Edelman and Giulio Tononi in "A Universe of Consciousness" (2000)
Each second, trillions of bits are exchanged and processed by the 600 billion synapses in your brain. A conscious state involves billions of neurons making billions of discriminations. So there's always more information in our heads than we could ever hope to articulate.
Consider the seeming simplicity of a note played by an instrument.
Even when its the same note, played at the same pitch and loudness, we can know what kind of instrument produced that sound:
Middle C on a Guitar Middle C on a PianoThis note, middle C, has a frequency of 261.63 Hz. It's played with the same loudness. How then are we able to tell these instruments apart? How would one describe the unique aspects of each instrument's timbre?
After all, it was by the complex pattern of overtones that you were able to recognize the sound as that of a guitar rather than a lute or harpsichord. In other words, although the subjective experience has changed dramatically, the [phenomenal information properties haven't] changed; you are still responding, as before, to a complex property so highly informative that it practically defies verbal description.
Jonathan Bennett in "Substance, Reality and Primary Qualities" (1965)
Since we can recognize a difference between the two instruments, all the complexity of the overtones must be present in our experience. But its an amount of information that exceeds our capacity to describe.
Humans can introspect -- we can look inside ourselves. We can think about our thoughts processes, judge our emotions, and contemplate our qualia.
But what are the limits to how much a mind can determine about its own workings? How deeply can we peer into our own perceptual processes?
Clearly some limits exist. We aren't aware of what our individual neurons are doing. Nor do we feel which parts of our brains are active. We have no clue how words or ideas bubble up to become our conscious thoughts.
Our thoughts seem to run about in their own space, creating new thoughts and modifying old ones, and we never notice any neurons helping us out! But that is to be expected. We can't. [...]
Douglas Hofstadter in "Gödel, Escher, Bach" (1979)
We should remember that physical law is what makes it all happen–way, way down in neural nooks and crannies which are too remote for us to reach with our high-level introspective probes.
Computationalism provides hints for why this might be.
The universality of computation implies at some level, hardware details can be abstracted away and ignored. Not only does a piece of software not need to know what its underlying hardware is doing, it cannot know.
In software engineering, it's common to employ layers of abstraction. Each layer hides the details for how things work from the layer above. This enables more powerful and flexible high-level programs to be written.
Consider, for example, a website that runs in a web browser, which runs on an operating system, that runs on computing hardware.
Abstraction layers enable the website author to write the website without having to know how the web browser works. Likewise, the programmer of the web browser needn't know how the operating system works, and likewise, the operating system developers needn't care how any of the hardware works. All that matters is that each lower layer does work.
Our brains seem to work similarly. Each region of the brain has its own dedicated purpose, and different parts of the brain can use the products of other regions without having to know how they were produced.
Each area of the brain is apparently specialized for a particular type of function. We know this because when specific areas are damaged by an accident or a stroke there is also a corresponding loss of function. For example, damage to areas 44 and 45 on the left side of the frontal lobe--together they are called Broca's area--often robs someone of the ability to produce grammatical speech. People so afflicted may still pronounce words clearly and they may understand the speech of others, but they will be unable to construct grammatical sentences. Damage to an area known as the angular gyrus, located a little farther toward the back of the head, causes difficulties in reading and writing; damage to yet another area results in an inability to recall the names of familiar objects or recognize familiar faces.
Danny Hillis in "Pattern on the Stone" (1998)
Much of this mapping was figured out by Wilder Penfield in the 1950s.
While operating on people under local anesthesia, in an effort to cure focal seizures by excising damaged brain tissue, Penfield would cut and turn back a portion of their skulls, expose their cerebral cortices, and then touch an electrical probe to specific points on the surface of the brain. This allowed him to identify and map regions of special importance, such as the language centers, that it was crucial were left intact by the surgery. Depending on exactly where on the cortex the mild pulsing current from the probe was allowed to flow, the patient would react in a highly specific, often comical way. He might utter a short, incomprehensible sound, or be prevented from saying anything at all. He might twitch the index finger of his left hand. And if the probe made contact somewhere on the lateral side of the temporal lobe, it would often trigger in the patient a particular, vivid reliving of a past event, as if the play button of a tiny video recorder had been pressed. Touching one spot on the cortex might cause a patient to hear her mother and father singing Christmas carols around the piano as they did in their youth; stimulation of another point nearby might spark off the recollection of a winter walk through fresh-fallen snow, or of a childhood incident in the schoolyard, or of an encounter with a menacing stranger. Each episode would seem strikingly realistic and detailed (much more so than a normal recollection), and each could be played again from the start, exactly as before, by an identical touch of the probe.
David J. Darling in "Zen Physics" (1996)
Jerry Fodor called this view of the brain's organization, the modularity of mind. It says that the brain consists of specialized modules, each of which has only limited access to the internal operations of the other modules.
Some theorize that this limited-access could explain one of the more mysterious features of qualia. Namely, that we can tell two qualia apart, say red and green, but we're incapable of describing how they differ.
Let's consider modularity and its implications using the example of a self-driving car -- one that can autonomously tell the status of a traffic light.
All autonomous vehicles have sensors. These can include cameras, ultrasonic sensors, and LIDAR. These sensors collect raw data from the environment. For example the camera may collect tens of millions of pixels, comprising hundreds of megabytes of data, each second.
This raw data is not immediately actionable. It must be processed and refined before it can be interpreted. This is performed by modules in the second layer. These modules process the raw data with the aim of making useful discriminations. For example, they may identify lane boundaries, read posted signs and traffic signals, or detect obstacles ahead.
The Signal Reading Module, which determines the status of a traffic light, may have to process many megabytes of pixel data to produce a single discrimination, an output indicating one of three possible states:
The simplified outputs from the processing modules are fed into the Control System. The Control System evaluates the discriminations made by each of the modules in the prior layer, and decides on a course of action.
For example, if an obstacle is detected and the vehicle can't stop in time, the Control System may opt to override its usual goal of staying in its lane, in order to meet its higher-priority goal of avoiding a collision.
Let's consider how the world seems from the point of view of this Control System. If we connected it with a Language Module, so we could interrogate the Control System, what would it say about its knowledge of the world?
If we asked it, "How does a red light differ from a green light signal?" What could it say? The Control System would know there's a difference between a red traffic signal and a green one. It might even know how the two states affect its behavior. But the Control System does not know how the Signal Reading Module performed its discrimination.
The Control System lacks access to the raw data of the cameras. Accordingly, it can't know how these two states differ, only that they do.
To get some feel for the situation, imagine that we have created computational intelligence in the form of an autonomous agent that perceives its environment and has the capacity to reflect relationally on what it perceives. What would such a system be like? [...] When we asked the system what perception was like, what would it say? [...]
David Chalmers in "The Conscious Mind" (1996)
When we ask how it knows that it sees the red tricycle, an efficiently designed system would say, "I just see it!" When we ask how it knows the tricycle is red, it would say the same sort of thing that we do: "It just looks red." If such a system were reflective, it might start wondering about how it is that things look red, and about why it is that red just is a particular way, and blue another. From the system's point of view it is just a brute fact that red looks one way, and blue another. Of course from our vantage point we know that this is just because red throws the system into one state, and blue throws it into another; but from the machine's point of view this does not help.
It seems we're in a similar situation to this Control System.
If our brains are modular then the part of our brain that talks lacks access to the raw information processed in the low-level visual centers. So while we can know red differs from green, we can't say how. The information used to make that discrimination simply isn't available there.
So this is why conscious experience seems strange. We are able to make distinctions, but we have direct access neither to the sources of those distinctions, or to how we make the distinctions.
David Chalmers in "Consciousness and Cognition" (1990)
In Frank Jackson's thought experiment with Mary the super-scientist, Mary knew everything there is to know about the human visual system. Despite this, she still did not know what the experience of red was like.
Tell me everything physical there is to tell about what is going on in a living brain, the kind of states, their functional role, their relation to what goes on at other times and in other brains, and so on and so forth, and be I as clever as can be in fitting it all together, you won't have told me about the hurtfulness of pains, the itchiness of itches, pangs of jealousy, or about the characteristic experience of tasting a lemon, smelling a rose, hearing a loud noise or seeing the sky.
Frank Jackson in "Epiphenomenal Qualia" (1982)
There was a person who, like Mary, studied color without seeing it.
Knut Nordby was a vision scientist with achromatopsia -- he was born without color-sensing cone cells and could only see in shades of gray.
Although I have acquired a thorough theoretical knowledge of the physics of colours and the physiology of the colour receptor mechanisms, nothing of this can help me to understand the true nature of colours. From the history of art I have also learned about the meanings often attributed to colours and how colours have been used at different times, but this too does not give me an understanding of the essential character or quality of colours.
Knut Nordby in "Vision in a Complete Achromat: A Personal Account" (1990)
Nordby reached a similar conclusion to Jackson. Writing, "Colours, like tones and tastes, are firsthand sensory experiences, and no amount of acquired theoretical knowledge can create this experience."
But Nordby, like Mary, had the requisite knowledge to perceive colors; he only lacked technological capacity to apply that knowledge to himself.
For instance, he knew his retinas were missing the cone cells he needed to distinguish colors and send color information to his visual cortex. So if he could alter his retinas he would, in theory, gain the ability to see color.
This has been done. In 2009, researchers used a gene therapy to alter the retinas of squirrel monkeys, giving them a new type of cone cell. After a few weeks, the formerly color-blind monkeys gained full color vision.
With an ability to alter herself, Mary could likewise rewire her brain to produce the same patterns a normally-sighted person's brain does when they see a ripe tomato. She could even make herself like someone with synesthesia, who experiences color at the sight of a number.
Our brains rewire themselves when we form memories or learn skills. But the brain's capacity to rewire itself is not unlimited. No matter how much one reads about the human visual system, the brain's language-centers won't rewire themselves to supply raw sensory input to the visual cortex.
So no matter how much language about "red" Mary's brain processes, it is never enough to activate her visual system in a way that lets her see red.
So it is not just the information, but how the information comes in that matters: how it enters the brain, which brain module processes it, and how that module shares its discriminations with other parts of the brain.
Sydney Shoemaker: "If what I want when I drink fine wine is information about its chemical properties, why don't I just read the label?"
Daniel Dennett: "What we want when we sip a great wine is not, indeed, the information about its chemical contents; what we want is to be informed about its chemical contents in our favorite way."
Thomas Nagel wondered if he could ever truly know what it is like to be another creature, particularly if it has a sensory system unlike ours. For example, how might a bat, which uses echolocation, perceive the world?
I want to know what it is like for a bat to be a bat. Yet if I try to imagine this, I am restricted to the resources of my own mind, and those resources are inadequate to the task. I cannot perform it either by imagining additions to my present experience, or by imagining segments gradually subtracted from it, or by imagining some combination of additions, subtractions, and modifications.
Thomas Nagel in "What is it Like to Be a Bat?" (1974)
There's a degree of unknowability surrounding the question of "what it's like to be another" -- one that no description appears able to bridge.
Being is not describing. A scientific description can have predictive and explanatory power, but it cannot directly convey the phenomenal experience that depends on having an individual brain and body.
Gerald Edelman and Giulio Tononi in "A Universe of Consciousness" (2000)
This is the main difficulty with communicating qualia: having a particular conscious experience requires being in a particular state. Or in other words, one must be a certain thing in order to know a certain thing.
So while language can describe things in objective terms, receiving a description doesn't transform one being into another. Thus, descriptions of subjective experiences can never perfectly convey the essence of what it was like to be that particular mind having that particular experience.
It's easy to fall into the illusion that experiences are communicable. For example, when a friend tells you about their recent dining experience.
Yet you can only imagine the experience using your mind, never theirs. This is evident when the descriptions go beyond one's area of familiarity: when you hear of a food you've never tasted, when a blind person ponders color, or when Nagel tries to imagine what it's like to echolocate.
Modern science began with the recognition of a divide between the objective and the subjective, between the observer and the observed.
Einstein called Galileo, "the father of modern physics--indeed, of modern science altogether." It was Galileo who drew the line between observer-independent primary qualities and observer-dependent secondary qualities.
Galileo thought science was concerned only with primary qualities -- measurable quantities like shape, mass, position, time and motion. Secondary qualities, meanwhile, are subject-dependent. These are first-hand experiences and sensations -- qualities like color, taste, and smell.
[If] the living creature were removed, all these qualities [tastes, odors, colors, and so on] would be wiped away and annihilated.
Galileo Galilei in "The Assayer" (1623)
But making science about primary qualities served to exclude secondary qualities from scientific investigation. If we seek to understand qualia, science can't ignore secondary qualities or pretend they don't exist.
As Nagel writes, "while we are right to leave [the subjective] point of view aside in seeking a fuller understanding of the external world, we cannot ignore it permanently, since it is the essence of the internal world [...]."
It's easy to forget that qualia have no existence beyond subjects.
While there is 650 nm wavelength light, there is no such thing as "red" just out there in the universe. There's "red for Alice," and "red for Bob," but it's always a "red for someone" -- and these things may not be the same.
That qualia can differ between subjects is clear when we consider the different associations and degrees of appreciation people have for the secondary qualities they perceive when sensing the same thing.
When one person enjoys a particular taste, smell, or sound, while another finds it off-putting, how could they be having the same experience?
People disagree on all types of secondary qualities: For example, we find no broad agreement on the most beautiful work of art in the world, the most delicious food, or the most pleasing color. Such disagreements suggests that different subjects perceive the same things differently.
It's said that "objectivity" is "intersubjective agreement." In other words, only those things that all rational subjects can agree on, are objective.
It's then clear why there can be no objectively most delicious food, or objectively most beautiful work of art. This also means there can be no objective meaning for any quale, which are only known subjectively.
Understanding the Divide through Computationalism
Let's consider two distinct observers: Alice and Bob:
Bob sees a red square, giving him a vivid experience of red. Alice, on the other hand, has a full account of everything that happens in Bob's brain as he looks at the red square and has this red experience.
Can Alice ever truly know "what it is like for Bob" when Bob sees red?
Computationalism can help us understand why it's impossible for Alice to ever truly know what it is like to be Bob as he has his red experience.
According to computationalism, a conscious state is a particular high-level state of a computable function. Anything that performs the exact same high-level function will have the exact same state of consciousness.
Let's examine this situation using a functional view of consciousness.
In mathematics, a function "f" will accept a possible input "x" and process it to produce an output value "y" for that input. This is written as:
y = f(x)
According to computationalism, there's a function that corresponds to the processing performed by Bob's brain. Let's call this function: "B."
Alice's brain, being different from Bob's, processes things differently. It therefore corresponds to a different function. Let's call this function: "A."
For brains, input is sensory information. Let's label the sensory input Bob's brain receives as he looks at the red square "red." So Bob's "experience of red" corresponds with computing the function "B" on the input of "red":
B(\text{red})
Even if the same sensory input were supplied to Alice's brain, when the high-level function describing the processing performed by Alice's brain differs from Bob's, the resulting state of consciousness will differ too:
A(\text{red}) \neq B(\text{red})
Having a complete description of Bob's brain doesn't help. Let's call the complete description of everything Bob's brain does "d." If this description were provided to Alice's brain as sensory information, it corresponds to a very different function than that of Bob's brain perceiving red:
A(d) \neq B(\text{red})
Alice's brain receiving a description of Bob's brain experiencing red, isn't the same as Bob receiving red. So again, Alice has a different experience.
In fact, there's no input we can provide to Alice's brain to make her brain function exactly like Bob's does when Bob's brain processes the red input:
\text{For all } x \text{: } A(x) \neq B(\text{red})
Changing the input changes what is processed, not how it is processed.
Therefore, no input to Alice's brain can make her brain perform the same process as Bob's brain does when it processes red. This is why there's nothing Bob can say, no description Alice can have, nor anything Alice might learn, that puts her brain in the same functional state as Bob's.
Even if we had a fully satisfactory theory of our mental process, it would not convey the subjective experience of redness or yellowness.
Ray Kurzweil in "The Age of Spiritual Machines" (1999)
In 1931, Kurt Gödel discovered what's been called, "the most significant mathematical truth of the century," Gödel's incompleteness theorems.
What Gödel showed is that what can be proved in mathematics, depends on the mathematical system one happens to be operating within.
So some mathematical truths that are accessible in some systems, will be inaccessible from others. Furthermore, Gödel proved that no system can access all mathematical truth, hence every system is incomplete.
So even in mathematics, there's a sort of subjective-objective divide. The mathematical truths that can be surveyed depend on one's vantage point.
What is it like to hate chocolate (or your personal favorite flavor)?
Douglas Hofstadter and Daniel Dennett in "The Mind's I" (1981)
What is it like to hear English (or one's native language) without understanding it? [...]
In a sense, Gödel's Theorem is a mathematical analogue of the fact that I cannot understand what it is like not to like chocolate, or to be a bat, except by an infinite sequence of ever-more-accurate simulation processes that converge toward, but never reach, emulation. I am trapped inside myself and therefore can't see how other systems are. Gödel's Theorem follows from a consequence of the general fact: I am trapped inside myself and therefore can't see how other systems see me. Thus the objectivity-subjectivity dilemmas that Nagel has sharply posed are somehow related to epistemological problems in both mathematical logic, and as we saw earlier, the foundations of physics.
Nagel described this frustrating difficulty.
Whenever he tries to imagine another's sensations, he does it through approximations of imagination. And if he could somehow morph his brain to be more like that of another, he would cease being himself.
Even if I could by gradual degrees be transformed into a bat, nothing in my present constitution enables me to imagine what the experiences of such a future stage of myself thus metamorphosed would be like.
Thomas Nagel in "What is it Like to Be a Bat?" (1974)
Even if we possessed the technology to morph one brain into another, it still would not overcome the limits imposed by Gödel's theorem.
Consider if Hofstadter, on a quest to understand what it's like to hate chocolate, underwent a radical procedure to morph his brain to become like the brain of his friend John, who hates chocolate:
When Hofstadter's brain is morphed into John's brain, the resulting brain will know what it's like to hate chocolate. But this brain is no longer Hofstadter, it's John. So Hofstadter still doesn't know what it's like.
If we morph the brain back to how it was before, such that it is back to being Hofstadter's brain again, that brain no longer knows what it's like to hate chocolate. Such knowledge is lost as the brain morphs back into Hofstadter's; John's knowledge of hating chocolate doesn't make the trip.
What if instead or morphing a brain, we try another strategy: merging two brains. We take a brain that loves chocolate and a brain that hates it, and merge the two into a single brain with a single consciousness. Would this enable the merged mind to know what it is like to be the other?
Again this does not help.
Since the merged brain possesses a single consciousness, it can only have a single experience when tasting chocolate. The merged brain will either like chocolate, or it won't. Whatever the case, the experience of the merged brain cannot be like what it was like for both Hofstadter and John.
Since one cannot be oneself and another at the same time, knowledge states like "Alice knowing what it is like to be Bob," are contradictions. Alice must be Bob to have and know Bob's experience, but then she would no longer be Alice. Thus, Alice can never know what it's like to be Bob.
No description can convey a private experience because others only perceive that description, never the private experience itself.
Everything that we learn from what other people say and write involves inference from something perceived (spoken or written words) to something unperceived–namely, the "mental" events of the speaker or writer.
Bertrand Russell in "The Analysis of Matter" (1927)
Qualia are inextricably bound to their subjects. To know "how mint tastes to Bob" one must be Bob. Since this kind of knowledge can't be taken from one subject and given to another, it is inherently private and unsharable.
Because of the nature of embodiment, we still remain, to some extent, prisoners of description.
Gerald Edelman and Giulio Tononi in "A Universe of Consciousness" (2000)
To be conscious is to possess a particular bundle of immediate knowledge. In any instant, that "knowledge bundle" is everything that one is aware of.
Though we change in each moment, we lose that perfect grasp of what it was like to be in a prior state. We find ourselves in the present, where any past state of consciousness is only accessible as a memory.
W are, in a sense, trapped in a single experience: one's present moment of consciousness. So as a conscious being, the only thing you really know is precisely what it is like to be you, now in this moment.
Being and experiencing go hand in hand.
We are embodied beings, inhabiting a particular body, brain, and sensory experience. We can't escape this, nor we can share it. We can only be it.
To expect that a theoretical explanation of consciousness can itself provide an observer with the experience of ‘the redness of red’ is to ignore just those phenotypic properties and life history that enable an individual animal to know what it is like to be such an animal. A scientific theory cannot presume to replicate the experience that it describes or explains; a theory to account for a hurricane is not a hurricane. A third-person description by a theorist of the qualia associated with wine tasting can, for example, take detailed account of the reported personal experiences of that theorist and his human subjects. It cannot, however, directly convey or induce qualia by description; to experience the discriminations of an individual, it is necessary to be that individual.
Gerald Edelman in "Naturalizing Consciousness" (2003)
There is a difference between the inside view that comes from being something, and an outside view which is only a description of something. This is the answer to Mary's Room. Mary can have a complete description of a brain looking at a ripe tomato. But that other brain is not her.
While we failed to find a way to describe qualia, we have succeeded in understanding why qualia are incommunicable, ineffable, and private.
Since we can only ever be ourselves, and only know what it is like to be oneself, there is an inherent limitation in knowing. One can only know what it's like to be someone else, if one is that someone else.
You don't like the term “concept” as in phenomenal and propositional concepts. Fine, but it is not the word that matters. What does matter is that these two kinds of conscious experience, though different, are both ideas in our minds about reality.
Some of our country ideas about reality are in the first person and some are in the second or third person, but all mental phenomena have the same ontology. As a functionalist, you should be as happy about that as any physicalist.Different conscious experiences require different activations of different neural hardware, they require having/being a particular state of mind.That is one reason they are critical.
Descriptions don't have read/write power to arbitrarily rewire the recipient's brain, and hence, no textual account can cause you to experience a bat's echolocation, nor enable a congenitally blind person to see red.I'm glad we agree on that much! Too bad you think certain sensorless computer programs have amazing superpowers that humans don’t have.
-gts
On Sun, Aug 17, 2025, 1:39 AM Gordon Swobe <gordon...@gmail.com> wrote:On Sat, Aug 16, 2025 at 9:05 PM Jason Resch <jason...@gmail.com> wrote:
Rather than say the whole world is conscious, I would say the world is filled with conscious things.If the wind blows, the cradle will rock.The rocking cradle is conditionally sensitive to the wind blowing. So you could say that when the cradle is rocking it has realized a knowledge state of knowing the wind is blowing.To consider such a system as a conscious system I admit sounds ridiculous…Because it is ridiculous.Yet you didn't answer my main argument in defense of it. How is it any more (or less) ridiculous than any of the conditional states used as a basis of consciousness in human brains, or in the circuitry of conscious robots?
If the cradle is conscious then so is the tree which is sensitive to the wind and the bough is sensitive to the tree and the cradle is sensitive to the bough. Even the wind is conscious as it is sensitive to the atmospheric conditions, which are sensitive to the sun, and so on. The whole kit and kaboodle is conscious, baby and all.You have to be more specific about the state and it's relationship to the truth of something else.
Also the conditional relationship must be reliable. Finally, most of these situations you describe are binary, one or the other, type knowledge states, all of which are atomic, minimal states of consciousness of the kind a thermostat has when it knows if it's too hot or too cold.
Note that this is not human consciousness we are talking about, but rather, how anything can even begin to be conscious. For example when photosensitive bacteria.:.
…The simplest possible mind is purely reactive. To be reactive requires reaching a state conditionally.
On Sun, Aug 17, 2025 at 7:37 AM Jason Resch <jason...@gmail.com> wrote:On Sun, Aug 17, 2025, 1:39 AM Gordon Swobe <gordon...@gmail.com> wrote:On Sat, Aug 16, 2025 at 9:05 PM Jason Resch <jason...@gmail.com> wrote:
Rather than say the whole world is conscious, I would say the world is filled with conscious things.If the wind blows, the cradle will rock.The rocking cradle is conditionally sensitive to the wind blowing. So you could say that when the cradle is rocking it has realized a knowledge state of knowing the wind is blowing.To consider such a system as a conscious system I admit sounds ridiculous…Because it is ridiculous.Yet you didn't answer my main argument in defense of it. How is it any more (or less) ridiculous than any of the conditional states used as a basis of consciousness in human brains, or in the circuitry of conscious robots?Do you have a name for your pet conditionals-are-the-atoms-of-consciousness theory? I will call it CAC here.You’re telling me that according to CAC, the Rock a Bye Baby nursery rhyme is a story about a conscious cradle falling out of a tree. If that is ridiculous (and we agree it sounds ridiculous) then so is the idea that conditionals are the basis of consciousness in humans.
If the cradle is conscious then so is the tree which is sensitive to the wind and the bough is sensitive to the tree and the cradle is sensitive to the bough. Even the wind is conscious as it is sensitive to the atmospheric conditions, which are sensitive to the sun, and so on. The whole kit and kaboodle is conscious, baby and all.You have to be more specific about the state and it's relationship to the truth of something else.Also the conditional relationship must be reliable. Finally, most of these situations you describe are binary, one or the other, type knowledge states, all of which are atomic, minimal states of consciousness of the kind a thermostat has when it knows if it's too hot or too cold.So then my ridiculous continuation of your rediculous story is consistent with CAC.
Note that this is not human consciousness we are talking about, but rather, how anything can even begin to be conscious. For example when photosensitive bacteria.:.…The simplest possible mind is purely reactive. To be reactive requires reaching a state conditionally.I think you should cut to the chase and embrace panpsychism. Just as the cradle is sensitive to the wind, physical atoms are sensitive and react to their environments.
For example, atoms of phosphorus emit photons IF they are exposed to other photons. Therefore, according to CAC, phosphorus atoms are conscious. Basically, on CAC, the entire physical universe is conscious.
For example, atoms of phosphorus emit photons IF they are exposed to other photons. Therefore, according to CAC, phosphorus atoms are conscious. Basically, on CAC, the entire physical universe is conscious.If you can show how a phosphorus atom enters one of two distinct states based conditionally on some event,you could make that argument.
For example if a phosphorus atom is an excited state vs. when the phosphorus atom is in a ground state, you could say the excited phosphorus atom is an embodiment of the knowledge that this particular phosphorus atom has been excited. But what excited it, how long ago it was excited, etc..
On Sun, 17 Aug 2025 at 09:15, Brent Allsop <brent....@gmail.com> wrote:Yes, I agree.Would you also agree that our thoughts and our phenomenal knowledge (qualities) can be all subjectively bound so that we directly apprehend both of them at the same time?Yes. I must say I don’t understand what the big deal is about the binding problem. A self-driving car functionally integrates multiple data types such as input from cameras, lidar, sonar etc. without any special binding circuit.