Qualia and communicability


Jason Resch

Mar 31, 2021, 11:58:44 AM
to Everything List
I was thinking about what aspects of conscious experience are communicable and which are not, and I realized all communication relies on some pre-existing shared framework.

It's not only things like "red" that are meaningless to someone who's never seen it: things like spatial extent and dimensionality would likewise be incommunicable to someone who had no experience of moving in, or through, space.

Even communicating quantities requires a pre-existing and common system of units and measures.

So all communication (inputs/outputs) consists of meaningless bit strings. It is only when a bit string is combined with some processing that meaning can be shared. The reason we can't communicate "red" to someone who's never seen it is that we would need to transmit a description of the processing done by our brains in order to share what red means to us.

So in summary, I wonder if anything at all is communicable, not just qualia, when there's no common processing system already shared between the sender and receiver of the information.

Jason

Telmo Menezes

Apr 8, 2021, 12:11:37 PM
to Everything List
Hi Jason,

I believe that you are alluding to what is known in Cognitive Science as the "Symbol Grounding Problem":

My intuition goes in the same direction as yours, that of "procedural semantics". Furthermore, I am inclined to believe that language is an emergent feature of computational processes with self-replication. From signaling between unicellular organisms all the way up to human language.

Luc Steels has some really interesting work exploring this sort of idea, with his evolutionary language games:

I have been working a lot with language these days. I and my co-author (Camille Roth) developed a formalism called Semantic Hypergraphs, which is an attempt to represent natural language in structures that are akin to typed lambda-calculus:

Here's the Python library that implements these ideas:

So far we use modern machine learning to parse natural language into this representation, and then take advantage of the regularity of the structures to automatically identify stuff in text corpora for the purpose of computational social science research.

Something I dream of, and intend to explore at some point, is to attempt to go beyond the parser and actually "execute the code", and thus try to close the loop with the idea of procedural semantics.

Best,
Telmo
--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to everything-li...@googlegroups.com.

Jason Resch

Apr 8, 2021, 3:38:26 PM
to Everything List
Hi Telmo,

Thank you for these links, they are very helpful in articulating the problem. I think you are right about there being some connection between communication of qualia and the symbol grounding problem.

I used to think there were two kinds of knowledge:
  1. Third-person sharable knowledge: information that can be shared and communicated through books, like the population of Paris, or the height of Mount Everest
  2. First-person knowledge: information that must be felt or experienced first hand, emotions, feelings, the pain of a bee sting, the smell of a rose
But now I am wondering if the idea of third-person sharable knowledge is an illusion. The string encoding the height of Mount Everest is meaningless if you have no framework for understanding physical spaces, units of length, spatial extents, and the symbology of numbers. All of that information has to be unpacked, and eventually processed into some thought that relates to a basis of conscious experience and understanding of heights and sizes. Even size is a meaningless term when attempting to compare relative sizes between two universes, so in that sense it must be tied somehow back to the subject.

There also seem to be counter-examples to a clear divide between first- and third-person knowledge. For example, is the redness of red really incommunicable between two synesthetes who both see the number 5 as red? If everyone in the world had such synesthesia, would we still think book knowledge could not communicate the redness of red? In this case, what makes redness communicable is the shared processing between the synesthetes' brains: they process the symbol in the same way.


More comments below:



On Thu, Apr 8, 2021 at 11:11 AM Telmo Menezes <te...@telmomenezes.net> wrote:
Hi Jason,

I believe that you are alluding to what is known in Cognitive Science as the "Symbol Grounding Problem":

My intuition goes in the same direction as yours, that of "procedural semantics". Furthermore, I am inclined to believe that language is an emergent feature of computational processes with self-replication. From signaling between unicellular organisms all the way up to human language.

That is interesting. I do think there is something self-defining about the meaning of processes. Something that multiplies two inputs can always be said to be multiplying. The meaning of the operation is then grounded in the behavior of the process itself, which makes it unambiguous: the multiplication process could not be confused with addition or subtraction. This is in contrast to an N-bit string on a piece of paper, which could be interpreted in at least 2^N ways (or perhaps even an infinite number of ways, if you consider which function might be applied to that bit string).
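A tiny sketch of this contrast (illustrative only, not from the thread): a process's meaning is pinned down by what it does, while a bare bit string means whatever the chosen decoding function says it means.

```python
# A process is self-defining: this function can only be multiplying.
def multiply(a, b):
    return a * b

# A bare bit string, by contrast, supports many readings, one per
# decoding applied to it.
bits = "01000001"

as_unsigned = int(bits, 2)            # read as an unsigned integer: 65
as_character = chr(int(bits, 2))      # read as an ASCII code point: 'A'
as_flags = [b == "1" for b in bits]   # read as eight boolean flags

print(as_unsigned, as_character, as_flags)
```

Without agreement on which decoding is in force, the receiver has the bits but not the meaning.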
 

Luc Steels has some really interesting work exploring this sort of idea, with his evolutionary language games:

Evolving systems that can communicate amongst themselves seems to be a fruitful way to explore these issues. Has anyone attempted to take simple examples, like computer simulated evolved versions of robots playing soccer, and add in a layer that lets each player emit and receive arbitrary signals from other players? I would expect there would be strong evolutionary pressures for learning to communicate things like "I see the ball", "I'm about to take a shot", etc. to other teammates.
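Something like this pressure can be seen even in a minimal Lewis-style signaling game with simple reinforcement (a hypothetical toy sketch, not Steels's actual experiments): sender and receiver start with no convention, and success alone shapes one.

```python
import random

random.seed(1)
STATES = SIGNALS = [0, 1, 2]
# Weights start uniform: no convention exists yet.
send_w = {s: {m: 1.0 for m in SIGNALS} for s in STATES}
recv_w = {m: {s: 1.0 for s in STATES} for m in SIGNALS}

def choose(weights):
    """Sample a key with probability proportional to its weight."""
    r = random.uniform(0, sum(weights.values()))
    for key, w in weights.items():
        r -= w
        if r <= 0:
            return key
    return key

successes = 0
for _ in range(5000):
    state = random.choice(STATES)      # sender observes a state
    signal = choose(send_w[state])     # ...and emits a signal
    guess = choose(recv_w[signal])     # receiver acts on the signal
    if guess == state:                 # reinforce whatever worked
        send_w[state][signal] += 1.0
        recv_w[signal][state] += 1.0
        successes += 1

# The emergent "lexicon": each state's most-reinforced signal.
lexicon = {s: max(send_w[s], key=send_w[s].get) for s in STATES}
print(lexicon, successes / 5000)
```

With enough rounds the success rate typically climbs well above the 1/3 chance baseline, though partial-pooling conventions can also get locked in — meaning emerges only as shared processing, which is exactly the point.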
 


I have been working a lot with language these days. I and my co-author (Camille Roth) developed a formalism called Semantic Hypergraphs, which is an attempt to represent natural language in structures that are akin to typed lambda-calculus:

One curiosity is that all human languages appear to be "Turing complete" in the sense that we can use natural language to describe and define any finite process. I don't know how significant this is though, as in general it is pretty easy to achieve Turing complete languages.

I think the central problem with "Mary the super-scientist" is our brains, in general, don't have a way to take received code/instructions and process them accordingly. I think if our brains could do this, if we could take book knowledge and use it to build neural structures for processing information in specified ways, then Mary could learn what it is like to see red without red light ever hitting her retina. But such flexibility in the brain would make us vulnerable to "mind viruses" that spread by words or symbols. Modern computers clearly demarcate executable and non-executable memory to limit similar dangers.

This might also explain the apparent third-person / first-person distinction. We can communicate through language any Turing machine, and understand the functioning of that machine and processing in a third person way, but without re-wiring ourselves, we have no way to perceive it in a direct first-person way.

 


Here's the Python library that implements these ideas:


Very cool!
 
So far we use modern machine learning to parse natural language into this representation, and then take advantage of the regularity of the structures to automatically identify stuff in text corpora for the purpose of computational social science research.

Something I dream of, and intend to explore at some point, is to attempt to go beyond the parser and actually "execute the code", and thus try to close the loop with the idea of procedural semantics.


I have often wondered: if an alien race discovered an English dictionary (containing no pictures), would there be enough internal consistency and information present in that dictionary to work out all the meaning? I have the feeling that because there is enough redundancy in it, together with a shared heritage of evolving in the same physical universe, there is some hope that they could, but it might involve a massive computational process to bootstrap. Once they make some headway towards a correct interpretation of the words, however, I think it will end up confirming itself as a correct understanding, much like the end stages of solving a Sudoku puzzle become easier and self-confirming of the correctness of the solution.

Is this the problem you are attempting to solve with the semantic hypergraphs / Graphbrain, or that such graphs could one day solve?

Jason
 



Brent Meeker

Apr 8, 2021, 8:40:14 PM
to everyth...@googlegroups.com


On 4/8/2021 12:38 PM, Jason Resch wrote:
Hi Telmo,

Thank you for these links, they are very helpful in articulating the problem. I think you are right about there being some connection between communication of qualia and the symbol grounding problem.

I used to think there were two kinds of knowledge:
  1. Third-person sharable knowledge: information that can be shared and communicated through books, like the population of Paris, or the height of Mount Everest
  2. First-person knowledge: information that must be felt or experienced first hand, emotions, feelings, the pain of a bee sting, the smell of a rose
But now I am wondering if the idea of third-person sharable knowledge is an illusion. The string encoding the height of Mount Everest is meaningless if you have no framework for understanding physical spaces, units of length, spatial extents, and the symbology of numbers. All of that information has to be unpacked, and eventually processed into some thought that relates to a basis of conscious experience and understanding of heights and sizes. Even size is a meaningless term when attempting to compare relative sizes between two universes, so in that sense it must be tied somehow back to the subject.

There also seem to be counter-examples to a clear divide between first- and third-person knowledge. For example, is the redness of red really incommunicable between two synesthetes who both see the number 5 as red? If everyone in the world had such synesthesia, would we still think book knowledge could not communicate the redness of red? In this case, what makes redness communicable is the shared processing between the synesthetes' brains: they process the symbol in the same way.

I think you exaggerate the problem. Consider how bats "see" by sonar. I think this is quite communicable to humans by analogies. And submarines have sonar which produces images on screens. Is redness communicable? My father, who was red/green color blind, had to guess at the color of traffic lights, or just watch other cars, when he first started to drive around 1928. But he understood the concept of color because he could tell blue from red/green. And later, traffic engineers adjusted the spectrum of traffic lights so that he could tell the difference (they also started to put the red at the top).

Brent

Bruno Marchal

Apr 9, 2021, 6:48:29 AM
to everyth...@googlegroups.com
Hi Jason,

I discovered your post just now, sorry.


On 31 Mar 2021, at 17:58, Jason Resch <jason...@gmail.com> wrote:

I was thinking about what aspects of conscious experience are communicable and which are not, and I realized all communication relies on some pre-existing shared framework.

OK. It presupposes that we (the communicating entities) share some Reality, which is not rationally justifiable (by using both Gödel's completeness theorem and Gödel's second incompleteness theorem).

Consistency (~[]f, i.e. <>t) is equivalent to the existence of a reality/model, by the completeness theorem, and is not provable by an arithmetically sound machine, by the second incompleteness theorem.
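Spelling out the two theorems being combined here (standard statements, with Con(T) for the arithmetized consistency assertion, which in the modal notation is <>t, i.e. ~[]f):

```latex
% Gödel's completeness theorem (1930): for a first-order theory T,
%   T is consistent  <=>  T has a model (a "reality"):
\mathrm{Con}(T) \;\Longleftrightarrow\; \exists \mathcal{M}\, (\mathcal{M} \models T)

% Gödel's second incompleteness theorem (1931): if T is consistent
% and interprets enough arithmetic, then T cannot prove its own consistency:
T \nvdash \mathrm{Con}(T)
```

Chaining the two: the machine cannot prove its own consistency, hence cannot prove the existence of a reality/model for itself.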

That corroborates the idea that consciousness is an (instinctive) belief in *some* reality.



It's not only things like "red" that are meaningless to someone who's never seen it: things like spatial extent and dimensionality would likewise be incommunicable to someone who had no experience of moving in, or through, space.

Even communicating quantities requires a pre-existing and common system of units and measures.

Only for the quantities that we assume to be correlated to some empirical reality. In mathematics there are no units, so if we can agree on some mathematical axiomatic system, we can communicate/justify rationally many things. It is the link with some assumed Reality which is not communicable here.


So all communication (inputs/outputs) consists of meaningless bit strings. It is only when a bit string is combined with some processing that meaning can be shared.

Yes. You communicate the number x, and the universal machine will interpret it as phi_x. 

The universal machine is the interpreter. It works, apparently, as you have succeeded in making a machine understand that she has to send me your mail.




The reason we can't communicate "red" to someone who's never seen it is that we would need to transmit a description of the processing done by our brains in order to share what red means to us.

But that would not be enough, as this presupposes that you could know for sure your mechanist substitution level, which is impossible. So again, the communication of a qualia is impossible. We can communicate a theory. We can agree on the axioms, and communicate consequences, but the semantics is not communicated, and we can only hope the others have sufficiently similar interpretations, although that is not part of what can ever be communicated (in the strong sense of "rationally justified").




So in summary, I wonder if anything at all is communicable, not just qualia, when there's no common processing system already shared between the sender and receiver of the information.

We need to share a common axiomatics (implicit in our brain, or explicit, by agreeing on some theory). If you agree that for all x and y we have Kxy = x, I will be able to communicate that KAB = A. If you agree with Robinson's axioms for arithmetic, we will agree on all sigma_1 sentences, which include the universal dovetailing… If we agree on Mechanism, the whole of physics becomes communicable, despite it being a first-person (plural) notion. Then, in case we do share some physical reality, we can communicate units in the ostensive way (like defining a meter to be the length of some metallic bar in a French museum, or defining it by some natural phenomenon described in some theory on which we already agree).
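Bruno's K example can be made concrete (a minimal sketch; the lambda encoding is mine, not from the thread): once the rules Kxy = x and Sxyz = xz(yz) are shared, any derived fact is mechanically checkable by the receiver.

```python
# The shared axioms, encoded as curried functions:
K = lambda x: lambda y: x                      # K x y = x
S = lambda x: lambda y: lambda z: x(z)(y(z))   # S x y z = x z (y z)

# Communicable consequence 1: K A B = A.
assert K("A")("B") == "A"

# Communicable consequence 2: S K K behaves as the identity combinator,
# since S K K x = K x (K x) = x.
I = S(K)(K)
assert I(42) == 42
```

The consequences travel as strings, but they are only verifiable because sender and receiver run the same reduction rules — the shared processing the thread keeps returning to.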

In the machine's metaphysics/theology/psychology we have the 5 modes, which separate into 8 modes; what is on the right is what is not communicable (in the strong sense above):

1) p
2) []p                []p
3) []p & p
4) []p & <>t          []p & <>t
5) []p & <>t & p      []p & <>t & p

The qualia (and the quanta, actually, which are special cases of qualia, with mechanism) appear on the right (so they belong to G* \ G) at lines "4)" and "5)", although a case can be made that they also appear at line "3)".

Basically, everything provable in G is communicable, and everything in G* minus G is not. (Technically I should use G1 and G1*, that is, G + (p->[]p), but I don't want to dig into technicalities here.)

We can communicate what is rational, but incompleteness imposes a surrational corona between the rational and the irrational (falsity).

If we assume mechanism, it is provable that already the intended semantics of arithmetical theories is not communicable, even if we have the intimate feeling of having no problem conceiving the standard model of arithmetic. Yet, we can communicate 0, s0, ss0, …, and we can communicate codes of universal machines, making the whole sigma_1 truth communicable, although not as such, without accepting larger non-communicable intuitions, like Mechanism itself (which is not rationally justifiable but still extrapolable from personal life and public theories, like Darwin, biology, …).

I do think that there is no problem below sigma_1. Above sigma_1, there is already matter of debate…, and things get different if we accept or reject the Mechanist Hypothesis.

Bruno

Bruno Marchal

Apr 9, 2021, 7:12:35 AM
to everyth...@googlegroups.com
On 8 Apr 2021, at 18:10, Telmo Menezes <te...@telmomenezes.net> wrote:

Hi Jason,

I believe that you are alluding to what is known in Cognitive Science as the "Symbol Grounding Problem":


As the wiki article itself says, there are a lot of issues here. The main one being the implicit materialism, perhaps.



My intuition goes in the same direction as yours, that of "procedural semantics”.

I think that a universal machine is automatically a procedural semantics. Now, in AI, the term can have a slightly different meaning, and in logic, we know that a "rich" theory can handle a part of its own semantics, and even incorporate it into itself (which leads to a different machine, having again a new enlarged semantics). Those partial tractable semantics make sense.




Furthermore, I am inclined to believe that language is an emergent feature of computational processes with self-replication. From signaling between unicellular organisms all the way up to human language.

Luc Steels has some really interesting work exploring this sort of idea, with his evolutionary language games:

I have been working a lot with language these days. I and my co-author (Camille Roth) developed a formalism called Semantic Hypergraphs, which is an attempt to represent natural language in structures that are akin to typed lambda-calculus:

Are you using universal types? Are you aware of the semantics of natural language based on an extended (typed) lambda calculus, by Montague?

A long time ago, I used Sowa's semantic networks for some work in AI. It is interesting, but later I used Montague semantics instead, based on Tarski semantics and on Church's lambda calculus (with universal types). Note that lambda calculus is about the same thing as the theory of combinators (K = [x][y]x, S = [x][y][z]xz(yz)).
I do consider the work of Tarski to be the most interesting work on semantics. It led to a whole branch of mathematical logic, "model theory", and Montague convinced me that this is the best approach, even for natural-language semantics, although there are many difficulties, and there is a split between concrete applicable approaches and theoretical foundational concerns. By Gödel, we "know" (assuming mechanism) that this problem is with us … forever.

Slides summary:

Here is a book: 


Here's the Python library that implements these ideas:

So far we use modern machine learning to parse natural language into this representation, and then take advantage of the regularity of the structures to automatically identify stuff in text corpora for the purpose of computational social science research.

Something I dream of, and intend to explore at some point, is to attempt to go beyond the parser and actually "execute the code", and thus try to close the loop with the idea of procedural semantics.

That seems interesting.

Bruno





Bruno Marchal

Apr 9, 2021, 7:51:05 AM
to everyth...@googlegroups.com
On 8 Apr 2021, at 21:38, Jason Resch <jason...@gmail.com> wrote:

Hi Telmo,

Thank you for these links, they are very helpful in articulating the problem. I think you are right about there being some connection between communication of qualia and the symbol grounding problem.

I used to think there were two kinds of knowledge:
  1. Third-person sharable knowledge: information that can be shared and communicated through books, like the population of Paris, or the height of Mount Everest
With mechanism, that is still first-person plural. Third-person knowledge is confined to elementary arithmetic.


  2. First-person knowledge: information that must be felt or experienced first hand, emotions, feelings, the pain of a bee sting, the smell of a rose
… believing in some god, or primitive Reality, ...



But now I am wondering if the idea of third-person sharable knowledge is an illusion. The string encoding the height of Mount Everest is meaningless if you have no framework for understanding physical spaces, units of length, spatial extents, and the symbology of numbers.

Which is a good insight, as the physical reality is a sort of partially sharable qualia, although not in any provable way. 
But there is no real problem with the simple combinatorial or partially computable (assuming Church-Thesis) part of the arithmetical reality.
We can "communicate" that 24 is an even number, from simple finite hypotheses we can agree on, like x + 0 = x, etc.



All of that information has to be unpacked, and eventually processed into some thought that relates to a basis of conscious experience and understanding of heights and sizes. Even size is a meaningless term when attempting to compare relative sizes between two universes, so in that sense it must be tied somehow back to the subject.

… which explains why, except for the minimum amount of arithmetic needed to define what a digital machine is, everything is eventually defined through the "8 eyes" of the self-introspecting machine. Yet, eventually physics can be (re)defined as the set of laws of prediction that all machines agree on when introspecting themselves deeply enough.

The physical universe is not "out there". It is only a common sharable illusion about prediction, shared by all universal numbers. The rest is geography/history, with contingent aspects related to long computations, which we can locally share.



There also seem to be counter-examples to a clear divide between first- and third-person knowledge. For example, is the redness of red really incommunicable between two synesthetes who both see the number 5 as red?

How could they know they see the same red, or that they have the same experience, without defining red ostensively, as Brent mentions regularly (and correctly, IMO)?


If everyone in the world had such synesthesia, would we still think book knowledge could not communicate the redness of red?

I don’t see why synesthesia could help here. It seems you would need some telepathy added here, which is probably not what you are thinking?



In this case, what makes redness communicable is the shared processing between the synesthetes' brains: they process the symbol in the same way.

Yes, but not in a provable way. We might discover later that our substitution level is much lower than we thought, and that the qualia "red" requires emulating the glial cells. The neurons would keep people agreeing on many aspects of red, and we might agree on many overlapping experiences, but eventually realise, when getting a better artificial brain, that we were not really seeing red in the same way, after all.

Honestly, I don’t see why synesthesia could help, without postulating some substitution level. 






More comments below:



On Thu, Apr 8, 2021 at 11:11 AM Telmo Menezes <te...@telmomenezes.net> wrote:
Hi Jason,

I believe that you are alluding to what is known in Cognitive Science as the "Symbol Grounding Problem":

My intuition goes in the same direction as yours, that of "procedural semantics". Furthermore, I am inclined to believe that language is an emergent feature of computational processes with self-replication. From signaling between unicellular organisms all the way up to human language.

That is interesting. I do think there is something self-defining about the meaning of processes. Something that multiplies two inputs can always be said to be multiplying. The meaning of the operation is then grounded in the behavior of the process itself, which makes it unambiguous: the multiplication process could not be confused with addition or subtraction. This is in contrast to an N-bit string on a piece of paper, which could be interpreted in at least 2^N ways (or perhaps even an infinite number of ways, if you consider which function might be applied to that bit string).

All right.


 

Luc Steels has some really interesting work exploring this sort of idea, with his evolutionary language games:

Evolving systems that can communicate amongst themselves seems to be a fruitful way to explore these issues. Has anyone attempted to take simple examples, like computer simulated evolved versions of robots playing soccer, and add in a layer that lets each player emit and receive arbitrary signals from other players? I would expect there would be strong evolutionary pressures for learning to communicate things like "I see the ball", "I'm about to take a shot", etc. to other teammates.

All the world agree that Alpha-Go has won the Go Championship Game.
That requires deep learning, that is many neuron layers. We can do the same in natural language, but it is much more complicated to get a person grounded in our reality. We need the equivalent of an hyppocampus, a good handling of long term and short term memories. Here, I do think that some small programs can get this, except that they might take millions of years to evolve. Then such machine will acts “intelligently”, fight for their right, and social security ...



 


I have been working a lot with language these days. I and my co-author (Camille Roth) developed a formalism called Semantic Hypergraphs, which is an attempt to represent natural language in structures that are akin to typed lambda-calculus:

One curiosity is that all human languages appear to be "Turing complete" in the sense that we can use natural language to describe and define any finite process. I don't know how significant this is though, as in general it is pretty easy to achieve Turing complete languages.

Typed lambda calculi (combinators) are usually NOT Turing-complete, unless you add a universal type (but then the semantics is again inaccessible to the entity itself).

It is related to the eternal hesitation between security/totality and liberty/universality/partiality, lived by *all* Turing-complete entities.



I think the central problem with "Mary the super-scientist" is our brains, in general, don't have a way to take received code/instructions and process them accordingly. I think if our brains could do this, if we could take book knowledge and use it to build neural structures for processing information in specified ways, then Mary could learn what it is like to see red without red light ever hitting her retina.

Only by assuming some substitution level, which requires always some act of faith...



But such flexibility in the brain would make us vulnerable to "mind viruses" that spread by words or symbols. Modern computers clearly demarcate executable and non-executable memory to limit similar dangers.

That’s what “types” are for. In untyped lambda calculus, like in LISP, or the SK combinators, the option is complete freedom, with the risk of crashing the machine...
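The freedom (and the risk) can be seen in Python, which here behaves as an untyped setting: the self-application a simple type system would reject is exactly what buys general recursion. (This Z-combinator sketch is my illustration, not from the thread.)

```python
# The Z combinator: built from self-application x(x), which simple
# type systems forbid, yet which yields general recursion in a strict
# untyped language.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Factorial defined without any named recursion:
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
assert fact(5) == 120

# The price of that freedom: the same move admits Omega, a term whose
# evaluation never returns (uncomment to "crash the machine"):
# omega = (lambda x: x(x))(lambda x: x(x))
```

A typed language would reject both `Z` and `omega` at compile time — totality and security, at the cost of universality.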



This might also explain the apparent third-person / first-person distinction.

Hmm… I do think this appears with the fact that G* proves []p <-> ([]p & p), but the machine can never see this; indeed, the machine can define "[]p", but cannot define "[]p & p". It is like the difference between seeing torture and being tortured. It is very different.
The universal machine is aware of "[]p & p", but cannot associate it with any machine, nor with anything describable in a third-person, rationally justifiable way. She can say "yes" to the doctor, but that requires an act of faith, in mechanism, and in some doctor...



We can communicate through language any Turing machine, and understand the functioning of that machine and processing in a third person way, but without re-wiring ourselves, we have no way to perceive it in a direct first-person way.

By rewiring yourself, you change yourself, including the possible interpretation of previous experiences, but without "you" being able to see the difference, because "you" has become another.



 


Here's the Python library that implements these ideas:


Very cool!
 
So far we use modern machine learning to parse natural language into this representation, and then take advantage of the regularity of the structures to automatically identify stuff in text corpora for the purpose of computational social science research.

Something I dream of, and intend to explore at some point, is to attempt to go beyond the parser and actually "execute the code", and thus try to close the loop with the idea of procedural semantics.


I have often wondered: if an alien race discovered an English dictionary (containing no pictures), would there be enough internal consistency and information present in that dictionary to work out all the meaning?

Certainly not *all* the meaning. We cannot do that even among humans.



I have the feeling that because there is enough redundancy in it, together with a shared heritage of evolving in the same physical universe, there is some hope that they could, but it might involve a massive computational process to bootstrap. Once they make some headway towards a correct interpretation of the words, however, I think it will end up confirming itself as a correct understanding, much like the end stages of solving a Sudoku puzzle become easier and self-confirming of the correctness of the solution.
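A toy model of that bootstrapping (a minimal Python sketch of my own, with hypothetical dictionary data): treat the dictionary as words defined in terms of other words, and propagate "groundedness" outward from a small seed set, Sudoku-style:

```python
def groundable(dictionary, seeds):
    """Propagate grounding to a fixed point: a word becomes grounded
    once every word in its definition is grounded -- a crude model of
    bootstrapping meaning from a dictionary."""
    grounded = set(seeds)
    changed = True
    while changed:
        changed = False
        for word, definition in dictionary.items():
            if word not in grounded and definition <= grounded:
                grounded.add(word)
                changed = True
    return grounded

# Hypothetical mini-dictionary: each entry is defined via other words.
toy = {
    "red":    {"color", "blood"},
    "color":  {"light"},
    "blood":  {"body", "liquid"},
    "liquid": {"water"},
    "qualia": {"experience"},      # no path back to the seeds
}

shared = groundable(toy, seeds={"light", "body", "water"})
```

Here "red" ends up grounded through chains of definitions, while "qualia" never does: redundancy helps only where some definitional path leads back to shared experience.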

They will be able to progress, but will never get all the meaning, even for simple arithmetical notions. That is a consequence of incompleteness. Yet, better and better approximations can be accessible.

Bruno




Is this the problem you are attempting to solve with the semantic hypergraphs/graphbrain, or one that such graphs could one day solve?

Jason
 


On Wed, Mar 31, 2021, at 17:58, Jason Resch wrote:
I was thinking about what aspects of conscious experience are communicable and which are not, and I realized all communication relies on some pre-existing shared framework.

It's not only things like "red" that are meaningless to someone who's never seen it; things like spatial extent and dimensionality would likewise be incommunicable to someone who had no experience with moving in, or through, space.

Even communicating quantities requires a pre-existing and common system of units and measures.

So all communication (inputs/outputs) consists of meaningless bit strings. It is only when a bit string is combined with some processing that meaning can be shared. The reason we can't communicate "red" to someone who's never seen it is that we would need to transmit a description of the processing done by our brains in order to share what red means to us.

So in summary, I wonder if anything is communicable at all, not just qualia, when there is not already a common processing system between the sender and the receiver of the information.

Jason


--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to everything-li...@googlegroups.com.




Bruno Marchal

unread,
Apr 9, 2021, 7:57:03 AM4/9/21
to everyth...@googlegroups.com
On 9 Apr 2021, at 02:40, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:



On 4/8/2021 12:38 PM, Jason Resch wrote:
Hi Telmo,

Thank you for these links, they are very helpful in articulating the problem. I think you are right about there being some connection between communication of qualia and the symbol grounding problem.

I used to think there were two kinds of knowledge:
  1. Third-person sharable knowledge: information that can be shared and communicated through books, like the population of Paris, or the height of Mount Everest
  2. First-person knowledge: information that must be felt or experienced first hand, emotions, feelings, the pain of a bee sting, the smell of a rose
But now I am wondering if the idea of third-person sharable knowledge is an illusion. The string encoding the height of Mount Everest is meaningless if you have no framework for understanding physical spaces, units of length, spatial extents, and the symbology of numbers. All of that information has to be unpacked, and eventually processed into some thought that relates to a basis of conscious experience and understanding of heights and sizes. Even size is a meaningless term when attempting to compare relative sizes between two universes, so in that sense it must be tied somehow back to the subject.

There also seem to be counter-examples to a clear divide between first- and third-person knowledge. For example, is the redness of red really incommunicable between two synesthetes who both see the number 5 as red? If everyone in the world had such synesthesia, would we still think book knowledge could not communicate the redness of red? In this case, what makes redness communicable is the shared processing between the brains of the synesthetes: their brains process the symbol in the same way.

I think you exaggerate the problem.  Consider how bats "see" by sonar.  I think this is quite communicable to humans by analogies.

That will communicate the third person aspect, but not the qualia itself. 


  And submarines have sonar which produces images on screens.  Is redness communicable?  My father, who was red/green color blind, had to guess at the color of traffic lights or just watch other cars when he first started to drive, around 1928. But he understood the concept of color because he could tell blue from red/green.  And later, traffic engineers adjusted the spectrum of traffic lights so that he could tell the difference (they also started to put the red at the top).

Yes, in practice there is not much problem, which explains the lack of interest in the mind-body problem, but this does not help to solve the conceptual issue.
It is a bit like saying that because GR and QM work very well in practice, we are wasting our time trying to get a coherent theory of all forces. It depends on whether we are interested in foundational issues and understanding, or in practical applications, I guess.

Bruno




Brent


More comments below:



On Thu, Apr 8, 2021 at 11:11 AM Telmo Menezes <te...@telmomenezes.net> wrote:
Hi Jason,

I believe that you are alluding to what is known in Cognitive Science as the "Symbol Grounding Problem":

My intuition goes in the same direction as yours, that of "procedural semantics". Furthermore, I am inclined to believe that language is an emergent feature of computational processes with self-replication. From signaling between unicellular organisms all the way up to human language.

That is interesting. I do think there is something self-defining about the meaning of processes. Something that multiplies two inputs can always be said to be multiplying. The meaning of the operation is then grounded in the functioning of that process, which makes it unambiguous. The multiplication process could not be confused with addition or subtraction. This is in contrast to an N-bit string on a piece of paper, which could be interpreted in at least 2^N ways (or perhaps even an infinite number of ways, if you consider which function is applied to that bit string).
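To make the contrast concrete (a minimal Python sketch, not from the thread): the same 8-bit string yields different "meanings" under different processing:

```python
# One 8-bit string, many readings -- the bits fix no meaning by
# themselves; only the interpreting process does.
bits = "01000001"

n = int(bits, 2)
as_unsigned = n                            # read as an unsigned integer
as_signed = n - 256 if n >= 128 else n     # two's-complement reading
as_ascii = chr(n)                          # read as an ASCII code
as_lsb_first = int(bits[::-1], 2)          # read with reversed bit order
```

Each "meaning" lives in the function applied to the string, not in the string itself.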
 

Luc Steels has some really interesting work exploring this sort of idea, with his evolutionary language games:

Evolving systems that can communicate amongst themselves seems to be a fruitful way to explore these issues. Has anyone attempted to take simple examples, like computer simulated evolved versions of robots playing soccer, and add in a layer that lets each player emit and receive arbitrary signals from other players? I would expect there would be strong evolutionary pressures for learning to communicate things like "I see the ball", "I'm about to take a shot", etc. to other teammates.
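Steels's "naming game" is easy to sketch (a minimal Python toy of my own, not Steels's code): agents with no shared word for an object converge on one purely through repeated pairwise interactions:

```python
import random

def naming_game(n_agents=10, rounds=5000, seed=1):
    """Minimal Steels-style naming game for a single object.

    On failure the hearer adopts the speaker's word; on success both
    agents collapse their vocabularies to the winning word.
    """
    rng = random.Random(seed)
    vocab = [set() for _ in range(n_agents)]
    next_word = 0
    for _ in range(rounds):
        speaker, hearer = rng.sample(range(n_agents), 2)
        if not vocab[speaker]:            # invent a brand-new word
            vocab[speaker].add(next_word)
            next_word += 1
        word = rng.choice(sorted(vocab[speaker]))
        if word in vocab[hearer]:         # success: both align on it
            vocab[speaker] = {word}
            vocab[hearer] = {word}
        else:                             # failure: hearer learns it
            vocab[hearer].add(word)
    return vocab
```

With the defaults the population typically converges to a single shared word, communication emerging with no designer ever assigning meanings.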
 


I have been working a lot with language these days. I and my co-author (Camille Roth) developed a formalism called Semantic Hypergraphs, which is an attempt to represent natural language in structures that are akin to typed lambda-calculus:

One curiosity is that all human languages appear to be "Turing complete" in the sense that we can use natural language to describe and define any finite process. I don't know how significant this is though, as in general it is pretty easy to achieve Turing complete languages.

I think the central problem with "Mary the super-scientist" is that our brains, in general, don't have a way to take received code/instructions and process them accordingly. I think if our brains could do this, if we could take book knowledge and use it to build neural structures for processing information in specified ways, then Mary could learn what it is like to see red without red light ever hitting her retina. But such flexibility in the brain would make us vulnerable to "mind viruses" that spread by words or symbols. Modern computers clearly demarcate executable and non-executable memory to limit similar dangers.

Brent Meeker

unread,
Apr 9, 2021, 5:23:32 PM4/9/21
to everyth...@googlegroups.com


On 4/9/2021 4:57 AM, Bruno Marchal wrote:

On 9 Apr 2021, at 02:40, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:



On 4/8/2021 12:38 PM, Jason Resch wrote:
Hi Telmo,

Thank you for these links, they are very helpful in articulating the problem. I think you are right about there being some connection between communication of qualia and the symbol grounding problem.

I used to think there were two kinds of knowledge:
  1. Third-person sharable knowledge: information that can be shared and communicated through books, like the population of Paris, or the height of Mount Everest
  2. First-person knowledge: information that must be felt or experienced first hand, emotions, feelings, the pain of a bee sting, the smell of a rose
But now I am wondering if the idea of third-person sharable knowledge is an illusion. The string encoding the height of Mount Everest is meaningless if you have no framework for understanding physical spaces, units of length, spatial extents, and the symbology of numbers. All of that information has to be unpacked, and eventually processed into some thought that relates to a basis of conscious experience and understanding of heights and sizes. Even size is a meaningless term when attempting to compare relative sizes between two universes, so in that sense it must be tied somehow back to the subject.

There also seem to be counter-examples to a clear divide between first- and third-person knowledge. For example, is the redness of red really incommunicable between two synesthetes who both see the number 5 as red? If everyone in the world had such synesthesia, would we still think book knowledge could not communicate the redness of red? In this case, what makes redness communicable is the shared processing between the brains of the synesthetes: their brains process the symbol in the same way.

I think you exaggerate the problem.  Consider how bats "see" by sonar.  I think this is quite communicable to humans by analogies.

That will communicate the third person aspect, but not the qualia itself.

When a blind person has a tactile array placed on their back and attached to a video camera, they learn to see.  A sighted person can have the same tactile array and video camera and also learn to see through it.  On what grounds would you deny that they experience the same qualia via the video camera?  And then you can ask the sighted person how or whether the qualia of the two kinds of sight differ... or you could do the experiment yourself.




  And submarines have sonar which produces images on screens.  Is redness communicable?  My father, who was red/green color blind, had to guess at the color of traffic lights or just watch other cars when he first started to drive, around 1928. But he understood the concept of color because he could tell blue from red/green.  And later, traffic engineers adjusted the spectrum of traffic lights so that he could tell the difference (they also started to put the red at the top).

Yes, in practice there is not much problem, which explains the lack of interest in the mind-body problem, but this does not help to solve the conceptual issue.
It is a bit like saying that because GR and QM work very well in practice, we are wasting our time trying to get a coherent theory of all forces. It depends on whether we are interested in foundational issues and understanding, or in practical applications, I guess.

If all the practical problems can be solved, the "foundational issues" are reduced to armchair philosophizing.  There's a reason theology fell into disrepute.  I see work on foundational issues as theory that will help guide the practical solutions.

Jason Resch

unread,
Apr 9, 2021, 6:31:40 PM4/9/21
to Everything List


On Fri, Apr 9, 2021, 4:23 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:


On 4/9/2021 4:57 AM, Bruno Marchal wrote:

On 9 Apr 2021, at 02:40, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:



On 4/8/2021 12:38 PM, Jason Resch wrote:
Hi Telmo,

Thank you for these links, they are very helpful in articulating the problem. I think you are right about there being some connection between communication of qualia and the symbol grounding problem.

I used to think there were two kinds of knowledge:
  1. Third-person sharable knowledge: information that can be shared and communicated through books, like the population of Paris, or the height of Mount Everest
  2. First-person knowledge: information that must be felt or experienced first hand, emotions, feelings, the pain of a bee sting, the smell of a rose
But now I am wondering if the idea of third-person sharable knowledge is an illusion. The string encoding the height of Mount Everest is meaningless if you have no framework for understanding physical spaces, units of length, spatial extents, and the symbology of numbers. All of that information has to be unpacked, and eventually processed into some thought that relates to a basis of conscious experience and understanding of heights and sizes. Even size is a meaningless term when attempting to compare relative sizes between two universes, so in that sense it must be tied somehow back to the subject.

There also seem to be counter-examples to a clear divide between first- and third-person knowledge. For example, is the redness of red really incommunicable between two synesthetes who both see the number 5 as red? If everyone in the world had such synesthesia, would we still think book knowledge could not communicate the redness of red? In this case, what makes redness communicable is the shared processing between the brains of the synesthetes: their brains process the symbol in the same way.

I think you exaggerate the problem.  Consider how bats "see" by sonar.  I think this is quite communicable to humans by analogies.

They could in some sense even feel the surfaces with such sonar: is the surface smooth or rough, hard or soft, etc. Sound reflects differently from different types of surfaces. Would they feel these surface differences as colors, or would it feel more like tactile sensations of one's immediate surroundings?


That will communicate the third person aspect, but not the qualia itself.

When a blind person has a tactile array placed on their back and attached to a video camera, they learn to see.  A sighted person can have the same tactile array and video camera and also learn to see through it.  On what grounds would you deny that they experience the same qualia via the video camera?  And then you can ask the sighted person how or whether the qualia of the two kinds of sight differ... or you could do the experiment yourself.

I read about this experiment recently. One apparent difference was that the blind students fitted with this array were dismayed to learn that, when looking at erotic images with the device, they were not stimulated in the same ways as their sighted peers.

Perhaps the array was too low resolution, or perhaps the brain's tactile wiring isn't connected to the other parts of the brain in the ways that the visual processing centers are.

Some 30% of the cortex is dedicated to processing visual stimuli whereas only 8% is used for tactile stimuli. I would have to imagine then that the resulting qualia could not be the same, though with the right bandwidth, sufficient cortex, and similar interconnections it's less obvious that identical qualia could not be achieved.

I think there have been experiments where researchers wired the optic nerve into a monkey's auditory cortex, and after a while similar structures to the visual cortex appeared, and I think the monkey behaved as though it could see.

Jason




  And submarines have sonar which produces images on screens.  Is redness communicable?  My father, who was red/green color blind, had to guess at the color of traffic lights or just watch other cars when he first started to drive, around 1928. But he understood the concept of color because he could tell blue from red/green.  And later, traffic engineers adjusted the spectrum of traffic lights so that he could tell the difference (they also started to put the red at the top).

Yes, in practice there is not much problem, which explains the lack of interest in the mind-body problem, but this does not help to solve the conceptual issue.
It is a bit like saying that because GR and QM work very well in practice, we are wasting our time trying to get a coherent theory of all forces. It depends on whether we are interested in foundational issues and understanding, or in practical applications, I guess.

If all the practical problems can be solved, the "foundational issues" are reduced to armchair philosophizing.  There's a reason theology fell into disrepute.  I see work on foundational issues as theory that will help guide the practical solutions.


Brent Meeker

unread,
Apr 9, 2021, 7:42:14 PM4/9/21
to everyth...@googlegroups.com


On 4/9/2021 3:31 PM, Jason Resch wrote:


On Fri, Apr 9, 2021, 4:23 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:


On 4/9/2021 4:57 AM, Bruno Marchal wrote:

On 9 Apr 2021, at 02:40, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:



On 4/8/2021 12:38 PM, Jason Resch wrote:
Hi Telmo,

Thank you for these links, they are very helpful in articulating the problem. I think you are right about there being some connection between communication of qualia and the symbol grounding problem.

I used to think there were two kinds of knowledge:
  1. Third-person sharable knowledge: information that can be shared and communicated through books, like the population of Paris, or the height of Mount Everest
  2. First-person knowledge: information that must be felt or experienced first hand, emotions, feelings, the pain of a bee sting, the smell of a rose
But now I am wondering if the idea of third-person sharable knowledge is an illusion. The string encoding the height of Mount Everest is meaningless if you have no framework for understanding physical spaces, units of length, spatial extents, and the symbology of numbers. All of that information has to be unpacked, and eventually processed into some thought that relates to a basis of conscious experience and understanding of heights and sizes. Even size is a meaningless term when attempting to compare relative sizes between two universes, so in that sense it must be tied somehow back to the subject.

There also seem to be counter-examples to a clear divide between first- and third-person knowledge. For example, is the redness of red really incommunicable between two synesthetes who both see the number 5 as red? If everyone in the world had such synesthesia, would we still think book knowledge could not communicate the redness of red? In this case, what makes redness communicable is the shared processing between the brains of the synesthetes: their brains process the symbol in the same way.

I think you exaggerate the problem.  Consider how bats "see" by sonar.  I think this is quite communicable to humans by analogies.

They could in some sense even feel the surfaces with such sonar: is the surface smooth or rough, hard or soft, etc. Sound reflects differently from different types of surfaces. Would they feel these surface differences as colors, or would it feel more like tactile sensations of one's immediate surroundings?


That will communicate the third person aspect, but not the qualia itself.

When a blind person has a tactile array placed on their back and attached to a video camera, they learn to see.  A sighted person can have the same tactile array and video camera and also learn to see through it.  On what grounds would you deny that they experience the same qualia via the video camera?  And then you can ask the sighted person how or whether the qualia of the two kinds of sight differ... or you could do the experiment yourself.

I read about this experiment recently. One apparent difference was that the blind students fitted with this array were dismayed to learn that, when looking at erotic images with the device, they were not stimulated in the same ways as their sighted peers.

In the same ways as their sighted peers using the array?



Perhaps the array was too low resolution, or perhaps the brain's tactile wiring isn't connected to the other parts of the brain in the ways that the visual processing centers are.

Some 30% of the cortex is dedicated to processing visual stimuli whereas only 8% is used for tactile stimuli. I would have to imagine then that the resulting qualia could not be the same, though with the right bandwidth, sufficient cortex, and similar interconnections it's less obvious that identical qualia could not be achieved.

Of course.  But if they had been using the array from birth their brain would probably have developed differently.

Brent


I think there have been experiments where researchers wired the optic nerve into a monkey's auditory cortex, and after a while similar structures to the visual cortex appeared, and I think the monkey behaved as though it could see.

Jason




  And submarines have sonar which produces images on screens.  Is redness communicable?  My father, who was red/green color blind, had to guess at the color of traffic lights or just watch other cars when he first started to drive, around 1928. But he understood the concept of color because he could tell blue from red/green.  And later, traffic engineers adjusted the spectrum of traffic lights so that he could tell the difference (they also started to put the red at the top).

Yes, in practice there is not much problem, which explains the lack of interest in the mind-body problem, but this does not help to solve the conceptual issue.
It is a bit like saying that because GR and QM work very well in practice, we are wasting our time trying to get a coherent theory of all forces. It depends on whether we are interested in foundational issues and understanding, or in practical applications, I guess.

If all the practical problems can be solved, the "foundational issues" are reduced to armchair philosophizing.  There's a reason theology fell into disrepute.  I see work on foundational issues as theory that will help guide the practical solutions.