Fwd: what chatGPT is and is not


Brent Meeker

May 24, 2023, 12:12:00 PM
to Everything List



On 5/23/2023 10:37 PM, Jason Resch wrote:


On Wed, May 24, 2023, 1:15 AM Stathis Papaioannou <stat...@gmail.com> wrote:


On Wed, 24 May 2023 at 04:03, Jason Resch <jason...@gmail.com> wrote:


On Tue, May 23, 2023 at 7:15 AM Stathis Papaioannou <stat...@gmail.com> wrote:


On Tue, 23 May 2023 at 21:09, Jason Resch <jason...@gmail.com> wrote:
As I see this thread, Terren and Stathis are both talking past each other. Please, either of you, correct me if I am wrong, but in an effort to clarify and perhaps resolve this situation:

I believe Stathis is saying the functional substitution having the same fine-grained causal organization *would* have the same phenomenology, the same experience, and the same qualia as the brain with the same fine-grained causal organization.

Therefore, there is no disagreement between your positions with regard to symbol grounding, mappings, etc.

When you both discuss the problem of symbols, bits, etc., I believe this is partly why you are talking past each other: there are many levels involved in brains (and computational systems), and I believe you were discussing completely different levels of the hierarchical organization.

There are high-level parts of minds, such as ideas, thoughts, feelings, qualia, etc., and there are low-level parts, be they neurons, neurotransmitters, atoms, quantum fields, and laws of physics, as in human brains, or circuits, logic gates, bits, and instructions, as in computers.

I think when Terren mentions a "symbol for the smell of grandmother's kitchen" (GMK), the trouble is we are crossing a myriad of levels. The quale or idea or memory of the smell of GMK is a very high-level feature of a mind. When Terren asks for or discusses a symbol for it, a complete answer/description for it can only be supplied in terms of a vast amount of information concerning low-level structures, be they patterns of neuron firings or patterns of bits being processed. When we consider things down at this low level, however, we lose all context for what the meaning, idea, and quale are, or where or how they come in. We cannot see or find the idea of GMK in any single neuron, any more than we can see or find it in any single bit.

Of course it should then seem deeply mysterious, if not impossible, how we get "it" (GMK or otherwise) from "bit", but to me this is no greater a leap than how we get "it" from a bunch of cells squirting ions back and forth. Trying to understand a smartphone by looking at the flows of electrons is a similar kind of problem: it would seem just as difficult or impossible to explain and understand the high-level features and complexity in terms of the low-level simplicity.
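To make the level-crossing concrete, here is a small illustrative sketch in Python (a toy example with made-up values, not a model of a brain): the same item can be described at the level of meaning or at the level of raw bits, and the meaning is not locatable in any single bit.

    # Toy illustration of levels of description (hypothetical example).
    memory = "the smell of grandmother's kitchen"   # high-level description

    # Low-level description: the very same thing, as a stream of bits.
    bits = ''.join(format(byte, '08b') for byte in memory.encode('utf-8'))

    print(bits[:32])   # '01110100011010000110010100100000' -- the bits for "the "
    # No single bit (or byte) contains the memory; the meaning exists only
    # in the organization of the whole, together with the conventions
    # (encoding, language, context) used to interpret it.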

This is why it's crucial to bear in mind and explicitly discuss the level one is operating on when one discusses symbols, substrates, or qualia. In summary, I think a chief reason you have been talking past each other is that you are each operating on different assumed levels.

Please correct me if you believe I am mistaken, and know that I only offer my perspective in the hope it might help the conversation.

I think you’ve captured my position. But in addition I think replicating the fine-grained causal organisation is not necessary in order to replicate higher level phenomena such as GMK. By extension of Chalmers’ substitution experiment,

Note that Chalmers's argument is based on assuming the functional substitution occurs at a certain level of fine-grainedness. If you lose this step and look only at the top-most input-output of the mind as a black box, then you can no longer distinguish a rock from a dreaming person, nor a calculator computing 2+3 from a human computing 2+3, and one also runs into the Blockhead "lookup table" argument against functionalism.
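For concreteness, a minimal sketch in Python of the contrast the Blockhead argument turns on (purely illustrative; the functions and values are made up): a process that actually computes versus a lookup table that merely reproduces the same input-output behaviour.

    # Toy contrast between computing an answer and looking one up.

    def compute_sum(a, b):
        # Actually carries out the computation each time it is asked.
        return a + b

    # A Blockhead-style lookup table: every answer is precomputed, so
    # "answering" involves nothing beyond retrieval.
    lookup_table = {(a, b): a + b for a in range(10) for b in range(10)}

    def lookup_sum(a, b):
        return lookup_table[(a, b)]

    # From the outside (top-most input-output behaviour) the two are
    # indistinguishable on the inputs the table covers...
    assert compute_sum(2, 3) == lookup_sum(2, 3) == 5
    # ...yet their internal causal organization is entirely different,
    # which is what a purely black-box view cannot distinguish.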

Yes, those are perhaps problems with functionalism. But a major point in Chalmers' argument is that if qualia were substrate-specific (and hence functionalism false), it would be possible to make a partial zombie, or an entity whose consciousness and behaviour diverged from the point at which the substitution was made. And this argument works not just by replacing the neurons with silicon chips, but by replacing any part of the human with anything that reproduces the interactions with the remaining parts.


How deeply do you have to go when you consider or define those "other parts" though? That seems to be a critical but unstated assumption, and something that depends on how finely grained you consider the relevant/important parts of a brain to be.

For reference, this is what Chalmers says:


"In this paper I defend this view. Specifically, I defend a principle of organizational invariance, holding that experience is invariant across systems with the same fine-grained functional organization. More precisely, the principle states that given any system that has conscious experiences, then any system that has the same functional organization at a fine enough grain will have qualitatively identical conscious experiences. A full specification of a system's fine-grained functional organization will fully determine any conscious experiences that arise."

But this is literally false, unless one also specifies that the system exists within, or includes, what one refers to as "its environment". Experience begins with perception, and perception implies things to perceive.

Brent

By substituting a coarse-grained functional organization for a fine-grained one, you change the functional definition and can no longer guarantee identical experiences, nor identical behaviors in all possible situations. They're no longer "functional isomorphs", as Chalmers's argument requires.

By substituting a recording of a computation for a computation, you replace a conscious mind with a tape recording of the prior behavior of a conscious mind. This is what happens in the Blockhead thought experiment. The result is something that passes a Turing test, but which is itself not conscious (though creating such a recording requires prior invocation of a conscious mind or extraordinary luck).
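In the same illustrative spirit (a hypothetical sketch, not anyone's actual proposal), substituting a recording for a computation looks like this: the outputs are produced once by the original process, and thereafter only replayed.

    # Record the behaviour of a process, then replay the recording.

    def respond(prompt):
        # Stand-in for the original process (the "conscious mind" of the
        # thought experiment): it genuinely produces its answer each time.
        return "echo: " + prompt.upper()

    # Making the recording requires invoking the original process.
    prompts = ["hello", "how are you?", "goodbye"]
    tape = {p: respond(p) for p in prompts}

    def replay(prompt):
        # Playback of prior behaviour: nothing is computed here.
        return tape[prompt]

    assert replay("hello") == respond("hello")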

Jason 






 
Accordingly, I think intermediate steps and the fine-grained organization are important (to some minimum level of fidelity), but as Bruno would say, we can never be certain what this necessary substitution level is. Is it neocortical columns, is it the connectome, is it the proteome, is it the molecules and atoms, is it QFT? Chalmers argues that at least at the level where noise introduces deviations into a brain simulation, simulating lower levels should not be necessary, as human consciousness appears robust to such noise at low levels (photon strikes, Brownian motion, quantum uncertainties, etc.).
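One toy way to picture that robustness claim (again just a sketch with invented numbers, not a brain model): when the high-level outcome is a thresholded, discrete event, perturbations far below the threshold scale leave it unchanged.

    import random

    # Toy "neuron": fires if and only if its summed input exceeds a threshold.
    def fires(inputs, threshold=1.0):
        return sum(inputs) > threshold

    random.seed(0)
    inputs = [0.4, 0.5, 0.3]   # noiseless low-level state (sums to 1.2)

    # Tiny low-level perturbations, standing in for photon strikes,
    # Brownian motion, and the like.
    noisy = [x + random.gauss(0, 1e-6) for x in inputs]

    # The discrete, high-level outcome is unchanged by noise at this scale.
    assert fires(inputs) == fires(noisy)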

--
Stathis Papaioannou

Jason Resch

May 24, 2023, 4:35:15 PM
to everyth...@googlegroups.com
On Wed, May 24, 2023 at 11:12 AM Brent Meeker <meeke...@gmail.com> wrote:



On 5/23/2023 10:37 PM, Jason Resch wrote:


For reference, this is what Chalmers says:


"In this paper I defend this view. Specifically, I defend a principle of organizational invariance, holding that experience is invariant across systems with the same fine-grained functional organization. More precisely, the principle states that given any system that has conscious experiences, then any system that has the same functional organization at a fine enough grain will have qualitatively identical conscious experiences. A full specification of a system's fine-grained functional organization will fully determine any conscious experiences that arise."

But this is literally false, unless one also specifies that the system exists within, or includes, what one refers to as "its environment". Experience begins with perception, and perception implies things to perceive.

True, I would bet Chalmers was implicitly assuming the same sensory input was provided from the environment.

Jason 