What ChatGPT is and is not


Terren Suydam

May 22, 2023, 5:56:36 PM
to Everything List
Many, myself included, are captivated by the amazing capabilities of ChatGPT and other LLMs. They are, truly, incredible. Depending on your definition of the Turing Test, it passes with flying colors in many, many contexts. It would take a much stricter Turing Test than we might have imagined necessary this time last year before we could confidently say that we're not talking to a human. One way to improve ChatGPT's performance on an actual Turing Test would be to slow it down, because it is too fast to be human.

All that said, is ChatGPT actually intelligent?  There's no question that it behaves in a way that we would all agree is intelligent. The answers it gives, and the speed at which it gives them, reflect an intelligence that often far exceeds that of most if not all humans.

I know some here say intelligence is as intelligence does. Full stop, conversation over. ChatGPT is intelligent, because it acts intelligently.

But this is an oversimplified view!  The reason it's oversimplified is that it ignores where the intelligence comes from: the source of the intelligence is the texts it's trained on. If ChatGPT were trained on gibberish, gibberish is what you'd get out of it. The situation is strikingly similar to the Chinese Room thought experiment proposed by John Searle: ChatGPT manipulates symbols without having any understanding of what those symbols are. As a result, it does not and cannot know whether what it's saying is correct. This is a well-known caveat of using LLMs.

ChatGPT, therefore, is more like a search engine that can extract the intelligence that is already structured within the data it's trained on. Think of it as a semantic Google. It's a huge achievement in the sense that, by training on the data the way it does, it encodes the contexts that words appear in with sufficiently high resolution that it's usually indistinguishable from humans who actually understand context in a way that's grounded in experience. LLMs don't experience anything. They are feed-forward machines. The algorithms that implement ChatGPT are useless without enormous amounts of text that expresses actual intelligence.
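To make the "semantic Google" idea concrete, here is a toy sketch (the words, vectors and numbers are invented for illustration, and say nothing about how any real LLM is implemented): words are represented by the contexts they occur in, and retrieval works by similarity of those context vectors rather than by any understanding of what the words refer to.

```python
import math

def cosine(a, b):
    """Cosine similarity between two context vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Hand-made co-occurrence counts against three context words: ("pet", "engine", "water").
# A real model would learn thousands of dimensions from billions of tokens.
context_vectors = {
    "dog":  [9, 0, 1],
    "cat":  [8, 0, 1],
    "car":  [0, 9, 0],
    "boat": [0, 5, 8],
}

def nearest(word):
    """Rank the other words by how similar their contexts are to this word's context."""
    query = context_vectors[word]
    others = [w for w in context_vectors if w != word]
    return sorted(others, key=lambda w: cosine(query, context_vectors[w]), reverse=True)

print(nearest("dog"))  # ['cat', 'boat', 'car'] -- similar contexts, no experience of dogs
```

The retrieval is entirely a matter of statistical proximity in the space of contexts; nothing in it refers to anything outside the text.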

Cal Newport does a good job of explaining this here.

Terren

Stathis Papaioannou

May 22, 2023, 7:34:31 PM
to everyth...@googlegroups.com
It could be argued that the human brain is just a complex machine that has been trained on vast amounts of data to produce a certain output given a certain input, and doesn’t really understand anything. This is a response to the Chinese room argument. How would I know if I really understand something or just think I understand something?
--
Stathis Papaioannou

Terren Suydam

May 22, 2023, 8:03:16 PM
to everyth...@googlegroups.com
It is true that my brain has been trained on a large amount of data - data that contains intelligence outside of my own. But when I introspect, I notice that my understanding of things is ultimately rooted/grounded in my phenomenal experience. Ultimately, everything we know, we know either by our experience or by analogy to experiences we've had. This is in opposition to how LLMs train on data, which is strictly about how words/symbols relate to one another.

Terren


Stathis Papaioannou

May 22, 2023, 8:42:12 PM
to everyth...@googlegroups.com

The functionalist position is that phenomenal experience supervenes on behaviour, such that if the behaviour is replicated (same output for same input) the phenomenal experience will also be replicated. This is what philosophers like Searle (and many laypeople) can’t stomach.
--
Stathis Papaioannou

Terren Suydam

May 22, 2023, 8:48:04 PM
to everyth...@googlegroups.com
I think the kind of phenomenal supervenience you're talking about is typically asserted for behavior at the level of the neuron, not at the level of the whole agent. Is that what you're saying? That ChatGPT must be having a phenomenal experience if it talks like a human? If so, that is stretching the explanatory domain of functionalism past its breaking point.

Terren
 

Stathis Papaioannou

May 22, 2023, 11:13:02 PM
to everyth...@googlegroups.com
The best justification for functionalism is David Chalmers' "Fading Qualia" argument. The paper considers replacing neurons with functionally equivalent silicon chips, but it could be generalised to replacing any part of the brain with a functionally equivalent black box, up to and including the whole brain or the whole person.

Terren Suydam

May 22, 2023, 11:37:33 PM
to everyth...@googlegroups.com
You're saying that an algorithm that provably does not have experiences of rabbits and lollipops - but can still talk about them in a way that's indistinguishable from a human - essentially has the same phenomenology as a human talking about rabbits and lollipops. That's just absurd on its face. You're essentially hand-waving away the grounding problem. Is that your position? That symbols don't need to be grounded in any sort of phenomenal experience?

Terren


Stathis Papaioannou

May 23, 2023, 12:14:27 AM
to everyth...@googlegroups.com

It's not just a matter of talking about them in a way that is indistinguishable from a human: in order to have human-like consciousness, the entire I/O behaviour of the human would need to be replicated. But in principle, I don't see why an LLM could not have some other type of phenomenal experience. And I don't think the grounding problem is a problem: I was never grounded in anything, I just grew up associating one symbol with another symbol. It's symbols all the way down.

--
Stathis Papaioannou

Terren Suydam

May 23, 2023, 12:23:21 AM
to everyth...@googlegroups.com
Is the smell of your grandmother's kitchen a symbol?
 


Stathis Papaioannou

May 23, 2023, 12:32:50 AM
to everyth...@googlegroups.com

Yes: I can't pull away the facade to confirm that there was a real grandmother and a real kitchen against which the sense data can be checked.

--
Stathis Papaioannou

Jesse Mazer

May 23, 2023, 12:40:02 AM
to everyth...@googlegroups.com
On Mon, May 22, 2023 at 11:37 PM Terren Suydam <terren...@gmail.com> wrote:


You're saying that an algorithm that provably does not have experiences of rabbits and lollipops - but can still talk about them in a way that's indistinguishable from a human - essentially has the same phenomenology as a human talking about rabbits and lollipops. That's just absurd on its face. You're essentially hand-waving away the grounding problem. Is that your position? That symbols don't need to be grounded in any sort of phenomenal experience?

Are you talking here about Chalmers's thought experiment in which each neuron is replaced by a functional duplicate, or about an algorithm like ChatGPT that has no detailed resemblance to the structure of a human being's brain? I think in the former case the case for identical experience is very strong, though note that Chalmers is not really a functionalist; he postulates "psychophysical laws" which map physical patterns to experiences, and uses the replacement argument to argue that such laws would have the property of "functional invariance".

If you are just talking about ChatGPT-style programs, I would agree with you: a system trained only on the high-level symbols of human language (as opposed to symbols representing neural impulses or other low-level events on the microscopic level) is not likely to have experiences anything like those of a human being using the same symbols. If Stathis' black box argument is meant to suggest otherwise, I don't see the logic, since it's not like a ChatGPT-style program would replicate the detailed output of a composite group of neurons either, or even the exact verbal output of a specific person, so there is no equivalent to gradual replacement of parts of a real human. If we are just talking about qualitatively behaving in a "human-like" way without replicating the behavior of a specific person or a sub-component of a person like a group of neurons in their brain, Chalmers's thought experiment doesn't apply. And even in a qualitative sense, count me as very skeptical that an LLM trained only on human writing will ever pass any really rigorous Turing test.

Jesse


 


Stathis Papaioannou

May 23, 2023, 1:44:28 AM
to everyth...@googlegroups.com

Chalmers considers replacing individual neurons and then extending this to groups of neurons with silicon chips. My variation on this is to replace any part of a human with a black box that replicates the interactions of that part with the surrounding tissue. This preserves the behaviour of the human and also the consciousness, otherwise, the argument goes, we could make a partial zombie, which is absurd. We could extend the replacement to any arbitrarily large proportion of the human, say all but a few cells on the tip of his nose, and the argument still holds. Once those cells are replaced, the entire human is replaced, and his consciousness remains unchanged. It is not necessary that inside the black box is anything resembling or even simulating human physiological processes: that would be one way to do it, but a completely different method would work as long as the I/O behaviour of the human was preserved. If techniques analogous to LLMs could be used to train AIs on human movements instead of words, for example, it might be possible to perfectly replicate human behaviour, and from the above argument, the resulting robot should also have human-like consciousness. And if that is the case, I don't see why a more limited system such as ChatGPT should not have a more limited form of consciousness.
 

--
Stathis Papaioannou

Terren Suydam

May 23, 2023, 1:58:12 AM
to everyth...@googlegroups.com
The grounding problem is about associating symbols with a phenomenal experience, or the memory of one - which is not the same thing as the functional equivalent or the neural correlate. It's the feeling, what it's like to experience the thing the symbol stands for. The experience of redness. The shock of plunging into cold water. The smell of coffee. And so on.

Take a migraine headache - if that's just a symbol, then why does that symbol feel bad while others feel good? Why does any symbol feel like anything? If you say evolution did it, that doesn't actually answer the question, because evolution doesn't do anything except select for traits, roughly speaking. So it just pushes the question to: how did the subjective feeling of pain or pleasure emerge from some genetic mutation, when it wasn't there before?

Without a functionalist explanation of the origin of aesthetic valence, I don't think you can "get it from bit".

Terren
 


Dylan Distasio

May 23, 2023, 2:25:46 AM
to everyth...@googlegroups.com
While we may not know everything about explaining it, pain doesn't seem to be that much of a mystery to me, and I don't consider it a symbol per se. It seems obvious to me anyways that pain arose out of a very early neural circuit as a survival mechanism. Pain is the feeling you experience when pain receptors detect that an area of the body is being damaged. It is ultimately based on a sensory input that transmits to the brain via nerves, where it is translated into a sensation that tells you to avoid whatever is causing the pain if possible, or lets you know you otherwise have a problem with your hardware.

That said, I agree with you on LLMs for the most part, although I think they are showing some potentially emergent, interesting behaviors.

Stathis Papaioannou

May 23, 2023, 2:47:58 AM
to everyth...@googlegroups.com
That seems more like the hard problem of consciousness. There is no solution to it.

--
Stathis Papaioannou

Jason Resch

May 23, 2023, 7:09:48 AM
to Everything List
As I see this thread, Terren and Stathis are both talking past each other. Please either of you correct me if I am wrong, but in an effort to clarify and perhaps resolve this situation:

I believe Stathis is saying the functional substitution having the same fine-grained causal organization *would* have the same phenomenology, the same experience, and the same qualia as the brain with the same fine-grained causal organization.

Therefore, there is no disagreement between your positions with regards to symbols groundings, mappings, etc.

When you both discuss the problem of symbology, or bits, etc. I believe this is partly responsible for why you are both talking past each other, because there are many levels involved in brains (and computational systems). I believe you were discussing completely different levels in the hierarchical organization.

There are high-level parts of minds, such as ideas, thoughts, feelings, quale, etc. and there are low-level, be they neurons, neurotransmitters, atoms, quantum fields, and laws of physics as in human brains, or circuits, logic gates, bits, and instructions as in computers.

I think when Terren mentions a "symbol for the smell of grandmother's kitchen" (GMK) the trouble is we are crossing a myriad of levels. The quale or idea or memory of the smell of GMK is a very high-level feature of a mind. When Terren asks for or discusses a symbol for it, a complete answer/description for it can only be supplied in terms of a vast amount of information concerning low-level structures, be they patterns of neuron firings or patterns of bits being processed. When we consider things down at this low level, however, we lose all context for what the meaning, idea, and quale are or where or how they come in. We cannot see or find the idea of GMK in any single neuron, any more than we can see or find it in any single bit.

Of course then it should seem deeply mysterious, if not impossible, how we get "it" (GMK or otherwise) from "bit", but to me, this is no greater a leap than how we get "it" from a bunch of cells squirting ions back and forth. Trying to understand a smartphone by looking at the flows of electrons is a similar kind of problem: it would seem just as difficult or impossible to explain and understand the high-level features and complexity out of the low-level simplicity.

This is why it's crucial to bear in mind and explicitly discuss the level one is operating on when one discusses symbols, substrates, or quale. In summary, I think a chief reason you have been talking past each other is that you are each operating on different assumed levels.

Please correct me if you believe I am mistaken and know I only offer my perspective in the hope it might help the conversation.

Jason 


Stathis Papaioannou

May 23, 2023, 8:15:43 AM
to everyth...@googlegroups.com

I think you’ve captured my position. But in addition I think replicating the fine-grained causal organisation is not necessary in order to replicate higher level phenomena such as GMK. By extension of Chalmers’ substitution experiment, replicating the behaviour of the human through any means, such as training an AI not only on language but also movement, would also preserve consciousness, even though it does not simulate any physiological processes. Another way to say this is that it is not possible to make a philosophical zombie.
--
Stathis Papaioannou

Terren Suydam

May 23, 2023, 9:12:36 AM
to everyth...@googlegroups.com
On Tue, May 23, 2023 at 2:25 AM Dylan Distasio <inte...@gmail.com> wrote:
While we may not know everything about explaining it, pain doesn't seem to be that much of a mystery to me, and I don't consider it a symbol per se.   It seems obvious to me anyways that pain arose out of a very early neural circuit as a survival mechanism.   

But how? What was the biochemical or neural change that suddenly birthed the feeling of pain? I'm not asking you to know the details, just the principle - by what principle can a critter that comes into being with some modification of its organization start having a negative feeling that didn't exist in its progenitors? This doesn't seem mysterious to you?

Very early neural circuits are relatively easy to simulate, and I'm guessing some team has done this for the level of organization you're talking about. What you're saying, if I'm reading you correctly, is that that simulation feels pain. If so, how do you get that feeling of pain out of code?
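For concreteness, here is the kind of minimal simulation I have in mind - a hypothetical leaky-integrator "nociceptor" circuit with made-up thresholds and inputs, which triggers withdrawal once accumulated stimulation crosses a threshold:

```python
def simulate_withdrawal(stimulus_levels, threshold=1.0, leak=0.8):
    """Toy leaky integrator: potential accumulates input, decays, and fires at threshold."""
    potential = 0.0
    for t, stimulus in enumerate(stimulus_levels):
        potential = potential * leak + stimulus
        if potential >= threshold:
            print(f"t={t}: potential={potential:.2f} -> WITHDRAW (circuit fires)")
            potential = 0.0  # reset after firing
        else:
            print(f"t={t}: potential={potential:.2f} -> no response")

# A few weak touches followed by a damaging stimulus
simulate_withdrawal([0.2, 0.2, 0.1, 0.9, 0.3])
```

The loop does everything the behavioral description of the circuit asks of it, which is exactly why the question of whether there is anything it is like to be this loop remains open.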

Terren

 

John Clark

May 23, 2023, 9:15:24 AM
to everyth...@googlegroups.com
On Mon, May 22, 2023 at 5:56 PM Terren Suydam <terren...@gmail.com> wrote:

> Many, myself included, are captivated by the amazing capabilities of chatGPT and other LLMs. They are, truly, incredible. Depending on your definition of Turing Test, it passes with flying colors in many, many contexts. It would take a much stricter Turing Test than we might have imagined this time last year,

The trouble with having a much tougher Turing Test is that although it would correctly conclude it was talking to a computer when it really was, it would also incorrectly conclude it was talking to a computer when in reality it was talking to a human being who had an IQ of 200. Yes, GPT can occasionally do something that is very stupid, but if you had not also at one time or another in your life done something that is very stupid then you are a VERY remarkable person.

> One way to improve chatGPT's performance on an actual Turing Test would be to slow it down, because it is too fast to be human.

It would be easy to make GPT dumber, but what would that prove ? We could also mass-produce Olympic gold medals so everybody on earth could get one, but what would be the point?

> All that said, is chatGPT actually intelligent?

Obviously.
 
> There's no question that it behaves in a way that we would all agree is intelligent. The answers it gives, and the speed it gives them in, reflect an intelligence that often far exceeds most if not all humans. I know some here say intelligence is as intelligence does. Full stop, 

All I'm saying is you should play fair, whatever test you decide to use to measure the intelligence of a human you should use exactly the same test on an AI. Full stop. 

> But this is an oversimplified view! 

Maybe so, but it's the only view we're ever going to get so we're just gonna have to make the best of it.  But I know there are some people who will continue to disagree with me about that until the day they die.

.... and so just five seconds before he was vaporized the last surviving human being turned to Mr. Jupiter Brain and said "I still think I'm smarter than you".

> If ChatGPT was trained on gibberish, that's what you'd get out of it.

And if you were trained on gibberish what sort of post do you imagine you'd be writing right now?  

> the Chinese Room thought experiment proposed by John Searle.

You mean the silliest thought experiment ever devised by the mind of man?

> ChatGPT, therefore, is more like a search engine

Oh for heaven's sake, not that canard again!  I'm not young, but since my early teens I've been hearing people say you only get out of a computer what you put in. I thought that was silly when I was 13 and I still do.
John K Clark    See what's on my new list at  Extropolis

Terren Suydam

May 23, 2023, 9:34:11 AM
to everyth...@googlegroups.com
On Tue, May 23, 2023 at 7:09 AM Jason Resch <jason...@gmail.com> wrote:

I appreciate the callout, but it is necessary to talk at both the micro and the macro for this discussion. We're talking about symbol grounding. I should make it clear that I don't believe symbols can be grounded in other symbols (i.e. symbols all the way down as Stathis put it), that leads to infinite regress and the illusion of meaning.  Symbols ultimately must stand for something. The only thing they can stand for, ultimately, is something that cannot be communicated by other symbols: conscious experience. There is no concept in our brains that is not ultimately connected to something we've seen, heard, felt, smelled, or tasted.

In my experience with conversations like this, you usually have people on one side who take consciousness seriously as the only thing that is actually undeniable, and you have people who'd rather not talk about it, hand-wave it away, or outright deny it. That's the talking-past that usually happens, and that's what's happening here.

Terren
 

Terren Suydam

May 23, 2023, 9:52:59 AM
to everyth...@googlegroups.com
I'm just going to say up front that I'm not going to engage with you on this particular topic, because I'm already well aware of your position, that you do not take consciousness seriously, and that your mind won't be changed on that. So anything we argue about will be about that fundamental difference, and that's just not interesting or productive, not to mention we've already had that pointless argument.

Terren
 


Dylan Distasio

May 23, 2023, 11:08:47 AM
to everyth...@googlegroups.com
Let me start out by saying I don't believe in zombies. We are biophysical systems with a long history of building on and repurposing earlier systems of genes and associated proteins. I saw you don't believe it is symbols all the way down. I agree with you, but I am arguing that the chain of symbols for many things begins with sensory input and ends with a higher-level symbol/abstraction, particularly in fully conscious animals like human beings that are self-aware and capable of an inner dialogue.

An earlier example I gave was of someone born blind: the result is someone with no concept of red, or any color for that matter, or images, and so on. I don't believe redness is hiding in some molecule in the brain like Brent does. It's only created via pruned neural networks in someone whose sensory inputs are working properly. That's the beginning of the chain of symbols, but it starts with an electrical impulse sent down nerves from a sensory organ.

It's the same thing with pain. If a certain gene related to a subset of sodium channels (which are critical for proper transmission of signals propagating along certain nerves) is screwed up, a human being is incapable of feeling pain. I'd argue they don't know what pain is, just like a congenitally blind person doesn't know what red is. It's the same thing with hearing and music. If a brain is missing that initial sensory input, your consciousness does not have the ability to feel the related subjective sensation.

And yes, I'm arguing that a true simulation (let's say for the sake of a thought experiment we were able to replicate every neural connection of a human being in code, including the connectome and neurotransmitters, along with a simulated nerve connected to a button on the desk that we could press to simulate the signal sent when a biological pain receptor is triggered) would feel pain that is just as real as the pain you and I feel as biological organisms.

You asked me for the principle behind how a critter could start having a negative feeling that didn't exist in its progenitors. Again, I believe the answer is as simple as this: pain receptors evolved, perhaps starting as a random mutation, and the behavior they induced in lower organisms resulted in increased survival. I'm not claiming to have solved the hard problem of consciousness. I don't claim to have the answer for why pain subjectively feels the way it does, or why pleasure does, but I do know that reward systems that evolved much earlier (like dopamine-based ones) are involved, and that pleasure can be directly triggered via various recreational drugs. That doesn't mean I think the dopamine molecule is where the pleasure qualia is hiding.

Even lower forms of life like bacteria move towards what their limited sensory systems tell them is a reward and away from what they tell them is a danger. I believe our subjective experiences are layered onto these much earlier evolutionary artifacts, although as eukaryotes I am not claiming that much of this is inherited from LUCA. I think it blossomed once predator/prey dynamics became possible in the Cambrian explosion and was built on from there over many, many years.

Getting slightly off topic, I don't think substrate likely matters as far as producing consciousness. The only way I could see that it would is if quantum effects that we can't reasonably replicate are actually involved in generating it. That said, I think Penrose and others do not have the odds on their side there, for a number of reasons.

Like I said though, I don't believe in zombies. 

Jason Resch

May 23, 2023, 1:46:28 PM
to Everything List


On Tue, May 23, 2023, 9:34 AM Terren Suydam <terren...@gmail.com> wrote:



I appreciate the callout, but it is necessary to talk at both the micro and the macro for this discussion. We're talking about symbol grounding. I should make it clear that I don't believe symbols can be grounded in other symbols (i.e. symbols all the way down as Stathis put it), that leads to infinite regress and the illusion of meaning.  Symbols ultimately must stand for something. The only thing they can stand for, ultimately, is something that cannot be communicated by other symbols: conscious experience. There is no concept in our brains that is not ultimately connected to something we've seen, heard, felt, smelled, or tasted.

I agree everything you have experienced is rooted in consciousness. 

But at the low level, the only things your brain senses are neural signals (symbols, on/off, ones and zeros).

In your arguments you rely on the high-level conscious states of human brains to establish that they have grounding, but then use the low-level descriptions of machines to deny their own consciousness, and hence deny they can ground their processing to anything.

If you remained in the space of low-level descriptions for both brains and machine intelligences, however, you would see that each struggles to make a connection to what may exist at the high level. You would see the lack of any apparent grounding in what are just neurons firing or not firing at certain times, just as a wire in a circuit either carries or doesn't carry a charge.

Conversely, if you stay in the high-level realm of consciousness and ideas, well then you must face the problem of other minds. You know you are conscious, but you cannot prove or disprove the consciousness of others, at least not without first defining a theory of consciousness and explaining why some minds satisfy the definition and others do not. Until you present a theory of consciousness, this conversation is, I am afraid, doomed to continue in this circle forever.

This same conversation and outcome played out over the past few months on the extropy-chat-list, although with different actors, so I can say with some confidence where some topics are likely to lead.




In my experience with conversations like this, you usually have people on one side who take consciousness seriously as the only thing that is actually undeniable, and you have people who'd rather not talk about it, hand-wave it away, or outright deny it. That's the talking-past that usually happens, and that's what's happening here.


Do you have a theory for why neurology supports consciousness but silicon circuitry cannot?

Jason 

Jason Resch

May 23, 2023, 2:03:55 PM
to everyth...@googlegroups.com
Note that Chalmers's argument is based on assuming the functional substitution occurs at a certain level of fine-grainedness. If you lose this step, and look only at the top-most input-output of the mind as a black box, then you can no longer distinguish a rock from a dreaming person, nor a calculator computing 2+3 from a human computing 2+3, and one also runs into the Blockhead "lookup table" argument against functionalism.

Accordingly, I think intermediate steps and the fine-grained organization are important (to some minimum level of fidelity), but as Bruno would say, we can never be certain what this necessary substitution level is. Is it neocortical columns, is it the connectome, is it the proteome, is it the molecules and atoms, is it QFT? Chalmers argues that at least at the level where noise introduces deviations in a brain simulation, simulating lower levels should not be necessary, as human consciousness appears robust to such noise at low levels (photon strikes, Brownian motion, quantum uncertainties, etc.).
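A toy illustration of why the black-box level is too coarse (the functions and the restricted domain are invented for the example): the two "adders" below have identical input-output behaviour, yet one computes and the other is a Blockhead-style lookup table with a completely different internal organization.

```python
def add_by_computation(a, b):
    """Computes the answer."""
    return a + b

# Pre-tabulated answers for every input in a small finite domain.
ADD_TABLE = {(a, b): a + b for a in range(10) for b in range(10)}

def add_by_lookup(a, b):
    """Blockhead-style: merely retrieves a stored answer."""
    return ADD_TABLE[(a, b)]

# Identical I/O behaviour over the shared domain...
assert all(add_by_computation(a, b) == add_by_lookup(a, b)
           for a in range(10) for b in range(10))
# ...but nothing at this black-box level distinguishes their causal organization.
```

This is why the substitution argument needs some finer grain than whole-system input-output.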
 
replicating the behaviour of the human through any means, such as training an AI not only on language but also movement, would also preserve consciousness, even though it does not simulate any physiological processes. Another way to say this is that it is not possible to make a philosophical zombie.

I agree zombies are impossible. I think they are even logically impossible.

Jason

Jesse Mazer

May 23, 2023, 2:05:54 PM
to everyth...@googlegroups.com
On Tue, May 23, 2023 at 9:34 AM Terren Suydam <terren...@gmail.com> wrote:


On Tue, May 23, 2023 at 7:09 AM Jason Resch <jason...@gmail.com> wrote:
As I see this thread, Terren and Stathis are both talking past each other. Please either of you correct me if i am wrong, but in an effort to clarify and perhaps resolve this situation:

I believe Stathis is saying the functional substitution having the same fine-grained causal organization *would* have the same phenomenology, the same experience, and the same qualia as the brain with the same fine-grained causal organization.

Therefore, there is no disagreement between your positions with regards to symbols groundings, mappings, etc.

When you both discuss the problem of symbology, or bits, etc. I believe this is partly responsible for why you are both talking past each other, because there are many levels involved in brains (and computational systems). I believe you were discussing completely different levels in the hierarchical organization.

There are high-level parts of minds, such as ideas, thoughts, feelings, quale, etc. and there are low-level, be they neurons, neurotransmitters, atoms, quantum fields, and laws of physics as in human brains, or circuits, logic gates, bits, and instructions as in computers.

I think when Terren mentions a "symbol for the smell of grandmother's kitchen" (GMK) the trouble is we are crossing a myriad of levels. The quale or idea or memory of the smell of GMK is a very high-level feature of a mind. When Terren asks for or discusses a symbol for it, a complete answer/description for it can only be supplied in terms of a vast amount of information concerning low level structures, be they patterns of neuron firings, or patterns of bits being processed. When we consider things down at this low level, however, we lose all context for what the meaning, idea, and quale are or where or how they come in. We cannot see or find the idea of GMK in any neuron, no more than we can see or find it in any neuron.

Of course then it should seem deeply mysterious, if not impossible, how we get "it" (GMK or otherwise) from "bit", but to me, this is no greater a leap from how we get "it" from a bunch of cells squirting ions back and forth. Trying to understand a smartphone by looking at the flows of electrons is a similar kind of problem, it would seem just as difficult or impossible to explain and understand the high-level features and complexity out of the low-level simplicity.

This is why it's crucial to bear in mind and explicitly discuss the level one is operating on when one discusses symbols, substrates, or quale. In summary, I think a chief reason you have been talking past each other is that you are each operating on different assumed levels.

Please correct me if you believe I am mistaken and know I only offer my perspective in the hope it might help the conversation.

I appreciate the callout, but it is necessary to talk at both the micro and the macro for this discussion. We're talking about symbol grounding. I should make it clear that I don't believe symbols can be grounded in other symbols (i.e. symbols all the way down as Stathis put it), that leads to infinite regress and the illusion of meaning.  Symbols ultimately must stand for something. The only thing they can stand for, ultimately, is something that cannot be communicated by other symbols: conscious experience. There is no concept in our brains that is not ultimately connected to something we've seen, heard, felt, smelled, or tasted.

In my experience with conversations like this, you usually have people on one side who take consciousness seriously as the only thing that is actually undeniable, and you have people who'd rather not talk about it, hand-wave it away, or outright deny it. That's the talking-past that usually happens, and that's what's happening here.

Terren

But are you talking specifically about symbols with high-level meaning like the words humans use in ordinary language, which large language models like ChatGPT are trained on? Or are you talking more generally about any kinds of symbols, including something like the 1s and 0s in a giant computer that was performing an extremely detailed simulation of a physical world, perhaps down to the level of particle physics, where that simulation could include things like detailed physical simulations of things in external environment (a flower, say) and components of a simulated biological organism with a nervous system (with particle-level simulations of neurons etc.)? Would you say that even in the case of the detailed physics simulation, nothing in there could ever give rise to conscious experience like our own? 

Jesse



 

John Clark

unread,
May 23, 2023, 2:12:07 PM5/23/23
to everyth...@googlegroups.com
On Tue, May 23, 2023  Terren Suydam <terren...@gmail.com> wrote:

> What was the biochemical or neural change that suddenly birthed the feeling of pain? 

It would not be difficult to make a circuit such that whenever a specific binary sequence of zeros and ones is in a register, the circuit stops doing everything else and changes that sequence to something else as fast as possible. As I've said before, intelligence is hard but emotion is easy.
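(A minimal sketch of the kind of circuit described above, written as Python rather than hardware since no actual design is given; the "pain" bit pattern and the pre-emption rule are invented purely for illustration.)

# Toy "emotion as interrupt": the register is checked on every cycle, and if it
# ever holds the designated pattern, all normal work is pre-empted and the
# pattern is overwritten as fast as possible.

PAIN_PATTERN = 0b1011   # illustrative 4-bit pattern standing in for a "damage" signal

def run_machine(register, program):
    for instruction in program:
        if register == PAIN_PATTERN:
            register = 0b0000   # drop everything and clear the pattern first
            continue
        register = instruction(register)   # otherwise, ordinary processing
    return register

program = [lambda r: r + 1] * 10            # ordinary instructions just increment
print(run_machine(0b1010, program))         # never matches the pain pattern: prints 20
print(run_machine(0b1011, program))         # pain pattern cleared, then work resumes: prints 9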

> I don't believe symbols can be grounded in other symbols

But it would be easy to ground symbols with examples, such as the symbol "2" with the number of shoes most people wear and the number of arms most people have, and the symbol "greenness" with the thing that leaves and emeralds and Harry Potter's eyes have in common.


 > There is no concept in our brains that is not ultimately connected to something we've seen, heard, felt, smelled, or tasted.

And that's why examples are important but definitions are not.  

John K Clark    See what's on my new list at  Extropolis



 

Terren Suydam

unread,
May 23, 2023, 2:15:54 PM5/23/23
to everyth...@googlegroups.com
On Tue, May 23, 2023 at 11:08 AM Dylan Distasio <inte...@gmail.com> wrote:
Let me start out by saying I don't believe in zombies. We are biophysical systems with a long history of building on and repurposing earlier systems of genes and associated proteins. I saw that you don't believe it is symbols all the way down. I agree with you, but I am arguing that for many things the chain of symbols begins with sensory input and ends with a higher-level symbol/abstraction, particularly in fully conscious animals like human beings that are self-aware and capable of an inner dialogue.

Of course, and hopefully I was clear that I meant symbols are ultimately grounded in phenomenal experience, but yes, there are surely many layers to get there depending on the concept.

An earlier example I gave was of someone born blind, who ends up with no concept of red or any color for that matter, or of images, and so on. I don't believe, as Brent does, that redness is hiding in some molecule in the brain. It's only created via neural networks pruned in someone whose sensory inputs are working properly. That's the beginning of the chain of symbols, but it starts with an electrical impulse sent down nerves from a sensory organ.

I agree, the idea of a certain kind of physical molecule transducing a quale by virtue of the properties of that physical molecule is super problematic. Like, if redness were inherent to glutamate, what about all the other colors?  And sounds? And smells?  And textures?  Just how many molecules would we need to represent the vast pantheon of possible qualia?
 
It's the same thing with pain. If a certain gene related to a subset of sodium channels (which are critical for proper transmission of signals propagating along certain nerves) is defective, a human being is incapable of feeling pain. I'd argue they don't know what pain is, just as a congenitally blind person doesn't know what red is. It's the same thing with hearing and music. If a brain is missing that initial sensory input, your consciousness does not have the ability to feel the related subjective sensation.

No problem there.
 
And yes, I'm arguing that a true simulation (let's say for the sake of a thought experiment we were able to replicate every neural connection of a human being in code, including the connectomes, and neurotransmitters, along with a simulated nerve that was connected to a button on the desk we could press which would simulate the signal sent when a biological pain receptor is triggered) would feel pain that is just as real as the pain you and I feel as biological organisms.

This follows from the physicalist no-zombies-possible stance. But it still runs into the hard problem, basically: how does stuff give rise to experience?
 
You asked me for the principle behind how a critter could start having a negative feeling that didn't exist in its progenitors.   Again, I believe the answer is as simple as it happened when pain receptors evolved that may have started as a random mutation where the behavior they induced in lower organisms resulted in increased survival. 

Before you said that you don't believe redness is hiding in a molecule. But here, you're saying pain is hiding in a pain receptor, which is nothing more or less than a protein molecule.
 
  I'm not claiming to have solved the hard problem of consciousness.   I don't claim to have the answer for why pain subjectively feels the way it does, or why pleasure does, but I do know that reward systems that evolved much earlier are involved (like dopamine based ones), and that pleasure can be directly triggered via various recreational drugs.   That doesn't mean I think the dopamine molecule is where the pleasure qualia is hiding.

Even lower forms of life like bacteria move towards what their limited sensory systems tell them is a reward and away from what it tells them is a danger.   I believe our subjective experiences are layered onto these much earlier evolutionary artifacts, although as eukaryotes I am not claiming that much of this is inherited from LUCA.   I think it blossomed once predator/prey dynamics were possible in the Cambrian explosion and was built on from there over many many years.

Bacteria can move towards or away from certain stimuli, but it doesn't follow that it feels pain or pleasure as it does so. That is using functionalism to sweep the hard problem under the rug.

Terren
 
Getting slightly off topic, I don't think substrate likely matters as far as producing consciousness.   The only possible way I could see that it would is if quantum effects are actually involved in generating it that we can't reasonably replicate.   That said, I think Penrose and others do not have the odds on their side there for a number of reasons.

Like I said though, I don't believe in zombies. 

On Tue, May 23, 2023 at 9:12 AM Terren Suydam <terren...@gmail.com> wrote:


On Tue, May 23, 2023 at 2:25 AM Dylan Distasio <inte...@gmail.com> wrote:
While we may not know everything about explaining it, pain doesn't seem to be that much of a mystery to me, and I don't consider it a symbol per se.   It seems obvious to me anyways that pain arose out of a very early neural circuit as a survival mechanism.   

But how?  What was the biochemical or neural change that suddenly birthed the feeling of pain?  I'm not asking you to know the details, just the principle - by what principle can a critter that comes into being with some modification of its organization start having a negative feeling when it didn't exist in its progenitors?  This doesn't seem mysterious to you?

Very early neural circuits are relatively easy to simulate, and I'm guessing some team has done this for the level of organization you're talking about. What you're saying, if I'm reading you correctly, is that that simulation feels pain. If so, how do you get that feeling of pain out of code?

Terren

 
Pain is the feeling you experience when pain receptors detect that an area of the body is being damaged. It is ultimately based on a sensory input that transmits to the brain via nerves, where it is translated into a sensation that tells you to avoid whatever is causing the pain if possible, or lets you know you otherwise have a problem with your hardware.

That said, I agree with you on LLMs for the most part, although I think they are showing some potentially emergent, interesting behaviors.

On Tue, May 23, 2023 at 1:58 AM Terren Suydam <terren...@gmail.com> wrote:

Take a migraine headache - if that's just a symbol, then why does that symbol feel bad while others feel good?  Why does any symbol feel like anything? If you say evolution did it, that doesn't actually answer the question, because evolution doesn't do anything except select for traits, roughly speaking. So it just pushes the question to: how did the subjective feeling of pain or pleasure emerge from some genetic mutation, when it wasn't there before?





Jason Resch

unread,
May 23, 2023, 2:20:52 PM5/23/23
to everyth...@googlegroups.com
On Tue, May 23, 2023 at 1:12 PM John Clark <johnk...@gmail.com> wrote:
On Tue, May 23, 2023  Terren Suydam <terren...@gmail.com> wrote:

> What was the biochemical or neural change that suddenly birthed the feeling of pain? 

It would not be difficult to make a circuit such that whenever a specific binary sequence of zeros and ones is in a register, the circuit stops doing everything else and changes that sequence to something else as fast as possible. As I've said before, intelligence is hard but emotion is easy.

I believe I have made simple neural networks that are conscious and can experience both pleasure and displeasure, insofar as they have evolved to learn and apply multiple and various strategies for both attraction and avoidance behaviors. They can achieve this even with just 16 artificial neurons and within only a dozen generations of simulated evolution:



I am of course interested in hearing any arguments for why these bots are or are not capable of some primitive sensation.
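(Jason's bots are not shown above, so purely as an illustration of the general setup he describes, a handful of artificial neurons evolved over about a dozen generations to produce attraction and avoidance behaviour, here is a minimal Python sketch; the network shape, fitness function, and mutation scheme are all invented.)

import numpy as np

rng = np.random.default_rng(0)

HIDDEN = 16          # "16 artificial neurons" (sizing borrowed from the post; everything else is invented)
GENERATIONS = 12     # "a dozen generations of simulated evolution"
POP = 40
N_PARAMS = HIDDEN * 2 + HIDDEN + HIDDEN   # W1 (16x2), b1 (16), w2 (16)

def behave(params, position, valence):
    # Tiny feed-forward net: inputs are a stimulus position and its valence
    # (+1 attractive, -1 aversive); output is a movement direction in [-1, 1].
    W1 = params[:HIDDEN * 2].reshape(HIDDEN, 2)
    b1 = params[HIDDEN * 2:HIDDEN * 3]
    w2 = params[HIDDEN * 3:]
    hidden = np.tanh(W1 @ np.array([position, valence]) + b1)
    return np.tanh(w2 @ hidden)

def fitness(params, trials=200):
    # Reward approaching attractive stimuli and retreating from aversive ones.
    total = 0.0
    for _ in range(trials):
        pos = rng.uniform(-1, 1)
        val = rng.choice([-1.0, 1.0])
        desired = np.sign(pos) * val      # toward it if attractive, away if aversive
        total += desired * behave(params, pos, val)
    return total / trials

population = [rng.normal(0, 1, N_PARAMS) for _ in range(POP)]
for gen in range(GENERATIONS):
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:POP // 4]
    # Each surviving parent leaves three mutated offspring for the next generation.
    population = parents + [p + rng.normal(0, 0.3, N_PARAMS)
                            for p in parents for _ in range(3)]
    print(f"generation {gen:2d}: best fitness ~ {fitness(ranked[0]):.2f}")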

Jason

Jason Resch

unread,
May 23, 2023, 2:27:27 PM5/23/23
to everyth...@googlegroups.com
On Tue, May 23, 2023 at 1:15 PM Terren Suydam <terren...@gmail.com> wrote:


On Tue, May 23, 2023 at 11:08 AM Dylan Distasio <inte...@gmail.com> wrote:
 
And yes, I'm arguing that a true simulation (let's say for the sake of a thought experiment we were able to replicate every neural connection of a human being in code, including the connectomes, and neurotransmitters, along with a simulated nerve that was connected to a button on the desk we could press which would simulate the signal sent when a biological pain receptor is triggered) would feel pain that is just as real as the pain you and I feel as biological organisms.

This follows from the physicalist no-zombies-possible stance. But it still runs into the hard problem, basically: how does stuff give rise to experience?


I would say stuff doesn't give rise to conscious experience. Conscious experience is the logically necessary and required state of knowledge that is present in any consciousness-necessitating behaviors. If you design a simple robot with a camera and robot arm that is able to reliably catch a ball thrown in its general direction, then something in that system *must* contain knowledge of the ball's relative position and trajectory. It simply isn't logically possible to have a system that behaves in all situations as if it knows where the ball is, without knowing where the ball is. Consciousness is simply the state of being with knowledge.

Con- "Latin for with"
-Scious- "Latin for knowledge"
-ness "English suffix meaning the state of being X"

Consciousness -> The state of being with knowledge.

There is an infinite variety of potential states and levels of knowledge, and this contributes to much of the confusion, but boiled down to the simplest essence of what is or isn't conscious, it is all about knowledge states. Knowledge states require activity/reactivity to the presence of information, and counterfactual behaviors (if/then, greater-than/less-than, discriminations and comparisons that lead to different downstream consequences in a system's behavior). At least, this is my theory of consciousness.
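(A minimal Python sketch of the ball-catching example, with all names invented, just to make the point concrete: any controller that reliably intercepts the ball has to carry some internal state encoding the ball's position and motion; here that "knowledge" is made explicit.)

# Not any particular robot API: a toy controller that tracks the ball and
# predicts where to send the arm. Its internal state *is* its knowledge of the ball.

class CatchController:
    def __init__(self):
        self.last_obs = None          # knowledge of where the ball was last seen
        self.velocity = (0.0, 0.0)    # knowledge of how it is moving

    def observe(self, x, y, dt=0.1):
        # Update the ball-state estimate from a new camera observation.
        if self.last_obs is not None:
            self.velocity = ((x - self.last_obs[0]) / dt,
                             (y - self.last_obs[1]) / dt)
        self.last_obs = (x, y)

    def arm_target(self, horizon=0.5):
        # Command the arm to where the ball is predicted to be.
        x, y = self.last_obs
        vx, vy = self.velocity
        return (x + vx * horizon, y + vy * horizon)

c = CatchController()
for obs in [(0.0, 2.0), (0.2, 1.8), (0.4, 1.6)]:
    c.observe(*obs)
print(c.arm_target())   # prints (1.4, 0.6): the controller's state encodes the trajectory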

Jason

Terren Suydam

unread,
May 23, 2023, 3:50:17 PM5/23/23
to everyth...@googlegroups.com
On Tue, May 23, 2023 at 1:46 PM Jason Resch <jason...@gmail.com> wrote:


On Tue, May 23, 2023, 9:34 AM Terren Suydam <terren...@gmail.com> wrote:


On Tue, May 23, 2023 at 7:09 AM Jason Resch <jason...@gmail.com> wrote:
As I see this thread, Terren and Stathis are both talking past each other. Please either of you correct me if I am wrong, but in an effort to clarify and perhaps resolve this situation:

I believe Stathis is saying the functional substitution having the same fine-grained causal organization *would* have the same phenomenology, the same experience, and the same qualia as the brain with the same fine-grained causal organization.

Therefore, there is no disagreement between your positions with regards to symbols groundings, mappings, etc.

When you both discuss the problem of symbology, or bits, etc. I believe this is partly responsible for why you are both talking past each other, because there are many levels involved in brains (and computational systems). I believe you were discussing completely different levels in the hierarchical organization.

There are high-level parts of minds, such as ideas, thoughts, feelings, quale, etc. and there are low-level, be they neurons, neurotransmitters, atoms, quantum fields, and laws of physics as in human brains, or circuits, logic gates, bits, and instructions as in computers.

I think when Terren mentions a "symbol for the smell of grandmother's kitchen" (GMK) the trouble is we are crossing a myriad of levels. The quale or idea or memory of the smell of GMK is a very high-level feature of a mind. When Terren asks for or discusses a symbol for it, a complete answer/description for it can only be supplied in terms of a vast amount of information concerning low-level structures, be they patterns of neuron firings, or patterns of bits being processed. When we consider things down at this low level, however, we lose all context for what the meaning, idea, and quale are or where or how they come in. We cannot see or find the idea of GMK in any neuron, no more than we can see or find it in any bit.

Of course then it should seem deeply mysterious, if not impossible, how we get "it" (GMK or otherwise) from "bit", but to me, this is no greater a leap from how we get "it" from a bunch of cells squirting ions back and forth. Trying to understand a smartphone by looking at the flows of electrons is a similar kind of problem, it would seem just as difficult or impossible to explain and understand the high-level features and complexity out of the low-level simplicity.

This is why it's crucial to bear in mind and explicitly discuss the level one is operating on when one discusses symbols, substrates, or quale. In summary, I think a chief reason you have been talking past each other is that you are each operating on different assumed levels.

Please correct me if you believe I am mistaken and know I only offer my perspective in the hope it might help the conversation.

I appreciate the callout, but it is necessary to talk at both the micro and the macro for this discussion. We're talking about symbol grounding. I should make it clear that I don't believe symbols can be grounded in other symbols (i.e. symbols all the way down as Stathis put it), that leads to infinite regress and the illusion of meaning.  Symbols ultimately must stand for something. The only thing they can stand for, ultimately, is something that cannot be communicated by other symbols: conscious experience. There is no concept in our brains that is not ultimately connected to something we've seen, heard, felt, smelled, or tasted.

I agree everything you have experienced is rooted in consciousness. 

But at the low level, the only things your brain senses are neural signals (symbols, on/off, ones and zeros).

In your arguments you rely on the high-level conscious states of human brains to establish that they have grounding, but then use the low-level descriptions of machines to deny their own consciousness, and hence deny they can ground their processing to anything.

If you remained in the space of low-level descriptions for both brains and machine intelligences, however, you would see that each struggles to make a connection to what may exist at the high level. You would see the lack of any apparent grounding in what are just neurons firing or not firing at certain times, just as a wire in a circuit either carries or doesn't carry a charge.

Ah, I see your point now. That's valid, thanks for raising it and let me clarify.

Bringing this back to LLMs, it's clear to me that LLMs do not have phenomenal experience, but you're right to insist that I explain why I think so. I don't know if this amounts to a theory of consciousness, but the reason I believe that LLMs are not conscious is that, in my view, consciousness entails a continuous flow of experience. Assuming for this discussion that consciousness is realizable in a substrate-independent way, that means that consciousness is, in some sort of way, a process in the domain of information. And so to realize a conscious process, whether in a brain or in silicon, the physical dynamics of that information process must also be continuous, which is to say, recursive. The behavior or output of the brain in one moment is the input to the brain in the next moment.

But LLMs do not exhibit this. They have a training phase, and then they respond to discrete queries. As far as I know, once it's out of the training phase, there is no feedback outside of the flow of a single conversation. None of that seems isomorphic to the kind of process that could support a flow of experience, whatever experience would mean for an LLM.
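(A minimal sketch of the contrast being drawn, with invented names and toy dynamics: in a recurrent process the output of one moment feeds back as input to the next, whereas a plain feed-forward call from frozen weights carries no state from one invocation to the next.)

import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 4))   # recurrent weights (toy values)
U = rng.normal(size=(4, 3))   # input weights

def recurrent_step(state, percept):
    # Brain-like dynamics: this moment's state depends on everything that came before.
    return np.tanh(W @ state + U @ percept)

def feedforward_call(percept):
    # LLM-like inference: same frozen weights, no state carried between calls.
    return np.tanh(U @ percept)

state = np.zeros(4)
for _ in range(3):
    state = recurrent_step(state, rng.normal(size=3))   # state threads through time

print(feedforward_call(np.ones(3)))   # identical output...
print(feedforward_call(np.ones(3)))   # ...every time it is called with the same input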

So to me, the suggestion that chatGPT could one day be used to functionally replace some subset of the brain that is responsible for mediating conscious experience in a human, just strikes me as absurd. 
 

Conversely, if you stay in the high-level realm of consciousness ideas, well then you must face the problem of other minds. You know you are conscious, but you cannot prove or disprove the consciousness of others, at least not without first defining a theory of consciousness and explaining why some minds satisfy the definition and others do not. Until you present a theory of consciousness, this conversation is, I am afraid, doomed to continue in this circle forever.

This same conversation and outcome played out over the past few months on the extropy-chat-list, although with different actors, so I can say with some confidence where some topics are likely to lead.




In my experience with conversations like this, you usually have people on one side who take consciousness seriously as the only thing that is actually undeniable, and you have people who'd rather not talk about it, hand-wave it away, or outright deny it. That's the talking-past that usually happens, and that's what's happening here.


Do you have a theory for why neurology supports consciousness but silicon circuitry cannot?

I'm agnostic about this, but that's because I no longer assume physicalism. For me, the hard problem signals that physicalism is impossible. I've argued on this list many times as a physicalist, as one who believes in the possibility of artificial consciousness, uploading, etc. I've argued that there is something it is like to be a cybernetic system. But at the end of it all, I just couldn't overcome the problem of aesthetic valence. As an aside, the folks at Qualia Computing have put forth a theory that symmetry in the state space isomorphic to ongoing experience is what corresponds to positive valence, and anti-symmetry to negative valence. It's a very interesting argument but one is still forced to leap from a mathematical concept to a subjective feeling. Regardless, it's the most sophisticated attempt to reconcile the hard problem that I've come across.

I've since come around to the idealist stance that reality is fundamentally consciousness, and that the physical is a manifestation of that consciousness, like in a dream. It has its own "hard problem", which is explaining why the world appears so orderly. But if you don't get too hung up on that, it's not as clear that artificial consciousness is possible. It might be! It may even be that efforts like the above to explain how you get it from bit are relevant to idealist explanations of physical reality. But the challenge with idealism is that the explanations that are on offer sound more like mythology and metaphor than science. I should note that Bernardo Kastrup has some interesting ideas on idealism, and he approaches it in a way that is totally devoid of woo. That said, one really intriguing set of evidence in favor of idealism is near-death-experience (NDE) testimony, which is pretty remarkable if one actually studies it.

Terren
 

Jason 

Terren Suydam

unread,
May 23, 2023, 3:59:48 PM5/23/23
to everyth...@googlegroups.com
On Tue, May 23, 2023 at 2:05 PM Jesse Mazer <laser...@gmail.com> wrote:


On Tue, May 23, 2023 at 9:34 AM Terren Suydam <terren...@gmail.com> wrote:


On Tue, May 23, 2023 at 7:09 AM Jason Resch <jason...@gmail.com> wrote:
As I see this thread, Terren and Stathis are both talking past each other. Please either of you correct me if I am wrong, but in an effort to clarify and perhaps resolve this situation:

I believe Stathis is saying the functional substitution having the same fine-grained causal organization *would* have the same phenomenology, the same experience, and the same qualia as the brain with the same fine-grained causal organization.

Therefore, there is no disagreement between your positions with regards to symbols groundings, mappings, etc.

When you both discuss the problem of symbology, or bits, etc. I believe this is partly responsible for why you are both talking past each other, because there are many levels involved in brains (and computational systems). I believe you were discussing completely different levels in the hierarchical organization.

There are high-level parts of minds, such as ideas, thoughts, feelings, quale, etc. and there are low-level, be they neurons, neurotransmitters, atoms, quantum fields, and laws of physics as in human brains, or circuits, logic gates, bits, and instructions as in computers.

I think when Terren mentions a "symbol for the smell of grandmother's kitchen" (GMK) the trouble is we are crossing a myriad of levels. The quale or idea or memory of the smell of GMK is a very high-level feature of a mind. When Terren asks for or discusses a symbol for it, a complete answer/description for it can only be supplied in terms of a vast amount of information concerning low-level structures, be they patterns of neuron firings, or patterns of bits being processed. When we consider things down at this low level, however, we lose all context for what the meaning, idea, and quale are or where or how they come in. We cannot see or find the idea of GMK in any neuron, no more than we can see or find it in any bit.

Of course then it should seem deeply mysterious, if not impossible, how we get "it" (GMK or otherwise) from "bit", but to me, this is no greater a leap from how we get "it" from a bunch of cells squirting ions back and forth. Trying to understand a smartphone by looking at the flows of electrons is a similar kind of problem, it would seem just as difficult or impossible to explain and understand the high-level features and complexity out of the low-level simplicity.

This is why it's crucial to bear in mind and explicitly discuss the level one is operating on when one discusses symbols, substrates, or quale. In summary, I think a chief reason you have been talking past each other is that you are each operating on different assumed levels.

Please correct me if you believe I am mistaken and know I only offer my perspective in the hope it might help the conversation.

I appreciate the callout, but it is necessary to talk at both the micro and the macro for this discussion. We're talking about symbol grounding. I should make it clear that I don't believe symbols can be grounded in other symbols (i.e. symbols all the way down as Stathis put it), that leads to infinite regress and the illusion of meaning.  Symbols ultimately must stand for something. The only thing they can stand for, ultimately, is something that cannot be communicated by other symbols: conscious experience. There is no concept in our brains that is not ultimately connected to something we've seen, heard, felt, smelled, or tasted.

In my experience with conversations like this, you usually have people on one side who take consciousness seriously as the only thing that is actually undeniable, and you have people who'd rather not talk about it, hand-wave it away, or outright deny it. That's the talking-past that usually happens, and that's what's happening here.

Terren

But are you talking specifically about symbols with high-level meaning like the words humans use in ordinary language, which large language models like ChatGPT are trained on? Or are you talking more generally about any kinds of symbols, including something like the 1s and 0s in a giant computer that was performing an extremely detailed simulation of a physical world, perhaps down to the level of particle physics, where that simulation could include things like detailed physical simulations of things in external environment (a flower, say) and components of a simulated biological organism with a nervous system (with particle-level simulations of neurons etc.)? Would you say that even in the case of the detailed physics simulation, nothing in there could ever give rise to conscious experience like our own? 

Jesse

No, I wouldn't deny that possibility. As I mentioned in the reply I just made to Jason, I'm coming from an idealist perspective, which is to say that reality is fundamentally consciousness. So the simulation you're hypothesizing would itself be a manifestation of consciousness - though admittedly that's not a super helpful thing to say. At least, it's no more helpful than panpsychist assumptions that all matter has some aspect of consciousness. It doesn't tell you why this chunk of matter that looks like a rock doesn't appear at all to be conscious, and why this chunk of matter that looks like a Jesse Mazer is. Or why Jesse Mazer's left kneecap doesn't seem to have its own consciousness - or if it does, how it interacts with the holistic Jesse Mazer consciousness. Idealism isn't a theory of consciousness in the sense that it explains those differences. My current take on it is just that it's the only way to make sense of reality if you don't believe in religious dualism, and you acknowledge that the Hard Problem is a fatal flaw for physicalism and you're willing to update your beliefs based on that. But I reserve the right to change my mind on that.

Terren
 


Terren Suydam

unread,
May 23, 2023, 4:14:26 PM5/23/23
to everyth...@googlegroups.com
This still runs into the valence problem though. Why does some "knowledge" correspond with a positive feeling and other knowledge with a negative feeling?  I'm not talking about the functional accounts of positive and negative experiences. I'm talking about phenomenology. The functional aspect of it is not irrelevant, but to focus only on that is to sweep the feeling under the rug. So many dialogs on this topic basically terminate here, where it's just a clash of belief about the relative importance of consciousness and phenomenology as the mediator of all experience and knowledge.

Terren
 


John Clark

unread,
May 23, 2023, 4:17:32 PM5/23/23
to everyth...@googlegroups.com
On Tue, May 23, 2023 at 3:50 PM Terren Suydam <terren...@gmail.com> wrote:
 
> in my view, consciousness entails a continuous flow of experience.

If I could instantly stop all physical processes that are going on inside your head for one year and then start them up again, to an outside objective observer you would appear to lose consciousness for one year, but to you your consciousness would still feel continuous but the outside world would appear to have discontinuously jumped to something new.   

John K Clark    See what's on my new list at  Extropolis




Terren Suydam

unread,
May 23, 2023, 4:30:39 PM5/23/23
to everyth...@googlegroups.com
On Tue, May 23, 2023 at 4:17 PM John Clark <johnk...@gmail.com> wrote:
On Tue, May 23, 2023 at 3:50 PM Terren Suydam <terren...@gmail.com> wrote:
 
> in my view, consciousness entails a continuous flow of experience.

If I could instantly stop all physical processes that are going on inside your head for one year and then start them up again, to an outside objective observer you would appear to lose consciousness for one year, but to you your consciousness would still feel continuous but the outside world would appear to have discontinuously jumped to something new.   

I meant continuous in terms of the flow of state from one moment to the next. What you're describing is continuous because it's not the passage of time that needs to be continuous, but the state of information in the model as the physical processes evolve. And my understanding is that in an LLM, each new query starts from the same state... it does not evolve in time.

Terren
 

John K Clark    See what's on my new list at  Extropolis





John Clark

unread,
May 23, 2023, 4:33:50 PM5/23/23
to everyth...@googlegroups.com
On Tue, May 23, 2023  Terren Suydam <terren...@gmail.com> wrote:

> reality is fundamentally consciousness. 

Then why does a simple physical molecule like N2O stop consciousness temporarily and another simple physical molecule like CN- do so permanently?
 
> Why does some "knowledge" correspond with a positive feeling and other knowledge with a negative feeling?

Because sometimes new knowledge requires you to re-organize hundreds of other important concepts you already had in your brain, and that can be difficult and, depending on circumstances, may endanger or benefit your mental health.

John K Clark    See what's on my new list at  Extropolis


John Clark

unread,
May 23, 2023, 4:42:43 PM5/23/23
to everyth...@googlegroups.com
On Tue, May 23, 2023 at 4:30 PM Terren Suydam <terren...@gmail.com> wrote:

>> If I could instantly stop all physical processes that are going on inside your head for one year and then start them up again, to an outside objective observer you would appear to lose consciousness for one year, but to you your consciousness would still feel continuous but the outside world would appear to have discontinuously jumped to something new.   

> I meant continuous in terms of the flow of state from one moment to the next. What you're describing is continuous because it's not the passage of time that needs to be continuous, but the state of information in the model as the physical processes evolve.

Sorry but it's not at all clear to me what you're talking about. If the state of information is not evolving in time then what in the world is it evolving in?!  If nothing changes then nothing can evolve, and the very definition of time stopping is that nothing changes and nothing evolves.

  John K Clark    See what's on my new list at  Extropolis

Terren Suydam

unread,
May 23, 2023, 4:47:04 PM5/23/23
to everyth...@googlegroups.com
If I had confidence that my answers to your questions would be met with anything but a "defend/destroy" mentality, I'd go there with you. It's gotta be fun for me, and you're not someone I enjoy getting into it with. Not trying to be insulting, but it's the truth.


Jason Resch

unread,
May 23, 2023, 5:47:31 PM5/23/23
to Everything List


On Tue, May 23, 2023, 3:50 PM Terren Suydam <terren...@gmail.com> wrote:


On Tue, May 23, 2023 at 1:46 PM Jason Resch <jason...@gmail.com> wrote:


On Tue, May 23, 2023, 9:34 AM Terren Suydam <terren...@gmail.com> wrote:


On Tue, May 23, 2023 at 7:09 AM Jason Resch <jason...@gmail.com> wrote:
As I see this thread, Terren and Stathis are both talking past each other. Please either of you correct me if I am wrong, but in an effort to clarify and perhaps resolve this situation:

I believe Stathis is saying the functional substitution having the same fine-grained causal organization *would* have the same phenomenology, the same experience, and the same qualia as the brain with the same fine-grained causal organization.

Therefore, there is no disagreement between your positions with regards to symbols groundings, mappings, etc.

When you both discuss the problem of symbology, or bits, etc. I believe this is partly responsible for why you are both talking past each other, because there are many levels involved in brains (and computational systems). I believe you were discussing completely different levels in the hierarchical organization.

There are high-level parts of minds, such as ideas, thoughts, feelings, quale, etc. and there are low-level, be they neurons, neurotransmitters, atoms, quantum fields, and laws of physics as in human brains, or circuits, logic gates, bits, and instructions as in computers.

I think when Terren mentions a "symbol for the smell of grandmother's kitchen" (GMK) the trouble is we are crossing a myriad of levels. The quale or idea or memory of the smell of GMK is a very high-level feature of a mind. When Terren asks for or discusses a symbol for it, a complete answer/description for it can only be supplied in terms of a vast amount of information concerning low-level structures, be they patterns of neuron firings, or patterns of bits being processed. When we consider things down at this low level, however, we lose all context for what the meaning, idea, and quale are or where or how they come in. We cannot see or find the idea of GMK in any neuron, no more than we can see or find it in any bit.

Of course then it should seem deeply mysterious, if not impossible, how we get "it" (GMK or otherwise) from "bit", but to me, this is no greater a leap from how we get "it" from a bunch of cells squirting ions back and forth. Trying to understand a smartphone by looking at the flows of electrons is a similar kind of problem, it would seem just as difficult or impossible to explain and understand the high-level features and complexity out of the low-level simplicity.

This is why it's crucial to bear in mind and explicitly discuss the level one is operating on when one discusses symbols, substrates, or quale. In summary, I think a chief reason you have been talking past each other is that you are each operating on different assumed levels.

Please correct me if you believe I am mistaken and know I only offer my perspective in the hope it might help the conversation.

I appreciate the callout, but it is necessary to talk at both the micro and the macro for this discussion. We're talking about symbol grounding. I should make it clear that I don't believe symbols can be grounded in other symbols (i.e. symbols all the way down as Stathis put it), that leads to infinite regress and the illusion of meaning.  Symbols ultimately must stand for something. The only thing they can stand for, ultimately, is something that cannot be communicated by other symbols: conscious experience. There is no concept in our brains that is not ultimately connected to something we've seen, heard, felt, smelled, or tasted.

I agree everything you have experienced is rooted in consciousness. 

But at the low level, the only things your brain senses are neural signals (symbols, on/off, ones and zeros).

In your arguments you rely on the high-level conscious states of human brains to establish that they have grounding, but then use the low-level descriptions of machines to deny their own consciousness, and hence deny they can ground their processing to anything.

If you remained in the space of low-level descriptions for both brains and machine intelligences, however, you would see that each struggles to make a connection to what may exist at the high level. You would see the lack of any apparent grounding in what are just neurons firing or not firing at certain times, just as a wire in a circuit either carries or doesn't carry a charge.

Ah, I see your point now. That's valid, thanks for raising it and let me clarify.

I appreciate that, thank you.


Bringing this back to LLMs, it's clear to me that LLMs do not have phenomenal experience, but you're right to insist that I explain why I think so. I don't know if this amounts to a theory of consciousness, but the reason I believe that LLMs are not conscious is that, in my view, consciousness entails a continuous flow of experience. Assuming for this discussion that consciousness is realizable in a substrate-independent way, that means that consciousness is, in some sort of way, a process in the domain of information. And so to realize a conscious process, whether in a brain or in silicon, the physical dynamics of that information process must also be continuous, which is to say, recursive.


I am quite partial to the idea that recursion or loops may be necessary to realize consciousness, or at least certain types of consciousness, such as self-consciousness (which I take to be models which include the self as an actor within the environment), but I also believe that loops may exist in non-obvious forms, and even extend beyond the physical domain of a creature's body or the confines of a physical computer.

Allow me to explain.

Consider something like the robot arm I described that is programmed to catch a ball. Now consider that at each time step, a process is run that receives the current coordinates of the robot arm position and the ball position. This is not technically a loop, and not really recursive; it may be implemented by a timer that fires off the process, say, 1000 times a second.

But if you consider the pair of the robot arm and the environment, a recursive loop emerges, in the sense that the action decided and executed in the previous time step affects the sensory input in subsequent time steps. If the robot had enough sophistication to have a language function and we asked it, "What caused your arm to move?", the only answer it could give would have to be a reflexive one: a process within me caused my arm to move. So we get self-reference, and recursion through environmental interaction.

Now let's consider the LLM in this context: each invocation is indeed a feed-forward, independent process, but through this back-and-forth flow, the LLM interacting with the user, a recursive, continuous loop of processing emerges. The LLM could be said to perceive an ever-growing thread of conversation, with new words constantly being appended to its perception window. Moreover, some of these words would be external inputs, while others are internal outputs. If you ask the LLM where those internally generated outputs came from, again the only valid answer it could supply would have to be reflexive.
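(A minimal sketch of the loop described above, with `generate` standing in for an LLM call rather than any real API: each invocation is feed-forward, yet because the model's own outputs are appended back into its context, a loop through the conversation emerges.)

def generate(context: str) -> str:
    # Stand-in for a feed-forward LLM call: prompt in, completion out, no hidden state kept.
    return f"[reply based on {len(context)} characters of context]"

context = ""
for user_turn in ["Hi.", "Why did you say that?", "Where did that output come from?"]:
    context += f"\nUser: {user_turn}"
    reply = generate(context)               # stateless invocation
    context += f"\nAssistant: {reply}"      # ...but its own output becomes part of its next input

print(context)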

Reflexivity is, I think, the essence of self-awareness, and though a single LLM invocation cannot do this, an LLM that generates output and is subsequently asked about the source of this output must turn its attention inward, towards itself.

This is something like how Dennett describes how a zombie asked to look inward bootstraps itself into consciousness.

The behavior or output of the brain in one moment is the input to the brain in the next moment.

But LLMs do not exhibit this. They have a training phase, and then they respond to discrete queries. As far as I know, once it's out of the training phase, there is no feedback outside of the flow of a single conversation. None of that seems isomorphic to the kind of process that could support a flow of experience, whatever experience would mean for an LLM.

So to me, the suggestion that chatGPT could one day be used to functionally replace some subset of the brain that is responsible for mediating conscious experience in a human, just strikes me as absurd. 

One aspect of artificial neural networks that is worth considering here is that they are (by the 'universal approximation theorem') completely general and universal in the functions they can learn and model. That is, any logical circuit which can be computed in finite time can, in principle, be learned and implemented by a neural network. This gives me some pause whenever I consider claims about what neural networks will never be able to do.
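(The theorem itself concerns approximating functions given enough hidden units; as a small concrete illustration of the weaker claim made here, that a finite logical circuit can be learned, the sketch below trains a tiny network on XOR by plain gradient descent. The network size, learning rate, and step count are arbitrary choices.)

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])              # XOR: a small logical circuit

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # 4 hidden tanh units
W2, b2 = rng.normal(size=4), 0.0                # sigmoid output unit

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2))), h

for _ in range(5000):                           # full-batch gradient descent on cross-entropy
    p, h = forward(X)
    grad_logit = p - y
    grad_h = np.outer(grad_logit, W2) * (1 - h ** 2)   # backprop through tanh (uses pre-update W2)
    W2 -= 0.5 * h.T @ grad_logit / 4
    b2 -= 0.5 * grad_logit.mean()
    W1 -= 0.5 * X.T @ grad_h / 4
    b1 -= 0.5 * grad_h.mean(axis=0)

print(np.round(forward(X)[0]))                  # should converge to [0. 1. 1. 0.] for most seeds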



 

Conversely, if you stay in the high-level realm of consciousness ideas, well then you must face the problem of other minds. You know you are conscious, but you cannot prove or disprove the consciousness of others, at least not without first defining a theory of consciousness and explaining why some minds satisfy the definition and others do not. Until you present a theory of consciousness, this conversation is, I am afraid, doomed to continue in this circle forever.

This same conversation and outcome played out over the past few months on the extropy-chat-list, although with different actors, so I can say with some confidence where some topics are likely to lead.




In my experience with conversations like this, you usually have people on one side who take consciousness seriously as the only thing that is actually undeniable, and you have people who'd rather not talk about it, hand-wave it away, or outright deny it. That's the talking-past that usually happens, and that's what's happening here.


Do you have a theory for why neurology supports consciousness but silicon circuitry cannot?

I'm agnostic about this, but that's because I no longer assume physicalism. For me, the hard problem signals that physicalism is impossible. I've argued on this list many times as a physicalist, as one who believes in the possibility of artificial consciousness, uploading, etc. I've argued that there is something it is like to be a cybernetic system. But at the end of it all, I just couldn't overcome the problem of aesthetic valence. As an aside, the folks at Qualia Computing have put forth a theory that symmetry in the state space isomorphic to ongoing experience is what corresponds to positive valence, and anti-symmetry to negative valence.

But is there not much more to consciousness than these two binary states? Is the state space sufficiently large in their theory to account for the seemingly infinite possible diversity of conscious experience?


It's a very interesting argument but one is still forced to leap from a mathematical concept to a subjective feeling. Regardless, it's the most sophisticated attempt to reconcile the hard problem that I've come across.

I've since come around to the idealist stance that reality is fundamentally consciousness, and that the physical is a manifestation of that consciousness, like in a dream.

I agree. Or at least I would say, consciousness is more fundamental than the physical universe. It might then be more appropriate to say my position is a kind of neutral monism, where platonically existing information/computation is the glue that relates consciousness to physics and explains why we perceive an ordered world with apparent laws.

I explain this in much more detail here:



It has its own "hard problem", which is explaining why the world appears so orderly.

Yes, the "hard problem of matter" as some call it. I agree this problem is much more solvable than the hard problem of consciousness.


But if you don't get too hung up on that, it's not as clear that artificial consciousness is possible. It might be! It may even be that efforts like the above to explain how you get it from bit are relevant to idealist explanations of physical reality. But the challenge with idealism is that the explanations that are on offer sound more like mythology and metaphor than science. I should note that Bernardo Kastrup

I will have to look into him.

 has some interesting ideas on idealism, and he approaches it in a way that is totally devoid of woo. That said, one really intriguing set of evidence in favor of idealism is near-death-experience (NDE) testimony, which is pretty remarkable if one actually studies it.

It is indeed.

Jason 

Jason Resch

unread,
May 23, 2023, 6:00:09 PM5/23/23
to Everything List


On Tue, May 23, 2023, 4:14 PM Terren Suydam <terren...@gmail.com> wrote:


On Tue, May 23, 2023 at 2:27 PM Jason Resch <jason...@gmail.com> wrote:


On Tue, May 23, 2023 at 1:15 PM Terren Suydam <terren...@gmail.com> wrote:


On Tue, May 23, 2023 at 11:08 AM Dylan Distasio <inte...@gmail.com> wrote:
 
And yes, I'm arguing that a true simulation (let's say for the sake of a thought experiment we were able to replicate every neural connection of a human being in code, including the connectomes, and neurotransmitters, along with a simulated nerve that was connected to a button on the desk we could press which would simulate the signal sent when a biological pain receptor is triggered) would feel pain that is just as real as the pain you and I feel as biological organisms.

This follows from the physicalist no-zombies-possible stance. But it still runs into the hard problem, basically: how does stuff give rise to experience?


I would say stuff doesn't give rise to conscious experience. Conscious experience is the logically necessary and required state of knowledge that is present in any consciousness-necessitating behaviors. If you design a simple robot with a camera and robot arm that is able to reliably catch a ball thrown in its general direction, then something in that system *must* contain knowledge of the ball's relative position and trajectory. It simply isn't logically possible to have a system that behaves in all situations as if it knows where the ball is, without knowing where the ball is. Consciousness is simply the state of being with knowledge.

Con- "Latin for with"
-Scious- "Latin for knowledge"
-ness "English suffix meaning the state of being X"

Consciousness -> The state of being with knowledge.

There is an infinite variety of potential states and levels of knowledge, and this contributes to much of the confusion, but boiled down to the simplest essence of what is or isn't conscious, it is all about knowledge states. Knowledge states require activity/reactivity to the presence of information, and counterfactual behaviors (if/then, greater-than/less-than, discriminations and comparisons that lead to different downstream consequences in a system's behavior). At least, this is my theory of consciousness.

Jason

This still runs into the valence problem though. Why does some "knowledge" correspond with a positive feeling and other knowledge with a negative feeling?

That is a great question, though I'm not sure it's fundamentally insoluble within a model where every conscious state is a particular state of knowledge.

I would propose that having positive and negative experiences, i.e. pain or pleasure, requires knowledge states with a certain minimum degree of sophistication. For example, knowing:

Pain being associated with knowledge states such as: "I don't like this, this is bad, I'm in pain, I want to change my situation."

Pleasure being associated with knowledge states such as: "This is good for me, I could use more of this, I don't want this to end."

Such knowledge states require a degree of reflexive awareness, to have a notion of a self where some outcomes may be either positive or negative to that self, and perhaps some notion of time or a sufficient agency to be able to change one's situation.

Some have argued that plants can't feel pain because there's little they can do to change their situation (though I'm agnostic on this).

  I'm not talking about the functional accounts of positive and negative experiences. I'm talking about phenomenology. The functional aspect of it is not irrelevant, but to focus only on that is to sweep the feeling under the rug. So many dialogs on this topic basically terminate here, where it's just a clash of belief about the relative importance of consciousness and phenomenology as the mediator of all experience and knowledge.

You raise important questions which no complete theory of consciousness should ignore. I think one reason things break down here is because there's such incredible complexity behind and underlying the states of consciousness we humans perceive and no easy way to communicate all the salient properties of those experiences.

Jason 

Stathis Papaioannou

unread,
May 24, 2023, 1:15:40 AM5/24/23
to everyth...@googlegroups.com
On Wed, 24 May 2023 at 04:03, Jason Resch <jason...@gmail.com> wrote:


On Tue, May 23, 2023 at 7:15 AM Stathis Papaioannou <stat...@gmail.com> wrote:


On Tue, 23 May 2023 at 21:09, Jason Resch <jason...@gmail.com> wrote:
As I see this thread, Terren and Stathis are both talking past each other. Please either of you correct me if I am wrong, but in an effort to clarify and perhaps resolve this situation:

I believe Stathis is saying the functional substitution having the same fine-grained causal organization *would* have the same phenomenology, the same experience, and the same qualia as the brain with the same fine-grained causal organization.

Therefore, there is no disagreement between your positions with regards to symbols groundings, mappings, etc.

When you both discuss the problem of symbology, or bits, etc. I believe this is partly responsible for why you are both talking past each other, because there are many levels involved in brains (and computational systems). I believe you were discussing completely different levels in the hierarchical organization.

There are high-level parts of minds, such as ideas, thoughts, feelings, quale, etc. and there are low-level, be they neurons, neurotransmitters, atoms, quantum fields, and laws of physics as in human brains, or circuits, logic gates, bits, and instructions as in computers.

I think when Terren mentions a "symbol for the smell of grandmother's kitchen" (GMK) the trouble is we are crossing a myriad of levels. The quale or idea or memory of the smell of GMK is a very high-level feature of a mind. When Terren asks for or discusses a symbol for it, a complete answer/description for it can only be supplied in terms of a vast amount of information concerning low-level structures, be they patterns of neuron firings, or patterns of bits being processed. When we consider things down at this low level, however, we lose all context for what the meaning, idea, and quale are or where or how they come in. We cannot see or find the idea of GMK in any neuron, no more than we can see or find it in any bit.

Of course then it should seem deeply mysterious, if not impossible, how we get "it" (GMK or otherwise) from "bit", but to me, this is no greater a leap from how we get "it" from a bunch of cells squirting ions back and forth. Trying to understand a smartphone by looking at the flows of electrons is a similar kind of problem, it would seem just as difficult or impossible to explain and understand the high-level features and complexity out of the low-level simplicity.

This is why it's crucial to bear in mind and explicitly discuss the level one is operating on when one discusses symbols, substrates, or quale. In summary, I think a chief reason you have been talking past each other is that you are each operating on different assumed levels.

Please correct me if you believe I am mistaken and know I only offer my perspective in the hope it might help the conversation.

I think you’ve captured my position. But in addition I think replicating the fine-grained causal organisation is not necessary in order to replicate higher level phenomena such as GMK. By extension of Chalmers’ substitution experiment,

Note that Chalmers's argument is based on assuming the functional substitution occurs at a certain level of fine-grainedness. If you drop this step and look only at the top-most input-output of the mind as a black box, then you can no longer distinguish a rock from a dreaming person, nor a calculator computing 2+3 from a human computing 2+3, and one also runs into the Blockhead "lookup table" argument against functionalism.

Yes, those are perhaps problems with functionalism. But a major point in Chalmers' argument is that if qualia were substrate-specific (hence, functionalism false) it would be possible to make a partial zombie or an entity whose consciousness and behaviour diverged from the point the substitution was made. And this argument works not just by replacing the neurons with silicon chips, but by replacing any part of the human with anything that reproduces the interactions with the remaining parts.
 
Accordingly, I think intermediate steps and the fine-grained organization are important (to some minimum level of fidelity) but as Bruno would say, we can never be certain what this necessary substitution level is. Is it neocortical columns, is it the connectome, is it the proteome, is it the molecules and atoms, is it QFT? Chalmers argues that at least at the level where noise introduces deviations in a brain simulation, simulating lower levels should not be necessary, as human consciousness appears robust to such noise at low levels (photon strikes, brownian motion, quantum uncertainties, etc.)

--
Stathis Papaioannou

Jason Resch

unread,
May 24, 2023, 1:37:24 AM5/24/23
to Everything List


On Wed, May 24, 2023, 1:15 AM Stathis Papaioannou <stat...@gmail.com> wrote:


On Wed, 24 May 2023 at 04:03, Jason Resch <jason...@gmail.com> wrote:


On Tue, May 23, 2023 at 7:15 AM Stathis Papaioannou <stat...@gmail.com> wrote:


On Tue, 23 May 2023 at 21:09, Jason Resch <jason...@gmail.com> wrote:
As I see this thread, Terren and Stathis are both talking past each other. Please either of you correct me if i am wrong, but in an effort to clarify and perhaps resolve this situation:

I believe Stathis is saying the functional substitution having the same fine-grained causal organization *would* have the same phenomenology, the same experience, and the same qualia as the brain with the same fine-grained causal organization.

Therefore, there is no disagreement between your positions with regards to symbols groundings, mappings, etc.

When you both discuss the problem of symbology, or bits, etc. I believe this is partly responsible for why you are both talking past each other, because there are many levels involved in brains (and computational systems). I believe you were discussing completely different levels in the hierarchical organization.

There are high-level parts of minds, such as ideas, thoughts, feelings, qualia, etc., and there are low-level parts, be they neurons, neurotransmitters, atoms, quantum fields, and laws of physics as in human brains, or circuits, logic gates, bits, and instructions as in computers.

I think when Terren mentions a "symbol for the smell of grandmother's kitchen" (GMK) the trouble is we are crossing a myriad of levels. The quale or idea or memory of the smell of GMK is a very high-level feature of a mind. When Terren asks for or discusses a symbol for it, a complete answer/description for it can only be supplied in terms of a vast amount of information concerning low-level structures, be they patterns of neuron firings, or patterns of bits being processed. When we consider things down at this low level, however, we lose all context for what the meaning, idea, and quale are or where or how they come in. We cannot see or find the idea of GMK in any neuron, any more than we can see or find it in any bit.

Of course then it should seem deeply mysterious, if not impossible, how we get "it" (GMK or otherwise) from "bit", but to me, this is no greater a leap from how we get "it" from a bunch of cells squirting ions back and forth. Trying to understand a smartphone by looking at the flows of electrons is a similar kind of problem, it would seem just as difficult or impossible to explain and understand the high-level features and complexity out of the low-level simplicity.

This is why it's crucial to bear in mind and explicitly discuss the level one is operating on when one discusses symbols, substrates, or qualia. In summary, I think a chief reason you have been talking past each other is that you are each operating on different assumed levels.

Please correct me if you believe I am mistaken and know I only offer my perspective in the hope it might help the conversation.

I think you’ve captured my position. But in addition I think replicating the fine-grained causal organisation is not necessary in order to replicate higher level phenomena such as GMK. By extension of Chalmers’ substitution experiment,

Note that Chalmers's argument is based on assuming the functional substitution occurs at a certain level of fine-grainedness. If you drop this step and look only at the top-most input-output of the mind as a black box, then you can no longer distinguish a rock from a dreaming person, or a calculator computing 2+3 from a human computing 2+3, and one also runs into the Blockhead "lookup table" argument against functionalism.

Yes, those are perhaps problems with functionalism. But a major point in Chalmers' argument is that if qualia were substrate-specific (and hence functionalism false), it would be possible to make a partial zombie, or an entity whose consciousness and behaviour diverged from the point where the substitution was made. And this argument works not just by replacing the neurons with silicon chips, but by replacing any part of the human with anything that reproduces the interactions with the remaining parts.


How deeply do you have to go when you consider or define those "other parts" though? That seems to be a critical but unstated assumption, and something that depends on how finely grained you consider the relevant/important parts of a brain to be.

For reference, this is what Chalmers says:


"In this paper I defend this view. Specifically, I defend a principle of organizational invariance, holding that experience is invariant across systems with the same fine-grained functional organization. More precisely, the principle states that given any system that has conscious experiences, then any system that has the same functional organization at a fine enough grain will have qualitatively identical conscious experiences. A full specification of a system's fine-grained functional organization will fully determine any conscious experiences that arise."

By substituting a coarse-grained functional organization for a fine-grained one, you change the functional definition and can no longer guarantee identical experiences, nor identical behaviors in all possible situations. They're no longer "functional isomorphs" as Chalmers's argument requires.

By substituting a recording of a computation for a computation, you replace a conscious mind with a tape recording of the prior behavior of a conscious mind. This is what happens in the Blockhead thought experiment. The result is something that passes a Turing test, but which is itself not conscious (though creating such a recording requires prior invocation of a conscious mind or extraordinary luck).

Jason 






 
Accordingly, I think intermediate steps and the fine-grained organization are important (to some minimum level of fidelity), but as Bruno would say, we can never be certain what this necessary substitution level is. Is it neocortical columns, the connectome, the proteome, the molecules and atoms, or QFT? Chalmers argues that at least at the level where noise introduces deviations in a brain simulation, simulating lower levels should not be necessary, as human consciousness appears robust to such noise at low levels (photon strikes, Brownian motion, quantum uncertainties, etc.).

--
Stathis Papaioannou


Stathis Papaioannou

unread,
May 24, 2023, 3:20:08 AM5/24/23
to everyth...@googlegroups.com
On Wed, 24 May 2023 at 15:37, Jason Resch <jason...@gmail.com> wrote:


On Wed, May 24, 2023, 1:15 AM Stathis Papaioannou <stat...@gmail.com> wrote:


On Wed, 24 May 2023 at 04:03, Jason Resch <jason...@gmail.com> wrote:


On Tue, May 23, 2023 at 7:15 AM Stathis Papaioannou <stat...@gmail.com> wrote:


On Tue, 23 May 2023 at 21:09, Jason Resch <jason...@gmail.com> wrote:
As I see this thread, Terren and Stathis are both talking past each other. Please either of you correct me if i am wrong, but in an effort to clarify and perhaps resolve this situation:

I believe Stathis is saying the functional substitution having the same fine-grained causal organization *would* have the same phenomenology, the same experience, and the same qualia as the brain with the same fine-grained causal organization.

Therefore, there is no disagreement between your positions with regards to symbols groundings, mappings, etc.

When you both discuss the problem of symbology, or bits, etc. I believe this is partly responsible for why you are both talking past each other, because there are many levels involved in brains (and computational systems). I believe you were discussing completely different levels in the hierarchical organization.

There are high-level parts of minds, such as ideas, thoughts, feelings, qualia, etc., and there are low-level parts, be they neurons, neurotransmitters, atoms, quantum fields, and laws of physics as in human brains, or circuits, logic gates, bits, and instructions as in computers.

I think when Terren mentions a "symbol for the smell of grandmother's kitchen" (GMK) the trouble is we are crossing a myriad of levels. The quale or idea or memory of the smell of GMK is a very high-level feature of a mind. When Terren asks for or discusses a symbol for it, a complete answer/description for it can only be supplied in terms of a vast amount of information concerning low-level structures, be they patterns of neuron firings, or patterns of bits being processed. When we consider things down at this low level, however, we lose all context for what the meaning, idea, and quale are or where or how they come in. We cannot see or find the idea of GMK in any neuron, any more than we can see or find it in any bit.

Of course then it should seem deeply mysterious, if not impossible, how we get "it" (GMK or otherwise) from "bit", but to me, this is no greater a leap from how we get "it" from a bunch of cells squirting ions back and forth. Trying to understand a smartphone by looking at the flows of electrons is a similar kind of problem, it would seem just as difficult or impossible to explain and understand the high-level features and complexity out of the low-level simplicity.

This is why it's crucial to bear in mind and explicitly discuss the level one is operating on when one discusses symbols, substrates, or qualia. In summary, I think a chief reason you have been talking past each other is that you are each operating on different assumed levels.

Please correct me if you believe I am mistaken and know I only offer my perspective in the hope it might help the conversation.

I think you’ve captured my position. But in addition I think replicating the fine-grained causal organisation is not necessary in order to replicate higher level phenomena such as GMK. By extension of Chalmers’ substitution experiment,

Note that Chalmers's argument is based on assuming the functional substitution occurs at a certain level of fine-grainedness. If you drop this step and look only at the top-most input-output of the mind as a black box, then you can no longer distinguish a rock from a dreaming person, or a calculator computing 2+3 from a human computing 2+3, and one also runs into the Blockhead "lookup table" argument against functionalism.

Yes, those are perhaps problems with functionalism. But a major point in Chalmers' argument is that if qualia were substrate-specific (and hence functionalism false), it would be possible to make a partial zombie, or an entity whose consciousness and behaviour diverged from the point where the substitution was made. And this argument works not just by replacing the neurons with silicon chips, but by replacing any part of the human with anything that reproduces the interactions with the remaining parts.


How deeply do you have to go when you consider or define those "other parts" though? That seems to be a critical but unstated assumption, and something that depends on how finely grained you consider the relevant/important parts of a brain to be.

For reference, this is what Chalmers says:


"In this paper I defend this view. Specifically, I defend a principle of organizational invariance, holding that experience is invariant across systems with the same fine-grained functional organization. More precisely, the principle states that given any system that has conscious experiences, then any system that has the same functional organization at a fine enough grain will have qualitatively identical conscious experiences. A full specification of a system's fine-grained functional organization will fully determine any conscious experiences that arise."

By substituting a coarse-grained functional organization for a fine-grained one, you change the functional definition and can no longer guarantee identical experiences, nor identical behaviors in all possible situations. They're no longer "functional isomorphs" as Chalmers's argument requires.

By substituting a recording of a computation for a computation, you replace a conscious mind with a tape recording of the prior behavior of a conscious mind. This is what happens in the Blockhead thought experiment. The result is something that passes a Turing test, but which is itself not conscious (though creating such a recording requires prior invocation of a conscious mind or extraordinary luck).

The replaced part must of course be functionally identical, otherwise both the behaviour and the qualia could change. But this does not mean that it must replicate the functional organisation at a particular scale. If a volume of brain tissue is removed, in order to guarantee identical behaviour the replacement part must interact at the cut surfaces of the surrounding tissue in the same way as the original. It is at these surfaces that the interactions must be sufficiently fine-grained, but what goes on inside the volume doesn't matter: it could be a conventional simulation of neurons, it could be a giant lookup table. Also, the volume could be any size, and could comprise an arbitrarily large proportion of the subject.
 

--
Stathis Papaioannou

John Clark

unread,
May 24, 2023, 5:35:46 AM5/24/23
to everyth...@googlegroups.com
On Wed, May 24, 2023 at 1:37 AM Jason Resch <jason...@gmail.com> wrote:

> By substituting a recording of a computation for a computation, you replace a conscious mind with a tape recording of the prior behavior of a conscious mind. 

But you'd still need a computation to find the particular tape recording that you need, and the larger your library of recordings the more complex the computation you'd need to do would be.

> This is what happens in the Blockhead thought experiment

And in that very silly thought experiment your library needs to contain every sentence that is syntactically and grammatically correct. And there are an astronomical number to an astronomical power of those. Even if every electron, proton, neutron, photon and neutrino in the observable universe could record 1000 million billion trillion sentences there would still be well over a googolplex number of sentences that remained unrecorded.  Blockhead is just a slight variation on Searle's idiotic Chinese room.

John K Clark    See what's on my new list at  Extropolis

Jason Resch

unread,
May 24, 2023, 7:56:05 AM5/24/23
to Everything List
Can I ask what you believe would happen to the consciousness of the individual if you replaced the right hemisphere of the brain with a black box that interfaced identically with the left hemisphere, but inside this black box is nothing but a random number generator, and it is only by fantastic luck that the output of the RNG happens to have caused its interfacing with the left hemisphere to remain unchanged?


After answering that, let me ask what you think would happen to the consciousness of the individual if we replaced all but one neuron in the brain with this RNG-driven black box that continues to stimulate this sole remaining neuron in exactly the same way as the rest of the brain would have?


Jason 

Jason Resch

unread,
May 24, 2023, 8:07:02 AM5/24/23
to Everything List
It's very different.

Note that you don't need to realize or store every possible input for the central point of Block's argument to work.

For example, let's say that AlphaZero was conscious for the purposes of this argument. We record the response AlphaZero produces to each of the 361 possible opening moves on a Go board and store the results in a lookup table. This table would be only a few kilobytes. Then we can ask: what has happened to the consciousness of AlphaZero? Here we have a functionally equivalent response for every possible second move, but we've done away with all the complexity of the prior computation.

What the substitution level argument really asks is how far up in the subroutines of a mind's program we can implement memoization ( https://en.m.wikipedia.org/wiki/Memoization ) before the result is some kind of altered consciousness, or at least some diminished contribution to the measure of a conscious experience (under duplicationist conceptions of measure).
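To make that concrete, here is a minimal sketch of memoization (hypothetical code, nothing to do with AlphaZero's actual implementation): once an input has been seen, later calls return the stored answer and the underlying computation never runs again.

#include <map>

// Stand-in for some costly subroutine (e.g., a deep tree search).
int expensiveSubroutine(int x) {
    int result = 0;
    for (int i = 0; i < 1000000; ++i) result += (x * i) % 7;
    return result;
}

// Memoized wrapper: identical input-output behaviour, but repeated inputs
// are answered by a pure table lookup with no computation behind them.
int memoized(int x) {
    static std::map<int, int> cache;
    auto it = cache.find(x);
    if (it != cache.end()) return it->second;  // lookup only, no "thinking"
    int result = expensiveSubroutine(x);       // first call does the real work
    cache[x] = result;
    return result;
}

The substitution-level question is then at which point, as more and more of a mind's subroutines are replaced by such tables, something about the consciousness changes.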


Jason 

Brent Meeker

unread,
May 24, 2023, 12:14:16 PM5/24/23
to everyth...@googlegroups.com
Doesn't it need to be able to change in order to have memory and to learn?

Brent

Stathis Papaioannou

unread,
May 24, 2023, 1:20:27 PM5/24/23
to everyth...@googlegroups.com
I was going to propose just that next: nothing, the consciousness would continue.

After answering that, let me ask what you think would happen to the consciousness of the individual if we replaced all but one neuron in the brain with this RNG-driven black box that continues to stimulate this sole remaining neuron in exactly the same way as the rest of the brain would have?

The consciousness would continue. And then we could get rid of the neuron and the consciousness would continue. So we end up with the same result as the rock implementing all computations and hence all consciousnesses, which amounts to saying that consciousness exists independently of any hardware. This is consistent with Bruno Marchal’s theory.
--
Stathis Papaioannou

Stathis Papaioannou

unread,
May 24, 2023, 1:41:50 PM5/24/23
to everyth...@googlegroups.com
Yes, I meant change from what the original parts would do. If you get a neural implant you would want it to leave your brain functioning as it was originally, which means all the remaining neurons firing in the same way as they were originally. This would guarantee that your consciousness would also continue as it was originally.
--
Stathis Papaioannou

Brent Meeker

unread,
May 24, 2023, 2:08:00 PM5/24/23
to everyth...@googlegroups.com
Except I couldn't learn anything or form any new memories; at least not if they depended on the implant.  Right?

Brent

John Clark

unread,
May 24, 2023, 2:20:16 PM5/24/23
to everyth...@googlegroups.com
On Wed, May 24, 2023 at 8:07 AM Jason Resch <jason...@gmail.com> wrote:

>> But you'd still need a computation to find the particular tape recording that you need, and the larger your library of recordings the more complex the computation you'd need to do would be. And in that very silly thought experiment your library needs to contain every sentence that is syntactically and grammatically correct. And there are an astronomical number to an astronomical power of those. Even if every electron, proton, neutron, photon and neutrino in the observable universe could record 1000 million billion trillion sentences there would still be well over a googolplex number of sentences that remained unrecorded.  Blockhead is just a slight variation on Searle's idiotic Chinese room.


> It's very different. Note that you don't need to realize or store every possible input for the central point of Block's argument to work.
For example, let's say that AlphaZero was conscious for the purposes of this argument. We record the response AlphaZero produces to each of the 361 possible opening moves on a Go board and store the results in a lookup table. This table would be only a few kilobytes.

Nobody in their right mind would conclude that AlphaZero is intelligent or conscious after just watching the opening move, but after watching an entire game is another matter, because a typical game of Go has 150 moves, there are about 10^360 different 150-move Go games, and there are only about 10^78 atoms in the observable universe. And the number of possible responses that GPT4 can produce is VASTLY greater than 10^360.
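(A rough back-of-envelope check of that figure, assuming an average of roughly 250 legal moves available per turn: 250^150 = 10^(150 x log10 250) ≈ 10^360, which is indeed incomparably more than the ~10^78 atoms.)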

 
> Then we can ask, what has happened to the conscious of AlphaZero?

I'm not saying intelligent behavior creates consciousness; I'm just saying intelligent behavior is a TEST for consciousness, and an imperfect one too, but it's the only test for consciousness that we've got. I'm saying if something displays intelligent behavior then it's intelligent and conscious, but if something does NOT display intelligent behavior then it may or may not be intelligent or conscious.

John K Clark    See what's on my new list at  Extropolis

Stathis Papaioannou

unread,
May 24, 2023, 3:53:43 PM5/24/23
to everyth...@googlegroups.com
It wouldn’t be functionally equivalent in that case.
--
Stathis Papaioannou

Jason Resch

unread,
May 24, 2023, 4:46:13 PM5/24/23
to everyth...@googlegroups.com
An RNG has a different functional description, though, from any conventional mind. It seems to me you may be operating within a more physicalist notion of consciousness than a functionalist one, in that you seem to be putting more weight on the existence of a particular physical state being reached, regardless of how it got there. In my view (as a functionalist), being in a particular physical state is not sufficient. It also matters how one reached that particular state. An RNG and a human can both output the string "I am conscious", but in my view only one of them is.
 

After answering that, let me ask what you think would happen to the consciousness of the individual if we replaced all but one neuron in the brain with this RNG-driven black box that continues to stimulate this sole remaining neuron in exactly the same way as the rest of the brain would have?

The consciousness would continue. And then we could get rid of the neuron and the consciousness would continue. So we end up with the same result as the rock implementing all computations and hence all consciousnesses,

Rocks don't implement all computations. I am aware some philosophers have said as much, but they achieve this trick by mapping successive states of a computation onto time-ordered states of the rock. I don't think any computer scientist accepts this as valid. The transitions of the rock states lack the counterfactual relations which are necessary for computation. If you were to try to map states S_1 through S_5000 of a rock to a program computing Pi, looking at state S_6000 of the rock won't provide you any meaningful information about what the next digit of Pi happens to be.
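A minimal sketch of the counterfactual point (a made-up toy, not a model of a rock): in a genuine computation the later states depend on the earlier ones, so perturbing an early state changes everything downstream, whereas a pre-existing sequence with computational labels pasted onto it has no such dependence.

#include <cstdio>
#include <vector>

// A toy "genuine" computation: each state is derived from its predecessor.
std::vector<int> run(int initialState, int steps) {
    std::vector<int> states{initialState};
    for (int i = 0; i < steps; ++i)
        states.push_back(3 * states.back() + 1);
    return states;
}

int main() {
    // Change the early state and the downstream states change with it.
    std::printf("%d\n", run(2, 10).back());
    std::printf("%d\n", run(5, 10).back());
    // A rock's microstates, by contrast, unfold the same way no matter what
    // labels we assign to them, so the labels tell us nothing about "the next digit".
    return 0;
}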
 
which amounts to saying that consciousness exists independently of any hardware. This is consistent with Bruno Marchal’s theory.

It depends how you define hardware. Marchal's theory still requires computations supported by platonic truths/number relations. This is not physical hardware, but it's still a platform for supporting threads of computation.

Jason

Stathis Papaioannou

unread,
May 24, 2023, 9:31:37 PM5/24/23
to everyth...@googlegroups.com
An RNG would be a bad design choice because it would be extremely unreliable. However, as a thought experiment, it could work. If the visual cortex were removed and replaced with an RNG which for five minutes replicated the interactions with the remaining brain, the subject would behave as if they had normal vision and report that they had normal vision, then after five minutes behave as if they were blind and report that they were blind. It is perhaps contrary to intuition that the subject would really have visual experiences in that five minute period, but I don't think there is any other plausible explanation.
 
After answering that, let me ask what you think would happen to the consciousness of the individual if we replaced all but one neuron in the brain with this RNG-driven black box that continues to stimulate this sole remaining neuron in exactly the same way as the rest of the brain would have?

The consciousness would continue. And then we could get rid of the neuron and the consciousness would continue. So we end up with the same result as the rock implementing all computations and hence all consciousnesses,

Rocks don't implement all computations. I am aware some philosophers have said as much, but they achieve this trick by labeling successive states of a computation to each time-ordered state of the rock. I don't think any computer scientist accepts this as valid. The transitions of the rock states lack the counterfactual relations which are necessary for computation. If you were to try to map states S_1 to state S_5000 of a rock to a program computing Pi, looking at state S_6000 of the rock won't provide you any meaningful information about what the next digit of Pi happens to be.
 
Yes, so it can't be used as a computer that interacts with its environment and provides useful results. But we could say that the computation is still in there hidden, in the way every possible sculpture is hidden inside a block of marble.
 
which amounts to saying that consciousness exists independently of any hardware. This is consistent with Bruno Marchal’s theory.

It depends how you define hardware. Marchal's theory still requires computations supported by platonic truths/number relations. This is not physical hardware, but it's still a platform for supporting threads of computation.

Jason


Jason Resch

unread,
May 24, 2023, 9:48:30 PM5/24/23
to Everything List
I think they would be a visual zombie in that five minute period, though as described they would not be able to report any difference.

I think if one's entire brain were replaced by an RNG, they would be a total zombie who would fool us into thinking they were conscious and we would not notice a difference. So by extension a brain partially replaced by an RNG would be a partial zombie that fooled the other parts of the brain into thinking nothing was amiss.


 
After answering that, let me ask what you think would happen to the consciousness of the individual if we replaced all but one neuron in the brain with this RNG-driven black box that continues to stimulate this sole remaining neuron in exactly the same way as the rest of the brain would have?

The consciousness would continue. And then we could get rid of the neuron and the consciousness would continue. So we end up with the same result as the rock implementing all computations and hence all consciousnesses,

Rocks don't implement all computations. I am aware some philosophers have said as much, but they achieve this trick by labeling successive states of a computation to each time-ordered state of the rock. I don't think any computer scientist accepts this as valid. The transitions of the rock states lack the counterfactual relations which are necessary for computation. If you were to try to map states S_1 to state S_5000 of a rock to a program computing Pi, looking at state S_6000 of the rock won't provide you any meaningful information about what the next digit of Pi happens to be.
 
Yes, so it can't be used as a computer that interacts with its environment and provides useful results. But we could say that the computation is still in there hidden, in the way every possible sculpture is hidden inside a block of marble.

I am not so sure. All the work is offloaded to the one doing the interpretation; none of the relations are inherent in the state transitions. If you change one of the preceding states, it does not alter the flow of the computation in the expected way, and the period of the rock's state transitions (its Poincaré recurrence time) bears no relation to the period of the purported computation being executed.

Jason 

Stathis Papaioannou

unread,
May 24, 2023, 9:56:25 PM5/24/23
to everyth...@googlegroups.com
On Thu, 25 May 2023 at 11:48, Jason Resch <jason...@gmail.com> wrote:

>An RNG would be a bad design choice because it would be extremely unreliable. However, as a thought experiment, it could work. If the visual cortex were removed and replaced with an RNG which for five minutes replicated the interactions with the remaining brain, the subject would behave as if they had normal vision and report that they had normal vision, then after five minutes behave as if they were blind and report that they were blind. It is perhaps contrary to intuition that the subject would really have visual experiences in that five minute period, but I don't think there is any other plausible explanation.

I think they would be a visual zombie in that five minute period, though as described they would not be able to report any difference.

I think if one's entire brain were replaced by an RNG, they would be a total zombie who would fool us into thinking they were conscious and we would not notice a difference. So by extension a brain partially replaced by an RNG would be a partial zombie that fooled the other parts of the brain into thinking nothing was amiss.

I think the concept of a partial zombie makes consciousness nonsensical. How would I know that I am not a visual zombie now, or a visual zombie every Tuesday, Thursday and Saturday? What is the advantage of having "real" visual experiences if they make no objective difference and no subjective difference either?


--
Stathis Papaioannou

Stathis Papaioannou

unread,
May 24, 2023, 9:59:59 PM5/24/23
to everyth...@googlegroups.com
All the work is offloaded onto the one doing the interpretation, but what if we consider a virtual environment with no outside input?

--
Stathis Papaioannou

Jason Resch

unread,
May 24, 2023, 11:59:02 PM5/24/23
to Everything List


On Wed, May 24, 2023, 9:56 PM Stathis Papaioannou <stat...@gmail.com> wrote:


On Thu, 25 May 2023 at 11:48, Jason Resch <jason...@gmail.com> wrote:

>An RNG would be a bad design choice because it would be extremely unreliable. However, as a thought experiment, it could work. If the visual cortex were removed and replaced with an RNG which for five minutes replicated the interactions with the remaining brain, the subject would behave as if they had normal vision and report that they had normal vision, then after five minutes behave as if they were blind and report that they were blind. It is perhaps contrary to intuition that the subject would really have visual experiences in that five minute period, but I don't think there is any other plausible explanation.

I think they would be a visual zombie in that five minute period, though as described they would not be able to report any difference.

I think if one's entire brain were replaced by an RNG, they would be a total zombie who would fool us into thinking they were conscious and we would not notice a difference. So by extension a brain partially replaced by an RNG would be a partial zombie that fooled the other parts of the brain into thinking nothing was amiss.

I think the concept of a partial zombie makes consciousness nonsensical.

It borders on the nonsensical, but between the two bad alternatives I find the idea of an RNG instantiating human consciousness somewhat less sensical than the idea of partial zombies.


How would I know that I am not a visual zombie now, or a visual zombie every Tuesday, Thursday and Saturday?

Here, we have to be careful what we mean by "I". Our own brains have various spheres of consciousness, as demonstrated by the Wada test: we can shut down one hemisphere of the brain and lose some awareness and functionality, such as the ability to form words, and yet remain conscious. I think being a partial zombie would be like that, having one's sphere of awareness shrink.


What is the advantage of having "real" visual experiences if they make no objective difference and no subjective difference either?

The advantage of real computations (which imply having real awareness/experiences) is that real computations are more reliable than RNGs for producing intelligent behavioral responses.

Jason 

Stathis Papaioannou

unread,
May 25, 2023, 12:30:45 AM5/25/23
to everyth...@googlegroups.com
On Thu, 25 May 2023 at 13:59, Jason Resch <jason...@gmail.com> wrote:


On Wed, May 24, 2023, 9:56 PM Stathis Papaioannou <stat...@gmail.com> wrote:


On Thu, 25 May 2023 at 11:48, Jason Resch <jason...@gmail.com> wrote:

>An RNG would be a bad design choice because it would be extremely unreliable. However, as a thought experiment, it could work. If the visual cortex were removed and replaced with an RNG which for five minutes replicated the interactions with the remaining brain, the subject would behave as if they had normal vision and report that they had normal vision, then after five minutes behave as if they were blind and report that they were blind. It is perhaps contrary to intuition that the subject would really have visual experiences in that five minute period, but I don't think there is any other plausible explanation.

I think they would be a visual zombie in that five minute period, though as described they would not be able to report any difference.

I think if one's entire brain were replaced by an RNG, they would be a total zombie who would fool us into thinking they were conscious and we would not notice a difference. So by extension a brain partially replaced by an RNG would be a partial zombie that fooled the other parts of the brain into thinking nothing was amiss.

I think the concept of a partial zombie makes consciousness nonsensical.

It borders on the nonsensical, but between the two bad alternatives I find the idea of a RNG instantiating human consciousness somewhat less sensical than the idea of partial zombies.

If consciousness persists no matter what the brain is replaced with, as long as the output remains the same, this is consistent with the idea that consciousness does not reside in a particular substance (even a magical substance) or in a particular process. This is a strange idea, but it is akin to the existence of platonic objects. The number three can be implemented by arranging three objects in a row, but it does not depend on those three objects unless it is being used for a particular purpose, such as three beads on an abacus.
 
How would I know that I am not a visual zombie now, or a visual zombie every Tuesday, Thursday and Saturday?

Here, we have to be careful what we mean by "I". Our own brains have various spheres of consciousness as demonstrated by the Wada Test: we can shut down one hemisphere of the brain and lose partial awareness and functionality such as the ability to form words and yet one remains conscious. I think being a partial zombie would be like that, having one's sphere of awareness shrink.

But the subject's sphere of awareness would not shrink in the thought experiment, since by assumption their behaviour stays the same, while if their sphere of awareness shrank they would notice that something was different and say so.
 
What is the advantage of having "real" visual experiences if they make no objective difference and no subjective difference either?

The advantage of real computations (which imply having real awareness/experiences) is that real computations are more reliable than RNGs for producing intelligent behavioral responses.

Yes, so an RNG would be a bad design choice. But the point remains that if the output of the system remains the same, the consciousness remains the same, regardless of how the system functions. The reasonable-sounding belief that consciousness somehow resides in the brain, in particular biochemical reactions or even in electronic circuits simulating the brain, is wrong.


--
Stathis Papaioannou

Brent Meeker

unread,
May 25, 2023, 12:47:33 AM5/25/23
to everyth...@googlegroups.com


On 5/24/2023 9:29 PM, Stathis Papaioannou wrote:


On Thu, 25 May 2023 at 13:59, Jason Resch <jason...@gmail.com> wrote:


On Wed, May 24, 2023, 9:56 PM Stathis Papaioannou <stat...@gmail.com> wrote:


On Thu, 25 May 2023 at 11:48, Jason Resch <jason...@gmail.com> wrote:

>An RNG would be a bad design choice because it would be extremely unreliable. However, as a thought experiment, it could work. If the visual cortex were removed and replaced with an RNG which for five minutes replicated the interactions with the remaining brain, the subject would behave as if they had normal vision and report that they had normal vision, then after five minutes behave as if they were blind and report that they were blind. It is perhaps contrary to intuition that the subject would really have visual experiences in that five minute period, but I don't think there is any other plausible explanation.

I think they would be a visual zombie in that five minute period, though as described they would not be able to report any difference.

I think if one's entire brain were replaced by an RNG, they would be a total zombie who would fool us into thinking they were conscious and we would not notice a difference. So by extension a brain partially replaced by an RNG would be a partial zombie that fooled the other parts of the brain into thinking nothing was amiss.

I think the concept of a partial zombie makes consciousness nonsensical.

It borders on the nonsensical, but between the two bad alternatives I find the idea of a RNG instantiating human consciousness somewhat less sensical than the idea of partial zombies.

If consciousness persists no matter what the brain is replaced with as long as the output remains the same this is consistent with the idea that consciousness does not reside in a particular substance (even a magical substance) or in a particular process. This is a strange idea, but it is akin to the existence of platonic objects. The number three can be implemented by arranging three objects in a row but it does not depend those three objects unless it is being used for a particular purpose, such as three beads on an abacus.
 
How would I know that I am not a visual zombie now, or a visual zombie every Tuesday, Thursday and Saturday?

Here, we have to be careful what we mean by "I". Our own brains have various spheres of consciousness as demonstrated by the Wada Test: we can shut down one hemisphere of the brain and lose partial awareness and functionality such as the ability to form words and yet one remains conscious. I think being a partial zombie would be like that, having one's sphere of awareness shrink.

But the subject's sphere of awareness would not shrink in the thought experiment, since by assumption their behaviour stays the same, while if their sphere of awareness shrank they notice that something was different and say so.

Why do you think they would notice?  Color blind people don't notice they are color blind...until somebody tells them about it and even then they don't "notice" it.

Brent


 
What is the advantage of having "real" visual experiences if they make no objective difference and no subjective difference either?

The advantage of real computations (which imply having real awareness/experiences) is that real computations are more reliable than RNGs for producing intelligent behavioral responses.

Yes, so an RNG would be a bad design choice. But the point remains that if the output of the system remains the same, the consciousness remains the same, regardless of how the system functions. The reasonable-sounding belief that somehow the consciousness resides in the brain, in particular biochemical reactions or even in electronic circuits simulating the brain is wrong.


--
Stathis Papaioannou

Stathis Papaioannou

unread,
May 25, 2023, 1:20:15 AM5/25/23
to everyth...@googlegroups.com
On Thu, 25 May 2023 at 14:47, Brent Meeker <meeke...@gmail.com> wrote:


On 5/24/2023 9:29 PM, Stathis Papaioannou wrote:


On Thu, 25 May 2023 at 13:59, Jason Resch <jason...@gmail.com> wrote:


On Wed, May 24, 2023, 9:56 PM Stathis Papaioannou <stat...@gmail.com> wrote:


On Thu, 25 May 2023 at 11:48, Jason Resch <jason...@gmail.com> wrote:

>An RNG would be a bad design choice because it would be extremely unreliable. However, as a thought experiment, it could work. If the visual cortex were removed and replaced with an RNG which for five minutes replicated the interactions with the remaining brain, the subject would behave as if they had normal vision and report that they had normal vision, then after five minutes behave as if they were blind and report that they were blind. It is perhaps contrary to intuition that the subject would really have visual experiences in that five minute period, but I don't think there is any other plausible explanation.

I think they would be a visual zombie in that five minute period, though as described they would not be able to report any difference.

I think if one's entire brain were replaced by an RNG, they would be a total zombie who would fool us into thinking they were conscious and we would not notice a difference. So by extension a brain partially replaced by an RNG would be a partial zombie that fooled the other parts of the brain into thinking nothing was amiss.

I think the concept of a partial zombie makes consciousness nonsensical.

It borders on the nonsensical, but between the two bad alternatives I find the idea of a RNG instantiating human consciousness somewhat less sensical than the idea of partial zombies.

If consciousness persists no matter what the brain is replaced with as long as the output remains the same this is consistent with the idea that consciousness does not reside in a particular substance (even a magical substance) or in a particular process. This is a strange idea, but it is akin to the existence of platonic objects. The number three can be implemented by arranging three objects in a row but it does not depend those three objects unless it is being used for a particular purpose, such as three beads on an abacus.
 
How would I know that I am not a visual zombie now, or a visual zombie every Tuesday, Thursday and Saturday?

Here, we have to be careful what we mean by "I". Our own brains have various spheres of consciousness as demonstrated by the Wada Test: we can shut down one hemisphere of the brain and lose partial awareness and functionality such as the ability to form words and yet one remains conscious. I think being a partial zombie would be like that, having one's sphere of awareness shrink.

But the subject's sphere of awareness would not shrink in the thought experiment, since by assumption their behaviour stays the same, while if their sphere of awareness shrank they notice that something was different and say so.

Why do you think they would notice?  Color blind people don't notice they are color blind...until somebody tells them about it and even then they don't "notice" it.

There would be either objective or subjective evidence of a change due to the substitution. If there is neither objective nor subjective evidence of a change, then there is no change.


--
Stathis Papaioannou

John Clark

unread,
May 25, 2023, 5:55:09 AM5/25/23
to everyth...@googlegroups.com
On Wed, May 24, 2023 at 7:56 AM Jason Resch <jason...@gmail.com> wrote:

> Can I ask what you believe would happen to the consciousness of the individual if you replaced the right hemisphere of the brain with a black box that interfaced identically with the left hemisphere, but inside this black box is nothing but a random number generator, and it is only by fantastic luck that the output of the RNG happens to have caused its interfacing with the left hemisphere to remain unchanged?

If that were to happen absolutely positively nothing would happen to the consciousness of the individual, except that such a thing would be astronomically unlikely (that's far too wimpy a word but it's the best I could come up with) to occur, and even if it did it would be astronomically squared unlikely that such "fantastic luck" would continue and the individual would remain conscious for another nanosecond. But I want to compete with you in figuring out a thought experiment that is even more ridiculous than yours, in fact I want to find one that is almost as ridiculous as the Chinese Room. Here is my modest proposal:

Have ALL the neurons and not just half behave randomly,  and let them produce exactly the same output that Albert Einstein's brain did, and let them continue doing that for all of the 76 years of Albert Einstein's life. 

In anticipation of your inevitable questions .... Yes, that would be a reincarnation of Albert Einstein. And yes he would be conscious, assuming that the original Albert Einstein was conscious and that I am not the only conscious being in the universe. And yes randomness producing consciousness would be bizarre, but the bizarre is always to be expected if the starting conditions are bizarre, and in this case your starting conditions are bizarre to the bazaar power.  And no, something like that is not going to happen, not for 76 years, not even for a picosecond .  

John K Clark    See what's on my new list at  Extropolis



Jason Resch

unread,
May 25, 2023, 7:28:43 AM5/25/23
to Everything List


On Thu, May 25, 2023, 12:30 AM Stathis Papaioannou <stat...@gmail.com> wrote:


On Thu, 25 May 2023 at 13:59, Jason Resch <jason...@gmail.com> wrote:


On Wed, May 24, 2023, 9:56 PM Stathis Papaioannou <stat...@gmail.com> wrote:


On Thu, 25 May 2023 at 11:48, Jason Resch <jason...@gmail.com> wrote:

>An RNG would be a bad design choice because it would be extremely unreliable. However, as a thought experiment, it could work. If the visual cortex were removed and replaced with an RNG which for five minutes replicated the interactions with the remaining brain, the subject would behave as if they had normal vision and report that they had normal vision, then after five minutes behave as if they were blind and report that they were blind. It is perhaps contrary to intuition that the subject would really have visual experiences in that five minute period, but I don't think there is any other plausible explanation.

I think they would be a visual zombie in that five minute period, though as described they would not be able to report any difference.

I think if one's entire brain were replaced by an RNG, they would be a total zombie who would fool us into thinking they were conscious and we would not notice a difference. So by extension a brain partially replaced by an RNG would be a partial zombie that fooled the other parts of the brain into thinking nothing was amiss.

I think the concept of a partial zombie makes consciousness nonsensical.

It borders on the nonsensical, but between the two bad alternatives I find the idea of a RNG instantiating human consciousness somewhat less sensical than the idea of partial zombies.

If consciousness persists no matter what the brain is replaced with as long as the output remains the same this is consistent with the idea that consciousness does not reside in a particular substance (even a magical substance) or in a particular process.

Yes, but this is a somewhat crude 1960s version of functionalism, which, as I described and as you recognized, is vulnerable to all kinds of attacks. Modern functionalism is about more than high-level inputs and outputs, and includes causal organization and implementation details at some level (the functional substitution level).

Don't read too deeply into the mathematical definition of a function as simply inputs and outputs; think of it more in terms of what a mind does, rather than what a mind is. This is the thinking that led to functionalism and an acceptance of multiple realizability.



This is a strange idea, but it is akin to the existence of platonic objects. The number three can be implemented by arranging three objects in a row, but it does not depend on those three objects unless it is being used for a particular purpose, such as three beads on an abacus.

Bubble sort and merge sort both compute the same thing and both have the same inputs and outputs, but they are different mathematical objects, with different behaviors, steps, subroutines and runtime efficiency.
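To make the contrast concrete, a minimal sketch of the two (illustrative code only):

#include <algorithm>
#include <cstddef>
#include <iterator>
#include <utility>
#include <vector>

// O(n^2): repeatedly swap adjacent out-of-order elements.
std::vector<int> bubbleSort(std::vector<int> v) {
    for (std::size_t i = 0; i + 1 < v.size(); ++i)
        for (std::size_t j = 0; j + 1 < v.size() - i; ++j)
            if (v[j] > v[j + 1]) std::swap(v[j], v[j + 1]);
    return v;
}

// O(n log n): recursively sort the two halves, then merge them.
std::vector<int> mergeSort(std::vector<int> v) {
    if (v.size() < 2) return v;
    std::vector<int> left(v.begin(), v.begin() + v.size() / 2);
    std::vector<int> right(v.begin() + v.size() / 2, v.end());
    left = mergeSort(left);
    right = mergeSort(right);
    std::vector<int> out;
    std::merge(left.begin(), left.end(), right.begin(), right.end(),
               std::back_inserter(out));
    return out;
}

Same mapping from inputs to outputs, very different processes realizing it.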


 
How would I know that I am not a visual zombie now, or a visual zombie every Tuesday, Thursday and Saturday?

Here, we have to be careful what we mean by "I". Our own brains have various spheres of consciousness as demonstrated by the Wada Test: we can shut down one hemisphere of the brain and lose partial awareness and functionality such as the ability to form words and yet one remains conscious. I think being a partial zombie would be like that, having one's sphere of awareness shrink.

But the subject's sphere of awareness would not shrink in the thought experiment,

Have you ever wondered what delineates the mind from its environment? Why it is that you are not aware of my thoughts but you see me as an object that only affects your senses, even though we could represent the whole earth as one big functional system?

I don't have a good answer to this question, but it seems it might be a factor here. The randomly generated outputs from the RNG would seem like environmental noise/sensation coming from the outside, rather than a recursively linked and connected loop of processing as would exist in a genuinely functioning brain of two hemispheres.


since by assumption their behaviour stays the same, while if their sphere of awareness shrank they notice that something was different and say so.

But here (almost by magic), the RNG outputs have forced the physical behavior of the remaining hemisphere to remain the same while fundamentally altering the definition of the computation that underlies the mind.

If this does not alter the consciousness, if neurons don't need to interact in a computationally meaningful way with other neurons, then in principle all we need is one neuron to fire once, and this can stand for all possible consciousness invoked by all possible minds.

Arnold Zuboff has written a thought experiment to this effect.

I think it leads to a kind of absurdity. Why write books or emails when every possible combination of letters is already inherent in the alphabet? We just had to write the alphabet down once and we could call it a day. Or: combinations, patterns, and interrelations *are* important and meaningful, in ways that isolated instances of letters (or neurons) are not.

 
What is the advantage of having "real" visual experiences if they make no objective difference and no subjective difference either?

The advantage of real computations (which imply having real awareness/experiences) is that real computations are more reliable than RNGs for producing intelligent behavioral responses.

Yes, so an RNG would be a bad design choice. But the point remains that if the output of the system remains the same, the consciousness remains the same, regardless of how the system functions.

If you don't care about how the system functions and care only about outputs, then I think you are operating within an older, and I think largely abandoned, version of functionalism.

Consider: an electron has the same outputs as a dreaming brain locked inside a skull: none.

But if a theory cannot acknowledge a difference in consciousness between an electron and a dreaming brain inside a skull, then the theory is (in my opinion) operationally useless.


The reasonable-sounding belief that somehow the consciousness resides in the brain, in particular biochemical reactions or even in electronic circuits simulating the brain is wrong.

Right, I fully accept multiple realizability.

But it does not follow from the ability to multiply realize functions with different substrates that the internal details of a function's implementation can be ignored and we can focus only on the output of a function.

I don't know to what degree you are familiar with programming or computer code but I would like you to consider these two functions for a moment:

// Defined elsewhere: advances an emulated human brain by five subjective
// minutes. It returns nothing and does not affect the sums below.
void runBrainSimulation();

// Returns the sum of its two inputs.
int sum1(int a, int b) {
    return a + b;
}

// Returns the same sum, but first runs the brain emulation as a side effect.
int sum2(int a, int b) {
    runBrainSimulation();
    return a + b;
}

Here we have two functions defined, sum1() and sum2(). Both take in two integers as inputs. Both return one integer as an output. Both return the mathematical sum of the two inputs. In terms of their high-level functional definition, they are identical and we can abstract away the internal implementation details.

But, what these two functions compute are very different. The function sum2(), before computing and returning the sum, continues the computation of an emulation of a human brain by invoking another function "runBrainSimulation()". This function advances the simulation of an uploaded human brain by five subjective minutes. But this simulation function itself has no outputs, and it has no effect on what sum2() returns.

Given this, are you still of the opinion that the only thing that matters in a mind are high level outputs, or does this example reveal that sometimes implementation details of a function are relevant and bear on the states of consciousness that a function realizes?

Jason 

Stathis Papaioannou

unread,
May 25, 2023, 9:43:10 AM5/25/23
to everyth...@googlegroups.com
In your example with the two functions there is a conscious process which is separate from the outputs. The analogous case in Chalmers’ experiment is that the visual qualia are altered by the replacement process, the subject notices, but he continues to say that everything is fine, because the inputs to his language centres etc. are the same. But what part of the brain does the noticing, the trying to speak, the experience of horror at helplessly observing oneself say that everything is fine? There isn’t a special part of the brain that runs conscious subroutines disconnected from the outputs.
--
Stathis Papaioannou

Terren Suydam

unread,
May 25, 2023, 10:05:04 AM5/25/23
to everyth...@googlegroups.com


On Tue, May 23, 2023 at 5:47 PM Jason Resch <jason...@gmail.com> wrote:


On Tue, May 23, 2023, 3:50 PM Terren Suydam <terren...@gmail.com> wrote:


On Tue, May 23, 2023 at 1:46 PM Jason Resch <jason...@gmail.com> wrote:


On Tue, May 23, 2023, 9:34 AM Terren Suydam <terren...@gmail.com> wrote:


On Tue, May 23, 2023 at 7:09 AM Jason Resch <jason...@gmail.com> wrote:
As I see this thread, Terren and Stathis are both talking past each other. Please either of you correct me if i am wrong, but in an effort to clarify and perhaps resolve this situation:

I believe Stathis is saying the functional substitution having the same fine-grained causal organization *would* have the same phenomenology, the same experience, and the same qualia as the brain with the same fine-grained causal organization.

Therefore, there is no disagreement between your positions with regards to symbols groundings, mappings, etc.

When you both discuss the problem of symbology, or bits, etc. I believe this is partly responsible for why you are both talking past each other, because there are many levels involved in brains (and computational systems). I believe you were discussing completely different levels in the hierarchical organization.

There are high-level parts of minds, such as ideas, thoughts, feelings, qualia, etc., and there are low-level parts, be they neurons, neurotransmitters, atoms, quantum fields, and laws of physics as in human brains, or circuits, logic gates, bits, and instructions as in computers.

I think when Terren mentions a "symbol for the smell of grandmother's kitchen" (GMK) the trouble is we are crossing a myriad of levels. The quale or idea or memory of the smell of GMK is a very high-level feature of a mind. When Terren asks for or discusses a symbol for it, a complete answer/description for it can only be supplied in terms of a vast amount of information concerning low-level structures, be they patterns of neuron firings, or patterns of bits being processed. When we consider things down at this low level, however, we lose all context for what the meaning, idea, and quale are or where or how they come in. We cannot see or find the idea of GMK in any neuron, any more than we can see or find it in any bit.

Of course then it should seem deeply mysterious, if not impossible, how we get "it" (GMK or otherwise) from "bit", but to me, this is no greater a leap from how we get "it" from a bunch of cells squirting ions back and forth. Trying to understand a smartphone by looking at the flows of electrons is a similar kind of problem, it would seem just as difficult or impossible to explain and understand the high-level features and complexity out of the low-level simplicity.

This is why it's crucial to bear in mind and explicitly discuss the level one is operating on when one discusses symbols, substrates, or qualia. In summary, I think a chief reason you have been talking past each other is that you are each operating on different assumed levels.

Please correct me if you believe I am mistaken and know I only offer my perspective in the hope it might help the conversation.

I appreciate the callout, but it is necessary to talk at both the micro and the macro for this discussion. We're talking about symbol grounding. I should make it clear that I don't believe symbols can be grounded in other symbols (i.e. symbols all the way down, as Stathis put it), as that leads to infinite regress and the illusion of meaning. Symbols ultimately must stand for something. The only thing they can stand for, ultimately, is something that cannot be communicated by other symbols: conscious experience. There is no concept in our brains that is not ultimately connected to something we've seen, heard, felt, smelled, or tasted.

I agree everything you have experienced is rooted in consciousness. 

But at the low level, the only thing your brain senses is neural signals (symbols, on/off, ones and zeros).

In your arguments you rely on the high-level conscious states of human brains to establish that they have grounding, but then use the low-level descriptions of machines to deny their own consciousness, and hence deny they can ground their processing to anything.

If you remained in the space of low-level descriptions for both brains and machine intelligences, however, you would see each struggles to make a connection to what may exist at the high level. You would see the lack of any apparent grounding in what are just neurons firing or not firing at certain times, just as a wire in a circuit either carries or doesn't carry a charge.

Ah, I see your point now. That's valid, thanks for raising it and let me clarify.

I appreciate that thank you.


Bringing this back to LLMs, it's clear to me that LLMs do not have phenomenal experience, but you're right to insist that I explain why I think so. I don't know if this amounts to a theory of consciousness, but the reason I believe that LLMs are not conscious is that, in my view, consciousness entails a continuous flow of experience. Assuming for this discussion that consciousness is realizable in a substrate-independent way, that means that consciousness is, in some sort of way, a process in the domain of information. And so to realize a conscious process, whether in a brain or in silicon, the physical dynamics of that information process must also be continuous, which is to say, recursive.


I am quite partial to the idea that recursion or loops may be necessary to realize consciousness, or at least certain types of consciousness, such as self-consciousness (which I take to be models which include the self as an actor within the environment), but I also believe that loops may exist in non-obvious forms, and even extend beyond the physical domain of a creature's body or the confines of a physical computer.

Allow me to explain.

Consider something like the robot arm I described that is programmed to catch a ball. Now consider that at each time step, a process is run that receives the current coordinates of the robot arm position and the ball position. This is not technically a loop, and not really recursive; it may be implemented by a timer that fires off the process, say, 1000 times a second.

But, if you consider the pair of the robot arm and the environment, a recursive loop emerges, in the sense that the action decided and executed in the previous time step affects the sensory input in subsequent time steps. If the robot had enough sophistication to have a language function and we asked it, "What caused your arm to move?", the only answer it could give would have to be a reflexive one: a process within me caused my arm to move. So we get self-reference, and recursion through environmental interaction.
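A minimal sketch of that idea in Python (the names and dynamics are invented purely for illustration, not a model of any real robot): the controller is strictly feed-forward, yet a loop still closes because the arm position it outputs at one tick becomes part of the sensory input it receives at the next tick.

# Feed-forward controller; the only "recursion" is through the environment.
def controller(arm_pos, ball_pos):
    # No memory, no internal loop: just map current readings to an action.
    return arm_pos + 0.5 * (ball_pos - arm_pos)

def environment_step(ball_pos, ball_velocity):
    # The world updates on its own; the ball keeps moving.
    return ball_pos + ball_velocity

arm, ball, velocity = 0.0, 10.0, -1.0
for t in range(10):                  # a timer fires the process each tick
    arm = controller(arm, ball)      # the action chosen now...
    ball = environment_step(ball, velocity)
    # ...is part of the sensory input the controller receives next tick,
    # so the recursive loop exists only in the arm-plus-environment system.
    print(f"t={t}  arm={arm:.2f}  ball={ball:.2f}")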

Now let's consider the LLM in this context: each invocation is indeed an independent feed-forward process, but through this back-and-forth flow of the LLM interacting with the user, a recursive, continuous loop of processing emerges. The LLM could be said to perceive an ever-growing thread of conversation, with new words constantly being appended to its perception window. Moreover, some of these words would be external inputs, while others are internal outputs. If you ask the LLM where those internally generated outputs came from, again the only valid answer it could supply would have to be reflexive.

Reflexivity is, I think, the essence of self-awareness, and though a single LLM invocation cannot do this, an LLM that generates output and then is subsequently asked about the source of this output must turn its attention inward towards itself.

This is something like how Dennett describes how a zombie asked to look inward bootstraps itself into consciousness.
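To make that conversational loop concrete, here is a small sketch (the generate function below is a hypothetical placeholder, not any real model API): each call is stateless and feed-forward, but the transcript it is shown keeps accumulating its own earlier outputs, so a question about those outputs forces a reflexive answer.

def generate(transcript):
    # Placeholder for a real LLM call; here it just echoes a stub reply.
    return "(reply to: ..." + transcript[-30:] + ")"

transcript = ""
for user_turn in ["Hello", "What produced your previous reply?"]:
    transcript += "\nUser: " + user_turn
    reply = generate(transcript)          # one stateless, feed-forward call
    transcript += "\nAssistant: " + reply
    # The model's own output is now part of the input it will "perceive"
    # on the next call: the loop closes through the growing transcript.
print(transcript)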

I see what you're saying, and within the context of a single conversation, what you're suggesting seems possible. But with every new conversation it starts at the same exact state. There is no learning, no updating, from one conversation to the next. It doesn't pass the smell test to me. I would think for real sentience to occur, that kind of emergent self-model would need more than just a few iterations. But this is all just intuition. You raise an interesting possibility.

To take it one step further, if chatGPT's next iteration of training included the millions of conversations humans had with it, you could see a self model become instantiated in a more permanent way. But again, at the end of its training, the state would be frozen. That's the sticking point for me.


The behavior or output of the brain in one moment is the input to the brain in the next moment.

But LLMs do not exhibit this. They have a training phase, and then they respond to discrete queries. As far as I know, once it's out of the training phase, there is no feedback outside of the flow of a single conversation. None of that seems isomorphic to the kind of process that could support a flow of experience, whatever experience would mean for an LLM.

So to me, the suggestion that chatGPT could one day be used to functionally replace some subset of the brain that is responsible for mediating conscious experience in a human, just strikes me as absurd. 

One aspect of artificial neural networks that is worth considering here is that they are (by the 'universal approximation theorem') completely general and universal in the functions they can learn and model. That is, any logical circuit which can be computed in finite time can, in principle, be learned and implemented by a neural network. This gives me some pause when I consider what things neural networks will never be able to do.
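To make the "any logical circuit" point concrete, here is a tiny network with hand-picked weights that computes XOR, a function no single-layer perceptron can represent. The weights are constructed by hand purely for illustration; the universal approximation results concern what such networks can represent or learn in general, not this particular construction.

import numpy as np

def relu(x):
    return np.maximum(0, x)

# Two hidden units, one output; weights chosen by hand to implement XOR.
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])
b1 = np.array([0.0, -1.0])
W2 = np.array([1.0, -2.0])

def xor_net(a, b):
    h = relu(np.array([a, b]) @ W1 + b1)   # hidden: [relu(a+b), relu(a+b-1)]
    return h @ W2                          # output: relu(a+b) - 2*relu(a+b-1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, int(xor_net(a, b)))    # prints the XOR truth table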

Yup, you made that point a couple months ago here and that stuck with me - that it's possible the way that LLMs are sort of outperforming expectations could be that it's literally modelling minds and using that to generate its responses. I'm not sure that's possible, because I'm not clear on whether the neural networks used in LLMs qualify as being general/universal.
 


Conversely, if you stay in the high-level realm of consciousness ideas, well then you must face the problem of other minds. You know you are conscious, but you cannot prove or disprove the consciousness of others, at least not without first defining a theory of consciousness and explaining why some minds satisfy the definition and others do not. Until you present a theory of consciousness, this conversation is, I am afraid, doomed to continue in this circle forever.

This same conversation and outcome played out over the past few months on the extropy-chat-list, although with different actors, so I can say with some confidence where some topics are likely to lead.




In my experience with conversations like this, you usually have people on one side who take consciousness seriously as the only thing that is actually undeniable, and you have people who'd rather not talk about it, hand-wave it away, or outright deny it. That's the talking-past that usually happens, and that's what's happening here.


Do you have a theory for why neurology supports consciousness but silicon circuitry cannot?

I'm agnostic about this, but that's because I no longer assume physicalism. For me, the hard problem signals that physicalism is impossible. I've argued on this list many times as a physicalist, as one who believes in the possibility of artificial consciousness, uploading, etc. I've argued that there is something it is like to be a cybernetic system. But at the end of it all, I just couldn't overcome the problem of aesthetic valence. As an aside, the folks at Qualia Computing have put forth a theory that symmetry in the state space isomorphic to ongoing experience is what corresponds to positive valence, and anti-symmetry to negative valence.

But is there not much more to consciousness than these two binary states? Is the state space sufficiently large in their theory to account for the seemingly infinite possible diversity of conscious experience?

They're not saying the state is binary. I don't even think they're saying symmetry is a binary. They're deriving the property of symmetry (presumably through some kind of mathematical transform) and hypothesizing that aesthetic valence corresponds to the outcome of that transform. I also think it's possible for symmetry and anti-symmetry to be present at the same time; the mathematical object isomorphic to experience is a high-dimensional object and probably has nearly infinite ways of being symmetrical and anti-symmetrical.
 


It's a very interesting argument but one is still forced to leap from a mathematical concept to a subjective feeling. Regardless, it's the most sophisticated attempt to reconcile the hard problem that I've come across.

I've since come around to the idealist stance that reality is fundamentally consciousness, and that the physical is a manifestation of that consciousness, like in a dream.

I agree. Or at least I would say, consciousness is more fundamental than the physical universe. It might then be more appropriate to say my position is a kind of neutral monism, where platonically existing information/computation is the glue that relates consciousness to physics and explains why we perceive an ordered world with apparent laws.

I explain this in much more detail here:


I assume that's inspired by Bruno's ideas?  I miss that guy. I still see him on FB from time to time. He was super influential on me too. Probably the single smartest person I ever "met".
 

It has its own "hard problem", which is explaining why the world appears so orderly.

Yes, the "hard problem of matter" as some call it. I agree this problem is much more solvable than the hard problem of consciousness.


But if you don't get too hung up on that, it's not as clear that artificial consciousness is possible. It might be! It may even be that efforts like the above to explain how you get it from bit are relevant to idealist explanations of physical reality. But the challenge with idealism is that the explanations that are on offer sound more like mythology and metaphor than science. I should note that Bernardo Kastrup

I will have to look into him.

I take him with a grain of salt - he's fairly combative and dismissive of people who are physicalists. But his ideas are super interesting. I don't know if he's the first to take an analytical approach to idealism, but he's definitely the first to become well known for it.

Terren
 

 has some interesting ideas on idealism, and he approaches it in a way that is totally devoid of woo. That said, one really intriguing set of evidence in favor of idealism is near-death-experience (NDE) testimony, which is pretty remarkable if one actually studies it.

It is indeed.

Jason 


Terren Suydam

unread,
May 25, 2023, 10:16:26 AM5/25/23
to everyth...@googlegroups.com


On Tue, May 23, 2023 at 6:00 PM Jason Resch <jason...@gmail.com> wrote:


On Tue, May 23, 2023, 4:14 PM Terren Suydam <terren...@gmail.com> wrote:


On Tue, May 23, 2023 at 2:27 PM Jason Resch <jason...@gmail.com> wrote:


On Tue, May 23, 2023 at 1:15 PM Terren Suydam <terren...@gmail.com> wrote:


On Tue, May 23, 2023 at 11:08 AM Dylan Distasio <inte...@gmail.com> wrote:
 
And yes, I'm arguing that a true simulation (let's say for the sake of a thought experiment we were able to replicate every neural connection of a human being in code, including the connectomes, and neurotransmitters, along with a simulated nerve that was connected to a button on the desk we could press which would simulate the signal sent when a biological pain receptor is triggered) would feel pain that is just as real as the pain you and I feel as biological organisms.

This follows from the physicalist no-zombies-possible stance. But it still runs into the hard problem, basically: how does stuff give rise to experience?


I would say stuff doesn't give rise to conscious experience. Conscious experience is the logically necessary and required state of knowledge that is present in any consciousness-necessitating behaviors. If you design a simple robot with a camera and robot arm that is able to reliably catch a ball thrown in its general direction, then something in that system *must* contain knowledge of the ball's relative position and trajectory. It simply isn't logically possible to have a system that behaves in all situations as if it knows where the ball is, without knowing where the ball is. Consciousness is simply the state of being with knowledge.

Con- "Latin for with"
-Scious- "Latin for knowledge"
-ness "English suffix meaning the state of being X"

Consciousness -> The state of being with knowledge.

There is an infinite variety of potential states and levels of knowledge, and this contributes to much of the confusion, but boiled down to the simplest essence of what is or isn't conscious, it is all about knowledge states. Knowledge states require activity/reactivity to the presence of information, and counterfactual behaviors (if/then, greater than/less than, discriminations and comparisons that lead to different downstream consequences in a system's behavior). At least, this is my theory of consciousness.

Jason

This still runs into the valence problem though. Why does some "knowledge" correspond with a positive feeling and other knowledge with a negative feeling?

That is a great question. Though I'm not sure it's fundamentally insoluble within a model where every conscious state is a particular state of knowledge.

I would propose that having positive and negative experiences, i.e. pain or pleasure, requires knowledge states with a certain minimum degree of sophistication. For example, knowing:

Pain being associated with knowledge states such as: "I don't like this, this is bad, I'm in pain, I want to change my situation."

Pleasure being associated with knowledge states such as: "This is good for me, I could use more of this, I don't want this to end."

Such knowledge states require a degree of reflexive awareness, to have a notion of a self where some outcomes may be either positive or negative to that self, and perhaps some notion of time or a sufficient agency to be able to change one's situation.

Some have argued that plants can't feel pain because there's little they can do to change their situation (though I'm agnostic on this).

  I'm not talking about the functional accounts of positive and negative experiences. I'm talking about phenomenology. The functional aspect of it is not irrelevant, but to focus only on that is to sweep the feeling under the rug. So many dialogs on this topic basically terminate here, where it's just a clash of belief about the relative importance of consciousness and phenomenology as the mediator of all experience and knowledge.

You raise important questions which no complete theory of consciousness should ignore. I think one reason things break down here is because there's such incredible complexity behind and underlying the states of consciousness we humans perceive and no easy way to communicate all the salient properties of those experiences.

Jason 

Thanks for that. These kinds of questions are rarely acknowledged in the mainstream. The problem is that we take valence so much as a given, or conflate it so much with its function, that most people aren't aware of how strange it is if you're coming from a physicalist metaphysics. "Evolution did it" is the common refrain, but it begs the question.

With your proposal, would a bacterium potentially possess the knowledge states required?

And the idea that plants cannot influence their environments is patently false. There's an emerging recognition of just how much plants do respond to environmental stimuli. There's a symbiotic relationship between plants and fungal networks in the soil, and these networks have been shown to mediate communication, where trees will signal threats and direct resources to other trees who need it. I can try to dig up some references on that.

Terren
 


Jason Resch

unread,
May 25, 2023, 10:21:18 AM5/25/23
to Everything List
Should I take this to mean you think consciousness is more appropriately identified with the process than the functional outputs?


The analogous case in Chalmers’ experiment is that the visual qualia are altered by the replacement process, the subject notices, but he continues to say that everything is fine, because the inputs to his language centres etc. are the same. But what part of the brain does the noticing, the trying to speak, the experience of horror at helplessly observing oneself say that everything is fine?

I think the above scenario is too underspecified to say. Certainly in the scenario where every neuron was replaced and all causal structures and information flows maintained, there is no room for such alternative thoughts to exist. But I can imagine another scenario where we insert a chip into your spinal cord which takes over your body, and there would remain ample room in the rest of your brain for such thoughts of horror to exist, even if the chip causes your outward appearance to act as if nothing were wrong.

I don't know where your change to the visual center that affects visual experience but preserves linguistic behavior fits in between these two extremes; I need more information about what is changed, where, and how the rest of the brain is firewalled off from the downstream causal effects of the change to this one brain region.


There isn’t a special part of the brain that runs conscious subroutines disconnected from the outputs.

Sure there are. This is the whole notion of the modularity of the mind, a theory proposed by Jerry Fodor, an early pioneer of functionalism. Specialized brain regions perform computations more complex than they can fully communicate to other brain regions, so much more happens behind the scenes than any of the brain's some 400 modules is able to output to the rest of the brain.

And yet, how the outputs are computed is still important to the rest of the brain (in terms of defining the computational state it is in).

Think of it this way: a multiply function that takes in two inputs (2,2) and returns "4", and an add function that takes in two inputs (2,2) and returns "4", have the same output for the same input, at least in this case, but the functional meaning is very different. The computations that occurred in the function carry a meaning which is different, even though the same output comes out. The internal functional implementation then defines a different computational state for any later process that receives this output of "4".
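For concreteness, a trivial sketch of those two functions: identical output for the input (2, 2), but different computations, as any other input makes obvious.

def multiply(x, y):
    return x * y

def add(x, y):
    return x + y

print(multiply(2, 2), add(2, 2))   # 4 4 -- same output for this input
print(multiply(2, 3), add(2, 3))   # 6 5 -- the functions, and the internal
                                   # computations that produced "4" above, differ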


Jason 

Jason Resch

unread,
May 25, 2023, 11:33:23 AM5/25/23
to everyth...@googlegroups.com
Thank you. I do think such a learning capacity would greatly expand the potential of these systems.
 

To take it one step further, if chatGPT's next iteration of training included the millions of conversations humans had with it, you could see a self model become instantiated in a more permanent way. But again, at the end of its training, the state would be frozen. That's the sticking point for me.

I think in a round-about way, they do. I believe OpenAI requires users to opt out if they don't want their data used for further training. This suggests to me that they may be collecting user conversational interactions and using them to train successive versions of the model. Such training does not take place in real time, but perhaps we can view it as somewhat analogous to how the brain learns and incorporates new memories and skills while we sleep.

I also am not sure that real-time modification of long-term memories is necessary for consciousness. There have been several cases of humans who, due to some kind of brain damage, lost the ability to form long-term memories (e.g. https://en.wikipedia.org/wiki/Clive_Wearing and H.M. https://www.pbs.org/newshour/show/bringing-new-life-patient-h-m-man-couldnt-make-memories ), though they live only in the immediate present of their short-term working memory. And though they are certainly greatly impaired in their functioning, I do not doubt these people are conscious. Perhaps then, short-term working memory is enough, and GPT has this in terms of its context window (which is on the order of tens of thousands of words, perhaps 100 pages of text).
 


The behavior or output of the brain in one moment is the input to the brain in the next moment.

But LLMs do not exhibit this. They have a training phase, and then they respond to discrete queries. As far as I know, once it's out of the training phase, there is no feedback outside of the flow of a single conversation. None of that seems isomorphic to the kind of process that could support a flow of experience, whatever experience would mean for an LLM.

So to me, the suggestion that chatGPT could one day be used to functionally replace some subset of the brain that is responsible for mediating conscious experience in a human, just strikes me as absurd. 

One aspect of artificial neural networks that is worth considering here is that they are (by the 'universal approximation theorem') completely general and universal in the functions they can learn and model. That is, any logical circuit which can be computed in finite time can, in principle, be learned and implemented by a neural network. This gives me some pause when I consider what things neural networks will never be able to do.

Yup, you made that point a couple months ago here and that stuck with me - that it's possible the way that LLMs are sort of outperforming expectations could be that it's literally modelling minds and using that to generate its responses. I'm not sure that's possible, because I'm not clear on whether the neural networks used in LLMs qualify as being general/universal.

I should note that I am not an expert in this space either, so take what I say as supposition rather than fact, but the task these models are trained to do is one that requires a kind of universal intelligence (predicting the next observation O_n, given prior observations O_1 ... O_(n-1)). All forms of intelligence derive from this kind of prediction ability, and any intelligent behavior can be framed in these terms. To accomplish this task efficiently, I believe LLMs have internally developed all kinds of specific neural circuitry to handle prediction in each domain it has experienced, and this is where the universal approximation theorem comes in. In order to learn to better predict future observations from past ones, all kinds of unique abilities had to be learned, and none of these were explicitly put in. For example, consider this testing which found ChatGPT able to play chess better than most humans:
https://dkb.blog/p/chatgpts-chess-elo-is-1400 This implies it has learned the ability to model the board state in its mind merely from a textual list of past moves, and it can predict what move it expects a good player to make next based on this history. I think this remarkable ability can only be explained by the universal ability of neural networks to learn any computable function.
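As a toy illustration of that training objective (this is not how a transformer works internally, just the "predict O_n from O_1 ... O_(n-1)" framing made concrete), here is a bigram counter that predicts the next word from the previous one; the corpus is made up:

from collections import defaultdict, Counter

corpus = "the cat sat on the mat the cat ran".split()

# Count which word follows which: a crude stand-in for next-observation
# prediction. Real LLMs condition on the whole history with a learned network.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(token):
    # Return the most frequently observed successor of `token`.
    return counts[token].most_common(1)[0][0]

print(predict_next("the"))   # 'cat' (follows 'the' twice, vs 'mat' once)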
 

 


Conversely, if you stay in the high-level realm of consciousness ideas, well then you must face the problem of other minds. You know you are conscious, but you cannot prove or disprove the consciousness of others, at least not without first defining a theory of consciousness and explaining why some minds satisfy the definition and others do not. Until you present a theory of consciousness, this conversation is, I am afraid, doomed to continue in this circle forever.

This same conversation and outcome played out over the past few months on the extropy-chat-list, although with different actors, so I can say with some confidence where some topics are likely to lead.




In my experience with conversations like this, you usually have people on one side who take consciousness seriously as the only thing that is actually undeniable, and you have people who'd rather not talk about it, hand-wave it away, or outright deny it. That's the talking-past that usually happens, and that's what's happening here.


Do you have a theory for why neurology supports consciousness but silicon circuitry cannot?

I'm agnostic about this, but that's because I no longer assume physicalism. For me, the hard problem signals that physicalism is impossible. I've argued on this list many times as a physicalist, as one who believes in the possibility of artificial consciousness, uploading, etc. I've argued that there is something it is like to be a cybernetic system. But at the end of it all, I just couldn't overcome the problem of aesthetic valence. As an aside, the folks at Qualia Computing have put forth a theory that symmetry in the state space isomorphic to ongoing experience is what corresponds to positive valence, and anti-symmetry to negative valence.

Looking at Qualia Computing I realize I have read much of this site in the past and seen many of Andrés Gómez Emilsson's videos. I e-mailed him a few years back but never got a reply. I thought that we were both interested in many of the same topics and had been asking similar questions. I like Emilsson's approach and ideas, though I don't know that I embrace his theory of consciousness (if I recall they are related to or inspired by the ideas of David Pearce, who I also admire).
 

But is there not much more to consciousness than these two binary states? Is the state space sufficiently large in their theory to account for the seemingly infinite possible diversity of conscious experience?

They're not saying the state is binary. I don't even think they're saying symmetry is a binary. They're deriving the property of symmetry (presumably through some kind of mathematical transform) and hypothesizing that aesthetic valence corresponds to the outcome of that transform. I also think it's possible for symmetry and anti-symmetry to be present at the same time; the mathematical object isomorphic to experience is a high-dimensional object and probably has nearly infinite ways of being symmetrical and anti-symmetrical.

I see what you mean, though I don't know what this symmetry-antisymmetry buys that isn't already possible to similarly structure in high-dimensional objects that have infinite ways of being related to 1-ness and 0-ness.


 

 


It's a very interesting argument but one is still forced to leap from a mathematical concept to a subjective feeling. Regardless, it's the most sophisticated attempt to reconcile the hard problem that I've come across.

I've since come around to the idealist stance that reality is fundamentally consciousness, and that the physical is a manifestation of that consciousness, like in a dream.

I agree. Or at least I would say, consciousness is more fundamental than the physical universe. It might then be more appropriate to say my position is a kind of neutral monism, where platonically existing information/computation is the glue that relates consciousness to physics and explains why we perceive an ordered world with apparent laws.

I explain this in much more detail here:


I assume that's inspired by Bruno's ideas? 

Yes, largely. There has been much related work by Russell Standish, Markus Müller ( https://arxiv.org/abs/1712.01826 ), Max Tegmark, and more recently by Stephen Wolfram ( https://writings.stephenwolfram.com/2021/11/the-concept-of-the-ruliad/ ).
 
I miss that guy. I still see him on FB from time to time.

I was worried about him, I noticed he had dropped off this list and he hadn't replied to an e-mail I sent him. I am glad to know he is still active.
 
He was super influential on me too. Probably the single smartest person I ever "met".

Yes, I feel the same.


 

It has its own "hard problem", which is explaining why the world appears so orderly.

Yes, the "hard problem of matter" as some call it. I agree this problem is much more solvable than the hard problem of consciousness.


But if you don't get too hung up on that, it's not as clear that artificial consciousness is possible. It might be! It may even be that efforts like the above to explain how you get it from bit are relevant to idealist explanations of physical reality. But the challenge with idealism is that the explanations that are on offer sound more like mythology and metaphor than science. I should note that Bernardo Kastrup

I will have to look into him.

I take him with a grain of salt - he's fairly combative and dismissive of people who are physicalists. But his ideas are super interesting. I don't know if he's the first to take an analytical approach to idealism, but he's definitely the first to become well known for it.

Seeing his face now, I remembered I watched many of his videos some months ago. He was interesting and I agreed with many of his points.

You might also like some of the writings by Galen Strawson: https://www.nytimes.com/2016/05/16/opinion/consciousness-isnt-a-mystery-its-matter.html 

Jason

Jason Resch

unread,
May 25, 2023, 11:49:55 AM5/25/23
to everyth...@googlegroups.com
Yes, I have recently argued that the first single-celled organism with a photosensitive pigment could represent the emergence of consciousness on Earth. Once you have something that is reactive/responsive to stimuli, with the ability to differentiate or distinguish one state from another, and respond appropriately and uniquely to either situation, this is, if not consciousness, then at least the atom of consciousness from which all states of consciousness are composed.

 
And the idea that plants cannot influence their environments is patently false. There's an emerging recognition of just how much plants do respond to environmental stimuli. There's a symbiotic relationship between plants and fungal networks in the soil, and these networks have been shown to mediate communication, where trees will signal threats and direct resources to other trees who need it. I can try to dig up some references on that.

I am aware of plants' fascinating abilities to learn, communicate, adapt, etc. I strongly lean towards the possibility of them being conscious. Some quotes:

“When a plant is wounded, its body immediately kicks into protection mode. It releases a bouquet of volatile chemicals, which in some cases have been shown to induce neighboring plants to pre-emptively step up their own chemical defenses and in other cases to lure in predators of the beasts that may be causing the damage to the plants. Inside the plant, repair systems are engaged and defenses are mounted, the molecular details of which scientists are still working out, but which involve signaling molecules coursing through the body to rally the cellular troops, even the enlisting of the genome itself, which begins churning out defense-related proteins ... If you think about it, though, why would we expect any organism to lie down and die for our dinner? Organisms have evolved to do everything in their power to avoid being extinguished. How long would any lineage be likely to last if its members effectively didn't care if you killed them?”

“The research of Ariel Novoplansky, from the Ben-Gurion University of the Negev, has demonstrated that plants can communicate with each other in sophisticated ways. Novoplansky’s experiment involved putting plants in a series of adjacent pots, with each plant having one root in its neighbor's pot. He then subjected one of the plants to drought. What he discovered was that this information was passed down the series of plant pots through the roots, as revealed by the fact that all of the plants closed their pores to reduce water loss. Closing of pores is generally the action of thirsty plants, but in this case it was the action of perfectly well-watered plants responding to the danger signals of a neighbor several pots along. The plants were even able to retain the information, which prevented them from dying in the drought that Novoplansky subjected the plants to in a later stage of the experiment.”
“By injecting trees with isotope tracers, Simard has shown that there is beneath our feet a complex web of communication between trees, which she has dubbed the “Wood-Wide Web.” Communication happens via mycorrhiza structures, which connect trees to other trees via fungi. The trees and the fungi enjoy a quid pro quo relationship: the trees deliver carbon to the fungi and the fungi reciprocate by delivering nutrients to the trees. A dense web of connections is formed in this way, with the busiest trees at the center connected to hundreds of other trees.”
“Many vegans and vegetarians feel that it is wrong to kill or exploit sentient creatures. But if plants also have sentience, what is there left to eat? These are very hard ethical questions; it may turn out that some killing of sentient life is inevitable if we want to survive ourselves. But accepting the consciousness of plant life means at the very least accepting that plants have genuine interests, interests that deserve our respect and consideration.”
-- Phillip Goff in "Galileo’s Error" (2019)

“He speaks with plant scientists from around the world whose research has led them to conclude that plants can communicate, learn, and even remember. Some even go as far as to say plants are intelligent.”
“But in principle, there is no doubt that plants are processing and sharing information, potentially in an incredibly complex way.”


Jason

John Clark

unread,
May 25, 2023, 1:32:27 PM5/25/23
to everyth...@googlegroups.com
On Thu, May 25, 2023 at 7:28 AM Jason Resch <jason...@gmail.com> wrote:

> Have you ever wondered what delineates the mind from its environment?

No.
 
> Why it is that you are not aware of my thoughts but you see me as an object that only affects your senses, even though we could represent the whole earth as one big functional system?

The reason is lack of information and lack of computational resources, it's the same reason you're not aware of the velocity of every molecule of the air in the room you're in right now  nor can you predict what all the molecules will be doing one hour from now, but you are aware of the air's temperature now and you can make a pretty good guess about what the temperature will be in one hour.
 
> I don't have a good answer to this question

Then how fortunate it is for you to be able to talk to me.

> The randomly generated outputs from the RNG would seem an environmental noise/sensation coming from the outside, rather than a recursively linked and connected loop of processing 

In your ridiculous example the cause of the neuron acting the way it does is not coming from the inside and it does not come from the outside either because you claim the neuron is acting randomly and the very definition of "random" is an event without a cause.  

> But here (almost by magic), the RNG outputs have forced the physical behavior of the remaining hemisphere to remain the same

That is incorrect. The neuron is not behaving "ALMOST" magically, it IS magical; but you were the one who dreamed up this magical thought experiment, not me.

> Arnold Zuboff has written a thought experiment to this effect.

I'm not going to bother looking it up because you and I have very different ideas about what constitutes a good thought experiment.  

> But if a theory cannot acknowledge a difference in the conscious between an electron and a dreaming brain inside a skull, then the theory is (in my opinion) operationally useless.

Correct. Unless you make the unprovable assumption that intelligent behavior implies consciousness then EVERY consciousness theory is operationally useless. And useless for the study of Ontology and Epistemology too. In other words just plain useless. That's why I'm vastly more interested in intelligence theories than consciousness theories; one is easy to fake and the other is impossible to fake.


John K Clark    See what's on my new list at  Extropolis
tic

Stathis Papaioannou

unread,
May 25, 2023, 2:38:34 PM5/25/23
to everyth...@googlegroups.com
If it doesn’t make a difference to the output, it isn’t the sort of change we are discussing. The experiment involves replacing a part of the brain that would result in an arbitrarily large change in qualia that the subject would notice and communicate, because the part of the brain that notices and communicates is intact (there are neurological conditions such as locked in syndrome and various anosognosias where the latter is not the case but we can exclude those).

And yet, how the outputs are computed is still important to the rest of the brain (in terms of defining the computational state it is in).

Think of it this way: a multiply function that takes in two inputs (2,2) and returns "4", and an add function that takes in two inputs (2,2) and returns "4", have the same output for the same input, at least in this case, but the functional meaning is very different. The computations that occurred in the function carry a meaning which is different, even though the same output comes out. The internal functional implementation then defines a different computational state for any later process that receives this output of "4".


Jason 

--
Stathis Papaioannou

Brent Meeker

unread,
May 25, 2023, 2:38:37 PM5/25/23
to everyth...@googlegroups.com


On 5/25/2023 4:28 AM, Jason Resch wrote:
> Have you ever wondered what delineates the mind from its environment?
> Why it is that you are not aware of my thoughts but you see me as an
> object that only affects your senses, even though we could represent
> the whole earth as one big functional system?
>
> I don't have a good answer to this question but it seems it might be a
> factor here. The randomly generated outputs from the RNG would seem an
> environmental noise/sensation coming from the outside, rather than a
> recursively linked and connected loop of processing as would exist in
> a genuinely functioning brain of two hemispheres.

I would reject this radical output=function. The brain evolved as support for the sensory systems. It is inherently and sensitively engaged with the environment. The RNG thought experiment is based on the idea that the brain can function in isolation, an idea supported by concentrating on consciousness as the main function of the brain, which I also reject. Consciousness is a relatively small part of the brain's function, mainly concerned with communication to others. Remember that the Poincaré effect was described by a great mathematician.

Brent

Brent Meeker

unread,
May 25, 2023, 3:19:12 PM5/25/23
to everyth...@googlegroups.com


On 5/25/2023 7:04 AM, Terren Suydam wrote:
Do you have a theory for why neurology supports consciousness but silicon circuitry cannot?

I'm agnostic about this, but that's because I no longer assume physicalism. For me, the hard problem signals that physicalism is impossible. I've argued on this list many times as a physicalist, as one who believes in the possibility of artificial consciousness, uploading, etc. I've argued that there is something it is like to be a cybernetic system. But at the end of it all, I just couldn't overcome the problem of aesthetic valence

Why would aesthetic valence be a problem for physicalism? Even bacteria know enough to swim away from some chemical gradients and toward others.

Brent

smitra

unread,
May 27, 2023, 8:19:31 PM5/27/23
to everyth...@googlegroups.com
Indeed, and as I pointed out, it's not all that difficult to debunk the idea that it understands anything at all by asking simple questions that are not included in its database. You can test chatGPT just like you can test students who you suspect have cheated at exams. You invite them for clarification in your office, and let them do some problems in front of you on the blackboard. If those questions are simpler than the exam problems and the student cannot do those, then that's a red flag.

Similarly, as discussed here, chatGPT was able to give the derivation of the moment of inertia of a sphere, but was unable to derive this in a much simpler way by invoking spherical symmetry even when given lots of hints. All it could do was write down the original derivation again and then argue that the moment of inertia is the same for all axes, and that the result is spherically symmetric. But it couldn't derive the expression for the moment of inertia by making use of that (adding up the moments of inertia in 3 orthogonal directions yields a spherically symmetric integral that's much easier to compute). The reason why it can't do this is because it's not in its database.

And there are quite a few such cases where the widely published solution is significantly more complex than another solution which isn't widely published and may not be in chatGPT's database. For example:

Derive that the flux of isotropic radiation incident on an area is 1/4 u c, where u is the energy density and c the speed of light.

Standard solution: The part of the flux coming from a solid angle range dOmega is u c cos(theta) dOmega/(4 pi), where theta is the angle with the normal of the surface. Writing dOmega as sin(theta) dtheta dphi and integrating over the half-sphere from which the radiation can reach the area yields:

Flux = u c/(4 pi) * [Integral over phi from 0 to 2 pi of dphi] * [Integral over theta from 0 to pi/2 of sin(theta) cos(theta) dtheta] = 1/4 u c

chatGPT will probably have no problems blurting this out, because this can be found in almost all sources.

But the fact that radiation is isotropic should be something that we could exploit to simplify this derivation. That's indeed possible. The reason why we couldn't in the above derivation was because we let the area be a small flat area that broke spherical symmetry. So let's fix that:

Much simpler derivation: Consider a small sphere of radius r inside a cavity filled with isotropic radiation. The amount of radiation intercepted from a solid angle range dOmega around any direction is then u c pi r^2 dOmega/(4 pi), because the radiation is intercepted by the cross section of the sphere in the direction orthogonal to where the radiation is coming from, and that's always pi r^2. Because this doesn't depend on the direction the radiation is coming from, integrating over the solid angle is now trivial; this yields u c pi r^2. The flux intercepted by an area element on the sphere is then obtained by dividing this by the area 4 pi r^2 of the sphere, which is therefore 1/4 u c. And if that's the flux incident on an area element of a sphere, it is also the flux through it if the rest of the sphere weren't there.
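A quick numerical sanity check of that 1/4 factor in Python (with u = c = 1, so the flux should come out to 0.25):

import math

# Midpoint-rule evaluation of (1/(4 pi)) * [Int_0^{2pi} dphi]
#                                        * [Int_0^{pi/2} sin(t) cos(t) dt]
n = 100000
dt = (math.pi / 2) / n
theta_integral = sum(
    math.sin((k + 0.5) * dt) * math.cos((k + 0.5) * dt) * dt
    for k in range(n)
)                                   # analytic value: 1/2
flux = (1.0 / (4 * math.pi)) * (2 * math.pi) * theta_integral
print(flux)                         # ~0.25, i.e. (1/4) u c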

chatGPT probably won't be able to present this much simpler derivation regardless of how many hints you give it.


Saibal

John Clark

unread,
May 28, 2023, 7:15:58 AM5/28/23
to everyth...@googlegroups.com
On Sat, May 27, 2023 at 8:19 PM smitra <smi...@zonnet.nl> wrote:

> chatGPT was able to give the derivation of the moment of inertia of a sphere, but was unable to derive this in a much simpler way

First of all, GPT-4 is much smarter than chatGPT, so you should try that. And for reasons that are not entirely clear, if you find it acting dumb on a particular problem you can greatly improve performance by encouraging it simply by ending your request with the words "Let's work this out in a step by step way to be sure we have the right answer".

Also, there are an infinite number of ways to prove a true statement, the fact that chatGPT did not use the proof that you personally like best does not necessarily mean it doesn't understand the concept involved because the shortest derivation is not necessarily the simplest if "simplest" is to mean easiest to understand. If that's what the word means then the "simplest" would be a subjective matter of taste. A proof that's simple for me may be confusing to you and vice versa even though both proofs are correct.  

And by the way, currently GPT-4 is as dumb as it's ever going to be.

John K Clark    See what's on my new list at  Extropolis
sfm