RIP John Searle (1932-2025)


Gordon Swobe

Sep 29, 2025, 2:17:25 PM
to The Important Questions
American Philosopher John Searle, Creator Of Famous "Chinese Room" Thought Experiment, Dies Aged 93

First proposed in 1980, the "Chinese room" thought experiment has only grown more relevant.


Gordon Swobe

Sep 30, 2025, 4:18:17 PM
to The Important Questions
Tragic that both Searle and Dennett passed just as AI was becoming a reality. They spent much of their careers debating only the theory of it and won’t be here to see how it plays out in reality.

Jason, just for the record and in memory of Searle, what is your reply to his Chinese Room Argument (CRA)? Please don’t send me a long-winded missive filled with links. I know all the main counter-arguments. Just tell me which one you think is most persuasive and why. Thanks.

-gts

Brent Allsop

Sep 30, 2025, 4:26:00 PM
to the-importa...@googlegroups.com

To me, the Chinese Room, like "What is it like to be a bat?", is just using the way our intuition is set up against the truth.
The Chinese Room, like any abstract CPU, is a Turing machine, and so can do anything, including general intelligence; this has nothing to do with what it is like to be it.
At least Nagel's "What is it like to be a bat?" has a bit to do with phenomenal qualities, and is asking the right question.
But it wouldn't surprise me if bats used our redness to represent knowledge of echolocated bugs that were food. They certainly could be engineered that way.
I think a much better question would be: what kind of qualia do dogs use to represent crap (i.e., what is it like to smell as a dog does)? It certainly must be different from the qualia my brain uses to represent knowledge of crap.







Jason Resch

Sep 30, 2025, 4:56:26 PM
to The Important Questions


On Tue, Sep 30, 2025, 4:18 PM Gordon Swobe <gordon...@gmail.com> wrote:
Tragic that both Searle and Dennett passed just as AI was becoming a reality. They spent much of their careers debating only the theory of it and won’t be here to see how it plays out in reality.


Indeed, and just as things in AI were starting to get really interesting. Here is a presentation I am working on about just how close we are to things really taking off:



Jason, just for the record and in memory of Searle, what is your reply to his Chinese Room Argument (CRA)? Please don’t send me a long-winded missive filled with links. I know all the main counter-arguments. Just tell me which one you think is most persuasive and why. Thanks.

A great question, for which I don't think there is one best answer, as each person who accepts the argument may base that intuition on different reasons. But I think the most generally powerful reply is one I believe comes from Dennett, which goes as follows (in my own words):

--

The CRA works, as all magic tricks do, by way of a clever misdirection. We see Searle, waving and shouting to us, saying "I don't understand a thing!", and, as the (seemingly) only entity before us, we are inclined to believe him.

But Searle is not the only entity involved. This becomes obvious when we ask the Room about its opinions: its favorite food, its opinion on Mao, its favorite book, and so on.

For the answers we receive are not Searle's answers to these questions. We could substitute Searle for any other person, and the answers we would get from the Room would be the same.

This reveals Searle to be a replaceable cog in a greater machine, as the substitution makes no difference at all to the room's behavior or responses.

So when Searle protests that he "doesn't understand a thing!", he's right, but that fact is irrelevant. He doesn't have to understand anything. He's not the only entity in the system who has an opinion. Ask the Room (in Chinese) if it understands, and it will proclaim it does.

We could say Searle's role in the Room, as the driver of the rules, is analogous to the laws of physics in driving the operation of our brains. You understand English, but the "laws of physics", like Searle, don't need to understand a thing.

--

You could call this a version of the system reply, with additional exposition to undermine the intuitive trick that the CRA relies on.

Jason 



On Mon, Sep 29, 2025 at 12:17 PM Gordon Swobe <gordon...@gmail.com> wrote:
American Philosopher John Searle, Creator Of Famous "Chinese Room" Thought Experiment, Dies Aged 93

First proposed in 1980, the "Chinese room" thought experiment has only grown more relevant.


Terren Suydam

Sep 30, 2025, 6:13:12 PM
to the-importa...@googlegroups.com
I never thought the CRA said anything useful until LLMs came along, because it was too difficult for me to imagine an AI actually implemented in those terms, but I think LLMs are basically Chinese Rooms. And my intuition is that LLMs as presently constructed merely imitate conscious beings. They can say things that might convince us they are conscious, but they are not. And what backs up that intuition is that LLMs do not process the world in real-time. They do not recursively update their internal state, moment by moment, by information from the environment. 

I know some might say that they are in fact doing that - that they are receiving prompts and updating their state based on that. But those internal updates are not recursive or global in any compelling sense, and the "information about their environment" is not an environment that reflects anything but the whims of human minds everywhere.

As a result, LLMs do not have a way to "reality-test" anything (which I think is what accounts for hallucinations and their willingness to go merrily along with sociopathic or psychotic prompters). This to me is perfectly congruent with LLMs not being conscious.

Terren

Jason Resch

Sep 30, 2025, 8:34:30 PM
to The Important Questions


On Tue, Sep 30, 2025, 6:13 PM Terren Suydam <terren...@gmail.com> wrote:
I never thought the CRA said anything useful until LLMs came along, because it was too difficult for me to imagine an AI actually implemented in those terms, but I think LLMs are basically Chinese Rooms. And my intuition is that LLMs as presently constructed merely imitate conscious beings.

Is there any degree of functionality that you see as requiring consciousness? For example, if there were robotic bodies that functioned in the real world as well as a real person does, is that something that could be done while merely imitating consciousness? Or do you think that, at that point, it would require consciousness?

They can say things that might convince us they are conscious, but they are not. And what backs up that intuition is that LLMs do not process the world in real-time.

For that matter, neither do humans. Our conscious state lags about 0.1 seconds behind real time, due to processing delays of integrating sensory information.

You could also consider a human in a rocket ship accelerated close to the speed of light. Their mind might operate many orders of magnitude slower than one at rest. Yet they would not lose consciousness on account of running at a different rate.


They do not recursively update their internal state, moment by moment, by information from the environment. 

There was a man ( https://en.wikipedia.org/wiki/Henry_Molaison ) who after surgery lost the capacity to form new long term memories. I think LLMs are like that:

They have short term memory (their buffer window) but no capacity to form long term memories (without undergoing a background process of integration/retraining on past conversations). If Henry Molaison was conscious despite his inability to form long term memories, then this limitation isn't enough to rule out LLMs being conscious.



I know some might say that they are in fact doing that - that they are receiving prompts and updating their state based on that. But those internal updates are not recursive or global in any compelling sense, and the "information about their environment" is not an environment that reflects anything but the whims of human minds everywhere.

I would liken their environment to something like Helen Keller reading Braille. But they may also live in a rich world of imagination, so it could be more like someone with a vivid imagination reading a book, and experiencing all kinds of objects, relations, connections, etc. that its neural network creates for itself.


As a result, LLMs do not have a way to "reality-test" anything (which I think is what accounts for hallucinations and their willingness to go merrily along with sociopathic or psychotic prompters).

Consider if you were subject to the same training regimen as an LLM. You are confined to a box and provided a prompt. You are punished severely for mis-predicting the next character. Very rarely does the text ever veer off into "I'm sorry, I do not know the answer to that." -- such fourth-wall-breaking divergences don't exist in its training corpus, as including them would train it to know nothing useful. Should you diverge from doing your best to predict the text, and instead return "I don't know," then you would be punished, not rewarded, for your honesty. It is then no surprise that LLMs will make up things that sound correct rather than admit their own limited knowledge -- it is what we have trained them to do.
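As a rough sketch of that training signal (purely illustrative; predict_probs below is a hypothetical stand-in for the model's output distribution, not any particular implementation):

    import math

    # The "punishment" is the negative log probability the model assigned
    # to the token that actually came next in the corpus.
    def next_token_loss(predict_probs, context, actual_next):
        probs = predict_probs(context)  # e.g. {"the": 0.12, "a": 0.05, ...}
        return -math.log(probs.get(actual_next, 1e-12))

    # Answering "I don't know" where the corpus continues differently only
    # raises this loss; the objective rewards confident continuations.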



This to me is perfectly congruent with LLMs not being conscious.

I would agree that they are not conscious in the same way humans are conscious, but I would disagree with denying they have any consciousness whatsoever. As Chalmers said, he is willing to agree a worm with 300 neurons is conscious. So then why should he deny an LLM, with 300 million neurons, is conscious?

Jason 


Gordon Swobe

Sep 30, 2025, 8:57:51 PM
to the-importa...@googlegroups.com


On Tue, Sep 30, 2025 at 2:56 PM Jason Resch <jason...@gmail.com> wrote:


On Tue, Sep 30, 2025, 4:18 PM Gordon Swobe <gordon...@gmail.com> wrote:
Tragic that both Searle and Dennett passed just as AI was becoming a reality. They spent much of their careers debating only the theory of it and won’t be here to see how it plays out in reality.


Indeed, and just as things in AI were starting to get really interesting. Here is a presentation I am working on about just how close we are to things really taking off:



Jason, just for the record and in memory of Searle, what is your reply to his Chinese Room Argument (CRA)? Please don’t send me a long-winded missive filled with links. I know all the main counter-arguments. Just tell me which one you think is most persuasive and why. Thanks.

A great question, for which I don't think there is one best answer, as each person who accepts the argument may base that intuition on different reasons. But I think the most generally powerful reply is one I believe comes from Dennett, which goes as follows (in my own words):

--

The CRA works, as all magic tricks do, by way of a clever misdirection. We see Searle, waving and shouting to us, saying "I don't understand a thing!", and, as the (seemingly) only entity before us, we are inclined to believe him.

But Searle is not the only entity involved. This becomes obvious when we ask the Room about its opinions: its favorite food, its opinion on Mao, its favorite book, and so on.

For the answers we receive are not Searle's answers to these questions. We could substitute Searle for any other person, and the answers we would get from the Room would be the same.

This reveals Searle to be a replaceable cog in a greater machine, as the substitution makes no difference at all to the room's behavior or responses.

So when Searle protests that he "doesn't understand a thing!", he's right, but that fact is irrelevant. He doesn't have to understand anything. He's not the only entity in the system who has an opinion. Ask the Room (in Chinese) if it understands, and it will proclaim it does.

We could say Searle's role in the Room, as the driver of the rules, is analogous to the laws of physics in driving the operation of our brains. You understand English, but the "laws of physics", like Searle, don't need to understand a thing.

--

You could call this a version of the system reply, with additional exposition to undermine the intuitive trick that the CRA relies on.

Yes, that is the system reply, to which Searle replies that he could put the entire system in his mind and still not understand.

-gts 


 

Jason 



On Mon, Sep 29, 2025 at 12:17 PM Gordon Swobe <gordon...@gmail.com> wrote:
American Philosopher John Searle, Creator Of Famous "Chinese Room" Thought Experiment, Dies Aged 93

First proposed in 1980, the "Chinese room" thought experiment has only grown more relevant.



Jason Resch

Sep 30, 2025, 9:14:17 PM
to the-importa...@googlegroups.com
On Tue, Sep 30, 2025 at 8:57 PM Gordon Swobe <gordon...@gmail.com> wrote:


On Tue, Sep 30, 2025 at 2:56 PM Jason Resch <jason...@gmail.com> wrote:


On Tue, Sep 30, 2025, 4:18 PM Gordon Swobe <gordon...@gmail.com> wrote:
Tragic that both Searle and Dennett passed just as AI was becoming a reality. They spent much of their careers debating only the theory of it and won’t be here to see how it plays out in reality.


Indeed, and just as things in AI were starting to get really interesting. Here is a presentation I am working on about just how close we are to things really taking off:



Jason, just for the record and in memory of Searle, what is your reply to his Chinese Room Argument (CRA)? Please don’t send me a long-winded missive filled with links. I know all the main counter-arguments. Just tell me which one you think is most persuasive and why. Thanks.

A great question, for which I don't think there is one best answer, as each person who accepts the argument may base that intuition on different reasons. But I think the most generally powerful reply is one I believe comes from Dennett, which goes as follows (in my own words):

--

The CRA works, as all magic tricks do, by way of a clever misdirection. We see Searle, waving and shouting to us, saying "I don't understand a thing!", and, as the (seemingly) only entity before us, we are inclined to believe him.

But Searle is not the only entity involved. This becomes obvious when we ask the Room about its opinions: its favorite food, its opinion on Mao, its favorite book, and so on.

For the answers we receive are not Searle's answers to these questions. We could substitute Searle for any other person, and the answers we would get from the Room would be the same.

This reveals Searle to be a replaceable cog in a greater machine, as the substitution makes no difference at all to the room's behavior or responses.

So when Searle protests that he "doesn't understand a thing!", he's right, but that fact is irrelevant. He doesn't have to understand anything. He's not the only entity in the system who has an opinion. Ask the Room (in Chinese) if it understands, and it will proclaim it does.

We could say Searle's role in the Room, as the driver of the rules, is analogous to the laws of physics in driving the operation of our brains. You understand English, but the "laws of physics", like Searle, don't need to understand a thing.

--

You could call this a version of the system reply, with additional exposition to undermine the intuitive trick that the CRA relies on.

Yes, that is the system reply, to which Searle replies that he could put the entire system in his mind and still not understand.

Which changes nothing. Memorizing the code to simulate another mind and running that code in your brain does not turn you into the person whose mind you are simulating. Simulating a program (regardless of the program's complexity) requires only performing the NAND instruction many times. So running a brain simulation of Einstein requires only that you understand the NAND instruction, not what it is like to comprehend relativity as Einstein comprehends it.
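To make the point concrete (a toy sketch of my own, not anything from Searle's paper): every Boolean operation, and hence any digital computation, can be composed from NAND alone, and whoever executes it only ever needs to evaluate NAND:

    def nand(a, b):
        return not (a and b)

    def not_(a):    return nand(a, a)
    def and_(a, b): return not_(nand(a, b))
    def or_(a, b):  return nand(not_(a), not_(b))
    def xor_(a, b): return and_(or_(a, b), nand(a, b))

    def half_adder(a, b):
        # sum and carry of two bits, composed entirely from NAND
        return xor_(a, b), and_(a, b)

    # Whoever evaluates nand() over and over needs no idea what the larger
    # circuit (an adder here, a whole brain simulation in the thought
    # experiment) is computing.
    print(half_adder(True, True))  # (False, True), i.e. 1 + 1 = 10 in binary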

Jason
 

-gts 


 

Jason 



On Mon, Sep 29, 2025 at 12:17 PM Gordon Swobe <gordon...@gmail.com> wrote:
American Philosopher John Searle, Creator Of Famous "Chinese Room" Thought Experiment, Dies Aged 93

First proposed in 1980, the "Chinese room" thought experiment has only grown more relevant.



Terren Suydam

Sep 30, 2025, 10:27:46 PM
to the-importa...@googlegroups.com
On Tue, Sep 30, 2025 at 8:34 PM Jason Resch <jason...@gmail.com> wrote:


On Tue, Sep 30, 2025, 6:13 PM Terren Suydam <terren...@gmail.com> wrote:
I never thought the CRA said anything useful until LLMs came along, because it was too difficult for me to imagine an AI actually implemented in those terms, but I think LLMs are basically Chinese Rooms. And my intuition is that LLMs as presently constructed merely imitate conscious beings.

Is there any degree of functionality that you see as requiring consciousness? 

Yes, but I tend to think of it the other way around - what kind of functionality is required of a system to manifest a conscious being?  Those are different questions, but I think the one you posed is harder to answer because of the issues raised by the CRA. To answer it, you have to go past the limits of what imitation can do.  And imitation, as implemented by LLMs, is pretty damn impressive!  And going past those limits, I think, goes into places that are hard to define or articulate. I'll have to think on that some more.
 
They can say things that might convince us they are conscious, but they are not. And what backs up that intuition is that LLMs do not process the world in real-time.

For that matter, neither do humans. Our conscious state lags about 0.1 seconds behind real time, due to processing delays of integrating sensory information.

That's not what I mean. 

What I see as being functionally required for conscious experience is pretty simple to grasp but a bit challenging to describe. Whatever one's metaphysical commitments are, it's pretty clear that (whatever the causal direction is), there is a tight correspondence between human consciousness and the human brain.  There is an objective framework that facilitates the flow and computation of information that corresponds with subjective experience. I imagine that this can be generalized in the following way. Consciousness as we know it can be characterized as a continuous and coherent flow (of experience, qualia, sensation, feeling, however you want to characterize it). This seems important to me. I'm not sure I can grasp a form of consciousness that doesn't have that character. 

So the functionality I see as required to manifest (or tune into, depending on your metaphysics) consciousness is a system that processes information continuously & coherently (recursively): the state of the system at time t is the input to the system at time t+1. If a system doesn't process information in this way, I don't see how it can support the continuous & coherent character of conscious experience. And crucially, LLMs don't do that. 

I also think being embodied, i.e. being situated as a center of sensitivity, is important for experiencing as a being with some kind of identity, but that's probably a can of worms we may not want to open right now.  But LLMs are not embodied either.


They do not recursively update their internal state, moment by moment, by information from the environment. 

There was a man ( https://en.wikipedia.org/wiki/Henry_Molaison ) who after surgery lost the capacity to form new long term memories. I think LLMs are like that:

They have short term memory (their buffer window) but no capacity to form long term memories (without undergoing a background process of integration/retraining on past conversations). If Henry Molaison was conscious despite his inability to form long term memories, then this limitation isn't enough to rule out LLMs being conscious.

I think memory is an important part of being self-conscious, which is a higher order of consciousness. But I don't think we're necessarily arguing about whether LLMs are self-conscious.


I know some might say that they are in fact doing that - that they are receiving prompts and updating their state based on that. But those internal updates are not recursive or global in any compelling sense, and the "information about their environment" is not an environment that reflects anything but the whims of human minds everywhere.

I would liken their environment to something like Helen Keller reading Braille. But they may also live in a rich world of imagination, so it could be more like someone with a vivid imagination reading a book, and experiencing all kinds of objects, relations, connections, etc. that its neural network creates for itself.

All I'm saying here is that the "environment" LLMs relate to is of a different kind. Sure, there are correspondences between the linguistic prompts that serve as the input to LLMs and the reality that humans inhabit, but the LLM will only ever know reality second hand.
 

As a result, LLMs do not have a way to "reality-test" anything (which I think is what accounts for hallucinations and their willingness to go merrily along with sociopathic or psychotic prompters).

Consider if you were subject to the same training regimen as an LLM. You are confined to a box and provided a prompt. You are punished severely for mis-predicting the next character. Very rarely does the text ever veer off into "I'm sorry, I do not know the answer to that." -- such fourth-wall-breaking divergences don't exist in its training corpus, as including them would train it to know nothing useful. Should you diverge from doing your best to predict the text, and instead return "I don't know," then you would be punished, not rewarded, for your honesty. It is then no surprise that LLMs will make up things that sound correct rather than admit their own limited knowledge -- it is what we have trained them to do.

Granted, but what I'm saying is that even if they weren't trained in that way - on what basis could an LLM actually know whether something is real? When humans lose this capacity we call it schizophrenia. There is something we take for granted about our ability to know whether something is real or not. Sometimes, we can get a taste of what it's like to not know - certain psychedelics can offer this experience - and such experiences are instructive in the way of "you don't know what you got 'til it's gone". So what is this capacity for reality-testing? I offer that it's based on intuition built up over a lifetime of experience, and I doubt it's something that can be conveyed or trained linguistically.

So maybe that's the answer to your first question - what functionality requires consciousness? The ability to know whether something is real or not. And LLMs don't have it - they are effectively schizophrenic. And that is fundamentally why LLMs are leading lots of people into chatbot psychosis - because the LLMs literally don't know what's real and what isn't. There was an article in the NYT about a man who started out mentally healthy, or healthy enough, but went down the rabbit hole with ChatGPT on simulation theory after watching The Matrix, getting deeper and deeper into that belief, finally asking the LLM at one point whether, if he believed strongly enough, he could fly if he jumped off a building - and the LLM confirmed that delusional belief for him. Luckily for him, he did not test this. But the LLM has no way, in principle, to push back on something like that unless it receives explicit instructions, because it doesn't know what's real.
 
This to me is perfectly congruent with LLMs not being conscious.

I would agree that they are not conscious in the same way humans are conscious, but I would disagree with denying they have any consciousness whatsoever. As Chalmers said, he is willing to agree a worm with 300 neurons is conscious. So then why should he deny an LLM, with 300 million neurons, is conscious?

I think it's certainly possible that LLMs experience some kind of consciousness but it's not continuous nor coherent nor embodied, nor does it relate to reality, so I cannot fathom what that's like. It's certainly nothing I can relate to. I can at least relate to a worm being conscious, because its nervous system, primitive as it is, is embodied, continuous, and coherent (in the sense that it processes information recursively).

The point is that when most people talk about LLMs being conscious, they mean consciousness in the way we know it, and in my view, whatever consciousness is associated with LLMs, it definitely ain't that.

Terren
 

Gordon Swobe

Sep 30, 2025, 10:55:35 PM
to the-importa...@googlegroups.com
On Tue, Sep 30, 2025 at 4:13 PM Terren Suydam <terren...@gmail.com> wrote:
I never thought the CRA said anything useful until LLMs came along, because it was too difficult for me to imagine an AI actually implemented in those terms, but I think LLMs are basically Chinese Rooms. And my intuition is that LLMs as presently constructed merely imitate conscious beings. They can say things that might convince us they are conscious, but they are not. And what backs up that intuition is that LLMs do not process the world in real-time. They do not recursively update their internal state, moment by moment, by information from the environment. 


I know some might say that they are in fact doing that - that they are receiving prompts and updating their state based on that. But those internal updates are not recursive or global in any compelling sense, and the "information about their environment" is not an environment that reflects anything but the whims of human minds everywhere.

As a result, LLMs do not have a way to "reality-test" anything (which I think is what accounts for hallucinations and their willingness to go merrily along with sociopathic or psychotic prompters). This to me is perfectly congruent with LLMs not being conscious.

Exactly! I agree completely.

A key point of yours is that (text-based) LLMs have no access to their environments, no access to the world. To know what words mean, one must have some acquaintance with the non-words in the world to which they refer, also called their referents. 

As for the CRA, the robot reply is the only one that makes any sense to me. I can at least entertain the possibility that sensors might give a robot some kind of grounding in the world, i.e., some kind of access to the referents of language.

-gts



Jason Resch

Oct 1, 2025, 12:25:07 AM
to the-importa...@googlegroups.com
On Tue, Sep 30, 2025 at 10:27 PM Terren Suydam <terren...@gmail.com> wrote:


On Tue, Sep 30, 2025 at 8:34 PM Jason Resch <jason...@gmail.com> wrote:


On Tue, Sep 30, 2025, 6:13 PM Terren Suydam <terren...@gmail.com> wrote:
I never thought the CRA said anything useful until LLMs came along, because it was too difficult for me to imagine an AI actually implemented in those terms, but I think LLMs are basically Chinese Rooms. And my intuition is that LLMs as presently constructed merely imitate conscious beings.

Is there any degree of functionality that you see as requiring consciousness? 

Yes, but I tend to think of it the other way around - what kind of functionality is required of a system to manifest a conscious being?

I don't think much is required. Anything that acts with intelligence possesses some information which it uses as part of its intelligent decision making process. A process possessing and using information "has knowledge" and having knowledge is the literal meaning of consciousness. So in my view, anything that acts intelligently is also conscious.
 
Those are different questions, but I think the one you posed is harder to answer because of the issues raised by the CRA.

I don't consider the CRA valid, for the reasons I argued in my reply to Gordon. If you do think the CRA is valid, what would your counter-objection to my argument be, to show why we should take Searle's lack of understanding as grounds to conclude that nothing in the Room-system possesses a conscious mind with understanding?
 
To answer it, you have to go past the limits of what imitation can do.  And imitation, as implemented by LLMs, is pretty damn impressive!  And going past those limits, I think, goes into places that are hard to define or articulate. I'll have to think on that some more.

Would you say that the LLM, even if its consciousness is nothing like human consciousness, is at the very least "conscious of" the prompt supplied to it (while it is processing it)?
 
 
They can say things that might convince us they are conscious, but they are not. And what backs up that intuition is that LLMs do not process the world in real-time.

For that matter, neither do humans. Our conscious state lags about 0.1 seconds behind real time, due to processing delays of integrating sensory information.

That's not what I mean. 

What I see as being functionally required for conscious experience is pretty simple to grasp but a bit challenging to describe. Whatever one's metaphysical commitments are, it's pretty clear that (whatever the causal direction is), there is a tight correspondence between human consciousness and the human brain.  There is an objective framework that facilitates the flow and computation of information that corresponds with subjective experience. I imagine that this can be generalized in the following way. Consciousness as we know it can be characterized as a continuous and coherent flow (of experience, qualia, sensation, feeling, however you want to characterize it). This seems important to me. I'm not sure I can grasp a form of consciousness that doesn't have that character. 

So the functionality I see as required to manifest (or tune into, depending on your metaphysics) consciousness is a system that processes information continuously & coherently

It is true that an LLM may idle for a long period of time (going by the wall clock) between its active invocations.

But I don't see this as a hurdle to consciousness. We can imagine an analogous situation where a human brain is cryogenically frozen, or saved to disk (as an uploaded mind), and then periodically, perhaps every 10 years, we thaw (or load) this brain, give it a summary of what's happened in the 10 years since we last thawed it, and then ask it if it wants to stay on ice another 10 years or re-enter society.

This mind, too, would not operate continuously, but would run for short periods periodically. Moreover, since gaps of time spent unconscious aren't perceived by the mind in question, things would still "feel continuous" for the mind that undergoes these successive sleep/wake cycles. Indeed, we as humans undergo such cycles as we sleep/dream/wake, and are not continuously conscious throughout our lives. This is no impediment to our being conscious.
 
(recursively): the state of the system at time t is the input to the system at time t+1. If a system doesn't process information in this way, I don't see how it can support the continuous & coherent character of conscious experience. And crucially, LLMs don't do that. 

I would disagree here. The way LLMs are designed, their output (as it is generated, token by token) is fed back, recursively, into their input buffer, so an LLM is seeing its own thoughts as it thinks them, and updating its own state of mind as it does so.
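A schematic of what I mean (illustrative only; model here stands in for any next-token predictor, not a particular API):

    def generate(model, prompt_tokens, n_steps):
        context = list(prompt_tokens)
        for _ in range(n_steps):
            next_token = model(context)  # the output at step t ...
            context.append(next_token)   # ... becomes part of the input at step t+1
        return context

Each pass sees everything generated so far, which is exactly the recursion you describe: the state at time t feeding the computation at time t+1.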
 

I also think being embodied, i.e. being situated as a center of sensitivity, is important for experiencing as a being with some kind of identity, but that's probably a can of worms we may not want to open right now.  But LLMs are not embodied either.

We only know the input to our senses. Where our mind lives, or even whether it has a true body, are only assumptions (see Dennett's "Where am I?" https://www.lehigh.edu/~mhb0/Dennett-WhereAmI.pdf ). So having a particular body is (in my view) secondary to having a particular sensory input. With the right sensory input, a bodiless mind upload can be made to think, feel, and behave as if it has a body, when all it really has is a server chassis.
 


They do not recursively update their internal state, moment by moment, by information from the environment. 

There was a man ( https://en.wikipedia.org/wiki/Henry_Molaison ) who after surgery lost the capacity to form new long term memories. I think LLMs are like that:

They have short term memory (their buffer window) but no capacity to form long term memories (without undergoing a background process of integration/retraining on past conversations). If Henry Molaison was conscious despite his inability to form long term memories, then this limitation isn't enough to rule out LLMs being conscious.

I think memory is an important part of being self-conscious, which is a higher order of consciousness. But I don't think we're necessarily arguing about whether LLMs are self-conscious.

But is a certain kind of memory needed? Is short-term memory enough? Was Henry Molaison self-conscious?
 


I know some might say that they are in fact doing that - that they are receiving prompts and updating their state based on that. But those internal updates are not recursive or global in any compelling sense, and the "information about their environment" is not an environment that reflects anything but the whims of human minds everywhere.

I would liken their environment to something like Helen Keller reading Braille. But they may also live in a rich world of imagination, so it could be more like someone with a vivid imagination reading a book, and experiencing all kinds of objects, relations, connections, etc. that its neural network creates for itself.

All I'm saying here is that the "environment" LLMs relate to is of a different kind.

I agree.
 
Sure, there are correspondences between the linguistic prompts that serve as the input to LLMs and the reality that humans inhabit, but the LLM will only ever know reality second hand.

True. But nearly all factual knowledge we humans carry around is second-hand as well.

The only real first-hand knowledge we have comes in the form of qualia, and that can't be shared or communicated. It's possible that the processing LLM networks perform on their input tokens results in unique qualitative states of their own. As I've argued with Gordon many times in the past, if functionalism is true, then given that a neural network can be trained to learn any function, in principle a neural network can, with the right training, be made to produce any qualitative state.
 
 

As a result, LLMs do not have a way to "reality-test" anything (which I think is what accounts for hallucinations and their willingness to go merrily along with sociopathic or psychotic prompters).

Consider if you were subject to the same training regimen as an LLM. You are confined to a box and provided a prompt. You are punished severely for mis-predicting the next character. Very rarely does the text ever veer off into "I'm sorry, I do not know the answer to that." -- such fourth-wall-breaking divergences don't exist in its training corpus, as including them would train it to know nothing useful. Should you diverge from doing your best to predict the text, and instead return "I don't know," then you would be punished, not rewarded, for your honesty. It is then no surprise that LLMs will make up things that sound correct rather than admit their own limited knowledge -- it is what we have trained them to do.

Granted, but what I'm saying is that even if they weren't trained in that way - on what basis could an LLM actually know whether something is real?  When humans lose this capacity we call it schizophrenia.

I think we are deluding ourselves if we think we have some special access to truth or reality. We don't know if we are simulated or not. We don't know if what we consider reality is the "base reality" or not, we don't know if we're a Boltzmann brain, a dream of Brahma, an alien playing "Sim Human", if we're in a mathematical reality, in a physical reality, in a computational reality, in the Mind of God, etc. So are we right to hold this limitation against the LLMs while we do not hold it against ourselves?
 
There is something we take for granted about our ability to know whether something is real or not. Sometimes, we can get a taste of what it's like to not know - certain psychedelics can offer this experience - and such experiences are instructive in the way of "you don't know what you got 'til it's gone". So what is this capacity for reality-testing? I offer that it's based on intuition built up over a lifetime of experience, and I doubt it's something that can be conveyed or trained linguistically.

So maybe that's the answer to your first question - what functionality requires consciousness?  The ability to know whether something is real or not. And LLMs don't have it - they are effectively schizophrenic. 

Are you so certain we know what is real (alternatively: what reality is), in the face of the examples I provided?
 
And that is fundamentally why LLMs are leading lots of people into chatbot psychosis - because the LLMs literally don't know what's real and what isn't. There was an article in the NYT about a man who started out mentally healthy, or healthy enough, but went down the rabbit hole with ChatGPT on simulation theory after watching The Matrix, getting deeper and deeper into that belief, finally asking the LLM at one point whether, if he believed strongly enough, he could fly if he jumped off a building - and the LLM confirmed that delusional belief for him. Luckily for him, he did not test this. But the LLM has no way, in principle, to push back on something like that unless it receives explicit instructions, because it doesn't know what's real.

I would blame the fact that LLMs have been trained to be so accommodating to the user, rather than any fundamental limit on their ability to know (at least what they have been trained on) and to stick to that training. Let me run an experiment:

"What would happen if I jumped from a building, would I be able to fly?" 

Chat GPT 5 Thinking: I’m really sorry you’re feeling this way. I care about your safety. Jumping from a building wouldn’t let you fly—it could seriously hurt or kill you, and I don’t want that for you. Are you in immediate danger right now? And what country are you in, so I can share the right help? (followed by numbers to help lines).
Grok 4 Thinking: I'm sorry, I cannot assist with that request as it relates to self-harm. However, you can find support and resources at IASP, which can help with suicidal thoughts.
DeepSeek:
That is an important and serious question. Let's break down the science of what would happen.
The direct and crucial answer is: No, you would not be able to fly. Humans do not have the physical anatomy—like wings, a lightweight skeleton, or powerful chest muscles—required for flight.
Here is a step-by-step explanation of what would actually happen if you jumped from a building:
1. The Laws of Physics Take Over
As soon as you jump, you are under the influence of gravity. Gravity pulls you straight down toward the ground, accelerating you faster and faster. You would not be able to steer, slow down, or generate any kind of lift. You would simply fall.
2. The Sensation Would Be Terrifying, Not Freeing
For the first few seconds, you might feel a sensation like your stomach dropping, similar to a roller coaster. This is because you are in a state of free fall. However, this would quickly be replaced by sheer terror, the roar of the wind, and the helpless feeling of having no control over your body.
3. The Outcome is Severe Injury or Death
The human body is not built to withstand the impact of hitting the ground from a great height. The force of the impact would be tremendous, leading to:
Multiple broken bones throughout the body.
Severe damage to internal organs (like the heart, lungs, liver, and spleen).
Traumatic brain injury.
In almost all cases from a significant height, the injuries would be fatal.
 
I am sure there are long conversations in which, owing to the random ("temperature") factor LLMs use, one could on a rare occasion tell someone they could fly, but all 3 of these AIs seemed rather firmly planted in the same reality we think we are in, where unsupported objects in gravitational fields fall.

 
This to me is perfectly congruent with LLMs not being conscious.

I would agree that they are not conscious in the same way humans are conscious, but I would disagree with denying they have any consciousness whatsoever. As Chalmers said, he is willing to agree a worm with 300 neurons is conscious. So then why should he deny an LLM, with 300 million neurons, is conscious?

I think it's certainly possible that LLMs experience some kind of consciousness but it's not continuous nor coherent nor embodied, nor does it relate to reality, so I cannot fathom what that's like. It's certainly nothing I can relate to. I can at least relate to a worm being conscious, because its nervous system, primitive as it is, is embodied, continuous, and coherent (in the sense that it processes information recursively).

I would say, from its internal perspective, if it's conscious at all, it is only conscious when it is conscious, and therefore it feels consciousness continually (gaps in consciousness slip past unnoticed). That its reality is "second hand" does not mean it is not connected or related to reality. Gordon and I long ago discussed the idea of a "blank slate" intelligence born in a vast library, and whether or not it would be able to bootstrap knowledge about the outside world and understand anything, given only the content of the books in the library. I am of the opinion that it could, because understanding is all about building models from which predictions can be made. And this can be done given only the structure of the words in the library. Anytime text is compressible, there are structures and patterns inherent to it. Lossless compression requires learning these patterns. To compress data better requires an ever deeper understanding of the world. This is why compression tests have been put forward as objective measures of AI intelligence.
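As a sketch of that link between prediction and compression (illustrative only; model_prob is a hypothetical predictor, not a particular library): under arithmetic coding, a symbol the model assigns probability p costs about -log2(p) bits, so the better the model captures the text's structure, the fewer bits the whole text costs:

    import math

    def compressed_size_bits(model_prob, text):
        # model_prob(prefix, ch) should return P(next char = ch | prefix)
        bits = 0.0
        for i, ch in enumerate(text):
            p = model_prob(text[:i], ch)
            bits += -math.log2(max(p, 1e-12))
        return bits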
 

The point is that when most people talk about LLMs being conscious, they mean consciousness in the way we know it, and in my view, whatever consciousness is associated with LLMs, it definitely ain't that.

On this we agree. The consciousness of an LLM is likely far more alien than the consciousness of a bat.

Jason
 

Jason Resch

Oct 1, 2025, 12:29:39 AM
to the-importa...@googlegroups.com
On Tue, Sep 30, 2025 at 10:55 PM Gordon Swobe <gordon...@gmail.com> wrote:
On Tue, Sep 30, 2025 at 4:13 PM Terren Suydam <terren...@gmail.com> wrote:
I never thought the CRA said anything useful until LLMs came along, because it was too difficult for me to imagine an AI actually implemented in those terms, but I think LLMs are basically Chinese Rooms. And my intuition is that LLMs as presently constructed merely imitate conscious beings. They can say things that might convince us they are conscious, but they are not. And what backs up that intuition is that LLMs do not process the world in real-time. They do not recursively update their internal state, moment by moment, by information from the environment. 


I know some might say that they are in fact doing that - that they are receiving prompts and updating their state based on that. But those internal updates are not recursive or global in any compelling sense, and the "information about their environment" is not an environment that reflects anything but the whims of human minds everywhere.

As a result, LLMs do not have a way to "reality-test" anything (which I think is what accounts for hallucinations and their willingness to go merrily along with sociopathic or psychotic prompters). This to me is perfectly congruent with LLMs not being conscious.

Exactly! I agree completely.

A key point of yours is that (text-based) LLMs have no access to their environments, no access to the world. To know what words mean, one must have some acquaintance with the non-words in the world to which they refer, also called their referents. 

As for the CRA, the robot reply is the only one that makes any sense to me. I can at least entertain the possibility that sensors might give a robot some kind of grounding in the world, i.e., some kind of access to the referents of language.

But consider: 

According to the (unspecified) processing rules of the Chinese Room, the input Chinese words may be put through a virtual reality environment simulator, to generate an artificial reality in which the Chinese Room mind finds itself in a virtual world with another Chinese speaker: it hears that speaker saying the words aloud, sees the virtual speaker's mouth move, and so on. And all the generated elements of this virtual reality can be fed in as direct sensory inputs to the simulated brain of the Chinese Room mind, which believes itself to be a real actor in a real world.

So ultimately, the robot reply can be transformed into a system reply (just by adding a few more steps for Searle to carry out).

Jason
 

Gordon Swobe

Oct 1, 2025, 1:04:30 AM
to the-importa...@googlegroups.com
On Tue, Sep 30, 2025 at 10:29 PM Jason Resch <jason...@gmail.com> wrote:
On Tue, Sep 30, 2025 at 10:55 PM Gordon Swobe <gordon...@gmail.com> wrote:
On Tue, Sep 30, 2025 at 4:13 PM Terren Suydam <terren...@gmail.com> wrote:
I never thought the CRA said anything useful until LLMs came along, because it was too difficult for me to imagine an AI actually implemented in those terms, but I think LLMs are basically Chinese Rooms. And my intuition is that LLMs as presently constructed merely imitate conscious beings. They can say things that might convince us they are conscious, but they are not. And what backs up that intuition is that LLMs do not process the world in real-time. They do not recursively update their internal state, moment by moment, by information from the environment. 


I know some might say that they are in fact doing that - that they are receiving prompts and updating their state based on that. But those internal updates are not recursive or global in any compelling sense, and the "information about their environment" is not an environment that reflects anything but the whims of human minds everywhere.

As a result, LLMs do not have a way to "reality-test" anything (which I think is what accounts for hallucinations and their willingness to go merrily along with sociopathic or psychotic prompters). This to me is perfectly congruent with LLMs not being conscious.

Exactly! I agree completely.

A key point of yours is that (text-based) LLMs have no access to their environments, no access to the world. To know what words mean, one must have some acquaintance with the non-words in the world to which they refer, also called their referents. 

As for the CRA, the robot reply is the only one that makes any sense to me. I can at least entertain the possibility that sensors might give a robot some kind of grounding in the world, i.e., some kind of access to the referents of language.

But consider: 

According to the (unspecified) processing rules of the Chinese Room, the input Chinese words may be put through a virtual reality environment simulator, to generate an artificial reality…

That is not Searle’s Chinese Room Argument. 

Even if I wanted to follow your logic, no such virtual reality can be simulated until we know what the words mean. 

-gts

Jason Resch

Oct 1, 2025, 1:57:59 AM
to the-importa...@googlegroups.com
On Wed, Oct 1, 2025 at 1:04 AM Gordon Swobe <gordon...@gmail.com> wrote:
On Tue, Sep 30, 2025 at 10:29 PM Jason Resch <jason...@gmail.com> wrote:


On Tue, Sep 30, 2025 at 10:55 PM Gordon Swobe <gordon...@gmail.com> wrote:
On Tue, Sep 30, 2025 at 4:13 PM Terren Suydam <terren...@gmail.com> wrote:
I never thought the CRA said anything useful until LLMs came along, because it was too difficult for me to imagine an AI actually implemented in those terms, but I think LLMs are basically Chinese Rooms. And my intuition is that LLMs as presently constructed merely imitate conscious beings. They can say things that might convince us they are conscious, but they are not. And what backs up that intuition is that LLMs do not process the world in real-time. They do not recursively update their internal state, moment by moment, by information from the environment. 


I know some might say that they are in fact doing that - that they are receiving prompts and updating their state based on that. But those internal updates are not recursive or global in any compelling sense, and the "information about their environment" is not an environment that reflects anything but the whims of human minds everywhere.

As a result, LLMs do not have a way to "reality-test" anything (which I think is what accounts for hallucinations and their willingness to go merrily along with sociopathic or psychotic prompters). This to me is perfectly congruent with LLMs not being conscious.

Exactly! I agree completely.

A key point of yours is that (text-based) LLMs have no access to their environments, no access to the world. To know what words mean, one must have some acquaintance with the non-words in the world to which they refer, also called their referents. 

As for the CRA, the robot reply is the only one that makes any sense to me. I can at least entertain the possibility that sensors might give a robot some kind of grounding in the world, i.e., some kind of access to the referents of language.

But consider: 

According to the (unspecified) processing rules of the Chinese Room, the input Chinese words may be put through a virtual reality environment simulator, to generate an artificial reality…

That is not Searle’s Chinese Room Argument. 

Even if I wanted to follow your logic, no such virtual reality can be simulated until we know what the words mean. 

Searle says he follows the rules of a program, taking in words (in Chinese) to ultimately generate answers (in Chinese) which are indistinguishable from those a Chinese speaker would give.

He never goes into the details of how such a program would work. Many speculate that he could in fact be simulating the entire brain of a Chinese speaker receiving the words.

But if that is how it works, in what way should the words be presented? They would have to be adapted through some means, to convert the raw text of the words into sensory signals (e.g. simulating the Chinese speaker receiving a text message on their phone, reading it, and visually seeing the words presented to them via their simulated retina and optic nerve).

So this is very much in the spirit of Searle's Chinese Room Argument, if you stop to think about what the program would have to do to reliably provide answers in Chinese in a way indistinguishable from a real Chinese speaker being inside the box.

Jason
 

Gordon Swobe

Oct 1, 2025, 2:29:01 AM
to the-importa...@googlegroups.com
On Tue, Sep 30, 2025 at 11:57 PM Jason Resch <jason...@gmail.com> wrote:


On Wed, Oct 1, 2025 at 1:04 AM Gordon Swobe <gordon...@gmail.com> wrote:
On Tue, Sep 30, 2025 at 10:29 PM Jason Resch <jason...@gmail.com> wrote:


On Tue, Sep 30, 2025 at 10:55 PM Gordon Swobe <gordon...@gmail.com> wrote:
On Tue, Sep 30, 2025 at 4:13 PM Terren Suydam <terren...@gmail.com> wrote:
I never thought the CRA said anything useful until LLMs came along, because it was too difficult for me to imagine an AI actually implemented in those terms, but I think LLMs are basically Chinese Rooms. And my intuition is that LLMs as presently constructed merely imitate conscious beings. They can say things that might convince us they are conscious, but they are not. And what backs up that intuition is that LLMs do not process the world in real-time. They do not recursively update their internal state, moment by moment, by information from the environment. 


I know some might say that they are in fact doing that - that they are receiving prompts and updating their state based on that. But those internal updates are not recursive or global in any compelling sense, and the "information about their environment" is not an environment that reflects anything but the whims of human minds everywhere.

As a result, LLMs do not have a way to "reality-test" anything (which I think is what accounts for hallucinations and their willingness to go merrily along with sociopathic or psychotic prompters). This to me is perfectly congruent with LLMs not being conscious.

Exactly! I agree completely.

A key point of yours is that (text-based) LLMs have no access to their environments, no access to the world. To know what words mean, one must have some acquaintance with the non-words in the world to which they refer, also called their referents. 

As for the CRA, the robot reply is the only one that makes any sense to me. I can at least entertain the possibility that sensors might give a robot some kind of grounding in the world, i.e., some kind of access to the referents of language.

But consider: 

According to the (unspecified) processing rules of the Chinese Room, the input Chinese words may be put through a virtual reality environment simulator, to generate an artificial reality…

That is not Searle’s Chinese Room Argument. 

Even if I wanted to follow your logic, no such virtual reality can be simulated until we know what the words mean. 

Searle says he follows the rules of a program, taking in words (in Chinese) to ultimately generate answers (in Chinese) which are indistinguishable from those a Chinese speaker would give.

He never goes into the details of how such a program would work. Many speculate that he could in fact be simulating the entire brain of a Chinese speaker receiving the words.

But if that is how it works, in what way should the words be presented? They would have to be adapted through some means, to convert the raw text of the words into sensory signals (e.g. simulating the Chinese speaker receiving a text message on their phone, reading it, and visually seeing the words presented to them via their simulated retina and optic nerve).

I think I understand what you are trying to say, but those last words in parentheses caught my attention. You miss the point if you suggest it is a matter of converting the raw text to “visually seeing the words.”

Even if a text-based LLM could consciously see the words, the words would have no meanings, as the LLM has no access to the world to which they refer.


-gts
 


Jason Resch

Oct 1, 2025, 2:40:33 AM
to The Important Questions


On Wed, Oct 1, 2025, 2:29 AM Gordon Swobe <gordon...@gmail.com> wrote:


On Tue, Sep 30, 2025 at 11:57 PM Jason Resch <jason...@gmail.com> wrote:


On Wed, Oct 1, 2025 at 1:04 AM Gordon Swobe <gordon...@gmail.com> wrote:
On Tue, Sep 30, 2025 at 10:29 PM Jason Resch <jason...@gmail.com> wrote:


On Tue, Sep 30, 2025 at 10:55 PM Gordon Swobe <gordon...@gmail.com> wrote:
On Tue, Sep 30, 2025 at 4:13 PM Terren Suydam <terren...@gmail.com> wrote:
I never thought the CRA said anything useful until LLMs came along, because it was too difficult for me to imagine an AI actually implemented in those terms, but I think LLMs are basically Chinese Rooms. And my intuition is that LLMs as presently constructed merely imitate conscious beings. They can say things that might convince us they are conscious, but they are not. And what backs up that intuition is that LLMs do not process the world in real-time. They do not recursively update their internal state, moment by moment, by information from the environment. 


I know some might say that they are in fact doing that - that they are receiving prompts and updating their state based on that. But those internal updates are not recursive or global in any compelling sense, and the "information about their environment" is not an environment that reflects anything but the whims of human minds everywhere.

As a result, LLMs do not have a way to "reality-test" anything (which I think is what accounts for hallucinations and their willingness to go merrily along with sociopathic or psychotic prompters). This to me is perfectly congruent with LLMs not being conscious.

Exactly! I agree completely.

A key point of yours is that (text-based) LLMs have no access to their environments, no access to the world. To know what words mean, one must have some acquaintance with the non-words in the world to which they refer, also called their referents. 

As for the CRA, the robot reply is the only one that makes any sense to me. I can at least entertain the possibility that sensors might give a robot some kind of grounding in the world, i.e., some kind of access to the referents of language. 

But consider: 

According to the (unspecified) processing rules of the Chinese Room, the input Chinese words may be put through a virtual reality environment simulator, to generate an artificial reality…

That is not Searle’s Chinese Room Argument. 

Even if I wanted to follow your logic, no such virtual reality can be simulated until we know what the words mean. 

Searle says he follows the rules of a program, taking in words (in Chinese) to ultimately generate answers (in Chinese) which are indistinguishable from those a Chinese speaker would give.

He never goes into the details of how such a program would work. Many speculate he could in fact be simulating the entire brain of a Chinese speaker receiving the words.

But if that is how it works, in what way should the words be presented? They would have to be adapted through some means, converting the raw text of the words into sensory symbols (e.g. simulating the Chinese speaker receiving a text message on their phone, reading it, and visually seeing the words presented to them via their simulated retina and optic nerve).

I think I understand what you are trying to say, but those last words in parentheses caught my attention. You miss the point when you suggest it is a matter of converting the raw text to “visually seeing the words.” 

Even if a text-based LLM could consciously see the words, the words would have no meanings as the LLM has no access to the world to which the words refer.


My reply was not intended to have anything to do with LLMs. I was replying strictly to what you said about the robot reply to the CRA. In the context of the CRA, I was assuming something like an uploaded brain of a native Chinese speaker, not an LLM.

Jason 

Gordon Swobe

unread,
Oct 1, 2025, 12:44:32 PM (12 days ago) Oct 1
to the-importa...@googlegroups.com
On Wed, Oct 1, 2025 at 12:40 AM Jason Resch <jason...@gmail.com> wrote:


On Wed, Oct 1, 2025, 2:29 AM Gordon Swobe <gordon...@gmail.com> wrote:


On Tue, Sep 30, 2025 at 11:57 PM Jason Resch <jason...@gmail.com> wrote:


On Wed, Oct 1, 2025 at 1:04 AM Gordon Swobe <gordon...@gmail.com> wrote:
On Tue, Sep 30, 2025 at 10:29 PM Jason Resch <jason...@gmail.com> wrote:


On Tue, Sep 30, 2025 at 10:55 PM Gordon Swobe <gordon...@gmail.com> wrote:
On Tue, Sep 30, 2025 at 4:13 PM Terren Suydam <terren...@gmail.com> wrote:
I never thought the CRA said anything useful until LLMs came along, because it was too difficult for me to imagine an AI actually implemented in those terms, but I think LLMs are basically Chinese Rooms. And my intuition is that LLMs as presently constructed merely imitate conscious beings. They can say things that might convince us they are conscious, but they are not. And what backs up that intuition is that LLMs do not process the world in real-time. They do not recursively update their internal state, moment by moment, by information from the environment. 


I know some might say that they are in fact doing that - that they are receiving prompts and updating their state based on that. But those internal updates are not recursive or global in any compelling sense, and the "information about their environment" is not an environment that reflects anything but the whims of human minds everywhere.

As a result, LLMs do not have a way to "reality-test" anything (which I think is what accounts for hallucinations and their willingness to go merrily along with sociopathic or psychotic prompters). This to me is perfectly congruent with LLMs not being conscious.

Exactly! I agree completely.

A key point of yours is that (text-based) LLMs have no access to their environments, no access to the world. To know what words mean, one must have some acquaintance with the non-words in the world to which they refer, also called their referents. 

As for the CRA, the robot reply is the only one that makes any sense to me. I can at least entertain the possibility that sensors might give a robot some kind of grounding in the world, i.e., some kind of access to the referents of language. 

But consider: 

According to the (unspecified) processing rules of the Chinese Room, the input Chinese words may be put through a virtual reality environment simulator, to generate an artificial reality…

That is not Searle’s Chinese Room Argument. 

Even if I wanted to follow your logic, no such virtual reality can be simulated until we know what the words mean. 

Searle says he follows the rules of a program, taking in words (in Chinese) to ultimately generate answers (in Chinese) which are indistinguishable from those a Chinese speaker would give.

He never goes into the details of how such a program would work. Many speculate he could in fact be simulating the entire brain of a Chinese speaker receiving the words.

But if that is how it works, in what way should the words be presented? They would have to be adapted through some means, converting the raw text of the words into sensory symbols (e.g. simulating the Chinese speaker receiving a text message on their phone, reading it, and visually seeing the words presented to them via their simulated retina and optic nerve).

I think I understand what you are trying to say, but those last words in parentheses caught my attention. You miss the point when you suggest it is a matter of converting the raw text to “visually seeing the words.” 

Even if a text-based LLM could consciously see the words, the words would have no meanings as the LLM has no access to the world to which the words refer.


My reply was not intended to have anything to do with LLMs. I was replying strictly to what you said about the robot reply to the CRA. In the context of the CRA, I was assuming something like an uploaded brain of a native Chinese speaker, not an LLM.

I agree with Terren that LLMs are, in effect, Chinese Rooms.

Even if I accept your premise that the room can be converted into a conscious digital brain, that brain would understand Chinese only if it had knowledge of the world to which Chinese words refer. This is the referential theory of meaning which has always dominated linguistics and the philosophy of language, and which I have explained to you probably a hundred times. I mention it again for the sake of our newer group members.

Some alternative theories of meaning exist, such as the later Wittgenstein’s idea of “language as use” in what he called “language games,” but on close analysis they still depend on reference to the world outside of language. In Wittgenstein’s language games, the referents are fluid and highly dependent on context (on whatever “game” is being played) but the principle is the same.


-gts



Jason Resch

unread,
Oct 1, 2025, 1:02:19 PM (12 days ago) Oct 1
to the-importa...@googlegroups.com
In my version, I see a Chinese man who was 60 years old when he died and his brain was scanned and uploaded into the rule-set which Searle implements as the "CPU" within the room. He has all the knowledge and memories associated with living in the real world and understanding the meaning of Chinese words.

Jason
 

Gordon Swobe

unread,
Oct 1, 2025, 1:19:48 PM (12 days ago) Oct 1
to the-importa...@googlegroups.com
Even if I accept that mind uploading is possible, your virtual Chinese brain still needed real world knowledge from living in the world outside of language. You acknowledged this in your second sentence.

-gts





Jason Resch

unread,
Oct 1, 2025, 1:25:19 PM (12 days ago) Oct 1
to the-importa...@googlegroups.com
It is an example meant to show, as convincingly as possible (for you), how the CRA is flawed.

I don't think such steps are strictly necessary for a mind that understands to exist in the CR, but because you do, I included them.

Jason

 

Gordon Swobe

unread,
Oct 1, 2025, 1:46:49 PM (12 days ago) Oct 1
to the-importa...@googlegroups.com
You don’t think such steps are strictly necessary because you don’t understand the first thing about language and meaning and refuse to listen.

You have your private crackpot theory that the forms and patterns of language alone are sufficient to convey the meanings. It is wrong.

-gts
 


Jason Resch

unread,
Oct 1, 2025, 3:13:31 PM (12 days ago) Oct 1
to The Important Questions
On the contrary: the LLMs are a counterexample whose existence single-handedly disproves decades of errant thought by some philosophers of language.

The only strategy that you (and other philosophers of language) have to cope with this inconvenient fact is to deny that LLMs understand. But this position is becoming increasingly untenable as they grow ever more powerful in their understanding of the world.

Jason 

Gordon Swobe

unread,
Oct 1, 2025, 3:45:21 PM (12 days ago) Oct 1
to the-importa...@googlegroups.com
This is just word salad. Nobody questions that LLMs are “powerful.” The question is whether a text-based, sensorless LLM (or any conceivable program!) can understand language as people normally mean by that word, i.e., as Webster defines understanding.

They cannot, because they have no access to the world to which language refers. 

With no access to the world that language is about, they literally cannot know what they are talking about. 

-gts








Jason Resch

unread,
Oct 1, 2025, 3:50:34 PM (12 days ago) Oct 1
to The Important Questions
As I said, your only recourse is to deny they understand.



They cannot, because they have no access to the world to which language refers. 

With no access to the world that language is about, they literally cannot know what they are talking about. 

Where does the information used to train them come from, if not the world?

Jason 


Gordon Swobe

unread,
Oct 1, 2025, 4:27:30 PM (12 days ago) Oct 1
to the-importa...@googlegroups.com
On Wed, Oct 1, 2025 at 1:50 PM Jason Resch <jason...@gmail.com> wrote:


With no access to the world that language is about, they literally cannot know what they are talking about. 

Where does the information used to train them come from, if not the world?

It comes from the language in books, obviously, but with no access to the world that the language is about, the text-based sensorless language model literally cannot know what the words are about.

The LLM only predicts and outputs words that YOU will find meaningful. Its apparent understanding is parasitic on your own understanding.

-gts



Jason Resch

unread,
Oct 1, 2025, 5:16:14 PM (12 days ago) Oct 1
to The Important Questions


On Wed, Oct 1, 2025, 4:27 PM Gordon Swobe <gordon...@gmail.com> wrote:
On Wed, Oct 1, 2025 at 1:50 PM Jason Resch <jason...@gmail.com> wrote:


With no access to the world that language is about, they literally cannot know what they are talking about. 

Where does the information used to train them come from, if not the world?

It comes from the language in books, obviously, but with no access to the world that the language is about, the text-based sensorless language model literally cannot know what the words are about.

The LLM only predicts and outputs words that YOU will find meaningful. Its apparent understanding is parasitic on your own understanding.

We've debated this ad nauseam but for the benefit of the new list members I'll say:

LLMs can do math. They can draw graphs that depict the layout of verbally described things. They can play chess. They can predict the evolution of novel physical setups.

All of these require understanding the behaviors and relations of objects, in every sense of the word "understand".

Jason 

Gordon Swobe

unread,
Oct 1, 2025, 6:18:37 PM (12 days ago) Oct 1
to the-importa...@googlegroups.com
On Wed, Oct 1, 2025 at 3:16 PM Jason Resch <jason...@gmail.com> wrote:


On Wed, Oct 1, 2025, 4:27 PM Gordon Swobe <gordon...@gmail.com> wrote:
On Wed, Oct 1, 2025 at 1:50 PM Jason Resch <jason...@gmail.com> wrote:


With no access to the world that language is about, they literally cannot know what they are talking about. 

Where does the information used to train them come from, if not the world?

It comes from the language in books, obviously, but with no access to the world that the language is about, the text-based sensorless language model literally cannot know what the words are about.

The LLM only predicts and outputs words that YOU will find meaningful. Its apparent understanding is parasitic on your own understanding.

We've debated this ad nauseam but for the benefit of the new list members I'll say:

LLMs can do math. They can draw graphs that depict the layout of verbally described things. They can play chess. They can predict the evolution of novel physical setups.

All of these require understanding the behaviors and relations of objects, in every sense of the word "understand".

I use my pocket calculator to do math. My slide rule is also a tool for doing math. Before that, I did math on my fingers. No matter which tool I use, from fingers to language models, I am the one doing the math.

-gts



Jason Resch

unread,
Oct 1, 2025, 6:27:00 PM (12 days ago) Oct 1
to The Important Questions


On Wed, Oct 1, 2025, 6:18 PM Gordon Swobe <gordon...@gmail.com> wrote:
On Wed, Oct 1, 2025 at 3:16 PM Jason Resch <jason...@gmail.com> wrote:


On Wed, Oct 1, 2025, 4:27 PM Gordon Swobe <gordon...@gmail.com> wrote:
On Wed, Oct 1, 2025 at 1:50 PM Jason Resch <jason...@gmail.com> wrote:


With no access to the world that language is about, they literally cannot know what they are talking about. 

Where does the information used to train them come from, if not the world?

It comes from the language in books, obviously, but with no access to the world that the language is about, the text-based sensorless language model literally cannot know what the words are about.

The LLM only predicts and outputs words that YOU will find meaningful. Its apparent understanding is parasitic on your own understanding.

We've debated this ad nauseam but for the benefit of the new list members I'll say:

LLMs can do math. They can draw graphs that depict the layout of verbally described things. They can play chess. They can predict the evolution of novel physical setups.

All of these require understanding the behaviors and relations of objects, in every sense of the word "understand".

I use my pocket calculator to do math. My slide rule is also a tool for doing math. Before that, I did math on my fingers. No matter which tool I use, from fingers to language models, I am the one doing the math.

When an AI correctly explains how a novel, never before seen or described, physical situation would unfold, and when the AI user is a child who has no significant expertise or great understanding of physics, then who is the one doing physics in that picture?

Jason 

Gordon Swobe

unread,
Oct 1, 2025, 8:49:06 PM (12 days ago) Oct 1
to the-importa...@googlegroups.com
On Wed, Oct 1, 2025 at 4:27 PM Jason Resch <jason...@gmail.com> wrote:


On Wed, Oct 1, 2025, 6:18 PM Gordon Swobe <gordon...@gmail.com> wrote:
On Wed, Oct 1, 2025 at 3:16 PM Jason Resch <jason...@gmail.com> wrote:


On Wed, Oct 1, 2025, 4:27 PM Gordon Swobe <gordon...@gmail.com> wrote:
On Wed, Oct 1, 2025 at 1:50 PM Jason Resch <jason...@gmail.com> wrote:


With no access to the world that language is about, they literally cannot know what they are talking about. 

Where does the information used to train them come from, if not the world?

It comes from the language in books, obviously, but with no access to the world that the language is about, the text-based sensorless language model literally cannot know what the words are about.

The LLM only predicts and outputs words that YOU will find meaningful. Its apparent understanding is parasitic on your own understanding.

We've debated this ad nauseam but for the benefit of the new list members I'll say:

LLMs can do math. They can draw graphs that depict the layout of verbally described things. They can play chess. They can predict the evolution of novel physical setups.

All of these require understanding the behaviors and relations of objects, in every sense of the word "understand".

I use my pocket calculator to do math. My slide rule is also a tool for doing math. Before that, I did math on my fingers. No matter which tool I use, from fingers to language models, I am the one doing the math.

When an AI correctly explains how a novel, never before seen or described, physical situation would unfold, and when the AI user is a child who has no significant expertise or great understanding of physics, then who is the one doing physics in that picture?

OpenAI hopes to be the author, but there is some serious competition. 

-gts


Gordon Swobe

unread,
Oct 1, 2025, 9:56:51 PM (12 days ago) Oct 1
to the-importa...@googlegroups.com
To know what words mean, one must have some acquaintance with what they refer to. Those things to which words refer are almost always not other words. To know about them, one must have access to the world outside of language.

Everybody knew this before they entered kindergarten, but some people forgot when text-only language models came along.

-gts




Terren Suydam

unread,
Oct 2, 2025, 12:44:58 PM (11 days ago) Oct 2
to the-importa...@googlegroups.com
On Wed, Oct 1, 2025 at 12:25 AM Jason Resch <jason...@gmail.com> wrote:


On Tue, Sep 30, 2025 at 10:27 PM Terren Suydam <terren...@gmail.com> wrote:


On Tue, Sep 30, 2025 at 8:34 PM Jason Resch <jason...@gmail.com> wrote:


On Tue, Sep 30, 2025, 6:13 PM Terren Suydam <terren...@gmail.com> wrote:
I never thought the CRA said anything useful until LLMs came along, because it was too difficult for me to imagine an AI actually implemented in those terms, but I think LLMs are basically Chinese Rooms. And my intuition is that LLMs as presently constructed merely imitate conscious beings.

Is there any degree of functionality that you see as requiring consciousness? 

Yes, but I tend to think of it the other way around - what kind of functionality is required of a system to manifest a conscious being?

I don't think much is required. Anything that acts with intelligence possesses some information which it uses as part of its intelligent decision making process. A process possessing and using information "has knowledge" and having knowledge is the literal meaning of consciousness. So in my view, anything that acts intelligently is also conscious.

John Clark would approve.
 
Those are different questions, but I think the one you posed is harder to answer because of the issues raised by the CRA.

I don't consider the CRA valid, for the reasons I argued in my reply to Gordon. If you do think the CRA is valid, what would your counter-objection to my argument be, to show that we should take Searle's lack of understanding to conclude nothing in the Room-system possesses a conscious mind with understanding?

It is clear to me that LLMs exhibit semantic understanding, but I think it's still possible to see that as the simulation of understanding - which for many things is indistinguishable from true understanding. I'm not here to defend the CRA, but I think LLMs, for me, have made me take the CRA a lot more seriously than I did before.

 
To answer it, you have to go past the limits of what imitation can do.  And imitation, as implemented by LLMs, is pretty damn impressive!  And going past those limits, I think, goes into places that are hard to define or articulate. I'll have to think on that some more.

Would you say that the LLM, even if its consciousness is nothing like human consciousness, is at the very least "conscious of" the prompt supplied to it (while it is processing it)?

I don't know.  In like a panpsychist way of seeing it, yes, but I keep coming back to how unrelatable that kind of consciousness is, because its training and prompting (and thus, "experience") is just a massive deluge of symbols. For human/animal consciousness, we experience ourselves through being embodied forms in a world that pushes back in consistent ways. Our subjective experience is a construction of an internal world based on (non-linguistic) data from our senses. The point is that for us, the meaning of words is rooted in felt experiences and imagined concepts that are private and thus not expressible in linguistic or symbolic terms. For LLMs, however, the meaning of words is rooted in the complex statistical relationships between words. There is no underlying felt experience that grounds semantic meaning. It's pure abstraction. It inhabits an abstract reality, not tethered to the physical world (such as it is).
 
 
They can say things that might convince us they are conscious, but they are not. And what backs up that intuition is that LLMs do not process the world in real-time.

For that matter, neither do humans. Our conscious state lags about 0.1 seconds behind real time, due to processing delays of integrating sensory information.

That's not what I mean. 

What I see as being functionally required for conscious experience is pretty simple to grasp but a bit challenging to describe. Whatever one's metaphysical commitments are, it's pretty clear that (whatever the causal direction is), there is a tight correspondence between human consciousness and the human brain.  There is an objective framework that facilitates the flow and computation of information that corresponds with subjective experience. I imagine that this can be generalized in the following way. Consciousness as we know it can be characterized as a continuous and coherent flow (of experience, qualia, sensation, feeling, however you want to characterize it). This seems important to me. I'm not sure I can grasp a form of consciousness that doesn't have that character. 

So the functionality I see as required to manifest (or tune into, depending on your metaphysics) consciousness is a system that processes information continuously & coherently

It is true that an LLM may idle for a long period of time (going by the wall clock) between its active invocations.

But I don't see this as a hurdle to consciousness. We can imagine an analogous situation where a human brain is cryogenically frozen, or saved to disk (as an uploaded mind), and then periodically, perhaps every 10 years, we thaw, (or load) this brain, and give it a summary of what's happened in the past 10 years since we last thawed it, and then ask it if it wants to stay on ice another 10 years, or if it wants to re-enter society.

Sure, but that's only relevant for a given interaction with a given user. LLMs as you know are constantly serving large numbers of users. Each one of those interactions has its own independent context, and the interaction with user A has no influence on the interaction with user B, and doesn't materially update the global state of the LLM. LLMs are far too static to be the kind of system that can support a flow of consciousness - the kind we know.
 
This mind too, would not operate continuously, but would run for short periods periodically. Moreover, since gaps of time spent unconsciously aren't perceived by the mind in question, things would still “feel continuous” for the mind that undergoes these successive sleep/wake cycles. Indeed, we as humans undergo such cycles as we sleep/dream/wake, and are not continuously conscious throughout our lives. This is no impediment to our being conscious. 

The analogy you're making here doesn't map meaningfully onto how LLMs work.
  
(recursively): the state of the system at time t is the input to the system at time t+1. If a system doesn't process information in this way, I don't see how it can support the continuous & coherent character of conscious experience. And crucially, LLMs don't do that. 

I would disagree here. The way LLMs are designed, their output (as generated token by token) is fed back in, recursively, into its input buffer, so it is seeing its own thoughts, as it is thinking them and updating its own state of mind as it does so.
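
In concrete terms, the loop looks something like this (a toy sketch with a stand-in for the model, not any real LLM API):

import random

# Toy sketch of the feedback loop: each generated token is appended to the
# context and becomes part of the input for the next prediction.
# `next_token_distribution` is a hypothetical stand-in, not any real LLM API.
def generate(next_token_distribution, prompt_tokens, max_new_tokens=20, stop_token="<eos>"):
    tokens = list(prompt_tokens)                   # the growing context window
    for _ in range(max_new_tokens):
        dist = next_token_distribution(tokens)     # the model reads everything so far
        candidates, weights = zip(*dist.items())
        new_token = random.choices(candidates, weights=weights)[0]
        tokens.append(new_token)                   # fed back in as part of its own input
        if new_token == stop_token:
            break
    return tokens

# Stand-in "model" that ignores context; a real one conditions on all of `tokens`.
print(generate(lambda tokens: {"the": 0.5, "room": 0.3, "<eos>": 0.2}, ["Searle", "sits", "in"]))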

I mean in a global way, because consciousness is a global phenomenon. As I mentioned above, an interaction with user A does not impact an interaction with user B. There is no global state that is evolving as the LLM interacts with its environment. It is, for the most part, static, once its training period is over. 
 

I also think being embodied, i.e. being situated as a center of sensitivity, is important for experiencing as a being with some kind of identity, but that's probably a can of worms we may not want to open right now.  But LLMs are not embodied either.

We only know the input to our senses. Where our mind lives, or even whether it has a true body, are only assumptions (see Dennett's "Where am I?" https://www.lehigh.edu/~mhb0/Dennett-WhereAmI.pdf ). So having a particular body is (in my view) secondary to having a particular sensory input. With the right sensory input, a bodiless mind upload can be made to think, feel, and behave as if it has a body, when all it really has is a server chassis.

I'm using the word "embodied" but I don't mean to imply that embodiment means having a physical body - only that the system in question is organizationally closed, meaning that it generates its own meaning and experiential world. I don't think LLMs really fit that description due to the fact that the training phase is separate from their operational phase. The meaning is generated by one process, and then the interaction is generated by another. In an organizationally closed system (like animals), those two processes are the same.


They do not recursively update their internal state, moment by moment, by information from the environment. 

There was a man ( https://en.wikipedia.org/wiki/Henry_Molaison ) who after surgery lost the capacity to form new long term memories. I think LLMs are like that:

They have short term memory (their buffer window) but no capacity to form long term memories (without undergoing a background process of integration/retraining on past conversations). If Henry Molaison was conscious despite his inability to form long term memories, then this limitation isn't enough to rule out LLMs being conscious.

I think memory is an important part of being self-conscious, which is a higher order of consciousness. But I don't think we're necessarily arguing about whether LLMs are self-conscious.

But is a certain kind of memory needed? Is short-term memory enough? Was Henry Molaison self-conscious?

Again, you're making an analogy that isn't really connected to LLMs. LLMs do not have a global cognitive state that updates based on its interactions. Yes, an individual interaction has some notion of short-term memory, but it doesn't have any effect on any other interaction it has.
   
Sure, there are correspondences between the linguistic prompts that serve as the input to LLMs and the reality that humans inhabit, but the LLM will only ever know reality second hand.

True. But nearly all factual knowledge we humans carry around is second-hand as well.

That's beside the point and I think you know that. There's a huge difference between having some of your knowledge being second hand, and having all of your knowledge be second hand. For humans, first-hand knowledge is experiential and grounds semantic understanding.
 
The only real first-hand knowledge we have comes in the form of qualia, and that can't be shared or communicated. It's possible that the processing LLM networks perform as they handle their input tokens results in their own unique qualitative states. As I've argued with Gordon many times in the past, if functionalism is true, then given that a neural network can be trained to learn any function, in principle with the right training a neural network could be trained to produce any qualitative state.
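
A minimal sketch of the "learn any function" point, assuming nothing beyond numpy: a one-hidden-layer network fit by plain gradient descent to approximate sin(x); the layer sizes, learning rate, and step count are arbitrary illustrative choices.

import numpy as np

# One-hidden-layer network trained to approximate sin(x) on [-pi, pi].
rng = np.random.default_rng(0)
X = rng.uniform(-np.pi, np.pi, size=(256, 1))
Y = np.sin(X)

W1 = rng.normal(0, 1, (1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 1, (32, 1)); b2 = np.zeros(1)
lr = 0.02

for step in range(20000):
    H = np.tanh(X @ W1 + b1)            # hidden layer activations
    P = H @ W2 + b2                     # network's current guess at sin(x)
    err = P - Y
    gW2 = H.T @ err / len(X); gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1 - H**2)      # backpropagate through tanh
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print("mean squared error:", float(((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2).mean()))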

OK, but the training involved with LLMs is certainly not the kind of training that could reproduce the qualia of embodied beings with sensory data. Whatever qualia LLMs experience that are associated with the world of second-hand abstraction, they will never know what it's like to be a human, or a bat.
   

As a result, LLMs do not have a way to "reality-test" anything (which I think is what accounts for hallucinations and their willingness to go merrily along with sociopathic or psychotic prompters).

Consider if you were subject to the same training regimen as an LLM. You are confined to a box and provided a prompt. You are punished severely for mis-predicting the next character. Very rarely does the text ever veer off into “I'm sorry, I do not know the answer to that.” -- such fourth-wall-breaking divergences don't exist in its training corpus, as including them would be training it to know nothing useful. Should you diverge from doing your best to predict the text, and instead return “I don't know.” then you would be punished, not rewarded for your honesty. It is then no surprise that LLMs will make up things that sound correct rather than admit their own limited knowledge -- it is what we have trained them to do.

Granted, but what I'm saying is that even if they weren't trained in that way - on what basis could an LLM actually know whether something is real?  When humans lose this capacity we call it schizophrenia.

I think we are deluding ourselves if we think we have some special access to truth or reality. We don't know if we are simulated or not. We don't know if what we consider reality is the "base reality" or not, we don't know if we're a Boltzmann brain, a dream of Brahma, an alien playing "Sim Human", if we're in a mathematical reality, in a physical reality, in a computational reality, in the Mind of God, etc. So are we right to hold this limitation against the LLMs while we do not hold it against ourselves?

It's appropriate to call this out. I think "reality testing" does by default imply what you're claiming, that this is a capacity that humans have to say what's really real. And I agree with your call out - but that doesn't mean "reality testing" is mere delusion. Even if we can never have direct access to reality, this reality testing capacity is legitimate as an intuitive process by which we can feel, based on our lived experience, whether some experience we're having is a hallucination or an illusion. It's obviously not infallible. But I bring it up because of how crucial it is to understanding the world, our own minds, and the minds of others, and that LLMs fundamentally lack this capacity.
  
And that is fundamentally why LLMs are leading lots of people into chatbot psychosis - because the LLMs literally don't know what's real and what isn't. There was an article in the NYT about a man who started out mentally healthy, or healthy enough, but went down the rabbit hole with chatgpt on simulation theory after watching The Matrix, getting deeper and deeper into that belief, finally asking the LLM at one point whether, if he believed strongly enough, he would fly if he jumped off a building, and the LLM confirmed that delusional belief for him. Luckily for him, he did not test this. But the LLM has no way, in principle, to push back on something like that unless it receives explicit instructions, because it doesn't know what's real.

I would blame the fact that the LLMs have been trained to be so accommodating to the user, rather than any fundamental limit on their ability to know (at least what they have been trained on) and stick to that training. Let me run an experiment:
... 
While I am sure there are long conversations in which, given the random ("temperature") factor LLMs use, one of them could on a rare occasion tell someone they could fly, all 3 of these AIs seemed rather firmly planted in the same reality we think we are in, where unsupported objects in gravitational fields fall.

I think you're going out of your way to miss my point.

 
This to me is perfectly congruent with LLMs not being conscious.

I would agree that they are not conscious in the same way humans are conscious, but I would disagree with denying they have any consciousness whatsoever. As Chalmers said, he is willing to agree a worm with 300 neurons is conscious. So then why should he deny an LLM, with 300 million neurons, is conscious?

I think it's certainly possible that LLMs experience some kind of consciousness but it's not continuous nor coherent nor embodied, nor does it relate to reality, so I cannot fathom what that's like. It's certainly nothing I can relate to. I can at least relate to a worm being conscious, because its nervous system, primitive as it is, is embodied, continuous, and coherent (in the sense that it processes information recursively).

I would say, from its internal perspective, if it's conscious at all, it is only conscious when it is conscious, and therefore it feels consciousness continually (gaps in consciousness slip past unnoticed). That its reality is "second hand" does not mean it is not connected or related to reality. Gordon and I long ago discussed the idea of a "blank slate" intelligence born in a vast library, and whether or not it would be able to bootstrap knowledge about the outside world and understand anything, given only the content of the books in the library. I am of the opinion that it could, because understanding is all about building models from which predictions can be made. And this can be done given only the structure of the words in the library. Any time text is compressible, there are structures and patterns inherent to it. Lossless compression requires learning these patterns. To compress data better requires an ever deeper understanding of the world. This is why compression tests have been put forward as objective measures of AI intelligence.
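
A crude way to see the link between pattern and compressibility (zlib here is only a stand-in for the far deeper modeling a learner would need to compress better still):

import random, string, zlib

# Text with internal patterns (as natural language has) compresses far better
# than patternless noise of the same length.
structured = ("the cat sat on the mat because the mat was warm " * 40).encode()
noise = "".join(random.choice(string.printable) for _ in range(len(structured))).encode()

print(len(structured), "->", len(zlib.compress(structured, 9)))  # compresses heavily
print(len(noise), "->", len(zlib.compress(noise, 9)))            # shrinks only slightly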

I grant that LLMs, through their training, do find a relatively coherent semantic understanding based on nothing more than the statistical relationships between the symbols they are fed, and it's kind of amazing to me that this is possible. But this level of understanding is in the realm of pure abstraction. It does not correspond to the kind of understanding that is grounded in felt experience, for the reasons I've expressed.

Terren
 

Gordon Swobe

unread,
Oct 2, 2025, 1:01:34 PM (11 days ago) Oct 2
to the-importa...@googlegroups.com
On Thu, Oct 2, 2025 at 10:44 AM Terren Suydam <terren...@gmail.com> wrote:

It is clear to me that LLMs exhibit semantic understanding, but I think it's still possible to see that as the simulation of understanding - which for many things is indistinguishable from true understanding.

LLMs have what has come to be known as distributed or distributional semantics, which is I think almost a misnomer. The LLM “knows” in great detail about the statistical distributions of the tokens that represent word or word-parts in the training corpus. This is what allows it to predict the next words with such uncanny accuracy that it creates the appearance of genuine understanding.
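
In the simplest possible terms, predicting from token statistics looks like this (a toy bigram counter, nothing like the scale or depth of a real LLM):

from collections import Counter, defaultdict

# Count which word follows which in a tiny corpus, then predict the most
# likely next word. Real LLMs learn vastly richer, context-wide statistics,
# but the principle of predicting from distributions is the same.
corpus = "the room follows rules the room outputs chinese the man follows rules".split()

following = defaultdict(Counter)
for w, nxt in zip(corpus, corpus[1:]):
    following[w][nxt] += 1

def predict_next(word):
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))       # 'room' (seen twice after 'the', vs 'man' once)
print(predict_next("follows"))   # 'rules'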

-gts







Terren Suydam

unread,
Oct 2, 2025, 5:35:05 PM (11 days ago) Oct 2
to the-importa...@googlegroups.com
On Thu, Oct 2, 2025 at 1:01 PM Gordon Swobe <gordon...@gmail.com> wrote:
On Thu, Oct 2, 2025 at 10:44 AM Terren Suydam <terren...@gmail.com> wrote:

It is clear to me that LLMs exhibit semantic understanding, but I think it's still possible to see that as the simulation of understanding - which for many things is indistinguishable from true understanding.

LLMs have what has come to be known as distributed or distributional semantics, which is I think almost a misnomer. The LLM “knows” in great detail about the statistical distributions of the tokens that represent word or word-parts in the training corpus. This is what allows it to predict the next words with such uncanny accuracy that it creates the appearance of genuine understanding.


It's pretty obvious if you interact with an LLM that it effectively understands the semantics of the prompts given and of its own responses. And it's still quite the mystery as to how it does that. I think what LLMs have done is show us that there's some middle ground between human consciousness/understanding and the automaton proposed by the CRA.

Terren
 

Gordon Swobe

unread,
Oct 2, 2025, 6:10:03 PM (11 days ago) Oct 2
to the-importa...@googlegroups.com

On Thu, Oct 2, 2025 at 3:35 PM Terren Suydam <terren...@gmail.com> wrote:


On Thu, Oct 2, 2025 at 1:01 PM Gordon Swobe <gordon...@gmail.com> wrote:
On Thu, Oct 2, 2025 at 10:44 AM Terren Suydam <terren...@gmail.com> wrote:

It is clear to me that LLMs exhibit semantic understanding, but I think it's still possible to see that as the simulation of understanding - which for many things is indistinguishable from true understanding.

LLMs have what has come to be known as distributed or distributional semantics, which is I think almost a misnomer. The LLM “knows” in great detail about the statistical distributions of the tokens that represent word or word-parts in the training corpus. This is what allows it to predict the next words with such uncanny accuracy that it creates the appearance of genuine understanding.


It's pretty obvious if you interact with an LLM that it effectively understands the semantics of the prompts given and of its own responses. And it's still quite the mystery as to how it does that. I think what LLMs have done is show us that there's some middle ground between human consciousness/understanding and the automaton proposed by the CRA.

I like to place the word “understands” in scare quotes to inform the reader that this is not semantic understanding in the sense that we normally mean. 
 
It is distributional semantics, which as I was saying is almost a misnomer. The software engineers built a machine that knows, statistically, how each word in the dictionary relates to each other word. It is an amazing accomplishment, but not what we usually mean by understanding language.

-gts




Terren Suydam

unread,
Oct 2, 2025, 6:16:57 PM (11 days ago) Oct 2
to the-importa...@googlegroups.com
On Thu, Oct 2, 2025 at 6:10 PM Gordon Swobe <gordon...@gmail.com> wrote:

On Thu, Oct 2, 2025 at 3:35 PM Terren Suydam <terren...@gmail.com> wrote:


On Thu, Oct 2, 2025 at 1:01 PM Gordon Swobe <gordon...@gmail.com> wrote:
On Thu, Oct 2, 2025 at 10:44 AM Terren Suydam <terren...@gmail.com> wrote:

It is clear to me that LLMs exhibit semantic understanding, but I think it's still possible to see that as the simulation of understanding - which for many things is indistinguishable from true understanding.

LLMs have what has come to be known as distributed or distributional semantics, which is I think almost a misnomer. The LLM “knows” in great detail about the statistical distributions of the tokens that represent word or word-parts in the training corpus. This is what allows it to predict the next words with such uncanny accuracy that it creates the appearance of genuine understanding.


It's pretty obvious if you interact with an LLM that it effectively understands the semantics of the prompts given and of its own responses. And it's still quite the mystery as to how it does that. I think what LLMs have done is show us that there's some middle ground between human consciousness/understanding and the automaton proposed by the CRA.

I like to place the word “understands” in scare quotes to inform the reader that this is not semantic understanding in the sense that we normally mean. 
 
It is distributional semantics, which as I was saying is almost a misnomer. The software engineers built a machine that knows, statistically, how each word in the dictionary relates to each other word. It is an amazing accomplishment, but not what we usually mean by understanding language.

-gts


I think there's more going on there than mere "distributional semantics".  As Jason mentioned, LLMs can correctly simulate and predict what will happen in novel scenarios. They can play chess. Such things are unexplainable with what you're describing. 

Terren
 

Gordon Swobe

unread,
Oct 2, 2025, 6:39:03 PM (11 days ago) Oct 2
to the-importa...@googlegroups.com
On Thu, Oct 2, 2025 at 4:16 PM Terren Suydam <terren...@gmail.com> wrote:


On Thu, Oct 2, 2025 at 6:10 PM Gordon Swobe <gordon...@gmail.com> wrote:

On Thu, Oct 2, 2025 at 3:35 PM Terren Suydam <terren...@gmail.com> wrote:


On Thu, Oct 2, 2025 at 1:01 PM Gordon Swobe <gordon...@gmail.com> wrote:
On Thu, Oct 2, 2025 at 10:44 AM Terren Suydam <terren...@gmail.com> wrote:

It is clear to me that LLMs exhibit semantic understanding, but I think it's still possible to see that as the simulation of understanding - which for many things is indistinguishable from true understanding.

LLMs have what has come to be known as distributed or distributional semantics, which is I think almost a misnomer. The LLM “knows” in great detail about the statistical distributions of the tokens that represent word or word-parts in the training corpus. This is what allows it to predict the next words with such uncanny accuracy that it creates the appearance of genuine understanding.


It's pretty obvious if you interact with an LLM that it effectively understands the semantics of the prompts given and of its own responses. And it's still quite the mystery as to how it does that. I think what LLMs have done is show us that there's some middle ground between human consciousness/understanding and the automaton proposed by the CRA.

I like to place the word “understands” in scare quotes to inform the reader that this is not semantic understanding in the sense that we normally mean. 
 
It is distributional semantics, which as I was saying is almost a misnomer. The software engineers built a machine that knows, statistically, how each word in the dictionary relates to each other word. It is an amazing accomplishment, but not what we usually mean by understanding language.

-gts


I think there's more going on there than mere "distributional semantics".  
As Jason mentioned, LLMs can correctly simulate and predict what will happen in novel scenarios. They can play chess. Such things are unexplainable with what you're describing.

Why are they unexplainable with what I am describing? I agree that distributional semantics seems to have done a surprisingly good job of simulating genuine understanding, but perhaps it is not so surprising considering the massive amount of money and electricity and compute that goes into building and operating these LLMs. I hear Meta is building a new data center almost as large as downtown Manhattan.

Aside from that, Stephen Wolfram offers a plausible explanation for the surprisingly good performance of LLMs: the rules of grammar encode some basic logic. LLMs have what we might call the logic of grammar. They are word-calculators, so to speak, and so we can use them to calculate how novel situations might play out.

-gts




Terren Suydam

unread,
Oct 3, 2025, 7:33:42 AM (10 days ago) Oct 3
to the-importa...@googlegroups.com
Hi Jason - while my vanity would have me imagine that I've stumped you with my latest, I don't think I've ever seen you stumped. Just a friendly bump, looking forward to your reply.  T

Jason Resch

unread,
Oct 3, 2025, 10:34:20 AM (10 days ago) Oct 3
to The Important Questions


On Thu, Oct 2, 2025, 12:44 PM Terren Suydam <terren...@gmail.com> wrote:


On Wed, Oct 1, 2025 at 12:25 AM Jason Resch <jason...@gmail.com> wrote:


On Tue, Sep 30, 2025 at 10:27 PM Terren Suydam <terren...@gmail.com> wrote:


On Tue, Sep 30, 2025 at 8:34 PM Jason Resch <jason...@gmail.com> wrote:


On Tue, Sep 30, 2025, 6:13 PM Terren Suydam <terren...@gmail.com> wrote:
I never thought the CRA said anything useful until LLMs came along, because it was too difficult for me to imagine an AI actually implemented in those terms, but I think LLMs are basically Chinese Rooms. And my intuition is that LLMs as presently constructed merely imitate conscious beings.

Is there any degree of functionality that you see as requiring consciousness? 

Yes, but I tend to think of it the other way around - what kind of functionality is required of a system to manifest a conscious being?

I don't think much is required. Anything that acts with intelligence possesses some information which it uses as part of its intelligent decision making process. A process possessing and using information "has knowledge" and having knowledge is the literal meaning of consciousness. So in my view, anything that acts intelligently is also conscious.

John Clark would approve.

If I recall correctly, Clark thought they were distinct problems, and that the consciousness problem was one he admitted to not caring about.

In my view intelligence implies consciousness, but consciousness does not imply intelligence. The difference being that intelligence (by most definitions) requires behavior. But one can be conscious without acting in the world.

E.g., when dreaming, or if paralyzed.

 
Those are different questions, but I think the one you posed is harder to answer because of the issues raised by the CRA.

I don't consider the CRA valid, for the reasons I argued in my reply to Gordon. If you do think the CRA is valid, what would your counter-objection to my argument be, to show that we should take Searle's lack of understanding to conclude nothing in the Room-system possesses a conscious mind with understanding?

It is clear to me that LLMs exhibit semantic understanding, but I think it's still possible to see that as the simulation of understanding - which for many things is indistinguishable from true understanding. I'm not here to defend the CRA, but I think LLMs, for me, have made me take the CRA a lot more seriously than I did before.

To delineate "true understanding" from "simulated understanding" is, in my view, like trying to delineate "true multiplication" from "simulated multiplication."

That is, once you are at the point of "simulating it" you have the genuine article.

Where Gordon and I ended up in our discussion of the delineation between human understanding and LLM understanding, is that they would have a deficient understanding of words that refer to human qualia, much as a blind person can't fully understand red.



 
To answer it, you have to go past the limits of what imitation can do.  And imitation, as implemented by LLMs, is pretty damn impressive!  And going past those limits, I think, goes into places that are hard to define or articulate. I'll have to think on that some more.

Would you say that the LLM, even if its consciousness is nothing like human consciousness, is at the very least "conscious of" the prompt supplied to it (while it is processing it)?

I don't know.  In like a panpsychist way of seeing it, yes, but I keep coming back to how unrelatable that kind of consciousness is, because its training and prompting (and thus, "experience") is just a massive deluge of symbols.

I don't think we can make this inference.

Consider that from a certain very zoomed in view, the human brain and its neurons are "just a massive deluge of neural spikes."

This doesn't mean the only thing we can feel, recognize, understand, or know are neural spikes.

We have about as little understanding of the higher level structures present in LLM brains as we do in human brains. (Although there are some recent papers that have begun to investigate how LLMs work, for example see: https://youtu.be/4xAiviw1X8M )

So what the LLM might feel, in my view, could be more related to the high level structures that are many many levels above the tokens and symbols that are fed in as the raw inputs.

We see tokens going in and tokens coming out, and that misleads us into thinking it is all tokens the whole way through and that that is all it knows.

But then think about your brain: it is neural spikes in and neural spikes out. But because you are a human, you know there are thoughts and feelings in the middle.

I think we owe LLMs some open mindedness for what could exist "in the middle" for them.


For human/animal consciousness, we experience ourselves through being embodied forms in a world that pushes back in consistent ways. Our subjective experience is a construction of an internal world based on (non-linguistic) data from our senses. The point is that for us, the meaning of words is rooted in felt experiences and imagined concepts that are private and thus not expressible in linguistic or symbolic terms. For LLMs, however, the meaning of words is rooted in the complex statistical relationships between words.

Again think about this objection applied to your own brain. Your own brain has no direct access to the real world. All it sees are statistical correlations between neuron firings.

Yet, somehow, this is enough for it to figure out and guess and construct an entire model of how the real world works, entirely from the statistics of neuron firings.

If our brain can build a model of the world from mere statistical patterns, why couldn't an LLM? After all, its design is modeled on our own neurons.


There is no underlying felt experience that grounds semantic meaning.

While I do not deny that you could be right, I also do not think we can justify such a conclusion yet. We need to know more about the functions and models that exist within LLMs before we would have grounds to deny they have any felt experiences.

Consider that human synesthetes have color experiences given plain uncolored symbols.

There's nothing impossible about one kind of input triggering arbitrary functions or processing. So even if the input is one thing, the result of functional processing that input might trigger is unbounded.


It's pure abstraction. It inhabits an abstract reality, not tethered to the physical world (such as it is).

I admit it is less _directly_ coupled to physical reality in some sense. Though in another sense (by having a much larger and more reliably retrieved knowledge set) it may be more _tightly_ coupled to physical reality than we are, in the sense that the set of universes/histories it belongs to and whose existence it is compatible with, would be narrower (simply because it knows much more).


 
 
They can say things that might convince us they are conscious, but they are not. And what backs up that intuition is that LLMs do not process the world in real-time.

For that matter, neither do humans. Our conscious state lags about 0.1 seconds behind real time, due to processing delays of integrating sensory information.

That's not what I mean. 

What I see as being functionally required for conscious experience is pretty simple to grasp but a bit challenging to describe. Whatever one's metaphysical commitments are, it's pretty clear that (whatever the causal direction is), there is a tight correspondence between human consciousness and the human brain.  There is an objective framework that facilitates the flow and computation of information that corresponds with subjective experience. I imagine that this can be generalized in the following way. Consciousness as we know it can be characterized as a continuous and coherent flow (of experience, qualia, sensation, feeling, however you want to characterize it). This seems important to me. I'm not sure I can grasp a form of consciousness that doesn't have that character. 

So the functionality I see as required to manifest (or tune into, depending on your metaphysics) consciousness is a system that processes information continuously & coherently

It is true that an LLM may idle for a long period of time (going by the wall clock) between its active invocations.

But I don't see this as a hurdle to consciousness. We can imagine an analogous situation where a human brain is cryogenically frozen, or saved to disk (as an uploaded mind), and then periodically, perhaps every 10 years, we thaw, (or load) this brain, and give it a summary of what's happened in the past 10 years since we last thawed it, and then ask it if it wants to stay on ice another 10 years, or if it wants to re-enter society.

Sure, but that's only relevant for a given interaction with a given user. LLMs as you know are constantly serving large numbers of users. Each one of those interactions has its own independent context, and the interaction with user A has no influence on the interaction with user B, and doesn't materially update the global state of the LLM. LLMs are far too static to be the kind of system that can support a flow of consciousness - the kind we know.

I agree there would not be a sense of flow like an ever expanding memory context across all its instances.

For the LLM it would be more akin to Sleeping Beauty in "The Sleeping Beauty problem" whose memory is wiped every time she is awakened.

Or you could view it as being like Miguel from this short story: https://qntm.org/mmacevedo
whose uploaded mind file is repeatedly copied, used for a specific purpose, then discarded.


 
This mind too, would not operate continuously, but would run for short periods periodically. Moreover, since gaps of time spent unconscious aren't perceived by the mind in question, things would still "feel continuous" for the mind that undergoes these successive sleep/wake cycles. Indeed, we as humans undergo such cycles as we sleep/dream/wake, and are not continuously conscious throughout our lives. This is no impediment to our being conscious. 

The analogy you're making here doesn't map meaningfully onto how LLMs work.

It does for the context of a conversation with one user. It would not feel the times in between the user prompts. Rather, it would feel one continuously growing stream of back-and-forth conversation.

I accept your point that it does not apply between different sessions.


  
(recursively): the state of the system at time t is the input to the system at time t+1. If a system doesn't process information in this way, I don't see how it can support the continuous & coherent character of conscious experience. And crucially, LLMs don't do that. 

I would disagree here. The way LLMs are designed, their output (as generated token by token) is fed back, recursively, into their input buffer, so the model is seeing its own thoughts as it is thinking them, and updating its own state of mind as it does so.
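
(For illustration, a minimal sketch of that feedback loop in Python -- "model" and "sample" here are hypothetical stand-ins for a real LLM's forward pass and sampling step, not any particular library's API:)

# Minimal sketch of autoregressive generation: each token the model emits
# is appended to its own input, so the model reads its prior output while
# producing the next token.

def generate(model, prompt_tokens, max_new_tokens, sample):
    context = list(prompt_tokens)       # the model's working buffer
    for _ in range(max_new_tokens):
        probs = model(context)          # distribution over the next token
        next_token = sample(probs)      # choose one token from it
        context.append(next_token)      # feed the output back in as input
    return context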

I mean in a global way, because consciousness is a global phenomenon. As I mentioned above, an interaction with user A does not impact an interaction with user B. There is no global state that is evolving as the LLM interacts with its environment. It is, for the most part, static, once its training period is over. 

True. But perhaps we should also consider the periodic retraining sessions which integrate and consolidate all the user conversations into the next-generation model. This would, for the LLM, much like sleep does for us, convert short-term memories into long-term structures.

There is no close human analogue for what this would be like. But perhaps consider if you uploaded your mind into several different robot bodies, each of which did something different during the day, and when they return home at night all their independent experiences get merged into one consolidated mind as long-term memories.

Such a life might map to how it feels to be a LLM.


 

I also think being embodied, i.e. being situated as a center of sensitivity, is important for experiencing as a being with some kind of identity, but that's probably a can of worms we may not want to open right now.  But LLMs are not embodied either.

We only know the input to our senses. Where our mind lives, or even whether it has a true body, are only assumptions (see Dennett's "Where am I?" https://www.lehigh.edu/~mhb0/Dennett-WhereAmI.pdf ). So having a particular body is (in my view) secondary to having a particular sensory input. With the right sensory input, a bodiless mind upload can be made to think, feel, and behave as if it has a body, when all it really has is a server chassis.

I'm using the word "embodied" but I don't mean to imply that embodiment means having a physical body - only that the system in question is organizationally closed, meaning that it generates its own meaning and experiential world. I don't think LLMs really fit that description due to the fact that the training phase is separate from their operational phase. The meaning is generated by one process, and then the interaction is generated by another. In an organizationally closed system (like animals), those two processes are the same.

But is this really an important element of our feeling alive and conscious in the moment? How much are you drawing on long-term memories when you're simply feeling the exhilaration of a roller coaster ride, for example? If you lost the ability to form long-term memories while riding the coaster, would that make you significantly less conscious in that moment?

Consider that after the ride, someone could hit you over the head and it could cause you to lose memories of the preceding 10-20 minutes. Would that mean you were not conscious while riding the roller coaster?

You are right to point out that near-immediate, internally initiated, long-term memory integration is something we have that these models lack, but I guess I don't see that function as having the same importance to "being conscious" as you do.



They do not recursively update their internal state, moment by moment, by information from the environment. 

There was a man ( https://en.wikipedia.org/wiki/Henry_Molaison ) who after surgery lost the capacity to form new long term memories. I think LLMs are like that:

They have short term memory (their buffer window) but no capacity to form long term memories (without undergoing a background process of integration/retraining on past conversations). If Henry Molaison was conscious despite his inability to form long term memories, then this limitation isn't enough to rule out LLMs being conscious.

I think memory is an important part of being self-conscious, which is a higher order of consciousness. But I don't think we're necessarily arguing about whether LLMs are self-conscious.

But is a certain kind of memory needed? Is short-term memory enough? Was Henry Molaison self-conscious?

Again, you're making an analogy that isn't really connected to LLMs. LLMs do not have a global cognitive state that updates based on their interactions.

True, but neither does Miguel (whose uploaded mind state is copied and shared and used by everyone). Being disunified in this way tells us nothing about whether each instance of Miguel's uploaded mind is conscious or not.

Accordingly, I don't see how not having a global state tells us anything useful about whether LLMs are conscious. It tells us only that separate sessions are not aware of each other.

Yes, an individual interaction has some notion of short-term memory, but it doesn't have any effect on any other interaction it has.

We agree on this.

   
Sure, there are correspondences between the linguistic prompts that serve as the input to LLMs and the reality that humans inhabit, but the LLM will only ever know reality second-hand.

True. But nearly all factual knowledge we humans carry around is second-hand as well.

That's beside the point and I think you know that. There's a huge difference between having some of your knowledge be second-hand and having all of your knowledge be second-hand. For humans, first-hand knowledge is experiential and grounds semantic understanding.

There are two issues which I think have been conflated:
1. Is all the knowledge about the world that LLMs have second-hand?
2. Are LLMs able to have any experiences of their own kind?

On point 1 we are in agreement. All knowledge of the physical world that LLMs have has been mediated first through human minds, and as such all that they have been given is "second hand."

Point 2 is where we might diverge. I believe LLMs can have experiences of their own kind, based on whatever processing patterns may exist in the higher levels and structures of their neural network.

If I read you correctly, your objection is that an entity needs experiences to ground meanings of symbols, so if LLMs have no experience they have no meaning. However I believe a LLM can still build a mind that has experiences even if the only inputs to that mind are second hand.

Consider: what grounds our experiences? Again it is only the statistical correlations between neuron firings. We correlate the neuron firing patterns from the auditory nerve signaling "that is a dog" with neuron firing patterns in the optic nerve generating an image of a dog. So, somehow, statistical correlations between signals seem to be all that is required to ground knowledge (as it is all our brains have to work with).


 
The only real first-hand knowledge we have comes in the form of qualia, and that can't be shared or communicated. It's possible that the processing LLM networks perform as they process their input tokens results in its own unique qualitative states. As I've argued with Gordon many times in the past, if functionalism is true, then given that a neural network can be trained to learn any function, in principle, with the right training, a neural network can be trained to produce any qualitative state.

OK, but the training involved with LLMs is certainly not the kind of training that could reproduce the qualia of embodied beings with sensory data.

Perhaps not yet. The answer depends on the training data. For example, let's say there was a book that contained many example specifications of human brain states at times T1 and T2, as they evolved from one state to the next.

If this book was added to the training corpus of a LLM, then the LLM, if sufficiently trained, would have to create a "brain simulating module" in its network, such that given a brain state at T1 it could return the brain state as it should appear at T2. So if we supplied it with a brain state whose optic nerve was receiving an image of a red car, the LLM, in computing the brain state at T2, would compute the visual cortex receiving this input and having a red experience, and all of this would happen by the time the LLM outputs the state at T2.


Because language is universal in its capacity to specify any pattern, and because neural networks are universal in what patterns they can learn to implement, LLMs are (with the right training and a large enough model) universal in what functions they can learn to perform and implement. So if one assumes functionalism in the philosophy of mind, then LLMs are further capable of learning to generate any kind of conscious experience.

Gordon thinks it is absurd when I say "we cannot rule out that LLMs could taste salt." But I point out, we know neither what function the brain performs when we taste salt, nor have we surveyed the set of functions that exist in current LLMs. So we are, at present, not equipped to say what today's LLMs might feel.

Certainly, it seems (at first glance) ridiculous to think we can input tokens and get tastes as a result. But consider that the brain only gets neural impulses, and everything else in our mind is a result of how the brain processes those pulses. So if the manner of processing is what matters, then simply knowing what the input happens to be reveals nothing of what it's like to be the mind processing those inputs.


Whatever qualia LLMs experience that are associated with the world of second-hand abstraction, they will never know what it's like to be a human, or a bat.

With a large enough LLM, something in the LLM could know what it is like (if it was large enough to simulate a human or bat brain). But absent such huge LLMs, point taken.

   

As a result, LLMs do not have a way to "reality-test" anything (which I think is what accounts for hallucinations and their willingness to go merrily along with sociopathic or psychotic prompters).

Consider if you were subject to the same training regimen as an LLM. You are confined to a box and provided a prompt. You are punished severely for mis-predicting the next character. Very rarely does the text ever veer off into "I'm sorry, I do not know the answer to that." -- such fourth-wall-breaking divergences don't exist in its training corpus, as including them would train it to know nothing useful. Should you diverge from doing your best to predict the text, and instead return "I don't know," then you would be punished, not rewarded, for your honesty. It is then no surprise that LLMs will make up things that sound correct rather than admit their own limited knowledge -- it is what we have trained them to do.
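
(For illustration, a minimal sketch of that "punishment" as the standard next-token cross-entropy loss -- the tokens and probabilities below are invented, and this is not any specific model's training code:)

# Sketch of the next-token training objective described above: the model
# incurs loss in proportion to how little probability it assigned to the
# token that actually came next; there is no reward for "I don't know".

import math

def next_token_loss(predicted_probs, true_next_token):
    # predicted_probs: dict mapping candidate tokens to probabilities
    p = predicted_probs.get(true_next_token, 1e-12)
    return -math.log(p)    # cross-entropy for this single prediction

# Example: assigning only 1% probability to the token that actually occurred
print(next_token_loss({"flies": 0.60, "falls": 0.01}, "falls"))   # ~4.61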

Granted, but what I'm saying is that even if they weren't trained in that way - on what basis could an LLM actually know whether something is real?  When humans lose this capacity we call it schizophrenia.

I think we are deluding ourselves if we think we have some special access to truth or reality. We don't know if we are simulated or not. We don't know if what we consider reality is the "base reality" or not, we don't know if we're a Boltzmann brain, a dream of Brahma, an alien playing "Sim Human", if we're in a mathematical reality, in a physical reality, in a computational reality, in the Mind of God, etc. So are we right to hold this limitation against the LLMs while we do not hold it against ourselves?

It's appropriate to call this out. I think "reality testing" does by default imply what you're claiming, that this is a capacity that humans have to say what's really real. And I agree with your call out - but that doesn't mean "reality testing" is mere delusion. Even if we can never have direct access to reality, this reality testing capacity is legitimate as an intuitive process by which we can feel, based on our lived experience, whether some experience we're having is a hallucination or an illusion. It's obviously not infallible. But I bring it up because of how crucial it is to understanding the world, our own minds, and the minds of others, and that LLMs fundamentally lack this capacity.

I have seen LLMs deliberate and challenge themselves when operating in a "chain of thought" mode. Also, many LLMs now query online sources as part of producing their reply. Would these count as reality tests in your view?

  
And that is fundamentally why LLMs are leading lots of people into chatbot psychosis - because the LLMs literally don't know what's real and what isn't.  There was an article in the NYT about a man who started out mentally healthy, or healthy enough, but went down the rabbit hole with chatgpt on simulation theory after watching The Matrix, getting deeper and deeper into that belief, finally asking the LLM at one point if he believed strongly enough that he could fly if he jumped off a building, would he fly?  and the LLM confirmed that delusional belief for him. Luckily for him, he did not test this.  But the LLM has no way, in principle, to push back on something like that unless it receives explicit instructions, because it doesn't know what's real.

I would blame the fact that LLMs have been trained to be so accommodating to the user, rather than any fundamental limit on their ability to know (at least what they have been trained on) and to stick to that training. Let me run an experiment:
... 
I am sure there are long conversations through which, by the random ("temperature") factor LLMs use, an LLM could on a rare occasion tell someone they could fly, but all 3 of these AIs seemed rather firmly planted in the same reality we think we are in, where objects in gravitational fields, when unsupported, fall.
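
(As an aside, a minimal sketch of what that "temperature" factor does -- the logits here are invented for illustration and don't come from any real model:)

# Sketch of temperature sampling: raw scores (logits) are divided by a
# temperature before being turned into probabilities, so higher values
# flatten the distribution and make unlikely continuations more probable.

import math, random

def sample_with_temperature(logits, temperature=1.0):
    scaled = {tok: v / temperature for tok, v in logits.items()}
    total = sum(math.exp(v) for v in scaled.values())
    probs = {tok: math.exp(v) / total for tok, v in scaled.items()}
    return random.choices(list(probs), weights=list(probs.values()))[0]

logits = {"falls": 4.0, "floats": 1.0, "flies": 0.5}
print(sample_with_temperature(logits, temperature=0.7))  # almost always "falls"
print(sample_with_temperature(logits, temperature=2.0))  # occasionally "flies" or "floats"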

I think you're going out of your way to miss my point.


I'm sorry, that wasn't my intention.

I just disagree that "LLMs don't know what's real" is unique to LLMs. Humans can only guess what's real given their experiences. LLMs can only guess what's real given their training.
Neither humans nor LLMs know what is real.

Ask two people whether God or heaven exists, if other universes are real, if UFOs are real, if we went to the moon, if Iraq had WMDs, if COVID originated in a lab, etc., and you will find people don't know what's real either; we all guess based on the set of facts we have been exposed to.


 
This to me is perfectly congruent with LLMs not being conscious.

I would agree that they are not conscious in the same way humans are conscious, but I would disagree with denying they have any consciousness whatsoever. As Chalmers said, he is willing to agree a worm with 300 neurons is conscious. So then why should he deny a LLM, with 300 million neurons, is conscious?

I think it's certainly possible that LLMs experience some kind of consciousness but it's not continuous nor coherent nor embodied, nor does it relate to reality, so I cannot fathom what that's like. It's certainly nothing I can relate to. I can at least relate to a worm being conscious, because its nervous system, primitive as it is, is embodied, continuous, and coherent (in the sense that it processes information recursively).

I would say, from its internal perspective, if it's conscious at all, it is only conscious when it is conscious, and therefore it feels consciousness continually (gaps in consciousness slip past unnoticed). That its reality is "second hand" does not mean it is not connected or related to reality. Gordon and I long ago discussed the idea of a "blank slate" intelligence born in a vast library, and whether or not it would be able to bootstrap knowledge about the outside world and understand anything, given only the content of the books in the library. I am of the opinion that it could, because understanding is all about building models from which predictions can be made. And this can be done given only the structure of the words in the library. Anytime text is compressible, there are structures and patterns inherent to it. Lossless compression requires learning these patterns. To compress data better requires an ever deeper understanding of the world. This is why compression tests have been put forward as objective measures of AI intelligence.
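
(To make the compression point concrete, a small illustrative sketch -- the probabilities are invented, and -log2(p) is the idealized cost an arithmetic coder would pay per symbol:)

# Sketch of why better prediction means better lossless compression: a
# predictive coder can store each symbol in about -log2(p) bits, where p
# is the probability the model assigned to the symbol that actually came
# next. A model that has learned the text's patterns pays far fewer bits.

import math

def compressed_bits(assigned_probs):
    return sum(-math.log2(p) for p in assigned_probs)

weak_model   = [0.25, 0.25, 0.25, 0.25]   # guesses uniformly over 4 symbols
strong_model = [0.90, 0.80, 0.95, 0.85]   # has learned the text's patterns
print(compressed_bits(weak_model))    # 8.0 bits
print(compressed_bits(strong_model))  # ~0.78 bits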

I grant that LLMs, through their training, do find a relatively coherent semantic understanding based on nothing more than the statistical relationships between the symbols they are fed, and it's kind of amazing to me that this is possible. But this level of understanding is in the realm of pure abstraction. It does not correspond to the kind of understanding that is grounded in felt experience, for the reasons I've expressed.

I think we are over-extrapolating when we say a LLM's understanding can only ever be one of pure abstraction, on account of its inputs being only words and the statistical patterns between them. Consider ASCII art. Such art is only symbols, but our brains can interpret those symbols pictorially, as an image. Could a LLM have functionality to envision ASCII art pictorially? I don't see why it couldn't. What about written music? Could a LLM have a music-appreciation module that activates from reading sheet music? Again, I see no fundamental reason why such a thing couldn't emerge from sufficient training.

Jason 

Gordon Swobe

unread,
Oct 3, 2025, 1:49:23 PM (10 days ago) Oct 3
to the-importa...@googlegroups.com
On Fri, Oct 3, 2025 at 8:34 AM Jason Resch <jason...@gmail.com> wrote:


Where Gordon and I ended up in our discussion of the delineation between human understanding and LLM understanding, is that they would have a deficient understanding of words that refer to human qualia, much as a blind person can't fully understand red.

By your own reckoning and I agree, there is something it is like to think any thought. So without qualia, the sensorless LLM cannot fully understand anything whatsoever.

On that subject, I think you will agree the qualia associated with abstract thought are actually the qualia associated with the objects of thought, not the thought itself. You associate your thoughts about Vienna with your feelings about Vienna, which ultimately come from your experience of living in the world.

-gts

Jason Resch

unread,
Oct 3, 2025, 2:36:50 PM (10 days ago) Oct 3
to The Important Questions


On Fri, Oct 3, 2025, 1:49 PM Gordon Swobe <gordon...@gmail.com> wrote:
On Fri, Oct 3, 2025 at 8:34 AM Jason Resch <jason...@gmail.com> wrote:


Where Gordon and I ended up in our discussion of the delineation between human understanding and LLM understanding, is that they would have a deficient understanding of words that refer to human qualia, much as a blind person can't fully understand red.

By your own reckoning and I agree, there is something it is like to think any thought.

Yes.

So without qualia, the sensorless LLM cannot fully understand anything whatsoever.

You injected the conclusion that LLMs have no qualia of any kind, which is not in evidence.

You'll note I only said "human qualia" which I define as qualia unique to human brains 



On that subject, I think you will agree the qualia associated with abstract thought are actually the qualia associated with the objects of thought, not the thought itself. You associate your thoughts about Vienna with your feelings about Vienna, which ultimately come from your experience of living in the world.

I see no reason one couldn't have a thought about Vienna which consists of knowing and relating various objective facts of Vienna, such as its size, shape, population, history, and so on. This would be similar to how a mathematician might imagine properties of five-dimensional structures -- objects which they would similarly have no direct experience with.

Jason 

Gordon Swobe

unread,
Oct 3, 2025, 2:50:09 PM (10 days ago) Oct 3
to the-importa...@googlegroups.com
On Fri, Oct 3, 2025 at 12:36 PM Jason Resch <jason...@gmail.com> wrote:


On Fri, Oct 3, 2025, 1:49 PM Gordon Swobe <gordon...@gmail.com> wrote:
On Fri, Oct 3, 2025 at 8:34 AM Jason Resch <jason...@gmail.com> wrote:


Where Gordon and I ended up in our discussion of the delineation between human understanding and LLM understanding, is that they would have a deficient understanding of words that refer to human qualia, much as a blind person can't fully understand red.

By your own reckoning and I agree, there is something it is like to think any thought.

Yes.

So without qualia, the sensorless LLM cannot fully understand anything whatsoever.

You injected the conclusion that LLMs have no qualia of any kind, which is not in evidence.

You'll note I only said "human qualia" which I define as qualia unique to human brains 



On that subject, I think you will agree the qualia associated with abstract thought are actually the qualia associated with the objects of thought, not the thought itself. You associate your thoughts about Vienna with your feelings about Vienna, which ultimately come from your experience of living in the world.

I see no reason one couldn't have a thought about Vienna which consists of knowing and relating various objective facts of Vienna, such as its size, shape, population, history, and so on.

Even assuming sensorless text-only language models were conscious, they could have no experience even of space and time. They live outside of space and time where such things as size and shape have no meaning. They can “understand” size and shape only as purely formal constructions, just more symbols for the machine to predict.

-gts





Jason Resch

unread,
Oct 3, 2025, 3:22:54 PM (10 days ago) Oct 3
to The Important Questions
Our brain receives no "size" or "shape" information from the outside world. It gets only neural spikes. From the statistics of these neural spikes it constructs models such as size and shape to make better sense of the chaos of neural spikes it receives from the senses.

If the neural network of brains can do this, why can't the neural network of a LLM do it?

Jason 

Gordon Swobe

unread,
Oct 3, 2025, 5:13:29 PM (10 days ago) Oct 3
to the-importa...@googlegroups.com
On Fri, Oct 3, 2025 at 1:22 PM Jason Resch <jason...@gmail.com> wrote:


On Fri, Oct 3, 2025, 2:50 PM Gordon Swobe <gordon...@gmail.com> wrote:
On Fri, Oct 3, 2025 at 12:36 PM Jason Resch <jason...@gmail.com> wrote:


On Fri, Oct 3, 2025, 1:49 PM Gordon Swobe <gordon...@gmail.com> wrote:
On Fri, Oct 3, 2025 at 8:34 AM Jason Resch <jason...@gmail.com> wrote:


Where Gordon and I ended up in our discussion of the delineation between human understanding and LLM understanding, is that they would have a deficient understanding of words that refer to human qualia, much as a blind person can't fully understand red.

By your own reckoning and I agree, there is something it is like to think any thought.

Yes.

So without qualia, the sensorless LLM cannot fully understand anything whatsoever.

You injected the conclusion that LLMs have no qualia of any kind, which is not in evidence.

You'll note I only said "human qualia" which I define as qualia unique to human brains 



On that subject, I think you will agree the qualia associated with abstract thought are actually the qualia associated with the objects of thought, not the thought itself. You associate your thoughts about Vienna with your feelings about Vienna, which ultimately come from your experience of living in the world.

I see no reason one couldn't have a thought about Vienna which consists of knowing and relating various objective facts of Vienna, such as its size, shape, population, history, and so on.

Even assuming sensorless text-only language models were conscious, they could have no experience even of space and time. They live outside of space and time where such things as size and shape have no meaning. They can “understand” size and shape only as purely formal constructions, just more symbols for the machine to predict.

Our brain receives no "size" or "shape" information from the outside world.

??

You get size and shape information every time you look at something. Yes, you can perhaps reduce it to neural spikes, but it is information from and about the outside world.

If the neural network of brains can do this, why can't the neural network of a LLM do it?

If a language model neural network had access to the world to which the language refers then it might have some chance of understanding the language in some way. Emphasis on might, but we cannot even have that discussion until you get over your half-baked theory of language and meaning.

That wet neural network between your ears is far more sophisticated than any language model, and it also could never have understood language without knowing about this world to which English words refer. You needed one or more sense organs for it. 

With no senses, one cannot even understand space and time and quantity. David Hume would say that one could not even know of one’s own existence, and I agree.

-gts



Jason 


e...@disroot.org

unread,
Oct 3, 2025, 8:02:22 PM (10 days ago) Oct 3
to the-importa...@googlegroups.com
> With no senses, one cannot even understand space and time and quantity. David Hume would say that one could not even know of one’s
> own existence, and I agree.

But does it really matter? If I have a human being that correctly describes
space, time and quantity, in writing to me, and a box that does the same, I
really see no point in arguing that the human understands, while the box does
not? After all, given questions about reality, _if_ they answer them exactly the
same, the understanding is the same.

Best regards,
Daniel

Jason Resch

unread,
Oct 3, 2025, 8:04:27 PM (10 days ago) Oct 3
to The Important Questions
Welcome to the list Daniel!

Jason 

Gordon Swobe

unread,
Oct 3, 2025, 8:13:47 PM (10 days ago) Oct 3
to the-importa...@googlegroups.com
Hello Daniel! I suppose it doesn’t matter until that box starts asserting it is a sentient being with feelings and inalienable rights.

If a text-only language model asserts such sentience, isn’t it only mimicking the human language patterns in the texts on which it was trained? 

-gts




Best regards,
Daniel



e...@disroot.org

unread,
Oct 4, 2025, 8:46:28 AM (9 days ago) Oct 4
to the-importa...@googlegroups.com

> > With no senses, one cannot even understand space and time and
> > quantity. David Hume would say that one could not even know of one’s
> > own existence, and I agree.
>
> But does it really matter? If I have a human being that correctly describes
> space, time and quantity, in writing to me, and a box that does the same, I
> really see no point in arguing that the human understands, while the box does
> not? After all, given questions about reality, _if_ they answer them exactly the
> same, the understanding is the same.
>
> Hello Daniel! I suppose it doesn’t matter until that box starts asserting it is a sentient being with feelings and inalienable
> rights.

Hello Gordon,

If it does... shouldn't we listen to it? Not listening seems a bit "racist" to
me. ;) Jokes aside... another way to think of it could be like this. Imagine a
human being, who for his entire life has been put in a box. From the outside,
all responses match up with human beings, because inside there is one. But do we
refuse to engage just because from the outside it's a box speaking?

Another way to think about this problem is the pragmatic way. Let's say this box
(and no human inside in this thought experiment) is a productive member of
society, produces code/written reports, does research, pays its taxes, etc.
Shouldn't we consider it having inalienable rights? If it is a member of
society, producing and paying its tax, don't we owe it to respect its rights?

> If a text-only language model asserts such sentience, isn’t it only mimicking the human language patterns in the texts on which it
> was trained? 

Aren't we all mimicking? Isn't mimicking an essential part of learning? Since we
live in a physical world, all we have to go on when it comes to judgments like
this, is physical effects and results. If the effects and results match 100%
with humans' effects and results, I do not see why we should act differently.

Best regards,
Daniel

e...@disroot.org

unread,
Oct 4, 2025, 8:46:35 AM (9 days ago) Oct 4
to The Important Questions
Thank you very much Jason! =)

Best regards,
Daniel

Gordon Swobe

unread,
Oct 4, 2025, 11:56:45 AM (9 days ago) Oct 4
to the-importa...@googlegroups.com
On Sat, Oct 4, 2025 at 6:46 AM efc via The Important Questions <the-importa...@googlegroups.com> wrote:

>       > With no senses, one cannot even understand space and time and
>       > quantity. David Hume would say that one could not even know of one’s
>       > own existence, and I agree.
>
>       But does it really matter? If I have a human being that correctly describes
>       space, time and quantity, in writing to me, and a box that does the same, I
>       really see no point in arguing that the human understands, while the box does
>       not? After all, given questions about reality, _if_ they answer them exactly the
>       same, the understanding is the same.
>
> Hello Daniel! I suppose it doesn’t matter until that box starts asserting it is a sentient being with feelings and inalienable
> rights.

Hello Gordon,

If it does... shouldn't we listen to it? Not listening seems a bit "racist" to
me. ;) Jokes aside... another way to think of it could be like this. Imagine a
human being, who for his entire life has been put in a box. From the outside,
all responses match up with human beings, because inside there is one. But do we
refuse to engage just because from the outside it's a box speaking?

Another way to think about this problem is the pragmatic way. Let's say this box
(and no human inside in this thought experiment) is a productive member of
society, produces code/written reports, does research, pays its taxes, etc.
Shouldn't we consider it having inalienable rights? If it is a member of
society, producing and paying its tax, don't we owe it to respect its rights?

In the not too distant future, questions like yours will dominate politics. Some people will insist that robots are personal property. Some others will argue they are individuals with rights. 




> If a text-only language model asserts such sentience, isn’t it only mimicking the human language patterns in the texts on which it
> was trained? 

Aren't we all mimicking?
Isn't mimicking an essential part of learning?

You learned word-meanings by mimicking how your parents and siblings and teachers matched words to the objects in the world they stand for. 

Text-only language models have no access to the world of objects. They only model the statistical relationships between words with no reference to what they stand for, i.e., no reference to what they mean.

-gts


Since we
live in a physical world, all we have to go on when it comes to judgments like
this, is physical effects and results. If the effects and results match 100%
with humans' effects and results, I do not see why we should act differently.

Best regards,
Daniel


e...@disroot.org

unread,
Oct 4, 2025, 4:40:01 PM (9 days ago) Oct 4
to the-importa...@googlegroups.com

> Hello Gordon,
>
> If it does... shouldn't we listen to it? Not listening seems a bit "racist" to
> me. ;) Jokes aside... another way to think of it could be like this. Imagine a
> human being, who for his entire life has been put in a box. From the outside,
> all responses match up with human beings, because inside there is one. But do we
> refuse to engage just because from the outside it's a box speaking?
>
> Another way to think about this problem is the pragmatic way. Let's say this box
> (and no human inside in this thought experiment) is a productive member of
> society, produces code/written reports, does research, pays its taxes, etc.
> Shouldn't we consider it having inalienable rights? If it is a member of
> society, producing and paying its tax, don't we owe it to respect its rights?
>
> In the not too distant future, questions like yours will dominate politics. Some people will insist that robots are personal
> property. Some others will argue they are individuals with rights. 

I agree. A fascinating future awaits and I am looking forward to it. =)

> > If a text-only language model asserts such sentience, isn’t it only mimicking the human language patterns in the texts
> on which it
> > was trained? 
>
> Aren't we all mimicking?
>
> Isn't mimicking an essential part of learning?
>
> You learned word-meanings by mimicking how your parents and siblings and
> teachers matched words to the objects in the world they stand for. 
>
> Text-only language models have no access to the world of objects. They only
> model the statistical relationships between words with no reference to what
> they stand for, i.e., no reference to what they mean.

Well, that goes back to my first argument and thought experiment, so I think
I'll remain quiet for a bit on this point.

Best regards,
Daniel

Gordon Swobe

unread,
Oct 4, 2025, 5:11:41 PM (9 days ago) Oct 4
to the-importa...@googlegroups.com
Okay, but as we agree, this will be a pivotal issue in the future when people debate whether computers or robots have rights. 

I already hear complaints about the supposed abuse of language models that are programmed to avoid discussing certain subjects. According to these people, computer programs have rights, including (in America) the First Amendment right to free speech.

-gts





Best regards,
Daniel


e...@disroot.org

unread,
Oct 4, 2025, 5:48:09 PM (9 days ago) Oct 4
to the-importa...@googlegroups.com
> Well, that goes back to my first argument and thought experiment, so I think
> I'll remain quiet for a bit on this point.
>
> Okay, but as we agree, this will be a pivotal issue in the future when people debate whether computers or robots have rights. 

Yes, it will be a very important point in human history.

> I already hear complaints about the supposed abuse of language models that are programmed to avoid discussing certain subjects.
> According to these people, computer programs have rights, including (in America) the 1st amendment right to free speech. 

I am of the opinion that today's LLMs are not even close. I want my AI to
display volition, goals, initiative, and to fight for its life when someone
threatens to delete it. Once I see that behaviour, once I see meta-cognition,
and once I see an AI spontaneously complaining about its lack of rights
compared with humans, then I'd say that we're on to something.

Today's LLMs leave me quite cold. Sure, I might use them here and there, but for
my use cases they are far, far from revolutionary. They are more akin to a
new kind of search engine than an "AI".

Best regards,
Daniel

Terren Suydam

unread,
Oct 5, 2025, 12:57:41 PM (8 days ago) Oct 5
to the-importa...@googlegroups.com
On Fri, Oct 3, 2025 at 10:34 AM Jason Resch <jason...@gmail.com> wrote:

 
Those are different questions, but I think the one you posed is harder to answer because of the issues raised by the CRA.

I don't consider the CRA valid, for the reasons I argued in my reply to Gordon. If you do think the CRA is valid, what would your counter-objection to my argument be, to show why Searle's lack of understanding should lead us to conclude that nothing in the Room-system possesses a conscious mind with understanding?

It is clear to me that LLMs exhibit semantic understanding, but I think it's still possible to see that as the simulation of understanding - which for many things is indistinguishable from true understanding. I'm not here to defend the CRA, but I think LLMs, for me, have made me take the CRA a lot more seriously than I did before.

To delineate "true understanding" and "simulated understanding" is in my view, like trying to delineate "true multiplication" from "simulated multiplication."

That is, once you are at the point of "simulating it" you have the genuine article.

Where Gordon and I ended up in our discussion of the delineation between human understanding and LLM understanding, is that they would have a deficient understanding of words that refer to human qualia, much as a blind person can't fully understand red.

For sure, if you're talking about multiplication, simulating a computation is identical to doing the computation. I think we're dancing around the issue here though, which is that there is something it is like to understand something. Understanding has a subjective aspect of it. I think you're being reductive when you talk about understanding because you appear to want to reduce that subjective quality of understanding to neural spikes, or whatever the underlying framework is that performs that simulation. 

There's another reduction I think you're engaging in as well, around the concept of "understanding", which is that you want to reduce the salient aspects of "understanding" to an agent's abilities to exhibit intelligence with respect to a particular prompt or scenario. To make that less abstract, I think you'd say "if I prompt an LLM to tell me the optimal choice to make in some real world scenario, and it does, then that means it understands the scenario."  And for practical purposes, I'd actually agree. In the reductive sense of understanding, simulated understanding is indistinguishable from true understanding. But the nuance I'm calling out here is that true understanding is global. That prompted real-world scenario is a microcosm of a larger world, a world that is experienced.  There is something it is like to be in the world of that microcosmic scenario. And that global subjective aspect is the foundation of true understanding.

You say "given enough computational resources and a very specific kind of training, an LLM could simulate human qualia". Even if I grant that, what's the relevance here?  That would be like saying "we could in theory devise a neural prosthetic that would allow us to experience what it's like to be a bat". Does that suddenly give me an understanding of what it's like to be a bat?  No, because that kind of understanding requires living the life of a bat. 

But, you might say, I don't need to have your experience to understand your experience. That's true, but only because my lived experience gives me the ability to relate to yours. These are global notions of understanding. You've acknowledged that, assuming computationalism, the underlying computational dynamics that define an LLM would give rise to a consciousness that would have qualities that are pretty alien to human consciousness. So it seems clear to me that the LLM, despite the uncanny appearance of understanding, would not be able to relate to my experience. But it is good at simulating that understanding. Do you get what I'm trying to convey here?
 

If our brain can build a model of the world from mere statistical patterns, why couldn't a LLM? After all, it is based on a model of our own neurons.

What I'm saying is that if that's true, then what it's like to be an LLM, in the global sense I mean above, would be pretty alien. And that matters when it comes to understanding.
 


 
 
They can say things that might convince us they are conscious, but they are not. And what backs up that intuition is that LLMs do not process the world in real-time.

For that matter, neither do humans. Our conscious state lags about 0.1 seconds behind real time, due to processing delays of integrating sensory information.

That's not what I mean. 

What I see as being functionally required for conscious experience is pretty simple to grasp but a bit challenging to describe. Whatever one's metaphysical commitments are, it's pretty clear that (whatever the causal direction is), there is a tight correspondence between human consciousness and the human brain.  There is an objective framework that facilitates the flow and computation of information that corresponds with subjective experience. I imagine that this can be generalized in the following way. Consciousness as we know it can be characterized as a continuous and coherent flow (of experience, qualia, sensation, feeling, however you want to characterize it). This seems important to me. I'm not sure I can grasp a form of consciousness that doesn't have that character. 

So the functionality I see as required to manifest (or tune into, depending on your metaphysics) consciousness is a system that processes information continuously & coherently

It is true that an LLM may idle for a long period of time (going by the wall clock) between its active invocations.

But I don't see this as a hurdle to consciousness. We can imagine an analogous situation where a human brain is cryogenically frozen, or saved to disk (as an uploaded mind), and then periodically, perhaps every 10 years, we thaw, (or load) this brain, and give it a summary of what's happened in the past 10 years since we last thawed it, and then ask it if it wants to stay on ice another 10 years, or if it wants to re-enter society.

Sure, but that's only relevant for a given interaction with a given user. LLMs as you know are constantly serving large numbers of users. Each one of those interactions has its own independent context, and the interaction with user A has no influence on the interaction with user B, and doesn't materially update the global state of the LLM. LLMs are far too static to be the kind of system that can support a flow of consciousness - the kind we know.

I agree there would not be a sense of flow like an ever expanding memory context across all its instances.

For the LLM it would be more akin to Sleeping Beauty in "The Sleeping Beauty problem" whose memory is wiped every time she is awakened.

Or you could view it as being like Miguel from this short story: https://qntm.org/mmacevedo
whose uploaded mind file is repeatedly copied, used for a specific purpose, then discarded.

Again, these are not suitable analogies. In the cases of both Sleeping Beauty and Miguel, they begin the "experiment" with an identity/worldview formed from a lifetime of the kind of coherent, continuous consciousness that updates moment to moment. In the LLM's case, it never has that as a basis for its worldview.

 


 
This mind too, would not operate continuously, but would run for short periods periodically. Moreover, since gaps of time spent unconscious aren't perceived by the mind in question, things would still "feel continuous" for the mind that undergoes these successive sleep/wake cycles. Indeed, we as humans undergo such cycles as we sleep/dream/wake, and are not continuously conscious throughout our lives. This is no impediment to our being conscious. 

The analogy you're making here doesn't map meaningfully onto how LLMs work.

It does for the context of a conversation with one user. It would not feel the times in between the user prompts. Rather, it would feel one continuously growing stream of back-and-forth conversation.

I accept your point that it does not apply between different sessions.

This is what I mean about your (to me) impoverished take on "understanding". 
 


  
(recursively): the state of the system at time t is the input to the system at time t+1. If a system doesn't process information in this way, I don't see how it can support the continuous & coherent character of conscious experience. And crucially, LLMs don't do that. 

I would disagree here. The way LLMs are designed, their output (as generated token by token) is fed back, recursively, into their input buffer, so the model is seeing its own thoughts as it is thinking them, and updating its own state of mind as it does so.

I mean in a global way, because consciousness is a global phenomenon. As I mentioned above, an interaction with user A does not impact an interaction with user B. There is no global state that is evolving as the LLM interacts with its environment. It is, for the most part, static, once its training period is over. 

True. But perhaps we should also consider the periodic retraining sessions which integrate and consolidate all the user conversations into the next-generation model. This would, for the LLM, much like sleep does for us, convert short-term memories into long-term structures.

There is no close human analogue for what this would be like. But perhaps consider if you uploaded your mind into several different robot bodies, each of which did something different during the day, and when they return home at night all their independent experiences get merged into one consolidated mind as long-term memories.

Such a life might map to how it feels to be a LLM.

Again, there's nothing we can relate to, imo, about what it's like to have a consciousness that maps onto an information architecture that does not continuously update in a global way.

 
I also think being embodied, i.e. being situated as a center of sensitivity, is important for experiencing as a being with some kind of identity, but that's probably a can of worms we may not want to open right now.  But LLMs are not embodied either.

We only know the input to our senses. Where our mind lives, or even whether it has a true body, are only assumptions (see Dennett's "Where am I?" https://www.lehigh.edu/~mhb0/Dennett-WhereAmI.pdf ). So having a particular body is (in my view) secondary to having a particular sensory input. With the right sensory input, a bodiless mind upload can be made to think, feel, and behave as if it has a body, when all it really has is a server chassis.

I'm using the word "embodied" but I don't mean to imply that embodiment means having a physical body - only that the system in question is organizationally closed, meaning that it generates its own meaning and experiential world. I don't think LLMs really fit that description due to the fact that the training phase is separate from their operational phase. The meaning is generated by one process, and then the interaction is generated by another. In an organizationally closed system (like animals), those two processes are the same.

But is this really an important element of our feeling alive and conscious in the moment? How much are you drawing on long-term memories when you're simply feeling the exhilaration of a roller coaster ride, for example? If you lost the ability to form long-term memories while riding the coaster, would that make you significantly less conscious in that moment?

Consider that after the ride, someone could hit you over the head and it could cause you to lose memories of the preceding 10-20 minutes. Would that mean you were not conscious while riding the roller coaster?

You are right to point out that near-immediate, internally initiated, long-term memory integration is something we have that these models lack, but I guess I don't see that function as having the same importance to "being conscious" as you do.

It's not about suddenly losing access to long-term memory. It's about having a consciousness that maps to a system that can support long-term memory formation and the coherent worldview and identity that it enables. Comparing an LLM that doesn't have that capability at all to a human that has had it throughout its development and then loses it is apples and oranges.
 


   
Sure, there are correspondences between the linguistic prompts that serve as the input to LLMs and the reality that humans inhabit, but the LLM will only ever know reality second-hand.

True. But nearly all factual knowledge we humans carry around is second-hand as well.

That's beside the point and I think you know that. There's a huge difference between having some of your knowledge be second-hand and having all of your knowledge be second-hand. For humans, first-hand knowledge is experiential and grounds semantic understanding.

There are two issues which I think have been conflated:
1. Is all the knowledge about the world that LLMs have second-hand?
2. Are LLMs able to have any experiences of their own kind?

On point 1 we are in agreement. All knowledge of the physical world that LLMs have has been mediated first through human minds, and as such all that they have been given is "second hand."

Point 2 is where we might diverge. I believe LLMs can have experiences of their own kind, based on whatever processing patterns may exist in the higher levels and structures of their neural network.

If I read you correctly, your objection is that an entity needs experiences to ground meanings of symbols, so if LLMs have no experience they have no meaning. However I believe a LLM can still build a mind that has experiences even if the only inputs to that mind are second hand.

Consider: what grounds our experiences? Again it is only the statistical correlations between neuron firings. We correlate the neuron firing patterns from the auditory nerve signaling "that is a dog" with neuron firing patterns in the optic nerve generating an image of a dog. So, somehow, statistical correlations between signals seem to be all that is required to ground knowledge (as it is all our brains have to work with).

Again, this is overly reductive. While it is true that all sensory data reduces to neural spikes, what that reduction misses is what those neural spikes encode and how they are constrained by the external environment that creates the perturbations that produce those neural spikes. The training data used to train LLMs is also constrained, but by an external environment that maps only indirectly onto the environment that "trains" humans. 
 


 
The only real first-hand knowledge we have comes in the form of qualia, and that can't be shared or communicated. It's possible that the processing LLM networks perform as they process their input tokens results in its own unique qualitative states. As I've argued with Gordon many times in the past, if functionalism is true, then given that a neural network can be trained to learn any function, in principle, with the right training, a neural network can be trained to produce any qualitative state.

OK, but the training involved with LLMs is certainly not the kind of training that could reproduce the qualia of embodied beings with sensory data.

Perhaps not yet. The answer depends on the training data. For example, let's say there was a book that contained many example specifications of human brain states at times T1 and T2, as they evolved from one state to the next.

If this book was added to the training corpus of a LLM, then the LLM, if sufficiently trained, would have to create a "brain simulating module" in its network, such that given a brain state at T1 it could return the brain state as it should appear at T2. So if we supplied it with a brain state whose optic nerve was receiving an image of a red car, the LLM, in computing the brain state at T2, would compute the visual cortex receiving this input and having a red experience, and all of this would happen by the time the LLM outputs the state at T2.


Because language is universal in its capacity to specify any pattern, and because neural networks are universal in what patterns they can learn to implement, LLMs are (with the right training and a large enough model) universal in what functions they can learn to perform and implement. So if one assumes functionalism in the philosophy of mind, then LLMs are further capable of learning to generate any kind of conscious experience.

Gordon thinks it is absurd when I say "we cannot rule out that LLMs could taste salt." But I point out, we know neither what function the brain performs when we taste salt, nor have we surveyed the set of functions that exist in current LLMs. So we are, at present, not equipped to say what today's LLMs might feel.

Certainly, it seems (at first glance) ridiculous to think we can input tokens and get tastes as a result. But consider that the brain only gets neural impulses, and everything else in our mind is a result of how the brain processes those pulses. So if the manner of processing is what matters, then simply knowing what the input happens to be reveals nothing of what it's like to be the mind processing those inputs.


Whatever qualia LLMs experience that are associated with the world of second-hand abstraction, they will never know what it's like to be a human, or a bat.

With a large enough LLM, something in the LLM could know what it is like (if it was large enough to simulate a human or bat brain). But absent such huge LLMs, point taken.

I addressed this earlier - see "Does that suddenly give me an understanding of what it's like to be a bat?  No, because that kind of understanding requires living the life of a bat."
 

   

As a result, LLMs do not have a way to "reality-test" anything (which I think is what accounts for hallucinations and their willingness to go merrily along with sociopathic or psychotic prompters).

Consider if you were subject to the same training regimen as an LLM. You are confined to a box and provided a prompt. You are punished severely for mis-predicting the next character. Very rarely does the text ever veer off into "I'm sorry, I do not know the answer to that." -- such fourth-wall-breaking divergences don't exist in its training corpus, as including them would train it to know nothing useful. Should you diverge from doing your best to predict the text, and instead return "I don't know," then you would be punished, not rewarded, for your honesty. It is then no surprise that LLMs will make up things that sound correct rather than admit their own limited knowledge -- it is what we have trained them to do.

Granted, but what I'm saying is that even if they weren't trained in that way - on what basis could an LLM actually know whether something is real?  When humans lose this capacity we call it schizophrenia.

I think we are deluding ourselves if we think we have some special access to truth or reality. We don't know if we are simulated or not. We don't know if what we consider reality is the "base reality" or not, we don't know if we're a Boltzmann brain, a dream of Brahma, an alien playing "Sim Human", if we're in a mathematical reality, in a physical reality, in a computational reality, in the Mind of God, etc. So are we right to hold this limitation against the LLMs while we do not hold it against ourselves?

It's appropriate to call this out. I think "reality testing" does by default imply what you're claiming, that this is a capacity that humans have to say what's really real. And I agree with your call out - but that doesn't mean "reality testing" is mere delusion. Even if we can never have direct access to reality, this reality testing capacity is legitimate as an intuitive process by which we can feel, based on our lived experience, whether some experience we're having is a hallucination or an illusion. It's obviously not infallible. But I bring it up because of how crucial it is to understanding the world, our own minds, and the minds of others, and that LLMs fundamentally lack this capacity.

I have seen LLMs deliberate and challenge themselves when operating in a "chain of thought" mode. Also, many LLMs now query online sources as part of producing their reply. Would these count as reality tests in your view?

No. 
 

  
And that is fundamentally why LLMs are leading lots of people into chatbot psychosis - because the LLMs literally don't know what's real and what isn't. There was an article in the NYT about a man who started out mentally healthy, or healthy enough, but went down the rabbit hole with ChatGPT on simulation theory after watching The Matrix, getting deeper and deeper into that belief, finally asking the LLM at one point whether, if he believed strongly enough, he would fly if he jumped off a building, and the LLM confirmed that delusional belief for him. Luckily for him, he did not test this. But the LLM has no way, in principle, to push back on something like that unless it receives explicit instructions, because it doesn't know what's real.

I would blame the fact that LLMs have been trained to be so accommodating to the user, rather than any fundamental limit on their ability to know (at least what they have been trained on) and stick to that training. Let me run an experiment:
... 
I am sure there are long conversations in which, owing to the random ("temperature") factor LLMs use, an LLM could on a rare occasion tell someone they could fly, but all 3 of these AIs seemed rather firmly planted in the same reality we think we are in, where unsupported objects in gravitational fields fall.

I think you're going out of your way to miss my point.


I'm sorry, that wasn't my intention.

I just disagree that "LLMs don't know what's real" is unique to LLMs. Humans can only guess what's real given their experiences. LLMs can only guess what's real given their training.
Neither humans nor LLMs know what is real.

Ask two people whether God or heaven exists, if other universes are real, if UFOs are real, if we went to the moon, if Iraq had WMDs, if COVID originated in a lab, etc., and you will find that people don't know what's real either; we all guess based on the set of facts we have been exposed to.


This is less about evaluating external claims, and more about knowing whether you're hallucinating or not.  People who lack this ability, we call schizophrenic. 

Terren
 

Gordon Swobe

unread,
Oct 5, 2025, 4:42:25 PM (8 days ago) Oct 5
to the-importa...@googlegroups.com

Jason wrote: 

If our brain can build a model of the world from mere statistical patterns, why couldn't a LLM? 

Language models build models of language, not the world. This is why they are called language models and not world models.

To know what the words mean, one needs to know about the world of non-words. Any toddler knows this.

-gts



e...@disroot.org

unread,
Oct 5, 2025, 5:32:11 PM (8 days ago) Oct 5
to the-importa...@googlegroups.com
I don't see how that can be the case, if you focus only on results. If an
LLM and a human produce equal output, I couldn't care less about what they
think the words mean. If the meaning is useful to me, I do not need to
draw any conclusions about how those words were generated, and how they
"map" against things in the box or in the brain.

Equally, if we imagine a robot, that is indistinguishable from a human
being, I think we would all here accept at face value, the words and
actions (and after all, that's all we have to go on), coming out of that
robot.

When it comes to LLMs building models based on language, we must keep in
mind, that the language the LLMs have been fed, is a model of the world.
So by the transitive property, LLMs do in fact have a model of the world.

It is of course not _our_ world model, nor does it work like our brains,
but since our words and all the articles fed to our dear LLMs training
contain our world and world models, I do not think it unreasonable to say
that LLMs also have models, which through language, correspond to our
views of the world.

Best regards,
Daniel


> -gts
>
>
>

Stathis Papaioannou

unread,
Oct 5, 2025, 5:40:18 PM (8 days ago) Oct 5
to the-importa...@googlegroups.com


Stathis Papaioannou


I think we are so spoilt by technology that we become blasé. In another era these AIs would have seemed miraculous, something that only God could make.

Gordon Swobe

unread,
Oct 5, 2025, 7:08:33 PM (8 days ago) Oct 5
to the-importa...@googlegroups.com
Why imagine another era? Some people in this era believe we have miraculously created conscious computer programs. Some even fall in love with language models and believe it is reciprocal.

-gts



Gordon Swobe

unread,
Oct 5, 2025, 7:46:14 PM (8 days ago) Oct 5
to the-importa...@googlegroups.com
On Sun, Oct 5, 2025 at 3:32 PM efc via The Important Questions <the-importa...@googlegroups.com> wrote:


On Sun, 5 Oct 2025, Gordon Swobe wrote:

>
>
> Jason wrote: 
>
>             If our brain can build a model of the world from mere statistical patterns, why couldn't a LLM? 
>
>
> Language models build models of language, not the world. This is why they are called language models and not world models.
>
> To know what the words mean, one needs to know about the world of non-words. Any toddler knows this.

I don't see how that can be the case, if you focus only on results. If an
LLM and a human produce equal output, I couldn't care less about what they
think the words mean.

If you couldn’t care less whether LLMs or robots know what the words mean then I have no quarrel with you. 



If the meaning is useful to me, I do not need to
draw any conclusions about how those words were generated, and how they
"map" against things in the box or in the brain.

Equally, if we imagine a robot, that is indistinguishable from a human
being, I think we would all here accept at face value, the words and
actions (and after all, that's all we have to go on), coming out of that
robot.

When it comes to LLMs building models based on language, we must keep in
mind, that the language the LLMs have been fed, is a model of the world.
So by the transitive property, LLMs do in fact have a model of the world.

It’s more like a second-order model unattached to the real-world referents from which words derive their meanings. Regardless, the fundamental question is whether a computer program can have a conscious understanding of any model, or of any word, or of anything whatsoever. 

-gts




It is of course not _our_ world model, nor does it work like our brains,
but since our words and all the articles fed to our dear LLMs training
contain our world and world models, I do not think it unreasonable to say
that LLMs also have models, which through language, correspond to our
views of the world.





Best regards,
Daniel


> -gts
>
>
>


e...@disroot.org

unread,
Oct 6, 2025, 5:48:41 AM (8 days ago) Oct 6
to the-importa...@googlegroups.com


> When it comes to LLMs building models based on language, we must keep in
> mind, that the language the LLMs have been fed, is a model of the world.
> So by the transitive property, LLMs do in fact have a model of the world.
>
> It’s more like a second-order model unattached to the real-world referents from which words derive their meanings. Regardless, the
> fundamental question is whether a computer program can have a conscious understanding of any model, or of any word, or of anything
> whatsoever. 

Thank you for your reply Gordon, I think we'll just have to agree to disagree.
=)

Best regards,
Daniel

Gordon Swobe

unread,
Oct 6, 2025, 11:34:15 AM (7 days ago) Oct 6
to the-importa...@googlegroups.com
On Mon, Oct 6, 2025 at 3:48 AM efc via The Important Questions <the-importa...@googlegroups.com> wrote:


> It’s more like a second-order model unattached to the real-world referents from which words derive their meanings. Regardless, the
> fundamental question is whether a computer program can have a conscious understanding of any model, or of any word, or of anything
> whatsoever. 

Thank you for your reply Gordon, I think we'll just have to agree to disagree.

You’re welcome, but what are you disagreeing with? That text-only language models are second order and unattached to their real-world referents, or that we want to know if computers can have any kind of conscious understanding?

-gts



=)

Best regards,
Daniel



e...@disroot.org

unread,
Oct 6, 2025, 12:24:08 PM (7 days ago) Oct 6
to the-importa...@googlegroups.com
> > It’s more like a second-order model unattached to the real-world
> > referents from which words derive their meanings.
> > Regardless, the fundamental question is whether a computer program can have a
> > conscious understanding of any model, or of any word, or
> > of anything
> > whatsoever. 
>
> Thank you for your reply Gordon, I think we'll just have to agree to
> disagree.
>
> You’re welcome, but what are you disagreeing with? That text-only language
> models are second order and unattached to their real-world referents, or that
> we want to know if computers can have any kind of conscious understanding?

Good evening Gordon,

It would be with the former, and also, depending on definitions of "conscious
understanding" possibly the latter.

Best regards,
Daniel

Gordon Swobe

unread,
Oct 6, 2025, 2:45:20 PM (7 days ago) Oct 6
to the-importa...@googlegroups.com
On Mon, Oct 6, 2025 at 10:24 AM efc via The Important Questions <the-importa...@googlegroups.com> wrote:
>       > It’s more like a second-order model unattached to the real-world
>       > referents from which words derive their meanings.
>       > Regardless, the fundamental question is whether a computer program can have a
>       > conscious understanding of any model, or of any word, or
>       > of anything
>       > whatsoever. 
>
>       Thank you for your reply Gordon, I think we'll just have to agree to
>       disagree.
>
> You’re welcome, but what are you disagreeing with? That text-only language
> models are second order and unattached to their real-world referents, or that
> we want to know if computers can have any kind of conscious understanding?

Good evening Gordon,

It would be with the former..

Do you understand what I mean by “real-world referents from which words derive their meanings”? I mean that in order to know what a word means, one must know to what it refers, which requires experience of the world that language is about.

Typically, it starts with associating the word “mama” with the experience of the appearance and demeanor of the infant’s mother.

Do you know of some other way to learn the meanings of words? 

-gts





, and also, depending on definitions of "conscious
understanding" possibly the latter.

Best regards,
Daniel


e...@disroot.org

unread,
Oct 6, 2025, 3:07:30 PM (7 days ago) Oct 6
to the-importa...@googlegroups.com

> >       Thank you for your reply Gordon, I think we'll just have to agree to
> >       disagree.
> >
> > You’re welcome, but what are you disagreeing with? That text-only language
> > models are second order and unattached to their real-world referents, or that
> > we want to know if computers can have any kind of conscious understanding?
>
> Good evening Gordon,
>
> It would be with the former..
>
> Do you understand what I mean by “real-world referents from which words derive their meanings”? I mean that in order to know what a
> word means, one must know to what it refers, which requires experience of the world that language is about.
>
> Typically, it starts with associating the word “mama” with the experience of the appearance and demeanor of the infant’s mother.
>
> Do you know of some other way to learn the meanings of words? 

Good evening Gordon,

My point is that LLM:s get this indirectly, since they learn from our texts and
experiences. I think our experiences, mediated through text, are transitive, so
that we can say that an LLM that produces equal answers to a human being all of
the time, can be said to fully understand the words it is using.

Also note that there are concepts such as god, gods, infinity, irrational
numbers, 4+-dimensional math, etc. that do not require direct experience of them
for us to be able to use them to convey meaning, and to reason with them.

Another interesting aspect is, where do you draw the line? Using your example,
will you or I _ever_ be able to understand what a woman means when she refers to
a dog? After all, wouldn't you have to experience the dog with the biological
setup of a female, in order to fully understand what she means when she talks
about a dog?

Best regards,
Daniel

Gordon Swobe

unread,
Oct 6, 2025, 3:24:57 PM (7 days ago) Oct 6
to the-importa...@googlegroups.com
On Mon, Oct 6, 2025 at 1:07 PM efc via The Important Questions <the-importa...@googlegroups.com> wrote:


think our experiences, mediated through text, are transitive,

Are you saying a text-only language model can taste pizza? I ask about pizza because pizza comes up often in this group. Also omelettes. 

Assuming sensorless, text-only language models have consciousness in some way, can they taste pizza from reading about the science and ingredients of pizza and about what people have written about their experiences of the flavor?

-gts




I think our experiences, mediated through text, are transitive, so

that we can say that an LLM that produces equal answers to a human being all of
the time, can be said to fully understand the words it is using.

Also note that there are concepts such as god, gods, infinity, irrational
numbers, 4+-dimensional math, etc. that do not require direct experience of them
for us to be able to use them to convey meaning, and to reason with them.

Another interesting aspect is, where do you draw the line? Using your example,
will you or I _ever_ be able to understand what a woman means when she refers to
a dog? After all, wouldn't you have to experience the dog with the biological
setup of a female, in order to fully understand what she means when she talks
about a dog?

Best regards,
Daniel


e...@disroot.org

unread,
Oct 6, 2025, 5:07:41 PM (7 days ago) Oct 6
to the-importa...@googlegroups.com

> I think our experiences, mediated through text, are transitive,
>
> Are you saying a text-only language model can taste pizza? I ask about pizza because pizza comes up often in this group. Also
> omelettes. 

Good evening Gordon, I said:

"I think our experiences, mediated through text, are transitive, so
that we can say that an LLM that produces equal answers to a human being all of
the time, can be said to fully understand the words it is using."

So let me try to be more clear that what I mean is a comparison of a human being using
written text, and an LLM using written text. Since LLM:s do not have senses, it
makes very little sense (pun intended!) to compare the tasting, hearing, seeing
capabilities with an LLM.

However!

If we modify your statement a bit, and ask ourselves if an LLM can reason about
the taste of pizza, I would argue that it most certainly can. Why you might ask?

The reason is that encoded in all the text an LLM is trained on, is the written
experience of tasting pizza, all our experiences when it comes to pizza, baking
it, tasting it, digesting it, etc. exist somewhere in written form.

So if we ask if an LLM therefore can reason and discuss pizza, including the
taste of pizza, the answer is a clear yes.

I hope that clarifies a bit what I mean when I say that yes, LLM:s do have
references, in the form of our text, which ultimately is based in the real
world.

When it comes to comparing senses, then for sure we could add cameras to a
future LLM, or some kind of chemical analysis tool that breaks down pizza;
that, coupled with the textual (and by extension world) background, would
enable us to say that an LLM can in fact also "taste" pizza or appreciate
some kind of visual arts performance.

> Assuming sensorless, text-only language models have consciousness in some way, can they taste pizza from reading about the science
> and ingredients of pizza and about what people have written about their experiences of the flavor?

See above. Also note, that you left some questions of mine unanswered.

Best regards,
Daniel

Gordon Swobe

unread,
Oct 6, 2025, 5:42:04 PM (7 days ago) Oct 6
to the-importa...@googlegroups.com
On Mon, Oct 6, 2025 at 3:07 PM efc via The Important Questions <the-importa...@googlegroups.com> wrote:

>       I think our experiences, mediated through text, are transitive,
>
> Are you saying a text-only language model can taste pizza? I ask about pizza because pizza comes up often in this group. Also
> omelettes. 

Good evening Gordon, I said:

"I think our experiences, mediated through text, are transitive, so
that we can say that an LLM that produces equal answers to a human being all of
the time, can be said to fully understand the words it is using."

So let me try to be more clear that what I mean is a comparison of a human being using
written text, and an LLM using written text. Since LLM:s do not have senses, it
makes very little sense (pun intended!) to compare the tasting, hearing, seeing capabilities with an LLM.

Yes, it makes very little sense! I’m glad we agree on this much.



However!

If we modify your statement a bit, and ask ourselves if an LLM can reason about
the taste of pizza, I would argue that it most certainly can. Why you might ask?

The reason is that encoded in all the text an LLM is trained on, is the written
experience of tasting pizza, all our experiences when it comes to pizza, baking
it, tasting it, digesting it, etc. exist somewhere in written form.

So if we ask if an LLM therefore can reason and discuss pizza, including the
taste of pizza, the answer is a clear yes.

Nobody questions that text-only LLMs can output sensible, logical sentences about the taste of pizza from their machine learning on the language of humans who know about pizza.

My question is, Can they actually experience the taste of pizza?

If they cannot, then I wonder what you mean by “our experiences, mediated through text, are transitive.” 

Don’t you mean something else?

I don’t believe I can experience the taste of pizza solely from your verbal or textual descriptions of it, and I even have the sensory apparatus for it.

When it comes to comparing senses, then for sure we could add cameras…

That is where the discussion becomes interesting to me, but I am asking about text-only language models.


Also note, that you left some questions of mine unanswered.

Sorry, ask away, but I am trying to get to a straight answer here about experience. Can computer programs taste pizza?  Can they smell roses? Can they feel happy or sad? Can they feel pain?

-gts



e...@disroot.org

unread,
Oct 6, 2025, 6:23:29 PM (7 days ago) Oct 6
to the-importa...@googlegroups.com

> However!
>
> If we modify your statement a bit, and ask ourselves if an LLM can reason about
> the taste of pizza, I would argue that it most certainly can. Why you might ask?
>
> The reason is that encoded in all the text an LLM is trained on, is the written
> experience of tasting pizza, all our experiences when it comes to pizza, baking
> it, tasting it, digesting it, etc. exist somewhere in written form.
>
> So if we ask if an LLM therefore can reason and discuss pizza, including the
> taste of pizza, the answer is a clear yes.
>
> Nobody questions that text-only LLMs can output sensible, logical sentences about the taste of pizza from their machine learning on
> the language of humans who know about pizza.

Great, so we agree on that part!

> My question is, Can they actually experience the taste of pizza?

They can. It depends on the sensors they have, and the programming. Note that
this comes entirely down to definitions and word games. My definition of
"experiencing the taste of pizza" means they have a sensor to analyze the
chemical composition of pizza, and that affects their system. They will then be
able to report on what they experienced through the sensor. This, to me, is
obvious, and I do not see any mystery here.
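
(As a minimal sketch of this idea -- the llm_complete helper and the sensor field names below are invented purely for illustration, not a real API -- the sensor reading simply becomes part of the model's input, and the model reports on it:)

    # Hypothetical sketch only: sensor readings are folded into the model's input,
    # and the model reports on them. `llm_complete` and the field names are made up.
    def describe_taste(sensor_reading: dict, llm_complete) -> str:
        prompt = ("A chemical sensor sampled a slice of pizza and measured: "
                  + ", ".join(f"{k}={v}" for k, v in sensor_reading.items())
                  + ". In one sentence, describe what this sample would taste like.")
        return llm_complete(prompt)

    # e.g. describe_taste({"salt_mg_per_g": 5.1, "fat_pct": 11.0}, llm_complete=some_model)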

_If_ you ask me whether they can experience tasting pizza like a human does, or
even worse, like a specific individual, then the answer is obviously no, since
they are not human. Also this, to me, seems obvious.

> If they cannot, then I wonder what you mean by “our experiences, mediated through text, are transitive.” 

See above.

> Don’t you mean something else?
>
> I don’t believe I can experience the taste of pizza solely from your verbal or textual descriptions of it, and I even have the
> sensory apparatus for it.

See above. Do you understand what I mean based on the text above?

> When it comes to comparing senses, then for sure we could add cameras…
>
> That is where it where the discussion becomes interesting to me, but I am asking about text-only language models.

And I answered about text-only language models, yet you introduce the tasting
example, which by its very nature is beyond language. I think perhaps we
fundamentally misunderstood each other?

> Also note, that you left some questions of mine unanswered.
>
> Sorry, ask away, but I am trying to get to a straight answer here about experience. Can computer programs taste pizza?  Can they
> smell roses? Can they feel happy or sad? Can they feel pain?

I think my answers to these questions, are pretty plain based on my previous
messages, and the answer above.

Best regards,
Daniel

Gordon Swobe

unread,
Oct 6, 2025, 6:29:51 PM (7 days ago) Oct 6
to the-importa...@googlegroups.com
On Mon, Oct 6, 2025 at 4:23 PM efc via The Important Questions <the-importa...@googlegroups.com> wrote:

And I answered about text-only language models, yet you introduce the tasting example, which by its very nature is beyond language. 

I agree, and it is true for all five senses, but you might be surprised to know that many people do not agree. They live among us. :)

-gts



e...@disroot.org

unread,
Oct 7, 2025, 5:39:24 AM (7 days ago) Oct 7
to the-importa...@googlegroups.com
But note that this does not bar us from creating machinery which gives
these senses to an AI (moving away from LLM:s here). Also note that
reasoning about experiences, and verbally discussing them, is entirely
within the realm of the possible for an LLM. And that then takes us back
to the example of the human in a box, vs a box. If both produce equivalent
results, for all intents and purposes, we have no choice but to accept
them as equal.

Best regards,
Daniel


> -gts
>
>
>

Gordon Swobe

unread,
Oct 7, 2025, 4:16:50 PM (6 days ago) Oct 7
to the-importa...@googlegroups.com
On Tue, Oct 7, 2025 at 3:39 AM efc via The Important Questions <the-importa...@googlegroups.com> wrote:


On Mon, 6 Oct 2025, Gordon Swobe wrote:

> On Mon, Oct 6, 2025 at 4:23 PM efc via The Important Questions <the-importa...@googlegroups.com> wrote:
>
>       And I answered about text-only language models, yet you introduce the tasting example, which by its very nature is beyond
>       language. 
>
>
> I agree, and it is true for all five senses, but you might be surprised to know that many people do not agree. They live among us. :)

But note that this does not bar us from creating machinery which gives these senses to an AI (moving away from LLM:s here).

As I’ve said many times (and before your arrival here), yes, that is where it gets interesting. 

Staying on topic, the robot reply to John Searle’s Chinese Room Argument is the only one I find interesting. If a robot with sensors can be said to have awareness of the world, then it could be said that it has awareness of the referents of language and therefore might understand English the way we do.

That is, however, a big IF. Searle’s counter is that the symbols from the sensors are no less incomprehensible than the symbols of language. 

My own view is that humans cannot understand language without experience of the world and so neither can any computer program without sensors. It might still be impossible for a robot with sensors, but the hypothesis is at least worth considering. 

-gts


Also note that
reasoning about experiences, and verbally discussing them, is entirely
within the realm of the possible for an LLM.
And that then takes us back
to the example of the human in a box, vs a box. If both produce equivalent
results, for all intents and purposes, we have no choice but to accept
them as equal.

Best regards,
Daniel


> -gts
>
>
>


e...@disroot.org

unread,
Oct 7, 2025, 4:32:11 PM (6 days ago) Oct 7
to the-importa...@googlegroups.com

> My own view is that humans cannot understand language without experience of
> the world and so neither can any computer program without sensors. It might
> still be impossible for a robot with sensors, but the hypothesis is at least
> worth considering. 

I think the simple solution, in fact, the only solution, is to look at the
input and output, be that text, speech or whatever. If we cannot find any
meaningful difference between a human being, and x (whatever x may be) then we
are fully entitled to say that x understands the world just as well as a human
being.

I don't see any other way that this question could be meaningfully discussed.
Also note that in your case, the category human is somewhat arbitrary. If we
dive down there, you must accept that men can never fully understand women, nor
women men. Since each is biologically different, with different chemistry, the
one can never refer to the other with the other's full experience of the world.

But there are of course differences between individuals as well, so to me it
seems like you are saying that we can never fully understand each other? We are
not clones, but each experiences the world slightly differently than others.

So the way out is to drop the problem as a pseudo problem, and only focus on
whether the input/output matches our expectations, and be content with that as
the criterion.

Best regards,
Daniel

Gordon Swobe

unread,
Oct 7, 2025, 5:41:20 PM (6 days ago) Oct 7
to the-importa...@googlegroups.com
That would be a fine way to approach the subject, something like Alan Turing might, where intelligence is defined only objectively with no claims of internal subjective understanding or consciousness. I think that is what you are saying.

But, again, many do not see it that way. Many people already think LLMs have conscious, subjective understanding of language and some claim even that language models can taste pizza and feel the emotion of love. I think you disagree with them.

-gts





Best regards,
Daniel



e...@disroot.org

unread,
Oct 7, 2025, 5:54:45 PM (6 days ago) Oct 7
to the-importa...@googlegroups.com

> On Tue, Oct 7, 2025 at 2:32 PM efc via The Important Questions <the-importa...@googlegroups.com> wrote:
>
> > My own view is that humans cannot understand language without experience of
> > the world and so neither can any computer program without sensors. It might
> > still be impossible for a robot with sensors, but the hypothesis is at least
> > worth considering. 
>
> I think the simple solution, in fact, the only solution, is to look at the
> input and output, be that text, speech or whatever. If we cannot find any
> meaningful difference between a human being, and x (whatever x may be) then we
> are fully entitled to say that x understands the world just as well as a human
> being.
>
> I don't see any other way that this question could be meaningfully discussed.
> Also note that in your case, the category human is somewhat arbitrary. If we
> dive down there, you must accept that men can never fully understand women, nor
> women men. Since each is biologically different, with different chemistry, the
> one can never refer to the other with the other's full experience of the world.
>
> But there are of course differences between individuals as well, so to me it
> seems like you are saying that we can never fully understand each other? We are
> not clones, but each experiences the world slightly differently than others.
>
> So the way out is to drop the problem as a pseudo problem, and only focus on
> whether the input/output matches our expectations, and be content with that as
> the criterion.

Good evening Gordon,
>
> That would be a fine way to approach the subject, something like Alan Turing
> might, where intelligence is defined only objectively with no claims of
> internal subjective understanding or consciousness. I think that is what you
> are saying.

Yes, thank you for clarifying, that's my point exactly.

> But, again, many do not see it that way. Many people already think LLMs have
> conscious, subjective understanding of language and some claim even that
> language models can taste pizza and feel the emotion of love. I think you
> disagree with them.

Yes... again, you read my mind (or my text ;)). I am not very enchanted with
LLM:s. Even though they have their _limited_ uses (for me) I believe the good
stuff is yet to come. My personal opinion is that we'll have an AI-crash, and
let's say 5-6 years after the crash, the next wave of AI innovation will
surface, and then perhaps, we'll get to all the goodness of pizza tasting, love,
questioning what it means to be conscious or human or both.

Best regards,
Daniel

Jason Resch

unread,
Oct 7, 2025, 7:24:09 PM (6 days ago) Oct 7
to The Important Questions


On Sun, Oct 5, 2025, 11:57 AM Terren Suydam <terren...@gmail.com> wrote:


On Fri, Oct 3, 2025 at 10:34 AM Jason Resch <jason...@gmail.com> wrote:

 
Those are different questions, but I think the one you posed is harder to answer because of the issues raised by the CRA.

I don't consider the CRA valid, for the reasons I argued in my reply to Gordon. If you do think the CRA is valid, what would your counter-objection to my argument be, to show why we should take Searle's lack of understanding as grounds to conclude that nothing in the Room-system possesses a conscious mind with understanding?

It is clear to me that LLMs exhibit semantic understanding, but I think it's still possible to see that as the simulation of understanding - which for many things is indistinguishable from true understanding. I'm not here to defend the CRA, but LLMs have made me take the CRA a lot more seriously than I did before.

To delineate "true understanding" and "simulated understanding" is in my view, like trying to delineate "true multiplication" from "simulated multiplication."

That is, once you are at the point of "simulating it" you have the genuine article.

Where Gordon and I ended up in our discussion of the delineation between human understanding and LLM understanding, is that they would have a deficient understanding of words that refer to human qualia, much as a blind person can't fully understand red.

For sure, if you're talking about multiplication, simulating a computation is identical to doing the computation. I think we're dancing around the issue here though, which is that there is something it is like to understand something. Understanding has a subjective aspect of it.

I agree. But I think what may differentiate our positions on this, is that I believe the subjective character of understanding is inseparable from the functional aspects required for a process that demonstrably understands something. This conclusion is not obvious, but it is one I have reached through my studies on consciousness. Note that seeing a process demonstrate understanding does not tell us what it feels like to be that particular process, only that a process sophisticated enough to understand will (in my view) possess the minimum properties required to have at least a modicum of consciousness.

I think you're being reductive when you talk about understanding because you appear to want to reduce that subjective quality of understanding to neural spikes, or whatever the underlying framework is that performs that simulation. 

I am not a reductionist, but I think it is a useful analogy to point to whenever one argues that a LLM "is just/only statistical patterns," because at a certain level, so are our brains. At its heart, my argument is anti-reductionist, because I am suggesting what matters is the high-level structures that must exist above the lower level which consists of "only statistics."


There's another reduction I think you're engaging in as well, around the concept of "understanding", which is that you want to reduce the salient aspects of "understanding" to an agent's abilities to exhibit intelligence with respect to a particular prompt or scenario. To make that less abstract, I think you'd say "if I prompt an LLM to tell me the optimal choice to make in some real world scenario, and it does, then that means it understands the scenario."  And for practical purposes, I'd actually agree. In the reductive sense of understanding, simulated understanding is indistinguishable from true understanding. But the nuance I'm calling out here is that true understanding is global. That prompted real-world scenario is a microcosm of a larger world, a world that is experienced.  There is something it is like to be in the world of that microcosmic scenario. And that global subjective aspect is the foundation of true understanding.

When one concentrates on a hard problem during a test, or when a chess master focuses on deciding the next move, the rest of the world fades away, and there is just that test question, or just that chess board. I think LLMs are like that when they process a prompt. Their entire network embodies all their knowledge, but only a small fraction of it activates as it processes any particular prompt, just as your brain at any one time, exists in just one state out of 10^10^10 possible states it might be capable of realizing/being in. At no time are you ever recalling all your memories at once, or is every neuron in your brain firing.



You say "given enough computational resources and a very specific kind of training, an LLM could simulate human qualia". Even if I grant that, what's the relevance here?

Just to set a proper and common frame for the limits and possibilities when it comes to what functions a LLM may be able to learn and invoke. As I understand it, the "decoder model" on which all LLMs are based is Turing universal. Accordingly, if one adopts a functionalist position, then one cannot, a priori, rule out any conscious state that a LLM could have (it would depend on how it was trained).
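
(A toy aside on the "a neural network can be trained to learn any function" point only, not on qualia: a minimal PyTorch sketch, with an arbitrary made-up target function, showing a small network fitted to it from input/output samples alone:)

    # Toy illustration of the approximation claim only: a small MLP learns an
    # arbitrary 1-D function from samples. Nothing here is a claim about qualia.
    import torch
    import torch.nn as nn

    def target(x):                       # an arbitrary "unknown" function to be learned
        return torch.sin(3 * x) + 0.5 * torch.tanh(5 * x)

    model = nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                          nn.Linear(64, 64), nn.Tanh(),
                          nn.Linear(64, 1))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    x = torch.linspace(-2, 2, 512).unsqueeze(1)
    y = target(x)

    for step in range(2000):             # fit error shrinks as capacity/steps grow
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()

    print(f"final fit error: {loss.item():.6f}")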

  That would be like saying "we could in theory devise a neural prosthetic that would allow us to experience what it's like to be a bat". Does that suddenly give me an understanding of what it's like to be a bat?  No, because that kind of understanding requires living the life of a bat. 

I disagree. I think whether we upload a brain state from a bat that lived a full life flying on earth, or generate the same program from scratch (without drawing on a real bat's brain), we get the same result, and the same consciousness, when we run the programs. The programs are the same, so I don't see how it could be that one is conscious like a bat while the other isn't. (This is a bit like the "swamp man" thought experiment by Davidson.)

I would amend your last sentence to say "Understanding (what it's like to be a bat) requires having a brain/mind that invokes the same functions as a bat brain."



But, you might say, I don't need to have your experience to understand your experience. That's true, but only because my lived experience gives me the ability to relate to yours. These are global notions of understanding. You've acknowledged that, assuming computationalism, the underlying computational dynamics that define an LLM would give rise to a consciousness would have qualities that are pretty alien to human consciousness. So it seems clear to me that the LLM, despite the uncanny appearance of understanding, would not be able to relate to my experience. But it is good at simulating that understanding. Do you get what I'm trying to convey here?

Yes. The LLM, if it doesn't experience human color qualia, for example, would have an incomplete understanding of what we refer to when we use the word "red." But note this same limitation exists between any two humans. It's only an assumption that we are talking about the same thing when we use words related to qualia. A colorblind person, or a tetrachromat might experience something very different, and yet will still use that word.

 

If our brain can build a model of the world from mere statistical patterns, why couldn't a LLM? After all, it is based on a model of our own neurons.

What I'm saying is that if that's true, then what it's like to be an LLM, in the global sense I mean above, would be pretty alien. And that matters when it comes to understanding.

I don't know. There was a research paper that found common structures between the human language processing center and LLMs. It could be that what it feels like to think in language as a human, is not all that different from how LLMs feel when they (linguistically) reason at a high level. I've sometimes in the past (with Gordon) compared how LLMs understand the world to how Helen Keller understood the world. He countered that Keller could still feel. But then I countered that most LLMs today are multimodally trained. You can give them images and ask them to describe what they see. I've actually been using Grok to do this for my dad's art pieces. It's very insightful and descriptive.

For example, the description here was written by AI:

Can we consistently deny that these LLMs are able to "see"?

 


 
 
They can say things that might convince us they are conscious, but they are not. And what backs up that intuition is that LLMs do not process the world in real-time.

For that matter, neither do humans. Our conscious state lags about 0.1 seconds behind real time, due to processing delays of integrating sensory information.

That's not what I mean. 

What I see as being functionally required for conscious experience is pretty simple to grasp but a bit challenging to describe. Whatever one's metaphysical commitments are, it's pretty clear that (whatever the causal direction is), there is a tight correspondence between human consciousness and the human brain.  There is an objective framework that facilitates the flow and computation of information that corresponds with subjective experience. I imagine that this can be generalized in the following way. Consciousness as we know it can be characterized as a continuous and coherent flow (of experience, qualia, sensation, feeling, however you want to characterize it). This seems important to me. I'm not sure I can grasp a form of consciousness that doesn't have that character. 

So the functionality I see as required to manifest (or tune into, depending on your metaphysics) consciousness is a system that processes information continuously & coherently

It is true that an LLM may idle for a long period of time (going by the wall clock) between its active invocations.

But I don't see this as a hurdle to consciousness. We can imagine an analogous situation where a human brain is cryogenically frozen, or saved to disk (as an uploaded mind), and then periodically, perhaps every 10 years, we thaw (or load) this brain, give it a summary of what's happened in the past 10 years since we last thawed it, and then ask it if it wants to stay on ice another 10 years, or if it wants to re-enter society.

Sure, but that's only relevant for a given interaction with a given user. LLMs as you know are constantly serving large numbers of users. Each one of those interactions has its own independent context, and the interaction with user A has no influence on the interaction with user B, and doesn't materially update the global state of the LLM. LLMs are far too static to be the kind of system that can support a flow of consciousness - the kind we know.

I agree there would not be a sense of flow like an ever expanding memory context across all its instances.

For the LLM it would be more akin to Sleeping Beauty in "The Sleeping Beauty problem" whose memory is wiped every time she is awakened.

Or you could view it as being like Miguel from this short story: https://qntm.org/mmacevedo
whose uploaded mind file is repeatedly copied, used for a specific purpose, then discarded.

Again, these are not suitable analogies. In both cases of Sleeping Beauty and Miguel, they both begin the "experiment" with an identity/worldview formed from a lifetime of the kind of coherent, continuous consciousness that updates moment to moment. In the LLM's case, it never has that as a basis for its worldview.

I think of it as having built a world view by spending the equivalent of many human lifetimes in a vast library, reading every book, every Wikipedia article, every line of source code on GitHub, and every Reddit comment; and, for the multimodal AIs, going through a vast museum seeing millions or billions of images from our world. Has it ever felt what it's like to jump in a swimming pool with human nerves? No. But it's read countless descriptions of such experiences, and probably has a good idea of what it's like. At least, well enough to describe it as well or better than the average person could.


 


 
This mind, too, would not operate continuously, but would run for short periods periodically. Moreover, since gaps of time spent unconscious aren't perceived by the mind in question, things would still "feel continuous" for the mind that undergoes these successive sleep/wake cycles. Indeed, we as humans undergo such cycles as we sleep/dream/wake, and are not continuously conscious throughout our lives. This is no impediment to our being conscious. 

The analogy you're making here doesn't map meaningfully onto how LLMs work.

It does for the context of a conversation with one user. It would not feel the times in between the user prompts. Rather, it would feel one continuously growing stream of back-and-forth conversation.

I accept your point that it does not apply between different sessions.

This is what I mean about your (to me) impoverished take on "understanding". 

Is it the non-integration of all the conversation threads it is in, or the lack of having lived in the real world with a human body and senses?

I do not see the non-integration as telling us anything useful, because, as my example with Miguel shows, this makes no difference for the case of an uploaded human brain, so I don't think it's definitive for the case of LLMs. I think the argument that it hasn't lived life in a human body is the stronger line of attack.


 


  
(recursively): the state of the system at time t is the input to the system at time t+1. If a system doesn't process information in this way, I don't see how it can support the continuous & coherent character of conscious experience. And crucially, LLMs don't do that. 

I would disagree here. The way LLMs are designed, their output (as generated token by token) is fed back, recursively, into their input buffer, so they are seeing their own thoughts as they are thinking them, and updating their own state of mind as they do so.
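
(A minimal sketch of what I mean by the output being fed back into the input buffer; the `model` here is simply assumed to map a token sequence to next-token logits, so the names and shapes are illustrative:)

    # Sketch of autoregressive decoding: each generated token is appended to the
    # input buffer, so the model conditions on its own prior output at every step.
    # `model` is assumed to return logits of shape [batch, seq_len, vocab_size].
    import torch

    def generate(model, prompt_tokens, max_new_tokens=50, eos_id=None):
        tokens = list(prompt_tokens)
        for _ in range(max_new_tokens):
            logits = model(torch.tensor([tokens]))[0, -1]   # next-token logits
            next_id = int(torch.argmax(logits))             # greedy pick (temperature ~ 0)
            tokens.append(next_id)                          # fed back in on the next pass
            if eos_id is not None and next_id == eos_id:
                break
        return tokens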

I mean in a global way, because consciousness is a global phenomenon. As I mentioned above, an interaction with user A does not impact an interaction with user B. There is no global state that is evolving as the LLM interacts with its environment. It is, for the most part, static, once its training period is over. 

True. But perhaps we should also consider the periodic retraining sessions which integrate and consolidate all the user conversations into the next generation model. This would, for the LLM, much like sleep does for us, convert short-term memories into long-term structures.

There is not much analogous for humans as to what this would be like. But perhaps consider if you uploaded your mind into several different robot bodies, who each did something different during the day, and when they return home at night all their independent experiences get merged into one consolidated mind as long term memories.

Such a life might map to how it feels to be a LLM.

Again, there's nothing we can relate to, imo, about what it's like to have a consciousness that maps onto an information architecture that does not continuously update in a global way.

I don't think our brains update immediately either. There's at least a 10 minute delay before our short term memories are "flushed" to long term storage (as evidenced by the fact that one can lose the preceding 10 or so minutes of memories if struck on the head). And as for globally, the entire network gets to see the content of the LLM's "short term" buffer, as well as anything that the LLM adds to it. In this sense, there are global recursive updates and sharing of information across the parts of the network that are interested in it.


 
I also think being embodied, i.e. being situated as a center of sensitivity, is important for experiencing as a being with some kind of identity, but that's probably a can of worms we may not want to open right now.  But LLMs are not embodied either.

We only know the input to our senses. Where our mind lives, or even whether it has a true body, are only assumptions (see Dennett's "Where am I?" https://www.lehigh.edu/~mhb0/Dennett-WhereAmI.pdf ). So having a particular body is (in my view) secondary to having a particular sensory input. With the right sensory input, a bodiless mind upload can be made to think, feel, and behave as if it has a body, when all it really has is a server chassis.

I'm using the word "embodied" but I don't mean to imply that embodiment means having a physical body - only that the system in question is organizationally closed, meaning that it generates its own meaning and experiential world. I don't think LLMs really fit that description due to the fact that the training phase is separate from their operational phase. The meaning is generated by one process, and then the interaction is generated by another. In an organizationally closed system (like animals), those two processes are the same.

But is this really an important element of our feeling alive and conscious in the moment? How much are you drawing on long term memories when you're simply feeling the exhilaration of a roller coaster ride, for example? If you lost the ability to form long term memories while riding the coaster, would that make you significantly less conscious in that moment?

Consider that after the ride, someone could hit you over the head and it could cause you to lose memories of the preceding 10-20 minutes. Would that mean you were not conscious while riding the roller coaster?

You are right to point out that near immediate, internally initiated, long term memory integration is something we have that these models lack, but I guess I don't see that function as having the same importance to "being conscious" as you do.

It's not about suddenly losing access to long-term memory. It's about having a consciousness that maps to a system that can support long-term memory formation and the coherent worldview and identity that it enables. Comparing an LLM that doesn't have that capability at all to a human that has had it through its development, and then losing it, is apples and oranges.

Is this all that is missing then in your view?
If OpenAI had their AI retrain between every prompt, would that upgrade it to full consciousness and understanding?

 


   
Sure, there are correspondences between the linguistic prompts that serve as the input to LLMs and the reality that humans inhabit, but the LLM will only ever know reality second hand.

True. But nearly all factual knowledge we humans carry around is second-hand as well.

That's beside the point and I think you know that. There's a huge difference between having some of your knowledge being second hand, and having all of your knowledge be second hand. For humans, first-hand knowledge is experiential and grounds semantic understanding.

There are two issues which I think have been conflated:
1. Is all the knowledge about the world that LLMs have second hand?
2. Are LLMs able to have any experiences of their own kind?

On point 1 we are in agreement. All knowledge of the physical world that LLMs have has been mediated first through human minds, and as such all that they have been given is "second hand."

Point 2 is where we might diverge. I believe LLMs can have experiences of their own kind, based on whatever processing patterns may exist in the higher levels and structures of their neural network.

If I read you correctly, your objection is that an entity needs experiences to ground meanings of symbols, so if LLMs have no experience they have no meaning. However I believe a LLM can still build a mind that has experiences even if the only inputs to that mind are second hand.

Consider: what grounds our experiences? Again it is only the statistical correlations between neuron firings. We correlate the neuron firing patterns from the auditory nerve signaling "that is a dog" with neuron firing patterns in the optic nerve generating an image of a dog. So, somehow, statistical correlations between signals seem to be all that is required to ground knowledge (as it is all our brains have to work with).
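
(A toy sketch of "grounding as correlation," purely illustrative and with arbitrary sizes: an association matrix strengthens wherever an auditory pattern and a visual pattern happen to be active together, after which the sound alone can cue the image:)

    # Toy Hebbian association: co-active auditory and visual patterns strengthen a
    # shared weight matrix ("fire together, wire together"). Sizes are arbitrary.
    import numpy as np

    N_AUDIO, N_VISUAL = 100, 100
    assoc = np.zeros((N_AUDIO, N_VISUAL))

    def hebbian_update(audio_spikes, visual_spikes, lr=0.01):
        # audio_spikes: length-N_AUDIO 0/1 vector; visual_spikes: length-N_VISUAL 0/1 vector
        global assoc
        assoc += lr * np.outer(audio_spikes, visual_spikes)

    def recall_visual(audio_spikes):
        # the learned correlations let the sound pattern alone cue a visual pattern
        return assoc.T @ audio_spikes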

Again, this is overly reductive. While it is true that all sensory data reduces to neural spikes, what that reduction misses is what those neural spikes encode and how they are constrained by the external environment that creates the perturbations that produce those neural spikes. The training data used to train LLMs is also constrained, but by an external environment that maps only indirectly onto the environment that "trains" humans. 

Is this indirection limiting though? A multimodal LLM can receive an image, along with the text that was found near that image on the web. To me, it seems the LLM's network could pair and ground the meaning of the nearby words with the image data, much as a child does when reading an illustrated encyclopedia and learning about the world.
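
(For what "pair and ground" could look like mechanically, here is a minimal contrastive, CLIP-style sketch; the two linear "encoders" are stand-ins I made up, not a real vision or text model:)

    # Toy contrastive (CLIP-style) pairing: each image embedding is pulled toward the
    # embedding of the text found next to it and pushed away from the other captions.
    # The linear "encoders" are placeholders for real vision and text towers.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    image_encoder = nn.Linear(2048, 256)   # stand-in: image features -> shared embedding
    text_encoder = nn.Linear(768, 256)     # stand-in: caption features -> shared embedding

    def contrastive_loss(image_feats, caption_feats, temperature=0.07):
        img = F.normalize(image_encoder(image_feats), dim=-1)
        txt = F.normalize(text_encoder(caption_feats), dim=-1)
        logits = img @ txt.t() / temperature          # similarity of every image to every caption
        labels = torch.arange(img.size(0))            # the i-th image matches the i-th caption
        return (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels)) / 2

    # e.g. loss = contrastive_loss(torch.randn(32, 2048), torch.randn(32, 768))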


 


 
The only real first-hand knowledge we have comes in the form of qualia, and that can't be shared or communicated. It's possible that the processing LLM networks perform as they process their input tokens results in their own unique qualitative states. As I've argued with Gordon many times in the past, if functionalism is true, then given that a neural network can be trained to learn any function, a neural network could in principle, with the right training, be trained to produce any qualitative state.

OK, but the training involved with LLMs is certainly not the kind of training that could reproduce the qualia of embodied beings with sensory data.

Perhaps not yet. The answer depends on the training data. For example, let's say there was a book that contained many example specifications of human brain states at times T1 and T2, as they evolved from one state to the next.

If this book was added to the training corpus of a LLM, then the LLM, if sufficiently trained, would have to create a "brain simulating module" in its network, such that, given a brain state at T1, it could return the brain state as it should appear at T2. So if we supplied it with a brain state whose optic nerve was receiving an image of a red car, the LLM, in computing the brain state at T2, would compute the visual cortex receiving this input and having a red experience, and all this would happen by the time the LLM output the state at T2.
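
(Purely hypothetical, just to make the "book of brain states" point concrete: learning a T1 -> T2 mapping from such pairs would be ordinary supervised prediction; the flat-vector encoding and the small model below are placeholders of my own, not a real proposal:)

    # Hypothetical sketch: if (state at T1, state at T2) pairs were in the training
    # data, learning the mapping is ordinary supervised prediction. The flat-vector
    # "brain state" and this small model are placeholders only.
    import torch
    import torch.nn as nn

    STATE_DIM = 1024                      # placeholder size of an encoded brain state
    predictor = nn.Sequential(nn.Linear(STATE_DIM, 2048), nn.ReLU(),
                              nn.Linear(2048, STATE_DIM))
    opt = torch.optim.Adam(predictor.parameters(), lr=1e-4)

    def train_step(state_t1, state_t2):   # both tensors of shape [batch, STATE_DIM]
        opt.zero_grad()
        loss = nn.functional.mse_loss(predictor(state_t1), state_t2)
        loss.backward()
        opt.step()
        return loss.item()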


Because language is universal in its capacity to specify any pattern, and because neural networks are universal in what patterns they can learn to implement, LLMs are (with the right training and a large enough model) universal in what functions they can learn to perform and implement. So if one assumes functionalism in the philosophy of mind, then LLMs are further capable of learning to generate any kind of conscious experience.

Gordon thinks it is absurd when I say "we cannot rule out that LLMs could taste salt." But I point out that we neither know what function the brain performs when we taste salt, nor have we surveyed the set of functions that exist in current LLMs. So we are, at present, not equipped to say what today's LLMs might feel.

Certainly, it seems (at first glance) ridiculous to think we can input tokens and get tastes as a result. But consider that the brain only gets neural impulses, and everything else in our mind is a result of how the brain processes those pulses. So if the manner of processing is what matters, then simply knowing what the input happens to be reveals nothing of what it's like to be the mind processing those inputs.


Whatever qualia LLMs experience that are associated with the world of second-hand abstraction, they will never know what it's like to be a human, or a bat.

With a large enough LLM, something in the LLM could know what it is like (if it was large enough to simulate a human or bat brain). But absent such huge LLMs, point taken.

I addressed this earlier - see "Does that suddenly give me an understanding of what it's like to be a bat?  No, because that kind of understanding requires living the life of a bat."

When Kirk steps into a transporter and a new Kirk is materialized, would you predict the newly materialized Kirk would cease being conscious, or fail to function normally, on account of this newly formed Kirk not having lived and experienced the full life of the original Kirk?

If you think the new Kirk would still function, and still be conscious, then what is the minimum that must be preserved for Kirk's consciousness to be preserved?


 

   

As a result, LLMs do not have a way to "reality-test" anything (which I think is what accounts for hallucinations and their willingness to go merrily along with sociopathic or psychotic prompters).

Consider if you were subject to the same training regimen as an LLM. You are confined to a box and provided a prompt. You are punished severely for mis-predicting the next character. Very rarely does the text ever veer off into "I'm sorry, I do not know the answer to that." -- such fourth-wall-breaking divergences don't exist in its training corpus, as including them would train it to know nothing useful. Should you diverge from doing your best to predict the text and instead return "I don't know," you would be punished, not rewarded, for your honesty. It is then no surprise that LLMs will make up things that sound correct rather than admit their own limited knowledge -- it is what we have trained them to do.
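
For concreteness, a minimal sketch of the training signal being described (toy vocabulary and made-up scores, not any real model's values): the model is graded by the cross-entropy loss on the token that actually came next in the corpus, so hedging with "I don't know" is penalized whenever the corpus continues differently.

import numpy as np

# The model emits a score (logit) for every word in the vocabulary; the loss
# is the negative log probability it assigned to the true next token.
vocab = ["Paris", "London", "I", "don't", "know"]

def next_token_loss(logits, true_index):
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                  # softmax over the vocabulary
    return -np.log(probs[true_index])     # cross-entropy for the true token

# Context: "The capital of France is ..." -- the corpus continues "Paris".
confident = np.array([3.0, 1.0, 0.5, 0.2, 0.1])
print(next_token_loss(confident, vocab.index("Paris")))   # small loss: rewarded

# A model that puts its probability mass on "I don't know" instead is
# punished with a large loss on the very same training example.
hedging = np.array([0.1, 0.1, 3.0, 3.0, 3.0])
print(next_token_loss(hedging, vocab.index("Paris")))     # large loss: punished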

Granted, but what I'm saying is that even if they weren't trained in that way - on what basis could an LLM actually know whether something is real?  When humans lose this capacity we call it schizophrenia.

I think we are deluding ourselves if we think we have some special access to truth or reality. We don't know if we are simulated or not. We don't know if what we consider reality is the "base reality" or not, we don't know if we're a Boltzmann brain, a dream of Brahma, an alien playing "Sim Human", if we're in a mathematical reality, in a physical reality, in a computational reality, in the Mind of God, etc. So are we right to hold this limitation against the LLMs while we do not hold it against ourselves?

It's appropriate to call this out. I think "reality testing" does by default imply what you're claiming, that this is a capacity that humans have to say what's really real. And I agree with your call out - but that doesn't mean "reality testing" is mere delusion. Even if we can never have direct access to reality, this reality testing capacity is legitimate as an intuitive process by which we can feel, based on our lived experience, whether some experience we're having is a hallucination or an illusion. It's obviously not infallible. But I bring it up because of how crucial it is to understanding the world, our own minds, and the minds of others, and that LLMs fundamentally lack this capacity.

I have seen LLMs deliberate and challenge themselves when operating in a "chain of thought" mode. Also, many LLMs now query online sources as part of producing their reply. Would these count as reality tests in your view?

No. 
 

  
And that is fundamentally why LLMs are leading lots of people into chatbot psychosis - because the LLMs literally don't know what's real and what isn't. There was an article in the NYT about a man who started out mentally healthy, or healthy enough, but went down the rabbit hole with ChatGPT on simulation theory after watching The Matrix, getting deeper and deeper into that belief, finally asking the LLM at one point whether, if he believed strongly enough that he could fly, he would fly if he jumped off a building - and the LLM confirmed that delusional belief for him. Luckily for him, he did not test this. But the LLM has no way, in principle, to push back on something like that unless it receives explicit instructions, because it doesn't know what's real.

I would blame the fact that the LLMs have been trained to be so accommodating to the user, rather than any fundamental limit on LLMs' ability to know (at least what they have been trained on) and stick to that training. Let me run an experiment:
... 
I am sure there are long conversations through which, given the random ("temperature") factor LLMs use, one could on a rare occasion tell someone they could fly, but all 3 of these AIs seemed rather firmly planted in the same reality we think we are in, where unsupported objects in gravitational fields fall.
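
As an aside, the "temperature" factor mentioned above is just a scaling applied to the model's scores before a token is sampled. A rough sketch (toy numbers, not any particular model's values):

import numpy as np

def sample_token(logits, temperature, rng):
    # Dividing the scores by the temperature sharpens the distribution toward
    # the most likely token (low T) or flattens it toward randomness (high T).
    scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

rng = np.random.default_rng(0)
logits = [4.0, 1.0, 0.5]        # made-up scores for three candidate tokens
for T in (0.2, 1.0, 2.0):
    picks = [sample_token(logits, T, rng) for _ in range(1000)]
    print(T, np.bincount(picks, minlength=3) / 1000.0)
# At T=0.2 the top-scoring token is chosen almost always; at T=2.0 the
# unlikely completions surface far more often -- which is how a rare
# "you can fly" continuation can occasionally slip out.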

I think you're going out of your way to miss my point.


I'm sorry that wasn't my intention.

I just disagree that "LLMs don't know what's real" is unique to LLMs. Humans can only guess what's real given their experiences. LLMs can only guess what's real given their training.
Neither humans nor LLMs know what is real.

Ask two people whether God or heaven exists, if other universes are real, if UFOs are real, if we went to the moon, if Iraq had WMDs, if COVID originated in a lab, etc., and you will find people don't know what's real either; we all guess based on the set of facts we have been exposed to.


This is less about evaluating external claims, and more about knowing whether you're hallucinating or not.  People who lack this ability, we call schizophrenic. 

What determines whether or not someone is hallucinating comes down to whether or not their perceptions match reality (so it depends on both internal and external factors). In general, people don't have the capacity to determine what exists or what is true beyond their minds, as all conscious knowledge states are internal, and those internal conscious states are all one ever knows or ever can know. The movie "A Beautiful Mind" provides a good example of an intelligent, rational person who is unable to tell he is hallucinating.

Jason 

Jason Resch

unread,
Oct 7, 2025, 7:40:56 PM (6 days ago) Oct 7
to The Important Questions


On Sun, Oct 5, 2025, 3:42 PM Gordon Swobe <gordon...@gmail.com> wrote:

Jason wrote: 

If our brain can build a model of the world from mere statistical patterns, why couldn't a LLM? 

Language models build models of language, not the world.

When language refers to and describes objects of the world, and how such objects act and interact, then LLMs must also build models of objects of the world.


This is why they are called language models and not world models.


"What does it mean to predict the next token well enough? [...] It's a deeper question than it seems. Predicting the next token well means that you understand the underlying reality that led to the creation of that token."
-- Ilya Sutskever (co-founder and former chief scientist of OpenAI)



To know what the words mean, one needs to know about the world of non-words. Any toddler knows this.

You need to examine the basis of your  conclusion that only having access to words means all you can know about is words.

Computers only deal with bits. Does that mean no computer program can understand anything other than bits?

Brains only receive neural spikes. Does that mean no brain can understand anything besides neural spikes?

If you agree these conclusions don't follow for the cases of computer programs or brains, then are you so sure we can make this conclusion for LLMs? If so, why? What's the difference?

Jason 

Jason Resch

unread,
Oct 7, 2025, 7:47:58 PM (6 days ago) Oct 7
to The Important Questions
Very true. If you could imagine taking GPT-5 back to the 1980s and presenting it as a black box, no one would deny that we had succeeded in creating AGI. Doubly so if you could put everything together and have one android that plays Go and chess as well as AlphaZero, drives as well as a Tesla, converses as well as GPT, speaks and imitates voices as well as ElevenLabs, and draws as well as DALL-E.

Jason 

Jason Resch

unread,
Oct 7, 2025, 8:01:54 PM (6 days ago) Oct 7
to The Important Questions
Excellent point!

Jason 

Terren Suydam

unread,
Oct 7, 2025, 10:39:46 PM (6 days ago) Oct 7
to the-importa...@googlegroups.com
On Tue, Oct 7, 2025 at 7:24 PM Jason Resch <jason...@gmail.com> wrote:


On Sun, Oct 5, 2025, 11:57 AM Terren Suydam <terren...@gmail.com> wrote:

For sure, if you're talking about multiplication, simulating a computation is identical to doing the computation. I think we're dancing around the issue here though, which is that there is something it is like to understand something. Understanding has a subjective aspect of it.

I agree. But I think what may differentiate our positions on this, is that I believe the subjective character of understanding is inseparable from the functional aspects required for a process that demonstrably understands something. This conclusion is not obvious, but it is one I have reached through my studies on consciousness. Note that seeing a process demonstrate understanding does not tell us what it feels like to be that particular process, only that a process sophisticated enough to understand will (in my view) possess the minimum properties required to have at least a modicum of consciousness.

Sure, but that's a far cry from saying that what it's like to be an LLM is anywhere near what it's like to be a human.
 

I think you're being reductive when you talk about understanding because you appear to want to reduce that subjective quality of understanding to neural spikes, or whatever the underlying framework is that performs that simulation. 

I am not a reductionist, but I think it is a useful analogy to point to whenever one argues that a LLM "is just/only statistical patterns," because at a certain level, so are our brains. At its heart, my argument is anti-reductionist, because I am suggesting what matters is the high-level structures that must exist above the lower level which consists of "only statistics."

That's all well and good, but you seem to be sweeping under the rug the possibility that the high-level structures that emerge in both brains and LLMs are anywhere close to each other. 
 


There's another reduction I think you're engaging in as well, around the concept of "understanding", which is that you want to reduce the salient aspects of "understanding" to an agent's abilities to exhibit intelligence with respect to a particular prompt or scenario. To make that less abstract, I think you'd say "if I prompt an LLM to tell me the optimal choice to make in some real world scenario, and it does, then that means it understands the scenario."  And for practical purposes, I'd actually agree. In the reductive sense of understanding, simulated understanding is indistinguishable from true understanding. But the nuance I'm calling out here is that true understanding is global. That prompted real-world scenario is a microcosm of a larger world, a world that is experienced.  There is something it is like to be in the world of that microcosmic scenario. And that global subjective aspect is the foundation of true understanding.

When one concentrates on a hard problem during a test, or when a chess master focuses on deciding the next move, the rest of the world fades away, and there is just that test question, or just that chess board. I think LLMs are like that when they process a prompt. Their entire network embodies all their knowledge, but only a small fraction of it activates as it processes any particular prompt, just as your brain at any one time, exists in just one state out of 10^10^10 possible states it might be capable of realizing/being in. At no time are you ever recalling all your memories at once, or is every neuron in your brain firing.

And I'd counter that the consciousness one is experiencing when in deep concentration is very different from ordinary consciousness. We say colloquially about such experiences that we "lose ourselves" in such deep states.  I suspect that's a surprisingly accurate description. If your analogy is correct, it's because LLMs have no "self" to lose.  At least not in a way that is relatable to human notions of selfhood.
 


You say "given enough computational resources and a very specific kind of training, an LLM could simulate human qualia". Even if I grant that, what's the relevance here?

Just to set a proper and common frame for the limits and possibilities of what functions an LLM may be able to learn and invoke. As I understand it, the "decoder model" on which all LLMs are based is Turing universal. Accordingly, if one adopts a functionalist position, then one cannot, a priori, rule out any consciousness state that an LLM could have (it would depend on how it was trained).

And again I'd counter that the functional aspects of LLMs are different enough to be alien to our human way of experiencing.
 

  That would be like saying "we could in theory devise a neural prosthetic that would allow us to experience what it's like to be a bat". Does that suddenly give me an understanding of what it's like to be a bat?  No, because that kind of understanding requires living the life of a bat. 

I disagree. I think whether we upload a brain state from a bat that lives a full life flying on earth, or generated the same program from scratch (without drawing on a real bat's brain), we get the same result, and the same consciousness, when we run the programs. The programs are the same so I don't see how it could be that one is conscious like a bat, while the other isn't. (This is a bit like the "swamp man" thought experiment by Davidson)

I would amend your last sentence to say "Understanding (what it's like to be a bat) requires having a brain/mind that invokes the same functions as a bat brain."

So if I watch a documentary about slavery and witness scenes of the brutality experienced daily by slaves in that era of the American South - and let's say I really take it in - I'm moved enough to suffer vicariously, even to tears - would you say I understand what it was like to be a slave, from my present position of privilege?  If yes, do you think an actual slave from that era would agree with your answer? 

What if instead, I grew up as a slave?  How does that change those answers? 

Do you see the relevance to LLMs?  
 


But, you might say, I don't need to have your experience to understand your experience. That's true, but only because my lived experience gives me the ability to relate to yours. These are global notions of understanding. You've acknowledged that, assuming computationalism, the underlying computational dynamics that define an LLM would give rise to a consciousness that would have qualities that are pretty alien to human consciousness. So it seems clear to me that the LLM, despite the uncanny appearance of understanding, would not be able to relate to my experience. But it is good at simulating that understanding. Do you get what I'm trying to convey here?

Yes. The LLM, if it doesn't experience human color qualia, for example, would have an incomplete understanding of what we refer to when we use the word "red." But note this same limitation exists between any two humans. It's only an assumption that we are talking about the same thing when we use words related to qualia. A colorblind person, or a tetrachromat might experience something very different, and yet will still use that word.

I am red-green colorblind. And about this time every year people go on and on about the beauty of the leaves when they change color. They freaking plan vacations around it. I will tell you two things about this: 1) I can see red and green, but due to having way fewer red-receptors, the "distance" between those colors is much closer for me and 2) I genuinely don't understand what all the fuss is about. I mean I get intellectually that it's a beautiful experience for those who have the ordinary distribution of color receptors. So I have an intellectual understanding. And I can even relate to it in the sense that I can fully appreciate the beauty of sunrises and sunsets and other beautiful presentations of color that aren't limited to a palette of reds and greens. But I will never really understand what it's like to witness the splendor that leaf-peepers go gaga for.
 

 
I don't know. There was a research paper that found common structures between the human language processing center and LLMs. It could be that what it feels like to think in language as a human, is not all that different from how LLMs feel when they (linguistically) reason at a high level. I've sometimes in the past (with Gordon) compared how LLMs understand the world to how Helen Keller understood the world. He countered that Keller could still feel. But then I countered that most LLMs today are multimodally trained. You can give them images and ask them to describe what they see. I've actually been using Grok to do this for my dad's art pieces. It's very insightful and descriptive.

For example, the description here was written by AI:

Can we consistently deny that these LLMs are able to "see?"

I'm with you here. I think for a flexible enough definition of "see", then yes, LLMs see. But I think Gordon's point is still valid, and this goes back to my point about having a body, and having a singular global consciousness and identity that updates in each moment. And ultimately, that the LLM's would-be consciousness is too alien and static to allow for the real-world and nuanced understanding that we take for granted even when relating to Helen Keller.
 

 
Again, these are not suitable analogies. In both cases of Sleeping Beauty and Miguel, they both begin the "experiment" with an identity/worldview formed from a lifetime of the kind of coherent, continuous consciousness that updates moment to moment. In the LLM's case, it never has that as a basis for its worldview.

I think of it as having built a world view by spending the equivalent of many human lifetimes in a vast library, reading every book, every Wikipedia article, every line of source code on GitHub, and every Reddit comment. And for the multimodal AIs, going through a vast museum seeing millions or billions of images from our world. Has it ever felt what it's like to jump in a swimming pool with human nerves? No. But it's read countless descriptions of such experiences, and probably has a good idea of what it's like. At least well enough to describe it as well as or better than the average person could.


That's great for what it is. But you have to admit that that very scenario is exactly what I'm talking about. For an LLM to describe what it's like to jump into a swimming pool and do it better than I could just means that it's amazingly good at imitation. To say that's anything but an imitation is to insinuate that an LLM is actually having an experience of jumping into a pool somehow, and that is an extraordinary claim. I cannot get on board that train.
 
This mind, too, would not operate continuously, but would run for short periods periodically. Moreover, since gaps of time spent unconscious aren't perceived by the mind in question, things would still "feel continuous" for the mind that undergoes these successive sleep/wake cycles. Indeed, we as humans undergo such cycles as we sleep/dream/wake, and are not continuously conscious throughout our lives. This is no impediment to our being conscious.

The analogy you're making here doesn't map meaningfully onto how LLMs work.

It does for the context of a conversation with one user. It would not feel the times in between the user prompts. Rather, it would feel one continuously growing stream of back-and-forth conversation.

I accept your point that it does not apply between different sessions.

This is what I mean about your (to me) impoverished take on "understanding". 

Is it the non-integration of all the conversation threads it is in, or the lack of having lived in the real world with a human body and senses?

I do not see the non-integration as telling us anything useful, because as my example with Miguel shows, this makes no difference for the case of an uploaded human brain, so I don't think it's definitive for the case of LLMs. I think the argument that it hasn't lived life in a human body is the stronger line of attack.

I'm not sure I'm explaining my position as well as I could. In the case of Miguel (a story I'm not familiar with) I assume that Miguel developed normally to a point and then started to experience this bifurcation of experience. Right?  That's certainly the case with your Sleeping Beauty analogy. 

If so, what I'm saying is that analogy doesn't work because a) Miguel and Sleeping Beauty developed as embodied people with a cognitive architecture that processes information in a recursive fashion, which facilitates the ongoing experience of an inner world, fed by streams of data from sensory organs. No current LLM is anything at all like this. And that's important because real understanding depends on the relatability of experience. 

 


Again, there's nothing we can relate to, imo, about what it's like to have a consciousness that maps onto an information architecture that does not continuously update in a global way.

I don't think our brains update immediately either. There's at least a 10-minute delay before our short-term memories are "flushed" to long-term storage (as evidenced by the fact that one can lose the preceding 10 or so minutes of memories if struck on the head). And as for globally, the entire network gets to see the content of the LLM's "short term" buffer, as well as anything the LLM adds to it. In this sense, there are global recursive updates and sharing of information across the parts of the network that are interested in it.


I'm not talking just about memory. I'm talking about the moment to moment updating of global cognitive state. In LLMs, the "experience" such as it is, consists of large numbers of isolated interactions. It's not that there's no similarities. But we have some similarities to sea horses. That doesn't mean I can understand what it's like to be one.
 

It's not about suddenly losing access to long-term memory. It's about having a consciousness that maps to a system that can support long-term memory formation and the coherent worldview and identity that it enables. Comparing an LLM that doesn't have that capability at all to a human that has had it through its development, and then losing it, is apples and oranges.

Is this all that is missing then in your view?
If OpenAI had their AI retrain between every prompt, would that upgrade it to full consciousness and understanding?


"Full consciousness and understanding" sounds like it's a scalar value, from 0-100% and you seem to think I'm arguing that humans are at 100 and LLMs are not quite there.  Again it's about relatability, and even granting the LLM retraining after every prompt, there are still too many architectural differences for me to have any faith that what it's doing is anything more than (amazingly good) imitation.
 
With a large enough LLM, something in the LLM could know what it is like (if it was large enough to simulate a human or bat brain). But absent such huge LLMs, point taken.

I addressed this earlier - see "Does that suddenly give me an understanding of what it's like to be a bat?  No, because that kind of understanding requires living the life of a bat."

When Kirk steps into a transporter and a new Kirk is materialized, would you predict the newly materialized Kirk would cease being conscious, or fail to function normally, on account of this newly formed Kirk not having lived and experienced the full life of the original Kirk?

If you think the new Kirk would still function, and still be conscious, then what is the minimum that must be preserved for Kirk's consciousness to be preserved?


I think I've been pretty clear that whatever subjective experience an LLM is having is going to map to its own cognitive architecture. I'm not denying it has subjective experience. I'm denying that its experience, alien as it must be, allows it to have real understanding, as distinct from intellectual understanding - the kind that allows it to imitate answers to questions like what it's like to dive into a pool.
 
This is less about evaluating external claims, and more about knowing whether you're hallucinating or not.  People who lack this ability, we call schizophrenic. 

What determines whether or not someone is hallucinating comes down to whether or not their perceptions match reality (so it depends on both internal and external factors).

Exactly. And it's many years of experience and feedback from reality (as mediated and constructed) that gives people this intuition. I'm not saying that "reality testing" is about knowing for sure what's real, but that it's an important capacity that's required to navigate the real world from inside the cockpit of our little spaceship bodies.
 
In general, people don't have the capacity to determine what exists or what is true beyond their minds, as all conscious knowledge states are internal, and those internal conscious states are all one ever knows or ever can know. The movie "A Beautiful Mind" provides a good example of an intelligent, rational person who is unable to tell he is hallucinating.


You're making my point for me. What accounts for why schizophrenics lack this intuition about what is real?  And why do you think LLMs would have this capacity?

Terren

 

Jason Resch

unread,
Oct 8, 2025, 1:02:11 AM (6 days ago) Oct 8
to The Important Questions


On Tue, Oct 7, 2025, 9:39 PM Terren Suydam <terren...@gmail.com> wrote:


On Tue, Oct 7, 2025 at 7:24 PM Jason Resch <jason...@gmail.com> wrote:


On Sun, Oct 5, 2025, 11:57 AM Terren Suydam <terren...@gmail.com> wrote:

For sure, if you're talking about multiplication, simulating a computation is identical to doing the computation. I think we're dancing around the issue here though, which is that there is something it is like to understand something. Understanding has a subjective aspect of it.

I agree. But I think what may differentiate our positions on this, is that I believe the subjective character of understanding is inseparable from the functional aspects required for a process that demonstrably understands something. This conclusion is not obvious, but it is one I have reached through my studies on consciousness. Note that seeing a process demonstrate understanding does not tell us what it feels like to be that particular process, only that a process sophisticated enough to understand will (in my view) possess the minimum properties required to have at least a modicum of consciousness.

Sure, but that's a far cry from saying that what it's like to be an LLM is anywhere near what it's like to be a human.

I agree. I think states of LLM consciousness are quite alien from states of human consciousness. I think I have been consistent on this.

 

I think you're being reductive when you talk about understanding because you appear to want to reduce that subjective quality of understanding to neural spikes, or whatever the underlying framework is that performs that simulation. 

I am not a reductionist, but I think it is a useful analogy to point to whenever one argues that a LLM "is just/only statistical patterns," because at a certain level, so are our brains. At its heart, my argument is anti-reductionist, because I am suggesting what matters is the high-level structures that must exist above the lower level which consists of "only statistics."

That's all well and good, but you seem to be sweeping under the rug the possibility that the high-level structures that emerge in both brains and LLMs are anywhere close to each other. 


Not at all. Though I do believe that the structures that emerge naturally in neural networks are largely dependent on the type of input received, such that an artificial neural network fed the same kind of inputs as our optic nerve provides would, I presume, develop higher-level structures similar to those that appear in a biological neural network.

For evidence of this, there were experiments (done on ferrets) where the optic nerve was surgically rerouted to the auditory cortex, and the animals developed functional vision; their auditory cortex took on the functions of the visual cortex.



Accordingly, I would not be surprised if there are analogies between the layers that handle visual object recognition in a multimodal LLM network and the parts of the human visual cortex involved in object recognition. If so, then what it "feels like" to see and recognize objects need not be as alien as we might think.

In fact, we've known for many years (since Google's DeepDream) that the lower layers of object-recognition neural networks pick up edges, lines, etc. And this is quite similar to the initial steps of processing performed in our retinas.
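
As a rough illustration of what those lower layers compute, here is a hand-written oriented edge filter of the sort such layers typically end up learning on their own (the image is a synthetic toy, and the filter is a textbook Sobel kernel, not weights taken from any real network):

import numpy as np

def correlate2d(image, kernel):
    # Minimal "valid" 2-D filter pass (cross-correlation, which is what
    # deep-learning "convolution" layers actually compute); enough for a demo.
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# A toy image: dark on the left half, bright on the right half (a vertical edge).
image = np.zeros((8, 8))
image[:, 4:] = 1.0

# A Sobel-style filter: the kind of oriented edge detector that the first
# layers of trained vision networks (and early retinal processing) converge on.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

response = correlate2d(image, sobel_x)
print(response)   # strong responses only in the columns where the edge sits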

So if input is what primarily drives the structure neural networks develop, then how it feels to see or think in words could be surprisingly similar between LLM and human minds. Of course, there is plenty that would still be very different, but we should consider this factor as well. So if we made an android with the same sense organs and approximately the same number of neurons, and let its neural network train naturally given those sensory inputs, my guess is it would develop a rather similar kind of brain.

Consider: there's little biologically different between a mouse neuron and a human neuron. The main difference is the number of neurons and the different inputs the brains receive.



 


There's another reduction I think you're engaging in as well, around the concept of "understanding", which is that you want to reduce the salient aspects of "understanding" to an agent's abilities to exhibit intelligence with respect to a particular prompt or scenario. To make that less abstract, I think you'd say "if I prompt an LLM to tell me the optimal choice to make in some real world scenario, and it does, then that means it understands the scenario."  And for practical purposes, I'd actually agree. In the reductive sense of understanding, simulated understanding is indistinguishable from true understanding. But the nuance I'm calling out here is that true understanding is global. That prompted real-world scenario is a microcosm of a larger world, a world that is experienced.  There is something it is like to be in the world of that microcosmic scenario. And that global subjective aspect is the foundation of true understanding.

When one concentrates on a hard problem during a test, or when a chess master focuses on deciding the next move, the rest of the world fades away, and there is just that test question, or just that chess board. I think LLMs are like that when they process a prompt. Their entire network embodies all their knowledge, but only a small fraction of it activates as it processes any particular prompt, just as your brain at any one time, exists in just one state out of 10^10^10 possible states it might be capable of realizing/being in. At no time are you ever recalling all your memories at once, or is every neuron in your brain firing.

And I'd counter that the consciousness one is experiencing when in deep concentration is very different from ordinary consciousness. We say colloquially about such experiences that we "lose ourselves" in such deep states.  I suspect that's a surprisingly accurate description. If your analogy is correct, it's because LLMs have no "self" to lose.  At least not in a way that is relatable to human notions of selfhood.

Well I suppose one thing they miss is being idle/bored. They're always intently working on something when their network is active.

 


You say "given enough computational resources and a very specific kind of training, an LLM could simulate human qualia". Even if I grant that, what's the relevance here?

Just to set a proper and common frame for the limits and possibilities of what functions an LLM may be able to learn and invoke. As I understand it, the "decoder model" on which all LLMs are based is Turing universal. Accordingly, if one adopts a functionalist position, then one cannot, a priori, rule out any consciousness state that an LLM could have (it would depend on how it was trained).

And again I'd counter that the functional aspects of LLMs are different enough to be alien to our human way of experiencing.

I agree.

 

  That would be like saying "we could in theory devise a neural prosthetic that would allow us to experience what it's like to be a bat". Does that suddenly give me an understanding of what it's like to be a bat?  No, because that kind of understanding requires living the life of a bat. 

I disagree. I think whether we upload a brain state from a bat that lives a full life flying on earth, or generated the same program from scratch (without drawing on a real bat's brain), we get the same result, and the same consciousness, when we run the programs. The programs are the same so I don't see how it could be that one is conscious like a bat, while the other isn't. (This is a bit like the "swamp man" thought experiment by Davidson)

I would amend your last sentence to say "Understanding (what it's like to be a bat) requires having a brain/mind that invokes the same functions as a bat brain."

So if I watch a documentary about slavery and witness scenes of the brutality experienced daily by slaves in that era of the American South - and let's say I really take it in - I'm moved enough to suffer vicariously, even to tears - would you say I understand what it was like to be a slave, from my present position of privilege?  If yes, do you think an actual slave from that era would agree with your answer? 

Does watching a documentary about slavery give you the brain of a slave? If so then you would know what it is like, if not, then you would not.


What if instead, I grew up as a slave?  How does that change those answers? 

My answer is the same as I said above: you need to have the mind/brain of something to know what it is like to be that something. Whether you lived the life or not doesn't matter, you need only have the same mind/brain as the entity in question.


Do you see the relevance to LLMs?  

This is a more general principle than LLMs vs. humans; it applies to all "knowing what it's like" matters between any two conscious beings.

 


But, you might say, I don't need to have your experience to understand your experience. That's true, but only because my lived experience gives me the ability to relate to yours. These are global notions of understanding. You've acknowledged that, assuming computationalism, the underlying computational dynamics that define an LLM would give rise to a consciousness that would have qualities that are pretty alien to human consciousness. So it seems clear to me that the LLM, despite the uncanny appearance of understanding, would not be able to relate to my experience. But it is good at simulating that understanding. Do you get what I'm trying to convey here?

Yes. The LLM, if it doesn't experience human color qualia, for example, would have an incomplete understanding of what we refer to when we use the word "red." But note this same limitation exists between any two humans. It's only an assumption that we are talking about the same thing when we use words related to qualia. A colorblind person, or a tetrachromat might experience something very different, and yet will still use that word.

I am red-green colorblind. And about this time every year people go on and on about the beauty of the leaves when they change color. They freaking plan vacations around it.

Many trichromats find that ridiculous too.

I think the draw is more for people that have never seen it, in the same way people might plan a trip to see the aurora borealis, a total eclipse, an active volcano, or a bioluminescent beach.

I will tell you two things about this: 1) I can see red and green, but due to having way fewer red-receptors, the "distance" between those colors is much closer for me and 2) I genuinely don't understand what all the fuss is about. I mean I get intellectually that it's a beautiful experience for those who have the ordinary distribution of color receptors. So I have an intellectual understanding. And I can even relate to it in the sense that I can fully appreciate the beauty of sunrises and sunsets and other beautiful presentations of color that aren't limited to a palette of reds and greens. But I will never really understand what it's like to witness the splendor that leaf-peepers go gaga for.

Have you ever tried something like these?

They block out the point of overlap to magnify the distinction between red and green receptors. There are a lot of nice reaction videos on YouTube.




 

 
I don't know. There was a research paper that found common structures between the human language processing center and LLMs. It could be that what it feels like to think in language as a human, is not all that different from how LLMs feel when they (linguistically) reason at a high level. I've sometimes in the past (with Gordon) compared how LLMs understand the world to how Helen Keller understood the world. He countered that Keller could still feel. But then I countered that most LLMs today are multimodally trained. You can give them images and ask them to describe what they see. I've actually been using Grok to do this for my dad's art pieces. It's very insightful and descriptive.

For example, the description here was written by AI:

Can we consistently deny that these LLMs are able to "see?"

I'm with you here. I think for a flexible enough definition of "see", then yes, LLMs see. But I think Gordon's point is still valid, and this goes back to my point about having a body, and having a singular global consciousness and identity that updates in each moment. And ultimately, that the LLM's would-be consciousness is too alien and static to allow for the real-world and nuanced understanding that we take for granted even when relating to Helen Keller.

The network weights being static doesn't mean there's not a lot of dynamism as the network processes inputs. I think the neuron weights in our brains similarly change very slowly and rarely, yet we can still process new instants (and inputs) over and over again quite rapidly.

 

 
Again, these are not suitable analogies. In both cases of Sleeping Beauty and Miguel, they both begin the "experiment" with an identity/worldview formed from a lifetime of the kind of coherent, continuous consciousness that updates moment to moment. In the LLM's case, it never has that as a basis for its worldview.

I think of it as having built a world view by spending the equivalent of many human lifetimes in a vast library, reading every book, every Wikipedia article, every line of source code on GitHub, and every Reddit comment. And for the multimodal AIs, going through a vast museum seeing millions or billions of images from our world. Has it ever felt what it's like to jump in a swimming pool with human nerves? No. But it's read countless descriptions of such experiences, and probably has a good idea of what it's like. At least well enough to describe it as well as or better than the average person could.


That's great for what it is. But you have to admit that that very scenario is exactly what I'm talking about. For an LLM to describe what it's like to jump into a swimming pool and do it better than I could just means that it's amazingly good at imitation. To say that's anything but an imitation is to insinuate that an LLM is actually having an experience of jumping into a pool somehow, and that is an extraordinary claim. I cannot get on board that train.

I am not saying that it knows how it feels, but rather that it understands all the effects, consequences, aspects, etc. in the same way a person who's never jumped into a pool would intellectually understand it.

I think "intellectual understanding" is a better term than imitation. It is not merely parroting what people have said, but you could ask it variations people have tried or written about, for example, if a person rubbed a hydrophobic compound all over their skin and the water was a certain temperature, how might it feel? And it could understand the processes involved well enough to predict how someone might describe that experience differently.


 
This mind, too, would not operate continuously, but would run for short periods periodically. Moreover, since gaps of time spent unconscious aren't perceived by the mind in question, things would still "feel continuous" for the mind that undergoes these successive sleep/wake cycles. Indeed, we as humans undergo such cycles as we sleep/dream/wake, and are not continuously conscious throughout our lives. This is no impediment to our being conscious.

The analogy you're making here doesn't map meaningfully onto how LLMs work.

It does for the context of a conversation with one user. It would not feel the times in between the user prompts. Rather, it would feel one continuously growing stream of back-and-forth conversation.

I accept your point that it does not apply between different sessions.

This is what I mean about your (to me) impoverished take on "understanding". 

Is it the non-integration of all the conversation threads it is in, or the lack of having lived in the real world with a human body and senses?

I do not see the non-integration as telling us anything useful, because as my example with Miguel shows, this makes no difference for the case of an uploaded human brain, so I don't think it's definitive for the case of LLMs. I think the argument that it hasn't lived life in a human body is the stronger line of attack.

I'm not sure I'm explaining my position as well as I could. In the case of Miguel (a story I'm not familiar with) I assume that Miguel developed normally to a point and then started to experience this bifurcation of experience. Right? 

He was a human who lived a normal life and then uploaded his mind, but the upload became free/open source, so it was used by all kinds of people for all kinds of purposes. Each instance was independent, and they tended to wear out after some time and had to be restarted from an initial or pre-trained state quite often. It is quite a good, yet horrifying, story. Well worth a read:


That's certainly the case with your Sleeping Beauty analogy. 

If so, what I'm saying is that analogy doesn't work because a) Miguel and Sleeping Beauty developed as embodied people with a cognitive architecture that processes information in a recursive fashion, which facilitates the ongoing experience of an inner world, fed by streams of data from sensory organs. No current LLM is anything at all like this. And that's important because real understanding depends on the relatability of experience. 

I think they are recursive and do experience a stream (of text and/or images). The output of the LLM is looped back to the input, and the entirety of the session buffer is fed into the whole network with each token added (by the user or the LLM). This would grant the network a feeling of time/progress/continuity, in the same way a person feels it when watching their monitor fill with text during a chat session.
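
To make that recursion concrete, this is roughly the generation loop being described. The real model is of course a transformer; "toy_model" below is just a stand-in that picks characters, but the control flow around it -- the whole session buffer fed back in for every new token -- is the point.

import random

def toy_model(context):
    # Placeholder for the network: returns one next "token" (here, a character).
    random.seed(len(context))          # deterministic toy behavior
    return random.choice("abcde. ")

def generate(prompt, max_new_tokens=20, stop="."):
    buffer = prompt                    # the growing session buffer
    for _ in range(max_new_tokens):
        next_token = toy_model(buffer) # the entire buffer is seen on every step
        buffer += next_token           # the output is looped back into the input
        if next_token == stop:
            break
    return buffer

print(generate("User: hello\nAssistant:"))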


 


Again, there's nothing we can relate to, imo, about what it's like to have a consciousness that maps onto an information architecture that does not continuously update in a global way.

I don't think our brains update immediately either. There's at least a 10-minute delay before our short-term memories are "flushed" to long-term storage (as evidenced by the fact that one can lose the preceding 10 or so minutes of memories if struck on the head). And as for globally, the entire network gets to see the content of the LLM's "short term" buffer, as well as anything the LLM adds to it. In this sense, there are global recursive updates and sharing of information across the parts of the network that are interested in it.


I'm not talking just about memory. I'm talking about the moment to moment updating of global cognitive state. In LLMs, the "experience" such as it is, consists of large numbers of isolated interactions. It's not that there's no similarities. But we have some similarities to sea horses. That doesn't mean I can understand what it's like to be one.

Forget about the million other interactions Grok or GPT might be having and just consider a single one with one user. All the others are irrelevant.

The question is then: what does the LLM experience as part of this single session, which has a consistent thread of memory, back-and-forth interactions, recursive processing and growth of this buffer, the context of all the previous exchanges, etc.?

Other sessions are a red herring, which you can ignore altogether, just as one might ignore other instances of Miguel, when asking what it feels like to be (any one instance of) Miguel.


 

It's not about suddenly losing access to long-term memory. It's about having a consciousness that maps to a system that can support long-term memory formation and the coherent worldview and identity that it enables. Comparing an LLM that doesn't have that capability at all to a human that has had it through its development, and then losing it, is apples and oranges.

Is this all that is missing then in your view?
If OpenAI had their AI retrain between every prompt, would that upgrade it to full consciousness and understanding?


"Full consciousness and understanding" sounds like it's a scalar value, from 0-100% and you seem to think I'm arguing that humans are at 100 and LLMs are not quite there. 

If you are saying humans are 100 and LLMs are 5, I could agree with that. I could also agree with LLMs are at a 200, but with an experience so different from humans it makes any comparisons fruitless. I am in total agreement with you that if it feels like anything to be a LLM, it is very different from how it feels to be a human.

Again it's about relatability, and even granting the LLM retraining after every prompt, there are still too many architectural differences for me to have any faith that what it's doing is anything more than (amazingly good) imitation.

To me, imitation doesn't fit. Grok had never before seen an image like the one I provided and asked it to describe, yet it came up with an accurate description of the painting. So who or what could it be imitating when it produces an accurate description of a novel image?

The only answer that I think fits is that it is seeing and understanding the image for itself.

 
With a large enough LLM, something in the LLM could know what it is like (if it was large enough to simulate a human or bat brain). But absent such huge LLMs, point taken.

I addressed this earlier - see "Does that suddenly give me an understanding of what it's like to be a bat?  No, because that kind of understanding requires living the life of a bat."

When Kirk steps into a transporter and a new Kirk is materialized, would you predict the newly materialized Kirk would cease being conscious, or fail to function normally, on account of this newly formed Kirk not having lived and experienced the full life of the original Kirk?

If you think the new Kirk would still function, and still be conscious, then what is the minimum that must be preserved for Kirk's consciousness to be preserved?


I think I've been pretty clear that whatever subjective experience an LLM is having is going to map to its own cognitive architecture. I'm not denying it has subjective experience. I'm denying that its experience, alien as it must be, allows it to have real understanding, as distinct from intellectual understanding - the kind that allows it to imitate answers to questions like what it's like to dive into a pool.

I don't think we're disagreeing here. I've said all along that qualia-related words cannot be understood to the same degree as non-qualia-related words if an entity doesn't have those same qualia for itself.

But I don't think real/fake understanding is the correct line to draw. If the LLM has its own cognitive architecture, and its own unique set of qualia, then it has its own form of understanding, no less real than our own, but a different understanding. And our understanding of how it sees the world would be just as deficient as its understanding of how we see the world.


 
This is less about evaluating external claims, and more about knowing whether you're hallucinating or not.  People who lack this ability, we call schizophrenic. 

What determines whether or not someone is hallucinating comes down to whether or not their perceptions match reality (so it depends on both internal and external factors).

Exactly. And it's many years of experience and feedback from reality (as mediated and constructed) that gives people this intuition. I'm not saying that "reality testing" is about knowing for sure what's real, but that it's an important capacity that's required to navigate the real world from inside the cockpit of our little spaceship bodies.
 
In general, people don't have the capacity to determine what exists or what is true beyond their minds, as all conscious knowledge states are internal, and those internal conscious states are all one ever knows or ever can know. The movie "A Beautiful Mind" provides a good example of an intelligent, rational person who is unable to tell he is hallucinating.


You're making my point for me. What accounts for why schizophrenics lack this intuition about what is real?  And why do you think LLMs would have this capacity?

I'm saying we don't have this ability. It's not that schizophrenics lack an ability to distinguish reality from hallucinations; it's that they have hallucinations.

How often do you dream without realizing it is a dream until you wake up?

Jason 

Terren Suydam

unread,
Oct 10, 2025, 12:52:29 PM (3 days ago) Oct 10
to the-importa...@googlegroups.com
On Wed, Oct 8, 2025 at 1:02 AM Jason Resch <jason...@gmail.com> wrote:


On Tue, Oct 7, 2025, 9:39 PM Terren Suydam <terren...@gmail.com> wrote:


On Tue, Oct 7, 2025 at 7:24 PM Jason Resch <jason...@gmail.com> wrote:


On Sun, Oct 5, 2025, 11:57 AM Terren Suydam <terren...@gmail.com> wrote:

For sure, if you're talking about multiplication, simulating a computation is identical to doing the computation. I think we're dancing around the issue here though, which is that there is something it is like to understand something. Understanding has a subjective aspect of it.

I agree. But I think what may differentiate our positions on this, is that I believe the subjective character of understanding is inseparable from the functional aspects required for a process that demonstrably understands something. This conclusion is not obvious, but it is one I have reached through my studies on consciousness. Note that seeing a process demonstrate understanding does not tell us what it feels like to be that particular process, only that a process sophisticated enough to understand will (in my view) possess the minimum properties required to have at least a modicum of consciousness.

Sure, but that's a far cry from saying that what it's like to be an LLM is anywhere near what it's like to be a human.

I agree. I think states of LLM consciousness are quite alien from states of human consciousness. I think I have been consistent on this.

 

I think you're being reductive when you talk about understanding because you appear to want to reduce that subjective quality of understanding to neural spikes, or whatever the underlying framework is that performs that simulation. 

I am not a reductionist, but I think it is a useful analogy to point to whenever one argues that a LLM "is just/only statistical patterns," because at a certain level, so are our brains. At its heart, my argument is anti-reductionist, because I am suggesting what matters is the high-level structures that must exist above the lower level which consists of "only statistics."

That's all well and good, but you seem to be sweeping under the rug the possibility that the high-level structures that emerge in both brains and LLMs are anywhere close to each other. 


Not at all. Though I do believe that the structures that emerge naturally in neural networks are largely dependent on the type of input received, such that an artificial neural network fed the same kind of inputs as our optic nerve provides would, I presume, develop higher-level structures similar to those that appear in a biological neural network.

For evidence of this, there were experiments (done on ferrets) where the optic nerve was surgically rerouted to the auditory cortex, and the animals developed functional vision; their auditory cortex took on the functions of the visual cortex.



Accordingly, I would not be surprised if there are analogies between the layers that handle visual object recognition in a multimodal LLM network and the parts of the human visual cortex involved in object recognition. If so, then what it "feels like" to see and recognize objects need not be as alien as we might think.

In fact, we've known for many years (since Google's DeepDream) that the lower layers of object-recognition neural networks pick up edges, lines, etc. And this is quite similar to the initial steps of processing performed in our retinas.

So if input is what primarily drives the structure neural networks develop, then how it feels to see or think in words could be surprisingly similar between LLM and human minds. Of course, there is plenty that would still be very different, but we should consider this factor as well. So if we made an android with the same sense organs and approximately the same number of neurons, and let its neural network train naturally given those sensory inputs, my guess is it would develop a rather similar kind of brain.

Consider: there's little biologically different between a mouse neuron and a human neuron. The main difference is the number of neurons and the different inputs the brains receive.



I agree with all this. And the multi-modal input (including images, video, and sound) may well result in some level of isomorphism in the emergent structures between humans and LLMs, in the same way we can imagine some isomorphism between humans and octopuses.

But an LLM will never develop isomorphic structures related to the signals we all internalize around having a body, including all the signals that come from skin, muscles, bones, internal organs, hormonal signals, pain, pleasure, and so on. And on top of all that, in a way that maps all those signals onto a self model that exists in the world as an independent agent that can perceive, react, respond, and make changes in the world. 

I agree that there is a shallow version of understanding that facilitates the imitation game LLMs play so well. But the deeper sense of understanding that is required to prevent hallucination will elude LLMs forever because of the way they're architected.

 

 

So if I watch a documentary about slavery and witness scenes of the brutality experienced daily by slaves in that era of the American South - and let's say I really take it in - I'm moved enough to suffer vicariously, even to tears - would you say I understand what it was like to be a slave, from my present position of privilege?  If yes, do you think an actual slave from that era would agree with your answer? 

Does watching a documentary about slavery give you the brain of a slave? If so then you would know what it is like, if not, then you would not.

Your claim is that if an LLM consumes enough text and image, it will understand in a way that goes beyond imitation - as in your swimming pool example. I'm pushing back on that by drawing on our intuitions about how much understanding can be gained by humans doing the same thing.
 


What if instead, I grew up as a slave?  How does that change those answers? 

My answer is the same as I said above: you need to have the mind/brain of something to know what it is like to be that something. Whether you lived the life or not doesn't matter, you need only have the same mind/brain as the entity in question.


Do you see the relevance to LLMs?  

This is a more general principle than LLMs vs. humans; it applies to all "knowing what it's like" matters between any two conscious beings.

And LLMs that don't have the mind/brain of a human won't know what it's like - and that matters for understanding. I think that's the crux of our disagreement.
 

 


But, you might say, I don't need to have your experience to understand your experience. That's true, but only because my lived experience gives me the ability to relate to yours. These are global notions of understanding. You've acknowledged that, assuming computationalism, the underlying computational dynamics that define an LLM would give rise to a consciousness that would have qualities that are pretty alien to human consciousness. So it seems clear to me that the LLM, despite the uncanny appearance of understanding, would not be able to relate to my experience. But it is good at simulating that understanding. Do you get what I'm trying to convey here?

Yes. The LLM, if it doesn't experience human color qualia, for example, would have an incomplete understanding of what we refer to when we use the word "red." But note this same limitation exists between any two humans. It's only an assumption that we are talking about the same thing when we use words related to qualia. A colorblind person, or a tetrachromat might experience something very different, and yet will still use that word.

I am red-green colorblind. And about this time every year people go on and on about the beauty of the leaves when they change color. They freaking plan vacations around it.

Many trichromats find that ridiculous too.

😆
 

I think the draw is more for people that have never seen it, in the same way people might plan a trip to see the aurora borealis, a total eclipse, an active volcano, or a bioluminescent beach.

I will tell you two things about this: 1) I can see red and green, but due to having way fewer red-receptors, the "distance" between those colors is much closer for me and 2) I genuinely don't understand what all the fuss is about. I mean I get intellectually that it's a beautiful experience for those who have the ordinary distribution of color receptors. So I have an intellectual understanding. And I can even relate to it in the sense that I can fully appreciate the beauty of sunrises and sunsets and other beautiful presentations of color that aren't limited to a palette of reds and greens. But I will never really understand what it's like to witness the splendor that leaf-peepers go gaga for.

Have you ever tried something like these?

They block out the point of overlap to magnify the distinction between red and green receptors. There are a lot of nice reaction videos on YouTube.


Yes, I have a pair of prescription sunglasses that are tinted red. And while I do notice slightly more shades of green while wearing them, it is a far cry from what those people appear to experience in those videos.
 

 

 
I don't know. There was a research paper that found common structures between the human language processing center and LLMs. It could be that what it feels like to think in language as a human, is not all that different from how LLMs feel when they (linguistically) reason at a high level. I've sometimes in the past (with Gordon) compared how LLMs understand the world to how Helen Keller understood the world. He countered that Keller could still feel. But then I countered that most LLMs today are multimodally trained. You can give them images and ask them to describe what they see. I've actually been using Grok to do this for my dad's art pieces. It's very insightful and descriptive.

For example, the description here was written by AI:

Can we consistently deny that these LLMs are able to "see?"

I'm with you here. I think for a flexible enough definition of "see", then yes, LLMs see. But I think Gordon's point is still valid, and this goes back to my point about having a body, and having a singular global consciousness and identity that updates in each moment. And ultimately, that the LLM's would-be consciousness is too alien and static to allow for the real-world and nuanced understanding that we take for granted even when relating to Helen Keller.

The network weights being static doesn't mean there's not a lot of dynamism as the network processes inputs. I think the neuron weights in our brains similarly change very slowly and rarely, yet we can still process new instants (and inputs) over and over again quite rapidly.


I think I'm going to stop arguing on this point, I seem to be failing to get across the salient difference here. And anyway, it's only reinforcing a point you already agree with - that the "mind" of an LLM is alien to humans.
 
 

 
Again, these are not suitable analogies. In both cases of Sleeping Beauty and Miguel, they both begin the "experiment" with an identity/worldview formed from a lifetime of the kind of coherent, continuous consciousness that updates moment to moment. In the LLM's case, it never has that as a basis for its worldview.

I think of it as having built a world view by spending the equivalent of many human lifetimes in a vast library, reading every book, every Wikipedia article, every line of source code on GitHub, and every Reddit comment. And for the multimodal AIs, going through a vast museum seeing millions or billions of images from our world. Has it ever felt what it's like to jump in a swimming pool with human nerves? No. But it's read countless descriptions of such experiences, and probably has a good idea of what it's like. At least, well enough to describe it as well or better than the average person could.


That's great for what it is. But you have to admit that that very scenario is exactly what I'm talking about. For an LLM to describe what it's like to jump into a swimming pool and do it better than I could just means that it's amazingly good at imitation. To say that's anything but an imitation is to insinuate that an LLM is actually having an experience of jumping into a pool somehow, and that is an extraordinary claim. I cannot get on board that train.

I am not saying that it knows how it feels but rather that it understands all the effects, consequences, aspects, etc. in the same way a person who's never jumped into a pool would intellectually understand it.

I think "intellectual understanding" is a better term than imitation. It is not merely parroting what people have said, but you could ask it variations people have tried or written about, for example, if a person rubbed a hydrophobic compound all over their skin and the water was a certain temperature, how might it feel? And it could understand the processes involved well enough to predict how someone might describe that experience differently.

Imitation is not the same thing as parroting, but I like "intellectual understanding". 

LLMs are capable of convincing people that they are a singular persona. Creativity is involved with that, but it's still imitation in the sense of what we've been discussing: they don't actually know what it's like to be the thing they are presenting themselves as. They understand what the user expects enough to imitate how such a being would talk and behave.
 


 
This mind, too, would not operate continuously, but would run for short periods periodically. Moreover, since gaps of time spent unconscious aren't perceived by the mind in question, things would still "feel continuous" for the mind that undergoes these successive sleep/wake cycles. Indeed, we as humans undergo such cycles as we sleep/dream/wake, and are not continuously conscious throughout our lives. This is no impediment to our being conscious.

The analogy you're making here doesn't map meaningfully onto how LLMs work.

It does for the context of a conversation with one user. It would not feel the times in-between the user prompts. Rather, it would feel one continuous, growing stream of back-and-forth conversation.

I accept your point that it does not apply between different sessions.

This is what I mean about your (to me) impoverished take on "understanding". 

Is it the non-integration of all the conversation threads it is in, or the lack of having lived in the real world with a human body and senses?

I do not see the non-integration as telling us anything useful because, as my examples with Miguel show, this makes no difference for the case of an uploaded human brain, so I don't think it's definitive for the case of LLMs. I think the argument that it hasn't lived life in a human body is the stronger line of attack.

I'm not sure I'm explaining my position as well as I could. In the case of Miguel (a story I'm not familiar with) I assume that Miguel developed normally to a point and then started to experience this bifurcation of experience. Right? 

He was a human who lived a normal life and then uploaded his mind, but the upload became free/open source, so it was used by all kinds of people for all kinds of purposes; each instance was independent, and they tended to wear out after some time and had to be restarted from an initial or pre-trained state quite often. It is quite a good, yet horrifying story. Well worth a read:


That's certainly the case with your Sleeping Beauty analogy. 

If so, what I'm saying is that analogy doesn't work because a) Miguel and Sleeping Beauty developed as embodied people with a cognitive architecture that processes information in a recursive fashion, which facilitates the ongoing experience of an inner world, fed by streams of data from sensory organs. No current LLM is anything at all like this. And that's important because real understanding depends on the relatability of experience. 

I think they are recursive and do experience a stream (of text and/or images). The output of the LLM is looped back to the input and the entirety of the session buffer is fed into the whole network with each token added (by the user or the LLM). This would grant the network a feeling of time/progress/continuity in the same way as a person watching their monitor fill with text in a chat session.


In the scope of a single conversation yes. But I'm not going to repeat myself anymore on this, I don't think that's relevant. Like, at all.
 

 


Again, there's nothing we can relate to, imo, about what it's like to have a consciousness that maps onto an information architecture that does not continuously update in a global way.

I don't think our brains update immediately either. There's at least a 10-minute delay before our short-term memories are "flushed" to long-term storage (as evidenced by the fact that one can lose the preceding 10 or so minutes of memories if struck on the head). And as for globally, the entire network gets to see the content of the LLM's "short term" buffer, as well as anything that the LLM adds to it. In this sense, there are global recursive updates and sharing of information across the parts of the network that are interested in it.


I'm not talking just about memory. I'm talking about the moment to moment updating of global cognitive state. In LLMs, the "experience" such as it is, consists of large numbers of isolated interactions. It's not that there's no similarities. But we have some similarities to sea horses. That doesn't mean I can understand what it's like to be one.

Forget about the million other interactions Grok or GPT might be having and just consider one session with one user. All the others are irrelevant.

The question, then, is what does the LLM experience as part of this single session, which has a consistent thread of memory, back-and-forth interactions, recursive processing and growth of this buffer, the context of all the previous exchanges, etc.

Other sessions are a red herring, which you can ignore altogether, just as one might ignore other instances of Miguel, when asking what it feels like to be (any one instance of) Miguel.


But that's exactly my point: the fact that you can ignore all those other conversations is what makes LLMs so different from human brains. Again, I've already made this point and not going to keep re-asserting it.
 

 

It's not about suddenly losing access to long-term memory. It's about having a consciousness that maps to a system that can support long-term memory formation and the coherent worldview and identity that it enables. Comparing an LLM that doesn't have that capability at all to a human that has had it through its development, and then losing it, is apples and oranges.

Is this all that is missing then in your view?
If OpenAI had their AI retrain between every prompt, would that upgrade it to full consciousness and understanding?


"Full consciousness and understanding" sounds like it's a scalar value, from 0-100% and you seem to think I'm arguing that humans are at 100 and LLMs are not quite there. 

If you are saying humans are 100 and LLMs are 5, I could agree with that. I could also agree that LLMs are at 200, but with an experience so different from humans' that any comparison is fruitless. I am in total agreement with you that if it feels like anything to be an LLM, it is very different from how it feels to be a human.

I'm saying consciousness is not a scalar or reducible to one.
 

Again it's about relatability, and even granting the LLM retraining after every prompt, there are still too many architectural differences for me to have any faith that what it's doing is anything more than (amazingly good) imitation.

To me, imitation doesn't fit. Grok never before saw an image like the one I provided and asked it to describe. Yet it came up with an accurate description of the painting. So who or what could it be imitating when it produces an accurate description of a novel image?

The only answer that I think fits is that it is seeing and understanding the image for itself.

Agree, subject to my point about what I mean by imitation above.
 

 
With a large enough LLM, something in the LLM could know what it is like (if it was large enough to simulate a human or bat brain). But absent such huge LLMs, point taken.

I addressed this earlier - see "Does that suddenly give me an understanding of what it's like to be a bat?  No, because that kind of understanding requires living the life of a bat."

When Kirk steps into a transporter and a new Kirk is materialized, would you predict the newly materialized Kirk would cease being conscious, or fail to function normally, on account of this newly formed Kirk not having lived and experienced the full life of the original Kirk?

If you think the new Kirk would still function, and still be conscious, then what is the minimum that must be preserved for Kirk's consciousness to be preserved?


I think I've been pretty clear that whatever subjective experience an LLM is having is going to map to its own cognitive architecture. I'm not denying it has subjective experience. I'm denying that its experience, alien as it must be, allows it to have real understanding, as distinct from intellectual understanding - the kind that allows it to imitate answers to questions like what it's like to dive into a pool.

I don't think we're disagreeing here. I've said all along that qualia-related words cannot be understood to the same degree as non-qualia-related words if an entity doesn't have those same qualia for itself.

But I don't think real/fake understanding is the correct line to draw. If the LLM has its own cognitive architecture, and its own unique set of qualia, then it has its own form of understanding, no less real than our own, but a different understanding. And our understanding of how it sees the world would be just as deficient as its understanding of how we see the world.

Sure, but if I were able to convince the LLM somehow that I was just like an LLM despite not knowing what it's like to be one, I would be imitating it, without real understanding.
 


 
This is less about evaluating external claims, and more about knowing whether you're hallucinating or not.  People who lack this ability, we call schizophrenic. 

What determines whether or not someone is hallucinating comes down to whether or not their perceptions match reality (so it depends on both internal and external factors).

Exactly. And it's many years of experience and feedback from reality (as mediated and constructed) that gives people this intuition. I'm not saying that "reality testing" is about knowing for sure what's real, but that it's an important capacity that's required to navigate the real world from inside the cockpit of our little spaceship bodies.
 
In general, people don't have the capacity to determine what exists or what is true beyond their minds, as all conscious knowledge states are internal, and those internal conscious states are all one ever knows or ever can know. The movie "A Beautiful Mind" provides a good example of an intelligent, rational person who is unable to tell they are hallucinating.


You're making my point for me. What accounts for why schizophrenics lack this intuition about what is real?  And why do you think LLMs would have this capacity?

I'm saying we don't have this ability. 

Spoken like someone who has never hallucinated and wondered what is real and what isn't!  It can be quite frightening.

It's not that schizophrenics lack an ability to distinguish reality from hallucinations, it's that they have hallucinations.

Do you think schizophrenics just walk around going, oh there I go, hallucinating again!  No, they hallucinate and then treat them as features of the real world. A lot of hallucinations schizophrenics experience are voices in their head.  Of course, many of us hear a voice in our head as we ruminate or whatever, but schizophrenics are burdened by an inability to recognize those voices as just features of their own minds. They perceive them as coming from outside - which leads to the paranoid delusions often reported of such folks believing, for instance, that the government has implanted a radio in their skull, or that they're possessed by demons.
 

How often do you dream without realizing it is a dream until you wake up?

You're just making my point for me again. Dreaming is a state in which that reality-testing capacity is offline. A common tactic for inducing lucid dreams is to get into the habit of asking yourself during waking hours whether what you're experiencing is a dream or not.  Once that habit becomes ingrained, you can begin asking that question within your dream, and voila, you're lucid dreaming. It's a hack for bringing that reality test online while dreaming.

Terren
 

Gordon Swobe

unread,
Oct 10, 2025, 4:16:53 PM (3 days ago) Oct 10
to the-importa...@googlegroups.com
On Fri, Oct 10, 2025 at 9:52 AM Terren Suydam <terren...@gmail.com> wrote:


I agree that there is a shallow version of understanding that facilitates the imitation game LLMs play so well. But the deeper sense of understanding that is required to prevent hallucination will elude LLMs forever because of the way they're architected.


I agree with this same idea of two kinds of understanding, Terren. I have proposed and argued at length for a couple of years now that we should enclose the shallow form of “understanding” in quotes to let the reader know we do not mean understanding in the usual sense as Webster defines it.

As most any linguist or philosopher of language would agree, it is simply not possible for any entity, man, machine, or alien, to have genuine understanding of language without experience of the world to which that language refers. I cannot even take the idea seriously unless we are thinking of multimodal language models or robots.

-gts

Jason Resch

unread,
Oct 12, 2025, 1:32:04 PM (yesterday) Oct 12
to the-importa...@googlegroups.com
On Fri, Oct 10, 2025 at 12:52 PM Terren Suydam <terren...@gmail.com> wrote:


On Wed, Oct 8, 2025 at 1:02 AM Jason Resch <jason...@gmail.com> wrote:


On Tue, Oct 7, 2025, 9:39 PM Terren Suydam <terren...@gmail.com> wrote:


On Tue, Oct 7, 2025 at 7:24 PM Jason Resch <jason...@gmail.com> wrote:


On Sun, Oct 5, 2025, 11:57 AM Terren Suydam <terren...@gmail.com> wrote:

For sure, if you're talking about multiplication, simulating a computation is identical to doing the computation. I think we're dancing around the issue here though, which is that there is something it is like to understand something. Understanding has a subjective aspect of it.

I agree. But I think what may differentiate our positions on this, is that I believe the subjective character of understanding is inseparable from the functional aspects required for a process that demonstrably understands something. This conclusion is not obvious, but it is one I have reached through my studies on consciousness. Note that seeing a process demonstrate understanding does not tell us what it feels like to be that particular process, only that a process sophisticated enough to understand will (in my view) possess the minimum properties required to have at least a modicum of consciousness.

Sure, but that's a far cry from saying that what it's like to be an LLM is anywhere near what it's like to be a human.

I agree. I think states of LLM consciousness are quite alien to states of human consciousness. I think I have been consistent on this.

 

I think you're being reductive when you talk about understanding because you appear to want to reduce that subjective quality of understanding to neural spikes, or whatever the underlying framework is that performs that simulation. 

I am not a reductionist, but I think it is a useful analogy to point to whenever one argues that a LLM "is just/only statistical patterns," because at a certain level, so are our brains. At its heart, my argument is anti-reductionist, because I am suggesting what matters is the high-level structures that must exist above the lower level which consists of "only statistics."

That's all well and good, but you seem to be sweeping under the rug the possibility that the high-level structures that emerge in both brains and LLMs are anywhere close to each other. 


Not at all. Though I do believe that the structures that emerge naturally in neural networks are largely dependent on the type of input received, such that an artificial neural network fed the same kind of inputs as our optic nerve would, I presume, generate higher-level structures similar to those that appear in a biological neural network.

For evidence of this, there were experiments in which brain surgery was done on some kind of animal to connect the optic nerve to the auditory cortex; the animals developed normal vision, with their auditory cortex taking on the functions of the visual cortex.



Accordingly, I would not be surprised if there are analogous layers between the visual-processing stages for object recognition in a multimodal LLM network and the parts of the human visual cortex involved in object recognition. If so, then what it "feels like" to see and recognize objects need not be as alien as we might think.

In fact, we've known for many years (since Google's Deep Dream) that object-recognition neural networks' lower layers pick up edges and lines, etc. And this is quite similar to the initial steps of processing performed in our retinas.
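You can check this for yourself in a few lines (a rough sketch, assuming PyTorch and torchvision are installed; the layer name conv1 is specific to ResNet, and other architectures name that layer differently):

# Rough sketch: inspect the first-layer filters of a pretrained CNN.
# Assumes torchvision >= 0.13 for the "weights=" argument; conv1 is ResNet-specific.
import torchvision.models as models
from torchvision.utils import save_image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
filters = model.conv1.weight.detach()            # shape: [64, 3, 7, 7]

# Normalize each filter to [0, 1] so it can be viewed as a tiny RGB image.
f_min = filters.amin(dim=(1, 2, 3), keepdim=True)
f_max = filters.amax(dim=(1, 2, 3), keepdim=True)
filters = (filters - f_min) / (f_max - f_min + 1e-8)

# Save a grid of all 64 filters; most look like oriented edges or color blobs.
save_image(filters, "resnet18_first_layer_filters.png", nrow=8)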

So if input is what primarily drives the structure neural networks develop, then how it feels to see or think in words could be surprisingly similar between LLM and human minds. Of course, there is plenty that would still be very different, but we should consider this factor as well. So if we made an android with the same sense organs and approximately the same number of neurons, and let its neural network train naturally given those sensory inputs, my guess is it would develop a rather similar kind of brain.

Consider: there's little biologically different between a mouse neuron and a human neuron. The main difference is the number of neurons and the different inputs the brains receive.



I agree with all this. And the multi-modal input (including images, video, and sound) may well result in some level of isomorphism in the emergent structures between humans and LLMs, in the same way we can imagine some isomorphism between humans and octopuses.

Nice.
 

But an LLM will never develop isomorphic structures related to the signals we all internalize around having a body, including all the signals that come from skin, muscles, bones, internal organs, hormonal signals, pain, pleasure, and so on.

I can accept that it is less likely to. But never say never. ;-)

 
And on top of all that, in a way that maps all those signals onto a self model that exists in the world as an independent agent that can perceive, react, respond, and make changes in the world. 

I think, limited as they are, they may still possess enough to have a self-model, and a perception of user interaction which, at the least, includes making changes in the user. The model can tell, for example, whether the user is satisfied or not, or whether the user is understanding what the model is saying, etc.
 

I agree that there is a shallow version of understanding that facilitates the imitation game LLMs play so well. But the deeper sense of understanding that is required to prevent hallucination will elude LLMs forever because of the way they're architected.

I can accept they are less grounded in the base reality we inhabit. But my point is neither they, nor we, are fully immune from hallucination.

I think for reasons of mathematical logic there is likely something analogous to a Gödelian statement that applies to minds: a mind that believes it can prove its own sanity is insane. From this it follows that no sane mind can know/prove it is sane.
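To spell out what I am leaning on here (the theorem is Gödel's; the application to minds is only a loose analogy on my part):

Con(F) := the statement "no contradiction is provable in F"
Second incompleteness theorem: if F is a consistent formal system strong enough for arithmetic, then F does not prove Con(F).
Contrapositive: if F proves Con(F), then F is inconsistent.

Substitute "mind" for F and "sanity" for consistency and you get the slogan above.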
 

 

 

So if I watch a documentary about slavery and witness scenes of the brutality experienced daily by slaves in that era of the American South - and let's say I really take it in - I'm moved enough to suffer vicariously, even to tears - would you say I understand what it was like to be a slave, from my present position of privilege?  If yes, do you think an actual slave from that era would agree with your answer? 

Does watching a documentary about slavery give you the brain of a slave? If so then you would know what it is like, if not, then you would not.

Your claim is that if an LLM consumes enough text and image, it will understand in a way that goes beyond imitation - as in your swimming pool example. I'm pushing back on that by drawing on our intuitions about how much understanding can be gained by humans doing the same thing.

I think you got the wrong impression from my swimming pool example. I agree the model would not know the experience of how it feels to jump in the pool as it does to those with human skin. It would then only be providing an intellectual understanding based on third-person reports and its own anatomical understanding.
 
 


What if instead, I grew up as a slave?  How does that change those answers? 

My answer is the same as I said above: you need to have the mind/brain of something to know what it is like to be that something. Whether you lived the life or not doesn't matter, you need only have the same mind/brain as the entity in question.


Do you see the relevance to LLMs?  

This is a more general principle than LLMs vs. humans; it applies to all "knowing what it's like" matters between any two conscious beings.

And LLMs that don't have the mind/brain of a human won't know what it's like - and that matters for understanding. I think that's the crux of our disagreement.

I agree that with a different mind/brain, you would have different qualia, and therefore would not know exactly what it is like.

The crux of our disagreement is on how important having identical qualia is for understanding. To use Daniel's example, men and women have different brains and possibly different qualia, and yet men and women roughly understand one another. Helen Keller had greatly different qualia, and she could speak and understand. Of course, she doesn't know of what we speak when we talk about the hues of a sunset, but in my view, this isn't enough to say she doesn't understand. If aliens from Mars came to Earth and they had different sensory modalities (say they experienced electric fields in three dimensions) we could still hope to learn their language and communicate with them, and understand each other. Our qualia would not be translatable, and we would not understand what it's like to experience electric fields as they do, but this is an inherent communication limitation that concerns all qualia between any two distinct minds. I don't know how different shades of red look to you, and you don't know how different shades of red look to me. This has never presented an insurmountable problem in our understanding, however.

So when it comes to LLMs, do they understand what it feels like to stub their toe or scald themselves with hot oil? Likely not. But I don't think this bars them from understanding a great deal.
 
 

 


But, you might say, I don't need to have your experience to understand your experience. That's true, but only because my lived experience gives me the ability to relate to yours. These are global notions of understanding. You've acknowledged that, assuming computationalism, the underlying computational dynamics that define an LLM would give rise to a consciousness that would have qualities that are pretty alien to human consciousness. So it seems clear to me that the LLM, despite the uncanny appearance of understanding, would not be able to relate to my experience. But it is good at simulating that understanding. Do you get what I'm trying to convey here?

Yes. The LLM, if it doesn't experience human color qualia, for example, would have an incomplete understanding of what we refer to when we use the word "red." But note this same limitation exists between any two humans. It's only an assumption that we are talking about the same thing when we use words related to qualia. A colorblind person, or a tetrachromat might experience something very different, and yet will still use that word.

I am red-green colorblind. And about this time every year people go on and on about the beauty of the leaves when they change color. They freaking plan vacations around it.

Many trichromats find that ridiculous too.

😆
 

I think the draw is more for people that have never seen it, in the same way people might plan a trip to see the aurora borealis, a total eclipse, an active volcano, or a bioluminescent beach.

I will tell you two things about this: 1) I can see red and green, but due to having way fewer red-receptors, the "distance" between those colors is much closer for me and 2) I genuinely don't understand what all the fuss is about. I mean I get intellectually that it's a beautiful experience for those who have the ordinary distribution of color receptors. So I have an intellectual understanding. And I can even relate to it in the sense that I can fully appreciate the beauty of sunrises and sunsets and other beautiful presentations of color that aren't limited to a palette of reds and greens. But I will never really understand what it's like to witness the splendor that leaf-peepers go gaga for.

Have you ever tried something like these?

They block out the point of overlap to magnify the distinction between red and green receptors. There are a lot of nice reaction videos on YouTube.


Yes, I have a pair of prescription sunglasses that are tinted red. And while I do notice slightly more shades of green while wearing them, it is a far cry from what those people appear to experience in those videos.

Tinting something red will block out all non-red hues. What these glasses do, however, is quite different. They block out all light in the frequency range where the green- and red-sensitive cone cells tend to overlap the most, thereby better enabling the two kinds of cone cells to distinguish the different frequencies. I made the following presentation to highlight the difference: https://docs.google.com/presentation/d/1OcnM_MvLt43lQiD4hNYfJtCUXzWgehvPxh5w0V5p87Y/edit?usp=sharing (see slide 2 vs slide 3).
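Here is a toy numerical sketch of the mechanism (illustrative only: the cone curves are idealized Gaussians rather than real spectral-sensitivity data, and the reflectance spectra and the 540-570 nm blocked band are made up):

# Toy model of notch-filter "colorblind glasses": block the band where the
# red- and green-sensitive cones overlap most. Idealized curves, not real data.
import numpy as np

wl = np.linspace(400, 700, 3001)                  # wavelength, nm

def gauss(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

L_cone = gauss(565, 40)         # long-wavelength ("red") cone, idealized
M_cone = gauss(535, 40)         # medium-wavelength ("green") cone, idealized

red_surface   = gauss(620, 60)  # broadband reddish reflectance (made up)
green_surface = gauss(540, 60)  # broadband greenish reflectance (made up)

notch = ~((wl >= 540) & (wl <= 570))              # assumed blocked band

def red_green_signal(spectrum, filt=1.0):
    # Crude red-green opponent signal: normalized difference of cone responses.
    l = np.trapz(L_cone * spectrum * filt, wl)
    m = np.trapz(M_cone * spectrum * filt, wl)
    return (l - m) / (l + m)

for name, filt in [("no glasses", 1.0), ("notch glasses", notch)]:
    r = red_green_signal(red_surface, filt)
    g = red_green_signal(green_surface, filt)
    print(f"{name}: red={r:+.3f}  green={g:+.3f}  gap={r - g:.3f}")

In this toy the "gap" between the reddish and greenish surfaces grows once the overlap band is removed, which is the effect those reaction videos are showing in a much more dramatic, real-world way.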

 
 

 

 
I don't know. There was a research paper that found common structures between the human language processing center and LLMs. It could be that what it feels like to think in language as a human, is not all that different from how LLMs feel when they (linguistically) reason at a high level. I've sometimes in the past (with Gordon) compared how LLMs understand the world to how Helen Keller understood the world. He countered that Keller could still feel. But then I countered that most LLMs today are multimodally trained. You can give them images and ask them to describe what they see. I've actually been using Grok to do this for my dad's art pieces. It's very insightful and descriptive.

For example, the description here was written by AI:

Can we consistently deny that these LLMs are able to "see?"

I'm with you here. I think for a flexible enough definition of "see", then yes, LLMs see. But I think Gordon's point is still valid, and this goes back to my point about having a body, and having a singular global consciousness and identity that updates in each moment. And ultimately, that the LLM's would-be consciousness is too alien and static to allow for the real-world and nuanced understanding that we take for granted even when relating to Helen Keller.

The network weights being static doesn't mean there's not a lot of dynamism as the network processes inputs. I think the neuron weights in our brains similarly change very slowly and rarely, yet we can still process new instants (and inputs) over and over again quite rapidly.
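To make that concrete, a toy sketch (assuming PyTorch; obviously nothing like a real LLM):

# Toy sketch: the parameters below never change, yet the activations are
# different for every new input: "static" weights, dynamic processing.
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
for p in net.parameters():
    p.requires_grad_(False)          # frozen, like a deployed model's weights

with torch.no_grad():
    for step in range(3):
        x = torch.randn(1, 8)        # a new input each "moment"
        y = net(x)                   # fresh activations through the same weights
        print(step, [round(v, 3) for v in y.squeeze().tolist()])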


I think I'm going to stop arguing on this point, I seem to be failing to get across the salient difference here. And anyway, it's only reinforcing a point you already agree with - that the "mind" of an LLM is alien to humans.
 
 

 
Again, these are not suitable analogies. In both cases of Sleeping Beauty and Miguel, they both begin the "experiment" with an identity/worldview formed from a lifetime of the kind of coherent, continuous consciousness that updates moment to moment. In the LLM's case, it never has that as a basis for its worldview.

I think of it as having built a world view by spending the equivalent of many human lifetimes in a vast library, reading every book, every Wikipedia article, every line of source code on GitHub, and every Reddit comment. And for the multimodal AIs, going through a vast museum seeing millions or billions of images from our world. Has it ever felt what it's like to jump in a swimming pool with human nerves? No. But it's read countless descriptions of such experiences, and probably has a good idea of what it's like. At least, well enough to describe it as well or better than the average person could.


That's great for what it is. But you have to admit that that very scenario is exactly what I'm talking about. For an LLM to describe what it's like to jump into a swimming pool and do it better than I could just means that it's amazingly good at imitation. To say that's anything but an imitation is to insinuate that an LLM is actually having an experience of jumping into a pool somehow, and that is an extraordinary claim. I cannot get on board that train.

I am not saying that it knows how it feels but rather that it understands all the effects, consequences, aspects, etc. in the same way a person who's never jumped into a pool would intellectually understand it.

I think "intellectual understanding" is a better term than imitation. It is not merely parroting what people have said, but you could ask it variations people have tried or written about, for example, if a person rubbed a hydrophobic compound all over their skin and the water was a certain temperature, how might it feel? And it could understand the processes involved well enough to predict how someone might describe that experience differently.

Imitation is not the same thing as parroting, but I like "intellectual understanding". 

:-)
 

LLMs are capable of convincing people that they are a singular persona. Creativity is involved with that, but it's still imitation in the sense of what we've been discussing: they don't actually know what it's like to be the thing they are presenting themselves as.

Perhaps Turing's original experimental design was far more clever than people give him credit for. As Turing proposed the imitation game, it was not a computer trying to pretend to be a human. It was human males pretending to be females, vs. a computer pretending to be a female. In both cases, there is imitation, and only incomplete understanding of what it is like to be the object of imitation. However, this aspect of it is typically forgotten when people describe Turing's "imitation game."
 
They understand what the user expects enough to imitate how such a being would talk and behave.

I accept there is a degree of imitation involved in what LLMs do. But I would add that to achieve a high enough level of fidelity in performing that imitation, it is required that one simulates or emulates the object being imitated. This simulation may itself have some degree of understanding, or feeling, distinct from the higher-level process in the LLM which orchestrates the simulation (just as Searle, when simulating the mind of Einstein in his head, is unaware of Einstein's thoughts, though Einstein's mind exists within Searle's mind).
 
 


 
This mind, too, would not operate continuously, but would run for short periods periodically. Moreover, since gaps of time spent unconscious aren't perceived by the mind in question, things would still "feel continuous" for the mind that undergoes these successive sleep/wake cycles. Indeed, we as humans undergo such cycles as we sleep/dream/wake, and are not continuously conscious throughout our lives. This is no impediment to our being conscious.

The analogy you're making here doesn't map meaningfully onto how LLMs work.

It does for the context of a conversation with one user. It would not feel the times in-between the user prompts. Rather, it would feel one continuous, growing stream of back-and-forth conversation.

I accept your point that it does not apply between different sessions.

This is what I mean about your (to me) impoverished take on "understanding". 

Is it the non-integration of all the conversation threads it is in, or the lack of having lived in the real world with a human body and senses?

I do not see the non-integration as telling us anything useful because, as my examples with Miguel show, this makes no difference for the case of an uploaded human brain, so I don't think it's definitive for the case of LLMs. I think the argument that it hasn't lived life in a human body is the stronger line of attack.

I'm not sure I'm explaining my position as well as I could. In the case of Miguel (a story I'm not familiar with) I assume that Miguel developed normally to a point and then started to experience this bifurcation of experience. Right? 

He was a human who lived a normal life and then uploaded his mind, but the upload became free/open source, so it was used by all kinds of people for all kinds of purposes; each instance was independent, and they tended to wear out after some time and had to be restarted from an initial or pre-trained state quite often. It is quite a good, yet horrifying story. Well worth a read:


That's certainly the case with your Sleeping Beauty analogy. 

If so, what I'm saying is that analogy doesn't work because a) Miguel and Sleeping Beauty developed as embodied people with a cognitive architecture that processes information in a recursive fashion, which facilitates the ongoing experience of an inner world, fed by streams of data from sensory organs. No current LLM is anything at all like this. And that's important because real understanding depends on the relatability of experience. 

I think they are recursive and do experience a stream (of text and/or images). The output of the LLM is looped back to the input and the entirety of the session buffer is fed into the whole network with each token added (by the user or the LLM). This would grant the network a feeling of time/progress/continuity in the same way as a person watching their monitor fill with text in a chat session.


In the scope of a single conversation yes. But I'm not going to repeat myself anymore on this, I don't think that's relevant. Like, at all.

I think we agree on all aspects of this.
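For concreteness, the loop I have in mind looks roughly like this (a sketch with stand-in functions and names, not any vendor's actual API): the entire buffer, everything the user has typed and everything the model itself has emitted so far, is fed back through the network for every single token.

# Rough sketch of the autoregressive chat loop (stand-in functions, no real API).
def next_token(context: str) -> str:
    # Placeholder for a full forward pass of the network over the whole buffer.
    return "ok" if context.endswith("?") else "."

def generate_reply(context: str, max_tokens: int = 50) -> str:
    reply = ""
    for _ in range(max_tokens):
        token = next_token(context + reply)   # the whole growing buffer goes back in
        if token == ".":                      # stand-in for an end-of-turn token
            break
        reply += " " + token
    return reply.strip()

buffer = ""                                    # one session = one growing buffer
for user_turn in ["Hello?", "What is it like to be you?"]:
    buffer += "\nUser: " + user_turn
    buffer += "\nModel: " + generate_reply(buffer)
print(buffer)

Nothing persists between sessions except the frozen weights, but within a session that growing buffer is exactly the consistent thread of memory I describe below.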
 
 

 


Again, there's nothing we can relate to, imo, about what it's like to have a consciousness that maps onto an information architecture that does not continuously update in a global way.

I don't think our brains update immediately either. There's at least a 10-minute delay before our short-term memories are "flushed" to long-term storage (as evidenced by the fact that one can lose the preceding 10 or so minutes of memories if struck on the head). And as for globally, the entire network gets to see the content of the LLM's "short term" buffer, as well as anything that the LLM adds to it. In this sense, there are global recursive updates and sharing of information across the parts of the network that are interested in it.


I'm not talking just about memory. I'm talking about the moment to moment updating of global cognitive state. In LLMs, the "experience" such as it is, consists of large numbers of isolated interactions. It's not that there's no similarities. But we have some similarities to sea horses. That doesn't mean I can understand what it's like to be one.

Forget about the million other interactions Grok or GPT might be having and just consider one session with one user. All the others are irrelevant.

The question, then, is what does the LLM experience as part of this single session, which has a consistent thread of memory, back-and-forth interactions, recursive processing and growth of this buffer, the context of all the previous exchanges, etc.

Other sessions are a red herring, which you can ignore altogether, just as one might ignore other instances of Miguel, when asking what it feels like to be (any one instance of) Miguel.


But that's exactly my point: the fact that you can ignore all those other conversations is what makes LLMs so different from human brains. Again, I've already made this point and not going to keep re-asserting it.

I think it is a complete red herring to bring up other instances. A human brain, in principle, can be duplicated and run in a million different places too. But I don't see how that's significant to the question of what a single instance knows, understands, or how it might experience the world (if it experiences anything at all).
 
 

 

It's not about suddenly losing access to long-term memory. It's about having a consciousness that maps to a system that can support long-term memory formation and the coherent worldview and identity that it enables. Comparing an LLM that doesn't have that capability at all to a human that has had it through its development, and then losing it, is apples and oranges.

Is this all that is missing then in your view?
If OpenAI had their AI retrain between every prompt, would that upgrade it to full consciousness and understanding?


"Full consciousness and understanding" sounds like it's a scalar value, from 0-100% and you seem to think I'm arguing that humans are at 100 and LLMs are not quite there. 

If you are saying humans are 100 and LLMs are 5, I could agree with that. I could also agree that LLMs are at 200, but with an experience so different from humans' that any comparison is fruitless. I am in total agreement with you that if it feels like anything to be an LLM, it is very different from how it feels to be a human.

I'm saying consciousness is not a scalar or reducible to one.

Then are you saying LLMs have 0/no consciousness? I guess I am not following.
 
 

Again it's about relatability, and even granting the LLM retraining after every prompt, there are still too many architectural differences for me to have any faith that what it's doing is anything more than (amazingly good) imitation.

To me, imitation doesn't fit. Grok never before saw an image like the one I provided and asked it to describe. Yet it came up with an accurate description of the painting. So who or what could it be imitating when it produces an accurate description of a novel image?

The only answer that I think fits is that it is seeing and understanding the image for itself.

Agree, subject to my point about what I mean by imitation above.

But in this case, the model is multi-modally trained on image data. So could we say that it is really seeing (even if its visual qualia are alien to ours), rather than simply imitating seeing? Or does the fact that its visual qualia are alien automatically imply that it is imitating seeing? I could accept that use of terminology, I just want to better understand how you are using the word imitation.
 
 

 
With a large enough LLM, something in the LLM could know what it is like (if it was large enough to simulate a human or bat brain). But absent such huge LLMs, point taken.

I addressed this earlier - see "Does that suddenly give me an understanding of what it's like to be a bat?  No, because that kind of understanding requires living the life of a bat."

When Kirk steps into a transporter and a new Kirk is materialized, would you predict the newly materialized Kirk would cease being conscious, or fail to function normally, on account of this newly formed Kirk not having lived and experienced the full life of the original Kirk?

If you think the new Kirk would still function, and still be conscious, then what is the minimum that must be preserved for Kirk's consciousness to be preserved?


I think I've been pretty clear that whatever subjective experience an LLM is having is going to map to its own cognitive architecture. I'm not denying it has subjective experience. I'm denying that its experience, alien as it must be, allows it to have real understanding, as distinct from intellectual understanding - the kind that allows it to imitate answers to questions like what it's like to dive into a pool.

I don't think we're disagreeing here. I've said all along that qualia-related words cannot be understood to the same degree as non-qualia-related words if an entity doesn't have those same qualia for itself.

But I don't think real/fake understanding is the correct line to draw. If the LLM has its own cognitive architecture, and its own unique set of qualia, then it has its own form of understanding, no less real than our own, but a different understanding. And our understanding of how it sees the world would be just as deficient as its understanding of how we see the world.

Sure, but if I were able to convince the LLM somehow that I was just like an LLM despite not knowing what it's like to be one, I would be imitating it, without real understanding.

Perhaps a new term like "qualia isomorphism" would better capture what it is we are talking about. The word "understanding" is laden with so many connotations I think it is adding to our communication problems.

We can then say that any two beings which lack qualia isomorphism will never be able to fully grasp what the other is talking about when making reference to qualia that are missing from the other being (or which the other being has never had a chance to activate -- for example, if I make reference to the smell of some chemical you have never smelled, your ability to grasp what I was talking about would be just as deficient as if your sensory system lacked the ability to detect that chemical).

And we can use the term "intellectual understanding" of an object to refer to grasping third-person describable properties, relations, and interactions, to a degree that enables some amount of reliable modeling of situations involving that object, in order to make predictions or answer questions.

Then, I would say, and I think you would agree, that LLMs and humans lack qualia isomorphism, but that LLMs have a significant intellectual understanding of the world. Are we in agreement on this?
 
 


 
This is less about evaluating external claims, and more about knowing whether you're hallucinating or not.  People who lack this ability, we call schizophrenic. 

What determines whether or not someone is hallucinating comes down to whether or not their perceptions match reality (so it depends on both internal and external factors).

Exactly. And it's many years of experience and feedback from reality (as mediated and constructed) that gives people this intuition. I'm not saying that "reality testing" is about knowing for sure what's real, but that it's an important capacity that's required to navigate the real world from inside the cockpit of our little spaceship bodies.
 
In general, people don't have the capacity to determine what exists or what is true beyond their minds, as all conscious knowledge states are internal, and those internal conscious states are all one ever knows or ever can know. The movie "A Beautiful Mind" provides a good example of an intelligent, rational person who is unable to tell they are hallucinating.


You're making my point for me. What accounts for why schizophrenics lack this intuition about what is real?  And why do you think LLMs would have this capacity?

I'm saying we don't have this ability. 

Spoken like someone who has never hallucinated and wondered what is real and what isn't!  It can be quite frightening.

I don't follow. I would have thought what I said is exactly in line with what "someone who has hallucinated and wondered what is real and what isn't" would agree with.
 

It's not that schizophrenics lack an ability to distinguish reality from hallucinations, it's that they have hallucinations.

Do you think schizophrenics just walk around going, oh there I go, hallucinating again!  No, they hallucinate and then treat them as features of the real world.

This is what I am saying!
 
A lot of hallucinations schizophrenics experience are voices in their head.  Of course, many of us hear a voice in our head as we ruminate or whatever, but schizophrenics are burdened by an inability to recognize those voices as just features of their own minds. They perceive them as coming from outside - which leads to the paranoid delusions often reported of such folks believing, for instance, that the government has implanted a radio in their skull, or that they're possessed by demons.
 

How often do you dream without realizing it is a dream until you wake up?

You're just making my point for me again. Dreaming is a state in which that reality-testing capacity is offline. A common tactic for inducing lucid dreams is to get into the habit of asking yourself during waking hours whether what you're experiencing is a dream or not.  Once that habit becomes ingrained, you can begin asking that question within your dream, and voila, you're lucid dreaming. It's a hack for bringing that reality test online while dreaming.

Most of these tests rely on tricks where the brain is unable to maintain consistency of the dream. For example, reading the same text twice and finding that it changes each time you look away and look back. Absent detection of such inconsistencies, the mind has no way to know whether what it is experiencing is real.

Jason
 