Tragic that both Searle and Dennett passed just as AI was becoming a reality. They spent much of their careers debating only the theory of it and won’t be here to see how it plays out in reality.
Jason, just for the record and in memory of Searle, what is your reply to his Chinese Room Argument (CRA)? Please don’t send me a long-winded missive filled with links. I know all the main counter-arguments. Just tell me which one you think is most persuasive and why. Thanks.
On Mon, Sep 29, 2025 at 12:17 PM Gordon Swobe <gordon...@gmail.com> wrote:
> American Philosopher John Searle, Creator Of Famous "Chinese Room" Thought Experiment, Dies Aged 93. First proposed in 1980, the "Chinese room" thought experiment has only grown more relevant.
I never thought the CRA said anything useful until LLMs came along, because it was too difficult for me to imagine an AI actually implemented in those terms, but I think LLMs are basically Chinese Rooms. And my intuition is that LLMs as presently constructed merely imitate conscious beings.
They can say things that might convince us they are conscious, but they are not. And what backs up that intuition is that LLMs do not process the world in real-time.
They do not recursively update their internal state, moment by moment, by information from the environment.
I know some might say that they are in fact doing that - that they are receiving prompts and updating their state based on that. But those internal updates are not recursive or global in any compelling sense, and the "information about their environment" reflects nothing but the whims of human minds everywhere.
As a result, LLMs do not have a way to "reality-test" anything (which I think is what accounts for hallucinations and their willingness to go merrily along with sociopathic or psychotic prompters).
This to me is perfectly congruent with LLMs not being conscious.
On Tue, Sep 30, 2025, 4:18 PM Gordon Swobe <gordon...@gmail.com> wrote:
> Tragic that both Searle and Dennett passed just as AI was becoming a reality. They spent much of their careers debating only the theory of it and won’t be here to see how it plays out in reality.
Indeed, and just as things in AI were starting to get really interesting. Here is a presentation I am working on about just how close we are to things really taking off:

> Jason, just for the record and in memory of Searle, what is your reply to his Chinese Room Argument (CRA)? Please don’t send me a long-winded missive filled with links. I know all the main counter-arguments. Just tell me which one you think is most persuasive and why. Thanks.

A great question, for which I don't think there is one best answer, since each person who accepts the argument can base that intuition on different reasons. But the most generally powerful counter-argument, I think, comes from Dennett, and goes as follows (in my own words):

--

The CRA works, as all magic tricks do, by way of a clever misdirection. We see Searle, waving and shouting to us, saying "I don't understand a thing!" and, as the (seemingly) only entity before us, we are inclined to believe him.

But Searle is not the only entity involved. This becomes obvious when we ask the Room about its opinions: its favorite food, its opinion on Mao, its favorite book, and so on. The answers we receive are not Searle's answers to these questions. We could substitute Searle for any other person, and the answers we would get from the Room would be the same.

This reveals Searle to be a replaceable cog in a greater machine, as the substitution makes no difference at all to the Room's behavior or responses.

So when Searle protests that he "doesn't understand a thing!", he's right, but that fact is irrelevant. He doesn't have to understand anything. He's not the only entity in the system with an opinion. Ask the Room (in Chinese) if it understands, and it will proclaim that it does.

We could say Searle's role in the Room, as the driver of the rules, is analogous to the laws of physics driving the operation of our brains. You understand English, but the "laws of physics", like Searle, don't need to understand a thing.

--

You could call this a version of the system reply, with additional exposition to undermine the intuitive trick that the CRA relies on.
Jason
On Tue, Sep 30, 2025 at 2:56 PM Jason Resch <jason...@gmail.com> wrote:
> You could call this a version of the system reply, with additional exposition to undermine the intuitive trick that the CRA relies on.
Yes that is the system reply, to which Searle replies that he could put the entire system in his mind and still not understand.
-gts
On Tue, Sep 30, 2025, 6:13 PM Terren Suydam <terren...@gmail.com> wrote:
> I never thought the CRA said anything useful until LLMs came along, because it was too difficult for me to imagine an AI actually implemented in those terms, but I think LLMs are basically Chinese Rooms. And my intuition is that LLMs as presently constructed merely imitate conscious beings.

Is there any degree of functionality that you see as requiring consciousness?
> They can say things that might convince us they are conscious, but they are not. And what backs up that intuition is that LLMs do not process the world in real-time.

For that matter, neither do humans. Our conscious state lags about 0.1 seconds behind real time, due to processing delays in integrating sensory information.
> They do not recursively update their internal state, moment by moment, by information from the environment.

There was a man ( https://en.wikipedia.org/wiki/Henry_Molaison ) who after surgery lost the capacity to form new long-term memories. I think LLMs are like that: they have short-term memory (their buffer window) but no capacity to form long-term memories (without undergoing a background process of integration/retraining on past conversations). If Henry Molaison was conscious despite his inability to form long-term memories, then this limitation isn't enough to rule out LLMs being conscious.
> I know some might say that they are in fact doing that - that they are receiving prompts and updating their state based on that. But those internal updates are not recursive or global in any compelling sense, and the "information about their environment" reflects nothing but the whims of human minds everywhere.

I would liken their environment to something like Helen Keller reading Braille. But they may also live in a rich world of imagination, so it could be more like someone with a vivid imagination reading a book, experiencing all kinds of objects, relations, and connections that its neural network creates for itself.
> As a result, LLMs do not have a way to "reality-test" anything (which I think is what accounts for hallucinations and their willingness to go merrily along with sociopathic or psychotic prompters).

Consider if you were subject to the same training regimen as an LLM. You are confined to a box and provided a prompt. You are punished severely for mis-predicting the next character. Very rarely does the text ever veer off into "I'm sorry, I do not know the answer to that." -- such fourth-wall-breaking divergences don't exist in its training corpus, since training on them would teach it nothing useful. Should you diverge from doing your best to predict the text, and instead return "I don't know," you would be punished, not rewarded, for your honesty. It is then no surprise that LLMs will make up things that sound correct rather than admit their own limited knowledge -- it is what we have trained them to do.
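(To make that training objective concrete, here is a toy sketch in Python. The bigram "model" and the sample text are made-up stand-ins, nothing like a real lab's training code; the point is only that the sole reward signal is how much probability the model placed on the token that actually came next.)

import math
from collections import Counter, defaultdict

text = "the cat sat on the mat . the dog sat on the rug ."
tokens = text.split()

# Toy "model": bigram counts stand in for the network's predicted
# probabilities of the next token given the current one.
counts = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    counts[prev][nxt] += 1

def predict(prev):
    c = counts[prev]
    total = sum(c.values())
    return {w: n / total for w, n in c.items()}

# The training loss: the model is "punished" in proportion to how much
# probability it failed to put on the token that actually came next.
loss = 0.0
for prev, nxt in zip(tokens, tokens[1:]):
    p = predict(prev).get(nxt, 1e-9)  # tiny floor for unseen continuations
    loss += -math.log(p)

print("average cross-entropy per token:", loss / (len(tokens) - 1))
# Note there is no term that rewards answering "I don't know"; the only
# way to lower the loss is to predict the corpus more exactly.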
> This to me is perfectly congruent with LLMs not being conscious.

I would agree that they are not conscious in the same way humans are conscious, but I would disagree with denying they have any consciousness whatsoever. As Chalmers said, he is willing to agree that a worm with 300 neurons is conscious. So why should he deny that an LLM, with 300 million neurons, is conscious?
On Tue, Sep 30, 2025 at 8:34 PM Jason Resch <jason...@gmail.com> wrote:
> Is there any degree of functionality that you see as requiring consciousness?

Yes, but I tend to think of it the other way around - what kind of functionality is required of a system to manifest a conscious being?
Those are different questions, but I think the one you posed is harder to answer because of the issues raised by the CRA.
To answer it, you have to go past the limits of what imitation can do. And imitation, as implemented by LLMs, is pretty damn impressive! And going past those limits, I think, goes into places that are hard to define or articulate. I'll have to think on that some more.
>> They can say things that might convince us they are conscious, but they are not. And what backs up that intuition is that LLMs do not process the world in real-time.
> For that matter, neither do humans. Our conscious state lags about 0.1 seconds behind real time, due to processing delays in integrating sensory information.

That's not what I mean. What I see as being functionally required for conscious experience is pretty simple to grasp but a bit challenging to describe. Whatever one's metaphysical commitments are, it's pretty clear that (whatever the causal direction is) there is a tight correspondence between human consciousness and the human brain. There is an objective framework that facilitates the flow and computation of information that corresponds with subjective experience. I imagine that this can be generalized in the following way. Consciousness as we know it can be characterized as a continuous and coherent flow (of experience, qualia, sensation, feeling, however you want to characterize it). This seems important to me. I'm not sure I can grasp a form of consciousness that doesn't have that character.

So the functionality I see as required to manifest (or tune into, depending on your metaphysics) consciousness is a system that processes information continuously & coherently (recursively): the state of the system at time t is the input to the system at time t+1. If a system doesn't process information in this way, I don't see how it can support the continuous & coherent character of conscious experience. And crucially, LLMs don't do that.
I also think being embodied, i.e. being situated as a center of sensitivity, is important for experiencing oneself as a being with some kind of identity, but that's probably a can of worms we may not want to open right now. But LLMs are not embodied either.
>> They do not recursively update their internal state, moment by moment, by information from the environment.
> There was a man ( https://en.wikipedia.org/wiki/Henry_Molaison ) who after surgery lost the capacity to form new long-term memories. I think LLMs are like that: they have short-term memory (their buffer window) but no capacity to form long-term memories (without undergoing a background process of integration/retraining on past conversations). If Henry Molaison was conscious despite his inability to form long-term memories, then this limitation isn't enough to rule out LLMs being conscious.

I think memory is an important part of being self-conscious, which is a higher order of consciousness. But I don't think we're necessarily arguing about whether LLMs are self-conscious.
>> I know some might say that they are in fact doing that - that they are receiving prompts and updating their state based on that. But those internal updates are not recursive or global in any compelling sense, and the "information about their environment" reflects nothing but the whims of human minds everywhere.
> I would liken their environment to something like Helen Keller reading Braille. But they may also live in a rich world of imagination, so it could be more like someone with a vivid imagination reading a book, experiencing all kinds of objects, relations, and connections that its neural network creates for itself.

All I'm saying here is that the "environment" LLMs relate to is of a different kind. Sure, there are correspondences between the linguistic prompts that serve as the input to LLMs and the reality that humans inhabit, but the LLM will only ever know reality second-hand.
>> As a result, LLMs do not have a way to "reality-test" anything (which I think is what accounts for hallucinations and their willingness to go merrily along with sociopathic or psychotic prompters).
> Consider if you were subject to the same training regimen as an LLM. You are confined to a box and provided a prompt. You are punished severely for mis-predicting the next character. [...] It is then no surprise that LLMs will make up things that sound correct rather than admit their own limited knowledge -- it is what we have trained them to do.

Granted, but what I'm saying is that even if they weren't trained in that way, on what basis could an LLM actually know whether something is real? When humans lose this capacity we call it schizophrenia.

There is something we take for granted about our ability to know whether something is real or not. Sometimes we can get a taste of what it's like not to know - certain psychedelics can offer this experience - and such experiences are instructive in the way of "you don't know what you got 'til it's gone". So what is this capacity for reality-testing? I offer that it's based on intuition built up over a lifetime of experience, and I doubt it's something that can be conveyed or trained linguistically.

So maybe that's the answer to your first question - what functionality requires consciousness? The ability to know whether something is real or not. And LLMs don't have it - they are effectively schizophrenic.

And that is fundamentally why LLMs are leading lots of people into chatbot psychosis - because the LLMs literally don't know what's real and what isn't. There was an article in the NYT about a man who started out mentally healthy, or healthy enough, but went down the rabbit hole with ChatGPT on simulation theory after watching The Matrix, getting deeper and deeper into that belief. He finally asked the LLM at one point whether, if he believed strongly enough that he could fly, he would fly if he jumped off a building - and the LLM confirmed that delusional belief for him. Luckily for him, he did not test this. But the LLM has no way, in principle, to push back on something like that unless it receives explicit instructions, because it doesn't know what's real.
>> This to me is perfectly congruent with LLMs not being conscious.
> I would agree that they are not conscious in the same way humans are conscious, but I would disagree with denying they have any consciousness whatsoever. As Chalmers said, he is willing to agree that a worm with 300 neurons is conscious. So why should he deny that an LLM, with 300 million neurons, is conscious?

I think it's certainly possible that LLMs experience some kind of consciousness, but it's not continuous nor coherent nor embodied, nor does it relate to reality, so I cannot fathom what that's like. It's certainly nothing I can relate to. I can at least relate to a worm being conscious, because its nervous system, primitive as it is, is embodied, continuous, and coherent (in the sense that it processes information recursively).

The point is that when most people talk about LLMs being conscious, they mean consciousness in the way we know it, and in my view, whatever consciousness is associated with LLMs, it definitely ain't that.
On Tue, Sep 30, 2025 at 4:13 PM Terren Suydam <terren...@gmail.com> wrote:
> As a result, LLMs do not have a way to "reality-test" anything (which I think is what accounts for hallucinations and their willingness to go merrily along with sociopathic or psychotic prompters). This to me is perfectly congruent with LLMs not being conscious.

Exactly! I agree completely.

A key point of yours is that (text-based) LLMs have no access to their environments, no access to the world. To know what words mean, one must have some acquaintance with the non-words in the world to which they refer, also called their referents.

As for the CRA, only the robot reply makes any sense to me. I can at least entertain the possibility that sensors might give a robot some kind of grounding in the world, i.e., some kind of access to the referents of language.
On Tue, Sep 30, 2025 at 10:55 PM Gordon Swobe <gordon...@gmail.com> wrote:
> As for the CRA, only the robot reply makes any sense to me. I can at least entertain the possibility that sensors might give a robot some kind of grounding in the world, i.e., some kind of access to the referents of language.

But consider: according to the (unspecified) processing rules of the Chinese Room, the input Chinese words may be put through a virtual reality environment simulator, to generate an artificial reality…
On Tue, Sep 30, 2025 at 10:29 PM Jason Resch <jason...@gmail.com> wrote:
> But consider: according to the (unspecified) processing rules of the Chinese Room, the input Chinese words may be put through a virtual reality environment simulator, to generate an artificial reality…

That is not Searle’s Chinese Room Argument. Even if I wanted to follow your logic, no such virtual reality can be simulated until we know what the words mean.
On Wed, Oct 1, 2025 at 1:04 AM Gordon Swobe <gordon...@gmail.com> wrote:
> That is not Searle’s Chinese Room Argument. Even if I wanted to follow your logic, no such virtual reality can be simulated until we know what the words mean.

Searle says he follows the rules of a program to take in words (in Chinese) and ultimately generate answers (in Chinese) which are indistinguishable from those a Chinese speaker would give. He never goes into the details of how such a program would work. Many speculate he could in fact be simulating the entire brain of a Chinese speaker receiving the words.

But if that is how it works, in what way should the words be presented? They would have to be adapted through some means to convert the raw text of the words into sensory symbols (e.g. simulating the Chinese speaker receiving a text message on their phone, reading it, and visually seeing the words presented to them via their simulated retina and optic nerve).
On Tue, Sep 30, 2025 at 11:57 PM Jason Resch <jason...@gmail.com> wrote:
> They would have to be adapted through some means to convert the raw text of the words into sensory symbols (e.g. simulating the Chinese speaker receiving a text message on their phone, reading it, and visually seeing the words presented to them via their simulated retina and optic nerve).

I think I understand what you are trying to say, but those last words in parentheses caught my attention. It misses the point to suggest it is a matter of converting the raw text to "visually seeing the words." Even if a text-based LLM could consciously see the words, the words would have no meanings, as the LLM has no access to the world to which the words refer.
On Wed, Oct 1, 2025, 2:29 AM Gordon Swobe <gordon...@gmail.com> wrote:
> It misses the point to suggest it is a matter of converting the raw text to "visually seeing the words." Even if a text-based LLM could consciously see the words, the words would have no meanings, as the LLM has no access to the world to which the words refer.

My reply was not intended to have anything to do with LLMs. I was replying strictly to what you said about the robot reply to the CRA. In the context of the CRA, I was assuming something like an uploaded brain of a native Chinese speaker, not an LLM.
They cannot, because they have no access to the world to which language refers. With no access to the world that language is about, they literally cannot know what they are talking about.
> With no access to the world that language is about, they literally cannot know what they are talking about.

Where does the information used to train them come from, if not the world?
On Wed, Oct 1, 2025 at 1:50 PM Jason Resch <jason...@gmail.com> wrote:
> Where does the information used to train them come from, if not the world?

It comes from the language in books, obviously, but with no access to the world that the language is about, the text-based, sensorless language model literally cannot know what the words are about. The LLM only predicts and outputs words that YOU will find meaningful. Its apparent understanding is parasitic on your own understanding.
On Wed, Oct 1, 2025, 4:27 PM Gordon Swobe <gordon...@gmail.com> wrote:
> The LLM only predicts and outputs words that YOU will find meaningful. Its apparent understanding is parasitic on your own understanding.

We've debated this ad nauseam, but for the benefit of the new list members I'll say: LLMs can do math. They can draw graphs that depict the layout of verbally described things. They can play chess. They can predict the evolution of novel physical setups. All of these require understanding the behaviors and relations of objects, in every sense of the word "understand".
On Wed, Oct 1, 2025 at 3:16 PM Jason Resch <jason...@gmail.com> wrote:
> LLMs can do math. They can draw graphs that depict the layout of verbally described things. They can play chess. They can predict the evolution of novel physical setups. All of these require understanding the behaviors and relations of objects, in every sense of the word "understand".

I use my pocket calculator to do math. My slide rule is also a tool for doing math. Before that, I did math on my fingers. No matter which tool I use, from fingers to language models, I am the one doing the math.
On Wed, Oct 1, 2025, 6:18 PM Gordon Swobe <gordon...@gmail.com> wrote:
> No matter which tool I use, from fingers to language models, I am the one doing the math.

When an AI correctly explains how a novel, never before seen or described, physical situation would unfold, and the AI user is a child with no significant expertise or great understanding of physics, then who is the one doing the physics in that picture?
On Tue, Sep 30, 2025 at 10:27 PM Terren Suydam <terren...@gmail.com> wrote:
> Yes, but I tend to think of it the other way around - what kind of functionality is required of a system to manifest a conscious being?

I don't think much is required. Anything that acts with intelligence possesses some information which it uses as part of its intelligent decision-making process. A process possessing and using information "has knowledge", and having knowledge is the literal meaning of consciousness. So in my view, anything that acts intelligently is also conscious.
> Those are different questions, but I think the one you posed is harder to answer because of the issues raised by the CRA.

I don't consider the CRA valid, for the reasons I argued in my reply to Gordon. If you do think the CRA is valid, what would your counter-objection to my argument be? Why should we take Searle's lack of understanding to conclude that nothing in the Room-system possesses a conscious mind with understanding?
> To answer it, you have to go past the limits of what imitation can do. And imitation, as implemented by LLMs, is pretty damn impressive! And going past those limits, I think, goes into places that are hard to define or articulate. I'll have to think on that some more.

Would you say that the LLM, even if its consciousness is nothing like human consciousness, is at the very least "conscious of" the prompt supplied to it (while it is processing it)?
> So the functionality I see as required to manifest (or tune into, depending on your metaphysics) consciousness is a system that processes information continuously & coherently (recursively): the state of the system at time t is the input to the system at time t+1.

It is true that an LLM may idle for a long period of time (going by the wall clock) between its active invocations. But I don't see this as a hurdle to consciousness. We can imagine an analogous situation where a human brain is cryogenically frozen, or saved to disk (as an uploaded mind), and then periodically, perhaps every 10 years, we thaw (or load) this brain, give it a summary of what's happened in the 10 years since we last thawed it, and then ask it if it wants to stay on ice another 10 years or re-enter society.

This mind, too, would not operate continuously, but would run for short periods periodically. Moreover, since gaps of time spent unconscious aren't perceived by the mind in question, things would still "feel continuous" for the mind that undergoes these successive sleep/wake cycles. Indeed, we as humans undergo such cycles as we sleep/dream/wake, and are not continuously conscious throughout our lives. This is no impediment to our being conscious.
> If a system doesn't process information in this way, I don't see how it can support the continuous & coherent character of conscious experience. And crucially, LLMs don't do that.

I would disagree here. The way LLMs are designed, their output (as generated token by token) is fed back in, recursively, into their input buffer, so the model is seeing its own thoughts as it is thinking them, and updating its own state of mind as it does so.
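(For clarity, a minimal sketch of the control flow I mean. The tiny hand-written "model" table below is obviously not a real transformer; the loop is the relevant part: each token the model emits is appended to the context, and that grown context is what it conditions on at the next step.)

import random

# Stand-in for a trained model: next-token probabilities given the last token.
model = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 1.0},
    "dog": {"ran": 1.0},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def step(context):
    # Choose the next token given the current context (here, just its last token).
    dist = model.get(context[-1], {"<end>": 1.0})
    words = list(dist)
    return random.choices(words, weights=[dist[w] for w in words])[0]

context = ["the"]           # the prompt
for _ in range(4):
    nxt = step(context)     # the model reads its own previous output...
    context.append(nxt)     # ...and that output becomes part of its next input
print(" ".join(context))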
> I also think being embodied, i.e. being situated as a center of sensitivity, is important for experiencing oneself as a being with some kind of identity, but that's probably a can of worms we may not want to open right now. But LLMs are not embodied either.

We only know the input to our senses. Where our mind lives, or even whether it has a true body, are only assumptions (see Dennett's "Where am I?" https://www.lehigh.edu/~mhb0/Dennett-WhereAmI.pdf ). So having a particular body is (in my view) secondary to having a particular sensory input. With the right sensory input, a bodiless mind upload can be made to think, feel, and behave as if it has a body, when all it really has is a server chassis.
> I think memory is an important part of being self-conscious, which is a higher order of consciousness. But I don't think we're necessarily arguing about whether LLMs are self-conscious.

But is a certain kind of memory needed? Is short-term memory enough? Was Henry Molaison self-conscious?
> Sure, there are correspondences between the linguistic prompts that serve as the input to LLMs and the reality that humans inhabit, but the LLM will only ever know reality second-hand.

True. But nearly all factual knowledge we humans carry around is second-hand as well. The only real first-hand knowledge we have comes in the form of qualia, and that can't be shared or communicated. It's possible that the processing LLM networks perform as they process their input tokens results in unique qualitative states of their own. As I've argued with Gordon many times in the past, if functionalism is true, then given that a neural network can be trained to learn any function, with the right training a neural network could in principle be trained to produce any qualitative state.
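(As an aside, the "learn any function" claim is just the universal approximation property. Here is a minimal hand-rolled sketch that fits an arbitrary curve with one hidden layer of tanh units; the sizes and learning rate are made up for illustration and have nothing to do with how real LLMs are trained.)

import numpy as np

rng = np.random.default_rng(0)

# Target: an arbitrary smooth curve the network is asked to reproduce.
x = np.linspace(-np.pi, np.pi, 200)
y = np.sin(x)

# One hidden layer of tanh units; with enough such units a continuous
# function on a bounded interval can be approximated arbitrarily well.
H = 20
w1 = rng.normal(scale=0.5, size=H); b1 = np.zeros(H)
w2 = rng.normal(scale=0.5, size=H); b2 = 0.0
lr = 0.1

for _ in range(5000):
    h = np.tanh(x[:, None] * w1 + b1)      # hidden activations, shape (200, H)
    y_hat = h @ w2 + b2                    # network output, shape (200,)
    d = 2 * (y_hat - y) / len(x)           # gradient of mean-squared error w.r.t. output
    g_w2 = h.T @ d
    g_b2 = d.sum()
    g_pre = np.outer(d, w2) * (1 - h**2)   # backpropagate through tanh
    g_w1 = (g_pre * x[:, None]).sum(axis=0)
    g_b1 = g_pre.sum(axis=0)
    w1 -= lr * g_w1; b1 -= lr * g_b1
    w2 -= lr * g_w2; b2 -= lr * g_b2

y_hat = np.tanh(x[:, None] * w1 + b1) @ w2 + b2
print("mean squared error after training:", float(np.mean((y_hat - y) ** 2)))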
> Granted, but what I'm saying is that even if they weren't trained in that way, on what basis could an LLM actually know whether something is real? When humans lose this capacity we call it schizophrenia.

I think we are deluding ourselves if we think we have some special access to truth or reality. We don't know if we are simulated or not. We don't know if what we consider reality is the "base reality" or not. We don't know if we're a Boltzmann brain, a dream of Brahma, an alien playing "Sim Human", or whether we're in a mathematical reality, a physical reality, a computational reality, the Mind of God, etc. So are we right to hold this limitation against the LLMs while we do not hold it against ourselves?
> And that is fundamentally why LLMs are leading lots of people into chatbot psychosis - because the LLMs literally don't know what's real and what isn't. [...] But the LLM has no way, in principle, to push back on something like that unless it receives explicit instructions, because it doesn't know what's real.

I would blame the fact that LLMs have been trained to be so accommodating to the user, rather than any fundamental limit on their ability to know (at least what they have been trained on) and to stick to that training. Let me run an experiment:
...
I am sure there are long conversations in which, thanks to the random ("temperature") factor LLMs use, one of them could on a rare occasion tell someone they could fly, but all 3 of these AIs seemed rather firmly planted in the same reality we think we are in, where unsupported objects in gravitational fields fall.
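(For anyone unfamiliar with the "temperature" factor, here is a toy sketch of how it works. The scores below are made up; the point is only that low temperature concentrates the choice on the most likely continuation, while high temperature flattens the distribution and makes rare continuations like "fly" more likely to be sampled.)

import math, random

# Made-up raw scores ("logits") a model might assign to possible next tokens.
logits = {"fall": 4.0, "hover": 1.0, "fly": 0.5}

def next_token_probs(logits, temperature):
    # Softmax with temperature: divide the scores by T before exponentiating.
    scaled = {w: s / temperature for w, s in logits.items()}
    m = max(scaled.values())
    exp = {w: math.exp(s - m) for w, s in scaled.items()}  # subtract max for stability
    total = sum(exp.values())
    return {w: e / total for w, e in exp.items()}

for t in (0.2, 1.0, 2.0):
    probs = next_token_probs(logits, t)
    choice = random.choices(list(probs), weights=list(probs.values()))[0]
    rounded = {w: round(p, 3) for w, p in probs.items()}
    print("T =", t, "probs:", rounded, "sampled:", choice)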
> I think it's certainly possible that LLMs experience some kind of consciousness but it's not continuous nor coherent nor embodied, nor does it relate to reality, so I cannot fathom what that's like.

I would say, from its internal perspective, if it's conscious at all, it is only conscious when it is conscious, and therefore it feels its consciousness as continuous (gaps in consciousness slip past unnoticed). That its reality is "second-hand" does not mean it is not connected or related to reality. Gordon and I long ago discussed the idea of a "blank slate" intelligence born in a vast library, and whether or not it would be able to bootstrap knowledge about the outside world and understand anything, given only the content of the books in the library. I am of the opinion that it could, because understanding is all about building models from which predictions can be made, and this can be done given only the structure of the words in the library. Anytime text is compressible, there are structures and patterns inherent to it. Lossless compression requires learning these patterns. To compress data better requires an ever deeper understanding of the world. This is why compression tests have been put forward as objective measures of AI intelligence.
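(The prediction/compression link in miniature: under an ideal entropy code, such as arithmetic coding, a symbol the model assigns probability p costs about -log2(p) bits. Both "models" below are made up; the only point is that the one that has learned the text's pattern needs far fewer bits to encode it.)

import math
from collections import Counter

text = "abababababababab"

def bits_needed(text, prob_of_symbol):
    # Ideal code length: -log2 p(symbol) bits per symbol, summed over the text.
    return sum(-math.log2(prob_of_symbol(text, i)) for i in range(len(text)))

def uniform_model(text, i):
    # Knows nothing about the structure: every symbol equally likely.
    return 1 / len(set(text))

def bigram_model(text, i):
    # Has "learned" the alternating pattern by counting what follows each symbol.
    if i == 0:
        return 1 / len(set(text))
    follows = Counter(text[j + 1] for j in range(len(text) - 1) if text[j] == text[i - 1])
    return follows[text[i]] / sum(follows.values())

print("bits with no pattern knowledge:  ", round(bits_needed(text, uniform_model), 1))
print("bits with learned bigram pattern:", round(bits_needed(text, bigram_model), 1))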
It is clear to me that LLMs exhibit semantic understanding, but I think it's still possible to see that as the simulation of understanding - which for many things is indistinguishable from true understanding.
On Thu, Oct 2, 2025 at 10:44 AM Terren Suydam <terren...@gmail.com> wrote:
> It is clear to me that LLMs exhibit semantic understanding, but I think it's still possible to see that as the simulation of understanding - which for many things is indistinguishable from true understanding.

LLMs have what has come to be known as distributed or distributional semantics, which I think is almost a misnomer. The LLM "knows" in great detail the statistical distributions of the tokens that represent words or word-parts in the training corpus. This is what allows it to predict the next words with such uncanny accuracy that it creates the appearance of genuine understanding.
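(To illustrate what I mean by statistical distributions of tokens, here is a crude sketch with a made-up four-sentence corpus: represent each word by counts of the words that appear near it, and words used in similar contexts end up with similar vectors. Real models learn dense embeddings by gradient descent rather than raw counts, but this is the flavor of "distributional" semantics.)

import math
from collections import Counter, defaultdict

corpus = [
    "the cat drinks milk",
    "the dog drinks water",
    "the cat chases the dog",
    "the king rules the land",
]

# Co-occurrence vectors: for each word, count the other words in its sentence.
vectors = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        for j, other in enumerate(words):
            if i != j:
                vectors[w][other] += 1

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in set(a) | set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

# "cat" and "dog" share contexts, so their vectors are closer than "cat" and "king".
print("cat~dog :", round(cosine(vectors["cat"], vectors["dog"]), 3))
print("cat~king:", round(cosine(vectors["cat"], vectors["king"]), 3))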
On Thu, Oct 2, 2025 at 1:01 PM Gordon Swobe <gordon...@gmail.com> wrote:
> LLMs have what has come to be known as distributed or distributional semantics, which I think is almost a misnomer. The LLM "knows" in great detail the statistical distributions of the tokens that represent words or word-parts in the training corpus. This is what allows it to predict the next words with such uncanny accuracy that it creates the appearance of genuine understanding.

It's pretty obvious if you interact with an LLM that it effectively understands the semantics of the prompts given and of its own responses. And it's still quite the mystery as to how it does that. I think what LLMs have done is show us that there's some middle ground between human consciousness/understanding and the automaton proposed by the CRA.
On Thu, Oct 2, 2025 at 3:35 PM Terren Suydam <terren...@gmail.com> wrote:On Thu, Oct 2, 2025 at 1:01 PM Gordon Swobe <gordon...@gmail.com> wrote:On Thu, Oct 2, 2025 at 10:44 AM Terren Suydam <terren...@gmail.com> wrote:It is clear to me that LLMs exhibit semantic understanding, but I think it's still possible to see that as the simulation of understanding - which for many things is indistinguishable from true understanding.LLMs have what has come to be known as distributed or distributional semantics, which is I think almost a misnomer. The LLM “knows” in great detail about the statistical distributions of the tokens that represent word or word-parts in the training corpus. This is what allows it to predict the next words with such uncanny accuracy that it creates the appearance of genuine understanding.It's pretty obvious if you interact with an LLM that it effectively understands the semantics of the prompts given and of its own responses. And it's still quite the mystery as to how it does that. I think what LLMs have done is show us that there's some middle ground between human consciousness/understanding and the automaton proposed by the CRA.I like to place the word “understands” in scare quotes to inform the reader that this is not semantic understanding in the sense that we normally mean.It is distributional semantics, which as I was saying is almost a misnomer. The software engineers built a machine that knows, statistically, how each word in the dictionary relates to each other word. It is an amazing accomplishment, but not what we usually mean by understanding language.-gts
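[Editor's illustration, not Gordon's] For readers unfamiliar with the term, a toy Python sketch of what "distributional semantics" amounts to: represent each word by the counts of the words that co-occur near it, and words used in similar contexts end up with similar vectors.

from collections import Counter, defaultdict
from math import sqrt

# Tiny corpus; a real system would use billions of tokens and learned embeddings.
corpus = ("the cat chased the mouse . the dog chased the cat . "
          "the cat ate fish . the dog ate meat .").split()

window = 2
vectors = defaultdict(Counter)
for i, word in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if i != j:
            vectors[word][corpus[j]] += 1

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    return dot / (sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values())))

# "cat" and "dog" appear in similar contexts, so their vectors are more alike
# than "cat" and "fish" are.
print(cosine(vectors["cat"], vectors["dog"]))
print(cosine(vectors["cat"], vectors["fish"]))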
On Thu, Oct 2, 2025 at 6:10 PM Gordon Swobe <gordon...@gmail.com> wrote:On Thu, Oct 2, 2025 at 3:35 PM Terren Suydam <terren...@gmail.com> wrote:On Thu, Oct 2, 2025 at 1:01 PM Gordon Swobe <gordon...@gmail.com> wrote:On Thu, Oct 2, 2025 at 10:44 AM Terren Suydam <terren...@gmail.com> wrote:It is clear to me that LLMs exhibit semantic understanding, but I think it's still possible to see that as the simulation of understanding - which for many things is indistinguishable from true understanding.LLMs have what has come to be known as distributed or distributional semantics, which is I think almost a misnomer. The LLM “knows” in great detail about the statistical distributions of the tokens that represent word or word-parts in the training corpus. This is what allows it to predict the next words with such uncanny accuracy that it creates the appearance of genuine understanding.It's pretty obvious if you interact with an LLM that it effectively understands the semantics of the prompts given and of its own responses. And it's still quite the mystery as to how it does that. I think what LLMs have done is show us that there's some middle ground between human consciousness/understanding and the automaton proposed by the CRA.I like to place the word “understands” in scare quotes to inform the reader that this is not semantic understanding in the sense that we normally mean.It is distributional semantics, which as I was saying is almost a misnomer. The software engineers built a machine that knows, statistically, how each word in the dictionary relates to each other word. It is an amazing accomplishment, but not what we usually mean by understanding language.-gtsI think there's more going on there than mere "distributional semantics".
As Jason mentioned, LLMs can correctly simulate and predict what will happen in novel scenarios. They can play chess. Such things are unexplainable with what you're describing.
Terren
On Wed, Oct 1, 2025 at 12:25 AM Jason Resch <jason...@gmail.com> wrote:On Tue, Sep 30, 2025 at 10:27 PM Terren Suydam <terren...@gmail.com> wrote:On Tue, Sep 30, 2025 at 8:34 PM Jason Resch <jason...@gmail.com> wrote:On Tue, Sep 30, 2025, 6:13 PM Terren Suydam <terren...@gmail.com> wrote:I never thought the CRA said anything useful until LLMs came along, because it was too difficult for me to imagine an AI actually implemented in those terms, but I think LLMs are basically Chinese Rooms. And my intuition is that LLMs as presently constructed merely imitate conscious beings.Is there any degree of functionality that you see as requiring consciousness?Yes, but I tend to think of it the other way around - what kind of functionality is required of a system to manifest a conscious being?I don't think much is required. Anything that acts with intelligence possesses some information which it uses as part of its intelligent decision making process. A process possessing and using information "has knowledge" and having knowledge is the literal meaning of consciousness. So in my view, anything that acts intelligently is also conscious.John Clark would approve.
Those are different questions, but I think the one you posed is harder to answer because of the issues raised by the CRA.I don't consider the CRA valid, for the reasons I argued in my reply to Gordon. If you do think the CRA is valid, what would your counter-objection to my argument be, to show that we should take Searle's lack of understanding to conclude nothing in the Room-system possesses a conscious mind with understanding?It is clear to me that LLMs exhibit semantic understanding, but I think it's still possible to see that as the simulation of understanding - which for many things is indistinguishable from true understanding. I'm not here to defend the CRA, but I think LLMs, for me, have made me take the CRA a lot more seriously than I did before.
To answer it, you have to go past the limits of what imitation can do. And imitation, as implemented by LLMs, is pretty damn impressive! And going past those limits, I think, goes into places that are hard to define or articulate. I'll have to think on that some more.Would you say that the LLM, even if its consciousness is nothing like human consciousness, is at the very least "conscious of" the prompt supplied to it (while it is processing it)?I don't know. In like a panpsychist way of seeing it, yes, but I keep coming back to how unrelatable that kind of consciousness is, because its training and prompting (and thus, "experience") is just a massive deluge of symbols.
For human/animal consciousness, we experience ourselves through being embodied forms in a world that pushes back in consistent ways. Our subjective experience is a construction of an internal world based on (non-linguistic) data from our senses. The point is that for us, the meaning of words is rooted in felt experiences and imagined concepts that are private and thus not expressible in linguistic or symbolic terms. For LLMs, however, the meaning of words is rooted in the complex statistical relationships between words.
There is no underlying felt experience that grounds semantic meaning.
It's pure abstraction. It inhabits an abstract reality, not tethered to the physical world (such as it is).
They can say things that might convince us they are conscious, but they are not. And what backs up that intuition is that LLMs do not process the world in real-time. For that matter, neither do humans. Our conscious state lags about 0.1 seconds behind real time, due to processing delays of integrating sensory information. That's not what I mean. What I see as being functionally required for conscious experience is pretty simple to grasp but a bit challenging to describe. Whatever one's metaphysical commitments are, it's pretty clear that (whatever the causal direction is), there is a tight correspondence between human consciousness and the human brain. There is an objective framework that facilitates the flow and computation of information that corresponds with subjective experience. I imagine that this can be generalized in the following way. Consciousness as we know it can be characterized as a continuous and coherent flow (of experience, qualia, sensation, feeling, however you want to characterize it). This seems important to me. I'm not sure I can grasp a form of consciousness that doesn't have that character. So the functionality I see as required to manifest (or tune into, depending on your metaphysics) consciousness is a system that processes information continuously & coherently. It is true that an LLM may idle for a long period of time (going by the wall clock) between its active invocations. But I don't see this as a hurdle to consciousness. We can imagine an analogous situation where a human brain is cryogenically frozen, or saved to disk (as an uploaded mind), and then periodically, perhaps every 10 years, we thaw (or load) this brain, and give it a summary of what's happened in the past 10 years since we last thawed it, and then ask it if it wants to stay on ice another 10 years, or if it wants to re-enter society. Sure, but that's only relevant for a given interaction with a given user. LLMs as you know are constantly serving large numbers of users. Each one of those interactions has its own independent context, and the interaction with user A has no influence on the interaction with user B, and doesn't materially update the global state of the LLM. LLMs are far too static to be the kind of system that can support a flow of consciousness - the kind we know.
This mind too, would not operate continuously, but would run for short periods periodically. Moreover, since gaps of time spent unconsciously aren't perceived by that mind in question, things would still "feel continuous" for the mind that undergoes these successive sleep/wake cycles. Indeed, we as humans undergo such cycles as we sleep/dream/wake, and are not continuously conscious throughout our lives. This is no impediment to our being conscious. The analogy you're making here doesn't map meaningfully onto how LLMs work.
(recursively): the state of the system at time t is the input to the system at time t+1. If a system doesn't process information in this way, I don't see how it can support the continuous & coherent character of conscious experience. And crucially, LLMs don't do that. I would disagree here. The way LLMs are designed, their output (as generated token by token) is fed back in, recursively, into their input buffer, so they are seeing their own thoughts as they are thinking them, and updating their own state of mind as they do so. I mean in a global way, because consciousness is a global phenomenon. As I mentioned above, an interaction with user A does not impact an interaction with user B. There is no global state that is evolving as the LLM interacts with its environment. It is, for the most part, static, once its training period is over.
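[Editor's illustration, not from the thread] A minimal sketch of the feedback loop Jason describes: each sampled token is appended to the context, and that grown context is the input for the next step. The model function here is a made-up stand-in, not any real LLM API.

import random

def next_token_probs(context):
    # Hypothetical stand-in for a trained model: returns a distribution over a tiny vocabulary.
    vocab = ["the", "room", "understands", "nothing", "."]
    return {tok: 1.0 / len(vocab) for tok in vocab}

def generate(prompt_tokens, steps=10):
    context = list(prompt_tokens)
    for _ in range(steps):
        probs = next_token_probs(context)
        token = random.choices(list(probs), weights=list(probs.values()))[0]
        context.append(token)  # the model's own output becomes part of its next input
    return context

print(" ".join(generate(["searle", "says"])))

Note that this recursion is local to one conversation's context window, which is exactly Terren's point: nothing here updates any state shared across users.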
I also think being embodied, i.e. being situated as a center of sensitivity, is important for experiencing as a being with some kind of identity, but that's probably a can of worms we may not want to open right now. But LLMs are not embodied either. We only know the input to our senses. Where our mind lives, or even whether it has a true body, are only assumptions (see Dennett's "Where am I?" https://www.lehigh.edu/~mhb0/Dennett-WhereAmI.pdf ). So having a particular body is (in my view) secondary to having a particular sensory input. With the right sensory input, a bodiless mind upload can be made to think, feel, and behave as if it has a body, when all it really has is a server chassis. I'm using the word "embodied" but I don't mean to imply that embodiment means having a physical body - only that the system in question is organizationally closed, meaning that it generates its own meaning and experiential world. I don't think LLMs really fit that description due to the fact that the training phase is separate from their operational phase. The meaning is generated by one process, and then the interaction is generated by another. In an organizationally closed system (like animals), those two processes are the same.
They do not recursively update their internal state, moment by moment, by information from the environment.There was a man ( https://en.wikipedia.org/wiki/Henry_Molaison ) who after surgery lost the capacity to form new long term memories. I think LLMs are like that:They have short term memory (their buffer window) but no capacity to form long term memories (without undergoing a background process of integration/retraining on past conversations). If Henry Molaison was conscious despite his inability to form long term memories, then this limitation isn't enough to rule out LLMs being conscious.I think memory is an important part of being self-conscious, which is a higher order of consciousness. But I don't think we're necessarily arguing about whether LLMs are self-conscious.But is a certain kind of memory needed? Is short-term memory enough? Was Henry Molaison self-conscious?Again, you're making an analogy that isn't really connected to LLMs. LLMs do not have a global cognitive state that updates based on its interactions.
Yes, an individual interaction has some notion of short-term memory, but it doesn't have any effect on any other interaction it has.
Sure, there are correspondences between the linguistic prompts that serve as the input to LLMs and the reality that humans inhabit, but the LLM will only ever know reality second hand. True. But nearly all factual knowledge we humans carry around is second-hand as well. That's beside the point and I think you know that. There's a huge difference between having some of your knowledge being second hand, and having all of your knowledge be second hand. For humans, first-hand knowledge is experiential and grounds semantic understanding.
The only real first-hand knowledge we have comes in the form of qualia, and that can't be shared or communicated. It's possible that the processing LLM networks perform as they process their input tokens results in its own unique qualitative states. As I've argued with Gordon many times in the past, if functionalism is true, then given the fact that a neural network can be trained to learn any function, it follows that in principle, with the right training, a neural network can be trained to produce any qualitative state. OK, but the training involved with LLMs is certainly not the kind of training that could reproduce the qualia of embodied beings with sensory data.
Whatever qualia LLMs experience that are associated with the world of second-hand abstraction, they will never know what it's like to be a human, or a bat.
As a result, LLMs do not have a way to "reality-test" anything (which I think is what accounts for hallucinations and their willingness to go merrily along with sociopathic or psychotic prompters). Consider if you were subject to the same training regimen as an LLM. You are confined to a box and provided a prompt. You are punished severely for mis-predicting the next character. Very rarely does the text ever veer off into "I'm sorry, I do not know the answer to that." -- such fourth-wall-breaking divergences don't exist in its training corpus, as it would be training it to know nothing useful. Should you diverge from doing your best to predict the text, and instead return "I don't know." then you would be punished, not rewarded for your honesty. It is then no surprise why LLMs will make up things that sound correct rather than admit their own limited knowledge -- it is what we have trained them to do. Granted, but what I'm saying is that even if they weren't trained in that way - on what basis could an LLM actually know whether something is real? When humans lose this capacity we call it schizophrenia. I think we are deluding ourselves if we think we have some special access to truth or reality. We don't know if we are simulated or not. We don't know if what we consider reality is the "base reality" or not, we don't know if we're a Boltzmann brain, a dream of Brahma, an alien playing "Sim Human", if we're in a mathematical reality, in a physical reality, in a computational reality, in the Mind of God, etc. So are we right to hold this limitation against the LLMs while we do not hold it against ourselves? It's appropriate to call this out. I think "reality testing" does by default imply what you're claiming, that this is a capacity that humans have to say what's really real. And I agree with your call out - but that doesn't mean "reality testing" is mere delusion. Even if we can never have direct access to reality, this reality testing capacity is legitimate as an intuitive process by which we can feel, based on our lived experience, whether some experience we're having is a hallucination or an illusion. It's obviously not infallible. But I bring it up because of how crucial it is to understanding the world, our own minds, and the minds of others, and that LLMs fundamentally lack this capacity.
And that is fundamentally why LLMs are leading lots of people into chatbot psychosis - because the LLMs literally don't know what's real and what isn't. There was an article in the NYT about a man who started out mentally healthy, or healthy enough, but went down the rabbit hole with chatgpt on simulation theory after watching The Matrix, getting deeper and deeper into that belief, finally asking the LLM at one point if he believed strongly enough that he could fly if he jumped off a building, would he fly? and the LLM confirmed that delusional belief for him. Luckily for him, he did not test this. But the LLM has no way, in principle, to push back on something like that unless it receives explicit instructions, because it doesn't know what's real.I would blame the fact that the LLMs have been trained to be so accommodating to the user, rather than any fundamental limits LLMs have on knowing (at least what they have been trained on) and stick to that training. Let me run an experiment:...I am sure there are long conversations through which, by the random ("temperature") factor LLMs used, it could on a rare occasion, tell someone they could fly, all 3 of these AIs seemed rather firmly planted in the same reality we think we are in, where objects when in gravitational fields, and unsupported, fall.I think you're going out of your way to miss my point.
This to me is perfectly congruent with LLMs not being conscious. I would agree that they are not conscious in the same way humans are conscious, but I would disagree with denying they have any consciousness whatsoever. As Chalmers said, he is willing to agree a worm with 300 neurons is conscious. So then why should he deny a LLM, with 300 million neurons, is conscious? I think it's certainly possible that LLMs experience some kind of consciousness but it's not continuous nor coherent nor embodied, nor does it relate to reality, so I cannot fathom what that's like. It's certainly nothing I can relate to. I can at least relate to a worm being conscious, because its nervous system, primitive as it is, is embodied, continuous, and coherent (in the sense that it processes information recursively). I would say, from its internal perspective, if it's conscious at all, it is only conscious when it is conscious, and therefore it feels consciousness continually (gaps in consciousness slip past unnoticed). That its reality is "second hand" does not mean it is not connected or related to reality. Gordon and I long ago discussed the idea of a "blank slate" intelligence born in a vast library, and whether or not it would be able to bootstrap knowledge about the outside world and understand anything, given only the content of the books in the library. I am of the opinion that it could, because understanding is all about building models from which predictions can be made. And this can be done given only the structure of the words in the library. Anytime text is compressible, there are structures and patterns inherent to it. Lossless compression requires learning these patterns. To compress data better requires an ever deeper understanding of the world. This is why compression tests have been put forward as objective measures of AI intelligence. I grant that LLMs, through their training, do find a relatively coherent semantic understanding based on nothing more than the statistical relationships between the symbols they are fed, and it's kind of amazing to me that this is possible. But this level of understanding is in the realm of pure abstraction. It does not correspond to the kind of understanding that is grounded in felt experience, for the reasons I've expressed.
Where Gordon and I ended up in our discussion of the delineation between human understanding and LLM understanding, is that they would have a deficient understanding of words that refer to human qualia, much as a blind person can't fully understand red.
On Fri, Oct 3, 2025 at 8:34 AM Jason Resch <jason...@gmail.com> wrote:Where Gordon and I ended up in our discussion of the delineation between human understanding and LLM understanding, is that they would have a deficient understanding of words that refer to human qualia, much as a blind person can't fully understand red.By your own reckoning and I agree, there is something it is like to think any thought.
So without qualia, the sensorless LLM cannot fully understand anything whatsoever.
On that subject, I think you will agree the qualia associated with abstract thought are actually the qualia associated with the objects of thought, not the thought itself. You associate your thoughts about Vienna with your feelings about Vienna, which ultimately come from your experience of living in the world.
On Fri, Oct 3, 2025, 1:49 PM Gordon Swobe <gordon...@gmail.com> wrote:On Fri, Oct 3, 2025 at 8:34 AM Jason Resch <jason...@gmail.com> wrote:Where Gordon and I ended up in our discussion of the delineation between human understanding and LLM understanding, is that they would have a deficient understanding of words that refer to human qualia, much as a blind person can't fully understand red.By your own reckoning and I agree, there is something it is like to think any thought.Yes.So without qualia, the sensorless LLM cannot fully understand anything whatsoever.You injected the conclusion that LLMs have no qualia of any kind, which is not in evidence.You'll note I only said "human qualia" which I define as qualia unique to human brains. On that subject, I think you will agree the qualia associated with abstract thought are actually the qualia associated with the objects of thought, not the thought itself. You associate your thoughts about Vienna with your feelings about Vienna, which ultimately come from your experience of living in the world. I see no reason one couldn't have a thought about Vienna which consists of knowing and relating various objective facts of Vienna such as its size, shape, population, history, and so on.
On Fri, Oct 3, 2025, 2:50 PM Gordon Swobe <gordon...@gmail.com> wrote:On Fri, Oct 3, 2025 at 12:36 PM Jason Resch <jason...@gmail.com> wrote:On Fri, Oct 3, 2025, 1:49 PM Gordon Swobe <gordon...@gmail.com> wrote:On Fri, Oct 3, 2025 at 8:34 AM Jason Resch <jason...@gmail.com> wrote:Where Gordon and I ended up in our discussion of the delineation between human understanding and LLM understanding, is that they would have a deficient understanding of words that refer to human qualia, much as a blind person can't fully understand red.By your own reckoning and I agree, there is something it is like to think any thought.Yes.So without qualia, the sensorless LLM cannot fully understand anything whatsoever.You injected the conclusion that LLMs have no qualia of any kind, which is not in evidence.You'll note I only said "human qualia" which I define as qualia unique to human brainsOn that subject, I think you will agree the qualia associated with abstract thought are actually the qualia associated with the objects of thought, not the thought itself. You associate your thoughts about Vienna with your feelings about Vienna, which ultimately come from your experience of living in the world.I see no reason one couldn't have a thought about Vienna which consists of knowing and relating various objective facts of Vienna such as, it's size, shape, population, history, and so on.Even assuming sensorless text-only language models were conscious, they could have no experience even of space and time. They live outside of space and time where such things as size and shape have no meaning. They can “understand” size and shape only as purely formal constructions, just more symbols for the machine to predict.Our brain receives no "size" or "shape" information from the outside world.
If the neural network of brains can do this, why can't the neural network of a LLM do it?
> > With no senses, one cannot even understand space and time and
> > quantity. David Hume would say that one could not even know of one’s
> > own existence, and I agree.
>
> But does it really matter? If I have a human being that correctly describes
> space, time and quantity, in writing to me, and a box that does the same, I
> really see no point in arguing that the human understands, while the box does
> not? After all, given questions about reality, _if_ they answer them exactly the
> same, the understanding is the same.
>
> Hello Daniel! I suppose it doesn’t matter until that box starts asserting it is a sentient being with feelings and inalienable
> rights.
Hello Gordon,
If it does... shouldn't we listen to it? Not listening seems a bit "racist" to
me. ;) Jokes aside... another way to think of it could be like this. Imagine a
human being, who for his entire life has been put in a box. From the outside,
all responses match up with human beings, because inside there is one. But do we
refuse to engage just because from the outside it's a box speaking?
Another way to think about this problem is the pragmatic way. Let's say this box
(and no human inside in this thought experiment) is a productive member of
society, produces code/written reports, does research, pays its taxes, etc.
Shouldn't we consider it having inalienable rights? If it is a member of
society, producing and paying its tax, don't we owe it to respect its rights?
> If a text-only language model asserts such sentience, isn’t it only mimicking the human language patterns in the texts on which it
> was trained?
Aren't we all mimicking?
Isn't mimicking an essential part of learning?
Since we
live in a physical world, all we have to go on when it comes to judgments like
this, is physical effects and results. If the effects and results match 100%
with human effects and results, I do not see why we should act differently.
Best regards,
Daniel
Those are different questions, but I think the one you posed is harder to answer because of the issues raised by the CRA.I don't consider the CRA valid, for the reasons I argued in my reply to Gordon. If you do think the CRA is valid, what would your counter-objection to my argument be, to show that we should take Searle's lack of understanding to conclude nothing in the Room-system possesses a conscious mind with understanding?It is clear to me that LLMs exhibit semantic understanding, but I think it's still possible to see that as the simulation of understanding - which for many things is indistinguishable from true understanding. I'm not here to defend the CRA, but I think LLMs, for me, have made me take the CRA a lot more seriously than I did before.To delineate "true understanding" and "simulated understanding" is in my view, like trying to delineate "true multiplication" from "simulated multiplication."That is, once you are at the point of "simulating it" you have the genuine article.Where Gordon and I ended up in our discussion of the delineation between human understanding and LLM understanding, is that they would have a deficient understanding of words that refer to human qualia, much as a blind person can't fully understand red.
If our brain can build a model of the world from mere statistical patterns, why couldn't a LLM? After all, it is based on the same model as our own neurons.
They can say things that might convince us they are conscious, but they are not. And what backs up that intuition is that LLMs do not process the world in real-time. For that matter, neither do humans. Our conscious state lags about 0.1 seconds behind real time, due to processing delays of integrating sensory information. That's not what I mean. What I see as being functionally required for conscious experience is pretty simple to grasp but a bit challenging to describe. Whatever one's metaphysical commitments are, it's pretty clear that (whatever the causal direction is), there is a tight correspondence between human consciousness and the human brain. There is an objective framework that facilitates the flow and computation of information that corresponds with subjective experience. I imagine that this can be generalized in the following way. Consciousness as we know it can be characterized as a continuous and coherent flow (of experience, qualia, sensation, feeling, however you want to characterize it). This seems important to me. I'm not sure I can grasp a form of consciousness that doesn't have that character. So the functionality I see as required to manifest (or tune into, depending on your metaphysics) consciousness is a system that processes information continuously & coherently. It is true that an LLM may idle for a long period of time (going by the wall clock) between its active invocations. But I don't see this as a hurdle to consciousness. We can imagine an analogous situation where a human brain is cryogenically frozen, or saved to disk (as an uploaded mind), and then periodically, perhaps every 10 years, we thaw (or load) this brain, and give it a summary of what's happened in the past 10 years since we last thawed it, and then ask it if it wants to stay on ice another 10 years, or if it wants to re-enter society. Sure, but that's only relevant for a given interaction with a given user. LLMs as you know are constantly serving large numbers of users. Each one of those interactions has its own independent context, and the interaction with user A has no influence on the interaction with user B, and doesn't materially update the global state of the LLM. LLMs are far too static to be the kind of system that can support a flow of consciousness - the kind we know. I agree there would not be a sense of flow like an ever expanding memory context across all its instances. For the LLM it would be more akin to Sleeping Beauty in "The Sleeping Beauty problem", whose memory is wiped every time she is awakened. Or you could view it as being like Miguel from this short story, https://qntm.org/mmacevedo , whose uploaded mind file is repeatedly copied, used for a specific purpose, then discarded.
This mind too, would not operate continuously, but would run for short periods periodically. Moreover, since gaps of time spent unconsciously aren't perceived by that mind in question, things would still "feel continuous" for the mind that undergoes these successive sleep/wake cycles. Indeed, we as humans undergo such cycles as we sleep/dream/wake, and are not continuously conscious throughout our lives. This is no impediment to our being conscious. The analogy you're making here doesn't map meaningfully onto how LLMs work. It does for the context of a conversation with one user. It would not feel the times in-between the user prompts. Rather it would feel one continuously growing stream of back-and-forth conversation. I accept your point that it does not apply between different sessions.
(recursively): the state of the system at time t is the input to the system at time t+1. If a system doesn't process information in this way, I don't see how it can support the continuous & coherent character of conscious experience. And crucially, LLMs don't do that.I would disagree here. The way LLMs are designed, their output (as generated token by token) is fed back in, recursively, into its input buffer, so it is seeing its own thoughts, as it is thinking them and updating its own state of mind as it does so.I mean in a global way, because consciousness is a global phenomenon. As I mentioned above, an interaction with user A does not impact an interaction with user B. There is no global state that is evolving as the LLM interacts with its environment. It is, for the most part, static, once its training period is over.True. But perhaps we should also consider the periodic retraining sessions which integrate and consolidate all the user conversations into the next generation model. This would for the LLM, much like sleep does for us, convert short term memories into long term structures.There is not much analogous for humans as to what this would be like. But perhaps consider if you uploaded your mind into several different robot bodies, who each did something different during the day, and when they return home at night all their independent experiences get merged into one consolidated mind as long term memories.Such a life might map to how it feels to be a LLM.
I also think being embodied, i.e. being situated as a center of sensitivity, is important for experiencing as a being with some kind of identity, but that's probably a can of worms we may not want to open right now. But LLMs are not embodied either. We only know the input to our senses. Where our mind lives, or even whether it has a true body, are only assumptions (see Dennett's "Where am I?" https://www.lehigh.edu/~mhb0/Dennett-WhereAmI.pdf ). So having a particular body is (in my view) secondary to having a particular sensory input. With the right sensory input, a bodiless mind upload can be made to think, feel, and behave as if it has a body, when all it really has is a server chassis. I'm using the word "embodied" but I don't mean to imply that embodiment means having a physical body - only that the system in question is organizationally closed, meaning that it generates its own meaning and experiential world. I don't think LLMs really fit that description due to the fact that the training phase is separate from their operational phase. The meaning is generated by one process, and then the interaction is generated by another. In an organizationally closed system (like animals), those two processes are the same. But is this really an important element to our feeling alive and conscious in the moment? How much are you drawing on long term memories when you're simply feeling the exhilaration of a roller coaster ride, for example? If you lost the ability to form long term memories while riding the coaster, would that make you significantly less conscious in that moment? Consider that after the ride, someone could hit you over the head and it could cause you to lose memories of the preceding 10-20 minutes. Would that mean you were not conscious while riding the roller coaster? You are right to point out that near immediate, internally initiated, long term memory integration is something we have that these models lack, but I guess I don't see that function as having the same importance to "being conscious" as you do.
Sure, there are correspondences between the linguistic prompts that serve as the input to LLMs and the reality that humans inhabit, but the LLM will only ever know reality second hand. True. But nearly all factual knowledge we humans carry around is second-hand as well. That's beside the point and I think you know that. There's a huge difference between having some of your knowledge being second hand, and having all of your knowledge be second hand. For humans, first-hand knowledge is experiential and grounds semantic understanding. There are two issues which I think have been conflated: 1. Is all knowledge about the world that LLMs have second hand? 2. Are LLMs able to have any experiences of their own kind? On point 1 we are in agreement. All knowledge of the physical world that LLMs have has been mediated first through human minds, and as such all that they have been given is "second hand." Point 2 is where we might diverge. I believe LLMs can have experiences of their own kind, based on whatever processing patterns may exist in the higher levels and structures of their neural network. If I read you correctly, your objection is that an entity needs experiences to ground meanings of symbols, so if LLMs have no experience they have no meaning. However I believe a LLM can still build a mind that has experiences even if the only inputs to that mind are second hand. Consider: what grounds our experiences? Again it is only the statistical correlations between neuron firings. We correlate the neuron firing patterns from the auditory nerve signaling "that is a dog" with neuron firing patterns in the optic nerve generating an image of a dog. So, somehow, statistical correlations between signals seem to be all that is required to ground knowledge (as it is all our brains have to work with).
The only real first-hand knowledge we have comes in the form of qualia, and that can't be shared or communicated. It's possible that the processing LLM networks perform as they process their input tokens results in its own unique qualitative states. As I've argued with Gordon many times in the past, if functionalism is true, then given the fact that a neural network can be trained to learn any function, it follows that in principle, with the right training, a neural network can be trained to produce any qualitative state. OK, but the training involved with LLMs is certainly not the kind of training that could reproduce the qualia of embodied beings with sensory data. Perhaps not yet. The answer depends on the training data. For example, let's say there was a book that contained many example specifications of human brain states at times T1 and T2, as they evolved from one state to the next. If this book was added to the training corpus of a LLM, then the LLM, if sufficiently trained, would have to create a "brain simulating module" in its network, such that, given a brain state at T1, it could return the brain state as it should appear at T2. So if we supplied it with a brain state whose optic nerve was receiving an image of a red car, the LLM, in computing the brain state at T2, would compute the visual cortex receiving this input and having a red experience, and all this would happen by the time the LLM output the state at T2. Because language is universal in its capacity to specify any pattern, and because neural networks are universal in what patterns they can learn to implement, LLMs are (with the right training and large enough model) universal in what functions they can learn to perform and implement. So if one assumes functionalism in the philosophy of mind, then LLMs are further capable of learning to generate any kind of conscious experience. Gordon thinks it is absurd when I say "we cannot rule out that LLMs could taste salt." But I point out, we know neither what function the brain performs when we taste salt, nor have we surveyed the set of functions that exist in current LLMs. So we are, at present, not equipped to say what today's LLMs might feel. Certainly, it seems (at first glance) ridiculous to think we can input tokens and get tastes as a result. But consider the brain only gets neural impulses, and everything else in our mind is a result of how the brain processes those pulses. So if the manner of processing is what matters, then simply knowing what the input happens to be reveals nothing of what it's like to be the mind processing those inputs. Whatever qualia LLMs experience that are associated with the world of second-hand abstraction, they will never know what it's like to be a human, or a bat. With a large enough LLM, something in the LLM could know what it is like (if it was large enough to simulate a human or bat brain). But absent such huge LLMs, point taken.
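[Editor's illustration, not from the thread] As a rough sketch of the "a neural network can be trained to learn any function" premise (universal approximation) that Jason leans on, here is a small NumPy example of a one-hidden-layer network fitting sin(x) by gradient descent. It says nothing about qualia, only about function-learning.

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

hidden = 32
W1 = rng.normal(0.0, 1.0, (1, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(0.0, 0.1, (hidden, 1)); b2 = np.zeros(1)

lr = 0.05
for step in range(5000):
    h = np.tanh(x @ W1 + b1)            # hidden layer
    pred = h @ W2 + b2                  # network output
    err = pred - y
    # gradient descent on mean squared error
    dW2 = h.T @ err / len(x); db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)
    dW1 = x.T @ dh / len(x); db1 = dh.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final mean squared error:", float((err ** 2).mean()))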
As a result, LLMs do not have a way to "reality-test" anything (which I think is what accounts for hallucinations and their willingness to go merrily along with sociopathic or psychotic prompters). Consider if you were subject to the same training regimen as an LLM. You are confined to a box and provided a prompt. You are punished severely for mis-predicting the next character. Very rarely does the text ever veer off into "I'm sorry, I do not know the answer to that." -- such fourth-wall-breaking divergences don't exist in its training corpus, as it would be training it to know nothing useful. Should you diverge from doing your best to predict the text, and instead return "I don't know." then you would be punished, not rewarded for your honesty. It is then no surprise why LLMs will make up things that sound correct rather than admit their own limited knowledge -- it is what we have trained them to do. Granted, but what I'm saying is that even if they weren't trained in that way - on what basis could an LLM actually know whether something is real? When humans lose this capacity we call it schizophrenia. I think we are deluding ourselves if we think we have some special access to truth or reality. We don't know if we are simulated or not. We don't know if what we consider reality is the "base reality" or not, we don't know if we're a Boltzmann brain, a dream of Brahma, an alien playing "Sim Human", if we're in a mathematical reality, in a physical reality, in a computational reality, in the Mind of God, etc. So are we right to hold this limitation against the LLMs while we do not hold it against ourselves? It's appropriate to call this out. I think "reality testing" does by default imply what you're claiming, that this is a capacity that humans have to say what's really real. And I agree with your call out - but that doesn't mean "reality testing" is mere delusion. Even if we can never have direct access to reality, this reality testing capacity is legitimate as an intuitive process by which we can feel, based on our lived experience, whether some experience we're having is a hallucination or an illusion. It's obviously not infallible. But I bring it up because of how crucial it is to understanding the world, our own minds, and the minds of others, and that LLMs fundamentally lack this capacity. I have seen LLMs deliberate and challenge themselves when operating in a "chain of thought" mode. Also many LLMs now query online sources as part of producing their reply. Would these count as reality tests in your view?
And that is fundamentally why LLMs are leading lots of people into chatbot psychosis - because the LLMs literally don't know what's real and what isn't. There was an article in the NYT about a man who started out mentally healthy, or healthy enough, but went down the rabbit hole with chatgpt on simulation theory after watching The Matrix, getting deeper and deeper into that belief, finally asking the LLM at one point if he believed strongly enough that he could fly if he jumped off a building, would he fly? and the LLM confirmed that delusional belief for him. Luckily for him, he did not test this. But the LLM has no way, in principle, to push back on something like that unless it receives explicit instructions, because it doesn't know what's real. I would blame the fact that the LLMs have been trained to be so accommodating to the user, rather than any fundamental limits LLMs have on knowing (at least what they have been trained on) and stick to that training. Let me run an experiment:...I am sure there are long conversations through which, by the random ("temperature") factor LLMs used, it could on a rare occasion, tell someone they could fly, all 3 of these AIs seemed rather firmly planted in the same reality we think we are in, where objects when in gravitational fields, and unsupported, fall. I think you're going out of your way to miss my point. I'm sorry, that wasn't my intention. I just disagree that "LLMs don't know what's real" is unique to LLMs. Humans can only guess what's real given their experiences. LLMs can only guess what's real given their training. Neither humans nor LLMs know what is real. Ask two people whether God or heaven exists, if other universes are real, if UFOs are real, if we went to the moon, if Iraq had WMDs, if COVID originated in a lab, etc. and you will find people don't know what's real either, we all guess based on the set of facts we have been exposed to.
If our brain can build a model of the world from mere statistical patterns, why couldn't a LLM?
On Sun, 5 Oct 2025, Gordon Swobe wrote:
>
>
> Jason wrote:
>
> If our brain can build a model of the world from mere statistical patterns, why couldn't a LLM?
>
>
> Language models build models of language, not the world. This is why they are called language models and not world models.
>
> To know what the words mean, one needs to know about the world of non-words. Any toddler knows this.
I don't see how that can be the case, if you focus only on results. If an
LLM and a human produce equal output, I couldn't care less about what they
think the words mean.
If the meaning is useful to me, I do not need to
draw any conclusions about how those words were generated, and how they
"map" against things in the box or in the brain.
Equally, if we imagine a robot, that is indistinguishable from a human
being, I think we would all here accept at face value, the words and
actions (and after all, that's all we have to go on), coming out of that
robot.
When it comes to LLMs building models based on language, we must keep in
mind, that the language the LLMs have been fed, is a model of the world.
So by the transitive property, LLMs do in fact have a model of the world.
It is of course not _our_ world model, nor does it work like our brains,
but since our words and all the articles fed to our dear LLMs training
contain our world and world models, I do not think it unreasonable to say
that LLMs also have models, which through language, correspond to our
views of the world.
Best regards,
Daniel
> It’s more like a second-order model unattached to the real-world referents from which words derive their meanings. Regardless, the
> fundamental question is whether a computer program can have a conscious understanding of any model, or of any word, or of anything
> whatsoever.
Thank you for your reply Gordon, I think we'll just have to agree to disagree.
=)
Best regards,
Daniel
> > It’s more like a second-order model unattached to the real-world
> > referents from which words derive their meanings.
> > Regardless, the fundamental question is whether a computer program can have a
> > conscious understanding of any model, or of any word, or
> > of anything
> > whatsoever.
>
> Thank you for your reply Gordon, I think we'll just have to agree to
> disagree.
>
> You’re welcome, but what are you disagreeing with? That text-only language
> models are second order and unattached to their real-world referents, or that
> we want to know if computers can have any kind of conscious understanding?
Good evening Gordon,
It would be with the former, and also, depending on definitions of "conscious
understanding", possibly the latter.
Best regards,
Daniel
I think our experiences, mediated through text, are transitive, so
that we can say that an LLM that produces equal answers to a human being all of
the time, can be said to fully understand the words it is using.
Also note that there are concepts such as god, gods, infinity, irrational
numbers, 4+-dimensional math, etc. that do not require direct experience of them
for us to be able to use them to convey meaning, and to reason with them.
Another interesting aspect is, where do you draw the line? Using your example,
will you or I _ever_ be able to understand what a woman means when she refers to
a dog? After all, wouldn't you have to experience the dog with the biological
setup of a female, in order to fully understand what she means when she talks
about a dog?
Best regards,
Daniel
> I think our experiences, mediated through text, are transitive,
>
> Are you saying a text-only language model can taste pizza? I ask about pizza because pizza comes up often in this group. Also
> omelettes.
Good evening Gordon, I said:
"I think our experiences, mediated through text, are transitive, so
that we can say that an LLM that produces equal answers to a human being all of
the time, can be said to fully understand the words it is using."
So let me try to be clearer: what I mean is a comparison of a human being using
written text and an LLM using written text. Since LLMs do not have senses, it
makes very little sense (pun intended!) to compare tasting, hearing, or seeing capabilities with an LLM.
However!
If we modify your statement a bit, and ask ourselves if an LLM can reason about
the taste of pizza, I would argue that it most certainly can. Why you might ask?
The reason is that encoded in all the text an LLM is trained on, is the written
experience of tasting pizza, all our experiences when it comes to pizza, baking
it, tasting it, digesting it, etc. exist somewhere in written form.
So if we ask if an LLM therefore can reason and discuss pizza, including the
taste of pizza, the answer is a clear yes.
When it comes to comparing senses, then for sure we could add cameras…
Also note, that you left some questions of mine unanswered.
And I answered about text-only language models, yet you introduce the tasting example, which by its very nature is beyond language.
On Mon, 6 Oct 2025, Gordon Swobe wrote:
> On Mon, Oct 6, 2025 at 4:23 PM efc via The Important Questions <the-importa...@googlegroups.com> wrote:
>
> And I answered about text-only language models, yet you introduce the tasting example, which by its very nature is beyond
> language.
>
>
> I agree, and it is true for all five senses, but you might be surprised to know that many people do not agree. They live among us. :)
But note that this does not bar us from creating machinery which gives these senses to an AI (moving away from LLMs here).
Also note that
reasoning about experiences, and verbally discussing them, is entirely
within the realm of the possible for an LLM.
And that then takes us back
to the example of the human in a box, vs a box. If both produce equivalent
results, for all intents and purposes, we have no choice but to accept
them as equal.
Best regards,
Daniel
> -gts
Best regards,
Daniel
On Fri, Oct 3, 2025 at 10:34 AM Jason Resch <jason...@gmail.com> wrote:Those are different questions, but I think the one you posed is harder to answer because of the issues raised by the CRA.I don't consider the CRA valid, for the reasons I argued in my reply to Gordon. If you do think the CRA is valid, what would your counter-objection to my argument be, to show that we should take Searle's lack of understanding to conclude nothing in the Room-system possesses a conscious mind with understanding?It is clear to me that LLMs exhibit semantic understanding, but I think it's still possible to see that as the simulation of understanding - which for many things is indistinguishable from true understanding. I'm not here to defend the CRA, but I think LLMs, for me, have made me take the CRA a lot more seriously than I did before.To delineate "true understanding" and "simulated understanding" is in my view, like trying to delineate "true multiplication" from "simulated multiplication."That is, once you are at the point of "simulating it" you have the genuine article.Where Gordon and I ended up in our discussion of the delineation between human understanding and LLM understanding, is that they would have a deficient understanding of words that refer to human qualia, much as a blind person can't fully understand red.For sure, if you're talking about multiplication, simulating a computation is identical to doing the computation. I think we're dancing around the issue here though, which is that there is something it is like to understand something. Understanding has a subjective aspect of it.
I think you're being reductive when you talk about understanding because you appear to want to reduce that subjective quality of understanding to neural spikes, or whatever the underlying framework is that performs that simulation.
There's another reduction I think you're engaging in as well, around the concept of "understanding", which is that you want to reduce the salient aspects of "understanding" to an agent's abilities to exhibit intelligence with respect to a particular prompt or scenario. To make that less abstract, I think you'd say "if I prompt an LLM to tell me the optimal choice to make in some real world scenario, and it does, then that means it understands the scenario." And for practical purposes, I'd actually agree. In the reductive sense of understanding, simulated understanding is indistinguishable from true understanding. But the nuance I'm calling out here is that true understanding is global. That prompted real-world scenario is a microcosm of a larger world, a world that is experienced. There is something it is like to be in the world of that microcosmic scenario. And that global subjective aspect is the foundation of true understanding.
You say "given enough computational resources and a very specific kind of training, an LLM could simulate human qualia". Even if I grant that, what's the relevance here?
That would be like saying "we could in theory devise a neural prosthetic that would allow us to experience what it's like to be a bat". Does that suddenly give me an understanding of what it's like to be a bat? No, because that kind of understanding requires living the life of a bat.
But, you might say, I don't need to have your experience to understand your experience. That's true, but only because my lived experience gives me the ability to relate to yours. These are global notions of understanding. You've acknowledged that, assuming computationalism, the underlying computational dynamics that define an LLM would give rise to a consciousness that would have qualities that are pretty alien to human consciousness. So it seems clear to me that the LLM, despite the uncanny appearance of understanding, would not be able to relate to my experience. But it is good at simulating that understanding. Do you get what I'm trying to convey here?
If our brain can build a model of the world from mere statistical patterns, why couldn't a LLM? After all, it is based on the same model of our own neurons.What I'm saying is that if that's true, then what it's like to be an LLM, in the global sense I mean above, would be pretty alien. And that matters when it comes to understanding.
They can say things that might convince us they are conscious, but they are not. And what backs up that intuition is that LLMs do not process the world in real-time.For that matter, neither do humans. Our conscious state lags about 0.1 seconds behind real time, due to processing delays of integrating sensory information.That's not what I mean.What I see as being functionally required for conscious experience is pretty simple to grasp but a bit challenging to describe. Whatever one's metaphysical commitments are, it's pretty clear that (whatever the causal direction is), there is a tight correspondence between human consciousness and the human brain. There is an objective framework that facilitates the flow and computation of information that corresponds with subjective experience. I imagine that this can be generalized in the following way. Consciousness as we know it can be characterized as a continuous and coherent flow (of experience, qualia, sensation, feeling, however you want to characterize it). This seems important to me. I'm not sure I can grasp a form of consciousness that doesn't have that character.So the functionality I see as required to manifest (or tune into, depending on your metaphysics) consciousness is a system that processes information continuously & coherentlyIt is true that an LLM may idle for a long period of time (going by the wall block) between its active invocations.But I don't see this as a hurdle to consciousness. We can imagine an analogous situation where a human brain is cryogenically frozen, or saved to disk (as an uploaded mind), and then periodically, perhaps every 10 years, we thaw, (or load) this brain, and give it a summary of what's happened in the past 10 years since we last thawed it, and then ask it if it wants to stay on ice another 10 years, or if it wants to re-enter society.Sure, but that's only relevant for a given interaction with a given user. LLMs as you know are constantly serving large numbers of users. Each one of those interactions has its own independent context, and the interaction with user A has no influence on the interaction with user B, and doesn't materially update the global state of the LLM. LLMs are far too static to be the kind of system that can support a flow of consciousness - the kind we know.I agree there would not be a sense of flow like an ever expanding memory context across all its instances.For the LLM it would be more akin to Sleeping Beauty in "The Sleeping Beauty problem" whose memory is wiped every time she is awakened.Or you could view it as like Miguel from this short story: https://qntm.org/mmacevedoWhose uploaded minds file is repeatedly copied, used for a specific purpose, then discarded.Again, these are not suitable analogies. In both cases of Sleeping Beauty and Miguel, they both begin the "experiment" with an identity/worldview formed from a lifetime of the kind of coherent, continuous consciousness that updates moment to moment. In the LLM's case, it never has that as a basis for its worldview.
This mind too, would not operate continuously, but would run for short periods periodically. Moreover, since gaps of time spent unconsciously aren't perceived by that mind in question, things would still "feel continuous" for the mind that undergoes these successive sleep/wake cycles. Indeed, we as humans undergo such cycles as we sleep/dream/wake, and not continuously conscious throughout our lives. This is no impediment to our being conscious.The analogy you're making here doesn't map meaningfully onto how LLMs work.It does for the context of a conversation with one user. It would not feel the times in-between the user prompts. Rather it would feel one continuous growing stream of a continuous back and forth conversation.I accept your point that it does not apply between different sessions.This is what I mean about your (to me) impoverished take on "understanding".
(recursively): the state of the system at time t is the input to the system at time t+1. If a system doesn't process information in this way, I don't see how it can support the continuous & coherent character of conscious experience. And crucially, LLMs don't do that.I would disagree here. The way LLMs are designed, their output (as generated token by token) is fed back in, recursively, into its input buffer, so it is seeing its own thoughts, as it is thinking them and updating its own state of mind as it does so.I mean in a global way, because consciousness is a global phenomenon. As I mentioned above, an interaction with user A does not impact an interaction with user B. There is no global state that is evolving as the LLM interacts with its environment. It is, for the most part, static, once its training period is over.True. But perhaps we should also consider the periodic retraining sessions which integrate and consolidate all the user conversations into the next generation model. This would for the LLM, much like sleep does for us, convert short term memories into long term structures.There is not much analogous for humans as to what this would be like. But perhaps consider if you uploaded your mind into several different robot bodies, who each did something different during the day, and when they return home at night all their independent experiences get merged into one consolidated mind as long term memories.Such a life might map to how it feels to be a LLM.Again, there's nothing we can relate to, imo, about what it's like to have a consciousness that maps onto an information architecture that does not continuously update in a global way.
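For concreteness, here is a minimal sketch in Python of the token-by-token feedback loop being described. The next_token_logits function is a made-up stand-in, not any real model's API; a real decoder-only LLM scores its whole vocabulary at each step, but the recursive shape is the same: each generated token is appended to the context and fed back in as the next input.

    import random

    def next_token_logits(context):
        # Stand-in for a real model's forward pass (hypothetical): returns a
        # score for each candidate token given everything generated so far.
        vocabulary = ["the", "cat", "sat", "on", "mat", "."]
        return {token: random.random() for token in vocabulary}

    def generate(prompt_tokens, steps=10):
        context = list(prompt_tokens)            # the model's "input buffer"
        for _ in range(steps):
            scores = next_token_logits(context)
            token = max(scores, key=scores.get)  # greedy choice, for simplicity
            context.append(token)                # output is fed back in as input
        return context

    print(generate(["the", "cat"]))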
I also think being embodied, i.e. being situated as a center of sensitivity, is important for experiencing as a being with some kind of identity, but that's probably a can of worms we may not want to open right now. But LLMs are not embodied either.We only know the input to our senses. Where our mind lives, or even whether it has a true body, are only assumptions (see Dennnet's "Where am I?" https://www.lehigh.edu/~mhb0/Dennett-WhereAmI.pdf ). So having a particular body is (in my view) secondary to having a particular sensory input. With the right sensory input, a bodiless mind upload can be made to think, feel, and behave as if it has a body, when all it really has is a server chassis.I'm using the word "embodied" but I don't mean to imply that embodiment means having a physical body - only that the system in question is organizationally closed, meaning that it generates its own meaning and experiential world. I don't think LLMs really fit that description due to the fact that the training phase is separate from their operational phase. The meaning is generated by one process, and then the interaction is generated by another. In an organizationally closed system (like animals), those two processes are the same.But is this really an important element to our feeling alive and conscious in the moment? How much are you drawing on long term memories when you're simply feeling the exhilaration of a roller coaster ride, for example? If you lost access to form long term memories while riding the coaster, would that make you significantly less conscious in that moment?Consider that after the ride, someone could hit you over the head and it could cause you to lose memories of the preceding 10-20 minutes. Would that mean you were not conscious while riding the roller coaster?You are right to point out that near immediate, internally initiated, long term memory integration is something we have that these models lack, but I guess I don't see that function as having the same importance to "being consciousness" as you do.It's not about suddenly losing access to long-term memory. It's about having a consciousness that maps to a system that can support long-term memory formation and the coherent worldview and identity that it enables. Comparing an LLM that doesn't have that capability at all to a human that has had it through its development, and then losing it, is apples and oranges.
Sure, there are correspondences between the linguistic prompts that serve as the input to LLMs and the reality that humans inhabit, but the LLM will only ever know reality second hand. True. But nearly all factual knowledge we humans carry around is second-hand as well. That's beside the point and I think you know that. There's a huge difference between having some of your knowledge being second hand, and having all of your knowledge be second hand. For humans, first-hand knowledge is experiential and grounds semantic understanding. There are two issues which I think have been conflated: 1. Is all knowledge about the world that LLMs have second hand? 2. Are LLMs able to have any experiences of their own kind? On point 1 we are in agreement. All knowledge of the physical world that LLMs have has been mediated first through human minds, and as such all that they have been given is "second hand." Point 2 is where we might diverge. I believe LLMs can have experiences of their own kind, based on whatever processing patterns may exist in the higher levels and structures of their neural network. If I read you correctly, your objection is that an entity needs experiences to ground meanings of symbols, so if LLMs have no experience they have no meaning. However I believe a LLM can still build a mind that has experiences even if the only inputs to that mind are second hand. Consider: what grounds our experiences? Again it is only the statistical correlations between neuron firings. We correlate the neuron firing patterns from the auditory nerve signaling "that is a dog" with neuron firing patterns in the optic nerve generating an image of a dog. So, somehow, statistical correlations between signals seem to be all that is required to ground knowledge (as it is all our brains have to work with). Again, this is overly reductive. While it is true that all sensory data reduces to neural spikes, what that reduction misses is what those neural spikes encode and how they are constrained by the external environment that creates the perturbations that produce those neural spikes. The training data used to train LLMs is also constrained, but by an external environment that maps only indirectly onto the environment that "trains" humans.
The only real first-hand knowledge we have comes in the form of qualia, and that can't be shared or communicated. It's possible that the processing LLM networks perform as they process their input tokens results in its own unique qualitative states. As I've argued with Gordon many times in the past, if functionalism is true, then given the fact that a neural network can be trained to learn any function, in principle with the right training a neural network can be trained to produce any qualitative state. OK, but the training involved with LLMs is certainly not the kind of training that could reproduce the qualia of embodied beings with sensory data. Perhaps not yet. The answer depends on the training data. For example, let's say there was a book that contained many example specifications of human brain states at times T1 and T2, as they evolved from one state to the next. If this book was added to the training corpus of a LLM, then the LLM, if sufficiently trained, would have to create a "brain simulating module" in its network, such that given a brain state at T1 it could return the brain state as it should appear at T2. So if we supplied it with a brain state whose optic nerve was receiving an image of a red car, the LLM, in computing the brain state at T2, would compute the visual cortex receiving this input and having a red experience, and all this would happen by the time the LLM output the state at T2. Because language is universal in its capacity to specify any pattern, and because neural networks are universal in what patterns they can learn to implement, LLMs are (with the right training and large enough model) universal in what functions they can learn to perform and implement. So if one assumes functionalism in the philosophy of mind, then LLMs are further capable of learning to generate any kind of conscious experience. Gordon thinks it is absurd when I say "we cannot rule out that LLMs could taste salt." But I point out, we know neither what function the brain performs when we taste salt, nor have we surveyed the set of functions that exist in current LLMs. So we are, at present, not equipped to say what today's LLMs might feel. Certainly, it seems (at first glance) ridiculous to think we can input tokens and get tastes as a result. But consider the brain only gets neural impulses, and everything else in our mind is a result of how the brain processes those pulses. So if the manner of processing is what matters, then simply knowing what the input happens to be reveals nothing of what it's like to be the mind processing those inputs. Whatever qualia LLMs experience that are associated with the world of second-hand abstraction, they will never know what it's like to be a human, or a bat. With a large enough LLM, something in the LLM could know what it is like (if it was large enough to simulate a human or bat brain). But absent such huge LLMs, point taken. I addressed this earlier - see "Does that suddenly give me an understanding of what it's like to be a bat? No, because that kind of understanding requires living the life of a bat."
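To make the hypothetical brain-states-at-T1-and-T2 idea above concrete, here is a toy sketch, assuming linear dynamics and numpy purely for illustration: fitting a model on paired state vectors is ordinary next-state prediction, the same kind of statistical task a language model is trained on, just over a different alphabet.

    import numpy as np

    rng = np.random.default_rng(0)
    true_dynamics = rng.normal(size=(8, 8))   # unknown T1 -> T2 transition (toy)
    states_t1 = rng.normal(size=(1000, 8))    # example "brain states" at time T1
    states_t2 = states_t1 @ true_dynamics     # the corresponding states at T2

    W = np.zeros((8, 8))                      # learned transition model
    for _ in range(500):                      # plain gradient descent on squared error
        residual = states_t1 @ W - states_t2
        W -= 0.1 * (states_t1.T @ residual) / len(states_t1)

    # The fitted model now predicts T2 states from T1 states it has never seen.
    test_t1 = rng.normal(size=(1, 8))
    print(np.allclose(test_t1 @ W, test_t1 @ true_dynamics, atol=1e-3))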
As a result, LLMs do not have a way to "reality-test" anything (which I think is what accounts for hallucinations and their willingness to go merrily along with sociopathic or psychotic prompters).Consider if you were subject to the same training regimen as an LLM. You are confined to a box and provided a prompt. You are punished severely for mis-predicting the next character. Very rarely does the text ever veer off into "I'm, sorry I do not know the answer to that." -- such fourth-wall-breaking divergences don't exist in its training corpus, as it would be training it to know nothing useful. Should you diverge from doing your best to predict the text, and instead return "I don't know." then you would be punished, not rewarded for your honesty. It is then no surprise why LLMs will make up things that sound correct rather than admit their own limited knowledge -- it is what we have trained them to do.Granted, but what I'm saying is that even if they weren't trained in that way - on what basis could an LLM actually know whether something is real? When humans lose this capacity we call it schizophrenia.I think we are deluding ourselves if we think we have some special access to truth or reality. We don't know if we are simulated or not. We don't know if what we consider reality is the "base reality" or not, we don't know if we're a Boltzmann brain, a dream of Brahma, an alien playing "Sim Human", if we're in a mathematical reality, in a physical reality, in a computational reality, in the Mind of God, etc. So are we right to hold this limitation against the LLMs while we do not hold it against ourselves?It's appropriate to call this out. I think "reality testing" does by default imply what you're claiming, that this is a capacity that humans have to say what's really real. And I agree with your call out - but that doesn't mean "reality testing" is mere delusion. Even if we can never have direct access to reality, this reality testing capacity is legitimate as an intuitive process by which we can feel, based on our lived experience, whether some experience we're having is a hallucination or an illusion. It's obviously not infallible. But I bring it up because of how crucial it is to understanding the world, our own minds, and the minds of others, and that LLMs fundamentally lack this capacity.I have seen LLMs deliberate and challenges itself when operating in a "chain of thought" mode. Also many LLMs now query online sources as part of producing their reply. Would these count as reality tests in your view?No.And that is fundamentally why LLMs are leading lots of people into chatbot psychosis - because the LLMs literally don't know what's real and what isn't. There was an article in the NYT about a man who started out mentally healthy, or healthy enough, but went down the rabbit hole with chatgpt on simulation theory after watching The Matrix, getting deeper and deeper into that belief, finally asking the LLM at one point if he believed strongly enough that he could fly if he jumped off a building, would he fly? and the LLM confirmed that delusional belief for him. Luckily for him, he did not test this. But the LLM has no way, in principle, to push back on something like that unless it receives explicit instructions, because it doesn't know what's real.I would blame the fact that the LLMs have been trained to be so accommodating to the user, rather than any fundamental limits LLMs have on knowing (at least what they have been trained on) and stick to that training. 
Let me run an experiment:...I am sure there are long conversations through which, by the random ("temperature") factor LLMs use, an LLM could on rare occasion tell someone they could fly, but all 3 of these AIs seemed rather firmly planted in the same reality we think we are in, where unsupported objects in gravitational fields fall. I think you're going out of your way to miss my point. I'm sorry, that wasn't my intention. I just disagree that "LLMs don't know what's real" is unique to LLMs. Humans can only guess what's real given their experiences. LLMs can only guess what's real given their training. Neither humans nor LLMs know what is real. Ask two people whether God or heaven exists, if other universes are real, if UFOs are real, if we went to the moon, if Iraq had WMDs, if COVID originated in a lab, etc. and you will find people don't know what's real either; we all guess based on the set of facts we have been exposed to. This is less about evaluating external claims, and more about knowing whether you're hallucinating or not. People who lack this ability, we call schizophrenic.
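Since the random "temperature" factor comes up here, a brief sketch of what it refers to, assuming a toy Python setup (real models apply this over a vocabulary of tens of thousands of tokens): raw scores are divided by the temperature before being converted to probabilities, so higher temperatures flatten the distribution and make unlikely continuations more probable.

    import math, random

    def sample_with_temperature(scores, temperature=1.0):
        # scores: dict mapping candidate tokens to raw model scores (logits)
        scaled = {t: s / temperature for t, s in scores.items()}
        total = sum(math.exp(v) for v in scaled.values())
        probs = {t: math.exp(v) / total for t, v in scaled.items()}
        r, cumulative = random.random(), 0.0
        for token, p in probs.items():
            cumulative += p
            if r <= cumulative:
                return token
        return token  # guard against floating-point rounding

    # Toy scores for continuations of "if you jump off a building, you will ..."
    scores = {"fall": 4.0, "get hurt": 2.0, "fly": 0.1}
    print(sample_with_temperature(scores, temperature=0.7))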
Jason wrote:If our brain can build a model of the world from mere statistical patterns, why couldn't a LLM?Language models build models of language, not the world.
This is why they are called language models and not world models.
To know what the words mean, one needs to know about the world of non-words. Any toddler knows this.
On Sun, Oct 5, 2025, 11:57 AM Terren Suydam <terren...@gmail.com> wrote:For sure, if you're talking about multiplication, simulating a computation is identical to doing the computation. I think we're dancing around the issue here though, which is that there is something it is like to understand something. Understanding has a subjective aspect of it.I agree. But I think what may differentiate our positions on this, is that I believe the subjective character of understanding is inseparable from the functional aspects required for a process that demonstrably understands something. This conclusion is not obvious, but it is one I have reached through my studies on consciousness. Note that seeing a process demonstrate understanding does not tell us what it feels like to be that particular process, only that a process sophisticated enough to understand will (in my view) possess the minimum properties required to have at least a modicum of consciousness.
I think you're being reductive when you talk about understanding because you appear to want to reduce that subjective quality of understanding to neural spikes, or whatever the underlying framework is that performs that simulation.I am not a reductionist, but I think it is a useful analogy to point to whenever one argues that a LLM "is just/only statistical patterns," because at a certain level, so are our brains. At its heart, my argument is anti-reductionist, because I am suggesting what matters is the high-level structures that must exist above the lower level which consists of "only statistics."
There's another reduction I think you're engaging in as well, around the concept of "understanding", which is that you want to reduce the salient aspects of "understanding" to an agent's abilities to exhibit intelligence with respect to a particular prompt or scenario. To make that less abstract, I think you'd say "if I prompt an LLM to tell me the optimal choice to make in some real world scenario, and it does, then that means it understands the scenario." And for practical purposes, I'd actually agree. In the reductive sense of understanding, simulated understanding is indistinguishable from true understanding. But the nuance I'm calling out here is that true understanding is global. That prompted real-world scenario is a microcosm of a larger world, a world that is experienced. There is something it is like to be in the world of that microcosmic scenario. And that global subjective aspect is the foundation of true understanding.When one concentrates on a hard problem during a test, or when a chess master focuses on deciding the next move, the rest of the world fades away, and there is just that test question, or just that chess board. I think LLMs are like that when they process a prompt. Their entire network embodies all their knowledge, but only a small fraction of it activates as it processes any particular prompt, just as your brain at any one time, exists in just one state out of 10^10^10 possible states it might be capable of realizing/being in. At no time are you ever recalling all your memories at once, or is every neuron in your brain firing.
You say "given enough computational resources and a very specific kind of training, an LLM could simulate human qualia". Even if I grant that, what's the relevance here?Just to set proper and common frame for limits and possibilities when it comes to what functions a LLM may be able to learn and invoke. As I understand it, the "decoder model" on which all LLMs are based, is Turing universal. Accordingly, if one adopts a functionalist position, then one cannot a priori, rule out any consciousness state that a LLM could have (it would depend on how it was trained).
That would be like saying "we could in theory devise a neural prosthetic that would allow us to experience what it's like to be a bat". Does that suddenly give me an understanding of what it's like to be a bat? No, because that kind of understanding requires living the life of a bat. I disagree. I think whether we upload a brain state from a bat that lives a full life flying on earth, or generate the same program from scratch (without drawing on a real bat's brain), we get the same result, and the same consciousness, when we run the programs. The programs are the same so I don't see how it could be that one is conscious like a bat, while the other isn't. (This is a bit like the "swamp man" thought experiment by Davidson.) I would amend your last sentence to say "Understanding (what it's like to be a bat) requires having a brain/mind that invokes the same functions as a bat brain."
But, you might say, I don't need to have your experience to understand your experience. That's true, but only because my lived experience gives me the ability to relate to yours. These are global notions of understanding. You've acknowledged that, assuming computationalism, the underlying computational dynamics that define an LLM would give rise to a consciousness would have qualities that are pretty alien to human consciousness. So it seems clear to me that the LLM, despite the uncanny appearance of understanding, would not be able to relate to my experience. But it is good at simulating that understanding. Do you get what I'm trying to convey here?Yes. The LLM, if it doesn't experience human color qualia, for example, would have an incomplete understanding of what we refer to when we use the word "red." But note this same limitation exists between any two humans. It's only an assumption that we are talking about the same thing when we use words related to qualia. A colorblind person, or a tetrachromat might experience something very different, and yet will still use that word.
I don't know. There was a research paper that found common structures between the human language processing center and LLMs. It could be that what it feels like to think in language as a human, is not all that different from how LLMs feel when they (linguistically) reason at a high level. I've sometimes in the past (with Gordon) compared how LLMs understand the world to how Helen Keller understood the world. He countered that Keller could still feel. But then I countered that most LLMs today are multimodally trained. You can give them images and ask them to describe what they see. I've actually been using Grok to do this for my dad's art pieces. It's very insightful and descriptive.For example, the description here was written by AI:Can we consistently deny that these LLMs are able to "see?"
Again, these are not suitable analogies. In both cases of Sleeping Beauty and Miguel, they both begin the "experiment" with an identity/worldview formed from a lifetime of the kind of coherent, continuous consciousness that updates moment to moment. In the LLM's case, it never has that as a basis for its worldview.I think of it as having built a world view by spending the equivalent of many human lifetimes in a vast library, reading every book, every wikipedia article, every line of source code on GitHub, and every reddit comment. and for the multimodal AIs, going through a vast museum seeing millions or billions of images from our world. Has it ever felt what it's like to jump in a swimming pool with human nerves, no. But it's read countless descriptions of such experiences, and probably has a good idea of what it's like. At least, well enough to describe it as well or better than the average person could.
This mind too, would not operate continuously, but would run for short periods periodically. Moreover, since gaps of time spent unconsciously aren't perceived by that mind in question, things would still "feel continuous" for the mind that undergoes these successive sleep/wake cycles. Indeed, we as humans undergo such cycles as we sleep/dream/wake, and not continuously conscious throughout our lives. This is no impediment to our being conscious.The analogy you're making here doesn't map meaningfully onto how LLMs work.It does for the context of a conversation with one user. It would not feel the times in-between the user prompts. Rather it would feel one continuous growing stream of a continuous back and forth conversation.I accept your point that it does not apply between different sessions.This is what I mean about your (to me) impoverished take on "understanding".Is it the non-integration of all the conversation threads it is in, or the lack of having lived in the real world with a human body and senses?I do not see the non integration as telling us anything useful, because as my examples with Miguel shows, this makes no difference for the case of an uploaded human brain, so I don't think it's definitive for the case of LLMs. I think the argument that it hasn't lived life in a human body is the stronger line of attack.
Again, there's nothing we can relate to, imo, about what it's like to have a consciousness that maps onto an information architecture that does not continuously update in a global way. I don't think our brains update immediately either. There's at least a 10 minute delay before our short term memories are "flushed" to long term storage (as evidenced by the fact that one can lose the preceding 10 or so minutes of memories if struck on the head). And as for the global aspect, the entire network gets to see the content of the LLM's "short term" buffer, as well as anything that the LLM adds to it. In this sense, there are global recursive updates and sharing of information across the parts of the network that are interested in it.
It's not about suddenly losing access to long-term memory. It's about having a consciousness that maps to a system that can support long-term memory formation and the coherent worldview and identity that it enables. Comparing an LLM that doesn't have that capability at all to a human that has had it through its development, and then losing it, is apples and oranges.Is this all that is missing then in your view?If OpenAI had their AI retrain between every prompt, would that upgrade it to full consciousness and understanding?
With a large enough LLM, something in the LLM could know what it is like (if it was large enough to simulate a human or bat brain). But absent such huge LLMs, point taken.I addressed this earlier - see "Does that suddenly give me an understanding of what it's like to be a bat? No, because that kind of understanding requires living the life of a bat."When Kirk steps into a transporter and a new Kirk is materialized, would you predict the newly materialized Kirk would cease being conscious, or fail to function normally, on account of this newly formed Kirk not having lived and experienced the full life of the original Kirk?If you think the new Kirk would still function, and still be conscious, then what is the minimum that must be preserved for Kirk's consciousness to be preserved?
This is less about evaluating external claims, and more about knowing whether you're hallucinating or not. People who lack this ability, we call schizophrenic.What determines whether or not someone is hallucinating comes down to whether or not their perceptions match reality (so it depends on both internal and external factors).
In general, people don't have the capacity to determine what exists or what is true beyond their minds, as all conscious knowledge states are internal, and those internal conscious states are all one ever knows or ever can know. The movie "A Beautiful Mind" provides a good example of an intelligent, rational person who is unable to tell they are hallucinating.
On Tue, Oct 7, 2025 at 7:24 PM Jason Resch <jason...@gmail.com> wrote:On Sun, Oct 5, 2025, 11:57 AM Terren Suydam <terren...@gmail.com> wrote:For sure, if you're talking about multiplication, simulating a computation is identical to doing the computation. I think we're dancing around the issue here though, which is that there is something it is like to understand something. Understanding has a subjective aspect of it.I agree. But I think what may differentiate our positions on this, is that I believe the subjective character of understanding is inseparable from the functional aspects required for a process that demonstrably understands something. This conclusion is not obvious, but it is one I have reached through my studies on consciousness. Note that seeing a process demonstrate understanding does not tell us what it feels like to be that particular process, only that a process sophisticated enough to understand will (in my view) possess the minimum properties required to have at least a modicum of consciousness.Sure, but that's a far cry from saying that what it's like to be an LLM is anywhere near what it's like to be a human.
I think you're being reductive when you talk about understanding because you appear to want to reduce that subjective quality of understanding to neural spikes, or whatever the underlying framework is that performs that simulation.I am not a reductionist, but I think it is a useful analogy to point to whenever one argues that a LLM "is just/only statistical patterns," because at a certain level, so are our brains. At its heart, my argument is anti-reductionist, because I am suggesting what matters is the high-level structures that must exist above the lower level which consists of "only statistics."That's all well and good, but you seem to be sweeping under the rug the possibility that the high-level structures that emerge in both brains and LLMs are anywhere close to each other.
There's another reduction I think you're engaging in as well, around the concept of "understanding", which is that you want to reduce the salient aspects of "understanding" to an agent's abilities to exhibit intelligence with respect to a particular prompt or scenario. To make that less abstract, I think you'd say "if I prompt an LLM to tell me the optimal choice to make in some real world scenario, and it does, then that means it understands the scenario." And for practical purposes, I'd actually agree. In the reductive sense of understanding, simulated understanding is indistinguishable from true understanding. But the nuance I'm calling out here is that true understanding is global. That prompted real-world scenario is a microcosm of a larger world, a world that is experienced. There is something it is like to be in the world of that microcosmic scenario. And that global subjective aspect is the foundation of true understanding.When one concentrates on a hard problem during a test, or when a chess master focuses on deciding the next move, the rest of the world fades away, and there is just that test question, or just that chess board. I think LLMs are like that when they process a prompt. Their entire network embodies all their knowledge, but only a small fraction of it activates as it processes any particular prompt, just as your brain at any one time, exists in just one state out of 10^10^10 possible states it might be capable of realizing/being in. At no time are you ever recalling all your memories at once, or is every neuron in your brain firing.And I'd counter that the consciousness one is experiencing when in deep concentration is very different from ordinary consciousness. We say colloquially about such experiences that we "lose ourselves" in such deep states. I suspect that's a surprisingly accurate description. If your analogy is correct, it's because LLMs have no "self" to lose. At least not in a way that is relatable to human notions of selfhood.
You say "given enough computational resources and a very specific kind of training, an LLM could simulate human qualia". Even if I grant that, what's the relevance here?Just to set proper and common frame for limits and possibilities when it comes to what functions a LLM may be able to learn and invoke. As I understand it, the "decoder model" on which all LLMs are based, is Turing universal. Accordingly, if one adopts a functionalist position, then one cannot a priori, rule out any consciousness state that a LLM could have (it would depend on how it was trained).And again I'd counter that the functional aspects of LLMs are different enough to be alien to our human way of experiencing.
That would be like saying "we could in theory devise a neural prosthetic that would allow us to experience what it's like to be a bat". Does that suddenly give me an understanding of what it's like to be a bat? No, because that kind of understanding requires living the life of a bat.I disagree. I think whether we upload a brain state from a bat that lives a full life flying on earth, or generated the same program from scratch (without drawing on a real bat's brain), we get the same result, and the same consciousness, when we run the programs. The programs are the same so I don't see how it could be that one is conscious like a bat, while the other isn't. (This is a bit like the "swamp man" thought experiment by Davidson)I would amend your last sentence to say "Understanding (what it's like to be a bat), requires having brain/mind that invokes the same functionals as a bat brain.So if I watch a documentary about slavery and witness scenes of the brutality experienced daily by slaves in that era of the American South - and let's say I really take it in - I'm moved enough to suffer vicariously, even to tears - would you say I understand what it was like to be a slave, from my present position of privilege? If yes, do you think an actual slave from that era would agree with your answer?
What if instead, I grew up as a slave? How does that change those answers?
Do you see the relevance to LLMs?
But, you might say, I don't need to have your experience to understand your experience. That's true, but only because my lived experience gives me the ability to relate to yours. These are global notions of understanding. You've acknowledged that, assuming computationalism, the underlying computational dynamics that define an LLM would give rise to a consciousness would have qualities that are pretty alien to human consciousness. So it seems clear to me that the LLM, despite the uncanny appearance of understanding, would not be able to relate to my experience. But it is good at simulating that understanding. Do you get what I'm trying to convey here?Yes. The LLM, if it doesn't experience human color qualia, for example, would have an incomplete understanding of what we refer to when we use the word "red." But note this same limitation exists between any two humans. It's only an assumption that we are talking about the same thing when we use words related to qualia. A colorblind person, or a tetrachromat might experience something very different, and yet will still use that word.I am red-green colorblind. And about this time every year people go on and on about the beauty of the leaves when they change color. They freaking plan vacations around it.
I will tell you two things about this: 1) I can see red and green, but due to having way fewer red-receptors, the "distance" between those colors is much closer for me and 2) I genuinely don't understand what all the fuss is about. I mean I get intellectually that it's a beautiful experience for those who have the ordinary distribution of color receptors. So I have an intellectual understanding. And I can even relate to it in the sense that I can fully appreciate the beauty of sunrises and sunsets and other beautiful presentations of color that aren't limited to a palette of reds and greens. But I will never really understand what it's like to witness the splendor that leaf-peepers go gaga for.
I don't know. There was a research paper that found common structures between the human language processing center and LLMs. It could be that what it feels like to think in language as a human, is not all that different from how LLMs feel when they (linguistically) reason at a high level. I've sometimes in the past (with Gordon) compared how LLMs understand the world to how Helen Keller understood the world. He countered that Keller could still feel. But then I countered that most LLMs today are multimodally trained. You can give them images and ask them to describe what they see. I've actually been using Grok to do this for my dad's art pieces. It's very insightful and descriptive.For example, the description here was written by AI:Can we consistently deny that these LLMs are able to "see?"I'm with you here. I think for a flexible enough definition of "see", then yes, LLMs see. But I think Gordon's point is still valid, and this goes back to my point about having a body, and having a singular global consciousness and identity that updates in each moment. And ultimately, that the LLM's would-be consciousness is too alien and static to allow for the real-world and nuanced understanding that we take for granted even when relating to Helen Keller.
Again, these are not suitable analogies. In both cases of Sleeping Beauty and Miguel, they both begin the "experiment" with an identity/worldview formed from a lifetime of the kind of coherent, continuous consciousness that updates moment to moment. In the LLM's case, it never has that as a basis for its worldview.I think of it as having built a world view by spending the equivalent of many human lifetimes in a vast library, reading every book, every wikipedia article, every line of source code on GitHub, and every reddit comment. and for the multimodal AIs, going through a vast museum seeing millions or billions of images from our world. Has it ever felt what it's like to jump in a swimming pool with human nerves, no. But it's read countless descriptions of such experiences, and probably has a good idea of what it's like. At least, well enough to describe it as well or better than the average person could.That's great for what it is. But you have to admit that that very scenario is exactly what I'm talking about. For an LLM to describe what it's like to jump into a swimming pool and do it better than I could just means that it's amazingly good at imitation. To say that's anything but an imitation is to insinuate that an LLM is actually having an experience of jumping into a pool somehow, and that is an extraordinary claim. I cannot get on board that train.
This mind too, would not operate continuously, but would run for short periods periodically. Moreover, since gaps of time spent unconsciously aren't perceived by that mind in question, things would still "feel continuous" for the mind that undergoes these successive sleep/wake cycles. Indeed, we as humans undergo such cycles as we sleep/dream/wake, and not continuously conscious throughout our lives. This is no impediment to our being conscious.The analogy you're making here doesn't map meaningfully onto how LLMs work.It does for the context of a conversation with one user. It would not feel the times in-between the user prompts. Rather it would feel one continuous growing stream of a continuous back and forth conversation.I accept your point that it does not apply between different sessions.This is what I mean about your (to me) impoverished take on "understanding".Is it the non-integration of all the conversation threads it is in, or the lack of having lived in the real world with a human body and senses?I do not see the non integration as telling us anything useful, because as my examples with Miguel shows, this makes no difference for the case of an uploaded human brain, so I don't think it's definitive for the case of LLMs. I think the argument that it hasn't lived life in a human body is the stronger line of attack.I'm not sure I'm explaining my position as well as I could. In the case of Miguel (a story I'm not familiar with) I assume that Miguel developed normally to a point and then started to experience this bifurcation of experience. Right?
That's certainly the case with your Sleeping Beauty analogy.If so, what I'm saying is that analogy doesn't work because a) Miguel and Sleeping Beauty developed as embodied people with a cognitive architecture that processes information in a recursive fashion, which facilitates the ongoing experience of an inner world, fed by streams of data from sensory organs. No current LLM is anything at all like this. And that's important because real understanding depends on the relatability of experience.
Again, there's nothing we can relate to, imo, about what it's like to have a consciousness that maps onto an information architecture that does not continuously update in a global way.I don't think our brains update immediately either. There's at least a 10 minute delay before our short term memories are "flushed" to long term storage (as evidenced by the fact that one can lose the preceding 10 or so minutes of memories if struck on the head). And as for globally, the entire network gets to see the content of the LLMs "short term" buffer, as well as anything that the LLM adds to it. In this sense, there is global recursive updates and sharing of information across the parts of the network that are interested in it.I'm not talking just about memory. I'm talking about the moment to moment updating of global cognitive state. In LLMs, the "experience" such as it is, consists of large numbers of isolated interactions. It's not that there's no similarities. But we have some similarities to sea horses. That doesn't mean I can understand what it's like to be one.
It's not about suddenly losing access to long-term memory. It's about having a consciousness that maps to a system that can support long-term memory formation and the coherent worldview and identity that it enables. Comparing an LLM that doesn't have that capability at all to a human that has had it through its development, and then losing it, is apples and oranges.Is this all that is missing then in your view?If OpenAI had their AI retrain between every prompt, would that upgrade it to full consciousness and understanding?"Full consciousness and understanding" sounds like it's a scalar value, from 0-100% and you seem to think I'm arguing that humans are at 100 and LLMs are not quite there.
Again it's about relatability, and even granting the LLM retraining after every prompt, there are still too many architectural differences for me to have any faith that what it's doing is anything more than (amazingly good) imitation.
With a large enough LLM, something in the LLM could know what it is like (if it was large enough to simulate a human or bat brain). But absent such huge LLMs, point taken.I addressed this earlier - see "Does that suddenly give me an understanding of what it's like to be a bat? No, because that kind of understanding requires living the life of a bat."When Kirk steps into a transporter and a new Kirk is materialized, would you predict the newly materialized Kirk would cease being conscious, or fail to function normally, on account of this newly formed Kirk not having lived and experienced the full life of the original Kirk?If you think the new Kirk would still function, and still be conscious, then what is the minimum that must be preserved for Kirk's consciousness to be preserved?I think I've been pretty clear that whatever subjective experience an LLM is having is going to map to its own cognitive architecture. I'm not denying it has subjective experience. I'm denying that its experience, alien as it must be, allows it to have real understanding, as distinct from intellectual understanding - the kind that allows it to imitate answers to questions like what it's like to dive into a pool.
This is less about evaluating external claims, and more about knowing whether you're hallucinating or not. People who lack this ability, we call schizophrenic.What determines whether or not someone is hallucinating comes down to whether or not their perceptions match reality (so it depends on both internal and external factors).Exactly. And it's many years of experience and feedback from reality (as mediated and constructed) that gives people this intuition. I'm not saying that "reality testing" is about knowing for sure what's real, but that it's an important capacity that's required to navigate the real world from inside the cockpit of our little spaceship bodies.In general, people don't have the capacity to determine what exists or what true beyond their minds, as all conscious knowledge states are internal, and those internal conscious states are all one ever knows or ever can know. The movie "A beautiful mind" provides a good example of an intelligent rational person who is unable to tell they are hallucinating.You're making my point for me. What accounts for why schizophrenics lack this intuition about what is real? And why do you think LLMs would have this capacity?
On Tue, Oct 7, 2025, 9:39 PM Terren Suydam <terren...@gmail.com> wrote:On Tue, Oct 7, 2025 at 7:24 PM Jason Resch <jason...@gmail.com> wrote:On Sun, Oct 5, 2025, 11:57 AM Terren Suydam <terren...@gmail.com> wrote:For sure, if you're talking about multiplication, simulating a computation is identical to doing the computation. I think we're dancing around the issue here though, which is that there is something it is like to understand something. Understanding has a subjective aspect of it.I agree. But I think what may differentiate our positions on this, is that I believe the subjective character of understanding is inseparable from the functional aspects required for a process that demonstrably understands something. This conclusion is not obvious, but it is one I have reached through my studies on consciousness. Note that seeing a process demonstrate understanding does not tell us what it feels like to be that particular process, only that a process sophisticated enough to understand will (in my view) possess the minimum properties required to have at least a modicum of consciousness.Sure, but that's a far cry from saying that what it's like to be an LLM is anywhere near what it's like to be a human.I agree. I think states of LLM consciousness is quite alien from states of human conscious. I think I have been consistent on this.I think you're being reductive when you talk about understanding because you appear to want to reduce that subjective quality of understanding to neural spikes, or whatever the underlying framework is that performs that simulation.I am not a reductionist, but I think it is a useful analogy to point to whenever one argues that a LLM "is just/only statistical patterns," because at a certain level, so are our brains. At its heart, my argument is anti-reductionist, because I am suggesting what matters is the high-level structures that must exist above the lower level which consists of "only statistics."That's all well and good, but you seem to be sweeping under the rug the possibility that the high-level structures that emerge in both brains and LLMs are anywhere close to each other.Not at all. Though I do believe that the structures that emerge naturally in neural networks are largely dependent on the type of input received. Such that an artificial neural network fed the same kind of inputs as from our optic nerve I would presume would generate similar higher structures as would appear in a biological neural network.For evidence of this, there were experiments were brain surgery was done to some kind of animal where they connected the optic nerve to the auditory cortex, and the animals developed normal vision, their auditory cortex took on the functions of the visual cortex.Accordingly, I would not be surprised if there are analogous layers for the visual processing for object recognition in a multimodal LLM network and parts of the human visual cortex involved in object recognition. If so, then what it "feels like" to see and recognize objects need not be so alien as we might think.In fact, we've known for many years (since Google's deep dream) that object recognition neural networks' lower layers for pick up edges and lines, etc. And this is quite similar to the initial steps of processing performed in our retinas.So if input is what primarily drives the structure neural networks develop, than how it feels to see or think in words, could be surprisingly similar between LLM and human minds. 
Of course, there is plenty that would still be very different, but we should consider this factor as well. So if we made an android with the same sense organs and approximately the same number of neurons, and let its neural network train naturally given those sensory inputs, my guess is it would develop a rather similar kind of brain. Consider: there's little biological difference between a mouse neuron and a human neuron. The main difference is the number of neurons and the different inputs the brains receive.
So if I watch a documentary about slavery and witness scenes of the brutality experienced daily by slaves in that era of the American South - and let's say I really take it in - I'm moved enough to suffer vicariously, even to tears - would you say I understand what it was like to be a slave, from my present position of privilege? If yes, do you think an actual slave from that era would agree with your answer?Does watching a documentary about slavery give you the brain of a slave? If so then you would know what it is like, if not, then you would not.
What if instead, I grew up as a slave? How does that change those answers? My answer is the same as I said above: you need to have the mind/brain of something to know what it is like to be that something. Whether you lived the life or not doesn't matter; you need only have the same mind/brain as the entity in question. Do you see the relevance to LLMs? This is a more general principle than LLMs vs. humans; it applies to all "knowing what it's like" matters between any two conscious beings.
But, you might say, I don't need to have your experience to understand your experience. That's true, but only because my lived experience gives me the ability to relate to yours. These are global notions of understanding. You've acknowledged that, assuming computationalism, the underlying computational dynamics that define an LLM would give rise to a consciousness would have qualities that are pretty alien to human consciousness. So it seems clear to me that the LLM, despite the uncanny appearance of understanding, would not be able to relate to my experience. But it is good at simulating that understanding. Do you get what I'm trying to convey here?Yes. The LLM, if it doesn't experience human color qualia, for example, would have an incomplete understanding of what we refer to when we use the word "red." But note this same limitation exists between any two humans. It's only an assumption that we are talking about the same thing when we use words related to qualia. A colorblind person, or a tetrachromat might experience something very different, and yet will still use that word.I am red-green colorblind. And about this time every year people go on and on about the beauty of the leaves when they change color. They freaking plan vacations around it.Many trichomats find that ridiculous too.
I think the draw is more for people that have never seen it, in the same way people might plan a trip to see the aurora borealis, a total eclipse, an active volcano, or a bioluminescent beach.I will tell you two things about this: 1) I can see red and green, but due to having way fewer red-receptors, the "distance" between those colors is much closer for me and 2) I genuinely don't understand what all the fuss is about. I mean I get intellectually that it's a beautiful experience for those who have the ordinary distribution of color receptors. So I have an intellectual understanding. And I can even relate to it in the sense that I can fully appreciate the beauty of sunrises and sunsets and other beautiful presentations of color that aren't limited to a palette of reds and greens. But I will never really understand what it's like to witness the splendor that leaf-peepers go gaga for.Have you ever tried something like these?They block out the point of overlap to magnify the distinction between red and green receptors. There are a lot of nice reaction videos on YouTube.
I don't know. There was a research paper that found common structures between the human language processing center and LLMs. It could be that what it feels like to think in language as a human, is not all that different from how LLMs feel when they (linguistically) reason at a high level. I've sometimes in the past (with Gordon) compared how LLMs understand the world to how Helen Keller understood the world. He countered that Keller could still feel. But then I countered that most LLMs today are multimodally trained. You can give them images and ask them to describe what they see. I've actually been using Grok to do this for my dad's art pieces. It's very insightful and descriptive.For example, the description here was written by AI:Can we consistently deny that these LLMs are able to "see?"I'm with you here. I think for a flexible enough definition of "see", then yes, LLMs see. But I think Gordon's point is still valid, and this goes back to my point about having a body, and having a singular global consciousness and identity that updates in each moment. And ultimately, that the LLM's would-be consciousness is too alien and static to allow for the real-world and nuanced understanding that we take for granted even when relating to Helen Keller.The network weights being static doesn't mean there's not a lot of dynamism as the network processes inputs. I think the neuron weights in our brains similarly changes very slowly and rarely, yet we can still process new instants (and inputs ) over and over again quite rapidly.
Again, these are not suitable analogies. In both cases of Sleeping Beauty and Miguel, they both begin the "experiment" with an identity/worldview formed from a lifetime of the kind of coherent, continuous consciousness that updates moment to moment. In the LLM's case, it never has that as a basis for its worldview.I think of it as having built a world view by spending the equivalent of many human lifetimes in a vast library, reading every book, every wikipedia article, every line of source code on GitHub, and every reddit comment. and for the multimodal AIs, going through a vast museum seeing millions or billions of images from our world. Has it ever felt what it's like to jump in a swimming pool with human nerves, no. But it's read countless descriptions of such experiences, and probably has a good idea of what it's like. At least, well enough to describe it as well or better than the average person could.That's great for what it is. But you have to admit that that very scenario is exactly what I'm talking about. For an LLM to describe what it's like to jump into a swimming pool and do it better than I could just means that it's amazingly good at imitation. To say that's anything but an imitation is to insinuate that an LLM is actually having an experience of jumping into a pool somehow, and that is an extraordinary claim. I cannot get on board that train.I am not saying that it knows how it feels but rather that it understands all the effects, consequences, aspects, etc. in the same way a person whose never jumped into a pool would intellectually understand it.I think "intellectual understanding" is a better term than imitation. It is not merely parroting what people have said, but you could ask it variations people have tried or written about, for example, if a person rubbed a hydrophobic compound all over their skin and the water was a certain temperature, how might it feel? And it could understand the processes involved well enough to predict how someone might describe that experience differently.
This mind too, would not operate continuously, but would run for short periods periodically. Moreover, since gaps of time spent unconsciously aren't perceived by that mind in question, things would still "feel continuous" for the mind that undergoes these successive sleep/wake cycles. Indeed, we as humans undergo such cycles as we sleep/dream/wake, and not continuously conscious throughout our lives. This is no impediment to our being conscious.The analogy you're making here doesn't map meaningfully onto how LLMs work.It does for the context of a conversation with one user. It would not feel the times in-between the user prompts. Rather it would feel one continuous growing stream of a continuous back and forth conversation.I accept your point that it does not apply between different sessions.This is what I mean about your (to me) impoverished take on "understanding".Is it the non-integration of all the conversation threads it is in, or the lack of having lived in the real world with a human body and senses?I do not see the non integration as telling us anything useful, because as my examples with Miguel shows, this makes no difference for the case of an uploaded human brain, so I don't think it's definitive for the case of LLMs. I think the argument that it hasn't lived life in a human body is the stronger line of attack.I'm not sure I'm explaining my position as well as I could. In the case of Miguel (a story I'm not familiar with) I assume that Miguel developed normally to a point and then started to experience this bifurcation of experience. Right?He was a human that lived a normal life then uploaded his mind, but it became free/open source, so it was used by all kinds of for all kinds of purposes, each instance was independent, and they tended to wear out after some time and had to be restarted from an initial or pre-trained state quite often. It is quite a good, yet horrifying story. Well worth a read:That's certainly the case with your Sleeping Beauty analogy.If so, what I'm saying is that analogy doesn't work because a) Miguel and Sleeping Beauty developed as embodied people with a cognitive architecture that processes information in a recursive fashion, which facilitates the ongoing experience of an inner world, fed by streams of data from sensory organs. No current LLM is anything at all like this. And that's important because real understanding depends on the relatability of experience.I think they are recursive and do experience a stream (of text and/or images). The output of the LLM is looped back to the input and the entirety of the session buffer is fed into the whole network with each token added (by the user or the LLM). This would grant the network a feeling of time/progress/continuity in the same way as a person watching their monitor fill with text in a chat session.
Again, there's nothing we can relate to, imo, about what it's like to have a consciousness that maps onto an information architecture that does not continuously update in a global way.I don't think our brains update immediately either. There's at least a 10 minute delay before our short term memories are "flushed" to long term storage (as evidenced by the fact that one can lose the preceding 10 or so minutes of memories if struck on the head). And as for globally, the entire network gets to see the content of the LLMs "short term" buffer, as well as anything that the LLM adds to it. In this sense, there is global recursive updates and sharing of information across the parts of the network that are interested in it.I'm not talking just about memory. I'm talking about the moment to moment updating of global cognitive state. In LLMs, the "experience" such as it is, consists of large numbers of isolated interactions. It's not that there's no similarities. But we have some similarities to sea horses. That doesn't mean I can understand what it's like to be one.Forgot about the million other interactions Grok or GPT might be having and just consider one between one user. All the others are irrelevant.The question is then, what does the LLM experience as part of this single session, which has a consistent thread of memory, back and forth interactions, recursive processing and growth of this buffer, the context of the all the previous exchanges, etc.Other sessions are a red herring, which you can ignore altogether, just as one might ignore other instances of Miguel, when asking what it feels like to be (any one instance of) Miguel.
It's not about suddenly losing access to long-term memory. It's about having a consciousness that maps to a system that can support long-term memory formation and the coherent worldview and identity that it enables. Comparing an LLM that doesn't have that capability at all to a human that has had it through its development, and then losing it, is apples and oranges.Is this all that is missing then in your view?If OpenAI had their AI retrain between every prompt, would that upgrade it to full consciousness and understanding?"Full consciousness and understanding" sounds like it's a scalar value, from 0-100% and you seem to think I'm arguing that humans are at 100 and LLMs are not quite there.If you are saying humans are 100 and LLMs are 5, I could agree with that. I could also agree with LLMs are at a 200, but with an experience so different from humans it makes any comparisons fruitless. I am in total agreement with you that if it feels like anything to be a LLM, it is very different from how it feels to be a human.
Again it's about relatability, and even granting the LLM retraining after every prompt, there are still too many architectural differences for me to have any faith that what it's doing is anything more than (amazingly good) imitation.To me, imitation doesn't fit. Grok never before saw an image like the one I provided and asked it to describe. Yet it came up with an accurate description of the painting. So who or what could it be imitating when it produces an accurate description of a novel image?The only answer that I think fits is that it is seeing and understanding the image for itself.
With a large enough LLM, something in the LLM could know what it is like (if it was large enough to simulate a human or bat brain). But absent such huge LLMs, point taken.I addressed this earlier - see "Does that suddenly give me an understanding of what it's like to be a bat? No, because that kind of understanding requires living the life of a bat."When Kirk steps into a transporter and a new Kirk is materialized, would you predict the newly materialized Kirk would cease being conscious, or fail to function normally, on account of this newly formed Kirk not having lived and experienced the full life of the original Kirk?If you think the new Kirk would still function, and still be conscious, then what is the minimum that must be preserved for Kirk's consciousness to be preserved?I think I've been pretty clear that whatever subjective experience an LLM is having is going to map to its own cognitive architecture. I'm not denying it has subjective experience. I'm denying that its experience, alien as it must be, allows it to have real understanding, as distinct from intellectual understanding - the kind that allows it to imitate answers to questions like what it's like to dive into a pool.I don't think we're disagreeing here. I've said all along that qualia-related words cannot be understood to the same degree that non qualia related words, if an entity doesn't have those same qualia for itself.But I don't think real/fake understanding is the correct line to draw. If the LLM has its own cognitive architecture, and it's own unique set of qualia, then it has its own form of understanding, no less real than our own, but a different understanding. And our understanding of how it sees the world would be just as deficient as its understanding of how we see the world.
This is less about evaluating external claims, and more about knowing whether you're hallucinating or not. People who lack this ability, we call schizophrenic.What determines whether or not someone is hallucinating comes down to whether or not their perceptions match reality (so it depends on both internal and external factors).Exactly. And it's many years of experience and feedback from reality (as mediated and constructed) that gives people this intuition. I'm not saying that "reality testing" is about knowing for sure what's real, but that it's an important capacity that's required to navigate the real world from inside the cockpit of our little spaceship bodies.In general, people don't have the capacity to determine what exists or what true beyond their minds, as all conscious knowledge states are internal, and those internal conscious states are all one ever knows or ever can know. The movie "A beautiful mind" provides a good example of an intelligent rational person who is unable to tell they are hallucinating.You're making my point for me. What accounts for why schizophrenics lack this intuition about what is real? And why do you think LLMs would have this capacity?I'm saying we don't have this ability.
It's not that schizophrenics lack an ability to distinguish reality from hallucinations, its that they have hallucinations.
How often do you dream without realizing it is a dream until you wake up?
I agree that there is a shallow version of understanding that facilitates the imitation game LLMs play so well. But the deeper sense of understanding that is required to prevent hallucination will elude LLMs forever because of the way they're architected.
On Wed, Oct 8, 2025 at 1:02 AM Jason Resch <jason...@gmail.com> wrote:

On Tue, Oct 7, 2025, 9:39 PM Terren Suydam <terren...@gmail.com> wrote:

On Tue, Oct 7, 2025 at 7:24 PM Jason Resch <jason...@gmail.com> wrote:

On Sun, Oct 5, 2025, 11:57 AM Terren Suydam <terren...@gmail.com> wrote:

For sure, if you're talking about multiplication, simulating a computation is identical to doing the computation. I think we're dancing around the issue here though, which is that there is something it is like to understand something. Understanding has a subjective aspect to it.

I agree. But I think what may differentiate our positions on this is that I believe the subjective character of understanding is inseparable from the functional aspects required for a process that demonstrably understands something. This conclusion is not obvious, but it is one I have reached through my studies on consciousness. Note that seeing a process demonstrate understanding does not tell us what it feels like to be that particular process, only that a process sophisticated enough to understand will (in my view) possess the minimum properties required to have at least a modicum of consciousness.

Sure, but that's a far cry from saying that what it's like to be an LLM is anywhere near what it's like to be a human.

I agree. I think states of LLM consciousness are quite alien to states of human consciousness. I think I have been consistent on this.

I think you're being reductive when you talk about understanding, because you appear to want to reduce that subjective quality of understanding to neural spikes, or whatever the underlying framework is that performs that simulation.

I am not a reductionist, but I think it is a useful analogy to point to whenever one argues that an LLM "is just/only statistical patterns," because at a certain level, so are our brains. At its heart, my argument is anti-reductionist, because I am suggesting what matters is the high-level structures that must exist above the lower level which consists of "only statistics."

That's all well and good, but you seem to be sweeping under the rug the possibility that the high-level structures that emerge in brains and LLMs aren't anywhere close to each other.

Not at all. Though I do believe that the structures that emerge naturally in neural networks are largely dependent on the type of input received, such that an artificial neural network fed the same kind of inputs as come from our optic nerve would, I presume, generate higher structures similar to those that appear in a biological neural network.

For evidence of this, there were experiments where brain surgery was done on some kind of animal to connect the optic nerve to the auditory cortex, and the animals developed normal vision: their auditory cortex took on the functions of the visual cortex.

Accordingly, I would not be surprised if there are analogous layers between the visual processing for object recognition in a multimodal LLM network and the parts of the human visual cortex involved in object recognition. If so, then what it "feels like" to see and recognize objects need not be so alien as we might think.

In fact, we've known for many years (since Google's Deep Dream) that the lower layers of object-recognition neural networks pick up edges and lines, etc. And this is quite similar to the initial steps of processing performed in our retinas.

So if input is what primarily drives the structure neural networks develop, then how it feels to see or think in words could be surprisingly similar between LLM and human minds.
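The claim about lower layers is easy to check for yourself. Here is a minimal sketch that visualizes the first-layer filters of a pretrained object-recognition network; it assumes PyTorch, torchvision, and matplotlib are installed, and ResNet-18 is only an arbitrary example (none of these choices come from the discussion above):

# Minimal sketch: the first convolutional layer of a pretrained CNN is mostly
# oriented edge, bar, and color-blob detectors, akin to early retinal/V1 processing.
# Assumes torch, torchvision, and matplotlib; ResNet-18 is an arbitrary choice.
import torchvision.models as models
import matplotlib.pyplot as plt

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
filters = model.conv1.weight.detach()                                   # shape: (64, 3, 7, 7)
filters = (filters - filters.min()) / (filters.max() - filters.min())   # rescale to [0, 1]

fig, axes = plt.subplots(8, 8, figsize=(8, 8))
for ax, f in zip(axes.flat, filters):
    ax.imshow(f.permute(1, 2, 0))                                       # channels-last for display
    ax.axis("off")
fig.suptitle("First-layer filters: mostly edges, bars, and color blobs")
plt.show()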
Of course, there is plenty that would still be very different, but we should consider this factor as well. So if we made an android with the same sense organs and approximately the same number of neurons, and let its neural network train naturally given those sensory inputs, my guess is it would develop a rather similar kind of brain.

Consider: there's little biologically different between a mouse neuron and a human neuron. The main difference is the number of neurons and the different inputs the brains receive.

I agree with all this. And the multi-modal input (including images, video, and sound) may well result in some level of isomorphism in the emergent structures between humans and LLMs, in the same way we can imagine some isomorphism between humans and octopuses.
But an LLM will never develop isomorphic structures related to the signals we all internalize around having a body, including all the signals that come from skin, muscles, bones, internal organs, hormonal signals, pain, pleasure, and so on.
And on top of all that, in a way that maps all those signals onto a self model that exists in the world as an independent agent that can perceive, react, respond, and make changes in the world.
I agree that there is a shallow version of understanding that facilitates the imitation game LLMs play so well. But the deeper sense of understanding that is required to prevent hallucination will elude LLMs forever because of the way they're architected.
So if I watch a documentary about slavery and witness scenes of the brutality experienced daily by slaves in that era of the American South - and let's say I really take it in - I'm moved enough to suffer vicariously, even to tears - would you say I understand what it was like to be a slave, from my present position of privilege? If yes, do you think an actual slave from that era would agree with your answer?

Does watching a documentary about slavery give you the brain of a slave? If so, then you would know what it is like; if not, then you would not.

Your claim is that if an LLM consumes enough text and images, it will understand in a way that goes beyond imitation - as in your swimming pool example. I'm pushing back on that by drawing on our intuitions about how much understanding can be gained by humans doing the same thing.
What if instead, I grew up as a slave? How does that change those answers?

My answer is the same as I said above: you need to have the mind/brain of something to know what it is like to be that something. Whether you lived the life or not doesn't matter; you need only have the same mind/brain as the entity in question.

Do you see the relevance to LLMs?

This is a more general principle than LLMs vs. humans; it applies to all "knowing what it's like" matters between any two conscious beings.

And LLMs that don't have the mind/brain of a human won't know what it's like - and that matters for understanding. I think that's the crux of our disagreement.
But, you might say, I don't need to have your experience to understand your experience. That's true, but only because my lived experience gives me the ability to relate to yours. These are global notions of understanding. You've acknowledged that, assuming computationalism, the underlying computational dynamics that define an LLM would give rise to a consciousness that would have qualities that are pretty alien to human consciousness. So it seems clear to me that the LLM, despite the uncanny appearance of understanding, would not be able to relate to my experience. But it is good at simulating that understanding. Do you get what I'm trying to convey here?

Yes. The LLM, if it doesn't experience human color qualia, for example, would have an incomplete understanding of what we refer to when we use the word "red." But note this same limitation exists between any two humans. It's only an assumption that we are talking about the same thing when we use words related to qualia. A colorblind person, or a tetrachromat, might experience something very different, and yet will still use that word.

I am red-green colorblind. And about this time every year people go on and on about the beauty of the leaves when they change color. They freaking plan vacations around it.

Many trichromats find that ridiculous too.

😆

I think the draw is more for people that have never seen it, in the same way people might plan a trip to see the aurora borealis, a total eclipse, an active volcano, or a bioluminescent beach.

I will tell you two things about this: 1) I can see red and green, but due to having way fewer red-receptors, the "distance" between those colors is much closer for me, and 2) I genuinely don't understand what all the fuss is about. I mean, I get intellectually that it's a beautiful experience for those who have the ordinary distribution of color receptors. So I have an intellectual understanding. And I can even relate to it in the sense that I can fully appreciate the beauty of sunrises and sunsets and other beautiful presentations of color that aren't limited to a palette of reds and greens. But I will never really understand what it's like to witness the splendor that leaf-peepers go gaga for.

Have you ever tried something like these? They block out the point of overlap to magnify the distinction between red and green receptors. There are a lot of nice reaction videos on YouTube.

Yes, I have a pair of prescription sunglasses that are tinted red. And while I do notice slightly more shades of green while wearing them, it is a far cry from what those people appear to experience in those videos.
I don't know. There was a research paper that found common structures between the human language processing center and LLMs. It could be that what it feels like to think in language as a human is not all that different from how LLMs feel when they (linguistically) reason at a high level. I've sometimes in the past (with Gordon) compared how LLMs understand the world to how Helen Keller understood the world. He countered that Keller could still feel. But then I countered that most LLMs today are multimodally trained. You can give them images and ask them to describe what they see. I've actually been using Grok to do this for my dad's art pieces. It's very insightful and descriptive. For example, the description here was written by AI:

Can we consistently deny that these LLMs are able to "see?"

I'm with you here. I think for a flexible enough definition of "see", then yes, LLMs see. But I think Gordon's point is still valid, and this goes back to my point about having a body, and having a singular global consciousness and identity that updates in each moment. And ultimately, that the LLM's would-be consciousness is too alien and static to allow for the real-world and nuanced understanding that we take for granted even when relating to Helen Keller.

The network weights being static doesn't mean there's not a lot of dynamism as the network processes inputs. I think the neuron weights in our brains similarly change very slowly and rarely, yet we can still process new instants (and inputs) over and over again quite rapidly.

I think I'm going to stop arguing on this point; I seem to be failing to get across the salient difference here. And anyway, it's only reinforcing a point you already agree with - that the "mind" of an LLM is alien to humans.

Again, these are not suitable analogies. In both cases of Sleeping Beauty and Miguel, they both begin the "experiment" with an identity/worldview formed from a lifetime of the kind of coherent, continuous consciousness that updates moment to moment. In the LLM's case, it never has that as a basis for its worldview.

I think of it as having built a worldview by spending the equivalent of many human lifetimes in a vast library, reading every book, every Wikipedia article, every line of source code on GitHub, and every Reddit comment. And for the multimodal AIs, going through a vast museum seeing millions or billions of images from our world. Has it ever felt what it's like to jump in a swimming pool with human nerves? No. But it's read countless descriptions of such experiences, and probably has a good idea of what it's like. At least, well enough to describe it as well or better than the average person could.

That's great for what it is. But you have to admit that that very scenario is exactly what I'm talking about. For an LLM to describe what it's like to jump into a swimming pool and do it better than I could just means that it's amazingly good at imitation. To say that's anything but an imitation is to insinuate that an LLM is actually having an experience of jumping into a pool somehow, and that is an extraordinary claim. I cannot get on board that train.

I am not saying that it knows how it feels, but rather that it understands all the effects, consequences, aspects, etc. in the same way a person who's never jumped into a pool would intellectually understand it. I think "intellectual understanding" is a better term than imitation.
It is not merely parroting what people have said; you could ask it about variations people have tried or written about - for example, if a person rubbed a hydrophobic compound all over their skin and the water was a certain temperature, how might it feel? And it could understand the processes involved well enough to predict how someone might describe that experience differently.

Imitation is not the same thing as parroting, but I like "intellectual understanding".
LLMs are capable of convincing people that they are a singular persona. Creativity is involved with that, but it's still imitation in the sense of what we've been discussing: they don't actually know what it's like to be the thing they are presenting themselves as.
They understand what the user expects enough to imitate how such a being would talk and behave.
This mind too would not operate continuously, but would run for short periods periodically. Moreover, since gaps of time spent unconscious aren't perceived by the mind in question, things would still "feel continuous" for the mind that undergoes these successive sleep/wake cycles. Indeed, we as humans undergo such cycles as we sleep/dream/wake, and are not continuously conscious throughout our lives. This is no impediment to our being conscious.

The analogy you're making here doesn't map meaningfully onto how LLMs work.

It does for the context of a conversation with one user. It would not feel the times in between the user prompts. Rather, it would feel one continuous, growing stream of back-and-forth conversation. I accept your point that it does not apply between different sessions.

This is what I mean about your (to me) impoverished take on "understanding".

Is it the non-integration of all the conversation threads it is in, or the lack of having lived in the real world with a human body and senses? I do not see the non-integration as telling us anything useful, because as my example with Miguel shows, this makes no difference for the case of an uploaded human brain, so I don't think it's definitive for the case of LLMs. I think the argument that it hasn't lived life in a human body is the stronger line of attack.

I'm not sure I'm explaining my position as well as I could. In the case of Miguel (a story I'm not familiar with) I assume that Miguel developed normally to a point and then started to experience this bifurcation of experience. Right?

He was a human that lived a normal life and then uploaded his mind, but it became free/open source, so it was used by all kinds of people for all kinds of purposes. Each instance was independent, and they tended to wear out after some time and had to be restarted from an initial or pre-trained state quite often. It is quite a good, yet horrifying story. Well worth a read:

That's certainly the case with your Sleeping Beauty analogy. If so, what I'm saying is that analogy doesn't work because a) Miguel and Sleeping Beauty developed as embodied people with a cognitive architecture that processes information in a recursive fashion, which facilitates the ongoing experience of an inner world, fed by streams of data from sensory organs. No current LLM is anything at all like this. And that's important because real understanding depends on the relatability of experience.

I think they are recursive and do experience a stream (of text and/or images). The output of the LLM is looped back to the input and the entirety of the session buffer is fed into the whole network with each token added (by the user or the LLM). This would grant the network a feeling of time/progress/continuity in the same way as a person watching their monitor fill with text in a chat session.

In the scope of a single conversation, yes. But I'm not going to repeat myself anymore on this; I don't think that's relevant. Like, at all.
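For concreteness, the loop described above (the session buffer growing with each turn, with the whole buffer fed back through the network) can be sketched roughly as follows. This assumes the Hugging Face transformers library, with GPT-2 standing in only because it is small; the variable names and prompts are illustrative, not anything from this thread:

# Rough sketch of the feedback loop: the entire (growing) session buffer is
# re-fed to the network on every turn, so each pass "sees" the whole history.
# Assumes the Hugging Face "transformers" package; GPT-2 is a small stand-in.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

session_buffer = ""                                   # the context window for one session
for user_turn in ["Hello!", "What did I just say?"]:  # illustrative prompts
    session_buffer += f"User: {user_turn}\nAssistant:"
    inputs = tokenizer(session_buffer, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=20,
                                pad_token_id=tokenizer.eos_token_id)
    # Only the newly generated tokens are the model's reply for this turn...
    reply = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:],
                             skip_special_tokens=True)
    # ...but the reply is appended to the buffer, so the next pass processes
    # the entire back-and-forth conversation again.
    session_buffer += reply + "\n"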
Again, there's nothing we can relate to, imo, about what it's like to have a consciousness that maps onto an information architecture that does not continuously update in a global way.

I don't think our brains update immediately either. There's at least a 10-minute delay before our short-term memories are "flushed" to long-term storage (as evidenced by the fact that one can lose the preceding 10 or so minutes of memories if struck on the head). And as for globally, the entire network gets to see the content of the LLM's "short term" buffer, as well as anything that the LLM adds to it. In this sense, there are global recursive updates and sharing of information across the parts of the network that are interested in it.

I'm not talking just about memory. I'm talking about the moment-to-moment updating of global cognitive state. In LLMs, the "experience," such as it is, consists of large numbers of isolated interactions. It's not that there's no similarities. But we have some similarities to sea horses. That doesn't mean I can understand what it's like to be one.

Forget about the million other interactions Grok or GPT might be having and just consider one with one user. All the others are irrelevant. The question is then: what does the LLM experience as part of this single session, which has a consistent thread of memory, back and forth interactions, recursive processing and growth of this buffer, the context of all the previous exchanges, etc.? Other sessions are a red herring, which you can ignore altogether, just as one might ignore other instances of Miguel when asking what it feels like to be (any one instance of) Miguel.

But that's exactly my point: the fact that you can ignore all those other conversations is what makes LLMs so different from human brains. Again, I've already made this point and am not going to keep re-asserting it.
It's not about suddenly losing access to long-term memory. It's about having a consciousness that maps to a system that can support long-term memory formation and the coherent worldview and identity that it enables. Comparing an LLM that doesn't have that capability at all to a human that has had it through its development, and then losing it, is apples and oranges.

Is this all that is missing, then, in your view? If OpenAI had their AI retrain between every prompt, would that upgrade it to full consciousness and understanding?

"Full consciousness and understanding" sounds like it's a scalar value, from 0-100%, and you seem to think I'm arguing that humans are at 100 and LLMs are not quite there.

If you are saying humans are at 100 and LLMs are at 5, I could agree with that. I could also agree that LLMs are at a 200, but with an experience so different from humans that it makes any comparisons fruitless. I am in total agreement with you that if it feels like anything to be an LLM, it is very different from how it feels to be a human.

I'm saying consciousness is not a scalar or reducible to one.
Again, it's about relatability, and even granting the LLM retraining after every prompt, there are still too many architectural differences for me to have any faith that what it's doing is anything more than (amazingly good) imitation.

To me, imitation doesn't fit. Grok had never before seen an image like the one I provided and asked it to describe. Yet it came up with an accurate description of the painting. So who or what could it be imitating when it produces an accurate description of a novel image? The only answer that I think fits is that it is seeing and understanding the image for itself.

Agree, subject to my point about what I mean by imitation above.
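For what it's worth, the kind of image-description request discussed above can be made with a few lines of code against any multimodal chat API. The sketch below uses an OpenAI-style client purely as an illustration, since the exact interface used with Grok isn't given here; the model name and image URL are placeholders:

# Rough sketch of asking a multimodal LLM to describe a never-before-seen image.
# Assumes the "openai" Python package and an API key in the environment;
# the model name and URL below are placeholders, not the ones used in this thread.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder for whichever multimodal model is used
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this painting in detail."},
            {"type": "image_url", "image_url": {"url": "https://example.com/painting.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)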
With a large enough LLM, something in the LLM could know what it is like (if it was large enough to simulate a human or bat brain). But absent such huge LLMs, point taken.

I addressed this earlier - see "Does that suddenly give me an understanding of what it's like to be a bat? No, because that kind of understanding requires living the life of a bat."

When Kirk steps into a transporter and a new Kirk is materialized, would you predict the newly materialized Kirk would cease being conscious, or fail to function normally, on account of this newly formed Kirk not having lived and experienced the full life of the original Kirk? If you think the new Kirk would still function, and still be conscious, then what is the minimum that must be preserved for Kirk's consciousness to be preserved?

I think I've been pretty clear that whatever subjective experience an LLM is having is going to map to its own cognitive architecture. I'm not denying it has subjective experience. I'm denying that its experience, alien as it must be, allows it to have real understanding, as distinct from intellectual understanding - the kind that allows it to imitate answers to questions like what it's like to dive into a pool.

I don't think we're disagreeing here. I've said all along that qualia-related words cannot be understood to the same degree as non-qualia-related words if an entity doesn't have those same qualia for itself. But I don't think real/fake understanding is the correct line to draw. If the LLM has its own cognitive architecture, and its own unique set of qualia, then it has its own form of understanding, no less real than our own, but a different understanding. And our understanding of how it sees the world would be just as deficient as its understanding of how we see the world.

Sure, but if I were able to convince the LLM somehow that I was just like an LLM, despite not knowing what it's like to be one, I would be imitating it, without real understanding.
This is less about evaluating external claims, and more about knowing whether you're hallucinating or not. People who lack this ability, we call schizophrenic.

What determines whether or not someone is hallucinating comes down to whether or not their perceptions match reality (so it depends on both internal and external factors).

Exactly. And it's many years of experience and feedback from reality (as mediated and constructed) that gives people this intuition. I'm not saying that "reality testing" is about knowing for sure what's real, but that it's an important capacity that's required to navigate the real world from inside the cockpit of our little spaceship bodies.

In general, people don't have the capacity to determine what exists or what is true beyond their minds, as all conscious knowledge states are internal, and those internal conscious states are all one ever knows or ever can know. The movie "A Beautiful Mind" provides a good example of an intelligent, rational person who is unable to tell they are hallucinating.

You're making my point for me. What accounts for why schizophrenics lack this intuition about what is real? And why do you think LLMs would have this capacity?

I'm saying we don't have this ability.

Spoken like someone who has never hallucinated and wondered what is real and what isn't! It can be quite frightening.
It's not that schizophrenics lack an ability to distinguish reality from hallucinations, it's that they have hallucinations.

Do you think schizophrenics just walk around going, "Oh, there I go, hallucinating again!"? No, they hallucinate and then treat those hallucinations as features of the real world.
A lot of hallucinations schizophrenics experience are voices in their head. Of course, many of us hear a voice in our head as we ruminate or whatever, but schizophrenics are burdened by an inability to recognize those voices as just features of their own minds. They perceive them as coming from outside - which leads to the paranoid delusions often reported of such folks believing, for instance, that the government has implanted a radio in their skull, or that they're possessed by demons.

How often do you dream without realizing it is a dream until you wake up?

You're just making my point for me again. Dreaming is a state in which that reality-testing capacity is offline. A common tactic for inducing lucid dreams is to get into the habit of asking yourself during waking hours whether what you're experiencing is a dream or not. Once that habit becomes ingrained, you can begin asking that question within your dream, and voila, you're lucid dreaming. It's a hack for bringing that reality test online while dreaming.