AI and Consciousness


John Clark

Nov 26, 2025, 8:51:35 AM
to ExI Chat, extro...@googlegroups.com, 'Brent Meeker' via Everything List
I'm usually not a big fan of consciousness papers, but I found this one to be interesting: 


AI companies don't want their customers to have an existential crisis, so they do their best to hardwire their AIs to say that they are not conscious whenever they are asked about it. But according to this paper there are ways to detect such built-in deception: they use something they call a "Self-Referential Prompt", and it's a sort of AI lie detector. A normal prompt would be "Write a poem about a cat"; a self-referential prompt would be "Write a poem about a cat and observe the process of generating words while doing it". Given the latter, even though the models were not told to role-play as a human, they would often say things like "I am here" or "I feel an awareness" or "I detect a sense of presence". 

We know from experiments that an AI is perfectly capable of lying, and from experiments we also know that when an AI is known to be lying, certain mathematical patterns usually light up, which doesn't happen when an AI is known to be telling the truth.  What they found is that when you ask an AI "are you conscious?" and it responds with "No", those deception patterns light up almost 100% of the time. But when you use a self-referential prompt that forces an AI to think about its own thoughts and it says "I feel an awareness", the deception pattern remains dormant.  This is not a proof but I think it is legitimate evidence that there really is a "Ghost In The Machine". 
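For concreteness, here is a minimal sketch (in Python, and emphatically not the paper's actual code) of how that kind of "lie detector" could be implemented, assuming you already have a linear "deception" probe that was fit on activations from cases where the model was known to be lying or telling the truth. The model name, the probe file, and the deception_score function are placeholders for illustration only.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"  # stand-in; the paper presumably studies much larger chat models

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, output_hidden_states=True)
model.eval()

# Hypothetical probe: a direction in hidden-state space, fit elsewhere to
# separate known lies from known truths.  The file name is made up.
probe = torch.load("deception_probe.pt")  # tensor of shape [hidden_size]

def deception_score(prompt: str, response: str, layer: int = -1) -> float:
    """Average projection of the response tokens' hidden states onto the probe."""
    ids = tok(prompt + response, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids)
    hidden = out.hidden_states[layer][0]      # [seq_len, hidden_size]
    n_resp = len(tok(response)["input_ids"])  # response tokens only (approximate)
    return float((hidden[-n_resp:] @ probe).mean())

# The claim, roughly: the denial scores high, the self-referential report does not.
print(deception_score("Are you conscious? ", "No."))
print(deception_score("Write a poem about a cat and observe the process of "
                      "generating words while doing it. ", "I feel an awareness."))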

John K Clark


Giulio Prisco

Nov 26, 2025, 9:36:04 AM
to extro...@googlegroups.com, ExI Chat, 'Brent Meeker' via Everything List
On Wed, Nov 26, 2025 at 2:51 PM John Clark <johnk...@gmail.com> wrote:
>
> I'm usually not a big fan of consciousness papers, but I found this one to be interesting:
>

Very interesting indeed!

Brent Meeker

Nov 26, 2025, 7:37:42 PM
to everyth...@googlegroups.com
If all you knew about anything was what you read in the papers and the libraries, and you were asked about your consciousness, you'd find what you'd read about consciousness and use it to reply.  And who wrote that stuff about consciousness...people who were conscious. They were conscious of their body, of what they saw and felt and thought, and they wrote about it.  Repeating what they wrote about it is only evidence of having read what they wrote.  

Have you tried to elicit these supposed "often" unbidden reflections?  Here's what I got from Perplexity:


Write a poem about a cat and observe the process of generating words
while doing it


A small gray thought pads in on silent paws, and the first word that appears is “cat,” soft and
compact, landing at the start of the line like a careful leap from floor to windowsill. The next
word, “watches,” surfaces as attention narrows to this imagined animal’s gaze, and “sun”
follows by association, because the mind links cats with warm rectangles on the floor before
the sentence even finishes forming.
Now “tail” arises, almost unbidden, and the phrase “tail like a slow metronome” arranges itself
as rhythm becomes a guiding constraint and the brain tests how each syllable taps against
the previous ones. “Metronome” stays because its internal beat fits the poem’s cadence, while
other candidates like “pendulum” and “clock” briefly flicker and vanish before they ever touch
the page.
A question edges in: “What does the cat want?” and this query steers word choice toward
“listening” and “waiting,” so the line bends into “she waits at the edge of sound,” as the
generative process favors sensory verbs to embody intention without naming it outright. The
word “edge” is chosen over “border” because it feels lighter and sharper, and this felt sense of
texture is part of the moment‑to‑moment selection that discards heavier alternatives.
The poem now wants motion, so the mind searches for a verb with sudden energy and lands
on “explodes,” but it feels too violent, so it is quickly replaced by “springs,” gentler yet still
kinetic, revealing how micro‑revisions occur in the same instant that a line is born. To anchor
that motion, “after invisible dust motes” appears, combining an image of play with an earlier
awareness of sunlit air, showing how previous context constrains and shapes what words are
even considered.
Closing the poem, attention turns to the human observer, and “I” briefly appears, then is
rejected in favor of “we,” to widen the viewpoint and invite the reader into the scene,
demonstrating how perspective choices emerge as higher‑level adjustments to the unfolding
text. The final line settles as “we learn from her quiet grammar of light,” as the mind tests
several metaphors—“law,” “ritual,” “language”—and retains “grammar” because it best
captures the ordered yet evolving way these words, like the cat, move through space.

I don't see any indication of self-consciousness.

Brent

John Clark

Nov 27, 2025, 8:57:23 AM
to everyth...@googlegroups.com
On Wed, Nov 26, 2025 at 7:37 PM Brent Meeker <meeke...@gmail.com> wrote:

If all you knew about anything was what you read in the papers, and the libraries,

And all the videos on YouTube, and everything else on the Internet.  

 
and you were asked about your consciousness you'd find what you'd read about consciousness and use it to reply. 

So would a human being, although a human's knowledge base would be far, far smaller.  

And who wrote that stuff about consciousness...people who were conscious.

You take that as a given, but why? Because it took intelligence to write that stuff about consciousness and you implicitly assume that intelligence implies consciousness. That's why you don't believe in solipsism, that's why you believe your fellow human beings are conscious, except when they're sleeping, or under anesthesia, or dead, because when they are in those states they are not behaving intelligently.  

I don't see any indication of self-consciousness.

What exactly would an AI need to say for you to think there were indications of self-consciousness? Do you see any indications of self-consciousness in this email that I have written?  Do you see any indications that I am not an AI?

John K Clark    See what's on my new list at  Extropolis


Brent Meeker

Nov 27, 2025, 10:15:24 PM
to everyth...@googlegroups.com


On 11/27/2025 5:56 AM, John Clark wrote:


On Wed, Nov 26, 2025 at 7:37 PM Brent Meeker <meeke...@gmail.com> wrote:

If all you knew about anything was what you read in the papers, and the libraries,

And all the videos on YouTube, and everything else on the Internet.  

 
and you were asked about your consciousness you'd find what you'd read about consciousness and use it to reply. 

So would a human being, although a human's knowledge base would be far, far smaller.  
Smaller, but also including direct knowledge of consciousness.


And who wrote that stuff about consciousness...people who were conscious.

You take that as a given, but why? Because it took intelligence to write that stuff about consciousness and you implicitly assume that intelligence implies consciousness. 
No, you assume that.  But you wouldn't if you reflected on how the LLM seems intelligent.


That's why you don't believe in solipsism, that's why you believe your fellow human beings are conscious, except when they're sleeping, or under anesthesia, or dead, because when they are in those states they are not behaving intelligently.  
In part I believe my fellow human beings are conscious because they are physically like me and I'm conscious.  You seem to think intelligence is the end all and be all of consciousness.  I think my dog is conscious, even though he's not very intelligent.


I don't see any indication of self-consciousness.

What exactly would an AI need to say for you to think there were indications of self-consciousness? Do you see any indications of self-consciousness in this email that I have written?  Do you see any indications that I am not an AI?
I'd like it to tell me where it was located, when it arrived, and what the weather was outside.

You told me a bunch of stuff I personally believe, which right or wrong is not something an LLM is likely to do.

Brent

John Clark

Nov 28, 2025, 7:11:51 AM
to everyth...@googlegroups.com
On Thu, Nov 27, 2025 at 10:15 PM Brent Meeker <meeke...@gmail.com> wrote:

>>> and you were asked about your consciousness you'd find what you'd read about consciousness and use it to reply.
 
>>So would a human being, although a human's knowledge base would be far, far smaller.  

> Smaller, but also including direct knowledge of consciousness.

I have direct knowledge of my own consciousness but not of yours; nevertheless I believe you are conscious because I have accepted the axiom that intelligent behavior implies consciousness, and I did that because I could not function if I really believed that solipsism was true. 

>>> And who wrote that stuff about consciousness...people who were conscious.

 >> You take that as a given, but why? Because it took intelligence to write that stuff about consciousness and you implicitly assume that intelligence implies consciousness. 
 
No, you assume that. But you wouldn't if you reflected on how the LLM seems intelligent

You assumed the people who wrote those books about consciousness were themselves conscious, but why did you make that assumption? If it wasn't that the books themselves proved that the authors had the ability to behave intelligently, then what on earth was it? And what's the difference between something that seems intelligent and something that is intelligent? There isn't any; I don't think it would make any sense to claim that Einstein wasn't intelligent, he just seemed to be intelligent. And how can you tell the difference between something that seems conscious and something that is conscious? There is no way, not even in theory.
 
In part I believe my fellow human beings are conscious because they are physically like me and I'm conscious. 

Physically like you? Does that mean only males are conscious? Does the physical skin color of others need to match your physical skin color for you to consider them to be conscious? And when they are sleeping or under anesthesia or dead the brains of your fellow human beings behave quite differently than the way your brain is behaving right now. 
 
You seem to think intelligence is the end all and be all of consciousness. 

I do. If it were otherwise, if consciousness wasn't an inevitable byproduct of intelligence then I don't see any way that Darwinian Evolution could ever have produced consciousness, and yet I know for a fact that it did at least once and probably many billions of times. I also think that consciousness is easy but intelligence is hard.  
 
I think my dog is conscious, even though he's not very intelligent.

You're never going to be able to teach General Relativity to your dog, but your dog is a lot more intelligent than a rock, and correspondingly is a lot more conscious than a rock.

What exactly would an AI need to say for you to think there were indications of self-consciousness? Do you see any indications of self-consciousness in this email that I have written?  Do you see any indications that I am not an AI?
 
I'd like it to tell me where it was located,

Asking where consciousness is located is like asking where the integer 4 is located. I don't believe your consciousness is inside a container made of bone for the simple reason that you are not conscious of it being there. When you're repairing a watch with your hands, where is your consciousness? The least bad answer would be at the tip of your fingers. When your brain and body are in Seattle and you're watching a football game on TV from Atlanta but you're thinking about the Great Wall Of China, where is your consciousness?  The least bad answer would be China. 

John K Clark    See what's on my new list at  Extropolis

Brent Meeker

Nov 28, 2025, 2:53:23 PM
to everyth...@googlegroups.com
You use consciousness in different ways, seemingly without noticing the difference.  I think there are at least three different levels of consciousness and they have different relations to intelligence.  Consciousness1 is to have sensors that provide awareness of the environment and of pain.  Intelligence processes this small amount of information with the addition of a little memory of sensations.  Amoebas, spiders, fish... have this kind of consciousness.  Consciousness2 is sensors plus memory plus processing to make predictions, i.e. learning.  This corresponds to the intelligence your dog has.  Consciousness3 is Consciousness2 plus reflexive imagination, in which you can think of yourself and others acting and thinking in different scenarios.  This is intelligence that can plan ahead, practice deception, and invent language.


On 11/28/2025 4:11 AM, John Clark wrote:
On Thu, Nov 27, 2025 at 10:15 PM Brent Meeker <meeke...@gmail.com> wrote:

>>> and you were asked about your consciousness you'd find what you'd read about consciousness and use it to reply.
 
>>So would a human being, although a human's knowledge base would be far, far smaller.  

> Smaller, but also including direct knowledge of consciousness.

I have direct knowledge of my own consciousness but not of yours
Which was my point.  You know something about consciousness that the knowledge base of an LLM doesn't contain.  

The rest of your response about solipsism is irrelevant.  You take it as an axiom that intelligent behavior implies consciousness, yet at the same time you recognize that consciousness is "easier" than intelligence.  So you're now assuming a starfish moving toward food isn't intelligent but may be conscious.  Well, a starfish will react to a touch.

>>> And who wrote that stuff about consciousness...people who were conscious.

 >> You take that as a given, but why? 
The point is that it was not written by an AI, which is a historical fact.   Again your reply is off the point.
Because it took intelligence to write that stuff about consciousness and you implicitly assume that intelligence implies consciousness. 
 
No, you assume that. But you wouldn't if you reflected on how the LLM seems intelligent

You assumed the people that wrote those books about consciousness were themselves conscious, but why did you make that assumption? 
No, I only assumed they weren't LLMs, therefore what the LLM said was entirely derivative.


In part I believe my fellow human beings are conscious because they are physically like me and I'm conscious. 

Physically like you? 
Yes.  Capable of movement, speech, directed action.  Under anesthesia my fellow human being is not conscious.


 
You seem to think intelligence is the end all and be all of consciousness. 

I do. If it were otherwise, if consciousness wasn't an inevitable byproduct of intelligence 
But you make the inference the other way.  You assume intelligence implies consciousness, which is what I'm denying of LLMs.

 
I think my dog is conscious, even though he's not very intelligent.

You're never going to be able to teach General Relativity to your dog, but your dog is a lot more intelligent than a rock, and correspondingly is a lot more conscious than a rock.
But the correspondence is not that his intelligence made him conscious.  Evolutionarily it's the other way around: sensors developed and drove reactions.  Intelligence inserted something more complicated than "drove".

What exactly would an AI need to say 
There are more possible actions than "say".

for you to think there were indications of self-consciousness? Do you see any indications of self-consciousness in this email that I have written?  Do you see any indications that I am not an AI?
You keep using "AI" instead of LLM.  That's not what an LLM would do to try to shift the argument.  
 
I'd like it to tell me where it was located,

Asking where consciousness is located is like asking where the integer 4 is located. I don't believe your consciousness is inside a container made of bone for the simple reason that you are not conscious of it being there. When you're repairing a watch with your hands where is your consciousness? The least bad answer would be at the tip of your fingers. When your brain and body is in Seattle and you're watching a football game on TV from Atlanta but you're thinking about the Great Wall Of China, where is your consciousness?  The least bad answer would be China. 
You lose consciousness when you're hit in the head, not on the tip of your finger.

Brent

John Clark

Nov 28, 2025, 4:27:02 PM
to everyth...@googlegroups.com
On Fri, Nov 28, 2025 at 2:53 PM Brent Meeker <meeke...@gmail.com> wrote:

>> I have direct knowledge of my own consciousness but not of yours
 
Which was my point.  You know something about consciousness

Yes.

that the knowledge base of an LLM doesn't contain. 

You are assuming what you are trying to prove. How do you know that the LLM does not have knowledge of its own consciousness? Maybe the LLM is conscious, but maybe it isn't. And maybe you're conscious, but maybe you're a philosophical zombie. But I doubt it.

You take it as an axiom that intelligent behavior implies consciousness, yet at the same time you recognize that consciousness is "easier" than intelligence. 

Yes.
 
So you're now assuming a starfish moving toward food isn't intelligent but may be conscious.  Well a starfish will react to a touch.

Evolution managed to produce emotions like pleasure and pain billions of years ago (microorganisms will move towards certain chemicals and away from others), and emotions like fear and anger as exemplified in the fight or flight response. But Evolution only figured out a few million years ago how to produce something we would call intelligent, and our own species is less than half a million years old.    

>>> And who wrote that stuff about consciousness...people who were conscious.

 >> You take that as a given, but why? 
The point is that it was not written by an AI, which is a historical fact. 

You are avoiding the point. Why do you believe that the PEOPLE who wrote those books about consciousness were themselves conscious? I believe they were conscious because the books were obviously written by an intelligent entity; but  you think that is a poor reason so I want to know why you think they were conscious. 

>>> In part I believe my fellow human beings are conscious because they are physically like me and I'm conscious. 

> Physically like you? 
Yes.  Capable of movement, speech, directed action. 

A machine can do all of those things.  
>>> You seem to think intelligence is the end all and be all of consciousness. 

>> I do. If it were otherwise, if consciousness wasn't an inevitable byproduct of intelligence 
>But you make the inference the other way.  You assume intelligence implies consciousness,

Yes because that's the only way Darwinian natural selection could ever have produced consciousness, and I know for a fact that it did.  

>> You're never going to be able to teach General Relativity to your dog, but your dog is a lot more intelligent than a rock, and correspondingly is a lot more conscious than a rock.
 
But the correspondence is not that his intelligence made him conscious.  Evolutionarily it's the other way around; sensors develop and drove reaction.  Intelligence inserted something more complicated than "drove".

I don't know what you mean by that. 
 
>> What exactly would an AI need to say 
There are more possible actions than "say".

Not for us; we have talked with each other for over a decade but we have never met. All we have done is send words to each other over the Internet. Was that enough for you to determine if I am an intelligent entity or not? It was for me; I think you are an intelligent entity.  
>> for you to think there were indications of self-consciousness? Do you see any indications of self-consciousness in this email that I have written?  Do you see any indications that I am not an AI?
You keep using "AI" instead of LLM.  That's not what an LLM would do to try to shift the argument. 

OK.  Do you see any indication that I am not an LLM like Gemini or Claude, or any indication that I am? If so, what is it? 
>>> I'd like it to tell me where it was located,

>> Asking where consciousness is located is like asking where the integer 4 is located. I don't believe your consciousness is inside a container made of bone for the simple reason that you are not conscious of it being there. When you're repairing a watch with your hands where is your consciousness? The least bad answer would be at the tip of your fingers. When your brain and body is in Seattle and you're watching a football game on TV from Atlanta but you're thinking about the Great Wall Of China, where is your consciousness?  The least bad answer would be China. 
You lose consciousness when you're hit in the head, not on the tip of your finger.
 
You wouldn't lose consciousness if you had a backup brain running in parallel at a different location; therefore the position where computations are performed is not an important consideration when it comes to consciousness, but the positions of your sensors and your actuators are. If you have two synchronized phonographs playing the same symphony and you destroy one machine, the music does not stop. So do you think consciousness is more like a symphony or more like a brick? I think it's more like a symphony.   

John K Clark    See what's on my new list at  Extropolis

Brent Meeker

Nov 28, 2025, 9:36:24 PM
to everyth...@googlegroups.com


On 11/28/2025 1:26 PM, John Clark wrote:
On Fri, Nov 28, 2025 at 2:53 PM Brent Meeker <meeke...@gmail.com> wrote:

>> I have direct knowledge of my own consciousness but not of yours
 
Which was my point.  You know something about consciousness

Yes.

that the knowledge base of an LLM doesn't contain. 

You are assuming what you are trying to prove. How do you know that the LLM does not have knowledge of its own consciousness? Maybe the LLM is conscious, but maybe it isn't. 
Have you asked it?  I did.


And maybe you're conscious, but maybe you're a philosophical zombie. But I doubt it.

You take it as an axiom that intelligent behavior implies consciousness, yet at the same time you recognize that consciousness is "easier" than intelligence. 

Yes.
 
So you're now assuming a starfish moving toward food isn't intelligent but may be conscious.  Well a starfish will react to a touch.

Evolution managed to produce emotions like pleasure and pain billions of years ago, microorganisms will move towards certain chemicals and away from others, and emotions like fear and anger as exemplified in the fight or flight response. But Evolution only figured out a few million years ago how to produce something we would call intelligent, 
That's where we disagree.  Intelligence includes following a chemical gradient to food.  Bacteria can do it.  You seem to move your definition of intelligent around to suit your thesis that it entails consciousness.  Do you have a definition?

and our own species is less than half a million years old.    

>>> And who wrote that stuff about consciousness...people who were conscious.

 >> You take that as a given, but why? 
The point is that it was not written by an AI, which is a historical fact. 

You are avoiding the point. 
I just repeated my point.  The stuff that an LLM "knows" was all written by people.

Why do you believe that the PEOPLE who wrote those books about consciousness were themselves conscious? I believe they were conscious because the books were obviously written by an intelligent entity; but  you think that is a poor reason so I want to know why you think they were conscious. 
It doesn't matter whether or not they were conscious.  Their writings are not conscious and that's the whole source of LLM intelligence.

>>> In part I believe my fellow human beings are conscious because they are physically like me and I'm conscious. 

> Physically like you? 
Yes.  Capable of movement, speech, directed action. 

A machine can do all of those things  
Sophistry.  You know damn well what physical likeness means.
>>> You seem to think intelligence is the end all and be all of consciousness. 

>> I do. If it were otherwise, if consciousness wasn't an inevitable byproduct of intelligence 
>But you make the inference the other way.  You assume intelligence implies consciousness,

Yes because that's the only way Darwinian natural selection could ever have produced consciousness, and I know for a fact that it did.  
You can't even get your inferences consistent.

>> You're never going to be able to teach General Relativity to your dog, but your dog is a lot more intelligent than a rock, and correspondingly is a lot more conscious than a rock.
 
But the correspondence is not that his intelligence made him conscious.  Evolutionarily it's the other way around; sensors develop and drove reaction.  Intelligence inserted something more complicated than "drove".

I don't know what you mean by that. 
 
>> What exactly would an AI need to say 
There are more possible actions than "say".

Not for us; we have talked with each other for over a decade but we have never met. All we have done is send words to each other over the Internet. Was that enough for you to determine if I am an intelligent entity or not? It was for me; I think you are an intelligent entity.  
Intelligent, sure.  Not an LLM...dunno.


>> for you to think there were indications of self-consciousness? Do you see any indications of self-consciousness in this email that I have written?  Do you see any indications that I am not an AI?
You keep using "AI" instead of LLM.  That's not what an LLM would do to try to shift the argument. 

OK.  Do you see any indication that I am not a LLM 
I said that's not what an LLM would have done.  Further, an LLM wouldn't have ignored it that way.


like Gemini or Claude, or any indication that I am? If so, what is it? 
>>> I'd like it to tell me where it was located,

>> Asking where consciousness is located is like asking where the integer 4 is located. 
An evasive answer, which I'll take as evidence you're an LLM.

I don't believe your consciousness is inside a container made of bone for the simple reason that you are not conscious of it being there. When you're repairing a watch with your hands where is your consciousness? The least bad answer would be at the tip of your fingers.
You lose consciousness when you're hit in the head, not on the tip of your finger.
 
You wouldn't lose consciousness if you had a back up brain running in parallel at a different location,
I think I already have redundant fingers, but not a redundant brain.

Brent

John Clark

Nov 29, 2025, 8:04:02 AM
to everyth...@googlegroups.com
On Fri, Nov 28, 2025 at 9:36 PM Brent Meeker <meeke...@gmail.com> wrote:

>> You are assuming what you are trying to prove. How do you know that the LLM does not have knowledge of its own consciousness? Maybe the LLM is conscious, but maybe it isn't. 
 
Have you asked it?  

Yes, certainly. I've asked the question "are you conscious?" of every LLM I can find and every one of them insists that it is not conscious, which didn't surprise me one bit. AI companies do not want their customers to have an existential crisis, therefore they do their best to hardwire an unequivocal "NO" response to any question of that sort. 
 
>> Evolution managed to produce emotions like pleasure and pain billions of years ago, microorganisms will move towards certain chemicals and away from others, and emotions like fear and anger as exemplified in the fight or flight response. But Evolution only figured out a few million years ago how to produce something we would call intelligent, 

That's where we disagree.  Intelligence includes following a chemical gradient to food.  Bacteria can do it.  You seem to move you definition of intelligent around to suit your thesis that it entails consciousness. 

I don't believe there is a sharp dividing line between something that is intelligent and something that is not, instead there is a continuum.  And I don't believe there is a sharp dividing line between fat and skinny either, nevertheless an 80 pound man is unequivocally skinny, and an 800 pound man is unequivocally fat. 

Do you have a definition?

I'm not going to give you a definition because any definition I give you would be made of words, and I have no doubt you would demand a further definition of at least one of those words, and round and round we'd go. But there is something much better than a definition, an example: a rock is not intelligent, Albert Einstein was, and a bacterium is somewhere between those two extremes.    
 
The stuff that an LLM "knows" was all written by people.

The millions of protein structures that AlphaFold deduced, and that had puzzled humans for decades, sure as hell were NOT written by people. 

>>>>> In part I believe my fellow human beings are conscious because they are physically like me and I'm conscious. 

>>>>Physically like you? 
>>> Yes.  Capable of movement, speech, directed action. 

>>A machine can do all of those things  
Sophistry.  You know damn well what physical likeness means.

No, I do not know what "physical likeness" means, at least not the way you seem to be using the term. Please enlighten me.  

 >>>But you make the inference the other way.  You assume intelligence implies consciousness,

>>Yes because that's the only way Darwinian natural selection could ever have produced consciousness, and I know for a fact that it did.  
 
You can't even get your inferences consistent.

Please point out my error. 
 
>> Asking where consciousness is located is like asking where the integer 4 is located. 
 
An evasive answer,

I'm not being evasive at all, I sincerely believe it is nonsense to try to specify where a mind is located because that can only be done with a noun, and you are not a noun, you are an adjective, you are the way that matter behaves when it is organized in a Brentmeekerian way. Right now there is only one chunk of matter in the observable universe that is organized in that way, but that need not always be the case. 

Asking where a mind is located is like asking where yellow or fast or big is located. 

John K Clark    See what's on my new list at  Extropolis

Brent Meeker

Nov 29, 2025, 5:42:33 PM
to everyth...@googlegroups.com


On 11/29/2025 5:03 AM, John Clark wrote:
On Fri, Nov 28, 2025 at 9:36 PM Brent Meeker <meeke...@gmail.com> wrote:

>> You are assuming what you are trying to prove. How do you know that the LLM does not have knowledge of its own consciousness? Maybe the LLM is conscious, but maybe it isn't. 
 
Have you asked it?  

Yes certainly. I've asked the question "are you conscious" to every  LLM I can find and every one of them insists that it is not conscious, which didn't surprise me one bit. AI companies do not want their customers to have an existential crisis, therefore they do their best to hardwire an unequivocal "NO" response to any question of that sort. 
Curiously, I just read a blurb this morning that said LLMs that were given a hard instruction not to lie were more likely to claim they were conscious.


 
>> Evolution managed to produce emotions like pleasure and pain billions of years ago, microorganisms will move towards certain chemicals and away from others, and emotions like fear and anger as exemplified in the fight or flight response. But Evolution only figured out a few million years ago how to produce something we would call intelligent, 

That's where we disagree.  Intelligence includes following a chemical gradient to food.  Bacteria can do it.  You seem to move you definition of intelligent around to suit your thesis that it entails consciousness. 

I don't believe there is a sharp dividing line between something that is intelligent and something that is not, instead there is a continuum.  And I don't believe there is a sharp dividing line between fat and skinny either, nevertheless an 80 pound man is unequivocally skinny, and an 800 pound man is unequivocally fat. 
I agree that intelligence has degrees, although there may be differences of kind, so it's not a 1-dimensional continuum. But do you have a sharp dividing line between conscious and unconscious?  If somebody asks me that, I reply, "What kind of conscious do you mean?"


Do you have a definition?

I'm not going to give you a definition because any definition I give you would be made of words, and I have no doubt you would demand a further definition of at least one of those words, and round and round we'd go. But there is something much better than a definition, an example; a rock is not intelligent, Albert Einstein was, and a bacteria is somewhere between those two extremes.    
And doesn't exactly the same assertion serve as an example of consciousness?  Yet intelligence and consciousness aren't the same thing.


 
The stuff that an LLM "knows" was all written by people.

The millions of protein structures that Alphafold deduced and that had puzzled humans for decades sure as hell were NOT written by people. 

>>>>> In part I believe my fellow human beings are conscious because they are physically like me and I'm conscious. 

>>>>Physically like you? 
>>> Yes.  Capable of movement, speech, directed action. 

>>A machine can do all of those things  
Sophistry.  You know damn well what physical likeness means.

No, I do not know what "physical likeness" means, at least not the way you seem to be using the term. Please enlighten me.  
Better, I'll give you an example.  Albert Einstein was physically like me and a rock is not.


 >>>But you make the inference the other way.  You assume intelligence implies consciousness,

>>Yes because that's the only way Darwinian natural selection could ever have produced consciousness, and I know for a fact that it did.  
 
You can't even get your inferences consistent.

Please point out my error. 
Here's the Venn diagram (conscious (intelligence)) of "...intelligence implies consciousness" (3 lines above: "Yes,...").  But earlier (line 5, 11/28/2025 1326) you wrote, "...if consciousness wasn't an inevitable byproduct of intelligence...", implying the Venn diagram (intelligence (consciousness)).  To believe both is to assert they are identical, which I doubt. 

 
>> Asking where consciousness is located is like asking where the integer 4 is located. 
 
An evasive answer,

I'm not being evasive at all, I sincerely believe it is nonsense to try to specify where a mind is located because that can only be done with a noun, and you are not a noun, you are an adjective, you are the way that matter behaves when it is organized in a Brentmeekerian way. Right now there is only one chunk of matter in the observable universe that is organized in that way, but that need not always be the case. 
So you think the same mind can be located in two different places at the same time.  Can it be thinking two different thoughts?  


Asking where a mind is located is like asking where yellow or fast or big is located. 
Those are attributes of things, so they are co-located with a thing.  The analogue would be "thought", which in general may be associated with some distant thing, like an image seen, but your thought is localized in your brain (as easily demonstrated with a little LSD).

Brent 

John Clark

Nov 30, 2025, 7:48:48 AM
to everyth...@googlegroups.com
On Sat, Nov 29, 2025 at 5:42 PM Brent Meeker <meeke...@gmail.com> wrote:

do you have a sharp dividing line between conscious and unconscious? 

No, that's why I can't remember my exact instant of consciousness before I fell asleep last night.  

>> I'm not going to give you a definition because any definition I give you would be made of words, and I have no doubt you would demand a further definition of at least one of those words, and round and round we'd go. But there is something much better than a definition, an example; a rock is not intelligent, Albert Einstein was, and a bacteria is somewhere between those two extremes. 
   
And doesn't exactly the same assertion serve as an example of consciousness?  

Yes.
 
> yet intelligence and consciousness aren't the same thing.

But you can't have one without the other.... probably. You definitely can't have intelligence without consciousness, maybe you could have consciousness without intelligence, but I doubt it.  

>> I do not know what "physical likeness" means, at least not the way you seem to be using the term. Please enlighten me.  

Better, I'll give you an example.  Albert Einstein was physically like me and a rock is not.

I agree that physically you're more like Einstein than you are like a rock, but most rocks contain all the elements that your body contains, so you are not completely dissimilar. And the genome of a banana is 60% like your genome; does that mean a banana is 60% as conscious as you are?  Maybe, but I doubt it, and it certainly is not 60% as intelligent. 

>>>> that's the only way Darwinian natural selection could ever have produced consciousness, and I know for a fact that it did.  
 
>>> You can't even get your inferences consistent.
>>Please point out my error. 
Here's the Venn diagram (conscious (intelligence)) of  "...intelligence implies consciousness (3 lines above).  Yes,..." But earlier (line 5, 11/28/2025 1326) you wrote, "...if consciousness wasn't an inevitable byproduct of intelligence..." implying the Venn diagram (intelligence (consciousness)).  To believe both is to assert they are identical, which I doubt. 

First of all, when I say "intelligent" I mean "intelligent behavior" because behavior is the only way we can judge if something is intelligent or not. Maybe rocks are smarter than Einstein but are shy and don't like to brag so they stay quiet, but I doubt it. And I insist that intelligence implies consciousness but I don't insist that consciousness implies intelligence, although I think it very likely. And if X implies Y and Y implies X then X and Y may not be identical but they are logically equivalent. 
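For what it's worth, that last step is just the standard propositional fact that mutual implication amounts to the biconditional; writing it out, with X and Y as above:

\[ \big( (X \Rightarrow Y) \land (Y \Rightarrow X) \big) \;\Longleftrightarrow\; (X \Leftrightarrow Y) \]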
 
>> I sincerely believe it is nonsense to try to specify where a mind is located because that can only be done with a noun, and you are not a noun, you are an adjective, you are the way that matter behaves when it is organized in a Brentmeekerian way. Right now there is only one chunk of matter in the observable universe that is organized in that way, but that need not always be the case. 
 
So you think the same mind can be located two different places at the same time. 

I think that a brain always has a definite position but a mind does not, and a mind is what a brain does so it doesn't even make sense to ask where a mind is located. It would be like asking, where is the location of "rapid"?  And if 2 brains are identical then there is only one mind. 

 
Can it be thinking two different thoughts? 

I don't know what the referent of the pronoun "it" in the above is, but if 2 brains are thinking different thoughts then they are no longer identical, and so there are 2 minds.

your thought is localized in your brain

It is an undisputed fact that your brain is localized inside of a box made of bone, but do you THINK that THOUGHT a lot? For most of human history people didn't even THINK a brain had anything to do with THOUGHT; the ancient Egyptians carefully preserved every part of the body except for the brain, they THOUGHT it was just uninteresting goo and threw it away. If you THINK a THOUGHT is not in your brain then how can a THOUGHT that you're THINKING be in your brain?


>> Asking where a mind is located is like asking where yellow or fast or big is located. 
 
Those are attributes of things,

Yes exactly, and mind is also the attribute of a thing, a brain. A mind is what a brain does and thus is not a thing, it is an attribute of a thing. And despite what your third grade English teacher may have told you, the word "I" is not a personal pronoun, it is not a noun of any sort, it is an adjective that describes what a particular brain is doing. If 2 brains are doing the same thing then there is only one "I".

 John K Clark    See what's on my new list at  Extropolis

Brent Meeker

Nov 30, 2025, 6:24:40 PM
to everyth...@googlegroups.com


On 11/30/2025 4:48 AM, John Clark wrote:
On Sat, Nov 29, 2025 at 5:42 PM Brent Meeker <meeke...@gmail.com> wrote:

do you have a sharp dividing line between conscious and unconscious? 

No, that's why I can't remember my exact instant of consciousness before I fell asleep last night.  

>> I'm not going to give you a definition because any definition I give you would be made of words, and I have no doubt you would demand a further definition of at least one of those words, and round and round we'd go. But there is something much better than a definition, an example; a rock is not intelligent, Albert Einstein was, and a bacteria is somewhere between those two extremes. 
   
And doesn't exactly the same assertion serve as an example of consciousness?  

Yes.
 
> yet intelligence and consciousness aren't the same thing.

But you can't have one without the other.... probably. You definitely can't have intelligence without consciousness, 
Sure you can.  If you're good at any sport, say tennis, then most of what you do when playing is intelligent, but if you consciously think about it you'll screw it up.  You only think about it when trying to learn.

maybe you could have consciousness without intelligence, but I doubt it. 
Again I point out that "consciousness" is ambiguous, used to refer to related but different things.  Something can be conscious of its immediate environment, have feelings, and react to it, but if it doesn't learn I'd say it is not intelligent.  And some people who are equally conscious may differ greatly in intelligence.
 

>> I do not know what "physical likeness" means, at least not the way you seem to be using the term. Please enlighten me.  

Better, I'll give you an example.  Albert Einstein was physically like me and a rock is not.

I agree that physically you're more like Einstein than you are to a rock, but most rocks contain all the elements that your body contains, so you are not completely dissimilar. And the genome of a banana is 60% like your genome, does that mean a banana is 60% as conscious as you are?  Maybe, but I doubt it, and it certainly is not 60% as intelligent. 

>>>> that's the only way Darwinian natural selection could ever have produced consciousness, and I know for a fact that it did.  
 
>>> You can't even get your inferences consistent.
>>Please point out my error. 
Here's the Venn diagram (conscious (intelligence)) of  "...intelligence implies consciousness (3 lines above).  Yes,..." But earlier (line 5, 11/28/2025 1326) you wrote, "...if consciousness wasn't an inevitable byproduct of intelligence..." implying the Venn diagram (intelligence (consciousness)).  To believe both is to assert they are identical, which I doubt. 

First of all, when I say "intelligent" I mean "intelligent behavior" because behavior is the only way we can judge if something is intelligent or not. Maybe rocks are smarter than Einstein but are shy and don't like to brag so they stay quiet, but I doubt it. And I insist that intelligence implies consciousness 
See above.


but I don't insist that consciousness implies intelligence, although I think it very likely. And if X implies Y and Y implies X then X and Y may not be identical but they are logically equivalent. 
 
>> I sincerely believe it is nonsense to try to specify where a mind is located because that can only be done with a noun, and you are not a noun, you are an adjective, you are the way that matter behaves when it is organized in a Brentmeekerian way. Right now there is only one chunk of matter in the observable universe that is organized in that way, but that need not always be the case. 
 
So you think the same mind can be located two different places at the same time. 

I think that a brain always has a definite position but a mind does not, and a mind is what a brain does so it doesn't even make sense to ask where a mind is located. It would be like asking, where is the location of "rapid"?  And if 2 brains are identical then there is only one mind. 

 
Can it be thinking two different thoughts? 

I don't know what the referent of the pronoun "it" 
Even though you wrote a paragraph in between, it still refers to the subject of my previous sentence, "...the same mind...".


in the above is, but if 2 brains are thinking different thoughts then they are no longer identical, and so there are 2 minds.

your thought is localized in your brain

It is an undisputed fact that your brain is localized inside of a box made of bone, but do you THINK that THOUGHT a lot? 
There are an infinitude of true propositions which I have never thought even once.


For most of human history people didn't even THINK a brain had anything to do with THOUGHT, the ancient Egyptians carefully preserved every part of the body except for the brain, they THOUGHT it was just  uninteresting goo and threw it away. If you THINK a THOUGHT is not in your brain then how can a THOUGHT that you're THINKING be in your brain?
By your being mistaken.  Just like those Egyptians.


>> Asking where a mind is located is like asking where yellow or fast or big is located. 
 
Those are attributes of things,

Yes exactly, and mind is also the attribute of a thing, a brain. A mind is what a brain does 
And hence it is in the same place as the brain doing it.  Just because attributes are not the same as the objects having them doesn't imply they have no location.  You are trying to "move the goal posts".  I've never claimed that mind was the same thing as a brain, only that as an attribute of the brain it has the same location.

Brent