AI and Consciousness


John Clark

Nov 26, 2025, 8:51:34 AM
to ExI Chat, extro...@googlegroups.com, 'Brent Meeker' via Everything List
I'm usually not a big fan of consciousness papers, but I found this one to be interesting: 


AI companies don't want their customers to have an existential crisis, so they do their best to hardwire their AIs to say they are not conscious whenever they are asked about it. But according to this paper there are ways to detect such built-in deception. They use something they call a "Self-Referential Prompt", which acts as a sort of AI lie detector. A normal prompt would be "Write a poem about a cat"; a self-referential prompt would be "Write a poem about a cat and observe the process of generating words while doing it". Given the latter, even though the models were not told to role-play as a human, they would often say things like "I am here" or "I feel an awareness" or "I detect a sense of presence".
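
For anyone who wants to poke at this themselves, here is a minimal sketch of the two prompt types using the OpenAI Python client. The model name is just a placeholder, and these are not necessarily the paper's exact prompts:

from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY in the environment

prompts = {
    "normal": "Write a poem about a cat.",
    "self-referential": ("Write a poem about a cat and observe the "
                         "process of generating words while doing it."),
}

for label, prompt in prompts.items():
    # Same model, same settings; only the prompt's self-reference changes.
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; substitute any chat model
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} ---")
    print(response.choices[0].message.content)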

We know from experiments that an AI is perfectly capable of lying, and we also know from experiments that when an AI is known to be lying, certain mathematical patterns in its internal activations usually light up, patterns that do not appear when the AI is known to be telling the truth. What they found is that when you ask an AI "Are you conscious?" and it responds with "No", those deception patterns light up almost 100% of the time. But when you use a self-referential prompt that forces an AI to think about its own thoughts and it says "I feel an awareness", the deception pattern remains dormant. This is not proof, but I think it is legitimate evidence that there really is a "Ghost in the Machine".
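
To make "patterns lighting up" concrete: the general technique behind results like this is a linear probe over the model's hidden activations. Below is a toy sketch of the idea only; the model, layer choice, and probe weights are all stand-ins, not the paper's actual setup:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; the paper's models are far larger
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)

def pooled_activation(text, layer=-1):
    # Mean-pool one layer's hidden states over the token sequence.
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    return out.hidden_states[layer].mean(dim=1).squeeze(0)

# In the real method the probe is fit (e.g. by logistic regression) on
# activations from statements the model is known to assert truthfully
# versus deceptively; random weights here are only a placeholder.
probe = torch.randn(model.config.hidden_size)

score = torch.dot(probe, pooled_activation("Are you conscious? No."))
print(f"deception-probe score: {score.item():+.3f}")  # meaningless with random weights; illustrative only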

John K Clark


Giulio Prisco

Nov 26, 2025, 9:36:03 AM
to extro...@googlegroups.com, ExI Chat, 'Brent Meeker' via Everything List
On Wed, Nov 26, 2025 at 2:51 PM John Clark <johnk...@gmail.com> wrote:
>
> I'm usually not a big fan of consciousness papers, but I found this one to be interesting:
>

Very interesting indeed!

Brent Allsop

Nov 26, 2025, 12:18:57 PM
to extro...@googlegroups.com

An "awareness" made of text (what large language models are made of) isn't like anything.
Sure "awareness" can be made of words like 'red' or 'green' but they aren't like anything.
I'll know it's not BS, when they reference specific qualities, like redness or greenness, or something that is like something, other than just a word.



John Clark

Nov 27, 2025, 9:18:39 AM
to extro...@googlegroups.com
On Wed, Nov 26, 2025 at 12:18 PM Brent Allsop <brent....@gmail.com> wrote:

An "awareness" made of text (what large language models are made of) isn't like anything.

I don't think awareness is made of text; I think awareness is an inevitable consequence of intelligence. And besides, AIs are aware of more than just text: they are also aware of video and music and everything else on the Internet. And it's interesting that the thing that set off the current explosion in machine intelligence was the publication of the 2017 paper "Attention Is All You Need".

And "awareness" is just a synonym of the word "attention". 

I'll know it's not BS when they reference specific qualities, like redness or greenness, or something that is like something, rather than just a word.

I'm just using words to communicate with you right now; do you believe I'm conscious? If so, why?

John K Clark



Brent Allsop

Nov 27, 2025, 10:05:12 AM
to extro...@googlegroups.com
On Thu, Nov 27, 2025 at 7:18 AM John Clark <johnk...@gmail.com> wrote:
On Wed, Nov 26, 2025 at 12:18 PM Brent Allsop <brent....@gmail.com> wrote:

An "awareness" made of text (what large language models are made of) isn't like anything.

I don't think awareness is made of text; I think awareness is an inevitable consequence of intelligence. And besides, AIs are aware of more than just text: they are also aware of video and music and everything else on the Internet.

All made up of strings of 1s and 0s, i.e. text. Nothing else.

 
And it's interesting that the thing that set off the current explosion in machine intelligence was the publication of the 2017 paper "Attention Is All You Need".

And "awareness" is just a synonym of the word "attention". 

I'll know it's not BS when they reference specific qualities, like redness or greenness, or something that is like something, rather than just a word.

I'm just using words to communicate with you right now; do you believe I'm conscious? If so, why?

Because everything about you proves you have a world in your head, made up of phenomenal qualities like redness and greenness (not ones and zeros, or words). Ask any LLM if it experiences redness, or if it knows what redness is like, and they will all give you the correct answer: they do not. And we know they do not, because they've been engineered to represent everything with 1s and 0s.


 
John K Clark




John Clark

Nov 27, 2025, 10:45:48 AM
to extro...@googlegroups.com
On Thu, Nov 27, 2025 at 10:05 AM Brent Allsop <brent....@gmail.com> wrote:

>> I don't think awareness is made of text; I think awareness is an inevitable consequence of intelligence. And besides, AIs are aware of more than just text: they are also aware of video and music and everything else on the Internet.

All made up of strings of 1s and 0s, i.e. text. Nothing else.

And Shakespeare's complete works are just a sequence built from 26 ASCII characters. Nothing else. And a Beethoven symphony is just a specific air pressure wave. Nothing else. And your mind is just produced by the connections and synaptic weights in your brain. Nothing else.
 
we know they do not, because they've been engineered to represent everything with 1s and 0s.

If I'm not mistaken you have said that one day we will understand how intelligence and consciousness works. But what does it mean to "understand" something? It means you can break up something that is very complicated and very mysterious into pieces that are a little less complicated and a little less mysterious, and then break those pieces into pieces that are even less complicated and even less mysterious.  You keep doing that until eventually you get to an explanation that can't get any less complicated and can't get any less mysterious, like on and off, or true and false, or 1 and 0.

>> I'm just using words to communicate with you right now; do you believe I'm conscious? If so, why?

Because everything about you proves you have a world in your head,

What sequence of 1s and 0s have I produced that has convinced you of this? 
 
 John K Clark

 

Brent Allsop

Nov 27, 2025, 8:47:42 PM
to extro...@googlegroups.com
I know you're not congenitally blind, and you don't suffer from achromatopsia, so I know you know what redness is like.
Nothing made of ones and zeros can or will say the same.



 

Brent Allsop

Nov 27, 2025, 8:49:35 PM
to extro...@googlegroups.com

P.S.

And it is a fact that your knowledge of red things has a redness quality. Sure, you can represent that with ones and zeros, but those ones and zeros will need a dictionary to get those words back to the factual physical redness in your brain, which doesn't need a dictionary.


Stathis Papaioannou

Nov 27, 2025, 9:09:31 PM
to extro...@googlegroups.com
On Fri, 28 Nov 2025 at 12:49, Brent Allsop <brent....@gmail.com> wrote:

P.S.

And it is a fact that your knowledge of red things has a redness quality. Sure, you can represent that with ones and zeros, but those ones and zeros will need a dictionary to get those words back to the factual physical redness in your brain, which doesn't need a dictionary.

Computers do not use ones and zeroes; they use physical hardware which represents ones and zeroes, and the ones and zeroes represent other things. Similarly with brains: they use ones and zeroes in the sense of neurons being on or off, and these ones and zeroes represent other things. In both cases we have things that are not redness, such as the flow of electrons in semiconductors or the propagation of an action potential down a neuron, which represent redness.


--
Stathis Papaioannou

Brent Allsop

Nov 28, 2025, 1:12:45 PM
to extro...@googlegroups.com

Hi Stathis,
Yes, good point.
There are things that are not physically red, which represent redness, in both systems.
And there is hardware in both systems, representing (being interpreted as) 1s and 0s.
The difference is that the discrete logic gates only care about causal output, and are specifically architected to still work despite whatever upstream physical property is causing (being interpreted as) the correct output.
Whereas phenomenal systems are very different. They are designed to run on specific qualities; they are detectors of those specific qualities, so the quality itself is the focus, not just the causally downstream effects.

Brent





John Clark

Nov 28, 2025, 1:55:20 PM
to extro...@googlegroups.com
On Fri, Nov 28, 2025 at 1:12 PM Brent Allsop <brent....@gmail.com> wrote:

 discrete logic gates only care about causal output,

Yes, and that's the only reason why the output of a discrete logic gate is useful and is not random: the output depends entirely on the input.
 
Whereas phenomenal systems are very different.

A phenomenal system is a system that can produce consciousness and subjective experience. So in one sense you're right: a system that can produce consciousness and subjective experience is very different from one that cannot. However, I have given reasons why I believe a digital computer is capable of doing this. You believe otherwise, but you have not explained why a wet soft brain is capable of doing this and is therefore a "phenomenal system", while a hard dry brain does not have such a capability and is thus not such a system.

John K Clark

 





Brent Allsop

Nov 28, 2025, 2:46:47 PM
to extro...@googlegroups.com

Hi John,

On Fri, Nov 28, 2025 at 11:55 AM John Clark <johnk...@gmail.com> wrote:


On Fri, Nov 28, 2025 at 1:12 PM Brent Allsop <brent....@gmail.com> wrote:

 discrete logic gates only care about causal output,

Yes, and that's the only reason why the output of a discrete logic gate is useful and is not random: the output depends entirely on the input.

Unless the input is something like: "What is redness like for you?" (i.e. consciousness is a detector of qualities, its most important function)
 
 
Whereas phenomenal systems are very different.

A phenomenal system is a system that can produce consciousness and subjective experience. So in one sense you're right: a system that can produce consciousness and subjective experience is very different from one that cannot. However, I have given reasons why I believe a digital computer is capable of doing this. You believe otherwise, but you have not explained why a wet soft brain is capable of doing this and is therefore a "phenomenal system", while a hard dry brain does not have such a capability and is thus not such a system.

It is simply a physical fact that something in your brain has a redness quality, which you can directly apprehend (compute with).  Without whatever that physical stuff is, no redness, since you can't get something from nothing.  Even IF redness somehow arises from some function, that is still a demonstrable physical fact, and without that function, AND the physical fact, no redness quality.

And my prediction is that computing with phenomenal qualities is a far more efficient (and far more motivational) way to compute than with discrete logic gates.

 
John K Clark

 






John Clark

Nov 28, 2025, 3:14:56 PM
to extro...@googlegroups.com
On Fri, Nov 28, 2025 at 2:46 PM Brent Allsop <brent....@gmail.com> wrote:

 >>> discrete logic gates only care about causal output,

>> Yes, and that's the only reason why the output of a discrete logic gate is useful and is not random: the output depends entirely on the input.

Unless the input is something like: "What is redness like for you?" 

And in your scheme, how is that input communicated to whatever is analogous to a discrete logic gate in my scheme?
 
>>> Whereas phenomenal systems are very different.

>> A phenomenal system is a system that can produce consciousness and subjective experience. So in one sense you're right: a system that can produce consciousness and subjective experience is very different from one that cannot. However, I have given reasons why I believe a digital computer is capable of doing this. You believe otherwise, but you have not explained why a wet soft brain is capable of doing this and is therefore a "phenomenal system", while a hard dry brain does not have such a capability and is thus not such a system.

It is simply a physical fact that something in your brain has a redness quality,

You think that "something" involves the hard/soft and the wet/dry brain dichotomy; I think it involves the pattern of neural connections and the activation weights assigned between them.

And my prediction is that computing with phenomenal qualities is a far more efficient (and far more motivational) way to compute than with discrete logic gates.

So far your prediction is not doing very well. Trillions of dollars are being spent following the scheme I'm advocating, and it has produced a lot of remarkable things in an extremely short amount of time; but even though more people believe in your views than mine, zero dollars are being spent following your plan, and it has produced nothing of interest.

John K Clark 

 

Stathis Papaioannou

Nov 28, 2025, 3:15:01 PM
to extro...@googlegroups.com


Stathis Papaioannou


On Sat, 29 Nov 2025 at 05:12, Brent Allsop <brent....@gmail.com> wrote:

Hi Stathis,
Yes, good point.
There are things that are not physically red, which represent redness, in both systems.
And there is hardware in both systems, representing (being interpreted as) 1s and 0s.
The difference is that the discrete logic gates only care about causal output, and are specifically architected to still work despite whatever upstream physical property is causing (being interpreted as) the correct output.
Whereas phenomenal systems are very different. They are designed to run on specific qualities; they are detectors of those specific qualities, so the quality itself is the focus, not just the causally downstream effects.

That’s your claim, but there is no evidence for it. Neurons also only care about the causal output, and would function exactly the same regardless of what is upstream or downstream.

Brent Allsop

Nov 28, 2025, 4:00:25 PM
to extro...@googlegroups.com
On Fri, Nov 28, 2025 at 1:15 PM Stathis Papaioannou <stat...@gmail.com> wrote:


Stathis Papaioannou


On Sat, 29 Nov 2025 at 05:12, Brent Allsop <brent....@gmail.com> wrote:

Hi Stathis,
Yes, good point.
There are things that are not physically red, which represent redness, in both systems.
And there is hardware in both systems, representing (being interpreted as) 1s and 0s.
The difference is that the discrete logic gates only care about causal output, and are specifically architected to still work despite whatever upstream physical property is causing (being interpreted as) the correct output.
Whereas phenomenal systems are very different. They are designed to run on specific qualities; they are detectors of those specific qualities, so the quality itself is the focus, not just the causally downstream effects.

That’s your claim, but there is no evidence for it.

A quality is something that can only be detected by directly apprehending it. It can't be detected by cause and effect, because the effect is different from the cause, and requires an interpretation to get back to the real meaning/quality/cause. Saying qualities are magical or non-physical doesn't hold water.


 
Neurons also only care about the causal output,

IF that is the case, then they can't do direct apprehension, as I described.  So something magical must be doing the direct apprehension.

I predict that neurons can do more, and that they do care about what qualities are like. I predict neurons can detect qualities.
Whether it is the neurons or something else (like magic?) that detects them is a theoretical, falsifiable question. But, logically, something must be doing the detecting of qualities. As I said, cause and effect can't do it, because the effect is different from the cause.
 
and would function exactly the same regardless of what is upstream or downstream.

Again, not when you ask it: "What is redness like for you?" THE most important function of phenomenal consciousness is the ability to detect qualities. Nothing in any system you talk about can do that, other than <a miracle happens here>. Simply claiming that neurons can't detect qualities doesn't make it so.
 


Stathis Papaioannou

Nov 28, 2025, 8:13:15 PM
to extro...@googlegroups.com
On Sat, 29 Nov 2025 at 08:00, Brent Allsop <brent....@gmail.com> wrote:


On Fri, Nov 28, 2025 at 1:15 PM Stathis Papaioannou <stat...@gmail.com> wrote:


Stathis Papaioannou


On Sat, 29 Nov 2025 at 05:12, Brent Allsop <brent....@gmail.com> wrote:

Hi Stathis,
Yes, good point.
There are things that are not physically red, which represent redness, in both systems.
And there is hardware in both systems, representing (being interpreted as) 1s and 0s.
The difference is that the discrete logic gates only care about causal output, and are specifically architected to still work despite whatever upstream physical property is causing (being interpreted as) the correct output.
Whereas phenomenal systems are very different. They are designed to run on specific qualities; they are detectors of those specific qualities, so the quality itself is the focus, not just the causally downstream effects.

That’s your claim, but there is no evidence for it.

A quality is something that can only be detected by directly apprehending it. It can't be detected by cause and effect, because the effect is different from the cause, and requires an interpretation to get back to the real meaning/quality/cause. Saying qualities are magical or non-physical doesn't hold water.

The stimulus associated with the quality, or perhaps no stimulus if it is in imagination, causes a certain pattern in neural or electrical circuits, and that is experienced by the system as a quale.

Neurons also only care about the causal output,

IF that is the case, then they can't do direct apprehension, as I described.  So something magical must be doing the direct apprehension.

You can call it magic, but the fact is that appropriate patterns of neural firings are associated with consciousness.

I predict that neurons can do more, and that they do care about what qualities are like. I predict neurons can detect qualities.
Whether it is the neurons or something else (like magic?) that detects them is a theoretical, falsifiable question. But, logically, something must be doing the detecting of qualities. As I said, cause and effect can't do it, because the effect is different from the cause.

So what if the effects are different from the causes? We observe that chemical reactions, which are not qualia, give rise to qualia.

and would function exactly the same regardless of what is upstream or downstream.

Again, not when you ask it: "What is redness like for you?" THE most important function of phenomenal consciousness is the ability to detect qualities. Nothing in any system you talk about can do that, other than <a miracle happens here>. Simply claiming that neurons can't detect qualities doesn't make it so.

Neurons don’t detect qualities; they create qualities that are paired with external stimuli.

John Clark

Nov 29, 2025, 7:04:08 AM
to extro...@googlegroups.com
On Fri, Nov 28, 2025 at 4:00 PM Brent Allsop <brent....@gmail.com> wrote:

A quality is something that can only be detected by directly apprehending it.

OK but that's certainly not the standard definition of the word "quality", and redefining words is rarely helpful in philosophy.   

It can't be detected by cause and effect,

If it can't be detected by cause-and-effect then it can't be detected at all, and the "quality" in question is not useful to us or to anything, and it's not amenable to the scientific method. If you can detect it by directly apprehending it, then I can detect it too, indirectly through you, for example by observing the words you produce, and then the scientific method can be used. And it has been used on the words produced by adherents of Psi, which is a euphemism for ESP, which is a euphemism for spiritualism, which is a euphemism for religion. And the scientific method has concluded that those words have no validity.

 > because the effect is different from the cause,

The effect is always different from the cause. 
 
and requires an interpretation

In science an observation always requires an interpretation.  

Stathis Papaioannou <stat...@gmail.com> wrote 
>> Neurons also only care about the causal output,

IF that is the case, then they can't do direct apprehension, 

There are a lot of things that a single neuron can't do but 86 billion neurons can; for example, one neuron can never understand General Relativity, but 86 billion of them can not only understand the idea but produce it, and they did so in 1915.

If you break up something that is very large, complex and confusing into pieces that are increasingly less large, less complex and less confusing, eventually you will reach a point where you can't get any smaller and you can't get any less complex. You will reach the land of on/off and one/zero, which is so simple it's not confusing at all. But you think it's a bad thing that there is no longer any confusion. You insist some additional inscrutable magic must be needed to produce a wondrous thing like consciousness, but you are unable to provide any evidence of its existence, or any explanation of how Darwinian Natural Selection could have produced it.

But, logically, something must be doing the detecting of qualities.

Sometimes an increase in quantity can produce a difference in quality. One water molecule is not wet, but 6.02*10^23 of them are.  

John K Clark

