> An "awareness" made of text (what large language models are made of) isn't like anything.
> I'll know it's not BS, when they reference specific qualities, like redness or greenness, or something that is like something, other than just a word.
On Wed, Nov 26, 2025 at 12:18 PM Brent Allsop <brent....@gmail.com> wrote:> An "awareness" made of text (what large language models are made of) isn't like anything.I don't think awareness is made of text, I think awareness is an inevitable consequence of intelligence. And besides, AIs are aware of more than just text, they are also aware of video and music and everything else on the Internet.
And it's interesting that the thing that set off the current explosion in machine intelligence was the publication of the 2017 paper "Attention Is All You Need". And "awareness" is just a synonym of the word "attention".
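For readers who haven't seen the paper, here is a minimal, purely illustrative sketch of the scaled dot-product attention mechanism it introduces (Python with NumPy; the variable names and toy shapes are my own, not the paper's):

import numpy as np

# Minimal sketch of scaled dot-product attention from
# "Attention Is All You Need" (Vaswani et al., 2017):
#   Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # how strongly each query "attends" to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V  # each output is a weighted mixture of the values

# Toy example: 3 tokens with 4-dimensional representations (shapes are arbitrary).
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((3, 4)) for _ in range(3))
print(attention(Q, K, V).shape)  # (3, 4)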
> I'll know it's not BS, when they reference specific qualities, like redness or greenness, or something that is like something, other than just a word.

I'm just using words to communicate with you right now. Do you believe I'm conscious? If so, why?

--
John K Clark
John K Clark wrote:

> I don't think awareness is made of text; I think awareness is an inevitable consequence of intelligence. And besides, AIs are aware of more than just text: they are also aware of video and music and everything else on the Internet.

All made up of strings of 1s and 0s, i.e. text. Nothing else. We know they do not [have such awareness], because they've been engineered to represent everything with 1s and 0s.

> I'm just using words to communicate with you right now. Do you believe I'm conscious? If so, why?

Because everything about you proves you have a world in your head.

P.S. And it is a fact that your knowledge of red things has a redness quality. Sure, you can represent that with ones and zeros, but those ones and zeros will need a dictionary to get those words back to the factual physical redness in your brain, which doesn't need a dictionary.
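To make the dictionary point concrete: the very same string of 1s and 0s means different things under different interpretations, and nothing in the bits themselves fixes which interpretation is meant. A minimal sketch (Python, purely illustrative):

bits = "011100100110010101100100"  # 24 bits

# Interpretation 1: three 8-bit ASCII characters.
chars = "".join(chr(int(bits[i:i+8], 2)) for i in range(0, len(bits), 8))
print(chars)  # 'red'

# Interpretation 2: a single unsigned integer.
print(int(bits, 2))  # 7497060

# Interpretation 3: an RGB colour triple.
r, g, b = (int(bits[i:i+8], 2) for i in range(0, len(bits), 8))
print((r, g, b))  # (114, 101, 100)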
On Fri, Nov 28, 2025 at 1:12 PM Brent Allsop <brent....@gmail.com> wrote:

> discrete logic gates only care about causal output,

Yes, and that's the only reason why the output of a discrete logic gate is useful and not random: the output depends entirely on the input.
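To make that concrete, here is a minimal, purely illustrative sketch (Python) of a gate as a deterministic function, alongside a second, physically different realization of the same function:

# A discrete logic gate is a pure function: the output depends only on
# the inputs, never on what physically produced them.
def nand_logic(a, b):
    return 0 if (a == 1 and b == 1) else 1

# A completely different realization -- a lookup table -- computes the
# same function. Anything with this input/output behaviour is a NAND gate.
NAND_TABLE = {(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 1): 0}

for a in (0, 1):
    for b in (0, 1):
        assert nand_logic(a, b) == NAND_TABLE[(a, b)]
        print(f"nand({a}, {b}) = {nand_logic(a, b)}")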
> Where phenomenal systems are very different.

A phenomenal system is a system that can produce consciousness and subjective experience. So in one sense you're right: a system that can produce consciousness and subjective experience is very different from one that cannot. However, I have given reasons why I believe a digital computer is capable of doing this. You believe otherwise, but you have not explained why a wet, soft brain is capable of doing this and is therefore a "phenomenal system", while a hard, dry brain does not have such a capability and is thus not such a system.
John K Clark
John K Clark wrote:

>> discrete logic gates only care about causal output,
>
> Yes, and that's the only reason why the output of a discrete logic gate is useful and not random: the output depends entirely on the input.

Unless the input is something like: "What is redness like for you?"

>> Where phenomenal systems are very different.
>
> [...] You believe otherwise, but you have not explained why a wet, soft brain is capable of doing this and is therefore a "phenomenal system", while a hard, dry brain does not have such a capability and is thus not such a system.

It is simply a physical fact that something in your brain has a redness quality. And my prediction is that computing with phenomenal qualities is a far more efficient (and far more motivational) way to compute than with discrete logic gates.
Hi Stathis,

Yes, good point. There are things that are not physically red which represent redness in both systems. And there is hardware in both systems representing (being interpreted as) 1s and 0s.

The difference is, the discrete logic gates only care about causal output, and are specifically architected to still work despite whatever upstream physical property is causing (being interpreted as) the correct output. Phenomenal systems are very different: they are designed to run on specific qualities, they are detectors of those specific qualities, so the quality itself is the focus, not just the causally downstream effects.
Stathis Papaioannou

On Sat, 29 Nov 2025 at 05:12, Brent Allsop <brent....@gmail.com> wrote:

> The difference is, the discrete logic gates only care about causal output, and are specifically architected to still work despite whatever upstream physical property is causing (being interpreted as) the correct output. Phenomenal systems are very different: they are designed to run on specific qualities, they are detectors of those specific qualities, so the quality itself is the focus, not just the causally downstream effects.

That's your claim, but there is no evidence for it. Neurons also only care about the causal output, and would function exactly the same regardless of what is upstream or downstream.
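On the standard computational picture, a neuron is exactly this kind of input-determined function. A minimal, purely illustrative threshold-unit sketch (Python with NumPy; the numbers are arbitrary):

import numpy as np

# A minimal threshold-neuron sketch: whether the cell fires is fully
# determined by its inputs and synaptic weights. The identity of the
# upstream units appears nowhere in the computation.
def neuron_fires(inputs, weights, threshold=1.0):
    return float(np.dot(inputs, weights)) >= threshold

spikes = np.array([1.0, 0.0, 1.0])   # which upstream cells fired
weights = np.array([0.6, 0.9, 0.5])  # synaptic strengths
print(neuron_fires(spikes, weights))  # True: 0.6 + 0.5 >= 1.0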
On Fri, Nov 28, 2025 at 1:15 PM Stathis Papaioannou <stat...@gmail.com> wrote:

> That's your claim, but there is no evidence for it.

A quality is something that can only be detected by directly apprehending it. It can't be detected by cause and effect, because the effect is different from the cause, and requires an interpretation to get back to the real meaning/quality/cause. Saying qualities are magical or non-physical doesn't hold water.

> Neurons also only care about the causal output,

IF that is the case, then they can't do direct apprehension, as I described, so something magical must be doing the direct apprehension. I predict that neurons can do more, and that they do care about what qualities are like. I predict neurons can detect qualities. Whether it is the neurons or something else (like magic?) doing it is a theoretical, falsifiable question. But, logically, something must be doing the detection of qualities. As I said, cause and effect can't do it, because the effect is different from the cause.

> and would function exactly the same regardless of what is upstream or downstream.

Again, not when you ask it: "What is redness like for you?" THE most important function of phenomenal consciousness is the ability to detect quality. Nothing in any system you talk about can do that, other than <a miracle happens here>. Simply claiming that neurons can't detect qualities doesn't make it so.
> A quality is something that can only be detected by directly apprehending it. It can't be detected by cause and effect, because the effect is different from the cause, and requires an interpretation.

> Stathis Papaioannou <stat...@gmail.com> wrote:
>
>> Neurons also only care about the causal output,
>
> IF that is the case, then they can't do direct apprehension,

> But, logically, something must be doing the detection of qualities.