How to Upload a Mind


John Clark

Apr 22, 2026, 7:48:07 AM
to ExI Chat, extro...@googlegroups.com, 'Brent Meeker' via Everything List

John K Clark    See what's on my list at  Extropolis

Brent Allsop

Apr 24, 2026, 3:20:18 PM
to extro...@googlegroups.com

As always, they ignore the most important 'behavior' that must be 'emulated'.

That is the ability to directly apprehend physical qualities like redness and greenness.
And my prediction is that computation based on subjectively bound physical qualities will be vastly more efficient (and more motivated) than brute-force discrete logic gates which aren't like anything.






--
You received this message because you are subscribed to the Google Groups "extropolis" group.
To unsubscribe from this group and stop receiving emails from it, send an email to extropolis+...@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/extropolis/CAJPayv0c8Lxk2mFJjtouZUhY890vdUV5JV9-oW1bHTnBeFKjAg%40mail.gmail.com.

John Clark

Apr 24, 2026, 4:20:11 PM
to extro...@googlegroups.com
On Fri, Apr 24, 2026 at 3:20 PM Brent Allsop <brent....@gmail.com> wrote:

As always, they ignore the most important 'behavior' that must be 'emulated'. That is the ability to directly apprehend physical qualities

And yet that is the EXACT quality you have ALWAYS ignored in every single interaction you have ever had with other members of your species since the day you were born. No exceptions!  And I am no different. 

And my prediction is that computation based on subjectively bound physical qualities will be vastly more efficient

Then a corollary to your prediction must be that it will be easier to make a conscious artificial intelligence than an artificial intelligence that is NOT conscious. So if you encounter an intelligent computer it would be logical to conclude that it is probably conscious. 

 John K Clark

Brent Allsop

Apr 24, 2026, 8:36:04 PM
to extro...@googlegroups.com
On Fri, Apr 24, 2026 at 2:20 PM John Clark <johnk...@gmail.com> wrote:
On Fri, Apr 24, 2026 at 3:20 PM Brent Allsop <brent....@gmail.com> wrote:

As always, they ignore the most important 'behavior' that must be 'emulated'. That is the ability to directly apprehend physical qualities

And yet that is the EXACT quality you have ALWAYS ignored in every single interaction you have ever had with other members of your species since the day you were born. No exceptions!  And I am no different. 

How is my asking, "Which of all our descriptions of stuff in the brain is a description of redness?" for more than 10 years not an exception? 

And my prediction is that computation based on subjectively bound physical qualities will be vastly more efficient

Then a corollary to your prediction must be that it will be easier to make a conscious artificial intelligence than an artificial intelligence that is NOT conscious. So if you encounter an intelligent computer it would be logical to conclude that it is probably conscious. 

You could engineer it otherwise, of course, like we are currently engineering things to be substrate independent (inefficient, because it requires a dictionary for all different representations).
 
 John K Clark








On Wed, Apr 22, 2026 at 5:48 AM John Clark <johnk...@gmail.com> wrote:

John K Clark    See what's on my list at  Extropolis



John Clark

Apr 25, 2026, 8:01:26 AM
to extro...@googlegroups.com
On Fri, Apr 24, 2026 at 8:36 PM Brent Allsop <brent....@gmail.com> wrote:

>>> As always, they ignore the most important 'behavior' that must be 'emulated'. That is the ability to directly apprehend physical qualities

>> And yet that is the EXACT quality you have ALWAYS ignored in every single interaction you have ever had with other members of your species since the day you were born. No exceptions!  And I am no different. 

How is my asking, "Which of all our descriptions of stuff in the brain is a description of redness?" for more than 10 years not an exception? 

Because in every single interaction you've ever had with an intelligent being, including the one that you're having right now, you assume that you are dealing with a conscious being. Why? Because you are obviously not dealing with a being that is sleeping, or under anesthesia, or dead; when they are in any of those states they are not behaving very intelligently.

>>> And my prediction is that computation based on subjectively bound physical qualities will be vastly more efficient

>> Then a corollary to your prediction must be that it will be easier to make a conscious artificial intelligence than an artificial intelligence that is NOT conscious. So if you encounter an intelligent computer it would be logical to conclude that it is probably conscious. 

You could engineer it otherwise, of course, 

So according to you, scientists would need to take extra steps to ensure that your artificial intelligence was NOT conscious. GPT 5.5, which was introduced just last Thursday, is considerably more efficient than the older model; that is to say, less computation (and less energy) is required to produce answers that are equally good or better. So if you're right, then GPT must be getting not just smarter but also more conscious.
 
like we are currently engineering things to be substrate independent

That's because 2+2 = 4 regardless of whether the addition is performed by a mechanical calculator, a vacuum tube computer, a modern supercomputer or the fingers of the hand. 
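The substrate-independence point above can be illustrated with a small sketch (the function names here are hypothetical, chosen only for illustration): three unrelated mechanisms, built-in integer arithmetic, pure bit operations, and successor-style counting, all arrive at the same answer for 2 + 2.

```python
def add_builtin(a, b):
    """Addition via the language's native integer arithmetic."""
    return a + b

def add_bitwise(a, b):
    """Ripple-carry addition using only bit operations (non-negative ints)."""
    while b:
        carry = a & b      # positions where both operands have a 1 bit
        a = a ^ b          # sum without carries
        b = carry << 1     # propagate the carries one position left
    return a

def add_peano(a, b):
    """Successor-style (Peano) addition: increment a, b times."""
    for _ in range(b):
        a += 1
    return a

print(add_builtin(2, 2), add_bitwise(2, 2), add_peano(2, 2))  # → 4 4 4
```

Three entirely different "substrates" for the same abstract operation, and the result is identical, which is the sense in which the computation is independent of its implementation.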
 
(inefficient, because it requires a dictionary for all different representations).

You previously said if scientists want to make a conscious brain then they're going to need some sort of dictionary; in other words, they're going to need to take an extra step. But they seem to be doing just fine without taking that extra step. The huge advancement in AI occurred when people stopped contemplating their navels and meditating about consciousness and started to think seriously about intelligence. 

I don't know what general kind of answer would satisfy you that the consciousness question has been satisfactorily answered. I've asked you to explain that to me for over a decade, but I've still not received an answer that makes any sense to me. I strongly suspect that even if it were scientifically proven (please don't ask me how) that the chemical acetylcholine produced the first person experience of redness, you would still have more questions. Like, how can a simple chemical like acetylcholine make the jump between objective and subjective? Or why do 16 hydrogen atoms, 7 carbon atoms, 2 oxygen atoms, and one nitrogen atom produce the redness qualia and not the greenness qualia? 

And you still wouldn't know if my redness was the same as your redness because I know for a fact that my brain is different from your brain so acetylcholine might work differently in my brain than it does in yours. 

And you've never been able to explain how, if consciousness is not an inevitable byproduct of intelligence, natural selection managed to produce consciousness at least once and probably many billions of times even though natural selection is not any better at directly detecting consciousness than we are. How can natural selection select for something it can't see?  

Or do you think Charles Darwin was wrong?  