Woo Hoo, I convinced GPT-3 it Isn't Conscious.


Brent Allsop

Aug 28, 2021, 12:05:29 AM
to ExI chat list, extro...@googlegroups.com, transf...@googlegroups.com, tree of knowledge system discussion

See the transcript I had with Emerson, today, here: "I Convinced GPT-3 it Isn’t Conscious."


John Clark

Aug 28, 2021, 12:38:17 PM
to extro...@googlegroups.com, ExI chat list, transf...@googlegroups.com, tree of knowledge system discussion
On Sat, Aug 28, 2021 at 12:05 AM Brent Allsop <brent....@gmail.com> wrote:

> See the transcript I had with Emerson, today, here: "I Convinced GPT-3 it Isn’t Conscious."

Emerson AI, [27.08.21 12:20]
A materialist would say that what you know, is that certain neurons in your brain are activated in a certain pattern when you see red. That is all you can know.
 
Brent Allsop, [27.08.21 12:23]
I am a materialist, and I agree with your first statement.
But your second statement can be falsified.  Once we discover which of all our descriptions of stuff in my brain is a description of my redness, this will falsify your last claim, since we will then know more than you claim we can know.

I'm a materialist too, but I would NOT say an objective description of the pattern of neurons in a brain that produces redness is a description of the feeling of redness that the owner of that brain experiences. So can the scientific method prove that there are some similarities between my experience of redness and yours? Certainly: we both agree that the redness quale is different from the blueness quale, although the two qualia we subjectively experience could be inverted, and for all I know you may not have any subjective experience at all. Can the scientific method ever prove that the subjective experiences we both communicate with the English word "red" are absolutely identical? Certainly not: your experiencing "red" can't be the same as my experiencing red unless we are the same person, and we're not.
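The inverted-spectrum point above can be made concrete with a toy sketch (my own illustration, not part of the original thread; the mappings and "quale" labels are invented placeholders): two observers whose internal encodings are swapped can still agree on every public color word, so no behavioral test distinguishes them.

```python
# Toy illustration of the inverted-spectrum argument. The Q_RED / Q_GREEN
# labels are placeholders for whatever the private experience is; nothing
# here is a claim about real neuroscience.

# Hypothetical internal encodings: Bob's is inverted relative to Alice's.
alice_qualia = {"700nm": "Q_RED", "530nm": "Q_GREEN"}
bob_qualia   = {"700nm": "Q_GREEN", "530nm": "Q_RED"}

# Each observer's learned mapping from private quale to public word.
# Bob's word mapping is also inverted, so their speech agrees.
alice_words = {"Q_RED": "red", "Q_GREEN": "green"}
bob_words   = {"Q_GREEN": "red", "Q_RED": "green"}

def report(qualia, words, stimulus):
    """What the observer says when shown a stimulus."""
    return words[qualia[stimulus]]

# Behaviorally indistinguishable: both call 700nm light "red" ...
assert report(alice_qualia, alice_words, "700nm") == "red"
assert report(bob_qualia, bob_words, "700nm") == "red"
# ... even though their internal encodings differ on every stimulus.
assert all(alice_qualia[s] != bob_qualia[s] for s in alice_qualia)
print("same words, different internals")
```

The sketch only shows that swapped internal encodings can be hidden behind identical behavior; it takes no side on whether such encodings are functional or material.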

I just don't see any way, even in theory, that the scientific method could ever make the jump between objective and subjective. Therefore, because I am a scientific man but could not function if I really thought I was the only conscious being in the universe, I must conclude that a brute fact is involved, namely that consciousness is the way data feels when it is processed intelligently. That's the only reason I think that if somebody is taking a calculus exam then they are conscious, but if somebody is sleeping or under anesthesia or dead then they are not.

John K Clark

Brent Allsop

Aug 28, 2021, 2:11:39 PM
to extro...@googlegroups.com


So you are saying redness, and all the other colors in this world, come from data, represented by ones and zeros (which requires specific transducing mechanisms to correctly interpret whatever physics is representing them), being "felt" when it is "processed intelligently". What the heck do any of those words even mean?



--
You received this message because you are subscribed to the Google Groups "extropolis" group.
To unsubscribe from this group and stop receiving emails from it, send an email to extropolis+...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/extropolis/CAJPayv1c8mLoCGytmjJDhrEJCsLCdG5TW8dsZB-QdCmJh5zN4g%40mail.gmail.com.

John Clark

Aug 28, 2021, 2:47:48 PM
to extro...@googlegroups.com
On Sat, Aug 28, 2021 at 2:11 PM Brent Allsop <brent....@gmail.com> wrote:

> So you are saying redness, and all the other colors in this world, come from data, represented by ones and zeros (which requires specific transducing mechanisms to correctly interpret whatever physics is representing them), being "felt" when it is "processed intelligently". What the heck do any of those words even mean?

I don't know, you wrote them not me. 

John K Clark



 




Brent Allsop

Aug 28, 2021, 2:55:13 PM
to extro...@googlegroups.com

So how did my "in other words" of you saying: "I must conclude that a brute fact is involved, namely that consciousness is the way data feels when it is processed intelligently." differ, or are you claiming you didn't say that?



John Clark

Aug 28, 2021, 2:59:07 PM
to extro...@googlegroups.com
On Sat, Aug 28, 2021 at 2:55 PM Brent Allsop <brent....@gmail.com> wrote:

> So how did my "in other words" of you saying: "I must conclude that a brute fact is involved, namely that consciousness is the way data feels when it is processed intelligently." differ, or are you claiming you didn't say that?

No, I said that. Which word didn't you understand?

John K Clark



 

Stathis Papaioannou

Aug 28, 2021, 5:28:35 PM
to extro...@googlegroups.com
On Sat, 28 Aug 2021 at 14:05, Brent Allsop <brent....@gmail.com> wrote:

See the transcript I had with Emerson, today, here: "I Convinced GPT-3 it Isn’t Conscious."

How could you comment on Emerson’s consciousness without connecting yourself to Emerson’s circuits?
--
Stathis Papaioannou

Brent Allsop

Aug 29, 2021, 5:45:07 PM
to extro...@googlegroups.com, ExI chat list

Hi Stathis,

We've gone over this many times, but your model seems to be missing representations of redness and greenness, as different from red and green.  So it appears that everything I say gets mapped into your model, leaving it absent of what I'm trying to say.  Here you are talking about only the 3rd, strongest form of effing the ineffable, where you directly computationally bind another's phenomenal qualities into your own consciousness.
Both the 3rd, strongest, and the 2nd, stronger forms, where you computationally bind something you have never experienced before into your own consciousness, require brain hacking.

The 1st, weakest form of effing the ineffable, which I was using with Emerson, is different.  It does not require brain hacking.  All it requires is objective observation and communication in a way that distinguishes between red and redness, and that can model differences in specific intrinsic qualities.  If one is using only the one abstract word "red" for all things representing red knowledge, you can't model the differences in the different intrinsic qualities which may be representing red.  For the weakest form of effing the ineffable, all you need is a phenomenal definition for subjective terms like "redness", enabling you to communicate with well-defined terms, as in this example effing statement: "My redness is like your greenness, both of which we call red."

Also, thanks to all your endless help, I think I have a better understanding of our differences.  I would like to get these differences between your "Functional Property Dualist" camp and the "Qualia are Material Qualities" camp canonized.  Let me see if you agree that this is a good way to concisely describe our differences.

Functionalists, like James Carroll and yourself, using the neuro-substitution argument, make the assumption that a neuron functions similarly to the discrete logic gates in an abstract CPU.
You also assume ALL computation operates this way, which is why you think you can claim that the neuro-substitution argument applies to all possible computational cases, justifying your belief that your neuro-substitution argument is a "proof" that qualia must be functional in all possible computational instances.

Whereas Materialists, like Steven Lehar and I, think that this way of thinking about consciousness, or making this assumption, is WRONG.

We believe that within any such abstract, discrete-logic-only functional system, there can be nothing that is the intrinsic qualities that represent information like redness or greenness,
AND
there is no way to perform the necessary "computational binding" of such intrinsic qualities.  As you so adequately point out, discrete logic gates can't do this kind of computational binding.

Both of these are required so one can be aware of 2 or more intrinsic qualities at the same time, which is the very definition of consciousness for me.
Even if there were some "function" from which redness emerged, you could use the same neuro-substitution argument to "prove" that redness can't be functional either.
Since you completely leave intrinsic qualities like redness out of your way of thinking, you don't seem to be able to model this all-important difference, which is so critical for me.







Stathis Papaioannou

Aug 29, 2021, 7:58:11 PM
to extro...@googlegroups.com
On Mon, 30 Aug 2021 at 07:45, Brent Allsop <brent....@gmail.com> wrote:

I don’t know why you keep insisting that I don’t believe in “intrinsic redness”. I do believe that there is “intrinsic redness”; I just don’t think it can possibly be attached to a substance. I don’t think that even a miracle from God can attach intrinsic redness to a substance. Also, I actually do think that neurons essentially function like computer circuits, but I could be wrong about this; they might function fundamentally differently, they might even be a miracle from God: but even in that case, intrinsic redness cannot be attached to a substance!
--
Stathis Papaioannou

Brent Allsop

Aug 29, 2021, 9:31:32 PM
to extro...@googlegroups.com, ExI chat list
Before we jump down this rat hole for the gazillionth time, could you indicate whether I am getting anywhere close to describing the differences between our two camps, so we can get this canonized, or whether I just need to state this unilaterally in our materialist camp?

So you do claim you believe in "intrinsic redness."  That is a HUGE step forward.  Then you must also agree that consciousness is dependent on what these intrinsic qualities you claim to believe in are like?  That if their quality changes (i.e. redness -> greenness), the resulting consciousness is qualitatively different, though the system may remain functionally equivalent?  Do you also believe that intrinsic redness can be computationally bound to intrinsic greenness, so we can be aware of both of them, and their differences, at the same time, and that only if you do this can it then be considered to be conscious?

I didn't mean to say you don't believe in "intrinsic redness", just that you are leaving it out of your neural substitution argument.
I continually ask you to specify some way that something COULD (even if it is a functional way, or even a miracle performed by God) be responsible for this "intrinsic redness" that you claim you do believe in,
AND
That you describe some way for 2 or more of these intrinsic qualities to be computationally bound, achieving what I define consciousness to be: two or more computationally bound elemental intrinsic qualities like redness and greenness.

These two things are what I'm trying to point out you leave out of your substitution argument.  The discrete logic circuits you believe neurons to be functioning like simply can't, on their own, do the computational binding required to achieve 2 or more elemental qualities like redness and greenness being computationally bound.  Notice that I am making a falsifiable claim here.  IF you can provide any example where discrete logic circuitry can do computational binding of two or more "intrinsic qualities like redness and greenness", I predict the problem will be resolved.  But you never do this; you just continue to ignore this computational binding issue while you do your substitution, even though you claim you do believe in intrinsic redness.

Stathis Papaioannou

Aug 29, 2021, 10:21:29 PM
to extro...@googlegroups.com
I think you miss the fundamental purpose underlying my position. It is not that I claim any scientific insight into how the brain works; it is that I think consciousness could not exist if it were tied to a substrate, and since I know that I am conscious, I think that therefore consciousness can’t be tied to a substrate. As a consequence, if you don’t like the idea that the third robot in your article (https://docs.google.com/document/u/0/d/1YnTMoU2LKER78bjVJsGkxMsSwvhpPBJZvp9e2oJX9GA/mobilebasic) can have 100% genuine redness qualia as a result of its 1’s and 0’s, that’s just too bad, because it’s easier to stomach that than to stomach not being conscious.
--
Stathis Papaioannou

John Clark

Aug 30, 2021, 9:52:01 AM
to extro...@googlegroups.com, ExI chat list
Brent Allsop <brent....@gmail.com> wrote:

> Functionalists, like James Carroll and yourself, using the neuro-substitution argument, make the assumption that a neuron functions similarly to the discrete logic gates in an abstract CPU.

No, you're the one who must make assumptions, not me. You must invoke fundamental new physics (which should always be the last resort in solving a mystery, not the first) and assume that carbon-based chemistry produces a "something" which materials that do not contain carbon, such as silicon chips, cannot. This "something" has never been observed, and the properties this hypothetical "something" has are extremely vague; the only thing you can say about it for sure is that somehow "something" produces consciousness, but you don't even try to say how. Nor do you explain what's so special about carbon; yes, it's good for making long-chain molecules, but it's not at all clear what that has to do with consciousness. With one exception your entire theory is just one assumption piled on top of another; the one part that is not an assumption, and the one part that I agree with, is that somehow something creates consciousness.

> You also assume ALL computation operates this way,

No, I assume all computations CAN be done digitally, but I don't assume that all computations ARE done digitally. Analog computers exist and they work fine if you're satisfied with answers that have very limited precision. Yes, there are some well-defined functions over finite numbers that nevertheless cannot be calculated digitally, such as the Busy Beaver function, but the Busy Beaver can't be calculated with analog methods either; it can be proven that any general procedure for calculating the Busy Beaver function would produce a logical contradiction, and therefore no such procedure can exist. However, nobody has ever found a computable function that can't be calculated digitally, and there's no theoretical reason to think that such a thing exists.
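A small sketch can make the Busy Beaver point concrete (my own illustration, not part of the original thread; the step cap and enumeration scheme are assumptions of the sketch): although the function is provably uncomputable in general, the 2-state, 2-symbol value can be found by brute force, because every halting 2-state machine is known to halt within 6 steps.

```python
# Brute-force the 2-state, 2-symbol Busy Beaver value Sigma(2):
# enumerate every such Turing machine, run each on a blank tape with a
# step cap, and record the most 1s any halting machine leaves behind.
from itertools import product

STATES = ("A", "B")
SYMBOLS = (0, 1)
MOVES = (-1, 1)            # tape head: left, right
NEXTS = ("A", "B", "H")    # "H" = halt

def run(rules, cap=100):
    """Run one machine; return the number of 1s on the tape if it halts
    within `cap` steps, else None (treated as non-halting)."""
    tape, pos, state = {}, 0, "A"
    for _ in range(cap):
        if state == "H":
            return sum(tape.values())
        write, move, state = rules[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
    return sum(tape.values()) if state == "H" else None

def sigma2():
    """Max 1s over all halting 2-state, 2-symbol machines (12^4 = 20736 of them)."""
    slots = [(s, r) for s in STATES for r in SYMBOLS]  # the 4 (state, symbol) rules
    best = 0
    for choice in product(product(SYMBOLS, MOVES, NEXTS), repeat=4):
        ones = run(dict(zip(slots, choice)))
        if ones is not None and ones > best:
            best = ones
    return best

print(sigma2())  # -> 4, the known 2-state Busy Beaver value
```

The cap of 100 steps is safe here only because the 2-state case is already fully analyzed; for larger machine sizes no finite cap is provably sufficient, which is exactly why no single algorithm computes the function for every size.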

> justifying your belief that your neuro-substitution argument is a "proof" that qualia must be functional in all possible computational instances.

You were wise to put that in quotation marks, because I don't think anybody in the neuro-substitution camp claims they have a proof; I certainly don't. Nobody can prove anything is conscious except for themselves, and they never will; but the substitution people do say they have, not proof, but compelling evidence that computation can and does produce qualia.

> within any such abstract, discrete-logic-only functional system, there can be nothing that is the intrinsic qualities that represent information like redness or greenness.

Four questions:  

1) Are the intrinsic properties of red and green objects stored in the objects themselves or in the brain perceiving them?

2) If it's in the brain, exactly where in that 3 pounds of grey jello is the intrinsic quality of red and green stored?

3) How would you recognize it if you saw it, and why do neurons have it but microchips don't?

4) Intrinsic knowledge of red and green qualia must have produced an evolutionary advantage, or evolution would never have bothered to create qualia; so what is it?

John K Clark

