A minimally conscious program


Jason Resch

Apr 25, 2021, 4:29:39 PM
to Everything List
It is quite easy, I think, to define a program that "remembers" (stores and later retrieves) information.

It is slightly harder, but not altogether difficult, to write a program that "learns" (alters its behavior based on prior inputs).
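
For concreteness, minimal sketches of those first two capacities might look like this (the names and the one-step learning rule are illustrative assumptions, nothing more):

static int memory;                        // "remembers": stores and retrieves
void store(int value) { memory = value; }
int retrieve(void)    { return memory; }

static int threshold = 128;               // "learns": prior inputs alter behavior
void learn(int input) {
    // nudge the threshold toward the inputs seen so far,
    // so future comparisons against it come out differently
    threshold += (input > threshold) ? 1 : -1;
}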

What, though, is required to write a program that "knows" (has awareness of or access to information or knowledge)?

Does, for instance, the following program "know" anything about the data it is processing?

if (pixel.red > 128) {
    // knows pixel.red is greater than 128
} else {
    // knows pixel.red <= 128
}

If not, what else is required for knowledge?

Does the program's behavior have to change based on the state of some information? For example:

if (pixel.red > 128) {
    // knows pixel.red is greater than 128
    doX();
} else {
    // knows pixel.red <= 128
    doY();
}

Or does the program have to possess some memory and enter a different state based on the state of the information it processed?

if (pixel.red > 128) {
    // knows pixel.red is greater than 128
    enterStateX();
} else {
    // knows pixel.red <= 128
    enterStateY();
}
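
For concreteness, the "memory" version might be made explicit like this (the Pixel type and State enum are illustrative assumptions, not part of the question):

typedef struct { int red; } Pixel;
typedef enum { STATE_X, STATE_Y } State;

static State state;   // persists across inputs: the program's "memory"

void process(Pixel pixel) {
    if (pixel.red > 128) {
        state = STATE_X;   // knows pixel.red is greater than 128
    } else {
        state = STATE_Y;   // knows pixel.red <= 128
    }
    // later behavior can branch on state, long after the pixel is gone
}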

Or is something else altogether needed to say the program knows?

If a program can be said to "know" something then can we also say it is conscious of that thing?

Jason

John Clark

Apr 26, 2021, 4:50:14 AM
to 'Brent Meeker' via Everything List
On Sun, Apr 25, 2021 at 4:29 PM Jason Resch <jason...@gmail.com> wrote:

> It is quite easy, I think, to define a program that "remembers" (stores and later retrieves) information.

I agree. And for an emotion like pain write a program such that the closer the number in the X register comes to the integer P the more computational resources will be devoted to changing that number, and if it ever actually equals P then the program should stop doing everything else and do nothing but try to change that number to something far enough away from P until it's no longer an urgent matter and the program can again do things that have nothing to do with P.
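
A minimal sketch of that scheme, in the C-like style of the thread's examples (P, SAFE, and do_other_work are illustrative choices, not from the post):

#include <stdlib.h>

#define P    1000   // the "painful" value
#define SAFE  100   // distance at which the urgency vanishes

void do_other_work(void) { /* whatever the program is really for */ }

void run(int x) {
    for (;;) {
        int urgency = SAFE - abs(x - P);   // positive only when x is near P
        if (urgency > 0) {
            // the nearer x is to P, the more cycles go to escaping it;
            // at x == P everything else stops
            for (int i = 0; i < urgency; i++)
                x += (x >= P) ? 1 : -1;    // push x away from P
        } else {
            do_other_work();               // no longer urgent: normal work resumes
        }
    }
}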

Artificial Intelligence is hard but Artificial Consciousness is easy.
John K Clark    See what's on my new list at  Extropolis

Telmo Menezes

Apr 26, 2021, 6:06:11 AM
to John Clark, 'Brent Meeker' via Everything List


On Mon, 26 Apr 2021, at 10:49, John Clark wrote:
On Sun, Apr 25, 2021 at 4:29 PM Jason Resch <jason...@gmail.com> wrote:

> It is quite easy, I think, to define a program that "remembers" (stores and later retrieves) information.

I agree. And for an emotion like pain write a program such that the closer the number in the X register comes to the integer P the more computational resources will be devoted to changing that number, and if it ever actually equals P then the program should stop doing everything else and do nothing but try to change that number to something far enough away from P until it's no longer an urgent matter and the program can again do things that have nothing to do with P.

If you truly believe this is the case, then it follows that anyone writing such a program and subjecting it to X=P should be considered guilty of torture. Do you agree?

Telmo


John Clark

Apr 26, 2021, 6:29:23 AM
to Telmo Menezes, 'Brent Meeker' via Everything List
On Mon, Apr 26, 2021 at 6:06 AM Telmo Menezes <te...@telmomenezes.net> wrote:

>> And for an emotion like pain write a program such that the closer the number in the X register comes to the integer P the more computational resources will be devoted to changing that number, and if it ever actually equals P then the program should stop doing everything else and do nothing but try to change that number to something far enough away from P until it's no longer an urgent matter and the program can again do things that have nothing to do with P.

> If you truly believe this is the case, then it follows that anyone writing such a program and subjecting it to X=P should be considered guilty of torture. Do you agree?

Yes. If I'm right, and I think I am, then anyone writing such a program not only should be but logically MUST be considered to have been engaging in torture. What conclusion can be drawn from that bizarre conclusion? Ascribing a level of consciousness to something while ignoring all information about its intelligent behavior is not a useful tool for assessing the morality of an action.

John K Clark

Terren Suydam

Apr 26, 2021, 8:31:30 AM
to Everything List
Assuming the program has a state and that state changes in response to its inputs, then it seems reasonable to say the program is conscious in some elemental way. What is it conscious "of", though? I'd say it's not conscious of anything outside of itself, in the same way we are not conscious of anything outside of ourselves. We are only conscious of the model of the world we build. You might then say it's conscious of its internal representation, or its state.


Henrik Ohrstrom

Apr 26, 2021, 10:39:22 AM
to everyth...@googlegroups.com
That would be quite solipsistic, wouldn't it?
/henrik


Terren Suydam

Apr 26, 2021, 10:45:53 AM
to Everything List
It's impossible to refute solipsism, but that's true regardless of your metaphysics. It's true that the only thing we know for sure is our own consciousness, but there's nothing about what I said that makes it impossible for there to be a reality outside of ourselves populated by other people. It just requires belief.


Jason Resch

Apr 26, 2021, 11:03:36 AM
to Everything List
What's the difference between, and how do we know whether, the program is experiencing pain when X is near P, versus experiencing bliss when X is far from P?

Or is bliss merely the complete absence of pain, making the distinction in my prior question meaningless?

Jason




John Clark

Apr 26, 2021, 11:16:47 AM
to 'Brent Meeker' via Everything List
On Mon, Apr 26, 2021 at 10:45 AM Terren Suydam <terren...@gmail.com> wrote:

> It's impossible to refute solipsism

True, but it's equally impossible to refute the idea that everything including rocks is conscious. And if both a theory and its exact opposite can neither be proven nor disproven then neither speculation is of any value in trying to figure out how the world works.

> It's true that the only thing we know for sure is our own consciousness,
And I know that even I am not conscious all the time, and there is no reason for me to believe other people can do better. 
 
> but there's nothing about what I said that makes it impossible for there to be a reality outside of ourselves populated by other people. It just requires belief.

And few if any believe other people are conscious all the time, only during those times that correspond to the times they behave intelligently.

John Clark

Apr 26, 2021, 11:25:45 AM
to 'Brent Meeker' via Everything List
On Mon, Apr 26, 2021 at 11:03 AM Jason Resch <jason...@gmail.com> wrote:

> What's the difference between, and how do we know whether, the program is experiencing pain when X is near P, versus experiencing bliss when X is far from P?

Behavior. Actions are taken to minimize pain and to maximize bliss.  

> Or is bliss merely the complete absence of pain, making the distinction in my prior question meaningless?

If that question is meaningless then so is the question "what's the difference between positive electrical charge and negative?". They both seem like reasonable questions to me, and the answer to both is: "they move things in opposite directions".

John K Clark



smitra

Apr 26, 2021, 1:48:16 PM
to everyth...@googlegroups.com
On 26-04-2021 10:49, John Clark wrote:
> On Sun, Apr 25, 2021 at 4:29 PM Jason Resch <jason...@gmail.com>
> wrote:
>
>> It is quite easy, I think, to define a program that "remembers"
>> (stores and later retrieves) information.
>
> I agree. And for an emotion like pain write a program such that the
> closer the number in the X register comes to the integer P the more
> computational resources will be devoted to changing that number, and
> if it ever actually equals P then the program should stop doing
> everything else and do nothing but try to change that number to
> something far enough away from P until it's no longer an urgent matter
> and the program can again do things that have nothing to do with P.
>
> Artificial Intelligence is hard but Artificial Consciousness is easy.
> John K Clark See what's on my new list at Extropolis
>

I have an analogue computer that implements this: Two magnets. If I push
two equal poles toward each other, does this cause the system of the two
magnets to feel pain?

Saibal

John Clark

Apr 26, 2021, 3:42:37 PM
to 'Brent Meeker' via Everything List
On Mon, Apr 26, 2021 at 1:48 PM smitra <smi...@zonnet.nl> wrote:
> I have an analogue computer that implements this: two magnets. If I push two equal poles toward each other, does this cause the system of the two magnets to feel pain?

I don't know; I don't even know if you feel pain. But I do know that those two magnets you speak of do not behave very intelligently, so I'm not going to worry about it.

John K Clark

Brent Meeker

Apr 26, 2021, 4:21:39 PM
to everyth...@googlegroups.com


On 4/26/2021 8:03 AM, Jason Resch wrote:


On Mon, Apr 26, 2021, 5:29 AM John Clark <johnk...@gmail.com> wrote:
On Mon, Apr 26, 2021 at 6:06 AM Telmo Menezes <te...@telmomenezes.net> wrote:

>> And for an emotion like pain write a program such that the closer the number in the X register comes to the integer P the more computational resources will be devoted to changing that number, and if it ever actually equals P then the program should stop doing everything else and do nothing but try to change that number to something far enough away from P until it's no longer an urgent matter and the program can again do things that have nothing to do with P.

> If you truly believe this is the case, then it follows that anyone writing such a program and subjecting it to X=P should be considered guilty of torture. Do you agree?

Yes. If I'm right, and I think I am, then anyone writing such a program not only should be but logically MUST be considered to have been engaging in torture. What conclusion can be drawn from that bizarre conclusion? Ascribing a level of consciousness to something while ignoring all information about its intelligent behavior is not a useful tool for assessing the morality of an action.

John K Clark

What's the difference between, and how do we know whether, the program is experiencing pain when X is near P, versus experiencing bliss when X is far from P?

Bliss is defined as the state the program doesn't invest time/effort to change.

Brent

Brent Meeker

Apr 26, 2021, 4:33:32 PM
to everyth...@googlegroups.com
Not only that, but they also oftentimes behave intelligently without being conscious of it.

Brent

Terren Suydam

Apr 26, 2021, 7:07:38 PM
to Everything List
So do you have nothing to say about coma patients who've later woken up and said they were conscious?  Or people under general anaesthetic who later report being gruesomely aware of the surgery they were getting?  Should we ignore those reports?  Or admit that consciousness is worth considering independently from its effects on outward behavior?


Brent Meeker

Apr 26, 2021, 10:08:05 PM
to everyth...@googlegroups.com
It certainly seems likely that any brain or AI that can perceive sensory events and form an inner narrative and memory of that is conscious in a sense even if they are unable to act.  This is commonly the situation during a dream.  One is aware of dreamt events but doesn't actually move in response to them.

And I think JKC is wrong when he says
"few if any believe other people are conscious all the time, only during those times that corresponds to the times they behave intelligently."  I generally assume people are conscious if their eyes are open and they respond to stimuli, even if they are doing something dumb.

But I agree with his general point that consciousness is easy and intelligence is hard.  I think human consciousness, having an inner narrative, is just an evolutionary trick the brain developed for learning and accessing learned information to inform decisions. Julian Jaynes wrote a book about how this may have come about, "The Origin of Consciousness in the Breakdown of the Bicameral Mind".  I don't know that he got it exactly right, but I think he was on to the right idea.

Brent

Terren Suydam

Apr 27, 2021, 1:08:03 AM
to Everything List
On Mon, Apr 26, 2021 at 10:08 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
It certainly seems likely that any brain or AI that can perceive sensory events and form an inner narrative and memory of that is conscious in a sense even if they are unable to act.  This is commonly the situation during a dream.  One is aware of dreamt events but doesn't actually move in response to them.

And I think JKC is wrong when he says
"few if any believe other people are conscious all the time, only during those times that corresponds to the times they behave intelligently."  I generally assume people are conscious if their eyes are open and they respond to stimuli, even if they are doing something dumb.

Or rather, even if they're doing nothing at all. Someone meditating for hours on end, or someone lying on a couch with eyeshades and headphones on tripping on psilocybin, may be having extraordinary internal experiences and display absolutely no outward behavior.
 
But I agree with his general point that consciousness is easy and intelligence is hard. 

It depends how you look at it. JC's point is that it's impossible to prove much of anything about consciousness, so you can imagine many ways to explain consciousness without ever suffering the pain of your theory being slain by a fact.

However, in a certain sense, intelligence is easier because it's constrained. Intelligence can be tested. It's certainly more practical, which makes intelligence easier to study as well. You're much more likely to be able to profit from advances in understanding of intelligence. In that sense, consciousness is harder to work with than intelligence, because it's harder to make progress. Facts that might slay your theory are much harder to come by.
 
I think human consciousness, having an inner narrative, is just an evolutionary trick the brain developed for learning and accessing learned information to inform decisions. Julian Jaynes wrote a book about how this may have come about, "The Origin of Consciousness in the Breakdown of the Bicameral Mind".  I don't know that he got it exactly right, but I think he was on to the right idea.

I agree!

Terren
 


smitra

Apr 27, 2021, 1:18:07 AM
to everyth...@googlegroups.com
I think it's better to approach the problem from the other end, i.e. you
consider a certain consciousness described in terms of the content of
the consciousness, e.g. you have the conscious experience of reading
this sentence, and here it's important that part of this conscious
experience is that it is you and not someone else reading this. So, a
lot more information is involved here than just processing the small
amount of information for reading this text. Then for that consciousness
one can ask what physical system could implement this particular
consciousness. But this is then to a large degree fixed by the conscious
experience itself, as that already includes a sense of identity. But
this is not fixed 100%; there exists a self-localization ambiguity.

The simpler the system that generates a consciousness, the larger this
self-location ambiguity will become. This will become too large for very
simple systems (if they are conscious at all) to pin that conscious
experience down to any particular physical device that is running the
algorithm that supposedly generates it. The conscious experience of a
spider in my house may not be sufficiently detailed to locate itself in
my house; its consciousness is spread out over a vast number of
different physical systems, some of which may be located on Earth as it
existed 300 million years ago.

Saibal

Brent Meeker

Apr 27, 2021, 1:27:31 AM
to everyth...@googlegroups.com


On 4/26/2021 10:07 PM, Terren Suydam wrote:



However, in a certain sense, intelligence is easier because it's constrained. Intelligence can be tested. It's certainly more practical, which makes intelligence easier to study as well. You're much more likely to be able to profit from advances in understanding of intelligence. In that sense, consciousness is harder to work with than intelligence, because it's harder to make progress. Facts that might slay your theory are much harder to come by.

What I mean by it is that if you can engineer intelligence at a high level it will necessarily entail consciousness. An entity cannot be human-level intelligent without being able to prospectively consider scenarios in which they are actors and in which the scenario is informed by past experience...and I think that is what constitutes the core of consciousness.

Brent

Terren Suydam

Apr 27, 2021, 2:11:30 AM
to Everything List


On Tue, Apr 27, 2021 at 1:27 AM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:

However, in a certain sense, intelligence is easier because it's constrained. Intelligence can be tested. It's certainly more practical, which makes intelligence easier to study as well. You're much more likely to be able to profit from advances in understanding of intelligence. In that sense, consciousness is harder to work with than intelligence, because it's harder to make progress. Facts that might slay your theory are much harder to come by.

What I mean by it is that if you can engineer intelligence at a high level it will necessarily entail consciousness. An entity cannot be human-level intelligent without being able to prospectively consider scenarios in which they are actors and in which the scenario is informed by past experience...and I think that is what constitutes the core of consciousness.

Sure - although it seems possible that there could be intelligences that are not conscious. We're pretty biased to think of intelligence as we have it - situated in a meat body, and driven by evolutionary programming in a social context. There may be forms of intelligence so alien we could never conceive of them, and there's no guarantee about consciousness. Take corporations. A corporation is its own entity and it acts intelligently in the service of its own interests. They can certainly be said to "prospectively consider scenarios in which they are actors and in which the scenario is informed by past experience". Is a corporation conscious?

Terren
 

Brent

Brent Meeker

Apr 27, 2021, 2:27:05 AM
to everyth...@googlegroups.com


On 4/26/2021 11:11 PM, Terren Suydam wrote:



Sure - although it seems possible that there could be intelligences that are not conscious. We're pretty biased to think of intelligence as we have it - situated in a meat body, and driven by evolutionary programming in a social context. There may be forms of intelligence so alien we could never conceive of them, and there's no guarantee about consciousness.

I don't see how an entity could be really intelligent without being able to consider its actions by a kind of internal simulation.


Take corporations. A corporation is its own entity and it acts intelligently in the service of its own interests. They can certainly be said to "prospectively consider scenarios in which they are actors and in which the scenario is informed by past experience". Is a corporation conscious?

I think so.  And the Supreme Court agrees. :-)

Brent

John Clark

Apr 27, 2021, 6:54:56 AM
to 'Brent Meeker' via Everything List
On Mon, Apr 26, 2021 at 7:07 PM Terren Suydam <terren...@gmail.com> wrote:

> So do you have nothing to say about coma patients who've later woken up and said they were conscious?  Or people under general anaesthetic who later report being gruesomely aware of the surgery they were getting?  Should we ignore those reports?  Or admit that consciousness is worth considering independently from its effects on outward behavior?

If something is behaving intelligently I am very confident (although not 100% confident) that it is conscious; however, if something is not behaving intelligently I am far less certain it is not conscious, because it may be incapable of moving or it may simply be trying to deceive me for reasons of its own. Observing behavior is not a perfect tool for assessing consciousness but it is the best we have and the best we'll ever have, so it will just have to do. Even the examples you present in your post all come from observing behavior, so a certain degree of uncertainty will always be with us.

John K Clark

John Clark

Apr 27, 2021, 7:11:07 AM
to 'Brent Meeker' via Everything List
On Mon, Apr 26, 2021 at 10:08 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:

> I think JKC is wrong when he says "few if any believe other people are conscious all the time, only during those times that correspond to the times they behave intelligently."  I generally assume people are conscious if their eyes are open and they respond to stimuli, even if they are doing something dumb.

That is not an unreasonable assumption, but responding to stimuli is behavior. So being a practical man I'll bet you wouldn't still think they're conscious when their eyes are open but their heart hasn't had a beat in several hours nor have they taken a breath during that time and rigor mortis has set in and they've started to smell bad.
 

John Clark

Apr 27, 2021, 7:22:20 AM
to 'Brent Meeker' via Everything List
On Tue, Apr 27, 2021 at 1:08 AM Terren Suydam <terren...@gmail.com> wrote:

> consciousness is harder to work with than intelligence, because it's harder to make progress.

It's not hard to make progress in consciousness research, it's impossible.  

> Facts that might slay your theory are much harder to come by.

Such facts are not hard to come by, they're impossible to come by. So for a consciousness scientist being lazy works just as well as being industrious; consciousness research couldn't be any easier: just face a wall, sit on your hands, and contemplate your navel.
John K Clark    See what's on my new list at  Extropolis


Terren Suydam

Apr 27, 2021, 9:28:54 AM
to Everything List
On Tue, Apr 27, 2021 at 2:27 AM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:



I don't see how an entity could be really intelligent without being able to consider its actions by a kind of internal simulation.

Neither do I, but it may be a failure of imagination. The book Blindsight by Peter Watts explores this idea.
 

Take corporations. A corporation is its own entity and it acts intelligently in the service of its own interests. They can certainly be said to "prospectively consider scenarios in which they are actors in which the scenario is informed by past experience". Is a corporation conscious?

I think so.  And the Supreme Court agrees. :-)

How about cities? Countries? Religions?  Each of which can be said to "prospectively consider scenarios..."

Terren
 


Telmo Menezes

Apr 27, 2021, 9:33:16 AM
to Everything List


On Mon, 26 Apr 2021, at 17:16, John Clark wrote:
On Mon, Apr 26, 2021 at 10:45 AM Terren Suydam <terren...@gmail.com> wrote:

> It's impossible to refute solipsism

True, but it's equally impossible to refute the idea that everything including rocks is conscious. And if both a theory and its exact opposite can neither be proven nor disproven then neither speculation is of any value in trying to figure out how the world works.

When I was a little kid I would ask adults if rocks were conscious. They tried to train me to stop asking such questions, because they were worried about what other people would think. To this day, I never stopped asking these questions. I see three options here:

(1) They were correct to worry and I have a mental issue.

(2) I am really dumb and don't see something obvious.

(3) Beliefs surrounding consciousness are socially normative, and asking questions outside such boundaries is taboo.


Terren Suydam

Apr 27, 2021, 9:38:32 AM
to Everything List
On Tue, Apr 27, 2021 at 7:22 AM John Clark <johnk...@gmail.com> wrote:
On Tue, Apr 27, 2021 at 1:08 AM Terren Suydam <terren...@gmail.com> wrote:

> consciousness is harder to work with than intelligence, because it's harder to make progress.

It's not hard to make progress in consciousness research, it's impossible.  

So we should ignore experiments where you stimulate the brain and the subject reports experiencing some kind of qualia, in a repeatable way. Why doesn't that represent progress?  Is it because you don't trust people's reports?
 

> Facts that might slay your theory are much harder to come by.

Such facts are not hard to come by. they're impossible to come by. So for a consciousness scientist being lazy works just as well as being industrious, so consciousness research couldn't be any easier, just face a wall, sit on your hands, and contemplate your navel.    

There are fruitful lines of research happening. Research on patients undergoing meditation and psychedelic experiences while in an fMRI has led to some interesting facts. You seem to think progress can only mean being able to prove conclusively how consciousness works. Progress can mean deepening our understanding of the relationship between the brain and the mind.

Terren
 

Philip Thrift

Apr 27, 2021, 10:26:40 AM
to Everything List

A bit long, but this interview of the very lucid Hedda Mørch (pronounced "Mark") is very good (for consciousness "realists"):


via https://twitter.com/onemorebrown/status/1386970910230523906

Terren Suydam

Apr 28, 2021, 8:31:45 AM
to Everything List
John - do you have any response?

John Clark

Apr 28, 2021, 10:15:01 AM
to 'Brent Meeker' via Everything List
On Wed, Apr 28, 2021 at 8:32 AM Terren Suydam <terren...@gmail.com> wrote:

> John - do you have any response?

If you insist.  

>> It's not hard to make progress in consciousness research, it's impossible.  

> So we should ignore experiments where you stimulate the brain and the subject reports experiencing some kind of qualia,

We should always pay attention to all relevant BEHAVIOR, including BEHAVIOR such as noises produced by the mouths of other people.
 
>Why doesn't that represent progress? 

It may represent progress but not progress towards understanding consciousness.

 > Is it because you don't trust people's reports?

Trust but verify. When you and I talk about consciousness I don't even know if we're talking about the same thing; perhaps by your meaning of the word I am not conscious, maybe I'm conscious by my meaning of the word but not by yours, maybe my consciousness is just a pale pitiful thing compared to the grand glorious awareness that you have and what you mean by  the word "consciousness".  Maybe comparing your consciousness to mine is like comparing a firefly to a supernova. Or maybe it's the other way around.  Neither of us will ever know.

> in an fMRI has led to some interesting facts.

An fMRI may help us understand how the brain works and perhaps even how intelligence works, but I think behavior will tell us twice as much as a squiggle on an fMRI graph can about consciousness, and that would be exactly twice as much unless we make use of the unproven and unprovable axiom that intelligent behavior is a sign of consciousness.
 
> You seem to think progress can only mean being able to prove conclusively how consciousness works.

I don't demand that consciousness researchers do anything as ambitious as explaining how consciousness is produced, all I ask is a proof that I am not the only conscious entity in the universe. But they can't even do that and never will be able to.  

John K Clark    See what's on my new list at  Extropolis


Jason Resch

Apr 28, 2021, 10:37:00 AM
to Everything List
Consciousness is not unique here.

Nothing can be proved without assuming some theory and working within it.

But how do we ever prove the theory itself is right? We can't.

Even mathematicians face this problem in trying to prove 2+2=4. Any such proof will rely on a theory which itself cannot be proved.
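
For instance, in a proof assistant the proof only exists inside a theory whose axioms are themselves taken for granted (a sketch in Lean; the point, not the syntax, is what matters):

-- 2 + 2 = 4 follows by computation from the definitions of the numerals
-- and of +, but those definitions and the kernel's logic are axioms of
-- the ambient theory; the theory cannot certify itself.
example : 2 + 2 = 4 := rfl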

But this doesn't mean we can't develop theories of consciousness and gather empirical evidence for them. If we simulate brains in computers or develop functional brain scanners that measure individual neurons, we can answer questions about what makes philosophers of mind talk about qualia or pose or answer questions about consciousness. Whatever it is that causes the philosopher's brain to ask or talk about consciousness is consciousness.

Having a complete causal trace of the brain doing these behaviors will finally allow us to make such an identification.

Jason




John Clark

Apr 28, 2021, 11:12:11 AM
to 'Brent Meeker' via Everything List
On Wed, Apr 28, 2021 at 10:37 AM Jason Resch <jason...@gmail.com> wrote:
> But this doesn't mean we can't develop theories of consciousness

Truer words were never spoken! You can find 6.02 * 10^23 different consciousness theories on the internet.

 > and gather empirical evidence for them. 

How? What empirical evidence is there that one consciousness theory is better than another?

> If we simulate brains in computers or develop functional brain scanners that measure individual neurons, we can answer questions about what makes a philosophers of mind talk about qualia or pose or answer questions about consciousness. 


Studying a neuron with a brain scan or any other device can tell me how the brain works and how behavior works and how intelligence works but can tell me nothing about consciousness. It seems to me that if people have been asking the same question for thousands of years but have not come one nanometer closer to a solution then it may be time to consider the possibility that the wrong question is being asked.

Terren Suydam

Apr 28, 2021, 11:17:08 AM
to Everything List
On Wed, Apr 28, 2021 at 10:15 AM John Clark <johnk...@gmail.com> wrote:
On Wed, Apr 28, 2021 at 8:32 AM Terren Suydam <terren...@gmail.com> wrote:

> John - do you have any response?

If you insist.  

>> It's not hard to make progress in consciousness research, it's impossible.  

>> So we should ignore experiments where you stimulate the brain and the subject reports experiencing some kind of qualia,

> We should always pay attention to all relevant BEHAVIOR, including BEHAVIOR such as noises produced by the mouths of other people.

Got it. Accounts of subjective experience are not the salient facts in these experiments, it's the way they move their lips and tongue and pass air through their vocal cords that matters. The rest of the world has moved on from BF Skinner, but not you, apparently. 
 
 
>Why doesn't that represent progress? 

It may represent progress but not progress towards understanding consciousness.

Why not?  Understanding how the brain maps or encodes different subjective experiences surely counts as progress towards understanding consciousness. If we can explain why, for example, you see stars if you bash the back of your head, but not the front, then that would count as progress towards understanding consciousness.
 

 > Is it because you don't trust people's reports?

Trust but verify. When you and I talk about consciousness I don't even know if we're talking about the same thing; perhaps by your meaning of the word I am not conscious, maybe I'm conscious by my meaning of the word but not by yours, maybe my consciousness is just a pale pitiful thing compared to the grand glorious awareness that you have and what you mean by  the word "consciousness".  Maybe comparing your consciousness to mine is like comparing a firefly to a supernova. Or maybe it's the other way around.  Neither of us will ever know.

You make it sound as though there's nothing to be gleaned from systematic investigation, and the thing I understand the least is how incurious you are about it. I mean, to each their own, but trying to grasp how objective systems (like brains) and consciousness interrelate is perhaps the most fascinating thing I can think of. The mystery of it is incredible to behold when you really get into it.
 
Terren

John Clark

Apr 28, 2021, 12:06:38 PM
to 'Brent Meeker' via Everything List
On Wed, Apr 28, 2021 at 11:17 AM Terren Suydam <terren...@gmail.com> wrote:

>> We should always pay attention to all relevant BEHAVIOR, including BEHAVIOR such as noises produced by the mouths of other people.

> Got it. Accounts of subjective experience are not the salient facts in these experiments, it's the way they move their lips and tongue and pass air through their vocal cords that matters. The rest of the world has moved on from BF Skinner, but not you, apparently. 

Forget BF Skinner, this is more general than consciousness or behavior. If you want to explain Y at the most fundamental level from first principles you can't start with "X produces Y" and then use X as part of your explanation of Y.
 
>>> Why doesn't that represent progress? 

>> It may represent progress but not progress towards understanding consciousness.

> Why not?  Understanding how the brain maps or encodes different subjective experiences

Because understanding how the brain maps and encodes information will tell you lots about behavior and intelligence but absolutely nothing about consciousness. 

> If we can explain why, for example, you see stars if you bash the back of your head,

It might be able to explain why I say "I see green stars" but that's not what you're interested in, you want to know why I subjectively experience the green qualia and if it's the same as your green qualia, but no theory can even prove to you that I see any qualia at all.  

> You make it sound as though there's nothing to be gleaned from systematic investigation,

It's impossible to systematically investigate everything, therefore a scientist needs to use judgment to determine what is worth his time and what is not. Every minute you spend on consciousness research is a minute you could've spent on researching something far far more productive, which would be pretty much anything. Consciousness research has made ZERO progress over the last thousand years and I have every reason to believe it will make twice as much during the next thousand.

> the thing I understand the least is how incurious you are about it.

The thing I find puzzling is how incurious you and virtually all internet consciousness mavens are about how intelligence works. Figuring out intelligence is a solvable problem, but figuring out consciousness is not, probably because it's just a brute fact that consciousness is the way data feels when it is being processed. If so then there's nothing more that can be said about consciousness, however I am well aware that after all is said and done more is always said and done.

Telmo Menezes

Apr 28, 2021, 12:54:37 PM
to Everything List


On Tue, 27 Apr 2021, at 04:07, 'Brent Meeker' via Everything List wrote:
It certainly seems likely that any brain or AI that can perceive sensory events and form an inner narrative and memory of that is conscious in a sense even if they are unable to act.  This is commonly the situation during a dream.  One is aware of dreamt events but doesn't actually move in response to them.

And I think JKC is wrong when he says
"few if any believe other people are conscious all the time, only during those times that corresponds to the times they behave intelligently."  I generally assume people are conscious if their eyes are open and they respond to stimuli, even if they are doing something dumb.

But I agree with his general point that consciousness is easy and intelligence is hard.

JKC insists on this point a lot, but I really do not understand how it matters. Maybe so; maybe if idealism or panpsychism is correct, consciousness is the easiest thing there is, from an engineering perspective. But what does the technical challenge have to do with searching for truth and understanding reality?

Reminds me of something I heard a meditation teacher say once. He said that for eastern people he has to say that "meditation is very hard, it takes a lifetime to master!". Generalizing a lot, eastern culture values the idea of mastering something that is very hard; it is thus a worthy goal. For westerners he says: "meditation is the easiest thing in the world". And thus it satisfies the (generalizing a lot) Western taste for a magic pill that immediately solves all problems.

I think you are falling for similar traps.

I think human consciousness, having an inner narrative,

This equivalence that you are smuggling in here is doing a lot of work... and it is the tricky part. "Inner narrative" in the sense of having a private simulation of external reality fits what you say below, but why are the lights on? I have no doubt that evolution can create the simulation, but what makes us live it in the first person?

Telmo

Terren Suydam

Apr 28, 2021, 2:39:20 PM
to Everything List
On Wed, Apr 28, 2021 at 12:06 PM John Clark <johnk...@gmail.com> wrote:
On Wed, Apr 28, 2021 at 11:17 AM Terren Suydam <terren...@gmail.com> wrote:

>> We should always pay attention to all relevant BEHAVIOR, including BEHAVIOR such as noises produced by the mouths of other people.

> Got it. Accounts of subjective experience are not the salient facts in these experiments, it's the way they move their lips and tongue and pass air through their vocal cords that matters. The rest of the world has moved on from BF Skinner, but not you, apparently. 

Forget BF Skinner, this is more general than consciousness or behavior. If you want to explain Y at the most fundamental level from first principles you can't start with "X produces Y" and then use X as part of your explanation of Y.

OK, I want to explain consciousness from first principles, so Y = consciousness. What is X?  Testimony about subjective experience?  Nobody is claiming that testimony about subjective experience produces consciousness (X produces Y).
 
 
>>> Why doesn't that represent progress? 

>> It may represent progress but not progress towards understanding consciousness.

> Why not?  Understanding how the brain maps or encodes different subjective experiences

Because understanding how the brain maps and encodes information will tell you lots about behavior and intelligence but absolutely nothing about consciousness. 

> If we can explain why, for example, you see stars if you bash the back of your head,

It might be able to explain why I say "I see green stars" but that's not what you're interested in, you want to know why I subjectively experience the green qualia and if it's the same as your green qualia, but no theory can even prove to you that I see any qualia at all.  

I think the question of whether my experience of green is the same as your experience of green reflects confusion on the part of the questioner. I'm not interested in that.

I'm interested in a theory of consciousness that can tell me, among other things, how it is that we have conscious experiences when we dream. Don't you wonder about that?


> You make it sound as though there's nothing to be gleaned from systematic investigation,

It's impossible to systematically investigate everything, therefore a scientist needs to use judgment to determine what is worth his time and what is not. Every minute you spend on consciousness research is a minute you could've spent on researching something far far more productive, which would be pretty much anything. Consciousness research has made ZERO progress over the last thousand years and I have every reason to believe it will make twice as much during the next thousand.

You refuse to acknowledge that one can produce evidence of consciousness, namely in the form of subjects testifying to their experience. It doesn't matter to you, apparently, if someone reports being in extreme pain.
 

> the thing I understand the least is how incurious you are about it.

The thing I find puzzling is how incurious you and virtually all internet consciousness mavens are about how intelligence works. Figuring out intelligence is a solvable problem, but figuring out consciousness is not, probably because it's just a brute fact that consciousness is the way data feels when it is being processed. If so then there's nothing more that can be said about consciousness, however I am well aware that after all is said and done more is always said and done.


I'm very curious about how intelligence works too. You're making assumptions about me that don't bear out... perhaps that's true of your thinking in general. And I never claimed consciousness is a solvable problem. But there are better theories of consciousness than others, because there are facts about consciousness that beg explanation (e.g. dreaming, lucid dreaming), and some theories have better explanations than others. But like any other domain, if we can come up with a relatively simple theory that explains a relatively large set of phenomena, then that's a good contender. But you know this, you've just got some kind of odd hang up about consciousness.

Terren
 

Brent Meeker

Apr 28, 2021, 2:41:23 PM
to everyth...@googlegroups.com


On 4/28/2021 9:06 AM, John Clark wrote:
> If we can explain why, for example, you see stars if you bash the back of your head,

It might be able to explain why I say "I see green stars" but that's not what you're interested in, you want to know why I subjectively experience the green qualia and if it's the same as your green qualia, but no theory can even prove to you that I see any qualia at all.  

No, but one can discover whether your reports of green qualia correspond to something consistent in our shared world, as opposed, say, to your reports of little green men when drunk. That's why we have a word for qualia which is different from illusion.

Brent

Brent Meeker

Apr 28, 2021, 3:01:59 PM
to everyth...@googlegroups.com


On 4/28/2021 11:39 AM, Terren Suydam wrote:
>
> I'm interested in a theory of consciousness that can tell me, among
> other things, how it is that we have conscious experiences when we
> dream. Don't you wonder about that?

Not especially. It's certainly consistent with consciousness being a brain process. And it's consistent with Jeff Hawkins' theory that the brain is continually trying to predict sensation, and it is the predictions that are endorsed by the most neurons that constitute conscious thoughts. In sleep, with little or no sensory input, the predictions wander, depending mainly on memory for input.
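
A toy rendering of that predictive loop (the update rule and constants are illustrative guesses, not from Hawkins):

// awake: predictions are continually corrected by the senses;
// asleep: they free-run on memory and wander, as in a dream
double next_state(double state, double memory, double sense, int awake) {
    double prediction = 0.9 * state + 0.1 * memory;     // predict from memory
    if (awake)
        return prediction + 0.5 * (sense - prediction); // sensory correction
    return prediction;                                  // dreaming: no correction
}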

Brent

John Clark

Apr 28, 2021, 3:04:09 PM
to 'Brent Meeker' via Everything List
On Wed, Apr 28, 2021 at 2:41 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:

> one can discover whether your reports of green qualia correspond to something consistent in our shared world,

Consistency is not the same as identity. If what you and I mean by the words "red" and "green" were inverted then both of us would still say tomatoes are red and leaves are green, but those things would not look subjectively the same to us.
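
A toy illustration of the point: two observers with swapped internal states still produce identical reports (everything here is an invented example, not a claim about brains):

typedef enum { QUALE_A, QUALE_B } Quale;   // private internal states

const char *report(Quale q, int inverted) {
    // the state-to-word mapping differs between the observers,
    // so their spoken reports coincide even though their states do not
    if (!inverted)
        return (q == QUALE_A) ? "red" : "green";
    return (q == QUALE_B) ? "red" : "green";
}

// a tomato puts observer 1 in QUALE_A and observer 2 in QUALE_B,
// yet report(QUALE_A, 0) and report(QUALE_B, 1) both say "red"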

John K Clark



Terren Suydam

Apr 28, 2021, 3:05:05 PM
to Everything List
What I read in that is that you don't wonder because you've got a workable theory. This was intended for John Clark, who thinks theories of consciousness are a waste of time.

Terren

Jason Resch

Apr 28, 2021, 3:10:06 PM
to Everything List
There was a neurologist (I forgot who) who said "Waking life is a dream modulated by the senses." In other words, the brain's main function is effectively that of a dreaming machine (to generate a picture of reality centered on a subject). Normally, when we are awake, this dream is synched up to mostly follow along with an external world, given data input from the senses. But when we sleep, the brain is free to make things up in ways not synced up to the external world through the senses.

I don't know how true this idea is, but it makes sense and sounds plausible. If it's true, we can expect any creature that dreams likely also experiences a picture of a reality centered on a subject.

Jason

John Clark

Apr 28, 2021, 3:15:47 PM
to 'Brent Meeker' via Everything List
On Wed, Apr 28, 2021 at 2:39 PM Terren Suydam <terren...@gmail.com> wrote:

>> Forget BF Skinner, this is more general than consciousness or behavior. If you want to explain Y at the most fundamental level from first principles you can't start with "X produces Y" and then use X as part of your explanation of Y.

> OK, I want to explain consciousness from first principles, so Y = consciousness. What is X? 

Something that shows up on a brain scan machine according to you. 

> I'm interested in a theory of consciousness that can tell me, among other things, how it is that we have conscious experiences when we dream. Don't you wonder about that?

I am far more interested in understanding the mental activity of a person when he's awake than when he's asleep.

> I'm very curious about how intelligence works too.

Glad to hear it, but there's 10 times or 20 times more verbiage about consciousness than intelligence on this list.

John K Clark    See what's on my new list at  Extropolis


Brent Meeker

Apr 28, 2021, 3:27:36 PM
to everyth...@googlegroups.com


On 4/28/2021 12:03 PM, John Clark wrote:


On Wed, Apr 28, 2021 at 2:41 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:

> one can discover whether your reports of green qualia correspond to something consistent in our shared world,

Consistency is not the same as identity. If what you and I mean by the words "red" and "green" were inverted then both of us would still say tomatoes are red and leaves are green, but those things would not look subjectively the same to us.

How do you know that?  If you can't know they're the same, you can't know whether they are different either.

Notice that I referred to "reports".  You're worrying whether the qualia are the same...contrary to your own avowal that there's no there there.

Brent



John Clark

Apr 28, 2021, 3:32:50 PM
to 'Brent Meeker' via Everything List
On Wed, Apr 28, 2021 at 3:27 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:


>> Consistency is not the same as identity. If what you and I mean by the words "red" and "green" were inverted then both of us would still say tomatoes are red and leaves are green, but those things would not look subjectively the same to us.

> How do you know that? 

I don't know that you and I have opposite conceptions of red and green, and the reason I don't know is because language would be consistent both with their meaning the same thing and with their meanings having been inverted.

John K Clark







 

Terren Suydam

unread,
Apr 28, 2021, 3:50:13 PM4/28/21
to Everything List
On Wed, Apr 28, 2021 at 3:15 PM John Clark <johnk...@gmail.com> wrote:
On Wed, Apr 28, 2021 at 2:39 PM Terren Suydam <terren...@gmail.com> wrote:

>> Forget BF Skinner, this is more general than consciousness or behavior. If you want to explain Y at the most fundamental level from first principles you can't start with "X produces Y" and then use X as part of your explanation of Y.

> OK, I want to explain consciousness from first principles, so Y = consciousness. What is X? 

Something that shows up on a brain scan machine according to you. 

You're obfuscating. I was pretty clear that I was talking about peoples' reports of their own subjective experience, but you clipped that out and made it seem otherwise. Maybe you did that because your whole edifice crumbles if you admit that testimony of experience constitutes facts about consciousness.
 

> I'm interested in a theory of consciousness that can tell me, among other things, how it is that we have conscious experiences when we dream. Don't you wonder about that?

I am far more interested in understanding the mental activity of a person when he's awake than when he's asleep.


We're talking about consciousness, not merely "mental activity". Regardless, you have every right to be incurious about matters like these. The mystery is why you involve yourself in conversations you have no interest in.
 
> I'm very curious about how intelligence works too.

Glad to hear it, but there's 10 times or 20 times more verbiage about consciousness than intelligence on this list.

Nobody's forcing you to read it. 

Terren
 

John K Clark    See what's on my new list at  Extropolis



John Clark

unread,
Apr 28, 2021, 4:08:27 PM4/28/21
to 'Brent Meeker' via Everything List
On Wed, Apr 28, 2021 at 3:50 PM Terren Suydam <terren...@gmail.com> wrote:
> testimony of experience constitutes facts about consciousness.

Sure I agree, provided you first accept that consciousness is the inevitable byproduct of intelligence  
 
>> I am far more interested in understanding the mental activity of a person when he's awake than when he's asleep.

> We're talking about consciousness, not merely "mental activity".

And as I mentioned in a previous post, if consciousness is NOT the inevitable byproduct of intelligence then when we're talking about consciousness we don't even know if we're talking about the same thing.

John K Clark    See what's on my new list at  Extropolis

Brent Meeker

unread,
Apr 28, 2021, 4:12:57 PM4/28/21
to everyth...@googlegroups.com


On 4/28/2021 12:09 PM, Jason Resch wrote:


On Wed, Apr 28, 2021 at 2:02 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:


On 4/28/2021 11:39 AM, Terren Suydam wrote:
>
> I'm interested in a theory of consciousness that can tell me, among
> other things, how it is that we have conscious experiences when we
> dream. Don't you wonder about that?

Not especially.  It's certainly consistent with consciousness being a
brain process.  And it's consistent with Jeff Hawkins' theory that the
brain is continually trying to predict sensation, and that it is the
predictions endorsed by the most neurons that constitute conscious
thoughts.  In sleep, with little or no sensory input, the predictions
wander, depending mainly on memory for input.


There was a neurologist (I forget who) who said "Waking life is a dream modulated by the senses."

Paul Churchland.  And I think it's a good observation.

Brent

In other words, the brain's main function is effectively that of a dreaming machine (to generate a picture of reality centered on a subject). Normally, when we are awake, this dream is synced up to mostly follow along with an external world, given data input from the senses. But when we sleep, the brain is free to make things up in ways not synced to the external world through the senses.

I don't know how true this idea is, but it makes sense and sounds plausible. If it's true, we can expect any creature that dreams likely also experiences a picture of a reality centered on a subject.
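As a minimal sketch of that picture in Python (my own toy illustration, not anyone's actual model, and all the numbers are made up): a predictor that is corrected by sense data while "awake" and free-runs on memory noise while "asleep".

import random

def simulate(awake, steps=50):
    world, belief = 1.0, 0.0
    for _ in range(steps):
        world = 0.9 * world + random.gauss(0, 0.1)  # toy external world
        prediction = 0.9 * belief                   # the brain's guess
        if awake:
            # Waking: the prediction is pulled toward actual sense data.
            belief = prediction + 0.5 * (world - prediction)
        else:
            # Dreaming: no sensory correction; the prediction feeds on
            # itself plus a little "memory" noise, so it wanders freely.
            belief = prediction + random.gauss(0, 0.1)
    return round(world, 3), round(belief, 3)

print("awake  (world, belief):", simulate(awake=True))
print("asleep (world, belief):", simulate(awake=False))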

Jason

Terren Suydam

unread,
Apr 28, 2021, 4:47:56 PM4/28/21
to Everything List
On Wed, Apr 28, 2021 at 4:08 PM John Clark <johnk...@gmail.com> wrote:
On Wed, Apr 28, 2021 at 3:50 PM Terren Suydam <terren...@gmail.com> wrote:

> testimony of experience constitutes facts about consciousness.

Sure I agree, provided you first accept that consciousness is the inevitable byproduct of intelligence  

I hope the irony is not lost on anyone that you're insisting on your theory of consciousness to make your case that theories of consciousness are a waste of time.

I don't think it's necessary to accept that in order to make use of testimony by a thousand different people in an experiment who all say: "whatever you're doing, it's weird, I am smelling gasoline".
 
 
>> I am far more interested in understanding the mental activity of a person when he's awake than when he's asleep.

> We're talking about consciousness, not merely "mental activity".

And as I mentioned in a previous post, if consciousness is NOT the inevitable byproduct of intelligence then when we're talking about consciousness we don't even know if we're talking about the same thing.

If you want to get pedantic you can say we don't know if we're talking about the same thing even if we do accept consciousness as the inevitable byproduct of intelligence. So that heuristic isn't helpful. Again, if someone claims to be in pain, then that's a fact we can use, even if the character of that pain isn't knowable publicly. Ditto for seeing red, or any other claim about qualia.

Terren
 
John K Clark    See what's on my new list at  Extropolis


John Clark

unread,
Apr 28, 2021, 5:51:45 PM4/28/21
to 'Brent Meeker' via Everything List
On Wed, Apr 28, 2021 at 4:48 PM Terren Suydam <terren...@gmail.com> wrote:

>>> testimony of experience constitutes facts about consciousness.

>> Sure I agree, provided you first accept that consciousness is the inevitable byproduct of intelligence  

> I hope the irony is not lost on anyone that you're insisting on your theory of consciousness to make your case that theories of consciousness are a waste of time.

If you believe in Darwinian evolution and if you believe you are conscious then, given that evolution can't select for what it can't see and natural selection can see intelligent behavior but it can't see consciousness, can you give me an explanation of how evolution managed to produce a conscious being such as yourself if consciousness is not the inevitable byproduct of intelligence?

Terren Suydam

unread,
Apr 28, 2021, 6:18:13 PM4/28/21
to Everything List
It's not an inevitable byproduct of intelligence if consciousness is an epiphenomenon. As you like to say, consciousness may just be how data feels as it's being processed. If so, that doesn't imply anything about intelligence per se, beyond the minimum intelligence required to process data at all... the simplest example being a thermostat.
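For concreteness, a thermostat really is about the most minimal data processor one can write; here is a sketch in Python (whether anything is felt as it runs is exactly what's at issue):

def thermostat(temperature, setpoint=20.0, hysteresis=0.5):
    # Minimal data processing: one comparison, one bit of output state.
    if temperature < setpoint - hysteresis:
        return "heater_on"
    if temperature > setpoint + hysteresis:
        return "heater_off"
    return "no_change"

for t in (18.0, 20.0, 22.0):
    print(t, "->", thermostat(t))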

That said, do you agree that testimony of experience constitutes facts about consciousness?

Terren
 

John K Clark    See what's on my new list at  Extropolis


 



Brent Meeker

unread,
Apr 28, 2021, 7:25:16 PM4/28/21
to everyth...@googlegroups.com


On 4/28/2021 3:17 PM, Terren Suydam wrote:


On Wed, Apr 28, 2021 at 5:51 PM John Clark <johnk...@gmail.com> wrote:
On Wed, Apr 28, 2021 at 4:48 PM Terren Suydam <terren...@gmail.com> wrote:

>>> testimony of experience constitutes facts about consciousness.

>> Sure I agree, provided you first accept that consciousness is the inevitable byproduct of intelligence  

> I hope the irony is not lost on anyone that you're insisting on your theory of consciousness to make your case that theories of consciousness are a waste of time.

If you believe in Darwinian evolution and if you believe you are conscious then, given that evolution can't select for what it can't see and natural selection can see intelligent behavior but it can't see consciousness, can you give me an explanation of how evolution managed to produce a conscious being such as yourself if consciousness is not the inevitable byproduct of intelligence?

It's not an inevitable byproduct of intelligence if consciousness is an epiphenomenon. As you like to say, consciousness may just be how data feels as it's being processed. If so, that doesn't imply anything about intelligence per se, beyond the minimum intelligence required to process data at all... the simplest example being a thermostat.

That said, do you agree that testimony of experience constitutes facts about consciousness?

It wouldn't if it were just random, like plucking passages out of novels.  We only take it as evidence of consciousness because there are consistent patterns of correlation with what each of us experiences.  If every time you pointed to a flower you said "red", regardless of the flower's color, a child would learn that "red" meant a flower and his reporting when he saw red wouldn't be testimony to the experience of  red.  So the usefulness of reports already depends on physical patterns in the world.  Something I've been telling Bruno...physics is necessary to consciousness.
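A small Python illustration of that (the list of things is invented): the normal speaker's "red" tracks redness, while the mislearned speaker's "red" tracks flowerhood, and only the correlation with a physical pattern in the world distinguishes them.

things = [("rose", "red"), ("violet", "blue"), ("daisy", "white"),
          ("poppy", "red"), ("grass", "green")]  # grass is not a flower

def normal_speaker(obj, color):
    return color == "red"      # says "red" exactly when the thing is red

def mislearned_speaker(obj, color):
    return obj != "grass"      # says "red" for any flower whatsoever

for speaker in (normal_speaker, mislearned_speaker):
    tracks_redness = all(speaker(o, c) == (c == "red") for o, c in things)
    print(speaker.__name__, "tracks redness:", tracks_redness)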

Brent


Terren
 

John K Clark    See what's on my new list at  Extropolis


 


Terren Suydam

unread,
Apr 28, 2021, 7:41:13 PM4/28/21
to Everything List
On Wed, Apr 28, 2021 at 7:25 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:


On 4/28/2021 3:17 PM, Terren Suydam wrote:


On Wed, Apr 28, 2021 at 5:51 PM John Clark <johnk...@gmail.com> wrote:
On Wed, Apr 28, 2021 at 4:48 PM Terren Suydam <terren...@gmail.com> wrote:

>>> testimony of experience constitutes facts about consciousness.

>> Sure I agree, provided you first accept that consciousness is the inevitable byproduct of intelligence  

> I hope the irony is not lost on anyone that you're insisting on your theory of consciousness to make your case that theories of consciousness are a waste of time.

If you believe in Darwinian evolution and if you believe you are conscious then, given that evolution can't select for what it can't see and natural selection can see intelligent behavior but it can't see consciousness, can you give me an explanation of how evolution managed to produce a conscious being such as yourself if consciousness is not the inevitable byproduct of intelligence?

It's not an inevitable byproduct of intelligence if consciousness is an epiphenomenon. As you like to say, consciousness may just be how data feels as it's being processed. If so, that doesn't imply anything about intelligence per se, beyond the minimum intelligence required to process data at all... the simplest example being a thermostat.

That said, do you agree that testimony of experience constitutes facts about consciousness?

It wouldn't if it were just random, like plucking passages out of novels.  We only take it as evidence of consciousness because there are consistent patterns of correlation with what each of us experiences.  If every time you pointed to a flower you said "red", regardless of the flower's color, a child would learn that "red" meant a flower and his reporting when he saw red wouldn't be testimony to the experience of  red.  So the usefulness of reports already depends on physical patterns in the world.  Something I've been telling Bruno...physics is necessary to consciousness.

Brent

I agree with everything you said there, but all you're saying is that intersubjective reality must be consistent to make sense of other people's utterances. OK, but if it weren't, we wouldn't be here talking about anything. None of this would be possible.

Terren

Brent Meeker

unread,
Apr 28, 2021, 8:15:19 PM4/28/21
to everyth...@googlegroups.com
Which is why it's a fool's errand to say we need to explain qualia.  If we can make an AI that responds to the world the way we do, that's all there is to saying it has the same qualia.

Brent

Terren Suydam

unread,
Apr 29, 2021, 12:43:02 AM4/29/21
to Everything List
I don't think either of those claims follows. We need to explain suffering if we hope to make sense of how to treat AIs. If it were only about redness I'd agree. But creating entities whose existence is akin to being in hell is immoral. And we should know if we're doing that.

To your second point, I think you're too quick to make an equivalence between an AI's responses and their subjective experience. You sound like John Clark - the only thing that matters is behavior.

Terren
 

Brent


Brent Meeker

unread,
Apr 29, 2021, 1:57:43 AM4/29/21
to everyth...@googlegroups.com


On 4/28/2021 9:42 PM, Terren Suydam wrote:


On Wed, Apr 28, 2021 at 8:15 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:


On 4/28/2021 4:40 PM, Terren Suydam wrote:


On Wed, Apr 28, 2021 at 7:25 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:


On 4/28/2021 3:17 PM, Terren Suydam wrote:


On Wed, Apr 28, 2021 at 5:51 PM John Clark <johnk...@gmail.com> wrote:
On Wed, Apr 28, 2021 at 4:48 PM Terren Suydam <terren...@gmail.com> wrote:

>>> testimony of experience constitutes facts about consciousness.

>> Sure I agree, provided you first accept that consciousness is the inevitable byproduct of intelligence  

> I hope the irony is not lost on anyone that you're insisting on your theory of consciousness to make your case that theories of consciousness are a waste of time.

If you believe in Darwinian evolution and if you believe you are conscious then, given that evolution can't select for what it can't see and natural selection can see intelligent behavior but it can't see consciousness, can you give me an explanation of how evolution managed to produce a conscious being such as yourself if consciousness is not the inevitable byproduct of intelligence?

It's not an inevitable byproduct of intelligence if consciousness is an epiphenomenon. As you like to say, consciousness may just be how data feels as it's being processed. If so, that doesn't imply anything about intelligence per se, beyond the minimum intelligence required to process data at all... the simplest example being a thermostat.

That said, do you agree that testimony of experience constitutes facts about consciousness?

It wouldn't if it were just random, like plucking passages out of novels.  We only take it as evidence of consciousness because there are consistent patterns of correlation with what each of us experiences.  If every time you pointed to a flower you said "red", regardless of the flower's color, a child would learn that "red" meant a flower and his reporting when he saw red wouldn't be testimony to the experience of  red.  So the usefulness of reports already depends on physical patterns in the world.  Something I've been telling Bruno...physics is necessary to consciousness.

Brent

I agree with everything you said there, but all you're saying is that intersubjective reality must be consistent to make sense of other people's utterances. OK, but if it weren't, we wouldn't be here talking about anything. None of this would be possible.

Which is why it's a fool's errand to say we need to explain qualia.  If we can make an AI that responds to the world the way we do, that's all there is to saying it has the same qualia.

I don't think either of those claims follows. We need to explain suffering if we hope to make sense of how to treat AIs. If it were only about redness I'd agree. But creating entities whose existence is akin to being in hell is immoral. And we should know if we're doing that.

John McCarthy wrote a paper in the '50s warning about the possibility of accidentally making a conscious AI and unknowingly treating it unethically.  But I don't see the difference from any other qualia; we can only judge by behavior.  In fact this whole thread was started by JKC considering AI pain, which he defined in terms of behavior.



To your second point, I think you're too quick to make an equivalence between an AI's responses and their subjective experience. You sound like John Clark - the only thing that matters is behavior.

Behavior includes reports. What else would you suggest we go on?

Brent

Telmo Menezes

unread,
Apr 29, 2021, 4:31:05 AM4/29/21
to Everything List


Am Mi, 28. Apr 2021, um 20:51, schrieb Brent Meeker:


On 4/28/2021 9:54 AM, Telmo Menezes wrote:


Am Di, 27. Apr 2021, um 04:07, schrieb 'Brent Meeker' via Everything List:
It certainly seems likely that any brain or AI that can perceive sensory events and form an inner narrative and memory of that is conscious in a sense even if they are unable to act.  This is commonly the situation during a dream.  One is aware of dreamt events but doesn't actually move in response to them.

And I think JKC is wrong when he says
"few if any believe other people are conscious all the time, only during those times that corresponds to the times they behave intelligently."  I generally assume people are conscious if their eyes are open and they respond to stimuli, even if they are doing something dumb.

But I agree with his general point that consciousness is easy and intelligence is hard.

JFK insists on this point a lot, but I really do not understand how it matters. Maybe so, maybe if idealism or panpsychism are correct, consciousness is the easiest thing there is, from an engineering perspective. But what does the technical challenge have to do with searching for truth and understanding reality?

Reminds me of something I heard a meditation teacher say once. He said that for eastern people he has to say that "meditation is very hard, it takes a lifetime to master!". Generalizing a lot, eastern culture values the idea of mastering something that is very hard, it is thus a worthy goal. For westerners he says: "meditation is the easiest thing in the world". And thus it satisfies the (generalizing a lot) westerner taste for a magic pill that immediately solves all problems.

I think you are falling for similar traps.

Which is what? 

The trap of equating the perceived difficulty of a task with its merit. Are we after the truth, or are we after bragging rights?

I think you are falling into the trap of searching for the ding an sich.  Engineering is the measure of understanding. 
That's JKC's point (JFK is dead),

My apologies to JKC for my dyslexia, it was not on purpose.

if your theory doesn't lead to engineering it's just philosophizing and that's easy.


Well, that is you philosophizing, isn't it? Saying that "engineering is the measure of understanding" is a philosophical position that you are not bothering to justify.

If you propose a hypothesis, we can follow this hypothesis to its logical conclusions. So let us say that brain activity generates consciousness. The brain is a finite thing, so its state can be fully described by some finite configuration. Furthermore, this configuration can be replicated in time and space. So a consequence of claiming that the brain generates consciousness is that a conscious state cannot be constrained by time or space. If the exact configuration we are experiencing now is replicated 1 million years from now or in another galaxy, then it leads to the same exact first person experience and the instantiations cannot be distinguished. If you want pure physicalism then you have to add something more to your hypothesis.

Telmo


Brent


John Clark

unread,
Apr 29, 2021, 5:13:12 AM4/29/21
to 'Brent Meeker' via Everything List
On Wed, Apr 28, 2021 at 6:18 PM Terren Suydam <terren...@gmail.com> wrote:

>> If you believe in Darwinian evolution and if you believe you are conscious then, given that evolution can't select for what it can't see and natural selection can see intelligent behavior but it can't see consciousness, can you give me an explanation of how evolution managed to produce a conscious being such as yourself if consciousness is not the inevitable byproduct of intelligence?

> It's not an inevitable byproduct of intelligence if consciousness is an epiphenomenon.

That remark makes no sense, and you never answered my question. If consciousness is an epiphenomenon, and from Evolution's point of view it certainly is, then the only way natural selection could've produced consciousness is if it's the inevitable byproduct of something else that is not an epiphenomenon, something like intelligence. And you know for a fact that Evolution has produced consciousness at least once and probably many billions of times.
 
> As you like to say, consciousness may just be how data feels as it's being processed. If so, that doesn't imply anything about intelligence per se, beyond the minimum intelligence required to process data at all.

For the purposes of this argument it's irrelevant if any sort of data processing can produce consciousness or if only the type that leads to intelligence can, because evolution doesn't select for data processing, it selects for intelligence, but you can't have intelligence without data processing. 

> do you agree that testimony of experience constitutes facts about consciousness?

Only if I first assume that intelligence implies consciousness, otherwise I'd have no way of knowing if the being giving the testimony about consciousness was itself conscious. And only if I am convinced that the being giving the testimony was as honest as he can be. And only if I feel confident we agree about the meaning of certain words, like "green" and "red" and "hot" and "cold" and you guessed it "consciousness".  

PGC

unread,
Apr 29, 2021, 8:16:52 AM4/29/21
to Everything List
On Thursday, April 29, 2021 at 10:31:05 AM UTC+2 telmo wrote:


Am Mi, 28. Apr 2021, um 20:51, schrieb Brent Meeker:


On 4/28/2021 9:54 AM, Telmo Menezes wrote:


Am Di, 27. Apr 2021, um 04:07, schrieb 'Brent Meeker' via Everything List:
It certainly seems likely that any brain or AI that can perceive sensory events and form an inner narrative and memory of that is conscious in a sense even if they are unable to act.  This is commonly the situation during a dream.  One is aware of dreamt events but doesn't actually move in response to them.

And I think JKC is wrong when he says
"few if any believe other people are conscious all the time, only during those times that corresponds to the times they behave intelligently."  I generally assume people are conscious if their eyes are open and they respond to stimuli, even if they are doing something dumb.

But I agree with his general point that consciousness is easy and intelligence is hard.

JFK insists on this point a lot, but I really do not understand how it matters. Maybe so, maybe if idealism or panpsychism are correct, consciousness is the easiest thing there is, from an engineering perspective. But what does the technical challenge have to do with searching for truth and understanding reality?

Reminds me of something I heard a meditation teacher say once. He said that for eastern people he has to say that "meditation is very hard, it takes a lifetime to master!". Generalizing a lot, eastern culture values the idea of mastering something that is very hard, it is thus a worthy goal. For westerners he says: "meditation is the easiest thing in the world". And thus it satisfies the (generalizing a lot) westerner taste for a magic pill that immediately solves all problems.

I think you are falling for similar traps.

Which is what? 

The trap of equating the perceived difficulty of a task with its merit. Are we after the truth, or are we after bragging rights?

That ambiguity exists whenever people pair their genuine Christian names to a post. Anonymity can at times be a form of politeness in the 'you can't take my posts seriously' sense. Everybody uses their real names to convince others on the net... as if anybody on the net or social media ever said: "ah, thank you for convincing me to depart from my flawed points of view with the truth! Now I am less dumb."
 

I think you are falling into the trap of searching for the ding an sich.  Engineering is the measure of understanding. 
That's JKC's point (JFK is dead),

My apologies to JKC for my dyslexia, it was not on purpose.

if your theory doesn't lead to engineering it's just philosophizing and that's easy.


Well, that is you philosophizing, isn't it? Saying that "engineering is the measure of understanding" is a philosophical position that you are not bothering to justify.

So is saying practically anything.
 

If you propose a hypothesis, we can follow this hypothesis to its logical conclusions. So let us say that brain activity generates consciousness. The brain is a finite thing, so its state can be fully described by some finite configuration. Furthermore, this configuration can be replicated in time and space. So a consequence of claiming that the brain generates consciousness is that a conscious state cannot be constrained by time or space. If the exact configuration we are experiencing now is replicated 1 million years from now or in another galaxy, then it leads to the same exact first person experience and the instantiations cannot be distinguished. If you want pure physicalism then you have to add something more to your hypothesis.

How about the precision and effectiveness of that way of thinking as opposed to mathematical approaches? Physicists imho allow themselves a more relaxed attitude, setting aside concerns about whether the mathematical objects exist and making the kinds of approximations that mathematicians would never allow themselves. It took some decades, for example, up to around 1950, for physicists to work out renormalization in quantum field theory, where calculating the perturbative expansion yields divergent integrals for all terms of second order and above. As more precise spectroscopy revealed the fine structure of atomic emission spectra, those physicists sought a way to pull a finite result from the divergent integrals. By restricting the domain of integration to energies of order mc^2, and through unjustified subtractions, they obtained a finite result very close to the experimental one. 

Then Tomonaga, Dyson, Feynman etc. improved the technique until the degree of precision became satisfactory. Renormalization, for calculation purposes, replaces the mass of the electron with a quantity that depends on the relevant magnitude of energies, yet diverges as that magnitude tends towards infinity... Mathematicians wouldn't have pulled that off. They wouldn't take those liberties. Even if you extracted Feynman's integral out of self-reference precisely, instead of "something like it must be there, because it fits", the temptation of mathematicians to believe that physics can be reduced to a number of equations is understandable; but the physical style of reasoning is part of what makes it possible to understand those equations and frame them at all to begin with. To assume that mathematics absolutely encompasses everything that physicists have discovered can appear authoritarian. 

The old joke of the physicist going to a mathematician's shop with the words "Dry Cleaning" written on the sign out front comes to mind. The physicist walks in with a dirty suit and wants to know when he can pick it back up. The mathematician owner replies "I'm afraid we don't do dry cleaning." "What? Why does the sign out front say 'Dry Cleaning'?" asks the surprised physicist. The owner replies: "Oh, we actually don't ever clean anything in here. We just sell signs." PGC

Terren Suydam

unread,
Apr 29, 2021, 9:34:42 AM4/29/21
to Everything List
A theory would give you a way to predict what kinds of beings are capable of feeling pain. We wouldn't have to wait to observe their behavior, we'd say "given theory X, we know that if we create an AI with these characteristics, it will be the kind of entity that is capable of suffering".
 

To your second point, I think you're too quick to make an equivalence between an AI's responses and their subjective experience. You sound like John Clark - the only thing that matters is behavior.

Behavior includes reports. What else would you suggest we go on?

Again, in a theory of consciousness that explains how qualia come to be within a system, you could make claims about their experience that go beyond observing behavior. I know John Clark's head just exploded, but that's the point of having a theory of consciousness.
 

Brent


Terren Suydam

unread,
Apr 29, 2021, 9:48:36 AM4/29/21
to Everything List
On Thu, Apr 29, 2021 at 5:13 AM John Clark <johnk...@gmail.com> wrote:
On Wed, Apr 28, 2021 at 6:18 PM Terren Suydam <terren...@gmail.com> wrote:

>> If you believe in Darwinian evolution and if you believe you are conscious then, given that evolution can't select for what it can't see and natural selection can see intelligent behavior but it can't see consciousness, can you give me an explanation of how evolution managed to produce a conscious being such as yourself if consciousness is not the inevitable byproduct of intelligence?

> It's not an inevitable byproduct of intelligence if consciousness is an epiphenomenon.

That remark makes no sense, and you never answered my question. If consciousness is an epiphenomenon, and from Evolution's point of view it certainly is, then the only way natural selection could've produced consciousness is if it's the inevitable byproduct of something else that is not an epiphenomenon, something like intelligence. And you know for a fact that Evolution has produced consciousness at least once and probably many billions of times.

I mostly agree, my only hang-up is with the word 'inevitable'. I think it's possible there was consciousness before there was intelligence, depending on how you define intelligence.
 
 
> As you like to say, consciousness may just be how data feels as it's being processed. If so, that doesn't imply anything about intelligence per se, beyond the minimum intelligence required to process data at all.

For the purposes of this argument it's irrelevant if any sort of data processing can produce consciousness or if only the type that leads to intelligence can, because evolution doesn't select for data processing, it selects for intelligence, but you can't have intelligence without data processing. 

You keep coming back to intelligence in a conversation about consciousness. That's fine, but when you do you're implicitly working with a theory of consciousness. Then, you're demanding that I use your theory of consciousness when you insist that I answer questions about consciousness through the framing of evolution. It's a bit of a contradiction to be using a theory of consciousness to point out how pointless theories of consciousness are.
 

> do you agree that testimony of experience constitutes facts about consciousness?

Only if I first assume that intelligence implies consciousness, otherwise I'd have no way of knowing if the being giving the testimony about consciousness was itself conscious. And only if I am convinced that the being giving the testimony was as honest as he can be. And only if I feel confident we agree about the meaning of certain words, like "green" and "red" and "hot" and "cold" and you guessed it "consciousness".  

OK, fine, let's say intelligence implies consciousness, the account given was honest (as in, nobody witnessing the account would have a credible reason to doubt it), and we can agree on all those terms.

Then do you agree that said account constitutes facts about consciousness?

Terren

 
John K Clark    See what's on my new list at  Extropolis.


John Clark

unread,
Apr 29, 2021, 10:08:45 AM4/29/21
to 'Brent Meeker' via Everything List
On Thu, Apr 29, 2021 at 9:34 AM Terren Suydam <terren...@gmail.com> wrote:

> A theory would give you a way to predict what kinds of beings are capable of feeling pain

Finding a theory is not a problem; theories are a dime a dozen, consciousness theories doubly so. But how could you ever figure out if your consciousness theory was correct?  

 > we'd say "given theory X,

And if the given X which we take as being true is "Hogwarts exists" then we must logically conclude we could find Harry Potter at that magical school of witchcraft and wizardry.

> we know that if we create an AI with these characteristics,

If you're talking about observable characteristics then yes, but then you're just talking about behavior not consciousness.  

> a theory of consciousness that explains how qualia come to be within a system,

Explains? Just what sort of theory would satisfy you and make you say the problem of consciousness has been solved? If I said the chemical Rednosium Oxide produced qualia would all your questions be answered or would you be curious to know how this chemical managed to do that?   
 
> you could make claims about their experience that go beyond observing behavior.

Claims are even easier to come by than theories are, but true claims not so much.

John Clark

unread,
Apr 29, 2021, 10:38:19 AM4/29/21
to 'Brent Meeker' via Everything List
On Thu, Apr 29, 2021 at 9:48 AM Terren Suydam <terren...@gmail.com> wrote:
 
>I think it's possible there was consciousness before there was intelligence,

I very much doubt it, but of course nobody will ever be able to prove or disprove it so the proposition fits in very nicely with all existing consciousness literature.   
 
> you're implicitly working with a theory of consciousness. Then, you're demanding that I use your theory of consciousness when you insist that I answer questions about consciousness through the framing of evolution.

I proposed a question, "How is it possible that evolution managed to produce consciousness?" and I gave the only answer to that question I could think of. And three times I've asked you if you can think of another answer. And three times I received nothing back but evasion. I now ask the same question for a fourth time: given that evolution can't select for what it can't see, and natural selection can see intelligent behavior but can't see consciousness, can you give me an explanation different from my own of how evolution managed to produce a conscious being such as yourself?
 
> >> do you agree that testimony of experience constitutes facts about consciousness?

>> Only if I first assume that intelligence implies consciousness, otherwise I'd have no way of knowing if the being giving the testimony about consciousness was itself conscious. And only if I am convinced that the being giving the testimony was as honest as he can be. And only if I feel confident we agree about the meaning of certain words, like "green" and "red" and "hot" and "cold" and you guessed it "consciousness".  

> OK, fine, let's say intelligence implies consciousness,

If you grant me that then what are we arguing about?

>the account given was honest (as in, nobody witnessing the account would have a credible reason to doubt it),

The most successful lies are those in which the reason for the lying is not immediately obvious.
 
> and we can agree on all those terms.

Do we really agree on all those terms? How can we know words that refer to qualia mean the same thing to both of us? There is no objective test for it; if there were, qualia wouldn't be subjective, they would be objective.  
John K Clark    See what's on my new list at  Extropolis.

Terren Suydam

unread,
Apr 29, 2021, 11:47:51 AM4/29/21
to Everything List
On Thu, Apr 29, 2021 at 10:08 AM John Clark <johnk...@gmail.com> wrote:
On Thu, Apr 29, 2021 at 9:34 AM Terren Suydam <terren...@gmail.com> wrote:

> A theory would give you a way to predict what kinds of beings are capable of feeling pain

Finding a theory is not a problem; theories are a dime a dozen, consciousness theories doubly so. But how could you ever figure out if your consciousness theory was correct?  

The same way we figure out any theory is correct. Does it have explanatory power? Does it make falsifiable predictions? We're still arguing over whether there's such a thing as a fact about consciousness, but if we can imagine a world where you grant that there are, that's the world in which you can test theories of consciousness.
 

 > we'd say "given theory X,

And if the given X which we take as being true is "Hogwarts exists" then we must logically conclude we could find Harry Potter at that magical school of witchcraft and wizardry.

> we know that if we create an AI with these characteristics,

If you're talking about observable characteristics then yes, but then you're just talking about behavior not consciousness.  

Sure, but we might be talking about the behavior of neurons, or their equivalent in an AI.
 

> a theory of consciousness that explains how qualia come to be within a system,

Explains? Just what sort of theory would satisfy you and make you say the problem of consciousness has been solved? If I said the chemical Rednosium Oxide produced qualia would all your questions be answered or would you be curious to know how this chemical managed to do that?   

All of our disagreements come down to whether there are facts about consciousness. You don't think there are, and that's all the question above is saying.
 
 
> you could make claims about their experience that go beyond observing behavior.

Claims are even easier to come by than theories are, but true claims not so much.

John K Clark    See what's on my new list at  Extropolis.



Terren Suydam

unread,
Apr 29, 2021, 12:24:09 PM4/29/21
to Everything List
On Thu, Apr 29, 2021 at 10:38 AM John Clark <johnk...@gmail.com> wrote:
On Thu, Apr 29, 2021 at 9:48 AM Terren Suydam <terren...@gmail.com> wrote:
 
>I think it's possible there was consciousness before there was intelligence,

I very much doubt it, but of course nobody will ever be able to prove or disprove it so the proposition fits in very nicely with all existing consciousness literature.   

The point was that it's not necessarily true that consciousness is the inevitable byproduct of intelligence.
 
 
> you're implicitly working with a theory of consciousness. Then, you're demanding that I use your theory of consciousness when you insist that I answer questions about consciousness through the framing of evolution.

I proposed a question, "How is it possible that evolution managed to produce consciousness?" and I gave the only answer to that question I could think of. And three times I've asked you if you can think of another answer. And three times I received nothing back but evasion. I now ask the same question for a fourth time: given that evolution can't select for what it can't see, and natural selection can see intelligent behavior but can't see consciousness, can you give me an explanation different from my own of how evolution managed to produce a conscious being such as yourself?

No, I can't. If you're saying evolution didn't select for consciousness, it selected for intelligence, I agree with that. But so what?
 
 
> >> do you agree that testimony of experience constitutes facts about consciousness?

>> Only if I first assume that intelligence implies consciousness, otherwise I'd have no way of knowing if the being giving the testimony about consciousness was itself conscious. And only if I am convinced that the being giving the testimony was as honest as he can be. And only if I feel confident we agree about the meaning of certain words, like "green" and "red" and "hot" and "cold" and you guessed it "consciousness".  

> OK, fine, let's say intelligence implies consciousness,

If you grant me that then what are we arguing about?

Over whether there are facts about consciousness, without having to link it to intelligence.
 

>the account given was honest (as in, nobody witnessing the account would have a credible reason to doubt it),

The most successful lies are those in which the reason for the lying is not immediately obvious.

There's uncertainty with the behavior of single subatomic particles, but when we observe the aggregate behavior of large numbers of them, we call those statistical observations facts, and those observations are repeatable. There's a value of N for which studying N humans in a consciousness experiment puts the probability that they're all lying below a certain threshold.
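Back-of-the-envelope, with made-up numbers: if each subject independently lies with probability p, the chance that all N are lying is p**N, so N need only exceed log(threshold)/log(p).

import math

p = 0.5           # assumed per-subject probability of lying (illustrative)
threshold = 1e-6  # how improbable "they're all lying" has to be

N = math.ceil(math.log(threshold) / math.log(p))  # smallest N with p**N < threshold
print(N, "subjects -> P(all lying) =", p ** N)    # 20 -> about 9.5e-07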
 
 
> and we can agree on all those terms.

Do we really agree on all those terms? How can we know words that refer to qualia mean the same thing to both of us? There is no objective test for it; if there were, qualia wouldn't be subjective, they would be objective.  

We don't need infinite precision to uncover useful facts. If someone says "that hurts", or "that looks red", we know what they mean. We take it as an assumption, and we make it explicit, that when someone says "I see red" they are having the same kind of, or similar enough, experience as someone else who says "I see red". 

There's no question that the type of evidence you get from first-person reports is vulnerable to deception, biases, and uncertainty around referents. But we live with this in everyday life. It's not unreasonable to systematize first-person reports and include that data as evidence for theorizing, as long as those vulnerabilities are acknowledged. Like I've said from the beginning, it may be the case that we'll never arrive at a theory of consciousness that emerges as a clear winner. But I disagree with you that it's not worth trying to find one, or that it's impossible to make progress.

Terren
 
John K Clark    See what's on my new list at  Extropolis.


John Clark

unread,
Apr 29, 2021, 1:36:47 PM4/29/21
to 'Brent Meeker' via Everything List
On Thu, Apr 29, 2021 at 11:47 AM Terren Suydam <terren...@gmail.com> wrote:


Finding a theory is not a problem; theories are a dime a dozen, consciousness theories doubly so. But how could you ever figure out if your consciousness theory was correct?  

The same way we figure out any theory is correct.

In science we judge a theory by how well it can predict how something will behave, but you are not interested in behavior, you're interested in consciousness, so I repeat: how do you determine if consciousness theory #6,948,603,924 is correct?  


>>If you're talking about observable characteristics then yes, but then you're just talking about behavior not consciousness.  

>Sure, but we might be talking about the behavior of neurons, or their equivalent in an AI.

The behavior of neurons is not consciousness.  


> All of our disagreements come down to whether there are facts about consciousness. You don't think there are,

Not true, I know one thing from direct experience, and that outranks even the scientific method: I know that I am conscious.  

Brent Meeker

unread,
Apr 29, 2021, 1:38:50 PM4/29/21
to everyth...@googlegroups.com


On 4/29/2021 1:30 AM, Telmo Menezes wrote:


Am Mi, 28. Apr 2021, um 20:51, schrieb Brent Meeker:


On 4/28/2021 9:54 AM, Telmo Menezes wrote:


Am Di, 27. Apr 2021, um 04:07, schrieb 'Brent Meeker' via Everything List:
It certainly seems likely that any brain or AI that can perceive sensory events and form an inner narrative and memory of that is conscious in a sense even if they are unable to act.  This is commonly the situation during a dream.  One is aware of dreamt events but doesn't actually move in response to them.

And I think JKC is wrong when he says
"few if any believe other people are conscious all the time, only during those times that corresponds to the times they behave intelligently."  I generally assume people are conscious if their eyes are open and they respond to stimuli, even if they are doing something dumb.

But I agree with his general point that consciousness is easy and intelligence is hard.

JFK insists on this point a lot, but I really do not understand how it matters. Maybe so, maybe if idealism or panpsychism are correct, consciousness is the easiest thing there is, from an engineering perspective. But what does the technical challenge have to do with searching for truth and understanding reality?

Reminds me of something I heard a meditation teacher say once. He said that for eastern people he has to say that "meditation is very hard, it takes a lifetime to master!". Generalizing a lot, eastern culture values the idea of mastering something that is very hard, it is thus a worthy goal. For westerners he says: "meditation is the easiest thing in the world". And thus it satisfies the (generalizing a lot) westerner taste for a magic pill that immediately solves all problems.

I think you are falling for similar traps.

Which is what? 

The trap of equating the perceived difficulty of a task with its merit. Are we after the truth, or are we after bragging rights?

The point of consciousness being "easy" is that theories of consciousness as a thing in itself are untestable, so there is no way to say what is true.  Just because you place value on a task doesn't mean it's not imaginary.  There are people who think the whole purpose of life is to get to heaven.



I think you are falling into the trap of searching for the ding an sich.  Engineering is the measure of understanding. 
That's JKC's point (JFK is dead),

My apologies to JKC for my dyslexia, it was not on purpose.

if your theory doesn't lead to engineering it's just philosophizing and that's easy.


Well, that is you philosophizing, isn't it? Saying that "engineering is the measure of understanding" is a philosophical position that you are not bothering to justify.

It's a philosophy of how we know when we understand something.  What are your criteria?  What would constitute an understanding of qualia that would satisfy you? 

If you propose a hypothesis, we can follow this hypothesis to its logical conclusions. So let us say that brain activity generates consciousness. The brain is a finite thing, so its state can be fully described by some finite configuration. Furthermore, this configuration can be replicated in time and space. So a consequence of claiming that the brain generates consciousness is that a conscious state cannot be constrained by time or space. If the exact configuration we are experiencing now is replicated 1 million years from now or in another galaxy, then it leads to the same exact first person experience and the instantiations cannot be distinguished. If you want pure physicalism then you have to add something more to your hypothesis.

But it couldn't lead to intelligent action.  So one could say it's not consciousness because it doesn't have the required relation to its environment/body.  Which was my point that physics is required.  A disembodied brain is like the rock that calculates everything.  One might suppose a Boltzmann brain comes into existence, experiences an instant of being JKC before vanishing.  So what?  What conclusion do you draw from that?  That consciousness can't be a physical process?

Brent


John Clark

unread,
Apr 29, 2021, 2:04:57 PM4/29/21
to 'Brent Meeker' via Everything List
On Thu, Apr 29, 2021 at 12:24 PM Terren Suydam <terren...@gmail.com> wrote:

I proposed a question, "How is it possible that evolution managed to produce consciousness?" and I gave the only answer to that question I could think of. And three times I've asked you if you can think of another answer. And three times I received nothing back but evasion. I now ask the same question for a fourth time: given that evolution can't select for what it can't see, and natural selection can see intelligent behavior but can't see consciousness, can you give me an explanation different from my own of how evolution managed to produce a conscious being such as yourself?

>No, I can't.

So I can explain something that you cannot. So which of our ideas is superior?
 
> If you're saying evolution didn't select for consciousness, it selected for intelligence, I agree with that. But so what?

So what?!! If evolution selects for intelligence and you can't have intelligence without data processing and consciousness is the way data feels when it is being processed then it's no great mystery as to how evolution managed to produce consciousness by way of natural selection. 

>>> OK, fine, let's say intelligence implies consciousness,

>> If you grant me that then what are we arguing about?

> Over whether there are facts about consciousness, without having to link it to intelligence.

If there is no link between consciousness and intelligence then there is absolutely positively no way Darwinian Evolution could have produced consciousness. But I don't think Darwin was wrong, I think you are. 
 
>> Do we really agree on all those terms? How can we know words that refer to qualia mean the same thing to both of us? There is no objective test for it; if there were, qualia wouldn't be subjective, they would be objective.  

> We don't need infinite precision to uncover useful facts.

I'm not talking about infinite precision; when it comes to qualia there is no assurance that we even approximately agree on meanings.

> If someone says "that hurts", or "that looks red", we know what they mean.

Do you? When they say "that looks red" the red qualia they refer to may be your green qualia, and your green qualia could be their red qualia, but both of you still use the English word "red" to describe the qualia color of blood and the English word "green" to describe the qualia color of a leaf.  
 
> We take it as an assumption, and we make it explicit, that when someone says "I see red" they are having the same kind of, or similar enough,

That is one hell of an assumption! If you're willing to do that why not be done with it and just take it as an assumption that your consciousness theory, whatever it may be, is correct?

Brent Meeker

unread,
Apr 29, 2021, 2:08:28 PM4/29/21
to everyth...@googlegroups.com


On 4/29/2021 6:34 AM, Terren Suydam wrote:

On Thu, Apr 29, 2021 at 1:57 AM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:


On 4/28/2021 9:42 PM, Terren Suydam wrote:


On Wed, Apr 28, 2021 at 8:15 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:


On 4/28/2021 4:40 PM, Terren Suydam wrote:

I agree with everything you said there, but all you're saying is that intersubjective reality must be consistent to make sense of other people's utterances. OK, but if it weren't, we wouldn't be here talking about anything. None of this would be possible.

Which is why it's a fool's errand to say we need to explain qualia.  If we can make an AI that responds to the world the way we do, that's all there is to saying it has the same qualia.

I don't think either of those claims follows. We need to explain suffering if we hope to make sense of how to treat AIs. If it were only about redness I'd agree. But creating entities whose existence is akin to being in hell is immoral. And we should know if we're doing that.

John McCarthy wrote a paper in the '50s warning about the possibility of accidentally making a conscious AI and unknowingly treating it unethically.  But I don't see the difference from any other qualia; we can only judge by behavior.  In fact this whole thread was started by JKC considering AI pain, which he defined in terms of behavior.


A theory would give you a way to predict what kinds of beings are capable of feeling pain. We wouldn't have to wait to observe their behavior, we'd say "given theory X, we know that if we create an AI with these characteristics, it will be the kind of entity that is capable of suffering".

Right.  And the theory is that the AI is feeling pain if it is exerting all available effort to change its state.
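One way to read that criterion as code, a sketch rather than a definitive implementation (the names and numbers are mine): "pain" is simply the fraction of the program's resources diverted to escaping a state.

def effort_to_escape(x, p, radius=10.0):
    # Fraction of resources spent moving x away from p: ramps from 0
    # (far away) to 1 (x == p), where everything else gets dropped.
    return max(0.0, 1.0 - abs(x - p) / radius)

x, p = 3, 5
while effort_to_escape(x, p) > 0.0:
    x += 1 if x >= p else -1   # simplest escape policy: step away from p
print("settled at x =", x, "with effort", effort_to_escape(x, p))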


 

To your second point, I think you're too quick to make an equivalence between an AI's responses and their subjective experience. You sound like John Clark - the only thing that matters is behavior.

Behavior includes reports. What else would you suggest we go on?

Again, in a theory of consciousness that explains how qualia come to be within a system, you could make claims about their experience that go beyond observing behavior. I know John Clark's head just exploded, but that's the point of having a theory of consciousness.

Of course you can have such a theory.  But how you can have evidence for or against it is the question.  How can it be anything but a speculation?

Brent

Brent Meeker

unread,
Apr 29, 2021, 2:18:00 PM4/29/21
to everyth...@googlegroups.com


On 4/29/2021 7:08 AM, John Clark wrote:
On Thu, Apr 29, 2021 at 9:34 AM Terren Suydam <terren...@gmail.com> wrote:

> A theory would give you a way to predict what kinds of beings are capable of feeling pain

Finding a theory is not a problem; theories are a dime a dozen, consciousness theories doubly so. But how could you ever figure out if your consciousness theory was correct?

 > we'd say "given theory X,

And if the given X which we take as being true is "Hogwarts exists" then we must logically conclude we could find Harry Potter at that magical school of witchcraft and wizardry.

> we know that if we create an AI with these characteristics,

If you're talking about observable characteristics then yes, but then you're just talking about behavior not consciousness.  

> a theory of consciousness that explains how qualia come to be within a system,

Explains? Just what sort of theory would satisfy you and make you say the problem of consciousness has been solved? If I said the chemical Rednosium Oxide produced qualia would all your questions be answered or would you be curious to know how this chemical managed to do that?  

You don't even have to invent an example.  Panpsychism seems to be the latest fad in consciousness philosophy...everything is conscious, a little bit.

Brent

 
> you could make claims about their experience that go beyond observing behavior.

Claims are even easier to come by than theories are, but true claims not so much.

John K Clark    See what's on my new list at  Extropolis.



Terren Suydam

unread,
Apr 29, 2021, 2:39:26 PM4/29/21
to Everything List
I have a limit of how many times I will go around this circle. Let's just focus on whether we can make statements of fact about consciousness, which is what the other email thread does.
 



Terren Suydam

unread,
Apr 29, 2021, 3:10:28 PM4/29/21
to Everything List
On Thu, Apr 29, 2021 at 2:04 PM John Clark <johnk...@gmail.com> wrote:

On Thu, Apr 29, 2021 at 12:24 PM Terren Suydam <terren...@gmail.com> wrote:

>> I proposed a question, "How is it possible that evolution managed to produce consciousness?" and I gave the only answer to that question I could think of. And three times I've asked you if you can think of another answer. And three times I received nothing back but evasion. I now ask the same question for a fourth time: given that evolution can't select for what it can't see, and natural selection can see intelligent behavior but it can't see consciousness, can you give me an explanation different from my own of how evolution managed to produce a conscious being such as yourself?

>No, I can't.

So I can explain something that you cannot. So which of our ideas is superior?

All you've succeeded in doing is showing your preference for a particular theory of consciousness. It doesn't go very far, but you're pretty clear that you're not interested in anything beyond that. But for those of us who are interested in, say, an account of the difference between dreaming and lucid dreaming, it's inadequate.
 
 
> If you're saying evolution didn't select for consciousness, it selected for intelligence, I agree with that. But so what?

So what?!! If evolution selects for intelligence and you can't have intelligence without data processing and consciousness is the way data feels when it is being processed then it's no great mystery as to how evolution managed to produce consciousness by way of natural selection. 

For what you, John Clark, require out of a theory of consciousness, you've got one that works for you. Thumbs up. For me and others who enjoy the mystery of it, it's not enough. You're entitled to think going further is a waste of time. But after you've said that a hundred times, maybe we all get the point and if you don't have anything new to contribute, it's time to gracefully bow out.
 

>>> OK, fine, let's say intelligence implies consciousness,

>> If you grant me that then what are we arguing about?

> Over whether there are facts about consciousness, without having to link it to intelligence.

If there is no link between consciousness and intelligence then there is absolutely positively no way Darwinian Evolution could have produced consciousness. But I don't think Darwin was wrong, I think you are.

I'm claiming neither that evolution produced consciousness nor that Darwin was wrong.
 
 
>> Do we really agree on all those terms? How can we know words that refer to qualia mean the same thing to both of us? There is no objective test for it; if there were, then qualia wouldn't be subjective, they would be objective.

> We don't need infinite precision to uncover useful facts.

I'm not talking about infinite precision. When it comes to qualia there is no assurance that we even approximately agree on meanings.

If that were true, language would be useless.
 

> If someone says "that hurts", or "that looks red", we know what they mean.

Do you? When they say "that looks red" the red qualia they refer to may be your green qualia, and your green qualia could be their red qualia, but both of you still use the English word "red" to describe the qualia color of blood and the English word "green" to describe the qualia color of a leaf.  

I don't care about that. What matters is that you know you are seeing red and I know I am seeing red. There's just no point in comparing private experiences, which is something I know we agree on. But that's not all there is to a theory of consciousness.
 
 
> We take it as an assumption, and we make it explicit, that when someone says "I see red" they are having the same kind of, or similar enough,

That is one hell of an assumption! If you're willing to do that, why not be done with it and just take it as an assumption that your consciousness theory, whatever it may be, is correct?

Is it? It's what we assume in every conversation we have.

Terren
 




Jason Resch

unread,
Apr 29, 2021, 3:24:34 PM4/29/21
to Everything List
On Thu, Apr 29, 2021 at 1:04 PM John Clark <johnk...@gmail.com> wrote:

On Thu, Apr 29, 2021 at 12:24 PM Terren Suydam <terren...@gmail.com> wrote:

>> I proposed a question, "How is it possible that evolution managed to produce consciousness?" and I gave the only answer to that question I could think of. And three times I've asked you if you can think of another answer. And three times I received nothing back but evasion. I now ask the same question for a fourth time: given that evolution can't select for what it can't see, and natural selection can see intelligent behavior but it can't see consciousness, can you give me an explanation different from my own of how evolution managed to produce a conscious being such as yourself?

>No, I can't.

So I can explain something that you cannot. So which of our ideas is superior?
 
> If you're saying evolution didn't select for consciousness, it selected for intelligence, I agree with that. But so what?

So what?!! If evolution selects for intelligence and you can't have intelligence without data processing and consciousness is the way data feels when it is being processed then it's no great mystery as to how evolution managed to produce consciousness by way of natural selection. 

I would phrase this differently. I would say you cannot have intelligence without knowledge, and you cannot have knowledge without consciousness. Under this framing, you can have consciousness without intelligence (as intelligence requires interaction with an environment in accordance with achieving some goal). See the image below that I made to represent this (agent-environment interaction model of intelligence):

[Image: agent-environment-interaction.png — diagram of the agent-environment interaction model of intelligence]

So while intelligence requires actions, if you have someone with locked-in syndrome, or someone dreaming, you could still say they are conscious, despite not being "intelligent" since they are not performing any intelligent behaviors.  Note that in this model, intelligent behavior requires perceptions (consciousness).

I think this theory of consciousness can explain more than making an identity between intelligence and consciousness, as it can account for consciousness in dreams, and it can also explain why evolution selected for consciousness (perceptions of the environment are required for intelligent behavior).
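
The loop in that model is easy to caricature in code. A bare-bones sketch in Python, where the classes, the goal, and the update rules are all invented for illustration:

class Environment:
    def __init__(self):
        self.state = 0

    def observe(self):
        return self.state        # what the agent can perceive

    def apply(self, action):
        self.state += action     # actions change the environment

class Agent:
    def __init__(self, goal):
        self.goal = goal
        self.percept = None      # perception: the conscious part, on this model

    def perceive(self, observation):
        self.percept = observation

    def act(self):
        # Intelligent behavior: act on perceptions in service of a goal.
        if self.percept is None:
            return 0
        return 1 if self.percept < self.goal else -1

env, agent = Environment(), Agent(goal=10)
for _ in range(15):
    agent.perceive(env.observe())   # a locked-in agent would still have this step
    env.apply(agent.act())          # ...but would be unable to perform this one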


Jason

John Clark

unread,
Apr 30, 2021, 5:24:39 AM4/30/21
to 'Brent Meeker' via Everything List
On Thu, Apr 29, 2021 at 3:10 PM Terren Suydam <terren...@gmail.com> wrote:

>>>> I proposed a question, "How is it possible that evolution managed to produce consciousness?" and I gave the only answer to that question I could think of. And three times I've asked you if you can think of another answer. And three times I received nothing back but evasion. I now ask the same question for a fourth time: given that evolution can't select for what it can't see, and natural selection can see intelligent behavior but it can't see consciousness, can you give me an explanation different from my own of how evolution managed to produce a conscious being such as yourself?

>>>No, I can't.

>>So I can explain something that you cannot. So which of our ideas is superior?

> All you've succeeded in doing is showing your preference for a particular theory

Correct. If idea X can explain something better than idea Y then I prefer idea X.

>> If there is no link between consciousness and intelligence then there is absolutely positively no way Darwinian Evolution could have produced consciousness. But I don't think Darwin was wrong, I think you are.

> I'm claiming neither that evolution produced consciousness nor that Darwin was wrong.

You're going to have to clarify that remark, it can't possibly be as nuts as it seems to be.

>> I'm not talking about infinite precision. When it comes to qualia there is no assurance that we even approximately agree on meanings.

> If that were true, language would be useless.

Nonsense. If somebody says "pick up that red object" we both know what is expected of us even though we may have very very different mental conceptions of the qualia "red" because we both agree that the dictionary says red is the color formed in the mind when light of a wavelength of 700 nanometers enters the eye, and that object is reflecting light that is doing precisely that to both of us.
 
>> When they say "that looks red" the red qualia they refer to may be your green qualia, and your green qualia could be their red qualia, but both of you still use the English word "red" to describe the qualia color of blood and the English word "green" to describe the qualia color of a leaf.  

> I don't care about that. What matters is that you know you are seeing red and I know I am seeing red.

In other words you care more about behavior than consciousness, because the use of the word "red" is consistent between the two of us, as is our behavior, regardless of what our subjective impression of "red" is. So I guess you're starting to agree with me.

Bruno Marchal

unread,
Apr 30, 2021, 7:19:46 AM4/30/21
to everyth...@googlegroups.com
Hi Jason,


On 25 Apr 2021, at 22:29, Jason Resch <jason...@gmail.com> wrote:

It is quite easy, I think, to define a program that "remembers" (stores and later retrieves ( information.

It is slightly harder, but not altogether difficult, to write a program that "learns" (alters its behavior based on prior inputs).

What though, is required to write a program that "knows" (has awareness or access to information or knowledge)?

Does, for instance, the following program "know" anything about the data it is processing?

if (pixel.red > 128) then {
    // knows pixel.red is greater than 128
} else { 
    // knows pixel.red <= 128
}

If not, what else is required for knowledge?

Do you agree that knowledgeability obeys

 knowledgeability(A) -> A
 knowledgeability(A) ->  knowledgeability(knowledgeability(A))

And also, to limit ourselves to rational knowledge:

 knowledgeability(A -> B) ->  (knowledgeability(A) ->  knowledgeability(B))

From this, it can be proved that “knowledgeability” of any “rich” machine (proving enough theorems of arithmetic) is not definable in the language of that machine, or in any language available to that machine.

So the best we can do is to define a notion of belief (which abandons the reflexion axiom, i.e. we abandon belief(A) -> A). That makes belief definable (in the language of the machine), and then we can apply the idea of Theaetetus, and define knowledge (or knowledgeability, when we add the transitivity []p -> [][]p) by true belief.

The machine knows A when she believes A and A is true.
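
A toy rendering of that definition in Python; the belief set is invented, and the truth oracle merely stands in for the truth predicate which, by Tarski, the machine cannot define for itself:

# Theaetetus in miniature: knowledge = true belief.
beliefs = {"2+2=4", "7 is prime", "9 is prime"}   # definable inside the machine

def is_true(statement):
    # Stand-in oracle, necessarily external to the machine.
    facts = {"2+2=4", "7 is prime"}
    return statement in facts

def believes(s):
    return s in beliefs

def knows(s):
    return believes(s) and is_true(s)   # true belief

assert knows("2+2=4")                                       # knowledge
assert believes("9 is prime") and not knows("9 is prime")   # mere (false) belief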






Does the program behavior have to change based on the state of some information? For example:

if (pixel.red > 128) then {
    // knows pixel.red is greater than 128
    doX();
} else { 
    // knows pixel.red <= 128
    doY():
}

Or does the program have to possess some memory and enter a different state based on the state of the information it processed?

if (pixel.red > 128) then {
    // knows pixel.red is greater than 128
    enterStateX():
} else { 
    // knows pixel.red <= 128
    enterStateY();
}

Or is something else altogether needed to say the program knows?

You need self-reference ability for the notion of belief, together with a notion of reality or truth, which the machine cannot define.

To get immediate knowledgeability you need to add consistency ([]p & <>t), to get ([]p & <>t & p), which prevents transitivity and gives the machine a feeling of immediacy.



If a program can be said to "know" something then can we also say it is conscious of that thing?

1) That’s *not* the case for []p & p, unless you accept a notion of unconscious knowledge, like knowing that Perseverance and Ingenuity are on Mars but not currently thinking about it, so that you are not right now consciously aware of the fact---well you are, but only because I have just reminded you of it :)

2) But that *is* the case for []p & <>t & p. If the machine knows something in that sense, then the machine can be said to be conscious of p. 
Then to be “simply” conscious, becomes []t & <>t (& t). 

Note that “p” always refers to a partially computable arithmetical (or combinatorical) proposition. That’s the way of translating “Digital Mechanism” in the language of the machine.

To sum up, to get a conscious machine, you need a computer (aka universal number/machine) with some notion of belief, and knowledge/consciousness arise from the actuation of truth, which the machine cannot define (by the theorem of Tarski and some variants by Montague, Thomason, and myself...).

That theory can be said to be a posteriori well tested, because it implies the quantum reality, at least the one described by the Schroedinger equation or Heisenberg matrix (or even better the Feynman integral), WITHOUT any collapse postulate.

Bruno




Jason


Terren Suydam

unread,
Apr 30, 2021, 9:53:40 AM4/30/21
to Everything List
On Fri, Apr 30, 2021 at 5:24 AM John Clark <johnk...@gmail.com> wrote:
On Thu, Apr 29, 2021 at 3:10 PM Terren Suydam <terren...@gmail.com> wrote:

>>>> I proposed a question, "How is it possible that evolution managed to produce consciousness?" and I gave the only answer to that question I could think of. And three times I've asked you if you can think of another answer. And three times I received nothing back but evasion. I now ask the same question for a fourth time: given that evolution can't select for what it can't see, and natural selection can see intelligent behavior but it can't see consciousness, can you give me an explanation different from my own of how evolution managed to produce a conscious being such as yourself?

>>>No, I can't.

>>So I can explain something that you cannot. So which of our ideas is superior?

> All you've succeeded in doing is showing your preference for a particular theory

Correct. If idea X can explain something better than idea Y then I prefer idea X.

What intention did you have that caused you to change "... a particular theory of consciousness" to "a particular theory"?  You clearly had a purpose.
 

>> If there is no link between consciousness and intelligence then there is absolutely positively no way Darwinian Evolution could have produced consciousness. But I don't think Darwin was wrong, I think you are.

> I'm claiming neither that evolution produced consciousness nor that Darwin was wrong.

You're going to have to clarify that remark, it can't possibly be as nuts as it seems to be.

It is tiresome arguing with you. You are one of the least generous people I've ever argued with. You intentionally obfuscate, attack straw men, selectively clip responses, don't address challenging points, don't budge an inch, and just generally take a disrespectful tone. And not just with me, here, but with others, no matter the topic. I hope for your sake that's not how you present in real life. It's not all bad, I appreciate having to clarify my thoughts, and normally I love a good debate but I'm just being masochistic if I continue at this point.

Terren
 


John Clark

unread,
Apr 30, 2021, 11:09:34 AM4/30/21
to 'Brent Meeker' via Everything List
On Fri, Apr 30, 2021 at 9:53 AM Terren Suydam <terren...@gmail.com> wrote:

>>> All you've succeeded in doing is showing your preference for a particular theory

>> Correct. If idea X can explain something better than idea Y then I prefer idea X.

> What intention did you have that caused you to change "... a particular theory of consciousness" to "a particular theory"?  

My intention was the same as it always is when I trim verbiage in quotations: to delete the inessential. And if idea X can explain something better than idea Y then I prefer idea X regardless of what the topic is about.


> You are one of the least generous people I've ever argued with.

If you make a good point I will admit it without hesitation, so now all you have to do is make one.

> You intentionally obfuscate, attack straw men, selectively clip responses, don't address challenging points, don't budge an inch and  [blah blah]

So the only rebuttal you have to my logical arguments is a paragraph of personal insults.  

> just generally take a disrespectful tone.

From this point onward I solemnly swear to give you all the respect you deserve.  
John K Clark    See what's on my new list at  Extropolis



Terren Suydam

unread,
Apr 30, 2021, 12:56:19 PM4/30/21
to Everything List
I have arguments against your arguments, and anyone can see that. But it doesn't go anywhere because you often remove my rebuttals from your response and/or misrepresent or obfuscate my position - also evident in this thread, as anyone can also see. So these aren't personal insults, they're just observations anyone can verify. If it feels insulting, maybe don't do those things.

I don't harbor any illusions that pointing this out will make any difference to you. I'm just explaining why I'm backing out, and it isn't because I've run out of things to say.

Terren
 



John Clark

unread,
Apr 30, 2021, 2:21:33 PM4/30/21
to 'Brent Meeker' via Everything List
On Fri, Apr 30, 2021 at 12:56 PM Terren Suydam <terren...@gmail.com> wrote:

> I have arguments against your arguments,

They say persistence is a virtue so I'll ask the same question for the fourth time: given that evolution can't select for what it can't see, and natural selection can see intelligent behavior but it can't see consciousness, can you give me an explanation of how evolution managed to produce a conscious being such as yourself if consciousness is not the inevitable byproduct of intelligence?
 
> and anyone can see that.

Anyone? As of today no one has seen your answer to that question.

> But it doesn't go anywhere because you often remove my rebuttals from your response

If there's one thing I hate about mailing lists of this sort, it's the endless nested iterations of quotes of quotes of quotes of quotes of quotes of quotes of quotes. I have done my very best to keep that to a minimum and will continue to do so. And if anybody wants to see the entire director's cut of your post it will only take them about 0.09 seconds to find it.


> these aren't personal insults,

 "You are one of the least generous people I've ever argued with. You intentionally obfuscate, attack straw men, selectively clip responses, don't address challenging points, don't budge an inch, and just generally take a disrespectful tone. And not just with me, here, but with others, no matter the topic. I hope for your sake that's not how you present in real life."

> I'm backing out,

Suit yourself, but I'm not afraid to continue.
John K Clark    See what's on my new list at  Extropolis

Brent Meeker

unread,
Apr 30, 2021, 2:22:21 PM4/30/21
to everyth...@googlegroups.com


On 4/30/2021 2:24 AM, John Clark wrote:
Nonsense. If somebody says "pick up that red object" we both know what is expected of us even though we may have very very different mental conceptions of the qualia "red" because we both agree that the dictionary says red is the color formed in the mind when light of a wavelength of 700 nanometers enters the eye, and that object is reflecting light that is doing precisely that to both of us.

But if the qualia of experiencing red is nothing more than the neuronal structure and process that consistently associates 700nm signals from the retina with the same actions as everyone else, e.g. saying "red", stopping at the light, eating the fruit,...then it seems to me it is perfectly justified to say people share the same qualia.  That's the engineering stance.  What the qualia really is, is a pseudo-problem, like what the wave-function of an electron really is.  It's just the word for the fact that when you say, "Think of something red," I think of something reflecting 700nm photons.
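
On that stance, "two people share the red qualia" unpacks to "they run the same stimulus-to-behavior mapping". A toy version in Python, with invented cutoffs and responses:

# Engineering-stance caricature: a qualia-word names a consistent
# stimulus -> label -> behavior association.
def color_word(wavelength_nm):
    if 620 <= wavelength_nm <= 750:
        return "red"
    if 495 <= wavelength_nm < 570:
        return "green"
    return "other"

def respond(wavelength_nm):
    word = color_word(wavelength_nm)
    if word == "red":
        return "stop at the light"
    if word == "green":
        return "go"
    return "no color-specific behavior"

# Two agents running this same mapping "share the qualia" in this sense,
# whatever their private experience (if any) may be.
assert respond(700) == "stop at the light"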

Brent

Brent Meeker

unread,
Apr 30, 2021, 2:47:18 PM4/30/21
to everyth...@googlegroups.com


On 4/30/2021 4:19 AM, Bruno Marchal wrote:
If a program can be said to "know" something then can we also say it is conscious of that thing?

That's not even common parlance.  Conscious thoughts are fleeting.  Knowledge is in memory.  I know how to ride a bicycle because I do it unconsciously.  I don't think consciousness can be understood except as a surface or boundary of the subconscious and the unconscious (physics).

Brent

John Clark

unread,
Apr 30, 2021, 2:48:55 PM4/30/21
to 'Brent Meeker' via Everything List
On Fri, Apr 30, 2021 at 2:22 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:

>> If somebody says "pick up that red object" we both know what is expected of us even though we may have very very different mental conceptions of the qualia "red" because we both agree that the dictionary says red is the color formed in the mind when light of a wavelength of 700 nanometers enters the eye, and that object is reflecting light that is doing precisely that to both of us.
 
> But if the qualia of experiencing red is nothing more than the neuronal structure and process that consistently associates 700nm signals from the retina with the same actions as everyone else, e.g. saying "red", stopping at the light, eating the fruit,...then it seems to me it is perfectly justified to say people share the same qualia.  That's the engineering stance. What the qualia really is, is a pseudo-problem,
 
I pretty much agree. You make a strong argument that you and I are experiencing the same qualia,  but I can make an equally strong argument that they can't be the same qualia because if they were then you and I would be the same person. And that I think is a good indication that you're right, it is a pseudo-problem, meaning a question that will never have an answer or lead to anything productive.
John K Clark    See what's on my new list at  Extropolis






Jason Resch

unread,
Apr 30, 2021, 2:52:39 PM4/30/21
to Everything List


On Fri, Apr 30, 2021, 6:19 AM Bruno Marchal <mar...@ulb.ac.be> wrote:
Hi Jason,


On 25 Apr 2021, at 22:29, Jason Resch <jason...@gmail.com> wrote:

It is quite easy, I think, to define a program that "remembers" (stores and later retrieves ( information.

It is slightly harder, but not altogether difficult, to write a program that "learns" (alters its behavior based on prior inputs).

What though, is required to write a program that "knows" (has awareness or access to information or knowledge)?

Does, for instance, the following program "know" anything about the data it is processing?

if (pixel.red > 128) then {
    // knows pixel.red is greater than 128
} else { 
    // knows pixel.red <= 128
}

If not, what else is required for knowledge?

Do you agree that knowledgeability obeys

 knowledgeability(A) -> A
 knowledgeability(A) ->  knowledgeability(knowledgeability(A))

Using the definition of knowledge as "true belief" I agree with this.



And also, to limit ourselves to rational knowledge:

 knowledgeability(A -> B) ->  (knowledgeability(A) ->  knowledgeability(B))

From this, it can be proved that “knowledgeability” of any “rich” machine (proving enough theorems of arithmetic) is not definable in the language of that machine, or in any language available to that machine.

Is this because the definition of knowledge includes truth, and truth is not definable?


So the best we can do is to define a notion of belief (which abandons the reflexion axiom, i.e. we abandon belief(A) -> A). That makes belief definable (in the language of the machine), and then we can apply the idea of Theaetetus, and define knowledge (or knowledgeability, when we add the transitivity []p -> [][]p) by true belief.

The machine knows A when she believes A and A is true.

So is it more appropriate to equate consciousness with belief, rather than with knowledge?

It might be a true fact that "Machine X believes Y", without Y being true. Is it simply the truth that "Machine X believes Y" that makes X conscious of Y?








Does the program behavior have to change based on the state of some information? For example:

if (pixel.red > 128) then {
    // knows pixel.red is greater than 128
    doX();
} else { 
    // knows pixel.red <= 128
    doY():
}

Or does the program have to possess some memory and enter a different state based on the state of the information it processed?

if (pixel.red > 128) then {
    // knows pixel.red is greater than 128
    enterStateX():
} else { 
    // knows pixel.red <= 128
    enterStateY();
}

Or is something else altogether needed to say the program knows?

You need self-reference ability for the notion of belief, together with a notion of reality or truth, which the machine cannot define.

Can a machine believe "2+2=4" without having a reference to itself? What, programmatically, would you say is needed to program a machine that believes "2+2=4" or to implement self-reference?

Does a Turing machine evaluating "if (2+2 == 4) then" believe it? Or does it require theorem proving software that reduces a statement to Peano axioms or similar?
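
For concreteness, the two candidates can be put side by side in Python; the Peano-style derivation below is a sketch of proof-by-computation, not a real theorem prover:

# (a) Direct evaluation: the interpreter's adder just says so.
holds_by_evaluation = (2 + 2 == 4)

# (b) Derivation from Peano-style definitions: numbers as iterated successor,
#     addition defined by a+0 = a and a+S(b) = S(a+b).
def S(n):
    return n + 1

def add(a, b):
    return a if b == 0 else S(add(a, b - 1))

two = S(S(0))
four = S(S(S(S(0))))
holds_by_derivation = (add(two, two) == four)

assert holds_by_evaluation and holds_by_derivation
# Which of (a) and (b), if either, deserves the word "believes" is the question.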


To get immediate knowledgeability you need to add consistency ([]p & <>t), to get ([]p & <>t & p), which prevents transitivity and gives the machine a feeling of immediacy.

By consistency here do you mean the machine must never come to believe something false, or that the machine itself must behave in a manner consistent with its design/definition?

I still have a conceptual difficulty trying to marry these mathematical notions of truth, provability, and consistency with a program/Machine that manifests them.




If a program can be said to "know" something then can we also say it is conscious of that thing?

1) That’s *not* the case for []p & p, unless you accept a notion of unconscious knowledge, like knowing that Perseverance and Ingenuity are on Mars but not currently thinking about it, so that you are not right now consciously aware of the fact---well you are, but only because I have just reminded you of it :)

In a way, I might view these long-term memories as environmental signals that encroach upon one's mind state: a state which is otherwise not immediately aware of all the contents of this memory (like opening a sealed box to discover its content).


2) But that *is* the case for []p & <>t & p. If the machine knows something in that sense, then the machine can be said to be conscious of p. 
Then to be “simply” conscious, becomes []t & <>t (& t). 

Note that “p” always refers to a partially computable arithmetical (or combinatorical) proposition. That’s the way of translating “Digital Mechanism” in the language of the machine.

To sum up, to get a conscious machine, you need a computer (aka universal number/machine) with some notion of belief, and knowledge/consciousness arise from the actuation of truth, which the machine cannot define (by the theorem of Tarski and some variants by Montague, Thomason, and myself...).

That theory can be said to be a posteriori well tested, because it implies the quantum reality, at least the one described by the Schroedinger equation or Heisenberg matrix (or even better the Feynman integral), WITHOUT any collapse postulate.

Can it be said that Deep Blue is conscious of the state of the chess board it evaluates?

Is a Tesla car conscious of whether the traffic signal is showing red, yellow, or green?

Or is a more particular class of software necessary for belief/consciousness? This is what I'm struggling to understand.  I greatly appreciate all the answers you have provided.

Jason


Bruno




Jason


Brent Meeker

unread,
Apr 30, 2021, 3:43:34 PM4/30/21
to everyth...@googlegroups.com


On 4/30/2021 11:20 AM, John Clark wrote:
On Fri, Apr 30, 2021 at 12:56 PM Terren Suydam <terren...@gmail.com> wrote:

> I have arguments against your arguments,

They say persistence is a virtue so I'll ask the same question for the fourth time: given that evolution can't select for what it can't see, and natural selection can see intelligent behavior but it can't see consciousness, can you give me an explanation of how evolution managed to produce a conscious being such as yourself if consciousness is not the inevitable byproduct of intelligence?

It could be the inevitable byproduct of the only path open to evolution.  Evolution has to always build on what has already been evolved.  So what was inevitable starting with ATP->ADP or RNA or DNA might not be inevitable starting with silicon or gallium.  For example, electronics are so much faster than neurons that it might be possible to implement intelligent behavior just by creating giant hash tables of experience and using them as look-ups for responses.  I don't know that this is possible, but it's not obviously impossible, and then it would be hard to say whether this form of AI had qualia or not...unless you accepted my engineering view on qualia.
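
The lookup-table speculation in miniature, in Python; the table and the situation encoding are invented, and the real proposal would of course need astronomically many entries:

# Intelligence as a giant hash table: no model, no reasoning, just recall.
responses = {
    ("greeting", "morning"): "good morning",
    ("greeting", "evening"): "good evening",
    ("question", "weather"): "looks like rain",
}

def respond(situation):
    # Hash the situation, look it up, fall back when experience runs out.
    return responses.get(situation, "no stored response")

print(respond(("greeting", "morning")))   # -> good morning
print(respond(("question", "physics")))   # -> no stored response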

Brent

John Clark

unread,
Apr 30, 2021, 3:54:32 PM4/30/21
to 'Brent Meeker' via Everything List
On Fri, Apr 30, 2021 at 3:43 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:

> electronics are so much faster than neurons, it might be possible to implement intelligent behavior just by creating giant hash tables of experience and using them as look-ups for responses.  I don't know that this is possible, but it's not obviously impossible, and then it would be hard to say whether this form of AI had qualia or not.

Yes, but no harder than for me to figure out if you experience qualia or not. If something is able to give you correct answers to questions and not give you any incorrect answers, I don't see why it should matter exactly how it did it; the answer is still correct regardless of its methods.

Brent Meeker

unread,
Apr 30, 2021, 4:14:58 PM4/30/21
to everyth...@googlegroups.com


On 4/30/2021 11:48 AM, John Clark wrote:
On Fri, Apr 30, 2021 at 2:22 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:

>> If somebody says "pick up that red object" we both know what is expected of us even though we may have very very different mental conceptions of the qualia "red" because we both agree that the dictionary says red is the color formed in the mind when light of a wavelength of 700 nanometers enters the eye, and that object is reflecting light that is doing precisely that to both of us.
 
> But if the qualia of experiencing red is nothing more than the neuronal structure and process that consistently associates 700nm signals from the retina with the same actions as everyone else, e.g. saying "red", stopping at the light, eating the fruit,...then it seems to me it is perfectly justified to say people share the same qualia.  That's the engineering stance. What the qualia really is, is a pseudo-problem,
 
I pretty much agree. You make a strong argument that you and I are experiencing the same qualia,  but I can make an equally strong argument that they can't be the same qualia because if they were then you and I would be the same person.

But they can be the same kind of qualia, just as you being sad and me being sad are the same kind of feeling.  And we give them a name and recognize their commonality by the behavior related to them.  The error arises in trying to make more of them than a name for this relation, to try to make a qualia a kind of substance.

Brent




John Clark

unread,
May 1, 2021, 6:24:13 AM5/1/21
to 'Brent Meeker' via Everything List
On Fri, Apr 30, 2021 at 3:43 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:

> It [consciousness] could be the inevitable byproduct of the only path open to evolution.  Evolution has to always build on what has already been evolved.  So what was inevitable starting with ATP->ADP or RNA or DNA, might not be inevitable starting with silicon or gallium.

I don't see how it could have anything to do with the particular elements involved; however, I agree evolution has serious flaws that close off many paths to intelligence. The most serious (but not the only) flaw is that evolution doesn't understand the concept of taking one step back to gain two steps forward. A human designer could look at the design for a prop airplane engine, decide it is insufficient, throw the design away, and start over from scratch and design a jet engine, but evolution could never do something like that. Every major change that evolution makes to a species is the result of thousands of tiny changes over millions of generations of animals, and every one of those thousands of tiny changes must give an immediate advantage to the animal. Evolution couldn't even fix a flat tire by taking it off and putting on the spare, because once you've removed the flat you've temporarily made the situation even worse: now you have no tire at all. Nevertheless, despite these very serious flaws, evolution managed to produce an intelligence, and one that happened to be conscious too. So it must be easier to make an intelligent conscious mind than an intelligent unconscious mind, so logically your default assumption on seeing an intelligent computer should be that it's conscious; the burden of proof should be on proving that it is not.
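
In search terms the constraint described here is greedy hill-climbing: every step must pay off immediately. A small Python sketch on an invented fitness landscape shows how that traps the search on a local peak that a designer, willing to go downhill first, could escape:

# Greedy hill-climbing: never accept a step that lowers fitness.
def fitness(x):
    # Invented landscape: a low peak at x=3 and a higher peak at x=10.
    return -(x - 3) ** 2 + 9 if x < 6 else -(x - 10) ** 2 + 25

def greedy_climb(x):
    while True:
        best = max((x - 1, x, x + 1), key=fitness)
        if best == x:
            return x    # stuck: no single step gives an immediate advantage
        x = best

assert greedy_climb(0) == 3    # trapped on the lower peak
assert greedy_climb(8) == 10   # the higher peak is reached only from nearby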

Lawrence Crowell

unread,
May 2, 2021, 7:27:56 AM5/2/21
to Everything List
On Monday, April 26, 2021 at 3:50:14 AM UTC-5 johnk...@gmail.com wrote:
On Sun, Apr 25, 2021 at 4:29 PM Jason Resch <jason...@gmail.com> wrote:

> It is quite easy, I think, to define a program that "remembers" (stores and later retrieves ( information.

I agree. And for an emotion like pain write a program such that the closer the number in the X register comes to the integer P the more computational resources will be devoted to changing that number, and if it ever actually equals P then the program should stop doing everything else and do nothing but try to change that number to something far enough away from P until it's no longer an urgent matter and the program can again do things that have nothing to do with P.

Artificial Intelligence is hard but Artificial Consciousness Is easy.

This strikes me as totally wrong. We have what might be called AI, or at least now we have deep learning neural networks that are able to do some highly intelligent things. Even machines that can abstract known physics from a basic set of data, say learning the Copernican system from data on the appearance of planets in the sky, have been demonstrated. We may be near a time when the frontiers of physics will be pursued by AI systems, and we human physicists will do little but sit with slack jaw, maybe get high and wait for the mighty AI oracle to make a pronouncement. Yet I question whether such a deep learning AI system has any cognitive awareness of a physical world or anything else.

LC

Lawrence Crowell

unread,
May 2, 2021, 7:33:34 AM5/2/21
to Everything List
On Monday, April 26, 2021 at 10:16:47 AM UTC-5 johnk...@gmail.com wrote:
On Mon, Apr 26, 2021 at 10:45 AM Terren Suydam <terren...@gmail.com> wrote:

> It's impossible to refute solipsism

True, but it's equally impossible to refute the idea that everything including rocks is conscious. And if both a theory and its exact opposite can neither be proven nor disproven then neither speculation is of any value in trying to figure out how the world works.

If everything is conscious, then how do we know? We have no unconscious objects to compare them with. The panpsychist argument becomes an ouroboros that consumes itself into a vacuous condition of either true or false.  Nothing can be demonstrated from it, so it is a scientifically worthless conjecture.

The best definition of consciousness is that it defines those annoying episodes between sleep.

LC 
 

> It's true that the only thing we know for sure is our own consciousness,
And I know that even I am not conscious all the time, and there is no reason for me to believe other people can do better. 
 
> but there's nothing about what I said that makes it impossible for there to be a reality outside of ourselves populated by other people. It just requires belief.

And few if any believe other people are conscious all the time, only during those times that correspond to the times they behave intelligently.

John Clark

unread,
May 2, 2021, 9:55:30 AM5/2/21
to 'Brent Meeker' via Everything List
On Sun, May 2, 2021 at 7:28 AM Lawrence Crowell <goldenfield...@gmail.com> wrote:

>> Artificial Intelligence is hard but Artificial Consciousness Is easy.

> This strikes me as totally wrong.

Why? Intelligence theories actually have to do something and they have consequences: if your AI company uses the right intelligence idea you could become a billionaire, but the wrong idea could cause bankruptcy. Consciousness theories, on the other hand, don't have to actually do anything, so you can't pick the wrong consciousness idea, because one such theory works as well as another. A consciousness theoretician has the easiest job in the world and it has great job security because he will never be proven wrong.

> We may be near a time when the frontiers of physics will be pursued by AI systems, and we human physicists will do little but sit with slack jaw, maybe get high and wait for the mighty AI oracle to make a pronouncement.

I agree.

> Yet I question whether such a deep learning AI system has any cognitive awareness of a physical world or anything else.

Why? If a machine is as intelligent as a human, or even more so, then I don't see why a brain that is wet and squishy would be able to produce consciousness but a brain that is even more intelligent but dry and hard would not. I don't see how it would be possible to avoid the conclusion that consciousness is the inevitable byproduct of intelligence, because otherwise Evolution could never have been able to produce consciousness, and yet you know from direct experience that it managed to do so at least once. Natural Selection can't select for something it can't see, and it can't see consciousness, but it can see intelligence, and you can't have intelligence without data processing; therefore it must be a brute fact that consciousness is just the way data feels when it is being processed.
 
John K Clark    See what's on my new list at  Extropolis

Brent Meeker

unread,
May 2, 2021, 1:30:51 PM5/2/21
to everyth...@googlegroups.com
In order to be conscious such an AI needs some values and some way to act in its environment to realize them.  And then it would only have a kind of first-order awareness.  To have human-like consciousness it would need to be able to plan its actions by using an internal simulation including itself to predict events.

Brent

Brent Meeker

unread,
May 2, 2021, 1:32:37 PM5/2/21
to everyth...@googlegroups.com


On 5/2/2021 4:33 AM, Lawrence Crowell wrote:
On Monday, April 26, 2021 at 10:16:47 AM UTC-5 johnk...@gmail.com wrote:
On Mon, Apr 26, 2021 at 10:45 AM Terren Suydam <terren...@gmail.com> wrote:

> It's impossible to refute solipsism

True, but it's equally impossible to refute the idea that everything including rocks is conscious. And if both a theory and its exact opposite can neither be proven nor disproven then neither speculation is of any value in trying to figure out how the world works.

If everything is conscious, then how do we know? We have no unconscious objects to compare them with. The panpsychist argument becomes an ouroboros that consumes itself into a vacuous condition of either true or false.  Nothing can be demonstrated from it, so it is a scientifically worthless conjecture.

The best definition of consciousness is that it defines those annoying episodes between sleep.

LC

“A person's waking life is a dream modulated by the senses”
   ---  Rodolfo Llinas, on consciousness

Brent Meeker

unread,
May 3, 2021, 5:16:27 PM5/3/21
to everyth...@googlegroups.com
Combines panpsychism with integrated information theory and assumes causal powers are singular.  That's three strikes in the first five minutes.

Brent

On 4/27/2021 7:26 AM, Philip Thrift wrote:

A bit long, but this interview of the very lucid Hedda Mørch (pronounced "Mark") is very good (for consciousness "realists"):


via 

https://twitter.com/onemorebrown/status/1386970910230523906





On Tuesday, April 27, 2021 at 8:38:32 AM UTC-5 Terren Suydam wrote:
On Tue, Apr 27, 2021 at 7:22 AM John Clark <johnk...@gmail.com> wrote:
On Tue, Apr 27, 2021 at 1:08 AM Terren Suydam <terren...@gmail.com> wrote:

> consciousness is harder to work with than intelligence, because it's harder to make progress.

It's not hard to make progress in consciousness research, it's impossible.  

So we should ignore experiments where you stimulate the brain and the subject reports experiencing some kind of qualia, in a repeatable way. Why doesn't that represent progress?  Is it because you don't trust people's reports?
 

> Facts that might slay your theory are much harder to come by.

Such facts are not hard to come by, they're impossible to come by. So for a consciousness scientist being lazy works just as well as being industrious, and consciousness research couldn't be any easier: just face a wall, sit on your hands, and contemplate your navel.

There are fruitful lines of research happening. Research on subjects undergoing meditation and psychedelic experiences while in an fMRI has led to some interesting facts. You seem to think progress can only mean being able to prove conclusively how consciousness works. Progress can mean deepening our understanding of the relationship between the brain and the mind.

Terren
 

Bruno Marchal

unread,
May 6, 2021, 9:36:14 AM5/6/21
to everyth...@googlegroups.com
On 30 Apr 2021, at 20:47, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:



On 4/30/2021 4:19 AM, Bruno Marchal wrote:
If a program can be said to "know" something then can we also say it is conscious of that thing?

That's not even common parlance.  Conscious thoughts are fleeting.  Knowledge is in memory.  I know how to ride a bicycle because I do it unconsciously.  I don't think consciousness can be understood except as a surface or boundary of the subconscious and the unconscious (physics).

If you use physics, you have to explain what it is, and how it selects the computations in arithmetic, or you need to abandon mechanism. With mechanism, to claim that a machine's consciousness is not attributable to some universal machinery, despite the fact that it executes a computation in the only mathematical sense discovered by Church and Turing (and some others), seems a bit magical.

Note that you don’t quote me, above. You should have quoted my answer. The beauty of Mechanism is that the oldest definition of (rational) knowledge (Theaetetus’ true (justified) opinion) already explains why no machine can define its own knowledge, why consciousness seems necessarily mysterious, and why we get that persistent feeling that we belong to a physical reality, when in fact we are just infinitely many numbers involved in complex relations.

Bruno 





Bruno Marchal

unread,
May 6, 2021, 10:08:04 AM5/6/21
to everyth...@googlegroups.com
On 30 Apr 2021, at 20:52, Jason Resch <jason...@gmail.com> wrote:



On Fri, Apr 30, 2021, 6:19 AM Bruno Marchal <mar...@ulb.ac.be> wrote:
Hi Jason,


On 25 Apr 2021, at 22:29, Jason Resch <jason...@gmail.com> wrote:

It is quite easy, I think, to define a program that "remembers" (stores and later retrieves ( information.

It is slightly harder, but not altogether difficult, to write a program that "learns" (alters its behavior based on prior inputs).

What though, is required to write a program that "knows" (has awareness or access to information or knowledge)?

Does, for instance, the following program "know" anything about the data it is processing?

if (pixel.red > 128) then {
    // knows pixel.red is greater than 128
} else { 
    // knows pixel.red <= 128
}

If not, what else is required for knowledge?

Do you agree that knowledgeability obeys

 knowledgeability(A) -> A
 knowledgeability(A) ->  knowledgeability(knowledgeability(A))

Using the definition of knowledge as "true belief" I agree with this.

OK






And also, to limit ourselves to rational knowledge:

 knowledgeability(A -> B) ->  (knowledgeability(A) ->  knowledgeability(B))

From this, it can be proved that “knowledgeability” of any “rich” machine (proving enough theorems of arithmetic) is not definable in the language of that machine, or in any language available to that machine.

Is this because the definition of knowledge includes truth, and truth is not definable?



Roughly speaking yes, but some could argue that we might define knowledge without invoking truth, or less directly, so it is pleasant that people like Thomason, Artemov, and myself give direct proofs that anything obeying the S4 axioms cannot be defined in arithmetic or by a Turing machine (unless she bets on the “truth” of mechanism, to be sure).




So the best we can do is to define a notion of belief (which abandons the reflexion axiom, i.e. we abandon belief(A) -> A). That makes belief definable (in the language of the machine), and then we can apply the idea of Theaetetus, and define knowledge (or knowledgeability, when we add the transitivity []p -> [][]p) by true belief.

The machine knows A when she believes A and A is true.

So is it more appropriate to equate consciousness with belief, rather than with knowledge?

Consciousness requires some truth at some level. You can be dreaming and having false beliefs, but your consciousness will remain the “indubitable” fixed point, and will remain associated with truth.





It might be a true fact that "Machine X believes Y", without Y being true. Is it simply the truth that "Machine X believes Y" that makes X conscious of Y?

It is more the belief that the machine has a belief which remains true, even if the initial belief is false.











Does the program behavior have to change based on the state of some information? For example:

if (pixel.red > 128) then {
    // knows pixel.red is greater than 128
    doX();
} else { 
    // knows pixel.red <= 128
    doY():
}

Or does the program have to possess some memory and enter a different state based on the state of the information it processed?

if (pixel.red > 128) then {
    // knows pixel.red is greater than 128
    enterStateX():
} else { 
    // knows pixel.red <= 128
    enterStateY();
}

Or is something else altogether needed to say the program knows?

You need self-reference ability for the notion of belief, together with a notion of reality or truth, which the machine cannot define.

Can a machine believe "2+2=4" without having a reference to itself?

Not really, unless you accept the idea of unconscious belief, which makes sense in some psychological theory. 

My method consists in defining “the machine M believes P” by “the machine M asserts P”, and then I limit myself to machines which are correct by definition. This is of no use in psychology, but is enough to derive physics.



What, programmatically, would you say is needed to program a machine that believes "2+2=4" or to implement self-reference?

That it has enough induction axioms, like PA and ZF, but unlike RA (R and Q) or CL (combinatory logic without induction).
The universal machines without induction axioms are conscious, but are very limited in introspective power. They don’t have the rich theology of the machines having induction.
I recall that the induction axioms are all axioms having the shape [P(0) & for all x (P(x) -> P(x+1))] -> for all x P(x). It is an ability to build universals.




Does a Turing machine evaluating "if (2+2 == 4) then" believe it?

If the machine can prove:
Beweisbar(x) -> Beweisbar(Beweisbar(x)), she can be said to be self-conscious. PA can, RA cannot.



Or does it require theorem proving software that reduces a statement to Peano axioms or similar?

That is required for rational belief, but not for the experienceable one.





To get immediate knowledgeability you need to add consistency ([]p & <>t), to get ([]p & <>t & p), which prevents transitivity and gives the machine a feeling of immediacy.

By consistency here do you mean the machine must never come to believe something false, or that the machine itself must behave in a manner consistent with its design/definition?

That the machine will not believe something false. I agree this works only because I can limit myself to correct machines.
The psychology and theology of the lying machine remain to be done, but they are of no use in deriving physics from arithmetic.




I still have a conceptual difficulty trying to marry these mathematical notions of truth, provability, and consistency with a program/Machine that manifests them.

It *is* subtle; that is why we need to use the mathematics of self-reference. It is highly counter-intuitive. All errors in philosophy/theology come from confusing one self-referential mode with another, I would say.








If a program can be said to "know" something then can we also say it is conscious of that thing?

1) That’s *not* the case for []p & p, unless you accept a notion of unconscious knowledge, like knowing that Perseverance and Ingenuity are on Mars while not currently thinking about it, so that you are not right now consciously aware of the fact---well, you are, but only because I have just reminded you of it :)

In a way, I might view these long-term memories as environmental signals that encroach upon one's mind state. A state which is otherwise not immediately aware of all the contents of this memory (like opening a sealed box to discover its contents).

OK.





2) But that *is* the case for []p & <>t & p. If the machine knows something in that sense, then the machine can be said to be conscious of p. 
Then being “simply” conscious becomes []t & <>t (& t). 

Note that “p” always refers to a partially computable arithmetical (or combinatorial) proposition. That’s the way of translating “Digital Mechanism” in the language of the machine.

To sum up, to get a conscious machine, you need a computer (aka universal number/machine) with some notion of belief, and knowledge/consciousness arise from the actuation of truth, which the machine cannot define (by the theorem of Tarski, and some variants by Montague, Thomason, and myself...). 

That theory can be said to be well tested a posteriori, because it implies the quantum reality, at least the one described by the Schroedinger equation or the Heisenberg matrices (or, even better, the Feynman integral), WITHOUT any collapse postulate.
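
As a crude summary of these self-referential modes, one can caricature them in code (Python; this is only a truth-table illustration of the definitions above, not the arithmetical construction, and all names are made up):

def modes(proves_p, consistent, p_true):
    # proves_p   : []p, the machine proves/asserts p
    # consistent : <>t, the machine does not prove the false
    # p_true     : p holds in the intended (arithmetical) reality
    return {
        "belief ([]p)": proves_p,
        "knowledge ([]p & p)": proves_p and p_true,
        "immediate knowledge ([]p & <>t & p)": proves_p and consistent and p_true,
    }

# A consistent machine proving a true p realizes all three modes:
print(modes(proves_p=True, consistent=True, p_true=True))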

Can it be said that Deep Blue is conscious of the state of the chess board it evaluates?

Deep Blue? I guess not. But for AlphaGo, or some of its descendants, it looks like there are circular neural pathways allowing the machine to learn its own behaviour, and to attach some identity in this way. So deep learning might converge on a conscious machine. But that is not verified, and they are still just playing “simple games”. We don’t ask them to build a theory of themselves. It even looks like their builders try to avoid this, and that is normal. Like nature with the insects, we don’t want a terrible child, and we try to make a “mature machine” right at the start.





Is a Tesla car conscious of whether the traffic signal is showing red, yellow, or green?

I doubt this, but I have not studied them. I doubt it has full self-reference ability, like PA and ZF, or any human baby.




Or is a more particular class of software necessary for belief/consciousness? This is what I'm struggling to understand.  I greatly appreciate all the answers you have provided.

All you need is enough induction power. RA + induction on recursive formulas is not enough, unless you add the exponentiation axiom. But RA + induction on recursively enumerable formulas is enough. 

Best,

Bruno



Bruno Marchal

unread,
May 6, 2021, 10:17:02 AM5/6/21
to everyth...@googlegroups.com
Indeed. To make machines as deluded as humans will still require a lot of work!

Intelligence/consciousness, albeit the non-reflexive kind, is maximal with the unprogrammed universal machine. Then reflexivity already complicates things, and is the start of the “fall of the soul”. Soon she will believe that knowing a table proves its reality, and soon enough she will lie and vote for liars…

Many humans tend to believe that they are intelligent, but it is that very belief which makes them stupid.

Bruno





Brent Meeker

unread,
May 6, 2021, 1:24:08 PM5/6/21
to everyth...@googlegroups.com


On 5/6/2021 6:36 AM, Bruno Marchal wrote:

On 30 Apr 2021, at 20:47, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:



On 4/30/2021 4:19 AM, Bruno Marchal wrote:
If a program can be said to "know" something then can we also say it is conscious of that thing?

That's not even common parlance.  Conscious thoughts are fleeting.  Knowledge is in memory.  I know how to ride a bicycle because I do it unconsciously.  I don't think consciousness can be understood except as a surface or boundary of the subconscious and the unconscious (physics).

If you use physics, you have to explain what it is, and how that selects the computations in arithmetic,

That is a field of active research: how brains implement computations and why they do some and not others.


or you need to abandon mechanism.

Only your idea of "mechanism".


With mechanism, to claim that a machine's consciousness is not attributable to some universal machinery, despite the fact that it executes a computation in the only mathematical sense discovered by Church and Turing (and some others), seems a bit magical.

My motorcycle's animation is not attributable to some universal motorcycle.



Note that you don’t quote me, above. You should have quoted my answer. The beauty of Mechanism is that the oldest definition of (rational) knowledge (Theaetetus’ true (justified) opinion) already explains why no machine can define its own knowledge, why consciousness seems necessarily mysterious, and why we get that persistent feeling that we belong to a physical reality, when in fact we are just infinitely many numbers involved in complex relations.

I didn't quote it because it only obscures the transitory nature of conscious thought.  "True belief" is ambiguous; do you have a true belief that 2+2=4 when you are not thinking about numbers, or do you have this true belief at all times...but unconsciously?

Brent

Jason Resch

unread,
May 6, 2021, 4:01:49 PM5/6/21
to Everything List
On Thu, May 6, 2021 at 9:08 AM Bruno Marchal <mar...@ulb.ac.be> wrote:

On 30 Apr 2021, at 20:52, Jason Resch <jason...@gmail.com> wrote:
It might be a true fact that "Machine X believes Y", without Y being true. Is it simply the truth that "Machine X believes Y" that makes X conscious of Y?

It is more the belief that the machine has a belief which remains true, even if the initial belief is false.


Is that extra meta-level of belief necessary for simple awareness, or only for self-awareness?
 

Can a machine believe "2+2=4" without having a reference to itself?

Not really, unless you accept the idea of unconscious belief, which makes sense in some psychological theories. 

My method consists in defining “the machine M believes P” by “the machine M asserts P”, and then I limit myself to machines that are correct by definition. This is of no use in psychology, but it is enough to derive physics.

I see. I think this might account for the confusion I had with respect to the link between consciousness (as we know and perceive it), and the consciousness of a self-referentially correct and consistent machine, which was a necessary simplification in your initial research. (Assuming I understand this correctly).

Self-referentially correct and consistent machines can be conscious, but those properties are not necessary for consciousness. Only "being a machine" of some kind would be necessary. My question would then be, is every machine conscious of something, or are only certain machines conscious? If only some, how soon in the UD would a conscious machine be encountered?

If the UD can be viewed as a machine in its own right, is it a machine that is conscious of everything? A super-mind or over-mind? Or do the minds fractionate due to their lack of relation to each other by the memory divisions of the UD?
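
For readers unfamiliar with the UD, a minimal sketch of the dovetailing idea (Python; the UD proper enumerates all programs of a universal machine and never halts, whereas this sketch assumes the caller supplies that enumeration, here as a hypothetical function nth_program(i) returning the i-th program as a generator):

def dovetail(nth_program):
    """Run nth_program(0), nth_program(1), ... fairly: start one new
    program each round and give every started program one more step,
    so each program eventually receives arbitrarily many steps."""
    running = []
    i = 0
    while True:                       # like the UD, this never halts
        running.append(nth_program(i))  # begin the i-th program
        i += 1
        for prog in list(running):
            try:
                next(prog)            # one more step for this program
            except StopIteration:
                running.remove(prog)  # this program halted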

 
What, programmatically, would you say is needed to program a machine that believes "2+2=4" or to implement self-reference?

That it has enough induction axioms, like PA and ZF, but unlike RA (R and Q) or CL (combinatory logic without induction).
The universal machines without induction axioms are conscious, but are very limited in introspective power. They don’t have the rich theology of the machines having induction.
I recall that the induction axioms are all axioms of the shape [P(0) & (for all x P(x) -> P(x+1))] -> (for all x P(x)). It is an ability to build universals.

Thank you. I can begin to see how induction is necessary for self-reference.
 

Does a Turing machine evaluating "if (2+2 == 4) then" believe it?

If the machine can prove:
Beweisbar(x) -> Beweisbar(Beweisbar(x)) (where “Beweisbar” is Gödel’s arithmetical provability predicate, so this is the modal axiom 4, []p -> [][]p), she can be said to be self-conscious. PA can, RA cannot.



Or does it require theorem proving software that reduces a statement to Peano axioms or similar?

That is required for the rational belief, but not for the experienceable one.



I guess this is what I am most curious about: not so much rational belief or self-consciousness, but the requirements of immediate experience/awareness. If consciousness is the awareness of information, how does one write a program that is "aware" of information? I can see the argument that any handling or processing of information requires, in some sense, some kind of awareness of it.
 



To get immediate knowledgeability you need to add consistency (<>t) to belief, giving []p & <>t, and then truth, to get ([]p & <>t & p), which prevents transitivity and gives the machine a feeling of immediacy. 

By consistency here do you mean the machine must never come to believe something false, or that the machine itself must behave in a manner consistent with its design/definition?

That the machine will not believe something false. I agree this works only because I can limit myself to correct machines.
The psychology and theology of the lying machine remain to be done, but they are not needed to derive physics from arithmetic.




I still have a conceptual difficulty trying to marry these mathematical notions of truth, provability, and consistency with a program/Machine that manifests them.

It *is* subtle; that is why we need to use the mathematics of self-reference. It is highly counter-intuitive. All errors in philosophy/theology come from confusing one self-referential mode with another, I would say. 




If a program can be said to "know" something then can we also say it is conscious of that thing?

1) That’s *not* the case for []p & p, unless you accept a notion of unconscious knowledge, like knowing that Perseverance and Ingenuity are on Mars while not currently thinking about it, so that you are not right now consciously aware of the fact---well, you are, but only because I have just reminded you of it :)

In a way, I might view these long-term memories as environmental signals that encroach upon one's mind state. A state which is otherwise not immediately aware of all the contents of this memory (like opening a sealed box to discover its contents).

OK.





2) But that *is* the case for []p & <>t & p. If the machine knows something in that sense, then the machine can be said to be conscious of p. 
Then being “simply” conscious becomes []t & <>t (& t). 

Note that “p” always refers to a partially computable arithmetical (or combinatorial) proposition. That’s the way of translating “Digital Mechanism” in the language of the machine.

To sum up, to get a conscious machine, you need a computer (aka universal number/machine) with some notion of belief, and knowledge/consciousness arise from the actuation of truth, which the machine cannot define (by the theorem of Tarski, and some variants by Montague, Thomason, and myself...). 

So then, it seems to me a program with a memory for storing propositions, which it can categorize as true or false, or to which it can otherwise ascribe some probability of being true or false, would have a notion of belief. But what counts as actuation of truth? Does it arise out of attempts to test/categorize those propositions/beliefs? (A toy version of what I mean is sketched below.)
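
A toy version of such a store (Python; the design is entirely hypothetical, and nothing in it touches the "actuation of truth" that the question asks about):

class BeliefStore:
    """A memory of propositions with credences: a crude notion of belief."""
    def __init__(self):
        self.credence = {}                     # proposition -> probability

    def set_belief(self, proposition, probability):
        self.credence[proposition] = probability

    def believes(self, proposition, threshold=0.5):
        # Believed if its credence exceeds the threshold; unknown -> 0.
        return self.credence.get(proposition, 0.0) > threshold

store = BeliefStore()
store.set_belief("2+2=4", 1.0)
store.set_belief("it will rain tomorrow", 0.3)
print(store.believes("2+2=4"))                  # True
print(store.believes("it will rain tomorrow"))  # False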
 

That theory can be said to be well tested a posteriori, because it implies the quantum reality, at least the one described by the Schroedinger equation or the Heisenberg matrices (or, even better, the Feynman integral), WITHOUT any collapse postulate.

Can it be said that Deep Blue is conscious of the state of the chess board it evaluates?

Deep Blue? I guess not. But for AlphaGo, or some of its descendants, it looks like there are circular neural pathways allowing the machine to learn its own behaviour, and to attach some identity in this way. So deep learning might converge on a conscious machine.

If I recall correctly, AlphaGo's network was entirely feed-forward, with 42 layers of processing.  AlphaZero might be different; it is definitely more sophisticated, in that it was given only the rules of the game and came to master several different games (Go, chess, and shogi). Are loops, or neurons with short-term memories, somehow necessary for consciousness? I guess the presence of loops (or external memories) is the difference between circuits and Turing machines, as the toy contrast below tries to show.
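
A toy contrast of the two regimes (Python; illustrative only): a feed-forward "circuit" performs a fixed number of operations on any input, while a loop plus a mutable memory cell permits unbounded computation, which is the Turing-machine regime.

def feed_forward(x):
    # Fixed depth, no state: every input takes the same number of steps.
    return (2 * x + 1) % 7

def looping_machine(x):
    # A loop and mutable state: the number of steps depends on the input,
    # and for this (Collatz) iteration is not even known to be finite in
    # general. Assumes x is a positive integer.
    state, steps = x, 0
    while state != 1:
        state = state // 2 if state % 2 == 0 else 3 * state + 1
        steps += 1
    return steps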
 
But that is not verified, and they are still just playing “simple games”. We don’t ask to build a theory of themselves. It even looks like they try to avoid this, and it is normal. Like nature with insects, we don’t want a terrible child, and try to make “mature machine” right at the start.

I know you have said you believe jumping spiders are conscious (or even self-conscious). In your opinion, are ants conscious? Are amoebae? We already have full neuronal simulations of worms (see openworm.org). Are these, then, already examples of conscious programs?
 


Is a Tesla car conscious of whether the traffic signal is showing red, yellow, or green?

I doubt this, but I have not studied them. I doubt it has full self-reference ability, like PA and ZF, or any human baby.

I am not sure about full self-reference, but they do build models of their environment that incorporate themselves, at least assuming I am not reading too much into this graphical display:

[image: Tesla's on-screen visualization, not reproduced here]

(Note that it builds a model of the other surrounding cars, displayed in gray, and of itself, in red.)
 
Or is a more particular class of software necessary for belief/consciousness? This is what I'm struggling to understand.  I greatly appreciate all the answers you have provided.

All you need is enough induction power. RA + induction on recursive formula is not enough, unless you add the exponentiation axiom. But RA + induction on recursive enumerable formula is enough. 


Thanks again. This is helpful. I feel closer to understanding although not fully there yet.

Jason 

Jason Resch

unread,
Feb 28, 2022, 5:10:01 PM2/28/22
to Everything List


On Tue, Apr 27, 2021 at 8:33 AM Telmo Menezes <te...@telmomenezes.net> wrote:


Am Mo, 26. Apr 2021, um 17:16, schrieb John Clark:
On Mon, Apr 26, 2021 at 10:45 AM Terren Suydam <terren...@gmail.com> wrote:

> It's impossible to refute solipsism

True, but it's equally impossible to refute the idea that everything including rocks is conscious. And if both a theory and its exact opposite can neither be proven nor disproven then neither speculation is of any value in trying to figure out how the world works.

When I was a little kid I would ask adults if rocks were conscious. They tried to train me to stop asking such questions, because they were worried about what other people would think. To this day, I never stopped asking these questions. I see three options here:

(1) They were correct to worry and I have a mental issue.

(2) I am really dumb and don't see something obvious.

(3) Beliefs surrounding consciousness are socially normative, and asking questions outside of such boundaries is a taboo.


Consider the case where a god-like superintelligence, for fun, decided to wire up everything experienced by a particular rock during its billion-year existence. All the light that fell on the rock's face, that super-being could see; all the accelerations it underwent, it could feel. During this rock's history it came to the surface in the 1800s, and then a house was built not far from where you grew up. One day you notice this rock and decide to kick it, and the super-being, who chose to experience everything this particular rock felt, feels the kick.

In a way, this god-like being has connected the rock, through nerves which are invisible to you (namely, its perfect knowledge of the history of this rock), to its brain. But these connections, though invisible, are no less real or concrete than the nerves that connect your hand to your brain. This super-being might exist at a level outside our universe (e.g. in the universe running the simulation of this one).

Ought we to conclude from this possibility that there is no way, even in principle, to detect which objects are capable of perceiving? That there is no way to know which objects happen to be imbued with consciousness, even for something that seems as inanimate and inert as a rock?

You asked great questions.

Jason

Bruce Kellett

unread,
Feb 28, 2022, 5:34:59 PM2/28/22
to Everything List
If you believe in magic, anything is possible, and no questions have definite answers.

Bruce

Brent Meeker

unread,
Feb 28, 2022, 6:22:59 PM2/28/22
to everyth...@googlegroups.com
I think that's a mistaken idea of consciousness.  To be conscious is to be conscious of something.  It must have a correspondence with an environment, and it must include an ability to act on that correspondence in some sense.  Otherwise it's just a recording machine.  This conception of consciousness admits of a continuum of degrees of consciousness.  In this sense a rock can be conscious, but its consciousness is very limited because its ability to act is very limited. 

Brent
