Are Philosophical Zombies possible?


Jason Resch

Jul 5, 2024, 1:41:28 PM
to The Important Questions, Everything List
I finished this section for my article on consciousness:


It is an important question, because if zombies are not possible, then consciousness is not optional. Rather, consciousness would be logically necessary, in any system having the right configuration.

(Whether that configuration is functional/organizational/causal/or physical is a separate question).

Jason

John Clark

Jul 5, 2024, 3:18:43 PM
to everyth...@googlegroups.com, The Important Questions
On Fri, Jul 5, 2024 at 1:41 PM Jason Resch <jason...@gmail.com> wrote:

I finished this section for my article on consciousness:


It is an important question, because if zombies are not possible, then consciousness is not optional. Rather, consciousness would be logically necessary, in any system having the right configuration.

Anybody who claims that philosophical zombies are possible needs to ask themselves one question. Natural selection cannot select for something it cannot see, and it can't directly see consciousness any better than we can, except in ourselves; so how did Evolution manage to produce at least one conscious being, and probably many billions of them? I think the answer is that although Evolution can't see consciousness it can certainly see intelligent activity, so consciousness must be an inevitable byproduct of intelligence. Or to put it another way, it's a brute fact that consciousness is the way data feels when it is being processed. After all, without exception, every iterated sequence of "why" or "how" questions either goes on forever or terminates in a brute fact. 

John K Clark    See what's on my new list at  Extropolis

Brent Meeker

Jul 6, 2024, 2:52:02 PM
to everyth...@googlegroups.com
You emphasize that a Zombie would assert that he had consciousness, but what about the converse? Suppose you met someone who simply denied that he had consciousness. When he stubs his toe, says "OUCH!", and hops around on one foot, he says: yes, that was my reaction, but I wasn't conscious of pain. Can you prove him wrong, or do you just DEFINE him as wrong?

Brent
--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to everything-li...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/everything-list/CA%2BBCJUjY6cGV8606u8Xf3_ELbBibF2Cs-dPv_bhuctitQsaUag%40mail.gmail.com.

Brent Meeker

Jul 6, 2024, 3:03:19 PM
to everyth...@googlegroups.com


On 7/5/2024 12:18 PM, John Clark wrote:
On Fri, Jul 5, 2024 at 1:41 PM Jason Resch <jason...@gmail.com> wrote:

I finished this section for my article on consciousness:


It is an important question, because if zombies are not possible, then consciousness is not optional. Rather, consciousness would be logically necessary, in any system having the right configuration.

Anybody who claims that philosophical zombies are possible needs to ask themselves one question. Natural selection cannot select for something it cannot see, and it can't directly see consciousness any better than we can, except in ourselves; so how did Evolution manage to produce at least one conscious being, and probably many billions of them? I think the answer is that although Evolution can't see consciousness it can certainly see intelligent activity, so consciousness must be an inevitable byproduct of intelligence.
I think you're jumping to conclusions that make consciousness seem mysterious, as though a laptop, simply because it processes data, must be feeling conscious. There are different levels and kinds of intelligence, and correspondingly different kinds and levels of consciousness, and evolution can "see" them. Simple animals like hydra and paramecia are conscious of chemical gradients and move accordingly, but I don't think they have a concept of their location in space. Planaria do: they can remember directions. So can bees. Humans and some mammals have the ability to imagine themselves in scenarios and to plan accordingly. That's where the inner narrative comes in, which is facilitated a lot by language. I think such foresight is a necessary component of intelligence, not a "byproduct".

Brent


Jason Resch

Jul 6, 2024, 3:19:09 PM
to everyth...@googlegroups.com
On Sat, Jul 6, 2024 at 2:52 PM Brent Meeker <meeke...@gmail.com> wrote:
You emphasize that a Zombie would assert that he had consciousness, but what about the converse? Suppose you met someone who simply denied that he had consciousness. When he stubs his toe, says "OUCH!", and hops around on one foot, he says: yes, that was my reaction, but I wasn't conscious of pain. Can you prove him wrong, or do you just DEFINE him as wrong?

As Chalmers writes, even the statement "Consciousness does not exist" is a third-order phenomenal judgment, of the kind that seems to imply the presence of consciousness in those who reach such conclusions. It seems to me that it is neither the assertion of having consciousness nor the denial of having it that proves its presence; rather, it is having a source of knowledge from which to draw such conclusions in the first place that should be taken as evidence for the presence of a mind.

As to the example of denying a particular perception like pain: there are people who have no sense of pain, and there is also pain dissociation, where the pain's intensity and locus are known but the experience has no noxiousness. I don't think such denials of pain would constitute evidence of having pain, in the way that denying one is conscious could be taken as evidence of being conscious (since you have to have some self-awareness to be in a position to deny which aspects of yourself you possess).

Jason

 


John Clark

Jul 7, 2024, 11:58:54 AM
to everyth...@googlegroups.com
On Sat, Jul 6, 2024 at 3:03 PM Brent Meeker <meeke...@gmail.com> wrote:

 I think such foresight is a necessary component of intelligence, not a "byproduct".

I agree: I can detect the existence of foresight in others, and so can natural selection, and that's why we have it. It aids in getting our genes transferred into the next generation. But I was talking about consciousness, not foresight, and regardless of how important we personally think consciousness is, from evolution's point of view it's utterly useless, and yet we have it, or at least I have it. Why? It must be because consciousness is the byproduct of something else that is not useless; there are no other possibilities. Incidentally, GPT has demonstrated foresight: when shown a picture of somebody holding a pair of scissors next to a string holding down a helium balloon and asked "What comes next?", it replies that the string is about to be cut by the scissors and then the balloon will float away.

 John K Clark    See what's on my new list at  Extropolis

John Clark

Jul 7, 2024, 12:14:18 PM
to everyth...@googlegroups.com
On Sat, Jul 6, 2024 at 2:52 PM Brent Meeker <meeke...@gmail.com> wrote:

You emphasize that a Zombie would assert that he had consciousness, but what about the converse?  Suppose you met someone who simply denied that he had consciousness.

Well, perhaps I am a philosophical zombie. As many have pointed out, the qualia I experience when looking at something red may be entirely different from the qualia you experience when looking at the same thing, but the same may also be true for consciousness. What I think of as consciousness might just be a weak, pale reflection of the enormously larger and grander consciousness that you experience every day, so compared to you I am a philosophical zombie. Or maybe the reverse is true. Neither of us will ever know.

John K Clark    See what's on my new list at  Extropolis


Jason Resch

Jul 7, 2024, 1:58:46 PM
to Everything List


On Sun, Jul 7, 2024, 11:58 AM John Clark <johnk...@gmail.com> wrote:
On Sat, Jul 6, 2024 at 3:03 PM Brent Meeker <meeke...@gmail.com> wrote:

 I think such foresight is a necessary component of intelligence, not a "byproduct".

I agree, I can detect the existence of foresight in others and so can natural selection, and that's why we have it.  It aids in getting our genes transferred into the next generation. But I was talking about consciousness not foresight, and regardless of how important we personally think consciousness is, from evolution's point of view it's utterly useless, and yet we have it, or at least I have it.

This is the position of epiphenomenalism: that consciousness has no effects. It is what makes zombies logically possible. But you don't seem to think zombies are logically possible, so then epiphenomenalism is false, and consciousness does have effects. As you said previously, if consciousness had no effects, there would be no reason for it to evolve in the first place.


Why? It must be because consciousness is the byproduct of something else that is not useless, there are no other possibilities. 

There is another possibility: consciousness is not useless.

Jason 




Stathis Papaioannou

Jul 7, 2024, 2:47:24 PM
to everyth...@googlegroups.com


Stathis Papaioannou


On Mon, 8 Jul 2024 at 03:58, Jason Resch <jason...@gmail.com> wrote:


On Sun, Jul 7, 2024, 11:58 AM John Clark <johnk...@gmail.com> wrote:
On Sat, Jul 6, 2024 at 3:03 PM Brent Meeker <meeke...@gmail.com> wrote:

 I think such foresight is a necessary component of intelligence, not a "byproduct".

I agree, I can detect the existence of foresight in others and so can natural selection, and that's why we have it.  It aids in getting our genes transferred into the next generation. But I was talking about consciousness not foresight, and regardless of how important we personally think consciousness is, from evolution's point of view it's utterly useless, and yet we have it, or at least I have it.

This is the position of epiphenomenalism: that consciousness has no effects. It is what makes zombies logically possible. But you don't seem to think zombies are logically possible, so then epiphenomenalism is false, and consciousness does have effects. As you said previously, if consciousness had no effects, there would be no reason for it to evolve in the first place.


Why? It must be because consciousness is the byproduct of something else that is not useless, there are no other possibilities. 

There is another possibility: consciousness is not useless.

Another possibility is that consciousness has no separate causal efficacy but is a necessary side-effect of the behaviour associated with it.


John Clark

Jul 7, 2024, 3:14:41 PM
to everyth...@googlegroups.com
On Sun, Jul 7, 2024 at 1:58 PM Jason Resch <jason...@gmail.com> wrote:

>>>  I think such foresight is a necessary component of intelligence, not a "byproduct".

>>I agree, I can detect the existence of foresight in others and so can natural selection, and that's why we have it.  It aids in getting our genes transferred into the next generation. But I was talking about consciousness not foresight, and regardless of how important we personally think consciousness is, from evolution's point of view it's utterly useless, and yet we have it, or at least I have it.

you don't seem to think zombies are logically possible,

Zombies are possible; it's philosophical zombies, a.k.a. smart zombies, that are impossible, because it's a brute fact that consciousness is the way data behaves when it is being processed intelligently, or at least that's what I think. Unless you believe that all iterated sequences of "why" or "how" questions go on forever, you must believe that brute facts exist; and I can't think of a better candidate for one than consciousness.

so then epiphenomenalism is false

According to the Internet Encyclopedia of Philosophy "Epiphenomenalism is a position in the philosophy of mind according to which mental states or events are caused by physical states or events in the brain but do not themselves cause anything". If that is the definition then I believe in Epiphenomenalism.
 
As you said previously, if consciousness had no effects, there would be no reason for it to evolve in the first place.

What I said in my last post was "It must be because consciousness is the byproduct of something else that is not useless, there are no other possibilities".

There is another possibility: consciousness is not useless.

If consciousness is not useless from Evolution's point of view then it must produce "something" that natural selection can see, and if natural selection can see that certain "something" then so can you or I. So the Turing Test is not just a good test for intelligence; it's also a good test for consciousness. The only trouble is, what is that "something"? Presumably whatever it is, that "something" must be related to mind in some way, but if it is not intelligent activity then what the hell is it?

Brent Meeker

Jul 7, 2024, 9:28:35 PM
to everyth...@googlegroups.com


On 7/7/2024 8:58 AM, John Clark wrote:
On Sat, Jul 6, 2024 at 3:03 PM Brent Meeker <meeke...@gmail.com> wrote:

 I think such foresight is a necessary component of intelligence, not a "byproduct".

I agree, I can detect the existence of foresight in others and so can natural selection, and that's why we have it.  It aids in getting our genes transferred into the next generation. But I was talking about consciousness not foresight, and regardless of how important we personally think consciousness is, from evolution's point of view it's utterly useless, and yet we have it, or at least I have it. Why? It must be because consciousness is the byproduct of something else
You miss my point.  I thought it was obvious that foresight requires consciousness.  It requires the ability to think in terms of future scenarios in which you are an actor, i.e. to be conscious of yourself within an imaginary play.

Brent


Brent Meeker

Jul 7, 2024, 9:43:15 PM
to everyth...@googlegroups.com
Foresight.

Brent


Presumably whatever it is, that "something" must be related to mind in some way, but if it is not intelligent activity then what the hell is it?


John Clark

Jul 8, 2024, 10:29:08 AM
to everyth...@googlegroups.com
On Sun, Jul 7, 2024 at 9:28 PM Brent Meeker <meeke...@gmail.com> wrote:

>I thought it was obvious that foresight requires consciousness. It requires the ability to think in terms of future scenarios

The key word in the above is "think". Foresight means using logic to predict, given current starting conditions, what the future will likely be, and determining how changes in the initial conditions will likely affect the future. And to do any of that requires intelligence. Both large language models and picture-to-video AI programs have demonstrated that they have foresight; if you ask them what will happen if you cut the string holding down a helium balloon, they will tell you it will float away, but if you add that the instant the string is cut an Olympic high jumper will make a grab for the dangling string, they will tell you what will likely happen then too. So yes, foresight does imply consciousness, because foresight demands intelligence and consciousness is the inevitable byproduct of intelligence.
 
in which you are an actor

Obviously any intelligence will have to take its own actions into account to determine what the likely future will be. After an LLM gives you an answer to a question, I'll bet an AI will be able to make a pretty good guess, based on that answer, what your next question to it will be.

John K Clark    See what's on my new list at  Extropolis

Jason Resch

Jul 8, 2024, 2:12:47 PM
to everyth...@googlegroups.com
On Mon, Jul 8, 2024 at 10:29 AM John Clark <johnk...@gmail.com> wrote:

On Sun, Jul 7, 2024 at 9:28 PM Brent Meeker <meeke...@gmail.com> wrote:

>I thought it was obvious that foresight requires consciousness. It requires the ability to think in terms of future scenarios

The key word in the above is "think". Foresight means using logic to predict, given current starting conditions, what the future will likely be, and determining how changes in the initial conditions will likely affect the future. And to do any of that requires intelligence. Both large language models and picture-to-video AI programs have demonstrated that they have foresight; if you ask them what will happen if you cut the string holding down a helium balloon, they will tell you it will float away, but if you add that the instant the string is cut an Olympic high jumper will make a grab for the dangling string, they will tell you what will likely happen then too. So yes, foresight does imply consciousness, because foresight demands intelligence and consciousness is the inevitable byproduct of intelligence.

Consciousness is a prerequisite of intelligence. One can be conscious without being intelligent, but one cannot be intelligent without being conscious.
Someone with locked-in syndrome can do nothing, and can exhibit no intelligent behavior. They have no measurable intelligence. Yet they are conscious. You need to have perceptions (of the environment, or the current situation) in order to act intelligently. It is in having perceptions that consciousness appears. So consciousness is not a byproduct of, but an integral and necessary requirement for intelligent action.

Jason
 
 

Jason Resch

Jul 8, 2024, 2:23:44 PM
to everyth...@googlegroups.com
On Sun, Jul 7, 2024 at 3:14 PM John Clark <johnk...@gmail.com> wrote:
On Sun, Jul 7, 2024 at 1:58 PM Jason Resch <jason...@gmail.com> wrote:

>>>  I think such foresight is a necessary component of intelligence, not a "byproduct".

>>I agree, I can detect the existence of foresight in others and so can natural selection, and that's why we have it.  It aids in getting our genes transferred into the next generation. But I was talking about consciousness not foresight, and regardless of how important we personally think consciousness is, from evolution's point of view it's utterly useless, and yet we have it, or at least I have it.

you don't seem to think zombies are logically possible,

Zombies are possible; it's philosophical zombies, a.k.a. smart zombies, that are impossible, because it's a brute fact that consciousness is the way data behaves when it is being processed intelligently, or at least that's what I think. Unless you believe that all iterated sequences of "why" or "how" questions go on forever, you must believe that brute facts exist; and I can't think of a better candidate for one than consciousness.

so then epiphenomenalism is false

According to the Internet Encyclopedia of Philosophy "Epiphenomenalism is a position in the philosophy of mind according to which mental states or events are caused by physical states or events in the brain but do not themselves cause anything". If that is the definition then I believe in Epiphenomenalism.

If you believe mental states do not cause anything, then you believe philosophical zombies are logically possible (since we could remove consciousness without altering behavior).

I view mental states as high-level states operating in their own regime of causality (much like a Java program). A Java program can run on any platform, regardless of that platform's particular physical nature. It has, in a sense, isolated itself from the causality of electrons and semiconductors, and operates in its own realm of causality: that of if-statements and for-loops. Consider this program, for example:

[attached image: twin-prime-program2.png]
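The attached screenshot isn't reproduced in this archive. As a rough illustration of the kind of program being described, here is a minimal sketch (in Python rather than Java, and assuming the pictured program searched for a pair of twin primes; the actual attached code is unknown):

```python
def is_prime(n):
    """Trial division; adequate for an illustrative search."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def first_twin_prime_above(limit):
    """Return the first pair (p, p+2) of twin primes with p > limit.

    The loop halts exactly when the logical condition "p and p + 2 are
    both prime" is satisfied: termination is governed by a fact of
    number theory, not by the hardware the program happens to run on.
    """
    p = limit + 1
    while True:
        if is_prime(p) and is_prime(p + 2):
            return (p, p + 2)
        p += 1

print(first_twin_prime_above(100))  # (101, 103)
```

Notably, whether this loop halts for every input is equivalent to the (still open) twin prime conjecture: the question of when, and whether, the program terminates lives entirely at the level of number theory, which is the point about high-level causation being made here.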

What causes the program to terminate? Is it the inputs and the logical relation of primality, or is it the electrons flowing through the CPU? I would argue that the higher-level causality, the logical relation of the inputs to the program logic, is just as important. It determines things like when the program terminates. At this level, the microcircuitry is relevant only insofar as it supports the higher-level causal structures; the program doesn't need to be aware of, or consider, those low-level things. It operates the same regardless.

I view consciousness as like that high-level control structure. It operates within a causal realm where ideas and thoughts have causal influence and power, and can reach down to the lower level to do things like trigger nerve impulses.


Here is a quote from Roger Sperry, who eloquently describes what I am speaking of:


"I am going to align myself in a counterstand, along with that approximately 0.1 per cent mentalist minority, in support of a hypothetical brain model in which consciousness and mental forces generally are given their due representation as important features in the chain of control. These appear as active operational forces and dynamic properties that interact with and upon the physiological machinery. Any model or description that leaves out conscious forces, according to this view, is bound to be pretty sadly incomplete and unsatisfactory. The conscious mind in this scheme, far from being put aside and dispensed with as an "inconsequential byproduct," "epiphenomenon," or "inner aspect," as is the customary treatment these days, gets located, instead, front and center, directly in the midst of the causal interplay of cerebral mechanisms.

Mental forces in this particular scheme are put in the driver's seat, as it were. They give the orders and they push and haul around the physiology and physicochemical processes as much as or more than the latter control them. This is a scheme that puts mind back in its old post, over matter, in a sense-not under, outside, or beside it. It's a scheme that idealizes ideas and ideals over physico-chemical interactions, nerve impulse traffic-or DNA. It's a brain model in which conscious, mental, psychic forces are recognized to be the crowning achievement of some five hundred million years or more of evolution.

[...] The basic reasoning is simple: First, we contend that conscious or mental phenomena are dynamic, emergent, pattern (or configurational) properties of the living brain in action -- a point accepted by many, including some of the more tough-minded brain researchers. Second, the argument goes a critical step further, and insists that these emergent pattern properties in the brain have causal control potency -- just as they do elsewhere in the universe. And there we have the answer to the age-old enigma of consciousness.

To put it very simply, it becomes a question largely of who pushes whom around in the population of causal forces that occupy the cranium. There exists within the human cranium a whole world of diverse causal forces; what is more, there are forces within forces within forces, as in no other cubic half-foot of universe that we know.

[...] Along with their internal atomic and subnuclear parts, the brain molecules are obliged to submit to a course of activity in time and space that is determined very largely by the overall dynamic and spatial properties of the whole brain cell as an entity. Even the brain cells, however, with their long fibers and impulse conducting elements, do not have very much to say either about when or in what time pattern, for example, they are going to fire their messages. The firing orders come from a higher command. [...]

In short, if one climbs upward through the chain of command within the brain, one finds at the very top those overall organizational forces and dynamic properties of the large patterns of cerebral excitation that constitute the mental or psychic phenomena. [...]

Near the apex of this compound command system in the brain we find ideas. In the brain model proposed here, the causal potency of an idea, or an ideal, becomes just as real as that of a molecule, a cell, or a nerve impulse. Ideas cause ideas and help evolve new ideas. They interact with each other and with other mental forces in the same brain, in neighboring brains, and in distant, foreign brains. And they also interact with real consequence upon the external surroundings to produce in toto an explosive advance in evolution on this globe far beyond anything known before, including the emergence of the living cell."




Jason


 
 
As you said previously, if consciousness had no effects, there would be no reason for it to evolve in the first place.

What I said in my last post was "It must be because consciousness is the byproduct of something else that is not useless, there are no other possibilities".

There is another possibility: consciousness is not useless.

If consciousness is not useless from Evolution's point of view then it must produce "something" that natural selection can see, and if natural selection can see that certain "something" then so can you or I. So the Turing Test is not just a good test for intelligence, it's also a good test for consciousness. The only trouble is, what is that "something"? Presumably it must be related to mind in some way, but if it is not intelligent activity then what the hell is it?

John K Clark    See what's on my new list at  Extropolis

 

--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to everything-li...@googlegroups.com.

Cosmin Visan

unread,
Jul 8, 2024, 2:40:29 PMJul 8
to Everything List
Philosophical zombies are not possible, for the trivial reason that body doesn't even exist. "Body" is just an idea in consciousness. See my papers, like "How Self-Reference Builds the World": https://philpeople.org/profiles/cosmin-visan

John Clark

unread,
Jul 8, 2024, 4:01:37 PMJul 8
to everyth...@googlegroups.com
On Mon, Jul 8, 2024 at 2:23 PM Jason Resch <jason...@gmail.com> wrote:

If you believe mental states do not cause anything, then you believe philosophical zombies are logically possible (since we could remove consciousness without altering behavior).

Not if consciousness is the inevitable byproduct of intelligence, and I'm almost certain that it is.

I view mental states as high-level states operating in their own regime of causality (much like a Java computer program).

I have no problem with that, actually it's very similar to my view.  
 
The java computer program can run on any platform, regardless of the particular physical nature of it.

Right. You could even say that "computer program" is not a noun, it is an adjective, it is the way a computer will behave when the machine's  logical states are organized in a certain way.  And "I" is the way atoms behave when they are organized in a Johnkclarkian way, and "you" is the way atoms behave when they are organized in a Jasonreschian way.

I view consciousness as like that high-level control structure. It operates within a causal realm where ideas and thoughts have causal influence and power, and can reach down to the lower level to do things like trigger nerve impulses.

Consciousness is a high-level description of brain states that can be extremely useful, but that doesn't mean that lower level and much more finely grained description of brain states involving nerve impulses, or even more finely grained descriptions involving electrons and quarks are wrong, it's just that such level of detail is unnecessary and impractical for some purposes.   
 
John K Clark    See what's on my new list at  Extropolis

John Clark

unread,
Jul 8, 2024, 4:04:54 PMJul 8
to everyth...@googlegroups.com
On Mon, Jul 8, 2024 at 2:12 PM Jason Resch <jason...@gmail.com> wrote:

>Consciousness is a prerequisite of intelligence.

I think you've got that backwards, intelligence is a prerequisite of consciousness. And the possibility of intelligent ACTIONS is a  prerequisite for Darwinian natural selection to have evolved it.
 
One can be conscious without being intelligent,

Sure. The Turing Test is not perfect, it has a lot of flaws, but it's all we've got. If something passes the Turing Test then it's intelligent and conscious, but if it fails the test then it may or may not be intelligent and or conscious. 

 You need to have perceptions (of the environment, or the current situation) in order to act intelligently. 

For intelligence to have evolved, and we know for a fact that it has, you not only need to be able to perceive the environment, you also need to be able to manipulate it. That's why zebras didn't evolve great intelligence: they have no hands, so a brilliant zebra wouldn't have a great advantage over a dumb zebra; in fact he'd probably be at a disadvantage, because a big brain is a great energy hog.
  John K Clark    See what's on my new list at  Extropolis


Jason Resch

unread,
Jul 8, 2024, 4:20:47 PMJul 8
to Everything List


On Mon, Jul 8, 2024, 4:01 PM John Clark <johnk...@gmail.com> wrote:
On Mon, Jul 8, 2024 at 2:23 PM Jason Resch <jason...@gmail.com> wrote:

If you believe mental states do not cause anything, then you believe philosophical zombies are logically possible (since we could remove consciousness without altering behavior).

Not if consciousness is the inevitable byproduct of intelligence, and I'm almost certain that it is.

If consciousness is necessary for intelligence, then it's not a byproduct. If, on the other hand, consciousness is just a useless byproduct, then it could (logically if not nomologically) be eliminated without affecting intelligence.

You seem to want it to be both necessary but also be something that makes no difference to anything (which makes it unnecessary).

I would be most curious to hear your thoughts  regarding the section of my article on "Conscious behaviors" -- that is, behaviors which (seem to) require consciousness in order to do them.


I view mental states as high-level states operating in their own regime of causality (much like a Java computer program).

I have no problem with that, actually it's very similar to my view.

That's good to hear.

 
The java computer program can run on any platform, regardless of the particular physical nature of it.

Right. You could even say that "computer program" is not a noun, it is an adjective, it is the way a computer will behave when the machine's  logical states are organized in a certain way.  And "I" is the way atoms behave when they are organized in a Johnkclarkian way, and "you" is the way atoms behave when they are organized in a Jasonreschian way.

I'm not opposed to that framing.

I view consciousness as like that high-level control structure. It operates within a causal realm where ideas and thoughts have causal influence and power, and can reach down to the lower level to do things like trigger nerve impulses.

Consciousness is a high-level description of brain states that can be extremely useful, but that doesn't mean that lower level and much more finely grained description of brain states involving nerve impulses, or even more finely grained descriptions involving electrons and quarks are wrong, it's just that such level of detail is unnecessary and impractical for some purposes.   

I would even say that, at a certain level of abstraction, they become irrelevant. This is the result of what I call "a Turing firewall": software has no ability to know its underlying hardware implementation. It is an inviolable separation between layers of abstraction, which makes the lower levels invisible to the layers above. So the neurons and molecular forces aren't in the driver's seat for what goes on in the brain; that is the domain of higher-level structures and forces. We cannot completely ignore the lower levels, since they provide the substrate upon which the higher levels are built, but I think it is an abuse of reductionism that leads people to say consciousness is an epiphenomenon and doesn't do anything. After all, no one would apply reductionism to explain why, when a glider in the Game of Life hits a block and causes it to self-destruct, the outcome is due to quantum mechanics in our universe, rather than a consequence of the very different rules of the Game of Life as they operate in the Game of Life universe.
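To make the glider point concrete, here is a minimal Python sketch of Conway's rules on an unbounded grid (illustrative code, not part of the original argument). It verifies a fact that depends only on Life's rules, not on whatever hardware runs them: a glider reappears translated one cell diagonally every four generations.

```python
from collections import Counter
from itertools import product

def step(live):
    """Apply one generation of Conway's Game of Life to a set of live cells."""
    neighbor_counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live
        for dr, dc in product((-1, 0, 1), repeat=2)
        if (dr, dc) != (0, 0)
    )
    # A cell is alive next generation with exactly 3 live neighbors,
    # or with exactly 2 if it is already alive.
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)

# After 4 generations the glider is the same shape shifted down-right by one:
assert g == {(r + 1, c + 1) for (r, c) in glider}
```

The same stepper, pointed at a glider-block collision, would show the outcome following from Life's rules alone; nothing in the code depends on the physics of the machine executing it.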

Jason 


 
John K Clark    See what's on my new list at  Extropolis


Jason Resch

unread,
Jul 8, 2024, 4:35:12 PMJul 8
to Everything List


On Mon, Jul 8, 2024, 4:04 PM John Clark <johnk...@gmail.com> wrote:

On Mon, Jul 8, 2024 at 2:12 PM Jason Resch <jason...@gmail.com> wrote:

>Consciousness is a prerequisite of intelligence.

I think you've got that backwards, intelligence is a prerequisite of consciousness. And the possibility of intelligent ACTIONS is a  prerequisite for Darwinian natural selection to have evolved it.

I disagree, but will explain below.

 
One can be conscious without being intelligent,

Sure.

I define intelligence by something capable of intelligent action.

Intelligent action requires non-random choice: choice informed by information from the environment.

Having information about the environment (i.e. perceptions) is consciousness. You cannot have perceptions without there being some process or thing to perceive them.

Therefore perception (i.e. consciousness) is a requirement and precondition of being able to perform intelligent actions.
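As a minimal sketch of this point (the names and scenario are illustrative, not from the thread): an action chosen without any perception can only be random, while an intelligently selected action must first be informed by information about the environment.

```python
import random

def act_without_perception():
    # No information about the environment: the choice can only be random.
    return random.choice(["flee", "approach", "ignore"])

def act_with_perception(environment):
    # The perception (information about the environment) informs the choice.
    predator_nearby = environment["predator_nearby"]
    return "flee" if predator_nearby else "ignore"

assert act_with_perception({"predator_nearby": True}) == "flee"
assert act_with_perception({"predator_nearby": False}) == "ignore"
```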

Jason 

The Turing Test is not perfect, it has a lot of flaws, but it's all we've got. If something passes the Turing Test then it's intelligent and conscious, but if it fails the test then it may or may not be intelligent and or conscious. 

 You need to have perceptions (of the environment, or the current situation) in order to act intelligently. 

For intelligence to have evolved, and we know for a fact that it has, you not only need to be able to perceive the environment, you also need to be able to manipulate it. That's why zebras didn't evolve great intelligence: they have no hands, so a brilliant zebra wouldn't have a great advantage over a dumb zebra; in fact he'd probably be at a disadvantage, because a big brain is a great energy hog.
  John K Clark    See what's on my new list at  Extropolis



Cosmin Visan

unread,
Jul 8, 2024, 5:17:31 PMJul 8
to Everything List
Brain doesn't exist. "Brain" is just an idea in consciousness. See my papers, like "How Self-Reference Builds the World": https://philpeople.org/profiles/cosmin-visan

Jason Resch

unread,
Jul 8, 2024, 5:47:28 PMJul 8
to Everything List


On Mon, Jul 8, 2024, 5:17 PM 'Cosmin Visan' via Everything List <everyth...@googlegroups.com> wrote:
Brain doesn't exist.

Then it exists as an object in consciousness, which is as much as "exist" could mean under idealism. Rather than say things don't exist, I think it would be better to redefine what is meant by existence.


"Brain" is just an idea in consciousness.

Sure, and all objects exist in the mind of God. So "exist" goes back to meaning what it has always meant, as Markus Mueller said (roughly): "A exists for B, when changing the state of A can change the state of B, and vice versa, under certain auxiliary conditions."


See my papers, like "How Self-Reference Builds the World": https://philpeople.org/profiles/cosmin-visan


I have, and replied with comments and questions. You, however, dismissed them as me not having read your paper.

Have you seen my paper on how computational observers build the world? It reaches a similar conclusion to yours:


Jason 


Cosmin Visan

unread,
Jul 8, 2024, 6:38:35 PMJul 8
to Everything List
So based on your definition, Santa Claus exists.

Jason Resch

unread,
Jul 9, 2024, 12:31:46 AMJul 9
to Everything List


On Mon, Jul 8, 2024, 6:38 PM 'Cosmin Visan' via Everything List <everyth...@googlegroups.com> wrote:
So based on your definition, Santa Claus exists.

I believe everything possible exists.

That is the idea this mail list was created to discuss, after all. (That is why it is called the "everything list")

Jason 


Cosmin Visan

unread,
Jul 9, 2024, 4:05:07 AMJul 9
to Everything List
So, where is Santa Claus? Also, does he bring presents to all the children in the world in 1 night? How does he do that?

Stathis Papaioannou

unread,
Jul 9, 2024, 4:33:33 AMJul 9
to everyth...@googlegroups.com
On Tue, 9 Jul 2024 at 04:23, Jason Resch <jason...@gmail.com> wrote:


On Sun, Jul 7, 2024 at 3:14 PM John Clark <johnk...@gmail.com> wrote:
On Sun, Jul 7, 2024 at 1:58 PM Jason Resch <jason...@gmail.com> wrote:

>>>  I think such foresight is a necessary component of intelligence, not a "byproduct".

>>I agree, I can detect the existence of foresight in others and so can natural selection, and that's why we have it.  It aids in getting our genes transferred into the next generation. But I was talking about consciousness not foresight, and regardless of how important we personally think consciousness is, from evolution's point of view it's utterly useless, and yet we have it, or at least I have it.

you don't seem to think zombies are logically possible,

Zombies are possible, it's philosophical zombies, a.k.a. smart zombies, that are impossible because it's a brute fact that consciousness is the way data behaves when it is being processed intelligently, or at least that's what I think. Unless you believe that all iterated sequences of "why" or "how" questions go on forever then you must believe that brute facts exist; and I can't think of a better candidate for one than consciousness.

so then epiphenomenalism is false

According to the Internet Encyclopedia of Philosophy "Epiphenomenalism is a position in the philosophy of mind according to which mental states or events are caused by physical states or events in the brain but do not themselves cause anything". If that is the definition then I believe in Epiphenomenalism.

If you believe mental states do not cause anything, then you believe philosophical zombies are logically possible (since we could remove consciousness without altering behavior).
 
Mental states could be necessarily tied to physical states without having any separate causal efficacy, and zombies would not be logically possible. Software is necessarily tied to hardware activity: if a computer runs a particular program, it is not optional that the program is implemented. However, the software does not itself have causal efficacy, causing current to flow in wires and semiconductors and so on: there is always a sufficient explanation for such activity in purely physical terms.

Cosmin Visan

unread,
Jul 9, 2024, 7:03:43 AM (14 days ago) Jul 9
to Everything List
Physical doesn't exist. "Physical" is just an idea in consciousness.

John Clark

unread,
Jul 9, 2024, 7:48:49 AM (14 days ago) Jul 9
to everyth...@googlegroups.com
On Mon, Jul 8, 2024 at 4:20 PM Jason Resch <jason...@gmail.com> wrote:

If consciousness is necessary for intelligence [...]
 
Consciousness is the inevitable product of intelligence, it is not the cause of intelligence. And as I cannot emphasize enough, natural selection can't select for something it can't see, and it can't see consciousness, but natural selection CAN see intelligent actions. And you know for a fact that natural selection has managed to produce at least one conscious being and probably many billions of them.
Don't you understand how those two facts are telling you something that is philosophically important?
 
> If on the other hand, consciousness is just a useless byproduct, then it could (logically if not nomologically) be eliminated without affecting intelligence.

That would not be possible if it's a brute fact that consciousness is the way data feels when it is being processed.  

John K Clark

Jason Resch

unread,
Jul 9, 2024, 7:50:10 AM (14 days ago) Jul 9
to Everything List


On Tue, Jul 9, 2024, 4:05 AM 'Cosmin Visan' via Everything List <everyth...@googlegroups.com> wrote:
So, where is Santa Claus?

If he's possible in this universe he exists very far away. If he's not possible in this universe but possible in other universes then he exists in some subset of those universes where he is possible. If he's not logically possible he doesn't exist anywhere.


Also, does he bring presents to all the children in the world in 1 night? How does he do that?

He sprinkles fairy dust all over the planet (nanobot swarms) which travel down chimneys to self-assemble presents from ambient matter, after they scan the brains of sleeping children to see if they are naughty or nice and what present they hoped for.

Jason 


Jason Resch

unread,
Jul 9, 2024, 7:54:04 AM (14 days ago) Jul 9
to Everything List


On Tue, Jul 9, 2024, 7:48 AM John Clark <johnk...@gmail.com> wrote:
On Mon, Jul 8, 2024 at 4:20 PM Jason Resch <jason...@gmail.com> wrote:

If consciousness is necessary for intelligence [...]
 
Consciousness is the inevitable product of intelligence, it is not the cause of intelligence.


I didn't say it was the cause, I said it is a prerequisite. You conveniently (for you but not for me) ignored and deleted my explanation in your reply.

Jason 

And as I cannot emphasize enough, natural selection can't select for something it can't see, and it can't see consciousness, but natural selection CAN see intelligent actions. And you know for a fact that natural selection has managed to produce at least one conscious being and probably many billions of them.
Don't you understand how those two facts are telling you something that is philosophically important?
 
> If on the other hand, consciousness is just a useless byproduct, then it could (logically if not nomologically) be eliminated without affecting intelligence.

That would not be possible if it's a brute fact that consciousness is the way data feels when it is being processed.  

John K Clark


Jason Resch

unread,
Jul 9, 2024, 8:15:03 AM (14 days ago) Jul 9
to Everything List


On Tue, Jul 9, 2024, 4:33 AM Stathis Papaioannou <stat...@gmail.com> wrote:


On Tue, 9 Jul 2024 at 04:23, Jason Resch <jason...@gmail.com> wrote:


On Sun, Jul 7, 2024 at 3:14 PM John Clark <johnk...@gmail.com> wrote:
On Sun, Jul 7, 2024 at 1:58 PM Jason Resch <jason...@gmail.com> wrote:

>>>  I think such foresight is a necessary component of intelligence, not a "byproduct".

>>I agree, I can detect the existence of foresight in others and so can natural selection, and that's why we have it.  It aids in getting our genes transferred into the next generation. But I was talking about consciousness not foresight, and regardless of how important we personally think consciousness is, from evolution's point of view it's utterly useless, and yet we have it, or at least I have it.

you don't seem to think zombies are logically possible,

Zombies are possible, it's philosophical zombies, a.k.a. smart zombies, that are impossible because it's a brute fact that consciousness is the way data behaves when it is being processed intelligently, or at least that's what I think. Unless you believe that all iterated sequences of "why" or "how" questions go on forever then you must believe that brute facts exist; and I can't think of a better candidate for one than consciousness.

so then epiphenomenalism is false

According to the Internet Encyclopedia of Philosophy "Epiphenomenalism is a position in the philosophy of mind according to which mental states or events are caused by physical states or events in the brain but do not themselves cause anything". If that is the definition then I believe in Epiphenomenalism.

If you believe mental states do not cause anything, then you believe philosophical zombies are logically possible (since we could remove consciousness without altering behavior).
 
Mental states could be necessarily tied to physical states without having any separate causal efficacy, and zombies would not be logically possible. Software is necessarily tied to hardware activity: if a computer runs a particular program, it is not optional that the program is implemented. However, the software does not itself have causal efficacy, causing current to flow in wires and semiconductors and so on: there is always a sufficient explanation for such activity in purely physical terms.

I don't disagree that there is sufficient explanation in all the particle movements all following physical laws.

But then consider the question, how do we decide what level is in control? You make the case that we should consider the quantum field level in control because everything is ultimately reducible to it.

But I don't think that's the best metric for deciding whether it's in control or not. Do the molecules in the brain tell neurons what to do, or do neurons tell molecules what to do (e.g. when they fire)? Or is it some mutually conditioned relationship?

Do neurons fire on their own and tell brains what to do, or do neurons only fire when other neurons of the whole brain stimulate them appropriately so they have to fire? Or is it again, another case of mutualism?

When two people are discussing ideas, are the ideas determining how each brain thinks and responds, or are the brains determining the ideas by virtue of generating the words through which they are expressed?

Though in each of these cases we can always drop a layer and explain all the events at that layer, that is not (in my view) enough of a reason to argue that the events at that layer are "in charge." Control structures, such as whole brain regions or complex computer programs, can involve and be influenced by the actions of billions of separate events and separate parts, and as such they transcend the behaviors of any single physical particle or physical law.

Consider: whether or not a program halts might only be determinable by some rules and proof in a mathematical system, and in this case no physical law will reveal the answer to that physical system's (the computer's) behavior. So if higher level laws are required in the explanation, does it still make sense to appeal to the lower level (physical) laws as providing the explanation?
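As an illustration (a standard example, not one drawn from this thread): whether the loop below halts for every positive starting value is the open Collatz conjecture, a question of mathematics rather than of any physical law governing the computer that runs it.

```python
def collatz_steps(n):
    """Iterate the Collatz map (3n+1 if odd, n/2 if even) until reaching 1.

    Whether this loop terminates for *every* positive integer n is an open
    mathematical question (the Collatz conjecture); no physical description
    of the machine running it settles that.
    """
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

assert collatz_steps(6) == 8     # 6 -> 3 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1
assert collatz_steps(27) == 111  # a famously long trajectory
```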

Given the generality of computers, they can also simulate any imaginable set of physical laws. In such simulations, again I think appealing to our physical laws as explaining what happens in these simulations is a mistake, as the simulation is organized in a manner to make our physical laws irrelevant to the simulation. So while you could explain what happens in the simulation in terms of the physics of the computer running it, it adds no explanatory power: it all cancels out leaving you with a model of the simulated physics.

Jason 




Jason Resch

unread,
Jul 9, 2024, 8:18:02 AM (14 days ago) Jul 9
to Everything List


On Tue, Jul 9, 2024, 7:03 AM 'Cosmin Visan' via Everything List <everyth...@googlegroups.com> wrote:
Physical doesn't exist. "Physical" is just an idea in consciousness.

Then so is this e-mail. Does that mean I should ignore it? Is it of no relevance?

Or is it part of a vast (apparent) reality we are each trying to navigate? What's wrong with calling this reality we are each trying to navigate (where this email exists) physical?

Do you see this reality as in any way shared?

Jason 


John Clark

unread,
Jul 9, 2024, 8:18:08 AM (14 days ago) Jul 9
to everyth...@googlegroups.com
On Tue, Jul 9, 2024 at 7:54 AM Jason Resch <jason...@gmail.com> wrote:

>>Consciousness is the inevitable product of intelligence, it is not the cause of intelligence.


I didn't say it was the cause, I said it is a prerequisite.

My dictionary says the definition of "prerequisite"  is  "a thing that is required as a prior condition for something else to happen or exist". And it says the definition of "cause" is "a person or thing that gives rise to an action, phenomenon, or condition". So cause and prerequisite are synonyms.
 
You conveniently (for you but not for me) ignored and deleted my explanation in your reply.

Somehow I missed that "detailed explanation" you refer to.  

John K Clark



 

Jason Resch

unread,
Jul 9, 2024, 8:31:48 AM (14 days ago) Jul 9
to Everything List


On Tue, Jul 9, 2024, 8:18 AM John Clark <johnk...@gmail.com> wrote:


On Tue, Jul 9, 2024 at 7:54 AM Jason Resch <jason...@gmail.com> wrote:

>>Consciousness is the inevitable product of intelligence, it is not the cause of intelligence.


I didn't say it was the cause, I said it is a prerequisite.

My dictionary says the definition of "prerequisite"  is  "a thing that is required as a prior condition for something else to happen or exist". And it says the definition of "cause" is "a person or thing that gives rise to an action, phenomenon, or condition". So cause and prerequisite are synonyms.

There's a subtle distinction.

Muscles and bones are prerequisites for limbs, but muscles and bones do not cause limbs.

Lemons are a prerequisite for lemonade, but do not cause lemonade.

Intelligence is what you get when you combine perception with action, so that actions can be selected in a manner guided by perceptions.

Perception (i.e. consciousness) and action are prerequisites for intelligence. But perception alone does not cause and will not provide intelligence.

 
You conveniently (for you but not for me) ignored and deleted my explanation in your reply.

Somehow I missed that "detailed explanation" you refer to.  


I copy it here:

Jason Resch

unread,
Jul 9, 2024, 9:47:40 AM (14 days ago) Jul 9
to Everything List
On Tue, Jul 9, 2024 at 8:17 AM Jason Resch <jason...@gmail.com> wrote:


On Tue, Jul 9, 2024, 7:03 AM 'Cosmin Visan' via Everything List <everyth...@googlegroups.com> wrote:
Physical doesn't exist. "Physical" is just an idea in consciousness.


Do you see this reality as in any way shared?

If you don't then why are you arguing with a figment of your imagination?
If you do, then what name should we give to this shared reality?

Jason 

Stathis Papaioannou

unread,
Jul 9, 2024, 10:16:30 AM (14 days ago) Jul 9
to everyth...@googlegroups.com


Stathis Papaioannou


On Tue, 9 Jul 2024 at 22:15, Jason Resch <jason...@gmail.com> wrote:


On Tue, Jul 9, 2024, 4:33 AM Stathis Papaioannou <stat...@gmail.com> wrote:


On Tue, 9 Jul 2024 at 04:23, Jason Resch <jason...@gmail.com> wrote:


On Sun, Jul 7, 2024 at 3:14 PM John Clark <johnk...@gmail.com> wrote:
On Sun, Jul 7, 2024 at 1:58 PM Jason Resch <jason...@gmail.com> wrote:

>>>  I think such foresight is a necessary component of intelligence, not a "byproduct".

>>I agree, I can detect the existence of foresight in others and so can natural selection, and that's why we have it.  It aids in getting our genes transferred into the next generation. But I was talking about consciousness not foresight, and regardless of how important we personally think consciousness is, from evolution's point of view it's utterly useless, and yet we have it, or at least I have it.

you don't seem to think zombies are logically possible,

Zombies are possible, it's philosophical zombies, a.k.a. smart zombies, that are impossible because it's a brute fact that consciousness is the way data behaves when it is being processed intelligently, or at least that's what I think. Unless you believe that all iterated sequences of "why" or "how" questions go on forever then you must believe that brute facts exist; and I can't think of a better candidate for one than consciousness.

so then epiphenomenalism is false

According to the Internet Encyclopedia of Philosophy "Epiphenomenalism is a position in the philosophy of mind according to which mental states or events are caused by physical states or events in the brain but do not themselves cause anything". If that is the definition then I believe in Epiphenomenalism.

If you believe mental states do not cause anything, then you believe philosophical zombies are logically possible (since we could remove consciousness without altering behavior).
 
Mental states could be necessarily tied to physical states without having any separate causal efficacy, and zombies would not be logically possible. Software is necessarily tied to hardware activity: if a computer runs a particular program, it is not optional that the program is implemented. However, the software does not itself have causal efficacy, causing current to flow in wires and semiconductors and so on: there is always a sufficient explanation for such activity in purely physical terms.

I don't disagree that there is sufficient explanation in all the particle movements all following physical laws.

But then consider the question, how do we decide what level is in control? You make the case that we should consider the quantum field level in control because everything is ultimately reducible to it.

But I don't think that's the best metric for deciding whether it's in control or not. Do the molecules in the brain tell neurons what to do, or do neurons tell molecules what to do (e.g. when they fire)? Or is it some mutually conditioned relationship?

Do neurons fire on their own and tell brains what to do, or do neurons only fire when other neurons of the whole brain stimulate them appropriately so they have to fire? Or is it again, another case of mutualism?

When two people are discussing ideas, are the ideas determining how each brain thinks and responds, or are the brains determining the ideas by virtue of generating the words through which they are expressed?

Though in each of these cases we can always drop a layer and explain all the events at that layer, that is not (in my view) enough of a reason to argue that the events at that layer are "in charge." Control structures, such as whole brain regions or complex computer programs, can involve and be influenced by the actions of billions of separate events and separate parts, and as such they transcend the behaviors of any single physical particle or physical law.

Consider: whether or not a program halts might only be determinable by some rules and proof in a mathematical system, and in this case no physical law will reveal the answer to that physical system's (the computer's) behavior. So if higher level laws are required in the explanation, does it still make sense to appeal to the lower level (physical) laws as providing the explanation?

Given the generality of computers, they can also simulate any imaginable set of physical laws. In such simulations, again I think appealing to our physical laws as explaining what happens in these simulations is a mistake, as the simulation is organized in a manner to make our physical laws irrelevant to the simulation. So while you could explain what happens in the simulation in terms of the physics of the computer running it, it adds no explanatory power: it all cancels out leaving you with a model of the simulated physics.

I would say that something has separate causal efficacy of its own if physical events cannot be predicted without taking that thing into account. For example, the trajectory of a bullet cannot be predicted without taking the wind into account. In the brain, the trajectory of an atom can be predicted without taking consciousness into account. The wind therefore can be said to have separate causal efficacy of its own, but consciousness cannot. This is perhaps a narrow, reductionist account and it misses out on all that is important about the mind and intelligence, but I think it is a valid difference.

Jason Resch

unread,
Jul 9, 2024, 10:34:45 AM (14 days ago) Jul 9
to Everything List


On Tue, Jul 9, 2024, 10:16 AM Stathis Papaioannou <stat...@gmail.com> wrote:


Stathis Papaioannou


On Tue, 9 Jul 2024 at 22:15, Jason Resch <jason...@gmail.com> wrote:


On Tue, Jul 9, 2024, 4:33 AM Stathis Papaioannou <stat...@gmail.com> wrote:


On Tue, 9 Jul 2024 at 04:23, Jason Resch <jason...@gmail.com> wrote:


On Sun, Jul 7, 2024 at 3:14 PM John Clark <johnk...@gmail.com> wrote:
On Sun, Jul 7, 2024 at 1:58 PM Jason Resch <jason...@gmail.com> wrote:

>>>  I think such foresight is a necessary component of intelligence, not a "byproduct".

>>I agree, I can detect the existence of foresight in others and so can natural selection, and that's why we have it.  It aids in getting our genes transferred into the next generation. But I was talking about consciousness not foresight, and regardless of how important we personally think consciousness is, from evolution's point of view it's utterly useless, and yet we have it, or at least I have it.

you don't seem to think zombies are logically possible,

Zombies are possible, it's philosophical zombies, a.k.a. smart zombies, that are impossible because it's a brute fact that consciousness is the way data behaves when it is being processed intelligently, or at least that's what I think. Unless you believe that all iterated sequences of "why" or "how" questions go on forever then you must believe that brute facts exist; and I can't think of a better candidate for one than consciousness.

so then epiphenomenalism is false

According to the Internet Encyclopedia of Philosophy "Epiphenomenalism is a position in the philosophy of mind according to which mental states or events are caused by physical states or events in the brain but do not themselves cause anything". If that is the definition then I believe in Epiphenomenalism.

If you believe mental states do not cause anything, then you believe philosophical zombies are logically possible (since we could remove consciousness without altering behavior).
 
Mental states could be necessarily tied to physical states without having any separate causal efficacy, and zombies would not be logically possible. Software is necessarily tied to hardware activity: if a computer runs a particular program, it is not optional that the program is implemented. However, the software does not itself have causal efficacy, causing current to flow in wires and semiconductors and so on: there is always a sufficient explanation for such activity in purely physical terms.

I don't disagree that there is sufficient explanation in all the particle movements all following physical laws.

But then consider the question: how do we decide which level is in control? You make the case that we should consider the quantum field level to be in control, because everything is ultimately reducible to it.

But I don't think that's the best metric for deciding what is in control. Do the molecules in the brain tell neurons what to do, or do neurons tell molecules what to do (e.g. when they fire)? Or is it some mutually conditioned relationship?

Do neurons fire on their own and tell brains what to do, or do neurons only fire when other neurons of the whole brain stimulate them appropriately so they have to fire? Or is it again, another case of mutualism?

When two people are discussing ideas, are the ideas determining how each brain thinks and responds, or are the brains determining the ideas by virtue of generating the words through which they are expressed?

Though in each of these cases we can always drop a layer and explain all the events at that layer, that is not (in my view) enough of a reason to argue that the events at that layer are "in charge." Control structures, such as whole brain regions, or complex computer programs, can involve and be influenced by the actions of billions of separate events and separate parts, and as such, they transcend the behaviors of any single physical particle or physical law.

Consider: whether or not a program halts might only be determinable by some rules and proof in a mathematical system, and in this case no physical law will reveal the answer to that physical system's (the computer's) behavior. So if higher level laws are required in the explanation, does it still make sense to appeal to the lower level (physical) laws as providing the explanation?
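A concrete illustration (a hypothetical program, not from the original post): a loop that halts only if Goldbach's conjecture is false. Whether it ever halts is settled by number theory, not by any physical law governing the machine that runs it.

```python
# Whether this search halts is fixed by mathematics (Goldbach's conjecture),
# not by the physics of the computer running it.
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def goldbach_holds(n):
    """True if the even number n is a sum of two primes."""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def search_for_counterexample(limit=None):
    """Halts only if some even number > 2 is NOT a sum of two primes.

    With limit=None this runs until a counterexample is found -- whether
    it ever halts is an open mathematical question. A bounded limit is
    used here so the sketch is actually runnable.
    """
    n = 4
    while limit is None or n <= limit:
        if not goldbach_holds(n):
            return n  # halts: counterexample found
        n += 2
    return None  # no counterexample up to limit

assert search_for_counterexample(limit=200) is None  # Goldbach holds up to 200
```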

Given the generality of computers, they can also simulate any imaginable set of physical laws. In such simulations, again I think appealing to our physical laws as explaining what happens in these simulations is a mistake, as the simulation is organized in a manner to make our physical laws irrelevant to the simulation. So while you could explain what happens in the simulation in terms of the physics of the computer running it, it adds no explanatory power: it all cancels out leaving you with a model of the simulated physics.
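For instance (an illustrative sketch, not a claim about any particular simulation): a computer can host Conway's Game of Life, an alternate "physics" whose regularities, such as the period-2 blinker, are explained by Life's rules rather than by the transistor physics of the host machine.

```python
from collections import Counter

# A computer simulating an alternate "physics": Conway's Game of Life.
# What happens in the grid follows Life's rules, not the host hardware's laws.
def life_step(live_cells):
    """One update of Conway's Game of Life; live_cells is a set of (x, y)."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell for cell, n in neighbor_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

# A "blinker" oscillates with period 2 -- a law of the simulated world.
blinker = {(0, -1), (0, 0), (0, 1)}
assert life_step(life_step(blinker)) == blinker
```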

I would say that something has separate causal efficacy of its own if physical events cannot be predicted without taking that thing into account.

I agree.


For example, the trajectory of a bullet cannot be predicted without taking the wind into account. In the brain, the trajectory of an atom can be predicted without taking consciousness into account.


Here I disagree. You are hiding consciousness away in the overwhelming complexity and obscurity of atomic motion. But we can't discard it. Consider the following behavior (example from Yudkowsky):

"Consciousness, whatever it may be—a substance, a process, a name for a confusion—is not epiphenomenal; your mind can catch the inner listener in the act of listening, and say so out loud. The fact that I have typed this paragraph would at least seem to refute the idea that consciousness has no experimentally detectable consequences."

"If you can close your eyes, and sense yourself sensing—if you can be aware of yourself being aware, and think "I am aware that I am aware"—and say out loud, "I am aware that I am aware"—then your consciousness is not without effect on your internal narrative, or your moving lips. You can see yourself seeing, and your internal narrative reflects this, and so do your lips if you choose to say it out loud."

In the act of reporting one's experience of their own consciousness, in the act of catching the inner listener in the act of listening, is this something that can be explained *without taking consciousness into account*?

Jason 


John Clark

unread,
Jul 9, 2024, 10:50:06 AM (14 days ago) Jul 9
to everyth...@googlegroups.com
On Tue, Jul 9, 2024 at 8:31 AM Jason Resch <jason...@gmail.com> wrote:

>> My dictionary says the definition of "prerequisite"  is  "a thing that is required as a prior condition for something else to happen or exist". And it says the definition of "cause" is "a person or thing that gives rise to an action, phenomenon, or condition". So cause and prerequisite are synonyms.

There's a subtle distinction. Muscles and bones are prerequisites for limbs, but muscles and bones do not cause limbs.

There are many things that caused limbs to come into existence, one of them was the existence of muscles, another was the existence of bones, and yet another was the help limbs gave to organisms in getting genes into the next generation.

Lemons are a prerequisite for lemonade, but do not cause lemonade.

You can't make lemonade without lemons, and lemons can't make lemonade without you. 
 
I define intelligence by something capable of intelligent action.

Intelligent action is what drove evolution to amplify intelligence, but if Stephen Hawking's voice generator had broken down for one hour I would still say I have  reason to believe that he remained intelligent during that hour.  


Intelligent action requires non random choice:

If it's non-random then by definition it is deterministic 

Having information about the environment (i.e. perceptions) is consciousness.

But you can't have perceptions without intelligence, sight and sound would just be meaningless gibberish.  

You cannot have perceptions without there being some process or thing to perceive them.

Yes, and that thing is intelligence.  

Therefore perceptions (i.e. consciousness) is a requirement and precondition of being able to perform intelligent actions.

The only perceptions we have firsthand experience with are our own, so investigating perceptions is not very useful in Philosophy or in trying to figure out how the world works, but intelligence is another matter entirely.  That's why in the last few years there has been enormous progress in figuring out how intelligence works, but nobody has found anything new to say about consciousness in centuries.

John K Clark

Stathis Papaioannou

unread,
Jul 9, 2024, 11:18:35 AM (14 days ago) Jul 9
to everyth...@googlegroups.com


Stathis Papaioannou


That is the classic objection to epiphenomenalism, but if consciousness is supervenient, then the physical activity on which the consciousness supervenes also causes the physical activity describing the consciousness.

A different objection is to double down on the idea that physical reality is causally closed. Advanced alien scientists who have no knowledge of human consciousness would not say about the motion of Eliezer Yudkowsky’s vocal cords, “we can’t explain that sequence of vibrations at the 2 minute mark, there must be some force acting on the vocal cords that we are unaware of”.

Cosmin Visan

unread,
Jul 9, 2024, 11:53:30 AM (14 days ago) Jul 9
to Everything List
Don't you ever get bored of this materialistic mumbo-jumbo? When will you finally understand that brain doesn't exist?

Jason Resch

unread,
Jul 9, 2024, 12:12:11 PM (14 days ago) Jul 9
to Everything List
At higher levels it's no longer just physical activity. The laws that describe the higher levels aren't physical laws.

Consider that we can also (if we choose) go much lower than the physical laws. We can describe every physical process as an operation on bits. And every operation can be described in terms of the NAND operation. So then everything is just bits and NANDS, there are no particles and fields, just ones and zeros and how they flip according to NAND.
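The NAND claim is easy to verify directly; a minimal sketch (illustrative only) rebuilding the standard Boolean operations from NAND alone:

```python
# Every Boolean operation can be expressed using only NAND.
def nand(a, b):
    return not (a and b)

def not_(a):
    return nand(a, a)

def and_(a, b):
    return nand(nand(a, b), nand(a, b))

def or_(a, b):
    return nand(nand(a, a), nand(b, b))

def xor_(a, b):
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))

# Check the reconstructions against Python's built-in operators.
for a in (False, True):
    for b in (False, True):
        assert not_(a) == (not a)
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
        assert xor_(a, b) == (a != b)
```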

So if you plant your stake at the physical, how do you justify continuing to speak of physical laws and not 0, 1, and NAND?

Alternatively, if you see value and reason for talking about particles and fields, why not allow the same for the higher layers too?


A different objection is to double down on the idea that physical reality is causally closed. Advanced alien scientists who have no knowledge of human consciousness would not say about the motion of Eliezer Yudkowsky’s vocal cords, “we can’t explain that sequence of vibrations at the 2 minute mark, there must be some force acting on the vocal cords that we are unaware of”.


You can be a full throated physicalist without being an epiphenomenalist. Epiphenomenalism is usually considered a form of dualism (it says something exists beyond the physical, and no physical thing can ever tell you about that non-physical piece of ourselves).

I think we can refute that, e.g., as Yudkowsky does. Neither he nor I say that consciousness violates the laws of physics, just as a computer program does not cause bits to flip in ways that violate the principles of the hardware, but the program is free to define a class of structures and behaviors that exceed those of the underlying instruction set. Consider that the Java virtual machine has just 256 instructions. It can only really do 256 different things. But the programs that can be built using those 256 instructions are unbounded. The behaviors and states and state transitions a program might manifest are infinitely richer than those allowed at the bottom layer.
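The same point can be sketched with a toy machine (an illustration, far simpler than the real JVM): a stack machine with only four instructions, yet unboundedly many distinct programs can be composed from them.

```python
# Toy stack machine (hypothetical, not real JVM bytecode): four instructions,
# unboundedly many programs.
def run(program, x):
    """Execute a toy stack-machine program on input x.

    Instructions: ("PUSH", n) pushes a constant; ("ADD",) and ("MUL",)
    combine the top two stack values; ("DUP",) duplicates the top value.
    """
    stack = [x]
    for op, *arg in program:
        if op == "PUSH":
            stack.append(arg[0])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "DUP":
            stack.append(stack[-1])
    return stack.pop()

# Two different programs built from the same tiny instruction set.
square = [("DUP",), ("MUL",)]                       # computes x * x
poly = [("DUP",), ("MUL",), ("PUSH", 3), ("ADD",)]  # computes x * x + 3
assert run(square, 5) == 25
assert run(poly, 2) == 7
```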

Jason 

Jason Resch

unread,
Jul 9, 2024, 12:16:30 PM (14 days ago) Jul 9
to Everything List


On Tue, Jul 9, 2024, 10:50 AM John Clark <johnk...@gmail.com> wrote:
On Tue, Jul 9, 2024 at 8:31 AM Jason Resch <jason...@gmail.com> wrote:

>> My dictionary says the definition of "prerequisite"  is  "a thing that is required as a prior condition for something else to happen or exist". And it says the definition of "cause" is "a person or thing that gives rise to an action, phenomenon, or condition". So cause and prerequisite are synonyms.

There's a subtle distinction. Muscles and bones are prerequisites for limbs, but muscles and bones do not cause limbs.

There are many things that caused limbs to come into existence, one of them was the existence of muscles, another was the existence of bones, and yet another was the help limbs gave to organisms in getting genes into the next generation.

Lemons are a prerequisite for lemonade, but do not cause lemonade.

You can't make lemonade without lemons, and lemons can't make lemonade without you. 

And this highlights the distinction between a prerequisite and a cause.


 
I define intelligence by something capable of intelligent action.

Intelligent action is what drove evolution to amplify intelligence, but if Stephen Hawking's voice generator had broken down for one hour I would still say I have  reason to believe that he remained intelligent during that hour.  


Sure, but that is just a delayed action. Would he still be intelligent if he never was able to speak again (even with the help of a machine)? He wouldn't be according to evolution.



Intelligent action requires non random choice:

If it's non-random then by definition it is deterministic 

We aren't debating free will here. Not sure why you mention this.


Having information about the environment (i.e. perceptions) is consciousness.

But you can't have perceptions without intelligence, sight and sound would just be meaningless gibberish.  

How do you define intelligence?


You cannot have perceptions without there being some process or thing to perceive them.

Yes, and that thing is intelligence.  

Therefore perceptions (i.e. consciousness) is a requirement and precondition of being able to perform intelligent actions.

The only perceptions we have firsthand experience with are our own, so investigating perceptions is not very useful in Philosophy or in trying to figure out how the world works, but intelligence is another matter entirely. 

It is if we want to answer the question of why consciousness evolved.


That's why in the last few years there has been enormous progress in figuring out how intelligence works, but nobody has found anything new to say about consciousness in centuries.

You don't think functionalism is progress?

Jason 



John K Clark

--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to everything-li...@googlegroups.com.

John Clark

unread,
Jul 9, 2024, 1:13:45 PM (14 days ago) Jul 9
to everyth...@googlegroups.com
On Tue, Jul 9, 2024 at 12:16 PM Jason Resch <jason...@gmail.com> wrote:


 
You can't make lemonade without lemons, and lemons can't make lemonade without you. 

And this highlights the distinction between a prerequisite and a cause.

I don't see how. Lemons are a cause of lemonade and so are you. Lemons are a prerequisite for lemonade and so are you.  



 



 
I define intelligence by something capable of intelligent action.

Intelligent action is what drove evolution to amplify intelligence, but if Stephen Hawking's voice generator had broken down for one hour I would still say I have  reason to believe that he remained intelligent during that hour.  

Sure, but that is just a delayed action. Would he still be intelligent if he never was able to speak again (even with the help of a machine)?

Yes, although we would never know it. Maybe rocks are brilliant but shy and don't like to show off so they played dumb. But I doubt it. 
 
He wouldn't be according to evolution.

True, but natural selection has its opinion about what is intelligent and I have mine.  

>> But you can't have perceptions without intelligence, sight and sound would just be meaningless gibberish.  

> How do you define intelligence?

I don't have a definition for intelligence but I have something much better, examples. After all, definitions are made of words and thus all definitions are inherently circular, examples are the only thing that give meaning to words. Intelligence is the thing that Einstein was famous for having a lot of; from that even somebody who knew nothing about Einstein except for having read a child's biography of the man would understand what the word "intelligence" stands for.  

>> in the last few years there has been enormous progress in figuring out how intelligence works, but nobody has found anything new to say about consciousness in centuries.

You don't think functionalism is progress?

I don't think it says anything fundamental about consciousness that hadn't been said a thousand times many centuries ago.

John K Clark    See what's on my new list at  Extropolis


Stathis Papaioannou

unread,
Jul 9, 2024, 1:47:44 PM (14 days ago) Jul 9
to everyth...@googlegroups.com


Stathis Papaioannou


I am not saying that there is no value in talking about higher level layers, just that the ability to talk about higher level layers does not indicate that they have separate causal efficacy. We could describe the functioning of the brain at the chemical level or at the cellular level. If we consider only the chemical level, there is no physical effect that cannot be explained purely by chemical interactions. So we can say that the cellular level has no separate causal efficacy of its own, or no strongly emergent effect. Does that mean that neural activity is epiphenomenal? If so, consciousness is epiphenomenal in the same way.

Cosmin Visan

unread,
Jul 9, 2024, 2:38:11 PM (14 days ago) Jul 9
to Everything List
Brain doesn't exist. "Brain" is just an idea in consciousness.

Brent Meeker

unread,
Jul 9, 2024, 4:29:16 PM (14 days ago) Jul 9
to everyth...@googlegroups.com
So you wrote a whole paragraph but it's unclear whether you are agreeing with me that consciousness is NOT just some mysterious byproduct of intelligence, but is an essential source of intelligent actions because it provides plans and evaluates planned actions and scenarios.

Brent

On 7/8/2024 7:28 AM, John Clark wrote:

On Sun, Jul 7, 2024 at 9:28 PM Brent Meeker <meeke...@gmail.com> wrote:

>I thought it was obvious that foresight requires consciousness. It requires the ability to think in terms of future scenarios

The keyword in the above is "think". Foresight means using logic to predict, given current starting conditions, what the future will likely be, and determining how changes in the initial conditions will likely affect the future. And to do any of that requires intelligence. Both Large Language Models and picture-to-video AI programs have demonstrated that they have foresight; if you ask them what will happen if you cut the string holding down a helium balloon they will tell you it will float away, but if you add that the instant the string is cut an Olympic high jumper will make a grab for the dangling string they will tell you what will likely happen then too. So yes, foresight does imply consciousness, because foresight demands intelligence and consciousness is the inevitable byproduct of intelligence.
 
in which you are an actor

Obviously any intelligence will have to take its own actions into account to determine what the likely future will be. After an LLM gives you an answer to a question, based on that answer I'll bet an AI will be able to make a pretty good guess what your next question to it will be.

John K Clark    See what's on my new list at  Extropolis




 


John Clark

unread,
Jul 9, 2024, 4:56:45 PM (14 days ago) Jul 9
to everyth...@googlegroups.com
On Tue, Jul 9, 2024 at 4:29 PM Brent Meeker <meeke...@gmail.com> wrote:

>So you wrote a whole paragraph but it's unclear whether you are agreeing with me that consciousness is NOT just some mysterious byproduct of intelligence,

Consciousness is not mysterious, unless you think a brute fact is mysterious, but there are only two ways an iterative sequence of "how" or "why" questions can go, it can either terminate with a brute fact or it goes on forever. I think an iterated sequence of questions going on forever is far more mysterious than a brute fact. And I think it's a brute fact that consciousness is the way data feels when it is being processed.    

>but is an essential source of intelligent actions because it provides plans and evaluates planned actions and scenarios.

You've got cause-and-effect mixed up. Consciousness is not a source of intelligent action, consciousness is an inevitable consequence of intelligence.  

John K Clark    See what's on my new list at  Extropolis



Brent Meeker

unread,
Jul 9, 2024, 6:59:26 PM (13 days ago) Jul 9
to everyth...@googlegroups.com


On 7/8/2024 11:12 AM, Jason Resch wrote:


On Mon, Jul 8, 2024 at 10:29 AM John Clark <johnk...@gmail.com> wrote:

On Sun, Jul 7, 2024 at 9:28 PM Brent Meeker <meeke...@gmail.com> wrote:

>I thought it was obvious that foresight requires consciousness. It requires the ability to think in terms of future scenarios

The keyword in the above is "think". Foresight means using logic to predict, given current starting conditions, what the future will likely be, and determining how changes in the initial conditions will likely affect the future. And to do any of that requires intelligence. Both Large Language Models and picture-to-video AI programs have demonstrated that they have foresight; if you ask them what will happen if you cut the string holding down a helium balloon they will tell you it will float away, but if you add that the instant the string is cut an Olympic high jumper will make a grab for the dangling string they will tell you what will likely happen then too. So yes, foresight does imply consciousness, because foresight demands intelligence and consciousness is the inevitable byproduct of intelligence.

Consciousness is a prerequisite of intelligence. One can be conscious without being intelligent, but one cannot be intelligent without being conscious.
Someone with locked-in syndrome can do nothing, and can exhibit no intelligent behavior. They have no measurable intelligence. Yet they are conscious. You need to have perceptions (of the environment, or the current situation) in order to act intelligently. It is in having perceptions that consciousness appears. So consciousness is not a byproduct of, but an integral and necessary requirement for intelligent action.
And not necessarily a high-level, language-based consciousness. Paramecia act intelligently based on perception of chemical gradients. So one would say they are conscious of said gradients.

Brent

Jason
 
 
in which you are an actor

Obviously any intelligence will have to take its own actions into account to determine what the likely future will be. After an LLM gives you an answer to a question, based on that answer I'll bet an AI will be able to make a pretty good guess what your next question to it will be.

John K Clark    See what's on my new list at  Extropolis




 


Brent Meeker

unread,
Jul 9, 2024, 7:08:26 PM (13 days ago) Jul 9
to everyth...@googlegroups.com


On 7/8/2024 11:23 AM, Jason Resch wrote:
On Sun, Jul 7, 2024 at 3:14 PM John Clark <johnk...@gmail.com> wrote:
On Sun, Jul 7, 2024 at 1:58 PM Jason Resch <jason...@gmail.com> wrote:

>>>  I think such foresight is a necessary component of intelligence, not a "byproduct".

>>I agree, I can detect the existence of foresight in others and so can natural selection, and that's why we have it.  It aids in getting our genes transferred into the next generation. But I was talking about consciousness not foresight, and regardless of how important we personally think consciousness is, from evolution's point of view it's utterly useless, and yet we have it, or at least I have it.

you don't seem to think zombies are logically possible,

Zombies are possible, it's philosophical zombies, a.k.a. smart zombies, that are impossible because it's a brute fact that consciousness is the way data behaves when it is being processed intelligently, or at least that's what I think. Unless you believe that all iterated sequences of "why" or "how" questions go on forever then you must believe that brute facts exist; and I can't think of a better candidate for one than consciousness.

so then epiphenomenalism is false

According to the Internet Encyclopedia of Philosophy "Epiphenomenalism is a position in the philosophy of mind according to which mental states or events are caused by physical states or events in the brain but do not themselves cause anything". If that is the definition then I believe in Epiphenomenalism.

If you believe mental states do not cause anything, then you believe philosophical zombies are logically possible (since we could remove consciousness without altering behavior).

I view mental states as high-level states operating in their own regime of causality (much like a Java computer program). The java computer program can run on any platform, regardless of the particular physical nature of it. It has in a sense isolated itself from the causality of the electrons and semiconductors, and operates in its own realm of the causality of if statements, and for loops. Consider this program, for example:

[image: twin-prime-program2.png]

What causes the program to terminate? Is it the inputs, and the logical relation of primality, or is it the electrons flowing through the CPU? I would argue that the higher-level causality, regarding the logical relations of the inputs to the program logic is just as important. It determines the physics of things like when the program terminates. At this level, the microcircuitry is relevant only to its support of the higher level causal structures, but the program doesn't need to be aware of nor consider those low-level things. It operates the same regardless.
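A sketch of the kind of program described here (a hedged reconstruction; the original image may differ): a search for the first twin-prime pair above an input n. Whether it halts for every n is the twin prime conjecture, a question of number theory rather than of circuit physics.

```python
# Hypothetical reconstruction of the twin-prime program under discussion.
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def first_twin_prime_above(n):
    """Halts when it finds primes p, p+2 with p > n.

    Whether it halts for EVERY n is the (open) twin prime conjecture --
    a fact of number theory, not of the hardware's physics.
    """
    p = n + 1
    while True:
        if is_prime(p) and is_prime(p + 2):
            return (p, p + 2)
        p += 1

assert first_twin_prime_above(100) == (101, 103)
```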

I view consciousness as like that high-level control structure. It operates within a causal realm where ideas and thoughts have causal influence and power, and can reach down to the lower level to do things like trigger nerve impulses.


Here is a quote from Roger Sperry, who eloquently describes what I am speaking of:


"I am going to align myself in a counterstand, along with that approximately 0.1 per cent mentalist minority, in support of a hypothetical brain model in which consciousness and mental forces generally are given their due representation as important features in the chain of control. These appear as active operational forces and dynamic properties that interact with and upon the physiological machinery. Any model or description that leaves out conscious forces, according to this view, is bound to be pretty sadly incomplete and unsatisfactory. The conscious mind in this scheme, far from being put aside and dispensed with as an "inconsequential byproduct," "epiphenomenon," or "inner aspect," as is the customary treatment these days, gets located, instead, front and center, directly in the midst of the causal interplay of cerebral mechanisms.

Mental forces in this particular scheme are put in the driver's seat, as it were. They give the orders and they push and haul around the physiology and physicochemical processes as much as or more than the latter control them. This is a scheme that puts mind back in its old post, over matter, in a sense-not under, outside, or beside it. It's a scheme that idealizes ideas and ideals over physico-chemical interactions, nerve impulse traffic-or DNA. It's a brain model in which conscious, mental, psychic forces are recognized to be the crowning achievement of some five hundred million years or more of evolution.

[...] The basic reasoning is simple: First, we contend that conscious or mental phenomena are dynamic, emergent, pattern (or configurational) properties of the living brain in action -- a point accepted by many, including some of the more tough-minded brain researchers. Second, the argument goes a critical step further, and insists that these emergent pattern properties in the brain have causal control potency -- just as they do elsewhere in the universe. And there we have the answer to the age-old enigma of consciousness.

To put it very simply, it becomes a question largely of who pushes whom around in the population of causal forces that occupy the cranium. There exists within the human cranium a whole world of diverse causal forces; what is more, there are forces within forces within forces, as in no other cubic half-foot of universe that we know.

[...] Along with their internal atomic and subnuclear parts, the brain molecules are obliged to submit to a course of activity in time and space that is determined very largely by the overall dynamic and spatial properties of the whole brain cell as an entity. Even the brain cells, however, with their long fibers and impulse conducting elements, do not have very much to say either about when or in what time pattern, for example, they are going to fire their messages. The firing orders come from a higher command. [...]

In short, if one climbs upward through the chain of command within the brain, one finds at the very top those overall organizational forces and dynamic properties of the large patterns of cerebral excitation that constitute the mental or psychic phenomena. [...]

Near the apex of this compound command system in the brain we find ideas. In the brain model proposed here, the causal potency of an idea, or an ideal, becomes just as real as that of a molecule, a cell, or a nerve impulse. Ideas cause ideas and help evolve new ideas. They interact with each other and with other mental forces in the same brain, in neighboring brains, and in distant, foreign brains. And they also interact with real consequence upon the external surroundings to produce in toto an explosive advance in evolution on this globe far beyond anything known before, including the emergence of the living cell."



I disagree with the metaphors of "force", "submission", "obliged" and "higher command" (which is located where?).  The mental life has causal potency, but that is because the brain evolved specifically to provide an ecology of ideas and action.  I would regard it as just a matter of level of description.

Brent


Jason


 
 
As you said previously, if consciousness had no effects, there would be no reason for it to evolve in the first place.

What I said in my last post was "It must be because consciousness is the byproduct of something else that is not useless, there are no other possibilities".

There is another possibility: consciousness is not useless.

If consciousness is not useless from Evolution's point of view then it must produce "something" that natural selection can see, and if natural selection can see that certain "something" then so can you or I. So the Turing Test is not just a good test for intelligence, it's also a good test for consciousness. The only trouble is, what is that "something"? Presumably that "something" must be related to mind in some way, but if it is not intelligent activity then what the hell is it?  

John K Clark    See what's on my new list at  Extropolis

 

Brent Meeker

unread,
Jul 9, 2024, 7:10:39 PM
to everyth...@googlegroups.com


On 7/8/2024 11:40 AM, 'Cosmin Visan' via Everything List wrote:
> Philosophical zombies are not possible, for the trivial reason that
> body doesn't even exist. "Body" is just an idea in consciousness.
So is consciousness.

Brent

Brent Meeker

unread,
Jul 9, 2024, 7:22:36 PM
to everyth...@googlegroups.com


On 7/8/2024 1:20 PM, Jason Resch wrote:


On Mon, Jul 8, 2024, 4:01 PM John Clark <johnk...@gmail.com> wrote:
On Mon, Jul 8, 2024 at 2:23 PM Jason Resch <jason...@gmail.com> wrote:

If you believe mental states do not cause anything, then you believe philosophical zombies are logically possible (since we could remove consciousness without altering behavior).

Not if consciousness is the inevitable byproduct of intelligence, and I'm almost certain that it is.

If consciousness is necessary for intelligence, then it's not a byproduct. If, on the other hand, consciousness is just a useless byproduct, then it could (logically if not nomologically) be eliminated without affecting intelligence.

You seem to want it to be both necessary and also something that makes no difference to anything (which makes it unnecessary).

I would be most curious to hear your thoughts  regarding the section of my article on "Conscious behaviors" -- that is, behaviors which (seem to) require consciousness in order to do them.


I view mental states as high-level states operating in their own regime of causality (much like a Java computer program).

I have no problem with that, actually it's very similar to my view.

That's good to hear.

 
The java computer program can run on any platform, regardless of the particular physical nature of it.

Right. You could even say that "computer program" is not a noun, it is an adjective, it is the way a computer will behave when the machine's  logical states are organized in a certain way.  And "I" is the way atoms behave when they are organized in a Johnkclarkian way, and "you" is the way atoms behave when they are organized in a Jasonreschian way.

I'm not opposed to that framing.

I view consciousness as like that high-level control structure. It operates within a causal realm where ideas and thoughts have causal influence and power, and can reach down to the lower level to do things like trigger nerve impulses.

Consciousness is a high-level description of brain states that can be extremely useful, but that doesn't mean that lower level and much more finely grained description of brain states involving nerve impulses, or even more finely grained descriptions involving electrons and quarks are wrong, it's just that such level of detail is unnecessary and impractical for some purposes.   

I would even say that, at a certain level of abstraction, they become irrelevant. It is the result of what I call "a Turing firewall": software has no ability to know its underlying hardware implementation. It is an inviolable separation of layers of abstraction, which makes the lower levels invisible to the layers above.
That's roughly true, but not exactly.  If you think of intelligence implemented on a computer it would make a difference if it had a true random number generator (hardware) or not.  It would make a difference if it were a quantum computer or not.  And going the other way, what if it didn't have a multiply operation?  We're so accustomed to the standard Turing-complete von Neumann computer that we take it for granted.

Brent

So the neurons and molecular forces aren't in the driver's seat for what goes on in the brain. That is the domain of higher-level structures and forces. We cannot completely ignore the lower levels, which provide the substrate upon which the higher levels are built, but I think it is an abuse of reductionism that leads people to say consciousness is an epiphenomenon that doesn't do anything. After all, no one would try to explain why a glider in the Game of Life that hits a block and causes it to self-destruct does so by appealing to quantum mechanics in our universe, rather than to the very different rules of the Game of Life as they operate in the Game of Life universe.
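The glider example can be made concrete. Below is a minimal sketch (my own illustration, not anything attached to the thread) of the standard Game of Life rules, stepping a glider through one full period of four generations and checking that it has simply translated one cell diagonally, exactly as the Life-level rules, not any lower-level physics, dictate:

```python
from collections import Counter

def step(live):
    """One generation of Conway's Game of Life on a set of live (row, col) cells."""
    # Count live neighbors of every cell adjacent to a live cell.
    counts = Counter(
        (r + dr, c + dc)
        for r, c in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # Standard rules: a cell is alive next generation if it has exactly 3
    # live neighbors, or 2 live neighbors and is currently alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
g = glider
for _ in range(4):          # one full glider period
    g = step(g)

# After one period the glider has translated one cell down and one right.
assert g == {(r + 1, c + 1) for r, c in glider}
```

The assertion holds regardless of what hardware runs the program, which is the point: the glider's behavior is fixed by the Life rules, not by the substrate.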

Jason 


 
John K Clark    See what's on my new list at  Extropolis


Brent Meeker

unread,
Jul 9, 2024, 7:47:58 PM
to everyth...@googlegroups.com


On 7/8/2024 1:34 PM, Jason Resch wrote:


On Mon, Jul 8, 2024, 4:04 PM John Clark <johnk...@gmail.com> wrote:

On Mon, Jul 8, 2024 at 2:12 PM Jason Resch <jason...@gmail.com> wrote:

>Consciousness is a prerequisite of intelligence.

I think you've got that backwards, intelligence is a prerequisite of consciousness. And the possibility of intelligent ACTIONS is a  prerequisite for Darwinian natural selection to have evolved it.

I disagree, but will explain below.

 
One can be conscious without being intelligent,

Sure.

I define intelligence by something capable of intelligent action.

Intelligent action requires non-random choice: choice informed by information from the environment.

Having information about the environment (i.e. perceptions) is consciousness. You cannot have perceptions without there being some process or thing to perceive them.

Therefore perceptions (i.e. consciousness) are a requirement and precondition of being able to perform intelligent actions.

I agree.  And therefore evolution can "see" consciousness, just like it can see metabolism.

Brent



Jason 

The Turing Test is not perfect, it has a lot of flaws, but it's all we've got. If something passes the Turing Test then it's intelligent and conscious, but if it fails the test then it may or may not be intelligent and or conscious. 

 You need to have perceptions (of the environment, or the current situation) in order to act intelligently. 

For intelligence to have evolved, and we know for a fact that it has, you not only need to be able to perceive the environment you also need to be able to manipulate it. That's why zebras didn't evolve great intelligence, they have no hands, so a brilliant zebra wouldn't have a great advantage over a dumb zebra, in fact he'd probably be at a disadvantage because a big brain is a great energy hog.  
  John K Clark    See what's on my new list at  Extropolis



Cosmin Visan

unread,
Jul 10, 2024, 3:13:20 AM
to Everything List
@Brent. Playing with words doesn't make you smart. Quite the opposite. Maaan... you people are so boring. You have the same memes that you keep repeating over and over and over again. Zero presence of intelligent thought. Just memes.

Jason Resch

unread,
Jul 10, 2024, 10:07:32 AM
to Everything List
I think we have a failure to communicate effectively on this. It may be because we are using a different definition of some word, or we are assuming a belief the other has which they do not hold.

To help clarify and ensure we are on the same page: I am not implying any strong emergence, nor any violation of physical causality.


Does that mean that neural activity is epiphenomenal?

To be clear, neither a lack of Cartesian intervention in physics, nor the causal closure of physics, implies epiphenomenalism. Though historically this was the path of thinking, in the late 1800s, that led to the first proposals of epiphenomenalism.


If so, consciousness is epiphenomenal in the same way.

It can seem so, but only if one stops short in imagining a low-level explanation. Consider a complex program that halts. We can ask why it halted, and at a low level say it halted because it reached this last instruction here, which is '00000000', which tells the machine to halt. But when we keep asking, and follow back the explanation for why exactly this machine hit this instruction, we find as we trace the causes backwards that we must include millions of other factors, causes, relationships, and so on. This bigger picture is much larger than the story of single instructions followed one at a time by the CPU. The whole story is very large and very complex.

And in the case of explaining atomic motions in the brain, the same thing happens. If we want to explain why this particular neuron released calcium ions, and it happens to be a neuron that controls speech, and it is expressing a complex thought about the idea of epiphenomenalism, we can only explain this physical interaction by invoking a larger story involving 10^23 other atoms interacting in incredibly complex ways (and in that story is a conscious mind). We can only pretend it isn't there by keeping our focus on just two particles interacting at a time, but if we want to know how those two atoms came to interact when and how they did, we have to enlarge our view, and consider webs of relationships that are themselves far more complex than any physical law.

Consider the program from my example. Where does the is_prime() function exist physically in physical law? It doesn't. But the is_prime() function nevertheless exists. Where and how does it exist, when no physical law and no particle is the is_prime() function? The existence of this function can only be explained in terms of a complex arrangement of relationships implemented across a myriad of particles. We can't explain this function in terms of single physical interactions between two particles. The whole picture, then, is something more than physical law.
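For concreteness, an is_prime() function of the sort referred to here might look like the following minimal trial-division sketch (an illustrative stand-in of my own, not the actual program from the thread):

```python
def is_prime(n):
    """Trial-division primality test. No single particle 'is' this function;
    it exists only as an arrangement of relationships across many parts."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:      # only need divisors up to sqrt(n)
        if n % d == 0:
            return False
        d += 1
    return True

assert [p for p in range(20) if is_prime(p)] == [2, 3, 5, 7, 11, 13, 17, 19]
```

However the hardware implements it, the abstract function picks out the same numbers, which is what makes it a higher-level entity than any particular physical interaction.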

Also, consider the size of the information state of your current moment of awareness. Does it not exceed the information that can be represented by just two particles interacting? What about a computer that multiplies a million-digit number: which particle represented that variable? Since no one particle can, the only answer must be that we have to consider a complex computation as extended across space and time, involving a myriad of particles. Single point-like local interactions are not enough.


So as I see it:
Epiphenomenalism fails, because it is a dualist theory that leads to such things as zombies, but zombies are not logically possible. Epiphenomenalism denies the ability for our physical selves to know anything about our internal mental states.

And reductionist physicalism fails, because it ignores the higher-level structures, and the complex history, that must be taken into account to provide a complete explanation of events.

But this leaves plenty of room for physicalism. You can be a physicalist who believes in the causal closure of physics, and the inviolability of physical laws.

For example, see Sean Carroll's arguments against epiphenomenalism while still remaining a strict physicalist:

"Physicalism posits that a conscious experience is an emergent phenomenon
that arises in higher-level models of the same underlying processes described by physics."
[...]
"The passive mentalism option, where mental aspects have no impact on physical
behavior, seems even less promising."

So Carroll is an example of a causal-closure-accepting physicalist who rejects "passive mentalism" (which is another term for epiphenomenalism).

Do you see anything wrong with Carroll's view?

Jason 



Jason Resch

unread,
Jul 10, 2024, 10:08:34 AM
to Everything List


On Tue, Jul 9, 2024, 6:59 PM Brent Meeker <meeke...@gmail.com> wrote:


On 7/8/2024 11:12 AM, Jason Resch wrote:


On Mon, Jul 8, 2024 at 10:29 AM John Clark <johnk...@gmail.com> wrote:

On Sun, Jul 7, 2024 at 9:28 PM Brent Meeker <meeke...@gmail.com> wrote:

>I thought it was obvious that foresight requires consciousness. It requires the ability of think in terms of future scenarios

The keyword in the above is "think". Foresight means using logic to predict, given current starting conditions, what the future will likely be, and determining how a change in the initial conditions will likely affect the future.  And to do any of that requires intelligence. Both Large Language Models and picture-to-video AI programs have demonstrated that they have foresight; if you ask them what will happen if you cut the string holding down a helium balloon they will tell you it will float away, but if you add that the instant the string is cut an Olympic high jumper will make a grab for the dangling string they will tell you what will likely happen then too. So yes, foresight does imply consciousness, because foresight demands intelligence and consciousness is the inevitable byproduct of intelligence.

Consciousness is a prerequisite of intelligence. One can be conscious without being intelligent, but one cannot be intelligent without being conscious.
Someone with locked-in syndrome can do nothing, and can exhibit no intelligent behavior. They have no measurable intelligence. Yet they are conscious. You need to have perceptions (of the environment, or the current situation) in order to act intelligently. It is in having perceptions that consciousness appears. So consciousness is not a byproduct of, but an integral and necessary requirement for intelligent action.
And not necessarily a high-level, language-based consciousness.  Paramecia act intelligently based on perception of chemical gradients.  So one would say they are conscious of said gradients.


Yes, I agree.

Jason


Brent

Jason
 
 
in which you are an actor

Obviously any intelligence will have to take its own actions in account to determine what the likely future will be. After a LLM gives you an answer to a question, based on that answer I'll bet an AI  will be able to make a pretty good guess what your next question to it will be.

John K Clark    See what's on my new list at  Extropolis




 


Jason Resch

unread,
Jul 10, 2024, 10:12:08 AM
to Everything List
I think he means in the activity that occurs across the brain.

  The mental life has causal potency, but it is because the brain evolved specifically to provide an ecology of ideas and action.  I would regard it as just a matter of level of description.

We can take issue with his word choices, but it seems you (and I) agree with his general idea: that there is causal potency in one's mental life.

Jason 



 
 
As you said previously, if consciousness had no effects, there would be no reason for it to evolve in the first place.

What I said in my last post was "It must be because consciousness is the byproduct of something else that is not useless, there are no other possibilities".

There is another possibility: consciousness is not useless.

If consciousness is not useless from Evolution's point of view then it must produce "something" that natural selection can see, and if natural selection can see that certain "something" then so can you or I. So the Turing Test is not just a good test for intelligence, it's also a good test for consciousness. The only trouble is, what is that "something"? Presumably, whatever it is, that "something" must be related to mind in some way, but if it is not intelligent activity then what the hell is it?

John K Clark    See what's on my new list at  Extropolis

 

Jason Resch

unread,
Jul 10, 2024, 10:25:06 AM
to Everything List


On Tue, Jul 9, 2024, 7:22 PM Brent Meeker <meeke...@gmail.com> wrote:


On 7/8/2024 1:20 PM, Jason Resch wrote:


On Mon, Jul 8, 2024, 4:01 PM John Clark <johnk...@gmail.com> wrote:
On Mon, Jul 8, 2024 at 2:23 PM Jason Resch <jason...@gmail.com> wrote:

If you believe mental states do not cause anything, then you believe philosophical zombies are logically possible (since we could remove consciousness without altering behavior).

Not if consciousness is the inevitable byproduct of intelligence, and I'm almost certain that it is.

If consciousness is necessary for intelligence, then it's not a byproduct. If, on the other hand, consciousness is just a useless byproduct, then it could (logically if not nomologically) be eliminated without affecting intelligence.

You seem to want it to be both necessary and also something that makes no difference to anything (which makes it unnecessary).

I would be most curious to hear your thoughts  regarding the section of my article on "Conscious behaviors" -- that is, behaviors which (seem to) require consciousness in order to do them.


I view mental states as high-level states operating in their own regime of causality (much like a Java computer program).

I have no problem with that, actually it's very similar to my view.

That's good to hear.

 
The java computer program can run on any platform, regardless of the particular physical nature of it.

Right. You could even say that "computer program" is not a noun, it is an adjective, it is the way a computer will behave when the machine's  logical states are organized in a certain way.  And "I" is the way atoms behave when they are organized in a Johnkclarkian way, and "you" is the way atoms behave when they are organized in a Jasonreschian way.

I'm not opposed to that framing.

I view consciousness as like that high-level control structure. It operates within a causal realm where ideas and thoughts have causal influence and power, and can reach down to the lower level to do things like trigger nerve impulses.

Consciousness is a high-level description of brain states that can be extremely useful, but that doesn't mean that lower level and much more finely grained description of brain states involving nerve impulses, or even more finely grained descriptions involving electrons and quarks are wrong, it's just that such level of detail is unnecessary and impractical for some purposes.   

I would even say that, at a certain level of abstraction, they become irrelevant. It is the result of what I call "a Turing firewall": software has no ability to know its underlying hardware implementation. It is an inviolable separation of layers of abstraction, which makes the lower levels invisible to the layers above.
That's roughly true, but not exactly.  If you think of intelligence implemented on a computer it would make a difference if it had a true random number generator (hardware) or not. 

There was a study done in the 1950s on probabilistic Turing machines ( https://www.degruyter.com/document/doi/10.1515/9781400882618-010/html?lang=en ) that found what they could compute is no different than what a deterministic Turing machine can compute.

"The computing power of Turing machines
provided with a random number generator was
studied in the classic paper [Computability by
Probabilistic Machines]. It turned out that such
machines could compute only functions that are already computable by ordinary Turing machines."
— Martin Davis in “The Myth of Hypercomputation” (2004)

To see why, consider that programs can similarly split themselves and run in parallel with each of the possible values. To each instance of the split program, the value it is provided will seem random. But importantly: what the program can compute with this value is the same as what it would compute had the value come from a "truly random" quantum measurement.
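The splitting argument can be illustrated with a toy sketch (the function names here are my own, not from the cited paper): an ordinary deterministic program can enumerate every possible coin-flip sequence and thereby compute exactly the set of outputs the "probabilistic" version could ever produce.

```python
from itertools import product

def branch_result(flips):
    """A toy 'probabilistic' computation: its output depends only on the
    coin flips it is handed, not on where those flips came from."""
    return sum(flips)

def all_branches(n_flips):
    """Deterministically enumerate every branch the coin-flipping program
    could take, as an ordinary Turing machine can."""
    return {branch_result(flips) for flips in product((0, 1), repeat=n_flips)}

# Every output a 3-flip random run could produce, with no randomness used.
assert all_branches(3) == {0, 1, 2, 3}
```

Each enumerated branch sees its flip sequence as given, just as a program fed "truly random" quantum measurements would.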

It would make a difference if it were a quantum computer or not. 

For us observing the program run from the outside, it would make a difference. But the program itself has no way of distinguishing whether it is receiving a value that came from a real measurement of a quantum system, or whether it was provided the result of a simulated quantum system.


And going the other way, what if it didn't have a multiply operation?  We're so accustomed to the standard Turing-complete von Neumann computer that we take it for granted.

A program will crash if it's run on hardware it's not compatible with. This is why you can't take a .exe from Windows and run it on a Mac. But if you run a Windows emulator on the Mac, you can then run the .exe within it.

The program then has no idea it is running on a Mac; it has every reason to believe it is running on a real Windows computer, but it is fooled by the emulation layer (this emulation layer is what I speak of when I refer to the "Turing firewall"). That such layers can be created is a direct consequence of the fact that all Turing machines are capable of emulating each other.
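The "Turing firewall" idea can be sketched with a toy interpreter (my own hypothetical example, not a real emulator): the bytecode program below sees only the abstract stack machine the interpreter presents, never the hardware or language beneath it, so the same program yields the same result on any conforming implementation of the machine.

```python
def run_vm(code):
    """A minimal stack-machine interpreter acting as an 'emulation layer'.
    The bytecode program cannot tell what implements the machine it runs on."""
    stack = []
    for op, arg in code:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            stack.append(stack.pop() + stack.pop())
        elif op == "MUL":
            stack.append(stack.pop() * stack.pop())
        else:
            raise ValueError(f"unknown opcode: {op}")
    return stack.pop()

# The 'software': compute 6 * 7 on the abstract machine.
program = [("PUSH", 6), ("PUSH", 7), ("MUL", None)]
assert run_vm(program) == 42
```

Another interpreter for the same bytecode, in any language on any hardware, would return the same 42, which is all the program can ever observe.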

Jason 



So the neurons and molecular forces aren't in the driver's seat for what goes on in the brain. That is the domain of higher-level structures and forces. We cannot completely ignore the lower levels, which provide the substrate upon which the higher levels are built, but I think it is an abuse of reductionism that leads people to say consciousness is an epiphenomenon that doesn't do anything. After all, no one would try to explain why a glider in the Game of Life that hits a block and causes it to self-destruct does so by appealing to quantum mechanics in our universe, rather than to the very different rules of the Game of Life as they operate in the Game of Life universe.

Jason 


 
John K Clark    See what's on my new list at  Extropolis


Stathis Papaioannou

unread,
Jul 10, 2024, 10:52:17 AM
to everyth...@googlegroups.com


Stathis Papaioannou


I think that physical reality is causally closed and strong emergence is false. I think zombies are impossible because consciousness is a weakly emergent phenomenon that is necessary rather than contingent, in the way that gliders are necessary rather than contingent given a certain GOL pattern. I would use the phrase "gliders in GOL have no separate causal efficacy of their own", despite all you have said, to reflect causal closure and the absence of strong emergence. Maybe this is a form of dualism: property dualism rather than substance dualism. Maybe the term "gliders are epiphenomenal" can be used, but if it doesn't apply to the gliders (and consciousness) as described, I don't see what would have to change for it to apply.

Jason Resch

unread,
Jul 10, 2024, 1:35:44 PM
to Everything List
I think we agree on all the principles then. If there's a difference between our views I think it boils down to us having a different definition for the word 'epiphenomenal'.

Jason 



Brent Meeker

unread,
Jul 10, 2024, 6:36:18 PM
to everyth...@googlegroups.com


On 7/9/2024 1:32 AM, Stathis Papaioannou wrote:


On Tue, 9 Jul 2024 at 04:23, Jason Resch <jason...@gmail.com> wrote:


On Sun, Jul 7, 2024 at 3:14 PM John Clark <johnk...@gmail.com> wrote:
On Sun, Jul 7, 2024 at 1:58 PM Jason Resch <jason...@gmail.com> wrote:

>>>  I think such foresight is a necessary component of intelligence, not a "byproduct".

>>I agree, I can detect the existence of foresight in others and so can natural selection, and that's why we have it.  It aids in getting our genes transferred into the next generation. But I was talking about consciousness not foresight, and regardless of how important we personally think consciousness is, from evolution's point of view it's utterly useless, and yet we have it, or at least I have it.

you don't seem to think zombies are logically possible,

Zombies are possible, it's philosophical zombies, a.k.a. smart zombies, that are impossible because it's a brute fact that consciousness is the way data behaves when it is being processed intelligently, or at least that's what I think. Unless you believe that all iterated sequences of "why" or "how" questions go on forever then you must believe that brute facts exist; and I can't think of a better candidate for one than consciousness.

so then epiphenomenalism is false

According to the Internet Encyclopedia of Philosophy "Epiphenomenalism is a position in the philosophy of mind according to which mental states or events are caused by physical states or events in the brain but do not themselves cause anything". If that is the definition then I believe in Epiphenomenalism.

If you believe mental states do not cause anything, then you believe philosophical zombies are logically possible (since we could remove consciousness without altering behavior).
 
Mental states could be necessarily tied to physical states without having any separate causal efficacy, and zombies would not be logically possible. Software is necessarily tied to hardware activity: if a computer runs a particular program, it is not optional that the program is implemented. However, the software does not itself have causal efficacy, causing current to flow in wires and semiconductors and so on: there is always a sufficient explanation for such activity in purely physical terms.

That's why I view it as a choice in level of description.  This seems to parallel the 19th Century discussions of life.  That life is an organization of molecules capable of metabolism and reproduction, eventually prevailed over the need for an animating spirit.  But there remained a lot to be discovered about that organization.  It is a very specific organization.  Probably not the only possible kind of life, but certainly distinct from non-life.

Brent

Brent Meeker

unread,
Jul 10, 2024, 11:46:49 PM
to everyth...@googlegroups.com


On 7/9/2024 4:48 AM, John Clark wrote:
On Mon, Jul 8, 2024 at 4:20 PM Jason Resch <jason...@gmail.com> wrote:

If consciousness is necessary for intelligence [...]
 
Consciousness is the inevitable product of intelligence, it is not the cause of intelligence.
I think that's wrong.  It is the cause of some instances of intelligence.  Imagining yourself in various scenarios and running them forward in imagination is very much the cause of one kind of intelligence, i.e. foresight.

And as I cannot emphasize enough, natural selection can't select for something it can't see and it can't see consciousness, but natural selection CAN see intelligent actions.
And intelligent actions (of a certain kind, often speech) follow from conscious thought and so evolution CAN see conscious thought.  Remember you're using "see" metaphorically.  You "see" actions as intelligent by inference, often by modeling in consciousness what you would do in the other's situation.  So you can "see" conscious thoughts by extension of the same kind of inference.

And you know for a fact that natural selection has managed to produce at least one conscious being and probably many billions of them.
Don't you understand how those two facts are telling you something that is philosophically important?
 
> If on the other hand, consciousness is just a useless byproduct, then it could (logically if not nomologically) be eliminated without affecting intelligent.

That would not be possible if it's a brute fact that consciousness is the way data feels when it is being processed.

That's false.  Lots of data is processed every day by machines that are not conscious and we "see" they are not conscious because they take no intelligent action based on the data being true. 

Brent
 

John K Clark


Brent Meeker

unread,
Jul 10, 2024, 11:53:13 PM
to everyth...@googlegroups.com


On 7/9/2024 5:17 AM, John Clark wrote:


On Tue, Jul 9, 2024 at 7:54 AM Jason Resch <jason...@gmail.com> wrote:

>>Consciousness is the inevitable product of intelligence, it is not the cause of intelligence.


I didn't say it was the cause, I said it is a prerequisite.

My dictionary says the definition of "prerequisite"  is  "a thing that is required as a prior condition for something else to happen or exist". And it says the definition of "cause" is "a person or thing that gives rise to an action, phenomenon, or condition". So cause and prerequisite are synonyms.

A more careful reading of the definitions would tell you that a prerequisite does not give rise to an action; but its absence precludes the action.

Brent


 
You conveniently (for you but not for me) ignored and deleted my explanation in your reply.

Somehow I missed that "detailed explanation" you refer to.  

John K Clark



 

Brent Meeker

unread,
Jul 11, 2024, 12:00:28 AM (12 days ago) Jul 11
to everyth...@googlegroups.com


On 7/9/2024 7:16 AM, Stathis Papaioannou wrote:


Stathis Papaioannou


On Tue, 9 Jul 2024 at 22:15, Jason Resch <jason...@gmail.com> wrote:


On Tue, Jul 9, 2024, 4:33 AM Stathis Papaioannou <stat...@gmail.com> wrote:


On Tue, 9 Jul 2024 at 04:23, Jason Resch <jason...@gmail.com> wrote:


On Sun, Jul 7, 2024 at 3:14 PM John Clark <johnk...@gmail.com> wrote:
On Sun, Jul 7, 2024 at 1:58 PM Jason Resch <jason...@gmail.com> wrote:

>>>  I think such foresight is a necessary component of intelligence, not a "byproduct".

>>I agree, I can detect the existence of foresight in others and so can natural selection, and that's why we have it.  It aids in getting our genes transferred into the next generation. But I was talking about consciousness not foresight, and regardless of how important we personally think consciousness is, from evolution's point of view it's utterly useless, and yet we have it, or at least I have it.

you don't seem to think zombies are logically possible,

Zombies are possible, it's philosophical zombies, a.k.a. smart zombies, that are impossible because it's a brute fact that consciousness is the way data behaves when it is being processed intelligently, or at least that's what I think. Unless you believe that all iterated sequences of "why" or "how" questions go on forever then you must believe that brute facts exist; and I can't think of a better candidate for one than consciousness.

so then epiphenomenalism is false

According to the Internet Encyclopedia of Philosophy "Epiphenomenalism is a position in the philosophy of mind according to which mental states or events are caused by physical states or events in the brain but do not themselves cause anything". If that is the definition then I believe in Epiphenomenalism.

If you believe mental states do not cause anything, then you believe philosophical zombies are logically possible (since we could remove consciousness without altering behavior).
 
Mental states could be necessarily tied to physical states without having any separate causal efficacy, and zombies would not be logically possible. Software is necessarily tied to hardware activity: if a computer runs a particular program, it is not optional that the program is implemented. However, the software does not itself have causal efficacy, causing current to flow in wires and semiconductors and so on: there is always a sufficient explanation for such activity in purely physical terms.

I don't disagree that there is sufficient explanation in all the particle movements all following physical laws.

But then consider the question, how do we decide what level is in control? You make the case that we should consider the quantum field level in control because everything is ultimately reducible to it.

But I don't think that's the best metric for deciding whether it's in control or not. Do the molecules in the brain tell neurons what to do, or do neurons tell molecules what to do (e.g. when they fire)? Or is it some mutually conditioned relationship?

Do neurons fire on their own and tell brains what to do, or do neurons only fire when other neurons of the whole brain stimulate them appropriately so they have to fire? Or is it again, another case of mutualism?

When two people are discussing ideas, are the ideas determining how each brain thinks and responds, or are the brains determining the ideas by virtue of generating the words through which they are expressed?

Though in each of these cases we can always drop a layer and explain all the events at that layer, that is not (in my view) enough of a reason to argue that the events at that layer are "in charge." Control structures, such as whole brain regions, or complex computer programs, can involve and be influenced by the actions of billions of separate events and separate parts, and as such, they transcend the behaviors of any single physical particle or physical law. 

Consider: whether or not a program halts might only be determinable by some rules and proof in a mathematical system, and in this case no physical law will reveal the answer to that physical system's (the computer's) behavior. So if higher level laws are required in the explanation, does it still make sense to appeal to the lower level (physical) laws as providing the explanation?
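Jason's point about halting can be made concrete with a small sketch (mine, not from the original post; the function name is hypothetical). Whether the loop below terminates for every positive starting value is exactly the open Collatz conjecture, a question about mathematics, not about the physics of the machine running it:

```python
def collatz_steps(n):
    """Count iterations of the Collatz map (n -> n/2 if even, else 3n+1)
    until n reaches 1. Whether this loop halts for EVERY positive n is an
    open mathematical question (the Collatz conjecture); no physical law
    governing the computer's transistors settles it."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(27))  # 111 (after climbing as high as 9232)
```

Predicting whether this physical system (the computer) halts requires a proof in number theory, which is the sense in which a higher-level law enters the explanation.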

Given the generality of computers, they can also simulate any imaginable set of physical laws. In such simulations, again I think appealing to our physical laws as explaining what happens in these simulations is a mistake, as the simulation is organized in a manner to make our physical laws irrelevant to the simulation. So while you could explain what happens in the simulation in terms of the physics of the computer running it, it adds no explanatory power: it all cancels out leaving you with a model of the simulated physics.

I would say that something has separate causal efficacy of its own if physical events cannot be predicted without taking that thing into account. For example, the trajectory of a bullet cannot be predicted without taking the wind into account. In the brain, the trajectory of an atom can be predicted without taking consciousness into account.

I think that's doubtful.  Some atoms have their motion determined by perceptions, which are instantiated by things outside the brain; and which atom is affected may depend on memory, which depends on the whole history of the organism.

Brent

Brent Meeker

unread,
Jul 11, 2024, 12:28:21 AM (12 days ago) Jul 11
to everyth...@googlegroups.com
Most intelligent action is preceded by conscious planning.

Brent


Brent Meeker

unread,
Jul 11, 2024, 1:02:02 AM (12 days ago) Jul 11
to everyth...@googlegroups.com
Do you know how to avoid this boredom, or do I need to explain it to you?

Brent
"The man who lets himself be bored is even more contemptible than
the bore."
      --- Samuel Butler



On 7/10/2024 12:13 AM, 'Cosmin Visan' via Everything List wrote:
@Brent. Playing with words doesn't make you smart. Quite the opposite. Maaan... you people are so boring. You have the same memes that you keep repeating over and over and over again. Zero presence of intelligent thought. Just memes.


John Clark

unread,
Jul 11, 2024, 7:34:45 AM (12 days ago) Jul 11
to everyth...@googlegroups.com
On Wed, Jul 10, 2024 at 11:46 PM Brent Meeker <meeke...@gmail.com> wrote:

>> Consciousness is the inevitable product of intelligence, it is not the cause of intelligence.
 
 I think that's wrong. 

You say I'm wrong and yet I 99% agree with what you say in the very next sentence, it would be 100% except that I'm not quite sure whether the pronoun "it" refers to intelligence or consciousness.  
 
It is the cause of some instances of intelligence.  Imagining yourself in various scenarios and running them forward in imagination is very much the cause of one kind of intelligence, i.e. foresight.

Yes, foresight is the kind of intelligence that examines possible future scenarios and takes actions that increase the likelihood that the scenario that actually occurs is one that is desirable from its point of view. That's how a Chess program became a grandmaster in the 1990's. AlphaZero did the same thing when it beat the world champion human player at the game of GO, except that it didn't think (a.k.a. imagine) all possible scenarios but only moves that an intelligent opponent would likely make, including an opponent that was as intelligent as itself. That's how AlphaZero went from knowing nothing about GO, except for the few simple rules of the game, to being able to play GO at a superhuman level in just a few hours.    
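The scenario-examining foresight described here is essentially depth-limited minimax search. A minimal sketch (mine; all function names are hypothetical, and the toy game is invented for illustration):

```python
def best_move(state, moves, result, score, depth):
    """Pick the move whose imagined future scores best, assuming the
    opponent (the minimizer) also plays its best imagined reply."""
    def value(s, d, maximizing):
        opts = moves(s)
        if d == 0 or not opts:
            return score(s)  # stop imagining; evaluate the scenario as-is
        vals = [value(result(s, m), d - 1, not maximizing) for m in opts]
        return max(vals) if maximizing else min(vals)
    return max(moves(state), key=lambda m: value(result(state, m), depth - 1, False))

# Toy game: walk a number line toward 10 while an adversary also moves.
move = best_move(8, lambda s: [-1, 1, 2], lambda s, m: s + m,
                 lambda s: -abs(s - 10), depth=2)
```

With depth=1 the searcher greedily jumps straight to 10; with deeper search it hedges against the adversary's replies. AlphaZero's refinement, as the message notes, is to replace exhaustive enumeration of `moves` with a learned policy that proposes only the moves an intelligent opponent would likely make.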


>> it's a brute fact that consciousness is the way data feels when it is being processed.
 
That's false. 

Once more you say I'm wrong, but this time I agree 100% not 99% with what you say in the very next sentence.  

Lots of data is processed every day by machines that are not conscious and we "see" they are not conscious because they take no intelligent action based on the data being true. 

So like me you believe the Turing Test is not just a test for intelligence, it is also a test for consciousness. In fact, although imperfect, it is the only test for consciousness we have, or will ever have. So if a computer is behaving as intelligently as a human then it must be as conscious as a human. Probably.   

 John K Clark    See what's on my new list at  Extropolis

John Clark

unread,
Jul 11, 2024, 8:02:11 AM (12 days ago) Jul 11
to everyth...@googlegroups.com
On Wed, Jul 10, 2024 at 11:53 PM Brent Meeker <meeke...@gmail.com> wrote:
>> "My dictionary says the definition of "prerequisite"  is  "a thing that is required as a prior condition for something else to happen or exist". And it says the definition of "cause" is "a person or thing that gives rise to an action, phenomenon, or condition". So cause and prerequisite are synonyms."

A more careful reading of the definitions would tell you that a prerequisite does not give rise to an action; but its absence precludes the action.

OK. But if it's a brute fact that consciousness is the way data feels when it is being processed, and if intelligent action requires data processing, then if by some magic (it has to be magic because neither science nor mathematics can help you with this) you knew that system X was not conscious, then you could correctly predict that its actions would not be intelligent.

Consciousness is a high-level description of the state of a system, that's why when I asked somebody "why did you do that?" sometimes "because I wanted to" is an adequate explanation. But sometimes I want a lower level more detailed explanation such as "I frowned when I took a bite of that food because it was much too salty". A neurophysiologist might want an even more detailed explanation involving neurons and synapses.  None of these explanations are wrong and all of them are consistent with each other. It would be correct to say that the reason a balloon gets bigger when I blow into it is because the pressure inside the balloon increases, but it would also be correct to say that the reason a balloon gets bigger when I blow into it is because there are more air molecules inside the balloon randomly hitting the inner surface. 

John K Clark    See what's on my new list at  Extropolis

Brent Meeker

unread,
Jul 11, 2024, 2:39:49 PM (12 days ago) Jul 11
to everyth...@googlegroups.com
I stand corrected.  But that just means I chose a bad example.  My point was that consciousness doesn't require Turing completeness.  You agreed with me about the paramecium.

Brent

Jason Resch

unread,
Jul 11, 2024, 2:43:54 PM (12 days ago) Jul 11
to Everything List


On Thu, Jul 11, 2024, 2:39 PM Brent Meeker <meeke...@gmail.com> wrote:
I stand corrected.  But that just means I chose a bad example.  My point was that consciousness doesn't require Turing completeness.  You agreed with me about the paramecium.


I agree Turing completeness is not required for consciousness. The human brain (given its limited and faulty memory) wouldn't even meet the definition of being Turing complete.

Jason 


Terren Suydam

unread,
Jul 11, 2024, 3:04:14 PM (12 days ago) Jul 11
to everyth...@googlegroups.com
Only in the most idealized sense of Turing completeness would we argue whether the brain is Turing complete. Neural networks are Turing complete.

If we're interested in whether consciousness requires Turing completeness, it seems silly to use the brain as a counter example of Turing completeness only because it happens to be a finite, physical object with noise/errors in the system. For all practical purposes, whatever properties one would confer to a Turing complete system, the brain has them.

Cosmin Visan

unread,
Jul 11, 2024, 3:40:11 PM (12 days ago) Jul 11
to Everything List
Brain doesn't exist. "Brain" is just an idea in consciousness.

John Clark

unread,
Jul 11, 2024, 3:50:27 PM (12 days ago) Jul 11
to everyth...@googlegroups.com
On Thu, Jul 11, 2024 at 2:39 PM Brent Meeker <meeke...@gmail.com> wrote:

  My point was that consciousness doesn't require Turing completeness.

Maybe, you and I will never know for sure, but intelligence certainly does require Turing Completeness.

John K Clark    See what's on my new list at  Extropolis

John Clark

unread,
Jul 11, 2024, 3:54:19 PM (12 days ago) Jul 11
to everyth...@googlegroups.com
On Thu, Jul 11, 2024 at 2:43 PM Jason Resch <jason...@gmail.com> wrote:

> Turing completeness is not required for consciousness. The human brain (given its limited and faulty memory) wouldn't even meet the definition of being Turing complete.

Sometimes on some problems the human brain could be considered as being Turing Complete, otherwise we would never be able to do anything that was intelligent. And on rare occasions the human brain has been known to do smart things. But sometimes we screw up and do dumb things. You could say pretty much the same thing about a computer: an idealized Turing Machine could calculate the two-argument Ackermann function for any input numbers, and so Ackermann is computable, but the output grows so fast (super-exponentially) that when the inputs get larger than about 5 the output number becomes so huge that no real computer, even if it was the size of the observable universe, could compute it. 

And the Busy Beaver Function grows even faster than Ackermann, in fact it grows faster than ANY computable function, that's why Busy Beaver is uncomputable. Even in theory an idealized perfect Turing Machine couldn't calculate the Busy Beaver Numbers except for the first 5 and maybe 6, much less a real computer. 
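The growth being described is easy to see directly. A sketch (mine; `ackermann` is just an illustrative helper name):

```python
import sys
sys.setrecursionlimit(20_000)  # the naive recursion gets deep quickly

def ackermann(m, n):
    """The two-argument Ackermann function: total and computable in
    principle, but it grows faster than any primitive recursive function."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 3), ackermann(3, 3))  # 9 61
# ackermann(4, 2) = 2**65536 - 3 already has 19,729 decimal digits;
# larger inputs outrun any physically realizable computation.
```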

   John K Clark    See what's on my new list at  Extropolis


Brent Meeker

unread,
Jul 11, 2024, 4:42:37 PM (12 days ago) Jul 11
to everyth...@googlegroups.com


On 7/11/2024 4:34 AM, John Clark wrote:
On Wed, Jul 10, 2024 at 11:46 PM Brent Meeker <meeke...@gmail.com> wrote:

>> Consciousness is the inevitable product of intelligence, it is not the cause of intelligence.
 
 I think that's wrong. 

You say I'm wrong and yet I 99% agree with what you say in the very next sentence, it would be 100% except that I'm not quite sure whether the pronoun "it" refers to intelligence or consciousness. 
Consciousness.
 
It is the cause of some instances of intelligence.  Imagining yourself in various scenarios and running them forward in imagination is very much the cause of one kind of intelligence, i.e. foresight.

Yes, foresight is the kind of intelligence that examines possible future scenarios and takes actions that increase the likelihood that the scenario that actually occurs is one that is desirable from its point of view. That's how a Chess program became a grandmaster in the 1990's. AlphaZero did the same thing when it beat the world champion human player at the game of GO, except that it didn't think (a.k.a. imagine) all possible scenarios but only moves that an intelligent opponent would likely make, including an opponent that was as intelligent as itself. That's how AlphaZero went from knowing nothing about GO, except for the few simple rules of the game, to being able to play GO at a superhuman level in just a few hours.    

>> it's a brute fact that consciousness is the way data feels when it is being processed.
 
That's false. 

Once more you say I'm wrong, but this time I agree 100% not 99% with what you say in the very next sentence.  

Lots of data is processed every day by machines that are not conscious and we "see" they are not conscious because they take no intelligent action based on the data being true. 

So like me you believe the Turing Test is not just a test for intelligence, it is also a test for consciousness. In fact, although imperfect, it is the only test for consciousness we have, or will ever have. So if a computer is behaving as intelligently as a human then it must be as conscious as a human. Probably.   

 John K Clark    See what's on my new list at  Extropolis

Brent Meeker

unread,
Jul 11, 2024, 4:58:54 PM (12 days ago) Jul 11
to everyth...@googlegroups.com


On 7/11/2024 5:01 AM, John Clark wrote:
On Wed, Jul 10, 2024 at 11:53 PM Brent Meeker <meeke...@gmail.com> wrote:
>> "My dictionary says the definition of "prerequisite"  is  "a thing that is required as a prior condition for something else to happen or exist". And it says the definition of "cause" is "a person or thing that gives rise to an action, phenomenon, or condition". So cause and prerequisite are synonyms."

A more careful reading of the definitions would tell you that a prerequisite does not give rise to an action; but its absence precludes the action.

OK. But if it's a brute fact that consciousness is the way data feels when it is being processed,
I disagree; and elsewhere you have agreed that many data processing machines are not conscious because they take no actions based on the data being true.   Data has to be processed in ways leading to intelligent action or decisions.


and if intelligent action requires data processing, then if by some magic (it has to be magic because neither science nor mathematics can help you with this) you knew that system X was not conscious, then you could correctly predict that its actions would not be intelligent.

Consciousness is a high-level description of the state of a system, that's why when I asked somebody "why did you do that?" sometimes "because I wanted to" is an adequate explanation. But sometimes I want a lower level more detailed explanation such as "I frowned when I took a bite of that food because it was much too salty". A neurophysiologist might want an even more detailed explanation involving neurons and synapses.  None of these explanations are wrong and all of them are consistent with each other. It would be correct to say that the reason a balloon gets bigger when I blow into it is because the pressure inside the balloon increases, but it would also be correct to say that the reason a balloon gets bigger when I blow into it is because there are more air molecules inside the balloon randomly hitting the inner surface.
Yes, I agree that explanations in terms of consciousness and explanations in terms of rationale may just be differences in level of description.  But perhaps I have a more expansive idea of consciousness than you do.  I think of the paramecium that swims to the left because the water is too salty on the right as being conscious.

Brent


John K Clark    See what's on my new list at  Extropolis

Brent Meeker

unread,
Jul 11, 2024, 7:01:35 PM (11 days ago) Jul 11
to everyth...@googlegroups.com


On 7/11/2024 12:49 PM, John Clark wrote:
On Thu, Jul 11, 2024 at 2:39 PM Brent Meeker <meeke...@gmail.com> wrote:

  My point was that consciousness doesn't require Turing completeness.

Maybe, you and I will never know for sure, but intelligence certainly does require Turing Completeness.
I know because I'm conscious and brains aren't Turing complete.

Brent

John K Clark    See what's on my new list at  Extropolis

Brent Meeker

unread,
Jul 11, 2024, 7:04:01 PM (11 days ago) Jul 11
to everyth...@googlegroups.com


On 7/11/2024 12:53 PM, John Clark wrote:


On Thu, Jul 11, 2024 at 2:43 PM Jason Resch <jason...@gmail.com> wrote:

> Turing completeness is not required for consciousness. The human brain (given its limited and faulty memory) wouldn't even meet the definition of being Turing complete.

Sometimes on some problems the human brain could be considered as being Turing Complete, otherwise we would never be able to do anything that was intelligent. 
??? How on Earth do you reach that conclusion?

Brent

Cosmin Visan

unread,
Jul 12, 2024, 4:09:29 AM (11 days ago) Jul 12
to Everything List
@Jason. Uuu... big boy believing in Santa Claus! Way to go!

On Tuesday 9 July 2024 at 14:50:10 UTC+3 Jason Resch wrote:


On Tue, Jul 9, 2024, 4:05 AM 'Cosmin Visan' via Everything List <everyth...@googlegroups.com> wrote:
So, where is Santa Claus ?

If he's possible in this universe he exists very far away. If he's not possible in this universe but possible in other universes then he exists in some subset of those universes where he is possible. If he's not logically possible he doesn't exist anywhere.


Also, does he bring presents to all the children in the world in 1 night ? How does he do that ?

He sprinkles fairy dust all over the planet (nano bot swarms) which travel down chimneys to self-assemble presents from ambient matter, after they scan the brains of sleeping children to see if they are naughty or nice and what present they hoped for.

Jason 



On Tuesday 9 July 2024 at 07:31:46 UTC+3 Jason Resch wrote:


On Mon, Jul 8, 2024, 6:38 PM 'Cosmin Visan' via Everything List <everyth...@googlegroups.com> wrote:
So based on your definition, Santa Claus exists.

I believe everything possible exists.

That is the idea this mail list was created to discuss, after all. (That is why it is called the "everything list")

Jason 



On Tuesday 9 July 2024 at 00:47:28 UTC+3 Jason Resch wrote:


On Mon, Jul 8, 2024, 5:17 PM 'Cosmin Visan' via Everything List <everyth...@googlegroups.com> wrote:
Brain doesn't exist.

Then it exists as an object in consciousness, which is as much as exist would mean under idealism. Rather than say things don't exist, I think it would be better to redefine what is meant by existence.


"Brain" is just an idea in consciousness.

Sure, and all objects exist in the mind of God. So "exist" goes back to meaning what it has always meant, as Markus Mueller said (roughly): "A exists for B, when changing the state of A can change the state of B, and vice versa, under certain auxiliary conditions."


See my papers, like "How Self-Reference Builds the World": https://philpeople.org/profiles/cosmin-visan


I have, and replied with comments and questions. You, however, dismissed them as me not having read your paper.

Have you seen my paper on how computational observers build the world? It reaches a similar conclusion to yours:


Jason 



On Monday 8 July 2024 at 23:35:12 UTC+3 Jason Resch wrote:


On Mon, Jul 8, 2024, 4:04 PM John Clark <johnk...@gmail.com> wrote:

On Mon, Jul 8, 2024 at 2:12 PM Jason Resch <jason...@gmail.com> wrote:

>Consciousness is a prerequisite of intelligence.

I think you've got that backwards, intelligence is a prerequisite of consciousness. And the possibility of intelligent ACTIONS is a  prerequisite for Darwinian natural selection to have evolved it.

I disagree, but will explain below.

 
One can be conscious without being intelligent,

Sure.

I define intelligence by something capable of intelligent action.

Intelligent action requires non random choice: choice informed by information from the environment.

Having information about the environment (i.e. perceptions) is consciousness. You cannot have perceptions without there being some process or thing to perceive them.

Therefore perceptions (i.e. consciousness) are a requirement and precondition of being able to perform intelligent actions.

Jason 

The Turing Test is not perfect, it has a lot of flaws, but it's all we've got. If something passes the Turing Test then it's intelligent and conscious, but if it fails the test then it may or may not be intelligent and or conscious. 

 You need to have perceptions (of the environment, or the current situation) in order to act intelligently. 

For intelligence to have evolved, and we know for a fact that it has, you not only need to be able to perceive the environment you also need to be able to manipulate it. That's why zebras didn't evolve great intelligence, they have no hands, so a brilliant zebra wouldn't have a great advantage over a dumb zebra, in fact he'd probably be at a disadvantage because a big brain is a great energy hog.  
  John K Clark    See what's on my new list at  Extropolis

John Clark

unread,
Jul 12, 2024, 6:25:23 AM (11 days ago) Jul 12
to everyth...@googlegroups.com
On Thu, Jul 11, 2024 at 7:04 PM Brent Meeker <meeke...@gmail.com> wrote:

>> Sometimes on some problems the human brain could be considered as being Turing Complete, otherwise we would never be able to do anything that was intelligent. 
??? How on Earth do you reach that conclusion?

I reached that conclusion because I know that anything that can process data, and the human brain can process data, can be emulated by a Turing Machine. And a Turing Machine is Turing Complete.

John K Clark    See what's on my new list at  Extropolis

John Clark

unread,
Jul 12, 2024, 7:25:38 AM (11 days ago) Jul 12
to everyth...@googlegroups.com
On Thu, Jul 11, 2024 at 4:58 PM Brent Meeker <meeke...@gmail.com> wrote:

 elsewhere you have agreed that many data processing machines are not conscious because they take no actions based on the data being true.  

I don't recall ever saying that! What I may have said is, something may be intelligent and thus conscious but I would have no way of knowing that unless it acted intelligently. Maybe a rock is processing data in some way that I don't understand and thus is conscious. But I doubt it.  

I think of the paramecium that swims to the left because the water is too salty on the right as being conscious.

Probably true. Intelligence is not an all-or-nothing matter and I have firsthand evidence that the same is true for consciousness: I'm more conscious when I try to solve a calculus problem than I am when I'm about to fall asleep.
 
 John K Clark    See what's on my new list at  Extropolis

Jason Resch

unread,
Jul 12, 2024, 9:28:02 AM (11 days ago) Jul 12
to Everything List


On Fri, Jul 12, 2024, 6:25 AM John Clark <johnk...@gmail.com> wrote:
On Thu, Jul 11, 2024 at 7:04 PM Brent Meeker <meeke...@gmail.com> wrote:

>> Sometimes on some problems the human brain could be considered as being Turing Complete, otherwise we would never be able to do anything that was intelligent. 
??? How on Earth do you reach that conclusion?

I reached that conclusion because I know that anything that can process data, and the human brain can process data, can be emulated by a Turing Machine. And a Turing Machine is Turing Complete.


Perhaps you mean the brain is "Turing emulable" i.e. computable here, rather than "Turing complete" (which is having the capacity to emulate any other Turing machine).

Jason
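To make the distinction concrete, here is a minimal Turing machine simulator (my sketch; all names are hypothetical). A fixed rule table like the one below is one particular machine; a Turing complete machine is one whose rule table can emulate any other table fed to it as input. The table here is the 2-state "busy beaver" champion mentioned earlier in the thread, which writes 4 ones and halts after 6 steps:

```python
def run_tm(rules, state="A", halt="H", max_steps=10_000):
    """Simulate a Turing machine on an initially blank (all-zero) tape.
    rules maps (state, read_symbol) -> (write_symbol, move, next_state)."""
    tape, pos, steps = {}, 0, 0
    while state != halt and steps < max_steps:
        write, move, state = rules[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += 1 if move == "R" else -1
        steps += 1
    return sum(tape.values()), steps  # (ones written, steps taken)

# The 2-state busy beaver champion machine.
bb2 = {
    ("A", 0): (1, "R", "B"), ("A", 1): (1, "L", "B"),
    ("B", 0): (1, "L", "A"), ("B", 1): (1, "R", "H"),
}
print(run_tm(bb2))  # (4, 6)
```

A brain (or laptop) can be *emulated* by some such machine without itself being a universal machine, which is the gap Jason is pointing at.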



John Clark

unread,
Jul 12, 2024, 10:57:34 AM (11 days ago) Jul 12
to everyth...@googlegroups.com
On Fri, Jul 12, 2024 at 9:28 AM Jason Resch <jason...@gmail.com> wrote:

>> I know that anything that can process data, and the human brain can process data, can be emulated by a Turing Machine. And a Turing Machine is Turing Complete.


Perhaps you mean the brain is "Turing emulable" i.e. computable here, rather than "Turing complete" (which is having the capacity to emulate any other Turing machine).

OK but by that definition nothing physical is Turing complete because there are an infinite number of Turing Machines and nothing in the non-abstract real world can emulate all of them. 


John K Clark    See what's on my new list at  Extropolis

Brent Meeker

unread,
Jul 12, 2024, 7:28:58 PM (10 days ago) Jul 12
to everyth...@googlegroups.com


On 7/12/2024 3:24 AM, John Clark wrote:
On Thu, Jul 11, 2024 at 7:04 PM Brent Meeker <meeke...@gmail.com> wrote:

>> Sometimes on some problems the human brain could be considered as being Turing Complete, otherwise we would never be able to do anything that was intelligent. 
??? How on Earth do you reach that conclusion?

I reached that conclusion because I know that anything that can process data, and the human brain can process data, can be emulated by a Turing Machine. And a Turing Machine is Turing Complete.

That says that there may be some (other) problems the human brain cannot solve while a Turing machine can.  So a Turing machine is more powerful than a human brain, therefore a human can be considered a Turing machine.  Invalid inference. 

Brent

John K Clark    See what's on my new list at  Extropolis

John Clark

unread,
Jul 13, 2024, 8:05:27 AM (10 days ago) Jul 13
to everyth...@googlegroups.com
On Fri, Jul 12, 2024 at 7:28 PM Brent Meeker <meeke...@gmail.com> wrote:

 So a Turing machine is more powerful than a human brain

Yes, anything your brain can do, there is a Turing Machine that can do it too, including committing all your errors; but there are lots of Turing Machines that can do lots of things that your brain cannot.  However your brain does have one advantage, it's a real physical thing, but a Turing Machine is not, it's more like a schematic diagram of the underlying logic of a brain or computer at the most detailed and fundamental level possible. A Turing Machine can't calculate anything unless it's actually constructed, and for that you need atoms.

 
therefore a human can be considered a Turing machine. 

Yes, you can be considered to be a Turing Machine, a particular Turing Machine, but you are NOT ALL Turing Machines 
 
Invalid inference. 
I don't think so.

John K Clark    See what's on my new list at  Extropolis

spudb...@aol.com

Jul 13, 2024, 8:20:36 AM
to everyth...@googlegroups.com
Related ARS Technica article titled: 

OpenAI reportedly nears breakthrough with “reasoning” AI, reveals progress framework

Five-level AI classification system probably best seen as a marketing exercise.

(More profoundly, OpenAI's 5-tier system for future capabilities. Looks like we're at '2'.)









John Clark

Jul 13, 2024, 12:32:23 PM
to everyth...@googlegroups.com
On Sat, Jul 13, 2024 at 8:20 AM 'spudb...@aol.com' via Everything List <everyth...@googlegroups.com> wrote:

OpenAI reportedly nears breakthrough with “reasoning” AI, reveals progress framework
Five-level AI classification system probably best seen as a marketing exercise.
(More profoundly, OpenAI's 5-tier system for future capabilities. Looks like we're at '2'.)


The interesting thing is that OpenAI says that while GPT-4 can answer questions about as well as a bright high school student can, GPT-5 will be able to correctly answer the sort of questions a PhD candidate will receive during the verbal defense of their thesis. And GPT-5 is only level two. As you point out, it's a 5-tier system.
 John K Clark    See what's on my new list at  Extropolis


 


Brent Meeker

Jul 13, 2024, 4:29:06 PM
to everyth...@googlegroups.com


On 7/13/2024 5:04 AM, John Clark wrote:
On Fri, Jul 12, 2024 at 7:28 PM Brent Meeker <meeke...@gmail.com> wrote:

 So a Turing machine is more powerful than a human brain

Yes, anything your brain can do there is a Turing Machine that can do it too, including committing all your errors; but there are lots of Turing Machines that can do lots of things that your brain cannot. However your brain does have one advantage: it's a real physical thing, but a Turing Machine is not; it's more like a schematic diagram of the underlying logic of a brain or computer at the most detailed and fundamental level possible. A Turing Machine can't calculate anything unless it's actually constructed, and for that you need atoms.
 
therefore a human can be consider a Turing machine. 

Yes, you can be considered to be a Turing Machine, a particular Turing Machine, but you are NOT ALL Turing Machines

All Turing machines have the same computational capability. 

"A Turing machine is a mathematical model of computation describing an abstract machine that manipulates symbols on a strip of tape according to a table of rules. Despite the model's simplicity, it is capable of implementing any computer algorithm."

Brent
 
Invalid inference. 
I don't think so.  John K Clark    See what's on my new list at  Extropolis


John Clark

Jul 13, 2024, 6:22:38 PM
to everyth...@googlegroups.com
On Sat, Jul 13, 2024 at 4:29 PM Brent Meeker <meeke...@gmail.com> wrote:

All Turing machines have the same computational capability. 
 
Well that certainly is not true! There is a Turing Machine for any computable task, but any PARTICULAR  Turing Machine has a finite number of internal states and can only do one thing. If you want something else done then you are going to have to use a Turing Machine with a different set of internal states.  

The number of n-state 2-symbol Turing Machines that exist is (4(n+1))^(2n). This is because for each of the 2n (state, symbol) pairs we have n+1 choices for the next state (the n states plus the halt state), 2 choices for which symbol to write, and 2 choices for which direction to move the read head. So for example there are 16,777,216 different three state Turing Machines, and 25,600,000,000 different four state Turing Machines.
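A quick sanity check of that count in Python (my own sketch, not from the thread):

```python
# Count of n-state, 2-symbol Turing Machines: each of the 2n
# (state, symbol) pairs independently chooses a symbol to write (2),
# a head direction (2), and a next state out of n + 1 (the n states
# plus halt), giving (4(n+1))^(2n) machines in total.
def num_machines(n: int) -> int:
    return (4 * (n + 1)) ** (2 * n)

print(num_machines(3))  # 16777216
print(num_machines(4))  # 25600000000
```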

   John K Clark    See what's on my new list at  Extropolis

Jason Resch

Jul 13, 2024, 7:44:39 PM
to Everything List


On Sat, Jul 13, 2024, 6:22 PM John Clark <johnk...@gmail.com> wrote:
On Sat, Jul 13, 2024 at 4:29 PM Brent Meeker <meeke...@gmail.com> wrote:

All Turing machines have the same computational capability. 
 
Well that certainly is not true! There is a Turing Machine for any computable task, but any PARTICULAR  Turing Machine has a finite number of internal states and can only do one thing. If you want something else done then you are going to have to use a Turing Machine with a different set of internal states.  

The number of internal states a Turing machine has is unrelated to a Turing machine's universality. Think of internal states as the instruction set in a CPU. A CPU can only be in so many states, but pair it with a memory and a loop, and it can compute anything.

I think what you are saying makes sense if you consider a Turing machine running a particular fixed program. Then the Turing machine acts like some particular machine. And if you want it to act differently, you need to provide a different program.
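The CPU analogy can be made concrete with a minimal sketch: a fixed machine whose instruction set never changes, paired with a loop, computes different things depending only on the program it is handed (the instruction names here are invented for illustration):

```python
# A fixed "CPU": the rule table below never changes; only the program
# (a list of instructions) and the input vary.
def run(program, x):
    acc, pc = x, 0          # accumulator and program counter
    while pc < len(program):
        op, arg = program[pc]
        if op == "add":
            acc += arg
        elif op == "mul":
            acc *= arg
        elif op == "jnz" and acc != 0:
            pc = arg        # jump if accumulator is non-zero
            continue
        pc += 1
    return acc

print(run([("add", 3), ("mul", 2)], 5))   # 16
print(run([("add", -1), ("jnz", 0)], 3))  # loops down to 0
```

The same machine acts like an adder, a multiplier, or a loop purely as a function of its program, which is the sense in which providing a different program differs from building a different machine.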

Jason 



The number of n-state 2-symbol Turing Machines that exist is (4(n+1))^(2n). This is because for each of the 2n (state, symbol) pairs we have n+1 choices for the next state (the n states plus the halt state), 2 choices for which symbol to write, and 2 choices for which direction to move the read head. So for example there are 16,777,216 different three state Turing Machines, and 25,600,000,000 different four state Turing Machines.

   John K Clark    See what's on my new list at  Extropolis


Brent Meeker

Jul 13, 2024, 8:37:29 PM
to everyth...@googlegroups.com


On 7/13/2024 3:21 PM, John Clark wrote:
On Sat, Jul 13, 2024 at 4:29 PM Brent Meeker <meeke...@gmail.com> wrote:

All Turing machines have the same computational capability. 
 
Well that certainly is not true! There is a Turing Machine for any computable task, but any PARTICULAR  Turing Machine has a finite number of internal states and can only do one thing. If you want something else done then you are going to have to use a Turing Machine with a different set of internal states. 
Or a different tape/program.

"A Turing machine is a mathematical model of computation describing an abstract machine that manipulates symbols on a strip of tape according to a table of rules. Despite the model's simplicity, it is capable of implementing any computer algorithm."

Brent

The number of n-state 2-symbol Turing Machines that exist is (4(n+1))^(2n). This is because for each of the 2n (state, symbol) pairs we have n+1 choices for the next state (the n states plus the halt state), 2 choices for which symbol to write, and 2 choices for which direction to move the read head. So for example there are 16,777,216 different three state Turing Machines, and 25,600,000,000 different four state Turing Machines.

   John K Clark    See what's on my new list at  Extropolis


John Clark

Jul 13, 2024, 8:48:53 PM
to everyth...@googlegroups.com
On Sat, Jul 13, 2024 at 8:37 PM Brent Meeker <meeke...@gmail.com> wrote:


>> Well that certainly is not true! There is a Turing Machine for any computable task, but any PARTICULAR  Turing Machine has a finite number of internal states and can only do one thing. If you want something else done then you are going to have to use a Turing Machine with a different set of internal states. 

> Or a different tape/program.
"A Turing machine is a mathematical model of computation describing an abstract machine that manipulates symbols on a strip of tape according to a table of rules. Despite the model's simplicity, it is capable of implementing any computer algorithm."


Yes exactly. As I said before, if you want a Turing Machine to do something different then you've got to pick a Turing machine with a different set of internal states, or to say the same thing with different words, you've got to program it differently. For every computable function there is a Turing Machine that will compute it if it has the correct set of internal states.


John K Clark    See what's on my new list at  Extropolis



The number of n-state 2-symbol Turing Machines that exist is (4(n+1))^(2n). This is because for each of the 2n (state, symbol) pairs we have n+1 choices for the next state (the n states plus the halt state), 2 choices for which symbol to write, and 2 choices for which direction to move the read head. So for example there are 16,777,216 different three state Turing Machines, and 25,600,000,000 different four state Turing Machines.

   

Brent Meeker

Jul 13, 2024, 9:42:10 PM
to everyth...@googlegroups.com


On 7/13/2024 5:48 PM, John Clark wrote:
On Sat, Jul 13, 2024 at 8:37 PM Brent Meeker <meeke...@gmail.com> wrote:


>> Well that certainly is not true! There is a Turing Machine for any computable task, but any PARTICULAR  Turing Machine has a finite number of internal states and can only do one thing. If you want something else done then you are going to have to use a Turing Machine with a different set of internal states. 

> Or a different tape/program.
"A Turing machine is a mathematical model of computation describing an abstract machine that manipulates symbols on a strip of tape according to a table of rules. Despite the model's simplicity, it is capable of implementing any computer algorithm."


Yes exactly. As I said before, if you want a Turing Machine to do something different then you've got to pick a Turing machine with a different set of internal states, or to say the same thing with different words, you've got to program it differently.
The machine is universal.  You don't need a different machine with different internal states.  A Turing Machine, a single mechanism, will compute anything computable given the appropriate program on the tape.  It doesn't have to have some different set of internal states...only a different tape.  That's the whole point of Turing, a single computer is universal.  It is NOT saying the same thing to say you've got to program it differently.  This is a Turing machine and it can compute anything:

[image of a Turing machine]


For every computable function there is a Turing Machine that will compute it if it has the correct set of internal states.

B.S.  That's trivially true of any function.  Turing wouldn't be famous if that's all he proved.

Brent

John Clark

Jul 13, 2024, 9:46:01 PM
to everyth...@googlegroups.com
Well Brent, I tried to explain it to you; I don't know what else to say except you're simply wrong. 

John K Clark


John Clark

Jul 13, 2024, 9:51:27 PM
to everyth...@googlegroups.com
Yes it's possible to have a universal Turing machine in the sense that you can run any program by just changing the tape, however ONLY if that tape has instructions for changing the set of states  that the machine can be in. 

John K Clark 

PGC

Jul 13, 2024, 9:54:02 PM
to Everything List
On Sunday, July 14, 2024 at 3:51:27 AM UTC+2 John Clark wrote:
Yes it's possible to have a universal Turing machine in the sense that you can run any program by just changing the tape, however ONLY if that tape has instructions for changing the set of states  that the machine can be in. 


It still boggles my mind that matter is Turing-complete, and this despite parts of physics not being Turing emulable. We can implement Turing Machines with matter, and even with the constraints of the physical world it appears to be the basic principle of brains, cells, and computers.

Just for clarity’s sake, we should distinguish the idea of a Turing/universal machine from some concrete physical implementation, like some computer, tape machine, or LLM running on my table/in the cloud: By Turing machine, I mean a machine u such that phi_u(x, y) = phi_x(y). We call “u” the computer, x is named the program, and y is the data. Of course, (x, y) is supposed to be a number (coding the two numbers x and y). And yeah, you can specify it with infinite tape, print, read, write heads, and many other formalisms that have proven equivalent etc., but the class of functions is the same: the set of partially computable functions from N to N with the standard definitions and axioms.
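A toy rendering of phi_u(x, y) = phi_x(y) in Python, with source text standing in for the number coding the program x (an illustrative sketch, not a formal construction):

```python
# u is universal: u(x, y) applies the program coded by x to the data y.
def u(x: str, y):
    env = {}
    exec(x, env)            # "decode" the program x
    return env["phi"](y)    # apply it to the data y

succ = "def phi(n): return n + 1"
dbl = "def phi(n): return n + n"
print(u(succ, 41))  # 42
print(u(dbl, 21))   # 42
```

The single function u never changes; all the variety comes from its first argument, which is the content of the equation phi_u(x, y) = phi_x(y).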

There are a lot of posts distinguishing this computer here, that LLM there, that brain in my head etc. ostensively, as if we knew what we were talking about. If we believe we are Turing emulable at some level of description, then we are not able to distinguish between ourselves and our experiences when emulated in say Python, which is emulated by Rust, which is emulated by Swift, which is emulated by Kotlin, which is emulated by Go, which is emulated by Elixir, which is emulated by Julia, which is emulated by TypeScript, which is emulated by R, which is emulated by a physical universe, itself emulated by arithmetic (e.g. assuming arithmetical realism like Russell and Bruno), from “our self” emulated in Rust, emulated by Python, emulated by Go, emulated by Swift, emulated by Julia, emulated by Elixir, emulated by Kotlin, emulated by R, emulated by TypeScript, emulated by arithmetic, emulated by a physical universe… 

That’s the difficulty of defining what a physical instantiation of a computation is (See Maudlin and MGA). For if we could distinguish those computations, we’d have something funky in consciousness, which would not be Turing emulable, falsifying the arithmetical realism type approaches. And if you have that, I’d like to know everything about you, your diet, reading habits, pets, family, beverages, medicines etc. and whether something like gravity is Turing emulable, even if I guess it isn’t. Send me that message in private though and don’t publish anything. 

Brent Meeker

Jul 13, 2024, 10:05:56 PM
to everyth...@googlegroups.com
You could quote some authority...if there were one that agreed with you.

Brent

Brent Meeker

Jul 13, 2024, 10:34:20 PM
to everyth...@googlegroups.com


On 7/13/2024 6:50 PM, John Clark wrote:
Yes it's possible to have a universal Turing machine in the sense that you can run any program by just changing the tape,
And in fact that's the definition of Turing machine.


however ONLY if that tape has instructions for changing the set of states  that the machine can be in.

Not "changing the set of states"; the tape can only change the internal state within a fixed finite set.

Brent

Jason Resch

Jul 13, 2024, 11:42:23 PM
to Everything List


On Sat, Jul 13, 2024, 9:54 PM PGC <multipl...@gmail.com> wrote:


On Sunday, July 14, 2024 at 3:51:27 AM UTC+2 John Clark wrote:
Yes it's possible to have a universal Turing machine in the sense that you can run any program by just changing the tape, however ONLY if that tape has instructions for changing the set of states  that the machine can be in. 


It still boggles my mind that matter is Turing-complete.


Turing completeness, as incredible as it is, is (remarkably) easy to come by. You can achieve it with addition and multiplication, with billiard balls, with finite automata (rule 110, or game of life), with artificial neurons, etc. That something as sophisticated as matter could achieve it is to me less surprising than the fact that these far simpler things can.
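Rule 110, one of the simple Turing-complete systems mentioned above, fits in a few lines (a sketch; the completeness proof additionally requires a suitable infinite background pattern, which this finite circular row does not provide):

```python
RULE = 110  # bit i of 110 is the next value for neighborhood pattern i

def step(cells):
    # One step of the rule 110 cellular automaton on a circular row:
    # each cell's next value depends on (left, self, right).
    n = len(cells)
    return [
        (RULE >> (4 * cells[i - 1] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 15 + [1]
for _ in range(5):
    print("".join(".#"[c] for c in row))
    row = step(row)
```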


And this despite parts of physics being not Turing emulable.

Finite physical systems can be simulated to any desired degree of accuracy, and moreover all known laws of physics are computable. Which parts of physics do you refer to when you say there are parts that aren't Turing emulable?

Jason 

We can implement Turing Machines with matter, and even with constraints in the physical world, it appears to be the basic principle of brains, cells, and computers.

Just for clarity’s sake, we should distinguish the idea of a Turing/universal machine from some concrete physical implementation, like some computer, tape machine, or LLM running on my table/in the cloud: By Turing machine, I mean a machine u such that phi_u(x, y) = phi_x(y). We call “u” the computer, x is named the program, and y is the data. Of course, (x, y) is supposed to be a number (coding the two numbers x and y). And yeah, you can specify it with infinite tape, print, read, write heads, and many other formalisms that have proven equivalent etc., but the class of functions is the same: the set of partially computable functions from N to N with the standard definitions and axioms.

There are a lot of posts distinguishing this computer here, that LLM there, that brain in my head etc. ostensively, as if we knew what we were talking about. If we believe we are Turing emulable at some level of description, then we are not able to distinguish between ourselves and our experiences when emulated in say Python, which is emulated by Rust, which is emulated by Swift, which is emulated by Kotlin, which is emulated by Go, which is emulated by Elixir, which is emulated by Julia, which is emulated by TypeScript, which is emulated by R, which is emulated by a physical universe, itself emulated by arithmetic (e.g. assuming arithmetical realism like Russell and Bruno), from “our self” emulated in Rust, emulated by Python, emulated by Go, emulated by Swift, emulated by Julia, emulated by Elixir, emulated by Kotlin, emulated by R, emulated by TypeScript, emulated by arithmetic, emulated by a physical universe… 

That’s the difficulty of defining what a physical instantiation of a computation is (See Maudlin and MGA). For if we could distinguish those computations, we’d have something funky in consciousness, which would not be Turing emulable, falsifying the arithmetical realism type approaches. And if you have that, I’d like to know everything about you, your diet, reading habits, pets, family, beverages, medicines etc. and whether something like gravity is Turing emulable, even if I guess it isn’t. Send me that message in private though and don’t publish anything. 


John Clark

Jul 14, 2024, 8:40:10 AM
to everyth...@googlegroups.com
On Sat, Jul 13, 2024 at 10:34 PM Brent Meeker <meeke...@gmail.com> wrote:
> The machine is universal.  You don't need a different machine with different internal states.

First of all, the very definition of "a different Turing Machine" is a machine with a different set of internal states. And there is not just one Turing machine, there are an infinite number of them. There are 64 one state two symbol (zero and one) Turing Machines, 20,736 two state, 16,777,216 three state, 25,600,000,000 four state, and 63,403,380,965,376 five state two symbol Turing Machines.

A Turing Machine with different sets of internal states will exhibit different behavior even if given identical inputs. I think what confuses you is that it is possible to have a machine in which the tape not only provides the program the machine should work on but also the set of internal states that the machine has. In a way you could think of the tape as providing not only the program but also the wiring diagram of the computer. A universal Turing Machine is in an undefined state until the input tape, or something else, puts it in one specific state.

Consider the Busy Beaver function: if you feed a tape of all zeros into all 4-state Turing Machines and ask "which of those 25,600,000,000 machines will print the most ones before stopping" (it's important that the machine eventually stops), you will find this is not an easy question. All the machines are operating on identical input tapes (all zeros) but they behave differently; some stop almost immediately, others just keep printing 1 forever, but for others the behavior is vastly more complicated. It turns out that the winner is a set of states that prints out 13 ones after making 107 moves. 
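That 4-state champion can be run directly. The transition table below is the well-known BB(4) winner as given in the standard references (it is not spelled out in the post itself):

```python
# BB(4) champion: (write, move, next state) for each (state, symbol).
# "H" is the halt state; moves are +1 (right) and -1 (left).
TABLE = {
    ("A", 0): (1, +1, "B"), ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"), ("B", 1): (0, -1, "C"),
    ("C", 0): (1, +1, "H"), ("C", 1): (1, -1, "D"),
    ("D", 0): (1, +1, "D"), ("D", 1): (0, +1, "A"),
}

def busy_beaver(table):
    # Run the machine on an all-zero tape (a dict of written cells),
    # counting the halting transition as a step, as is conventional.
    tape, head, state, steps = {}, 0, "A", 0
    while state != "H":
        write, move, state = table[(state, tape.get(head, 0))]
        tape[head] = write
        head += move
        steps += 1
    return sum(tape.values()), steps

print(busy_beaver(TABLE))  # (13, 107)
```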

A five state Turing Machine behaves differently; we just found out that a particular set of internal states prints 4098 ones after making 47,176,870 moves. I wouldn't be surprised if the sixth Busy Beaver number is not computable; we know for a fact that any Busy Beaver number for a 745-state Turing Machine or larger is not computable. Right now all we know about BB(6) is that it's larger than 10^10^10^10^10^10^10^10^10^10^10^10^10^10^10.

The point of all this is that Turing Machines with different sets of internal states behave very differently. 

John K Clark    See what's on my new list at  Extropolis





Quentin Anciaux

Jul 14, 2024, 9:35:59 AM
to everyth...@googlegroups.com
