My book "I Am" published on amazon


za_w...@yahoo.com

7 Apr 2019, 19:10:28
to Everything List
My book "I Am" has been published on amazon. It deals with my ideas about the emergent structure of consciousness and the nature of self-reference which gives birth to the emergent structure, which as far as I know, are new ideas, so they might prove useful in opening new paths in the attempt of obtaining a theory of consciousness.


(also available on the other national Amazon websites)

Philip Thrift

9 Apr 2019, 01:30:52
to Everything List

Although his book isn't out yet, how do you think your approach compares to Philip Goff's:


He has written a lot about it via @Philip_Goff.

Goff is sort of the "next generation" Galen Strawson:


(If you read the above article by Strawson and replace "physical..." with "material..." everywhere, it reads OK.)

- pt

Bruno Marchal

10 Apr 2019, 19:55:55
to everyth...@googlegroups.com
Hi Cosmin,


It seems your conclusion fits well with the conclusion already given by the universal machine (the Gödel-Löbian one, i.e. the machine that already knows that it is Turing universal, like ZF, PA, or the combinators plus some induction principle).

Self-reference is indeed capital, but you seem to miss the mathematical theory of self-reference brought by the work of Gödel and Löb, and Solovay's ultimate formalisation of it at the first-order-logic level. You cite Penrose, who is dead wrong on this.

In fact incompleteness is a chance for mechanism, as it provides almost directly a theory of consciousness, if you are willing to agree that consciousness is true, indubitable, immediately knowable, non-provable and non-definable, as each Löbian machine is confronted with such a proposition all the "time". But this also entails, as I have shown, that the whole of physics has to be justified by some of the modes of self-reference, making physics into a sub-branch of elementary arithmetic. This works in the sense that at the three places where physics should appear we get a quantum logic, and this with the advantage of a transparent clear cut between the qualia (not sharable) and the quanta (sharable in the first-person-plural sense).

You seem to have a good (I mean correct with respect to Mechanism) insight on consciousness, but you seem to have wrong information on the theory of the digital machines/numbers and the role of Gödel. Gödel's theorem is really a chance for the Mechanist theory, as it explains that digital machines are not predictable, are full of non-communicable subjective knowledge and beliefs, and are capable of defeating every reductionist theory that we can make of them. Indeed, they are literally universal dissidents, and they are born with a conflict between 8 modes of self-apprehension. In my last paper(*) I argue that they can be enlightened, and this also shows that enlightenment and blasphemy are very close, and that religion easily leads to a theological trap making the machine inconsistent, except by staying mute, or referring to Mechanism (which is itself highly unprovable by the consistent machine).

Bruno





za_w...@yahoo.com

15 Apr 2019, 14:16:12
to Everything List
Where can I find Philip Goff's ideas? Maybe you can summarize them here so we can discuss them. But to answer your question, my book deals specifically with the emergent structure of consciousness and the nature of self-reference, so it is a rather specialized book. It is not your everyday endless "materialism vs idealism" debate. In my book I actually do something to move the science of consciousness further, by doing real science of consciousness. So I guess my book cannot be compared too closely with other books out there that waste energy in endless debates instead of actually doing something.

Btw, I also gave a presentation at the Science & Nonduality conference last year, where I present The Emergent Structure of Consciousness:

https://www.youtube.com/watch?v=6jMAy6ft-ZQ

za_w...@yahoo.com

15 Apr 2019, 14:28:22
to Everything List
Hmm... the thing is that what I'm arguing for in the book is that self-reference is unformalizable, so there can be no mathematics of self-reference. More than this, self-reference is not some concept in a theory; it is us: each and every one of us is a form of manifestation of self-reference. Self-reference is an eternal logical structure that eternally looks-back-at-itself. And this looking-back-at-itself automatically generates a subjective ontology, an "I am". In other words, the very definition of the concept of "existence" is the looking-back-at-itself of self-reference. So existence can only be subjective, so all that can exist is consciousness. I discuss in the book how the looking-back-at-itself implies 3 properties: identity (self-reference is itself, x=x), inclusion (self-reference is included in itself, x<x) and transcendence (self-reference is more than itself, x>x). And all these apparently contradictory properties hold at the same time. So x=x, x<x, x>x, all at the same time. But there is no actual contradiction here, because self-reference is unformalizable. The reason why I reach such weird conclusions is explored throughout the book, where a phenomenological analysis of consciousness is carried out and it is shown how consciousness is structured as an emergent holarchy of levels (a holarchy meaning that a higher level includes the lower levels), and I conclude that this can only happen if there is an entity called "self-reference" which has the above-mentioned properties. So, as you can see, there pretty much cannot be a mathematics of self-reference.

I will also be presenting on self-reference at The Science of Consciousness conference this year in Interlaken, Switzerland, so if you are there we can talk more about these issues.


Philip Thrift

15 Apr 2019, 16:57:00
to Everything List


Philip Goff is the primary author of the SEP article on the general subject


while he (@Philip_Goff on Twitter, links to his web site and videos there) has written on micropsychism (also reviewed above)

    Cosmopsychism, Micropsychism and the Grounding Relation

His book (likely mostly what he has been presenting on his Twitter feed in the last year) will be called 

     Galileo's Error
     Foundations for a New Science of Consciousness

Goff is mostly in the same "camp" as Galen Strawson

     Consciousness Isn't a Mystery. It's Matter.
 
Experience is a first-class property of matter. But at what levels and in what configurations of matter is the question.

This is pretty much the opposite of the "emergence" view:

    Panpsychism vs. Emergentism


- pt

Cosmin Visan

15 Apr 2019, 18:34:18
to Everything List
"Matter" is just an idea in consciousness.

Brent Meeker

15 Apr 2019, 21:44:22
to everyth...@googlegroups.com
You seem to make self-reference into something esoteric.   Every Mars Rover knows where it is, the state of its batteries, its instruments, its communications link, what time it is, what its mission plan is.    Whether it is "formalizable" or not would seem to depend on choosing the right formalization to describe what engineers already create.
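As a toy illustration of that kind of engineered self-monitoring (a hypothetical sketch in Python, not code from any actual rover; all names and numbers are made up):

    # Toy self-monitoring: the rover object reports facts about its own state.
    import datetime

    class Rover:
        def __init__(self):
            self.position = (14.57, 175.48)    # made-up lat/long
            self.battery_pct = 72.0
            self.mission_plan = ["drive 10 m north", "drill sample", "uplink data"]

        def self_report(self):
            # The system queries and reports on itself.
            return {
                "position": self.position,
                "battery_pct": self.battery_pct,
                "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
                "next_step": self.mission_plan[0],
            }

    print(Rover().self_report())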

Brent

spudb...@aol.com

16 Apr 2019, 00:25:17
to everyth...@googlegroups.com
"What is Mind? not matter,
What is matter? Never mind!" -The Tao of Homer


Philip Thrift

16 Apr 2019, 02:24:46
to Everything List
So, no need to apply? :)



Seeking Research Fellows in Type Theory and Machine Self-Reference


The Machine Intelligence Research Institute (MIRI) is accepting applications for a full-time research fellow to develop theorem provers with self-referential capabilities, beginning by implementing a strongly typed language within that very language. The goal of this research project will be to help us understand autonomous systems that can prove theorems about systems with similar deductive capabilities. Applicants should have experience programming in functional programming languages, with a preference for languages with dependent types, such as Agda, Coq, or Lean.

MIRI is a mathematics and computer science research institute specializing in long-term AI safety and robustness work. Our offices are in Berkeley, California, near the UC Berkeley campus.


Type Theory in Type Theory

Our goal with this project is to build tools for better modeling reflective reasoning in software systems, as with our project modeling the HOL4 proof assistant within itself. There are Gödelian reasons to think that self-referential reasoning is not possible in full generality. However, many real-world tasks that cannot be solved in full generality admit of effective mostly-general or heuristic approaches. Humans, for example, certainly succeed in trusting their own reasoning in many contexts.

There are a number of tools missing in modern-day theorem provers that would be helpful for studying self-referential reasoning. First among these is theorem provers that can construct proofs about software systems that make use of a very similar theorem prover. To build these tools in a strongly typed programming language, we need to start by writing programs and proofs that can make reference to the type of programs and proofs in the same language.

Type theory in type theory has recently received a fair amount of attention. James Chapman’s work is pushing in a similar direction to what we want, as is Matt Brown and Jens Palsberg’s, but these projects don’t yet give us the tools we need. (F-omega is too weak a logic for our purposes, and methods like Chapman’s don’t get us self-representations.)

This is intended to be an independent research project, though some collaborations with other researchers may occur. Our expectation is that this will be a multi-year project, but it is difficult to predict exactly how difficult this task is in advance. It may be easier than it looks, or substantially more difficult.

Depending on how the project goes, researchers interested in continuing to work with us after this project’s completion may be able to collaborate on other parts of our research agenda or propose their own additions to our program.


- pt

Cosmin Visan

16 Apr 2019, 04:22:05
to Everything List
esoteric = "intended for or likely to be understood by only a small number of people with a specialized knowledge or interest." According to this definition, I'm not making self-reference esoteric. On the contrary, since I devote a whole book to it, the intention is to make self-reference understood by everyone. Probably you mean something else by esoteric, something like "out-of-this-world". But this again is not the case, because self-reference is the source of the entire existence, so it is pretty much part of the world.

Also, your example with the Mars Rover is faulty, because the rover doesn't know anything. Knowledge is something that exists in consciousness. Only consciousnesses know things. And things indeed are formal entities, but the process of knowing itself is not. Actually, knowledge can be formal precisely because the process of knowing is unformalizable.

Cosmin Visan

16 Apr 2019, 04:28:14
to Everything List
Yes, no need to apply. They are using the concept of self-reference in a misleading way. The true meaning of self-reference is an entity that refers to itself. There are several problems with the way in which they are using the concept. The first problem is that "machine" is not an entity. "Machine" is just an idea in consciousness; it doesn't have an independent existence, it doesn't have any ontological status, it doesn't exist as an entity. And since it doesn't exist, it cannot refer to itself, or for that matter do anything. Only consciousness (and its forms of manifestation: qualia) has ontological status.

The second issue is that the way self-reference refers to itself is to incorporate itself in the very act of referring. Basically, the observer, the observed, and the act of observation are all one and the same thing. I'm pretty sure you cannot think of a machine in these terms. So a "self-referential machine" is just wordplay. It doesn't have anything in common whatsoever with the true characteristics of the true self-reference.

Bruno Marchal

16 Apr 2019, 08:17:36
to everyth...@googlegroups.com
On 15 Apr 2019, at 20:28, za_wishy via Everything List <everyth...@googlegroups.com> wrote:

Hmm... the thing is that what I'm arguing for in the book is that self-reference is unformalizable,


With mechanism, third-person self-reference is formalisable, and from this we can prove that first-person self-reference is not formalisable in the language of the machine concerned, but is "meta-formalisable" by using a reference to truth (itself not formalisable). The same occurs for the notions of qualia, consciousness, and many mental and theological attributes.
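The formal backbone being invoked here can be stated compactly. As a standard summary (not Bruno's own notation), the third-person logic of self-reference is the provability logic GL:

    % GL, the modal logic of formal provability. Solovay (1976) showed it is
    % exactly what a theory like PA can prove about its own provability box.
    \begin{align*}
      \text{(K)}\quad     & \Box(p \to q) \to (\Box p \to \Box q) \\
      \text{(L\"ob)}\quad & \Box(\Box p \to p) \to \Box p \\
      \text{(Nec)}\quad   & \text{from } \vdash p \text{ infer } \vdash \Box p
    \end{align*}

Gödel's second incompleteness theorem is the special case of the Löb axiom with p taken to be a contradiction.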





so there can be no mathematics of self-reference. More than this, self-reference is not some concept in a theory, but it is us, each and everyone of us is a form of manifestation of self-reference. Self-reference is an eternal logical structure that eternally looks-back-at-itself. And this looking-back-at-itself automatically generates a subjective ontology, an "I am”.

That is good insight, well recovered by the machine about its first person self. It is akin to the inner god of the neoplatonist.



In other words, the very definition of the concept of "existence" is the looking-back-at-itself of self-reference.

That type of existence is phenomenological. With mechanism, we assume only the natural numbers (or the terms of any Turing-complete theory), then we derive the first-person self-reference, including the physical reality, which appears to be a first-person-plural notion. Physical reality is partially a subjective phenomenon.






So, existence can only be subjective, so all that can exists is consciousness.


I see this as a criticism of your theory. It is almost self-defeating. My goal was to understand matter and consciousness from propositions on which (almost) everybody agrees, and with mechanism, elementary arithmetic is enough.





I talk in the book how the looking-back-at-itself implies 3 properties: identity (self-reference is itself, x=x),


x = x is an identity axiom. I don’t see reference there.




inclusion (self-reference is included in itself, x<x) and transcendence (self-reference is more than itself, x>x).

OK. (Except the tiny formula, which does not make much sense to me, and seems to assume a lot of things.) But with mechanism we get 8 notions of self, and transcendence is indeed derived from them.





And all these apparently contradictory properties are happening all at the same time. So, x=x, x<x, x>x all at the same time.


Without giving a theory or at least a realm, it is hard to figure out what you mean.





But there is no actual contradiction here, because self-reference is unformalizable. The reason why I get to such weird conclusions is explored throughout the book where a phenomenological analysis of consciousness is done and it is shown how it is structured on an emergent holarchy of levels, a holarchy meaning that a higher level includes the lower levels, and I conclude that this can only happen if there is an entity called "self-reference" which has the above mentioned properties. So as you can see, there pretty much cannot be a mathematics of self-reference.

But such theories exist. Even the fact that the first person self-reference is not formalisable is provable in a meta-theory. 

Self-reference is where mathematical logic has obtained many surprising results, and with mechanism they are somehow directly usable. Not using them requires some non-mechanist hypothesis, for which there is no evidence, and it looks like adding complexity without solving a (scientific) problem.

Bruno






Bruno Marchal

16 Apr 2019, 08:19:34
to everyth...@googlegroups.com
On 16 Apr 2019, at 00:34, 'Cosmin Visan' via Everything List <everyth...@googlegroups.com> wrote:

"Matter" is just an idea in consciousness.

Of course, Mechanism and the "sound" universal machine agree with you, except that the "just" is a bit too much. You need to derive the physical laws from your theory of consciousness, so that you can test your theory by comparing it to the physics inferred from observation.

Bruno






Bruno Marchal

16 Apr 2019, 08:25:40
to everyth...@googlegroups.com
On 16 Apr 2019, at 10:22, 'Cosmin Visan' via Everything List <everyth...@googlegroups.com> wrote:

esoteric = "intended for or likely to be understood by only a small number of people with a specialized knowledge or interest." According to this definition, I'm not making self-reference esoteric. On the contrary, since I devote a whole book to it, the intention is to make self-reference to be understood by everyone. Probably you want to mean something else by esoteric, something like "out-of-this-world". But this again is not the case, because self-reference is the source of the entire existence, so it is pretty much part of the world.

Also, your example with the Mars Rover is faulty, because the rover doesn't know anything. Knowledge is something that exists in consciousness. Only consciousnesses know things. And things indeed are formal entities, but the process of knowing itself is not. Actually, knowledge can be formal precisely because the processes of knowing is unformalizable.


As I said, the machine's notion of knowledge is not formalisable, but the (Löbian, rich) machine already knows that. The universal Löbian machine knows she has a soul, and knows that her soul is not a machine, nor anything describable in third-person terms. This has been proved using the standard epistemological definitions.

How can you argue that the Rover has no knowledge, when you say that knowledge is not formalisable?

Introducing some fuzziness to claim something negative about a relation of the consciousness/machine type is a bit frightening. It reminds me of the older sophisticated Catholic "reasoning" used to assert that Indians have no soul.

Bruno





Bruno Marchal

16 Apr 2019, 08:37:53
to everyth...@googlegroups.com
On 16 Apr 2019, at 10:28, 'Cosmin Visan' via Everything List <everyth...@googlegroups.com> wrote:

Yes, no need to apply. They are using the concept of self-reference in a misleading way. The true meaning of self-reference is an entity that refers to itself. There are several problems with the way in which they are using the concept. First problem is that "machine" is not an entity. "Machine" is just an idea in consciousness, it doesn't have an independent existence, it doesn't have any ontological status, it doesn't exist as an entity. And since it doesn't exist, it cannot refer to itself, or for that matter it cannot do anything. Only consciousness (and its forms of manifestation: qualia) has ontological status.


If we accept the Church-Turing thesis, machine, computation and computable are not only very well defined, but they are defined in elementary arithmetic. No need to assume anything more, besides some invariance principle for consciousness.




The second issue is that the way self-reference refers to itself is to incorporate itself in the very act of referring.

But that is what is made possible by a famous result by Kleene in mathematical logic or theoretical computer science.

The basic idea is very simple: to make a machine that refers to itself, like M(y) = F(y, "M"), you build the machine D, which sends x to F(y, 'x'x''), and apply it to its own description: D'D' = F(y, 'D'D''). I can explain more on this.
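A runnable sketch of that diagonal construction, in Python (F, y and D below are illustrative stand-ins for the symbols in the formula, not anyone's actual code):

    # Diagonalisation: D applied to its own description builds a description
    # of that very application, so F receives a reference to the computation
    # that is being performed.
    y = "some input"

    def F(y, description):
        # Any F would do; this one just reports on what it was handed.
        return f"F got y={y!r} and a {len(description)}-character description of its caller"

    d_source = "lambda s: F(y, '(' + s + ')(' + repr(s) + ')')"
    D = eval(d_source)  # build the machine D from its own textual description

    print(D(d_source))  # the string passed to F describes exactly this call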







Basically, the observer, the observed, and the act of observation are all one and the same thing.


Observation is still another thing, which you have to relate to the notions of truth, belief, knowledge, etc. When this is done with the standard definitions recast in theoretical computer science, we find a quantum-logical formalism at the exact place where we must find the structure of the observable reality. This was for me a confirmation that Mechanism is plausible, and materialism is not.






I'm pretty much you cannot think of a machine in these terms. So a "self-referential machine" is just words-play.

Of course I disagree completely. 



It doesn't have anything in common whatsoever with the true characteristics of the true self-reference.

What is true is that we should not confuse third-person self-reference, which is mathematicalisable, if I can say so, with first-person self-reference, which is not, but provably so in the mechanist meta-theory. And yes, we have to invoke "truth" here, but here we can use the standard notion of truth discovered by Tarski.

Bruno






Philip Thrift

16 Apr 2019, 09:42:53
to Everything List


In the experientialist (Strawson-Goff-etc. "panpsychist") view, experiential qualia (EQ) exist in matter at some level on their own -- and EQ cannot be reduced to information (numbers).

So real "selfness" cannot be achieved in any "Gödel-Löb-etc." theorem prover running on a so-called conventional computer.

Now, some future biological computers -- made via synthetic biology -- open new possibilities.

- pt

Cosmin Visan

16 Apr 2019, 12:04:28
to Everything List
1) What does "third-person" self-reference mean ? To me, this would be equivalent to "third-person color red", which clearly is not the case for red to be third-person, since red only exists in an ontological subjective manner.

2) What "machine" ? What "self of the machine" ? "Machine" is just a concept in human consciousness. It doesn't exist beyond merely a concept.

3) Phenomenological is the only type of existence. Everything else is merely an extrapolation starting from phenomenological existence. I.e. I see a unicorn in my subjective first-person existence, and then I extrapolate and say that that unicorn somehow has an existence independent of it being just a quale in my consciousness, which is clearly false.

4) You can set yourself all kinds of goals as you want. But this doesn't mean that reality is the way you want it to be. You can wish for red to be agreed upon by everyone, but a blind person will not agree.

5) There is only 1 notion of the Self: "I Am". But I would be interested to find out the 8 types of Self that you mention.

6) You can look at the emergent phenomenology. For example, in the visual domain you have: black-and-white -> shades-of-gray -> colors -> shapes -> objects -> full visual scene. All these levels have the property that each level inherits the qualities of the previous levels, while also bringing into existence its own quality. For example, the reason why a color can vary from lighter to darker is that it inherits in itself the quality of shades-of-gray. And if you think carefully about this, this is possible because of the properties of self-reference that I just mentioned: x=x (color is itself), x<x (shades-of-gray are included in color), x>x (color is more than the shades-of-gray). And all these happen at the same time, because the same consciousness is the one that experiences the evolution through levels. When you learn something new, that new knowledge emerges on top of previously held knowledge, but this doesn't create a new consciousness to experience the new knowledge; the same consciousness is maintained. And this is possible because the same consciousness (x=x) includes the previous consciousness that it was (x<x) and becomes more than what it previously was (x>x).

On Tuesday, 16 April 2019 15:17:36 UTC+3, Bruno Marchal wrote:

1) With mechanism, third-person self-reference is formalisable

2) That is good insight, well recovered by the machine about its first person self.
 
3)
 
In other words, the very definition of the concept of "existence" is the looking-back-at-itself of self-reference.
 
That type of existence is phenomenological.

4)

So, existence can only be subjective, so all that can exists is consciousness.

I see this as a critics of your theory. It is almost self-defeating. My goal was to understand matter and consciousness from proposition on which (almost) everybody agree, and with mechanism, elementary arithmetic is enough.

5)
OK. (Except the tiny formula which does not make much sense to me, and seem to assume a lot of things). But with mechanism we get 8 notion of self, and transcendance is indeed derived from them.

6)

Cosmin Visan

16 Apr 2019, 12:25:26
to Everything List
This I clearly must do. And I admit that at this point I am not able to do it. But this doesn't mean that phenomenology is not a science in itself. Actually, as I see it, in the future physics will be the one derived from consciousness, not the other way around.

Cosmin Visan

16 Apr 2019, 12:42:35
to Everything List
Because the Rover is just a bunch of atoms. It is nothing more than the sum of its atoms. But in the case of self-reference/emergence, each new level is more than the sum of the previous levels.

I don't know how you can trick yourself so badly into believing that if you put some rocks together, the rocks become alive. Maybe it is because you think that the brain is just a bunch of atoms. No, it is not. If you were to measure what the electrons are doing in the brain, you would see that they are not moving according to known physics, but are being moved by consciousness. And this doesn't happen in a machine. In a machine, electrons move according to known physics.

Brent Meeker

16 Apr 2019, 14:35:50
to everyth...@googlegroups.com


On 4/16/2019 1:22 AM, 'Cosmin Visan' via Everything List wrote:
esoteric = "intended for or likely to be understood by only a small number of people with a specialized knowledge or interest." According to this definition, I'm not making self-reference esoteric. On the contrary, since I devote a whole book to it, the intention is to make self-reference to be understood by everyone. Probably you want to mean something else by esoteric, something like "out-of-this-world". But this again is not the case, because self-reference is the source of the entire existence, so it is pretty much part of the world.

Also, your example with the Mars Rover is faulty, because the rover doesn't know anything.

How do you know this?  Why can a bunch of neurons be conscious, but a bunch of electronics can't?

Brent


Cosmin Visan

16 Apr 2019, 15:43:29
to Everything List
There are no electrons and no neurons. "Electrons" and "neurons" are just ideas in consciousness; they are projections, into the idea of a "physical world", of processes that happen in consciousness. And since in places where there is consciousness, consciousness has certain effects, it is normal for those effects to look different than in places where there is no conscious activity. It is like, for example, watching a recording of World of Warcraft on YouTube vs. watching someone play World of Warcraft live. In the recording the same things will happen over and over again, and they will be called "laws of physics", while when watching someone play live, different things will happen every time. As you can see, the image is the same: an elf running through the forest. But the effects are different. In the first case God decided the rules at the beginning, but there is no God anymore moving the electrons; they just repeat over and over again the initial trigger, while other electrons are actively influenced by currently existing consciousnesses. As you can see, the causal power doesn't lie in the electrons, but in the consciousnesses behind the curtains.

Brent Meeker

16 Apr 2019, 19:29:24
to everyth...@googlegroups.com


On 4/16/2019 6:42 AM, Philip Thrift wrote:
> In the experientialist (Strawson-Goff-etc. "panpsychist" view):
> experiential qualia (EQ) exist in matter at some level on their own --
> and EQ cannot be reduced to information (numbers).
>
> So real "selfness" cannot be achieved in any "Gödel-Löb-etc." theorem
> prover running on the so-called conventional computer.
>
> Now some future biological computers -- made via synthetic biology --
> open new possibilities.

What makes them "biological"?  Do they have to be made of amino acids? 
nuclei acids?  do they have to be powered by a phosphate cycle?  What
makes one bunch of biological molecules conscious and another very
similar bunch dead, or anesthesized?

The only coherent answer is that consciousness is realized by certain
information processing...independent of the molecules instantiating the
process.

Brent

Brent Meeker

16 Apr 2019, 20:00:14
to everyth...@googlegroups.com


On 4/16/2019 9:42 AM, 'Cosmin Visan' via Everything List wrote:
Because Rover is just a bunch of atoms. Is nothing more than the sum of atoms.

First, that's false.  The Rover is a very specific arrangement of atoms interacting with a specific environment.  It has memory, purpose, and the ability to act.


But in the case of self-reference/emergence, each new level is more than the sum of the previous levels.

I don't know how you can trick yourself so badly into believing that if you put some rocks together, the rocks become alive.

Try removing the phosphate atoms from your brain and see what you believe...if anything.


Maybe because you think that the brain is just a bunch of atoms. No, it is now. If you were to measure what the electrons are doing in the brain, you would see that they are not moving according to known physics, but they are being moved by consciousness.

And have you done this observation?  A Nobel prize awaits. 

Brent


Telmo Menezes

16 Apr 2019, 20:06:45
to 'Brent Meeker' via Everything List


On Tue, Apr 16, 2019, at 03:44, 'Brent Meeker' via Everything List wrote:
You seem to make self-reference into something esoteric.   Every Mars Rover knows where it is, the state of its batteries, its instruments, its communications link, what time it is, what its mission plan is.

I don't agree that the Mars Rover checking its own battery levels is an example of what is meant by self-reference in this type of discussion. The entity "Mars Rover" exists in your mind and mine, but there is no "Mars Rover mind" where it also exists. The entity "Telmo" exists in your mind and mine, and I happen to be an entity "Telmo" in whose mind the entity "Telmo" also exists. This is real self-reference.

Or, allow me to invent a programming language where something like this could be made more explicit. Let's say that, in this language, you can define a program P like this:

program P:
    x = 1
    if x == 1:
        print('My variable x is holding the value 1')

The above is the weak form of self-reference that you allude to. It would be like me measuring my arm and noting the result. Oh, my arm is x cm long. But let me show what could be meant instead by real self-reference:

program P:
    if length(P) > 1000:
        print('I am a complicated program')
    else:
        print('I am a simple program')
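
For concreteness, this second program can be made runnable in ordinary Python (a sketch: reading the script's own file stands in for the hypothetical length(P) built-in):

    # P inspects its own source text, so the branch it takes depends on
    # a property of the very program doing the inspecting.
    def P():
        with open(__file__) as f:
            source = f.read()
        if len(source) > 1000:
            print('I am a complicated program')
        else:
            print('I am a simple program')

    if __name__ == '__main__':
        P()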

Do you accept there is a fundamental difference here?

Telmo

Telmo Menezes

16 Apr 2019, 20:06:45
to 'Brent Meeker' via Everything List


On Tue, Apr 16, 2019, at 18:42, 'Cosmin Visan' via Everything List wrote:
Because Rover is just a bunch of atoms. Is nothing more than the sum of atoms. But in the case of self-reference/emergence, each new level is more than the sum of the previous levels.

I disagree. My position on this is that people are tricked into thinking that emergence has some ontological status, when in fact it is just an epistemological tool. We need to think in higher-order structures to simplify things (organisms, organs, mean-fields, cells, ant colonies, societies, markets, etc.), but a Jupiter-brain could keep track of every entity separately and apprehend the entire thing at the same time. Emergence is a mental shortcut.

Self-reference is another matter (pun was accidental).


I don't know how you can trick yourself so badly into believing that if you put some rocks together, the rocks become alive. Maybe because you think that the brain is just a bunch of atoms. No, it is now. If you were to measure what the electrons are doing in the brain, you would see that they are not moving according to known physics, but they are being moved by consciousness.

For me, this is yet another version of "God did it". There is no point in attempting to explain some complex behavior if the explanation is even more complex and mysterious.


And this doesn't happen in a machine. In a machine, electrons move according to known physics.

These are fairly extraordinary claims. Do you have any empirical data to support them?

Telmo.




Brent Meeker

16 Apr 2019, 20:14:49
to everyth...@googlegroups.com


On 4/16/2019 12:43 PM, 'Cosmin Visan' via Everything List wrote:
There are no electrons and no neurons. "Electrons" and "neurons" are just ideas in consciousness, are projections in the idea of "physical world" of processes that happen in consciousness.

I agree they are ideas of consciousness. But to say they are "just" ideas of consciousness implies that they do not evolve according to their own laws. And I notice you said that consciousness moves them in ways inconsistent with physics; so you are biting the bullet on that point, which is testable.


And since in places where there is consciousness, consciousness has certain effects, it is normal for those effects to look different than in places where there is no conscious activity. Is like for example watching a recording of World of Warcraft on youtube vs. watching someone playing World of Warcraft live. In the recording the same things will happen over and over again, and they will be called "laws of physics", while watching someone playing live, different things will happen every time. As you can see, the image is the same: an elf running through the forest. But the effect are different. In the first case God decided the rules at the beginning, but there is no God anymore moving the electrons, they just repeat over and over again the initial trigger, while other electrons are actively influenced by currently existing consciousnesses. As you can see, the causal power doesn't lie in the electrons, but in the consciousnesses behind the curtains.

But a Mars Rover can also learn to play World of Warcraft...and probably do it better than you and me.

Brent



Philip Thrift

16 Apr 2019, 20:59:34
to Everything List
We don't know enough about matter to say. Look at all the quirky stuff coming out of materials science news. And biomatter is quirkier still. 

- pt

Brent Meeker

16 Apr 2019, 23:03:34
to everyth...@googlegroups.com



I take your point.  But I think the difference is only one of degree.  In my example the Rover knows where it is, lat and long and topology.   That entails having a model of the world, admittedly simple, in which the Rover is represented by itself. 

I would also say that I think far too much importance is attached to self-reference.  It's just a part of intelligence to run "simulations" in trying to foresee the consequences of potential actions.  The simulation must generally include the actor at some level.  It's not some mysterious property raising up a ghost in the machine.
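A hypothetical sketch in Python of such a simulation that includes the actor in its own world model (all names and numbers are made up, not any real planner's code):

    # A world model that represents the actor inside itself, used to
    # "simulate" a drive before actually performing it.
    world_model = {
        "terrain": {"slope_deg": 12.0},
        "rover": {"position": (3.0, 7.0), "battery_pct": 54.0},  # the agent, inside its own model
    }

    def simulate_drive(model, distance_m):
        # Predict the rover's own future state without moving anything.
        x, y = model["rover"]["position"]
        battery_cost = 0.5 * distance_m  # assumed cost per metre
        return {
            "position": (x + distance_m, y),
            "battery_pct": model["rover"]["battery_pct"] - battery_cost,
        }

    predicted = simulate_drive(world_model, 10.0)
    if predicted["battery_pct"] > 20.0:
        print("plan accepted:", predicted)
    else:
        print("plan rejected: battery too low after the drive")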

Brent

Brent Meeker

16 Apr 2019, 23:13:21
to everyth...@googlegroups.com
We know enough about matter to know that adding a little alcohol to your bloodstream, or a small blow to the head (but not the foot), will make your qualia go away.

Brent

Telmo Menezes

17 Apr 2019, 02:08:57
to 'Brent Meeker' via Everything List


On Wed, Apr 17, 2019, at 05:03, 'Brent Meeker' via Everything List wrote:



I take your point.  But I think the difference is only one of degree.  In my example the Rover knows where it is, lat and long and topology.   That entails having a model of the world, admittedly simple, in which the Rover is represented by itself. 

I would also say that I think far too much importance is attached to self-reference.  It's just a part of intelligence to run "simulations" in trying to foresee the consequences of potential actions.  The simulation must generally include the actor at some level.  It's not some mysterious property raising up a ghost in the machine.

With self-reference comes also self-modification. The self-replicators of nature that slowly adapt and complexify, the brain "rewiring itself"... Things get both weird and generative. I suspect that it goes to the core of what human intelligence is, and what computer intelligence is not (yet). But if you say that self-reference has no magic property that explains consciousness, I agree with you.

On consciousness I have nothing interesting to say (no jokes about ever having had, please :). I think that:

consciousness = existence

Existence entails self-referential machines, self-referential evolutionary processes, the whole shebang. But not the other way around.

Telmo.

Cosmin Visan

17 Apr 2019, 02:18:28
to Everything List
It's actually the other way around: biology is realized by certain processes happening in consciousness. Biology is just an external appearance of internal processes happening in consciousness.

Cosmin Visan

17 Apr 2019, 02:23:21
to Everything List
1) Well... It might be a very specific arrangement of atoms, but they are still governed by Newton's laws. It's not like if you put them in a certain order magic happens and new things start to appear. It has no memory, no purpose and no ability to act, since memory, purpose and the ability to act are properties of consciousness.

2) Try removing yourself from the house in the middle of the winter. You will stop experiencing warmth, but this doesn't mean that the quale of warmth is generated by the house.

3) I have done the thinking. I don't have to do the experiment to know it is true.

On Wednesday, 17 April 2019 03:00:14 UTC+3, Brent wrote:
1)

First, that's false.  The Rover is a very specific arrangement of atoms interacting with a specific environment.  It has memory, purpose, and the ability to act.

2)
 
Try removing the phosphate atoms from your brain and see what you believe...if anything.

Maybe because you think that the brain is just a bunch of atoms. No, it is now. If you were to measure what the electrons are doing in the brain, you would see that they are not moving according to known physics, but they are being moved by consciousness.
3)

Cosmin Visan

17 Apr 2019, 02:42:14
to Everything List
1) Oh, I'm clearly not making that mistake. When I talk about emergence, I talk about ontological emergence, not the hand-waving epistemic kind that people usually talk about. The emergence that I'm talking about is the emergence of new qualia on top of previously existing qualia. This is what my book is about. So it's the real deal. Alternatively, have a look at my presentation from the Science & Nonduality conference on The Emergent Structure of Consciousness, where I talk about ontological emergence and specifically mention to the audience that epistemic emergence is false: https://www.youtube.com/watch?v=6jMAy6ft-ZQ
And what realizes the ontological emergence is self-reference, through its property of looking-back-at-itself, with the looking-back becoming more than itself, as on the cover of the book.

2) Consciousness is not mysterious. And this is exactly what my book is doing: demystifying consciousness. If you decide to read my book, you will gain by the end of it a clarity of thinking through these issues that all people should have, such that they will stop making the confusion that robots are alive.

3) No, they are not extraordinary claims. They are quite trivial. And they start from the trivial realization that the brain does not exist. The "brain" is just an idea in consciousness.


On Wednesday, 17 April 2019 03:06:45 UTC+3, telmo wrote:


On Tue, Apr 16, 2019, at 18:42, 'Cosmin Visan' via Everything List wrote:
Because Rover is just a bunch of atoms. Is nothing more than the sum of atoms. But in the case of self-reference/emergence, each new level is more than the sum of the previous levels.
1)
 
I disagree. My position on this is that people are tricked into thinking that emergence has some ontological status, when if fact it is just an epistemological tool. We need to think in higher-order structures to simplify things (organisms, organs, mean-fields, cells, ant colonies, societies, markets, etc), but a Jupiter-brain could keep track of every entity separately and apprehend the entire thing at the same time. Emergence is a mental shortcut.

Self-reference is another matter (pun was accidental).


I don't know how you can trick yourself so badly into believing that if you put some rocks together, the rocks become alive. Maybe because you think that the brain is just a bunch of atoms. No, it is now. If you were to measure what the electrons are doing in the brain, you would see that they are not moving according to known physics, but they are being moved by consciousness.
2)
 
For me, this is yet another version of "God did it". There is no point in attempting to explain some complex behavior if the explanation is even more complex and mysterious.
 
And this doesn't happen in a machine. In a machine, electrons move according to known physics.
 
3)
 
These are fairly extraordinary claims. Do you have any empirical data to support them?

Telmo.



Cosmin Visan

17 Apr 2019, 02:55:27
to Everything List
1) They are just ideas. Like the idea of "planet Vulcan", which disappeared when the set of ideas that gave birth to it was replaced with another set of ideas. In the future, the idea of "dark matter" will also disappear when the set of ideas called "General Relativity" is replaced by another set of ideas. What remains constant through all these changes is the consciousnesses that have the ideas.
Maybe you are referring to the fact that we have no absolute free will. And this is certainly true. We don't know what makes us experience certain qualia and not others. I don't have the ability to choose to imagine a new color. And this is indeed a problem that needs to be solved.

2) Learning is a property of consciousness. So no, robots don't learn anything, in the same way that if I step in the mud, mud doesn't "learn" the shape of my foot.


On Wednesday, 17 April 2019 03:14:49 UTC+3, Brent wrote:


On 4/16/2019 12:43 PM, 'Cosmin Visan' via Everything List wrote:
There are no electrons and no neurons. "Electrons" and "neurons" are just ideas in consciousness, are projections in the idea of "physical world" of processes that happen in consciousness.
1)
 
I agree they are ideas of consciousness.  But to say they are "just" ideas of consciousness, implies that they do not evolve according to their own laws.  And I notice you said that consciousness moves them in ways inconsistent with physics; so you are biting the bullet on that point; which is testable.

And since in places where there is consciousness, consciousness has certain effects, it is normal for those effects to look different than in places where there is no conscious activity. Is like for example watching a recording of World of Warcraft on youtube vs. watching someone playing World of Warcraft live. In the recording the same things will happen over and over again, and they will be called "laws of physics", while watching someone playing live, different things will happen every time. As you can see, the image is the same: an elf running through the forest. But the effect are different. In the first case God decided the rules at the beginning, but there is no God anymore moving the electrons, they just repeat over and over again the initial trigger, while other electrons are actively influenced by currently existing consciousnesses. As you can see, the causal power doesn't lie in the electrons, but in the consciousnesses behind the curtains.
2)

Cosmin Visan

17 Apr 2019, 03:04:31
to Everything List
1) Rover doesn't know anything, since knowing is a property of consciousness. Rover doesn't have a model of the world, since having a model of the world means being aware of a world, and awareness is a property of consciousness. What does "Rover is represented by itself" even mean ? I think what you're doing now is to talk about properties that you, as a consciousness, have them, and carelessly starting to attribute those properties to lifeless objects. I would suggest a more rigorous thinking process. Don't haste in applying concepts where they don't belong.

2) It is you who doesn't understand the meaning of self-reference, and that's why you fail to understand its usage. Self-reference is not "a ghost in the machine"; it is an eternal logical structure that is the source of all existence. By employing its eternal property of looking-back-at-itself, self-reference finds objects in itself and gives birth to all the consciousnesses in the world.

On Wednesday, 17 April 2019 06:03:34 UTC+3, Brent wrote:

1)
 
I take your point.  But I think the difference is only one of degree.  In my example the Rover knows where it is, lat and long and topology.   That entails having a model of the world, admittedly simple, in which the Rover is represented by itself. 

2)

Cosmin Visan

17 Apr 2019, 03:14:25
to Everything List
Also, putting opaque glasses on you will make your visual qualia go away. This doesn't mean that the glasses are what generate the qualia.

Philip Thrift

17 Apr 2019, 04:13:47
to Everything List


On Wednesday, April 17, 2019 at 1:42:14 AM UTC-5, Cosmin Visan wrote:
1) Oh, I'm clearly not making that mistake. When I talk about emergence, I talk about ontological emergence, not the hand-waving epistemic kind that people usually talk about. The emergence that I'm talking about is the emergence of new qualia on top of previously existing qualia. This is what my book is about. So it's the real deal. Alternatively, have a look at my presentation from the Science & Nonduality conference where I talk about The Emergent Structure of Consciousness, where I talk about ontological emergence and I specifically mention to the audience that the epistemic emergence is false: https://www.youtube.com/watch?v=6jMAy6ft-ZQ
And what realizes the ontological emergence is self-reference through its property of looking-back-at-itself, with looking-back becoming more than itself, like in the cover of the book.

2) Consciousness is not mysterious. And this is exactly what my book is doing: demystifying consciousness. If you decide to read my book, you will gain by the end of it a clarity of thinking about these issues that all people should have, so that they stop making the confusion of believing that robots are alive.

3) No, they are not extraordinary claims. They are quite trivial. And they start from the trivial realization that the brain does not exist. The "brain" is just an idea in consciousness.




 
The panpsychist says

   Matter is all there is, but matter has experientiality.

The proposed alternative is

    Experientiality is all there is, while matter is illusion. 


Since science has so far proceeded as the study of matter, I'm not sure how the alternative (vs. panpsychism) helps in the science of consciousness.

- pt





Brent Meeker

17 Apr 2019, 11:36:07
to everyth...@googlegroups.com
Yes, I understand your ToE is like ideal monism.  But it is one thing to assert it.  It is another to derive some predictions from it.

Brent

Brent Meeker

17 Apr 2019, 11:43:20
to everyth...@googlegroups.com


On 4/16/2019 11:23 PM, 'Cosmin Visan' via Everything List wrote:
1) Well... It might be a very specific arrangement of atoms, but they are still governed by Newton's laws. It's not as if, when you put them in a certain order, magic happens and new things start to appear. It has no memory, no purpose and no ability to act, since memory, purpose and the ability to act are properties of consciousness.

But a Mars Rover with artificial intelligence does have purpose: to collect and analyze various data.  It has the ability to act, to travel, to take samples, to communicate.  It has memory of its purpose and of where it's been, and for an AI system, even the ability to learn.  Yes, no magic happens.  But new things start to appear, just as a certain arrangement of atoms is your computer, which can transform and display these symbols, while another arrangement would be just a lump of metal and plastic.



2) Try removing yourself from the house in the middle of the winter. You will stop experiencing warmth, but this doesn't mean that the quale of warmth is generated by the house.

True.  But something is different about inside and outside the house that is not ONLY in your consciousness, because others agree about it and measure it...it's called temperature.



3) I have done the thinking. I don't have to do the experiment to know it is true.

That's what all the scholastics thought.

Brent


On Wednesday, 17 April 2019 03:00:14 UTC+3, Brent wrote:
1)
First, that's false.  The Rover is a very specific arrangement of atoms interacting with a specific environment.  It has memory, purpose, and the ability to act.

2)
 
Try removing the phosphate atoms from your brain and see what you believe...if anything.

Maybe because you think that the brain is just a bunch of atoms. No, it is not. If you were to measure what the electrons are doing in the brain, you would see that they are not moving according to known physics, but they are being moved by consciousness.
3)
 
And have you done this observation?  A Nobel prize awaits. 

Brent Meeker

17 Apr 2019, 11:47:06
to everyth...@googlegroups.com


On 4/16/2019 11:55 PM, 'Cosmin Visan' via Everything List wrote:
> 1) They are just ideas. Like the idea of "planet Vulcan" that
> disappeared when the set of ideas that gave birth to it have been
> replaced with other set of ideas. In the future, the idea of "dark
> matter" will also disappear when the set of ideas "General Relativity"
> will be replaced by other set of ideas. What remains constant in all
> these changes is the consciousnesses that have the ideas.
> Maybe you are referring to the fact that we have no absolute free
> will. And this is certainly true. We don't know what makes us
> experience certain qualia and not others. I don't have the ability to
> choose to imagine a new color. And this is indeed a problem that needs
> to be solved.
>
> 2) Learning is a property of consciousness. So no, robots don't learn
> anything, in the same way that if I step in the mud, mud doesn't
> "learn" the shape of my foot.

Sometimes you say there is no magic involved in your theory.  But then
you insist that only you are conscious.  Nothing that is not shaped like
Visan, perhaps nothing else at all, can be conscious. Why?  Magic.

Brent

Bruno Marchal

17 Apr 2019, 12:08:40
to everyth...@googlegroups.com
On 16 Apr 2019, at 18:04, 'Cosmin Visan' via Everything List <everyth...@googlegroups.com> wrote:

1) What does "third-person" self-reference mean ?

It is when a program/machine/number invokes itself, in any third person way.

When you say that you have a problem with a tooth, that is third person self-reference, verifiable by the dentist.
When you say that you feel some toothache, that is first person self-reference, not verifiable by anybody.







To me, this would be equivalent to "third-person color red", which is clearly not the case, since red cannot be third-person: red only exists in an ontologically subjective manner.

That is first person self-reference. Note that I have given here many thought experiments making this easily understandable, without delving into the second recursion theorem, which is used to make this mathematically clean.





2) What "machine" ? What "self of the machine" ? "Machine" is just a concept in human consciousness. It doesn't exist beyond merely a concept.

Church’s thesis makes it 100% mathematically precise. There is no such definition of machine in physics. A physical machine is a concept, but an arithmetical machine is a precise notion, whose existence follows from 2+2=4 (& Co.).





3) Phenomenological is the only type of existence.

It is an experience. To add ontology to such an experience makes things more complex, if not unsolvable. It is the main “mistake” in Aristotle’s theology/metaphysics.



Everything else is merely an extrapolation starting from phenomenological existence.


All theories are extrapolations. But personal consciousness is not a theory, even if it is the only certain thing. Theories are never certain.




i.e. I see a unicorn in my subjective first person existence, and then I extrapolate and say that the unicorn somehow has an existence independent of its being just a quale in my consciousness, which is clearly false.

It is more than a quale, but unicorn, like bicorns, might only exist phenomenologically.

The goal is to explain consciousness, so I prefer not to assume it at the start. The same for matter. With mechanism, we have to assume only elementary arithmetic.




4) You can set yourself all kinds of goals as you want. But this doesn't mean that reality is the way you want it to be. You can wish for red to be agreed upon by everyone, but a blind person will not agree.

True. But I do not claim that Digital Mechanism is true. I say that Digital Mechanism explains consciousness and matter, so, by comparing observation with the matter in the mind of the Turing machine, we can test Mechanism, and up to now it works, whereas physicalism is already refuted.




5) There is only 1 notion of the Self: "I Am". But I would be interested to find out the 8 types of Self that you mention.

I limit myself to arithmetically sound, rational machines (if they believe A and A -> B, they will believe B).

I define “believe rationally” by Gödel’s arithmetical provability predicate, written []A (“A is provable by me”).
Incompleteness makes it impossible for the machine to prove []A -> A, or []A -> <>A, which makes the logic of []A & A (Theaetetus’ definition of knowledge) work for the machine. After Gödel’s theorem, provability is believability, not knowability, so the 8 nuances are given by the following variants, with p a sigma_1 (partial computable) arithmetical sentence:

p               (truth of p)
[]p             (provability of p)
[]p & p         (knowledge of p, in Theaetetus’ sense)

and

[]p & <>t
[]p & <>t & p

That gives eight logics, as three of them are divided into two, again by incompleteness.

I can come back on this later. We get intutionistic logic for the first person, and quantum logic for the notion of matter, which is confirmed by independent studies based on empirical inference.










6) You can look at the emergent phenomenology. For example, in the visual domain you have: black-and-white -> shades-of-gray -> colors -> shapes -> objects -> full visual scene. All these levels have the properties that each level inherits the qualities of the previous levels, while also bringing into existence its own quality.

But it is the same for functionality. A machine is not just the sum of its parts, and it can do new things that no part can do. That type of thing is reflected in the first person domain, but the qualities come from the intersection with Truth/meaning/semantics.






For example, the reason why a color can vary from lighter to darker is that it inherits in itself the quality of shades-of-gray. And if you think carefully about this, this is possible because of the properties of self-reference that I just mentioned: x=x (color is itself), x<x (shades-of-gray are included in color), x>x (color is more than the shades-of-gray).


The universal machine can agree or disagree with this, except for the bizarre role that you give to identity (x=x); self-reference, by any entity, relates a model that the entity has of itself with the entity itself. It is more like x = ‘x’, or some fixed point of that type.





And all these happen at the same time, because the same consciousness is the one that experiences the evolution in levels. When you learn something new, that new knowledge emerges on top of previously held knowledge, but this doesn't create a new consciousness to experience the new knowledge; the same consciousness is maintained.

I agree with this. This will make the consciousness of very weak, but still Turing universal, machines into the universal consciousness, from which all personal identities differentiate.




And this is possible because the same consciousness (x=x)
includes the previous consciousness that it was (x<x) and becomes more than what it previously was (x>x).

I think I see what you mean, or want to mean, but that’s no problem for Mechanism.

Your “x=x” is plausibly obtained by the machine knowing that if she (or anyone) is sound, then we have both

([]p & p) is equivalent to []p, but in a non rationally provable way.

We agree on something crucial in metaphysics, which is that matter is not real, and is only “in consciousness”.
But that makes matter a phenomenological notion, like consciousness is usually judged to be, too. Then with mechanism, we find that the numbers, when taken relative to universal numbers, are already aware that they have a soul ([]p & p) that only God (Arithmetical truth) can know to be the same (technically making Mechanism into a theology in the original sense of Plato).

My work is done. It is not a project. I invite you to read my papers. You need to understand that before Gödel, we thought we knew just about everything about numbers and machines, but after Gödel, we know that we know almost nothing, notably because the numbers incarnate the universal machine when living on the border between the computable and the non-computable. We know that Truth is *far* bigger than proof, or anything rationally justifiable. Arithmetic promises an infinity of surprises and obligations to change one's mind on fundamental matters, making clearer an important invariant theological core.


Bruno





On Tuesday, 16 April 2019 15:17:36 UTC+3, Bruno Marchal wrote:

1) With mechanism, third-person self-reference is formalisable

2) That is good insight, well recovered by the machine about its first person self.
 
3)
 
In other words, the very definition of the concept of "existence" is the looking-back-at-itself of self-reference.
 
That type of existence is phenomenological.

4)
So, existence can only be subjective, so all that can exists is consciousness.

I see this as a critics of your theory. It is almost self-defeating. My goal was to understand matter and consciousness from proposition on which (almost) everybody agree, and with mechanism, elementary arithmetic is enough.

5)
OK. (Except the tiny formula which does not make much sense to me, and seem to assume a lot of things). But with mechanism we get 8 notion of self, and transcendance is indeed derived from them.

6)
And all these apparently contradictory properties are happening all at the same time. So, x=x, x<x, x>x all at the same time.

Without giving a theory or at least a realm, it is hard to figure out what you mean.

But there is no actual contradiction here, because self-reference is unformalizable. The reason why I get to such weird conclusions is explored throughout the book where a phenomenological analysis of consciousness is done and it is shown how it is structured on an emergent holarchy of levels, a holarchy meaning that a higher level includes the lower levels, and I conclude that this can only happen if there is an entity called "self-reference" which has the above mentioned properties. So as you can see, there pretty much cannot be a mathematics of self-reference.

But such theories exist. Even the fact that the first person self-reference is not formalisable is provable in a meta-theory. 

Self-reference is where mathematical logic has got many surprising results, and with mechanism, they are somehow directly usable. To not use them needs some non-mechanist hypothesis, for which there are no evidences, and it looks like bringing complexity to not solve a (scientific) problem.


Bruno Marchal

17 Apr 2019, 12:11:31
to everyth...@googlegroups.com
Good point, and this is what will lead, when assuming the processes are digital, to associating a mind with all “similar enough” digital processes realised, in the precise sense of Church and Turing, in arithmetic. It is the same information which is processed, at the “right” level, which exists by the assumption of digital mechanism.

Bruno




>
> Brent

Bruno Marchal

17 Apr 2019, 12:25:21
to everyth...@googlegroups.com
On 16 Apr 2019, at 15:10, Telmo Menezes <te...@telmomenezes.net> wrote:



On Tue, Apr 16, 2019, at 03:44, 'Brent Meeker' via Everything List wrote:
You seem to make self-reference into something esoteric.   Every Mars Rover knows where it is, the state of its batteries, its instruments, its communications link, what time it is, what its mission plan is.

I don't agree that the Mars Rover checking "it's own" battery levels is an example of what is meant by self-reference in this type of discussion.

It is a correct third person self-reference, like when someone says "I have two legs", which is correct in the case he has two legs.



The entity "Mars Rover" exists in your mind and mine, but there is no "Mars Rover mind" where it also exists.

I would say that there is surely a tiny one, but not something like what we get from the second recursion theorem, which makes a machine able to compute any function applied to its own code.

Third person self-reference is the base of the whole recursion theory, and the trick which makes this definable is Kleene’s theorem, as I have sometimes explained.

More difficult is to capture the first person self, but we got it by the “meta”-invocation of the notion of truth. 
Rover is conscious, but still dissociated from “Rover”. But that is just because it has no strong induction axiom, and no way to build approximations of models of itself. It lacks a re-entering neural system rich enough to manage the gap between its first person apprehension and the third person apparent reality around it.





The entity "Telmo" exists in your mind and mine, and I happen to be an entity "Telmo" in whose mind the entity "Telmo" also exists. This is real self-reference.

I agree. It is unclear to me whether the Mars Rover has it or not, as I have not seen the code, and even seeing it, it could be a hell of a difficulty to prove it does not have that ability. I doubt it has it, because NASA does not want a free explorer on Mars, but a docile slave.




Or, allow me to invent a programming language where something like this could be made more explicit. Let's say that, in this language, you can define a program P like this:

program P:
    x = 1
    if x == 1:
        print('My variable x is holding the value 1')

The above is the weak form of self-reference that you allude to.

Yes, but it can work for Rover’s task, and I agree it is less sophisticated compared to genuine “second recursion”.





It would be like me measuring my arm and noting the result. Oh, my arm is x cm long. But let me show what could be meant instead by real self-reference:

program P:
    if length(P) > 1000:
        print('I am a complicated program')
    else:
        print('I am a simple program')

Do you accept there is a fundamental difference here?


OK. Perfect. That one is definable by using Kleene's second recursion theorem. But it is still third person self-reference; the first person experience needs to involve the concept of truth (making the first person non-definable by the first person, but still mathematically precise for anyone assuming the entity is sound, which indeed the entity itself cannot justify rationally).
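
For concreteness, Telmo's second program can be written in real Python with no invented primitives, using the standard quine trick (a template applied to its own quoted form), which is also the constructive idea behind Kleene's second recursion theorem. A minimal sketch, assuming nothing beyond the standard library:

# Sketch: the program builds a representation of its own core text by
# formatting a template with its own quoted form, then inspects that text.
template = """template = {0!r}
source = template.format(template)
if len(source) > 1000:
    print('I am a complicated program')
else:
    print('I am a simple program')"""
source = template.format(template)
if len(source) > 1000:
    print('I am a complicated program')
else:
    print('I am a simple program')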

Bruno

Bruno Marchal

17 Apr 2019, 12:35:51
to everyth...@googlegroups.com
On 16 Apr 2019, at 18:56, Telmo Menezes <te...@telmomenezes.net> wrote:



On Tue, Apr 16, 2019, at 18:42, 'Cosmin Visan' via Everything List wrote:
Because Rover is just a bunch of atoms. Is nothing more than the sum of atoms. But in the case of self-reference/emergence, each new level is more than the sum of the previous levels. 

I disagree. My position on this is that people are tricked into thinking that emergence has some ontological status, when in fact it is just an epistemological tool. We need to think in higher-order structures to simplify things (organisms, organs, mean-fields, cells, ant colonies, societies, markets, etc), but a Jupiter-brain could keep track of every entity separately and apprehend the entire thing at the same time. Emergence is a mental shortcut.

Self-reference is another matter (pun was accidental).


OK.

Third person self-reference is P(x) = F(x, ‘P’).

First person self-reference is … something more subtle, related to third person self-reference, but with a God knowing that the machine refers correctly to itself, and the machine accessing that knowledge at some level (as is the case with mechanism).





I don't know how you can trick yourself so badly into believing that if you put some rocks together, the rocks become alive. Maybe because you think that the brain is just a bunch of atoms. No, it is not. If you were to measure what the electrons are doing in the brain, you would see that they are not moving according to known physics, but they are being moved by consciousness.

For me, this is yet another version of "God did it". There is no point in attempting to explain some complex behavior if the explanation is even more complex and mysterious.

I completely agree. 

Now, if the physics in the head of all universal machines differs from what we observe, we have evidence that something more complex needs to be assumed, but that means either that Mechanism is false, or that we are in a malevolent simulation (we are intentionally lied to).





And this doesn't happen in a machine. In a machine, electrons move according to known physics.

These are fairly extraordinary claims. Do you have any empirical data to support them?


Only physical machines have electrons, and electrons are only emergent from the arithmetical truth seen by the universal numbers inside.

With mechanism, what is emergent is not real, so physics and natural science are not real; and yet they obey laws, and are phenomenologically real for all universal numbers/machines. (Up to now, given that nature confirms mechanism, and thus seems to refute physicalism.)

Bruno

Bruno Marchal

17 Apr 2019, 12:45:09
to everyth...@googlegroups.com
On 17 Apr 2019, at 08:08, Telmo Menezes <te...@telmomenezes.net> wrote:



On Wed, Apr 17, 2019, at 05:03, 'Brent Meeker' via Everything List wrote:


On 4/16/2019 6:10 AM, Telmo Menezes wrote:


On Tue, Apr 16, 2019, at 03:44, 'Brent Meeker' via Everything List wrote:
You seem to make self-reference into something esoteric.   Every Mars Rover knows where it is, the state of its batteries, its instruments, its communications link, what time it is, what its mission plan is.

I don't agree that the Mars Rover checking "it's own" battery levels is an example of what is meant by self-reference in this type of discussion. The entity "Mars Rover" exists in your mind and mine, but there is no "Mars Rover mind" where it also exists. The entity "Telmo" exists in your mind and mine, and I happen to be an entity "Telmo" in whose mind the entity "Telmo" also exists. This is real self-reference.

Or, allow me to invent a programming language where something like this could me made more explicit. Let's say that, in this language, you can define a program P like this:

program P:
    x = 1
    if x == 1:
        print('My variable x s holding the value 1')

The above is the weak form of self-reference that you allude to. It would be like me measuring my arm and noting the result. Oh, my arm is x cm long. But let me show what could me meant instead by real self-reference:

program P:
    if length(P) > 1000:
        print('I am a complicated program')
    else:
        print('I am a simple program')

Do you accept there is a fundamental difference here?

I take your point.  But I think the difference is only one of degree.  In my example the Rover knows where it is, lat and long and topology.   That entails having a model of the world, admittedly simple, in which the Rover is represented by itself. 

I would also say that I think far too much importance is attached to self-reference.  It's just a part of intelligence to run "simulations" in trying to foresee the consequences of potential actions.  The simulation must generally include the actor at some level.  It's not some mysterious property raising up a ghost in the machine.

With self-reference comes also self-modification. The self-replicators of nature that slowly adapt and complexify, the brain "rewiring itself"... Things get both weird and generative. I suspect that it goes to the core of what human intelligence is, and what computer intelligence is not (yet). But if you say that self-reference has not magic property that explains consciousness, I agree with you.


You need some magic, but the magic of the truth of  “2+3=5” is enough. 





On consciousness I have nothing interesting to say (no jokes about ever having had, please :). I think that:

consciousness = existence


Hmm… That looks like God made it. Or like “it is”.

Are you OK with the idea that, from the point of view of a conscious entity, consciousness is something:

Immediately knowable and indubitable (in case the machine can reason),
Non-definable, and non-provable to any other machine.

Then the mathematical theory of self-reference explains why machines will conclude that they are conscious, in that sense. They will know that they know something that they cannot doubt, yet cannot prove to us, or to anyone. And they can understand that they can test mechanism by observation.





Existence entails self-referential machines, self-referential evolutionary processes, the whole shebang. But not the other way around.

The existence of the natural numbers + the laws of addition and multiplication does that, and it also justifies why you don't get any of that with any weaker theory, having fewer axioms than Robinson Arithmetic.

We have to assume numbers if we want to define precisely what a machine is, but we cannot assume a physical universe: that is the price; we have to derive it from arithmetic “seen from inside”.

Bruno

Bruno Marchal

17 Apr 2019, 12:48:46
to everyth...@googlegroups.com
On 17 Apr 2019, at 08:18, 'Cosmin Visan' via Everything List <everyth...@googlegroups.com> wrote:

It's actually the other way around: biology is realized by certain processes happening in consciousness. Biology is just an external appearance of internal processes happening in consciousness.


I totally agree. More exactly, the classical (Platonist) universal machine agrees with you, except that it makes precise the assumption of numbers/combinators/machines, and the consciousness is the consciousness of the universal machine.

It is like:

Number => self-reference (1p = consciousness) => “interfering dream/histories” => matter => human consciousness

Bruno





On Wednesday, 17 April 2019 02:29:24 UTC+3, Brent wrote:

What makes them "biological"?  Do they have to be made of amino acids? 
nuclei acids?  do they have to be powered by a phosphate cycle?  What
makes one bunch of biological molecules conscious and another very
similar bunch dead, or anesthesized?

The only coherent answer is that consciousness is realized by certain
information processing...independent of the molecules instantiating the
process.

Brent

Philip Thrift

17 Apr 2019, 13:12:26
to Everything List


On Wednesday, April 17, 2019 at 11:11:31 AM UTC-5, Bruno Marchal wrote:

> On 17 Apr 2019, at 01:29, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
>
>
>
> On 4/16/2019 6:42 AM, Philip Thrift wrote:
>> In the experientialist (Strawson-Goff-etc. "panpsychist" view): experiential qualia (EQ) exist in matter at some level on their own -- and EQ cannot be reduced to information (numbers).
>>
>> So real "selfness" cannot be achieved in any "Gödel-Löb-etc." theorem prover running on the so-called conventional computer.
>>
>> Now some future biological computers -- made via synthetic biology -- open new possibilities.
>
> What makes them "biological"?  Do they have to be made of amino acids?  nuclei acids?  do they have to be powered by a phosphate cycle?  What makes one bunch of biological molecules conscious and another very similar bunch dead, or anesthesized?
>
> The only coherent answer is that consciousness is realized by certain information processing...independent of the molecules instantiating the process.

Good point, and this is what will lead, when assuming the process are digital, to associate a mind to all “enough similar” digital process realised, in the precise sense of Church and Turing, in arithmetic. It is the same information which os processed, at the “right” level, which exist by the assumption of digital mechanism.

Bruno



Information may be sufficient to compose consciousness, but I am skeptical. Wheeler said "it from bit" (or "qbit"), but I say "xbit".


        φbits = 2bits+qbits    (physical bits: information)
        ψbits = xbits               (psychical bits: experience)




- pt 

Brent Meeker

17 Apr 2019, 20:19:46
to everyth...@googlegroups.com


On 4/16/2019 11:08 PM, Telmo Menezes wrote:


On Wed, Apr 17, 2019, at 05:03, 'Brent Meeker' via Everything List wrote:


On 4/16/2019 6:10 AM, Telmo Menezes wrote:


On Tue, Apr 16, 2019, at 03:44, 'Brent Meeker' via Everything List wrote:
You seem to make self-reference into something esoteric.   Every Mars Rover knows where it is, the state of its batteries, its instruments, its communications link, what time it is, what its mission plan is.

I don't agree that the Mars Rover checking "it's own" battery levels is an example of what is meant by self-reference in this type of discussion. The entity "Mars Rover" exists in your mind and mine, but there is no "Mars Rover mind" where it also exists. The entity "Telmo" exists in your mind and mine, and I happen to be an entity "Telmo" in whose mind the entity "Telmo" also exists. This is real self-reference.

Or, allow me to invent a programming language where something like this could me made more explicit. Let's say that, in this language, you can define a program P like this:

program P:
    x = 1
    if x == 1:
        print('My variable x s holding the value 1')

The above is the weak form of self-reference that you allude to. It would be like me measuring my arm and noting the result. Oh, my arm is x cm long. But let me show what could me meant instead by real self-reference:

program P:
    if length(P) > 1000:
        print('I am a complicated program')
    else:
        print('I am a simple program')

Do you accept there is a fundamental difference here?

I take your point.  But I think the difference is only one of degree.  In my example the Rover knows where it is, lat and long and topology.   That entails having a model of the world, admittedly simple, in which the Rover is represented by itself. 

I would also say that I think far too much importance is attached to self-reference.  It's just a part of intelligence to run "simulations" in trying to foresee the consequences of potential actions.  The simulation must generally include the actor at some level.  It's not some mysterious property raising up a ghost in the machine.

With self-reference comes also self-modification. The self-replicators of nature that slowly adapt and complexify, the brain "rewiring itself"... Things get both weird and generative. I suspect that it goes to the core of what human intelligence is, and what computer intelligence is not (yet). But if you say that self-reference has not magic property that explains consciousness, I agree with you.

On consciousness I have nothing interesting to say (no jokes about ever having had, please :). I think that:

consciousness = existence

Existence entails self-referential machines, self-referential evolutionary processes, the whole shebang. But not the other way around.

Can't really be an equality relation then.  It's existence=>self-reference  and maybe consciousness.  But I'm not sure what "=>" symbolizes.  Not logical entailment.  Maybe nomological entailment?

Brent

Russell Standish

17 Apr 2019, 21:00:31
to everyth...@googlegroups.com
On Wed, Apr 17, 2019 at 06:25:19PM +0200, Bruno Marchal wrote:
> Rover is conscious, but still dissociated from ‘rover”. But that is just
> because it has no strong induction axiom, and no way to build approximation of
> models of itself. It lack a re-entring neural system rich enough to manage the
> gap between its first person apprehension, and the third person apparent
> reality around it.

>
>
>
>
>
>
> The entity "Telmo" exists in your mind and mine, and I happen to be an
> entity "Telmo" in whose mind the entity "Telmo" also exists. This is real
> self-reference.
>
>
> I agree. It is unclear for me if Mars Rover has it, or not, as I have not seen
> the code, and even seeing it, it could ba a Helle of a difficulty to prove it
> has not that ability. I doubt it has it, because Naza does not want a free
> exploratory on Mars, but a docile slave.


I do think self-reference has something to do with it, as without an
observer to give meaning to something, it has no meaning. For
instance, without an observer to interpret a certain pile of atoms as
a machine, it is just a pile of atoms. Unless you propose, à la Bishop
Berkeley, some sort of divine mind from which all meaning radiates, the
only other possibility is that each consciousness bootstraps its own
meaning from self-reference. Unless the Mars rover has a self-model in
its code (and I don't think it was constructed that way), I would
very much doubt it has any sort of consciousness. A more interesting
possibility is Hod Lipson's "starfish" robot, which has self-reference
baked in.

Cheers


--

----------------------------------------------------------------------------
Dr Russell Standish Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Senior Research Fellow hpc...@hpcoders.com.au
Economics, Kingston University http://www.hpcoders.com.au
----------------------------------------------------------------------------

Brent Meeker

17 Apr 2019, 21:22:40
to everyth...@googlegroups.com


On 4/17/2019 6:00 PM, Russell Standish wrote:
> On Wed, Apr 17, 2019 at 06:25:19PM +0200, Bruno Marchal wrote:
>> Rover is conscious, but still dissociated from ‘rover”. But that is just
>> because it has no strong induction axiom, and no way to build approximation of
>> models of itself. It lack a re-entring neural system rich enough to manage the
>> gap between its first person apprehension, and the third person apparent
>> reality around it.
>>
>>
>>
>>
>>
>> The entity "Telmo" exists in your mind and mine, and I happen to be an
>> entity "Telmo" in whose mind the entity "Telmo" also exists. This is real
>> self-reference.
>>
>>
>> I agree. It is unclear for me if Mars Rover has it, or not, as I have not seen
>> the code, and even seeing it, it could ba a Helle of a difficulty to prove it
>> has not that ability. I doubt it has it, because Naza does not want a free
>> exploratory on Mars, but a docile slave.
>
> I do think self-reference has something to do with it, as without an
> observer to give meaning to something, it has no meaning. For
> instance, without an observer to interpret a certain pile of atoms as
> a machine, it is just a pile of atoms. Unless you propose a la Bishop
> Berkley some sort of devine mind from which all meaning radiates, the
> only other possibility is that each consciousness bootstraps its own
> meaning from self-reference. Unless the mars rover has a self model in
> its code (and I don't think it was constructed that way),

But how complete must the self-model be?  As Bruno has pointed out, it
can't be complete.  Current Mars Rovers have some "housekeeping"
self-knowledge, like battery charge, temperature, power draw,
next task, location, time,...  Of course current rovers don't have AI,
which would entail them learning and planning, which would in turn require
that they be able to run a simulation that included some representation of
themselves; but that representation might be very simple.  When you plan
to travel to the next city, your plan includes a representation of
yourself, but probably only as a location.
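
A toy illustration of that last point (a sketch with invented names, not rover code): a route plan whose world model contains the planner itself, reduced to a location.

# Sketch: a planner whose model of the world includes a representation
# of the planning agent itself, but only as a location.
world = {"cities": ["A", "B", "C"], "roads": {("A", "B"), ("B", "C")}}
self_model = {"location": "A"}            # the agent, as it appears in its own model

plan = []
for hop in [("A", "B"), ("B", "C")]:
    if hop in world["roads"]:             # simulate the action...
        plan.append(f"drive {hop[0]} -> {hop[1]}")
        self_model["location"] = hop[1]   # ...and update the self-representation

print(plan)           # ['drive A -> B', 'drive B -> C']
print(self_model)     # {'location': 'C'}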

Brent

Russell Standish

17 Apr 2019, 21:29:25
to everyth...@googlegroups.com
On Wed, Apr 17, 2019 at 06:22:35PM -0700, 'Brent Meeker' via Everything List wrote:
>
> But how complete must the self-model be. 

That is the 64 million dollar question.

> As Bruno has pointed out, it can't
> be complete.  Current Mars Rovers have some "house keeping"self-knowledge,
> like battery charge, temperature, power draw, next task, location, time,...

I don't think that's enough. I think it must have the ability to
recognise other (perhaps similar) robots/machines as being like
itself.

> Of course current rovers don't have AI which would entail them learning and
> planning, which would require that they be able to run a simulation which
> included some representation of themself; but that representation might be
> very simple.  When you plan to travel to the next city your plan includes a
> representation of yourself, but probably only as a location.
>

Hod Lipson's starfish's representation of itself is no doubt rather
simple and crude, but it does pose the question of whether it might
have some sort of consciousness.

Philip Thrift

18 Apr 2019, 03:11:09
to Everything List
"self reference" has been long been a subject of AI, programming language theory (program reflection), theorem provers (higher-order logic).

I haven't seen yet what Hod Lipson has done

Columbia engineers create a robot that can imagine itself
January 30, 2019 / Columbia Engineering


but here is an interview with another researcher:


The Unavoidable Problem of Self-Improvement in AI: An Interview with Ramana Kumar, Part 1
March 19, 2019/by Jolene Creighton

The Problem of Self-Referential Reasoning in Self-Improving AI: An Interview with Ramana Kumar, Part 2
March 21, 2019/by Jolene Creighton


To break this down a little, in essence, theorem provers are computer programs that assist with the development of mathematical correctness proofs. These mathematical correctness proofs are the highest safety standard in the field, showing that a computer system always produces the correct output (or response) for any given input. Theorem provers create such proofs by using the formal methods of mathematics to prove or disprove the “correctness” of the control algorithms underlying a system. HOL theorem provers, in particular, are a family of interactive theorem proving systems that facilitate the construction of theories in higher-order logic. Higher-order logic, which supports quantification over functions, sets, sets of sets, and more, is more expressive than other logics, allowing the user to write formal statements at a high level of abstraction.

In retrospect, Kumar states that trying to prove a theorem about multiple steps of self-reflection in a HOL theorem prover was a massive undertaking. Nonetheless, he asserts that the team took several strides forward when it comes to grappling with the self-referential problem, noting that they built “a lot of the requisite infrastructure and got a better sense of what it would take to prove it and what it would take to build a prototype agent based on model polymorphism.”

Kumar added that MIRI’s (the Machine Intelligence Research Institute’s) Logical Inductors could also offer a satisfying version of formal self-referential reasoning and, consequently, provide a solution to the self-referential problem.
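
For readers new to the jargon, one textbook example (added for illustration, not from the interview) of a statement needing that higher-order expressiveness is induction over the naturals, which quantifies over the predicate P itself, written here in LaTeX:

% the quantifier ranges over predicates P, not just over numbers
\forall P\,\bigl(P(0) \wedge \forall n\,(P(n) \to P(n+1)) \to \forall n\,P(n)\bigr)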


- pt 

Bruno Marchal

18 Apr 2019, 05:19:40
to everyth...@googlegroups.com

> On 18 Apr 2019, at 03:00, Russell Standish <li...@hpcoders.com.au> wrote:
>
> On Wed, Apr 17, 2019 at 06:25:19PM +0200, Bruno Marchal wrote:
>> Rover is conscious, but still dissociated from ‘rover”. But that is just
>> because it has no strong induction axiom, and no way to build approximation of
>> models of itself. It lack a re-entring neural system rich enough to manage the
>> gap between its first person apprehension, and the third person apparent
>> reality around it.
>
>>
>>
>>
>>
>>
>>
>> The entity "Telmo" exists in your mind and mine, and I happen to be an
>> entity "Telmo" in whose mind the entity "Telmo" also exists. This is real
>> self-reference.
>>
>>
>> I agree. It is unclear for me if Mars Rover has it, or not, as I have not seen
>> the code, and even seeing it, it could ba a Helle of a difficulty to prove it
>> has not that ability. I doubt it has it, because Naza does not want a free
>> exploratory on Mars, but a docile slave.
>
>
> I do think self-reference has something to do with it, as without an
> observer to give meaning to something, it has no meaning.

The essence of a universal number is to provide meaning, it associates to number/code some function, or some truth value if the function is a predicate.

What is the meaning of the number x for the number u? It is phi_x, a function.

(I recall, for the others, that when phi_i is a recursive (computable) enumeration of all the partial computable functions, a number u being universal means that phi_u(<x,y>) = phi_x(y), where <x,y> is some computable bijection from NxN to N.)
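
To make the notation concrete, here is a small Python sketch; the coding of programs as Python expressions is a toy assumption of mine, not Bruno's construction. The Cantor pairing function plays the role of <x,y>, and phi_u decodes a number back into (program, input) and runs it.

import math

def pair(x, y):                  # Cantor pairing: a bijection from NxN to N
    return (x + y) * (x + y + 1) // 2 + y

def unpair(z):                   # its inverse
    w = (math.isqrt(8 * z + 1) - 1) // 2
    y = z - w * (w + 1) // 2
    return w - y, y

def encode(src):                 # toy Goedel numbering: program text -> number
    return int.from_bytes(src.encode(), "big")

def decode(n):
    return n.to_bytes((n.bit_length() + 7) // 8, "big").decode()

def phi_u(z):                    # toy universal function: phi_u(<x,y>) = phi_x(y)
    x, y = unpair(z)
    return eval(decode(x))(y)    # run the program coded by x on the input y

square = encode("lambda n: n * n")
print(phi_u(pair(square, 7)))    # -> 49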



> For
> instance, without an observer to interpret a certain pile of atoms as
> a machine, it is just a pile of atoms.

Are you saying that Mars Rover cannot interpret some of its data on Mars, when nobody observed it, or are you saying that Mars Rover has enough observation abilities?






> Unless you propose a la Bishop
> Berkley some sort of devine mind from which all meaning radiates,

The universal numbers are divine enough. The (finite) code of a universal dovetailer radiates all operational meaning of all codes.




> the
> only other possibility is that each consciousness bootstraps its own
> meaning from self-reference.

That has to be the case for self-consciousness. It is a sort of self-self-reference.



> Unless the mars rover has a self model in
> its code (and I don't think it was constructed that way), then I would
> extremely doubt it has any sort of consciousness.

OK, Rover has no self-consciousness (plausibly), but it has the consciousness of the universal machine/number. It is a dissociative state, probably not related, in any genuine way, to Mars Rover activity on Mars. It is still a baby, and most probably does not distinguish truth from its inner truth.

Bruno



> A more interesting
> possibility is Hod Lipson's "starfish" robot, which has self-reference baked
> in.
>
> Cheers
>
>
> --
>
> ----------------------------------------------------------------------------
> Dr Russell Standish Phone 0425 253119 (mobile)
> Principal, High Performance Coders
> Visiting Senior Research Fellow hpc...@hpcoders.com.au
> Economics, Kingston University http://www.hpcoders.com.au
> ----------------------------------------------------------------------------
>

Cosmin Visan

18 Apr 2019, 05:27:22
to Everything List
But it has predictions. It's just that it depends on what you understand by "predictions" at this point. If you mean something like predicting the masses of particles, then it doesn't make such a prediction. But neither does physics. On the other hand, it makes predictions such as explaining the retentional passage of time. Husserl described the fact that time has a retentional structure, but he didn't explain why it is like this. In my theory, this is explained. The way it is explained is through the fact that time inherits the quality of self-reference from the Self and the quality of memory from Memory, so each present moment that goes into the past is pushed back into itself as present, thereby obtaining the retentional structure of time as described by Husserl. This is a big thing!

Bruno Marchal

18 Apr 2019, 05:29:34
to everyth...@googlegroups.com

> On 18 Apr 2019, at 03:29, Russell Standish <li...@hpcoders.com.au> wrote:
>
> On Wed, Apr 17, 2019 at 06:22:35PM -0700, 'Brent Meeker' via Everything List wrote:
>>
>> But how complete must the self-model be.
>
> That is the 64 million dollar question.

To have consciousness, I put the bar on universality, to simplify. Technically the notion of “subcreativity”, or the equivalent notion of self-speedability should be enough.

To have self-consciousness, it needs to be Gödel-Löbian, or a reflexive K4 reasoner, to use Smullyan terminology.
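
For reference, the standard schemes involved, written in LaTeX with \Box for the [] used elsewhere in this thread (a reminder of textbook modal logic, not a new claim): a K4 reasoner has the K and 4 schemes, and the Gödel-Löbian reasoner adds Löb's formula.

\begin{align*}
\text{K:}   &\quad \Box(p \to q) \to (\Box p \to \Box q)\\
\text{4:}   &\quad \Box p \to \Box\Box p\\
\text{Löb:} &\quad \Box(\Box p \to p) \to \Box p
\end{align*}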



>
>> As Bruno has pointed out, it can't
>> be complete. Current Mars Rovers have some "house keeping"self-knowledge,
>> like battery charge, temperature, power draw, next task, location, time,...
>
> I don't think that's enough. I think it must have the ability to
> recognise other (perhaps similar) robots/machines as being like
> itself.

Again, that is equivalent to being “rich enough”, or Löbian, but that is needed to have self-consciousness: to distinguish []p (used to define the belief of another machine) from []p & p, which is needed to “know itself” and assess the difference.

(Brute) consciousness is a simple form of knowledge. Self-consciousness is more demanding, it needs the transitive formula ([]p -> [][]p, Smullyan’s awareness of self-awareness).

Bruno



>
>> Of course current rovers don't have AI which would entail them learning and
>> planning, which would require that they be able to run a simulation which
>> included some representation of themself; but that representation might be
>> very simple. When you plan to travel to the next city your plan includes a
>> representation of yourself, but probably only as a location.
>>
>
> Hod Lipson's starfish's representation of itself is no doubt rather
> simple and crude, but it does pose the question of whether it might
> have some sort of consciousness.
>
>
> --
>
> ----------------------------------------------------------------------------
> Dr Russell Standish Phone 0425 253119 (mobile)
> Principal, High Performance Coders
> Visiting Senior Research Fellow hpc...@hpcoders.com.au
> Economics, Kingston University http://www.hpcoders.com.au
> ----------------------------------------------------------------------------
>

Bruno Marchal

18 Apr 2019, 05:33:48
to everyth...@googlegroups.com
Proving makes sense only in a theory. How could we know that the theory is correct? That is precisely what Gödel and Tarski showed to be impossible.

Bruno






- pt 

Cosmin Visan

18 Apr 2019, 05:43:19
to Everything List
I think you are making the classical confusion that a lot of materialists make, namely not having a proper understanding of precise philosophical concepts, such as meaning, purpose, free will, etc., and because of this lack of understanding, you randomly apply these concepts where they don't belong. Basically what you are doing is personifying objects. You basically say: "look, that puppet looks like a human, so it must be a human". Therefore, I would ask of you and other people who make such hasty use of these precise philosophical concepts to do some reading before engaging in such conversations, because otherwise you will think that you are saying profound things when in fact you are only demonstrating a shallow understanding of serious concepts, which is a pity, because it lowers the level of the discussion.

So:

1) Rover doesn't have purpose, since purpose is a kind of thinking that a consciousness does, through which it brings thoughts into its mind and uses free will to choose between those possibilities. Also, the very concept of "artificial intelligence" is highly flawed, because intelligence (the natural one, the only one that exists) presupposes bringing into existence new qualia that never existed before in the whole universe. A deterministic system cannot do that. Rover doesn't have the ability to act, since acting means a consciousness that, through free will, imposes its causal powers upon the world. Rover doesn't communicate, because communication means exchanging meaning, and meaning is something that exists in consciousness. Sending 0s and 1s is not communication. Rover has no memory, because memory means re-experiencing a previously experienced quale. Rover doesn't learn, because learning means the ability to re-enact certain qualia that have been experienced before and improve upon them.

2) You only make that assertion from your position of lack of knowledge. You have no knowledge of a whole field of study, and because you lack this knowledge, you assume that other people are lacking it as well.

As you can see, you are trying to engage in a discussion in which you have a total lack of knowledge of basic concepts. With all due respect, I would suggest you first do some serious reading before engaging any further with these issues. You can start with my book. It is a very good starting place.


On Wednesday, 17 April 2019 18:43:20 UTC+3, Brent wrote:


On 4/16/2019 11:23 PM, 'Cosmin Visan' via Everything List wrote:
1) Well... It might be a very specific arrangement of atoms, but they are still governed by Newton's Laws. Is not like if you put them in certain order magic happens and new things start to appear. It  has no memory, no purpose and no ability to act, since memory, purpose and ability to act are properties of consciousness.
1)
 
But a Mars Rover with artificial intelligence does have purpose, to collect and analyze various data.  It has the ability to act, to travel, to take samples, to communicate.  It has memory of its purpose, where it's been, and for an AI system, even the ability to learn.  Yes, no magic happens.  But new things start to appear just as certain arrangements of atoms are your computer that can transform and display these symbols but in another arrangement would be just a lump of metal and plastic.


2) Try removing yourself from the house in the middle of the winter. You will stop experience warmth, but this doesn't mean that the quale of warmth is generated by the house.

True.  But something is different about inside and outside the house that is not ONLY in your consciousness, because others agree about it and measure it...it's called temperature.


3) I have done the thinking. I don't have to do the experiment to know it is true.
2)
 
That's what all the scholastics thought.

Brent


On Wednesday, 17 April 2019 03:00:14 UTC+3, Brent wrote:
1)
First, that's false.  The Rover is a very specific arrangement of atoms interacting with a specific environment.  It has memory, purpose, and the ability to act.

2)
 
Try removing the phosphate atoms from your brain and see what you believe...if anything.

Maybe because you think that the brain is just a bunch of atoms. No, it is not. If you were to measure what the electrons are doing in the brain, you would see that they are not moving according to known physics, but they are being moved by consciousness.
3)
 
And have you done this observation?  A Nobel prize awaits. 


Cosmin Visan

18 Apr 2019, 05:57:36
to Everything List
I'm not the only consciousness. There are other consciousnesses as well. But that's all that exists: consciousnesses and their interactions. Everything else is an external appearance of the internal interactions that take place between consciousnesses.

Cosmin Visan

18 Apr 2019, 06:05:35
to Everything List
Before going deeper into analyzing your claims, I would like to know whether your concept of machine has free will, because this is a very important concept for consciousness. If your machine doesn't have free will, then you are not talking about consciousness.

On Wednesday, 17 April 2019 19:08:40 UTC+3, Bruno Marchal wrote:

machine

Cosmin Visan

18 Apr 2019, 06:17:59
to Everything List
What does "self model" even mean ? Notice that any material attempt to implement "self model" leads to infinite regress. Because let's say that a machine has the parts A B C. To have a "self model" would mean to have another part (A B C) which would contain the "self model". But this would be an extra part of the "self" which would be needed to be included in the "self model" in order to actually have a "self model", so you would need another part (A B C (A B C)). But then again you would need to include this part as well in the "self model". So you will get to infinite regress. Therefore, you need a special kind of entity to obtained the desired effect without getting into infinite regress. And that's precisely why the self-reference that I'm talking about in the book is unformalizable. And as you say, being unformalizable, allows for bootstrapping consciousness into existence. You cannot simulate self-reference just by playing around with atoms. Self-reference just is. It just is the source of the entire existence. Is not up to anyone to simulate the source of existence. You can never obtain the properties of consciousness (meaning, purpose, free will, memory, intelligence, learning, acting, etc.) just by playing around with a bunch of atoms. All these properties of consciousness are having their source in the unformalizable self-reference.

Philip Thrift

18 Apr 2019, 06:28:55
to Everything List
I think Kumar is just part of the "Gödel-Löb logic hacker" gang (MIT, MIRI). They want working code, not "correctness".


cf. Löb’s Theorem
A functional pearl of dependently typed quining

- pt

Cosmin Visan

18 Apr 2019, 06:34:16
to Everything List
The only downside being that... the robot does not exist. People trick themselves too easily into personifying objects. There is no robot there; there is just a bunch of atoms banging into each other. You can move those atoms around all day long if you want. You will not create self-reference or "self models" or "imaginations of itself". These are just concepts that exist in the minds of the "researchers", and the "researchers", not getting outside the lab too often, start to believe their own fantasies.

smitra

18 Apr 2019, 06:52:20
to everyth...@googlegroups.com
There exists at least a "minimal system", comprising the rover and
perhaps controllers on Earth, that makes the rover capable of taking in
random information from its environment, interpreting it and then taking
whatever action is necessary to make sure it is able to stay clear of
trouble while performing certain tasks. The action it needs to take to
stay clear of trouble implicitly contains information about itself. Of
all the possible physical states the rover can be in, most correspond to
the rover being in a totally broken-down state. Then there are also
ideal states and not-so-ideal states. The rover (with the aid of the
controllers) is programmed to take action to prevent it from drifting
away to a less-than-ideal state, and if it is in a less-than-ideal state,
it will take action to try to get back into a more ideal state.
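
In code, that description amounts to a small feedback loop; a toy sketch with invented thresholds (nothing from an actual rover):

# Sketch: the rover-plus-controller system monitors its own state and picks
# the action that moves it back toward the "ideal" region whenever it drifts
# toward a less-than-ideal one.
MIN_BATTERY, IDEAL_BATTERY = 0.3, 0.8
MAX_TILT_DEG = 25

def control_step(state):
    """Choose an action from the rover's own (implicitly self-referring) state."""
    if state["battery"] < MIN_BATTERY:
        return "stop_and_recharge"
    if state["tilt_deg"] > MAX_TILT_DEG:
        return "back_away_from_slope"
    if state["battery"] < IDEAL_BATTERY:
        return "drive_toward_sunlight"
    return "continue_mission"

print(control_step({"battery": 0.25, "tilt_deg": 5}))   # -> stop_and_recharge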


Saibal

Philip Thrift

18 Apr 2019, 08:33:12
to Everything List


There are 

I: Information
E: Experience 
M: Matter 

Some think selfhood can be made of pure-I; others think pure-E.

Most modern materialists think I-type M is enough.
But experiential materialists think it's (E+I)-type M.

The ancient materialist Epicurus thought there were physical (I) and psychical (E) atoms, so he was already an experiential materialist.

- pt

Telmo Menezes

18 Apr 2019, 09:22:18
to 'Brent Meeker' via Everything List
Hi Cosmin,

On Wed, Apr 17, 2019, at 08:42, 'Cosmin Visan' via Everything List wrote:
1) Oh, I'm clearly not making that mistake. When I talk about emergence, I talk about ontological emergence, not the hand-waving epistemic kind that people usually talk about. The emergence that I'm talking about is the emergence of new qualia on top of previously existing qualia. This is what my book is about. So it's the real deal. Alternatively, have a look at my presentation from the Science & Nonduality conference where I talk about The Emergent Structure of Consciousness, where I talk about ontological emergence and I specifically mention to the audience that the epistemic emergence is false: https://www.youtube.com/watch?v=6jMAy6ft-ZQ
And what realizes the ontological emergence is self-reference through its property of looking-back-at-itself, with looking-back becoming more than itself, like in the cover of the book.

Ok, I saw your presentation. We agree on several things, but I don't quite get your qualia emergence idea. The things you describe make sense, for example the dissolution of meaning by repetition, but what makes you think that this is anything more than an observation in the domain of the cognitive sciences? Or, putting it another way, an observation/model of how our cognitive processes work?


2) Consciousness is not mysterious. And this is exactly what my book is doing: demystifying consciousness. If you decide to read my book, you will gain at the end of it a clarity of thinking through these issues that all people should have such that they will stop making the confusions that robots are alive.

I don't mean to discourage or attack you in any way, but once in a while someone with a book to promote shows up on this mailing list. No problem with me, I have promoted some of my work sometimes. My problem is with "if you read my book...". There are many books to read, please give the main ideas. Then I might read it.


3) No, they are not extraordinary claims. They are quite trivial. And they start from the trivial realization that the brain does not exist. The "brain" is just an idea in consciousness.

I have no problem with "the brain is just an idea in consciousness". I am not sure if this type of claim can be verified, or if it falls into the category of things we cannot assert, as Bruno would say. I do tend to think privately in those terms.

So ok, the brain does not exist. It is just a bunch of qualia in consciousness. But this is then true of every single thing! Again, no problem with this, but also no reason to abandon science. The machine doesn't exist either, but its electrons (which don't exist either) follow a certain pattern of behavior that we call the laws of physics. Why not the electrons in the brain? What's the difference?

Telmo.


On Wednesday, 17 April 2019 03:06:45 UTC+3, telmo wrote:


On Tue, Apr 16, 2019, at 18:42, 'Cosmin Visan' via Everything List wrote:
Because Rover is just a bunch of atoms. It is nothing more than the sum of its atoms. But in the case of self-reference/emergence, each new level is more than the sum of the previous levels.
1)
 

I disagree. My position on this is that people are tricked into thinking that emergence has some ontological status, when in fact it is just an epistemological tool. We need to think in higher-order structures to simplify things (organisms, organs, mean-fields, cells, ant colonies, societies, markets, etc.), but a Jupiter-brain could keep track of every entity separately and apprehend the entire thing at the same time. Emergence is a mental shortcut.

Self-reference is another matter (pun was accidental).


I don't know how you can trick yourself so badly into believing that if you put some rocks together, the rocks become alive. Maybe because you think that the brain is just a bunch of atoms. No, it is not. If you were to measure what the electrons are doing in the brain, you would see that they are not moving according to known physics, but that they are being moved by consciousness.
2)
 

For me, this is yet another version of "God did it". There is no point in attempting to explain some complex behavior if the explanation is even more complex and mysterious.
 
And this doesn't happen in a machine. In a machine, electrons move according to known physics.
 
3)
 
These are fairly extraordinary claims. Do you have any empirical data to support them?

Telmo.


On Tuesday, 16 April 2019 15:25:40 UTC+3, Bruno Marchal wrote:

How can you argue that Rover has no knowledge, when you say that knowledge is not formalisable?

Introducing some fuzziness to claim a negative thing about a relation of the type consciousness/machine is a bit frightening. It reminds me of the older sophisticated Catholic "reasoning" used to assert that Indians have no soul.

Bruno



Telmo Menezes

non lue,
18 avr. 2019, 09:36:4718/04/2019
à 'Brent Meeker' via Everything List
I agree. Would this not also apply to the concept of "existence"?


Then the mathematical theory of self-reference explains why machines will conclude that they are conscious, in that sense. They will know that they know something that they cannot doubt, yet cannot prove to us, or to anyone. And they can understand that they can test mechanism by observation.





Existence entails self-referential machines, self-referential evolutionary processes, the whole shebang. But not the other way around.

Existence of the natural numbers + the laws of addition and multiplication does that, and also justifies why you don't get any of that with any weaker theory, having fewer axioms than Robinson Arithmetic.

We have to assume numbers if we just want to define precisely what a machine is, but we cannot assume a physical universe: that is the price, we have to derive it from arithmetic “seen from inside”.

I agree.

My point is much less sophisticated. It is such a trivial observation that I would call it a Lapalissade. And yet, in our current culture, you risk being considered insane for saying it:

Our first-person experience of the world is what exists, as far as we know. Everything else is a model, including the third-person view. There was no Big Bang at the same ontological level that there is a blue pen on my desk, because the Big Bang is nobody's experience (or is it?). The Big Bang is something that the machine has to answer if you ask it certain questions. As you say, if the machine is consistent then the big bang is "true" in a sense; if the machine is malevolent all bets are off.

Telmo.

PGC

non lue,
18 avr. 2019, 09:38:0518/04/2019
à Everything List


On Thursday, April 18, 2019 at 3:22:18 PM UTC+2, telmo wrote:
Hi Cosmin,

On Wed, Apr 17, 2019, at 08:42, 'Cosmin Visan' via Everything List wrote:
1) Oh, I'm clearly not making that mistake. When I talk about emergence, I talk about ontological emergence, not the hand-waving epistemic kind that people usually talk about. The emergence that I'm talking about is the emergence of new qualia on top of previously existing qualia. This is what my book is about. So it's the real deal. Alternatively, have a look at my presentation from the Science & Nonduality conference where I talk about The Emergent Structure of Consciousness, where I talk about ontological emergence and I specifically mention to the audience that the epistemic emergence is false: https://www.youtube.com/watch?v=6jMAy6ft-ZQ
And what realizes the ontological emergence is self-reference through its property of looking-back-at-itself, with looking-back becoming more than itself, like in the cover of the book.

Ok, I saw your presentation. We agree on several things, but I don't quite get your qualia emergence idea. The things you describe make sense, for example the dissolution of meaning by repetition, but what makes you think that this is anything more than an observation in the domain of the cognitive sciences? Or, putting it another way, an observation/model of how our cognitive processes work?


2) Consciousness is not mysterious. And this is exactly what my book is doing: demystifying consciousness. If you decide to read my book, you will gain at the end of it a clarity of thinking through these issues that all people should have such that they will stop making the confusions that robots are alive.

I don't mean to discourage or attack you in any way,

Lol, getting the new kids in line with the program, Telmo? What, did Twitter get too boring for you?
 
but once in a while someone with a book to promote shows up on this mailing list. No problem with me, I have promoted some of my work sometimes. My problem is with "if you read my book...". There are many books to read, please give the main ideas. Then I might read it.


3) No, they are not extraordinary claims. They are quite trivial. And they start from the trivial realization that the brain does not exist. The "brain" is just an idea in consciousness.

I have no problem with "the brain is just an idea in consciousness". I am not sure if this type of claim can be verified, or if it falls into the category of things we cannot assert, as Bruno would say. I do tend to think privately in those terms.

Your certitude on public display always is impressive.
 

So ok, the brain does not exist. It is just a bunch of qualia in consciousness. But this is then true of every single thing!

Holy Moses.
 
Again, no problem with this, but also no reason to abandon science.

Yeah, in hands of the proper authorities such as ourselves, science is a powerful tool.

Telmo, grow a pair. Nobody ever told you: you don't have to copy Bruno's. Grow your own. PGC
 

Bruno Marchal

non lue,
18 avr. 2019, 10:04:1518/04/2019
à everyth...@googlegroups.com
On 18 Apr 2019, at 12:05, 'Cosmin Visan' via Everything List <everyth...@googlegroups.com> wrote:

Before going deeper into analyzing your claims, I would like to know if your concept of machine has free will. Because this is a very important concept for consciousness. If your machine doesn't have free will, then you are not talking about consciousness.

They have as much free will as humans (a direct consequence of the Mechanist assumption).

Now, many people defending, or attacking the notion of free-will, have an inconsistent notion of free-will.

So, when I say that machines, and humans in particular, have free will, I mean that they have the ability to make a choice in the absence of complete information. Free will relies on the intrinsic “liberty” or “universality” of the universal number. The universal numbers are born with 8 conflicting views of reality, but they have partial control over what happens to them. When they attempt to get total control, they lose universality/liberty/free will.

Bruno





On Wednesday, 17 April 2019 19:08:40 UTC+3, Bruno Marchal wrote:

machine


Bruno Marchal

non lue,
18 avr. 2019, 10:10:4718/04/2019
à everyth...@googlegroups.com
Quite interesting. That paper explains well why we cannot use the Curry-Howard isomorphism in the mechanist context, except for S4Grz ([]p & p), which they do not tackle, as far as I have seen. I am glad to see people looking at Löb’s theorem, though. It is the main ingredient of the whole machine theology.

Bruno Marchal

non lue,
18 avr. 2019, 10:19:5218/04/2019
à everyth...@googlegroups.com
On 18 Apr 2019, at 14:33, Philip Thrift <cloud...@gmail.com> wrote:



There are 

I: Information
E: Experience 
M: Matter 

Some think selfhood can be made of pure-I; others think pure-E.

Most modern materialists think I-type M is enough.
But experiential materialists think it's (E+I)-type M.

The ancient materialist Epicurus thought there were physical (I) and psychical (E) atoms, so he was already an experiential materialist.


But we have not found (primitively) psychical atoms.

Nor have we really found serious evidence for (primitively) material atoms.

A case could be made that the fermions are the material atoms and the bosons the psychical atoms, but that would be a sort of abuse of words, as both notions are well defined and third-person (or first-person plural) sharable, and adding mind to them seems arbitrary, does not seem to lead to testable conclusions, and does not explain anything.

And, as I said, to introduce unintelligible axioms to allow oneself to feel “superior” (in this case: conscious) to some other creature is a bit frightening, as racism proceeds in a similar way.

Bruno






- pt

On Thursday, April 18, 2019 at 5:34:16 AM UTC-5, Cosmin Visan wrote:
The only downside being that... the robot does not exist. People are tricking themselves too easily into personifying objects. There is no robot there, there are just a bunch of atoms that bang into each others. You can move those atoms around all day long as you want. You will not create self-reference or "self models" or "imaginations of itself". These are just concepts that exist in the mind of the "researchers" and the "researchers" not getting outside of the lab too often, start to believe their own fantasies.
 
On Thursday, 18 April 2019 10:11:09 UTC+3, Philip Thrift wrote:

Columbia engineers create a robot that can imagine itself


Brent Meeker

non lue,
18 avr. 2019, 13:34:3118/04/2019
à everyth...@googlegroups.com


On 4/18/2019 2:19 AM, Bruno Marchal wrote:
For
instance, without an observer to interpret a certain pile of atoms as
a machine, it is just a pile of atoms.
Are you saying that Mars Rover cannot interpret some of its data on Mars, when nobody observed it, or are you saying that Mars Rover has enough observation abilities?

What makes the Mars Rover a machine is that it can act and react to its environment.  If it's an AI Rover it can learn and plan and reflect.  To invoke an "observer" is just to push the problem away to "What is an observer?"

Brent

Brent Meeker

non lue,
18 avr. 2019, 13:44:0418/04/2019
à everyth...@googlegroups.com
I see you are of the scholastic school of philosophers (I thought they were all dead) who suppose that they can make things true by giving them "precise definitions" in words.  You should study some science and learn the importance of operational definitions in  connecting words to facts.

Brent

Brent Meeker

non lue,
18 avr. 2019, 13:56:0818/04/2019
à everyth...@googlegroups.com


On 4/18/2019 3:17 AM, 'Cosmin Visan' via Everything List wrote:
What does "self model" even mean ? Notice that any material attempt to implement "self model" leads to infinite regress.

No.  A "model" is not a complete description, it's a representation of some specific aspects.  Your "self-reference" cannot refer to everything about yourself...which according to you is a stream of consciousness.

Brent


Brent Meeker

non lue,
18 avr. 2019, 16:54:0418/04/2019
à everyth...@googlegroups.com


On 4/18/2019 3:34 AM, 'Cosmin Visan' via Everything List wrote:
The only downside being that... the robot does not exist. People are tricking themselves too easily into personifying objects. There is no robot there, there are just a bunch of atoms

I thought you didn't believe in atoms.  I look forward to your construction of atoms from consciousness of...what?  atoms?

Brent

that bang into each others. You can move those atoms around all day long as you want. You will not create self-reference or "self models" or "imaginations of itself". These are just concepts that exist in the mind of the "researchers" and the "researchers" not getting outside of the lab too often, start to believe their own fantasies.
 
On Thursday, 18 April 2019 10:11:09 UTC+3, Philip Thrift wrote:

Columbia engineers create a robot that can imagine itself

Brent Meeker

non lue,
18 avr. 2019, 16:55:1718/04/2019
à everyth...@googlegroups.com
Pretty much like a bacterium, that was programmed by evolution.

Brent

Russell Standish

non lue,
18 avr. 2019, 19:24:1718/04/2019
à everyth...@googlegroups.com
To not recognise the observer is simply to put the problem under a
rug. Without an interpretation that voltages in excess of 3V represent
1, and voltages less than 2V represent 0, the logic circuits are just
analogue electrical circuits. Without such an interpretation (and
ipso facto an observer), the rover is not processing data at all!

Note an observer need be nothing more than a mapping of physical space
to semantic space. One possibility is to bootstrap the observer by
self-reflection.
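To make the mapping idea concrete, here is a minimal sketch (Python; the threshold values follow the 3V/2V example above, and the function name is just illustrative) of an "observer" as a map from physical voltages to semantic bits:

from typing import Optional

def interpret(voltage: float) -> Optional[int]:
    # Map a physical voltage to a logical bit; the band between 2V and 3V
    # is left undefined, as in the example above.
    if voltage > 3.0:
        return 1
    if voltage < 2.0:
        return 0
    return None

# The same analogue trace counts as data only relative to this interpretation.
trace = [0.4, 4.8, 1.1, 3.6]
print([interpret(v) for v in trace])   # [0, 1, 0, 1]

Without some such mapping the trace is just a list of voltages; with it, the same trace is data.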

Russell Standish

non lue,
18 avr. 2019, 19:27:3118/04/2019
à Everything List
On Thu, Apr 18, 2019 at 03:17:59AM -0700, 'Cosmin Visan' via Everything List wrote:
> What does "self model" even mean ? Notice that any material attempt to
> implement "self model" leads to infinite regress. Because let's say that a
> machine has the parts A B C. To have a "self model" would mean to have another
> part (A B C) which would contain the "self model". But this would be an extra
> part of the "self" which would be needed to be included in the "self model" in
> order to actually have a "self model", so you would need another part (A B C (A
> B C)). But then again you would need to include this part as well in the "self
> model". So you will get to infinite regress. Therefore, you need a special kind
> of entity to obtained the desired effect without getting into infinite regress.
> And that's precisely why the self-reference that I'm talking about in the book
> is unformalizable. And as you say, being unformalizable, allows for
> bootstrapping consciousness into existence. You cannot simulate self-reference
> just by playing around with atoms. Self-reference just is. It just is the
> source of the entire existence. Is not up to anyone to simulate the source of
> existence. You can never obtain the properties of consciousness (meaning,
> purpose, free will, memory, intelligence, learning, acting, etc.) just by
> playing around with a bunch of atoms. All these properties of consciousness are
> having their source in the unformalizable self-reference.
>

The same argument was made in favour of vitalism - before the
structure and mechanics of DNA was discovered.

Self-reference is formalisable. See Löb's theorem.

Brent Meeker

non lue,
18 avr. 2019, 23:45:0218/04/2019
à everyth...@googlegroups.com


On 4/18/2019 4:24 PM, Russell Standish wrote:
> On Thu, Apr 18, 2019 at 10:34:26AM -0700, 'Brent Meeker' via Everything List wrote:
>>
>> On 4/18/2019 2:19 AM, Bruno Marchal wrote:
>>
>> For
>> instance, without an observer to interpret a certain pile of atoms as
>> a machine, it is just a pile of atoms.
>>
>> Are you saying that Mars Rover cannot interpret some of its data on Mars, when nobody observed it, or are you saying that Mars Rover has enough observation abilities?
>>
>>
>> What makes the Mars Rover a machine is that it can act and react to its
>> environment.  If it's an AI Rover it can learn and plan and reflect.  To invoke
>> an "observer" is just push the problem away to "What is an observer?"
> To not recognise the observer is simply to put the problem under a
> rug. Without an interpretation that voltages in excess of 3V represent
> 1, and voltages less than 2V represent 0, the logic circuits are just
> analogue electrical circuits. Without such an interpretation (and
> ipso facto an observer), the rover is not processing data at all!

No.  That's the point of the environment.  Acting in the environment provides purpose and meaning to all that stuff.  Just think of how you, if you were an observer, would estimate whether or not the Rover was intelligent, or self-aware?

>
> Note an observer need be nothing more than a mapping of physical space
> to semantic space. One possibility is to bootstrap the observer by
> self-reflection.

So you think the Rover will interpret data if some other AI consults its dictionary and nominates what it sees as "A Mars Rover"???

Brent

>
>

Cosmin Visan

non lue,
19 avr. 2019, 03:09:3919/04/2019
à Everything List
1) The qualia of black-and-white is not on the same level with the qualia of colors. The qualia of colors include the qualia of black-and-white. You cannot see a color if that color is not emergent upon black-and-white (or more specifically shades-of-gray). You cannot experience music if music is not emergent upon sounds. You cannot taste chocolate if chocolate is not emergent upon sweet. You cannot understand Pythagoras Theorem if the understanding of Pythagoras Theorem doesn't emerge upon the understandings of triangles, angles, lengths, etc. And this is real emergence, because you really get new existent entities that never existed before in the history of existence. God himself never experienced these qualia.

I don't understand the second part of your question regarding our "cognitive processes". Are you referring to our specific form of human consciousness? I don't think this is restricted only to our human consciousness, for the reason that it happens to all qualia that we have. All qualia domains are structured in an emergent way.

2) The main ideas in my book are the emergent structure of consciousness and the self-reference which gives birth to the emergent structure. The ideas about self-reference that I have are rooted in phenomenology. First I observe that consciousness is structured in an emergent way, and then I conclude that the reason it is like this is because there is an entity called "self-reference" that looks-back-at-itself and in this process includes the previously existing self and brings a new transcendent self into existence, like in the case of colors emerging on top of black-and-white.

3) The difference is that in an emergent system you have top-down influence in levels. Electrons in simple systems like the ones in physical experiments have little input from any top level, so they behave according to their own level and display certain laws. But when they are part of a greater holistic system, like in the brain (which is just an appearance of internal workings in consciousness), they receive top-down influence from the intentions in consciousness, and so they behave according to the will of consciousness. It is the same phenomenon as when we speak, which I also gave in my presentation. When we speak, we act from the level of intending to transmit certain ideas. And this level exercises top-down influence in levels, and the sentences, words and letters come out in accordance with the intention from the higher level.

On Thursday, 18 April 2019 16:22:18 UTC+3, telmo wrote:
Hi Cosmin,

1)

Cosmin Visan

non lue,
19 avr. 2019, 03:16:1519/04/2019
à Everything List
It's still not clear to me what your concept of "machine" is. Is it just an abstract theory or is it some actually existing entity ? If it is actually existing, is it made out of atoms ? Because if it is made out of atoms, where does its free will come from ? In the case of humans free will comes from the fact that we are not made out of atoms, but we are consciousnesses, "atoms" being just ideas in us.

Cosmin Visan

non lue,
19 avr. 2019, 03:21:4419/04/2019
à Everything List
It is a precise definition in the sense that if I see red, then I see red. You cannot come and tell me: "Well... maybe it wasn't red, maybe it was yellow." No! It was red! And if you then say: "Oh, but also the robot sees red, because...", then you enter a realm of fantasy that has nothing to do whatsoever with rational thinking. We are not interested in fantasies, i.e. "operational definitions". We are interested in truth. And by saying the robot sees red, you are not doing anything to help understand truth, you are just playing word-games.

Cosmin Visan

non lue,
19 avr. 2019, 03:29:0819/04/2019
à Everything List
Then if it is not a complete description, why do you call it "self-reference" ? You should just call it: "a table of parameters". The true self-reference is complete: it is included in itself in its entirety. And is doing this without getting into infinite regress. The reason it can avoid infinite regress is that the true self-reference is an unformal entity. Or as I read some guy saying: self-reference neither is nor not-is, self-reference neither exists nor not-exists. It is a very special kind of entity.

Cosmin Visan

non lue,
19 avr. 2019, 03:40:2819/04/2019
à Everything List
Of course there are no atoms. The point is that the robot follows the same behavior as the appearances of "atoms" in our consciousness. In other words, if you know the behavior of atoms (even though they are nothing more than appearances in consciousness), you know the behavior of the robot. There is no free will there, no act, no purpose, etc. But in the case of consciousnesses, the "atoms" in the "brain" are not enough to predict the behavior of a conscious being, because the "atoms" in the "brain" receive top-down influence in levels from the intentions of consciousnesses. Consciousnesses really have free will, really act, really have purposes. This has nothing to do with "scholastic philosophy". This is just rational thinking. If you use your rationality you realize these things. If not, you start to believe in fantasies in which robots have souls.

Philip Thrift

non lue,
19 avr. 2019, 03:44:3919/04/2019
à Everything List


On Thursday, April 18, 2019 at 6:27:31 PM UTC-5, Russell Standish wrote:
On Thu, Apr 18, 2019 at 03:17:59AM -0700, 'Cosmin Visan' via Everything List wrote:

Self-reference is formalisable. See Löb's theorem.





The problematic part of "self-reference" is "self".

HOL theorem proving agents - as developed at MIRI and MIT-CSAIL - (attempt to) implement Löbian provability-logic reflection. ("Self-" is used a lot [ https://intelligence.org/files/TilingAgentsDraft.pdf ].) This may be sufficient for non-conscious, intelligent robots.

But if "self" is what (for example) Galen Strawson* defines, the above is not "self-reflection".

Because there is no "self".


* at least some ultimates must be experiential


- pt

Cosmin Visan

non lue,
19 avr. 2019, 03:45:4919/04/2019
à Everything List
Vitalism is still true. Nobody knows how a being develops from embryo to its fully developed form. DNA is just a book. Nobody knows how it actually functions. It might well receive top-down influence in levels from higher order consciousness that guides the development of the biological entity.

Then Löb is just talking about other things. The true self-reference is not formalisable, since it neither is nor not-is.

Cosmin Visan

non lue,
19 avr. 2019, 03:52:0319/04/2019
à Everything List
Exactly. This is the whole point. In order to have self-reference, you need to have a self. And you don't just get a self by arranging atoms in certain positions. You don't get a self by bringing a bunch of atoms together and calling them "a robot", because calling them "a robot" is just something that you yourself do in your own consciousness. Just because you call that bunch of atoms "a robot", it doesn't mean that all of a sudden magic happens and that bunch of atoms really becomes "a robot", or a self. So you don't just get selves. Self is a rather specific entity. Self is exactly that entity that is included by default in the very notion of "self-reference". Self is that ontological entity that has as its very property the property of referring-back-to-itself. And automatically that kind of entity is unformalizable.

Philip Thrift

non lue,
19 avr. 2019, 04:09:3619/04/2019
à Everything List

Of course (as you know) I say one could bring a "bunch of atoms together" to get something that is a conscious self.

First 3D Engineered Vascularized Human Heart Is Bioprinted

In the future: a Brain?

The problem is not appreciating experience* !== information.

* Experience  (Experientiality) as an ultimate property of matter.

- pt

Bruno Marchal

non lue,
19 avr. 2019, 05:21:1019/04/2019
à everyth...@googlegroups.com
I am not sure what you mean by “existence” when used alone. It might be a “professional deformation”, but to me existence is a logical quantifier, and not an intrinsic property.

I think that maybe consciousness is a fixed point of existence, in the sense that “existence of consciousness” is equivalent with “consciousness”.

If you are using “existence” in a more sophisticated sense, then this should be elaborated.

We cannot prove the existence of anything without assuming the existence of something. With mechanism, we have to assume the existence of numbers (or to derive them from something Turing equivalent, as I did with the combinators), so I doubt that existence is immediately knowable, etc. Unless, again, you meant “existence of consciousness”, but then this cannot apply to define consciousness.

You might need to elaborate about what you mean by “existence”, when used alone.







Then the mathematical theory of self-reference explains why machines will conclude that they are conscious, in that sense. They will know that they know something that they cannot doubt, yet cannot prove to us, or to anyone. And they can understand that they can test mechanism by observation.





Existence entails self-referential machines, self-referential evolutionary processes, the whole shebang. But not the other way around.

Existence of the natural numbers + the laws of addition and multiplication does that, and also justifies why you don't get any of that with any weaker theory, having fewer axioms than Robinson Arithmetic.

We have to assume numbers if we just want to define precisely what a machine is, but we cannot assume a physical universe: that is the price, we have to derive it from arithmetic “seen from inside”.

I agree.

My point is much less sophisticated. It is such a trivial observation that I would call it a Lapalissade. And yet, in our current culture, you risk being considered insane for saying it:

Our first-person experience of the world is what exists, as far as we know.

Yes, but as far as we know it for sure, we know only our own personal experience here-and-now. We have no other certainties. OK.





Everything else is a model, including the third-person view.

Yes. (Of course a logician would call that a theory, as a model = a reality, in the way painters use that word.)



There was no Big Bang at the same ontological level that there is a blue pen in my desk, because the Big Bang is nobody's experience (or is it?).

With respect to Mechanism, the pen on the desk is similar to the Big Bang. We believe in them from indirect evidence. It does not seem so for the pen, because our brain makes the relevant computations mostly unconsciously. For the Big Bang, we have used many more brains (using indirectly the brains of colleagues, Hubble, Einstein, and using telescopes), making the computations more consciously, but it is just a matter of degree.





The Big Bang is something that the machine has to answer if you ask it certain questions. As you say, if the machine is consistent then the big bang is "true" in a sense; if the machine is malevolent all bets are off.


Gödel proved that “consistent” is the same as having a model (in the logician's sense of a reality, not a theory). So the notion of truth is always relative to a model, the reality we are pointing to. In our local reality, there is evidence of personal births, stars, galaxies, and the Big Bang. Now if the logic of the material modes were contradicted by nature, that would be evidence that the Big Bang, and some physical stuff, is ontologically real; but thanks to quantum mechanics, we have the contrary evidence, which means the Big Bang is more like a percept in some video game (which all exist in arithmetic). Below the substitution level, mechanism predicts that we can “see” (indirectly) the presence of the infinitely many computations which support us, and that explains the quantum from the machine’s theory of consciousness/knowledge/observation.

The fundamental science is theology, or metaphysics. Physics is a statistics deducible from the logic of the first person plural view ([]p & <>t, which you can read “p is true in all accessible models and there is at least one such model”): that gives probability one for p. ([]p alone cannot work, because of the cul-de-sac worlds where []p is vacuously true.)
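As a toy illustration of the cul-de-sac point (a hypothetical two-world Kripke frame in Python, not part of the original argument): at a world with no successors, []p comes out vacuously true, while []p & <>t does not.

# Toy Kripke frame: w0 can access cul_de_sac; cul_de_sac accesses nothing.
access = {"w0": {"cul_de_sac"}, "cul_de_sac": set()}
p_holds = {"w0": False, "cul_de_sac": False}   # p is false everywhere

def box_p(w):
    # []p at w: p holds in every world accessible from w (vacuously true at dead ends)
    return all(p_holds[u] for u in access[w])

def diamond_true(w):
    # <>t at w: there is at least one accessible world
    return len(access[w]) > 0

for w in ("w0", "cul_de_sac"):
    print(w, box_p(w), box_p(w) and diamond_true(w))
# At cul_de_sac, []p is vacuously True while []p & <>t is False,
# which is why the conjunction is used for the probability-one reading.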

The malevolent machine must be invoked to remain logically correct, even if that can be judged unreasonable. I mean, if Z1* departs from the observation of nature, it means that mechanism is false OR we are in a malevolent simulation. But up to now, thanks to “many-world QM”, nature confirms Mechanism, and thus indirectly the whole theory of consciousness, or theology.

Bruno

Cosmin Visan

non lue,
19 avr. 2019, 05:21:2519/04/2019
à Everything List
No, this cannot be done. The Self is eternal and it exists necessarily by the fact that it refers to itself. All you can do is give the Self different experiences and make him believe he is an individual consciousness. This is happening for example in biological reproduction. What biological reproduction does is make the unique Self believe he is an independent consciousness. But in order to make him believe that, you need to follow specific conditions as they are realized in biology. As of today, we have no idea what those conditions are that biology satisfies in order to make the Self believe he is an independent consciousness.

Cosmin Visan

non lue,
19 avr. 2019, 05:24:3819/04/2019
à Everything List
And to add more regarding biology, take into account that reincarnation preserves memories from past lives. So biology is not merely "putting atoms in the right order". It is more than that. The conditions that biology satisfies in order to individuate the unique Self go beyond mere arrangements of atoms.

Bruno Marchal

non lue,
19 avr. 2019, 05:27:1819/04/2019
à everyth...@googlegroups.com
On 18 Apr 2019, at 19:34, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:



On 4/18/2019 2:19 AM, Bruno Marchal wrote:
For
instance, without an observer to interpret a certain pile of atoms as
a machine, it is just a pile of atoms.
Are you saying that Mars Rover cannot interpret some of its data on Mars, when nobody observed it, or are you saying that Mars Rover has enough observation abilities?

What makes the Mars Rover a machine is that it can act and react to its environment. 

Yes. And thanks to the fact that it is implemented in the physical reality (the sum over all computations), its reaction will fit with its most probable environment, which is (by definition here) the physical environment.
Just to be precise.




If it's an AI Rover it can learn and plan and reflect. 

And get the right reaction, whatever the environment is, hopefully not departing too much from the physical one.

The “essence” of a computation is to be counterfactually correct. 




To invoke an "observer" is just push the problem away to "What is an observer?”


But to define the physical reality, we need to define the observer. With mechanism, the observer is just a number/machine, relative to some other numbers/machines. We can define an ideal observer by a sound Löbian machine. Its physical reality will be determined by the logic of observation (mainly []p & <>t).

Bruno





Brent

Bruno Marchal

non lue,
19 avr. 2019, 05:39:0919/04/2019
à everyth...@googlegroups.com
On 18 Apr 2019, at 19:56, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:



On 4/18/2019 3:17 AM, 'Cosmin Visan' via Everything List wrote:
What does "self model" even mean ? Notice that any material attempt to implement "self model" leads to infinite regress.

No.  A "model" is not a complete description, it's a representation of some specific aspects. 

Well, indeed. But that is the sense of “model” when used in physics. In logic, the model is the reality that we are doing the theory about.

We should avoid the term “model” and talk only of “theory” and “reality”, or we risk bringing confusion.

The theory is the (usually incomplete) representation, like a painting. The model/reality is what is supposed to be represented.

For example, the theory of arithmetic is:

0 ≠ s(x)
s(x) = s(y) -> x = y
x = 0 v Ey(x = s(y))    
x+0 = x
x+s(y) = s(x+y)
x*0=0
x*s(y)=(x*y)+x

But the arithmetical reality is the highly non computable and non axiomatisable mathematical structure involving the infinite set N, with 0, +, * and s admitting the standard interpretation we are familiar with.
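As a side illustration (a sketch in Python, reading the successor terms s(s(...s(0)...)) as ordinary natural numbers), the last four axioms can be executed directly as a recursive program:

def add(x, y):
    # x + 0 = x ; x + s(y) = s(x + y)
    return x if y == 0 else 1 + add(x, y - 1)

def mul(x, y):
    # x * 0 = 0 ; x * s(y) = (x * y) + x
    return 0 if y == 0 else add(mul(x, y - 1), x)

print(add(2, 3), mul(2, 3))   # 5 6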



Your "self-reference" cannot refer to everything about yourself...which according to you is a stream of consciousness.


Yes.



Brent

Because let's say that a machine has the parts A B C. To have a "self model" would mean to have another part (A B C) which would contain the "self model". But this would be an extra part of the "self" which would be needed to be included in the "self model" in order to actually have a "self model", so you would need another part (A B C (A B C)). But then again you would need to include this part as well in the "self model". So you will get to infinite regress.

I missed this (from Cosmin). Of course that is Driesch's “proof” that Descartes would never solve his self-reproduction problem, but that has been solved by Kleene's second recursion theorem (or just Gödel's self-referential sentence construction). Self-reference here is just obtained by the syntactical recursion:

If D’x’ gives ‘x’x’’, then D’D’ gives ‘D’D’’.

See my paper “Amoeba, Planaria and Dreaming Machine” for more on this. I have used the recursion theorem to program a “planaria”: a program that you can cut into pieces, where each piece regenerates the whole program, with its original functionality back.

Bruno

Bruno Marchal

non lue,
19 avr. 2019, 05:41:1319/04/2019
à everyth...@googlegroups.com

> On 19 Apr 2019, at 01:24, Russell Standish <li...@hpcoders.com.au> wrote:
>
> On Thu, Apr 18, 2019 at 10:34:26AM -0700, 'Brent Meeker' via Everything List wrote:
>>
>>
>> On 4/18/2019 2:19 AM, Bruno Marchal wrote:
>>
>> For
>> instance, without an observer to interpret a certain pile of atoms as
>> a machine, it is just a pile of atoms.
>>
>> Are you saying that Mars Rover cannot interpret some of its data on Mars, when nobody observed it, or are you saying that Mars Rover has enough observation abilities?
>>
>>
>> What makes the Mars Rover a machine is that it can act and react to its
>> environment. If it's an AI Rover it can learn and plan and reflect. To invoke
>> an "observer" is just push the problem away to "What is an observer?"
>
> To not recognise the observer is simply to put the problem under a
> rug. Without an interpretation that voltages in excess of 3V represent
> 1, and voltages less than 2V represent 0, the logic circuits are just
> analogue electrical circuits. Without such an interpretation (and
> ipso facto an observer), the rover is not processing data at all!
>
> Note an observer need be nothing more than a mapping of physical space
> to semantic space. One possibility is to bootstrap the observer by
> self-reflection.

That is needed to just define the “physical space”. This one cannot be invoked through an ontological commitment, or mechanism is abandoned of course.

Bruno



>
>
> --
>
> ----------------------------------------------------------------------------
> Dr Russell Standish Phone 0425 253119 (mobile)
> Principal, High Performance Coders
> Visiting Senior Research Fellow hpc...@hpcoders.com.au
> Economics, Kingston University http://www.hpcoders.com.au
> ----------------------------------------------------------------------------
>

Bruno Marchal

non lue,
19 avr. 2019, 06:12:0319/04/2019
à everyth...@googlegroups.com
On 19 Apr 2019, at 09:16, 'Cosmin Visan' via Everything List <everyth...@googlegroups.com> wrote:

It's still not clear to me what your concept of "machine" is. Is it just an abstract theory or is it some actually existing entity ?

It is a machine in the sense of computer science. It is purely immaterial, and can be represented by numbers, by combinators, or by sets of quadruples (Turing).

My favorite definition of machine is through the combinators. I can define a machine in this (recursive) way:

K is a machine
S is a machine

If x and y are machines, then (x y) is a machine.

So examples of machines are K, S, (K K), (S K), … ((K K) K), (K (K K)), …

We abbreviate ((K K) K) by KKK, and (K (K K)) by K(KK). We suppress parentheses by associating to the left.

The functioning of the machine is given by the two reduction rules:

Kxy -> x
Sxyz -> xz(yz)

This can be shown to be Turing universal, so any other digital machine, and any digital machine execution, can be emulated faithfully by such machines.

See the (recent) combinator threads for more on this.

A simple example of a computation is SKSK -> KK(SK) -> K.
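For readers who want to check such reductions mechanically, here is a minimal sketch in Python (the nested-pair encoding of application is just one possible choice), implementing the two rules and verifying that SKSK reduces to K:

# Terms are 'K', 'S', or a pair (f, a) standing for the application (f a).
def step(t):
    # Try one reduction; return (new_term, True) or (t, False).
    if isinstance(t, tuple):
        f, a = t
        if isinstance(f, tuple) and f[0] == 'K':          # Kxy -> x
            return f[1], True
        if isinstance(f, tuple) and isinstance(f[0], tuple) and f[0][0] == 'S':
            x, y, z = f[0][1], f[1], a                    # Sxyz -> xz(yz)
            return ((x, z), (y, z)), True
        nf, changed = step(f)
        if changed:
            return (nf, a), True
        na, changed = step(a)
        return ((f, na) if changed else t), changed
    return t, False

def reduce(t):
    changed = True
    while changed:
        t, changed = step(t)
    return t

SKSK = ((('S', 'K'), 'S'), 'K')    # (((S K) S) K)
print(reduce(SKSK))                # 'K', via KK(SK) -> K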



If it is actually existing,

If you agree that x + 2 = 5 admits a solution, then it exists in that sense. All other senses of existence are derived from existence in that sense. There are many.





is it made out of atoms ? Because if it is made out of atoms, where does its free will come from ?


It is of course not made of physical atoms, but you can call “S” and “K” combinatoric atoms.

No problem for free will for the universal combinator, which of course exists (as the combinator machinery is Turing universal), and universal machines (immaterial or material computers) have free will.



In the case of humans free will comes from the fact that we are not made out of atoms, but we are consciousnesses, "atoms" being just ideas in us.

OK. But the derivation must explain why atoms have electrons, why orbitals, etc. But yes, the physical atoms are eventually reduced to dreams made by us (us = the combinators, not the humans, which are a very particular case of machines/numbers/combinators!).

You might buy some good introductory book on computer science. The original papers are the best, I think, so Martin Davis's books at Dover are well suited to begin with. He uses the Turing formalism, where a machine is defined by a set of quadruples like q_7 S_9 S_54 q_6, which means: if I am in state q_7, in front of the symbol S_9, I overwrite it with the symbol S_54 and go to the state q_6. There are also instructions to move left or right on some locally finite, but extendable, register/tape.

If we assume the Church-Turing thesis, any similar formalism will work. 
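As an illustration of the quadruple formalism (a sketch in Python; the toy machine and its table are made up for the example, not taken from Davis), a quadruple (q, S, act, q') is read as: in state q, scanning symbol S, perform act (write a symbol, or move 'L'/'R') and enter state q'.

def run(quads, tape, state, pos, steps=100):
    # tape is a dict from positions to symbols; missing cells read as 0 (blank).
    table = {(q, s): (act, q2) for q, s, act, q2 in quads}
    for _ in range(steps):
        key = (state, tape.get(pos, 0))
        if key not in table:          # no quadruple applies: halt
            return tape
        act, state = table[key]
        if act == 'L':
            pos -= 1
        elif act == 'R':
            pos += 1
        else:
            tape[pos] = act           # overwrite the scanned symbol
    return tape

# Toy machine: erase a block of 1s, moving right until it hits a blank.
quads = [('q0', 1, 0, 'q1'), ('q1', 0, 'R', 'q0')]
print(run(quads, {0: 1, 1: 1, 2: 1}, 'q0', 0))   # {0: 0, 1: 0, 2: 0}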


Bruno





On Thursday, 18 April 2019 17:04:15 UTC+3, Bruno Marchal wrote:

They have as much free will as humans (a direct consequence of the Mechanist assumption).


Bruno Marchal

non lue,
19 avr. 2019, 06:27:5819/04/2019
à everyth...@googlegroups.com

On 18 Apr 2019, at 12:17, 'Cosmin Visan' via Everything List <everyth...@googlegroups.com> wrote:

What does "self model" even mean ? Notice that any material attempt to implement "self model" leads to infinite regress. Because let's say that a machine has the parts A B C. To have a "self model" would mean to have another part (A B C) which would contain the "self model". But this would be an extra part of the "self" which would be needed to be included in the "self model" in order to actually have a "self model", so you would need another part (A B C (A B C)). But then again you would need to include this part as well in the "self model". So you will get to infinite regress.

That infinite regress problem can be avoided.

See my answer to a post to Brent (sent today).

The idea is simple: if Dx gives xx, then DD gives DD.

In this case, DD will never stop, and that is the usual “first” recursion. But you can make a program stopping on its own code, by using special quotation, or some typical computer science construct, like the SMN theorem of Kleene. It is more like:

If D’x’ gives ‘x’x’’, then D’D’ gives ‘D’D’’.

That is the starting point of almost all of theoretical computer science, and the study of self-reference in arithmetic is very well developed.
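A concrete toy version of the construction, for those who like code (a standard two-line Python quine; the variable name d is chosen only to echo the D above): the program substitutes a quotation of itself into itself, and so printing the result reproduces exactly its own source.

d = 'd = %r\nprint(d %% d)'
print(d % d)

Here d plays the role of D: applying the text to a quotation of itself (d % d) yields the whole program back, with no infinite regress.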

This misses the first person self-reference, which typically does not admit any formalisation (provably so), but it can still be shown to exist, making the point that the universal machines know that they have a first person notion, and know that they cannot define it. The machines are as confused as us by Ramana Maharshi's koan: “Who am I?”.





Therefore, you need a special kind of entity to obtained the desired effect without getting into infinite regress. And that's precisely why the self-reference that I'm talking about in the book is unformalizable.


As I said, the machine already knows this. The universal machines (numbers, combinators, or physical) know that they have a soul (immaterial, immortal, and responsible for the illusion of the physical universe and its lawfulness).



And as you say, being unformalizable, allows for bootstrapping consciousness into existence.

OK.



You cannot simulate self-reference just by playing around with atoms. Self-reference just is.

Not OK. You can simulate the self-reference with atoms, and that enacts the experience of the first person, which is distributed over the whole of arithmetic, and can be shown to be non-formalisable, and not even definable.



It just is the source of the entire existence.

It is the source of the entire physical existence, but we have to assume the numbers, or the combinators.


Is not up to anyone to simulate the source of existence.

Indeed.



You can never obtain the properties of consciousness (meaning, purpose, free will, memory, intelligence, learning, acting, etc.) just by playing around with a bunch of atoms.

You cannot singularise them in some reality, and indeed atoms are immaterial constructs depending on intrinsic relations between all universal machines/numbers/combinators.




All these properties of consciousness are having their source in the unformalizable self-reference.

Yes, but still amenable to meta-formalisation, when we assume mechanism, which then explains in detail why the first person is not formalisable, and indeed independent of formalisation.

Bruno






On Thursday, 18 April 2019 04:00:31 UTC+3, Russell Standish wrote:
each consciousness bootstraps its own
meaning from self-reference. Unless the mars rover has a self model in
its code (and I don't think it was constructed that way), then I would
extremely doubt it has any sort of consciousness.

Telmo Menezes

non lue,
19 avr. 2019, 08:09:5419/04/2019
à 'Brent Meeker' via Everything List


On Fri, Apr 19, 2019, at 09:09, 'Cosmin Visan' via Everything List wrote:
1) The qualia of black-and-white is not on the same level with the qualia of colors. The qualia of colors include the qualia of black-and-white. You cannot see a color if that color is not emergent upon black-and-white (or more specifically shades-of-gray). You cannot experience music if music is not emergent upon sounds. You cannot taste chocolate if chocolate is not emergent upon sweet. You cannot understand Pythagoras Theorem if the understanding of Pythagoras Theorem doesn't emerge upon the understandings of triangles, angles, lengths, etc. And this is real emergence, because you really get new existent entities that never existed before in the history of existence. God himself never experienced these qualia.

Ok, I think I understand your presentation better now. You make an interesting point, I don't think I ever considered emergence purely on the side of qualia as you describe.

There is something here that still does not convince me. For example, you say that the "chocolate taste" qualia emerges from simpler qualia, such as "sweet". Can you really justify this hierarchical relation without implicitly alluding to the quanta side? Consider the qualia of eating a piece of chocolate, a spoonful of sugar and french fries. You can feel that the first two have something in common that distinguishes them from the third, and you can give it the label "sweet". At the same time, you could say that the chocolate and french fries are pleasant to eat, while the spoonful of sugar not so much. You can also label this abstraction with some word. Without empirical grounding, nothing makes one distinction more meaningful than another.

What makes the "sweet" abstraction so special? Well, it's that we know about sweet receptors in the tongue, and we know it's one of the four (five?) basic flavors because of that. I'm afraid you smuggle this knowledge into the pure qualia world. Without it, there is no preferable hierarchical relation and emergence becomes nonsensical again. There's just a field of qualia.


I don't understand your second part of the question regarding our "cognitive processes". Are you referring to our specific form of human consciousness ? I don't think this is only restricted to our human consciousness, for the reason that it happens to all qualia that we have. All qualia domains are structured in an emergent way.

I was referring to your observation that things lose meaning by repetition, like staring at yourself in the mirror for a long time. I too find this interesting, but I can imagine prosaic explanations. For example, that our brain requires a certain amount of variety in its inputs, otherwise it tends to a simpler state where apprehension of meaning is no longer possible. In other words, I am proposing a plumber-style explanation, and asking you why/if you think it can be discarded?


2) The main ideas in my book are the emergent structure of consciousness and the self-reference which gives birth to the emergent structure. The ideas about self-reference that I have are rooted in phenomenology. First I observe that consciousness is structured in an emergent way, and then I conclude that the reason it is like this is because there is an entity called "self-reference" that looks-back-at-itself and in this process includes the previously existing self and brings a new transcendent self into existence, like in the case of colors emerging on top of black-and-white.

I have the problem above with the first part of what you say, but I like the second part.


3) The difference is that in an emergent system you have top-down influence in levels. Electrons in simple systems like the ones in physical experiments have little input from any top level, so they behave according to their own level and display certain laws. But when they are part of a greater holistic system, like in the brain (which is just an appearance of internal workings in consciousness), they receive top-down influence from the intentions in consciousness, and so they behave according to the will of consciousness. It is the same phenomenon as when we speak, which I also gave in my presentation. When we speak, we act from the level of intending to transmit certain ideas. And this level exercises top-down influence in levels, and the sentences, words and letters come out in accordance with the intention from the higher level.

Here I think you are making the ontological/epistemological confusion. Another way to describe what you are alluding to above is this: the more complex a system, the higher the amount of branching in the trees of causation that extend into the past. To describe the movement of an electron in the ideal conditions of some laboratory experiment, you might just require a couple of equations and variables. To describe the movement of an electron in the incredible wet mess that is the human brain, you require trillions of equations with trillions of variables.

The identification of patterns across scales allows us to vastly compress the information of the object we are looking at, making it somewhat tractable by our limited intellects. Some of these patterns have names such as "speaking", "word", "presentation", "red", etc. These patterns are not arbitrarily grounded, they are grounded by some correspondence with qualia, as I argue above. Why? I don't have the answer, I think it's a mystery.

I am not saying that the point of view you describe above is not valid or interesting, but I am saying that it is nothing more than epistemology.

Telmo.


On Thursday, 18 April 2019 16:22:18 UTC+3, telmo wrote:
Hi Cosmin,

1)
 
Ok, I saw your presentation. We agree on several things, but I don't quite get your qualia emergence idea. The things you describe make sense, for example the dissolution of meaning by repetition, but what makes you think that this is anything more than an observation in the domain of the cognitive sciences? Or, putting it another way, an observation/model of how our cognitive processes work?


2) Consciousness is not mysterious. And this is exactly what my book is doing: demystifying consciousness. If you decide to read my book, you will gain at the end of it a clarity of thinking through these issues that all people should have such that they will stop making the confusions that robots are alive.

I don't mean to discourage or attack you in any way, but once in a while someone with a book to promote shows up on this mailing list. No problem with me, I have promoted some of my work sometimes. My problem is with "if you read my book...". There are many books to read, please give the main ideas. Then I might read it.


3) No, they are not extraordinary claims. They are quite trivial. And they start from the trivial realization that the brain does not exist. The "brain" is just an idea in consciousness.

I have no problem with "the brain is just an idea in consciousness". I am not sure if this type of claim can be verified, or if it falls into the category of things we cannot assert, as Bruno would say. I do tend to think privately in those terms.

So ok, the brain does not exist. It is just a bunch of qualia in consciousness. But this is then true of every single thing! Again, no problem with this, but also no reason to abandon science. The machine doesn't exist either, but its electrons (which don't exist either) follow a certain pattern of behavior that we call the laws of physics. Why not the electrons in the brain? What's the difference?

Telmo.


Philip Thrift

non lue,
19 avr. 2019, 08:50:1619/04/2019
à Everything List


With (panpsychic-experiential) materialism:

 - the self is not eternal  :(  [ of course you could be frozen in the hope of some future technology ]
 - it is an independent consciousness (pretty much so, introducing outside chemicals aside)

- pt

Cosmin Visan

non lue,
19 avr. 2019, 15:35:1819/04/2019
à Everything List
1) You raise an interesting point. Can you give another example in that direction beside the qualia of good and bad ? Because you made me think about the case that you mentioned, and it seems to me that it only works for cases of good and bad. A similar example to yours would be: blue and green emerge on top of shades-of-gray, but I like blue and I don't like green, so where does the good and bad appear in my final experience of a quale ? So it might be the case that aesthetic components might be something special. That's why I would like to hear if you can come up with a similar example besides aesthetic components, to pinpoint more precisely where there might be a problem with my ideas about emergence.

2) This is interesting again. And I thought about it before writing my paper about emergence. And indeed I think that your proposal that it might just be something related to brain functioning cannot be discarded. The reason why I prefer to see it as something related directly to consciousness is simply because it can give me the possibility to further pursue the issue. If it is something related to brain, then it might be contingent, and I cannot see how the phenomenon can be understood any further. If it is something related to consciousness, then it is interesting because then it is related to fundamental problems regarding the nature of meaning and how meaning is generated, so deep thinking in these directions can further help us understand consciousness.

3) There is no ontological/epistemological confusion here. I state that even if you are to take into account the entire history that you mention, the electron would still not follow the same laws as in simple systems, because in the brain it will receive top-down influence from a higher consciousness. And the more complex the system, the more the consciousness is evolved and its intentions are beyond comprehension, so the ability to describe the movement of electrons using coherent laws vanishes. The electron will simply appear to not follow any law, because the intentions of consciousness would be more and more complex and diverse.

Btw, you can find my ideas also published for free in papers: https://philpeople.org/profiles/cosmin-visan So if you want to get more details about my ideas regarding emergence and self-reference, you can as well read the papers.

On Friday, 19 April 2019 15:09:54 UTC+3, telmo wrote:

1)


There is something here that still does not convince me. For example, you say that the "chocolate taste" qualia emerges from simpler qualia, such as "sweet". Can you really justify this hierarchical relation without implicitly alluding to the quanta side? Consider the qualia of eating a piece of chocolate, a spoonful of sugar and french fries. You can feel that the first two have something in common that distinguishes them from the third, and you can give it the label "sweet". At the same time, you could say that the chocolate and french fries are pleasant to eat, while the spoonful of sugar not so much. You can also label this abstraction with some word. Without empirical grounding, nothing makes one distinction more meaningful than another.

What makes the "sweet" abstraction so special? Well, it's that we know about sweet receptors in the tongue, and we know it's one of the four (five?) basic flavors because of that. I'm afraid you smuggle this knowledge into the pure qualia world. Without it, there is no preferable hierarchical relation and emergence becomes nonsensical again. There's just a field of qualia.

 
2)
 
I was referring to your observation that things lose meaning by repetition, like staring at yourself in the mirror for a long time. I too find this interesting, but I can imagine prosaic explanations. For example, that our brain requires a certain amount of variety in its inputs, otherwise it tends to a simpler state where apprehension of meaning is no longer possible. In other words, I am proposing a plumber-style explanation, and asking why/if you think it can be discarded.

3) The difference is that in an emergent system you have top-down influence in levels. Electrons in simple systems, like the ones in physical experiments, have little input from any top level, so they behave according to their own level and display certain laws. But when they are part of a greater holistic system, like the brain (which is just an appearance of internal workings in consciousness), they receive top-down influence from the intentions in consciousness, and so they behave according to the will of consciousness. It is the same phenomenon as when we speak, which I also gave in my presentation. When we speak, we act from the level of intending to transmit certain ideas. And this level exercises top-down influence through the levels, and the sentences, words and letters come out in accordance with the intention from the higher level.
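
As a rough illustration of what I mean by top-down influence in levels in the speech example (the levels and the expansion rules below are invented purely for the sketch; they are not a model of real language production):

# Toy top-down generation: an intention at the highest level is expanded
# into a sentence, then words, then letters. Every choice at a lower
# level is constrained by the level above it.
intentions = {
    "greet": ["hello there", "good morning"],
    "warn":  ["watch out", "be careful"],
}

def realise(intention):
    # Highest level: pick a sentence that serves the intention.
    sentence = intentions[intention][0]
    # Middle level: the sentence fixes which words appear.
    words = sentence.split()
    # Lowest level: the words fix which letters come out, in order.
    letters = [ch for w in words for ch in w]
    return sentence, words, letters

print(realise("warn"))
# ('watch out', ['watch', 'out'], ['w', 'a', 't', 'c', 'h', 'o', 'u', 't'])
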
3)
 
Here I think you are making the ontological/epistemological confusion. Another way to describe what you are alluding to above is this: the more complex a system, the greater the branching in the trees of causation that extend into the past. To describe the movement of an electron in the ideal conditions of some laboratory experiment, you might just require a couple of equations and variables. To describe the movement of an electron in the incredible wet mess that is the human brain, you require trillions of equations with trillions of variables.
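
To put rough numbers on that branching (the branching factor and the depth below are made-up values, chosen only to show how fast the causal tree grows):

# If every variable at one time step depends on b variables at the
# previous step, the causal tree traced d steps into the past has on
# the order of b**d leaves.
def causal_tree_size(branching, depth):
    return branching ** depth

print(causal_tree_size(2, 10))   # 1024 -- a tidy laboratory set-up
print(causal_tree_size(10, 12))  # 1000000000000 -- the "trillions" regime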

The identification of patterns across scales allows us to vastly compress the information of the object we are looking at, making it somewhat tractable by our limited intellects. Some of these patterns have names such as "speaking", "word", "presentation", "red", etc. These patterns are not arbitrarily grounded; they are grounded by some correspondence with qualia, as I argue above. Why? I don't have the answer, I think it's a mystery.
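
A crude sketch of what I mean by compression through pattern identification, assuming nothing more than a hand-made pattern dictionary (the observation string and the pattern name are invented for the example):

# Naming a recurring pattern lets us replace many low-level tokens with
# one high-level token -- which is a form of compression.
observations = "red red red blue red red red blue red red red blue"

patterns = {"red red red blue": "P"}  # "P" names the recurring pattern

compressed = observations
for pattern, name in patterns.items():
    compressed = compressed.replace(pattern, name)

print(len(observations), observations)  # 50 characters of raw description
print(len(compressed), compressed)      # 5 characters: "P P P"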

I am not saying that the point of view you describe above is not valid or interesting, but I am saying that it is nothing more than epistemology.

Telmo.


On Thursday, 18 April 2019 16:22:18 UTC+3, telmo wrote:
Hi Cosmin,

1)
 
Ok, I saw your presentation. We agree on several things, but I don't quite get your qualia emergence idea. The things you describe make sense, for example the dissolution of meaning by repetition, but what makes you think that this is anything more than an observation in the domain of the cognitive sciences? Or, putting it another way, an observation/model of how our cognitive processes work?


2) Consciousness is not mysterious. And this is exactly what my book is doing: demystifying consciousness. If you decide to read my book, by the end of it you will gain a clarity in thinking through these issues that all people should have, so that they stop making the confusion of believing that robots are alive.

I don't mean to discourage or attack you in any way, but once in a while someone with a book to promote shows up on this mailing list. No problem with me; I have promoted some of my own work at times. My problem is with "if you read my book...". There are many books to read, so please give the main ideas. Then I might read it.


3) No, they are not extraordinary claims. They are quite trivial. And they start from the trivial realization that the brain does not exist. The "brain" is just an idea in consciousness.

I have no problem with "the brain is just an idea in consciousness". I am not sure if this type of claim can be verified, or if it falls into the category of things we cannot assert, as Bruno would say. I do tend to think privately in those terms.

So ok, the brain does not exist. It is just a bunch of qualia in consciousness. But this is then true of every single thing! Again, no problem with this, but also no reason to abandon science. The machine doesn't exist either, but its electrons (which don't exist either) follow a certain pattern of behavior that we call the laws of physics. Why not the electrons in the brain? What's the difference?

Telmo.



Brent Meeker

19 Apr 2019, 16:04:06
to everyth...@googlegroups.com


On 4/19/2019 5:09 AM, Telmo Menezes wrote:


On Fri, Apr 19, 2019, at 09:09, 'Cosmin Visan' via Everything List wrote:
1) The qualia of black-and-white is not on the same level as the qualia of colors. The qualia of colors include the qualia of black-and-white. You cannot see a color if that color is not emergent upon black-and-white (or more specifically shades-of-gray). You cannot experience music if music is not emergent upon sounds. You cannot taste chocolate if chocolate is not emergent upon sweet. You cannot understand Pythagoras' Theorem if the understanding of Pythagoras' Theorem doesn't emerge upon the understandings of triangles, angles, lengths, etc. And this is real emergence, because you really get new existent entities that never existed before in the history of existence. God himself never experienced these qualia.

Ok, I think I understand your presentation better now. You make an interesting point, I don't think I ever considered emergence purely on the side of qualia as you describe.

There is something here that still does not convince me. For example, you say that the "chocolate taste" qualia emerges from simpler qualia, such as "sweet". Can you really justify this hierarchical relation without implicitly alluding to the quanta side? Consider the qualia of eating a piece of chocolate, a spoonful of sugar and french fries. You can feel that the first two have something in common that distinguishes them from the third, and you can give it the label "sweet". At the same time, you could say that the chocolate and french fries are pleasant to eat, while the spoonful of sugar not so much. You can also label this abstraction with some word. Without empirical grounding, nothing makes one distinction more meaningful than another.

What makes the "sweet" abstraction so special? Well, it's that we know about sweet receptors in the tongue, and we know it's one of the four (five?) basic flavors because of that. I'm afraid you smuggle this knowledge into the pure qualia world. Without it, there is no preferable hierarchical relation and emergence becomes nonsensical again. There's just a field of qualia.


I don't understand the second part of your question regarding our "cognitive processes". Are you referring to our specific form of human consciousness? I don't think this is restricted only to our human consciousness, because it happens to all qualia that we have. All qualia domains are structured in an emergent way.

I was referring to your observation that things lose meaning by repetition, like staring at yourself in the mirror for a long time. I too find this interesting, but I can imagine prosaic explanations. For example, that our brain requires a certain amount of variety in its inputs, otherwise it tends to a simpler state where apprehension of meaning is no longer possible. In other words, I am proposing a plumber-style explanation, and asking why/if you think it can be discarded.


2) The main ideas in my book are the emergent structure of consciousness and the self-reference which gives birth to that emergent structure. My ideas about self-reference are rooted in phenomenology. First I observe that consciousness is structured in an emergent way, and then I conclude that the reason it is like this is that there is an entity called "self-reference" that looks-back-at-itself and, in this process, includes the previously existing self and brings a new transcendent self into existence, as in the case of colors emerging on top of black-and-white.
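
A very loose structural analogy of this looking-back-at-itself (the code is only an analogy, and consciousness is of course not a Python dictionary): each act of self-reference takes the whole previously existing self and wraps it inside a new level.

# Analogy only: every application of "self-reference" includes the
# entire previous self inside a new, higher level.
def self_reference(previous_self, new_quale):
    return {"contains": previous_self, "new_level": new_quale}

self0 = {"contains": None, "new_level": "shades-of-gray"}
self1 = self_reference(self0, "colors")      # colors emerge on top of gray
self2 = self_reference(self1, "aesthetics")  # and so on, level upon level

# Each new self still contains the whole of the previous one.
print(self2)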

I have the problem above with the first part of what you say, but I like the second part.


3) The difference is that in an emergent system you have top-down influence in levels. Electrons in simple systems, like the ones in physical experiments, have little input from any top level, so they behave according to their own level and display certain laws. But when they are part of a greater holistic system, like the brain (which is just an appearance of internal workings in consciousness), they receive top-down influence from the intentions in consciousness, and so they behave according to the will of consciousness. It is the same phenomenon as when we speak, which I also gave in my presentation. When we speak, we act from the level of intending to transmit certain ideas. And this level exercises top-down influence through the levels, and the sentences, words and letters come out in accordance with the intention from the higher level.

Here I think you are making the ontological/epistemological confusion. Another way to describe what you are alluding to above is this: the more complex a system, the greater the branching in the trees of causation that extend into the past. To describe the movement of an electron in the ideal conditions of some laboratory experiment, you might just require a couple of equations and variables. To describe the movement of an electron in the incredible wet mess that is the human brain, you require trillions of equations with trillions of variables.

The identification of patterns across scales allows us to vastly compress the information of the object we are looking at, making it somewhat tractable by our limited intellects. Some of these patterns have names such as "speaking", "word", "presentation", "red", etc. These patterns are not arbitrarily grounded; they are grounded by some correspondence with qualia, as I argue above. Why? I don't have the answer, I think it's a mystery.

I am not saying that the point of view you describe above is not valid or interesting, but I am saying that it is nothing more than epistemology.

If your fundamental ontology is qualia, a kind of incorrigible knowledge, then isn't everything going to be epistemology? The very idea of a mind-independent reality is a construct to explain the existence of patterns in the qualia. That's where the self-reference comes in: among the qualia are some that are the experience or recognition of patterns in prior qualia.

Brent