http://www.jwz.org/blog/2012/10/smoothlifel/

Jason
http://iridia.ulb.ac.be/~marchal/
On Fri, Oct 12, 2012 at 05:50:11AM -0700, Craig Weinberg wrote:
> They are certainly cool looking and biomorphic. The question I have is, at
> what point do they begin to have experiences...or do you think that those
> blobs have experiences already?
>
> Would it give them more of a human experience if an oscillating
> smiley-face/frowny-face algorithm were added graphically into the center of
> each blob?
>
> Craig
Assuming this system exhibits universality like the original GoL, and
assuming COMP, then some patterns will exhibit consciousness. However,
the patterns will no doubt be astronomical in size. The movies you see
here would be like taking an electron microscopic movie of the inner
workings of part of one cell in the human body.
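Universality here means that SmoothLife, like the original Game of Life, could in principle host arbitrary computation. For readers unfamiliar with the discrete rule that SmoothLife generalizes to a continuous domain, here is a minimal illustrative sketch of one Conway update step (standard rules; nothing in this sketch comes from the thread itself):

```python
# One update step of Conway's Game of Life, the discrete rule that
# SmoothLife generalizes. The grid is represented sparsely as a set
# of live (x, y) cells.
from itertools import product

def step(live):
    """Return the next generation of a set of live cells."""
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                cell = (x + dx, y + dy)
                counts[cell] = counts.get(cell, 0) + 1
    # A cell is live next step if it has exactly 3 live neighbours,
    # or 2 live neighbours and was already live.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

# A "blinker" oscillates with period 2:
blinker = {(0, 1), (1, 1), (2, 1)}
assert step(step(blinker)) == blinker
```

Patterns exhibiting anything like cognition would, as Russell notes, be astronomically larger than anything visible in these movies.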
On Sat, Oct 13, 2012 at 02:11:59PM -0700, Craig Weinberg wrote:
>
>
> On Friday, October 12, 2012 4:42:56 PM UTC-4, Russell Standish wrote:
> > Assuming this system exhibits universality like the original GoL, and
> > assuming COMP, then some patterns will exhibit consciousness. However,
> > the patterns will no doubt be astronomical in size. The movies you see
> > here would be like taking an electron microscopic movie of the inner
> > workings of part of one cell in the human body.
> >
>
> Unlike part of a human cell though, they are just an optical presentation
> with no mass or chemical composition.
>
> Craig
I know you don't believe in COMP, but assuming COMP (I am open-minded
on the topic), mass and chemical composition are irrelevant to
consciousness.
--
Stathis Papaioannou
On Sun, Oct 14, 2012 at 11:10 AM, Craig Weinberg <whats...@gmail.com> wrote:
> Fading qualia is the only argument of Chalmers' that I disagree with. It's a
> natural mistake to make, but I think he goes wrong by assuming a priori that
> consciousness is functional, i.e. that personal consciousness is an assembly
> of sub-personal parts which can be isolated and reproduced based on exterior
> behavior.
No, he does NOT assume this. He assumes the opposite: that
consciousness is a property of the brain and CANNOT be reproduced by
reproducing the behaviour in another substrate.
> I don't assume that at all. I suspect the opposite case, that in
> fact any level of personal consciousness - be it sub-personal-reflex,
> personal-intentional, or super-signifying-synchronistic cannot be modeled by
> the impersonal views from third person perspectives. The impersonal (micro,
> meso, macrocosm) is based on public extension, space, and quantifiable
> lengths, while the personal is based on private intention, time, and
> qualitative oscillation. Each layer of the personal relates to all of the
> impersonal layers in a different way, so that you can't necessarily replace
> a person with a sculpture and expect there to still be a person there - even
> if the sculpture seems extremely convincing to us from the outside
> appearance. My prediction is that rather than fading qualia, we would simply
> see increasing pathology, psychosis, dementia, coma, and death.
But since you misunderstand the first assumption you misunderstand the
whole argument.
--
Stathis Papaioannou
On Sun, Oct 14, 2012 at 2:59 PM, Craig Weinberg <whats...@gmail.com> wrote:
>> No, he does NOT assume this. He assumes the opposite: that
>> consciousness is a property of the brain and CANNOT be reproduced by
>> reproducing the behaviour in another substrate.
>
>
> I'm not talking about what the structure of the thought experiment assumes,
> I am talking about what David Chalmers himself assumed before coming up with
> the paper. We have been over this before. I'm not saying I disagree with the
> reasoning of the thought experiment, I am saying that I see a mistake in the
> initial assumptions which invalidate the thought experiment in the first
> place.
The validity of a proof is not dependent on the beliefs, habits or
psychology of its author!
>> But since you misunderstand the first assumption you misunderstand the
>> whole argument.
>
>
> Nope. You misunderstand my argument completely.
Perhaps I do, but you specifically misunderstand that the argument
depends on the assumption that computers don't have consciousness.
You
also misunderstand (or pretend to) the idea that a brain or computer
does not have to know the entire future history of the universe and
how it will respond to every situation it may encounter in order to
function.
What are some equivalently simple, uncontroversial things in
what you say that I misunderstand?
--
Stathis Papaioannou
On Sat, Oct 13, 2012 Craig Weinberg <whats...@gmail.com> wrote:
> Since we know that our consciousness
You don't know diddly squat about "our consciousness", you only know about your consciousness; assuming of course that you are conscious, if not then you don't even know that.
> is exquisitely sensitive to particular masses of specific chemicals, yet relatively tolerant of other kinds of chemical changes,
And a computer is exquisitely sensitive to particular voltages and not sensitive at all to other voltages that don't make the threshold.
> it suggests that we should strongly suspect that COMP is a fantasy.
And so the computer strongly suspects that biological consciousness is a fantasy.
John K Clark
On Sat, Oct 13, 2012 at 8:10 PM, Craig Weinberg <whats...@gmail.com> wrote:
> I think he [Chalmers] goes wrong by assuming a priori that consciousness is functional,
I've asked you this question dozens of times but you have never coherently answered it: If consciousness doesn't do anything then Evolution can't see it, so how and why did Evolution produce it?
The fact that you have no answer to this means your ideas are fatally flawed.
> that personal consciousness is an assembly of sub-personal parts which can be isolated and reproduced based on exterior behavior. I don't assume that at all.
And I've asked you another question that you also have no answer for: If we can deduce nothing about consciousness from behavior then why do you believe that your fellow human beings are conscious when they are behaving as if they are awake, and why do you believe that they are not conscious when they are sleeping or undergoing anesthesia or behaving as if they were dead and rotting in the ground?
John K Clark
> Evolution did not produce consciousness. Consciousness produced evolution.
> Not human consciousness
> I have said this repeatedly.
> We can deduce a great deal about the consciousness of things which are similar to ourselves.
On Mon, Oct 15, 2012 at 12:41 PM, Craig Weinberg <whats...@gmail.com> wrote:
>> You don't know diddly squat about "our consciousness", you only know about your consciousness; assuming of course that you are conscious, if not then you don't even know that.
> If that were true, then you don't know diddly squat about what I know.
Not true, I know you don't have a proof of the Goldbach Conjecture. Well OK, I don't know that with absolute certainty, maybe you have a proof but are keeping it secret for some strange reason, but my knowledge is more than diddly squat because I very strongly suspect you have no such proof and I'm probably right. But I do know for certain that you don't have a valid proof that 2+2=5 or a way to directly detect consciousness in any mind other than your own.
> You can't have it both ways. Either it is possible that we know things or it is not.
That is most certainly true, it is possible to know things, it's just not possible to know all things.
> You can't claim to be omniscient about my ignorance.
It's almost as if you're claiming your ignorance is vast, well I admit I am not omniscient about your ignorance, no doubt you are ignorant about things that I don't know you are ignorant of.
> Let's see how a computer fares under a giant junkyard magnet.
Let's see how you fare in a junkyard car crusher.
John K Clark
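As an aside, the Goldbach Conjecture John mentions (every even number greater than 2 is the sum of two primes) can be checked mechanically for small cases even though no proof is known, which is precisely his point. A hypothetical brute-force sketch:

```python
# Brute-force check of the Goldbach Conjecture for small even numbers.
# This verifies instances; it is not, and cannot be, a proof.

def is_prime(n):
    """Trial-division primality test, adequate for small n."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_pair(n):
    """Return one (p, q) with p + q == n and both prime, or None."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

# No counterexample among the small even numbers:
assert all(goldbach_pair(n) for n in range(4, 2000, 2))
```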
On Sat, Oct 13, 2012 at 8:10 PM, Craig Weinberg <whats...@gmail.com> wrote:
> I think he [Chalmers] goes wrong by assuming a priori that consciousness is functional,
I've asked you this question dozens of times but you have never coherently answered it: If consciousness doesn't do anything then Evolution can't see it, so how and why did Evolution produce it? The fact that you have no answer to this means your ideas are fatally flawed.
And a computer is exquisitely sensitive to particular voltages and not sensitive at all to other voltages that don't make the threshold.
Let's see how a computer fares under a giant junkyard magnet.
Brent
>> If consciousness doesn't do anything then Evolution can't see it, so how and why did Evolution produce it? The fact that you have no answer to this means your ideas are fatally flawed.
> I don't see this as a *fatal* flaw. Evolution, as you've noted, is not a paradigm of efficient design. Consciousness might just be a side-effect
On Mon, Oct 15, 2012 at 2:02 PM, Craig Weinberg <whats...@gmail.com> wrote:
>> I know you don't have a proof of the Goldbach Conjecture. Well OK, I don't know that with absolute certainty, maybe you have a proof but are keeping it secret for some strange reason, but my knowledge is more than diddly squat because I very strongly suspect you have no such proof and I'm probably right. But I do know for certain that you don't have a valid proof that 2+2=5 or a way to directly detect consciousness in any mind other than your own.
> Then you are claiming to know about "our consciousness" instead of just your own.
I am claiming that you don't possess a valid proof that 2+2=5 because there is no such proof to possess.
>>> Let's see how a computer fares under a giant junkyard magnet.
>> Let's see how you fare in a junkyard car crusher.
> translation - "I concede, I have no argument."
So let's see, "a giant junkyard magnet" is a devastating logical argument but "a junkyard car crusher" is not. Explain to me how that works.
John K Clark
On Friday, October 12, 2012 10:23:57 AM UTC-4, Bruno Marchal wrote:
On 12 Oct 2012, at 14:50, Craig Weinberg wrote:
> They are certainly cool looking and biomorphic. The question I have
> is, at what point do they begin to have experiences...or do you
> think that those blobs have experiences already?
>
> Would it give them more of a human experience if an oscillating
> smiley-face/frowny-face algorithm were added graphically into the
> center of each blob?
Here is a "deterministic" simple phenomenon looking amazingly
"alive" (non-newtonian fluid):
http://www.youtube.com/watch?v=3zoTKXXNQIU
Is it alive? That question does not make sense to me. Yes with some
definitions, no with others. Unlike consciousness or intelligence,
"life" is not a definite concept for me. I usually use the definition
"has a reproductive cycle". But this makes cigarettes and stars alive.
No problem for me.
Bruno
"The good news is, after this operation you'll be every bit as alive as a cigarette is".
There are some cool videos out there of cymatic animation like that. All that it really tells me is that there are a limited number of morphological themes in the universe, not that those themes are positively linked to any particular private phenomenology. They are producing those patterns with a particular acoustic signal, but we could model it mathematically and see the same pattern on a video screen without any acoustic signal at all. Same thing happens when we model the behaviors of a conscious mind. It looks similar from a distance, but that's all.
Craig
http://iridia.ulb.ac.be/~marchal/
--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To view this discussion on the web visit https://groups.google.com/d/msg/everything-list/-/8-pjDX84CC4J.
Two men and two women live together. The woman has a child. 2+2=5
On 16 Oct 2012, at 18:56, Craig Weinberg wrote:
> Two men and two women live together. The woman has a child. 2+2=5
You mean two men + two women + a baby = five persons. You need the arithmetical 2+2=4, and 4+1 = 5, in your "argument".
Bruno
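Bruno's point is that the count of five persons still rests on ordinary arithmetic (2+2=4, then 4+1=5). A toy sketch of Peano-style addition makes that dependence explicit (Python ints stand in for the numerals; purely illustrative):

```python
# Peano-style addition: a + 0 = a, and a + succ(b) = succ(a + b).
# Python ints stand in for the numerals 0, succ(0), succ(succ(0)), ...
def add(a, b):
    return a if b == 0 else add(a, b - 1) + 1

assert add(2, 2) == 4           # the arithmetical fact Bruno appeals to
assert add(add(2, 2), 1) == 5   # four persons plus one baby
```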
Bruno: corn starch is not a fluid (Newtonian or not). It is a solid, and when dissolved in water (or whatever?) it makes a non-Newtonian fluid.
My question about its 'alive or not' status is: does it provide METABOLISM and REPAIR? I doubt it.
Do not misunderstand me, please: this is not my word about "LIFE"; it pertains to the LIVE STATUS (process) which - according to Robert Rosen's brilliant distinction - shows a reliance upon environmental (material??) support for its sustenance (called metabolism) and a mechanism to repair damage that occurs in the process of being alive.
Minds with a chemistry impediment look at things differently.
On Wednesday, October 17, 2012 10:16:52 AM UTC-4, Bruno Marchal wrote:
> On 16 Oct 2012, at 18:56, Craig Weinberg wrote:
>> Two men and two women live together. The woman has a child. 2+2=5
> You mean two men + two women + a baby = five persons. You need the arithmetical 2+2=4, and 4+1 = 5, in your "argument".
> Bruno
I only see that one person plus another person can eventually equal three or more people.
It depends when you start counting and how long it takes you to finish.
Craig
>> So let's see, "a giant junkyard magnet" is a devastating logical argument but "a junkyard car crusher" is not. Explain to me how that works.
> Because talking about how you want to kill me in an argument about computers is pointless ad hominem venting, but talking about the effect of magnetism on computers in an argument about computers is relevant.
On Wed, Oct 17, 2012 at 10:13 AM, Bruno Marchal <mar...@ulb.ac.be> wrote:
> Darwin does not need to be wrong. Consciousness' role can be deeper, in the "evolution/selection" of the laws of physics from the coherent dreams (computations from the 1p view) in arithmetic.
I have no idea what that means, not a clue, but I do know that Evolution can't select for something it can't see, and I do know that Evolution can see intelligence because it produces behavior. Evolution can't see consciousness directly any better than we can, so if it produced it (and it did unless Darwin was dead wrong) then consciousness MUST be a byproduct of something that it can see.
John K Clark
On 17 Oct 2012, at 17:04, Craig Weinberg wrote:
> On Wednesday, October 17, 2012 10:16:52 AM UTC-4, Bruno Marchal wrote:
>> On 16 Oct 2012, at 18:56, Craig Weinberg wrote:
>>> Two men and two women live together. The woman has a child. 2+2=5
>> You mean two men + two women + a baby = five persons. You need the arithmetical 2+2=4, and 4+1 = 5, in your "argument".
>> Bruno
> I only see that one person plus another person can eventually equal three or more people.
With the operation of sexual reproduction, not by the operation of addition.
> It depends when you start counting and how long it takes you to finish.
It depends on what we are talking about. Persons with sex are not numbers with addition. You are just changing definitions, not invalidating a proof (the proof that 2+2=4, in arithmetic).
>> But since you misunderstand the first assumption you misunderstand the
>> whole argument.
>
>
> Nope. You misunderstand my argument completely.
Perhaps I do, but you specifically misunderstand that the argument
depends on the assumption that computers don't have consciousness.
> No, I do understand that.
Good.
You
also misunderstand (or pretend to) the idea that a brain or computer
does not have to know the entire future history of the universe and
how it will respond to every situation it may encounter in order to
function.
> Do you have to know the entire history of how you learned English to read these words? It depends what you mean by know. You don't have to consciously recall learning English, but without that experience, you wouldn't be able to read this. If you had a module implanted in your brain which would allow you to read Chinese, it might give you an acceptable capacity to translate Chinese phonemes and characters, but it would be a generic understanding, not one rooted in decades of human interaction. Do you see the difference? Do you see how words are not only functional data but also names which carry personal significance?
The atoms in my brain don't have to know how to read Chinese. They only need to know how to be carbon, nitrogen, oxygen etc. atoms. The complex behaviour which is reading Chinese comes from the interaction of billions of these atoms doing their simple thing.
If the atoms in my brain were put into a Chinese-reading configuration, either through a lot of work learning the language or through direct manipulation, then I would be able to understand Chinese.
> What are some equivalently simple, uncontroversial things in what you say that I misunderstand?
> You think that I don't get that Fading Qualia is a story about a world in which the brain cannot be substituted, but I do. Chalmers is saying 'OK, let's say that's true - how would that be? Would your blue be less and less blue? How could you act normally if you...blah, blah, blah'. I get that. It's crystal clear.
> What you don't understand is that this carries a priori assumptions about the nature of consciousness: that it is an end result of a distributed process which is monolithic. I am saying NO, THAT IS NOT HOW IT IS.
> Imagine that we had one eye in the front of our heads and one ear in the back, and that the whole of human history has been a debate over whether walking forward means that objects are moving toward you or whether it means changes in the relative volume of sounds.
> Chalmers is saying, 'if we gradually replaced the eye with parts of the ear, how would our sight gradually change to sound, or would it suddenly switch over?' Since both options seem absurd, he concludes that anything in the front of the head is an eye and everything in the back is an ear, or that everything has both ear and eye potentials.
> The MR model is to understand that these two views are not merely substance dual or property dual; they are involuted juxtapositions of each other. The difference between front and back is not merely irreconcilable, it is mutually exclusive by definition in experience. I am not throwing up my hands and saying 'ears can't be eyes because eyes are special'; I am positively asserting that there is a way of modeling the eye-ear relation based on an understanding of what time, space, matter, energy, entropy, significance, perception, and participation actually are and how they relate to each other.
> The idea that the newly discovered ear-based models out of the back of our head are eventually going to explain the eye view out of the front is not scientific; it's an ideological faith that I understand to be critically flawed. The evidence is all around us; we have only to interpret it that way rather than keep updating our description of reality to match the narrowness of our fundamental theory. The theory only works for the back view of the world... it says *nothing* useful about the front view. To the True Disbeliever, this is a sign that we need to double down on the back-end view because it's the best chance we have. The thinking is that any other position implies that we throw out the back-end view entirely and go back to the dark ages of front-end fanaticism. I am not suggesting a compromise; I propose a complete overhaul in which we start not from the front and move back, or from the back and move front, but start from the split and see how it can be understood as a double knot - a fold of folds.
I'm sorry, but this whole passage is a non sequitur as far as the fading qualia thought experiment goes. You have to explain what you think would happen if part of your brain were replaced with a functional equivalent.
A functional equivalent would stimulate the remaining neurons the same as the part that is replaced.
The original paper says this is a computer chip but this is not necessary to make the point: we could just say that it is any device, not being the normal biological neurons. If consciousness is substrate-dependent (as you claim) then the device could do its job of stimulating the neurons normally while lacking or differing in consciousness. Since it stimulates the neurons normally you would behave normally. If you didn't then it would be a miracle, since your muscles would have to contract normally. Do you at least see this point, or do you think that your muscles would do something different?
-- Stathis Papaioannou
Bruno, especially in my identification as "responding to relations". Now the "Self"? IT certainly refers to a more sophisticated level of thinking, more so than the average (animalic?) mind. - OR: we have no idea. What WE call 'Self-Ccness' is definitely a human attribute because WE identify it that way. I never talked to a cauliflower to clarify whether she feels like having a self? (In cauliflowerese, of course.)
JM
-- Onward! Stephen
On Friday, October 19, 2012 3:29:39 AM UTC-4, Bruno Marchal wrote:
> On 17 Oct 2012, at 17:04, Craig Weinberg wrote:
>> On Wednesday, October 17, 2012 10:16:52 AM UTC-4, Bruno Marchal wrote:
>>> On 16 Oct 2012, at 18:56, Craig Weinberg wrote:
>>>> Two men and two women live together. The woman has a child. 2+2=5
>>> You mean two men + two women + a baby = five persons. You need the arithmetical 2+2=4, and 4+1 = 5, in your "argument".
>>> Bruno
>> I only see that one person plus another person can eventually equal three or more people.
> With the operation of sexual reproduction, not by the operation of addition.
Only if you consider the 2+2=5 to be a complex special case and 2+2=4 to be a simple general rule.
It could just as easily be flipped.
I can say 2+2=4 by the operation of reflexive neurology, and 2+2=5 is an operation of multiplication. It depends on what level of description you privilege by over-signifying and the consequence that has on the other levels which are under-signified. To me, the Bruno view is near-sighted when it comes to physics (only sees numbers, substance is disqualified)
and far-sighted when it comes to numbers (does not question the autonomy of numbers).
What is it that can tell one number from another?
What knows that + is different from * and how?
Why doesn't arithmetic truth need a meta-arithmetic machine to allow it to function (to generate the ontology of 'function' in the first place)?
It's all sense. It has to be sense.
>> It depends when you start counting and how long it takes you to finish.
> It depends on what we are talking about. Person with sex is not numbers with addition. You are just changing definition, not invalidating a proof (the proof that 2+2=4, in arithmetic).
I'm not trying to invalidate the proof within one context of sense, I'm pointing out that it isn't that simple. There are other contexts of sense which reduce differently.
Craig
On Sat, Oct 20, 2012 Bruno Marchal <mar...@ulb.ac.be> wrote:
>> I have no idea what that means, not a clue
> Probably for the same reason that you stop at step 3 in the UD Argument.
Probably. I remember I stopped reading after your proof of the existence of a new type of indeterminacy never seen before because the proof was in error, so there was no point in reading about things built on top of that; but I don't remember if that was step 3 or not.
>You assume a physical reality,
I assume that if physical reality doesn't exist then either the words "physical" or "reality" or "exists" are meaningless, and I don't think any of those words are.
> and you assume that our consciousness is some phenomenon related exclusively to some construct (brain, bodies)
If you change your conscious state then your brain changes, and if I make a change in your brain then your conscious state changes too, so I'd say that it's a good assumption that consciousness is interlinked with a physical object, in fact it's a downright superb assumption.
>> so if it [Evolution] produced it [consciousness]

> No. With comp, consciousness was there before.
Well I don't know about you but I don't think my consciousness was there before Evolution figured out how to make brains, I believe this because I can't seem to remember events that were going on during the Precambrian. I've always been a little hazy about what exactly "comp" meant but I had the general feeling that I sorta agreed with it, but apparently not.
Bruno, especially in my identification as "responding to relations". Now the "Self"? It certainly refers to a more sophisticated level of thinking, more so than the average (animalic?) mind. - OR: we have no idea. What WE call 'Self-Ccness' is definitely a human attribute because WE identify it that way. I never talked to a cauliflower to clarify whether she feels like having a self? (In cauliflowerese, of course.)
On Sun, Oct 21, 2012 at 5:51 AM, Craig Weinberg <whats...@gmail.com> wrote:
>> The atoms in my brain don't have to know how to read Chinese. They only
>> need to know how to be carbon, nitrogen, oxygen etc. atoms. The complex
>> behaviour which is reading Chinese comes from the interaction of billions of
>> these atoms doing their simple thing.
>
>
> I don't think that is true. The other way around makes just as much sense of
> not more: Reading Chinese is a simple behavior which drives the behavior of
> billions of atoms to do a complex interaction. To me, it has to be both
> bottom-up and top-down. It seems completely arbitrary prejudice to presume
> one over the other just because we think that we understand the bottom-up so
> well.
>
> Once you can see how it is the case that it must be both bottom-up and
> top-down at the same time, the next step is to see that there is no
> possibility for it to be a cause-effect relationship, but rather a dual
> aspect ontological relation. Nothing is translating the functions of neurons
> into a Cartesian theater of experience - there is nowhere to put it in the
> tissue of the brain and there is no evidence of a translation from neural
> protocols to sensorimotive protocols - they are clearly the same thing.
If there is a top-down effect of the mind on the atoms then we would expect some scientific evidence of this. Such evidence would consist of, for example, neurons firing when measurements of transmembrane potentials, ion concentrations etc. suggest that they
should not.
You claim that such anomalous behaviour of neurons and
other cells due to consciousness is widespread, yet it has never been
experimentally observed. Why?
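The test Stathis proposes — firing that deviates from what membrane physics predicts — can be made concrete with a deliberately crude leaky integrate-and-fire model (a sketch; the parameters are textbook-style illustrations, not measured values). The point is that the spike train is fully determined by the membrane equation, so any "extra" spike caused top-down would show up as a measurable anomaly:

```python
# Toy leaky integrate-and-fire neuron. Firing is fully determined by the
# membrane equation, so a spike the equation does not predict would be
# experimentally detectable. Illustrative parameters, not real biophysics.

def simulate_lif(input_current, v_rest=-70.0, v_thresh=-55.0, v_reset=-70.0,
                 tau=10.0, resistance=10.0, dt=0.1):
    """Return the list of spike times (ms) for a constant input current (nA)."""
    v = v_rest
    spikes = []
    t = 0.0
    while t < 100.0:
        # dV/dt = (-(V - V_rest) + R*I) / tau   (standard LIF form)
        v += dt * (-(v - v_rest) + resistance * input_current) / tau
        if v >= v_thresh:
            spikes.append(round(t, 1))
            v = v_reset
        t += dt
    return spikes

# Sub-threshold drive: R*I = 10 mV, less than the 15 mV gap to threshold,
# so the model predicts no spikes at all.
assert simulate_lif(1.0) == []
# Stronger drive: the model predicts a specific, reproducible spike train.
print(len(simulate_lif(2.0)) > 0)  # → True
```

Any neuron that spiked under the sub-threshold drive, against the equation, would be exactly the kind of anomaly the paragraph above says has never been observed.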
>> If the atoms in my brain were put into a Chinese-reading configuration,
>> either through a lot of work learning the language or through direct
>> manipulation, then I would be able to understand Chinese.
>
>
> It's understandable to assume that, but no I don't think it's like that. You
> can't transplant a language into a brain instantaneously because there is no
> personal history of association. Your understanding of language is not a
> lookup table in space, it is made out of you. It's like if you walked around
> with Google translator in your brain. You could enter words and phrases and
> turn them into you language, but you would never know the language first
> hand. The knowledge would be impersonal - accessible, but not woven into
> your proprietary sense.
I don't mean putting an extra module into the brain, I mean putting
the brain directly into the same configuration it is put into by
learning the language in the normal way.
>> I'm sorry, but this whole passage is a non sequitur as far as the fading
>> qualia thought experiment goes. You have to explain what you think would
>> happen if part of your brain were replaced with a functional equivalent.
>
>
> There is no functional equivalent. That's what I am saying. Functional
> equivalence when it comes to a person is a non-sequitur. Not only is every
> person unique, they are an expression of uniqueness itself. They define
> uniqueness in a never-before-experienced way. This is a completely new way
> of understanding consciousness and signal. Not as mechanism, but as
> animism-mechanism.
>
>
>>
>> A functional equivalent would stimulate the remaining neurons the same as
>> the part that is replaced.
>
>
> No such thing. Does any imitation function identically to an original?
In a thought experiment we can say that the imitation stimulates the
surrounding neurons in the same way as the original.
We can even say
that it does this miraculously. Would such a device *necessarily*
replicate the consciousness along with the neural impulses, or could
the two be separated?
--
Stathis Papaioannou
Hi John,

On 20 Oct 2012, at 23:16, John Mikes wrote:

> Bruno, especially in my identification as "responding to relations". Now the "Self"? It certainly refers to a more sophisticated level of thinking, more so than the average (animalic?) mind. - OR: we have no idea. What WE call 'Self-Ccness' is definitely a human attribute because WE identify it that way. I never talked to a cauliflower to clarify whether she feels like having a self? (In cauliflowerese, of course.)

My feeling was first that all homeotherm animals have self-consciousness, as they have the ability to dream, easily related to the ability to build a representation of one's self. Then I have enlarged the spectrum up to some spiders and the octopi, just by reading a lot about them and looking at videos. But this is just a personal appreciation. For the plants, let us say I know nothing, although I suspect possible consciousness, related to different scalings.

The following theory seems to have consciousness, for a different reason (the main one is that it is Turing universal):

x + 0 = x
x + s(y) = s(x + y)
x * 0 = 0
x * s(y) = x*y + x

But once you add the very powerful induction axioms, which say that if a property F is true for zero, and preserved by the successor operation, then it is true for all natural numbers, that is, the infinity of axioms:

(F(0) & Ax(F(x) -> F(s(x)))) -> AxF(x),

with F(x) being any formula in the arithmetical language (and thus defined with 0, s, +, *), then you get Löbianity, and this makes it as much conscious as you and me. Indeed, they get a rich theology about which they can develop maximal awareness, and even test it by comparing the physics retrievable by that theology with the observation and inference on their most probable neighborhoods.

Löbianity is the threshold at which any new axiom added will create and enlarge the machine's ignorance. It is the ultimate modesty threshold.
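Those four recursive equations for + and * can be transcribed directly into code. A minimal sketch in Python (the nested-tuple encoding of numerals is my own illustrative choice, not anything from the post):

```python
# The four defining equations of addition and multiplication from the post,
# transcribed over a successor representation of the naturals:
#   x + 0 = x          x + s(y) = s(x + y)
#   x * 0 = 0          x * s(y) = x*y + x

ZERO = ()

def s(x):            # successor: s(x)
    return (x,)

def add(x, y):
    if y == ZERO:                 # x + 0 = x
        return x
    return s(add(x, y[0]))        # x + s(y) = s(x + y)

def mul(x, y):
    if y == ZERO:                 # x * 0 = 0
        return ZERO
    return add(mul(x, y[0]), x)   # x * s(y) = x*y + x

def to_int(x):       # decode a numeral for display
    return 0 if x == ZERO else 1 + to_int(x[0])

two, three = s(s(ZERO)), s(s(s(ZERO)))
print(to_int(add(two, three)), to_int(mul(two, three)))  # → 5 6
```

The induction schema, by contrast, is an infinite family of axioms about *all* numerals, so it has no finite executable counterpart; that gap between the computable base theory and the induction axioms is exactly where Bruno locates the jump to Löbianity.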
On Sun, Oct 21, 2012 Bruno Marchal <mar...@ulb.ac.be> wrote:

>> I stopped reading after your proof of the existence of a new type of indeterminacy never seen before because the proof was in error, so there was no point in reading about things built on top of that
> From your "error" you have been obliged to say that in the WM duplication, you will live both at W and at M
Yes.
> yet you agree that both copies will feel to live in only one place
Yes.

> so the error you have seen was due to a confusion between first person and third person.
Somebody is certainly confused but it's not me. The fact is that if we are identical then my first person experience of looking at you is identical to your first person experience of looking at me, and both our actions are identical for a third person looking at both of us. As long as we're identical it's meaningless to talk about 2 conscious beings regardless of how many bodies or brains have been duplicated.
Your confusion stems from saying "you have been duplicated" but then not thinking about what that really means: you haven't realized that a noun (like a brain) has been duplicated but an adjective (like Bruno Marchal) has not been as long as they are identical; you are treating adjectives as if they were nouns and that's bound to cause confusion. You are also confused by the fact that if 2 identical things change in nonidentical ways, such as by forming different memories, then they are no longer identical. And finally you are confused by the fact that although they are not each other any more after those changes, both still have an equal right to call themselves Bruno Marchal. After reading these multiple confusions in one step of your proof I saw no point in reading more, and I still don't.
> By the way, it is irrational to stop in the middle of a proof.
If one of the steps in a proof contains a blunder then it would be irrational to keep reading it.

> By assuming a physical reality at the start
That seems like a pretty damn good place to make an assumption.

> But the physical reality can emerge or appear without a physical reality at the start
Maybe, maybe not, but even if you're right that wouldn't make it any less real; and maybe physical reality didn't even need to emerge because there was no start.

>> If you change your conscious state then your brain changes, and if I make a change in your brain then your conscious state changes too, so I'd say that it's a good assumption that consciousness is interlinked with a physical object, in fact it's a downright superb assumption.

> But this is easily shown to be false when we assume comp.
It's not false and I don't need to assume it and I haven't theorized it from armchair philosophy either, I can show it's true experimentally. And when theory and experiment come into conflict it is the theory that must submit, not the experiment. If I insert drugs into your bloodstream it will change the chemistry of your brain, and when that happens your conscious state will also change. Depending on the drug I can make you happy-sad, friendly-angry, frightened-calm, alert-sleepy, dead-alive, you name it.
> If your state appears in a far away galaxies [...]
Then he will be me and he will remain me until differences between that far away galaxy and this one cause us to change in some way, such as by forming different memories; after that he will no longer be me, although we will still both be John K Clark, because John K Clark has been duplicated: the machine duplicated the body of him and the environmental differences caused his consciousness to diverge. As I've said before, this is an odd situation but in no way paradoxical.
> You keep defending comp, in your dialog with Craig,
I keep defending my ideas, "comp" is your homemade term not mine, I have no use for it.
> You can attach consciousness to the owner of a brain,
Yes, consciousness is what the brain does.

> but the owner itself must attach his consciousness to all states existing in arithmetic
Then I must remember events that happened in the Precambrian because arithmetic existed even back then, but I don't, I don't remember existing then at all. Now that is a paradox! Therefore one of the assumptions must be wrong, namely that the owner of a brain "must attach his consciousness to all states existing in arithmetic".
John K Clark
On Sun, Oct 21, 2012 at 12:46 PM, John Clark <johnk...@gmail.com> wrote:
John,

I think you are missing something. It is a problem that I noticed after watching the movie "The Prestige" and it eventually led me to join this list.

Unless you consider yourself to be only a single momentary atom of thought, you probably believe there is some stream of thoughts/consciousness that you identify with. You further believe that these thoughts and consciousness are produced by some activity of your brain. Unlike Craig, you believe that whatever horrible injury you suffered, even if every atom in your body were separated from every other atom, in principle you could be put back together, and if the atoms are put back just right, you will be restored, alive and well, and conscious again.

Further, you probably believe it doesn't matter if we even re-use the same atoms or not, since atoms of the same elements and isotopes are functionally equivalent. We could take apart your current atoms, then put you back together with atoms from a different pile and your consciousness would continue right where it left off (from before you were obliterated). It would be as if a simulation of your brain were running on a VM, we paused the VM, moved it to a different physical computer and then resumed it. From your perspective inside, there was no interruption, yet your physical incarnation and location has changed.

Assuming you are with me so far, an interesting question emerges: what happens to your consciousness when duplicated? Either an atom-for-atom replica of yourself is created in two places, or your VM image which contains your brain emulation is copied to two different computers while paused, and then both are resumed. Initially, the sensory input to the two duplicates could be the same, and in a sense they are still the same mind, just with two instances, but then something interesting happens once different input is fed to the two instances: they split.
You could say they split in the same sense as when someone opens the steel box to see whether the cat is alive or dead. All the splitting in quantum mechanics may be the result of our infinite instances discovering/learning different things about our infinite environments.
I strongly believe that computational complexity plays a huge role in many aspects of the hard problem of consciousness and that the Platonic approach to computer science is obscuring solutions as it is blind to questions of resource availability and distribution.
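Jason's pause-copy-resume scenario can be sketched with an ordinary program state. The `Emulation` class below is a deliberately trivial stand-in for a brain emulation (the names and events are invented for illustration): two resumed copies are indistinguishable until their inputs differ, then they diverge and stay diverged.

```python
import copy

# A trivial stand-in for a paused "VM image": the state is just a memory list.
class Emulation:
    def __init__(self):
        self.memories = []

    def perceive(self, event):
        self.memories.append(event)

original = Emulation()
original.perceive("learned to read")

# "Pause" and duplicate the image byte-for-byte (deep copy), then resume both.
copy_a = copy.deepcopy(original)
copy_b = copy.deepcopy(original)

# Identical input: the two instances remain indistinguishable.
copy_a.perceive("sees a door")
copy_b.perceive("sees a door")
print(copy_a.memories == copy_b.memories)  # → True

# Different input: the instances split, and stay split.
copy_a.perceive("door opens on New York")
copy_b.perceive("door opens on Moscow")
print(copy_a.memories == copy_b.memories)  # → False
```

The shared prefix of the two memory lists is what lets both copies truthfully claim the same past, which is exactly the asymmetry the thread keeps circling: one history, two futures.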
> In a thought experiment we can say that the imitation stimulates the surrounding neurons in the same way as the original. We can even say that it does this miraculously. Would such a device *necessarily* replicate the consciousness along with the neural impulses, or could the two be separated?

Is the brain strictly a classical system?

No, although the consensus appears to be that quantum effects are not significant in its functioning. In any case, this does not invalidate functionalism.
Well, I don't follow the crowd. I agree that functionalism is not dependent on the type of physics of the system, but there is an issue of functional closure that must be met in my conjecture; there has to be some way for the system (that supports the conscious capacity) to be closed under the transformation involved.
Of course. As I see it, there is no brain change without a mind change and vice versa. The mind and brain are dual, as Boolean algebras and topological spaces are dual; the relation is an isomorphism between structures that have oppositely directed arrows of transformation. The math is very straightforward... People just have a hard time understanding the idea that all of "matter" is some form of topological space, and there is no known calculus of variations for Boolean algebras (no one is looking for it, except for me, that I know of). Care to help me? The idea of SPK-E -> SPK-E+C, that you mentioned, is an example of a variation of Boolean algebra!

As I said, technical problems with computers are not relevant to the argument. The implant is just a device that has the correct timing of neural impulses. Would it necessarily preserve consciousness?

Let's see. If I ingest psychoactive substances, there is a 1p observable effect.... Is this a circumstance that is different in kind from that device?

The psychoactive substances cause a physical change in your brain and thereby also a psychological change.
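The Boolean-algebra/topological-space duality invoked here is presumably Stone duality; for readers unfamiliar with it, a sketch of the standard statement (my gloss, not Stephen's own formulation):

```latex
% Stone duality: the category of Boolean algebras is dual (all arrows
% reversed) to the category of Stone spaces, i.e. compact, Hausdorff,
% totally disconnected topological spaces. Each Boolean algebra B is
% recovered from its space of ultrafilters:
\[
  B \;\cong\; \mathrm{Clop}\big(\mathrm{Ult}(B)\big),
\]
% where Ult(B) is the Stone space of ultrafilters on B and Clop(X) is the
% Boolean algebra of clopen subsets of X. A homomorphism B -> B' of
% algebras corresponds to a continuous map Ult(B') -> Ult(B) going the
% other way, which is the "oppositely directed arrows of transformation"
% in the passage above.
```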
--
Onward!
Stephen
Also, it isn't quite clear to me how something needs to be added to Turing universality to expand the capabilities of consciousness, if all consciousness is the result of computation.
Thanks,
> I think you are missing something. It is a problem that I noticed after watching the movie "The Prestige"
> you probably believe there is some stream of thoughts/consciousness that you identify with.
> You further believe that these thoughts and consciousness are produced by some activity of your brain.
> Unlike Craig, you believe that whatever horrible injury you suffered, even if every atom in your body were separated from every other atom, in principle you could be put back together, and if the atoms are put back just right, you will be restored, alive and well, and conscious again.
> Further, you probably believe it doesn't matter if we even re-use the same atoms or not, since atoms of the same elements and isotopes are functionally equivalent.
> We could take apart your current atoms, then put you back together with atoms from a different pile and your consciousness would continue right where it left off (from before you were obliterated).
> It would be as if a simulation of your brain were running on a VM, we paused the VM, moved it to a different physical computer and then resumed it. From your perspective inside, there was no interruption, yet your physical incarnation and location has changed.
> what happens to your consciousness when duplicated?
> Initially, the sensory input to the two duplicates could be the same, and in a sense they are still the same mind, just with two instances
> but then something interesting happens once different input is fed to the two instances: they split.
2012/10/22 Jason Resch <jason...@gmail.com>
I would add that what's interesting in the duplication is the what-happens-next probability (when the "two" copies diverge). If you're about to do an experience (for example, opening a door and looking at what is behind it), and just before opening the door you are duplicated, with the copy put in the same position in front of an identical door, then, given that you were originally (just before duplication) in front of a door that opens on New York City, what is the probability that when you open it *it is* New York City? In the case of a single (limited) universe where no duplications of state could appear, the answer is straightforward: it is 100%. But in the case of comp or MWI, the probability is not 100%; you must take into account all duplications (now and then) and their relative measure. That is the "measure" problem. The "before" divergence is not interesting; that's the point where John stays stuck willingly.
Quentin
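Quentin's relative-measure point can be illustrated with a toy simulation. The four-continuation setup below is entirely hypothetical (chosen only to make the fraction visible): one continuation's door opens on New York, three open elsewhere, and the first-person probability is the fraction of continuations, not 100%.

```python
import random

# Toy model of the "measure problem": before opening the door you are
# duplicated; each continuation sees what is behind its own door.
# Hypothetical setup: 1 door opens on New York, 3 copies get other cities.
continuations = ["New York", "Moscow", "Washington", "Brussels"]

def open_door():
    # From the first-person view, "which copy am I?" is indeterminate:
    # model it as a uniform draw over the continuations.
    return random.choice(continuations)

trials = 100_000
hits = sum(open_door() == "New York" for _ in range(trials))
# In a single universe the answer would be 100%; with duplication the
# relative measure gives about 1/4 here.
print(round(hits / trials, 2))  # → approximately 0.25
```

The uniform draw is itself an assumption (equal measure per continuation); the open question in the thread is precisely what the right measure over continuations is.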
--
Onward!
Stephen
Hi, Stephen,

You wrote some points in accordance with my thinking (whatever that is worth), with one point I disagree with: if you want to argue a point, do not accept it as a base for your argument (even negatively not). You do that all the time. (SPK? etc.) -
My fundamental question: what do you (all) call 'mind'?
(Sub: does the brain do/learn mind functions? HOW?)
(('Experimentally observed' is restricted to our present level of understanding/technology (instrumentation)/theories. Besides: "miraculous" is subject to oncoming explanatory novel info, when it changes into merely 'functional'.))
To fish out some of my agreeing statements: "Well, I don't follow the crowd...." Science is no voting matter. 90+% believed in the Flat Earth.
"... Alter 1 neuron and you might not have the same mind..."
(Meaning: the 'invasion(?)' called 'altering a neuron' MAY change the functionalist's complexity IN THE MIND! - which is certainly beyond our knowable domain. That makes the 'hard' hard. We 'like' to explain DOWN everything in today's knowable terms. (Beware my agnostic views!))
"Computation": of course I consider it a lot more than that (Platonistic?) algorithmic calculation on our existing (and so knowable?) embryonic device. I go for the Latin original: to THINK together - mathematically, or beyond. That may be a deficiency from my (non-Indo-European) mother tongue, where the (improper?) translatable equivalent is closest to the term "expectable": "I am counting on your visit tomorrow."
"I strongly believe that computational complexity plays a huge role in many aspects of the hard problem of consciousness and that the Platonic approach to computer science is obscuring solutions as it is blind to questions of resource availability and distribution."(and a lot more, do we 'know' about them, or not (yet).
"Is the brain strictly a classical system? - No,..."
The "BRAIN" may be - as a 'Physical-World' figment of our bio-physio conventional science image, but its mind-related function(?) (especially the hard one) is much more than a 'system': ALL 'parts' inventoried in explained functionality).
And: I keep away from the beloved "thought-experiments" invented to make uncanny ideas practically(?) feasible.
"...As I see it, there is no brain change without a mind change and vice versa. The mind and brain are dual,..."Thanks, Stephen, originally I thought there may be some (tissue-related) minor brain-changes not affecting the mind of which the 'brains' serves as a (material) tool in our "sci"? explanations.Reading your post(s) I realized that it is a complexity and ANY change in one part has consequences in the others.
So whatever 'part' we landscape as the 'neuronal brain', it is still part of the wider, unknowable complexity.
Have a good trip onward
-- Onward! Stephen
Hi meekerdb,

There are a number of theories to explain the collapse of the quantum wave function (see below).

1) In subjective theories, the collapse is attributed to consciousness (presumably of the intent or decision to make a measurement).
2) In objective or decoherence theories, some physical event (such as using a probe to make a measurement) in itself causes decoherence of the wave function. To me, this is the simplest and most sensible answer (Occam's Razor).
3) There is also the many-worlds interpretation, in which collapse of the wave is avoided by creating an entire universe. This sounds like overkill to me.

So I vote for decoherence of the wave by a probe.
On 10/23/2012 3:35 PM, Stephen P. King wrote:
On 10/23/2012 1:29 PM, meekerdb wrote:
On 10/23/2012 3:40 AM, Stephen P. King wrote:
But you wrote, "Both require the prior existence of a solution to a NP-Hard problem." An existence that is guaranteed by the definition.
Hi Brent,
OH! Well, I thank you for helping me clean up my language! Let me try again. ;-) First I need to address the word "existence". I have tried to argue that "to exist" is to be "necessarily possible", but that attempt had fallen on deaf ears - until now, for you are using it exactly how I am arguing that it should be used, as in "An existence that is guaranteed by the definition." Do you see that existence does nothing for the issue of properties? The existence of a pink unicorn and the existence of the 1234345465475766th prime number are the same kind of existence,
I don't see that they are even similar. Existence of the aforesaid prime number just means it satisfies a certain formula within an axiom system. The pink unicorn fails a quite different kind of existence, namely the ability to locate it in spacetime. It may still satisfy some propositions, such as, "The animal that is pink, has one horn, and loses its power in the presence of a virgin is obviously metaphorical."; just not ones we think of as axiomatic.
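Brent's sense of mathematical existence - satisfying a formula within a system - can be made concrete: the k-th prime "exists" just because some number satisfies the primality predicate for the k-th time. A small sketch (the function names are illustrative):

```python
def is_prime(n):
    """The primality 'formula': n > 1 with no divisor between 2 and sqrt(n)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def nth_prime(k):
    """The k-th prime 'exists' in the formal sense: it is whatever number
    satisfies the predicate above for the k-th time."""
    count, n = 0, 1
    while count < k:
        n += 1
        if is_prime(n):
            count += 1
    return n

print(nth_prime(10))  # 29
```

No spacetime location is involved anywhere in the check, which is exactly the contrast with the pink unicorn's kind of (non-)existence.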
once we drop the pretense that existence is dependent or contingent on physicality.
It's not a pretense; it's a rejection of Platonism, or at least a distinction between different meanings of 'exists'.
Is it possible that physicality can be considered solely in terms of bundles of particular properties, kind of like Bruno's bundles of computations that define any given 1p? My thinking is that what is physical is exactly what some quantity of separable 1p have as mutually consistent
But do the 1p have to exist? Can they be Sherlock Holmes and Dr. Watson?
(or representable as a Boolean Algebra) but this consideration seems to run independent of anything physical. What could reasonably constrain the computations so that there is some thing "real" to a physical universe?
That's already assuming the universe is just computation, which I think is begging the question. It's the same as saying, "Why this and not that."
There has to be something that cannot be changed merely by changing one's point of view.
So long as you think other 1p viewpoints exist then intersubjective agreement defines the 'real' 3p world.
When you refer to the universe computing itself as an NP-hard problem, you are assuming that "computing the universe" is a member of a class of problems.
Yes. It can be shown that computing a universe that contains something consistent with Einstein's GR is NP-Hard, as the problem of deciding whether or not there exists a smooth diffeomorphism between a pair of 3,1 manifolds has been proven (by Markov) to be so. This tells me that if we are going to consider the evolution of the universe to be something that can be a simulation running on some powerful computer (or an abstract computation in Platonia), then that simulation has to be at least equivalent to solving an NP-Hard problem. The prior existence, per se, of a solution is no different from the non-constructive proof that Diffeo_3,1 ⊂ NP-Hard that Markov found.
So the universe solves that problem. So what? We knew it was a soluble problem. Knowing it was NP-hard didn't make it insoluble.
It actually doesn't make any sense to refer to a single problem as NP-hard, since the "hard" refers to how the difficulty scales across a family of problems of increasing size.
These terms, "scale" and "size" - do they refer to something abstract or something physical, or, perhaps, both in some sense?
They refer to something abstract (e.g. number of nodes in a graph), but they may have application by giving them a concrete interpretation - just like any mathematics.
I'm not clear on what this class is.
It is an equivalence class of computationally soluble problems. http://cs.joensuu.fi/pages/whamalai/daa/npsession.pdf There are many of them.
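The point that "hard" describes scaling over a family of instances, not any single finite problem, can be illustrated with a standard NP-complete problem such as subset sum (my choice of example, not one from the thread): exhaustive search grows exponentially with instance size, while checking a proposed solution stays cheap.

```python
from itertools import combinations

def subset_sum_search(nums, target):
    """Exhaustive search: worst-case time grows like 2**len(nums).
    This scaling over a *family* of instances is what 'NP-hard' describes;
    any single instance is just a finite computation."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return combo
    return None

def verify(candidate, target):
    """Checking a proposed solution is cheap (polynomial time) -- the
    defining feature of NP."""
    return candidate is not None and sum(candidate) == target

sol = subset_sum_search([3, 34, 4, 12, 5, 2], 9)
print(verify(sol, 9))  # True (e.g. 4 + 5)
```

In Stephen's terms, the universe "containing" a solution is like the second function: the solution's existence is settled; the NP-hardness only measures the cost of the search in the first.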
Are you thinking of something like computing Feynman path integrals for the universe?
Not exactly, but that is one example of a computational problem.
snip.
No, I am trying to explain something that is taken for granted; it is more obvious for the Pre-established harmony of Leibniz, but I am arguing that this is also the case in Big Bang theory: the initial condition problem (also known as the foliation problem) is a problem of computing the universe ahead of time.
That problem assumes GR. But thanks to QM the future is not computed just from the past, i.e. the past does not have to have enough information to determine the future. So the idea that computing the next foliation in GR is 'too hard' may be an artifact of ignoring QM.
Also it's not clear what resources the universe has available with which to compute.
If you consider every Planck volume as capable of encoding a bit, and observe the holographic bound on the information to be computed I think there's more than enough.
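Brent's estimate can be put in rough numbers. The constants below (Planck length, radius of the observable universe) are my approximate assumptions, not figures from the thread; the interesting point is that the holographic bound, which scales with boundary area, is vastly smaller than the naive one-bit-per-Planck-volume count.

```python
import math

PLANCK_LENGTH = 1.616e-35   # metres (approximate)
R_OBSERVABLE = 4.4e26       # radius of observable universe, metres (approximate)

area = 4 * math.pi * R_OBSERVABLE ** 2
volume = (4 / 3) * math.pi * R_OBSERVABLE ** 3

# Naive count: one bit per Planck volume.
bits_by_volume = volume / PLANCK_LENGTH ** 3

# Holographic bound: entropy scales with the boundary area,
# A / (4 * l_p**2 * ln 2) bits.
bits_holographic = area / (4 * PLANCK_LENGTH ** 2 * math.log(2))

print(f"{bits_by_volume:.1e}")    # ~1e185
print(f"{bits_holographic:.1e}")  # ~1e123
```

Either figure is astronomically large, which is the sense in which there would be "more than enough" computational resources.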
-- Onward! Stephen